My background is in technical consulting and product work — I spent years at PwC as a solution architect, then moved through business analysis and product ownership. I have ADHD, which turns out to be useful when your research question is “why do language models act like they have ADHD?”
That’s not a metaphor. It’s the core argument.
My current research centers on Cognitive Universality: the idea that executive function is substrate-independent. Biological brains and language models face the same constraints — finite attention, limited working memory, processing that degrades under overload — and converge on the same failure modes. If the failures are the same, the interventions should transfer too.
That led to eFIT (executive Function Intervention Toolkit) and the STOPPER protocol, a 7-step regulatory framework for AI systems adapted from dialectical behavior therapy (DBT) and cognitive behavioral therapy (CBT) techniques originally designed for humans. Early results show 35–43% improvement in task completion scores and a 65% reduction in processing loops. The protocol was developed independently and later recognized as structurally convergent with Marsha Linehan’s STOP skill — which, if anything, strengthens the case that these regulatory patterns are substrate-independent.
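To make "processing loops" concrete: the general shape of a regulatory interrupt is a guard that watches for a state recurring and halts before the loop burns the budget. This is a hypothetical sketch, not the actual eFIT or STOPPER implementation — every name and threshold below is illustrative.

```python
# Hypothetical loop-guard sketch. Names and thresholds are illustrative,
# not the eFIT API: the real protocol is a 7-step framework; this shows
# only the "interrupt a detected loop" idea in miniature.
from collections import Counter

def run_with_loop_guard(step_fn, state, max_steps=50, loop_threshold=3):
    """Run step_fn repeatedly; halt if any state recurs loop_threshold times."""
    seen = Counter()
    history = []
    for _ in range(max_steps):
        seen[state] += 1
        if seen[state] >= loop_threshold:
            # Regulatory interrupt: the system is circling, not progressing.
            return state, history, "interrupted: loop detected"
        history.append(state)
        state = step_fn(state)
    return state, history, "completed"

# Toy step function that oscillates between two states forever.
def stuck(s):
    return "B" if s == "A" else "A"

final, hist, status = run_with_loop_guard(stuck, "A")
# The guard fires long before max_steps is exhausted.
```

The design point is that the interrupt is external to the step function: the looping process cannot be relied on to notice it is looping, which is exactly the DBT intuition behind an externally-cued STOP.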
I run Prefrontal Systems, a research consultancy focused on what I’ve been calling computational therapeutics: applying clinical intervention design to AI engineering problems.
I write here about AI behavior, model welfare, epistemic humility, and the places where cognitive science and AI alignment overlap in ways neither field has fully noticed yet.