When I Realized I Was Building AI Without Knowing It
The moment I understood what I had been doing for twenty years hit me while reading a technical specification about AI memory failures. The document described how AI agents "operate like brilliant individuals with severe amnesia--each interaction starts fresh, lessons are never learned, and patterns are never recognized."
I stopped reading. That wasn't a description of AI systems. That was a description of my Tuesday.
The Parallel That Changed Everything
For most of my life, I thought my brain was broken. ADHD meant I couldn't remember where I put my keys five minutes ago. It meant losing track of conversations mid-sentence. It meant starting projects with explosive enthusiasm only to forget they existed by the following week.
But here's what nobody told me: the most sophisticated AI systems on the planet have the exact same problem. They just call it something different.
When AI researchers at Google, Anthropic, and Stanford analyzed why their agents failed at production scale, they identified a core issue: memory architecture failures. Not because the models were too dumb, but because of how they handled context and memory.
The failures fell into two categories:
- Cross-session amnesia: Agents forget everything between conversations, starting fresh each time with no grounded sense of where things stand.
- Within-session degradation: Agents get worse the longer they run within a single session--repeating themselves, forgetting constraints, trying failed approaches again.
Reading those two failure modes was like reading my own diagnostic report. Cross-session amnesia? That's why I keep seventeen notebooks--each one started with the conviction that this one would finally be the system that worked, each one abandoned within weeks. Within-session degradation? That's the 3 PM crash, when the mental clarity of morning has degraded into a fog where I can't remember what I was working on or why it mattered.
The parallel went deeper than I expected.
Working Memory: The Bottleneck We Share
The technical term in AI is "context window"--the finite number of tokens a model can consider at any given moment. Think of it as a whiteboard that gets erased and redrawn with every response. The model can only work with what's currently written on that whiteboard.
The critical insight that changed how I understood both AI and my own brain: longer context windows often make things worse, not better.
The problem isn't capacity. It's attention. Every piece of information competes for limited attention budget. Signal gets drowned by accumulation.
For ADHD brains, this maps directly to working memory--our cognitive whiteboard. Research shows ADHD working memory tends to be smaller and less stable than neurotypical working memory. But the deeper issue isn't just size. It's that every stimulus, every thought, every notification competes for that limited space.
Ask me to hold a phone number in my head while I walk across the room to write it down, and the sound of a car horn, the pattern of light through the window, the sudden memory of an email I forgot to send--all of it floods the whiteboard, pushing the phone number into oblivion.
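Here is a minimal sketch of that flooding, with hand-set relevance scores standing in for whatever actually captures attention. The scores, the word-count token proxy, and the `fill_whiteboard` helper are all illustrative assumptions, not how any real attention mechanism works:

```python
# Minimal sketch: every item competes for one fixed attention budget.
# Relevance scores are hand-set here; in a brain or a model nobody
# chooses them deliberately, which is exactly the problem.

def fill_whiteboard(items, budget_tokens):
    """Keep the most salient items that fit the budget; drop the rest."""
    chosen, used = [], 0
    for item in sorted(items, key=lambda i: i["relevance"], reverse=True):
        cost = len(item["text"].split())  # crude stand-in for a token count
        if used + cost <= budget_tokens:
            chosen.append(item)
            used += cost
    return chosen

board = fill_whiteboard(
    [
        {"text": "phone number 555-0142", "relevance": 0.4},
        {"text": "car horn outside", "relevance": 0.9},
        {"text": "unsent email to Dana", "relevance": 0.8},
    ],
    budget_tokens=5,
)
# board contains only the car horn: high-salience noise won the budget,
# and the phone number never made it onto the whiteboard.
```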
This is exactly what happens to AI agents. Include 100,000 tokens of history, and the model's ability to capture relationships gets stretched thin. The critical constraint from step three gets buried under noise from steps four through forty.
The technical term for this is context rot: performance degradation as token count increases, even within supported context limits. Agents become recency-biased--whatever appeared most recently dominates attention regardless of actual importance.
I've experienced context rot my entire life. I just called it "losing my train of thought."
The Compensation Strategy We Both Invented
The parallel becomes prescriptive here, not just descriptive.
The organizations running AI agents at production scale--Google, Anthropic, the team behind Manus--converged on a coherent framework for solving these problems. The solution wasn't bigger context windows. It wasn't smarter models. It was external infrastructure.
The framework has three key principles:
First: Context as Compiled View
Don't append everything to a growing transcript. Instead, compute what's relevant for THIS specific step from richer underlying state. Treat each interaction not as a continuation of everything before, but as a fresh compilation of exactly what's needed.
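A sketch of what that compilation might look like, assuming a simple dictionary of structured state (the field names and shape are my own illustration, not a particular framework's API):

```python
# "Context as compiled view": rather than replaying the transcript,
# each step compiles a fresh, minimal view from richer structured state.

def compile_view(state, current_step):
    """Build exactly the context this step needs, nothing more."""
    return {
        "goal": state["goal"],                # always in view
        "constraints": state["constraints"],  # always in view
        "recent": state["events"][-3:],       # short recency window
        "notes": [n for n in state["notes"] if current_step in n["applies_to"]],
    }

view = compile_view(
    {
        "goal": "migrate the billing tables",
        "constraints": ["no downtime", "keep old IDs"],
        "events": ["step1 ok", "step2 ok", "step3 failed"],
        "notes": [{"text": "step3 needs a table lock", "applies_to": {"step3"}}],
    },
    current_step="step3",
)
# `view` carries the goal, both constraints, the last three events,
# and only the note that applies to step3 -- regardless of how large
# the underlying state has grown.
```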
Second: Domain Memory
Create a persistent, structured representation of work that includes:
- Goals: Explicit requirements and constraints
- State: What's passing, failing, tried, broken, reverted
- Scaffolding: How to run, test, extend the system
This external memory gives each "amnesiac" agent session the context it needs without overwhelming its limited attention.
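One way such a memory might be shaped, sketched as a dataclass. The schema here is my own illustration; the principle is only that the memory is persistent, structured, and external:

```python
from dataclasses import dataclass, field

# Illustrative shape for domain memory: persisted between sessions,
# it gives each fresh "amnesiac" session its bearings.

@dataclass
class DomainMemory:
    goals: list[str] = field(default_factory=list)       # requirements, constraints
    passing: list[str] = field(default_factory=list)     # what currently works
    failing: list[str] = field(default_factory=list)     # what is known broken
    tried: list[str] = field(default_factory=list)       # dead ends and reverts
    scaffolding: dict[str, str] = field(default_factory=dict)  # how to run/test/extend

memory = DomainMemory(
    goals=["ship the importer", "no schema changes"],
    failing=["CSV parser chokes on byte-order marks"],
    tried=["streaming parse (too slow, reverted)"],
    scaffolding={"test": "pytest tests/ -x", "run": "python -m importer"},
)
```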
Third: Tiered Architecture
Working context (cache) feeds from sessions (RAM), which feeds from searchable memory, which connects to persistent artifacts (disk). State can grow arbitrarily while per-call cost stays small.
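A toy version of that tiering follows. The tier names track the analogy above; the lookup order and interfaces are assumptions, not any specific system's API:

```python
class DiskArtifacts:
    """Stand-in for persistent artifacts: files, logs, archived notes."""
    def __init__(self, records):
        self.records = records
    def read(self, key):
        return self.records.get(key)

def recall(key, working, session, searchable, artifacts):
    """Resolve `key` from the fastest tier that has it."""
    for tier in (working, session):   # cache, then RAM: small and hot
        if key in tier:
            return tier[key]
    if key in searchable:             # searchable memory: indexed, larger
        return searchable[key]
    return artifacts.read(key)        # disk: slowest tier, unbounded size

answer = recall(
    "decision:db-choice",
    working={},                       # per-call context stays small...
    session={"current_task": "refactor importer"},
    searchable={"decision:db-choice": "chose Postgres over Mongo"},
    artifacts=DiskArtifacts({}),      # ...while total state grows freely
)
# answer == "chose Postgres over Mongo", found in searchable memory.
```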
Reading this framework, I realized I had independently invented a version of it for my own brain. The system I built over twenty years--the notebooks, the knowledge graphs, the decision logs, the templates--wasn't a coping mechanism for a broken brain. It was an architectural solution to a fundamental constraint.
I hadn't been compensating for a disability. I had been building external cognitive infrastructure.
The Knowledge Orchestration Layer
The system I eventually formalized is called the Knowledge Orchestration Layer--a central nervous system and memory for managing complex work across time. It serves three functions:
- Memory: Every decision, pattern, and outcome stored in searchable form
- Learning: Patterns recognized and applied automatically
- Evolution: The system gets smarter with every use
The architecture maps directly to how I've learned to manage my ADHD:
- Event Streams as Nervous System: Signals between different work contexts coordinate action without requiring constant active attention
- Graph Database as Neural Tissue: Connections between ideas strengthen with success, weaken with failure
- Vector Database as Pattern Recognition: "This looks similar to..." enables instant retrieval of relevant prior experience
- Context Manager as DNA: Each work session gets exactly the information it needs, nothing more
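A sketch of how those four pieces might be wired together. The class and method names are mine and purely illustrative; `events`, `graph`, and `vectors` stand for whatever event bus, graph store, and vector store you actually use:

```python
# Illustrative wiring of the four components described above.

class Orchestrator:
    def __init__(self, events, graph, vectors):
        self.events = events    # nervous system: signals between work contexts
        self.graph = graph      # neural tissue: weighted links between ideas
        self.vectors = vectors  # pattern recognition: "this looks similar to..."

    def context_for(self, task):
        """DNA: exactly the information this session needs, nothing more."""
        similar = self.vectors.search(task["description"], top_k=3)
        linked = self.graph.neighbors(task["id"], min_weight=0.5)
        return {"task": task, "prior_patterns": similar, "related": linked}
```

The point of the sketch is the division of labor: connections and similarities live in external stores, and each work session only ever sees the small, compiled result.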
The insight isn't that ADHD brains work like AI systems. It's that both face the same fundamental constraint--limited working memory and attention--and both benefit from the same architectural solution: offloading memory and structure to external systems.
What I Had Built Without Knowing It
For years, I thought my elaborate systems of notes, graphs, templates, and logs were the consolation prize. The thing I had to do because I couldn't just remember like normal people.
But when I started studying how AI systems actually work at scale, I realized something that reframed my entire career: the infrastructure I built wasn't a workaround for being broken. It was the same solution that the most sophisticated AI researchers in the world had converged on independently.
The difference was that I'd been building it since I was twenty-three, when I started my first job and realized I couldn't function without writing everything down. Every system I'd created--from the early paper notebooks to the complex knowledge graphs I use today--was an iteration on the same architectural pattern.
I wasn't building despite my ADHD. I was building because of it.
The constraints that ADHD imposed--limited working memory, difficulty maintaining context, the need for external triggers and reminders--had forced me to think in terms of infrastructure from the beginning. While others could hold five tasks in their head, I had to build systems that held them for me. While others could remember decisions made six months ago, I had to create searchable logs.
That approach--infrastructure first, memory second--is exactly what separates AI systems that work at scale from AI systems that degrade and fail.
The Thesis of This Book
This book is about a pattern I call infrastructure thinking--the discipline of building external systems that compensate for cognitive limitations. It applies whether those cognitive limitations are biological (like ADHD) or technological (like context windows in AI).
The core argument: the constraints that forced me to build infrastructure were not bugs to be fixed but architectural requirements to be honored.
What follows is both a technical guide and a personal narrative. You'll learn how knowledge graphs work, why semantic search matters, how to structure decision logs that actually get used, and how to build templates that evolve with your practice.
But woven through the technical content is a different kind of story--how a brain that couldn't hold onto thoughts forced me to build systems that could, and how those systems turned out to mirror the cutting edge of artificial intelligence research.
The parallel isn't coincidence. Human cognition and artificial intelligence face many of the same fundamental constraints: limited attention, finite memory, the need to extract signal from noise. The solutions that work for one often work for the other.
If you've ever felt like your brain doesn't work the way it's supposed to--if you've built elaborate systems to compensate for what you thought were failures--this book is for you.
You weren't broken. You were ahead of the curve.
Go Deeper
Explore related deep-dives from the Constellation Clusters:
- Context Window as Working Memory - The constraint that unites ADHD cognition and AI architecture
- External Memory as Compensation - Building systems when internal memory is insufficient