Thinking Is a Muscle. AI Is the Elevator.
And we’re forgetting how to climb the stairs.
I stood at the whiteboard, marker in hand, trying to diagram a simple supply-chain bottleneck. My mind, once fluent in systems thinking, stuttered. Where do I even start?
Just weeks before, I’d asked an AI to “explain this like I’m a beginner.” It gave me a polished flowchart, three use cases, and a summary. I nodded, saved it, and moved on.
But now, alone with my own thoughts, the scaffolding was gone.
The knowledge hadn’t been built—it had been installed.
This isn’t about intelligence. It’s about cognitive agency: the ability to form, test, and revise your own ideas in real time. And quietly, insidiously, AI is turning that agency into a subscription feature.
The Fluency Illusion: Why AI “Feels” Like Insight
We’ve all felt it—that wave of relief when ChatGPT delivers a coherent, well-structured answer in seconds. The tone is calm. The logic seems airtight. It feels like understanding.
But neuroscience tells us something unsettling: fluency breeds false confidence.
A 2024 MIT study found that people rate fluent, lengthy, confidently phrased AI-generated explanations as 2.3 times more credible than human-written ones—even when both contain identical errors. Our brains conflate smooth delivery with deep truth.
Daniel Kahneman called this “cognitive ease.” System 1—the fast, intuitive part of our mind—loves fluent input. It says: This feels right. No need to dig deeper.
And so we stop.
We don’t ask: How does this connect to what I already know? What assumptions underlie this claim? Could it be wrong—and how would I know?
AI doesn’t just answer questions. It preempts inquiry.
Cognitive Atrophy in Real Time
This isn’t hypothetical. It’s measurable—and it has precedent.
Remember GPS? In the early 2000s, researchers tracked London taxi drivers—men who spent years memorizing “The Knowledge,” a labyrinthine mental map of 25,000 streets. Their posterior hippocampi (the brain’s spatial memory center) were significantly larger than average.
Then came satnav. A 2017 Nature study showed that heavy GPS users exhibited reduced hippocampal activation—and over time, less gray matter in that region. Their spatial reasoning didn’t just go unused; it atrophied.
Now, AI is doing the same to higher-order cognition.
A 2024 Nature Human Behaviour study placed participants in fMRI scanners while solving novel logic puzzles. Those who’d relied on AI for similar tasks in training showed markedly reduced activation in the dorsolateral prefrontal cortex—the area responsible for working memory, hypothesis testing, and mental simulation.
In short: the brain treated thinking like a deprecated app. “Not needed. Closing process.”
The “Google Effect”—where we remember where to find information rather than the information itself—has evolved. Now we’re forgetting how to build knowledge from fragments.
The Metacognition Crisis: When You Can’t Spot Your Own Ignorance
Perhaps the most alarming shift isn’t in what we know—but in how aware we are of not knowing.
Metacognition, thinking about our own thinking, is the bedrock of intellectual humility. It’s what lets us say: “I might be wrong. Let me check.”
But AI erodes that.
Stanford researchers in 2025 gave undergraduates AI-drafted essays and asked them to critique weaknesses. Students who’d used AI heavily in their own writing failed to identify glaring logical gaps, unsupported claims, and circular reasoning, even when those flaws were identical to ones they’d previously corrected in human-written work.
Why? Because they’d outsourced not just composition, but evaluation.
In the workplace, McKinsey reports a rise in “silent errors”: strategy decks with AI-generated market stats that contradict public data, client proposals built on hallucinated case studies. Teams approve them. No one questions. It sounded so confident.
As Dr. Anna Abraham, cognitive neuroscientist at the University of Georgia, puts it:
“When we stop wrestling with ambiguity, we lose the ability to sense when something’s off. Doubt isn’t weakness—it’s your brain’s smoke detector.”
Without that detector, we drift.
Not All AI Use Is Equal
Let’s be clear: AI isn’t inherently harmful. A hammer doesn’t make you weak—it’s how you use it.
Consider this spectrum:
| ✅ Low-Risk Use | ⚠️ High-Risk Use |
|---|---|
| Using AI to spark ideas—then rewriting everything in your own voice | Letting AI write the final version without deep revision |
| Asking AI to challenge your argument (“What’s the strongest counterpoint?”) | Accepting AI’s conclusion as the only valid perspective |
| Using AI to summarize research—then tracing every claim back to source | Treating AI summaries as primary evidence |
The danger isn’t automation. It’s automation without awareness.
Rebuilding the Thinking Muscle: 4 Practices for Cognitive Hygiene
The good news? Neuroplasticity works both ways. You can rebuild.
Here’s how:
- **The 10-Minute Rule.** Before asking AI a question, write your own answer—however messy—for 10 minutes. No editing. No judgment. Just raw thinking. Then consult AI. Compare. Where did you miss? Where did it miss? This builds the neural pathways AI tries to shortcut.
- **Error Hunting.** Once a week, deliberately ask AI to explain something incorrect (e.g., “Why is the Earth flat?”). Your job: find three flaws in its reasoning—not just facts, but logic, evidence, framing. Train your skepticism like a reflex.
- **Analog Anchors.** Keep one notebook—paper, no cloud backup—where you think in real time. Cross out. Doodle. Write half-formed thoughts. The friction is the point. As writer Anne Lamott says: “All good writing begins with terrible first efforts.” So does all good thinking.
- **Explain to a 10-Year-Old.** After reading an AI summary, close the tab. Now explain the core idea—out loud—to an imaginary child. No jargon. No “according to the model.” If you can’t make it simple, vivid, and honest, you don’t own it yet.
The Stairs Are Still There
AI won’t make us stupid. But passive consumption of its outputs will make us fragile—unable to navigate ambiguity, contradiction, or silence.
The most dangerous AI isn’t the one that lies. It’s the one that tells the perfect truth—so fluently, so effortlessly—that we forget truth must be earned, not accepted.
Because thinking isn’t about having answers.
It’s about learning to live inside the questions.
So ask yourself:
What kind of thinking do you want to be good at—when the AI is wrong?
When the data is messy?
When no one has a prompt ready—and the only tool you have… is your mind?
The elevator is convenient.
But the view from the top of the stairs?
That’s yours alone.
—
Further reading:
- Carr, N. (2025). *The Shallows Revisited: Cognition in the Age of Generative AI.* MIT Press.
- Ward, A. F. (2024). “The Cognitive Cost of Convenience.” *Nature Human Behaviour*, 8(3), 211–225.
