Edward Burton

The generative AI revolution promises productivity gains, but is it making us smarter or simply outsourcing our thinking?

The spectre haunting higher education today isn’t plagiarism. That’s a problem of ethics and detection, and we’ve largely figured that out. The new spectre is far more insidious, far more dangerous — it’s the spectre of perfect output and an empty mind.

Picture this. You’re in a university seminar, and a student can, in seconds, conjure a 2,000-word essay on the geopolitical intricacies of the Treaty of Westphalia. The prose is impeccable, the arguments razor-sharp, the structure flawless. But then, the uncomfortable question arises — “Can you explain the third paragraph without looking at your screen?” The answer is often silence.

We are living through a profound uncoupling of doing from understanding. For centuries, the very act of labour — writing, calculating, coding — was the crucible where learning was forged. The inherent friction of the process was where the cognition truly happened. Now, that friction has been [eliminated](https://www.polytechnique-insights.com/en/columns/neuroscience/generative-ai-the-risk-of-cognitive-atrophy/).

The dominant narrative, peddled by EdTech evangelists and industry leaders, tells us that AI is a benevolent “co-pilot,” liberating students from mundane tasks to focus on “higher-order thinking.” It’s a comforting story, isn’t it? A narrative that paints a future of [frictionless, AI-enhanced higher education](https://www.deloitte.com/us/en/insights/industry/articles-on-higher-education/generative-ai-higher-education.html).

But what if that story is a carefully constructed illusion? What if, instead of freeing minds, we are actively atrophying them? What if we are laying the groundwork for a [new class system in education](https://www.campustimes.org/2024/04/07/the-ai-divide-creating-a-new-class-system-in-education/), not between those who have access to technology and those who don’t, but between those who can wield AI and those who will be replaced by it?

---
The Seductive Siren Song of Efficiency

To truly grasp the peril we’re in, we must first give the optimists their due. Their argument is compelling, often rooted in a well-intentioned, if simplistic, application of [Cognitive Load Theory (CLT)](https://ciddl.org/the-impact-of-artificial-intelligence-on-cognitive-load/).

The core idea of CLT is that our working memory is a limited resource. Learning suffers when we are bogged down by Extraneous Cognitive Load — unnecessary mental effort caused by confusing instructions or logistical nightmares. AI, they argue, acts as a powerful scaffold, sweeping away this extraneous load and freeing up precious mental bandwidth for Germane Cognitive Load. This is the “good” kind of load, the effort required to build robust mental models, or schemas, that form the bedrock of deep understanding.
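
In schematic terms, CLT treats the load types as additive claims on a single working-memory budget. What follows is a standard textbook rendering of that assumption, not a formula from this article (intrinsic load, the task’s inherent complexity, is the third canonical term):

```latex
\underbrace{L_{\text{intrinsic}}}_{\text{task complexity}}
+ \underbrace{L_{\text{extraneous}}}_{\text{wasted effort}}
+ \underbrace{L_{\text{germane}}}_{\text{schema building}}
\le C_{\text{working memory}}
```

On this reading, the optimists’ claim is simply that AI shrinks the extraneous term, freeing budget for the germane one.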

It’s a beautifully simple proposition — if a calculator can liberate you from the drudgery of long division so you can grapple with calculus, surely ChatGPT can free you from wrestling with sentence structure so you can master the art of argumentation?

The narrative is seductive. It promises a future where AI is a “collaborative partner,” a force that [reshapes higher education](https://www.aacsb.edu/insights/articles/2023/10/how-ai-is-reshaping-higher-education) and grants everyone access to sophisticated output. Students, they contend, will transition from mere creators to esteemed editors and architects of knowledge.

The fundamental flaw in this optimistic vision lies in a crucial, unspoken assumption — it presumes that the “grunt work” of learning is entirely separable from the learning itself. It assumes that the act of writing is merely a passive transcription of pre-formed thoughts, rather than the very engine that generates and refines those thoughts.

This assumption, I fear, is not just flawed; it is dangerous.

---
The Uncomfortable Truth — Friction is Not a Bug, It’s a Feature

The illusion of AI-driven efficiency begins to crumble when we look beyond the polished output and examine the underlying cognitive processes.

The first and most critical crack in the orthodoxy is the profound confusion between output and outcome. In education, the tangible output — the essay, the code, the report — is merely evidence of a deeper, internal outcome — the neural restructuring that occurs within the student’s brain. AI, in its current iteration, excels at enabling the production of the output while systematically bypassing the necessity of the outcome.

The Quiet Erosion of Understanding

The evidence of this cognitive erosion is no longer confined to concerned whispers. [Emerging research](https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1550621/full) is beginning to illuminate this unsettling trend.

A study conducted by MIT, for instance, offered a quantitative glimpse into this phenomenon. Researchers observed that while generative AI tools significantly accelerated the speed and improved the quality of written outputs, they fundamentally altered the user’s engagement with the task. The productivity gains, the study noted, were achieved at the expense of the “human struggle” — that vital, messy process essential for deep comprehension.

The participants weren’t truly “collaborating” with the AI; they were, in essence, supervising it. And the cognitive skills required for supervision are often distinct from, and frequently shallower than, those required for genuine creation.

This resonates with findings from researchers investigating the [relationship between AI tool use and critical thinking](https://www.mdpi.com/2227-7102/15/3/343). Their work suggests a concerning negative correlation, largely mediated by a phenomenon known as cognitive offloading.

[Cognitive offloading](https://www.computer.org/publications/tech-news/trends/cognitive-offloading), at its core, is our intelligent use of external aids to reduce the mental burden of a task. Writing down a number to avoid holding it in your head is a simple, beneficial example. However, when this offloading extends to encompass the entire cognitive architecture of a task — the ideation, the structuring, the synthesis of complex ideas — we venture into the perilous territory of [cognitive atrophy](https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/).

To visualize this shift, consider the fundamental re-engineering of the learning loop.

Traditional Learning Loop

Input → Internal Processing (The Struggle) → Synthesis → Output → Schema Construction (Long-term Memory)

AI-Mediated Learning Loop

Input → Prompt Engineering → AI Processing → Output → Surface Review (Verification)

In this reconfigured model, the crucial phase of “Internal Processing” — the very forge where long-term memory is built and critical analysis is honed — is effectively circumvented. The student can produce the final product, but they build no lasting schema. They have “done” the assignment, but they have fundamentally “understood” very little.

---
The Deeper Erosion — The Ghost of Germane Load

The most profound error embedded within the “AI as Co-pilot” orthodoxy is its misunderstanding of germane cognitive load.

Germane load isn’t simply about “thinking hard.” It is the specific, effortful mental work that underpins the creation of lasting connections in our long-term memory.

It is the friction that precedes clarity. It is the frustration of searching for the precise word, a search that compels the brain to delve into its semantic network, thereby strengthening those very pathways.

When AI effortlessly provides the answer, the perfect structure, or the concise summary, it doesn’t just remove extraneous load; it obliterates the germane load.

The Brain’s Inbuilt Laziness

This sets the stage for a phenomenon I term Metacognitive Laziness.

Our brains are inherently thrifty. They are wired to conserve energy. If an external agent — in this case, an AI — can perform a high-energy cognitive task, such as constructing a logically sound argument, with significantly less energy expenditure (via a well-crafted prompt), the brain will invariably default to the path of least resistance.

This is not a moral failing of the student. It is a deep-seated biological imperative.

However, the [long-term consequences for our cognitive skills](https://www.forbes.com/sites/chriswestfall/2024/12/18/the-dark-side-of-ai-tracking-the-decline-of-human-cognitive-skills/) are profoundly detrimental. By opting out of the struggle of articulation, students fail to cultivate essential cognitive faculties:

- Metacognition — The ability to understand one’s own knowledge, to know what one knows and, crucially, what one doesn’t know.
- Epistemic Vigilance — The [capacity to critically evaluate AI-generated output](https://wmich.edu/x/teaching-learning/teaching-resources/ai-critical-thinking). As emerging research highlights, students heavily reliant on AI often struggle to critically evaluate the outputs from these “black boxes” precisely because they lack the foundational knowledge required to spot subtle inaccuracies or outright hallucinations.
- Synthesis — The creative power to weave together disparate ideas and forge novel concepts.

The chilling reality is that we are not cultivating a generation of innovative architects. Instead, we risk producing a cohort of highly capable construction site managers who have never personally laid a single brick and lack the fundamental knowledge to discern whether the foundation is being poured with concrete or with quicksand.

---
The Looming Divide — A New Socio-Economic Structure

This cognitive erosion leads us to the most provocative and disturbing conclusion of this analysis — we are hurtling towards a [new kind of digital divide](https://www.cbsnews.com/pittsburgh/news/ai-digital-divide/). This divide will not be predicated on access to technology, but on the fundamental relationship individuals have with it.

The Wielders vs. The Replaced

At one end of this spectrum, we will find the Wielders. This will likely be a minority group, individuals who already possess a high intrinsic capacity for handling cognitive load before they even engage with AI. They are experts in their domains. They can write fluently without AI assistance, code proficiently without Copilot, and think critically without the constant presence of a chatbot. Because they possess robust, underlying schemas, they are able to leverage AI as a genuine force multiplier, accelerating their existing capabilities. They use AI to streamline the “doing” only after they have mastered the “understanding.”

At the other end of the spectrum lie the Replaced. This group consists of students who have used AI as a shortcut, effectively bypassing the rigorous process of acquiring fundamental skills. They lack the internal cognitive architecture to critically evaluate information, making them [acutely vulnerable to the dulling of their own minds](https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling-our-minds/). They become inextricably tethered to the machine: not its master, but captive to an indispensable, and ultimately debilitating, dependency.

Visualizing the Cognitive Chasm

To better illustrate the impact of this divergence, consider the stark contrast in cognitive architecture that emerges over time between a traditionally educated learner and an AI-dependent learner.

[Figure: The Cognitive Divide, contrasting the cognitive trajectories of a traditionally educated learner and an AI-dependent learner]

The Dangerous Recursion in Teacher Training

The danger is particularly acute when we consider the [evolving relationship between AI and higher education](https://www.digitaleducationcouncil.com/post/how-students-use-ai-the-evolving-relationship-between-ai-and-higher-education) in the context of teacher training. If the educators of tomorrow are themselves reliant on AI to generate lesson plans, assess student work, and even grasp pedagogical theories, we risk creating a devastating recursive loop of incompetence. A teacher who cannot think critically or synthesize complex ideas is fundamentally ill-equipped to teach those very skills. We are courting a “cascade of mediocrity,” where each successive generation of students is educated by teachers with a diminishing capacity for cognitive autonomy.

The Unsparing Professional Consequence

In the professional arena, this divide will manifest with brutal efficiency. The global economy, by its very nature, rewards scarcity and demonstrable value.

If your primary professional asset is the ability to “produce syntactically correct text,” your market value approaches zero. An AI can perform that task for free. If your core function is “summarising documents,” you are likewise devalued. If your contribution is “writing standard code,” the AI becomes your direct, and often cheaper, competitor.

True value will increasingly accrue to those individuals who possess the ability to direct, critically evaluate, and creatively synthesize — skills that can only be truly forged through the very intellectual struggle that AI so readily invites us to skip. The “Replaced” will find themselves facing a stark professional reality: they will be unemployable not because they lack access to AI, but because they have [never built the foundational skills that AI cannot supply](https://rustcodeweb.medium.com/ai-is-creating-a-new-class-divide-and-its-not-what-you-think-a634145b83fa).

---
The Imperative for Artificial Friction

I am not advocating for a return to the Luddite days, for a wholesale rejection of technological advancement. AI is here, and its potential as a powerful tool is undeniable. However, we must confront the uncomfortable truth about its [impact on cognitive function](https://sfihealth.com/news/the-impact-of-ai-on-cognitive-function-are-our-brains-at-stake).

The prevailing narrative asserts that AI makes learning easier. The reality, however, is that deep learning — the kind that fundamentally rewires our brains and builds enduring careers — should not be easy. It is inherently demanding, requiring the robust application of germane cognitive load.

The path forward demands a [fundamental rethinking of pedagogy](https://er.educause.edu/articles/sponsored/2025/5/shaping-the-future-of-learning-ai-in-higher-education) and assessment strategies. We must pivot away from valuing merely the final artefact — the polished essay, the elegant code snippet — and recommit to valuing the arduous, but essential, process of creation.

1. Reintroduce Deliberate Friction — We need to establish [“AI-free zones”](https://packback.co/resources/blog/ai-proof-your-assignments-5-strategies-to-prevent-cognitive-offloading-in-higher-education/) within educational settings, spaces where students are compelled to rely solely on their own innate cognitive capabilities. In-class writing exercises, oral examinations, and live whiteboard coding sessions must be reinstated as essential components of the learning experience.
2. Assess the Journey, Not Just the Destination — Academic assessment should evolve to encompass the entire developmental arc of a project. Grades must reflect the effort invested in drafts, the iterative process of editing and refinement, and the student’s ability to orally defend their work, not merely the final deliverable.

3. Champion Explicit “Wielding” Skills — It is incumbent upon educators to explicitly teach students that AI is a force multiplier, and that a multiplier applied to zero yields only zero (a point made concrete in the sketch below). True mastery lies in becoming a “1” before the multiplier can have any meaningful impact.
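
To make the multiplier metaphor concrete, here is a deliberately trivial sketch; the function name and the numbers are illustrative assumptions, not measurements from any study:

```python
# Toy model of the "force multiplier" argument: AI scales what a
# learner already possesses; it cannot conjure skill from nothing.
# (Illustrative only: the skill and multiplier values are made up.)

def effective_output(intrinsic_skill: float, ai_multiplier: float) -> float:
    """Return a learner's amplified capability when using AI."""
    return intrinsic_skill * ai_multiplier

print(effective_output(1.0, 10.0))  # 10.0: the Wielder, mastery amplified
print(effective_output(0.0, 10.0))  # 0.0: the Replaced, nothing to amplify
```

The arithmetic is trivial by design; the point is the order of operations: the multiplier only matters once the first factor is non-zero.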

The choice facing educators, institutions, and students today is stark and unambiguous. We can succumb to the seductive comfort of [wholesale cognitive offloading](https://www.curriculumcomplete.com/post/ai-and-cognitive-offloading-what-it-means-for-teaching-and-learning), producing a generation adept at generating outputs but devoid of genuine understanding. Or we can embrace the challenging, yet ultimately rewarding, path of [critical engagement with AI](https://tlconestoga.ca/critical-thinking-with-ai-3-approaches/), learning to wield AI only when our own minds are sufficiently robust and autonomous to guide its power.

One path leads inexorably to obsolescence. The other, arduous though it may be, leads to genuine mastery. The time to choose is now.

---
For a deeper dive into the technical underpinnings of AI’s cognitive limitations and the research supporting these claims, read my full analysis on [tyingshoelaces.com](https://tyingshoelaces.com/blog/cognitive-atrophy-class-divide).

---
If this resonated with you, please hit the 👍 button. It significantly helps others discover this story.

Top comments (1)

Sonia Al-Ra'ini

Exactly, deep learning should not be easy. I’ve been building a dual-core agent, and one of its goals is to highlight this exact danger: the dependency created when people use AI tools without understanding what they’re doing. AI can create a false sense of competence, a kind of virtual positivity where you feel like you understand something simply because the output looks correct. But without real cognitive struggle, there’s no true learning happening.
If you’re curious, I’ve shared a post on my profile that explores this idea in more detail. I’d love to hear your thoughts on it.