
The Scriptorium Problem
In the scriptoriums of medieval monasteries, young monks spent years learning to copy manuscripts by hand. Each letter required deliberate attention, each word demanded careful consideration. When Johannes Gutenberg’s printing press arrived in the 15th century, it could produce in hours what took scribes months to complete. Yet something unexpected happened: while books became abundant, the deep textual engagement that created scholars began to disappear.
Today, we’re running the same experiment—but this time with human thinking itself. MIT researchers recently placed 54 students under EEG monitoring while they wrote essays using ChatGPT, search engines, or just their brains. What they discovered about neural connectivity patterns would have sounded familiar to those medieval monks: the tools that make tasks easier don’t necessarily make minds stronger.
The Stanford Discovery
Bharat Chandar, a postdoctoral researcher at Stanford’s Digital Economy Lab, was studying decision-making when he stumbled onto something that would reshape how we think about AI alignment. His analysis revealed that all consequential choices depend on two distinct components: intelligence (our capacity to predict outcomes) and values (our judgment about what matters).
The division is clean: while AI can dramatically improve our intelligence component, it cannot—and should not—replace our values component. This isn’t philosophical speculation; it’s measurable in how people make real decisions. When Chandar analyzed job selection decisions, he found that even with perfect information about consequences, people still faced the fundamental challenge of weighing competing values—higher salary versus interesting work, career advancement versus family time.
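To make the split concrete, here is a minimal sketch (not Chandar’s actual model; every number in it is hypothetical): an AI forecaster might sharpen the predicted outcomes for two job offers, but the weights that turn those predictions into a choice still have to come from the person deciding.

```python
# Illustrative sketch of the intelligence/values split. Not Chandar's model;
# all outcomes and weights below are hypothetical.

# "Intelligence": predicted outcomes for two job offers, on a 0-1 scale.
# An AI forecaster could, in principle, make these estimates sharper.
predicted_outcomes = {
    "job_a": {"salary": 0.9, "interesting_work": 0.4, "family_time": 0.3},
    "job_b": {"salary": 0.6, "interesting_work": 0.8, "family_time": 0.7},
}

# "Values": how much each outcome matters. No model supplies these weights;
# they come from the person making the decision.
value_weights = {"salary": 0.3, "interesting_work": 0.4, "family_time": 0.3}

def score(option: str) -> float:
    """Combine predictions (intelligence) with weights (values) into one score."""
    return sum(value_weights[k] * v for k, v in predicted_outcomes[option].items())

best = max(predicted_outcomes, key=score)
print({opt: round(score(opt), 2) for opt in predicted_outcomes}, "->", best)
```

Better predictions change the scores; only the weights decide what counts as a better job.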
This distinction matters because it reveals why simply making AI “more helpful” misses the deeper question of human development.
The Brain Scanner Results
The MIT study, led by Nataliya Kosmyna’s team, used high-density EEG to measure brain activity during essay writing. Their experimental design was elegantly simple: divide students into three groups (ChatGPT users, search engine users, and brain-only writers), track them over four months, then switch conditions in a surprise fourth session.
The quantified results tell a clear story. Students using only their brains exhibited the strongest, most distributed neural networks. Search engine users showed moderate connectivity. ChatGPT users displayed the weakest overall neural coupling, particularly in alpha and beta frequency bands associated with attention and executive function.
But here’s what makes this more than just another “screen time is bad” study: when researchers switched ChatGPT users to brain-only writing in session four, their neural connectivity didn’t bounce back. Even after removing AI assistance, these students showed “under-engagement of alpha and beta networks” compared to their baseline. The brain had adapted to the external support, and that adaptation persisted.
The Sequence Solution
The MIT researchers discovered something that would have made William Gosset proud: the sequence of tool introduction matters more than the tools themselves. Students who started with brain-only writing and later used AI maintained strong neural connectivity. Those who began with AI assistance struggled to activate the same cognitive networks when later forced to work independently.
This finding echoes Gosset’s insight at Guinness brewery: the goal isn’t just to achieve the desired outcome efficiently, but to build the capability to achieve it reliably. Gosset realized that even tiny experiments could be dependable: by his estimates, brewers could hit their alcohol targets 80% of the time with just two measurements and 92% of the time with four. But what he also understood—and what we’re rediscovering with AI—is that building internal capability requires deliberate practice, not just external optimization.
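A rough simulation makes the small-sample point. The measurement noise and tolerance below are hypothetical values chosen so the pattern resembles the 80%/92% figures cited above; they are not Guinness’s actual numbers.

```python
# Illustrative simulation of Gosset-style small-sample estimation.
# noise_sd and tolerance are hypothetical, chosen to echo the 80%/92% pattern.
import random

def hit_rate(n_samples: int, true_value: float = 5.0, noise_sd: float = 0.33,
             tolerance: float = 0.3, trials: int = 100_000) -> float:
    """Fraction of trials where the mean of n noisy measurements lands within tolerance."""
    hits = 0
    for _ in range(trials):
        mean = sum(random.gauss(true_value, noise_sd) for _ in range(n_samples)) / n_samples
        if abs(mean - true_value) <= tolerance:
            hits += 1
    return hits / trials

for n in (2, 4):
    print(f"n={n}: on target {hit_rate(n):.0%} of the time")
```

Doubling the measurements buys reliability, but the brewer still has to know what “on target” means and why it matters.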
The neural evidence suggests that starting with AI creates what researchers call “cognitive debt”—borrowed efficiency that comes at the cost of developing internal processing capacity.
The Performance Paradox
Early studies painted an optimistic picture: AI would level the playing field by helping lower-skilled workers most. A 2023 study by MIT economists Erik Brynjolfsson, Danielle Li, and Lindsey Raymond found that AI increased productivity of novice customer service agents by 35% while having minimal impact on experienced workers.
But newer research reveals this was the exception, not the rule. In complex cognitive tasks such as research, analysis, and strategic thinking, high performers benefit disproportionately more from AI assistance. The reason is straightforward: effective AI use requires the very expertise the AI is supposed to provide. You need to know good research from bad to evaluate AI-generated research. You need strategic thinking skills to guide AI through strategic problems.
The agricultural parallel is instructive. Precision agriculture tools provide the biggest advantages to farmers who already understand soil science, crop rotation, and yield optimization. The technology amplifies existing knowledge rather than replacing it. In cognitive work, we’re seeing the same pattern: AI is becoming a multiplier of human capability, not a substitute for it.
The Iron Man Model
The solution isn’t to abandon AI assistance. It’s to design it correctly. In Marvel’s Iron Man films, the AI system Jarvis amplifies Tony Stark’s engineering capabilities without replacing his decision-making authority. Stark tells Jarvis what he wants to accomplish; Jarvis figures out how to make it happen while preserving Stark’s agency and expertise.
This model already works in finance, where algorithmic trading systems process millions of data points to identify patterns and calculate probabilities, but human analysts make the final investment decisions. The algorithms handle the computational heavy lifting, what we might call the intelligence component, while humans apply judgment about risk tolerance, market conditions, and strategic goals. This represents the values component.
Research from precision agriculture suggests a similar approach: sensors and algorithms can monitor soil conditions 24/7 and model thousands of planting scenarios, but the farmer decides which recommendations align with their land management philosophy, risk tolerance, and long-term goals. The AI provides superhuman analytical capability while preserving human autonomy and expertise.
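A simple sketch of that advisor-not-autopilot pattern looks like the following. The functions, options, and figures are hypothetical illustrations, not any real trading or farm-management system.

```python
# Advisor-not-autopilot sketch: the system scores and ranks options,
# but nothing executes without an explicit human decision. All data is hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    expected_return: float  # model-estimated outcome: the intelligence component
    rationale: str

def rank_recommendations(candidates):
    """AI side: score and sort the options. No decision is made here."""
    return sorted(candidates, key=lambda r: r.expected_return, reverse=True)

def execute(decision):
    print(f"Executing: {decision.action}")

candidates = [
    Recommendation("rebalance toward bonds", 0.04, "rates likely to fall"),
    Recommendation("increase equity exposure", 0.07, "earnings momentum is strong"),
]

ranked = rank_recommendations(candidates)
for i, rec in enumerate(ranked):
    print(f"[{i}] {rec.action} (est. {rec.expected_return:.0%}) - {rec.rationale}")

# Human side: the final call, the values component, stays with the analyst.
choice = input("Choose an option number, or press Enter to decline: ")
if choice.isdigit() and int(choice) < len(ranked):
    execute(ranked[int(choice)])
```

The design choice is in the last four lines: the system can rank all day, but the path from recommendation to action runs through a person.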
The Development Framework
Based on the accumulated evidence, we can identify specific principles for AI implementation that preserves human cognitive development.
First, sequence matters. We should introduce cognitive challenge before cognitive assistance. Students should grapple with essay structure before getting AI help with organization, learn mathematical reasoning before using AI calculators, develop research skills before accessing AI research assistants.
Second, we must preserve agency. AI systems should provide recommendations rather than decisions. The human should always maintain authority over consequential choices, with AI serving as an advisor rather than an autopilot.
Third, we need to maintain cognitive load. Not all mental effort should be eliminated. The research suggests that some cognitive tasks, those that build fundamental thinking skills, merit human investment even when AI could handle them more efficiently.
Finally, we should monitor development. Organizations and educational institutions need metrics that measure whether people are developing thinking skills, not just producing output.
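One way to operationalize the first and last of these principles, sketched below with hypothetical skills and thresholds: AI assistance is unlocked per skill only after a learner demonstrates unassisted competence, and that unassisted performance keeps being tracked.

```python
# Illustrative "challenge before assistance" gate. Skill names, scores, and the
# threshold are hypothetical; the point is the sequencing and the monitoring.
unassisted_scores = {
    "essay_structure": [0.55, 0.62, 0.74],
    "statistical_reasoning": [0.40],
}
UNLOCK_THRESHOLD = 0.70  # minimum recent unassisted score before AI help is offered

def ai_assistance_allowed(skill: str) -> bool:
    """Gate AI assistance on demonstrated unassisted competence."""
    scores = unassisted_scores.get(skill, [])
    return bool(scores) and scores[-1] >= UNLOCK_THRESHOLD

for skill in unassisted_scores:
    status = "AI assistance unlocked" if ai_assistance_allowed(skill) else "unassisted practice first"
    print(f"{skill}: {status}")
```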
The Implementation Science
Educational institutions can apply these principles through age-appropriate AI introduction. Elementary students might use AI for research after learning basic information literacy skills. High school students could employ AI writing assistants after developing fundamental composition abilities. Graduate students might leverage AI for data analysis after mastering statistical reasoning.
Organizations can preserve cognitive development through strategic workflow design. Financial analysts might use AI for market data processing while handling investment strategy personally. Researchers could employ AI for literature reviews after developing critical evaluation skills. Managers might utilize AI for scheduling and coordination while maintaining personal responsibility for team development and strategic decisions.
Individual practitioners can adopt what we might call “cognitive hygiene” practices: regularly engaging in unassisted thinking, seeking tasks that require genuine mental effort, and consciously choosing when to accept AI help versus when to preserve human cognitive engagement.
The Research Horizon
The current evidence base, while compelling, represents early findings that require deeper investigation. We need longitudinal studies spanning years rather than months to understand whether cognitive changes become permanent. We need research across age groups to identify critical developmental windows. We need cross-cultural studies to understand how different educational traditions interact with AI assistance.
Most critically, we need intervention research: can cognitive training programs reverse AI-induced changes? Do certain types of mental exercise rebuild neural connectivity? How can we design AI systems that actively promote cognitive development rather than passively erode it?
The Choice We’re Making
The medieval scribes eventually found new roles as scholars, teachers, and administrators. Their deep textual engagement translated into other forms of intellectual work. But that transition required conscious effort to preserve and redirect their cognitive capabilities.
We face a similar choice with AI. The technology offers genuine benefits: faster research, better writing assistance, more sophisticated analysis. But the MIT study demonstrates that these benefits come with measurable cognitive costs when implemented carelessly.
The path forward requires the same kind of strategic thinking that William Gosset brought to brewing: optimizing for long-term capability building, not just short-term efficiency gains. We need AI systems designed like Jarvis, amplifying human intelligence while preserving human agency. We need educational approaches that sequence challenge before assistance. We need organizational cultures that value cognitive development alongside productivity.
The stakes are higher than individual performance. As Chandar notes, we will always face decisions that require both intelligence and values. If we design AI systems that enhance our intelligence while preserving our decision-making authority, we can achieve unprecedented human capability. If we design systems that replace human thinking, we risk creating a generation unable to think for themselves.
The technology is advancing rapidly. The neuroscience evidence is accumulating. The choice of which path to take remains ours to make.

Joseph Byrum is an accomplished executive leader, innovator, and cross-domain strategist with a proven track record of success across multiple industries. With a diverse background spanning biotech, finance, and data science, he has earned over 50 patents that have collectively generated more than $1 billion in revenue. Dr. Byrum’s groundbreaking contributions have been recognized with prestigious honors, including the INFORMS Franz Edelman Prize and the ANA Genius Award. His vision of the “intelligent enterprise” blends his scientific expertise with business acumen to help Fortune 500 companies transform their operations through his signature approach: “Unlearn, Transform, Reinvent.” Dr. Byrum earned a PhD in genetics from Iowa State University and an MBA from the Stephen M. Ross School of Business, University of Michigan.