A new study from the Massachusetts Institute of Technology (MIT) has raised concerns about the cognitive effects of overusing large language models (LLMs) like ChatGPT. Researchers found that relying too heavily on AI-generated content may reduce brain activity during writing tasks, weaken memory retention, and create what they describe as “cognitive debt.”
The study, titled “The Cognitive Cost of Using LLMs,” was conducted at the MIT Media Lab and is currently available as a preprint, pending peer review. Over a four-month period, researchers observed 54 participants, mostly MIT students, who were divided into three groups tasked with writing short essays.
Methodology and Group Setup
Participants were split into:
- LLM Group: Used ChatGPT to help write essays
- Google Group: Used Google Search to find information
- Brain-Only Group: Wrote essays without any digital tools
Each participant completed multiple 20-minute essay-writing sessions while wearing EEG (electroencephalogram) headsets to monitor brain activity. In a final session, participants were randomly reassigned to different groups, allowing researchers to compare performance across tool usage and track changes in brain engagement.
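The group setup and final-session crossover described above can be sketched in a few lines of Python. This is purely illustrative: the group names and participant count come from the article, but the random-assignment logic is our own assumption, not the study’s actual procedure.

```python
import random

GROUPS = ["LLM", "Google", "Brain-Only"]

def assign_groups(participants, seed=0):
    """Split participants evenly across the three conditions (illustrative)."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    # Deal shuffled participants round-robin into the three groups.
    return {g: shuffled[i::3] for i, g in enumerate(GROUPS)}

def crossover(assignment, seed=1):
    """Final session: move each participant to a group other than their own."""
    rng = random.Random(seed)
    new_assignment = {g: [] for g in GROUPS}
    for group, members in assignment.items():
        for p in members:
            new_group = rng.choice([g for g in GROUPS if g != group])
            new_assignment[new_group].append(p)
    return new_assignment

participants = [f"P{i:02d}" for i in range(1, 55)]  # 54 participants
initial = assign_groups(participants)
final = crossover(initial)
```

In this sketch, the crossover guarantees that no one repeats their original condition, which is what lets researchers compare the same person’s brain engagement with and without the tool they had grown used to.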
What the Data Shows
Findings showed that participants in the Brain-Only group exhibited the highest levels of brain activity, particularly in regions associated with memory formation, creativity, and deep cognitive processing. These participants also reported higher satisfaction and stronger recall of their writing.
Conversely, those in the LLM group demonstrated significantly lower brain activity during tasks. Many relied heavily on ChatGPT, with some simply copying responses verbatim without engaging in critical thinking or editing. When later asked to recall or rewrite their essays without AI support, most were unable to reproduce key content or ideas they had previously written with ChatGPT’s help.
This lack of engagement led researchers to identify what they call “cognitive debt”—a situation in which the mind defers active thinking and processing in favor of quick AI-generated answers. According to the study, this may impair a user’s ability to internalize information or develop original thoughts.
“When participants copied suggestions without evaluating them, they risked absorbing shallow, potentially biased perspectives,” the report states. “They also struggled to recall their own written content.”
In terms of essay quality, the LLM group scored lowest across all measures, from linguistic structure to teacher-reviewed assessments. Interestingly, the Google group fell in the middle, showing moderate brain activity and better essay performance than the LLM group, though not as strong as the Brain-Only writers.
Implications for Learning and AI Use
Lead researcher Dr. Nataliya Kosmyna emphasized that the study is not an indictment of AI tools but a call for thoughtful, guided use—especially in educational contexts.
“The developing brain is most vulnerable,” Kosmyna said. “We need to be cautious about allowing students to offload cognitive effort too early or too often.”
Other experts echoed this concern. Child psychiatrist Dr. Zishan Khan noted the potential for unintended psychological and cognitive consequences, especially in younger users who depend on AI tools without developing core learning and critical thinking skills.
That said, the study also revealed a silver lining. Participants who began in the Brain-Only group and later used ChatGPT showed greater brain connectivity and better memory performance than those who had started with AI assistance. This suggests that AI can be beneficial when layered on top of existing mental models, serving as an enhancer rather than a substitute.
What Is Cognitive Debt?
The term “cognitive debt” refers to the long-term costs of consistently relying on AI to perform mental work. Much like financial debt, it may feel convenient in the short term but accumulate harmful consequences if left unchecked.
“Users may lose ownership of their ideas, struggle to recall information, and become passive in their thinking,” the authors warn.
Researchers recommend that educators and institutions introduce AI literacy programs that teach students how to engage with AI tools critically. Instead of asking ChatGPT to write full essays, learners might use it to brainstorm outlines, correct grammar, or explore alternative perspectives—while ensuring their cognitive engagement remains high.
Conclusion
While the MIT study is still awaiting formal peer review, it offers a valuable early look at the nuanced effects of AI in daily cognitive tasks. The core message is not that ChatGPT makes users “dumber,” as some headlines claim, but rather that overdependence may reduce the brain’s ability to actively think, analyze, and remember.
As AI tools become more integrated into classrooms, offices, and homes, finding the right balance will be key. Used wisely, ChatGPT and similar models can empower learners and professionals alike. Used carelessly, they may rob us of the very skills that define human intelligence.