LIRNEasia Journal Club: Your Brain on ChatGPT – Exploring the Cognitive Debt of AI-Assisted Writing


Posted on December 8, 2025

In LIRNEasia’s Journal Clubs, we take an in-depth look at existing literature to inform and guide our research. Given that AI and digital technologies in education are a key focus area for us, we selected a study closely aligned with this theme for the Journal Club held on 7 October 2025. During the session, we evaluated the report ‘Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task’, authored by Nataliya Kosmyna, PhD, and her colleagues at the MIT Media Lab.

The session aimed to unpack the paper’s findings, implications, and limitations in the context of learning, AI, and education, helping us reflect on how these insights can inform our ongoing work in the education and technology space.

About the Study

This MIT study examined how using ChatGPT and other tools, such as search engines, influences the brain’s engagement, memory, and sense of ownership during essay writing.

Participants: 54 university students from MIT and other institutions were divided into three groups:

  • Group 1 (LLM): Used ChatGPT (GPT-4o) exclusively.
  • Group 2 (Search): Used only Google Search, with no AI-generated results.
  • Group 3 (Brain-Only): Wrote essays without any digital assistance.

Each group completed three writing sessions under their originally assigned conditions. In a fourth session, the conditions were swapped for the ChatGPT and Brain-only groups: ChatGPT users wrote without AI, while Brain-only writers used ChatGPT. This design was intended to surface “cognitive debt,” the condition in which repeated reliance on external systems such as Large Language Models (LLMs) replaces the effortful cognitive processes required for independent thinking (Kosmyna et al., 2025).

Methodology

  • An electroencephalogram (EEG) was used to record participants’ brain activity during the essay-writing sessions. EEG measures electrical signals in the brain through electrodes placed on the scalp.
  • Natural Language Processing (NLP), a computational method for analyzing written text, was used to examine the essays produced by the three groups.
  • Human teachers and an AI judge (a specially built AI agent) were used to score the written essays.
  • Interviews were conducted to gather participants’ views on ownership, satisfaction, and memory recall.

Key Findings

The presentation summarized the study’s key findings under the following four areas.

  1. Brain and Cognitive Engagement
  • EEG recordings showed that the Brain-only group displayed the strongest neural connectivity, whereas ChatGPT users showed the weakest cognitive engagement.
  • When roles were reversed in the fourth session, previous ChatGPT users struggled to focus and recall information, indicating cognitive debt.
  • Conversely, Brain-only writers who later used ChatGPT benefited from the tool, combining their prior cognitive effort with the AI’s efficiency.
  2. Natural Language Processing (NLP) Analysis of Essays
  • ChatGPT essays were grammatically perfect but repetitive and impersonal, exhibiting surface-level polish but limited conceptual depth.
  • The search group’s (Group 2) essays were factual but formulaic, whereas the Brain-only group’s essays showed the most originality and diversity of ideas.
  3. Human vs. AI Evaluation
  • The AI judge scored ChatGPT essays higher for structure and grammar.
  • Human teachers, however, preferred Brain-only essays for originality, insight, and authenticity, illustrating that AI values form, while humans value substance.
  4. Participant Reflections
  • ChatGPT users reported low ownership: “Not really my essay.”
  • Brain-only writers expressed pride and an emotional connection to their work.
  • In session 4, ChatGPT users struggled when writing without AI, whereas unaided writers adapted easily when given access to ChatGPT.

Conclusions of the Study

The study concludes that, while LLMs simplify and accelerate writing, they also reduce deep cognitive engagement, memory, and sense of ownership. ChatGPT users produced smoother essays but were often detached from their own ideas, reflecting a shift from independent thinking to AI prompting. Brain-only participants, by contrast, exhibited stronger neural engagement and more meaningful learning experiences. The researchers caution that, as AI tools become increasingly embedded in education, their long-term impact on human cognition must be studied before assuming they are universally beneficial.

Discussion 

Our discussion centred on three key themes:

  1. Study Design and Cognitive Debt

We noted several experimental design limitations, primarily related to sample selection, screening, writing task duration, and other aspects of the task instructions and settings. We discussed the need for longer-term studies, studies customized to diverse AI tasks, and more diverse participant samples.

We also looked at the meaning and function of the evaluation techniques the authors used for NLP analysis, including Named Entity Recognition (NER) and N-gram analysis. The discussion focused on the high NER counts in essays written with ChatGPT, which indicate frequent use of named entities (people, places, organizations) and reflect ChatGPT’s data-driven writing style rather than conceptual understanding. Similarly, essays from the ChatGPT group showed high counts of repeated N-grams (contiguous sequences of items from a text, such as letters, syllables, or words), indicating frequent repetition of phrasing but lower conceptual depth.
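To make the N-gram idea concrete, the following is a minimal sketch in Python of counting word trigrams and measuring how repetitive a text is. This is only an illustration of the general technique, not the authors’ actual analysis pipeline; the function names, the repetition score, and the example text are our own.

```python
from collections import Counter

def ngram_counts(text, n=3):
    """Count contiguous word n-grams (default n=3, i.e. trigrams) in a text."""
    words = text.lower().split()
    grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    return Counter(grams)

def repetition_score(text, n=3):
    """Fraction of n-gram occurrences belonging to a repeated n-gram.
    Higher values suggest more repetitive phrasing."""
    counts = ngram_counts(text, n)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / total

# A deliberately repetitive toy "essay": the reused phrases push the score up.
essay = ("AI tools can support learning. AI tools can support learning "
         "when they are used carefully and when they are used with guidance.")
print(repetition_score(essay))
```

A high score on a real essay would mirror what the study reports for the ChatGPT group: many recurring word sequences relative to the total, i.e. surface polish with repetitive phrasing.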

  2. Improving Experimental Approaches

We discussed that future studies should differentiate the purposes for which AI is used, such as brainstorming, structuring, and editing, and include participants with varying levels of AI familiarity.

  3. Applications in Education

We emphasized the importance of balancing human thought with AI assistance, encouraging students to think first and use AI second. We also discussed that AI should be used to complement, not replace, human cognition and teaching. Overreliance on LLMs may weaken students’ critical thinking and independence, while structured and guided use can enhance creativity and learning outcomes. Examples, such as EkStep and Mindspark from India, were cited as models of adaptive learning tools. These systems personalize content, support teachers, and use AI for assessment and curriculum adaptation.

We also reflected on the ethical and pedagogical implications of “AI dependence” and the potential erosion of critical thinking if educational systems prioritize speed over cognitive depth. The discussion broadened to distinguish between different AI tools: generative AI, recommender systems, and speech-to-text applications. It was agreed that, while all these systems fall under the AI umbrella, generative AI uniquely creates content that mimics human reasoning.

In conclusion, we agreed that LLMs offer powerful tools for learning, but their use must be intentional and guided. True educational value comes not from offloading thinking to AI, but from integrating it as a supportive partner that augments human intelligence rather than replaces it.

By Isuru Udakara Yakandawala (Junior Researcher, LIRNEasia)

Find the slide set of this journal club below.

