AI Monthly: is AI making us less intelligent?
Many of us have turned to ChatGPT to help with writing an email, summarising a report, or even brainstorming ideas. But in doing so, are we not just outsourcing our tasks, but our thinking too? Does relying on AI ultimately make us less intelligent?
Do Large Language Models (LLMs) make us dumb?
Two recent studies have analysed the effects of Large Language Models (LLMs) on our cognitive behaviour, and the results are fascinating. Not only do we tend to adopt AI-generated language, but we also seem to engage our brains less, becoming cognitively lazier.
In the paper “Your brain on ChatGPT”, researchers at the MIT Media Lab studied the cognitive, neural and behavioural effects of using ChatGPT for educational essay writing. Participants were divided into three groups: one used only ChatGPT, another used the traditional Google search engine, and the third used no tools at all. The latter “brain-only” group showed the highest cognitive engagement, memory retention and originality, along with the strongest neural activity. The LLM group, which depended heavily on AI, exhibited the lowest brain activity and poor memory recall; their essays were well-polished but generic and lacked creativity. The search-engine group fell somewhere in between.
When participants switched roles, those who had previously written essays without AI (the brain-only group) showed a significant increase in brain connectivity when they began using an LLM – suggesting that strategic, later-stage use of AI can enhance cognitive engagement and performance. In contrast, participants who had relied on LLMs in earlier sessions showed reduced neural activity and poorer performance when asked to write without AI, indicating that early dependence on AI may hinder the development of essential cognitive skills.
The authors acknowledge that the study size was relatively small, focused on the Boston area, and that results obtained with ChatGPT alone cannot be generalised to other LLMs. Still, the implications are far-reaching, especially as LLMs become embedded in educational and professional workflows worldwide. The authors caution against using LLMs too early or too often, as doing so can diminish deep cognitive engagement, critical thinking and creativity – ultimately making us less intelligent.
In another recent study, conducted by the Max Planck Institute for Human Development, the authors found that humans are increasingly imitating the linguistic style of LLMs in spoken communication, likely reducing linguistic diversity. After screening 280,000 English-language videos from more than 20,000 academic YouTube channels, the study found a significant increase in the use of words commonly found in ChatGPT-edited texts, such as “delve”, “realm”, or “meticulous”. The authors suggest that this trend reflects a feedback loop in which humans not only shape machine learning models but are also shaped by them through exposure to machine-generated content.
Should we therefore stop relying on AI to enhance our work? As is often the case, the truth likely lies somewhere in between. If we allow GenAI to do all the thinking for us, it's no surprise that our ability to think independently diminishes, and we risk becoming copies of LLMs rather than the other way around. Instead, we should focus on using them to amplify – not replace – our human strengths: curiosity, critical thinking, and creativity. Otherwise, we may end up like Joanna Maciejewska’s viral quote: “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes”.
The illusion of thinking: are AI models dumb too?
It's not just humans who might be getting less intelligent. Could AI models be less smart than they appear?
LLMs have evolved into Large Reasoning Models (LRMs), models that are supposed to feature logical inference, multi-step problem-solving, structured decision-making and self-reflection. A study by Apple dubbed “The Illusion of Thinking” challenges the reasoning capabilities of these models, suggesting that current LRMs simulate reasoning rather than truly understanding or generalising it, thus creating the “illusion of thinking”. While LRMs excel at moderate complexity due to their extended reasoning capabilities, they often overthink simple problems, struggle with self-correction in harder ones, and ultimately fail to solve the most complex tasks. In short, today’s LRMs may sound smart – but they largely still seem to be guessing, not reasoning.
Are AI models ultimately only as capable as the human programmers behind them? And are our fears of a super AI that could wipe us out misplaced? Copilot’s answer? “Yes – at least for now. Current AI models are powerful but not autonomous. They are tools, not agents. The idea of a superintelligent AI that could wipe us out is, at present, science fiction. The real challenges lie in the responsible development, regulation, and application of AI by us humans.”
For now, it seems we don't need to worry too much about GenAI... do we?
This publication has been prepared by ING solely for information purposes, irrespective of a particular user's means, financial situation or investment objectives. The information does not constitute an investment recommendation, nor is it investment, legal or tax advice or an offer or solicitation to purchase or sell any financial instrument.