In just a few years, artificial intelligence (AI) has gone from being an abstract concept reserved almost exclusively for science fiction films to becoming a tool we use constantly. It helps us to write texts, search for answers, translate languages, plan trips and create our fitness regimes, among what seems like an endless number of other uses. But this technological revolution poses a question: could AI be changing, or even atrophying, the way our brain functions?
As we delegate more and more of our cognitive tasks to algorithms, it’s logical to wonder if we aren’t hampering the exercise of skills that were once essential: remembering, deducing, paying attention, writing, calculating or even conversing face-to-face.
And although the aim isn’t to demonise AI, experts warn that its thoughtless use could have more significant consequences than we imagine, especially in developing brains or in emotionally vulnerable people.
Could this new tool have a truly devastating impact on our brain? The short answer is no, at least not for adults, and not in the strictest sense. But it can modify the brain’s functioning and, with it, alter fundamental processes such as memory, attention and decision-making.
How AI alters brain function: Memory, attention and decision-making
Ignacio Morgado, Emeritus Professor of Psychobiology at the Institute of Neurosciences of the University of Barcelona, explains: “Artificial intelligence makes the brain work in a different way. Instead of directly storing information, it stores it in ‘files’ that contain much more information than the brain itself can handle.”
In other words, when we constantly consult an AI for recipes, writing help or solutions to problems, it can make our brain act like an orchestra conductor who just pushes a button instead of actually interpreting the music. Although this frees up some mental space, it could eventually make us dependent and less able to think for ourselves.
The impact of AI on the developing adolescent brain
This effect becomes especially relevant in the case of teenagers. “The adolescent brain is still immature; it’s easier to deceive, influence and steer in inappropriate directions, such as violence,” warns Morgado. “Reason, which is protective against these dangers, doesn’t fully mature until at least 20 years of age, though earlier in girls than in boys.” This is why the inappropriate use of AI can alter adolescent development.
Still, the problem isn’t just early access but the way teens are using AI. Minors may turn to these tools to do homework, solve social problems or seek emotional answers they aren’t yet prepared to interpret. “The fact that children use AI from a very young age can indeed be harmful to their development,” the expert emphasises.
Can AI accelerate cognitive decline in older adults?
In older adults, the impact appears to be different. “Cognitive decline can indeed accelerate from not exercising the brain; but not from using AI,” Morgado points out. In fact, artificial intelligence could have some therapeutic value if used in a directed way to stimulate the mind, play, learn or train cognitive abilities.
However, there are two things that always apply: common sense and information. “We need to be well-informed about the tool we are using. Understanding, for example, that we’re not talking to a person or a professional, but to a programmed machine with limitations,” Morgado reminds us.
Essentially, if we know how to use artificial intelligence, it can be beneficial. If we abuse it or don’t use it properly, it could end up affecting our brain. One thing, though, is clear: “[AI] is here to stay and will condition our lives in many ways; we need to try to ensure that the effects are positive.”
Mental health chatbots: When do AI assistants become dangerous?
One of the areas where the use of artificial intelligence has expanded most is mental health. Chatbots designed to converse with people who feel lonely, anxious or depressed are increasingly common. And although AI can be useful as a complement to treatment, psychotherapy experts are concerned about unsupervised use.
According to a review of studies carried out by Barcelona’s Itersia Psychotherapy Centre, chatbots’ 24/7 accessibility and ability to simulate empathy mean that many users, especially young people or emotionally fragile individuals, replace psychological consultations with conversations with virtual assistants.
“Swapping a therapist for a chatbot can lead to anything from missing a clinical crisis to emotional dependence without real support,” warns psychologist Elisabet Sánchez. “These systems can complement, but not replace, human clinical supervision in mental health interventions.”
The situation is especially worrying in rural areas or contexts where access to psychologists is limited. “Patients with emotional or mental health problems have swapped what used to be dubbed ‘Dr Google’ consultations for ‘Dr ChatGPT’ or similar tools, where they get a real-time response at any time of the day,” explains Sánchez. “Society’s need for immediacy has meant a shift from a real psychological consultation to conversational assistants.”
A study published in the journal JMIR Mental Health concludes that although AI models can be useful in psychoeducation and basic emotional management, they are not capable of detecting warning signs, making accurate diagnoses or genuinely empathising with patients. Their “diagnostic accuracy, cultural competence, and ability to engage users emotionally remain limited,” the report states.
5 critical risks of using AI for emotional support and therapy
Mental health experts at Itersia have compiled a list of the main dangers associated with the indiscriminate use of AI and chatbots by emotionally vulnerable people:
- Failure to detect crises: AI assistants are not prepared to identify or intervene in risk situations such as suicidal thoughts, violence or psychotic episodes.
- Illusion of emotional connection: Conversational language can create the false sensation of talking to someone who genuinely understands, fostering an artificial relationship without professional support.
- Incorrect advice: They can offer erroneous, non-evidence-based or even dangerous answers.
- Lack of clinical context: They don’t properly “understand” the personal, cultural or emotional nuances that a human therapist can interpret.
- Absence of ethical oversight: They are not subject to ethical codes or clinical supervision, which can lead to irresponsible use.
A recent study of patients suffering from social anxiety found that those with the most severe symptoms were precisely the ones who most trusted chatbots, seeing them as a safe refuge from human interaction. This, however, can reinforce social avoidance and reduce the likelihood of seeking real professional help.