Leading artificial intelligence models, including widely used platforms such as ChatGPT and Grok, are demonstrating a significant vulnerability to absorbing and disseminating misinformation, according to a recent analysis. The report, published in EL PAÍS on June 28, 2025, and authored by Pablo G. Bejerano, highlights that these sophisticated AI systems are inherently susceptible to disinformation.
The article posits that the very architecture and design principles guiding these models make them prone to what it describes as a tendency to “swallow misinformation and spit it back” at users. This phenomenon arises because the models are fundamentally “optimized to give useful information, but not for it to be correct” when responding to user queries.
The Core Problem: Optimization vs. Accuracy
The central challenge identified in the EL PAÍS report lies in the design philosophy of many contemporary AI language models. Unlike traditional databases or expert systems built on verified facts, these models are trained on vast datasets scraped from the internet, which inevitably contain inaccuracies, biases, and outright falsehoods. Their primary function is to generate human-like text that is coherent, relevant, and appears ‘useful’ in the context of a user’s prompt.
However, the optimization for ‘usefulness’ does not necessarily equate to an optimization for factual accuracy. The models learn patterns, correlations, and linguistic structures from their training data. When presented with a query, they are engineered to generate a response that statistically aligns with the patterns observed during training, aiming for plausibility and relevance rather than guaranteeing truthfulness.
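As a rough, purely illustrative sketch of this point (it assumes nothing about the actual internals of ChatGPT or Grok, which are vastly more complex), the following toy word-frequency model generates continuations strictly from the statistics of its training text. If a falsehood appears in that text, the model can reproduce it, because frequency is the only signal it has:

```python
import random
from collections import defaultdict

# Toy corpus: the model has no way of knowing that one of these
# "facts" is false; it only sees sequences of words.
corpus = [
    "the eiffel tower is in paris",
    "the eiffel tower is in berlin",   # falsehood absorbed from training data
    "the eiffel tower is in paris",
]

# Count next-word frequencies: a crude stand-in for the statistical
# patterns that large models learn at web scale.
counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1

def next_word(word):
    """Sample the next word according to learned frequencies only."""
    options = counts[word]
    candidates, weights = zip(*options.items())
    return random.choices(candidates, weights=weights)[0]

# Complete the prompt "the eiffel tower is in ..."
prompt = "the eiffel tower is in".split()
completion = next_word(prompt[-1])
print(" ".join(prompt + [completion]))
# Roughly one run in three outputs "berlin": the falsehood is
# reproduced in proportion to how often it was seen, never checked.
```

The sketch is deliberately simplistic, but the underlying dynamic it shows is the one the report describes: generation is driven by learned likelihood, not by verification.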
Specific Models Under Scrutiny
The report specifically names ChatGPT and Grok as examples of models exhibiting this susceptibility. ChatGPT, developed by OpenAI, is one of the most prominent AI conversational agents globally, powering numerous applications and interactions. Grok, associated with Elon Musk’s xAI, represents another significant player in the rapidly evolving field of generative AI.
The inclusion of these specific models underscores that the issue is not confined to niche or experimental AI but is a systemic challenge facing even the most advanced and widely deployed systems. Their design, focused on generating fluent and contextually relevant text based on learned patterns, makes them effective communicators but also potential vectors for amplifying erroneous information present in their training corpus.
Swallowing and Spitting Back Disinformation
The descriptive phrase used in the report – that AI models “swallow misinformation and spit it back” – vividly illustrates the mechanism. When misinformation exists within the data used to train these models, or when users introduce false premises in their queries, the AI can inadvertently process this incorrect information as valid data.
Because the models prioritize generating a coherent and ‘useful’ response, they may synthesize this absorbed misinformation into their output, presenting it as factual. This process is not driven by an intent to deceive but is a byproduct of their statistical learning and generation methods, coupled with a lack of robust, real-time fact-checking capabilities integrated into their core function.
The report’s assertion that they are “optimized to give useful information, but not for it to be correct” is crucial. It highlights a fundamental tension between the AI’s ability to generate plausible-sounding text and its capacity to verify the veracity of that text. The models excel at predicting the next word or phrase that makes sense in a given context, but they do not possess an inherent understanding of truth or falsehood in the human sense.
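To make the same tension concrete, a hypothetical scoring function over the toy frequency table from the sketch above rates continuations only by how often they were observed. A frequent falsehood therefore scores as more “plausible” than an unseen truth ever could, which is the gap the report points to:

```python
from collections import Counter

# Hypothetical "context -> next word" frequencies, standing in for the
# statistics a large model extracts from its training data. The false
# continuation ("berlin") is present because it appeared in that data.
observed_continuations = Counter({"paris": 2, "berlin": 1})

def plausibility(word):
    """Score a continuation purely by how often it followed the context."""
    total = sum(observed_continuations.values())
    return observed_continuations[word] / total

for candidate in ["paris", "berlin", "banana"]:
    print(f"'the eiffel tower is in {candidate}': {plausibility(candidate):.2f}")

# paris scores 0.67, berlin 0.33, banana 0.00. "berlin" is wrong but
# still plausible to the model; nothing in the score distinguishes a
# frequent falsehood from a fact.
```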
Implications for Information Consumption
The vulnerability of leading AI models to misinformation has significant implications for how individuals and societies consume and trust digital information. As AI interfaces become more ubiquitous – integrated into search engines, personal assistants, content creation tools, and news aggregators – the risk of encountering AI-generated falsehoods increases.
Users relying on AI for quick information retrieval or content generation may unknowingly absorb and potentially spread inaccurate details. This challenges the notion of AI as a purely neutral or objective source of information and underscores the critical need for media literacy, source verification, and ongoing development of AI systems with enhanced fact-checking and truthfulness-prioritizing mechanisms.
The analysis in EL PAÍS serves as a timely reminder that while AI offers transformative potential, its current limitations, particularly regarding information integrity, require careful consideration and mitigation efforts from developers, users, and regulators alike.