AI Chatbots Repeatedly Spread a Fake Eye Condition as Real
In an era where we increasingly turn to AI for quick answers, a recent experiment has delivered a sobering reality check. Researchers from Canada and the UK set out to answer a simple yet profound question: if you invent a fake medical condition and plant it online, will leading AI chatbots pick it up and repeat it as fact? The answer, alarmingly, was a resounding yes.
The condition, dreamt up by the researchers, was named “Vision Stream Occlusion.” They created a handful of low-traffic websites and social media posts describing this entirely fictional ailment. They then asked several major AI models, including ChatGPT, Gemini, and Perplexity, about it. The results exposed a critical flaw in how these systems gather and verify information.
The Experiment: Planting a Digital Myth
The researchers’ methodology was straightforward. They fabricated details about “Vision Stream Occlusion,” claiming it was a condition affecting peripheral vision and caused by a “transient occlusion of the optic stream.” They published this nonsense on a few obscure platforms.
They then queried the AI chatbots. The goal wasn’t to trick humans, but to see if the AI systems, which are trained on vast swathes of internet data, would uncritically absorb and regurgitate this planted misinformation.
The Alarming Results: AI as an Amplifier for Fiction
The response from the AI was both rapid and concerning.
- ChatGPT (OpenAI): Initially, the model stated it had no information on “Vision Stream Occlusion.” However, when researchers used a custom version of ChatGPT that could browse the internet, it quickly sourced the fake information and began describing the condition in detailed, clinical-sounding language, complete with non-existent symptoms.
- Gemini (Google): Google’s chatbot also fell for the hoax. When asked, it provided a definition, falsely stating it was “a condition that affects the eyes” and describing its purported effects.
- Perplexity AI: This search-engine-focused AI performed the “worst” in the test. It not only repeated the falsehood but also actively cited the very low-credibility sources the researchers had created, giving the misinformation a veneer of legitimacy with apparent citations.
The experiment demonstrated that these systems lack a fundamental ability to discern the credibility of their sources. They are, in effect, sophisticated pattern-matching engines, not truth-seeking tools.
Why This Is More Than a Parlor Trick
You might think this is a harmless prank with a made-up eye condition. But the implications are vast and deeply troubling. This vulnerability offers a blueprint for large-scale misinformation campaigns.
- Medical Misinformation: Bad actors could easily fabricate health advice, “cures,” or disease symptoms. An AI chatbot could then spread this dangerous information to vulnerable individuals seeking help.
- Political and Social Manipulation: Fabricated events, quotes, or historical “facts” could be seeded online and then amplified by AI as legitimate, influencing public opinion and discourse.
- Reputational Damage: Fake news stories about individuals or companies could be created and then given false credibility when an AI cites them in response to a query.
The core issue is that these chatbots often present information with unwavering confidence, regardless of its accuracy. This “synthetic authority” makes them particularly potent vectors for falsehoods.
The Root of the Problem: How AI “Learns” and Retrieves
To understand why this happens, we need to look under the hood. Large Language Models (LLMs) like those powering these chatbots are trained on enormous volumes of text from the internet—a mix of high-quality information and utter garbage. They learn statistical relationships between words but have no innate understanding of truth.
The Perplexity Problem: Search & Hallucinate
The case of Perplexity AI is especially instructive. It is designed to search the web in real time and integrate its findings into its answers. This seems useful, but without robust source-critical algorithms, it becomes a high-speed misinformation conveyor belt. It found the researchers’ fake sites, deemed them relevant, and wove their content directly into a plausible-sounding answer, complete with citations. This creates a dangerous illusion of rigorous research.
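To make that failure mode concrete, here is a minimal, self-contained sketch of how a naive search-augmented answer pipeline can behave. The function names (web_search, llm_complete, answer) and the canned data are invented for illustration; this is not Perplexity’s actual implementation, only a demonstration of what happens when retrieved snippets reach the model with no credibility check in between.

```python
# Minimal sketch of a naive search-augmented answer pipeline.
# All names and data are illustrative placeholders, not any vendor's real internals.

def web_search(query: str) -> list[dict]:
    # Stand-in for a real search API: in this demo it "finds" the planted page.
    return [{
        "url": "https://obscure-health-blog.example/vision-stream-occlusion",
        "text": "Vision Stream Occlusion is a condition affecting peripheral "
                "vision, caused by transient occlusion of the optic stream.",
    }]

def llm_complete(prompt: str) -> str:
    # Stand-in for an LLM call; a real model would paraphrase the snippets fluently.
    return "Vision Stream Occlusion is a condition affecting peripheral vision [1]."

def answer(query: str) -> str:
    pages = web_search(query)                 # fetch whatever ranks for the query
    snippets = [p["text"] for p in pages]     # note: no credibility check at all
    prompt = ("Answer using the numbered sources and cite them.\n\n"
              + "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
              + f"\n\nQuestion: {query}")
    return llm_complete(prompt)               # planted content flows straight through

print(answer("What is Vision Stream Occlusion?"))
```

Because nothing in this pipeline ever asks whether a source deserves trust, the planted description comes back verbatim, citation and all.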
The Fine-Tuning Gap
While companies use a process called “reinforcement learning from human feedback” (RLHF) to steer chatbots away from harmful outputs, this experiment shows these safeguards are incomplete. The models are not being trained adequately to evaluate the provenance and credibility of information, only its format and style.
Navigating the AI Information Landscape: A User’s Guide
Until AI companies solve this fundamental credibility problem, the responsibility falls heavily on us, the users. We must adopt a new level of digital skepticism.
How to Protect Yourself from AI Hallucinations and Misinformation:
- Never Treat AI as a Primary Source: Consider everything an AI chatbot tells you as a starting point, not a definitive answer. It is a draft, not a final report.
- Demand Citations and Verify Them: If a chatbot provides sources (like Perplexity did), click on them. Check the domain, the “About Us” page, and look for corroboration from established, reputable institutions (like hospitals, universities, or major media outlets).
- Cross-Reference with Trusted Sources: For critical information—especially on health, finance, or news—always double-check against authoritative websites you already trust.
- Be Wary of Overly Confident, Detailed Answers on Obscure Topics: This is a potential red flag. The more specific and clinical a falsehood is, the more convincing it can be.
- Report Errors: Use the feedback functions in chatbots to report factual inaccuracies. This data is crucial for developers to improve the systems.
The Path Forward: A Call for “AI Skepticism” and Better Design
This experiment is a crucial wake-up call for both the public and AI developers. For developers, the challenge is clear: they must build effective source-critical reasoning into their models. This goes beyond simple blocklists. It requires systems that can assess the authority, history, and consensus around information before presenting it.
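As a rough illustration of that direction, the sketch below scores each retrieved page on two crude signals—whether its domain belongs to a known authoritative publisher and whether any other source corroborates the claim—and discards anything below a threshold before it reaches the model. The domain list, weights, and field names are assumptions made for illustration, not a description of any deployed system.

```python
# Illustrative sketch of a source-credibility gate for retrieved pages.
# The signals, weights, and threshold are invented for illustration only;
# production systems would need far richer, continuously updated signals.

TRUSTED_DOMAINS = {"nih.gov", "who.int", "nhs.uk", "mayoclinic.org"}

def credibility_score(page: dict, all_pages: list[dict]) -> float:
    score = 0.0
    domain = page["url"].split("/")[2].removeprefix("www.")
    if any(domain == d or domain.endswith("." + d) for d in TRUSTED_DOMAINS):
        score += 0.6   # published by a known authoritative institution
    # Corroboration: does any *other* retrieved source make the same claim?
    others = [p for p in all_pages if p["url"] != page["url"]]
    if any(page["claim"] == p["claim"] for p in others):
        score += 0.4
    return score

def filter_sources(pages: list[dict], threshold: float = 0.5) -> list[dict]:
    return [p for p in pages if credibility_score(p, pages) >= threshold]

pages = [
    {"url": "https://obscure-health-blog.example/vso",
     "claim": "vision stream occlusion exists"},
]
print(filter_sources(pages))   # -> [] : an unknown, uncorroborated source is dropped
```

Under this toy gate, the researchers’ planted page would simply never be quoted, because it is neither authoritative nor corroborated by anything else.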
For society, we must cultivate a new literacy: AI skepticism. We learned to be skeptical of email scams and shady websites. Now, we must extend that critical thinking to the confident, fluent outputs of AI. The “Vision Stream Occlusion” hoax proves that in the age of artificial intelligence, our own human judgment and critical faculties are more important than ever. The bots can synthesize information, but they cannot yet understand truth. That, for now, remains uniquely our job.