
Understanding AI Hallucinations: How ReganByte Keeps Medical Chatbots Safe and Accurate

What are AI hallucinations?


AI hallucinations happen when a chatbot gives an answer that sounds convincing but is actually wrong or completely made up. You will often see this with large, standalone LLMs such as ChatGPT or Claude: sometimes these chatbots confidently give answers that simply are not true.


Why does this happen?


It is because these models are trained on huge amounts of text from the internet. That includes solid, well-researched information but also personal blogs, opinions, and outdated data. The chatbot does not truly know what is correct. It is just trying to guess what looks like a good answer based on patterns in its training data. And sometimes, that guess is way off.

If these models encounter gaps in their training data, they may simply fill those gaps with plausible but incorrect information instead of admitting that they do not know.


Real-World Examples of AI Hallucinations


Recent incidents have highlighted the risks of AI hallucinations:


  • Inaccurate Food Safety Advice: Google's AI-powered search summaries suggested that adding glue to pizza sauce could help cheese stick better. This incorrect advice not only misled users but also posed potential health risks. (The Verge)

  • False Cheese Consumption Statistics: In a Super Bowl advertisement, Gemini claimed that Gouda accounts for "50% to 60% of global cheese consumption", a statistic that was later debunked and attributed to an AI hallucination. (New York Post: https://nypost.com/2025/02/06/business/google-edited-super-bowl-ad-after-viewers-cited-by-gemini-ai/)



[Image: collage of news headlines about AI errors and hallucinations, with excerpts about misleading AI advice.]


Why this matters for healthcare


When people are asking about their health, accuracy has to be the priority above all else. A wrong answer could confuse them, make them anxious, or even lead to dangerous decisions. This is why using general-purpose AI models like ChatGPT, Claude or Gemini directly for healthcare advice comes with real risks: they can hallucinate and give incorrect answers that sound trustworthy but are not.


If you’d like to double-check your own chatbot setup, we’ve put together a short, practical checklist to help. You can download it here and use it as a guide to make sure everything is safe and reliable.




How ReganByte’s medical chatbots avoid this problem


ReganByte's medical chatbots are built on a RAG (Retrieval-Augmented Generation) system. This means our chatbots generate answers to your questions, but they only use the data they retrieve from a controlled and verified knowledge base of sources. This approach is vital for healthcare charities and patient organisations, and a simplified sketch of the pattern follows the list below.


  • Every answer comes from information we have checked and agreed with you.

  • The chatbot will never guess. If the answer is not in the knowledge base, it will say so.

  • It will not pull data from the internet or make things up.
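
For readers who want a rough idea of how this works under the hood, here is a minimal sketch of the general retrieve-then-refuse pattern. It is purely illustrative: the function names, the crude word-overlap scoring, and the threshold are assumptions made for the example, not ReganByte's production code.

```python
# Illustrative sketch of a RAG-style "answer only from verified sources" pattern.
# All names and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    source: str        # e.g. the title of a clinically reviewed page in the knowledge base
    score: float = 0.0

SIMILARITY_THRESHOLD = 0.6   # hypothetical cut-off: below this, the match is too weak to trust

def retrieve(question: str, knowledge_base: list[Passage]) -> list[Passage]:
    """Rank verified passages by a very crude word-overlap relevance score."""
    question_words = set(question.lower().split())
    for passage in knowledge_base:
        overlap = question_words & set(passage.text.lower().split())
        passage.score = len(overlap) / max(len(question_words), 1)
    return sorted(knowledge_base, key=lambda p: p.score, reverse=True)

def answer(question: str, knowledge_base: list[Passage]) -> str:
    """Answer only from verified passages; decline (and flag) anything else."""
    matches = [p for p in retrieve(question, knowledge_base) if p.score >= SIMILARITY_THRESHOLD]
    if not matches:
        # Nothing verified covers this question, so the bot declines rather than guessing.
        return "I'm sorry, I don't have verified information on that. I've flagged it for the team."
    # In a real RAG system a language model would be prompted to answer *only*
    # from the retrieved passages and to cite them; here we just return the best match.
    best = matches[0]
    return f"Based on {best.source}: {best.text}"
```

In a real deployment the retrieval step would use embeddings rather than simple word overlap, but the key behaviour is the same: if there is no verified passage, there is no answer.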


But what happens if someone asks a question the chatbot cannot answer?


This is where we add another layer of safety. We have built an alert and monitoring system that keeps us closely connected to what is happening (a simplified example follows the list below).

  • If the chatbot is asked something outside the knowledge base, we are alerted straight away.

  • We will review that query, discuss it with you, and, if needed, help add verified information to the knowledge base.

  • If you would like, we can also set up alerts for your team, so you are kept in the loop as well.
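
Again for the technically curious, here is a rough sketch of what such an alert could look like. The webhook address, payload fields, and function name are hypothetical placeholders used only to illustrate the idea; they are not our actual monitoring system.

```python
# Illustrative sketch of flagging an out-of-knowledge-base question to a monitoring team.
# The endpoint and payload format are hypothetical.

import json
import urllib.request
from datetime import datetime, timezone

ALERT_WEBHOOK = "https://example.com/chatbot-alerts"   # placeholder endpoint, not a real URL

def flag_unanswered_query(question: str, chatbot_id: str) -> None:
    """Notify the monitoring team that a question fell outside the knowledge base."""
    payload = {
        "chatbot_id": chatbot_id,
        "question": question,
        "reason": "no_verified_source_found",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    request = urllib.request.Request(
        ALERT_WEBHOOK,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # A production version would add retries, error handling, and authentication.
    urllib.request.urlopen(request)
```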



[Image: example chat in which a user asks about putting sweetcorn on pizza. Dwayne, the chatbot, apologises, explains the question is outside its knowledge base, and says it has informed the team.]




We do not just deploy and disappear


For the first three months after your chatbot goes live, we do more than just monitor. We actively review conversations, watch for patterns, and make sure everything is working as expected. If gaps appear, we help you fill them. If something unexpected comes up, we deal with it together. This post-deployment care helps make sure your chatbot is safe, reliable, and doing exactly what you need it to do.


A simple comparison

General-purpose AI models like ChatGPT and Claude versus the ReganByte medical chatbot:

  • Training data: ChatGPT and Claude are trained on massive public datasets that may include errors or incomplete data. The ReganByte chatbot is locked to a private, verified knowledge base.

  • Hallucinations: general-purpose models can hallucinate and make things up. The ReganByte chatbot will only answer from trusted, approved content.

  • Monitoring: general-purpose models offer no monitoring for your specific use case. The ReganByte chatbot comes with an active alert system that flags gaps and unusual questions.

  • After launch: general-purpose models offer no post-launch oversight. The ReganByte chatbot comes with three months of close monitoring and improvement.


In conclusion


Chatbots can be incredibly helpful for patients and charities, but only if they are built with care and watched closely. At ReganByte, we do not just build chatbots. We build partnerships. We stay with you, keep monitoring, and help make sure your users are getting the right information every time.





