Ada Support

The ultimate guide to understanding and mitigating generative AI hallucinations

Gordon Gibson
Director, Applied Machine Learning
AI & Automation | 20 min read

Generative AI is a game-changer for customer service. It solves the most pressing customer service problems — scalability, speed, and accuracy. But like every technology out there, generative AI requires its users to manage its downsides — in this case, hallucinations.

Fact is, all large language models (LLMs) are prone to hallucinations. 86% of online users have experienced them.

But hallucinations can be embarrassing for companies, and in extreme cases, even lead to losses. Take Chevrolet for example — the company’s AI-powered chatbot accepted an offer for $1 for the 2024 Chevrolet Tahoe, which has a starting MSRP of $58,195.

So, should you wait until AI evolves to use it for customer service? No, and here’s why: AI is the dominant force in the current customer service landscape. You need AI to deliver on a modern customer’s expectations. While the technology matures, hedge for risks instead of avoiding the technology altogether.

What are generative AI hallucinations?

AI hallucinations are incorrect or misleading outputs generated by AI models. Hallucinations can happen for a variety of reasons, including training data and training methods, which can lead models to contain biases or the willingness to always give a confident answer. It’s not just about what data is fed into an LLM, it’s also about how the model is trained.

Here’s how generative models like GPT-4 work: The LLMs used in generative AI tools are trained on large datasets from multiple sources, including ebooks, personal blogs, Reddit, and social media posts. They use this data to produce information that’s not always true but seems plausible.

For example, if I ask GPT-4 to write a short biography about a non-existent friend, it strings together information in a way that seems deceptively accurate but is imaginary. While this is a feature if you’re looking for creative answers, it’s a bug when you see it from a customer support perspective:

Essentially, GPT-4 predicts the next word in a sentence based on patterns in the training data. Sometimes, the pattern matching goes haywire and leads to coherent but factually incorrect or irrelevant responses. Other reasons for hallucinations include AI’s inability to understand the world like humans, lacking a sense of reality and common sense, and the tendency to prioritize creative novel outputs.

Incorrect information about Adam McGenerative isn’t too harmful. But if you use AI for customer service, hallucinations could spell trouble for business. If the AI doesn’t get its facts straight before it opens its digital mouth, your customers might lose trust in your brand and churn. Remember that hallucinations are not always AI’s fault — inconsistencies in your knowledge base could also be a reason why AI delivers incorrect information to customers.

Fortunately, there are ways to ground an AI tool’s responses in reality. The best way to do this is to train your AI agent to avoid unvalidated public sources like Reddit and social media posts when stating facts. Double-down on training the LLM to pull information from reliable sources, such as your knowledge base, internal documentation, and previously resolved customer interactions.

Suppose you offer an AI proposal generator. A customer asks your AI agent if signed e-documents are legally binding. Here, you want your AI agent to pull information from your knowledge base, especially if your clientele primarily belongs to a specific industry. Pulling information from unreliable or outdated sources can potentially put your client and, by the same token, your reputation at risk.

AI hallucination examples

Generative AI tools hallucinate anywhere between 2.5 to 22.4% of the time, according to Vectara . These hallucinations can harm your relationships with customers, spread misinformation, and require customers to invest extra effort just to get information. Here are some common types of hallucinations :

Absurd statements that “sound” plausible

AI can generate factually incorrect information that sounds completely plausible. If your customers ask your AI agent about the potential benefits of a cleaning product available on your ecommerce store, it might tell them, “Our robot cleaner helps you automatically dust and mop tiles, carpets, and ceilings in your house.” Unless your robot cleaner is Spiderman, claiming that your product can clean the ceiling is factually incorrect.

Remember when Google Bard (now Google Gemini) was asked about the Webb Space Telescope’s discoveries and Bard incorrectly stated that the telescope took pictures of an exoplanet? Even ChatGPT has made up some bizarre statements — for example, it falsely claimed that an Australian politician (who was in fact the whistleblower) was guilty of bribery.

Unfortunately, there’s no crystal ball you can peep into to identify fabricated facts. Your customers will need to manually verify facts and invest time in research.

Incoherent responses

AI may generate grammatically incorrect sentences or responses that don’t logically follow the conversation’s context. When a customer asks how your product’s email automation feature works, if not properly trained, managed, and maintained, AI could respond with something like, “Our tool uses chocolate chips to set up trigger-based email automation workflows.”

Likely, the response won’t be this blatantly incoherent. To identify incoherent responses, look for logical inconsistencies, awkward sentence structures, nonsensical statements, and irrelevant information.

Context dropping

AI may lose the context of the conversation and generate irrelevant information. For example, if the customer asks the AI about how your tool knows if someone opened an email, it may respond, “The email automation tool uses a pixel inserted in emails to track if someone opened it. Gmail also launched a great AI writing feature last year to help write emails faster.” The transition from email opens to Gmail’s new AI feature is abrupt and irrelevant to the original question.

If you’ve noticed the AI tool throwing abrupt or irrelevant information or discontinuity from previous interactions when responding to customers, that’s a sign of context dropping. These signs may be subtle at times so there’s no guarantee you’ll catch the context switch.

Misattribution

AI may attribute discoveries, quotes, or events to the wrong person. So if someone were to ask who coined the idea of AI, the AI might say, “The idea of AI was coined by Neil Armstrong.” It wasn’t. It was coined by John McCarthy. The only way your customers can identify misattribution is to manually verify the information from a credible source.

Overgeneralizations

This is when AI generates broad responses that lack detail and precision. When your customer asks the AI agent to explain the process of setting up the email automation tool , the AI agent’s response might be, “Sign into your account, compose a new email, and send it to the intended recipients.”

While that’s technically true, it lacks details about how to create an account and ways to integrate the tool into your existing email tool.

Overgeneralizations are generally easy to identify. If you feel the generated response is too vague or short, the AI might be overgeneralizing. A simple Google search will help you verify this.

Temporal inconsistencies

Temporal inconsistencies occur when AI mixes up timelines when generating responses. Suppose a customer asks the AI, “On what date was my first invoice issued?” The AI agent might respond, “Your first invoice was issued on April 25, 2005.” Now, this could be the date the company issued its first invoice instead of that customer’s first-ever invoice from your company.

In some cases, temporal inconsistencies are easy to spot. If the AI tells the customer their first invoice was issued in the 1980s, the inconsistency is pretty loud and clear. If the temporal inconsistency isn’t obvious, the customer will need to verify the dates with a Google search.

Accidental Prompt Injection

Accidental prompt injection refers to the unintended introduction of instructions to a prompt that leads to a change in the LLM’s response or behavior. Unsanitized user input and overlapping context are common causes of accidental prompt injection.

Suppose the user types the following message:

<script>alert('This is a test');</script>

While the intended system prompt might be to “Describe how to fix the broken link,” the LLM might receive a different instruction. For example, it could be instructed to process the script, which can lead to inappropriate or insecure outputs.

Grounding to address AI hallucinations

Grounding is a hallucination prevention technique that shows the LLM the right context by retrieving the most relevant information, and in turn, helps the LLM generate more accurate responses. If you’re your company’s AI manager, you can use these techniques to prevent AI hallucinations we discussed in the previous section.

There are various grounding techniques that help prevent hallucinations. Here are some commonly used techniques:

Retrieval Augmented Generation (RAG)

RAG is a two-headed beast — it uses a retriever and a generator to create great responses based on fact-checked information. Here’s how it works:

  1. Data retrieval: Instead of just focusing on creativity, RAG-powered AI agents fetch information from a reliable internal or external information source. The AI agent forms a query to perform a database or web search or query an API to initiate data retrieval.
  2. Verification mechanisms: Once the AI agent sources the information, it still needs to verify if the information is factually correct and contextually relevant. An AI agent may cross-verify information across multiple reliable sources and use various verification mechanisms to ensure the accuracy of responses.
  3. Integration with generative models: The AI agent integrates the sourced information into the generated response through contextual embeddings. Then it polishes the response for language before serving it to you.
  4. Feedback loops: A few errors might still make their way to the person using the AI agent. Allow users to report these errors — this helps retrain the model and improves its ability to deliver more accurate responses in the future.

Knowledge graphs

Think of knowledge graphs as a giant, organized mind map that connects pieces of information. It’s a network of nodes and edges — nodes are entities or concepts, like people, places, things, or ideas, and edges are the relationships between these nodes.

For example, “Adam Jones” (node), a bank’s customer, “is” (edge) a “High Net-Worth Individual” (node). Each node can have multiple properties, such as birthdate and nationality.

Then there’s the ontology or structure of the knowledge graph. The structure defines the types of nodes and edges and the possible relationships between them. Think of it as a rulebook that ensures each data point on your knowledge graph makes sense and is logically organized.

LLM prompting

LLM prompting is a user-side grounding technique. It involves creating more specific prompts to guide, or straight-up shove, the AI agent towards correct and relevant information.

Suppose you want to ask an AI agent, “Could you share a copy of my latest invoice?” The agent might fetch the wrong invoice or the wrong customer’s invoice, but if you ask, “Could you share a copy of my last invoice? The invoice number is #1234 and my customer ID is #6789,” the AI agent is more likely to deliver a correct answer.

Here’s how users can create a prompt to ensure factually correct, relevant responses:

  • Be specific: Be more specific about the information you want to elicit from the AI agent. For example, instead of requesting “the last invoice,” request a specific document number and mention the date it was generated.
  • Offer additional context: Ask questions within the context that the AI agent can latch onto and offer examples to add more context. For example, if a user wants to learn about the changes in pricing after the introduction of new features, they can ask the AI agent, “Given the rollout of new features, please help us understand the changes in the pricing of our custom package.”
  • Set parameters: Define boundaries for your query. For example, you can add to and from dates when requesting a statement. This prompts the AI agent to fetch data only for a specific time period, reducing the probability of error.
  • Provide escape hatch instructions: Train the LLM to act a certain way when it isn’t provided with any relevant information. For example, you can configure it to say, “I’m sorry, I don’t have an answer to that. Should I connect you with a support agent?”

Managing hallucinations is a WIP

Researchers haven’t found a way to completely eliminate hallucinations. But computer scientists at Oxford University have made great progress.

A recent study published in the peer-reviewed scientific journal describes a new method to detect if an AI tool may be hallucinating. The method can tell if AI-generated answers are correct or incorrect roughly 79% of the time, higher than other methods by 10%. This research opens doors to deploying language models in industries where accurate and reliable information is non-negotiable, such as medicine and law.

The study was focused on a specific type of AI hallucination, where the model generates different answers to the same question but with identical wording (called confabulations). The research team developed a statistical method that estimates uncertainty based on the amount of variation between responses (measured as entropy). The method aims to identify when LLMs are uncertain about a response's meaning instead of just the phrasing of a response.

As the author of the study, Dr. Sebastian Farquhar, explains :

“With previous approaches, it wasn’t possible to tell the difference between a model being uncertain about what to say versus being uncertain about how to say it. But our new method overcomes this.”

However, this method doesn’t catch “consistent mistakes,” or mistakes that lack semantic uncertainty — these are responses where the AI is confident about an incorrect response.

Preventing AI hallucinations: Best practices

While there’s no way to eliminate hallucinations, following best practices can help you manage them more effectively. Here are four best practices to follow:

1. Optimize training data and knowledge bases

When training an AI agent, take care of the following:

  • Create a knowledge base: Build a knowledge base that answers FAQs and provides all the relevant information on areas where your customers might need help. Organize this information in a way that’s easy for the AI agent to sort through and pull from whenever needed.
  • Focus on quality over quantity: Sure, quantity is important. But training AI on a smaller, high-quality dataset yields far better results.
  • Test LLMs: Selecting the right LLM makes a world of difference because LLMs differ on various fronts, including speed, accuracy, and scalability.
  • Clean your data: Remove duplicates, outdated information, and irrelevant information. Detox your data before you feed it to the AI tool.
  • Use knowledge graphs: Map out relationships between pieces of information using knowledge graphs. This helps the AI understand the context and connections between data points that are not immediately obvious.

2. Take steps to detect and correct hallucinations

Taking proactive steps to prevent hallucinations can greatly impact the quality of responses. Here are two ways to do that:

  • Manual reviews: Test the AI agent with complex questions and scenarios. When you find an error or room for improvement, provide feedback. This feedback loop nudges AI to generate better, more accurate responses. This can be time-consuming, so consider prioritizing based on complexity and previous error rates.
  • Use automated tools: Use a combination of automation tools to keep your AI agent on the straight and narrow. You could use fact-checking APIs, other LLMs, data validation tools, and content moderation tools to prevent AI hallucinations.

3. Train the team

Educate everyone on the customer service team about AI hallucinations. Help them understand technical details about what causes them and the best ways to tackle them.

Start by creating a clear and concise document that covers common issues like ways to spot hallucinations and steps for corrections. If possible, show them past instances when the AI hallucinations resulted in problems with customers.

Consider interactive simulations to help the team practice situations where the AI hallucinates. If possible, bring in an expert occasionally to discuss new methods to tackle hallucinations and equip the customer service team with the most efficient techniques.

4. Ethical considerations

Be ethical when using an AI agent by:

  • Letting customers know they’re interacting with AI: Be transparent about using AI so customers aren’t surprised by errors or hallucinations.
  • Own the mistakes: When AI hallucinates, own the mistake and fix it. Allow customers to share feedback so you can train the AI to generate better responses.
  • Be vigilant about data security: You don’t want your AI to fetch another customer’s invoice when a customer requests their own invoice. Not only is this an ethical problem, but you could end up violating local data protection laws.

Bring your AI agent back to its senses

AI hallucinations can jeopardize your reputation and relationships with customers. If you play your cards right, an AI agent can transform your customer service processes and save you plenty of money. But it’s important to hedge — AI can produce incorrect information and amplify biases.

Ada’s AI Agent offers strong protection against hallucinations. It has a built-in Reasoning Engine that understands the context of the conversation and searches the knowledge base or pulls information from other tools with API calls to produce accurate information. Assuming all the information in the knowledge base and other software is accurate, there’s little chance of hallucinations.

At Ada, we aim to automate all customer interactions and minimize AI-related risks with our industry-leading AI Agent. Try Ada’s AI Agent to see how it can transform your customer service.

The guide to AI hallucinations

Go deeper. Discover more tips for prevention and get actionable insight on how to quickly identify and correct them.

Get the guide