Strategies to Solve LLM Hallucinations

By Cheuk Ting Ho

Elevator Pitch

Hallucinations are a major pain point in products that use LLMs. Users expect accurate answers and increasingly rely on LLMs and AI assistants, so hallucinations erode their trust and can have real-world consequences. Mitigating them has become a high-priority task for AI researchers.

Description

In this talk, the speaker will explain, in technical terms, what causes LLMs to hallucinate or produce misinformation. From there, the speaker will introduce several strategies to minimize the impact of hallucinations and show examples of how to mitigate them.

Topics covered

  • what causes AI hallucinations
  • how to avoid hallucinations by working inside the model
  • how to avoid hallucinations by working outside the model (a sketch of one such technique follows this list)
  • how to avoid hallucinations as a user
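
One example of an "outside the model" strategy is grounding answers in retrieved documents (retrieval-augmented generation). The minimal Python sketch below is illustrative only: the document list, the naive keyword-overlap retriever, and the call_llm stub are assumptions for demonstration, not the talk's actual material.

    # Sketch: reduce hallucination by grounding the model in retrieved context.
    # The documents, retriever, and call_llm stub are hypothetical placeholders.

    DOCUMENTS = [
        "The Eiffel Tower is located in Paris and was completed in 1889.",
        "Python 3.12 was released in October 2023.",
        "The Great Wall of China is over 21,000 km long.",
    ]

    def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
        """Rank documents by naive keyword overlap with the question."""
        q_words = set(question.lower().split())
        scored = sorted(
            docs,
            key=lambda d: len(q_words & set(d.lower().split())),
            reverse=True,
        )
        return scored[:top_k]

    def build_grounded_prompt(question: str, context: list[str]) -> str:
        """Constrain the model to the retrieved context to curb hallucination."""
        context_block = "\n".join(f"- {c}" for c in context)
        return (
            "Answer the question using ONLY the context below. "
            "If the context does not contain the answer, say \"I don't know.\"\n\n"
            f"Context:\n{context_block}\n\nQuestion: {question}\nAnswer:"
        )

    def call_llm(prompt: str) -> str:
        """Placeholder for a real LLM call (e.g. via an API client)."""
        return "<model response>"

    if __name__ == "__main__":
        question = "When was the Eiffel Tower completed?"
        context = retrieve(question, DOCUMENTS)
        print(call_llm(build_grounded_prompt(question, context)))

The key design choice is that the prompt explicitly instructs the model to rely only on the supplied context and to admit when the answer is missing, rather than letting it fall back on its parametric knowledge.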

Goal

To educate the audience about LLM hallucinations and to explore methods to minimize them.

Target Audiences

Engineers who integrate LLMs into their products and researchers who care about the topic.