Strategies to Solve LLM Hallucinations

By Cheuk Ting Ho

Elevator Pitch

Hallucinations have been a major pain point in products that use LLMs. Users expect highly accurate answers and rely increasingly on LLMs and AI assistants, so hallucinations can destroy users’ trust and lead to real-life consequences. Solving this problem has been a high-priority task for AI researchers.

Description

In this talk, the speaker will explain, in technical terms, what causes LLMs to hallucinate. From there, the speaker will introduce a few strategies to minimize hallucinations and show examples of designs and use cases.

Topics covered

  • what causes AI hallucinations
  • how to avoid hallucinations by working inside the model
  • how to avoid hallucinations by working outside of the model
  • how to avoid hallucinations as a user

Goal

To educate the audience about LLM hallucinations and to explore methods to minimize them.

Target Audience

Engineers who integrate LLMs into their products.