Gerard de Melo 9 months ago
LLMs "hallucinate" by making up information instead of revealing their lack of knowledge 🤔
Our #NeurIPS2024 paper adds an [I-Don't-Know] 🤷🏾‍♂️ token to the LLM's vocabulary and fine-tunes the model to predict this token instead of hallucinating (rough sketch of the idea below).
1/2
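
Not the paper's actual implementation, just a minimal sketch of the general idea using Hugging Face transformers: add an [IDK] special token to the vocabulary, resize the embeddings, and fine-tune so that, on questions the model is judged unable to answer, the training target is [IDK] rather than a guessed answer. The base model name, the knows_answer flag, and the make_labels helper are illustrative assumptions, not the paper's setup.

```python
# Sketch only: extend a causal LM with an [IDK] token and fine-tune toward it
# on examples where the model should abstain. Model choice and data handling
# here are placeholders, not the method from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# 1. Add the special token and resize the embedding matrix to cover it.
tokenizer.add_special_tokens({"additional_special_tokens": ["[IDK]"]})
model.resize_token_embeddings(len(tokenizer))
idk_id = tokenizer.convert_tokens_to_ids("[IDK]")

def make_labels(prompt_ids, answer_ids, knows_answer):
    """Build training labels: the gold answer when the model should know it,
    otherwise the [IDK] token. Prompt positions are masked out with -100."""
    target = answer_ids if knows_answer else [idk_id]
    input_ids = prompt_ids + target
    labels = [-100] * len(prompt_ids) + target
    return torch.tensor([input_ids]), torch.tensor([labels])

# 2. One illustrative training step: the loss pushes probability mass
#    toward [IDK] for a question the model cannot answer.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
prompt_ids = tokenizer("Q: Who discovered element 119? A:")["input_ids"]
input_ids, labels = make_labels(prompt_ids, answer_ids=[], knows_answer=False)
loss = model(input_ids=input_ids, labels=labels).loss  # cross-entropy toward [IDK]
loss.backward()
optimizer.step()
```

At inference time, the model can then emit [IDK] as an explicit abstention signal instead of fabricating an answer.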