Gerard de Melo
LLMs "hallucinate" by making up information instead of admitting their lack of knowledge 🤔
Our #NeurIPS2024 paper adds an [I-Don't-Know] 🤷🏾‍♂️ token to the LLM's vocabulary and fine-tunes the model to predict this token instead of hallucinating.
1/2
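
For a concrete picture of the first step, here is a minimal sketch of how an extra special token could be wired into a standard Hugging Face setup. The base model ("gpt2"), the token string "[IDK]", and the relabeling comment are illustrative assumptions, not details taken from the paper.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Base model is illustrative; the paper's actual model may differ.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Register a new [IDK] special token and grow the embedding matrix to match
# the enlarged vocabulary.
tokenizer.add_special_tokens({"additional_special_tokens": ["[IDK]"]})
model.resize_token_embeddings(len(tokenizer))
idk_id = tokenizer.convert_tokens_to_ids("[IDK]")

# During fine-tuning, answers the model cannot support would be relabeled so
# the supervised target is the [IDK] token rather than a fabricated answer.
# (The exact data construction and loss follow the paper and are not shown here.)
```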