Interesting. I think I've noticed this as well. I often have to start a new chat if the LLM hallucinates something; otherwise it integrates the hallucination into the context and bases further output on it.
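A minimal sketch of that practice, in plain Python: instead of appending a suspect reply to the running conversation, roll the history back to the last clean state, which is effectively "starting a new chat" from that point. Both `call_model` and `looks_hallucinated` are hypothetical stand-ins here, not any particular library's API.

```python
from typing import Callable

Message = dict[str, str]  # e.g. {"role": "user", "content": "..."}


def chat_without_contamination(
    history: list[Message],
    user_input: str,
    call_model: Callable[[list[Message]], str],        # hypothetical chat API
    looks_hallucinated: Callable[[str], bool],          # hypothetical checker
) -> list[Message]:
    """Run one turn; keep the reply only if it passes the hallucination check."""
    candidate = history + [{"role": "user", "content": user_input}]
    reply = call_model(candidate)
    if looks_hallucinated(reply):
        # Discard the whole turn so the bad output never enters the context
        # and can't be built upon in later turns.
        return history
    return candidate + [{"role": "assistant", "content": reply}]
```

The key design point is that the history is treated as immutable input and a new list is returned, so a rejected turn leaves no trace in the context.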
add a skeleton here at some point