Commercial concerns aside, I’ve been wondering who should be held responsible when chatbots emit disinformation and defamation. If users blindly trust these LLMs (and they increasingly do), how do we go about correcting errant output from an already-trained model?