One tension here is that current-generation LLMs can have a pretty high success rate at predicting what a real answer would be (depending on the topic), so there's undeniably some utility there. But knowing when the model is NOT accurate is increasingly difficult, which means you always need to stay skeptical.
Here's a rough skeleton of what that skepticism can look like in practice: sample the model on the same question several times and treat disagreement between samples as a warning sign (a crude self-consistency check). The query_llm helper below is hypothetical, standing in for whatever client you actually use.
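```python
import random
from collections import Counter

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call; swap in whatever
    client you actually use. The randomness here mimics sampling at
    temperature > 0."""
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def answer_with_agreement(prompt: str, n_samples: int = 5) -> tuple[str, float]:
    """Ask the same question several times and return the majority answer
    plus how often the samples agreed. Low agreement = extra skepticism."""
    answers = [query_llm(prompt) for _ in range(n_samples)]
    majority, count = Counter(answers).most_common(1)[0]
    return majority, count / n_samples

answer, agreement = answer_with_agreement("What is the capital of France?")
if agreement < 0.8:  # arbitrary threshold, tune per topic
    print(f"Low agreement ({agreement:.0%}), verify before trusting: {answer}")
else:
    print(f"Answer (agreement {agreement:.0%}): {answer}")
```

The obvious caveat: this only catches inconsistency. It won't flag an answer the model is consistently and confidently wrong about, so it narrows the problem rather than solving it.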