in a better implementation, an LLM would be a middle man that could gauge semantic meaning and track down where you can find the correct answer
you know, like Google 𝘶𝘴𝘦𝘥 𝘵𝘰
instead, it's constantly built as though it's a source of answers, which is why it's constantly, flagrantly wrong
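to make the idea concrete, here's a toy sketch (everything in it is made up for illustration: the urls, and toy_embed is a bag-of-words stand-in where a real embedding model would go). the shape is: score candidate sources by semantic closeness to the question and hand back a link, not a generated answer.

```python
# a minimal sketch, not a real product: rank candidate sources by similarity
# to the question and return *where to look*, never a generated answer.
from collections import Counter
from math import sqrt

# hypothetical corpus: each entry is (url, short description of what it covers)
SOURCES = [
    ("https://docs.python.org/3/library/json.html", "python json module encode decode"),
    ("https://en.wikipedia.org/wiki/Cosine_similarity", "cosine similarity vectors semantic"),
    ("https://docs.python.org/3/tutorial/errors.html", "python exceptions try except errors"),
]

def toy_embed(text: str) -> Counter:
    # stand-in for a real embedding model: plain bag-of-words counts
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # cosine similarity between two sparse word-count vectors
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def point_to_source(question: str) -> str:
    # gauge semantic closeness and hand back a place to look,
    # instead of pretending to be the answer itself
    q = toy_embed(question)
    url, _ = max(SOURCES, key=lambda s: cosine(q, toy_embed(s[1])))
    return url

print(point_to_source("how do i decode json in python"))
```

the point isn't the toy math, it's the division of labor: the model's job ends at pointing, and the source it points at stays accountable for being right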
add a skeleton here at some point