I can’t tell if it’s the author of the article or actually part of the research, but LLMs have NO mapping to “meaning” as implied in the article. No semantics. It’s just a neural-net-driven Markov chain: statistically likely text with no meaning or understanding.
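For what it’s worth, the Markov-chain analogy can be made concrete with a toy bigram sampler: count which word follows which, then emit “statistically likely text” one token at a time. This is a sketch of the analogy only, not of a transformer (which conditions on the whole context window, not just the previous token):

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Count word -> next-word frequencies (a first-order Markov chain)."""
    model = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur][nxt] += 1
    return model

def generate(model, start, length, rng=random):
    """Sample a continuation one word at a time, weighted by frequency."""
    out = [start]
    for _ in range(length):
        nexts = model.get(out[-1])
        if not nexts:
            break  # dead end: this word never had a successor in training
        words, counts = zip(*nexts.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(generate(model, "the", 5, random.Random(0)))
```

The output is locally plausible yet carries no semantics, which is exactly the point being argued, whether or not one accepts that the same verdict applies to LLMs.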