I like this and will add one point: LLM outputs must always be verified. Follow the sources, double-check claims, and scrutinize any cited data.
LLMs are a useful information bootstrap, but their fundamental limitations mean they can fabricate or distort facts drawn from their training data.