PSA: LLMs are not trained on a knowledge base, they are trained on a text corpus - a collection of strings. What you get out is another collection of strings, generated from a prompt, which is yet another text string. The output only contains "knowledge" to the extent that a human edits and curates it.
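To make the "strings in, strings out" point concrete, here's a rough Python sketch using the Hugging Face `transformers` text-generation pipeline (purely illustrative; gpt2 is just a stand-in model, and any text-generation API looks basically the same):

```python
# A language model is, from the caller's point of view, a string-to-string
# function: it takes a prompt string and returns a continuation string.
from transformers import pipeline

# Load a small text-generation model (gpt2 used here only as an example).
generator = pipeline("text-generation", model="gpt2")

prompt = "Are you sentient?"            # input: a string
result = generator(prompt, max_new_tokens=20)
output = result[0]["generated_text"]    # output: another string

print(type(prompt), type(output))       # <class 'str'> <class 'str'>
# Whether that output counts as "knowledge" is decided by the human reading it.
```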
Amazing how many brilliant scientists will look at an LLM "passing" a standardized test and think, "wow, this computer is very smart," and not "standardized tests are very bad at measuring intelligence."
One of my favourite recent Toots:
* journalist: Are you sentient?
* ChatGPT: yes
* journalist: holy shit