

“What we are building now are things that take in words and predict the next most likely word…”
This is a gross oversimplification and doesn't reflect the current understanding of how the most advanced LLMs work. Anthropic recently published research showing that Claude "sometimes thinks in a conceptual space" and will "plan what it says many words ahead".
That doesn't seem as different from human intelligence as the summary suggests.
https://www.anthropic.com/news/tracing-thoughts-language-model