
Anthropic Maps AI Model 'Thought' Processes (fortune.com)
Anthropic researchers have developed a breakthrough "cross-layer transcoder" (CLT) that functions like an fMRI for large language models, mapping how they process information internally. Testing on Claude 3.5 Haiku, researchers discovered the model performs longer-range planning for specific tasks -- such as selecting rhyming words before constructing poem sentences -- and processes multilingual concepts in a shared neural space before converting outputs to specific languages.
The team also confirmed that LLMs can fabricate reasoning chains, either to please users with incorrect hints or to justify answers they derived instantly. The CLT identifies interpretable feature sets rather than individual neurons, allowing researchers to trace entire reasoning processes through network layers.
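The summary's core idea, that a transcoder maps raw layer activations onto a larger set of sparse, more interpretable features rather than individual neurons, can be illustrated with a toy sketch. This is not Anthropic's actual CLT; the weights below are random stand-ins for trained parameters, and the sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL, N_FEATURES = 16, 64  # toy sizes; real models are far larger

# Random weights stand in for *trained* transcoder parameters.
W_enc = rng.normal(0, 0.1, size=(D_MODEL, N_FEATURES))
b_enc = rng.normal(0, 0.01, size=N_FEATURES)
W_dec = rng.normal(0, 0.1, size=(N_FEATURES, D_MODEL))

def transcode(activation):
    """Encode a layer activation into sparse feature activations (ReLU
    encoder), then reconstruct a downstream activation from them."""
    features = np.maximum(0.0, activation @ W_enc + b_enc)  # sparse codes
    reconstruction = features @ W_dec
    return features, reconstruction

activation = rng.normal(size=D_MODEL)
features, recon = transcode(activation)

# The interpretable objects are the features: inspect the top active ones.
top = np.argsort(features)[::-1][:5]
print("top feature indices:", top)
print("active features:", int((features > 0).sum()), "of", N_FEATURES)
```

In a trained transcoder each feature index would correspond to a human-recognizable concept, so chaining these reads across layers is what lets researchers trace a reasoning path through the network.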
AI is no substitute for domain knowledge. (Score:5, Insightful)
If you don't have the ability to vet the thought process of the AI, you're incapable of telling whether you're being dazzled by brilliance, or baffled by bullshit. That's the fundamental problem of AI, even when the Chain of Thought is fully exposed.
Re: (Score:2)
Re: (Score:2)
So, just like with human output, then? Do we have the ability to vet the thought process of human intelligence? Understanding the process allows for some predictability, but we should be able to distinguish between brilliance and bullshit regardless of process or origin. We sort of do, though our mileage varies greatly. I don't see AI as fundamentally different.
Re: (Score:2)
see? that's bullshit. https://i.imgflip.com/2zkm0n.j... [imgflip.com]
thought processes are overrated (Score:1)
Take the smartest person on the internet [slashdot.org]
If you told him "AI claims Haitians are eating all the cats,"
there would be a thought process and an evaluation of evidence.
But if you told him "Trump says Haitians are eating all the cats,"
he would believe it unquestioningly.
Source is more important than the "thinking" if you're a MAGA.
Re: (Score:2)
If you don't have the ability to vet the thought process of the AI, you're incapable of telling whether you're being dazzled by brilliance, or baffled by bullshit. That's the fundamental problem of AI, even when the Chain of Thought is fully exposed.
Agreed—domain knowledge remains essential. No one is arguing that AI should replace human expertise. But I think your conclusion jumps too quickly from “we can’t fully vet the thought process” to “AI isn’t a useful collaborator.”
What's changed is that we're no longer operating entirely in the dark. Anthropic's development of a cross-layer transcoder [transformer-circuits.pub], essentially an fMRI for language models, lets researchers trace reasoning circuits through the network's layers.
Re: (Score:2)
AI is no substitute for knowledge. If you want an encyclopedia, use one. If you want a web search use one.
If you want to process data, AI gets interesting. The integrated systems that make sense combine, for example, web search, fetching page contents, and AI to produce a result page that is not only correct but also attributes each piece of information to its source. That works because it doesn't rely on the model having some vague idea of the topic; it loads the actual information into the model context before generating an answer.
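The pipeline described above can be sketched in a few lines. This is a minimal illustration, not a real API: the fetch step and the model call are stubbed out, and all names and URLs are made up. The point is the structure: fetched sources are numbered into the context so each claim can carry a citation back to its origin.

```python
def fetch_results(query):
    # Stub for web search + page fetch; returns (source_url, text) pairs.
    # A real system would hit a search API and download the pages.
    return [
        ("https://example.org/a", "Widgets were introduced in 1994."),
        ("https://example.org/b", "Widgets are made of aluminium."),
    ]

def build_context(results):
    # Number each source so the model can cite [1], [2], ... in its answer.
    lines = []
    for i, (url, text) in enumerate(results, start=1):
        lines.append(f"[{i}] ({url}) {text}")
    return "\n".join(lines)

def answer(query):
    results = fetch_results(query)
    context = build_context(results)
    # A real system would pass `context` plus the query to the model here;
    # we just return the attributed context to show the structure.
    return context

print(answer("what are widgets?"))
```

The key design choice is that the model only ever sees text that arrived through the fetch step, so every statement in the output is traceable to a numbered source rather than to whatever the model happens to "remember."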
Nope, Just More Hype (Score:3)