AI-generated quotes in a story about an AI clanker writing a blog post about a human developer because they didn’t accept its code contributions.
How deep can someone go here.
Everything you said is right, but you’re only proving that LLM weights are a severely simplified version of neurons. That neither proves they lack consciousness nor shows that being a mathematical model precludes consciousness at all.
In my opinion, the current models don’t express any consciousness, but I’m against denying it on the grounds that they are mathematical models rather than on the basis of results we can measure. The fact that we can’t theoretically prove consciousness in the human brain also means we can’t theoretically disprove it in an LLM. They aren’t conscious because they haven’t expressed enough to be considered conscious, and that’s the extent of what we should claim to know.
You can’t prove all ravens are black. The discovery of even one white raven would disprove the “fact” that all ravens are black, and we can by no means be sure that we gathered all ravens to test the theory.
However, we can look around and comment that there doesn’t appear to be any white ravens anywhere…
Do you know about the ‘bouba’ and ‘kiki’ study? People made up words that don’t exist in English and asked whether round objects are more bouba or kiki. AI can’t answer this question, not without being fed how to. Toddlers can answer it. It comes down to how it consumes information, and whether there’s a pattern… When asked to define words it had rarely been fed, e.g. usernames people had made up, the AI’s apparent consciousness breaks down. As soon as one word isn’t likely to be followed by another, the machine breaks, and no one would pretend it has consciousness after that.
Learning models are just pattern-recognition machines. LLMs are the kind that mix and match words really well. This makes them seem intelligent, but it just means they can express language and information in a way we understand. Consciousness gets into “what is the soul” territory, so I’m staying away from it. The best I can say of AI is that it’s interesting that language appears to be a system constructed well enough that we can teach it to machines, and even more so that we anthropomorphise models when they do it well.
AI doesn’t have memory, it can’t think for itself (it only references what it has consumed), and it can’t teach itself new tricks. All of these are experimental research areas for AI, and all of them bear on consciousness. It’s just very good at sentence generation.
I don’t know what you’re even arguing. Your analogy breaks down because in this case we can’t even see whether the raven is black. No one can theoretically prove consciousness. The rest of your comments seem to argue that current AI has no consciousness, which is exactly what I said, so I guess this is just an attempt at supporting my point?