

It’s a long article, but I’m not sure about the claims. Will we get more efficient computers that work like a brain? I’d say that’s sci-fi. Will we get artificial general intelligence? Current LLMs don’t look like they’re able to fully achieve that. And how would AI continuously learn? That’s an entirely unsolved problem at the scale of LLMs. And if we ask whether computer science is science… why compare it to engineering? I found it much more aligned with maths at university level…
I’m not sure. I didn’t read the entire essay. It sounds to me like it isn’t really based on reality. But LLMs are certainly challenging our definition of intelligence.
Edit: And are the history lessons in the text correct? Why do they say a Turing machine is an imaginary concept (which is correct), then say ENIAC became the first one, but then maybe not? Did we invent binary computation because of reliability issues with vacuum tubes? This is the first time I’ve read that, and I highly doubt it. The entire text just looks like a fever dream to me.
Yes. Plus, the Turing machine has an infinite memory tape to write to and read from. That’s something within the scope of mathematics, but we don’t have any infinite tapes in reality. That’s why we call it a mathematical model and imaginary… and it’s a useful model, but not a real machine. An abacus, by contrast, can actually be built. But an abacus, or a real-world “Turing machine” with a finite tape, doesn’t teach us much about the halting problem and the other important theoretical concepts. It wouldn’t be such a useful model without those imaginary definitions.
(And I don’t really see how someone would confuse that. Knowing what models are, what we use them for, and what maths is, is kind of high-school level science education…)
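To make the distinction concrete, here’s a minimal sketch (my own illustration, not from the article) of how the “infinite” tape works in practice: a simulator can store the tape sparsely and only materialize cells on demand, so a finite computer can simulate any terminating run of the model, even though the model itself assumes an unbounded tape. The example machine (binary increment) and its state names are hypothetical.

```python
# Minimal Turing machine simulator -- a sketch, assuming a standard
# single-tape formulation. The "infinite" tape is a dict that only
# materializes cells on demand (including negative indices), which is
# why a finite computer can simulate any *terminating* run of the model.

BLANK = "_"

# Hypothetical example machine: binary increment.
# State "right" scans to the end of the input; "carry" propagates the +1.
# Each rule maps (state, read symbol) -> (next state, write symbol, head move).
RULES = {
    ("right", "0"):   ("right", "0", +1),
    ("right", "1"):   ("right", "1", +1),
    ("right", BLANK): ("carry", BLANK, -1),
    ("carry", "1"):   ("carry", "0", -1),
    ("carry", "0"):   ("done",  "1", 0),
    ("carry", BLANK): ("done",  "1", 0),
}

def run(tape_str, state="right", max_steps=10_000):
    tape = {i: c for i, c in enumerate(tape_str)}  # sparse "infinite" tape
    head = 0
    # The max_steps bound is the practical concession to the halting
    # problem: in general we cannot decide in advance whether a machine
    # will ever stop, so a real simulator has to cut it off.
    for _ in range(max_steps):
        if state == "done":
            break
        state, symbol, move = RULES[(state, tape.get(head, BLANK))]
        tape[head] = symbol
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, BLANK) for i in cells).strip(BLANK)

print(run("1011"))  # 1011 + 1 = 1100
print(run("111"))   # 111 + 1 = 1000 (the carry spills onto a fresh cell)
```

The second call is the interesting one: the result needs a cell to the left of the original input, and the dict-based tape just grows there. No physical infinite tape required, only the guarantee that a halting computation touches finitely many cells.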