• 0 Posts
  • 27 Comments
Joined 7 days ago
Cake day: June 20th, 2025

  • That’s a very emphatic restatement of your initial claim.

    I can’t help but notice that, for all the fancy formatting, that wall of text doesn’t contain a single line which actually defines the difference between “learning” and “statistical optimization”. It just repeats the claim that they are different without supporting that claim in any way.

    Nothing in there precludes the alternative hypothesis: that human learning is entirely (or almost entirely) an emergent property of “statistical optimization”. Without some definition of what the difference would be, we can’t even theorize a test.
  • Pollution per GDP is a better measure. https://ourworldindata.org/grapher/co2-intensity Pollution per GNP would be even better but I can’t find it.

    Individuals don’t pollute much; it’s mostly industry. Really poor countries often don’t pollute much because they can’t afford to. Sometimes they pollute prodigiously because the only thing they can afford to do is destructive resource extraction. Rich countries can often outsource their pollution to poorer countries.

    China has been making mind-boggling investments in renewables. They have been expanding all their energy sources, but renewables have the lion’s share of the growth.

    They’ve been building roads and all kinds of infrastructure. That’s what the BRI is all about, even if they’re being a bit quieter about saying the phrase. They like to build their long haul roads on elevated columns; not only because it’s less disruptive to wildlife but because it lets them use giant road laying robots to place prefab highway segments.

    They dropped the one-child policy a while back but they’re having some trouble getting people to have more babies. That said, there’s some research that suggests that rural populations around the world are severely undercounted, so they may have a bunch more subsistence farmers than they, or anyone else, realizes.
  • You’re correct that a collection of deterministic elements will produce a deterministic result.

    LLMs produce a probability distribution over next tokens and then randomly select one of them. That’s where the non-determinism enters the system. Even if you set the temperature to 0 you’re going to get some randomness. The GPU can round two different real numbers to the same floating-point representation. When that happens, it’s a hardware-level coin toss on which token gets selected.

    You can test this empirically. Set the temperature to 0 and ask it, “give me a random number”. You’ll rarely get the same number twice in a row, no matter how similar you try to make the starting conditions.
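    You can see the underlying mechanism without a GPU. Floating-point addition isn’t associative, so the same partial sums accumulated in a different order (which a parallel reduction does nondeterministically) can round to different values, and a near-tie between two tokens flips. This is a toy sketch of that idea in plain Python, not a trace of any real model:

    ```python
    def argmax(xs):
        # index of the largest value -- the temperature-0 "pick"
        return max(range(len(xs)), key=lambda i: xs[i])

    # The same three terms, summed in two different orders, round to
    # different floats. A parallel GPU reduction can pick either order.
    order_a = [1e16, 1.0, -1e16]   # the 1.0 is absorbed and lost
    order_b = [1e16, -1e16, 1.0]   # big terms cancel first, 1.0 survives
    print(sum(order_a), sum(order_b))   # 0.0 1.0

    # Feed each result in as the "logit" for token 0, next to a rival
    # token at 0.5, and the argmax flips between the two runs.
    print(argmax([sum(order_a), 0.5]))  # 1
    print(argmax([sum(order_b), 0.5]))  # 0
    ```

    Same math, different rounding, different “deterministic” output.
    
    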


  • You may be correct but we don’t really know how humans learn.

    There’s a ton of research on it and a lot of theories but no clear answers.
    There’s general agreement that the brain is a bunch of neurons; there are no convincing ideas on how consciousness arises from that mass of neurons.
    The brain also has a bunch of chemicals that affect neural processing; there are no convincing ideas on how that gets you consciousness either.

    We modeled perceptrons after neurons and we’ve been working to make them more like neurons. Neurons don’t have any obvious capabilities that perceptrons don’t have.
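    For anyone who hasn’t seen one, a perceptron is just a weighted sum and a threshold, with a simple error-driven update. This is the classic textbook version (the weights, learning rate, and AND-gate task here are illustrative, not from any particular paper):

    ```python
    def perceptron_step(weights, bias, x, target, lr=0.1):
        # weighted sum plus a hard threshold -- an idealized neuron "firing"
        activation = sum(w * xi for w, xi in zip(weights, x)) + bias
        output = 1 if activation > 0 else 0
        # classic perceptron rule: nudge weights in proportion to the error
        error = target - output
        new_w = [w + lr * error * xi for w, xi in zip(weights, x)]
        return new_w, bias + lr * error

    # Teach it logical AND over a few passes through the data
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, b = [0.0, 0.0], 0.0
    for _ in range(10):
        for x, t in data:
            w, b = perceptron_step(w, b, x, t)

    preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
             for x, _ in data]
    print(preds)  # [0, 0, 0, 1]
    ```

    That whole loop is unambiguously “statistical optimization”, which is part of why the learning-vs-optimization line is so hard to draw.
    
    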

    That’s the big problem with any claim that “AI doesn’t do X like a person”; since we don’t know how people do it we can neither verify nor refute that claim.

    There’s more to AI than just being non-deterministic. Anything that’s too deterministic definitely isn’t an intelligence though, natural or artificial. Video compression algorithms are definitely very far removed from AI.


  • That’s a reasonable critique.

    The point is that it’s trivial to come up with new words. Put that same prompt into a bunch of different LLMs and you’ll get a bunch of different words. Some of them may already exist somewhere; others won’t exist anywhere. The rules for combining word parts are so simple that children play them as games.

    The LLM doesn’t actually even recognize “words”; it recognizes tokens, which are typically parts of words. It usually avoids random combinations of those, but you can easily get it to do so if you want.
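    A toy version of the idea: real LLMs use learned BPE or unigram vocabularies with tens of thousands of entries, but a greedy longest-match over a tiny made-up subword vocabulary shows how a word nobody has ever written still tokenizes cleanly into familiar pieces:

    ```python
    # Made-up mini-vocabulary; real tokenizer vocabs are learned from data.
    VOCAB = {"un", "break", "able", "ish"}

    def tokenize(word, vocab):
        tokens, i = [], 0
        while i < len(word):
            # take the longest vocab entry matching at position i
            for j in range(len(word), i, -1):
                if word[i:j] in vocab:
                    tokens.append(word[i:j])
                    i = j
                    break
            else:
                # unknown character: fall back to a single-char token
                tokens.append(word[i])
                i += 1
        return tokens

    # A word that's (probably) in no dictionary still splits into known parts:
    print(tokenize("unbreakableish", VOCAB))  # ['un', 'break', 'able', 'ish']
    ```

    From the model’s side there’s nothing special about the “new” word; it’s just another sequence of tokens it has seen many times.
    
    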