• 0 Posts
  • 28 Comments
Joined 2 years ago
Cake day: July 16th, 2023


  • Because no matter how harmful he may have been in life, his death is probably more harmful.

    We have enough problems without tit-for-tat assassinations of anyone that anyone else dislikes.

    The Luigi assassination didn’t come out so badly because there wasn’t a strong political back-and-forth (there was some, but he wasn’t really a political or public figure, just an arsehole CEO, and didn’t make a great wedge issue). This one is much more dangerous. It probably would have been better if he’d continued his harmful speeches from a limited platform than become an excuse for so many “justified” attacks on the left. And that’s assuming it stops here and doesn’t escalate further.



  • scratchee@feddit.uk to Technology@lemmy.world • What If There’s No AGI? (+4/-1) · 17 days ago

    Modern LLMs were a left-field development.

    Most AI research has serious and obvious scaling problems: an approach does well at first, but scaling up the training doesn’t significantly improve the results. LLMs went from more of the same to a gold rush the day it was revealed that they scaled “well” (relatively speaking). They then went through orders-of-magnitude improvements very quickly, simply because they could, unlike previous training approaches, which wouldn’t have benefited from that kind of scale.
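
    To make “scaled well” concrete, here’s a toy power-law curve; the shape follows what the scaling-law papers describe, but the constants are invented for illustration, not fitted to any real model:

    ```python
    # Toy illustration of power-law scaling: loss falls smoothly as a power of
    # compute, so each 10x of compute buys a predictable improvement.
    # All constants here are made up for illustration, not fitted to anything.

    def toy_loss(compute: float, a: float = 10.0, alpha: float = 0.1, floor: float = 1.5) -> float:
        """Hypothetical loss = a * compute**(-alpha) + floor (power law plus an irreducible floor)."""
        return a * compute ** -alpha + floor

    for exponent in range(3, 10):  # 1e3 .. 1e9 made-up "units" of compute
        c = 10.0 ** exponent
        print(f"compute=1e{exponent}: loss={toy_loss(c):.3f}")
    ```

    Each extra order of magnitude keeps shaving the loss down instead of flatlining; that’s the property that kicked off the gold rush.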

    We’ve had chatbots for decades, but with the same low capability ceiling that most other old techniques had; they really were a different beast from modern LLMs with their stupidly excessive training regimes.



  • Same logic would suggest we’d never compete with an eyeball, but we went from ten-minute photographic exposures to outperforming most of the eye’s abilities with cheap consumer hardware in little more than a century.

    And the eye is almost as crucial to survival as the brain.

    That said, I do agree it seems likely we’ll borrow from biology to solve the computing problem. Brains have very impressive parallelism despite how terrible the design of neurons is, so if we could grow a brain in the lab, that would be very useful indeed. It would be more useful still if we could skip the chemical messaging somehow and get signals around at a speed that isn’t embarrassingly slow; then we’d be way ahead of biology in the hardware performance game and would have a real chance of coming up with something like AGI, even without the level of problem-solving that billions of years of evolution can provide.


  • Oh sure, the current AI craze is just a hype train based on one seemingly effective trick.

    We have outperformed biology in a number of areas and cannot compete in a number of others (yet), so at the moment I see it as a bit of a wash whether we’re better engineers than nature or worse.

    The brain looks to be a tricky thing to compete with, but it has some really big limitations we don’t need to deal with (chemical neuron messaging really sucks by most measures).

    So yeah, I’m not saying we’ll do AGI in the next few decades (and not with just LLMs, for sure), but I’d be surprised if we don’t figure something out once we get computers a couple of orders of magnitude faster, so that more than a handful of companies can afford to experiment.


  • scratchee@feddit.uk to Technology@lemmy.world • What If There’s No AGI? (+12/-2) · 19 days ago

    Possible, but seems unlikely.

    Evolution managed it, and evolution isn’t as smart as us; it just got many, many chances to guess right.

    If we can’t figure it out ourselves, we can find a way to get lucky like evolution did. It’ll be expensive and may need a more efficient computing platform (cheap brain-scale computers, so we can make millions of attempts quickly).
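
    A minimal sketch of that “many chances to guess right” idea, blind mutation plus selection on a toy problem (the fitness function and every parameter here are invented purely for illustration):

    ```python
    import random

    # Toy "evolution": improve a genome by blind mutation plus selection.
    # No gradients, no insight; just lots of cheap guesses, kept when they help.

    def fitness(genome: list[float]) -> float:
        # Hypothetical target: score is higher the closer the genome is to all-ones.
        return -sum((g - 1.0) ** 2 for g in genome)

    genome = [0.0] * 8
    for generation in range(10_000):
        child = [g + random.gauss(0, 0.05) for g in genome]  # random mutation
        if fitness(child) > fitness(genome):                 # selection
            genome = child

    print(f"fitness after 10,000 blind guesses: {fitness(genome):.4f}")
    ```

    Dumb as it is, given enough attempts this reliably climbs toward the target, which is the whole bet: enough cheap attempts can substitute for cleverness.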

    So yeah. My money is that we’ll figure it out sooner or later.

    Whether we’ll be smart enough to make it do what we want and not turn us all into paperclips or something is another question.




  • And Android users are not obligated to give a good review after not receiving support.

    I have no problem with his actions (if he doesn’t have the resources, energy, or time to support every platform, who can complain about that?), but I don’t think he’s very good at the communicating-with-other-humans part of software, which in the OSS world sadly tends to fall on the same devs who do the work. He could have avoided both this comment thread and the angry Android user above with zero extra effort simply by phrasing things better.

    The particular poor phrasing he chose suggests to me that he’s lumping all users of each platform together in his head, with each negative interaction building on the previous one. That isn’t the healthiest attitude, and it does indeed make him look like an arsehole to anyone who’s just turned up and hasn’t yet done anything wrong.






  • Neither of you is talking nonsense. The US clearly has a set of problems that combine to cause its massive problem with mass shootings.

    Their limited gun control is a contributing factor, but not the only one. Other countries have weak gun laws without nearly the same problems, and the US didn’t have the same problems in the past; they’ve grown worse over time, and at this point the very concept of mass shootings in the media is itself a major cause of them.

    Removing guns (magically removing all existing guns) would certainly reduce the problem and would probably fix things eventually, but the US has been broiling itself in this idea for too long, and the violence would probably continue with knives or homemade bombs or something instead, at least for a while.