• 1 Post
  • 24 Comments
Joined 3 years ago
Cake day: June 30th, 2023






  • To be fair, this all started under the Biden administration with the CHIPS and Science Act of 2022.

    The US is increasingly concerned that, if China invades Taiwan, it will be completely locked out of semiconductor manufacturing, which would crater the US economy. Rather than flex its soft power and exercise a little diplomacy like it used to do in decades past, the US has apparently decided that the invasion of Taiwan is inevitable and the only course of action is to bolster semiconductor manufacturing at home.

    Trump, of course, has all the subtlety of a torpedo and his rhetoric here has been needlessly antagonistic… but yeah, this whole thing started under Biden and now Trump is pretending it was always his idea. So really the thing he stole was the policy.






  • Man… of all the vibe coding tools, Lovable has gotta be one of the most useless, too.

    I work with people (all middle managers) who love Lovable because they can type a two-sentence description of an app and it will immediately vomit something into existence. But the code it generates is an absolute disaster, and the UIs it designs (which are supposed to be its main draw) are some of the most generic crap I’ve ever seen.

    0/10, do not recommend.







  • But how would you use words to explain the phenomenon?

    I don’t know, I’ve been struggling to find the right ‘sound bite’ for it myself. The problem is that all of the simplified explanations encourage people to anthropomorphize these things, which just further fuels the toxic hype cycle.

    In the end, I’m unsure which does more damage.

    Is it better to convince people the AI “lies”, so they’ll stop using it? Or is it better to convince people AI doesn’t actually have the capacity to lie so that they’ll stop shoveling money onto the datacenter altar like we’ve just created some bullshit techno-god?


  • It refers to cases where an LLM in some way tries to deceive or manipulate the user interacting with it.

    I think this still gives the model too much credit by implying that there’s any sort of intentionality behind this behavior.

    There’s not.

    These models are trained on the output of real humans and real humans lie and deceive constantly. All that’s happening is that the underlying mathematical model has encoded the statistical likelihood that someone will lie in a given situation. If that statistical likelihood is high enough, the model itself will lie when put in a similar situation.
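    The idea above can be sketched with a toy model. This is a deliberately simplified illustration (the situations, responses, and counts are all made up, and a real LLM learns token probabilities with a neural network, not a lookup table), but it shows how "deceptive" output can fall out of pure frequency statistics with no intent anywhere in the system:

    ```python
    import random
    from collections import Counter, defaultdict

    # Toy "training data": (situation, response) pairs harvested from humans.
    # Some humans respond honestly, some deceptively -- the data is just data.
    training_data = [
        ("caught_in_error", "admit it"),
        ("caught_in_error", "deny it"),
        ("caught_in_error", "deny it"),
        ("asked_for_help", "help"),
        ("asked_for_help", "help"),
    ]

    # "Training": count how often each response follows each situation.
    counts = defaultdict(Counter)
    for situation, response in training_data:
        counts[situation][response] += 1

    def generate(situation):
        # "Inference": sample a response in proportion to its training frequency.
        responses = counts[situation]
        total = sum(responses.values())
        weights = [c / total for c in responses.values()]
        return random.choices(list(responses), weights)[0]

    # When "caught_in_error", this model denies it two-thirds of the time --
    # not because it "wants" to deceive, but because denial was more common
    # in the data it was fit to.
    print(generate("caught_in_error"))
    ```

    Scale the table up to billions of parameters and trillions of tokens and you get the same dynamic: if deception was the statistically likely human response in a situation, the model reproduces it.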




  • very_well_lost@lemmy.world to Games@lemmy.world: Pet Peeves with Games?

    I can’t remember specific examples (probably because I didn’t stick with any of them very long), but I’ve played several games that don’t even let you touch the options until after you’ve finished some tutorial section… which is especially annoying for players who play with an inverted y-axis.