• 0 Posts
  • 110 Comments
Joined 3 years ago
Cake day: July 9th, 2023


  • A relatively small company can’t afford to fight a protracted legal battle or simply ignore the law. They have employees with families, and $800/hr for legal representation adds up fast, not to mention potentially getting hit with $6500 fines per infraction for refusal to comply. They also can’t afford to just not sell in California, which has a huge chunk of the US population.

    We don’t have to be happy about the state of things, but it’s not their fault that capitalism and authoritarianism have effectively forced them to comply.

    Be upset by all means, but remember to focus your anger on those who actually put these laws in place.


  • That very much depends on my use case. For example, I have a laptop that needs maximum uptime, so I use a periodically updated atomic distro that stays just behind the bleeding edge.

    For my daily driver, I like to tinker and customize, so I trade that stability for flexibility and bleeding-edge packages, relying on btrfs snapshots as a first-line backup should the OS shit itself.
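    For context, the "btrfs snapshot before tinkering" workflow typically looks something like the sketch below. The snapshot destination (`/.snapshots`, a snapper-style layout) and the naming scheme are assumptions for illustration, not the commenter's actual setup; the script falls back to a dry run when it can't actually take a snapshot (no btrfs tools, not root, or root isn't btrfs):

```shell
#!/bin/sh
# Hypothetical sketch: take a read-only btrfs snapshot of / before tinkering.
# The /.snapshots path and pre-tinker-* naming are assumed, not prescribed.
set -eu

SRC=/
DEST="/.snapshots/pre-tinker-$(date +%Y%m%d)"

if command -v btrfs >/dev/null 2>&1 \
    && [ "$(id -u)" -eq 0 ] \
    && [ "$(stat -f -c %T /)" = "btrfs" ]; then
    # -r makes the snapshot read-only, so a later rollback restores a
    # known-good state (e.g. by snapshotting it back over the root subvolume).
    btrfs subvolume snapshot -r "$SRC" "$DEST"
    MODE=real
else
    # Dry run: show what would be executed without touching the filesystem.
    echo "would run: btrfs subvolume snapshot -r $SRC $DEST"
    MODE=dry-run
fi
```

    Because btrfs snapshots are copy-on-write, the snapshot is near-instant and initially consumes almost no extra space, which is what makes this practical as a routine pre-update safety net (though, being on the same disk, it is not a substitute for real backups).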


  • Because I’m tired of people making flimsy arguments for why LLMs are “akshully really good and underrated.” I’m tired of regular people, wittingly or unwittingly, carrying water for the billionaires who are currently fucking over the economy, the environment, and even entire supply chains in an effort to show—against all evidence to the contrary—that LLMs are much more than fancy chatbots.

    It has been an incessant drone of sloppy arguments and omitted facts, and I am tired, boss.


  • "Obviously, my mini-benchmark only had 6 questions, and I ran it only once. This was obviously not scientifically rigorous. However it was systematic enough to trump just a mere feeling. … If and when AI usage expands from here, we might actually not drown in AI slop as chances of accidentally crappy results decrease. This makes me positive about the future."

    Spoken like a true AI apologist. You ran one test, and you extrapolated your results to an optimistic outcome that conspicuously matches what you wish to be true. Not scientifically rigorous? Bruh, this is the very definition of confirmation bias.

    If this is actually a hypothesis you want to test, maybe contact some computer science researchers to see how best to design an experiment. Beyond that, this is virtually the same as flipping a coin once and drawing a conclusion about how often it lands heads.