• 0 Posts
  • 70 Comments
Joined 8 months ago
Cake day: May 7th, 2024


  • I suggest that, to remedy what must clearly be a misunderstanding, we give him a deep and personal insight: cut him off from all of his assets, give him nothing but a set of cheap clothes and kick him to the curb.

    Of course, we’d need to make sure his billionaire buddies don’t help him, but maybe we can just enroll them in this experiment too.

    Actually, they might just promise someone a reward once they get access to their funds again, so we need to make sure that this can’t influence the experiment. Maybe we could just seize the assets without giving them back? With their hard work, surely they can get back to where they were, pulling bootstraps and all.


  • I once made a reddit comment in anger that was most certainly over the line. I don’t remember the context, but someone had my blood boiling quite badly, which I voiced by wishing pain on them. However, it was a support-oriented community, and my outburst was definitely not tone-appropriate for that environment - the last thing people seeking support need is a graphic description of pain. I got a two-day (I think?) temp ban from that sub, citing that reason. At first I was pissed; then I reflected, acknowledged my error and didn’t repeat that mistake.

    In hindsight, I think that makes for a good moderation approach:

    Lock an escalating thread, clean out comments that cross the line, and hand out brief temporary bans to particularly excessive offenders or to those continuing their venting spree in other threads after the first one was locked, giving them an opportunity to step back and reflect.

    Of course, there’s still the question of “what do the mods consider excessive?” But that’s a question you’d have either way.


  • Science communicators who make complex things accessible to the general public are a critical component of building and maintaining public support for scientific institutions. If we want science to serve public interests rather than corporate ones, we need to establish public funding for it, which requires a public understanding of what scientists are doing and why it’s valuable.

    A blog I very much like and keep recommending talks about both the importance of this and the differing viewpoints within academic culture (specifically about history, but many of the concepts apply to sciences in general). It also has cat pictures.

    This isn’t the first time I’ve heard about toxic culture in universities (Section “The Advisor”). Again, the entry is about graduate programs in the humanities, but it’s not just a humanities-specific issue.

    I personally didn’t know about HowStuffWorks (I was under the misconception that it was just a YouTube format, which I generally don’t watch a whole lot), but checking it out now, I definitely missed out, and I think it fits the criteria of field-to-public communication.

    To drive such a valuable contributor to despair so deep that they no longer want to live at all is a disservice to the public, a threat to what good their institution can do (which, for all its toxicity, probably also produced valuable research) and, most of all, a crime against that person. I hope those responsible are held accountable, but I also hope that public scrutiny can bring about improvements in academic culture, so that his death might still do some good in the end.


  • AGI and ASI are what I am referring to. Of course we don’t actually have that right now, I never claimed we did.

    I was talking about the currently available technology, though: its inefficiency, and the danger of tech illiteracy leading to overreliance on tools that aren’t yet “smart” enough to warrant that reliance.

    I agree with your sentiment that it may well reach that point some day. If it does, and energy consumption is no longer an active concern, I do see how it could justifiably be deployed at scale.

    But we also agree that “we don’t actually have that right now”, and with what we do have, I don’t think it’s reasonable. I’m happy to debate that point civilly, if you’re interested in that.

    > It is hilarious and insulting you trying to “erm actually” me when I literally work in this field doing research on uses of current gen ML/AI models.

    And how would I know that? Everyone on the Internet is an expert, how would I come to assume you’re actually one? Given the misunderstanding outlined above, I assumed you were conflating the (topical) current models with the (hypothetical) future ones.

    > Go fuck yourself

    There is no need for such hostility. I meant no insult; I just misunderstood what you were talking about and sought to correct a common misconception. Seeing how the Internet is already full of vitriol, I think we’d all do each other a favour if we tried applying Hanlon’s Razor more often, looking for explanations in human error instead of concluding malice.

    I hope you have a wonderful week, and good luck with your ongoing research!


  • I was contesting the general logic of this sentiment:

    > Which “experts” do you need for what’s common knowledge?

    I took this to mean: “If common knowledge suggests an obvious understanding, an expert’s assessment can add no value, as they would either agree or be wrong.” Put differently: “If it seems obviously true to me, it must be true in general.”

    TL;DR: If you think you know more than experts on a given topic, you’re most likely wrong.

    On a fundamental level, this claim holds no water. Experts in a given field are usually aware of the “common knowledge”. They also usually have special knowledge, which is what makes them experts. If they claim things that contradict “common knowledge”, it’s more likely that their special knowledge includes additional considerations a layperson wouldn’t be aware of.
    Appeal to Authority as a fallacy applies when the person in question isn’t actually an authority on the subject but merely prominent or versed in some other context; it doesn’t work as a universal refutation of “experts say”.


    For this specific case, I’m inclined to assume there is some nuance I might not know about. It seems obvious to me that large, central power plants are both easier targets and more vulnerable to total disruption if part of their machinery is damaged. A distributed grid of solar panels, on the other hand, may be more resilient: the rest can continue to function even if some are destroyed, and the panels are harder to spot, making efforts to disrupt the power supply far more expensive in terms of resources.

    However, I’m not qualified to assess the expertise of the people in question, let alone make an accurate assessment myself. Maybe you’re right and they’re grifters spouting bullshit. But I’d be wary of assuming so just because it seems true.




  • This is one of the nasty cases of “multiple parties fucked up to let this happen, and all of them are to blame”. The fascists themselves obviously bear the most blame, followed by their enablers, including various media outlets that went terribly softball on one side while picking apart the other at every opportunity. The Dems’ continuous lurch to the right and the resulting voter disillusionment also count among those, as do the lack of education and the Reps’ skill at pinning all the issues they cause on the Dems…

    The list is long. It also includes the voters who, faced with the question “genocide with or without fascism?” threw up their hands in frustration and said “Do whatever you want, go install the fascist for all I care” as well as the relentless “bOtH SiDeS” bullshit.

    The Dems fucked up, badly and consistently. They deserve to lose their political standing, to be usurped by an actually progressive left-wing party. But achieving that through a fash victory is like weeding your garden with incendiary bombs: sure, it might burn away the visible part of the weeds, but it’ll probably kill the crop too and still leave the roots ready to become a nuisance in the future.