“Falsehood flies, and truth comes limping after it, so that when men come to be undeceived, it is too late; the jest is over, and the tale hath had its effect: […] like a physician, who hath found out an infallible medicine, after the patient is dead.” —Jonathan Swift

  • 0 Posts
  • 9 Comments
Joined 10 months ago
Cake day: July 25th, 2024

  • I don’t disbelieve you, but I think a huge part of the mis/disinformation problem right now is that we can just say “I read something not that long ago that said [something that sounds true and confirms 90% of readers’ pre-existing bias]” and it’ll be uncritically accepted.

    If we don’t know where it was published, who published it, who wrote it, when it was written, what degree of correlation was established, the methodology used to establish it, how it defines corruption, what kind and how many politicians over what time period and from where, or even whether this comment accurately recalls what you read, then it’s about the same as pulling a Senator Armstrong, even if it means well. And if anyone does step in to disagree, the absence of sources invites them to counterargue with vibes and random anecdotes instead of empirical data.

    What can I immediately find? An anti-term-limits opinion piece from Anthony Fowler of the University of Chicago, which does a good job citing its sources but doesn’t seem to address this specific claim. Likewise, this analysis in the European Journal of Political Economy, which posits that term limits increase corruption but in exchange decrease its magnitude, because term-limited politicians can’t develop connections.

    Internet comments aren’t a thesis defense. But I think for anything to get better, we need to challenge ourselves to create a healthy information ecosystem where we still can.



  • It’s an easy mistake to make. For future reference, Wikiquote – a sister project of Wikipedia, like Wiktionary and Wikimedia Commons – is very often a good benchmark for whether a famous person actually said a quote.

    • For famous quotes they did say, the quote is usually listed, with a citation to exactly where it came from.
    • For famous quotes they didn’t say, the “Misattributed” section often has the quote with a cited explanation of where it actually comes from.
    • For famous quotes they might have said or probably didn’t, the “Disputed” section shows where the quote was first attributed to them, though of course it can’t provide a source where they themselves said it.

    It doesn’t have every quote, but for very famous people it filters out a lot of false positives. Since it gives you a citation, you can often leave a URL to the original source alongside your quote, both for context and so that anyone who’d otherwise call BS has the source in hand. And it sets a good example for others to cite their sources.
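    As an aside, and purely as my own illustration rather than anything from the workflow above, the lookup can even be scripted: Wikiquote, like every Wikimedia wiki, exposes the standard MediaWiki search API, so a few lines of Python can surface candidate pages and snippets for a quote fragment before you go read the page (and its “Misattributed” or “Disputed” sections) yourself.

    ```python
    # Hedged sketch: search English Wikiquote for a quote fragment via the
    # standard MediaWiki search API. The titles/snippets returned still need
    # a human to read the actual page and its citations.
    import requests

    def search_wikiquote(fragment: str, limit: int = 5) -> list[tuple[str, str]]:
        """Return (page title, snippet) pairs matching the quote fragment."""
        resp = requests.get(
            "https://en.wikiquote.org/w/api.php",
            params={
                "action": "query",
                "list": "search",
                "srsearch": fragment,
                "srlimit": limit,
                "format": "json",
            },
            timeout=10,
        )
        resp.raise_for_status()
        return [(hit["title"], hit["snippet"])
                for hit in resp.json()["query"]["search"]]

    if __name__ == "__main__":
        for title, snippet in search_wikiquote("falsehood flies and truth comes limping"):
            print(title, "->", snippet)
    ```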



  • This is entirely correct, and it’s deeply troubling to see the general public use LLMs for confirmation bias because they don’t understand anything about them. It’s not “accidentally confessing” like the other reply to your comment suggests. An LLM is just designed to process language, and because it’s trained on the largest datasets in history, there’s practically no way to know where any individual output came from if you can’t directly verify it yourself.

    The text you prompt it with is tokenized and run through a transformer model whose hundreds of billions or even trillions of parameters were adjusted according to god only knows how many petabytes of text data (weighted and sanitized however the trainers decided), and the tokens it predicts in response are detokenized and printed to the screen. There’s no “thinking” involved here, but if we anthropomorphize it that way, then any number of things could be going on: it “thinks” that’s what you want to hear; it “thinks” that because mountains of its training data call Musk racist; etc. You’re talking to a faceless amalgam unslakably feeding on unfathomable quantities of information with minimal scrutiny and literally no possible way to enforce quality beyond bare-bones manual constraints.
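    To make that pipeline concrete, here’s a rough sketch (my own illustration, using the Hugging Face transformers library with GPT-2 as a tiny stand-in for far larger chat models); note that nowhere in it is there a step that checks whether the output is true.

    ```python
    # Hedged illustration of the tokenize -> transformer -> detokenize pipeline.
    # GPT-2 stands in for much larger models; the structure is the same.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Is this claim about politicians true?"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids      # text -> token IDs
    output_ids = model.generate(input_ids, max_new_tokens=40)         # next-token prediction
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # token IDs -> text
    ```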

    There are ways to exploit LLMs into revealing sensitive information, yes, but you then have to confirm that the sensitive information is true, because all you’ve done is send data into a black box and gotten something out. You can get a GPT to solve a sudoku puzzle, but you can’t parade the answer around before you’ve checked that the solution is actually correct. You cannot ever, under literally any circumstance, trust anything a generative AI creates for factual accuracy; at best, you can use it as a shortcut to an answer which you can then attempt to verify.
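    The sudoku case is actually the friendliest possible situation, because checking is cheap even when you don’t trust the solver. Something like the following (just my own sketch of “verify the black box’s output yourself”) is all it takes to validate a completed 9×9 grid an LLM hands you:

    ```python
    # Hedged sketch: validate a completed 9x9 sudoku grid without trusting the
    # solver. (It doesn't check that the original puzzle's givens were kept;
    # that's one more loop if you still have the original grid.)
    def is_valid_solution(grid: list[list[int]]) -> bool:
        def ok(group: list[int]) -> bool:
            return sorted(group) == list(range(1, 10))

        rows = grid
        cols = [[grid[r][c] for r in range(9)] for c in range(9)]
        boxes = [[grid[r][c]
                  for r in range(br, br + 3)
                  for c in range(bc, bc + 3)]
                 for br in (0, 3, 6) for bc in (0, 3, 6)]
        return all(ok(g) for g in rows + cols + boxes)
    ```

    Most factual claims don’t come with a checker that clean, which is exactly why “a shortcut to an answer you then attempt to verify” is the only safe framing.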




  • Another reason donating to FOSS is better than paying for proprietary software. Proprietary software devs get to run around stealing whatever code they like from the open-source community and never suffer any consequence because they don’t make their source available. I can think of a select few proprietary projects that have the balls to be source-available.

    If you want to intentionally create a system that lets you evade accountability for stealing code, “fine”, but I have zero respect for you or your product, and I’m certainly not paying you a dime. I’ll put my money toward the developers who work to better the world instead of the rat fucks who steal from them to make money and pollute the software ecosystem with proprietary trash.