• 1 Post
  • 22 Comments
Joined 3 years ago
Cake day: June 23rd, 2023


  • open-weights aren’t open-source.

    This has always been a dumb argument, and it lacks any modicum of practicality. It rejects 95% of the need because it isn’t 100% to your liking.

    As we’ve seen in the text-to-image/video world, you can train on top of base models just fine. Or create LoRAs for specialization. Or change them into various styles of quantized GGUFs.

    Also, you don’t need a Brazilian LLM because all of the LLMs are very multilingual.

    Spending $3000 on training is already really cheap, and depending on the size of the model, you can get away with training on a 24GB or 32GB card, which costs you the price of the card and the energy. LoRAs take almost nothing to train. Any university worth anything is going to have the resources to train a model like that. None of these arguments hold water.


  • DeepSeek API isn’t free, and to use Qwen you’d have to sign up for Ollama Cloud or something like that

    To use Qwen, all you need is a decent video card and a local LLM server like LM Studio.

    Local deploying is prohibitive

    There’s a shitton of LLM models in various sizes to fit the requirements of your video card. Don’t have the 256GB of VRAM required for the full 8-bit quantized 235B Qwen3 model? Fine, get the 4-bit quantized 30B model that fits on a 24GB card. Or a Qwen3 8B Base with DeepSeek-R1 post-training, quantized to 6-bit, that fits on an 8GB card.

    There are literally hundreds of variations that people have made to fit whatever size you need… because it’s fucking open-source!
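    The size arithmetic behind those choices is simple: weight memory is roughly parameter count times bits per weight divided by eight, plus some headroom for the KV cache and activations. A minimal sketch of that back-of-the-envelope math (the helper function and the 1.2× overhead factor are illustrative assumptions, not a real tool):

    ```python
    # Rough VRAM estimate for a quantized LLM. Hypothetical helper for
    # illustration only; real usage varies with context length and runtime.
    def vram_gb(params_billions: float, bits_per_weight: float,
                overhead: float = 1.2) -> float:
        """Weights-only footprint times a fudge factor for KV cache/activations."""
        weight_bytes = params_billions * 1e9 * bits_per_weight / 8
        return weight_bytes * overhead / 1e9

    # 235B at 8-bit: far beyond any single consumer card.
    print(round(vram_gb(235, 8)))  # 282
    # 30B at 4-bit: fits on a 24GB card.
    print(round(vram_gb(30, 4)))   # 18
    # 8B at 6-bit: fits on an 8GB card.
    print(round(vram_gb(8, 6)))    # 7
    ```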



  • They are a billion-dollar company because they made decisions like these over the last several decades. They could have gone the easy route, made decisions that fuck over the consumer, and made billions more in insidious ways, but they didn’t.

    Steam Deck and their commitment to Proton are the reason why we can even have a conversation like this, talking about the rise of Linux Gaming in the year of our lord 2026. Without those two components, we would still be talking about how Windows 11 is fucking us over (while still using it), how nobody likes to switch to Linux because they still want to play games, how the whole “Year of the Linux Desktop” is the same tired fucking joke it’s been for the last 30 years.

    Instead, we’re in the timeline where we have enough Linux gaming developers to form their own fucking collective! Because of Valve!




  • Copyright as it is now is an injustice.

    At best, copyright with a limit of 25 years, the law before Mark Twain fucked all of us over, would suck a lot less.

    At worst, corporations would still exploit it to totality, because they have money, and you don’t.

    Copyright was created with an agreement that the public would receive their public domain dues in a timely manner. The corpos broke that contract with the public. Therefore, piracy is not only justified, but a moral duty to preserve what corporations casually throw away, or exploit with mindless memberberries.

    I would not be sad at all to see the entirety of copyright completely abolished. Open source is already doing a damn good job, and AI might end up hammering the final nail.


  • whether the victim was 18 years old or 17.”

    I kind of get what he’s saying here, especially when draconian California laws can put 18-year-olds in prison for daring to have sex with a 17-year-old, when they are both in high school. (I think they finally fixed that legal gap, but it existed for a long time.)

    But, completely outside the whole age and brain-development “debate”, there are also power dynamics at play here that aren’t even considered. Epstein was a powerful man who used his influence to coerce girls into having sex with other powerful men. Even if she were 18 or 25, a woman in that position is still being exploited, with human trafficking in the mix.



  • Download all existing literature to build a library for preservation and you’re called a pirate.

    Said library contains petabytes of the exact text of each and every piece of literature.

    Download all existing literature from aforementioned library to train an LLM and you’re a tech innovator.

    Said model contains gigabytes of a bunch of weights that can never go back to the exact words of the book.

    What a strange world we live in.

    It’s not strange at all. It’s degrees of compression. Compress a JPEG to the point that it’s unrecognizable, and it’s no longer breaking copyright. A model is essentially like trying to rewrite a book you just read from memory.