• 1 Post
  • 26 Comments
Joined 3 years ago
Cake day: July 7th, 2023

  • Eager Eagle@lemmy.world to Open Source@lemmy.ml: LLM/“AI” Policies | Jellyfin

    you just know a company like Microsoft or Apple will eventually try suing an open source project over AI code that’s “too similar” to their proprietary code.

    Doubt it. The incentives don’t align: they benefit from open source far more than they are threatened by it. Even the “embrace, extend, extinguish” idea comes from a different era, and it’s likely less profitable than the vendor lock-in and other practices that are actually in place today. Even the copyright argument could easily backfire on them if they threw it into a case, given all the questionable AI training.


  • Even if you’re into AI coding, I never understood the hype around Cursor. In the beginning they were maybe 3 months ahead of the alternatives; today you can’t even say that anymore, and they’re still “worth” billions. You can get similar prediction quality from other editors if you know how to use them, at a fraction of the price.

    Cursor also chugs tokens like a 1978 Lincoln Continental chugs gas; that’s how they get marginally better results, so bringing your own API key isn’t even a viable option. The first time I tried it, I asked for a simple one-line edit to a markdown file and it sent out 20k tokens before I could say “AGI is 6 months away”, and still got the change wrong.
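    For a rough sense of why that token appetite makes bringing your own API key unattractive, here is a back-of-the-envelope sketch; the per-token prices below are illustrative placeholders I made up, not any vendor’s actual rates:

    ```python
    # Rough cost of a single editor request, using made-up illustrative rates.
    PRICE_PER_1M_INPUT = 3.00    # USD per million input tokens (hypothetical)
    PRICE_PER_1M_OUTPUT = 15.00  # USD per million output tokens (hypothetical)

    def request_cost(input_tokens: int, output_tokens: int) -> float:
        """Cost in USD of one request at the illustrative rates above."""
        return (input_tokens * PRICE_PER_1M_INPUT
                + output_tokens * PRICE_PER_1M_OUTPUT) / 1_000_000

    # A "simple one-line markdown edit" that ships ~20k tokens of context:
    print(f"${request_cost(20_000, 500):.2f} per edit")  # roughly $0.07 at these rates
    # A few hundred requests like that per day adds up quickly on your own key.
    ```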


  • yes, the system will likely use some swap if available even when there’s plenty of free RAM left:

    The casual reader may think that with a sufficient amount of memory, swap is unnecessary but this brings us to the second reason. A significant number of the pages referenced by a process early in its life may only be used for initialisation and then never used again. It is better to swap out those pages and create more disk buffers than leave them resident and unused.

    Src: https://www.kernel.org/doc/gorman/html/understand/understand014.html

    On my recently booted system with 32 GB of RAM and half of that actually free (not just “available”), I can already see tens of MB of swap in use.

    As a rule of thumb, swap usage is only a concern, or an indication that the system is or was starved of memory, if a significant share of the swap space is in use. But even then, it might just be some cached pages hanging around because the kernel decided to keep them instead of evicting them.
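    If you want to check this on your own system, here is a minimal Python sketch; it is my addition rather than anything from the kernel docs linked above, and it only reads the standard Linux files /proc/meminfo and /proc/<pid>/status:

    ```python
    """Report free/available memory, swap in use, and the biggest per-process
    swap users, by reading standard Linux /proc files."""
    import glob
    import re

    def read_meminfo():
        """Parse /proc/meminfo into a dict of values in KiB."""
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                info[key] = int(rest.strip().split()[0])  # first field is the kB value
        return info

    def top_swap_users(n=5):
        """Return (KiB in swap, process name) for the n biggest swap users."""
        users = []
        for path in glob.glob("/proc/[0-9]*/status"):
            try:
                text = open(path).read()
            except OSError:
                continue  # process exited or is not readable
            name = re.search(r"^Name:\s+(.+)$", text, re.MULTILINE)
            swap = re.search(r"^VmSwap:\s+(\d+) kB", text, re.MULTILINE)
            if name and swap and int(swap.group(1)) > 0:
                users.append((int(swap.group(1)), name.group(1)))
        return sorted(users, reverse=True)[:n]

    if __name__ == "__main__":
        mem = read_meminfo()
        swap_used = mem["SwapTotal"] - mem["SwapFree"]
        print(f"MemFree:      {mem['MemFree'] // 1024} MiB")
        print(f"MemAvailable: {mem['MemAvailable'] // 1024} MiB")
        print(f"Swap in use:  {swap_used // 1024} MiB of {mem['SwapTotal'] // 1024} MiB")
        for kib, name in top_swap_users():
            print(f"  {name}: {kib / 1024:.1f} MiB in swap")
    ```

    If the kernel has swapped anything out, the last lines show which long-lived processes those pages belong to, even when most of the RAM is still free.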


  • if my system touches SWAP at all, it’s run out of memory

    That’s a swap myth. Swap is not emergency memory; it gives the kernel a reclamation space on disk for anonymous pages (pages that are not file-backed), so that the OS can use main memory more efficiently.

    The swapping algorithm does take into account the higher cost of putting pages in swap. Touching swap may just mean that a lot of system files are being cached, but that’s reclaimable space and it doesn’t mean the system is running out of memory.
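    One knob behind that cost weighting is vm.swappiness, which biases reclaim between swapping out anonymous pages and dropping file-backed page cache. The sketch below is just an illustration of mine: it reads the standard sysctl file and prints the commonly documented interpretation, not an exact model of the reclaim code.

    ```python
    # Print vm.swappiness, the sysctl that biases memory reclaim between
    # swapping out anonymous pages and dropping file-backed page cache.
    with open("/proc/sys/vm/swappiness") as f:
        swappiness = int(f.read().strip())

    # Lower values make reclaim prefer dropping page cache;
    # higher values make it more willing to swap out idle anonymous pages.
    print(f"vm.swappiness = {swappiness} (kernel default is usually 60)")
    ```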


    I disagree. What I could hack together over a weekend when starting a project, I can do in a couple of hours with AI, because the start of a project is where the typing bottleneck is, due to all of the boilerplate. I can’t type faster than an LLM.

    Also, since there are hundreds of similar projects out there and a weekend won’t get me to the parts that make mine unique, that’s the perfect use case for “vibe coding”.


  • No, this is about adding guidelines for tool-generated submissions to the kernel. The Tailwind conversation was about making their documentation more accessible to AI tools.

    Linus doesn’t want to add guidelines, so as not to fuel either side of the whole discussion, and says that adding them won’t solve the problem anyway, because it’s often not trivial to detect whether a contribution was written with AI tools. After all, “documentation is for good actors”, hinting that anyone contributing AI slop isn’t expected to respect the guidelines in the first place.