Article: https://proton.me/blog/deepseek

Calls it “Deepsneak”, failing to make it clear that the reason people love Deepseek is that you can download it and run it securely on any of your own private devices or servers - unlike most of the competing SOTA AIs.

I can’t speak for Proton, but the last couple of weeks have shown some very clear biases.

  • abobla@lemm.ee · ↑8 · 2 hours ago

    Jesus fuckin Christ, just marry Trump at this point, Mister proton CEO.

  • yourFanatic@sh.itjust.works · ↑5 · 4 hours ago

    I cancelled my Proton renewal for January and am very happy with Mullvad VPN.

    Mozilla VPN runs Mullvad under the hood as well.

  • Victor@lemmy.world · ↑11 · 5 hours ago

    Goddammit I had such high hopes for Proton. Was planning on that being my post-Google main. Now what. 💀

      • Victor@lemmy.world · ↑2 · 2 hours ago

        Anything European-based to recommend? I’d like something as far-removed from America as possible, respecting GDPR, privacy, etc., but with a good-sized free-tier storage. I don’t think I need more than a couple GB for email. Calendar included would be a big plus as well. 😅 Probably asking for a lot here…

    • Victor@lemmy.world · ↑2 · 5 hours ago

      You have two email addresses in both Tuta and Mailbox? Any particular reason for that, that you could share with us? 🙏

      • AlecSadler@sh.itjust.works · ↑4 · 3 hours ago

        I have two domains, one in each of Tuta and Mailbox. It was originally so I could try both out, but now I figure it doesn’t hurt to keep ’em separated. I’m still new to non-Proton so I’m sort of still feeling things out.

        Nothing really too interesting or tricky about it; it was just born out of curiosity.

        • Victor@lemmy.world · ↑1 · 2 hours ago

          Ah I see. So now to the possibly tough question, if you had to choose only one, or recommend only one of them to someone who wants to make a minimal amount of new email addresses, which one would you recommend over the other? 😅 Or maybe a third option?

          • AlecSadler@sh.itjust.works · ↑1 · edited · 50 minutes ago

            I think I’d need some more time to really answer, but at the outset, I find Mailbox.org’s interface more intuitive, with more settings, and it generally feels cleaner and more streamlined. Creating aliases and domain aliases in Mailbox seems more Proton-like in its simplicity.

            Tuta I think is more private and secure, but bits of their interface and app need polish. One reason I think Tuta is more secure, despite both touting security and privacy, is that Mailbox search works immediately, whereas Tuta requires you to grant a permission and states that it stores the search index locally, so it may take up space. I think Tuta isn’t doing any server-side indexing of any kind? Unsure.

            edit: Mailbox doesn’t have a native app, and Tuta has a native app but I think it’s largely a webview. Notifications work OK but you’ll click on a notification and then have to wait for the app to actually connect and resync before you can view it.

  • Majestic@lemmy.ml · ↑43 ↓3 · edited · 17 hours ago

    People got flak for saying Proton is the CIA, Proton is the NSA, Proton is a joint Five Eyes intelligence operation, despite the convenient timing of their formation and lots of other things.

    Maybe they’re not, maybe their CEO is just acting this way.

    But consider for a moment if they were. IF they were then all of this would make more sense. The CIA/NSA/etc have a vested interest in discrediting and attacking Chinese technology they have no ability to spy or gather data through. The CIA/NSA could also for example see a point to throwing in publicly with Trump as part of a larger agreed upon push with the tech companies towards reactionary politics, towards what many call fascism or fascism-ish.

    My mind is not made up. It’s kind of unknowable. I think they’re suspicious enough to be wary of trusting them, but there’s no smoking gun. Then again, there wasn’t a smoking gun that Crypto AG was a CIA cut-out until unauthorized leaks nearly a half century after they gained control and use of it. We know they have an interest in subverting encryption, in going fishing among “interesting” targets who might seek to use privacy-conscious services, and among dissidents outside the West they may wish to vet and recruit.

    True privacy advocates should not be throwing in with the agenda of any regime or bloc, especially those who so trample human and privacy rights as that of the US and co. They should be roundly suspicious of all power.

  • cygnus@lemmy.ca · ↑152 ↓1 · 23 hours ago

    Pretty rich coming from Proton, who shoved an LLM into their mail client mere months ago.

    • harsh3466@lemmy.ml · ↑34 · 22 hours ago

      wait, what? How did I miss that? I use protonmail, and I didn’t see anything about an LLM in the mail client. Nor have I noticed it when I check my mail. Where/how do I find and disable that shit?

        • harsh3466@lemmy.ml · ↑49 · 22 hours ago

          Thank you. I’ve saved the link and will be disabling it next time I log in. Can’t fucking escape this AI/LLM bullshit anywhere.

          • cygnus@lemmy.ca · ↑68 ↓3 · 22 hours ago

            The combination of AI, crypto wallet and CEO’s pro-MAGA comments (all within six months or so!) are why I quit Proton. They’ve completely lost the plot. I just want a reliable email service and file storage.

            • h6pw5@sh.itjust.works · ↑5 ↓1 · 14 hours ago

              Crypto and AI focus was a weird step before all this came out. But now we know Andy is pro-Republican… it completes a very unappealing picture. We should have a database though; plenty of C-level execs and investor groups do far worse and get no scrutiny simply because they don’t post about it on the internet.

            • harsh3466@lemmy.ml · ↑18 ↓1 · 22 hours ago

              I’m considering leaving proton too. The two things I really care about are simplelogin and the VPN with port forwarding. As far as I understand it, proton is about the last VPN option you can trust with port forwarding

              • limitedduck@awful.systems · ↑3 · 21 hours ago

                As far as I understand it, proton is about the last VPN option you can trust with port forwarding

                Could you explain this part please? What makes them untrustworthy?

                • harsh3466@lemmy.ml · ↑1 · 19 hours ago

                  I’m not 100% sure if you mean what do I think makes proton untrustworthy, or what do I think makes other vpns untrustworthy?

                  If you’re referring to proton, some of the statements Andy Yen have made recently are painting proton as less neutral than they claim to be.

                  I’m also generally aware that a LOT of vpn outfits are just a different company mining your traffic and data, and that there are few “no log” vpns that you can trust.

                  Despite their recent statements that sour my taste in giving proton money (and the ai bullshit that every goddam company is shoving down our throats), I trust proton when they say no logs. They’re regularly audited for it.

                  I don’t trust all these other VPN companies that claim to be no log and have nothing to back them up. Especially when several of them have been caught logging and mining/selling the data they claim to not be logging.

            • kboy101222@sh.itjust.works · ↑10 ↓2 · 21 hours ago

              Once all that crap came out, I felt incredibly justified by never having switched to Proton.

              It was entirely out of laziness, but still

  • Rogue@feddit.uk · ↑69 ↓4 · 20 hours ago

    How apt, just yesterday I put together an evidenced summary of the CEO’s recent absurd comments. Why are Proton so keen to throw away so much goodwill people had invested in them?!


    This is what the CEO posting as u/Proton_Team stated in a response on r/ProtonMail:

    Here is our official response, also available on the Mastodon post in the screenshot:

    Corporate capture of Dems is real. In 2022, we campaigned extensively in the US for anti-trust legislation.

    Two bills were ready, with bipartisan support. Chuck Schumer (who coincidentally has two daughters working as big tech lobbyists) refused to bring the bills for a vote.

    At a 2024 event covering antitrust remedies, out of all the invited senators, just a single one showed up - JD Vance.

    By working on the front lines of many policy issues, we have seen the shift between Dems and Republicans over the past decade first hand.

    Dems had a choice between the progressive wing (Bernie Sanders, etc), versus corporate Dems, but in the end money won and constituents lost.

    Until corporate Dems are thrown out, the reality is that Republicans remain more likely to tackle Big Tech abuses.

    Source: https://archive.ph/quYyb

    To call out the important bits:

    1. He refers to it as the “official response”
    2. Indicates that JD Vance is on their side just because he attended an event that other invited senators didn’t
    3. Rattles on about “corporate Dems” with incredible bias
    4. States “Republicans remain more likely to tackle Big Tech abuses” which is immediately refuted by every response

    That was posted in the r/ProtonMail sub where the majority of the event took place: https://old.reddit.com/r/ProtonMail/comments/1i1zjgn/so_that_happened/m7ahrlm/

    However be aware that the CEO posting as u/Proton_Team kept editing his comments so I wouldn’t trust the current state of it. Plus the proton team/subreddit mods deleted a ton of discussion they didn’t like. Therefore this archive link captured the day after might show more but not all: https://web.archive.org/web/20250116060727/https://old.reddit.com/r/ProtonMail/comments/1i1zjgn/so_that_happened/m7ahrlm/

    Some statements were made on Mastodon but were subsequently deleted; they’re captured by an archive link: https://web.archive.org/web/20250115165213/https://mastodon.social/@protonprivacy/113833073219145503

    I learned about it from an r/privacy thread but true to their reputation the mods there also went on a deletion spree and removed the entire post: https://www.reddit.com/r/privacy/comments/1i210jg/protonmail_supporting_the_party_that_killed/

    This archive link might show more but I’ve not checked: https://web.archive.org/web/20250115193443/https://old.reddit.com/r/privacy/comments/1i210jg/protonmail_supporting_the_party_that_killed/

    There’s also this lemmy discussion from the day after but by that point the Proton team had fully kicked in their censorship so I don’t know how much people were aware of (apologies I don’t know how to make a generic lemmy link) https://feddit.uk/post/22741653

    • ERROR: Earth.exe has crashed@lemmy.dbzer0.com · ↑2 · 44 minutes ago

      Indicates that JD Vance is on their side just because he attended an event that other invited senators didn’t

      🤣

      Show up at an event = my best friend and definitely not a leopard ready to eat my face ???

      🤔

      (What a dumbass)

    • doubtingtammy@lemmy.ml · ↑23 ↓3 · edited · 15 hours ago

      Until corporate Dems are thrown out, the reality is that Republicans remain more likely to tackle Big Tech abuses.

      What a fucking dumbass. Yes, dems suck. But at least Lina Khan was head of the FTC and starting to change how antitrust laws are enforced. Did he delete this post after Trump was inaugurated with 3 of the richest tech billionaires?

  • simple@lemm.ee · ↑111 ↓4 · 22 hours ago

    DeepSeek is open source, meaning you can modify code(new window) on your own app to create an independent — and more secure — version. This has led some to hope that a more privacy-friendly version of DeepSeek could be developed. However, using DeepSeek in its current form — as it exists today, hosted in China — comes with serious risks for anyone concerned about their most sensitive, private information.

    Any model trained or operated on DeepSeek’s servers is still subject to Chinese data laws, meaning that the Chinese government can demand access at any time.

    What??? Whoever wrote this sounds like he has 0 understanding of how it works. There is no “more privacy-friendly version” that could be developed, the models are already out and you can run the entire model 100% locally. That’s as privacy-friendly as it gets.

    “Any model trained or operated on DeepSeek’s servers are still subject to Chinese data laws”

    Operated, yes. Trained, no. The model is MIT licensed, China has nothing on you when you run it yourself. I expect better from a company whose whole business is on privacy.

    • lily33@lemm.ee · ↑34 ↓2 · 22 hours ago

      To be fair, most people can’t actually self-host Deepseek, but there already are other providers offering API access to it.

      • halcyoncmdr@lemmy.world · ↑33 ↓3 · 22 hours ago

        There are plenty of step-by-step guides to run Deepseek locally. Hell, someone even had it running on a Raspberry Pi. It seems to be much more efficient than other current alternatives.

        That’s about as openly available to self host as you can get without a 1-button installer.

        • Aria@lemmygrad.ml · ↑1 · edited · 5 hours ago

          Running R1 locally isn’t realistic. But you can rent a server and run it privately on someone else’s computer. It costs about 10 per hour to run. You can run it on CPU for a little less. You need about 2TB of RAM.

          If you want to run it at home, even quantized in 4 bit, you need 20 4090s. And since you can only have 4 per computer for normal desktop mainboards, that’s 5 whole extra computers too, and you need to figure out networking between them. A more realistic setup is probably running it on CPU, with some layers offloaded to 4 GPUs. In that case you’ll need 4 4090s and 512GB of system RAM. Absolutely not cheap or what most people have, but technically still within the top top top end of what you might have on your home computer. And remember this is still the dumb 4 bit configuration.

          Edit: I double-checked and 512GB of RAM is unrealistic. In fact anything higher than 192 is unrealistic. (High-end) AM5 mainboards support up to 256GB, but 64GB RAM sticks are much more expensive than 48GB ones. Most people will probably opt for 48GB or lower sticks. You need a Threadripper to be able to use 512GB. Very unlikely for your home computer, but maybe it makes sense with something else you do professionally. In which case you might also have 8 RAM slots. And such a person might then think it’s reasonable to spend 3000 Euro on RAM. If you spent 15K Euro on your home computer, you might be able to run a reduced version of R1 very slowly.
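The memory figures in the comment above can be sketched as a back-of-envelope estimate. This is a rough sketch, not official sizing: the ~671B parameter count and the 1.2× overhead factor (KV cache, activations) are assumptions for illustration.

```python
# Back-of-envelope memory estimate for serving a dense model at a given
# quantization. The ~671B parameter count and the 1.2x overhead factor
# (KV cache, activations) are assumptions, not official figures.

def model_memory_gb(params_billions: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Weights at the given bit width, plus ~20% serving overhead, in GB."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

full_fp16 = model_memory_gb(671, 16)  # ~1610 GB: the "about 2TB of RAM" ballpark
full_4bit = model_memory_gb(671, 4)   # ~403 GB: roughly 20 x 24GB 4090s of VRAM
print(round(full_fp16), round(full_4bit))  # 1610 403
```

Under these assumptions, even the 4-bit weights alone exceed any consumer desktop’s RAM or VRAM, which is the point being made above.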

        • tekato@lemmy.world · ↑15 · 21 hours ago

          You can run an imitation of the DeepSeek R1 model, but not the actual one unless you literally buy a dozen of whatever NVIDIA’s top GPU is at the moment.

        • Dyf_Tfh@lemmy.sdf.org · ↑10 ↓4 · edited · 21 hours ago

          Those are not deepseek R1. They are unrelated models like llama3 from Meta or Qwen from Alibaba “distilled” by deepseek.

          This is a common method to smarten a smaller model from a larger one.

          Ollama should have never labelled them deepseek:8B/32B. Way too many people misunderstood that.
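The distillation idea mentioned above can be sketched in miniature. This is a toy illustration with made-up logits: real distillation (as with the R1 distill models) trains on text-token distributions at vastly larger scale, but the core signal is the same: the student is scored against the teacher’s softened output distribution rather than raw labels.

```python
import math

# Toy knowledge-distillation sketch: a "student" is trained to match a
# "teacher's" temperature-softened output distribution. All numbers are
# made up for illustration.

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    scaled = [z / T for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student against the teacher's softened targets."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -sum(pt * math.log(ps + 1e-12)
                for pt, ps in zip(p_teacher, p_student))

teacher = [4.0, 1.0, 0.5]          # teacher strongly prefers the first token
student_before = [0.0, 0.0, 0.0]   # untrained student: uniform guess
student_after = [2.0, 0.5, 0.2]    # after training: moved toward the teacher

loss_before = kd_loss(student_before, teacher)
loss_after = kd_loss(student_after, teacher)
print(loss_after < loss_before)    # True: training lowers the distillation loss
```

Minimizing this loss over many examples is what transfers some of the larger model’s behavior into the smaller one.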

          • pcalau12i@lemmygrad.ml · ↑6 ↓2 · edited · 2 hours ago

            The 1.5B/7B/8B/13B/32B/70B models are all officially DeepSeek R1 models, that is what DeepSeek themselves refer to those models as. It is DeepSeek themselves who produced those models and released them to the public and gave them their names. And their names are correct, it is just factually false to say they are not DeepSeek R1 models. They are.

            The “R1” in the name means “reasoning version one” because it does not just spit out an answer but reasons through it with an internal monologue. For example, here is a simple query I asked DeepSeek R1 13B:

            Me: can all the planets in the solar system fit between the earth and the moon?

            DeepSeek: Yes, all eight planets could theoretically be lined up along the line connecting Earth and the Moon without overlapping. The combined length of their diameters (approximately 379,011 km) is slightly less than the average Earth-Moon distance (about 384,400 km), allowing them to fit if placed consecutively with no required spacing.

            However, on top of its answer, I can expand an option to see the internal monologue it went through before generating the answer (too long to paste here).

            What makes these consumer-oriented models different is that rather than being trained on raw data, they are trained on synthetic data from pre-existing models. That’s what the “Qwen” or “Llama” parts mean in the name. The 7B model is trained on synthetic data produced by Qwen, so it is effectively a compressed version of Qwen. However, neither Qwen nor Llama can “reason,” they do not have an internal monologue.

            This is why it is just incorrect to claim that something like DeepSeek R1 7B Qwen Distill has no relevance to DeepSeek R1 but is just a Qwen model. If it’s supposedly a Qwen model, why is it that it can do something that Qwen cannot do but only DeepSeek R1 can? It’s because, again, it is a DeepSeek R1 model; they add the R1 reasoning to it during the distillation process as part of its training. They basically use synthetic data generated from DeepSeek R1 to fine-tune it, readjusting its parameters so it adopts a similar reasoning style. It is objectively a new model because it performs better on reasoning tasks than just a normal Qwen model. It cannot be considered solely a Qwen model nor an R1 model because its parameters contain information from both.
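The arithmetic in the model’s answer above is easy to check. A quick sanity check using commonly cited mean diameters in km; note the classic version of this factoid counts the seven planets other than Earth, which is why the total here differs slightly from the 379,011 km the model quoted.

```python
# Sanity check of the planets-between-Earth-and-Moon claim, using
# commonly cited mean diameters (km) for the seven planets other than Earth.
diameters = {
    "Mercury": 4879, "Venus": 12104, "Mars": 6779, "Jupiter": 139820,
    "Saturn": 116460, "Uranus": 50724, "Neptune": 49244,
}
total = sum(diameters.values())
earth_moon_avg = 384400  # average Earth-Moon distance, km
print(total, total < earth_moon_avg)  # 380010 True
```

So the combined diameters do squeeze under the average Earth-Moon distance, with only a few thousand km to spare.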

            • lily33@lemm.ee · ↑4 ↓1 · edited · 10 hours ago

              What makes these consumer-oriented models different is that rather than being trained on raw data, they are trained on synthetic data from pre-existing models. That’s what the “Qwen” or “Llama” parts mean in the name. The 7B model is trained on synthetic data produced by Qwen, so it is effectively a compressed version of Qwen. However, neither Qwen nor Llama can “reason,” they do not have an internal monologue.

              You got that backwards. They’re other models - qwen or llama - fine-tuned on synthetic data generated by Deepseek-R1. Specifically, reasoning data, so that they can learn some of its reasoning ability.

              But the base model - and so the base capability there - is that of the corresponding qwen or llama model. Calling them “Deepseek-R1-something” doesn’t change what they fundamentally are, it’s just marketing.

              • pcalau12i@lemmygrad.ml · ↑1 ↓1 · edited · 2 hours ago

                There is no “fundamentally” here, you are referring to some abstraction that doesn’t exist. The models are modified during the fine-tuning process, and the process trains them to learn to adopt DeepSeek R1’s reasoning technique. You are acting like there is some “essence” underlying the model which is the same between the original Qwen and this model. There isn’t. It is a hybrid and its own thing. There is no such thing as “base capability,” the model is not two separate pieces that can be judged independently. You can only evaluate the model as a whole. Your comment is just incredibly bizarre to respond to because you are referring to non-existent abstractions and not actually speaking of anything concretely real.

                The model is neither Qwen nor DeepSeek R1; it is DeepSeek R1 Qwen Distill, as the name says. It would be like saying it’s false advertising to say a mule is a hybrid of a donkey and a horse because the “base capabilities” is a donkey and so it has nothing to do with horses, and it’s really just a donkey at the end of the day. The statement is so bizarre I just do not even know how to address it. It is a hybrid, it’s its own distinct third thing that is a hybrid of them both. The model’s capabilities can only be judged as it exists, and its capabilities differ from Qwen and the original DeepSeek R1 as actually scored by various metrics.

                Do you not know what fine-tuning is? It refers to actually adjusting the weights in the model, and it is the weights that define the model. And this fine-tuning is being done alongside DeepSeek R1, meaning it is being adjusted to take on capabilities of R1 within the model. It gains R1 capabilities at the expense of Qwen capabilities as DeepSeek R1 Qwen Distill performs better on reasoning tasks but actually not as well as baseline models on non-reasoning tasks. The weights literally have information both of Qwen and R1 within them at the same time.

                Speaking of its “base capabilities” is a meaningless floating abstraction which cannot be empirically measured and doesn’t refer to anything concretely real. It only has its real concrete capabilities, not some hypothetical imagined capabilities. You accuse them of “marketing” even though it is literally free. All DeepSeek sells is compute to run models, but you can pay any company to run these distill models. They have no financial benefit for misleading people about the distill models.

                You genuinely are not making any coherent sense at all, you are insisting a hybrid model which is objectively different and objectively scores and performs differently should be given the exact same name, for reasons you cannot seem to actually articulate. It clearly needs a different name, and since it was created utilizing the DeepSeek R1 model’s distillation process to fine-tune it, it seems to make sense to call it DeepSeek R1 Qwen Distill. Yet for some reason you insist this is lying and misrepresenting it and it actually has literally nothing to do with DeepSeek R1 at all and it should just be called Qwen and we should pretend it is literally the same model despite it not being the same model as its training weights are different (you can do a “diff” on the two model files if you don’t believe me!) and it performs differently on the same metrics.

                There is simply no rational reason to intentionally want to mislabel the model as just being Qwen and having no relevance to DeepSeek R1. You yourself admitted that the weights are trained on R1 data so they necessarily contain some R1 capabilities. If DeepSeek was lying and trying to hide that the distill models are based on Qwen and Llama, they wouldn’t have literally put that in the name to let everyone know, and released a paper explaining exactly how those were produced.

                It is clear to me that you and your other friends here have some sort of alternative agenda that makes you not want to label it correctly. DeepSeek is open about the distill models using Qwen and Llama, but you want them to be closed and not reveal that they also used DeepSeek R1. The current name for it is perfectly fine, and pretending it is just a Qwen model (or Llama, for the other distilled versions) is straight-up misinformation; anyone who downloads the models and runs them themselves will clearly see immediately that they perform differently. It is a hybrid model correctly called what they are: DeepSeek R1 Qwen Distill and DeepSeek R1 Llama Distill.

          • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · ↑5 ↓1 · 19 hours ago

            I’m running deepseek-r1:14b-qwen-distill-fp16 locally and it produces really good results I find. Like yeah it’s a reduced version of the online one, but it’s still far better than anything else I’ve tried running locally.

              • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · ↑2 ↓1 · 7 hours ago

                The main difference is speed and memory usage. Qwen is a full-sized, high-parameter model while qwen-distill is a smaller model created using knowledge distillation to mimic qwen’s outputs. If you have the resources to run qwen fast then I’d just go with that.

                • morrowind@lemmy.ml · ↑1 · 34 minutes ago

                  I think you’re confusing the two. I’m talking about the regular Qwen before it was fine-tuned by DeepSeek, not the regular DeepSeek.

            • stink@lemmygrad.ml · ↑2 · 17 hours ago

              It’s so cute when Chinese is sprinkled in randomly, hehe. My little bilingual robot in my PC.

    • ReversalHatchery@beehaw.org · ↑4 ↓17 · 22 hours ago

      What??? Whoever wrote this sounds like he has 0 understanding of how it works. There is no “more privacy-friendly version” that could be developed, the models are already out and you can run the entire model 100% locally. That’s as privacy-friendly as it gets.

      Unfortunately it is you who has 0 understanding of it. Read my comment below. Tl;dr: good luck having the hardware.

      • simple@lemm.ee · ↑17 ↓1 · edited · 22 hours ago

        I understand it well. It’s still relevant to mention that you can run the distilled models on consumer hardware if you really care about privacy. 8GB+ VRAM isn’t crazy, especially if you have a ton of unified memory on macbooks or some Windows laptops releasing this year that have 64+GB unified memory. There are also websites re-hosting various versions of Deepseek like Huggingface hosting the 32B model which is good enough for most people.

        Instead, the article is written like there is literally no way to use Deepseek privately, which is literally wrong.
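The claim above about consumer hardware can be sanity-checked with the same kind of back-of-envelope estimate; the 1.2× overhead factor for KV cache and activations is an assumption for illustration.

```python
# Back-of-envelope check that 4-bit quantized distill models fit consumer
# hardware. The 1.2x overhead factor (KV cache, activations) is an assumption.

def model_memory_gb(params_billions: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Weights at the given bit width plus ~20% serving overhead, in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 * overhead / 1e9

print(round(model_memory_gb(8, 4), 1))   # 4.8 -> the 8B distill fits in 8GB VRAM
print(round(model_memory_gb(32, 4), 1))  # 19.2 -> the 32B distill wants ~24GB
```

By this estimate the 8B distill fits the 8GB+ VRAM mentioned above, and the 32B distill is within reach of high-end consumer GPUs or 64GB+ unified memory.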

        • superglue@lemmy.dbzer0.com · ↑2 · 20 hours ago

          So I’ve been interested in running one locally but honestly I’m pretty confused what model I should be using. I have a laptop with a 3070 mobile in it. What model should I be going after?

      • v_krishna@lemmy.ml · ↑3 · 18 hours ago

        Obviously you need lots of GPUs to run large deep learning models. I don’t see how that’s a fault of the developers and researchers, it’s just a fact of this technology.

      • lily33@lemm.ee · ↑2 · 20 hours ago

        There are already other providers like Deepinfra offering DeepSeek. So while the average person (like me) couldn’t run it themselves, they do have alternative options.

      • azron@lemmy.ml · ↑2 ↓1 · edited · 20 hours ago

        Downvotes be damned, you are right to call out the parent; they clearly don’t articulate their point in a way that confirms they actually understand what is going on, and how an open-source model can still have privacy implications if the masses use the company’s hosted version.

  • pineapple@lemmy.ml · ↑42 · 20 hours ago

    OpenAI, Google, and Meta, for example, can push back against most excessive government demands.

    Sure they “can” but do they?

    • geneva_convenience@lemmy.ml · ↑7 · 8 hours ago

      They cannot. When big daddy FBI knocks on the door and you get that forced NDA, you will build in backdoors and comply with anything the US government tells you.

      Even then, the US might want you to shut down because they want to control your company.

      TikTok.

    • davel@lemmy.ml · ↑21 · 13 hours ago

      “Pushing back against the government” doesn’t even make sense. These people are oligarchs. They largely are the government. Who attended Trump’s inauguration? Who hosted Trump’s inauguration party? These US tech oligarchs.

    • HiddenLayer555@lemmy.ml · ↑27 · edited · 17 hours ago

      Why do that when you can just score a deal with the government to give them whatever information they want for sweet perks like foreign competitors getting banned?

  • AustralianSimon@lemmy.world · ↑24 · 23 hours ago

    To be fair it’s correct, but it’s poor writing to skip the self-hosted component. These articles target the company, not the model.

  • lemmus@szmer.info · ↑6 ↓31 · 9 hours ago

    Guys, I know OpenAI is not in the clear; it’s as bad as Deepseek and even worse. BUT you have to realize that most people don’t give a fuck about running Deepseek locally; they just download the Deepseek app and use it, which is even more privacy-intrusive than ClosedAI. Giving information to China when you live in the West is like giving Russians information when you live in Ukraine. We are in constant conflict with China, because we are democratic and they are communist, and we cannot just give them our data for free. Therefore I have to admit PROTON IS RIGHT about Deepseek being “deepsneak”.

    • Naia@lemmy.blahaj.zone · ↑27 ↓3 · 7 hours ago

      As a queer person I don’t really care at this point if China or Russia is tracking me. They aren’t the ones who are currently stripping me and others of rights and so many other things.

      I don’t trust any governments on this front, but the government I live under is way more of a concern.

      • Nalivai@lemmy.world · ↑7 ↓13 · 7 hours ago

        Russia specifically is a big part of why Trump is in power. They weren’t the sole contributors, but they definitely helped a lot. And they achieved it by buying, stealing, and collecting data on people and running targeted misinformation campaigns.

        • wholookshere@lemmy.blahaj.zone · ↑6 ↓1 · 7 hours ago

          That’s… not actually a response to what was said?

          Sure, that’s all fine and dandy. But it doesn’t change the point that was being made.

          The election happened. Here and now, Russia and China tracking me is no different than the US. They’re all authoritarian governments hell bent on stripping rights away.

          Now I’m not the same person you replied to. I’m in Canada, so I’m wary of all of them. But if I was in the States, I’d RATHER give my data to an adversary that won’t do much with it, as opposed to the current government hellbent on making life for me and my trans siblings as hard and difficult as possible.

    • SatanClaus@lemmy.dbzer0.com · ↑11 ↓1 · 7 hours ago

      Propaganda got ya good bud. Sure. It’s important but Jesus. Lol. ChatGPT does the same shit but doesn’t let me run it locally. Fuck ChatGPT

    • cmhe@lemmy.world · ↑2 ↓1 · 6 hours ago

      China is not communist; it is a market-capitalist, one-party, highly authoritarian state. “Socialism” and “communism” are just used to make it sound better and more legitimate than it is.