• ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 9 hours ago

      What they’re actually panicking over is companies using a Chinese service instead of US ones. The threat is that DeepSeek becomes the standard everyone uses and gets entrenched; at that point nobody would want to switch to US services.

    • Corngood@lemmy.ml · edited · 5 hours ago

      I keep seeing this sentiment, but in order to run the model on a high-end consumer GPU, doesn’t it have to be reduced to like 1-2% of the size of the official one?

      Edit: I just did a tiny bit of reading, and I guess model size is a lot more complicated than I thought. I don’t have a good sense of how much quality is lost by running it locally.

      • skuzz@discuss.tchncs.de · 4 hours ago

        Just think of it this way: fewer digital neurons in smaller models mean a smaller “brain”. It will be less accurate, more vague, and make more mistakes.
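
A rough back-of-envelope sketch of the size question above (assumptions: the “official” model is the 671B-parameter DeepSeek-R1, and the local versions are the 7B–14B distilled variants DeepSeek published alongside it; the bytes-per-parameter figures are approximations that count weights only, ignoring activations and KV cache):

```python
# Hedged back-of-envelope sketch, not a benchmark.
# Parameter counts are DeepSeek's published ones; VRAM numbers
# count model weights only (no activations, no KV cache).

GIB = 1024**3  # bytes per GiB

models = {
    "DeepSeek-R1 (full)": 671e9,  # 671B-parameter MoE (~37B active per token,
                                  # but all weights must still fit in memory)
    "R1-Distill 14B": 14e9,       # distilled dense variant
    "R1-Distill 7B": 7e9,         # distilled dense variant
}

# Rough bytes per parameter at common quantization levels.
quant_bytes = {"fp16": 2.0, "8-bit": 1.0, "4-bit": 0.5}

full = models["DeepSeek-R1 (full)"]
for name, params in models.items():
    print(f"{name}: {params / full:.1%} of the full parameter count")
    for quant, bpp in quant_bytes.items():
        print(f"  ~{params * bpp / GIB:,.0f} GiB of weights at {quant}")
```

This reproduces the “1-2%” figure (7B/671B ≈ 1%, 14B/671B ≈ 2%) and shows why the distills fit on consumer cards: a 7B model at 4-bit is roughly 3–4 GiB of weights, while the full R1 needs on the order of 600+ GiB even at 8-bit.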