

  • Unfortunately Nvidia is, by far, the best choice for local LLM coder hosting, and there are basically two tiers:

    • Buy a used 3090, limit the clocks to like 1400 MHz, and then host Qwen 2.5 Coder 32B.

    • Buy a used 3060, host Arcee Medius 14B.

    Both of these will expose an OpenAI-compatible endpoint (see the sketch below for hitting it from a client).

    Run TabbyAPI instead of ollama, as it’s far faster and more VRAM-efficient.

    You can use AMD, but the setup is more involved: the kernel has to be compatible with the ROCm package, you need a 7000-series card, and there are some extra hoops for TabbyAPI compatibility.

    Aside from that, an Arc B570 is not a terrible option for 14B coder models.
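
    To show what “expose an OpenAI endpoint” buys you, here’s a minimal sketch of pointing the standard openai Python client at a local TabbyAPI server. The port (5000), API key, and model name are assumptions; swap in whatever your actual config uses.

    ```python
    # Minimal sketch: query a local TabbyAPI (or any OpenAI-compatible) server.
    # Assumptions: the server listens on localhost:5000 and is loaded with a
    # Qwen 2.5 Coder quant; adjust base_url, api_key, and model to your setup.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:5000/v1",  # local endpoint, not api.openai.com
        api_key="unused-local-key",           # TabbyAPI reads its real key from its own config
    )

    response = client.chat.completions.create(
        model="Qwen2.5-Coder-32B-Instruct-exl2",  # hypothetical model id
        messages=[{"role": "user", "content": "Write a function that reverses a linked list."}],
        max_tokens=512,
    )
    print(response.choices[0].message.content)
    ```

    Because the API surface is OpenAI’s, the same snippet works against llama.cpp’s server or vLLM just by changing base_url.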



  • No, all the weights, all the “data”, essentially has to be in RAM. If you “talk to” an LLM on your GPU, it is not making any calls to the internet; it makes a pass through all the weights every time a word is generated.

    There are systems that augment the prompt with external data (RAG is one term for this), but fundamentally the model itself is closed; the retrieved text only ever reaches it as part of the prompt (toy sketch below).
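
    As a toy sketch of that retrieve-then-prompt pattern (the documents and names here are made up, and real RAG systems use embeddings and a vector store, but the shape is the same):

    ```python
    # Toy sketch of RAG: retrieve external text, then stuff it into the prompt.
    # The "retriever" here is naive keyword overlap so the example stays
    # self-contained; real systems use embeddings and a vector store.
    documents = [
        "TabbyAPI serves quantized models over an OpenAI-compatible endpoint.",
        "The RTX 3090 has 24GB of GDDR6X memory.",
        "Strix Halo is an AMD APU with a wide LPDDR5X memory bus.",
    ]

    def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
        """Rank documents by keyword overlap with the query and keep the top k."""
        q_words = set(query.lower().split())
        ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
        return ranked[:k]

    query = "How much memory does the RTX 3090 have"
    context = "\n".join(retrieve(query, documents))

    # The model still only does a forward pass over its own weights; the
    # external data reaches it purely as extra prompt text.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    print(prompt)
    ```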



  • Oh, I didn’t mean “should cost $4000”, just “would cost $4000”.

    Ah, yeah. Absolutely. The situation sucks though.

    I wish the VRAM on video cards was modular; there’s so much e-waste generated by these bottlenecks.

    Not possible, unfortunately: the speeds are so high that GDDR physically has to be soldered. Future CPUs will be that way too. SO-DIMMs have already topped out at DDR5-5600, with tons of wasted power/voltage, and I believe desktop DIMMs are bumping against their limits too.

    But look into CAMM modules and LPCAMMs. My hope is that we will get modular LPDDR5X-8533 on AMD Strix Halo boards (rough bandwidth math below).
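
    For a rough sense of why bus width and LPDDR5X speed matter here, a back-of-the-envelope bandwidth sketch. The 256-bit Strix Halo bus is an assumption based on reported specs, and these are theoretical peaks, not measured numbers:

    ```python
    # Rough theoretical memory bandwidth: bus width in bytes * transfer rate (MT/s).
    # Strix Halo figures are assumptions from reported specs, not measurements;
    # real-world bandwidth will be lower in all cases.
    def bandwidth_gb_s(bus_width_bits: int, mt_per_s: int) -> float:
        return (bus_width_bits / 8) * mt_per_s / 1000  # GB/s

    configs = {
        "Dual-channel SO-DIMM DDR5-5600 (128-bit)": (128, 5600),
        "Strix Halo LPDDR5X-8533 (assumed 256-bit)": (256, 8533),
        "RTX 3090 GDDR6X (384-bit, 19500 MT/s)": (384, 19500),
    }

    for name, (bits, rate) in configs.items():
        print(f"{name}: ~{bandwidth_gb_s(bits, rate):.0f} GB/s")
    # ~90 GB/s vs ~273 GB/s vs ~936 GB/s: soldered, wide buses are what buy
    # the bandwidth that LLM token generation is bottlenecked on.
    ```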


  • GDDR is actually super cheap! I think it would only be like another $75 on paper to double the 4090’s VRAM to 48GB (like they do for pro cards already).

    Nvidia just doesn’t do it because of market segmentation. AMD doesn’t do it for… honestly, I have no idea why? They basically have no pro market to lose; the only explanation I can come up with is that their CEOs are colluding because they are cousins. And Intel doesn’t do it because they didn’t make a (consumer) GPU that was really worth it until the B580.


  • The issue with Macs is that Apple does price gouge for memory, your software stack is effectively limited to llama.cpp or MLX, and 70B-class LLMs do start to chug, especially at high context (rough speed math after this comment).

    Diffusion is kind of a different beast. It’s more compute-heavy, yes, but the “generally accessible” software stack is also much less optimized for Macs than it is for transformer LLMs.

    I view AMD Strix Halo as a solution to this: it’s a big IGP with a wide memory bus like a Mac, but it can run the same software stacks that discrete GPUs use (via ROCm rather than CUDA) for that speed/feature advantage… albeit with some quirks. But I’m willing to put up with that if AMD doesn’t price gouge it.
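
    To put a rough number on “start to chug”: token generation is mostly memory-bandwidth bound, so a hard ceiling is bandwidth divided by the bytes read per token. The figures below are illustrative assumptions (a ~40GB 4-bit 70B quant, ~400GB/s for an M3 Max class Mac, ~270GB/s for Strix Halo), not benchmarks:

    ```python
    # Back-of-the-envelope decode speed: each generated token requires reading
    # roughly all of the model's weights, so tokens/s <= bandwidth / model size.
    # All numbers are illustrative assumptions, not measurements; real speeds
    # are lower, and long context (KV-cache reads) drags them down further.
    def decode_ceiling(bandwidth_gb_s: float, weights_gb: float) -> float:
        return bandwidth_gb_s / weights_gb

    weights_gb = 40.0  # ~70B parameters at roughly 4.5 bits per weight

    for name, bw in {
        "M3 Max class Mac (~400 GB/s, assumed)": 400,
        "Strix Halo (~270 GB/s, assumed)": 270,
    }.items():
        print(f"{name}: ceiling ~{decode_ceiling(bw, weights_gb):.0f} tok/s for a 70B 4-bit quant")
    ```

    Single-digit ceilings like these are why 70B models feel sluggish on unified-memory machines, whatever the vendor.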


  • second-hand TPU

    From where? I keep a lookout for used Gaudi/TPU setups, but they’re basically impossible to find, and usually in huge full-server configs. I can’t find Xeon Max CPUs or Intel Max GPUs either.

    Also, Google’s software stack isn’t really accessible. TPUs are made for internal use at Google, not for resale.

    You can sometimes find used AMD MI100s or MI210s, but the go-to used server card is still the venerable Tesla P40.