

This isn’t really true: a lot of the newer MoE models run just fine on a CPU paired with gobs of RAM. They won’t be as fast as on a GPU, but getting 128GB+ of VRAM is out of reach for most people.
You can even run DeepSeek R1 671B (Q8) on a Xeon or EPYC with 768GB+ of RAM, at 4-8 tokens/sec depending on configuration. A system that supports this is at least an order of magnitude cheaper than a GPU setup capable of running the same model.
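For what it’s worth, a CPU-only run like that is easy to sketch with llama-cpp-python (the llama.cpp bindings); the model path, thread count, and context size below are placeholders, not a tuned configuration:

```python
# Minimal CPU-only inference sketch using llama-cpp-python.
# The GGUF path, thread count, and context size are placeholders -- tune for your box.
from llama_cpp import Llama

llm = Llama(
    model_path="/models/DeepSeek-R1-Q8_0.gguf",  # hypothetical local path
    n_gpu_layers=0,   # 0 = keep every layer on the CPU
    n_threads=64,     # roughly one thread per physical core on a big Xeon/EPYC
    n_ctx=8192,       # context window; a larger context needs more RAM
)

out = llm("Explain mixture-of-experts inference in one paragraph.", max_tokens=200)
print(out["choices"][0]["text"])
```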
This is false: Mistral Small 24B at Q4_K_M quantization is about 15GB, and Q8 is about 26GB. A 3090/4090/5090 with 24GB, or two cards with 16GB (I recommend the 4060 Ti 16GB), will handle this model fine, and in a single computer. Like others have said, 10GbE would be a huge bottleneck, and it’s simply not necessary to distribute a 24B model across multiple machines.
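Rough math backs those file sizes up, assuming ~4.85 bits/weight for Q4_K_M and ~8.5 for Q8_0 (approximate figures that ignore file overhead):

```python
# Back-of-envelope GGUF size estimate: parameters * bits_per_weight / 8 bytes.
# Bits-per-weight values are approximations for the mixed-precision quants.
params = 24e9  # Mistral Small 24B, approximate parameter count
for name, bpw in {"Q4_K_M": 4.85, "Q8_0": 8.5}.items():
    size_gb = params * bpw / 8 / 1e9
    print(f"{name}: ~{size_gb:.1f} GB")
# Prints roughly 14.5 GB and 25.5 GB -- in the ballpark of the 15GB / 26GB
# figures above (real files carry some extra metadata and embedding overhead).
```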