• 1 Post
  • 82 Comments
Joined 1 year ago
Cake day: March 22nd, 2024

  • Yeah, but most potatoes can run RimWorld, so we’re talking a difference between 2000 and 2500 fps. Not to mention that the game uses forking processes on Linux, which means saves happen in the background instead of freezing your entire game, so I’ll take that any day.

    I have an overclocked 7800X3D with 6000 MHz low-latency RAM and… I’m still majorly CPU-bound in big, modded colonies. TPS can drop below 180, or even below realtime (60 TPS), if I’m not careful with the game’s settings, especially during raids or with multiple maps loaded, and this causes major frametime spikes too.







  • Honestly, most LLMs suck at the full 128K. Look up benchmarks like RULER.

    In my personal tests over API, Llama 70B is bad out there. Qwen (and any fine-tune based on Qwen Instruct, with maybe an exception or two) not only sucks, but is impractical past 32K once its internal RoPE scaling kicks in. Even GPT-4 is bad out there, with Gemini and some other very large models being the only usable ones I found.

    So, ask yourself… Do you really need 128K? Because 32K-64K is a boatload of code with modern tokenizers, and that is perfectly doable on a single 24GB GPU like a 3090 or 7900 XTX, and that’s where models actually perform well.
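
    As a rough sanity check on how far 32K goes, here’s a quick token count with a Hugging Face tokenizer; the model ID and file path below are placeholders assumed for illustration, not anything from the setup above:

    ```python
    # Rough sketch: count how many tokens a source file costs with a modern tokenizer.
    # The model ID and file path are placeholders -- swap in whatever you actually run.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-32B-Instruct")

    with open("my_project.py", "r", encoding="utf-8") as f:
        source = f.read()

    n_tokens = len(tok.encode(source))
    print(f"{n_tokens} tokens")  # code tends to land around 3-4 characters per token,
                                 # so a 32K budget covers a surprisingly large chunk of a codebase
    ```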


  • Late to this post, but shoot for an AMD Strix Halo or Nvidia Digits mini PC.

    Prompt processing is just too slow on Apple, and the Nvidia/AMD backends are so much faster with long context.

    Otherwise, your only sane option for 128K context is a server with a bunch of big GPUs.

    Also… what model are you trying to use? You can fit Qwen coder 32B with like 70K context on a single 3090, but honestly it’s not good above 32K tokens anyway.


    Unfortunately Nvidia is, by far, the best choice for local LLM coder hosting, and there are basically two tiers:

    • Buy a used 3090, limit the clocks to like 1400 MHz, and then host Qwen 2.5 coder 32B.

    • Buy a used 3060, host Arcee Medius 14B.

    Both of these will expose an OpenAI-compatible endpoint.

    Run TabbyAPI instead of ollama, as it’s far faster and more VRAM-efficient.
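
    For reference, once TabbyAPI (or any OpenAI-compatible server) is running, pointing the standard OpenAI Python client at it looks roughly like this; the port and model name are assumptions, so match them to your own config:

    ```python
    # Minimal sketch of hitting a local OpenAI-compatible endpoint.
    # Port 5000 and the model name are assumptions -- match them to your server config.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:5000/v1",  # local endpoint instead of api.openai.com
        api_key="not-needed-locally",         # most local servers accept a dummy key
    )

    resp = client.chat.completions.create(
        model="Qwen2.5-Coder-32B-Instruct",   # whatever model the server has loaded
        messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
        max_tokens=256,
    )
    print(resp.choices[0].message.content)
    ```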

    You can use AMD, but the setup is more involved: the kernel has to be compatible with the ROCm packages, and you need a 7000-series card plus some extra hoops for TabbyAPI compatibility.

    Aside from that, an Arc B570 is not a terrible option for 14B coder models.











  • brucethemoose@lemmy.world to Selfhosted@lemmy.world · Can’t relate at all.

    No, all the weights, essentially all the “data”, have to be in RAM. If you “talk to” an LLM on your GPU, it is not making any calls to the internet; it is making a pass through all the weights every time a word is generated.

    There are systems to augment the prompt with external data (RAG is one term for this), but fundamentally the system is closed.
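
    To make that concrete, here’s a minimal greedy-decoding sketch (the small model ID is just an assumed placeholder) showing that each generated token is one more forward pass through the locally loaded weights, with no network calls anywhere:

    ```python
    # Minimal sketch: every generated token = one forward pass through the local weights.
    # The model ID is an assumed placeholder; any small causal LM works for illustration.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2.5-0.5B-Instruct"
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

    ids = tok("The capital of France is", return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits                      # full pass over all the weights in (V)RAM
            next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
            ids = torch.cat([ids, next_id], dim=-1)         # append the new token and repeat

    print(tok.decode(ids[0], skip_special_tokens=True))
    ```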


  • brucethemoose@lemmy.world to Selfhosted@lemmy.world · Can’t relate at all.

    Oh, I didn’t mean “should cost $4000”, just “would cost $4000”.

    Ah, yeah. Absolutely. The situation sucks though.

    I wish that the VRAM on video cards was modular; there’s so much e-waste generated by these bottlenecks.

    Not possible, the speeds are so high that GDDR physically has to be soldered. Future CPUs will be that way too, unfortunately. SO-DIMMs have already topped out at 5600, with tons of wasted power/voltage, and I believe desktop DIMMs are bumping against their limits too.

    But look into CAMM modules and LPCAMMs. My hope is that we will get modular LPDDR5X-8533 on AMD Strix Halo boards.