How to Run DeepSeek R1 671B Locally on a $2000 EPYC Server (digitalspaceport.com)
405 points by walterbell 16 hours ago | 237 comments





This runs the 671B model in Q4 quantization at 3.5-4.25 TPS for $2K on a single socket Epyc server motherboard using 512GB of RAM.

This [1] X thread runs the 671B model in the original Q8 at 6-8 TPS for $6K on a dual socket Epyc server motherboard with 768GB of RAM. I think this could be made cheaper by getting slower RAM, but since this is RAM bandwidth limited that would likely reduce TPS. I’d be curious if this would just be a linear slowdown proportional to the RAM MHz or whether CAS latency plays into it as well.

[1] https://x.com/carrigmat/status/1884244369907278106?s=46&t=5D...


I've been running the unsloth 200GB dynamic quantisation with 8k context on my 64GB Ryzen 7 5800G. CPU and iGPU utilization were super low, because it basically has to read the entire model from disk. (Looks like it needs ~40GB of actual memory that it cannot easily mmap from disk.) With a Samsung 970 Evo Plus that gave me 2.5GB/s read speed. That came out at 0.15 tps. Not bad for completely underspecced hardware.

Given the model has so few active parameters per token (~40B), it is likely that just being able to hold it in memory removes the largest bottleneck. I guess with a single consumer PCIe 4.0 x16 graphics card you could get at most 1 tps just because of the PCIe transfer speed? Maybe CPU processing can be faster simply because DDR transfer is faster than transfer to the graphics card.
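
A rough way to put numbers on that intuition, as a sketch with assumed round figures (treating each generated token as needing a fresh read of the active experts; not measurements):

  # Ceiling on tokens/sec if the active experts' weights must cross a given
  # link once per generated token. Assumes ~37B of 671B parameters active per
  # token, i.e. ~5.5% of the file per token.
  def tps_ceiling(link_gb_s, model_gb_on_disk, active_fraction=37/671):
      return link_gb_s / (model_gb_on_disk * active_fraction)

  print(tps_ceiling(2.5, 200))   # 200GB dynamic quant from NVMe -> ~0.23
  print(tps_ceiling(32, 200))    # same quant over PCIe 4.0 x16  -> ~2.9
  print(tps_ceiling(51, 200))    # dual-channel DDR4-3200        -> ~4.6
  # Real numbers land well below these ceilings (0.15 tps from disk in the
  # parent comment), since reads aren't perfectly sequential and other
  # overheads apply, but the ordering matches the intuition above.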


To add another datapoint, I've been running the 131GB (140GB on disk) 1.58-bit dynamic quant from Unsloth with 4k context on my 32GB Ryzen 7 2700X (8 cores, 3.70 GHz), and achieved exactly the same speed - around 0.15 tps on average, sometimes dropping to 0.11 tps, occasionally going up to 0.16 tps. Roughly 1/2 of your specs, roughly 1/2 smaller quant, same tps.

I've had to disable the overload safeties in LM Studio and tweak with some loader parameters to get the model to run mostly from disk (NVMe SSD), but once it did, it also used very little CPU!

I tried offloading to GPU, but my RTX 4070 Ti (12GB VRAM) can take at most 4 layers, and it turned out to make no difference in tps.

My RAM is DDR4, maybe switching to DDR5 would improve things? Testing that would require replacing everything but the GPU, though, as my motherboard is too old :/.


For a 131GB model, the biggest difference would be to fit it all in RAM, e.g. get 192GB of RAM. Sorry if this is too obvious, but it's pointless to run an LLM if it doesn't fit in RAM, even if it's an MoE model. And also obviously, it may take a server motherboard and CPU to fit that much RAM.

I wonder if one could just replicate the "Mac mini LLM cluster" setup over Ethernet of some form and 128GB per node of DDR4 RAM. Used DDR4 RAM with likely dead bits is dirt cheap, but I would imagine that there will be challenges linking systems together.

More channels > faster ram.

Some math:

DDR5-6000 is 3000 MHz x 2 (double data rate) x 64 bits / 8 for bytes = 48,000 MB/s = 48 GB/s per channel

DDR3-1866 is 933 MHz x 2 x 64 / 8 / 1000 = 14.93 GB/s per channel. If you have 4 channels, that is 4 x 14.93 = 59.72 GB/s
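
The same arithmetic as a small sketch (peak theoretical figures; measured STREAM numbers come in well below these, as the EPYC benchmark comment further down shows):

  # Peak bandwidth = transfer rate (MT/s) x 8 bytes per 64-bit channel x channels.
  def peak_gb_s(mt_s, channels):
      return mt_s * 8 * channels / 1000

  print(peak_gb_s(6000, 2))   # DDR5-6000, dual channel ->  96.0 GB/s
  print(peak_gb_s(1866, 4))   # DDR3-1866, quad channel ->  59.7 GB/s
  print(peak_gb_s(3200, 8))   # DDR4-3200, 8-ch EPYC    -> 204.8 GB/s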


I wonder if the now-abandoned Intel Optane drives could help with this. They had very low latency, high IOPS, and decent throughput. They made RAM modules as well. A RAM disk made of them might be faster.

Intel PMem really shines for things you need to be non-volatile (preserved when the power goes out) like fast changing rows in a database. As far as I understand it, "for when you need millions of TPS on a DB that can't fit in RAM" was/is the "killer app" of PMem.

Which suggests it wouldn't be quite the right fit here -- the precomputed constants in the model aren't changing, nor do they need to persist.

Still, interesting question, and I wonder if there's some other existing bit of tech that can be repurposed for this.

I wonder if/when this application (LLMs in general) will slow down and stabilize long enough for anything but general purpose components to make sense. Like, we could totally shove model parameters in some sort of ROM and have hardware offload for a transformer, IF it wasn't the case that 10 years from now we might be on to some other paradigm.


I get around 4-5 t/s with the unsloth 1.58-bit quant on my home server that has 2x 3090s and 192GB of DDR5 on a Ryzen 9; usable but slow.

how much context size?

Just 4K. Because DeepSeek doesn't allow for the use of flash attention, you can't run a quantised KV cache.

I imagine you can get more by striping drives. Depending on what chipset you have, the CPU should handle at least 4. Sucks that no AM4 APU supports PCIe 4 while the platform otherwise does.

> I’d be curious if this would just be a linear slowdown proportional to the RAM MHz or whether CAS latency plays into it as well.

Per o3-mini, the blocked gemm (matrix multiply) operations have very good locality and therefore MT/s should matter much more than CAS latency.


3x the price for less than 2x the speed increase. I don't think the price justifies the upgrade.

Q4 vs Q8.

> TacticalCoder 14 minutes ago [dead]
>
> TFA says it can bump the spec to 768 GB but that it's then more like $2500 than $2000. At 768 GB that'd be the full, 8 bit, model.
>
> Seems indeed like a good price compared to $6000 for someone who wants to hack a build.
>
> I mean: $6K is doable but I take it many who'd want to build such a machine for fun would prefer to only fork out $2.5K.

I am not sure why TacticalCoder's comment was downvoted to oblivion. I would have upvoted if the comment wasn't already dead.


You probably already know/have done this but just in case (or if someone else reading along isn't aware): if you click the timestamp "<x> ago" text for a comment it forces the "vouch" button to appear.

I've also vouched as it doesn't seem like a comment deserving to be dead at all. For at least this instant it looks like that was enough vouches to restore the comment.


I don't know the specifics of this, but vouching "against" the hive mind leads to your vouches not doing anything any more. I assume that either there's some kind of threshold after which you're shadowbanned from vouching or perhaps there's a kind of vouch weight and "correctly" vouching (comment is not re-flagged) increases it, while wrongly vouching (comment remains flagged or is re-flagged) decreases your weight.

We sometimes take vouching privileges away from accounts that repeatedly vouch for comments that are bad for HN in the sense that they break the site guidelines. That's necessary in order for the system to function—you wouldn't believe some of the trollish and/or abusive stuff that some people vouch for. (Not to mention the usual tricks of using multiple accounts, etc.) But it's nothing to do with the hive mind and it isn't done by the software.

It wasn't downvoted - rather, the account is banned (https://news.ycombinator.com/item?id=42653007) and comments by banned accounts are [dead] by default unless users vouch for them (as described by zamadatix) or mods unkill them (which we do when we see good comments by banned accounts).

Btw, I agree that that was a good comment that deserved vouching! But of course we have to ban accounts because of the worst things they post, not the best.


I mean, nothing ever actually scales linearly, right?

TFA says it can bump the spec to 768 GB but that it's then more like $2500 than $2000. At 768 GB that'd be the full, 8 bit, model.

Seems indeed like a good price compared to $6000 for someone who wants to hack a build.

I mean: $6K is doable but I take it many who'd want to build such a machine for fun would prefer to only fork out $2.5K.


The Q8 model will likely slow this down to 50%, probably not a very useful speed. The 6k setup will probably do 10-12t/s at Q4.

Is there a source that unrolls that without creating an account?


Thank you! Domain seems easy to remember, too.

Online, R1 costs what, $2/MTok?

This rig does >4 tok/s, which is ~15-20 ktok/hr, or $0.04/hr when purchased through a provider.

You're probably spending $0.20/hr on power (1 kW) alone.

Cool achievement, but to me it doesn't make a lot of sense (besides privacy...)


> Cool achievement, but to me it doesn't make a lot of sense (besides privacy...)

I would argue that is enough and that this is awesome. It was a long time ago I wanted to do a tech hack like this much.


Well thinking about it a bit more, it would be so cool if you could

A) somehow continuously interact with the running model, ambient-computing style. Say have the thing observe you as you work, letting it store memories.

B) allowing it to process those memories when it chooses to/whenever it's not getting any external input/when it is "sleeping" and

C) (this is probably very difficult) have it change it's own weights somehow due to whatever it does in A+B.

THAT, in a privacy friendly self-hosted package, i'd pay serious money for


I imagine it could solve crimes if it watched millions of hours of security footage…scary thought. Possibly it could arrest us before we even commit a crime through prediction like that black mirror episode.

Oh, you're thinking of "Hated in the Nation"? More relevant would possibly be "Minority Report" (set in 2054) and Hulu's "Class of '09", in which the FBI starts deploying a crime prediction AI in their version of 2025.

Quite scary. As the meme has it, it seems that we're getting ready to create the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus.


> doesn't make a lot of sense (besides privacy...)

Privacy is worth very much though.


What privacy benefit do you get running this locally vs renting a baremetal GPU and running it there?

Wouldn't that be much more cost-effective?

Especially when you inevitably want to run a better / different model in the near future that would benefit from different hardware?

You can get similar Tok/sec on a single RTX 4090 - which you can rent for <$1/hr.


Definitely but when you can run this in places like Azure with tight contracts it makes little sense except for the ultra paranoid.

Considering the power of three letter agencies in the USA and the complete unhingedness of the new administration, I would not trust anything to a contract.

Sure I am certain there is a possibility but unless you have airgapped your local instance and locked down your local network securely it does not really matter.

It’s cool to run things locally and it will get better as time goes on but for most use cases I don’t find it worth it. Everyone is different and folks that enjoy the idea of local network secure can run it locally.


Even a badly operated on-prem system has the advantage that if someone breaks in, they are taking a risk of getting caught. Whereas with Azure the TLAs could just hoover up everything there without high risk of customers finding out (assuming they can gag MS). Given the reporting about NSA's "collect everything" modus operandi this doesn't seem very far fetched.

hmm do we still have to pretend that this is some sort of conspiracy theory? really? after Snowden? it doesn't "seem very far fetched", it's a fact

It's less "possibility" and more "certainty."

can we even trust the hardware?

The hardware can be airgapped.

Well you don't need to worry unless you are already on the list.

These days getting on a list may require as little as "is trans" or "has immigrant parents."

or "your competitor donated to Musk/Trump campaign"

no, just being speculative... about that.

[flagged]


What does that even mean? Shame these newer accounts post such low-intelligence reaction replies.

For most use cases, you can consider GCP/AWS/Azure secure.


They are alluding to it not being secure against state actors. The distrust in government isn’t novel to this discussion, so it should come as no surprise on HN. There is also a general fear of censorship, which should be directed more toward the base model owners and not toward cloud providers. I still think doing this in the cloud makes more sense initially, but I see the appeal of a home model that is decoupled from the wider ecosystem.

You could absolutely install 2kW of solar for probably around $2-4k and then at worst it turns your daytime usage into $0. I also would be surprised if this was pulling 1kW in reality; I would want to see an actual measurement of what it is realistically pulling at the wall.

I believe it was an 850w PSU on the spec sheet?


Marginal cost $0, 2kw solar + inverter + battery + install is worth more than this rig

Quick note that solar power doesn't have zero cost.

It could have zero marginal cost, right? In particular, if you over-provisioned your solar installation already anyway, most of the time it should be producing more energy than you need.

And in winter, depending on the region, it might generate 0kW

Or, in my case, currently 32.9 W.

Don’t worry, you can charge an iPhone!

How would it use 1kW? Socket SP3 tops out at 280W and the system in the article has an 850W PSU, so I'm not sure what I'm missing.

I assume that the parent just rounded 850W up to 1kW, no?

Yeah, I was vigorously waving hands. Even at 200W and 10 cents/kWh you'd need to run this a LONG time to break even.

This gets you the (arguably) most powerful AI in the world running completely privately, under your control, for around $2000. There are many use cases where you wouldn't want to send your prompts and data to a 3rd party. A lot of businesses have a data export policy where you are just not allowed to use company data anywhere but internal services. This is actually insanely useful.

The point is running locally, not efficiently

> You're probably spending $0.20/hr on power (1 kW) alone.

For those that aren't following - means you're spending ~$10/MTok on power alone (compared to $2/MTok hosted).
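
The same figure as a sketch, parameterized so you can swap in the thread's different assumptions (1 kW is hand-waved above; the builder reports ~260W under load further down):

  def usd_per_mtok(watts, usd_per_kwh, tok_per_s):
      cost_per_hour = watts / 1000 * usd_per_kwh
      mtok_per_hour = tok_per_s * 3600 / 1e6
      return cost_per_hour / mtok_per_hour

  print(usd_per_mtok(1000, 0.20, 4.0))  # ~13.9 $/Mtok
  print(usd_per_mtok(260, 0.20, 4.0))   #  ~3.6 $/Mtok at the measured draw
  print(usd_per_mtok(260, 0.06, 4.0))   #  ~1.1 $/Mtok with cheap power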


"besides privacy"

lol.

Yeah, just besides that one little thing. We really are a beaten down society aren't we.


Most people value privacy, but they’re practical about it.

The odds of a cloud server leaking my information are non-zero, but very small. A government entity could theoretically get to it, but they would be bored to tears because I have nothing of interest to them. So practically speaking, the threat surface of cloud hosting is an acceptable tradeoff for the speed and ease of use.

Running things at home is fun, but the hosted solutions are so much faster when you actually want to get work done. If you’re doing some secret sensitive work or have contract obligations then I could understand running it locally. For most people, trying to secure your LLM interactions from the government isn’t a priority because the government isn’t even interested.

Legally, the government could come and take your home server too. People like to have fantasies about destroying the server during a raid or encrypting things, but practically speaking they’ll get to it or lock you up if they want it.


What about privacy from enriching other entities through contributions to their models, with thoughts conceived from your own mind? A non-standard way of thinking about privacy, sure. But I look forward to the ability to improve an offline model of my own with my own thoughts and intellect, rather than giving it away to OpenAI/Microsoft/Google/Apple/DeepSeek/whoever.

If the odds are so small, how come there are numerous password dumps? Your credentials may well be in them.

There is something about this comment that is so petty that I had to re-read it. Nice dunk, I guess.

Privacy is a relatively new concept, and the idea that individuals are entitled to complete privacy is an even newer and more radical one.

I am as pro-privacy as they come, but let’s not pretend that government and corporate surveillance is some wild new thing that just appeared. Read Horace’s Satires for insight into how non-private private correspondence often was in Ancient Rome.


It's a bit of both. Village societies don't have a lot of privacy. But they also don't make it possible for powerful individuals to datamine personal information of millions.

Most of us have more privacy than 200 years ago in some ways, and much less privacy in other ways.


I think the main point of a local model is privacy, setting aside hobby and tinkering.

I think the privacy should be the whole point. There's always a price to pay. I'm optimistic that soon you'll be able to get better speeds with less hardware.

The system idles at 60w and running hits 260w.

How is it that cloud LLMs can be so much cheaper? Especially given that local compute, RAM, and storage are often orders of magnitude cheaper than cloud.

Is it possible that this is an AI bubble subsidy where we are actually getting it below cost?

Of course for conventional compute cloud markup is ludicrous, so maybe this is just cloud economy of scale with a much smaller markup.


I think batch processing of many requests is cheaper. As each layer of the model is loaded into cache, you can put through many prompts. Running it locally you don't have that benefit.

My guess is two things:

1. Economies of scale. Cloud providers are using clusters in the tens of thousands of GPUs. I think they are able to run inference much more efficiently than you would be able to in a single cluster just built for your needs.

2. As you mentioned, they are selling at a loss. OpenAI is hugely unprofitable, and they reportedly lose money on every query.


The purchase price for an H100 is dramatically lower when you buy a few thousand at a time.

It is shared between users and better utilized and optimized.

"Sharing between users" doesn't make it cheaper. It makes it more expensive due to the inherent inefficiencies of switching user contexts. (Unless your sales people are doing some underhanded market segmentation trickery, of course.)

No, batched inference can work very well. Depending on architecture, you can get 100x or even more tokens out of the system if you feed it multiple requests in parallel.

Couldn't you do this locally just the same?

Of course that doesn't map well to an individual chatting with a chat bot. It does map well to something like "hey, laptop, summarize these 10,000 documents."


Isn't that just because they can get massive discounts on hardware buying in bulk (for lack of a proper term) + absorb losses?

All that, but also because they have those GPUs with crazy amounts of RAM and crazy bandwidth? So the TPS is that much higher, but in terms of power, I guess those boards run in the same ballpark of power used by consumer GPUs?

It's cheaper because you are unlikely to run your local AI at top capacity 24/7 so you have unused capacity which you are paying for.

The calculation shows it's cheaper even if you run local AI 24/7

They are specifically referring to usage of APIs where you just pay by the token, not by compute. In this case, you aren’t paying for capacity at all, just usage.

Privacy, for me, is a necessary feature for something like this.

And I think your math is off: $0.20 per kWh at 1 kW is $145 a month. I pay $0.06 per kWh. I've got what, 7 or 8 computers running right now, and my electric bill for that and everything else is around $100 a month, at least until I start using AC. I don't think the power usage of something like this would be significant enough for me to even shut it off when I wasn't using it.

Anyway, we'll find out, just ordered the motherboard.


Depends on where you live. The average in San Francisco is $0.29 per kWh.

> (besides privacy...)

that's the whole point of local models


What is a bit weird about AI currently is that you basically always want to run the best model, but the price of the hardware is a bit ridiculous. In the 1990s, it was possible to run Linux on scrappy hardware. You could also always run other “building blocks” like Python, Docker, or C++ easily.

But the newest AI models require an order of magnitude more RAM than my system or the systems I typically rent have.

So I’m curious, to people here: has this happened before in the history of software? Maybe computer games are a good example. There, people would also have to upgrade their systems to run the latest games.


Like AI, there were exciting classes of applications in the 70s, 80s and 90s that mandated pricier hardware. Anything 3D related, running multi-user systems, higher end CAD/EDA tooling, and running any server that actually got put under “real” load (more than 20 users).

If anything this isn’t so bad: $4K in 2025 dollars is an affordable desktop computer from the 90s.


The thing is I'm not that interested in running something that will run on a $4K rig. I'm a little frustrated by articles like this, because they claim to be running "R1" but it's a quantized version and/or it has a small context window... it's not meaningfully R1. I think to actually run R1 properly you need more like $250k.

But it's hard to tell because most of the stuff posted is people trying to do duct tape and baling wire solutions.


I can run the 671B-Q8 version of R1 with a big context on a used dual-socket Xeon I bought for about $2k with 768GB of RAM. It gets about 1-1.5 tokens/sec, which is fine to give it a prompt and just come back an hour or so later. To get to many 10s of tokens/sec, you would need >8 GPUs with 80GB of HBM each, and you're probably talking well north of $250k. For the price, the 'used workstation with a ton of DDR4' approach works amazingly well.

If you google, there is a $6k setup for the non-quantized version running like 3-4 tps.

Indeed, even design and prepress required quite expensive hardware. There was a time when very expensive Silicon Graphics workstations were a thing.

Of course it has. Coughs in SGI and advanced 3D and video software like PowerAnimator, Softimage, Flame. Hardware + software combo starting around 60k of 90's dollars, but to do something really useful with it you'd have to enter 100-250k of 90's dollars range.

> What is a bit weird about AI currently is that you basically always want to run the best model,

I think the problem is thinking that you always need to use the best LLM. Consider this:

- When you don't need correct output (such as when writing a blog post, there's no right/wrong answer), "best" can be subjective.

- When you need correct output (such as when coding), you always need to review the result, no matter how good the model is.

IMO you can get 70% of the value of high end proprietary models by just using something like Llama 8b, which is runnable on most commodity hardware. That should increase to something like 80% - 90% when using bigger open models such as the newly released "mistral small 3"


With o1 I had a hairy mathematical problem recently related to video transcoding. I explained my flawed reasoning to o1, and it was kind of funny in that it took roughly the same amount of time to figure out the flaw in my reasoning, but it did, and it also provided detailed reasoning with correct math to correct me. Something like Llama 8b would've been worse than useless. I ran the same prompt by ChatGPT and Gemini, and both gave me sycophantic confirmation of my flawed reasoning.

> When you don't need correct output (such as when writing a blog post, there's no right/wrong answer), "best" can be subjective.

This is like, everything that is wrong with the Internet in a single sentence. If you are writing a blog post, please write the best blog post you can, if you don't have a strong opinion on "best," don't write.


This isn’t the best comment I’ve seen on HN; you should delete it, or stop gatekeeping.

for coding insights / suggestions as you type, similar to copilot, i agree.

for rapidly developing prototypes or working on side projects, i find llama 8b useless. it might take 5-6 iterations to generate something truly useful. compared to say 1-shot with claude sonnet 3.5 or open ai gpt-4o. that’s a lot less typing and time wasted.


I'm not sure Linux is the best comparison; it was specifically created to run on standard PC hardware. We have user access to AI models for little or no monetary cost, but they can be insanely expensive to run.

Maybe a better comparison would be weather simulations in the 90s? We had access to their outputs in the 90s but running the comparable calculations as a regular Joe might've actually been impossible without a huge bankroll.


Or 3D rendering, or even particularly intense graphic design-y stuff I think, right? In the 90’s… I mean, computers in the $1k-$2k range were pretty much entry level, right?

The early 90's and digital graphic production. Computer upgrades could make intensive alterations interactive. This was true of Photoshop and Excel. There were many bottlenecks to speed. Upgrading a network of graphics machines from 10Mbit to 100Mbit networking did wonders for server-based workflows.

Adjusting for inflation, $2000 is about the same price as the first iMac, an entry level consumer PC at the time. Local AI is still pretty accessible to hobbyist level spending.

well, if there were, e.g., a model trained for coding - i.e. specialization as such, having models trained mostly for this or that - instead of everything incl. Shakespeare, the kitchen sink and the cockroach biology under it, that would make those runnable on much lower-end hardware. But there is only one, The-Big-Deal.. in many incarnations.

In the 90's it was really expensive to run 3D Studio or POVray. It could take days to render a single image. Silicon Graphics workstations could do it faster but were out of the budget of non professionals.

Raytracing decent scenes was a big CPU hog in the 80s/90s for me. I'd have to leave single frames running overnight.

Read “masters of doom”, they go into quite some detail on how Carmack got himself a very expensive work station to develop Doom/Quake.

We're finally entering an era where more memory is really needed. Small local AI models will be used for many things in the near future, requiring lots of memory. Even phones will need terabytes of fast memory in the future.

How were you running Docker in the 1990s?

> you basically always want to run the best model, but the price of the hardware is a bit ridiculous. In the 1990s, it was possible to run Linux on scrappy hardware. You could also always run other “building blocks” like Python, Docker, or C++ easily

= "When you needed to run common «building blocks» (such as, in other times, «Python, Docker, or C++» - normal fundamental software you may have needed), even scrappy hardware would suffice in the '90s"

As a matter of fact, people would upgrade foremost for performance.


Heh. I caught that too, and was going to say "I totally remember running Docker on Slackware on my 386DX40. I had to upgrade to 8MB of RAM. Good times."

I think it would be more interesting to do this with smaller models (33b-70b) and see if you could get 5-10 tokens/sec on a budget. I've desperately wanted something local that's around the same level as 4o, but I'm not in a hurry to spend $3k on an overpriced GPU or $2k on this.

Your best bet for 33B is already having a computer and buying a used RTX 3090 for <$1k. I don't think there's currently any cheap options for 70B that would give you >5. High memory bandwidth is just too expensive. Strix Halo might give you >5 once it comes out, but will probably be significantly more than $1k for 64 GB RAM.

With used GPUs do you have to be concerned that they're close to EOL due to high utilization in a Bitcoin or AI rig?

I guess it will be a bigger issue the longer it's been since they stopped making them, but most I've heard (including me) haven't had any issue. Crypto rigs don't necessarily break GPUs faster because they care about power consumption and run the cards at a pretty even temperature. What probably breaks first is the fans. You might also have to open the card up and repaste/repad them to keep the cooling under control.

awesome thanks!

M4 Mac with unified GPU RAM

Not very cheap though! But you get a quite usable personal computer with it...


Any that can run 70B at >5 t/s are >$2k as far as I know.

How does inference happen on a GPU with such limited memory compared with the full requirements of the model? This is something I’ve been wondering for a while

You can run a quantized version of the model to reduce the memory requirements, and you can do partial offload, where some of the model is on GPU and some is on CPU. If you are running a 70B Q4, that’s 40-ish GB including some context cache, and you can offload at least half onto a 3090, which will run its portion of the load very fast. It makes a huge difference even if you can’t fit every layer on the GPU.
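
For reference, a minimal partial-offload sketch using llama-cpp-python (one of several llama.cpp wrappers; LM Studio and Ollama expose the same knob). The model path and layer count are hypothetical placeholders - tune n_gpu_layers until VRAM is nearly full and leave the rest in system RAM:

  from llama_cpp import Llama

  llm = Llama(
      model_path="llama-70b-q4_k_m.gguf",  # hypothetical 70B Q4 quant, ~40 GB
      n_gpu_layers=40,                     # layers pushed to the 3090; -1 = all
      n_ctx=8192,                          # context length (uses extra VRAM)
  )
  print(llm("Explain partial GPU offload in one sentence.", max_tokens=64))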

So the more GPUs we have the faster it will be and we don't have to have the model run solely CPU or GPU -- it can be combined. Very cool. Think that is how it's running now with my single 4090.

Umm, two 3090's? Additional cards scale as long as you have enough PCIe channels.

I arbitrarily chose $1k as the "cheap" cut-off. Two 3090 is definitely the most bang for the buck if you can fit them.

It will be slower for a 70b model since Deepseek is an MoE that only activates 37b at a time. That's what makes CPU inference remotely feasible here.

As a data point: you can get an RTX 3090 for ~$1.2k and it runs deepseek-r1:32b perfectly fine via Ollama + open webui at ~35 tok/s in an OpenAI-like web app and basically as fast as 4o.

You mean Qwen 32b fine-tuned on Deepseek :)

There is only one model of Deepseek (671b), all others are fine-tunes of other models


> you can get an RTX 3090 for ~$1.2k

If you're paying that much you're being ripped off. They're $800-900 on eBay and IMO are still overpriced.


Would it be something like this?

> OpenAI's nightmare: DeepSeek R1 on a Raspberry Pi

https://x.com/geerlingguy/status/1884994878477623485

I haven't tried it myself or haven't verified the creds, but seems exciting at least


That's 1.2 t/s for the 14B Qwen finetune, not the real R1. Unless you go with the GPU with the extra cost, but hardly anyone but Jeff Geerling is going to run a dedicated GPU on a Pi.

it's using a Raspberry Pi with a.... USD$1k GPU, which kinda defeats the purpose of using the RPi in the first place imo.

or well, I guess you save a bit on power usage.


Oh, I was naive to think that the Pi was capable of some kind of magic (sweaty smile emoji goes here)

I put together a $350 build with a 3060 12GB and it's still my favorite build. I run llama 3.2 11b q4 on it and it's a really efficient way to get started, and the tps is great.

You can run smaller models on MacbookPro with ollama with those speeds. Even with several 3k GPUs it won't come close to 4o level.

Apple M chips with their unified GPU memory are not terrible. I have one of the first M1 Max laptops with 64G and it can run up to 70B models at very useful speeds. Newer M series are going to be faster and they offer more RAM now.

Are there any other laptops around other than the larger M series Macs that can run 30-70B LLMs at usable speeds that also have useful battery life and don’t sound like a jet taxiing to the runway?

For non-portables I bet a huge desktop or server CPU with fast RAM beats the Mac Mini and Studio for price performance, but I’d be curious to see benchmarks comparing fast many core CPU performance to a large M series GPU with unified RAM.


Does it make any sense to have specialized models, which could possibly be a lot smaller? Say a model that just translates between English and Spanish, or maybe a model that just understands unix utilities and bash. I don’t know if limiting the training content affects the ultimate output quality or model size.

Some enterprises have trained small specialized models based on proprietary data.

https://www.maginative.com/article/nvidia-leverages-ai-to-as...

> NVIDIA researchers customized LLaMA by training it on 24 billion tokens derived from internal documents, code, and other textual data related to chip design. This advanced “pretraining” tuned the model to understand the nuances of hardware engineering. The team then “fine-tuned” ChipNeMo on over 1,000 real-world examples of potential assistance applications collected from NVIDIA’s designers.

2023 paper, https://research.nvidia.com/publication/2023-10_chipnemo-dom...

> Our results show that these domain adaptation techniques enable significant LLM performance improvements over general-purpose base models across the three evaluated applications, enabling up to 5x model size reduction with similar or better performance on a range of design tasks.

2024 paper, https://developer.nvidia.com/blog/streamlining-data-processi...

> Domain-adaptive pretraining (DAPT) of large language models (LLMs) is an important step towards building domain-specific models. These models demonstrate greater capabilities in domain-specific tasks compared to their off-the-shelf open or commercial counterparts.


Last fall I built a new workstation with an EPYC 9274F (24C Zen4 4.1-4.3GHz, $2400), 384GB 12 x 32GB DDR5-4800 RDIMM ($1600), and a Gigabyte MZ33-AR0 motherboard. I'm slowly populating with GPUs (including using C-Payne MCIO gen5 adapters), not focused on memory, but I did spend some time recently poking at it.

I spent extra on the 9274F because of some published benchmarks [1] that showed that the 9274F had STREAM TRIAD results of 395 GB/s (on 460.8 GB/s of theoretical peak memory bandwidth), however sadly, my results have been nowhere near that. I did testing with LIKWID, Sysbench, and llama-bench, and even w/ an updated BIOS and NUMA tweaks, I was getting <1/2 the Fujitsu benchmark numbers:

  Results for results-f31-l3-srat:
  {
      "likwid_copy": 172.293857421875,
      "likwid_stream": 173.132177734375,
      "likwid_triad": 172.4758203125,
      "sysbench_memory_read_gib": 191.199125,
      "llama_llama-2-7b.Q4_0": {
          "tokens_per_second": 38.361456,
          "model_size_gb": 3.5623703002929688,
          "mbw": 136.6577115303955
      }
  }
For those interested in all the system details/running their own tests (also MLC and PMBW results among others): https://github.com/AUGMXNT/speed-benchmarking/tree/main/epyc...

[1] https://sp.ts.fujitsu.com/dmsp/Publications/public/wp-perfor...


Still surprised that the $3000 NVIDIA Digits doesn’t come up more often in this thread, and also in the gung-ho market cap discussion.

I was an AI sceptic until 6 months ago, but that’s probably going to be my dev setup from spring onwards - running DeepSeek on it locally, with a nice RAG to pull in local documentation and datasheets, plus a curl plugin.

https://www.nvidia.com/en-us/project-digits/


It'll probably be more relevant when you can actually buy the things.

It's just vaporware until then.


Call me naive, but I somehow trust them to deliver in time/specs?

It’s also a more general comment around „AI desktop appliance“ vs homebuilts. I’d rather give NVIDIA/AMD $3k for a well-adjusted local box than tinker too much or feed the next tech moloch, and I have a hunch I’m not the only one feeling that way. Once it’s possible, of course.


Oh, if it's anything close to what they claim, I'll probably buy one as well, but I certainly do not expect them to deliver on time.

Also, LPDDR memory, and no published bandwidth numbers.

And people are missing the "Starting at" price. I suspect the advertised specs will end up at more than $3k. If it comes out at that price, I'm in for 2. But I'm not holding my breath given Nvidia and all.

probably because nvidia digits is just a concept rn

Aside: it’s pretty amazing what $2K will buy. It’s been a minute since I built my desktop, and this has given me the itch to upgrade.

Any suggestions on building a low-power desktop that still yields decent performance?


>Any suggestions on building a low-power desktop that still yields decent performance?

You don't for now. The bottleneck is mem throughput. That's why people using CPU for LLM are running xeon-ish/epyc setups...lots of mem channels.

The APU class gear along the lines of Strix Halo is probably the path closest to lower power, but it's not going to do 500GB of RAM and still doesn't have enough throughput for big models.


Not to be that yt'r that shills my videos all over, but you did ask for a low powered desktop build and this $350 one I put together is still my favorite. The 3060 12GB with llama 3.2 vision 11b is a very fun box that is low idle power (intel rules) to leave on 24/7 and have it run some additional services like HA.

https://youtu.be/iflTQFn0jx4


Hard to know what ranges you have in mind with "decent performance" and "low-power".

I think your best bet might be a Ryzen U-series mini PC. Or perhaps an APU barebone. The ATX platform is not ideal from a power-efficiency perspective (whether inherently or from laziness or conspiracy from mobo and PSU makers, I do not know). If you want the flexibility or scale, you pay the price of course but first make sure it's what you want. I wouldn't look at discrete graphics unless you have specific needs (really high-end gaming, workstation, LLMs, etc) - the integrated graphics of last few years can both drive your 4k monitors and play recent games at 1080p smoothly, albeit perhaps not simultaneously ;)

Lenovo Tiny mq has some really impressive flavors (ECC support at the cost of CPU vendor-lock on PRO models) and there's the whole roster of Chinese competitors and up-and-comers if you're feeling adventurous. Believe me, you can still get creative if you want to scratch the builder itch - thermals are generally what keeps these systems from really roaring (:


Hi HN, Garage youtuber here. Wanted to add in some stats on the wattages/ram.

Idle wattage: 60w (well below what I expected, this is w/o GPUs plugged in)

Loaded wattage: 260w

RAM Speed I am running currently: 2400 (V likely 3200 has a decent perf impact)


This is neat, but what I really want to see is someone running it on 8x 3090/4090/5090 and what is the most practical configuration for that.

According to NVIDIA:
> a single server with eight H200 GPUs connected using NVLink and NVLink Switch can run the full, 671-billion-parameter DeepSeek-R1 model at up to 3,872 tokens per second.

You can rent a single H200 for $3/hour.


I have been searching for a single example of someone running it like this (or 8x P40 and alike), and found nothing..

8x 3090 will net you around 10-12tok/s

It would not be that slow as it is an MoE model with 37b activated parameters.

Still, 8x3090 gives you ~2.25 bits per weight, which is not a healthy quantization. Doing bifurcation to get up to 16x3090 would be necessary for lightning fast inference with 4bit quants.
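
Quick check of that figure (ignoring KV-cache and activation overhead, which make the real budget tighter):

  vram_bytes = 8 * 24e9    # eight 3090s at 24 GB each
  params = 671e9
  print(f"{vram_bytes * 8 / params:.2f} bits per weight")  # ~2.29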

At that point though it becomes very hard to build a system due to PCIE lanes, signal integrity, the volume of space you require, the heat generated, and the power requirements.

This is the advantage of moving up to Quadro cards, half the power for 2-4x the VRAM (top end Blackwell Quadro expected to be 96GB).


What is the fastest documented way so far to serve the full R1 or V3 models (Q8, not Q4) if the main purpose is inference with many parallel queries and maximizing the total tokens per sec? Did anyone document and benchmark efficient distributed service setups?

The top comment in this thread mentions a 6k setup, which likely could be used with vLLM with more tinkering. AFAIK vLLM‘s batched inference is great.

You need enough VRAM to hold the whole thing plus context. So probably a bunch of H100s, or MI300s.

I'm also kind of new to this and coming from coding with ChatGPT. Isn't the time to first token important? He is sitting there for minutes waiting for a response. Shouldn't that be a concern?

I'd rather wait to get a good response, than get a quick response that is much less useful, and it's the nature of these "reasoning" models that they reason before responding.

Yesterday I was comparing DeepSeek-R1 (NVidia hosted version) with both Sonnet 3.5 (regarded by most as most capable coder) and the new Gemini 2.0 flash, and the wait was worth it. I was trying to get all three to create a web page with a horizontally scrolling timeline with associated clickable photos...

Gemini got to about 90% success after half a dozen prompts, after which it became a frustrating game of whack-a-mole trying to get it to fix the remaining 10% without introducing new bugs - I gave up after ~30min. Sonnet 3.5 looked promising at first, generating based on a sketch I gave it, but also only got to 90%, then hit daily usage limit after a few attempts to complete the task.

DeepSeek-R took a while to generate it, but nailed it on first attempt.


Interesting. So in my use, I rarely see GPT get it right on the first pass, but that's mostly due to interpretation of the question. I'm ruling out the times when it hallucinates calls to functions that don't exist.

Let's say I ask for some function that calculates some matrix math in Python. It will spit out something, but I don't like what it did. So I will say: now don't use any calls to that library you pulled in, and also allow for these types of inputs. Add exception handling...

So response time is important since it's a conversation, no matter how correct the response is.

When you say deep seek "nailed it on the first attempt" do you mean it was without bugs? Or do you mean it worked how you imagined? Or what exactly?


DeepSeek-R generated a working web page on first attempt, based on a single brief prompt I gave it.

With Sonnet 3.5, given the same brief prompt I gave DeepSeek-R, it took a half dozen feedback steps to get to 90%. Trying a hand drawn sketch input to Sonnet instead was quicker - impressive first attempt, but iterative attempts to fix it failed before I hit the usage limit. Gemini was the slowest to work with, and took a lot of feedback to get to the "almost there" stage, after which it floundered.

The AI companies seem to want to move in the direction of autonomous agents (with reasoning) that you hand a task off to that they'll work on while you do something else. I guess that'd be useful if they are close to human level and can make meaningful progress without feedback, and I suppose today's slow-responding reasoning models can be seen as a step in that direction.

I think I'd personally prefer something fast enough responding to use as a capable "pair programmer", rather than an autonomous agent trying to be an independent team member (at least until the AI gets MUCH better), but in either case being able to do what's being asked is what matters. If the fast/interactive AI only gets me 90% complete (then wastes my time floundering until I figure out it's just not capable of the task), then the slower but more capable model seems preferable as long as it's significantly better.


The alternative isn't to use a weaker model, the alternative is to solve the problem myself. These are all very academically interesting, but they don't usually save any time. On the other hand, the other day I had a math problem I asked o1 for help with, and it was barely worth it. I realized my problem at the exact moment it gave me the correct answer. I say that because these high-end reasoning models are getting better. "Barely useful" is a huge deal and it seems like we are hitting the inflection point where expensive models are starting to be consistently useful.

Yes, it seems we've only recently passed the point where these models are extremely impressive but still not good enough to really be useful, to now being actual time savers for doing quite a few everyday tasks.

The AI companies seem to be pushing AI-assisted software development as an early use case, but I've always thought this is one of the more difficult things for them to become good at, since many/most development tasks require both advanced reasoning (which they are weak at) and ability to learn from experience (which they just can't do). The everyday, non-development tasks, like "take this photo of my credit card bill and give me category subtotals" are where the models are now actually useful, but software development still seems to be an area where they are highly impressive but ultimately not capable enough to be useful outside of certain narrow use cases. That said, it'll be interesting to see how good these reasoning models can get, but I think that things like inability to learn (other than in-context) put a hard limit on what this type of pre-trained LLM tech will be useful for.


Went through the steps and ran it on a similar r6a.16xlarge and the model seems to only load after the first prompt. After that it takes maybe more than half an hour trying to load the model and still no answer. The context size in the post is also not validated in my experiment with the above. With 512GB of ram I cannot use more than 4k context size without the model outright refusing to load. I am new to model setups so I might have missed something.

If you are going to go to that effort, adding a second NVMe drive and doing RAID 0 across them will improve the speed of getting the model into RAM.

Then you will need way more memory, since you can only do software RAID on NVMe drives.

I’ve found that striping across two drives like the 980 Pro described here or WD SN850 Black drives easily gets direct IO read speeds over 12 GB/s on threadripper pro systems. This assumes a stripe size somewhere around 1 -2 MiB. This means that most reads will not need to be split and high queue depth sequential reads and random reads keep both drives busy. With careful alignment of IOs, performance approaches 2x of one drive’s performance.

IO takes CPU cycles but I’ve not seen evidence that striping impacts that. Memory overhead is minimal, as the stripe to read from is done via simple math from a tiny data structure.


About how much memory overhead would that require?


Or just wait for the NVIDIA Digits PC later this year which will cost the ~same amount and can fit on your desk

That one can handle up to 200B parameters according to NVIDIA.

That's a shame. I suppose you'll need 4 of them with RDMA to run a 671B, but somehow that seems better to me than trying to run it on DDR4 RAM like the OP is saying. I have a system with 230G of DDR4 RAM, and running even small models on it is atrociously slow.

Kind of embarrassed to ask, I use AI a lot, I haven't really understood how the nuts and bolts work (other than at a 5th-grader 30000ft level)...

So, when I use a "full" AI like chatGPT4o, I ask it questions and it has a firm grip on a vast amount of knowledge, like, whole-internet/search-engine scope knowledge.

If I run an AI "locally", on even a muscular server, it obviously does NOT have vast amounts of stored information about everything. So what use is it to run locally? Can I just talk to it as though it were a very smart person who, tragically, knows nothing?

I mean, I suppose I could point it to a NAS box full of pdf's and ask questions about that narrow range of knowledge, or maybe get one of those downloaded wikipedia stores. Is that what folks are doing? It seems like you would really need a lot content for the AI to even be remotely useable like the online versions.


Running it locally it will still have the vast/"full Internet" knowledge.

This is probably one of the most confusing things about LLMs. They are not vast archives of information and the models do not contain petabytes of copied data.

This is also why LLMs are so often wrong. They work by association, not by recall.


Try one and find out. Look at https://github.com/Mozilla-Ocho/llamafile/ Quickstart section; download a single cross-platform ~3.7GB file and execute it, it starts a local model, local webserver, and you can query it.

See it demonstrated in a <7 minute video here: https://www.youtube.com/watch?v=d1Fnfvat6nM

The video explains that you can download the larger models on that Github page and use them with other command line parameters, and shows how you can get a Windows + nVidia setup to GPU accelerate the model (install CUDA and MSVC / VS Community edition with C++ tools, run for the first time from MSVC x64 command prompt so it can build a thing using cuBLAS, rerun normally with "-ngl 35" command line parameter to use 3.5GB of GPU memory (my card doesn't have much)).


The LLMs have the ‘knowledge’ baked in. One of the things you will hear about is quantized models with lower precision (think 16-bit -> 4-bit) weights, which enables them to be run on a greater variety of hardware and/or with greater performance.

When you quantize, you sacrifice model performance. In addition, a lot of the models favored for local use are already very small (7b, 3b).

What OP is pointing out is that you can actually run the full deepseek r1 model, along with all of the ‘knowledge’ on relatively modest hardware.

Not many people want to make that tradeoff when there are cheap, performant APIs around but for a lot of people who have privacy concerns or just like to tinker, it is pretty big deal.

I am far removed from having a high performance computer (although I suppose my MacBook is nothing to sneeze at), but I remember building computers or homelabs back in the day and then being like ‘okay now what is the most stressful workload I can find?!’ — this is perfect for that.


I've also been away from the tech (and AI scene) for a few years now. And I mostly stayed away from LLMs. But I'm certain that all the content is baked into the model, during training. When you query the model locally (since I suppose you don't train it yourself), you get all that knowledge that's baked into the model weights.

So I would assume the locally queried output to be comparable with the output you get from an online service (they probably use slightly better models, I don't think they release their latest ones to the public).


It's all in the model. If you look for a good definition of "intelligence", it is compression. You can see the ZIP algorithm as a primordial ancestor of ChatGPT :))

Most of an AI's knowledge is inside the weights, so when you run it locally, it has all that knowledge inside!

Some AI services allow the use of 'tools', and some of those tools can search the web, calculate numbers, reserve restaurants, etc. However, you'll typically see it doing that in the UI.

Local models can do that too, but it's typically a bit more setup.


LLMs are to a good approximation zip files intertwined with... magic... that allows the compressed data to be queried with plain old English - but you need to process all[0] the magic through some special matrix mincers together with the query (encoded as a matrix, too) to get an answer.

[0] not true but let's ignore that for a second


The knowledge is stored in the model, the one mentioned here is rather large, the full version needs over 700GB of disk space. Most people use compressed versions, but even those will often be 10-30GB in size.

    ...the full version needs over 700GB of disk space.
THAT is rather shocking. Vastly smaller than I would expect.
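
A rough sketch of where those sizes come from, treating each quant as a uniform bit-width (the "dynamic" quants aren't exactly uniform, so the real files differ a bit; the full model is 8-bit, i.e. roughly one byte per parameter):

  params = 671e9
  for name, bits in [("8-bit full model", 8),
                     ("Q4", 4),
                     ("1.58-bit dynamic", 1.58)]:
      print(f"{name}: ~{params * bits / 8 / 1e9:.0f} GB of weights")
  # -> ~671 GB, ~336 GB, ~133 GB, matching the ~700 GB full model and the
  #    131 GB Unsloth quant mentioned upthread.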

I ask this all the time. locally running an LLM seems super hobbyist to me. like tweaking terminal font sizes on fringe BSD distros kind of thing

Privacy.

> 512GB 2400 ECC RAM $400

Is this really that cheap? Looking at several local (CZ) eshops, I cannot find 32 GB DDR4 ECC RDIMM cheaper than $75, which would be $1200 for 512 GB.


Used server hardware is much more expensive in the EU generally, because the market is much smaller (fewer data centers to begin with, longer cycles to reduce costs and EU WEEE mandatory scrapping instead of reuse).

Wow! Only $2k with no quantization.

  hit between 4.25 to 3.5 TPS (tokens per second) on the Q4 671b full model

I think it is quantised, they actually said no distillation.

Maybe Intel or AMD should bring back a Larrabee style CPU which can use say 48 sockets of DDR/CAMM2 sticks.

Has anyone run any benchmarks on the quantized (non-distilled) versions?

Someone please productize this

"Okay"

Can you describe for me what the product does?

It's not that hard to make a turnkey "just add power" appliance that does nothing but spit out tokens. Some sort of "ollama appliance", which just sits on your network and provides LLM functionality for your home lab?

But beyond that, what would your mythical dream product do?



Do we have any estimate on the size of OpenAI top of the line models? Would they also fit in ~512GB of (V)RAM?

Also, this whole self hosting of LLMs is a bit like cloud. Yes, you can do it, but it's a lot easier to pay for API access. And not just for small users. Personally I don't even bother self hosting transcription models which are so small that they can run on nearly any hardware.


It's nice because a company can optionally provide a SOTA reasoning model for their clients without having to go through a middleman, e.g. an HR co. can provide an LLM for their HRMS system for a small $2000 investment. Not $2000/month, just a one-time $2000 investment.

No one will be doing anything practical with a local version of DeepSeek on a $2000 server. The token throughput of this thing is like 1 token every 4 seconds. It would take nearly a full minute just to produce a standard “Roses are red, violets are blue” poem. There’s absolutely no practical usage that you can use that for. It’s cool that you can do it, and it’s a step in the right direction, but self-hosting these won’t be a viable alternative to using providers like OpenAI for business applications for a while.

The OP said they were getting 3-4 TPS on their $2000 rig.

We crawl so we can learn to walk.

Is the size of OpenAI‘s top of the line models even relevant? Last I checked they weren’t open source in the slightest.

it would make sense if you don't want somebody else to have access to all your code and customer data.

Any idea what the power draw is? Resting power, vs resting with the model loaded in memory, vs full power computation. In case you may want to also run this as your desktop for basic web browsing etc when not LLMing.

Are there any security concerns over DeepSeek as there are over TikTok?

It's a local model, what security concern would you have ?

I think this is unlikely but a local model could generate malicious code. You would have to run it manually though.

They also have an app that connects to their datacenter with R1.

Also barely anyone can actually run the real R1 locally.


We are speaking about a 2k$ server here.

Local model - no. Using deepseek.com - absolutely. Do not put anything private there.

Well, I read this, and now I am sure: as of today, DeepSeek's handling of LLMs is the least wrong, and by far.

so this $2k investment can substitute for $0.69 in API calls per day

He links to a RAM kit that is 8x 32GB but says it should have 512GB of RAM. What gives? Also, with 8 RAM slots you would obviously need more than 256GB.

Is the setup disingenuous to get people excited about the post or what is going on here?


The board recommended has 16 RAM slots.

The RAM kit is still 8x 32GB, so the price is lower than it actually would be.

If you really want to go cheap, run it in your browser via WebGPU.

Not comparable. That is a quantized distill.

Oh yes, for sure. Don’t mean otherwise.

I got downvoted, but you really can run it in the browser.

ONNX version: https://huggingface.co/onnx-community/DeepSeek-R1-Distill-Qw...


Ha, I'd recently asked about this here as well, just using some high memory AMD setup to infer.

Another thing I wonder is whether using a bunch of Geforce 4060 TI with 16GB could be useful - they cost only around 500 EUR. If VRAM is the bottleneck, perhaps a couple of these could really help with inference (unless they become GPU bound, like too slow).


Basically three components: Ollama, Open WebUI and DeepSeek. Nice!

Is there any hope that desktops get 64GB dimms? 48GB dimms have been a nice boost but are we gonna get more anytime soon?

I'd love it so much if quad-channel Strix Halo could get up to 256GB of memory. 192GB (4x48) won't be too bad, and at 8533MT/s it should provide competitive-ish throughput to these massive Epyc systems. Of course the $6k 24-channel 3200MHz setup has 4.4x more throughput, but it does have a field of DIMMs to get there, and high power consumption.


Interesting and surprising to see, but 3-4 t/s is not practical overall.

Anything lower than 10 t/s is going to mean a lot of wait time, considering the reasoning wait time on top.


Those EPYCs he's advertising as $700 are either engineering samples or used. New he's off by 2:1.

He's running quantized Q4 671b. However, MoE doesn't need cluster networking, so you could probably run the full thing on two of them unquantized. Maybe the router could be kept entirely resident in GPU RAM instead of offloading a larger percentage of everything there, or is that already how it is set up in his GPU offload config?

[flagged]


This is a verbatim quote from Jason Calacanis.

Affiliate link spam.

No. Affiliate link spam is when someone fills a page with content they stole, or it’s a bunch of nonsense that is stuffed to the brim with keywords, and they combine that with affiliate links.

Someone getting a dollar or two in return for you following an affiliate link after you read something they put real time and effort into to make it valuable info for others is not “affiliate link spam”.


I’m fine with useful content linking to affiliate links too. I am still confused at the ram specs required and those they linked to being off by a factor of 2 though. IF the setup is not realistic or accurate then that wouldn’t be cool.

Few people would spend $6k to run a model locally on CPU. But lower it to $2k and you might get some sweet affiliate link commissions. And make it fuzzy so they don't get it is running so quantized it probably is useless.

This is a lame me-too page to make money with affiliate links. The actual specs were in the linked original tweet. So, IMHO, it is affiliate link spam.

I can't imagine this setup will get more than 1 token per second.

I would love to see Deepseek running on premise with a decent TPS.


It says 4.25 TPS in the first para.

Honest mistake. Some people think HN is just a series of short tweets and haven’t realized they are links yet!

It's the modern way. Why read when you can just imagine facts straight out of your own brain.

I agree but also found your comment funny in the context of LLMs. People love getting facts straight out of their models.

4.25 is enough tps for a lot of use cases.

That's still pretty slow, considering there's that "thinking" phase.

True, but 4.25 is the number we all want to know.

You can get 1t/s on a raspberry pi.

https://youtu.be/o1sN1lB76EA?si=i8ecEBjLdV0zewFQ


this has nothing to do with the full 671B and the ollama models are distilled qwen2.5

I appreciate both of these comments, thank you both.


