What is your PC/Server/AI Server/Homelab idle power consumption?
Posted by panchovix@reddit | LocalLLaMA | 47 comments
Hello everyone, hope you're having a nice day.
I was wondering: how much power does your machine draw at idle (i.e., booted up, with a model loaded or not, but not actively in use)?
I will start:
- Consumer Board: MSI X670E Carbon
- Consumer CPU: AMD Ryzen 9 9900X
- 7 GPUs:
  - 2x 5090
  - 2x 4090
  - 1x A6000
  - 2x 3090
- 5 M.2 NVMe SSDs (via USB-to-NVMe adapters)
- 2 SATA SSDs
- 7 120mm fans
- 4 PSUs:
  - 1250W Gold
  - 850W Bronze
  - 1200W Gold
  - 700W Gold
Idle power consumption: 240-260W
Also for reference, electricity here in Chile is insanely expensive ($0.25 per kWh).
When running a model on llama.cpp it draws about 800W. With ExLlama or vLLM, about 1400W.
Most of the time I keep it powered off, as the cost adds up quickly.
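For a sense of scale, a back-of-the-envelope sketch (assuming, hypothetically, the rig idled 24/7 at the numbers above):

```python
# Rough idle cost, assuming 250 W average idle draw and $0.25/kWh.
idle_watts = 250
price_per_kwh = 0.25  # USD

kwh_per_month = idle_watts / 1000 * 24 * 30   # kW * hours in ~30 days
cost_per_month = kwh_per_month * price_per_kwh
print(f"{kwh_per_month:.0f} kWh/month -> ${cost_per_month:.0f}/month")
# -> 180 kWh/month -> $45/month just idling
```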
How much is your idle power consumption?
TokenRingAI@reddit
$0.25 a kWh is half the price we pay in California for electricity.
a_beautiful_rhind@reddit
https://i.ibb.co/5gVYKF4x/power.jpg
EXL3 GLM-4.6 loaded on 4x 3090, plus ComfyUI with a compiled SDXL model on a 2080 Ti.
Only get close to 1500W when doing Wan 2.2 distributed. Using LACT to undervolt seems to make idle draw go up, but in-use draw really goes down.
tmvr@reddit
Sorry, what does this mean?:
ComfyUI with a compiled SDXL model on a 2080 Ti
a_beautiful_rhind@reddit
In image models they have torch.compile and other such things to speed up inference.
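For context, a minimal sketch of what that looks like outside ComfyUI, assuming the Hugging Face diffusers library (ComfyUI exposes the same idea through its compile nodes):

```python
# Minimal sketch: compile an SDXL UNet with torch.compile.
# The first generation is slow (compilation warm-up); later ones are faster.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Compile only the UNet, the hot loop of diffusion inference.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

image = pipe("a lighthouse at dusk", num_inference_steps=30).images[0]
image.save("out.png")
```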
tmvr@reddit
Ahh, OK, what speed-up do you get with that 2080 Ti? I never bothered with any of that on my 4090 because 7-8 it/s is fine; there's not much to gain when you get an image in about 4 seconds.
a_beautiful_rhind@reddit
I go from like 20s down to 4s and get to enjoy image gen on the weaker card. For a 4090 it simply scales up; now you're speeding up Flux and friends.
tmvr@reddit
That's wild, going to have to dig out the old 2080 machine and try it. Anything else done besides torch compile?
kei-ayanami@reddit
Fellow 4x3090'er, what quant exactly did you use? Have a link? Also how good is the quality at that quant?
a_beautiful_rhind@reddit
https://huggingface.co/MikeRoz/GLM-4.6-exl3/tree/2.06bpw_H6
Seems OK so far. It can still write out the 4chan simulator flawlessly, but its SVG creation skills are diminished compared to Q3K_XL.
nero10578@reddit
How do you run Wan 2.2 distributed? You mean running the model on multiple GPUs?
a_beautiful_rhind@reddit
There's a ComfyUI node called raylight that lets you split it, and many other models, across GPUs: both the weights and the work.
lemondrops9@reddit
How much of an improvement did you see with Raylight?
a_beautiful_rhind@reddit
For single images, not much. For video models, a ton. Plus you can go as high-res and as long as the model supports without OOMing.
nero10578@reddit
Ooh interesting okay
Vaddieg@reddit
1.2W (gpt-oss-20b on an M1 Pro)
segmond@reddit
Why? Who cares? I don't see car guys asking how much mileage your project car gets while idling. Who cares? Local LLM is about passion, so share your rigs, alright? Share what you have built with it, share the cost if others want to know what it will cost. But how much does it cost at idle or to run? Seriously, what does it matter? If anyone cares about that, get a Mac or go cloud.
One-Employment3759@reddit
Because it's interesting to compare? Most people are not running their rigs full tilt 24/7, many of us leave our machines on for availability, and it can get expensive.
Really passionate people like to explore all aspects of the field.
segmond@reddit
What exactly are you comparing? It's ridiculous. You can't take your rig and tune it to get the same idle wattage as others, because we all have different motherboards, CPUs, GPUs, etc. If the goal is to reduce your idle power, then the question should simply be "what can I do to reduce my idle power?", not a comparison with others...
sunole123@reddit
How much of an investment is that? $15k??
More interesting is how much time you spend on it and what you gain from it, or how many hours you interact with it?
panchovix@reddit (OP)
A bit less than that, over the span of 4 years. Converting from CLP (Chilean pesos) to USD:
Total: ~12,800 USD including 19% tax, so about ~10,700 USD without tax (12,800 / 1.19).
Nowadays I barely use it, tbh; I have some personal issues, so not much motivation.
I gain nothing in monetary terms, just losses.
The server runs about 10-12 hours per week, maybe?
Maleficent-Ad5999@reddit
Thanks for the detailed answer. I'm just curious: how did you connect all 7 GPUs to a consumer motherboard? Does that motherboard support PCIe bifurcation?
panchovix@reddit (OP)
x8/x8 from the CPU via the top 2 PCIe slots.
x4/x4 from the CPU via the top 2 M.2 slots, with M.2-to-PCIe adapters.
x4 from the chipset via the bottom PCIe slot.
x4/x4 from the chipset via the bottom M.2 slots, with M.2-to-PCIe adapters.
I use Add2PSU, so I just power on one PSU and all the others sync.
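For anyone replicating a layout like this on Linux, a minimal sketch using standard sysfs attributes to confirm the link width each GPU actually negotiated:

```python
# Minimal sketch (Linux): print the negotiated PCIe link width/speed
# of every display-class device, to verify an x8/x8 + x4 layout.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        if not (dev / "class").read_text().startswith("0x03"):
            continue  # 0x03xxxx = display controller (GPU)
        width = (dev / "current_link_width").read_text().strip()
        speed = (dev / "current_link_speed").read_text().strip()
    except OSError:
        continue  # attribute not exposed for this device
    print(f"{dev.name}: x{width} @ {speed}")
```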
Maleficent-Ad5999@reddit
Thanks for the response.. cheers 🍻
bullerwins@reddit
According to the smart plug I have, at idle, everything (CPU; 7 GPUs: 6000, 2x 5090, 4x 3090; 10Gb NIC; 1 NVMe) is 200-250W. So I turn it off every night, or if I'm not going to use it for a few hours.
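A smart plug measures the whole box at the wall; for a software-side view of just the GPUs, a minimal sketch assuming the nvidia-ml-py package is installed:

```python
# Minimal sketch: read current board power of each NVIDIA GPU.
# Note this misses CPU, drives, fans, and PSU conversion losses.
import pynvml

pynvml.nvmlInit()
total_w = 0.0
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # reported in mW
    print(f"GPU {i}: {watts:.1f} W")
    total_w += watts
print(f"Total GPU draw: {total_w:.1f} W")
pynvml.nvmlShutdown()
```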
PermanentLiminality@reddit
I'm in California. My power is more like $0.45/kWh. I dream about 25 cents per kWh.
Rynn-7@reddit
45 cents!? It's only $0.12/kWh in my local area of PA.
One-Employment3759@reddit
I'm on the equivalent of ~17 cents; I think the power company I built models for didn't remove me from the employee discount.
MitsotakiShogun@reddit
I'm at ~$0.41 in Switzerland (~0.33 CHF). Fun times.
ViRROOO@reddit
7 to 10 watts. Framework Desktop (128GB version).
budz@reddit
What is this idle that you speak of
_hypochonder_@reddit
4x AMD MI50
TR 1950X
128GB (8x 16GB)
Idle is 160W.
llama.cpp 300-400W.
vLLM 1100-1200W (dense models).
The PC is only on at weekends, for SillyTavern.
Ok-Hawk-5828@reddit
AGX Xavier 32GB. 5-6W idle, 44W generating. Probably averages 10W running workflows intermittently around the clock.
toomanypubes@reddit
M3 Ultra 512GB - 12 watts at true idle with an external NVMe attached.
MitsotakiShogun@reddit
Meanwhile my Zyxel 10 Gbps router consumes 50W just by being plugged in without anything connected to it...
panchovix@reddit (OP)
Man that's insane.
recitegod@reddit
Truly alien. Kinda slow at ~17 tok/s for the fastest, but still... alien.
ParaboloidalCrest@reddit
I tried every trick in the book and my GPU alone idles at 14 watts 😭
woahdudee2a@reddit
that shit is alien tech
see_spot_ruminate@reddit
7600X3D
2x 5060 Ti (both idle at ~4 watts)
4 HDDs for RAID
Idle at ~80 watts.
At my electric rate, that's less than $8 per month at idle.
createthiscom@reddit
I dunno, like 150-180W with dual 9355s and a 6000 Pro.
quangspkt@reddit
How can an X670E handle such a bunch of cards?
panchovix@reddit (OP)
3 PCIe slots and 4 M.2-to-PCIe adapters.
zipperlein@reddit
Ryzen 9 7900X
ASRock B650 LiveMixer
4x3090
4 HDDs (2 via USB -> slow as hell, do not recommend)
2 SSDs
Idle: ~120-200W, depending on whether a model is loaded.
Max: ~750W due to 150W power limits on the 3090s; I could crank it up, but I want them to last a while.
Running off solar a lot of the time, considering heating is still fossil. Planning to add a power station as a buffer for the night.
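A cap like that can be set with `nvidia-smi -pl 150`, or programmatically; a minimal sketch assuming nvidia-ml-py and root privileges:

```python
# Minimal sketch: cap every NVIDIA GPU at 150 W (requires root,
# and the value must sit within the card's allowed min/max range).
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, 150_000)  # in mW
pynvml.nvmlShutdown()
```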
PermanentLiminality@reddit
I have a rig that is a Wyse 5070 with a P102-100. That gives me 10GB of 450GB/s VRAM and an idle consumption of 10 watts.
UniqueAttourney@reddit
What do you use that amount of GPUs for? Is it even worth it in terms of returns?
panchovix@reddit (OP)
Mostly LLMs and diffusion (txt2img, txt2vid).
Not worth it in monetary returns (I don't make money from AI personally; I also haven't rented out or sold anything related to it).
Old_Consideration228@reddit
Leave some VRAM for us man