LACT "indirect undervolt & OC" method beats `nvidia-smi -pl 400` on 3090TI FE.
Posted by VoidAlchemy@reddit | LocalLLaMA | 7 comments

There have been some recent posts about using the new "indirect undervolt and overclock" method with LACT under Linux instead of simply applying a naive power cap to your GPU(s) with `nvidia-smi -pl 300`, for example.
I wasn't sure if it was really any better, so I vibe coded a small script to integrate 1 Hz power measurements from my 3090TI FE 24GB GPU and ran two benchmarks:
- Baseline: `nvidia-smi -pl 400` (naive 400W power cap)
- LACT overclock profile with the same 400W power cap
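The power-integration part of such a script reduces to a left Riemann sum over the 1 Hz samples. A minimal sketch (not OP's actual script; the function names are made up, and the commented `pynvml` sampling loop is an assumption about how the samples would be collected):

```python
# Hypothetical sketch: turn 1 Hz GPU power samples into total energy.
# Not OP's actual script; names here are invented for illustration.

def integrate_energy_j(samples_w, dt_s=1.0):
    """Left Riemann sum: power samples (watts) at a fixed interval -> joules."""
    return sum(samples_w) * dt_s

def joules_to_wh(energy_j):
    """Convert joules to watt-hours (1 Wh = 3600 J)."""
    return energy_j / 3600.0

# Sampling could look roughly like this with nvidia-ml-py (untested assumption):
#   import time, pynvml
#   pynvml.nvmlInit()
#   h = pynvml.nvmlDeviceGetHandleByIndex(0)
#   samples = []
#   for _ in range(duration_s):
#       samples.append(pynvml.nvmlDeviceGetPowerUsage(h) / 1000.0)  # mW -> W
#       time.sleep(1.0)
```

With this, two benchmark runs at the same wall-clock length can be compared directly in joules, which is what makes the "same power envelope, less total energy" comparison meaningful.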
I then ran the same ik_llama.cpp llama-sweep-bench test and sure enough the LACT overclock profile performs better/faster with less overall energy usage within the same power envelope.
LACT has worked on a variety of Intel/AMD/NVIDIA GPUs for a while now, but the "new" discovery to me was this "indirect undervolt and overclock" method specific to NVIDIA GPUs.
I have some anecdotal measurements with ComfyUI Wan2.2 i2v workflows suggesting it is faster for a given power cap as well. However, when I increased the overclocks too far it would output all dark/black videos or have occasional grey/dark square tile patches appear in the output video. I had to undo the aggressive overclock, reboot, and then it was all fine again. The values listed in the legend here seem to be working fine for now.
Curious what overclock profiles other folks are using for various GPU makes/models. It works headless as well, and some have reported using it to reduce idle power draw. Also, has anyone compared this against using nvidia-smi to set a frequency cap instead of a power cap, or other strategies?
a_beautiful_rhind@reddit
I never did the power limit. I always limit the clocks and now apply an offset. For non-TI cards, I set it at 1695 MHz with a +200 MHz offset. Cards don't seem to go above 250W, even at 99% GPU use. The boost clock is around 1920.
In the winter I might up the RAM clock. I've read +1100 is safe. With the proprietary driver this stuff was harder to do and needed an X server. LACT made it super convenient.
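Clock limiting like this can also be done headlessly through NVML. A minimal sketch, assuming `nvidia-ml-py` exposes `nvmlDeviceSetGpuLockedClocks` (present in recent releases, but availability depends on your driver, and the set call requires root):

```python
# Sketch: lock GPU core clocks headlessly via NVML (no X server needed).
# The 1695 MHz ceiling mirrors the comment above; 210 MHz floor is a guess.

def check_clock_range(min_mhz, max_mhz):
    """Basic sanity check before handing values to the driver."""
    if not (0 < min_mhz <= max_mhz):
        raise ValueError(f"bad clock range: {min_mhz}..{max_mhz}")
    return min_mhz, max_mhz

def lock_core_clocks(gpu_index=0, min_mhz=210, max_mhz=1695):
    import pynvml  # nvidia-ml-py; requires root for the set call
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
        lo, hi = check_clock_range(min_mhz, max_mhz)
        pynvml.nvmlDeviceSetGpuLockedClocks(handle, lo, hi)
    finally:
        pynvml.nvmlShutdown()
```

This is the same effect as `nvidia-smi -lgc`, just scriptable from Python.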
panchovix@reddit
It's more noticeable doing both on a 5090.
For example, in diffusion pipelines, even with a 0.86V undervolt it can still hit 600W. So when you combine undervolt + overclock + power limit, you get more performance than just power limiting (as your core clocks will be higher).
On Ada and older, just undervolting (and optionally overclocking) should be enough.
Secure_Reflection409@reddit
What's the tldr? How many watts for the same generation speed?
jwpbe@reddit
Given that the method LACT uses is driven by the nvidia-ml-py project, you should be able to make a simple CLI program to do basic 'indirect undervolting', but I don't understand whether needing an X server for 'coolbits' would factor into this, or if it's even needed at all with that library.
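Such a CLI might look like the sketch below. This is not LACT's actual implementation: whether `nvmlDeviceSetGpcClkVfOffset` is available (and whether it works without Coolbits/X) depends on the driver and nvidia-ml-py version, so treat the NVML set calls as assumptions.

```python
# Hypothetical CLI for 'indirect undervolting': raise the core clock VF
# offset and lower the power cap so the card rides its curve at lower voltage.
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="toy indirect-undervolt CLI")
    p.add_argument("--gpu", type=int, default=0, help="NVML device index")
    p.add_argument("--core-offset", type=int, default=0,
                   help="core clock VF offset in MHz")
    p.add_argument("--power-cap", type=int, default=None,
                   help="power limit in watts")
    return p

def apply(args):
    import pynvml  # nvidia-ml-py; the set calls require root
    pynvml.nvmlInit()
    try:
        h = pynvml.nvmlDeviceGetHandleByIndex(args.gpu)
        if args.core_offset:
            # Assumption: exposed by recent nvidia-ml-py with 520+ drivers.
            pynvml.nvmlDeviceSetGpcClkVfOffset(h, args.core_offset)
        if args.power_cap is not None:
            # NVML takes milliwatts.
            pynvml.nvmlDeviceSetPowerManagementLimit(h, args.power_cap * 1000)
    finally:
        pynvml.nvmlShutdown()
```

Invoked as e.g. `apply(build_parser().parse_args(["--core-offset", "150", "--power-cap", "400"]))`, it would roughly mirror a LACT-style profile, though the offset and cap values here are placeholders, not tested recommendations.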
ArtyfacialIntelagent@reddit
I've been running a combined (not sure what you mean by "indirect") undervolt/overclock since I got my 4090 in May 2023. I'm on Windows, so I use MSI Afterburner. Posting profiles isn't very helpful since everyone's cards are different depending on how lucky you are in the silicon lottery, but my card never pulls more than 350W and still matches vanilla 4090 performance at 450W. Haven't touched the settings since the initial setup, it's been rock solid.
VoidAlchemy@reddit (OP)