Poor man's guide to servicing a used RTX 3090 for local LLM inference

Posted by canred@reddit | LocalLLaMA

Wrote up the whole process with disassembly photos and HWiNFO before/after data. Hope it saves someone some headaches.

https://github.com/cubebecu/writeups/tree/main/gpu-service