The $130 GPU that performs on par w/ an RTX3090
Posted by desexmachina@reddit | LocalLLaMA | View on Reddit | 11 comments
https://gist.github.com/synchronic1/22ad2e229fe760f0ccd5313f53adea59
No-Refrigerator-1672@reddit
The CMP 100-210 is a Tesla V100 adapted for mining. During that adaptation, Nvidia artificially cut it down to PCIe gen 1 x2, so this card will perform terribly in a multi-GPU setup. It also inherits the V100's ~50W idle power draw, roughly 4x the idle consumption of a 3090. Just putting it out there for anyone considering this card.
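To put that gen 1 x2 limitation in perspective, here is a rough back-of-the-envelope sketch of usable PCIe bandwidth per generation (the per-lane figures are standard approximations accounting for 8b/10b vs. 128b/130b encoding; nothing here is specific to either card beyond the link configurations mentioned above):

```python
# Approximate usable one-direction PCIe bandwidth per lane, in MB/s.
# Gen 1/2 use 8b/10b encoding; gen 3/4 use the more efficient 128b/130b.
PER_LANE_MB_S = {1: 250, 2: 500, 3: 985, 4: 1969}

def pcie_bandwidth_mb_s(gen: int, lanes: int) -> int:
    """Approximate one-direction bandwidth for a PCIe link."""
    return PER_LANE_MB_S[gen] * lanes

cmp100 = pcie_bandwidth_mb_s(1, 2)    # CMP 100-210's fused-off gen1 x2 link
rtx3090 = pcie_bandwidth_mb_s(4, 16)  # RTX 3090 at full gen4 x16

print(f"CMP 100-210: {cmp100} MB/s")   # 500 MB/s
print(f"RTX 3090:    {rtx3090} MB/s")  # 31504 MB/s
print(f"ratio: {rtx3090 / cmp100:.0f}x")
```

So the mining card has roughly 1/63 of a 3090's host-link bandwidth, which is why it only makes sense as a standalone card where the model fits entirely in its VRAM.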
Affectionate-Cap-600@reddit
unfortunately the price of the V100 SXM2 32GB has increased a lot
atape_1@reddit
For a card that is going to stop receiving CUDA support after 13.0. Just avoid Volta at this point IMO.
No-Refrigerator-1672@reddit
Yeah, that's a bummer; those 32GBs are quite useful. On the other hand, prices for any GPU with a usable amount of VRAM have been skyrocketing for the last two years or so.
Affectionate-Cap-600@reddit
yeah, exactly. Still, years ago the spread between the 16GB and 32GB versions of the V100 was much lower.
Formal-Exam-8767@reddit
I suppose you are referring to tensor parallel, and not pipeline parallel?
So it would be slower than a single card? By how much? I couldn't find any benchmarks.
No-Refrigerator-1672@reddit
My pair of 3080s, when inferencing Qwen 3.6 35B (fully in VRAM), consumes around 300-400 MB/s of PCIe bandwidth in pipeline-parallel mode, as reported by nvtop. So you're looking at an at least 30% reduction in the best-case, most forgiving scenario; in real life it will be much worse, because transaction latency goes up and your GPUs spend disproportionately more time just sitting there waiting for data. In my experiments, PCIe bandwidth affects prompt processing the most: with a constricted link in pipeline parallelism, you can see prompt-processing performance drop by up to 10x.
ambient_temp_xeno@reddit
Looks like it's got some youtube shill inspired pricing for such a turkey of a card.
No-Refrigerator-1672@reddit
Not really. I'm constantly searching for old junk that can be adopted for AI, and the CMP 100-210 has been hovering around $130 for at least half a year now. Also, since we're discussing 3090 alternatives, I'll smuggle in my review of the 3080 bumped up to 20GB and priced at 500 EUR a piece; I'd call it a reasonable replacement. China also offers the 2080 Ti with 22GB for roughly 350 EUR if you want to save some cash. I'd say these options are the best value right now.
ambient_temp_xeno@reddit
$130 seems fair. I was seeing much worse.
Mr_Moonsilver@reddit
"Turkey of a card" 😂