Don’t buy the DGX Spark: NVFP4 Still Missing After 6 Months
Posted by Secure_Archer_1529@reddit | LocalLLaMA | 120 comments
This post was written in my own words, but with AI assistance.
I own two DGX Sparks myself, and the lack of NVFP4 has been a real pain in the ass.
The reason the product made sense in the first place was the Blackwell + NVFP4 combo on a local AI machine with a proper NVIDIA software stack around it. Without that, Spark becomes much harder to justify, especially given the bandwidth limitations and the compromises that come with it.
The DGX Spark was presented like a finished, premium system where NVFP4 was supposed to work out of the box. It was not marketed like an experimental dev kit where buyers should expect to spend months switching backends, testing builds, setting flags, and relying on community or hardcore fan fixes just to make a core feature work properly.
More than six months in, NVFP4 is still not properly delivered on the Spark. Yes, you can get things somewhat running. But there is a big difference between a feature technically existing and a feature being delivered as a mature, stable, and supported experience.
Right now, NVFP4 on Spark is much closer to the first than the second.
The hardware itself is not the main issue. Spark has potential, and in some scenarios it can perform well. But the overall experience does not match what was implied. At this point, it no longer feels like normal early friction. It feels like NVIDIA pushed the story before the software was actually ready.
So the takeaway is simple:
Do not buy DGX Spark assuming NVFP4 is already delivered as a polished, mature, supported feature.
NVIDIA overpromised and underdelivered on DGX Spark.
Rant over and out.
Haiart@reddit
So, what advantage does the DGX Spark have over any Ryzen AI Max+ 395 system to warrant the much higher price?
fallingdowndizzyvr@reddit
The price difference isn't as much anymore. Strix Halo has really gone up in price. It used to be basically half the price. It's not anymore.
ImportancePitiful795@reddit
Still plenty of miniPCs with 395 128GB at $2000.
Framework decided to shoot themselves in the foot by doubling the price.
CapeChill@reddit
Yeah, no. I spent $2,950 on a Minisforum one a couple weeks ago; maybe ~$2k for 64GB.
fallingdowndizzyvr@reddit
96GB models are around $2000. You can still get 64GB models around $1600. Probably because no one bought them, so there's still plenty of pre-DRAM-surge stock.
CapeChill@reddit
The 64GB is a rough SKU. Not enough to run 80-120B models, and much slower at running a 30B-class model than a similarly priced Arc Pro or Radeon Pro card with 24GB or 32GB of VRAM.
At least the M4 and M5 Mac minis and Sparks have more GPU compute, so they run models faster than Strix Halo. I thought about the 96GB but really the 128 makes the most sense with that chip.
fallingdowndizzyvr@reddit
I didn't understand why at launch, the 64GB model was only $200-$300 less than the 128GB model. Today though, you can just about get 2x64GB machines for the price of a 128GB machine. With TP a thing, that 2x64GB cluster should be faster than a single 128GB machine.
fastheadcrab@reddit
How did you run it in parallel? From my understanding it is very difficult. There are only a handful of verifiable examples that I know of, including the YouTuber Donato and one or two forum posters. All other claims seem to be BS.
fallingdowndizzyvr@reddit
I don't. Since I only have one Strix Halo. But it shouldn't be hard anymore.
https://github.com/ggml-org/llama.cpp/pull/19378
Didn't one of those people who runs vLLM TP on Strix Halo say that RDMA isn't really necessary? It's a nice-to-have, not a must-have.
fastheadcrab@reddit
Tensor parallel has not been implemented yet, FYI. The developers did mention it was coming not too long ago, but they've also been saying that for a while now.
No I'm pretty sure that was one of those keyboard warriors posting nonsense. I think I've seen the same post. The guy was clueless.
CapeChill@reddit
I would've been sad buying the 64, but I remember thinking this too: I would've bought the 64GB model if it were, say, $600 less.
ImportancePitiful795@reddit
Do not spend $3000 for 395. Ain't worth it. At this point 3xB70 or 2xR9700 are better options with a gazillion years old X399 (+1950X) or X299 (+10940X).
Bosgame M5 is around $2000 these days. Got mine for €1700 two months ago.
Ok-Recognition7109@reddit
LOL, 2 months ago. Might as well have been 10 years ago. Currently $4,000 for a Bosgame M5. STFU.
Own_Bandicoot4290@reddit
You would still need a mobo, CPU, RAM, storage, case, power supply. In the end the process will be the same, but it's pretty safe to say the price will be much higher.
CapeChill@reddit
The worth is highly questionable but I wanted to play with shared memory. I have a 5090 so I’ve wasted several thousand on this nonsense. It’s okay I’m having fun.
The power draw/noise could make this justifiable imo but I have an epyc server running arc already too…
fallingdowndizzyvr@reddit
Link please. Since the lowest priced one, the Bosgame M5, is now $2400. The one I got, the GMK M5, was as low as $1700. Now it's $3000, which just happens to be the price the cheapest Spark was until a couple of weeks ago.
ImportancePitiful795@reddit
Bosgame used to be the cheapest one. At $2,400, then, only the Abee makes sense, and it's watercooled.
OpenAI & NVIDIA are directly responsible for these prices, which are driven by LPDDR5X costs.
fallingdowndizzyvr@reddit
How so? Even before the DRAM surge pricing, the Abee was one of the more expensive options. Newegg doesn't even carry it, and they are the US Abee outlet.
ohgoditsdoddy@reddit
I don’t have a Ryzen but I hear Spark is much more performant. Plus, the Ryzen systems don’t really have unified memory.
fallingdowndizzyvr@reddit
Yes it does.
bnolsen@reddit
The ai max absolutely does.
StardockEngineer@reddit
Prefill speeds are much higher.
RipperFox@reddit
Also massive parallelism. Not that much better than Strix Halo in single-user, but you can do multiple jobs at once.
MoffKalast@reddit
Also working blackwell flash attention I presume?
ggone20@reddit
We’re talking about completely different worlds. You cannot compare a datacenter analog to a consumer max+ chipset. They aren’t even in the same league despite, yes, individual performance being roughly similar (let’s not split hairs on this last point).
unjustifiably_angry@reddit
I bought two of these and from day one it was a journey of disappointment as I realized I'd been scammed by the Leather Jacket, particularly due to NVFP4 performance (lack thereof). But my impression has gradually improved and I've learned to take advantage of what they're actually good at. Compared to Strix Halo:
1) You can very easily link two together to roughly double performance and VRAM capacity, getting it into a much more usable and interesting range. It's not exactly double, but considerably better than single-node. Tensor parallelism in this mode works extremely well and is perfectly stable (rough launch sketch after this list). This opens the door to ~400B models or even more with a modest REAP. You can also go up to four or even eight nodes, but the scaling isn't linear and you need a fancy network box thing that's pretty expensive.
2) Despite still lacking NVFP4 the FP8 performance is quite good and with a pair of them together there's a wide variety of models you can run that way. FP8 runs at almost exactly the same speed as INT4. Strix Halo doesn't support FP8 acceleration.
3) A simple symlink from one Spark to the other (and vice versa) effectively allows you to double the available storage by storing models on only one Spark and loading them on-demand across the super fast 200Gbps direct connection. This means even the 1TB models are perfectly adequate; that's tons of storage for models considering you'll probably only need a handful at the ready.
4) I found a cheap source of the aforementioned 200G cables from China. You actually need two to get the full advertised bandwidth, and you can get two such Chinese cables for less than one "official" cable, which was a nice surprise since the official ones are impossible to get your hands on here. Alternatively, you can get a pair of much cheaper 100G cables very easily from Amazon for a pretty fair price.
5) There are community attempts to get NVFP4 working which are making progress, although in all honesty at this point I've gotten used to ignoring that function, so I'll need to see some KLD and perplexity benchmarks before I'll believe it's worth the bother. I do believe that NVFP4 can potentially be a game-changer but it seems like it requires a lot of legwork on the part of the person who does the quantization to "get it right" and I'd be hesitant to trust randos on huggingface to do that.
6) Minimax 2.7 is expected very soon and I'll be able to run that at INT4 (if not NVFP4) with absolute assloads of room left over for full FP16 200K context with 2-4x concurrency. Strix Halo will need to quantize to Q3 and even then may not have enough room for full 200K for single-user let alone concurrency. Considering how exciting that model seems to be, this alone is a huge thing in Spark's favor.
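Since people keep asking about point 1: here's a rough sketch of what the two-node tensor-parallel launch looks like, assuming vLLM's Ray backend (the checkpoint ID is a placeholder, and NVIDIA's own dual-Spark playbook may differ in the details):

```python
# Hypothetical two-Spark tensor-parallel serving sketch (vLLM + Ray).
# Assumes a Ray cluster is already up over the 200G link, e.g.:
#   spark1: ray start --head --port=6379
#   spark2: ray start --address=<spark1-ip>:6379
from vllm import LLM, SamplingParams

llm = LLM(
    model="some-org/some-big-model-FP8",  # placeholder checkpoint ID
    quantization="fp8",                   # FP8 works well even without NVFP4
    tensor_parallel_size=2,               # shard weights across both Sparks
    distributed_executor_backend="ray",   # multi-node execution via Ray
)

out = llm.generate(["Hello from the cluster"], SamplingParams(max_tokens=32))
print(out[0].outputs[0].text)
```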
batman0912@reddit
i also wanna know
Easy-Unit2087@reddit
The 200GbE interface alone makes up the difference, but PP is much higher as well.
spky-dev@reddit
It is more powerful compute-wise, despite having the same memory bandwidth limitations. Also, you get access to CUDA, and it's scalable since you can connect them.
It generally achieves higher prompt processing rates, though all these unified boxes, Mac Studio included, suffer from slow pp vs dedicated GPUs.
-dysangel-@reddit
Will be interesting to see how much M5 Ultra changes the equation. There is also TinyGPU now
FastDecode1@reddit
small pp*
seamonn@reddit
weak pp*
CalligrapherFar7833@reddit
Compute-wise, isn't it just more powerful due to CUDA optimizations?
Late_Night_AI@reddit
Being NVIDIA- and CUDA-based. Hate it as we might, a lot of AI development is happening on CUDA. 🤷♂️
ea_nasir_official_@reddit
Hoping that Vulkan gets better. It already partially fixed cross platform gaming, I hope it can do the same for AI.
ea_nasir_official_@reddit
CUDA and bragging rights
keyser1884@reddit
Primarily CUDA support (although it’s a newer version of CUDA with less mainstream support…)
Daniel_H212@reddit
As a strix halo owner, vLLM and SGLang are basically impossible to get working on my system, and I think the DGX spark has better compatibility in that regard. But llama.cpp has gotten so good I don't feel like I care about vLLM or SGLang anymore.
bnolsen@reddit
The AI Max is seeing massive amounts of development and its stack is rapidly advancing. Hopefully we'll also see the NPU come more into play as support for it improves. What sucks is trying to play a game while an LLM is crunching.
madsheepPL@reddit
CUDA and ConnectX-7
Royale_AJS@reddit
Nvidia’s entrenchment in the tooling space, and frankly, a bigger GPU that does prompt processing a whole lot faster. The AMD systems aren’t that fast at prompt processing. Source: Strix Halo owner.
Wildnimal@reddit
CUDA. T2I and video are still better with NVIDIA.
Secure_Archer_1529@reddit (OP)
Nvidia stack.
Repoman444@reddit
Spark owner here and this is absolutely true. Pretty much "buy our system, then fuck you" after. Jensen's too busy doing interviews instead of supporting his own products.
ZCEyPFOYr0MWyHDQJZO4@reddit
I bought a Jetson nano years ago. My most recent hardware purchase was a framework desktop. You've gotta learn not to buy Nvidia arm platforms sooner or later.
JohnMason6504@reddit
The bandwidth wall is the real issue here. Spark has 273GB/s memory bandwidth. Without NVFP4 you are stuck at FP16, which means your effective model capacity is halved. A 70B at FP16 needs 140GB, which Spark can physically hold, but decode throughput drops to maybe 2 t/s because you are memory-bound. NVFP4 was supposed to fix this by letting you fit the same model in a quarter of the memory footprint. Without it, two 3090s at 936GB/s each genuinely outperform Spark for most inference workloads.
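To make the memory-bound math concrete, a quick back-of-envelope roofline (ignoring KV-cache traffic and kernel overheads, so real numbers land below these ceilings):

```python
# Decode is memory-bound: every generated token streams all active weights
# through memory once, so tokens/s is capped at bandwidth / model size.
def decode_ceiling(params_b: float, bytes_per_param: float, bw_gbs: float) -> float:
    model_gb = params_b * bytes_per_param  # e.g. 70B * 2 bytes = 140 GB at FP16
    return bw_gbs / model_gb

print(decode_ceiling(70, 2.0, 273))      # 70B FP16 on Spark:   ~2 t/s
print(decode_ceiling(70, 0.5, 273))      # 70B NVFP4 on Spark:  ~7.8 t/s
print(decode_ceiling(70, 0.5, 2 * 936))  # 70B 4-bit on 2x3090: ~53 t/s
```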
ggone20@reddit
Welcome to Linux in general? Lol
But yea it sucks nvfp4 isn’t native.
All that said, the Spark is NOT overpriced at all. The NICs are worth over $1000 alone. It's just specialized hobbyist hardware, which carries a premium over 'plain' hardware that doesn't truly scale, like the Max+ chipset some people are mentioning.
fallingdowndizzyvr@reddit
What possibly in that post needed "AI assistance"?
Savantskie1@reddit
Maybe translation? I mean come on that’s the first thing I thought of
fallingdowndizzyvr@reddit
You don't need "AI" for that. A simple translator program would do that. Browsers even do that now, no need for a separate translator app.
OP posts plenty in English. I haven't seen any "AI assistance" notations in any of those other posts.
Savantskie1@reddit
Yeah keep trying to fuel the ai hate, eventually you’ll convince someone to accept your opinion lol
fallingdowndizzyvr@reddit
LOL. Throw away everything you think is the likely reason, then you'll be left with the truth.
Savantskie1@reddit
Why? When everything I see falls into the same patterns I see every day.
fallingdowndizzyvr@reddit
Because you don't see. You just think you see.
MelodicRecognition7@reddit
Spark is a failed gaming platform that was rebranded as "AI" platform, it was never intended to have any NVFP4
fallingdowndizzyvr@reddit
It was never intended to be a gaming platform. If it was, it would be x86 and not Arm.
unjustifiably_angry@reddit
It most definitely was. Strip away the AI branding and it's a gaming monster, with or without x86->Arm emulation.
fallingdowndizzyvr@reddit
It most definitely isn't.
And that's why it isn't. You would be way better off getting Strix Halo for gaming.
ImportancePitiful795@reddit
Actually the GB10 is from the gaming platform/laptop NVIDIA was pushing to the likes of Nintendo and handheld manufacturers.
However, nobody is stupid enough to use ARM for gaming when 100% of the software is x86-64.
unjustifiably_angry@reddit
The CPU cores in GB10 are fast enough to translate x86 code to Arm and still be extremely performant. You're not keeping up with progress on that front.
Maleficent_Celery_55@reddit
Not saying it was intended to be a gaming platform, but there are rumours that NVIDIA is making an ARM gaming SoC. I think it's called N1X or something.
Ok_Mammoth589@reddit
The RTX Pro lineup isn't a failed gaming platform. I get that there were rumors the DGX was supposed to be a gaming NUC, but that doesn't change the fact that the RTX Pro lineup has the same problem.
MelodicRecognition7@reddit
where did you see any "pro" in my message starting with "Spark ..."?
Practical-Collar3063@reddit
He is talking about the "Pro" lineup because it has the exact same problem as the Spark. Meaning: if NVFP4 is not supported on Spark because it was intended to be a gaming platform, why is it also not supported on the Pro lineup?
Are those also failed gaming products never intended to have NVFP4?
Most likely the reason is what many people have said already: "they care about the server hardware; if the Spark was too good it would have overshadowed some of their server solutions for certain applications."
CoconutMario@reddit
Heh, so what exactly is your problem? I've owned one for 3 weeks now... and have to say, it works like a charm...
for serving -> https://github.com/eugr/spark-vllm-docker / e.g. in combination with my beautiful NVFP4 images at https://huggingface.co/bg-digitalservices - but also from NVIDIA, there are now many NVFP4 models which work like a charm (Nemotron etc.)
if not for serving... then most things also work quite well with Jupyter, etc. What problem exactly are you facing?
but yeah, I agree it's not a "fully-polished, all neatly and perfectly curated" product yet
Practical-Collar3063@reddit
Look at your logs when starting vLLM; you will see an error message about NVFP4 and a fallback to Marlin.
They don't actually work "like a charm". They are functional, but you are leaving performance on the table due to lack of support.
Tyme4Trouble@reddit
Last I checked, NVFP4 works for vLLM, SGLang, and TRT-LLM now, so I don't know what OP is talking about. TRT has the weirdest issues where MoE works properly but dense has issues, but in general it works on the Spark.
Impossible_Style_136@reddit
This highlights the exact risk of buying early-lifecycle enterprise hardware for local/prosumer use. The hardware-software co-design required for NVFP4 isn't something the community can just patch via a custom backend flag; it requires deep integration at the cuBLAS/Triton level.
If you are stuck with the hardware right now, verify your dependencies and drop back to relying on FP8 scaling. Treat NVFP4 as non-existent until NVIDIA pushes it into the mainline containers.
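A minimal sketch of that fallback, assuming vLLM (the checkpoint ID is a placeholder):

```python
# Verify the stack first, then treat NVFP4 as non-existent and load FP8.
import torch
import vllm

print(vllm.__version__, torch.__version__, torch.version.cuda)

from vllm import LLM

llm = LLM(
    model="some-org/some-model-FP8",  # placeholder: any FP8 checkpoint you trust
    quantization="fp8",               # the FP8 path is the mature one today
)
print(llm.generate(["sanity check"])[0].outputs[0].text)
```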
RoughContract872@reddit
Can we organize a group to sue them and get our money back? They lied about the product.
unjustifiably_angry@reddit
They didn't actually lie, they're just not being especially helpful with getting the product's software stack to work as advertised. It's like a car manufacturer delivering your car in pieces from the back of a dump truck and sending a mechanic to work on assembling it for four hours every weekend.
ZachCope@reddit
Has this been posted before?
Ok_Mammoth589@reddit
Yes, I think it bears repeating because NVIDIA has clearly done a bait and switch with DGX and the RTX Pro line. In a better world we'd have regulatory authorities we could go to... but this is America.
unjustifiably_angry@reddit
The problem is I don't think there are technically any lies, or perhaps even any exaggerations. It just needs custom code to be written specifically for it, and Nvidia's not being especially helpful on that front; they're probably too focused on the billion-dollar contracts.
Blackwell in general came in extremely hot, you might already be aware. I think basically everyone is currently needing to do their own in-house optimizations to get Blackwell to run as advertised. Nvidia is trying to make the most of the market while it's hot and that means software isn't keeping up with hardware.
catplusplusok@reddit
I thought https://github.com/Avarok-Cybersecurity/dgx-vllm is pretty fast? If not, you could try the Thor Dev Kit; it has working NVFP4 in vLLM, though you may need to build from source. It's also a bit cheaper.
unjustifiably_angry@reddit
I looked into Thor, and its tradeoffs just to get functioning NVFP4 weren't really worth it. I think I might've done the math and figured out that even accelerated NVFP4 would be slower than INT4 on Spark.
Sixstringsickness@reddit
I have a strix halo machine and have always wondered if I would be better off with a GB10 based device...
It seems like both platforms have their pros and cons. Being tied to NVIDIA's Ubuntu was my biggest concern, followed by, from my understanding, their tendency to simply stop supporting devices over time.
StardockEngineer@reddit
It’s just regular Ubuntu
unjustifiably_angry@reddit
With Nvidia customized closed drivers specifically for GB10. So if Nvidia gets bored, GB10 might be bricks.
I don't see them doing that anytime soon but it could happen.
whichsideisup@reddit
I dunno. They seem to have pretty amazing driver support for the consumer cards and the shield is 10 years old with updates still coming.
gethooge@reddit
Good timing, similar sentiment on the NVIDIA DGX Spark user forum.
Given the hyperscalers/AI are printing unlimited money, I guess even consumer "AI" products are just left to rot.
conockrad@reddit
All native FP4 MoE backends produce garbage output or crash on SM120 (compute_120) due to broken CUTLASS grouped GEMM templates: https://github.com/NVIDIA/cutlass/issues/3096
Spara-Extreme@reddit
Where CAN you run NVFP4? I had read somewhere that it didn't run on Blackwell cards, and I had issues on my RTX 6000 Pro Blackwell.
Vicar_of_Wibbly@reddit
There’s Blackwell and there’s Blackwell.
Datacenter sm100 Blackwell is the real deal with TMEM hardware and support for the advanced instruction set, tcgen05. This is what the cutlass kernels use for accelerated NVFP4.
Consumer sm120/sm121 Blackwell does not have Blackwell hardware, so I renamed it Brownwell 💩. There’s no cutlass support (due to the missing hardware in fake/consumer Blackwell), so there’s no accelerated NVFP4. Just fallback to sm89/triton.
So we can run NVFP4, we just can't do it as fast as promised for Blackwell-generation GPUs.
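Easy way to check which one you've got (a quick probe; to the best of my knowledge datacenter Blackwell reports compute capability 10.x and consumer/Spark reports 12.x):

```python
import torch

# (10, 0) -> sm100 datacenter Blackwell: TMEM + tcgen05, real NVFP4 acceleration.
# (12, 0)/(12, 1) -> sm120/sm121 "Brownwell": no tcgen05, so fallback kernels.
major, minor = torch.cuda.get_device_capability()
print(f"sm{major}{minor}")
if major == 10:
    print("Datacenter Blackwell: accelerated NVFP4 via CUTLASS should apply.")
elif major == 12:
    print("Consumer Blackwell: expect sm89/Triton fallback for NVFP4.")
```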
Total scam on the consumer cards.
catplusplusok@reddit
Consumer sm120 absolutely does fast NVFP4 with vLLM/TensorRT-LLM/nunchaku. I mean, maybe not as fast as datacenter, but quite usable. I don't know what the deal is with the DGX Spark, but there is an optimized vLLM clone that apparently works.
conockrad@reddit
It's "fast" because it's half the size of FP8, not because it's fast. The whole post is exactly about this.
catplusplusok@reddit
I don't have a Spark or a reason to complain about vLLM performance on my desktop, but I personally verified that nunchaku uses real NVFP4 instructions (the register-based ones rather than TMEM), not emulation. If inference engines don't, it's a software issue.
conockrad@reddit
According to Claude: “They write their own W4A4 GEMM kernels (not CUTLASS, not cuBLAS) that use Blackwell’s native FP4 tensor core instructions, compiled with compute_120a/compute_121a gencode flags. This is for diffusion models (FLUX, Qwen-Image, SANA), not LLM serving — so they don’t hit the MoE grouped GEMM hell that vLLM/FlashInfer are drowning in”
Xp_12@reddit
yeah... I'm running dual 5060 Ti with Qwen 3.5 35B NVFP4 through vLLM with flashinfer_cutlass, and people in the Discord from this forum have NVFP4 running on vLLM with DGX Sparks... dunno what's up here.
-dysangel-@reddit
some kind of astroturfing against nvidia maybe?
sautdepage@reddit
It's customers buying expensive hardware and expecting it to match the advertised material. It works, but forget about the performance charts and optimizations published by Nvidia mentioning "Blackwell".
There was no way to tell from product spec sheets. "5th gen tensor cores" is borderline false advertising when you dig up the CUDA spec sheets and find the 5th-gen instructions are not available.
Xp_12@reddit
who knows to be honest. it is kind of annoying with the sm variance among "Blackwell" products. the brownwell comment got a chuckle out of me.
Mkengine@reddit
I use Atlas Inference on my DGX Spark (they are currently in Version 2.6 I think) and can use NVFP4 models without problems, Qwen3.5-35B-A3B should be around 130 tok/s with MTP enabled for example.
Altruistic_Heat_9531@reddit
B200/B300 variant. And even then, that stuff is a whole can of worms of hardware stacks. Just like you, I also tripped out when the profiler showed much lower perf for NVFP4 on the 5090 and Pro 6000. I thought "oh, maybe the 5090 is cut down on some ISA and hardware here and there because it's consumer", then switched to the Pro 6000, but nope...
Sufficient_Prune3897@reddit
On the data center Blackwell cards
Spara-Extreme@reddit
Yea, and that's what's annoying to me. I wouldn't mind getting a Spark as a local inference machine if NVFP4 worked without jumping through hoops. That it doesn't isn't necessarily the deal breaker... that it doesn't after all this time is the deal breaker.
txgsync@reddit
This is the exact reason I bought a DGX Spark and returned it three weeks later. FP8 is fine, but NVFP4 is functionally nonexistent.
Bizarrely, MXFP4 is fast. But I already have a MacBook Pro M4 Max that’s faster.
Vicar_of_Wibbly@reddit
Same deal with the RTX 6000 PRO 96GB GPUs. Exactly the same. Advertised as having accelerated FP4, etc. and in reality it’s just an sm89 fallback to Triton. 💩.
Nvidia really pulled the rug on these so-called Blackwell systems.
Shady AF.
Annual_Fondant2644@reddit
What does “sm89 fallback to triton” mean?
ANR2ME@reddit
Maybe it only uses sm89 (aka Ada Lovelace) features 🤔
abnormal_human@reddit
It means in practice you're often using CUDA kernels targeting sm89 (Ada, IIRC) instead of NVFP4.
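If you want to see it yourself, you can list which sm targets your PyTorch build actually ships kernels for (a quick check; output depends on the build):

```python
import torch

# Compute capabilities this PyTorch build was compiled for. If the fast path
# for your arch is missing, libraries fall back to kernels written against
# older feature sets -- e.g. a "Blackwell" card running sm89 (Ada) era code.
print(torch.cuda.get_arch_list())          # e.g. ['sm_80', 'sm_86', 'sm_89', ...]
print(torch.cuda.get_device_capability())  # what the GPU actually reports
```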
sk-sakul@reddit
Can you point me to some resources about this?
We all have clearly higher expectations for that price...
mxforest@reddit
Rubin will be different. Trust me! -leatherboy69
JGoneWild42@reddit
I've been able to run some NVFP4 setups just fine using Docker Compose and following the playbook on NVIDIA's website. Nemotron 3 Super NVFP4 runs at about 16 tokens/s on a single DGX Spark.
What models are you having issues using?
kaliku@reddit
Isn't that...very slow? I get up to 170 tps with gpt OSS 120b on a pro 6000
JGoneWild42@reddit
It's a Spark; it has a memory bandwidth of 274 GB per second. It's slow at inference but large enough for bigger models. It's a machine built for research, development, and fine tuning; it's not an inference machine. For inference I recommend a dedicated GPU or desktop build.
Source: GenAI Team Lead
kaliku@reddit
Thanks for educating me. I was under the wrong impression that the memory bandwidth was somewhat similar to the rtx pro 6000.
StardockEngineer@reddit
Different active parameters
StardockEngineer@reddit
It’s hardly an issue. Intel’s own INT4 format runs well.
Blaisun@reddit
Hard disagree. Nvidia has not delivered what they promised. Other solutions that are workable do not excuse them not delivering on the platform they promised.
StardockEngineer@reddit
I'm not saying promises were kept. But to dismiss the whole platform? Sounds like tunnel vision.
Ok_Appearance3584@reddit
Agreed. NVFP4 underperformance is staggering and there's no excuse, given this is the biggest market cap company in the world right now.
Aaaaaaaaaeeeee@reddit
I've seen discussions and people's forks, but haven't had the time to inspect the reports. It would be good to check a W4A4 30-70B dense example against an INT4 baseline and NVFP4 (prompt processing).
Hearcharted@reddit
Don't worry, I cannot even afford food 🤷♂️
Operation_Fluffy@reddit
If NVFP4 is a must-have for you, I agree. That said, I honestly love my Sparks, and I feel like the headline is a little binary. They do a good job as a local cluster with lots of memory. For example, with OpenCode and Qwen3.5 I can run a coding agent very similar to Claude Code locally at about the same speed using FP8, not NVFP4 (260K context too).
justserg@reddit
got mine a few months ago and honestly the FP4 thing stings, but the prefill speed alone makes it worth it over my Mac Studio for anything context-heavy
ProfessionalSpend589@reddit
Disclosure: i’m team AMD for this battle :)
This is standard practice in many industries, and maybe the most popular example is with smartphones. The other examples are niche hardware. At least NVIDIA has the funds to finance the work and eventually release something.
The other annoying thing is announcing in advance that something will be available in stores after X months.
Ok_Mammoth589@reddit
AMD has no leg to stand on. The 7900 XTX has been a "fully" supported member of ROCm for half a decade now, but AMD routinely develops ROCm support only for RDNA3 enterprise GPUs. Same RDNA3, same hardware units. Not supported under ROCm.
misha1350@reddit
How the turns have tabled. AMD used to be the driverless one; now NVIDIA is going the way of the Thermi. You get Thermi with its missing ROPs, but now with horrible drivers, and lots and lots and lots of greed.
rebelSun25@reddit
I never would. They're so overpriced here in our area that it would be a logical and financial disaster until prices normalize. It's minimum $4400 or so
Late_Night_AI@reddit
Ouch, that's rough. I got mine off Newegg for $3,299.