Why does FurMark have a “GPU killer” reputation, but playing heavy games with 100% load for hours is totally fine?
Posted by rucekooker@reddit | buildapc | 92 comments
What’s the actual difference between running something like FurMark vs just playing demanding games like Cyberpunk 2077, Kingdom Come: Deliverance II, or Red Dead Redemption 2?
From what I’ve read, a lot of people say FurMark can “kill” your GPU if you run it for too long. But at the same time, people are out here playing games at basically 100% GPU usage for hours every day, for months or even years, and their GPUs are completely fine apart from thermals creeping up a bit.
For example, I've been playing Crimson Desert at 100% GPU load for around 60 hours total now and everything is completely stable. But the moment I run FurMark for more than 2 minutes, I literally start sweating my balls off.
Is FurMark just propaganda?
Curious to hear from people who understand this deeper.
NarutoDragon732@reddit
Furmark cannot brick or damage hardware, assuming said hardware is stock and functional.
thereddaikon@reddit
Software can absolutely damage hardware if it's written poorly or the hardware isn't capable of operating to the level the software can ask it to. Properly engineered systems should have safeguards in place to prevent it but it can happen.
Drenlin@reddit
It can't damage anything in ideal conditions, but it will absolutely reveal power delivery bugs or bad PSUs.
In years past, GPUs also weren't as good at thermal regulation. Furmark would absolutely highlight a bad overclock.
dotareddit@reddit
what a time.
My 7950 was a fucking space heater but i was too stoked about my sick OC to give a shit.
MegaScubadude@reddit
haha I was living in a college dorm in the northeast with terrible insulation with my 7950. I might as well have been waging war on my roommates with how hot it would get at times, even when it was freezing outside. They were probably so relieved when I started going out on weekend nights instead of holing up to play games every time.
Secret_Cow_5053@reddit
Free heat! /s
theunfilteredtruth@reddit
In the winter, the estimated amount the gas company has me pay always comes in way over my actual use, so the bill is always a credit.
Summer is when I switch from Ultimate Performance to High Performance, or else I have to hydrate more just to stay in the room.
bphase@reddit
Meanwhile, a 5090 is basically one of those 7950 heaters these days. But then I guess most people with a 5090 also run AC.
Shoshke@reddit
Fun fact, that 7950 was about as much a space heater as a 5070.
The difference was the cooling design.
RiloxAres@reddit
I mean, just open a window at that point. Even with an underclock my room gets up to 80°F in the winter.
flesjewater@reddit
And nowadays such power usage is considered normal. Crazy stuff.
DJKaotica@reddit
hahaha it's cold enough here that if I leave the sliding door open in the winter it doesn't matter (3rd floor condo; only the door and one window open).
In the summer? Heatwaves will hit the low-to-mid 30s °C with high humidity, and periodically the high 30s. The highest recorded temp is 42°C in June. I regularly set FPS limits and whatnot to ensure I'm not cooking myself in the summer.
zipzoomramblafloon@reddit
I live further north, and used to have a summer overclock and a winter overclock for when the ambient room temp was 15°C or 22°C.
dagelijksestijl@reddit
Furmark killed ATi cards which didn’t have any thermal protection at all at the time of its release.
PogTuber@reddit
Strangely enough, it didn't reveal ray tracing instability.
I did an undervolt on a 3080 which survived every test I threw at it except for the Metro Exodus benchmark.
scratcher1679@reddit
Yep, I once ran FurMark and Prime95 simultaneously on a friend's system which we had just finished cleaning up, and the 700W 80+ White PSU started smoking. (We turned it off right away; he then brought it to the shop he got it from, and I still have no idea if they replaced the PSU with another model, or at all.)
rucekooker@reddit (OP)
then what causes a GPU to break?
MightBeYourDad_@reddit
Nothing, just silicon degradation over time, or traces breaking from many heat cycles.
rucekooker@reddit (OP)
does repasting and repadding GPUs count as “reviving” them, or is it more like giving them a second wind?
therealslapper@reddit
Repasting or repadding does not revive broken hardware.
rucekooker@reddit (OP)
Does it guarantee longer lifespan though?
MyzMyz1995@reddit
No, it does not. It takes years for thermal paste to dry out. Most thermal paste needs replacement after 5 years or more. There's no reason to replace the thermal paste on a new GPU.
ThereAndFapAgain2@reddit
I disagree that you need to pay a professional to repaste a GPU. It's what, like 6 screws and a fan cable? Most people can manage that no problem.
F-You-Hard@reddit
The problem with that is you need to know the thickness of the thermal pads or you can damage your GPU. But some people have already done that work, so for some models you can simply google the thicknesses.
kaje@reddit
Thermal putty exists now. You can just use that if you don't know the thickness of the pads.
ShaqShoes@reddit
To be honest I was just careful and re-used the pads and haven't had any issues years later (I actually have repasted my 3080 twice in 5 years due to swapping out the cooler for an AIO and then back when the pump broke)
windowpuncher@reddit
Not entirely true. I had an RX 480 that developed horrible stuttering problems for no apparent reason. Temps were just fine. I swapped a bunch of hardware, new drivers, old drivers, new memory; nothing fixed it until I re-pasted the thing. After that, everything ran fine.
Something was getting hot, but whatever it was wasn't being reported by the sensors.
dsinsti@reddit
If temperatures are bad, do it. (My 4-year-old RX 6600 PowerColor Fighter has always had a horrible delta and hotspot, I'd dare to say because of a bad design/heatsink fault.) I decided to repaste it using TPM and undervolt it, and took my chances because its value didn't justify paying a professional. While I agree it was a risk, with YouTube and patience it was pretty straightforward, and it now works flawlessly so far in my kid's 1080p gaming build.
heroicxidiot@reddit
Everything breaks down eventually. It's a matter of how well you take care of it, and of not turning your PC into a jet engine every time you want to play modded Minecraft.
Bac-Te@reddit
Fuck now you're making me self conscious about my own body and health. Thanks for the wake up call.
heroicxidiot@reddit
Well, that's one way to respond to that, but good luck with your efforts.
Happy_Brilliant7827@reddit
It fixes paste issues if they exist. Paste can dry out and work less well after a couple of years.
If it's turning off or throttling due to heat, repasting can help. GPUs also have an estimated lifespan that's basically in the design data.
Also, a game at 100% isn't using all the cores at 100%; it's just reporting 100% utilization. I suspect FurMark uses more of the different core types at once.
Federal-Property-395@reddit
No, it just helps to solve thermal issues if they are present
Thermal issues do not kill hardware. Chips are designed to throttle and shut down before any damage can be caused.
FrequentWay@reddit
The damage is already done if you smoked something; silicon doesn't repair itself. But repasting and fresh thermal pads mean a healthier GPU, in that it stays away from thermal damage.
necheffa@reddit
Here, plug this USB flash drive into your uranium enrichment centrifuges... it'll be fine, software can't hurt hardware. 🙂
withoutapaddle@reddit
I never get tired of how genius the whole Stuxnet ordeal was.
misteryk@reddit
tell that to people who got their 3080 bricked by diablo 4 cutscenes
cakemates@reddit
The way GPUs and CPUs are built these days, if you can kill your GPU or CPU with software, that's a firmware defect and it's the manufacturer's job to fix it; the firmware should make using the device idiot-proof. That Diablo example you listed is an NVIDIA problem.
Biduleman@reddit
That's because the cards had a defect so the hardware was not really "functional".
HavocInferno@reddit
That assumption is never fully safe though, that's the issue.
Furmark got its reputation because way back then, GPUs had much less robust power and thermal control and Furmark managed to run the cards beyond their electrical design limits. So, sure, it didn't directly damage hardware, but it rather consistently exposed fundamental design flaws across entire generations.
And while rarer, it still happens sometimes these days. Remember New World and the burned cards it caused? Sure, not on purpose, but software repeatedly triggering a defect via an existing design flaw still happens.
People for some reason think software and hardware are always strictly and safely decoupled from each other. But that's just not how it works, and likely never will be. Instructions from software eventually have to run on hardware, and the path to get there will most likely always have some as-yet-unknown flaws that can let software inadvertently kill hardware.
omega552003@reddit
New World did: https://youtu.be/kxoXbfzP5BU
Annsly@reddit
It wasn't the game's fault: https://www.pcworld.com/article/395090/evga-explains-how-amazons-mmo-bricked-24-geforce-rtx-3090s.html
Cute_Customer420@reddit
It's shitty programming when they let menus run at 1000 fps. Both are at fault.
lichtspieler@reddit
Diablo 4 did murder a few high-end GPUs as well.
Unlocked menu FPS is usually one of the ingredients in the GPU murder cocktail.
twinkeybrain@reddit
Have you seen Live Free or Die Hard though?
fistfulloframen@reddit
Tell that to every computer I've left it running on.
Faolanth@reddit
Software can't harm hardware, the exception being if you disable safety measures (Tjmax, messing with voltages).
FurMark was known for running a more intensive workload: you'd draw maximum power and maximum current. Games tend to be a lot lighter; if you'd pull 350W in FurMark, you might only pull 220-260W in 99% of games, for example.
But it can't actually damage anything. Voltage and current are limited, heat causes throttling, and so on, so it's safe.
It may make what is already broken apparent, though: if your card is already failing but doesn't show signs, hitting 100% of its power limit may finally reveal that it's broken.
Warcraft_Fan@reddit
Older cards had fewer safeties and would roast themselves if you ran FurMark back then.
HavocInferno@reddit
Furmark got its reputation because years ago GPUs simply did not have such sophisticated safety measures. It did actually damage hardware by running a workload that pushed the hardware beyond its safe physical limits.
These days safety measures in drivers and hardware can guard against it.
Skarth@reddit
Very, very few games can run a GPU at a full 100%.
Most games will fluctuate between 90-100% at most, because other components will bottleneck the GPU on occasion.
FurMark doesn't have that restriction, as it's designed to run entirely on the GPU with nothing holding it back.
Gaming is running a car on a race track: you push it hard, but you let off the throttle on occasion for braking and turning.
FurMark is parking the car and flooring the accelerator at max RPM non-stop.
Lord_Val@reddit
Failsafes are built into the GPU at the hardware level. If your GPU blew up while running FurMark, I'd say it's either a shitty GPU or user error (from improper overclocking).
Noxious89123@reddit
In 2026 sure, but not in 2007!
icantchoosewisely@reddit
Back in the day, when FurMark first launched (about 20 years ago), those hardware-level safeties were not present in quite a few GPUs and CPUs.
Back then, software such as FurMark, or whatever CPU-equivalent stress test you might use, could cause a chip to burn itself up. I would say this is the reason for FurMark's "GPU killer" reputation.
With modern hardware that functions properly, this should not be an issue.
Even if the cooling system fails, with a modern chip, thermal protection should kick in and shut it down before any damage occurs.
HeidenShadows@reddit
FurMark is a power-virus load. Its fur-simulation donut only renders for the sake of doing so many calculations per second that the card pulls maximum power.
Noxious89123@reddit
Reminds me of my ex-wife
aragorn18@reddit
Furmark is running specific functions for the sole purpose of drawing as much power and generating as much heat as possible. It's not trying to render a real game.
An actual game will call various functions and draw a variable amount of power. But certain scenes in certain games can sometimes draw more power than even Furmark.
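For a feel of why a gapless synthetic load pins power harder than a frame loop, here's a rough sketch. This is not FurMark's actual code (FurMark is an OpenGL fur shader); it assumes PyTorch and a CUDA-capable card, and just imitates the "saturate the chip non-stop" idea:

```python
# Rough sketch of a "power virus"-style load, NOT FurMark's actual code.
# Back-to-back large matrix multiplies leave the GPU no idle gaps, so power
# draw pins at the limit; a game's frame loop has natural gaps (CPU work,
# vsync, scene changes). Assumes PyTorch and a CUDA-capable card.
import torch

a = torch.randn(8192, 8192, device="cuda")
b = torch.randn(8192, 8192, device="cuda")
out = torch.empty_like(a)

for _ in range(10_000):             # sustained, gapless compute load
    torch.matmul(a, b, out=out)     # keeps the FP/shader units saturated
torch.cuda.synchronize()            # wait for the queued work to finish
```

Watch power draw in a monitoring tool while something like this runs and you'll see it sit flat at the limit, unlike a game.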
rucekooker@reddit (OP)
Does it actually kill GPUs though
Noxious89123@reddit
In 2026, no.
Things were different back in 2007.
postsshortcomments@reddit
I have a bit of a theory on this one. FurMark gained that reputation years and years ago, when OCP/OPP was flat out not standard, or came at a massive premium.
In the building community, you'll regularly read posts of people asking if a 600W PSU is fine with a certain GPU model even though the manufacturer recommends 700W. Someone will reply: "Yes, it should be fine, you still have a buffer." After a few rounds of that, search-engine indexing, and a game of telephone, the next person comes along, is a bit more ambitious, tries a 550W PSU, then relays "their findings" every time the question comes up and explains there is a buffer. The cycle repeats and the next person tries 500W. Now all of a sudden it's "yes, I've been using a 450W PSU with a 700W-recommended GPU and it's been fine" [but for 3 weeks, not in benchmarking software under sustained 8-hour loads; maybe undervolted after 3 days and they forgot to mention that; and maybe they have an A++-tier PSU while the next guy's is a C-tier that's become an E-tier]. Also: your 180W 14th-gen CPU is not the same as a 65W-TDP 3600 with the same GPU, which may have been assumed in that original post.
Eventually you get to the point of no buffer: one person has an OC model that does consume more wattage; surviving transient spikes outside the wattage spec now requires a high-quality PSU; one person has a 180W 14th-gen Intel CPU with 13 Infinity fans and another a 65W-TDP chip; the people taking such unnecessary risks are complacent because they're on A-tier PSUs; another user's definition of "it's been working fine" is a week after install; and the next person's "it's working fine" involves running an 8-hour FurMark cycle on a 450W PSU with an overclock model that recommends 750W.
Running PSUs near rated wattage is extremely hard on equipment under sustained loads. OPP/OCP on the box doesn't always mean OPP/OCP works. A momentary 530W transient spike on a 550W PSU is not the same thing as running a 530W build on a 550W PSU 24/7, a build that is still exposed to transient spikes on top of that. Some high-quality PSUs can operate well outside their listed wattage; others cannot.
If you have to ask and it involves a PSU or wattage, the answer should be "no, you probably don't want to do that." Yes, PCPartPicker shows an estimated maximum wattage, but you still have transient power spikes, and if you're only 40W under the rating it's also super hard on the equipment, especially bad equipment.
Whether or not that's what it was and that's solely what it is, I cannot tell you. But I can guarantee you something like that has killed many PSUs over the years and has taken many GPUs with it.
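To put rough numbers on that headroom argument, a quick sketch (all figures illustrative, not a sizing guide for any specific build):

```python
# Illustrative headroom math with made-up but typical numbers.
gpu_sustained = 320        # W, GPU board power under load
gpu_transient = 2 * 320    # W, millisecond spikes can be ~2x on some cards
cpu = 180                  # W, a hot high-end CPU under load
rest = 75                  # W, motherboard, RAM, drives, fans

steady = gpu_sustained + cpu + rest        # 575 W sustained
worst_spike = gpu_transient + cpu + rest   # 895 W momentary

print(f"steady: {steady} W, worst transient: {worst_spike} W")
# A "650 W" PSU covers the steady number on paper; whether it rides out
# the spikes or trips OCP/OPP (or worse) comes down to PSU quality.
```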
aragorn18@reddit
It shouldn't on a properly functioning GPU. There are thermal and power limits that should kick in to protect the GPU.
DaedalusRaistlin@reddit
Games tend to have input loops and other logic that means the gpu isn't being pushed as hard as these benchmarks.
The benchmarks are often not really indicative of gaming performance, since they tend to use features games skip for a better framerate. It's more about seeing just how much your GPU can do when it's the only thing being stressed.
A few games had uncapped framerates in menus back in the day. StarCraft 2 nearly burned out an old NVIDIA GPU I had because the cooling just wasn't up to full load, and full load in this case was the game's menu running uncapped at 800 fps. 110 degrees Celsius!
Stuff like that actually did take out a few graphics cards, because back then GPU cooling wasn't always good enough for a card under full load. Combine that with the crappy, poor-airflow cases of the time, and it could get hot enough in there to damage your hardware. I had a hardware monitor and would stop playing games at 110°C on the GPU. I suspect many people weren't that savvy, and the combined CPU and GPU heat was enough to actually cause damage.
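The fix games eventually shipped is almost trivial, which is what made those cooked cards so frustrating. A minimal sketch of the kind of menu frame cap that was missing (generic Python render loop; draw_menu() is a hypothetical stand-in for the game's actual render call):

```python
# Cap an otherwise-uncapped menu loop at 60 fps by sleeping off the
# leftover frame time instead of re-rendering immediately.
import time

TARGET_DT = 1.0 / 60.0  # seconds per frame at 60 fps

def menu_loop(draw_menu, running=lambda: True):
    while running():
        start = time.perf_counter()
        draw_menu()                          # render one menu frame
        elapsed = time.perf_counter() - start
        if elapsed < TARGET_DT:
            time.sleep(TARGET_DT - elapsed)  # idle instead of re-rendering
```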
Noxious89123@reddit
Lots of the answers here are wrong, and I wonder if perhaps it's because it's younger users or those with shorter memories...
FurMark absolutely could and did kill hardware. It is a power virus.
For many years now, cards have been able to detect FurMark running and massively reduce clock speeds.
Back in t' day, a card would just pull as much power as possible, and for some cards this could either overload the VRM (thermal runaway is a bitch) or just overheat the core.
SethMatrix@reddit
It’s a different load type than those games.
No furmark is not propaganda lmao. They’ve been using furmark among other programs to test GPUs before they’re sold since forever.
Prof_Linux@reddit
Software cannot kill hardware unless, say, the software flashes bad firmware onto the hardware, but even then that's extreme and intentional (i.e. GPU firmware flashing).
The thing with FurMark is that it's a GPU stress-test tool, or "everyone's favorite power virus" per Gamers Nexus. Really, all it does is cause the GPU to pull max power, run at max clock speeds, and stress the hardware, like Unigine Heaven or Superposition.
The reason it can "kill" a GPU is that people run it on stock coolers that are clogged with dust, so the card overheats and degrades the silicon, or stock coolers paired with an aggressive overclock that kills the hardware. Other factors apply too: an adequate cooler but poor case airflow that causes overheating, really aggressive overclocks that cause premature hardware failure, a bad PSU, a faulty component (e.g. a bad VRAM chip), and so on. The people who can run FurMark for days without issues are those who have liquid-cooled their cards and have adequate heat exchangers to deal with the heat.
I should note, however, that FurMark, just like Heaven or Superposition, doesn't reflect how real-world games will perform.
Now, the way you say that implies you either have issues running FurMark on your system, or you're scared to run it?
HavocInferno@reddit
Software can kill hardware. Not directly, but by exploiting existing design flaws in the hardware, either on purpose or by accident. There are entire fields of research dedicated to figuring out such error propagation.
Nothing to do with deteriorated or dirty hardware. Hardware just sometimes doesn't have adequate safety measures for specific edge case workloads.
rucekooker@reddit (OP)
Every time I go to my go-to technician to check temps, I run FurMark and Unigine Heaven Benchmark until the scores finish, and I’m always a bit stressed thinking it might kill my GPU.
Part of it is because my current card is a used NVIDIA GeForce RTX 3080 blower-style model. When I first got it from the used market, temps would shoot up to around 88°C at just ~60% usage. After about a month, I repasted it, and now it sits around 71°C under full load with an undervolt and a custom fan curve. Without the undervolt and fan tuning, it still gradually climbs back to ~88°C, which I don't fully understand.
Even with those improvements, I still can’t shake the feeling that something might go wrong. I paid around $290 for it about 8 months ago, but I’ve also spent extra on technician visits, MX-6 paste, Valor Odin pads, improving case airflow, adding more fans, and learning airflow optimization.
At this point I’m wondering if I actually got a good deal, or if I just ended up pouring more money into a card that needed a lot of fixing.
Biduleman@reddit
This is how physics work.
The chip heats up.
The thermal paste's job is to let that heat dissipate into the metal heatsink as fast as possible.
The heatsink's job is to have as much thermal conductivity as possible, to absorb the heat quickly, while also having as much surface area as possible so a fan can cool it fast.
With an undervolt, the card is outputting less heat, so the heatsink+fan have time to push the heat away. Without the undervolt, it pushes more heat so the heatsink has a harder time cooling down.
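And the reason the undervolt helps so much: dynamic power scales roughly with the square of voltage, so a small voltage cut buys a disproportionate heat reduction. A back-of-the-envelope sketch with made-up but realistic numbers:

```python
# Back-of-the-envelope: dynamic power scales roughly as P ~ C * V^2 * f.
# The numbers below are illustrative, not from any specific card.
stock_v, stock_f = 1.05, 1.90   # volts, GHz at stock
uv_v, uv_f = 0.95, 1.85         # a typical mild undervolt

ratio = (uv_v / stock_v) ** 2 * (uv_f / stock_f)
print(f"power vs stock: {ratio:.0%}")  # ~80%: a fifth less heat for ~3% less clock
```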
ChaZcaTriX@reddit
You are really overthinking it. You're like a person who got a new car and immediately started messing with the engine, suspension, etc. because they read blogs about fixing up a 60-year-old car.
Something I'm also worried about is your mention of a technician. Going there once to repaste (once every 5-10 years) is okay, but if they tell you to keep coming back to fix and tune things up, that's a scammer, not a real tech.
EquipmentSome@reddit
It's not a "GPU killer" in the sense that it kills GPUs. It's just guaranteed to put a max load on any GPU immediately.
HavocInferno@reddit
Back when it got its reputation, though, it could kill GPUs, because manufacturers didn't expect such a synthetic workload to be run for that long and didn't build the cards' safety measures to that spec.
rucekooker@reddit (OP)
Does it put strain on the GPU in a way that’s similar to someone doing a bench press continuously without ever stopping?
FrequentWay@reddit
That's what a benchmark is supposed to do. If your GPU or CPU is healthy, it shouldn't go into thermal throttling. If it does, figure out what you're doing wrong, such as trying to push too much power through a laptop.
EquipmentSome@reddit
Ya, sure. But the weight is past your max and your energy supply keeps regenerating, so you're just constantly pushing up as hard as you can lol
dalooooongway@reddit
Electronics are not muscles so no
Purple_Holiday2102@reddit
Like everybody else said, it's used to absolutely slam hardware. For instance, my 4090 will usually run at 300-400 watts in most games, at most. I usually have the power limit at 130% to get the most out of the GPU. If I run FurMark in that condition, it goes straight to 600 watts. Even with my cooling-focused setup the temp climbs fairly quickly; I don't let it get over 79°C though.
At a 110% power limit it goes to 500 watts. Much more manageable, and it stays at 74-75°C. Just did a bunch of testing this weekend so it's fresh on the mind haha.
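If anyone wants to replicate that kind of testing, here's a minimal logging loop around nvidia-smi's query interface (the Python wrapper is just a convenience; the tool and its flags are real):

```python
# Log temperature, power draw, and utilization once a second while a
# stress test runs in another window. Ctrl+C stops it.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=temperature.gpu,power.draw,utilization.gpu",
         "--format=csv,noheader"]

while True:
    print(subprocess.check_output(QUERY, text=True).strip())
    time.sleep(1)
```

(`nvidia-smi --query-gpu=... -l 1` does the same loop natively; the wrapper just makes it easy to bolt on your own logging.)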
FranticBronchitis@reddit
FurMark is especially hard on the VRAM and memory controller, in a different way from games. Software can absolutely damage hardware, though if a graphics card dies to FurMark, it likely already had some damage to start with.
Kitchen_Cup_8643@reddit
So I bought this used rx 470 a while back, and it was really struggling to keep high clocks.
Kept on stuttering. Figured I'd use furmark to see if it was due to temps or anything, as it seemed to be.
Needless to say, the card was fucked. The instant I started FurMark, the PSU tripped and the card got fried. From personal experience it can definitely kill cards lmfao, but again, I'm not sure what the previous owner did with it. I had dumped the card's vBIOS, and curiously its hash didn't match anything I found online, but the values seemed fine.
Though furmark never harmed any of my well functioning cards, so there's that.
UpstairsConnection57@reddit
Your card was dying and would have died the next time you maxed it out on a game.
Pikselardo@reddit
Unless you're messing around in MSI Afterburner or AMD's tuning software, software won't kill your GPU.
UpstairsConnection57@reddit
Those apps won't kill hardware either. CPUs and GPUs have thermal throttling and automatic shutdown that prevent them from overheating. In fact, those apps are made by the manufacturers and don't even void the warranty.
coolmouse7777@reddit
FurMark can produce a higher load on a GPU than any game, but calling it a "GPU killer" is obviously BS. If a GPU dies during a FurMark run, that's a GPU problem, not a FurMark one.
UpstairsConnection57@reddit
Furmark will not kill a modern GPU. It is simply a tool that maxes GPUs out for testing. If you have weak power or thermals Furmark will reveal that weakness. If a GPU dies after using Furmark on it then it was hanging by a thread and ready to go at any moment.
Westerdutch@reddit
It has the reputation because people test old hardware with it. If the GPU is on its last legs, or a PSU is poor enough to make a damaging pairing, then it can mean the end of the GPU. However, a similar load from any game would do the exact same; it's just that FurMark is used for this more than game X, Y, or Z.
tinysydneh@reddit
Furmark also tends to be a much more sustained 100% load.
Repulsive_Coffee_675@reddit
FurMark pulls much more power than normal gaming. My 6800 XT pulls ~200W under full gaming load (99% usage) but pulls 255W (the limit) under FurMark.
mwdmeyer@reddit
People who say software cannot harm your hardware clearly don't understand the relationship between them. Back in the old days, if you didn't park your hard drive heads before turning off the machine, they would crash into the platter (hence the name "head crash"). More recently, Intel's 14th-gen CPUs had overvolting issues caused by the firmware they run. So it is very possible.
FurMark could have exercised some very specific thing that put more load on the GPU than normal games, so much so that it could damage GPUs whose silicon wasn't validated for that use case. Personally, I think that's unlikely; the most likely explanation is that the few cases where FurMark "killed" GPUs would have happened under any application loading the GPU to 100%, with poor cooling or old hardware as the main cause.
KaiDay11@reddit
There are lots of parts to a CPU/GPU.
100% CPU/GPU usage doesn't mean you're using 100% of every part of your CPU/GPU all the time, just that you're basically at the limit of what it can do with the task it's doing.
In a real game, certain parts may wait (doing little or nothing) a few nanoseconds or milliseconds at a time, here and there for another part to finish something.
FurMark's goal is an unrealistic load that pushes as much of your GPU as hard as possible, as consistently as possible. It doesn't stress everything, but it stresses more of the chip, more constantly, than games do.
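A rough way to see this yourself (assumes PyTorch and a CUDA GPU): both loops below show ~100% "GPU utilization" in monitoring tools, but the first mostly waits on VRAM while the second saturates the math units and draws far more power.

```python
# Both loops report ~100% "GPU utilization", because utilization only means
# "a kernel was resident", not "every execution unit was busy".
# Assumes PyTorch and a CUDA GPU.
import torch

x = torch.randn(8192, 8192, device="cuda")
y = torch.empty_like(x)

def memory_bound(iters=2000):
    for _ in range(iters):
        torch.add(x, 1.0, out=y)    # elementwise: bottlenecked by VRAM bandwidth
    torch.cuda.synchronize()

def compute_bound(iters=200):
    for _ in range(iters):
        torch.matmul(x, x, out=y)   # matmul: hammers the FP units, far more power
    torch.cuda.synchronize()
```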
ChironXII@reddit
FurMark is designed to maximally stress components, which is more load than something like a game. Normal workloads are inefficient: the GPU spends time on small individual tasks, or waiting for something to happen, which FurMark does not. So it draws a lot of power, which can reveal flaws that wouldn't otherwise show up. It won't kill a properly designed and manufactured card.
kester76a@reddit
Don't use Furmark, use MSI Kombustor as it's made to display faults in the GPU hardware.
apachelives@reddit
Workshop here. We hammer all rigs with FurMark, including laptops. If it fails, it was already bad. Hardware should be able to handle 100% load at any time, no issue. If your rig can handle FurMark for a few hours, any other GPU-intensive task should be fine. We also load up the rest of the system at the same time. RIP crappy PSUs.
Does it kill GPUs? Only if they were already bad.
FurMark is a heavier synthetic load, probably 20-50% more intense, with much higher power consumption and heat output.
Is it propaganda? No, just people with little to no understanding of things spreading bullshit.
ShiftPrimeNet@reddit
FurMark's weird bit is the sustained power-virus load: it can keep the shaders, memory, and VRM sitting flatter at the power limit than a game like Cyberpunk, which has scene changes and natural dips. On a stock modern GPU the limiter should clamp it; if FurMark kills the card, the cooling or power delivery was already marginal.
Meaty32ID@reddit
It just exposes crap hardware that can't handle its own load. Like a bunch of Toshiba laptops from around 2012, back when I sold them: if you left 30 of those with FurMark running for a few hours, there was almost a 20% failure rate.