Fun fact: 1070 laptop GPU launched with 8GB of VRAM
Posted by Antagonin@reddit | hardware | View on Reddit | 250 comments
9 years later, the 5070 laptop GPU still has only 8GB of VRAM.
explain2mewhatsauser@reddit
I love RAM
hitsujiTMO@reddit
The logic is that the chipset still targets the same resolutions, 1080p - 1440p. But that completely ignores the fact that there's newer tech in more modern titles that requires more VRAM.
I think the reality is that there's only so much VRAM being produced and they want to keep it all for the datacentre.
wtallis@reddit
There never was much logic to begin with in tying VRAM quantity to screen resolution. A 4k framebuffer at 4 bytes per pixel is just under 32MB. Essentially all of the variability in VRAM requirements comes from the assets, not the screen resolution. And games can (and should) have high-resolution textures available even if you're playing at 1080p, in case the camera ends up close to that texture.
There's at most a loose correlation between screen resolution and VRAM requirements, if the game is good about dynamically loading and unloading the highest resolutions of textures from VRAM (which most games aren't good at). But most of the time, the VRAM requirements really come down to whether the quality settings are at Low, Medium, High, etc., regardless of resolution.
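For reference, the arithmetic behind that "just under 32MB" figure, assuming a plain 8-bit RGBA buffer (HDR or G-buffer formats would be larger):

```python
# One screen-sized buffer at 4 bytes per pixel (8-bit RGBA):
print(3840 * 2160 * 4 / 2**20)  # ~31.6 MiB at 4K
print(1920 * 1080 * 4 / 2**20)  # ~7.9 MiB at 1080p
```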
reallynotnick@reddit
Yeah it’s not like 20-25 years ago when I was running 1280x1024 I needed like 4-6GB of VRAM.
Idk what the best shorthand would be, but I'd guess something like "AAA games released after 20XX need X GB of VRAM." But even that has obvious flaws.
Positive-Vibes-All@reddit
The shorthand is that you get as much VRAM as the PSX has. I said it in 2020 and I was right: 8GB for a 3070 was nuts, yet here we are.
slither378962@reddit
They better. The ideal texture streaming system would keep high-res mip levels out of VRAM. This should be easier with this new sampler feedback thing.
And also, games love deferred rendering. It makes framebuffers very thicc. And you have these extra post processing render targets.
What would be nice is if tech tubers did VRAM allocation visuals like in https://chipsandcheese.com/p/cyberpunk-2077s-path-tracing-update#%C2%A7video-memory-usage.
wtallis@reddit
It still doesn't add up. You can fit dozens of screen-sized buffers into a mere 1GB, and once you subtract out how much memory those buffers would already need at 1440p, you're left with the conclusion that any game that fits in 8GB at 1440p would be just fine with 9GB at 4k at the same quality settings. Screen resolution really just isn't what makes a game want 16GB instead of 8GB.
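A back-of-the-envelope sketch of that point; the buffer count and 4 B/px format here are illustrative assumptions, not measurements from any particular game:

```python
# Total size of N screen-sized render targets at 4 bytes per pixel.
def targets_mib(width, height, count, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * count / 2**20

at_4k   = targets_mib(3840, 2160, count=32)   # ~1013 MiB: dozens of buffers still fit in ~1 GiB
at_1440 = targets_mib(2560, 1440, count=32)   # ~450 MiB
print(at_4k - at_1440)                        # ~563 MiB extra when going from 1440p to 4K
```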
slither378962@reddit
The "render target" stuff in the visual is about 1.3GB. But, I don't know how it scales with resolution. But somebody could easily find out. Get some numbers.
bedrooms-ds@reddit
My AAA title uses 7GB VRAM for 4k. I need 4GB more because somehow Windows uses it WTF.
Dull-Tea8669@reddit
You are confused. Windows uses 4GB of RAM, definitely not VRAM
Strazdas1@reddit
Windows can easily use 4 GB of VRAM for frame buffers if you have enough windows open.
bedrooms-ds@reddit
I see it on the VRAM usage view in the in-game settings.
Strazdas1@reddit
Windows keeps a frame buffer for every window that is open, unless you are playing in fullscreen mode on a single-monitor setup, in which case the frame buffers get cleared and the desktop manager gets paused. It does not matter if the window is minimized or behind another window; it is kept in the buffer. The more (and larger) windows you have open, the more VRAM Windows will eat. Another potential cause: the browser keeping a buffer for every open tab.
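A rough sketch of how compositor surfaces can add up; the two-surfaces-per-window figure is an assumption (swap chains are often double- or triple-buffered), not a measured DWM number:

```python
# Hypothetical compositor VRAM use: each open window holds window-sized
# surfaces at 4 bytes per pixel; assume 2 surfaces per window's swap chain.
def compositor_mib(width, height, windows, surfaces_per_window=2):
    return width * height * 4 * windows * surfaces_per_window / 2**20

print(compositor_mib(3840, 2160, windows=20))  # ~1266 MiB for 20 large 4K windows
```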
randomkidlol@reddit
VRAM density is still going up at the same rate as it used to. Workstation cards like the RTX Pro 6000 Blackwell have 96GB of VRAM on GDDR7, which means it's very doable for a top-of-the-line consumer card to have at least half of that. Companies are penny-pinching on VRAM because that's how you upsell people.
prajaybasu@reddit
Please explain how.
Samsung started producing 1GB GDDR5 chips in early 2015
Samsung started producing 2GB GDDR6 chips in early 2018 (+ 100% in 3 years)
Samsung started producing 3GB GDDR7 chips in late 2024 (+ 50% in 6.75 years)
Strazdas1@reddit
The RTX Pro 6000 has a 384-bit bus width, utilizing 2 GB chips in a clamshell design for 48 GB of VRAM.
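The general relationship behind these numbers, for what it's worth: each GDDR package sits on a 32-bit channel, and clamshell mode puts two packages on one channel. A quick sketch; the example configurations are inferred from the cards discussed above:

```python
# VRAM capacity from bus width: one GDDR package per 32-bit channel,
# doubled when two packages share a channel (clamshell).
def vram_gb(bus_bits, gb_per_chip, clamshell=False):
    chips = (bus_bits // 32) * (2 if clamshell else 1)
    return chips * gb_per_chip

print(vram_gb(384, 2, clamshell=True))   # 48 GB: 384-bit with 2 GB chips, clamshell
print(vram_gb(512, 3, clamshell=True))   # 96 GB: 512-bit with 3 GB chips, clamshell
print(vram_gb(128, 2))                   # 8 GB: a 128-bit laptop part with 2 GB chips
```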
piggymoo66@reddit
They also want productivity users to stick with their pro hardware. In years past it was pretty easy to separate them from gaming users, but now the demands have a lot of overlap. If they made gaming GPUs scaled properly to demand, those would also be useful for pro users, and they want pro users to be forced to spend more money on the hardware that they need. So what you get is an entire lineup of kneecapped gaming GPUs that are a complete laughingstock, but they don't care because they're raking in the big bucks with pro hardware.
Strazdas1@reddit
Majority of 4090 GPUs were used by pro users. I have no doubt we will see the same with 5090s.
reddit_equals_censor@reddit
That's nonsense. You can buy yourself some GDDR6 or GDDR7 yourself if you want.
There is no excuse here. We aren't "running out of GDDR supply"; that is nonsense. The reason that Nvidia and AMD are still selling 8 GB cards and graphics modules for laptops is to scam people who don't know any better, or to strong-arm them into buying one when there is no other option, which is actually how bad it is in laptops.
This then forces people to upgrade again in half the time they otherwise would, at best. I mean, technically the cards are broken at launch already: 8 GB of VRAM isn't good enough for 1080p max in 7 of 8 games, so I guess buy, throw in the garbage, and buy again?
Idk.
But yeah, it is about scamming people; VRAM supply has NOTHING to do with any of this.
Also, AMD is using old GDDR6, the 5050 desktop version is using GDDR6, and Nvidia could also choose which modules to use on its GDDR7 cards IF there were any supply concern,
launching a 16 GB and a 24 GB 5060, for example, using 3 or 2 GB modules.
But again, there is no VRAM supply issue here. It is just about scamming people.
MrMPFR@reddit
100%. One product made to target PS4-era titles, while the newer products conveniently ignore the existence of PS5-era titles, forcing a compromised experience on an upper-mid-end gaming laptop.
Well done Ngreedia. We went from easily being able to achieve 1080p high to 1080p low in 9 years on laptops. Real progression xD
NeroClaudius199907@reddit
You can play 1080p high on 5070 laptop
DarthV506@reddit
Doubt it has anything to do with the datacenter, they want gamers to keep buying shit at the low-mid end so they need to upgrade more often.
prajaybasu@reddit
Modern titles aren't using more VRAM just due to newer tech.
Stefen_007@reddit
The price would reflect a VRAM scarcity if there were one. In a time when hard performance gains are disappointing, VRAM is a great upselling tool to push people to a more expensive GPU or the next generation. On the high end, it's obviously there to steer you toward the workstation cards for AI.
Lamborghini4616@reddit
The logic is that they want to push you to buy a higher end card for more profit
AvengerTitan@reddit
I just bought an RX 9060 XT with 16GB for £300, quite cheap for the amount of VRAM.
Ragecommie@reddit
Even the friggin' AI upscaling, RT and other stuff require tons of VRAM. The logic is weak, we are indeed getting the bare minimum with a premium pricetag.
Darksider123@reddit
Like, for example, ray tracing. Some games destroy 8GB cards.
Healthy-Doughnut4939@reddit
The 1st gen Nehalem Core i7 in 2008 had 4c 8t
The 7th gen Kaby Lake Core i7 in 2017 had 4c 8t
Then AMD released the Ryzen 7 1700 with 8c 16t
The 8th gen Coffee Lake Core i7 in 2018 had 6c 12t
See what happens when you have competition?
vandreulv@reddit
*looks at his Skylake and Kaby Lake ultrabook i7 CPUs with only 2C 4T*
inaccurateTempedesc@reddit
That was a fun time to be running old hardware. Progress was so stagnant that 8 year old Core 2 Duos could still keep up with AAA games.
vandreulv@reddit
The i7-4790K, with just a slight overclock, continuing to slap the shit out of newer processors for nearly a decade didn't help.
inaccurateTempedesc@reddit
Haswell and Sandy Bridge refuse to die
Ballerbarsch747@reddit
I've just ordered a 5960x, really looking forward to it lol. They go for about 60bucks by now.
MikeimusPrime@reddit
The HEDT chips at that time aged so well. I had a 3820k from about 2013 and upgraded to 3rd gen Ryzen in the pandemic, but only because I was doing a full build, not because of any real need for more performance for the games I play day to day.
It's a shame those sort of chips really died out, and the HEDT world moved away from high end gaming to productivity only from a cost perspective.
hackenclaw@reddit
I am still playing some gacha games at high settings in 2025 with my OCed 2600K + 1660 Ti.
Games like Genshin & Wuthering Waves can run at medium-high settings at 1080p/60fps. I also just finished Metro Exodus; it also ran at high settings at 50-60fps.
It is crazy that I can play games on this same PC for more than a decade.
yimingwuzere@reddit
The 5775C has also aged well among Intel's quad cores, primarily due to its large L4 cache. Keeps up or beats most of Intel's other quad cores in games until you crank up the clockspeeds dramatically.
lEatSand@reddit
Devils Canyon were beasts.
vandreulv@reddit
And then Intel had to go and curse us all with their Skylake garbage.
proesporter@reddit
Haswell was the first CPU arch to support AVX2, and this helped keep it relevant further along than Ivy Bridge and Sandy Bridge for games and emulators. Eventually, the successive mitigation patches stacked up to kill its performance by quite a bit
Healthy-Doughnut4939@reddit
Even the Nehalem-based i7 920 still performs surprisingly well for a 17-year-old CPU.
It came out in Q4 2008 and it can still run most modern games that don't require AVX instructions.
Healthy-Doughnut4939@reddit
I have one of those Kaby Lake laptops, it had a 500gb HDD and 4gb of 2666mhz ram. It aged like sour milk.
GodTierAimbotUser69@reddit
Upgrade to an SSD, more RAM and Linux, and it would be given a second chance at life.
xX_Thr0wnshade_Xx@reddit
This is what I did. Have a kaby lake i3 Laptop, came with a slow 5400 rpm hdd and 8gb of ram. Upgraded to ssd, 16gb ram and Linux mint, now running as reliably as ever.
vandreulv@reddit
Had one Skylake i7-6500U.
Went from being a usable laptop to the slowest machine in the house after the mitigation patches.
Creative-Expert8086@reddit
The 2017 13" MacBook Pro literally had a 2-core, 4-thread CPU with a Touch Bar and a butterfly keyboard — a magical combination of flaws that made it one of the worst designs of the decade.
vandreulv@reddit
Intel had a system for their model numbers for a time that made sense.
i3 = 2C 4T
i5 = 4C
i7 = 4C 8T
Simple and easy to remember.
And then they had to fuck it all up by daring to brand 2C 4T mobile chips as fucking i7s.
Omniwar@reddit
There wasn't any sort of established system. 2C4T mobile i7s existed from the very start (Arrandale). Even on desktop it wasn't uniform. First-gen desktop i5s were split between 4C8T, 4C4T, and 2C4T, for example.
https://en.wikipedia.org/wiki/List_of_Intel_Core_processors
Creative-Expert8086@reddit
Even the 5W chips got i7 branding, in the fanless MacBook, notorious for heat-related damage.
ImBackAndImAngry@reddit
Ultrabook-class CPUs at that time were cheeks.
totally_normal_here@reddit
I had a late 2019 Ultrabook with an 11th gen i7-1165G7 (4c/8t), and that thing was so terrible.
Even at that time, 4 cores seemed pretty underwhelming. It also had an absolutely pathetic base clock of 1.2 GHz, would idle at 50-60°C when doing nothing, and shoot up to 100°C and throttle as soon as you put it under any sort of workload.
PMARC14@reddit
I think you are talking about the 1065G7 or whatever the predecessor was, because the 1165G7 is actually a pretty decent chip that basically fixed all the problems with the earlier one: terrible clocks and anemic performance due to the faulty, low-quality 10 nm process at the time (now called "Intel 7").
totally_normal_here@reddit
Yep, that's right. My bad. It was the 10th gen i7-1065G7.
PMARC14@reddit
Yeah, the 1065G7 was supposed to be the first actually good Intel Ultrabook chip, but it took another gen. I am actually really impressed with the improvements; I swore them off after suffering an i5-7200U for a while, but Lunar Lake seems great and Panther Lake is exciting.
kingwhocares@reddit
AMD released the Ryzen 7 1700 with 8c 16t in 2017
AMD released the Ryzen 7 9700x3D with 8c 16t in 2025
Intel released the i5 13400 with 10 c 16t in 2023
wtallis@reddit
Are you trying to imply that AMD has been holding back on core count for their desktop processors, despite them having 12 and 16 core options since 2019? Or are you trying to imply that a 6p4e Intel CPU is clearly superior to an 8-core AMD CPU with 3D Cache?
jay9e@reddit
They're not "trying to imply" anything.
All they're saying is that a Ryzen 7 in 2017 had the same core count as a Ryzen 7 in 2025 which feels eerily similar to what Intel was doing before them.
wtallis@reddit
Only if you're ignoring the context. Intel didn't have "Core i9" parts for their mainstream socket platforms (LGA11xx) until Comet Lake, three years after Kaby Lake. So for a very long time, "Core i7" meant "top of the line option" for the mainstream CPU sockets from Intel, and as described above, their top option was stuck at four cores for a very long time.
"Ryzen 7" stopped being the top-tier parts for AMD's mainstream desktop platform several generations before the 9700X3D, because they started doing "Ryzen 9" parts with the Ryzen 3000 series. The fact that AMD continues to have 8-core CPUs somewhere in their product stack isn't a particularly relevant or useful observation when 8 cores stopped being the top of the line.
It's even less of a valid comparison when you consider that the 3D cache on the 9700X3D is a big chunk of extra silicon as a value-add option on top of an 8-core CPU, and that option wasn't available back in 2017; even if the 9700X3D was the top CPU for its generation/platform, it would demonstrate that AMD had started offering a significant step up from a basic 8-core CPU. The progress in AMD's product line is a lot more obvious, without having to get into details of IPC or clock speed improvements from one generation to the next.
sh1boleth@reddit
Intel had 8c offerings for consumers way back then in 2014.
AMD disrupted the market by bringing 8c for the same price as Intel 4c. Now AMD's 8c is still priced the same 8 years after Zen 1.
https://www.intel.com/content/www/us/en/products/sku/82930/intel-core-i75960x-processor-extreme-edition-20m-cache-up-to-3-50-ghz/specifications.html
42LSx@reddit
Yeah, did people already forget, or are they too young to remember the old saying "All's well with Haswell"?
You could get a 6c/12t CPU in 2014 for less than $400, and the best 4c/8t Intel CPU was $340. The AMD FX-9xxx was at least $550, and generally slower than the quad-cores.
Helpdesk_Guy@reddit
Nonsense. These offerings weren't even remotely aimed at consumers but at professionals and businesses alone, as no private buyer was going to spend that much money on a CPU back in the day (or could afford to), which cost like 4–5× a whole rig's worth alone …
It was an offer priced at no-one but non-private clientele, and Intel did everything in their power to hold the price-tag way out of reach of private consumers, way above $1,000 USD, even $1,300+ USD, for literally several years.
The whole reason why AMD's Ryzen made such a dent in the market in the first place was that AMD priced their quite powerful CPU with 8 full-grown cores so incredibly inexpensively and disruptively that it didn't really matter even IF those cores had been 10% slower. And this in a time-frame when people had gotten used to $270–420 USD for f–king quad-cores, as that was basically all anyone could get during Intel's well-maintained status quo (of mandated nation-wide stagnation, for milking the market dry) for basically a decade straight.
Ryzen was so stupidly inexpensive and such an utter no-brainer that only the brain-dead would voluntarily stick with anything Intel and their aged-out 4-cores … Everyone (sane) instantly jumped off the Intel side, leaving the overpaying rest to turn into well-deserved social outcasts, who still defend their ludicrous position since …
sh1boleth@reddit
The 5960X cost $999.
Not really professional or business pricing, or 4-5× the whole rig as you say. It also offered quad-channel memory and extra PCIe lanes.
Like Threadripper but cheaper, or only slightly more expensive than a 9950X3D while offering more features.
Helpdesk_Guy@reddit
For the record: Even the previous Core i7-4960X Extreme Edition had a price-tag of no less than $990 USD at release, for just six cores! That is, for just two cores more than their ordinary quad-core, which went for ~$250 USD.
While the i7-4960X EE offered only a mere 50% more cores/threads, it also came at a price-tag of easily 4× as much as the cheapest lame-o Intel quad-core: 50% more cores/threads vs. a 400% price increase.
An intentionally, totally skewed price-performance ratio done fully on purpose, for deliberately keeping 6-core CPUs way out of reach of the ordinary buyer, and the mainstream at expensive dual-cores only and quad-cores at max.
Now keep in mind here that this very Intel consumer SKU (for personal consumers and private end-users, at last) came out by the end of Q3 2013, already 3+ years AFTER AMD had successfully introduced their instantly adopted and wide-spread Phenom X6 hexa-core CPUs in 2010. AMD had released its first server-class 6-core Opteron server CPU, known as "Istanbul", the previous year, in June 2009, by the way.
So it was not until Intel's 4th Gen Intel Core with Ivy Bridge-E that Santa Clara even offered anything hexa-core for ordinary consumers, in the form of the aforementioned vastly over-expensive i7-4960X for no less than $990 USD (apart from the severely cache-crippled i7-4930K, released the very same day on September 10th, 2013 … for $555 USD!). AMD's first hexa-core for the private end-user went on sale as the Phenom II X6 1090T Black Edition more than 3 years earlier, in April 2010, released for only $295 USD …
So even the only other six-core next to the i7-4960X EE, the i7-4930K, was not only severely cache-crippled (and clocked lower too), but cost basically twice as much as AMD's first top-of-the-line 6-core CPU; and the actual street price for the i7-4930K of course came in higher, at $583 USD at release, notwithstanding the fact that the official Intel MSRP was $555 USD. Bummer, I guess.
So there you have it. Even Intel's earlier hexa-core offerings were always at least 2 times as pricey as AMD's 6-core CPUs, often making Intel's offerings barely viable options to begin with. And that should be the overall final conclusion.
As said before, Intel deliberately made it all so and wanted, by all means and with utmost pressure, to keep the market for years at their decreed perfect conditions to milk it dry: → Dual-cores for the ordinary folks, quad-cores at max, hexa-cores only for the really wealthy ones with deep pockets.
Helpdesk_Guy@reddit
Correction: You jumped onto a statement of mine which (evidently!) had me WRONGLY claiming that the i7-5960X would (even at Intel's official SRP/RRP) cost "4–5× a whole rig's worth" (which is nonsense, for obvious reasons), instead of the actually correct assessment: "4–5× as much as a regular quad-core CPU" at that time.
… which is actually true and the very thing I was trying to point out, of course.
So we accidentally talked past each other here – My bad!
Helpdesk_Guy@reddit
It doesn't really matter what alleged particular technical advantages a given product offers, if said advantageous attributes *only* come at such a fundamentally disproportionate cost-benefit ratio and a (deliberately) totally whack and skewed price-performance ratio that basically no-one would tap into and engage with it anyway … since it makes no sense to do so!
That's just nuts (and missing bolts for the blocked head), and no-one (sane) does that
when each and every thoughtful cost-effectiveness consideration is out of balance.
It's basically Optane 2.0, and the very reason why it failed: hypothetical vast advantages, which only ever came with incredibly stupid (already well-subsidized and artificially brought-down) yet still vast price-tags.
That's also the same idiotic try-hard move from Intel that their Skylake-X was, with their quad-cores on LGA-2066: artificially luring clueless consumers onto their vastly overpriced, already dying HEDT platform.
Their Skylake-X quad-cores on the X299 chipset, in the form of the Core i7-7740X and Core i5-7640X (as Kaby Lake-X), were virtually nothing but the Core i7-7700K in disguise for Socket LGA-2066.
Just nuts, and a daft part of the Skylake-X portfolio which didn't really have any raison d'être in the first place. Another lame-o try by Intel to curtail the market (to maintain the consumer perception that dual-cores have to be 'good enough' for the peasants and quad-cores ought to be enough for anyone else, as decreed by Intel) was their even more whack Kaby Lake-based Intel Core i3-7350K back in the day in 2017.
A petty, meaningless yet unlocked, mercilessly pumped-up dual-core for their Z170/Z270 chipsets.
It was, of course, vastly overpriced, as is right and proper for Santa Clara since forever.
So even by 2017, when AMD was already well underway to phase out the decade-old, hopelessly insufficient and outdated dual-cores once and for all, trying to push the market's overall technological advancement for the good of all the rest of us, and was just about to establish 4 cores as the bare minimum, 6 cores as lower mainstream, and at least 8 cores as the high end of mainstream as the new normal (with AMD's Threadripper topping out at 16 cores as HEDT and anything beyond that just around the corner) …
… Intel again went demonstratively in the polar opposite direction, when Santa Clara tried nothing less than to dial the market back to where it was a decade ago: trying to establish already decade-old, well-obsolete dual-cores for their high-end platform. The Unlocked i3 Cometh!
Keep in mind, Intel tried the very same thing already with the Pentium G3258 Anniversary Edition in 2014.
Vushivushi@reddit
But AMD also disrupted the market again by increasing cache sizes via advanced packaging as SRAM scaling significantly slowed with new process technologies, offering generational to multigenerational performance increases for the one growing PC market: gaming.
The market got the core counts it needed.
I do think AMD's 600 series should be 8C and 700/800 series should be 12c by next gen, even if it's just a hybrid configuration.
AMD truly offering more value to consumers would mean putting in more orders for TSMC SoIC to have widespread Ryzen 5 X3D CPU availability at launch.
I think it would be hard to blame AMD for core counts this generation if the 9600X3D was a regular part of the crew.
Hytht@reddit
> "Core i7" meant "top of the line option" for the mainstream CPU sockets from Intel, and as described above, their top option was stuck at four cores for a very long time.
The four cores were NOT the top option. There were 8c 16t i7s in 2014 (i7-5960x)
wtallis@reddit
Try reading the entire sentence you quoted.
jay9e@reddit
Yes, AMD does have some higher end offerings available and x3D is also a very nice technology. But it doesn't really change much about what was said.
Their top of the line consumer CPU has been stuck at 16 cores for nearly 6 years now.
Their more budget level entries with Ryzen 3 and 5 have been stuck at 4 and 6 cores for 8 years now.
Helpdesk_Guy@reddit
Blaming CPU manufacturers for not increasing core counts anymore, when the complacent software industry has outright REFUSED to optimize for anything more than 2–4 cores for two decades straight … everyone's being ignorant here.
Dependent-Maize4430@reddit
I could be wrong, but I do believe the i5 13600k beats most of the AM4 3d v cache CPUs in gaming.
Beefmytaco@reddit
Honestly, I feel we're way overdue for AMD's 700 series CPUs to have at least 10c/20t at this point.
IMO the 600 should be 8c/16t, the 700 should be 10c/20t, the 900 should be 16c/32t and the 950 should be 24c/48t.
They keep shrinking the node; they have the room to do it.
Just sayin...
CrzyJek@reddit
Zen6 is increasing core counts by 50%. 8 cores will be 12 cores. 16 cores will be 24 cores. Etc etc
Flynny123@reddit
Core count increases coming next gen I believe.
CrzyJek@reddit
Correct.
kingwhocares@reddit
Are you saying you don't see the pricing difference?
Do those cost the same as an i5 13400 or 14400?
wtallis@reddit
If you want to make a coherent argument about hardware available at a particular price point, over time (so some inflation adjustment will be needed) — as opposed to the simpler comparison between top of the line parts — then you'll need to bring a lot more data and explanation to the discussion. Otherwise, nobody has any reason to believe you actually went to the effort of ensuring the comparisons you're drawing really are fair.
reallynotnick@reddit
It’s definitely a weird comparison since I would think it’d make more sense to list like the 13700K than the 13400 since they are comparing the 9700 and 1700. But if I had to assume I’d think their point is it’s now AMD is the one not improving core counts? (Obviously P and E cores make things a bit odd to straight compare)
Archimedley@reddit
It is kinda weird when they have 16 core parts and x3d cache stuff now
Kinda disappointing that zen5 didn't bump the 9600x to like 8 cores since there's not too big a perf difference on desktop otherwise, but they've been making pretty big gains gen over gen compared to whatever intel is doing
Like yeah, intel is giving more cores, but I'm not sure I really care that much past eight
Vushivushi@reddit
The core race is over for me.
I want more CPUs with large caches.
kingwhocares@reddit
Given that 6 core isn't enough, the core race is still there.
Vushivushi@reddit
I'd buy a 9600X3D over a 9700X at the same price any day of the week.
kingwhocares@reddit
Where is this 9600X3D available?
Strazdas1@reddit
It's not. I wish it was.
Danishmeat@reddit
6 core is fine still
Agreeable-Weather-89@reddit
Not to defend Intel or AMD... 8 cores are fine.
Strazdas1@reddit
and 4 cores was fine in 2015.
Vb_33@reddit
4 cores was fine in the 4-core era. Hell, the vast majority of apps didn't benefit from more than 4 cores back then.
kingwhocares@reddit
Not available in a Ryzen 5 though.
INITMalcanis@reddit
AMD released a 16-core 3950 in 2020
The 9800x3d isn't 3x the price of the 1800
Montezumawazzap@reddit
13400 has only 4 performance cores though.
kingwhocares@reddit
More cores > hyperthreading.
laffer1@reddit
Yes, but only if they are P-cores. Roughly 2 E-cores = 1 P-core, with the caveat that many problems don't scale across cores well (Amdahl's law applies).
A 12-core AMD chip smoked a 20-core Intel chip in many workloads because of E-core shenanigans.
I can compile the same code in six minutes on a 7900 but 16 minutes on a 14700K (assuming an OS without Thread Director support); with scheduling support, it tightens up to be close.
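For reference, the Amdahl's law bound mentioned above, as a minimal sketch (the parallel fraction p is workload-dependent; 0.9 is just an example value):

```python
# Amdahl's law: speedup on n cores when a fraction p of the work parallelizes.
def amdahl_speedup(p, n):
    return 1 / ((1 - p) + p / n)

print(amdahl_speedup(0.9, 12))  # ~5.7x on 12 cores if 90% of the work is parallel
print(amdahl_speedup(0.9, 20))  # ~6.9x on 20 cores: extra cores help less and less
```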
Johnny_Oro@reddit
Nope, 6.
Montezumawazzap@reddit
Do you know the difference between the performance core and the efficient core?
V13T@reddit
lol no. It's 6P+4E. threads are 6*2+4=16 threads
Montezumawazzap@reddit
Oh, yeah, right. My bad. You are right.
Minimum-Account-1893@reddit
I knew you were going to immediately bring out an AMD-defending corp lover.
Reddit, or social media comment warriors dressed as "gamers" in general, are deeply, emotionally tied to that corporation. The conversation is glaringly unbalanced between AMD and Nvidia too. FSR FG always gets praised; Nvidia FG is always called fake frames, for example.
vandreulv@reddit
E cores don't count as full cores, mate. They're literally Atom CPU cores stuck on the same package as full i5 and i7 cores.
Healthy-Doughnut4939@reddit
These Atom cores are shockingly powerful for the die area they use
Gracemont is half the size of Zen 3 while having Skylake IPC.
Skymont is 1.7mm2 while having 2% better IPC than Raptor Cove.
Lion Cove is 4.5mm2 and only has 14% better IPC than Redwood Cove.
So LNC is only 14% better in IPC while using 3x the area of SKT.
vandreulv@reddit
With a fuckton of latency added due to the nature of the two types of cores and substantially lower cache overall for the number of cores versus an all performance design with more cache in total.
Skylake cores are, on average, 15%-20% better than Alder Lake (aka Gracemont) at the same clock speed. When you have a task that ends up bottlenecked because it's locked to a type of core before the director can switch it to a performance core as needed, the difference is noticeable.
kseniyasobchak@reddit
Let’s be real, AMD could’ve made a 24/48 beast or something along those lines, the “problem” is that no consumer needs that. Games are not gonna take advantage of that, and you’d have to deal with slower cores with less cache per core. I’d argue that’s why Threadripper is dead, it’s a really niche product for a pretty small customer base, and it became especially pointless when they released Ryzens with 16/32 threads.
wtallis@reddit
The only plausible way for AMD to have made a 24-core CPU in recent years would have been to add an extra CPU chiplet, which would have preserved the cores to cache ratio. The reason they haven't pretty much comes down to the fact that they're trying to average less than one IO die per generation, and taping out an IO die with the extra PHYs to talk to a third CPU chiplet would have pretty poor ROI. On the other hand, Strix Halo got a custom IO die, so they're clearly willing to do it some of the time.
Healthy-Doughnut4939@reddit
This comment is not meant to imply support for one company over another.
Both AMD and Intel have done shady, deceptive and anti-consumer things.
AMD lied about the performance of the 5800XT in their marketing, saying it was faster in ST and gaming than Raptor Lake. They got these deceptive numbers by testing games with an RX 6600 XT, which caused obvious GPU bottlenecks.
Intel tried to cover up the Vmin shift instability issue that bricked many Raptor Lake CPUs, which was caused by a rushed CPU validation process after Intel cancelled MTL-S due to its terrible chiplet design.
AMD and Intel will gladly screw over consumers to maximize profits if they think they can get away with it.
prajaybasu@reddit
Samsung started producing 1GB GDDR5 chips in 2015
Samsung started producing 2GB GDDR6 chips in 2018 (+ 100% in 3 years)
Samsung started producing 3GB GDDR7 chips in 2024 (+ 50% in 6 years)
See what happens when there's a stagnation in the actual fucking hardware chips?
Beefmytaco@reddit
And from what I've been reading in the past year, it's gotten a lot cheaper to produce them too, so Nvidia is really losing ground on the excuse that memory is expensive when it really isn't. They're just shafting the customer and want them to spend 5 figures on a workstation GPU to actually get a decent amount of memory.
Strazdas1@reddit
Memory isn't expensive because the chips are expensive. Memory is expensive because the memory bus takes A LOT of space on the chip. So you can sacrifice performance for more memory by adding extra memory buses, or you can increase the capacity of the chips on the same memory bus. Memory capacity growth has slowed down, so that's not an option. Would you like a 20% slower GPU for 4 GB of extra VRAM?
Beefmytaco@reddit
But we're already throwing 450+ watt coolers on these GPUs, which means we have thermal headroom to push more voltage into the chips and get higher frequencies to compensate. Now that even the top-tier cards have the memory directly interfacing with the heatsink, their temps are staying far better than back in the 3090 days, when they put chips on the back of the card with no active cooling and they hit 90C+, which caused a lot of issues; the 3090 Ti had the newer 2GB chips, all on the same side as the GPU die and interfacing with the heatsink, and those chips stayed at a cool 65C most of the time.
So yes, I'd rather have more VRAM with a better heatsink on the device and just force higher frequencies to compensate for the loss of some speed. In the end with those combined factors, you're losing at best like 5% performance.
Strazdas1@reddit
Just last year we saw what happens when you push voltage willy-nilly, courtesy of Intel.
You cannot just push more voltage to compensate for losing 20% of the compute area on the chip.
Beefmytaco@reddit
Nah, that's a bad example. Intel 13th/14th gen had manufacturing issues in the chip itself, where the substrate that insulated the tiny transistors from each other was actively breaking down in the chip due to manufacturing errors. This was in turn exacerbated by a poor voltage regulation algorithm that caused the substrate to break down at an even faster pace; it's why intel said once the damage was done, it was permanent, before they got the algorithm fix out, which took waaaaay too long.
AMD, Nvidia and M$ all had similar issues in the past but figured them out fast enough with their hardware, it's just intel really did a half assed job and the result was a catastrophic PR situation for them they've yet to recover from; CEO had to step down from it IIRC.
But my counter to your argument is yes, you can push voltage and frequency to compensate for losing 20% of your compute power; Nvidia has done it countless times before and will continue to do it. The 1k series was a good example of this, where they were pushing those chips to their absolute max. It's why they didn't OC for crap; hell, the 1080 Ti even with 1000W pumped into it barely gained 5% performance uplift because it was just completely tapped out already.
Reality is you're correct, this is why cut-down chips suck, but you can make up for a lot of the loss with higher frequencies and power.
A full die at 1500MHz will get you 1000GB/s of bandwidth, but the half die at 2300MHz will get you 920GB/s; it just costs you in power usage and heat generation.
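The bandwidth arithmetic being argued about, as a sketch; the per-pin rates are illustrative, since actual GDDR6/GDDR7 speeds vary by SKU:

```python
# Memory bandwidth = bus width (bits) x per-pin data rate (Gbit/s) / 8 bits per byte.
def bandwidth_gb_s(bus_bits, pin_rate_gbit):
    return bus_bits * pin_rate_gbit / 8

print(bandwidth_gb_s(256, 32))  # 1024 GB/s on a 256-bit bus at 32 Gbit/s per pin
print(bandwidth_gb_s(128, 32))  # 512 GB/s: halve the bus and you must roughly double
                                # the per-pin rate just to stand still
```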
Strazdas1@reddit
Incorrect. The manufacturing issue affected a tiny number of CPUs from a single batch. The vast, vast majority of 13th/14th gen issues were firmware allowing voltage spikes that were too high. And yes, voltage-caused damage is permanent; you are permanently frying the chip.
The 1k series was an architectural redesign that was a unique case of easily implemented speedup that has never been replicated by anyone since. It's an exception, not an example.
prajaybasu@reddit
Price is decided by supply and demand too. With the reduction in price came the increase in demand due to AI applications.
bedbugs8521@reddit
That's not a problem; Intel proved it by adjusting the bus width to accommodate more VRAM.
Nvidia and AMD design their cards bus width first, VRAM second.
prajaybasu@reddit
How many Intel external GPUs are shipping on laptops right now?
bedbugs8521@reddit
What does this have to do with laptops now? Intel isn't interested in putting a dedicated GPU chip in their laptops when their iGPUs are good enough for most things.
If people need more GPU power, they can, you know, plug one in through the Thunderbolt port.
prajaybasu@reddit
Check the post title.
Antagonin@reddit (OP)
this has nothing to do with capacity of chips... but ngreedia selling GB206 die as 5070
42LSx@reddit
The 5th-gen Core i7 in 2014 had 8c/16t.
Dependent-Maize4430@reddit
There are rumors that Zen 6 will have a 12-core CCD, which will be massive for 3D V-Cache CPUs.
mrheosuper@reddit
The R7 9700X in 2024 had 8c/16t; it's not like AMD is the good guy here.
bedbugs8521@reddit
AMD isn't restricting the limit of the socket though; you can still get a 16c/32t part in the same socket.
The best Intel could do back then was 4/8, and then they forced you to get X99 if you needed 6 cores or more.
mrheosuper@reddit
The max number of cores on AM4/AM5 has not changed since Zen 2.
bedbugs8521@reddit
The core count is fine; you don't wanna raise the power consumption too high. What matters is the IPC: a newer 8-core CPU is actually faster than my Xeons.
terriblestperson@reddit
Now the 9800x3d has only 8c 16t, probably in part because AMD realized they no longer have real competition from Intel.
bedbugs8521@reddit
8/16 is plenty fast mate, if you need anything more get the Ryzen 9. AMD gave the option to do a drop in upgrade.
terriblestperson@reddit
It's not about speed, it's about multitasking. It's also about the fact that despite... I don't really know, three or four shrinks, they haven't increased the number of cores at a given price point in a while.
Modern browsers run a lot of separate processes. Games now utilize multiple threads. Minecraft does world-gen on separate threads. For me, I want to run a modded Minecraft server on my PC while I play on that server. While that works, I'm a little starved for cores. Distant Horizons in particular really benefits from additional cores.
Offering a generous number of cores was one of the ways AMD first differentiated Ryzen from Intel's offerings and started to retake market share. AMD seems to have largely abandoned that with Intel unable to compete, though it looks like Zen 6 will have 12 cores per chiplet (so 12 in the 10800x3d or whatever they call it).
bedbugs8521@reddit
It comes down to IPC; a modern 8-core Ryzen PC is still faster than a 24-core Xeon server I have in the office from 6 years ago, in both single-task and multitasking workloads.
secretOPstrat@reddit
Cores are not the same thing as RAM. A core can get much faster in single-threaded and multithreaded tasks over generations; 6-core CPUs now beat 10-core Extreme Edition i9s from a few years ago, but 8GB of VRAM is the same amount as before.
Coffee_Ops@reddit
AMD had 8-core Piledriver CPUs in 2014.
Creative-Expert8086@reddit
Fake 8-core; they got sued and had to do payouts.
SmileyBMM@reddit
Yeah, but they sucked.
starburstases@reddit
And I thought there was debate as to whether or not each core had enough elements to be considered a discrete "core"
logosuwu@reddit
Ehh shared FPU but different pipelines. They are distinctively different cores but needed a lot more optimisation.
sh1boleth@reddit
AMD even settled a lawsuit regarding their FX 8000 series having "8 cores".
Beefmytaco@reddit
Nah, not true 8c cpu's. I know so cause I was able to sign up for the class action lawsuit against amd for bulldozer and the false advertising they did with those chips, and got a check I never cashed for like $40.
They were actually 4c/8t using a primitive version of the SMT that we have in Ryzen today. AMD just hid from the system the ability to see that it wasn't a true 8-core.
I had a 9370X that I got to 5GHz but had to pump 1.58V into it to get there; the thing used 250W+ when maxed out, which was a ton back in 2012. Dumped it for a 5820K in 2015 that I got to 4.6GHz, and it just slapped it in performance.
Then jumped to a 3900X in 2019, which honestly, gaming-wise, wasn't much of an improvement, mostly due to all the latency issues Ryzen still had. Went to a 5900X in 2021, which actually was a really good chip for a long time, till recently when it couldn't keep my GPU fed all the way, and just this February jumped to a 9800X3D, which, while it sucks to have lost the extra cores, blows that 5900X out of the water in gaming performance.
World of warcraft went from 50-60 in the main city to 144 on the same gpu.
Creative-Expert8086@reddit
Also, a 1650 running at 4.5GHz gets similar results compared to an 8700K.
TDYDave2@reddit
The first car with a V8 was made by Rolls-Royce in 1902, today most cars are limited to no more than a V8.
Sometimes more is mostly a marketing gimmick.
Beefmytaco@reddit
I still say Intel slapped people in the face who bought the 7700K, because the 8700K came out literally like 6 months after, to try and combat the 16-thread CPUs AMD was throwing out there for cheap.
Intel is no better than Nvidia: when they have no competition, they shaft the customer with underwhelming hardware.
Been saying it for years: Nvidia has the capability to give us a ridiculously powerful GPU right now, but if they did that they wouldn't be able to sell it to us again in 2 years. Gotta have those small incremental jumps to justify those upgrades, though it seems these days Nvidia doesn't really care about the consumer market but instead the enterprise one, as shown by how much of a waste of sand everything below the 5070 Ti is.
Qweasdy@reddit
I'd say Intel were far worse then than Nvidia is now; at least Nvidia continues to actually innovate even with its market position. Ray tracing, ML upscaling and framegen are all technologies they were pushing hard even while they had a near monopoly.
Intel rested on their success, Nvidia kept going but instead started charging an arm and a leg for it.
yungfishstick@reddit
Competent competition would be more fitting. AMD is pretty good at being competitive in the desktop processor market. It's the polar opposite for the desktop/mobile GPU market and the same goes for Intel. Nvidia is really the only competent GPU maker in the industry, which is why they can get away with doing whatever the hell they want.
People complain, yet they still buy their GPUs.
Package_Objective@reddit
The RX 480 8GB was the same age but significantly cheaper, about 230-250 bucks.
billyfudger69@reddit
Sapphire's R9 290X VAPOR-X in 2014 was the first consumer 8GB GPU.
Dangerman1337@reddit
1070 Mobile had a 256-bit bus, 5070 mobile had a 128-bit bus. There's the problem.
Dull-Tea8669@reddit
And who made the decision to go with 128-bit bus
reddit_equals_censor@reddit
Incorrect.
The main problem is missing VRAM, as they are selling broken hardware that can't run games at all anymore due to missing VRAM.
Having a bullshit tiny memory interface is not at fault here, even though it shouldn't have said interface.
Nvidia can put 24 GB of VRAM on a 128-bit memory bus; mobile or desktop version doesn't matter.
That is clamshell with 3 GB modules.
Without clamshell they could have, at BAREST MINIMUM, 12 GB of VRAM just by using 3 GB modules.
It is not the memory bus. Don't fall for the manufacturers BS-ing like "oh, but we only got a 128-bit bus with that GPU, so we are limited to... bla bla bla...". That is bullshit. It is doubly bullshit, because they decided on the bus width of those GPUs, but as said, the bus width doesn't matter, as we can slap 24 GB on a 128-bit bus with GDDR7, NOT A PROBLEM.
___
And again, to say it super clearly: the die size and memory bus are an insult, but an insult could still be a working card if it had enough VRAM. These don't anymore, so they are broken, and that is the biggest issue.
wusurspaghettipolicy@reddit
Always laugh when you compare volume vs performance on 2 different architectures from 10 years apart. These are not comparable. Stop it.
Antagonin@reddit (OP)
8GB of memory is still 8GB of memory.
wusurspaghettipolicy@reddit
Im convinced you do not understand anything other than the number.
Antagonin@reddit (OP)
please do tell us, big brained individual.
wusurspaghettipolicy@reddit
I'm not biting on your troll bait unfortunately.
Antagonin@reddit (OP)
good for you, even though you are the one who seemingly doesn't understand a thing
wusurspaghettipolicy@reddit
Sure. Keep thinking that. I will send you a private message.
NeroClaudius199907@reddit
Whatever happened to Strix Halo? Those would've been good since you can adjust the VRAM. Everyone here still wouldn't have bought it, but it would be good ammo against Ngreedia.
grumble11@reddit
The iGPU model will get there eventually to take out the 4050 tier and maybe the 4060 tier. Strix had too many cores on its top-GPU model though. Halo's best option traded blows with a 4070, but it costs a fortune and some of the iGPU benefits didn't materialize as well as hoped this time around.
I suspect that in the next five years that big iGPU solutions will take over the lower end though once the all-in-one solutions get a bit more mature.
auradragon1@reddit
Nothing happened to it. It's way worse in $/performance for gaming laptops than Nvidia laptop GPUs.
NeroClaudius199907@reddit
Basically we'll be stuck with dGPUs and whatever VRAM Jensen sets. I doubt a UDNA APU will be better than a similar Nvidia dGPU.
dampflokfreund@reddit
Imagine paying 2500€ for an RTX 5070 laptop, only for it to become obsolete when the new Xbox launches in 2026. That's what Nvidia is doing here. They know 8 GB won't cut it for games that are built exclusively for next-gen consoles.
reddit_equals_censor@reddit
While the VRAM is the MOST CRUCIAL part here, as 8 GB in 2025 is broken, it is a scam. Nvidia and AMD are SCAMMING PEOPLE!
Claiming that these are working graphics cards, but then you get them, try to run a game at 1080p medium and oh, it breaks... (the Oblivion remaster breaks with 8 GB of VRAM at 1080p medium already).
BUT it is also interesting to look at the GPUs to focus a bit further on the laptop scam.
The 1070 mobile uses the same GPU as the 1070 desktop version. The mobile version actually has more cores unlocked: 6.7% more.
So you actually did get a 1070 in your laptop back then, with enough VRAM for the time.
But the die size is interesting to look at as well. The 1070 die is a 314 mm2 die on TSMC 16 nm, and Pascal, from what I remember, was still on a new node.
Nowadays the 5070 desktop chip, the GB205, is a 263 mm2 die INSULT that is AT LEAST one process node behind, on the TSMC 5 nm family of nodes instead of the 3 nm family of nodes.
BUT the mobile "5070" version is GB206, which is an unbelievably insulting 181 mm2 die.
OR: the mobile 5070 version is just 69% of the die size of the 5070 desktop. Disgusting.
OR: the 5070 mobile is just 58% of the die size of the 1070 mobile!!!!
Just 58%, and that is an unfair comparison in favor of the 5070 mobile here, because the 5070 mobile is, as said, one generation behind in process node.
___
And again, this is taking a back seat compared to straight-up shipping broken graphics cards, due to missing VRAM, that can't run games at 1080p medium anymore. Nonetheless it is important to remember how insane of a scam they are running, just looking at the die sizes AND the fact that you aren't even getting the latest process nodes anymore for graphics cards.
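The die-size ratios quoted above, for anyone checking the arithmetic (mm2 figures as given in the comment):

```python
# Die sizes (mm2) as quoted in the comment above.
gp104_1070 = 314    # GTX 1070 (desktop and mobile used the same GP104 die)
gb205_5070 = 263    # desktop RTX 5070
gb206_5070m = 181   # laptop RTX 5070

print(gb206_5070m / gb205_5070)  # ~0.69: mobile 5070 is ~69% of the desktop 5070 die
print(gb206_5070m / gp104_1070)  # ~0.58: mobile 5070 is ~58% of the 1070 mobile die
```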
InsidePraline@reddit
After adjusting for inflation, it's more expensive. Capitalism FTW.
drykarma@reddit
Isn’t capitalism why we also have great competition in the CPU space now?
DerpSenpai@reddit
Nvidia's margins were really bad back then; GPUs were seen as trash in $ per mm^2 of die. Really bad margins while CPUs were getting 50-60% margins. With the new margins, it opens up space for more competitors.
prajaybasu@reddit
So AMD taking over Intel is capitalism and free market. But Nvidia remaining the king due to their investment in efficiency (space & power both) and R&D somehow signifies that capitalism is broken in your sarcastic remark?
Capitalism FTW indeed. If AMD and Intel produced better GPUs and software then people would buy them. Just like how people are buying ARM and AMD CPUs now.
InsidePraline@reddit
Didn't say anything about AMD. I do think that Nvidia software-locking features between generations instead of traditional innovation is not good for the consumer and hence my "sarcastic remark". Enjoy your weekend, not really trying to get into some Reddit debate about something that's been beaten to death.
Mr_Axelg@reddit
this has nothing to do with capitalism. Nvidia is consistently making much better GPUs than AMD and consumers keep buying them. Nvidia is awesome! We the consumer should be thankful that we are getting ray tracing, DLSS, tensor cores to say nothing of the large language model advances of the last few years. Yey capitalism!
InsidePraline@reddit
I think they have enough PR people but you should apply: https://www.nvidia.com/en-us/about-nvidia/careers/
Mr_Axelg@reddit
I would love to work at nvidia haha. i've applied a couple of times but didn't make it. I am more interested in the software side though
dern_the_hermit@reddit
Yeah but the new one has two more GDDR's than the old one so that makes it twice as better, right?
Olde94@reddit
Oh and does no one think about the added cache!! /s
InsidePraline@reddit
Exponential growth when you consider DLSS wizardry. I'd say it's 4x better. Nvidia logic.
Ulvarin@reddit
It’s a joke, especially when you can’t even get a laptop with a 5070 and just a Full HD screen. They’re forcing 4K everywhere, even though the GPU can’t handle it properly.
thelastsupper316@reddit
1440p not 4k.
hackenclaw@reddit
It is normally 8GB of VRAM + a 1440p/1600p, 160Hz-240Hz screen. What a recipe for disaster.
AreYouOKAni@reddit
I mean, I have a Zephyrus G14 with a 4060, 1600p and 165Hz VRR screen. It works pretty good, I can play most games on Medium at 60 FPS, and older games like Red Dead Redemption 1 at 165 Hz locked. For a 14" machine, it is pretty fucking good.
vandreulv@reddit
Or worse, "gaming" laptops still coming with TN panels.
Marv18GOAT@reddit
Even worse: any laptop in 2025 having plastic bezels and a non-glass display.
Beefmytaco@reddit
Yea, it's 2025; there's no excuse for them to not have either a really nice IPS panel or 4th-gen OLED, which is much better with burn-in these days and not that overly expensive as processes have improved.
TN is just bottom-of-the-barrel cheap, and the laptop versions are the worst of all, with some of the worst color reproduction out there. I've got a Spyder5 colorimeter and calibrated a few laptop monitors for a few different brands, and they're always like high 80%s in RGB reproduction, after calibration.
My $400 Gigabyte ultrawide with a TN in it had 97.4% RGB coverage after calibration, and that's a cheap panel right there.
reddanit@reddit
GTX 1000 series has been a bit of an outlier in terms of laptop GPUs. At that time laptops got almost entire desktop lineup (with exception of GP102 from 1080Ti/Titan), that on top had only moderately reduced power budgets. They also had the same memory buses and VRAM.
This has basically never happened before or since 1000 series. At minimum there is some shuffling of different GPU dies between different tiers between laptops and desktops. Recently the sheer power consumption and size of top tier GPU dies made them completely out of reach of anything laptop-sized - they have now grown in size so much that even at power efficiency sweet spot they are too much for laptop cooling/power delivery.
With the 5000 series this is reaching a new apogee of rebranding the GPU dies: the laptop 5070 is the same die that is present in the 5060 Ti, but with more severe power limits. The 5060 Ti itself in turn uses a die that's proportionally tiny compared to previous generations of XX60 products. Basically, the laptop 5070, when looking at its performance relative to the desktop flagship, is the equivalent of a 1050. It's not surprising it's skimping on VRAM; what's actually kinda disgusting is that it's now supposedly in the middle of the stack rather than at the rock bottom.
prajaybasu@reddit
Desktop GPU dies being optimized for a higher TDP makes sense. Why should desktops and laptops be limited to the same power anyway?
The smaller dies costing the same as larger dies makes some sense. The jump to EUV lithography in the 40 series increased costs down the line. There's also the higher R&D cost with all of the RT/AI stuff now.
Of course, I can't say much about Nvidia's profit margins and how much of the increased margins will actually benefit consumers (in terms of R&D) vs. just shareholders.
reddanit@reddit
It's not about whether giving more power to GPUs in desktop makes sense - it's kinda obvious that both power and size constraints are completely different in it vs. a mobile platform.
This is more of an explanation of why laptop GPUs have been falling further and further behind desktop over the last bunch of years (since Pascal). This is also fully independent of how Nvidia/AMD decide to name their mobile chips.
Basically this all comes down to how relative stagnation in desktop GPUs is still miles better than the shitshow happening in laptops.
Alive_Worth_2032@reddit
4K is perfectly fine as an option for that very reason, since you can have perfect pixel scaling down to 1080p. You get both the desktop advantages of higher DPI and games running at FHD with "native" clarity.
jonydevidson@reddit
You don't have to game at 4k. Set DLSS to Ultra Performance and have fun.
DeliciousIncident@reddit
With a 4K screen you can at least set the display resolution to 1080p without any scaling artifacts, since it's an integer multiple.
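A small check of that integer-multiple point: a render resolution maps cleanly onto the panel only when both dimensions divide evenly, in which case each rendered pixel covers an exact block of panel pixels.

```python
# True when every rendered pixel maps to an exact NxN block of panel pixels.
def integer_scales(panel, render):
    return panel[0] % render[0] == 0 and panel[1] % render[1] == 0

print(integer_scales((3840, 2160), (1920, 1080)))  # True: exact 2x2 mapping on a 4K panel
print(integer_scales((2560, 1440), (1920, 1080)))  # False: 1080p on 1440p needs blurry resampling
```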
CaapsLock@reddit
The 390X made 8GB a standard for mid-range in, like, 2014? The 480 made 8GB a standard for lower mid-range in, like, 2016. Here we are in 2025 with huge numbers of cards with 8GB at twice the price of those, almost 10 years later...
Helpdesk_Guy@reddit
I was wondering that too – People really ignore everything AMD graphics for a living, I guess.
People compare against the GTX 1070 with 8 GB VRAM in June 2016 for $399 US, yet forget that …
The AMD Radeon RX 480 with 8 GByte GDDR5 launched the same month for even less, at only $239 US.
The follow-up RX 580 launched in spring 2017 with the same 8GB of VRAM, for yet less: $229 US.
And let's not forget the fact that AMD "accidentally" gifted a bunch of people some 8GB of VRAM, when the 4GB variants could be turned into 8GB models with a simple BIOS flash – 8 GByte of VRAM in 2016 for only $199 US!
Tman11S@reddit
This isn’t a fun fact, this is a sad fact.
I was playing on my 3070 Ti last week and noticed a GPU usage of 50% with the 8GB of VRAM maxed out. I could have squeezed out a lot more fps if Nvidia didn't purposely bottleneck their chips.
Sopel97@reddit
don't play on ultra settings maybe?
Tman11S@reddit
That’s beside the point. The chip can handle a lot more, but it’s bottlenecked by VRAM. The cost of VRAM is pennies compared to the chip, so it’s just deliberately done by nvidia to make you spend 500€ more on a higher tier card, while this one would have been fine
Sopel97@reddit
lmao
NeroClaudius199907@reddit
Why did you buy a $600+ 8GB card when AMD gave you 16GB and more perf for $579 the year prior? You are part of the problem, not just Nvidia.
Tman11S@reddit
Because it was the middle of the scalper crisis and I was happy I could get anything at all at MSRP.
NeroClaudius199907@reddit
Jensen was actually clever. He released it during COVID knowing it would sell out anyway, and 3 years later the same people would want to upgrade again.
Insane 5D chess, but at least it's better than nothing.
prajaybasu@reddit
I'd rather not buy unoptimized garbage AAA $60 games than choose a GPU solely because of more VRAM.
NeroClaudius199907@reddit
Games that use more than 8gb vram are unoptimized aaa garbage?
VastTension6022@reddit
We've been through this so many times, with literally every single product release the past few years. Please.
Antagonin@reddit (OP)
Maybe Nvidia should stop doing that then ?
VastTension6022@reddit
The point is this has already been discussed when Nvidia actually did (past tense) it.
NeroClaudius199907@reddit
People will continue talking about it as long as Nvidia continues producing 8GB cards.
auradragon1@reddit
Nvidia will continue to produce 8GB GPUs as long as there are people who will buy it.
NeroClaudius199907@reddit
That's the loop I guess we have to live with. Wish there were a script to block out mentions of 8GB. I'm tired.
prajaybasu@reddit
Fun fact: People don't want to carry around laptops that weigh the same as small desktops.
Nvidia went for power and PCB-size efficiency, which was enabled by 2GB VRAM chips that launched in 2018 (just 3 years after 1GB in 2015) and the smaller 128-bit bus (4 VRAM chips instead of 8). That is why you have compact gaming laptops that you can actually carry in a backpack and that don't require 330W bricks to charge. This also simplifies manufacturing/cooling/board design, because the 4 fewer VRAM chips leave more space for fins and larger fans.
However, it took until late 2024 for 3GB (not even 4!) chips to begin mass production. That is more than a 6-year difference, literally double the gap between the production of 1GB and 2GB chips.
To reiterate: the rate of VRAM density improvement after 2GB chips has been more than 4 times slower. Hedging bets on AI upscaling was clearly the practical fucking option, which is what Nvidia chose, and clearly the market (despite the screeching of YouTubers and Redditors) saw the same. All of the nonsense excuses about vendor lock-in and brand recognition to justify why consumers are buying 8GB cards are simply excuses, when we JUST saw a giant like Intel being taken down in the desktop and server CPU market.
During the same time, AAA studios began pumping out unoptimized garbage that barely looked better than their 10-year-old predecessors, relying on the blurriness of TAA and upscaling to cover for rushed, unoptimized releases. There is simply no excuse for 10-year-old games to look better than modern games on low-end GPUs. They should look at least the same with the same VRAM, but they don't.
AMD, unlike Nvidia, has virtually no presence in the laptop dedicated GPU market (or really, the wider desktop GPU market) and most of their mobile GPU share is coming from iGPUs. The most popular AMD GPU on Steam below the Radeon iGPUs is the RX 6600, which, by the way, uses the same 128-bit bus that people complain about, and is limited to 8GB just like Nvidia is.
On desktop, it's a slightly different story since the power and space constraints don't apply, which is why AMD does better on desktop (the RX 6600 on Steam is the desktop variant, not the mobile variant). But cost constraints still do apply, which is why Nvidia's stuff has better supply; there are clearly more incentives to produce 128-bit chips.
Once Nvidia upgrades to the new 3GB chips (that just started production) for 12GB VRAM total (4x3GB), people will still complain that it's not 16GB or 24GB. 12GB will be the new 8GB.
Meanwhile, laptops still mostly come with 8, 16 and 32GB of RAM, with 48, 64 and 96GB as new options enabled by denser RAM modules (not chips, except for the 48/96GB options) at the cost of higher latencies. Nobody is complaining about the fact that you could buy a 16GB RAM laptop 10 years ago too?
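[Editor's note: a minimal sketch of the capacity arithmetic behind the bus-width point in the comment above, assuming the standard 32-bit interface per GDDR chip; the 2GB/3GB densities are the ones the comment mentions.]

```python
# Total VRAM on a GPU = number of memory chips on the bus * density per chip.
# Assumption: each GDDR chip uses a 32-bit interface, so chip count = bus width / 32.

def vram_capacity_gb(bus_width_bits: int, gb_per_chip: int) -> int:
    chips = bus_width_bits // 32
    return chips * gb_per_chip

print(vram_capacity_gb(128, 2))  # 8  -> the 8GB configs being discussed
print(vram_capacity_gb(128, 3))  # 12 -> same 128-bit bus with the new 3GB chips
print(vram_capacity_gb(256, 2))  # 16 -> doubling the bus doubles chips and board area
```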
FrequentWay@reddit
Unfortunately AMD hasn't been a decent competitor to Nvidia in the laptop market. It's been dominated by Nvidia the entire time. Until we get some true competition, we will continue to be bent over by Nvidia on VRAM allocation, or pay shitloads of money for additional VRAM.
Asus Strix Scar 16 with 5080 $3300
Asus Strix Scar 16 with 5090 $4211
Asus Strix G16 with 5070 $2400
Asus Strix G16 with 5060 $2000
Laptop prices obtained via Newegg for Core Ultra 2 based hardware. RAM configurations range from 16GB to 32GB; 1TB PCIe SSD to 2TB SSD as minor variations.
prajaybasu@reddit
Because on laptops, both footprint and power efficiency matter, and AMD utterly failed at both for the last few generations, while Nvidia offered OEMs the chance to make 70-series laptops with one tiny GPU die and only 4 GDDR6 chips.
speed_demon24@reddit
My old laptop GTX 880M had 8GB of VRAM.
prajaybasu@reddit
It also weighed 10 pounds. Modern 8GB laptops weigh under 4 pounds. The reduction in GPU die size and the halving of VRAM chips are what enabled that.
sniperwhg@reddit
Since no one else has commented: you're getting downvoted because you're attributing a 6-pound reduction to the GPU packaging. The largest reduction in laptop weight over the years isn't the 113mm^2 in die size, nor is it 4 fewer VRAM modules.
The weight reduction comes from improvements in completely different components in the laptop. Battery density in laptops has doubled, if not tripled, in the last twenty years due to improvements in lithium-ion batteries.
Modern thermal solutions for laptops also contribute to weight reduction, especially for higher-power systems, thanks to developments in heat pipes and vapor chambers compared to the old aluminum/copper fin-stack solutions.
If you wanted to attribute some weight to the GPU packaging, modern systems have mostly abandoned MXM cards in favor of soldered die. Losing a daughter card and interface would contribute more to weight reduction than either example you gave.
The weight of a few mm^2 of silicon in GPU die space and VRAM is hundreds of grams at most. By that flawed weight-reduction logic, shouldn't they go ahead and add VRAM chips and a wider bus, since there's so much weight to spare now?
prajaybasu@reddit
A smaller die and 4 fewer VRAM modules more than halve the area needed for the GPU, because the memory modules are the size of the GPU die these days now that the dies have shrunk.
If it weren't for the substrate/packaging, the GPU and a single GDDR7 chip would take up the same amount of space.
And it's not just a plain area reduction. Each VRAM module needs traces and a thermal pad to go with it. It adds up to a significant amount, and you can see that a LOT of manufacturers simply do not offer a version of their laptop with 8 memory chips anymore.
Look at how dense something like the RTX PRO 6000 Blackwell is. Using super dense PCBs is not possible on a laptop that people will buy for $1000 during a sale.
The CPUs benefit from the exact same. Lunar Lake's packaging not only helps with power but also area/density. The Acer Predator Triton 14 AI uses Lunar Lake and happens to be one of the thinnest gaming laptops right now.
When you don't have RAM chips and their traces taking up space, you can use that space for cooling.
We barely had GPUs 20 years ago, let alone in laptops, so who exactly is looking at the last 20 years? I'm comparing to a gaming laptop from TEN years ago, since the comment I replied to mentions the 880M.
Aside from the switch from 18650 cells to denser lithium-polymer batteries, the weight reduction in laptops is not due to density but to cutting battery capacity down from the 99.9Wh airline limit, because the GPUs and CPUs are more efficient now. That efficiency is partly enabled by EUV lithography, which unlike previous node jumps does not get cheaper every generation but quite the opposite.
There has hardly been much improvement in battery tech. Apple recently increased the battery sizes for the iPhone 16 by shrinking the PCB and increasing the volume of the battery. Not because there's some magical breakthrough.
Again, comparing solutions from 20 years ago. Gaming laptops were definitely using heat pipes 10 years back.
And your analogy is wrong, because the vapor chamber replaces the heat spreader and part of the heat pipes, not the fin stack and all the heat pipes entirely. A fin stack and some heat pipes are still necessary because you have to get the heat out of the system. How will a vapor chamber transfer heat to the air without a fin stack??
Also, vapor chambers are 10 years old too - the Razer Blade Pro from 2016 had one, and so does my current laptop.
What I can tell you is that neither of your explanations (battery or vapor chamber) explains the 6 pound reduction because the finstack and heatpipes are still there and so is a relatively heavy battery.
The vapor chamber on my laptop weighs about the same as a heat pipes + copper heatspreader on a similar laptop, the difference is that my current laptop can handle double the power because the PCB is half the size which allows for more fins and larger fans.
Because the VOLUME is not infinite, and every mm^2 you add in chip area, VRAM dies and PCB traces is space you lose for cooling.
Look at the sizes of laptop PCBs from 10 years ago and today. They have shrunk while the cooling footprint has increased.
That is how gaming laptops have continued to meaningfully exist when PC components have doubled in terms of TDP in the past 10 years. Power density increases when you reduce the footprint of the components since you can afford more cooling.
If laptop designers had sliders, the sliders would be cooling, PCB size and battery size all of which add up to the volume of a typical 14" or 16" laptop today. The PCB size and battery size sliders are moved way lower to allow for more cooling to support the higher power density of the newer chips.
Why do you think I care about what the gamer brained Reddit mob think about hardware?
Your comment doesn't explain a 6-pound reduction either unless you really think in superficial terms. Typical Reddit brained mentality.
randylush@reddit
Petition to rename this subreddit /r/BitchingAboutVRAM
auradragon1@reddit
/r/GamersBitchingAboutGPUPrices
OvulatingAnus@reddit
The GTX 10XX series was the only series that had identical GPU layout for both desktop and mobile.
1-800-KETAMINE@reddit
In fairness to you, regardless of the litigation of specific core counts etc. in the replies to this, it was the one gen where the mobile cards actually performed like their desktop namesakes if given their full TDP and sufficient cooling.
The 20 series had the same core configurations as desktop, but the power requirements were much higher on the desktop cards compared to the 10 series, so the mobile versions were falling behind again. The vanilla, not-Super desktop 2070, for example, was just 5W shy of the desktop GTX 1080's 180W TDP. Much harder to squeeze into a notebook's limitations.
Really an incredible generation IMO, one of the best Nvidia has ever put out in terms of efficiency and performance.
OvulatingAnus@reddit
It was crazy in that, with sufficient cooling and power, the mobile GPUs performed identically to the desktop variants.
Jon_TWR@reddit
I'm pretty sure the GTX 1070 mobile actually had more cores than the desktop variant--so, not identical...the mobile version was actually better in that way!
TheNiebuhr@reddit
This is blatantly false and easily verifiable.
wickedplayer494@reddit
Your comment is blatantly false and also easily verifiable: https://videocardz.net/browse/nvidia/geforce-10 https://videocardz.net/browse/nvidia/geforce-10m
You're wrong in a better way, because the only disparity is that the mobile 1070 actually had a 2048/128/64 CUDA/TMU/ROPs config versus the desktop card's 1920/120/64 config.
TheNiebuhr@reddit
I'm absolutely right. Pascal wasn't the only generation in which GPUs were physically identical on desktop and mobile. So the original claim is wrong.
OvulatingAnus@reddit
How so? The RTX 20XX series had a 2050 version that was not available for desktop but otherwise was the same for both mobile and desktop. That's pretty much the only thing keeping the desktop and mobile variants from being identical.
TheNiebuhr@reddit
What you wrote. It implies that in every other generation the GPUs were different. All 1600/2000-series GPUs were identical across both platforms.
1-800-KETAMINE@reddit
Funnily enough, the 1070 mobile was the only one that didn't share the same core config with its desktop counterparts out of 1060-1080. It actually had slightly more functional units enabled than its desktop counterpart.
dorting@reddit
Almost 10 years, that's crazy. This is a 2016 GPU; in 2006 we had 256/512 MB of memory... imagine how much VRAM we would have had if we had kept the same pace.
Archimedley@reddit
Pretty much why I haven't bought a laptop even though I've been shopping for one on and off for a couple years
I just refuse to buy one with less than 12gb of vram
I honestly don't think I care about the performance that much; 6 and 8GB are just a non-starter.
Antagonin@reddit (OP)
Same opinion here. Saw 5070ti laptop too... 50% more VRAM, for 50% higher price 🤣 This lineup is a total joke.
Guess I will be keeping my 3060 until the end of times.
Tower21@reddit
I remember when Nvidia produced laptop GPUs with double the VRAM compared to the desktop "equivalents".
How times have changed
noiserr@reddit
Nvidia: Our bad, we gave you too much VRAM in 2016.
yeshitsbond@reddit
They fucking put 8GB into a 5070 laptop? Are they actually that cheap? That's more shocking to me than the 1070 having 8GB.
DerpSenpai@reddit
It's not about being cheap, it's the power consumption: 12 or 16GB cards will perform worse at 1440p at the same power vs 8GB.
ProfessionalPrincipa@reddit
That's exactly what that W1zzard character from TPU would say.
yeshitsbond@reddit
lmao
PotentialAstronaut39@reddit
And 9 years later they launched a 5070 laptop with still only 8GB of VRAM.
"Progress"
BlueGoliath@reddit
Posts like this stay up but a video going over VRAM sizes isn't high quality. Love you mods.
Roadside-Strelok@reddit
There are laptop variants of a 3080 with 16 GB of VRAM.
Kamikaze-X@reddit
Yeah and the first Mercedes Benz in 1901 had four wheels, and cars today still have 4 wheels! /s
It's a false equivalence: the 1070 mobile had GDDR5 and the 5060 mobile has GDDR7. 256GB/s bandwidth vs 384GB/s (quick check below).
I get it though, they're still being stingy, but it isn't apples to apples
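[Editor's note: a back-of-envelope check on the bandwidth figures above. The per-pin data rates (8 Gbps GDDR5, 24 Gbps GDDR7) and the 256-bit vs 128-bit bus widths are assumptions consistent with the numbers cited in this thread, not figures from the comment itself.]

```python
# Peak memory bandwidth in GB/s = bus width (bits) * per-pin data rate (Gbit/s) / 8.

def bandwidth_gb_per_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    return bus_width_bits * data_rate_gbps_per_pin / 8

print(bandwidth_gb_per_s(256, 8))   # 256.0 GB/s (assumed GTX 1070 mobile: 256-bit GDDR5)
print(bandwidth_gb_per_s(128, 24))  # 384.0 GB/s (assumed RTX 5060 mobile: 128-bit GDDR7)
```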
_eg0_@reddit
You do realize that going from 256GB/s to 384GB/s is also an insultingly low increase over that time, too...
TheNiebuhr@reddit
2019 laptops with the 2070 had 448GB/s of peak DRAM transfer; how will he justify the 5070M having less than that 6 years later?
prajaybasu@reddit
5070 laptops are significantly thinner and lighter than 2070 laptops.
BatteryPoweredFriend@reddit
Your GPU could have the latest HBM and multiple TB/s worth of bandwidth, but you're still going to error out or outright crash if whatever active project exceeds the available VRAM capacity.
VastTension6022@reddit
It's true, 640KB is still enough today because the bandwidth is much higher!
Antagonin@reddit (OP)
Bandwidth is completely irrelevant, since once you run out of the 8GB (which you will, a lot), you will be completely limited by PCI-E bandwidth and latency.
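[Editor's note: rough numbers on why spilling out of VRAM hurts. The PCIe link rates are approximations, and the x8 link width is an assumption for a typical laptop dGPU.]

```python
# Once assets overflow VRAM they stream over PCIe, which is roughly an order of
# magnitude slower than local graphics memory.
# Assumptions: ~2 GB/s per lane for PCIe 4.0, ~4 GB/s per lane for PCIe 5.0, x8 link.

VRAM_BW_GB_PER_S = 384                      # 128-bit GDDR7 figure cited above
PCIE_GB_PER_S_PER_LANE = {"4.0": 2.0, "5.0": 4.0}

for gen, per_lane in PCIE_GB_PER_S_PER_LANE.items():
    link = per_lane * 8                     # x8 link
    print(f"PCIe {gen} x8 ~= {link:.0f} GB/s, "
          f"{VRAM_BW_GB_PER_S / link:.0f}x slower than VRAM")
# PCIe 4.0 x8 ~= 16 GB/s, 24x slower than VRAM
# PCIe 5.0 x8 ~= 32 GB/s, 12x slower than VRAM
```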
Hayden247@reddit
VRAM generation does not change how much VRAM a game will need. If a game wants more than 8GB, then GDDR5 vs GDDR7 isn't going to make much difference at all. Capacity is KING; that's what the game demands. As long as there's adequate bandwidth to feed the GPU, all you need is capacity, and even then bandwidth just affects how fast your GPU is going to be.
In fact, a 5070M will be far more VRAM-bottlenecked than a 1070M, because the 5070M has a lot more raw power to be held back and choked by a low capacity. The 1070M had lots of VRAM for its time, and today its raw performance is the clear limitation, not VRAM (other than with texture quality, which is effectively free as long as you have the memory for it).
VaultBoy636@reddit
vram bw doesn't matter for shit once you run out of vram
UnsaidRnD@reddit
Yeah, and it can only do marginally better things (from a strictly consumer point of view - ray tracing may be admirable as a technology, but it's not yet that amazing in practice).
sahui@reddit
As long as people keep buying them, Nvidia will keep making these decisions.
kingwhocares@reddit
RTX 5050 laptop is supposed to have 8GB as well.
mokkat@reddit
Provided GPU companies can implement AI-assisted texture compression for huge gains, it might be viable going forward. But they're already a few years late, considering it won't be backported to the heavy games from these years.
Lamborghini4616@reddit
Or they could just give us more vram instead of cheaping out.
shugthedug3@reddit
It's a paltry amount for any GPU carrying the 70 tier name, mobile or not.
Of course a 128-bit bus is cheaper than a 192-bit bus, especially in the case of a laptop motherboard, but still... cards like the 3070 are really showing their age because of inadequate RAM; to release a 5070 this much later with the same amount is shitty behaviour.
I_Thranduil@reddit
If it's stupid but it works, it's still stupid.
bubblesort33@reddit
They are going to use AI as an excuse. Neural texture compression stuff. There is some merit to it if you look at some of the VRAM savings, but 8GB is still insane for this level of GPU.
Antagonin@reddit (OP)
It will take years until we see the first practical implementations in games. Not to mention, it doesn't help with any task other than gaming/3D rendering (unsure whether any 3D renderer will even support it).