AMD Now "World's Fastest" in Nearly all Processor Form-factors
Posted by gurugabrielpradipaka@reddit | hardware | View on Reddit | 166 comments
Temporary-Brick-8295@reddit
this is going to be big
Sevastous-of-Caria@reddit
Don't know if lack of competition will let them turn into Intel or Nvidia (decadence, greed), but if there is a peak to point to in AMD's processor business, this is it.
Firefox72@reddit
A big part of Intel's downfall was them not being able to stick a node landing for years and years.
iokiae@reddit
Core counts will probably not rise much more on full x86 cores. There is almost no benefit in daily workloads (e.g. Threadripper losing in gaming to X3D). Editing/rendering/computation already has EPYC chips. Consumer chips will probably go in the direction of acceleration hardware (APUs with hardware en/decoders, neural network engines, and so on)
einmaldrin_alleshin@reddit
Games not scaling well with multi-chiplet CPUs has a lot to do with the high core-to-core latency between chiplets, not just the difficulty of scaling multithreaded performance. A monolithic 10- or 12-core wouldn't have that issue. See for example the 10900K all the way back in 2020
Alive_Worth_2032@reddit
But that's where it ends. We already had monolithic CPUs above that, and they lost vs their smaller counterparts utilizing ring buses.
The reason is the core interconnect. You get the same problem as with chiplets and latency. The mesh used in Skylake-X is slower than the ring used in Comet.
The ring then doesn't scale past ~12 cores, where worst-case core-to-core latency starts to become higher than mesh. The ring also had other issues when pushed too far. Just ask overclockers how easy it is to verify stability on a 10900K; it's an absolute mess.
A 14900 or 285K is already at the limit of the ring. With 4 E-cores taking roughly the space of 1 P-core, they are the equivalent of 12-core CPUs.
AMD can move to 12-core CCXs, which they are rumored to be doing. But then they too are tapped out with the same core interconnect.
After that you start paying a rather large latency penalty for scaling core count. Doesn't matter if it is monolithic, tiles or chiplets. You are still dealing with physical distance between cores, which adds latency.
Strazdas1@reddit
It depends on what you use it for. Even if we are limited to games, we are now getting engines that scale up to 32 threads just fine.
Alive_Worth_2032@reddit
That doesn't matter, because the performance gained from spawning more threads is lost to the increased latency.
Gaming is fundamentally limited by single-thread speed, because the workload cannot be perfectly split. Even if you can split the work across a lot of threads, there will always be a couple of main threads that are latency-bound and holding back performance.
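The serial-bottleneck argument here is just Amdahl's law. A minimal sketch; the serial fractions below are illustrative assumptions, not measured figures for any real game:

```python
def amdahl_speedup(serial_fraction: float, n_cores: int) -> float:
    """Ideal speedup for a workload with a fixed serial (single-thread) portion."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

if __name__ == "__main__":
    # e.g. assume 20% of frame time is stuck on latency-bound main threads
    for cores in (4, 8, 16, 32, 64):
        print(f"{cores:>2} cores: {amdahl_speedup(0.2, cores):.2f}x")
```

With a 20% serial fraction, speedup is capped at 5x no matter how many cores you add, which is why "engines that scale to 32 threads" still end up bound by the main threads.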
Strazdas1@reddit
Simultaneous things in games can be done without needing to coordinate everything in a single thread; things like PhysX prove it. The latency issues are only there if you go cross-CCD. On the same bus it's not an issue.
Alive_Worth_2032@reddit
Yes, BUT THEY CAN'T BE SPLIT PERFECTLY EVENLY.
There will always be some parts of the workload that are IPC- and latency-bound, no matter how many cores and threads you utilize. Gaming is not rendering, where the workload can be split into near-perfectly even chunks.
No, because AMD would have to change the topology in the CCX as well if they expand core count past 10-12 cores. Latency of the ring gets worse as you add cores and its size increases. Eventually the ring will perform worse than topologies like mesh. That is why Intel switched for Skylake-X, and why they utilized dual rings on the high-end Broadwell Xeons.
Even if they kept the ring, the latency and performance penalty would still be there. You simply can't beat physical distance and physics. Longer paths and more hops mean higher latency.
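The ring-vs-mesh crossover described above can be sketched with back-of-the-envelope worst-case hop counts. This is an idealized model (uniform links, square-ish mesh, no real routing or per-hop latency differences), not actual silicon data:

```python
import math

def ring_worst_hops(n: int) -> int:
    """Bidirectional ring: worst case is going halfway around."""
    return n // 2

def mesh_worst_hops(n: int) -> int:
    """Square-ish 2D mesh: worst case is the Manhattan diameter."""
    rows = math.isqrt(n)
    cols = math.ceil(n / rows)
    return (rows - 1) + (cols - 1)

if __name__ == "__main__":
    for n in (8, 12, 16, 24, 32):
        print(f"{n:>2} cores: ring {ring_worst_hops(n)} hops, mesh {mesh_worst_hops(n)} hops")
```

In this toy model the ring's worst case grows linearly (n/2) while the mesh grows roughly with 2*sqrt(n), so the ring loses somewhere past ~12 cores, matching the argument above.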
Strazdas1@reddit
It does not have to be split perfectly evenly if a single core is fast enough to handle the slowest of those tasks. I agree it is hard to parallelize the tasks in games. The games I was thinking of when talking about the 32-thread engine were ones made by Paradox, where parallelization is easier due to the way the games are designed.
Weren't there successful 12-core ring bus loops that performed fine? Can't we just adapt that to the consumer space?
But is it latency that will hinder performance? The internal latency was what, about 30 ns?
Pelembem@reddit
Just FYI on paradox games: In CK3 and Victoria 3 (their two latest games) my 7800X3D sees very little benefit going from 10 to 12 threads, and sees no benefit going from 12 to 14 threads. So they absolutely do not utilise 32 cores well. They don't even utilise 14 cores well.
Strazdas1@reddit
In CK3 the devs said they did not multithread the scripts, and that's likely going to be bottlenecking people. This is supposedly fixed for Victoria 3. They can utilize the threads, as tests show, although whether they do it very efficiently is arguable.
Pelembem@reddit
SMT threads are absolutely a substitute for real cores here; they just won't see the same performance uplift as real cores would. But zero uplift from extra SMT threads still implies zero uplift from extra real cores.
Strazdas1@reddit
Absolutely not. If the cores are heavily loaded, SMT will at best give you 20% extra performance per core, and if the frontend is fed properly it can be as low as zero benefit.
Pelembem@reddit
Indeed, and at 20% per core anything above zero would show. There are no real workloads where the frontend is so perfectly fed that there is zero benefit.
Strazdas1@reddit
Not in gaming, no. In data servers such workloads can exist.
ScepticMatt@reddit
Maybe we need 3D stacking of logic dies to reduce latency, if thermals permit it
Thorusss@reddit
Thanks, I always wondered whether the "silicon lottery" includes which parts of a higher-quality die had to be disabled in a lower bin, and you confirm that the physical location does matter.
Feath3rblade@reddit
Remember that we said these things about the 4-core chips Intel was putting out for years, until AMD lit a fire under their ass. I still remember the recommendation being to get a 4-core i5, since the extra 4 threads on the i7 were "useless" for gaming, not to mention that the higher-core-count HEDT parts performed worse than said i5 due to lower clocks
Pelembem@reddit
And they were kinda useless. I remember I was happy throughout the life of my i5 2500K with saving $100. At no point was I missing the HT threads, and when I replaced it 5 years later I did so because I wanted faster single-core speeds mostly.
Plank_With_A_Nail_In@reddit
We weren't saying that at all. We were saying there was no advantage because there was no software, and now we are saying there is no advantage because of physical limits.
We knew 8+ cores would be a benefit back then because we did actually have 8+ core chips in Xeons.
We know there is no benefit to 12+ cores (in some workloads) because we have Threadripper and EPYC.
RandomFatAmerican420@reddit
They will. It’s more efficient to make smaller cores. The two main drawbacks are cache and the ability of programs to multithread.
The days of games being single-threaded are largely gone. And most forward-looking games can use over 8 cores; it's just the standard now, pretty much.
So what you are really left with is cache. And with 3d cache stacking being a thing… realistically l3 cache could increase by 50% as soon as next gen for AMD… so that concern is basically gone too.
So, now you can have say 48 “smallish” cores hooked up to double, triple stacked cache systems. And it doesn’t even cost that much to make.
If you gave game devs 2x the CPU power, they would use it easily. If you gave them 4x or 8x the CPU power, they would probably use that too. I think people are under the misapprehension that somehow CPU improvements aren't needed or cannot be used for gaming. I would argue the opposite is true.
We have in many ways reached severe diminishing returns in terms of how much more a GPU can bring to the table. But things like physics, destructible environments, and not having "cities" be limited to like 20 people because of CPU constraints would bring actual massive real-world benefits to games. These things are largely not done because CPUs aren't powerful enough. Bring out tons more cores and these things can become reality, because even low/mid tier (and things like consoles) will have tons of processing ability.
Pelembem@reddit
I'd say we have plenty of fast CPUs for those things already. The main problem is that the number of NPCs and destructible environments aren't something the user can really scale up and down, and an 8700K can't handle them. And games want to have older computers in the minimum spec to allow more sales. If they picked the 7800X3D as the minimum spec, the things you mention would be no problem to implement. We don't need leaps in CPUs; we just need everybody to catch up to the leaps already made.
Thorusss@reddit
Yes, bigger cores are in the area of limited returns, but so are more cores in many games. Even games that use them do not make GOOD use of them. Doubling the core count gives you a few percent, or can even harm you if E-cores are used instead of P-cores on Intel, which was a reported problem with a hacky workaround. So for gaming, more cores are also in the limited-return-to-no-effect area.
FrogNoPants@reddit
A chip designed for productivity workloads not gaming, and clocked much slower is worse at gaming?
Who could have guessed!
Vb_33@reddit
Zen 6 consumer chips will increase core count by 50% so yea... I don't think so.
Guinness@reddit
The next generation of chips is increasing their core counts. Not sure why you’re complaining? AMD had to outperform Intel before they could start increasing cores.
despiole@reddit
It's not only that; Intel failed on strategy too, and on the implementation of stuff they invented (like AVX-512).
AMD's implementation was far more elegant.
The same happened with the E/C cores.
vandreulv@reddit
Intel played the core count game with the P and E cores... and lost badly.
Number of cores isn't everything. If you expect to see ever increasing core counts as a sign of performance improvement then you really have no understanding what benefits workstation and gaming desktops the most.
Cedar-and-Mist@reddit
AMD Medusa will be drastically increasing core counts. It's the next big thing nobody is talking about for some reason. That will be THE time for a platform upgrade.
Plank_With_A_Nail_In@reddit
There's literally no information apart from "MoRe CoREs", that's why no one is talking about it.
bogglingsnog@reddit
Threadripper killers?
Vb_33@reddit
No, Threadripper will get even more cores and will have ample memory bandwidth, while Zen 6 is still on DDR5 and AM5.
Canadian_Border_Czar@reddit
It's not a downfall, it's deliberate. Intel has been saying for years that they aren't interested in high-end consumer chips.
Gamers are irrelevant to them. Almost all of their biggest customers have power efficiency and form factor requirements that prohibit insane performance. Even the high-performance enterprise stuff needs to be extremely reliable and stable
Tim-Sylvester@reddit
The real breakthrough is for whoever finally puts a few bucks down on building memristors.
v1king3r@reddit
The new Threadrippers have just been released with up to 64 cores and they beat everything in performance and efficiency.
fire2day@reddit
7000 series (Non-pro) Threadripper had 64 cores. The big difference is that the 9000 series has 80 PCIe 5.0 lanes, and support for up to 6400MT/s memory.
bogglingsnog@reddit
Except in gaming benchmarks. Do not get a Threadripper for gaming. The biggest advantages were in video editing and code compilation.
v1king3r@reddit
When someone asks for more than 16 cores, I assume they know it's not for gaming :)
atatassault47@reddit
Unless they can put 12 cores on one chiplet, no thanks. I don't want scheduling issues or cross-chiplet latency.
frankchn@reddit
Zen 6 is rumored to have 12 core chiplets.
klement_pikhtura@reddit
The R5 7600X outperforms the R7 5800X. The R5 5600X outperforms the R7 3800X. The R5 3600X outperforms the R7 2700X and 1800X. It's not only about cores if you compare different generations.
UGMadness@reddit
AMD has made great progress in IPC and especially clockspeed during that time though. I'd rather they keep dedicating their die area to more cache than to more cores.
And for applications that actually need the cores, the Threadrippers have grown in core count.
nismotigerwvu@reddit
It's not like this is uncharted waters for AMD. From the release of the original Athlon until Conroe, they held the performance crown with relative grace. A lot of younger folk don't realize they were the ones that developed x86-64, not Intel. I think they'll behave much the same way they always have and focus some of that extra cash on GPU R&D
kernel_task@reddit
I still use "amd64" to refer to that ISA (not x64, not x86-64, etc.), since I think it's the most accurate name.
ffiarpg@reddit
If you measure accuracy based on making sure things are named after who came up with them, sure, but for general use x64 is a much better name. Especially since very few people know the history, and amd64 is confusing at this point. AMD doesn't make the only x86-64 chips, obviously.
cp5184@reddit
Like intel AVX. Intel doesn't make the only AVX chips obviously. In fact, many of their consumer chips DON'T support it while AMD chips DO... Maybe it should be called AMD AVX to let consumers know their ryzen 7xxx and 9xxx support it but the intel competition doesn't...
kernel_task@reddit
I think you make a great point, but man does “x64” bother me on many different levels.
mailslot@reddit
Intel wanted people to move to their slow & expensive Itanium (IA-64) for 64bit compute. 32-bit CPUs were the intended limit for home users.
nismotigerwvu@reddit
Yup. Even back then the community saw VLIW and knew it would fail. Honestly, I think TeraScale is the only successful VLIW design.
ZekeSulastin@reddit
It wasn’t that long ago that Zen3 was going to be restricted to 500-series motherboards, conveniently only extended as far back as the 300-series when Intel suddenly had half-decent competition; not to mention the games they played with RX 90_0 rebates to retailers to meet MSRP at launch.
Being more ✨consumer-friendly✨ than the competition isn’t that high of a bar, and AMD is happy to brush against it as they go over.
No-Relationship8261@reddit
While AMD is destroying Intel. Calling them better than Nvidia is... something I can't get.
Wander715@reddit
Only on reddit would you see this opinion. AMD's GPU division has been pretty awful in recent years. Even with RDNA4 while they somewhat caught up in things like RT and upscaling their pricing ensures they won't gain any market share.
hardlyreadit@reddit
They weren't talking about their GPU division. They are saying how AMD handles their CPU division is better than how Nvidia handles GeForce, for the most part. AMD CPUs have remained in the same price tiers, while Nvidia jumped from $700 to $1200 to $1600, and now its MSRP is just nonexistent
f3n2x@reddit
This is straight up false. The very first chance they got where they were really competitive (Ryzen 5000) they IMMEDIATELY jacked up the price, even though they almost certainly were cheaper to make than Ryzen 3000 at launch.
hardlyreadit@reddit
Yeah, price tier doesn't mean it was the same price. There was like a 50-70 dollar increase, but generally the 6-core has been $250-300. The issue with the 5000 series is AMD was ahead, so they didn't release their budget options: the 5600 and 5700X were released much later in the product cycle. But CPUs have generally been around the same price since the 3000 series
Thorusss@reddit
I mean yes, nominally the mid- and high-tier cards ending in the same numbers went up in price, but there are now more performance levels to choose from in each generation. The spread is just wider.
You do get more performance per $ each generation. For someone tech-savvy, you shouldn't be distracted by product names, just by what you get for your money.
chmilz@reddit
Gaming GPUs are a rounding error. Gamers need to realize nobody is putting real effort into gaming GPUs.
EdliA@reddit
What do you mean nobody. Nvidia cards are fire for gaming.
peakdecline@reddit
Ultimately Nvidia and AMD have their chips made in the same places. And both companies make far more profit per-chip selling to anyone but gamers.
That's to say... there's actually next to nothing to gain by chasing what you want them to chase.
Thorusss@reddit
And again: Nvidia is NOT limited by single-chip production capacity to produce more Blackwell AI accelerators. It is mostly the CoWoS step to join the two chips together, and partially also the HBM RAM. Both technologies are NOT used even in the 5090, so consumer GPUs don't eat into their AI capacity.
wilhelm-moan@reddit
Only Reddit cares about gaming GPUs. They own the largest FPGA company, which is also pushing the most state-of-the-art CNN synthesis library (FINN). I've never met an engineer who uses Quartus. Intel is done, and Nvidia will hold as long as GPU-focused models beat ASIC/FPGA-focused models (likely forever, since there's an infinite number of computer science grads now)
MdxBhmt@reddit
It's barely even on reddit.
RedTuesdayMusic@reddit
Not the priority as long as the same fab is used for CPU and GPU. Only when they have a good enough GPU to be confident about a win will they push GPUs more
Dave10293847@reddit
It's because Nvidia is mean, and Reddit refuses to accept that raster is less relevant. Like, I'm sorry guys, but relying on raster alone to fill millions of pixels with modern rendering is just not happening. AI acceleration is going to be necessary to move forward.
Neither MSAA nor TAA is the answer. Hyper-efficient AI approximating the answer is the most reasonable way to keep pushing graphics and photorealism. I've seen it myself with DLSS4. It's really good now.
Apprehensive-Buy3340@reddit
That's because you failed your reading comprehension test, OP is saying that AMD is turning into Nvidia (on the CPU side), as in having such a competitive lead that they're risking of turning complacent.
mrpops2ko@reddit
The one area where Intel at least seems to dominate, with the N100/N305, is the super-low-power-consumption SKUs.
I don't think AMD can hold a candle to Intel there. The N100 consuming 6W at idle and 12W at full load or something, for 4 cores, is pretty impressive.
It'd be nice to see AMD try to take the fight to that area.
Hytht@reddit
A 6W TDP CPU taking 6W at idle is pretty bad
Vb_33@reddit
Le Nvidia bad, caveman black and white thinking.
MC_chrome@reddit
AMD is better than NVIDIA, from the perspective that they have constantly invested in open source software systems that don’t tie you down to one specific piece of hardware
NVIDIA, meanwhile, is quite content with building their walled garden as high as they want because neither customers nor governments are willing to touch them.
ResponsibleJudge3172@reddit
Nothing to do with being best as presented here
Plank_With_A_Nail_In@reddit
Calling it now AM6 will only support one CPU generation or have soldered RAM or some other shit.
NerdProcrastinating@reddit
I expect Panther Lake will soon become the fastest processor for the general Windows PC laptop segment.
VastTension6022@reddit
AMD doesn't need to "turn into" intel or Nvidia, they always have been. Look at how they've acted in the times where they were the underdog or even hopelessly behind and you can be sure it's not going to get better when they're ahead.
ShoutOfDawn@reddit
As the cost of increased performance keeps growing at a seemingly parabolic rate, the market won't be big enough for two companies to still be profitable.
Look at AMD going back to a unified arch in UDNA; RDNA was their consumer/gamer product, but they have seen the profit from AI, and you can bet that UDNA is targeting enterprise first and foremost, even at the cost of it being a lackluster consumer product like Ryzen 9xxx
Exist50@reddit
Where's that claim coming from?
ShoutOfDawn@reddit
A bit of hyperbole, plus the fact that from the 20nm node to the 2nm node the price of wafers increased 8x, plus the increased chip design costs.
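To see why an 8x wafer price matters, here's the basic cost-per-transistor arithmetic. The 8x figure is from the comment above; the density multipliers are hypothetical placeholders, not actual process data:

```python
def cost_per_transistor_ratio(wafer_price_ratio: float, density_ratio: float) -> float:
    """How cost per transistor changes when wafer price and transistor
    density both increase between two nodes."""
    return wafer_price_ratio / density_ratio

if __name__ == "__main__":
    # Hypothetical density gains for an 8x wafer price increase:
    for density in (4.0, 8.0, 16.0):
        r = cost_per_transistor_ratio(8.0, density)
        print(f"density x{density:g}: cost/transistor x{r:.2f}")
```

Unless density scales faster than wafer price (more than 8x here), cost per transistor stops falling, which is the squeeze on the market being described.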
MysteriousBeef6395@reddit
Don't like how things are going rn. With Intel getting weaker every day, we might see another few years of consumer CPU monopoly soon, at least until Intel gets back on track or ARM stops being niche
nithrean@reddit
Yeah. I hope they can get their act together. Competition has always been good for the consumer. I really wonder where Intel went so wrong.
phd24@reddit
Years and years of quad-core refreshes with little to no improvement in IPC or thermal efficiency resulting in at best 10% increases in performance. That's the technical part. But that came about through underinvestment in core technologies and instead spending their profits on share buy-back schemes that gave short-term benefits to shareholders.
Pelembem@reddit
That's not at all the issue; Intel pulled out of the quad-core spam era just fine. Alder Lake, for example, was great, beating most of AMD's lineup and having higher core counts. The issue for Intel is sticking with their own fabs, and their fabs failing. AMD has had a serious node advantage for the past couple of years.
nithrean@reddit
They also got stuck on their 10nm process, so they were on 14nm forever, with little real generational uplift. But they were executing well for quite a while and bringing some real improvements. Now they suddenly seem to have died almost completely ...
slither378962@reddit
AMD also sucked at one point. But they got gud!
pianobench007@reddit
Money supply drained, i.e. capital was not allocated to the core business. Intel's core business is CPU design and manufacturing.
Intel has many side businesses and a side investment arm.
Instead of side business ventures, capital would have been better allocated to foundry and chip design. Both of which they are losing to TSMC and AMD among many others.
TSMC is supported by foundry capital from the following companies: AMD, Apple, Qualcomm, MediaTek, NVIDIA, and even Intel themselves. All or most of that foundry revenue at TSMC is then reinvested into the foundry.
I think that was Intel's problem.
https://en.m.wikipedia.org/wiki/List_of_mergers_and_acquisitions_by_Intel#Acquisitions
Thorusss@reddit
One could say Intel's core business focus should have been their cores.
pianobench007@reddit
I don't know enough. But Intel did purchase internet security company McAfee plus some other cloud gaming and a whole bunch of VoIP companies.
I think their biggest foray into something outside of CPUs is Mobileye. Which could be seen as a conflict of interest with Waymo and Tesla. So another minus to their foundry business. Just a whole bunch of different purchases and yeah.
TSMC appears to just take in foundry capital and spend it on foundry R&D. And that makes sense why they are doing so much better with their process technology.
It also makes sense why Samsung is behind. Samsung itself has its hands in many markets too. If they built a Switch competitor, Nintendo might not have chosen Samsung 8N for the Switch 2. So who knows? I don't see Qualcomm fabbing with Samsung, and that makes sense, as Exynos is a direct competitor to Snapdragon.
raptorlightning@reddit
Dividends instead of R&D. Ran the business like a monopoly money printer but didn't bother upgrading the printing presses.
logosuwu@reddit
Why does everyone fall for this story? Intel spent more on R&D than AMD's entire revenue for years.
Strazdas1@reddit
When you do $150 billion of stock buybacks, it's easy to fall for it. The truth is Intel's R&D was a string of failures that really ended up hurting them long term.
farnoy@reddit
It's probably an exaggeration, but they have been returning way too much value to investors for too long, and that's cut into the remaining runway to save the business. Had Gelsinger stopped all of that when he became CEO, they might have had another year's worth of Foundry spend to refine and look for customers.
I think the numbers work out: there's been around $16B in dividends issued during Gelsinger's tenure, and their total R&D spend for 2024 was $16.5B. Or if you look at the operating loss of Foundry alone, it's about 5+ quarters' worth.
Olde94@reddit
While I don't want stagnation, having just upgraded to the newest, I wouldn't mind reliving my 2500K days, when I didn't have to think about buying new because it wasn't relevant
Vb_33@reddit
Fuck that give me new significantly better tech all the time. If I bought a GTX 980 in 2015 and the 1080 in 2016 runs circles around it then that's a good fucking thing to me. Doesn't mean I'll upgrade a year later but it does mean technology is continuing to evolve and things that were previously impossible are becoming possible at a nice brisk pace.
NeroClaudius199907@reddit
Apple's & qualcomm are pretty good
MysteriousBeef6395@reddit
Not denying that, but it'll be a while until the average person buys an ARM-powered gaming PC
Vb_33@reddit
We'll see what Nvidia has to say about that, at least for gaming laptops.
MysteriousBeef6395@reddit
ARM-powered gaming laptops/handhelds would be awesome, honestly. I just see Microsoft's bad Windows compatibility layer as a hurdle. Obviously things like Proton exist, but anti-cheat limits that somewhat as well
Creepy-Bell-4527@reddit
You mean Windows on ARM? Because I don't think you can call ARM niche in the sector anymore, since all Macs in the last 4-5 years have been ARM.
MysteriousBeef6395@reddit
I'm not really counting what Apple does, since Apple only does Apple things. My comment is more regarding the gaming/desktop market
Creepy-Bell-4527@reddit
Fair. Chromebooks also but that doesn't fall into the market you're talking about either.
MysteriousBeef6395@reddit
Honestly, I wasn't aware that there were ARM-powered Chromebooks. That's pretty nice
WJMazepas@reddit
Intel is still selling really well and is the major player in the laptop space
I doubt AMD will be able to take the laptop space from Intel
DaddaMongo@reddit
Please correct me if I'm wrong, but every few years there's a major change in technology leading to an upset in the rankings. At one point ATI were the GPU kings, but I think it was CUDA that broke them. AMD sold off their foundries due to Intel dominance. Hyperthreading was a major change, multicore also, and AMD chiplets alongside 3D cache stacking have hit Intel hard. Hopefully they pull their socks up and in a few years the tables turn; we really don't want monopolies in technology. My biggest concern is the lack of new players. ARM could really upset things if they push it, but the likes of Nvidia aren't slowing, and someone really needs to kick their arse.
WJMazepas@reddit
Nvidia always sold more than ATI/AMD
Tradeoffer69@reddit
With ARM itself considering entering the CPU market, it will probably remain niche, or others will drop out of ARM, as it will be seen as a second Intel of the 2000s.
xeridium@reddit
Intel setting its own ass on fire right now probably not going to help.
DerpSenpai@reddit
There won't be a monopoly with ARM getting into the picture. You will have QC, MTK/Nvidia, and more will join (especially Chinese ones too)
Raikaru@reddit
Since when are they faster than the M4 Max in laptops?
Internal_Quail3960@reddit
m3 ultra is faster in desktops too
rip-droptire@reddit
Threadripper
Creative-Expert8086@reddit
Cherry-picking; AI PC is a Microsoft thing.
despiole@reddit
Imagine if someone had told you in 2015 that amd would be the fastest all around.
I don't think even the most optimistic AMD fan would have believed it.
Tiny-Independent273@reddit
when's Intel making a comeback?
mestar12345@reddit
When they get paranoid enough, again.
vdbmario@reddit
No surprise here. Intel was too cocky in the past, then stopped innovating altogether; today they have zero innovation. They'd rather focus on firing people than creating better products. LIP is incompetence on another level. How is a 65-year-old going to bring back innovation? The guy probably still uses a rotary phone, was part of the fraud at Cadence Systems, and has allegiance to China more than the USA. He's the final nail in the coffin for Intel.
DeliciousIncident@reddit
That's cool, but why are there only a couple of 9955HX/3D laptops and hundreds of Intel 275HX?
chamcha__slayer@reddit
9955HX/3D models have the worst battery life; that's probably why.
Lukeforce123@reddit
As if high end laptops had any battery life to begin with
Internal_Quail3960@reddit
Macs do
vandreulv@reddit
He said high end. Not proprietary garbage.
Internal_Quail3960@reddit
The M4 Max chip competes with a 4080M in terms of GPU
vandreulv@reddit
You've got more glaze for Apple than Krispy Kreme does for doughnuts.
https://old.reddit.com/r/hardware/comments/1m4ngx1/we_benchmarked_cyberpunk_2077_on_mac_m1_to_m4_the/
M4 Max barely matches the 4060 laptop chip.
When it comes to gaming, Apple hardware is the last I'd consider if just on account of its poor cost vs performance ratio.
Let me know when I can upgrade 32GB of Ram in a Macbook AFTER I already paid for it.
Internal_Quail3960@reddit
If you are doing a general comparison instead of just gaming, then the M4 Max will slam the 4060M.
Look at Blender, video editing, photo editing, or basically any kind of creative work
vandreulv@reddit
Nah. I prefer machines that aren't obsolete due to fixed specs.
My laptop: Open bottom cover, add more ram.
Your laptop: Throw it away and buy another one, more eWaste.
No thanks.
Oh and btw, wipe your chin.
S1rTerra@reddit
Both of you guys should kiss and make out
chamcha__slayer@reddit
The latest gen intel ones last 5-6 hours on battery.
got-trunks@reddit
Do really everything except for business/productivity/media software?
Internal_Quail3960@reddit
Because they can't beat the Macs, haha
got-trunks@reddit
Double Intel ouch lol.
wh33t@reddit
Fucking Intel, get your shit together.
Dreamerlax@reddit
Imagine saying this 10 years ago.
vandreulv@reddit
I WAS saying this 10 years ago.
Skylake is an abomination and it's been a speedy downhill slide for Intel ever since. Absolutely garbage architecture full of problems and everything since has been patching over unfixable holes.
wh33t@reddit
I dunno about 10 years ago, but I remember buying like my 5th 4c4t CPU from Intel and thinking this company just can't seem to innovate anymore.
THXFLS@reddit
Not hard to imagine. 10 years ago they were just about to launch Skylake, after only just bringing the very delayed 14nm process that was originally scheduled for 2013 to desktop with the pretty underwhelming Broadwell. Easy to forget because of how much worse the move to 10nm turned out.
oh-monsieur@reddit
Eh, but as others are saying, there are a lot of signs that AMD is stalling out. For laptops, I am loving 226V iGPU performance, considering I scored this Asus laptop for under $600, and AMD hasn't made too many big leaps in that form factor since like 2022. Hopefully Intel keeps pushing on their GPUs to keep AMD, Apple, Qualcomm, and (eventually) MediaTek-Nvidia laptop designs honest
DNosnibor@reddit
Strix halo and their X3D mobile chips are pretty significant leaps AMD has made in the laptop space since 2022, but those are geared at maximum performance, not high efficiency. Strix point was a decent step up in performance/watt for them, but Apple is still well in the lead on that front, yeah, and even Intel with lunar lake for a lot of workloads.
Tim-Sylvester@reddit
I remember 15 years ago when I was exclusively buying AMD procs because Intel was sitting on its ass doing fuck all and everyone was like "lmao AMDs for poors" and I was like "nah fam Intel is a waste of money they ain't doin shit" and everyone thought I was stupid for it. Well look now you bastards!
Qesa@reddit
Nehalem and sandy bridge was Intel sitting on its ass?
Tim-Sylvester@reddit
And since then?
Setepenre@reddit
haha, the blue part is what Intel used to be known for
the_dude_that_faps@reddit
Until they have an Apple Silicon contender, they won't have the laptop crown. GL with that. I don't think I've seen any leaks about anything regarding laptops with any substance.
ocxtitan@reddit
Not a fan of monopolies, especially not when there are no innocent players in the industry, so hopefully competition comes soon to force a win to the consumers, not to one particular company.
Lille7@reddit
Intel has a larger marketshare than AMD on the cpu side and nvidia has a larger marketshare on the gpu side.
Creative-Expert8086@reddit
M4 Max:???? Am I not an AI PC?
DNosnibor@reddit
I mean, Apple intentionally tries to differentiate their devices from PCs. Like those old "I'm a Mac... and I'm a PC" ads. So yeah, I don't think Apple would call their devices AI PCs.
CalmSpinach2140@reddit
about that.. https://www.reddit.com/r/Surface/comments/1dol5f1/feels_like_apple_is_responding_to_microsoft/
DNosnibor@reddit
Hahaha, ok you got me, I hadn't seen that lol
Creative-Expert8086@reddit
cherrypicking exercise?
m0rogfar@reddit
Apple considers the capitalized PC brand to refer to the lineage of IBM PC compatibles, which their ARM-based computers definitely do not qualify as.
kyleleblanc@reddit
Meanwhile the M3 Ultra in the Mac Studio is an absolute AI beast because it can be configured with up to 512GB of unified system memory. 🤷🏼♂️
BlueGoliath@reddit
The AMD circlejerking will continue until there is a monopoly.
7silverlights@reddit
I kinda want to see Intel go bankrupt and dissolve then see AMD and Nvidia go absolutely even more nuts in pricing.
kyleleblanc@reddit
They may be the “world’s fastest” but nobody beats Apple at performance per watt.
noiserr@reddit
For light workloads. For heavy workloads AMD owns that too.
vlakreeh@reddit
I mean, that’s more on Apple not building server chips, not the efficiency of the microarchitecture. The M3 Ultra is more power efficient than any comparable 32-core CPU you can get from AMD, and there’s no reason that couldn’t also be the case for higher core-count chips if Apple were willing to spend some serious money at the fab.
noiserr@reddit
Architecture absolutely plays a role. SMT is a huge benefit in high throughput applications.
vlakreeh@reddit
Apple is much more efficient than Zen 5: at least 5% faster core for core, and only ~12.5% larger in die area (excluding L3, using split L2 for M4). AMD’s PPA is definitely matched.
noiserr@reddit
SMT can give 50% more IPC in workloads like databases compared to regular benchmarks we see between these two.
It's actually not even close.
vlakreeh@reddit
I mean, sure in some benchmarks there can be drastic performance differences but there’s always something that specific micro architectures excel at. There are workloads where Apple has a huge lead in performance just down to the memory bandwidth advantage they have over any EPYC platform out at the moment.
Throughput of what? And what does being the leading TSMC customer have to do with anything?
shmehh123@reddit
If Apple had kept the Mac Server around it’d be absolutely killing it in data centers. But from the mid 2000’s to 2020 idk what they could have done to keep that division afloat lol.
exscape@reddit
Apple are generally fastest in raw 1T and few-T performance, too. For example, Geekbench 6 has the 9950X3D at 3379 1T, 22536 nT and M4 Max at 4054 1T, 25913 nT.
So a laptop beats it by 20% single-threaded and 15% multi-threaded.
Looking at Cinebench 2024 which is more trivially multithreaded, the 9950X3D scores 141 / 2410 with the M4 Max at 178 / 2089, so +26% 1T but -14% multithreaded.
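The percentage deltas above can be reproduced from the quoted scores. A quick sketch (scores are as listed in the comment, not fresh runs; actual results will vary by unit and cooling):

```python
# Recompute the M4 Max vs 9950X3D deltas from the scores quoted above.
scores = {
    "Geekbench 6 1T":    (3379, 4054),   # (9950X3D, M4 Max)
    "Geekbench 6 nT":    (22536, 25913),
    "Cinebench 2024 1T": (141, 178),
    "Cinebench 2024 nT": (2410, 2089),
}

for bench, (amd, apple) in scores.items():
    delta = (apple - amd) / amd * 100
    print(f"{bench}: M4 Max vs 9950X3D {delta:+.1f}%")
```

This gives roughly +20% / +15% for Geekbench and +26% / -13% for Cinebench, matching the figures above to within rounding.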
They seem to ignore them and count x86 only -- I wonder why...
996forever@reddit
Well let’s make it “fastest adoption rate by OEM” next then.
terribilus@reddit
and about to be 20% more expensive.
Ok-Position5435@reddit
The absolute state of Intel
ResponsibleJudge3172@reddit
They are best at high power workstation CPUs, desktops and laptops.
Are behind at low power laptops.
Are behind at GPUs of any kind (why would latest AMD be compared to Hopper otherwise?)
DerpSenpai@reddit
World's fastest processors for laptops, yet wildly behind Apple in both IPC and performance, and in 2 months, behind Qualcomm in IPC and performance too.