Apple’s new M6 chip could launch surprisingly soon, per report
Posted by Forsaken_Arm5698@reddit | hardware | View on Reddit | 140 comments
Forsaken_Arm5698@reddit (OP)
I wonder if we are in for a core count increase with M6? One could guess 6P + 6E, but Apple doesn't need it. M5 has formidable nT performance for a 10-core CPU, faster than M2 Max. On the GPU side a core count bump is plausible, like M1 -> M2, since M6 is expected to have the same Family 10 architecture as M5.
schwimmcoder@reddit
They won't increase the P-cores; it would hurt efficiency. My guess is 2 more E-cores, so 4P + 8E will be M6's config.
bazhvn@reddit
A 2-core increase for the GPU to bring it on par with Intel X chips spec-wise would be nice too
MassiveInteraction23@reddit
I would, naively, expect focus on GPU/Tensor/“Neural” silicon over more traditional cores.
bwjxjelsbd@reddit
I hope they increase the speed of the unified memory and add more 'neural accelerators' so it can run AI locally faster.
I'm tired of paying for these providers
No_Housing_1857@reddit
The biggest evolution will be in the GPU because the GPU is important for games but also for AI.
Forsaken_Arm5698@reddit (OP)
I think that's unlikely, for several reasons.
schwimmcoder@reddit
But an E-core needs 1/10 or so of the power of a P-core. And Apple does not focus on performance, they focus more on efficiency. Same as with the A-series: since the introduction of the big.LITTLE architecture with the A10, every chip has had 2 P-cores, but the E-cores increased over time from 2 to 4.
-Purrfection-@reddit
They would just cannibalize their own higher end chips, since most users don't need more. This is also why the base storage is low and they make you upgrade.
If the majority is never going to use some super multithreaded application or more than 200GB of storage, then why give them something they won't use, thereby screwing yourself over? The prosumers and businesses are happy paying more for the upsells since they make money using the hardware. Who you really piss off here is the enthusiast who wants higher specs but doesn't earn any money back with the machine.
sylfy@reddit
Basically this. The majority of users are simply doing web browsing and maybe some Microsoft office, browsing photos, and watching videos. For these tasks, the base M4/5 chip is already overkill. For more serious users, the Pro or Max configs make more sense.
theholylancer@reddit
yeah, unless they seriously get into gaming, and even then it would be more of an iGPU bump; more P cores don't make a lot of sense unless something major changes.
who knows, maybe Apple with its TIGHT integration can somehow make changes to the stack as a whole / how you program games for the Mac, so they use more P cores
but on the PC front, going more multi-core centric has been a major hurdle for the longest time, and even the games that use more than just a few cores use them very unevenly, with like 2-4 cores actually loaded to the max while the others are more supporting cores.
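Amdahl's law captures that hurdle: if only part of the frame work parallelizes, extra cores add little. A minimal sketch with illustrative numbers (the 50% parallel fraction is an assumption for illustration, not a measurement):

```python
# Amdahl's law: speedup from n cores when a fraction p of the work parallelizes.
def speedup(p: float, n: int) -> float:
    return 1 / ((1 - p) + p / n)

# Hypothetical game workload where ~50% of frame time parallelizes:
for cores in (4, 6, 8, 12):
    print(f"{cores} cores: {speedup(0.5, cores):.2f}x")
# 4 cores -> 1.60x, 12 cores -> 1.85x: tripling the core count barely helps
# when most of the frame is serial, which is the hurdle described above.
```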
DYMAXIONman@reddit
They still need to ensure that their multi-core performance is at least on par with whatever Intel and AMD plan to do. Zen 6 is going to be 12 cores per CCD, and Intel is planning on packing a lot of cores into Nova Lake.
Lille7@reddit
Why? Extremely few people would switch platform/ecosystem, unless the performance gap is massive.
Sopel97@reddit
you don't need more for web browsing
DYMAXIONman@reddit
It's switching to N2, so they can either run back the same configuration for increased efficiency or add more cores.
pluckyvirus@reddit
M3 is 8.
Artoriuz@reddit
It hasn't even been 6 months since the M5 debuted...
ResolveSea9089@reddit
Can I ask a stupid question? How can they improve the chip so quickly and so systematically? Did they have some breakthrough in the last 6 months? What engineering breakthrough do they have now that they didn't have before? Obviously that last one is rhetorical, but I'm just curious how all this works. It's amazing to me how chips improve year over year so systematically.
Like there's a book and every year they read a new chapter. Incredible.
haykding@reddit
It's mainly chip/package fabrication innovation happening at TSMC. Apple is only doing clever CPU design on top.
Sensitive-Ad-2486@reddit
They probably deliberately deliver below their full potential. What we see in stores now is what's going to be outperformed soon. That's not to say the M series isn't pretty great
Forsaken_Arm5698@reddit (OP)
If you recall, M4 dropped only 7 months after M3, which took everyone by surprise. Gurman is saying we could see a similar thing with M6.
Klutzy-Residen@reddit
Probably a strange thing to say, but I don't understand the benefits for Apple of constantly releasing new chips with such a short time frame between.
It adds a lot more products to support, while it is unlikely to have a significant impact on upgrade cycles for customers.
MassiveInteraction23@reddit
A) Being as consistently ahead of competition as you can is good.
B) Incrementally adjusting production for chip improvements is probably a lot safer.
So, as long as you’re working on improvements why not pipeline them rather than hold?
m0rogfar@reddit
They're already paying most of the R&D cost to have a new microarchitecture for the iPhone every year, making a variant with more cores is easy, relatively speaking, because they've already paid for the microarchitecture design.
Forsaken_Arm5698@reddit (OP)
True, though it still does cost something to design and tape out a chip.
https://www.tomshardware.com/software/macos/apple-spent-dollar1-billion-to-tape-out-new-m3-processors-analyst
Strazdas1@reddit
As you pointed out, this is actually just 0.25% of their revenue.
Wonderful-Sail-1126@reddit
>Probably a strange thing to say, but I don't understand the benefits for Apple of constantly releasing new chips with such a short time frame between.
It's a yearly cadence. The big R&D cost is designing new cores which is already "free" because of the yearly iPhone release. All Apple has to do is scale those new cores up for M series.
Base M series go into iPad Air, iPad Pro, MacBook Air 13 & 15", Mac Mini, Macbook Pro 14". That's 6 products. It's worth the yearly upgrade.
dagmx@reddit
Because they're vertically integrated. Having all their major products, including software, on a yearly schedule means they can release features that are enabled by each other, and simultaneously have consistent baselines for adoption, meaning they can have better guarantees of availability
Forsaken_Arm5698@reddit (OP)
They make $400B in annual revenue. An annual cadence is aggressive and costly, but it does seem to be bearing fruit:
https://www.reddit.com/r/hardware/comments/1qjsy7q/apple_silicon_approaches_amds_laptop_market_share/
Klutzy-Residen@reddit
Sure, but what I'm trying to say is that I think they would be just as successful if they launched yearly rather than with 6-7 months between.
Forsaken_Arm5698@reddit (OP)
oh they are on a yearly schedule still, I think. Thing is, it's not 'fixed'. Sometimes a chip can be a few months early or a few months late, but it's still roughly 12 months per generation.
VastTension6022@reddit
I mean on average they do release yearly, just some gens take 16 months and then the next ends up looking early 7 months later.
spydormunkay@reddit
It at least benefits consumers, as with every 6-8 month release of M-series chips, older M-series laptops become much cheaper, even though their performance is still very good by modern standards.
ixid@reddit
If you achieve a big enough gap between yourself and your competitors, you can aim for completely crushing them: the gap becomes too big, and too uneconomical, for them to attempt to bridge.
battler624@reddit
different devices tho.
996forever@reddit
They're not AMD, sitting on rebadges after rebadges after having only the slightest taste of victory.
Indie89@reddit
Marketplace momentum. They're eating into Windows' share, something they've struggled to do for years
Wonderful-Sail-1126@reddit
M4 release cadence could have been a one time thing because it was using N3B, a more expensive node than N3E. Apple probably wanted to move away from N3B asap.
DerpSenpai@reddit
M series is now a yearly release
KeyboardG@reddit
It's a good rumor, since the Intel review embargo dropped yesterday with headlines claiming that Intel finally caught Apple.
The actual quote: "One wrinkle: I think the M6 chip is potentially coming sooner than people anticipate. Not necessarily in these next laptops, but still in the near future in some configurations."
DerpSenpai@reddit
>with headlines claiming that Intel finally caught Apple.
Those headlines are wrong, Apple is like 40% faster in Cinebench ST
Successful-Royal-424@reddit
qualcomm gpus are so garbage they're practically closer to a CPU than an APU. also, at least apple and intel fully work on their own operating systems; qualcomm is still stuck in the mobile space
KeyboardG@reddit
Nobody is debating the actual facts. I am talking about what the headlines ran with.
Johnny_Oro@reddit
Intel's ST hasn't actually caught up, but Panther Lake is a tock, just a slightly improved version of the Lion Cove/Skymont cores, plus a unified IMC which improves memory latency. Nova Lake (and Zen 6 to an extent, though AMD has just confirmed the iGPU will be meh) will bring much bigger improvements to the cores.
neil_va@reddit
What do you guys think memory bandwidth will be on the M6 pro (not max)?
I have an old M1 Max and the 400GB/s memory bandwidth is really nice for local LLMs, but the new Max models are SO expensive.
Forsaken_Arm5698@reddit (OP)
M5 is 153 GB/s, so M5 Pro should be 306 GB/s.
M6 Pro will be similar or slightly higher.
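The arithmetic behind those figures, as a minimal sketch (the 128-bit bus and LPDDR5X-9600 speed are the commonly reported base-M5 specs; the Pro tier doubling the bus width is an assumption based on past generations):

```python
# Peak LPDDR bandwidth = transfers/s * bus width in bytes.
def bandwidth_gbs(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * (bus_bits / 8) / 1000  # GB/s

print(bandwidth_gbs(9600, 128))  # base M5: LPDDR5X-9600 x 128-bit ~= 153.6 GB/s
print(bandwidth_gbs(9600, 256))  # assumed Pro config, doubled bus ~= 307.2 GB/s
```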
neil_va@reddit
Ya, maybe good enough. Honestly the cost of the MacBook Max CPUs is so high that it's just hard to justify anymore
SmashStrider@reddit
Apple's gains of 20-30% in ST with every M-series launch are honestly both remarkable and terrifying, to say the least.
Baume12@reddit
Why terrifying?
Seanspeed@reddit
Because nobody else can keep up.
YeOldeMemeShoppe@reddit
When the competition (e.g. Intel) shoot themselves in the foot, I’m not gonna feel sorry for them losing the race.
techraito@reddit
Idk if they really shot themselves or just made a poor gamble.
Intel went the full business route first, and only backpedaled to gaming after they saw Nvidia and AMD's success. Even the modern Intel CPUs with E- and P-cores feel designed for business/work tasks.
Artoriuz@reddit
The money is in server CPUs. Both Intel and AMD design their CPU cores to meet server demands, and then reuse these designs to make consumer products.
This has always been the case for them, it's genuinely nothing new. And no, AMD didn't design 3D cache to improve gaming performance either, it was designed for and debuted with Milan-X.
goldcakes@reddit
You see this with Zen5 and even Blackwell.
Zen 5 is much better at AVX-512 and a few other things, but outside of the packaging stack change for X3D, it doesn't deliver much for general everyday or gaming use cases.
Blackwell’s most exciting feature over Ada is probably NVFP4 support for AI, and HEVC 4:2:2 decode for professional video work.
techraito@reddit
Yes, thank you for the slight correction. As much money as gamers have, data centers will always have more. True then, truer now than ever.
I was lumping into "gaming" all the computationally heavy things done for the sake of entertainment. I feel like it all goes back to either video games or movies at the end of the day, but I digress and didn't make that clear.
EPYC would eventually kill Xeon.
DYMAXIONman@reddit
It's not that they intentionally jacked up their gaming performance. To save a lot of money, they use some Intel-fabbed tiles as well as Intel packaging. This should allow them to push out chips cheaper than AMD can (in theory). However, the one downside is that Intel packaging did not have an answer for TSMC 3D V-Cache, which is how AMD has been able to dominate Intel in gaming performance the last couple of gens.
This will change with Nova Lake, as Intel now has bLLC, which should (on paper) outperform TSMC 3D V-Cache, allowing the same high levels of cache while also allowing higher core frequencies.
techraito@reddit
I also think technology was going in a different direction back then, toward pumping up core clocks and physical core counts. Intel was dominant and had the fastest single-core clocks on the market, which benefited gaming indirectly.
Like, had you asked me about CPUs 15 years ago, I would probably have said we'd still be seeing mostly 4 and 8 cores in 2026, but clocked at 8-9GHz instead.
AMD nailed it with the 3D cache, and if Intel can come up with something of similar performance, they can really pull themselves back here.
DYMAXIONman@reddit
Intel should have seen it coming, because they offered on-package L4 RAM with Broadwell and saw obvious performance benefits from doing so.
Strazdas1@reddit
The way they did it for Broadwell wouldn't work today. Latency would be too bad.
DYMAXIONman@reddit
I mean, yeah, but Apple has had unified memory for several years now. There is clearly a need for both a large cache and some smaller pool of DRAM to reduce the latency penalty of going to system RAM.
Strazdas1@reddit
Unified memory is a compromise for when you cannot do proper memory for both types of tasks. It's not a benefit that Apple did this. What Apple did very well is how they integrated the memory with the controllers: memory bandwidth scales very well for them. I expect CAMM2 to result in a similar effect if it ever actually happens.
Strazdas1@reddit
Intel had large cache before AMD did, but it wasn't a very successful product. They are now experimenting with something similar to 3D cache but not vertically stacked.
996forever@reddit
RDNA 3.5++++++++++++
techraito@reddit
Intel has been on 10nm nodes for a decade now.
nanonan@reddit
Competition won't go anywhere. Best in class doesn't mean best seller. Technically inferior products still have a multitude of ways to compete.
Seanspeed@reddit
But you generally want competition at every level. Apple for the most part is sitting alone at the top and in some ways driving further into isolation. lol
msolace@reddit
too bad you have to use apple software though. upgrades mean throwing it in the trash and buying a new one. mother earth is crying!!!!
dabocx@reddit
Their laptops have a pretty long lifetime and software support. If someone has an M1 laptop they are probably still pretty happy with it
Strazdas1@reddit
tell that to anyone using x86 software. Sorry, if you still want to support your x86 software when we launch the M1, you will be banned from the App Store.
mooocow@reddit
The 8GB of RAM on the base M1 MacBook Air is starting to get annoying right now, due to just how busy websites and apps are these days. Still very usable, and it has at least a good 3-4 years left, probably more.
An M3 MacBook Air with the "free" upgrade to 16GB will last forever.
ComprehensiveYak4399@reddit
idk why this is downvoted, it's an obvious fact that there would be less e-waste if apple documented apple silicon a lot more and provided linux drivers for it
ClassicPart@reddit
A second-hand MacBook Air will find its way to landfill much later than any similar Windows laptop.
But sure, waahhh, the software.
DanielKramer_@reddit
windows plastic craptops are actual e-waste. I don't understand why they exist; they're not even cheaper to own in the long run. they're only rational to buy if you plan to use one as a desktop and never open/close the lid
9Blu@reddit
When you upgrade your PC, do you throw out the old parts? When you need a new laptop, do you throw the old one out? That's more a you problem than an Apple (or any mfg, for that matter) problem.
jawisko@reddit
Their laptops last 7-8 years easily. I am still using an M1 and have no plans to change, as it still gets 6-7 hours of heavy-use battery. On my 10th-gen Intel i7 mini PC, though, I had to install Debian because Windows got so bad in the last 3 months. So Mac support is better than PC any day.
crshbndct@reddit
This is true, but in basically every sector they sell in, their only competition is their older products.
Usually monopoly leads to lack of innovation, lack of development, and stifling of competition through anticompetitive practices. But Apple has effectively flipped that, so that their monopoly on usable laptops is maintained by them getting 20-30% better every year.
Forsaken_Arm5698@reddit (OP)
Apple has a ~50% lead in ST perf, and if they continue their pace of execution, the gap will continue to widen.
vlakreeh@reddit
Others have barely caught up to M4, let alone M5. I don't see how anyone can catch up anytime soon or any reason why people like me (software engineer) should buy anything but Macs.
Hour_Firefighter_707@reddit
Correction: Others have barely caught up to M3. Only Qualcomm is in the same ballpark as M4 and they’re having to push very high clock speeds (and power) to get there
MassiveInteraction23@reddit
Not relevant for consumers right now, but alternative designs could change things. E.g. the E1 chip recently created: it's for embedded applications right now, but it's ultra-low power, in significant part because the whole architecture of the chip is changed, so it's not a "von Neumann" architecture with serial instructions.
Right now the M chips seem to be ahead in part because they take serial instructions, recompute dependency graphs of the actual code, and then are able to do efficient Out of Order Execution.
A chip that just… doesn't deal with the artificial serialization of code instructions, and doesn't have to pay the power and silicon overhead of recomputing those dependency graphs, could potentially jump ahead of the M-series chips.
I’m not aware of anything on the horizon for consumer general purpose computers. Just responding to the ‘how’ people can get ahead of apple silicon.
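For illustration, a toy sketch of the dependency recovery described above, using a hypothetical three-operand instruction format (nothing Apple-specific):

```python
# Toy model of what OoO hardware does: recover parallelism that the
# serial instruction order hides. Instructions are (dest, src1, src2).
instrs = [
    ("r1", "r0", "r0"),  # r1 = f(r0, r0)
    ("r2", "r0", "r0"),  # independent of r1, so it can issue alongside it
    ("r3", "r1", "r2"),  # needs both earlier results
    ("r4", "r3", "r0"),  # needs r3
]

ready_at = {"r0": 0}  # cycle at which each register value becomes available
for dest, a, b in instrs:
    issue = max(ready_at[a], ready_at[b])  # wait for source operands
    ready_at[dest] = issue + 1             # assume 1-cycle latency
    print(f"{dest} issues at cycle {issue}")
# r1 and r2 both issue at cycle 0: the dependency graph, not the program
# order, determines the schedule. Real cores burn power and silicon
# doing this tracking continuously at runtime.
```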
vlakreeh@reddit
The issue that always comes up with this is that by shifting that responsibility away from the CPU, you have to move it into the compiler. Itanium and Explicitly Parallel Instruction Computing come to mind, which were an absolute mess.
While some languages do have stricter semantics that may help in crafting binaries with instructions meant to be run in parallel, a lot of compilers just can't become that smart without being omniscient.
MassiveInteraction23@reddit
But we're already doing this on the binary instructions, in real time, on a smaller scale (a few hundred instructions).
So worst case scenario: you add a secondary compiler that basically just does what our chips are doing now for out-of-order execution. A naive implementation would create bubbles between dependency graphs, but those should integrate smoothly into one another.
It would mean that performant and non-performant languages would likely see a further divide in performance, but it shouldn't result in a degradation (modulo the trade-offs of silicon design, of course, which would be an empirical question).
Brilliant-Weekend-68@reddit
You should buy a Chromebook to make sure you write software that is efficient.
MassiveInteraction23@reddit
Software engineers doing that tend to use compiled languages. We want powerful computers so we can compile faster.
vlakreeh@reddit
Devs write inefficient software because they're optimizing for profit not because they're on powerful machines. Maximizing profits almost always leads you to writing working but inefficient software for the sake of building quickly.
DerpSenpai@reddit
It's not about maximizing profits, the majority of devs just suck. Simple as that. I just saw a piece of code at my client's where a pod crashes if it gets a 500 response from Azure. So anytime Azure, or the connection, is down, the pod simply doesn't work and doesn't tell you why. They need to pay a guy an extra $500 a week to be on call in case anything of the sort happens. Now multiply that pod by 1000 and you have a normal company
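For what it's worth, the fix being described is a few lines. A minimal sketch (the endpoint is hypothetical; the point is backing off on 5xx responses instead of crashing the pod):

```python
import time
import requests

def fetch_with_backoff(url: str, attempts: int = 5) -> requests.Response:
    """Retry transient 5xx errors with exponential backoff instead of dying."""
    for attempt in range(attempts):
        resp = requests.get(url, timeout=10)
        if resp.status_code < 500:
            return resp
        wait = 2 ** attempt
        print(f"upstream returned {resp.status_code}, retrying in {wait}s")
        time.sleep(wait)
    raise RuntimeError(f"{url} still failing after {attempts} attempts")

# hypothetical usage:
# resp = fetch_with_backoff("https://status.example-azure-service.net/health")
```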
vlakreeh@reddit
There can be both shitty devs and economic forces prioritizing money over quality software. I work at a large public cloud building large distributed systems that handle a huge number of requests per second, and most of what I do ends up being in TypeScript because it's fast enough and it's cheaper. Don't get me wrong, I've written a lot of C++, Rust, and Go, which perform substantially better and are dramatically more memory- and CPU-efficient, but it takes anyone longer to build a performant and efficient system than one that is good enough.
For most of the industry it's cheaper to have engineers working in higher-level languages doing low-hanging-fruit optimizations, building something that works quickly and can be worked on by engineers from a much bigger talent pool. We could save hundreds of thousands, maybe even millions of dollars a year, but in the time we could spend optimizing we could also build more things to generate more revenue.
FollowingFeisty5321@reddit
Yeah, but they suck because they have very little agency over their work and priorities: they can't really do anything out of scope of their current ticket or deviate from their assigned tickets, and can't even fix that bug unless someone decides to move it up the backlog into a sprint and puts it at the top of their list.
It's like blaming the sweatshop worker for the shitty quality of the shoes Nike makes.
VampiroMedicado@reddit
It's profits that make websites slower.
In fact, the best example I have: a client wanted a fancy way to see some information, with interactions similar to Figma or apps like that (meaning you can move around inside the panel).
We found 3 ways to do that:
The battle-tested default library, which was paid for enterprise usage.
Some random dude's free library, which hadn't been updated in 2 years but did most things we wanted; we'd have to hack together the rest.
A custom-made solution for the use case, to ensure proper performance: 1 month for an MVP, 3 months for production-ready after extensive testing (maybe less).
Guess which one won, and takes minutes to load 100 nodes.
windozeFanboi@reddit
Well, what more can they deliver for M6?
Apple for the last 2 gens has been riding the TSMC frequency bandwagon...
What was peak frequency for M3? 3.2GHz? It went to 3.7GHz and now 4.4GHz.
I'm pretty sure they're hitting the limit of efficient boost frequency. IPC itself hasn't moved much on the performance cores for a while... It's the efficiency cores that are actually insane.
The M5 GPU upgrade is important though. Its redesign adding tensor acceleration was important.
Edenz_@reddit
M3 was already at 4.1; M1 was 3.2. Clock speed has gone up 12% since M3 while ST has gone up ~40%, so there are significant IPC increases in the last few gens.
Ok_Spirit9482@reddit
that's mainly due to SME; if you look at Geekbench 5 (without SME), it matches up fairly well
VastTension6022@reddit
What do you mean? M3 to M5 gains 32% single core in GB5.
Ok_Spirit9482@reddit
M2: 1899 (100%), 3.5GHz (100%): https://browser.geekbench.com/v5/cpu/15890783
M3: 2256 (116%), 4.05GHz (116%): https://browser.geekbench.com/v5/cpu/compare/22197181?baseline=17087269
M4: 2628 (136%), 4.4GHz (126%): Mac mini M4, 2024 - Geekbench
M5: 2867 (148%), 4.58GHz (131%): Mac mini M4, 2024 - Geekbench
you are right: M2 and M3 scale by frequency, and M4 and M5 scale by frequency, in their Geekbench 5 scores, but M2/M3 and M4/M5 have a ~10% delta in IPC. Looks like M2's N4 and M3's N3B are very similar, and N3E has a slight jump in efficiency at higher frequencies.
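A quick way to sanity-check that, using the scores and clocks quoted above: score per GHz as a rough IPC proxy.

```python
# Rough IPC proxy from the GB5 scores and clocks quoted above: score per GHz.
chips = {"M2": (1899, 3.50), "M3": (2256, 4.05),
         "M4": (2628, 4.40), "M5": (2867, 4.58)}

for name, (score, ghz) in chips.items():
    print(f"{name}: {score / ghz:.0f} points/GHz")
# M2 ~543, M3 ~557, M4 ~597, M5 ~626: M2/M3 cluster together, M4/M5
# cluster together, and the two clusters sit roughly 10% apart.
```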
Kryohi@reddit
As long as TSMC pumps out new well-working nodes the frequency bandwagon is available.
N2P will let everyone increase fmax without necessarily increasing power too much
lorner96@reddit
That’s if they keep giving Apple the good deals and comfy allocations they’ve grown accustomed to
YourVelourFog@reddit
They're not giving Apple deals any longer. AI has been dominating so much that most of TSMC's capacity has been bought out by Nvidia and AMD, whereas Apple used to be the only player in town.
lorner96@reddit
That's what I was getting at yeah
996forever@reddit
Apple can afford it. Their high volume dies are tiny, anyway, unlike Nvidia/AMD.
Nkrth@reddit
and less complex packaging.
krystof24@reddit
And demand is fairly predictable IMO which is also a huge advantage
Dangerman1337@reddit
And probably N2X for the M7.
pluckyvirus@reddit
I can see 4056 MHz normally on my M3
DoctorKhitpit@reddit
I think it's 10-15% every generation in single-thread.
From memory, for Cinebench R23 single-thread: M1 = 115, M4 = 180. That's 60% overall, and 12% every generation.
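Checking that compounding math against the quoted from-memory scores:

```python
# Per-generation growth implied by the scores quoted above.
m1, m4 = 115, 180
print(f"total gain: {m4 / m1 - 1:.0%}")  # ~57%, close to the quoted 60%
print(f"per generation (4 steps): {(m4 / m1) ** (1 / 4) - 1:.0%}")  # ~12%
# counting M1 -> M4 as 3 generational steps instead would give ~16%/gen
```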
pianobench007@reddit
So a new Apple CPU can render a single scene with a 20% to 30% incremental improvement with each purchase of a completely new Apple device. Is that cost-efficient for the end user versus an AMD/Intel PC equipped with a GPU for single-scene rendering?
In other words, will a CPU alone justify purchasing an entirely new computer (CPU+HDD+mobo+GPU+PSU), versus a PC user who only needs to upgrade their GPU and now has over 14x the CPU's performance (in Cinebench R26)?
I am speaking, of course, about Cinebench rendering.
In the PC gaming space, they already have CPUs that provided 300 fps and increased to 600 fps without DLSS or frame gen (Counter-Strike numbers).
Nvidia now has frame gen with a 6x boost from the GPU alone; what is so scary about a CPU that only does a 1.3x improvement?
6x is a 500% improvement.
For reference, an M3 Ultra 32T has a CB R26 score of ~12,000 points. An NVIDIA RTX 5090 scores ~173,000 points rendering a scene. The RTX 5090 now renders a scene close enough to be near CGI, maybe?
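The gap those numbers describe, spelled out:

```python
import math

# Scores as quoted in the comment above.
m3_ultra_cpu, rtx_5090 = 12_000, 173_000
ratio = rtx_5090 / m3_ultra_cpu
print(f"RTX 5090 vs M3 Ultra CPU render: {ratio:.1f}x")  # ~14.4x
# generations of 1.3x/gen CPU gains needed to close a gap that size:
print(f"generations needed: {math.log(ratio) / math.log(1.3):.0f}")  # ~10
```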
Ar0ndight@reddit
This, with the MacBook redesign coming later this year/early next year, will make for some insane hardware if nothing has changed.
Assuming Apple doesn't fuck up the redesign touchbar-style, of course
Dontdoitagain69@reddit
I write custom benchmarks: get an M3 and save money. If you trust Geekbench or other closed-source bench crap, it doesn't matter what you use; from M1 to M100 they won't differ much
MassiveInteraction23@reddit
M5 GPU is a significant step already.
M4s also have a really exciting ability to spit out instruction-by-instruction readouts of programs you run, which can be exciting if you're a programmer.
sinholueiro@reddit
M4 got AV1 decode, which can be important for streaming purposes.
F9-0021@reddit
Yeah, my M1 Air is still plenty powerful. Not so much for multithreaded rendering, but for single-threaded tasks it's still very competitive with x86.
mrkstu@reddit
I recently consolidated my M1 Air and M4 Mini into a single M3 Air, but bumped to 512GB/24GB. Great sweet spot.
1000yroldenglishking@reddit
Why M3 and not M2? M3 was mostly a die shrink without much increase in performance
Seanspeed@reddit
My mom is still using an M1 Max and it remains a very, very impressive machine.
yuiop300@reddit
2021 M1 Pro binned for me. It’s still a beast.
jaguarone@reddit
I second this (at least the concept).
Where I work we have M1-M4 Pros and Maxes (almost all the combinations).
Even the M1 Max is standing up extremely well for its age
mr_tolkien@reddit
I'm still rocking my day-1 8GB M1 MacBook Air for 99% of my tasks. Works perfectly for running a terminal + neovim and compiling some Rust code.
Snoo26183@reddit
Could you elaborate a bit on what algorithms you've used? Things like hashes and Clang kLOC show a significant gain between the generations too, but it's Object Detection / SME that skews the points a bit. Looks deceptive.
HayatoKongo@reddit
Unless you're on the bleeding edge, I'd agree with this. Even then, money is likely better spent on an M3 Ultra or Max vs a base M6 if you actually need a beefy machine for compute.
dumbdarkcat@reddit
Well, at least for the MacBooks they're introducing a new chassis for potentially better thermal performance, and an OLED touch screen to replace the mini-LED, doing away with the notch. So it's not just about the SoC.
Dontdoitagain69@reddit
Actually, for those who need an extremely heavy benchmark (I mean, one that takes your CPU to space), I can send source code you can compare between chips, with technical papers and the ability to change links.
D_gate@reddit
Still with an 8GB base model, I bet.
forgottenendeavours@reddit
They changed to a 16GB base model a while ago.
Strazdas1@reddit
By a while you mean in 2023.
Checho-73@reddit
It will be funny if they go back to 8GB due to AI eating up all the memory manufacturing capacity.
-Purrfection-@reddit
They got rid of those
FollowingFeisty5321@reddit
They still actually sell one 8GB MacBook Air model through Walmart
Famous-Ebb3041@reddit
I'm still using my 2020 M1 Mac Mini (16/512 configuration), because I'm still trying to find ways to use up all its performance. Mind-blowing how fast and powerful the M-series Macs are... when my friend Dennis Travis (RIP) was telling me how the A-series chips were blowing away some higher-end Intels, I was like, "No way, man. No way a cellphone chip can beat a desktop processor." And then the M1 was released and I was completely floored. He was RIGHT! I still can't wrap my brain around HOW, but it's happening!
someshooter@reddit
I'm still pretty happy with M2 :)
CalmSpinach2140@reddit
Eh, it won't launch till Q4 2026
WorriedGiraffe2793@reddit
So they won't be releasing M5 Pro machines and go straight to the M6?
dabocx@reddit
The rumor mill says we will be getting the M5 Pro and Max sometime very soon, and the regular M6 later this year in the fall.
DeuzExMachina_@reddit
So either (1) M6 is just a slight improvement on M5 to make sure Apple is better than PTL in all categories (maybe a couple more cores), or (2) they were sitting on another big architecture leap but were waiting for Intel/AMD to catch up. Now that Intel did(ish), they're ready to embarrass the competition again
jocnews@reddit
M4 came so quickly after M3 because M3 was late, being the first chip on a new node that was itself late. Was M5 late? No; it's M6 that should be on a new node, so dunno.
Brewskiz@reddit
My M3 Max still kicks butt, these new ones must be crazy good.
ItsTheSlime@reddit
I kinda feel like Apple is diluting the value of their M chips by releasing new ones so often. The difference from one to the next is rather minimal, and I can't see how anyone could be super excited about new ones coming out every 6 months.
noiserr@reddit
too bad macOS sucks for Docker and file system performance; my Strix Halo runs circles around my M3 Ultra for like half the price.
AoeDreaMEr@reddit
What people don't realize is that each additional M-series laptop sold is a potential win for the Apple ecosystem, unlike phones.
Kotschcus_Domesticus@reddit
any reason to get it these days?
996forever@reddit
The only fanless laptop on the market that does not immediately start lagging once you unplug it.