Nvidia will reportedly showcase N1 SoC for laptops at Computex 2026
Posted by imaginary_num6er@reddit | hardware | View on Reddit | 145 comments
0x75727375706572@reddit
They need to showcase a new shield already
randomkidlol@reddit
maybe once they've built up enough inventory of rejected switch 2 silicon. only reason why the original shield existed was to burn through warehouses of unsold tegra x1s when nvidia burned all bridges with phone and tablet manufacturers. they got real lucky nintendo decided to buy up all that leftover stock.
Sorry_Soup_6558@reddit
Yeah, you don't need a 3 TFLOP gaming GPU in an Android box. They could just use 6 of the 12 SMs and 4 of the 8 cores and it would be more than enough.
DiscostewSM@reddit
The contract for the Tegra X1 between Nintendo and Nvidia was made before the chip released.
randomkidlol@reddit
the tegra x1 was officially released in jan 2015 and the original switch was ~2 years from launch. nintendo was rumored to be, and most likely was, still evaluating options for which manufacturer would make the SoC for their new console in 2015. their primary concern that year was attempting to bolster wiiu sales, which would have been in its 3rd year of production.
Sorry_Soup_6558@reddit
Nope, 2014. We have Nvidia and Nintendo docs confirming it.
Exist50@reddit
No? It was widely reported that Nvidia's tegra division was told to find a buyer at any price.
DiscostewSM@reddit
Nvidia approached Nintendo in 2014, showing them demos and such, and even made an API specifically for Nintendo to use. This has been known in the leaks. They needed a "console win" to get back into that sector.
smarlitos_@reddit
Yep
Kichigai@reddit
I think Nvidia is more likely to kill it than revive it. The device came out when there was excitement about the Ouya, and folks thought Android gaming was ready for the big screen. Then that market segment withered away, and all Nvidia had was Google TV going for it, which people aren't all that excited about, outside of people looking for a good turn-key HTPC.
It's a good product, but I think Nvidia is making bigger bucks elsewhere.
isekai_cheese@reddit
nvidia been cookin laptops since over 20 years ago. doubt they'll ever learn.
alabasterskim@reddit
I've got enough "fell for it again" awards, thanks. I'll believe it when I see it.
AHrubik@reddit
What OS is it going to be running? What software will be compatible with this platform? Hardware is only a portion of the equation.
Exist50@reddit
Pretty obvious it will be WoA. And anything compatible with that today should be the expected baseline. Maybe Nvidia will announce some new native programs at the reveal.
KodaKR-64@reddit
Is a bifurcated OS strategy a good idea long-term?
ARM Windows still has a very low market share worldwide. What's the incentive for developers to port their software to ARM and get it running well?
Especially now with Panther Lake having comparable battery life to ARM.
DerpSenpai@reddit
It's not a bifurcated OS. It's the same OS, just recompiled for ARM. It's the same for Android or Linux
KodaKR-64@reddit
It's bifurcated when tons of Windows software is still x86 only, and doesn't run natively on ARM (especially games).
What incentive is there for developers to spend the time and money to port their software when it's a tiny fraction of the market?
Apple forced customers and developers to switch by moving the entire product line to ARM.
Offering PCs with a choice of Intel/AMD x86 or Qualcomm/Nvidia ARM is only going to confuse most people, I think.
99% of people have no idea the difference between those 4 CPU options, and it's certainly not explained at all on Lenovo's website while you shop for laptops.
DerpSenpai@reddit
The OS is native, 1st party software is native. 3rd party might not be native but can be emulated at 70% of the speed. It's a non-issue.
90%+ of software works on ARM, and the Qualcomm X2 is fast enough that in emulation it will be about as fast as Zen 5 (it's 30% faster in single core in native workloads).
The Adreno drivers are realistically a bigger challenge for Qualcomm than emulation at this point in time. Only EA and Riot have not ported their anti-cheats, and as far as we know EA is working on it, Riot too.
yreun@reddit
The performance loss with Prism emulation is actually closer to half according to Geekbench. You can look at the score of each subtest to see where it excels and where it falls short:
https://www.reddit.com/r/snapdragon/comments/1p3terr/comment/nq7444w/
The Snapdragon X2 and Nvidia N1/X might have a lower performance loss due to having SVE (but at only 128 bits wide, so probably not?), which might make it retain performance closer to 60% instead.
With FEX emulation on the X Elite it's a tiny bit better than Prism from what I've seen, but still about the same:
https://browser.geekbench.com/v6/cpu/compare/17218914?baseline=17218803
smarlitos_@reddit
Add Epic Games to that list of no anti-cheat support for ARM
DerpSenpai@reddit
They do
Epic Online Services Anti-Cheat and Fortnite are now available on Windows on Snapdragon - Epic Online Services
smarlitos_@reddit
Oh nice
Now gimme Fortnite on Mac, would be great considering Apple silicon’s great integrated graphics and Fortnite being more CPU intensive.
total_zoidberg@reddit
1.3 x 0.7 ~= 0.91
So the X2 should be about 10% slower than a Zen5, by the numbers you gave.
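A minimal sketch of that estimate, plugging in the retention figures quoted in this thread (70% from the comment above, closer to 50-60% per the Geekbench comparison); these are the commenters' rough numbers, not measurements:

```python
# Rough estimate of emulated x86 performance on the X2 relative to native Zen 5,
# using the figures quoted above (commenters' rough numbers, not measurements).
native_advantage = 1.30  # X2 ~30% faster than Zen 5 in native single core

for retention in (0.7, 0.6, 0.5):  # fraction of native speed kept under emulation
    relative = native_advantage * retention
    print(f"retention {retention:.0%}: ~{relative:.0%} of native Zen 5 speed")
# retention 70%: ~91%, retention 60%: ~78%, retention 50%: ~65%
```

So the conclusion swings from "roughly 10% slower" to "roughly a third slower" depending on which emulation-retention figure turns out to be right.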
DerpSenpai@reddit
I just eyeballed it. Compared to laptop Zen 5, just checking now, the most expensive SKU is actually 37% faster; 30% for the normal-boost ones (4.7 GHz).
Either way, the point is that the software works and it's fast enough to not feel slow
total_zoidberg@reddit
Yeah, 10% is small enough that it shouldn't be noticeable. But maybe someone finds something that is 20, 30 or even 50% slower (because the emulation gets into a rough edge case) and then we'll definitely hear about it.
work-school-account@reddit
One thing that causes confusion around this is that Microsoft doesn't force apps on its own Microsoft Store to support WoA. People might understand a program downloaded from the internet via a web browser or a game installed via Steam not working, but an app that's on Microsoft's first-party app store is expected to run on WoA.
Forsaken_Arm5698@reddit
I expect games will run flawlessly on N1X, since Nvidia drivers are top notch (other than games that don't have ARM native anti cheats).
battler624@reddit
Nvidia drivers haven't been even a notch for the past 2 years.
DerpSenpai@reddit
100%. However, CPU performance is "just" Zen 5; they are using cores announced in 2024.
Randromeda2172@reddit
That's not bifurcation. Is MacOS bifurcated because it runs on Intel and now runs on ARM? Windows' compatibility layer for ARM is not as good as Rosetta 2, but it's not terrible either. Most apps used by consumers will work just fine. Gamers may see performance impacts but luckily they fall into the subset of people who can tell the difference between hardware.
More choice is good for consumers, even if they're too stupid to realize it. Windows on ARM needs support if Windows laptops can hope to compete with MacBooks, and Qualcomm just doesn't have the chops to get it done, while Nvidia does
Educational-Web31@reddit
source?
frogchris@reddit
Well, since we are investing so much into AI, the coding AI should easily be able to port x86 to ARM. That should be a trivial task for AI.
Unless AI can't do what it promised yet and we are over-investing too soon.
DerpSenpai@reddit
the issue is not coding the port. Most apps should be just a different CI/CD target. The issue usually is testing + dependencies.
frogchris@reddit
You don't think I know that lol. I literally designed the silicon for these things.
If AI is so smart, why doesn't it just do the testing itself? Why doesn't it just fix the dependency issues itself?
What I'm saying is that we are investing so much into AI. Shouldn't AI be able to do a simple port from x86 to ARM, or at least speed it up and lower the cost of supporting a different architecture? If AI cannot make porting to a different hardware architecture easier, then it's simply useless for high-level coding.
DerpSenpai@reddit
I was not being pedantic. AI is not so smart. You asked, I answered. AI is good at making a base to work with, but engineers need to do the bulk of the work/actual thinking.
AI still has a purpose. E.g. how many office hours are wasted formatting PowerPoints and Excel sheets? How many are used writing documentation and tests? AI is pretty good at all of those things.
frogchris@reddit
My question wasn't why coding agents can't do it. I already know why.
The question was how much money and investment is needed to make it happen. Because you have Dario saying AGI is so close. If AGI is so close, why can't Anthropic's coding agent do a basic port? Porting software architecture should be 100x easier than solving cancer.
The rules of the software, the architecture, and the hardware are already defined. You can hire 1000 Indians and get some port working. But with all the training data, information, and structured context, the coding agent still can't do it? Interesting.
Educational-Web31@reddit
Why? Is Microsoft's 32 bit emulation bad? or is it a hardware issue?
DerpSenpai@reddit
It's not just Microsoft, I think it's a bit of both. It simply isn't what they optimised for.
Forsaken_Arm5698@reddit
The emulation layer has gotten pretty good, and most non-native applications run through it without a hiccup.
https://www.worksonwoa.com/en/
Speaking of which, a surprising amount of apps are ARM native already.
battler624@reddit
Wish that website showed immediately whether an app is merely compatible or actually native. Instead it shows everything as compatible, even if it's native.
NeroClaudius199907@reddit
What can Microsoft realistically do? If they want to support two platforms, they should be improving Prism and monetarily incentivizing more devs to port. But all hands are on AI right now.
Educational-Web31@reddit
They should hire more devs to improve their graphics drivers first.
pac_cresco@reddit
Microsoft has had MAUI for a loooooong time, but the amount of legacy bloat that needs to be cleaned up when an app wasn't built with that in mind is amazing.
NeroClaudius199907@reddit
Somebody should tell Microsoft if they do that copilot users will increase
RephRayne@reddit
Apple is in a much better position to dictate terms to its users than Microsoft is.
Most Apple users are locked in by choice rather than need, Microsoft is one big bad decision away from seeing too many of its customers move to a competitor. Removing legacy support is definitely a big bad decision for those customers who require it.
Educational-Web31@reddit
The amount of FUD surrounding WoA is insane.
MegahertzMan@reddit
But with Panther Lake having roughly the same battery life as ARM now, why should people pick the Qualcomm laptops?
Plank_With_A_Nail_In@reddit
Why does this matter?
KodaKR-64@reddit
Qualcomm's fastest CPU nearly matches Apple's M4 Max in benchmarks. Performance isn't really an issue, software support is.
Forsaken_Arm5698@reddit
Yep, and X2EE is faster than Intel or AMD's best laptop CPU.
Educational-Web31@reddit
For casual users, software support is a non-issue. 90% of the apps they're going to be using are native already: browsers, office, social media, streaming, etc.
However, for pro-grade users with their specialist applications, much could be improved still.
Forsaken_Arm5698@reddit
Neither of which are true today.
zoltan99@reddit
software for it will be mature in 2030 when the chips are years old, right?
Get it right day one and win, Nvidia. Apple will get it right day one.
ElectronicStretch277@reddit
? Nvidia has largely been known for excellent support from day one. The only time it's not been the case has been for RT which they largely fixed by the 3000 series.
Baalii@reddit
Their drivers have been very hit and miss since the 5000 series release.
ElectronicStretch277@reddit
Agreed, but that's been the exception and not the rule from what I have seen.
Seanspeed@reddit
I mean, it's becoming a pattern, since the problems keep persisting with new drivers.
Combined with the extremely lackluster improvements from Lovelace to Blackwell in terms of architecture, it really points strongly to Nvidia dialing back the amount of resources they're putting towards graphics/gaming, be it hardware or drivers or whatever.
ElectronicStretch277@reddit
I know the architecture seems lackluster, but there have been some good improvements there. Yes, power and performance increased in a 1:1 ratio this gen and that's disappointing. But we know that architectural improvements took place, because that's not what happens if you just increase the power on a previous gen (as seen with overclocking, where you usually need more power for smaller gains). Also, I think architectural improvements are a bit overrated. The majority of improvements per gen (unless there's some major weakness in a prior architecture) come from process nodes.
I agree they've deprioritized gamers and regular workers. But a company like Nvidia doesn't need to give much attention (relative to their size) to keep pushing innovation and improvements in that area.
Seanspeed@reddit
No there isn't.
You literally cant even name any.
It's the worst new architecture by Nvidia since Fermi. Only saved because Lovelace was so good, that even just being an insignificant evolution of that means it's still decent enough.
It's amazing how much people glaze Nvidia's shortcomings here.
hackenclaw@reddit
I don't see them changing their style in software support.
jtoma5@reddit
Not among linux users?
ElectronicStretch277@reddit
Linux is like 5% of the community. Admittedly, Nvidia's closed-source approach is to blame for their issues on that front, and ideally they'd embrace open source like AMD, but overall even that has gotten better. I have seen constant updates from people saying Nvidia on Linux is becoming a smoother experience as time goes by.
Plank_With_A_Nail_In@reddit
Linux is like 80% of the AI community and Nvidia's drivers work just fine for that.
Kryohi@reddit
Lol every non-nvidia GPU has worked flawlessly on Linux since forever, without compatibility problems with Wayland, specific DEs, or specific monitor features, but somehow it's distro maintainers that must be causing problems.
To Nvidia's credit, since around last year they finally realized they were losing users by not playing nice with Linux, and started taking action. The latest (now stable) drivers are very very nice.
randomkidlol@reddit
they were losing enterprise money, not hobbyist linux users. all the big money customers were not moving their workloads off linux, and if nvidia drivers didn't work right, they would choose another vendor.
the drivers getting better for hobbyist linux users was a nice side effect of that arrangement.
zoltan99@reddit
I was complaining about nvidia software in 2010 so idk man
I do not have the same experience at all in my personal or professional lives
Been a part of a lot of engineering discussions around how to handle broken new shit from them
KodaKR-64@reddit
Lol, Apple stopped using Nvidia in Macs years ago due to a variety of hardware issues across several different chips.
BSAENP@reddit
From day one it was obvious they would phase out x86 on Macs ASAP, while on Windows (and Linux) x86 will continue to be a thing, so developers have far less incentive to fix their shit optimization. It's not really a comparable situation here.
KodaKR-64@reddit
I blame OEMs.
In many ways, Windows PCs and Android are still like cell phones were pre-iPhone.
The OEM still has the upper hand in many ways.
Lenovo is selling laptops with Intel, AMD, Qualcomm, and probably soon these ARM Nvidia chips too.
That's 4 different chip choices for customers.
Do you think 99% of customers understand the difference between these choices?
Apple has been successful with it because they moved fully to ARM, and didn't give customers or developers a choice.
A lot of these OEMs have lucrative marketing deals with Intel or AMD, and I don't see Lenovo and most of these others heavily promoting the Qualcomm laptops. I had to hunt around for several minutes to even find the Snapdragon laptops on their website. There's only one ThinkPad that uses ARM.
Windows on ARM market share will never increase if the OEMs don't promote it, and customers remain confused with 4 different CPU choices.
Slava_Tr@reddit
Kinda a shame the chip is coming out pretty late. At launch it’ll be around the level of a three-year-old M3 Max, but with the CUDA ecosystem behind it. Still, it should compete quite well with the Core Ultra X9 388H or the Core Ultra 9 385H paired with something like a 5050–5070(TI)
For Windows on ARM, this feels like a real hope for gaming to move forward. But the low memory bandwidth and the small number of native ARM games might hold it back, so it may not be all that great in practice
Really curious about the battery life, whether it’ll be a letdown like Radeon AI Max, or if it can match or even beat Panther Lake
The mobile 5070 is basically the same chip as the desktop 5060 Ti, just with a heavily limited TDP. Meanwhile, the N1 uses the GPU from a desktop 5070, but built on a better TSMC N3 process instead of N4. So it might have a real chance to make up for the lower TDP. The process node difference is pretty significant; if this were a typical mobile GPU, both would probably end up performing at a similar level. But who knows how it'll play out in an ARM SoC with limited memory bandwidth.
From-UoM@reddit
Desktop 5070 uses 4N, not N4. 4N is a custom TSMC N5.
4NP, used in data center Blackwell, is TSMC N4 or TSMC N4P.
Exist50@reddit
Same shit, different name.
From-UoM@reddit
Sort of, I guess?
TSMC 4N is a custom 5nm node.
TSMC N4 and N4P are 4nm nodes (4nm is optimized 5nm).
Then again, "5nm" and "4nm" are themselves marketing names and don't actually represent transistor size.
Point is, TSMC 4N is still better than base TSMC N5 despite both being based on 5nm.
Geddagod@reddit
Nvidia claims TSMC 4N is a custom 4nm node.
ResponsibleJudge3172@reddit
4NP is a custom 4nm node. Only Blackwell datacenter gets 4NP. Client chips stayed on 4N
Geddagod@reddit
Nvidia outright said products on 4N was "Built with a custom TSMC 4 nanometer process".
Not sure what the source for Nvidia 4N being a "5nm" node and not a "4nm" node is tbh. For what little the difference that actually makes.
ResponsibleJudge3172@reddit
Slightly different characteristics with a different name.
Slava_Tr@reddit
Hahaha, you corrected my mistake and then made the same one yourselves. It would be correct to use N3 and N4. As for N4, it isn’t a custom process by Nvidia. Many others have long been making, or are still making, their chips on this node. However, N4P is a process purely tailored to Nvidia’s needs.
Roughly speaking, doubling the power gives around +25% performance on the same chip.
This is especially obvious on mobile chips, and it applies to others too. A 120W GPU is about 25% faster than a 60W one. This follows from power consumption growing much faster than linearly with clock speed, while performance scales roughly linearly with it.
N3 saves about 30% power at the same performance compared to N5. To reach performance similar to the desktop 5070, you'd need roughly 175W TDP, whereas the N1X has a 140W TDP. To get 20% less performance, you'd need around 90W TDP, but here we have 140W TDP. Performance will be very close.
These two factors would work perfectly if this were just a mobile GPU, for example, the mobile 5090 on a 5080 chip with 2x lower TDP behaves similarly. But with an ARM SoC and limited memory bandwidth, there are additional factors that could negatively impact performance.
GDDR7 on the mobile 5050 compensates for the reduced TDP, making it even about 1% faster than the desktop 5050, which uses the same chip but with GDDR6.
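A minimal sketch of that back-of-the-envelope math, taking the comment's rules of thumb at face value and assuming a ~250 W desktop 5070 board power (the exact wattage is an assumption here):

```python
import math

# Rules of thumb from the comment above (assumptions, not measured data):
# - doubling power buys roughly +25% performance on the same chip
# - TSMC N3 saves about 30% power at the same performance vs N5-class silicon
PERF_PER_POWER_DOUBLING = 1.25
N3_POWER_SAVING = 0.30

k = math.log2(PERF_PER_POWER_DOUBLING)    # perf ~ power**k, k ~= 0.32

desktop_5070_tdp = 250                    # W, approximate desktop board power

iso_perf_tdp_on_n3 = desktop_5070_tdp * (1 - N3_POWER_SAVING)   # ~175 W
tdp_for_80pct_perf = iso_perf_tdp_on_n3 * 0.8 ** (1 / k)        # ~90 W
rel_perf_at_140w = (140 / iso_perf_tdp_on_n3) ** k               # ~93%

print(f"iso-performance TDP on N3: ~{iso_perf_tdp_on_n3:.0f} W")
print(f"TDP for 80% of desktop performance: ~{tdp_for_80pct_perf:.0f} W")
print(f"estimated relative performance at 140 W: ~{rel_perf_at_140w:.0%}")
```

That reproduces the ~175 W and ~90 W figures in the comment and puts a 140 W N1X in the low 90s percent of desktop 5070 throughput, before accounting for the memory bandwidth caveat.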
From-UoM@reddit
N4P is the base tsmc one.
4NP is Nvidia's one.
https://www.nvidia.com/en-us/data-center/technologies/blackwell-architecture/
Slava_Tr@reddit
Yes, 4NP. Here I go, stepping on the same rake again; thanks for not letting that chain continue. TSMC 4N matches the characteristics of TSMC N4, kind of a custom variation of it, and it also closely matches the transistor density of TSMC N4.
Which is why even a year before the release of Nvidia’s 50 series, I was expecting a corresponding performance boost, while people were skeptical about it. Later, they were disappointed to see the gain amounted to only +15% or +30% in shader performance, depending on the GPU. However, the architecture has improved significantly, and we’ll only see its full implementation in games over time
From-UoM@reddit
TSMC 4N is a custom 5nm. You can even open GPU-Z with a 40 or 50 series card and see it's listed as a 5nm process.
Its custom nature allows it to get close to the density of TSMC N4.
If you open up something like a 9070 XT, you will see a 4nm process stated in GPU-Z.
Geddagod@reddit
GPUz isn't going out and measuring the size of the transistors or anything like that.
Slava_Tr@reddit
N4 is also an improved version of N5. N4 and 4N have the same characteristics. The full N4 also includes the characteristics of 4N, but not the other way around: there are dozens of transistor types, parameters, and features included in N4, whereas 4N is specifically focused on Nvidia's needs from that set.
From-UoM@reddit
How do you know they have the same characteristics when the density of 4N was never revealed?
You can't use Ada/Blackwell dies to measure it either, as the L2 cache and memory controllers will obfuscate the density.
Slava_Tr@reddit
Yes, the L2 cache affects density somewhat, but the result remains within the process node's specifications. We can use Nvidia chips, their die size, and transistor count. The transistor density of N4 and 4N matches that of the N5 process. They both launched around the same time. All three belong to the same family. Depending on the transistor configuration, density can vary significantly, but it will remain within the limits of the process.
Nvidia uses the same set of transistors in both mobile GPUs and server Blackwell chips, as the transistor density matches, differing only slightly due to variations in cache size.
While other companies make their mobile chips denser than their desktop ones, since mobile chips run at lower frequencies, TSMC's non-custom node can scale differently depending on the requirements. Nvidia's node is custom-tailored for specific needs, so all chips essentially have the same density.
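For reference, the kind of density check being described is easy to reproduce; a quick sketch using Nvidia's published AD102 figures (the numbers and the quoted node peak are approximate, and a whole-die average mixes logic, SRAM and I/O):

```python
# Average transistor density from published die figures (AD102, the RTX 4090-class
# die on TSMC 4N); approximate values.
transistors_billion = 76.3
die_area_mm2 = 608.5

density_mtr_per_mm2 = transistors_billion * 1000 / die_area_mm2
print(f"AD102 average density: ~{density_mtr_per_mm2:.0f} MTr/mm^2")
# ~125 MTr/mm^2 -- well below the peak logic-only figures TSMC markets for
# N5/N4-class nodes, which is why whole-die averages only loosely pin the node.
```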
MegahertzMan@reddit
If Panther Lake already has similar battery life to ARM, I'm wondering what the purpose of Windows on ARM is now.
DerpSenpai@reddit
It's Nvidia's 1st chip on Windows, and because it's their first one, it's releasing a year late. The next gen is most likely already in the works for release in 2027 with C2 Ultra cores and a fat 40% single-core uplift.
Forsaken_Arm5698@reddit
What are the odds, now that Nvidia has its own custom ARM cores?
DerpSenpai@reddit
Their custom cores are for enterprise, plus the Nvidia N1 CPU is made by MediaTek.
Exist50@reddit
If their custom cores are worth using for server, they should also be worth using for client.
DerpSenpai@reddit
Yes, but they need to develop E cores too, and ARM has been late to adapt their newer designs for servers. The new ARM AGI is based on the X4 only.
Educational-Web31@reddit
X Elite was fine without E cores. X2 series doesn't have "E cores" either.
DerpSenpai@reddit
It does; every X2 SKU has some number of E cores, up to 6. Only the lowest SKU doesn't have them.
Apple showed the way with the M5 Pro and Max IMO.
Next Gen ARM laptops most likely will use that philosophy with 1 small C2 Ultra cluster and the rest being C2 Premium and Pro
Artoriuz@reddit
But isn't replicating the server ecosystem the whole point? Personally I'd be surprised if the "N2X" isn't just a small Vera Rubin.
-WingsForLife-@reddit
Might be time to sell my 125H laptop.
AbhishMuk@reddit
Something's massively wrong if a 125H laptop is "slowing down". Check your thermal paste, cooling, OS settings (eg power saver) etc.
-WingsForLife-@reddit
https://imgur.com/a/OUhnU1L
Here's me putting PTM7950 on it when I bought it; the chip is just power limited on battery.
AbhishMuk@reddit
I don't doubt you for a moment, OEMs do funky stuff all the time.
Can you try running some benchmarks (plugged in and on battery), and see how they vary with time? If they start high but drop then it still likely is a thermal issue, but if it never even goes high, it might be worth looking at deeper settings. Intel XTU (or its modern version, I've been out of the loop for a hot minute) might help.
whispous@reddit
That's insane, a 3 year old CPU shouldn't be "slowing down".
Try reinstalling Windows fresh.
Seanspeed@reddit
This is exactly the sort of shit you put up with when you use a laptop.
And why I'll be desktop for life.
Plank_With_A_Nail_In@reddit
It's a made up story, probably a bot.
Educational-Web31@reddit
Probably not. Bots don't become Top 1% commenters here.
Front_Expression_367@reddit
I don't think they are a bot, but I also heavily doubt this is just a chip problem. Feels like it would be helpful to also mention the model of the laptop, just in case of a firmware, BIOS, or software problem, but I got downvoted for saying that much, so I guess not?...
XTornado@reddit
Well, I generally agree with that statement, and nothing specific to that CPU as I am clueless about that model, but it can happen and did happen in the past that CPUs needed a software patch for security issues, which in a way "slowed down" the CPU, and a reinstall would not fix that since the patch was needed.
Front_Expression_367@reddit
I guess it could also be certain BIOS update that messed up with the scheduling, but without knowing the model of the laptop, no one can be sure if that would be true.
-WingsForLife-@reddit
I wish I was a mod, then I could just show that the moment I edited in performance, I always stated it was on battery.
XTornado@reddit
Ok, tbh it is confusing, he is not the only person saying that about the CPU slowing down, but I guess it was some misunderstanding.
-WingsForLife-@reddit
idek how it became controversial, the older gen laptops being worse on battery isn't a new concept.
-WingsForLife-@reddit
It's fine plugged in, not on battery.
I've tested it against notebookcheck's results.
Educational-Web31@reddit
Snapdragon chips don't throttle on battery, and hopefully neither will this N1X chip.
tecedu@reddit
No, this is just normal power-limited Windows; it happens even on my 265H laptop. Balanced and power saver modes are terrible.
dampflokfreund@reddit
Nah, it is normal on battery. Laptops throttle really hard there, and couple that with how slow Windows 11 is in general and it amplifies things greatly. My laptop feels 20 years old when there's only a few percent of battery life left.
KodaKR-64@reddit
Laughs in MacOS
willis936@reddit
Might be time to upgrade to a five year old used macbook.
Front_Expression_367@reddit
Seeing the chip in my laptop being mentioned here for being "sluggish" is so funny lol. Then again, lying on the Internet is as old as the Internet itself.
Endeeeeeeeee@reddit
What are your primary use cases for Nvidia Broadcast? I enjoyed the background blur
-WingsForLife-@reddit
Background effects are definitely a good one stop for every meeting app and account I have to swap to. I don't like auto focus much, grabs direction too hard.
The noise cancelling features have gotten me through noisy-ass cafes, but it seems like all vendors have gotten better at this now.
eriksp92@reddit
That’s just the power throttling on the default balanced mode - try switching to ‘best performance on battery’ in the Windows settings.
-WingsForLife-@reddit
Yeah I know, but I've put it on balanced so I can actually last through a 5-hour meeting.
Snapdragon and Apple chips simply don't throttle as hard on battery, and maybe this one too.
hackenclaw@reddit
must be the iGPU being crap?
I looked into the 125H specs; it's a 4+8+2 CPU, so the CPU can't be that bad.
-WingsForLife-@reddit
The iGPU is good enough to handle low Cyberpunk at 30fps, I think it just throttles itself to hell once on battery, unless I put it on High performance, but if the meeting goes long it turns into an issue.
NeroClaudius199907@reddit
5070 core count, less than half the bandwidth. Confusing chip; is it built for mainstream or AI bros? Strix Halo has terrible battery life but it's already on the market.
Vb_33@reddit
AI bros love bandwidth tho
Forsaken_Arm5698@reddit
Surely, it must be bandwidth starved then?
MaxPlanck_420@reddit
Quantity over quality for VRAM here. It's all about the use case, and lots of AI workloads benefit more from volume of RAM rather than speed of RAM. I mean, speed is always helpful, but quantity is an absolute requirement for large models. I'm assuming this is just a DGX Spark in laptop form, so there will be 128GB of LPDDR5X unified RAM shared between the CPU and GPU.
v00d00_@reddit
Yep, it’s less a question of “how well will this workload run on my machine?” and more “will this workload run on my machine at all?”
ResponsibleJudge3172@reddit
Interesting to call the N1 a 5070 but not call Strix Halo a 9070.
From-UoM@reddit
Strix Halo has a name. It's called the 8060S.
Meanwhile on the N1, the iGPU has the exact same CUDA core count of 6144 as the desktop 5070.
DerpSenpai@reddit
Nvidia will most likely call it a 5070 too
boissez@reddit
It has more cores than the mobile RTX 5070 Ti though - but half the bandwidth. It'll be interesting to see where it lands in terms of performance.
From-UoM@reddit
It's also TSMC N3.
The RTX 5070 is TSMC 4N (Nvidia's custom TSMC N5).
DerpSenpai@reddit
Strix Halo is nowhere near a 9070. It's 40 CUs of RDNA 3
ElectronicStretch277@reddit
It has always been marketed to the AI sector.
NeroClaudius199907@reddit
That's true. AI bros should be happy at least.
https://videocardz.com/newz/nvidia-confirms-mediatek-built-n1-pc-chip-is-aimed-at-ai-computers
From-UoM@reddit
It's the same GB10 chip as in the DGX Spark.
So 128 GB variants will be there.
NeroClaudius199907@reddit
Strix halo 128gb $3799
M5 Max 128gb $5099
There's a market for it; the AI bros will buy them quickly.
From-UoM@reddit
Well, the DGX Spark has 200G ConnectX-7, something the others lack completely.
The ConnectX-7 alone is worth $1000+.
NeroClaudius199907@reddit
Sounds good more reasons for ai bros to be happy. This looks like a system seller.
rimki2@reddit
With 128MB of RAM it would be on-brand.
EmilMR@reddit
I feel like they waited so long that it is outdated now.
KeyboardG@reddit
Suddenly there is silicon to spare to make laptop chips?
LastChancellor@reddit
now what about that mysterious laptop 12GB 5070
Serious_Rub_3674@reddit
Still waiting for a new shield announcement.
Cubanitto@reddit
I'll be excited to hear from all you beta testers.
Rigman-@reddit
Cool, fuck Nvidia, so anyways…