Intel Core Ultra 7 255H "Arrow Lake-H" thrashes AMD Ryzen AI 9 HX 370 in Passmark single-thread CPU benchmark
Posted by Qsand0@reddit | hardware | 83 comments
Qaxar@reddit
How about outside of synthetic benchmarks?
laffer1@reddit
One should look at many benchmarks before purchasing, but Passmark is actually good for relative positioning of a CPU in my experience.
I use Passmark scores as a starting point before buying any CPU or system. On the desktop side, I wouldn't buy anything under 4k single core and 45k multicore at this point. Laptops are always behind, so something like 20k multicore (11700 speeds) to 25k (11900K or 3700X) would be my minimum.
Passmark scores line up fairly well with Cinebench R23 and my own personal experience with compilers and LZMA compression.
gatorbater5@reddit
do you do a lot of work on a laptop that relies on tons of parallel processing?
laffer1@reddit
I do. I'm a programmer. I compile code all the time on laptops. My work issues a laptop!
raniraaan@reddit
Hi! I am planning to purchase a laptop for work. I also do programming. What would you recommend between the two: Core Ultra 7 255H or Ryzen AI 7 350? I mainly do data engineering, sometimes web development. I also want to explore machine learning in the future.
laffer1@reddit
I would recommend looking at reviews on those chips and trying to find some benchmarks for compilers on them.
I usually start off with looking at passmark scores to get a baseline. https://www.cpubenchmark.net/compare/6397vs6471/AMD-Ryzen-AI-7-350-vs-Intel-Ultra-7-255H
In this case, the Intel chip appears to be faster. That result likely holds in Windows and Linux, but not necessarily in other operating systems.
As far as AI/ML workloads go, AMD integrated graphics and AI acceleration are supported a bit better in frameworks than Intel chips right now. Intel has been trying to upstream support in some places, but you'd probably have better luck with the AMD chip. I've been able to run small LLMs on my Linux box using the integrated graphics in my Ryzen 7900, but my Arc A750 GPU does not work with many of them.
The TDP is lower on the Intel chip, which might be good for battery life. Older Intel chips (12th-14th gen H or P series) were very bad at battery life. That's something a review might cover better.
I have a mix of intel and AMD systems and usually have a preference slightly toward AMD so I'd likely get the AMD laptop in your situation.
To put it in context, the AMD chip lands between a Ryzen 3700X/5700X desktop chip (or an 11900K), and the Intel chip is a bit faster than all of those; it would be between a 5700X and a 7600X. Compared to a newer desktop part, the AMD chip is about half speed, and the Intel chip is like a lower-end current chip, or a little better at multicore. It's not unusual for laptop parts to be slower than desktops like this. It used to be a lot worse a few years ago, like a 10-year gap in performance.
For web development, either chip is more than adequate. JavaScript might be a bit faster on the Intel part due to its single-core performance (check out Geekbench scores to get a better idea on that). This would also likely be true for Python, as it's single-core heavy.
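If you want a very rough local sanity check of single-thread speed, a toy loop like this (purely illustrative; the numbers are not comparable to Passmark or Geekbench scores) tracks single-core performance reasonably well:

```python
import time

def single_core_iters_per_s(iters=2_000_000):
    """Time a tight CPU-bound integer loop on one core.
    Toy micro-benchmark only; results are not comparable
    to Passmark or Geekbench scores."""
    t0 = time.perf_counter()
    acc = 0
    for i in range(iters):
        acc += i * i  # simple integer work, stays on one thread
    elapsed = time.perf_counter() - t0
    return iters / elapsed

if __name__ == "__main__":
    print(f"{single_core_iters_per_s():,.0f} loop iterations per second")
```

Run it on two machines with the same Python version and the ratio gives a ballpark single-thread comparison.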
M4mb0@reddit
Reminds me of when PyCharm would run out of memory on my older laptop and cause GDM to nuke everything and restart. So happy to have 64GB now, never going lower than that again.
cesaroncalves@reddit
Do you use a browser on a laptop?
Hairy-Dare6686@reddit
The website lists the 285K as a better gaming CPU than the 7800X3D, and the entire Intel Ultra lineup as better gaming CPUs than their 14th-gen counterparts.
Not saying that it is entirely useless but synthetic benchmarks in general are a bit misleading when they don't translate into real world performance.
laffer1@reddit
No, it doesn't. Passmark isn't a gaming benchmark. It's a benchmark for general compute and it's accurate for other workloads.
They have a GPU benchmark also, but the actual compute benchmark doesn't tell you gaming performance. If you want those numbers, you need to look at gaming benchmark sources like Gamers Nexus, Hardware Unboxed, Phoronix, etc.
Hairy-Dare6686@reddit
What is this then?
M4mb0@reddit
You can safely ignore this list.
laffer1@reddit
I hadn't seen that list. It's completely wrong and I wouldn't use it. Still, Passmark multicore and single-core scores are valid for general-purpose compute.
GYN-k4H-Q3z-75B@reddit
These names are making me gag. Core Ultra 7 255H? Ryzen AI 9 HX 370? Who came up with this garbage?
Not sure how helpful a lead in single-thread applications is for a 16 core CPU. But good for them to win something at long last.
Elfotografoalocado@reddit
Single thread performance is super important. Not all tasks scale to 16 cores.
exomachina@reddit
Especially gaming. You can multithread your renderer as much as you want but core game logic is still mostly running on a single dedicated thread.
Emergency_Rock_740@reddit
At the moment, for iGPU gaming, I'm trying to decide between:
255H (worth waiting for?)
258V Lunar Lake (max 37W TDP)
HX 370 (max 35W TDP on the new GPD WIN MINI 2025)
What do you think will be the best option, especially for the new Monster Hunter Wilds?
exomachina@reddit
seems like it would be a miserable experience on any of those.
Emergency_Rock_740@reddit
I plan to game on a 7" display like a handheld, so it should be OK for me as I don't need max graphics, but what do you think would be the best iGPU choice of these three? Thank you!
JawaOfficial@reddit
The 255H isn't as good as the Ryzen or Lunar Lake unfortunately - I have the Onexplayer X1 Pro and it gets beat by the Ryzen 370
XyneWasTaken@reddit
I think if you have the choice for LNL, go for it
diddidntreddit@reddit
What would be a good name instead?
crab_quiche@reddit
That’s not even the worst name from AMD, Ryzen AI Max+ Pro 395 takes that
LeptinGhrelin@reddit
At least that's a genuine AI chip
XyneWasTaken@reddit
with ROCM? no way
LeptinGhrelin@reddit
Are you joking or serious? Vulkan compute is usually used for this.
XyneWasTaken@reddit
are you serious?
most AI work (PyTorch) goes through compute backends like CUDA, ROCm, or oneAPI.
Unless you're talking about the new Vulkan cooperative matrix stuff, which RDNA 3.5 doesn't support, I have literally never seen anyone use Vulkan for AI outside of gaming-related technologies.
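For context, here's a sketch of how PyTorch code typically selects a compute backend at runtime (a hypothetical helper, not from any framework; it falls back to CPU if PyTorch isn't installed):

```python
import importlib.util

def pick_torch_device():
    """Pick the best available PyTorch compute device.
    Illustrative helper only. Note: ROCm (HIP) builds of PyTorch
    reuse the torch.cuda API, so "cuda" covers NVIDIA and AMD."""
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # PyTorch not installed; nothing to probe
    import torch
    if torch.cuda.is_available():  # CUDA build or ROCm build
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():  # Apple Metal
        return "mps"
    return "cpu"

print(pick_torch_device())
```

This is why backend support matters: if the framework's device probe comes up empty, everything silently runs on the CPU.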
LeptinGhrelin@reddit
Literally every large open source LLM inference backend supports vulkan compute.
Ex:
https://github.com/ggerganov/llama.cpp
XyneWasTaken@reddit
which is why I said inference model serving.
Basically, you have a gimped chip that can't do anything except run specific pretrained models on specific inference frameworks, and on top of that an unspecialized Vulkan backend is 1.5-3x slower than ROCm, which itself is already slower than CUDA.
If you want to do anything in the line of actual AI work, such as using PyTorch to finetune anything from Hugging Face, you need a stable DNN backend, and ROCm/hipDNN is not one right now. I guess if you're an extremely casual user and only want to mess around with AI, it could work for you.
LeptinGhrelin@reddit
Who the fuck trains on a mobile chip?
XyneWasTaken@reddit
Point still stands.
I, for one, know that almost all of the image-gen community software (webui, kohya, etc.) runs directly on PyTorch, which tosses any possibility of using a bespoke C++ Vulkan backend out the window.
Not to mention, there are many, many people who finetune small character or artist LoRAs on their laptops.
LeptinGhrelin@reddit
I personally only use H/A100, and if in a pinch Ada 6000. You're not saving any money by training locally.
Even for older CNNs, a 5090 or better is much more cost efficient.
XyneWasTaken@reddit
When you want to train a simple LoRA (on SDXL, which by the way is a 3.5B model), let's say with 500 images of a specific character, it's hardly reasonable to expect someone to pay for and spin up an H100 cloud instance for it.
By definition, you are saving money by using hardware you already have, and people want to use the hardware that they have... I've even seen people train on a 3050M, even if it is arguably a stupid idea.
And it seems strange to call a chip that can't support most AI workloads without custom software a "good AI chip" just because you presume that no workload other than basic LLM inference should be done on it. A good chip would generally mean it can run most things (like Stable Diffusion inference) without problems, using the well-established software frameworks that already exist.
LeptinGhrelin@reddit
I personally don't believe in LoRA; despite what Microsoft says, newer papers have come out debating whether LoRA is in any way equivalent to full fine-tuning in terms of knowledge retrieval.
This chip is wonderful for my company's use case. We used to use H100s in the cloud, but with this chip we would be able to run 70B models locally, faster than on an M4.
LeptinGhrelin@reddit
No one is using this chip for small CNNs
LeptinGhrelin@reddit
Vulkan is not any slower when you are largely bandwidth limited.
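A back-of-the-envelope sketch of why memory bandwidth dominates LLM token generation (all numbers below are illustrative assumptions, not measurements of any particular chip):

```python
def est_tokens_per_s(params_billion, bytes_per_param, mem_bw_gb_s):
    """Upper bound for memory-bandwidth-bound LLM decoding:
    each generated token streams all weights from memory once,
    so token rate <= memory bandwidth / model size in bytes."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return mem_bw_gb_s * 1e9 / model_bytes

# Illustrative: a 70B model quantized to ~4 bits (~0.5 bytes/param)
# on a hypothetical ~256 GB/s unified-memory platform
print(round(est_tokens_per_s(70, 0.5, 256), 1))  # ~7.3 tokens/s ceiling
```

Any backend that keeps the memory bus saturated hits roughly the same ceiling, which is the point about Vulkan vs ROCm for pure inference.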
LeptinGhrelin@reddit
This is an alternative to the Versal AI Edge from AMD.
The_Man-Himself@reddit
Agree, trash names.
Zednot123@reddit
Hey, it's still better than Acer!
Acer R382CQK bmijqphuzx. Really rolls off the tongue.
timbomfg@reddit
IDK what you're talking about, Acer's Loser Nitro Suckface is the best name ever!
Melbuf@reddit
lol I own an Acer B246WL ymdprzx
Monitor names have been this bad for at least a decade.
Olde94@reddit
I would like to highlight that it's a 6-performance-core CPU. The next 8 are efficiency cores and 2 are low-power efficiency cores.
TGSMKe@reddit
Even better when you think about it. Just six cores doing the heavy lifting.
Olde94@reddit
It's a single-threaded benchmark. What are you on about?
Hot_Kaleidoscope_961@reddit
Most applications are still single core. Hello)
XyneWasTaken@reddit
this is after they already changed the names once lmao
marketing people don't understand that you shouldn't fix what isn't broken
Pugs-r-cool@reddit
Don’t get them wrong, the confusion is part of the strategy. If no one can keep track of what each name means, they can sell you a worse CPU without you noticing.
laffer1@reddit
It’s also how marketing people get promotions and show they are doing something
Emergency_Rock_740@reddit
I'm trying to decide, for iGPU gaming, between the 255H (worth waiting for?) and the 258V Lunar Lake (max 37W TDP). What do you guys think will be the better option, especially for the new Monster Hunter Wilds?
Qsand0@reddit (OP)
Dedicated gpu
shadowlid@reddit
I haven't kept up with laptop CPUs that closely, but wouldn't this be Intel's top-of-the-line mobile chip? Why is it not compared against the Ryzen 7845HX3D?
william_blake_@reddit
The 12-core Zen 5 architecture, aka 3XX, is the current top of the AMD line. The 8-core Zen 4 arch, aka 7(8)X4X, is slower. But yes, the upcoming 9XXXHX, aka 16 Zen 5 cores, is more of a direct competitor and would be faster. Anyway, 3XX is the best mobile platform; at 10-50W it has the best performance. The only downside: 3XX is paper-launched and only ASUS has a full lineup with it. BTW, the Passmark results above are bs, because the sample sizes differ by hundreds of times.
william_blake_@reddit
Elephant in the room: the 255H score is one sample, the HX 370 is an average of 300+ samples. People should learn how to read.
IsThereAnythingLeft-@reddit
Yeah I’m sure that’s legit
Shadow647@reddit
straight from Intel Ark: Maximum Turbo Power 115 W
ok, lol.
VastTension6022@reddit
you think intel is using 115 watts for single core on mobile?
ok, lol.
SkillYourself@reddit
He also dug up one of the slowest HX 370 laptops for that lol. The absolute state of this sub.
Shadow647@reddit
I literally looked up review of my own HX 370 laptop
SkillYourself@reddit
Congrats on owning the slowest HX 370 laptop lmao.
Shadow647@reddit
Ah yes, you definitely know better what I should have bought. Next time I will consult you specifically!
2TierKeir@reddit
Yes you’re right no one on the internet could possibly know more than you!
Shadow647@reddit
Nah that dude definitely knows my needs and choices better than I do
tucketnucket@reddit
I'm starting to think the vast majority of reddit is just AI bots that try to piss you off to drive up engagement.
ExtendedDeadline@reddit
Man's gotta hype his stonks somehow.
Shadow647@reddit
I don't really care whether it's a space heater when using multiple cores or just one, I am not into buying garbage ¯\_(ツ)_/¯
DoTheThing_Again@reddit
You don't understand the metrics you posted. OEMs decide the power draw, not Intel. Intel just allows them the option to go that high.
Hytht@reddit
The Ryzen AI has Copilot+, Arrow Lake doesn't.
chamcha__slayer@reddit
That's a feature, not a bug
Dependent_Big_3793@reddit
Still performing as expected: AMD gets multi-thread, Intel gets single-thread. It's just that Intel got the better N3B node this time.
imaginary_num6er@reddit
Yeah but how about actual gaming benchmarks?
oledtechnology@reddit
doesn't really matter much in mobile. They're all GPU bound there.
gatorbater5@reddit
do they no longer have integrated graphics?
Chowdaaair@reddit
It matters for cpu heavy strategy games like civ
VibeHistorian@reddit
one would hope that Ryzen's efficiency would help out with Nvidia dGPUs, which can have an extra +25W allocated with Dynamic Boost
SkillYourself@reddit
> Yeah but how about actual gaming benchmarks?
The HX 370 CPU 1080p frame rate ties with a Meteor Lake 185H on a like-for-like 4070 laptop comparison. Arrow Lake H and HX will do just fine against it.
https://youtu.be/X_I8kPlHJ3M?t=720
abuassar@reddit
AMD Ryzen AI 9 HX 370 is just a silly name
jaaval@reddit
Apple doesn’t need to provide 20 different models because their only customer is themselves and their very limited product set. AMD doesn’t have that luxury so they can never use naming like m4 pro.
Saitham83@reddit
Intel H series competes with Strix Halo, not Strix Point.
SmashStrider@reddit
No it doesn't.
Physical-King-5432@reddit
Looks like Intel is finally securing some decent gen-on-gen improvements and better thermals with their new chiplet designs. According to the article, this is a ~35% improvement over Meteor Lake with efficient power draw. Great to see.
Cipher-IX@reddit
115w vs 35w. Limit the Intel chip to 35w.
Oh wait.
ElementII5@reddit
King of synthetics, as always.