SuperTrix5@reddit
Funny how they never list any other X3D model in their so-called testing lists, only non-X3D. There is no 7800X3D, nor any 7950X3D, or any other versions in the so-called review. How stupid is that?
I've seen many reviews like this that don't add in any other versions, just like this reviewer above leaves them out.
SuperTrix5@reddit
Clickbait. They have a review, yet they post it to the subreddit because nobody will actually go to their crap site to read it, so they clickbait to lure suckers from here over to some other web space, lmao.
PotentialAstronaut39@reddit
Be warned and look at the test setup.
AMD = Shitty Unnamed Crucial CL40 5600 RAM
Intel = Good Trident Z5 CL32 6400 RAM
Facepalms
Classic Puget Systems.
I swear oftentimes Puget's benchmarks are as shady as those Intel and AMD sometimes put out themselves.
exomachina@reddit
"shitty unnamed crucial ram"
Crucial is Micron which is one of the leading suppliers of datacenter memory and storage modules. It is literally top tier.
PotentialAstronaut39@reddit
CL40 4800 RAM "top tier" (their link to the RAM in the article goes to this -> https://www.pugetsystems.com/parts/Ram/DDR5-4800-16GB-14404/).
You must be trolling, there's no way. LOL
exomachina@reddit
Crucial makes top tier RAM; just because the timings aren't balls to the wall doesn't make it shitty.
Moscato359@reddit
This is AMD's fault for the clock speed.
Puget benchmarks use RAM as the manufacturer dictates.
5600 is the max memory speed on the AMD product page, while 6000 is the max memory on the Intel product page.
As for timings, JEDEC 5600 is CAS 40, so they did the AMD system correctly for media creation.
As for the Intel system, that should be CAS 42, and it is CAS 32, so something is wrong.
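To put rough numbers on that timing gap, here's a quick back-of-envelope sketch (plain Python; it uses the standard first-word-latency approximation, and the 5600 CL40 / 6400 CL32 figures are the kit specs quoted earlier in this thread):

```python
# Back-of-envelope comparison of the two memory configs discussed above.
# Standard DDR approximation: first-word latency (ns) = CAS * 2000 / data rate (MT/s).

def first_word_latency_ns(data_rate_mts: int, cas: int) -> float:
    return cas * 2000 / data_rate_mts

kits = {
    "AMD bench   (DDR5-5600 CL40)": (5600, 40),
    "Intel bench (DDR5-6400 CL32)": (6400, 32),
}

for name, (rate, cas) in kits.items():
    print(f"{name}: {first_word_latency_ns(rate, cas):.2f} ns")

# Roughly 14.3 ns vs 10.0 ns: the Intel bench gets about 14% more bandwidth
# and about 30% lower first-word latency than the AMD bench.
```

So whatever the spec argument, the two configurations are nowhere near equivalent.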
PotentialAstronaut39@reddit
As a benchmarker/reviewer, your job is to isolate your variables as much as possible.
Your entire system must be exactly the same except the CPU: same settings, same components, same speeds, and ideally with as many BIOS settings on default as possible (no disabling for some and enabling for others), same Windows settings, same everything as much as possible except the CPU. That's what every other reviewer/benchmarker with a good and reliable reputation does.
They didn't even try to isolate anything.
Sloppiest benchmarks I've seen in decades.
Aggressive_Ask89144@reddit
I always think having a stock, drop-in test is good as a baseline comparison, and then a hyper-optimized test later to pull the best from each platform: give Arrow Lake CUDIMMs and a tuned power profile while AMD gets well-tuned DDR5 plus PBO and boost clocks at a minimum. It's really weird that they did it this way, though.
terriblestperson@reddit
There are actual fraudulent benchmarks on YouTube, e.g. some of Frame Chasers' videos, so this is relatively mild. Still bad.
terriblestperson@reddit
I think there are other valid approaches besides "same everything", e.g. giving all the CPUs you're trying to measure the best fighting chance you can, or building systems that all cost the same. You're not measuring the same thing, but you're still measuring something useful.
Of course, making your systems as identical as possible is way easier, and there's absolutely no excuse for gimping one of your systems in the way this 'benchmark' did.
Moscato359@reddit
I don't fully agree with the 100% isolate your variables idea.
If the goal is to show typical experience, then you need to use best practice components.
For example, if you run DDR5-6800 CAS 36 on both systems, the AMD system will desync the memory and have a performance loss, while the Intel system will not.
What would make the most sense is using memory appropriate for the system.
In that case, using slower 6000 MHz RAM on AMD would result in better performance due to the Infinity Fabric syncing.
In terms of a production system, using memory within spec is the standard, and AMD says no higher than 5600 MHz, while Intel says no higher than 6000.
Should Intel be arbitrarily nerfed to 6000? Should AMD be overclocked to 6000?
What should have happened is that each is given memory with JEDEC timings at the maximum memory speed the manufacturer recommends.
Mind you, this is for production systems and not gaming.
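To illustrate the desync being described, here's a rough sketch (plain Python; the 1:1 ceiling of about DDR5-6000 is the commonly cited community figure for AM5, an assumption here rather than an official AMD spec):

```python
# Rough illustration of the AM5 memory-controller sync point described above.
# Assumption: UCLK can run 1:1 with MEMCLK up to roughly DDR5-6000, and falls
# back to a 1:2 divider beyond that (community rule of thumb, not an AMD spec).

UCLK_1TO1_CEILING_MTS = 6000

def uclk_mhz(data_rate_mts: int) -> float:
    memclk = data_rate_mts / 2                      # DDR: two transfers per memory clock
    divider = 1 if data_rate_mts <= UCLK_1TO1_CEILING_MTS else 2
    return memclk / divider                         # 1:2 mode halves the controller clock

for rate in (5600, 6000, 6800):
    print(f"DDR5-{rate}: MEMCLK {rate / 2:.0f} MHz, UCLK {uclk_mhz(rate):.0f} MHz")

# DDR5-6000 keeps the controller at 3000 MHz (1:1), while DDR5-6800 drops it
# to 1700 MHz (1:2), which is why the "slower" kit can end up faster on AM5.
```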
Lisaismyfav@reddit
But if 6000 is the max memory for Intel, why did they use 6400?
Moscato359@reddit
That is odd...
It should have been 6000 CAS 42, with AMD on 5600 CAS 40.
Vushivushi@reddit
https://www.intel.com/content/www/us/en/products/sku/241060/intel-core-ultra-9-processor-285k-36m-cache-up-to-5-70-ghz/specifications.html
6400 is spec.
ConsistencyWelder@reddit
But the funny thing is, their own workstations with Core Ultra 9s come with 5600mhz RAM.
ComfortableEar5976@reddit
Can you provide a link for this? Would be great if you can show evidence that this was done for stability reasons too.
ConsistencyWelder@reddit
You need me to link you to Puget Systems website?
ComfortableEar5976@reddit
Yes, specifically regarding your claim that they picked 5600 for stability reasons instead of cost.
ConsistencyWelder@reddit
Right.
https://www.pugetsystems.com/workstations/core-ultra/c121-l/
I'm looking at their base model workstation here. It has a
265k
32GB DDR5-5600
4060Ti
1 TB Gen 4 SSD
And the price for that config is $4,156
That kinda clues you in that it's not because of the price that they only give you 5600MHz RAM.
ComfortableEar5976@reddit
Ok but nothing about that indicates that Puget systems is using 5600 MT/S instead of the Intel spec of 6400 MT/s due to stability reasons as you suggested.
I'm not sure I've even seen any ARL-S CPU fail to handle even 7200, many seem to do well over 8000. I don't see any evidence that ARL-S CPUs would fail to handle the stock spec of 6400 MT/S. Unless you have some data for this, this looks like pure speculation on your part without evidence.
ConsistencyWelder@reddit
I'm not saying ARL can't handle 6400mhz RAM. I'm saying it's funny that they don't test it with the same 5600mhz RAM as the Ryzen, since they also sell their ARL systems with 5600mhz RAM.
Logically they should test the RAM with the same speeds if they sell them with the same speeds.
Why they're selling ARL workstations with 5600mhz RAM, we don't know for sure. But it's not because of cost, that would not be an issue when they sell them for $4,156+ for the most basic configuration. It would make most sense that it's for stability reasons, since stability is super important with a workstation, obviously, but the fact that they sell the systems with support included probably also factors in. It's just easier if the RAM is the same, less unknowns.
But my point was that it's funny they didn't test both systems with the same RAM speeds they sell them with.
Vushivushi@reddit
Lol just checked their configurations and you're right.
That's silly.
Moscato359@reddit
Okay, so they messed around with timings but not clocks then
arandomguy111@reddit
Moscato359@reddit
The site listed cas 32
arandomguy111@reddit
The test setup listed the hardware name, and that kit has an XMP profile at CAS 32. The article states that even though an XMP kit was used, they set timings to JEDEC standards.
ConsistencyWelder@reddit
They're not giving the Intel system 6000mhz RAM, they're giving it 6400mhz, with good timings.
And no one is buying a high end AMD CPU with 5600mhz RAM. That's not a realistic setting. Sounds more like they wanted to handicap it and found a way to do it that looks legit on the surface.
viti---@reddit
It's funny because their staff is ALWAYS around to thank people whenever they get praised, but when people point out their constant and obvious mistakes... nowhere to be found.
arandomguy111@reddit
Shrike79@reddit
I know GN uses the Puget benchmark suite in their testing, and I'm pretty sure a few others do as well. Seems you're better off looking at those reviews for Puget scores rather than Puget themselves.
deleted_by_reddit@reddit
[removed]
AutoModerator@reddit
Hey tuhdo, your comment has been removed because it is not a trustworthy benchmark website. Consider using another website instead.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
auradragon1@reddit
Intel is a definitive win in video production over AMD.
nanonan@reddit
Only if you can take advantage of Quicksync.
PastaPandaSimon@reddit
I know Intel gets a ton of hate, but while Quicksync is huge for many video production workloads, it's also that Intel gets you far more capable production chips per dollar in the R5/R7 ranges. The downvoted poster isn't that far off.
Gippy_@reddit
As a hobby video editor (not as a profession), QuickSync isn't very useful in practice. I deal with a lot of phone video footage, which is all 4K VFR. QuickSync can decode this in theory, but VFR is sluggish as hell in Premiere. In order to edit accurately with fast scrubbing, I need to re-encode sources to CFR 60fps and then use ProRes 1080p proxies. My 12900K does this intermediate step much faster than QuickSync or NVENC.
QuickSync is great for local screen capture on OBS, though. CPU under 1% load. I use it all the time for that.
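For reference, that two-step pipeline looks roughly like this when scripted (a sketch only, driven from Python; the file names are placeholders and it assumes an ffmpeg build with the prores_ks encoder on PATH):

```python
# Sketch of the VFR -> CFR -> ProRes proxy workflow described above.
import subprocess

SRC = "phone_clip_vfr.mp4"      # 4K variable-frame-rate phone footage
CFR = "phone_clip_cfr60.mp4"    # step 1: constant 60 fps intermediate
PROXY = "phone_clip_proxy.mov"  # step 2: 1080p ProRes proxy for editing

# Step 1: force a constant 60 fps so scrubbing in the editor is frame-accurate.
subprocess.run(["ffmpeg", "-i", SRC, "-vf", "fps=60", "-c:a", "copy", CFR],
               check=True)

# Step 2: downscale to 1080p ProRes Proxy (prores_ks profile 0) with PCM audio.
subprocess.run(["ffmpeg", "-i", CFR, "-vf", "scale=1920:-2",
                "-c:v", "prores_ks", "-profile:v", "0",
                "-c:a", "pcm_s16le", PROXY],
               check=True)
```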
nanonan@reddit
8 HT cores vs 8+8 non-HT cores isn't that big a difference.
PastaPandaSimon@reddit
Sorry I meant 8+12, as that's what Intel provides with the 14700k or 265k. The Intel chips are around 50% faster, and even more so per dollar if you go with something like the 14700k.
nanonan@reddit
The 265K costs as much as a 9900X, so your comparison to the 9700X is a little odd.
PastaPandaSimon@reddit
The comparison is valid as the 265k and 9700x launched at prices just $35 apart. The 9900X was $105 more than the 265k.
AMD chips like the 9900X and 9700X only started coming down in price around the Arrow Lake launch.
nanonan@reddit
Not for today it isn't.
> these are retailer-driven sales rather than official price cuts
Pretty sure they are official.
> And even then still, the 265k has got a faster MT performance than the 9900x
It's a coinflip as to which one is faster in productivity. https://www.pugetsystems.com/labs/articles/intel-core-ultra-200s-content-creation-review/
Not to mention AMD's AVX advantage either.
I'm guessing you're in Europe because everywhere else in the world the 9900X is cheaper.
Of course they aren't entirely useless, but they are clearly not strong competitors even compared to their previous gen and a poor choice for many. The fact you are bringing up 14th series says it all really. At least the 200 series hopefully won't be worthless used.
COMPUTER1313@reddit
Yesterday I checked the pricing of the 285K vs 9950X and 275K vs 9900X. They are roughly the same.
What makes Arrow Lake even more expensive is needing a new board (compared to the Zen 5 regular CPUs that run on any AM5 boards) and +8000 MHz DDR5 CU-DIMM to avoid losing further performance.
3G6A5W338E@reddit
Intel does not even support ECC memory.
You can't produce shit there. It is just a random bit corrupting machine.
ryanvsrobots@reddit
No one cares about ECC in video prod.
3G6A5W338E@reddit
Yet another "serious use" benchmark done without ECC memory.
Reactor-Licker@reddit
Intel doesn’t support ECC memory and AMD’s support is quite limited and sketchy sometimes.
3G6A5W338E@reddit
Disqualifying Intel for non-toy use.
It does not help that reviewers ignore it, despite it being a requirement for any serious workload.
teutorix_aleria@reddit
Genuine question, how is ECC necessary for Audio/Video/Photo editing? Or is your definition of serious work more narrow than that?
Reactor-Licker@reddit
“Non toy use” really? You own some stock in a company that specializes in ECC or something? The overwhelming majority of computers don’t use ECC and the world hasn’t exploded.
ansha96@reddit
ECC requirement for any serious workload? Lol...
ConsistencyWelder@reddit
I like that it has slightly better multicore performance than a 5900X. So 8 Zen 5 cores are slightly more powerful (for multicore heavy tasks) than 12 Zen 3 cores.
Of course, single core and gaming are a whole other story.
NeighborhoodOdd9584@reddit
Wow good uplift from Zen 3 to 5.
devinprocess@reddit
Looks like a well balanced cpu. If one is not using a DIY system for professional studio editing, the 9800x3d seems to hold its own very well in “productivity”. This is why I am liking it over the 7800x3d. Very respectable all around.
Consistent-Theory681@reddit
I wish they'd also cover audio production, like the number of VSTs running concurrently and mixdown speed.
_Kai@reddit
While no X3D yet, perhaps this site helps: https://www.scanproaudio.info/tag/dawbench/
Consistent-Theory681@reddit
Thanks, I didn't know this existed.