Arc Pro B70 Review: The best graphics card Intel has to offer
Posted by pcgameshardware@reddit | hardware | View on Reddit | 140 comments
IshTheFace@reddit
Why buy Intel over AMD or Nvidia? What's their competing angle? It can't be performance, it can't be price, and it's most definitely not compatibility.
pr0metheusssss@reddit
>what’s their competing angle
Highest VRAM/dollar.
Highest absolute number of virtual functions (SRIOV), to “partition” the GPU and pass through to multiple VMs, while being totally free (nvidia has horribly expensive licensing, and it’s only available in higher end models to begin with).
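As a rough illustration of what that partitioning looks like from the host side (not from the review), here's a minimal Python sketch that reads the standard Linux PCI sysfs attributes for SR-IOV before handing VFs to VMs; the PCI address is a placeholder and will differ per system.

```python
# Minimal sketch: list the SR-IOV virtual functions a GPU exposes on Linux via sysfs.
from pathlib import Path

GPU_ADDR = "0000:03:00.0"  # hypothetical PCI address of the Arc card; adjust for your system
dev = Path("/sys/bus/pci/devices") / GPU_ADDR

total = (dev / "sriov_totalvfs").read_text().strip()   # max VFs the device supports
current = (dev / "sriov_numvfs").read_text().strip()   # VFs currently enabled
print(f"SR-IOV virtual functions: {current} enabled of {total} supported")

# Each enabled VF appears as a virtfnN symlink; these are what you pass through to VMs.
for vf in sorted(dev.glob("virtfn*")):
    print(vf.name, "->", vf.resolve().name)
```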
pppjurac@reddit
Which is an absolutely neat feature for my homelab...
varateshh@reddit
It will likely be price and supply. Nvidia/AMD will throw all their TSMC supply at higher margin products and Intel will still be here because they have their own foundry.
Then Intel can go the Chinese route of high-volume, low-margin production while they build up their capabilities and software makers improve compatibility. Eventually, and with some luck, Intel might be able to compete on the higher margin products. Low-margin production will hopefully be revenue neutral or slightly profitable. This is possible because Nvidia and AMD have abandoned the low-end market, allowing a newcomer to enter (hell, if Intel does not take it, some Chinese company will).
Alternatively, Intel fully abandons the dedicated GPU market and gives up on enterprise GPUs.
LAUAR@reddit
Doesn't Intel use TSMC for the Arc dGPUs?
siazdghw@reddit
Yes but that probably won't last. It's a holdover from Intel wanting to diversify what their products are fabbed on to free up internal supply for CPUs and prevent any fab delays bottlenecking releases.
They will 100% fab their own dGPUs, but it's unclear when that will happen now that Apple and Nvidia want Intel fab capacity too.
IshTheFace@reddit
I think Intel would want to maximize their profits too though. I will never believe they would just cater to gamers if they could just go the nvidia route and lean hard into ai. Just from a business perspective.
ghenriks@reddit
What everyone forgets is that it's all about the software, and if you want the software you need to get affordable hardware into developers' hands and maintain a consistent roadmap.
Intel's messaging is all over the place and provides no confidence that they will release new generations of discrete GPUs (C is cancelled, D is a maybe, maybe not).
So while the hardware is affordable enough (in a 2026 way), no one is going to invest time or money into porting AI software to an uncertain platform, nor will game companies optimize their games for Intel.
varateshh@reddit
This is the route to maximize future profits if they believe that they can compete with Nvidia/AMD in the future. They currently do not have the technology to compete with Nvidia/AMD on the high end. The question then becomes, does Intel want to spend billions on developing their GPUs for a decade with zero revenue before entering the enterprise market or will they sell consumer GPUs to offset the costs of development? This does not mean that Intel loves gamers but that it is willing to use them for revenue as long as they have foundry capacity. If Nvidia/AMD effectively abandon this market then Intel will even be able to squeeze higher margins out of the market.
FranciumGoesBoom@reddit
The Arc Pro line is targeted at things like in house LLM and hardware acceleration for VMs. $1200 for a card with 32g of vram or a 5090 for 5k? easy choice. Add in SR-IOV and you've got a VM host powerhouse.
absolute-degen1337@reddit
lol. That's like saying why spend six figures on a Lamborghini Urus when a Dacia is 20k and fits 5 people as well.
curryslapper@reddit
except in this case the dacia compares OK in performance with the lambo
absolute-degen1337@reddit
except it does not lol? this is a shitty card with some more RAM.
curryslapper@reddit
inference performance seems ok?
absolute-degen1337@reddit
https://www.phoronix.com/review/intel-arc-pro-b70-linux/3
how lol. that performance is a complete joke?
curryslapper@reddit
https://www.pugetsystems.com/labs/articles/intel-arc-pro-b70-review/#MLPerf
absolute-degen1337@reddit
https://www.phoronix.com/review/intel-arc-pro-b70-linux/3
nice lambo.
Tech_Itch@reddit
It's a professional card, so that's the wrong angle to begin with.
IshTheFace@reddit
Yeah, I figured it was a new gaming card since they only had those before.
Skensis@reddit
There is little reason to buy Intel besides the novelty of getting something different.
They're the Windows Phone of GPUs!
xole@reddit
I remember reading a while back that Intel was going to support GPU partitioning in VMs on their consumer cards. If that's true, it would make Intel GPUs great for people running home labs, since afaik you need higher end enterprise cards from Nvidia or AMD to do that.
42LSx@reddit
For AMD yes; Nvidia's vGPU will run on GTX 900 series up to RTX 2000 series consumer cards.
Area51_Spurs@reddit
They’re low volume products. You have to start somewhere. The entire point of this is to build a GPU business and gain experience and build out their R&D department.
It’s not about what’s happening now. It’s about laying the groundwork to be a player years from now.
The dream is obviously competing in AI data centers.
They also use the same graphics technology for graphics integrated with the CPU and they want to have a product that’s competitive with the Apple M processors, AMD APUs, NVIDIA, and the Qualcomm chips for laptops, handhelds, game consoles, automobiles, and obviously data centers and other large enterprise installs.
IshTheFace@reddit
I understand it from Intel's perspective, I just don't understand why anyone would pick them over established brands as a consumer.
Area51_Spurs@reddit
Money and contrarianism
that_70_show_fan@reddit
consumers != gamers
Plenty of users who are interested in experimentation. Intel is also a solid option if you want to encode/decode on the cheap.
IguassuIronman@reddit
Not nearly the number that are interested in gaming
that_70_show_fan@reddit
Yes, captain obvious. AMD has been in the market forever, and yet what is their market share? I wonder why it is hard to enter the gaming market.
theangriestbird@reddit
Cheaper cards that can do some forms of encoding really well. I've considered getting one for my media server, these cards are really good at decoding AV1 files.
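As a rough sketch of how a media-server box would exercise that hardware decode (hedged, not from the thread): this shells out to ffmpeg's VAAPI path and assumes an ffmpeg build with VAAPI support; the render node and input file are placeholders.

```python
# Rough sketch: time hardware AV1 decode on an Intel GPU through ffmpeg's VAAPI path.
import subprocess, time

cmd = [
    "ffmpeg", "-hide_banner",
    "-hwaccel", "vaapi",                       # use VA-API hardware decode
    "-hwaccel_device", "/dev/dri/renderD128",  # render node of the Arc card (may differ)
    "-i", "sample_av1.mkv",                    # placeholder input file
    "-f", "null", "-",                         # decode only, discard the output
]
start = time.time()
subprocess.run(cmd, check=True)
print(f"Decoded in {time.time() - start:.1f}s")
```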
imaginary_num6er@reddit
The entire point was it being Patrick Gelsinger's pet project, just like PTL being abbreviated for PaTrick Lake and not Panther Lake.
Johnny_Oro@reddit
People aren't that picky about anything with VRAM these days.
Among the three brands, Nvidia GPUs are the worst in linux FYI.
IshTheFace@reddit
I know nothing about Linux and most people use windows so I guess I'm an average person with average takes.
I was thinking about Intel and Crimson Desert as an example. Although maybe that's mostly on the developers from what I gather but still.
ea_man@reddit
People that buy 32GB of VRAM use linux with it.
Johnny_Oro@reddit
Oh, you mean for gaming? Yeah, I'd take a Radeon or RTX if I'm staying on Windows for gaming, at least if the prices are close; Intel Arc is often cheaper.
And lack of optimization is due to both the developer and intel themselves. Intel arc has rather primitive features compared to the competitors and requires more hand tuning of the driver, although they've been making big jumps with every generation of Arc. More primitive features means more places you can improve easily, in comparison to the more mature architectures.
B390 iGPU has been great, better than the original battlemage, and the upcoming celestial architecture this year should be a lot better. In the iGPU market, intel has little competition, as AMD is slower at implementing their newest generation architecture, and Nvidia has no presence outside ARM core CPUs.
But for workstation uses, intel has its own merits. VRAM often times matters more than architecture in this segment too.
S_A_N_D_@reddit
Bear in mind this isn't targeted at the average person, and it's not optimized for gaming.
This is meant as a workstation GPU for professional applications, which is also why this review isn't very comprehensive in my opinion. I'm ok with the reviews centering around gaming, because people will want to know how it stacks up even if that's not the use case, but when leveling criticism, it's somewhat disingenuous to do so based on gaming performance.
I'd be curious to see how it stacks up in inference and other more specialized applications. That is where the value might come in due to the lower price but high vram.
notam00se@reddit
Worst for the open community, yes.
But as far as compute stack support goes, Nvidia is king. Install CUDA and everything from laptop GPU to workstation GPU to gaming GPU to dedicated server GPU is supported.
AMD doesn't support ROCm on mobile GPUs and doesn't support entry-level gaming GPUs; it's all hit or miss, for the most part requiring command line options to trick it into even trying to run, and forcing specific pipelines. Installing ROCm is a single package, but with options for what you are trying to use it for.
Intel supports all their hardware with their compute stack, including the NPU. But not every distro has all the packages, and Intel only officially supports Ubuntu LTS for the full compute stack; everything else is community support. The full compute stack requires 4-7 packages to be installed. Atomic distros don't cater to Intel workstations at all: zero compute packages installed, just kernel and Mesa (for desktop and gaming, but no compute).
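A minimal, hedged sketch of the kind of check this boils down to in practice, assuming a recent PyTorch build (torch.xpu only exists in builds with Intel XPU support, and ROCm builds expose themselves through torch.cuda):

```python
# Illustrative backend check; which branch fires depends entirely on how PyTorch was built.
import torch

if torch.cuda.is_available():
    # Covers both CUDA (Nvidia) and ROCm (AMD) builds of PyTorch.
    print("CUDA/ROCm device:", torch.cuda.get_device_name(0))
elif hasattr(torch, "xpu") and torch.xpu.is_available():
    # Intel GPU backend, present only in XPU-enabled builds.
    print("Intel XPU device:", torch.xpu.get_device_name(0))
else:
    print("No GPU compute backend available; falling back to CPU")
```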
ea_man@reddit
Because it has 32GB of VRAM: it's not for your gaming, it's for people who do inference.
adaminc@reddit
I know a bunch of people that stream and use 2nd video cards to encode video for Twitch or YouTube.
cdoublejj@reddit
consumer choice by having a 3rd option, also Intel allows vGPU without extra licensing
BenchmarkLowwa@reddit
Avoiding a monopoly (or the kind-of-duopoly we have at the moment) is key. Not buying Nvidia is good for the whole market.
IshTheFace@reddit
This is true and I agree. But again, from a consumer perspective I would still not choose to be an early adopter of any technology. So how are they going to gain market share? Their GPUs have more issues than Nvidia and AMD's, from what I gather.
I would sooner buy a last gen Nvidia card on the aftermarket than a brand new Arc gpu at this stage. They're not that powerful so you could probably get away with just going for an older model from Nvidia instead and be just as well off with DLSS 4.5 support. It's not like Nvidia is getting your money if you buy aftermarket. Neither does Intel in that case, but you at least know what you're getting.
pesca_22@reddit
In today's market, making consumer GPUs doesn't make any sense.
In a regular market the question would be "would you prefer to sell 500 $200 boards or 10 $500 ones?"; in today's doped market it's "would you prefer to sell 1000 $200 boards or 1000 $1000 ones?"
the only limit is production capabilities, not demand.
cloud_t@reddit
Intel is gaining a lot of ground on software and drivers, last I heard. It also plays kinda well with Intel CPUs, but then again that may be seen as a disadvantage.
vegetable__lasagne@reddit
Is Intel ever going to fix this?
siazdghw@reddit
It is sorta fixed. Intel addressed it long ago, but it needed some settings changed; other reviewers were able to confirm a significant reduction in idle power, however I don't think the fix worked for everyone.
vegetable__lasagne@reddit
Isn't that only for low idle states? Like if you play a video it'll jump straight back up?
can999999999@reddit
I'm desperately waiting for that day, I love my B70 but 50-55W idle is my one big issue with it
Apprehensive-Fail458@reddit
I'm a newb at PC hardware and I always see people concerned about this. Is it too expensive? Does it wear out the GPU faster?
can999999999@reddit
It's just a waste of energy
kingwhocares@reddit
Did you enable ASPM in BIOS?
can999999999@reddit
Jup, also in the OS, didn't really do much at all tho
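For anyone else chasing this, a tiny illustrative check of what the kernel reports for ASPM on Linux (the sysfs path is the standard one; the active policy is the bracketed entry):

```python
# Read the kernel's PCIe ASPM policy, e.g. "default performance [powersave] powersupersave".
from pathlib import Path

policy = Path("/sys/module/pcie_aspm/parameters/policy").read_text().strip()
print("ASPM policy:", policy)
```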
PastaPandaSimon@reddit
Their CPUs idle very low to compensate /s
Bderken@reddit
I remember owning a 6800xt then 7900xtx, and it took AMD like 3 years+ to fix it (can’t remember the exact timeline).
Senior-Daikon-8334@reddit
I've had a 7900xtx since launch and I never had that problem. Maybe it was just your specific card.
Bderken@reddit
it was monitored by AMD directly. They had it tracked in known bugs for years. And they did eventually fix it.
But yes it was when you had 2 4K high refresh rate monitors plugged in or something like that.
bubblesort33@reddit
I think it's monitor specific. I don't remember if it was some AMD representative that said this, or just someone familiar with driver coding. Maybe it was just a dual monitor issue for AMD? I think that was the story back then, but occasionally people reported it even on single monitor setups, but most of the complaints were regarding dual monitors with high refresh rates. I don't know if they have like a white list they need to configure for each type of monitor that could possibly be run on a GPU or what.
From-UoM@reddit
It has the die size of the GB203 in the 5080 but performs like the GB206 (half the size) in the 5060 Ti.
That's some awful die usage.
BenchmarkLowwa@reddit
The same goes for the B580. I don't care as a consumer, as long as I get the best bang for the buck.
A full review. Finally. 6 (?!) weeks after launch. Intel, B770 WHEN? 😄
From-UoM@reddit
Selling these at a loss is the reason why Arc gaming dGPUs may have gotten canned, according to reports.
siazdghw@reddit
Even at profitable margins it was going to get sidelined in the current environment.
Between memory prices surging and fabs being supply constrained (which forces companies to be more selective about what they allocate their wafers towards), it simply doesn't make sense to produce a ton of dGPUs when there are better and far more profitable ventures elsewhere.
Also, as we've seen with LNL and PNL, Arc R&D is paying dividends; selling consumer dGPUs right now is not the play when Intel can just slap the Arc designs into iGPUs and beat AMD, Qualcomm and Nvidia all in one swoop, with higher margins and lower expenditures than chasing dGPU sales.
I'd like to see more Arc dGPUs in the far future, but Intel absolutely made the right move putting them on the backburner and pivoting to other more important markets.
Plus_sleep214@reddit
I hope Intel revisits dedicated GPUs in the future, but I get why, at least for now, they'd want to focus only on integrated graphics so they can maintain their strong position in laptops.
Vb_33@reddit
Shame because Intel is starting to pop off. Stock went from $19 to well over $100 now. Grandma must have become close friends with God or something.
TechTechTerrible@reddit
Arc dgpus are probably dead in favor of nvidia gpu chiplets integrated into an intel SoC.
Zosimas@reddit
why nvidia instead of their own?
Geddagod@reddit
Realistically it shouldn't be an "either/or"; even if large iGPU configs like what we see in Strix Halo are making the lower end dGPUs obsolete (which even Strix Halo doesn't really do IMO), dGPUs are still justified at higher power/perf levels.
Ofc this is also clouded by Intel not being able to compete at those higher perf ranges too, but Intel's goal always was to eventually get to that tier of competitive performance anyways.
TechTechTerrible@reddit
It shouldn't have to be one or the other, but given the die sizes they need to compete in gaming, and Intel wanting a certain profit margin for their products to get the go-ahead, I think it's going to be tough for them to launch a new dedicated Arc gaming GPU.
ResponsibleJudge3172@reddit
This was AMD till literally just this gen
Earthborn92@reddit
RDNA3, with mixed nodes and chiplets, isn't directly comparable.
Geddagod@reddit
Is it? Even with RDNA 3, the 7900 XTX has similar perf to the 4080 Super in raster, and the total die area is ~40% larger, i.e. the Nvidia die is ~70% the area of the AMD one.
Tons of caveats make the comparison more favorable on AMD's side, but even in this simplified and pessimistic calculation, the numbers aren't nearly as bad as what the original comment is claiming for Intel vs Nvidia, where the Nvidia die is 50% the area of the Intel one.
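A quick back-of-envelope check of those ratios (illustrative arithmetic only, using the percentages quoted in this thread rather than exact die measurements):

```python
# Ratio check for the numbers quoted above.
amd_larger = 1.40                 # AMD die ~40% larger than the Nvidia one
print(f"Nvidia/AMD area: {1 / amd_larger:.0%}")      # ~71%, i.e. the "70%" figure

intel_larger = 2.0                # claim upthread: the Intel die is roughly 2x the GB206
print(f"Nvidia/Intel area: {1 / intel_larger:.0%}")  # ~50%
```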
ResponsibleJudge3172@reddit
40% is an entire tier, and that's the point (not to mention that the comparison is only in non-RT performance despite the larger die size).
My point is that it's very normal to not be on par in PPA, from a historical perspective.
notam00se@reddit
B70 beats the R9700 Pro, best comparison since they are both 32GB workstation cards, and B70 is a few hundred bucks cheaper
cloud_t@reddit
It has less than half the transistor count of the GB203 (5080), and about the same number of transistors as the 5060 Ti.
The math adds up. It's all about having the latest tech. But supposedly the larger die size allows it better cooling than a 5060 Ti for the same amount of electricity going in (I think).
CaptainForward836@reddit
Better cooling? It uses WAY MORE power than a 5060ti.
What's with the insane spin?
Also this much wafer space for 5060ti performance would be a terrible choice even if power consumption was the same.
The only reason to ever have a bigger die for the same performance is if it results in significantly less power consumption so you can put it in a laptop and get more performance out of shitty laptop cooling
HIGH_PRESSURE_TOILET@reddit
AMD had a node advantage briefly when they were fabbing the 5700XT on TSMC 7 nm while NVIDIA was still stuck on Samsung 8 nm, but NVIDIA was still better lol.
SoTOP@reddit
Nvidia used bigger dies, negating better node advantage that AMD had.
Area51_Spurs@reddit
Makes sense, more surface area and less dense transistors.
Was this made in Intel’s fabs or at TSMC?
I’m sure it must be way less money to manufacture. Capacity on the high end nodes is so expensive rn that the raw materials going into the actual wafers must be a tiny fraction of the overall manufacturing costs.
cloud_t@reddit
Yeah. I went and checked and the B570 is 5nm while the 50 series is 4nm.
From-UoM@reddit
RTX 50 and 40 series are not TSMC N4.
They use TSMC 4N, which is Nvidia's custom TSMC 5nm process.
SoTOP@reddit
It is 4nm with very minor tweaks specific to Nvidia.
BausTidus@reddit
It's 5nm with tweaks, not the other way around.
Geddagod@reddit
According to what?
BausTidus@reddit
"To further enhance the N5 family's performance and power, TSMC introduced N4P and N4X, targeting the next wave of 5nm products." That's from TSMC's website about N4; since there is not a lot of information on the Nvidia-specific 4N, I could only list several news outlets like TechPowerUp.
nanonan@reddit
The "N5 family" includes 4nm products.
Geddagod@reddit
4nm itself is 5nm with tweaks, so 4N by definition, whether it's based on 4nm with tweaks, or 5nm with tweaks, would be considered "5nm with tweak" either way.
What's interesting is how people insist Nvidia 4N is actually based on 5nm instead of 4nm, when there's no information other than a twitter tech leaker (Kopite7kimi) claiming that's the case.
SoTOP@reddit
There isn't even a space for deep deliberations. We do know from Nvidia that their Blackwell AI chips are made using 4NP process, which very clearly is just TSMC N4P version for Nvidia. We also know Samsung 8nm for Nvidia was called 8N. With this knowledge anyone using a smidge of logic can easily deduce between N5 or N4 being used as 4N.
This was not clear cut when Ada was brand new, but the naming of Blackwell AI node and the fact that Nvidia did not upgrade the node from 4000 to 5000 series, which only makes sense if they were using 4nm already, seals this firmly into one side.
Yet people who claim 4N is 5nm were already upvoted and thus believe they are correct, so they will say the same "facts" the next time this comes up and the cycle continues.
ResponsibleJudge3172@reddit
Not just kimi. The retired Corgi as well. Based on launch timing of N4
And the fact that 4NP has more than 5% benefit for them unlike N4 vs N4P
cloud_t@reddit
To be fair to both of you and myself, these days, node tech info is kind of opaque due to market influence. The stock price is heavily affected by these perceptions. Intel started this trend IIRC with their ways to spin their 13, 12 and 10nm tech as comparable to what TSMC was doing at the time (8 and 6, right?), but then everyone started noticing how that impacted stock price and went along. A really shitty practice IMHO, and makes these arguments on nodes and their impact really confusing.
nanonan@reddit
It's 4nm. All TSMC 4nm are modified 5nm processes.
SoTOP@reddit
The only "evidence" there is about it being 5nm is that nvidia calls 4N a "5nm class" node. N4 is also "5nm class" node using same logic, since both are derived from baseline TSMC 5nm node and are just minor improvements over it.
The name itself should be quite a big clue to what 4N actually is based upon.
BausTidus@reddit
"To further enhance the N5 family's performance and power, TSMC introduced N4P and N4X, targeting the next wave of 5nm products." That's from TSMC's website about N4; since there is not a lot of information on the Nvidia-specific 4N, I could only list several news outlets like TechPowerUp.
Exist50@reddit
They're wrong about the numbers, btw. And the node is more or less the same as TSMC's using. Though I do expect Nvidia would have better density from a better design.
imaginary_num6er@reddit
Intel doesn’t make GPUs in their own fabs
Area51_Spurs@reddit
Thanks for the info
RealThanny@reddit
This is the B70, which has a completely different die than the B570.
cloud_t@reddit
OH SHIT I may have messed up the models.
lalalaphillip@reddit
Compare it to the 3070Ti (392mm2 on 8nm). Arc B70 is just 10% faster. It’s just not good.
cloud_t@reddit
Depends on price. Can you find a 3070 Ti at the same price? And even if you could, we must take inflation into account, since the 3070 Ti came out a good 5 years ago.
From-UoM@reddit
This isn't using the B570 BMG-21
It's using the much larger BMG-31.
flat6croc@reddit
I'm seeing 27.7B transistors for G31, 45 for GB203. So, well, well over half. Slightly fewer than GB205, which is said to be 31B.
LAUAR@reddit
Aren't better nodes more efficient?
cloud_t@reddit
Slightly, yes. But it's like 3-12% depending on the jump. The best guideline we have was when Intel was doing tick-tock on CPUs (one year an architecture change, the next a node change) - we could compare the same architecture/design on different nodes. Of course it may be slightly different for GPUs, but my take is it matters even less, because these days GPU architecture actually deprioritizes raster perf (in favor of tensor/ML/media performance and maybe memory bandwidth).
ResponsibleJudge3172@reddit
It's a matter of Intel never using high density logic because they are incompetent. Neither their CPUs nor their GPUs use anything but HP (high-performance) cells.
Wallcrawler62@reddit
Does this affect the average consumer with no knowledge of die size or usage? Serious question, not being sarcastic.
Exist50@reddit
It's not something people care about when buying a product, but it very much factors into the economics of selling that product in the first place.
kingwhocares@reddit
Nvidia uses TSMC 4NP (a custom 4N die), while Intel uses TSMC 5nm. Intel's Media Engine is much larger than Nvidia's, while also using 192-bit instead of 128-bit.
From-UoM@reddit
TSMC 4NP is for the Blackwell data center chips.
TSMC 4N (a custom N5) is used for the RTX 40 and 50 series.
The B70 is using the 368mm2 BMG-G31, which is 256-bit.
The 5080 uses the 378mm2 GB203, which is also 256-bit.
kingwhocares@reddit
The RTX 5070 Ti is also using the same die. Is Nvidia taking losses on it? Or is Nvidia taking losses with the RTX 5060, which shares the same die as the RTX 5060 Ti and costs $80 more?
Do you know what it rather says? It says that if Nvidia is using the same die on 2 GPUs with an MSRP difference of $250 (RTX 5080 vs RTX 5070 Ti), the die cost is actually lower than the additional costs incurred by introducing a new die.
ResponsibleJudge3172@reddit
It's called binning. Selling scrap as a lower tier with lower requirements, such as less voltage scaling, to make the product cheaper to produce.
kingwhocares@reddit
They are selling huge quantities of RTX 5060 and RTX 5070 Ti. If there were this many faults, they wouldn't be going with TSMC.
ResponsibleJudge3172@reddit
Fault is not just "not working" but also "not clocking high enough", "cache is too hot", "temperature is not to spec at higher voltages", "1 SM is doing funny things". I'm speaking of parametric yields. As the Chinese would say, Huang is an expert at cutting with precision. If they want 2 SKUs based on an XX203 chip, he sets 2 standards that allow one SKU to outperform AMD and the other to max out the yield. The 5070 Ti exists to max out the yield, so yes, it makes up the absolute majority of GB203 cards Nvidia has to offer.
Same reason Nvidia has never fully specced out a single AI GPU either. Not A100, not H100, not B100 and not likely R100. They are all cut down from a full chip to max out yield.
kingwhocares@reddit
Nvidia generally outsells the XX60 more than all others. So, it has little to do with that.
ResponsibleJudge3172@reddit
XX60 is an entirely different die and thus not relevant to the whole issue of GB203 allocation.
XX60 serves 2 purposes. A cut down XX60TI which improves yield (though less than XX70TI does for XX80 simply due to size) as well as a stand alone product which can stand alone due to being the size of a CPU die. In this case, 5060 is a cut down rtx 5060ti.
As I mentioned, the 5060 Ti die is relatively small, so the yield in any scenario is quite high; it's more a matter of Nvidia allocating which is which than any economic benefit.
kingwhocares@reddit
Same die as RTX 5060 ti.
Also, RTX 5070 ti is outselling RTX 5080 with same die size.
From-UoM@reddit
And what's the price difference between the 5070 ti and 5060 ti?
$320. Care to say something about that?
kingwhocares@reddit
They aren't using the same die! RTX 5070 ti and 5080 are.
From-UoM@reddit
So how do you expect intel to sell a die the size of the 5080/5070 Ti for less than the price of the 5060 Ti without incurring big losses.
kingwhocares@reddit
I don't expect them to incur losses. All of AMD, Nvidia and Intel are probably losing 40-50% of revenue to AIB partners and distributors/retailers, and that probably has the biggest impact on profits.
imaginary_num6er@reddit
It’s consistent. The A770 was supposed to compete with a 3070, it performed like a 3060. Battlemage was supposed to come close to a 5070, but performed closer to a 5060.
thunder6776@reddit
Similar to the 9070 XT's terrible die efficiency. Turns out there is a reason Nvidia is so far ahead.
Minced-Juice@reddit
A meaningless comparison.
A better way to compare would be to see the linearity of the increase in performance over BMG-G21 relative to the increased die size of BMG-G31.
HuckleberryFit5435@reddit
Don’t look up the 9070/9070XT die size
470vinyl@reddit
Damn it. I want a high end one.
porcinechoirmaster@reddit
We all want more competition in the high end GPU market, I feel.
pcgameshardware@reddit (OP)
Honestly, who wouldn't? ^^
- Jacky
Yearlaren@reddit
hi jacky
pcgameshardware@reddit (OP)
Hi 💅
detrophy@reddit
This card has the AMD Vega logo on it and uses a shroud that looks like it's partially from the VII.
Kinda funny.
buildzoid@reddit
the card on the right is literally an RX VEGA Frontier Edition
detrophy@reddit
No wonder it looked familiar. Thanks!
Skimmed through the article and I couldn't find a reference to the card. Usually such pictures do tend to have a meaning behind them.
Johnny_Oro@reddit
So it's below the 4070 and around equal to the 5060 Ti, a bit worse than I expected. But this single-fan blower model is capped at 230W (233W is the max it could do in Blender), while some other models are rated higher, up to 330W.
The 290W Gunnir model is substantially ahead of 5060 Ti in CP2077: https://videocardz.com/newz/intel-arc-pro-b70-tested-in-games-performs-close-to-but-below-rtx-5060-ti
It might be possible the 330W model is closer to 4070.
boringestnickname@reddit
5060 Ti is under 200W, yes?
I mean, I'm not not impressed. It's hard getting into AMD/Nvidia turf after all these years.
Perfect_Exercise_232@reddit
I have mine at 3-3.1GHz, giving better fps than stock while capped at 150 watts.
Johnny_Oro@reddit
Yes, but the B70's chip was designed to take more. If you take a look at the power efficiency chart, the B70 is very much equal to the B580 in power efficiency. That's not the case with the other cards: the 7900 XT is 35% less efficient than the 7600 XT, and even the 5060 Ti is 12% less efficient than the 5060, for example.
In a workstation GPU you don't want too much power and heat. But the B70's chip, a repurposed B770, is definitely designed to take 100W+ more.
Vb_33@reddit
It's exactly what I expected out of a B770, I'd scour my old posts if I had the time but I seriously called this. It pays to be conservative.
jrdnmdhl@reddit
You know your brand is in the toilet when the headline would read like a strong endorsement with your two top competitors' names in it but with yours it reads like damning with faint praise.
Wallcrawler62@reddit
I don't think Intel is "in the toilet" to most people. The average consumer doesn't know or care that their latest CPUs underperformed in price and performance. To me it's also hard to say it's in the toilet when their GPU foray is still relatively new. The turd is still dangling. Maybe it's even polished.
R-ten-K@reddit
The average consumer doesn't even know these GPUs exist. Intel has for all intents and purposes 0 dGPU market penetration.
gigashadowwolf@reddit
I thought their latest CPUs (Panther Lake/Series 3) were actually performing pretty well. They just sort of prioritized energy efficiency for laptops over desktop beast CPUs.
Their graphics cards though, yeah, they don't compete with AMD and Nvidia really. They are once again mainly praised for their energy efficiency.
Intel's current strategy doesn't seem to be about competing with AMD and Nvidia in either CPUs or GPUs. They are trying to make the industry itself stay competitive with Apple. They are trying to keep x86 relevant in an era where it's quickly becoming less so.
max123246@reddit
Intel has already given up on consumer graphics. There's no roadmap for consumer GPUs
We almost got a competitor to Nvidia in the consumer market and they've given up at the finish line
ea_man@reddit
I mean, they could have at least tested one LLM to show how prompt processing and token generation perform vs similarly priced GPUs; that's what this card is made for.
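For what it's worth, a hedged sketch of what such a test might look like, measuring prefill and generation throughput separately; it assumes a PyTorch build with Intel XPU support and uses a placeholder model name.

```python
# Hypothetical sketch: measure prompt processing (prefill) and token generation speed.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"
model_id = "Qwen/Qwen2.5-1.5B-Instruct"  # placeholder small model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(device)

prompt = "Explain SR-IOV in one paragraph. " * 20  # longer prompt to exercise prefill
inputs = tok(prompt, return_tensors="pt").to(device)

# Prefill: one forward pass over the whole prompt.
t0 = time.time()
with torch.no_grad():
    model(**inputs)
prefill_tps = inputs.input_ids.shape[1] / (time.time() - t0)

# Generation: time new tokens only (rough, since generate() re-runs a prefill pass too).
t0 = time.time()
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
gen_tps = (out.shape[1] - inputs.input_ids.shape[1]) / (time.time() - t0)

print(f"prefill: {prefill_tps:.1f} tok/s, generation: {gen_tps:.1f} tok/s")
```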