Was the PS3 actually more powerful than the 360?
Posted by Kaszilla94@reddit | hardware | View on Reddit | 129 comments
I've been playing on my 360 a lot lately and I'm still blown away by some of the games. Gears of War 3 still looks amazing today on a 1080p LCD. I've never really thought that the PS3's exclusives looked leaps and bounds above what the 360 had to offer. Even today I don't see much of a difference. But the 360 factually had a much better GPU and a better RAM setup. Was the Cell powerful enough to say the PS3 was overall more powerful than the 360, considering almost every third-party game looked and ran worse? I'm watching a lot of videos comparing the two and it keeps being mentioned that the PS3 was more powerful, but I'm just not really seeing much evidence of that. Was there anything the PS3 could do that the 360 couldn't? Killzone 2 looks great but has terrible input lag, and I still don't think it looks better than Gears 3 or Halo 4, and that's apparently one of the best looking games on the system. What do y'all think?
CammKelly@reddit
The theoretical gigaflop performance combined on both systems is surprisingly similar: 377 GFLOPS for the PS3 vs 355 GFLOPS for the Xbox 360.
The difference is in how it's split between CPU and GPU. The PS3's CPU is significantly faster than the Xbox 360's, but it's also much more highly specialised, whereas the Xbox 360's GPU is about 1/3 faster than the PS3's and was much more flexible and efficient due to its vertex and pixel shaders being unified.
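For reference, the kind of napkin math behind those combined figures, as illustrative Python. The unit counts and the per-lane FMA assumption are my guesses, not official Sony/Microsoft numbers:

```python
# Peak single-precision GFLOPS:
#   units * SIMD lanes * 2 (fused multiply-add) * clock in GHz
def peak_gflops(units, lanes, clock_ghz, flops_per_lane=2):
    return units * lanes * flops_per_lane * clock_ghz

# One Cell SPE: 4-wide FMA at 3.2 GHz -> 25.6 GFLOPS.
print(peak_gflops(1, 4, 3.2))  # 25.6

# 6 SPEs available to games plus the PPE's VMX unit:
print(round(peak_gflops(6, 4, 3.2) + peak_gflops(1, 4, 3.2), 1))  # 179.2
```

How you count units (6 vs 7 vs 8 SPEs, CPU alone vs CPU plus GPU) is exactly why quoted totals for these consoles range so widely.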
ers620@reddit
Where are you getting those flop numbers? I've usually read they were both around ~240 GFLOPS.
The Switch, which is far more powerful, is at 394 GFLOPS.
Spiral1407@reddit
He's likely taking both the CPU and GPU into consideration.
jocnews@reddit
I recall how Sony put out some bogus 2 TFLOPS marketing number on PS3 and then some person kept adding that to Wikipedia, sigh...
Less_Party@reddit
It played Blu-Rays.
Even in an alternate universe where every game got 100% usage out of the Cell you've built a gaming console with a great CPU and a mediocre GPU which is the opposite of what you want in a gaming machine, especially back in the mid-2000s.
WingedGundark@reddit
Cell is probably the most overhyped CPU ever, and its failure is often just handwaved as "iT wAs dIfFicUlT tO pRoGrAm".
It was simply a shitty general-purpose CPU, especially for a gaming console, but Sony was so heavily invested in it that they felt they had to use it. It was an in-order processor with a huge pipeline, which meant it suffered from massive, performance-crippling pipeline stalls that were often almost impossible to get rid of in general-purpose computing applications, especially in games. Using the SPEs even somewhat efficiently required painful, often completely manual memory management, something that shouldn't have been necessary by 2006 with any modern platform and development tools. Sony themselves realized quite soon that the whole concept was a disaster, hence they added a GPU to offload graphics from the nightmare that the processor was. Yeah, at first the PS3 was supposed to be just Cell doing all the lifting, and the RSX was added almost as an afterthought. Such a terrific architecture it was!
It was of course usable in some niche cases where you didn't have to worry about stalling pipelines and could feed the SPEs efficiently, such as some scientific computing, for example the PowerXCell 8i-powered IBM supercomputers. But I think the most telling thing about the general uselessness of the design is that it was more or less a one-off: IBM ceased all Cell development in 2009, and that was pretty much the end of the story.
I absolutely still love the PS3, and the architecture makes it very fascinating. But Cell was simply bad. If you get huge throughput out of it on paper but only a small fraction of that in real life, it is simply a shitty design.
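To put a number on how much a deep in-order pipeline with weak branch handling costs, here's a toy cycles-per-instruction model in illustrative Python. All the rates and depths are made-up round numbers, not measured PPE figures:

```python
# Toy CPI model: average cycles per instruction once branch-mispredict
# pipeline flushes are added on top of an ideal CPI of 1.
def effective_cpi(base_cpi, branch_freq, mispredict_rate, flush_penalty):
    return base_cpi + branch_freq * mispredict_rate * flush_penalty

# Deep pipeline, weak prediction (a PPE-flavoured guess):
deep = effective_cpi(1.0, branch_freq=0.2, mispredict_rate=0.15, flush_penalty=23)
# Shorter pipeline, better prediction:
short = effective_cpi(1.0, branch_freq=0.2, mispredict_rate=0.05, flush_penalty=14)
print(round(deep, 2), round(short, 2))  # 1.69 1.14
```

Even with these generous guesses, the deep in-order design burns close to 50% more cycles per instruction on the same branchy code.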
R-ten-K@reddit
Cell was ultimately an architectural dead end, and such a disaster that it arguably killed both PowerPC in the consumer space and IBM Microelectronics.
It was basically the final gasp of the old 80s/90s high performance computing mentality obsessed with exotic parallelism, manual orchestration, and software complexity, right as the industry was pivoting toward SoCs with heterogeneous compute and specialized IP blocks.
In many ways, Cell was exactly the kind of “theoretically brilliant” architecture you would expect from teams deeply experienced in HPC, but with limited intuition for real world multimedia and graphics pipelines. I remember IBM folk presenting early Cell development results at conferences using mostly scientific compute kernels as benchmarks, assuming they represented worst case multimedia workloads, which turned out to be completely wrong.
It was also bizarre how Cell ultimately had poor streaming memory performance, when that was a common requirement for many HPC use cases. So even as a purely compute architecture, Cell was shit.
Outside of a few narrowly optimized scenarios, the architecture was extraordinarily programmer-hostile and inefficient for the streaming multimedia workloads that were its intended application.
In other words, it was exactly the sort of system old-school IBM culture would produce: underperforming, bizarrely complex for the sake of complexity, overengineered in the portions that mattered less, expensive, and fundamentally out of step with the rest of the field.
WingedGundark@reddit
Very well put! It still seems that IBM and Sony marketing did the trick, as far too many people say it was more misunderstood than a failure. I thought so too at one point, for crying out loud.
If it were great, we would still have the architecture with us today. But we don't, and IBM killed it very quickly.
Less_Party@reddit
Lmao their very own Saturn.
MairusuPawa@reddit
Except that the Saturn was designed for 3D with a dedicated GPU (VDP1) for polygons from the get-go.
The internet is usually so, so wrong about this one.
Kaszilla94@reddit (OP)
*graphically
Fast_Passenger_2890@reddit
The Cell was theoretically more powerful than the Xenon when fully utilised, but the RSX was quite a bit weaker than the Xenos which was a quite forward looking GPU architecture.
Xenos was the first GPU ever to feature unified shaders, while the RSX still had separate Pixel and vertex shaders and the Xenos also had the 10MB of EDRAM
monocasa@reddit
The Cell's read pathway from VRAM was also ungodly slow. Something like 16MB/sec.
Shouldn't have been an issue, since the CPU doesn't normally need to read from VRAM much, but it would have helped later in the generation when the SPUs were having to pick up the slack and do work that would normally be a screen-space shader pass.
Alternative_Spite_11@reddit
The SPEs always did most of the vector math we now consider “GPU work”.
R-ten-K@reddit
No. In theory that was the original expectation, for the SPEs to handle a large portion of the graphics pipeline, but Cell ended up behaving quite inefficiently for streaming memory patterns and didn't have enough bandwidth to keep the SPEs fed as shaders.
Sony eventually had to bolt on a separate external GPU to handle the bulk of shader computation and image generation. So in practice, the SPEs ended up being used more for things like game logic, physics, compression/decompression, media processing, and even OS background tasks.
Alternative_Spite_11@reddit
I'm actually not familiar at all with the PS3's memory subsystem or caching strategy, but not having enough bandwidth to feed the SPEs is the kind of thing they should have seen from napkin math at the very beginning of the project, like an hour after the PS2 launched. I was under the impression the standard Cell processor design put forward by IBM used a boatload of SRAM to keep the SPEs fed. Did Sony think IBM just did that for fun?
R-ten-K@reddit
The thing about Cell is how little intuition the design teams seemed to have for the actual multimedia/graphics workloads they were supposedly targeting.
I think each SPE had like 256 KB of local SRAM that had to store both instructions and data. And on top of that, all memory movement had to be manually orchestrated through DMA over a shared ring bus, with all the SPEs and the CPU competing for memory controller bandwidth. So you ended up with an absurdly complex model: multiple isolated memory spaces, explicit DMA management, asynchronous orchestration, terrible interrupt overhead, and constant contention/resource-management headaches pushed onto the developer!
There was no modern NoC-style abstraction or sane unified memory behavior. Most of the orchestration burden was simply dumped onto the programmer.
On paper the architecture may have looked “theoretically brilliant,” but in practice huge amounts of performance were lost to synchronization, data movement, interrupt servicing, and mem management overhead that the original designers badly underestimated when writing numbers on those napkins.
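For a sense of what that manual orchestration looked like, here's the double-buffered ("ping-pong") pattern SPE code had to hand-roll, sketched in Python. The real thing is C with MFC DMA intrinsics; the list slicing here just stands in for DMA transfers, and all names are illustrative:

```python
LOCAL_STORE_CHUNK = 4  # pretend the local store holds 4 elements per buffer

def process(chunk):
    return [x * 2 for x in chunk]  # stand-in for the actual SPE kernel

def spe_stream(data):
    """Process `data` chunk by chunk: while one buffer is computed on,
    the other is being filled (the fill is our pretend 'DMA in')."""
    out = []
    buffers = [None, None]
    buffers[0] = data[0:LOCAL_STORE_CHUNK] or None  # first "DMA in"
    i, cur = LOCAL_STORE_CHUNK, 0
    while buffers[cur] is not None:
        nxt = cur ^ 1
        buffers[nxt] = data[i:i + LOCAL_STORE_CHUNK] or None  # prefetch next
        out += process(buffers[cur])                          # compute current
        i += LOCAL_STORE_CHUNK
        cur = nxt
    return out

print(spe_stream(list(range(10))))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

The point is that the overlap of transfer and compute, which a cache hierarchy gives you for free, had to be written explicitly for every data stream.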
It really was an architectural disaster. They somehow managed to combine many of the worst aspects of both scalar and data-parallel approaches into one bizarrely overengineered system that often felt actively hostile to software developers. Like that whole team hated software.
Alternative_Spite_11@reddit
Yeah, I always thought it was weird that every bit of it was explicitly in-order, when high-performance CPUs had generally been out-of-order for around a decade at that point. Sure, it makes sense for the SPEs, since they were originally intended to do most of the vector work a GPU does now, but you only got one true general-purpose CPU core, with so little memory and a weak caching hierarchy (well, I guess I shouldn't call it a hierarchy when 7 of the main compute elements each had just one fairly small local store for both instructions and data). Now I'm wondering if the rpcs3 emulator just treats each PPE/SPE as a strict in-order thread (or preferably a physical core if you have enough).
R-ten-K@reddit
They had no choice but to use an in-order scalar core. They were initially targeting 90nm and an out-of-order core would have taken half of that die alone, if not more.
Emulators don't have to be cycle-accurate, so I assume they are just doing a simple JIT and calling it a day. A modern x86 CPU should have enough cores and SIMD units to do PS3 software emulation faster than the original hardware.
monocasa@reddit
That is not the case. The RSX had pixel and vertex shader hardware that were used as usual under normal operation.
EmergencyCucumber905@reddit
That doesn't sound right. It's not 15GB/s read and 20GB/s write?
AlyxBizan@reddit
I remember the leaked presentation "No, that's not a typo..."
Alternative_Spite_11@reddit
The PS3 did most of the vector calculations that would normally be on a GPU on the SPEs. Its "GPU" was basically just the render backend, equivalent to the ROPs on a modern GPU.
monocasa@reddit
That is not the case. The RSX had pixel and vertex shader hardware that were used as usual under normal operation. This is in addition to its ROPs.
hellotanjent@reddit
Former console graphics/engine programmer from the X360 era here - the above is correct.
One quirk of the X360 though is that the EDRAM that stores the framebuffer was sliiiightly too small for 720p with 2x antialiasing, which forced a lot of games to do multi-pass rendering or weird scaling tricks to work around it. The PS3 could just have the whole framebuffer in VRAM at once, but since the GPU itself was weaker the PS3 ended up doing scaling tricks too, just for performance instead of RAM reasons.
letsgoiowa@reddit
Ah, is that why Halo 3 was at something like 640p?
ItsMeSlinky@reddit
Exactly. Almost all 360 games ended up doing strange resolution combos like 1280x704 or 1024x680 to ensure they fit within the eDRAM buffer.
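The arithmetic behind those odd resolutions is simple. A sketch, assuming a 32-bit color plus 32-bit depth/stencil target, which was the common case:

```python
# Does a render target fit in the 360's 10 MiB of eDRAM?
#   bytes = width * height * MSAA samples * (4 B color + 4 B depth/stencil)
EDRAM = 10 * 1024 * 1024

def fb_bytes(width, height, msaa=1):
    return width * height * msaa * (4 + 4)

print(fb_bytes(1280, 720) <= EDRAM)      # True:  720p with no AA fits
print(fb_bytes(1280, 720, 2) <= EDRAM)   # False: 720p + 2xAA needs tiling
print(fb_bytes(1024, 600, 2) <= EDRAM)   # True:  hence the sub-720p buffers
```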
monocasa@reddit
The higher end games pretty quickly started doing stuff that required multiple render targets anyway that gen, so even a little larger EDRAM (say 12MB) wouldn't have cut it anyway.
Saneless@reddit
The last bit is why companies started developing with the PS3 in mind first. It's not that those versions were necessarily better; it just avoided disaster, because if you led with the 360 and your RAM split was 350/150, you were in trouble on the PS3 with its fixed 256/256 split. But if you made it work with 250/250, the 360 was going to be just fine either way.
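The constraint is easy to see in numbers (a sketch; the 350/150 and 250/250 figures are just the example splits from the comment above):

```python
# PS3: fixed 256 MB XDR (system) + 256 MB GDDR3 (video) pools.
# 360: 512 MB unified, split however the game likes.
PS3_SYSTEM, PS3_VIDEO = 256, 256  # MB

def fits_ps3(system_mb, video_mb):
    return system_mb <= PS3_SYSTEM and video_mb <= PS3_VIDEO

print(fits_ps3(350, 150))  # False: a 360-led budget overflows one PS3 pool
print(fits_ps3(250, 250))  # True: and 250+250 trivially fits the 360's 512 MB
```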
R-ten-K@reddit
The 360 ultimately had the more balanced architecture overall, and just as importantly, it had far better software tooling and developer support from day one.
The PS3 was an awkward design. Sony originally bet heavily on Cell, but it became clear it was not going to meet the original perf expectations, so they essentially had to bolt on the RSX GPU relatively late in development. The result was a Frankenstein arch that was notoriously difficult to program.
In practice, even highly optimized PS3 code often only managed to roughly match what the 360 could achieve. And it took years for the PS3 stack to mature enough to really exploit the hardware properly.
To Sony's credit, they clearly learned from that experience and adopted far more conventional, developer-friendly architectures after the PS3. But for a while, Sony had one of the most sadistic architecture teams in the industry. Both the PS2 and PS3 were particularly developer-hostile architectures.
Noreng@reddit
There was always a GPU in the PS3; the original plan was actually to use a Toshiba design similar to the PS2's Graphics Synthesizer (a bunch of ROPs and TMUs). This proved to be a poor idea, so Sony asked Nvidia for a GPU, and that's how the PS3 ended up with the RSX. The bigger problem with the PS3 was that it was delayed for almost a year due to a shortage of blue laser diodes for the BD-ROM drive.
bestanonever@reddit
In some ways, the PS3 was peak Sony hubris, and they paid for it with slow sales in the early years and a long road to hardware profitability: aiming for an in-house GPU design that didn't work out instead of going to the experts in the field from the start, using a complex co-developed CPU, once again shipping a console that pushed their new media format in a trans-media strategy, and all at outrageous pricing. And, to top it all off, a harsh dev environment for the actual game devs.
It was very ambitious but it also felt like a development based on the smashing success of the two previous generations. They took their success for granted.
Routine_Ask_7272@reddit
Agree.
With the PS1, CDs had been around for many years.
With the PS2, DVDs had been available for ~3 years.
With the PS3, Blu-ray Discs had only been available for a few months. The first standalone players launched in June 2006. The PS3 was launched in November 2006.
The PS3 had a lot of new hardware, compared to the PS2. Bluetooth, WiFi, HDMI, along with a HDD & Ethernet in every unit (PS2 had these as optional features).
Sony was ambitious, but maybe too ambitious. I loved my PS3, but Sony lost a lot of market share that generation.
sharpshooter42@reddit
It was also gigabit Ethernet too! Xbox 360 did not get that
R-ten-K@reddit
True, but the original “GPU” for the PS3 was closer to a display generator and video decoder than a conventional modern GPU. The expectation was that the SIMD/SPU units on Cell would handle most of the geometry, lighting, and rest of "shadery" workloads themselves, which actually made a lot of sense, conceptually, as a highly programmable graphics pipeline.
Unfortunately, the Cell SoC ended up having severe bottlenecks, with the internal ring bus and memory controller unable to feed all that IP with the kind of streaming bandwidth required for full HD. The RSX GPU ended up getting bolted onto the design fairly late in the process.
That is also why the final memory architecture was so awkward and constrained, especially given how little memory the system already had to begin with. PS3 essentially ended up with a split and somewhat dysfunctional memory model born out of a late stage architectural compromise.
(Disclosure: I was on the RSX team, and the interactions with Sony were… “interesting.” It definitely burned NVDA on getting involved w any console development for quite a while afterward.)
FruktSorbetogIskrem@reddit
The GTX 8800 came out before the PS3. What was the team's reaction? Because I think if Sony hadn't had Cell and had simply gone with Nvidia for the entire development of the PS3, we would definitely have gotten a different console. Sony really took the lessons from the PS3's mistakes to heart.
R-ten-K@reddit
The dev cycles for Tesla and the PS3 were very different. So even though the PS3 may have come out slightly after G80 (although honestly I don't have those timeline details off the top of my head at this point), Sony likely had started the development of Cell way before.
In a sense Xbox 360 is what the PS3 would have looked like if Sony had simply gone with a GPU w programmable shaders rather than wanting to do a big chunk of the graphics pipeline on the SPEs.
There is also the matter of cost: the RSX was significantly cheaper than what a G80-derived GPU for the PS3 would have cost.
Johnny_Oro@reddit
Yeah I figure the PS3 was going to be like PS2 on steroids, just like PS2 was PS1 on steroids.
Geometry calculation and T&L on the CPU die, rasterization with really fast eDRAM on a separate die. It worked well for the PS1 and PS2, saving a ton of cost and improving graphics data throughput at the same time. Generally, having fewer chips, and having all your chips come from the same IP holder and manufacturer, saved costs. Nintendo brokered a $1 billion agreement with IBM for the Gekko CPU, and probably another billion with ATI, and on top of that lost 20 billion yen annually subsidizing the GameCube hardware. Microsoft lost $4 billion, mostly to the botched Nvidia deal and the mandatory hard drive. The PS2 was the only profitable console hardware that generation, taking a loss only in its first year and making a hardware profit ever after.
Yeah, combining CPU and GPU meant compromises had to be made. But while the CPU had to be clocked slower, the PS2 had CPU features that its competitors the GameCube and Xbox didn't, like two 64-bit ALUs and a huge vector coprocessor that could be used for both graphics and gameplay logic. And while the PS2 had no dedicated hardware for specific T&L features, it could brute-force them with the sheer bandwidth of the eDRAM. But it was a much more complicated system to optimize for, with hard-to-use in-order execution and a tiny amount of cache.
By the late 2000s, though, things had changed. You couldn't combine a CPU and hardware T&L on a single die without sacrificing a ton of performance anymore, and big eDRAM wasn't more economical than a slab of high-bandwidth GDDR3.
The PS4/Xbone's Jaguar APU changed the game by combining a half-decent CPU and a high-performance GPU in a single package. It's a very cost-efficient solution. It's more a case of AMD's ATI buyout paying off than of Sony "learning from their experience", though.
LaDiDa1993@reddit
The CPU definitely was, on paper. It was also notoriously complex to utilise to its full potential.
FunCalligrapher3979@reddit
Both looked and performed like ass past 2008
Kaszilla94@reddit (OP)
The games looked much better post 2008 to me
MC_chrome@reddit
Theoretically, yes.
However, many PS3 titles did not properly take advantage of the Cell processor (especially early PS3 titles). As the generation progressed, however, games got better support
crazy_goat@reddit
Correct. Just to add: the heart of the Cell processor was a general-purpose PowerPC CPU core, but there were fewer of them (and slower) than in the Xbox's Xenon CPU. The advanced companion SPU cores were proprietary and needed engine and game support to be utilized (and doing so was itself a dance).
The tools and familiarity with PowerPC made those the favorable cores to use for rapid game development, meaning early PS3 games basically relied on the PowerPC core exclusively, putting the PS3 at a huge disadvantage against the Xbox: most of the CPU package was dark.
HustlinInTheHall@reddit
Bingo. The PS3's big failure was the inability to share learnings and tools beyond Sony's own first-party studios, in particular by building games themselves that could functionally serve as tech demos. In the PS4 era they focused on exactly that, and even though the Xbox was more powerful and had more features, the PS4's simplicity and ease of development tools made it much better to develop for.
Ninja_Weedle@reddit
The base PS4 was a bit faster than the base Xbox One thanks to choosing GDDR5 over DDR3 and having more shader cores on the GPU, though they were largely the same, or at least extremely similar: they use the same CPU, and the GPUs are the same architecture. The One X was faster than the PS4 Pro, but that wasn't the case early in the generation.
Buris@reddit
The base Xbox One was significantly slower graphically. It was the difference between 720p and 1080p in quite a lot of titles.
detectiveDollar@reddit
It was 900p vs 1080p if I remember correctly. The Xbox One also reserved some system resources for snap, although that probably affected framerates more than resolution.
Zhunter5000@reddit
And the CPU on the Xbox One ran 150 MHz faster (1.75 GHz vs 1.6 GHz), which made it run smoother in select CPU-intensive titles such as Battlefield 1.
iluvchromosomes@reddit
That is a very interesting way to say "Developers won't use better technology. We must force them to do it."
Sony had plenty of information to teach devs. They ignored it.
Fast forward to the PS5: Sony rolls out new storage technology that makes SSDs as fast as RAM. But now they own a whole bunch of studios, and they made the devs properly use that technology. And we see the results: tons of amazing games and the PS5 dominating.
If Sony was not vertically integrated now, it would have been PS3 all over again. Devs would have ignored the new tech and done the least amount of work possible, as is typical.
dagelijksestijl@reddit
Sony’s response to developer complaints about the PS2 was to make developing even more difficult. Like requiring 1000 lines of code to accomplish something the 360 does in around 50 lines. They killed the entire mid-budget Japanese game in the process.
virtualmnemonic@reddit
Lol, no. The SSD peaks at like 5.5 GB/s, less than half that of DDR3-1600 RAM, to say nothing of access times. Even the best SSDs and controllers available can't compete with RAM in raw bandwidth, and especially not in latency.
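The back-of-envelope comparison, as illustrative Python. I'm taking 5.5 GB/s as the PS5's advertised raw SSD figure and a single 64-bit DDR3-1600 channel for the RAM side:

```python
ssd_gbps = 5.5                        # PS5 SSD, raw, advertised
ddr3_1600_gbps = 1600e6 * 8 / 1e9     # 1600 MT/s * 8 bytes = 12.8 GB/s
print(ddr3_1600_gbps)                 # 12.8
print(ssd_gbps < ddr3_1600_gbps / 2)  # True: under half of one DDR3 channel
```

And that's before latency: RAM access is tens of nanoseconds, SSD access is tens of microseconds, around three orders of magnitude apart.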
scrndude@reddit
Lol when the PS3 launched the dev docs were only in Japanese
dagelijksestijl@reddit
There only was one PPE in the Cell, which mostly exists to coordinate the SPEs and run tasks which are hard to run in parallel.
The problem with the architecture was that it was the worst of both worlds: it wasn’t good as a massively parallel processor (GPGPU would eat its lunch within a year of release), nor was it good at serial tasks.
Capital-Froyo-4359@reddit
The PS3 only had 1 performance core compared to the 3 that Xbox had.
doscomputer@reddit
Objectively wrong: the 360 had a faster GPU and way more CPU integer performance.
The PS3 had what were essentially 6 bolt-on 128-bit FPUs, which is cool from a hardware-design perspective: they could do interesting post-processing FX and physics engines on their own transistors, separate from the main system. The theory is sound and the integration was good, but in the real world the applications are just limited.
Yes, there are compiler tricks to offload tasks to the SPEs that wouldn't normally be handled by FPUs, and ways to force compute to happen there, but generally it wasn't the best tool for the job in a gaming situation.
MC_chrome@reddit
So why weren't there entities trying to make supercomputers out of 360's instead of PS3's?
wintrmt3@reddit
Supercomputer workloads are much simpler than a game, running the same few calculations over and over for a gigantic grid.
FruktSorbetogIskrem@reddit
It also had Linux support before Sony took it away.
HavocInferno@reddit
The ability to scale to a supercomputer (with specially written software) does not inherently make an architecture great for a local consumer scale gaming machine.
PMARC14@reddit
The SPEs are supercomputer architecture, IBM pitched it to Sony based on that while the Xbox was basically just a PowerPC PC (and a ton of lockdowns Microsoft developed since the first Xbox).
f3n2x@reddit
Cell was just awful for a console. It's basically a normal CPU core plus 7 SIMD cores, each with their own small local memory you have to explicitly write to and read from. The idea of Cell was basically an awkward version of "GPGPU" before GPUs could do it, except the first GPU that could (the 8800 GTX) released the same year as the PS3. And on a console in particular, the silicon and power budget was better spent on a better GPU instead, like the 360 did, even though it wasn't as freely programmable.
detectiveDollar@reddit
Funny you mention that, because originally the PS3 was only going to have the Cell chip, with the SPUs handling the graphics. When this didn't work out, Sony partnered with Nvidia to make the RSX barely a year out from the console's launch.
HealthyFruitSorbet@reddit
It’s strange that every dev kit did have an Nvidia gpu.
Aggrokid@reddit
There was the aborted Toshiba GPU project somewhere in between the two, either due to performance or format war tension.
HealthyFruitSorbet@reddit
It directly affected game development. Games overall ran worse, and it took more time to port to the PS3. It also affected emulation for Sony. I'd argue that Cell was a lesson for Sony: there are games on PS4/PS5 that can absolutely push the hardware regardless.
fatso486@reddit
The 360 was kind of a no-brainer. I was actually shocked when Sony reached the same level of success by the end of the generation. The Xbox had a better GPU and no weirdly configured CPU that no one could easily figure out how to optimize for. I remember pretty much all multi-platform games either looked better or performed noticeably better on Xbox. The fact that the console came out almost a year earlier and was cheaper to make and sell just added insult to injury.
FruktSorbetogIskrem@reddit
Not only that, but it also affected game development. Games took longer to make and cost more. Cell also affected Sony's ability to handle emulation, due to its complexity. Games that took advantage of Cell, like Crysis 2/3, GTA 5, and Battlefield 3/4, look the same or run slightly worse compared to the Xbox 360.
unixmachine@reddit
If we look at purely technical aspects, the 360 is more powerful. Better textures, graphics effects, resolution.
To overcome the PS3's shortcomings, developers made the games with a more cinematic presentation, with well-controlled scripted cutscenes. You would put a character on screen with many polygons and textures, while hiding the rest of the rendering, for example.
That's why players get the false impression of better graphics. Art and presentation end up overshadowing purely technical aspects.
Naughty Dog developing for the 360 would likely achieve similar results to the PS4, but at a lower resolution. Just look at some late ports like Rise of the Tomb Raider and Titanfall.
FruktSorbetogIskrem@reddit
And games that did take advantage of Cell (Crysis 2/3, Rage, Battlefield 3/4, L.A. Noire, GTA 5, etc.) looked equivalent to, or ran a bit worse than, the Xbox 360 versions.
Nutsack_VS_Acetylene@reddit
People hype up the Cell a lot. The Cell was a weird (and in hindsight backwards-looking) processor. It was an in-order processor with a really long pipeline, and the SPEs didn't have dynamic branch prediction or speculative execution! It was awful for general CPU tasks and branch-heavy code. Compilers weren't magic enough to constantly keep it fed and avoid pipeline stalls, so it relied on a lot of manual management. A lot of gaming tasks can have a linear structure that was extremely fast on the PS3, but a CPU also has to handle branching and hard-to-predict operations, and the reality is that a LOT of CS problems take the form of trees and branch-heavy structures. The GPU was worse than the 360's, but a lot of the graphics tasks were expected to be processed on the SPEs. So the Cell was weirdly powerful for very specific linear tasks but bad at actually being a general-purpose CPU. The PPE core was supposed to cover for this weakness, but it seems to have failed, as all the impressive wins for the Cell were tasks that heavily leverage the SPEs. The 360 was also in-order, but it did have speculative execution and dynamic branch prediction, without the complication of SPEs that lacked those features.
That being said, CUDA was released in 2006 (publicly in 2007), and OpenCL wasn't released until ~2009. The future for consoles was GPU compute and unified memory for GPU-CPU operations without moving things around in memory, rather than weird vector units. However, the Cell was designed in this strange intermediate period when GPU compute was still a research subject rather than a mainstream technology. All in all, an interesting experiment, but companies like Nvidia ended up predicting the future rather than Sony and IBM.
EmergencyCucumber905@reddit
They didn't need it. The SPEs aren't for general purpose code with many branches. They are for number crunching.
Nutsack_VS_Acetylene@reddit
That's exactly the problem with the Cell in the context of the PS3: the CPU was basically one core with integrated accelerators that couldn't really do typical CPU tasks and couldn't manage themselves, hence why I said "it was awful for general CPU tasks".
The development difficulty and many of the performance woes came from the fact that you would eat up CPU time babysitting the SPEs and still need to run a game on the thing. I don't think I ever implied the SPEs were bad at linear, dense compute tasks; I said the exact opposite. In the context of OP's question, the 360 was FAR more powerful for CPU tasks, which are unavoidable in something like a video game.
playerlsaysr69@reddit
Yep. There's this write-up https://gbatemp.net/blogs/this-is-how-capable-the-ps3s-cell-processor-really-was.14922/ that explains the Cell perfectly, and it shows the SPEs were actually closer to GPUs than to CPUs.
Nutsack_VS_Acetylene@reddit
Nice write-up. Sony originally wanted to forgo a GPU and just have two Cell processors, but the performance wasn't what they expected, so they added a GPU shockingly close to launch.
Spiral1407@reddit
I don't think it ever was. It just had way too many bottlenecks and poor design decisions that often made it perform worse in realistic scenarios.
Narishma@reddit
I think that had more to do with the fact that every PS3 had a guaranteed hard drive. That wasn't the case with the Xbox 360, which had a few models without one.
Spiral1407@reddit
The 360 was the target system for development during that generation, so most games were already being built with the assumption that there wouldn't be a HDD.
HealthyFruitSorbet@reddit
Nvidia released the GTX 8800 series, which featured unified shaders, before the PS3 launched.
Noreng@reddit
That was because Sony had to delay the PS3 for almost a year waiting on blue laser diodes for the Blu-ray drive.
JonWood007@reddit
GPU-wise they were pretty close: PS3 ≈ 7800 GT, Xbox 360 ≈ X1950. I thought they were equivalent, but Google is telling me the X1950 was actually faster. The PS3 had the Cell processor though, which seemed very powerful at the time but was very hard to code for.
All in all the 360 likely offered a better overall package at a lower price point too. Microsoft really hit it out of the park with that one.
EmergencyCucumber905@reddit
I work in HPC now and I've written code for Cell way back when, but not game code. So I might have some blind spots here.
My take would be that the Xbox 360 had more performance available for games and general-purpose tasks.
The PS3 had more raw throughput, but it was locked up in the SPEs:
Each SPE is a vector processor (with a pretty rich ISA for the time) with 256KB of local store (SRAM). You load programs onto the SPEs and DMA data in/out of the local store, preferably in large chunks (16KB is the max per request). Great for a lot of HPC applications, but I don't see how it would be amenable to game logic. I think they were often used for pre-processing data before sending it to the GPU, or post-processing the scene.
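The standard trick was double buffering: start the transfer for the next chunk, compute on the current one, then swap. Here's a rough sketch of the pattern in Python (just the ping-pong structure, obviously not real SPE code; the chunk size stands in for the real hardware's 16KB per-request cap):

```python
CHUNK = 16384  # elements per "DMA", standing in for the 16KB per-request cap

def process_stream(data, work):
    """Process data chunk by chunk with two alternating buffers.

    On a real SPE the MFC would be DMA-ing buffers[1 - cur] from main
    memory while the SPU computes on buffers[cur]; here the 'transfer'
    is just a slice copy, so only the access pattern is illustrated.
    """
    out = []
    buffers = [data[0:CHUNK], None]  # "prefetch" the first chunk
    cur, offset = 0, CHUNK
    while buffers[cur]:
        # kick off the "transfer" of the next chunk before computing
        buffers[1 - cur] = data[offset:offset + CHUNK]
        offset += CHUNK
        out.extend(work(buffers[cur]))  # compute overlaps the transfer
        cur = 1 - cur
    return out
```

The whole point is that the SPU never waits on memory if the compute per chunk takes longer than the transfer, which is why dense streaming workloads flew on Cell and pointer-chasing game logic didn't.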
InformalEngine4972@reddit
Games like The Last of Us and Uncharted 3 looked better than anything on Xbox
TranslatorBoring2419@reddit
God of War 3 in 3D is very intense. During the diving scenes I'd get severe vertigo. It was a trip lol.
mittelwerk@reddit
GoW 3 also benefitted from fixed camera angles. Not only could they save on polygon budget but, because of the fixed camera, they also didn't have to render a massive open world, which allowed higher-resolution textures for a given scene.
Kaszilla94@reddit (OP)
What makes these games better than anything on the 360? Who's to say these games wouldn't look just as good or better if they were developed for the 360? Talented first party developers don't mean more powerful hardware.
InformalEngine4972@reddit
Yeah they just “happen” to look better on accident.
So you are saying Xbox devs have no talent? Since the last of us really looks half a generation better than anything on Xbox.
Kaszilla94@reddit (OP)
What looks better? Looking better is subjective. Nothing on the PS3 looks anywhere near half a gen better than Gears 3, Judgment, or Halo 4. What are those games doing that the 360 could not do? The PS3 simply was not powerful enough to produce games that look half a gen above what the 360 was outputting. And yes, I would say Sony's devs are more talented than MS's, generally speaking.
InformalEngine4972@reddit
I mean use your eyes. I don’t know what you are arguing about. It’s a widely known fact.
Are you really going to say a game like gears 3 with its piss filter browns look comparable to the last of us ?
The last of us holds up so well it’s still a current gen title to show off graphics.
lowlymarine@reddit
In the sense that it was completely remade on a new engine with new assets, sure.
InformalEngine4972@reddit
Nah, Naughty Dog has been using the same engine since Uncharted 3.
detectiveDollar@reddit
That means nothing, every Halo game has used the same engine.
InformalEngine4972@reddit
I’m not the one using it as an argument?
Kaszilla94@reddit (OP)
I don't see it. Are you playing the game on an actual PS3 or a PS4/PS5? Halo 4 looks like an Xbox One game to me.
detectiveDollar@reddit
It was originally pitched as one in fact, but the 8th generation was delayed due to the recession recovery, so it was released on 360 instead.
Which I imagine is why Halo 4 had many regressions (enemy AI and mission size being scaled down) as well as some odd quirks (dropped weapons despawning quickly and low-poly models being used a lot closer to the player).
Tiddums@reddit
Conventional wisdom is that the 360 was easier to use but the PS3 was more powerful. But I'm not particularly convinced that any difference in power, such as it existed, was noticeable in practice when both machines were pushed to their limits. Uncharted 3 and TLOU look amazing, but so do Halo 4 and Gears of War Judgment. Microsoft's lineup of first-party games that pushed the graphical envelope was sparse from 2010 onwards as they pivoted to Kinect and drew down investment in 1P software, which was exactly the period when Sony's first party started firing on all cylinders. So I suspect it's easier for people to recall way more amazing-looking Sony games from 2010-2013 than Microsoft games from the same period. But they do exist, just in smaller numbers!
Keep in mind also that a lot of what people were praising in some PS3 first-party games like Uncharted was the quality of their animation work and the overall technical polish, which has more to do with budget and talent than with the technical characteristics of the PS3. People used to share endless low-resolution gifs of Drake doing melee attack animations because they looked so slick, but that's got nothing to do with the PS3 and everything to do with the Naughty Dog art team. Uncharted does a lot of cool tricks where high-quality prerendered cinematics transition completely seamlessly into gameplay, and some people never realize what they were watching *wasn't* in-game footage because they did such a good job with it.
dakjelle@reddit
Yes, in theory, but it took years to get that power onto the screen, and by then the vast majority of 360 titles looked better.
In many ways it was the peak console and also why the original hardware path for consoles ended.
Sony learned a hard lesson and is in many ways to this day benefitting from that lecture.
Also unified memory is king 😎
Motor_Trouble2280@reddit
The PS3 has 6 vector processors that can do geometry preprocessing before the RSX even touches the scene. This is important, because G70 has terrible vertex performance. So, without even touching on everything else the SPUs can do to aid the RSX, you have the ability to shift tons of geometry at a relatively high framerate. The SPUs can also do post-processing, run physics, and handle jobs similar to the 360's SMT tri-core. The memory subsystem is the weak link with the split RAM, but streaming and lots of DMA helped Killzone look the way it does.
Killzone 2 is outside the 360's capabilities. It's deferred rendering with a crazy amount of behind-the-scenes jobs, full physics, 4x MSAA, full 720p, per-object motion blur, etc. The closest thing the 360 got tech-wise was Crysis 2, which had a lower resolution and ran like crap.
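To give an idea of the pre-culling those SPUs did, here's a toy back-face cull in Python. Purely illustrative (real SPU culling chewed through big SIMD batches of transformed vertices), but the principle is the same: reject triangles before the GPU ever sees them.

```python
def signed_area2(tri):
    # twice the signed area of a screen-space triangle;
    # <= 0 means back-facing or degenerate under a CCW winding convention
    (x0, y0), (x1, y1), (x2, y2) = tri
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)

def cull_backfaces(triangles):
    """Keep only front-facing triangles, so the GPU never sees the rest."""
    return [t for t in triangles if signed_area2(t) > 0]
```

Every triangle dropped here is vertex work RSX never has to do, which is exactly how the SPUs covered for G70's weak vertex pipe.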
Kaszilla94@reddit (OP)
Gears 3 looks better than Killzone 2 to me. KZ2 was very monotone and the input lag was terrible.
Motor_Trouble2280@reddit
You asked if the PS3 was more powerful, not if Gears 3 looks better. Gears 3 and Halo 4 use a PBR workflow that became standard at the end of the generation going into the PS4/Xbone era. It's all textures and specular. Killzone 2 came out in 2009 and was doing far more than both games under the hood.
Kaszilla94@reddit (OP)
I don't care when the games came out. Sony's developers are more talented than MS's. If we compare the best of the PS3 to the best of the 360, there really isn't that much of a difference. A console with a weaker GPU and worse memory cannot be more powerful than a console with a better GPU and memory.
Motor_Trouble2280@reddit
You have a bias, making it hard to have an honest conversation.
The PS3 is the equivalent of having a GPU + CPU + AVX-512, which is useful for more than textures. PS3 first-party games were pushing more TECH. All you have to do is look at the technical presentations they have amassed online. The only 360 game with tech matching Killzone 2 is Crysis 2, which runs like crap on the 360.
Halo 4 and Gears 3 are more pleasing to the eye because they came at the end of the generation, using techniques that had just been discovered. Halo 4 has nearly no post-processing and a very simple lighting setup; it looks good because of the PBR workflow. Gears 3 runs at a terrible framerate because per-object motion blur is too expensive there.
The 360 is a nice piece of kit, but it relies on heavy use of tiling, and its advantages are just in per-pixel shading power over the RSX. It also has better vertex processing, before the SPUs get involved. But you aren't getting Killzone 2, Uncharted 3, and GOW3 with that setup.
Kaszilla94@reddit (OP)
So what exactly made the PS3 more powerful than the 360 then? Are you saying the Cell was powerful enough to not only close the gap from the inferior GPU and memory but also surpass the 360? And Crysis 2 still ran better on the 360. KZ2 is very overrated graphically. GOW 3 looks great but it could definitely be done on the 360. Same with UC3. Rise of the Tomb Raider is another game that looks at least on par with the games you mentioned.
Motor_Trouble2280@reddit
RSX is only worse because of vertex processing and less shading power. In a vacuum, Xenos wipes the floor with it. But the 360's CPU is a little anemic with its 6 threads, and it needs a tiled renderer to take advantage of the EDRAM. It's a very suitable system for forward-rendered, 2x MSAA games with decent amounts of geometry. Unreal Engine 3 is basically perfect for what the 360 offers.
The PS3 has multi-purpose vertex processors that can help RSX punch far above its weight. The amount of culling in PS3 first-party games tends to be insane, before RSX even renders the scene. You can also push post-processing onto the SPUs, freeing RSX to just render more simply. Finally, there is more to games than graphics: the SPUs allow for animation rigging, skinning, physics, etc., that the 360 has to do on its simpler SMT cores.
The 360 is a more balanced system, since it's traditional, but it has a lower ceiling. The best-looking games came at the end, when PBR was finally understood and developers knew how to fake area lights with cube maps and light probes. This workflow is more bandwidth-heavy, which plays to the 360's strengths.
Kaszilla94@reddit (OP)
OK this is what I was looking for. I find hardware comparisons between older consoles much more interesting than current ones.
thoughtcriminaaaal@reddit
It's mostly just a question of whether the juice of the Cell was worth the squeeze, and the answer was no. The list of games better on 360 is a lot, lot longer than the list of games better on PS3 for a reason, and that reason is that the 360 just had a better, simpler architecture that developers could more easily squeeze the juice out of. Only very late in the gen, most notably with Battlefield 3, 4 and Hardline, did developers manage to make the PS3 version clearly better than the 360's (although I recall reading someplace that DICE had some wizard who was super good at programming for Cell, so that may have just been a pet project.)
The PS3 did have more native 1080p games, since the 10MB of EDRAM on the 360 was too constrained for that. The vast majority of games obviously didn't do 1080p, since the hardware was not quite there yet. Gran Turismo 6 at 1080p was and still is impressive, but on real hardware this did come at a framerate cost.
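The EDRAM limit is easy to sanity-check with napkin math (assuming the common case of 4 bytes of color and 4 of depth per sample):

```python
def framebuffer_bytes(w, h, msaa=1, color=4, depth=4):
    # bytes for a color + depth render target with msaa samples per pixel
    return w * h * msaa * (color + depth)

EDRAM = 10 * 1024 * 1024  # the 360's 10MB of EDRAM

print(framebuffer_bytes(1280, 720))          # ~7.0 MB: fits
print(framebuffer_bytes(1280, 720, msaa=2))  # ~14.1 MB: needs tiling
print(framebuffer_bytes(1920, 1080))         # ~15.8 MB: over budget with no MSAA at all
```

So even a plain 1080p target blows past 10MB, which is why 360 games either tiled the frame (with the geometry resubmission cost that implies) or stayed at 720p and below.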
OverlyOptimisticNerd@reddit
We’re going to dig up some console war drivel here. I apologize in advance.
First, let’s address the prior generation, because it’s relevant. The Saturn had more power under the hood than the PS1, and if properly utilized it could do things the PS1 could not. But the simplicity of the PS1 (a strong single CPU, a single straightforward GPU, and a geometry engine to take transform work off the CPU, because hardware transformation wasn’t a thing for GPUs yet) made it easier to get good results. So while the Saturn had more raw power, the PS1 put out better graphics more often than not (strictly talking 3D).
The PS3/360 debate is potentially similar. It’s debatable which is more powerful, but the 360 had the better setup for getting good visuals with less effort.
In terms of GPU, the 360’s ATI chip was a generation ahead of the Nvidia GPU used in the PS3. It also had embedded DRAM that was used to give it some level of AA for “free” and a cleaner image. The CPU situation was hilarious though. Sony and IBM partnered to create the Cell CPU, a beefy single-core CPU with multiple SPEs (like mini-cores to be programmed for specific tasks). Sony owned the names of the Cell and the SPEs, but IBM owned the underlying CPU core. So Microsoft went to IBM and got a tri-core version of that CPU without the SPEs.
The end result was that the 360’s CPU was beefier for most common tasks. But the SPEs could be used in creative ways to push the PS3 to heights that the 360 could not reach. It was also daunting to do this.
In the end, while early multiplatform ports performed better on the 360, developers eventually made the PS3 the lead platform and ported to the 360 to minimize differences between the versions.
jocnews@reddit
Well put, but I have some reservations about the "beefy single-core CPU" part. The core (the PPE) was actually quite slow: in-order, and even simple ALU ops took 2 cycles IIRC. It was generally one of the weaknesses of these consoles; I mean, the 1.8GHz Jaguar was a huge improvement in the next generation. I don't think there exist readily comparable 1T performance benchmarks to put it in perspective, but Intel Bonnell (the original Intel Atom) is probably what to think of, only scale the performance even lower, as the PPE came several years earlier on worse silicon nodes.
DaMan619@reddit
About equal to a 1.6GHz G5
monocasa@reddit
It had a lot of weird cases that made it easy to fall off the perf cliff. Shift by a non-constant was microcoded and basically took as many cycles to execute as bits you shifted. There was also no read pathway out of the store buffer, so the core would stall until the store buffers flushed (which could be a long time in the really bad cases).
jocnews@reddit
Looks like it's propped up by memory performance, though, while losing in most tasks (and possibly by its SMT, if the test isn't strictly 1T).
PastaPandaSimon@reddit
The "easier to program" argument kinda breaks down with the original Xbox. It used a Pentium 3 and an Nvidia GPU, PC-style, so it was easier to program for than the PS2. It was also more powerful than the PS2. The PS2 was more popular though, due to the PlayStation branding and exclusives. The success of the Xbox 360 was pretty unexpected at the time.
WJMazepas@reddit
I would say yes, because at the end of their lives, multiplatform games like GTA V and BF3 were running better on PS3.
But yeah, it did take a while to get there, and it's not like the PS3 was winning those games by a lot. It was better, but not by much, especially compared to the PS4/Xbox One differences.
Kaszilla94@reddit (OP)
From what I remember, GTA V ran better on 360 but the textures were better on PS3.
Deathwatch72@reddit
It had more raw computing power, but it was much more difficult to utilize that power because of the very unusual architecture of its CPU. A good analogy would be a race car: even though the 360 had a smaller engine, it was able to actually use all of its power instead of just spinning its tires.
Limited_Distractions@reddit
I would describe it as more powerful but less suitable, to some extent it is a rocket car being driven on a winding road
The SPEs being in-order and not having branch prediction means most game engines of the time would perform comparatively terribly, so you're optimizing purely for a secondary platform (at least if you're also releasing on 360), and few studios could afford to do a whole lot of that
I would say even now, in the era of the assumed hexacore CPU, most games aren't really parallelized the way the SPEs would demand, and a lot would still be very branch-heavy in ways that would be extremely punishing, all things being equal
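As a toy illustration of what "branch heavy" means here, compare a branchy clamp with a branch-free one. The second form lowers to compare/select operations with no control flow, which is what an in-order core without branch prediction wants (Python just standing in for the idea):

```python
def clamp_branchy(xs, lo, hi):
    # data-dependent branches: each mispredicted (or on SPEs,
    # unpredicted) branch stalls an in-order pipeline
    out = []
    for x in xs:
        if x < lo:
            out.append(lo)
        elif x > hi:
            out.append(hi)
        else:
            out.append(x)
    return out

def clamp_branchless(xs, lo, hi):
    # min/max compile down to compare + select with no control
    # flow, so every element takes the same, predictable path
    return [max(lo, min(hi, x)) for x in xs]
```

Rewriting engine code in the second style, element by element over flat arrays, is essentially what "optimizing for the SPEs" meant, and it's a lot of work if your codebase wasn't built that way.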
xenocea@reddit
Games like these looked more impressive than Xbox 360 offerings
WJMazepas@reddit
Heavenly Sword, MGS4 and Heavy Rain didn't look more impressive than Gears 3 at all
xenocea@reddit
Yeah, but the others on the list did
ToshiroK_Arai@reddit
There was a supercomputer made from over a thousand PS3s back when they could run Linux natively. Just Google it
AlyxBizan@reddit
Not in a meaningful way for 3D graphics
Youfallforpolitics@reddit
It was on the "CPU" side, by a massive amount, which most devs used while leaving the GPU relegated to basic functions and video out.
The complex DMA system and memory management were awful.
The 360's GPU was more powerful than the PS3's by quite a bit. Unified shaders versus a fixed vertex/pixel split. It wasn't even close there...
The thing is that the Cell wasn't any old CPU, or really a CPU at all. Its SPEs mimicked what we do in shaders today on GPUs. So essentially the PS3 had two GPU-like structures, one of which devs were using as an APU and the other was left idle.
PS3 CPU vs X360 CPU and GPU
That's what you're primarily seeing in 99% of games on the PS3 vs 360. Only the first-party games, where they could actually get the high throughput from the SPEs, truly looked groundbreaking, like TLOU and Uncharted. But it's apples to oranges, because those same games never shipped on 360, so we'll never know what they would have looked like.
Personally, I would choose the Xbox 360 if they came out today and I had to make a game for one. Now, if I could design the hardware, I would design a modern version of the PS3's Cell that wasn't so hard to develop for, with other special fixed-function hardware instead of general-purpose.
LunarCorpse32@reddit
PS3 had a slower GPU but a much more capable (situationally) CPU setup.
Barebones ports just used the CPU like a dual-core in most cases.
The 360's CPU shared the same base architecture as the PS3's (PowerPC) but had a different, more conventional implementation that made it function more or less like a tri-core CPU.
Capability-wise, though, the 360's GPU is more powerful than the PS3's.
You can see why many ports just ran better in that regard. Simpler CPU, faster GPU, unified memory = less dev work.
On the PS3, if a game wasn't ported well, the split memory setup (256MB system, 256MB VRAM) paired with the slower GPU meant that if the game wasn't CPU-bound, it was going to run at a slightly lower resolution than its 360 counterpart. (Look at most Call of Duty games of the era, also GTA 4 and Red Dead Redemption.)
If the game was CPU-bound, then you'd get a lower framerate and worse resolution than the 360. (Bayonetta 1 stands out.)
All in all, I think the 360 shines best in visual fidelity and framerate most of the time (with the exception of FMV playback; the DVD situation made many 360 games use significantly worse video encoding to save space).
If you wanted exclusives or were a fan of JRPGs, then you'd get the PS3 (most of the time, again; in the early days there were a lot of 360-only JRPGs due to Microsoft taking Japan seriously for five minutes in 2005)
tgwombat@reddit
It had a lot more potential power but was much harder to program for. Gran Turismo 6 and Uncharted 2 and 3 are good examples of what it was capable of when used closer to its potential.
Most multiplatform ports didn't have the same level of care put into them and tended to run worse than their 360 counterparts.
EnigmaSpore@reddit
The 360 had a better GPU and a better unified RAM setup, but the PS3 had the Cell CPU, which could take on some GPU-type work, so if utilized properly it could outperform the 360.
Beautiful_Ninja@reddit
The PS3 was theoretically better, but it was extremely hard to program for compared to the 360. That, along with the 360 being the more popular console for most of its lifespan, led to most games being made for the 360 first and the PS3 second. The PS3 also had an advantage with Blu-ray, which in theory could allow things like better texture quality due to more space on disc, but it was rarely utilized since the PS3 was rarely the main dev platform.
The Last of Us and Uncharted 3 showed how good PS3 games could look when properly optimized, but very rarely did the PS3 get to be the lead platform for games that weren't already exclusive. Final Fantasy 13 is one of the few cross-plats I can think of where the PS3 was the lead platform, and it was significantly better looking than the Xbox 360 version, since it took full advantage of the Blu-ray drive to increase texture and FMV quality.
Spicy-hot_Ramen@reddit
The PS3 was tricky hardware to work with