Sony and the PS4, I'm Impressed. Your Thoughts?


Though I wonder if there is any drawback for having GDDR5 for all the memory. As far as I understand, GDDR was designed with graphics in mind, but I don't know how different it is with normal "general usage" DDR.

GDDR favours bandwidth over latency, while DDR doesn't make that compromise.
GPUs are able to hide the latency, which is why GDDR works so well for them.

I suspect one of the reasons they made that choice (besides simplicity, i.e. "the Radeon already uses that tech and we want UMA") is that PS3 developers are already used to branchless code and dealing with LHS (load-hit-store) stalls.

I wonder how OoOE, branch prediction & high-latency memory will mix, though. Maybe they'll strip the OoOE logic and branch predictors, since they're expensive and power-hungry.
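
(As an aside, for anyone who hasn't seen it: the branchless style mentioned above looks something like the sketch below. A minimal illustration, not from any SDK; the function names are made up.)

// Branchy version: each iteration risks a branch misprediction,
// which is exactly what a slim in-order pipeline hates.
int sumOverThresholdBranchy(const int* v, int n, int threshold)
{
    int sum = 0;
    for (int i = 0; i < n; ++i)
        if (v[i] > threshold)
            sum += v[i];
    return sum;
}

// Branchless version: build a 0 / all-ones mask from the comparison
// and AND it in, so there's no data-dependent branch to predict.
int sumOverThresholdBranchless(const int* v, int n, int threshold)
{
    int sum = 0;
    for (int i = 0; i < n; ++i)
    {
        int mask = -static_cast<int>(v[i] > threshold); // 0 or 0xFFFFFFFF
        sum += v[i] & mask;
    }
    return sum;
}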

Indeed. Some PDF going around from Sony seems to suggest it's an AMD Jaguar. But I wonder whether that's "we're taking AMD Jaguar as-is" or "we took AMD Jaguar and made some modifications of our own". The Jaguar already seems pretty low-power, though.

Outside of technical things, one of the things I've thought about during the Sony conference:

1. A very heavily internet-driven system. I wonder if it will be hampered at all by the (generally) poor state of internet bandwidth and infrastructure in the United States.

2. Seems like Sony wants to deal directly with consumers more. If I were a brick-and-mortar store, I wouldn't be nearly as thrilled about the PS4...

BestBuy's PC section is basically just Blizzard, The Sims, and a bunch of subscription cards. Does Sony have some sort of plan to help those guys (retail) out this generation? Everything about the conference seems to ignore them.

Me, personally, I'm on the internet. I don't watch TV anymore. I prefer to avoid using my phone; instead, I use the internet. I play games. On the internet. If I could never step into another BestBuy in my life, then I would do it and not miss it, so I'm not exactly unhappy that Sony didn't really mention anything with regard to retail. The closer we can get to Steam on a console, the better, I think.

The Jaguar CPUs are essentially the next generation of AMD's netbook processors. The PS4 and rumored Xbox Next specs are basically spot-on what I've been predicting for 4-5 years, with the exception of the CPUs -- I had expected 4 beefier cores (à la i5) rather than 8 "thinner" ones. On the other hand, I'd wager they get better aggregate performance from these 8 than from 4 "fat" cores at the same power draw, and they're probably cheaper to boot. It's just up to devs to spread work across 8 cores efficiently, rather than 4. From everything I've heard, the Jaguar cores are out-of-order and can decode and put in flight two instructions per cycle. Not terribly wide at all, but a far cry ahead of the in-order PPC cores of the last generation. The pipeline should be fairly short as well, so there's less penalty for branch misprediction compared to a fast, fat core like an i5.
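
(As a rough sketch of that "spread the work across 8 cores" point, assuming C++11 threads and the rumored core count -- the function and names here are purely illustrative:)

#include <numeric>
#include <thread>
#include <vector>

// Split one large workload into 8 chunks, one per core,
// then combine the partial results.
long long parallelSum(const std::vector<int>& data)
{
    const int kCores = 8;
    std::vector<long long> partial(kCores, 0);
    std::vector<std::thread> workers;
    const std::size_t chunk = data.size() / kCores;

    for (int c = 0; c < kCores; ++c)
    {
        const std::size_t begin = c * chunk;
        const std::size_t end = (c == kCores - 1) ? data.size() : begin + chunk;
        workers.emplace_back([&partial, &data, begin, end, c] {
            partial[c] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (std::thread& t : workers)
        t.join();
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}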

The GPU sounds nice, and there's plenty of RAM and bandwidth to feed it. In general, I think it's always going to be preferable to use a cutting-edge GPU in a console, even if that means paring it down further (in total number of compute units) than you might have had to with one that's a generation or two old. That said, it seems like they didn't cut it down a great deal compared to current high-end PC GPUs; it sounds roughly equivalent to a Radeon 7830. Account for the fact that a console is only going to be doing 1080p/60 with 3D at the very most, plus the more-direct hardware access, and I suspect visuals will be comparable to what PC gamers expect out of their super-high-res (2560x1440+) or triple-head setups.

I don't think the GDDR bandwidth/latency compromise will be seen as a poor decision -- the type of code (following Data-Oriented Design) that will make the compute and graphics hardware sing is friendly to that sort of tradeoff. It'll take a hit on code that's more random-access in nature, but that's fast becoming a smaller and smaller piece of the pie. On the flip side, while game devs are on board with DOD, general app developers and nimble/indie developers may not be, so there's probably going to be a bit of a learning curve for them, and the first cut at these kinds of apps might end up looking a little sluggish, given the hardware, until everyone's on board.
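
(For the unfamiliar, the DOD tradeoff in a nutshell -- a minimal sketch with made-up entity fields:)

#include <cstddef>
#include <vector>

// Array-of-structures: hot and cold fields interleave, so updating
// positions drags unrelated bytes across the (high-latency) bus.
struct EntityAoS { float x, y, z; int health; char name[32]; };

// Structure-of-arrays: each field is contiguous, so a position update
// streams memory linearly -- the access pattern that a
// bandwidth-over-latency memory like GDDR5 rewards.
struct EntitiesSoA
{
    std::vector<float> x, y, z;
    std::vector<int>   health;
};

void integrateX(EntitiesSoA& e, const std::vector<float>& velX, float dt)
{
    for (std::size_t i = 0; i < e.x.size(); ++i)
        e.x[i] += velX[i] * dt; // pure sequential streaming
}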

If either platform prevents used-game sales or makes them unreasonably burdensome/restrictive, I'll probably end up not jumping into the new consoles. I don't even really buy used games for the most part -- it's almost exclusively pre-orders or first-week grabs for me -- I just don't feel it's worth it to buy a $60 or $70 game without the ability to sell it if I wanted to, even though I don't really do that either. On the other hand, if they threw out a big bone, like dropping new AAA game MSRP to $40-$50, it might be enticing enough to compensate.

I probably won't care much for the always-on video encoding, but I do think it's a smart feature that's going to appeal to many people, and the social sharing that results will probably help sales along -- it's basically going to be free marketing for Sony when all your Facebook friends are posting videos of their cool play sessions.

Another thing I expect out of this generation is the return to a shorter life-cycle. The uniqueness of previous generations of hardware has always meant that consoles aged more gracefully than similarly-aged PC hardware, because new things were always being discovered about how to make the hardware sing. But with an architecture this close to the PC's, there's going to be less unknown territory to discover and exploit.

throw table_exception("(╯°□°)╯︵ ┻━┻");

Me, personally, I'm on the internet. I don't watch TV anymore. I prefer to avoid using my phone; instead, I use the internet. I play games. On the internet. If I could never step into another BestBuy in my life, then I would do it and not miss it, so I'm not exactly unhappy that Sony didn't really mention anything with regard to retail. The closer we can get to Steam on a console, the better, I think.

I think this sums up my feelings on the PS4 in general: excited about its integration with the 'net. The PSN has been very nice to have (minus the hack issue) because I've always been able to play with my friends whenever we want: if we both bought the game, we can play it online. I'm already "always-on" with regards to net presence and home connectivity; we watch all our favorite shows and movies through netflix and amazon on the PS3. I'm used to seeing popups on my laptop when friends hop into Steam games. I like buying things from the comfort of my living room.

Games playable as the rest of the title downloads? Thank god.

Large hard drive for all the software-only purchases? Yes please!

I couldn't care less about the video streaming; maybe it'll come in handy once or twice if a buddy has a title I'm curious about but haven't dropped money on yet, but I can already use services like twitch.tv for that. And I don't really foresee the desire to "share a video of some awesome sequence", since I expect most titles will lean towards Destiny's mentality: you're probably going to be playing along with your friends already, or there will at least be other players around to witness when something happens that would give you the "I wish I had recorded that" feeling.

The share button is pointless to me. I'm bad at a lot of Web 2.0: I don't tweet, instagram, or tumbl; I barely share on Facebook (though I consume it); I'm a forum poster. I feel no need to share what I'm up to in real time with the internet. I like the asynchronous measuring-up you can do with the PS3's trophy collections, but I don't need people to know the minute I've done something in a game. Maybe I'm missing something there, but it feels like a late grab at interfacing with social media, in the wrong direction. I think many games would benefit from being reachable FROM any platform (app integration, for example: mini-windows into the persistent game world's data). But pushing information out proactively from the source is just spam in my eyes.

Hazard Pay :: FPS/RTS in SharpDX (gathering dust, retained for... historical purposes)
DeviantArt :: Because right-brain needs love too (also pretty neglected these days)

I was just rewatching the conference to catch some of the bits I missed early on, but one thing I just noticed Mark Cerny say: "this system memory is backed by the massive local storage that only a hard drive can provide." Do you think he's implying the existence of a virtual memory system? Does it matter whether the console has a virtual memory system or not?

Another thing I expect out of this generation is the return to a shorter life-cycle.

Everyone says this and goes on about how it's been a 'long cycle', but I'm not really convinced that's the case.

PlayStation released: Dec '94 in Japan, Sept '95 US/EU
PlayStation 2 released: March 2000
- Time between: 5 years 3 months Japan, 4 years 6 months US/EU

PlayStation 3 released: Nov 2006
- Time between: 6 years 8 months

At this point we are 6 years 3 months into the PS3's life cycle, so depending on the release date it'll come out at a little over 7 years between consoles, and considering the world's economy kinda tanked mid-cycle, I'd say this was 'on time' for a PlayStation release.

I think that, between the short Xbox cycle and various other products being thrown out every year (WHY HELLO APPLE/SAMSUNG/ETC!), the impression is that this hardware has been around 'forever', when in reality the timing is about right based on previous iterations.

(Also, if it had been released, say, 2 years ago... well, look at the state of the hardware then. NV was power-hungry and AMD were still on VLIW GPUs, so the choice would have been 'hot' or 'soon to become slow'. Both would have been gimped compute-workload-wise... and as for CPUs... So about now we'd be hearing cries of 'the consoles are holding back PC games!' and 'released too soon! look what they could have used if they had waited!' as well... While you can always count on something better coming along, now is a pretty good point to draw a line in the sand, I think.)

1. They refused to show the actual console, so, knowing Sony and their shadiness, I'm more than confident that the PS4 is going to be a big fat monster.


I'm continually surprised by the people making a big deal over not seeing an empty plastic box on the stage. In this day and age there are so many aspects to a console: the online ecosystem, the user experience, the developer platform, the hardware specs, the games themselves... is the look of the console itself really so important compared to those things?

Well, we get to know a lot about a console by seeing it physically. What's the video output? Just HDMI, or will there be component cables as well? How big is the thing? Will it even fit under my TV at all? And, most importantly, is there a disc drive on the thing? That's a very important question that no one is sure of yet. We would know all this without being told if we could just look at the thing. Although I'm not discounting physical design in the least, either.

I was just rewatching the conference to catch some of the bits I missed early on, but one thing I just noticed Mark Cerny say: "this system memory is backed by the massive local storage that only a hard drive can provide." Do you think he's implying the existence of a virtual memory system? Does it matter whether the console has a virtual memory system or not?

I'm sure the new systems will support virtual memory -- the alternative is that game devs would implement it anyway and then have to deal with what happens when the disk is full -- but it's not particularly exceptional that they would, so I doubt he was calling it out specifically. If he meant anything more than "we've got a big hard drive", I suspect it's more likely that the HDD will be used for content caching or title installation, since HDD read speeds are 3-5x faster than the optical drive, and an SSD would be another 3-5x faster than that, if they release such an SKU.

throw table_exception("(╯°□°)╯︵ ┻━┻");

Though I wonder if there is any drawback for having GDDR5 for all the memory. As far as I understand, GDDR was designed with graphics in mind, but I don't know how different it is with normal "general usage" DDR.

GDDR favours bandwidth over latency, while DDR doesn't make that compromise.
GPUs are able to hide the latency, which is why GDDR works so well for them.

I suspect one of the reasons they made that choice (besides simplicity, i.e. "the Radeon already uses that tech and we want UMA") is that PS3 developers are already used to branchless code and dealing with LHS (load-hit-store) stalls.

I wonder how OoOE, branch prediction & high-latency memory will mix, though. Maybe they'll strip the OoOE logic and branch predictors, since they're expensive and power-hungry.

I see. But how will latency affect branch prediction? By worsening the misprediction hit, since the core has to wait longer for instruction fetches?

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

I wonder why Sony didn't go with the Cell architecture. Devs have had 6 to 7 years to become familiar with it; I figured a more powerful Cell (with more SPUs) would be in this new console.

Could there be such a thing as SPUs with unified memory? What was so awful about it?

The thing that was both awful and amazing (depending on your point of view) about the SPU (actually the SPE, which contains an SPU) is that it did away with the transparent cache hierarchy and made memory explicit. If you're interested, you can download all the specs and programming guides from the IBM website.

On normal CPUs -
When we're programming in high-level languages, we act as if there is only RAM and we can modify it directly. If we're programming in assembly, we act as if there are registers and RAM: we can modify registers, and we can copy values between registers and RAM.
However, in reality, there are multiple layers of complex caching hardware between the registers and RAM, and there's very little you can do to program them -- they're fixed-function hardware. When they make the right guesses (i.e. we've written code that is friendly to their fixed algorithms), RAM seems much faster than it really is. When they don't (i.e. we've written cache-unfriendly code), we realize just how slow RAM really is.
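
(A concrete illustration, with arbitrary sizes: both loops below do identical work, but the first walks memory sequentially while the second strides across it, so the cache "guesses" right in one and wrong in the other:)

const int N = 4096;
static float grid[N][N];

float sumRowMajor() // walks memory sequentially: the cache guesses right
{
    float s = 0.0f;
    for (int r = 0; r < N; ++r)
        for (int c = 0; c < N; ++c)
            s += grid[r][c];
    return s;
}

float sumColumnMajor() // jumps N*4 bytes every step: ~a miss per access
{
    float s = 0.0f;
    for (int c = 0; c < N; ++c)
        for (int r = 0; r < N; ++r)
            s += grid[r][c];
    return s;
}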

With the SPE -
They made the giant leap of throwing out the cache altogether; instead, each SPU core is paired with 256KiB of its own RAM called the local store. This is physically close enough that it's as fast as an L1 cache usually is, instead of being horribly slow like RAM usually is. Also, each SPE has a little memory controller that lets you issue DMA requests (think: asynchronous memcpy calls) in the background. To move data between the local store and RAM, you've got to explicitly write these async memcpy calls instead of relying on the invisible automatic cache hardware like a regular CPU.
With this architecture, you can download 128KiB of data into an SPE's local store, then operate on it without having to worry about cache misses or memory bandwidth or any of the stuff that is the main-freaking-bottleneck in computing nowadays, while simultaneously, in the background, you're downloading the next 128KiB packet of work and/or uploading the results from the previous packet back to some RAM location. This means that (if programmed right) you can be continuously doing a ton of compute work (a 3GHz clock dual-issuing instructions over 128-bit SIMD registers) without memory latency (cache misses) ever being an issue.
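
(Sketched as code, that double-buffered pattern looks roughly like this. dmaGet/dmaWait/processChunk are hypothetical stand-ins for the real MFC intrinsics and for your kernel -- this shows the shape of the pattern, not actual SDK calls:)

#include <cstddef>
#include <cstdint>

// Stand-ins for the MFC DMA intrinsics (hypothetical signatures):
void dmaGet(void* localStoreDst, std::uint64_t mainRamSrc,
            std::size_t bytes, int tag);            // start an async copy in
void dmaWait(int tag);                              // block until that tag's DMA completes
void processChunk(float* data, std::size_t count);  // your actual kernel

const std::size_t kChunkFloats = (128 * 1024) / sizeof(float);
static float buf[2][kChunkFloats]; // two 128KiB packets in the local store
                                   // (in practice you'd leave room for code too)

void processStream(std::uint64_t srcAddr, std::size_t numChunks)
{
    int cur = 0;
    dmaGet(buf[cur], srcAddr, sizeof(buf[cur]), cur);

    for (std::size_t n = 0; n < numChunks; ++n)
    {
        const int next = cur ^ 1;
        if (n + 1 < numChunks) // kick off the next transfer now...
            dmaGet(buf[next], srcAddr + (n + 1) * sizeof(buf[next]),
                   sizeof(buf[next]), next);

        dmaWait(cur);                          // ...and only then wait on this one
        processChunk(buf[cur], kChunkFloats);  // compute overlaps the transfer
        cur = next;
    }
}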

That said, if you program in this same style (of having very large contiguous blocks of data), then it turns out that regular CPUs perform very, very well too. :)

Is SPU programming at all similar to some of these GPGPU implementations? I feel like SPUs are at least much more general, whereas GPUs are tasked with solving very specific problems and architected in such a way that getting them to solve anything that doesn't look like graphics is very hard. Am I wrong here?

This architecture lets you write code that is an order of magnitude faster than on other CPU designs, but it requires a different style of programming. Typical shared-state mutable-object systems, with code flow obfuscated by virtual callbacks, don't run well. You need to have all of your data in large, contiguous blocks, and then be able to feed it through a kernel to produce other large, contiguous blocks of output. In this way, it is amenable to GPGPU-type workloads.
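
(In code, that "kernel over contiguous blocks" shape is as plain as this -- flat arrays in, flat arrays out, no shared state or virtual dispatch in the hot loop:)

#include <cstddef>

// One tight loop over contiguous input, producing contiguous output.
// This shape suits the SPU, GPGPU, and modern caches equally well.
void scaleBiasKernel(const float* in, float* out, std::size_t n,
                     float scale, float bias)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * scale + bias;
}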

However, there definitely hasn't been so much love for the system's development tools.

Yeah, Sony are no Microsoft when it comes to development tools... However, I was quite impressed with the free indie/hobbyist toolchain that they released for the Vita, so here's hoping the PS4 has received some love in that department too.

