MARS_999

DX maybe dead before long....

42 posts in this topic

I think the future, the real future in graphics, lies with the unification of the GPU and the CPU: a General Massively Parallel Processing Unit. This would comfortably pave the way for what is IMHO the only real way forward in graphics: physics-based lighting.

The lines are already quite blurred between the GPU and the CPU. Uncomfortably so. I don't know enough about hardware to know whether replacing CPUs with current GPUs is achievable. But I do know that we would need a new way to program GPUs to achieve that, one that is more flexible and powerful, or in other words more low level. If that were to materialise, I think it could well be in the form of a new DX version. After all, the current version of DirectX has something quite close to that already in the form of DirectCompute.
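For example, the kind of kernel DirectCompute already allows looks roughly like the sketch below. It's a minimal, purely illustrative C++ snippet (the shader source, register slot and entry point are invented for illustration) that compiles an HLSL compute kernel at runtime, completely outside the triangle pipeline:

[code]
// Minimal DirectCompute sketch: compile an HLSL compute kernel at runtime.
// The shader source, register slot and entry point are illustrative only.
#include <d3d11.h>
#include <d3dcompiler.h>
#include <cstdio>
#include <cstring>

static const char* kKernelSource =
    "RWStructuredBuffer<float> data : register(u0);              \n"
    "[numthreads(64, 1, 1)]                                      \n"
    "void main(uint3 id : SV_DispatchThreadID)                   \n"
    "{                                                           \n"
    "    // Each of the thousands of threads handles one element.\n"
    "    data[id.x] = data[id.x] * 2.0f;                         \n"
    "}                                                           \n";

ID3DBlob* CompileKernel()
{
    ID3DBlob* bytecode = nullptr;
    ID3DBlob* errors   = nullptr;
    HRESULT hr = D3DCompile(kKernelSource, std::strlen(kKernelSource),
                            nullptr, nullptr, nullptr,
                            "main", "cs_5_0", 0, 0, &bytecode, &errors);
    if (FAILED(hr) && errors)
        std::printf("%s\n", (const char*)errors->GetBufferPointer());
    return bytecode; // Pass to ID3D11Device::CreateComputeShader to get a shader object.
}
[/code]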

On the other hand, as Larrabee was once meant to, perhaps the CPU will replace the GPU as the GMPPU. In that case, we will certainly kiss DX goodbye and wave hello to, emm, C++?

Either way, whether CPUs replace GPUs or GPUs replace CPUs, it doesn't fit in with what AMD envisions.

EDIT: Just got me wondering something nuts. Could one theoretically do work and visualise it on a monitor using only a power source, a graphics card, a mobo, maybe a hard drive, and appropriate software? :o
It's not that PC sales are necessarily shrinking terribly in terms of numbers -- it's more the fact that console sales have grown by huge bounds in the past 15 years or so. That same link says it straight out -- console sales account for 72% of EA's revenue, and I'd be willing to bet that the remaining 28% isn't just PC sales, but other revenue streams like iPhone/Android sales and MMO subscriptions. And the PC is a platform where a publisher stands to make perhaps twice as much per sale, since no platform license fees are assessed and there is less manufacturing involved (as the article also states, PC gaming *retail* is markedly down, but services like Steam are thriving). So the PC is a platform which is twice as profitable per sale but has only 1/5th the total revenue, and it requires doubling the input effort (and I'd say "doubling" fairly conservatively).

I really want things to be more programmable, and I was as much a Larrabee fan as anyone -- heck, I'd buy one today for a reasonable price, even if it was a lackluster GPU -- but the fundamental issue with the PC ecosystem is that it spans too broad a range to make the necessary effort of optimizing for even a subset of the most popular configurations worthwhile. Creating "minimal" abstractions is really the best we can realistically hope for. We'll get there, to be sure, but it's going to take time, and it's never going to be as thin as some (perhaps even most) will want it to be.
The problem with any sort of complete merging of the GPU and CPU into one core (which isn't what an APU is, as that still keeps its x64 and ALU cores separate) is one of workload and work dispatch.

The GPU is good at what it does because it is a deep, high-latency pipeline which executes multiple concurrent threads in lock step. It executes in wavefronts/warps of threads set up in such a way as to hide the latency of memory requests which aren't in cache. It serves highly parallel workloads well; however, as soon as non-coherent branching or scattering enters the equation you can kiss the performance goodbye, as the architecture wastes time and resources on unneeded work and on performing poorly localised writes.
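To make the lock-step point concrete, here is a toy C++ simulation of one 8-lane wavefront hitting a divergent branch (the lane count and the "work" are made up purely for illustration): every lane steps through both sides of the branch, with an execution mask deciding which results are kept, so divergence costs the sum of both paths.

[code]
// Toy illustration of lock-step (SIMT-style) execution with an execution mask.
// Lane count and the "kernel" are invented; the point is that all lanes
// execute BOTH sides of a divergent branch and simply mask off the results.
#include <array>
#include <cstdio>

constexpr int kLanes = 8;

int main()
{
    std::array<int, kLanes> input  = {1, 2, 3, 4, 5, 6, 7, 8};
    std::array<int, kLanes> output = {};

    // One "wavefront": all lanes evaluate the branch condition together.
    std::array<bool, kLanes> mask;
    for (int lane = 0; lane < kLanes; ++lane)
        mask[lane] = (input[lane] % 2 == 0);

    // Side A of the branch: executed by ALL lanes, kept only where mask is true.
    for (int lane = 0; lane < kLanes; ++lane) {
        int result = input[lane] * 10;        // "expensive" path A
        if (mask[lane]) output[lane] = result;
    }

    // Side B of the branch: again executed by ALL lanes, kept where mask is false.
    for (int lane = 0; lane < kLanes; ++lane) {
        int result = input[lane] + 1000;      // "expensive" path B
        if (!mask[lane]) output[lane] = result;
    }

    for (int lane = 0; lane < kLanes; ++lane)
        std::printf("lane %d -> %d\n", lane, output[lane]);
}
[/code]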

CPUs, on the other hand, are great at scattered branching tasks but suffer when executing workloads where frequent uncached trips to memory are required, as there is no real ability to hide the latency and do more 'useful' work in the way the GPU can. At best, out-of-order architectures let you hide some of the latency by placing the request early, but it'll still hurt you.

Effectively, any merging of the two is going to result in more 'CPU-like' cores rather than 'GPU-like' cores, as it is easier to 'emulate' the GPU workload (lots of threads doing the same thing) via the CPU method than the other way around, which would require the GPU to look at every running thread and try to regroup alike workloads as best it can. Of course, without some form of hardware to reschedule threads to hide latency, you have the CPU problem all over again of waiting for memory (which is pretty damned slow).

Maybe the people at AMD and Intel will come up with a way to do it, but given AMD have been talking 'Fusion' for about 5 years now and have only just got around to doing it, I'm not pinning my hopes on them 'solving' this problem any time soon, never mind the problem of how you get all this stuff down the memory bus...
The next wave that will finally replace consoles will be smartphones. In the next few years you will have decent power and the ability to hook the phone up to your TV and game with wireless input devices paired to the phone. So yeah, IMO consoles will eventually be a very small market, if not replaced outright by devices such as the Nintendo DSi, the Sony PSP and smartphones, and PC gaming will still be around.

That is my prediction; not sure of the time frame, but 10 years would be my guess.

Also, with Nvidia's Maxwell GPUs coming out in 2013 they will have an ARM-based CPU onboard, which makes for some very interesting avenues....
[quote name='MARS_999' timestamp='1300463348' post='4787514']
[url="http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1"]http://www.bit-tech....ll-to-directx/1[/url]

Comments....[/quote]
[quote]It seems pretty amazing, then, that while PC games often look better than their console equivalents, they still don't beat console graphics into the ground. According to AMD, this could potentially change if PC games developers were able to program PC hardware directly at a low level, rather than having to go through an API, such as DirectX.[/quote]
Ok - so the argument goes like this:
** Consoles have worse hardware, but you can program the device at a low level, resulting in better bang for your buck.
** PC is stuck having to go through DX's abstractions, which adds unnecessary overhead.

Both these points are true, but the thing that makes it seem like nonsense to me is that the low-down, close-to-the-metal API on the Xbox 360, which lets us get awesome performance out of the GPU, is.... DX 9 and a half.
It's DirectX with some of the layers peeled back. You can do your own VRAM allocations, you can create resources yourself, you've got access to some of the API source and can inline your API calls, you've got access to command buffers, you can take ownership of individual GPU registers controlling things like blend states, and you've got amazing debugging and performance tools compared to the PC.... [b]but you still do all of these things through the DirectX API[/b]!

This means the argument is a bit of a red herring. The problem isn't DirectX itself; the problem is the PC-specific implementations of DirectX that are lacking these low-level features.

The above argument is basically saying that DirectX 9.5 games can achieve better performance than DirectX 9 games... which is true... but also seems like a fairly obvious statement...
[quote]I been saying DX is a dog for years, [b]all the DX nut jobs[/b], no its fast your doing something wrong… Bah eat crow…[/quote]
Wow. Way to start a nice level-headed discussion... Attacking fanbois just makes you look like a fanboi from a different camp... Don't do that.
[quote name='Hodgman' timestamp='1300508779' post='4787819']
The problem isn't DirectX itself; the problem is the PC-specific implementations of DirectX that are lacking these low-level features.
...
Wow. Way to start a nice level-headed discussion... Attacking fanbois just makes you look like a fanboi from a different camp... Don't do that.
[/quote]

And if you read the end of my post:

"Anyway I am for whatever gets us the best IQ and FPS on the hardware gamers spend their hard earned money for."

I could care less about the API, but what I am sick of is people saying DX is better when in fact it's not, so if you are sticking up for DX then you are just as much of a fanboy....

And that should have been apparent from my saying I wish Larrabee had taken off...

You want to take another shot....
[quote name='Hodgman' timestamp='1300514380' post='4787840']
ok
[/quote]

Sorry about jumping on you... Not usually what I like to do. :)
[quote name='MARS_999' timestamp='1300513861' post='4787837']
I could care less about the API, but what I am sick of is people saying DX is better when in fact it's not, so if you are sticking up for DX then you are just as much of a fanboy....
[/quote]

Better than "what" exactly? What is a practical and realistic alternative to DX or OpenGL on PC? That's the crux of the issue here. If you know anything about how these PC API's work then you know why they're slower than the equivalent on consoles, but you also know that they're 100% necessary.
[quote name='MJP' timestamp='1300519854' post='4787854']
Better than "what" exactly? What is a practical and realistic alternative to DX or OpenGL on PC? That's the crux of the issue here. If you know anything about how these PC API's work then you know why they're slower than the equivalent on consoles, but you also know that they're 100% necessary.
[/quote]

I think there is both truth and over-pessimism in that post.

It is true that we will always need a way to talk to the hardware in our PC: a way to feed the right information to the GPU at the right time in order to get it to calculate and display what we want, when we want. In this case the chain goes DirectX -> Drivers -> Hardware. The drivers are really part of the operating system of the machine, so I don't think that layer can be removed, and therefore neither can the DirectX layer.

Having said that, I don't think it's true that this chain has to be slow. The driver layer and the DirectX layer simply need to be better. Given the drivers are already about as low level as you can get, the biggest improvement in that chain can probably come in the form of making DirectX more low level (as has been stated before in this thread). And I don't think there is any reason why that can't happen.

Programmable shading was a step in the right direction. Now we need full programming access to the GPU in a way that facilitates parallel programming. I think we need to get rid of the idea of treating the GPU as a way to process individual triangles through the vertex and pixel shaders. I think we need to instead start thinking about the GPU as a way to perform a task thousands of times at the same time, and thus get that task done thousands of times faster. This will mean that the G in GPU no longer means Graphics, but instead means General. It also means that filling the ~10^6 pixels on a monitor will become [i]a[/i] purpose of the GPU instead of [i]the[/i] purpose.
Either that, or we get rid of the GPU and go from ~10-core CPUs to ~1000-core CPUs. I would prefer that tbh; it would be more efficient, I think. Duplicating information on the GPU that I already have much better organised on the CPU is a pain, for a start. Let the GPU die.
[quote name='Hodgman' timestamp='1300508779' post='4787819']
[font="Arial, Helvetica, sans-serif"][color="#222222"]The problem isn't DirectX itself, the problem is the PC-specific implementations of DirectX that are lacking these low-level features.[/color][/font]
[/quote]

And the question really should be, is that low level even feasible on a PC?

Let's just talk draw calls. The 360 and the PS3 GPUs read the same memory that the CPU writes. The API level is just blasting the draw call and state information out of the CPU caches to main RAM. On a PC, that data has to be flushed, then DMA'd (which depends on hardware that is neither the GPU nor the CPU) to the GPU's memory, where it can be processed. It may not seem like much, but it's a substantial amount of work that makes this all happen reliably.

Even if the PC version could be stripped down to the barest essentials, you would still see large discrepancies in the number of things you could draw without instancing. PC draw-call limits haven't really gone up in years despite huge performance improvements in GPU chipsets. This is because that type of data transfer (small-block DMA) is not what the rest of the PC architecture has been built to handle.
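This is exactly why instancing matters so much on the PC. The hedged D3D11 sketch below (buffer and shader setup omitted, counts invented) submits thousands of objects with a single trip through the API and driver instead of thousands of per-object draw calls:

[code]
// Hypothetical D3D11 sketch: drawing 10,000 copies of a mesh with ONE draw call.
// Assumes a device context, input layout, shaders and vertex/index/instance
// buffers have already been created and bound; the counts are made up.
#include <d3d11.h>

void DrawForest(ID3D11DeviceContext* ctx,
                UINT indicesPerTree,      // e.g. index count of one tree mesh
                UINT treeCount)           // e.g. 10,000 instances
{
    // Per-instance data (world matrices, tints, ...) lives in a second vertex
    // buffer already bound with D3D11_INPUT_PER_INSTANCE_DATA elements.
    ctx->DrawIndexedInstanced(indicesPerTree, // index count per instance
                              treeCount,      // instance count
                              0,              // start index location
                              0,              // base vertex location
                              0);             // start instance location
}

// Without instancing this would be 10,000 separate DrawIndexed calls,
// each paying the API/driver/DMA overhead described above.
[/code]

Instancing doesn't remove the per-call overhead; it just amortizes it over thousands of objects, which is exactly the kind of workaround the console versions need far less badly.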

[quote name='forsandifs' timestamp='1300537791' post='4787898']
Either that, or we get rid of the GPU and go from ~10-core CPUs to ~1000-core CPUs. I would prefer that tbh; it would be more efficient, I think. Duplicating information on the GPU that I already have much better organised on the CPU is a pain, for a start. Let the GPU die.
[/quote]

I actually like the separation; it makes programming GPUs far easier. Modern GPUs are a nice example of constrained and focused parallelism. Even neophyte programmers can write shaders that run massively parallel and never deadlock the GPU. Most veteran programmers I've met who are working on general massively parallel systems still fight with concurrency issues and deadlocks.
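The sort of thing they fight with is easy to sketch. This contrived C++ fragment (names invented) has two threads take the same two locks in opposite orders, a class of bug the shader programming model simply cannot express:

[code]
// Contrived example of the deadlock class of bug that general-purpose
// parallel code invites and that the shader model makes impossible to write.
#include <mutex>
#include <thread>

std::mutex resourceA;
std::mutex resourceB;

void workerOne()
{
    std::lock_guard<std::mutex> lockA(resourceA);
    // ... do some work ...
    std::lock_guard<std::mutex> lockB(resourceB);  // waits on workerTwo
}

void workerTwo()
{
    std::lock_guard<std::mutex> lockB(resourceB);
    // ... do some work ...
    std::lock_guard<std::mutex> lockA(resourceA);  // waits on workerOne -> deadlock
}

int main()
{
    std::thread t1(workerOne), t2(workerTwo);
    t1.join();
    t2.join();   // with unlucky timing, neither join ever returns
}
[/code]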

Sure, you could constrain your massively parallel system on these new 1000-core CPUs such that you have the same protection, but then you have just re-invented shaders on less focused and probably slightly slower hardware.

With great power comes great responsibility. My experience with seasoned programmers and dual-core machines has led me to be skeptical that these massively parallel systems will actually be "general" in practice.

My prediction: 95% of the engineers that would use such a thing would subscribe to an API/paradigm as restrictive as the current shader model on GPUs. The other 5% will release marginally better titles at best and go stark raving mad at worst.
[quote name='wanderingbort' timestamp='1300543808' post='4787918']
And the question really should be, is that low level even feasible on a PC?[/quote]Yeah I don't think so, without putting a whole lot of extra work onto the game developers... but that's what the ATI rep in the article is suggesting. Some companies with enough manpower/money might be able to take advantage of such a low-level ([i]portability nightmare[/i]) API though...
[quote]The 360 and the PS3 GPUs read the same memory that the CPU writes. The API level is just blasting the draw call and state information out of the CPU caches to main RAM. On a PC, that data has to be flushed, then DMA'd (which depends on hardware that is neither the GPU nor the CPU) to the GPU's memory, where it can be processed. It may not seem like much, but it's a substantial amount of work that makes this all happen reliably.[/quote]Well, the PS3's GPU [i]can[/i] read from the CPU-local system RAM, but it does have two main banks ([i]one local to the GPU and one local to the CPU[/i]), just like a PC does.
The CPU can write directly into VRAM, but sometimes it's faster to have an SPU DMA from system RAM into its LS and then DMA from there into VRAM ([i]thanks, Sony[/i]). However, on PS3, the CPU actually writes draw calls to system RAM, and the GPU reads them directly from system RAM (not from VRAM).
But yes, it's these kinds of details that DX ([i]thankfully[/i]) protects us from ;)
[quote name='Hodgman' timestamp='1300550943' post='4787947']
Well, the PS3...
[/quote]

I did gloss over a lot of the gory details (hard to believe in a post as long as it was). The whole main RAM/VRAM distinction on the PS3 is the kind of thing that makes direct hardware access a pain. Yet it is still less restrictive than a modern PC architecture.

Perhaps this is the solution to this thread... all those who are in favor of ditching DX for "low level goodness", spend a dev cycle on a PS3. You will learn to love your DirectX/PC safety mittens roughly the first time you read about or independently re-invent an idea like using the GPU to DMA memory between RAM banks.
[quote name='wanderingbort' timestamp='1300553999' post='4787982']Perhaps this is the solution to this thread... all those who are in favor of ditching DX for "low level goodness", spend a dev cycle on a PS3. You will learn to love your DirectX/PC safety mittens roughly the first time you read about or independently re-invent an idea like using the GPU to DMA memory between RAM banks.
[/quote]

I am in favour of low-level goodness, but I am not in favour of ditching DirectX (unless we also ditch the GPU altogether). I simply think DirectX should be evolved: just as it evolved from fixed-function to programmable shading, it should now evolve to more flexible, efficient, and powerful programmable shading, which if I'm not mistaken is a synonym for lower level.
[quote name='forsandifs' timestamp='1300537791' post='4787898']
Either that, or we get rid of the GPU and go from ~10-core CPUs to ~1000-core CPUs. I would prefer that tbh; it would be more efficient, I think. Duplicating information on the GPU that I already have much better organised on the CPU is a pain, for a start. Let the GPU die.
[/quote]

The problem with this is that any significant move in one direction by either the CPU or GPU will hurt what the CPU or GPU is good at, as I previously indicated.

A 1000-core CPU still isn't going to match a GPU, simply because of how the modern GPU works vs. how a CPU works when it comes to processing workloads.

SPUs are a pretty good indication of what a more 'general' GPU would look like: horrible branch performance but great at data processing. A CPU made up of cores like that, however, is going to fail horribly at the more general tasks which a CPU has to do.

From a hardware point of view, imo an ideal setup would be:
[list]
[*]x64/ARM cores for normal processing
[*]An SPU/ALU array on the same die as the above for stream-style processing
[*]A DX11+ class GPU for graphics processing
[/list]
That way you get a decent mix of the various processing requirements you have in a game; I'm kinda hoping the next consoles pick up this sort of mix of hardware, tbh.


[quote name='forsandifs' timestamp='1300555963' post='4787992']
it should now evolve to more flexible, efficient, and powerful programmable shading, which if I'm not mistaken is a synonym for lower level.
[/quote]

This "lower level" is something I totally agree with. DirectX will probably evolve in this direction as the hardware does, or it will give way to an API that does.

I think the article was championing access to "the metal" and a console development paradigm as the "lower level" and I cannot disagree with that more.

I can see now how we've had confusion. Both are "lower" than the current part-fixed-part-programmable pipeline. Your version of lower is far more rational and tractable, in my opinion. In many ways, it is a natural extension of the current piecewise programmability of GPUs. But it has nothing to do with what the current generation of consoles does to compete with the PC market.
[quote name='forsandifs' timestamp='1300537791' post='4787898']
Programmable shading was a step in the right direction. Now we need full programming access to the GPU in a way that facilitates parallel programming. I think we need to get rid of the idea of treating the GPU as a way to process individual triangles through the vertex and pixel shaders.
[/quote]

[i]Right now[/i], on a currently available version of DirectX, you have the ability to author completely generic shader programs that are dispatched in groups of threads whose size you control and are mapped to the hardware's separate processors in a way that you directly control. So I think it's completely reasonable to say that DX lets you bypass the whole triangle-rasterization thing entirely and treat the GPU as a generic parallel processing unit. It doesn't get much more generic than "create resources, point shaders at them, and let them rip".
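To spell that out, a hedged D3D11 sketch of "create resources, point shaders at them, and let them rip" might look like the following. It assumes a device, an immediate context and a compiled compute shader with a fixed thread-group size of 64 already exist; all names and counts are illustrative.

[code]
// Sketch of generic GPU work in plain D3D11 compute: create a buffer,
// give the shader a view of it, and dispatch thread groups over it.
// Assumes 'device', 'ctx' and 'computeShader' already exist and that the
// shader uses a thread-group size of 64; names and counts are illustrative.
#include <d3d11.h>

void RunKernel(ID3D11Device* device, ID3D11DeviceContext* ctx,
               ID3D11ComputeShader* computeShader, UINT elementCount)
{
    // A structured buffer the shader can read and write (a UAV resource).
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth           = elementCount * sizeof(float);
    desc.Usage               = D3D11_USAGE_DEFAULT;
    desc.BindFlags           = D3D11_BIND_UNORDERED_ACCESS;
    desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    desc.StructureByteStride = sizeof(float);

    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, nullptr, &buffer);

    D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
    uavDesc.Format             = DXGI_FORMAT_UNKNOWN;    // structured buffers use UNKNOWN
    uavDesc.ViewDimension      = D3D11_UAV_DIMENSION_BUFFER;
    uavDesc.Buffer.NumElements = elementCount;

    ID3D11UnorderedAccessView* uav = nullptr;
    device->CreateUnorderedAccessView(buffer, &uavDesc, &uav);

    // Point the shader at the resource and let it rip.
    ctx->CSSetShader(computeShader, nullptr, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &uav, nullptr);
    ctx->Dispatch((elementCount + 63) / 64, 1, 1);        // one thread per element

    uav->Release();
    buffer->Release();
}
[/code]

All of that is stock D3D11; nothing about it forces you through the rasterizer.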

So the question is, how much further can we take it? Let's look at what we can do on consoles right now, and whether we could do it on PC:

[list]
[*]Direct access to the command buffer - can't do this, because the low-level command buffer format is proprietary and changes from GPU to GPU. There may also be device memory access concerns, as someone else already pointed out.
[*]Statically creating command buffers, either at runtime or at tool time - this is kind of interesting... maybe you link to an Nvidia GTX480 library and it lets you directly spit out GPU-readable command buffers rather than an intermediate DX format. The problems are that this puts the burden of validation and debug layers on the IHV, who has to have it work for many library/hardware variations. And I think we all know better than to rely on IHVs for that sort of thing. Plus your app wouldn't be forward-compatible with new GPUs unless IHVs built translation into the driver layer, at which point you have the same situation as DirectX. It could possibly also be built into the GPU, but then you have an x86-esque situation where the GPU pretends it has some ISA but in reality it's something else under the hood.
[*]Compiling shaders directly to GPU-readable microcode, and potentially allowing access to inline microcode or microcode-only authoring - this is really the same situation as above. Now IHVs are in charge of the shader compiler, and you have the issue with backwards compatibility.
[*]Direct access to device memory allocation - I don't think this really buys you much, and it would make life very difficult due to the peculiarities of different hardware. I mean, does anybody not get annoyed when they have to deal with tiled memory regions on the PS3 and its quirky memory alignment requirements?
[/list]
Really, when I think of the big advantages of console development with regards to graphics performance, most of them come from the simple fact of knowing what hardware your code is going to run on. You can profile your shaders, make micro-optimizations, and know that they'll still work for the end user. Or you can exploit certain quirks of the hardware, like people do for a certain popular RSX synchronization method. But that stuff would almost never be worth it on PC, because the number of hardware configurations is just too vast.
[quote name='phantom' timestamp='1300557069' post='4787997']
From a hardware point of view, imo an ideal setup would be:
[list]
[*]x64/ARM cores for normal processing
[*]An SPU/ALU array on the same die as the above for stream-style processing
[*]A DX11+ class GPU for graphics processing
[/list]
That way you get a decent mix of the various processing requirements you have in a game; I'm kinda hoping the next consoles pick up this sort of mix of hardware, tbh.
[/quote]

I agree entirely. We're going to need traditional CPUs for branchy stuff and scalar processing; pretty much everything we do on SIMD units today could be mapped to something like an SPU; and a discrete GPU can be optimized around graphics problems specifically, while still being general enough (as it already is) to be called in as reinforcements on some parallel problems.

I think the question in this setup is then: how much floating-point hardware do the CPUs have? Do they still have SIMD units? How about an FPU? If so, does each integer core get its own SIMD/FPU, or do we share between 2 or more integer cores, like the newer SPARC cores and AMD's upcoming Bulldozer?

In my mind, you probably still need floating point on a per-core basis, but SIMD can probably be shared among 2-4 cores, if not eschewed entirely in favor of those SPU-like elements.

Give each CPU and SPU core their own cache, but put them on a shared cache at the highest level -- maybe let them DMA between lower-level caches -- and give them all explicit cache-control instructions. I think that would make for an awfully interesting architecture. Sans SIMD, you could probably double the number of CPU cores for a given area of die space.

8-16 ARM/x64/PPC cores, 8-32 "SPU" cores, plus a DX11-class GPU with 800 or so shader elements. Unified memory. Sign me up. I don't think it's that far of a stretch to imagine something like that coming out of one of the console vendors next generation -- heck, it's not much more than an updated mash-up of the PS3/360 anyhow.
