CPU vs. GPU war!

Well, what are everyone's thoughts on the CPU vs. GPU war now? This stuff just never ends.
of course it doesn't end, hardware marches on and companies want to make money...

Having looked at the slides, it looks like Intel have gone 'oh poo, GPGPU is going to cut into our profit margin, we should probably come up with a product to sell'...

I was amused by the 'does this look familiar' type slide on the 3rd page; my thoughts were 'why, yes it does... it looks like the G80 is practically doing what you plan to do in silicon already... oops, can you say behind?' [smile]
Well, I suppose if the worst that comes out of this is a better integrated GPU rather than the usual Intel integrated cruft, then we all win. [smile]
Quote:Original post by phantom
of course it doesn't end, hardware marches on and companies want to make money...

Having looked at the slides, it looks like Intel have gone 'oh poo, GPGPU is going to cut into our profit margin, we should probably come up with a product to sell'...

I was amused by the 'does this look familiar' type slide on the 3rd page; my thoughts were 'why, yes it does... it looks like the G80 is practically doing what you plan to do in silicon already... oops, can you say behind?' [smile]


That's an inaccurate assessment of the situation.

The capabilities and architecture of GPUs and CPUs have been slowly converging. Neither was playing catch-up with the other. They're both trying to accomplish different tasks but as they find themselves employing features useful to the other, there is an interest in integrating CPUs and GPUs and finding some sort of "optimal" architecture for both.

I don't agree that multi-core CPUs are an idea stolen from GPUs. Everyone knew it was going to happen. Likewise, things like parallelism and programmability in GPUs are also natural ideas. GPUs, being composed of functional units which are simpler than a microprocessor's, and not really requiring large amounts of cache space, were a natural place to see very wide parallelism appear first. Transistors cost $0.00 these days -- tremendous amounts of logic can be implemented. It makes sense to add more pixel pipelines. But on a CPU, the situation is different. It doesn't make sense to have too many execution units. It does make sense to add multi-threading capability and more cores but the latter requires a significant amount of space and wasn't economical when GPUs were becoming massively parallel.
----Bart
Yes, it's true that CPUs and GPUs have been converging; however, this presentation shows Intel 'planning' to produce a chip with a large number of very simple cores attached to a load of cache, and, architecture-wise, it appears very much like the G80, which is already on sale... so yes, effectively Intel are 'behind' in this regard, as their plans already exist in a slightly modified form.

I'm also interested as to where you got the idea that I thought multi-core CPUs are an idea stolen from GPUs; having had a dual-CPU Celeron PC back in 1999/2000, I was already muttering about the idea of putting both cores on the same chip.

However there is, imo, a significant difference between GPUs and CPUs evolving to become closer together and these plans, which are basically underpowered cores with a vector unit strapped to them; CPU-wise it's kind of a step back and to the side, and GPU-wise we are already pretty much there.

I'm not saying they can't do it, or that it won't work, but this particular plan does seem like a grab for revenue; that's just business, however.
Conveniently enough, I just went to a presentation yesterday by an old professor of mine. He now works at Nvidia in their Compute program, which is basically everything outside of graphics. He pretty much works with stuff like CUDA and the G80.

It really all depends on what you are doing. The GPU's ability to do general-purpose computing is insane if it fits your application. The G80 can do over 200 gigaflops on a $600 card. Of course, that is doing an n-body simulation, which is basically (so he says) a perfect application to take advantage of the way the chip is designed.
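To make that concrete, here is a minimal sketch of the kind of all-pairs n-body force loop that maps well onto the G80. This is written in today's CUDA and is purely illustrative: the kernel name, the softening term, and storing mass in the w component are my own assumptions, not details from the talk.

[code]
// Illustrative all-pairs n-body force kernel (not the code from the talk).
// Each thread accumulates the acceleration on one body from every other
// body: the arithmetic-heavy, data-parallel pattern the G80 is built for.
__global__ void bodyForce(const float4 *pos, float3 *acc, int n, float soft2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 pi = pos[i];
    float3 a = make_float3(0.0f, 0.0f, 0.0f);

    for (int j = 0; j < n; ++j) {
        float4 pj = pos[j];                      // pj.w holds the mass (assumption)
        float dx = pj.x - pi.x;
        float dy = pj.y - pi.y;
        float dz = pj.z - pi.z;
        float distSqr  = dx * dx + dy * dy + dz * dz + soft2;
        float invDist  = rsqrtf(distSqr);        // hardware reciprocal square root
        float invDist3 = invDist * invDist * invDist;
        float s = pj.w * invDist3;
        a.x += dx * s;  a.y += dy * s;  a.z += dz * s;   // multiply-adds
    }
    acc[i] = a;
}
[/code]

A real implementation would stage bodies through shared memory, but even this naive version is nothing but multiply-adds and reciprocal square roots, which is why it fits the hardware so well.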

He mentioned one university that had a farm of 300 Itanium processors (~$500,000) which were replaced by 3 G80 GPUs (~$1,800). The G80s also performed better.

The GPUs are getting there, but are still primarily created for graphics. He said his team had to push really, really hard to get things like double precision implemented, which isn't needed in graphics. He also said they get about a 1mm x 1mm section of the chip to put non-graphics-related additions on (and the chip is 20mm x 21mm, I believe).

If your application can take advantage of the optimizations that help in graphics (like lots of multiplies followed by adds, or inverse square roots), then a GPU is great. But it will be a while until it can be used for applications that don't fall into that realm. Right now, I can't really think of any non-scientific applications that will greatly benefit. However, it's great that scientific applications can greatly benefit, since there are plenty of them that need to be done, and with lower costs we can get more per $$$ of investment.
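For what it's worth, that "multiplies followed by adds, plus inverse square roots" description is almost literally what a lot of GPU-friendly inner loops look like. A hypothetical CUDA example (the kernel name and data layout are mine):

[code]
// Normalizing a large array of vectors: fused multiply-adds plus one
// reciprocal square root per element, which is exactly the mix of
// operations GPUs are optimized for. Kernel name and layout are hypothetical.
__global__ void normalize3(float3 *v, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float3 p = v[i];
    float len2 = fmaf(p.x, p.x, fmaf(p.y, p.y, p.z * p.z));  // x*x + y*y + z*z
    float inv  = rsqrtf(len2);                               // 1 / sqrt(len2)
    v[i] = make_float3(p.x * inv, p.y * inv, p.z * inv);
}
[/code]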
Quote:Original post by Phaz

He mentioned one university that had a farm of 300 Itanium processors (~$500,000) which were replaced by 3 G80 GPUs (~$1,800). The G80s also performed better.


Who's to say that three Core 2 Extremes wouldn't beat the crap out of the G80s? The Itaniums weren't good processors, and they are 6 years behind the G80s and Core 2 Extremes.

Does anyone else see this being used to replace traditional graphics rendering with real-time ray tracing? From the slides it's easy to imagine the 4MB of shared cache holding your geometry and the small fixed-function units handling ray intersections. The different cores each handle a different part of the screen. If it catches on, it could make the advances of everyone else moot. A lot of games would require an extra render path just for this architecture, but if it's easy to do and your game looks like Toy Story, why not?
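As a rough sketch of that idea, expressed in today's CUDA rather than Intel's hypothetical architecture: each block covers a tile of the screen and each thread intersects one primary ray against geometry that would ideally sit in that shared cache. Everything here (the single sphere, the pinhole camera, the names) is made up for illustration.

[code]
// One primary ray per thread, one screen tile per block. The geometry
// (a single sphere) stands in for whatever would live in the shared cache;
// all names and the camera setup are hypothetical.
struct Sphere { float3 c; float r; };

__device__ bool hitSphere(float3 o, float3 d, Sphere s)
{
    float3 oc = make_float3(o.x - s.c.x, o.y - s.c.y, o.z - s.c.z);
    float b = oc.x * d.x + oc.y * d.y + oc.z * d.z;   // half-b form, |d| == 1
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - s.r * s.r;
    return b * b - c >= 0.0f;                         // discriminant test only
}

__global__ void tracePrimary(uchar4 *frame, int w, int h, Sphere s)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;    // block = screen tile
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Simple pinhole camera at the origin looking down -z
    float3 o = make_float3(0.0f, 0.0f, 0.0f);
    float3 d = make_float3((x - 0.5f * w) / h, (y - 0.5f * h) / h, -1.0f);
    float invLen = rsqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
    d = make_float3(d.x * invLen, d.y * invLen, d.z * invLen);

    frame[y * w + x] = hitSphere(o, d, s) ? make_uchar4(255, 255, 255, 255)
                                          : make_uchar4(0, 0, 32, 255);
}
[/code]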
Quote:Original post by chessnut
Quote:Original post by Phaz

He mentioned one university that had a farm of 300 Itanium processors (~$500,000) which were replaced by 3 G80 GPUs (~$1,800). The G80s also performed better.


Who's to say that three Core 2 Extremes wouldn't beat the crap out of the G80s? The Itaniums weren't good processors, and they are 6 years behind the G80s and Core 2 Extremes.

Or you can just use a Cell with one SPE, which can even render a 1080i scene by itself. But I digress...


Quote:Original post by phantom
Yes, it's true that CPUs and GPUs have been converging; however, this presentation shows Intel 'planning' to produce a chip with a large number of very simple cores attached to a load of cache, and, architecture-wise, it appears very much like the G80, which is already on sale... so yes, effectively Intel are 'behind' in this regard, as their plans already exist in a slightly modified form.


But it's not a GPU, it's a CPU. That's the difference. The G80 isn't a general-purpose CPU, so Intel isn't really late to market with this idea.

Quote:
However there is, imo, a significant difference between GPUs and CPUs evolving to become closer together and these plans, which are basically underpowered cores with a vector unit strapped to them; CPU-wise it's kind of a step back and to the side, and GPU-wise we are already pretty much there.


Not really. It's arguably more effective to have lots of cores that lack the full capability of a CPU and are subordinate to the CPU cores in capability and memory privileges. For computing tasks that can be made parallel and require lots of number crunching, it makes sense to offload to a dedicated processing unit that is general enough to handle fairly complex control flow.
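Using today's CUDA as a stand-in for that "subordinate cores" model, the offload pattern looks something like the sketch below: the CPU stays in charge of control flow and memory, and hands a data-parallel chunk of number crunching to the simpler parallel cores. The kernel and sizes are illustrative.

[code]
#include <cuda_runtime.h>
#include <vector>

// Trivial data-parallel kernel: the "number crunching" part.
__global__ void scale(float *data, float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> host(n, 1.0f);

    // The CPU core owns the program and decides what gets offloaded.
    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);   // subordinate cores crunch
    cudaDeviceSynchronize();

    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    return 0;
}
[/code]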

Ideally, these processors would not even be x86, but that won't happen. It would potentially be a way for us to get away from the x86 ISA where it isn't necessary.
----Bart

