PhysX chip

I think a dedicated physics card is a great idea, especially with an API like the one Ageia has made, which supports physics processing in both hardware AND software.

Also, having it on a card probably makes it easier to synchronise in-game. Graphics processing is done on the graphics card, but the data has to be passed to it by the CPU in the game's render function. And the physics card, presumably, would work in the same way.
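Hypothetically, the frame flow might look something like this. The class and function names below are invented for illustration (there's no public PPU API to quote), but the pattern is the same one renderers already use:

#include <cstdio>

// Hypothetical accelerator interface: invented names, plausible flow.
// Kick off work on the card, do other CPU work, then sync before rendering.
struct PhysicsCard {
    void simulate(float /*dt*/) { /* start async simulation on the card */ }
    void fetchResults()         { /* block until transforms are read back */ }
};

void updateGameLogic(float /*dt*/) { /* input, AI, game state */ }
void render()                      { /* submit geometry to the GPU */ }

int main() {
    PhysicsCard physicsCard;
    const float dt = 1.0f / 60.0f;

    for (int frame = 0; frame < 3; ++frame) { // stand-in for the real game loop
        updateGameLogic(dt);
        physicsCard.simulate(dt);   // hand the scene to the PPU...
        // ...the CPU is free for sound, streaming, etc. while the card works
        physicsCard.fetchResults(); // sync point: read back the new transforms
        render();                   // then feed the graphics card as usual
        std::printf("frame %d done\n", frame);
    }
}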

C0D1F1ED: since you're on about cost-effectiveness, I just checked some prices:
Intel Pentium 4 830 Dual Core "LGA775 Smithfield" - £223.19
AMD Athlon 64 X2 Dual Core 3800+                  - £205.57
BFG Ageia PhysX Accelerator                       - £217.32

I suppose you do have a point about the price, but I strongly disagree with your point about graphics being the main selling point for games. I think 'immersion' is the major selling point for games, and this includes graphics, physics, AI and sound. Gamers want to feel as if they're in the world.

NVIDIA brought a card to the mainstream with a separate processor for graphics, Creative did the same for sound cards, and now Ageia has done the same for physics. So how is this separate physics processor any different to separate graphics or sound processors?
"I just wanted to hang a picture on my wall, but somehow now I'm in the Amazon Jungle looking for raw materials." - Jekler
Quote:Original post by C0D1F1ED
But they are not used in real-time games and will never be. In games, only 'simple' physics really make sense.


That's like saying only simple graphics make sense. Games were perfectly playable in the 80s but people wanted more realism and effects. The hardware moved to accommodate that. Similarly in games, people want more realism and effects from the physics. The hardware will move to accommodate that too.

Quote:Compare it with rasterization versus ray-tracing. Rasterization is an accepted hack in real-time games.


That's not because nobody wants ray-tracing, it's because it's not efficient. Once upon a time, real-time lighting wasn't efficient either. When it became practical, it started appearing everywhere. There are many other examples. So when physics hardware makes it easy to create more realistic simulations, people will use that.

Quote:Sure, but not 100x more. And there's a technical reason for that. Every game needs a visibility algorithm for solid geometry to get acceptable performance.


I don't agree that's true to the extent you suggest. Geometry can still be effectively culled if it moves around, and besides, not every game needs the absolute top performance. In fact, the slightly slower-moving games are the ones more likely to require accurate physics anyway, since you'll be taking the time to examine your surroundings.

Quote:The CPU's attempt to catch up with dedicated hardware is multi-core, which is only in its infancy. Quad-core and octa-core are already on the roadmaps, and now that Intel has learned that clock frequency isn't everything, we're going to see some very powerful CPUs in the not-so-distant future. It's crazy to invest in PPU technology at this point.

Put 128 cores in a CPU if you like - each of those is still going to achieve less per cycle than a dedicated piece of hardware with a totally custom instruction set and memory architecture for the particular job. That's just the nature of it. General-purpose CPUs are actually very slow, relatively speaking. Besides which, the PPU is likely to be easier for programmers to take advantage of than CPU parallelism, if the history of concurrency is anything to go by.
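To make that last point concrete, here's a toy sketch (modern C++ threads for brevity; engine code of the day would use pthreads or Win32 threads) of what "use the second core for physics" actually asks of the programmer, namely the explicit partitioning and synchronisation that a PPU driver hides behind a single API call:

#include <functional>
#include <thread>
#include <vector>

struct Body { float pos, vel; };

// Integrate one slice of the bodies; each thread works on its own range.
void integrateRange(std::vector<Body>& bodies, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i)
        bodies[i].pos += bodies[i].vel * dt;
}

int main() {
    std::vector<Body> bodies(1000, Body{0.0f, 1.0f});
    const float dt = 1.0f / 60.0f;
    const size_t mid = bodies.size() / 2;

    // The split and the join are entirely the programmer's problem on a CPU.
    std::thread worker(integrateRange, std::ref(bodies), mid, bodies.size(), dt);
    integrateRange(bodies, 0, mid, dt); // first half on this core
    worker.join();                      // wait for the second core to finish
}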

Quote:That's hardly comparable. MMX only allowed the processing workload to be roughly doubled. Dual-core means a whole extra processor is available almost completely for physics. If on a single-core the budget for physics calculations is 10%, then with dual-core you can do 10x more at the same framerate.


Why do you think the second core is going to be used almost exclusively for physics?
I'm no expert, just a random high-schooler guessing, but here are my thoughts:

The PhysX card has a future, but not a present. Once games using the Unreal 3 engine and others that can take advantage of the PhysX card start coming out, you'll start seeing a point to these cards. Until then, only elitists and professionals will have them. Even then, a lot of people won't have them.

I've heard that the PS3 will be using one of these cards, is that correct? If so, that will be a *big* help in getting it out, as developers would probably make use of it on the PC when porting a game from the PS3.

I think we can all see a future where every computer ends up having 15-20 different processors. We've already got at least 3 (graphics, sound, and CPU), and Ageia wants to make that 4. If you count networking cards and things like that, that would increase it some more. What other things do you think could use their own cards? AI?
In seven years I'm sure computers will be more integrated, with probably everything on one chip. I say if you want something for gaming, buy a console: they're specialised hardware and they're cheap. Either that, or there will be a massive breakthrough which will change everything, and the whole PC architecture will have to be redesigned. Computers based on neural nets, photonics... who knows what will come in the future. When the silicon chip was made it changed everything; it will probably all change again some time soon.

NEURAL NETWORKS ARE THE FUTURE, BE WARNED, YER SCURVY DOGS... sorry, that was a bit random ;)
C0D1F1ED: I suggest you hit up physx.ageia.com and check out the before-PhysX / after-PhysX videos they have. Pretty spiffy stuff that IMO is very hard to do on a CPU in real time like that ;-)

The realism added is quite simply amazing! And after all, hasn't the general trend in video games been to include more and more realism? Again, I highly doubt that a single core of a 3 GHz+ CPU will be able to simulate 1000+ rigid bodies in real time without using some integrator that explodes!
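For what it's worth, the "integrator that explodes" problem has a well-known cheap fix. Here's a minimal sketch of semi-implicit (symplectic) Euler on a single spring: update velocity first, then position with the new velocity, and the energy stays bounded where plain explicit Euler diverges. (A toy, not Ageia's solver, obviously.)

#include <cstdio>

int main() {
    const float k  = 100.0f;       // spring stiffness, unit mass
    const float dt = 1.0f / 60.0f; // fixed timestep
    float x = 1.0f, v = 0.0f;      // position, velocity

    for (int step = 0; step < 600; ++step) { // simulate 10 seconds
        const float a = -k * x;    // F = -kx
        v += a * dt;               // velocity first...
        x += v * dt;               // ...then position with the NEW velocity
    }
    // Explicit Euler (position updated with the OLD velocity) gains energy
    // every step on this system; the semi-implicit version stays near [-1, 1].
    std::printf("after 10 seconds: x = %f\n", x);
}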
Quote:Original post by TwinX
In seven years I'm sure computers will be more integrated, with probably everything on one chip. I say if you want something for gaming, buy a console: they're specialised hardware and they're cheap. Either that, or there will be a massive breakthrough which will change everything, and the whole PC architecture will have to be redesigned. Computers based on neural nets, photonics... who knows what will come in the future. When the silicon chip was made it changed everything; it will probably all change again some time soon.



Now that is just random, useless blabbering :-P
Quote:Original post by Ezbez

I've heard that the PS3 will be using one of these cards, is that correct? If so, that will be a *big* help in getting it out, as developers would probably make use of it on the PC when porting a game from the PS3.

Nada. The PS3 relies on its Cell processor for physics, but Cell has similarities with the PPU: massive bandwidth and lots of parallelism. The PhysX SDK is used on the PS3, and it's designed to use the Cell.

Basically, one of the spiffy things about using the PhysX API is that you can develop a game for PC/Xbox 360/PS3 and have the physics be capable of hardware acceleration: on PC via the PPU or software mode with multicore, on the 360 on the 2nd/3rd core, and on the PS3 via Cell.
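From memory of the PhysX 2.x SDK (so treat the exact identifiers as approximate), the hardware/software choice comes down to a single flag in the scene descriptor:

#include "NxPhysics.h" // Ageia PhysX 2.x SDK header

// Create the SDK and a scene; simType selects the simulation backend.
NxScene* createScene() {
    NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.simType = NX_SIMULATION_HW;   // run on the PPU if one is present
    // sceneDesc.simType = NX_SIMULATION_SW; // or force the CPU fallback path

    return sdk->createScene(sceneDesc);
}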

I think that will be one of the big drawing features, IMO. Games made for the PS3 will know they have spiffy hardware-accelerated physics available, so a port from the PS3 might REQUIRE a PPU. Maybe that's Ageia's strategy?
Quote:Original post by Kylotan
That's like saying only simple graphics make sense. Games were perfectly playable in the 80s but people wanted more realism and effects. The hardware moved to accommodate that. Similarly in games, people want more realism and effects from the physics. The hardware will move to accommodate that too.

Dual-core, quad-core, octa-core and beyond will perfectly accommodate that -and- make other multi-threaded applications faster as well.
Quote:Once upon a time, real-time lighting wasn't efficient either. When it became practical, it started appearing everywhere.

What are you talking about? Lighting has been real-time since the first 3D game. Don't mistake a modern CPU for a pocket calculator.
Quote:Put 128 cores in a CPU if you like - each of those is still going to achieve less per cycle than a dedicated piece of hardware which has a totally custom instruction set and memory architecture for the particular job.

Did you ever look at the SSE instruction set? It doesn't differ much from the SIMD instruction sets of any 'dedicated' hardware. A CPU is hardware too, you know. It runs at much higher clock frequencies than any other chip, and with multi-core it's catching up on parallelism.
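For anyone who hasn't looked, this is the flavour of it: a minimal sketch integrating four positions in one go with SSE intrinsics, exactly the kind of SIMD work a 'dedicated' physics chip spends its cycles on:

#include <xmmintrin.h> // SSE intrinsics, available since the Pentium III
#include <cstdio>

int main() {
    alignas(16) float pos[4] = {0.0f, 1.0f, 2.0f, 3.0f};
    alignas(16) float vel[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    const __m128 dt = _mm_set1_ps(1.0f / 60.0f);

    __m128 p = _mm_load_ps(pos);          // load four positions
    __m128 v = _mm_load_ps(vel);          // load four velocities
    p = _mm_add_ps(p, _mm_mul_ps(v, dt)); // pos += vel * dt, four lanes at once
    _mm_store_ps(pos, p);

    std::printf("%f %f %f %f\n", pos[0], pos[1], pos[2], pos[3]);
}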
Quote:Why do you think the second core is going to be used almost exclusively for physics?

Because everything else just scales linearly. Or are there any other new tasks you want to run on it? Intel's latest architecture is much faster than NetBurst, even for single-core. With 45 nm technology almost ready, it will keep scaling per-thread performance nicely. And on top of that, dual-core doubles the performance. The real question is, what will we do with the extra cores once they go multi-core?
Quote: The real question is, what will we do with the extra cores once they go multi-core?


Run Norton AntiVirus.
Quote:Original post by Cubed3
C0D1F1ED: I suggest you hit up physx.ageia.com and check out the before-PhysX / after-PhysX videos they have. Pretty spiffy stuff that IMO is very hard to do on a CPU in real time like that ;-)

You mean the new Ghost Recon footage? Not impressed. All I see is a bit more debris flying around, and ironically the framerate seemed lower. Even if that's just down to the Flash format, is this the best they can show?

It's an excellent site from a marketing point of view, and it will convince many customers, but I'm an engineer. I'm willing to bet that the extra physics would run just fine on a modern dual-core with a well-optimized multi-threaded physics engine. Why do you think they don't publish technical details like FLOPS?
Quote:The realism added is quite simply amazing! And after all, hasn't the general trend in video games been to include more and more realism? Again, I highly doubt that a single core of a 3 GHz+ CPU will be able to simulate 1000+ rigid bodies in real time without using some integrator that explodes!

PhysX is nothing but a 400 MHz chip with a handful of SIMD units that are very much like SSE. And given the extra overhead, the actual available processing power of a PPU could be closer to a CPU's than you think.

By the way, let's have a look at Cell: 4 GHz x 8 vector coprocessors x 8 floating-point operations per clock cycle = 256 GFLOPS. That just wipes the floor with PhysX. For comparison, a GeForce 7800 GTX is 165 GFLOPS. And yes, Cell is a CPU! x86 processors are evolving in the same direction.
