AGEIA PhysX Questions

Started by
12 comments, last by technomancer 18 years, 5 months ago
After watching a show on G4 briefly talk about a separate physics processor, I googled it and found AGEIA PhysX. I have a few questions about it:
1. Won't not having a physics processor affect gameplay in games that use this?
2. If it doesn't affect gameplay, then is it only for visual effects, like hair and plants in the wind?
3. Where will this plug into the computer? PCI? PCIx? Some new motherboard supporting this?
4. If I get the SDK (probably too advanced for me), can it be software based? How do you develop for a product that isn't available yet?
5. If you don't have a PPU, does the game just distribute the work between the GPU and CPU, or completely drop the physics-related features?
1. Won't not having a physics processor affect gameplay in games that use this?

I wondered about this too. Perhaps it would just slow the game down, or mean that the physics engine runs at a much lower sampling rate?

2. If it doesn't affect gameplay, then is it only for visual effects, like hair and plants in the wind?

Read my answer to 1.

3. Where will this plug into the computer? PCI? PCIx? Some new motherboard supporting this?

PCI or PCIx was my guess, as long as there's no dedicated "APP" port the way there was an AGP port for graphics.

4. If I get the SDK (probably too advanced for me), can it be software based? How do you develop for a product that isn't available yet?

I think that's exactly what it is.

5. If you don't have a PPU, does the game just distribute the work between the GPU and CPU, or completely drop the physics-related features?

I'd guess it would use up some CPU time, as happens at the moment, but eventually (if the hardware takes off) a PPU would be required. Not many 3D games ship with a software renderer now, but back in the early days there were options for both.
Adventures of a Pro & Hobby Games Programmer - http://neilo-gd.blogspot.com/ | Twitter - http://twitter.com/neilogd
Ageia bought NovodeX, so games written for PhysX will use the NovodeX SDK. It used to be a software-only SDK, but now it will be hardware accelerated if you have the add-in card. I presume they will ship both PCI cards (for current machines) and PCI Express cards (for cutting-edge machines and future machines that will lack PCI slots).

When accelerated graphics came out (remember the Voodoo?), games used to have a switch between software rendering and hardware rendering. Turning on hardware gave you faster frame rates, and it looked better. Chances are that, with a PhysX card in your machine, a PhysX-accelerated game will run smoother and the physics will look "better".

Note that physics isn't just rigid bodies and ragdolls (although those certainly help) but can also be used for things like particle systems, clothing, liquids, and whatnot. The NovodeX SDK (that you can download for free, although you'll have to sign a license with them to actually ship whatever you write) doesn't talk about specific support for clothing, but it does talk about specific support for liquids.

Personally, I think it's really quite interesting. I've gone from sceptical, when hardware physics was first mentioned, to convinced. A regular GPU can't really do physics all that well, because physics involves a lot of branching and conditional execution; however, a processor targeted purely at collision detection and constraint solving would be able to give a traditional CPU quite a boost.

Last, an interesting observation: the NovodeX SDK is multi-threaded on a regular (PhysX-less) machine, so a dual-core machine (or, for that matter, a next-generation game console) will probably benefit from this kind of SDK as well. In fact, at GDC they showed a ragdoll demo together with Intel, where the FPS was abysmal on a single-core CPU, but was acceptable on a dual-core CPU.
enum Bool { True, False, FileNotFound };
I'm looking forward to hardware accelerated occlusion culling. Think of the possibilities...
I like the DARK layout!
What worries me about this whole PPU thing is the lack of any apparent open standard. As far as I can tell, in order to use the PPU, a game has to use Novodex. Novodex is, of course, only free for non commercial use, which means you have to license it for a regular commercial (or even indie) game. Can you imagine having to license OpenGL or Direct3D?

Additionally, it doesn't seem like there's any room for competition. ATI can't decide to make its own physics chip, and make it interoperate with Novodex. No competition means a single price point, and dubious innovation.
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
Didn't OpenGL start out as a proprietary API (before it became "open") that only worked on SGI hardware?
Quote:Original post by BradDaBug
Didn't OpenGL start out as a proprietary API (before it became "open") that only worked on SGI hardware?


It was based on IrisGL, a proprietary API that SGI had been using for several years on their IRIX workstations. OpenGL, however, has always been open.
To answer question 5 first: GPUs are getting to the point where they can execute physics code similar to what the PhysX chip runs, so the PhysX card might never need to come out.

To address the open-standard debate: if you want an open standard, try ODE instead of Novodex. If it were recompiled to take advantage of general-purpose code running on GPUs, it would work well. The only catch is that you'd have to link to a .DLL file under Windows or a .so file under Linux, so that your code can use the GPU-centric path or fall back to software calculation if a fast enough GPU isn't present.
I believe the PS3 is going to have an integrated PhysX chip.

"I can't believe I'm defending logic to a turing machine." - Kent Woolworth [Other Space]

Quote:Original post by samuraicrow
To address the open-standard debate: if you want an open standard, try ODE instead of Novodex. If it were recompiled to take advantage of general-purpose code running on GPUs, it would work well. The only catch is that you'd have to link to a .DLL file under Windows or a .so file under Linux, so that your code can use the GPU-centric path or fall back to software calculation if a fast enough GPU isn't present.

Wiggidy wha? You're saying ODE can be compiled to use a GPU to do physics calculations? I've never heard that before. Got a link or anything about that?
