Are PhysX days numbered?

posted in Not dead...
Published September 24, 2009
nVidia have a problem.

Last generation they were taken by surprise by an RV770 chip which was both fast and cheap to produce. While they might have been first to market with the GT200 series cards, every review site closed out its review with 'but wait and see what AMD has in store'. A week later, when those reviews hit, the RV770 was everything that had been promised: a cheap card which performed well.

The HD4 series was born, ATI was back.

In the scramble which followed, NV were forced to cut prices, go on the PR defensive and generally engage the hype machine with regard to CUDA and PhysX.

This was a good plan: at the time AMD had no competing products, and CUDA and PhysX were, and indeed are, good solutions to the problems of GPGPU and GPU physics on NV hardware.

Fast forward 15 months to September 23rd 2009.

Even before the HD5 launch day there were issues coming from NV. AMD's change of tactic to smaller GPU dies had allowed them to produce their cards with good yields, and an early transition to 40nm allowed them to get to grips with the process.

NV on the other hand have stuck with the tried and tested monolithic design they have been running with for some time; the dies have been getting bigger and more expensive to produce due to poor yields. They have been transitioning to 40nm with their current chip design, however this has been dogged by yield issues and has caused their own 40nm-based cards to come to market very late.

The final problem is OpenCL and DirectCompute.

With regard to OpenCL, both AMD and NV are fully behind this standard, with AMD having recently submitted their implementation to Khronos to have it certified.

With the rise of OpenCL there is finally a high-level, cross-platform solution to GPGPU, and with it being embraced by the major vendors it has a good chance at life.

While CUDA might remain the best way to get the most out of every GPU cycle, with AMD's newest chip clocking in at 2.7 TFLOPS in single precision mode (and 544 GFLOPS in double precision mode) we are going the same way as with CPUs, where abstraction, and the small performance hit that comes with it, matters more than cramming every last FLOP from the GPU.
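As a back-of-the-envelope check, those figures fall straight out of the HD5870's launch specs (1600 stream processors at 850 MHz, with each ALU able to issue a multiply-add, i.e. two FLOPs per clock, and double precision running at one fifth the single precision rate). A rough sketch of the arithmetic:

```python
# Back-of-the-envelope peak throughput for the HD5870 (Cypress).
# Assumed launch specs: 1600 stream processors, 850 MHz core clock,
# 2 FLOPs per ALU per clock (one multiply-add counts as two operations).
stream_processors = 1600
clock_hz = 850e6
flops_per_clock = 2

peak_sp = stream_processors * clock_hz * flops_per_clock
print(peak_sp / 1e12)  # → 2.72 (TFLOPS, single precision)

# Double precision on Cypress runs at one fifth the SP rate:
peak_dp = peak_sp / 5
print(peak_dp / 1e9)   # → 544.0 (GFLOPS, double precision)
```

Peak numbers like these are of course theoretical maximums; real workloads rarely keep every ALU fed every clock.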

So, already we see one of NV's marketing points begin to wobble; until now they have been king of the GPGPU kingdom, but this opening up of the platform will make many developers (though not all) question being locked to one vendor.

OpenCL brings yet another problem to NV's door: physics.

Right now the only GPU physics solution going is PhysX. Havok and AMD had started working on something together, but this broke down when Intel bought Havok and the project fell apart.

More recently, however, AMD have shown Havok working on the HD4 series cards using an early version of OpenCL. Bullet, the open-source physics library, while having a CUDA backend, is also developing an OpenCL backend.

Assuming that this OpenCL support makes it into the free Havok binaries, this starts to raise an interesting question: namely, why use PhysX?

Right now PhysX locks you to a single GPU platform, and while a large developer might get a fair chunk of cash from NV to use it, developers might well start turning away from this solution if they can get wider support for GPU physics in their products.

History has already taught us that APIs which are tied to one piece of hardware tend to die (3dfx's Glide), and it also teaches us that NV's previous attempts to push their own standards have failed (Cg as OpenGL's main shading language, although I'll grant it survived in other areas).

NV's other problem is the strong showing from AMD this generation.

The HD5870 comes close to matching the GTX295 in some situations (the chip is mostly strangled by its 256-bit bus at resolutions above 1920*1200), and NV currently don't have a solution to compete with this card or with AMD's DX11 support, nor will they have one for probably some months yet; maybe some time in Q1, if the rumors are to be believed.

NV have already hit out at AMD's card, with the following quote:
Quote:(source)
Because we support GPU-accelerated physics, our $129 card that's shipping today is faster than their new RV870 (code name for new AMD chips) that sells for $399. - nVidia 09/10/2009

which shows a degree of concern.

NV could choose to allow PhysX to target OpenCL, which would keep the API alive but at the cost of them being able to use it as a major marketing bullet point in the future (unless they somehow limit the OpenCL version, but that would just as likely send people to another API).

PhysX as it stands is a dead-end piece of technology; if it does die, then its place in history as the first physics API to use GPGPU is of course secure.

Comments

Groove
With the PhysX debate, I always wonder why it's always the GPGPU side that comes up... PhysX was called NovodeX before and was an amazing CPU-based physics middleware... this is still the case!

I would advise every game to use it; it's easy to use and the physics is robust! But I would not actually expect any use of the GPU. Besides, the GPU has plenty of things to do, while the Xbox 360, PS3 and PC have plenty of cores to run it more than fast enough!

The GPGPU side is almost incidental in PhysX; it's not really amazing, and only a little part of the API is accelerated by the GPU... it works well with particles, not really with collision detection... what's the point?

PhysX is great; the GPGPU side is just the buzz around PhysX.
September 24, 2009 06:14 AM
Gaiiden
You did see my comment to you in my Daily, right? I wanna know how you think the HD5870 performs - I'm thinking of upgrading come Cyber Monday since my HD3870 is a bit lacking these days
September 25, 2009 02:21 PM