Creepy_cheese

physx chip


From some very rough numbers on PhysX performance, it has about 100-200 times the flops of a high-end single-core CPU (and about 35 times the RAM bandwidth of dual-channel DDR400). Those numbers would be reasonable for a mid-level graphics card, for those who are surprised by that. Now assume CPUs double in speed every year: in log(200)/log(2) = 7.64 years your second core will be able to replace the PhysX chip, and assuming only 100 times the flops, it'll take log(100)/log(2) = 6.64 years. CPUs are dang quick, but they don't touch specialized hardware. (As a side note, I'm hoping FPGAs become a standard part of computers at some point in the near future.)
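As a sanity check on the arithmetic above: the catch-up time is just the base-2 logarithm of the flops ratio, assuming a clean doubling every year. A quick sketch:

```python
import math

def years_to_catch_up(flops_ratio, doubling_period_years=1.0):
    """Years until hardware doubling at the given period closes a flops gap."""
    return math.log2(flops_ratio) * doubling_period_years

# The two estimates from the post:
print(round(years_to_catch_up(200), 2))  # 7.64
print(round(years_to_catch_up(100), 2))  # 6.64
```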

My main concern is that there's no open API for physics cards. So there's going to be a bunch of proprietary physics hardware (once some other companies get into the act), and each game is only going to use one physics engine (at least any game with multiplayer). So mainstream games that make heavy use of physics hardware will be rare, since only so many people have X type of physics hardware. Of course, one company may just conquer the market and become a monopoly; if that happens, odds are top graphics cards will be cheap by comparison.

Well, for one, PhysX has a software mode too. I'm hoping to use it in a game that I will be writing this summer. I like the nice fluid-dynamics effects the PhysX SDK allows for.

I really don't like this idea of having physics calculated off-chip, requiring local memory, data transfer, and all this other headache... it seems like the least cost-effective way to do dedicated physics.

This is precisely the sort of thing that needs to be onboard the CPU, either as a dedicated IC or as some sort of PLD. The idea is really hurt by the fact that Ageia stuck it outside the CPU.

But, having said that, the mainstream GPU industry was practically birthed by dedicated gamers, so I wouldn't be surprised at PC gamers picking these things up in droves if the next Doom or Quake or Unreal put it to good use.

I'm thinking it's a bad idea. Firstly, there's the problem that you have to cater to the lowest common denominator: games that use PhysX will only be able to use it for eye candy, not anything gameplay-changing. It's not like graphics cards at all; everybody has a graphics card, they just vary in performance, whereas the PhysX card is somewhat binary: you either have it or you don't.

Secondly, I just don't like the idea of it. I like parallelism, but splitting tasks up like this is just silly. What's next, an AI card? A scripting card? There are only so many areas you can split things into. Really, more work should be done on multi-core processors; they may not be as optimal, but they are more useful and scalable since they are generic and not tied to any one task.
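The "generic cores" point can be sketched in a few lines: the same pool of general-purpose cores can be handed physics, AI, or scripting work interchangeably, whereas a dedicated card is wired to exactly one of those jobs. The task functions here are illustrative stand-ins, not a real engine.

```python
from concurrent.futures import ThreadPoolExecutor

def physics_step(dt):
    # stand-in for one physics tick
    return f"physics advanced {dt}s"

def ai_step(agents):
    # stand-in for one AI update pass
    return f"ai updated {agents} agents"

def run_frame():
    # the same generic workers accept either kind of task
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(physics_step, 0.016),
                   pool.submit(ai_step, 64)]
        return [f.result() for f in futures]

print(run_frame())  # ['physics advanced 0.016s', 'ai updated 64 agents']
```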

Quick question:
How are we going to use this hardware? Does it require a specialized library that won't apply to the nVidia physics enhancements coming out? Or is there already some abstraction, ala OpenGL (OpenPhys?), to handle that for us?

Cheers,
--Brian

Quote:
Original post by Nairb
Quick question:
How are we going to use this hardware? Does it require a specialized library that won't apply to the nVidia physics enhancements coming out? Or is there already some abstraction, ala OpenGL (OpenPhys?), to handle that for us?

Cheers,
--Brian


A proprietary API, of course!

They provide a very OpenGL-like library which will use the card, or fall back to software if it isn't present. At least that's my understanding based on talking to people who've used them.
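The fallback pattern described here is simple to sketch: one physics interface that prefers a hardware backend and silently drops to software when no accelerator is found. All names below are hypothetical illustrations of the pattern, not the actual PhysX API.

```python
def ppu_present():
    # stand-in for a real device probe; hardcoded to "no card" here
    return False

class SoftwarePhysics:
    name = "software"
    def step(self, dt):
        pass  # integrate on the CPU

class HardwarePhysics:
    name = "hardware"
    def __init__(self):
        if not ppu_present():
            raise RuntimeError("no PPU found")
    def step(self, dt):
        pass  # dispatch to the accelerator

def create_physics():
    # prefer hardware, fall back to software transparently
    try:
        return HardwarePhysics()
    except RuntimeError:
        return SoftwarePhysics()

engine = create_physics()
print(engine.name)  # "software", since the probe above returns False
```

The point of the pattern is that game code only ever calls `create_physics()` and `step()`, so it neither knows nor cares which backend it got.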

I'm using the card for my Genesis project: a self-evolving AI structure that is partially based on the real world and uses a physical simulator to simulate real-world mechanics on robots and their problem-solving abilities...

So I guess that even though I say I'm going to buy it, and I'm kind of in the group of people who say this might not work commercially, I am buying it after all, just not for the reasons they probably developed the card for...

What with nVidia teaming up with Havok to deliver their own PPU, this is sounding a lot like the early days of Glide vs. OpenGL vs. D3D v3.

I prophesy the creation of a generalized physics API (say, OpenPL) and a Microsoft DirectX physics API, DirectPhys (direct-fizz?).

Certainly highly parallel operations are no new idea, and I wouldn't be surprised if a new class of secondary processor, called, oh, I dunno... a Parallel Pipe Processor, becomes standard on many PCs.

Oh wait, isn't that exactly what the Cell Processor is? Hmmm...

Keep an eye out for PPPs and physics engines built to take advantage of them (not to mention a whole host of other applications; can you imagine Photoshop accelerated with a PPP? It would be astoundingly fast!)
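Image filters are a good intuition pump for why something like Photoshop would benefit: each output pixel depends only on its own input, so every pixel could in principle be computed simultaneously. A toy single-threaded sketch of such an "embarrassingly parallel" operation:

```python
def brighten(pixels, amount):
    """Apply the same independent operation to every pixel.

    Because no pixel depends on any other, a parallel processor could
    run all of these per-pixel operations at once.
    """
    return [min(255, p + amount) for p in pixels]

row = [0, 100, 200, 250]
print(brighten(row, 40))  # [40, 140, 240, 255]
```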
