physx chip

223 comments, last by GameDev.net 17 years, 10 months ago
I'm sorry, I know I'm dense, but if you're using a physics API that normally runs in software and uses the hardware chip when it's detected, how are you suffering or being inconvenienced? Isn't that what any other graphics API does? We could take that concept a step further and have the API check for the PhysX chip; if there isn't one, check for a second processor; if not, check for a dual core; if not, fall back to software. I don't think that's unreasonable or inefficient. Does anyone else?

Isn't that what PhysX API-enabled games do, or will do?
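A minimal sketch of that detection cascade, assuming simple boolean capability probes (the function and backend names here are made up for illustration, not actual PhysX SDK calls):

```python
def select_physics_backend(has_ppu, has_second_cpu, has_dual_core):
    """Pick the best available physics backend, falling back step by step."""
    if has_ppu:
        return "ppu"            # dedicated PhysX card detected
    if has_second_cpu:
        return "second-cpu"     # offload physics to the second processor
    if has_dual_core:
        return "second-core"    # offload physics to the second core
    return "software"           # plain single-threaded software fallback

# Example: no card and no second socket, but a dual-core CPU is present
print(select_physics_backend(False, False, True))  # → second-core
```

The game code then talks to one physics interface regardless of which branch was taken, which is the whole point of the cascade.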


 

Quote:Original post by Alpha_ProgDes
I'm sorry, I know I'm dense, but if you're using a physics API that normally runs in software and uses the hardware chip when it's detected, how are you suffering or being inconvenienced? Isn't that what any other graphics API does? We could take that concept a step further and have the API check for the PhysX chip; if there isn't one, check for a second processor; if not, check for a dual core; if not, fall back to software. I don't think that's unreasonable or inefficient. Does anyone else?

Isn't that what PhysX API-enabled games do, or will do?


That is what they do if the hardware is not detected, yes, and it's a good point. If people don't buy the card, they'll run the physics in software; those who do buy the card will run it in hardware. It'll be faster with hardware, but everyone can still play.
Quote:Original post by NickGravelyn
Quote:Original post by Alpha_ProgDes
I'm sorry, I know I'm dense, but if you're using a physics API that normally runs in software and uses the hardware chip when it's detected, how are you suffering or being inconvenienced? Isn't that what any other graphics API does? We could take that concept a step further and have the API check for the PhysX chip; if there isn't one, check for a second processor; if not, check for a dual core; if not, fall back to software. I don't think that's unreasonable or inefficient. Does anyone else?

Isn't that what PhysX API-enabled games do, or will do?


That is what they do if the hardware is not detected, yes, and it's a good point. If people don't buy the card, they'll run the physics in software; those who do buy the card will run it in hardware. It'll be faster with hardware, but everyone can still play.


The difference is that with a PhysX card one can use advanced physics with thousands of rigid bodies. On a CPU that's not possible (same as the GPU-CPU relationship), and it won't be possible for several upcoming generations of CPUs (I don't believe the dual-core Athlons and Pentiums can handle physics better than a PhysX card, and after seeing several videos on the AGEIA developer site, I don't believe that's going to change soon, but this is IMHO).
So the developer must build two levels of detail for the physics, and can't use rich physics (like fluid simulation and similar) for gameplay or for anything other than eye-candy effects.
And for eye candy, GPU physics is sufficient, so there's no point in the PhysX card as an accelerator; as a game requirement, though, it's a powerful tool.
(Again, I note that this is IMHO.)
I think it's been said before, but part of the problem with the current physics chip is that it does seem to be API-dependent. The only games it will help are those that specifically use that API. If there isn't a major shift toward using that API, or if other physics middleware vendors (like Havok) don't integrate the ability to take advantage of the chip, it basically just sits there.

From what I understand, it is basically another vector-based processor... over the PCI bus. All of a mighty 33 MHz. Also, will it get to the point where, as in the video card industry, you constantly need a new one to keep up with the technology?

We have a lot of vector-based math technology available on CPUs that I don't think gets taken advantage of as much as it could: MMX, 3DNow!, SSE/2/3, etc.

"I can't believe I'm defending logic to a turing machine." - Kent Woolworth [Other Space]

EDIT: I looked up the performance-improvement figures for dual-core gaming: http://www.firingsquad.com/hardware/quake_4_dual-core_performance/
I had written 40%; it's actually around 60%. I also made a few clarifications.

C0D1F1ED: You need to chill. Your tone is condescending towards the rest of the posters, to the point of almost insulting them. Remember that everyone is entitled to their own opinions.

That being said, here's my take on some of the things you've mentioned:

- About the speed of the CPU: you can't really do the calculations the way you did. You are only calculating raw performance based on clock speed and simple instructions. The P4 does NOT have a sustained rate of 3 instructions per clock on all instructions (only on simple ones). You also totally disregard cache misses and pipeline stalls. Also, the memory subsystem of any modern PC is vastly inferior to the card's dedicated memory subsystem and optimized memory access patterns. Memory access speed quickly deteriorates (up to a 20x penalty) on any modern PC when access patterns are scattered, but the PhysX card can maintain its high speed.
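A crude model of why raw clock-speed arithmetic overstates real throughput. The miss rate and stall figures below are illustrative assumptions, not measured P4 numbers; only the idea that scattered access costs a large multiple comes from the post:

```python
def effective_ips(clock_hz, peak_ipc, miss_rate, miss_penalty_cycles):
    """Instructions per second once cache misses stall the pipeline."""
    cycles_per_instr = 1.0 / peak_ipc + miss_rate * miss_penalty_cycles
    return clock_hz / cycles_per_instr

# "Marketing math": 3 GHz at a sustained 3 IPC, no stalls at all
peak = effective_ips(3e9, 3.0, 0.0, 0)

# Same chip with 2% of instructions missing cache at 200 cycles each
real = effective_ips(3e9, 3.0, 0.02, 200)

print(round(peak / real, 1))  # → 13.0
```

Even a modest miss rate dominates the per-instruction cost, which is why a dedicated memory subsystem with predictable access patterns can win despite a far lower clock.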

- About multicore: you can't expect to get 8x performance out of your imaginary octa-core (or whatever) CPUs. Actual processing power does NOT scale linearly, as you will face severe synchronization issues, copying of data, and other problems related to multithreading. This is already apparent in "dual-core optimized" games today; the best performance improvement (that I know of, feel free to correct me on this) was in the Doom 3 engine, giving a ~60% boost. Other "optimized" games have reported boosts of around 15-20%. While this is pretty low and will almost certainly improve in the future, those are not exactly the figures you'd expect. Adding more cores will just increase the overhead and lessen the improvement.
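The diminishing returns described above follow directly from Amdahl's law. A quick worked sketch: the ~60% dual-core figure is taken from the post, and the parallel fraction is inferred from it:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when a fixed fraction stays serial."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A ~60% boost on two cores (speedup 1.6x) implies a parallel fraction p
# satisfying 1.6 = 1 / ((1 - p) + p/2), which gives p = 0.75.
p = 0.75
assert abs(amdahl_speedup(p, 2) - 1.6) < 1e-9

# Even with eight cores, the same workload tops out well short of 8x:
print(round(amdahl_speedup(p, 8), 2))  # → 2.91
```

So with three-quarters of the work parallelized, quadrupling the core count from two to eight less than doubles the speedup, which matches the "more cores, less improvement per core" point.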

- About the speed of the PhysX processor: unlike you, I have first-hand experience with the PhysX cards (we have hardware support in our title currently in development), and as several people have tried to point out to you, they are a LOT faster than a CPU simulation. Some of the features are simply not feasible in software right now (one of them being realistic fluids). Even *if* you could do it, you can do more of it with the PhysX card. There is also no extra hassle when using them; the API handles all of the synchronization with the hardware. All you have to worry about is integrating the extra power into the gameplay. Also, please remember: this is a first-generation chip. There's a lot of room for improvement.

- About the flexibility of the PhysX processor: the processor does NOT have a static physics-only feature set hardwired into it. It is fairly general-purpose (more so than a PS3.0 GPU, but not as much as a regular CPU). The features are provided by the driver and can be extended in the future. The processor is, however, optimized for specific calculations that occur often in physics computations. Some people have already used it for AI applications, such as neural networks and pathfinding. This means that if devs catch on, you can still have use for the processor even if you're playing a game whose gameplay does not depend on physics.

- About using the GPU to do physics: there are two main reasons why this is a lot worse than having a dedicated PPU.

1. You need all the graphics horsepower you can get. There will never be enough GPU power. When GPUs get faster, devs use the headroom to do more things, and do them more nicely: more shadows, more complex lighting models, and so on. Simple as that. (This was already pointed out by another poster.)

2. Check out the limitations of GPU physics: you can't read back the results, so you can't integrate them into the gameplay. No collision reports, no callbacks, no nothing. Read NVIDIA's paper; they clearly state it's only for effects. The PhysX chip accelerates everything, so there's no problem using it for gameplay at all (in fact, there isn't even any difference).

- About the performance figures of the Cell chip: while the Cell boasts very impressive theoretical performance, this will not translate into actual real-world performance for physics or in general (only certain special-purpose applications will benefit, for example video stream decoding). The reason is this: the processor is divided into a main general-purpose CPU (a PPC RISC core) and eight smaller computation units, each with a small local memory (about 128 KB). The figure you mentioned comes from all of these units working together at maximum efficiency. If the smaller computation units can't be used, performance degrades to that of a standard 4 GHz PPC processor. Here's why that will happen for physics: the memory access pattern for physics isn't localized, because every object must be tested against a lot of other objects (you can use hundreds or thousands on the PhysX cards). You can't fit the data in those 128 KB, and sorting the data (even from a spatial data structure) and then feeding it to the "right" processing unit involves so much overhead that it's just not feasible (nor would it provide any speedup). Hence, the only part of the processor that can be properly used is the PPC master unit, and using only that makes performance degrade to that of a regular CPU running a software simulation.
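A back-of-the-envelope check of the 128 KB local-store argument. The per-body record size is an illustrative assumption, not a Cell or PhysX specification; only the 128 KB figure comes from the post:

```python
LOCAL_STORE_BYTES = 128 * 1024   # per-unit local memory cited in the post
BYTES_PER_BODY = 128             # assumed: transform + velocities + inertia

def bodies_that_fit(store_bytes, bytes_per_body):
    """How many rigid-body records fit in one local store."""
    return store_bytes // bytes_per_body

def worst_case_pairs(n):
    """Worst-case number of object-pair interactions: n choose 2."""
    return n * (n - 1) // 2

# Before code, stacks, and contact lists claim their share, only about
# a thousand body records fit in one local store:
print(bodies_that_fit(LOCAL_STORE_BYTES, BYTES_PER_BODY))  # → 1024

# But pair interactions grow quadratically, so partitioning bodies
# across small isolated memories forces heavy cross-unit data shuffling:
print(worst_case_pairs(1000))  # → 499500
```

This is the crux of the locality argument: the data a single unit needs depends on pairs spread across the whole scene, not on a neat 128 KB slice of it.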


As you can see, there's really no point in discussing whether the PhysX chip is faster than a CPU. It beats a CPU (even a dual-core one) by a really wide margin, and that margin will continue to increase when the next generation of chips arrives. Specialized hardware will always beat general-purpose hardware in terms of performance. You mentioned that you are an engineer. If you have a degree in technical engineering, this must have already been pointed out to you in several books and/or by your professors.

Now, these technical points aside, another (completely different) discussion is if the dedicated physics hardware will catch on. This is something that nobody can prove with techno-babble, as it's really anyone's guess and only time will tell.

Personally, I hope that the PPU will catch on, because it's really cool to have that much more power to play with :) Whether my dreams will come true remains to be seen ;)

/Simon

[Edited by - simonjacoby on April 25, 2006 4:47:44 PM]
whether or not people buy it is going to be determined by the same factors that determined whether or not people bought:

1) the FPU (floating-point unit) you had to strap onto 386 motherboards to play games in the dark ages: lots of people did, until Intel built one into the 486

2) sound cards. Originally you just got the PC speaker: lots of people paid ~$200 for a sound card for games that didn't require one, just so they would sound cooler. Now they're required.

3) 3D graphics cards: slow to be adopted, but now you can't buy a computer without one, nor can you play any "modern" game without one.

All of those "addons" were ridiculed in their time as something that would never catch on. "Who wants to pay $200 for a FPU just to play a couple games???" "Why would I buy a sound card when the game has sound without one???" "Why would I buy a 3D card when it'll just make current games a little faster???"

All it takes is enough people seeing a "value-add" to pick one up. Once you get a consumer base you can build more and more features that require one to work. It could work, it could bust. No one here can predict that.

But.... a bunch of soon to be released games will support the card. If the industry thinks it rocks, maybe it will...

-me
The main problem with the card is that it's not related to one of our senses (not directly, anyway). With graphics cards, it was improved visuals within the game. With sound cards it was better sound quality (although I know plenty of people who still don't own one). But physics? There isn't really any big difference in the quality of the game when the average consumer plays it. Personally, I just don't see the average consumer picking one up (I think the API might end up being used, though, even if not the card). Then again, if game companies jump on board, it might become more commonplace... Who knows.

By the way, does anyone else feel like Creepy_cheese was a plant? :) Weird subject to make your first and only post on... Not to derail the conversation.
Quote:Original post by TheFez6255
The main problem with the card is that it's not related to one of our senses (not directly, anyway). With graphics cards, it was improved visuals within the game. With sound cards it was better sound quality (although I know plenty of people who still don't own one). But physics? There isn't really any big difference in the quality of the game when the average consumer plays it.


As graphics get more photorealistic, players become more attuned to unrealistic behaviour in the in-game models. On this level, better physics will make the gameplay appear more believable.

As for sound cards, you're missing the point slightly - the onboard sound today is based on what used to be add-on cards 10 years ago, therefore the add-on card market for it has raised the bar for everybody. I remember the days before digital sound playback was common on PCs. The same thing applies to on-board video, really.

Quote:Original post by simonjacoby
C0D1F1ED: You need to chill. Your tone is condescending towards the rest of the posters, to the point of almost insulting them. Remember that everyone is entitled to their own opinions.

Don't worry, I'm really a nice guy. I respect everybody's opinion. I have a strong opinion of my own and I think I'm just as entitled to it. I try to support my opinion with a bunch of arguments, which might seem aggressive at first, but it's just to make people think twice before they adopt an Ageia marketing slogan as their opinion. Eventually this should lead to a healthier conclusion for everybody, including me. I only come to GameDev.net to exchange opinions and learn... I apologize if anyone feels offended in the process.

At the moment I still think physics cards are received with a little too much enthusiasm. I'm an engineer, and I think in terms of technical facts and cost effectiveness. I'm not denying that physics will play a larger role in future games; I just don't believe an extra card is the best way to get there.

First of all, we need to expand our knowledge and write good software (cf. the Oblivion bugs). Using a physics card is the brute-force solution: it shifts the problem to the consumer (who has to shell out). Furthermore, there's a good chance the extra processing power we will actually need in real next-generation games will already be offered by dual-core processors (which are cost-effective for more than just NovodeX-powered games). Intel is about to launch some really nice processors that should not be underestimated when combined with optimized software. If that's still not enough, there's plenty of spare bang in current and next-generation graphics cards. It may cost us 1 FPS, but it's available to all and, again, more cost-effective. Also consider that GPUs use the latest silicon technology (which makes them fast and affordable), and it will take years for PPUs to get that efficient.

If that's still not convincing as to why physics cards are not so hot, then consider this: suppose that in a couple of months Joe 'Gamer' Average buys a fantastic new laptop with a 64-bit dual-core processor that brings even today's fastest Athlon 64 X2 to its knees, for just a fraction of the price. And since he loves games, he combines that with something like a GeForce Go 7900 GS. Not cheap, but it definitely delivers value for every buck. Now, would it be unreasonable to expect this great laptop to run every new game in full glory for at least two years? Imagine how frustrating it must be if a game is released where advanced physics really adds to the gameplay, but it only allows this when a PPU is available, even though he already has two really fast processors in his system.

That's how I feel about physics cards. Adding an expensive third processor with SIMD units no different from those found in the CPU or GPU just doesn't seem justifiable to me. Feel very free to have a different opinion, but I'd love to hear some sound arguments for why physics cards are the best idea since sliced bread. Thank you.
Quote:Original post by C0D1F1ED
If that's still not convincing as to why physics cards are not so hot, then consider this: suppose that in a couple of months Joe 'Gamer' Average buys a fantastic new laptop with a 64-bit dual-core processor that brings even today's fastest Athlon 64 X2 to its knees, for just a fraction of the price. And since he loves games, he combines that with something like a GeForce Go 7900 GS. Not cheap, but it definitely delivers value for every buck. Now, would it be unreasonable to expect this great laptop to run every new game in full glory for at least two years? Imagine how frustrating it must be if a game is released where advanced physics really adds to the gameplay, but it only allows this when a PPU is available, even though he already has two really fast processors in his system.

First of all, it would be seriously unreasonable to expect to run every new title in FULL GLORY for the next two years. The games should run at some lower detail settings, though. Just consider the fact that DX10 is coming in about that time :), and that FarCry is already showing flashy DX10 videos around :D

From what I have seen of the PhysX API, I came away with the impression that the API allows hardware/software usage with no code difference (probably initialization code only).
So if the user's CPU IS FAST ENOUGH to run an advanced physics simulation at optimal speed (at least half of a PPU to be playable, I guess :?:), the user Joe 'Gamer' won't be missing a lot :) (probably minor performance issues).
There is a part of the docs I did not understand fully (I didn't have enough time to search): the fact that joints, pmaps and some other features are not PPU-accelerated. What does this mean? Is some kind of CPU/PPU processing mix used? I would certainly think so (I can't imagine a physics simulation without joints/constraints).
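The "no code difference except initialization" idea can be sketched as follows. This is a hypothetical wrapper illustrating the pattern; the class and backend names are made up, not actual PhysX SDK identifiers:

```python
class PhysicsScene:
    """Hypothetical scene whose stepping code is backend-agnostic."""

    def __init__(self, use_hardware):
        # The only divergence between the two paths lives here,
        # at initialization time.
        self.backend = "hardware" if use_hardware else "software"
        self.time = 0.0

    def simulate(self, dt):
        # Identical call path for both backends: the game code never
        # branches on where the simulation actually runs.
        self.time += dt
        return self.backend

hw_available = False          # pretend hardware detection failed
scene = PhysicsScene(hw_available)
print(scene.simulate(1.0 / 60.0))  # → software
```

If the API really is shaped like this, the mixed CPU/PPU question above would be an internal concern of the driver, invisible at the call site.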

Another note: who has a laptop as their primary gaming station and calls themselves a gamer, I wonder :) Probably a small % of the "hardcore" gamer market. (Hardcore: the ones who bother playing something other than Tetris and similar time-killers in the office :))
It would be more affordable to buy a regular laptop, and a console, if you ask me :)
You always need to pay extra $ if you intend to play serious brand-new game titles...

This topic is closed to new replies.
