PhysX chip

Quote:Original post by WeirdoFu
Technically, I haven't seen any real way of specifying where you want a thread to be run. So, even if we were to use a dual core processor, there really wouldn't be a guarantee that physics will run on a core on its own. Optimally, we hope that's what happens.

It can be done using SetThreadAffinityMask (or SetProcessAffinityMask for the whole process). But you actually don't want to do this. The operating system reschedules threads every few milliseconds; it balances the load automatically and will typically keep the same thread on the same core. The best approach is to have two threads that constantly decide what task to do next, so they never go idle. The tricky part is choosing the granularity of the tasks and synchronizing as little and as efficiently as possible. The CPU and O.S. are really quite good at thread scheduling; it's up to the software to make the best use of it.
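To make that concrete, here is a minimal sketch of the two-threads-pulling-tasks idea in C++ (modern std::thread syntax for illustration; the class and names are mine, not from any particular engine):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A tiny work queue: worker threads keep pulling tasks so they never idle,
// and the OS decides which core each thread actually runs on.
class WorkQueue {
public:
    void push(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

    void shutdown() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        cv_.notify_all();
    }

    // Worker loop: block only when there is truly nothing to do.
    void workerLoop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();  // run a physics step, AI tick, etc.
        }
    }

private:
    std::queue<std::function<void()>> tasks_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool done_ = false;
};

int main() {
    WorkQueue queue;
    std::vector<std::thread> workers;
    for (int i = 0; i < 2; ++i)            // one worker per core on a dual-core CPU
        workers.emplace_back([&queue] { queue.workerLoop(); });

    // Submit coarse-grained tasks; picking the granularity is the hard part.
    queue.push([] { /* integrate rigid bodies */ });
    queue.push([] { /* update particle systems */ });

    queue.shutdown();
    for (auto& w : workers) w.join();
}
```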
Quote:I guess my main point is that though a second core "may" help physics run better on the CPU, but we shouldn't expect a 1 + 1 deal, but more like a 0.75 + 0.75, and that saying you can have physics running on a second core may not be a technically sound argument as there is no guarantee.

That's true, but it's in fact the best we can do. When tasking a PPU or GPU with physics calculations we also have to ensure the CPU can do something else in the meantime. It's not much different from having multiple cores. It requires the same task scheduling and synchronization. In fact it's worse because communication between separate devices is slower than between processor cores. So dual-core is really feasible for physics and I expect efficiencies between 170% and 190% to be possible.
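As a rough illustration of that overlap, here is a sketch that uses std::async as a stand-in for dispatching a physics step to a PPU/GPU while the main thread keeps working; simulatePhysicsStep and PhysicsResults are made-up placeholders, not any real API:

```cpp
#include <future>
#include <vector>

struct PhysicsResults { std::vector<float> positions; };

// Stand-in for work dispatched to a PPU/GPU; here it just runs on another thread.
PhysicsResults simulatePhysicsStep(float dt) {
    PhysicsResults r;
    r.positions.assign(1024, dt);   // placeholder for real simulation output
    return r;
}

int main() {
    const float dt = 1.0f / 60.0f;
    // Kick off the first physics step asynchronously.
    std::future<PhysicsResults> pending =
        std::async(std::launch::async, simulatePhysicsStep, dt);

    for (int frame = 0; frame < 100; ++frame) {
        // The CPU does other work (AI, game logic, draw-call setup) while physics runs elsewhere.
        // ... per-frame game code here ...

        // Collect last step's results; this is the synchronization point.
        PhysicsResults results = pending.get();

        // Immediately queue the next step so the "device" never idles either.
        pending = std::async(std::launch::async, simulatePhysicsStep, dt);

        // Render using `results`.
        (void)results;
    }
    pending.get();  // drain the last in-flight step
    return 0;
}
```

The price is one frame of latency on the physics results, which is the usual trade-off for keeping both sides busy.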
"And one such graphics card will be enough to do physics processing in between every frame."

considering there are even fewer benchmarks on havok's implementation than ageia's (and considering we won't see rev 1 of dx10 hardware until the end of the year), i'll believe this when i see it. any actual sources on this, at all? based on today's performance on today's cards (http://images.anandtech.com/graphs/7950gx2_06040690609/12186.png), and considering the impending arrival of crysis and ue3, gamers who want high quality graphics at large resolutions (aka the market that wants accelerated physics eye candy) are going to need all the power they can get.

if a gamer doesn't want to mortgage their framerate for in-game physics...it's just another card for us to buy. *exactly* like physx. multi-core ftw.
Quote:Original post by Malal
I'd be far more interested in a card that helps simulate realistic physics within a game like friction, collisions, inertia etc. While freeing up the processor for other jobs.
Malal

Actually, since the physics equations are the same no matter what hardware does the calculation, there is no reason to believe that physics on a PPU will be more realistic than physics on a GPU, nor is there reason to believe that the PPU will have any easier time passing information back to the CPU than the GPU would.

Both the PPU and the GPU are connected to the CPU over an external bus, so the same bandwidth restrictions apply to both of them, but the PPU will consume a lot more of that bandwidth.

There are tons of physics effects that do not require feedback to the CPU, and those can be processed and rendered directly on the GPU (that is what vertex and pixel shaders do now); the PPU cannot do the rendering at all.
Every particle animated by the PPU needs to be transferred twice: once to copy its position back to system memory, and again to pass the data on to the GPU.
This is why the PPU demos run slower than the CPU versions.
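For illustration, per frame the two paths look roughly like this; ppuReadPositions, uploadVertexBuffer, and dispatchGpuParticleShader are hypothetical stand-ins for the bus transfers and dispatch, not a real API:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical stand-ins for bus traffic; not a real PPU or graphics API.
void ppuReadPositions(std::vector<Vec3>& out) { /* PPU -> system memory over the bus */ (void)out; }
void uploadVertexBuffer(const std::vector<Vec3>& data) { /* system memory -> GPU over the bus */ (void)data; }

// PPU path: every animated particle crosses the external bus twice per frame.
void updateParticlesViaPPU(std::vector<Vec3>& particles) {
    ppuReadPositions(particles);     // transfer 1: PPU results back to the CPU
    uploadVertexBuffer(particles);   // transfer 2: same data pushed to the GPU for rendering
}

// GPU path: positions stay in video memory; the CPU only sends tiny per-frame
// parameters (wind, gravity, time step) and reads back the occasional event.
struct SimParams { Vec3 wind; float dt; };
void dispatchGpuParticleShader(const SimParams& params) { /* a few bytes of constants */ (void)params; }

void updateParticlesViaGPU(const SimParams& params) {
    dispatchGpuParticleShader(params);  // per-particle data never leaves the GPU
}
```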

All else being equal between the PPU and the GPU (same bandwidth and flops), I cannot think of a single feature that would run faster on the PPU than it would on a GPU.
I can think of many that would be orders of magnitude faster on the GPU. Take, for example, particles, fluids, and cloth: for the most part feedback is minimal, yet these effects completely eat the PPU's bandwidth on the geometry and position updates between the PPU, CPU, and GPU.

Doing these on the GPU needs only occasional CPU feedback for object interaction with the particles. For example, for a body affected by wind or water, the GPU only has to notify the CPU of what happened to the body, not of what happened to each element making up the fluid.

If you add to this that the GPU will be 10x the PPU (ATI's estimate), I cannot see how the PPU will remain a competitive option once GPU physics catches on with developers.

The PPU was an attractive solution only as long as the video card makers were not interested in doing physics on the GPU, but since video games are a competitive market and there are other big players in the physics middleware business (Havok), this is just the natural market reaction.

My guess is that once the GPU solution becomes more accepted, Ageia will end up rewriting Novodex/PhysX for the GPU and the PPU will silently fall into oblivion.
They already did that for the PS3, and the 360 I think.

http://enthusiast.hardocp.com/article.html?art=MTA1MiwxLCxoZW50aHVzaWFzdA==

It seems the facts do not rise to the level of the expectations. The reviewer politely says the hardware basically sucks. The physics are no better than what you get in any other game using Havok, with the difference that you get more of it: same clipping bugs, same distorted ragdolls, same collision penetration.
The fluids, considered to be the most advanced feature, leave a lot to be desired. In fact, the review says they are terrible.


