PhysX chip

Quote:Original post by Dabacle
Good point, I forgot about those. But those supercomputers aren't the same as a processor with, say, 16 cores. And they only function well on fully parallel problems like partial differentiation/integration, analyzing different outcomes of chess moves, brute-force cracking of encryption, or any problem that can be broken up into equal tasks that don't rely on each other. Try writing normal software on one of those and it would waste all but a few cores anyway... Plus, when you add multiple CPUs together you get an equal amount of added cache. That linear growth wouldn't happen when adding more cores to a single processor.


I would have thought that if putting several CPUs on one board yields better performance, then having them integrated on the same die would be even better.
Also, it was my impression that the PPU is good at physics because physics is parallel by nature; why wouldn't it be just as parallel on GPUs and multicore CPUs?

What people do not realize is that the PPU proposition is not an open one. Very few people will get to program whatever they want on a PPU (those chess, encryption, and brute-force problems).
The proposition is that an extra video card can do the same job as well as, or perhaps better than, a PPU card that can only do physics the way a select group of people thinks it should be done.

I do like the idea of a PPU, but I do not like the idea of Ageia and its PhysX solution completely shutting out the competition. If I am going to plug a device into my computer, I would like the chance to do what I want with it. I can do that with a CPU.
We can program any effect we want with DirectX or OpenGL on any video card, but I doubt very much that anyone here will get to do anything of the sort on the PPU.

If Ageia can pull this stunt off, then more power to them; I do not think anybody is opposing them on that. What the other companies are proposing is that the Ageia PPU card should not be the only way to do accelerated physics in a game.

When we look at the problems of graphics, physics and AI, we find two types of mathematical problems: those that can be run in parallel and those that can't. Graphics and physics are both highly parallelizable. Any bunch of processing units optimized for vector processing is capable of solving both with high efficiency. Physics simulation is just an iteration through a data vector (objects) and the computation of a result set. The transformation problems of 3D graphics are the same: you have your input vector (objects) and you have to generate a result set that can be rendered. The two problems are similar. In the case of a skeleton-animated object, both algorithms have to compute the same data. The physics code then computes the new bone positions and the graphics code computes the vertices that have to be rendered. By doing the two together, we have to compute the skeleton only once.
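To make that concrete, here's a minimal C++ sketch of the shared-skeleton idea. The Vec3/Bone/Vertex types and the explicit Euler integrator are invented for illustration, not anyone's actual engine or the Ageia API; note that both loops do independent per-element work, which is what makes them parallel-friendly.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct Bone   { Vec3 position, velocity; };
struct Vertex { Vec3 position; int bone; };

// Physics half: independent per-bone work, trivially parallelizable.
void integrateBones(std::vector<Bone>& bones, Vec3 gravity, float dt) {
    for (Bone& b : bones) {                 // each iteration touches one bone only
        b.velocity = b.velocity + gravity * dt;
        b.position = b.position + b.velocity * dt;
    }
}

// Graphics half: reuses the pose just computed instead of recomputing it.
void skinVertices(const std::vector<Bone>& bones, std::vector<Vertex>& verts) {
    for (Vertex& v : verts)                 // again, independent per-vertex work
        v.position = bones[v.bone].position;  // rigid binding, for brevity
}
```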

About the possible future trends:

We can have dedicated cards for everything, mostly doing the same work on the same datasets to extract different things from the results. You will need a type 1 physics accelerator for game 1, a type 2 physics accelerator for game 2, an AI accelerator for pathfinding, and maybe a facial-expression and hair-flow accelerator to see nicer characters.

Alternatively, we can have cores that are good at logic, running the game logic and the rule-based decision-making AI. We can have cores that are good at floating-point vector processing, running the transform, physics and sound-environment calculations. And finally we can have cores that are good at fixed-point vector processing, doing the blitting (rasterisation), the vector AI (pathfinding) and the sound mixing.
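As a rough illustration of that split (the core classes, queue layout and names here are purely hypothetical, not any real chip), a frame's work could be binned by the kind of core it suits:

```cpp
#include <functional>
#include <queue>
#include <utility>

enum class CoreType { Logic, VectorFP, Blitter };

struct Task {
    CoreType wants;                 // which core class suits this work
    std::function<void()> run;
};

struct Scheduler {
    std::queue<Task> logic, vectorFp, blitter;   // one queue per core class
    void submit(Task t) {
        switch (t.wants) {
            case CoreType::Logic:    logic.push(std::move(t));    break;  // game rules, decision AI
            case CoreType::VectorFP: vectorFp.push(std::move(t)); break;  // transform, physics, sound env
            case CoreType::Blitter:  blitter.push(std::move(t));  break;  // rasterisation, pathfinding, mixing
        }
    }
};
```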

Finally, we can have a chip that has logic, vector and blitter cores all in it. Looking around, I have a feeling that we already have one... :-) but it's still not powerful enough, because the number of low-level cores is still too low. A better version would be 2 logic, 16 vector and 128 blitter cores (instead of 2/2/7). If someone could produce a chip like this, then the only things needed for a complete system would be random access memory and I/O signal-level converter chips (PHYs).

Viktor
I think the PPU is not such a good idea. Don't get me wrong, I do like the Ageia physics library. But a card for every problem... I don't like that thought. And then you need to install thousands of drivers.

Anyway, I think you guys are forgetting something. If I have a game, I would like the single-player part to be the same as the multiplayer part, at least from a gameplay point of view.

But no matter how you solve the physics, you will not be able to update an entire world over the internet. So things such as fluids are not suitable for online games. Since you would have to reduce the physics for online gaming anyway, you would not need an additional card for physics either.
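To put a rough number on that, here's a back-of-envelope sketch; the particle count, payload size and update rate are all assumed, and real protocols add headers on top:

```cpp
#include <cstdio>

int main() {
    const double particles = 50000;   // a modest fluid sim
    const double bytesEach = 12;      // three 32-bit floats, position only
    const double hz        = 30;      // network update rate
    const double bytesPerSec = particles * bytesEach * hz;        // 18,000,000 B/s raw
    std::printf("%.1f MB/s\n", bytesPerSec / (1024.0 * 1024.0));  // ~17.2 MB/s
    return 0;
}
```

Even that bare minimum of roughly 17 MB/s is far beyond a typical consumer uplink, which is why online games synchronize inputs or key results rather than raw simulation state.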
HardOCP: ATI Takes on Physics - Computex Day 1.

That's the first big nail in PhysX's coffin. Current-generation graphics cards outperform PhysX at a lower price, and future GPUs will no doubt be even better.
Kudos to ATI
http://www.extremetech.com/article2/0,1697,1943872,00.asp
Here is another article that may clarify a few points about ATI's plans for physics and other math-intensive tasks.
I believe that if they get their way, we will all be doing accelerated physics, as it should be.

My hope is that if the trend is followed by the other manufacturers (Matrox and Nvidia), each releasing their assembly code, they will set the stage for some high-level language for embedded math adapters.
This is good news.


Well, I find this article quite thought-provoking.
http://www.theinquirer.net/?article=32197

It would create interesting results, especially if you factor in the fact that AMD just opened up their socket architecture while advocating dual sockets for consumer-grade motherboards. This means two things.

1. Dual processors, not just dual cores, may become mainstream consumer hardware.

2. (Which is much more interesting.) Anybody with the cash can now manufacture and market a chip that slots into that second socket on an AMD dual-socket board. Think of a GPU without the card, or the bus, or whatever, connected directly to the processor. I know ATI and nVidia would love that. You can pretty much plug in anything you want based on your needs. Heck, plug in a PhysX chip, if they make one for that package.

So, from a different point of view, maybe we should stop looking at how physics can be done on GPUs and instead wonder whether you could just create a vector co-processor that slots in beside an AMD processor. The fact is, the PhysX card may just die like the ill-fated Voodoo Banshee series, but are we also looking at the demise of the video card? At what point do we ask ourselves whether we're plugging in a video card or just another vector co-processor?
Quote:Original post by C0D1F1ED
HardOCP: ATI Takes on Physics - Computex Day 1.

That's the first big nail in PhysX's coffin. Current-generation graphics cards outperform PhysX at a lower price.


Where does it say that in the article?

Quote:Original post by Kylotan
Where does it say that in the article?

Third paragraph:
Quote:Also ATI says they are going to be simply faster than an Ageia PhysX card even with using a X1600 XT and that a X1900 XT should deliver 9 X the performance of an Ageia PhysX card

I can buy an X1600 XT for two-thirds the price of a PhysX card. Or I can add the price of a PhysX card to that of a mid-range graphics card and get a high-end card that improves my graphics and has power to spare for physics.
Quote:Original post by C0D1F1ED
Quote:Original post by Kylotan
Where does it say that in the article?

Third paragraph:
Quote:Also ATI says they are going to be simply faster than an Ageia PhysX card even with using a X1600 XT and that a X1900 XT should deliver 9 X the performance of an Ageia PhysX card

I can buy an X1600 XT for two-thirds the price of a PhysX card. Or I can add the price of a PhysX card to that of a mid-range graphics card and get a high-end card that improves my graphics and has power to spare for physics.

I think he's looking for benchmarks, not claims. I think.


Quote from that article - "ATI does not see their physics technology as an offload for the CPU but rather processes a new category of features known as “effects physics.”"

So I take it we are talking about bigger and more complex explosions like those seen in GRAW (more polygons per explosion that then disappear quickly afterwards).

In which case, personally, I'm not too bothered about it. Why spend an extra $100-$200 on something that is only going to be fun the first few times you see it?

I'd be far more interested in a card that helps simulate realistic physics within a game (friction, collisions, inertia, etc.) while freeing up the processor for other jobs.
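For what it's worth, here's a minimal sketch of the kind of gameplay physics I mean; all constants are invented and it's one-dimensional just to show the idea that friction and inertia feed back into game state, rather than being throwaway eye candy:

```cpp
#include <cmath>

struct Body {
    float mass;   // kg
    float vx;     // m/s, one axis for brevity
    void step(float forceX, float frictionCoeff, float dt) {
        vx += (forceX / mass) * dt;               // inertia: a = F/m
        float dv = frictionCoeff * 9.81f * dt;    // kinetic friction on flat ground
        if (std::fabs(vx) <= dv) vx = 0.0f;       // friction can bring the body to rest
        else vx -= std::copysign(dv, vx);         // ...or just slow it down
    }
};
```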

Malal

