
# PhysX chip

## 224 posts in this topic

Quote:
 Original post by Dabacle
Good point, I forgot about those. But those supercomputers aren't the same as a processor with, say, 16 cores. And they only function well on fully parallel problems like partial differentiation/integration, analyzing different outcomes of chess moves, brute-force cracking of encryption, or any problem that can be broken up into equal tasks that don't rely on each other. Try writing normal software on one of those and it would waste all but a few cores anyway... Plus, when you add multiple CPUs together you get an equal amount of added cache. That linear growth wouldn't happen when adding more cores to a single processor.

I would have thought that if putting several CPUs on one board yields better performance, then integrating them on the same die would be even better.
Also, it was my impression that the PPU is good at physics because physics is parallel by nature. Why would it not be equally parallel for GPUs and multi-core CPUs?

What people do not realize is that the PPU proposition is not an open one. Very few people will get to program whatever they want on a PPU (those chess, encryption, and brute-force problems).
The counter-proposition is that an extra video card can do the same job as well as or better than a PPU card that can only do physics the way a select group of people thinks it should be done.

I do like the idea of a PPU, but I do not like the idea of Ageia and its PhysX solution, which completely locks out the competition. If I am going to plug a device into my computer, I would like the chance to do what I want with it. I can do that with a CPU.
We can program any effect we want with DirectX or OpenGL on any video card, but I doubt very much that anyone here will get to do anything on the PPU.

If Ageia can pull this stunt off, then more power to them; I do not think anybody is opposing them on that. What the other companies are proposing is that the Ageia PPU card should not be the only way to do accelerated physics in a game.


##### Share on other sites
When we look at the problems of graphics, physics, and AI, we find two types of mathematical problems: those that can be run in parallel and those that can't. Graphics and physics are both highly parallelizable. Any bunch of processing units optimized for vector processing is capable of solving both with high efficiency. Physics simulation is just an iteration through a data vector (objects) and the computation of a result set. The transformation problems of 3D graphics are the same: you have your input vector (objects) and you have to generate a result set that can be rendered. The two problems are similar. In the case of a skeleton-animated object, both algorithms have to compute the same data: the physics code computes the new bone positions and the graphics code computes the vertices to be rendered. By doing the two together, we have to compute the skeleton only once.
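The per-object independence and the "compute the skeleton only once" idea can be sketched roughly like this (all names here are hypothetical illustrations, not any real engine's API):

```python
# Minimal sketch: one skeleton pose shared by physics and rendering.
# Bone/pose layout and all function names are illustrative assumptions.

def solve_pose(bones, dt):
    """Advance each bone independently. The per-bone work has no
    cross-dependencies, which is why it maps well onto vector hardware."""
    return [(x + vx * dt, y + vy * dt) for (x, y, vx, vy) in bones]

def physics_step(pose):
    # Physics consumes the pose, e.g. to place collision proxies.
    return [(x, y) for (x, y) in pose]

def skin_vertices(pose, weights):
    # Graphics consumes the *same* pose to position render vertices.
    return [(pose[b][0] * w, pose[b][1] * w) for (b, w) in weights]

bones = [(0.0, 0.0, 1.0, 0.0), (1.0, 0.0, 0.0, 1.0)]  # (x, y, vx, vy)
pose = solve_pose(bones, dt=0.5)                   # computed once...
colliders = physics_step(pose)                     # ...used by physics
verts = skin_vertices(pose, [(0, 1.0), (1, 0.5)])  # ...and by graphics
```

The point of the sketch is only structural: both consumers read the same pose, so the skeleton solve happens once per frame instead of twice.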

We can have dedicated cards for everything, mostly doing the same work on the same datasets to extract different things from the results. You would need a type-1 physics accelerator for game 1, a type-2 physics accelerator for game 2, an AI accelerator for pathfinding, and maybe a facial-expression-and-hair-flow accelerator to see nicer characters.

Alternatively, we can have cores that are good at logic, running the game logic and the rule-based decision-making AI. We can have cores that are good at floating-point vector processing, running the transform, physics, and sound-environment calculations. And finally we can have cores that are good at fixed-point vector processing, doing the blitting (rasterization), the vector AI (pathfinding), and the sound mixing.

Finally, we can have a chip with logic, vector, and blitter cores all in it. Looking around, I have a feeling that we already have one... :-) but it's still not powerful enough, because the number of low-level cores is still too low. A better version would be 2 logic, 16 vector, and 128 blitter cores (instead of 2/2/7). If someone could produce a chip like this, then the only things needed for a complete system would be random-access memory and I/O signal-level converter chips (PHYs).

Viktor

##### Share on other sites
I think the PPU is not such a good idea. Don't get me wrong, I do like the Ageia physics library. But a card for every problem... I don't like that thought. And then you need to install thousands of drivers.

Anyway, I think you guys are forgetting something. If I have a game I would like the single player to be the same as the multiplayer part of the game. At least from the gameplay point of view.

But no matter how you solve the physics, you will not be able to update an entire world over the internet. So things such as fluids are not suitable for online games. Since you would have to reduce the physics for online gaming anyway, you would also not need an additional card for physics.

##### Share on other sites
HardOCP: ATI Takes on Physics - Computex Day 1.

That's the first big nail in PhysX's coffin. Current-generation graphics cards outperform PhysX at a lower price, and future GPUs will no doubt be even better.

##### Share on other sites
Kudos to ATI
http://www.extremetech.com/article2/0,1697,1943872,00.asp
Here is another article that may clarify a few points about ATI's plans for physics and other math-intensive tasks.
I believe that if they get their way, we will all be doing accelerated physics, as it should be.

My hope is that if the trend is followed by the other manufacturers (Matrox and NVIDIA), each releasing their assembler code, they will be setting the stage for a high-level language for these embedded math adapters.
This is good news.


##### Share on other sites
http://www.theinquirer.net/?article=32197

It could create interesting results, especially if you factor in that AMD just opened up their socket architecture and is advocating dual sockets for consumer-grade motherboards. This means two things.

1. Dual processors, not just dual core, may become mainstream consumer hardware.

2. (which is much more interesting) Anybody with the cash can now manufacture and market a chip that slots into that second socket on an AMD dual-socket board. Imagine a GPU without the card, or the bus, or whatever: just connected directly to the processor. I know ATI and nVidia would love that. You can pretty much plug in anything you want based on your needs. Heck, plug in a PhysX chip, if they make one for that package.

So, from a different point of view, maybe we should stop looking at how physics can be done on GPUs, and instead wonder whether you can just create a vector co-processor that slots in beside an AMD processor. The fact is, the PhysX card may just die like the ill-fated Voodoo Banshee series, but are we also looking at the demise of the video card? At what point do we ask ourselves whether we're plugging in a video card or just another vector co-processor?

##### Share on other sites
Quote:
 Original post by C0D1F1ED
HardOCP: ATI Takes on Physics - Computex Day 1. That's the first big nail in PhysX's coffin. Current generation graphics cards outperform PhysX at a lower price.

Where does it say that in the article?


##### Share on other sites
Quote:
 Original post by Kylotan
Where does it say that in the article?

Third paragraph:
Quote:
 Also ATI says they are going to be simply faster than an Ageia PhysX card even with using a X1600 XT and that a X1900 XT should deliver 9 X the performance of an Ageia PhysX card

I can buy an X1600 XT for two-thirds the price of a PhysX card. Or, I can add the price of a PhysX card to that of a mid-range graphics card and get a high-end card that improves my graphics and has power to spare for physics.

##### Share on other sites
Quote:
Original post by C0D1F1ED
Quote:
 Original post by Kylotan
Where does it say that in the article?

Third paragraph:
Quote:
 Also ATI says they are going to be simply faster than an Ageia PhysX card even with using a X1600 XT and that a X1900 XT should deliver 9 X the performance of an Ageia PhysX card

I can buy an X1600 XT for two-thirds the price of a PhysX card. Or, I can add the price of a PhysX card to that of a mid-range graphics card and get a high-end card that improves my graphics and has power to spare for physics.

I think he's looking for benchmarks not claims. I think.


##### Share on other sites
Quote from that article: "ATI does not see their physics technology as an offload for the CPU but rather as processing a new category of features known as “effects physics.”"

So I take it we are talking about bigger and more complex explosions like those seen in GRAW (more polygons per explosion that then disappear quickly afterwards).

In which case, personally I'm not too bothered about it. Why spend an extra $100-$200 on something that is only going to be fun the first few times you see it?

I'd be far more interested in a card that helps simulate realistic physics within a game, like friction, collisions, inertia, etc., while freeing up the processor for other jobs.

Malal

##### Share on other sites
The ATI thing sounds neat but it has no chance of survival. Dual-core processors have a whole lot of extra processing power that can be used for physics without the additional overhead (bandwidth, synchronization) of a separate physics card. Furthermore, investing in a dual-core processor benefits much more than just the physics in a few games. Extra graphics cards are quite expensive and will rarely be used.

Besides, no game developer in his right mind would create a game that will only run on a fraction of PCs. So there always has to be a fallback without affecting gameplay (hence the "effects physics"). It took graphics cards about three years to become widespread, but by the time physics play a key role in games the CPUs will be multi-core with highly improved architectures...

##### Share on other sites
Quote:
 Original post by Alpha_ProgDes
I think he's looking for benchmarks not claims. I think.

Fair enough. I'm still looking for extensive PhysX benchmarks...

##### Share on other sites
Quote:
 Original post by Anonymous Poster
Extra graphics cards are quite expensive and will rarely be used.

I don't think it really needs an extra graphics card. You can just use a fraction of the processing power of one graphics card. And it's still more cost-effective than buying a PhysX card.

Of course it's still not fully optimal on current GPUs; they have to deal with Direct3D 9 API limitations and it's not a unified architecture. But AGEIA definitely already has strong competition from all sides, and with next-generation GPUs and CPUs it's going to be a waste of money to buy a PhysX card. I feel sorry for AGEIA, but they should have seen this coming.

##### Share on other sites
You need at least 2 cards, they want you to use 3. You're thinking of nvidia's solution maybe.

Besides, don't underestimate the effective efficiency of a CPU. It has a comparatively huge and fast cache, it has highly efficient branch prediction, it can do out-of-order execution, and its execution units are fully pipelined and run at several gigahertz. A GPU-based solution uses a brute-force approach, but can't be that efficient. So even if it has a multiple of the GFLOPS of a CPU, a lot goes to waste because of additional overhead. Moreover, the second core of a dual-core processor is practically unused at the moment. In a year dual-core CPUs will be widespread and affordable, while only over-enthusiastic gamers will buy a second (or third) graphics card.

##### Share on other sites
http://www.bit-tech.net/news/2006/06/06/ATI_touts_physics_on_GPU/

##### Share on other sites
Quote:
 Original post by Anonymous Poster
You need at least 2 cards, they want you to use 3. You're thinking of nvidia's solution maybe.

That's only with today's solution. I see no reason why they couldn't get it working with one card in the near future. Context switches are still inefficient with the Direct3D 9 API but that will change as well.
Quote:
 Besides, don't underestimate the effective efficiency of a CPU.

I certainly don't. I have been advocating throughout this whole thread that a Core 2 Duo might very well beat PhysX thanks to its high floating-point performance and efficient architecture.

##### Share on other sites
And yet you want a GPU solution. Granted, nvidia's and ATI's next cards are apparently magical ones with no latencies and infinite memory access, but they are still extra cards that no one is going to buy. Why not just run it on the CPU?

You're advocating the same solution as Ageia... an extra card. There's no reason to do the computations anywhere except the CPU.

##### Share on other sites
and yet you want a gpu solution. granted, nvidia's and ati's next cards are apparently magical ones that have no latencies and infinite memory access, but they are still extra cards that no one is going to buy. why not just run it on the cpu?

you're advocating the same solution as aegia...an extra card. theres no reason to do the computations anywhere except the cpu.
0

##### Share on other sites
A thought just crossed my mind after reading so many advocates of the power of dual core. How do we know that current games aren't already multithreaded to begin with? It's common knowledge that to speed up certain processes you have to do operations asynchronously, and that usually involves a thread of some sort with a callback when it is done. And if current game engines are already somewhat multithreaded, then you run into the problem of knowing which core each thread is running on. Technically, I haven't seen any real way of specifying where you want a thread to run. So even on a dual-core processor there really is no guarantee that physics will run on a core of its own; optimally, we hope that's what happens.

Personally, I've only used a hyperthreaded processor. Under Windows' task manager, the two hardware threads show up as two processors, as we all know. When running single-threaded number-crunching programs, the load concentrates on one of the threads but never really maxes it out, while the other thread never stays fully idle either. If this is also what happens on a multi-core processor, then we have to factor in that when the system tries to do load balancing, it may not do so by assigning threads to specific processing units. I guess my main point is that though a second core "may" help physics run better on the CPU, we shouldn't expect a 1 + 1 deal but more like 0.75 + 0.75, and saying you can have physics running on a second core may not be a technically sound argument, as there is no guarantee.

##### Share on other sites
Quote:
 Original post by Anonymous Poster
and yet you want a gpu solution. granted, nvidia's and ati's next cards are apparently magical ones that have no latencies and infinite memory access, but they are still extra cards that no one is going to buy. why not just run it on the cpu? you're advocating the same solution as aegia... an extra card. theres no reason to do the computations anywhere except the cpu.

No, just like people will upgrade to a powerful dual-core processor anyway, they will upgrade to a Direct3D 10 card in due time. And one such graphics card will be enough to do physics processing in between every frame.

I know the future dual-core CPUs will be (more than) adequate for physics processing and likely easier to work with, but I don't think we can rule out the GPU solution. I mean, it was easy to predict PhysX's doom, but whether CPU or GPU based physics will prevail would require a crystal ball to look years ahead. It's not unlikely that both solutions will be used for some time. It's clear that NVIDIA and ATI are very serious about GPGPU applications...

##### Share on other sites
Quote:
 Original post by WeirdoFu
Technically, I haven't seen any real way of specifying where you want a thread to be run. So, even if we were to use a dual core processor, there really wouldn't be a guarantee that physics will run on a core on its own. Optimally, we hope that's what happens.

It can be done using SetThreadAffinityMask (or SetProcessAffinityMask for a whole process). But you actually don't want to do this. The operating system schedules threads every few milliseconds; it balances things automatically and will typically keep assigning the same thread to the same core. The best thing to do is have two threads that constantly decide what task to do next, so they never go idle. The tricky part is determining the granularity of the tasks and how to synchronize as little and as efficiently as possible. But the CPU and O.S. are really quite good at thread scheduling; it's up to the software to make the best use of it.
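A minimal sketch of that two-worker scheme, assuming a simple shared task queue (the task bodies here are placeholders): each worker keeps pulling the next task so neither core sits idle, and only the shared result list is locked.

```python
# Sketch: two workers drain a shared queue, so neither goes idle.
# A None sentinel per worker signals shutdown. Illustrative only.
import threading
import queue

tasks = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    while True:
        task = tasks.get()
        if task is None:          # sentinel: no more work
            return
        out = task()              # do the work (physics step, AI tick, ...)
        with lock:                # synchronize as little as possible:
            results.append(out)   # only the shared result list is locked

for i in range(8):
    tasks.put(lambda i=i: i * i)  # eight small independent tasks
for _ in range(2):
    tasks.put(None)               # one sentinel per worker

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The granularity question from the post shows up here as the size of each task: too fine and the queue locking dominates, too coarse and one worker finishes early and idles.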
Quote:
 I guess my main point is that though a second core "may" help physics run better on the CPU, but we shouldn't expect a 1 + 1 deal, but more like a 0.75 + 0.75, and that saying you can have physics running on a second core may not be a technically sound argument as there is no guarantee.

That's true, but it's in fact the best we can do. When tasking a PPU or GPU with physics calculations we also have to ensure the CPU can do something else in the meantime. That's not much different from having multiple cores: it requires the same task scheduling and synchronization. In fact it's worse, because communication between separate devices is slower than between processor cores. So dual-core is really feasible for physics, and I expect efficiencies between 170% and 190% to be possible.
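The 170%-190% figure can be sanity-checked with Amdahl's law (a back-of-envelope sketch, not a benchmark): with two cores, a workload whose parallelizable fraction is p speeds up by 1 / ((1 - p) + p / 2).

```python
# Amdahl's law for 2 cores: the serial fraction (1 - p) runs on one
# core while the parallel fraction p is split across both.
def speedup_two_cores(p):
    return 1.0 / ((1.0 - p) + p / 2.0)

# ~82% parallelizable code reaches ~1.70x ("170% efficiency"),
# ~90% reaches ~1.82x, consistent with the 170-190% estimate above.
low = speedup_two_cores(0.82)
high = speedup_two_cores(0.90)
```

So the estimate quietly assumes the physics-plus-game workload is roughly 80-90% parallelizable, which is the real question.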

##### Share on other sites
"And one such graphics card will be enough to do physics processing in between every frame."

Considering there are even fewer benchmarks of Havok's implementation than of Ageia's (and considering we won't see rev 1 of DX10 hardware until the end of the year), I'll believe this when I see it. Any actual sources on this, at all? Based on today's performance on today's cards (http://images.anandtech.com/graphs/7950gx2_06040690609/12186.png), and considering the impending arrival of Crysis and UE3, gamers who want high-quality graphics at large resolutions (aka the market that wants accelerated physics eye candy) are going to need all the power they can get.

If a gamer doesn't want to mortgage their framerate for in-game physics... it's just another card for us to buy. *Exactly* like PhysX. Multi-core ftw.

##### Share on other sites
Quote:
 Original post by Malal
I'd be far more interested in a card that helps simulate realistic physics within a game like friction, collisions, inertia etc. While freeing up the processor for other jobs.
Malal

Actually, since the physics equations are the same no matter what hardware does the calculation, there is no reason to believe that physics on a PPU will be more realistic than physics on a GPU, nor is there reason to believe that the PPU will have an easier time passing information back to the CPU than the GPU would.

Both the PPU and the GPU are connected to the CPU over an external bus, so the same bandwidth restrictions apply to both of them, but the PPU will consume a lot more of that bandwidth.

There are tons of physics effects that do not require feedback to the CPU, and those can be processed and rendered directly on the GPU (that is what vertex shaders and pixel shaders do now); the PPU cannot do the rendering in any way.
Every single particle animated by the PPU needs to be transferred twice: once to move its position back to main memory, and again to pass the data on to the GPU.
This is why PPU demos run slower than the CPU versions.

Everything else being equal between PPU and GPU (same bandwidth and flops), I cannot think of one single feature that would run faster on the PPU than it would on a GPU.
I can think of many that would be orders of magnitude faster on the GPU. Take for example particles, fluids, and cloth: for the most part feedback is minimal, yet these effects totally eat the bandwidth of the PPU with the geometry and position updates between PPU/CPU/GPU.

Doing these on the GPU needs only occasional CPU feedback for object interaction with the particles. For example, for a body affected by wind or water, the GPU only has to notify the CPU of what happened to the body, not to each element making up the fluid.
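The double-transfer cost is easy to put rough numbers on (all figures here are illustrative assumptions, not measurements):

```python
# Back-of-envelope bus traffic per frame for N particles.
N = 100_000                  # assumed particle count
bytes_per_particle = 16      # e.g. position + velocity as 4 floats

# PPU path: results cross the bus back to the CPU, then out to the GPU.
ppu_traffic = 2 * N * bytes_per_particle

# GPU path: particles live in video memory; assume only ~1% of them
# ever need feedback to the CPU (collisions with gameplay objects).
gpu_traffic = 0.01 * N * bytes_per_particle

ratio = ppu_traffic / gpu_traffic   # PPU moves ~200x more data here
```

The exact numbers don't matter; the point is that the PPU pays full bus traffic twice per particle while the GPU pays it only for the few particles that interact with game logic.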

If you add to this that the GPU will be 10x the PPU (ATI's estimate), I cannot see how the PPU will remain a competitive option once physics on the GPU catches on with developers.

The PPU was an attractive solution only as long as the video card makers were not interested in doing physics on the GPU, but since video games are a competitive market and there are other big players in the physics middleware business (Havok), this is just the natural market reaction.

My guess is that once the GPU solution becomes more accepted, Ageia will end up rewriting Novodex/PhysX for the GPU and the PPU will silently fall into oblivion.
They did that already for the PS3, and the 360 I think.


##### Share on other sites
http://enthusiast.hardocp.com/article.html?art=MTA1MiwxLCxoZW50aHVzaWFzdA==

It seems the facts do not rise to the level of the expectations. The reviewer politely says the hardware basically sucks. The physics are no better than what you get in any other game using Havok, except that you get more of it: the same clipping bugs, the same distorted ragdolls, the same collision penetration.
The fluids, considered the most advanced feature, leave a lot to be desired. In fact, the review says they are terrible.


