PhysX chip



#21 KulSeran   Members   -  Reputation: 2163

Posted 20 April 2006 - 07:40 PM

The difference is that nVidia is working to do general-purpose computations on their GPU, which will still be focused on providing
top-notch graphics performance, whereas Ageia has made a special-purpose processor to deal with the physics computations.

This means:
1) The Ageia processor has real memory, like a CPU. The GPU may have comparable bandwidth, but the access pattern that a GPU is designed for is not
the same access pattern seen in physics computations, so the physics work has to be broken up in "unnatural" ways to accommodate the limited access to memory.

2) The Ageia processor is not "too far" from the CPU. While the CPU needs data such as position, velocity, direction, and rotation in order to
make decisions for AI and graphics processing, it no longer needs to deal with all the side-effect data that the physics engine needs in order to compute
that object information. This is the same idea as HW T&L: the CPU doesn't need to know about all the side-effect geometry, it only has to know the
position and orientation data for the mesh.

There is a very significant amount of processing that the GPU and PPU are both able to do, and a significant amount of data that gets
cached on both that does not have to be transferred every frame. Anything you can offload from the CPU means more cycles for everything else.
It still takes time to iterate through 1000 objects and do frame updates; if you can halve the processing time, you can either make the
world more "rich and full" with another 1000 objects, or make each object "smarter" in some way.
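
To make the data-split idea concrete, here is a minimal sketch (hypothetical types, not the actual PhysX/Ageia interface) of what the CPU-visible object data might look like versus the "side-effect" solver state that could stay with whatever runs the physics:

// Hypothetical data split, for illustration only; these are not real PhysX types.
// The game/AI code reads back a compact transform per object, while the bulky
// solver state (velocities, contacts, impulses) never has to cross back over.
#include <vector>

struct ObjectTransform {       // all the CPU needs for AI and rendering
    float position[3];
    float orientation[4];      // quaternion
};

struct SolverState {           // "side-effect" data kept on the physics side
    float linearVelocity[3];
    float angularVelocity[3];
    // contact points, constraint impulses, broad-phase entries, ...
};

static void submitToRenderer(const ObjectTransform&) { /* stub for the sketch */ }

void gameUpdate(const std::vector<ObjectTransform>& transforms) {
    for (const ObjectTransform& t : transforms)
        submitToRenderer(t);   // AI decisions and draw calls use only this data
}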

Quote:

What's next, an AI card? A Scripting card? There's only so many areas you can split it down into.

There is already specific hardware in your computer to decode MPEG-4 video and MP3 audio. Your network card likely has RLE compression and hardware to
help support the TCP/IP stack. RAID can be done with software, but there is hardware to deal with that too.

The point is that there are types of computation that can be done better with specific hardware. In 5 years we will likely have people
using the PhysX API to do fast general-purpose computations on the PPU, just like we have people using fragment shaders to do some
general-purpose computation on the GPU.

If we didn't have hardware acceleration for many facets of computation today, then the technology would have never made it to the mainstream.
(Think of a 500 MHz CPU trying to decode a DVD without GPU support, or trying to encode an MP3 in real time straight from the CD/line in.
Possible, but a 500 MHz CPU plus a DVD/TV decoder card will make all the difference in the world as far as playback responsiveness goes.)
The PhysX chip is a step in the right direction. The Cell probably is too.

And even if the consumer market doesn't take up the PhysX chip, there is likely
a commercial market for it, just as SGI opened the market for
commercial high-speed, high-quality graphics processing.
Likely the chip will not "flop", but it will be something that only gamers buy; the average computer user will never see the need.
Alienware, Voodoo PC, and several other companies focus on the gamer market and seem to be doing fine.
The PhysX chip will likely become part of their full line of products, just as SLI slowly edged its way in.


P.S.
Personally? I'm waiting for real-time ray-tracing GPUs to come to market.
When that day comes, all the pixel-shader hacks will be gone, and we can finally
see graphics with the real effects in full 3D, without artifacts like today's.


#22 Timkin   Members   -  Reputation: 864

Posted 20 April 2006 - 07:52 PM

What starts out as an add-on chip today may well become a built-in part of motherboard designs of the future. Games are not the only environments that require physics processing. Indeed, there are many more research machines out there that require simulation capabilities than there are games machines. This will indeed be a solid marketplace for PPUs. Additionally, I think you'll find that if PhysX is successful in penetrating software design (or at least the industry isn't averse to catering for it) then you may well see next- or later-generation consoles with dedicated PPUs.

My personal opinion is that this is actually well overdue. There is a great capacity within both the gaming/entertainment and the scientific/engineering research markets for the uptake of physics chips, because what they bring to the table is increased performance, if you're prepared to take advantage of them. Since the market is always moving forward, there will always be people prepared to take that advantage and mark their place in the front line... and more power to them if it means my research life is made easier and my games look and feel more realistic!

Cheers,

Timkin

#23 C0D1F1ED   Members   -  Reputation: 456

Posted 21 April 2006 - 01:00 AM

Quote:
Original post by justo
...in the same way you dont want to be using a dual core processor for rasterizing triangles and running shader code, dual cores will never be as efficient or be able to run as many operations per second as a dedicated card...

You can't compare the two. First of all, modern games absolutely can't run without a graphics card because of the vast amount of processing power required. For physics, you don't need that many GFLOPS. Only a fraction of the geometry is in motion for any realistic game, and that's vertex processing, not pixel processing.

Secondly, CPUs are bad at pixel processing (bilinear texture filtering in software is quite slow), but they are very good at vertex processing (Intel Integrated Graphics still uses software T&L in many cases).

Thirdly, don't underestimate the effective efficiency of a CPU. It has a comparatively huge and fast cache, it has highly efficient jump prediction, can do out-of-order execution, and the execution units are fully pipelined and run at several gigahertz. A PPU uses a brute-force approach, but can't be that efficient. So even if it has a multiple of the GFLOPS of a CPU, a lot goes to waste because of additional overhead (not to mention PCI bandwidth and synchronization).

Last but not least, the second core of a dual-core processor is practically unused at the moment. So while previously there was maybe a 10% 'budget' for physics processing, this has now become 100%. It's already going to be hard to use all of that (without wasting it of course) for any realistic game. Plus, in a year dual-core CPUs will be widespread and cost-effective, while only over-enthusiastic gamers will buy a PPU.

#24 Kylotan   Moderators   -  Reputation: 3329

Posted 21 April 2006 - 01:27 AM

Quote:
Original post by C0D1F1ED
You can't compare the two. First of all, modern games absolutely can't run without a graphics card because of the vast amount of processing power required. For physics, you don't need that many GFLOPS.


That entirely depends on what you're simulating!

Quote:
Only a fraction of the geometry is in motion for any realistic game, and that's vertex processing, not pixel processing.


Physics is about more than just accommodating motion of a few models. Comparing it to vertex processing is doing it a huge disservice. Anyone who's looked at endless equations of Hamiltonian or Lagrangian derivations of rotational velocity and stuff like that will know this. To do a good job of physics, you have various iterative methods and approximations that, to be done properly, require a lot of power. A large part of the reason little of the geometry moves in current games is because it's expensive to make it do so. Make it cheaper, and more will move.

Quote:
Thirdly, don't underestimate the effective efficiency of a CPU. It has a comparatively huge and fast cache, it has highly efficient jump prediction, can do out-of-order execution, and the execution units are fully pipelined and run at several gigahertz. A PPU uses a brute-force approach, but can't be that efficient.


I seriously doubt that a PPU is not 'fully pipelined'. Nor is jump prediction or out of order execution a big deal for such hardware. Those features are part of how CPUs attempt to catch up to dedicated hardware, not advantages over such hardware.

Quote:
Last but not least, the second core of a dual-core processor is practically unused at the moment.


This may be true, but it's almost exactly what Intel said about MMX for graphics. It was nowhere near enough for what was needed.


#25 Marvin   Members   -  Reputation: 127

Posted 21 April 2006 - 01:29 AM

Quote:
Original post by gumpy macdrunken
don't forget about server-side physics. allowing clients to calculate physics leads to hacks/cheats. dedicated physics hardware could be a cost-effective solution for online game servers.


This is exactly the use I had in mind for these cards: a setting where you can dictate the hardware specification and have your app make full use of it.



#26 C0D1F1ED   Members   -  Reputation: 456

Posted 21 April 2006 - 05:17 AM

Quote:
Original post by Kylotan
Quote:
Original post by C0D1F1ED
You can't compare the two. First of all, modern games absolutely can't run without a graphics card because of the vast amount of processing power required. For physics, you don't need that many GFLOPS.

That entirely depends on what you're simulating!

Absolutely. I know of simulations being done with FEM and of nuclear physics modelling. These are of high complexity, and dedicated hardware can be of great use there.

But they are not used in real-time games and never will be. In games, only 'simple' physics really makes sense. For slightly more complex things (e.g. fluid dynamics) hacks are very acceptable. Compare it with rasterization versus ray-tracing: rasterization is an accepted hack in real-time games.
Quote:
A large part of the reason little of the geometry moves in current games is because it's expensive to make it do so. Make it cheaper, and more will move.

Sure, but not 100x more. And there's a technical reason for that. Every game needs a visibility algorithm for solid geometry to get acceptable performance. A fully deformable world is not practical (and even if it were, not every vertex would need complex physics calculations at all times). So I definitely agree that extra processing power for physics is welcome, but in my opinion the 10x increase offered by dual-core is plenty.
Quote:
I seriously doubt that a PPU is not 'fully pipelined'. Nor is jump prediction or out of order execution a big deal for such hardware. Those features are part of how CPUs attempt to catch up to dedicated hardware, not advantages over such hardware.

The CPU's attempt to catch up with dedicated hardware is multi-core, which is only in its infancy. Quad-core and octa-core are already on the roadmaps, and now that Intel has learned that clock frequency isn't everything we're going to see some very powerful CPUs in the not so distant future. It's crazy to invest in PPU technology at this point.
Quote:
Quote:
Last but not least, the second core of a dual-core processor is practically unused at the moment.

This may be true, but it's almost exactly what Intel said about MMX for graphics. It was nowhere near enough for what was needed.

That's hardly comparable. MMX allowed the processing workload to be roughly doubled. Dual-core means a whole extra processor is available almost completely for physics. If on a single core the budget for physics calculations is 10%, then with dual-core you can do 10x more at the same framerate.
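
To put rough numbers on that claim (illustrative figures only, assuming a 60 fps target; this is just the arithmetic behind the "10x", not a benchmark):

// Rough frame-budget arithmetic, purely illustrative.
#include <cstdio>

int main() {
    const double frame_ms       = 1000.0 / 60.0;   // ~16.7 ms per frame at 60 fps
    const double single_core_ms = 0.10 * frame_ms; // ~10% of one core spent on physics
    const double dual_core_ms   = 1.00 * frame_ms; // a whole second core for physics
    std::printf("single-core physics budget: %.2f ms per frame\n", single_core_ms);
    std::printf("dual-core physics budget:   %.2f ms per frame (%.0fx)\n",
                dual_core_ms, dual_core_ms / single_core_ms);
    return 0;
}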

#27 Nick Gravelyn   Members   -  Reputation: 846

Posted 21 April 2006 - 05:58 AM

Quote:
Original post by C0D1F1ED
Quote:
Original post by Kylotan
Quote:
Original post by C0D1F1ED
You can't compare the two. First of all, modern games absolutely can't run without a graphics card because of the vast amount of processing power required. For physics, you don't need that many GFLOPS.

That entirely depends on what you're simulating!

Absolutely. I know simulations being done with FEM and modelling nuclear physics. These are of high complexity and dedicated hardware can be of great use.

But they are not used in real-time games and will never be. In games, only 'simple' physics really make sense. For slightly more complex things (e.g. fluid dynamics) hacks are very acceptable. Compare it with rasterization versus ray-tracing. Rasterization is an accepted hack in real-time games.

But the only reason high-complexity physics isn't applied to games is the technological bottleneck. If it didn't reduce framerates at all, thanks to dedicated hardware, why wouldn't you put elaborate simulations into games? Games like Oblivion would be that much more immersive if the physics were even more realistic (which, in some aspects, it already is).
Quote:

Quote:
A large part of the reason little of the geometry moves in current games is because it's expensive to make it do so. Make it cheaper, and more will move.

Sure, but not 100x more. And there's a technical reason for that. Every game needs a visibility algorithm for solid geometry to get acceptable performance. A fully deformable world is not practical (and even if it was not every vertex would need complex physics calculations at all time). So I definitely agree that extra processing power for physics is welcome, but in my opinion the 10x increase offered by dual-core is plenty.

Again, this hardware removes the technical bottleneck from games, allowing more objects to move without hindering performance.

I think the PPU is a great idea. I know that in a few years, my next PC will have one. If you see what nVidia and Havok did on the GPU for physics, imagine what could be done on a card of that power that wasn't also trying to render the geometry. It would allow for smooth, precise physical representations of the world.

I personally can't wait until every object in a visible scene is actively a part of the physics. Nothing bugs me more in Oblivion than touching something on a table and having all the other objects become "active" and either rise up a hair or, worse, fall through the table because they were set interpenetrating. It's just unacceptable in my eyes. A dedicated card would allow game makers to just activate all visible objects on the PPU so that you don't have these glitches or bugs in the physics simulation.

Now, my main question: why is this in the AI forum?

#28 justo   Members   -  Reputation: 184

Posted 21 April 2006 - 06:06 AM

C0D1F1ED: I don't really understand your point. Is it that it's better to dedicate an entire general-purpose core to physics rather than use a specialized chip?

If that is your point, and you think that because of this there will never be a market, again I have to disagree... there is a fairly substantial gamer market out there (not huge, but enough to sustain 512 MB 7900 GTXs and the like) that will easily pay for a card to offload all physics calcs for any game based on Unreal Engine 3 (and others) and boost their framerate however much.

As a developer who is using GPGPU techniques to do somewhat massive simulations, I'd love to have a card to do the same stuff in real time (and no, a CPU could *never* handle it, even if dedicated) *and* be able to read back large amounts of data to influence the interactive parts of the program (precluded from nVidia's work). There is almost no way to do this with a general-purpose processor, even for relatively simple scenes (like tens of thousands of interacting particles), with any kind of iterative solver.
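
For context, the iterative solver in question typically boils down to a relaxation loop along these lines (a generic sketch, not the poster's actual code); the cost per step is iterations times constraints, which is what makes tens of thousands of interacting particles so demanding on a general-purpose core:

// Generic Gauss-Seidel style relaxation over distance constraints between particles.
// Every iteration touches every constraint, so cost per step is iterations * constraints.
#include <cmath>
#include <vector>

struct Particle   { float x, y, z; };
struct Constraint { int a, b; float restLength; };

void relax(std::vector<Particle>& p, const std::vector<Constraint>& c, int iterations) {
    for (int it = 0; it < iterations; ++it) {
        for (const Constraint& con : c) {
            Particle& pa = p[con.a];
            Particle& pb = p[con.b];
            float dx = pb.x - pa.x, dy = pb.y - pa.y, dz = pb.z - pa.z;
            float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (dist < 1e-6f) continue;                 // avoid dividing by zero
            float corr = 0.5f * (dist - con.restLength) / dist; // split the error
            pa.x += dx * corr; pa.y += dy * corr; pa.z += dz * corr;
            pb.x -= dx * corr; pb.y -= dy * corr; pb.z -= dz * corr;
        }
    }
}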

Quote:
Original post by NickGravelyn
Now, my main question: why is this in the AI forum?

haha.

#29 C0D1F1ED   Members   -  Reputation: 456

Posted 21 April 2006 - 08:31 AM

Quote:
Original post by NickGravelyn
But the only reason high-complexity physics isn't applied to games is because of the technological bottleneck. If it didn't reduce framerates at all due to dedicated hardware, why wouldn't you put elaborate simulations into games.

Because for a long time graphics have been the main selling point for games. Physics, or gameplay in general, has been neglected for some years. You're right that there is a technological bottleneck, but it's one of knowledge and software, not so much a performance bottleneck. We could have had games with awesome physics for years if only more effort had been put into it.
Quote:
Again, this hardware removes the technical bottleneck from games allowing more objects to move without hindering performance.

My argument was that you can't make the whole game world deformable. The visibility algorithms only work for solid geometry, which is absolutely required to get adequate framerates. The PPU won't help with visibility, so the possibilities are really limited. 10x more physics processing power (offered by dual-core) can be put to good use; 100x more (offered by the PPU), I doubt it.
Quote:
A dedicated card would allow game makers to just activate all visible objects on the PPU so that you don't have these glitches or bugs in the physics simulation.

I'm really sure those are just bugs, not a fundamental limitation of the software physics engine. Like I said before, it doesn't cost that many GFLOPS to make a dozen objects on a table have correct physics. It's a perfect example of how lots of effort is put into the graphics of games, but not always the physics. Don't blame the CPU for that; blame the buggy software and the lack of knowledge to fix it. Investing in a robust physics engine would have made all the difference. And for future games the power of a dual-core processor is really plenty.

#30 C0D1F1ED   Members   -  Reputation: 456

Posted 21 April 2006 - 08:52 AM

Quote:
Original post by justo
C0D1F1ED: i dont really understand your point. is it that its better to dedicate an entire general purpose core to physics rather than a specialized chip?

That depends on what you consider 'better'. The way I see it, a dual-core processor is the most cost-effective option. All software will soon benefit from it, and it's great for running multiple applications. Besides, it's going mainstream, so in a couple of years everyone will have one. And from my experience the extra processing power is plenty for impressive physics.

The alternative is to have a PPU, which in my eyes is a big waste of money. It will only be used in a fraction of games, won't (and can't) offer that much extra, and it's very expensive compared to a dual-core processor that also helps other applications (and which you need to buy anyway).
Quote:
if that is your point and you think that because of this that there will never be a market, again i have to disagree...there is a fairly substantial gamer market out there (not huge but enough to sustain 512 MB 7900gtx's and the like) that will easily pay for a card to offload all physics calcs for any game based on unreal engine 3 (and others) and boost their framerate however much.

As a matter of fact I'm very interested in buying a GeForce 7900 GT(X) myself. I know it delivers awesome graphics for the right price. And I already have an Athlon 64 X2 4400+. But a PPU doesn't interest me one bit. I am convinced that for any real game (not a specially fabricated tech demo) the immense processing power of the CPU is plenty, if only they use a robust and well-optimized multi-threaded physics engine.

Anyway, I certainly realize there is a market. Hardcore gamers will buy anything that 'might' improve their game experience. You could sell them a dead rat if you market it well enough (it does give Doom 3 the right smell, you know). So I'm pretty sure Ageia will make a profit. I'm just not convinced that a PPU on a separate card is what most consumers really need...

#31 DuFaceTheBass   Members   -  Reputation: 208

Posted 21 April 2006 - 09:30 AM

I think a dedicated physics card is a great idea, especially with an API like the one Ageia has made that supports physics processing in both hardware AND software.

Also, having it on a card probably makes it easier to synchronise in-game. Graphics processing is done on the graphics card, but the data has to be passed to it by the CPU in the game's render function, and the physics card, presumably, would work in the same way.
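
Presumably the per-frame flow would look something like this (the function names here are invented for illustration and are not Ageia's actual SDK): the CPU uploads the frame's inputs, kicks off the step, overlaps its own work, and picks the results up at a defined sync point, much like a draw call.

// Hypothetical per-frame flow with an asynchronous physics accelerator.
// All names are made up for this sketch; this is not the Ageia SDK.
struct FrameInputs  { /* forces, impulses, added/removed bodies ... */ };
struct FrameResults { /* updated positions and orientations ... */ };

FrameInputs  gatherGameplayInputs()                  { return FrameInputs(); }
void         submitToAccelerator(const FrameInputs&) { /* non-blocking upload + kick */ }
void         updateAIAndAudio()                      { /* CPU work that overlaps the step */ }
FrameResults fetchSimulationResults()                { return FrameResults(); } // sync point
void         render(const FrameResults&)             { /* draw from returned transforms */ }

void frame() {
    submitToAccelerator(gatherGameplayInputs()); // start the physics step early
    updateAIAndAudio();                          // overlap CPU-side work with it
    render(fetchSimulationResults());            // wait if needed, then draw
}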

C0D1F1ED: since you're on about cost-effectiveness, I just checked some prices:

Intel Pentium 4 830 Dual Core "LGA775 Smithfield" - £223.19
AMD Athlon 64 X2 Dual Core 3800+ - £205.57
BFG Ageia PhysX Accelerator - £217.32

I suppose you do have a point about the price, but I strongly disagree with your point about graphics being the main selling point for games. I think 'immersion' is the major selling point for games, and this includes graphics, physics, AI and sound. Gamers want to feel as if they're in the world.

NVIDIA brought a card to the mainstream with a separate processor for graphics, Creative did the same for sound cards, and now Ageia has done the same for physics. So how is this separate physics processor any different to separate graphics or sound processors?


#32 Kylotan   Moderators   -  Reputation: 3329

Posted 21 April 2006 - 10:15 AM

Quote:
Original post by C0D1F1ED
But they are not used in real-time games and will never be. In games, only 'simple' physics really make sense.


That's like saying only simple graphics make sense. Games were perfectly playable in the 80s but people wanted more realism and effects. The hardware moved to accommodate that. Similarly in games, people want more realism and effects from the physics. The hardware will move to accommodate that too.

Quote:
Compare it with rasterization versus ray-tracing. Rasterization is an accepted hack in real-time games.


That's not because nobody wants ray-tracing, it's because it's not efficient. Once upon a time, real-time lighting wasn't efficient either. When it became practical, it started appearing everywhere. There are many other examples. So when physics hardware makes it easy to create more realistic simulations, people will use that.

Quote:
Sure, but not 100x more. And there's a technical reason for that. Every game needs a visibility algorithm for solid geometry to get acceptable performance.


I don't agree that's true to the extent that you suggest it is. Geometry can still be effectively culled if it moves around, and besides which, not every game needs the absolute top performance. In fact, the slightly slower-moving games are the ones which are more likely to require accurate physics anyway, since you'll be taking the time to examine your surroundings.

Quote:
The CPU's attempt to catch up with dedicated hardware is multi-core, which is only in its infancy. Quad-core and octa-core are already on the roadmaps, and now that Intel has learned that clock frequency isn't everything we're going to see some very powerful CPUs in the not so distant future. It's crazy to invest in PPU technology at this point.

Put 128 cores in a CPU if you like - each of those is still going to achieve less per cycle than a dedicated piece of hardware which has a totally custom instruction set and memory architecture for the particular job. That's just the nature of it. General purpose CPUs are actually very slow, relatively speaking. Besides which, the PPU is likely to be easier for programmers to take advantage of than the parallelism in the CPU anyway, if past history of concurrency is anything to go by.

Quote:
That's hardly comparable. MMX allowed the processing workload to be roughly doubled. Dual-core means a whole extra processor is available almost completely for physics. If on a single core the budget for physics calculations is 10%, then with dual-core you can do 10x more at the same framerate.


Why do you think the second core is going to be used almost exclusively for physics?

#33 Ezbez   Crossbones+   -  Reputation: 1164

Posted 21 April 2006 - 10:30 AM

I'm no expert, just a random high-schooler guessing, but here are my thoughts:

The PhysX card has a future, but not a present. Once games using the Unreal 3 engine and other engines that can take advantage of the PhysX card start coming out, you'll start seeing a point to these cards. Until then, only elitists and professionals will have them. Even then, a lot of people won't have them.

I've heard that the PS3 will be using one of these cards, is that correct? If so, that will be a *big* help in getting it out, as developers would probably make use of it on the PC when porting a game from the PS3.

I think we can all see a future where every computer ends up having 15-20 different processors. We've already got at least 3 (graphics, sound, and CPU), and Ageia wants to make that 4. If you count networking cards and things like that, that would increase it some more. What other things do you think could use their own cards? AI?

#34 TwinX   Members   -  Reputation: 151

Posted 21 April 2006 - 10:31 AM

In seven years I'm sure computers will be more integrated, with probably everything on one chip. I say if you want something for gaming, buy a console; they are specialized hardware and are cheap. Either that, or there will be a massive breakthrough which will change everything and the whole PC architecture will have to be rewritten. Computers based on neural nets, photonics, who knows what will come in the future. When the silicon chip was made it changed everything; it will probably all change again some time soon.

NEURAL NETWORKS ARE THE FUTURE, BE WARNED YER SCURVY DOGS... sorry, that was a bit random ;)

#35 Cubed3   Members   -  Reputation: 156

Posted 21 April 2006 - 10:33 AM

C0D1F1ED: I suggest you hit up physx.ageia.com and check out the before-PhysX / after-PhysX videos they have. Pretty spiffy stuff that IMO is very hard to do on a CPU in real time like that ;-)

The realism added is quite simply amazing! And after all, hasn't the general trend for video games been to include more and more realism? Again, I highly doubt that a single core of a 3 GHz+ CPU will be able to simulate 1000+ rigid bodies in real time without using some integrator that explodes!
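
For reference, the "integrator that explodes" issue is usually about explicit Euler; the common fix, independent of any particular hardware, is the semi-implicit (symplectic) variant. A minimal sketch of the difference on a simple spring:

// Sketch of the stability point: explicit vs. semi-implicit Euler on a stiff spring.
// Illustrative only; real engines add constraint solving and damping on top.
struct Body { float x, v; };

void explicitEuler(Body& b, float k, float dt) {
    float a = -k * b.x;   // acceleration from the *old* position
    b.x += b.v * dt;      // position advances with the *old* velocity
    b.v += a * dt;        // this ordering blows up when k*dt gets large
}

void semiImplicitEuler(Body& b, float k, float dt) {
    b.v += -k * b.x * dt; // update velocity first
    b.x += b.v * dt;      // then advance position with the *new* velocity
}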

#36 Cubed3   Members   -  Reputation: 156

Posted 21 April 2006 - 10:34 AM

Quote:
Original post by TwinX
In seven years im sure computers will be more integrated and probably everything on one chip. I say if you want something for gaming buy a console they are specialized hardware and are cheap. Either that or there will be a massive breakthrough which will change everything and the whole pc architecture will have to be rewritten. Computers based on neural nets photonics who knows what will come in the future. When the silicon chip was made it changed everything it will probably all change again some time soon.



Now that is just random useless blabbering :-P


#37 Cubed3   Members   -  Reputation: 156

Posted 21 April 2006 - 10:37 AM

Quote:
Original post by Ezbez

I've heard that the PS3 will be using one of these cards, is that correct? If so, that will be a *big* help in getting it out, as developers would probably make use of it on the PC when porting a game from the PS3.

?



Nada. The PS3 relies on its Cell processor for physics, but Cell has similarities with the PPU: massive bandwidth and lots of parallelization. The PhysX SDK is used on the PS3, and it's designed to use the Cell.

Basically, one of the spiffy things about using the PhysX API is that you can develop a game for PC/Xbox 360/PS3 and have the physics be capable of hardware acceleration: on PC via the PPU or software mode with multicore, on the 360 on the 2nd/3rd core, and on the PS3 via Cell.

I think that will be one of the big drawing features, IMO. Games made for the PS3 know they will have some spiffy hardware-accelerated physics, so a port from the PS3 might REQUIRE a PPU. Maybe that's Ageia's strategy?

#38 C0D1F1ED   Members   -  Reputation: 456

Posted 21 April 2006 - 11:40 AM

Quote:
Original post by Kylotan
That's like saying only simple graphics make sense. Games were perfectly playable in the 80s but people wanted more realism and effects. The hardware moved to accommodate that. Similarly in games, people want more realism and effects from the physics. The hardware will move to accommodate that too.

Dual-core, quad-core, octa-core and beyond will perfectly accommodate that -and- make other multi-threaded applications faster as well.
Quote:
Once upon a time, real-time lighting wasn't efficient either. When it became practical, it started appearing everywhere.

What are you talking about? Lighting has been real-time since the first 3D game. Don't mistake a modern CPU for a pocket calculator.
Quote:
Put 128 cores in a CPU if you like - each of those is still going to achieve less per cycle than a dedicated piece of hardware which has a totally custom instruction set and memory architecture for the particular job.

Did you ever look at the SSE instruction set? It doesn't differ much from the SIMD instruction sets of any 'dedicated' hardware. A CPU is hardware too, you know. It runs at much higher clock frequencies than any other chip, and with multi-core it's catching up on parallelism.
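
To illustrate the SSE point, here is a minimal sketch of the kind of SIMD inner loop a CPU-side physics engine can run (not code from any particular engine): integrating four particle positions per instruction with standard SSE intrinsics.

// Integrate four x-positions at a time: x += v * dt, using plain SSE.
#include <xmmintrin.h>

void integrate_x(float* x, const float* v, float dt, int count) {
    const __m128 vdt = _mm_set1_ps(dt);          // broadcast dt into all four lanes
    for (int i = 0; i + 4 <= count; i += 4) {
        __m128 pos = _mm_loadu_ps(x + i);
        __m128 vel = _mm_loadu_ps(v + i);
        pos = _mm_add_ps(pos, _mm_mul_ps(vel, vdt));
        _mm_storeu_ps(x + i, pos);
    }
    for (int i = count & ~3; i < count; ++i)     // scalar tail for leftovers
        x[i] += v[i] * dt;
}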
Quote:
Why do you think the second core is going to be used almost exclusively for physics?

Because everything else just scales linearly. Or are there any other new tasks you want to do on it? Intel's latest architecture is much faster than NetBurst, even for single-core. With 45 nm technology almost ready, it will keep scaling per-thread performance nicely. And on top of that, dual-core doubles the performance. The real question is: what will we do with the extra cores once they go multi-core?

#39 Cubed3   Members   -  Reputation: 156

Posted 21 April 2006 - 11:46 AM

Quote:
The real question is, what will we do with the extra cores once they go multi-core?


Run Norton Antivirus.

#40 C0D1F1ED   Members   -  Reputation: 456

Posted 21 April 2006 - 12:21 PM

Quote:
Original post by Cubed3
codified: I suggest you hit up physx.ageia.com and check out the before physx / after physx videos they have. Pretty spiffy stuff that imo is very hard to do on a cpu in real time like that ;-)

You mean the new Ghost Recon footage? Not impressed. All I see is a bit more debris flying around, and ironically the framerate seemed lower. Even if that might just be the Flash format, is this the best they can show?

It's an excellent site from a marketing point of view, and it will convince many customers, but I'm an engineer. I'm willing to bet that the extra physics would run just fine on a modern dual-core with a well-optimized multi-threaded physics engine. Why do you think they don't publish technical details like FLOPS?
Quote:
The realism added is quite simply amazing! And after all hasn't the general trend for video games been to include more and more realism? Again I highly doubt that a single core of a 3ghz+ cpu will be able to do the simulation of 1000+ rigid bodies in real time without using some integrator that explodes!

PhysX is nothing but a 400 MHz chip with a handful of SIMD units that are very much like SSE. And given the extra overhead, the actual available processing power of a PPU could be closer to a CPU's than you think.

By the way, let's have a look at Cell. 4 GHz x 8 vector coprocessors x 8 floating-point calculations per clock cycle = 256 GFLOPS. This just wipes the floor with PhysX. Also, a GeForce 7800 GTX is 165 GFLOPS. And yes, Cell is a CPU! x86 processors are evolving in the same direction.



