Creepy_cheese

physx chip

224 posts in this topic

Quote:
Original post by Kylotan
Quote:
Original post by C0D1F1ED
You can't compare the two. First of all, modern games absolutely can't run without a graphics card because of the vast amount of processing power required. For physics, you don't need that many GFLOPS.

That entirely depends on what you're simulating!

Absolutely. I know of simulations being done with FEM, and of nuclear physics modelling. These are highly complex, and dedicated hardware can be of great use there.

But they are not used in real-time games and will never be. In games, only 'simple' physics really make sense. For slightly more complex things (e.g. fluid dynamics) hacks are very acceptable. Compare it with rasterization versus ray-tracing. Rasterization is an accepted hack in real-time games.
Quote:
A large part of the reason little of the geometry moves in current games is because it's expensive to make it do so. Make it cheaper, and more will move.

Sure, but not 100x more. And there's a technical reason for that. Every game needs a visibility algorithm for solid geometry to get acceptable performance. A fully deformable world is not practical (and even if it were, not every vertex would need complex physics calculations at all times). So I definitely agree that extra processing power for physics is welcome, but in my opinion the 10x increase offered by dual-core is plenty.
Quote:
I seriously doubt that a PPU is not 'fully pipelined'. Nor is jump prediction or out of order execution a big deal for such hardware. Those features are part of how CPUs attempt to catch up to dedicated hardware, not advantages over such hardware.

The CPU's attempt to catch up with dedicated hardware is multi-core, which is only in its infancy. Quad-core and octa-core are already on the roadmaps, and now that Intel has learned that clock frequency isn't everything we're going to see some very powerful CPUs in the not so distant future. It's crazy to invest in PPU technology at this point.
Quote:
Quote:
Last but not least, the second core of a dual-core processor is practically unused at the moment.

This may be true, but it's almost exactly what Intel said about MMX for graphics. It was nowhere near enough for what was needed.

That's hardly comparable. MMX allowed you to roughly double the processing throughput. Dual-core means a whole extra processor is available almost entirely for physics. If on a single core the budget for physics calculations is 10%, then with dual-core you can do 10x more at the same framerate.
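
As a minimal sketch of that arithmetic in code (the stepPhysics/renderFrame stubs, the fixed 60 Hz timestep, and std::thread are assumptions for illustration, not anyone's actual engine):

#include <thread>
#include <atomic>
#include <chrono>
#include <cstdio>

// Hypothetical engine hooks -- stubs for illustration, not a real engine API.
void stepPhysics(double dt) { (void)dt; /* heavy simulation work goes here */ }
void renderFrame(int frame) { std::printf("rendered frame %d\n", frame); }

std::atomic<bool> running{true};

void physicsLoop() {
    const double dt = 1.0 / 60.0;            // fixed timestep
    while (running.load()) {
        stepPhysics(dt);                      // gets (almost) the whole second core
        std::this_thread::sleep_for(std::chrono::milliseconds(16));  // keep ~60 Hz
    }
}

int main() {
    std::thread physics(physicsLoop);         // physics moves off the main core
    for (int frame = 0; frame < 10; ++frame)
        renderFrame(frame);                   // main core keeps its full budget
    running.store(false);
    physics.join();
}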
Quote:
Original post by C0D1F1ED
Quote:
Original post by Kylotan
Quote:
Original post by C0D1F1ED
You can't compare the two. First of all, modern games absolutely can't run without a graphics card because of the vast amount of processing power required. For physics, you don't need that many GFLOPS.

That entirely depends on what you're simulating!

Absolutely. I know of simulations being done with FEM, and of nuclear physics modelling. These are highly complex, and dedicated hardware can be of great use there.

But they are not used in real-time games and will never be. In games, only 'simple' physics really make sense. For slightly more complex things (e.g. fluid dynamics) hacks are very acceptable. Compare it with rasterization versus ray-tracing. Rasterization is an accepted hack in real-time games.

But the only reason high-complexity physics isn't applied to games is the technological bottleneck. If dedicated hardware meant it didn't reduce framerates at all, why wouldn't you put elaborate simulations into games? Games like Oblivion would be that much more immersive if the physics were even more realistic (which, in some respects, it already is).
Quote:

Quote:
A large part of the reason little of the geometry moves in current games is because it's expensive to make it do so. Make it cheaper, and more will move.

Sure, but not 100x more. And there's a technical reason for that. Every game needs a visibility algorithm for solid geometry to get acceptable performance. A fully deformable world is not practical (and even if it were, not every vertex would need complex physics calculations at all times). So I definitely agree that extra processing power for physics is welcome, but in my opinion the 10x increase offered by dual-core is plenty.

Again, this hardware removes the technical bottleneck from games allowing more objects to move without hindering performance.

I think the PPU is a great idea. I know that in a few years, my next PC will have one. If you see what nVidia and Havok did on the GPU for physics, imagine what could be done on a card of that power that wasn't also trying to render the geometry. It would allow for smooth, precise physical representations of the world.

I personally can't wait until every object in a visible scene is actively a part of the physics. Nothing bugs me more in Oblivion than touching something on a table and having all the other objects become "active" and either rise up a hair or, worse, fall through the table because they were set interpenetrating. It's just unacceptable in my eyes. A dedicated card would allow game makers to just activate all visible objects on the PPU so that you don't have these glitches or bugs in the physics simulation.

Now, my main question: why is this in the AI forum?
C0D1F1ED: I don't really understand your point. Is it that it's better to dedicate an entire general-purpose core to physics rather than a specialized chip?

If that is your point, and you think that because of this there will never be a market, again I have to disagree... there is a fairly substantial gamer market out there (not huge, but enough to sustain 512 MB 7900 GTXs and the like) that will easily pay for a card that offloads all physics calcs for any game based on Unreal Engine 3 (and others) and boosts their framerate by however much.

As a developer who is using GPGPU techniques to do somewhat massive simulations, I'd love to have a card that does the same stuff in real time (and no, a CPU could *never* handle it, even if dedicated) *and* lets me read back large amounts of data to influence the interactive parts of the program (something precluded by NVIDIA's work). There is almost no way to do this with a general-purpose processor, even for relatively simple scenes (like tens of thousands of interacting particles), with any kind of iterative solver.
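
A minimal sketch of the kind of iterative solver being described (an illustration, not justo's actual code): a few Gauss-Seidel-style relaxation passes over pairwise distance constraints, whose per-frame cost grows with iterations times interactions:

#include <vector>
#include <cmath>
#include <cstdio>

struct Particle   { float x, y, z; };
struct Constraint { int a, b; float rest; };   // keep particles a and b at distance 'rest'

// A few relaxation passes over every pairwise constraint. Per-frame cost is
// roughly (iterations x constraints), which is what gets out of hand on a
// general-purpose CPU once tens of thousands of particles interact.
void relax(std::vector<Particle>& p, const std::vector<Constraint>& c, int iterations) {
    for (int it = 0; it < iterations; ++it) {
        for (const Constraint& k : c) {
            Particle& A = p[k.a];
            Particle& B = p[k.b];
            float dx = B.x - A.x, dy = B.y - A.y, dz = B.z - A.z;
            float d  = std::sqrt(dx*dx + dy*dy + dz*dz);
            if (d < 1e-6f) continue;
            float corr = 0.5f * (d - k.rest) / d;   // split the correction between both ends
            A.x += dx * corr;  A.y += dy * corr;  A.z += dz * corr;
            B.x -= dx * corr;  B.y -= dy * corr;  B.z -= dz * corr;
        }
    }
}

int main() {
    // A short chain of particles constrained to unit spacing, deliberately stretched.
    std::vector<Particle> p = { {0.0f,0,0}, {2.0f,0,0}, {4.0f,0,0}, {6.0f,0,0} };
    std::vector<Constraint> c = { {0,1,1.0f}, {1,2,1.0f}, {2,3,1.0f} };
    relax(p, c, 20);
    std::printf("particle 3 is now at x = %f\n", p[3].x);
}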

Quote:
Original post by NickGravelyn
Now, my main question: why is this in the AI forum?

haha.
Quote:
Original post by NickGravelyn
But the only reason high-complexity physics isn't applied to games is the technological bottleneck. If dedicated hardware meant it didn't reduce framerates at all, why wouldn't you put elaborate simulations into games?

Because for a long time graphics have been the main selling point for games. Physics, and gameplay in general, have been neglected for some years. You're right that there is a technological bottleneck, but it's one of knowledge and software, not so much of performance. We could have had games with awesome physics for years, if only developers had put more effort into it.
Quote:
Again, this hardware removes the technical bottleneck from games allowing more objects to move without hindering performance.

My argument was that you can't make the whole game world deformable. The visibility algorithms only work for solid geometry, which is absolutely required to get adequate framerates. The PPU won't help with visibility, so the possibilities are really limited. 10x more physics processing power (offered by dual-core) can be put to good use; 100x more (offered by the PPU), I doubt it.
Quote:
A dedicated card would allow game makers to just activate all visible objects on the PPU so that you don't have these glitches or bugs in the physics simulation.

I'm really sure they are just bugs, not a fundamental limitation of the software physics engine. Like I said before, it doesn't cost that many GFLOPS to make a dozen objects on a table have correct physics. It's a perfect example of how lots of effort is put into the graphics of games, but not always the physics. Don't blame the CPU for that; blame the buggy software and the lack of knowledge to fix it. Investing in a robust physics engine would have made all the difference. And for future games the power of a dual-core processor is really plenty.
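
To illustrate why a dozen resting objects need not cost much: most engines put bodies to 'sleep' once their motion stays below a threshold, so they cost nearly nothing until disturbed. A minimal sketch of that idea (the struct and thresholds are illustrative, not any particular engine's API):

#include <cstdio>

struct RigidBody {
    float vx, vy, vz;        // linear velocity
    float wx, wy, wz;        // angular velocity
    float lowMotionTime;     // seconds spent below the motion threshold
    bool  asleep;
};

// Called once per simulation step for each body (thresholds are illustrative).
void updateSleepState(RigidBody& b, float dt) {
    const float kSleepThreshold = 0.01f;   // roughly "not moving"
    const float kSleepDelay     = 0.5f;    // must stay quiet this long, in seconds

    float motion = b.vx*b.vx + b.vy*b.vy + b.vz*b.vz
                 + b.wx*b.wx + b.wy*b.wy + b.wz*b.wz;

    if (motion < kSleepThreshold) {
        b.lowMotionTime += dt;
        if (b.lowMotionTime > kSleepDelay)
            b.asleep = true;               // integration and most collision work skipped
    } else {
        b.lowMotionTime = 0.0f;            // e.g. the player nudged something on the table
        b.asleep = false;
    }
}

int main() {
    RigidBody cup = {0.001f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, false};
    for (int step = 0; step < 60; ++step)            // one simulated second at 60 Hz
        updateSleepState(cup, 1.0f / 60.0f);
    std::printf("cup asleep: %s\n", cup.asleep ? "yes" : "no");   // yes: it costs ~nothing now
}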
Quote:
Original post by justo
C0D1F1ED: I don't really understand your point. Is it that it's better to dedicate an entire general-purpose core to physics rather than a specialized chip?

That depends on what you would consider 'better'. The way I see it, a dual-core processor is the most cost-effective option. All software will soon benefit from it, and it's great for running multiple applications. Besides, it's going mainstream, so in a couple of years everyone will have one. And in my experience the extra processing power is plenty for impressive physics.

The alternative is to have a PPU, which in my eyes is a big waste of money. It will only be used in a fraction of games, won't (can't) offer that much extra, and it's very expensive compared to a dual-core processor that also helps other applications (and which you need to buy anyway).
Quote:
If that is your point, and you think that because of this there will never be a market, again I have to disagree... there is a fairly substantial gamer market out there (not huge, but enough to sustain 512 MB 7900 GTXs and the like) that will easily pay for a card that offloads all physics calcs for any game based on Unreal Engine 3 (and others) and boosts their framerate by however much.

As a matter of fact I'm very interested in buying a GeForce 7900 GT(X) myself. I know it delivers awesome graphics for the right price. And I already have an Athlon 64 X2 4400+. But a PPU doesn't interest me one bit. I am convinced that for any real game (not a specially fabricated tech demo) the immense processing power of the CPU is plenty, as long as it uses a robust and well-optimized multi-threaded physics engine.

Anyway, I certainly realize there is a market. Hardcore gamers will buy anything that 'might' improve their game experience. You could sell them a dead rat if you market it well enough (it does give Doom 3 the right smell, you know). So I'm pretty sure Ageia will make a profit. I'm just not convinced that a PPU on a separate card is what most consumers really need...
I think a dedicated physics card is a great idea, especially with an API like the one Ageia has made, which supports physics processing in both hardware AND software.

Also, having it on a card probably makes it easier to synchronise in-game. Graphics processing is done on the graphics card, but the data has to be passed to it by the CPU in the game's render function. And the physics card, presumably, would work in the same way.
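
A minimal sketch of that synchronisation pattern (the function names are hypothetical placeholders, not Ageia's actual API): the CPU kicks the physics step off to the card, overlaps other per-frame work, then waits for the results before rendering, much as it already does with the GPU:

#include <cstdio>

void kickPhysicsStep(float dt) { (void)dt; /* submit the step to the accelerator */ }
void waitForPhysicsResults()   { /* block until the card has finished */ }
void updateGameLogic()         { /* AI, input, networking... runs in parallel */ }
void renderFrame()             { /* submit draw calls using the new transforms */ }

int main() {
    const float dt = 1.0f / 60.0f;
    for (int frame = 0; frame < 3; ++frame) {
        kickPhysicsStep(dt);     // card starts crunching
        updateGameLogic();       // CPU overlaps useful work
        waitForPhysicsResults(); // sync point, like waiting on the GPU
        renderFrame();
        std::printf("frame %d done\n", frame);
    }
}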

C0D1F1ED: I just checked some prices considering you're on about cost-effectiveness:

Intel Pentium 4 830 Dual Core "LGA775 Smithfield" - £223.19
AMD Athlon 64 X2 Dual Core 3800+ - £205.57
BFG Ageia PhysX Accelerator - £217.32

I suppose you do have a point about the price, but I strongly disagree with your point about graphics being the main selling point for games. I think 'immersion' is the major selling point for games, and this includes graphics, physics, AI and sound. Gamers want to feel as if they're in the world.

NVIDIA brought a card with a separate processor for graphics to the mainstream, Creative did the same for sound cards, and now Ageia has done the same for physics. So how is this separate physics processor any different from separate graphics or sound processors?
Quote:
Original post by C0D1F1ED
But they are not used in real-time games and will never be. In games, only 'simple' physics really make sense.


That's like saying only simple graphics make sense. Games were perfectly playable in the 80s but people wanted more realism and effects. The hardware moved to accommodate that. Similarly in games, people want more realism and effects from the physics. The hardware will move to accommodate that too.

Quote:
Compare it with rasterization versus ray-tracing. Rasterization is an accepted hack in real-time games.


That's not because nobody wants ray-tracing, it's because it's not efficient. Once upon a time, real-time lighting wasn't efficient either. When it became practical, it started appearing everywhere. There are many other examples. So when physics hardware makes it easy to create more realistic simulations, people will use that.

Quote:
Sure, but not 100x more. And there's a technical reason for that. Every game needs a visibility algorithm for solid geometry to get acceptable performance.


I don't agree that's true to the extent that you suggest it is. Geometry can still be effectively culled if it moves around, and besides which, not every game needs the absolute top performance. In fact, the slightly slower-moving games are the ones which are more likely to require accurate physics anyway, since you'll be taking the time to examine your surroundings.

Quote:
The CPU's attempt to catch up with dedicated hardware is multi-core, which is only in its infancy. Quad-core and octa-core are already on the roadmaps, and now that Intel has learned that clock frequency isn't everything we're going to see some very powerful CPUs in the not so distant future. It's crazy to invest in PPU technology at this point.

Put 128 cores in a CPU if you like - each of those is still going to achieve less per cycle than a dedicated piece of hardware which has a totally custom instruction set and memory architecture for the particular job. That's just the nature of it. General purpose CPUs are actually very slow, relatively speaking. Besides which, the PPU is likely to be easier for programmers to take advantage of than the parallelism in the CPU anyway, if past history of concurrency is anything to go by.

Quote:
That's hardly comparable. MMX allowed you to roughly double the processing throughput. Dual-core means a whole extra processor is available almost entirely for physics. If on a single core the budget for physics calculations is 10%, then with dual-core you can do 10x more at the same framerate.


Why do you think the second core is going to be used almost exclusively for physics?
I'm no expert, just a random high-schooler guessing, but here are my thoughts:

The PhysX card has a future, but not a present. Once games using the Unreal 3 engine and other engines that can take advantage of the PhysX card start coming out, you'll start seeing a point to these cards. Until then, only elitists and professionals will have them. Even then, a lot of people won't.

I've heard that the PS3 will be using one of these cards, is that correct? If so, that will be a *big* help in getting it out, as developers would probably make use of it on the PC when porting a game from the PS3.

I think we can all see a future where every computer ends up having 15-20 different processors. We've already got at least 3 (graphics, sound, and CPU), and Ageia wants to make that 4. If you count networking cards and things like that, the number goes up some more. What other things do you think could use their own cards? AI?
In seven years I'm sure computers will be more integrated, with probably everything on one chip. I say if you want something for gaming, buy a console; they're specialized hardware and they're cheap. Either that, or there will be a massive breakthrough which changes everything and the whole PC architecture will have to be redesigned. Computers based on neural nets, photonics, who knows what will come in the future. When the silicon chip was made it changed everything; it will probably all change again some time soon.

NEURAL NETWORKS ARE THE FUTURE, BE WARNED, YE SCURVY DOGS. Sorry, that was a bit random ;)
C0D1F1ED: I suggest you hit up physx.ageia.com and check out the before-PhysX / after-PhysX videos they have. Pretty spiffy stuff that IMO is very hard to do on a CPU in real time like that ;-)

The realism added is quite simply amazing! And after all, hasn't the general trend for video games been to include more and more realism? Again, I highly doubt that a single core of a 3 GHz+ CPU will be able to simulate 1000+ rigid bodies in real time without using some integrator that explodes!
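
For what 'an integrator that explodes' means in practice, here is a minimal single-particle sketch (the spring constant and timestep are arbitrary illustrative values): plain explicit Euler gains energy every step on a stiff spring and blows up, while the semi-implicit variant stays bounded at the same timestep:

#include <cstdio>

int main() {
    // One particle on a stiff spring: a = -k*x (unit mass), stepped two ways.
    const float k = 1000.0f, dt = 1.0f / 60.0f;

    float xe = 1.0f, ve = 0.0f;   // explicit Euler state
    float xs = 1.0f, vs = 0.0f;   // semi-implicit (symplectic) Euler state

    for (int step = 0; step < 600; ++step) {          // 10 simulated seconds
        // Explicit Euler: position uses the OLD velocity -> energy grows, it "explodes".
        float ae = -k * xe;
        xe += ve * dt;
        ve += ae * dt;

        // Semi-implicit Euler: update velocity first, then position -> stays bounded.
        float as = -k * xs;
        vs += as * dt;
        xs += vs * dt;
    }
    std::printf("explicit Euler x = %g (blown up), semi-implicit x = %g (bounded)\n", xe, xs);
}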
Quote:
Original post by TwinX
In seven years I'm sure computers will be more integrated, with probably everything on one chip. I say if you want something for gaming, buy a console; they're specialized hardware and they're cheap. Either that, or there will be a massive breakthrough which changes everything and the whole PC architecture will have to be redesigned. Computers based on neural nets, photonics, who knows what will come in the future. When the silicon chip was made it changed everything; it will probably all change again some time soon.



Now that is just random useless blabbering :-P
Quote:
Original post by Ezbez

I've heard that the PS3 will be using one of these cards, is that correct? If so, that will be a *big* help in getting it out, as developers would probably make use of it on the PC when porting a game from the PS3.




Nada. The PS3 relies on its Cell processor for physics, but Cell has similarities with the PPU: massive bandwidth and lots of parallelism. The PhysX SDK is used on the PS3, and it's designed to use the Cell.

Basically, one of the spiffy things about using the PhysX API is that you can develop a game for PC/Xbox 360/PS3 and have the physics be capable of hardware acceleration: on PC via the PPU or in software mode with multicore, on the 360 on the 2nd/3rd core, and on the PS3 via Cell.

I think that will be one of the big draws, IMO. Games made for the PS3 know they will have some spiffy hardware-accelerated physics, so a port from the PS3 might REQUIRE a PPU. Maybe that's Ageia's strategy?
Quote:
Original post by Kylotan
That's like saying only simple graphics make sense. Games were perfectly playable in the 80s but people wanted more realism and effects. The hardware moved to accommodate that. Similarly in games, people want more realism and effects from the physics. The hardware will move to accommodate that too.

Dual-core, quad-core, octa-core and beyond will perfectly accommodate that -and- make other multi-threaded applications faster as well.
Quote:
Once upon a time, real-time lighting wasn't efficient either. When it became practical, it started appearing everywhere.

What are you talking about? Lighting has been real-time since the first 3D game. Don't mistake a modern CPU for a pocket calculator.
Quote:
Put 128 cores in a CPU if you like - each of those is still going to achieve less per cycle than a dedicated piece of hardware which has a totally custom instruction set and memory architecture for the particular job.

Did you ever look at the SSE instruction set? It doesn't differ much from the SIMD instruction sets of any 'dedicated' hardware. A CPU is hardware too, you know. It runs at much higher clock frequencies than any other chip, and with multi-core it's catching up on parallelism.
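
For anyone who hasn't looked at SSE, here is a trivial illustrative example (not from the original post) of the same 4-wide vector arithmetic a 'dedicated' SIMD unit would perform, written with the standard SSE intrinsics:

#include <xmmintrin.h>   // SSE intrinsics
#include <cstdio>

int main() {
    // Four positions and four velocities processed in one instruction each,
    // i.e. x += v * dt for four objects at once.
    __m128 x  = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 v  = _mm_set_ps(0.4f, 0.3f, 0.2f, 0.1f);
    __m128 dt = _mm_set1_ps(1.0f / 60.0f);

    x = _mm_add_ps(x, _mm_mul_ps(v, dt));   // one SIMD mul + one SIMD add

    float out[4];
    _mm_storeu_ps(out, x);
    std::printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
}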
Quote:
Why do you think the second core is going to be used almost exclusively for physics?

Because everything else just scales linearly. Or are there any other new tasks you want to do on it? Intel's latest architecture is much faster than NetBurst, even for single-core. With 45 nm technology almost ready it will keep scaling per-thread performance nicely. And on top of that the dual-core doubles the performance. The real question is, what will we do with the extra cores once they go multi-core?
Quote:
The real question is, what will we do with the extra cores once they go multi-core?


Run Norton Antivirus.
Quote:
Original post by Cubed3
C0D1F1ED: I suggest you hit up physx.ageia.com and check out the before-PhysX / after-PhysX videos they have. Pretty spiffy stuff that IMO is very hard to do on a CPU in real time like that ;-)

You mean the new Ghost Recon footage? Not impressed. All I see is a bit more debris flying around. And ironically, the framerate seemed lower. Even though that might just be the Flash format, is this the best they can show?

It's an excellent site from a marketing point of view, and it will convince many customers, but I'm an engineer. I'm willing to bet that the extra physics would run just fine on a modern dual-core with a well-optimized multi-threaded physics engine. Why do you think they don't publish technical details like FLOPS?
Quote:
The realism added is quite simply amazing! And after all, hasn't the general trend for video games been to include more and more realism? Again, I highly doubt that a single core of a 3 GHz+ CPU will be able to simulate 1000+ rigid bodies in real time without using some integrator that explodes!

PhysX is nothing but a 400 MHz chip with a handful of SIMD units that are very much like SSE. And given the extra overhead the actual available processing power of a PPU could be closer to a CPU than you think.

By the way, let's have a look at Cell. 4 GHz x 8 vector coprocessors x 8 floating-point calculations per clock cycle = 256 GFLOPS. This just wipes the floor with PhysX. Also, a GeForce 7800 GTX is 165 GFLOPS. And yes, Cell is a CPU! x86 processors are evolving in the same direction.
The GPU has already filled the niche market for data-parallel tasks. There are already basic physics simulation 'graphics gems', and it's only going to get better over time. The new shader models will only make physics simulation easier and more powerful, and they don't involve a middleman. I believe the people at AGEIA and some investors took what seemed like a good idea and ran with it without really thinking things through. Time will tell that I am right.
The problem with the GPU for general-purpose computation is that it's stupid at doing things that don't fit neatly into four-float vectors. The thing is, that's why a PPU is also (in my opinion) a stupid idea. I DO think we're going to have something like a PPU in our systems, but it's going to be a general-purpose vectorized processing unit. Its application will not be etched into its silicon.
Quote:
Original post by Anonymous Poster
The GPU has already filled the niche market for data-parallel tasks. There are already basic physics simulation 'graphics gems', and it's only going to get better over time. The new shader models will only make physics simulation easier and more powerful, and they don't involve a middleman. I believe the people at AGEIA and some investors took what seemed like a good idea and ran with it without really thinking things through. Time will tell that I am right.



The PhysX SDK, a.k.a. NovodeX, is also the physics engine included with the PS3 dev kits. Ageia's software has very good support for parallel processing, which is why they haven't really dug themselves into a hole even if the PPU doesn't take off.
Quote:
PhysX is nothing but a 400 MHz chip with a handful of SIMD units that are very much like SSE. And given the extra overhead the actual available processing power of a PPU could be closer to a CPU than you think.

By the way, let's have a look at Cell. 4 GHz x 8 vector coprocessors x 8 floating-point calculations per clock cycle = 256 GFLOPS. This just wipes the floor with PhysX. Also, a GeForce 7800 GTX is 165 GFLOPS. And yes, Cell is a CPU! x86 processors are evolving in the same direction.


Cell is overhyped :-P

Anyway, I dunno who the better performer is, but I will find out this summer ;-)

I plan on upgrading my Athlon 64 system to a dual-core FX and purchasing a PPU (I'm a Mech E; I figure it will be nice for some fluid simulations and rigid body dynamics; not 100% accurate, but who knows, maybe it will be?).
Another good thing is that NovodeX (a.k.a. the PhysX SDK) is free to use in a commercial product if you support hardware-accelerated features via a PPU. On top of that, NovodeX is better than ODE; dunno how it compares to Newton, though.

Quote:
Quote:
Once upon a time, real-time lighting wasn't efficient either. When it became practical, it started appearing everywhere.

What are you talking about? Lighting has been real-time since the first 3D game. Don't mistake a modern CPU for a pocket calculator.


I'm not sure that's what he meant. Real-time lighting is a simple "choose a greyscale light level based on a dot product" calculation. This is pretty much required for 3D. But good-looking dynamic lights, lighting based on blended diffuse, per-pixel shading, and newer techniques like HDR and normal maps are the kind of lighting that wasn't efficient until more recently. Now it's everywhere, possibly even on those pocket calculators.
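
For reference, that basic dot-product lighting is just Lambert's cosine law; a minimal illustrative sketch:

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Classic per-vertex diffuse: intensity = max(0, N . L), a single greyscale value.
float lambert(const Vec3& normal, const Vec3& toLight) {
    return std::fmax(0.0f, dot(normal, toLight));   // both assumed normalized
}

int main() {
    Vec3 n = {0.0f, 1.0f, 0.0f};                              // surface facing up
    Vec3 l = {0.0f, 0.7071f, 0.7071f};                        // light at 45 degrees
    std::printf("diffuse intensity = %f\n", lambert(n, l));   // ~0.7071
}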
FYI, from BFG Tech's website:

Specifications

Processor: AGEIA PhysX
Memory Interface: 128-bit GDDR3
Memory Capacity: 128MB
Peak Instruction Bandwidth: 20 Billion/sec
Sphere-Sphere Collisions: 530 Million/sec max
Convex-Convex (Complex Collisions): 533,000/sec max

smokin!
Quote:
Original post by Cubed3
Processor: AGEIA PhysX
Memory Interface: 128-bit GDDR3
Memory Capacity: 128MB
Peak Instruction Bandwidth: 20 Billion/sec
Sphere-Sphere Collisions: 530 Million/sec max
Convex-Convex (Complex Collisions): 533,000/sec max

Processor: Intel Pentium D 950
Memory Capacity: ~2 GB
Instruction Bandwidth: 20.4 billion/sec (sustainable)
Sphere-Sphere Collisions: 1.7 billion/sec (theoretical)
Triangle-Triangle Intersection: 425 million/sec (theoretical)

I should also add that this processor has a crappy architecture by next-generation standards. The CPU's efficient branching and caches also allow advanced optimizations that avoid wasting time on the brute-force approach. So let's not fixate on the raw numbers. I'm sure Ageia fears a direct benchmark between PhysX and the latest CPU in a real game.

Smokin?
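
As an illustration of the kind of 'advanced optimization' being alluded to (a sketch with arbitrary cell size and hash, not C0D1F1ED's numbers): a simple uniform-grid broad phase only tests objects that share a cell, so the narrow-phase count stays close to the number of real contacts instead of the brute-force N*(N-1)/2 pairs:

#include <unordered_map>
#include <vector>
#include <cmath>
#include <cstdio>

struct Sphere { float x, y, z, r; };

int main() {
    std::vector<Sphere> spheres = { {0.0f,0,0,1.0f}, {1.5f,0,0,1.0f}, {50.0f,0,0,1.0f} };
    const float cell = 4.0f;                       // cell size >= largest diameter

    // Bucket each sphere by the grid cell containing its centre.
    std::unordered_map<long long, std::vector<int>> grid;
    auto key = [&](const Sphere& s) {
        long long cx = (long long)std::floor(s.x / cell);
        long long cy = (long long)std::floor(s.y / cell);
        long long cz = (long long)std::floor(s.z / cell);
        return (cx * 73856093LL) ^ (cy * 19349663LL) ^ (cz * 83492791LL);
    };
    for (int i = 0; i < (int)spheres.size(); ++i)
        grid[key(spheres[i])].push_back(i);

    // Narrow phase runs only inside each bucket (neighbour cells omitted for brevity).
    int tests = 0, hits = 0;
    for (auto& bucket : grid)
        for (size_t a = 0; a < bucket.second.size(); ++a)
            for (size_t b = a + 1; b < bucket.second.size(); ++b) {
                const Sphere& A = spheres[bucket.second[a]];
                const Sphere& B = spheres[bucket.second[b]];
                float dx = A.x-B.x, dy = A.y-B.y, dz = A.z-B.z, rr = A.r+B.r;
                ++tests;
                if (dx*dx + dy*dy + dz*dz <= rr*rr) ++hits;
            }
    std::printf("narrow-phase tests: %d, overlaps: %d (brute force would test 3 pairs)\n",
                tests, hits);
}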
Quote:
Original post by C0D1F1ED
Processor: Intel Pentium D 950
Memory Capacity: ~2 GB
Instruction Bandwidth: 20.4 billion/sec (sustainable)
Sphere-Sphere Collisions: 1.7 billion/sec (theoretical)
Triangle-Triangle Intersection: 425 million/sec (theoretical)




Somehow I highly doubt those numbers...
Where did you get them?


I don't think it's nearly as powerful as that.


PhysX vs Pentium XE 840 HT

[Edited by - Cubed3 on April 22, 2006 9:40:40 AM]
Quote:
Original post by Cubed3
Where did you get them?

Just calculate them. 3.4 GHz x dual-core x 3 instructions per clock (sustained) = 20.4 gigainstructions per second. SSE can do 4 floating-point operations per clock (but only one can start every clock cycle) so GFLOPS might be even higher. It's a theoretical maximum but so is the number from PhysX. The other numbers are derived from this, assuming optimal SSE code.
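
One plausible reconstruction of the sphere-sphere figure (an assumption for illustration, not necessarily how the number was actually derived): a single sphere-sphere test is roughly a dozen arithmetic operations, and 20.4 billion operations per second divided by about 12 gives roughly 1.7 billion tests per second, ignoring memory traffic and any broad phase:

#include <cstdio>

struct Sphere { float x, y, z, r; };

// Roughly 12 arithmetic operations: 3 subs, 3 muls, 2 adds (dot product),
// 1 add (radius sum), 1 mul (square), 1 compare -- plus loads and stores.
bool spheresOverlap(const Sphere& a, const Sphere& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float rr = a.r + b.r;
    return dx*dx + dy*dy + dz*dz <= rr*rr;
}

int main() {
    // 20.4e9 sustained ops/sec / ~12 ops per test ~= 1.7e9 tests/sec, in theory.
    std::printf("theoretical tests/sec ~= %.2g\n", 20.4e9 / 12.0);
    Sphere a = {0.0f, 0.0f, 0.0f, 1.0f}, b = {1.0f, 1.0f, 0.0f, 1.0f};
    std::printf("overlap: %d\n", (int)spheresOverlap(a, b));   // distance sqrt(2) < 2 -> 1
}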

Quote:
PhysX vs Pentium XE 840 HT

What's the source of this video? Ageia? Of course they will show a demo with a badly optimized software version! Don't be fooled by that. Their marketing is perfect, they want to sell the hardware, but I'm only interested in the capabilities of the product in a real situation versus optimized software.

Besides, a 6+ GFLOPS CPU not capable of handling 6000 objects at more than 5 FPS? Please. That's 200,000 floating-point operations per object. Two hundred thousand! Unless you're doing some really stupid brute-force collision detection, that's a vast amount of processing power.
Quote:
Original post by C0D1F1ED
Quote:
Original post by Cubed3
Where did you get them?

Just calculate them. 3.4 GHz x dual-core x 3 instructions per clock (sustained) = 20.4 gigainstructions per second. SSE can do 4 floating-point operations per clock (but only one can start every clock cycle) so GFLOPS might be even higher. It's a theoretical maximum but so is the number from PhysX. The other numbers are derived from this, assuming optimal SSE code.

Quote:
PhysX vs Pentium XE 840 HT

What's the source of this video? Ageia? Of course they will show a demo with a badly optimized software version! Don't be fooled by that. Their marketing is perfect, they want to sell the hardware, but I'm only interested in the capabilities of the product in a real situation versus optimized software.

Besides, a 6+ GFLOPS CPU not capable of handling 6000 objects at more than 5 FPS? Please. That's 200,000 floating-point operations per object. Two hundred thousand! Unless you're doing some really stupid brute-force collision detection, that's a vast amount of processing power.


How did you calculate the sphere-sphere collisions? I'm quite curious :P

Also, the source of the video is Ageia, of course... but the software physics is being done via NovodeX, which is a very optimized physics engine.

200,000 floating-point operations per object? Is that per frame or per second?