physx chip

Started by Creepy_cheese, 224 posts in this topic

Quote:
Original post by C0D1F1ED
Quote:
PhysX vs Pentium XE 840 HT

What's the source of this video? Ageia? Of course they will show a demo with a badly optimized software version! Don't be fooled by that. Their marketing is polished; they want to sell the hardware. I'm only interested in the capabilities of the product in a real situation versus optimized software.

Besides, a 6+ GFLOPS CPU not capable of handling 6000 objects at more than 5 FPS? Please. That's 200,000 floating-point operations per object. Two-hundred-thousand! Unless you're doing some really stupid brute force collision detection that's a vast amount of processing power.


That demo looked like a Rocket demo (the Ageia PhysX demo framework), and I think you can download it (not 100% sure, though). See for yourself: there are various demos in that suite, and compared to other physics engines I've seen, they are very fast and accurate.

IMO, the thing to note from the video is that with the PhysX card the CPU usage is minimal, merely for updates. Even if a CPU could calculate physics as fast as the PPU, the PPU is still useful since the CPU can't be dedicated to physics only. And having a realistic real-time physics solution is worth a couple of hundred dollars to a true gamer, since a PPU shouldn't need constant upgrading like a GPU (every couple of months); and anyway, a person who can afford a dual-core CPU (like you proposed) and other equipment can probably afford the extra money for a PPU.
The Ageia PhysX simulation can run in software and is probably optimized for next-gen consoles (it is available for Xbox 360 and PS3). Since PC CPUs aren't built with that many vector units, the PhysX card is a good replacement.
The only real barrier I see is the lack of games for the PPU to be useful in... but I hope this is going to change...
Quote:
Original post by Cubed3
How did you calculate the sphere-sphere collisions? I'm quite curious :P


// Vector between sphere centers
movaps xmm0, c0x
subps xmm0, c1x
movaps xmm1, c0y
subps xmm1, c1y
movaps xmm2, c0z
subps xmm2, c1z

// Reciprocal length: rsqrtps gives ~1/distance
mulps xmm0, xmm0
mulps xmm1, xmm1
mulps xmm2, xmm2
addps xmm0, xmm1
addps xmm0, xmm2
rsqrtps xmm0, xmm0

// Compare to sum of radii: (r0+r1)/d < 1 means no overlap
movaps xmm1, r0
addps xmm1, r1
mulps xmm0, xmm1
cmpltps xmm0, one


Some explanation might be needed: this calculates sphere-sphere collisions for four pairs of spheres in parallel. rsqrtps computes the reciprocal square root at reduced precision (roughly 12 bits), which gives 1/d directly. To avoid using rcpps to invert the sum of the radii as well (and lose more precision than acceptable), I multiply 1/d by the sum of the radii: the test 1/d < 1/(r0+r1) becomes (r0+r1)/d < 1, which flags the pairs that do not touch (invert the mask, or use cmpnleps, to flag hits instead). So this is 16 instructions for 4 tests, or 4 instructions per test, which should bring us to the number I wrote earlier. This is obviously peak performance, but I'm pretty sure Ageia counted their performance the same way. Strictly speaking a movmskps is needed to extract the final result, but that's compensated by the fact that the movaps instructions are not arithmetic and will be executed in parallel by the load/store units.
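For reference, here is roughly the same 4-wide test written with SSE compiler intrinsics. This is an editor's sketch, not the original poster's code, and it keeps the same mask polarity as the assembly above (a set lane means that pair does not overlap):

#include <xmmintrin.h>

/* Sketch of the assembly above: test 4 sphere pairs at once.
   Returns a 4-bit mask; bit i is set when pair i does NOT overlap
   (same polarity as the cmpltps in the assembly). */
int spheres_apart4(__m128 c0x, __m128 c0y, __m128 c0z,
                   __m128 c1x, __m128 c1y, __m128 c1z,
                   __m128 r0, __m128 r1)
{
    __m128 dx = _mm_sub_ps(c0x, c1x);            /* center deltas      */
    __m128 dy = _mm_sub_ps(c0y, c1y);
    __m128 dz = _mm_sub_ps(c0z, c1z);
    __m128 d2 = _mm_add_ps(_mm_mul_ps(dx, dx),   /* squared distance   */
                _mm_add_ps(_mm_mul_ps(dy, dy),
                           _mm_mul_ps(dz, dz)));
    __m128 rd = _mm_rsqrt_ps(d2);                /* ~1/distance        */
    __m128 rs = _mm_add_ps(r0, r1);              /* sum of radii       */
    /* (r0+r1)/d < 1  <=>  d > r0+r1  <=>  spheres apart */
    __m128 m  = _mm_cmplt_ps(_mm_mul_ps(rd, rs), _mm_set1_ps(1.0f));
    return _mm_movemask_ps(m);
}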
Quote:
Also, the source of the video is from Ageia, of course... But the software physics is being done via NovodeX, which is a very optimized physics engine.

NovodeX is an Ageia product. If it runs several times slower than optimal then that doesn't really hurt Ageia. And for this demo they wanted PhysX to look good so I wouldn't be surprised if they crippled the software version to make it look even better (e.g. by forcing a brute-force approach).

Part of the problem is that we don't have independent benchmarks and quality specifications for physics in games yet. But I think we can expect that to be standardized pretty soon.
Quote:
200,000 floating point operations per object? Is that per frame or per second?

Left your pocket calculator at home? It's per frame: 6 GFLOPS spread over 6,000 objects at 5 FPS is 6,000,000,000 / (6,000 × 5) = 200,000 operations per object per frame.
Quote:
Original post by RedDrake
That demo looked like a Rocket demo (the Ageia PhysX demo framework), and I think you can download it (not 100% sure, though). See for yourself: there are various demos in that suite, and compared to other physics engines I've seen, they are very fast and accurate.

I'm sure it's pretty fast and robust. They couldn't have created hardware suited for physics calculations without superior knowledge and excellent software. But that doesn't mean it's absolutely optimal yet, or that the comparisons are fair. By the way, which other physics engines did you compare it with? Most freely available physics engines are C/C++ code that use suboptimal algorithms (lack of knowledge) and no SSE (for portability). It can't be that hard for a professional product to beat that. But in the comparison demos they specifically wanted PhysX to look good. They know it's hard to market, so a bit of cheating has to be very tempting... They pretty much have a monopoly anyway, and using non-standardized benchmarks works in their favor.
Quote:
Even if a CPU could calculate physics as fast as the PPU, the PPU is still useful since the CPU can't be dedicated to physics only.

You can pretty much dedicate a whole core to it with dual-core processors.
Quote:
...and anyway, a person who can afford a dual-core CPU (like you proposed) and other equipment can probably afford the extra money for a PPU.

Within a few months, Intel's Conroe processors will set new performance records for a very modest price. That extra dual-core performance will soon be available for all games, and for other applications as well. So you won't need big investments to run impressive games with great physics. Buying a $250 card with a PPU really seems like a waste of money to me. Of course there will still be people who buy it anyway.

This is just my personal opinion, of course. I just think multi-core CPUs have more of a future than PPUs. Three years from now, I envision 5+ GHz octa-cores with improved SIMD units (higher issue width, reverse hyper-threading) that are capable of doing everything except the heaviest graphics work. I don't see PCs with weak CPUs and a third processor. Both Intel and AMD produce the most advanced chips on the planet, and the competition between the two will yield some very interesting results.
Honestly, I don't see the big deal. A dedicated PPU takes the math-intensive load off the CPU; the GPU does its regular job. If you have a dual-core chip or two processors, you can use the other core/processor for AI. Plus, if the chip is made specifically for physics processing, it's going to beat a CPU/core hands down.
Quote:
Original post by Alpha_ProgDes
Honestly, I don't see the big deal. A dedicated PPU takes the math-intensive load off the CPU; the GPU does its regular job. If you have a dual-core chip or two processors, you can use the other core/processor for AI. Plus, if the chip is made specifically for physics processing, it's going to beat a CPU/core hands down.

Do you really expect everyone who wants to play games 'the way they are meant to be played' to buy an extra card? It's simply not economical. For my next PC I don't want to be forced to choose yet another component. Also think about laptops, where the trend has been to integrate everything in the least amount of components and keep it cool as well. And laptop sales have surpassed desktop sales, but people expect to be able to play games on them almost equally well.

And where's the limit, really, if we introduce more dedicated hardware into our systems? There has been speculation about A.I. chips as well (this is the A.I. forum, after all). But A.I. uses mainly integer calculations, so another chip would have to be added if the CPU isn't considered fast enough. This adds even more bandwidth and synchronization overhead. And while we're at it, let's add a chip for volumetric sound propagation, with a specialized architecture for accessing advanced data structures. Heck, why not implement each game separately as a chip? OK, I'm getting ridiculous now, but I had pretty much the same feeling when I first heard about a physics chip for games. It's a reasonable idea for consoles, but PCs have to be general-purpose to keep the complexity manageable.

Besides, like I said before, let's not underestimate CPUs! In the Quake era people knew how to get the best performance out of every clock cycle. Nowadays an average programmer doesn't understand software performance beyond big O. They also expect the compiler to optimize their crappy code and blame the CPU when it doesn't run at the expected performance (instead of themselves). I personally have optimized several applications by 2-5 times using assembly code and improving the design in the right spots. Last but not least, CPU manufacturers finally realized that clock frequency isn't everything, and started to focus on vast improvements in IPC and thread parallelism.

If extra processing power is really absolutely required then I see a lot more future in using the GPU for physics. Especially with DirectX 10 and unified architectures, GPUs will be pretty much a huge array of SIMD units anyway. Borrowing a few GFLOPS shouldn't be much of a problem, and there would be little or no overhead to get the results from the physics stage to the rendering stage. So even if I'm wrong about future CPU performance, PhysX still has another huge threat.
Quote:
Original post by C0D1F1ED
Investing in a robust physics engine would have made all the difference.


Funny, because Oblivion is powered by Havok. :)
Quote:
Original post by NickGravelyn
Quote:
Original post by C0D1F1ED
Investing in a robust physics engine would have made all the difference.

Funny, because Oblivion is powered by Havok. :)

Indeed, thanks for the info. I don't see such bugs in Half-Life 2, and it's powered by Havok as well. So apparently the integration of Havok didn't go too well. Again a lack of knowledge, as far as I can tell...
1) I like the idea of a specialized processor for particular types of tasks, but this hardware only runs a single physics library. Unless this library works for physics research (which seems unlikely, as games take shortcuts all over) or they provide a scientific version of the lib (not sure what the profit margin is there), this first generation of cards seems very limited.

2) The library runs on next-gen hardware, but I would be careful about comparisons with the Cell. This hardware has 128 MB of dedicated memory; the Cell SPEs have 256 KB of local store each and require custom programs to run on them. I am sure they leverage the SPEs, but I don't know how similar the implementations would be under the hood.

3) Until these cards become mass market, games won't be able to rely on them being present. Until then, they will likely increase your framerate/add visual candy but little else. That said, that may be enough to sell them. They may take off, but I doubt they will become big for several (4-8) years.

4) Half-Life 2's physics look good because Valve got a source code license and spent a big chunk of time customizing/rewriting it (something like a year+). I am sure the content/tweaking/polish played a big role too.

Honestly, I wouldn't be surprised if companies do start coming out with games that have different levels of physics or require the PPU altogether. After all, Oblivion can't run on most PCs because the video cards it recommends start at the X800 (and even that can hardly handle it), so the belief that companies may one day require this card for certain titles isn't out of the question.

In today's world, it just comes down to the cash you want to pay to play. Some people would rather spend $250 and have developers utilize that technology (if only for a minority) and create a very physically realistic experience.

As I said before, my next PC will have the PhysX chip. I will probably make a game that relies on the hardware because I am such a huge fan of intense physics as well as the improvement to the AI that could be done utilizing CPU power that is freed by outsourcing the physics.

And City of Villains is actually about to support the PhysX chip to allow for more particles and more destructive environments. http://ve3d.ign.com/articles/697/697302p1.html

Quote:
Original post by C0D1F1ED
If extra processing power is really absolutely required then I see a lot more future in using the GPU for physics. Especially with DirectX 10 and unified architectures, GPUs will be pretty much a huge array of SIMD units anyway. Borrowing a few GFLOPS shouldn't be much of a problem, and there would be little or no overhead to get the results from the physics stage to the rendering stage. So even if I'm wrong about future CPU performance, PhysX still has another huge threat.

But that's just dumb as well. Why continue to make GPUs huge and expensive so they can power physics, something they were never meant to do? I think the 'using GPUs for physics with DX10' thing is just dumb. And honestly, comparing Ageia's physics demos on their PPU with the Havok/nVidia videos, Ageia wins hands down. Maybe it's just the demos, but I personally don't think that using GPUs for physics will work out as well as everyone hopes. My card (an X800) is still struggling to run BF2 on the highest settings. If they were to take rendering power away for physics, I'd be more than upset, because essentially instead of saying "You have to buy this PPU to play our game," they are saying "You need to buy nVidia's top-of-the-line GPU because, although the rendering could be done on an X800, we are using your GPU for physics as well." Clearly it's not as big a deal now, but to achieve what the PhysX chip can do, the GPU requirements for games would go from roughly the top 20% of cards to the top 5%.
Quote:
Original post by NickGravelyn
Honestly, I wouldn't be surprised if companies do start coming out with games that have different levels of physics or require the PPU altogether. After all, Oblivion can't run on most PCs because the video cards it recommends start at the X800 (and even that can hardly handle it), so the belief that companies may one day require this card for certain titles isn't out of the question.

In today's world, it just comes down to the cash you want to pay to play. Some people would rather spend $250 and have developers utilize that technology (if only for a minority) and create a very physically realistic experience.

I'm sorry, but that's just silly. I'm playing and enjoying modern games on my budget laptop with integrated graphics. I don't want to be forced to pay several hundred bucks extra just so a game is runnable. I'm sure there are always people willing to spend all their money on new hardware, but a game with insane demands just won't sell enough copies. Optimizing a game for mid-end systems vastly increases the market.
Quote:
And City of Villains is actually about to support the PhysX chip to allow for more particles and more destructive environments. http://ve3d.ign.com/articles/697/697302p1.html

I still want to see proof that a dual-core wouldn't be capable of handling that. They talk about 10 times the number of particles. If the physics processing budget goes from 10% to 100% thanks to dual-core then the exact same thing is possible.
Quote:
But that's just dumb as well. Why continue to make GPUs huge and expensive so they can power physics, something they were never meant to do?

GPUs and PPUs are not that much different. Each has a number of SIMD units with a very similar instruction set. Especially the DirectX 10 generation will be capable of efficiently processing floating-point intensive tasks other than vertex and pixel processing. There's nothing in the PPU that makes it any better suited for physics.
Quote:
If they were to take rendering power away for physics, I'd be more than upset, because essentially instead of saying "You have to buy this PPU to play our game," they are saying "You need to buy nVidia's top-of-the-line GPU because, although the rendering could be done on an X800, we are using your GPU for physics as well."

What's the difference, really? If I invested the price of a PPU in my graphics card, it would have plenty of extra FLOPS to perform the physics. I'd have SLI, actually... But the most interesting thing is that people have the choice to buy a budget graphics card and still be able to play their games, even if at lower detail. What's better: playing a game at slightly lower detail, or not playing it at all?

The same is true for dual-core. Budget versions are on their way, and their extra processing power will be available to every application. By the time I see a PPU card in my local store, Intel will have launched Merom/Conroe-based processors worldwide. Not to mention DirectX 10 will appear around the same time.

Everyone has a CPU and a GPU. Relying on a third processor that only a fraction of end-users will have is crazy. Worst of all is that it's not cost-effective at all.
I'm sorry, I know I'm dense, but: if you're using a physics API that normally runs in software and uses the hardware chip when detected, how are you suffering or being inconvenienced? Isn't that what any graphics API does? We could take that concept a step further and have the API check whether you have the PhysX chip; if not, check for a second processor; if not, check for a dual core; if not, fall back to software. I don't think that's unreasonable or inefficient. Does anyone else?

Isn't that what PhysX-API-enabled games do or will do?
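A minimal sketch of that detection cascade in C, with hypothetical names only (this is not the real PhysX SDK API):

/* Hypothetical backend selection -- illustrative names, not the real
   PhysX SDK. Pick the best physics backend available, falling back
   all the way to single-threaded software. */
typedef enum { BACKEND_PPU, BACKEND_THREADED_SW, BACKEND_SW } Backend;

Backend pick_physics_backend(int has_ppu, int num_cores)
{
    if (has_ppu)        return BACKEND_PPU;          /* dedicated card */
    if (num_cores >= 2) return BACKEND_THREADED_SW;  /* spare core/CPU */
    return BACKEND_SW;                               /* plain software */
}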
Quote:
Original post by Alpha_ProgDes
I'm sorry, I know I'm dense, but: if you're using a physics API that normally runs in software and uses the hardware chip when detected, how are you suffering or being inconvenienced? Isn't that what any graphics API does? We could take that concept a step further and have the API check whether you have the PhysX chip; if not, check for a second processor; if not, check for a dual core; if not, fall back to software. I don't think that's unreasonable or inefficient. Does anyone else?

Isn't that what PhysX-API-enabled games do or will do?


That is what they'd do if the hardware is not detected, yes. And that's a good point: if people don't buy the card, they'll run the physics in software; those who buy the card will run it in hardware. It'll be faster with hardware, but everyone can still play.
Quote:
Original post by NickGravelyn
Quote:
Original post by Alpha_ProgDes
I'm sorry, I know I'm dense, but: if you're using a physics API that normally runs in software and uses the hardware chip when detected, how are you suffering or being inconvenienced? Isn't that what any graphics API does? We could take that concept a step further and have the API check whether you have the PhysX chip; if not, check for a second processor; if not, check for a dual core; if not, fall back to software. I don't think that's unreasonable or inefficient. Does anyone else?

Isn't that what PhysX-API-enabled games do or will do?


That is what they'd do if the hardware is not detected, yes. And that's a good point: if people don't buy the card, they'll run the physics in software; those who buy the card will run it in hardware. It'll be faster with hardware, but everyone can still play.


The difference is that with the PhysX card one can use advanced physics with thousands of rigid bodies... On a CPU that's not possible (same as the GPU-CPU relationship), and it won't be possible for several upcoming generations of CPUs (I don't believe the dual-core Athlons and Pentiums can handle physics better than the PhysX card, and after seeing several videos from the AGEIA developer site, I don't believe that's going to change soon, but this is IMHO)...
So the developer must build two levels of detail for the physics, and can't use rich physics (like fluid simulation and the like) for gameplay or anything other than eye-candy effects...
And for that, GPU physics are sufficient, so there is no point in the PhysX card as an accelerator; but as a game requirement, it's a powerful tool.
(Again, I note that this is IMHO.)
I think it's been said before, but part of the problem with the current physics chip is that it does seem to be API-dependent. The only things it will help are games that specifically use that API. If there isn't a major shift to using that API, or if other physics middleware vendors (like Havok) don't integrate the ability to take advantage of the chip, it basically just sits there.

From what I understand, it is basically another vector-based processor... over the PCI bus. A mighty 33 MHz. Also, will it get to the point where it is like the video card industry and you constantly need a new one to keep up with the technology?

We have a lot of vector math technology available on CPUs that I don't think gets taken advantage of as much as it could: MMX, 3DNow!, SSE/SSE2/SSE3, etc.
EDIT: I looked up the performance improvement figures for dual-core gaming: http://www.firingsquad.com/hardware/quake_4_dual-core_performance/
I had written 40%; it's actually around 60%. Also made a few clarifications.

C0D1F1ED: You need to chill. Your tone is condescending towards the rest of the posters, to the point of almost insulting them. Remember that everyone is entitled to their own opinions.

That being said, here's my take on some of the things you've mentioned:

- About the speed of the CPU: You can't really do the calculations the way you did. You are only calculating raw performance based on clock speed and simple instructions. The P4 does NOT have a sustained rate of three instructions per clock on all instructions (only simple ones). You also totally disregard cache misses and pipeline stalls. Also, the memory subsystem of any modern PC is vastly inferior to the dedicated memory subsystem of the card, AND its optimized memory access patterns. Memory access speed quickly deteriorates (up to a 20x penalty) on any modern PC when access patterns are scattered (but the PhysX card can maintain its high speed).

- About multicore: You can't expect 8x performance from your imaginary octa-core (or whatever) CPUs. Actual processing power does NOT scale linearly, as you will face severe synchronization issues, copying of data, and other problems related to multithreading (see the Amdahl's-law sketch after this list). This is already apparent in "dual-core optimized" games today; the best performance improvement that I know of (feel free to correct me on this) was in the Doom 3 engine, giving a ~60% boost. Other "optimized" games have reported boosts of around 15-20%. While this is pretty low, and will almost certainly improve in the future, these are not exactly the figures you'd expect. Adding more cores will just increase the overhead and lessen the improvement.

- About the speed of the PhysX processor: Unlike you, I have first-hand experience with the PhysX cards (we have hardware support in our title currently in development), and as several people have tried to point out to you, they are a LOT faster than a CPU simulation. Some of the features are simply not feasible in software right now (one of them being realistic fluids). Even *if* you could do it, you can do more of it on the PhysX card. There is also no extra hassle when using them; the API handles all of the synchronization with the hardware. All you have to worry about is integrating the extra power into the gameplay. Also, please remember: this is a first-generation chip. There's a lot of room for improvement.

- About the flexibility of the PhysX processor: the processor does NOT have a static physics-only feature set hardwired into it. It is fairly general-purpose (more so than a PS3.0 GPU, but not as much as a regular CPU). The features are provided by the driver and can be extended in the future. The processor is, however, optimized for specific calculations that occur often in physics computations. Some people have already used it for AI applications, such as neural networks and pathfinding. This means that if devs catch on, you can still have use for the processor even if you're playing a game whose gameplay does not depend on physics.

- About using the GPU to do physics: There are two main reasons why this is a lot worse than having a dedicated PPU.

1. You need all the graphics horsepower you can get. There will never be enough GPU power. When GPUs get faster, devs use them to do more things, more nicely: more shadows, more complex lighting models, and so on. Simple as that. (This was already pointed out by another poster.)

2. Check out the limitations of GPU physics: you can't read back the results, so you can't integrate them into the gameplay. No collision reports, no callbacks, no nothing. Read NVIDIA's paper; they clearly state it's only for effects. The PhysX chip accelerates everything, so there's no problem using it for gameplay at all (in fact, there's not even any difference).

- About the performance figures of the Cell chip: while the Cell boasts very impressive theoretical performance, this will not translate to actual real-world performance for physics or in general (only certain special-purpose applications will benefit, for example video stream decoding). The reason is this: the processor is divided into a main general-purpose CPU (a PowerPC RISC core) and eight smaller computation units, each with a small local memory (256 KB). The figure you mentioned comes from when all of these units are working together at maximum efficiency. If the smaller computation units can't be used, performance degrades to that of a standard ~4 GHz PowerPC processor. Here's why that will happen for physics: the memory access pattern for physics isn't localized, because any object must be tested against a lot of other objects (you can use hundreds or thousands on the PhysX cards). You can't fit the data in those 256 KB, and sorting the data (even from a spatial data structure) and then feeding it to the "right" processing unit involves so much overhead that it's just not feasible (nor would it provide any speedup). Hence, the only part of the processor that can be properly used is the PowerPC master unit, and using only that makes performance degrade to that of a regular CPU running a software simulation.
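The non-linear scaling in the multicore point above is just Amdahl's law. A quick sanity check (an editor's sketch; the 75% parallel fraction is an assumed figure for illustration, not one measured by any poster):

#include <stdio.h>

/* Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
   fraction of the work that parallelizes and n the number of cores. */
static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    /* If ~75% of a frame parallelizes, 2 cores give ~1.6x -- in line
       with the ~60% Doom 3 figure quoted above -- while 8 cores give
       only ~2.9x, nowhere near the 8x a naive estimate promises. */
    printf("2 cores: %.2fx\n", amdahl(0.75, 2));
    printf("8 cores: %.2fx\n", amdahl(0.75, 8));
    return 0;
}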


As you can see, there's really no point in discussing whether the PhysX chip is faster than a CPU. It beats a CPU (even a dual-core one) by a really wide margin, and that margin will continue to increase when the next generation of chips arrives. Specialized hardware will always beat general-purpose hardware in terms of performance. You mentioned that you are an engineer; if you have an engineering degree, this must have been pointed out to you in several books and/or by your professors.

Now, these technical points aside, another (completely different) discussion is whether dedicated physics hardware will catch on. That is something nobody can prove with techno-babble; it's really anyone's guess, and only time will tell.

Personally, I hope that the PPU will catch on, because it's really cool to have that much more power to play with :) Whether my dreams will come true remains to be seen ;)

/Simon

[Edited by - simonjacoby on April 25, 2006 4:47:44 PM]
Whether or not people buy it is going to be determined by the same factors that determined whether or not people bought:

1) the FPU (floating-point unit) you had to strap onto 386 motherboards to play games in the dark ages: lots of people did, until Intel built one into the 486

2) sound cards: originally you just got the PC speaker; lots of people paid ~$200 for a sound card for games that didn't require one, just so they would sound cooler. Now they're required.

3) 3D graphics cards: slow to be adopted, but now you can't buy a computer without one, nor can you play any "modern" game without one.

All of those "add-ons" were ridiculed in their time as something that would never catch on. "Who wants to pay $200 for an FPU just to play a couple of games???" "Why would I buy a sound card when the game has sound without one???" "Why would I buy a 3D card when it'll just make current games a little faster???"

All it takes is enough people seeing a "value-add" to pick one up. Once you get a consumer base, you can build more and more features that require one to work. It could work, it could bust. No one here can predict that.

But... a bunch of soon-to-be-released games will support the card. If the industry thinks it rocks, maybe it will...

-me
The main problem with the card is that it's not related to one of our senses (not directly, anyway). With graphics cards, it improved visuals within the game. With sound cards it was better sound quality (although I know plenty of people who still don't own one). But physics? There isn't really any big difference in the quality of the game when the average consumer plays it. Personally, I just don't see the average consumer picking one up (I think the API might end up being used, though, even if the card isn't). Then again, if game companies jump on board, it might end up being more commonplace... Who knows.

By the way, does anyone else feel like Creepy_cheese was a plant? :) Weird subject to make your first and only post on... Not to derail the conversation.
Quote:
Original post by TheFez6255
The main problem with the card is that it's not related to one of our senses (not directly, anyway). With graphics cards, it improved visuals within the game. With sound cards it was better sound quality (although I know plenty of people who still don't own one). But physics? There isn't really any big difference in the quality of the game when the average consumer plays it.


As graphics get more photorealistic, players become more attuned to unrealistic behaviour in the in-game models. On this level, better physics will make the gameplay appear more believable.

As for sound cards, you're missing the point slightly: the onboard sound of today is based on what used to be add-on cards 10 years ago, so the add-on card market raised the bar for everybody. I remember the days before digital sound playback was common on PCs. The same thing applies to on-board video, really.

Quote:
Original post by simonjacoby
C0D1F1ED: You need to chill. Your tone is condescending towards the rest of the posters, to the point of almost insulting them. Remember that everyone is entitled to their own opinions.

Don't worry, I'm really a nice guy. I respect everybody's opinion. I have a strong opinion of my own, and I think I'm just as entitled to it. I try to back my opinion with a bunch of arguments, which might seem aggressive at first, but it's just to make people think twice before they adopt an Ageia marketing slogan as their opinion. Eventually this should lead to a healthier conclusion for everybody, including me. I only come to GameDev.net to exchange opinions and learn... I apologize if anyone feels offended in the process.

At the moment I still think physics cards are being received with a little too much enthusiasm. I'm an engineer, and I think in terms of technical facts and cost-effectiveness. I'm not denying that physics will play a larger role in future games; I just don't believe an extra card is the best way to get there.

First of all, we need to expand our knowledge and use good software (cf. the Oblivion bugs). Using a physics card is like using the brute-force solution and shifting the problem to the consumer (who has to shell out). Furthermore, there's a good chance the amount of extra processing power we will actually need in real next-generation games will already be offered by dual-core processors (which are cost-effective for more than just NovodeX-powered games). Intel is about to launch some really nice processors that should not be underestimated when combined with optimized software. If that's still not enough, there's plenty of bang in current and next-generation graphics cards. It may cost us 1 FPS, but it's available to all and, again, more cost-effective. Also consider that GPUs use the latest silicon technology (which makes them fast and affordable), and it will take years for PPUs to get that efficient.

If that's still not convincing as to why physics cards are not so hot, then consider this: suppose that in a couple of months Joe 'Gamer' Average buys a fantastic new laptop with a 64-bit dual-core processor that brings even today's fastest Athlon 64 X2 to its knees, for a fraction of the price. And since he loves games, he'll combine that with something like a GeForce Go 7900 GS. Not cheap, but it definitely delivers value for every buck. Now, would it be unreasonable to expect this great laptop to run every new game in full glory for at least two years? Imagine how frustrating it must be if a game is released where advanced physics really add to the gameplay, but it only enables them when a PPU is available, even though he already has two really fast processors in his system.

That's how I feel about physics cards. Adding an expensive third processor with SIMD units no different from those found in the CPU or GPU just doesn't seem justifiable to me. Feel very free to have a different opinion, but I'd love to hear some sound arguments why physics cards are the best idea since sliced bread. Thank you.
Quote:
Original post by C0D1F1ED
If that's still not convincing as to why physics cards are not so hot, then consider this: suppose that in a couple of months Joe 'Gamer' Average buys a fantastic new laptop with a 64-bit dual-core processor that brings even today's fastest Athlon 64 X2 to its knees, for a fraction of the price. And since he loves games, he'll combine that with something like a GeForce Go 7900 GS. Not cheap, but it definitely delivers value for every buck. Now, would it be unreasonable to expect this great laptop to run every new game in full glory for at least two years? Imagine how frustrating it must be if a game is released where advanced physics really add to the gameplay, but it only enables them when a PPU is available, even though he already has two really fast processors in his system.

First of all, it would be seriously unreasonable to expect every new title for the next two years to run in FULL GLORY. At some low detail settings the games should run, though. Just consider the fact that DX10 is coming in about that time :), and that FarCry is already showing flashy DX10 videos around :D

From what I have seen of the PhysX API, I am under the impression that it allows hardware/software usage with no code difference (probably initialization code only).
So if the user's CPU IS FAST ENOUGH to run an advanced physics simulation at optimal speed (at least half of the PPU to be playable, I guess), the user Joe 'Gamer' won't be missing a lot :) (probably just small performance issues).
There is a part of the docs I did not understand fully (I didn't have enough time to search): the fact that joints, pmaps, and some other features are not PPU-accelerated. What does this mean? Is some kind of CPU/PPU processing mix used? I would certainly think so (I can't imagine a physics simulation without joints/constraints).

Another note: who has a laptop as a primary gaming station and calls himself a gamer, I wonder :) Probably a small % of the "hardcore" gamer market (hardcore meaning the ones who bother playing something other than Tetris and similar time-killing office games :) ).
It would be more affordable to buy a regular laptop and a console, if you ask me :)
You always need to pay extra if you intend to play serious brand-new game titles...
Quote:
Original post by C0D1F1ED
Adding an expensive third processor with SIMD units no different from those found in the CPU or GPU just doesn't seem justifiable to me.


The very fact that you're mentioning CPUs and GPUs as if they're fundamentally the same thing shows the error in your argument. They are totally different devices. The architecture is completely different, optimised for different instructions, different access patterns, different I/O requirements, different synchronisation constraints, etc.

Quote:
Original post by Kylotan
Quote:
Original post by TheFez6255
The main problem with the card is that it's not related to one of our senses (not directly, anyway). With graphics cards, it improved visuals within the game. With sound cards it was better sound quality (although I know plenty of people who still don't own one). But physics? There isn't really any big difference in the quality of the game when the average consumer plays it.


As graphics get more photorealistic, players become more attuned to unrealistic behaviour in the in-game models. On this level, better physics will make the gameplay appear more believable.

As for sound cards, you're missing the point slightly: the onboard sound of today is based on what used to be add-on cards 10 years ago, so the add-on card market raised the bar for everybody. I remember the days before digital sound playback was common on PCs. The same thing applies to on-board video, really.


But look at it from the consumer's point of view: how will the card make for better games?

Whether or not this eventually gets integrated into some computer of the future is beside the point. I'm talking strictly from a consumer standpoint as to how much the card will improve their gaming experience. My guess is that consumers simply won't get it, or care. I'm guessing most of them are going to say: "I have a computer at about 3.5 GHz that plays any game I buy perfectly fine. In fact, I have physics simulations that run on my machine perfectly fine. Why do I need this card?"

From a developer's standpoint the card might be gold (although I'm not willing to pay the amount they want), but I just don't see the average gamer wanting it. I mean, I've bought games that are optimized for the card and they ran fine on my machine, so why should I get the card (once again, from a consumer's point of view)?

To me that means, as a developer, that I have to cut down on a lot of the physics in my game so that it runs on systems without the card. Personally, unless a game comes along that requires the card, I just don't see it becoming commonplace with the average gamer (or at least not in the next 5 to 10 years).
A small question: would the card be practical for MMORPGs, or any other MMO?
Consider the processing power that could simply be offloaded to the PPU, and even extended, while the CPU deals with player-simulation communication... this would allow simulating large worlds on a single server, while the player's side runs the software version of the engine for only a small area around the player.
Right now these cards are rather expensive, or only come with high-end PCs where the "little" extra money doesn't weigh in much anyway. But what if the price dropped to something like $100 (optimized manufacturing, hardware prices falling)? People might think: I can spend $200 on that tuned super-high-end CPU for 10% better performance, or I can get a PPU for half the price that delivers the same overall performance gain (given, of course, that physics is increasingly used in future games). A lot of people obviously spend a hell of a lot of money on a few more frames per second; why not on a PPU if it offers the same?
The SDK does a good job of abstracting the underlying hardware, making it rather easy for developers to support the hardware as an option, and since the SDK in that case comes virtually for free, I think there is a good chance we'll see a lot of nice games based on the Ageia technology in the near future. I also think the additional power of a PPU will mostly be used for special-effects physics with little or no impact on gameplay, so that if you don't have a PPU you may still have a good experience. Same thing with the Havok FX approach, which is clearly targeted at special effects. Bottom line: I have some doubts, but I see some chances as well. We'll see soon enough.
