Creepy_cheese

physx chip

224 posts in this topic

Ageia is making a PhysX chip that helps with the physics. I was reading a review and it said the card has its own processor, so it relieves the CPU and lets it focus on the AI, which makes the physics better. It sounded kind of cool, and the demos were nice. I think it's a neat idea.

Dell, Alienware, and Falcon Northwest are already selling PCs with PhysX cards. Next month you should be able to find the first retail cards. It's going to be a while until enough games and other software support it to justify buying one. Then again, it might fail miserably.

It sounds neat but it has no chance of survival. Dual-core processors have a whole lot of extra processing power that can be used for physics without the additional overhead (PCI bandwidth, synchronization) of a separate physics card. Furthermore, investing in a dual-core processor benefits much more than just the physics in a few games. The Ageia cards are quite expensive and will rarely be used.

Besides, no game developer in his right mind would create a game that will only run on a fraction of PCs. So there always has to be a fallback, which means the extra hardware can't be allowed to affect gameplay. It took graphics cards about three years to become widespread, but by the time physics plays a key role in games, CPUs will be multi-core with highly improved architectures...
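For illustration, here's roughly what using that spare core looks like: a minimal sketch with standard C++ threads and a double-buffered world state (all names here are hypothetical, nothing engine-specific):

#include <functional>
#include <thread>

// Hypothetical stand-ins for the game's own systems.
struct WorldState { /* positions, velocities, ... */ };

void stepPhysics(WorldState& out, const WorldState& in, float dt)
{
    // Integrate bodies, resolve collisions, write results into `out`.
    (void)out; (void)in; (void)dt;
}

void updateAIAndRendering(const WorldState& state, float dt)
{
    // AI, gameplay and rendering read last frame's physics results.
    (void)state; (void)dt;
}

int main()
{
    WorldState buffers[2] = {};          // double buffer: read one, write the other
    int current = 0;
    const float dt = 1.0f / 60.0f;

    for (int frame = 0; frame < 600; ++frame) {
        const int next = 1 - current;

        // Physics for the next frame runs on the spare core...
        std::thread physics(stepPhysics, std::ref(buffers[next]),
                            std::cref(buffers[current]), dt);

        // ...while AI/gameplay/rendering use the previous frame's results.
        updateAIAndRendering(buffers[current], dt);

        physics.join();                  // one sync point per frame
        current = next;
    }
}

A real engine would keep a persistent worker thread rather than spawning one per frame, but the shape is the same: one sync point per frame, and the physics cost moves onto the core that would otherwise sit idle.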

Gotta disagree, at least somewhat. In the same way you don't want to be using a dual-core processor for rasterizing triangles and running shader code, dual cores will never be as efficient or be able to run as many operations per second as a dedicated card (case in point: the low-cost PhysX chip handily beats the fastest single CPUs available today). Furthermore, the overhead is nothing compared to graphics work, and the synchronization work would have to be done anyway if the physics were running in another thread.

The main thing will just be market penetration. The nice thing is that the Ageia library automatically uses the card if it is found, so you aren't preventing anyone from running the game. The bad thing is that this means the extra physics power can't be used to affect gameplay (though there are a few situations where you could get away with it in single player, and it could benefit games like Second Life where the physics calculations take place on the server). However, there are quite a few people willing to pay $250 for just a bit more eye candy (see Alienware for even more egregious examples); just look at the lengths people go to for a few more fps. Offloading all the physics calculations could be a significant jump.

I'm not one of those people right now, but the first killer app (some cool Unreal Engine 3 title, for instance), or a price drop of $100 or so, and I might jump on board without a second thought.

don't forget about server-side physics. allowing clients to calculate physics leads to hacks/cheats. dedicated physics hardware could be a cost-effective solution for online game servers.
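To sketch what I mean (purely hypothetical names, and a trivial fixed-timestep loop): the server owns the simulation, clients only send inputs, and whether the step runs on a CPU or a PPU is the server operator's problem:

#include <chrono>
#include <thread>
#include <vector>

// Hypothetical stand-ins for the server's own systems.
struct PlayerInput { int playerId; float moveX, moveY; bool jump; };
struct Snapshot    { /* authoritative positions/velocities for every entity */ };

std::vector<PlayerInput> collectClientInputs() { return {}; }                        // stub: network receive
Snapshot simulateWorld(const std::vector<PlayerInput>&, float /*dt*/) { return {}; } // stub: CPU or PPU step
void broadcastSnapshot(const Snapshot&) {}                                           // stub: network send

int main()
{
    using clock = std::chrono::steady_clock;
    const auto tickLength = std::chrono::milliseconds(50);   // 20 Hz server tick
    const float dt = 0.05f;

    for (int tick = 0; tick < 1200; ++tick) {                // ~1 minute of simulation
        const auto start = clock::now();

        // Clients only submit inputs; the server runs the physics itself,
        // so a client can't cheat by reporting an impossible position.
        const auto inputs = collectClientInputs();
        const Snapshot state = simulateWorld(inputs, dt);
        broadcastSnapshot(state);

        std::this_thread::sleep_until(start + tickLength);
    }
}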

Quote:
It sounds neat but it has no chance of survival. Dual-core processors have a whole lot of extra processing power


Do you think graphics cards will die too? People said the same thing (more or less) about hardware T&L.

The truth of the matter may be that physics calculations are streamlined enough that it's more cost effective to do it in a physics chip than a general-purpose CPU. x86 cycles may be quite useful, but they are also very expensive, compared to commodity special-purpose hardware.

Remember: Ageia can parallelize more easily than Intel, because their API is more inherently parallel. Also, Ageia cards can, today, do quite a bit more with physics than Intel can, today. Thus, if richness in physics is important, and if physics stays data parallel, then Ageia has a pretty good chance to stay in the game.
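To spell out the "data parallel" part: a big chunk of a physics step is a loop where every body is updated from its own state alone, so it spreads across however many lanes or cores the hardware has. A toy sketch in plain C++ (no particular SDK):

#include <vector>

struct Body {
    float px, py, pz;   // position
    float vx, vy, vz;   // velocity
    float invMass;      // 0 for static geometry
};

// Integration is embarrassingly data parallel: every body is updated from its
// own state alone, so the loop can be split across however many lanes or cores
// the hardware happens to have. This is the shape of work a PPU is built for.
void integrate(std::vector<Body>& bodies, float dt, float gravityY)
{
    for (Body& b : bodies) {
        if (b.invMass == 0.0f) continue;   // skip static geometry
        b.vy += gravityY * dt;             // apply gravity
        b.px += b.vx * dt;                 // advance position
        b.py += b.vy * dt;
        b.pz += b.vz * dt;
    }
}

int main()
{
    std::vector<Body> bodies(1000, Body{0, 10, 0, 0, 0, 0, 1.0f});
    for (int frame = 0; frame < 60; ++frame)
        integrate(bodies, 1.0f / 60.0f, -9.81f);
}

Contact and constraint solving is less trivially parallel than this, but it's still much closer to this shape than typical branchy x86 game code.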

Ageia is at my school today. I saw a playable demo of Cell Factor. It was pretty sweet. Going later for a speech/demo. One of the cool things was shooting through a flag and having it tear to pieces.

I don't think the dedicated PhysX cards have a chance in hell of capturing consumer attention. However, I think if Ageia signs a contract with ATI or NVidia (or both) to put their chips on the graphics cards, then we will have something special. Not to mention something to do with all of that extra unnecessary bandwidth that PCI Express offers.

Quote:
Original post by C0D1F1ED
Besides, no game developer in his right mind would create a game that will only run on a fraction of PCs.


The Elder Scrolls IV: Oblivion
Doom 3
Far Cry

... and really any game that uses cutting-edge graphics technology only runs on a fraction of PCs. So why not physics technologies?


Quote:
Original post by dpadam450
Ageia is at my school today. I saw a playable demo of Cell Factor. It was pretty sweet. Going later for a speech/demo. One of the cool things was shooting through a flag and having it tear to pieces.


Hmmm... Ageia at your school... Bellevue... you must be a DigiPen Inmate!

Quote:
Original post by JBourrie
I don't think the dedicated PhysX cards have a chance in hell of capturing consumer attention.


The hardcore gamer with too much cash is going to pick it up without hesitation. Whether it goes beyond that, who knows?

Everyone who has the Ageia logo on their box is already using the physics engine and should have a few effects that only run on the hardware... the deals Ageia has made mean that every licensee can offer specific hardware-accelerated effects that are only available on the card.
So I would expect games like Ghost Recon and all the UE3-based titles to use it.

From some very rough numbers on PhysX performance, it has about 100-200 times the FLOPS of a high-end single-core CPU (and about 35 times the RAM bandwidth of dual-channel DDR400). These numbers would be reasonable for a mid-level graphics card, for those who are surprised by that. Now assume CPUs double in speed every year: then in log(200)/log(2) = 7.64 years your second core will be able to replace the PhysX, and assuming only 100 times the FLOPS it'll take 6.64 years. CPUs are dang quick, but they don't touch specialized hardware. (As a side note, I'm hoping for FPGAs to become a standard part of computers at some point in the near future.)
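(The arithmetic, if anyone wants to plug in their own numbers:)

#include <cmath>
#include <cstdio>

int main()
{
    // If CPU speed doubles every year, catching up to hardware that is
    // `ratio` times faster takes log2(ratio) years.
    for (double ratio : {100.0, 200.0})
        std::printf("%5.0fx faster -> %.2f years to catch up\n", ratio, std::log2(ratio));
}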

My main concern is that there's no open API for physics cards, so there's going to be a bunch of proprietary physics hardware once other companies get into the act. And each game is only going to use one physics engine (at least any game with multiplayer), so mainstream games that make heavy use of physics hardware will be rare, since only so many people have X type of physics hardware. Of course, one company may just conquer the market and become a monopoly; if that happens, odds are top graphics cards will be cheap by comparison.

Well, for one thing, PhysX has a software mode too. I'm hoping to use it in a game that I will be writing this summer. I like the nice fluid dynamics effects the PhysX SDK allows for.

I really don't like this idea of having physics calculated off-chip, requiring local memory and data transfer and all of this other headache... it seems like the least cost-effective way to do dedicated physics.

This is precisely the sort of thing that needs to be onboard the CPU, either as a dedicated IC or as some sort of PLD. The idea is really hurt by the fact that Ageia stuck it outside the CPU.

But, having said that, the mainstream GPU industry was practically birthed by dedicated gamers, so I wouldn't be surprised at PC gamers picking these things up in droves if the next Doom or Quake or Unreal put it to good use.

I'm thinking it's a bad idea. Firstly, there's the problem that you have to cater to the lowest common denominator; games that use PhysX will only be able to use it for eye candy, not anything gameplay-changing. It's not like graphics cards at all: everybody has a graphics card, they just vary in performance, whereas with the PhysX card it's binary; you either have it or you don't.

Secondly, I just don't like the idea of it. I like parallelism, but splitting tasks up like this is just silly. What's next, an AI card? A Scripting card? There's only so many areas you can split it down into. Really, more work should be done on getting multi-core processors; they may not be as optimal, but they are more useful and scalable since they are generic and not tied to any one task.

Quick question:
How are we going to use this hardware? Does it require a specialized library that won't apply to the nVidia physics enhancements coming out? Or is there already some abstraction, ala OpenGL (OpenPhys?), to handle that for us?

Cheers,
--Brian

Quote:
Original post by Nairb
Quick question:
How are we going to use this hardware? Does it require a specialized library that won't apply to the nVidia physics enhancements coming out? Or is there already some abstraction, ala OpenGL (OpenPhys?), to handle that for us?

Cheers,
--Brian


A proprietary API, of course!

They provide a very OpenGL-like library which will use the card, or fall back to software if it isn't present. At least that's my understanding, based on talking to people who've used them.
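In other words, from the application side it looks roughly like this (hypothetical wrapper names, not the SDK's actual calls, just the shape of it): you ask for a scene, and you get the hardware implementation if a card is present, or the software one behind the same interface if not.

#include <cstdio>
#include <memory>

// Hypothetical abstraction over a physics backend: the point is that the game
// code talks to one interface whether a PPU is present or not.
class PhysicsScene {
public:
    virtual ~PhysicsScene() = default;
    virtual void simulate(float dt) = 0;
    virtual const char* backendName() const = 0;
};

class HardwareScene : public PhysicsScene {
public:
    void simulate(float) override { /* dispatch the step to the card */ }
    const char* backendName() const override { return "PPU"; }
};

class SoftwareScene : public PhysicsScene {
public:
    void simulate(float) override { /* run the same solver on the CPU */ }
    const char* backendName() const override { return "software"; }
};

// Stand-in for whatever device detection the SDK does internally.
bool physicsCardPresent() { return false; }

std::unique_ptr<PhysicsScene> createScene()
{
    if (physicsCardPresent())
        return std::make_unique<HardwareScene>();
    return std::make_unique<SoftwareScene>();   // transparent fallback
}

int main()
{
    auto scene = createScene();
    std::printf("physics backend: %s\n", scene->backendName());
    for (int frame = 0; frame < 60; ++frame)
        scene->simulate(1.0f / 60.0f);
}

The nice part of that design is that the game logic never branches on whether the card exists; only the scene factory does.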

I'm using the card for my Genesis project, a self-evolving AI system that is partially based on the real world and uses a physics simulator to model real-world mechanics for robots and their problem-solving abilities...

So I guess that even though I'm kind of in the group of people saying this might not work commercially, I am buying it after all, just not for the reasons they probably developed the card for...

What with nVidia teaming up with Havok to deliver their own PPU, this is sounding a lot like the early days of Glide vs. OpenGL vs. D3D v3.

I prophesy the creation of a generalized physics API (say, OpenPL) and a Microsoft DirectX physics API, DirectPhys (direct-fizz?).

Certainly highly parallel operations are nothing new, and I wouldn't be surprised if a new class of secondary processor, called, oh, I dunno... Parallel Pipe Processor, became standard on many PCs.

Oh wait, isn't that exactly what the Cell processor is? Hmmm...

Keep an eye out for PPPs and physics engines built to take advantage of them (not to mention a whole host of other applications; can you imagine Photoshop accelerated with a PPP? It would be astoundingly fast!)

The difference is that nVidia is working to do general-purpose computations on their GPU, which will still be focused on providing top-notch graphics performance, whereas Ageia has made a special-purpose processor to deal with the physics computations.

This means:
1) The Ageia processor has real memory, like a CPU. The GPU may have comparable bandwidth, but the access pattern a GPU is designed for is not the same access pattern seen in physics computations, so the physics work has to be broken up in "unnatural" ways to accommodate the limited access to memory (see the sketch after this list).

2) The Ageia processor is not "too far" from the CPU: the CPU needs data such as position, velocity, direction, and rotation in order to make decisions for AI and graphics processing, but it no longer needs to deal with all the side-effect data that the physics needs in order to compute that object information. This is the same idea as hardware T&L: the CPU doesn't need to know about all the side-effect geometry, it only has to know the position and orientation data for the mesh.
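Here's the sketch mentioned in point 1: a rough, self-contained illustration (plain C++, hypothetical types, no particular SDK) of the scattered, dependent memory access a typical contact solver does; a streaming-oriented GPU would much rather walk one big contiguous array:

#include <cstddef>
#include <vector>

// Hypothetical minimal types, just for illustration.
struct Body {
    float vx, vy, vz;   // linear velocity
    float invMass;      // 0 for static bodies
};

struct Contact {
    std::size_t a, b;   // indices of the two bodies involved
    float nx, ny, nz;   // contact normal (unit length, pointing from a to b)
    float impulse;      // accumulated impulse along the normal
};

// One Gauss-Seidel-style relaxation pass over the contacts. Each iteration
// gathers two bodies by index, updates them in place, and later contacts may
// depend on those updated velocities: scattered reads and writes plus serial
// dependencies, not a clean stream.
void solveIteration(std::vector<Body>& bodies, std::vector<Contact>& contacts)
{
    for (Contact& c : contacts) {
        Body& A = bodies[c.a];              // gather (random access)
        Body& B = bodies[c.b];              // gather (random access)

        // Relative velocity along the contact normal.
        float relVel = (B.vx - A.vx) * c.nx
                     + (B.vy - A.vy) * c.ny
                     + (B.vz - A.vz) * c.nz;
        if (relVel > 0.0f) continue;        // already separating

        // Simple non-penetration response: cancel the approaching velocity.
        float j = -relVel / (A.invMass + B.invMass + 1e-6f);
        c.impulse += j;

        A.vx -= j * A.invMass * c.nx;       // scatter (write back in place)
        A.vy -= j * A.invMass * c.ny;
        A.vz -= j * A.invMass * c.nz;
        B.vx += j * B.invMass * c.nx;
        B.vy += j * B.invMass * c.ny;
        B.vz += j * B.invMass * c.nz;
    }
}

int main()
{
    std::vector<Body> bodies = { {0, 0, 0, 1.0f}, {0, -1, 0, 1.0f} };
    std::vector<Contact> contacts = { {0, 1, 0.0f, 1.0f, 0.0f, 0.0f} };
    for (int i = 0; i < 8; ++i)             // a few relaxation passes
        solveIteration(bodies, contacts);
}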

There is a very significant amount of processing that the GPU and PPU are both able to do, and a significant amount of data that gets cached on both and does not have to be transferred every frame. Anything you can offload from the CPU means more cycles for everything else. It still takes time to iterate through 1000 objects and do frame updates; if you can halve the processing time, you can either make the world more "rich and full" with another 1000 objects, or make each object "smarter" in some way.

Quote:

What's next, an AI card? A Scripting card? There's only so many areas you can split it down into.

There is already specific hardware in your computer to decode MPEG-4 video and MP3 audio. Your network card likely has RLE compression and hardware to help support the TCP/IP stack. RAID can be done with software, but there is hardware to deal with that too.

The point is that there are types of computation that can be done better with specific hardware. In 5 years we will likely have people using the PhysX API to do fast general-purpose computations on the PPU, just like we have people using fragment shaders to do some general-purpose computation on the GPU.

If we didn't have hardware acceleration for many facets of computation today, the technology would never have made it to the mainstream. (Think of a 500 MHz CPU trying to decode a DVD without GPU support, or trying to encode an MP3 in real time straight from the CD/line-in. Possible, but a 500 MHz CPU plus a DVD/TV decoder card will make all the difference in the world as far as playback responsiveness.) The PhysX chip is a step in the right direction. The Cell probably is too.

And even if the consumer market doesn't take up the PhysX chip, there is likely a commercial market for it, just as SGI opened the market for commercial high-speed, high-quality graphics processing. Likely the chip will not "flop", but it will be something that only gamers buy; the average computer user will never see the need. Alienware, Voodoo PC, and several other companies focus on the gamer market and seem to be doing fine. The PhysX chip will likely become part of their full line of products, just as SLI slowly edged its way in.


P.S.
Personally? I'm waiting for real-time ray-tracing GPUs to come to market. When that day comes, all the pixel-shader hacks will be gone, and we can finally see graphics with real effects in full 3D, without the artifacts we have today.

What starts out as an add-on chip today may well become an inbuilt part of motherboard designs of the future. Games are not the only environments that require physics processing; indeed, there are many more research machines out there that require simulation capabilities than there are games machines. This will indeed be a solid marketplace for PPUs. Additionally, I think you'll find that if PhysX is successful in penetrating software design (or at least the industry isn't averse to catering for it), then you may well see next- or later-generation consoles with dedicated PPUs.

My personal opinion is that this is actually well overdue. There is a great capacity within both the gaming/entertainment and the scientific/engineering research markets for the uptake of physics chips, because what they bring to the table is increased performance, if you're prepared to take advantage of them. Since the market is always moving forward, there will always be people prepared to take that advantage and mark their place in the front line... and more power to them if it means my research life is made easier and my games look and feel more realistic!

Cheers,

Timkin

Quote:
Original post by justo
...in the same way you don't want to be using a dual-core processor for rasterizing triangles and running shader code, dual cores will never be as efficient or be able to run as many operations per second as a dedicated card...

You can't compare the two. First of all, modern games absolutely can't run without a graphics card because of the vast amount of processing power required. For physics, you don't need that many GFLOPS. Only a fraction of the geometry is in motion for any realistic game, and that's vertex processing, not pixel processing.

Secondly, CPUs are bad at pixel processing (bilinear texture filtering in software is quite slow), but they are very good at vertex processing (Intel Integrated Graphics still uses software T&L in many cases).

Thirdly, don't underestimate the effective efficiency of a CPU. It has a comparatively huge and fast cache, it has highly efficient jump prediction, can do out-of-order execution, and the execution units are fully pipelined and run at several gigahertz. A PPU uses a brute-force approach, but can't be that efficient. So even if it has a multiple of the GFLOPS of a CPU, a lot goes to waste because of additional overhead (not to mention PCI bandwidth and synchronization).

Last but not least, the second core of a dual-core processor is practically unused at the moment. So while previously there was maybe a 10% 'budget' for physics processing, this has now become 100%. It's already going to be hard to use all of that (without wasting it, of course) for any realistic game. Plus, in a year dual-core CPUs will be widespread and affordable, while only over-enthusiastic gamers will buy a PPU.

Quote:
Original post by C0D1F1ED
You can't compare the two. First of all, modern games absolutely can't run without a graphics card because of the vast amount of processing power required. For physics, you don't need that many GFLOPS.


That entirely depends on what you're simulating!

Quote:
Only a fraction of the geometry is in motion for any realistic game, and that's vertex processing, not pixel processing.


Physics is about more than just accommodating motion of a few models. Comparing it to vertex processing is doing it a huge disservice. Anyone who's looked at endless equations of Hamiltonian or Lagrangian derivations of rotational velocity and stuff like that will know this. To do a good job of physics, you have various iterative methods and approximations that, to be done properly, require a lot of power. A large part of the reason little of the geometry moves in current games is because it's expensive to make it do so. Make it cheaper, and more will move.

Quote:
Thirdly, don't underestimate the effective efficiency of a CPU. It has a comparatively huge and fast cache, it has highly efficient jump prediction, can do out-of-order execution, and the execution units are fully pipelined and run at several gigahertz. A PPU uses a brute-force approach, but can't be that efficient.


I seriously doubt that a PPU is not 'fully pipelined'. Nor is jump prediction or out of order execution a big deal for such hardware. Those features are part of how CPUs attempt to catch up to dedicated hardware, not advantages over such hardware.

Quote:
Last but not least, the second core of a dual-core processor is practically unused at the moment.


This may be true, but it's almost exactly what Intel said about MMX for graphics. It was nowhere near enough for what was needed.

Quote:
Original post by gumpy macdrunken
don't forget about server-side physics. allowing clients to calculate physics leads to hacks/cheats. dedicated physics hardware could be a cost-effective solution for online game servers.


This is exactly the use I had in mind for these cards: situations where you can dictate the hardware specification and make your app take full advantage of it.


