physx chip


224 replies to this topic

#1 Creepy_cheese   Members   -  Reputation: 104

Posted 20 April 2006 - 01:42 PM

ageia is making a physx chip that helps with the physics and stuff. i was reading a review and it said it has its own processor, so it relieves the cpu and lets it focus on things like ai while the chip handles the physics. it sounded kinda cool, and the demos were nice. i think its a neat idea.

#2 gumpy   Members   -  Reputation: 793

Posted 20 April 2006 - 01:48 PM

dell, alienware, and falcon northwest are already selling pcs with physx cards. next month you should be able to find the first retail cards. it's going to be a while until enough games and other software support it to justify buying one. then again, it might fail miserably.

This space for rent.

#3 C0D1F1ED   Members   -  Reputation: 456

Posted 20 April 2006 - 01:59 PM

It sounds neat but it has no chance of survival. Dual-core processors have a whole lot of extra processing power that can be used for physics without the additional overhead (PCI bandwidth, synchronization) of a separate physics card. Furthermore, investing in a dual-core processor benefits much more than just the physics in a few games. The Ageia cards are quite expensive and will rarely be used.

Besides, no game developer in his right mind would create a game that will only run on a fraction of PCs. So there always has to be a fallback that doesn't affect gameplay. It took graphics cards about three years to become widespread, but by the time physics plays a key role in games, CPUs will be multi-core with highly improved architectures...

#4 justo   Members   -  Reputation: 184

Posted 20 April 2006 - 02:16 PM

gotta disagree, at least somewhat. in the same way you don't want to be using a dual core processor for rasterizing triangles and running shader code, dual cores will never be as efficient or run as many operations per second as a dedicated card (case in point... the low cost physx chip handily beats the fastest single cpu available today). furthermore the overhead is nothing compared to graphics work, and the sync work would have to be done anyway if the physics was running in another thread.

the main thing will just be market penetration. the nice thing is that the ageia lib automatically uses the card if it is found, so you aren't preventing people from running the game. the bad thing is that means the extra physics power can't be used to affect gameplay (though there are a few situations where you could get away with it in single player, and it could benefit games like second life where physics calcs take place on the server). however, there are quite a few people willing to pay $250 for just a bit more eye candy (see alienware for even more egregious examples)... just see the lengths people go to for a few more fps; offloading all physics calcs could be a significant jump.

i'm not one of those people right now, but give it a killer app (some cool unreal engine 3 title, for instance) or bring the price down $100 or so and i might jump on board without a second thought.
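
to make that concrete, here's a minimal sketch (modern c++, with a made-up BodyState type, so just an illustration) of physics running on its own thread while the game thread only ever reads a finished snapshot. the same hand-off would be needed whether the stepping happens on a second core or on a ppu across the pci bus:

#include <atomic>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical body state, just for illustration.
struct BodyState { float pos[3]; float vel[3]; };

class PhysicsThread {
public:
    void start() {
        running = true;
        worker = std::thread([this] { loop(); });
    }
    void stop() {
        running = false;
        if (worker.joinable()) worker.join();
    }
    // The game/render thread only ever sees a completed frame.
    std::vector<BodyState> snapshot() {
        std::lock_guard<std::mutex> lock(swapLock);
        return front;
    }
private:
    void loop() {
        while (running) {             // a real loop would pace itself to the timestep
            step(back, 1.0f / 60.0f);
            std::lock_guard<std::mutex> lock(swapLock);
            front = back;             // publish the finished state
        }
    }
    static void step(std::vector<BodyState>& bodies, float dt) {
        for (BodyState& b : bodies)   // placeholder integration
            for (int i = 0; i < 3; ++i)
                b.pos[i] += b.vel[i] * dt;
    }
    std::vector<BodyState> front, back;
    std::mutex swapLock;
    std::atomic<bool> running{false};
    std::thread worker;
};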

#5 gumpy   Members   -  Reputation: 793

Posted 20 April 2006 - 02:34 PM

don't forget about server-side physics. allowing clients to calculate physics leads to hacks/cheats. dedicated physics hardware could be a cost-effective solution for online game servers.

This space for rent.

#6 hplus0603   Moderators   -  Reputation: 4906

Posted 20 April 2006 - 02:37 PM

Quote:
It sounds neat but it has no chance of survival. Dual-core processors have a whole lot of extra processing power


Do you think the graphics cards will die too? People said the same thing (more or less) about hardware T&L, too.

The truth of the matter may be that physics calculations are streamlined enough that it's more cost effective to do them on a physics chip than on a general-purpose CPU. x86 cycles may be quite useful, but they are also very expensive compared to commodity special-purpose hardware.

Remember: Ageia can parallelize more easily than Intel, because their API is inherently more parallel. Also, Ageia cards can do quite a bit more with physics today than Intel can. Thus, if richness in physics is important, and if physics stays data parallel, then Ageia has a pretty good chance to stay in the game.
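
To picture what "data parallel" means here, consider the integration step over many independent bodies. This is a hypothetical structure-of-arrays sketch, not Ageia's actual internals: no iteration depends on any other, which is exactly the shape of work that wide special-purpose hardware is built for.

#include <cstddef>

// Structure-of-arrays layout: every body's update is independent,
// so the loop can be vectorized or spread across many lanes.
struct Bodies {
    float *px, *py, *pz;   // positions
    float *vx, *vy, *vz;   // velocities
    std::size_t count;
};

void integrate(Bodies& b, float dt, float gravity) {
    for (std::size_t i = 0; i < b.count; ++i) {
        b.vy[i] += gravity * dt;   // no cross-body dependencies,
        b.px[i] += b.vx[i] * dt;   // so iterations can run in any
        b.py[i] += b.vy[i] * dt;   // order, or all at once
        b.pz[i] += b.vz[i] * dt;
    }
}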

#7 dpadam450   Members   -  Reputation: 826

Posted 20 April 2006 - 02:38 PM

Ageia is at my school today. I saw a playable demo of Cell Factor. It was pretty sweet. Going later for a speech/demo. One of the cool things was shooting through a flag and having it tear to pieces.

#8 JBourrie   Members   -  Reputation: 1203

Posted 20 April 2006 - 02:49 PM

I don't think the dedicated PhysX cards have a chance in hell of capturing consumer attention. However, if Ageia signs a contract with ATI or NVidia (or both) to put their chips on graphics cards, then we will have something special. Not to mention it would be something to do with all of that extra, currently unnecessary bandwidth that PCI Express offers.

Quote:
Original post by C0D1F1ED
Besides, no game developer in his right mind would create a game that will only run on a fraction of PCs.


The Elder Scrolls IV: Oblivion
Doom 3
Far Cry

... and really any game that uses cutting-edge graphics technology only runs on a fraction of PCs. So why not physics technologies?


Quote:
Original post by dpadam450
Ageia is at my school today. I saw a playable demo of Cell Factor. It was pretty sweet. Going later for a speech/demo. One of the cool things was shooting through a flag and having it tear to pieces.


Hmmm... Ageia at your school... Bellevue... you must be a DigiPen Inmate!

Check out my new game Smash and Dash at:

http://www.smashanddashgame.com/


#9 gumpy   Members   -  Reputation: 793

Posted 20 April 2006 - 02:58 PM

Quote:
Original post by JBourrie
I don't think the dedicated PhysX cards have a chance in hell of capturing consumer attention.


the hardcore gamer with too much cash is going to pick it up without hesitation. whether it goes beyond that, who knows?

This space for rent.

#10 wolf   Members   -  Reputation: 848

Posted 20 April 2006 - 02:58 PM

everyone who has the Ageia logo on their package is already using the physics engine and should have a few effects that only run on hardware ... so what they did was make deals under which every Ageia licensee can offer specific hardware-accelerated effects that are only available with the card ...
So I would expect games like Ghost Recon and all the UE3-based titles to use it ...
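
Presumably that looks something like this in practice (a sketch with a made-up hasPhysicsHardware() query, not Ageia's actual API): gameplay physics stays identical everywhere, and the card only scales up the cosmetic simulation.

// hasPhysicsHardware() is a hypothetical stand-in for whatever
// detection call the Ageia SDK actually provides.
bool hasPhysicsHardware();

struct EffectsBudget {
    int  maxDebrisPieces;
    int  maxClothPatches;
    bool fluidsEnabled;
};

// Same gameplay either way; only the eye candy scales with the PPU.
EffectsBudget chooseBudget() {
    if (hasPhysicsHardware())
        return { 5000, 64, true };   // extra effects on the card
    return { 200, 4, false };        // modest software fallback
}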

#11 Cocalus   Members   -  Reputation: 554

Posted 20 April 2006 - 03:08 PM

From some very rough numbers on PhysX performance, it has about 100-200 times the flops of a high end single core cpu (and about 35 times the ram bandwidth of dual channel DDR400). These numbers would be reasonable for a mid level graphics card (for those who are surprised by that). Now, assuming cpus double in speed every year, in log(200)/log(2) = 7.64 years your second core will be able to replace the physx; assuming only 100 times the flops, it'll take log(100)/log(2) = 6.64 years. CPUs are dang quick but they don't touch specialized hardware. (As a side note, I'm hoping for FPGAs to become a standard part of computers at some point in the near future.)
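
If anyone wants to check that arithmetic, it's just log base 2 of the performance gap; a tiny C++ check:

#include <cmath>
#include <cstdio>

// If CPU speed doubles yearly, closing an N-times gap takes log2(N) years.
int main() {
    std::printf("200x gap: %.2f years\n", std::log2(200.0)); // ~7.64
    std::printf("100x gap: %.2f years\n", std::log2(100.0)); // ~6.64
}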

My main concern is that there's no open API for physics cards. So there's going to be a bunch of proprietary physics hardware (once some other companies get into the act), and each game is only going to use one physics engine (at least any game with multiplayer). So mainstream games that make heavy use of physics hardware will be rare (since only so many people have X type of physics hardware). Of course, one company may just conquer the market and become a monopoly. If that happens, odds are top graphics cards will be cheap by comparison.

#12 Cubed3   Members   -  Reputation: 156

Posted 20 April 2006 - 03:14 PM

Well, for one, PhysX has a software mode too. I'm hoping to use it in a game that I will be writing this summer. I like the nice fluid dynamics effects the PhysX SDK allows for.
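
Roughly how scene creation looks, going from memory of the PhysX 2.x SDK (so the exact names and enums here may not match the shipping headers): ask for a hardware scene and drop back to the software simulation if no PPU is found.

#include <NxPhysics.h>  // 2006-era Ageia PhysX SDK header

// Sketch from memory of the PhysX 2.x API -- names may differ slightly.
NxScene* createScene(NxPhysicsSDK* sdk) {
    NxSceneDesc desc;
    desc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    desc.simType = NX_SIMULATION_HW;       // prefer the PhysX card
    NxScene* scene = sdk->createScene(desc);
    if (!scene) {                          // no hardware present
        desc.simType = NX_SIMULATION_SW;   // same API, software mode
        scene = sdk->createScene(desc);
    }
    return scene;
}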

#13 outRider   Members   -  Reputation: 852

Posted 20 April 2006 - 03:20 PM

I really don't like this idea of having physics calculated off-chip, requiring local memory and data transfer and all of this other headache... it seems like the least cost effective way to do dedicated physics.

This is precisely the sort of thing that needs to be onboard the CPU, either as a dedicated IC or as some sort of PLD. The idea is really hurt by the fact that Ageia stuck it outside the CPU.

But, having said that, the mainstream GPU industry was practically birthed by dedicated gamers, so I wouldn't be surprised at PC gamers picking these things up in droves if the next Doom or Quake or Unreal put it to good use.

#14 hymerman   Members   -  Reputation: 221

Posted 20 April 2006 - 03:24 PM

I'm thinking it's a bad idea. Firstly, there's the problem that you have to cater to the lowest common denominator; games that use PhysX will only be able to use it for eye candy, not anything gameplay-changing. It's not like graphics cards at all: everybody has a graphics card, they just vary in performance, whereas the PhysX card is binary, you either have it or you don't.

Secondly, I just don't like the idea of it. I like parallelism, but splitting tasks up like this is just silly. What's next, an AI card? A scripting card? There are only so many areas you can split things into. Really, more work should be done on multi-core processors; they may not be as optimal, but they are more useful and scalable since they are generic and not tied to any one task.

#15 Nairb   Members   -  Reputation: 436

Posted 20 April 2006 - 03:30 PM

Quick question:
How are we going to use this hardware? Does it require a specialized library that won't apply to the nVidia physics enhancements coming out? Or is there already some abstraction, ala OpenGL (OpenPhys?), to handle that for us?

Cheers,
--Brian


#16 gumpy   Members   -  Reputation: 793

Posted 20 April 2006 - 03:33 PM

Quote:
Original post by Nairb
Quick question:
How are we going to use this hardware? Does it require a specialized library that won't apply to the nVidia physics enhancements coming out? Or is there already some abstraction, ala OpenGL (OpenPhys?), to handle that for us?

Cheers,
--Brian


a proprietary api, of course!

This space for rent.

#17 smitty1276   Members   -  Reputation: 560

Posted 20 April 2006 - 03:34 PM

They provide a very opengl-like library which will use the card, or fall back to software if it isn't present. At least that's my understanding based on talking to people who've used them.

#18 Cubed3   Members   -  Reputation: 156

Posted 20 April 2006 - 03:43 PM

AFAIK the PhysX API also has one of the fastest software physics engines.

#19 Prozak   Members   -  Reputation: 865

Posted 20 April 2006 - 05:39 PM

I'm using the card for my Genesis project, a self-evolving AI structure that is partially based on the real world and uses a physics simulator to model real-world mechanics for robots and their problem-solving abilities...

So I guess that even though I'm kind of in the group of people saying this might not work commercially, I am buying it after all, just not for the reasons they probably developed the card for...

#20 Symphonic   Members   -  Reputation: 313

Posted 20 April 2006 - 06:55 PM

What with nVidia teaming up with Havok to deliver their own PPU, this is sounding a lot like the early days of Glide vs. OpenGL vs. D3D v3.

I prophesy the creation of a generalized physics API (say, OpenPL), and a Microsoft DirectX physics API, DirectPhys (direct-fizz?).

Certainly highly parallel operations are no new idea, and I wouldn't be surprised if a new class of secondary processor, called, oh, I dunno... Parallel Pipe Processor, were made standard on many PCs.

Oh wait, isn't that exactly what the Cell processor is? Hmmm...

Keep an eye out for PPPs and physics engines built to take advantage of them (not to mention a whole host of other applications; can you imagine Photoshop accelerated with a PPP? it would be astoundingly fast!)
Geordi
George D. Filiotis



