
RajanSky

AGP


Hi, I was wondering why people say that AGP is overkill for games. Wouldn't it allow geometry to be transmitted to the video card more quickly, which would help to solve the problem of ugly low-polygon models? My guess is that the bottleneck is somewhere else in the graphics pipeline, but I don't really have a clue, heh. Does anyone know why AGP supposedly doesn't boost performance much? Thanks, Raj

quote:
Original post by RajanSky
I was wondering why people say that AGP is overkill for games


Those people have no clue what they are talking about. The bus transfer speed is actually one of the major bottlenecks in today's 3D applications.
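
A rough back-of-envelope calculation illustrates the point. The numbers below are illustrative assumptions, not figures from the thread: geometry streamed to the card every frame can eat a large fraction of AGP 4x's theoretical ~1 GB/s peak.

/* Back-of-envelope AGP bandwidth check (all numbers are illustrative). */
#include <stdio.h>

int main(void)
{
    const double agp4x_peak = 1066.0e6;  /* AGP 4x: 66 MHz x 4 x 4 bytes ~ 1066 MB/s */
    const double verts      = 300000.0;  /* ~100k triangles, unshared vertices */
    const double vert_size  = 32.0;      /* bytes: position + normal + texcoords */
    const double fps        = 60.0;

    double streamed = verts * vert_size * fps;  /* bytes/second if re-sent each frame */
    printf("Streaming load: %.0f MB/s (%.0f%% of AGP 4x peak)\n",
           streamed / 1.0e6, 100.0 * streamed / agp4x_peak);
    return 0;
}

That works out to roughly 576 MB/s, over half the theoretical peak, before counting textures or the usual gap between theoretical and sustained throughput.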

The limited bandwidth is only a problem when transferring large amounts of data like vertex and texture data. It's not a problem once the data has reached the graphics board and is stored in the board's own memory. So with all the memory available on today's graphics boards, couldn't you just transfer all the data you need to the card while the game loads? Then the limited bandwidth wouldn't be a problem while the actual game runs?



Real programmers don't document; if it was hard to write, it should be hard to understand
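
What Spartacus suggests is roughly what the ARB_vertex_buffer_object extension is for: upload static geometry once at load time with a static-usage hint, and the driver is free to keep it in video memory. A minimal sketch (the Vertex struct, counts, and data pointer are hypothetical):

/* At load time: upload static geometry once, hinting it will not change. */
GLuint vbo;
glGenBuffersARB(1, &vbo);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glBufferDataARB(GL_ARRAY_BUFFER_ARB,
                numVerts * sizeof(Vertex),  /* hypothetical vertex format */
                vertexData,                 /* filled while the game loads */
                GL_STATIC_DRAW_ARB);

/* Every frame: draw from the resident buffer, no re-upload over AGP. */
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (void *)0);
glDrawArrays(GL_TRIANGLES, 0, numVerts);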


quote:
Original post by Spartacus
The limited bandwidth is only a problem when transferring large amounts of data like vertex and texture data. It's not a problem once the data has reached the graphics board and is stored in the board's own memory. So with all the memory available on today's graphics boards, couldn't you just transfer all the data you need to the card while the game loads? Then the limited bandwidth wouldn't be a problem while the actual game runs?

Real programmers don't document; if it was hard to write, it should be hard to understand





That'd be impossible. Look at UT2003, for example: all the level geometry, textures, meshes, etc. that need handling would total hundreds of megabytes...

AFAIK, video boards store textures/geometry in video memory until it becomes full, then have to move some of them into system RAM to accommodate new textures.
So it is a caching system. Now, assuming you don't have an insane amount of textures/geometry in a single scene, those things should remain in video memory until you change the scene.

But in my engine, what kills the performance is the fillrate of the video board, not the AGP bandwidth...

Height Map Editor | Eternal lands
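
The caching Raduprv describes is partly observable through OpenGL's texture residency API (core since GL 1.1; the driver still owns the eviction policy). A small sketch, with textureIds and numTextures as hypothetical application state:

/* Check which textures the driver currently keeps in video memory. */
GLboolean resident[256];  /* assumes numTextures <= 256 */
glAreTexturesResident(numTextures, textureIds, resident);

for (int i = 0; i < numTextures; ++i) {
    if (!resident[i]) {
        /* Evicted to system RAM: raise its priority so the driver
           prefers to keep this texture in video memory. */
        GLclampf priority = 1.0f;
        glPrioritizeTextures(1, &textureIds[i], &priority);
    }
}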

Does the GeForce FX have a hardware tessellator at all? 'Cos then all you'd need to do is send a low-resolution mesh to the hardware, have it tessellated on the card, and then write a vertex program to displace the vertices. But I suppose that'd mean the vertex program would need some kind of map to displace from, and you can't do texture lookups in a vertex program. Dammit
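
Without texture lookups in the vertex stage, the usual workaround was to feed the displacement in as per-vertex data instead. A sketch in ARB_vertex_program assembly, embedded as a C string; putting the displacement scalar in texture coordinate set 1 is my own choice, not anything from the thread:

/* Displace each vertex along its normal by a scalar passed in texcoord 1. */
static const char *displaceVP =
    "!!ARBvp1.0\n"
    "ATTRIB pos  = vertex.position;\n"
    "ATTRIB nrm  = vertex.normal;\n"
    "ATTRIB disp = vertex.texcoord[1];\n"  /* displacement in disp.x */
    "PARAM  mvp[4] = { state.matrix.mvp };\n"
    "TEMP   p;\n"
    "MAD p, nrm, disp.x, pos;\n"           /* p = pos + nrm * disp.x */
    "MOV p.w, pos.w;\n"                    /* keep the original w    */
    "DP4 result.position.x, mvp[0], p;\n"
    "DP4 result.position.y, mvp[1], p;\n"
    "DP4 result.position.z, mvp[2], p;\n"
    "DP4 result.position.w, mvp[3], p;\n"
    "MOV result.color, vertex.color;\n"
    "END\n";

/* Loaded with glProgramStringARB(GL_VERTEX_PROGRAM_ARB,
   GL_PROGRAM_FORMAT_ASCII_ARB, strlen(displaceVP), displaceVP). */

Of course, the CPU still has to supply the displacement values, so this only moves the transform work onto the card rather than giving true on-card displacement mapping.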

As others have said, video memory fills up pretty fast, so swapping is unavoidable. Other catches: once your geometry is in VRAM, it can only be manipulated by vertex shaders; the CPU has no access to it anymore. For dynamic geometry, that can be a problem: either you require a 3D card with hardware vertex shaders (i.e. your game would not run on anything below a GeForce 3), or you stream your dynamic geometry to the card every frame (saturating the bandwidth). And while vertex shaders get more powerful with every new card generation, there are still computations beyond their capabilities (e.g. NSE solvers). For that type of geometry, you'll always need to stream the data to the card.
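
The streaming path Yann describes would look something like this with ARB_vertex_buffer_object, using a stream-usage hint since the contents are respecified every frame (buffer names and data pointers are hypothetical):

/* Every frame: re-send CPU-computed geometry, paying the AGP cost each time. */
glBindBufferARB(GL_ARRAY_BUFFER_ARB, dynamicVbo);
glBufferDataARB(GL_ARRAY_BUFFER_ARB,
                numVerts * sizeof(Vertex), NULL,  /* orphan last frame's data */
                GL_STREAM_DRAW_ARB);
glBufferSubDataARB(GL_ARRAY_BUFFER_ARB, 0,
                   numVerts * sizeof(Vertex),
                   cpuComputedVerts);             /* e.g. a CPU cloth/fluid step */
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (void *)0);
glDrawArrays(GL_TRIANGLES, 0, numVerts);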

Hairybudda: AFAIK, the GeForce FX will have hardware displacement mapping. That means a low-resolution mesh can be tessellated by the hardware according to data supplied in the form of a texture. Basically, it modifies the geometry by lookups in a texture map. Very nice. Other, non-nVidia cards already have this feature, some for quite some time now (Matrox Parhelia).

Aren't the GF3s also capable of higher-order interpolation, such as rendering directly from B(ézier) splines instead of triangles? I thought I remembered reading that somewhere.
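
For what it's worth, core OpenGL has exposed Bézier patch evaluation since 1.0 via evaluators, though whether it is hardware-accelerated is up to the driver (the GeForce 3's higher-order surface support was exposed separately, e.g. through NV_evaluators and D3D8's RT-patches). A minimal sketch with a hypothetical 4x4 control grid:

/* Render a bicubic Bezier patch with core OpenGL evaluators. */
GLfloat ctrl[4][4][3];  /* hypothetical 4x4 grid of xyz control points */

glMap2f(GL_MAP2_VERTEX_3,
        0.0f, 1.0f, 3, 4,   /* u range, stride (floats), order */
        0.0f, 1.0f, 12, 4,  /* v range, stride (floats), order */
        &ctrl[0][0][0]);
glEnable(GL_MAP2_VERTEX_3);
glMapGrid2f(20, 0.0f, 1.0f, 20, 0.0f, 1.0f);  /* 20x20 grid of samples */
glEvalMesh2(GL_FILL, 0, 20, 0, 20);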

quote:
Original post by Yann L
Hairybudda: AFAIK, the GeForce FX will have hardware displacement mapping. That means a low-resolution mesh can be tessellated by the hardware according to data supplied in the form of a texture. Basically, it modifies the geometry by lookups in a texture map. Very nice. Other, non-nVidia cards already have this feature, some for quite some time now (Matrox Parhelia).

Nice, very nice.
Looks like the £600 I have put by for a GFFX when they're released will pay off then....

What bothers me is what the implementation is going to be like. Remember what nVidia were like with register combiners? I really don't want to have to put up with cr*p like that again.

We need innovation beyond ATI and nVidia; neither of them has the spark anymore. The market is prime for the entry of a new leader, but unfortunately people will only buy what they know.
