doodo

Polygon count v texture count


Hi, I have a small question that has been in the back of my mind since I started doing 3D programming. With most of the new games coming out, a lot of emphasis has been put on pixel shader models 2.0 and 3.0. This includes bump mapping with shadowing and other new techniques. It seems a large portion of development work, and of hardware design, is focused on improving texture quality, the amount of memory available on graphics cards, and extra pipelining and optimisation. But I haven't seen any major increase in the number of polygons cards can actually push these days, even though vertex processors on the GPU take load off the CPU.

Main question: why is so much work being devoted to new textures and pixel shaders when polygon counts still need to stay relatively low? Is a high polygon count combined with a high texture count too large a computational load for the CPU or GPU? The best example I have: if you look closely at new upcoming games, you can spot very pointy, angular surfaces on models where the polygon count is low, yet the designers use a hybrid of texturing techniques that makes the model look high-poly. My understanding is that we are using textures to substitute for the polygon-crunching the GPU or CPU can't afford, and that instead we should be calculating a lot of normal maps from the polygons themselves.

It has to do with scene complexity. When Quake first came out, it was in sharp contrast with Doom in terms of the number of creatures that could be displayed at once.
As we add in more objects like trees and people, we are still looking for methods that reduce the polygon count so we can have more things happening and more distinct objects on the screen. Normal mapping is an offline method to take a 30,000-polygon creature and bake it down to a 10,000-polygon model.
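The baking step described above ends up storing the high-poly surface detail as per-texel normals. As a minimal illustration of the storage side only (not any particular tool's format), here is how a tangent-space unit normal is typically packed into and recovered from an 8-bit RGB texel:

```python
import math

def encode_normal(n):
    """Pack a unit normal (components in [-1, 1]) into an 8-bit RGB texel,
    the way common normal-map formats do: remap [-1, 1] to [0, 255]."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in n)

def decode_normal(rgb):
    """Unpack an RGB8 texel back to an approximate unit normal,
    renormalizing to undo the quantization error."""
    n = tuple(c / 255 * 2 - 1 for c in rgb)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# A flat "straight up" tangent-space normal encodes to the familiar
# light-blue color of normal maps: (128, 128, 255).
print(encode_normal((0.0, 0.0, 1.0)))
```

The quantization to 8 bits per channel is one reason baked normals are so cheap compared with real geometry: one texel replaces what would otherwise be extra vertices.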

Thanks for the answer, ZoomBoy.

I guess it's one of those questions that stares you straight in the face, but you don't notice it.

:)

From what I've seen, we can already push quite a large number of polygons, but a couple of 2048x2048 textures are enough to choke the graphics card. From that point of view, the push to improve texture handling is an understandable proposition, is it not?
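To put rough numbers on that claim (assuming uncompressed RGBA8 textures; a full mip chain adds about one third on top of the base level):

```python
def texture_bytes(width, height, bytes_per_texel=4, mipmaps=True):
    """Approximate VRAM footprint of a texture.
    A full mip chain adds roughly 1/3 of the base level's size."""
    base = width * height * bytes_per_texel
    return base * 4 // 3 if mipmaps else base

# One uncompressed 2048x2048 RGBA8 texture:
base_only = texture_bytes(2048, 2048, mipmaps=False)  # 16 MiB base level
with_mips = texture_bytes(2048, 2048)                 # ~21 MiB with mips
print(base_only, with_mips)
```

A couple of such textures eat tens of megabytes, which is a large slice of the video memory of the cards discussed here, so the "choking" is mostly bandwidth and residency, not shading cost.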

Guest Anonymous Poster
Vertex processing is no longer such an important topic. Take a look at the new GPU generations:
- unified shaders (each shader unit can run as a fragment or vertex shader => G80, Nvidia)
- vertex data, like textures, is stored in video memory

So, do some simple math:
Take an expensive fragment shader for a full-screen pass at 1024x768. It will be invoked roughly 786k times (1024 x 768 = 786,432). Nowadays fragment shaders do the same kind of thing as vertex shaders, namely processing data, so there is no real difference any longer. If a fragment shader can process 786k pixels, a vertex shader will have no problem doing the same, namely processing 786k vertices per frame.
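The arithmetic behind that estimate, spelled out (the 60 fps figure is an illustrative assumption, not from the post):

```python
# One full-screen pass at 1024x768 shades one fragment per pixel.
width, height = 1024, 768
fragments_per_frame = width * height          # 786,432, i.e. roughly 786k

# Assuming a 60 fps target, the shader throughput needed per second:
fragments_per_second = fragments_per_frame * 60

print(fragments_per_frame, fragments_per_second)
```

A unified-shader GPU that sustains this fragment rate could, by the poster's argument, devote the same ALUs to a comparable number of vertices instead.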

OK, to be honest, I have just broken it down to processing power. An additional matter to consider is that this calculation only works if you leave all the necessary vertex data in GPU memory. But games like Doom/Quake use shadow volumes, which need a lot of vertex processing on the CPU, so they limit the number of vertices to reduce the impact on the CPU.
Other matters have to be considered too. An Unreal developer once said that they had reached the practical upper polygon limit: why render a 10,000-tri model if 4 tris share one pixel on screen?
The next generation of DX10 GPUs will introduce programmable geometry, which will make displacement maps possible. When that happens, vertex shaders will be more important, and that is why you will need more of them (which will be no problem because of unified shaders).



Currently batch counts are somewhat more important than raw polygon counts: 10,000 one-triangle models are much slower to draw than one 10,000-triangle model.
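A toy cost model makes the batching point concrete. The constants below are made up for illustration (not measured figures); the shape of the result holds whenever per-draw-call CPU overhead dwarfs per-triangle GPU cost:

```python
def frame_cost_us(batches, triangles, per_batch_us=25.0, per_tri_us=0.01):
    """Toy frame-cost model in microseconds: each draw call (batch) pays a
    fixed CPU/driver overhead, each triangle a small GPU cost.
    The constants are illustrative assumptions, not benchmarks."""
    return batches * per_batch_us + triangles * per_tri_us

many_small = frame_cost_us(batches=10_000, triangles=10_000)  # 10k 1-tri draws
one_big = frame_cost_us(batches=1, triangles=10_000)          # 1 10k-tri draw
print(many_small, one_big)
```

With these assumed constants, the same 10,000 triangles cost thousands of times more when submitted one triangle per draw call, which is why engines merge geometry into large batches.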

About displacement mapping: I don't see any real use for this in real-time games except for certain special effects. Normally it is better to make an optimized model, with any displacement done in the modelling package. I think it's wrong-headed to assume that displacement mapping will have a big impact.

Guest Anonymous Poster
Displacement mapping will have a big impact:

Today, bump/normal/parallax mapping is used on consumer hardware. The result is good as long as you view the surface from the right angle; otherwise it still looks flat. And there's the problem of shadow casting: bump/normal/parallax mapping can't cast shadows (at least not in an easy and performant way).
Displacement mapping gets rid of these problems, introducing a high level of geometry detail with good LOD management and low memory consumption.

I still doubt it, because you seem to suppose that all models will be made of dynamically tessellated patches, right? That just doesn't seem like a good solution anytime soon.

For displacement mapping to work you need very high tessellation, and to me that doesn't seem any better than using high-poly meshes, because ultimately the patches need to be tessellated and sent into the pipeline. Also, it will only work on more or less regular surfaces.

Am I being pig-headed here? I do think displacement mapping could work with some kind of terrain LOD system. But you still need dynamic tessellation.



OK, I was the anonymous poster.

Let's take a look at megatextures (terrain rendering with a unique and really large texture). Such a texture consumes a few gigabytes of memory, so dynamic loading is a must-have. To improve the visual depth of the terrain you could use the following techniques:

1. Use a higher geometry tessellation level, but this consumes enormous amounts of memory for geometry data (position data, tangent-space data for normal mapping).

2. Use normal/parallax/bump mapping, but this only looks good if you view the surface from the right angle; otherwise it still looks flat.

3. Use displacement maps instead of normal/parallax/bump maps: they consume only a small amount of memory (a height value stored in a texture), cast shadows (!) and can be used with a very high level of detail.
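Back-of-the-envelope numbers for the memory comparison in points 1-3. The 32768x32768 size is an illustrative assumption for a "unique, really large" terrain texture, not a figure from any particular engine:

```python
def unique_terrain_texture_mib(texels_per_side, bytes_per_texel):
    """Uncompressed size in MiB of a square, uniquely-texeled terrain map."""
    return texels_per_side ** 2 * bytes_per_texel / 2**20

# Assumed 32768x32768 unique terrain coverage:
color_mib = unique_terrain_texture_mib(32768, 4)   # RGBA8 color: 4 GiB
height_mib = unique_terrain_texture_mib(32768, 1)  # 8-bit height map: 1 GiB
print(color_mib, height_mib)
```

This is the poster's point in miniature: a single-channel height (displacement) map is a quarter of the color data, and both are tiny next to storing explicit per-vertex positions plus tangent frames at comparable density.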

--
Ashaman

Bump self-shadowing can be emulated quite simply with horizon mapping, at least for wall textures such as the common "rough stone" texture. Even a four-direction approach did a good job for us and used only about 30% more rendering time than a parallax-mapped fullscreen pass without it. I can supply a little test app if you want to see for yourself whether it fits your aims. For my personal taste, this is enough.
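The core test in horizon mapping is cheap: a texel is self-shadowed when the light's elevation falls below a precomputed horizon angle stored per texel for a few azimuth directions. A minimal sketch of the four-direction variant (the function and data layout are illustrative, not the actual implementation mentioned above):

```python
import math

def is_self_shadowed(horizon_angles, light_azimuth, light_elevation):
    """horizon_angles: precomputed horizon elevation (radians) for 4 azimuth
    directions (index 0..3 covering the circle in 90-degree steps).
    The texel is shadowed when the light sits below the stored horizon
    toward the light's (nearest) azimuth direction."""
    direction = round(light_azimuth / (math.pi / 2)) % 4
    return light_elevation < horizon_angles[direction]

# A texel with a tall ridge toward direction 0 (high horizon there):
angles = [math.radians(40), 0.0, 0.0, 0.0]
low_sun = is_self_shadowed(angles, light_azimuth=0.0,
                           light_elevation=math.radians(20))
high_sun = is_self_shadowed(angles, light_azimuth=0.0,
                            light_elevation=math.radians(60))
print(low_sun, high_sun)
```

Per shaded fragment this is one texture fetch and one comparison, which is consistent with the modest ~30% overhead reported above versus ray-marching approaches like parallax occlusion mapping.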

Parallax occlusion mapping is a horrible waste of resources. If you have objects that need this method, you probably want real displacement mapping instead. DX10 and its limited subdivision capabilities might be able to help there.

Bye, Thomas
