Extra Overhead.

Started by
7 comments, last by PumpkinPieman 19 years, 6 months ago
What exactly should one stay away from doing, or do, to avoid any unnecessary overhead when making a game? I know of sorting polys based on opacity/texture; is there anything else?

Well, that's a huge question, but here are a few tips from me:

1) When creating your vertex buffers in DirectX, make sure to check the flags and their purpose.

2) Minimize switches between vertex buffers; hence, fill your buffers with as much as possible.

3) Keep textures to 256x256 if possible.
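For tip 3, a quick sanity check along these lines can catch oversized or non-power-of-two textures at load time. This is a minimal sketch; the helper name and the 256-texel budget are illustrative, not part of any D3D API:

```cpp
#include <cstdint>

// Illustrative helper: true if both dimensions are powers of two and
// within the budget (default 256x256, per the tip above).
bool TextureSizeOk(uint32_t w, uint32_t h, uint32_t maxDim = 256) {
    auto pow2 = [](uint32_t v) { return v != 0 && (v & (v - 1)) == 0; };
    return pow2(w) && pow2(h) && w <= maxDim && h <= maxDim;
}
```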

good luck
www.pokersniffer.org
Go to nvidia.com, click Developer, and browse their whitepapers. Usually the latest papers directly talk about the latest cards, so if you're worried about older cards you might want to check papers from a few years back. Repeat the process at ATI's site.
How about index buffers, in terms of efficiency over the standard DrawPrimitive method?
Quote:Original post by Mille

2) Minimize switches between vertex buffers; hence, fill your buffers with as much as possible.


I've never quite understood this. If you cram all your data into a smaller number of VB's then sure, you reduce the number of VB switches. However, aren't the switches more costly due to the increased size of the buffer?

I'm not disagreeing, I just don't understand the reasons.
The Trouble With Robots - www.digitalchestnut.com/trouble
If your VBs and textures don't fit into video memory, then yes, too large a VB may cause a performance hit, as the entire buffer must be swapped in and out of video memory at once. Personally, I keep a list of formats, and a list of VBs/IBs/shaders (in DX8 a shader is tied to a format) for each format. If I load an object and it fits in one of the existing VBs, I put it there. If not, I allocate a new VB, but I have a minimum size, programmable per app at initialization... I think we tend to use 5,000-6,000 verts. If you load lots of small objects, they'll share VBs. If you load a detailed 10,000-vertex mesh, it gets its own VB.

If you know your level requirements, you can better tune how you allocate space, but that's only going to be set in stone at the end of your project (or in a data pre-processing step). Until then, try to share the VB with small objects. Another problem with creating a very large VB is waste. If I changed from buffers of 6,000 vertices to buffers of 20,000 vertices, using a typical 32-byte vertex, my possible wasted space goes from 192,000 to 640,000 bytes. If I still target an 8MB or 16MB card, that's a big percentage of possibly wasted space. Cards that old will also likely be AGP 1X or 2X, or even PCI. Combine a slow bus with large buffers, and a sort that doesn't sort by VB, and you'll have a problem.
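The waste figures above work out as buffer capacity times vertex size, since in the worst case nearly the whole buffer sits idle:

```cpp
#include <cstddef>

// Worst-case idle bytes in one buffer: the whole capacity.
// These match the 6,000- vs 20,000-vertex figures in the post above.
constexpr size_t BytesPerBuffer(size_t verts, size_t bytesPerVert) {
    return verts * bytesPerVert;
}
```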

I think nVidia once suggested 1M-2M buffers are good, or ~31K vertices... but really, your optimal buffer size depends on your app's usage patterns and target hardware.
Check out this thread. It lists some of the D3D operations that are very expensive. Therefore, you should try to minimize the number of times you call these functions per frame.
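Sorting draw calls by expensive state (texture, shader, VB) before issuing them is the usual way to cut those calls down. A minimal sketch with illustrative types, counting how many texture switches a frame would incur with and without sorting:

```cpp
#include <algorithm>
#include <vector>

struct DrawItem { int texture; int mesh; };  // illustrative handles

// Returns the number of texture switches needed to draw the list.
int StateChanges(std::vector<DrawItem> items, bool sortFirst) {
    if (sortFirst)
        std::sort(items.begin(), items.end(),
                  [](const DrawItem& a, const DrawItem& b) {
                      return a.texture < b.texture;
                  });
    int changes = 0, current = -1;
    for (const auto& it : items) {
        if (it.texture != current) {  // a SetTexture-style call goes here
            ++changes;
            current = it.texture;
        }
        // ...draw the mesh with the current state...
    }
    return changes;
}
```

Interleaved textures (1, 2, 1, 2) cost a switch per draw; sorted, each texture is bound once.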
Dustin Franklin ( circlesoft :: KBase :: Mystic GD :: ApolloNL )
Quote:Original post by Namethatnobodyelsetook
I think nVidia once suggested 1M-2M buffers are good, or ~31K vertices... but really, your optimal buffer size depends on your app's usage patterns and target hardware.


Yep, in this slideset on batching, they recommend a buffer size of 2-4 MB.

However, according to this ATI document, there is a large difference between the optimal size for static buffers and the optimal size for dynamic buffers. They recommend:

Static: 1 - 4 MB
Dynamic: 256 KB - 1 MB
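As a rough sketch, those figures could be applied as a clamp when sizing a new buffer. The helper is illustrative and the bounds are just the quoted recommendations, not hard limits; real optimal sizes depend on your hardware:

```cpp
#include <cstddef>

constexpr size_t KB = 1024, MB = 1024 * KB;

// Clamps a requested buffer size into the quoted recommended range:
// static ~1-4 MB, dynamic ~256 KB-1 MB.
size_t ClampBufferSize(size_t wanted, bool dynamic) {
    size_t lo = dynamic ? 256 * KB : 1 * MB;
    size_t hi = dynamic ? 1 * MB   : 4 * MB;
    return wanted < lo ? lo : (wanted > hi ? hi : wanted);
}
```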
Dustin Franklin ( circlesoft :: KBase :: Mystic GD :: ApolloNL )
Wow, that's great information! Thanks. :)

This topic is closed to new replies.
