Is there any way to determine vertex cache size?

10 comments, last by Basiror 18 years ago
Greetings! I was wondering if it is possible to determine the vertex cache size. I don't believe ID3DXMesh::Optimize considers the real vertex cache; I think it assumes a value of 16 (not 100% sure about that). Anyway, for the NVIDIA GeForce 6800 I believe the cache size is around 24 vertices or more. Is there any way to query these values?
I think the canonical values are 16 for the GeForce 2 and 4 MX, and 24 for anything higher. The problem is that these values almost certainly depend on your vertex format, and the cache is probably sized in bytes rather than vertices. There's no direct way to query the size; IIRC you can find it out indirectly through some clever tricks, but I don't know how offhand.
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
Using Direct3D 9, you can issue a D3DQUERYTYPE_VCACHE query to retrieve a D3DDEVINFO_VCACHE structure.
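For reference, a minimal sketch of issuing that query might look like the following (assuming you already have a valid IDirect3DDevice9* and with error handling trimmed; many drivers simply fail the CreateQuery call, so treat the result as a hint rather than gospel):

```cpp
// Sketch: issue a D3DQUERYTYPE_VCACHE query against an existing device.
// D3DDEVINFO_VCACHE reports, among other fields, CacheSize and the
// replacement policy (OptMethod).
#include <d3d9.h>

bool QueryVertexCache(IDirect3DDevice9* device, D3DDEVINFO_VCACHE* out)
{
    IDirect3DQuery9* query = NULL;
    if (FAILED(device->CreateQuery(D3DQUERYTYPE_VCACHE, &query)))
        return false;  // driver does not expose vertex cache info

    query->Issue(D3DISSUE_END);
    while (query->GetData(out, sizeof(*out), D3DGETDATA_FLUSH) == S_FALSE)
        ;  // spin until the data is available

    query->Release();
    return true;
}
```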

Just one note: if you use vertex-cache-optimized tristrips, the cache size required to produce a further noticeable speed increase grows exponentially, so cache sizes beyond 24-32 entries won't change that much anymore in the near future.
http://www.8ung.at/basiror/theironcross.html
Not that it's particularly useful right now, but for trivia...

In D3D10 there's ID3D10Device::CheckVertexCache(), which in theory returns the information you're after. But I've only been able to use it on the RefRast, which apparently doesn't have a vertex cache...

I suppose it's about the same level of information that Muhammad posted, but strikes me as being a bit easier to get hold of [smile]

Cheers,
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Are you unable to run a little benchmark with each differently optimized mesh to find the fastest variation?
A vertex cache size query can lie, and even if you know the true value, you still need to prove that your optimizations actually help.

Omae Wa Mou Shindeiru

Note that Radeons have smaller vertex caches than NVIDIA cards; on the order of 12 entries IIRC.

Quote:if you use vertex cache optimized tristrips


That doesn't even make sense, unless you're restarting your tri-strips A LOT, which generates tons of degenerate triangles.

A well sorted triangle list will outperform your typical triangle strip on modern cards, because a triangle strip will require 1.0 transformed vertices per triangle generated, whereas a well-cached triangle list (on a regular mesh) can get close to 0.6 transformed vertices per triangle generated. Strips are not worth the trouble on modern PC hardware, and may actually hurt performance.
enum Bool { True, False, FileNotFound };
Quote:
That doesn't even make sense, ...


Can a triangle in a strip end up reusing older vertices from the strip, ones that would still be in the cache if reused soon enough?
There aren't many ways to optimize a tri-strip without restarting it.
The degenerate triangles cost you nothing, since their vertices have already been transformed and stored in the cache.

Have a look at this paper:
http://research.microsoft.com/~hoppe/tvc.pdf

With vertex-cache-optimized strips you get a miss rate of ~0.5 vertices per triangle.

In other words, you only transform 0.5 vertices per triangle in the average case.

There's also a PowerPoint presentation by Hoppe on the MS Research page that describes the individual cache strategies and their influence on tri-strips,
e.g. LRU and FIFO.
http://www.8ung.at/basiror/theironcross.html
You could take a look at Tom's little DirectX FAQ. He's got some interesting things to say about vertex caches, and Direct3D in general.

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.

This topic is closed to new replies.
