Vertex Buffer Locking on ATI and NVidia??

6 comments, last by Evil Steve 18 years, 1 month ago
This is a strange one. I'm creating one dynamic vertex buffer and one dynamic index buffer (both with D3DUSAGE_DYNAMIC and D3DUSAGE_WRITEONLY). I'm using them for my particle system and updating them with this method:

1) The first write always uses D3DLOCK_DISCARD to update the buffers.
2) While the buffers have not been filled, I use D3DLOCK_NOOVERWRITE to update them (I keep track of the vertex offset so my indices will be correct as I add them).
3) When the buffers are about to overflow on the next update, I lock with D3DLOCK_DISCARD and continue the loop.

I always render only the portion of the buffers that was just filled. Now, this works perfectly as expected on my Nvidia GeForce Go 6600 card, but it displays almost nothing (except for some flickering) on my friends' ATI 9600, 9700, and 9800 cards. I'm not using any sort of shader that would cause a discrepancy here, and I can't figure this one out. Any ideas?
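In case the description isn't clear, here's a rough sketch of the locking logic for the vertex buffer (simplified; the names m_pVB, m_vertexOffset, ParticleVertex, MAX_VERTS and vertsToWrite are just illustrative, not my actual code). The index buffer is handled the same way:

```cpp
// m_vertexOffset is initialised to MAX_VERTS, so the very first Lock takes the discard path.
ParticleVertex* pData = NULL;
DWORD lockFlags;

if (m_vertexOffset + vertsToWrite > MAX_VERTS)
{
    // About to overflow: discard (orphan) the buffer and start again from the beginning.
    m_vertexOffset = 0;
    lockFlags = D3DLOCK_DISCARD;
}
else
{
    // Still room left: append without touching data the GPU may still be reading.
    lockFlags = D3DLOCK_NOOVERWRITE;
}

HRESULT hr = m_pVB->Lock(m_vertexOffset * sizeof(ParticleVertex),
                         vertsToWrite   * sizeof(ParticleVertex),
                         (void**)&pData,
                         lockFlags);
if (SUCCEEDED(hr))
{
    // ... write vertsToWrite vertices into pData ...
    m_pVB->Unlock();
}

// Render only the range [m_vertexOffset, m_vertexOffset + vertsToWrite), then advance.
m_vertexOffset += vertsToWrite;
```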
I can offer two suggestions :)

1) Try using a standard lock on the buffers. Instead of using No Overwrite and Discard locks, use locks with no flags at all. This will be extremely slow, but whether or not it works could help identify what's causing the problem - the locking or something else.

2) A good suggestion if you run into differences between ATI and NVidia is to check the REFRAST. In many cases, one of the two handles a mistake you've made more gracefully than it's supposed to, which makes the bug harder to find. The REFRAST is the standard, so it's always a good place to start from.
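Something along these lines, roughly (a sketch only - pVB, pD3D, hWnd and d3dpp stand in for whatever your app already uses):

```cpp
// (1) Plain lock, no flags - slow, but it takes DISCARD/NOOVERWRITE out of the equation.
void* pData = NULL;
if (SUCCEEDED(pVB->Lock(0, 0, &pData, 0)))   // 0, 0 locks the whole buffer
{
    // ... fill the buffer ...
    pVB->Unlock();
}

// (2) Create the device against the reference rasterizer instead of the HAL,
// so you can see what the "correct" output is supposed to look like.
IDirect3DDevice9* pRefDevice = NULL;
HRESULT hr = pD3D->CreateDevice(D3DADAPTER_DEFAULT,
                                D3DDEVTYPE_REF,   // instead of D3DDEVTYPE_HAL
                                hWnd,
                                D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                                &d3dpp,
                                &pRefDevice);
```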

Hope this helps.
Sirob Yes.» - status: Work-O-Rama.
Quote:Original post by sirob
2) A good suggestion if you run into differences between ATI and NVidia is to check the REFRAST.
I 2nd this suggestion - saved my bacon just a few days ago [grin]

One other thing to bear in mind is that, traditionally, Nvidia hardware has been much more relaxed about Draw**() call parameters (specifically the complexities of DrawIndexedPrimitive()), whereas ATI has always been much closer to the specification.

The reference rasterizer should indicate which is the correct one though. Work from there!

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

A third thing to check is to install the debug runtime, turn up debug spew to the max, and read everything that comes out of the "debug output" window when running in the debugger. Might as well turn on the "break on error" feature, too, so you know you don't get any DX errors.
enum Bool { True, False, FileNotFound };
Got it! Apparently, adding the base vertex to all of my indices isn't the same as simply specifying it in the DrawIndexedPrimitive call... because the latter works fine!
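In case anyone hits the same thing later: the indices in the index buffer are now written relative to the start of the batch, and the batch's vertex offset is passed as BaseVertexIndex instead of being added to every index. Roughly (the batch* names are just illustrative):

```cpp
// Index values in the index buffer run 0..vertsInBatch-1, relative to the batch.
HRESULT hr = device->DrawIndexedPrimitive(
    D3DPT_TRIANGLELIST,
    batchFirstVertex,   // BaseVertexIndex: where this batch starts in the vertex buffer
    0,                  // MinIndex: smallest index value used in this call
    vertsInBatch,       // NumVertices: number of vertices the indices span
    batchFirstIndex,    // StartIndex: where this batch starts in the index buffer
    trisInBatch);       // PrimitiveCount: triangles in this batch
```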

Thanks everyone!
Damn, too late. nVidia drivers ignore a bunch of the parameters to Draw[Indexed]Primitive, whereas the ATi ones don't. The result being, if you get the parameters wrong, it works on an nVidia card, but not on an ATi...
Quote:Original post by Evil Steve
Damn, too late. nVidia drivers ignore a bunch of the parameters to Draw[Indexed]Primitive, whereas the ATi ones don't. The result being, if you get the parameters wrong, it works on an nVidia card, but not on an ATi...

Steve, which parameters is nVidia ignoring? I use every DIP parameter except for MinIndex - it seems like nothing would work if they ignored any of the others?
I seem to remember that it's MinIndex and NumVertices (off the top of my head) - the ones that are used for optimizing performance. I'd guess that ATi uses those values to pretransform some or all of the vertices, and nVidia just does it on the fly, so it doesn't need them.
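For illustration (made-up names, not from anyone's actual code): MinIndex and NumVertices just describe the range of index values - relative to BaseVertexIndex - that the call will actually read, so a driver that pretransforms vertices knows exactly which slice of the vertex buffer it needs.

```cpp
// batchIndices points at the indexCount WORD index values this call will use.
UINT minIndex = batchIndices[0];
UINT maxIndex = batchIndices[0];
for (UINT i = 1; i < indexCount; ++i)
{
    if (batchIndices[i] < minIndex) minIndex = batchIndices[i];
    if (batchIndices[i] > maxIndex) maxIndex = batchIndices[i];
}

device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,
                             baseVertex,
                             minIndex,                  // lowest index value referenced
                             maxIndex - minIndex + 1,   // vertices spanned by those indices
                             startIndex,
                             indexCount / 3);           // triangle list
```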

