DrawIndexedPrimitive & ATI driver

I had a similar problem with ATI cards not drawing some indexed textured primitives. I seem to remember I had to explicitly specify buffer offsets or something to assist the card, whereas this could be ignored on NVIDIA or Intel cards.
I can probably chase up the change required if it seems like a similar problem.
Sounds promising - it would be great if you could post this change.
I found when drawing indexed primitives the minVertexIndex parameter cannot be left at zero with ATI cards for some reason. NVIDIA and Intel seem to work fine, however. When I left this at zero, the ATI cards were missing polygons all over the place.

So your call "->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, m_VertexBuffer.m_NumVertices, index0, numIndices/3);"

must change to a variation like

" ->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, m_MinIndexNumber, m_VertexBuffer.m_NumVertices, index0, numIndices/3); "

See the MSDN entry:

"The minVertexIndex and numVertices parameters specify the range of vertex indices used for each call to DrawIndexedPrimitives. These vertex indices are used to optimize vertex processing of indexed primitives by processing a sequential range of vertices prior to indexing into them. Indices used during this call cannot reference any vertices outside this range.
http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.graphicsdevice.drawindexedprimitives.aspx "
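For illustration, here's a minimal sketch of deriving those two parameters by scanning the index data before the draw call. It assumes 16-bit indices and a CPU-side copy of the index data; the function and parameter names are made up for the example:

// Sketch: derive MinIndex and NumVertices from the actual index data so the
// draw call describes exactly the vertex range the indices reference.
#include <d3d9.h>
#include <algorithm>

void DrawGroupStrict(IDirect3DDevice9* device,
                     const unsigned short* indices, // CPU copy of the index buffer
                     UINT startIndex, UINT numIndices)
{
    // Find the smallest and largest vertex index actually referenced.
    UINT minIndex = 0xFFFF, maxIndex = 0;
    for (UINT i = startIndex; i < startIndex + numIndices; ++i)
    {
        minIndex = std::min<UINT>(minIndex, indices[i]);
        maxIndex = std::max<UINT>(maxIndex, indices[i]);
    }

    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,
                                 0,                       // BaseVertexIndex
                                 minIndex,                // MinIndex
                                 maxIndex - minIndex + 1, // NumVertices
                                 startIndex,              // StartIndex
                                 numIndices / 3);         // PrimitiveCount
}

Computed this way, MinIndex and NumVertices describe exactly the range the documentation requires, rather than the whole buffer.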
I experimented with different start vertex and start index values before. If I remember correctly, it delayed the occurrence of the bug or shifted it to other triangles, but it wasn't reliable behavior.

Nevertheless, I reinstalled the most recent ATI driver and tried what you suggested: added 100 vertices at the beginning for padding, changed the index buffer accordingly, and then rendered with minVertexIndex = 100. Exactly the same behavior as before. How exactly did you do it?

If all else fails, one thing seems to do the job: using DrawPrimitive instead of DrawIndexedPrimitive.
Which should be available from the options menu through "Enable embarrassing ATI pampering" ;)
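For reference, the de-indexing itself is simple enough. A minimal sketch (Vertex, srcVertices, and indices are hypothetical names, not from the code above):

#include <vector>

struct Vertex { float x, y, z, nx, ny, nz, u, v; };

// Expand an indexed triangle list into a flat vertex array so it can be
// drawn with DrawPrimitive instead of DrawIndexedPrimitive.
std::vector<Vertex> DeIndex(const std::vector<Vertex>& srcVertices,
                            const std::vector<unsigned short>& indices)
{
    std::vector<Vertex> flat;
    flat.reserve(indices.size());
    for (unsigned short i : indices)
        flat.push_back(srcVertices[i]); // duplicates shared vertices
    return flat;
}

// Then: device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, flat.size() / 3);

The obvious cost is that shared vertices get duplicated, so the flat buffer is larger than the indexed one.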
And what card is this?
Are you padding vertex buffers to 32 bytes?


L. Spiro


Card is an ATI Mobility HD 2600.

The vertex buffers are XYZ|NORMAL|UV, i.e. 32 bytes per vertex (layout sketched below).
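For reference, that layout corresponds to something like this (12 + 12 + 8 = 32 bytes; MeshVertex is an illustrative name):

#include <d3d9.h>

struct MeshVertex
{
    float x, y, z;    // position, 12 bytes (D3DFVF_XYZ)
    float nx, ny, nz; // normal,   12 bytes (D3DFVF_NORMAL)
    float u, v;       // texcoord,  8 bytes (D3DFVF_TEX1)
};
static_assert(sizeof(MeshVertex) == 32, "expected a 32-byte stride");

const DWORD MeshFVF = D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1;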
Yeah - shouldn't have to pander to these cards.

My graphics structure may be a little different to yours.

I ended up grouping the textured polygons in the index buffer array and then setting up another ordinary integer array which pointed to the first element of the index buffer array for each texture group, one per primitive call (sketched below). Kind of messy, but it worked 100%. Another issue was that some cards did not support 32-bit index buffers (older Intel graphics), which was a pain for larger index groups.
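A rough sketch of that arrangement, with illustrative names (textures, groupStart, etc. are not from the original code):

#include <d3d9.h>
#include <vector>

// Draw each texture group out of one shared index buffer; groupStart holds
// the first index-buffer element of each group.
void DrawTextureGroups(IDirect3DDevice9* device,
                       const std::vector<IDirect3DTexture9*>& textures,
                       const std::vector<UINT>& groupStart,
                       UINT numVertices,   // vertices in the shared vertex buffer
                       UINT totalIndices)
{
    for (size_t g = 0; g < textures.size(); ++g)
    {
        UINT first = groupStart[g];
        UINT last  = (g + 1 < groupStart.size()) ? groupStart[g + 1] : totalIndices;

        device->SetTexture(0, textures[g]);
        // MinIndex/NumVertices cover the whole buffer here; they could be
        // tightened per group as in the earlier sketch.
        device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, numVertices,
                                     first, (last - first) / 3);
    }
}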

Later on, I decided to get rid of indexed primitives altogether; I just grouped the elements directly into a large vertex buffer and then drew the different parts with DrawPrimitive using offsets and lengths as required (see the sketch below).
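In sketch form, that reduces each texture group to a start offset and a length (DrawRange and the names below are illustrative):

#include <d3d9.h>

struct DrawRange { UINT startVertex; UINT vertexCount; };

// Each texture group is a contiguous run of vertices in one big vertex
// buffer; no index buffer involved.
void DrawGroupsNonIndexed(IDirect3DDevice9* device,
                          const DrawRange* ranges, size_t numRanges)
{
    for (size_t g = 0; g < numRanges; ++g)
    {
        device->DrawPrimitive(D3DPT_TRIANGLELIST,
                              ranges[g].startVertex,      // offset into the big VB
                              ranges[g].vertexCount / 3); // triangles in this group
    }
}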
It's crazy: there is this awesome hardware with hundreds of features you usually don't even use, and then one has to struggle with these things.

Using a single big index & vertex buffer is actually a good move. I remember there was an NVIDIA talk where they suggested doing this (DX9 was hot and new at the time). But then again, risking compatibility issues with 32-bit index buffers would make me a bit nervous. Yes, I can imagine it is a pain to arrange support for 16-bit index buffers in such a scenario.

Fortunately, in my case, using DrawPrimitive as a cure is not a big deal. It will probably use up a little more GPU RAM and be a little slower because vertex caching becomes useless.
It's easy enough to support 16-bit indices with the "one huge buffer for everything" scheme. You just logically partition the buffer into groups of 64K vertices, then call SetStreamSource with the appropriate offset into the buffer as required (sketch below). Yeah, it's a few more SetStreamSource calls. I don't know about D3D9 in this regard, but with 10 and 11, just changing the offset in this manner has less overhead than a full change of the buffer (and even then the overhead is low enough that you'd need a quite extreme example for it to register on perf graphs).
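A rough sketch of the idea, assuming stream offset is available (in D3D9 this relies on the stream-offset cap discussed in the next paragraph; the names are illustrative, and the stride matches the 32-byte vertex discussed earlier):

#include <d3d9.h>

const UINT kStride = 32; // bytes per vertex

// Rebind the same big vertex buffer at a byte offset so that 16-bit,
// chunk-relative indices are enough to address any vertex in the chunk.
void DrawChunk(IDirect3DDevice9* device,
               IDirect3DVertexBuffer9* bigVB,
               UINT chunkFirstVertex,  // where this chunk starts in the big buffer
               UINT chunkNumVertices,  // <= 65536, so 16-bit indices suffice
               UINT startIndex, UINT numIndices)
{
    device->SetStreamSource(0, bigVB, chunkFirstVertex * kStride, kStride);
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, chunkNumVertices,
                                 startIndex, numIndices / 3);
}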

Some older Intels will report that they don't support stream offset but they're (half) lying - what's happening is that they don't support it in hardware, but then again they don't support any of the per-vertex pipeline in hardware either. In practice if you CreateDevice with software vertex processing you're going to get SM3 capabilities in the per-vertex pipeline, which in turn means that you'll have stream offset too. True, it's software emulated, but it's no worse than the rest of software vertex processing.

(As a curious aside, I wonder whether anyone has ever tried to see if you'd also get instancing under these conditions.)
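In case it's useful, here's a rough sketch of that caps check with the software-vertex-processing fallback described above (error handling omitted):

#include <d3d9.h>

IDirect3DDevice9* CreateDeviceWithStreamOffset(IDirect3D9* d3d, HWND hwnd,
                                               D3DPRESENT_PARAMETERS* pp)
{
    D3DCAPS9 caps;
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    // Use hardware vertex processing when stream offset is supported in
    // hardware; otherwise fall back to software VP, which emulates it.
    DWORD vp = (caps.DevCaps2 & D3DDEVCAPS2_STREAMOFFSET)
             ? D3DCREATE_HARDWARE_VERTEXPROCESSING
             : D3DCREATE_SOFTWARE_VERTEXPROCESSING;

    IDirect3DDevice9* device = NULL;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd, vp, pp, &device);
    return device;
}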



Quoting the earlier post: "I found when drawing indexed primitives the minVertexIndex parameter cannot be left at zero with ATI cards for some reason. [...]"
It could be that your MinIndex/NumVertices values do have to be correct by strict D3D standards, but these ATI cards/drivers are the only ones to actually be so strict... e.g. it could be that other drivers are simply tolerating incorrect values when they shouldn't.
IIRC:
- MinIndex should be the minimum value in the specified range of your index buffer,
and either:
- MinIndex + NumVertices should be one past the maximum value in that range of your index buffer, or...
- NumVertices should be the number of unique values in that range of your index buffer... It's been a while since I did DX9.

However, most drivers just treat these values as a hint, or simply ignore them, so it doesn't matter if you pass in wrong values.
