I know that this is a problem stemming from my own pathetic ignorance of dynamic vertex buffers, but I've tried a hundred different things and searched these forums for answers and can't seem to get this to work.
I'm trying to fix my missile trails, which use a dynamically updated vertex buffer to draw a sort of "ribbon" behind each missile. The code has worked beautifully on every nVidia card for a while now, and I'm trying to fix a crash-to-desktop problem I'm having on most ATI cards.
Here is the code that works on nVidia cards, starting with the VB creation:
// Create the vertex buffer.
if (FAILED(g_pD3Ddevice->CreateVertexBuffer(MAXAMMO * MAX_VAPORTRAIL_SECTIONS * 2 * sizeof(VAPOR_VERTEX),
                                            0, D3DFVF_VAPORVERTEX, D3DPOOL_DEFAULT, &pr_vaporVB)))
{
    MessageBox(NULL, "Vapor trail vertex buffer creation failure.", "Error", MB_OK);
}
And now the per-frame code that rebuilds the vertices and fills the VB:
// Create all of the vertices for the vertex buffer
for (i = 0; i < MAXAMMO; i++)
{
    if (pr_is_active)
    {
        for (int c = 0; c < pr_maxVaporTrailSections; c++)
        {
            D3DXVECTOR3 tempvector, laservert0, laservert1;
            D3DXVECTOR3 laserEnd0;
            D3DXVECTOR3 laserEnd1;

            if (c == 0)
            {
                laserEnd0 = tail_positions[c + 1];
                laserEnd1 = tail_positions[c];
            }
            else
            {
                laserEnd0 = tail_positions[c];
                laserEnd1 = tail_positions[c - 1];
            }

            D3DXVECTOR3 firstPart = laserEnd0 - laserEnd1;
            D3DXVECTOR3 lastPart  = laserEnd0 - cameraPos;
            D3DXVec3Cross(&tempvector, &firstPart, &lastPart);
            D3DXVec3Normalize(&tempvector, &tempvector);
            laservert0 = laserEnd0 + (tempvector * 1);
            laservert1 = laserEnd0 - (tempvector * 1);

            // Two vertices per section: one on each edge of the ribbon.
            int idx = (pr_maxVaporTrailSections * i + c) * 2;
            vaporVerts[idx].x       = laservert0.x;
            vaporVerts[idx].y       = laservert0.y;
            vaporVerts[idx].z       = laservert0.z;
            vaporVerts[idx].diffuse = 0xFFFFFFFF;
            vaporVerts[idx].tu      = 0.0f;
            vaporVerts[idx].tv      = (float)(1.0f / (pr_maxVaporTrailSections - 1) * c);
            vaporVerts[idx + 1].x       = laservert1.x;
            vaporVerts[idx + 1].y       = laservert1.y;
            vaporVerts[idx + 1].z       = laservert1.z;
            vaporVerts[idx + 1].diffuse = 0xFFFFFFFF;
            vaporVerts[idx + 1].tu      = 1.0f;
            vaporVerts[idx + 1].tv      = (float)(1.0f / (pr_maxVaporTrailSections - 1) * c);
        }
    }
}
// Fill the vertex buffer.
VOID* pVertices;
if (FAILED(pr_vaporVB->Lock(0, sizeof(vaporVerts), (BYTE**)&pVertices, D3DLOCK_DISCARD)))
{
    MessageBox(NULL, "Billboard vertex buffer locking failure.", "Error", MB_OK);
}
memcpy(pVertices, vaporVerts, sizeof(vaporVerts));
pr_vaporVB->Unlock();
It's that blasted memcpy line that crashes on almost every ATI card I've tested, yet this code runs PERFECTLY on every nVidia card I've tried. I know ATI cards are more finicky about getting the parameters of DirectX calls exactly right, but I can't find where I'm going wrong. None of the DirectX calls report failure, and still that memcpy crashes the game to the desktop. I could be doing this whole dynamic VB thing in completely the wrong way... Any ideas?