#### Archived

This topic is now archived and is closed to further replies.

# rendering a simple mesh (problems with GF2)


## Recommended Posts

My program loads a heightmap and converts it into a vertex buffer and an index buffer (you know what I mean (c; ). On my Radeon 8500 the rendering works fine, but on a GeForce2 (MX, GTS Pro, Ultra) all I get is a flat ceiling with the same size and coloring as the terrain. Here is some of the code (the data is loaded correctly):
```cpp
// Heightmap size is 257 * 257
int nScanlineVertexCount = 257;
int nScanlineQuadCount = nScanlineVertexCount - 1;        // quads per row
int nScanlineTriangleCount = nScanlineQuadCount * 2;
int nScanlineIndexCount = nScanlineTriangleCount * 3;     // indices per quad row

int nVertexTotal = nScanlineVertexCount * nScanlineVertexCount;
int nIndexTotal = nScanlineIndexCount * nScanlineQuadCount;

CUSTOMVERTEX2* Locked;
DWORD* LockedIndex;
m_pd3dDevice->CreateVertexBuffer(nVertexTotal * sizeof(CUSTOMVERTEX2), 0,
    D3DFVF_CUSTOMVERTEX2, D3DPOOL_MANAGED, &m_Heightmap);
// Buffer sizes are in bytes, so scale the index count by sizeof(DWORD)
m_pd3dDevice->CreateIndexBuffer(nIndexTotal * sizeof(DWORD), 0,
    D3DFMT_INDEX32, D3DPOOL_MANAGED, &m_HeightmapIndex);
m_Heightmap->Lock(0, 0, (BYTE**)&Locked, 0);
m_HeightmapIndex->Lock(0, 0, (BYTE**)&LockedIndex, 0);
ZeroMemory(Locked, nVertexTotal * sizeof(CUSTOMVERTEX2));
ZeroMemory(LockedIndex, nIndexTotal * sizeof(DWORD));

for (int y = 0; y < nScanlineVertexCount; y++) {
    for (int x = 0; x < nScanlineVertexCount; x++) {
        int offset = y * nScanlineVertexCount + x;
        Locked[offset].Diffuse = D3DCOLOR_COLORVALUE((x % 3) / 2.0f, 1.0f, (y % 3) / 2.0f, 1.0f);
        Locked[offset].normal = D3DXVECTOR3(1.0f, 1.0f, 0.0f);
        Locked[offset].position = D3DXVECTOR3((float)x - nScanlineVertexCount / 2,
                                              (float)-y + nScanlineVertexCount / 2,
                                              (float)buffer[offset] / 4);
        Locked[offset].u = x / 64.0f;
        Locked[offset].v = y / 64.0f;
    }
}

D3DXVECTOR3 v, v1, v2;
int indices[4];
for (int y = 0; y < nScanlineQuadCount; y++) {
    for (int x = 0; x < nScanlineQuadCount; x++) {
        indices[0] = y * nScanlineVertexCount + x;
        indices[1] = y * nScanlineVertexCount + x + 1;
        indices[2] = (y + 1) * nScanlineVertexCount + x;
        indices[3] = (y + 1) * nScanlineVertexCount + x + 1;
        int base = (y * nScanlineQuadCount + x) * 6;
        LockedIndex[base]     = indices[0];
        LockedIndex[base + 1] = indices[1];
        LockedIndex[base + 2] = indices[2];
        LockedIndex[base + 3] = indices[3];
        LockedIndex[base + 4] = indices[2];
        LockedIndex[base + 5] = indices[1];
        // Face normal of the quad's first triangle, stored at its first vertex
        Locked[indices[0]].normal = *D3DXVec3Cross(&v,
            D3DXVec3Subtract(&v1, &Locked[indices[1]].position, &Locked[indices[0]].position),
            D3DXVec3Subtract(&v2, &Locked[indices[2]].position, &Locked[indices[0]].position));
    }
}

m_Heightmap->Unlock();
m_HeightmapIndex->Unlock();
```

And the render code:
```cpp
m_pd3dDevice->SetStreamSource(0, m_Heightmap, sizeof(CUSTOMVERTEX2));
m_pd3dDevice->SetIndices(m_HeightmapIndex, 0);
m_pd3dDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 257 * 257, 0, 256 * 256 * 2);
```

I am using the Common Files framework. The init code is in InitDeviceObjects() and the render code is in Render(), between the BeginScene()/EndScene() pair. The catch is that this code produces a perfect image on my Radeon, while the (flat!) ceiling only shows up on a GF2. I also tried splitting the rendering into two DrawIndexedPrimitive() calls, so that the primitive count would stay within MaxPrimitiveCount, but it didn't change anything. How can I avoid this?

Thanks in advance,
Oregon Ghost

Wenn NULL besonders gross ist, ist es fast schon wie ein bisschen eins ;c)
If NULL is very big, it is almost like a little ONE.

##### Share on other sites
I didn't really bother checking the code, since I'm pretty sure the problem is that you're trying to access more vertices than your card can handle. The GeForce2 can usually address 65535 vertices per index buffer; you can query that from the device caps. So break things up into 2-3 buffers if you really need to render everything. Usually some basic frustum culling will get the number of vertices used well below 65535 anyway.

T

--
MFC is sorta like the Swedish police... It's full of crap, and nothing can communicate with anything else.

##### Share on other sites
Sorry, double post.

##### Share on other sites
Yes, I thought about that too. But:
1. My card and the GF2 both have a MaxPrimitiveCount of 65536, yet on my Radeon it works correctly.
2. I split the DrawIndexedPrimitive() call into two, each rendering exactly 65536 triangles. That doesn't make a difference on any card.

The funny thing is that I completely rewrote this code, and it now seems to look exactly like the code above, but now I get a ceiling on my Radeon too ;c)

I didn't want to start with frustum culling until I was sure the simple render code works on every card, but I think you're right. I'll give it a try ;c)

Oregon Ghost
Wenn NULL besonders gross ist, ist es fast schon wie ein bisschen eins ;c)

if NULL is very big, it is almost like a little one.


##### Share on other sites
If you want to make sure the code is correct, have you tried it with a smaller heightmap? Try a 64*64 heightmap or so; if that renders correctly, it's probably due to the vertex limits.
Or you might enable a simpler kind of culling, distance-based for example: just draw the polygons closer to you than some fixed distance. It will look like shit, obviously, but it will tell you how many vertices and polys it takes before things mess up.

T

--
MFC is sorta like the Swedish police... It's full of crap, and nothing can communicate with anything else.