
Number of vertices and framerate


I am using Direct3D 9 to view Direct3D models in .x format. When I draw a simple model with a small vertex count I get 800 fps, but I made a model with 70,000 vertices, and when I draw it I get 15 fps. So please, can you tell me roughly how many vertices my models should have to get a decent fps, or should I do something with the render states? I tried modifying the presentation interval to the values ONE, IMMEDIATE, and DEFAULT; no big difference.

My specs: GeForce 8400 GS, 4 GB RAM, dual-core 2.5 GHz with 2 MB cache. So what??? Please help.

We can't know if there's something wrong in your program just like that. You should try loading the model in "DirectX Viewer" (see Tools section in the SDK). This way you will see if the problem is the model or your application.

I think your video card should be able to handle much more.

You should use immediate.

70,000 vertices is a lot for one mesh, but it still shouldn't drop that much with your specs... there may be something wrong with your code, but who knows what?

Quote:
Original post by Dunge
We can't know if there's something wrong in your program just like that. You should try loading the model in "DirectX Viewer" (see Tools section in the SDK). This way you will see if the problem is the model or your application.

I think your video card should be able to handle much more.

You should use immediate.

I can view it perfectly with the DirectX Viewer, but the viewer uses a cull mode and I do not use culling in my program.
OK, here is all my Direct3D init code:
[source lang = cpp]

d3d = Direct3DCreate9(D3D_SDK_VERSION);
ZeroMemory(&d3dpp, sizeof(d3dpp));
d3dpp.Windowed = FALSE;
d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;
d3dpp.hDeviceWindow = hWnd;
d3dpp.BackBufferFormat = D3DFMT_X8R8G8B8;
d3dpp.BackBufferCount = 1;
d3dpp.BackBufferWidth = SCREEN_WIDTH;
d3dpp.BackBufferHeight = SCREEN_HEIGHT;
d3dpp.EnableAutoDepthStencil = TRUE;
d3dpp.AutoDepthStencilFormat = D3DFMT_D16;
d3dpp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;
///////////
d3ddev->CreateDepthStencilSurface(SCREEN_WIDTH,
                                  SCREEN_HEIGHT,
                                  D3DFMT_D16,
                                  D3DMULTISAMPLE_NONE,
                                  0,
                                  TRUE,
                                  &z_buffer,
                                  NULL);


d3ddev->SetRenderState(D3DRS_LIGHTING, TRUE);
d3ddev->SetRenderState(D3DRS_ZENABLE, TRUE);
d3ddev->SetRenderState(D3DRS_AMBIENT, D3DCOLOR_XRGB(150, 150, 150));
d3ddev->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
///////////////////////
// Then I render every frame like this:
d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
d3ddev->Clear(0, NULL, D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);

D3DXMatrixPerspectiveFovLH(&matProjection,
                           D3DXToRadian(45),
                           (float)SCREEN_WIDTH / (float)SCREEN_HEIGHT,
                           1.0f,
                           2000.0f);

d3ddev->SetTransform(D3DTS_PROJECTION, &matProjection);

d3ddev->BeginScene();

DrawMesh1();

d3ddev->EndScene();

d3ddev->Present(NULL, NULL, NULL, NULL);
[/source]
You only need to call clear once:
d3ddev->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);

From the code you have posted there is nothing wrong. But you omitted the interesting part, which is how you load and draw your mesh. Maybe you do not optimize the mesh after loading and end up with a bad vertex cache hit ratio, maybe you are drawing the mesh manually triangle by triangle... there are lots of ways to keep your graphics card from reaching top performance with just a few lines of code [grin]

Quote:
Original post by Waterwalker
You only need to call clear once:
d3ddev->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);

From the code you have posted there is nothing wrong. But you omitted the interesting part, which is how you load and draw your mesh. Maybe you do not optimize the mesh after loading and end up with a bad vertex cache hit ratio, maybe you are drawing the mesh manually triangle by triangle... there are lots of ways to keep your graphics card from reaching top performance with just a few lines of code [grin]

Thanks mate, you sound confident, but I swear it's still 15 fps.
Here is how I load and draw. And what do you mean by triangle by triangle?
I load it once like this:

[source lang = cpp]
D3DXLoadMeshFromX("sand.x",
                  D3DXMESH_SYSTEMMEM,
                  d3ddev,
                  NULL,
                  &bufmat1,
                  NULL,
                  &numMat1,
                  &mesh1);

D3DXMATERIAL* tempMat1 = (D3DXMATERIAL*)bufmat1->GetBufferPointer();

mat1 = new D3DMATERIAL9[numMat1];
text1 = new LPDIRECT3DTEXTURE9[numMat1];
for(DWORD i = 0; i < numMat1; i++)
{
    mat1[i] = tempMat1[i].MatD3D;
    mat1[i].Ambient = mat1[i].Diffuse;
    if(FAILED(D3DXCreateTextureFromFile(d3ddev,
                                        tempMat1[i].pTextureFilename,
                                        &text1[i])))
        text1[i] = NULL;
}

//////////////
// Then I draw it every frame like this:

for(DWORD i = 0; i < numMat1; i++)
{
    d3ddev->SetMaterial(&mat1[i]);

    if(text1[i] != NULL)
    {
        d3ddev->SetTexture(0, text1[i]);
    }
    mesh1->DrawSubset(i);
}
[/source]



OK, this seems pretty straightforward and appears to be fine. However, there is still plenty of room to annoy your graphics card.

First, make sure to call ID3DXMesh::OptimizeInplace on your mesh. If the mesh has a bad ordering of faces and vertices you can end up re-transforming lots of vertices multiple times because of cache misses. This optimization reorders your vertices and faces to minimize the load on the transformation pipeline of your graphics adapter (try googling for "tipsify mesh" to learn more).

Second, output how many materials you have. Each call to DrawSubset issues a new draw call to the driver, which can easily become the bottleneck of your application if you end up issuing hundreds of draw calls for a single mesh due to apparently different materials. If you really have that many materials, then using a texture atlas (if you have too many different textures) or merging materials (if you have too many different material colors for your subsets) can help improve performance.

The second point does not seem too relevant in this case, since the DX Viewer renders the mesh fine. But before analyzing further, you should tell us how many DrawSubset calls you issue each frame.

Whatever mate, thanks for caring.
I modelled a simple plane with 100,000 polys, and when I draw it, it gives 15 fps. It's a plane, I mean, so the vertices are aligned.
I also tried loading 10 meshes of 10,000 polys each, and it was the same.
So do you think I still need to use that function that rearranges the mesh vertices?
Here is the model if you want to see for yourself:
http://rapidshare.com/files/223021627/model.x.html
Please take a look.

A quick glance at the docs says the loop counter to use with DrawSubset should be obtained via GetAttributeTable.

I am not sure, but you use D3DXMESH_SYSTEMMEM when creating the mesh, so it must be sent to the video buffer every frame. Would that make it slow?

my 2 cents

Quote:
Original post by Coder88
I am not sure, but you use D3DXMESH_SYSTEMMEM when creating the mesh, so it must be sent to the video buffer every frame. Would that make it slow?

my 2 cents


Seconded. Use DEFAULT or MANAGED.

Quote:
Original post by Coder88
I am not sure, but you use D3DXMESH_SYSTEMMEM when creating the mesh, so it must be sent to the video buffer every frame. Would that make it slow?

my 2 cents

Oh yes, I overlooked that one. [grin]

@OP
Read the following doc about the flags you can provide:

ID3DXMesh load flags

If you specify D3DXMESH_SYSTEMMEM, all of your geometry data is stored in system RAM and has to be copied over the comparatively slow bus to the VRAM of your graphics card EACH FRAME. This is most likely what is stalling your pipeline right now. Change it to MANAGED or DEFAULT as suggested and you should be fast.

Still, it is recommended to optimize your mesh after loading. It consumes some time, but only at initialization, and can easily earn you a 20% performance gain for big unoptimized meshes.

I tried D3DXMESH_MANAGED, but it's still at 15 fps.
The solution is to optimize the mesh.
You're saying the optimization only speeds it up by 20%? Of course it speeds it up more than that.

If your framerate drops that much with a 70K mesh but not with a 60K mesh, then I may have faced a similar situation.
My problem was that with a 60K mesh the index buffer was stored as 16-bit integers, but with 70K it used 32-bit integers. So at 70K the index buffer more than doubled in size, which made a big difference on my graphics card; it almost made a level unplayable.

The solution I found was to split every mesh with more than 60K vertices into two or more, so that all meshes get 16-bit index buffers. Just splitting the 70K mesh into two of around 40K each could make your framerate go up again.
Hope it helps.

Quote:
Original post by yeisnier
If your framerate drops that much with a 70K mesh but not with a 60K mesh, then I may have faced a similar situation.
My problem was that with a 60K mesh the index buffer was stored as 16-bit integers, but with 70K it used 32-bit integers. So at 70K the index buffer more than doubled in size, which made a big difference on my graphics card; it almost made a level unplayable.

The solution I found was to split every mesh with more than 60K vertices into two or more, so that all meshes get 16-bit index buffers. Just splitting the 70K mesh into two of around 40K each could make your framerate go up again.
Hope it helps.


I don't think I have that problem.
The answer to my problem is to optimize the meshes I load.
I am doing that right now; it will work.

@yeisnier
The GeForce 8400 is the lowest end of the GeForce 8 series, but it should still be quite capable of using 32-bit indices. Anyway, I doubt the SDK viewer splits the mesh to fall back to 16-bit indices for big meshes, and as the OP said, the mesh is way faster in the SDK viewer. So that should not be the problem, even if it is a valid thing to point out.

@DiPharoH
Did you test the optimization? As I said, the optimization only helps if your bottleneck is the transformation pipeline combined with a bad initial (mis-)ordering of your vertices. Your graphics card has a vertex cache that stores the last n transformed vertices, with n being somewhere between 10 and 40 as far as I know. That means if you use a vertex in multiple triangles, the graphics card does not have to re-transform it again and again, as long as the vertex is still in the cache. But if you render other parts of the mesh in between and then hit a triangle using a vertex you already had for another triangle way earlier, the graphics card has to transform that vertex again.
