Tape_Worm

[SharpDX] Speed issues when drawing


I'm using SharpDX and Direct3D 11. I've got 8192 textured (single texture) quads (32768 vertices); each vertex has a Vector4 for position, a Vector2 for texture coordinates, and a Vector4 for color. Every frame I discard my dynamic vertex buffer and refill it with the vertex data (this is just an unrefined stress test).
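
For reference, the vertex layout is roughly this (just a sketch; the actual struct and field names in my code are different):

using System.Runtime.InteropServices;
using SharpDX;

[StructLayout(LayoutKind.Sequential)]
struct QuadVertex
{
    public Vector4 Position;   // 16 bytes
    public Vector2 TexCoord;   // 8 bytes
    public Vector4 Color;      // 16 bytes, 40 bytes per vertex in total
}
// 32768 vertices * 40 bytes is roughly 1.25 MB rewritten every frame.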

It is REALLY slow: the frame delta is something like 150+ msec (~5-6 FPS).

All I'm doing is this:

Draw()
{
    // Map the dynamic vertex buffer with write-discard.
    lock_vertex_buffer(discard);
    for (int i = 0; i < 8192; i++)
    {
        // Transform each quad on the CPU and write its vertices into the buffer.
        Matrix proj_view_world = proj_view * world;
        UpdateVertices(proj_view_world);
        WriteData();
    }
    unlock_vertex_buffer();

    // 8192 quads * 6 indices per quad.
    ImmediateContext.DrawIndexed(8192 * 6, 0, 0);
}


And for the record, I'm initializing an index buffer, but I never touch it again.

Now, if I comment out the DrawIndexed call, I get ~200 FPS (which is comparable to my old DirectX 9 code, and that code isn't doing much differently than this), so that seems to indicate that the bottleneck is in the DrawIndexed method.

I'm really new to the whole Direct3D 11 thing, so can anyone shed some light or offer advice on this?

Thanks

[quote name='Tape_Worm']so that seems to indicate that the bottleneck is in the DrawIndexed method.[/quote]
Kinda sorta, but not really.
Without the DrawIndexed call there's no dependency on the dynamic vertex buffer, so your app has no reason to wait for that data to arrive on the GPU.
Also, without the DrawIndexed call you're not going to be launching any GPU commands, which means no pixel/vertex shaders will run.

Are your quads visible on the screen?

Hi,

Check your CPU usage too. If it is 100%, then you aren't bound by the GPU.


for (int i = 0; i < 8192; i++)
{
    Matrix proj_view_world = proj_view * world;
    UpdateVertices(proj_view_world);
    WriteData();
}

What do you do in "UpdateVertices"? Could you consider moving your calculations to the GPU instead of performing them on the CPU? Probably your GPU is just sitting idle while you perform your updates. You could easily draw 8000 quads with geometry instancing; see the sketch below.
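
Very roughly, and purely as a sketch (the context, instanceBuffer and element offsets here are made up, not taken from your code), the instanced path could look something like this with SharpDX:

using SharpDX;
using SharpDX.Direct3D11;
using SharpDX.DXGI;
using Buffer = SharpDX.Direct3D11.Buffer;

// Slot 0 keeps a single static quad; slot 1 supplies one world matrix per quad,
// split into four float4 elements flagged as per-instance data.
static InputElement[] InstancedQuadElements()
{
    return new[]
    {
        new InputElement("POSITION",     0, Format.R32G32B32A32_Float,  0, 0),
        new InputElement("TEXTURECOORD", 0, Format.R32G32_Float,       16, 0),
        new InputElement("COLOR",        0, Format.R32G32B32A32_Float, 24, 0),
        new InputElement("WORLD", 0, Format.R32G32B32A32_Float,  0, 1, InputClassification.PerInstanceData, 1),
        new InputElement("WORLD", 1, Format.R32G32B32A32_Float, 16, 1, InputClassification.PerInstanceData, 1),
        new InputElement("WORLD", 2, Format.R32G32B32A32_Float, 32, 1, InputClassification.PerInstanceData, 1),
        new InputElement("WORLD", 3, Format.R32G32B32A32_Float, 48, 1, InputClassification.PerInstanceData, 1),
    };
}

// Per frame: update only the instance buffer (8192 world matrices), bind it to
// slot 1, then draw every quad with a single call.
static void DrawQuadsInstanced(DeviceContext context, Buffer instanceBuffer)
{
    context.InputAssembler.SetVertexBuffers(1,
        new VertexBufferBinding(instanceBuffer, Utilities.SizeOf<Matrix>(), 0));
    context.DrawIndexedInstanced(6, 8192, 0, 0, 0);
}

The vertex shader then does the proj_view * world multiply per instance, so the CPU writes 8192 matrices (about 0.5 MB) each frame instead of rebuilding all 32768 vertices.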

Cheers!


[quote]How are you locking and unlocking the buffer, and how are you writing data to it?[/quote]


Using the Map/Unmap methods. Sorry, the terminology has changed and I'm still stuck in the past.
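
In other words, roughly this pattern; just a simplified sketch, the method and parameter names here are placeholders rather than my actual code:

using SharpDX;
using SharpDX.Direct3D11;
using Buffer = SharpDX.Direct3D11.Buffer;

// Discard-and-refill of a dynamic vertex buffer via Map/Unmap, the D3D11
// equivalent of the old Lock(DISCARD)/Unlock.
static void FillDynamicBuffer<T>(DeviceContext context, Buffer vertexBuffer, T[] vertices) where T : struct
{
    DataStream stream;
    context.MapSubresource(vertexBuffer, MapMode.WriteDiscard, MapFlags.None, out stream);
    stream.WriteRange(vertices);               // copy this frame's vertex data
    context.UnmapSubresource(vertexBuffer, 0);
    stream.Dispose();
}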


[quote name='Tape_Worm' timestamp='1326855562' post='4903871']so that seems to indicate that the bottleneck is in the DrawIndexed method.
Kinda sorta, but not really.
Without the DrawIndexed call there's no dependency on the dynamic vertex buffer, so your app has no reason to wait for that data to arrive on the GPU.
Also, without the DrawIndexed call you're not going to be launching any GPU commands, which means no pixel/vertex shaders will run.

Are your quads visible on the screen?
[/quote]

Yep, they're quite visible. I only mentioned that because I thought maybe the stuff inside the loop (i.e. the transformation math, buffer writing) was causing the issue. They're not.


[quote]Check your CPU usage too. If it is 100%, then you aren't bound by the GPU. What do you do in "UpdateVertices"? Could you consider moving your calculations to the GPU instead of performing them on the CPU? Probably your GPU is just sitting idle while you perform your updates. You could easily draw 8000 quads with geometry instancing.[/quote]


No, I'm trying to stay away from stuff like instancing for the time being (and seriously, 8000 quads shouldn't be affecting performance like this). And no, I don't get 100% CPU usage, though I'm not sure how good an indicator that is. It's interesting to note that if I put a Sleep(10) in the method that calls Draw() (after it), I still get the same frame rate, so I'm assuming that means the CPU is free and clear.

I've tried PIX and a profiler; the profiler doesn't show anything glaring, and I can't make heads or tails of the data PIX is presenting me.

Anyway, thanks guys. If you all have any other ideas, that'd be swell.

How many pixels does each quad cover?

If every quad is covering the entire screen, then that's only ~0.018 ms per full-screen quad (150 ms / 8192 quads), which is actually extremely fast. Graphics cards from a few years back take ~0.5 ms to draw a textured quad over an entire "HD"-resolution buffer.

[quote]How many pixels does each quad cover? If every quad is covering the entire screen, then that's only 0.018 ms per full-screen quad, which is actually extremely fast. Graphics cards from a few years back will take ~0.5 ms to draw a textured quad over an entire "HD" res buffer.[/quote]


I had that thought last night. The screen is 800x600 and each quad is 260x260. I resized the quads to 66x66 and it still gave lousy performance, something like 20 FPS if I recall. In my D3D9 app I've got 8192 128x128 textured and alpha-blended quads on screen at ~200 FPS.

I just tried the D3D11 app here at the office on a Radeon 4550, which is quite slow (it runs in 10.1 downlevel mode, not that it matters; I get these issues on my home computer, which uses a Direct3D 11 card). With my D3D9 app I get ~22 FPS; with the D3D11 app (running in 10.1) I get ~0.15 FPS. Something's not kosher here.

I also updated the code to fill the vertex buffer only once, so every frame just calls DrawIndexed() (roughly as in the sketch below). Something's still not right here.
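
For reference, the fill-once setup boils down to creating the buffer with its data a single time and never mapping it again; a rough SharpDX sketch with placeholder names (my test actually keeps the same dynamic buffer and simply skips the per-frame refill):

using SharpDX.Direct3D11;
using Buffer = SharpDX.Direct3D11.Buffer;

// Create the vertex buffer with its contents supplied up front, so the render
// loop never touches it; each frame is then just the DrawIndexed call.
static Buffer CreateFilledVertexBuffer<T>(Device device, T[] vertices) where T : struct
{
    return Buffer.Create(device, BindFlags.VertexBuffer, vertices);
}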

Oh, also, this is my vertex/pixel shader for the quads (in case there's something in here that'd be a problem):

Texture2D theTexture : register(t0);
SamplerState sample : register(s0);

struct VS_IN
{
    float4 pos : POSITION;
    float4 col : COLOR;
    float2 uv : TEXTURECOORD;
};

struct PS_IN
{
    float4 pos : SV_POSITION;
    float4 col : COLOR;
    float2 uv : TEXTURECOORD;
};

PS_IN VS( VS_IN input )
{
    return input;
}

float4 PS( PS_IN input ) : SV_Target
{
    return theTexture.Sample(sample, input.uv);
}


Yeah, I'm flummoxed.

Just to keep up to date here:

I've modified my code to use SlimDX and I'm still getting the same issue. I also noticed that on this machine the whole system chugs while the application is running (the mouse pauses, window events take forever, etc.), and yet CPU usage in Task Manager never climbs above 2-3%. I'm updating this computer to the latest ATI driver (11.12) to see if it helps any.

... some time passes...

Nope, still lousy performance.

Anyone have any idea why DrawIndexed would give such lousy performance with only 32,768 vertices?

Hi,

I assume that you have enabled the Direct3D debug layer and studied the output (if any) from the program.
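
With SharpDX that boils down to creating the device with the debug flag (SlimDX has an equivalent constructor); a minimal sketch:

using SharpDX.Direct3D;
using SharpDX.Direct3D11;

// The debug layer reports API misuse and warnings to the debugger's output window.
static Device CreateDebugDevice()
{
    return new Device(DriverType.Hardware, DeviceCreationFlags.Debug);
}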

Best regards

Here are some more ideas:

Is the shader in the D3D11 version more complicated than just doing a texture read?

Try turning alpha testing off in D3D9 (or turning it on in D3D11 by using clip() in the pixel shader), so both versions are doing the same work. Alpha testing can have a significant performance impact, since it cuts down on the number of pixels that need to be blended with the frame buffer; the difference will depend on how transparent the texture is and on the card you're testing on.

It's also possible that the D3D9 version is rejecting pixels due to the depth testing or stencil testing. Make sure those are disabled in both cases.
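
On the D3D11 side, turning both tests off is just a depth/stencil state; a quick sketch (device and context are whatever you already create elsewhere):

using SharpDX.Direct3D11;

// Create and bind a depth/stencil state with both tests disabled, so the
// D3D11 path matches a D3D9 setup that has them turned off.
static void DisableDepthStencil(Device device, DeviceContext context)
{
    var desc = DepthStencilStateDescription.Default();
    desc.IsDepthEnabled = false;
    desc.DepthWriteMask = DepthWriteMask.Zero;
    desc.IsStencilEnabled = false;

    var state = new DepthStencilState(device, desc);
    context.OutputMerger.SetDepthStencilState(state, 0);
}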
