
Texture-Splatting + Render to Texture problem [SOLVED].


Hi guys! I'm still working on my texture splatting engine, and I've run into a little snag: I'm just not getting the performance I need. I'm trying to get my FX5200 to do texture splatting for an RTS game (45-degree-style angle). This means that at any point my whole screen could be filled with terrain, excluding objects.

Currently I'm doing a 4-texture splat with a 4-channel alpha map, and at best, at 640x480, I can push out around 50fps. I'm basically looking to get this figure up to about 100 (I don't want to get greedy). I've done some profiling, and clearly my CPU is pretty idle when running this; 98% of my frame time is in the Present call (see image):
Free Image Hosting at www.ImageShack.us

I'm using a pretty basic pixel shader, and after it is compiled from HLSL it looks extremely similar to an asm shader I got from a gamedev tutorial, so I'm quite sure I'm not doing anything extremely wrong. I'll also mention that the tutorial's demo code gets around 160-170 fps drawing a smaller screen portion (an estimated 500x500 pixels), which is a bit more worrying. I'm quite sure HLSL isn't giving me that much overhead, right?

I've been thinking of some alternate ways to handle this, but they all seem to have quite a few disadvantages. I thought about pre-splatting, or maybe caching several splats at runtime, but this doesn't seem like a real solution to me. Does anyone have any info on how this performs? I don't even know how to go about implementing it. There's always the option of tiling, but I'm not quite fond of that option, and I think it doesn't look as good in the end.

Any ideas on this? I can post some code if you guys think I'm doing something wrong. Thanks for the help :).

[Edited by - sirob on October 9, 2005 2:46:58 PM]
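As an aside for readers, the per-pixel work of a 4-texture splat with a 4-channel alpha map boils down to a weighted sum: each channel of the alpha map weights one detail texture. A minimal CPU sketch of that blend (illustrative only; the names are made up and this is not the poster's actual shader):

```cpp
#include <array>
#include <cassert>

// One texel, RGB only for simplicity.
struct Color { float r, g, b; };

// Blend four texture samples using the four weights stored in the
// R/G/B/A channels of the alpha (splat) map at the same texel.
Color SplatBlend(const std::array<Color, 4>& texels,
                 const std::array<float, 4>& weights)
{
    Color out{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 4; ++i) {
        out.r += texels[i].r * weights[i];
        out.g += texels[i].g * weights[i];
        out.b += texels[i].b * weights[i];
    }
    return out;
}
```

The pixel shader does exactly this with four `tex2D` samples and a couple of `mad`s, which is why a long asm listing isn't expected for it.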

A common mistake when profiling D3D applications is to use a third-party profiler. These always show the most activity in Present(), because this is where the driver must empty its command buffer, switch modes, etc. Basically, all of your other calls just do validation and submit operations to the driver, which are queued up and executed later. Take a look at the "Accurately Profiling Direct3D API Calls" article in the DXSDK.

You should try profiling this application with PIX. It will show you exactly what is going on, i.e. exactly which state changes are slowing you down, and will let you identify whether your app is CPU-bound or GPU-bound. Since you have an FX5200, GPU-bound might be the case, but the lack of complicated pixel shaders points me towards CPU-bound.
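That DXSDK article times calls by draining the driver's command buffer with an event query before reading the clock, so a call's cost shows up where it is incurred rather than piling up inside Present(). A rough sketch of that flush, assuming a valid `IDirect3DDevice9*` (error handling omitted; treat as illustration, not a drop-in):

```cpp
#include <d3d9.h>

// Force the driver/GPU to finish all queued work before we sample the timer.
// Without a flush like this, most of the frame's cost appears inside Present().
void FlushGpu(IDirect3DDevice9* device)
{
    IDirect3DQuery9* query = NULL;
    if (SUCCEEDED(device->CreateQuery(D3DQUERYTYPE_EVENT, &query)))
    {
        query->Issue(D3DISSUE_END);
        // Spin until the GPU has consumed everything issued so far.
        while (query->GetData(NULL, 0, D3DGETDATA_FLUSH) == S_FALSE)
            ;
        query->Release();
    }
}
```

Call a flush like this immediately before and after the call you want to measure, and take the difference with QueryPerformanceCounter().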

Hi,

First off, thanks for answering.

I'm pretty much deliberately making my app GPU-bound, to be able to find a way to render enough frames while doing texture splatting, which I'm currently unable to do.

I did specifically enquire about a few things which I still haven't found an answer for (I've been looking):

- Could HLSL be giving me an overhead over an asm shader? FX Composer's asm view of my PS is nearly identical to the original asm shader. Would they perform differently because one originated in HLSL?

- Would two (or more) passes be an efficient way to do this quicker? The way I see it, no, but I think I might try this next.

- Are there any techniques involving creating a splat during loading and saving it for the whole length of the game? Is this realistic? Could it work?

- Could there maybe be a way to calculate the splats during runtime, but save them as a separate texture to render in consecutive frames? Could this work?

Thanks for the reply! :).

I've run into another problem while trying to find a solution to the first one. I've been trying to render to a texture, so that I could cache the different texture splats between frames.

It renders, but I seem to be getting some extreme artifacts on my target texture. Trying to debug it, I removed everything but a clear call on the texture, and I'm still getting artifacts (though they look different). If I swap the texture for a regular one (loaded from a file), it renders clean.

Here's a picture of the artifacts:
Free Image Hosting at www.ImageShack.us

I'm really not doing anything special, so I can't even start to think what could be causing this.

Here's some code:

LPDIRECT3DTEXTURE9 cTextureSplatter::CreateSplat(LPDIRECT3DTEXTURE9 AlphaMap)
{
    // Create a 512x512 render-target texture (0 levels = full mip chain).
    LPDIRECT3DTEXTURE9 Texture;
    m_Device->CreateTexture(512, 512, 0, D3DUSAGE_RENDERTARGET,
                            D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &Texture, 0);

    // Redirect rendering to the texture's top-level surface,
    // remembering the current target so it can be restored.
    IDirect3DSurface9 *Sfc, *Sfc2;
    Texture->GetSurfaceLevel(0, &Sfc);
    m_Device->GetRenderTarget(0, &Sfc2);
    m_Device->SetRenderTarget(0, Sfc);

    m_Device->Clear(0, 0, D3DCLEAR_TARGET, 0xFFFF00FF, 1.0f, 0);

    // Restore the original render target and release both surface references.
    m_Device->SetRenderTarget(0, Sfc2);
    Sfc->Release();
    Sfc2->Release();    // GetRenderTarget() AddRefs, so this must be released too

    return Texture;
}




And the rendering code:

m_Tex1 = m_Splatter.CreateSplat(m_AlphaMap);

m_Device->SetTexture(0, m_Tex1);

m_Patch.Render();

m_Device->SetTexture(0, 0);
m_Tex1->Release();
m_Tex1 = 0;





I'd appreciate any ideas on this. Thanks a bunch for reading :).

SOLVED: I wasn't regenerating the MIP sublevels after rendering to the texture.
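For anyone who finds this later: with a D3DPOOL_DEFAULT render target, one way to keep the mip sublevels valid is to create the texture with auto-generated mips and regenerate the chain after each render into it. A sketch of that approach, assuming a valid `IDirect3DDevice9* m_Device` as in the code above (this may differ from the exact fix used):

```cpp
#include <d3d9.h>

// Create the splat target with auto-generated mipmaps. With
// D3DUSAGE_AUTOGENMIPMAP, only the top level is directly accessible,
// so Levels is 1 here rather than 0.
LPDIRECT3DTEXTURE9 Texture = NULL;
m_Device->CreateTexture(512, 512, 1,
                        D3DUSAGE_RENDERTARGET | D3DUSAGE_AUTOGENMIPMAP,
                        D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &Texture, 0);

// ...render the splat into level 0 as before...

// Ask the runtime/driver to (re)fill the mip sublevels from level 0;
// without this the sublevels contain garbage, which shows up as artifacts
// whenever the sampler picks a smaller mip.
Texture->GenerateMipSubLevels();
```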

Thanks for the help!
