Multiple Render Targets + Alpha Test = Framerate Killer?

10 comments, last by superpig 19 years, 7 months ago
Hi guys, I'm new to this forum but have searched as hard as possible for an answer and not found one. I've also tried to read all the FAQs etc, so I hope I don't come over as a 'n00b' and say/do something I shouldn't ;) I was wondering if anyone could help me with my multiple render targets issue (DX9, ATI Radeon 9800, 1024MB RAM).

What I'm trying to do is have two textures: one that contains the entire scene (for various full-scene effects I'll be doing), and one that contains depth information. Rather than render the entire scene twice, I thought I'd try MRTs, which after a lot of hunting around for information I managed to get working (thanks to whoever the guy is that did the Frogger GPU thing :P). I'm getting around 30 fps rendering the entire scene to an 800x600 texture and outputting depth information into the second texture. It seems to work fine and looks nice, but I have a couple of issues.

My landscape uses alpha-tested textures to simulate grass. If I disable the alpha testing, I get 30 fps. If I enable it (which I need to), the framerate seems to drop to around 9 fps. If I don't use MRTs (i.e. just the one texture) and use alpha test, I get over 30 again. So the issue seems to be that MRTs + alpha test = framerate killer - does anyone know why this is?

One other issue is that the data in my second texture seems to get fogged, and I'd rather it didn't. Is there a way to turn this off? Alternatively, is there a way to use my backbuffer and zbuffer as textures, and avoid rendering to an 800x600 texture (which I then draw as a full-screen overlay)? I'm not too worried about these effects being compatible with older cards (no offense ;))... just learning new techniques and things.

Well, I guess that's it - sorry it was a long post, hope you managed to read this far. Thanks for any light anyone can shed, much appreciated. Ta.
-BPG
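In case it's useful, here's roughly what my MRT setup boils down to (a trimmed sketch - the names are made up for this post, device is my IDirect3DDevice9*, and error checking is omitted):

// Two render-target textures: scene colour goes to RT0, encoded depth to RT1.
IDirect3DTexture9* pSceneTex = NULL;
IDirect3DTexture9* pDepthTex = NULL;
device->CreateTexture(800, 600, 1, D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &pSceneTex, NULL);
device->CreateTexture(800, 600, 1, D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &pDepthTex, NULL);

IDirect3DSurface9* pSceneSurf = NULL;
IDirect3DSurface9* pDepthSurf = NULL;
pSceneTex->GetSurfaceLevel(0, &pSceneSurf);
pDepthTex->GetSurfaceLevel(0, &pDepthSurf);

// Bind both targets; the pixel shader then writes COLOR0 and COLOR1.
device->SetRenderTarget(0, pSceneSurf);
device->SetRenderTarget(1, pDepthSurf);
// ... draw the scene once ...
device->SetRenderTarget(1, NULL); // unbind the second target when done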
Well, I'm not sure exactly what you're talking about, but the more relevant question I can ask is: why exactly do you need to render it twice? And if you do, why not just render to a surface, then put that onto a screen-sized quad? (No idea how slow this would be.) Oh well.

Quote:One other issue is that the data in my second texture seems to get fogged, and I'd rather it didn't. Is there a way to turn this off?

D3DDevice->SetRenderState(D3DRS_FOGENABLE, FALSE); // turn off fixed-function fog

That should do the trick if the only problem is that your fog is enabled. Don't forget to re-enable it afterward, though.

Mushu - trying to help those he doesn't know, with things he doesn't know.
Why won't he just go away? A question the universe may never have an answer to...
Thanks for the reply :)
Well, the reason I need to render the scene twice is because...

a) I need the full scene drawn, landscape, objects, water etc
b) I need a second 'version' but instead of grass textures etc it contains depth information. i.e. how far away a particular pixel is from the camera (used for the water to look volumetric among other things, works nicely).

What I'm currently doing is rendering the scene once, but to two render targets - texture 0 is sent colour (green grass etc) and texture 1 is sent a value (0-->1) for depth. This works okay, but my 3D grass is alpha tested (it's a billboard). If alpha test is enabled, it crawls along at 9 fps instead of the 30 I'd expect. If I disable alpha test, I get 30+ fps. So alpha test seems to make it really slow.
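The alpha test itself is just the bog-standard fixed-function setup, something like this (a sketch - my actual reference value may differ):

// Reject grass texels whose alpha falls below the reference threshold.
device->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
device->SetRenderState(D3DRS_ALPHAREF, 0x80); // example threshold
device->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL);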

The other issue is the fog: I need fog enabled so that texture0 (the grass, water etc) is all fogged as it should be, but I don't want texture1 to be fogged. The DX docs are a little vague... they just say that render target 0 will be fogged and that other render targets are *undefined*: "Implementations can choose to fog them all using the same state." :-/

So I'm not even sure there *is* a way to turn off fog... which makes the multiple render targets useless for my app :( Seems a shame.

But yes, in answer to your question, I'm rendering the scene to a texture and then drawing that as a full screen quad.

hmm, I'm confused :)
-BPG
Quote:Well, the reason I need to render the scene twice is because...

a) I need the full scene drawn, landscape, objects, water etc
b) I need a second 'version' but instead of grass textures etc it contains depth information. i.e. how far away a particular pixel is from the camera (used for the water to look volumetric among other things, works nicely).


Rendering the scene twice is not the same as multiple render targets. As I understand it, MRTs are SIMULTANEOUS render buffers (though why you would want to use them I can't possibly imagine, except to render to 2 monitors?)

What you want to do is render your first scene to a screen-sized texture, then render the second pass to the backbuffer, then use your texture for whatever you want done with the second render.

Quote:The other issue is the fog: I need fog enabled so that texture0 (the grass, water etc) is all fogged as it should be, but I don't want texture1 to be fogged. The DX docs are a little vague...


Just disable fog when you do your first pass (to the texture), then enable fog and change the RT back to your back buffer.
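In code, roughly (a sketch - pTexSurf and pBackBuffer are assumed to be surfaces you already hold):

// Pass 1: render depth into the texture with fog off.
device->SetRenderTarget(0, pTexSurf);
device->SetRenderState(D3DRS_FOGENABLE, FALSE);
// ... render the scene, outputting depth ...

// Pass 2: render the normal scene to the backbuffer with fog on.
device->SetRenderTarget(0, pBackBuffer);
device->SetRenderState(D3DRS_FOGENABLE, TRUE);
// ... render the scene again ...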
Again, MRTs should not be used for what you're trying to do; the behavior is extremely dependent on the hardware implementation, and the only reason I believe you've been able to get away with it so far is that you have a 9800, which I know supports 2 monitors (30 fps is a miracle).

Quote:Alternatively, is there a way to use my backbuffer and zbuffer as textures, and avoid rendering to an 800x600 texture (which I then draw as a full-screen overlay)? I'm not too worried about these effects being compatible with older cards


Render to texture, then use the texture as a sprite; as long as you don't switch MRTs, a 9800 should deliver a solid framerate.

Also, I don't understand why you want to render your depth information in a separate pass; it is already being rendered to the depth buffer in your first pass (unless you're using modified depth info?). If you want to play with depth values, it is better to use a vertex shader to calculate your depth on the fly. Or, if you really want to stick to the FFP, you can force stencil effects by using the stencil comparison and masking functions that come with the FFP (see the whole section on shadows, or the DEPTHTEST render state - I think).

Quote:I've also tried to read all the FAQs etc, so I hope I don't come over as a 'n00b' and say/do something I shouldn't ;)


Well, I don't think there's any skill minimum for these forums; we all try to help where we can, and get help where we can.. =)
________________
"I'm starting to think that maybe it's wrong to put someone who thinks they're a Vietnamese prostitute on a bull"       -- Stan from South Park
Lab74 Entertainment | Project Razor Online: the Future of Racing (flashsite preview)
You are doing everything right, I think. Multiple render targets are exactly for cases like this - when you need more information than just colour. Multiple monitors is a really different case, of course; you cannot use MRTs for that purpose. I think you should use MRTs for creating the depth texture, because rendering twice would be much slower.

First, you should use the backbuffer as the first render target instead of the 800x600 texture, because AFAIK if you render to a texture, antialiasing will be disabled. So check the resolution of your back buffer and create the depth texture with the same resolution (and of course recreate this texture on resolution change). Later you can access the content of the backbuffer with IDirect3DDevice9::GetBackBuffer and copy it with IDirect3DDevice9::StretchRect. I have not tried it yet, but I have read that this is the right way. Right now I'm doing the same as you - rendering to a texture and later copying that to the backbuffer with postprocessing effects - but antialiasing is disabled in this case.
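Something like this, I believe (an untested sketch - pSceneTex is assumed to be a render-target texture with the same size and format as the backbuffer):

// Grab the backbuffer and copy it into a render-target texture.
IDirect3DSurface9* pBackBuffer = NULL;
device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackBuffer);

IDirect3DSurface9* pTexSurf = NULL;
pSceneTex->GetSurfaceLevel(0, &pTexSurf);

// StretchRect copies between default-pool surfaces.
device->StretchRect(pBackBuffer, NULL, pTexSurf, NULL, D3DTEXF_NONE);

pTexSurf->Release();
pBackBuffer->Release();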

The two problems seem very strange; I would try a newer driver version (if one exists). For the fog problem, maybe you should turn off hardware fog and apply the fog in the pixel shader.
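That way only render target 0 gets fogged. Roughly (a sketch - the constant registers and fog parameters here are arbitrary choices):

// Fixed-function fog off; the pixel shader fogs RT0 and leaves RT1 alone.
device->SetRenderState(D3DRS_FOGENABLE, FALSE);

// Feed fog colour and range to the shader (the layout is arbitrary here).
const float fogColor[4] = { 0.5f, 0.6f, 0.7f, 1.0f };
const float fogRange[4] = { 50.0f /*start*/, 400.0f /*end*/, 0.0f, 0.0f };
device->SetPixelShaderConstantF(0, fogColor, 1);
device->SetPixelShaderConstantF(1, fogRange, 1);
// In HLSL: rt0 = lerp(sceneColour, fogColour, fogFactor); rt1 = depth, unfogged.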

If you get any results, please post them here; they will be useful for me (and hopefully others).
Sorry dhanji, I've confused the world (and myself it seems) by my poor terminology :)

Okay, here's what I'm doing.

Rendering my landscape using MRTs. Render target 0 is what you see on screen eventually (green landscape, blue sky etc). Render target 1 is my depth information (bog standard, the same as what is in the depth buffer, I imagine). It's greyscale: black is near, white is far.

I'm not rendering anything twice - in fact I'm trying to avoid that by using MRTs. MRTs mean that I can have a texture containing my scene and a texture containing my depth information, but I don't have to put all the vertices through the pipeline twice. The pixel shader just outputs two colours, which I imagine is faster than sending all the verts through again for each vertex buffer, redoing all the matrix math in the vertex shader, etc.

What's happening is that my depth information (in render target 1) is being fogged the same as render target 0. I want render target 0 fogged and render target 1 not fogged; I'm unsure as to whether this is even possible.

Failing that, I might have to go the "render twice" route, or implement pixel shader fog myself and disable fixed-function fogging. What I don't understand with MRTs is why alpha testing makes the fps drop to 9 :( If there's no fix for this, then I'll definitely have to render twice :(

What I need to do is get a website started and upload some screenshots to show you all what I mean. I'm using render target 0 for various effects (like rain hitting the camera and distorting what you see, like Metroid Prime) and render target 1 for making the water look lovely and deep.

Thanks for your help though gang, I appreciate it :)
-BPG
Oooh, sorry - looks like I misunderstood a bit with the MRT thing.
But let me present this scenario and you can decide.

Rendering simultaneously to 2 buffers means that the video hardware has to duplicate its output stream to two different memory spaces; rendering twice to the same buffer means it uses the same memory space.

I have a Radeon 9000 Mobility (that's like 8 ATI generations lower than a 9800, or roughly 6 shader generations). I can render my entire scene to a screen-sized (1024x768) texture, then render the whole scene to an environment map, then to the screen, then to a rearview viewport, then alpha blend the texture over the screen as an overlay 5 times (with increasing opacity), and my frame rate drops from about 150 to 70 fps.

MRTs just don't make sense to me (at least not for this purpose).

EDIT: also, in the second target, if you're just duplicating the pixels, then post-shader fog can't be switched off (which you can do EASILY in 2 passes).
________________
"I'm starting to think that maybe it's wrong to put someone who thinks they're a Vietnamese prostitute on a bull"       -- Stan from South Park
Lab74 Entertainment | Project Razor Online: the Future of Racing (flashsite preview)
Hi imajor, and thanks - that's some really useful info there!

I've updated my drivers now (I think they were just released, only a week old) but still no luck :( Alpha testing makes it crawl along, and it's still fogged.

I'd also like to use a different format for the depth information (a 32-bit one would be better than A8R8G8B8) - it's not quite working for some reason, but I think that's probably my fault :)
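I guess the first thing to check is whether the driver even supports a float format as a render target, something like this (a sketch - pD3D is the IDirect3D9* I created the device from, and I'm assuming an X8R8G8B8 display format):

// Ask the driver whether a 32-bit float render-target texture is supported.
HRESULT hr = pD3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
    D3DFMT_X8R8G8B8, D3DUSAGE_RENDERTARGET, D3DRTYPE_TEXTURE, D3DFMT_R32F);
if (SUCCEEDED(hr))
{
    // Safe to create the depth texture as D3DFMT_R32F.
}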

I think I'll end up doing the fog in a pixel shader (which I'm okay with, as I want to do nicer fogging anyway) and not render the alpha-tested parts into MRTs for the time being :-/

If I get around to rendering to the backbuffer and then copying *that* into a texture as you described, then I'll add my results to this thread :)

Thanks again gang, and any more info on why fogging and alpha testing are fuggered would be much obliged :(

-BPG
Hmmm....

Just a thought, but have you tried cracking out PIX and running it on your app to try and identify exactly what the slowdown *is*?

The zbuffer-as-texture thing is a pain, it hit me once when I was trying to do shadow mapping... NVidia used to support it, I believe, though ATI may not. I usually see people packing the depth information away as color...

Ooh, here's an idea.

Render into a texture the same size as the backbuffer, but in a format with an alpha channel (e.g. A8R8G8B8). Your pixel pipeline can still do all the regular alpha testing on the source textures, but in your pixel shader you can duplicate the depth value into the alpha channel of the render target.

You can render the quad to the backbuffer with alpha blending disabled to achieve 'normal' output, and use the texture as input for more processing which uses the depth information.
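Something along these lines (just a sketch - the names are made up, and error checking is omitted):

// One A8R8G8B8 render target: RGB carries the scene, A carries depth.
IDirect3DTexture9* pSceneDepthTex = NULL;
device->CreateTexture(800, 600, 1, D3DUSAGE_RENDERTARGET,
    D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &pSceneDepthTex, NULL);
// In the pixel shader, output float4(sceneColour.rgb, depth01) so the
// alpha channel doubles as an 8-bit depth map.

// When blitting to the backbuffer, ignore the alpha channel entirely.
device->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
// ... draw the full-screen quad sampling pSceneDepthTex ...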

Richard "Superpig" Fine - saving pigs from untimely fates - Microsoft DirectX MVP 2006/2007/2008/2009
"Shaders are not meant to do everything. Of course you can try to use it for everything, but it's like playing football using cabbage." - MickeyMouse

Good idea, but maybe the precision of the alpha channel is not sufficient (8 bits only). It might be enough for depth of field, though. You can also use a float texture for the depth, but AFAIK float render targets are not supported on gf5 cards; they will work with the radeon9 and gf6 series, though.

