Multiple Render Targets + Alpha Test = Framerate Killer?

This topic is 4837 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

Hi guys, I'm new to this forum but have searched as hard as possible for an answer and not found one. I've also tried to read all the FAQs etc., so I hope I don't come over as a n00b and say/do something I shouldn't ;) Wondered if anyone could help me with my multiple render targets issue (DX9, ATI Radeon 9800, 1 GB RAM).

What I'm trying to do is have two textures: one that contains the entire scene (for various full-scene effects I'll be doing), and one that contains depth information. Rather than render the entire scene twice, I thought I'd try MRTs, which after a lot of hunting around for information I managed to get working (thanks to whoever the guy is that did the Frogger GPU thing :P). I'm getting around 30 fps rendering the entire scene to an 800x600 texture and outputting depth information into the second texture. It seems to work fine and looks nice, but I have a couple of issues.

My landscape uses alpha-tested textures to simulate grass. If I disable the alpha testing, I get 30 fps. If I enable it (which I need to), it seems to drop to around 9 fps. If I don't use MRTs (i.e. just the one texture) and use alpha test, I get over 30 again. So the issue seems to be that MRTs + alpha test = framerate killer. Does anyone know why this is?

One other issue is that the data in my second texture seems to get fogged, and I'd rather it didn't. Is there a way to turn this off? Alternatively, is there a way to use my backbuffer and z-buffer as textures, and avoid rendering to an 800x600 texture (which I then draw as a full-screen overlay)? I'm not too worried about these effects being compatible with older cards (no offense ;))... just learning new techniques and things.

Well, I guess that's it... sorry it was a long post. Hope you managed to read this far. Thanks for any light anyone can shed, much appreciated. Ta.
-BPG

Well, I'm not sure exactly what you're talking about; the more relevant question I can ask is why exactly you need to render it twice? And if you do, why not just render to a surface, then set that onto a screen-sized quad? (No idea how slow this would be.) Oh well.

Quote:
One other issue is that the data in my second texture seems to get fogged, and I'd rather it didn't. Is there a way to turn this off?

D3DDevice->SetRenderState(D3DRS_FOGENABLE, FALSE);

Should do the trick if the only problem is your fog being enabled. Don't forget to re-enable it afterwards, though.


Mushu - trying to help those he doesn't know, with things he doesn't know.
Why won't he just go away? A question the universe may never have an answer to...

Thanks for the reply :)
Well, the reason I need to render the scene twice is because...

a) I need the full scene drawn, landscape, objects, water etc
b) I need a second 'version' but instead of grass textures etc it contains depth information. i.e. how far away a particular pixel is from the camera (used for the water to look volumetric among other things, works nicely).

What I'm currently doing is rendering the scene once, but to two render targets: texture 0 is sent colour (green grass etc.) and texture 1 is sent a value (0→1) for depth. This works okay, but my 3D grass is alpha tested (it's a billboard). If alpha test is enabled, it crawls along at 9 fps instead of the 30 I'd expect. If I disable alpha test, I get 30+ fps. So alpha test seems to make it really slow.

The other issue is the fog: I need fog enabled so that texture 0 (the grass, water etc.) is fogged as it should be, but I don't want texture 1 to be fogged. The DX docs are a little vague... they just say "render target 0 will be fogged, and other render targets are *undefined*" :-/ "Implementations can choose to fog them all using the same state."

So I'm not even sure there *is* a way to turn off the fog... which makes multiple render targets useless for my app :( Seems a shame.

But yes, in answer to your question, I'm rendering the scene to a texture and then drawing that as a full screen quad.

hmm, I'm confused :)
-BPG

Quote:
Well, the reason I need to render the scene twice is because...

a) I need the full scene drawn, landscape, objects, water etc
b) I need a second 'version' but instead of grass textures etc it contains depth information. i.e. how far away a particular pixel is from the camera (used for the water to look volumetric among other things, works nicely).


Rendering the scene twice is not the same as multiple render targets. As I understand it, MRTs are SIMULTANEOUS render buffers (why you would want to use those I can't possibly imagine, except to render to 2 monitors?)

What you want to do is render your first scene to a screen-sized texture, then render the second pass to the backbuffer, then use your texture for whatever you want done with the second render.

Quote:
The other issue with the fog, is that I need fog enabled so that texture0 (the grass, water etc) is all fogged as it should be, but I don't want texture1 to be fogged. The DX docs are a little vague...


Just disable fog when you do your first pass (to the texture), then enable fog and change the RT back to your back buffer.
Again, MRTs should not be used for what you're trying to do; the behaviour is extremely dependent on the hardware implementation, and the only reason I believe you have been able to get away with it so far is that you have a 9800, which I know supports 2 monitors (30 fps is a miracle).

Quote:
Alternatively, is there a way to use my backbuffer and zbuffer as textures, and avoid rendering to an 800x600 texture (which I then draw as a full screen overlay)? I'm not too worried about these effects being compatible with older cards


Render to texture, then use the texture as a sprite; as long as you don't switch MRTs, a 9800 should deliver a solid framerate.

Also, I don't understand why you want to render your depth information in a separate pass; it is already being rendered to the depth buffer in your first pass (unless you're using changed depth info?). If you want to play with depth values, it is better to use a vertex shader to calculate your depth on the fly. If you really want to stick to the FFP, you can fake stencil effects by using the stencil comparison and masking functions that come with the FFP (see the whole section on shadows, or the DEPTHTEST render state — I think).

Quote:
I've also tried to read all the faqs etc, so I hope I don't come over as 'n00b' and say/do something I shouldn't ;)


Well, I don't think there's any skill minimum for these forums; we all try to help where we can, and get help where we can. =)

You are doing everything right, I think. Multiple render targets are exactly for such cases, when you need more information than just colour. Multiple monitors is a really different thing, of course; you cannot use MRTs for that purpose. I think you should use MRTs for creating the depth texture, because rendering twice would be much slower.

First, you should use the backbuffer as the first render target instead of the 800x600 texture, because AFAIK if you render to a texture, antialiasing will be disabled. So check the resolution of your back buffer and create the depth texture with the same resolution (and of course recreate this texture on resolution change). Later you can access the content of the backbuffer with IDirect3DDevice9::GetBackBuffer and IDirect3DDevice9::UpdateSurface. I have not tried it yet, but I read that this is the right way. Right now I'm doing the same as you - rendering to a texture and later copying that to the backbuffer with postprocessing effects - but antialiasing is disabled in this case.

The two problems seem very strange; I would try a newer driver version (if one exists). For the fog problem, maybe you should turn off hardware fog and apply the fog in the pixel shader.

If you get any results, please post them here; they will be useful for me (and hopefully others).

Sorry dhanji, I've confused the world (and myself, it seems) with my poor terminology :)

Okay, here's what I'm doing.

Rendering my landscape using MRTs. Render Target 0 is what you see on screen eventually (green landscape, blue sky etc.). Render Target 1 is my depth information (bog standard, the same as what is in the depth buffer, I imagine). It's greyscale: black is near, white is far.

I'm not rendering anything twice - in fact I'm trying to avoid that by using MRTs. MRTs mean that I can have a texture containing my scene and a texture containing my depth information, but I don't have to put all the vertices through the pipeline twice. The pixel shader just outputs two colours, which I imagine to be faster than sending all the verts through again for each vertex buffer, redoing all the matrix maths in the vertex shader, etc.
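A minimal sketch of what such a pixel shader looks like in HLSL (the input layout, sampler name, and depth value passed down from the vertex shader are illustrative assumptions, not the poster's actual code):

```
struct PS_INPUT
{
    float2 uv    : TEXCOORD0;
    float  depth : TEXCOORD1; // e.g. view-space distance / far plane, from the vertex shader
};

struct PS_OUTPUT
{
    float4 colour : COLOR0; // goes to render target 0 (the visible scene)
    float4 depth  : COLOR1; // goes to render target 1 (greyscale depth)
};

sampler diffuseMap : register(s0);

PS_OUTPUT main(PS_INPUT input)
{
    PS_OUTPUT output;
    output.colour = tex2D(diffuseMap, input.uv);
    output.depth  = float4(input.depth, input.depth, input.depth, 1.0f);
    return output;
}
```

The geometry goes through the vertex pipeline once, and each shaded pixel is simply written to both bound targets.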

What's happening is that my depth information (in render target 1) is being fogged the same as render target 0. I want render target 0 fogged and render target 1 not fogged, and I'm unsure whether this is even possible.

Failing that, I might have to go the "render twice" route, or implement pixel-shader fog myself and disable fixed-function fogging. What I don't understand with MRTs is why alpha testing makes the fps drop to 9 :( If there's no fix for this, then I'll definitely have to render twice :(

What I need to do is get a website started and upload some screenshots to show you all what I mean. I'm using render target 0 for various effects (like rain hitting the camera and distorting what you see, like Metroid Prime) and render target 1 for making the water look lovely and deep.

Thanks for your help though gang, I appreciate it :)
-BPG

Oooh, sorry, looks like I misunderstood a bit with the MRT thing.
But let me present this scenario and you can decide.

Rendering simultaneously to 2 buffers means that the video hardware has to duplicate its output stream to two different memory spaces; rendering twice to the same buffer means it uses the same memory space.
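A back-of-envelope way to see the extra write cost (assuming 32-bit A8R8G8B8 targets at the poster's 800x600; the helper function is illustrative, not from any API):

```cpp
#include <cstddef>

// Bytes written for one full overwrite of a render target,
// assuming 4 bytes per pixel (A8R8G8B8).
constexpr std::size_t targetBytes(std::size_t width, std::size_t height,
                                  std::size_t bytesPerPixel = 4)
{
    return width * height * bytesPerPixel;
}

// With MRTs every shaded pixel is written to each bound target, so
// two 800x600 targets roughly double the colour-write bandwidth:
//   one target : 800 * 600 * 4 = 1,920,000 bytes (~1.8 MB) per overwrite
//   two targets: ~3.7 MB per overwrite, before counting any overdraw
```

Overdraw (and alpha-tested billboards tend to have a lot of it) multiplies these numbers further.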

I have a Radeon 9000 Mobility (that's like 8 ATI generations lower than a 9800, or roughly 6 shader generations). I can render my entire scene to a screen-sized (1024x768) texture, then render the whole scene to an environment map, then the screen, then a rearview viewport, then alpha blend the texture over the screen as an overlay 5 times (with increasing opacity), and my frame rate drops from about 150 to 70 fps.

MRTs just don't make sense to me (at least not for this purpose).

EDIT: also, if you're just duplicating the pixels into target 2, then post-shader fog can't be switched off (which you can do EASILY in 2 passes).

Hi imajor, and thanks that's some really useful info there!

I've updated my drivers now (I think they were just released, a week old) but still no luck :( Alpha testing makes it crawl along, and it's still fogged.

I'd also like to use a different format for the depth information (a 32bit one would be better than A8R8G8B8) - it's not quite working for some reason, but I think that's probably my fault :)

I think I'll end up doing the fog in a pixel shader (which I'm okay with as I want to do nicer fogging anyway) and not render the alpha tested part into MRT's for the time being :-/

If I get around to rendering to the backbuffer and then rendering *that* into a texture as you described then I'll add to this thread my results :)

Thanks again gang, and any more info on why fogging and alpha testing are fuggered is much appreciated :(

-BPG

Hmmm....

Just a thought, but have you tried cracking out PIX and running it on your app to try and identify exactly what the slowdown *is*?

The z-buffer-as-texture thing is a pain; it hit me once when I was trying to do shadow mapping... NVIDIA used to support it, I believe, though ATI may not. I usually see people packing the depth information away as colour...

Ooh, here's an idea.

Render into a texture the same size as the backbuffer, but in a format with an alpha channel (e.g. A8R8G8B8). Your pixel pipeline can still do all the regular alpha testing on the source textures, but in your pixel shader you can duplicate the depth value into the alpha channel of the render target.

You can render the quad to the backbuffer with alpha blending disabled to achieve 'normal' output, and use the texture as input for more processing which uses the depth information.

Good idea, but maybe the precision of the alpha channel is not sufficient (8 bits only). Maybe it is enough for depth of field. You can also use a float texture for depth, but AFAIK float render targets are not supported on GeForce FX (gf5) cards, though they will work on the Radeon 9 and GeForce 6 series.

Good idea superpig, but there's a couple of issues :(

1. As imajor says, it's not enough precision - you notice horrible banding as you move forwards and backwards.
2. Wouldn't this mean that objects I need alpha for can't be rendered? (I think; I'm a bit tired :)) If my output alpha is my distance, then I can't do things like alpha test etc. :( I could be wrong.
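For point 1, the banding is easy to quantify. A hypothetical pack/unpack of normalised depth through an 8-bit alpha channel (the function names are made up for illustration) looks like this:

```cpp
// Quantise a normalised depth value in [0, 1] to an 8-bit alpha
// channel and back, as in the depth-in-alpha scheme discussed above.
unsigned char packDepth8(float depth)
{
    return static_cast<unsigned char>(depth * 255.0f + 0.5f);
}

float unpackDepth8(unsigned char packed)
{
    return packed / 255.0f;
}

// Only 256 distinct depth values survive the round trip, so the
// worst-case band width in world units is farPlane / 255 -- roughly
// 4 units with a 1000-unit far plane, which is why the banding is visible.
float depthBandWidth(float farPlane)
{
    return farPlane / 255.0f;
}
```

Moving the camera shifts which world distances land in which of the 256 buckets, hence the crawling bands.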

I've tried using PIX, and I realised I *didn't have it*.
Hmmm... then I twigged that my DX SDK is about a year old (still version 9.0, though).

So I've upgraded to the Summer 2004 release. Oops. They've done some nice things with the new version, but the framework has completely changed, as have a few things about effect files. After spending ALL DAY (well, some very long amount of time) I got as far as managing to *compile* it again :-/ Finally, after much jiggery-pokery, it now renders.

However... alpha testing is still out for MRTs, it seems. And now fogging doesn't seem to work AT ALL. Anyone have any ideas? I guess it's something they changed between the released versions... fogging is most definitely enabled. All the code for that part is the same as before, and I've even enabled it in several other places just to make sure ;) My landscape looks so dull without fogging. Alternatively, I'll have to pixel-shade the fog (oh lord, why can't gfx coding be simple :P). Note that nothing gets fogged regardless of whether I use MRTs or not.

Back to the grind stone :)
-BPG

Hmm...

First thing to do, I guess, is to get your fog back up and running. Once you're "back where you started", the alpha blending problem can be readdressed. See if you can't update your display drivers... if they're out of date, maybe there's an incompatibility with DX9.0c that they later fixed.

I'm looking at the docs and not seeing anything explicitly new about fogging... The D3D debug runtime doesn't tell you anything?

I'm not sure which parts of your pipeline are now using shaders. If you have a pixel shader running, I believe that means you need to do the fog yourself; fog is usually handled by the fixed-function pixel pipeline, which you're now supplanting. Depending on the type of fog, it shouldn't be too hard to implement: replicate the z value of your vertices into a 1-dimensional texture coordinate, and voila, you've got access to the depth of the pixel through that coordinate in your pixel shader.
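For the common linear case, the do-it-yourself fog amounts to very little code. A hedged sketch (this mirrors D3D's documented linear fog formula, f = (end − d) / (end − start); the function names are made up):

```cpp
#include <algorithm>

// Linear fog factor as in D3DFOG_LINEAR: f = 1 means unfogged,
// f = 0 means fully fogged, clamped to [0, 1].
float linearFogFactor(float dist, float fogStart, float fogEnd)
{
    float f = (fogEnd - dist) / (fogEnd - fogStart);
    return std::clamp(f, 0.0f, 1.0f);
}

// Blend one colour channel toward the fog colour by that factor;
// in a pixel shader this is simply lerp(fogColour, litColour, f).
float applyFog(float litColour, float fogColour, float f)
{
    return fogColour + (litColour - fogColour) * f;
}
```

Computing f per pixel like this is also exactly what lets you skip the fog for the depth target while keeping it for the colour target.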

As far as the problems with the alpha technique go...

1) Yes, precision is a problem. I guess that can't be fixed without using a nicer format with more bits for the alpha channel... if you take the MRT approach, is there no D3DFMT_A16-type format you could use for the second render target? (Hardware support may be a problem...)

2) Depends on how your alpha blending is done. If you blend with [SRC_ALPHA, ONE_MINUS_SRC_ALPHA] then it's not a problem, because you're not using the destination alpha at all (which is where you're storing the depth data). Alpha *testing* is also done purely on the source texture, so no worries there.
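In code, the standard "over" blend he describes works out like this (a sketch with a made-up function name; the point is that destination alpha never enters the equation):

```cpp
// Standard over-blend, as set up in D3D9 with
// D3DBLEND_SRCALPHA / D3DBLEND_INVSRCALPHA:
//   dest = src * srcAlpha + dest * (1 - srcAlpha)
// Only the SOURCE alpha appears on the right-hand side, so depth
// stored in the destination's alpha channel survives the blend.
float blendOver(float src, float dst, float srcAlpha)
{
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```

The same reasoning covers alpha test: the comparison is against the source texel's alpha before blending, so it never reads the depth stashed in the render target.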

