help with bloom and directx

5 comments, last by unexist 15 years, 8 months ago
Hey all, I'm pretty new to graphics programming and I'm trying to implement a bloom effect in a mech game I'm working on for a student project, but I'm confused on a few things after reading the forums and articles here and on Gamasutra. I'm using DirectX 9.0c (August 2007 SDK).

Here is the general gist of what I've gotten from the articles and threads I've read on how to implement bloom. Please correct me if I'm wrong or if there is a better way to do this.

1. Render the 3D portion of the scene to a texture the size of the back buffer.
2. Using the texture you just rendered to, for each pixel, multiply the RGB value of the color by the alpha value and put the result into a secondary texture.
3. Resize the texture with the alpha*RGB color data to 1/4 of the original size.
4. Perform a horizontal Gaussian blur on the pixels that have color.
5. Perform a vertical Gaussian blur on the pixels that have color.
6. Resize the texture back up to its original size.
7. Blend the original texture with the resized blurred texture you just created.
8. Copy the blended texture into the backbuffer and present.

How do I write into a secondary texture from the pixel shader when doing the alpha*RGB operation? Is this as simple as calling device->SetRenderTarget(0, secondTexture) and then rendering the first texture from the sprite interface, or is there a way to do it directly from the shader if I pass the second texture to it as a parameter?

How do I resize the texture? The only way I know of is to create a texture of the appropriate size and then use the device->StretchRect() function. Can I do this in the shader as a pass, by averaging each 4x4 block of texels of the larger texture and writing the result into a smaller texture? If so, once again, how do I write into another texture from the shader?

Please forgive my ignorance of this topic.
If anyone can recommend any books, articles, samples, or whatever else will help me pick this stuff up some more, it would be appreciated. Thanks in advance..
One of the first games to use bloom/glow over the image was "TRON 2.0".
There's a sample and whitepaper that cover the technique here:

Nvidia SDK

Also look at the Gamasutra article by Greg James:
Real-Time Glow

LeGreg
Quote:Original post by unexist
1. Render the 3d portion of the scene to a texture the size of the back buffer.
2. Using the texture you just rendered to, for each pixel, multiply the rgb value of the color by the alpha value and put the result into a secondary texture.
3. Resize the texture with the alpha*rgb color data to 1/4 of the original size.
4. Perform horizontal gaussian distribution on the pixels that have color.
5. Perform vertical gaussian distribution on the pixels that have color.
6. Resize the texture back up to its original size.
7. Blend the original texture with the resized blurred texture you just created.
8. Copy the blended texture into the backbuffer and present.


This is all generally correct, and is a common way of implementing glow/bloom. In fact, it's exactly the way it's implemented in the Nvidia paper that LeGreg mentioned. But just to be clear, it's certainly not the only way to do things. For instance, it's also very common not to store a glow value in the alpha channel and instead perform a threshold on the original texture. A shader to perform this is very simple:

float g_fThreshold = 0.65f;

float4 PS(in float2 vTexCoord : TEXCOORD0) : COLOR0
{
    float4 vColor = tex2D(texSampler, vTexCoord) - g_fThreshold;
    return max(vColor, 0);
}


So what happens when you do this is that bright areas of the image are bloomed "automatically"; you don't have to output a glow value to the alpha channel when you're rendering all the 3D geometry. Of course this will give you different results, so which is better depends on what you want out of your bloom and how your rendering is laid out... if you're using the alpha channel for something else (like alpha blending), then a threshold might be a better idea.

Quote:Original post by unexist
How do I write into an secondary texture from the pixel shader when doing the alpha*rgb operation? Is this as simple as calling device->SetRenderTarget(0, secondTexture) and then rendering the first texture from the sprite interface or is there a way to do it directly from the shader if I pass the second texture to it as a parameter?


Yes, you just call SetRenderTarget with index 0 and then render as normal. After you do that, anything you output using the COLOR0 semantic will be written to that surface. A few things to keep in mind:

a) the surface has to either be from a texture created in D3DPOOL_DEFAULT with the D3DUSAGE_RENDERTARGET flag, or created using IDirect3DDevice9::CreateRenderTarget
b) the surface has to be in a format the device can render to
c) the texture can't be bound to a texture sampler input (this won't actually produce an error, it'll cause undefined behavior. The debug runtime will produce a warning about it)
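To make that concrete, here's a rough sketch of creating and binding a render-target texture (the variable names are just placeholders, and error checking is omitted for brevity):

```cpp
// Create a render-target texture (must be D3DPOOL_DEFAULT)
IDirect3DTexture9* pRenderTexture = NULL;
device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                      D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &pRenderTexture, NULL);

// SetRenderTarget takes a surface, not a texture, so grab the top mip level
IDirect3DSurface9* pSurface = NULL;
pRenderTexture->GetSurfaceLevel(0, &pSurface);
device->SetRenderTarget(0, pSurface);
pSurface->Release();

// ...draw; COLOR0 output now lands in pRenderTexture...
// Bind a different render target before sampling pRenderTexture in a later pass.
```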


Quote:Original post by unexist
How do I resize the texture? The only way I know of is to create a texture of the appropriate size and then use the device->StrechRect() function. Can I do this in the shader as a pass by averaging each 4x4 block of texels of the larger texture and writing the result into a smaller texture? If so, once again, how do I write into another texture from the shader?


You can use StretchRect to resize a texture... the catch is that with linear filtering you'll lose data if you resize to smaller than 1/2 the original size. Two passes would work fine, or if you want to do it in one pass then averaging a 4x4 block of texels will do the trick. Doing the filtering manually in the shader also allows you to work with formats that can't be filtered by the GPU; for instance, ATI X-series and X1000-series cards can't filter fp16 and fp32 formats.
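For reference, a downsample pass that averages a 4x4 block might look something like this (just a sketch; texSampler and g_vSrcTexelSize are assumed to be hooked up in your effect, with g_vSrcTexelSize = 1/source width and 1/source height):

```hlsl
sampler texSampler;        // the full-size source texture
float2 g_vSrcTexelSize;    // (1/sourceWidth, 1/sourceHeight)

float4 PS_Downsample4x4(in float2 vTexCoord : TEXCOORD0) : COLOR0
{
    float4 vSum = 0;

    // vTexCoord is the center of the destination pixel, which covers a
    // 4x4 block of source texels; offsets run from -1.5 to +1.5 texels.
    for (int y = 0; y < 4; y++)
        for (int x = 0; x < 4; x++)
            vSum += tex2D(texSampler,
                          vTexCoord + (float2(x, y) - 1.5f) * g_vSrcTexelSize);

    return vSum / 16.0f;
}
```

With point sampling this is an exact box average; with linear filtering you could get away with 4 taps placed between texels instead of 16.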


For this topic, I very much recommend checking out the PostProcess sample in the SDK. It not only demonstrates bloom, but also shows you how to handle general post-processing tasks: setting up a full-screen quad, chaining together effects, and ping-ponging between textures. Definitely worth spending some time with. [smile]


Thanks for your advice.

I've actually read both the papers mentioned and looked at the PostProcess and HDR lighting samples. There is a lot of code in there that's useless to me, and I'm having a hard time separating it from the stuff I actually need.

I decided I'm going to use the brightness threshold approach. To go into a little more detail on what exactly I'm confused about:

Once I draw the scene to the backbuffer, I want to do the brightness threshold and downsample 4x by averaging each 4x4 block of pixels in one pass. Can I just pass the surface of the backbuffer to the shader, set a new render target, and begin and end the shader without drawing anything in between the begin and end calls? That seems strange to me... can you even do that?

Once I'm done with the downsampling and brightpass, do I switch the render target to a third texture outside of the shader and pass the downsampled blur texture as a parameter, again with nothing in between the begin and end calls?

Then set the render target back to the back buffer, pass the blurred texture as a parameter, and upscale 4x while blending? How do I sample the backbuffer source texture that I'm rendering to in order to blend, if I can't pass it as a parameter?

Also, I'm not using a full screen quad. I'm just trying to work with textures and not have to deal with vertex shaders. Is that going to be a problem?

Please forgive my noobness, and thanks in advance for your help.

[Edited by - unexist on August 7, 2008 10:31:10 AM]
You can't pass a surface as input to a shader; you need to pass a texture. All textures have at least one surface, but not all surfaces belong to a texture. In particular, the backbuffer is a surface that doesn't belong to a texture, so you can't directly read from it in a shader. However, there are two simple alternatives:

-Create a texture in the D3DPOOL_DEFAULT pool and use StretchRect to copy data from the backbuffer to this texture.
-Create a texture with D3DUSAGE_RENDERTARGET and render your 3D scene to this texture instead of the backbuffer.

When you're doing all these downsampling and blurring passes, you have to render something. If you just call Begin and End on your effect, nothing will happen. You need to draw some geometry so that pixels are rendered to the current render target. The reason I mention a full-screen quad is because you want to make sure you render to every single texel of the render target, and the easiest/cheapest way to do that is just use two triangles that cover the whole screen.

If for now you don't want to mess around with handling the quad yourself, you can use ID3DXSprite. Just make sure when you call ID3DXSprite::Begin you pass the D3DXSPRITE_DONOTMODIFY_RENDERSTATE flag. Also make sure the Vertex Shader in your effect is set to NULL. Then you can just set the current texture you're sampling from, and draw a sprite that's the same size as the current render target.
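As a sketch, the sprite path might look like this (pSprite, pEffect, and pSrcTexture are placeholders for your own objects, and pSrcTexture must not be the current render target):

```cpp
// Assumes: pSprite is an ID3DXSprite*, pEffect an ID3DXEffect* whose
// technique has VertexShader = NULL, and pSrcTexture the texture being read.
pSprite->Begin(D3DXSPRITE_DONOTMODIFY_RENDERSTATE);

UINT numPasses = 0;
pEffect->SetTexture("srcTexture", pSrcTexture);
pEffect->Begin(&numPasses, 0);
pEffect->BeginPass(0);

// Draw a sprite covering the render target; your pixel shader runs per texel.
pSprite->Draw(pSrcTexture, NULL, NULL, NULL, 0xFFFFFFFF);
pSprite->End();   // End flushes the batched draw while the pass is still active

pEffect->EndPass();
pEffect->End();
```

One caveat: ID3DXSprite draws the sprite at the texture's own size, so when the source and destination sizes differ (the downsample and upsample passes) you'd need to scale it, e.g. with ID3DXSprite::SetTransform.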

Ok, so in that case how about this...

1. Create TEXTURE#1, a texture in the default pool with the render-target usage flag.
2. Create TEXTURE#2 with the same flags that is 1/4 the size of that texture.
3. Create TEXTURE#3 with the same flags that is the same size as TEXTURE#2.
4. Render the 3d portion of the scene to TEXTURE#1.
5. Set the render target to TEXTURE#2
6. Begin ID3DXSPRITE with the D3DXSPRITE_DONOTMODIFY_RENDERSTATE flag.
7. Pass TEXTURE#1 as a parameter to the shader
8. Begin down sample 4x and brightness threshold pass.
9. Render TEXTURE#2 using ID3DXSPRITE. (Can I do this if its currently the render target?)
10. End down sample 4x and brightness threshold pass.
11. Set render target to TEXTURE#3
12. Pass TEXTURE#2 as a parameter
13. Begin horizontal blur pass.
14. Render TEXTURE#3 using ID3DXSPRITE. (Can I do this if its currently the render target?)
15. End horizontal blur pass.
16. Turn on alpha blending with source set to one and destination set to one.
17. Begin vertical blur pass.
18. Render TEXTURE#3 using ID3DXSPRITE. (Can I do this if it's currently the render target?) (If I have blending enabled and I just return a color, will it blend automatically?)
19. End vertical blur pass.
20. Set render target back to TEXTURE#1.
21. Pass TEXTURE#3 as a parameter
22. Begin Upsample 4x pass
23. Render TEXTURE#1 using ID3DXSPRITE. (Can I do this if its currently the render target?)
24. End Upsample 4x pass
25. Copy TEXTURE#1 into the backbuffer using StretchRect
26. End sprite

Maybe?

This topic is closed to new replies.
