ProgrammerDX

Glow Shader


Hey,

Thanks for this forum.

I have a question about real-time glow on 2D objects. I already have this working, but my current method feels wasteful, and it causes lag on some older computers.

Note: I use the words 'shader' and 'effect' interchangeably; I mean the .fx file, in which you can easily define vertex and pixel shaders together in one file. Frankly, I don't know whether the global variables in effect files should be called 'shader globals' or 'effect globals'.

What I do is the following:
Initialize:
I create 3 render targets with the same size as the back buffer (unlike the back buffer, the render targets have an alpha channel).
I load the shader/effect file (ID3DXEFFECT); it has only pixel shaders defined in it.
I load the 2D texture (it can be any kind of shape and has transparency).
I precompute the Gaussian blur weights.
I load the sprite (ID3DXSPRITE), which I use for rendering 2D textures and render targets.
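The "precompute Gaussian blur" step could look something like the sketch below; `computeGaussianWeights` is a hypothetical helper (not from the original post), whose result would then be uploaded to the effect's global variables:

```cpp
#include <cmath>
#include <vector>

// Sketch: build a normalized 1D Gaussian kernel of (2 * radius + 1)
// weights for a given sigma. The same weights are used for both the
// horizontal and the vertical blur pass, since the Gaussian is separable.
std::vector<float> computeGaussianWeights(int radius, float sigma)
{
    std::vector<float> weights(2 * radius + 1);
    float sum = 0.0f;
    for (int i = -radius; i <= radius; ++i)
    {
        float w = std::exp(-(i * i) / (2.0f * sigma * sigma));
        weights[i + radius] = w;
        sum += w;
    }
    // Normalize so the blur neither brightens nor darkens the image.
    for (float& w : weights)
        w /= sum;
    return weights;
}
```

Normalization matters: if the weights summed to more or less than 1, the glow would get brighter or darker with every blur pass.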

Per frame, for the 2D texture that I want to make 'glow':
1. I pass the Gaussian blur data and blur color to the shader/effect global variables.
2. I draw the 2D texture on the 1st render target (cleared first).
3. I draw the 1st render target on the 2nd render target (cleared first), and in the shader (pass 1) I color everything that is not 100% transparent in the glow color I want. (So the 2nd render target holds a colored silhouette of the object I ultimately want to make glow.)
4. I draw the 2nd render target on the 3rd render target (cleared first), and in the shader/effect (pass 2) I do the horizontal blur.
5. I draw the 3rd render target on the back buffer, and in the shader/effect (pass 3) I do the vertical blur (additive blend, with dest op = ADD and src op = SRCALPHA).
6. I draw the 1st render target on the back buffer to overlay the original texture on top of the silhouette that would otherwise show through.

This all works and gives a 2D texture a glow around its edges.
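The silhouette-coloring pass of that setup can be very small. A sketch in HLSL, where `Tex0` and `GlowColor` are assumed names for the bound render target and the effect global:

```hlsl
float4 GlowColor;   // effect global, set from C++ each frame
sampler Tex0;       // the 1st render target, holding the drawn 2D texture

// Tint every non-transparent pixel with the glow color, keeping the
// source coverage in alpha; the result is the colored silhouette.
float4 ps_colorize( float2 uv : TEXCOORD0 ) : COLOR0
{
    float4 src = tex2D(Tex0, uv);
    return float4(GlowColor.rgb, src.a);
}
```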

The problem is that since the 2D texture is relatively small compared to the render targets, I feel like I'm wasting a lot of resources, because:
1. the shader is also blurring the 'empty' space, and
2. the render targets have empty areas, so every frame those empty areas get processed as well (though of course you don't see that).

Any ways to optimize this? I'm thinking of the following:
1. Make the render targets used for blurring half the size of the back buffer.
2. Use clip() in the pixel shaders for pixels that are fully transparent and would stay fully transparent even after the blur. Would that even help? I doubt it.
3. Use the stencil buffer; I could probably use 2 render targets instead of 3 and skip the last step.
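For idea 2, a clip() version would look roughly like this sketch (`blur` stands for a hypothetical helper doing the kernel taps). Note that clip() only skips work that comes after it, so the expensive sampling still runs, which is why the gain is indeed doubtful:

```hlsl
// Sketch of idea 2: discard pixels whose blurred result is fully
// transparent. The kernel taps inside blur() must still execute before
// we know the result is transparent, so little work is actually saved.
float4 ps_blur_clipped( float2 uv : TEXCOORD0 ) : COLOR0
{
    float4 sum = blur(uv);   // hypothetical helper doing the kernel taps
    clip(sum.a - 0.001);     // kill fully transparent results
    return sum;
}
```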

Thanks for reading,


How many source textures do you have? If you have a fixed number, you could just create pre-blurred versions of them in Photoshop or something, and draw the blurred ones instead of the non-blurred ones.


Hi,

I found this topic, which is related to my question but does not answer it, so I thought this would be the right place to ask!

I am trying to achieve glow in a 3D environment using DirectX 9. To get started I made a very simple scene: the camera faces a flat, white square. No lights.

 

To understand and implement the effect I used mostly these two sources:

http://devmaster.net/posts/3100/shader-effects-glow-and-bloom

http://rastergrid.com/blog/2010/09/efficient-gaussian-blur-with-linear-sampling/

Now, before I start copy-pasting large chunks of code, I thought I'd explain my understanding of the algorithm and let you guys go "NO! You got it completely wrong!"... :)

 

Main steps:

 

1. Render the scene to an empty 2D texture

2. Blur the resulting 2D texture

3. Render the scene to the back buffer as usual, applying the previously created texture

 

 

Key things according to my understanding:

 

1. The render-to-texture stage uses a custom pixel shader that is responsible for the blurring, i.e. once rendering is finished, the "image" produced is already blurry. There is no need to post-process the texture pixels at the C++ level.

2. This shader uses the Gaussian method to create the blur effect, but that is almost irrelevant; what matters is that each pixel spreads its color to the surrounding pixels with a very low alpha. Some pixels thus accumulate more color than others (typically less at the edges) and therefore end up with a higher alpha value (again, lower at the edges).

3. Finally, rendering the scene using the produced, blurred texture is done with transparency, so that my square appears covered by a "cloud" that extends beyond its edges.
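Incidentally, the reason both linked articles split the blur into a horizontal and a vertical pass is that the 2D Gaussian is separable:

```latex
G(x,y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2+y^2}{2\sigma^2}}
       = \left(\frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{x^2}{2\sigma^2}}\right)
         \left(\frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{y^2}{2\sigma^2}}\right)
```

so a kernel of N taps per axis costs 2N texture reads per pixel instead of N².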

 

Note that this last point confuses me: I render my square (to the back buffer) with texture coordinates of 0 to 1, so I would have thought that the blurred texture would be contained within the limits of the quad, no?

 

 

I am more than happy to share the entire code, if necessary, but here are the key sections, as a quick glance:

 

 

The quad definition:

Vertex vertices[] =
{
    // x,     y,    z,    u,    v
    {-1.0f,  1.0f, 0.0f, 0.0f, 0.0f},
    {-1.0f, -1.0f, 0.0f, 0.0f, 1.0f},
    { 1.0f,  1.0f, 0.0f, 1.0f, 0.0f},
    { 1.0f, -1.0f, 0.0f, 1.0f, 1.0f}
};

Rendering code:

// Render to texture
device->SetRenderTarget(0, textureSurface);
device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(100, 100, 100), 1.0f, 0);
device->BeginScene();
device->SetTransform(D3DTS_WORLD, &(R * T));
device->SetTransform(D3DTS_PROJECTION, &PT);

// Technique and constants must be set before Begin()
effect->SetTechnique("Blur");
effect->SetMatrix("WorldViewProj", &((R * T) * V * PT));
effect->Begin(&passes, 0);
for (UINT p = 0; p < passes; p++)
{
    effect->BeginPass(p);
    device->SetFVF(FvF);
    device->SetStreamSource(0, buffer, 0, sizeof(Vertex));
    device->SetTexture(0, NULL);
    device->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);
    effect->EndPass();
}
effect->End();
device->EndScene();


// Render to back buffer
device->SetRenderTarget(0, backbufferSurface);
device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
device->BeginScene();
device->SetTransform(D3DTS_WORLD, &(R * T));
device->SetTransform(D3DTS_PROJECTION, &PB);
device->SetFVF(FvF);
device->SetStreamSource(0, buffer, 0, sizeof(Vertex));
device->SetTexture(0, renderingTexture);
device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_DESTCOLOR); // was D3DBLEND_INVSRCALPHA
device->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
device->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);
device->EndScene();
device->Present(NULL, NULL, NULL, NULL);

Pixel shader for the render-to-texture phase:
PS_OUTPUT ps_main( in PS_INPUT In )
{
    float offset[5] = { 0.0, 1.0, 2.0, 3.0, 4.0 };
    float weight[5] = { 0.2270270270, 0.1945945946, 0.1216216216, 0.0540540541, 0.0162162162 };

    PS_OUTPUT Out = { float4(0.0, 0.0, 0.0, 0.0) };

    // Centre tap, sampled once (starting the loop at i = 0 would
    // double-count it and over-brighten the result)
    Out.Color = tex2D( Tex0, In.Texture / 1024.0 ) * weight[0];

    for (int i = 1; i < 5; i++)
    {
        Out.Color += tex2D( Tex0, ( In.Texture + float2(0.0, offset[i]) ) / 1024.0 ) * weight[i];
        Out.Color += tex2D( Tex0, ( In.Texture - float2(0.0, offset[i]) ) / 1024.0 ) * weight[i];
    }

    return Out;
}
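Since the offsets above are only applied on y, this shader blurs vertically only; a separable Gaussian needs a second, horizontal pass over its output. A sketch of that counterpart, assuming the same Tex0 sampler and weights:

```hlsl
// Hypothetical horizontal counterpart: identical weights, offsets on x.
// Run as a second pass over the result of the vertical pass above.
PS_OUTPUT ps_blur_h( in PS_INPUT In )
{
    float offset[5] = { 0.0, 1.0, 2.0, 3.0, 4.0 };
    float weight[5] = { 0.2270270270, 0.1945945946, 0.1216216216, 0.0540540541, 0.0162162162 };

    PS_OUTPUT Out = { float4(0.0, 0.0, 0.0, 0.0) };
    Out.Color = tex2D( Tex0, In.Texture / 1024.0 ) * weight[0];   // centre tap once

    for (int i = 1; i < 5; i++)
    {
        Out.Color += tex2D( Tex0, ( In.Texture + float2(offset[i], 0.0) ) / 1024.0 ) * weight[i];
        Out.Color += tex2D( Tex0, ( In.Texture - float2(offset[i], 0.0) ) / 1024.0 ) * weight[i];
    }

    return Out;
}
```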

I hope this all makes sense. Do let me know if anything is unclear.

 

Remark: the code above does not give me a completely incorrect result. The main square appears textured with a slightly transparent inner square.

 

Any help/advice/comment would be much appreciated! Many thanks!

Philippe


What are you doing in this first pass? What is Tex0? How are you projecting your full-screen quad at the end of this stage?

Edited by Styves


I'm not entirely sure what your question is. Your quad's texcoords are only used for sampling the input color target. The blur will be applied to every pixel rasterized by the quad. If it's fullscreen, the entire screen will blur.


I'm not sure what the question is either. In fact I don't understand the setup either. The render-to-texture pass should only render objects using an emissive shader. The glow/blur should be applied to this texture as a second pass, and the third pass should render it to the screen (or you can do blur directly to the screen). I'm not sure why the blur is being done in the render-to-texture pass.

Edited by Styves


Well, to be entirely correct (not saying you're wrong, you're not), the extraction pass should render any object you want to glow. This can be determined by things such as a color threshold, HDR, Venus entering the seventh house in January, whatever you want.

But yes, there are issues with trying to blur as you extract, since everything will become blurry.
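A minimal threshold-based extraction pass of the kind mentioned above might look like this. It's a sketch: `Tex0`, `Threshold`, and the luminance coefficients are assumptions, not from the thread:

```hlsl
sampler Tex0;      // the rendered scene
float Threshold;   // hypothetical effect global, e.g. 0.8

// Keep only pixels brighter than the threshold; everything else black.
// The blur pass then runs on this extracted image, not on the full scene.
float4 ps_extract( float2 uv : TEXCOORD0 ) : COLOR0
{
    float4 c = tex2D(Tex0, uv);
    float luma = dot(c.rgb, float3(0.299, 0.587, 0.114));
    return (luma > Threshold) ? c : float4(0, 0, 0, 0);
}
```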
