
3D Sun shafts via postprocess artefacts


Hi all!

I'm trying to implement a sun shafts effect via post-processing in my 3D engine, but I get some artefacts in the final image (please see the attached images).

The effect contains the following passes:

1) Scene depth pass;

2) Shafts pass: using the depth pass texture + the RGBA back buffer texture;

3) Final pass: shafts pass texture + the RGBA back buffer texture.

Shafts shader for pass 2:

//
uniform sampler2D FullSampler; // RGBA Back Buffer
uniform sampler2D DepthSampler;

varying vec2 tex;

#ifndef saturate
float saturate(float val)
{
    return clamp(val, 0.0, 1.0);
}
#endif

void main(void)
{
    vec2  uv         = tex;
    float sceneDepth = texture2D(DepthSampler, uv).r;
    vec4  scene      = texture2D(FullSampler, uv);
    float fShaftsMask = 1.0 - sceneDepth;
    gl_FragColor = vec4(scene.xyz * saturate(sceneDepth), fShaftsMask);
}

Final shader (pass 3):

//
uniform sampler2D FullSampler; // RGBA Back Buffer
uniform sampler2D BlurSampler; // shafts sampler
varying vec4 Sun_pos;
const vec4    ShaftParams = vec4(0.1,2.0,0.1,2.0);

varying vec2 Tex_UV;

#ifndef saturate 
float saturate(float val)
{
    return clamp(val, 0.0, 1.0);
}
#endif

vec4 blendSoftLight(vec4 a, vec4 b)
{
    vec4 c = 2.0 * a * b + a * a * (1.0 - 2.0 * b);
    vec4 d = sqrt(a) * (2.0 * b - 1.0) + 2.0 * a * (1.0 - b);

    // The HLSL original is a per-component select: (b < 0.5) ? c : d.
    // any(lessThan(...)) collapses that into a single branch for the whole
    // vector, which is wrong; mix() with step() selects component-wise.
    return mix(c, d, step(vec4(0.5), b));
}

void main(void)
{
    vec4 sun_pos = Sun_pos;
    vec2    sunPosProj = sun_pos.xy;
    //float    sign = sun_pos.w;
    float    sign = 1.0;

    vec2    sunVec = sunPosProj.xy - (Tex_UV.xy - vec2(0.5, 0.5));
    float    sunDist = saturate(sign) * saturate( 1.0 - saturate(length(sunVec) * ShaftParams.y ));

    sunVec *= ShaftParams.x * sign;

    vec4 accum;
    vec2 tc = Tex_UV.xy;

    tc += sunVec;
    accum = texture2D(BlurSampler, tc);
    tc += sunVec;
    accum += texture2D(BlurSampler, tc) * 0.875;
    tc += sunVec;
    accum += texture2D(BlurSampler, tc) * 0.75;
    tc += sunVec;
    accum += texture2D(BlurSampler, tc) * 0.625;
    tc += sunVec;
    accum += texture2D(BlurSampler, tc) * 0.5;
    tc += sunVec;
    accum += texture2D(BlurSampler, tc) * 0.375;
    tc += sunVec;
    accum += texture2D(BlurSampler, tc) * 0.25;
    tc += sunVec;
    accum += texture2D(BlurSampler, tc) * 0.125;

    accum *= 0.25 * vec4(sunDist, sunDist, sunDist, 1.0);

    accum.w += 1.0 - saturate(saturate(sign * 0.1 + 0.9));

    vec4    cScreen = texture2D(FullSampler, Tex_UV.xy);      
    vec4    cSunShafts = accum;

    float fShaftsMask = saturate(1.00001 - cSunShafts.w) * ShaftParams.z * 2.0;
        
    float fBlend = cSunShafts.w;

    vec4 sunColor = vec4(0.9, 0.8, 0.6, 1.0);

    accum =  cScreen + cSunShafts.xyzz * ShaftParams.w * sunColor * (1.0 - cScreen);
    accum = blendSoftLight(accum, sunColor * fShaftsMask * 0.5 + 0.5);

    gl_FragColor = accum;
}

Demo project:

Demo Project

The shaders for the post-process are in Shaders/SunShaft/.

What am I doing wrong?

Thanks!
 

sun_shafts.png

sun_shafts2.png

Edited by Andrey OGL_D3D


From those images it seems to work correctly, but you should either take more samples or blur the result after the radial blur. You can also do the radial blur pass at a lower resolution, which can help with performance and the banding as well. I couldn't really understand your process as you explained it, so here is what I do:

1.) Draw the sun against scene depth buffer to a render target, depth test on to discard occluded sun pixels

2.) Radial blur on a low-res (0.5x) texture

3.) Draw the result on top with additive blending
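Step 1 above is mostly render state (bind the scene's depth buffer, enable depth testing, disable depth writes) plus a trivial shader. A minimal sketch of the sun-disc fragment shader, assuming the sun quad is drawn at far-plane depth; the uniform names, falloff factor, and sun colour here are illustrative assumptions, not an exact implementation:

```glsl
// Sketch of step 1: sun disc drawn into a small offscreen target with the
// scene depth buffer bound and depth testing enabled. Fragments covered by
// scene geometry fail the depth test, so only the visible sun survives.
uniform vec2 SunPosProj;   // sun position in screen UV space (assumed)
uniform vec3 SunColor;     // e.g. vec3(0.9, 0.8, 0.6)

varying vec2 Tex_UV;

void main(void)
{
    // Soft radial falloff around the sun position; the 20.0 factor
    // controlling the disc radius is an assumption to tweak.
    float dist = length(Tex_UV - SunPosProj);
    float disc = clamp(1.0 - dist * 20.0, 0.0, 1.0);
    gl_FragColor = vec4(SunColor * disc, 1.0);
}
```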

8 hours ago, Matt_Aufderheide said:

It looks like it's working correctly; you just need more samples to get it smoother, I think...

 

6 hours ago, turanszkij said:

but you should either take more samples or blur the result after the radial blur.

Matt_Aufderheide, turanszkij, thanks! But how do I calculate the correct number of samples? I think we could use the dimensions of the render target to calculate the number of samples.

 

6 hours ago, turanszkij said:

From those images it seems to work correctly, but you should either take more samples or blur the result after the radial blur. You can also do the radial blur pass at a lower resolution, which can help with performance and the banding as well. I couldn't really understand your process as you explained it, so here is what I do:

1.) Draw the sun against scene depth buffer to a render target, depth test on to discard occluded sun pixels

2.) Radial blur on a low-res (0.5x) texture

3.) Draw the result on top with additive blending

Yes, but isn't this a different sun shafts technique? I think this technique is described here, here, and also here?

5 hours ago, Andrey OGL_D3D said:

Matt_Aufderheide, turanszkij, thanks! But how do I calculate the correct number of samples? I think we could use the dimensions of the render target to calculate the number of samples.

You don't have to calculate the correct number of samples, just tweak it until it looks good. I am using 35 samples, but rendering at half resolution. This gives great results, but it could probably take fewer samples. Also make sure you are using a linear texture filter when sampling.
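To make the sample count easy to tweak, the eight unrolled taps in the pass-3 shader can be rewritten as a loop. A sketch along those lines; `NUM_SAMPLES` and the linearly decaying, normalized weights are assumptions that mimic the original 1.0, 0.875, 0.75, ... sequence rather than exact Crysis values:

```glsl
// Sketch: loop-based radial sampling with NUM_SAMPLES taps.
// Weights fall off linearly and are normalized, so changing
// NUM_SAMPLES does not change the overall brightness.
const int NUM_SAMPLES = 32;

vec4 radialBlur(sampler2D blurTex, vec2 uv, vec2 sunVec)
{
    vec4  accum       = vec4(0.0);
    float totalWeight = 0.0;
    vec2  tc          = uv;

    for (int i = 0; i < NUM_SAMPLES; i++)
    {
        tc += sunVec;
        float weight = 1.0 - float(i) / float(NUM_SAMPLES);
        accum       += texture2D(blurTex, tc) * weight;
        totalWeight += weight;
    }
    return accum / totalWeight;
}
```

With more samples you will likely also want to shrink the per-tap step (e.g. divide sunVec by NUM_SAMPLES) so the total blur distance stays the same while the gaps between taps, and hence the banding, get smaller.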

5 hours ago, Andrey OGL_D3D said:

Yes, but isn't this a different sun shafts technique? I think this technique is described here, here, and also here?

It is the same technique, but for some reason all of them say to draw the scene depth buffer after the sun; you probably already have a depth buffer by the time you are doing this effect, so it's best to just reuse that. By the radial blur I meant the shader that calculates the vector from the pixel to the sun and samples along that direction.
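Reusing an existing depth buffer for the mask pass could look roughly like this. A sketch only: the 0.9999 far-plane threshold is an assumption and depends on your projection and depth range; the point is that only sky pixels (at or near the far plane) contribute to the shafts source, instead of the continuous (1 - depth) mask:

```glsl
// Sketch: build the shafts source mask from an existing depth buffer.
// Sky pixels sit at (or very near) the far plane; everything closer
// occludes the shafts completely.
uniform sampler2D DepthSampler;
uniform sampler2D FullSampler;

varying vec2 tex;

void main(void)
{
    float depth = texture2D(DepthSampler, tex).r;
    // 1.0 for sky / far-plane pixels, 0.0 for geometry. The threshold
    // is an assumption; adjust it to how your depth was written.
    float skyMask = step(0.9999, depth);
    vec3  scene   = texture2D(FullSampler, tex).rgb;
    gl_FragColor  = vec4(scene * skyMask, skyMask);
}
```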

43 minutes ago, turanszkij said:

You don't have to calculate the correct number of samples, just tweak it until it looks good. I am using 35 samples, but rendering at half resolution. This gives great results, but it could probably take fewer samples. Also make sure you are using a linear texture filter when sampling.

OK, thanks again. I will try it and check the linear filter.

44 minutes ago, turanszkij said:

It is the same technique, but for some reason all of them say to draw the scene depth buffer after the sun; you probably already have a depth buffer by the time you are doing this effect, so it's best to just reuse that. By the radial blur I meant the shader that calculates the vector from the pixel to the sun and samples along that direction.

Yes, I have a separate scene depth pass without the sun. Now I understand, thanks!


Hi all!

After some modification of the shader, here is the result for 16 samples:

image.thumb.png.5f35d407bc2589104f73b92d9640431c.png

The result for 32 samples:

image.thumb.png.7e1ae09735a907dd12598ad41c3d88a8.png

I also tried changing ShaftParams:

const vec4    ShaftParams = vec4(0.05, 1.0, 0.1, 2.0);

But I still have some artefacts.

Maybe I should use additive blending + blur at low resolution, as in turanszkij's post, but it may be simpler to fix this implementation.

I attached the new shader.

shaftsCryPS.glsl



