
Hi, I came across this UDK article:

https://docs.unrealengine.com/udk/Three/VolumetricLightbeamTutorial.html

It explains, more or less, how to make a volumetric light beam using a cone. I'm not using Unreal Engine, so I just want to understand how the technique works.

What I'm having trouble with is how they calculate the X position of the UV coordinate. They mention the use of a "reflection vector" which, according to the documentation (https://docs.unrealengine.com/latest/INT/Engine/Rendering/Materials/ExpressionReference/Vector/#reflectionvectorws ), simply reflects the camera direction across the surface normal in world space (I assume that's what the WS suffix stands for).

So in my pixel shader I tried doing something like this:

float3 reflected_view = reflect(view_dir, vertex_normal);
float4 falloff = tex2D(falloff_texture, float2(reflected_view.x * 0.5 + 0.5, uv.y));

view_dir is the direction pointing from the camera to the surface point, in world space; vertex_normal is also in world space. Unfortunately it's not working as expected, probably because the calculations are being done in world space. I moved them to view space, but then there's a problem: moving the camera horizontally makes the coordinates "move" as well.
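In view space it looked roughly like this (just a sketch; view_matrix is whatever world-to-view matrix your engine provides, and I'm assuming the mul(matrix, vector) column-vector convention):

// transform both the view direction and the normal into view space first
float3 view_dir_vs      = normalize(mul((float3x3)view_matrix, view_dir));
float3 vertex_normal_vs = normalize(mul((float3x3)view_matrix, vertex_normal));
float3 reflected_view   = reflect(view_dir_vs, vertex_normal_vs);
float4 falloff = tex2D(falloff_texture, float2(reflected_view.x * 0.5 + 0.5, uv.y));

The problem can be seen below: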

[image: Screenshot_4.png]  [image: Screenshot_5.png]

Notice the white part in the second image, coming from the left side.

Surprisingly, I couldn't find as much information about this technique on the internet as I would have liked, so I decided to come here for help!


I am interested in this as well. 

I tried to write the HLSL based on what is going on in their "material node graph" image:

void main_ps(in VERTEX_OUT IN, out PIXEL_OUT OUT)
{
    // constants taken from the UDK material node graph
    static const float AddR = 8.0f;
    static const float MulR = 0.8f;

    // world-space vector from the surface point to the camera
    float3 vvec = camPos.xyz - IN.WorldPos;
    float3 v = normalize(vvec);
    float3 n = normalize(IN.Normal);

    // reflect the view direction across the surface normal
    float3 rvec = reflect(v, n);
    float rx = rvec.x;
    float rz = sqrt((rvec.z + AddR) * MulR);

    // map the reflection vector to the falloff texture's X coordinate
    float xcoord = (rx / rz) + 0.5f;
    float2 coord = float2(xcoord, IN.TexCoord0.y);
    float3 lightFalloff = tex2D(lightSamp, coord).rgb;

    float3 lightCol = float3(1.0f, 1.0f, 1.0f);
    OUT.Color = float4(lightCol, lightFalloff.r);
}

but I get the wrong result:

[image: KF5WUyg.png]

I copied their image to use as the light texture.

If you get it solved, please share.


I've made some progress: I found their original material and inspected it, and I was missing the transform of the ReflectionVector into tangent space.
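In isolation, the missing step is just this (the full shader is further down):

// build a world-space TBN basis and move the reflection vector into it
float3x3 tangentBasis = float3x3(normalize(IN.Tangent), normalize(IN.Binormal), normalize(IN.Normal));
float3 rvec = mul(R, tangentBasis); // R = world-space reflection vector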

In Unreal:

[image: z5baTJW.png]

[image: gZyNSG8.png]

In my test program:

[image: xio5vwR.png]

I have a seam error somehow (it might be a problem with my mesh exporter), and I could not get a correct result with CLAMP addressing (the image above is with WRAP):

[image: tpm2drH.png]
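The addressing mode is set on the sampler; with the D3D9 effect framework that looks something like this (lightTex is a placeholder name, not from my actual code):

sampler lightSamp = sampler_state
{
    Texture   = <lightTex>;  // placeholder texture variable
    AddressU  = WRAP;        // WRAP hides the out-of-range coords; CLAMP shows the seam
    AddressV  = CLAMP;       // V comes straight from the mesh UVs
    MinFilter = LINEAR;
    MagFilter = LINEAR;
};

And here is the full shader: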

void main_ps(in VERTEX_OUT IN, out PIXEL_OUT OUT)
{
    // world-space TBN basis: rows are tangent, binormal, normal
    float3x3 tangentBasis = float3x3(normalize(IN.Tangent), normalize(IN.Binormal), normalize(IN.Normal));

    float2 uv = IN.TexCoord0;

    static const float AddR = 8.0f;
    static const float MulR = 0.6f;

    // world-space vector from the surface point to the camera
    float3 toEye = camPos.xyz - IN.WorldPos.xyz;
    float3 v = normalize(toEye);
    float3 n = normalize(IN.Normal);

    // reflect the view direction, then move the result into tangent space
    float3 R = reflect(v, n);
    float3 rvec = mul(R, tangentBasis);

    float rx = rvec.x;
    float rz = sqrt((rvec.z + AddR) * MulR);

    // map the tangent-space reflection vector to the falloff texture's X coordinate
    float xcoord = (rx / rz) + 0.5f;
    float2 coord = float2(xcoord, uv.y);

    float3 lightFalloff = tex2D(lightSamp, coord).rgb;

    float3 lightCol = float3(1.0f, 1.0f, 1.0f);
    OUT.Color = float4(lightCol, lightFalloff.r);
}
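Since the falloff goes out through alpha, the beam presumably needs alpha blending and no depth writes. In effect-framework terms it would be something like this (main_vs and the additive DestBlend are my guesses, not taken from the UDK material):

technique VolumeBeam
{
    pass P0
    {
        AlphaBlendEnable = TRUE;
        SrcBlend         = SRCALPHA;
        DestBlend        = ONE;    // additive reads closest to a light beam
        ZWriteEnable     = FALSE;  // depth test, but no depth write
        VertexShader     = compile vs_3_0 main_vs(); // hypothetical vertex shader
        PixelShader      = compile ps_3_0 main_ps();
    }
}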

 



