Raytraced lighting in 2D, HLSL

Has anyone tried raytracing lighting in 2D using HLSL? So far, I've come up with this:

// Shader constants set by the application.
sampler texSampler0;   // 800x600 occlusion map: white = empty, black = opaque
float2 lightPos;       // light position, in pixels
float lightRadius;     // light radius, in pixels

float4 RenderLightPS( float2 Tex : TEXCOORD0 ) : COLOR0
{
    // Convert the texture coordinate into a pixel position.
    float2 pixelPos = Tex * float2(800.0f, 600.0f);

    // Linear attenuation: full brightness at the light, zero at the radius.
    float att = lightRadius - distance(lightPos, pixelPos);
    att = clamp(att, 0.0f, lightRadius) / lightRadius;

    // Unit direction from this pixel toward the light.
    float2 dir = normalize(lightPos - pixelPos);

    // March from the pixel toward the light, up to 256 pixels, multiplying
    // in the occlusion map at each step; any opaque texel along the way
    // darkens this pixel.
    for (int i = 0; i < 256; i++)
    {
        float2 samPos = pixelPos + dir * i;
        // The sampling coordinates must be back in [0,1] texture space.
        float4 sam = tex2D(texSampler0, samPos / float2(800.0f, 600.0f));
        att *= sam.x;
    }

    return float4(att, att, att, 1);
}


Where the texture in texture stage 0 contains a "depth" map, as in empty space is rgba(1,1,1,1) and fully opaque space is rgba(0,0,0,1), at 800x600. This HLSL shader is run on an 800x600 render target, with an additive pass (Source*One + Dest*One) for each light. That texture is then multiplied by the scene to apply the lighting (a sketch of that combine pass is below). When rendered, it looks like this: [screenshot]. It looks a bit weird, but the lights and objects were randomly positioned and colored. Note that the left-most light is a spotlight. So, anyone else tried this?
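The combine pass I mean could be as simple as this (the sampler names are just placeholders):

sampler sceneSampler;     // the unlit scene
sampler lightMapSampler;  // the 800x600 additively-accumulated light texture

float4 CombinePS( float2 Tex : TEXCOORD0 ) : COLOR0
{
    // Modulate the rendered scene by the accumulated light map.
    return tex2D(sceneSampler, Tex) * tex2D(lightMapSampler, Tex);
}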
I'm not sure what you're trying to do in the shader. Perhaps I just didn't look at it closely enough. Are you trying to implement ray tracing using a pixel shader?

neneboricua
Yes, that is what I am trying to do. I have done it, as shown in the screenshots, but I'm curious as to how any of you would do it.
GPGPU.org has many raytracing examples.
Part of my graduate thesis a few years back was on ray tracing on the GPU. I did it in a different way though.

Typical ray tracing is done in a depth-first manner, where a ray is shot into the scene and bounced around a number of times before the final pixel color is computed.

To do ray tracing on the GPU, you have to do it in a breadth-first manner because the graphics chip likes to operate on a lot of pixels at once. So the idea was to render a screen-aligned quad and associate with it a couple of textures the same size as the screen. One texture would contain the ray directions; another would contain the ray origins. The pixel shader would then perform a ray intersection test with a single object that had been passed to the shader as a set of constants from the application.
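As a rough sketch (this is illustrative, not the actual code from my thesis), the intersection test for a single sphere could look like:

sampler rayOriginTex;  // per-pixel ray origins (floating point texture)
sampler rayDirTex;     // per-pixel ray directions (floating point texture)
float4 sphere;         // xyz = center, w = radius; set per draw call

// Returns the nearest positive hit distance, or -1 on a miss.
// Assumes the ray direction is unit length.
float IntersectSphere( float3 o, float3 d, float4 s )
{
    float3 oc = o - s.xyz;
    float b = dot(oc, d);
    float c = dot(oc, oc) - s.w * s.w;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;
    return -b - sqrt(disc);
}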

I used multiple floating point render targets to record the point of intersection, the normal vector at that point, the color of the intersection point and a number of other pieces of data. The pixel shader also output the depth of this point to the z-buffer. This process was repeated for each primitive in the scene.
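Continuing the sketch above, the pixel shader for the intersection pass wrote out something along these lines (sphereColor and farDist are made-up stand-ins for whatever constants the application supplies):

float4 sphereColor;  // material color for this primitive (illustrative)
float  farDist;      // maps hit distance into the [0,1] depth range

struct PS_OUTPUT
{
    float4 Position : COLOR0;  // world-space hit point
    float4 Normal   : COLOR1;  // surface normal at the hit
    float4 Color    : COLOR2;  // material color
    float  Depth    : DEPTH0;  // lets the z-buffer keep only the closest hit
};

PS_OUTPUT IntersectPS( float2 Tex : TEXCOORD0 )
{
    PS_OUTPUT Out;
    float3 o = tex2D(rayOriginTex, Tex).xyz;
    float3 d = normalize(tex2D(rayDirTex, Tex).xyz);

    float t = IntersectSphere(o, d, sphere);
    clip(t);  // kill pixels whose ray misses the sphere

    float3 hit = o + t * d;
    Out.Position = float4(hit, 1.0f);
    Out.Normal   = float4(normalize(hit - sphere.xyz), 0.0f);
    Out.Color    = sphereColor;
    Out.Depth    = saturate(t / farDist);
    return Out;
}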

Once all of the objects had been ray traced, the final illumination of the scene was computed using a single screen-aligned quad. The pixel shader would then retrieve the position, normal vector, color, etc... of the closest intersection point (outputting depth from the pixel shader lets the z-buffer make sure that only the closest intersection points get written out) and perform any lighting computations necessary.
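Stripped down to a single diffuse term, that final pass might look like this (again just a sketch; my actual shader did more than this):

// These samplers are the render targets filled by the intersection passes.
sampler positionTex;  // COLOR0: world-space hit points
sampler normalTex;    // COLOR1: normals
sampler colorTex;     // COLOR2: material colors
float3 lightPosition; // world-space point light (illustrative)

float4 ShadePS( float2 Tex : TEXCOORD0 ) : COLOR0
{
    float3 p      = tex2D(positionTex, Tex).xyz;
    float3 n      = normalize(tex2D(normalTex, Tex).xyz);
    float4 albedo = tex2D(colorTex, Tex);

    // Simple Lambert diffuse from a single point light.
    float3 l = normalize(lightPosition - p);
    return albedo * saturate(dot(n, l));
}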

The beauty of this was that lighting computations were performed only once per pixel. In a typical 3D scene, pixels are often computed 3 or more times only to be overwritten later on. Another good thing about it is that you can ray trace any primitive for which you can write a ray-intersection pixel shader that fits within your graphics card's instruction limits.

However, there are a number of drawbacks. First, unless your scene is very controlled, you have to use floating point render targets to save the intermediate data, which can eat up a lot of bandwidth. Also, outputting depth from the pixel shader doesn't allow the graphics card to perform early z-testing. Rendering one screen-aligned quad for each primitive can also kill your fillrate. The way I got around that was to project my primitive onto the screen, take the bounding square of this projection, and set that as my scissor rect; any pixels outside the scissor rect will not have the pixel shader run on them.

Visually, the results are the same as any rasterized scene. However, if you build a ray tracing engine around this concept, you can start to see a number of possibilities for reflections and refractions that just aren't possible with rasterized scenes.

The performance wasn't that bad. I was ray tracing 100k dynamic spheres and getting about 30 frames per second at a resolution of 500x500. This was on a Radeon 9800. There were no spatial optimizations used at all.

neneboricua
Did your raytracing include shadows?
Quote:Original post by CadeF
Did your raytracing include shadows?

Oh yes, I forgot to mention that. It did include shadows. :)
Quote:Original post by neneboricua19
Quote:Original post by CadeF
Did your raytracing include shadows?

Oh yes, I forgot to mention that. It did include shadows. :)

Is there any chance you can upload the thesis somewhere? I've only managed to find that you were advised by Sumanta N. Pattanaik.

Quote:Original post by Coder
Is there any chance you can upload the thesis somewhere? I've only managed to find that you were advised by Sumanta N. Pattanaik.

Yeah, that was me. I don't know if my advisor put it online anywhere but I know I haven't. But just cuz you asked, I've made a .pdf out of it and uploaded it to my little site.

The site itself isn't much to look at and I mainly use it for easy access to a few things I'm working on.

neneboricua

