computing light attenuation using textures
Can anyone explain how to compute light attenuation using textures, a la Doom 3?
I am setting up my lights so that they have a bounding box that defines their range of influence. Now I just need to attenuate them.
It's explained here
Basically, you use an attenuation map consisting of a 3D texture, or a pair of 2D and 1D textures. You then have to calculate the texture coordinates for every vertex illuminated by the light.
Well, it's explained much better at the link. One note, though: the tutorial uses NVIDIA register combiners to perform the final calculation
(1 - (tex0 + tex1))*color
but I did it with GL_ARB_texture_env_combine without trouble.
Still struggling with this.
I am trying to implement this technique using GLSL.
I create and load my attenuation textures in the application like so:
// Make 1d attenuation texture
tImageTGA *img = LoadTGA("./data/textures/atten1d.tga");
glGenTextures(1, &m_att1d);
glBindTexture(GL_TEXTURE_1D, m_att1d);
glTexImage1D(GL_TEXTURE_1D, 0, GL_INTENSITY8, img->sizeX, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, img->data);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);

// Make 2d attenuation texture
img = LoadTGA("./data/textures/atten2d.tga");
glGenTextures(1, &m_att2d);
glBindTexture(GL_TEXTURE_2D, m_att2d);
glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY8, img->sizeX, img->sizeY, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, img->data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
For reference, here are the attenuation textures:
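Assuming the maps match the linked tutorial's (the 2D texture encoding x² + y² and the 1D texture encoding z², each clamped to 1, so that 1 - (tex2d + tex1d) falls to zero at the light radius), a hypothetical procedural stand-in for the two TGA files might look like this. The function names and sizes are illustrative, not from the original post:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical procedural stand-ins for atten1d.tga / atten2d.tga.
// Assumption: the 2D map stores s^2 + t^2 and the 1D map stores r^2,
// each over [-1, 1] and clamped to 1, so 1 - (tex2d + tex1d) gives a
// quadratic falloff that reaches zero at the light radius.
std::vector<unsigned char> makeAtten1D(int size)
{
    std::vector<unsigned char> data(size);
    for (int i = 0; i < size; ++i) {
        float z = 2.0f * i / (size - 1) - 1.0f;  // map texel to [-1, 1]
        float v = std::min(1.0f, z * z);         // squared distance, clamped
        data[i] = static_cast<unsigned char>(v * 255.0f + 0.5f);
    }
    return data;
}

std::vector<unsigned char> makeAtten2D(int size)
{
    std::vector<unsigned char> data(size * size);
    for (int y = 0; y < size; ++y) {
        for (int x = 0; x < size; ++x) {
            float s = 2.0f * x / (size - 1) - 1.0f;
            float t = 2.0f * y / (size - 1) - 1.0f;
            float v = std::min(1.0f, s * s + t * t);
            data[y * size + x] = static_cast<unsigned char>(v * 255.0f + 0.5f);
        }
    }
    return data;
}
```

With an odd size such as 257, the center texel lands exactly on distance zero (full intensity after the 1 - x combine) and the edges on full attenuation.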
I calculate the texture coordinates for each vertex in the vertex shader:
attribute vec3 tangent;
uniform vec3 lightPos;
uniform vec3 camPos;
uniform float radius;
varying vec3 lVec;
varying vec3 attenCoords;

void main()
{
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;

    vec3 lightVec = lightPos - gl_Vertex.xyz;
    vec3 n = gl_Normal;
    vec3 t = tangent;
    vec3 b = cross(n, t);

    lVec.x = dot(lightVec, t);
    lVec.y = dot(lightVec, b);
    lVec.z = dot(lightVec, n);

    attenCoords = (gl_Vertex.xyz - lightPos) / radius;
    attenCoords = attenCoords / 2.0 + 0.5;

    gl_Position = ftransform();
}
Then in the fragment shader I combine the two attenuation textures and modulate the final lighting color:
uniform sampler2D tex_1, tex_2, tex_3;
uniform sampler2D att2d;
uniform sampler1D att1d;
uniform vec4 lightColor;
varying vec3 lVec;
varying vec3 attenCoords;

void main()
{
    vec3 lightVec = normalize(lVec);

    vec4 atten2d = texture2D(att2d, attenCoords.st);
    vec4 atten1d = texture1D(att1d, attenCoords.r);

    vec4 base = texture2D(tex_1, gl_TexCoord[0].st);
    vec3 bump = (texture2D(tex_2, gl_TexCoord[0].st).xyz - 0.5) * 2.0;
    bump.y = -bump.y;
    bump = normalize(bump);

    float diffuse = max(0.0, dot(lightVec, bump));
    vec4 lighting = diffuse * base;
    vec4 attenuation = 1.0 - (atten1d + atten2d);

    gl_FragColor = lighting * lightColor * attenuation;
}
Here is the scene without attenuation:
With attenuation:
The light doesn't attenuate properly at all down the z-axis, and I don't understand the banding that occurs either:
I am using a radius of 25, which should be pretty small given the scale of the scene. I feel I understand the concept explained in the article very well; maybe I am making a stupid mistake somewhere or misunderstanding something in the article.
[Edited by - luridcortex on May 9, 2006 9:32:34 AM]
Hmmm... are you sure you want to do this?
vec4 attenuation = 1.0 - (atten1d + atten2d);
gl_FragColor = lighting * lightColor * atten2d;
Ah, yeah, I'm not actually doing that. That was left over from when I was taking screenshots of just the attenuation maps being applied. The screenshots use the correct equation. Bad copy/paste job...
Anything else look off?
I have tried with both textures, same problems.
I am pretty sure the problem is where I calculate the texture coordinates for each attenuation map. The article calculates them as follows:
Quote: Remember that we want to clamp values to the 0 to 1 range, so these textures should be created with the wrap mode set to CLAMP or CLAMP_TO_EDGE. Now, using the above textures, we need to map the (x,y,z) coordinates into this texture. To do this we can calculate the distance from the light as:
x0 = (x - lightX) / R
y0 = (y - lightY) / R
z0 = (z - lightZ) / R
Note that we scaled each distance by dividing it by the light radius R. This allows us to ensure that distances from -R to R lie in the -1 to 1 range. The next step is to map these (x0, y0, z0) distance which lie in the -1 to 1 range into (s, t, r) texture coordinates which lie in the 0 to 1 range. To do so we calculate:
s = x0/2 + 0.5
t = y0/2 + 0.5
r = z0/2 + 0.5
Then, we can use coordinates (s,t) as the texture coordinates for the 2D Texture 0, and use the (r) coordinate as the texture coordinate for the 1D Texture 1. Finally, now that we have the Intensity of the light calculated, we can multiply this by the color of the light to get the distance attenuated light value for the current pixel.
I don't really understand how the first formula clamps the distance to [-1, 1]. If I have vertex (0,0,0), lightPos (0,100,0), and radius 25, we get the following:
x' = (0 - 0) / 25
y' = (0 - 100) / 25
z' = (0 - 0) / 25
y' is actually -4. Plugging this into the second equation, we get -1.5, which isn't in the [0, 1] range.
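For what it's worth, the formula itself doesn't do the clamping; the quoted passage relies on the texture wrap mode for that ("these textures should be created with the wrap mode set to CLAMP or CLAMP_TO_EDGE"). A coordinate outside [0, 1] just samples the edge texel, which in these maps holds full attenuation. A small worked sketch of that interaction (the function names here are illustrative):

```cpp
#include <algorithm>

// The article's mapping from a world-space position component to a
// texture coordinate. Note it only lands in [0, 1] for vertices
// within the light radius; for the example above (v = 0, lightV = 100,
// R = 25) it yields -1.5.
float texCoord(float v, float lightV, float radius)
{
    float d = (v - lightV) / radius;   // in [-1, 1] only inside the radius
    return d / 2.0f + 0.5f;           // in [0, 1]  only inside the radius
}

// What GL_CLAMP_TO_EDGE effectively does to the coordinate before the
// lookup: out-of-range coordinates snap to the edge texel, which holds
// intensity 1, so 1 - (tex2d + tex1d) still comes out fully attenuated
// beyond the radius.
float clampToEdge(float coord)
{
    return std::min(1.0f, std::max(0.0f, coord));
}
```

So the -4 and -1.5 values are expected; the hardware clamp is what keeps everything beyond the radius dark.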
Well, Shader Designer says that this works:
void main()
{
    vec4 atten2d = texture2D(att2d, attenCoords.xy);
    vec4 atten1d = texture1D(att1d, attenCoords.z);
    vec4 attenuation = 1.0 - (atten2d + atten1d);
    gl_FragColor = attenuation;
}
and this produces the problem you see:
void main()
{
    vec4 atten2d = texture2D(att2d, attenCoords.st);
    vec4 atten1d = texture1D(att1d, attenCoords.r);
    vec4 attenuation = 1.0 - (atten2d + atten1d);
    gl_FragColor = attenuation;
}
The only difference is using xyz instead of str. Using ron's textures, that is.
P.S: Why do you want to do it with textures? Since you use shaders, isn't it easier to do per pixel lighting?
Ah, .str works only on gl_TexCoord, I guess. I still wonder how that compiled OK...
I am doing per-pixel lighting. The advantage to using textures for attenuation is speed and quality. You can use different attenuation textures to simulate different lighting effects, if I understand correctly.
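For comparison, here is a sketch of the analytic attenuation the shader-only alternative would compute. Assuming maps that encode squared distance (as sketched earlier), 1 - (tex2d + tex1d) reduces to 1 - d²/R², clamped to zero at the radius; the texture version buys you arbitrary falloff curves by painting different maps instead of changing shader math:

```cpp
#include <algorithm>

// Analytic equivalent of the texture-based attenuation, assuming the
// maps encode squared distance: 1 - (tex2d + tex1d) == 1 - d^2 / R^2,
// clamped so it never goes negative beyond the light radius.
float analyticAtten(float dx, float dy, float dz, float radius)
{
    float d2 = dx * dx + dy * dy + dz * dz;  // squared distance to the light
    return std::max(0.0f, 1.0f - d2 / (radius * radius));
}
```

Whether the texture lookups or the arithmetic is faster depends on the hardware; on the fixed-function and early-shader cards this thread dates from, the lookups were often the cheaper option.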
Hey there,
I just wanted to thank you for this thread, and especially for the link above.
At the moment I'm also fighting with per-pixel lighting and performance issues. With this method, I can drop the attenuation calculation from my lighting shader, which should greatly boost my overall performance.
Thanks
Chris