• Content count

  • Joined

  • Last visited

Community Reputation

122 Neutral

About Lopez

  • Rank
  1. I've been working on a way of using multiple pp lights in a shader to render scene objects - but I seem to be having a problem with the light vector being messed up with the projection matrix. As I have a limit on the amount of variables I can pass between the vs and the fs, and therefore how many light vectors in tangent space I can pass over; i'm trying to do the ts stuff per light in the fs. Does anyone know what i'm doing wrong? vert: attribute vec3 tangent; attribute vec3 binormal; varying vec3 T, B, N; varying vec3 V; uniform mat4 projMatrix; void main() { T = gl_NormalMatrix * tangent; B = gl_NormalMatrix * binormal; N = gl_NormalMatrix * gl_Normal; V = -vec3(gl_ModelViewMatrix * gl_Vertex); gl_TexCoord[1] = gl_MultiTexCoord0; gl_Position = ftransform(); } light stuff in frag: void _computeLighting( int nLight, vec3 addSpecular, vec3 addLight, vec3 map_normal, vec3 map_specular ){ // LIGHTING COMPONENTS // per light colors vec3 l_ambient; vec3 l_diffuse; vec3 l_specular; // diffuse contribution float diffuse; // specular contribution float specular; // distance from light to vertex in TS float dist; // complete attenuation float atten; // per light vec3 thisLight; vec3 thisSpecular; // for each light vec3 L = vec3(gl_LightSource[nLight].position); vec3 LtoV = (L - V); normalize(LtoV); // TS vertex, and light Lts = (gl_NormalMatrix *LtoV) * TBN_Matrix; Vts = V*TBN_Matrix; // distance from light to pixel dist = abs(distance(Lts, Vts)); // attenuation from light to pixel atten = 1.0 / (gl_LightSource[nLight].constantAttenuation + gl_LightSource[nLight].linearAttenuation * dist + gl_LightSource[nLight].quadraticAttenuation * dist * dist); l_ambient = gl_LightSource[nLight].ambient.rgb; l_diffuse = gl_LightSource[nLight].diffuse.rgb; l_specular = gl_LightSource[nLight].specular.rgb; diffuse = atten*clamp(dot(Lts, map_normal), 0.0, 1.0); thisLight = l_ambient+(l_diffuse*diffuse*atten);//+ LightingComponent += thisLight; thisSpecular = pow(clamp(dot(reflect(-Vts, map_normal), Lts), 0.0,1.0),16.0)*map_specular; SpecularComponent += thisSpecular; }
  2. Hi there, i'm attempting to implement cube map reflection in GLSL, but the problem is that if I use the method I have seen in various forums, ie. [vert] g_normal = gl_NormalMatrix * gl_Normal; g_view = gl_ModelViewMatrix * gl_Vertex; [frag] viewVec = normalize(g_view); normalVec = normalize(g_normal); cubeColor = textureCube( tex1, -reflect(viewVec, normalVec) ); Upon rendering, the cubemaps orientation follows the camera, and is not oriented with worldspace (the cubemap is just te 6 sides of a skybox). Anybody know where i'm going wrong?
  3. You're offsetting your tex coords by a full texture size. You need to offset by say offset_value/texture_size to get the correct offset.
  4. ao data to texture

    The only way I found was to use a second set of uvw coordinates, and bake the ao map using those; usually the whole scene into one BIG texture. The trouble is, as the 3ds format doesn't allow more than 1 set of uvw coordinates to be exported, you will have to either write your own exporter script, or export the second set of uvw's separately, and ten recombine them someway upon loading the scene. Hope that helps.
  5. What type off collision detection routine are you using? It doesn't sound like it's recursive. Are you implementing a slide-plane in your collision detection function?
  6. Hi there, I'm trying to compute the normals for a md5 model, in order to use lighting with it. The solution is described in this paper: http://tfc.duke.free.fr/coding/md5-specs-en.html but i'm having problems with my implementation. ""Precomputing normals You will probably need to compute normal vectors, for example for lighting. Here is how to compute them in order to get “weight normals”, like the weight positions (this method also works for tangents and bi-tangents): 1. First, compute all model's vertex positions in bind-pose (using the bind-pose skeleton). 2. Compute the vertex normals. You now have the normals in object space for the bind-pose skeleton. 3. For each weight of a vertex, transform the vertex normal by the inverse joint's orientation quaternion of the weight. You now have the normal in joint's local space. Then when calculating the final vertex positions, you will be able to do the same for the normals, except you won't have to translate from the joint's position when converting from joint's local space to object space."" I'm using the following function to obtain the bindpose normals... void CMD5::computeNormals (const md5_mesh_t *mesh, const md5_joint_t *joints){ int i, j; vec3_t *bindposeVertex; vec3_t *bindposeNormal; bindposeVertex = (vec3_t *)malloc (sizeof (vec3_t) * max_verts); bindposeNormal = (vec3_t *)malloc (sizeof (vec3_t) * max_verts); // zero bind pose arrays for (i = 0; i < mesh->num_verts; ++i){ bindposeVertex[i][0] = 0.0f; bindposeVertex[i][1] = 0.0f; bindposeVertex[i][2] = 0.0f; bindposeNormal[i][0] = 0.0f; bindposeNormal[i][1] = 0.0f; bindposeNormal[i][2] = 0.0f; } // fill bindpose vertices for (i = 0; i < mesh->num_verts; ++i){ for (j = 0; j < mesh->vertices[i].count; ++j){ const md5_weight_t *weight = &mesh->weights[mesh->vertices[i].start + j]; if(joints){ const md5_joint_t *joint = &joints[weight->joint]; vec3_t wv; Quat_rotatePoint (joint->orient, weight->pos, wv); bindposeVertex[i][0] += (joint->pos[0] + wv[0]) * weight->bias; bindposeVertex[i][1] += (joint->pos[1] + wv[1]) * weight->bias; bindposeVertex[i][2] += (joint->pos[2] + wv[2]) * weight->bias; } } } // compute normals for(i=0;i<mesh->num_tris;i++){ float n[3], v0[3], v1[3]; v0[0] = bindposeVertex[mesh->triangles[i].index[1] ][0] - bindposeVertex[mesh->triangles[i].index[0] ][0]; v0[1] = bindposeVertex[mesh->triangles[i].index[1] ][1] - bindposeVertex[mesh->triangles[i].index[0] ][1]; v0[2] = bindposeVertex[mesh->triangles[i].index[1] ][2] - bindposeVertex[mesh->triangles[i].index[0] ][2]; v1[0] = bindposeVertex[mesh->triangles[i].index[2] ][0] - bindposeVertex[mesh->triangles[i].index[0] ][0]; v1[1] = bindposeVertex[mesh->triangles[i].index[2] ][1] - bindposeVertex[mesh->triangles[i].index[0] ][1]; v1[2] = bindposeVertex[mesh->triangles[i].index[2] ][2] - bindposeVertex[mesh->triangles[i].index[0] ][2]; CVector3 vNormal = CrossProduct(CVector3(v0[0], v0[1], v0[2]), CVector3(v1[0], v1[1], v1[2])); n[0]=vNormal.x; n[1]=vNormal.y; n[2]=vNormal.z; for(j=0;j<3;j++){ bindposeNormal[mesh->triangles[i].index[j] ][0] += n[0]; bindposeNormal[mesh->triangles[i].index[j] ][1] += n[1]; bindposeNormal[mesh->triangles[i].index[j] ][2] += n[2]; } } for(i=0;i<mesh->num_verts;i++){ CVector3 n = CVector3( bindposeNormal[i][0], bindposeNormal[i][1], bindposeNormal[i][2] ); n.normalize(); mesh->weights[i].norm[0] = -n.x; mesh->weights[i].norm[1] = -n.y; mesh->weights[i].norm[2] = -n.z; } } and I process the normals before rendering as follows... // Calculate transformed vertex for this weight vec3_t wv; Quat_rotatePoint (joint->orient, weight->pos, wv); finalVertex[0] += (joint->pos[0] + wv[0]) * weight->bias; finalVertex[1] += (joint->pos[1] + wv[1]) * weight->bias; finalVertex[2] += (joint->pos[2] + wv[2]) * weight->bias; // Calculate transformed normal for this weight quat4_t inv; Quat_invert( joint->orient, inv ); vec3_t wn; Quat_rotatePoint (inv, weight->norm, wn); finalNormal[0] += (wn[0]) * weight->bias; finalNormal[1] += (wn[1]) * weight->bias; finalNormal[2] += (wn[2]) * weight->bias; but when I render the model in bind pose, the normals are all screwed up. Can anybody point me in te right direction?
  7. TOKAMAK bump map problem

    I apologise for my vagueness. I'm attempting to implement tangent space bump mapping on an object model. I'm rendering object meshes using the matrix from tokamak to translate and rotate the model. t = physicsman->getRigidBodyStruct(m_nPhysicalBodyID)->pRigidBody->GetTransform(); float mat[16] = { t.rot[0][0], t.rot[0][1], t.rot[0][2], 0.0f, t.rot[1][0], t.rot[1][1], t.rot[1][2], 0.0f, t.rot[2][0], t.rot[2][1], t.rot[2][2], 0.0f, t.pos[0], t.pos[1], t.pos[2], 1.0f }; // rotate the light into object space CVector3 ws_light; CVector3 os_light; CVector3 ws_mesh; ws_light = _lightman->getLight(0)->getPosition(); ws_mesh = CVector3(t.pos[0], t.pos[1], t.pos[2]); // reposition light, as if mesh is at the origin os_light = ws_light-ws_mesh; // now rotate it glMatrixMode(GL_MODELVIEW); glPushMatrix(); glLoadIdentity(); glTranslatef(-ws_mesh.x, -ws_mesh.y, -ws_mesh.z); glMultMatrixf(mat); glTranslatef(ws_mesh.x, ws_mesh.y, ws_mesh.z); glGetFloatv(GL_MODELVIEW_MATRIX, mat02); matrixInvert(mat); glPopMatrix(); // multiply the light vectorby the matrix float newvert[3]; newvert[0]=(os_light.x*mat02[0])+(os_light.y*mat02[4])+(os_light.z*mat02[8]); newvert[1]=(os_light.x*mat02[1])+(os_light.y*mat02[5])+(os_light.z*mat02[9]); newvert[2]=(os_light.x*mat02[2])+(os_light.y*mat02[6])+(os_light.z*mat02[10]); os_light = CVector3( newvert[0], newvert[1], newvert[2] ); os_light is then passed to a GLSL shader as a uniform. uniform vec4 rotated_light; void main() mat3 TBN_Matrix = gl_NormalMatrix * mat3(tangent, binormal, gl_Normal); vec4 mv_Vertex = gl_ModelViewMatrix * gl_Vertex; cullvertex = gl_Vertex; vec4 lightEye = gl_ModelViewMatrix * rotated_light; vec3 lightVec =lightEye.xyz - mv_Vertex.xyz; g_lightVec = lightVec * TBN_Matrix; g_viewVec = vec3(-mv_Vertex) * TBN_Matrix ; g_normalVec = gl_Normal * TBN_Matrix ; I must be doing something wrong, as each object is being illuminated incorrectly.
  8. Hi, I'm trying to corrently bump map some tokamak rigid bodies. The problem is, if I use the FFPL the lighting is correct, but when I use a GLSL shader, the lighting is all wrong. I have tried to rotate the light position into local object space, but with no luck. Any help would be appreciated.
  9. It's depth maps that I need access to, and also realtime, so no. (not currently supported). Mainly, i'm looking at a way to derive the oter 5 projection vectors, form the one I pass into the shader.
  10. Hi there, I'm trying to implement a shadowed point light in my lighting engine. I had this idea, as gl doesn't support depth cubemaps, to encode my own by rendering the 6 views to a single depth texture (res*2 x res*3), using 6 cameras at 90 degree angles to each other - done that no problem. Now I want to access this texture in the fragment shader. I'm sending the first projection's texture matrix to the shader, so currently, I can render 1 cube face. 1. Can I derive the other 5 projection vectors in the shader, rather than passing them as uniforms? 2. Can I access a certain part of a single texture in the fragment shader if the texture is to be projected? (ie. one of the 6 depth segments I rendered).
  11. How about using the gl define varyings, such as color, secondary color, etc???
  12. Hi there, I'm currently trying to implement multiple shadowed lights in GLSL for my renderer, but believe I have reached the varying float limit for my shader (gf6600gt = 32). Is there anyway to work around this? Are the built-in varying vec's such as gl_TextureCoord included in the 32 limit? HELP!!!!!!!!!!!!
  13. idea about volumetric lights

    Guoshima, I used a similar process to the ATI one, but have the quad aligned to the light, and can get 6 shadowed volumes running at 60fps, with 200 planes per light. I think it will work a lot faster using the ATI route of camera aligned planes, with clipping for the light volumes, as I will be able to render n lights using a max of say 200 planes aligned between the nearest and farthest light frustrums. Will update you when I get it working.
  14. idea about volumetric lights

    Compositing the volumes seems to speed up rendering quite a bit. Here's a 128x128 composite render of 3 volumes... ... and here's the render... I'm not applying a blur kernel to the composite image yet, but the initial results are looking promising. The artefacts around shadowed objects are due to the size of the render (128x128), but increasing the render target size will cause much of a slow-down either.
  15. idea about volumetric lights

    Hi there, i'm working on the same thing, but am having a few problems with performance. According to the paper on the ATI site, you should calculate the volume intersection of the light frustrum and the view frustum, and the render around 100 quads with additive blending, orientated towards the camera. I have managed to do the above by rendering the quads along the light frustrum, but as you can see, there are several artifacts. I needed to render 200 quads to get the quality shown above, and I think blurring would allow the use of less volume planes. I think your idea of downsampling might speed the fillrate up a bit (which I think is the biggest bottleneck), but don't forget, you will have to at least render the scene geometry to the depth buffer at this lower resolution to avoid the lights being seen behind geometry.