About mv348

  1. mv348

    Right vs Left-Handed Matrix Representation

    This relates to the cross product. There is a way to determine the direction of the cross product of two vectors using the "right-hand rule": to determine the direction of (A cross B), orient your right hand so that you can sweep your fingers from A to B; your thumb then points in the direction of the cross product. Google "cross product - right hand rule" for better pictures and explanations. You could instead define the cross product using your left hand; that simply reverses the direction of the resulting vector.

    You can determine the Z vector of your coordinate system by taking the cross product of the X and Y vectors (or in other language, get the k vector by taking the cross product of the i and j vectors). If you find the Z direction using a right-handed cross product of X and Y, it's a right-handed coordinate system; likewise, a left-handed cross product gives a left-handed coordinate system.

    Hope that makes sense. In short, the Z vector is reversed between the two conventions.
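    To make the convention concrete, here is a minimal C++ sketch (the Vec3 type and function names are just illustrations, not code from this thread): the usual component formula gives the right-handed cross product, and negating the result gives the left-handed one.

    ```cpp
    #include <array>

    using Vec3 = std::array<float, 3>;

    // Right-handed cross product: sweep the fingers of the right hand
    // from a to b; the thumb points along the result.
    Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a[1] * b[2] - a[2] * b[1],
                 a[2] * b[0] - a[0] * b[2],
                 a[0] * b[1] - a[1] * b[0] };
    }

    // A left-handed convention simply flips the result.
    Vec3 crossLeftHanded(const Vec3& a, const Vec3& b) {
        Vec3 c = cross(a, b);
        return { -c[0], -c[1], -c[2] };
    }
    ```

    With X = (1,0,0) and Y = (0,1,0), the right-handed version yields (0,0,1) for Z, while the left-handed version yields (0,0,-1) -- exactly the "Z vector is reversed" point above.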
  2. mv348

    Capsule line intersection point only

    Maybe someone else will, but I'm definitely not quite understanding your question. Could you give more details or post a diagram?

    I implemented capsule collisions once. To the best of my memory, to determine the closest point on a line to a capsule, I checked the distance from each of the two sphere center points to the line (point-to-infinite-line distance calculation), and the distance between the line and the straight section of the cylinder (infinite-line-to-infinite-line distance calculation), then projected the resulting closest point onto the line segment connecting the sphere centers to see whether it fell inside the cylinder portion. Some simple logic determined which of the three candidates was closest.

    Hope that's somewhat helpful. If not, please post more details.
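    The projection-and-clamp step described above can be sketched like this (a hypothetical helper, not the original implementation; the capsule's axis is the segment between the two sphere centers):

    ```cpp
    #include <algorithm>
    #include <array>

    using Vec3 = std::array<float, 3>;

    float dot(const Vec3& a, const Vec3& b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // Closest point on the segment [a, b] to a point p: project p onto the
    // infinite line through a and b, then clamp the parameter to [0, 1] so
    // the result stays on the segment (i.e. inside the cylinder portion).
    Vec3 closestOnSegment(const Vec3& a, const Vec3& b, const Vec3& p) {
        Vec3 ab = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
        Vec3 ap = { p[0] - a[0], p[1] - a[1], p[2] - a[2] };
        float t = dot(ap, ab) / dot(ab, ab);
        t = std::clamp(t, 0.0f, 1.0f);
        return { a[0] + t * ab[0], a[1] + t * ab[1], a[2] + t * ab[2] };
    }
    ```

    If the clamped parameter lands strictly between 0 and 1, the query point is beside the cylinder section; otherwise the nearest feature is one of the end spheres.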
  3. Thanks, everyone.

    It turned out the problem went away when I explicitly set the attribute layout locations in all the shaders (tessellation control, tessellation evaluation, and fragment shader). Initially I was only doing this in the vertex shader.

    It seems odd to me. I had thought that if your vertex shader sets the location of each attribute, the subsequent shaders would assume the same convention, provided the attributes were listed in the same order and had the same types (which was the case). Should this really have been necessary?
  4. Hello all!

    I closely followed this tutorial for basic OpenGL tessellation:

    http://ogldev.atspace.co.uk/www/tutorial30/tutorial30.html

    I am using it in a slightly different context, with deferred rendering. All I did was add a simple tessellation control shader that leaves the triangle patches completely unaltered.

    Tessellation Control Shader:

        #version 420

        // define the number of CPs in the output patch
        layout (vertices = 3) out;

        uniform vec3 gEyeWorldPos;

        // attributes of the input CPs
        in vec3 WorldPos_CS_in[];
        in vec2 TexCoord_CS_in[];
        in vec3 Normal_CS_in[];
        in vec3 Tangent_CS_in[];

        // attributes of the output CPs
        out vec3 WorldPos_ES_in[];
        out vec2 TexCoord_ES_in[];
        out vec3 Normal_ES_in[];
        out vec3 Tangent_ES_in[];

        float GetTessLevel(float Distance0, float Distance1)
        {
            float AvgDistance = (Distance0 + Distance1) / 2.0;

            if (AvgDistance <= 2.0) {
                return 10.0;
            }
            else if (AvgDistance <= 5.0) {
                return 7.0;
            }
            else {
                return 3.0;
            }
        }

        void main()
        {
            // Set the control points of the output patch
            TexCoord_ES_in[gl_InvocationID] = TexCoord_CS_in[gl_InvocationID];
            Normal_ES_in[gl_InvocationID] = Normal_CS_in[gl_InvocationID];
            Tangent_ES_in[gl_InvocationID] = Tangent_CS_in[gl_InvocationID];
            WorldPos_ES_in[gl_InvocationID] = WorldPos_CS_in[gl_InvocationID];

            // Calculate the distance from the camera to the three control points
            float EyeToVertexDistance0 = distance(gEyeWorldPos, WorldPos_ES_in[0]);
            float EyeToVertexDistance1 = distance(gEyeWorldPos, WorldPos_ES_in[1]);
            float EyeToVertexDistance2 = distance(gEyeWorldPos, WorldPos_ES_in[2]);

            // Calculate the tessellation levels
            gl_TessLevelOuter[0] = GetTessLevel(EyeToVertexDistance1, EyeToVertexDistance2);
            gl_TessLevelOuter[1] = GetTessLevel(EyeToVertexDistance2, EyeToVertexDistance0);
            gl_TessLevelOuter[2] = GetTessLevel(EyeToVertexDistance0, EyeToVertexDistance1);
            gl_TessLevelInner[0] = gl_TessLevelOuter[2];
        }

    And a simple tessellation evaluation shader (this will ultimately sample from a displacement map, but at the moment I have the displacement commented out):

        #version 420

        layout(triangles, equal_spacing, ccw) in;

        uniform mat4 gVP;
        uniform sampler2D gDisplacementMap;
        uniform float gDispFactor;

        in vec3 WorldPos_ES_in[];
        in vec2 TexCoord_ES_in[];
        in vec3 Normal_ES_in[];
        in vec3 Tangent_ES_in[];

        out vec3 WorldPos_FS_in;
        out vec2 TexCoord_FS_in;
        out vec3 Normal_FS_in;
        out vec3 Tangent_FS_in;

        vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2)
        {
            return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2;
        }

        vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2)
        {
            return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2;
        }

        void main()
        {
            // Interpolate the attributes of the output vertex using the barycentric coordinates
            TexCoord_FS_in = interpolate2D(TexCoord_ES_in[0], TexCoord_ES_in[1], TexCoord_ES_in[2]);
            Normal_FS_in = interpolate3D(Normal_ES_in[0], Normal_ES_in[1], Normal_ES_in[2]);
            Normal_FS_in = normalize(Normal_FS_in);
            Tangent_FS_in = interpolate3D(Tangent_ES_in[0], Tangent_ES_in[1], Tangent_ES_in[2]);
            Tangent_FS_in = normalize(Tangent_FS_in);
            WorldPos_FS_in = interpolate3D(WorldPos_ES_in[0], WorldPos_ES_in[1], WorldPos_ES_in[2]);

            // Displace the vertex along the normal
            //float Displacement = texture(gDisplacementMap, TexCoord_FS_in.xy).x;
            //WorldPos_FS_in += Normal_FS_in * Displacement * gDispFactor;

            gl_Position = gVP * vec4(WorldPos_FS_in, 1.0);
        }

    I am using NVIDIA Nsight. If I do not attach the control and evaluation shaders, everything executes just fine: I can put breakpoints on my vertex and fragment shaders in Nsight, the shaders execute, and I see what I'd expect on screen. If I do attach the tessellation shaders, I see nothing on screen, and breakpoints on the vertex or fragment shader don't trigger, indicating that they are not executing at all.

    Nsight requires OpenGL 4.x to run the shader debugger, so my OpenGL version should be fine. I remembered to call:

        glPatchParameteri(GL_PATCH_VERTICES, 3);

    and I checked GL_MAX_PATCH_VERTICES; the value is 32. I read that I may need to use GL_PATCHES as my primitive type, so I am using this in my draw call:

        glDrawElementsBaseVertex(GL_PATCHES, numIndices, GL_UNSIGNED_INT, (void*)(sizeof(unsigned int) * mesh->m_index_offset), start);

    Still the shader refuses to execute. Kind of at my wits' end here. :\ Does anyone have any ideas?

    Thanks!
  5. So I've read that it's best to write dedicated shaders for specific purposes rather than bloat them with too many conditionals, and it makes sense. But I look at how many options are possible and it quickly becomes overwhelming:

    1. Normal mapping
    2. GPU tessellation / displacement mapping
    3. Shadow mapping
    4. Animation/skinning

    I could go with or without each of these for any particular shader, resulting in 16 possible shaders. When you consider that I also have to write shaders for shadow passes for different types of lights (point, spot, directional, etc.), the number of shaders quickly becomes ridiculous.

    What's a smart way to cluster these and keep it reasonable?
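    One common way to tame the combinatorics (a sketch with hypothetical feature names, not any specific engine's API) is to treat each option as a bit in a key, generate a matching #define preamble for a single "uber" shader source, and compile-and-cache one program per key actually requested at runtime, so most of the 16 combinations are never compiled at all:

    ```cpp
    #include <cstdint>
    #include <string>

    // Hypothetical feature flags; each optional shader feature is one bit.
    enum Feature : std::uint32_t {
        NORMAL_MAP   = 1u << 0,
        TESSELLATION = 1u << 1,
        SHADOW_MAP   = 1u << 2,
        SKINNING     = 1u << 3,
    };

    // Build the #define preamble prepended to the shader source.
    // A cache keyed on the bitmask (e.g. std::unordered_map<uint32_t, GLuint>)
    // would ensure each variant is compiled only once, on first use.
    std::string buildDefines(std::uint32_t key) {
        std::string s;
        if (key & NORMAL_MAP)   s += "#define USE_NORMAL_MAP\n";
        if (key & TESSELLATION) s += "#define USE_TESSELLATION\n";
        if (key & SHADOW_MAP)   s += "#define USE_SHADOW_MAP\n";
        if (key & SKINNING)     s += "#define USE_SKINNING\n";
        return s;
    }
    ```

    The per-light-type shadow passes can then be separate source files sharing the same preamble mechanism, which keeps the hand-written shader count at "one per pass" rather than "one per combination".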
  6. Yep! I set 7 as the active texture unit, bind the texture, and then set 7 as the value of the gCascadedShadowMap texture uniform.

    I was hoping someone might know what could possibly 'block' the texture from being read. Are there constraints on the bind states of the framebuffer objects? I've tried binding and unbinding the relevant framebuffers in various places, but I'm sure I've not exhausted the possibilities.
  7. In my last post I stated that I was having trouble rendering to a depth texture. I now believe the render-to-texture is succeeding, and the issue is that I can't read from the texture after the rendering is complete.

    I used glReadPixels to check whether my depth texture is being cleared, calling the following after glClear:

        glBindFramebuffer(GL_READ_FRAMEBUFFER, m_fbo);

        float depth;
        glReadPixels(0, 0, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

        printf("depth value is %f", depth);

    The print message outputs whatever clear depth I set before calling glClear (with the FBO bound as the draw buffer), so I at least now know that the depth texture IS being written to. The problem is that I just can't get my shaders to read from it. Here is where the texture is sampled:

        float mapSample = texture(gCascadedShadowMap, TexCoord).x;
        FragColor = vec4(mapSample, mapSample, mapSample, 1);

    I am well aware of the usual faintness of depth-texture rendering; that's why I'm using glClear() with a depth value of around 0.5. I should be seeing a mid-gray screen when I render, but instead I just see blackness. If I bind other textures to gCascadedShadowMap, however, they draw to the screen correctly.

    Any help or ideas would be greatly appreciated!
  8. Fixed a small bug - I forgot to reset glClearDepth to 1.0f after setting it to 0.25f. But it's still not working.

    I think it has something to do with my g-buffer. My g-buffer has four color textures and a depth/stencil texture attached to its FBO; the shadow-map FBO has only a depth texture attached. When I do my light pass I have my g-buffer FBO bound, and then I bind the shadow map's depth texture to an appropriate texture unit. It appears as if the g-buffer pass cannot "see" the shadow texture sitting in the unbound FBO. By my understanding this should work, but is there some technicality here that I'm not aware of?
  9. I don't think that's it. I'm not using the texture for anything else.

    (edit) I've been doing more testing, and it appears that the depth clearing is being applied to my g-buffer (the previously bound framebuffer) and not to the shadow FBO. I can't see why, though; I am definitely binding the shadow FBO before clearing the depth buffer.
  10. I am trying to implement cascaded shadow maps, but I am having a strange issue with my FBO. As far as I can tell, I am completely unable to write to the attached depth texture. Even using glClear(GL_DEPTH_BUFFER_BIT) with various clear values, I see no changes in the texture when I render it to the screen.

    Here's a peek at some of my code:

        bool CascadedShadowMap::Initialize(unsigned int WindowWidth, unsigned int WindowHeight)
        {
            m_width = WindowWidth;
            m_height = WindowHeight;

            // Create the FBO
            glGenFramebuffers(1, &m_fbo);

            // Create the depth buffer
            glGenTextures(1, &m_shadowMap);
            glBindTexture(GL_TEXTURE_2D, m_shadowMap);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, 2 * WindowWidth, 2 * WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);
            glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_shadowMap, 0);

            // Disable writes to the color buffer
            glDrawBuffer(GL_NONE);

            GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
            if (Status != GL_FRAMEBUFFER_COMPLETE) {
                vDebugOut("error initializing shadow map! -CasadedShadowMap.cpp");
                return false;
            }

            return true;
        }

    The debug message is not printed, so I know the framebuffer is complete. I also made sure the function is being called.

    Here's where the texture should be cleared:

        void vGraphics::fillShadowTextureDirectionalCascadedMap()
        {
            m_noFragmentShader->enable();
            glPushAttrib(GL_VIEWPORT_BIT);
            cascaded_shadow_map->BindForWriting();
            cascaded_shadow_map->Clear();

            // Rendering models commented out for now. Trying to get Clear() to work.
            /* ... */

            glPopAttrib();
            m_noFragmentShader->disable();
        }

    The methods "BindForWriting" and "Clear" shown above are implemented like so:

        void CascadedShadowMap::BindForWriting()
        {
            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);
        }

        void CascadedShadowMap::Clear()
        {
            glViewport(0, 0, 2 * m_width, 2 * m_height);
            glDepthMask(GL_TRUE);
            glEnable(GL_DEPTH_TEST);
            glDisable(GL_STENCIL_TEST);
            glDrawBuffer(GL_NONE);
            glDisable(GL_DEPTH_CLAMP);
            glClearDepth(0.25f);
            glClear(GL_DEPTH_BUFFER_BIT);
        }

    As you can see, I'm setting a lot of options to try to make absolutely certain that nothing will prevent the depth buffer from getting cleared.

    Now, just before my light pass, I have:

        m_dirLightShader->enable();
        cascaded_shadow_map->BindForReading(7);
        m_dirLightShader->setCascadedShadowUnit(7);

    Here's the code for the two functions above:

        void vDirectionalLightShader::setCascadedShadowUnit(GLuint textureUnit)
        {
            glUniform1i(m_unif_casc_shadow_map_id, textureUnit);
        }

        ...

        void CascadedShadowMap::BindForReading(GLenum TextureUnit)
        {
            glEnable(GL_TEXTURE_2D);
            glActiveTexture(TextureUnit);
            glBindTexture(GL_TEXTURE_2D, m_shadowMap);
        }

    And at the end of my fragment shader in my light pass (or "texture render pass" if you will):

            float mapSample = texture(gCascadedShadowMap, TexCoord).x;
            FragColor = vec4(mapSample, mapSample, mapSample, 1);
        }

    It's worth noting that if I say:

        cascaded_shadow_map->BindForReading(0);

    then the texture bound to texture unit 0 (I think it's the position map in my deferred rendering setup) displays perfectly fine. But using the texture unit that CascadedShadowMap::m_shadowMap is bound to produces a pure black screen, regardless of what I set the depth clear value to in CascadedShadowMap::Clear().

    Any ideas would be most appreciated. Thanks!
  11. Weird... I thought I submitted a reply to this.

    Interesting! I mean, my self-shadowing certainly works; it's more an issue of artifacts close to the silhouette edge. May I ask how you determined your silhouette edges? The issue I seem to be having, even when calculating a smooth silhouette edge (cutting along the line where non-ambient light reaches zero), is that the lighting is smooth and continuous but the shadow volumes are sharp wedges, so I seem to invariably see artifacts close to the silhouette edge. I'll try to post an example later, when I have my laptop.
  12. After realizing the benefits of stencil shadow volumes for handling point-light shadows, I have been determined to add them to my own graphics engine. Days have turned into weeks, and weeks into months. I can easily cast beautiful, clean shadows behind an occluder, but the self-shadowing always suffers from ugly artifacts.

    I thought I'd found my solution for casting smooth silhouette edges here, but issues involving z-fighting and the per-polygon nature of these calculations ensure that self-shadowing problems always crop up. Countless tweaks and special-case handling haven't gotten me out of the woods.

    Even if I can get this working, I have more concerns. How well will GPU tessellation and displacement mapping work with stencil shadow volumes? Can I realistically expect to match the smoothness of shadow maps with these per-polygon calculations? Moreover, all the literature I can find on stencil shadow volumes seems to be at least six years old, and it has me wondering: are they just not practical anymore?

    The only possibility I can see is a hybrid: shadow maps to handle self-shadowing, and projected stencil shadow volumes elsewhere. But then I wonder if you might as well go all shadow maps in that case.

    If there are any proponents of stencil shadow volumes here, I'd love to know how you handle these concerns, because if I switch to shadow maps only, I'll have to throw away a lot of work! :(
  13. Hello, friends. I liked the link posted by Tesselator, so I set about implementing it, and I've gotten quite close, I think. I've tested that it correctly emits the silhouette edges exactly where the non-ambient lighting reaches zero.

    The only problem that remains is shown below. (Never mind the diamond-shaped region at the left; that's a model in the background with non-continuous surface normals, which I need to add special-case code to handle.)

    The lighting shader is currently set to draw green where the shadow will be. The z-fighting you see on the upper half is not really an issue, because the shadow only blocks non-ambient light, so it has no effect on the lighting there.

    The real problem is the extreme right and left interior faces of the torus. The light points in the negative Y direction, the shadow volume is projected straight down, and these extreme edges do not quite catch the shadow. I have tried a few tricks to enlarge the projected shadow. For example, if I was projecting a shadow volume from triangle (v0, v2, v4), I would instead project (v0 - eps*n0, v2 - eps*n2, v4 - eps*n4), where n0, n2, n4 are the respective normals of v0, v2, v4 and eps is a small positive value. This actually worked very well, except it created gaps in the shadow volume, producing narrow (or in some cases fairly wide) line artifacts.
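    The epsilon offset described above might look like this in code (the Vec3 type and helper are illustrative, not the original implementation):

    ```cpp
    #include <array>

    using Vec3 = std::array<float, 3>;

    // Offset a silhouette vertex against its vertex normal before
    // extruding the shadow volume: v' = v - eps * n.
    Vec3 offsetVertex(const Vec3& v, const Vec3& n, float eps) {
        return { v[0] - eps * n[0],
                 v[1] - eps * n[1],
                 v[2] - eps * n[2] };
    }
    ```

    Because each triangle is offset independently, adjacent silhouette quads no longer share vertices exactly, which is where the gap artifacts come from.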
  14. Screenshots are hard to get of this, since there's so much flicker and the artifacts aren't very noticeable without it. So I took a video clip:

    For this video I reverted to using only the triangle plane normals to determine front- or back-facing. I found that offsetting the shadow-volume polygons by epsilon*lightDirection (this is currently directional-light shadows only) eliminates a lot of the artifacts, but as you can see on the torus, they're not quite gone. Even if I optimize epsilon for just the torus object, I can't get it to look right.

    I like your suggestions, Erik Rufelt. I really am tempted to try adding an extra pass to deal with this directly, but I'm worried it will cost double the processing time per shadow.

    Have a look at the video clip and see if you come up with some suggestions.
  15. Thanks again to you all for your helpful feedback and suggestions.

    Using the same for both. It's relatively low-poly.

    So I tried implementing what Erik Rufelt suggested - pretty much exactly what you said. It definitely fixes the self-shadowing issue. However, the shadow shrinking produces a shadow with a jagged, per-polygon edge, so ironically it really just moves the same jagged artifacts off the self-shadows and onto the regular shadows. :\

    @Hodgeman, you said:

    Are you suggesting the same technique as Erik? What do you mean by a truly back-facing edge?

    @Kryzon, you said:

    I fail to see how extruding the front faces instead makes the self-shadowing problem any better. I don't have time to read through and digest a paper right now; could you give me a few more details on how they resolve the self-shadowing issue? I already have ambient shadows in place (which I assume means only ambient light is rendered in shadowed pixels).