# Xardes


Yes, that may introduce artifacts, at least that "shadow edge shimmering". But other techniques introduce artifacts as well, and I still want to try this and see for myself how it looks. However, I am really stuck figuring out which matrices they use as the view and projection matrix.

Thanks for the help so far.

What I want to achieve is what Crytek calls "view space aligned" shadow frustum alignment (http://www.crytek.com/download/Playing%20with%20Real-Time%20Shadows.pdf).

Basically: given 8 points in world space that form a cube (or box) of arbitrary size, position and orientation, and an arbitrary light direction, how do I choose the light view and projection matrices such that I get a "view space aligned" shadow frustum?

Thanks again, Xardes
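For what it's worth, one plausible reading of "view space aligned" (my interpretation, not necessarily the exact construction from the slides) is that the light camera's up vector is derived from the main camera's view direction, so the fitted light-space rectangle turns with the camera instead of staying fixed to a world axis. A minimal self-contained sketch, assuming a normalized `lightDir`; the small `V3` helpers just stand in for glm:

```cpp
#include <cmath>

struct V3 { float x, y, z; };

static V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 scale(V3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 normalize(V3 a) {
    float len = std::sqrt(dot(a, a));
    return { a.x / len, a.y / len, a.z / len };
}

// Up vector for the light camera: the camera's view direction with its
// component along the (normalized) light direction projected out.
// Feeding this to glm::lookAt as the up parameter makes the light-space
// x/y axes follow the camera's orientation.
V3 viewAlignedLightUp(V3 camViewDir, V3 lightDir) {
    V3 up = sub(camViewDir, scale(lightDir, dot(camViewDir, lightDir)));
    return normalize(up);
}
```

The result would replace the fixed `lightUpVector` in the `glm::lookAt` call; the fitted min/max extents then rotate together with the camera.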

I've also read the slides from Crytek's "Playing with Real-Time Shadows" (not sure whether there is a paper or anything more beyond that). Yes, they do say that rotating shadow maps introduces "shimmering" as a temporal artifact, but there are no visuals for it. I would like to evaluate for myself whether that is actually visible in my scenario. I doubt it is, for a sufficiently high resolution and sampling rate of the shadow maps.

I got a little further with my shadow mapping, but it's not perfect yet. Since I provided a video last time and people could help me faster, I'll just try that again. (The video is from the perspective of a second camera; you see the world space around the actual camera.)

I proceed as follows:

1. Compute the 8 corners of the camera's frustum (yellow in the video).
2. Compute the 8 corners of the smallest bounding box around that frustum.
3. Compute the center of that bounding box (sum of all corners divided by 8).
4. Compute the light view matrix: V = glm::lookAt(center, center + lightDirection, lightUpVector).
5. Transform all 8 corners into light space (multiply by V from step 4).
6. Compute the min/max x and y values over all 8 transformed corners.
7. Choose the min/max values for z large enough that all shadow-casting objects are included, wherever the frustum is.
8. Compute the projection matrix: P = glm::ortho(xMin, xMax, yMin, yMax, zMin, zMax).
9. Use V and P for shadow mapping as usual.

You can see the result in the video. The problem is that, depending on the horizontal angle of the camera, a lot of shadow map resolution is wasted, because the shadow map does not "rotate with the camera". How do I achieve that?

Thanks in advance for any help. o/

Xardes
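A common fix for the wasted resolution that avoids rotating the map is the opposite approach: bound the frustum slice with a sphere instead of a box, so the fitted ortho extents [-r, r] are the same for every camera orientation, and then snap the light-space offsets to whole texel increments so the map slides texel by texel. This is a sketch of that alternative, not of Crytek's method; `shadowMapSize` is an assumed parameter (texels per side):

```cpp
#include <algorithm>
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Radius of a sphere around the frustum slice, centered at its centroid.
// A sphere looks the same from every direction, so ortho extents of
// [-r, r] no longer change as the camera turns.
float boundingRadius(const std::array<Vec3, 8>& corners, const Vec3& center) {
    float r = 0.0f;
    for (const Vec3& c : corners) r = std::max(r, dist(c, center));
    return r;
}

// Snap a light-space coordinate to whole shadow-map texels, so the map
// moves in texel increments instead of sub-texel amounts (no edge crawl).
float snapToTexel(float lightSpaceCoord, float radius, float shadowMapSize) {
    float texelSize = 2.0f * radius / shadowMapSize;  // world units per texel
    return std::floor(lightSpaceCoord / texelSize) * texelSize;
}
```

The trade-off is a somewhat looser fit than the tight box (the sphere wastes some area), in exchange for a shadow map that is stable under both rotation and translation.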

Thanks for the hint. Is it really that distracting when it doesn't fit perfectly?

So, to create an orthographic projection for a directional light I can use glm::ortho(left, right, bottom, top, near, far). What I need to do, if I understand correctly, is to transform the bounding box of my view frustum from world space to light space. My light view matrix transforms the vertices such that the centroid of my bounding box is at (0,0,0) and the light direction (0,-1,-1) maps to (0,0,-1), right? So I would need to transform my bounding box in the same way, so that it is in light space, and use those values for glm::ortho?

Any help is greatly appreciated.

Xardes

Hey,

I'm trying to implement cascaded shadow mapping. Currently I have a problem creating the correct view and projection matrices for the light camera. I proceed as follows.

Let 0.1 be the near plane and 8.0f the far plane for the first cascade. Then I compute the following:

```cpp
float fovRadians = 70.0 * (M_PI / 180.0);
float viewRatio  = 16.0 / 9.0;
float nearHeight = 2.0 * tan(fovRadians / 2.0) * 8.0;
float nearWidth  = nearHeight * viewRatio;

// Compute the 8 corner points, in world space, of the bounding box around
// the view frustum from the near plane to the plane at depth 8.0f.
frontLeftBottom  = cam->getCamPos() + cam->getViewVector() * 0.1f - cam->getRightVector() * 0.5f * nearWidth - cam->getUpVector() * 0.5f * nearHeight;
frontLeftTop     = cam->getCamPos() + cam->getViewVector() * 0.1f - cam->getRightVector() * 0.5f * nearWidth + cam->getUpVector() * 0.5f * nearHeight;
frontRightBottom = cam->getCamPos() + cam->getViewVector() * 0.1f + cam->getRightVector() * 0.5f * nearWidth - cam->getUpVector() * 0.5f * nearHeight;
frontRightTop    = cam->getCamPos() + cam->getViewVector() * 0.1f + cam->getRightVector() * 0.5f * nearWidth + cam->getUpVector() * 0.5f * nearHeight;
backLeftBottom   = cam->getCamPos() + cam->getViewVector() * 8.0f - cam->getRightVector() * 0.5f * nearWidth - cam->getUpVector() * 0.5f * nearHeight;
backLeftTop      = cam->getCamPos() + cam->getViewVector() * 8.0f - cam->getRightVector() * 0.5f * nearWidth + cam->getUpVector() * 0.5f * nearHeight;
backRightBottom  = cam->getCamPos() + cam->getViewVector() * 8.0f + cam->getRightVector() * 0.5f * nearWidth - cam->getUpVector() * 0.5f * nearHeight;
backRightTop     = cam->getCamPos() + cam->getViewVector() * 8.0f + cam->getRightVector() * 0.5f * nearWidth + cam->getUpVector() * 0.5f * nearHeight;

// Compute the center of that box
centroid = (frontLeftBottom + frontLeftTop + frontRightBottom + frontRightTop +
            backLeftBottom + backLeftTop + backRightBottom + backRightTop) * (1.0f / 8.0f);

right = glm::cross(glm::vec3(0, -1.0, -1.0), glm::vec3(0, 1, 0));
up    = glm::cross(glm::vec3(0, -1.0, -1.0), right);

// This is the view matrix for the light and the first shadow cascade
// ((0, -1.0, -1.0) is the direction of the light)
V = glm::lookAt(centroid, centroid + glm::vec3(0, -1.0, -1.0), up);
```

How do I now compute an orthographic projection matrix for the light space (directional light)? Ignore that there may be shadow casters that are far away; I want an orthographic projection that projects exactly the bounding box I have computed above in world space. How exactly do I get my projection matrix from those 8 world-space positions?

Thank you very much. Xardes
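To sketch an answer to the question above: once the 8 corners are multiplied by V, the projection is simply the tightest orthographic volume around them, which is what glm::ortho builds from the min/max extents. Below is a self-contained sketch; the small `Vec3`/`Mat4` types stand in for glm's, and the `ortho` function reproduces glm::ortho's standard OpenGL (column-major, right-handed, [-1, 1] depth) formula:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };

// Column-major 4x4, same memory layout glm uses.
struct Mat4 { float m[16]; };

// Orthographic projection matching glm::ortho's OpenGL convention.
Mat4 ortho(float l, float r, float b, float t, float n, float f) {
    Mat4 M = {};
    M.m[0]  =  2.0f / (r - l);
    M.m[5]  =  2.0f / (t - b);
    M.m[10] = -2.0f / (f - n);
    M.m[12] = -(r + l) / (r - l);
    M.m[13] = -(t + b) / (t - b);
    M.m[14] = -(f + n) / (f - n);
    M.m[15] =  1.0f;
    return M;
}

// Given the 8 box corners ALREADY transformed into light space by V,
// fit the tightest ortho volume around them. Note: in light space the
// light looks down -z, so near/far come from the *negated* z range.
Mat4 fitOrtho(const std::array<Vec3, 8>& c) {
    float xMin = c[0].x, xMax = c[0].x;
    float yMin = c[0].y, yMax = c[0].y;
    float zMin = c[0].z, zMax = c[0].z;
    for (const Vec3& p : c) {
        xMin = std::min(xMin, p.x); xMax = std::max(xMax, p.x);
        yMin = std::min(yMin, p.y); yMax = std::max(yMax, p.y);
        zMin = std::min(zMin, p.z); zMax = std::max(zMax, p.z);
    }
    // near = -zMax, far = -zMin (visible points have negative light-space z)
    return ortho(xMin, xMax, yMin, yMax, -zMax, -zMin);
}
```

With glm this collapses to a loop computing the min/max of the transformed corners followed by one glm::ortho call.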
7. ## Computing smooth normals for procedurally generated terrain

But isn't that exactly what sampling with GL_LINEAR should do? It should linearly interpolate the height if I do not hit a perfect texel center, and from that I should get interpolated normals, right?
8. ## Computing smooth normals for procedurally generated terrain

As you can see, I changed that in my second code posting: I started to ray-trace the neighbour vertices and take accurate samples of the texture. But I still wanted to avoid the tutorial's solution, because it creates something like sharp edges in the normals, i.e. that staircase effect you can see in the videos. Is there any way I can avoid that altogether?
9. ## Computing smooth normals for procedurally generated terrain

I already tried that, but it didn't help me clear things up. So, there are two things: first, the normals/diffuse lighting have some sort of "staircase" effect, and second, the normals/diffuse lighting sort of "flicker". Neither is the case for the tree I'm rendering; there everything is totally smooth. I really don't understand where the difference is.
10. ## Computing smooth normals for procedurally generated terrain

No, I am not snapping my sampling points to the nearest texel center. I sample the heightmap texture with the direct (scaled) xz coordinates using GL_LINEAR. What exactly goes wrong here, and what should I do instead?

The lighting isn't as smooth as the heightmap? I mean, the heightmap defines a continuous 3D mesh over the xz plane, and averaging normals per vertex should result in smooth lighting?
11. ## Computing smooth normals for procedurally generated terrain

Alright, two videos this time.

The first shows diffuse lighting when I compute the normals in the fragment shader in exactly the same way it is done in the tutorial (but with 0.25 ambient light):

```glsl
vec3 normal = normalize(cross(dFdx(oPos.xyz), dFdy(oPos.xyz)));
float diffuse = max(dot(normalize(normal), normalize(vec3(0.0, 15.0, 15.0))), 0.25);
```

Video:

The second one computes the normals in the vertex shader and only the lighting in the fragment shader, with the code posted earlier.

Video:

@WiredCat I don't get why this wouldn't work in a shader... But I guess computing an appropriate normal map would be an elegant workaround. It also saves all the computation in the shader. If I do it on the CPU over multiple frames, once the player reaches the end of the current area, that shouldn't be too difficult. Thank you, I will try that. o/
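As a sketch of the CPU route mentioned above: precompute one normal per heightmap texel with central differences and upload the result as a normal map. The `texelWorldSize` and `heightScale` parameters are assumptions standing in for the terrain's actual texel spacing and vertical scale:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Normal { float x, y, z; };

// Precompute one unit normal per heightmap texel using central
// differences. `height` is row-major with dimensions w*h;
// `texelWorldSize` is the world-space spacing between texels and
// `heightScale` the vertical scale (assumed terrain parameters).
std::vector<Normal> buildNormalMap(const std::vector<float>& height,
                                   int w, int h,
                                   float texelWorldSize, float heightScale) {
    std::vector<Normal> normals(w * h);
    auto at = [&](int x, int y) {
        x = std::max(0, std::min(w - 1, x));  // clamp at the borders
        y = std::max(0, std::min(h - 1, y));
        return heightScale * height[y * w + x];
    };
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float sx = at(x + 1, y) - at(x - 1, y);   // slope along x
            float sz = at(x, y + 1) - at(x, y - 1);   // slope along z
            Normal n = { -sx, 2.0f * texelWorldSize, -sz };
            float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
            normals[y * w + x] = { n.x / len, n.y / len, n.z / len };
        }
    }
    return normals;
}
```

Sampling this map with GL_LINEAR in the shader then interpolates the normals themselves, rather than deriving them from interpolated heights.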
12. ## Computing smooth normals for procedurally generated terrain

Alright, another video.

The color of the fragments in the video is computed as follows:

```glsl
vec3 green = vec3(0.0, 0.75, 0.0);
vec3 grey = vec3(0.2, 0.2, 0.2);
vec3 col = mix(green, grey, oPos.y * 0.1);
// Final fragment color
oFragColor = vec4(col, 1.0);
```

That means the color is more green in low areas and less green in higher areas. As you can see, the 3D world position generated from the heightmap is continuous (displayed within the limits of RGB space, of course). Therefore the sampling of the heightmap is done correctly, is it not? I mean, if I just compute the normals correctly out of these world-space coordinates, I should get smooth lighting, right?

If you want, I will add another video with just diffuse lighting; I just wanted to make sure everything is fine up to this point.

Thanks again.
13. ## Computing smooth normals for procedurally generated terrain

Screenshots don't add much information, as static images don't really demonstrate the problem. See this video instead, and please compare the lighting: the lighting of the leaves is smooth, the terrain's is not. It's not flat shading, but the "transition of normals" isn't smooth. Still, it looks better than before. I would like to achieve lighting that is as smooth as the lighting of the leaves.

Right now I do the following (changed since creating this thread):

1. Compute the world-space position (ray-traced intersection with the xz plane) of the screen-space vertex to process.
2. Compute the world-space positions (ray-traced intersection with the xz plane) of the 4 adjacent vertices (x/y +1/0, -1/0, 0/+1, 0/-1).
3. Compute the 4 world-space triangle normals.
4. Compute the per-vertex average.

Code:

```glsl
vec3 finalPos[5];
// Offsets of the vertex itself and its 4 adjacent vertices
// in the 256x256 screen-space grid
vec2 offset[5] = vec2[5](vec2(0.0, 0.0),
                         vec2( 1.0 / 256.0, 0.0),
                         vec2(-1.0 / 256.0, 0.0),
                         vec2(0.0,  1.0 / 256.0),
                         vec2(0.0, -1.0 / 256.0));
for (int i = 0; i < 5; ++i) {
    vec3 camera_dir = normalize((inverse(projectionMatrix) * vec4(aPosition.x + offset[i].x, aPosition.z + offset[i].y, 1.0, 1.0)).xyz);
    vec3 world_dir = normalize(inverse(modelViewMatrix) * vec4(camera_dir, 0)).xyz;
    float t = camPos.y / -world_dir.y;
    finalPos[i] = camPos + t * world_dir;
}

float height[5];
for (int i = 0; i < 5; ++i)
    height[i] = 8.0 * texture(myTextureSampler, finalPos[i].xz / 128.0).r;

float sx = height[1] - height[2];
float sz = height[3] - height[4];
oPos = vec3(finalPos[0].x, height[0], finalPos[0].z);

// This is passed to the fragment shader!
oNormal = normalize(vec3(sx, 2.0 / 255.0, sz));
```

Any ideas how I can get it smoother, or what I am doing wrong?
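For reference, here is the shader's ray/plane step transcribed to C++ with the degenerate cases made explicit. Rays at or above the horizon have a non-negative y direction, for which t becomes negative or blows up; in the shader those cases are hit silently and may be one source of unstable sample positions near the grid edges:

```cpp
#include <cmath>

struct Vec3f { float x, y, z; };

// Intersect a ray (origin camPos, direction dir) with the y = 0 plane,
// mirroring the vertex-shader computation t = camPos.y / -dir.y.
// Returns false when the ray is parallel to the plane or the plane
// lies behind the ray origin.
bool intersectXZPlane(const Vec3f& camPos, const Vec3f& dir, Vec3f& hit) {
    if (std::fabs(dir.y) < 1e-6f) return false;   // parallel: no hit
    float t = camPos.y / -dir.y;
    if (t < 0.0f) return false;                   // plane is behind the ray
    hit = { camPos.x + t * dir.x, 0.0f, camPos.z + t * dir.z };
    return true;
}
```

Clamping or rejecting these rays on the shader side (e.g. capping t) keeps the neighbour samples from jumping when a grid vertex approaches the horizon.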
14. ## Computing smooth normals for procedurally generated terrain

Hey guys,

I recently implemented a way of procedurally generating my terrain by following this tutorial: https://rendermeapangolin.wordpress.com/2015/05/26/screen-space-grid/

This works quite well. If you have a look at the screenshot at the very end of that tutorial, you can see that the shading is some sort of flat shading. I would like to achieve smoother shading. I tried to compute per-vertex normals within the vertex shader by averaging the normals of all faces that contain the vertex. This creates smoother shading; however, either I still have mistakes in my computation or it doesn't work so well with the screen-space grid, because the normals have no "smooth transition" as I move my camera, but rather change their values drastically.

Currently I do the following:

```glsl
// Sample the heightmap (2048^2)
float height[5];
height[0] = texture(myTextureSampler, finalPos.xz + vec2(-1.0 / 2048.0, 0.0)).r;
height[1] = texture(myTextureSampler, finalPos.xz + vec2(0.0, -1.0 / 2048.0)).r;
height[2] = texture(myTextureSampler, finalPos.xz + vec2(0.0, 0.0)).r;
height[3] = texture(myTextureSampler, finalPos.xz + vec2(1.0 / 2048.0, 0.0)).r;
height[4] = texture(myTextureSampler, finalPos.xz + vec2(0.0, 1.0 / 2048.0)).r;

vec3 n[4];
vec3 l0;
vec3 l1;

// Bottom left triangle
l0 = vec3(finalPos.x - 1.0 / 2048.0, height[0], finalPos.z) - vec3(finalPos.x, height[2], finalPos.z);
l1 = vec3(finalPos.x, height[1], finalPos.z - 1.0 / 2048.0) - vec3(finalPos.x, height[2], finalPos.z);
n[0] = normalize(cross(l0, l1));
if (n[0].y < 0) n[0] = -n[0];

// Upper right triangle
l0 = vec3(finalPos.x + 1.0 / 2048.0, height[3], finalPos.z) - vec3(finalPos.x, height[2], finalPos.z);
l1 = vec3(finalPos.x, height[4], finalPos.z + 1.0 / 2048.0) - vec3(finalPos.x, height[2], finalPos.z);
n[1] = normalize(cross(l0, l1));
if (n[1].y < 0) n[1] = -n[1];

// Upper left triangle
l0 = vec3(finalPos.x - 1.0 / 2048.0, height[0], finalPos.z) - vec3(finalPos.x, height[2], finalPos.z);
l1 = vec3(finalPos.x, height[4], finalPos.z + 1.0 / 2048.0) - vec3(finalPos.x, height[2], finalPos.z);
n[2] = normalize(cross(l0, l1));
if (n[2].y < 0) n[2] = -n[2];

// Bottom right triangle
l0 = vec3(finalPos.x + 1.0 / 2048.0, height[3], finalPos.z) - vec3(finalPos.x, height[2], finalPos.z);
l1 = vec3(finalPos.x, height[1], finalPos.z - 1.0 / 2048.0) - vec3(finalPos.x, height[2], finalPos.z);
n[3] = normalize(cross(l0, l1));
if (n[3].y < 0) n[3] = -n[3];

// Average normal
oNormal = normalize((n[0] + n[1] + n[2] + n[3]) / 4.0);
```

The shading is certainly smoother now, but the normals have no smooth transition to one another. I suppose that happens because the vertices are denser near the camera than further away and do not have that uniform 1.0/2048.0 spacing? Any ideas how I can achieve completely smooth shading while procedurally generating terrain as explained in the tutorial?

Thank you very much!

Xardes