
Lewa

Member
  • Content Count

    79
  • Joined

  • Last visited

Community Reputation

428 Neutral

About Lewa

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Business
    Programming


  1. (Wasn't able to revert back to my old state of znear < zfar to make a proper before/after comparison) but here is an example of the accuracy of reconstructing the world-space position of a pixel from the depth buffer: there are no visible artifacts. (Before that change, the pink area was bleeding all over the red part; it looked like z-fighting.) I'm happy with that kind of precision. The only downside is that it requires ARB_clip_control, which is a fairly new feature (core in OpenGL 4.5), whereas my renderer previously worked perfectly fine on OpenGL 3.3. (So essentially this raises the system requirements on the end user's part.)
  2. Well, learned something new today. Works flawlessly. Thank you!
  3. So, I'm currently in the process of implementing a reversed floating-point depth buffer (to increase the depth precision) in OpenGL. I got everything working except the modifications to the projection matrices necessary for this to work. (Matrices are my weakness.) What I got working are a perspective projection matrix and an orthographic projection matrix which have the x/y range spanning from -1 to 1 and the z range from 0 to 1, as in DirectX (it's basically the exact same code from the glm library). I use them in combination with:

      glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

     Here are the two matrices:

      void Camera::setProjectionPerspektiveY_reverseDepth_ZO(float fovY, float width, float height, float znear, float zfar)
      {
          float rad = MathE::toRadians(fovY);
          float h = glm::cos(0.5f * rad) / glm::sin(0.5f * rad);
          float w = h * height / width;

          glm::mat4 p(0.0f); // all elements zero, as in glm's perspective code
          p[0][0] = w;
          p[1][1] = h;
          p[2][2] = zfar / (znear - zfar);
          p[2][3] = -1;
          p[3][2] = -(zfar * znear) / (zfar - znear);
          this->projectionMatrix = p;
      }

      void Camera::setProjectionOrtho_reversed_ZO(float left, float bottom, float right, float top, float znear, float zfar)
      {
          glm::mat4 p(1.0f); // identity, as in glm's ortho code
          p[0][0] = 2.0f / (right - left);
          p[1][1] = 2.0f / (top - bottom);
          p[2][2] = -1.0f / (zfar - znear);
          p[3][0] = -(right + left) / (right - left);
          p[3][1] = -(top + bottom) / (top - bottom);
          p[3][2] = -znear / (zfar - znear);
          this->projectionMatrix = p;
      }

     My question is: does anybody know how to properly reverse the depth on both of these matrices? (From 0-1 to 1-0.)
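     A minimal sketch of what the reversed-Z versions could look like, assuming the same glm-style column-major layout and the GL_ZERO_TO_ONE clip range set above. This is simply the result of swapping znear and zfar in the depth terms so that the near plane maps to depth 1 and the far plane to depth 0 (not verified against the engine):

      // Perspective, reversed Z with [0,1] clip range: near plane -> depth 1, far plane -> depth 0.
      glm::mat4 p(0.0f);
      p[0][0] = w;
      p[1][1] = h;
      p[2][2] = znear / (zfar - znear);            // was  zfar / (znear - zfar)
      p[2][3] = -1.0f;
      p[3][2] = (zfar * znear) / (zfar - znear);   // was -(zfar * znear) / (zfar - znear)

      // Orthographic, reversed Z with [0,1] clip range.
      glm::mat4 o(1.0f);
      o[0][0] = 2.0f / (right - left);
      o[1][1] = 2.0f / (top - bottom);
      o[2][2] = 1.0f / (zfar - znear);             // was -1.0f / (zfar - znear)
      o[3][0] = -(right + left) / (right - left);
      o[3][1] = -(top + bottom) / (top - bottom);
      o[3][2] = zfar / (zfar - znear);             // was -znear / (zfar - znear)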
  4. Me again. So, I spent the last couple of days trying to stabilise my cascaded shadow maps. To do this, the shadow map has to: 1) have a fixed size (so that it doesn't scale/change with the camera rotation), by using a spherical bounding box, and 2) round the position to the nearest texel for camera movement. Now, I got number 1 working (it may not be the smallest possible sphere, but it works as a start), but number 2 is still giving me headaches. Here is the whole code. The important bit for clamping the coordinates to the nearest texels is this: I applied the sun's viewProjection matrix to a position at 0/0/0, then tried to clamp it to the nearest texel by bringing the clip-space position into the 0-1 range, multiplying it by the shadow map resolution, rounding this, and then calculating the difference between the rounded coordinate and the original one. But no matter what I do, the shadow map is still completely unstable, so I presume that I'm missing something in this rounding calculation. Note that I had to reverse the Z coordinates and flip min/max. Not entirely sure why I had to do this, but it seemed to fix shadow mapping for me (it worked perfectly fine with the standard shadow mapping code). Does anyone have an idea what could be missing in the rounding part of the code?
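     For reference, a minimal sketch of the texel-snapping step described above, assuming an orthographic sun projection. The function and parameter names (snapToTexelGrid, lightViewProj, shadowMapResolution) are placeholders, not the engine's actual code:

      #include <glm/glm.hpp>
      #include <glm/gtc/matrix_transform.hpp>

      // Snap the light's view-projection so the world origin always lands on a whole
      // texel; this stops the cascade from shimmering when the camera translates.
      glm::mat4 snapToTexelGrid(const glm::mat4& lightViewProj, float shadowMapResolution)
      {
          // Project the world origin into the light's clip space (w == 1 for ortho, so this is NDC).
          glm::vec4 origin = lightViewProj * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f);

          // One texel covers 2.0 / resolution in NDC ([-1,1] across the whole map).
          float texelsPerNdcUnit = shadowMapResolution * 0.5f;
          glm::vec2 texelPos = glm::vec2(origin) * texelsPerNdcUnit;
          glm::vec2 rounded  = glm::round(texelPos);

          // Offset (in NDC) needed to move the origin onto the nearest texel.
          glm::vec2 offset = (rounded - texelPos) / texelsPerNdcUnit;

          // Apply it as a post-projection translation in x/y only.
          return glm::translate(glm::mat4(1.0f), glm::vec3(offset, 0.0f)) * lightViewProj;
      }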
  5. Regarding issue number 2: turns out that this might be a floating-point precision issue. Here is an almost perpendicular wall placed at the center of the world (coordinates are 0/0/0). Here is the same wall at a steep angle at the position (500/500/0) (z is the up vector). I suppose it has to do with the view-space matrix, which loses precision the further you move away from the center (thus the reconstructed position, which should move along the plane, also loses accuracy, which leads to shadow acne). I'll have to see if I can optimize my shader. (Currently I'm doing a few things not really optimally, though I'm not sure if this issue can be completely removed even if I keep my matrix operations to a minimum.) Currently, I do these operations:

      //works (verified)
      vec3 depthToWorld(vec2 texcoord, float depth, mat4 invViewProj){
          vec4 clipSpaceLocation;
          clipSpaceLocation.xy = texcoord * 2.0f - 1.0f;
          clipSpaceLocation.z = depth * 2.0f - 1.0f;
          clipSpaceLocation.w = 1.0f;
          vec4 homogenousLocation = invViewProj * clipSpaceLocation;
          return homogenousLocation.xyz / homogenousLocation.w;
      }

      //calculate world-space position of fragment
      //-------------
      float cameraDepth = texture2D(sCameraDepth, vTexcoord).r;
      vec3 pixelWorldPos = depthToWorld(vTexcoord, cameraDepth, uInvViewProjection);
      //----------

      //determine cascade
      int cascadeIndex = 0;
      for(int i = 0; i < NUM_CASCADES; i++){
          cascadeIndex = i;
          if(cameraDepth <= uCascadeEndClipSpace[i+1]){
              break;
          }
      }
      //-------

      vec3 shadowCoord = vec3(uSunViewProjection[cascadeIndex] * vec4(pixelWorldPos, 1.0)); //from world space to projection space
      shadowCoord = (shadowCoord + 1.0) / 2.0; //move coordinates from -1/1 range to 0/1 range (used later for texture lookup)
      //shadowCoord is now the position of the given fragment seen from the player's perspective, projected onto the shadow map
      //------------

     The variable "shadowCoord" at the end of those operations probably loses way too much precision, which leads to those acne artifacts (due to the transformation of the depth from the player's view > to world position > to view space from the shadow map's view).

     /Edit: Fixed the precision issues! What I did is to move the eye coordinates of the player's view matrix to 0/0/0 for the shadow map depth comparisons. I had to shift the sun's view matrix relative to that too, in order to not break the shadow map calculations, but it worked! The precision-critical part (depth to world position) is now done from the center of the world (and this seems to fix the shadow acne). Now I have to fix issue number 1, but once that is done the results should be close to perfect.
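     A minimal sketch of the camera-relative trick described in the edit, assuming the matrices are rebuilt on the CPU each frame; cameraPos, cameraForward, cameraUp, projection and sunViewProj are placeholder names, not the engine's real ones:

      #include <glm/glm.hpp>
      #include <glm/gtc/matrix_transform.hpp>

      struct FrameMatrices { glm::mat4 invViewProjRel; glm::mat4 sunViewProjRel; };

      FrameMatrices buildCameraRelativeMatrices(const glm::mat4& projection,
                                                const glm::vec3& cameraPos,
                                                const glm::vec3& cameraForward,
                                                const glm::vec3& cameraUp,
                                                const glm::mat4& sunViewProj)
      {
          // Player view with the eye pinned to the origin (center = eye + forward = forward):
          // the depth-to-world reconstruction then happens around 0/0/0, where float precision is best.
          glm::mat4 viewAtOrigin = glm::lookAt(glm::vec3(0.0f), cameraForward, cameraUp);

          // Positions reconstructed with this matrix are camera-relative (worldPos - cameraPos).
          glm::mat4 invViewProjRel = glm::inverse(projection * viewAtOrigin);

          // Shift the sun's view-projection by +cameraPos so the shadow lookup still matches
          // those camera-relative positions.
          glm::mat4 sunViewProjRel = sunViewProj * glm::translate(glm::mat4(1.0f), cameraPos);

          return { invViewProjRel, sunViewProjRel };
      }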
  6. I'm done with the implementation now (at least roughly). Made a ton of screenshots to show the advantages (and some still existing issues) of this technique.

     Now, after testing that technique I understand why you got artifacts/visible wireframes in your implementation. If your sampling resolution is too low, detailed geometry which occupies a single fragment can't be properly reconstructed (thus the reconstruction yields wrong results and you get unwanted shadowing). I have a similar issue here at the low 512x512 sampling resolution: ShadowMap (4 cascades, each with 512x512 resolution). However, if we up the resolution to 2k, the results are a whole lot better (the issue still persists, but is much less noticeable): ShadowMap (4 cascades, each with 2048x2048 resolution). There is basically no shadow acne in the distance (well, almost) and the peter-panning effect is kept to a minimum (as the bias value can be kept very small due to the plane reconstruction).

     Now, the issue is that shadow acne still exists, but only on faces which are almost perpendicular to the sun/light vector. You can see the view-space normals of the cascade in the lower left corner. The face which has shadow acne is almost perpendicular (the face is painted red in the normal buffer preview in the first screenshot); in the second screenshot the wall is 100% perpendicular to the sun (the wall is not even visible in the normal buffer). I'm not 100% sure what causes the shadow acne in this case. (Have to investigate further.)

     So the remaining issues are: 1) corners of the geometry can experience self-shadowing (most noticeable at lower shadow map resolutions), and 2) shadow acne on very steep (almost perpendicular) faces. Number 1 can hopefully be fixed by doing additional filtering on the edges of the geometry (the normals of each texel can be used to detect edges rather consistently). Number 2... no idea at the moment.

     /Edit: Issue number 2 is probably bias related. Now that I reconstruct the depth/face normal, I'll have to take that into account before applying the bias. (Otherwise, if you have an almost perpendicular wall from the sun's perspective and then apply bias in the direction of the sun, the overall distance between the bias-corrected depth and the reconstructed depth is too small and you get shadow acne.)
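     One common remedy for that situation (not necessarily the right fit for the plane-reconstruction approach above) is a slope-scaled bias, where the offset grows as the surface turns away from the light. A minimal sketch, with baseBias and maxBias as assumed tuning values:

      #include <algorithm>
      #include <cmath>
      #include <glm/glm.hpp>

      // Slope-scaled depth bias: N is the surface normal, L the direction towards the sun.
      // The steeper the face is relative to the light, the larger the bias, clamped to a maximum.
      float slopeScaledBias(const glm::vec3& N, const glm::vec3& L, float baseBias, float maxBias)
      {
          float cosAngle = glm::clamp(glm::dot(N, L), 0.001f, 1.0f);     // avoid blow-up near 90 degrees
          float tanAngle = std::sqrt(1.0f - cosAngle * cosAngle) / cosAngle;
          return std::min(baseBias * (1.0f + tanAngle), maxBias);
      }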
  7. Yeah, you are right. On a per-face basis this could work. Sadly I don't have geometry shader support in my engine at the moment, so testing that will require some time until I make the necessary changes. So giving each face a unique ID will be a bit of a hassle for now. I tried adding a sphere to the scene, and I don't see any wireframe artifacts on it (yet?). The only artifacts are the ones due to the normal reconstruction from the depth buffer (same ones as on the block geometry); those will hopefully be fixed once I render the normals alongside the shadow map. Here are a few screenshots with the same 512x512 shadow map. (Note, I'm not using any kind of depth peeling. I simply render a normal back-face-culled shadow map and reconstruct the normals from there.)
  8. Yes, this has the potential to fix the issue somewhat, but it would effectively remove intended self-shadowing. (If you apply the same ID to all triangles of an object, it isn't going to cast a shadow onto itself; for example, the arms/legs of a character wouldn't cast shadows, etc...) I played around with different bias values with no success: it either introduces shadow acne/light leaking, or the peter-panning effect is so strong that it breaks the shadows completely.

     Now, I spent the last couple of days on this issue (hence the late reply) and experimented with various attempts at fixing it (depth peeling/dual-layer shadow maps, screen-space gap filling, etc...). Nothing that would solve the problem properly. However, I found a potential solution/technique. The issue with shadow maps is that the rasterized depth buffer creates a stair-stepping effect, due to each fragment having a fixed depth value and the finite number of texels stored in the depth map. So what I tried is to render the scene with back-face culling and reconstruct the depth values in between depth fragments, by taking the current shadow fragment, looking up the neighbouring pixels (effectively giving me 2 vectors) and reconstructing the plane normal from that. This plane normal can then be used to calculate the depth values in between the pixels of the shadow map (effectively bypassing the stair-stepping effect of the regular shadow map lookup). I think this technique is called "depth gradient", however I wasn't able to find anything related to that on the internet (besides some Nvidia presentation slides where this term is mentioned).

     The results are quite promising. Here is a comparison (I chose a low shadow map resolution to make the shadow acne more pronounced). Cascaded shadow map (4 cascades, each 512x512) with a constant shadow bias: you can clearly see the shadow acne. Now the same with the "depth gradient"/plane reconstruction. Cascaded shadow map (4 cascades, each 512x512), "depth gradient"/plane reconstruction + minor bias/offset: overall, most of the shadow acne is completely gone. You also get almost no peter-panning, as the bias can be kept relatively small.

     One issue that remains is that you get artifacts at the corners of the geometry. That's because I reconstruct the plane/normals from the depth buffer. (Issues arise on pixels which are adjacent to pixels belonging to a different triangle with a different normal from the sun camera's perspective.) One potential solution to that problem would be to render shadow maps with their triangle normals into an FBO and use those normals for the plane reconstruction in the shadow mapping shader (so you don't have to rely on adjacent fragments and thus don't have to reconstruct the normal). This will increase the memory bandwidth of course, but it should fix basically all of those bad reconstruction cases. (Will test that in the next days.)
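     For illustration, a minimal CPU-side sketch of the plane/gradient reconstruction described above (in the engine this logic would live in the shadow lookup shader); sampleSunDepth and texelSize are assumed placeholders:

      #include <functional>
      #include <glm/glm.hpp>

      // Reconstruct the occluder depth at the exact lookup position by extrapolating
      // the plane through the current texel and its right/up neighbours.
      float reconstructedSunDepth(const glm::vec2& shadowCoord,                         // 0..1 shadow map coords
                                  const std::function<float(glm::vec2)>& sampleSunDepth,
                                  float texelSize)                                      // 1.0 / shadow map resolution
      {
          // Centre of the texel that contains the lookup position.
          glm::vec2 center = (glm::floor(shadowCoord / texelSize) + 0.5f) * texelSize;

          // Depths of the current texel and its right/up neighbours.
          float d0 = sampleSunDepth(center);
          float dx = sampleSunDepth(center + glm::vec2(texelSize, 0.0f));
          float dy = sampleSunDepth(center + glm::vec2(0.0f, texelSize));

          // Depth gradient (change of depth per unit of shadow map coordinate),
          // i.e. the slope of the plane spanned by the two neighbour vectors.
          glm::vec2 gradient = glm::vec2(dx - d0, dy - d0) / texelSize;

          // Extrapolate from the texel centre to the exact lookup position,
          // bypassing the stair-stepping of the rasterised depth buffer.
          glm::vec2 delta = shadowCoord - center;
          return d0 + glm::dot(gradient, delta);
      }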
  9. I subtract the bias from the depth stored in the shadow map:

      //depth stored in the shadow map for this texel
      float texSunDepth = texture2D(sSunDepth[cascadeIndex], shadowCoord.xy).r;
      //per-cascade bias value
      float biasT = bias[cascadeIndex];

      float shadow = 0;
      //fragment lies beyond the (bias-adjusted) occluder depth
      if(texSunDepth - biasT < shadowCoord.z){
          shadow = 1.0f;
      }
  10. So, recently I was in the process of improving shadow mapping and tried to fix (or at least reduce) shadow acne. One of the solutions frequently recommended is to use front-face culling (which completely removes shadow acne on lit surfaces). It works, but it comes with another artifact. It makes sense as to why it's happening: if we put the camera inside the white block, we see this: The first thing I tried is to experiment with shadow bias. While this removes the pixel crawl in the shadowed area, it introduces shadows on top of the edge of the lit surface. I also tried to come up with alternative solutions, for example rendering the shadow map with front- and back-face culling and using the average between the two distances as the comparison point with the camera depth. Though this still didn't remove all shadow acne. (Not to mention that this adds another render pass, which hits performance.) I wasn't really able to find any resources on how to combat this issue. Are there any common techniques for how this could be fixed, or at least reduced to a minimum?
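     A minimal sketch of the front/back midpoint comparison mentioned above, assuming both depth layers are available at lookup time (written here as a C++-style helper for clarity; the function name is a placeholder):

      // Midpoint comparison: the occluder depth used for the test is halfway between
      // the front-face and back-face depth, which moves the acne away from both surfaces.
      float midpointShadowTest(float frontDepth, float backDepth, float fragmentDepth)
      {
          float midpoint = 0.5f * (frontDepth + backDepth);
          return (fragmentDepth > midpoint) ? 0.0f : 1.0f;   // 0 = in shadow, 1 = lit
      }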
  11. I'm fully aware that PBR requires proper illumination in order to achieve the best results. I don't have the ability to do that yet, though (I have neither realtime nor pre-baked GI). So what I'm trying to do is to approximate the illumination as well as possible without deviating from PBR too much. The only thing I have is a very basic implementation of baked AO, and even that isn't single bounce. (Note that the floor is still unaffected by the shading, as I didn't include the floor mesh in the baking process.) The ambient lighting in this case represents the lighting of the sky (areas which are occluded get progressively darker the less they are lit by the sky). So what I tried to do is to have only 1 light source (the sun) and simulate lighting from the sky/GI by having an ambient light which gets filled into the occluded areas. In order to darken interiors, a baked AO texture is applied by multiplying the ambient contribution with the AO value. So the shader code looks roughly like this:

      vec3 color = texture2D(sLight, vTexcoord).rgb; //accumulated light on the given fragment (from the directional light)
      float shadow = texture2D(sShadowMap, vTexcoord).r; //tells us if the pixel is in shadow. 0 = fragment is occluded from the sun, 1 = pixel is lit by the sun
      color *= shadow; //occluded pixels are set to 0 (completely dark)

      //now approximate skydome lighting
      vec3 ambient = uAmbientLightColor * texture2D(gAlbedo, vTexcoord).rgb; //ambient light (fixed value) multiplied with the albedo texture
      float AO = texture2D(sAmbientOcclusion, vTexcoord).r; //baked world-space ambient occlusion

      color += ambient; //...
      //apply AO only if the pixel is in sun shadow (if shadow == 0)
      color *= mix(AO, 1.0, shadow);

      outputF = color; //write to screen

     I'm fully aware that this isn't a physically correct solution. So in my case it's either implementing a proper GI solution or hacking the PBR to achieve my desired results? How is the ambient lighting in Unreal (seen in my first post) implemented?
  12. So, there is one thing that I don't quite understand. (Probably because I didn't dive that deep into PBR lighting in the first place.) Currently I have implemented a very basic PBR renderer (with a microfacet BRDF shading model) in my engine. The lighting system I have is pretty basic (1 directional/sun light, deferred point lights and 1 ambient light). I don't have a GI solution yet (only a very basic world-space ambient occlusion technique). Here is what it looks like: Now, what I would like to do is to give the shadows a slightly blueish tint (to simulate the blueish light from the sky). Unreal seems to implement this too, which gives the scene a much more natural look. Now, my renderer does render in HDR and I use exposure/tonemapping to bring this down to LDR. The first image used a direct (sun) light with an RGB value of (40,40,40) and an indirect light of (15,15,15). Here is the same picture, but with an ambient light of (15,15,15) * ((109,162,255) / (255,255,255)), which should give us this blueish tint. The problem is it looks like this: The shadows do get the desired color (more or less); the issue is that all lit pixels also get affected, giving the whole scene a blue tint. Reducing the ambient light intensity results in way too dark shadows; increase the intensity and the shadows look alright, but then the whole scene gets affected way too much. In the shader I basically have:

      color = directionalLightColor * max(dot(normal, sunNormal), 0.0) + ambientLight;

     The result is that the blue component of the color will always be higher than the other two. I could of course fix it by faking it (only adding the ambient light if the pixel is in shadow), but I want to stay as close to PBR as possible and avoid adding hacks like that. My question is: how is this effect done properly (with PBR / proper physically based lighting)?
  13. Thanks! The visualisation helped me to grasp the concept better. I was able to implement this in my C++ project. Some screenshots: It works quite well. (There are some issues, like light bleeding at the intersections between two planes, and the UV map isn't that great, but this can be fixed.) I used Blender's icosphere to create uniformly distributed points on the sky. The sample count had to be quite high to avoid any banding artifacts. Now the issue is that, due to the shadow maps only being cast from above the ground (pointing downwards), all triangles which point downwards (face normal at 0,0,-1) will be completely black. An example: One possible solution would be to have additional points under the ground (basically creating a full point-cloud sphere instead of only a half-sphere), but the results were subpar. (Especially as the ground mesh occludes most of the stuff anyway.) Removing the floor mesh from rendering for shadow maps with the origin under the ground might work, but this introduces artifacts on geometry which is in contact with the ground. I think the only proper solution would be to use the half-sphere (like in the screenshot above) and have (at least) one-bounce lighting in the AO calculation to lighten interiors up a bit, but I wasn't able to find a solution which would work well enough with this baking approach. (Maybe reflective shadow maps? The issue is that they don't seem to check for occlusion of the bounced light.)
  14. I just tested this simple setup in Blender and baked AO there: The middle part is correctly occluded, but the edges on the side wouldn't be lit by the shadow maps (because they are coming from the top). I suppose that placing additional shadow maps at the bottom isn't enough, as those may then interfere with additional geometry (like a floor, for example). How did you handle this issue? Or is this just an artifact one has to accept with this technique? Baking those lightmaps during the loading process is a good idea. (Hopefully it doesn't drag the loading times out too much.)
  15. So, I'm currently on a quest to find a realtime world-space ambient occlusion algorithm for a game I'm making (or at least to check if it's feasible; I wanted to avoid baking AO/lightmaps in a map editor, as I would like to avoid storing lightmap data in my level files in order to keep the file size as small as possible, and to avoid expensive/long precomputations in the first place). Now, I stumbled upon an AO concept which works by using multiple shadow maps which are placed on the sky's hemisphere and then merged together to create the ambient occlusion effect. Here is an old example/demo from Nvidia: http://developer.download.nvidia.com/SDK/9.5/Samples/samples.html#ambient_occlusion I was able to find a video which shows this in action. It seems to work rather well, although I can see a couple of issues with it:
     - Rendering multiple shadow maps is expensive (though that's expected with realtime AO).
     - As shadows are only cast from the top, every surface which is pointing downwards will be 100% in shadow/black. (Normally such a surface would have a bit of light around the edges due to the light bouncing around. It works best for surfaces facing upwards/towards the sky.)
     - Flickering can be an issue if the shadow map is covering a large scene/area or if the resolution of the shadow map is too low. (Could be fixed?)
     It's incredibly hard to find information on this technique on the internet (either demos, implementation/improvement details, etc...). I suppose because it's not that widely used? Did anybody implement AO in a similar style to this? Are there any known sources which cover this technique in more detail?
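     As I understand the idea, its core boils down to averaging visibility over the hemisphere directions. A minimal sketch of that accumulation step, with hemisphereDirs and shadowTest as placeholders for the actual sample directions and per-direction shadow map lookup:

      #include <functional>
      #include <vector>
      #include <glm/glm.hpp>

      // Average the visibility of a point over all hemisphere directions: each direction has
      // its own shadow map, and shadowTest returns 1 if the point is visible (unoccluded)
      // from that direction and 0 otherwise. The average is the ambient occlusion term.
      float ambientVisibility(const glm::vec3& worldPos,
                              const std::vector<glm::vec3>& hemisphereDirs,
                              const std::function<float(const glm::vec3& pos, const glm::vec3& dir)>& shadowTest)
      {
          float visibility = 0.0f;
          for (const glm::vec3& dir : hemisphereDirs)
              visibility += shadowTest(worldPos, dir);
          return hemisphereDirs.empty() ? 1.0f : visibility / float(hemisphereDirs.size());
      }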