Lewa

Member
  • Content Count

    83
  • Joined

  • Last visited

Community Reputation

428 Neutral

About Lewa

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Business
    Programming

  1. Hi! I've been stuck on this one issue for a while and can't find a solution. I'm currently working with DirectX and want to write the depth value of the geometry into a texture (for shadow mapping). The code currently looks like this:

     vec4 object_space_pos = vec4(in_Position.x, in_Position.y, in_Position.z, 1.0);
     gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos; // vertex position
     depth = (gm_Matrices[MATRIX_WORLD_VIEW] * object_space_pos).z; // depth in view space

     This works as expected. The issue is that storing the view-space depth isn't very efficient. I would like to normalize it into a 0-1 range so that I can encode it into an RGB texture. (Don't ask, it's a limitation of the system I'm working with.) Note that I'm using an orthographic projection, and while the code might look like GLSL, the backend is still running on DirectX. From my understanding I can use the NDC coordinates and write them into the depth like this:

     depth = gl_Position.z / gl_Position.w;

     In another shader (during shadow-map occlusion testing) I read the pixel back and want to reconstruct the view-space depth for comparison. Here is the issue: I'm not sure I'm doing it right. The code looks like this:

     float getViewZ_from_NdcZ(float ndcZ, float zNear, float zFar){
         return zNear + ndcZ * (zFar - zNear);
     }

     float texDepth = convertRGBtoFloat(texture2D(sSunDepth, deptCoords.xy).rgb); // returns NDC-space depth (0-1)
     texDepth = getViewZ_from_NdcZ(texDepth, -10000.0, 10000.0); // zNear and zFar of the sun's orthographic projection

     Am I overlooking something in the normalization of the depth values? Any hints/directions would be greatly appreciated.
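     For reference, here is a minimal round-trip sketch of what I expect to happen, assuming the orthographic projection maps view-space z in [zNear, zFar] linearly to [0, 1] and that w stays 1 for orthographic projections (function names and values are just placeholders for illustration):

     #include <cassert>

     // Forward mapping: view-space z -> normalized depth in [0, 1].
     float viewZ_to_ndcZ(float viewZ, float zNear, float zFar) {
         return (viewZ - zNear) / (zFar - zNear);
     }

     // Inverse mapping: normalized depth -> view-space z (same formula as getViewZ_from_NdcZ above).
     float ndcZ_to_viewZ(float ndcZ, float zNear, float zFar) {
         return zNear + ndcZ * (zFar - zNear);
     }

     int main() {
         const float zNear = -10000.0f, zFar = 10000.0f;
         const float viewZ = 1234.5f;
         const float ndcZ  = viewZ_to_ndcZ(viewZ, zNear, zFar);  // ~0.5617
         const float back  = ndcZ_to_viewZ(ndcZ, zNear, zFar);   // ~1234.5
         assert(ndcZ >= 0.0f && ndcZ <= 1.0f);
         assert(back > viewZ - 0.01f && back < viewZ + 0.01f);
     }

     If that round trip holds with the zNear/zFar values actually baked into the projection matrix, then the remaining suspects would be the RGB encode/decode and whether gl_Position.z really ends up in the 0-1 (D3D) range rather than -1 to 1 on this backend.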
  2. Looking at my code it seems that I somehow mixed up buffer creation and memory allocation. (I thought that the memory allocator creates one large buffer and makes multiple small allocations inside it, the reverse of what is actually happening.) Looking back at it, my assumption doesn't make any sense. I also figured out what was causing those issues: I not only applied the offsets in the bind calls but also in the vkBufferCopy command during the transfer of the data from a staging buffer to the GPU:

     VkBufferCopy copyRegion = {};
     copyRegion.srcOffset = 0; // was srcBuffer.allocationInfo.offset; << fixed
     copyRegion.dstOffset = 0; // was dstBuffer.allocationInfo.offset; << fixed
     copyRegion.size = size;
     vkCmdCopyBuffer(commandBuffer, srcBuffer.buffer, dstBuffer.buffer, 1, &copyRegion);

     So, from what I understand, we allocate bigger chunks of memory and create individual buffers (with the requested size) which are then bound to specific locations in those memory blocks. I suppose the buffer stores the offset that has to be applied to the allocated memory block behind the scenes (and thus we don't have to worry about it in the bind/copy/etc. calls)? And the offsets are (as you said) only offsets into the buffer itself (starting from the location the buffer was bound to with "vkBindBufferMemory"), and thus should be 0 in most scenarios (unless we want to copy only a specific region of the buffer, for example). Is this correct?
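     To make sure I'm getting the two kinds of offsets straight, here is roughly how I picture it now (a sketch with made-up handles and sizes, not my actual allocator code):

     #include <vulkan/vulkan.h>

     // Sketch: two buffers sub-allocated from one memory block; the block offset only matters at bind time.
     void uploadIndexData(VkDevice device, VkCommandBuffer cmd,
                          VkDeviceMemory memoryBlock,      // one large block from vkAllocateMemory
                          VkBuffer stagingBuf,             // host-visible buffer holding the data
                          VkBuffer vertexBuf, VkBuffer indexBuf,
                          VkDeviceSize vertexBlockOffset,  // e.g. 0
                          VkDeviceSize indexBlockOffset,   // e.g. 1024
                          VkDeviceSize indexDataSize)      // e.g. 512
     {
         // The memory-block offset is consumed here, once, and remembered by the driver:
         vkBindBufferMemory(device, vertexBuf, memoryBlock, vertexBlockOffset);
         vkBindBufferMemory(device, indexBuf,  memoryBlock, indexBlockOffset);

         // Everything afterwards is relative to the start of the VkBuffer, not the memory block:
         VkBufferCopy region = {};
         region.srcOffset = 0;            // offset into the staging buffer
         region.dstOffset = 0;            // offset into indexBuf, NOT indexBlockOffset
         region.size      = indexDataSize;
         vkCmdCopyBuffer(cmd, stagingBuf, indexBuf, 1, &region);

         // Same for the draw-time binds: 0 unless we want to start partway into the buffer.
         vkCmdBindIndexBuffer(cmd, indexBuf, 0, VK_INDEX_TYPE_UINT16);
     }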
  3. I recently made the jump from OpenGL to Vulkan. Following a tutorial I was able to display a simple colored triangle on the screen. Given that the tutorial creates a unique buffer for each allocation (vertex/index buffer), I started implementing the Vulkan Memory Allocator from AMD. The allocator seems to work, but I'm currently stumbling over an issue when binding the vertex and index buffers. Originally the buffers were bound with this code:

     VkBuffer vertexBuffers[] = { vertexBuffer };
     VkDeviceSize offsets[] = { 0 }; // offset is zero in both cases, as the vertex and index buffer are separate buffers
     vkCmdBindVertexBuffers(commandBuffers[i], 0, 1, vertexBuffers, offsets);
     vkCmdBindIndexBuffer(commandBuffers[i], indexBuffer, 0, VK_INDEX_TYPE_UINT16);

     But after switching to the Vulkan Memory Allocator, the memory is sub-allocated, so buffers can be reused between objects, and the offset/starting position of the specific data has to be taken into account. So I rewrote the code like this:

     // in this case the vertex and index buffer share the same VkBuffer object but the offset is different
     VkBuffer vertexBuffers[] = { vertexBuffer.buffer };
     VkDeviceSize offsets[] = { vertexBuffer.allocationInfo.offset };
     vkCmdBindVertexBuffers(commandBuffers[i], 0, 1, vertexBuffers, offsets);
     vkCmdBindIndexBuffer(commandBuffers[i], indexBuffer.buffer, indexBuffer.allocationInfo.offset, VK_INDEX_TYPE_UINT16);
     //=======
     // uniforms
     vkCmdBindDescriptorSets(commandBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout, 0, 1, &descriptorSets[i], 0, nullptr);
     //-------
     vkCmdDrawIndexed(commandBuffers[i], static_cast<uint32_t>(indices.size()), 1, 0, 0, 0);
     vkCmdEndRenderPass(commandBuffers[i]);

     Starting the C++ project does render the triangle correctly, but the validation layers throw this message:

     validation layer: vkCmdDrawIndexed() index size (2) * (firstIndex (0) + indexCount (6)) + binding offset (256) = an ending offset of 268 bytes, which is greater than the index buffer size (12). The Vulkan spec states: (indexSize * (firstIndex + indexCount) + offset) must be less than or equal to the size of the bound index buffer, with indexSize being based on the type specified by indexType, where the index buffer, indexType, and offset are specified via vkCmdBindIndexBuffer (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkCmdDrawIndexed-indexSize-00463)

     I'm confused why it compares index size * index count + offset with the index buffer size (which in this case is 12 bytes). The offset itself should just be an offset and not be counted towards the buffer size. Am I setting the parameters incorrectly? Are the offset parameters the offset of the starting position of the data within the buffer? I looked up the function calls in the Vulkan documentation but I can't seem to find the issue here (unless I'm mixing up the offsets somehow).
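     Just to spell out the check from the message as plain arithmetic (values taken from the error above):

     #include <cstdint>
     #include <cassert>

     int main() {
         const uint64_t indexSize       = 2;    // VK_INDEX_TYPE_UINT16
         const uint64_t firstIndex      = 0;    // vkCmdDrawIndexed parameter
         const uint64_t indexCount      = 6;    // vkCmdDrawIndexed parameter
         const uint64_t bindOffset      = 256;  // offset passed to vkCmdBindIndexBuffer
         const uint64_t indexBufferSize = 12;   // size of the VkBuffer itself

         const uint64_t endOffset = indexSize * (firstIndex + indexCount) + bindOffset; // = 268
         assert(endOffset > indexBufferSize);   // 268 > 12 -> the validation error above

         // If the bind offset were 0 instead, the same formula would pass:
         const uint64_t endOffsetZero = indexSize * (firstIndex + indexCount);          // = 12
         assert(endOffsetZero <= indexBufferSize);
     }

     So the layer clearly treats the bind offset as part of the range it reads out of the 12-byte buffer, which is the part I don't understand.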
  4. I have an issue which is more related to code organization than a technical problem. I'm developing my own game engine in C++ in Visual Studio and compile it as a static library so that I can link it into a separate Visual Studio game project. I recently started to move some header/cpp files of my engine project/solution into subdirectories (in particular the renderer, so that I can write my include statements as #include "renderer/VertexBuffer.h"). This works fine, as I added my project directory ($(ProjectDir)) to AdditionalIncludeDirectories in my engine solution. Now, in order to access my engine in other VS projects, I link Engine.lib with the project and include the header files of the engine project. To make it visually clear which include statements refer to engine header files, I added the solution directory to the include directories of the game project, so that my statements look like this:

     // in headers of a game project
     #include "Engine\ECS.h"
     #include "Engine\Physics\PhysicsWorld.h"
     #include "Engine\Physics\PhysicsDebugDrawer.h"

     This works fine for all game-project-related include statements. But now the game project throws include errors in all engine-related header files which were included by the game project. The issue is that they don't have "Engine\" in their include statements:

     // in engine-related header files:
     #include "ECS.h"
     #include "Physics\PhysicsWorld.h"
     #include "Physics\PhysicsDebugDrawer.h"

     So they work if I compile them in my Engine Visual Studio project, but fail if they are included as external library headers. One solution would be to add both the solution path and the project path of my game engine to the include directories:

     D:\Projects\VisualStudio_projects\Engine\        // solution path (so that the game can include via #include "Engine/...")
     D:\Projects\VisualStudio_projects\Engine\Engine  // project path (so that engine headers can include files directly without "Engine/...")

     But this seems to be a workaround rather than a proper solution. What is the cleanest solution to this problem? Any advice?
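     One layout I've seen suggested for this kind of setup (the directory names here are just an example, not my actual project) is to give the engine a single public include root that already contains the Engine folder, and to have the engine itself use the prefixed form everywhere, even internally:

     // Hypothetical directory layout:
     //   Engine/include/Engine/ECS.h
     //   Engine/include/Engine/Physics/PhysicsWorld.h
     //   Engine/include/Engine/Physics/PhysicsDebugDrawer.h
     //   Engine/src/...                    (private .cpp files and internal headers)
     //
     // Both the engine project and any game project add only one include directory:
     //   $(SolutionDir)Engine\include
     //
     // Then every header, engine or game, spells includes the same way:
     #include "Engine/ECS.h"
     #include "Engine/Physics/PhysicsWorld.h"
     #include "Engine/Physics/PhysicsDebugDrawer.h"

     That way the engine headers compile identically whether they are built inside the engine solution or consumed as external library headers, and each project only needs a single AdditionalIncludeDirectories entry.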
  5. (Wasn't able to revert to my old state of znear < zfar to make a proper before/after comparison.) But here is an example of the accuracy of reconstructing the world-space position of a pixel from the depth buffer: there are no visible artifacts (before that change, the pink area was bleeding all over the red part; it looked like z-fighting). I'm happy with that kind of precision. The only downside is that it requires GL_ARB_clip_control, which is a fairly new feature (core in OpenGL 4.5), while my renderer previously worked perfectly fine on OpenGL 3.3. (So essentially this raises the system requirements on the end user's part.)
  6. Well, learned something new today. Works flawlessly. Thank you!
  7. So, I'm currently in the process of implementing a reversed floating-point depth buffer (to increase depth precision) in OpenGL. I have everything working except the modifications to the projection matrices necessary for this to work. (Matrices are my weakness.) What I have working are a perspective projection matrix and an orthographic projection matrix whose x/y range spans from -1 to 1 and whose z range goes from 0 to 1, as in DirectX (it's basically the exact same code as in the glm library). I use them in combination with:

     glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

     Here are the two matrices:

     void Camera::setProjectionPerspektiveY_reverseDepth_ZO(float fovY, float width, float height, float znear, float zfar)
     {
         float rad = MathE::toRadians(fovY);
         float h = glm::cos(0.5f * rad) / glm::sin(0.5f * rad);
         float w = h * height / width;

         glm::mat4 p(0.0f); // start from a zero matrix, as glm's perspective *_ZO code does
         p[0][0] = w;
         p[1][1] = h;
         p[2][2] = zfar / (znear - zfar);
         p[2][3] = -1;
         p[3][2] = -(zfar * znear) / (zfar - znear);
         this->projectionMatrix = p;
     }

     void Camera::setProjectionOrtho_reversed_ZO(float left, float bottom, float right, float top, float znear, float zfar)
     {
         glm::mat4 p(1.0f); // start from identity, as glm's ortho *_ZO code does
         p[0][0] = 2.0f / (right - left);
         p[1][1] = 2.0f / (top - bottom);
         p[2][2] = -1.0f / (zfar - znear);
         p[3][0] = -(right + left) / (right - left);
         p[3][1] = -(top + bottom) / (top - bottom);
         p[3][2] = -znear / (zfar - znear);
         this->projectionMatrix = p;
     }

     My question is: does anybody know how to properly reverse the depth in both of these matrices (from 0-1 to 1-0)?
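     For reference, the direction I'm currently experimenting with is to simply swap znear and zfar inside the 0-to-1 depth terms, which (if I understand reversed-Z correctly) should map the near plane to 1 and the far plane to 0. I'm not sure this is right, which is partly why I'm asking. A sketch of what I think the reversed variants would look like (w and h computed the same way as above):

     #include <glm/glm.hpp>

     glm::mat4 makePerspectiveReversedZO(float w, float h, float znear, float zfar)
     {
         glm::mat4 p(0.0f);
         p[0][0] = w;
         p[1][1] = h;
         p[2][2] = znear / (zfar - znear);            // was  zfar / (znear - zfar)
         p[2][3] = -1.0f;
         p[3][2] = (zfar * znear) / (zfar - znear);   // was -(zfar * znear) / (zfar - znear)
         return p;                                    // view z = -znear -> depth 1, view z = -zfar -> depth 0
     }

     glm::mat4 makeOrthoReversedZO(float left, float bottom, float right, float top, float znear, float zfar)
     {
         glm::mat4 p(1.0f);
         p[0][0] = 2.0f / (right - left);
         p[1][1] = 2.0f / (top - bottom);
         p[2][2] = 1.0f / (zfar - znear);             // was -1.0f / (zfar - znear)
         p[3][0] = -(right + left) / (right - left);
         p[3][1] = -(top + bottom) / (top - bottom);
         p[3][2] = zfar / (zfar - znear);             // was -znear / (zfar - znear)
         return p;                                    // view z = -znear -> depth 1, view z = -zfar -> depth 0
     }

     If I understand reversed-Z correctly, this also needs the depth test flipped to GL_GREATER and the depth buffer cleared to 0 instead of 1.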
  8. Me again. So, I spent the last couple of days trying to stabilise my cascaded shadow maps. To do that, the shadow map has to:

     1) have a fixed size (so that it doesn't scale/change with the camera rotation), by using a spherical bounding box
     2) round the position to the nearest texel for camera movement.

     I got number 1 working (it may not be the smallest possible sphere, but it works as a start), but number 2 is still giving me headaches. The important bit for snapping the coordinates to the nearest texel is this: I apply the viewProjection matrix of the sun to a position at 0/0/0, then try to snap it to the nearest texel by bringing the clip-space position into the 0-1 range, multiplying it by the shadow map resolution, rounding that, and then calculating the difference between the rounded coordinate and the original one. But no matter what I do, the shadow map is still completely unstable, so I presume that I'm missing something in this rounding calculation. Note that I had to reverse the Z coordinates and flip min/max. I'm not entirely sure why I had to do this, but it seemed to fix shadow mapping for me (it worked perfectly fine with the standard shadow mapping code). Does anyone have an idea what could be missing in the rounding part of the code?
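     In pseudo-code, the snapping I described boils down to something like this (a CPU-side sketch using glm; names are placeholders, and I assume the sun projection is orthographic so w stays 1):

     #include <glm/glm.hpp>

     // Returns a corrected viewProjection matrix whose translation is snapped to whole texels.
     glm::mat4 snapToTexels(const glm::mat4& sunViewProj, float shadowMapResolution)
     {
         // Project the world origin into the sun's clip space.
         glm::vec4 origin = sunViewProj * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f);

         // Bring x/y from [-1,1] into texel units: one texel covers 2/resolution in clip space.
         glm::vec2 texelCoords = glm::vec2(origin) * (shadowMapResolution * 0.5f);

         // How far the origin is from the nearest texel, in texel units...
         glm::vec2 offsetTexels = glm::round(texelCoords) - texelCoords;

         // ...converted back to clip-space units.
         glm::vec2 offsetClip = offsetTexels / (shadowMapResolution * 0.5f);

         // Shift the projection by that amount so the origin lands exactly on a texel.
         glm::mat4 correction(1.0f);
         correction[3][0] = offsetClip.x;
         correction[3][1] = offsetClip.y;
         return correction * sunViewProj;
     }

     (This assumes the orthographic width/height of each cascade stay fixed, which is what point 1 with the bounding sphere is for.)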
  9. Regarding issue number 2: it turns out that this might be a floating-point precision issue. Here is an almost perpendicular wall placed at the center of the world (coordinates are 0/0/0), and here is the same wall at a steep angle at the position (500/500/0) (z is the up vector). I suppose it has to do with the view-space matrix, which loses precision the further you move away from the center (thus the reconstructed position, which should move along the plane, also loses accuracy, which leads to shadow acne). I'll have to see if I can optimize my shader. (Currently I'm doing a few things not really optimally, though I'm not sure this issue can be completely removed even if I keep my matrix operations to a minimum.) Currently, I do these operations:

     // works (verified)
     vec3 depthToWorld(vec2 texcoord, float depth, mat4 invViewProj){
         vec4 clipSpaceLocation;
         clipSpaceLocation.xy = texcoord * 2.0f - 1.0f;
         clipSpaceLocation.z = depth * 2.0f - 1.0f;
         clipSpaceLocation.w = 1.0f;
         vec4 homogenousLocation = invViewProj * clipSpaceLocation;
         return homogenousLocation.xyz / homogenousLocation.w;
     }

     // calculate world-space position of fragment
     //-------------
     float cameraDepth = texture2D(sCameraDepth, vTexcoord).r;
     vec3 pixelWorldPos = depthToWorld(vTexcoord, cameraDepth, uInvViewProjection);
     //----------
     // determine cascade
     int cascadeIndex = 0;
     for(int i = 0; i < NUM_CASCADES; i++){
         cascadeIndex = i;
         if(cameraDepth <= uCascadeEndClipSpace[i+1]){
             break;
         }
     }
     //-------
     vec3 shadowCoord = vec3(uSunViewProjection[cascadeIndex] * vec4(pixelWorldPos, 1.0)); // from world space to projection space
     shadowCoord = (shadowCoord + 1.0) / 2.0; // move coordinates from the -1/1 range to the 0/1 range (used later for the texture lookup)
     // shadowCoord is now the position of the given fragment, seen from the player's perspective, projected onto the shadow map
     //------------

     The variable "shadowCoord" at the end of those operations probably loses way too much precision, which leads to those acne artifacts (due to the transformation of the depth from the player's view > to world position > to view space from the shadow map's view).

     /Edit: Fixed the precision issues! What I did is move the eye coordinates of the player's view matrix to 0/0/0 for the shadow map depth comparisons. I had to shift the sun view matrix relative to that too in order to not break the shadow map calculations, but it worked! The precision-critical part (depth to world position) is now done from the center of the world, and this seems to fix the shadow acne. Now I have to fix issue number 1, but once that is done the results should be close to perfect.
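     Roughly, the matrix setup for the fix from the edit looks like this (a CPU-side sketch with placeholder names; the shader code above stays the same, it just receives these adjusted matrices):

     #include <glm/glm.hpp>
     #include <glm/gtc/matrix_transform.hpp>

     // Both matrices are shifted by the camera position, so the "world-space" position the shader
     // reconstructs is expressed relative to the eye (i.e. near 0/0/0, where floats are precise).
     void buildShadowMatrices(const glm::mat4& playerView, const glm::mat4& playerProj,
                              const glm::mat4& sunView,    const glm::mat4& sunProj,
                              const glm::vec3& cameraPos,
                              glm::mat4& outInvViewProj,       // for depthToWorld()
                              glm::mat4& outSunViewProjection) // for the shadow lookup
     {
         // Remove the translation from the player's view matrix (the eye sits at the origin).
         glm::mat4 centeredView = playerView * glm::translate(glm::mat4(1.0f), cameraPos);
         outInvViewProj = glm::inverse(playerProj * centeredView);

         // Shift the sun's view matrix by the same amount so both spaces still line up.
         glm::mat4 shiftedSunView = sunView * glm::translate(glm::mat4(1.0f), cameraPos);
         outSunViewProjection = sunProj * shiftedSunView;
     }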
  10. I'm done with the implementation now (at least roughly) and made a ton of screenshots to show the advantages (and some still-existing issues) of this technique. After testing it, I understand why you got artifacts/visible wireframes in your implementation: if your sampling resolution is too low, detailed geometry which occupies a single fragment can't be properly reconstructed (thus the reconstruction yields wrong results and you get unwanted shadowing). I have a similar issue here at the low 512x512 sampling resolution (shadow map with 4 cascades, each 512x512). However, if we up the resolution to 2k, the results are a whole lot better; the issue still persists, but is much less noticeable (shadow map with 4 cascades, each 2048x2048). There is basically no shadow acne in the distance (well, almost) and the peter-panning effect is kept to a minimum, as the bias value can be kept very small thanks to the plane reconstruction.

      The remaining issue is that shadow acne still exists, but only on faces which are almost perpendicular to the sun/light vector. You can see the view-space normals of the cascade in the lower left corner. The face which has shadow acne is almost perpendicular (the face is painted red in the normal buffer preview in the first screenshot); in the second screenshot the wall is 100% perpendicular to the sun (the wall is not even visible in the normal buffer). I'm not 100% sure what causes the shadow acne in this case; I have to investigate further. So the remaining issues are:

      1) corners of the geometry can experience self-shadowing (most noticeable at lower shadow map resolutions)
      2) shadow acne on very steep (almost perpendicular) faces

      Number 1 can hopefully be fixed by doing additional filtering on the edges of the geometry (the normals of each texel can be used to detect edges rather consistently). Number 2... no idea at the moment.

      /Edit: Issue number 2 is probably bias related. Now that I reconstruct the depth/face normal, I'll have to take that into account before applying the bias. (Otherwise, if you have an almost perpendicular wall from the sun's perspective and then apply bias in the direction of the sun, the overall distance between the bias-corrected depth and the reconstructed depth is too small and you get shadow acne.)
  11. Yeah, you are right, on a per-face basis this could work. Sadly I don't have geometry shader support in my engine at the moment, so testing that will require some time until I make the necessary changes; giving each face a unique ID will be a bit of a hassle right now. I tried adding a sphere to the scene, and I don't see any wireframe artifacts on it (yet?). The only artifacts are the ones due to the normal reconstruction from the depth buffer (the same ones as on the block geometry). Those will hopefully be fixed once I render the normals alongside the shadow map. Here are a few screenshots with the same 512x512 shadow map. (Note that I'm not using any kind of depth peeling; I simply render a normal back-face-culled shadow map and reconstruct the normals from there.)
  12. Yes, this has the potential to fix the issue somewhat, but it would effectively remove intended self-shadowing. (If you apply the same ID to all triangles of an object, it isn't going to cast a shadow onto itself; for example the arms/legs of a character wouldn't cast shadows, etc.) I played around with different bias values with no success: it either introduces shadow acne/light leaking, or the peter-panning effect is so strong that it breaks the shadows completely.

      I spent the last couple of days on this issue (hence the late reply) and experimented with various attempts at fixing it (depth peeling/dual-layer shadow maps, screen-space gap filling, etc.). Nothing that would solve the problem properly. However, I found a potential solution/technique. The issue with shadow maps is that the rasterized depth buffer creates a stair-stepping effect, because each fragment has a single fixed depth value and the depth map only stores a finite number of texels. So what I tried is to render the scene with back-face culling and reconstruct the depth values in between depth fragments: I take the current shadow fragment, look up the neighbouring texels (effectively giving me two vectors) and reconstruct the plane normal from that. This plane normal can then be used to calculate the depth values in between the pixels of the shadow map, effectively bypassing the stair-stepping of the regular shadow map lookup. I think this technique is called a "depth gradient", however I wasn't able to find anything related to it on the internet (besides some Nvidia presentation slides where the term is mentioned).

      The results are quite promising. Here is a comparison (I chose a low shadow map resolution to make the shadow acne more pronounced): a cascaded shadow map (4 cascades, each 512x512) with a constant shadow bias, where you can clearly see the shadow acne, versus the same setup with the "depth gradient"/plane reconstruction plus a minor bias/offset. Overall, most of the shadow acne is completely gone. You also get almost no peter-panning, as the bias can be kept relatively small. One issue that remains is that you get artifacts at the corners of the geometry. That's because I reconstruct the plane/normals from the depth buffer (issues arise on pixels which are adjacent to pixels belonging to a different triangle, with a different normal, from the sun camera's perspective). One potential solution to that problem would be to render the shadow maps together with their triangle normals into an FBO and use those normals for the plane reconstruction in the shadow mapping shader (so you don't have to rely on adjacent fragments, and thus don't have to reconstruct the normal). This will increase the memory bandwidth of course, but it should fix basically all of those bad reconstruction cases. (I will test that in the next few days.)
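      To make the idea a bit more concrete, the per-fragment test I described looks roughly like this (written as C++/glm for illustration, the shader version is the same math; all names are placeholders and the three depth samples are assumed to already be fetched from the shadow map):

      #include <glm/glm.hpp>

      // Shadow test with the "depth gradient"/plane reconstruction described above.
      // shadowCoord:      fragment projected into the shadow map, xy in [0,1], z = receiver depth
      // d0, dRight, dTop: shadow-map depths at the fragment's texel and its +x/+y neighbours
      // Returns 1.0 if the fragment is considered in shadow, 0.0 otherwise.
      float shadowTestPlaneReconstruction(glm::vec3 shadowCoord, float shadowMapResolution,
                                          float d0, float dRight, float dTop, float bias)
      {
          // Per-texel depth gradients: the two "vectors" spanning the occluder plane.
          float dx = dRight - d0;
          float dy = dTop   - d0;

          // Sub-texel position of the fragment relative to the center of its shadow-map texel.
          glm::vec2 texelPos = glm::vec2(shadowCoord) * shadowMapResolution;
          glm::vec2 frac     = texelPos - glm::floor(texelPos) - 0.5f;

          // Extrapolate the occluder depth at the fragment's exact position on that plane,
          // instead of using the stair-stepped value d0 directly.
          float reconstructedDepth = d0 + frac.x * dx + frac.y * dy;

          // Bias sign assumes a conventional 0-1 depth range (near = 0); flip for reversed depth.
          return (reconstructedDepth + bias < shadowCoord.z) ? 1.0f : 0.0f;
      }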
  13. I subtract the bias from the depth of the shadow map:

      float texSunDepth = texture2D(sSunDepth[cascadeIndex], shadowCoord.xy).r;
      float biasT = bias[cascadeIndex];
      float shadow = 0;
      if(texSunDepth - biasT < shadowCoord.z){
          shadow = 1.0f;
      }
  14. So, recently I was in the process of improving shadow mapping and tried to fix (or at least reduce) shadow acne. One of the frequently recommended solutions is to use front-face culling, which completely removes shadow acne on lit surfaces. It works, but it comes with another artifact. It makes sense why it's happening if we put the camera inside the white block. The first thing I tried was to experiment with shadow bias. While this removes the pixel crawl in the shadowed area, it introduces shadows on top of the edge of the lit surface. I also tried to come up with alternative solutions, for example rendering the shadow map with front- and back-face culling and using the average of the two distances as the comparison point against the camera depth. But this still didn't remove all shadow acne (not to mention that it adds another render pass, which hits performance). I wasn't really able to find any resources on how to combat this issue. Are there any common techniques for fixing it, or at least reducing it to a minimum?
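      For reference, the averaging experiment I mentioned boils down to something like this (just a sketch; names are placeholders and both depths are assumed to be in the same 0-1 range as the receiver depth):

      // Compare against the midpoint between the front and back surface instead of either one,
      // so small depth errors on the surfaces themselves are less likely to flip the result.
      // frontDepth:    depth from the back-face-culled shadow map (front surfaces)
      // backDepth:     depth from the front-face-culled shadow map (back surfaces)
      // receiverDepth: depth of the current fragment in the sun's depth range
      // Returns 1.0 if the fragment is considered in shadow, 0.0 otherwise.
      float shadowTestMidpoint(float frontDepth, float backDepth, float receiverDepth)
      {
          float midpoint = 0.5f * (frontDepth + backDepth);
          return (midpoint < receiverDepth) ? 1.0f : 0.0f;
      }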
  15. I'm fully aware that PBR requires proper illumination in order to achieve the best results; I just don't have the ability to do that yet (I have neither realtime nor pre-baked GI). So what I'm trying to do is approximate the illumination as well as possible without deviating from PBR too much. The only thing I have is a very basic implementation of baked AO, and even that isn't single bounce. (Note that the floor is still unaffected by the shading, as I didn't include the floor mesh in the baking process.) The ambient lighting in this case represents the lighting of the sky: areas which are occluded get progressively darker the less they are lit by the sky. So what I tried is to have only one light source (the sun) and simulate lighting from the sky/GI with an ambient light which gets filled into the occluded areas. To darken interiors, a baked AO texture is applied by multiplying the color with the AO value. The shader code looks roughly like this:

      vec3 color = texture2D(sLight, vTexcoord).rgb; // accumulated light on the given fragment (from the directional light)
      float shadow = texture2D(sShadowMap, vTexcoord).r; // 0 = fragment is occluded from the sun, 1 = fragment is lit by the sun
      color *= shadow; // occluded pixels are set to 0 (completely dark)

      // now approximate skydome lighting
      vec3 ambient = uAmbientLightColor * texture2D(gAlbedo, vTexcoord).rgb; // ambient light (fixed value) multiplied with the albedo texture
      float AO = texture2D(sAmbientOcclusion, vTexcoord).r; // baked world-space ambient occlusion
      color += ambient;

      // apply AO only if the pixel is in sun shadow (if shadow == 0)
      color *= mix(AO, 1.0, shadow);

      outputF = color; // write to screen

      I'm fully aware that this isn't a physically correct solution. So in my case it's either implementing a proper GI solution or hacking the PBR to achieve my desired results? How is the ambient lighting in Unreal (seen in my first post) implemented?