megav0xel

Member
  • Content Count

    11
  • Joined

  • Last visited

Community Reputation

215 Neutral

About megav0xel

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming

Recent Profile Visitors

1840 profile views
  1. Hi! I'm not sure this will work, as every sample is weighted by its PDF and BRDF value. They didn't mention any blur pass in the original slides.

    Hi! I do implement the prefiltering they mention in the slides. My problem is that when I use the function they showed in the presentation, the reflected image becomes over-blurred and I get heavy flickering artifacts, so I have to keep the cone tangent at a very low value. I'm using hardware-generated mipmaps for my color buffer. Do I have to convolve a chain manually for that? As for ray reuse, it looks good enough to me on smooth surfaces; my problems are with surfaces of medium and high roughness, as shown in the image I posted. I also checked that Unity plugin while working on my own implementation, since it's the only open-source one I could find on the web. I think his result (he released a demo) is slightly better than mine, mainly because he uses blue noise rather than a Halton sequence, but it's still worse than what was shown in the original slides. Another thing I just realized is that there are some bugs in my Hi-Z ray marching implementation: a lot of pixels can't find an intersection point at higher roughness values when combined with importance sampling. IMO the original code in GPU Pro 5 isn't easy to understand, which makes it hard to debug.
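    As a rough illustration of the prefiltering step discussed above (a sketch of the common cone-to-mip mapping with assumed variable names, not the code from the slides): the cone tangent estimates the reflection footprint at the hit point, that footprint is projected into pixels, and its log2 picks the mip level of the convolved color buffer.

    // Pick a mip level of the (pre-convolved) color buffer from the reflection cone footprint.
    // coneTangent (derived from roughness), rayLength, hitViewZ, hitUV, tanHalfFovY and
    // maxMipLevel are assumed inputs.
    float coneRadius   = coneTangent * rayLength;                                     // footprint at the hit, view-space units
    float radiusPixels = coneRadius * screenHeight / (2.0 * abs(hitViewZ) * tanHalfFovY);
    float mipLevel     = clamp(log2(radiusPixels), 0.0, maxMipLevel);
    vec3  prefiltered  = textureLod(colorBuffer, hitUV, mipLevel).rgb;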
  2. Anyone? I'm wondering what the current standard way is to make SSR match the GGX specular highlight.
  3. Hi all! I have been trying to implement this feature in my spare time for several months. Here is a brief summary of my implementation.

    Basically the algorithm can be broken into two parts: a ray-marching stage and a resolve stage. In the ray-marching stage, I first generate a reflection vector using importance sampling (GGX in my case). The original slides use Hi-Z ray marching to find the intersection point, which is described in GPU Pro 5; my code is adapted from the improved version from the Stingray dev team and this post on GameDev. After finding the intersection point, I store its position and the PDF of the importance-sampled reflection vector in a texture.

    The resolve stage mainly does two things: ray reuse and BRDF normalization. For every pixel on screen, I search through neighboring pixels to see if any of them got a hit point and "steal" their results. This trick allows every pixel on screen to get color information even if its own ray didn't hit anything during the ray-marching stage. Then, to further reduce noise, the shading equation is reorganized to reduce variance, as summarized in the following pages. Finally, I apply TAA to accumulate results from previous frames. And this is what I get.

    The techniques described in the slides do help to reduce some noise, but the result I get is nowhere close to what they showed in the slides. I tried increasing the number of resolve samples per pixel but it didn't help much. Their result is almost free of noise; actually I think it looks a bit too good for real-time rendering :) I would be glad if someone could give me some tips on noise reduction or point out something I may have gotten wrong. Thanks for any help in advance.
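    To make the resolve idea above concrete, here is a minimal GLSL sketch of ray reuse with BRDF/PDF weighting. The buffer layout, the 4-tap neighborhood and the two helper functions are assumptions for illustration, not the original implementation.

    // Resolve pass sketch: each pixel reuses the hits found by its neighbors.
    uniform sampler2D hitBuffer;    // assumed layout: xy = hit point UV, z = PDF of the ray that found it
    uniform sampler2D colorBuffer;  // previous-frame scene color
    uniform vec2 texelSize;         // 1.0 / screen resolution

    vec3 ResolveReflection(vec2 uv, vec3 N, vec3 V, float roughness)
    {
        const vec2 offsets[4] = vec2[](vec2(0.0, 0.0), vec2(1.0, 0.0), vec2(0.0, 1.0), vec2(1.0, 1.0));
        vec3 result = vec3(0.0);
        float weightSum = 0.0;
        for (int i = 0; i < 4; ++i)
        {
            vec4 hit = texture(hitBuffer, uv + offsets[i] * texelSize);
            vec3 hitColor = texture(colorBuffer, hit.xy).rgb;
            // Re-evaluate this pixel's BRDF for the neighbor's ray direction,
            // then divide by the PDF that generated that ray (local normalization).
            vec3 L = ReconstructRayDirection(uv, hit.xy);               // assumed helper
            float weight = EvaluateGGXBRDF(N, V, L, roughness) / max(hit.z, 1e-4);
            result    += hitColor * weight;
            weightSum += weight;
        }
        return result / max(weightSum, 1e-4);
    }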
  4. Hi all. I encountered a weird bug when computing the screen-space reflection vector for my SSR feature.

    Here is a plane facing toward my camera; the color indicates the reflection vector in screen space. Everything works fine currently. When I move my camera closer to the plane, the reflection vector in the center of the screen suddenly gets "mirrored", as shown by the following picture.

    Here is my code in GLSL.

    vec3 csDir = normalize(reflect(csPos, csNormal));
    vec4 psPos = ProjectionMatrix * vec4(csPos, 1.0f);
    vec3 ndcsPos = psPos.xyz / psPos.w;
    vec3 ssPos = 0.5f * ndcsPos + 0.5f;
    csDir += csPos;
    vec4 psDir = ProjectionMatrix * vec4(csDir, 1.0);
    vec3 ndcEndPoint = psDir.xyz / psDir.w;
    vec3 ssEndPoint = 0.5f * ndcEndPoint + 0.5f;
    vec3 ssDir = normalize(ssEndPoint - ssPos);

    Any help would be appreciated.
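    One likely culprit (my assumption, consistent with the near-plane clip in the McGuire tracer referenced in a later post): when the camera is close, the camera-space end point csPos + csDir can land behind the near plane, and the perspective divide then flips its projected position. A sketch of clamping the end point to the near plane before projecting, using the names from the snippet above plus assumed nearPlaneZ (positive) and maxDistance inputs:

    vec3 csReflDir = normalize(reflect(csPos, csNormal));
    // Clamp the ray so its end point never crosses the near plane,
    // which keeps the perspective divide from flipping the direction.
    float rayLength = ((csPos.z + csReflDir.z * maxDistance) > -nearPlaneZ)
                      ? (-nearPlaneZ - csPos.z) / csReflDir.z
                      : maxDistance;
    vec3 csEndPoint = csPos + csReflDir * rayLength;
    vec4 psEnd = ProjectionMatrix * vec4(csEndPoint, 1.0);
    vec3 ssEndPoint = 0.5 * (psEnd.xyz / psEnd.w) + 0.5;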
  5. megav0xel

    Convolve mipmap chain in OpenGL

    Thanks for the reply! I have solved the problem. I mistakenly set the attachment parameter of glFramebufferTexture2D() to GL_DEPTH_ATTACHMENT instead of GL_COLOR_ATTACHMENT0. Now my code works correctly. Thanks for the instructions anyway.
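    For anyone finding this later, the fix amounts to attaching each mip level as a color attachment before drawing into it (a sketch using the variable names from the post below):

    // Was GL_DEPTH_ATTACHMENT, which left the color mips unwritten (black).
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, prevColorFrame1, i);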
  6. Hi all. I'm trying to convolve a mipmap chain of my screen texture using the following code. Unfortunately all the mipmap levels I get are black.

    C++

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, prevColorFrame1);
    currentWidth = screenWidth;
    currentHeight = screenHeight;
    for (int i = 1; i < mipLevels; ++i)
    {
        Shader.SetUniform("MipSize", glm::vec2(currentWidth, currentHeight));
        currentWidth /= 2;
        currentHeight /= 2;
        currentWidth = currentWidth > 0 ? currentWidth : 1;
        currentHeight = currentHeight > 0 ? currentHeight : 1;
        glViewport(0, 0, currentWidth, currentHeight);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, i - 1);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, i - 1);
        glTexImage2D(GL_TEXTURE_2D, i, GL_RGBA16F, currentWidth, currentHeight, 0, GL_RGBA, GL_FLOAT, NULL);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, prevColorFrame1, i);
        DrawFullScreenQuad();
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, numLevels - 1);
    glViewport(0, 0, screenWidth, screenHeight);

    Here is my fragment shader.

    #version 430
    out vec4 FragColor;
    uniform sampler2D colorBuffer;
    uniform vec2 MipSize;
    in vec2 TexCoords;
    .........
    void main()
    {
        ......
        vec3 result = vec3(0.0);
        for (int i = 0; i < NumSamples; ++i)
        {
            vec2 offset = offsets[i] / MipSize;
            vec3 sampleColor = texture(colorBuffer, TexCoords + offset).rgb;
            result += sampleColor * weights[i];
        }
        FragColor = vec4(result, 1.0f);
    }

    Does anyone know how to manually generate mipmaps correctly in OpenGL? Any help would be appreciated.
  7. Hi! I'm trying to add importance sampling to my screen-space reflection shader. Currently I'm following the approach from Frostbite's stochastic SSR presentation, but I'm confused about how to use Halton random numbers in a shader. First I pre-calculate the Halton sequence, store it in an array, and pass the array to the shader. After that I use the following code to get the random numbers in the GLSL shader.

    float rand(vec2 co)
    {
        return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
    }

    void main()
    {
        ..........
        // somewhere in main
        float random1 = rand(TexCoords);
        float random2 = rand(TexCoords + vec2(0.301f));
        int index1 = int(random1 * 100);
        int index2 = int(random2 * 100);
        vec2 Xi = vec2(haltonNum[index1], haltonNum[index2]);
        ImportanceSampleGGX(Xi, roughness);
        ..........
    }

    Then I calculate the hit position, reproject it to the previous frame, and sample its color and the neighbouring pixels' colors. Unfortunately my result image has a lot of flickering black holes/gaps; it looks like the generated samples are pretty unstable. However, if I sample the G-buffer directly I don't have this issue. Some other random functions (like the rand() shown above) work fine, but the result is extremely noisy. Does anyone know how to use a Halton sequence properly? Any help would be appreciated.
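    For comparison, a minimal sketch of generating the Halton samples in the shader with the radical inverse and indexing them by a per-frame counter instead of a per-pixel hash (frameIndex and the cycle length of 16 are assumptions for illustration):

    // Radical inverse in base b: mirrors the digits of i about the radix point.
    // Halton sample i is (radicalInverse(i, 2), radicalInverse(i, 3)).
    float radicalInverse(uint i, uint b)
    {
        float invBase = 1.0 / float(b);
        float f = invBase;
        float result = 0.0;
        while (i > 0u)
        {
            result += f * float(i % b);
            i /= b;
            f *= invBase;
        }
        return result;
    }

    uniform uint frameIndex;   // assumed: incremented by the application every frame

    vec2 HaltonSample(uint i)
    {
        return vec2(radicalInverse(i, 2u), radicalInverse(i, 3u));
    }

    // In main(): every pixel uses the same well-distributed pair in a given frame,
    // and TAA accumulates successive samples over time.
    // vec2 Xi = HaltonSample(frameIndex % 16u);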
  8. Hi, thanks for the reply. You are right, I didn't realize that the sample code requires the projection matrix to project points into pixel coordinates rather than screen space. Now my reflection looks correct, but I have some new problems (view from top down): it seems like a large portion of the rays are intersecting with themselves, and there's a weird pattern in the distance. Could you shed some light on what may cause this? Thanks!
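    For reference, a minimal sketch (my assumption about the setup, not code from the thread) of building the "project-to-pixel" matrix the sample code expects, by appending an NDC-to-pixel scale and bias to the regular projection matrix:

    // Maps camera space -> clip space -> pixel coordinates (after the perspective divide).
    glm::mat4 ndcToPixel(1.0f);
    ndcToPixel[0][0] = 0.5f * screenWidth;    // scale x from [-1, 1] to [0, width]
    ndcToPixel[1][1] = 0.5f * screenHeight;   // scale y from [-1, 1] to [0, height]
    ndcToPixel[3][0] = 0.5f * screenWidth;    // bias x
    ndcToPixel[3][1] = 0.5f * screenHeight;   // bias y
    glm::mat4 projToPixel = ndcToPixel * ProjectionMatrix;
    // Pass projToPixel as the 'proj' argument of traceScreenSpaceRay1().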
  9. Hi! I'm trying to implement SSR in my graphics demo using Morgan McGuire's efficient screen-space ray tracing method, and I have encountered several problems. As seen in the following images, my result is completely messed up. My code is basically the same as the sample code provided by Morgan McGuire: http://casual-effects.blogspot.com/2014/08/screen-space-ray-tracing.html

    bool traceScreenSpaceRay1(
        // Camera-space ray origin, which must be within the view volume
        point3 csOrig,
        // Unit length camera-space ray direction
        vec3 csDir,
        // A projection matrix that maps to pixel coordinates (not [-1, +1]
        // normalized device coordinates)
        mat4x4 proj,
        // The camera-space Z buffer (all negative values)
        sampler2D csZBuffer,
        // Dimensions of csZBuffer
        vec2 csZBufferSize,
        // Camera space thickness to ascribe to each pixel in the depth buffer
        float zThickness,
        // pass in positive value
        float nearPlaneZ,
        float farPlaneZ,
        // Step in horizontal or vertical pixels between samples. This is a float
        // because integer math is slow on GPUs, but should be set to an integer >= 1
        float stride,
        // Number between 0 and 1 for how far to bump the ray in stride units
        // to conceal banding artifacts
        float jitter,
        // Maximum number of iterations. Higher gives better images but may be slow
        const float maxSteps,
        // Maximum camera-space distance to trace before returning a miss
        float maxDistance,
        // Pixel coordinates of the first intersection with the scene
        out point2 hitPixel,
        // Camera space location of the ray hit
        out point3 hitPoint,
        out vec3 debug)
    {
        // Clip to the near plane
        float rayLength = ((csOrig.z + csDir.z * maxDistance) > -nearPlaneZ) ?
            (-nearPlaneZ - csOrig.z) / csDir.z : maxDistance;
        point3 csEndPoint = csOrig + csDir * rayLength;

        // Project into homogeneous clip space
        vec4 H0 = proj * vec4(csOrig, 1.0);
        vec4 H1 = proj * vec4(csEndPoint, 1.0);
        float k0 = 1.0 / H0.w, k1 = 1.0 / H1.w;

        // The interpolated homogeneous version of the camera-space points
        point3 Q0 = csOrig * k0, Q1 = csEndPoint * k1;

        // Screen-space endpoints
        point2 P0 = H0.xy * k0, P1 = H1.xy * k1; // Shouldn't P0 and P1 be in NDC space?

        // If the line is degenerate, make it cover at least one pixel
        // to avoid handling zero-pixel extent as a special case later
        P1 += vec2((distanceSquared(P0, P1) < 0.0001) ? 0.01 : 0.0);
        vec2 delta = P1 - P0;

        // Permute so that the primary iteration is in x to collapse
        // all quadrant-specific DDA cases later
        bool permute = false;
        if (abs(delta.x) < abs(delta.y)) {
            // This is a more-vertical line
            permute = true;
            delta = delta.yx; P0 = P0.yx; P1 = P1.yx;
        }

        float stepDir = sign(delta.x);
        float invdx = stepDir / delta.x;

        // Track the derivatives of Q and k
        vec3 dQ = (Q1 - Q0) * invdx;
        float dk = (k1 - k0) * invdx;
        vec2 dP = vec2(stepDir, delta.y * invdx);

        // Scale derivatives by the desired pixel stride and then
        // offset the starting values by the jitter fraction
        dP *= stride; dQ *= stride; dk *= stride;
        P0 += dP * jitter; Q0 += dQ * jitter; k0 += dk * jitter;

        // Slide P from P0 to P1, (now-homogeneous) Q from Q0 to Q1, k from k0 to k1
        point3 Q = Q0;

        // Adjust end condition for iteration direction
        float end = P1.x * stepDir;

        float stepCount = 0.0f;
        float k = k0, prevZMaxEstimate = csOrig.z;
        float rayZMin = prevZMaxEstimate, rayZMax = prevZMaxEstimate;
        float sceneZMax = rayZMax + 100;
        for (point2 P = P0;
             ((P.x * stepDir) <= end) && (stepCount < maxSteps) &&
             ((rayZMax < sceneZMax - zThickness) || (rayZMin > sceneZMax)) &&
             (sceneZMax != 0);
             P += dP, Q.z += dQ.z, k += dk, stepCount += 1.0f) {

            rayZMin = prevZMaxEstimate;
            rayZMax = (dQ.z * 0.5 + Q.z) / (dk * 0.5 + k);
            prevZMaxEstimate = rayZMax;
            if (rayZMin > rayZMax) {
                float t = rayZMin; rayZMin = rayZMax; rayZMax = t;
            }

            hitPixel = permute ? P.yx : P;
            hitPixel = 0.5f * hitPixel + vec2(0.5f); // map to [0,1]
            if (hitPixel.x >= 1.0 || hitPixel.y >= 1.0 || hitPixel.x <= 0.0 || hitPixel.y <= 0.0)
                return false;

            // You may need hitPixel.y = csZBufferSize.y - hitPixel.y; here if your vertical axis
            // is different than ours in screen space
            sceneZMax = -LinearizeDepth(texture2D(sceneDepth, hitPixel).x) * farPlaneZ;
        }

        // Advance Q based on the number of steps
        Q.xy += dQ.xy * stepCount;
        hitPoint = Q * (1.0 / k);
        return (rayZMax >= sceneZMax - zThickness) && (rayZMin < sceneZMax);
    }

    I also have some problems understanding the sample code. In line 67 of the original version the comment claims that both P0 and P1 are in screen space, but I think they should be in NDC space. I map hitPixel to [0, 1] before sampling the depth texture and the result seems to look "better". Any help would be appreciated. :)
  10. megav0xel

    Blit depth buffer not working

    Does anyone know if there's an alternative way to combine forward shading with deferred shading?
  11. Hi all. I'm trying to combine forward rendering with my deferred renderer. My current approach is to use glBlitFramebuffer to copy the depth buffer from the G-buffer to another HDR FBO. However, the blit doesn't seem to work in my renderer; the G-buffer's own depth buffer works well.

    My G-buffer init code:

    GLuint gBuffer;
    glGenFramebuffers(1, &gBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);
    GLuint gPosition, gNormal, gAlbedoSpec;

    // gbuffer: position
    glGenTextures(1, &gPosition);
    glBindTexture(GL_TEXTURE_2D, gPosition);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, screenWidth, screenHeight, 0, GL_RGB, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gPosition, 0);

    // gbuffer: normal + roughness
    glGenTextures(1, &gNormal);
    glBindTexture(GL_TEXTURE_2D, gNormal);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, screenWidth, screenHeight, 0, GL_RGBA, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, gNormal, 0);

    // gbuffer: albedo + specular
    glGenTextures(1, &gAlbedoSpec);
    glBindTexture(GL_TEXTURE_2D, gAlbedoSpec);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, screenWidth, screenHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, gAlbedoSpec, 0);

    GLuint attachments[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
    glDrawBuffers(3, attachments);

    // gbuffer depth
    GLuint gDepth;
    glGenRenderbuffers(1, &gDepth);
    glBindRenderbuffer(GL_RENDERBUFFER, gDepth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, screenWidth, screenHeight);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, gDepth);

    HDR FBO init code:

    GLuint hdrFBO;
    glGenFramebuffers(1, &hdrFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, hdrFBO);

    // floating point color buffer for HDR
    GLuint colorBuffer;
    glGenTextures(1, &colorBuffer);
    glBindTexture(GL_TEXTURE_2D, colorBuffer);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, screenWidth, screenHeight, 0, GL_RGBA, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorBuffer, 0);

    // Attach depth renderbuffer to HDR fbo
    GLuint hdrDepth;
    glGenRenderbuffers(1, &hdrDepth);
    glBindRenderbuffer(GL_RENDERBUFFER, hdrDepth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, screenWidth, screenHeight);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, hdrDepth);

    Blit function call:

    // clear HDR fbo
    glBindFramebuffer(GL_FRAMEBUFFER, hdrFBO);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // copy depth to hdr fbo
    glBindFramebuffer(GL_READ_FRAMEBUFFER, gBuffer);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, hdrFBO);
    glBlitFramebuffer(0, 0, screenWidth, screenHeight, 0, 0, screenWidth, screenHeight, GL_DEPTH_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_FRAMEBUFFER, hdrFBO);
    // ...then apply the lighting pass of deferred shading, forward rendering, and post-processing effects to the HDR fbo

    Any help would be appreciated.
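    As a general debugging aid (my addition, not from the original post): glBlitFramebuffer raises GL_INVALID_FRAMEBUFFER_OPERATION and copies nothing if either framebuffer is incomplete, so it can help to verify both FBOs before the blit:

    glBindFramebuffer(GL_READ_FRAMEBUFFER, gBuffer);
    if (glCheckFramebufferStatus(GL_READ_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        printf("G-buffer FBO is incomplete\n");
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, hdrFBO);
    if (glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        printf("HDR FBO is incomplete\n");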
  12. megav0xel

    Importance Sampling

    Really appreciate the help! Now it runs perfectly. Any chance you will write about importance sampling for area lights? I find the soft shadows cast by area lights aren't very noticeable.
  13. megav0xel

    Importance Sampling

    Hi Bacterius, here is the exception message.

    System.InvalidOperationException: Object is currently in use elsewhere.
       at System.Drawing.Image.get_Width()
       at SharpRT.MainClass.<>c__DisplayClass1.<Main>b__0(Int32 y) in d:\SharpRT-entry-4\Raytracer\Program.cs:line 455
       at System.Threading.Tasks.Parallel.<>c__DisplayClassf`1.<ForWorker>b__c()

    I'm using VS2013 on Win 7 SP1 64-bit.
  14. megav0xel

    First Steps

    Thanks for the explanation! So the matrices are still multiplied as usual in homogeneous coordinates, but the operator overload leaves the constant last row (0, 0, 0, 1) implicit as an optimization. Do I understand correctly?
  15. megav0xel

    First Steps

    Hi Bacterius! Really nice series! It has helped me a lot while building my first raytracer. However, I have some problems understanding the matrix multiplication code in MathLibrary.cs. Forgive me for some silly questions here.

    public static Matrix operator *(Matrix m1, Matrix m2)
    {
        var m00 = (m1.U.X * m2.U.X) + (m1.V.X * m2.U.Y) + (m1.W.X * m2.U.Z);
        var m01 = (m1.U.X * m2.V.X) + (m1.V.X * m2.V.Y) + (m1.W.X * m2.V.Z);
        var m02 = (m1.U.X * m2.W.X) + (m1.V.X * m2.W.Y) + (m1.W.X * m2.W.Z);
        var m03 = (m1.U.X * m2.T.X) + (m1.V.X * m2.T.Y) + (m1.W.X * m2.T.Z) + m1.T.X;

        var m10 = (m1.U.Y * m2.U.X) + (m1.V.Y * m2.U.Y) + (m1.W.Y * m2.U.Z);
        var m11 = (m1.U.Y * m2.V.X) + (m1.V.Y * m2.V.Y) + (m1.W.Y * m2.V.Z);
        var m12 = (m1.U.Y * m2.W.X) + (m1.V.Y * m2.W.Y) + (m1.W.Y * m2.W.Z);
        var m13 = (m1.U.Y * m2.T.X) + (m1.V.Y * m2.T.Y) + (m1.W.Y * m2.T.Z) + m1.T.Y;

        var m20 = (m1.U.Z * m2.U.X) + (m1.V.Z * m2.U.Y) + (m1.W.Z * m2.U.Z);
        var m21 = (m1.U.Z * m2.V.X) + (m1.V.Z * m2.V.Y) + (m1.W.Z * m2.V.Z);
        var m22 = (m1.U.Z * m2.W.X) + (m1.V.Z * m2.W.Y) + (m1.W.Z * m2.W.Z);
        var m23 = (m1.U.Z * m2.T.X) + (m1.V.Z * m2.T.Y) + (m1.W.Z * m2.T.Z) + m1.T.Z;

        return new Matrix(new Vector(m00, m10, m20),
                          new Vector(m01, m11, m21),
                          new Vector(m02, m12, m22),
                          new Vector(m03, m13, m23));
    }

    Here it multiplies two 3x4 matrices, but why does it produce another 3x4 matrix? Shouldn't the new matrix be 3x3 or 4x4? I also don't understand why only U, V, W get multiplied while the components of the T vector are directly added to the last column. Am I missing something here?
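    For context (my note, not part of the thread): each 3x4 Matrix here can be read as a 4x4 affine transform whose last row (0, 0, 0, 1) is left implicit, with U, V, W as the columns of the 3x3 linear part and T as the translation. Composing two such transforms gives

    [M1 | T1] * [M2 | T2] = [M1*M2 | M1*T2 + T1]

    so the product is again a 3x4 affine matrix: the implicit last row never changes and never needs to be stored, the linear parts multiply, and the translation column is M1*T2 plus m1.T, which is exactly what the m03/m13/m23 terms compute.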