About megav0xel

  1. Hi! I'm not sure that will work, since every sample is weighted by its PDF and BRDF value, and they didn't mention any blur pass in the original slides.

     Hi! I do implement the prefiltering they mention in the slides. My problem is that when I use the function shown in the presentation, my reflected image becomes over-blurred and I get heavy flickering artifacts, so I have to keep the cone tangent at a very low value. I'm using hardware-generated mip maps for my color buffer. Do I have to convolve them manually instead? As for ray reuse, it looks good enough to me on smooth surfaces; my current problems are with surfaces of medium and high roughness, as shown in the image I posted. I also checked that Unity plugin while working on my own, as it's the only open-source implementation I could find on the web. I think his result (he released a demo) is slightly better than mine, mainly because he uses blue noise rather than a Halton sequence, but it's still worse than what was shown in the original slides. Another thing I just realized is that there are some bugs in my Hi-Z ray marching implementation: a lot of pixels fail to find an intersection point at higher roughness values when combined with importance sampling. IMO the original code in GPU Pro 5 isn't easy to understand, which makes it hard to debug.
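     For reference, this is how I understand the cone-to-mip mapping that makes the cone tangent so sensitive (a minimal sketch with made-up names, not the actual code from the slides; the slides derive the cone angle from the GGX lobe):

```python
import math

def mip_level_for_cone(intersection_dist_px, cone_tangent, max_mip):
    """Pick a color-buffer mip level from the reflection cone footprint.

    intersection_dist_px: screen-space distance to the hit point, in pixels.
    cone_tangent: tangent of the cone half-angle (illustrative parameter).
    """
    # Footprint of the cone at the hit point, in pixels.
    cone_width = 2.0 * intersection_dist_px * cone_tangent
    if cone_width <= 1.0:
        return 0.0  # footprint fits in one texel: sharpest mip
    # Each mip doubles the texel size, hence the log2; clamp to the chain.
    return min(math.log2(cone_width), max_mip)
```

Because the mip level grows with log2 of the footprint, even a modest cone tangent pushes distant reflections several mips down the chain, which would explain the over-blurring (and, with a box-filtered hardware mip chain instead of a proper convolution, the flicker).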
  2. Anyone? I'm wondering: what's the current standard way to make SSR match the GGX specular highlight?
  3. Hi all! I have been trying to implement this feature in my spare time for several months. Here is a brief summary of my implementation. Basically the algorithm can be broken into two parts: a ray marching stage and a resolve stage. In the ray marching stage, I first generate reflection vectors using importance sampling (here I'm using GGX). The original slides use Hi-Z ray marching to find the intersection point, which is described in GPU Pro 5. My code is adapted from the improved version from the Stingray dev team and this post on GameDev. After finding the intersection point, I store its position and the PDF of the importance-sampled reflection vector in a texture. The resolve stage mainly does two things: ray reuse and BRDF normalization. For every pixel on screen, it searches the neighboring pixels to see if any of them found a hit point, and "steals" that result. This trick allows every pixel on screen to get color information even if it didn't hit anything during the ray marching stage. Then, to further reduce noise, the shading equation is reorganized to reduce variance. This process is summarized in the following pages. Finally, I apply TAA to accumulate results from previous frames. And this is what I get. The techniques described in the slides do help reduce some noise, but my result is nowhere close to what they showed in the slides. I tried increasing the number of resolve samples per pixel, but it didn't help much. Their result is almost free of noise; actually I think it looks a bit too good for real-time rendering :) I would be glad if someone could give me some tips on noise reduction, or point out something I may have gotten wrong. Thanks in advance for any help.
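     To make the ray marching stage concrete, here is a minimal sketch of the GGX importance sampling I described (my own illustrative Python, not the code from the slides), using the common alpha = roughness² parameterization; the returned PDF is what gets stored in the texture and divided out during the resolve:

```python
import math

def importance_sample_ggx(u1, u2, roughness):
    """Sample a half-vector around +Z from the GGX distribution.

    u1, u2: uniform random numbers in [0, 1) (e.g. from a Halton sequence).
    Returns the tangent-space half-vector and the PDF of sampling it,
    D(h) * cos(theta_h), before the change of variables to the reflection
    direction L.
    """
    a = roughness * roughness          # alpha = roughness^2 (UE4 convention)
    phi = 2.0 * math.pi * u1
    cos_theta = math.sqrt((1.0 - u2) / (1.0 + (a * a - 1.0) * u2))
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    h = (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
    # GGX normal distribution function D(h)
    d = cos_theta * cos_theta * (a * a - 1.0) + 1.0
    D = (a * a) / (math.pi * d * d)
    return h, D * cos_theta
```

In the shader this runs per pixel with the reflection vector built by reflecting the view direction about h (transformed to world space), and the PDF goes into the alpha channel of the intersection texture.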
  4. megav0xel

    Importance Sampling

    Really appreciate the help! Now it runs perfectly. Any chance you will write about importance sampling for area lights? I find that the soft shadows cast by area lights aren't very noticeable.
  5. megav0xel

    Importance Sampling

    Hi Bacterius, here is the exception message:

        System.InvalidOperationException: Object is currently in use elsewhere.
           at System.Drawing.Image.get_Width()
           at SharpRT.MainClass.<>c__DisplayClass1.<Main>b__0(Int32 y) in d:\SharpRT-entry-4\Raytracer\Program.cs:line 455
           at System.Threading.Tasks.Parallel.<>c__DisplayClassf`1.<ForWorker>b__c()

    I'm using VS2013 on Windows 7 SP1 64-bit.
  6. megav0xel

    First Steps

    Thanks for the explanation! So the matrices are actually still multiplied linearly in homogeneous coordinates, but operator overloading is used to leave the constant last row (0, 0, 0, 1) implicit as an optimization. Do I understand correctly?
  7. megav0xel

    First Steps

    Hi Bacterius! Really nice series! It helped me a lot when building my first raytracer. However, I have some problems understanding the matrix multiplication code in MathLibrary.cs. Forgive me for some silly questions here.

        public static Matrix operator *(Matrix m1, Matrix m2)
        {
            var m00 = (m1.U.X * m2.U.X) + (m1.V.X * m2.U.Y) + (m1.W.X * m2.U.Z);
            var m01 = (m1.U.X * m2.V.X) + (m1.V.X * m2.V.Y) + (m1.W.X * m2.V.Z);
            var m02 = (m1.U.X * m2.W.X) + (m1.V.X * m2.W.Y) + (m1.W.X * m2.W.Z);
            var m03 = (m1.U.X * m2.T.X) + (m1.V.X * m2.T.Y) + (m1.W.X * m2.T.Z) + m1.T.X;
            var m10 = (m1.U.Y * m2.U.X) + (m1.V.Y * m2.U.Y) + (m1.W.Y * m2.U.Z);
            var m11 = (m1.U.Y * m2.V.X) + (m1.V.Y * m2.V.Y) + (m1.W.Y * m2.V.Z);
            var m12 = (m1.U.Y * m2.W.X) + (m1.V.Y * m2.W.Y) + (m1.W.Y * m2.W.Z);
            var m13 = (m1.U.Y * m2.T.X) + (m1.V.Y * m2.T.Y) + (m1.W.Y * m2.T.Z) + m1.T.Y;
            var m20 = (m1.U.Z * m2.U.X) + (m1.V.Z * m2.U.Y) + (m1.W.Z * m2.U.Z);
            var m21 = (m1.U.Z * m2.V.X) + (m1.V.Z * m2.V.Y) + (m1.W.Z * m2.V.Z);
            var m22 = (m1.U.Z * m2.W.X) + (m1.V.Z * m2.W.Y) + (m1.W.Z * m2.W.Z);
            var m23 = (m1.U.Z * m2.T.X) + (m1.V.Z * m2.T.Y) + (m1.W.Z * m2.T.Z) + m1.T.Z;

            return new Matrix(new Vector(m00, m10, m20),
                              new Vector(m01, m11, m21),
                              new Vector(m02, m12, m22),
                              new Vector(m03, m13, m23));
        }

    Here it multiplies two 3x4 matrices. But why does it produce another 3x4 matrix? Shouldn't the result be 3x3 or 4x4? I also don't understand why only U, V, W get multiplied while the components of the T vector are added directly to the last column. Am I missing something here?
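     The 3x4 layout can be checked numerically with a quick sketch (in Python/NumPy here rather than the C# from the series): a 3x4 affine matrix is the top of a 4x4 homogeneous matrix whose last row is always (0, 0, 0, 1), so the product of two 3x4 matrices is again 3x4, and the translation column picks up the extra "+ m1.T" terms:

```python
import numpy as np

def to_4x4(m3x4):
    """Append the implicit (0, 0, 0, 1) row of an affine transform."""
    return np.vstack([m3x4, [0.0, 0.0, 0.0, 1.0]])

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))  # columns play the role of U, V, W, T
B = rng.standard_normal((3, 4))

# Composing the affine parts directly, the way the operator* above does:
# linear part = A_lin @ B_lin, translation = A_lin @ B_T + A_T.
composed = np.hstack([A[:, :3] @ B[:, :3],
                      (A[:, :3] @ B[:, 3] + A[:, 3]).reshape(3, 1)])

# This matches the top three rows of the full 4x4 homogeneous product.
full = (to_4x4(A) @ to_4x4(B))[:3, :]
assert np.allclose(composed, full)
```

So nothing is lost by storing only three rows; the bottom row of the 4x4 product would again be (0, 0, 0, 1).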
  8. megav0xel

    Importance Sampling

    Hi Bacterius! I have a problem with your C# sample code. It threw an exception here when running on my laptop:

        pixelData[3 * (y * img.Width + x) + 2] = (byte)floatToInt(radiance.X / SAMPLES);
        pixelData[3 * (y * img.Width + x) + 1] = (byte)floatToInt(radiance.Y / SAMPLES);
        pixelData[3 * (y * img.Width + x) + 0] = (byte)floatToInt(radiance.Z / SAMPLES);

    It seems that the image object can't be accessed by multiple threads simultaneously. I'm a newbie to multi-threaded programming and still couldn't figure it out. Do you have any idea about the problem? Thanks in advance for any help.
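     In case it helps anyone hitting the same exception: writing disjoint bytes of pixelData from multiple threads is fine; judging by the stack trace, the crash comes from reading img.Width (a GDI+ object that I believe is not thread-safe, even for reads) inside the parallel loop, so hoisting the width into a plain local before the loop should avoid it. A sketch of the same pattern (a Python analogue with made-up sizes, not the original C#):

```python
import threading

WIDTH, HEIGHT = 64, 48            # hoisted once, before any threads start
pixel_data = bytearray(3 * WIDTH * HEIGHT)

def render_rows(y0, y1, width):
    # Each thread writes only its own disjoint byte range, and reads the
    # width from a local parameter instead of a shared image object.
    for y in range(y0, y1):
        for x in range(width):
            i = 3 * (y * width + x)
            pixel_data[i:i + 3] = bytes((10, 20, 30))  # placeholder "radiance"

threads = [threading.Thread(target=render_rows,
                            args=(y, y + HEIGHT // 4, WIDTH))
           for y in range(0, HEIGHT, HEIGHT // 4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The equivalent C# change would be reading int width = img.Width; once before the Parallel.For and using that local inside the loop body.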