allingm

Members
  • Content count: 96
  • Joined
  • Last visited

Community Reputation
539 Good

About allingm
  • Rank: Member
  1. It's possible you are having depth-precision issues.  If you tweak your near and far planes, does the banding change size?  If this is the problem, you could take a different approach and only change your blur width based on the first sample.  I know the original Jimenez paper did a ddx and ddy calculation on the first depth sample to figure out the slope.   Edit: Oh, also, just for testing, try sampling the alpha of your specular map on each blur sample and reject the color if the resulting sample is greater than 0.  (I'm thinking my previous comment is wrong now, but I'll keep it just in case.)
  2. Personally I prefer to understand this from a mathematical perspective.  There seems to be a decent explanation here: http://www.codeguru.com/cpp/misc/misc/graphics/article.php/c10123/Deriving-Projection-Matrices.htm#page-3
  3. Shouldn't this line... float xScale = 1.0f / tanf(0.5f * fovy);   ...be:   float xScale = 1.0f / tanf(0.5f * fovx);   ?
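     For what it's worth, the common convention (e.g. D3DXMatrixPerspectiveFovLH) takes only a vertical field of view and derives the horizontal scale from the aspect ratio, so the fovy line is consistent as long as fovx is defined through the aspect ratio. A small C++ sketch of that convention (fovy and aspect are assumed inputs, not names from the original code):

        #include <cmath>

        // The two diagonal scale terms of a perspective projection matrix,
        // following the D3DX convention: only fovy is specified and the
        // horizontal scale comes from the aspect ratio (width / height).
        void PerspectiveScales(float fovy, float aspect, float& xScale, float& yScale)
        {
            yScale = 1.0f / tanf(0.5f * fovy);  // cot(fovy / 2)
            xScale = yScale / aspect;           // equals 1.0f / tanf(0.5f * fovx)
                                                // when tan(fovx/2) = aspect * tan(fovy/2)
        }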
  4. Fast exp2() function in shader

    exp2: http://msdn.microsoft.com/en-us/library/windows/desktop/bb509596%28v=vs.85%29.aspx   The only way to find out the answer to your question is to test performance.  You can also get an idea by looking at the token assembly of your shader.   Also, integer operations were emulated with floats in DirectX 9, but DirectX 10 requires full integer support.  I don't see how they could emulate it with a float, and if they did you would notice a huge performance impact.
  5. For RTW, is it possible to use the previous frame's shadow map to compute the warping maps?
  6. Global illumination techniques

    Look at those numbers, though.  That technique looks next-next-gen.  At least out of the box, it isn't viable.
  7. You can also find more on specular occlusion here: http://research.tri-ace.com/Data/cedec2011_RealtimePBR_Implementation_e.pptx http://research.tri-ace.com/Data/GDC2012_PracticalPBRinRealtime.ppt   The relevant slides are near the end of each deck.
  8. Well, if I normalize Schlick's approximation I get:   http://www.wolframalpha.com/input/?i=solve%281+%3D+c+*+integrate%28s+%2B+%281-s%29%281-cos%28x%29%29^5*sin%28x%29%2C+x%2C+0%2C+pi%2F2%2C+y%2C+0%2C+2*pi%29%2C+c%29   and combining with Schlick's approximation I get:   http://www.wolframalpha.com/input/?i=3%2F%283+*+pi^2*s+-+pi+*+s+%2B+pi%29+*+%28s+%2B+%281-s%29*%281+-+cos%28x%29%29^5%29   but maybe this doesn't make sense.  I'm thinking it doesn't have to match 1 exactly, but if I integrate it over the hemisphere I get:   http://www.wolframalpha.com/input/?i=integrate%282+*pi+*%28s+%2B+%281-s%29%281-cos%28x%29%29^5*sin%28x%29%29%2C+x%2C+0%2C+pi%2F2%29   and if F(0) is 0 we get 1, and if F(0) is 1 we get:   http://www.wolframalpha.com/input/?i=pi^2+-+1%2F3+*+pi++%2B+1   Currently I've been looking at GGX, and the GGX distribution term itself is already normalized, but the GGX geometry term may or may not be normalized.  I have no way to verify this without shelling out money for Mathematica.  So, I was hoping to trust the geometry term and normalize the Fresnel.
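     For readability, here is my reconstruction of the integral the first link computes, with s standing for F(0), x for the angle from the normal, and y for the azimuth (note that, exactly as entered in the query, the sin(x) factor multiplies only the second term):

        1 = c \int_{0}^{2\pi}\!\!\int_{0}^{\pi/2} \left[ s + (1 - s)(1 - \cos x)^{5} \sin x \right] dx \, dy
        \quad\Rightarrow\quad
        c = \frac{3}{3\pi^{2} s - \pi s + \pi},

     which is the constant that appears in the second link.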
  9. So, you are supposed to normalize a BRDF so that it integrates to 1 over the hemisphere.  It seems that everybody goes out of their way to normalize the NDF portion; however, I was wondering about the other portions.  Wouldn't it make sense to normalize the whole equation, including the Fresnel term, for example?  I did some Google searching and came across this:   http://seblagarde.wordpress.com/2011/08/17/hello-world/ "When working with microfacet BRDFs, normalize only microfacet normal distribution function (NDF)"   …but then I ask myself: why?  The writer doesn't seem to give any explanation.  Does anybody know?
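     For reference, the condition I have in mind is the usual energy-conservation statement (my phrasing, not from the linked post): the cosine-weighted BRDF integrated over the hemisphere should not exceed 1 for any view direction,

        \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \, (\mathbf{n} \cdot \mathbf{l}) \, d\omega_{\mathbf{l}} \;\le\; 1 \quad \text{for all } \mathbf{v}.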
  10. I’m having trouble understanding the fundamentals of Cook-Torrance’s BRDF.  The function is:

           F * G * D
       -----------------
       4 * N.L * N.V

      My question is: where do the N.L and N.V come from?  The main reason I ask is that the N.V is giving me trouble.  I know that the N.L goes away when we multiply the BRDF by the N.L and the incoming light intensity, but N.V remains and causes problems for me.  The objects in my scene have bright halos/sparkles around them.  Perhaps this isn’t even supposed to be a problem?  I would like a deeper understanding, so I can figure out what is going wrong.
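      As a sketch of where that denominator comes from (standard microfacet notation; my addition, not from the original thread): the outgoing radiance multiplies the BRDF by the incoming radiance and the projected-area cosine N.L, so the N.L in the denominator cancels while the N.V survives:

        f_r = \frac{D \, F \, G}{4 \, (\mathbf{n} \cdot \mathbf{l}) \, (\mathbf{n} \cdot \mathbf{v})},
        \qquad
        L_o = f_r \, L_i \, (\mathbf{n} \cdot \mathbf{l}) = \frac{D \, F \, G}{4 \, (\mathbf{n} \cdot \mathbf{v})} \, L_i

      The 4 and the two cosines come from the change of variables between the half vector and the outgoing direction plus the projected microfacet area, which is also why a geometry term that doesn't fall off fast enough as N.V approaches 0 shows up as bright halos at grazing angles.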
  11. pimpl for renderer class

    Would this work for you?

    public header
    ------------------

    // Use a forward declaration to avoid exposing the implementation.
    class CTexture;

    class Renderer
    {
    public:
      // All operations on this texture will happen through the renderer.
      void DoSomethingToTexture(CTexture* tex);
    };

    private cpp/header
    ------------------------

    class CTexture
    {
      // Do your platform-specific stuff in the cpp or private header.
      // This is where "Renderer" is implemented.
      LowLevelRenderer stuff;
    };

    You can avoid the clutter this way, but you have to guarantee that the user (public) will only ever pass around the pointer.  If you want to be able to use the texture class directly you will have to use pimpl or virtuals.  Personally I would use virtuals, as the cost compared to the amount of work is low.  Virtuals get expensive when the cost compared to the amount of work is high.  For instance, you'll probably have a few hundred expensive draw calls, but particles might have a million low-cost operations.  So, I wouldn't use virtuals on particles.  Of course, this all depends on your platform.
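    A minimal sketch of what the calling side looks like under this scheme (CreateTexture and Draw are hypothetical names added for illustration, not part of the snippet above):

        // Client code sees only the forward declaration; CTexture stays opaque,
        // so no platform headers leak into the public interface.
        class CTexture;

        class Renderer
        {
        public:
            CTexture* CreateTexture(int width, int height); // hypothetical factory
            void      DoSomethingToTexture(CTexture* tex);
        };

        void Draw(Renderer& renderer)
        {
            CTexture* tex = renderer.CreateTexture(256, 256);
            renderer.DoSomethingToTexture(tex); // the client never dereferences tex
        }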
  12. I would check out Humus' "Framework 3": http://www.humus.name/index.php?page=3D
  13. Tonemapping Formula Help (Math Help)

    I finally found the book. You can find all the correct formulas here: http://content.gpwiki.org/index.php/D3DBook:High-Dynamic_Range_Rendering#Luminance_Transform
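     For concreteness, the luminance transform that page describes reduces to a weighted sum of the color channels. A minimal C++ sketch using the Rec. 709 weights (other sources, e.g. Reinhard's tone-mapping paper, use slightly different weights, so check the book's exact numbers):

        // Relative luminance (the Y of Yxy) of a linear RGB color,
        // using Rec. 709 / sRGB primaries.
        float Luminance(float r, float g, float b)
        {
            return 0.2126f * r + 0.7152f * g + 0.0722f * b;
        }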
  14. I forgot to make this clear, but the Y in Yxy represents luminance.
  15. The dot product you are talking about is actually doing the RGB to Yxy conversion, but because all you care about is the luminance it simplifies down to a dot product.  However, if you want to properly convert the whole color from RGB to Yxy, scale it, and then convert it back, it is a much more involved operation.  So, both are correct: one contains all the information necessary for RGB -> Yxy -> RGB, and one only contains enough for RGB -> Y.  I found the equations here: http://stackoverflow.com/questions/7104034/yxy-to-rgb-conversion   If you do the algebra on the conversion functions you'll find what I said to be true.
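      A sketch of the full round trip described above, using the linear-sRGB (D65) matrices from the linked Stack Overflow answer; the coefficients below are the standard sRGB ones, so verify them against your color space:

        // RGB -> XYZ -> Yxy; scale Y however you like; then Yxy -> XYZ -> RGB.
        // Assumes linear sRGB input with a D65 white point.
        void RGBToYxy(float r, float g, float b, float& Y, float& x, float& y)
        {
            float X = 0.4124f * r + 0.3576f * g + 0.1805f * b;
                  Y = 0.2126f * r + 0.7152f * g + 0.0722f * b;
            float Z = 0.0193f * r + 0.1192f * g + 0.9505f * b;
            float sum = X + Y + Z;              // no guard for black (sum == 0)
            x = X / sum;                        // chromaticity coordinates
            y = Y / sum;
        }

        void YxyToRGB(float Y, float x, float y, float& r, float& g, float& b)
        {
            float X = (Y / y) * x;                  // recover XYZ from the
            float Z = (Y / y) * (1.0f - x - y);     // luminance and chromaticity
            r =  3.2406f * X - 1.5372f * Y - 0.4986f * Z;
            g = -0.9689f * X + 1.8758f * Y + 0.0415f * Z;
            b =  0.0557f * X - 0.2040f * Y + 1.0570f * Z;
        }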