n00body

Member
  • Content count

    379

Community Reputation

345 Neutral

About n00body

  • Rank
    Member
  1. I already said the equation "a += lerp(0, b, mask);" was a mistake as I had briefly forgotten a basic math identity. Then someone replied before I could remove the thread. So while I appreciate the breakdown, it's for an erroneous equation that I don't plan to use.
  2. Doh! Well that's embarrassing. Apparently experience doesn't stop you from forgetting basic math from time to time. >_<   As for Question 2, I guess it won't really matter either way. Thanks for the help.
  3. Lighting Prepass and PBR?

    The problem you will run into is that light pre-pass (LPP) renderers don't have access to the material's f0/specular color during the lighting phase. Quality PBR generally requires this information to be available in order to perform per-light fresnel calculations. You could go the cheap route of treating the f0 like a color tint and multiplying it with the final accumulated specular lighting, but it won't look as good. That's my two cents.

    EDIT: As an example, Satellite Reign used such a system, and only had true fresnel on their ambient specular.
  4. Background
    I'm trying to smoothly blend an emission effect out of my shader's total emission output using the lerp() intrinsic.

    Questions
    Is "a += lerp(0, b, mask);" cheaper than or equal in cost to "a = lerp(a, a + b, mask);"?
    Is the answer the same when replacing "+" with "*", "+=" with "*=", and "0" with "1"?
    Will either optimize down to fewer instructions if the mask is set to a constant of 1?
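For what it's worth, the two forms are algebraically identical, which is why the question dissolves (a quick check in C, with lerp spelled out the way HLSL defines it):

```c
/* HLSL's lerp: a + t*(b - a) */
static float lerp(float a, float b, float t) { return a + t * (b - a); }

/* Form 1: a += lerp(0, b, mask)  ->  a + mask*b, since lerp(0, b, m) = m*b */
static float form1(float a, float b, float mask) { return a + lerp(0.0f, b, mask); }

/* Form 2: a = lerp(a, a + b, mask)  ->  a + mask*((a+b) - a) = a + mask*b */
static float form2(float a, float b, float mask) { return lerp(a, a + b, mask); }

/* The multiplicative variants reduce the same way:
 * a *= lerp(1, b, m) and a = lerp(a, a*b, m) both equal a*(1 + m*(b - 1)),
 * and with mask fixed at a constant 1 either form folds to a + b (or a * b). */
```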
  5. Okay. Thank you for your assistance. ;)
  6. Background
    So here's the scenario. I have a baseUV that I reuse to sample several base maps (e.g. base color, material, normal map, height map, etc.). When I apply Parallax Occlusion Mapping, I have to calculate ddx(baseUV) and ddy(baseUV) for use before sampling the height map in a while loop.

    Questions
    Is it more efficient for me to reuse that ddx(baseUV) and ddy(baseUV) with tex2Dgrad() to sample all my base maps?
    Is it better to just use tex2D() at this point, especially after applying offsets from POM?

    Thanks.
  7. Background
    So I'm trying to implement a material compositing system using Texture2DArrays and vector arrays indexed by a sample from a mask texture.

    Questions
    1.) Is it even possible to use a texture sample as an index into a uniform array?
    2.) If it is, is there a huge performance hit?

    Thanks.
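As I understand it, dynamic indexing by a value computed from a texture sample is legal in Shader Model 4+, though divergent indices within a wave can serialize into multiple fetches on some hardware. The index derivation itself is just quantization; a sketch of it in C (function and parameter names are made up for illustration):

```c
/* Quantize a normalized mask sample (0..1) into an index into a
 * material array of `count` entries. */
static int mask_to_index(float mask, int count)
{
    int idx = (int)(mask * (float)count);
    if (idx >= count) idx = count - 1;   /* mask == 1.0 clamps to the last slot */
    if (idx < 0) idx = 0;
    return idx;
}
```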
  8. I've seen the term thrown around a lot lately when reading up on engines with their new PBR pipelines. Quite a few of them say that they now specify their light intensities in Lumens. I know that it is a term referring to the output of a real-world light source, but that's the extent of my knowledge.

    My questions are:
    1.) How do Lumens pertain to lights in shaders?
    2.) Is there a specific formula for calculating the light intensity from them?
    3.) Am I just over-thinking this, and Lumens is just another name for the light intensity scale value?

    Any help would be most appreciated. Thank you.
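For question 2, the usual starting point (my summary of standard photometry, not any engine's exact code) is that lumens measure luminous power over the whole emission solid angle, so a point light's intensity in candela is that power divided by the sphere's 4π steradians:

```c
#define PI_F 3.14159265358979f

/* For an ideal point light radiating uniformly in all directions,
 * luminous intensity (candela) = luminous power (lumens) / 4*pi sr.
 * Illuminance at distance d then falls off as intensity / d^2. */
static float lumens_to_candela(float lumens)
{
    return lumens / (4.0f * PI_F);
}

/* A spot light divides by its cone's solid angle instead:
 * 2*pi*(1 - cos(half_angle)) steradians. */
```

So the answer to question 3 is "mostly no": lumens are a physical unit that gets converted into the familiar shader intensity scalar, rather than being that scalar directly.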
  9. That's odd. I haven't needed to scale up the parallax factor in mine. I basically just do the following:

    float2 baseUV = uv0 * baseTiling + baseOffset;
    float2 parallaxOffset = pom(height, heightmap, baseUV, viewDirTangent);
    baseUV += parallaxOffset;
    parallaxOffset /= baseTiling;
    float2 detailUV = (uv0 + parallaxOffset) * detailTiling + detailOffset;

    Am I missing something?

    EDIT: For clarification, I am using the following POM implementation: A Closer Look at Parallax Occlusion Mapping
  10. Additionally, I've found out you need to divide the offset by the tiling of the base map to avoid swimming on detail maps when you tile the base map. Just keeps getting more complicated. :(
  11. That seems to have done the trick. I originally shied away from that option since I am doing this in Unity, and they hide texture transforms inside a macro. I'll just need to write my own.   Thanks for the help. 
  12. So basically, when I do heavily tiled detail mapping on a surface that is using Offset Bump Mapping or Parallax Occlusion Mapping, the detail maps appear to swim across the surface as I orbit around it.

    Conditions
    Both the base and detail maps use UV0.
    The base maps are tiled 1x with no offset.
    The detail maps are tiled 10x with no offset.
    The parallax offset is generated from a heightmap tied to the base maps' texture coordinates.
    I apply the parallax offset like so: (UV0 * Tile + Offset) + parallaxOffset.

    Questions
    Is there any way to counteract this problem?
    Do I need to modify the parallaxOffset somehow before I apply it to the transformed detail texture coordinates?
    Are detail mapping and parallax just inherently incompatible?

    Thanks.
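Pulling the threads together: the fix arrived at in the replies above is to rescale the parallax offset out of base-map space before applying the detail tiling. A C-style sketch of just that step (names illustrative, mirroring the HLSL posted in item 9):

```c
typedef struct { float x, y; } float2;

/* The parallax offset is computed at base-map frequency, so divide it by
 * the base tiling to bring it back into UV0 space, then apply the detail
 * transform. Skipping the divide is what makes detail maps swim. */
static float2 detail_uv(float2 uv0, float2 parallaxOffset,
                        float baseTiling, float detailTiling)
{
    float2 out;
    out.x = (uv0.x + parallaxOffset.x / baseTiling) * detailTiling;
    out.y = (uv0.y + parallaxOffset.y / baseTiling) * detailTiling;
    return out;
}
```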
  13. Okay, I've sort of solved it myself. Basically, I found a way to apply Beckmann roughness to Blinn-Phong. This modifies the normalization term in the way Epic described in their paper for GGX, so I suspect the same trick they used should now apply. For anyone who is interested, here is where I found my answer: http://graphicrants.blogspot.com/2013/08/specular-brdf-reference.html
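For anyone cross-referencing, the relevant identities from that page (my restatement of the standard results, not the thread's own derivation) are the normalized Blinn-Phong distribution and the Beckmann-roughness-to-exponent mapping:

```latex
% Normalized Blinn-Phong NDF with specular exponent n:
D(h) = \frac{n + 2}{2\pi}\,(N \cdot H)^{n}

% Beckmann roughness <-> Blinn-Phong exponent:
\alpha_b = \sqrt{\frac{2}{n + 2}}
\qquad\Longleftrightarrow\qquad
n = \frac{2}{\alpha_b^{2}} - 2
```

Rewriting the exponent in terms of α_b is what lets a roughness-based modification like Epic's representative-point adjustment carry over to Blinn-Phong.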
  14. Background
    Okay, so I have seen Epic's paper that covers, among other things, how they did area lights in UE4 via the "representative point" method: http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf

    Problem
    They specifically talk about how they needed to come up with a modification to the normalization factor of their BRDF to make the intensity look close to correct. However, the one they list is for GGX, and I am using Normalized Blinn-Phong. Sadly, I don't understand the math behind normalization.

    Question
    Is there a nice, simple way for me to find the normalization factor to use Normalized Blinn-Phong with this area light trick?

    Thanks.
  15. Background
    I am working on a physically based shading system in Unity. Among the many components needed by this system is a way to compensate for specular aliasing. Our chosen method involves calculating Toksvig AA from our normal maps, pre-correcting our specular powers, and storing the result for use at runtime.

    Problem
    We are dealing with ARGB32-precision normal maps as inputs, and are encountering the problem that some of our artists' tools don't quite output unit-length normals at this precision. Unfortunately, this has dramatic consequences when applied to our pre-correction of specular powers, even for "flat" normals. So materials that are supposed to be smooth end up getting dramatically rougher.

    Questions
    1.) Is there a way to deal with this besides just using higher-precision storage for our normal maps?
    2.) One of our tactics for dealing with this problem is offering the artist a bias factor that adjusts the calculated variance down slightly to compensate. Is this a valid solution, or will it break horribly down the line?
    3.) Is there some other trick I could try that I have overlooked?

    Thanks for any help you can provide.
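A quick numeric illustration of why a tiny length error is so destructive (this is the standard Toksvig formula in C, not the poster's exact pipeline): at high specular powers the correction is dominated by s·(1 − |n|), so even a 1% shortfall in normal length roughly halves a power of 100.

```c
/* Toksvig-corrected specular power from the filtered normal's length
 * len_n (<= 1) and the base specular power s:
 *   ft = |n| / (|n| + s*(1 - |n|)),   s' = ft * s
 * A unit-length normal leaves s untouched; a slightly short one
 * collapses it, which is the "smooth materials turn rough" symptom. */
static float toksvig_power(float len_n, float s)
{
    float ft = len_n / (len_n + s * (1.0f - len_n));
    return ft * s;
}
```

This also shows why an artist-facing bias that nudges |n| (or the variance) back toward the unit-length case is a plausible band-aid: it directly cancels the quantization shortfall, at the cost of slightly under-correcting genuinely rough normals.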