Everything posted by n00body

  1. I already said the equation "a += lerp(0, b, mask);" was a mistake, as I had briefly forgotten a basic math identity. Then someone replied before I could remove the thread. So while I appreciate the breakdown, it's for an erroneous equation that I don't plan to use.
  2. Background

     I'm trying to smoothly blend away an emission effect from my shader's total emission output by using the lerp() intrinsic.

     Questions

     1.) Is it cheaper or equal in cost to do "a += lerp(0, b, mask);" or "a = lerp(a, a + b, mask);"?
     2.) Is the answer the same when replacing "+" with "*", "+=" with "*=", and "0" with "1"?
     3.) Will either optimize down to fewer instructions if the mask is set to a constant of 1?
  3. Doh! Well that's embarrassing. Apparently experience doesn't stop you from forgetting basic math from time to time. >_<   As for Question 2, I guess it won't really matter either way. Thanks for the help.
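
     For anyone who finds this thread later, here is the identity spelled out (a quick sketch, just applying HLSL's definition of lerp, so both forms should cost the same):

         float3 ApplyEmissionA(float3 a, float3 b, float mask)
         {
             // lerp(x, y, s) is x + s*(y - x), so lerp(0, b, mask) == mask*b ...
             return a + lerp(0.0, b, mask);   // == a + mask*b
         }

         float3 ApplyEmissionB(float3 a, float3 b, float mask)
         {
             // ... and lerp(a, a + b, mask) == a + mask*((a + b) - a) == a + mask*b.
             return lerp(a, a + b, mask);     // same single multiply-add
         }

         // With mask fixed at a constant 1, both collapse to a plain a + b.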
  4. n00body

    Lighting Prepass and PBR?

    The problem you will run into is that LPP renderers don't have access to the material's f0/specular color during the lighting phase. Quality PBR generally requires this information be available in order to perform per-light fresnel calculations. Now you could go the cheap route by treating the f0 like a color tint and then multiplying it with the final accumulated specular lighting, but it won't look as good.

    That's my two cents.

    EDIT: As an example, Satellite Reign used such a system, and only had true fresnel on their ambient specular.
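
    To make the trade-off concrete, here's a rough sketch (the names are illustrative, not any engine's actual API):

        // LPP "cheap route": specular is accumulated without knowing f0,
        // then tinted once at the end.
        float3 CheapLppSpecular(float3 f0, float3 accumulatedSpecLighting)
        {
            return f0 * accumulatedSpecLighting; // no per-light fresnel possible
        }

        // What quality PBR wants instead: Schlick fresnel evaluated per
        // light, which needs f0 available during the lighting phase.
        float3 SchlickFresnel(float3 f0, float3 l, float3 h)
        {
            return f0 + (1.0 - f0) * pow(1.0 - saturate(dot(l, h)), 5.0);
        }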
  5. Background

     So here's the scenario. I have a baseUV that I reuse to sample several base maps (e.g. base color, material, normal map, height map, etc.). When I apply Parallax Occlusion Mapping, I have to calculate ddx(baseUV) and ddy(baseUV) to use before sampling the height map in a while loop.

     Questions

     1.) Is it more efficient for me to reuse that ddx(baseUV) and ddy(baseUV) with tex2Dgrad() to sample all my base maps?
     2.) Is it better to just use tex2D() at this point, especially after applying offsets from POM?

     Thanks.
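
     In case it helps frame the question, this is roughly what I mean (sketch; ComputePomOffset is a stand-in for my POM loop, not a real function):

         sampler2D heightMap, baseColorMap, materialMap;

         // Gradients computed once from the pre-offset UV...
         float2 dx = ddx(baseUV);
         float2 dy = ddy(baseUV);

         // ...reused inside the divergent POM loop via tex2Dgrad...
         float2 pomUV = baseUV + ComputePomOffset(heightMap, baseUV, viewDirTS, dx, dy);

         // ...and then (the question) reused again for every base map:
         float4 baseColor = tex2Dgrad(baseColorMap, pomUV, dx, dy);
         float4 material  = tex2Dgrad(materialMap,  pomUV, dx, dy);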
  6. Okay. Thank you for your assistance. ;)
  7. Background

     So I'm trying to implement a material compositing system using Texture2DArrays and vector arrays indexed by a sample from a mask texture.

     Questions

     1.) Is it even possible to use a texture sample as an index into a uniform array?
     2.) If it is, is there a huge performance hit?

     Thanks.
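
     For clarity, this is the kind of thing I mean (sketch, assuming the mask's red channel stores layerIndex/255; all names are mine):

         Texture2DArray materialArray;
         Texture2D      maskTex;
         SamplerState   pointSampler;   // point filtering so indices don't blend
         float4         tintArray[16];  // the uniform vector array in question

         float4 SampleComposite(float2 uv)
         {
             float raw   = maskTex.Sample(pointSampler, uv).r;
             uint  layer = (uint)round(raw * 255.0);
             float4 albedo = materialArray.Sample(pointSampler, float3(uv, layer));
             return albedo * tintArray[layer]; // texture-derived index into a uniform array
         }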
  8. I've seen the term being thrown around a lot lately when I read up on engines with their new PBR pipelines. Quite a few of them are saying that they now specify their light intensities in lumens. I know that it's a unit for the output of a real-world light source, but that's the extent of my knowledge.

     My questions are:

     1.) How do lumens pertain to lights in shaders?
     2.) Is there a specific formula for calculating the light intensity from them?
     3.) Am I just over-thinking this, and "lumens" is just another name for the light intensity scale value?

     Any help would be most appreciated. Thank you.
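
     For context, the one conversion I keep seeing is standard photometry (sketch; the function names are just mine, and this is exactly the part I'm asking about):

         static const float PI = 3.14159265;

         // A point light emitting phi lumens over the whole sphere has
         // luminous intensity I = phi / (4*pi) in candela (lm/sr); that
         // intensity is what the shader multiplies into the falloff.
         float PointLightIntensity(float lumens)
         {
             return lumens / (4.0 * PI);
         }

         // A spot light concentrates the same flux into its cone:
         float SpotLightIntensity(float lumens, float cosOuterAngle)
         {
             return lumens / (2.0 * PI * (1.0 - cosOuterAngle));
         }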
  9. So basically, when I do heavily tiled detail mapping on a surface that is using Offset Bump Mapping or Parallax Occlusion Mapping, the detail maps appear to swim across the surface as I orbit around it.

     Conditions

     Both the base and detail maps use UV0. The base maps are tiled 1x with no offset. The detail maps are tiled 10x with no offset. The parallax offset is generated from a heightmap tied to the base maps' texture coordinates. I apply the parallax offset like so: (UV0 * Tile + Offset) + parallaxOffset.

     Questions

     1.) Is there any way to counteract this problem?
     2.) Do I need to modify the parallaxOffset somehow before I apply it to the transformed detail texture coordinates?
     3.) Are detail mapping and parallax just inherently incompatible?

     Thanks.
  10. That's odd. I haven't needed to scale up the parallax factor in mine. I basically just do the following:

      // Transform the base UV and run POM on it.
      float2 baseUV = uv0 * baseTiling + baseOffset;
      float2 parallaxOffset = pom(height, heightmap, baseUV, viewDirTangent);
      baseUV += parallaxOffset;

      // Undo the base tiling so the offset is back in UV0 space, then
      // apply the detail transform on top of it.
      parallaxOffset /= baseTiling;
      float2 detailUV = (uv0 + parallaxOffset) * detailTiling + detailOffset;

      Am I missing something?

      EDIT: For clarification, I am using the following POM implementation: A Closer Look at Parallax Occlusion Mapping
  11. Additionally, I've found out you need to divide the offset by the tiling of the base map to avoid swimming on detail maps when you tile the base map. Just keeps getting more complicated. :(
  12. That seems to have done the trick. I originally shied away from that option since I am doing this in Unity, and they hide texture transforms inside a macro. I'll just need to write my own.   Thanks for the help. 
  13. Okay, I've sort of solved it myself. Basically, I found a way to apply Beckmann roughness to Blinn Phong. This puts the normalization term in the same form Epic described in their paper for GGX, so I suspect the same trick they used should now apply. For anyone who is interested, here is where I found my answer: http://graphicrants.blogspot.com/2013/08/specular-brdf-reference.html
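
      In code, my understanding of the trick looks roughly like this (untested sketch; the widening constant is my assumption carried over from the GGX version):

          // Widen a Blinn-Phong lobe for a spherical "representative point"
          // light by round-tripping through Beckmann roughness.
          float SphereLightSpecPower(float specPower, float sourceRadius,
                                     float distToLight, out float energyScale)
          {
              float m     = sqrt(2.0 / (specPower + 2.0));                   // power -> Beckmann roughness
              float mWide = saturate(m + sourceRadius / (2.0 * distToLight)); // widen like Epic's GGX alpha
              energyScale = (m * m) / (mWide * mWide);                        // keep total energy roughly constant
              return 2.0 / (mWide * mWide) - 2.0;                             // roughness -> power
          }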
  14. Background

      Okay, so I have seen Epic's paper that covers, among other things, how they did area lights in UE4 via the "representative point" method: http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf

      Problem

      They specifically talk about how they needed to come up with a modification to the normalization factor of their BRDF to make the intensity look close to correct. However, the one they list is for GGX, and I am using Normalized Blinn Phong. Sadly, I don't understand the math behind normalization.

      Question

      Is there a nice simple way for me to find the normalization factor to use Normalized Blinn Phong with this area light trick?

      Thanks.
  15. Background

      I am working on a physically based shading system in Unity. Among the many components needed by this system is a way to compensate for specular aliasing. Our chosen method involves calculating Toksvig AA from our normal maps, pre-correcting our specular powers, and storing the result for use at runtime.

      Problem

      We are dealing with ARGB32-precision normal maps as inputs, and are encountering the problem that some of our artists' tools don't quite output unit-length normals at this precision. Unfortunately, this has dramatic consequences when applied to our pre-correction of specular powers, even for "flat" normals. Materials that are supposed to be smooth end up getting dramatically more rough.

      Questions

      1.) Is there a way to deal with this besides just using higher-precision storage for our normal maps?
      2.) One of our tactics for dealing with this problem is offering the artist a bias factor that adjusts the calculated variance down slightly to compensate. Is this a valid solution, or will it break horribly down the line?
      3.) Is there some other trick I could try that I have overlooked?

      Thanks for any help you can provide.
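
      For reference, our pre-correction is basically the standard Toksvig factor, with the bias from question 2 bolted on (sketch; the bias term is our own experiment, not part of the published technique):

          // Pre-correct a specular power from the average (mip-filtered)
          // normal, nudging its length toward 1 to fight ARGB32 quantization.
          float ToksvigCorrectedPower(float3 avgNormal, float specPower, float lengthBias)
          {
              float len     = saturate(length(avgNormal) + lengthBias);
              float toksvig = len / (len + specPower * (1.0 - len));
              return specPower * toksvig; // stored for use at runtime
          }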
  16. Background

      I've been developing a physically-based shader framework for Unity3D. I am using the Normalized Blinn Phong NDF and an approximation to the Schlick visibility function, with an effective specular power range of [4,8192] for direct illumination. I have also developed a translucent shader that uses premultiplied alpha to only make the diffuse translucent while preserving the specular intensity based on fresnel.

      For all my testing, I am doing everything in Linear HDR mode, which affords me an FP16 render target for my camera.

      Situation

      So this is a highly contrived scenario, but my team's artist managed to make it happen. Basically, he has a scene with a directional light whose intensity is effectively 1.0 (0.5 input for Unity) shining on a glass bottle surrounding a smooth metallic liquid. As a result, the two substances' highlights overlapped, and their combined intensity seems to have exceeded the range of the FP16 render target. This resulted in weird artifacts where the highest-intensity color component went to black while the other two just looked really bright (see example image below).

      [attachment=19366:ExceededPrescisionOfHDR.jpg]

      Upon further testing, I found I could remove the artifact by making the surface more rough, thus reducing the intensity of the highlight. However, I still saw this visual error even for relatively rough overlapping materials.

      Questions

      1.) Is there any way to prevent this from happening programmatically, without having to clamp the light values to an upper limit or otherwise harm the visual quality?
      2.) Is it just something that falls to the artist to avoid doing?
      3.) Even so, this means that I can't have multiple overlapping translucent objects, or have to be careful about what objects pass behind them. Am I missing something here?
      4.) Just for future reference, what is the actual upper limit value of FP16?

      Thanks for any help you can provide.
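
      For anyone who finds this later: the answer to question 4 is 65504, i.e. 2^15 * (2 - 2^-10), the largest finite half-float value. A blunt guard, if clamping turns out to be acceptable (sketch):

          static const float HALF_MAX = 65504.0;

          // Clamp anything about to be written to an FP16 target so an
          // over-bright highlight saturates instead of overflowing.
          float3 SafeForFp16(float3 color)
          {
              return min(color, HALF_MAX);
          }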
  17. Well I feel silly about this. It turns out that it was a bug unique to Unity3D because of code they want you to add when doing custom forward lighting. Basically, they have you put the luminance of the specular lighting in the alpha channel of the lighting function's color output. Apparently this breaks down if the specular lighting has a ridiculously high value. Once I removed this, it started working fine again. 
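
      From memory, the offending pattern was along these lines (paraphrased, not Unity's exact code):

          // Unity's custom forward-lighting docs have you pack the specular
          // luminance into the output alpha; with extreme specular values,
          // this is what produced the black-channel artifact for me.
          half4 c;
          c.rgb = diffuse + spec;      // diffuse/spec: accumulated lighting
          c.a   = Luminance(spec);     // removing this line fixed the artifact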
  18. Actually, let me be more specific about my situation. In Unity, my tech can potentially be used in multipass light accumulation, single-pass translucent, and Light PrePass deferred. So knowing that:

      1.) What value would you recommend I clamp my illumination to, to keep it from blowing out?
      2.) Should I be clamping at each light's output, or the final combined output for a given surface with all lighting?
      3.) If at each light's output, then what should I do about LPP mode, where the lights are accumulated without material colors?

      Thanks.
  19. @MJP Any chance you could tell me what that value was? Was it something related to your tonemapping algorithm?
  20. @Hodgman Well, one thing I discovered was that I'd made the faulty assumption that my team's artist had added a tonemapper at all. Needless to say, adding one helped a lot. I am just using the Bloom and tonemappers that ship with Unity. The 1.0 cutoff just means anything above 1.0 in intensity contributes to bloom, and we are using the "Photographic" setting for the tonemapper.

      @MJP Yeah, I figured it was probably mostly because of aliasing, but I was reluctant to go down that path so soon since Unity has absolutely awful support for custom mipmaps. I did manage to get it to work, and it helped a lot.

      @all So yeah, I now understand the point of such high spec powers for up-close smoothness. At a distance, the precomputed specular AA makes the surface appropriately more rough.

      Thanks.
  21. Background

      So I have been looking into physically-based BRDFs for some time now, and have observed a perplexing trend. The companies that post the papers always seem to use a ridiculously high maximum specular power (e.g. Treyarch for COD: BO uses 8192 for lights and 2048 for environment maps) when using a Normalized Blinn Phong NDF. This seems strange to me, especially since I have read that these powers are for extremely rare, super-polished mirror surfaces. Also, it means that they have to store a high-res mip level in their prefiltered environment maps that takes up a lot of space but almost never gets used.

      Up until now, I have usually settled for a maximum spec power of 1024 for my analytical lights, and 256 for my environment maps. For my bloom, I have been using a threshold of 1.0 and additive blending (what Unity provides).

      Problem

      Whenever I test these specular powers myself, the resulting specular highlight intensity shoots up so high that my bloom goes crazy. In particular, as my camera moves I get very distracting flashes of bright spots on sharp model edges that appear and disappear in a very annoying way.

      Questions

      1.) What is the incentive for using such high powers?
      2.) Am I doing my bloom threshold wrong, and should I move it up?
      3.) Is this a consequence of the bloom blending mode?

      Any help would be most appreciated.
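
      For reference, my bright-pass is just the usual subtract-and-clamp against the threshold (sketch of what Unity's stock Bloom does, as I understand it):

          // Anything above the threshold contributes to bloom; everything
          // else is cut. A tiny but extremely bright highlight therefore
          // produces a huge bloom contribution, which is the flashing I see.
          float3 BloomPrefilter(float3 color, float threshold)
          {
              return max(color - threshold, 0.0);
          }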
  22. Background:

      For my rendering pipeline, I am using encoded RGBA8 buffers wherever possible. Currently, I am considering this order for all the passes:

      1.) G-Buffer prepass
      2.) Light accumulation
      3.) SSAO
      4.) Material (output motion vectors with MRT)
      5.) Refractive (written to a separate buffer, then overwrites material buffer)
      6.) EdgeAA
      7.) Depth of Field
      8.) Motion Blur
      9.) Translucency (accumulated in a separate buffer, applies its own fog)
      10.) Fog (also applies translucency buffer)
      11.) Generate Bloom
      12.) Apply Bloom, Tonemapping, & Color Correction

      Problems:

      Right now, I have concerns about ordering post-passes that involve blurring (Motion Blur, Depth of Field, EdgeAA, etc.) or that overwrite other parts of the buffer (volume lighting, bloom, translucency, etc.):

      1.) Blur passes introducing artifacts into each other (ex. Motion Blur then Depth of Field blurring the streaks).
      2.) Overwrite passes happening before blur passes (ex. Translucency incorrectly blurred; ex. Fog blurring artifacts with EdgeAA).
      3.) Overwrite passes happening after blur passes (ex. Translucency being sharp on a blurred background).

      Questions:

      Any insights from prior work would be most appreciated.

      1.) Can you comment on my current proposed ordering of passes? Any potential problems or artifacts it will introduce?
      2.) Can you offer your own proposed ordering? Any potential problems or artifacts it will introduce?

      Any help you can offer would be most appreciated. Thanks.
  23. n00body

    DLAA

    Example images would be most appreciated.
  24. n00body

    FBO, textures and MRT

    When you changed the bound textures, did you also make sure to change glDrawBuffers()?
  25. Open for business!