FreneticPonE

  1. Bloom

    Ergh, Fallout 4 has a threshold and it's glaring. Thresholding bloom with proper HDR values introduces weird, non-physical results. The idea behind bloom is that you're simulating light being scattered as it passes through a lens, which happens in real life because your eye has a lens (well, an entire lens stack, really): the brighter a part of your vision is, the more light from it gets scattered by the lens and the more obvious the scattering becomes. So scattering the whole image is correct for both your eye and a camera. You can still introduce a cutoff, it's rendering, you don't have to do anything physically based, but I found the cutoff glaring and annoying in Fallout 4, while the no-cutoff method has never bothered me. At least personally. See the sketch below for the difference.
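
    A minimal sketch of the two approaches (CPU-side, with made-up helper names; in practice this logic lives in the bloom prefilter/downsample shader):

    ```cpp
    // Per-pixel bloom contribution, sketched on the CPU for clarity.
    struct RGB { float r, g, b; };

    // "Physical" version: the lens scatters a small, fixed fraction of ALL
    // incoming light, so every pixel contributes in proportion to its
    // brightness and there is no discontinuity anywhere.
    RGB BloomContributionScatter(RGB hdr, float scatterFraction /* e.g. 0.05 */)
    {
        return { hdr.r * scatterFraction,
                 hdr.g * scatterFraction,
                 hdr.b * scatterFraction };
    }

    // Threshold version (the Fallout 4 style): pixels below the cutoff
    // contribute nothing at all, so a surface crossing the threshold pops,
    // which is the non-physical result described above.
    RGB BloomContributionThreshold(RGB hdr, float threshold /* e.g. 1.0 */)
    {
        float luma = 0.2126f * hdr.r + 0.7152f * hdr.g + 0.0722f * hdr.b;
        if (luma <= threshold)
            return { 0.0f, 0.0f, 0.0f };
        float scale = (luma - threshold) / luma; // keep only energy above the cutoff
        return { hdr.r * scale, hdr.g * scale, hdr.b * scale };
    }
    ```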
  2. FXAA and normals/depth

    Not exactly. For FXAA you're only blurring final colors; the basic idea is that it hides jaggies at the cost of being a bit blurry. For normals you do want AA, but specifically you want normals and roughness to correlate, so that filtered normal-map detail blends into your roughness. This helps prevent "sparklies", the bright shiny pixels you get from combining HDR with reflective PBR materials. I know that's not the best explanation, but the full explanation is here. Code, demo, etc. A sketch of the usual normal-to-roughness trick is below.
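
    This assumes a Toksvig-style approach; it's my sketch of the general technique, not necessarily the exact method in the linked demo:

    ```cpp
    #include <algorithm>
    #include <cmath>

    // Specular AA sketch: when a normal map is filtered (mipped/blurred),
    // the averaged normal shortens, and that shortening measures how much
    // the normals disagreed. Feed that variance into roughness so lost
    // normal detail widens the highlight instead of sparkling.
    // Assumes a GGX-style alpha = roughness^2 parameterization.
    float AntialiasedRoughness(float baseRoughness, float avgNormalLength)
    {
        avgNormalLength = std::clamp(avgNormalLength, 1e-4f, 1.0f);

        // Variance implied by the shortened average normal
        // (after Toksvig, "Mipmapping Normal Maps").
        float variance = (1.0f - avgNormalLength) / avgNormalLength;

        float alpha = baseRoughness * baseRoughness + variance;
        return std::sqrt(std::min(alpha, 1.0f));
    }
    ```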
  3. FXAA and normals/depth

    Depth AA? What would this even be?
  4. Tone Mapping

    Not saying it's not useful now, but you'll get the exact same banding in white areas of textures that are darkly lit. So under the right circumstances it's still bad, i.e. a better format that gives more precision over the whole range is probably needed already. Not to mention something like HDR VR, which should be able to display a massive output range and will show up banding very badly.
  5. Tone Mapping

    This is the idea, I know. But suddenly I'm questioning it, because you aren't displaying the actual sRGB texture by itself as the final image. You do any number of things to it before it's finally displayed, so while your input is correctly compressed for human perception, your output might not be.
  6. Tone Mapping

    That's nice in theory; unfortunately, the final brightness your textures end up at onscreen is only loosely correlated to the albedo stored in the texture. I.e. a bright, near-white texture can end up darker than a fairly dark texture if the white one sits in darkness and the dark one is brightly lit in the actual environment (assuming a large HDR range). You'd then get banding on the near-white texture, while the brightly lit dark texture gets no benefit from the perceptual compression; there's a worked example below. I'd stick with sRGB for the final tonemap and final perception, rather than for source data.
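
    A quick worked example of that asymmetry (all numbers are hypothetical, chosen just to keep both results in displayable range):

    ```cpp
    #include <cmath>
    #include <cstdio>

    // Standard sRGB transfer functions (the usual piecewise curves).
    static float SrgbToLinear(float s)
    {
        return s <= 0.04045f ? s / 12.92f
                             : std::pow((s + 0.055f) / 1.055f, 2.4f);
    }
    static float LinearToSrgb(float x)
    {
        return x <= 0.0031308f ? 12.92f * x
                               : 1.055f * std::pow(x, 1.0f / 2.4f) - 0.055f;
    }

    // How many distinct 8-bit output codes does one 8-bit step in the
    // SOURCE texture map to once lighting is applied? Below 1.0, adjacent
    // texel values collapse into the same output code: banding.
    static float CodesPerTexelStep(int texel, float light)
    {
        float a0 = SrgbToLinear(texel / 255.0f);
        float a1 = SrgbToLinear((texel + 1) / 255.0f);
        return (LinearToSrgb(a1 * light) - LinearToSrgb(a0 * light)) * 255.0f;
    }

    int main()
    {
        // Near-white texel (230) in deep shadow vs dark texel (40) brightly lit.
        std::printf("near-white, dimly lit: %.2f codes/step\n", CodesPerTexelStep(230, 0.02f));
        std::printf("dark, brightly lit:    %.2f codes/step\n", CodesPerTexelStep(40, 5.0f));
    }
    ```

    With these made-up numbers the shadowed near-white texture lands around 0.2 output codes per texel step (roughly five texture values collapsing into each displayed value, i.e. banding), while the brightly lit dark texture gets about two codes per step.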
  7. Material parameters in PBR pipeline

    This is the ideal; unfortunately IHVs, bastards that they are, don't necessarily adhere to any spec while advertising "HDR!" and accepting input as such anyway. The primary example I can think of is Samsung's CHG70 monitor, which doesn't formally follow the HDR10 spec AFAIK. Fortunately FreeSync 2 is available there, so it'll tonemap directly to the monitor's space. But it's an example of how IHVs don't necessarily give a shit about technical specs or following them at all, especially once marketing gets their hands on a product (just look at Dell's absolute bullshit "HDR" monitor from earlier this year).
  8. DX11 Dynamic ibl

    Not really, this leads into the "local lighting" infinite-bounce trap. Light won't "travel" through the level correctly unless you iterate over every single cubemap, which you don't really want to do. So you end up with pockets of extreme brightness, where light keeps bouncing around, next to pockets of extreme darkness. You also get iteration lag: when you start it's very dark, and the longer you hang around the brighter it gets (by exponentially less each step) as each iteration bounces more light. It can look very annoying, as there's a literal lag to the light, as if it were somehow travelling very slowly.

    The general idea is doable, however. The only fully shipped version I'm aware of is Call of Duty: Infinite Warfare, with their "Fast Filtering of Reflection Probes" and the accompanying rendering talk. There are several strategies you could choose from, but all of them ditch the idea of taking the previous cubemap lighting results and re-applying them infinitely and recursively. One is to light each probe at runtime using only local lights and the sun; you'd only get one "bounce", but you could render an ambient term as well. Another is to render the ambient term into the reflection probes and then use only the probes in the final pass, with no separate ambient, but this can lead to odd color-bleeding results that don't look good.

    A hack could go like this: light your cubemap with an ambient term, then take the resulting HDR cubemap and re-light the original, unlit cubemap with it once (see the sketch below). This should approximate multiple light bounces and smooth out the weird color/light-bleeding artifacts that come from doing only one "ambient" bounce. As long as you smoothly blend between cubemaps for both specular and diffuse, I'd suspect there wouldn't be many boundary artifacts where inappropriately dramatic lighting changes happen.

    That being said, check out the rendering talk's separate spherical-harmonic ambient-occlusion-like term. The idea is to take a higher-resolution, precomputed sample of global illumination results, and wherever that differs from the sparser cubemap information, bake the difference into a greyscale spherical harmonic. That way naturally dark areas don't get lit up inappropriately just because the cubemap isn't correct, and vice versa. It's a hack, but an effective one.

    Edit - The Witcher 3 also does some sort of dynamic cubemap thing, but I'm not entirely sure how it works and I don't think they ever said.
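
    A sketch of that hack (everything here is a toy stand-in: a "cubemap" is just a flat texel array and the lighting math is a flat ambient term, purely to show the two-pass structure):

    ```cpp
    #include <cstddef>
    #include <vector>

    struct Texel   { float r = 0, g = 0, b = 0; };
    struct Cubemap { std::vector<Texel> texels; }; // stand-in for a real HDR cubemap

    // Pass 1 stand-in: light the probe's unlit G-buffer (albedo here) with
    // a direct term plus a flat ambient term. One bounce, no recursion.
    Cubemap LightWithAmbient(const Cubemap& albedo, Texel direct, Texel ambient)
    {
        Cubemap out{std::vector<Texel>(albedo.texels.size())};
        for (std::size_t i = 0; i < albedo.texels.size(); ++i) {
            const Texel& a = albedo.texels[i];
            out.texels[i] = { a.r * (direct.r + ambient.r),
                              a.g * (direct.g + ambient.g),
                              a.b * (direct.b + ambient.b) };
        }
        return out;
    }

    // Pass 2 stand-in: relight the ORIGINAL unlit data, using the average
    // of the first-bounce result as the new ambient. In a real renderer
    // you'd sample the first-bounce cubemap directionally instead.
    Cubemap RelightOnce(const Cubemap& albedo, const Cubemap& bounce, Texel direct)
    {
        Texel avg;
        for (const Texel& t : bounce.texels) { avg.r += t.r; avg.g += t.g; avg.b += t.b; }
        if (!bounce.texels.empty()) {
            float n = float(bounce.texels.size());
            avg.r /= n; avg.g /= n; avg.b /= n;
        }
        return LightWithAmbient(albedo, direct, avg);
    }

    Cubemap BuildProbe(const Cubemap& unlitAlbedo, Texel direct, Texel ambient)
    {
        // Exactly two passes, and the output is never fed back into itself
        // across frames: that recursion is what creates the brightness
        // pockets and the slow "light lag" described above.
        Cubemap firstBounce = LightWithAmbient(unlitAlbedo, direct, ambient);
        return RelightOnce(unlitAlbedo, firstBounce, direct);
    }
    ```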
  9. Depth-only pass

    Eh nevermind.
  10. Depth-only pass

    I doubt they do that anymore; Z-prepasses have largely been phased out since the transition to the modern consoles. It was worth it on the previous generation, where polycount could scale far better than memory bandwidth, but that's no longer the case today.
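
    For reference, the technique being discussed looks roughly like this in D3D11 (a sketch only; the device, views, and shaders are assumed to exist elsewhere, DrawScene is a hypothetical stand-in, and error handling is omitted):

    ```cpp
    #include <d3d11.h>

    // Classic Z-prepass sketch. Pass 1 renders depth only; pass 2 shades
    // with depth writes off and an EQUAL test so each pixel is shaded once.
    void DepthPrepassFrame(ID3D11Device* device, ID3D11DeviceContext* ctx,
                           ID3D11RenderTargetView* backbuffer,
                           ID3D11DepthStencilView* dsv)
    {
        // Pass 1: no color targets bound, depth writes on.
        D3D11_DEPTH_STENCIL_DESC writeDepth = {};
        writeDepth.DepthEnable    = TRUE;
        writeDepth.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
        writeDepth.DepthFunc      = D3D11_COMPARISON_LESS;
        ID3D11DepthStencilState* prepass = nullptr;
        device->CreateDepthStencilState(&writeDepth, &prepass);

        ctx->OMSetRenderTargets(0, nullptr, dsv); // depth only
        ctx->OMSetDepthStencilState(prepass, 0);
        // DrawScene(ctx); // position-only VS, null pixel shader

        // Pass 2: full shading for the surviving (visible) pixels only.
        D3D11_DEPTH_STENCIL_DESC testEqual = {};
        testEqual.DepthEnable    = TRUE;
        testEqual.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO; // read-only depth
        testEqual.DepthFunc      = D3D11_COMPARISON_EQUAL;
        ID3D11DepthStencilState* shading = nullptr;
        device->CreateDepthStencilState(&testEqual, &shading);

        ctx->OMSetRenderTargets(1, &backbuffer, dsv);
        ctx->OMSetDepthStencilState(shading, 0);
        // DrawScene(ctx); // full material shaders

        prepass->Release();
        shading->Release();
    }
    ```

    The win is trading an extra geometry pass for zero overdraw in the expensive shading pass; whether that trade pays off is exactly the polycount-vs-bandwidth balance mentioned above.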
  11. This depends entirely on the studio and project. If your engine uses approximate ACES, you may well want any art-production viewports to use the same curve, so as to give your artists an accurate in-game preview. Then again, maybe you don't care, or you're expecting to create custom curves and so on as you go along, and so don't have an accurate way to preview anyway. The best thing to do really depends on your specific situation.
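
    For context, "approximate ACES" usually means a cheap curve fit; the one below is Krzysztof Narkowicz's widely used rational fit (shown as an assumption about what "approximate ACES" means here, not as any particular engine's code):

    ```cpp
    #include <algorithm>

    // Narkowicz's filmic approximation of the full ACES RRT+ODT transform:
    // a per-channel rational curve applied to linear HDR color before the
    // final sRGB encode.
    float AcesApprox(float x)
    {
        const float a = 2.51f, b = 0.03f, c = 2.43f, d = 0.59f, e = 0.14f;
        return std::clamp((x * (a * x + b)) / (x * (c * x + d) + e), 0.0f, 1.0f);
    }
    ```

    If the game runs its frame through a curve like this and the art tool's viewport doesn't, artists end up authoring against colors that never appear in game.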
  12. R&D [PBR] Renormalize Lambert

    Aye, Hodgman has the right of it. My personal favorite diffuse term comes from Respawn and Titanfall 2. They got the diffuse to be not only energy conserving but reciprocal, matched to GGX, heightfield correlated, and blah blah blah reference-tested physically based, etc. Take a looksie: http://www.gdcvault.com/play/1024478/PBR-Diffuse-Lighting-for-GGX
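
    As for the thread title itself, the basic renormalization is just the 1/π factor that makes Lambert integrate to the albedo over the hemisphere; a minimal sketch (Respawn's model in the talk goes much further):

    ```cpp
    // Renormalized Lambert: the 1/pi factor makes the BRDF integrate to
    // exactly "albedo" over the cosine-weighted hemisphere, so diffuse
    // reflection never emits more energy than it receives.
    struct RGB { float r, g, b; };

    RGB LambertBrdf(RGB albedo)
    {
        const float kInvPi = 1.0f / 3.14159265f;
        return { albedo.r * kInvPi, albedo.g * kInvPi, albedo.b * kInvPi };
    }
    // Shading-loop usage: radiance += LambertBrdf(albedo) * lightColor * NdotL
    ```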
  13. I'm confused about why you'd want to cast rays over an entire sphere. Unless you're sampling translucency, lighting should only be incoming over the hemisphere above the surface; sampling over the entire sphere would produce over-darkening artifacts, i.e. shadows cast onto objects from behind them. A standard hemisphere-sampling sketch is below.
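
    The usual way to restrict samples to the hemisphere around the normal (a standard cosine-weighted sampling sketch, not tied to the original poster's code):

    ```cpp
    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Cosine-weighted sample on the hemisphere around +Z (u1, u2 are
    // uniform randoms in [0,1)); rotate the result into the surface's
    // tangent frame so it lies around the normal. z is always >= 0, so no
    // sample ever comes from behind the surface, which is exactly what
    // full-sphere sampling gets wrong.
    Vec3 SampleHemisphereCosine(float u1, float u2)
    {
        float r   = std::sqrt(u1);       // radius on the unit disk
        float phi = 6.2831853f * u2;     // 2 * pi
        return { r * std::cos(phi),
                 r * std::sin(phi),
                 std::sqrt(std::max(0.0f, 1.0f - u1)) };
    }
    ```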
  14. The black edges look like your rays are leaving screenspace without falling back on any other reflection source, such as whatever it is the normal game does. A sketch of the usual fallback blend is below.
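
    A sketch of the usual fix (names are mine, not from any particular engine): fade the SSR result by distance to the screen edge and blend toward an environment-probe fallback instead of returning black.

    ```cpp
    #include <algorithm>

    struct RGB { float r, g, b; };

    // Confidence in an SSR hit based on where the ray landed in screen UV
    // space (0..1). Hits near the border fade to 0 so we can hand over to
    // a fallback; rays that leave the screen entirely get weight 0.
    float SsrEdgeFade(float u, float v, float fadeWidth = 0.1f)
    {
        if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f)
            return 0.0f; // ray exited the screen
        float d = std::min(std::min(u, 1.0f - u), std::min(v, 1.0f - v));
        return std::clamp(d / fadeWidth, 0.0f, 1.0f);
    }

    // Blend: screen-space result where it's trustworthy, probe elsewhere,
    // so out-of-screen rays show the probe instead of black edges.
    RGB BlendReflection(RGB ssr, RGB probe, float w)
    {
        return { ssr.r * w + probe.r * (1.0f - w),
                 ssr.g * w + probe.g * (1.0f - w),
                 ssr.b * w + probe.b * (1.0f - w) };
    }
    ```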
  15. IBL Diffuse wrong color

    What's the point of PBR if you're not using an HDR pipeline?