About FreneticPonE

  1. The Nvidia paper is... unreliable. Cone tracing is potentially fast; the problem is that light leak makes it hard to implement reliably. By cone tracing's nature, the farther you trace the more light leak you get, but the shorter a cone you trace the less light you get. Overall it was an idea that seemed like the future two-plus years ago, but it has since fallen out of favor due to its weaknesses. There are a lot of other GI techniques that can be considered depending on your requirements. E.g. is the environment static, highly deformable, or runtime generated? Does light need to move fast, or can it move slowly (e.g. a slow time of day)? That being said, signed distance field tracing plus some version of lightcuts/many lights looks like it could, potentially, do what cone tracing once promised in realtime. Here's a nice presentation on signed distance fields, which is essentially the sparse voxel octree from cone tracing except you "sphere trace" instead of marching a cone; the benefit is no light leak. Lightcuts/VPLs/"many lights" would be the other half of the equation. Here's a nice presentation from Square Enix, wherein the biggest cost in their test scene is their choice of "adaptive imperfect shadow maps", which is a really hacky and slow way to do what SDF tracing can do easier and faster.
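To make the sphere-tracing idea concrete, here's a minimal sketch in Python (the scene SDF and all parameters are illustrative, not from either presentation). Each step advances the ray by exactly the distance the field guarantees is empty, which is why thin occluders can't be skipped the way a widening cone smears over them:

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere (negative inside)."""
    return math.dist(p, center) - radius

def sphere_trace(origin, direction, sdf, max_dist=100.0, eps=1e-4, max_steps=128):
    """March along the ray, stepping by the SDF value each time.
    Unlike a cone trace, every step queries an exact distance bound,
    so the ray never tunnels through geometry (no light leak).
    Returns the hit distance, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:          # close enough: count it as a hit
            return t
        t += d               # safe step: nothing is closer than d
        if t > max_dist:
            break
    return None
```

A ray fired straight at a unit sphere five units away converges on a hit distance of about 4.0 in a handful of steps.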
  2. You don't need to do virtual texturing with one master texture for the whole world. You'll need to do blending again, but you can use it almost exactly like traditional texturing, without worrying about texel density or disc space at all. The latest Trials game does this (at least, that's what their GDC presentation indicated).
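For anyone unfamiliar with the indirection behind virtual texturing, here's a toy sketch of the page-request math (all sizes are made up for illustration, not from Trials or any engine). The point is that only the tiles actually sampled this frame need to be resident:

```python
def page_request(u, v, mip, tex_size=65536, page_size=128):
    """Map a UV coordinate and mip level to a virtual-texture page ID.
    Pages are fixed-size tiles; the indirection table is filled on demand,
    so only pages actually sampled ever reach disc or memory."""
    mip_size = max(tex_size >> mip, page_size)   # texture extent at this mip
    pages_per_row = mip_size // page_size
    px = min(int(u * mip_size), mip_size - 1) // page_size
    py = min(int(v * mip_size), mip_size - 1) // page_size
    return (mip, px, py), pages_per_row
```

Sampling the center of mip 0 requests page (0, 256, 256) out of a 512x512 page grid; by mip 9 the whole texture fits in a single page.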
  3. 3D Blending local-global envmaps

    Remedy also stored voxelized pointers indicating which probes are relevant where. Heck, you could go a step further (or does Remedy do this already?) and store an SH probe with channels pointing towards the relevant probes to blend. It'd be great for windows and the like; blending relevant outdoor probes would be great there. You could even make the entire system realtime, or near to it. Infinite Warfare used deferred probe rendering for realtime GI, and Shadow Warrior 2 had procedurally generated levels lit at creation time. I seriously hope those are the right links, I'm on a slow public wifi at the moment so... Regardless, a nice trick is to use SH probes with, say, ambient occlusion info or static lighting info to correct cubemap lighting. This way you can use cubemaps for both spec and diffuse, and then at least somewhat correct them later.
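The per-voxel pointer idea boils down to a small weighted blend at shading time. Here's a minimal sketch (the probe representation is simplified to flat RGB irradiance; in the scheme described above the pointer list and weights per voxel would be baked offline):

```python
def blend_probes(probes, weights):
    """Blend the handful of probes a voxel points at.
    probes  - RGB irradiance samples from the relevant environment probes
    weights - per-probe relevance weights stored alongside the pointers
    Weights are normalized here so partial pointer lists still work."""
    total = sum(weights)
    if total <= 0.0:
        return (0.0, 0.0, 0.0)
    return tuple(sum(w * p[c] for p, w in zip(probes, weights)) / total
                 for c in range(3))
```

A window voxel pointing half at an indoor probe and half at an outdoor one just averages the two, which is exactly the smooth indoor/outdoor transition you want there.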
  4. 3D Spherical Harmonics and Lightmaps

    Oof, I remember that second one. At that point more traditional pathtracing is just as fast or faster, doesn't have any missing-data problems, and would probably use less memory since there'd be no duplicate copies of the same data.
  5. 3D Spherical Harmonics and Lightmaps

    Cubemaps offer only ultra-low-frequency spatial data, no matter how much angular frequency they provide. Invariably, the farther you get from the sample point, or if the shading point is just behind a pole or something, the less correct the data will be, no matter how high the resolution. Lightmaps are ultra-high-frequency spatial data; even if their angular data is low frequency, it can still be more correct than a cubemap, no matter how many tricks you pull. And SSAO only works with onscreen data, and only works for darkening things. Most modern SH/SG lightmaps are used to somewhat correct or supplement cubemaps.
  6. 3D Spherical Harmonics and Lightmaps

    Cubemaps are only sampled from one spatial point, maybe two or so if you're blending across. An H-basis lightmap, say, would sample light at every texel. You just contribute whatever specular response you can from your spherical harmonics to help with the fact that the cubemap is almost certainly going to be incorrect to some degree. For rough surfaces the entire specular response can come from the lightmap, and thus (except for dynamic stuff) be entirely correct position-wise. Doing all this helps correlate your diffuse color to your specular response, which becomes uncorrelated the more incorrect your cubemaps get. BTW, if you're curious, I'd consider "state of the art" to be Remedy's sparse SH grid used in Quantum Break: https://users.aalto.fi/~silvena4/Publications/SIGGRAPH_2015_Remedy_Notes.pdf The idea is to voxelize your level into a sparse voxel grid, then place SH (or SG/whatever) probes at each relevant grid point. The overall spatial resolution is less than a lightmap's, but it's much easier to change the lighting in realtime, and it uses the exact same lighting terms for static and dynamic objects. It might not seem intuitive, but a uniform lighting response across all objects gives a nice look, compared to the disjointed look you get from high-detail lightmaps sitting right next to dynamic objects with less detailed indirect lighting.
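For reference, evaluating one of those probes is cheap. Here's a sketch of an L0+L1 (4-coefficient) SH evaluation per color channel, using the standard band constants (the probe layout is my own illustration, not Remedy's actual format):

```python
def eval_sh_l1(sh, n):
    """Evaluate a 4-coefficient (L0 + L1) spherical-harmonic probe in
    unit direction n, per color channel. sh[c] = [c0, c1x, c1y, c1z].
    The L1 band captures the dominant light direction, which is what
    lets a probe grid supply a coarse directional diffuse/spec term."""
    c0 = 0.282095              # Y_0^0 basis constant
    c1 = 0.488603              # Y_1 basis constant
    return tuple(ch[0] * c0 + c1 * (ch[1] * n[0] + ch[2] * n[1] + ch[3] * n[2])
                 for ch in sh)
```

A probe with only the DC coefficient returns the same value for every normal (pure ambient); putting energy in the L1 z-channel makes it brightest for normals facing +z.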
  7. Not randomly, there are patterns etc. Here's a nice tutorial instead of explaining here. As for variance shadow maps, they're very fast at low resolutions but scale badly: 4x the resolution = 4x the cost. They also have light leak, and fixing that sends you down an infinite hole of ever-longer papers. I'd definitely stick with PCF first and try playing around with offset biases to clean up artifacts.
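The core of PCF is simple enough to show in a few lines. Here's a CPU-side sketch (a real implementation lives in a shader with hardware comparison samplers; the map, bias, and tap pattern here are illustrative):

```python
def pcf_shadow(shadow_map, u, v, receiver_depth, bias=0.005, radius=1):
    """3x3 (for radius=1) percentage-closer filter over a depth map.
    Each tap is a binary lit/shadowed test against the stored depth plus
    a small bias (the offset that hides shadow acne); averaging the taps
    gives a soft edge. shadow_map is a 2D list of depths; u, v are texels."""
    h, w = len(shadow_map), len(shadow_map[0])
    lit, taps = 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x = min(max(u + dx, 0), w - 1)      # clamp taps to the map
            y = min(max(v + dy, 0), h - 1)
            if receiver_depth <= shadow_map[y][x] + bias:
                lit += 1
            taps += 1
    return lit / taps   # fraction of taps that are lit, in [0, 1]
```

The key property versus VSM: each tap is an exact depth comparison, so there's no variance-based leak; the cost is simply taps x resolution, which is why people then start jittering/rotating the tap pattern instead of adding taps.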
  8. Bloom

    Ergh, Fallout 4 has a threshold and it's glaring. Thresholding bloom with proper HDR values introduces weird, non-physical results. The idea behind bloom is that you're simulating light being scattered as it passes through a lens, which happens in real life because your eye has a lens (well, an entire lens stack, really): the brighter a part of your vision, the more light from it gets scattered and the more obvious the scattering becomes. So scattering the whole image is correct for both your eye and a camera. You can still introduce a cutoff (it's rendering, you don't have to do anything physically based), but I found it a bit glaring and annoying in Fallout 4, for example, while the no-cutoff method has never bothered me. At least personally.
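The threshold-free version falls out naturally once you work in real HDR values. Here's a 1-D toy sketch (real bloom uses a 2-D multi-pass Gaussian pyramid; the kernel and blend factor here are illustrative): blur everything, then mix a small fraction back, and bright pixels dominate the halo simply because their values are large.

```python
def bloom_1d(hdr, blend=0.05, kernel=(0.25, 0.5, 0.25)):
    """Threshold-free bloom on a 1-D HDR scanline: blur the whole signal,
    then add a small fraction back. No cutoff is needed; a pixel at 100x
    the brightness of its neighbours scatters 100x as much light."""
    n = len(hdr)
    blurred = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - 1, 0), n - 1)   # clamp at the edges
            acc += w * hdr[j]
        blurred.append(acc)
    return [(1.0 - blend) * p + blend * b for p, b in zip(hdr, blurred)]
```

A lone pixel at 100.0 bleeds a visible halo onto its neighbours, while dim pixels scatter a proportionally tiny (invisible) amount instead of being hard-clipped out of the effect.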
  9. FXAA and normals/depth

    Not exactly, as such. For FXAA you're only blurring final colors; the basic idea is that it hides jaggies at the cost of being a bit blurry. For normals you do want AA: you want normals and roughness to correlate, so the normal map blends into your roughness. This helps prevent "sparklies", the bright shiny pixels you get from using HDR and reflective PBR materials. I know that's not the best explanation, but the full explanation is here. Code, demo, etc. etc.
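One common way to do that normal-to-roughness blending is a Toksvig-style remap: when a mip level averages divergent normals, the mean normal gets shorter, and that shortening is converted into extra roughness. A sketch (this is the standard Blinn-Phong-power formulation, not necessarily what the linked article uses):

```python
import math

def toksvig_roughness(avg_normal_len, base_roughness):
    """Fold normal-map variance into roughness (Toksvig-style).
    avg_normal_len is the length of the averaged (mipmapped) normal,
    1.0 for a flat patch, < 1.0 where normals disagree. Minified bumpy
    surfaces thus stay rough instead of turning into sparkly mirrors."""
    # Convert roughness to a Blinn-Phong-style specular power.
    s = max(2.0 / (base_roughness * base_roughness) - 2.0, 1e-4)
    # Toksvig factor: how much the normal variance dampens the lobe.
    ft = avg_normal_len / (avg_normal_len + s * (1.0 - avg_normal_len))
    return math.sqrt(2.0 / (ft * s + 2.0))       # back to roughness
```

With a full-length normal the roughness passes through unchanged; as the averaged normal shrinks, the output roughness grows, which is exactly the normal/roughness correlation being described.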
  10. FXAA and normals/depth

    Depth AA? What would this even be?
  11. Tone Mapping

    Not saying it's not useful now, but you'll get the exact same banding in white areas of textures that are darkly lit, so under the right circumstances it's still bad. I.e. a better format that gives more precision over the whole range is probably needed already. Not to mention something like HDR VR, which should be able to display a massive output range and would show up banding very badly.
  12. Tone Mapping

    That's the idea as I understand it, too. But suddenly I'm questioning it, because you aren't displaying the actual sRGB texture by itself as the final image. You're doing any number of things to it before finally displaying it, so while your input is correctly compressed for human perception, your output might not be.
  13. Tone Mapping

    That's nice in theory; unfortunately, the final brightness your textures end up at onscreen is only somewhat correlated to the albedo stored in the texture. I.e. a bright, near-white texture can end up darker onscreen than a fairly dark texture, if the white one sits in darkness and the dark one is brightly lit (assuming a large HDR range). You'd then get banding on the near-white texture, while the brightly lit dark texture gets no benefit from the perceptual compression. I'd apply sRGB at final tonemapping, where it matches final perception, rather than trusting the source data's encoding.
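To make the precision argument concrete, here's the sRGB transfer function (per IEC 61966-2-1) plus a helper showing how much linear-light range one 8-bit code step covers. The steps are tiny near black and huge near white, which is the perceptual allocation being discussed, and it only helps if the stored value tracks the displayed value:

```python
def srgb_encode(x):
    """Linear -> sRGB (IEC 61966-2-1 transfer function)."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def srgb_decode(y):
    """sRGB -> linear (inverse of the above)."""
    return y / 12.92 if y <= 0.04045 else ((y + 0.055) / 1.055) ** 2.4

def step_size_linear(y8):
    """Linear-light width of one 8-bit sRGB code step at code value y8.
    Dark codes get fine steps, bright codes get coarse ones; if lighting
    later shifts a bright-stored texel into darkness, it lands in the
    coarse end of the scale and bands."""
    return srgb_decode((y8 + 1) / 255.0) - srgb_decode(y8 / 255.0)
```

Code 200's step is roughly twenty times wider (in linear light) than code 10's, so a near-white texel rendered dark gets far fewer usable levels than its stored value suggests.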
  14. Material parameters in PBR pipeline

    This is the ideal; unfortunately IHVs, bastards that they are, don't necessarily adhere to any spec while advertising "HDR!" and accepting input as such anyway. The primary one I can think of is Samsung's CHG70 monitors, which don't formally follow the HDR10 spec AFAIK. Fortunately for them, FreeSync 2 is available, so it'll tonemap directly to the monitor's space. But it's an example that IHVs don't necessarily give a shit about technical specs or following them at all, especially once marketing gets their hands on a product (just look at Dell's absolute bullshit "HDR" monitor from earlier this year).
  15. DX11 Dynamic ibl

    Not really; this leads into the "local lighting" infinite-bounce trap. Light won't "travel" through the level correctly unless you iterate over every single cubemap, which you don't really want to do. So you end up with pockets of extreme brightness, where light bounces around, right next to pockets of extreme darkness. You also get iteration-time lag: when you start it's very dark, and the longer you hang around the brighter it gets (though less so with each iteration) as each pass bounces more light. It can look very annoying, as there's a literal lag to the light, as if it were somehow travelling very slowly.

    The general idea is doable, however! The only fully shipped version I'm aware of is Call of Duty: Infinite Warfare, with their fast filtering of reflection probes and the rendering part. There are several strategies you could choose from, but all of them ditch the idea of taking the previous cubemap lighting results and re-applying them infinitely and recursively. One is using only local and sun light for lighting each probe at runtime; you'd only get one "bounce", but you could render an ambient term as well. Another is rendering the ambient term into the reflection probes, then using just the reflection probes for the final pass with no separate ambient there, but this can lead to odd color-bleeding results that don't look good.

    A hack could go like so: light your cubemap with an ambient term, then take the resulting HDR cubemap and re-light the original, unlit cubemap with it once. This should approximate multiple light bounces and smooth out any weird color/light-bleeding artifacts that come from doing only one "ambient" bounce. As long as you smoothly blend between cubemaps for both spec and diffuse, I'd suspect there wouldn't be many boundary artifacts where inappropriately dramatic lighting changes happen.

    That being said, check out the rendering part's separate spherical-harmonic ambient-occlusion-like term. The idea is to take a higher-resolution, precomputed sample of global illumination results, and where that differs from the sparser cubemap information, bake the difference into a greyscale spherical harmonic. Naturally dark areas then don't get lit up inappropriately just because the cubemap isn't correct, and vice versa. It's a hack, but an effective one.

    Edit - The Witcher 3 also does some sort of dynamic cubemap thing, but I'm not entirely sure how it works and I don't think they ever said.
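The two-bounce hack can be sketched in a few lines. This is a deliberately crude model (a "cubemap" reduced to a flat list of RGB texels, irradiance approximated by a straight average; nothing here is from Infinite Warfare's actual implementation), but it shows why the feedback loop disappears:

```python
def relight_once(albedo, incoming):
    """One bounce: surface color times per-texel incoming light."""
    return [tuple(a[c] * l[c] for c in range(3)) for a, l in zip(albedo, incoming)]

def average(colors):
    """Crude irradiance estimate: average all texels."""
    n = len(colors)
    return tuple(sum(c[i] for c in colors) / n for i in range(3))

def two_bounce_hack(albedo_probe, ambient):
    """Light the unlit (albedo) cubemap with a flat ambient term, fold the
    result into a new incoming term, and relight the original albedo with
    it once. Two bounces, no recursion, so there's no frame-to-frame light
    lag and no runaway brightness from re-ingesting your own output."""
    first = relight_once(albedo_probe, [ambient] * len(albedo_probe))
    bounced = average(first)
    return relight_once(albedo_probe, [bounced] * len(albedo_probe))
```

With a 50%-grey probe under unit ambient, the second bounce comes out at 0.25, i.e. albedo squared times ambient, which is roughly what a converged multi-bounce solution would trend toward without ever iterating.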