
Frenetic Pony

Member Since 30 Oct 2011

Posts I've Made

In Topic: Moment Shadow Maps for Single Scattering, Soft Shadows and Translucent Occlud...

06 January 2016 - 06:42 PM

It's important to note that this paper, which is here: http://cg.cs.uni-bonn.de/aigaion2root/attachments/MSMBeyondHardShadows.pdf

concerns itself with filtering shadows for use in light scattering, i.e. something like this: https://graphics.tudelft.nl/Publications-new/2014/KSE14/KSE14.pdf

Things like this, or Nvidia's hacky tessellation-based god rays, are fine, but most people use something like this: http://advances.realtimerendering.com/s2015/Frostbite%20PB%20and%20unified%20volumetrics.pptx

which supports multiple lights more easily, supports visible fog volumes, and can even do things like volumetric clouds (http://advances.realtimerendering.com/s2015/The%20Real-time%20Volumetric%20Cloudscapes%20of%20Horizon%20-%20Zero%20Dawn%20-%20ARTR.pdf), all potentially faster than the previous approaches.

That being said, you can still use the moment shadow mapping stuff for filtering, and the video/paper you're interested in seems to make pre-filtered single scattering more efficient.

 

The paper you mention is also used to filter translucent occluders and soft shadows, i.e. something like this: http://www.crytek.com/download/Playing%20with%20Real-Time%20Shadows.pdf. Both are nice to have if you can afford them.

 

To sum up this long-winded reply: for filtering shadows you can still use exponential variance shadow mapping, which still looks better at roughly the same cost as moment shadow mapping. Or, for filtering shadows specifically for atmospheric scattering, shadows on particles, etc., you can just use normal variance shadow mapping and hope users don't notice the light leaking, because it's only atmospheric scattering/particles.
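For reference, here's a minimal sketch of the Chebyshev visibility test that plain variance shadow mapping uses; the over-estimation it makes when occluders overlap is exactly the light leak mentioned above. The function and parameter names are just mine for illustration.

```python
def vsm_visibility(moments, receiver_depth, min_variance=1e-4):
    """Chebyshev upper bound used by plain variance shadow maps.

    moments        -- (E[z], E[z^2]) fetched from a pre-blurred two-channel shadow map
    receiver_depth -- depth of the point being shaded, in the same space as z
    Returns an upper bound on the fraction of light reaching the point. Because it is
    only an upper bound, layered occluders make it too bright: the light leak.
    """
    mean, mean_sq = moments
    if receiver_depth <= mean:                  # in front of the average occluder
        return 1.0
    variance = max(mean_sq - mean * mean, min_variance)
    d = receiver_depth - mean
    return variance / (variance + d * d)        # bound on P(z >= receiver_depth)
```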


In Topic: Injection Step of VPLs into 3D Grid

01 January 2016 - 09:25 PM

Here's a demo with source code for you to peruse: http://blog.blackhc.net/2010/07/light-propagation-volumes/
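In case it helps while reading that code, here's a rough sketch of what the injection step usually looks like in that style of technique: each VPL is projected into the first two SH bands (4 coefficients) as a clamped cosine lobe around its normal, scaled by its flux, and accumulated into the grid cell it falls in. The names and grid layout below are my own illustration, not taken from the linked demo.

```python
import numpy as np

# Zonal coefficients of a clamped cosine lobe, rotated around 'normal'
# using Robin Green's real-SH convention for bands 0-1.
COS_LOBE_C0 = 0.886227   # sqrt(pi) / 2
COS_LOBE_C1 = 1.023327   # sqrt(pi / 3)

def cosine_lobe_sh4(normal):
    x, y, z = normal
    return np.array([COS_LOBE_C0, COS_LOBE_C1 * y, COS_LOBE_C1 * z, COS_LOBE_C1 * x])

def inject_vpl(grid, grid_min, cell_size, vpl_pos, vpl_normal, vpl_flux):
    """Accumulate one VPL into a (X, Y, Z, 3, 4) grid of SH coefficients.

    grid     -- numpy array holding 4 SH coefficients per color channel per cell
    vpl_flux -- RGB flux of the VPL (e.g. from a reflective shadow map texel)
    """
    idx = np.floor((np.asarray(vpl_pos, float) - grid_min) / cell_size).astype(int)
    lobe = cosine_lobe_sh4(np.asarray(vpl_normal, float))
    grid[tuple(idx)] += np.asarray(vpl_flux, float)[:, None] * lobe[None, :]

# Example: a 32^3 grid covering a 64-unit cube centered on the origin.
# grid = np.zeros((32, 32, 32, 3, 4))
# inject_vpl(grid, grid_min=np.array([-32.0, -32.0, -32.0]), cell_size=2.0,
#            vpl_pos=[1.0, 0.5, -3.0], vpl_normal=[0.0, 1.0, 0.0], vpl_flux=[1.0, 0.9, 0.7])
```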


In Topic: Questions on Baked GI Spherical Harmonics

31 December 2015 - 05:50 PM

 


Also what exactly do you mean "occlusion" like RaD? What occlusion specifically?

Maybe I'm misunderstanding and using the wrong term, but I was referring to the shadowing in the picture.

[image: mV3oPkV.png - the picture showing the shadowing in question]

 

 

 


Generally an offline raytracer is used for baking indirect illumination, rather than just an ambient term. Shoot rays with bounces all around and gather.

 

Darn, I was kind of hoping I could just do it without a raytracer. I'm going to be taking a ray tracing class this year, so hopefully I can come back to this and replace the ambient term.

 

 

 

I've also noticed these weird artifacts on my light probes. Is this "ringing"? Or am I just really messing up the projection step?

[image: G8GeiWb.png - screenshot of the light probe artifacts in question]

 

 

I believe that the occlusion referred to in the paper is occlusion for cubemaps/the specular term, which, since it's something you don't have at the moment, isn't something to concern yourself with immediately.

 

It's also possible that's in part due to ringing artifacts from SH, though ringing doesn't generally show up as an actual ring shape as such.
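If ringing is part of it, the usual fix (standard practice along the lines of Sloan's "Stupid Spherical Harmonics Tricks", not something specific to this thread) is to window the higher SH bands so the reconstruction rings less at the cost of a little blur. A small sketch for 9-coefficient SH, assuming the coefficients are stored band by band:

```python
import numpy as np

def window_sh9(coeffs, w=4.0):
    """Apply a Hanning-style window across SH bands to suppress ringing.

    coeffs -- 9 coefficients ordered (l=0), (l=1: m=-1,0,1), (l=2: m=-2..2)
    w      -- window width; smaller values damp the high bands more aggressively
    """
    out = np.array(coeffs, dtype=float)
    for l, (start, count) in enumerate([(0, 1), (1, 3), (4, 5)]):
        scale = 0.5 * (np.cos(np.pi * l / w) + 1.0)   # 1.0 for band 0, less for higher bands
        out[start:start + count] *= scale
    return out
```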


In Topic: Questions on Baked GI Spherical Harmonics

29 December 2015 - 06:36 PM

 

Robin Green's paper was super helpful. A lot of it still went over my head, but I've been able to put together a few things.

 

I have a 3D grid of light probes like MJP suggested. I'm rendering a cubemap for every light probe and processing the cubemap to construct 9 SH coefficients from it for each color channel. When rendering the cubemap, I apply some ambient lighting to every object in order to account for objects in shadow. (I wasn't too sure about this one.)

 

I'd like to try to get the nice occlusion that Ready At Dawn has in their SIGGRAPH presentation, pg. 18 (http://blog.selfshadow.com/publications/s2015-shading-course/rad/s2015_pbs_rad_slides.pdf). How do I get something like this?

 

I'm also wondering if anything looks really wrong with my current implementation. 

 

Generally an offline raytracer is used for baking indirect illumination, rather than just an ambient term. Shoot rays with bounces all around and gather. Also what exactly do you mean "occlusion" like RaD? What occlusion specifically?
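To make "shoot rays with bounces all around and gather" a bit more concrete, here's a rough sketch of projecting the radiance gathered at a probe position into 9 SH coefficients per channel, using the real-SH basis from Robin Green's paper. `trace_radiance` is a purely hypothetical stand-in for whatever offline tracer ends up doing the actual bouncing.

```python
import numpy as np

def sh_basis_9(d):
    """Real SH basis, bands 0-2 (Robin Green's convention), at unit direction d."""
    x, y, z = d
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def bake_probe(probe_pos, trace_radiance, num_samples=4096, rng=np.random.default_rng()):
    """Monte Carlo projection of incoming radiance into 9 SH coefficients per channel.

    trace_radiance(origin, direction) -> RGB radiance along the ray, including
    however many bounces the (hypothetical) offline tracer handles.
    """
    coeffs = np.zeros((3, 9))
    for _ in range(num_samples):
        d = rng.normal(size=3)          # uniform direction on the sphere
        d /= np.linalg.norm(d)
        radiance = np.asarray(trace_radiance(probe_pos, d), dtype=float)
        coeffs += radiance[:, None] * sh_basis_9(d)[None, :]
    # Monte Carlo estimate of the integral over the sphere: mean * 4*pi.
    return coeffs * (4.0 * np.pi / num_samples)
```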


In Topic: Screen-Space Reflection, enough or mix needed ?

24 December 2015 - 07:01 PM

One of the cornerstones of PBR is that diffuse and specular lighting should match as closely as possible (yay energy preservation!). You can go play Far Cry 4 and see where they don't quite get this right: e.g. under the right circumstances their indirect diffuse lighting term will be a lot darker than their specular probe, so everything will look dark and super shiny at the same time, and it looks weird.

 

As others mentioned, SSRR alone isn't enough; you'll get relatively few reflections from it. The most common way is to use some sort of cubemap specular probe. Either pre-computed, a la UE4 etc., if your game is linear, or dynamically created (take a cubemap centered around the camera) and updated as often as performance allows; that's what GTAV/Witcher 3/etc. do. To get properly physically based lighting you'll also have to importance sample the cubemap to match your BRDF. Fortunately there's filtered importance sampling (see below) and this nifty paper to do so in relatively little time: http://www.gris.informatik.tu-darmstadt.de/~mgoesele/download/Widmer-2015-AAS.pdf
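For the importance-sampling part specifically, here's a rough sketch of prefiltering one probe direction against a GGX lobe, in the usual split-sum style with the n = v assumption. `sample_environment` is a hypothetical stand-in for fetching your cubemap; none of this is taken from the linked paper.

```python
import numpy as np

def importance_sample_ggx(u1, u2, roughness):
    """Sample a half-vector around +Z from the GGX distribution (alpha = roughness^2)."""
    a = roughness * roughness
    phi = 2.0 * np.pi * u1
    cos_theta = np.sqrt((1.0 - u2) / (1.0 + (a * a - 1.0) * u2))
    sin_theta = np.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    return np.array([sin_theta * np.cos(phi), sin_theta * np.sin(phi), cos_theta])

def prefilter_direction(n, roughness, sample_environment, num_samples=256,
                        rng=np.random.default_rng()):
    """Prefilter the environment for direction n, assuming n == v.

    sample_environment(direction) -> RGB radiance from the probe in that direction.
    """
    n = np.asarray(n, float)
    # Build a tangent frame around n so GGX samples can be rotated into place.
    up = np.array([0.0, 0.0, 1.0]) if abs(n[2]) < 0.999 else np.array([1.0, 0.0, 0.0])
    t = np.cross(up, n); t /= np.linalg.norm(t)
    b = np.cross(n, t)
    total, weight = np.zeros(3), 0.0
    for _ in range(num_samples):
        h_local = importance_sample_ggx(rng.random(), rng.random(), roughness)
        h = h_local[0] * t + h_local[1] * b + h_local[2] * n
        l = 2.0 * np.dot(n, h) * h - n          # reflect v (= n) about h
        n_dot_l = np.dot(n, l)
        if n_dot_l > 0.0:
            total += np.asarray(sample_environment(l), float) * n_dot_l
            weight += n_dot_l
    return total / max(weight, 1e-6)
```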

 

Edit - the probe cost shouldn't be too bad. Stick with a low resolution (as low as 128x128 per face for a six-sided cubemap, or 256x256 for a dual paraboloid map). Only draw large static objects at low LOD (big trees, buildings, terrain, skybox), and stick with a dithered 10-10-10-2 HDR render target for output; players won't notice the banding that much. As a bonus, if you do a two-layer cubemap like the above PDF has, drawing large static objects into the first layer and distant terrain/skybox into the second, you can combine that with SSRR and get a decent water reflection out of it at the same time, without having to do a separate planar reflection.
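On the dithered 10-10-10-2 point, the idea is simply to add noise on the order of one quantization step before rounding, so banding turns into fine grain. A toy sketch, with white noise standing in for whatever screen-space pattern (ordered or blue noise) an actual renderer would use:

```python
import numpy as np

def dither_to_10bit(color, rng=np.random.default_rng()):
    """Quantize a [0,1] RGB value to 10 bits per channel with ~1 LSB of dithering.

    The noise amplitude matches one quantization step, so smooth gradients keep
    their average value instead of collapsing into visible bands.
    """
    levels = 1023.0                       # 10 bits per channel
    noise = rng.random(3) - 0.5           # +/- half an LSB, in quantized units
    return np.clip(np.round(np.asarray(color, float) * levels + noise), 0, levels) / levels
```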

 

Of course the problem with the dynamic approach is that it doesn't work so well with indoor/outdoor environments by itself. If you're inside and looking out a window you don't want what's outside reflecting the indoor walls, and if you're outside looking in you don't want the indoors reflecting the sky. Both GTAV and The Witcher 3 handle this decently somehow. If I had to guess I'd say all indoor areas have some marked bounding area that uses a different lighting term from the dynamic probe, so the dynamic probe only renders from and to outdoor areas, and the indoor areas use something else. Just a guess though.

 

Something to go on:

 

Far Cry 4: http://www.gdcvault.com/play/1022235/Rendering-the-World-of-Far

