
Frenetic Pony

Member Since 30 Oct 2011

#5288055 What is the relationship between area lights and reflections (SSR/local/global)

Posted by Frenetic Pony on 21 April 2016 - 04:13 PM

 

 

Ah, right I got confused with that "blurry reflection look" and forgot that this video is only about lights.  What a bozo I am!

 

I've never written SSR/cube reflection before -- it seems like you would have to turn off the lights before your SSR/cube reflection pass so you don't "double up" the reflections of the light, right?  Otherwise you would have one reflection from the analytic/punctual light model and another reflection from your SSR/cube reflection pass.  Or is that not that big of a deal?

 

For cubemaps you turn off any direct contribution, correct.

 

Do you really? I always thought you capture it n number of times to simulate light bounces?

 

 

I'm assuming he means the emissive texture/sprite that's supposed to represent the actual "light emitting" part of the light, e.g. your sun disc representing the sun. In which case you'd want to turn it off for cubemaps, or else you'd get a double contribution: one from the cubemap capturing the sun disc, and one from your analytic directional light. This doesn't, or shouldn't, matter for SSR, as SSR is hopefully just going to overwrite your analytic specular with more accurate SSR reflections of your emissive material (assuming it hits).

 

There are plenty of circumstances where you may even want to leave them on for cubemaps too. If the light is distant enough that its analytic solution doesn't contribute, you can certainly capture it in a cubemap (distant city lights or something). 

 

And for actual light contribution otherwise, e.g. drawing point lights and the like, you just leave them on for both cubemaps and SSR.
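To make the pass split concrete, here's a minimal C++ sketch (all names here are hypothetical, not from any engine) of the decision being described: skip a light's emissive proxy during cubemap capture when the same light already has an analytic contribution, but keep distant, non-analytic emitters in the capture.

```cpp
// Minimal sketch (all names hypothetical): decide whether a light's emissive
// proxy (e.g. a sun disc mesh) gets drawn in a given pass, so the cubemap
// capture doesn't add a second copy of a light the analytic model already covers.
#include <cstdio>

enum class Pass { MainView, CubemapCapture };

struct Light {
    bool hasAnalyticContribution; // true if this light is also lit analytically
};

// Draw the emissive proxy only when it won't be counted twice.
bool ShouldDrawEmissiveProxy(const Light& light, Pass pass)
{
    if (pass == Pass::CubemapCapture && light.hasAnalyticContribution)
        return false; // analytic specular already covers it; skip it in the capture
    return true;      // distant, non-analytic emitters (city lights etc.) stay in
}

int main()
{
    Light sun{true};          // sun disc backed by an analytic directional light
    Light distantCity{false}; // too far away for an analytic solution

    std::printf("sun in cubemap: %d\n",
                ShouldDrawEmissiveProxy(sun, Pass::CubemapCapture));          // 0
    std::printf("city in cubemap: %d\n",
                ShouldDrawEmissiveProxy(distantCity, Pass::CubemapCapture));  // 1
}
```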

 

Oh, and an edit: here's the exact same thing as the area lights the OP posted, but with source code and a permissive license, plus diffuse lighting at the same time. Still no shadows, though (raytrace analytic shapes/signed distance fields?): https://eheitzresearch.wordpress.com/415-2/

 




#5287724 What is the relationship between area lights and reflections (SSR/local/global)

Posted by Frenetic Pony on 19 April 2016 - 11:52 PM

Ah, right I got confused with that "blurry reflection look" and forgot that this video is only about lights.  What a bozo I am!

 

I've never written SSR/cube reflection before -- it seems like you would have to turn off the lights before your SSR/cube reflection pass so you don't "double up" the reflections of the light, right?  Otherwise you would have one reflection from the analytic/punctual light model and another reflection from your SSR/cube reflection pass.  Or is that not that big of a deal?

 

For cubemaps you turn off any direct contribution, correct. But it's not necessary for SSR, as you are essentially overwriting any other specular contribution wherever the SSR ray actually hits valid info, so it shouldn't double the contribution. But this assumes the specular from your light source is at least somewhat of a match with whatever emissive material the light is supposed to come from. If they're too mismatched it could certainly look odd.
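In code terms, that "overwriting" is just a lerp by SSR confidence rather than an add. A minimal C++ sketch (names are mine, not from any particular engine):

```cpp
// Minimal sketch (names hypothetical): compositing the specular term so SSR,
// where it hits valid screen data, replaces the analytic/cubemap specular
// instead of adding on top of it.
struct Vec3 { float r, g, b; };

static Vec3 Lerp(Vec3 a, Vec3 b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// ssrConfidence is 0 where the ray left the screen or hit nothing valid,
// and ramps up to 1 where the SSR result is trustworthy.
Vec3 ResolveSpecular(Vec3 analyticAndCubemapSpec, Vec3 ssrSpec, float ssrConfidence)
{
    // Overwrite, don't accumulate: adding both doubles the light's reflection.
    return Lerp(analyticAndCubemapSpec, ssrSpec, ssrConfidence);
}
```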




#5287228 When would you want to use Forward+ or Deferred Rendering?

Posted by Frenetic Pony on 16 April 2016 - 05:10 PM

Why is it called a Z-Pre-Pass if the Z-buffer is typically generated first before anything else in the first place?

 

It's a z-only pass, just depth; you don't render anything else, such as textures. As Hodgman said, there's a fast path for it, but you can't have a render target bound at the same time. It was mostly used on previous consoles, where you were highly bandwidth bound, so getting rid of overdraw could be a win even if you did rasterize twice. But it's mostly outdated for deferred rendering today, at least so far as using the rasterizer goes. DICE recently (at GDC) showed off what is essentially a compute-based z-prepass, using async compute (in their case while the rasterizer is busy doing shadow maps) to draw all the polys you have onscreen. Doing a compute-based pass can be a win on today's consoles, which can choke a bit on the rasterization step, because you can discard triangles that would be onscreen but don't actually hit any centroid sampling and the like, and during the shadow pass you're mostly using the rasterizer, so your compute resources are free anyway.

 

For tiled/clustered forward it can be a bigger win, though, and a z-prepass is still relevant. It can reduce overdraw dramatically since, again, you'll discard triangles which miss centroid sampling and thus don't contribute. While a pre-pass can be useful for deferred, because you'll skip the big geometry step where you're still reading out textures that won't contribute, it's much better for forward, as you're still lighting as you draw each triangle, so you'll be saving there too. I wonder now whether DICE's compute-based z-prepass-like thing could do the same for forward plus; it might be a win.
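As a rough C++ sketch (pseudo-engine, every name here is hypothetical), the frame order being described for clustered forward+ with a prepass looks like this:

```cpp
// Minimal sketch (pseudo-engine, every name hypothetical) of where the z-prepass
// sits in a clustered forward+ frame: depth only first (no color target, no
// textures), then light binning, then full shading tested against the filled
// depth buffer so the heavy per-pixel work only runs on visible fragments.
struct Scene {};
struct DepthBuffer {};
struct ClusterGrid {};

void RenderDepthOnly(const Scene&, DepthBuffer&)         { /* depth-only PSO, no render target bound */ }
void BuildClusteredLightBins(const Scene&, ClusterGrid&) { /* assign lights to clusters/froxels */ }
void RenderForwardShaded(const Scene&, const DepthBuffer&, const ClusterGrid&) { /* depth func EQUAL */ }

void RenderFrame(const Scene& scene)
{
    DepthBuffer depth;
    ClusterGrid clusters;

    RenderDepthOnly(scene, depth);               // pays the vertex cost a second time...
    BuildClusteredLightBins(scene, clusters);    // ...so that the expensive shading below
    RenderForwardShaded(scene, depth, clusters); // runs roughly once per visible pixel
}

int main() { RenderFrame(Scene{}); }
```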




#5284694 When would you want to use Forward+ or Deferred Rendering?

Posted by Frenetic Pony on 01 April 2016 - 09:06 PM

Well, the preferred term is "uber-shader", which is fewer words, but yes, it is the "big ass shader" you're referring to.




#5284559 When would you want to use Forward+ or Deferred Rendering?

Posted by Frenetic Pony on 31 March 2016 - 07:58 PM

There's a lot to go over, but the basics are:

 

Forward+ (or rather clustered forward+, which is really what you'd use):

Positives:

Easy translucency (still not sorted, but at least it's there).

Easy MSAA.

Easy use of multiple material shading models.

Negatives:

Possibly high overdraw or vertex costs: you might end up either doing a z-prepass and doubling your vertex load, or living with the remaining overdraw even after clustered light binning, which can still be costly if you're running heavy per-pixel operations.

 

Deferred (or rather tiled/clustered deferred, again what you'd really want to use):

Positives:

Extremely predictable costs (shade what's onscreen!)

Low overdraw costs (see above)

Easy use of deferred tricks (soft particles, screenspace raytracing, etc. etc.)

The above is really, really useful. Actually applying shadows can get quite cheap (deferred shadow masks, or even just drawing them directly into the buffer for point lights), plus deferred decals, relatively easy area lights, etc.

Negatives:

Translucency is haaard. You need to either do a forward pass (with clustered light binning you can re-use the light bins, so that's useful) or gather all your lighting into a spherical harmonics/Gaussian grid each frame, then do a screenspace pass on translucency using that (UE4 and Destiny do this).

Material types are more limited; you're limited to your g-buffer size, and need to pay the cost of each material type in each onscreen tile you find it in.

Can't do MSAA easily, though since temporal re-projection AA, morphological AA, and specular AA (Toksvig etc.) have gotten so good, and can be cheaper than MSAA anyway, I don't see the obsession with MSAA as really justified anymore.

 

Really, you're going to have to look at each one and figure out what your project needs. Though since you're already doing a straight forward pass, and thus aren't getting any of the savings from all the deferred tricks, it would be simpler and more straightforward to just implement clustered forward plus.
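For a feel of what the clustered forward+ inner loop amounts to, here's a minimal CPU-side C++ sketch (cluster counts and names are my assumptions, not from any particular engine): find the cluster a pixel falls in, then only evaluate the lights binned into it.

```cpp
// Minimal sketch (all names hypothetical) of the per-pixel inner loop of
// clustered forward+: find the cluster a pixel falls in, then shade only the
// lights binned into that cluster.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Light { float x, y, z, radius, r, g, b; };

struct ClusterGrid {
    int dimX = 16, dimY = 9, dimZ = 24;               // cluster counts per axis
    float nearZ = 0.1f, farZ = 1000.0f;               // view-space depth range
    std::vector<std::vector<uint32_t>> lightIndices;  // one light list per cluster
};

// Map a pixel (uv in [0,1]) and its view-space depth to a flat cluster index.
// Depth is sliced logarithmically, the usual choice for clustering.
int ClusterIndex(const ClusterGrid& g, float u, float v, float viewZ)
{
    int cx = std::clamp(int(u * g.dimX), 0, g.dimX - 1);
    int cy = std::clamp(int(v * g.dimY), 0, g.dimY - 1);
    float slice = std::log(viewZ / g.nearZ) / std::log(g.farZ / g.nearZ);
    int cz = std::clamp(int(slice * g.dimZ), 0, g.dimZ - 1);
    return cx + g.dimX * (cy + g.dimY * cz);
}

// Shade one pixel: only the lights binned into its cluster are evaluated,
// which is where the predictable "shade what's onscreen" cost comes from.
void ShadePixel(const ClusterGrid& g, const std::vector<Light>& lights,
                float u, float v, float viewZ)
{
    for (uint32_t idx : g.lightIndices[ClusterIndex(g, u, v, viewZ)]) {
        const Light& light = lights[idx];
        (void)light; // evaluate the BRDF against 'light' here
    }
}
```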




#5283022 Per Triangle Culling (GDC Frostbite)

Posted by Frenetic Pony on 23 March 2016 - 06:04 PM

 

 


Note that on AMD's GCN, the compute shader could be run async while rendering the shadow maps (which barely occupy the compute units), thus making this pass essentially "free".

 

Given that Nvidia doesn't typically allow async compute, does that mean it wouldn't be useful on Nvidia?

 

It's easy to understand why rendering small triangles is expensive, but this culling process won't be free if it can't overlap other parts of the pipeline, right? I suppose I could see an overall positive benefit if the compute shader needs only position information and can ignore other attributes which won't contribute to culling?

 

Whether it's a net gain or a net loss depends on the scene. Async compute just increases the likelihood of being a net gain.

 

 

By a lot, unfortunately, and Nvidia's release this year doesn't seem likely to change async support. Still, it's generally not going to be a loss, so it's not like you'd even have to disable it in an Nvidia-specific package.




#5282260 Object Space Lighting

Posted by Frenetic Pony on 20 March 2016 - 08:41 PM

To reiterate from Twitter: you could cull texture patches out of a virtual texture atlas by tying the patches to something like poly clusters (DICE's paper from GDC) and culling those, and then you'd know which texture patches to shade without a lot of overdraw.

 

I like the idea of separating out which shaders to run, but this just goes back to a virtualized texture atlas, then re-ordering patches into coherent tiles of common materials, and running the shading on each tile. Eventually you'd just ditch the whole "pre" shading part anyway and it starts to look more like this stuff: https://t.co/hXCfJtnwWi




#5277407 Decompress quadtree image

Posted by Frenetic Pony on 22 February 2016 - 01:41 AM

Could just go with spherical Gaussians: http://blog.selfshadow.com/publications/s2015-shading-course/rad/s2015_pbs_rad_slides.pdf

 

Same exact purpose and idea as SH, but with better angular resolution and fewer artifacts.
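For reference, a single spherical Gaussian lobe is just an exponential of the dot product with the lobe axis. A minimal C++ sketch of evaluating one (this is the standard textbook form, nothing specific to the linked slides):

```cpp
// Minimal sketch of evaluating a single spherical Gaussian lobe,
// G(v) = amplitude * exp(sharpness * (dot(axis, v) - 1)),
// the basic building block used in place of SH.
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct SphericalGaussian {
    Vec3  axis;      // unit direction the lobe points in
    float sharpness; // higher = narrower lobe (better angular resolution)
    float amplitude; // peak value along the axis
};

// Evaluate the lobe in unit direction v; no ringing, unlike low-order SH.
float EvaluateSG(const SphericalGaussian& sg, Vec3 v)
{
    return sg.amplitude * std::exp(sg.sharpness * (Dot(sg.axis, v) - 1.0f));
}
```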




#5277109 How fast is hardware-accelerated ray-tracing these days?

Posted by Frenetic Pony on 19 February 2016 - 11:03 PM

I don't see animation as a huge bottleneck at all; the upcoming Dreams managed a rough version in realtime on the PS4 just fine. Besides, offline can rebuild acceleration structures without getting bottlenecked by that. What offline gets bottlenecked by is simply brute force, e.g. indirect tracing and raymarching, pretty much the same things realtime is going to get bottlenecked by. Raymarching requires a ton of samples, but it's used for volumetric stuff today in quarter-res buffers to amortize the cost.

 

The worst part is indirect lighting, where you have incoherent rays. Your wavefronts are going to end up incoherent and useless, and you're going to get killed by latency chasing pointers all around a non-uniform acceleration structure, or else end up with too many samples and too much RAM used by a uniform acceleration structure. Still, there isn't really a different way to do GI well other than tracing. You can hack it if you have largely pre-computed stuff, but if you want realtime it seems to be a no-go. It's been a dream for years to get realtime GI from something that doesn't involve tracing, but every solution (and there have been tons of them) has ended up with far too many tradeoffs after creating far too complex a system to be particularly useful. It's why both Crytek and Epic have just gone "screw it, we'll do tracing as cleverly as we can and brute force what's left", and so far it actually works! (Though it's still quite expensive.)




#5269717 Moment Shadow Maps for Single Scattering, Soft Shadows and Translucent Occlud...

Posted by Frenetic Pony on 06 January 2016 - 06:42 PM

It's important to note that this paper (http://cg.cs.uni-bonn.de/aigaion2root/attachments/MSMBeyondHardShadows.pdf) concerns itself with filtering shadows for use in light scattering, as in: https://graphics.tudelft.nl/Publications-new/2014/KSE14/KSE14.pdf

Things like this, or Nvidia's hacky tessellation-based god rays, are fine, but most people use something like: http://advances.realtimerendering.com/s2015/Frostbite%20PB%20and%20unified%20volumetrics.pptx

which supports multiple lights more easily, supports visible fog volumes, and can even do stuff like volumetric clouds: http://advances.realtimerendering.com/s2015/The%20Real-time%20Volumetric%20Cloudscapes%20of%20Horizon%20-%20Zero%20Dawn%20-%20ARTR.pdf , all potentially faster than the previous approaches.

That being said, you can still use the moment shadow mapping stuff for filtering, and the video/paper you're interested in seems to make the pre-filtered single scattering more efficient.

 

The paper you mention is also used to filter translucent occluders and soft shadows, as in something like: http://www.crytek.com/download/Playing%20with%20Real-Time%20Shadows.pdf  Both are nice to have if you can afford them.

 

To sum up this long-winded reply: for filtering shadows you can still use exponential variance shadow mapping, which still looks better at roughly the same speed as moment shadow mapping. Or, for filtering shadows specifically for atmospheric scattering/shadows on particles/etc., you can just use normal variance shadow mapping and hope users don't notice the light leak, because it's just atmospheric scattering/particles.
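For anyone who hasn't written it, the variance shadow map visibility test mentioned here is a one-liner from Chebyshev's inequality. A minimal C++ sketch (parameter names are mine):

```cpp
// Minimal sketch of the variance shadow map visibility test (Chebyshev's
// inequality). The moments come from a pre-filtered (depth, depth^2) shadow
// map; this is the form whose light leaking the post says may be acceptable
// for scattering/particles.
#include <algorithm>

// m1 = E[z], m2 = E[z^2] fetched from the filtered shadow map,
// receiverDepth = depth of the point being shaded, in the same space.
float VsmVisibility(float m1, float m2, float receiverDepth, float minVariance)
{
    if (receiverDepth <= m1)
        return 1.0f;                       // in front of the mean occluder: lit

    float variance = std::max(m2 - m1 * m1, minVariance);
    float d = receiverDepth - m1;
    return variance / (variance + d * d);  // upper bound on P(occluder >= receiver)
}
```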




#5268776 Injection Step of VPLs into 3D Grid

Posted by Frenetic Pony on 01 January 2016 - 09:25 PM

Demo with source code for you to peruse: http://blog.blackhc.net/2010/07/light-propagation-volumes/




#5268652 Questions on Baked GI Spherical Harmonics

Posted by Frenetic Pony on 31 December 2015 - 05:50 PM

 


Also, what exactly do you mean by "occlusion" like RaD? What occlusion specifically?

Maybe I'm misunderstanding and using the wrong term, but I was referring to the shadowing in the picture.

[attached screenshot]

 

 

 


Generally an offline raytracer is used for baking indirect illumination, rather than just an ambient term. Shoot rays with bounces all around and gather.

 

Darn, I was kind of hoping I could just do it without a raytracer. I'm going to be taking a ray tracing class this year, so hopefully I can come back to this and replace the ambient term.

 

 

 

I've also noticed these weird artifacts on my light probes. Is this "ringing"? Or am I just really messing up the projection step?

[attached screenshot]

 

 

I believe that the occlusion referred to in the paper is occlusion for the cubemap/specular term, which, since it's something you don't have at the moment, isn't something to concern yourself with immediately.

 

It's also possible that it's in part due to ringing artifacts from SH, but "ringing" doesn't generally refer to an actual ring shape as such.




#5268414 Questions on Baked GI Spherical Harmonics

Posted by Frenetic Pony on 29 December 2015 - 06:36 PM

 

Robin Green's paper was super helpful. A lot of it still went over my head, but I've been able to put together a few things.

 

I have a 3D grid of light probes like MJP suggested. I'm rendering a cubemap for every light probe and processing the cubemap to construct 9 SH coefficients from it for each color channel. When rendering the cubemap, I apply some ambient lighting to every object in order to account for objects in shadow. (I wasn't too sure about this one.)

 

I'd like to try attempting to get the nice occlusion that Ready At Dawn has in their Siggraph presentation pg. 18 (http://blog.selfshadow.com/publications/s2015-shading-course/rad/s2015_pbs_rad_slides.pdf). How do I get something like this?

 

I'm also wondering if anything looks really wrong with my current implementation. 

 

Generally an offline raytracer is used for baking indirect illumination, rather than just an ambient term. Shoot rays with bounces all around and gather. Also, what exactly do you mean by "occlusion" like RaD? What occlusion specifically?
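As a concrete sketch of the "shoot rays and gather" step feeding the 9 SH coefficients mentioned above, here's a minimal C++ projection of gathered radiance samples into second-order SH. It assumes uniformly distributed sample directions over the sphere, and all the names are hypothetical:

```cpp
// Minimal sketch (all names hypothetical) of projecting gathered radiance
// samples into 9 second-order SH coefficients per probe (one channel shown).
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };                        // unit direction
struct RadianceSample { Vec3 dir; float radiance; };   // one gathered ray

// Real SH basis, bands 0..2 (9 functions), standard constants.
std::array<float, 9> ShBasis(Vec3 d)
{
    return {
        0.282095f,
        0.488603f * d.y,
        0.488603f * d.z,
        0.488603f * d.x,
        1.092548f * d.x * d.y,
        1.092548f * d.y * d.z,
        0.315392f * (3.0f * d.z * d.z - 1.0f),
        1.092548f * d.x * d.z,
        0.546274f * (d.x * d.x - d.y * d.y)
    };
}

// Monte Carlo projection over uniformly distributed sphere directions:
// c_i ~= (4*pi / N) * sum( L(dir) * Y_i(dir) ).
std::array<float, 9> ProjectToSh(const std::vector<RadianceSample>& samples)
{
    std::array<float, 9> coeffs{};
    for (const RadianceSample& s : samples) {
        std::array<float, 9> y = ShBasis(s.dir);
        for (int i = 0; i < 9; ++i)
            coeffs[i] += s.radiance * y[i];
    }
    const float weight = 4.0f * 3.14159265f / float(samples.size());
    for (float& c : coeffs)
        c *= weight;
    return coeffs;
}
```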




#5267862 Screen-Space Reflection, enough or mix needed ?

Posted by Frenetic Pony on 24 December 2015 - 07:01 PM

One of the best cornerstones of PBR is that diffuse and specular lighting should match as closely as possible (yay energy preservation!). You can go play Far Cry 4 and see where they don't quite get this right: e.g. under the right circumstances their indirect diffuse lighting term will be a lot darker than their specular probe, so everything looks dark and super shiny at the same time, and it looks weird.

 

As others mentioned, just SSRR isn't enough; you'll get relatively few reflections from it. The most common way is to use some sort of cubemap specular probe: either pre-computed, a la UE4 and others, if your game is linear, or dynamically created (take a cubemap centered around the camera) and updated as often as performance allows, which is what GTAV/The Witcher 3/etc. do. To get properly physically based lighting you'll also have to importance sample the cubemap to match your BRDF. Fortunately there's filtered importance sampling (see below) and this nifty paper to do so in relatively little time: http://www.gris.informatik.tu-darmstadt.de/~mgoesele/download/Widmer-2015-AAS.pdf
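As a very rough illustration of the "match your BRDF" part, here's a C++ sketch of the common shortcut of mapping roughness to a mip of a prefiltered probe. The linear roughness-to-mip mapping and all the names are my assumptions, not the linked paper's filtered-importance-sampling method:

```cpp
// Minimal sketch (names and the linear roughness-to-mip mapping are assumptions):
// querying a prefiltered specular cubemap so the fetched mip roughly matches the
// BRDF lobe width, before SSR is layered on top of the result.
#include <algorithm>

struct Vec3 { float x, y, z; };

// Stand-in for a hardware cubemap fetch at an explicit mip level; a real
// renderer would sample the prefiltered probe texture here.
Vec3 SampleProbeLod(const Vec3& reflectionDir, float mip)
{
    (void)reflectionDir; (void)mip;
    return {0.0f, 0.0f, 0.0f}; // placeholder value
}

// Rougher surfaces read blurrier (higher) mips of the prefiltered probe.
Vec3 SpecularFromProbe(const Vec3& reflectionDir, float roughness, int mipCount)
{
    float mip = std::clamp(roughness, 0.0f, 1.0f) * float(mipCount - 1);
    return SampleProbeLod(reflectionDir, mip);
}
```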

 

Edit - the probe cost shouldn't be too bad. Stick with a low resolution (as low as 128x128 per face for a six-sided cubemap, or 256x256 for a dual paraboloid map). Only draw large static objects at low LOD (big trees, buildings, terrain, skybox), and stick with a dithered 10-10-10-2 HDR render target for output; players won't notice banding that much. As a bonus, if you do a two-layer cubemap like the above PDF has, drawing large static objects into the first layer and distant terrain/skybox into the second, you can combine that with SSRR and get a decent water reflection out of it at the same time without having to do a separate planar reflection.

 

Of course, the problem with the dynamic approach is that it doesn't work so well with indoor/outdoor environments by itself. If you're inside and looking out a window you don't want what's outside reflecting off the indoor walls, and if you're outside looking in you don't want the indoors reflecting the sky. Both GTAV and The Witcher 3 handle this decently somehow. If I had to guess, I'd say all indoor areas have some marked bounding area that uses a different lighting term from the dynamic probe, so the dynamic probe only renders from and to outdoor areas, and the indoor areas use something else. Just a guess though.

 

Something to go on:

 

Far Cry 4: http://www.gdcvault.com/play/1022235/Rendering-the-World-of-Far




#5267099 What will change with HDR monitor ?

Posted by Frenetic Pony on 19 December 2015 - 09:32 PM

10bit linear is worse than 8bit gamma, so sRGB will stay.

These aren't "HDR monitors" that's marketing buzzwords...
HDR photography usually combines three or more 10-16 bit images to create a 32bit floating point image, which can ve tonemapped back to 8bit sRGB.
HDR games usually use a 16bit floating point rendering, and tonemap it to 8/10bit sRGB.

10bits is not HDR.
These monitors have been abound for a while using the name "deep color", not "HDR".

Software support for them has been around for 10 years already. You just make a 10_10_10_2 backbuffer instead of 8_8_8_8! Lots of games already support this.

 

Aye, though in this case "HDR" as a buzzword has now moved towards meaning the DCI colorspace with 10-bit input requirements. Or rather it means that and possibly more, and there's an argument among display makers as to what it should mean (what's the colorspace? what contrast ratio should be required? what's the minimum brightness in nits? etc.). Regardless, there's a new digital display standard to go with it, getting rid of the old analog stuff. The article in question is really vague as to what AMD even plans on doing to support "HDR" beyond incidentally moving to DisplayPort 1.3. Honestly, for GPUs the only thing I can think of is automatic conversion between the new gamma curves and linear, because higher-bit backbuffers for output are, as you pointed out, a software matter, and 10-bit has been supported for a while now.
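For completeness, asking for the 10-bit backbuffer the quoted post mentions really is just ordinary swap-chain setup. A minimal D3D11/C++ sketch (Windows-only, and window/device creation is omitted; the function name is mine):

```cpp
// Minimal sketch: requesting a 10-10-10-2 backbuffer in D3D11 is plain
// swap-chain description setup, nothing new. hwnd is assumed to already exist.
#include <windows.h>
#include <d3d11.h>
#include <dxgi.h>

DXGI_SWAP_CHAIN_DESC MakeDeepColorSwapChainDesc(HWND hwnd, UINT width, UINT height)
{
    DXGI_SWAP_CHAIN_DESC desc = {};
    desc.BufferDesc.Width  = width;
    desc.BufferDesc.Height = height;
    desc.BufferDesc.Format = DXGI_FORMAT_R10G10B10A2_UNORM; // 10 bits per color channel
    desc.SampleDesc.Count  = 1;                             // no MSAA on the backbuffer
    desc.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount       = 2;
    desc.OutputWindow      = hwnd;
    desc.Windowed          = TRUE;
    desc.SwapEffect        = DXGI_SWAP_EFFECT_DISCARD;
    return desc;
}
```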





