
Frenetic Pony


#5292794 Hybrid Frustum Traced Shadows

Posted by Frenetic Pony on 21 May 2016 - 05:16 PM

So where does this https://developer.nvidia.com/sites/default/files/akamai/gameworks/Frustum_Trace.jpg fit into what you just described? Also, how does the irregular z-buffer fit into this?

 

That's, afaik, the ray-vs-triangle intersection test. You construct your frustum as in the jpeg, then test whether the onscreen pixel is inside that frustum. I don't remember what the irregular z-buffer was for; I only glanced through the paper and concluded Sebastian's "virtual shadow mapping" (about 3/4 of the way down) would deliver similar image quality while running a lot faster.
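
For the curious, the inside-the-frustum test can be sketched as three plane checks. This is my own illustrative version, not NVIDIA's actual code; it assumes consistently wound triangles and ignores the near/far caps:

```cpp
struct V3 { float x, y, z; };

static V3 Sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 Cross(V3 a, V3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float Dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Build one plane per triangle edge, each passing through the light.
// A receiver point inside all three side planes is inside the shadow
// frustum that triangle casts. Consistent winding is assumed; flip the
// comparison if your triangles wind the other way.
bool InsideShadowFrustum(V3 light, V3 t0, V3 t1, V3 t2, V3 point) {
    const V3 tri[3] = {t0, t1, t2};
    for (int i = 0; i < 3; ++i) {
        V3 n = Cross(Sub(tri[i], light), Sub(tri[(i + 1) % 3], light));
        if (Dot(n, Sub(point, light)) < 0.0f)
            return false; // outside this side plane
    }
    return true;
}
```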

 

If you're really going for some "make high end PC stuff useful" angle, as Joe-J suggests, I've found that just making everything in the engine scalable is a good idea anyway. That way you can turn things (SSR/SSAO samples, shadow map resolution, g-buffer quality, LOD distance/quality, HDR buffer quality, etc.) down and/or up as needed to hit any platform and target framerate.
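
As a sketch of what I mean, something like this, with every knob and number invented for illustration rather than taken from a real engine:

```cpp
// Hypothetical quality settings block; all fields and defaults are
// made-up example knobs.
struct QualitySettings {
    int   ssrSamples      = 16;    // screen-space reflection steps per pixel
    int   ssaoSamples     = 12;    // ambient occlusion taps
    int   shadowMapRes    = 2048;  // per-cascade shadow map resolution
    float lodDistanceBias = 1.0f;  // > 1 pushes LOD switches further out
    bool  halfResHdr      = false; // render the HDR buffer at half resolution
};

// Pick a preset by GPU frame budget; in practice you'd expose each knob
// in a config file and scale them independently per platform.
QualitySettings SettingsForBudget(float gpuBudgetMs) {
    QualitySettings q;
    if (gpuBudgetMs < 16.0f) {     // a tight, low-end style budget
        q.ssrSamples      = 4;
        q.ssaoSamples     = 6;
        q.shadowMapRes    = 1024;
        q.lodDistanceBias = 0.75f;
        q.halfResHdr      = true;
    }
    return q;
}
```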




#5289513 Irradiance Volume vs. 4-Basis PRT in Farcry

Posted by Frenetic Pony on 30 April 2016 - 09:55 PM

If you're disappointed with 2-band spherical harmonics (perfectly understandable) you can take a look at spherical Gaussians, or just play around with SH/SG stuff here: https://github.com/kayru/Probulator




#5288055 What is the relationship between area lights and reflections (SSR/local/global)

Posted by Frenetic Pony on 21 April 2016 - 04:13 PM


Ah, right, I got confused by that "blurry reflection look" and forgot that this video is only about lights.  What a bozo I am!

 

I've never written SSR/cube reflection before -- it seems like you would have to turn off the lights before your SSR/cube reflection pass so you don't "double up" the reflections of the light, right?  Otherwise you would have one reflection from the analytic/punctual light model and another from your SSR/cube reflection pass.  Or is that not that big of a deal?

 

For cubemaps you turn off any direct contribution, correct.

 

Do you really? I always thought you capture it n times to simulate light bounces?


I'm assuming he means the emissive texture/sprite that's supposed to represent the actual "light emitting" part of the light, e.g. your sun disc representing the sun. In which case you'd want to turn it off for cubemaps, or else you'd get a double contribution: one from the cubemap capturing the sun disc, and one from your analytic directional light. This doesn't, or shouldn't, matter for SSR, as SSR is hopefully just going to overwrite your analytic specular with more accurate SSR reflections of your emissive material (assuming it hits).

 

There are plenty of circumstances where you may even want to leave them on for cubemaps too. If the light is distant enough that its analytic solution doesn't contribute, you can certainly capture it in a cubemap (distant city lights or something). 

 

And for actual light contribution otherwise, e.g. drawing point lights and so on, you just leave them on for both cubemaps and SSR.
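
In sketch form the rule is just a flag on the capture pass; everything here is an invented name, not a real engine API:

```cpp
#include <vector>

struct LightProxy {
    bool hasAnalyticContribution; // e.g. the sun disc backing a directional light
};

void DrawProxy(const LightProxy& p); // hypothetical draw call

void DrawEmissiveProxies(bool capturingCubemap,
                         const std::vector<LightProxy>& proxies) {
    for (const LightProxy& p : proxies) {
        // Skip proxies whose light is already applied analytically; the
        // cubemap would otherwise add a second copy of that light.
        if (capturingCubemap && p.hasAnalyticContribution)
            continue;
        // Distant stuff with no analytic term (city lights etc.) is fine
        // to bake in, and actual point lights stay on everywhere.
        DrawProxy(p);
    }
}
```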

 

OH! And edit, duh. Here's the exact same thing as the area lights the OP posted, but with source code, a permissive license, etc. It also does diffuse lighting at the same time, though still no shadows (raytrace analytic shapes/signed distance fields?): https://eheitzresearch.wordpress.com/415-2/

 




#5287724 What is the relationship between area lights and reflections (SSR/local/global)

Posted by Frenetic Pony on 19 April 2016 - 11:52 PM

Ah, right, I got confused by that "blurry reflection look" and forgot that this video is only about lights.  What a bozo I am!

 

I've never written SSR/cube reflection before -- it seems like you would have to turn off the lights before your SSR/cube reflection pass so you don't "double up" the reflections of the light, right?  Otherwise you would have one reflection from the analytic/punctual light model and another from your SSR/cube reflection pass.  Or is that not that big of a deal?

 

For cubemaps you turn off any direct contribution, correct. But it's not necessary for SSR, as the SSR contribution essentially overwrites any other specular contribution when it hits valid info, so it shouldn't double the contribution. But this assumes the specular from your light source is at least somewhat of a match for whatever emissive material the light source is supposed to come from. If they're too mismatched it could certainly look odd.
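
At resolve time that amounts to an overwrite/lerp rather than an add. A tiny sketch with invented names, using an SSR confidence value to stand in for "hit valid info":

```cpp
struct float3 { float x, y, z; };

static float3 Lerp(float3 a, float3 b, float t) {
    return {a.x + (b.x - a.x) * t,
            a.y + (b.y - a.y) * t,
            a.z + (b.z - a.z) * t};
}

// ssrConfidence is 0 where the ray missed (keep the analytic/cubemap
// specular) and 1 where it hit valid screen-space data; fading in
// between hides the seam. Replacing rather than adding is what avoids
// double-counting the light.
float3 ResolveSpecular(float3 analyticSpecular, float3 ssrColor,
                       float ssrConfidence) {
    return Lerp(analyticSpecular, ssrColor, ssrConfidence);
}
```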




#5287228 When would you want to use Forward+ or Deferred Rendering?

Posted by Frenetic Pony on 16 April 2016 - 05:10 PM

Why is it called a Z-Pre-Pass if the Z-buffer is typically generated first before anything else in the first place?

 

It's a z-only pass: just depth, you don't render anything else, no textures and so on. As Hodgman said, there's a fast path for it, but you can't have a render target bound at the same time. It was mostly used on previous consoles, where you were highly bandwidth bound, so getting rid of overdraw could be a win even if you rasterized everything twice. But it's mostly outdated for deferred rendering today, at least as far as using the rasterizer goes. DICE recently (GDC) showed off what is essentially a compute-based z-prepass, using async compute (in their case while the rasterizer is busy with shadow maps) to process all the polys you have onscreen. A compute-based pass can be a win on today's consoles, which can choke a bit on the rasterization step, because you can discard triangles that would be onscreen but don't actually hit any sample centroids and the like, and during the shadow pass you're mostly using the rasterizer, so your compute resources are free anyway.
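
In rough pseudo-engine terms, a z-prepass is just this; all four calls are hypothetical stand-ins for whatever rendering API you're on:

```cpp
#include <vector>

struct Mesh {};
struct DepthBuffer {};
struct Scene { DepthBuffer* depthBuffer; std::vector<Mesh> opaqueMeshes; };

// Hypothetical stand-ins for the real rendering API.
void BindRenderTargets(void* color, DepthBuffer* depth);
void SetPixelShader(void* shader);
void DrawPositionsOnly(const Mesh& m);

void ZPrePass(const Scene& scene) {
    // No color target and no pixel shader bound: depth-only writes take
    // the fast path mentioned above.
    BindRenderTargets(nullptr, scene.depthBuffer);
    SetPixelShader(nullptr);
    // Position-only vertex streams; no textures or materials touched.
    for (const Mesh& m : scene.opaqueMeshes)
        DrawPositionsOnly(m);
}
```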

 

For tiled/clustered forward it can be a bigger win, though, and a z-prepass is still relevant there. It can reduce overdraw dramatically since, again, you'll discard triangles that miss centroid sampling and thus don't contribute. A pre-pass can be useful for deferred because you skip the big geometry step where you're still reading out textures that won't contribute, but it's much better for forward, since you're lighting as you draw each triangle, so you save there too. I wonder now whether DICE's compute-based z-prepass-like thing could do the same for forward+; it might be a win.




#5284694 When would you want to use Forward+ or Deferred Rendering?

Posted by Frenetic Pony on 01 April 2016 - 09:06 PM

Well, the preferred term is "uber-shader", which is fewer words, but yes, it is the "big ass shader" you're referring to.




#5284559 When would you want to use Forward+ or Deferred Rendering?

Posted by Frenetic Pony on 31 March 2016 - 07:58 PM

There's a lot to go over, but the basics are:

 

Forward+ (or rather clustered forward+, which is really what you'd use):

Positives:

Easy translucency (still not sorted, but at least it's there).

Easy MSAA.

Easy use of multiple material shading models.

Negatives:

Possibly high overdraw or vertex costs: you might end up either doing a z-prepass and doubling your vertex load, or living with the overdraw that remains even after clustered light binning, which can still be costly if you're running heavy per-pixel operations.

 

Deferred (or rather tiled/clustered deferred, again what you'd really want to use):

Positives:

Extremely predictable costs (shade what's onscreen!)

Low overdraw costs (see above)

Easy use of deferred tricks (soft particles, screenspace raytracing, etc. etc.)

The above is really, really useful. Actually applying shadows can get quite cheap (deferred shadow masks, or even just drawing them directly into the buffer for point lights), plus deferred decals, relatively easy area lights, etc.

Negatives:

Translucency is haaard. You need to either do a forward pass (with clustered light binning you can re-use the light bins, so that's useful) or gather all your lighting into a spherical harmonic/Gaussian grid each frame, then do a screenspace pass on translucency using that (UE4 and Destiny do this).

Material types are more limited: you're limited by your g-buffer size, and you pay the cost of each material type in every onscreen tile it appears in.

Can't do MSAA easily, though since temporal re-projection AA, morphological AA, and specular AA (Toksvig/etc.) have gotten so good, and can be cheaper than MSAA anyway, I don't see the obsession with having MSAA as really justified anymore.

 

Really, you're going to have to look at each one and figure out what your project needs. Though since you're already doing a straight forward pass, and thus aren't getting any of the savings from all the deferred tricks, it would be simpler and more straightforward to just implement clustered forward+. A sketch of the clustered light lookup both approaches share is below.
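
Since tiled forward+ and tiled/clustered deferred lean on the same light binning, here's a rough illustration of the lookup side of it. All names and dimensions are invented, and the real version lives in a shader rather than on the CPU:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Cluster { uint32_t firstLight; uint32_t lightCount; };

struct ClusterGrid {
    int dimX, dimY, dimZ;                 // e.g. 16 x 9 x 24 froxels
    std::vector<Cluster>  clusters;       // dimX * dimY * dimZ entries
    std::vector<uint32_t> lightIndices;   // flattened per-cluster light lists
};

// Map a pixel plus its view-space depth to a cluster; the shading loop
// then walks only lightIndices[firstLight .. firstLight + lightCount).
const Cluster& LookupCluster(const ClusterGrid& g, int px, int py, float viewZ,
                             int screenW, int screenH, float zNear, float zFar) {
    int cx = std::min(g.dimX - 1, px * g.dimX / screenW);
    int cy = std::min(g.dimY - 1, py * g.dimY / screenH);
    // Logarithmic depth slicing keeps near clusters small, where most
    // lighting detail lives.
    float slice = std::log(viewZ / zNear) / std::log(zFar / zNear);
    int cz = std::max(0, std::min(g.dimZ - 1, int(slice * g.dimZ)));
    return g.clusters[(cz * g.dimY + cy) * g.dimX + cx];
}
```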




#5283022 Per Triangle Culling (GDC Frostbite)

Posted by Frenetic Pony on 23 March 2016 - 06:04 PM


Note that on AMD's GCN, the compute shader could be run async while rendering the shadow maps (which barely occupy the compute units), thus making this pass essentially "free".

 

Given that Nvidia doesn't typically allow async compute, does that mean it wouldn't be useful on Nvidia?

 

It's easy to understand why rendering small triangles is expensive, but this culling process won't be free if it can't overlap other parts of the pipeline, right? I suppose I could see an overall positive benefit if the compute shader needs only position information and can ignore other attributes that won't contribute to culling?

 

Whether it's a net gain or a net loss depends on the scene. Async compute just increases the likelihood of being a net gain.


By a lot, unfortunately, and Nvidia's release this year doesn't seem likely to change async support. Still, it's generally not going to be a loss, so it's not like you'd even have to disable it in an Nvidia-specific path.
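
For the curious, the heart of the small-triangle test from that talk can be sketched like this. This is an illustrative CPU version; the real thing runs in a compute shader over the vertex/index buffers, and pixel centers are assumed to sit at integer + 0.5:

```cpp
#include <algorithm>
#include <cmath>

struct float2 { float x, y; };

// A triangle whose screen-space bounding box straddles no pixel center
// can never produce a fragment, so it can be discarded before the
// rasterizer ever sees it.
bool MissesAllPixelCenters(float2 v0, float2 v1, float2 v2) {
    float minX = std::min({v0.x, v1.x, v2.x});
    float maxX = std::max({v0.x, v1.x, v2.x});
    float minY = std::min({v0.y, v1.y, v2.y});
    float maxY = std::max({v0.y, v1.y, v2.y});
    // If min and max round to the same value on either axis, the box
    // sits entirely between two rows/columns of sample points.
    return std::round(minX) == std::round(maxX) ||
           std::round(minY) == std::round(maxY);
}
```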




#5282260 Object Space Lighting

Posted by Frenetic Pony on 20 March 2016 - 08:41 PM

To re-iterate from twitter: you could tie texture patches from a virtual texture atlas to something like poly clusters (DICE's paper from GDC), cull the patches along with the clusters, and then you'd know which texture patches to shade, without a lot of overdraw.

 

I like the idea of separating out which shaders to run, but this just goes back to a virtualized texture atlas, then re-ordering patches into coherent tiles of common materials and running the shading on each tile. Eventually you'd just ditch the whole "pre"-shading part anyway, and it starts to look more like this stuff: https://t.co/hXCfJtnwWi




#5277407 Decompress quadtree image

Posted by Frenetic Pony on 22 February 2016 - 01:41 AM

Could just go with spherical Gaussians: http://blog.selfshadow.com/publications/s2015-shading-course/rad/s2015_pbs_rad_slides.pdf

 

Same exact purpose and idea as SH, but with better angular resolution and fewer artifacts.
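
For a feel of what that looks like in practice, a minimal sketch of SG evaluation; lobe count and storage layout are up to you:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One spherical Gaussian lobe:
// amplitude * exp(sharpness * (dot(axis, dir) - 1)).
struct SGLobe {
    Vec3  axis;      // unit direction the lobe points in
    float sharpness; // higher = narrower lobe
    Vec3  amplitude; // RGB peak value
};

// A probe stores a handful of lobes; evaluation is just a sum over them.
// The exponential is always >= 0, which is why SG avoids SH-style ringing.
Vec3 EvaluateSG(const std::vector<SGLobe>& lobes, Vec3 dir) {
    Vec3 result = {0, 0, 0};
    for (const SGLobe& l : lobes) {
        float w = std::exp(l.sharpness * (Dot(l.axis, dir) - 1.0f));
        result.x += l.amplitude.x * w;
        result.y += l.amplitude.y * w;
        result.z += l.amplitude.z * w;
    }
    return result;
}
```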




#5277109 How fast is hardware-accelerated ray-tracing these days?

Posted by Frenetic Pony on 19 February 2016 - 11:03 PM

I don't see animation as a huge bottleneck at all; the upcoming Dreams managed a rough version in realtime on the PS4 just fine. Besides, offline can rebuild acceleration structures without being bottlenecked by that. What offline gets bottlenecked by is simply brute force, e.g. indirect tracing and raymarching, which is pretty much the same thing realtime stuff is going to get bottlenecked by. Raymarching requires a ton of samples, but it's used for volumetric stuff today in, like, quarter-res buffers to amortize the cost.

 

The worst part is indirect lighting, where you have incoherent rays. Your wavefronts are going to end up incoherent and useless, and you're going to get killed by latency chasing pointers all around a non-uniform acceleration structure; either that, or you take too many samples and burn too much RAM with a uniform acceleration structure. Still, there isn't really a way to do GI well other than tracing. You can hack it if you have largely pre-computed stuff, but if you want realtime it seems to be a no-go. It's been a dream for years to get realtime GI out of something that doesn't involve tracing, but every solution (and there have been tons of them) has ended up with far too many tradeoffs after creating far too complex a system to be particularly useful. It's why both Crytek and Epic have just gone "screw it, we'll do tracing as cleverly as we can and brute force what's left", and so far it actually works! (Though it's still quite expensive.)




#5269717 Moment Shadow Maps for Single Scattering, Soft Shadows and Translucent Occlud...

Posted by Frenetic Pony on 06 January 2016 - 06:42 PM

It's important to note that this paper (http://cg.cs.uni-bonn.de/aigaion2root/attachments/MSMBeyondHardShadows.pdf) concerns itself with filtering shadows for use in light scattering, aka: https://graphics.tudelft.nl/Publications-new/2014/KSE14/KSE14.pdf

Things like this, or Nvidia's hacky tessellation-based god rays, are fine, but most people use something like: http://advances.realtimerendering.com/s2015/Frostbite%20PB%20and%20unified%20volumetrics.pptx

which supports multiple lights more easily, supports visible fog volumes, and can even do stuff like volumetric clouds: http://advances.realtimerendering.com/s2015/The%20Real-time%20Volumetric%20Cloudscapes%20of%20Horizon%20-%20Zero%20Dawn%20-%20ARTR.pdf , all potentially faster than the previous.

That being said, you can still use the moment shadow mapping stuff for filtering, and the video/paper you're interested in seems to make pre-filtered single scattering more efficient.

 

The paper you mention is also used to filter translucent occluders and soft shadows, aka something like: http://www.crytek.com/download/Playing%20with%20Real-Time%20Shadows.pdf  Both are nice to have if you can afford them.

 

But to sum this long-winded reply up: for filtering shadows you can still use exponential variance shadow mapping, which still looks better at roughly the same speed as moment shadow mapping. Or, for filtering shadows specifically for atmospheric scattering, shadows on particles, etc., you can just use normal variance shadow mapping and hope users don't notice the light leaking, since it's just atmospheric scattering/particles.
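
For reference, the core of the variance shadow map filtering mentioned here is just Chebyshev's inequality applied to a filtered mean and mean-square depth. A minimal sketch (EVSM warps the depths exponentially first, then applies this same bound):

```cpp
#include <algorithm>

// The shadow map stores filtered depth and depth^2; Chebyshev's
// inequality bounds the fraction of the filtered region lying in front
// of the receiver.
float VsmVisibility(float mean, float meanSq, float receiverDepth) {
    if (receiverDepth <= mean)
        return 1.0f;                                      // receiver in front: fully lit
    float variance = std::max(meanSq - mean * mean, 1e-5f); // clamped for stability
    float d = receiverDepth - mean;
    return variance / (variance + d * d);                 // one-tailed Chebyshev bound
}
```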




#5268776 Injection Step of VPLs into 3D Grid

Posted by Frenetic Pony on 01 January 2016 - 09:25 PM

Here's a demo with source code for you to peruse: http://blog.blackhc.net/2010/07/light-propagation-volumes/




#5268652 Questions on Baked GI Spherical Harmonics

Posted by Frenetic Pony on 31 December 2015 - 05:50 PM

 


Also, what exactly do you mean by "occlusion" like RaD? What occlusion specifically?

Maybe I'm misunderstanding and using the wrong term, but I was referring to the shadowing in the picture.

[image: screenshot of the shadowing in question]



Generally an offline raytracer is used for baking indirect illumination, rather than just an ambient term. Shoot rays with bounces all around and gather.

 

Darn, I was kind of hoping I could just do it without a raytracer. I'm going to be taking a ray tracing class this year, so hopefully I can come back to this and replace the ambient term.

 


I've also noticed these weird artifacts on my light probes. Is this "ringing"? Or am I just really messing up the projection step?

[image: light probes showing the artifacts in question]


I believe the occlusion referred to in the paper is occlusion for the cubemap/specular term. Since that's something you don't have at the moment, it isn't something to concern yourself with immediately.

 

It's also possible that it's partly due to ringing artifacts from SH, though ringing doesn't generally show up as an actual ring shape as such.
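
If it is SH ringing, one common bandage (not necessarily what any particular paper does) is to window the coefficients so the higher bands stop overshooting into negative values. A rough sketch for 9-coefficient SH; the Hann-style window and its width are just one reasonable choice:

```cpp
#include <cmath>

// coeffs[0] is band 0, coeffs[1..3] band 1, coeffs[4..8] band 2
// (index l * (l + 1) + m). Attenuating higher bands trades a bit of
// angular sharpness for far less ringing.
void WindowSH9(float coeffs[9], float windowWidth /* e.g. 3.0f */) {
    const float pi = 3.14159265f;
    for (int l = 0; l <= 2; ++l) {
        float w = 0.5f * (1.0f + std::cos(pi * l / windowWidth));
        for (int m = -l; m <= l; ++m)
            coeffs[l * (l + 1) + m] *= w;
    }
}
```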




#5268414 Questions on Baked GI Spherical Harmonics

Posted by Frenetic Pony on 29 December 2015 - 06:36 PM

 

Robin Green's paper was super helpful. A lot of it still went over my head, but I've been able to put together a few things.

 

I have a 3D grid of light probes like MJP suggested. I'm rendering a cubemap for every light probe and processing the cubemap to construct 9 SH coefficients per color channel.  When rendering the cubemap, I apply some ambient lighting to every object to account for objects in shadow. (I wasn't too sure about this one.)

 

I'd like to try to get the nice occlusion that Ready At Dawn has in their SIGGRAPH presentation, pg. 18 (http://blog.selfshadow.com/publications/s2015-shading-course/rad/s2015_pbs_rad_slides.pdf). How do I get something like this?

 

I'm also wondering if anything looks really wrong with my current implementation. 

 

Generally an offline raytracer is used for baking indirect illumination, rather than just an ambient term: shoot rays with bounces all around and gather. Also, what exactly do you mean by "occlusion" like RaD? What occlusion specifically?
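
As a rough sketch of the gather (my own illustrative version; TraceRay is a stand-in for whatever ray tracer you end up writing in that class):

```cpp
#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };
static Vec3 Add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 Mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Stand-in for the actual ray tracer: returns radiance coming back
// along the ray, recursing for a few bounces of its own.
Vec3 TraceRay(Vec3 origin, Vec3 dir, int bouncesLeft);

// Map two uniform random numbers to a cosine-weighted direction
// around the normal n.
Vec3 CosineSampleHemisphere(Vec3 n, float u1, float u2) {
    float r = std::sqrt(u1), phi = 6.2831853f * u2;
    float x = r * std::cos(phi), y = r * std::sin(phi);
    float z = std::sqrt(std::max(0.0f, 1.0f - u1));
    // Build a tangent frame around n (a more robust construction exists;
    // this is a sketch).
    Vec3 h = std::fabs(n.x) > 0.5f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 b = {n.y * h.z - n.z * h.y, n.z * h.x - n.x * h.z, n.x * h.y - n.y * h.x};
    float bl = std::sqrt(b.x * b.x + b.y * b.y + b.z * b.z);
    b = Mul(b, 1.0f / bl);
    Vec3 t = {b.y * n.z - b.z * n.y, b.z * n.x - b.x * n.z, b.x * n.y - b.y * n.x};
    return Add(Add(Mul(t, x), Mul(b, y)), Mul(n, z));
}

// Cosine-weighted sampling folds the n.l term into the pdf, so the
// gathered estimate is just the mean of the returned radiance
// (times pi if you keep radiometric units strict).
Vec3 GatherIrradiance(Vec3 p, Vec3 n, int numSamples, std::mt19937& rng) {
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    Vec3 sum = {0, 0, 0};
    for (int i = 0; i < numSamples; ++i)
        sum = Add(sum, TraceRay(p, CosineSampleHemisphere(n, u01(rng), u01(rng)),
                                /*bouncesLeft=*/3));
    return Mul(sum, 1.0f / numSamples);
}
```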





