
# Frenetic Pony

Member Since 30 Oct 2011

### #5305683 Theory of PBR lighting model [and maths too]

Posted by on 13 August 2016 - 06:00 PM

I am familiar with these ideas, though they are high school physics topics; let's test my nostalgia.

A. Applicable only for mirror-like surfaces. For a rough surface, the angle of incidence and the angle of reflection will not be the same, will they?

It is, actually; that's part of the laws of reflection. But for visual reasons, we approximate with microfacets, which is why a distribution function is introduced: it represents the probability of a ray meeting a microfacet with a given orientation along the surface. So physically, the angle of reflection is always equal to the angle of incidence across the representative (microfacet) normal, but we bend this rule at the macro scale to get a result that's truer to life with limited data.
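To make the distribution idea concrete, here's a minimal sketch (in Python, purely for illustration) of the GGX/Trowbridge-Reitz normal distribution function, one common choice for that microfacet distribution. The `roughness * roughness` remap is a popular convention (Disney's), an assumption on my part rather than anything from the post above:

```python
import math

def ggx_ndf(n_dot_h: float, roughness: float) -> float:
    """GGX/Trowbridge-Reitz normal distribution function.

    Returns the relative concentration of microfacet normals aligned
    with the half-vector H. Low roughness spikes the density at
    n_dot_h = 1, i.e. the mirror-reflection direction dominates.
    """
    alpha = roughness * roughness       # Disney-style remap (assumption)
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)
```

At `roughness = 1` the distribution flattens to the constant `1/pi`; as roughness approaches 0 it collapses toward the mirror case, which is exactly the "bent" law of reflection described above.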

But yeah, that book is aimed at giving you the concepts of physically based rendering for things like Pixar's RenderMan. It won't help you implement anything on the GPU.

To do that, you'd want to take a pre-existing BRDF. I'm trying to use Pixar's BRDF; however, I haven't optimized it well enough (not at all), so my frames are shit.

Well, what both you and OP are probably looking for, at least for GPU stuff, is the Disney BRDF: https://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf

Most everyone seems to use a variation of it now, plugging in things like the GGX/Smith geometry term and other optimizations. In fact, the above offers a lot of things most engines don't use, usually concentrating on just metalness/roughness; I've not seen a lot of the sheen/etc. stuff done. Besides, you'd want to optimize that out to save registers/G-buffer space/etc. anyway.
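For reference, the "variation most everyone uses" typically combines a GGX distribution, a Smith-style geometry term, and a Schlick Fresnel approximation into a Cook-Torrance-style specular lobe. Here's an illustrative Python sketch of that common combination (the `k = alpha/2` visibility remap and the 0.04 dielectric `f0` default are widespread conventions, not specifics from the Disney notes):

```python
import math

def d_ggx(noh, alpha):
    # GGX normal distribution term
    a2 = alpha * alpha
    d = noh * noh * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

def g_smith_ggx(nov, nol, alpha):
    # Smith geometry term, separable Schlick-GGX form (k = alpha/2)
    k = alpha / 2.0
    g1 = lambda ndx: ndx / (ndx * (1.0 - k) + k)
    return g1(nov) * g1(nol)

def f_schlick(voh, f0):
    # Schlick's Fresnel approximation; rises to 1.0 at grazing angles
    return f0 + (1.0 - f0) * (1.0 - voh) ** 5

def specular_brdf(nol, nov, noh, voh, roughness, f0=0.04):
    """Cook-Torrance-style specular lobe: D * G * F / (4 (N.V)(N.L))."""
    alpha = roughness * roughness
    return (d_ggx(noh, alpha)
            * g_smith_ggx(nov, nol, alpha)
            * f_schlick(voh, f0)) / max(4.0 * nov * nol, 1e-6)
```

All the dot products are assumed clamped to [0, 1] by the caller, as a shader would do.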

This, specifically the UE4/Black Ops 2 material, offers a more GPU-centric view of PBR: http://blog.selfshadow.com/publications/s2013-shading-course/

### #5298178 Link for Graphics Research Papers

Posted by on 26 June 2016 - 10:10 PM

http://kesen.realtimerendering.com/

http://www.gdcvault.com/

### #5298036 Light Propagation Volumes: flickering when injecting RSM VPLs

Posted by on 25 June 2016 - 04:43 PM

Yay, the VS/GS/PS approach fixed it! And it's not significantly slower or anything.

Direct lighting (and small ambient term): http://scrnsht.me/u/yPb/raw

Combined: http://scrnsht.me/u/wPb/raw

Indirect lighting only (and small ambient term): http://scrnsht.me/u/xPb/raw

Still quite a bit of self-illumination and incorrect bleeding, but for now I am quite happy and satisfied.

Well, that would come from LPV itself, and it's why others have mostly abandoned it. But glad to see you've got it fixed!

### #5297272 Planetary cloud rendering

Posted by on 19 June 2016 - 09:10 PM

Way back in the MS Flight Sim days they had "cloud impostors", which is exactly what it sounds like: just like a normal impostor, a 2D pre-computed plane of what the volumetric cloud looked like from afar.

That being said, now there are actual volumetrics, and all you ever wanted to know is here: http://advances.realtimerendering.com/s2015/index.html

Since with the above (no fewer than three excellent papers) you're marching through a volume, you're concerned with volume-marching samples rather than LOD. The trick would be some way to skip as much space as possible and only march finely once you actually reach the "cloud" layer you want; since the cost is per pixel, it would otherwise be a relative constant.
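The "skip to the cloud layer, then march finely" idea can be sketched in one dimension. This toy Python model (everything here is illustrative: the altitude-only density function, the step sizes, the layer bounds) jumps analytically to the bottom of the cloud layer before spending any fine samples:

```python
import math

def march_clouds(ray_origin_y, ray_dir_y, cloud_bottom, cloud_top,
                 density_fn, coarse_step, fine_step, max_dist):
    """Toy 1-D ray march: skip the empty space below the cloud layer
    analytically, then take fine density samples only inside it.

    Returns (transmittance, number_of_density_samples_taken).
    """
    t = 0.0
    # Analytic empty-space skip: jump straight to the layer entry
    # (assumes an upward-pointing ray starting below the layer).
    if ray_dir_y > 0 and ray_origin_y < cloud_bottom:
        t = (cloud_bottom - ray_origin_y) / ray_dir_y
    transmittance = 1.0
    samples = 0
    while t < max_dist:
        y = ray_origin_y + t * ray_dir_y
        if y > cloud_top:
            break                      # left the layer; nothing above it
        density = density_fn(y)
        samples += 1
        transmittance *= math.exp(-density * fine_step)
        # Step finely only where there is cloud material.
        t += fine_step if density > 0 else coarse_step
    return transmittance, samples
```

The payoff is that the per-pixel sample count depends on layer thickness, not on total ray length, which is what keeps the cost from being "a relative constant" over the whole march.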

Posted by on 29 May 2016 - 10:22 PM

Not sure if the cubemaps you're using have a high enough resolution, or rather what the end result for hair would look like. But with screenspace stuff you'd at least have the resolution and so on.

http://www.frostbite.com/2015/08/stochastic-screen-space-reflections/

### #5293975 Question about GI and Pipelines

Posted by on 28 May 2016 - 04:32 PM

With deferred you're only ever shading visible pixels; with forward you're shading triangles even if they don't end up being sampled. Forward+ just limits the lighting work to relevant screenspace tiles, which can apply to both deferred and forward shading. Deferred has a higher setup cost, but if you're scaling toward enough lights/shading it can end up cheaper in the end.

The biggest relevance here is that you want GI, and right now pretty much all GI solutions need a more detailed pass based on deferred data of some kind, as large-scale GI can't simultaneously have enough resolution to scale down to small details. So SSAO/SSR, or some variation that needs deferred information, is often used. Even mostly forward titles like The Witcher 3 and The Order: 1886 use deferred information for small/detail-scale GI effects.

Posted by on 22 May 2016 - 06:53 PM

Yeah, looking at it again, it just appears to be: create a shadow-map-like structure; create a frustum for each primitive in the shadow map; link onscreen pixels to shadow map texels (a list of which onscreen pixels are covered by which shadow map texel); then for each pixel, go to the shadow map texel it's linked to and, within that texel, test against each frustum you constructed to see whether that point (pixel, I think) is inside the frustum. It's a lot like shadow volumes, but unfortunately it seems to involve atomics (mentioned as part of the irregular z-buffer, which creates the linked list of onscreen pixels to shadow map texels) as well as constructing frustums and then doing frustum tests.

None of it is necessarily expensive by itself, but overall it's a lot more expensive than just doing a normal shadow map, let alone the virtual shadow map I originally linked to, which can be faster than a naive cascaded shadow map. Virtual shadow maps also come close to a 1:1 match with screen resolution, and so will have close to the same quality as this anyway while being a lot more efficient. The only other thing they're doing is comparing unfiltered shadow maps without SMAA applied against their own results with SMAA applied (which has nothing specifically to do with this, cough cough); frankly, no one uses unfiltered shadow maps, and I had to look at the zoomed-in portion of their comparison to appreciate the difference anyway.

Posted by on 21 May 2016 - 05:16 PM

So where does this https://developer.nvidia.com/sites/default/files/akamai/gameworks/Frustum_Trace.jpg fit into what you just described? Also, how does the irregular z-buffer fit into this?

That's, afaik, the ray vs. triangle intersection test: you construct your frustum as in the jpeg, then test whether the onscreen pixel is inside that frustum. I don't remember what the irregular z-buffer was for, as I only glanced through the paper and concluded Sebastian's "virtual shadow mapping" (about 3/4 of the way down) would provide similar image quality while doing so a lot faster.
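The point-inside-frustum test mentioned above reduces to a standard point-vs-convex-volume check: a point is inside iff it sits on the inner side of every bounding plane. A minimal Python sketch, assuming planes stored as `(nx, ny, nz, d)` with inward-pointing normals (that representation is my assumption, not the paper's):

```python
def point_in_frustum(point, planes):
    """True iff `point` lies inside the convex volume bounded by `planes`.

    Each plane is (nx, ny, nz, d) with the normal pointing inward, so a
    point is inside when nx*x + ny*y + nz*z + d >= 0 for every plane.
    """
    x, y, z = point
    for (nx, ny, nz, d) in planes:
        if nx * x + ny * y + nz * z + d < 0.0:
            return False  # outside this plane -> outside the frustum
    return True
```

For a per-primitive frustum you'd build the four side planes from the light position and the triangle's edges, then run exactly this test per linked pixel.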

If you're really going for some "make high-end PC stuff useful" angle, as Joe-J suggests, I've found that just having everything be scalable in the engine is a good idea anyway. That way you can turn things (SSR/SSAO samples, shadow map resolution, G-buffer quality, LOD distance/quality, HDR buffer quality, etc.) down and/or up as needed to hit any platform and target framerate.

### #5289513 Irradiance Volume vs. 4-Basis PRT in Farcry

Posted by on 30 April 2016 - 09:55 PM

If you're disappointed with 2-band spherical harmonics (perfectly understandable), you can take a look at spherical Gaussians, or just play around with SH/SG stuff here: https://github.com/kayru/Probulator
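For context on why 2-band SH disappoints: two bands means just four coefficients per color channel, one constant plus three linear terms, so nothing sharper than a broad cosine-like lobe survives. An illustrative Python sketch of evaluating that basis (the coefficient ordering is one common real-SH convention, an assumption on my part):

```python
def sh2_basis(dx, dy, dz):
    """The four real SH basis values (bands 0-1) for a unit direction.

    Y_0^0 = 1/(2*sqrt(pi)) ~ 0.282095; the band-1 terms are
    sqrt(3/(4*pi)) ~ 0.488603 times y, z, x respectively.
    """
    c0 = 0.282095
    c1 = 0.488603
    return (c0, c1 * dy, c1 * dz, c1 * dx)

def sh2_eval(coeffs, direction):
    """Reconstruct radiance in `direction` from 4 SH coefficients."""
    return sum(c * b for c, b in zip(coeffs, sh2_basis(*direction)))
```

Since reconstruction is just a dot product with four smooth basis functions, any high-frequency lighting detail is irrecoverably blurred, which is exactly the limitation spherical Gaussians (more, narrower lobes) address.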

### #5288055 What is the relationship between area lights and reflections (SSR/local/global)

Posted by on 21 April 2016 - 04:13 PM

Ah, right, I got confused with that "blurry reflection look" and forgot that this video is only about lights. What a bozo I am!

I've never written SSR/cube reflections before -- it seems like you would have to turn off the lights before your SSR/cube reflection pass so you don't "double up" the reflections of the light, right? Otherwise you would have one reflection from the analytic/punctual light model and another from your SSR/cube reflection pass. Or is that not that big of a deal?

For cubemaps you turn off any direct contribution, correct.

Do you really? I always thought you capture it n times to simulate light bounces?

I'm assuming he means the emissive texture/sprite that's supposed to represent the actual "light-emitting" part of the light, e.g. a sun disc representing the sun. In that case you'd want to turn it off for cubemaps, or else you'd get a double contribution: one from the cubemap capturing the sun disc, and one from your analytic directional light. This doesn't, or shouldn't, matter for SSR, as SSR is hopefully just going to overwrite your analytic specular with more accurate SSR reflections of your emissive material (assuming it hits).

There are plenty of circumstances where you may even want to leave them on for cubemaps too: if a light is distant enough that its analytic solution doesn't contribute, you can certainly capture it in a cubemap (distant city lights or something).

And for actual light contribution otherwise, e.g. drawing point lights and so on, you just leave them on for both cubemaps and SSR.

Oh! And edit, duh: here's the exact same thing as the area lights OP posted, but with source code, a permissive license, etc. Also diffuse lighting at the same time, though still no shadows (raytrace analytic shapes/signed distance fields?): https://eheitzresearch.wordpress.com/415-2/

### #5287724 What is the relationship between area lights and reflections (SSR/local/global)

Posted by on 19 April 2016 - 11:52 PM

Ah, right I got confused with that "blurry reflection look" and forgot that this video is only about lights.  What a bozo I am!

I've never written SSR/cube reflections before -- it seems like you would have to turn off the lights before your SSR/cube reflection pass so you don't "double up" the reflections of the light, right? Otherwise you would have one reflection from the analytic/punctual light model and another from your SSR/cube reflection pass. Or is that not that big of a deal?

For cubemaps, you turn off any direct contribution, correct. But it's not necessary for SSR, as you are essentially overwriting any other specular contribution whenever the SSR ray actually hits valid info, so it shouldn't double the contribution. But this assumes the specular from your light source is at least somewhat of a match with whatever emissive material the light is supposed to come from; if they're too mismatched it could certainly look odd.

### #5287228 When would you want to use Forward+ or Deferred Rendering?

Posted by on 16 April 2016 - 05:10 PM

Why is it called a z-pre-pass if the Z-buffer is typically generated first, before anything else, in the first place?

It's a z-only pass: just depth, you don't render anything else such as textures. As Hodgman said, there's a fast path for it, but you can't have a render target bound at the same time. It was mostly used on previous consoles, where you were highly bandwidth-bound, so getting rid of overdraw could be a win even if you rasterized twice. But it's mostly outdated for deferred rendering today, at least as far as using the rasterizer goes. DICE recently (at GDC) showed off what is essentially a compute-based z-prepass, using async compute (in their case while the rasterizer is busy with shadow maps) to process all the polys you have onscreen. A compute-based pass can be a win on today's consoles, which can choke a bit on the rasterization step, because you can discard triangles that would be onscreen but don't actually hit any centroid sample; and during the shadow pass you're mostly using the rasterizer, so your compute resources are free anyway.

For tiled/clustered forward it can be a bigger win, though, and a z-prepass is still relevant. It can reduce overdraw dramatically since, again, you'll discard triangles which miss the centroid samples and thus don't contribute. While a prepass can be useful for deferred, because you'll skip the big geometry step where you're still reading out textures which won't contribute, it's much better for forward, as you're still lighting as you draw each triangle, so you'll be saving there too. I wonder now whether DICE's compute-based z-prepass-like thing could do the same for Forward+; it might be a win.
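To show why the prepass saves more for forward shading, here's a toy Python overdraw model (entirely illustrative: real GPUs don't work on dictionaries, and submission-order depth testing is a simplification). Fragments are `(pixel, depth)` draws in submission order; without a prepass, every draw that passes the running depth test gets fully shaded, while with one, only the final nearest surface per pixel does:

```python
def shade_cost(fragments, with_prepass):
    """Count how many full shading invocations a fragment stream costs.

    fragments: list of (pixel_id, depth) in submission order.
    with_prepass=True  -> depth is known up front; shade once per visible pixel.
    with_prepass=False -> shade every fragment that passes the depth test
                          at the moment it is submitted (classic overdraw).
    """
    if with_prepass:
        nearest = {}
        for pixel, depth in fragments:
            if pixel not in nearest or depth < nearest[pixel]:
                nearest[pixel] = depth
        return len(nearest)           # one shade per visible pixel
    shaded = 0
    zbuffer = {}
    for pixel, depth in fragments:
        if pixel not in zbuffer or depth < zbuffer[pixel]:
            zbuffer[pixel] = depth
            shaded += 1               # passed depth test -> shaded (wastefully,
                                      # if something nearer comes later)
    return shaded
```

In a forward renderer each of those wasted invocations includes the full lighting loop, which is why killing overdraw up front pays off so much more there than in deferred.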

### #5284694 When would you want to use Forward+ or Deferred Rendering?

Posted by on 01 April 2016 - 09:06 PM

Well, the preferred term is "uber-shader", which is fewer words, but yes, it is the "big ass shader" you're referring to.

### #5284559 When would you want to use Forward+ or Deferred Rendering?

Posted by on 31 March 2016 - 07:58 PM

There's a lot to go over, but the basics are:

Forward+ (or rather clustered Forward+, which is really what you'd use):

Positives:

Easy translucency (still not sorted, but at least it's there).

Easy MSAA.

Easy use of multiple material shading models.

Negatives:

Possibly high overdraw or vertex costs: you might end up either doing a z-prepass and doubling your vertex load, or living with the overdraw that remains even after clustered light binning, which can still be costly if you're running heavy per-pixel operations.

Deferred (or rather tiled/clustered deferred, again what you'd really want to use):

Positives:

Extremely predictable costs (shade what's onscreen!)

Low overdraw costs (see above)

Easy use of deferred tricks (soft particles, screenspace raytracing, etc. etc.)

The above is really, really useful: actually applying shadows can get quite cheap (deferred shadow masks, or even just drawing them directly into the buffer for point lights), plus deferred decals, relatively easy area lights, etc.

Negatives:

Translucency is haaard. You need to either do a forward pass (with clustered light binning you can re-use the light bins, so that's useful) or gather all your lighting into a spherical harmonics/Gaussian grid each frame, then do a screenspace pass on translucency using that (UE4 and Destiny do this).

Material types are more limited: you're limited by your G-buffer size, and you need to pay the cost of each material type in every onscreen tile it appears in.

MSAA isn't easy; though since temporal re-projection AA, morphological AA, and specular AA (Toksvig, etc.) have gotten so good, and can be cheaper than MSAA anyway, I don't see the obsession with MSAA as really justified anymore.
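As an aside on the specular AA just mentioned, the Toksvig trick is small enough to sketch. When a normal map is mipmapped, averaged normals shrink below unit length, and that shortening encodes normal variance; Toksvig uses it to lower the Blinn-Phong exponent so minified bumpy surfaces read as rougher instead of sparkling. An illustrative Python version (the epsilon clamp is my addition for safety):

```python
def toksvig_power(avg_normal_len, spec_power):
    """Toksvig specular anti-aliasing for Blinn-Phong.

    avg_normal_len: length of the averaged (mipmapped) normal, in (0, 1].
    spec_power:     original specular exponent.
    Returns the reduced exponent; unchanged when the normal is unit length
    (no variance), sharply reduced as the averaged normal shortens.
    """
    na = max(avg_normal_len, 1e-6)            # avoid division blow-up
    ft = na / (na + spec_power * (1.0 - na))  # Toksvig factor in (0, 1]
    return ft * spec_power
```

The same variance-to-roughness idea carries over to GGX-style roughness remapping in modern pipelines, though the exact mapping differs.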

Really, you're going to have to look at each one and figure out what your project needs. Though since you're already doing a straight forward pass, and thus aren't getting any of the savings from all the deferred tricks, it would be simpler and more straightforward to just implement clustered Forward+.

### #5283022 Per-Triangle Culling (GDC Frostbite)

Posted by on 23 March 2016 - 06:04 PM

Note that on AMD's GCN, the compute shader can be run async while rendering the shadow maps (which barely occupy the compute units), thus making this pass essentially "free".

Given that Nvidia doesn't typically allow async compute, does that mean it wouldn't be useful on Nvidia hardware?

It's easy to understand why rendering small triangles is expensive, but this culling process won't be free if it can't overlap other parts of the pipeline, right? I suppose I could see an overall positive benefit if the compute shader needs only position information and can ignore other attributes which don't contribute to culling?

Whether it's a net gain or a net loss depends on the scene. Async compute just increases the likelihood of being a net gain.

By a lot, unfortunately, and Nvidia's releases this year don't seem likely to change async support. Still, it's generally not going to be a loss, so it's not like you'd even have to disable it on an Nvidia-specific path.
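For a sense of what the compute culling pass actually tests, one of its cheapest rejections is the "small primitive" check: if a triangle's screen-space bounding box rounds to the same value on either axis, it straddles no pixel center and can be discarded before rasterization. An illustrative Python sketch, assuming the D3D-style convention of pixel centers at half-integer coordinates (that convention, and the use of `round`, are my assumptions for this sketch):

```python
def small_triangle_culled(v0, v1, v2):
    """Conservative small-primitive test on screen-space vertices (x, y).

    With pixel centers at half-integer coordinates, a bounding box whose
    min and max round to the same integer on some axis crosses no pixel
    center on that axis, so the triangle can't produce any fragments.
    """
    min_x = min(v0[0], v1[0], v2[0]); max_x = max(v0[0], v1[0], v2[0])
    min_y = min(v0[1], v1[1], v2[1]); max_y = max(v0[1], v1[1], v2[1])
    return round(min_x) == round(max_x) or round(min_y) == round(max_y)
```

This is why the pass only needs position data, as the question above guesses: bounding boxes, backface tests, and frustum tests never touch UVs or normals.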
