
Frenetic Pony

Member Since 30 Oct 2011

#5269717 Moment Shadow Maps for Single Scattering, Soft Shadows and Translucent Occlud...

Posted by Frenetic Pony on 06 January 2016 - 06:42 PM

It's important to note that this paper (http://cg.cs.uni-bonn.de/aigaion2root/attachments/MSMBeyondHardShadows.pdf) concerns itself with filtering shadows for use in light scattering, i.e. techniques like this one: https://graphics.tudelft.nl/Publications-new/2014/KSE14/KSE14.pdf

Things like that, or Nvidia's hacky tessellation-based god rays, are fine, but most people use something like: http://advances.realtimerendering.com/s2015/Frostbite%20PB%20and%20unified%20volumetrics.pptx

which supports multiple lights more easily, supports visible fog volumes, and can even do things like volumetric clouds: http://advances.realtimerendering.com/s2015/The%20Real-time%20Volumetric%20Cloudscapes%20of%20Horizon%20-%20Zero%20Dawn%20-%20ARTR.pdf, all potentially faster than the previous approaches.

That being said, you can still use the moment shadow mapping work for filtering, and the video/paper you're interested in seems to make the pre-filtered single scattering more efficient.

 

The paper you mention also covers filtering translucent occluders and soft shadows, i.e. something like: http://www.crytek.com/download/Playing%20with%20Real-Time%20Shadows.pdf  Both are nice to have if you can afford them.

 

To sum up this long-winded reply: for filtering shadows you can still use exponential variance shadow mapping, which looks better for roughly the same cost as moment shadow mapping. Or, for filtering shadows specifically for atmospheric scattering/shadows on particles/etc., you can just use plain variance shadow mapping and hope users don't notice the light leaking, since it's only atmospheric scattering/particles.
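For reference, the core of variance shadow mapping is just the Chebyshev upper bound evaluated against the two depth moments stored in the (pre-filterable) shadow map. A minimal sketch, plain C++, with function and parameter names of my own rather than anything from the papers above:

```cpp
#include <algorithm>

// Variance shadow mapping: the shadow map stores E[z] and E[z^2], which can be
// filtered/blurred like a regular texture. Returns an upper bound on the
// fraction of light reaching a receiver at depth receiverDepth.
float vsmVisibility(float m1, float m2, float receiverDepth, float minVariance = 1e-4f)
{
    if (receiverDepth <= m1)
        return 1.0f;                          // receiver in front of the mean occluder: fully lit

    float variance = std::max(m2 - m1 * m1, minVariance);
    float d        = receiverDepth - m1;
    return variance / (variance + d * d);     // one-sided Chebyshev inequality
}
```

The light leaking mentioned above comes from that bound being loose when the variance is large; exponential variance and moment shadow maps both tighten it.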




#5268776 Injection Step of VPLs into 3D Grid

Posted by Frenetic Pony on 01 January 2016 - 09:25 PM

Demo with source code for you to peruse: http://blog.blackhc.net/2010/07/light-propagation-volumes/




#5268652 Questions on Baked GI Spherical Harmonics

Posted by Frenetic Pony on 31 December 2015 - 05:50 PM

 


Also, what exactly do you mean by "occlusion" like RaD? What occlusion specifically?

Maybe I'm misunderstanding and using the wrong term, but I was referring to the shadowing in the picture.

[attached screenshot showing the baked shadowing]

 

 

 


Generally an offline raytracer is used for baking indirect illumination, rather than just an ambient term. Shoot rays with bounces all around and gather.

 

Darn, I was kind of hoping I could just do it without a raytracer. I'm going to be taking a ray tracing class this year, so hopefully I can come back to this and replace the ambient term.

 

 

 

I've also noticed these weird artifacts on my light probes. Is this "ringing"? Or am I just really messing up the projection step?

[attached screenshot of the light probe artifacts]

 

 

I believe the occlusion referred to in the paper is occlusion for cubemaps/the specular term, which, since it's something you don't have at the moment, isn't something to concern yourself with immediately.

 

It's also possible that's in part due to ringing artifacts from SH, though "ringing" doesn't generally refer to an actual ring shape as such.




#5268414 Questions on Baked GI Spherical Harmonics

Posted by Frenetic Pony on 29 December 2015 - 06:36 PM

 

Robin Green's paper was super helpful. A lot of it still went over my head, but I've been able to put together a few things.

 

I have a 3D grid of light probes like MJP suggested. I'm rendering a cubemap for every light probe and processing the cubemap to construct 9 SH coefficients from it for each color channel. When rendering the cubemap, I apply some ambient lighting to every object in order to account for objects in shadow. (I wasn't too sure about this one.)

 

I'd like to try to get the nice occlusion that Ready At Dawn has in their Siggraph presentation, pg. 18 (http://blog.selfshadow.com/publications/s2015-shading-course/rad/s2015_pbs_rad_slides.pdf). How do I get something like this?

 

I'm also wondering if anything looks really wrong with my current implementation. 

 

Generally an offline raytracer is used for baking indirect illumination, rather than just an ambient term. Shoot rays with bounces all around and gather. Also, what exactly do you mean by "occlusion" like RaD? What occlusion specifically?
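As a rough illustration of the "shoot rays and gather" step, here's a minimal sketch of projecting gathered radiance onto the nine 2nd-order SH basis functions (plain C++; the radiance callback and sample count are placeholders, not something from this thread, and the basis constants follow Robin Green's notes):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// The nine 2nd-order SH basis functions for a unit direction d.
std::array<float, 9> shBasis(const Vec3& d)
{
    return {
        0.282095f,
        0.488603f * d.y, 0.488603f * d.z, 0.488603f * d.x,
        1.092548f * d.x * d.y, 1.092548f * d.y * d.z,
        0.315392f * (3.0f * d.z * d.z - 1.0f),
        1.092548f * d.x * d.z,
        0.546274f * (d.x * d.x - d.y * d.y)
    };
}

// Monte Carlo projection of incoming radiance onto SH: sample directions
// uniformly over the sphere, weight each gathered sample by the basis, and
// scale by 4*pi / sampleCount. One color channel shown for brevity.
template <typename RadianceFn>   // RadianceFn: Vec3 direction -> float radiance
std::array<float, 9> projectRadiance(RadianceFn&& radiance, int sampleCount)
{
    std::mt19937 rng{1234};
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    std::array<float, 9> coeffs{};

    for (int i = 0; i < sampleCount; ++i)
    {
        float z   = 1.0f - 2.0f * uni(rng);                  // uniform direction on the sphere
        float phi = 2.0f * 3.14159265f * uni(rng);
        float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));
        Vec3 dir{r * std::cos(phi), r * std::sin(phi), z};

        float L = radiance(dir);                             // e.g. trace a ray, gather bounces
        auto  Y = shBasis(dir);
        for (int k = 0; k < 9; ++k)
            coeffs[k] += L * Y[k];
    }

    float norm = 4.0f * 3.14159265f / float(sampleCount);
    for (float& c : coeffs)
        c *= norm;
    return coeffs;
}
```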




#5267862 Screen-Space Reflection, enough or mix needed ?

Posted by Frenetic Pony on 24 December 2015 - 07:01 PM

One of the cornerstones of PBR is that diffuse and specular lighting should match as closely as possible (yay energy preservation!). You can go play Far Cry 4 and see where they don't quite get this right; e.g., under the right circumstances their indirect diffuse lighting term will be a lot darker than their specular probe, so everything looks dark and super shiny at the same time, which looks weird.

 

As others mentioned, SSRR alone isn't enough; you'll get relatively few reflections from it. The most common approach is to use some sort of cubemap specular probe: either pre-computed, à la UE4 and co., if your game is linear, or dynamically created (take a cubemap centered around the camera) and updated as often as performance allows, which is what GTAV/The Witcher 3/etc. do. To get properly physically based lighting you'll also have to importance sample the cubemap to match your BRDF. Fortunately there's filtered importance sampling (see below) and this nifty paper to do so in relatively little time: http://www.gris.informatik.tu-darmstadt.de/~mgoesele/download/Widmer-2015-AAS.pdf
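To make the "importance sample the cubemap to match your BRDF" part concrete, here's a minimal sketch of GGX-style half-vector sampling in tangent space (plain C++; the cubemap fetch itself is left out, and the names are mine, not from the linked paper):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Sample a microfacet half-vector from the GGX normal distribution in tangent
// space (+Z is the surface normal). u1, u2 are uniform random numbers in [0,1).
// This is the standard mapping used when prefiltering specular probes.
Vec3 importanceSampleGGX(float u1, float u2, float roughness)
{
    float a        = roughness * roughness;
    float phi      = 2.0f * 3.14159265f * u1;
    float cosTheta = std::sqrt((1.0f - u2) / (1.0f + (a * a - 1.0f) * u2));
    float sinTheta = std::sqrt(std::max(0.0f, 1.0f - cosTheta * cosTheta));
    return { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
}

int main()
{
    // The sampled lobe tightens around the normal as roughness drops.
    const float roughs[] = { 0.1f, 0.5f, 0.9f };
    for (float rough : roughs)
    {
        Vec3 h = importanceSampleGGX(0.3f, 0.7f, rough);
        std::printf("roughness %.1f -> H = (%.3f, %.3f, %.3f)\n", rough, h.x, h.y, h.z);
    }
}
```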

 

Edit - the probe cost shouldn't be too bad. Stick with a low resolution (as low as 128x128 per face for a six-sided cubemap, or 256x256 for a dual paraboloid map). Only draw large static objects in low LOD (big trees, buildings, terrain, skybox), and stick with a dithered 10-10-10-2 HDR render target for output; players won't notice banding that much. As a bonus, if you do a two-layer cubemap like the above PDF has, drawing large static objects into the first layer and distant terrain/skybox into the second, you can combine that with SSRR and get a decent water reflection out of it at the same time without having to do a separate planar reflection.

 

Of course the problem with the dynamic approach is that it doesn't work so well with indoor/outdoor environments by itself. If you're inside and looking out a window you don't want what's outside reflecting the indoor walls, and if you're outside looking in you don't want the indoors reflecting the sky. Both GTAV and The Witcher 3 handle this decently somehow. If I had to guess I'd say all indoor areas have some marked bounding area that uses a different lighting term from the dynamic probe, so the dynamic probe only renders from and to outdoor areas, and the indoor areas use something else. Just a guess though.

 

Something to go on:

 

Far Cry 4: http://www.gdcvault.com/play/1022235/Rendering-the-World-of-Far




#5267099 What will change with HDR monitor ?

Posted by Frenetic Pony on 19 December 2015 - 09:32 PM

10bit linear is worse than 8bit gamma, so sRGB will stay.

These aren't "HDR monitors" that's marketing buzzwords...
HDR photography usually combines three or more 10-16 bit images to create a 32bit floating point image, which can ve tonemapped back to 8bit sRGB.
HDR games usually use a 16bit floating point rendering, and tonemap it to 8/10bit sRGB.

10bits is not HDR.
These monitors have been abound for a while using the name "deep color", not "HDR".

Software support for them has been around for 10 years already. You just make a 10_10_10_2 backbuffer instead of 8_8_8_8! Lots of games already support this.

 

Aye, though in this case "HDR" as a buzzword has now moved towards meaning the DCI colorspace with 10-bit input requirements. Or rather it means that and possibly more, and there's an argument among display makers as to what it should mean (what's the colorspace? what contrast ratio should be required? what's the minimum brightness in nits? etc.). Regardless, there's a new digital display standard to go with it, getting rid of the old analog stuff. The article in question is really vague as to what AMD even plans to do to support "HDR" beyond incidentally moving to DisplayPort 1.3. Honestly, for GPUs the only thing I can think of is automatic conversion between the new gamma curves and linear, because higher-bit backbuffers for output are, as you pointed out, a software matter, and 10-bit has been supported for a while now.
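If I recall correctly, the gamma curve these new "HDR" signal standards use is the SMPTE ST 2084 PQ curve. A minimal sketch of the linear-to-PQ encode (plain C++, assuming luminance normalized so 1.0 = 10,000 nits; purely illustrative, not tied to anything AMD announced):

```cpp
#include <cmath>
#include <cstdio>

// SMPTE ST 2084 (PQ) inverse EOTF: encode linear luminance (1.0 == 10,000 nits)
// into the 0..1 signal a 10/12-bit "HDR" display expects.
float linearToPQ(float y)
{
    const float m1 = 2610.0f / 16384.0f;
    const float m2 = 2523.0f / 4096.0f * 128.0f;
    const float c1 = 3424.0f / 4096.0f;
    const float c2 = 2413.0f / 4096.0f * 32.0f;
    const float c3 = 2392.0f / 4096.0f * 32.0f;

    float yp = std::pow(y, m1);
    return std::pow((c1 + c2 * yp) / (1.0f + c3 * yp), m2);
}

int main()
{
    // SDR white (100 nits) lands around mid-signal, leaving headroom for highlights.
    std::printf("100 nits  -> %.3f\n", linearToPQ(100.0f / 10000.0f));
    std::printf("1000 nits -> %.3f\n", linearToPQ(1000.0f / 10000.0f));
}
```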




#5267087 What will change with HDR monitor ?

Posted by Frenetic Pony on 19 December 2015 - 06:46 PM

So the wrong thing to focus on is the "10 bit monitor" part, at least initially. The right thing to ask is "what is the colorspace?". A "colorspace" represents the range of colors representable by a display. Here's a handy chart:

 

[chart: CIE chromaticity diagram with the REC 709, DCI, and REC 2020 triangles]

 

The total area of the chart represents the colors a human eye can see; the triangles represent different "colorspaces". The points of each triangle represent the shades of red, blue, and green that can be mixed together to create the colors in between. Right now we are at the smallest triangle, REC 709, and have been since, well, practically since color TV was invented. The "bits", as in "10 bit, 8 bit" etc., come in when you want to display the colors in between the far points of the triangle.

 

Right now we have (for the most part) 8-bit monitors; in binary that works out to each color (red, blue, green) having 256 shades to combine together to make the colors in between. For REC 709 that's fine, you won't see banding (mostly). But when we get to bigger triangles, we need more shades of each color to cover the space in between unless we want banding, e.g.:

 

[image: sky gradient showing obvious banding]

 

This is supposed to be smooth, but there aren't enough colors to represent a smooth change to our eye, so we see obvious "jumps" in color. That's where we need extra bits, to get more colors to put in between. Ten bits offers 1024 shades of each color and is enough for the middle triangle, which is the DCI colorspace, i.e. what movies are (ideally) projected in at theaters. It's also what the first wave of "HDR!" screens support.

 

Unfortunately, or fortunately depending, there's also the bigger triangle, the REC 2020 colorspace, which is what's supposed to be supported but couldn't quite make it out this year. Covering that area without banding would need 12 bits per color. Which "colorspace" will win is a complicated mess and no one knows. Regardless, now that I've covered what's actually going on, on to the question.

 

For one part of production, the shades of color and the 10+ bits, it's going to be easy. Right now albedo textures generally take up 24 bits (8 bits per color channel). To cover the bigger triangle properly you just up the bits, maybe to, say, 11-11-10, and hope people don't notice banding if the textures are dithered. Other things also get upped: some people still use 10 bits per color channel for HDR render targets (think of going into a bigger triangle virtually, then shrinking it back down), which shows some banding on 8-bit output, but for 10+ bit output the minimum HDR target will probably be 16 bits per channel. So, more rendering power, ouch, but relatively easy to do. It should also be noted that right now GPUs have automatic "support" for REC 709, converting back and forth between REC 709's gamma curve and linear values (which are easier and more correct to do math with), while the bigger triangles have different gamma curves that will need to be handled either manually or by new GPUs that do it quickly and automatically.
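As a concrete example of the gamma-curve/linear conversion GPUs currently do for free with sRGB texture and render target formats, here's a minimal sketch of the standard sRGB encode/decode (plain C++; a wider-gamut curve would slot into the same place):

```cpp
#include <cmath>
#include <cstdio>

// sRGB transfer function: what the hardware applies when writing to an *_SRGB
// render target (encode) or sampling an *_SRGB texture (decode).
float linearToSrgb(float c)
{
    return (c <= 0.0031308f) ? 12.92f * c
                             : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}

float srgbToLinear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f
                           : std::pow((s + 0.055f) / 1.055f, 2.4f);
}

int main()
{
    // Mid-grey in linear light encodes to roughly 0.73 in sRGB, which is why
    // doing lighting math directly on gamma-encoded values goes wrong.
    float mid = 0.5f;
    std::printf("linear %.2f -> sRGB %.3f -> back %.3f\n",
                mid, linearToSrgb(mid), srgbToLinear(linearToSrgb(mid)));
}
```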

 

Asset creation will be the hard part. Right now every studio authors textures and everything else in REC 709. The cameras for texture capture are set up for REC 709, the camera color checkers are, the monitors are, the GPUs are, the modeling software is. All of that will have to be replaced, ideally from the moment real production begins (years in advance of release for triple-A games), to produce assets that are meant to be displayed in the higher colorspaces. You can convert older textures into the newer spaces automatically, but that doesn't mean artists are going to be happy with the results; it might take a while to go over all the textures manually to get them to look acceptable. You might also see games keep older 8-bit textures while claiming to cover the higher colorspaces (according to marketing...) and just use lighting values that cover the higher colorspaces. Obviously not ideal, but I wouldn't doubt that at least one or more games will go for it.

 

But ideally you'd have all assets designed from the start to cover whatever higher colorspace is used. With (right now) quite slow adoption of "HDR!" screens, little to no support from anything else (Netflix, image formats, etc.), and the need for more processing power, I'd say you aren't going to see many games supporting "HDR!" for years, and quite possibly not uniformly until yet another new generation of consoles (or whatever, depending on how long these last).

 

Hopefully that covers everything you, and anyone else for that matter, wanted to know.




#5265074 Specular aliasing : The order 1886 method

Posted by Frenetic Pony on 05 December 2015 - 05:25 PM

That link is, so far as I remember, based on Toksvig, and it's the best result you can get for relatively cheap. You can also use temporal AA to reduce specular aliasing, which is cheap, and doubly so if you already have temporal AA for other things (SSAO/geometry/whatever). The Order actually combines Toksvig-based normal mipmaps/roughness and temporal AA together.
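For reference, the Toksvig idea is to measure how much the mipmapped/averaged normal has shortened and use that to pull down the specular exponent (i.e. raise effective roughness). A minimal sketch, assuming a Blinn-Phong style specular power (plain C++, names mine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Toksvig factor: as the normals in a texel footprint diverge, their average
// gets shorter. Use that shortening to anti-alias the specular exponent.
float toksvigSpecularPower(const Vec3& averagedNormal, float specularPower)
{
    float na = length(averagedNormal);            // in [0,1]; 1 = perfectly smooth footprint
    float ft = na / (na + specularPower * (1.0f - na));
    return ft * specularPower;                    // reduced exponent for this mip level
}
```

In practice this gets baked into the normal map's mip chain or a roughness map offline, which is roughly what the linked post describes.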

 

There's also "the best known solution" which is LEAN mapping: http://www.cse.chalmers.se/edu/year/2011/course/TDA361/Advanced%20Computer%20Graphics/LEANpres.pdf which s expensive. Realtime still, but expensive. Then there's CLEAN mapping, http://blog.selfshadow.com/2011/07/22/specular-showdown/ Google CLEAN mapping and you'll get a link to the GDC PPT if you want it.

 

As a next-gen bonus there's also LEADR mapping: http://blog.selfshadow.com/publications/s2014-shading-course/#course_content which combines displacement maps with LEAN mapping to get something approaching REYES-style image quality. But with current tessellation performance, cracks in tessellation, blah blah blah, I don't think anyone will be doing that till NEXT gen. VR gen? Whatever. There's ALSO a really nifty demo showing LEAN, CLEAN, and Toksvig results so you can see them all side by side with performance numbers, but I don't know where it is.

 

Hope that helps!




#5264838 Questions on Baked GI Spherical Harmonics

Posted by Frenetic Pony on 04 December 2015 - 12:41 AM

1. You generally want a lot of samples in your grid for good results, which means that you have to be okay with dedicating that much memory to your samples. It also means that you want your baking to be pretty quick. Cubemap rendering isn't always that great for this, since the GPU can only render to one cubemap at a time. We use a GPU-based ray tracer to bake our GI, and that lets us bake many samples in parallel.

2. Since you don't have direct control over where each sample is placed, you're bound to end up with samples that are buried inside of geometry. This can give you bad results, since the black samples will "bleed" into their neighbors during interpolation. To counteract this, we would detect "dead" samples and then flood-fill them with a color taken from their neighbors.

 

Remedy (Quantum Break) dealt with this nicely by using essentially a sparse multi-scale grid: http://advances.realtimerendering.com/s2015/SIGGRAPH_2015_Remedy_Notes.pdf

 

Increase probe density near geometry, and decrease it via an octree as you get farther away from contributing geometry. Fewer probes and less memory.
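A minimal sketch of that kind of subdivision rule (plain C++; the distance query, the near-geometry threshold, and the depth limit are placeholders, not Remedy's actual scheme):

```cpp
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };

struct ProbeCell
{
    Vec3  center;
    float halfSize;
};

// Subdivide a cell into 8 children while it is close to geometry, so probe
// density stays high near surfaces and falls off in empty space.
// distanceToGeometry is a stand-in for whatever scene distance query you have.
void buildProbeCells(const ProbeCell& cell, int depth, int maxDepth,
                     const std::function<float(const Vec3&)>& distanceToGeometry,
                     std::vector<ProbeCell>& outLeaves)
{
    bool nearGeometry = distanceToGeometry(cell.center) < cell.halfSize * 1.75f;
    if (depth >= maxDepth || !nearGeometry)
    {
        outLeaves.push_back(cell);            // place a probe (or probe cluster) here
        return;
    }

    float h = cell.halfSize * 0.5f;
    for (int i = 0; i < 8; ++i)
    {
        Vec3 offset{ (i & 1) ? h : -h, (i & 2) ? h : -h, (i & 4) ? h : -h };
        ProbeCell child{ { cell.center.x + offset.x,
                           cell.center.y + offset.y,
                           cell.center.z + offset.z }, h };
        buildProbeCells(child, depth + 1, maxDepth, distanceToGeometry, outLeaves);
    }
}
```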




#5264354 Drawing exposed rocks in a snowy mountain scene

Posted by Frenetic Pony on 30 November 2015 - 10:50 PM

How much work are you willing to put in? Virtualized textures for terrain are fairly well documented at this point, but still not the easiest thing to do. Would give you all the resolution you could want though.

 

Rocks could be fine, depending on how many there are. If they're the same mesh, or a few groups of the same mesh, you could batch them to reduce draw calls. But if your view distance is high enough they'll start to overlap and drag performance down. Then again, is your game top-down? If it generally is, you might be able to get away with a bunch of rock meshes more easily.




#5263050 PBR precision issue using GBuffer RGBA16F

Posted by Frenetic Pony on 21 November 2015 - 04:56 PM

 


You don't generally put HDR colours into a gbuffer.

Why not?

edit2- because I remember reading that in a 'true' HDR pipeline even the textures are in an HDR format.

 

 

Anything to do with lighting, such as a cubemap or anything else you are doing for direct/indirect lighting, should in some way take HDR into account. But textures, or rather albedo, don't, as they only take into account the colorspace you're working in, which right now is sRGB, though hypothetically one could prepare for DCI/REC 2020 if one wanted to.

 

Regardless, HDR is for lighting your scene over a higher range than your colorspace/spec can go, then tonemapping/etc. back down into the output range. Theoretically, if we were working in some crazy future colorspace/spec where we had output of a hundred thousand nits instead of the sRGB output of 100 nits, we wouldn't need "HDR" because the output spec would be enough in and of itself to display whatever range of brightnesses we wanted.
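As a concrete example of the "tonemapping back down" step, here's a minimal Reinhard-style operator (plain C++; the exposure value is a made-up placeholder, and fancier curves slot into the same place):

```cpp
#include <cstdio>

// Simple Reinhard tonemap: compress unbounded HDR values into [0,1) before the
// gamma-encoded 8/10-bit output. x / (1 + x) never reaches 1, so very bright
// values roll off instead of clipping.
float tonemapReinhard(float hdr, float exposure)
{
    float x = hdr * exposure;
    return x / (1.0f + x);
}

int main()
{
    const float exposure  = 0.5f;   // placeholder; normally driven by auto-exposure
    const float samples[] = { 0.1f, 1.0f, 10.0f, 100.0f };
    for (float s : samples)
        std::printf("HDR %.1f -> LDR %.3f\n", s, tonemapReinhard(s, exposure));
}
```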




#5262818 Bloom Flickering

Posted by Frenetic Pony on 19 November 2015 - 09:36 PM

I have however noticed in cases of sharp contrast there is bloom bleeding that occurs (it's kinda hard to see here, but it's pretty noticeable in engine):

Isn't that... correct though? I mean, that's what bloom is, in essence: a scattering function. It does look a little strange, but that's due to the art and the scene, not due to it being "incorrect". After all, in real life you're probably never going to have a wall painted with some super duper ultra-black albedo paint in an otherwise brightly lit room. But if you did, I imagine it would also look fairly strange.
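To illustrate "bloom as a scattering function": one common way to composite it is to treat the blurred image as light scattered in the lens and blend it in energy-conservingly, rather than adding thresholded highlights on top. A tiny per-pixel sketch (plain C++; the scatter fraction is a made-up parameter, not from this thread):

```cpp
// Energy-conserving bloom composite for one pixel: lerp between the sharp
// scene color and its wide blur. 'scatter' is the assumed fraction of light
// scattered in the lens (illustrative value, e.g. 0.05).
float compositeBloom(float scene, float blurred, float scatter)
{
    return scene * (1.0f - scatter) + blurred * scatter;
}
```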




#5262201 Resources for BRDF Shading Models (Specular, Diffuse, Fresnel)?

Posted by Frenetic Pony on 15 November 2015 - 10:09 PM

Cook-Torrance: http://www.codinglabs.net/article_physically_based_rendering_cook_torrance.aspx

 

Large set of references: http://graphicrants.blogspot.com/2013/08/specular-brdf-reference.html

 

More: http://simonstechblog.blogspot.com/2011/12/microfacet-brdf.html

 

You'll almost certainly want to use Smith's geometry/shadowing term for the best look relative to performance.

 

Schlick is a very... Slick approximation of Fresnel and is generally what's used: https://en.wikipedia.org/wiki/Schlick%27s_approximation
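For reference, Schlick's approximation is just a cheap polynomial fit to the full Fresnel equations. A one-liner sketch per channel (plain C++):

```cpp
// Schlick's approximation to Fresnel reflectance:
//   F(cosTheta) = F0 + (1 - F0) * (1 - cosTheta)^5
// where F0 is reflectance at normal incidence and cosTheta = dot(V, H).
float fresnelSchlick(float cosTheta, float f0)
{
    float m = 1.0f - cosTheta;
    return f0 + (1.0f - f0) * m * m * m * m * m;
}
```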

 

How you do subsurface scattering matters more for realtime performance than the exact formulation/matching measured data. Activision and ye CoD guys did a really nice and cheap approximation for Advanced Warfare et al.: http://www.iryoku.com/next-generation-post-processing-in-call-of-duty-advanced-warfare




#5262069 Desperate: Antialiasing/Filtering of Procedural Texture

Posted by Frenetic Pony on 14 November 2015 - 07:16 PM

Temporal supersampling might be a start; it's also known as temporal anti-aliasing. Anything benefiting from increased resolution could, in principle, benefit from temporal AA: http://www.gamedev.net/topic/673143-temporal-aa/
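A minimal sketch of the temporal accumulation idea: jitter the sample position each frame, reproject last frame's result, and blend a small fraction of the new frame in (plain C++ over a single value; the blend factor and the neighborhood clamp are illustrative, not from the linked thread):

```cpp
#include <algorithm>

// One step of exponential temporal accumulation for a single pixel/texel.
// 'history' is last frame's reprojected result, 'current' is this frame's
// jittered sample. Clamping history to the current neighborhood min/max is
// the usual trick to limit ghosting.
float temporalAccumulate(float history, float current,
                         float neighborhoodMin, float neighborhoodMax,
                         float blend = 0.1f)
{
    float clampedHistory = std::clamp(history, neighborhoodMin, neighborhoodMax);
    return clampedHistory + (current - clampedHistory) * blend;
}
```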




#5261855 Temporal AA

Posted by Frenetic Pony on 12 November 2015 - 08:00 PM

MSAA + Temporal Filter looks to be a good solution.

 

In that case you'll want to combine them correctly: http://www.iryoku.com/smaa/





