
Frenetic Pony

Member Since 30 Oct 2011

#5198879 Occlusion Culling - Combine OctTree with CHC algorithm?

Posted by Frenetic Pony on 17 December 2014 - 09:19 PM

Considering the game, or whatever it is, sounds highly dynamic, occlusion planes don't seem like an option. Besides, avoiding developer overhead is always a good idea; manually setting up occlusion is just more work.


Depth occlusion, on the other hand, is a very useful tool, and indeed used by many engines. There are also more complex options; DICE was helpful enough to put this out: http://www.slideshare.net/DICEStudio/culling-the-battlefield-data-oriented-design-in-practice and their requirements sound similar to yours in some ways.


As for culling point light shadow maps, draw calls are often a bottleneck here, along with polycount. "Efficient virtual shadow maps for many lights" extends clustered light culling, the current hotness for realtime lighting optimization anyway, to culling for shadow maps: http://www.cse.chalmers.se/~uffe/ClusteredWithShadows.pdf Unfortunately the extension has its own overhead, so you'd have to consider just how many shadow maps you want going at once before adopting it.

#5198082 Radiance Question

Posted by Frenetic Pony on 14 December 2014 - 01:33 AM

Radiance: "the flux of radiation emitted per unit solid angle in a given direction by a unit area of a source."


A source's emitted photons don't change with distance. What falls off is the irradiance arriving from the source, which, assuming a point source so it's nice and easy, follows the familiar inverse square law: E = I/D^2.
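The inverse square relationship above can be sketched in a couple of lines; the function name and units here are my own for illustration:

```python
def irradiance(intensity, distance):
    """Irradiance from an idealized point source: E = I / d^2.

    `intensity` is the source's radiant intensity (e.g. W/sr).
    The value arriving at a surface falls off with the square of
    distance, while the source's emitted radiance itself does not
    change with distance.
    """
    return intensity / (distance * distance)

# Doubling the distance quarters the irradiance:
e1 = irradiance(100.0, 1.0)  # 100.0
e2 = irradiance(100.0, 2.0)  # 25.0
```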


Be careful with terms like radiance and irradiance. Inverse square is still correct, but the terms often get tangled up, which confuses a lot of people. Radiance and irradiance aren't really self-evident terms, and then there's radiosity and so on, and different fields will use different terms for the same thing. It would be nice if there were a single set of agreed-upon, self-evident terms, but there isn't.

#5191633 Rendering clothes

Posted by Frenetic Pony on 07 November 2014 - 03:41 AM

I would go with your first idea and split your mesh up into manageable chunks:


Then not only are you not drawing your player twice, you're also ensuring you won't get any skin poking through clothing. I would imagine your base player mesh would be naked; then you can cater for someone wearing jeans with no top, if your application requires that kind of thing.

This is what I'm planning to do for my character.


Variations on this are how pretty much everyone does it, so yes indeed would recommend.

#5181964 Help with GPU Pro 5 Hi-Z Screen Space Reflections

Posted by Frenetic Pony on 21 September 2014 - 04:39 PM

I bought the book thinking I would get access to the source for this, damn it. Great work going on in here. Suggestion: since this technique is having to be reverse engineered anyway, why not think about crossing it with ideas from Deep G-Buffers for fast screen space GI and AO? http://graphics.cs.williams.edu/papers/DeepGBuffer14/ Morgan's paper is brilliant, and he's also provided base code for the system. Screen space cone traced reflections mixed with Deep G-Buffers would be a superb step towards near-offline results (as close as we can get right now, anyway).


Guys take a look. Cheers


Eh, that's a lot of extra scene-complexity-dependent work for not a lot of benefit. SSR already only looks good if you have a specific art direction, are blending it with cubemaps or other pre-computed reflections, or only use it for specific, controlled materials like a rough, partially reflective flat floor or puddles. Otherwise you get extreme temporal instability and it just ends up looking weird.


Great work on sussing out the details of this, though! I'd been eyeing the book just for this article, so it's good to know both that the article wasn't terribly complete and that there's now this thread with a very well documented journey of work and code to look at anyway!

#5180377 LOD in modern games

Posted by Frenetic Pony on 14 September 2014 - 11:12 PM

This: http://t.co/I1hjxx2P0I looks potentially interesting, depending on performance. But otherwise what Promit said is spot on.

#5180126 Dynamic cubemap relighting (random though)

Posted by Frenetic Pony on 13 September 2014 - 05:14 PM

This is too long for Twitter, and I'm busy pecking away at other things, but:


Cheap idea for relighting the increasingly popular cubemap/image based lighting solution: just store depth/normal/albedo for each cubemap face, and each frame relight N cubemap faces with the primary light and update the skybox. As long as you store and apply the baked roughness lookups in the mipmaps of the final output cubemaps you use for lighting, you get a dynamically re-lightable image based lighting system.


True, you can't use lightmaps (unless you update those separately, which is perfectly possible) and you don't get those pre-computed multiple bounces, but some price has to be paid for dynamic re-lighting. Since you're just re-shading a few 256x256 targets (and with very basic shading at that) it's not going to cost much, and you can use whatever pre-computed ambient occlusion you want. If memory is a concern, you could compress the albedo and normals down to two channels apiece; I don't see their quality being terribly important before lighting is applied.
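The "relight N faces a frame" scheduling amounts to a round-robin over the stored face G-buffers. A minimal sketch; `faces` and `relight_face` are hypothetical names, and the actual shading/mip re-bake is assumed to live inside `relight_face`:

```python
# Hypothetical round-robin relighting of cubemap face G-buffers.
# `faces` is a flat list of per-face (depth, normal, albedo) buffers
# across all probes; `relight_face` would re-run the (very basic)
# shading with the primary light and re-bake the roughness mip chain.

def relight_some_faces(faces, cursor, n_per_frame, relight_face):
    """Relight at most n_per_frame faces, returning the next cursor
    so the work is spread evenly across frames."""
    total = len(faces)
    for i in range(n_per_frame):
        idx = (cursor + i) % total
        relight_face(faces[idx])
    return (cursor + n_per_frame) % total
```

Called once per frame, every face gets revisited within `ceil(len(faces) / n_per_frame)` frames, which bounds how stale any probe can get.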


Just an idea for now, but it seemed promising enough. There's still the other issues of cubemap lighting, parallax projection and light leaking and etc. but that would be solved separately and is needed research anyway.

#5163386 Simulating lighting using volumetric meshes.

Posted by Frenetic Pony on 28 June 2014 - 01:44 AM

Sounds a lot like Crytek's Light Propagation Volumes, which never had enough precision for anything more than secondary illumination, and whose memory requirements grow quadratically or worse as you extend the range of the illumination. Alternatively, it also sounds like the same hack used by Bungie in Destiny, Irrational in BioShock Infinite, and Epic currently in UE4: all of them render lighting information to a volume texture which is then used for deferred rendering of transparencies. The trouble there is that it's all too low resolution to get very good shadow information or good specular.


Here are some of the examples: http://advances.realtimerendering.com/s2013/Tatarchuk-Destiny-SIGGRAPH2013.pdf   http://www.crytek.com/download/Light_Propagation_Volumes.pdf

#5158799 Screenspace Shadow Mapping Help!?

Posted by Frenetic Pony on 06 June 2014 - 04:13 PM


"Efficient virtual shadow maps for many point lights"

I know this paper; it's still more theorycrafting than practically useful (~15-20ms frame times on an NVIDIA GTX Titan GPU + Intel Core i7-3930K CPU aren't awesome for games yet). I've hoped for a more practically useful solution; we'll see what useful changes happen once the new API approaches/consoles kick in.



I found that with shadow map caching it can work well for many dozens of lights. But indeed, while hundreds may be "possible", it's not practical on most systems. And you need all the overhead of the culling scheme, which is great if you're targeting hundreds of point lights to begin with. But we're still a long way off from having a city scene with hundreds of proper point lights, and I suspect that will just have to be brute forced one way or another.

#5158623 Screenspace Shadow Mapping Help!?

Posted by Frenetic Pony on 06 June 2014 - 12:33 AM

"Efficient virtual shadow maps for many point lights" It's basically an extension of clustered shading to also allow culling for shadow mapping, along with a few hacks (the "virtual" part) for the maps themselves.


So if you've already got clustered deferred/forward going on, you're halfway there, which is nice.

#5157987 Metal API .... whait what

Posted by Frenetic Pony on 03 June 2014 - 08:05 PM

We are now in a fun situation where three APIs look largely the same (D3D12, Mantle, and Metal) versus OpenGL. While this won't "kill" OpenGL, the general feeling outside of those with a vested interest in it is that the other three are going to murder it in CPU performance, thanks to lower overhead, explicit control, and the ability to set up work across multiple threads.

It'll be interesting to see what reply, if any, Khronos has to this direction of development, because aside from the N-APIs problem, the shape of the thing is what devs have been asking for (and on consoles, using) for a while now.


This is why I just want something like this from OpenGL, at least on the driver overhead front and, if possible (hardware guys, make it so!), with memory control. 1 API to rule them, One API to run them, One API to render them all, and in the code bind them (or bindless if that's your thing).


But that's Khronos, at least I got a Lord of the Rings reference out of them.

#5157946 Ideas for rendering huge vegetation (foliage)

Posted by Frenetic Pony on 03 June 2014 - 03:41 PM

The Unigine guys had the great idea of building multiple billboard impostors for all their assets. They would, as far as I know, import any foliage asset and sample its image from multiple points across a sphere. Then they just render the impostor closest to the viewing angle, batching as many as possible to keep CPU overhead low. Because the impostors are all far away you can keep them low res and low memory, and because you've got a full spherical approximation they can even be injected into shadow maps for shadow casting.
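Selecting "the impostor closest to the viewing angle" is just a nearest-direction search over the pre-rendered capture directions (maximum dot product of normalized vectors). A sketch under my own naming, not Unigine's actual code:

```python
import math

def pick_impostor(view_dir, impostor_dirs):
    """Return the index of the pre-rendered impostor whose capture
    direction is closest to the current view direction, i.e. the one
    with the largest dot product after normalization."""
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    v = norm(view_dir)
    best, best_dot = 0, -2.0  # dot of unit vectors is always >= -1
    for i, d in enumerate(impostor_dirs):
        dot = sum(a * b for a, b in zip(v, norm(d)))
        if dot > best_dot:
            best, best_dot = i, dot
    return best
```

In practice you would batch all instances that picked the same impostor into one draw, which is where the CPU savings come from.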


You don't notice much in the way of parallax error either, as it's only used for distant stuff. For close-up stuff, take a look at Crytek's fancy grass management: http://crytek.com/download/Sousa_Tiago_Rendering_Technologies_of_Crysis3.pptx

#5155931 Doing local fog (again)

Posted by Frenetic Pony on 25 May 2014 - 02:54 PM

For particles, "Weighted Blended Order-Independent Transparency" should be helpful: http://jcgt.org/published/0002/02/09/ It's performant OIT for non-refractive stuff. As I saw on Twitter concerning rendering: "It's all smoke and mirrors. Except smoke and mirrors, that's hard to render."
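The core of that paper is a single-pass accumulation plus a resolve. A CPU-side sketch of the resolve for one pixel, with the depth-dependent weight heuristic simplified to a function of alpha only (the real weight also uses fragment depth):

```python
def wboit_composite(fragments, background, weight):
    """Resolve step of weighted blended OIT for one pixel.
    `fragments` is a list of ((r, g, b), alpha) pairs in any order;
    `weight(alpha)` is a stand-in for the paper's weighting heuristic."""
    accum = [0.0, 0.0, 0.0]
    accum_a = 0.0
    revealage = 1.0  # product of (1 - alpha): how much background shows
    for (r, g, b), a in fragments:
        w = weight(a)
        accum[0] += r * a * w
        accum[1] += g * a * w
        accum[2] += b * a * w
        accum_a += a * w
        revealage *= (1.0 - a)
    if accum_a > 1e-6:
        avg = [c / accum_a for c in accum]  # weighted average color
    else:
        avg = [0.0, 0.0, 0.0]
    return [avg[i] * (1.0 - revealage) + background[i] * revealage
            for i in range(3)]
```

Because both `accum` and `revealage` are order-independent sums/products, the fragments never need to be depth sorted, which is the whole point for particles.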


And yeah, that Lords of the Fallen paper is great. I can already see multiple games implementing something like it (some people at Ubisoft did something fantastically similar for AC already) and artists just abusing the heck out of it. A million godrays blinding you in every level, here we come.


Ninja edit to your edit: yeah, smoke should definitely be done differently, as you're dealing with two different phenomena. "Fog" represents particles smaller than the wavelength of the light, which scatter the light but don't absorb it. Smoke has particles bigger than the wavelength, which causes direct absorption.
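The absorption side of that distinction is usually modeled with Beer-Lambert attenuation; a minimal sketch, with the symbol names mine:

```python
import math

def transmittance(sigma_t, distance):
    """Beer-Lambert transmittance through a homogeneous medium:
    T = exp(-sigma_t * d). For "fog" (particles smaller than the
    wavelength) the extinction coefficient sigma_t is dominated by
    scattering; for smoke (larger particles) direct absorption
    contributes as well."""
    return math.exp(-sigma_t * distance)
```

So a light ray traveling twice as far through the same medium keeps the square of the original fraction of its energy, which is why thick smoke goes dark so quickly.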


If you're going deferred, the Lords of the Fallen guys have a neat per-vertex deferred scheme for small particles that they use for smoke. If you're going forward, there are ways to make forward lit particles and Z-blurring work at the same time. Doing a lot of particles today should only be a problem depending on your target systems. There are nice ways to batch everything and avoid overdraw, so if you've got the performance then thousands of particles (and more) are doable with some work.

#5152424 Kinds of deferred rendering

Posted by Frenetic Pony on 08 May 2014 - 05:17 PM

Like ATEFred already said, there are various techniques in popular use depending on the platform. To decide which is best, you really need to have a solid idea of what hardware you're targeting and what you need from your renderer. Tiled deferred in a compute shader will generally give you the best peak performance for many lights, but you need hardware and APIs that support that sort of thing. Light prepass or tiled forward can be useful if there's a restriction on render target sizes, for instance on mobile TBDR GPUs or the Xbox 360 GPU.


For the high end, the popular choice is clustered forward/deferred. You can go deferred for opaque/generically shaded objects, while translucency/special lighting models can use forward. It's nice mostly because it explicitly handles both at once, while handling a large, or even very large, number of lights better than anything else, along with other fancy possibilities if you go for full cluster culling: http://www.humus.name/Articles/PracticalClusteredShading.pdf
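The heart of clustered shading is mapping each fragment to a 3D cluster: screen-space tile in X/Y plus a depth slice, typically spaced logarithmically. A sketch of that mapping with illustrative names and layout, not any specific engine's:

```python
import math

def cluster_index(px, py, view_z, tile_size, near, far, num_slices):
    """Map a pixel position and view-space depth to a (tile_x, tile_y,
    z_slice) cluster coordinate. Depth slices use the common
    logarithmic spacing so near clusters are thinner than far ones."""
    tx = px // tile_size
    ty = py // tile_size
    z = min(max(view_z, near), far)  # clamp into the cluster range
    slice_ = int(math.log(z / near) / math.log(far / near) * num_slices)
    slice_ = min(slice_, num_slices - 1)  # z == far lands on the last slice
    return (tx, ty, slice_)
```

Lights get binned into these clusters once per frame; each fragment then only walks the short light list for its own cluster, which is what makes very large light counts tractable.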


Like MJP said though, you need the hardware to support it. Light pre-pass/forward+ is more popular for mobile solutions.

#5149041 Global illumination techniques

Posted by Frenetic Pony on 23 April 2014 - 02:50 PM

Good stuff, thanks Agleed!


Speaking of which, Lionhead seems to have advanced Light Propagation Volumes along: http://www.lionhead.com/blog/2014/april/17/dynamic-global-illumination-in-fable-legends/


Unfortunately there are no details. But I guess that means it should be somewhere in UE4, though I didn't see it. Still, occlusion and skybox injection are nice. It still seems a fairly limited idea; you'd never get enough propagation steps to get long range bounces from, say, a large terrain. But at least it would seem more usable for anyone looking for a practical solution that they can get working relatively quickly. And hey, maybe you can use a handful of realtime cubemaps that only render long distance stuff, and just rely on the volumes for short distances.


It could go along nicely with using a sparse octree for propagation instead of a regular grid: https://webcache.googleusercontent.com/search?q=cache:http://fileadmin.cs.lth.se/graphics/research/papers/2013/olpv/ which trades off a predictable performance impact for less memory and further/faster propagation. Assuming they don't do that already.

#5128292 Volumetric lighting

Posted by Frenetic Pony on 02 February 2014 - 07:21 PM

Epipolar sampling is one of the main speedups. Basically, instead of raymarching naively, you raymarch in a regular fashion with samples radiating from the screenspace position of the light source out to the edges of the screen. Then take into account edge detection, which can again be done in screenspace, for high contrast variations, and you suddenly have far fewer samples to go through.


The 1D min/max trees take advantage of the above. Epipolar sampling gives you what looks sort of like a 1D heightmap per slice, which is then used to speed things up again.
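The sample placement itself is simple: one ray ("epipolar slice") from the light's screen position to each chosen point on the screen border, with evenly spaced samples along it. A sketch with my own names, in normalized screen coordinates:

```python
def epipolar_samples(light_pos, border_points, samples_per_ray):
    """Place samples along epipolar slices: straight rays from the
    light's screen-space position out to points on the screen border.
    Returns one list of (x, y) sample positions per ray. The raymarch
    and edge-aware refinement would then run on these samples only,
    instead of on every pixel."""
    rays = []
    lx, ly = light_pos
    for bx, by in border_points:
        ray = []
        for i in range(samples_per_ray):
            t = i / (samples_per_ray - 1)  # 0 at the light, 1 at the border
            ray.append((lx + (bx - lx) * t, ly + (by - ly) * t))
        rays.append(ray)
    return rays
```

The final image is then reconstructed by interpolating between slices, which is where the big sample-count savings over per-pixel marching come from.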


A gross simplification, but I hope it's coherent enough. The Intel paper ends up at only a little over 2ms on a GTX 680, at least at their lowest quality setting.