

Frenetic Pony

Member Since 30 Oct 2011

#5241815 Lightmapping

Posted by Frenetic Pony on 21 July 2015 - 05:57 PM

 

Light-maps are no longer used.
Lighting is all real-time these days with rare minor exceptions.


That is not even close to true.

 

 

"Perfectly true!" I thought to myself. And then I tried to think of all the games recently that do use it, or that are coming out that make use of it, and came up a bit short to be honest.

 

I even looked at the (recently released) top sellers on Steam at the moment: Rocket League: not so far as I can tell? (Edit: maybe yes? I haven't played it.) ARK: nope. Homeworld Remastered: nope. GoT Telltale: yes. GTA V: nope. CS:GO: yes. Witcher 3: nope. Poly Bridge: nope.

 

So while it's certainly still used, I would say it is being phased out somewhat, surprisingly, or perhaps unsurprisingly given the fanaticism for "open world" games. Regardless, Digitalfragment has some good links if the OP is interested.




#5229747 Clamp light intensity

Posted by Frenetic Pony on 18 May 2015 - 08:12 PM

I'm sure you've read the above; that being said: area lights! Or rather, a fast real-time approximation of them, including specular, can help you avoid a lot of your overflow. Here's a good guide that probably includes more than you actually wanted to know about PBR/area lights: http://www.frostbite.com/2014/11/moving-frostbite-to-pbr/

 

There's a great section in there on area lights, including specular, which is just a hack but should help.
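
For reference, the specular hack in question usually looks something like the "representative point" trick for a sphere light (the Frostbite notes build a more accurate version on the same idea). A minimal sketch, assuming a simple sphere light; all the names here are mine, not from the paper:

vec3 SphereLightDir(vec3 worldPos, vec3 N, vec3 V,
                    vec3 lightPos, float lightRadius)
{
    vec3 r = reflect(-V, N);              // mirror reflection direction
    vec3 L = lightPos - worldPos;         // to the light's center
    vec3 centerToRay = dot(L, r) * r - L; // from the center to the ray
    // The closest point on the sphere to the reflection ray becomes the
    // representative light position for the specular BRDF
    vec3 closestPoint = L + centerToRay
        * clamp(lightRadius / length(centerToRay), 0.0, 1.0);
    return normalize(closestPoint);
}

Note you also have to widen/renormalize the specular lobe, as the doc describes, or the highlight energy will be off.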




#5226892 Performance of drawing vegetation; overdraw, alpha testing... Alpha cutout mo...

Posted by Frenetic Pony on 02 May 2015 - 06:32 PM

I think you need to see if your bottleneck is still overdraw, or if it's polycount/draw calls. You can take a look at the paper infinisearch linked; it gives a good overview of how Crytek accomplished that GIF you posted in-game, though note the GIF is from the PC version, I think, and not the older console version.

 

Either way, of course, you're going to have to stop drawing grass eventually. The mesh is just going to get small enough to be a mess of subpixel triangles, and your draw call count is just going to keep going up (lessened by whatever amount you reduce the mesh count). Epic, and probably others, just do things like fade to a terrain texture and call it good enough. Though if you had a mesh like Crytek's, I imagine you could do the same thing hair sims do and reduce the number of meshes drawn while thickening the remaining ones, producing a barely noticeable LOD and keeping the grass from disappearing/flickering.
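
To illustrate that last idea, here's a hypothetical vertex-shader sketch of the "fewer but thicker blades" LOD; every attribute, uniform, and constant in it is an assumption of mine, not something from Crytek or Epic:

#version 330 core
layout(location = 0) in vec3 aPos;        // blade vertex position
layout(location = 1) in vec3 aSide;       // sideways expansion vector of this vertex
layout(location = 2) in float aBladeRand; // per-blade random value in [0,1]

uniform mat4 uViewProj;
uniform vec3 uCameraPos;
uniform float uMaxDrawDist;

void main()
{
    float dist = distance(aPos, uCameraPos);
    float lod  = clamp(dist / uMaxDrawDist, 0.0, 1.0);

    // Keep fewer blades as distance grows...
    float keepFraction = mix(1.0, 0.25, lod);
    if (aBladeRand > keepFraction)
    {
        // Culled blade: push the vertex outside the clip volume
        gl_Position = vec4(2.0, 2.0, 2.0, 1.0);
        return;
    }

    // ...and widen the survivors so overall coverage stays roughly constant
    float widthScale = 1.0 / keepFraction;
    vec3 pos = aPos + aSide * (widthScale - 1.0);
    gl_Position = uViewProj * vec4(pos, 1.0);
}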




#5224235 Localizing image based reflections (issues / questions)

Posted by Frenetic Pony on 18 April 2015 - 04:11 PM

The basic idea of "parallax correction" is used, though judging by your post you may already be doing that: https://seblagarde.wordpress.com/2012/09/29/image-based-lighting-approaches-and-parallax-corrected-cubemap/

There are, as you've found, fundamental problems with using cubemaps for all the things you're trying to do with them. You can also try getting the "distance" to each object in your cubemap by tracing a signed distance field, and the above blog has some ideas on blending between cubemaps, but the basic problem of being limited to essentially projecting from the 2D walls of the cubemap's AABB will always be there. You could also try screenspace raytracing, which can be pretty cheap today, e.g. http://jcgt.org/published/0003/04/04/ but that's going to come with all the problems of being screenspace, and fundamentally you're always going to have errors with cubemaps.
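
For anyone landing here without reading the link, the parallax correction boils down to intersecting the reflection ray with the probe's AABB and sampling the cubemap toward the hit point. A minimal sketch, assuming the shaded point is inside the box; the names are mine:

vec3 ParallaxCorrectedReflection(vec3 worldPos, vec3 R,
                                 vec3 boxMin, vec3 boxMax, vec3 probePos)
{
    // Slab-method ray/AABB intersection; find the exit distance along R
    vec3 planeA = (boxMax - worldPos) / R;
    vec3 planeB = (boxMin - worldPos) / R;
    vec3 furthest = max(planeA, planeB);
    float hitDist = min(min(furthest.x, furthest.y), furthest.z);

    // Sample the cubemap toward the intersection point instead of along R
    vec3 hitPos = worldPos + R * hitDist;
    return hitPos - probePos;
}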


#5222497 A practical real time radiosity for dynamic scene

Posted by Frenetic Pony on 10 April 2015 - 02:36 PM

Actually, I just read a blog post about a game doing exactly what you're doing: https://t.co/gFhJUm1jnU

 

It's all there, if hacky: reflections, ambient occlusion as an approximation of diffuse lighting, etc. Take a look.




#5222361 A practical real time radiosity for dynamic scene

Posted by Frenetic Pony on 09 April 2015 - 11:52 PM

As Hodgman pointed out, global illumination is just a catch-all term for "how light bounces around a scene", and is usually applied to any approximation thereof as well.

 

And unfortunately, doing it "right" is hard, as in computationally intractable for real time if you want an exact solution. What you can do is pick the best hacky approximation you can find and call it good enough. And which approximation you want depends on what your app is doing.

 

The relevant questions I can think to ask are:

How big are the scenes you're going to view?
Do you need reflections, or is diffuse fine?
How dynamic is your scene? Is most of the geometry static and you just want to relight things, or is it fully destructible/changeable?
What type of hardware is it going to run on?
Does it matter if light leaks a lot (goes through walls) or not?

 

There's a very large, and growing, multitude of techniques that all make tradeoffs and cover different aspects of these things. Which one is best for you depends on your answers to the above.




#5218157 *solved* Too slow for full resolution, want to improve this SSAO code

Posted by Frenetic Pony on 21 March 2015 - 04:30 PM

Good results.

For kernel size issues, read the SAO paper: http://graphics.cs.williams.edu/papers/SAOHPG12/

Basically, the main idea is to use a depth buffer with mipmaps generated using a rotated grid for subsampling. This makes the algorithm's performance almost totally independent of kernel radius.

 

You should try more aggressive temporal smoothing for SSAO. That way you can get rid of your most expensive component (blurring) and use all the computation time for additional samples.

 

If you remove blurring, how many samples do you need to get stable results? 32? 64?

 

This paper is excellent, and as far as I know a lot of people have based their SSAO off it for the exact same problems you're having with sample radius.
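
For the curious, the depth-mip trick from the quote boils down to something like this; a sketch roughly following the paper's constants, with names of my own:

// Near samples read mip 0, far samples read coarser mips, so reads stay
// cache-friendly at any kernel radius
const int LOG_Q   = 3; // switch mips every 2^LOG_Q pixels of offset
const int MAX_MIP = 5;

float SampleAODepth(sampler2D depthMips, ivec2 ssPos, float ssOffset)
{
    int mip = clamp(int(floor(log2(ssOffset))) - LOG_Q, 0, MAX_MIP);
    return texelFetch(depthMips, ssPos >> mip, mip).r;
}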

 

Just a quick mention of something that was on here before: trying to just brute force a huge number of samples with random sample rotation off might be worth a try. Depending on what you're bottlenecked by, straight doubling the sample count without random rotation might give you much better results and still be faster.

 

Last thing: for sampling and blurring patterns, the Call of Duty guys did something neat: http://www.iryoku.com/next-generation-post-processing-in-call-of-duty-advanced-warfare They created a predictable and stable random noise texture, making sampling and blurring stable and predictable as a result. Apparently they tried it with SSAO and liked it, but didn't end up shipping it.




#5216975 Image based proxy probe gather

Posted by Frenetic Pony on 16 March 2015 - 06:57 PM

That was a mouthful. Random idea of the day:

 

Typically, today, reflections are done from light probes, placed however you like, that pre-render cubemaps of static geometry and then re-project them locally. A bare handful of games (Remember Me, Life is Strange, The Order: 1886?) also project "image-based proxies" as, essentially, specular shadowing from captured proxies of objects.

 

I.e. capture something like a cubemap of a single game object (six+ axes of depth textures/whatever else you want), then project it as a light, with proper blending/parallax, onto the specular contribution of your scene. The problem with this is of course that it doesn't scale the best, as each object that has this dynamic specular reflection needs to be projected individually.

 

So the idea is: why not have light probes gather all the image-based proxies you want, save them out to a single cubemap per probe, and then you're just back to reading out the same cubemaps you were before? But now your cubemaps get to have dynamic objects in them, dynamic objects that are a lot cheaper to update than re-rendering the entire scene, or even just the one object, I might add.

 

E.g. take the proxies of something like a tree, multiple depth/albedo/normal textures captured around the axes of the model, and render and light that proxy texture (with tessellation hacks? whatever you want) from the viewpoint of whatever light probe the actual object is in. Choose your BRDF sampling as you will and apply it to the probe's rendering, multiple taps for Blinn-Phong or importance sampling or whatever, and there we are. You now have an interactive tree; grow a new one, cut it down or destroy it or whatever, and keep updating the light probe to match.




#5216149 Forward+ vs Deferred rendering

Posted by Frenetic Pony on 12 March 2015 - 03:46 PM

Cool, I think I get it now. Thanks!

 

With forward+ in mind, is there any reason why you would still choose deferred rendering? Also, why couldn't you do tile-based light culling with deferred rendering as well?

 

Because you can just do clustered rendering and do both: http://www.humus.name/Articles/PracticalClusteredShading.pdf

 

It's a very similar idea to tiled rendering: you just extend the tiles into 3D space, and do your standard G-buffer pass for most stuff and a forward loop pass for forward rendered stuff. You get transparency, highly complex materials, screenspace decals, a unified lighting model, etc., plus tighter depth ranges, better light culling, and so on. Which, as MJP pointed out for Forza 5, means you can get away with no Z-prepass and avoid running geometry twice even for the forward rendering pass.
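
The cluster lookup itself is cheap; here's a rough sketch of the usual screen-tile-plus-exponential-depth-slice indexing. The dimensions and names are my own assumptions, not from the Humus slides:

uniform vec2  uScreenSize; // render target size in pixels
uniform float uNear;       // near/far bounds of the clustered depth range
uniform float uFar;
const ivec3 kClusterDim = ivec3(16, 9, 24); // x/y tiles, z slices

int ClusterIndex(vec2 fragCoord, float viewZ)
{
    // Screen-space tile, same as tiled shading...
    ivec2 tile = ivec2(fragCoord / uScreenSize * vec2(kClusterDim.xy));
    // ...plus an exponential depth slice to extend the tiles into 3D
    int slice = int(log(viewZ / uNear) / log(uFar / uNear)
                    * float(kClusterDim.z));
    return (slice * kClusterDim.y + tile.y) * kClusterDim.x + tile.x;
}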

 

Frankly, for storing material properties in a G-buffer, I've found even a single 8-bit channel to be more than a single parameter needs. You can do clever schemes wherein one render target channel is used for multiple material types, splitting the 0-255 range into multiple material descriptions and/or using that channel for differing material types that aren't going to appear on the same model. You don't really need 8-bit precision for metallicity after all; even half that precision, or less, will probably be fine for your artists. Other engines like Lords of the Fallen just use a channel as an 8-bit LUT of materials, and Destiny manages to be extra clever and pack a 10-bit LUT into an 8-bit channel. So if you're clever enough, getting multiple material types into a reasonable G-buffer footprint isn't the hardest thing to do.
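
As a sketch of that range-splitting idea, here's one hypothetical way to share a single 8-bit channel between a 2-bit material ID and a 6-bit parameter; the split and all the names are my own, not from any of the engines mentioned:

float PackMaterial(int matId, float param) // matId in [0,3], param in [0,1]
{
    // High 2 bits: material ID; low 6 bits: quantized parameter
    int bits = (matId << 6) | int(param * 63.0 + 0.5);
    return float(bits) / 255.0; // store in one channel of an RGBA8 target
}

void UnpackMaterial(float encoded, out int matId, out float param)
{
    int bits = int(encoded * 255.0 + 0.5);
    matId = bits >> 6;
    param = float(bits & 63) / 63.0;
}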




#5210571 Lighting of a water surface

Posted by Frenetic Pony on 13 February 2015 - 05:28 PM

What worked rather well for me was "raymarching" a bit to simulate light absorption inside the water volume, multiplied by some artist-chosen color. I used two colors there, one for deep ocean and one for shallow areas.

As for what can be done for additional shading: fake some subsurface scattering. That's better than N.L for stuff like water.

float ComputeSSS(in vec3 vNormal, in vec3 LightDir, in vec3 EyeVec)
{
    // Get a normal shifted towards the eye
    vec3 N = normalize(vNormal + EyeVec * 0.2);
    // Flip the normal and dot with LightDir, so flanks which get lit
    // from behind receive some light
    float LD0 = max(0.0, dot(vec3(-1.0, -1.0, 1.0) * N, LightDir));
    // How much do we look into the light direction?
    float LD1 = max(0.0, dot(-EyeVec, LightDir) * 0.6 + 0.4);
    // How much does the wave flank face towards us?
    float LD2 = clamp(dot(EyeVec, N) * 0.5 + 0.5, 0.0, 1.0);

    // Mix it up until a nice value comes out :D
    float LD3 = LD0 * LD1 * LD2 * LD2;
    // GLSL has no saturate(), so clamp to [0,1] instead
    return clamp(LD3 * LD3 * 4.0 + LD1 * 0.125, 0.0, 1.0);
}

That computes an SSS amount which I use to blend in the shallow ocean water (neat idea from AC3).

Sure, it's more hack than anything, but hell, as long as it looks somewhat convincing and nice... who cares?

 

Yes to something like this! You can look up light extinction coefficients if you want, though in general a wavelength-dependent extinction, with reds being the first to go and blue light lasting the longest, is both correct and the way to go.
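
If you want to go that route, it's just Beer-Lambert per color channel. A minimal sketch; the coefficients below are invented for illustration, so look up measured values for ocean water if you care:

vec3 ApplyWaterExtinction(vec3 color, float waterDepth)
{
    // Red is absorbed fastest, blue lasts longest; these per-meter
    // coefficients are made up, not measured
    const vec3 extinction = vec3(0.46, 0.09, 0.06);
    return color * exp(-extinction * waterDepth);
}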




#5207961 ShadowMapping in the year 2015?

Posted by Frenetic Pony on 31 January 2015 - 05:13 PM

No one likes VSM anymore; shadow leaking sucks, and EVSM is very expensive as a fix. PCF has gotten better with more taps, as well as with smarter sampling patterns a la Advanced Warfare: http://www.iryoku.com/next-generation-post-processing-in-call-of-duty-advanced-warfare
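
For reference, "more taps plus a smarter pattern" usually ends up looking something like a per-pixel rotated Poisson disk. A minimal sketch with a hypothetical four-tap disk; a shipping version would use more taps and a carefully built pattern like the Advanced Warfare one:

const vec2 kPoisson[4] = vec2[](
    vec2(-0.94, -0.40), vec2(0.95, -0.77),
    vec2(-0.09,  0.93), vec2(0.34,  0.29));

float ShadowPCF(sampler2DShadow shadowMap, vec3 shadowCoord,
                float rotation, float filterRadius)
{
    // Rotate the fixed disk by a per-pixel angle (e.g. from a small
    // repeating noise texture) to trade banding for noise
    float s = sin(rotation), c = cos(rotation);
    mat2 rot = mat2(c, s, -s, c);

    float sum = 0.0;
    for (int i = 0; i < 4; ++i)
        sum += texture(shadowMap, vec3(shadowCoord.xy
                       + rot * kPoisson[i] * filterRadius, shadowCoord.z));
    return sum * 0.25; // average of the depth comparison results
}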

 

Regardless, there are a lot of optimizations for depth targets. Here is a good resource: http://www.cse.chalmers.se/~uffe/publications.htm

 

"Efficient Virtual Shadow Maps for Many Lights" is a great paper if you think you can afford the overhead, and apparently there will be a new, improved version of it later this year. Caching shadow maps is also a common idea: http://diaryofagraphicsprogrammer.blogspot.com/2008/12/cached-shadow-maps.html You just store the shadow maps from previous frames and re-use them unless something is actually moving within the light's radius.

 

If you've got a static directional light, then scrolling cascaded shadow maps are a really good idea: https://d3cw3dd2w32x2b.cloudfront.net/wp-content/uploads/2012/08/CSM-Scrolling.pdf It's very similar to caching shadow maps; you just use it for a directional light instead.

 

This is also useful: https://mediatech.aalto.fi/~ari/Publications/Shadow_Caster_Culling_for_Efficient_Shadow_Mapping.pdf a way to cull things from a directional shadow map when they won't cast shadows onto anything on screen.

 

Sample distribution shadow maps can be used for much higher quality (ideally alias-free) directional shadow maps: http://visual-computing.intel-research.net/art/publications/sdsm/sdsmLauritzen_I3D2011.pdf The excellent MJP has a sample out with a simplified (lower overhead) implementation that, I believe, is pretty much what is used in the stunning-looking The Order: 1886, so you can in fact run it in real time in an actual game.

 

This: http://www.crytek.com/download/Playing%20with%20Real-Time%20Shadows.pdf offers a good overview of what Crytek did for Crysis 3...

 

And I guess that's more than enough information to get you going. Epic is experimenting with raytraced shadows accelerated by signed distance fields generated for all their static (non-skinned) meshes, but that quickly gets slower than shadow mapping if your scene gets too complex.




#5201244 [SOLVED] Detail mapping + Parallax = Texture Swimming?

Posted by Frenetic Pony on 02 January 2015 - 02:35 AM

Additionally, I've found out you need to divide the offset by the tiling of the base map to avoid swimming on detail maps when you tile the base map. It just keeps getting more complicated.

 

And then you scale up the parallax factor, and shadow-receiving features look worse and worse. Looking at it, there seem to be a variety of reasons few games ship with parallax mapping, which is a shame since it always starts out seeming like such a nice idea.
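
If I'm reading the quoted fix right, it amounts to something like this sketch, with hypothetical names; offsetUV being the parallax offset computed in base-map UV space:

vec2 DetailUV(vec2 uv, vec2 offsetUV, float baseTiling, float detailTiling)
{
    // offsetUV was computed in base-map UV space; divide by the base
    // tiling to get an un-tiled offset, then re-tile for the detail map
    return (uv + offsetUV / baseTiling) * detailTiling;
}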




#5198879 Occlusion Culling - Combine OctTree with CHC algorithm?

Posted by Frenetic Pony on 17 December 2014 - 09:19 PM

Considering the game, or whatever it is, sounds highly dynamic, occlusion planes don't seem like an option. Besides, avoiding developer overhead is always a good idea; manually setting up occlusion is just more work.

 

Depth occlusion, on the other hand, is a very useful tool, and indeed used by many things. There are also more complex options; DICE was helpful enough to put this out: http://www.slideshare.net/DICEStudio/culling-the-battlefield-data-oriented-design-in-practice and their requirements sound similar to yours in some ways.

 

As for culling point light shadow maps, draw calls are often a bottleneck here, along with polycount. "Efficient Virtual Shadow Maps for Many Lights" extends clustered light culling, the current hotness for realtime lighting optimization anyway, to culling for shadow maps: http://www.cse.chalmers.se/~uffe/ClusteredWithShadows.pdf Unfortunately there's also an overhead to this extension, meaning you'd have to consider just how many shadow maps you want going at once before adopting it.




#5198082 Radiance Question

Posted by Frenetic Pony on 14 December 2014 - 01:33 AM

Radiance: "the flux of radiation emitted per unit solid angle in a given direction by a unit area of a source."

 

A source's emitted photons don't change with distance. What falls off is the light arriving at the receiving surface, the irradiance; assuming a point source, so it's nice and easy, it follows the nice easy inverse square law, i.e. E = I / d^2.
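
In shader terms that's the familiar attenuation one-liner; a trivial sketch with made-up names:

float PointLightIrradiance(float intensity, float distToLight)
{
    // E = I / d^2: received power per unit area falls off with the
    // square of the distance to a point source
    return intensity / (distToLight * distToLight);
}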

 

Be careful with terms like radiance, irradiance, etc. Inverse square is still correct, but the terms often get tangled up, which causes a lot of people to get confused. I mean, radiance and irradiance aren't really self-evident terms, and then there's radiosity and blah blah blah, and different fields use different terms for the same thing. It would be nice if there were just a single set of agreed-upon terms that are all self-evident, but there's not.




#5191633 Rendering clothes

Posted by Frenetic Pony on 07 November 2014 - 03:41 AM

I would go with your first idea and split your mesh up into manageable chunks:

Hands
Body
Legs
Feet
Head

Then not only are you not drawing your player twice, you're ensuring you won't get any skin coming through clothing. I would imagine your base player mesh would be naked; then you can cater for someone wearing jeans with no top, if your application requires that kind of thing.

This is what I'm planning to do for my character.

 

Variations on this are how pretty much everyone does it, so yes, I would indeed recommend it.





