Why do all "professional" / "corporate" marketed laptops have dual-core processors? And why are they so expensive for weaker hardware? Something like an Asus Zenbook, which is marketed more as a "gaming" laptop, is both cheaper and outperforms many "professional" laptops...
It doesn't mention anything about shadow mapping. I assume you would only batch lights together like this if they don't cast shadows? Otherwise you'd end up with a lot of shadow maps.
I assume this is for point lights only. For determining visible light sources in each tile, do you use the depth to recompute view space position and then test against view space light radius? Is there a better way?
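For reference, one common way to do that test (a sketch in host-side C++ terms; the D3D [0,1] depth convention and all names/values below are assumptions, not necessarily what this engine does): reconstruct the view-space position from the hardware depth via the projection constants, then test the light's sphere against the tile's bounds.

```cpp
#include <cassert>
#include <cmath>

struct Float3 { float x, y, z; };

// Recover view-space Z from a D3D-style [0,1] hardware depth value,
// given the projection's near/far planes.
float viewZFromDepth(float d, float nearZ, float farZ)
{
    return nearZ * farZ / (farZ - d * (farZ - nearZ));
}

// Recover the full view-space position. xNdc/yNdc are the sample's NDC
// coordinates in [-1,1]; p00/p11 are the projection matrix's X/Y scale terms.
Float3 viewPosFromDepth(float xNdc, float yNdc, float d,
                        float p00, float p11, float nearZ, float farZ)
{
    float z = viewZFromDepth(d, nearZ, farZ);
    return { xNdc * z / p00, yNdc * z / p11, z };
}

// Coarse per-tile rejection: does the light sphere overlap the tile's
// min/max view-space depth range? (A full test would also check the
// tile's four side planes.)
bool sphereOverlapsDepthRange(float lightZ, float radius,
                              float tileMinZ, float tileMaxZ)
{
    return lightZ + radius >= tileMinZ && lightZ - radius <= tileMaxZ;
}
```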
Do you use a structured buffer for passing all the lights' constants, and then a cbuffer holding the number of lights?
How does blending work in this case when you output from a compute shader? In my normal point-light pass, where I do one light at a time and output lighting from the pixel shader, the additive blending is performed automatically.
Also, when rendering into the shadow map for each face, what do you use for the projection matrix's zNear, and do you use the light's max radius as zFar?
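For reference, a common choice for cube-face shadow projections (an assumption about conventions, not necessarily this engine's setup): 90° fov, square aspect, a small fixed zNear, and the light's max radius as zFar. A minimal D3D-style sketch:

```cpp
#include <cassert>
#include <cmath>

struct Mat4 { float m[4][4]; };

// Row-major D3D-style perspective projection for one cube-map face:
// 90 degree fov, aspect 1, depth mapped to [0,1]. zNear is a small
// hypothetical epsilon; zFar is the light's max radius.
Mat4 cubeFaceProjection(float zNear, float zFar)
{
    Mat4 p = {};
    // cot(fov/2) with fov = 90 degrees is exactly 1, so no X/Y scaling.
    p.m[0][0] = 1.0f;
    p.m[1][1] = 1.0f;
    p.m[2][2] = zFar / (zFar - zNear);
    p.m[2][3] = 1.0f;                           // w = view-space Z
    p.m[3][2] = -zNear * zFar / (zFar - zNear);
    return p;
}

// The [0,1] value written to the shadow map for a given view-space depth.
float projectedDepth(const Mat4& p, float viewZ)
{
    return (p.m[2][2] * viewZ + p.m[3][2]) / viewZ;
}
```

With zFar set to the light's radius, geometry at the edge of the light's influence lands exactly at depth 1, so the shadow map's precision is spent only on the range the light can actually reach.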
The aliasing when moving is called shimmering; there should be some good Google hits. Higher resolution and tighter bounds help, but you can never fully eliminate the problem. I believe making the camera move only in shadow-map-texel-sized increments will help too.
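The texel-sized-increments idea can be sketched like this (hypothetical numbers; the trick is usually described for orthographic shadow maps, where you snap the shadow frustum origin rather than the camera itself):

```cpp
#include <cassert>
#include <cmath>

// Snap a world-space shadow-frustum coordinate to whole shadow-map texels.
// worldUnitsPerTexel = shadow frustum width / shadow map resolution.
float snapToTexel(float coord, float worldUnitsPerTexel)
{
    return std::floor(coord / worldUnitsPerTexel) * worldUnitsPerTexel;
}
```

Because small camera movements map to the same snapped origin, the rasterized shadow-map samples stay put from frame to frame instead of swimming.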
I've never had aliasing issues bad enough to warrant CSM for point lights. What's the radius of your point light?
If you assume that linear->sRGB and sRGB->linear conversion is done by dedicated circuitry, and is therefore free, then it's not something to worry about.
Mark source texture views, GBuffer render-target, GBuffer texture view, and backbuffer render-target as sRGB:
(Textures) -- sRGB->Linear --> [To GBuffer Shader] -- Linear->sRGB --> (GBuffer) -- sRGB->Linear --> [Lighting shader] -- no change --> (Lighting buffer) -- no change --> [Tonemap] -- Linear->sRGB --> (Backbuffer)
Mark source texture views and GBuffer render-target as Linear (even though they're not!), and mark GBuffer texture view and backbuffer render-target as sRGB:
(Textures) -- no change --> [To GBuffer Shader] -- no change --> (GBuffer) -- sRGB->Linear --> [Lighting shader] -- no change --> (Lighting buffer) -- no change --> [Tonemap] -- Linear->sRGB --> (Backbuffer)
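For concreteness, what each "sRGB->Linear" / "Linear->sRGB" arrow applies is the standard piecewise sRGB transfer function, sketched per-channel below (the hardware does this in the dedicated units mentioned earlier, so you never write it yourself in this setup):

```cpp
#include <cassert>
#include <cmath>

// Decode an sRGB-encoded channel value in [0,1] to linear light.
float srgbToLinear(float c)
{
    return c <= 0.04045f ? c / 12.92f
                         : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

// Encode a linear channel value in [0,1] back to sRGB.
float linearToSrgb(float c)
{
    return c <= 0.0031308f ? c * 12.92f
                           : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}
```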
Awesome, that was exactly what I was looking for.
As long as it's guaranteed to be a free operation (maybe only older cards don't have this feature? or newer but cheaper ones?), the first option seems clearer / less deceptive.
I've got VS 2013 Pro and currently have one HLSL source file for each PCF kernel, for example pcf2x2.hlsl, pcf3x3.hlsl, pcf5x5.hlsl, ...
VS compiles them automatically at build time, but this leads to code redundancy and is unpleasant to work with. It would be much better to have a single source file and recompile it with different macros - but how do you make VS recompile the same file several times?
Is it a sound idea to do view frustum culling for all 6 faces of a point light? For example, my RenderablePointLight has a collection of meshes for each face.
Is this about a shadow-casting point light which renders a shadow map for each face?
If your culling code has to walk the entire scene, or a hierarchical acceleration structure (such as a quadtree or octree), it will likely be faster to do one spherical culling query first to get all the objects associated with any face of the point light, then test only those against the individual face frustums. Profiling will reveal whether that's the case.
If it's not a shadow-casting light, you shouldn't need to bother with faces, but just do a single spherical culling query to find the lit objects.
Yeah, it's for shadow mapping. How do you do a spherical culling query?
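A spherical culling query is just a sphere-overlap test against each object's bounding volume (or against the nodes of a spatial hierarchy, if you have one). A minimal brute-force sketch with hypothetical types, assuming objects carry bounding spheres:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3   { float x, y, z; };
struct Sphere { Vec3 center; float radius; };

static float dist2(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Phase 1: gather the indices of every object whose bounding sphere
// touches the light's sphere of influence. Phase 2 would then test only
// these survivors against each face's frustum planes.
std::vector<size_t> sphereQuery(const std::vector<Sphere>& objects,
                                const Sphere& light)
{
    std::vector<size_t> hits;
    for (size_t i = 0; i < objects.size(); ++i) {
        float r = objects[i].radius + light.radius;
        // Two spheres overlap when the distance between centers is at
        // most the sum of their radii (compared squared to avoid sqrt).
        if (dist2(objects[i].center, light.center) <= r * r)
            hits.push_back(i);
    }
    return hits;
}
```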