Posted 16 December 2011 - 05:29 AM
Start looking for another artist ...
What are your thoughts?
Posted 16 December 2011 - 05:37 AM
Every time you add a boolean member variable, God kills a kitten. Every time you create a Manager class, God kills a kitten. Every time you create a Singleton...
Posted 16 December 2011 - 11:22 AM
Very workable. You can even recycle your actual illumination buffers for this purpose, and then draw light contributions in on top of the emissive. It's some extra draw calls, sure, but that shouldn't be too bad unless you have a *lot* of emissive objects.
If the majority of your objects are *not* emissive, I would recommend rendering emissive surfaces (as well as other effects like environment mapping) in a separate pass after the deferred light rendering. The Z-buffer will already be primed, so you should get early-Z rejection for occluded objects.
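The compositing both replies describe boils down to adding the emissive term on top of the lit result, unaffected by lighting. A minimal per-channel sketch in Python (the function name and values are mine, purely for illustration):

```python
def shade(albedo, light, emissive):
    """Combine a lit surface with its emissive term, per channel.

    The diffuse albedo is modulated by incoming light; the emissive
    term is simply added on top, so it shows even in darkness.
    """
    return tuple(a * l + e for a, l, e in zip(albedo, light, emissive))

# A mid-grey surface in dim light, with a red glow:
print(shade((0.5, 0.5, 0.5), (0.2, 0.2, 0.2), (0.3, 0.0, 0.0)))
# -> (0.4, 0.1, 0.1)
```

This is why drawing emissive contributions into the illumination buffer "on top" works: addition is order-independent, so the extra draw calls can happen before or after the light accumulation.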
Posted 17 December 2011 - 12:23 PM
Have you considered doing pre-pass lighting? It's basically the same as what you conclude, but for all objects in the scene. It uses fewer render targets than classic deferred rendering (usually just one, if you have hardware support for reading the depth buffer), and also enables more variety in the materials (which is sorta the problem you're having).
Drawing the emissive mesh a second time is probably a nice approach (if there aren't too many of them), as it gives you the most flexibility. Imagine extruding the triangles and using some fancy-pants shader: you could probably get a volumetric glow rather than a flat emissive.
Posted 19 December 2011 - 12:41 AM
Well, you want to keep your artist. Then this is the way I do it in my engine:
That's a really good idea. That's essentially what I was hoping to do: store some coefficient that transforms the diffuse into the emissive factor.
Sort of like barycentric light coordinates, where a 0.25 would indicate that the stored color is 0.25 of the way between diffuse and emissive.
So neither the diffuse nor the emissive value would be stored directly; rather, the barycentric value and the coordinate would be.
I'm not sure that's even possible, but your solution is almost exactly that, assuming the only difference between the two colors is intensity.
Thank you. I'll run that by them.
I'm using unpacked because each channel is only using 8 bits.
I'm pretty sure we could spare another render target, but I would be proud of the implementation if it only needed two targets. It would also obviously be significantly better with fewer render targets, in terms of memory, texture fetches, and resolve times.
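For a rough sense of what dropping a target buys: each uncompressed RGBA8 target costs width × height × 4 bytes, so at 1280×720 (an assumed resolution, not one mentioned in the thread) each target you avoid saves about 3.5 MB plus its share of bandwidth:

```python
def target_bytes(width, height, bytes_per_pixel=4):
    """Memory for one uncompressed render target (RGBA8 = 4 bytes/pixel)."""
    return width * height * bytes_per_pixel

# One RGBA8 target at 720p:
print(target_bytes(1280, 720))  # -> 3686400 bytes, roughly 3.5 MB
```

Multiply by the target count (and again for MSAA samples, where applicable) to compare the two-target and three-target layouts.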
RGBA8: diffuse RGB 8 bits each, 8 bit material index A
RGBA8: compressed normal xy 8 bits each, 8 bit material index B, 8 bit alpha value

// lighting pass pseudo shader code:
vec4 materialA = materials[material_index_A]
vec4 materialB = materials[material_index_B]
vec4 material = interpolate(materialA, materialB, alpha)
final_light = diffuse * (light_factor * material.diffuse_channel + material.emissive_channel)
            + diffuse * (pow(spec_factor, material.spec_exponent) * material.gloss)
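The pseudo shader above can be sanity-checked on the CPU. Here's a Python transliteration; the material table entries are made-up values for illustration, not part of the original post:

```python
def interpolate(a, b, t):
    """Component-wise lerp between two material vectors."""
    return [x + (y - x) * t for x, y in zip(a, b)]

# Material table: [diffuse_channel, emissive_channel, spec_exponent, gloss]
materials = [
    [1.0, 0.0, 16.0, 0.5],  # index 0: plain lit surface (made-up values)
    [0.2, 1.0, 1.0, 0.0],   # index 1: mostly emissive surface (made-up)
]

def lighting_pass(diffuse, material_index_A, material_index_B, alpha,
                  light_factor, spec_factor):
    """Mirror of the pseudo shader: blend two materials, then light."""
    materialA = materials[material_index_A]
    materialB = materials[material_index_B]
    diff_ch, emis_ch, spec_exp, gloss = interpolate(materialA, materialB, alpha)
    return [d * (light_factor * diff_ch + emis_ch)
            + d * (spec_factor ** spec_exp) * gloss
            for d in diffuse]

# alpha = 0 picks material 0 outright; no specular contribution:
print(lighting_pass([0.5, 0.5, 0.5], 0, 1, 0.0, 1.0, 0.0))
# -> [0.5, 0.5, 0.5]
```

Note the specular term is also tinted by diffuse here, exactly as in the pseudo code; whether that tinting is intended (rather than using a white specular) is worth double-checking.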