Like Promit, I have a somewhat hacky preprocessor which manages #includes but not much else. For experimental and small scale projects you don't really need anything more. A serious renderer would probably require a more complete system for things like generating shader permutations. External #defines are a good mechanism for that sort of thing.
The main issue is that the 'features' generated are just copies of the bright spots as sampled from the source image, not a function of the lens aperture as they should be. I'm interested in any cunning ideas for overcoming this problem; until then it's best kept as a subtle addition to sprite-based effects.
I put together a hybrid method, rendering transparent/alpha-blended materials in one or more separate passes using deferred shading. It's a bit more flexible than using forward rendering and has the bonus of unifying the lighting between the opaque and transparent steps. As for order independence... ShaderX7 describes an interlacing method for deferred transparency; it could be used to do OIT but is very limiting on the number of per-pixel transparent layers.
Mainly regarding (3): I get pretty good results doing as follows:
1) Do a 'bright pass' (downscale + threshold)
2) Take the result and flip left-right top-bottom (as Hodgman says)
3) Apply a radial blur to the flipped bright pass
4) Blend this with the original, unflipped bright pass
5) Apply a gaussian blur to the whole thing
6) Upscale and blend with the original image, modulating with a lens dirt texture
I actually apply another threshold at step 2 so that only very bright pixels get 'flipped'. The flipped, radially blurred results give a soft 'pseudo lens flare' which works best when it's made quite subtle (hence the secondary threshold). The lens dirt texture gives the whole thing an organic look and hides any artifacts (e.g. caused by using a low number of samples for the radial blur). I've included the lens texture I made and some examples; I think overall it works better being more subtle.
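Steps 2-4 above can be sketched on the CPU with NumPy (the real thing would be a fragment shader; function names, sample count and blur strength here are illustrative, and the image is a single-channel bright pass):

```python
import numpy as np

def pseudo_flare(bright, num_samples=8, strength=0.9):
    # Step 2: flip the thresholded bright pass left-right and top-bottom.
    flipped = bright[::-1, ::-1]
    # Step 3: radial blur - for each pixel, accumulate samples along the
    # line from that pixel towards the image centre.
    h, w = flipped.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(flipped, dtype=np.float64)
    for i in range(num_samples):
        t = strength * i / num_samples  # 0 at the pixel, -> towards centre
        sy = np.clip((ys + (cy - ys) * t).round().astype(int), 0, h - 1)
        sx = np.clip((xs + (cx - xs) * t).round().astype(int), 0, w - 1)
        out += flipped[sy, sx]
    out /= num_samples
    # Step 4: blend the radially blurred, flipped result with the
    # original, unflipped bright pass.
    return 0.5 * (bright + out)
```

A bright spot in one corner produces a soft smear in the diagonally opposite corner, which is the 'pseudo lens flare' effect; the gaussian blur and lens dirt modulation (steps 5-6) would follow.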
As for elements (1) and (2): looking closely in BF3 I don't think the lens flare sprites are modulated by the lens dirt texture. Part of me thinks they should be...
Sorry; I think I misread your question. Within a smoothing group triangles can share vertices; the normal at each vertex is some function of the normal(s) of the face(s) to which that vertex belongs - averaging the face normals will work perfectly for this.
You need to duplicate any vertices that are shared between smoothing groups; they will have different normals. So, for example, if 2 smoothing groups meet at a single edge, the vertices on that edge will need to be duplicated.
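A sketch of that duplication: accumulate face normals per (vertex, smoothing group) pair, so a vertex shared between two groups automatically becomes two output vertices, each with its own averaged normal (function names are mine):

```python
import math

def face_normal(a, b, c):
    # Normalized cross product of two edge vectors.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(x * x for x in n)) or 1.0
    return [x / length for x in n]

def smooth_normals(positions, faces):
    # faces: list of (i0, i1, i2, smoothing_group) tuples.
    # Returns (new_positions, normals, new_faces); vertices shared
    # between smoothing groups are duplicated.
    accum = {}  # (vertex index, group) -> summed face normal
    for i0, i1, i2, grp in faces:
        n = face_normal(positions[i0], positions[i1], positions[i2])
        for i in (i0, i1, i2):
            s = accum.setdefault((i, grp), [0.0, 0.0, 0.0])
            for k in range(3):
                s[k] += n[k]
    # Build the (possibly duplicated) vertex list and remap the faces.
    remap, new_pos, normals = {}, [], []
    for key, total in accum.items():
        length = math.sqrt(sum(x * x for x in total)) or 1.0
        remap[key] = len(new_pos)
        new_pos.append(positions[key[0]])
        normals.append([x / length for x in total])
    new_faces = [tuple(remap[(i, grp)] for i in (i0, i1, i2))
                 for i0, i1, i2, grp in faces]
    return new_pos, normals, new_faces
```

Two triangles sharing an edge in different groups yield six output vertices (the shared edge's two vertices get duplicated); in the same group they yield four.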
Ultimately though, what I'm currently doing is a temporary hack more than a final solution. I would ultimately like to find a way to have the object glow, but still receive color from other nearby glowing objects. Like it glows a particular color, but you see traces of another color on one side from another object nearby. That is the final result I want, but I don't yet have any idea how to get there.
Actually you could do as I said before but copy emissive pixels into the lighting results buffer before the lighting stage and accumulate light contributions on top of the emissive pixels. This could be problematic if you're using a light at the centre of the object, though; since that light is simulating (cheating) the light being emitted at the surface of the glowing object, it shouldn't affect the object itself. You could account for this by adjusting the emissive material by pre-subtracting the light source's colour from the emissive material's diffuse colour. This isn't all that flexible, though.
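As a tiny sketch of that pre-subtraction trick (the function name is mine; colours are RGB triples in [0, 1]):

```python
def pre_subtracted_emissive(emissive_colour, proxy_light_colour):
    # The point light at the object's centre is a cheat standing in for
    # light emitted at the surface, so it shouldn't brighten the object
    # itself. Subtract its colour up front: when its contribution is
    # accumulated back on top of the copied emissive pixels, the surface
    # ends up at (roughly) the intended emissive value.
    return [max(0.0, e - l)
            for e, l in zip(emissive_colour, proxy_light_colour)]
```

As noted, this is a hack: the clamp at zero and the light's attenuation both mean the cancellation is only approximate.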
I'm not sure I understand what you're trying to achieve; is it a very subtle, stylized glow or an emissive material? In the latter case I think that you can safely ignore contributions from other light sources.
One way to do it would be to mark your 'glowing' materials in the g-buffer (as you're doing), then copying the diffuse colour of the marked pixels from the g-buffer into the final buffer after the lighting results have been accumulated.
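Per pixel, that copy step might look something like this (a CPU-side sketch; in practice it would be a full-screen pass reading a 'glow' mask from the g-buffer):

```python
def composite_glow(lit_buffer, gbuffer_diffuse, glow_mask):
    # After the lighting results have been accumulated, overwrite any
    # pixel marked as 'glowing' with its diffuse colour from the
    # g-buffer; everything else keeps its lit result.
    return [diffuse if marked else lit
            for lit, diffuse, marked in
            zip(lit_buffer, gbuffer_diffuse, glow_mask)]
```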
I think the best way would be to keep it resolution-agnostic; use game units. Offset the path of each bullet by a random angle, using your accuracy metric (weapon precision, player's current speed, etc.) to control the size of the offset. You'd then work backwards into screen space to get the final size of the crosshair, which I suppose should be the angular diameter of the area in which the bullet might hit.
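A sketch of the maths (angles in radians; function names are mine, and I'm assuming a symmetric perspective projection):

```python
import math
import random

def random_bullet_offset(max_angle, rng=random):
    # Random deviation from the aim direction: a polar angle within the
    # cone of fire (sized by the accuracy metric) plus a uniform azimuth
    # around the view axis.
    return rng.uniform(0.0, max_angle), rng.uniform(0.0, 2.0 * math.pi)

def crosshair_radius_px(spread_angle, fov_y, screen_height_px):
    # Work backwards into screen space: project the cone's half-angle
    # through the perspective projection to get an on-screen radius in
    # pixels for drawing the crosshair.
    return (screen_height_px / 2.0) * math.tan(spread_angle) / math.tan(fov_y / 2.0)
```

With zero spread the crosshair collapses to a point; a spread equal to half the vertical FOV fills half the screen height.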
The orientation of the z component of the normals will depend on the handedness of the coordinate system you are using; it looks like your view space is left-handed, since +x goes to the right, +y goes up and +z goes into the screen (hence you see very little blue).
As for the colours not "blending as well" - it's because you're not remapping into [0,1] to visualize the normals.
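The remap is just a scale and bias per component, e.g.:

```python
def encode_normal(n):
    # Remap each component of a unit normal from [-1, 1] into [0, 1]
    # before writing it to an unsigned colour buffer (or visualizing
    # it); decode with 2 * c - 1 when sampling it back.
    return [0.5 * c + 0.5 for c in n]
```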
First, your diffuse has all perfectly lit green grass. So if you have sunlight, you should calculate that first and put it in your diffuse buffer as well.
I think this may just cause more confusion, and in any case I'd say it's a good idea to maintain complete separation of the g-buffer/lighting stages. Are you talking about having a 'sun occlusion' factor (stored in the alpha channel or something), or actually pre-multiplying the diffuse component? The former doesn't work for dynamic objects, and the latter doesn't work at all.
1. The albedo texture should be a combination of the material's diffuse property and the object's texture?
2. No ambient material should be needed, since it is usually the same as the diffuse property. Later I can add SSAO.
3. And now there will be no emissive or specular colour, but instead a factor that will be multiplied by the light properties?
The only question that remains for me now is: if the materials are now like this, will the light sources' properties be similar, or do they still retain their RGBA values?
1. I would say that an object's texture is its material's diffuse property, i.e. the colour which modulates any reflected, diffuse light. So your grass texture represents the final colour you'd see if there was a 100% white light shining on it.
2. You could have a separate ambient colour if you wanted, but you'd need another set of RGB components in the g-buffer. As you said, however, the ambient reflectance of a material is usually the same as its diffuse, so you can just use the diffuse property to modulate any ambient light.
3. As with ambient, you could have separate emissive/specular colours but, again, you'd need a bigger g-buffer. For the colour of a specular reflection it's usually good enough to simply use the light's colour.
The light source properties can be as many or few as you need, since these will be passed directly into the shader and not stored in any intermediate buffers. So you could have RGB colours for separate ambient/diffuse/specular light or just a single colour for all three.
I should point out that the example I've given of the g-buffer setup is pretty arbitrary. Basically you need the g-buffer to store whatever you need in order to do lighting. If you look through the various references you'll see lots of different g-buffer configurations; none of them is the 'right' or 'wrong' way to do it. It entirely depends on what properties you need during the deferred lighting pass. Understanding the lighting model you're going to use is key; make sure you're on top of Phong and know what all the inputs are doing.
Ahh, sorry if i've confused you more. Perhaps I should have made it more clear: the values in the g-buffer are per-pixel material properties. By specular intensity/power I mean the shininess/glossiness of the material, not the specular coefficient.
The material properties are stored in the g-buffer. The diffuse albedo, specular intensity/power and emissiveness values in the example I gave are what I was referring to when I said 'material properties.'
You can use the ordinary Phong formulae to compute the light_ambient_level/light_diffuse_level/light_specular_level factors and use them as per my previous post. The only difference is in which material properties are available from the g-buffer, so you'll notice that (in the example) I used the material's diffuse colour to modulate the ambient result (because there's no material ambient colour) and that the final specular value only uses the light's colour (because there's no material specular colour).
Although it's probably possible to pack an RGBA value into a single two-byte component, you'd lose a lot of precision, since you're halving the number of bits representing each component. I can only assume that you're thinking in terms of the way that 'full' Phong lighting treats materials, with separate colour values for ambient/diffuse/specular/emissive. This is overkill, especially for a deferred renderer where you preferably want to limit the memory cost of the g-buffer. The way in which you slim down the material properties depends on what kinds of materials you want to render and how much 'space' (i.e. unused components) you've got in the g-buffer. You could use a whole render target to store material properties, or elbow them into unused components on other targets (most of the references do this).
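To see the precision loss concretely, here's a sketch of packing RGBA into 16 bits at 4 bits per channel (function names are mine):

```python
def pack_rgba4(r, g, b, a):
    # Quantize each [0, 1] channel to 4 bits and pack into 16 bits.
    # Only 16 distinct levels survive per channel.
    q = lambda x: max(0, min(15, int(round(x * 15.0))))
    return (q(r) << 12) | (q(g) << 8) | (q(b) << 4) | q(a)

def unpack_rgba4(packed):
    # Recover the quantized [0, 1] channel values.
    return [((packed >> shift) & 0xF) / 15.0 for shift in (12, 8, 4, 0)]
```

Values like 0.0 and 1.0 round-trip exactly, but anything in between snaps to the nearest of 16 levels, which is why halving the bits per component hurts so much.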
So a simple g-buffer layout might be something like this:
Target0: RGB = diffuse albedo, A = specular power
Target1: RGB = normal xyz, A = specular intensity
Target2: RGB = position xyz, A = emissiveness
You render these values out at the g-buffer stage. Then, at the lighting stage, you clear the output framebuffer and render the lights. Each light you render taps into the g-buffer targets to get the required data. Obviously, since the material properties have been slimmed down, you'll need to use a modified lighting equation. So, for the example g-buffer, you might do:
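A minimal sketch of what that modified equation might look like for a single point light, using only the properties stored in the example g-buffer above (Blinn-Phong style; function and parameter names are illustrative):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v)) or 1.0
    return [x / length for x in v]

def shade_pixel(light_pos, light_colour, albedo, normal, position,
                spec_intensity, spec_power, emissiveness, eye_pos):
    # albedo/normal/position/spec_*/emissiveness all come from the
    # g-buffer taps; light_pos/light_colour/eye_pos are shader uniforms.
    n = normalize(normal)
    l = normalize([lp - p for lp, p in zip(light_pos, position)])
    v = normalize([e - p for e, p in zip(eye_pos, position)])
    h = normalize([a + b for a, b in zip(l, v)])  # half vector
    ndotl = max(0.0, dot(n, l))
    spec = spec_intensity * (max(0.0, dot(n, h)) ** spec_power
                             if ndotl > 0.0 else 0.0)
    out = []
    for c in range(3):
        diffuse = albedo[c] * light_colour[c] * ndotl
        # No material specular colour in the g-buffer: use the
        # light's colour only.
        specular = light_colour[c] * spec
        # Emissive reuses the diffuse albedo, scaled by emissiveness.
        emissive = albedo[c] * emissiveness
        out.append(diffuse + specular + emissive)
    return out
```

Note that when accumulating many lights additively you'd only want the emissive term contributed once (e.g. in a separate pass), not once per light; I've left it inline here for clarity.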
Clearly this is less flexible than 'full' Phong. One of the drawbacks of deferred rendering is that the materials/lighting model tends to be very rigid, since the inputs to the lighting stage (the g-buffer targets) have a fixed format. However, with a bit of cunning you can come up with a materials/lighting system which supports the gamut of materials that you want to render.
I'm not sure what dpadam450 means by "put the full screen diffuse to the screen." The output of the first stage is the g-buffer, which specifies the material properties. This is an input to the deferred stage, in which lights are rendered and the final, shaded pixels accumulate into the final output buffer (either the back buffer or another target for post-processing).
Also, if you're using OpenGL >= 3.0 you can use render targets of different formats (but not sizes).