

Member Since 24 Nov 2009
Offline Last Active Dec 12 2015 02:21 AM

#5151704 Screenshot of your biggest success/ tech demo

Posted by johnchapman on 05 May 2014 - 03:05 PM

Here's a tech demo I "completed" (i.e. stopped working on) a couple of years back. It has some nice features, although it's quite dated now:




Same software, with a more organic-looking scene:




#5078889 Do you extend GLSL?

Posted by johnchapman on 19 July 2013 - 05:01 AM

Like Promit, I have a somewhat hacky preprocessor which manages #includes but not much else. For experimental and small scale projects you don't really need anything more. A serious renderer would probably require a more complete system for things like generating shader permutations. External #defines are a good mechanism for that sort of thing. 

#5075260 Yet another Deferred Shading / Anti-aliasing discussion...

Posted by johnchapman on 04 July 2013 - 09:06 AM

There's an adaptive supersampling approach outlined here which may also be of interest.

#4920770 Screen Space 'Pseudo' Lens Flare

Posted by johnchapman on 09 March 2012 - 03:56 PM

After posting in a recent topic on lens flare, I revisited my quick implementation and made some improvements, which I've outlined in a wee tutorial here. The technique is based on MJP's old blog post about Killzone 2, plus some inspiration from BF3. Here's a video of it in action:


The main issue is that the 'features' generated are just copies of the bright spots as sampled from the source image, not a function of the lens aperture as they should be. I'm interested in any cunning ideas for overcoming this problem; until then it's best kept as a subtle addition to sprite-based effects.

#4915926 Deferred Rendering and Transparent Objects

Posted by johnchapman on 23 February 2012 - 12:17 PM

I put together a hybrid method, rendering transparent/alpha-blended materials in a separate pass (or passes) using deferred shading. It's a bit more flexible than using forward rendering and has the bonus of unifying the lighting between the opaque and transparent steps. As for order-independence... ShaderX7 describes an interlacing method for deferred rendering transparency; it could be used to do OIT but is very limiting on the number of per-pixel transparent layers.

#4915607 BF3 Lens flares

Posted by johnchapman on 22 February 2012 - 01:11 PM

Mainly regarding (3): I get pretty good results doing as follows:
1) Do a 'bright pass' (downscale + threshold)
2) Take the result and flip left-right top-bottom (as Hodgman says)
3) Apply a radial blur to the flipped bright pass
4) Blend this with the original, unflipped bright pass
5) Apply a gaussian blur to the whole thing
6) Upscale and blend with the original image, modulating with a lens dirt texture

I actually apply another threshold at step 2 so that only very bright pixels get 'flipped'. The flipped, radially blurred results give a soft 'pseudo lens flare' which works best when it's made quite subtle (hence the secondary threshold). The lens dirt texture gives the whole thing an organic look and hides any artifacts (e.g. caused by using a low number of samples for the radial blur). I've included the lens texture I made and some examples; I think overall it works better being more subtle.
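For illustration, the six steps above can be toyed with as CPU image operations (a NumPy sketch, not the actual shader: thresholds, the sample count and the omitted gaussian blur are placeholders):

```python
import numpy as np

def threshold(img, t):
    # 'bright pass': keep only the energy above the threshold
    return np.maximum(img - t, 0.0)

def radial_blur(img, samples=8, strength=0.1):
    # crude radial blur: average progressively shrunken copies about the centre
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    acc = np.zeros_like(img)
    for i in range(samples):
        s = 1.0 - strength * i / samples  # scale factor towards the centre
        sy = np.clip(((ys - h / 2) * s + h / 2).astype(int), 0, h - 1)
        sx = np.clip(((xs - w / 2) * s + w / 2).astype(int), 0, w - 1)
        acc += img[sy, sx]
    return acc / samples

def pseudo_flare(src, dirt, t1=0.5, t2=0.2):
    bright  = threshold(src, t1)                  # 1) bright pass
    flipped = threshold(bright, t2)[::-1, ::-1]   # 2) secondary threshold + flip
    flare   = radial_blur(flipped)                # 3) radial blur
    flare  += bright                              # 4) blend with unflipped bright pass
    # 5) a gaussian blur over `flare` would go here (omitted for brevity)
    return src + flare * dirt                     # 6) modulate by lens dirt, add to image
```

In the real thing each step is of course a fullscreen shader pass over downscaled render targets; the sketch just shows how the energy flows through the pipeline.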

As for elements (1) and (2): looking closely in BF3 I don't think the lens flare sprites are modulated by the lens dirt texture. Part of me thinks they should be...

Attached Thumbnails

  • flare_results.jpg
  • lens_dirt.jpg

#4915585 What normal to use when triangles are in smoothing groups?

Posted by johnchapman on 22 February 2012 - 12:22 PM

Sorry; I think I misread your question. Within a smoothing group triangles can share vertices; the normal at each vertex is some function of the normal(s) of the face(s) to which that vertex belongs - averaging the face normals will work perfectly for this.
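The averaging can be sketched like this (Python/NumPy, illustrative only; accumulating the unnormalised cross products gives a free area weighting, which you may or may not want):

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Per-vertex normals as the (area-weighted) average of adjacent face normals.

    vertices: (V, 3) float array; faces: iterable of (i0, i1, i2) index triples.
    """
    vertices = np.asarray(vertices, dtype=float)
    normals = np.zeros_like(vertices)
    for a, b, c in faces:
        # unnormalised cross product = face normal scaled by twice the triangle area
        fn = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
        normals[[a, b, c]] += fn
    # normalise the accumulated sums (guarding against unreferenced vertices)
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0, lengths, 1)
```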

#4915488 What normal to use when triangles are in smoothing groups?

Posted by johnchapman on 22 February 2012 - 08:04 AM

You need to duplicate any vertices that are shared between smoothing groups; they will have different normals. So, for example, if 2 smoothing groups meet at a single edge, the vertices on that edge will need to be duplicated.
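As an illustrative sketch (Python; the names are hypothetical), the duplication can be driven by keying new vertices on the (original vertex index, smoothing group) pair:

```python
def split_by_smoothing_group(faces, groups):
    """Duplicate vertices shared between smoothing groups.

    faces:  list of (i0, i1, i2) vertex-index triples
    groups: smoothing-group id per face
    Returns the remapped faces plus, for each new vertex, the original index
    it was copied from (use this to copy positions/UVs into the new buffer).
    """
    remap = {}       # (original index, group id) -> new index
    sources = []     # new index -> original index
    new_faces = []
    for face, g in zip(faces, groups):
        tri = []
        for v in face:
            key = (v, g)
            if key not in remap:
                remap[key] = len(sources)
                sources.append(v)
            tri.append(remap[key])
        new_faces.append(tuple(tri))
    return new_faces, sources
```

Two triangles sharing an edge within the same group keep sharing those vertices; across groups the edge vertices come out duplicated, so each copy can carry its own normal.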

#4912333 Trying to make an object glow

Posted by johnchapman on 12 February 2012 - 02:39 PM

> Ultimately though, what I'm currently doing is a temporary hack more than a final solution. I would ultimately like to find a way to have the object glow, but still receive color from other nearby glowing objects. Like it glows a particular color, but you see traces of another color on one side from another object nearby. That is the final result I want, but I don't yet have any idea how to get there.

Actually you could do as I said before but copy emissive pixels into the lighting results buffer before the lighting stage and accumulate light contributions on top of the emissive pixels. This could be problematic if you're using a light at the centre of the object, though; since that light is simulating (cheating) the light being emitted at the surface of the glowing object, it shouldn't affect the object itself. You could account for this by adjusting the emissive material by pre-subtracting the light source's colour from the emissive material's diffuse colour. This isn't all that flexible, though.

I'm not sure I understand what you're trying to achieve; is it a very subtle, stylized glow or an emissive material? In the latter case I think that you can safely ignore contributions from other light sources.

#4912223 Trying to make an object glow

Posted by johnchapman on 12 February 2012 - 03:32 AM

One way to do it would be to mark your 'glowing' materials in the g-buffer (as you're doing), then copy the diffuse colour of the marked pixels from the g-buffer into the final buffer after the lighting results have been accumulated.
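The copy itself is just a masked overwrite; a NumPy sketch for illustration (buffer names hypothetical, treating the buffers as CPU arrays rather than render targets):

```python
import numpy as np

def resolve_emissive(lit, albedo, emissive_mask):
    """After light accumulation, overwrite pixels flagged as emissive in the
    g-buffer with their (unlit) diffuse colour.

    lit:           (H, W, 3) accumulated lighting results
    albedo:        (H, W, 3) g-buffer diffuse colours
    emissive_mask: (H, W) boolean, True where the material was marked emissive
    """
    out = lit.copy()
    out[emissive_mask] = albedo[emissive_mask]
    return out
```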

#4857128 FPS "Accuracy" value that affects the size of your crosshair

Posted by johnchapman on 03 September 2011 - 07:04 AM

I think the best way would be to keep it resolution-agnostic: use game units. Offset the path of each bullet by a random angle, using your accuracy metric (weapon precision, player's current speed, etc.) to control the size of the offset. You'd then work backwards into screen space to get the final size of the crosshair, which I suppose should be the angular diameter of the area in which the bullet might hit.
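A rough Python sketch of both halves — the random angular offset and working backwards to a pixel radius (assuming the crosshair sits at the centre of a standard perspective projection; all names illustrative):

```python
import math, random

def random_spread(spread_deg, rng=random):
    """Random yaw/pitch offsets (in degrees) uniformly distributed over a
    disc of angular radius spread_deg -- apply these to the bullet direction."""
    theta = rng.uniform(0.0, 2.0 * math.pi)   # direction of the offset
    r = spread_deg * math.sqrt(rng.random())  # sqrt -> uniform over the disc
    return r * math.cos(theta), r * math.sin(theta)

def crosshair_radius_px(spread_deg, vfov_deg, screen_height_px):
    """Pixel radius subtended by the spread cone at the screen centre, for a
    perspective projection with vertical field of view vfov_deg."""
    half = screen_height_px / 2.0
    return half * math.tan(math.radians(spread_deg)) / math.tan(math.radians(vfov_deg) / 2.0)
```

E.g. a 1-degree spread at 60-degree vertical FOV on a 1080-pixel-high screen comes out around a 16-pixel crosshair radius.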

#4856373 How do you double-check normal map output? (screenshot)

Posted by johnchapman on 01 September 2011 - 11:08 AM

The orientation of the z component of the normals will depend on the handedness of the coordinate system you are using; it looks like your view space is left-handed, since +x goes to the right, +y goes up and +z goes into the screen (hence you see very little blue).

As for the colours not "blending as well" - it's because you're not remapping into [0,1] to visualize the normals.
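The remap is just n * 0.5 + 0.5 per component (a trivial sketch):

```python
import numpy as np

def visualize_normals(n):
    """Remap unit normals from [-1, 1] into [0, 1] so they display
    sensibly as RGB colours."""
    return np.asarray(n, dtype=float) * 0.5 + 0.5
```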

#4854689 Deferred shading - confusion

Posted by johnchapman on 28 August 2011 - 03:57 AM

> 1st, your diffuse has all perfectly lit green grass. So if you have a sunlight, you should calculate that first and put that in your diffuse buffer as well.

I think this may potentially just cause more confusion, and in any case I'd say it was a good idea to maintain complete separation of the g-buffer/lighting stages. Are you talking about having a 'sun occlusion' factor (stored in the alpha channel or something), or actually pre-multiplying the diffuse component? The former doesn't work for dynamic objects, and the latter doesn't work at all.

> 1. The albedo texture should be a combo of the material diffuse property and the object's texture?
> 2. No ambient material should be needed, since it is usually the same as the diffuse property. Later I can add SSAO.
> 3. And now there will be no emissive or specular colour, but instead a factor that will be multiplied by the light properties?

> The only question that remains for me now is: if the materials are now like this, will the light sources' properties be similar, or do they still retain their RGBA values?

1. I would say that an object's texture is its material's diffuse property, i.e. the colour which modulates any reflected, diffuse light. So your grass texture represents the final colour you'd see if there was a 100% white light shining on it.
2. You could have a separate ambient colour if you wanted, but you'd need another set of RGB components in the g-buffer. As you said, however, the ambient reflectance of a material is the same as its diffuse, so you can just use the diffuse property to modulate any ambient light.
3. As with ambient, you could have separate emissive/specular colours but, again, you'd need a bigger g-buffer. For the colour of a specular reflection it's usually good enough to simply use the light's colour.

The light source properties can be as many or few as you need, since these will be passed directly into the shader and not stored in any intermediate buffers. So you could have RGB colours for separate ambient/diffuse/specular light or just a single colour for all three.

I should point out that the example I've given of the g-buffer setup is pretty arbitrary. Basically you need the g-buffer to store whatever you need in order to do lighting. If you look through the various references you'll see lots of different g-buffer configurations; none is the right or wrong way to do it. It entirely depends on what properties you need during the deferred lighting pass. Understanding the lighting model you're going to use is key; make sure you're on top of Phong and know what all the inputs are doing.
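To make that last point concrete, here's a rough Python sketch of evaluating Blinn-Phong for a single g-buffer sample (the layout — albedo, normal, position, specular power — is just one arbitrary choice, as discussed above):

```python
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def blinn_phong(albedo, normal, position, spec_power,
                light_pos, light_col, eye_pos, ambient=0.1):
    """Shade one g-buffer sample with Blinn-Phong."""
    p = np.asarray(position, dtype=float)
    n = normalize(normal)
    l = normalize(np.asarray(light_pos, dtype=float) - p)  # towards the light
    v = normalize(np.asarray(eye_pos, dtype=float) - p)    # towards the eye
    h = normalize(l + v)                                   # half vector
    diff = max(float(np.dot(n, l)), 0.0)
    spec = max(float(np.dot(n, h)), 0.0) ** spec_power
    # ambient and diffuse are modulated by the material's diffuse (albedo);
    # the specular term just uses the light colour, as suggested above
    return (np.asarray(albedo, dtype=float) * (ambient + diff)
            * np.asarray(light_col, dtype=float)
            + spec * np.asarray(light_col, dtype=float))
```

Everything the function takes is either a per-pixel value (fetched from the g-buffer) or a per-light value (passed straight into the lighting shader) — that split is what dictates your g-buffer layout.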

#4854538 Deferred shading - confusion

Posted by johnchapman on 27 August 2011 - 02:13 PM

Ahh, sorry if i've confused you more. Perhaps I should have made it more clear: the values in the g-buffer are per-pixel material properties. By specular intensity/power I mean the shininess/glossiness of the material, not the specular coefficient.

#4854465 spherical billboards

Posted by johnchapman on 27 August 2011 - 10:04 AM

If you render them as point sprites you can avoid dealing with quads at all - you can just upload your per-particle centres/radii to the VBO, render as point sprites and have OpenGL generate the quads (and texture coords) for you. As for normals/depths, I'd use textures for simplicity/flexibility but in theory you could generate these in the fragment shader.

Here's a video of mine, rendering lots of red balls using point sprites with normal maps:
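Per fragment, the sphere normal falls straight out of gl_PointCoord; here's the same maths sketched in Python for illustration (in the shader you'd also offset the fragment depth using the z component in the same way):

```python
import math

def sphere_from_point_coord(u, v):
    """Reconstruct a view-space unit sphere normal from point-sprite texture
    coordinates (u, v in [0, 1], as gl_PointCoord supplies them).
    Returns None outside the silhouette, where the fragment would be discarded."""
    x = u * 2.0 - 1.0
    y = 1.0 - v * 2.0            # gl_PointCoord's v increases downwards
    r2 = x * x + y * y
    if r2 > 1.0:
        return None              # off the sphere: discard
    z = math.sqrt(1.0 - r2)      # front half of the sphere faces the viewer
    return (x, y, z)
```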