
johnchapman

Member Since 24 Nov 2009
-----

#5151704 Screenshot of your biggest success/ tech demo

Posted by johnchapman on 05 May 2014 - 03:05 PM

Here's a tech demo I "completed" (i.e. stopped working on) a couple of years back, which has a couple of nice features although it's quite dated now:

http://youtu.be/dNn1wYWaL6w

Same software, with a more organic-looking scene:

http://youtu.be/5NTSk6fJCJ4

 




#5078889 Do you extend GLSL?

Posted by johnchapman on 19 July 2013 - 05:01 AM

Like Promit, I have a somewhat hacky preprocessor which manages #includes but not much else. For experimental and small-scale projects you don't really need anything more. A serious renderer would probably require a more complete system for things like generating shader permutations; external #defines are a good mechanism for that sort of thing.
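For illustration, here's a minimal sketch of the kind of hacky preprocessor I mean, in Python. The function and file names are hypothetical; a real one would also need to cope with circular includes, #line directives, and so on:

```python
import re

def preprocess(name, sources, defines=None, _seen=None):
    """Expand #include directives in a shader source string, optionally
    prepending external #defines (e.g. for permutation generation).
    `sources` maps file names to source text; `_seen` guards against
    including the same file twice. All names here are hypothetical."""
    seen = _seen if _seen is not None else set()
    out = []
    if defines:
        # inject external #defines ahead of the source
        out += ["#define %s %s" % (k, v) for k, v in defines.items()]
    for line in sources[name].splitlines():
        m = re.match(r'\s*#include\s+"([^"]+)"', line)
        if m:
            inc = m.group(1)
            if inc not in seen:  # include each file at most once
                seen.add(inc)
                out.append(preprocess(inc, sources, None, seen))
        else:
            out.append(line)
    return "\n".join(out)
```

Usage is just a dictionary of shader sources plus an optional define set per permutation.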




#4920770 Screen Space 'Pseudo' Lens Flare

Posted by johnchapman on 09 March 2012 - 03:56 PM

After posting in a recent topic on lens flare, I revisited my quick implementation and made some improvements, which I've outlined in a wee tutorial here. The technique is based on MJP's old blog post about Killzone 2, plus some inspiration from BF3. Here's a video of it in action:

http://www.youtube.com/watch?v=w164K5nuak8

The main issue is that the 'features' generated are just copies of the bright spots as sampled from the source image, not a function of the lens aperture as they should be. I'm interested in any cunning ideas for overcoming this problem; until then it's best kept as a subtle addition to sprite-based effects.


#4915607 BF3 Lens flares

Posted by johnchapman on 22 February 2012 - 01:11 PM

Mainly regarding (3): I get pretty good results doing as follows:
1) Do a 'bright pass' (downscale + threshold)
2) Take the result and flip left-right top-bottom (as Hodgman says)
3) Apply a radial blur to the flipped bright pass
4) Blend this with the original, unflipped bright pass
5) Apply a gaussian blur to the whole thing
6) Upscale and blend with the original image, modulating with a lens dirt texture

I actually apply another threshold at step 2 so that only very bright pixels get 'flipped'. The flipped, radially blurred result gives a soft 'pseudo lens flare' which works best when it's kept quite subtle (hence the secondary threshold). The lens dirt texture gives the whole thing an organic look and hides any artifacts (e.g. those caused by using a low number of samples for the radial blur). I've included the lens texture I made and some examples.
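A rough CPU-side sketch of the flip/threshold/blend idea (steps 2 and 4 above, with the secondary threshold), operating on a grayscale image stored as nested lists; the radial and gaussian blurs are omitted for brevity and all names are mine, not from any engine:

```python
def pseudo_flare(img, threshold):
    """Flip a grayscale 'bright pass' image left-right/top-bottom, keep
    only very bright flipped pixels (secondary threshold), then blend
    the ghosts back with the unflipped original. img is a list of rows
    of floats in [0,1]."""
    h, w = len(img), len(img[0])
    # step 2: flip left-right and top-bottom
    flipped = [[img[h - 1 - y][w - 1 - x] for x in range(w)]
               for y in range(h)]
    # secondary threshold: only very bright pixels get 'flipped'
    ghosts = [[p if p >= threshold else 0.0 for p in row]
              for row in flipped]
    # step 4: blend with the original, unflipped bright pass
    return [[min(1.0, img[y][x] + ghosts[y][x]) for x in range(w)]
            for y in range(h)]
```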

As for elements (1) and (2): looking closely in BF3 I don't think the lens flare sprites are modulated by the lens dirt texture. Part of me thinks they should be...

Attached Thumbnails

  • flare_results.jpg
  • lens_dirt.jpg



#4915585 What normal to use when triangles are in smoothing groups?

Posted by johnchapman on 22 February 2012 - 12:22 PM

Sorry; I think I misread your question. Within a smoothing group, triangles can share vertices; the normal at each vertex is some function of the normal(s) of the face(s) to which that vertex belongs. Averaging the face normals will work perfectly for this.
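A minimal sketch of that averaging, assuming unit-length face normals (a hypothetical helper, not from any particular exporter); note that plain averaging weights every face equally, and area- or angle-weighted variants are common refinements:

```python
import math

def vertex_normal(face_normals):
    """Average the unit normals of the faces sharing a vertex within a
    smoothing group, then renormalize the result."""
    n = [sum(c) for c in zip(*face_normals)]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]
```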


#4915488 What normal to use when triangles are in smoothing groups?

Posted by johnchapman on 22 February 2012 - 08:04 AM

You need to duplicate any vertices that are shared between smoothing groups; they will have different normals. So, for example, if 2 smoothing groups meet at a single edge, the vertices on that edge will need to be duplicated.


#4912333 Trying to make an object glow

Posted by johnchapman on 12 February 2012 - 02:39 PM

Ultimately though, what I'm currently doing is a temporary hack more than a final solution. I would ultimately like to find a way to have the object glow, but still receive color from other nearby glowing objects. Like it glows a particular color, but you see traces of another color on one side from another object nearby. That is the final result I want, but I don't yet have any idea how to get there.


Actually you could do as I said before but copy emissive pixels into the lighting results buffer before the lighting stage and accumulate light contributions on top of the emissive pixels. This could be problematic if you're using a light at the centre of the object, though; since that light is simulating (cheating) the light being emitted at the surface of the glowing object, it shouldn't affect the object itself. You could account for this by adjusting the emissive material by pre-subtracting the light source's colour from the emissive material's diffuse colour. This isn't all that flexible, though.

I'm not sure I understand what you're trying to achieve; is it a very subtle, stylized glow or an emissive material? In the latter case I think that you can safely ignore contributions from other light sources.


#4912223 Trying to make an object glow

Posted by johnchapman on 12 February 2012 - 03:32 AM

One way to do it would be to mark your 'glowing' materials in the g-buffer (as you're doing), then copy the diffuse colour of the marked pixels from the g-buffer into the final buffer after the lighting results have been accumulated.
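As a sketch, assuming a hypothetical per-pixel representation where each g-buffer entry carries its diffuse colour and an 'emissive' flag:

```python
def composite(gbuffer, lit):
    """After light accumulation, overwrite pixels flagged emissive in the
    g-buffer with their g-buffer diffuse colour. `gbuffer` entries are
    hypothetical (diffuse_rgb, emissive_flag) pairs; `lit` holds the
    accumulated lighting results for the same pixels."""
    return [diffuse if emissive else lit_px
            for (diffuse, emissive), lit_px in zip(gbuffer, lit)]
```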


#4857128 FPS "Accuracy" value that affects the size of your crosshair

Posted by johnchapman on 03 September 2011 - 07:04 AM

I think the best way would be to keep it resolution-agnostic and use game units. Offset the path of each bullet by a random angle, using your accuracy metric (weapon precision, player's current speed, etc.) to control the size of the offset. You'd then work backwards into screen space to get the final size of the crosshair, which should be the angular diameter of the area in which the bullet might hit.
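A rough sketch of the idea, with hypothetical names throughout; the screen-space step assumes the crosshair sits at the screen centre under a symmetric perspective projection:

```python
import math
import random

def bullet_offset_angle(accuracy):
    """Random angular deviation for one shot, in radians; `accuracy` is
    the hypothetical metric, interpreted as the cone half-angle."""
    return random.uniform(0.0, accuracy)

def crosshair_radius_px(accuracy, vertical_fov, screen_height):
    """Project the hit cone's half-angle back into screen space to get
    the crosshair radius in pixels, assuming it sits at screen centre."""
    return (math.tan(accuracy) / math.tan(vertical_fov / 2.0)
            * (screen_height / 2.0))
```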


#4856373 How do you double-check normal map output? (screenshot)

Posted by johnchapman on 01 September 2011 - 11:08 AM

The orientation of the z component of the normals will depend on the handedness of the coordinate system you are using; it looks like your view space is left-handed, since +x goes to the right, +y goes up and +z goes into the screen (hence you see very little blue).

As for the colours not "blending as well": that's because you're not remapping the normals into [0,1] to visualize them.
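The remapping is just the usual [-1,1] to [0,1] transform, e.g.:

```python
def encode_normal(n):
    """Remap a unit normal's components from [-1,1] to [0,1] so it can
    be stored/visualized as a colour."""
    return [0.5 * c + 0.5 for c in n]
```

So a normal pointing straight at the camera in a right-handed view space, (0, 0, 1), encodes as the familiar pale blue (0.5, 0.5, 1.0).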


#4854689 Deferred shading - confusion

Posted by johnchapman on 28 August 2011 - 03:57 AM

1st, your diffuse has all perfectly lit green grass. So if you have a sunlight, you should calculate that first and put that in your diffuse buffer as well.


I think this may just cause more confusion; in any case I'd say it's a good idea to maintain complete separation of the g-buffer/lighting stages. Are you talking about having a 'sun occlusion' factor (stored in the alpha channel or something), or actually pre-multiplying the diffuse component? The former doesn't work for dynamic objects, and the latter doesn't work at all.

1. The albedo texture should be a combo of the material diffuse property and the objects texture?
2. No ambient material should be needed since it is usually the same as the diffuse property. Later i can add SSAO
3. And now there will be no emissive or specular color but instead a factor that will be multiplied by the light properties?

Only question that remains for me now is if the materials are now like this, will the light sources properties be similar or do they still retain their RGBA values


1. I would say that an object's texture is its material's diffuse property, i.e. the colour which modulates any reflected, diffuse light. So your grass texture represents the final colour you'd see if there was a 100% white light shining on it.
2. You could have a separate ambient colour if you wanted, but you'd need another set of RGB components in the g-buffer. As you said, however, the ambient reflectance of a material is usually the same as its diffuse, so you can just use the diffuse property to modulate any ambient light.
3. As with ambient, you could have separate emissive/specular colours but, again, you'd need a bigger g-buffer. For the colour of a specular reflection it's usually good enough to simply use the light's colour.

The light source properties can be as many or few as you need, since these will be passed directly into the shader and not stored in any intermediate buffers. So you could have RGB colours for separate ambient/diffuse/specular light or just a single colour for all three.

I should point out that the example g-buffer setup I've given is pretty arbitrary. Basically you need the g-buffer to store whatever you need in order to do lighting. If you look through the various references you'll see lots of different g-buffer configurations; none of them is the 'right' or 'wrong' way to do it. It entirely depends on what properties you need during the deferred lighting pass. Understanding the lighting model you're going to use is key; make sure you're on top of Phong and know what all the inputs are doing.


#4854538 Deferred shading - confusion

Posted by johnchapman on 27 August 2011 - 02:13 PM

Ah, sorry if I've confused you more. Perhaps I should have made it clearer: the values in the g-buffer are per-pixel material properties. By specular intensity/power I mean the shininess/glossiness of the material, not the specular coefficient.


#4854447 Deferred shading - confusion

Posted by johnchapman on 27 August 2011 - 09:21 AM

The material properties are stored in the g-buffer. The diffuse albedo, specular intensity/power and emissiveness values in the example I gave are what I was referring to when I said 'material properties.'

You can use the ordinary Phong formulae to compute the light_ambient_level/light_diffuse_level/light_specular_level factors and use them as per my previous post. The only difference is in which material properties are available from the g-buffer, so you'll notice that (in the example) I used the material's diffuse colour to modulate the ambient result (because there's no material ambient colour) and that the final specular value only uses the light's colour (because there's no material specular colour).


#4854362 Deferred shading - confusion

Posted by johnchapman on 27 August 2011 - 03:20 AM

Although it's probably possible to pack an RGBA value into a single two-byte component you'd lose a lot of precision, since you're halving the number of bits representing each component. I can only assume that you're thinking in terms of the way that 'full' Phong lighting treats materials, with separate colour values for ambient/diffuse/specular/emissive. This is overkill, especially for a deferred renderer, where you preferably want to limit the memory cost of the g-buffer. The way in which you slim down the material properties depends on what kinds of materials you want to render and how much 'space' (i.e. unused components) you've got in the g-buffer. You could use a whole render target to store material properties, or elbow them into any unused components on other targets (most of the references do this).

So a simple g-buffer layout might be something like this:
Target0: RGB = diffuse albedo, A = specular power
Target1: RGB = normal xyz, A = specular intensity
Target2: RGB = position xyz, A = emissiveness

You render these values out at the g-buffer stage. Then, at the lighting stage, you clear the output framebuffer and render the lights. Each light you render taps into the g-buffer targets to get the required data. Obviously, since the material properties have been slimmed down, you'll need to use a modified lighting equation. So, for the example g-buffer, you might do:

ambient  = material_diffuse * light_color * light_ambient_level;
diffuse  = material_diffuse * light_color * light_diffuse_level;
specular = light_color * material_specular_intensity * pow(light_specular_level, material_specular_power);
emissive = material_emissiveness * material_diffuse;
result   = ambient + diffuse + specular + emissive;

Clearly this is less flexible than 'full' Phong. One of the drawbacks of deferred rendering is that the materials/lighting model tends to be very rigid, since the inputs to the lighting stage (the g-buffer targets) have a fixed format. However, with a bit of cunning you can come up with a materials/lighting system which supports the gamut of materials that you want to render.
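The combined equation can be sketched in runnable form like so (hypothetical dictionary layout for the material; n_dot_l and r_dot_v stand for the usual clamped Phong dot products computed per light):

```python
def shade(mat, light, n_dot_l, r_dot_v):
    """Combine the slimmed-down g-buffer material with one light.
    `mat` holds diffuse (RGB), specular intensity, specular power and
    emissiveness (matching the example g-buffer); `light` holds an RGB
    colour and an ambient level. Returns the shaded RGB result."""
    spec = mat["specular_intensity"] * (r_dot_v ** mat["specular_power"])
    result = []
    for i in range(3):
        ambient = mat["diffuse"][i] * light["color"][i] * light["ambient"]
        diffuse = mat["diffuse"][i] * light["color"][i] * n_dot_l
        specular = light["color"][i] * spec          # light colour only
        emissive = mat["emissiveness"] * mat["diffuse"][i]
        result.append(ambient + diffuse + specular + emissive)
    return result
```

Note the specular term uses only the light's colour, since there's no material specular colour in this g-buffer layout.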

I'm not sure what dpadam450 means by "put the full screen diffuse to the screen." The output of the first stage is the g-buffer, which specifies the material properties. This is an input to the deferred stage, in which lights are rendered and the final, shaded pixels accumulate into the final output buffer (either the back buffer or another target for post-processing).

Also, if you're using OpenGL >= 3.0 you can use render targets of different formats (but not sizes).


#4854206 Deferred shading - confusion

Posted by johnchapman on 26 August 2011 - 02:29 PM

Doesn't look as if there's anything wrong with your g-buffer textures. As regards your questions:

1) All of the fragment output can be done in the lighting pass - your g-buffer is an input to the lighting stage, you accumulate 'lit' pixels in the final framebuffer.
2) You can draw anything you like in the lighting pass, as long as it covers the pixels which need to be lit. You could draw a full-screen quad for every light, but that's pretty inefficient (you'll end up shading a lot of pixels which don't need to be shaded). Drawing the light volumes is the better solution: a sphere for point lights and a cone for spotlights is generally the way to go.
3) You'll need to write material properties to a target in the g-buffer. How you do this depends on what your lighting stage requires and the sort of material properties you need to support. A simple format might be something like R = specular level, G = specular power, B = emissiveness, A = AO factor.

Here's a few links to some deferred rendering resources (you may have already seen some of them):

http://developer.dow...red_Shading.pdf
http://www.talula.de...rredShading.pdf
http://bat710.univ-lyon1.fr/~jciehl/Public/educ/GAMA/2007/Deferred_Shading_Tutorial_SBGAMES2005.pdf
http://developer.amd...StarCraftII.pdf



