
#4854362 Deferred shading - confusion

Posted by on 27 August 2011 - 03:20 AM

Although it's probably possible to pack an RGBA value into a single two-byte component, you'd lose a lot of precision, since you're halving the number of bits representing each component. I can only assume that you're thinking in terms of the way that 'full' Phong lighting treats materials, with separate colour values for ambient/diffuse/specular/emissive. This is overkill, especially for a deferred renderer, where you preferably want to limit the memory cost of the g-buffer. The way in which you slim down the material properties depends on what kinds of materials you want to render and how much 'space' (i.e. unused components) you've got in the g-buffer. You could use a whole render target to store material properties, or elbow them into unused components on other targets (most of the references do this).

So a simple g-buffer layout might be something like this:
Target0: RGB = diffuse albedo, A = specular power
Target1: RGB = normal xyz, A = specular intensity
Target2: RGB = position xyz, A = emissiveness

You render these values out at the g-buffer stage. Then, at the lighting stage, you clear the output framebuffer and render the lights. Each light you render taps into the g-buffer targets to get the required data. Obviously, since the material properties have been slimmed down, you'll need to use a modified lighting equation. So, for the example g-buffer, you might do:

ambient  = material_diffuse * light_color * light_ambient_level;
diffuse  = material_diffuse * light_color * light_diffuse_level;
specular = light_color * material_specular_level * pow(light_specular_level, material_specular_power);
emissive = material_emissiveness * material_diffuse;
result   = ambient + diffuse + specular + emissive;
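To make the arithmetic concrete, here's a minimal CPU-side sketch of evaluating that slimmed-down equation for one pixel. All function and parameter names are made up for illustration; colours are (r, g, b) tuples in [0, 1].

```python
# Hypothetical per-pixel evaluation of the slimmed-down lighting
# equation, on the CPU for illustration only.

def scale(c, s):
    return tuple(ch * s for ch in c)

def modulate(a, b):
    return tuple(x * y for x, y in zip(a, b))

def add(*colours):
    return tuple(map(sum, zip(*colours)))

def shade(material_diffuse, material_specular_level, material_specular_power,
          material_emissiveness, light_color,
          light_ambient_level, light_diffuse_level, light_specular_level):
    ambient  = scale(modulate(material_diffuse, light_color), light_ambient_level)
    diffuse  = scale(modulate(material_diffuse, light_color), light_diffuse_level)
    specular = scale(light_color,
                     material_specular_level * light_specular_level ** material_specular_power)
    emissive = scale(material_diffuse, material_emissiveness)
    return add(ambient, diffuse, specular, emissive)
```

In a real deferred renderer this runs in the light pass fragment shader, with the material_* values fetched from the g-buffer targets above.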

Clearly this is less flexible than 'full' Phong. One of the drawbacks of deferred rendering is that the materials/lighting model tends to be very rigid, since the inputs to the lighting stage (the g-buffer targets) have a fixed format. However, with a bit of cunning you can come up with a materials/lighting system which supports the gamut of materials that you want to render.

I'm not sure what dpadam450 means by "put the full screen diffuse to the screen." The output of the first stage is the g-buffer, which specifies the material properties. This is an input to the deferred stage, in which lights are rendered and the final, shaded pixels accumulate into the final output buffer (either the back buffer or another target for post-processing).

Also, if you're using OpenGL >= 3.0 you can use render targets of different formats (but not sizes).

#4854206 Deferred shading - confusion

Posted by on 26 August 2011 - 02:29 PM

Doesn't look as if there's anything wrong with your g-buffer textures. As regards your questions:

1) All of the fragment output can be done in the lighting pass - your g-buffer is an input to the lighting stage, you accumulate 'lit' pixels in the final framebuffer.
2) You can draw anything you like in the lighting pass, as long as it covers the pixels which need to be lit. You could draw a full screen-aligned quad for every light, but that's pretty inefficient (you'll end up shading a lot of pixels which don't need to be shaded). Drawing the light volumes is the more efficient solution: a sphere for point lights and a cone for spotlights is generally the way to go.
3) You'll need to write material properties to a target in the g-buffer. How you do this depends on what your lighting stage requires and the sort of material properties you need to support. A simple format might be something like R = specular level, G = specular power, B = emissiveness, A = AO factor.
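On point 2: the point light's sphere is usually sized so that the attenuated intensity has fallen below some visible cutoff at its surface. A quick sketch of that calculation, assuming the common 1 / (kc + kl*d + kq*d²) attenuation model (all names illustrative):

```python
import math

# Hypothetical helper: radius of a point light's bounding sphere, chosen
# where intensity 1 / (kc + kl*d + kq*d*d), scaled by max_intensity,
# drops below 'cutoff'.

def point_light_radius(kc, kl, kq, max_intensity, cutoff=1.0 / 256.0):
    # Solve kq*d^2 + kl*d + (kc - max_intensity / cutoff) = 0 for d.
    c = kc - max_intensity / cutoff
    if kq == 0.0:
        return -c / kl  # purely linear attenuation
    disc = kl * kl - 4.0 * kq * c
    return (-kl + math.sqrt(disc)) / (2.0 * kq)
```

You'd then scale the sphere mesh by this radius when drawing the light volume.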

Here are a few links to some deferred rendering resources (you may have already seen some of them):


#4852962 GLM Quaternion Camera

Posted by on 23 August 2011 - 03:23 PM

Sounds like your movement vector isn't being transformed into the camera's space properly - although transforming it by the camera's quaternion *should* work...

Another way to do it, assuming that you derive a matrix from the camera's position/orientation at some point (for rendering, etc.), would be to pull the camera's world x/y/z axes out of that. So to move the camera forward, you grab the z vector from the camera's matrix, scale it by speed*time and add it to the position.
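A minimal sketch of that idea, assuming column-major 4x4 matrices (as GLM stores them), so the third column is the camera's local z axis; the function name is made up:

```python
# Sketch: pull the camera's local z axis out of a 4x4 world matrix and
# use it to move the camera forward. camera_matrix is a list of 4
# columns, each a list of 4 floats (column-major, GLM-style).

def move_forward(position, camera_matrix, speed, dt):
    zx, zy, zz, _ = camera_matrix[2]  # third column = local z axis
    s = speed * dt
    # In the OpenGL convention the camera looks down -z, so subtract.
    return (position[0] - zx * s,
            position[1] - zy * s,
            position[2] - zz * s)
```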

#4850153 Collision detection

Posted by on 16 August 2011 - 11:29 PM

It should be external to the entities. That way you can separate collision detection from your game objects, which will make life easier if you want to experiment with different implementations, etc. Also, parallelization will be much easier (you could have a collision detection thread, or n threads operating on n groups of objects from the list). By grouping the collision data and looping over it in a single bit of code you'll also get much better cache performance. Have a look into "data oriented design" for some ideas there.
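As a toy illustration of keeping collision detection external and data-oriented: the shapes live in flat arrays and one routine loops over them, with no game-object involvement. All names are illustrative.

```python
# Minimal sketch of collision detection kept outside the entities:
# positions and radii live in flat lists, and a single routine loops
# over them (cache-friendly, easy to hand to a worker thread).

def find_collisions(xs, ys, radii):
    """Return index pairs (i, j) of overlapping circles."""
    hits = []
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = xs[i] - xs[j], ys[i] - ys[j]
            r = radii[i] + radii[j]
            if dx * dx + dy * dy < r * r:
                hits.append((i, j))
    return hits
```

The entities only need to know their index into these arrays; swapping in a broadphase (grid, sweep-and-prune) later touches none of the game-object code.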

Basically I'm of the opinion that it always pays in the long run to be as modular as possible, especially with potentially heavyweight subsystems.

#4849557 Delayed/Phased Simulation Idea

Posted by on 15 August 2011 - 02:49 PM

My thinking is that if a player isn't in visual range of a bar fight (or whatever) taking place, then there's no way he/she could know whether the result (placement of stools, breakage of cups) is correct or not. Off the top of my head I'd say the best approach would be to have some sort of random 'rules' which approximate the results - 50% chance of a cup being broken, 80% chance of a stool being tipped over - which you can quickly apply when a player comes in range.
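The rule idea could be as simple as a list of (probability, effect) pairs applied once when the player comes into range. A sketch, with made-up names and probabilities:

```python
import random

# Illustrative "catch-up" rules for an off-screen bar fight: each rule
# is (probability, effect); roll them all once when the player arrives.

def apply_offscreen_rules(rules, rng=random):
    applied = []
    for probability, effect in rules:
        if rng.random() < probability:  # random() is in [0, 1)
            applied.append(effect)
    return applied

bar_fight_rules = [
    (0.5, "cup_broken"),
    (0.8, "stool_tipped"),
]
```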

I'm interested to hear what other people's ideas are on this.

#4848203 Ambient occlusion simulation for AAA projects

Posted by on 12 August 2011 - 07:14 AM

This is the last post of mine. I don't wish to spend time for argue any more, no way to proove anything for peoples of other "relegion".

Oh good.

#4847718 This bloom shader looks crappy...

Posted by on 11 August 2011 - 09:32 AM

As DigitalFragment says, there are a number of ways to improve the quality/efficiency of the blur. Getting the blur right is 99% of the battle with bloom. Here are a couple of decent links:



#4847516 SSAO Problem

Posted by on 11 August 2011 - 12:30 AM

The issue stems from the fact that your sampling kernel is basically just the vertices of a cube. Even with semi-random scaling + random rotation, it still retains a regular appearance:

You can see where the interference pattern is coming from.

Instead, I'd generate a sample kernel on the CPU, where you have more control and can more easily visualise the kernel separately.

If you take a look at the tutorial I linked to, there's an example of generating a decent kernel (it's for a hemispherical sampling kernel, but you can easily adapt it to get a spherical one).
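A rough CPU-side sketch of that kind of hemispherical kernel: random directions in the +z hemisphere, with sample distances accelerating towards the kernel radius so samples cluster near the origin. The exact scaling curve and names here are illustrative, not a verbatim copy of the tutorial.

```python
import math
import random

# Sketch: generate a hemispherical SSAO sample kernel on the CPU.

def generate_kernel(size, rng=random):
    kernel = []
    for i in range(size):
        # Rejection-sample a direction in the z >= 0 hemisphere.
        while True:
            x = rng.uniform(-1.0, 1.0)
            y = rng.uniform(-1.0, 1.0)
            z = rng.uniform(0.0, 1.0)
            length = math.sqrt(x * x + y * y + z * z)
            if 0.0 < length <= 1.0:
                break
        x, y, z = x / length, y / length, z / length
        # Scale: lerp(0.1, 1.0, (i / size)^2) pulls samples inwards.
        t = i / float(size)
        s = 0.1 + 0.9 * t * t
        kernel.append((x * s, y * s, z * s))
    return kernel
```

Upload the result as a uniform array; per-pixel you then rotate it by a random vector from a small tiling noise texture to break up banding.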

#4847139 SSAO Problem

Posted by on 10 August 2011 - 06:08 AM

How are you actually generating the sample kernel? As maxest says, if there's any regularity to it, that is what will be causing the interference pattern (and why increasing the number of samples doesn't seem to reduce the effect).

#4846174 SSAO Problem

Posted by on 08 August 2011 - 08:26 AM

Looking at the screenshot, I'd say it's a combination of a large sample kernel radius with a low number of samples; does decreasing the kernel radius or increasing the number of samples mitigate the problem?

The black "edges" are a result of sampling beyond the edges of the depth texture. The result of this will change depending on the texture wrap mode (repeat, clamp to edge, etc.) or you could try clamping your texture coordinated in [0, 1].

As a side note, you might be interested in a tutorial I wrote a while back that extends the SSAO technique you're implementing: http://www.john-chapman.net/content.php?id=8

#4846061 2D Animation in OpenGL

Posted by on 07 August 2011 - 11:38 PM

Another way to do it would be to load each frame of animation into the layers of a 3D texture. Then, to 'play' the animation, you just interpolate the w texture coordinate. I'm not sure how efficient this is vs. jumping around a 2D texture, but it means you can take advantage of texture filtering between frames.

#4843078 Deferred Rendering, Transparency & Alpha Blending

Posted by on 01 August 2011 - 04:55 AM

I've just posted up an overview of a technique I've been using to render transparent and alpha-blended materials in a deferred renderer. It has the slight flavour of a hack (like all deferred transparency methods, I suppose), but hopefully it'll be of some interest.

At any rate, I'm keen to hear your views and opinions, and to know if there are any glaringly obvious problems with the technique that I've not thought of (I have implemented it, and it does work!)

The post is here: http://www.john-chap...ntent.php?id=13

#4842794 Updating VBOs

Posted by on 31 July 2011 - 04:15 AM

Whenever you change the vertex data you need to update the VBO, but you don't necessarily have to upload the whole data set; see glBufferSubData() in the OpenGL man pages.

Also, when you initially allocate the buffer with glBufferData() you should set the 'usage' parameter to hint to the driver about how you anticipate the data being used. For instance, you might use GL_STATIC_DRAW for data you upload once and draw a lot, or GL_DYNAMIC_DRAW for data you both update and draw frequently - there's a whole gamut of these hints, which allow the implementation to make decisions about how to most efficiently deal with your buffer data.
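To get partial updates right, it helps to track the dirty range of the CPU-side vertex array so that only that byte range goes to glBufferSubData(). A sketch of the bookkeeping (no actual GL calls; the stride and names are assumptions for illustration):

```python
# Sketch: track which vertices changed so only that byte range needs a
# glBufferSubData-style upload. Pure bookkeeping, no GL calls.

VERTEX_STRIDE = 32  # bytes per vertex; assumed layout for the example

class DirtyRange:
    def __init__(self):
        self.first = None
        self.last = None

    def mark(self, vertex_index):
        """Record that this vertex was modified."""
        if self.first is None or vertex_index < self.first:
            self.first = vertex_index
        if self.last is None or vertex_index > self.last:
            self.last = vertex_index

    def as_offset_size(self):
        """(offset, size) in bytes for the upload, or None if clean."""
        if self.first is None:
            return None
        offset = self.first * VERTEX_STRIDE
        size = (self.last - self.first + 1) * VERTEX_STRIDE
        return offset, size
```

After the upload you'd reset the range; if nearly everything is dirty every frame, re-specifying the whole buffer with glBufferData() (orphaning) is often faster than many small sub-updates.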

There's more information about VBOs (and other things) here.