

johnchapman

Member Since 24 Nov 2009
Offline Last Active Nov 13 2014 06:59 AM

#4854206 Deferred shading - confusion

Posted by johnchapman on 26 August 2011 - 02:29 PM

Doesn't look as if there's anything wrong with your g-buffer textures. As regards your questions:

1) All of the fragment output can be done in the lighting pass - your g-buffer is an input to the lighting stage, you accumulate 'lit' pixels in the final framebuffer.
2) You can draw anything you like in the lighting pass, as long as it covers the pixels which need to be lit. You could draw a full screen-aligned quad for every light, but that's pretty inefficient (you'll end up shading a lot of pixels which don't need to be shaded). Drawing the light volumes is the more efficient solution: a sphere for point lights and a cone for spotlights is generally the way to go.
3) You'll need to write material properties to a target in the g-buffer. How you do this depends on what your lighting stage requires and the sort of material properties you need to support. A simple format might be something like R = specular level, G = specular power, B = emissiveness, A = AO factor.
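A minimal sketch of what packing/unpacking that kind of material target might look like on the CPU side (the channel layout and the `max_spec_power` normalisation range here are just assumptions for illustration):

```python
# Hypothetical packing of per-material properties into one RGBA8
# g-buffer texel, using the layout suggested above:
#   R = specular level, G = specular power, B = emissiveness, A = AO factor.

def pack_material(spec_level, spec_power, emissive, ao, max_spec_power=255.0):
    """Pack four material properties into a 4-byte RGBA8 texel.
    spec_power is given in [1, max_spec_power] and normalised first;
    the other inputs are expected in [0, 1]."""
    def to_byte(x):
        return int(round(max(0.0, min(1.0, x)) * 255.0))
    return (to_byte(spec_level),
            to_byte(spec_power / max_spec_power),  # normalise the exponent
            to_byte(emissive),
            to_byte(ao))

def unpack_material(texel, max_spec_power=255.0):
    """Inverse of pack_material; what the lighting shader would do."""
    r, g, b, a = (c / 255.0 for c in texel)
    return r, g * max_spec_power, b, a

texel = pack_material(0.8, 64.0, 0.0, 1.0)
```

The point is just that the g-buffer layout is a contract between the geometry pass (which writes it) and the lighting pass (which reads it), so both sides must agree on the encoding.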

Here's a few links to some deferred rendering resources (you may have already seen some of them):

http://developer.dow...red_Shading.pdf
http://www.talula.de...rredShading.pdf
http://bat710.univ-lyon1.fr/~jciehl/Public/educ/GAMA/2007/Deferred_Shading_Tutorial_SBGAMES2005.pdf
http://developer.amd...StarCraftII.pdf


#4852962 GLM Quaternion Camera

Posted by johnchapman on 23 August 2011 - 03:23 PM

Sounds like your movement vector isn't being transformed into the camera's space properly - although transforming it by the camera's quaternion *should* work...

Another way to do it, assuming that you derive a matrix from the camera's position/orientation at some point (for rendering, etc.), would be to pull the camera's world x/y/z axes out of that. So to move the camera forward, you grab the z vector from the camera's matrix, scale it by speed*time and add it to the position.
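A minimal sketch of that matrix-based approach, using a yaw-only camera for brevity (the column-major convention and all names here are assumptions, not GLM code):

```python
import math

# Hypothetical camera: derive a rotation matrix from a yaw angle, then
# pull the forward (local z) axis out of the matrix and move along it.
# Columns of the matrix are the camera's local axes in world space.

def yaw_matrix(yaw):
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c,   0.0, s],
            [0.0, 1.0, 0.0],
            [-s,  0.0, c]]

def move_forward(position, yaw, speed, dt):
    m = yaw_matrix(yaw)
    forward = [m[0][2], m[1][2], m[2][2]]  # third column = local z axis
    return [p + f * speed * dt for p, f in zip(position, forward)]

pos = move_forward([0.0, 0.0, 0.0], 0.0, 2.0, 0.5)  # yaw 0: moves along +z
```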


#4850153 Collision detection

Posted by johnchapman on 16 August 2011 - 11:29 PM

It should be external to the entities. That way you can separate collision detection from your game objects, which will make life easier if you want to experiment with different implementations, etc. Also, parallelization will be much easier (you could have a collision detection thread, or n threads operating on n groups of objects from the list). By grouping the collision data and looping over it in a single bit of code you'll also get much better cache performance. Have a look into "data oriented design" for some ideas there.
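A sketch of that separation: the collision module owns a flat array of bounding volumes and loops over it in one place, rather than each entity testing itself (the AABB layout and brute-force pairing here are assumptions for illustration, not a recommendation over broad-phase structures):

```python
# Collision data kept outside the entities: a flat list of 2D AABBs,
# checked in one tight loop by a standalone collision module.

def aabb_overlap(a, b):
    """a, b: (min_x, min_y, max_x, max_y) tuples."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def find_collisions(boxes):
    """Return index pairs of overlapping boxes (brute force, O(n^2));
    entities are referenced by index, so they never see this code."""
    hits = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if aabb_overlap(boxes[i], boxes[j]):
                hits.append((i, j))
    return hits

boxes = [(0, 0, 2, 2), (1, 1, 3, 3), (5, 5, 6, 6)]
hits = find_collisions(boxes)
```

Because the whole working set is one contiguous list, it's also easy to split into n sub-ranges for n worker threads, as mentioned above.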

Basically I'm of the opinion that it always pays in the long run to be as modular as possible, especially with potentially heavyweight subsystems.


#4849557 Delayed/Phased Simulation Idea

Posted by johnchapman on 15 August 2011 - 02:49 PM

My thinking is that if a player isn't in visual range of a bar fight (or whatever) taking place, then there's no way he/she could know whether the result (placement of stools, breakage of cups) is correct or not. Off the top of my head I'd say the best approach would be to have some sort of random 'rules' which approximate the results - 50% chance of a cup being broken, 80% chance of a stool being tipped over - which you can quickly apply when a player comes in range.
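The random 'rules' idea could be sketched like this (the prop names and probabilities are made up; the point is a single cheap pass applied when the player comes back into range):

```python
import random

# Per-prop-type probability of being disturbed by an unobserved event,
# e.g. 50% chance of a cup breaking, 80% chance of a stool tipping over.
RULES = {"cup": 0.5, "stool": 0.8}

def resolve_offscreen_brawl(props, rng):
    """props: list of (prop_type, state) tuples; returns new states.
    Unknown prop types are left untouched (probability 0)."""
    result = []
    for prop_type, state in props:
        if rng.random() < RULES.get(prop_type, 0.0):
            state = "disturbed"
        result.append((prop_type, state))
    return result

rng = random.Random(42)  # seeding makes the 'result' reproducible
props = [("cup", "intact"), ("stool", "upright"), ("table", "intact")]
outcome = resolve_offscreen_brawl(props, rng)
```

Seeding the generator per-event would also let you reproduce the same outcome if the player leaves and returns.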

I'm interested to hear what other people's ideas are on this.


#4848203 Ambient occlusion simulation for AAA projects

Posted by johnchapman on 12 August 2011 - 07:14 AM

This is the last post of mine. I don't wish to spend time for argue any more, no way to proove anything for peoples of other "relegion".


Oh good.


#4847718 This bloom shader looks crappy...

Posted by johnchapman on 11 August 2011 - 09:32 AM

As DigitalFragment says, there are a number of ways to improve the quality/efficiency of the blur. Getting blur right is 99% of the battle with bloom. Here's a couple of decent links:

http://rastergrid.com/blog/2010/09/efficient-gaussian-blur-with-linear-sampling/

http://prideout.net/archive/bloom/index.php
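The linear-sampling trick from the first link can be sketched numerically: pair up adjacent discrete Gaussian taps and replace each pair with a single bilinear fetch at a weight-blended offset, roughly halving the texture reads. The sigma and kernel size below are arbitrary:

```python
import math

def gaussian_weights(radius, sigma):
    """One-sided, normalised discrete Gaussian weights (index 0 = centre)."""
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(radius + 1)]
    total = w[0] + 2.0 * sum(w[1:])  # centre counted once, sides twice
    return [x / total for x in w]

def linear_taps(weights):
    """Merge discrete taps (1,2), (3,4), ... into single bilinear fetches.
    Returns (offset, weight) pairs for one side of the kernel."""
    taps = [(0.0, weights[0])]
    for i in range(1, len(weights), 2):
        if i + 1 < len(weights):
            w = weights[i] + weights[i + 1]
            # offset chosen so hardware bilinear filtering reproduces both taps
            offset = (i * weights[i] + (i + 1) * weights[i + 1]) / w
        else:
            w, offset = weights[i], float(i)
        taps.append((offset, w))
    return taps

taps = linear_taps(gaussian_weights(4, 2.0))  # 5 discrete taps -> 3 fetches
```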


#4847516 SSAO Problem

Posted by johnchapman on 11 August 2011 - 12:30 AM

The issue stems from the fact that your sampling kernel is basically just the vertices of a cube. Even with semi-random scaling + random rotation, it still retains a regular appearance:
[attached image: bad_ssao_kernel.jpg]

You can see where the interference pattern is coming from.

Instead I'd generate the sample kernel on the CPU, where you have more control and can more easily visualise the kernel separately.

If you take a look at the tutorial I linked to, there's an example of generating a decent kernel (it's for a hemispherical sampling kernel, but you can easily adapt it to get a spherical one).
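A CPU-side sketch in the spirit of that tutorial: random unit vectors in a z-facing hemisphere, with an accelerating scale so samples cluster near the origin (the 0.1..1.0 range and the quadratic falloff are approximations of the approach, not an exact copy):

```python
import math
import random

def generate_kernel(size, rng):
    """Generate 'size' sample offsets inside a z-facing unit hemisphere."""
    kernel = []
    for i in range(size):
        v = [rng.uniform(-1.0, 1.0),
             rng.uniform(-1.0, 1.0),
             rng.uniform(0.0, 1.0)]          # z >= 0: hemisphere
        length = math.sqrt(sum(c * c for c in v)) or 1.0
        v = [c / length for c in v]          # point on the hemisphere surface
        t = i / float(size)
        scale = 0.1 + 0.9 * t * t            # lerp(0.1, 1.0, t^2): cluster near origin
        kernel.append([c * scale for c in v])
    return kernel

kernel = generate_kernel(16, random.Random(0))
```

Because the kernel is built once on the CPU you can plot it or dump it to check for exactly the kind of regularity causing the interference pattern.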


#4847139 SSAO Problem

Posted by johnchapman on 10 August 2011 - 06:08 AM

How are you actually generating the sample kernel? As maxest says, if there's any regularity to it, that's what will be causing the interference pattern (and why increasing the number of samples doesn't seem to reduce the effect).


#4846174 SSAO Problem

Posted by johnchapman on 08 August 2011 - 08:26 AM

Looking at the screenshot I'd say it's a combination of a large sample kernel radius with a low number of samples; does decreasing the kernel radius/increasing the number of samples reduce the problem?

The black "edges" are a result of sampling beyond the edges of the depth texture. The result will change depending on the texture wrap mode (repeat, clamp to edge, etc.); alternatively, you could try clamping your texture coordinates to [0, 1].
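The coordinate clamp is trivial but worth spelling out; keeping every lookup inside [0, 1] makes the result independent of the wrap mode:

```python
# Clamp sample coordinates to [0, 1] so depth lookups never leave the
# texture, regardless of the wrap mode set on it.

def clamp01(x):
    return max(0.0, min(1.0, x))

def clamp_uv(u, v):
    return clamp01(u), clamp01(v)

uv = clamp_uv(1.3, -0.2)  # a sample that would have fallen off the edge
```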

As a side note, you might be interested in a tutorial I wrote a while back that extends the SSAO technique you're implementing: http://www.john-chapman.net/content.php?id=8


#4846061 2D Animation in OpenGL

Posted by johnchapman on 07 August 2011 - 11:38 PM

Another way to do it would be to load each frame of animation into the layers of a 3D texture. Then to 'play' the animation you just interpolate the w texture coordinate. I'm not sure how efficient this is vs. jumping around a 2D texture, but it means you can take advantage of texture filtering between frames.
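What linear filtering along the w coordinate buys you can be emulated like this: a fractional w blends between two adjacent frames. The frames here are just flat lists of grey values for illustration:

```python
def sample_animation(frames, w):
    """frames: list of equally sized frames; w in [0, 1] spans the whole
    animation. Linearly interpolates between the two nearest frames,
    like hardware filtering on a 3D texture's w axis."""
    pos = w * (len(frames) - 1)
    i = min(int(pos), len(frames) - 2)   # lower frame index
    t = pos - i                          # blend factor in [0, 1]
    f0, f1 = frames[i], frames[i + 1]
    return [a * (1.0 - t) + b * t for a, b in zip(f0, f1)]

frames = [[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]]
mid = sample_animation(frames, 0.25)  # halfway between frames 0 and 1
```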


#4843078 Deferred Rendering, Transparency & Alpha Blending

Posted by johnchapman on 01 August 2011 - 04:55 AM

I've just posted up an overview of a technique I've been using to render transparent and alpha-blended materials in a deferred renderer. It has the slight flavour of a hack (like all deferred transparency methods, I suppose), but hopefully it'll be of some interest.

At any rate I'm keen to hear your views and opinions, and to know if there are any glaringly obvious problems with the technique that I've not thought of (I have implemented it, and it does work!).

The post is here: http://www.john-chap...ntent.php?id=13


#4842794 Updating VBOs

Posted by johnchapman on 31 July 2011 - 04:15 AM

Whenever you change the vertex data you need to update the VBO, but you don't necessarily have to upload the whole data set; see glBufferSubData() in the OpenGL man pages.

Also, when you initially allocate the buffer with glBufferData() you should set the 'usage' parameter to hint to the driver about how you anticipate the data being used. For instance, you might use GL_STATIC_DRAW for data you upload once and draw a lot, or GL_DYNAMIC_DRAW for data you both update and draw frequently - there's a whole gamut of these hints which allow the implementation to make decisions about how to most efficiently deal with your buffer data.
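The offset/size semantics of glBufferSubData() can be modelled with a plain byte buffer; this is an analogy to show the partial-update idea, not GL code:

```python
# glBufferSubData(target, offset, size, data) overwrites a byte range of
# an existing buffer store without reallocating it. A bytearray models
# those semantics: the store keeps its size, only the range changes.

def buffer_sub_data(buffer, offset, data):
    assert offset + len(data) <= len(buffer), "range exceeds buffer store"
    buffer[offset:offset + len(data)] = data

vbo = bytearray(16)                    # like glBufferData allocating 16 bytes
buffer_sub_data(vbo, 4, b"\x01\x02")   # update only bytes 4..5
```

Updating just the dirty range like this is the whole point: the rest of the buffer never has to cross the bus again.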

There's more information about VBOs (and other things) here.



