Adaline

deferred rendering : advice needed


Hi,
I'm about to implement deferred rendering in my project. I'd like to describe how I plan to do it; please let me know if you have any advice.

My buffers would be:
ColorBuffer R32G32B32
DepthBuffer 32 bits
NormalBuffer R32G32B32
SpecularityBuffer R32G32 (intensity and exponent)


I'd use a geometry shader to send color/normal/specularity to the different render targets.

Is that OK?

Thank you!




To use the MRT functionality provided by your graphics API, every buffer needs to be the same size.
The buffer sizes you proposed are overkill: an albedo/color buffer does just fine with 8 bits per component, and normals can be stored at 8 bits per component too, though I'd advise 16 bits. You'd probably also want to look into encoding your normals so you only store the X and Y components and reconstruct the Z component later on.
Your other components really don't need 32 bits either.

I'd advise you use this buffer:

Albedo/Color + Specular intensity: A8R8G8B8
Depth: R32
Normals + Specular exponent: A8R8G8B8 (if you store 2 normal components you'll have a free spot for storing additional info)

This would result in a G-buffer of 96 bits per pixel, which should do just fine.

An alternative is to use R16G16 for your normals and add an additional A8R8G8B8 buffer for storing any extra constants where needed; that gives a G-buffer of 128 bits per pixel, which is still OK.
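To make the two-component normal idea concrete, here's a minimal CPU-side sketch in Python (the real thing would live in your shaders; the function names are mine, not from any API). It quantizes view-space X and Y to 16 bits each and reconstructs Z on decode, assuming Z is non-negative, which holds for view-space normals of front-facing surfaces:

```python
import math

def encode_normal(n):
    """Quantize a unit normal's x and y to unsigned 16-bit values.
    Assumes z >= 0 (view-space normals of front-facing geometry)."""
    x, y, _z = n
    # map [-1, 1] -> [0, 65535]
    qx = round((x * 0.5 + 0.5) * 65535)
    qy = round((y * 0.5 + 0.5) * 65535)
    return qx, qy

def decode_normal(qx, qy):
    """Recover the normal; z is reconstructed from x and y
    via z = sqrt(1 - x^2 - y^2)."""
    x = (qx / 65535) * 2.0 - 1.0
    y = (qy / 65535) * 2.0 - 1.0
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)
```

Note the assumption: if your normals can face away from the camera (e.g. with some normal-mapped geometry), plain Z-reconstruction loses the sign and you'd need a fancier encoding.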

EDIT:

I missed the geometry shader part. I really don't get why you'd need a geometry shader here; geometry shaders are for creating or modifying geometry on the fly, while filling a G-buffer is done through a pixel shader...

Very nice and helpful explanation.
Thank you!

edit: ... because I have several buffers to fill from the same input... so is there a way to create a single buffer containing all the data (called a G-buffer)?




No, that's why you use MRT functionality (MRT = Multiple Render Targets).

First you do a pass that fills your G-buffer: set the G-buffer textures as render targets through MRT, and for each object in your scene write the needed properties (i.e. albedo, normals, depth, etc.) to their appropriate render targets through a pixel shader.

After that you do a lighting pass where you bind the G-buffer textures as input, so you can use the stored properties to compute your lighting.

There are loads of tutorials and documents on this, so I'd suggest reading up on them if you're confused about certain aspects.
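The two passes above can be sketched on the CPU like this — a toy Python model, not real graphics code, with each render target represented as its own 2D array (the MRT analogue) and all names invented for illustration:

```python
W, H = 4, 4  # tiny "screen" for illustration

# G-buffer: one array per render target (the MRT analogue)
albedo_rt = [[(0.0, 0.0, 0.0)] * W for _ in range(H)]
normal_rt = [[(0.0, 0.0, 1.0)] * W for _ in range(H)]
depth_rt  = [[1.0] * W for _ in range(H)]

def geometry_pass(fragments):
    """Pass 1: write surface properties to the G-buffer, no lighting.
    Each fragment is (x, y, albedo, normal, depth)."""
    for (x, y, albedo, normal, depth) in fragments:
        if depth < depth_rt[y][x]:  # depth test
            albedo_rt[y][x] = albedo
            normal_rt[y][x] = normal
            depth_rt[y][x] = depth

def lighting_pass(light_dir):
    """Pass 2: full-screen pass reading the G-buffer as textures,
    computing simple N.L diffuse lighting per pixel."""
    lx, ly, lz = light_dir
    out = [[(0.0, 0.0, 0.0)] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            nx, ny, nz = normal_rt[y][x]
            ndotl = max(0.0, nx * lx + ny * ly + nz * lz)
            r, g, b = albedo_rt[y][x]
            out[y][x] = (r * ndotl, g * ndotl, b * ndotl)
    return out
```

The key structural point survives the simplification: the geometry pass never touches lights, and the lighting pass never touches scene geometry — they only communicate through the stored per-pixel properties.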
