Xycaleth

FBOs and Different Texture Dimensions


I'm currently in the process of implementing HDR in a program I'm working on. I've got most of it done now, except for calculating the average luminance. From what I understand, I need to convert the RGB values into a luminance value and then downsample that repeatedly until I'm left with a 1x1 pixel, which gives the overall average. I have a set of textures, each progressively smaller (512x512, 128x128, 32x32, etc.), and at first I attach the largest one to the FBO. To downsample, I render the RGB-converted scene, then detach the current color attachment, attach the next smallest texture, render again, and repeat. Should I be using a separate FBO for each texture size? I've heard that swapping FBOs is expensive (I would have to do this about 5 times per frame)... What are your thoughts on this?
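To make it concrete, here's roughly the kind of loop I mean. This is just a simplified, untested sketch in C using EXT_framebuffer_object, not my actual code; lumTex[], lumSize[], NUM_LEVELS and drawFullScreenQuad() are placeholders for what I already have.

GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

/* lumTex[0] already holds the full-size luminance image */
for (int i = 1; i < NUM_LEVELS; ++i)
{
    /* swap in the next, smaller texture as the colour attachment */
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, lumTex[i], 0);
    glViewport(0, 0, lumSize[i], lumSize[i]);

    /* read from the previous, larger level and downsample it */
    glBindTexture(GL_TEXTURE_2D, lumTex[i - 1]);
    drawFullScreenQuad();
}

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);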

Why not just make an FBO with a single 512x512 texture and mipmap it, then read each mipmap level for the smaller sizes? That only requires one FBO and one call.
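Something along these lines (untested), assuming a 512x512 luminance texture lumTex that was created with a full mipmap chain and is already attached to lumFbo, and drawLuminancePass() stands in for your RGB-to-luminance pass:

/* render the RGB-to-luminance pass into the base level */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, lumFbo);
glViewport(0, 0, 512, 512);
drawLuminancePass();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

/* the "one call": let the driver downsample the chain all the way to 1x1 */
glBindTexture(GL_TEXTURE_2D, lumTex);
glGenerateMipmapEXT(GL_TEXTURE_2D);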

Okay. I've added the mipmapping, but I don't know how to sample from the 1x1 mipmap in the fragment shader:

vec4 lum = texture2D (texLuminance, vec2 (0.5, 0.5), 10);

I'm giving the bias argument of texture2D a high value, hoping it would give me the lowest-resolution mipmap, but it doesn't seem to do anything. I can tell because when I point the camera at a dark spot the exposure is increased a lot, and vice versa, whereas it should be sampling from an averaged luminance map, i.e. the 1x1 mipmap level.

I was going to suggest using texture2DLod(), but as stated below:


texture2DLod is for the vertex shader, or for the fragment shader if GL_EXT_gpu_shader4 is supported. If the extension is not supported, you use the regular texture2D in the fragment shader (and the mipmap LOD is computed automatically and used for indexing into the mipmap chain). Using derivatives in the vertex shader is not possible.


You might be able to use texture2D(<sampler2D>, <uv>, <bias>) and use the dFdx/dFdy functions to calculate the bias, but to use the LOD version in the fragment shader you will need a newer GPU, e.g. a GF8 or later, IIRC...
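Something like this might work for the bias route, though I haven't tried it. The GLSL is kept in a C string here; texSize and maxLod assume a 512x512 luminance texture (so the 1x1 level is LOD 9 = log2(512)), and it assumes your vertex shader writes gl_TexCoord[0]. The coordinate passed to texture2D has to be a varying rather than a constant like vec2(0.5, 0.5), because the hardware derives the implicit LOD from that coordinate's screen-space derivatives and a constant gives zero derivatives. Also note that drivers clamp the bias to GL_MAX_TEXTURE_LOD_BIAS, so this isn't guaranteed to reach level 9 on every card.

static const char *avgLumFrag =
    "uniform sampler2D texLuminance;                                      \n"
    "const float texSize = 512.0;  /* size of the base mipmap level */    \n"
    "const float maxLod  = 9.0;    /* log2(512): the 1x1 level */         \n"
    "void main()                                                          \n"
    "{                                                                    \n"
    "    vec2 uv = gl_TexCoord[0].xy;                                     \n"
    "    /* estimate the LOD the hardware would pick for this sample */   \n"
    "    vec2 dx = dFdx(uv) * texSize;                                    \n"
    "    vec2 dy = dFdy(uv) * texSize;                                    \n"
    "    float lod = 0.5 * log2(max(dot(dx, dx), dot(dy, dy)));           \n"
    "    /* bias the implicit LOD up to the 1x1 mipmap */                 \n"
    "    vec4 lum = texture2D(texLuminance, uv, maxLod - lod);            \n"
    "    gl_FragColor = lum;  /* use this to drive your exposure */       \n"
    "}                                                                    \n";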

I can use texture2DLod on my graphics card (I have an ATI HD 4850), but I want to keep the requirements as low as possible because this needs to work on graphics cards that are 3 or 4 years old (I think that's Shader Model 2.0?). However, I'm not sure how I would go about using the dFdx/dFdy functions to calculate the bias for texture2D.
