

mv348

Posted 27 January 2013 - 08:17 PM

Let's say I have a 32x32 texture containing ints or floats. I am trying to come up with a very fast method of adding up all the values in the texture using OpenGL.

 

I was thinking I could possibly set the texture minification filter to GL_LINEAR and render it into a texture half the size (16x16). According to the documentation:

GL_LINEAR: Returns the weighted average of the four texture elements that are closest to the specified texture coordinates.

 

If the target texture is half the size, I believe the weighted average becomes a plain average: each destination pixel's sample point falls at the shared corner of a 2x2 block of source texels, so all four weights are 1/4 and the new pixel is (sum of 4 old pixels) / 4.

 

So I'm thinking if I render from the 32x32 texture into a 16x16, then into 8x8, 4x4, 2x2 and finally 1x1, and then use glReadPixels( ... ), I will get (sum of all pixels) / 4^5.

 

4^5 = 1024, so I can either pre-multiply the texture values by 1024 or divide the result by 1024. I think the former would be more appropriate for ints, to avoid roundoff error. I could maybe even use bit-shifting to multiply by 1024 (a left shift by 10) for added speed.

 

What do you think of this idea? Do you have a better one?

