
I need help using glConvolutionFilter2D


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

6 replies to this topic

#1 Player_0   Members   -  Reputation: 99


Posted 25 January 2005 - 04:04 PM

Hello, I'm trying to get this function to work and it's not going well. The program compiles, but when it runs the Windows error box pops up asking if I want to send the error to Microsoft. I'm not sure if I'm using the function correctly. In my init function I enable convolutions and call "glConvolutionFilter2D(GL_CONVOLUTION_2D, GL_RGB, 3, 3, GL_RGB, GL_FLOAT, blah);". "blah" is of course a [3][3] float array for the kernel. After that I just load a BMP, bind the texture and draw it onto a quad. It never gets that far, though; the crash is in the glConvolutionFilter2D call. I'm a little unsure about what it applies to. Does it multiply the convolution into every pixel that is drawn as long as it's enabled? I'm lost on the details, please help.


#2 Brother Bob   Moderators   -  Reputation: 8202


Posted 25 January 2005 - 07:49 PM

The call looks correct, so I can only think of one thing: did you load the function pointer correctly and get a valid address for it?

The convolution applies to all pixel transfers as long as it's enabled. Pixel transfers are operations like glDrawPixels and glTexImage (functions transferring pixels), but not rendered objects. The operation is, well, a two-dimensional convolution. If you don't know what a convolution is, I suggest you search Google for it. It's basically a multiply-and-sum for each pixel: the convolution kernel is centered over a pixel (and its neighbours), the values in the kernel are multiplied with the pixel values they overlap, and everything is summed up to produce the output for the pixel the kernel is centered on.

#3 Brother Bob   Moderators   -  Reputation: 8202


Posted 25 January 2005 - 07:57 PM

On second look, it doesn't look correct: blah should be a [3][3][3] array; it is a 3x3 kernel with 3 color components for each sample.

#4 Player_0   Members   -  Reputation: 99


Posted 26 January 2005 - 09:32 AM

The function only accepts two values, for the width and height of the array. I believe it gets the color values directly from the pixels where it is applied, depending on whether you specify GL_RGB, GL_LUMINANCE, etc.
With my first program something didn't mix right between the convolution and the multitexturing. I made a simpler one with glTexCoord and the quad is showing on the screen, but it is all white. No matter what filter I give it, the quad is always pure white; no colors or any parts of the texture show up. Can anyone help me with this? If needed I can post some code.

#5 Brother Bob   Moderators   -  Reputation: 8202


Posted 26 January 2005 - 08:34 PM

As I said in my previous post, the convolution operation does not affect rendered objects, only pixel transfers. So the only thing that would be affected is the texture when you upload it with glTexImage. If the quad turns out all white, then the texture setup is likely wrong, but the convolution kernel is not to blame for that, at least not directly.

#6 Anonymous Poster   Guests   -  Reputation:


Posted 26 January 2005 - 11:53 PM

Sorry for interrupting your deep conversation, but this glConvolutionFilter2D sounds interesting.
What kind of effects can it be used for?
If I understand this correctly, I should be able to blur a texture with it, right?

So would it be possible to do the following:
1. Render to a buffer/texture
2. Blur the texture with glConvolutionFilter2D
3. Use the blurred texture for something fun.

Is it hardware accelerated?


#7 Brother Bob   Moderators   -  Reputation: 8202


Posted 27 January 2005 - 01:17 AM

The idea is that the image data is filtered by the convolution kernel to achieve some effect. It can be blurring, sharpening, edge detection and so on. What you can do with it is limited only by your imagination and by what a kernel of the supported size can do.

However, it works on pixel transfers, so you don't apply the filter to a texture as such; it's applied to the data as it's being transferred from one place to another. How this interacts with render-to-texture functions I don't know. But I think the kernel should be applied if you at least use glCopyTexSubImage, which can be seen as a kind of render-to-texture function in the sense that you can render an image and copy it into a texture.

About performance, I don't think any consumer-level card can do this, or any part of the imaging subset, in hardware, so performance would be pretty poor for real time applications.






