
How do I do OpenGL texture blitting


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

7 replies to this topic

#1 SelethD   Members   -  Reputation: 375


Posted 15 October 2012 - 03:12 PM

I need to take portions of two different textures and blit them onto a third texture, which will then be used to render to the device.

I have been googling this all day and reading several articles, and about all I have discovered so far is that this can be done with an FBO.

But all I have is this...

void glBlitFramebuffer(GLint srcX0, GLint srcY0, GLint srcX1, GLint srcY1,
                       GLint dstX0, GLint dstY0, GLint dstX1, GLint dstY1,
                       GLbitfield mask, GLenum filter);

Nothing actually showing 'how' to blit the textures.

I'm assuming I somehow need to do the following...

'set up' the FBO
somehow set my src texture
set my dst texture
call the glBlitFramebuffer
... then perhaps I can use the dst texture in my next call to the render function.

Thanks for any help.

Oooh, and I'm using VS2010 on Windows 7 32-bit.


#2 larspensjo   Members   -  Reputation: 1526


Posted 15 October 2012 - 03:51 PM

'set up' the FBO
somehow set my src texture
set my dst texture
call the glBlitFramebuffer

You already got it; just some details are needed. I think all bitmaps have to be the same size! If you have two textures, fSource1 and fSource2, then create the destination texture, fSource3. After that, you can do as follows. I didn't allocate a depth buffer below, and you may also need to explicitly disable the depth buffer (I don't remember). You will get a "framebuffer incomplete" error if something is wrong.
[source lang="cpp"]
// Create the frame buffer object
//-------------------------------
glGenFramebuffers(1, &fboName);
// enable this frame buffer as the current frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, fboName);
// attach the textures to the frame buffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fSource1, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, fSource2, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, fSource3, 0);
GLenum fboStatus = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (fboStatus != GL_FRAMEBUFFER_COMPLETE) {
    printf("FrameBuffer incomplete: 0x%x\n", fboStatus);
    exit(1);
}
glReadBuffer(GL_COLOR_ATTACHMENT0); // Prepare reading from this texture
GLenum bufferlist[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(1, &bufferlist[2]); // Prepare drawing into buffer 2.
glBlitFramebuffer(..); // This will now copy from GL_COLOR_ATTACHMENT0 to GL_COLOR_ATTACHMENT2
glReadBuffer(GL_COLOR_ATTACHMENT1); // Read from this texture instead now
glBlitFramebuffer(..); // This will now copy from GL_COLOR_ATTACHMENT1 to GL_COLOR_ATTACHMENT2
glBindFramebuffer(GL_FRAMEBUFFER, 0); // Disable FBO when done
[/source]

You can save the FBO for later use if you want, or delete it.
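For concreteness, here is what one of the elided glBlitFramebuffer(..) calls above might look like. This is only a sketch; the 256x256 size is an assumption, not something from this thread:

```cpp
// Assumed: both the read and draw buffers are 256x256 textures.
// Copies the full color contents of the current read buffer (set with
// glReadBuffer) into the current draw buffer (set with glDrawBuffers).
glBlitFramebuffer(0, 0, 256, 256,      // source rect: x0, y0, x1, y1
                  0, 0, 256, 256,      // destination rect
                  GL_COLOR_BUFFER_BIT, // copy color data only
                  GL_NEAREST);         // filter; GL_LINEAR only matters when scaling
```

Since the source and destination rectangles can differ, the same call can copy just a portion of one texture into a portion of another, which is the blit described in the original post.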
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#3 SelethD   Members   -  Reputation: 375


Posted 16 October 2012 - 06:28 AM

Thanks so much for the accurate and quick reply.
I am, however, having an issue.

When I implemented the example in my own code, I got an error stating that glGenFramebuffers is undefined; in fact, pretty much all of the framebuffer commands are undefined.

So I put in a few other OpenGL commands, and they are recognized... so now I'm thinking either I am not including something I should, or I am using an out-of-date OpenGL.

#include <gl\GL.h>
#include <gl\GLU.h>

is all I am including in my app, along with linking the libs.

I realized I am using VS2008, so does that determine the version of OpenGL?

Thanks again.

#4 Bregma   Crossbones+   -  Reputation: 4749


Posted 16 October 2012 - 07:41 AM

The version of the OpenGL API that is shipped with the Microsoft SDK is frozen at version 1.1 -- its arms are too short for its great big head.

You need to use an extension wrangler, like GLEW or GLee, to use anything more recent than your great-grandfather's OpenGL on Microsoft Windows.
Stephen M. Webb
Professional Free Software Developer

#5 larspensjo   Members   -  Reputation: 1526


Posted 16 October 2012 - 11:36 AM

As Bregma says, but note that this is not a shortcoming of your graphics drivers. They already have the functionality (if they are recent); the extension wrangler simply unlocks access to it. There are ways to do it manually, but it is a waste of effort.

All OpenGL calls go through the opengl32.dll supplied by Microsoft, and they see no reason to improve on it since there is Direct3D instead.

When using an extension wrangler like GLEW, you no longer need to include GL/gl.h; instead you include GL/glew.h. But don't forget to first initialize the library as soon as you have a context available (an open window).
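A minimal initialization sketch with GLEW might look like the following. It assumes a context has already been created by your windowing code (not shown), and the function name is illustrative:

```cpp
#include <GL/glew.h>   // include before any other GL header
#include <cstdio>

// Call once, right after the OpenGL context is made current.
bool initExtensions()
{
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        std::fprintf(stderr, "glewInit failed: %s\n", glewGetErrorString(err));
        return false;
    }
    // After a successful glewInit, entry points such as
    // glGenFramebuffers are valid function pointers.
    return GLEW_VERSION_3_0 != 0;  // true if OpenGL 3.0 is available
}
```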

You should also consider the difference between legacy OpenGL and OpenGL 3+. GL/glu.h contains several legacy things and is perhaps less used these days.
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#6 SelethD   Members   -  Reputation: 375


Posted 16 October 2012 - 07:40 PM

OK, I shall look into GLEW and see if I can get things rolling again.

However, you mention GL/glu.h has some legacy things... so what would I need to do to use OpenGL 3+? Would I simply not use GL/glu.h, or is there something I would substitute for it?

Thanks.

#7 larspensjo   Members   -  Reputation: 1526


Posted 17 October 2012 - 04:22 AM

The big difference is the legacy immediate mode: you call glBegin(), then define vertices and possibly some attributes, and then call glEnd(). This is simple, which makes many beginners like it.

The new way to do this is more complicated, but much more flexible. It takes some time to learn, but once you have it running you usually do not want to go back. The idea is instead to allocate a local buffer in RAM, store vertex data into it, transfer the buffer to the GPU, define a vertex shader and a fragment shader, and finally run the shaders with this GPU buffer as input. The vertex data you store in the buffer can be any type of data; it is up to you what you do with it in the shaders.
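The buffer-based path just described can be sketched roughly as follows. This assumes a working 3.x context and an already-compiled shader program `prog`; both are placeholders, not code from this thread:

```cpp
// One triangle's positions, built in a local buffer in RAM (x/y pairs).
GLfloat verts[] = {
    -0.5f, -0.5f,
     0.5f, -0.5f,
     0.0f,  0.5f,
};

// Transfer the buffer to the GPU.
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

// Describe the data to the vertex shader: attribute 0, two floats per vertex.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

// Run the vertex and fragment shaders with this GPU buffer as input.
glUseProgram(prog);
glDrawArrays(GL_TRIANGLES, 0, 3);
```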

You can find a minimal complete example at https://github.com/progschj/OpenGL-Examples/blob/master/01shader_vbo2.cpp. There are also other examples, all of them kept as small as possible while still having good comments, so that you can study exactly how OpenGL is used to get something done.

See also http://www.opengl.org/wiki/Legacy_OpenGL for more information.

For a very good tutorial, based on OpenGL 3+, see http://www.arcsynthesis.org/gltut/.

You don't have to change, though. Legacy OpenGL will probably be supported by graphics drivers for a long time.
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#8 SelethD   Members   -  Reputation: 375


Posted 17 October 2012 - 07:57 AM

Wow, I really appreciate all the information. One of the most difficult issues when trying to learn the 'updated' way of doing things is the sheer volume of older tutorials and code out there on the net. So I am very thankful for the links, and also for the terminology (which is a huge help in knowing 'what' to search for).



