OpenGL: How do I do texture blitting?


SelethD    456
I need to take portions of two different textures and blit them onto a third texture, which will then be used to render to the device.

I have been googling this all day and reading several articles, and about all I have discovered so far is that this can be done with an FBO.

But all I have is this...

void glBlitFramebuffer(GLint srcX0, GLint srcY0, GLint srcX1, GLint srcY1,
                       GLint dstX0, GLint dstY0, GLint dstX1, GLint dstY1,
                       GLbitfield mask, GLenum filter);

Nothing actually showing 'how' to blit the textures.

I'm assuming I somehow need to do the following:

'set up' the FBO
somehow set my src texture
set my dst texture
call glBlitFramebuffer
...then perhaps I can use the dst texture in my next call to the render function.

Thanks for any help.

Oh, and I'm using VS2010 on Windows 7 32-bit.

larspensjo    1561
[quote name='SelethD' timestamp='1350335528' post='4990519']
'set up' the FBO
somehow set my src texture
set my dst texture
call glBlitFramebuffer
[/quote]
You already have it; just a few details are needed. I think all the textures have to be the same size! If you have two textures, fSource1 and fSource2, then create the destination texture, fSource3. After that, you can do as follows. I didn't allocate a depth buffer below, and maybe you also need to explicitly disable depth testing (I don't remember). You will get a "framebuffer incomplete" error if something is wrong.
[source lang="cpp"]
// Create the frame buffer object
//-------------------------------
GLuint fboName;
glGenFramebuffers(1, &fboName);
// Bind it as the current frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, fboName);
// Attach the three textures to the frame buffer's color attachments
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fSource1, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, fSource2, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, fSource3, 0);

GLenum fboStatus = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (fboStatus != GL_FRAMEBUFFER_COMPLETE) {
    printf("FrameBuffer incomplete: 0x%x\n", fboStatus);
    exit(1);
}

glReadBuffer(GL_COLOR_ATTACHMENT0);   // Prepare reading from the first texture
GLenum drawBuffer = GL_COLOR_ATTACHMENT2;
glDrawBuffers(1, &drawBuffer);        // Prepare drawing into the third texture
// Copy from GL_COLOR_ATTACHMENT0 to GL_COLOR_ATTACHMENT2. The rectangles here
// are just an example: they copy the full surface, assuming all three
// textures are width x height pixels.
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

glReadBuffer(GL_COLOR_ATTACHMENT1);   // Now read from the second texture instead
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

glBindFramebuffer(GL_FRAMEBUFFER, 0); // Unbind the FBO when done
[/source]

You can save the FBO for later use if you want, or delete it.
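
Since you only want portions of the source textures: the first four arguments of glBlitFramebuffer select the source rectangle and the last four coordinates select the destination rectangle, so something like this copies just a region (the coordinates are made up for illustration):
[source lang="cpp"]
// Copy the 64x64 region at (16, 16) in the read buffer
// to the region at (0, 0) in the draw buffer.
glBlitFramebuffer(16, 16, 16 + 64, 16 + 64,   // src x0, y0, x1, y1
                  0, 0, 64, 64,               // dst x0, y0, x1, y1
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
[/source]
If the source and destination rectangles differ in size, the blit scales, and you can pass GL_LINEAR as the filter to get filtered scaling.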

SelethD    456
Thanks so much for the quick and accurate reply.
However, I am having an issue.

When I implemented the example in my own code, I got an error stating that glGenFramebuffers is undefined; pretty much all of the framebuffer commands are undefined.

So I put in a few other OpenGL commands, and they are recognized... so now I'm thinking either I am not including something I should, or I am using an out-of-date OpenGL.

#include <gl\GL.h>
#include <gl\GLU.h>

is all I am including in my app, along with linking the libs, of course.

I realized I am using VS2008, so does that determine the version of OpenGL?

Thanks again.

Bregma    9199
The version of the OpenGL API that ships with the Microsoft SDK is frozen at version 1.1 -- its arms are too short for its great big head.

You need to use an [i]extension wrangler[/i], like [url="http://glew.sourceforge.net/"]glew[/url] or [url="http://elf-stone.com/glee.php"]glee[/url], to use anything more recent than your great-grandfather's OpenGL on Microsoft Windows.

larspensjo    1561
As Bregma says. But note that this is not a shortcoming of your graphics drivers: they already have the functionality (if they are recent), and the [i]extension wrangler[/i] just unlocks access to it. There are ways to do it manually, but that is a waste of effort.

All OpenGL calls go through the opengl32.dll supplied by Microsoft, and they see no reason to improve it when there is Direct3D instead.

When using an extension wrangler like glew, you no longer need to include GL/gl.h; you include GL/glew.h instead. But don't forget to initialize the library first, as soon as you have a context available (an open window).
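
For example, a minimal sketch of the order of operations (how you create the window and context is up to you; glewInit() just has to come after the context exists):
[source lang="cpp"]
#include <GL/glew.h>  // include instead of GL/gl.h, and before any other GL header
#include <cstdio>
#include <cstdlib>

// ... create your window and OpenGL context first (WGL, GLFW, SDL, ...) ...

GLenum err = glewInit();  // call once, with the context current
if (err != GLEW_OK) {
    fprintf(stderr, "glewInit failed: %s\n", glewGetErrorString(err));
    exit(1);
}
// From here on, glGenFramebuffers and friends will be available.
[/source]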

You should also consider the difference between [url="http://www.opengl.org/wiki/Legacy_OpenGL"]legacy OpenGL[/url] and OpenGL 3+. GL/glu.h contains several legacy things and is perhaps less used these days.

SelethD    456
OK, I shall look into glew and see if I can get things rolling again.

However, you mention GL/glu.h has some legacy things... so what would I need to do to use OpenGL 3+? Would I simply not use GL/glu.h, or is there something I would substitute for it?

Thanks.

larspensjo    1561
The big difference is the legacy immediate mode. That is, you call glBegin(), then define vertices and possibly some attributes, and then call glEnd(). This is simple, which is why many beginners like it.

The new way of doing this is more complicated, but much more flexible. It takes some time to learn, but once you have it running you usually do not want to go back. The idea is instead to allocate a local buffer in RAM, store vertex data into it, transfer the buffer to the GPU, define a vertex shader and a fragment shader, and finally run the shaders with this GPU buffer as input. The vertex data you store in the buffer can be any type of data; it is up to you what to do with it in the shaders.
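
As a rough sketch of those steps (the shader program prog is assumed to be compiled and linked elsewhere, with the vertex position as attribute 0; error checking omitted):
[source lang="cpp"]
// 1. Vertex data in a local buffer (one triangle, 2D positions)
GLfloat verts[] = { -0.5f, -0.5f,   0.5f, -0.5f,   0.0f, 0.5f };

// 2. Transfer it to a buffer object on the GPU
GLuint vao, vbo;
glGenVertexArrays(1, &vao);          // core OpenGL 3+ requires a VAO
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

// 3. Describe the layout: attribute 0 = two floats per vertex
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

// 4. Run the shaders over the buffer
glUseProgram(prog);                  // vertex + fragment shaders, built elsewhere
glDrawArrays(GL_TRIANGLES, 0, 3);
[/source]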

You can find a minimal complete example at [url="https://github.com/progschj/OpenGL-Examples/blob/master/01shader_vbo2.cpp"]https://github.com/progschj/OpenGL-Examples/blob/master/01shader_vbo2.cpp[/url]. There are other examples there as well, all coded to be as small as possible while still having good comments, so you can study exactly how OpenGL is used to get something done.

See also [url="http://www.opengl.org/wiki/Legacy_OpenGL"]http://www.opengl.org/wiki/Legacy_OpenGL[/url] for more information.

For a very good tutorial, based on OpenGL 3+, see [url="http://www.arcsynthesis.org/gltut/"]http://www.arcsynthesis.org/gltut/[/url].

You don't [b]have[/b] to change, though. Legacy OpenGL will probably be supported by graphics drivers for a long time.

SelethD    456
Wow, I really appreciate all the information. One of the most difficult issues when trying to learn the 'updated' way of doing things is the sheer volume of older tutorials and code out there on the net. So I am very thankful for the links, and also for the terminology (which is a huge help in knowing 'what' to search for).


