OpenGL [SOLVED] problems with render-to-texture

Recommended Posts

energydrink    100
There seem to be problems when dynamically creating a texture that is larger than the screen. I am just a rookie in OpenGL, but I suspect the glViewport call that resizes the viewport to fit the texture I am creating. Edit: I forgot to add that I am using glCopyTexImage2D.

I have two images to show the problem: [Ok] and [Not ok]. As you can see, from the point where the small letters begin they are not displayed correctly when the texture is used. For now I am not calling glViewport at all, because glViewport( 0, 0, theWidthOfTexture, theHeightOfTexture ) displays nothing.

Are there any pitfalls in doing this, or should I paste some code?

[Edited by - energydrink on April 24, 2010 4:39:13 PM]

energydrink    100
I looked into FBOs and yes, that is probably what I need. However, after reading up on the subject I still can't get it working. I am sure it is only a minor problem, or have I missed the concept?

I am basically using the same code as with glCopyTexImage2D. The following code is called only once, when creating a dynamic texture for a font sheet.
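As an aside, totalWidth and totalHeight come from summing up the glyph sizes before creating the atlas. Roughly like this (the GlyphSize struct and the two helpers here are just an illustration, not my real code; the sizes could come from something like TTF_SizeText):

```cpp
#include <vector>

//Hypothetical per-glyph size, e.g. queried via TTF_SizeText()
struct GlyphSize { int w, h; };

//For a horizontal strip atlas, the width is the sum of all glyph widths...
int atlasWidth( const std::vector<GlyphSize>& glyphs ) {
	int total = 0;
	for ( const GlyphSize& g : glyphs ) total += g.w;
	return total;
}

//...and the height is that of the tallest glyph
int atlasHeight( const std::vector<GlyphSize>& glyphs ) {
	int maxH = 0;
	for ( const GlyphSize& g : glyphs ) if ( g.h > maxH ) maxH = g.h;
	return maxH;
}
```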

Notice that I set no texture parameters here:


//Generate the atlas texture and remember its id
GLuint texID;
glGenTextures( 1, &texID );
myFontTexID.push_back( texID );
glBindTexture( GL_TEXTURE_2D, texID );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, ( GLsizei )totalWidth, ( GLsizei )totalHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );

The following framebuffer setup should be correct, right?


GLuint framebuffer;
glGenFramebuffers( 1, &framebuffer );
glBindFramebuffer( GL_FRAMEBUFFER, framebuffer );

GLuint renderbuffer;
glGenRenderbuffers( 1, &renderbuffer );
glBindRenderbuffer( GL_RENDERBUFFER, renderbuffer );

//Create storage for image data
glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA8, ( GLsizei )totalWidth, ( GLsizei )totalHeight );

//Attach texture to FBO as the color attachment
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, myFontTexID[ myFontTexID.size() - 1 ], 0 );

Let me check whether I got the concept right. With glCopyTexImage2D it was easy to understand: it was basically a print-screen call. With an FBO, I am now assuming that everything that gets rendered goes into the renderbuffer, which is attached to the framebuffer, which in turn is attached to the "main" texture. There is no command like glBufferData with VBOs, where you explicitly upload the data.

Since this is in the same method, the framebuffer should be already bound.


glViewport( 0, 0, totalWidth, totalHeight ); //Set size of texture

GLuint tempTex;
glGenTextures( 1, &tempTex );
glBindTexture( GL_TEXTURE_2D, tempTex );

glClear( GL_COLOR_BUFFER_BIT ); //Clear everything before drawing
for( GLuint i = 0; i < myAsciiTable.length(); i++ ) {
	character = myAsciiTable[ i ];

	//Create temporary texture for this glyph
	SDL_Surface *tempSurf = TTF_RenderText_Blended( myLoadedFont, character.c_str(), color );
	glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, tempSurf->w, tempSurf->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, tempSurf->pixels );

	glBegin( GL_QUADS );
	glTexCoord2f( 0.0f, 1.0f ); glVertex2f( 0.0f, 0.0f );
	glTexCoord2f( 0.0f, 0.0f ); glVertex2f( 0.0f, ( GLfloat )tempSurf->h );
	glTexCoord2f( 1.0f, 0.0f ); glVertex2f( ( GLfloat )tempSurf->w, ( GLfloat )tempSurf->h );
	glTexCoord2f( 1.0f, 1.0f ); glVertex2f( ( GLfloat )tempSurf->w, 0.0f );
	glEnd();

	//Advance one glyph width (glTranslatef is relative, so step by this glyph's width)
	glTranslatef( ( GLfloat )tempSurf->w, 0.0f, 0.0f );

	SDL_FreeSurface( tempSurf );
}

glBindFramebuffer( GL_FRAMEBUFFER, 0 );


glViewport( 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT );

glDeleteTextures( 1, &tempTex ); //The temporary glyph texture is no longer needed
glDeleteFramebuffers( 1, &framebuffer );
glDeleteRenderbuffers( 1, &renderbuffer );

This displays only white quads with no texture. As I noted before my "create texture" code, I set no texture parameters; setting a MIN and MAG filter just ends up displaying nothing.

Help! :)

HuntsMan    368
You don't need a renderbuffer; just attach the texture to the FBO.
Set the MIN/MAG filters to something such as GL_NEAREST before glTexImage2D. Otherwise your texture will be incomplete (the default MIN filter is mipmapped, and you don't have mipmaps).

This should make it work.
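Something like this, before your glTexImage2D call (this is a sketch of the ordering, using your texID variable; it obviously needs a live GL context):

```cpp
//Without these two lines, the default MIN filter (GL_NEAREST_MIPMAP_LINEAR)
//makes the texture incomplete, because no mipmap levels exist
glBindTexture( GL_TEXTURE_2D, texID );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, totalWidth, totalHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
```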

energydrink    100
Thanks for the fast reply. It sort of works. At first it displayed nothing; removing the glViewport() call displays text on the screen. However, the texture is larger than the screen, so I am not getting the whole texture. This was also the case when I used glCopyTexImage2D: using glViewport() would display nothing.

Using my "setOrtho2D" method was the only way to get it to work.

glViewport( 0, 0, width, height );
glMatrixMode( GL_PROJECTION );
glLoadIdentity(); //Reset before applying the new projection
gluOrtho2D( 0, width, 0, height );
glMatrixMode( GL_MODELVIEW );

Is there a reason I have to call gluOrtho2D as well? (I am not using perspective, only 2D.)

Anyway, it is working! Ah, finally! A BIG thank you! :)
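For anyone finding this later: glViewport only maps normalized device coordinates (NDC, the -1..1 range) to pixels, while gluOrtho2D( 0, w, 0, h ) is what maps pixel coordinates into that NDC range in the first place. Without it, a vertex at x = 400 lands far outside the visible -1..1 cube. The mapping is just ndc = 2 * x / w - 1; a tiny check (the helper name is made up):

```cpp
//gluOrtho2D( 0, w, 0, h ) maps x in [0, w] linearly onto NDC [-1, 1]
double pixelToNdc( double x, double w ) {
	return 2.0 * x / w - 1.0;
}
```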


