glCopyTexImage2D and 800 x 600 resolution

Started by Devil_Inside
7 comments, last by silvermace 18 years, 11 months ago
I have a problem rendering the entire screen to a texture ("taking a screenshot"). My code:

void CreateEmptyTexture(short TexId) // I put this in my Init function.
{
    unsigned char* data = new unsigned char[800*600*4];
    memset(data, 0, 800*600*4);
    glGenTextures(1, &TextureArray[TexId]);
    glBindTexture(GL_TEXTURE_2D, TextureArray[TexId]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 800, 600, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    delete[] data; // delete[] to match new[]
}

void TakeAScreenShot(short TexId, bool Update) // I put this after my drawing functions
{
    if(Update)
    {
        glBindTexture(GL_TEXTURE_2D, TextureArray[TexId]);
        glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, 800, 600, 0);
    }
}

void DrawSavedScreenShot(short TexId, bool Show) // This just draws the saved "screenshot"
{
    if(Show)
    {
        glPushMatrix();
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);
        glDisable(GL_DEPTH_TEST);
        glEnable(GL_COLOR_MATERIAL);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE);
        glBindTexture(GL_TEXTURE_2D, TextureArray[TexId]);
        glTranslatef(250, 300, 0);
        glScalef(260, 260, 1);
        glColor4f(1.0f, 1.0f, 1.0f, 0.5f);
        glBegin(GL_QUADS);
            glTexCoord2f(0, 1); glVertex2f(-1,  1);
            glTexCoord2f(0, 0); glVertex2f(-1, -1);
            glTexCoord2f(1, 0); glVertex2f( 1, -1);
            glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glEnd();
        glDisable(GL_COLOR_MATERIAL);
        glEnable(GL_DEPTH_TEST);
        glDisable(GL_BLEND);
        glDisable(GL_TEXTURE_2D);
        glPopMatrix();
    }
}

It works fine with 512 x 512 and 1024 x 1024 in glCopyTexImage2D, but at the 800 x 600 resolution it doesn't seem to work; it just draws a white quad.

http://collective.valve-erc.com/index.php?doc=1087646312-07733400&page=2

This is a link to a tutorial on creating a glow effect for Half-Life. It uses ScreenWidth and ScreenHeight variables for glCopyTexImage2D, and these, I suppose, can take the values 800 and 600 when the game resolution is 800 x 600.
=================* No Brain - No Pain *=================
That's because you can't use 800x600 textures unless you use the Non-power-of-two ARB extension. Thus in your case, the only valid textures are 2^n in size. To take the screenshot, look at glReadPixels.
And what about the link? How is this problem solved there?
=================* No Brain - No Pain *=================
I was planning to use that method for some post-render effects, blur and the like. Can I get the same effects using glReadPixels? Are there any tutorials on glReadPixels? Thank you.
=================* No Brain - No Pain *=================
Quote:Original post by Devil_Inside
I was planning to use that method for some post-render effects, blur and the like. Can I get the same effects using glReadPixels? Are there any tutorials on glReadPixels? Thank you.


glReadPixels reference. To use it: glReadPixels(0, 0, 800, 600, GL_RGB, GL_UNSIGNED_BYTE, buffer);

It's very slow so don't expect to do any effects with it.
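For reference, a minimal sketch of grabbing the full 800 x 600 back buffer into client memory with glReadPixels; the buffer size and the GL_PACK_ALIGNMENT setting are assumptions for an RGB readback, not something from the posts above:

// Sketch: read the visible frame into client memory (assumes an 800 x 600 window).
unsigned char* buffer = new unsigned char[800 * 600 * 3];

glPixelStorei(GL_PACK_ALIGNMENT, 1);   // rows are tightly packed for GL_RGB
glReadPixels(0, 0, 800, 600, GL_RGB, GL_UNSIGNED_BYTE, buffer);

// ... write 'buffer' to disk, or upload it with glTexSubImage2D ...

delete[] buffer;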

**EDIT**

I've read your post again; it seems you just want to render to a texture. My bad, I thought you wanted to take a screenshot and save it to disk. See the above two posts for better information. I would advise you to do your initial rendering to a 2^n-sized viewport which can be placed into a 2D texture.
There are three ways of handling this:

- Use a texture which is bigger than the screen size and just restrict the texture coords when drawing it to the screen again. The problem with this is wasted texture memory.

- Restrict the drawing of the first pass to a certain window which is the same as the texture size (via glViewport) and just copy it across to the texture.

Both of these suffer from not being able to use a full texture which is bigger than the screen, which can be desirable (an 800*600 screen but a 1024*1024 texture to improve the detail). This can be sidestepped with the third solution:

- Use a framebuffer object to render directly to a texture of a size of your choosing, or a pbuffer to render to a large context and then access it via a texture in later passes. A minimal FBO sketch follows below.
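A minimal sketch of the framebuffer-object route, assuming EXT_framebuffer_object is available and that a 1024x1024 colour texture is acceptable; the sizes and the missing depth attachment are assumptions, not part of the post above:

// Sketch only: render-to-texture via EXT_framebuffer_object.
GLuint fbo, tex;

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) == GL_FRAMEBUFFER_COMPLETE_EXT)
{
    glViewport(0, 0, 1024, 1024);
    // ... draw the scene here; it lands in 'tex' ...
}

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);   // back to the window
glViewport(0, 0, 800, 600);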
I suggest that you add this line at the beginning and end of each function that calls OpenGL:

assert( !glGetError() );


In this case, it would trip after you did a glTexImage2D() with size 800x600.
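A slightly more verbose variant (a hypothetical helper, not from the post above) that reports which error code was raised before asserting:

#include <cassert>
#include <cstdio>

// Hypothetical helper: report and assert on any pending GL error.
static void CheckGLError(const char* where)
{
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
    {
        std::printf("GL error 0x%X at %s\n", err, where);
        assert(false);
    }
}

// Usage: CheckGLError("CreateEmptyTexture entry");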

There are two solutions:

1) create a texture that's 1024x1024 and use the lower-left corner of it (with texture coordinates from 0,0 to 800.0/1024.0, 600.0/1024.0) - a sketch of this approach follows below

2) use the TEXTURE_RECT texture target instead of the TEXTURE_2D target (requires support for the extension)
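A minimal sketch of the first solution, reusing the TextureArray slot from the original post and assuming an 800 x 600 window; only the lower-left 800 x 600 region of the 1024 x 1024 texture is copied and sampled:

// Allocate a 1024x1024 texture once (power of two, so always a valid size).
glBindTexture(GL_TEXTURE_2D, TextureArray[TexId]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// After rendering the frame, copy just the 800x600 screen into its corner.
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 800, 600);

// When drawing the quad, stop the texture coordinates at the used region.
const float sMax = 800.0f / 1024.0f;
const float tMax = 600.0f / 1024.0f;
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, tMax); glVertex2f(-1.0f,  1.0f);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(sMax, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(sMax, tMax); glVertex2f( 1.0f,  1.0f);
glEnd();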
enum Bool { True, False, FileNotFound };
And what about using the non-power-of-two ARB extension?
=================* No Brain - No Pain *=================
Hi. AFAIK, if you have ARB_texture_non_power_of_two, the TEXTURE_2D target automatically accepts NPOT values, so if it's not working it's probably safe to assume your card (or current drivers) doesn't support it.
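A minimal sketch of checking for that extension at startup; the plain strstr scan is an assumption here, and a stricter check would match whole extension tokens:

#include <cstring>

// Sketch: query the driver's extension string after the GL context exists.
bool HasNPOTSupport()
{
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    return ext != NULL && std::strstr(ext, "GL_ARB_texture_non_power_of_two") != NULL;
}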

I use glCopyTexSubImage2D with the ARB_texture_rectangle extension and it works quite well; the only thing is that the texture coords are now in texel positions rather than clamped between 0 and 1.

Here is my render-texture fallback code path (using tex-rect and copy-sub):

// create the texture
// bind to ARB_texture_rectangle and do a glCopyTexSubImage, so warn about the performance hit..
core::Log( core::Warning( "Performance: NPOT render target, falling back on ARB_texture_rectangle and Framebuffer readback", core::WL_HIGH ) );
apiTarget = GL_TEXTURE_RECTANGLE_ARB;
glGenTextures(1, &apiName);
glBindTexture( apiTarget, apiName );
glTexParameteri( apiTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( apiTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( apiTarget, GL_TEXTURE_WRAP_S, GL_CLAMP );
glTexParameteri( apiTarget, GL_TEXTURE_WRAP_T, GL_CLAMP );
glTexImage2D( apiTarget, 0, GL_RGB16, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL );

// use the texture target
glEnable( apiTarget );
glBindTexture( apiTarget, apiName );
glCopyTexSubImage2D( apiTarget, 0, 0, 0, 0, 0, width, height );
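Since the rectangle target takes texel coordinates, drawing that texture might look like this (a sketch, assuming width and height match the copied region):

// Sketch: draw the rectangle texture; coords run 0..width and 0..height in texels.
glEnable( GL_TEXTURE_RECTANGLE_ARB );
glBindTexture( GL_TEXTURE_RECTANGLE_ARB, apiName );
glBegin( GL_QUADS );
    glTexCoord2f( 0.0f,         (float)height ); glVertex2f( -1.0f,  1.0f );
    glTexCoord2f( 0.0f,         0.0f          ); glVertex2f( -1.0f, -1.0f );
    glTexCoord2f( (float)width, 0.0f          ); glVertex2f(  1.0f, -1.0f );
    glTexCoord2f( (float)width, (float)height ); glVertex2f(  1.0f,  1.0f );
glEnd();
glDisable( GL_TEXTURE_RECTANGLE_ARB );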

You should also know that EXT_framebuffer_object exists on NVIDIA hardware, and will soon be available on ATI hardware too, rendering most of this obsolete.

Cheers
-Danu
"I am a donut! Ask not how many tris/batch, but rather how many batches/frame!" -- Matthias Wloka & Richard Huddy, (GDC, DirectX 9 Performance)

http://www.silvermace.com/ -- My personal website

This topic is closed to new replies.
