
# OpenGL Rendering a texture


Hello!

I am very new to OpenGL and have a basic problem: I want to render two quads, each with its own texture.

First, I have an init function that loads the image, generates the texture ID and uploads the texture data:

[source]
void initTex(TexObj& texObj, const char* imageFilename) // Images are .xpm files
{
    QImage image = QImage(imageFilename);
    texObj.w = image.width();
    texObj.h = image.height();

    QImage glImage;
    glImage = QGLWidget::convertToGLFormat(image);

    char* data = static_cast<char*>( calloc( texObj.w * texObj.h, 4 ) );
    memcpy(data, glImage.bits(), texObj.w * texObj.h * 4);

    texObj.data = data;
    glGenTextures(1, &texObj.texID);
    glBindTexture(GL_TEXTURE_2D, texObj.texID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texObj.w, texObj.h, 0, GL_RGBA, GL_UNSIGNED_BYTE, texObj.data);
}
[/source]


initTex is called twice (once for each of the two textures).

To render the two quads I use this function:

[source]
static void draw(TexObj& tex, float scale)
{
    glEnable(GL_CULL_FACE);
    glEnable( GL_TEXTURE_2D );
    glFrontFace( GL_CCW );

    glActiveTexture( GL_TEXTURE0 );
    glBindTexture(GL_TEXTURE_2D, tex.texID);

    // glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, tex.w, tex.h, 0, GL_RGBA, GL_UNSIGNED_BYTE, tex.data );

    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    glTranslated( 0.0, 0.0, 0.0 );
    glScalef( scale, scale, 1.0f );

    glPolygonMode(GL_FRONT, GL_FILL);
    glBegin(GL_POLYGON);
        glTexCoord2f( 0.0, 0.0 );
        glVertex2d(0.0, 1.0);
        glTexCoord2f( 0.0, -1.0 );
        glVertex2d(0.0, 0.0);
        glTexCoord2f( 1.0, -1.0 );
        glVertex2d(1.0, 0.0);
        glTexCoord2f( 1.0, 0.0 );
        glVertex2d(1.0, 1.0);
    glEnd();

    glDisable( GL_TEXTURE_2D );
    glDisable(GL_CULL_FACE);
}
[/source]


The problem is: when I run this code, the quads are completely white!

The strange thing: when I uncomment the commented-out glTexImage2D line, the quads are textured! It seems as if the texture I upload to VRAM in initTex() disappears, and I have to upload it again every frame.

Does anyone have an idea what could cause this?

Thanks!


1. Move the following lines to the end of initTex (there's no point doing this repeatedly every frame):

[source]
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
[/source]

2. Why are you doing texObj.data = data? Unless it's for a very specific reason (e.g. you need the pixel values on the CPU side as well as the GPU, as you might with a heightmap), it's just wasting memory. You can skip everything involving copying the data from the image (the calloc + memcpy) and instead pass glImage.bits() to glTexImage2D directly.
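
For example, initTex could shrink to something like this (just a sketch; it assumes the QImage only has to stay alive for the glTexImage2D call, and it folds in point 1 for the per-texture filter state):

[source]
void initTex(TexObj& texObj, const char* imageFilename)
{
    QImage image = QImage(imageFilename);
    texObj.w = image.width();
    texObj.h = image.height();

    // convertToGLFormat flips the image vertically and converts it to RGBA for OpenGL
    QImage glImage = QGLWidget::convertToGLFormat(image);

    glGenTextures(1, &texObj.texID);
    glBindTexture(GL_TEXTURE_2D, texObj.texID);

    // per-texture sampling state, set once while the texture is bound
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // upload straight from the QImage -- no calloc/memcpy and no texObj.data needed
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texObj.w, texObj.h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, glImage.bits());
}
[/source]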

3. You might find it useful to surround the scaling and drawing of the quad with glPushMatrix / glPopMatrix (it stops the transforms leaking into other draw calls).

4. glTranslated( 0.0, 0.0, 0.0 );  << this is pointless.
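
For 3 and 4 together, the transform part of draw could look something like this (just a sketch; the quad drawing itself stays as it is):

[source]
glPushMatrix();                  // save the current modelview matrix
glScalef(scale, scale, 1.0f);    // the glTranslated(0,0,0) did nothing, so it's dropped

// ... glBegin(GL_POLYGON) / glTexCoord2f / glVertex2d calls as before ...

glPopMatrix();                   // restore it, so the scale doesn't leak into the next quad
[/source]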

5. Enabling / disabling texturing and face culling constantly is a bit silly. You'd be better off moving those outside so you can do:

[source]
glEnable(GL_TEXTURE_2D);
glEnable(GL_CULL_FACE);
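
// ... draw all of your textured quads here ...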

glDisable(GL_CULL_FACE);
glDisable(GL_TEXTURE_2D);
[/source]

6. glFrontFace( GL_CCW ); This is something you shouldn't need to call every time you draw a single quad. Doing it just once at start-up will do fine (and then keep everything with the same winding order).
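
With 1 and 6 in mind, the state that only needs setting once could live in a small setup function called right after the GL context is created (setupGLState is just a hypothetical name for wherever that lives in your app):

[source]
void setupGLState()
{
    glFrontFace(GL_CCW);                              // winding order is the same everywhere, so set it once
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glPolygonMode(GL_FRONT, GL_FILL);                 // GL_FILL is the default anyway
}
[/source]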

> The problem is: when I run this code, the quads are completely white!

Check the values of tex.texID. Are they the same two values used when generating the texture?
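
For a quick sanity check, you could print the IDs when they're created and verify them in draw; just a debugging fragment (glIsTexture only returns true for a name that has actually been bound in the current context):

[source]
// in initTex, right after glGenTextures / glBindTexture:
printf("created texture id %u\n", texObj.texID);    // needs <cstdio>

// at the top of draw():
if (!glIsTexture(tex.texID))
    printf("texture id %u is not a valid texture in this context!\n", tex.texID);
[/source]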

> The strange thing: when I uncomment the commented-out glTexImage2D line, the quads are textured! It seems as if the texture I upload to VRAM in initTex() disappears, and I have to upload it again every frame.

Which makes me think that you either have an invalid texture object, or that its ID is changing somehow. You are initializing the textures AFTER you create the window, right? (Otherwise the GL context would be invalid and the texture creation would fail.)

So to recap, the rest of your app looks a little like this right?

[source]
TexObj g_tex1;
TexObj g_tex2;

int initGL()
{
    // initialize and create GL window

    // now textures are created.
    initTex(g_tex1, "file1.bmp");
    initTex(g_tex2, "file2.bmp");
}

void drawMethod()
{
    draw(g_tex1, 1.0f);
    draw(g_tex2, 1.0f);
}
[/source]


Thanks RobTheBloke for your detailed help! You pushed me in the right direction :)

The problem was: when I uploaded the images with glTexImage2D, there was no OpenGL context yet. I find it very strange that glGetError() did not report any error, though...

One last question: how is it possible NOT to have a valid context? I mean, after I create a context and "activate" it by calling wglMakeCurrent(), isn't the context active from then on? In other words, I thought that after calling wglMakeCurrent() I could use OpenGL calls in any function I want. Did I miss something?


I think the main case where you'd be unable to use a context is multithreading. An OpenGL context cannot be current on more than one thread at a time, so if you want to call OpenGL functions from different threads, you need to either create more than one context and have them share data (with wglShareLists), or have each thread synchronize and make the context current / not current as needed.
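
A rough sketch of the shared-context route (hypothetical names, error checking omitted, and whether you can reuse the same HDC from another thread depends on how your window/DC is managed):

[source]
// main thread, at startup
HGLRC renderContext = wglCreateContext(hDC);
HGLRC loaderContext = wglCreateContext(hDC);
wglShareLists(renderContext, loaderContext);   // textures created on either context are visible to both
wglMakeCurrent(hDC, renderContext);            // the main thread renders with this one

// worker thread (e.g. background texture loading)
wglMakeCurrent(hDC, loaderContext);            // a context can only be current on one thread at a time
// ... glGenTextures / glTexImage2D ...
glFinish();                                    // make sure the uploads have finished before other threads use them
wglMakeCurrent(NULL, NULL);                    // release the context when done
[/source]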

