Texturing in OpenGL

Started by Merowingian; 22 comments, last by Merowingian 16 years, 7 months ago
Hey, I've got some problems displaying textures on a quadrangle in my program. But first I would like to make sure that I'm getting certain basics right.

First, let's assume I have just called glGenTextures(3, textures), textures being an integer array of size 3. What will happen if I call glTexImage2D right after that? Will the texture data I use in this function then be associated with the first unoccupied texture name, in this case textures[0]? What if I don't use glGenTextures beforehand? Will texture names be generated automatically at each call of glTexImage2D? By which system will they be generated?

Second: I want to use two functions, each of which should draw a single textured object with its own texture. The two functions will be called in the display() function of OpenGL, followed by a flush command, since the two objects belong to the same scene. Let's look at one function: it will call glGenTextures, using a local int array as input. Next, it will call glTexImage2D with a pointer to the texture data, of course. Then the vertex data for the geometry will be specified together with the appropriate texture coordinates. At the end of the function the textures used for this object should be cleared with glDeleteTextures so they won't be created more than once, since the drawing function will be called at every display call.

I have a question here: texturing belongs to the rasterisation stage of the rendering pipeline. I would expect OpenGL not to perform any actual texturing before all vertex data prior to the flush command has been specified, and that won't happen before the second function has been executed as well. But by then the textures for the first object will already have been cleared, won't they?

I know one could specify the textures in the initGL() function, but I would like everything concerning the drawing of a specific object to be in the drawing function for that object, for better structure and overview. This obviously requires creating and clearing textures at every call.

Thanks in advance, cheers Marvin
Before you can call glTexImage, you must bind a valid texture name with glBindTexture. It is not until the first time you bind a previously unused texture name that the corresponding texture object is actually created. You can then initialize said object by calling glTexImage.
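In code, that order looks roughly like this (a minimal sketch; "pixels" is a placeholder for an image loaded elsewhere, not from the original post):

// Generate names, bind one, then upload image data to the bound object.
GLuint textures[3];
unsigned char pixels[256 * 256 * 3] = { 0 };    // placeholder 256x256 RGB image data

glGenTextures(3, textures);                     // reserve three unused texture names
glBindTexture(GL_TEXTURE_2D, textures[0]);      // first bind of a new name creates the texture object
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels); // initializes the currently bound object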
Hey, that's good to know.

Just changed the order of commands accordingly.

Seem to be making real progress now.

But now I'm getting an access violation error with glTexImage2D, even though I'm sure the data array I pass in has the right size and can't be too big.

Here's the command sequence surrounding glTexImage2D:

const unsigned int textWidth = 256;
const unsigned int textHeight = 256;
char* texture;
shelf.getSurface(texture); // stores the shelf's surface texture into texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, textWidth, textHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, texture);
free(texture);

Well guys,

I've made a slight change to it:



const unsigned int textWidth = 256;
const unsigned int textHeight = 256;
const unsigned int bytesPerPix = 3;
const unsigned int textBytes = textWidth * textHeight * bytesPerPix;
char* texture = new char[textBytes];
// shelf.getSurface(texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, textWidth, textHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, texture);
free(texture);

As you can see, I've now commented out the line "shelf.getSurface(texture);".
This version works fine, but the shelf remains blank of course, since there is no texture loaded into texture. I have found out that in C++ you can still index past the end of an array without getting an error, even though you shouldn't be able to:


unsigned int imagePixs = width * height;
unsigned int imageBytes = imagePixs * bytesPerPix;
char* image = new char[imageBytes];
cout << image[imageBytes + 5];

For some strange reason, this sequence doesn't give me an error, and that's exactly the problem. The function shelf.getSurface is supposed to write the shelf's char* into the char* texture (call by reference).
And for some reason, glTexImage2D seems to think texture is too big. But after the call to shelf.getSurface, texture should be an array of exactly the proper size. I've explicitly made sure of that with code like the lines above, calculating imageBytes and creating an array of size imageBytes. But since you cannot rely on C++ to enforce that restricted size, it seems like it could practically have any size bigger than expected, no matter what you put into "new char[imageBytes]".
I really don't get it. What should I do?
Well I can't say for sure what the problem is... but I'm not sure what the method getSurface() does exactly. In the first example, you created the texture with just:

char *texture;

and then proceeded to call:

shelf.getSurface(texture)

Now what I'm wondering is whether or not the getSurface() method actually wants the pointer you pass in to already be pointing to valid memory, because if you don't call:

texture = new char[...]

then it doesn't point to anything, and the only way the getSurface function could do anything at all useful would be if its declaration was:

void getSurface(char *& ReferenceToPointer);

Otherwise the uninitialized pointer is useless to the function.
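To illustrate the difference, here is a small self-contained sketch (getSurfaceByValue, getSurfaceByRef and internalSurface are made-up names, not the actual Shelf code):

static char internalSurface[256 * 256 * 3];      // placeholder buffer

// The pointer is copied; assigning to dest only changes the local copy.
void getSurfaceByValue(char* dest)  { dest = internalSurface; }

// The caller's pointer itself is passed by reference and gets updated.
void getSurfaceByRef(char*& dest)   { dest = internalSurface; }

int main() {
    char* texture = 0;
    getSurfaceByValue(texture);   // texture is still 0 afterwards
    getSurfaceByRef(texture);     // texture now points at internalSurface
}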
OK, again:

I'm using an object of the class Shelf. The class Shelf is supposed to describe all important properties of a shelf in an abstract way, independent of the rendering process: dimensions, position, thickness etc., and of course the surface, represented by an array of characters holding the RGB values of each rectangle on the surface. In other words, the surface of the shelf is what the rendering pipeline will use as the shelf's texture.

So shelf being an object of the Shelf class, the function call getSurface(char* &dest) will take the "texture" argument "call by reference" and put the surface's pointer value into it, so after calling getSurface you should be able to use the shelf's surface via texture.

Now as I was saying, I made sure that the texture array has a fixed and unchangeable size, so as not to overstep the allowed number of characters.

But for some strange reason, after applying getSurface to texture and passing the new texture on to glTexImage2D as the required texture data, I get an access violation error, as if to say that "surface" (which equals texture) is too big.

Well, I don't get how texture could be "too big", since I specifically made sure it isn't.

You guys might wanna know how surface gets its content.
Well here's the code setting the shelf in the main.cpp file:


void setShelf() {

char* ppmFn = "holz.ppm";
PPMReader ppm = PPMReader(ppmFn);
char* surface;
ppm.getImage(surface);
shelf = Shelf(1.5, 1.5, 1.5, surface);

}

PPMReader's constructor loads the PPM file specified by ppmFn into ppm's attributes height, width, bitdepth and image (the actual image data, i.e. a char*).
The last three commands place the ppm's image pointer into "surface", very much like getSurface would do, and then pass surface on to Shelf's constructor. Shelf's constructor sets the x-, y- and z-position (the first three arguments) and stores the surface pointer passed to it.

I'll give you the functions to make sure you understand:


PPMReader::PPMReader(char * filename) {
loadPPMFile(filename);
}

void PPMReader::loadPPMFile(char * filename) {

    const int readBufferSize = 20;
    ifstream ifs;
    char buffer[readBufferSize];
    ifs.open(filename, ifstream::binary);

    ifs.getline(buffer, readBufferSize);        // magic number line, e.g. "P6"
    ifs.getline(buffer, readBufferSize, ' ');   // width, terminated by the space
    width = atoi(buffer);
    ifs.getline(buffer, readBufferSize);        // height, rest of the line
    height = atoi(buffer);
    ifs.getline(buffer, readBufferSize);        // maximum colour value
    unsigned int bitDepth = atoi(buffer);
    bytesPerPix = (bitDepth == 255) ? 3 : 6;    // 3 bytes per pixel for maxval 255, else 6

    unsigned int imagePixs = width * height;
    unsigned int imageBytes = imagePixs * bytesPerPix;
    image = new char[imageBytes];
    ifs.read(image, imageBytes);                // raw pixel data
    ifs.close();

}


Shelf::Shelf(float x, float y, float z, char* surface) {
setPosition(x, y, z);
this->surface = surface;
}


The getSurface of Shelf is implemented as inline void in the header file:

inline void getSurface(const char* &dest) { dest = surface; }


...as is the getImage of PPMReader:

inline void getImage(char* &dest) { dest = image; };



Got it all? My problem is still the same. I can assure you that my PPM file's size IS 256x256 and that the ifstream's failbit in PPMReader is 0 at the end of loadPPMFile.
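For what it's worth, one extra check right after the read in loadPPMFile (a sketch using the members shown above) would be to compare the number of bytes actually read against imageBytes:

// inside loadPPMFile, immediately after ifs.read(image, imageBytes);
if (!ifs || ifs.gcount() != (streamsize)imageBytes)
    cerr << "PPM read short: got " << ifs.gcount()
         << " of " << imageBytes << " bytes" << endl;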

So somewhere in my main.cpp I have the following code:


const unsigned int textWidth = 256;
const unsigned int textHeight = 256;
const unsigned int bytesPerPix = 3;
const unsigned int textBytes = textWidth * textHeight * bytesPerPix;
char* texture = new char[textBytes];
// shelf.getSurface(texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, textWidth, textHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, texture);
free(texture);


Which works fine, but without a texture, unless I uncomment the line

shelf.getSurface(texture);

As I said, this leads to an access violation error which I do not understand. How does texture get bigger than allowed despite my efforts to declare its size?
Well this is more or less a shot in the dark but... have you tried calling:

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

If, for some odd reason, this isn't what the system is defaulting to, you might have a problem... then again... that doesn't explain why the empty byte array works fine. So that, most likely, is not the problem. Sorry :/

Also, have you bound a texture object before calling glTexImage2D? That shouldn't affect it either, as far as I know... worth a try I suppose.

Also, is there any particular reason you prefer
getSurface(surf) over surf = getSurface()
and
PPMReader ppm = PPMReader(ppmFn) over PPMReader ppm(ppmFn); ?

Just a matter of personal preference, I suppose. :)



Anyway, other than a problem with the image (which you said there wasn't) or a problem with the loading of the image (I'm not familiar with PPM, so I can't comment on the correctness of that code) then I'm not sure what the problem is, I'm sorry :(
Quote:Original post by Merowingian
So somewhere in my main.cpp I have the following code:

const unsigned int textWidth = 256;
const unsigned int textHeight = 256;
const unsigned int bytesPerPix = 3;
const unsigned int textBytes = textWidth * textHeight * bytesPerPix;
char* texture = new char[textBytes];
// shelf.getSurface(texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, textWidth, textHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, texture);
free(texture);


I think that the above code, except for glTexImage2D(), should be done in your PPMReader::loadPPMFile(). I mean, you'd better get the width/height of the texture and the pointer to the texture data from your PPMReader object. For example, your main() would be:
GLint width = ppm.getWidth();
GLint height = ppm.getHeight();
const GLvoid* texture = (const GLvoid*)ppm.getImage();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, texture);

You don't need to re-allocate memory for the texture data in main() again. If your Shelf class holds these properties, then use the Shelf object instead of the PPMReader object, for instance:

const GLvoid* texture = (const GLvoid*)shelf.getSurface();
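A minimal version of such an accessor might look like this (hypothetical; the Shelf code posted earlier uses a reference out-parameter instead):

class Shelf {
public:
    // Return the surface pointer directly instead of filling in a reference parameter.
    const char* getSurface() const { return surface; }
private:
    char* surface;
};

// usage (no cast strictly needed, any object pointer converts to const GLvoid*):
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
//              GL_RGB, GL_UNSIGNED_BYTE, shelf.getSurface());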

[EDIT]
As Naxos pointed out, make sure your PPMReader is working properly, too. Here is my PPM reader/writer class. It reads both P3 (ASCII) and P6 (binary) formats. Check it out.

imagePpm.zip
[/EDIT]

[Edited by - songho on August 7, 2007 12:57:00 AM]

