eysquared

OpenGL Texture Not Rendering


OK, so what I am trying to do is read tiles from a tileset and then load each one into an individual texture. The problem is that when I render these tiles, everything is just grey. This happens even when my tileset consists of only one tile. The code I am using to build the individual tiles from a larger tileset is this:
void WorldMap::loadTileset(const std::string & path)
{
	AUX_RGBImageRec *TextureImage[1];
	if(TextureImage[0]=loadBMP(path.c_str()))
	{
		// get tileset information
		m_setWidth = TextureImage[0]->sizeX;
		m_setHeight= TextureImage[0]->sizeY;
		m_numTileW = m_setWidth/m_tileWidth;
		m_numTileH = m_setHeight/m_tileHeight;
		m_totTiles = m_numTileW * m_numTileH;
        // allocate space for textures
		m_texture = new GLuint[m_totTiles];
		
		// generate textures
		glGenTextures(m_totTiles, &m_texture[0]);

		// offset in pixels between adjacent tiles, assuming tile w and h are the same
		int offset = m_tileWidth;
		
		int curText = 0;
		for(int h=0; h<m_numTileH; h++)
		{
			for(int w=0; w<m_numTileW; w++)
			{
				glPixelStoref(GL_UNPACK_ROW_LENGTH, m_setWidth);
				glPixelStoref(GL_UNPACK_SKIP_PIXELS, w*offset);
				glPixelStoref(GL_UNPACK_SKIP_ROWS, h*offset);
				// create our textures
				glBindTexture(GL_TEXTURE_2D, m_texture[0]);
				glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
				glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
				glTexImage2D(GL_TEXTURE_2D, 0, 3, m_tileWidth, m_tileHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[0]->data);
			}
		}
	}
	if (TextureImage[0])							// If Texture Exists
	{
		if (TextureImage[0]->data)					// If Texture Image Exists
			free(TextureImage[0]->data);				// Free The Texture Image Memory
		free(TextureImage[0]);						// Free The Image Structure
	}
}


The loadBMP function is taken straight out of NeHe. I am parsing an XML file to get the map info and tile size. My rendering code is:

void WorldMap::render()
{
	for(int w=0; w<m_width; w++)
	{
		for(int h=0; h<m_height; h++)
		{
			int tile = m_mapData.at(w).at(h);
			glBindTexture(GL_TEXTURE_2D, m_texture[0]);
			glBegin(GL_QUADS);
				glTexCoord3f(0.0f, 0.0f, 0.0f); glVertex3f(float(w), float(h), 0.0f);
				glTexCoord3f(1.0f, 0.0f, 0.0f); glVertex3f(float(w + 1), float(h), 0.0f);
				glTexCoord3f(1.0f, 1.0f, 0.0f); glVertex3f(float(w + 1), float(h + 1), 0.0f);
				glTexCoord3f(0.0f, 1.0f, 0.0f); glVertex3f(float(w), float(h + 1), 0.0f);
			glEnd();
		}
	}
}


In my init method I do enable OpenGL texturing. Any ideas on what I am doing wrong? I was able to draw single textures fine, but I really want to be able to just load a tileset. Thanks in advance.

It looks like you are generating texture IDs properly, but you are only ever binding a single texture and uploading to it multiple times. Here's the culprit:
glBindTexture(GL_TEXTURE_2D, m_texture[0]);


For each tile, you bind the same texture ID and overwrite it with a different portion of the texture map.

Something like:
glBindTexture(GL_TEXTURE_2D, m_texture[(h * m_numTileW) + w]);


would probably work better. The same goes for your rendering function, too.
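For illustration, the index computation suggested above could be pulled out into a small helper (a sketch; the helper name is hypothetical, the member names are from the original post):

```cpp
// Hypothetical helper: row-major index of the tile at grid position
// (h, w) in a tileset laid out with numTileW tiles per row.
inline int tileTextureIndex(int h, int w, int numTileW)
{
    return h * numTileW + w;
}

// Inside the nested loading loops this would be used as:
//   glBindTexture(GL_TEXTURE_2D, m_texture[tileTextureIndex(h, w, m_numTileW)]);
```

With that, each pass through the loop binds a distinct texture ID instead of overwriting `m_texture[0]` every time.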

Edit: I noticed in your rendering function, you already have a tile ID:
int tile = m_mapData.at(w).at(h);
glBindTexture(GL_TEXTURE_2D, m_texture[0]);


Shouldn't you be using m_texture[tile] instead? :)

Yeah, sorry, I should have explained. Those are there on purpose, because I was testing it with just one texture and trying to eliminate all possible options. I should have changed the code before I posted, but I forgot. I was doing it the way you mentioned.

If you really are binding to multiple texture IDs properly, then I don't see any reason why it shouldn't work... although I'm not too sure about the idea of using glPixelStore to unpack portions of a large image into a texture.

Personally I would cut out the section of the bitmap needed into memory first, then upload that section with a single glTexImage2D call.
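As a sketch of that approach (the function name and parameters are hypothetical; it assumes a tightly packed 24-bit RGB image like the one loadBMP returns):

```cpp
#include <cstring>
#include <vector>

// Hypothetical sketch: copy one tileW x tileH block out of an RGB tileset
// image into its own contiguous buffer. The result could then be uploaded
// with a single glTexImage2D call, with no glPixelStore trickery needed.
std::vector<unsigned char> extractTile(const unsigned char *set,
                                       int setWidth,          // tileset width in pixels
                                       int tileX, int tileY,  // tile coords, in tiles
                                       int tileW, int tileH)  // tile size in pixels
{
    const int bpp = 3;  // GL_RGB, one byte per channel
    std::vector<unsigned char> tile(tileW * tileH * bpp);
    for (int row = 0; row < tileH; ++row)
    {
        // start of this tile's row inside the big image
        const unsigned char *src = set
            + ((tileY * tileH + row) * setWidth + tileX * tileW) * bpp;
        std::memcpy(&tile[row * tileW * bpp], src, tileW * bpp);
    }
    return tile;
}
```

One caveat if you go this route: NeHe's BMP loader hands back the image in whatever row order the file uses, so the tile row coordinates may end up flipped vertically relative to what you expect.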

One more thing: are your tile sections power-of-two sized (i.e. 32, 64, 128, ...)? If not, that might be one reason why it's not working. It might also be a good idea to call glGetError around some functions and see where OpenGL is giving you trouble.

Quote:
Original post by bpoint
Personally I would cut out the section of the bitmap needed into memory first, then upload that section with a single glTexImage2D call.


How would I go about doing this? I am pretty new to OpenGL.

Quote:
Original post by bpoint
One more thing: are your tile sections power-of-two sized (ie: 32, 64, 128, ...)? If not, that might be one reason why it's not working. It might be a good idea to call glGetError around some functions and see where OpenGL is giving you trouble.


Yeah, they are 64 x 64 tiles. I will try glGetError and see what I come up with. Thanks.
