NukeCorr

SDL & OpenGL slow FPS problem


I decided to use SDL with OpenGL, since I've never done it before, and I'm having a serious lag problem. I have a tilemap, and when I draw one tile to it, the FPS is at its normal rate: 72-73 (capped at 72). But when I draw a second tile, the FPS drops to 37; after a third tile it drops to about ~20, and so on. With more than about 10 tiles, the FPS is 0. Tiles are 24*24 in size.
#define SCREEN_WIDTH 800
#define SCREEN_HEIGHT 600
#define SCREEN_BITS 32

This is my video init part:
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

SDL_SetVideoMode(SCREEN_WIDTH, SCREEN_HEIGHT, SCREEN_BITS, SDL_OPENGL);

glEnable(GL_TEXTURE_2D);
glClearColor(1.0f, 1.0f, 0.0f, 0.0f);
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);

// Top-left origin: x grows right, y grows down, matching 2D tile coordinates
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f, SCREEN_WIDTH, SCREEN_HEIGHT, 0.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

Here's the load image part:
GLuint CCore::LoadImage(const char *file)
{
	SDL_Surface *surface;   // This surface will tell us the details of the image
	GLenum texture_format;
	GLint  nOfColors;
	GLuint tex;

	// Load the file that was passed in
	if ( !(surface = SDL_LoadBMP(file)) ) {
		printf("SDL could not load %s: %s\n", file, SDL_GetError());
		return 0;   // 0 is never a valid generated texture name
	}

	// Warn if the image's width is not a power of 2
	if ( (surface->w & (surface->w - 1)) != 0 ) {
		printf("warning: %s's width is not a power of 2\n", file);
	}

	// Also warn if the height is not a power of 2
	if ( (surface->h & (surface->h - 1)) != 0 ) {
		printf("warning: %s's height is not a power of 2\n", file);
	}

	// Get the number of channels in the SDL surface
	nOfColors = surface->format->BytesPerPixel;
	if (nOfColors == 4) {          // contains an alpha channel
		if (surface->format->Rmask == 0x000000ff)
			texture_format = GL_RGBA;
		else
			texture_format = GL_BGRA;
	} else if (nOfColors == 3) {   // no alpha channel
		if (surface->format->Rmask == 0x000000ff)
			texture_format = GL_RGB;
		else
			texture_format = GL_BGR;
	} else {
		// Bail out instead of uploading with texture_format uninitialized
		printf("error: %s is not truecolor, refusing to upload it\n", file);
		SDL_FreeSurface(surface);
		return 0;
	}

	// Have OpenGL generate a texture object handle for us
	glGenTextures( 1, &tex );

	// Bind the texture object
	glBindTexture( GL_TEXTURE_2D, tex );

	// Set the texture's stretching properties
	glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
	glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );

	// Upload the image data the SDL_Surface gives us; use a format enum
	// as the internal format rather than the raw channel count
	GLint internal_format = (nOfColors == 4) ? GL_RGBA : GL_RGB;
	glTexImage2D( GL_TEXTURE_2D, 0, internal_format, surface->w, surface->h, 0,
	              texture_format, GL_UNSIGNED_BYTE, surface->pixels );

	// Free the SDL_Surface now that OpenGL has its own copy of the pixels
	SDL_FreeSurface( surface );

	return tex;
}

And here's the texture blit function:
void CCore::Blit(int x, int y, GLuint texture, int frameW, int frameH, int frame)
{
	glBindTexture(GL_TEXTURE_2D, texture);

	// Note: frame is currently unused, so this always draws the whole texture
	glBegin(GL_QUADS);
		// Top-left vertex (corner)
		glTexCoord2i( 0, 0 );
		glVertex3f( x, y, 0.0f );

		// Bottom-left vertex (corner)
		glTexCoord2i( 0, 1 );
		glVertex3f( x, y + frameH, 0.0f );

		// Bottom-right vertex (corner)
		glTexCoord2i( 1, 1 );
		glVertex3f( x + frameW, y + frameH, 0.0f );

		// Top-right vertex (corner)
		glTexCoord2i( 1, 0 );
		glVertex3f( x + frameW, y, 0.0f );
	glEnd();
}

So can someone please check whether there's something wrong in here that drops the FPS so much?

Try using textures whose dimensions are powers of two, for example 32x32. It could be that the hardware is being slowed down by converting them on the fly.

Also note that setting up the bits per pixel is accomplished with the SDL_GL_SetAttribute function, not the bpp parameter to SDL_SetVideoMode.
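
For example, here's a minimal sketch of what that attribute setup could look like (the exact channel sizes are an assumption; request whatever your target hardware actually supports):

// Request the framebuffer format through SDL's GL attributes, before
// calling SDL_SetVideoMode (SDL 1.2 API)
SDL_GL_SetAttribute(SDL_GL_RED_SIZE,     8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE,   8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE,    8);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE,  16);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

// The bpp argument can then be 0, which means "use the current display depth"
SDL_SetVideoMode(SCREEN_WIDTH, SCREEN_HEIGHT, 0, SDL_OPENGL);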

So that was the problem!
I changed the tile size to 32*32 and it worked normally.

But is there a way to use 24*24 tiles? Because all my textures are scaled to 24*24.

Quote:
Original post by NukeCorr
So that was the problem!
I changed the tile size to 32*32 and it worked normally.

But is there a way to use 24*24 tiles? Because all my textures are scaled to 24*24.


I believe that if you use gluBuild2DMipmaps, it will automatically scale the image to the nearest power of two. There are also a lot of code fragments floating around for doing this manually.
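
For instance, a rough sketch of how that could replace the glTexImage2D call in the LoadImage function above (assuming GLU is available and linked; the min filter has to be a mipmap filter for the extra levels to actually be used):

#include <GL/glu.h>

// gluBuild2DMipmaps rescales input whose dimensions are not powers of two
// and generates the full mipmap chain in one call
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
gluBuild2DMipmaps(GL_TEXTURE_2D, internal_format, surface->w, surface->h,
                  texture_format, GL_UNSIGNED_BYTE, surface->pixels);

(internal_format, texture_format and surface here are the variables from the LoadImage function earlier in the thread.)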

The non-power-of-two texture size issue is down to your graphics card (or driver). Support for arbitrary texture sizes (the GL_ARB_texture_non_power_of_two extension) only became a core feature in OpenGL 2.0; if your card doesn't support it, you're out of luck.

If you want to maintain support for older graphics cards, you have two options:

1). Change your textures to power-of-two sizes.

2). Check for non-power-of-two support at runtime, as sketched below. If there is no support, scale the textures up to the next highest power of two. There's an old post in the SDL mailing list showing exactly how to do the scaling part: http://www.devolution.com/pipermail/sdl/2002-September/049078.html
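
Here's a rough sketch of that runtime check and the power-of-two rounding (HasNPOTSupport and NextPowerOfTwo are made-up helper names; the extension-string search is the usual pre-GL-3 idiom, though a plain strstr can in principle match a longer extension name that merely contains this one):

#include <string.h>

// True if the driver advertises non-power-of-two texture support.
// Must be called after the GL context exists (i.e. after SDL_SetVideoMode).
bool HasNPOTSupport()
{
	const char *exts = (const char *)glGetString(GL_EXTENSIONS);
	return exts && strstr(exts, "GL_ARB_texture_non_power_of_two") != NULL;
}

// Round a dimension up to the next power of two, e.g. 24 -> 32
int NextPowerOfTwo(int n)
{
	int p = 1;
	while (p < n)
		p <<= 1;
	return p;
}

If HasNPOTSupport() returns false, you'd copy each 24*24 surface into a NextPowerOfTwo(24) = 32 sized surface before uploading, which is the part the linked mailing-list post walks through.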
