Auriya

[SDL] Message Boxes + Bitmap Fonts, too much CPU usage


Hello everyone, I've been working on a game lately, and as I progressed little by little I noticed something when I finally compiled hours of work: the message box module I made for the game uses too much CPU (over 90% if the message box is big). Of course, the way I did it probably isn't the smartest, but I couldn't think of any other way. Bitmap fonts have the same problem: I gave them a try, and they work completely, just like the message boxes, but again they cost too much CPU (around 90%).

It's probably because I re-calculate and blit the message box plus the bitmap text on every pass of the game loop, with only one SDL_Delay(1) to give the CPU a little rest. The reason I re-blit everything every time is that I have a moving/animated background, so I have to re-blit them after I advance the background to its next frame... right? Does anyone have an idea how to make the approach I'm using now better, or more CPU friendly? I have a 2.1 GHz Core 2 Duo MacBook with 1 GB of RAM, so a slow/weak CPU isn't the issue here.

I have uploaded both the message box project and the bitmap font project, since looking at the source will be easier than reading snippets of code. If anyone can find a solution to this problem, I would be very grateful! Google didn't turn out to be my best friend in this case, sadly. Normally I code everything in Xcode, but this time I tried to make things Windows friendly, so all I included are main.cpp and the other necessary files.

Message boxes project: http://www.megaupload.com/?d=O0DQ0IEK
Bitmap Font project: http://www.megaupload.com/?d=Q4M26AXK

[Edited by - Auriya on December 30, 2007 10:08:26 AM]
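To illustrate, the loop in both projects boils down to something like this (a simplified sketch; the blit_* functions are stand-ins for the actual drawing code in the uploaded projects):

#include <SDL/SDL.h>

// Stand-ins for the real drawing code.
void blit_background_frame( SDL_Surface *screen ) { /* advance + blit the animation */ }
void blit_message_box( SDL_Surface *screen )      { /* rebuild + blit the box */ }
void blit_bitmap_text( SDL_Surface *screen )      { /* blit the bitmap font text */ }

void game_loop( SDL_Surface *screen ) {
    bool running = true;
    while ( running ) {
        SDL_Event event;
        while ( SDL_PollEvent( &event ) ) {
            if ( event.type == SDL_QUIT ) running = false;
        }

        // Everything is redrawn from scratch each pass, because the
        // background animates underneath the message box and the text.
        blit_background_frame( screen );
        blit_message_box( screen );
        blit_bitmap_text( screen );
        SDL_Flip( screen );

        SDL_Delay( 1 ); // the only rest the CPU gets
    }
}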

I'm not sure you're aware of this, but most of the time SDL doesn't use hardware acceleration to perform blitting operations. If you want it to, open an OpenGL context and use the equivalent OpenGL drawing calls in its place, or look into glSDL to help with the conversion. If that doesn't work, look into the experimental, bleeding-edge SDL 1.3.
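For example, asking SDL for an OpenGL context is mostly a flag change at startup (a minimal sketch, assuming SDL 1.2; the 640x480x32 mode is an arbitrary choice):

#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

int main( int argc, char *argv[] ) {
    if ( SDL_Init( SDL_INIT_VIDEO ) < 0 ) return 1;

    // Request a double-buffered OpenGL context instead of a plain
    // software surface; drawing then runs on the GPU.
    SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
    SDL_Surface *screen = SDL_SetVideoMode( 640, 480, 32, SDL_OPENGL );
    if ( screen == NULL ) { SDL_Quit(); return 1; }

    glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
    glClear( GL_COLOR_BUFFER_BIT );
    SDL_GL_SwapBuffers(); // the OpenGL equivalent of SDL_Flip

    SDL_Delay( 2000 );
    SDL_Quit();
    return 0;
}

From there each image becomes a textured quad, which is exactly the conversion glSDL automates for you.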

Ok, so from looking over your code, it seems you are blitting the raw loaded image each time you blit. This forces SDL to convert the image to the screen format on the fly, every single frame. What you can do instead is load the image into a temporary SDL_Surface, then convert it once into an optimized copy with SDL_DisplayFormat(), so something along the lines of:

#include <string>
#include <SDL/SDL.h>

SDL_Surface *load( std::string filename ) {
    // Load the raw bitmap; bail out if the file could not be read.
    SDL_Surface *Raw = SDL_LoadBMP( filename.c_str() );
    if ( Raw == NULL )
        return NULL;

    // Convert once to the screen's pixel format, so later blits are
    // straight copies instead of per-frame conversions.
    SDL_Surface *Optimized = SDL_DisplayFormat( Raw );

    SDL_FreeSurface( Raw );
    return Optimized;
}


That function returns an optimized copy of the image, which will blit a lot faster than having SDL convert the image on the fly every frame.
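If your images have per-pixel alpha, use SDL_DisplayFormatAlpha() instead, so the alpha channel survives the conversion. Typical usage of the function above ("box.bmp" and screen are placeholders for your own file and display surface):

// Convert once at load time, then reuse the surface every frame.
SDL_Surface *boxImage = load( "box.bmp" );

// In the game loop: this blit is now a straight copy, no conversion.
SDL_BlitSurface( boxImage, NULL, screen, NULL );

// When the image is no longer needed:
SDL_FreeSurface( boxImage );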


