How can I draw a tilemap without my RAM usage or CPU usage going way up?

2 comments, last by CyanPrime 13 years ago
Okay, so I'm trying to draw a tilemap on the screen with SDL, and my FPS is really bad because of it. I've also tried doing it a different way in SFML, but that way makes my RAM usage go way up.

The way I'm trying to draw it in SDL is by having an SDL_Surface *tilemap that I chop up and render like this:


void apply_surface( int sourceX, int sourceY, int sourceW, int sourceH, int x, int y, SDL_Surface* source, SDL_Surface* destination )
{
    // temporary rectangles for the destination offset and the source clip
    SDL_Rect offset;
    SDL_Rect sourceRect;

    // destination position on the screen
    offset.x = x;
    offset.y = y;

    // region of the tilemap to copy from
    sourceRect.x = sourceX;
    sourceRect.y = sourceY;
    sourceRect.w = sourceW;
    sourceRect.h = sourceH;

    // blit the source region onto the destination
    SDL_BlitSurface( source, &sourceRect, destination, &offset );
}


void render()
{
    for (int x = 0; x < lv.width; x++) {
        for (int y = 0; y < lv.height; y++) {
            // only draw tiles that fall inside the 640x480 screen
            if (x * lv.tileSize < 640 && y * lv.tileSize < 480) {
                int tileXCoord = 0;
                int tileYCoord = 0;
                int tileSheetWidth = tilemap->w / lv.tileSize;

                // locate the tile's column and row on the tile sheet
                if (lv.tile[x][y] != 0)
                {
                    tileXCoord = lv.tile[x][y] % tileSheetWidth;
                    tileYCoord = lv.tile[x][y] / tileSheetWidth;
                }

                apply_surface( tileXCoord * lv.tileSize, tileYCoord * lv.tileSize, lv.tileSize, lv.tileSize, x * lv.tileSize, y * lv.tileSize, tilemap, screen );
            }
        }
    }
}

This approach sends my CPU usage way up and drops my FPS.

In SFML I tried drawing the whole level to an Image and then displaying that image (with cropping), but that made my RAM usage climb to around 50,000 KB.

So what I'm asking is: how can I draw a tilemap (in SDL) without making my CPU or RAM usage go way up?

I don't see anything obviously wrong. The looping looks a little wasteful, but unless lv.width and lv.height are huge it should not make a big difference. An easy mistake to make in SDL is not calling SDL_DisplayFormat (or SDL_DisplayFormatAlpha) to convert a surface to the display format; if a surface is not in the display format, the conversion has to happen every time SDL_BlitSurface is called. Calling SDL_SetVideoMode with a bitsperpixel argument that differs from what the display actually uses can also cause a big slowdown.
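
For example, a minimal sketch of converting the tile sheet once at load time (SDL 1.2; the loadTilemap helper and the BMP loading are placeholders for however the sheet is actually loaded, and SDL_DisplayFormatAlpha is the one to use if the tiles need per-pixel alpha):

#include <SDL.h>

// load the tile sheet and convert it to the display format once;
// call this after SDL_SetVideoMode, since the conversion needs the
// display format to exist
SDL_Surface* loadTilemap( const char* path )
{
    SDL_Surface* loaded = SDL_LoadBMP( path );
    if (loaded == NULL)
        return NULL;

    // SDL_DisplayFormat returns a *new* surface in the screen's pixel
    // format, so every later SDL_BlitSurface skips the per-blit conversion
    SDL_Surface* converted = SDL_DisplayFormat( loaded );
    SDL_FreeSurface( loaded );
    return converted; // NULL if the conversion failed
}
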
Check if:
- both surfaces are hardware accelerated (see the sketch after this list)
- both surfaces have an identical format (everything except the surface size)
- also, making the primary surface double buffered might or might not help
- count how many times apply_surface() is called (the code is messy, so check that you render as many tiles as you intended and that none are rendered outside the screen, although clipping should take care of that anyway...)
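
A minimal sketch of the first and third checks, assuming SDL 1.2 and the global screen and tilemap surfaces from the code above:

#include <stdio.h>
#include <SDL.h>

extern SDL_Surface* screen;  // display surface used in render() above
extern SDL_Surface* tilemap; // tile sheet used in render() above

void checkVideoSetup()
{
    // request a hardware, double-buffered screen surface; passing 0 for
    // bitsperpixel means "use the display's current depth"
    screen = SDL_SetVideoMode( 640, 480, 0, SDL_HWSURFACE | SDL_DOUBLEBUF );

    // SDL may silently fall back to software surfaces, so check the flags
    if ((screen->flags & SDL_HWSURFACE) == 0)
        printf( "screen ended up as a software surface\n" );
    if ((tilemap->flags & SDL_HWSURFACE) == 0)
        printf( "tilemap ended up as a software surface\n" );
}

With SDL_DOUBLEBUF set, each frame is then presented with SDL_Flip( screen ) rather than SDL_UpdateRect().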


Adding hardware rendering and calling SDL_DisplayFormat on the tilemap made a huge difference! Thank you ^_^
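
One detail worth noting for anyone reading this later: SDL_DisplayFormat() returns a new converted surface rather than converting its argument in place, so the result has to be kept, along these lines:

SDL_Surface* converted = SDL_DisplayFormat( tilemap );
if (converted != NULL)
{
    SDL_FreeSurface( tilemap ); // release the unconverted original
    tilemap = converted;
}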

This topic is closed to new replies.
