How can I draw a tilemap without my RAM usage or CPU usage going way up?

This topic is 2754 days old, which is more than the 365 day threshold we allow for new replies.

Recommended Posts

Okay, so I'm trying to draw a tilemap on the screen with SDL, and my FPS is really bad because of it. I've also tried doing it a different way in SFML, but that way makes my RAM usage go way up.

The way I'm trying to draw it in SDL is by having an SDL_Surface *tilemap that I chop up and render like this:

void apply_surface( int sourceX, int sourceY, int sourceW, int sourceH, int x, int y, SDL_Surface* source, SDL_Surface* destination )
{
    // temporary rectangles for the destination offset and the source clip
    SDL_Rect offset;
    SDL_Rect sourceRect;

    // destination position on the target surface
    offset.x = x;
    offset.y = y;

    // region of the tile sheet to copy
    sourceRect.x = sourceX;
    sourceRect.y = sourceY;
    sourceRect.w = sourceW;
    sourceRect.h = sourceH;

    // blit the clipped region of the source onto the destination
    SDL_BlitSurface( source, &sourceRect, destination, &offset );
}

void render()
{
    for( int x = 0; x < lv.width; x++ ){
        for( int y = 0; y < lv.height; y++ ){
            // only draw tiles that fall inside the 640x480 screen
            if( x * lv.tileSize < 640 && y * lv.tileSize < 480 ){
                int tileXCoord = 0;
                int tileYCoord = 0;
                int tileSheetWidth = tilemap->w / lv.tileSize;

                // both coordinates belong inside the if; without braces
                // only the first assignment is guarded
                if( lv.tile[x][y] != 0 ){
                    tileXCoord = lv.tile[x][y] % tileSheetWidth;
                    tileYCoord = lv.tile[x][y] / tileSheetWidth;
                }

                apply_surface( tileXCoord * lv.tileSize, tileYCoord * lv.tileSize, lv.tileSize, lv.tileSize, x * lv.tileSize, y * lv.tileSize, tilemap, screen );
            }
        }
    }
}

This way brings my CPU usage way up and drops my FPS.

In SFML I tried drawing the level to an Image and then displaying that image (with cropping), but that made my RAM usage go up to 50,000 KB.

So what I am asking is: how can I draw a tilemap (in SDL) without making my CPU or RAM usage go way up?
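For reference, the source-rectangle math the render loop relies on can be isolated into a small pure function. This is a minimal sketch (the `sheet_coord` name and struct are made up for illustration, not from the code above): a 1-D tile ID is mapped to a column/row in the tile sheet, then scaled by the tile size to get pixel coordinates.

```cpp
#include <cassert>

// pixel position of a tile inside the tile sheet
struct SheetCoord { int x; int y; };

SheetCoord sheet_coord( int tileId, int sheetWidthInTiles, int tileSize )
{
    SheetCoord c;
    c.x = ( tileId % sheetWidthInTiles ) * tileSize;  // column -> pixels
    c.y = ( tileId / sheetWidthInTiles ) * tileSize;  // row    -> pixels
    return c;
}
```

For example, with a sheet 8 tiles wide and 32-pixel tiles, tile ID 10 sits at column 2, row 1, i.e. pixel (64, 32).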

I don't see anything obviously wrong. The looping looks a little wasteful, but unless lv.width and lv.height are huge it should not make a big difference. An easy mistake to make in SDL is not calling SDL_DisplayFormat (or SDL_DisplayFormatAlpha) to convert the surface to the display format. If the surface is not in the display format, the conversion has to be made every time SDL_BlitSurface is called. Calling SDL_SetVideoMode with a bitsperpixel argument different from what the display uses can also cause a big slowdown.

Check if:
- both surfaces are hardware accelerated
- both surfaces have an identical format (everything except surface size)
- also, making the primary surface double buffered might or might not help
- count how many times apply_surface() is called (the code is messy, so check that you render as many tiles as you wanted and that you don't render them outside of the screen, although clipping should take care of that...)
