SDL_Layer even slower than SDL?

Started by
9 comments, last by MichaelBarth 11 years, 2 months ago

Hello everyone,

I'm currently experimenting with SDL for some basic scrolling shooter. Since the moving background reeeally slowed things down, I went online and found "SDL_Layer". The documentation is sparse, but I thought I'd figured it out - until I tried it. With just the background in one layer and a single object in another, it's just as slow as vanilla SDL. Because of the moving background I can't use dirty rects. Unfortunately, Google was not helpful on this topic, so I presume there is something wrong with my code.

Here is what I tried:


    SDL_Surface *screen;
    if ( SDL_Init(SDL_INIT_AUDIO|SDL_INIT_VIDEO) < 0 ) {
        fprintf(stderr, "Unable to init SDL: %s\n", SDL_GetError());
        exit(1);
    }
    atexit(SDL_Quit);

    // Initialize screen
    screen = SDL_SetVideoMode(WIDTH, HEIGHT, 32, SDL_HWSURFACE | SDL_FULLSCREEN | SDL_DOUBLEBUF);
    if( screen == NULL ) {
        fprintf(stderr, "Unable to init video: %s\n", SDL_GetError());
        exit(1);
    }

    rmask = 0x00ff0000;
    gmask = 0x0000ff00;
    bmask = 0x000000ff;
    amask = 0xff000000;

    display = SDLayer_CreateRGBLayeredDisplay(SDL_HWSURFACE | SDL_FULLSCREEN, SDLAYER_FLIP, LAYERS,
            layer_widths, layer_heights, 32, rmask, gmask, bmask, amask);

    SDLayer_SetColorKey(display, SDL_SRCCOLORKEY, 0); // without this, only the topmost layer will be visible

    bmpBackground = SDL_LoadBMP("Background.bmp");
    if(bmpBackground == NULL) {
        fprintf(stderr, "Couldn't find a graphic: %s\n", SDL_GetError());
        exit(1);
    }

    bmpPlayer = SDL_LoadBMP("Player.bmp");
    if(bmpPlayer == NULL) {
        fprintf(stderr, "Couldn't find a graphic: %s\n", SDL_GetError());
        exit(1);
    }
    SDL_SetColorKey(bmpPlayer, SDL_SRCCOLORKEY, SDL_MapRGB(bmpPlayer->format, 255, 0, 255));

I then enter the main loop, which repeatedly triggers the draw() function:


    SDL_Rect background;
    SDL_Rect dest;

    static int background_y = 0;

    if(++background_y >= HEIGHT) background_y = 0;

    // draw a moving background
    background.x = 0;
    background.y = background_y;
    background.w = WIDTH;
    background.h = HEIGHT;
    SDLayer_Blit(bmpBackground, NULL, display, &background, 0);

    background.x = 0;
    background.y = background_y - HEIGHT;
    background.w = WIDTH;
    background.h = HEIGHT;
    SDLayer_Blit(bmpBackground, NULL, display, &background, 0);

    // draw the player
    dest.x = player1.x; dest.y = player1.y;
    dest.w = player1.w; dest.h = player1.h;
    SDLayer_Blit(bmpPlayer, NULL, display, &dest, 1);

    // draw the buffer to the screen
    SDLayer_Update(display);

I currently disabled everything else, so it's definitely this bit which is causing the slowness. My laptop isn't the fastest one out there, but the speed I'm currently getting can't be right.

Also, I have a hunch the moving background could somehow be realized with viewports and scrolling factors, but should I even bother? When the program's finished, there will be way more objects to update, and it doesn't even work smoothly for one. Or is there some alternative to blitting that I missed?
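For reference, the source-rect arithmetic I have in mind for that would look something like this (the `Slice` struct and `scroll_slices` are just illustration names, not SDL or SDL_Layer API): instead of blitting the whole image twice at positive and negative y, blit two slices of one image using source rectangles.

```c
#include <assert.h>

#define HEIGHT 480  /* screen height in pixels */

typedef struct { int src_y, dst_y, h; } Slice;

/* Split one vertically wrapping background into two slices.
 * 'offset' is how far the image has scrolled down, in pixels
 * (0 <= offset < HEIGHT). The top of the screen shows the bottom
 * of the image; the rest shows the image from its first row. */
static void scroll_slices(int offset, Slice *top, Slice *bottom)
{
    top->src_y = HEIGHT - offset;
    top->dst_y = 0;
    top->h     = offset;

    bottom->src_y = 0;
    bottom->dst_y = offset;
    bottom->h     = HEIGHT - offset;
}
```

Each frame, the slice's `src_y`/`h` would become the source rect and `dst_y` the destination of a blit; the two slices together always cover exactly one screen.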

Anyways, thanks in advance for any help!

Sincerely

Arivor

Maybe a stupid question, but have you made sure your video drivers are up to date? It's been years since I used SDL, but IIRC it uses OpenGL on the backend, so if your GL drivers are still Microsoft bundled or otherwise non-accelerated, then OpenGL is going to be sloooow.

I'm on Linux Fedora, and as far as I can tell, all my drivers are up to date.

-Arivor

What does glxinfo | grep render tell you?

$ glxinfo | grep render
direct rendering: Yes
OpenGL renderer string: Mesa DRI Intel(R) Ironlake Mobile
    GL_NV_conditional_render, GL_ARB_ES2_compatibility,
Hmm... that looks okay.

I just don't see anything obviously wrong in your code, so I was hoping it might be a driver issue. But like I said, it's been years since I used SDL. You might try digging into the guts of SDL Layer to see if it is doing anything stupid behind the scenes.

You say it's "just as slow as vanilla SDL". You wouldn't be measuring your speeds in FPS, would you? Because measuring performance by FPS gives deceptive results; you should measure in milliseconds or microseconds per frame.

Example: An improvement from 1 FPS to 2 FPS is a 1 frame increase... which is 500 milliseconds of improvement per frame.

An improvement from 50 FPS to 51 FPS is also a 1 frame increase... but only about two-fifths of a millisecond of improvement per frame.
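The difference is easier to see as arithmetic; a quick sketch (plain C, nothing SDL-specific):

```c
#include <assert.h>
#include <math.h>

/* Milliseconds spent on one frame at a given frame rate. */
static double ms_per_frame(double fps)
{
    return 1000.0 / fps;
}
```

So 1 FPS is 1000 ms/frame and 2 FPS is 500 ms/frame (500 ms saved per frame), while 50 FPS is 20 ms/frame and 51 FPS is about 19.6 ms/frame (roughly 0.4 ms saved per frame).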

What are the actual speeds you are getting, measured in average milliseconds per frame?

Sometimes, just drawing one object per frame may take a lot of processing, but for each subsequent object the increase might be very minor. Have you already tested with 100 objects on-screen? Have you tested with 1000? Then you can calculate the average decrease in performance per object, after excluding the baseline cost of drawing any objects at all.

Further, have you converted your surfaces properly (when loading them) to be optimized to the format of the screen you are rendering to? If you don't convert them once per image after loading, then they'll have to be converted each and every time you draw the image, which can slow you down.

I apologize if this is information you already know - I'm not familiar with your skill level and knowledge, so I might be pointing out the obvious. :)

> Maybe a stupid question, but have you made sure your video drivers are up to date? It's been years since I used SDL, but IIRC it uses OpenGL on the backend, so if your GL drivers are still Microsoft bundled or otherwise non-accelerated, then OpenGL is going to be sloooow.

Actually, I believe SDL 2.0 is heavily tied to OpenGL, but with SDL 1.2 I think only true OpenGL code benefits from hardware acceleration. I could be wrong, though; I don't know if initializing the screen with SDL_OPENGL can be done without OpenGL-specific code, but he could try adding that flag when calling SDL_SetVideoMode().

Don't know for sure though, just something to look into.

> Hmm... that looks okay.
>
> I just don't see anything obviously wrong in your code, so I was hoping it might be a driver issue. But like I said, it's been years since I used SDL. You might try digging into the guts of SDL Layer to see if it is doing anything stupid behind the scenes.

Okay, thanks for your help anyway. I'm not that great an expert, but I'll definitely take a closer look at the SDL_Layer files.

> You say it's "just as slow as vanilla SDL". You wouldn't be measuring your speeds in FPS, would you? Because measuring performance by FPS gives deceptive results; you should measure in milliseconds or microseconds per frame.
>
> Example: An improvement from 1 FPS to 2 FPS is a 1 frame increase... which is 500 milliseconds of improvement per frame.
>
> An improvement from 50 FPS to 51 FPS is also a 1 frame increase... but only about two-fifths of a millisecond of improvement per frame.
>
> What are the actual speeds you are getting, measured in average milliseconds per frame?
>
> Sometimes, just drawing one object per frame may take a lot of processing, but for each subsequent object the increase might be very minor. Have you already tested with 100 objects on-screen? Have you tested with 1000? Then you can calculate the average decrease in performance per object, after excluding the baseline cost of drawing any objects at all.
>
> Further, have you converted your surfaces properly (when loading them) to be optimized to the format of the screen you are rendering to? If you don't convert them once per image after loading, then they'll have to be converted each and every time you draw the image, which can slow you down.
>
> I apologize if this is information you already know - I'm not familiar with your skill level and knowledge, so I might be pointing out the obvious. :)

No need to apologize. You put me on the right track, I think. With SDL_Layer I get a stable 21 ms/frame, which goes up to around 28 ms/frame with a thousand (small) objects. So it was my code: when I first wrote it for vanilla SDL, I regulated the speed by limiting it to one update every few frames (dumb idea, I know...). If I update once per frame instead, I can use SDL_Layer without lag. A thousand objects at once should suffice for a small project like this.

Btw, I tried with and without SDL_DisplayFormat() and cannot make out any difference (neither by eye nor by measuring).
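For anyone finding this later: the frame cap I switched to boils down to the arithmetic below (`FRAME_MS` and `delay_for` are my own names; in SDL 1.2 the tick values would come from SDL_GetTicks() and the result would go to SDL_Delay()).

```c
#include <assert.h>

#define FRAME_MS 20u  /* target: 20 ms/frame, i.e. 50 FPS */

/* How long to sleep after a frame that started at 'frame_start',
 * given the current tick count, to hold FRAME_MS per frame.
 * If the frame already took longer than the budget, don't sleep. */
static unsigned delay_for(unsigned frame_start, unsigned now)
{
    unsigned elapsed = now - frame_start;
    return elapsed >= FRAME_MS ? 0u : FRAME_MS - elapsed;
}
```

With that in the main loop, every frame does exactly one update, and the cap keeps the game speed constant regardless of how fast the machine renders.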

>> Maybe a stupid question, but have you made sure your video drivers are up to date? It's been years since I used SDL, but IIRC it uses OpenGL on the backend, so if your GL drivers are still Microsoft bundled or otherwise non-accelerated, then OpenGL is going to be sloooow.
>
> Actually, I believe SDL 2.0 is heavily tied to OpenGL, but with SDL 1.2 I think only true OpenGL code benefits from hardware acceleration. I could be wrong, though; I don't know if initializing the screen with SDL_OPENGL can be done without OpenGL-specific code, but he could try adding that flag when calling SDL_SetVideoMode().
>
> Don't know for sure though, just something to look into.

I'm able to use that flag, but with SDL_Layer it just gives me a black screen. No idea if that's SDL_Layer's fault or if that flag just needs some special code.

Thanks for all your help! Going back to coding now ;-)

-Arivor

>> Maybe a stupid question, but have you made sure your video drivers are up to date? It's been years since I used SDL, but IIRC it uses OpenGL on the backend, so if your GL drivers are still Microsoft bundled or otherwise non-accelerated, then OpenGL is going to be sloooow.
>
> Actually, I believe SDL 2.0 is heavily tied to OpenGL, but with SDL 1.2 I think only true OpenGL code benefits from hardware acceleration. I could be wrong, though; I don't know if initializing the screen with SDL_OPENGL can be done without OpenGL-specific code, but he could try adding that flag when calling SDL_SetVideoMode().
>
> Don't know for sure though, just something to look into.

No, SDL 2.0 isn't truly attached to OpenGL; in fact I believe it has a Direct3D backend too (need to make sure). The software renderer is still there as well (and in fact has more functionality than it used to, since it can now draw some shapes too). But yes, it will try to use the GPU by default and resort to software rendering only as a last option.

SDL 1.2 indeed requires you to use OpenGL explicitly, because by itself it'll just do software rendering.

Don't pay much attention to "the hedgehog" in my nick, it's just because "Sik" was already taken =/ By the way, Sik is pronounced like seek, not like sick.
