SDL slow blitting

Started by ravengangrel
11 comments, last by lucas92 14 years, 1 month ago
Hello guys, I'm making a Pacman clone on my incredibly slow personal road to game development. I'm doing it with SDL, and I'm liking it a lot. I load a big background surface (2048x2048, for example) and do a smooth scroll to keep the player centered when I can. For now I don't have walls or enemies; the player can move freely and the level is filled with balls. I'm dividing the level into 32x32 squares, and inside each of these squares a ball is placed.

I began with a hardcoded resolution of 640x480. However, I've got a 22" screen, and when I try a higher resolution (say 1600x1000 windowed) or fullscreen (1680x1050), the drawing of all those balls gets crazy slow. I've got a fairly decent machine (Intel Quad Core Q9400 @ 2.66 GHz), so I guess I'm doing something wrong. I'm doing about (1680/32 * 1050/32) ~= 1750 blit operations per frame.

What would be the right way to do this? Having a surface with all the balls and just updating it each time a ball is eaten? My balls are PNGs with an alpha channel to get nice transparency effects; could this be the problem?
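Roughly, my per-frame drawing looks like this (a simplified sketch with placeholder names, not my exact code):

#include <SDL/SDL.h>

/* rough sketch of my per-frame drawing; 'eaten' is a flat array of flags, one per 32x32 cell */
void draw_frame(SDL_Surface *screen, SDL_Surface *background, SDL_Surface *ball,
                const int *eaten, int level_cols, int camera_x, int camera_y)
{
    /* blit the visible window of the big 2048x2048 background */
    SDL_Rect src = { camera_x, camera_y, screen->w, screen->h };
    SDL_BlitSurface(background, &src, screen, NULL);

    /* one 32x32 ball per visible, uneaten cell: ~(1680/32)*(1050/32) blits at fullscreen */
    for (int row = camera_y / 32; row <= (camera_y + screen->h) / 32; ++row) {
        for (int col = camera_x / 32; col <= (camera_x + screen->w) / 32; ++col) {
            if (!eaten[row * level_cols + col]) {
                SDL_Rect dst = { col * 32 - camera_x, row * 32 - camera_y, 0, 0 };
                SDL_BlitSurface(ball, NULL, screen, &dst); /* PNG with alpha channel */
            }
        }
    }

    SDL_Flip(screen);
}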
This may not be the answer you're looking for, but the easiest way to improve the performance would probably be to use hardware-accelerated rendering (e.g. using OpenGL or Direct3D). You would still be able to use SDL, but rather than having SDL do the blitting, you would use SDL for windowing, events, and so on, and render the graphics using OpenGL or Direct3D.

I've never used SDL's software blitting functionality, so I can't comment on that aspect of things. However, it seems at least possible that with that many individual, alpha-blended blits, you might be running up against the limits of what can reasonably be expected with software blitting. (I could be wrong about that though, and maybe others will be able to offer suggestions as to how you might speed up the software blitting.)

One thing you might try is to run the game without using alpha effects and see what effect (if any) that has on the frame rate.

SDL 1.3 has support for hardware-accelerated rendering 'out of the box', but it's still under development. Another option would be SFML, which also has support for hardware-accelerated sprite rendering.
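If it helps, the split looks roughly like this with SDL 1.2 (a minimal sketch of the idea, not a complete or tuned program):

#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

    /* SDL still owns the window, input and events... */
    SDL_SetVideoMode(1680, 1050, 32, SDL_OPENGL);

    /* ...but OpenGL does the drawing */
    glMatrixMode(GL_PROJECTION);
    glOrtho(0, 1680, 1050, 0, -1, 1);   /* 2D pixel coordinates */
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    int running = 1;
    while (running) {
        SDL_Event event;
        while (SDL_PollEvent(&event))
            if (event.type == SDL_QUIT)
                running = 0;

        glClear(GL_COLOR_BUFFER_BIT);
        /* draw textured quads for the background and balls here */
        SDL_GL_SwapBuffers();
    }

    SDL_Quit();
    return 0;
}

The alpha-blended balls then cost next to nothing, since the blending happens on the GPU.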
You might find Bob Pendleton's SDL articles an interesting read.
Thanks, jyk. While I appreciate your comment -and undoubtedly will walk that path some day- I don't think a damn Pacman clone should need a latest-generation GPU to run smoothly. I remember playing one on a 4 MHz Amstrad CPC 6128. Yeah, perhaps the screen resolution wasn't as high. Surely they didn't do alpha blending. But still, it's an almost one million times slower processor. I do this for fun and don't have a strict schedule I have to comply with. I'd like to be able to play this without my GPU fans spinning up because of the hardware acceleration.

Perhaps I shouldn't care nowadays, but I do.

I'll look into the alpha blending thing, however. Seems promising.

Quote:Original post by jyk
This may not be the answer you're looking for, but the easiest way to improve the performance would probably be to use hardware-accelerated rendering (e.g. using OpenGL or Direct3D). You would still be able to use SDL, but rather than having SDL do the blitting, you would use SDL for windowing, events, and so on, and render the graphics using OpenGL or Direct3D.

I've never used SDL's software blitting functionality, so I can't comment on that aspect of things. However, it seems at least possible that with that many individual, alpha-blended blits, you might be running up against the limits of what can reasonably be expected with software blitting. (I could be wrong about that though, and maybe others will be able to offer suggestions as to how you might speed up the software blitting.)

One thing you might try is to run the game without using alpha effects and see what effect (if any) that has on the frame rate.

SDL 1.3 has support for hardware-accelerated rendering 'out of the box', but it's still under development. Another option would be SFML, which also has support for hardware-accelerated sprite rendering.


Damn, I swapped the PNG for a GIF with a transparent background and there's no noticeable difference. Same with a BMP.

Thanks for the links, rip-off, they're now in my favourites folder. Will read them carefully.
Make sure the bit depth of your sprite surfaces is the same as your screen surface. If not, SDL has to convert the pixel format on the fly every time you blit, which slows things down.

I found this article: http://lazyfoo.net/SDL_tutorials/lesson02/index.php helpful.
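To make that concrete: convert each sprite once at load time instead of letting SDL convert it on every blit. Assuming you load the PNGs with SDL_image, it looks something like this (illustrative sketch):

#include <SDL/SDL.h>
#include <SDL/SDL_image.h>

/* load a sprite and convert it to the screen's pixel format once, at startup */
SDL_Surface *load_sprite(const char *path)
{
    SDL_Surface *raw = IMG_Load(path);
    if (!raw)
        return NULL;

    /* use SDL_DisplayFormat() instead if the image has no alpha channel */
    SDL_Surface *converted = SDL_DisplayFormatAlpha(raw);
    SDL_FreeSurface(raw);
    return converted;
}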
As others have suggested, I would also assume that it's SDL's software rendering that is causing the problem. I should certainly hope it's not the PNG with alpha; any computer from the last decade or so should handle alpha blending no problem. But that would be using hardware acceleration (probably on that old 4 MHz machine too, in some form or another).

I suggest making the leap and using OpenGL. It's surprisingly simple to use; I got started with it from a tutorial that drew a simple triangle, and you can set up SDL to use OpenGL rendering simply by passing a flag to its initialisation routine.

This will get you started with OpenGL. I'm not sure how you are loading your images but that site looks like it also shows how to load an OpenGL texture from an SDL_Surface.
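If you do go that way, uploading an SDL_Surface as an OpenGL texture boils down to something like this (just a sketch; it assumes the surface has already been converted to 32-bit RGBA, and note that older GL drivers want power-of-two sizes, which your 32x32 balls already are):

#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

/* assumes 'surface' is already a 32-bit RGBA surface; convert it first if not */
GLuint texture_from_surface(SDL_Surface *surface)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);
    return tex;
}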
[Window Detective] - Windows UI spy utility for programmers
Quote:Original post by ravengangrel
Thanks, jyk. While I appreciate your comment -and undoubtedly will walk that path some day- I don't think a damn Pacman clone should need a latest-generation GPU to run smoothly. I remember playing one on a 4 MHz Amstrad CPC 6128. Yeah, perhaps the screen resolution wasn't as high. Surely they didn't do alpha blending. But still, it's an almost one million times slower processor.

We're talking 4.77MHz vs roughly 2.6GHz, so it's 'only' about 500 times slower. ;)

The fact is, though, you're asking to move an awful lot of memory. 1600x1000 at 4 bytes per pixel is about 6 megabytes. Compare that to the Amstrad's Mode 0 graphics, which was 160x200 at half a byte per pixel, i.e. 16 kilobytes. So although your processor is roughly 500x faster, the amount of data you're shifting is not far off 500x more.

This is made worse by the fact that on the old Z80 machines the graphics memory was designed for fast read and write access. On modern x86-based machines the graphics memory typically sits on a different piece of hardware entirely, so various tricks are used to send the data down to that hardware efficiently. On top of that, reading back from graphics memory can be very slow, which hurts blending operations. For 99% of applications this is a good trade-off, because they want faster writes at the expense of slower reads; it's just not optimal for graphics programmers.

But that's where the GPU comes in; by doing the blending logic on the graphics card you can get back all of that speed, and by keeping your textures on the graphics card you can save all the time you'd otherwise spend on sending those 6MB of data from the main memory to the graphics card each frame.
Quote:Original post by Kylotan
We're talking 4.77MHz vs roughly 2.6GHz, so it's 'only' about 500 times slower. ;)

LOL, right. I thought it was a 4.00MHz uP so I was exaggerating a bit on purpose. But the three extra zeros were a mistake.

So all of you agree that I should do OpenGL rendering. OK, you win. I will try.

renners: I was already aware of that.

Thanks to all of you.

Actually, I don't think you should be seeing visible problems with a simple Pacman clone, even at higher resolutions. In many cases there are "features" of SDL that make it very portable but not so fast without some careful consideration.

Here are (arguably) the most common issues affecting SDL performance:
1. Do you load images to surfaces every frame, or only once at start? It should be only once, at start.
2. Are the surfaces "optimized" using SDL_DisplayFormat or SDL_DisplayFormatAlpha? Otherwise SDL has to convert the source format into the destination format every time you call SDL_BlitSurface.
3. Have you tried using color keying instead of alpha channels? This is MUCH faster in software than alpha.
4. If you redraw your entire screen every frame, it isn't necessary to "clear" it with a background color first. This matters more at high resolutions.

Most tutorials steer you towards getting 1 and 2 set up correctly, so those may not be an issue, but I mention them because the post doesn't say whether that's the case.

Using color keys instead of alpha, however, should drastically improve performance. A pixel matching the color key is simply not drawn, whereas (from my understanding, anyway) a source pixel with an alpha channel is still blended against the destination pixel even when it is fully transparent. In most cases alpha effects are really not as necessary as initially thought; the sketch below shows what the color key setup looks like.
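A minimal example of the setup, assuming SDL 1.2 and SDL_image, with placeholder names:

#include <SDL/SDL.h>
#include <SDL/SDL_image.h>

/* load a ball sprite that uses a color key (here magenta) instead of an alpha channel */
SDL_Surface *load_ball(const char *path)
{
    SDL_Surface *raw = IMG_Load(path);
    if (!raw)
        return NULL;

    SDL_Surface *ball = SDL_DisplayFormat(raw);   /* match the screen's pixel format */
    SDL_FreeSurface(raw);

    /* SDL_RLEACCEL lets the blitter skip runs of keyed pixels very cheaply */
    SDL_SetColorKey(ball, SDL_SRCCOLORKEY | SDL_RLEACCEL,
                    SDL_MapRGB(ball->format, 255, 0, 255));
    return ball;
}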

The rule of thumb when using SDL is to add alpha and rotation effects only after everything else is working the way you want, and then see how much you can add before they become a performance bottleneck. If alpha and rotation are an absolute must and you're sure they are the bottleneck, then a 3D API is your only choice at that point.
Evillive2

This topic is closed to new replies.
