[Performance] SDL vs. Allegro

8 comments, last by skiesbblue 14 years, 6 months ago
Greetings. I've read two "SDL vs. Allegro" threads here on GameDev, but neither states which one performs better. Here's why I'm wondering: a long time ago I got an Allegro game running at a decent speed on a ~200MHz computer. More recently, I ran an SDL game of similar complexity on a 400MHz computer, yet it was not as fast as the Allegro one on the ~200MHz machine. From this I'm guessing one of three things:

1) There was a severe bottleneck in the SDL game I was not aware of.
2) Allegro is way faster than SDL for 2D work.
3) A mix of the two.

Another reason I suspect this: it looks to me as if Allegro accesses the graphics hardware directly (think DOS and direct VGA), whereas SDL sits on top of DirectX or OpenGL (think Windows or Linux); correct me if I'm wrong here. Is there a definite answer? Thanks in advance.
I just checked the Allegro homepage, and it says they have added Direct3D and OpenGL drivers, which should make it faster than SDL on newer computers.

There are many more options besides these two (HGE, SFML, JRA), and Allegro has more functionality included out of the box, whereas SDL relies on add-on libraries.

And you are wrong there: Allegro shouldn't be using DOS-style direct VGA access on Windows XP, and SDL usually sits on top of GDI or DirectDraw on Windows.
It's been some time since I last played around with SDL, but I also ran into rather drastic performance problems. Just one full-screen blit plus two bitmaps on top of it was right at the limit of what my PC back then (an Athlon XP 2000+, I think) could manage.

I later came to suspect that I had created the bitmaps with a pixel format different from the backbuffer's, and SDL was doing an on-the-fly conversion for me on each blit. I never truly investigated, though. At least it showed me that there can be some traps in SDL :)

-Markus-
Professional C++ and .NET developer trying to break into indie game development.
Follow my progress: http://blog.nuclex-games.com/ or Twitter - Topics: Ogre3D, Blender, game architecture tips & code snippets.
Allegro uses inline asm for blit and rotation operations. AFAIK, SDL doesn't.

It's a moot point though - Allegro 4.4 and, in the future, 5, will disable asm in favour of C implementations, and move the bulk of the graphics operations into D3D/OpenGL.

For SDL, just create an OpenGL window and use that. It will be many times faster on any reasonable hardware.
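
A minimal sketch of that with SDL 1.2 (the window size and clear colour are arbitrary, and error handling is kept to a bare minimum):

#include <stdio.h>
#include <SDL.h>
#include <SDL_opengl.h>

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* Ask for a double-buffered OpenGL context instead of a 2D surface. */
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    if (SDL_SetVideoMode(640, 480, 32, SDL_OPENGL) == NULL) {
        fprintf(stderr, "SDL_SetVideoMode failed: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    /* From here on you draw with ordinary OpenGL calls rather than
       SDL_BlitSurface, and the GPU does the heavy lifting. */
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    SDL_GL_SwapBuffers();

    SDL_Delay(2000);
    SDL_Quit();
    return 0;
}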

Edit: Neither SDL nor Allegro can access the framebuffer directly in the general case. Only some ports can do this (think DOS, and Unix/DirectFB).

[OpenTK: C# OpenGL 4.4, OpenGL ES 3.0 and OpenAL 1.1. Now with Linux/KMS support!]

In general, SDL's 2D rendering is not hardware accelerated, which helps make it very cross-platform, but it can be quite slow, especially if your code is not particularly efficient.

SDL also has a tricky habit of not complaining when the bit depths of different surfaces don't match. Instead, it does a silent conversion between them. This can happen when blitting from an image to the screen surface, and/or when SDL blits from the screen surface to the actual display buffer. It means programs will still work, but it is a serious speed trap that is very easy to fall into.
Quote:Original post by Dr_Ian
In general, SDL's 2D rendering is not hardware accelerated, which helps make it very cross-platform, but it can be quite slow, especially if your code is not particularly efficient.

SDL also has a tricky habit of not complaining when the bit depths of different surfaces don't match. Instead, it does a silent conversion between them. This can happen when blitting from an image to the screen surface, and/or when SDL blits from the screen surface to the actual display buffer. It means programs will still work, but it is a serious speed trap that is very easy to fall into.


An easy fix for this is to convert the surface to the same format as the screen once, when you load it, so the conversion doesn't happen on every blit. This can be accomplished with the SDL_DisplayFormat() function.
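
Something like this, assuming SDL 1.2 and a BMP for simplicity (load_optimized is just a made-up helper name, and the video mode must already be set before SDL_DisplayFormat() will work):

#include <SDL.h>

SDL_Surface *load_optimized(const char *path)
{
    SDL_Surface *loaded = SDL_LoadBMP(path);
    if (loaded == NULL)
        return NULL;

    /* Convert once at load time so SDL_BlitSurface doesn't have to
       convert pixel formats on every single blit. */
    SDL_Surface *optimized = SDL_DisplayFormat(loaded);
    SDL_FreeSurface(loaded);  /* the unconverted original is no longer needed */
    return optimized;         /* may be NULL if the conversion failed */
}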

V/R,
-AJ
There are 10 kinds of people in the world: those who understand binary and those who don't...
Quote:Another reason is, it looks to me that Allegro uses direct access to graphics (think DOS and direct VGA), whereas SDL stands on top of DirectX or OpenGL (think Windows or Linux), correct me if I'm wrong here.

Allegro hasn't been exclusively a DOS library since 4.0 was released seven years ago in 2001. Unless you use DJGPP as your compiler, Allegro 4 sits on top of DirectX (on Windows). Generally speaking, using a Windows compiler will give much better performance than DJGPP will under Windows. (In most cases, DJGPP compiled Allegro programs won't even work on Windows XP or later.)

Allegro 4 will try to use hardware acceleration when possible (usually VRAM<->VRAM blits), but it isn't very sophisticated at it, and using an OpenGL target (e.g., with AllegroGL) will probably give you much better results if you know how to leverage it.
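
If it helps, here's a rough Allegro 4 sketch of keeping a sprite in VRAM so blits to the screen can be accelerated. This assumes the driver actually supports video bitmaps; create_video_bitmap() returns NULL when it doesn't, and a real program should check for that (most error checks are omitted here for brevity):

#include <allegro.h>

int main(void)
{
    allegro_init();
    install_keyboard();
    set_color_depth(16);
    set_gfx_mode(GFX_AUTODETECT_FULLSCREEN, 640, 480, 0, 0);

    /* A video bitmap lives in VRAM, so blitting it to the screen can be
       a VRAM->VRAM operation handled by the graphics card. */
    BITMAP *sprite = create_video_bitmap(64, 64);
    clear_to_color(sprite, makecol(255, 0, 0));

    blit(sprite, screen, 0, 0, 100, 100, 64, 64);
    readkey();

    destroy_bitmap(sprite);
    return 0;
}
END_OF_MAIN()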

Allegro 5 will provide much better performance out of the box with all new OpenGL / Direct3D drivers and a completely redesigned API.

It wouldn't surprise me if Allegro performs a bit better on older hardware, considering it grew up in the days of the 386. But most of those performance tweaks and that assembly code are no longer relevant on modern hardware, so SDL and Allegro probably deliver essentially the same performance on modern machines right now.
Performance under Allegro and SDL should be largely the same. My guess is that your problem lies more in how your surfaces are being created. Your performance mileage will vary wildly with either API, depending largely on the bit depths and color modes you use. I can easily see a software blit of a 32-bit image that fills the screen bringing an Athlon 2000+ to its knees, whether you use Allegro or SDL.

There are a few simple rules you can follow to maximize 2D performance in SDL or Allegro.

If you are using software surfaces, always use the lowest bit depth you can possibly get away with. A 32-bit image requires four times the memory bandwidth of an 8-bit palettized image, and also causes significantly more cache thrashing.
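
For example (SDL 1.2; the helper name and the 64x64 size are just for illustration):

#include <SDL.h>

/* Request an 8-bit palettized software surface instead of a 32-bit one. */
SDL_Surface *make_8bit_sprite(void)
{
    SDL_Surface *sprite = SDL_CreateRGBSurface(SDL_SWSURFACE, 64, 64, 8,
                                               0, 0, 0, 0);
    /* An 8-bit surface needs its palette filled in (e.g. via
       SDL_SetColors) before you draw with it. */
    return sprite;
}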

Also, whether using software or hardware rendering, ensure that all of your surfaces share the same pixel format. You can sometimes fudge this a little for hardware rendering, since most 3D accelerators can do certain format conversions on the fly (such as 16-bit textures -> a 32-bit render target), but for software rendering it is absolutely essential. In software, if the pixel formats of the source and target surfaces match, the CPU only needs to copy the memory from one surface to the other, and can often do burst copies for even more speed. If the pixel formats do not match, the CPU must convert EVERY SINGLE PIXEL individually from the source format to the target format before copying. Not only does this prevent bursting the memory transfer, it also adds a lot of math for the actual color conversion.
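
A small sketch of catching this at runtime with SDL 1.2 (warn_if_conversion is a hypothetical debugging helper):

#include <stdio.h>
#include <SDL.h>

void warn_if_conversion(const SDL_Surface *src, const SDL_Surface *dst)
{
    /* Mismatched depths or channel masks force a per-pixel conversion. */
    if (src->format->BitsPerPixel != dst->format->BitsPerPixel ||
        src->format->Rmask != dst->format->Rmask ||
        src->format->Gmask != dst->format->Gmask ||
        src->format->Bmask != dst->format->Bmask) {
        fprintf(stderr, "warning: blitting %d bpp onto %d bpp will convert"
                        " every pixel\n",
                src->format->BitsPerPixel, dst->format->BitsPerPixel);
    }
}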

Thirdly, if you are using hardware surfaces, don't lock them any more than you have to. Locking a hardware surface often involves copying it out of video RAM into system RAM before modifying it, then copying it back when you are done.
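
In SDL 1.2 terms, the lock-once pattern looks roughly like this (fill_rows is a made-up helper that writes a flat byte value into every row):

#include <string.h>
#include <SDL.h>

void fill_rows(SDL_Surface *surface, Uint8 value)
{
    /* Lock once, do all the pixel work, unlock once. */
    if (SDL_MUSTLOCK(surface) && SDL_LockSurface(surface) < 0)
        return;  /* couldn't lock; give up */

    Uint8 *pixels = (Uint8 *)surface->pixels;
    for (int y = 0; y < surface->h; ++y)
        memset(pixels + y * surface->pitch, value,
               surface->w * surface->format->BytesPerPixel);

    if (SDL_MUSTLOCK(surface))
        SDL_UnlockSurface(surface);
}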

And finally, use hardware surfaces if at all possible. Almost anything with a screen these days supports some form of 2D blit acceleration.
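
With SDL 1.2 that means asking for SDL_HWSURFACE, though SDL may silently hand you a software surface instead (hardware surfaces usually require fullscreen), so it's worth checking the flags you actually got:

#include <stdio.h>
#include <SDL.h>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);

    /* A bpp of 0 means "use the current display depth". */
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 0,
        SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_FULLSCREEN);
    if (screen != NULL && !(screen->flags & SDL_HWSURFACE))
        fprintf(stderr, "note: fell back to a software surface\n");

    SDL_Quit();
    return 0;
}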
I just noticed that blitting 16-bit SDL surfaces is at least twice as slow as blitting 24-bit surfaces. How about that. And I have everything using the same format.

It seems that 16-bit display is really slow.
I also ran into performance problems with SDL, but different ones: SDL seems to slow down other code in my program by more than 50 times, and I really don't know what the reason could be. I started a new thread about it:

SDL slows down other code by over 50 times? (Simulation, OpenGL, VBO)

This topic is closed to new replies.
