Storyyeller

Why is Pygame so much slower than plain SDL?


I decided to try out Pygame. Unfortunately, it seems to be very slow. Even a trivial loop that just blits an image over and over gets only 18 FPS (yes, I did call surf.convert()). I know that SDL is capable of much better, because I wrote a complete game in C++ with SDL that runs at well over 60 FPS on the same computer.

So my question is: why is Pygame so much slower? As I understand it, Pygame is just a wrapper, with most of the work done inside SDL, so the timings should be about the same.

[source lang='python']
import random, os.path
import pygame

def loadImage(name):
    print "Loading ", name
    surf = pygame.image.load(os.path.join("Images", name))
    surf.convert()
    return surf

class ImageLoader(object):
    def __getattr__(self, key):
        self.__dict__[key] = loadImage(key + '.png')
        return self.__dict__[key]
images = ImageLoader()

class Game(object):
    def __init__(self):
        pygame.init()
        screen = pygame.display.set_mode((800,600))
        pygame.display.set_caption("Achronal Portal")

        self.screen = screen
        self.spos = 400,300
        self.start = pygame.time.get_ticks()
        self.ticks = 0.0

    def draw(self):
        draw = self.screen.blit
        draw(images.background02, (0,0))
##        draw(images.twibody, self.spos)
        pygame.display.flip()

##    def update(self):
##        if not pygame.key.get_focused():
##            return
##        pressed = pygame.key.get_pressed()
##        xin = pressed[pygame.K_RIGHT] - pressed[pygame.K_LEFT]
##        self.spos = self.spos[0] + xin, self.spos[1]
##        self.ticks += 1
##        print (pygame.time.get_ticks() - self.start)/self.ticks

    def run(self):
        running = 1
        while running:
            event = pygame.event.poll()
            if event.type == pygame.QUIT:
                running = 0
##            self.update()
            self.draw()
            self.ticks += 1
            print (pygame.time.get_ticks() - self.start)/self.ticks
Game().run()
[/source]


[source lang='python']
class ImageLoader(object):
    def __getattr__(self, key):
        self.__dict__[key] = loadImage(key + '.png')
        return self.__dict__[key]
[/source]

I wouldn't do your image loading like that. It's terribly inflexible and won't work well once you're trying to implement it with level files, etc. Other than that, I'm not really seeing anything there that stands out.

I would suggest taking the first image load out of the draw function. Right now you are loading your image based off of your first draw call. I am not saying it is loading more than once; as you say, it only prints the loading message once. I don't see anything else out of the ordinary. Basically, what I see this doing is causing Python to run unnecessary redundancy checks on the data in the dict.

The best way to do this is to remove your custom dict, use the one Python provides, and load your images into it. The way you have it written now, every time you use a key to access an image, Python is doing hash table duplication checks and such. This is the line that I would guess is causing your massive slowdown: self.__dict__[key] = loadImage(key + '.png')

Ultimately, refactor your code so that you are not calling loadImage directly off the dict key.
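A minimal sketch of that suggestion might look like the following. The loader is stubbed out here so the caching logic is self-contained (in the real game it would call pygame.image.load(...).convert()), and the image names are just the ones appearing in the code above:

```python
def load_image(name):
    # Stub standing in for:
    #   pygame.image.load(os.path.join("Images", name)).convert()
    # so this sketch runs without pygame or image files.
    return "surface:" + name

# Load every image exactly once, up front, into a plain dict.
IMAGE_NAMES = ["background02", "twibody"]
images = {name: load_image(name + ".png") for name in IMAGE_NAMES}

# Inside the draw loop, access is then an ordinary dict lookup,
# with no __getattr__ machinery involved:
background = images["background02"]
```

Whether this actually changes the frame rate is a separate question, but it does move all loading to startup and keeps the hot loop to plain dict reads.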

You are not printing FPS. You are printing total milliseconds / total frames, or the average number of milliseconds per frame. Here is my distillation of your code:


import random, os.path
import pygame

def loadImage(name):
    print "Loading ", name
    surf = pygame.image.load(os.path.join("Images", name))
    surf.convert()
    return surf

pygame.init()
screen = pygame.display.set_mode((800,600))
pygame.display.set_caption("Achronal Portal")
image = loadImage('test.png')
timer = pygame.time.get_ticks()
frames = 0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    screen.blit(image, (0,0))
    pygame.display.flip()

    frames += 1
    now = pygame.time.get_ticks()
    duration = now - timer
    if duration > 1000:
        seconds = duration / 1000
        fps = frames / seconds
        print "FPS: ", fps
        frames = 0
        timer = now



The above runs at over 180 FPS on my Ubuntu virtual machine (hosted on my Windows 7 laptop). I get pretty much the same results when this timing code is included in your original. By this I mean that blewisjr's guess is incorrect: there is nothing intrinsically slow in your original program.

One important point is not printing the FPS every frame. If I modify the program to do that, the FPS varies wildly, down to 30, and up above 100, averaging at maybe 70.
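(As an aside, pygame also ships a smoothed reading: pygame.time.Clock averages recent frames via tick()/get_fps(). A library-free sketch of the same windowed-average idea, with synthetic timestamps in the usage lines so it is deterministic:)

```python
import time
from collections import deque

class FPSCounter(object):
    """Report frame rate averaged over the last `window` frames,
    which smooths out the wild per-frame variation described above."""
    def __init__(self, window=60):
        self.stamps = deque(maxlen=window)

    def tick(self, now=None):
        # Call once per frame; `now` can be supplied for testing.
        self.stamps.append(time.perf_counter() if now is None else now)

    def fps(self):
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / span if span > 0 else 0.0

# Feeding it twenty synthetic 10 ms frames gives a steady ~100 FPS reading:
counter = FPSCounter()
for i in range(20):
    counter.tick(now=i * 0.01)
```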


You are not printing FPS. You are printing total milliseconds / total frames, or the average number of milliseconds per frame. Here is my distillation of your code:


I already accounted for that. It printed out 54ms per frame for me, which comes to 18FPS.

Interesting.

Here is a fairly equivalent version of my Python program in C++ (correct me if I am missing something).

#include <iostream>
#include <cstdlib>

#include "SDL.h"
#include "SDL_image.h"

int main(int, char**)
{
    if(SDL_Init(SDL_INIT_VIDEO) < 0)
    {
        std::cerr << "Failed to initialise SDL: " << SDL_GetError() << '\n';
        return 1;
    }

    std::atexit(&SDL_Quit);

    SDL_Surface *screen = SDL_SetVideoMode(800, 600, 0, SDL_SWSURFACE);
    if(!screen)
    {
        std::cerr << "Failed to set video mode: " << SDL_GetError() << '\n';
        return 1;
    }

    SDL_WM_SetCaption("Achronal Portal", NULL);

    SDL_Surface *image = IMG_Load("test.png");
    if(!image)
    {
        std::cerr << "Failed to load image: " << IMG_GetError() << '\n';
        return 1;
    }

    SDL_Surface *temp = SDL_DisplayFormat(image);
    if(temp)
    {
        std::swap(temp, image);
        SDL_FreeSurface(temp);
    }
    else
    {
        std::cerr << "Failed to format image to display: " << SDL_GetError() << '\n';
        return 1;
    }

    int frames = 0;
    Uint32 timer = SDL_GetTicks();
    bool running = true;
    while(running)
    {
        SDL_Event event;
        while(SDL_PollEvent(&event))
        {
            if(event.type == SDL_QUIT)
            {
                running = false;
            }
            else if(event.type == SDL_KEYDOWN && event.key.keysym.sym == SDLK_ESCAPE)
            {
                running = false;
            }
        }

        //SDL_FillRect(screen, 0, SDL_MapRGB(screen->format, 0x00, 0x00, 0xff));
        SDL_Rect dest = {0, 0};
        SDL_BlitSurface(image, NULL, screen, &dest);
        SDL_Flip(screen);

        ++frames;

        Uint32 now = SDL_GetTicks();
        Uint32 duration = now - timer;
        if(duration >= 1000)
        {
            Uint32 seconds = duration / 1000;
            Uint32 fps = frames / seconds;
            std::cout << "FPS: " << fps << '\n';
            frames = 0;
            timer = now;
        }
    }

    return 0;
}

The above runs at ~900 FPS on my laptop (no virtual machine this time). My earlier Python implementation runs at ~375 FPS outside the VM, so the Python version achieves roughly 42% of the C++ frame rate (375 / 900).

I am using CPython 2.7.2 64 bit and MSVC 2010 express, on a reasonably decent laptop (Core 2 Duo 2.53GHz, 6GB RAM and an ATI RadeonHD 4650). What kind of speed differential do you get? What kind of system are you running?

Another idea is to get a profiler, or wrap the various parts of the game loop in a timer, and compare how long different parts of your program are taking.
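That wrap-in-a-timer suggestion could be sketched like this. The section names and the wrapped call are illustrative; in the real loop you would wrap the suspect calls, e.g. timer.timed("blit", screen.blit, image, (0, 0)) and timer.timed("flip", pygame.display.flip):

```python
import time
from collections import defaultdict

class SectionTimer(object):
    """Accumulate wall-clock time per named section of the game loop."""
    def __init__(self):
        self.totals = defaultdict(float)
        self.calls = defaultdict(int)

    def timed(self, name, func, *args, **kwargs):
        # Run func, recording its duration under `name`, and pass
        # its return value straight through.
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            self.totals[name] += time.perf_counter() - start
            self.calls[name] += 1

    def report(self):
        # Average seconds spent per call, per section.
        return {name: self.totals[name] / self.calls[name]
                for name in self.totals}

timer = SectionTimer()
# Stand-in workload for demonstration; a real loop would wrap its
# blit/flip/update calls instead.
total = timer.timed("work", sum, range(1000))
```

The stock-profiler route is even less work: running the script under `python -m cProfile -s cumtime game.py` breaks the time down by function with no code changes.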
