SDL app running at only approx. 1 FPS?

2 comments, last by randomZ 19 years, 3 months ago
Hi folks, I've been playing around with some SDL code lately and I am wondering what I am doing wrong, since it is way too slow. What I am basically doing is assembling a level from approx. 70 tiles (by blitting them onto the surface) and then flipping the page for viewing. To show something moving, I randomly exchange one tile at a time. The result looks as if the app renders about one frame per second, and I don't know why...

int CGame::Run (void)
{
  bool bExitApp = false;

  while (!bExitApp)
  {
    SDL_Event event;

    // Check for events
    while (SDL_PollEvent (&event))
    {
      switch (event.type)
      {
        case SDL_KEYDOWN:
          switch (event.key.keysym.sym)
          {
            case SDLK_ESCAPE:
              CleanupEnvironment ();
              bExitApp = true;
              break;

            default:
              break;
          }
          break;

        case SDL_QUIT:
          CleanupEnvironment ();
          bExitApp = true;
          break;

        default:
          break;
      }
    }

    if (!bExitApp)
    {
      RenderEnvironment ();
    }
  }

  return S_OK;
}

int CGame::RenderEnvironment (void) 
{  
  int iResult;
  
  // Frame move the scene
  if (FAILED (iResult = FrameMove ()))    
  {      
    return iResult;    
  }
  
  // Render the scene as normal
  if (FAILED (iResult = Render ()))    
  {
    return iResult;    
  }
  
  return S_OK;
}

int CGame::FrameMove( void )
{
  m_pGates[(int)(rand() % GATES_MAX)]->SwitchState();
	
  return S_OK;
}

int CGame::Render (void) 
{  	
  Uint32 color;
  // Create a white background 
  color = SDL_MapRGB (m_pSurface->format, 255, 255, 255);
  SDL_FillRect (m_pSurface, NULL, color);
	
  // Render all gates
  for( int iGate = 0; iGate < GATES_MAX; ++iGate )
  {
    m_pGates[iGate]->Render( m_pSurface );
  }
	
  // Make sure everything is displayed on screen
  SDL_Flip (m_pSurface);
  
  return S_OK;
}

int CGate::Render( SDL_Surface* pSurface )
{
  SDL_Rect myRect;
  myRect.x = 64 + m_iPosX;
  myRect.y = 80 + m_iPosY;
	
  SDL_BlitSurface( m_pimgGate[m_iState]->GetImage (), NULL, pSurface, &myRect );	
  
  return S_OK;
}

Does anyone see a problem in the code? Regards, T.
It's been a while since I've used SDL for rendering, but...

Are you clearing the screen before you draw everything? It looks like you are. You shouldn't do this if you are going to redraw the entire screen anyway.

Next, you should also convert all of your SDL_Surfaces to the screen's format. Otherwise, each time you blit, SDL does this conversion for you on the fly. This is SLOW, and you usually get a decent FPS increase after converting up front. Just look in the SDL docs for something like SDL_ConvertSurface() or SDL_DisplayFormat().
FTA, my 2D futuristic action MMORPG
Quote: Original post by graveyard filla
are you clearing the screen before you draw everything? it looks like you are. you should not do this if you are only going to re-draw the entire screen anyway.

This also applies even if you're not redrawing the whole scene but are just drawing over everything you rendered the frame before (i.e. no moving objects, which seems to be your case, since you're just drawing the new tiles atop the old ones).

You should also call SDL_GetTicks() before and after bits of your rendering code and compute time_spent = time_after - time_before for a bit of profiling, to see where the code is getting bogged down.

Drew Sikora
Executive Producer
GameDev.net

Don't worry about clearing the entire screen - SDL_FillRect on the whole screen should be very fast in any case.

Your likely problem is that you use alpha blending, and your display surface is in video memory. This means that every time you blit an alpha surface, SDL has to get the old pixels from VRAM (which is VERY slow) and blend them with the ones from your surface.

Possible solutions:
1. pass SDL_SWSURFACE to SDL_SetVideoMode()
pro: eliminates your problem
con: you can't use double buffering, and therefore get no VSYNC (VSYNC is a good thing!)

2. use SDL_DOUBLEBUF and have a software shadow surface. Then first blit everything to your shadow surface (which you need to create yourself) and then, after all blits, blit the whole shadow surface to VRAM and call SDL_Flip().

pro: you can use double buffering
con: uses a bit of extra system memory - but not that much that it should matter.



Other things you should pay attention to:

After loading, convert every surface to the display format with SDL_DisplayFormatAlpha(). This speeds up blitting drastically. It is also necessary after IMG_Load() calls; SDL_image doesn't do this for you!

On images that use alpha, always call SDL_SetAlpha() with the SDL_RLEACCEL flag. This speeds up blits, as pixels with 0% / 100% alpha aren't blended, and the alpha information is compressed.

Edit: Another thing that came to my mind: Have all your surfaces in system memory, i.e. don't use the SDL_HWSURFACE flag.

-Sebastian
---
Just trying to be helpful.
Sebastian Beschke
http://randomz.heim.at/
