Hardware-accelerated SDL under Linux?

Hey all! I've been trying to get my SDL program to use hardware acceleration under X11 (Red Hat 7.3, XFree86 4.2.0, GeForce2 with 64 MB), and I have to admit I'm a bit confused. Please correct me if I'm wrong, but the way I understand it, you can't have hardware acceleration with the X11 driver (which only gives you software surfaces) unless you use another driver type, such as DGA. Now, the downsides of DGA are that you need to be root (which makes sense in terms of security, but is a real pain in the butt), and that I can't get it to work. Querying SDL tells me it's very happy to have hardware acceleration and 64 MB of video memory to use, but for some reason I cannot get a hardware surface. The strange thing is that if I use a software surface (that is, I do not set SDL_VIDEODRIVER=dga), it all works nicely.

So, my questions are: what am I doing wrong, and what kind of performance gain should I expect if I get hardware acceleration to work (i.e. is using DGA, with its root-access issue, worth all that effort)?

Thanks a bunch for any help/pointers.

Cheers,
Lanfeust

As a reference, here's my pixel format function:

void SetupPixelFormat(void)
{
    VideoFlags  = SDL_OPENGL;
    VideoFlags |= SDL_HWPALETTE;
    VideoFlags |= SDL_RESIZABLE;

    const SDL_VideoInfo * VideoInfo = SDL_GetVideoInfo();
    if(VideoInfo == NULL) {
        Quit(0);
    }

    ///////////// we set the system dependent flags here
    if(VideoInfo->hw_available) {
        VideoFlags |= SDL_HWSURFACE;
        cout << "good, we do have hw surface..." << endl;

        if(VideoInfo->blit_hw) {
            VideoFlags |= SDL_HWACCEL;
            cout << "we have hw blitting" << endl;
        } else {
            cout << "damn, we only have sw blitting..." << endl;
        }

        printf("Amount of available video memory (in KB): %u \n", VideoInfo->video_mem);

        SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
        SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, SCREEN_DEPTH );
        SDL_GL_SetAttribute( SDL_GL_STENCIL_SIZE, 0 );
        SDL_GL_SetAttribute( SDL_GL_ACCUM_RED_SIZE, 0 );
        SDL_GL_SetAttribute( SDL_GL_ACCUM_GREEN_SIZE, 0 );
        SDL_GL_SetAttribute( SDL_GL_ACCUM_BLUE_SIZE, 0 );
        SDL_GL_SetAttribute( SDL_GL_ACCUM_ALPHA_SIZE, 0 );
    }
}

Now, once this is set up (and so far happy to use DGA with the hardware), I try to create my window this way:

void CreateMyWindow(const char * strWindowName, int width, int height, int VideoFlags)
{
    // SCREEN_DEPTH is a macro for bits per pixel
    MainWindow = SDL_SetVideoMode(width, height, SCREEN_DEPTH, VideoFlags);
    if( MainWindow == NULL ) {
        // report the error  <= this is where I exit
        cerr << "Failed to Create Window : " << SDL_GetError() << endl;
        Quit(0);
    }
}

The thing is that SDL_SetVideoMode() returns NULL, which means I do not have a surface... and the subsequent SDL_GetError() doesn't give anything.
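In case it matters, I'm selecting the driver through the SDL_VIDEODRIVER environment variable before running the program. A minimal in-program equivalent (just a sketch, assuming SDL 1.2 on Linux; SDL_VideoDriverName reports which backend was actually picked) would be something like:

#include <SDL/SDL.h>
#include <cstdlib>
#include <iostream>

int main(int, char **)
{
    // Equivalent to "export SDL_VIDEODRIVER=dga" in the shell; must be set
    // before SDL_Init(), and DGA still requires running as root.
    setenv("SDL_VIDEODRIVER", "dga", 1);

    if (SDL_Init(SDL_INIT_VIDEO) == -1) {
        std::cerr << "SDL_Init failed: " << SDL_GetError() << std::endl;
        return 1;
    }

    char name[64];
    if (SDL_VideoDriverName(name, sizeof(name)) != NULL)
        std::cout << "SDL video driver in use: " << name << std::endl;

    SDL_Quit();
    return 0;
}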
Just a quick question: have you installed the latest GeForce drivers from nVidia's site?
Yup, I have the latest drivers, and I've tried running the DGA test application that comes with X (called, oh surprise, "dga"), and it worked nicely... so I guess it is not a driver/graphics-subsystem issue, but really something wrong with my code.

Any other ideas?


Any reply greatly appreciated,


Lanfeust.
Hmmmm. I don't understand why you want to use the DGA video driver.
You'll get accelerated OpenGL with the X11 video driver (and I'm not sure you can have OpenGL at all with DGA).
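In other words, just ask for an SDL_OPENGL window under the default X11 driver and let GLX/DRI do the acceleration. A stripped-down sketch (assuming SDL 1.2; the window size and depth-buffer values are only placeholders):

#include <SDL/SDL.h>
#include <iostream>

int main(int, char **)
{
    if (SDL_Init(SDL_INIT_VIDEO) == -1) {
        std::cerr << "SDL_Init: " << SDL_GetError() << std::endl;
        return 1;
    }

    // No SDL_HWSURFACE / SDL_HWACCEL here -- those flags describe SDL's 2D
    // blitter. For an OpenGL window the acceleration comes from GLX/DRI.
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);

    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);
    if (screen == NULL) {
        std::cerr << "SDL_SetVideoMode: " << SDL_GetError() << std::endl;
        SDL_Quit();
        return 1;
    }

    // ... normal OpenGL rendering here, then SDL_GL_SwapBuffers() each frame ...

    SDL_Quit();
    return 0;
}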

You can take a look at Snowdruid's NeHeGL SDL basecode if you want, but basically the initialization code is the same as yours.
SaM3d!, a cross-platform API for 3D based on SDL and OpenGL.
The trouble is that things never get better, they just stay the same, only more so. -- (Terry Pratchett, Eric)
Well, if I understand correctly, X11 will ONLY be able to provide software surfaces in SDL. In other words, if you run an SDL program with the X11 driver and check the available video memory, you'll get 0 kB available, no hardware blits and no hardware surfaces. In short, it seems that by default SDL is not using the hardware acceleration capabilities of the video card.
Now, I'm not sure how much this affects performance, but if I can squeeze out some extra fps by off-loading the CPU, I'd be happy.

Or is there any other easy way to get hardware acceleration under X11?

SDL just wraps GLX commands, and GLX is hardware accelerated (or at least that is what I call hardware accelerated). You cannot make "hardware surfaces", but that doesn't matter.
Whether or not you have hardware acceleration you can test with an application that renders lots of triangles, uses multitexturing and has lots of lights. First link that program against the Mesa OpenGL library and then against the OpenGL library provided by nVidia... there should be a noticeable difference if you have hardware acceleration.
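A quicker sanity check is to ask the GL context itself what it is once the window is up. Just a sketch (the helper name is mine, and the exact strings vary; with the nVidia driver the renderer should name the card, with software Mesa it usually mentions "Mesa"):

#include <GL/gl.h>
#include <iostream>

// Call this after SDL_SetVideoMode() has created the OpenGL context.
void PrintGLInfo()
{
    std::cout << "GL_VENDOR:   " << reinterpret_cast<const char *>(glGetString(GL_VENDOR))   << std::endl;
    std::cout << "GL_RENDERER: " << reinterpret_cast<const char *>(glGetString(GL_RENDERER)) << std::endl;
    std::cout << "GL_VERSION:  " << reinterpret_cast<const char *>(glGetString(GL_VERSION))  << std::endl;
}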

I have XFree86 4.3, Linux 2.4.21, a GeForce4 Ti 4600 and the latest nVidia Detonator... and I have hardware acceleration.

Our new version has many new and good features. Sadly, the good ones are not new and the new ones are not good.
Could it be that there's a slight 2D/3D confusion here? AFAIK, X11 has problems providing 2D acceleration (that's what the video memory display, and generally every attribute without GL in its name, is about), while 3D acceleration depends purely on your GLX/DRI/DRM setup. Furthermore, I don't know what creating a GL context does to any 2D functionality.

Here's some of my initialization code, which prints some status information:

SDL_Surface* screen;
int GL_Attributes[11];
const SDL_VideoInfo* info;

if( SDL_Init(SDL_INIT_VIDEO) == -1 ) {
    cerr << "Can't init SDL: " << SDL_GetError() << endl;
    exit(1);
}
atexit(SDL_Quit);

// I don't remember why I set these
SDL_GL_SetAttribute( SDL_GL_RED_SIZE, 5 );
SDL_GL_SetAttribute( SDL_GL_GREEN_SIZE, 5 );
SDL_GL_SetAttribute( SDL_GL_BLUE_SIZE, 5 );
SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 16 );
SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );

screen = SDL_SetVideoMode( 1024, 768, 32, SDL_OPENGL );
if( NULL == screen ) {
    cerr << "Can't set video mode: " << SDL_GetError() << endl;
    exit(1);
}

info = SDL_GetVideoInfo();
cerr << "Video information (2D)" << endl
     << "  Hardware available: " << (info->hw_available ? "yes" : "no") << endl
     << "  Window manager: " << (info->wm_available ? "yes" : "no") << endl
     << "  Blit HW: " << (info->blit_hw ? "yes" : "no") << (info->blit_hw_CC ? " (color keyed)" : "") << (info->blit_hw_A ? " (alpha)" : "") << endl
     << "  Blit SW: " << (info->blit_sw ? "yes" : "no") << (info->blit_sw_CC ? " (color keyed)" : "") << (info->blit_sw_A ? " (alpha)" : "") << endl
     << "  Color fills accelerated: " << (info->blit_fill ? "yes" : "no") << endl
     << "  Video memory: " << info->video_mem << "k" << endl
     << "  Resolution: " << screen->w << "x" << screen->h << endl << endl;

SDL_GL_GetAttribute(SDL_GL_RED_SIZE, &GL_Attributes[0]);
SDL_GL_GetAttribute(SDL_GL_GREEN_SIZE, &GL_Attributes[1]);
SDL_GL_GetAttribute(SDL_GL_BLUE_SIZE, &GL_Attributes[2]);
SDL_GL_GetAttribute(SDL_GL_DOUBLEBUFFER, &GL_Attributes[3]);
SDL_GL_GetAttribute(SDL_GL_BUFFER_SIZE, &GL_Attributes[4]);
SDL_GL_GetAttribute(SDL_GL_DEPTH_SIZE, &GL_Attributes[5]);
SDL_GL_GetAttribute(SDL_GL_STENCIL_SIZE, &GL_Attributes[6]);
SDL_GL_GetAttribute(SDL_GL_ACCUM_RED_SIZE, &GL_Attributes[7]);
SDL_GL_GetAttribute(SDL_GL_ACCUM_GREEN_SIZE, &GL_Attributes[8]);
SDL_GL_GetAttribute(SDL_GL_ACCUM_BLUE_SIZE, &GL_Attributes[9]);
SDL_GL_GetAttribute(SDL_GL_ACCUM_ALPHA_SIZE, &GL_Attributes[10]);

cerr << "Video information (GL)" << endl
     << "  RGB bits: " << GL_Attributes[0] << "/" << GL_Attributes[1] << "/" << GL_Attributes[2] << endl
     << "  Double buffer: " << (GL_Attributes[3] ? "yes" : "no") << endl
     << "  Buffer size: " << GL_Attributes[4] << endl
     << "  Depth buffer size: " << GL_Attributes[5] << endl
     << "  Stencil buffer size: " << GL_Attributes[6] << endl
     << "  Accumulator RGBA: " << GL_Attributes[7] << "/" << GL_Attributes[8] << "/" << GL_Attributes[9] << "/" << GL_Attributes[10] << endl << endl;
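If the 2D block reports no hardware (0k video memory, no hardware blits) while the GL block shows a sensible context and rendering is fast, that's just the X11-versus-GLX split described above, not a broken driver.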

