Archived

This topic is now archived and is closed to further replies.


Initializing video mode


sdlprorammer    100
Can anyone PLEASE tell me what SDL_SWSURFACE and SDL_HWSURFACE do? Actually, what's their difference? From the documentation:

SDL_SWSURFACE: Create the video surface in system memory
SDL_HWSURFACE: Create the video surface in video memory

But I really don't get it. What's video memory and what's system memory? (OK, maybe I understand what system memory is, but what about video memory?)

And finally, what's double buffering, and what's the difference from single buffering? Please don't use advanced terms. From the doc:

SDL_DOUBLEBUF: Enable hardware double buffering; only valid with SDL_HWSURFACE. Calling SDL_Flip will flip the buffers and update the screen. All drawing will take place on the surface that is not displayed at the moment. If double buffering could not be enabled then SDL_Flip will just perform a SDL_UpdateRect on the entire screen.

Again, that's all black to me. Thanks in advance.

TomasH    360
quote:
Original post by sdlprorammer
Can anyone PLEASE tell me what SDL_SWSURFACE and SDL_HWSURFACE do? Actually, what's their difference?
From the documentation:

SDL_SWSURFACE: Create the video surface in system memory
SDL_HWSURFACE: Create the video surface in video memory

But I really don't get it. What's video memory and what's system memory? (OK, maybe I understand what system memory is, but what about video memory?)

System memory - the "normal" memory your computer has
Video memory - the memory of your video card.
I guess there should be a difference in performance here...

quote:
And finally, what's double buffering, and what's the difference from single buffering? Please don't use advanced terms.

Single buffering - You only have one buffer for your graphics. You draw to it and display it on screen. Since you make updates to the same buffer that is being displayed, the picture might flicker.
Double buffering - You have two buffers. You draw to one while displaying the other. When you've finished drawing, you swap buffers so that the one you've just drawn to is displayed. This way you avoid flickering.

sdlprorammer    100
Thanks, Tomas, for your help.
1) Why would I need to use video memory (on my video card) instead of system memory, of which there is more?

2) OK, thanks, I got it, but I don't really understand this: why might the picture flicker if I make updates to the same buffer that is being displayed?

thanks...

Pipo DeClown    804
1) Well, video memory is often smaller than system memory. The idea is that video memory is faster because the data is already there on the card; you don't have to send it to your video hardware.

2) Because plotting pixels might be slow. Flickering means the user might see the pixels being plotted. This is not good, since you want them to see the whole frame at the same time.

Now, if I'm right, you can either turn V-Sync on or use two buffers: one front (the user sees this one) and one back (you write to this one). Then you switch the buffers (front becomes back and back becomes front) and write to the new back buffer (which was the old front buffer).
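In SDL 1.2 you request that setup through SDL_SetVideoMode. A minimal sketch (assuming SDL is initialised and a display exists; the 640x480x32 mode is just an example, and SDL silently falls back to software surfaces if the hardware can't comply):

```c
#include <SDL/SDL.h>

int main(int argc, char *argv[]) {
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;

    /* Ask for a hardware surface with two buffers. */
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32,
                                           SDL_HWSURFACE | SDL_DOUBLEBUF);
    if (screen == NULL) {
        SDL_Quit();
        return 1;
    }

    /* Draw a full frame to the (hidden) back buffer... */
    SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 0, 0, 255));

    /* ...then flip: the finished frame is shown all at once. */
    SDL_Flip(screen);

    SDL_Delay(2000);
    SDL_Quit();
    return 0;
}
```

If SDL_DOUBLEBUF could not be honoured, SDL_Flip degrades to an SDL_UpdateRect over the whole screen, as the documentation says.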



--
You're Welcome,
Rick Wong
- Google | Google for GameDev.net | GameDev.net's DirectX FAQ. (not as cool as the Graphics and Theory FAQ)

Onemind    265
First, I'll explain double buffering. If you draw directly to the screen, the player can see some objects being covered up by other objects - this is called occlusion. The player will probably see this as flickering, and it will cause a headache if the player looks at it long enough. Thus double buffering was born: everything is drawn to an off-screen surface, and then the off-screen surface is copied to the screen.

Video memory has faster access to the back buffer (it's also in video memory), but unless you have a good 3D card, video memory is scarce. So you want to put only things that have to be accessed every frame in precious video memory, and put other things that are accessed less often in system memory.

Video Memory is much faster because general System Memory (i.e. RAM) stores lots of stuff, so the video card would have to wait in line to get its information.

Generally, when I create an SDL surface, I try to create it in video memory; if that fails, then I create it in system memory.


image = SDL_CreateRGBSurface(SDL_HWSURFACE | SDL_SRCCOLORKEY,
                             b.image->w, b.image->h,
                             b.image->format->BitsPerPixel,
                             b.image->format->Rmask, b.image->format->Gmask,
                             b.image->format->Bmask, b.image->format->Amask);

/* Fall back to a software surface in system memory. */
if (image == NULL)
    image = SDL_CreateRGBSurface(SDL_SWSURFACE | SDL_SRCCOLORKEY,
                                 b.image->w, b.image->h,
                                 b.image->format->BitsPerPixel,
                                 b.image->format->Rmask, b.image->format->Gmask,
                                 b.image->format->Bmask, b.image->format->Amask);


[edited by - Onemind on May 26, 2004 11:27:24 AM]

python_regious    929
quote:
Original post by Pipo DeClown
Now if I'm right, you can either turn V-Sync on or use two buffers


You cannot have V-Sync without using double buffering. V-Sync basically waits until the vertical refresh before swapping the front and back buffers. Without V-Sync, the buffers can be swapped at any time, which is where "tearing" comes into play (you see half of one buffer and half of the other on screen at the same time). If the front and back buffers are quite different, this becomes really quite noticeable.

quote:

First, I'll explain double buffering. If you draw directly to the screen, the player can see some objects being covered up by other objects - this is called occlusion. The player will probably see this as flickering, and it will cause a headache if the player looks at it long enough. Thus double buffering was born: everything is drawn to an off-screen surface, and then the off-screen surface is copied to the screen.



Drawing directly to the frame buffer will just make the drawing operation apparent to the viewer. They will literally "see" the polygons being drawn. Since some polys occlude others, the viewer will see flickering due to the fact that some polys are actually only visible for a very short time. Double buffering was used to eliminate this, yes, but the back buffer is not copied to the front buffer; instead their pointers are switched.

quote:

Video Memory is much faster because general System Memory (i.e. RAM) stores lots of stuff, so the video card would have to wait in line to get its information.



The fact that system memory stores a lot of stuff is irrelevant. The main point is that unless the data is in AGP memory, the video card cannot DMA the data directly from memory, which means the CPU will have to be involved in getting the data to the graphics card = bad. If the data is in AGP memory, then the video card can DMA the data directly from memory, without the CPU being involved. This is good, but can still be hindered by the speed of the AGP bus. Video memory is the fastest of them all, because the GPU/VPU can directly access it (it's actually soldered onto the graphics card), without going over the slow(ish) AGP bus.





You have to remember that you're unique, just like everybody else.

sdlprorammer    100
Many, many, MANY thanks for the kind replies. OK, I got it (although I don't know what DMA, GPU, and VPU are...).

One question though, about drawing directly to the screen (no double buffering). OK, I get this: "Drawing directly to the frame buffer will just make the drawing operation apparent to the viewer. They will literally 'see' the polygons being drawn." But does this explain flickering? What I think is that the user sees the polygons being drawn "animatedly". I mean, they will see the shape drawn pixel by pixel, for i = 0 to 300, for example. ...Or not?

From Onemind's little code, what does SDL_SRCCOLORKEY do? (I've read the documentation but did not understand it.)

And for SDL_SetVideoMode(), do you use system memory or video memory?

thanks

[edited by - sdlprorammer on May 26, 2004 1:44:33 PM]

python_regious    929
quote:
Original post by sdlprorammer
Many, many, MANY thanks for the kind replies. OK, I got it (although I don't know what DMA, GPU, and VPU are...).



DMA = Direct Memory Access. Basically this means that anything that has DMA enabled can access memory directly, without going via the CPU.

GPU = Graphics Processing Unit
VPU = Visual/Video (can't remember which) Processing Unit

GPU and VPUs are basically equivalent. ATI calls them VPUs now; NVidia calls them GPUs.

quote:

I mean, they will see the shape drawn pixel by pixel, for i = 0 to 300, for example. ...Or not?



No. The actual pixel rasterisation process is too fast to see (newer graphics cards don't actually rasterise pixel by pixel either). However, you can see actual polygons appear, and then disappear behind another polygon that's just been drawn over the top. So, for a split second, you see one polygon before it is overdrawn (or occluded) by another. So, you see a flicker. Also, because of the screen refresh, you can see polygons - or more likely parts of polygons - that you shouldn't.

So, for example, say you're drawing two polygons, the second completely occluding the first. The first is rasterised, but before the second can be, the monitor has displayed that part of the framebuffer. In this case, you see the first polygon when you shouldn't. Since the refresh rate is quite high, however, the next refresh will probably show the second polygon. Hence the flickering.



You have to remember that you're unique, just like everybody else.
