What does BitBlt do?

Started by
5 comments, last by Lord Chiko 17 years, 9 months ago
I know how it works, but instruction-wise, what assembly instructions does it use? Is it faster to use the asm code or to just call it from the GDI library? I'm guessing when you call it, it would just try to copy one address to another, but what I'm getting at is: since it's in the gdi32 library, what if there is generic code in there to keep it "stable" at the cost of speed? I know it may not seem too important, but I really don't like using APIs to get the job done; I'd rather know how they work ^_^ Ty in advance
It's virtually just a memcpy with format conversion if necessary.

I'm pretty sure that (if it's set up correctly) the function call will use the video card to perform the operation.

Also, you can always try to speed up APIs by doing the work yourself. Most of the time it's not too hard, but it's only for minimal gains (minimal, like microseconds at best most of the time). I've learned that trying to optimize by re-coding something is not the way to go. It's all in how you use it.
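To make "virtually just a memcpy" concrete, here's a minimal sketch of what a software blit boils down to: one memcpy per scanline between two pixel buffers. The function name, strides, and one-byte-per-pixel format are made up for illustration; a real BitBlt also handles format conversion, clipping, and raster ops.

```c
#include <string.h>

/* A software "BitBlt" with no format conversion is one memcpy per
   scanline: copy a w-by-h rectangle from (sx, sy) in src to (dx, dy)
   in dst. Strides are in pixels; each pixel is one byte for brevity. */
static void soft_blit(unsigned char *dst, int dst_stride, int dx, int dy,
                      const unsigned char *src, int src_stride, int sx, int sy,
                      int w, int h)
{
    int y;
    for (y = 0; y < h; y++)
        memcpy(dst + (dy + y) * dst_stride + dx,
               src + (sy + y) * src_stride + sx,
               (size_t)w);
}
```

A hardware-accelerated BitBlt does the same copy, but the video card performs it without the CPU touching the pixels.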
Well, see, what I'm trying to focus on is mainly the Get/SetDIBits APIs.

You can send it an array of bytes to display a picture; however, this is slow compared to blitting one image to another.

I did a test. I can't remember exactly the size, but I know that SetDIBits could perform it 200 times a second where BitBlt could do it 2,000 times a second.

So what's the difference between copying the data from an array to the screen to be displayed,

and copying data from an image to the screen?

Is it because the image is stored in video memory and the array is stored in system memory?

If that's the case, then why is system memory to video memory so slow?

Sorry for all the questions, but I'm just trying to clarify it all and why I'm having these problems.

I remember one time I was streaming the webcam into my program and it was pretty smooth, but when I went to capture it using BitBlt and redraw it, it wasn't as fast as streaming it, and I think that's because it was coming from the video card.
Quote:Original post by blaze02
Its virtually just a memcpy with format conversion if necessary.


That's what a DIB implementation is. BitBlt is more like a wrapper around a wrapper, on top of a wrapper. It's slow, it's painful to use, and it's utter crap for real-time games.
Quote:Original post by Lord Chiko
i did a test i can't remember exactly the size but i know that setdibits could perform it 200 times a second where bitblt could do it 2,000 times a second


You don't call SetDIBits more than once per frame! When you're preparing a frame for, say... a spaceship game, this is the process of drawing your frame.

Method #1 (Using BitBlt)
BitBlt(Aurora);
BitBlt(Background);
BitBlt(Ship);
while(!AllProjectilesDrawn)
    BitBlt(Projectile);
BitBlt(UI);

Method #2 (Using DIBs)
MyDrawingFunction(Aurora);
MyDrawingFunction(Background);
MyDrawingFunction(Ship);
while(!AllProjectilesDrawn)
    MyDrawingFunction(Projectile);
MyDrawingFunction(UI);
SetDIBits(MyDrawings);

Where "MyDrawingFunction" is a function that you wrote to manipulate the raw pixels in the "MyDrawings" RGB array, which you made the same size as the window. Note: you'll likely want to create your own sprite format, since device contexts are out of the question.
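A MyDrawingFunction-style helper could look something like this minimal sketch: it fills a clipped rectangle in a tightly packed 24-bit RGB buffer the size of the window. The names (RGB24, draw_rect) and the rectangle-fill operation are made up for illustration; a real sprite drawer would copy pixels from a sprite surface instead.

```c
/* Hypothetical MyDrawingFunction-style helper: fill a rectangle in a
   tightly packed 24-bit buffer that is buf_w x buf_h pixels. The
   rectangle is clipped against the buffer edges. */
typedef struct { unsigned char b, g, r; } RGB24;

static void draw_rect(RGB24 *buf, int buf_w, int buf_h,
                      int x, int y, int w, int h, RGB24 colour)
{
    int px, py;
    for (py = y; py < y + h; py++) {
        if (py < 0 || py >= buf_h) continue;     /* clip vertically */
        for (px = x; px < x + w; px++) {
            if (px < 0 || px >= buf_w) continue; /* clip horizontally */
            buf[py * buf_w + px] = colour;
        }
    }
}
```

You compose the whole frame with calls like this, then hand the finished buffer to SetDIBits once.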

So you can see, it's apparent that if you're using SetDIBits in place of MyDrawingFunction, your tests are not being run properly.

DIBs are almost always faster than BitBlt (especially when the client is using a video card that doesn't support hardware-accelerated BitBlt, like my previous video card). Here are some random statistics.

Thevenin's MMORPG Performance readings:
Fleurin(GDI) - 30FPS
Fleurin(GDI - Hardware Accelerated via RadeonX800) - 70FPS
Fleurin(DIBs) - 150FPS
Fleurin(MDX) - 200+FPS [This would be a lot higher if the game-logic wasn't bottlenecking it]

In addition, by writing your own MyDrawingFunction() you can put in any sort of special effects you want, like translucency, noise, blur, etc.

By the way... since you're probably using C++, be sure to capture a screenshot of what gets displayed the first time you write your "MyDrawingFunction". Buffer overflows are pieces of art.


Also... each row of your RGB array MUST BE PADDED TO A MULTIPLE OF FOUR BYTES! (DIB scanlines are DWORD-aligned, so for 24-bit pixels the stride is width*3 rounded up to a multiple of four.)
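The row padding rule can be computed like this (the function name is mine; the rule itself is the standard DWORD alignment of DIB scanlines):

```c
/* DIB scanlines are DWORD-aligned: the stride (bytes per row) of a
   24-bit DIB is the pixel width times 3, rounded up to the next
   multiple of 4. */
static int dib_stride_24bpp(int width_px)
{
    return (width_px * 3 + 3) & ~3;
}
```

So a 5-pixel-wide 24-bit image has rows of 16 bytes (15 bytes of pixels plus 1 byte of padding), while a 4-pixel-wide image needs no padding (12 bytes exactly).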


Here is how I did it... err, this code works, but it doesn't draw the above (which should be an image of a bear).
/* WARNING: DIEHARD PROCEDURAL-C CODE BELOW, AND IT HAS WARTS! OMFG! */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
guiSmallWidth  - [unsigned int] Width of client area of window
guiSmallHeight - [unsigned int] Height of client area of window
*/

struct gtRGB
{
	unsigned char lucBlue, lucGreen, lucRed;
};

/* This procedure creates the sprite surface. */
struct gtRGB* fCreateSurface(unsigned int luiWidth, unsigned int luiHeight)
{
	struct gtRGB *laoBuf;

	/* Allocate the surface, and move the width and height onto its first 8 bytes. */
	laoBuf = malloc(3*luiWidth*luiHeight + 8);
	memcpy((char *)laoBuf,     &luiWidth,  4);
	memcpy((char *)laoBuf + 4, &luiHeight, 4);
	return laoBuf;
}

/* This procedure loads the sprite file, and stores it in a sprite surface it creates. */
struct gtRGB* fLoadFSF(char *lpFileName, unsigned int luiWidth, unsigned int luiHeight)
{
	unsigned int luiX, luiY;
	struct gtRGB *laoBuf;
	FILE *lpFSF;

	lpFSF = fopen(lpFileName, "rb");

	/* Allocate enough memory. */
	laoBuf = malloc(3*luiWidth*luiHeight + 8);

	/* Now read!1 */
	for(luiX=0; luiX<luiWidth; luiX++)
		for(luiY=0; luiY<luiHeight; luiY++)
			fread(&laoBuf[(luiX*luiHeight) + luiY], sizeof(char), 3, lpFSF);

	/* Now shift it all over eight bytes, and then load the width and height in front of it. */
	memmove(((char *)laoBuf) + 8, laoBuf, 3*luiWidth*luiHeight);
	fread(laoBuf, sizeof(int), 2, lpFSF);

	fclose(lpFSF);
	return laoBuf;
}

/* This function draws the sprite into a buffer. */
void sDrawFSF(int liPX, int liPY, struct gtRGB *laoDest, struct gtRGB *laoSrc)
{
	int liX, liY;
	unsigned int luiWidth, luiHeight;

	luiWidth  = *((unsigned int *)laoSrc);
	luiHeight = *((unsigned int *)(((char *)laoSrc) + 4));

	/* The first eight bytes are the width and height. */
	laoSrc = (struct gtRGB *)(((char *)laoSrc) + 8);

	/* For all pixels in the FSF. */
	for(liX=0; liX<(int)luiWidth; liX++)
		for(liY=0; liY<(int)luiHeight; liY++)
			/* Check if it's transparent (magenta is the colour key). */
			if(laoSrc[liX*luiHeight + liY].lucRed   != 255 ||
			   laoSrc[liX*luiHeight + liY].lucBlue  != 255 ||
			   laoSrc[liX*luiHeight + liY].lucGreen != 0)
				/* Ok, now check if it's within the boundaries of the drawing field. */
				if(liPX + liX < (int)guiSmallWidth &&
				   liPY + liY < (int)guiSmallHeight &&
				   liPX + liX >= 0 &&
				   liPY + liY >= 0)
					/* It's within the boundaries, so draw the pixel! xD
					   (Rows are flipped; guiSmallHeight-1 keeps the top
					   row inside the buffer.) */
					laoDest[liX+liPX + (guiSmallHeight-1-(liY + liPY))*guiSmallWidth] =
						laoSrc[liX*luiHeight + liY];
}


[Edited by - Thevenin on July 2, 2006 12:15:07 AM]
Quote: Original post by Thevenin
[...] DIBs are almost always faster than BitBlt (especially when the client is using a video card that doesn't support hardware-accelerated BitBlt, like my previous video card). [...]


Yeah, that's basically what I meant; however, it was painfully slow... but I'd take it, because I was modifying the array in Visual Basic. Just writing a wrapper to modify the right part of the array (since it starts from bottom left to top right) is a bit of a challenge.

What I was hoping to do was send a VB array to C++ (I've done this much), modify the array in C++ (can't do that), and send it back to VB,

so that would solve the speed issue and have Visual Basic call GetDIBits and SetDIBits when required.

Do you reckon that would be good enough for real-time 2D gaming? If so, how would I go about modifying the array in C++?


Edit:

I checked that code and... I'm not very C++ savvy. Would you be able to upload the .cpp and its workspace? :D
Umm, any ideas on how I can write a wrapper for it? Or one that's already been made?

The way I'm thinking of doing it seems a bit messy: modifying it in a byte array and then displaying it, like where to draw the objects and whatnot ^_^

Oh, and what's MDX?

Fleurin(MDX) - 200+FPS [This would be a lot higher if the game-logic wasn't bottlenecking it]

This topic is closed to new replies.
