

win32 - StretchBlt() vs StretchDIBits()?


14 replies to this topic

#1 mark-w   Members   -  Reputation: 136


Posted 11 September 2005 - 10:34 AM

Hi All, What's the difference between StretchBlt() and StretchDIBits()? Several references on the net suggest StretchDIBits is much faster than StretchBlt. Is this true? I need to be able to stretch a bitmap to the screen many times a second to mimic zooming. I was using MFC's image object, CImage, to do this - but it doesn't even have a StretchDIBits method! Is it deprecated or something? Thanks!
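For reference, here is roughly how the two calls are shaped (a sketch only - hwnd, hBitmap, pBits and bmi are assumed to already exist and are not from the original post):

HDC screen = GetDC(hwnd);

// DDB route: both source and destination are device contexts, so the
// source bitmap (hBitmap, a DDB) must first be selected into a memory DC.
HDC memdc = CreateCompatibleDC(screen);
HBITMAP old = (HBITMAP)SelectObject(memdc, hBitmap);
StretchBlt(screen, 0, 0, 640, 480,   // destination rectangle
           memdc,  0, 0, 320, 240,   // source rectangle
           SRCCOPY);
SelectObject(memdc, old);
DeleteDC(memdc);

// DIB route: no source DC at all - the pixel buffer (pBits) and its
// BITMAPINFO header (bmi, e.g. as filled in for CreateDIBSection) are
// passed directly.
StretchDIBits(screen, 0, 0, 640, 480,
              0, 0, 320, 240,
              pBits, &bmi, DIB_RGB_COLORS, SRCCOPY);

ReleaseDC(hwnd, screen);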


#2 outRider   Members   -  Reputation: 852


Posted 11 September 2005 - 12:20 PM

Somehow that doesn't seem right. Working with a device-dependent bitmap (DDB) should always be as fast as or faster than working with a device-independent bitmap (DIB). The DIB version has to convert the image to the device's format before it's displayed anyway.

#3 mark-w   Members   -  Reputation: 136


Posted 11 September 2005 - 12:43 PM

Does StretchDIBits use the video card to do the stretching, while StretchBlt() does it in software only? Or does either of them take hardware into account?

#4 Extrarius   Members   -  Reputation: 1412


Posted 11 September 2005 - 02:40 PM

Well, in my experience, StretchDIBits isn't actually implemented properly, so you can't compare it to StretchBlt at all.
In a project I did for work where I was working with DIBs, I spent days trying to get StretchDIBits to work, and the closest I came basically involved using random magic numbers for the various parameters. I then decided to just create memory DCs, select the DIB into them, and use StretchBlt, and it worked fine with the obvious parameters (sketched below). I found several knowledge-base articles describing problems with StretchDIBits, but none of the issues I was seeing (such as the function doing nothing, drawing parts of the image upside down, or drawing black rectangles) were caused by any of the documented problems. I'm still not sure what was going on, but I tried everything I could think of (which was quite a bit) and none of it made StretchDIBits behave in a manner consistent with its documentation.

Anyway, the difference is that StretchDIBits uses a device-independent bitmap (DIB) as the source, while StretchBlt uses a device-dependent bitmap (DDB). They (supposedly) do the same thing, but with different formats of source data.
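That approach looks something like this (a rough sketch; the 256x256 size and variable names are illustrative):

BITMAPINFO bmi;
ZeroMemory(&bmi, sizeof(bmi));
bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
bmi.bmiHeader.biWidth = 256;
bmi.bmiHeader.biHeight = 256;        // positive height = bottom-up DIB
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;

void *bits = NULL;
HDC screen = GetDC(NULL);
HDC memdc = CreateCompatibleDC(screen);
HBITMAP dib = CreateDIBSection(memdc, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
HBITMAP old = (HBITMAP)SelectObject(memdc, dib);

// ... fill 'bits' with pixel data here ...

// Stretch with the obvious parameters - the DIB behaves like any other
// bitmap once it's selected into the memory DC.
StretchBlt(screen, 0, 0, 512, 512, memdc, 0, 0, 256, 256, SRCCOPY);

SelectObject(memdc, old);
DeleteObject(dib);
DeleteDC(memdc);
ReleaseDC(NULL, screen);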

#5 Anon Mike   Members   -  Reputation: 1098


Posted 11 September 2005 - 05:49 PM

StretchDIBits works fine. At least I haven't had any problems using it. You need to know a little more GDI magic to fully understand what all the parameters are supposed to be though. It's definitely not deprecated.

For raw blt speed I'd be surprised if StretchDIBits beat StretchBlt; I'd expect them to be roughly the same. The way to find out which is faster is to try it. If you need to process the pixels in some way before blitting, then StretchDIBits would probably be quite a bit faster, because you have a direct pointer to the bitmap bits (see the sketch below).
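For instance (a sketch, assuming a 256x256 32bpp DIB section whose pixel pointer is 'bits' and whose header is 'bmi'):

// Process the pixels in place before drawing - possible only because a
// DIB section exposes its pixel memory directly.
GdiFlush();                              // ensure GDI is done touching the bits
BYTE *p = (BYTE *)bits;
for (int i = 0; i < 256 * 256; ++i)
{
    p[i * 4 + 0] = 255 - p[i * 4 + 0];   // invert blue
    p[i * 4 + 1] = 255 - p[i * 4 + 1];   // invert green
    p[i * 4 + 2] = 255 - p[i * 4 + 2];   // invert red
}
StretchDIBits(hdcDest, 0, 0, 512, 512, 0, 0, 256, 256,
              bits, &bmi, DIB_RGB_COLORS, SRCCOPY);

With a DDB you would instead have to round-trip the pixels through GetDIBits/SetDIBits, which is what makes per-pixel work on DDBs slower.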

GDI will hardware accelerate if the video driver supports it.

#6 JakeM   Members   -  Reputation: 168


Posted 11 September 2005 - 05:53 PM

I would think that when it comes to 2D, most, if not all, GDI functions use software rendering.
Maybe GDI+ uses hardware rendering; I don't know for sure. Hardware rendering is mostly for 3D stuff, I would guess.

#7 Anonymous Poster   Guests   -  Reputation:


Posted 11 September 2005 - 06:14 PM

Quote:
Original post by JakeM
I would think that when it comes to 2D, most, if not all, GDI functions use software rendering.
Maybe GDI+ uses hardware rendering; I don't know for sure. Hardware rendering is mostly for 3D stuff, I would guess.


uhmmm, no, no, and no.

GDI functions are definitely hardware accelerated, but of course your mileage will vary depending on your video card's driver. GDI+ uses advanced CPU opcodes as well as whatever GDI uses, since it sits on top of GDI for a lot of its basic functionality. 2D hardware acceleration has been around forever, and it didn't go away just because the video card vendors focused on 3D. There just hasn't been much of anything to work on, since MS hasn't added anything new to its graphics APIs except layered windows.


#8 mark-w   Members   -  Reputation: 136


Posted 12 September 2005 - 02:50 AM

Thanks for the responses everyone,

It seems that when I first used StretchBlt() a month ago I must have been using it inefficiently (without clipping the source rectangle, etc.), so I was under the impression it was really slow. Now that I've taken the time to figure out what the source rectangle should be when I 'zoom in' on my bitmap, StretchBlt() works extremely fast.
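The idea, as a rough sketch (hdcWindow, hdcBitmap, panX and panY are illustrative names, not the actual code):

// At 2x zoom each source pixel covers a 2x2 block of destination pixels,
// so only a half-width, half-height window of the source is visible.
double zoom = 2.0;
int viewW = 640, viewH = 480;        // client area size
int srcW = (int)(viewW / zoom);      // visible portion of the source bitmap
int srcH = (int)(viewH / zoom);

StretchBlt(hdcWindow, 0, 0, viewW, viewH,
           hdcBitmap, panX, panY, srcW, srcH, SRCCOPY);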

Thanks!
Mark

#9 JakeM   Members   -  Reputation: 168


Posted 12 September 2005 - 04:36 AM

Quote:
Original post by Anonymous Poster
uhmmm, no, no, and no.

GDI functions are definitely hardware accelerated, but of course your mileage will vary depending on your video card's driver. GDI+ uses advanced CPU opcodes as well as whatever GDI uses, since it sits on top of GDI for a lot of its basic functionality. 2D hardware acceleration has been around forever, and it didn't go away just because the video card vendors focused on 3D. There just hasn't been much of anything to work on, since MS hasn't added anything new to its graphics APIs except layered windows.


uhmmm, no, no, and no. If GDI functions were hardware accelerated, then that would mean the device context (hDC) is also hardware accelerated, which in turn would mean you could create a hardware-accelerated offscreen memory hDC that could be used with OpenGL, etc... But this is completely bogus.

#10 Nik02   Crossbones+   -  Reputation: 2883


Posted 12 September 2005 - 04:45 AM

Quote:
Original post by JakeM
Quote:
Original post by Anonymous Poster
uhmmm, no, no, and no.

GDI functions are definitely hardware accelerated, but of course your mileage will vary depending on your video card's driver. GDI+ uses advanced CPU opcodes as well as whatever GDI uses, since it sits on top of GDI for a lot of its basic functionality. 2D hardware acceleration has been around forever, and it didn't go away just because the video card vendors focused on 3D. There just hasn't been much of anything to work on, since MS hasn't added anything new to its graphics APIs except layered windows.


uhmmm, no, no, and no. If GDI functions were hardware accelerated, then that would mean the device context (hDC) is also hardware accelerated, which in turn would mean you could create a hardware-accelerated offscreen memory hDC that could be used with OpenGL, etc... But this is completely bogus.


The implementation of device contexts may actually be hardware accelerated, depending on the video drivers. The whole point of the abstraction is that the programmer doesn't have to care about the implementation and can just use the context handles when drawing. Blitting operations in particular are commonly accelerated in hardware - I remember a friend of mine having a "Windows Graphics Accelerator" some 14 years ago [smile]

OpenGL is an entirely different subsystem from GDI, and needs a device context mainly for synchronization with the Windows core drawing system. The point about GDI hardware acceleration is irrelevant in that context.


Niko Suni


#11 Kippesoep   Members   -  Reputation: 892


Posted 12 September 2005 - 04:49 AM

Quote:
Original post by mark-w
It seems that when I first used StretchBlt() a month ago I must have been using it inefficiently (without clipping the source rectangle, etc.), so I was under the impression it was really slow. Now that I've taken the time to figure out what the source rectangle should be when I 'zoom in' on my bitmap, StretchBlt() works extremely fast.

StretchBlt can be fast and it can be slow. On NVIDIA cards, StretchBlt is hardware accelerated and very fast. On ATI cards (at least the ones I've tried), it is not accelerated and is dog slow, managing only about 5 fps when stretching a simple bitmap from 640x480 to 1024x768.

Quote:
Original post by JakeM
uhmmm, no, no, and no. If GDI functions were hardware accelerated, then that would mean the device context (hDC) is also hardware accelerated, which in turn would mean you could create a hardware-accelerated offscreen memory hDC that could be used with OpenGL, etc... But this is completely bogus. Have you ever tried to copy memory from OpenGL to GDI or vice versa? You can't do it in hardware. Show me proof if you can.

Actually, GDI is often hardware accelerated. That is done only for operations that actually draw to device DCs, of course (after all, it's the Graphics Device Interface). Having hardware acceleration is not mutually exclusive with having unaccelerated offscreen memory DCs (or printer DCs). I remember the 3dfx Voodoo Banshee being touted as the first graphics card to implement the entire GDI in hardware.

Kippesoep

#12 Anon Mike   Members   -  Reputation: 1098


Posted 12 September 2005 - 10:33 AM

Let's end the "does GDI support hardware acceleration" debate by going straight to the docs: the MSDN documentation for the driver entry points that implement GDI hardware acceleration.

It's up to the driver to tell GDI that it wants to handle these sorts of things. If the driver does not support an operation, GDI will do it in software. Even if the driver *does* ask to get called by GDI, it can change its mind and call back into GDI to handle things anyway - this is actually the common case for things like text output (see EngTextOut, which is the GDI callback that drivers make when they don't want to fully support text). In that case the driver will do things like accumulate dirty regions, have GDI do all the heavy lifting, and then do a simple blt to get the final bits on the screen.

#13 Extrarius   Members   -  Reputation: 1412


Posted 12 September 2005 - 03:47 PM

Quote:
Original post by Anon Mike
StretchDIBits works fine. At least I haven't had any problems using it. You need to know a little more GDI magic to fully understand what all the parameters are supposed to be though.[...]
Could you explain what you mean by the last statement quoted? It seems pretty obvious from the documentation that all the parameters are the same, except that for the source you provide info about a DIB (the pointer to its 'bits', meaning the out parameter of CreateDIBSection, and the bitmap info used to create it).
I got StretchDIBits to work fine as long as I didn't actually stretch, but as soon as I did, weird random things started happening. I even tried negating various parameters and creating an upside-down DIB (instead of using a negative height - see the note below), but nothing made it stretch properly on any of the machines I tested it on (some had onboard graphics, others had NVIDIA cards, and the OSes varied between 2000 and XP).
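For reference, the negative-height (top-down) variant is set up like this - this is just the header setup; whether it cures the stretching weirdness is exactly what's in question here:

BITMAPINFO bmi;
ZeroMemory(&bmi, sizeof(bmi));
bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
bmi.bmiHeader.biWidth = 200;
bmi.bmiHeader.biHeight = -200;   // negative height = top-down DIB:
                                 // row 0 of the pixel buffer is the top scanline
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 24;
bmi.bmiHeader.biCompression = BI_RGB;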

#14 Anon Mike   Members   -  Reputation: 1098


Posted 13 September 2005 - 06:00 AM

Quote:
Extrarius
Could you explain what you mean by the last statement quoted?

Just that the docs can be a bit confusing if you don't really understand the lingo, although now that I look again I think I was thinking of SetDIBitsToDevice more than StretchDIBits.

Quote:
I got StretchDIBits to work fine as long as I didn't actually stretch

Here's some code that captures the upper-left 200x200 corner of the desktop into a 200x200, 24bpp DIB section, then uses StretchDIBits to shrink it to 50x50 and copy it to the client area of the app:


#include <windows.h>
#include <windowsx.h>   // for the SelectBitmap macro

// Create the DIB section and put it in a memory DC. Note: a positive
// biHeight means a bottom-up DIB (row 0 of the buffer is the bottom scanline).
BITMAPINFO bitmapinfo;
ZeroMemory(&bitmapinfo, sizeof(bitmapinfo));
bitmapinfo.bmiHeader.biSize = sizeof(bitmapinfo.bmiHeader);
bitmapinfo.bmiHeader.biWidth = 200;
bitmapinfo.bmiHeader.biHeight = 200;
bitmapinfo.bmiHeader.biPlanes = 1;
bitmapinfo.bmiHeader.biBitCount = 24;
bitmapinfo.bmiHeader.biCompression = BI_RGB;

void * bits;
HDC desktop = GetDC(NULL);
HDC memory = CreateCompatibleDC(desktop);
HBITMAP dibsection = CreateDIBSection(memory, &bitmapinfo, DIB_RGB_COLORS, &bits, NULL, 0);
HBITMAP oldbitmap = SelectBitmap(memory, dibsection);

// Capture a 200x200 image from the desktop into the DIB section
BitBlt(memory, 0, 0, 200, 200, desktop, 0, 0, SRCCOPY);

// Stretch the captured image to a 50x50 region of the app client area
// (m_dc is the destination DC of my application's window)
StretchDIBits(m_dc, 0, 0, 50, 50, 0, 0, 200, 200, bits, &bitmapinfo, DIB_RGB_COLORS, SRCCOPY);

// Cleanup: restore the old bitmap, delete the DIB section, release the DCs
DeleteObject(SelectBitmap(memory, oldbitmap));
DeleteDC(memory);
ReleaseDC(NULL, desktop);


#15 Extrarius   Members   -  Reputation: 1412


Posted 13 September 2005 - 09:43 AM

I made a test application using your code and found it to mostly work, but the system still treats the coordinate system somewhat strangely - it measures the source Y and height from the bottom of the image, even for a 'right-side up' DIB (i.e. one created with a negative height). I can't remember the exact problems, but I was having some serious problems with source and destination coordinates not making any sense, such as working the way described above for some coordinates and in other ways at seemingly random times.
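The bottom-of-image origin is at least consistent with how bottom-up DIBs are documented (their origin is the lower-left corner), so a source rectangle expressed in the usual top-down coordinates has to be converted first. A sketch, with bits and bitmapinfo as in the code above and the other names illustrative:

int bmHeight = 200;                    // DIB height (positive = bottom-up)
int topY = 20, srcH = 100;             // desired source rect, top-down coords
int ySrc = bmHeight - (topY + srcH);   // same rect in bottom-up coords

StretchDIBits(hdc, 0, 0, 50, 50,
              0, ySrc, 50, srcH,
              bits, &bitmapinfo, DIB_RGB_COLORS, SRCCOPY);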



