mark-w

win32 - StretchBlt() vs StretchDIBits()?


Hi all, what's the difference between StretchBlt() and StretchDIBits()? Several references on the net suggest StretchDIBits is much faster than StretchBlt. Is this true? I need to be able to stretch a bitmap to the screen many times a second to mimic zooming. I was using MFC's CImage class to do this, but it doesn't even have a StretchDIBits method! Is it deprecated or something? Thanks!

Somehow that doesn't seem right. Working with a device-dependent bitmap (DDB) should always be at least as fast as working with a device-independent bitmap (DIB): the DIB version has to be converted to a DDB before it's displayed anyway.
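For what it's worth, that one-time conversion can be done explicitly up front with CreateDIBitmap, so repeated blits skip any per-call format conversion. A minimal sketch (the function name is mine, and bmi/bits are assumed to describe an existing DIB):

```cpp
#include <windows.h>

// Sketch: build a DDB from a DIB's header and bits once, so later
// StretchBlt calls don't pay for a per-call format conversion.
// 'bmi' and 'bits' are assumed to come from your own DIB setup.
HBITMAP DibToDdb(HDC screenDC, const BITMAPINFO* bmi, const void* bits)
{
    return CreateDIBitmap(screenDC, &bmi->bmiHeader, CBM_INIT,
                          bits, bmi, DIB_RGB_COLORS);
}
```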

Does StretchDIBits use the video card to do the stretching, while StretchBlt() does it in software only? Or does either of them take that into account?

Well, in my experience, StretchDIBits isn't actually implemented properly, so you can't compare it to StretchBlt at all.
In a project I did for work where I was working with DIBs, I spent days trying to get StretchDIBits to work, and the closest I came basically involved using random magic numbers for the various parameters. I then decided to just create memory DCs, select the DIB into them, and use StretchBlt, and it worked fine with the obvious parameters. I found several knowledge-base articles describing problems with StretchDIBits, but none of them covered the issues I was seeing (such as the function doing nothing, drawing parts of the image upside down, or drawing black rectangles). I'm still not sure what was going on, but I tried everything I could think of (which was quite a bit), and none of it made StretchDIBits behave in a manner consistent with its documentation.
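For anyone hitting the same wall, here's a minimal sketch of that memory-DC workaround (function and variable names are mine; hDib is assumed to be an HBITMAP from CreateDIBSection):

```cpp
#include <windows.h>

// Sketch of the memory-DC workaround: select the DIB section into a
// memory DC and let StretchBlt do the stretching instead of calling
// StretchDIBits directly.
void BlitViaMemoryDC(HWND hwnd, HBITMAP hDib,
                     int srcW, int srcH, int destW, int destH)
{
    HDC screenDC = GetDC(hwnd);
    HDC memDC    = CreateCompatibleDC(screenDC);
    HGDIOBJ old  = SelectObject(memDC, hDib);

    SetStretchBltMode(screenDC, HALFTONE); // better quality when shrinking
    SetBrushOrgEx(screenDC, 0, 0, NULL);   // required after setting HALFTONE

    StretchBlt(screenDC, 0, 0, destW, destH,  // destination rect
               memDC,    0, 0, srcW,  srcH,   // source rect
               SRCCOPY);

    SelectObject(memDC, old);                 // restore before deleting the DC
    DeleteDC(memDC);
    ReleaseDC(hwnd, screenDC);
}
```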

Anyway, the difference is that StretchDIBits uses a device-independent bitmap (DIB) as the source, while StretchBlt uses a device-dependent bitmap (DDB) as the source. They (supposedly) do the same thing, but with different formats of source data.
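For comparison, a direct StretchDIBits call looks roughly like this (a sketch with illustrative names; bits is assumed to point at 32-bit BI_RGB pixel data of size srcW by srcH):

```cpp
#include <windows.h>

// Sketch of a direct StretchDIBits call: the source is raw DIB pixel
// data described by a BITMAPINFO, not a selected-in bitmap.
void BlitViaStretchDIBits(HDC screenDC, const void* bits,
                          int srcW, int srcH, int destW, int destH)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = srcW;
    bmi.bmiHeader.biHeight      = srcH;   // positive height = bottom-up DIB
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    StretchDIBits(screenDC,
                  0, 0, destW, destH,     // destination rect
                  0, 0, srcW,  srcH,      // source rect, in DIB coordinates
                  bits, &bmi, DIB_RGB_COLORS, SRCCOPY);
}
```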

StretchDIBits works fine. At least I haven't had any problems using it. You need to know a little more GDI magic to fully understand what all the parameters are supposed to be, though. It's definitely not deprecated.

For raw blt speed I'd be surprised if StretchDIBits beat StretchBlt; I'd expect them to be roughly the same. The way to find out which is faster is to try it. If you need to process the pixels in some way before blitting, then StretchDIBits would probably be quite a bit faster, because you have a direct pointer to the bitmap bits.
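To illustrate that last point, here's a rough sketch of getting that direct pixel pointer via CreateDIBSection (names are illustrative, not a drop-in):

```cpp
#include <windows.h>

// Sketch of the direct-pointer advantage: a DIB section hands back a
// pointer to its pixels, so they can be modified in place before
// blitting with either API.
HBITMAP MakeWritableDib(HDC dc, int w, int h, void** bitsOut)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = w;
    bmi.bmiHeader.biHeight      = -h;     // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    // *bitsOut ends up pointing at w*h 32-bit pixels you can write directly.
    return CreateDIBSection(dc, &bmi, DIB_RGB_COLORS, bitsOut, NULL, 0);
}
```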

GDI will hardware accelerate if the video driver supports it.

I would think that when it comes to 2D, most, if not all, GDI functions use software rendering.
Maybe GDI+ uses hardware rendering; I don't know for sure. Hardware rendering is mostly for 3D stuff, I would guess.

Guest Anonymous Poster
Quote:
Original post by JakeM
I would think that when it comes to 2D, most, if not all, GDI functions use software rendering.
Maybe GDI+ uses hardware rendering; I don't know for sure. Hardware rendering is mostly for 3D stuff, I would guess.


uhmmm, no, no, and no.

GDI funcs are definitely hardware accelerated, but of course your mileage will vary depending on your video card's driver. GDI+ uses advanced CPU opcodes as well as whatever GDI uses, since it sits on top of GDI for a lot of its basic functionality. 2D hardware acceleration has been around forever, and it didn't go away just because the video card vendors focused on 3D. There just hasn't been much of anything to work on, since MS hasn't added anything new to its graphics APIs except layered windows.

Thanks for the responses everyone,

it seems that when I first used StretchBlt() a month ago I must have been using it inefficiently (without the clipping values, etc.), so I was under the impression it was really slow. Now that I've taken the time to manually figure out where those clipping rectangles should be when I 'zoom in' on my bitmap, StretchBlt() works extremely fast.
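In case it helps anyone else, here's a sketch of the kind of source-rectangle arithmetic I mean (names are illustrative): blit only the part of the bitmap visible at the current zoom, instead of stretching the whole image every frame.

```cpp
#include <windows.h>

// Sketch: compute the visible source rectangle for a given zoom level
// and blit just that region, keeping the view centered on one point.
void BlitZoomed(HDC screenDC, HDC memDC, int clientW, int clientH,
                int centerX, int centerY, double zoom)
{
    int srcW = (int)(clientW / zoom);   // visible width in bitmap pixels
    int srcH = (int)(clientH / zoom);
    int srcX = centerX - srcW / 2;      // keep the zoom centered
    int srcY = centerY - srcH / 2;

    StretchBlt(screenDC, 0, 0, clientW, clientH,
               memDC,    srcX, srcY, srcW, srcH,
               SRCCOPY);
}
```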

Thanks!
Mark

Quote:
Original post by Anonymous Poster
uhmmm, no, no, and no.

GDI funcs are definitely hardware accelerated, but of course your mileage will vary depending on your video card's driver. GDI+ uses advanced CPU opcodes as well as whatever GDI uses, since it sits on top of GDI for a lot of its basic functionality. 2D hardware acceleration has been around forever, and it didn't go away just because the video card vendors focused on 3D. There just hasn't been much of anything to work on, since MS hasn't added anything new to its graphics APIs except layered windows.


uhmmm, no, no, and no. If GDI functions were hardware accelerated, then that would mean the device context (hDC) is also hardware accelerated, which would mean you could create a hardware-accelerated offscreen memory hDC that could be used with OpenGL, etc. But this is completely bogus.

Quote:
Original post by JakeM
Quote:
Original post by Anonymous Poster
uhmmm, no, no, and no.

GDI funcs are definitely hardware accelerated, but of course your mileage will vary depending on your video card's driver. GDI+ uses advanced CPU opcodes as well as whatever GDI uses, since it sits on top of GDI for a lot of its basic functionality. 2D hardware acceleration has been around forever, and it didn't go away just because the video card vendors focused on 3D. There just hasn't been much of anything to work on, since MS hasn't added anything new to its graphics APIs except layered windows.


uhmmm, no, no, and no. If GDI functions were hardware accelerated, then that would mean the device context (hDC) is also hardware accelerated, which would mean you could create a hardware-accelerated offscreen memory hDC that could be used with OpenGL, etc. But this is completely bogus.


The implementation of device contexts may actually be hardware accelerated, depending on the video drivers. The whole point of the abstraction is that the programmer doesn't have to care about the implementation and can just use the context handles when drawing. Blitting operations in particular are commonly accelerated in hardware - I remember my friend having a "Windows Graphics Accelerator" some 14 years ago [smile]

OpenGL is an entirely different subsystem from GDI, and needs a device context mainly for synchronization with the Windows core drawing system. The point about GDI hardware acceleration is irrelevant in this context.
