mark-w

win32 - StretchBlt() vs StretchDIBits()?
Hi All,

What's the difference between StretchBlt() and StretchDIBits()? Several references on the net suggest StretchDIBits is much faster than StretchBlt. Is this true? I need to be able to stretch a bitmap to the screen many times a second to mimic zooming. I was using MFC's image class, CImage, to do this - but it doesn't even have a StretchDIBits method! Is it deprecated or something?

Thanks!
Somehow that doesn't seem right. Working with a device-dependent bitmap (DDB) should always be equal to or faster than working with a device-independent bitmap (DIB). The DIB version has to convert it to a DDB before it's displayed anyway.
Does StretchDIBits use the video card to do the stretching, while StretchBlt() does it in software only? Or does either of them take that into account?
Well, in my experience, StretchDIBits isn't actually implemented properly so you can't compare it to StretchBlt at all.
In a project I did for work where I was working with DIBs, I spent days trying to get StretchDIBits to work, and the closest I came basically involved using random magic numbers for the various parameters. I then decided to just create memory DCs, select the DIB into them, and use StretchBlt, and it worked fine with the obvious parameters. I found several knowledge-base articles describing problems with StretchDIBits, but none of the issues I was seeing (such as the function doing nothing, drawing parts of the image upside down, or drawing black rectangles) were caused by any of the documented problems. I'm still not sure what was going on, but I tried everything I could think of (which was quite a bit) and none of it made StretchDIBits behave in a manner consistent with its documentation.

Anyway, the difference is that StretchDIBits uses a device-independent bitmap (DIB) as the source, while StretchBlt uses a device-dependent bitmap (DDB) as the source. They (supposedly) do the same thing, but with different formats of information.
StretchDIBits works fine. At least I haven't had any problems using it. You need to know a little more GDI magic to fully understand what all the parameters are supposed to be though. It's definitely not deprecated.

For raw blt speed I'd be surprised if StretchDIBits beat StretchBlt; I'd expect them to be roughly the same. The way to find out which is faster is to try it. If you need to process the pixels in some way before blitting, then StretchDIBits would probably be quite a bit faster because you have a direct pointer to the bitmap bits.

GDI will hardware accelerate if the video driver supports it.
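To make concrete what an unaccelerated stretch has to do per pixel, here is a minimal nearest-neighbour resampling sketch in portable C. This is my own illustration, not GDI's actual implementation - the real thing also handles clipping, palettes, pixel formats, and raster operations:

```c
#include <assert.h>
#include <stdint.h>

/* Nearest-neighbour stretch of a src_w x src_h 32bpp image into a
   dst_w x dst_h buffer - roughly the per-pixel work a software
   StretchBlt must perform. */
static void stretch_nn(const uint32_t *src, int src_w, int src_h,
                       uint32_t *dst, int dst_w, int dst_h)
{
    for (int dy = 0; dy < dst_h; ++dy) {
        int sy = dy * src_h / dst_h;          /* destination row -> source row */
        for (int dx = 0; dx < dst_w; ++dx) {
            int sx = dx * src_w / dst_w;      /* destination column -> source column */
            dst[dy * dst_w + dx] = src[sy * src_w + sx];
        }
    }
}
```

Doing this inner loop on the CPU for every frame is exactly the cost a hardware-accelerated blit avoids.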
I would think when it comes to 2D, most, if not all, GDI functions use software rendering.
Maybe GDI+ uses hardware rendering, I don't know for sure. Hardware rendering is mostly for 3D stuff, I would guess.
Quote:
Original post by JakeM
I would think when it comes to 2D, most, if not all, GDI functions use software rendering.
Maybe GDI+ uses hardware rendering, I don't know for sure. Hardware rendering is mostly for 3D stuff, I would guess.


uhmmm, no, no, and no.

GDI funcs are definitely hardware accelerated, but of course your mileage will vary depending on your video card's driver. GDI+ uses advanced CPU opcodes as well as whatever GDI uses, since it sits on top of GDI for a lot of its basic functionality. 2D hardware acceleration has been around forever, and it didn't go away just because the video card vendors focused on 3D. There just hasn't been much of anything to work on, since MS hasn't added anything new to its graphics APIs except layered windows.
Thanks for the responses everyone,

It seems when I first used StretchBlt() a month ago I must have been using it inefficiently (without the clipping values, etc.), so I was under the impression it was really slow. Now that I've taken the time to manually figure out where those clipping bounds should be when I 'zoom in' on my bitmap, StretchBlt() works extremely fast.

Thanks!
Mark
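For anyone doing the same thing, the idea is to compute a source sub-rectangle from the zoom factor and blit only that region. A rough sketch of the arithmetic (my own helper, not code from this thread - the names and clamping policy are illustrative):

```c
#include <assert.h>

/* Given an image of img_w x img_h, a zoom factor (>= 1.0), and a centre
   point in image coordinates, compute the source rectangle that should
   be stretched to fill the destination window. */
static void zoom_src_rect(int img_w, int img_h, double zoom,
                          int cx, int cy,
                          int *sx, int *sy, int *sw, int *sh)
{
    *sw = (int)(img_w / zoom);   /* higher zoom -> smaller source rect */
    *sh = (int)(img_h / zoom);
    *sx = cx - *sw / 2;          /* centre the rect on (cx, cy) */
    *sy = cy - *sh / 2;
    /* clamp so StretchBlt never reads outside the source bitmap */
    if (*sx < 0) *sx = 0;
    if (*sy < 0) *sy = 0;
    if (*sx + *sw > img_w) *sx = img_w - *sw;
    if (*sy + *sh > img_h) *sy = img_h - *sh;
}
```

The resulting (sx, sy, sw, sh) would go into StretchBlt's source parameters, with the full client area as the destination.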
Quote:
Original post by Anonymous Poster
uhmmm, no, no, and no.

GDI funcs are definitely hardware accelerated, but of course your mileage will vary depending on your video card's driver. GDI+ uses advanced CPU opcodes as well as whatever GDI uses, since it sits on top of GDI for a lot of its basic functionality. 2D hardware acceleration has been around forever, and it didn't go away just because the video card vendors focused on 3D. There just hasn't been much of anything to work on, since MS hasn't added anything new to its graphics APIs except layered windows.


uhmmm, no, no, and no. If GDI functions were hardware accelerated, then that means the device context (hDC) is also hardware accelerated, which also would mean you could create a hardware accelerated offscreen memory hDC that could be used with OpenGL, etc... But this is completely bogus.
Quote:
Original post by JakeM
Quote:
Original post by Anonymous Poster
uhmmm, no, no, and no.

GDI funcs are definitely hardware accelerated, but of course your mileage will vary depending on your video card's driver. GDI+ uses advanced CPU opcodes as well as whatever GDI uses, since it sits on top of GDI for a lot of its basic functionality. 2D hardware acceleration has been around forever, and it didn't go away just because the video card vendors focused on 3D. There just hasn't been much of anything to work on, since MS hasn't added anything new to its graphics APIs except layered windows.


uhmmm, no, no, and no. If GDI functions were hardware accelerated, then that means the device context (hDC) is also hardware accelerated, which also would mean you could create a hardware accelerated offscreen memory hDC that could be used with OpenGL, etc... But this is completely bogus.


The implementation of device contexts may actually be hardware accelerated, depending on the video drivers. The whole point of the abstraction is that the programmer doesn't have to care about the implementation and can just use the context handles when drawing stuff. Blitting operations especially are commonly accelerated in hardware - I remember my friend having a "Windows Graphics Accelerator" some 14 years ago [smile]

OpenGL is an entirely different subsystem from GDI, and needs a device context mainly for synchronization with the Windows core drawing system. The point about GDI hw acceleration is irrelevant in this context.

Quote:
Original post by mark-w
it seems when I first used StretchBlt() a month ago I must have been using it inefficiently (without the clipping values etc) so I was under the impression it was really slow. Now that I took the time to manually figure out where those clipping planes should be when I 'zoom in' on my bitmap, StretchBlt() works extremely fast.

StretchBlt can be fast and it can be slow. On nvidia cards, StretchBlt is hardware accelerated and is very fast. On ATi cards (at least the ones I've tried), it is not accelerated and is dog slow, managing only 5fps or so when stretching a simple bitmap from 640x480 to 1024x768.

Quote:
Original post by JakeM
uhmmm, no, no, and no. If GDI functions were hardware accelerated, then that means the device context (hDC) is also hardware accelerated, which also would mean you could create a hardware accelerated offscreen memory hDC that could be used with OpenGL, etc... But this is completely bogus. Have you ever tried to copy memory from OpenGL to GDI or vice versa? Can't do it in hardware. Show me proof if you can.

Actually, GDI is often hardware accelerated. That is done only for operations that actually draw to device DCs, of course (after all, it's the Graphics Device Interface). Having hardware acceleration is not mutually exclusive with having unaccelerated offscreen memory DCs (or printer DCs). I remember the 3dfx Voodoo Banshee being touted as the first graphics card to implement the entire GDI in hardware.
Let's end the "does GDI support hardware acceleration" debate by going straight to the docs: MSDN docs for driver entry points to do GDI hardware acceleration.

It's up to the driver to tell GDI that it wants to handle these sorts of things. If the driver does not support it, then GDI will do it in software. Even if the driver *does* ask to get called by GDI, it can change its mind again and call back to GDI to handle things anyway - this is actually the common case for things like text output (see EngTextOut, which is the GDI callback that drivers make when they don't want to fully support text). In this case the driver will do things like accumulate dirty regions, have GDI do all the heavy lifting, and then do a simple blt to get the final bits on the screen.
Quote:
Original post by Anon Mike
StretchDIBits works fine. At least I haven't had any problems using it. You need to know a little more GDI magic to fully understand what all the parameters are supposed to be though.[...]
Could you explain what you mean by the last statement quoted? It seems pretty obvious from the documentation that all the parameters are the same, except that for the source you need to provide info about a DIB (the pointer to its 'bits', meaning the out parameter of CreateDIBSection, and the bitmap info used to create it).
I got StretchDIBits to work fine as long as I didn't actually stretch, but as soon as I did, weird random things started happening. I even tried negating various parameters and creating an upside-down DIB (instead of using a negative height), but nothing made it stretch properly on any of the machines I tested it on (some of which had onboard video, others had nvidia cards, and the OSes varied between 2K and XP).
Quote:
Extrarius
Could you explain what you mean by the last statement quoted?

Just that the docs can be a bit confusing if you don't really understand the lingo, although now that I look again I think I was thinking of SetDIBitsToDevice more than StretchDIBits.

Quote:
I got StretchDIBits to work fine as long as I didn't actually stretch

Here's some code that captures the upper-left 200x200 corner of the desktop into a 200x200x24bpp DIB section, then uses StretchDIBits to resize it to 50x50 and copy it to the client area of the app:


#include <windows.h>
#include <windowsx.h>   // for the SelectBitmap macro

// create the DIB section and put it in a memory DC
BITMAPINFO bitmapinfo;
ZeroMemory(&bitmapinfo, sizeof(bitmapinfo));
bitmapinfo.bmiHeader.biSize = sizeof(bitmapinfo.bmiHeader);
bitmapinfo.bmiHeader.biWidth = 200;
bitmapinfo.bmiHeader.biHeight = 200;
bitmapinfo.bmiHeader.biPlanes = 1;
bitmapinfo.bmiHeader.biBitCount = 24;
bitmapinfo.bmiHeader.biCompression = BI_RGB;

void * bits;
HDC desktop = GetDC(NULL);
HDC memory = CreateCompatibleDC(desktop);
HBITMAP dibsection = CreateDIBSection(memory, &bitmapinfo, DIB_RGB_COLORS, &bits, NULL, 0);
HBITMAP oldbitmap = SelectBitmap(memory, dibsection);

// Capture 200x200 image from the desktop
BitBlt(memory, 0, 0, 200, 200, desktop, 0, 0, SRCCOPY);

// Stretch captured image to a 50x50 region of the app client area
StretchDIBits(m_dc, 0, 0, 50, 50, 0, 0, 200, 200, bits, &bitmapinfo, DIB_RGB_COLORS, SRCCOPY);

// Cleanup
DeleteObject(SelectBitmap(memory, oldbitmap));
DeleteDC(memory);
ReleaseDC(NULL, desktop);
I made a test application using your code and found it to mostly work, but the system still treats the coordinate system somewhat strangely - it's measuring the source Y and height from the bottom of the image even for a 'right-side up' DIB (i.e. one with a negative height). I can't remember the exact problems, but I was having some serious problems with source and destination coordinates not making any sense, such as working the way described above for some coordinates and in other ways at seemingly random times.
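For reference, a bottom-up DIB (positive biHeight) stores its top scanline last, and every scanline is padded to a 4-byte boundary - two details behind a lot of this upside-down confusion. A small portable sketch of the addressing (my own helper names, not a GDI API):

```c
#include <assert.h>

/* Bytes per scanline of a DIB: each row is padded up to a DWORD boundary,
   the same formula GDI documents for BITMAPINFOHEADER. */
static long dib_stride(int width, int bits_per_pixel)
{
    return ((width * (long)bits_per_pixel + 31) / 32) * 4;
}

/* Byte offset of pixel (x, y) in a *bottom-up* DIB, where (0, 0) is the
   top-left corner as seen on screen: screen row y lives at stored row
   (height - 1 - y). */
static long dib_offset(int x, int y, int width, int height, int bpp)
{
    return (long)(height - 1 - y) * dib_stride(width, bpp)
         + (long)x * (bpp / 8);
}
```

A top-down DIB (negative biHeight) drops the row flip, which is why mixing the two conventions makes source Y coordinates appear measured from the wrong edge.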