Quote:Original post by mark-w
it seems when I first used StretchBlt() a month ago I must have been using it inefficiently (without the clipping values etc) so I was under the impression it was really slow. Now that I took the time to manually figure out where those clipping planes should be when I 'zoom in' on my bitmap, StretchBlt() works extremely fast.
StretchBlt can be fast and it can be slow, depending on the driver. On NVIDIA cards, StretchBlt is hardware accelerated and very fast. On ATI cards (at least the ones I've tried), it is not accelerated and is dog slow, managing only around 5 fps when stretching a simple bitmap from 640x480 up to 1024x768.
Quote:Original post by JakeM
uhmmm, no, no, and no. If GDI functions were hardware accelerated, then that means the device context (hDC) is also hardware accelerated, which also would mean you could create a hardware accelerated offscreen memory hDC that could be used with OpenGL, etc... But this is completely bogus. Have you ever tried to copy memory from OpenGL to GDI or vice versa? Can't do it in hardware. Show me proof if you can.
Actually, GDI is often hardware accelerated. That acceleration applies only to operations that actually draw to device DCs, of course (it is, after all, the Graphics Device Interface). Having hardware acceleration is not mutually exclusive with also having unaccelerated offscreen memory DCs (or printer DCs). I remember the 3dfx Voodoo Banshee being touted as the first graphics card to implement the entire GDI in hardware.