Ugly StretchBlt


Hi all,

I just coded a small demo that captures the desktop window and then tries to scale it down and draw it to my dialog's HDC.

The problem is that I'm using StretchBlt, and the final results are quite ugly. I've read in several places that you should call SetStretchBltMode and SetBrushOrgEx, but the end result still isn't up to scratch.
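
Roughly what I'm doing (simplified sketch; srcW/srcH, dstW/dstH and hDlg stand in for my real sizes and dialog handle, and error checking is left out):

HDC hdcScreen = GetDC(NULL);                      // desktop/screen DC
HDC hdcDlg    = GetDC(hDlg);                      // my dialog's DC

// Capture the desktop area into a memory bitmap
HDC     hdcMem = CreateCompatibleDC(hdcScreen);
HBITMAP hbmCap = CreateCompatibleBitmap(hdcScreen, srcW, srcH);
HBITMAP hbmOld = (HBITMAP)SelectObject(hdcMem, hbmCap);
BitBlt(hdcMem, 0, 0, srcW, srcH, hdcScreen, 0, 0, SRCCOPY);

// Scale it down into the dialog
SetStretchBltMode(hdcDlg, HALFTONE);
SetBrushOrgEx(hdcDlg, 0, 0, NULL);                // docs say to reset the brush origin after setting HALFTONE
StretchBlt(hdcDlg, 0, 0, dstW, dstH, hdcMem, 0, 0, srcW, srcH, SRCCOPY);

// Cleanup
SelectObject(hdcMem, hbmOld);
DeleteObject(hbmCap);
DeleteDC(hdcMem);
ReleaseDC(hDlg, hdcDlg);
ReleaseDC(NULL, hdcScreen);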

Is there some solution, even if it involves GDI+, OpenGL, DirectX or DirectDraw, that can draw into my dialog and resize my original bitmap cleanly?
Also, I'm looking for a reasonably fast solution; I wouldn't want this to take more than a second to draw...

Thanks for any tips on this.

Have a great week.

Make sure you scale to an image with the same aspect ratio - i.e. the width-to-height ratio of the desktop capture should match that of the small bitmap you're drawing to. Virtually all image subsampling algorithms will crap out if you change the aspect ratio.
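
For example, something along these lines picks a destination size that keeps the ratio (just a sketch; maxW/maxH are whatever box you're fitting the scaled image into):

// Fit a srcW x srcH image into a maxW x maxH box without changing the aspect ratio
double scaleX = (double)maxW / srcW;
double scaleY = (double)maxH / srcH;
double scale  = (scaleX < scaleY) ? scaleX : scaleY;
int dstW = (int)(srcW * scale + 0.5);
int dstH = (int)(srcH * scale + 0.5);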

In general, subsampling algorithms are available in a wide variety, usually trading off speed for quality and vice versa.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Thx ApochPiQ,

I've done exactly that: capture a 400x400 area and reduce it to 100x100, but the results are simply not acceptable.

What alternatives are there outside of the outdated Win32 GDI API?

Thx!

What alternatives are there outside of the outdated Win32 GDI API?

You can write your own, after doing some research into existing algorithms.

I'd imagine (but don't know for sure) that most of those Blt functions are designed for speed, i.e. for doing the resize every frame.

If you're converting a 400x400 to 100x100, you just need to do that once and then draw the new 100x100 image using the regular Blt functions. You can do this on the CPU, using a normal function of your own writing, and it should be plenty fast enough.

Some algorithms are designed for upscaling, others may be better suited for downscaling.

For fast blits, the algorithm may just use percentages to choose which single source pixel to sample (i.e. whatever pixel sits 0.3 of the way across horizontally is 'red', so draw red!).
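
Roughly like this, say (just a sketch, assuming a plain 32-bit pixel buffer):

#include <stdint.h>

// Nearest-neighbor downscale: each destination pixel copies the one source pixel
// that the same fractional position maps to. Fast, but blocky/aliased.
void ResizeNearest(const uint32_t* src, int srcW, int srcH,
                   uint32_t* dst, int dstW, int dstH)
{
    for (int y = 0; y < dstH; ++y)
    {
        int sy = y * srcH / dstH;          // "y is 30% down, so sample 30% down the source"
        for (int x = 0; x < dstW; ++x)
        {
            int sx = x * srcW / dstW;
            dst[y * dstW + x] = src[sy * srcW + sx];
        }
    }
}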

For better resizes, other algorithms take into account multiple nearby pixels and combine the colors to generate a superior image. This is probably what you want.
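
For an exact 4:1 reduction like 400x400 down to 100x100, even a plain box filter (average each 4x4 block of source pixels into one destination pixel) looks far better than single-pixel sampling. A rough sketch, under the same pixel-buffer assumptions as above (alpha ignored for simplicity):

#include <stdint.h>

// Box-filter downscale by an integer factor: average each factor x factor block.
void DownscaleBox(const uint32_t* src, int srcW, int srcH,
                  uint32_t* dst, int factor)
{
    int dstW = srcW / factor, dstH = srcH / factor;
    for (int y = 0; y < dstH; ++y)
    {
        for (int x = 0; x < dstW; ++x)
        {
            unsigned r = 0, g = 0, b = 0;
            for (int sy = 0; sy < factor; ++sy)
            {
                for (int sx = 0; sx < factor; ++sx)
                {
                    uint32_t p = src[(y * factor + sy) * srcW + (x * factor + sx)];
                    r += (p >> 16) & 0xFF;
                    g += (p >> 8)  & 0xFF;
                    b +=  p        & 0xFF;
                }
            }
            unsigned n = factor * factor;
            dst[y * dstW + x] = ((r / n) << 16) | ((g / n) << 8) | (b / n);
        }
    }
}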

More "intelligent" algorithms try to analyze the image to find "edges" and other formations of pixels that stand out, and then aim to recreate these formations.

Try this.

