
Does the Windows 7 interface (GDI?) utilize the framebuffer?



#1 Dimension10   Members   -  Reputation: 123


Posted 16 July 2013 - 06:27 PM

Right now, I'm working on a method for quickly grabbing the average screen color and sending it out to my Arduino to control my room's RGB strips.

I've mostly been using device contexts: GetDC(NULL), then StretchBlt to "stretch" the screen down to one pixel. But that seems like a ludicrous amount of work in total. I see about 15 fps and 6% CPU usage. (When grabbing one pixel, it's probably 100+ fps with <1% usage.)

But I've heard there's a way to directly access the GPU's framebuffer. In that case, if GDI uses the framebuffer, I could grab the frame, have the GPU do the averaging in a massively parallel way via CUDA, and then send the resulting color directly to the Arduino.

But that would depend strictly on whether the Windows GUI uses the framebuffer. (And perhaps on how much initialization it would take to access it.)

 




#2 Ravyne   GDNet+   -  Reputation: 8160


Posted 16 July 2013 - 07:00 PM

It must occupy the GPU's framebuffer at some point, but I'd wager it's not exposed to you in any way, aside from the GDI readback method you're already using.

 

It'd probably be easier/better/faster to not down-sample literally the entire desktop to derive your RGB color. You could sample 1 pixel out of every 4x4 pixel square across the screen and cut your work to 1/16th of the StretchBlt method, and still come up with a color that's highly representative of your screen's average color. Certain degenerate cases could produce an undesirable effect with this scheme, but that could be solved by retaining a few frames' worth of the RGB color and blending them together smoothly, just to avoid any potential harshness.
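
[Editor's note: a minimal sketch of one way to realize this, assuming a 32-bpp desktop: copy the screen into a DIB section once per frame with BitBlt, read only every fourth pixel in each direction from memory, and blend with the previous frame's color. The helper name SampleAverage and the 3:1 blend weights are illustrative, not anything Ravyne specified.]

#include <windows.h>

COLORREF SampleAverage(COLORREF previous)
{
	HDC hdcScreen = GetDC(NULL);
	int w = GetDeviceCaps(hdcScreen, HORZRES);
	int h = GetDeviceCaps(hdcScreen, VERTRES);

	// Memory DC backed by a DIB section so the pixels are directly readable.
	BITMAPINFO bmi = {};
	bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
	bmi.bmiHeader.biWidth       = w;
	bmi.bmiHeader.biHeight      = -h;          // negative height = top-down rows
	bmi.bmiHeader.biPlanes      = 1;
	bmi.bmiHeader.biBitCount    = 32;          // BGRX, 4 bytes per pixel
	bmi.bmiHeader.biCompression = BI_RGB;

	void* bits = NULL;
	HDC hdcMem = CreateCompatibleDC(hdcScreen);
	HBITMAP bmp = CreateDIBSection(hdcMem, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
	HGDIOBJ old = SelectObject(hdcMem, bmp);

	// One full-screen copy per frame; all sampling then happens in memory.
	BitBlt(hdcMem, 0, 0, w, h, hdcScreen, 0, 0, SRCCOPY);

	// Sum one pixel from every 4x4 block.
	unsigned long long r = 0, g = 0, b = 0, n = 0;
	const BYTE* px = (const BYTE*)bits;
	for (int y = 0; y < h; y += 4)
		for (int x = 0; x < w; x += 4, ++n)
		{
			const BYTE* p = px + (y * w + x) * 4;  // BGRX layout
			b += p[0]; g += p[1]; r += p[2];
		}

	// Blend with the previous frame's color to smooth out harsh jumps.
	BYTE rr = (BYTE)((r / n + 3 * GetRValue(previous)) / 4);
	BYTE gg = (BYTE)((g / n + 3 * GetGValue(previous)) / 4);
	BYTE bb = (BYTE)((b / n + 3 * GetBValue(previous)) / 4);

	SelectObject(hdcMem, old);
	DeleteObject(bmp);
	DeleteDC(hdcMem);
	ReleaseDC(NULL, hdcScreen);
	return RGB(rr, gg, bb);
}

In real use you'd create the DIB section once up front and reuse it every frame; it's created inline here only to keep the sketch self-contained.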



#3 Dimension10   Members   -  Reputation: 123


Posted 16 July 2013 - 09:54 PM

"You could sample 1 pixel out of every 4x4 pixel square across the screen and cut your work to 1/16th of the StretchBlt method"

 

I figured as much. I was actually working on a method for creating a pixel-density function, so that the samples are denser near the edges of the screen.

 

But I figured functions like BitBlt and StretchBlt grab a whole chunk of data while the device context is open, then extract what they need.

 

I figure the only method I'd really have for grabbing individual pixels is GetPixel(). But since it would re-access the device context on every call, I figured that would be like shutting off your car's engine every time you make a turn.

 

I like your idea of keeping a few frames. It would also be interesting to dither the samples, in a sense: take a random pixel out of each 4x4 block, and keep the previous two or so frames.
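
[Editor's note: a sketch of that jittered variant, assuming the same top-down 32-bpp (BGRX) pixel buffer as the sketch above; rand() is just the quickest way to illustrate the per-block offset.]

#include <windows.h>
#include <cstdlib>

// Averages one randomly chosen pixel from each 4x4 block of a top-down
// 32-bpp (BGRX) pixel buffer.
COLORREF JitteredAverage(const BYTE* px, int w, int h)
{
	unsigned long long r = 0, g = 0, b = 0, n = 0;
	for (int y = 0; y + 4 <= h; y += 4)
		for (int x = 0; x + 4 <= w; x += 4, ++n)
		{
			int jx = x + rand() % 4;           // random column within the block
			int jy = y + rand() % 4;           // random row within the block
			const BYTE* p = px + (jy * w + jx) * 4;
			b += p[0]; g += p[1]; r += p[2];
		}
	return RGB((BYTE)(r / n), (BYTE)(g / n), (BYTE)(b / n));
}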

 

 

Edit: And one more quick thing, if possible: how would I go about creating a buffer HDC? I tried creating one, and all I really got was a mess of digits that never changed. So as it is, I'm writing the average color to a pixel at (0,0) on my screen, then accessing the whole screen again just to read that pixel back.


Edited by Dimension10, 16 July 2013 - 09:58 PM.


#4 Dimension10   Members   -  Reputation: 123


Posted 16 July 2013 - 10:10 PM

#include <windows.h>
#include "cColorController.h"	// the poster's Arduino serial wrapper (header name assumed)

int main()
{
	// Establish Arduino as a color controller on COM7.
	cColorController *Arduino = new cColorController("COM7");
	Arduino->setAdjust(	1.0,	//Red Multiplier
				0.8,	//Green Multiplier
				0.4,	//Blue Multiplier
				2.0	//Gamma
				);

	// Device context for the whole screen.
	HDC hdcScreen = GetDC(NULL);
	COLORREF CurrentColor;
	int ScreenWPix = GetDeviceCaps(hdcScreen, HORZRES);
	int ScreenHPix = GetDeviceCaps(hdcScreen, VERTRES);

	// Create an off-screen 1x1 buffer to stretch into, so the averaged
	// pixel never has to be written to the visible screen.
	HDC hdcBuffer = CreateCompatibleDC(hdcScreen);
	BITMAPINFO bufferInfo;
	ZeroMemory(&bufferInfo.bmiHeader, sizeof(BITMAPINFOHEADER));
	bufferInfo.bmiHeader.biSize	= sizeof(BITMAPINFOHEADER);
	bufferInfo.bmiHeader.biWidth	= 1;
	bufferInfo.bmiHeader.biHeight	= 1;
	bufferInfo.bmiHeader.biPlanes	= 1;
	bufferInfo.bmiHeader.biBitCount	= 24;
	VOID *pvBits;
	HBITMAP bmpBuffer = CreateDIBSection(hdcBuffer,
					&bufferInfo,
					DIB_RGB_COLORS,
					&pvBits,
					NULL,
					0);
	SelectObject(hdcBuffer, bmpBuffer);

	// HALFTONE mode averages the source pixels; it must be set on the
	// *destination* DC, and SetBrushOrgEx must follow it.
	SetStretchBltMode(hdcBuffer, HALFTONE);
	SetBrushOrgEx(hdcBuffer, 0, 0, NULL);

	while (1)
	{
		// Stretch the whole screen down to the single buffer pixel.
		StretchBlt(hdcBuffer, 0, 0, 1, 1,
			hdcScreen, 0, 0, ScreenWPix, ScreenHPix,
			SRCCOPY);

		// Get that pixel's color in one call, rather than calling
		// GetPixel separately for r, g, and b.
		CurrentColor = GetPixel(hdcBuffer, 0, 0);

		// Send this color to the Arduino. Todo: have Arduino read COLORREF.
		Arduino->setColor(GetRValue(CurrentColor),
				GetGValue(CurrentColor),
				GetBValue(CurrentColor));

		//Sleep(16);
	}

	DeleteObject(bmpBuffer);
	DeleteDC(hdcBuffer);
	ReleaseDC(NULL, hdcScreen);
	return 0;
}


#5 mhagain   Crossbones+   -  Reputation: 8278


Posted 17 July 2013 - 03:09 AM

The fastest possible way to get a working average screen colour is:

 

 - Create a render target texture with mipmap autogeneration (do this once only).

 - Read the screen contents into the texture.

 - Let mipmaps autogenerate (you shouldn't have to do anything for this; it just happens).

 - Sample from the smallest (1x1) miplevel (copy this off to another texture if you need to, since you may not be able to read back from a mipmap-autogen texture).

 

All of this should run on the GPU, with the exception of the readback from the 1x1 miplevel to CPU-side, and is capable of running at several hundred FPS on reasonable hardware.  The readback will actually be your primary bottleneck even though it's just a single pixel, owing to the required CPU/GPU synchronization.
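
[Editor's note: a rough sketch of those four steps using Direct3D 11, which exposes an explicit GenerateMips call rather than D3D9-style autogeneration. It assumes the captured frame is already in a B8G8R8A8 ID3D11Texture2D; getting the desktop into one is a separate problem, and error handling is omitted.]

#include <windows.h>
#include <d3d11.h>		// link with d3d11.lib

// Averages 'frame' (a w x h DXGI_FORMAT_B8G8R8A8_UNORM texture) down to one
// pixel via the mip chain. In a real app, create the textures once and reuse.
COLORREF AverageViaMips(ID3D11Device* dev, ID3D11DeviceContext* ctx,
			ID3D11Texture2D* frame, UINT w, UINT h)
{
	// Full mip chain: each level halves the larger dimension until 1x1.
	UINT mips = 1;
	for (UINT d = (w > h ? w : h); d > 1; d >>= 1) ++mips;

	// Step 1: mip-chained texture (render target + shader resource,
	// flagged for GenerateMips).
	D3D11_TEXTURE2D_DESC td = {};
	td.Width = w;  td.Height = h;
	td.MipLevels = 0;				// 0 = allocate the whole chain
	td.ArraySize = 1;
	td.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
	td.SampleDesc.Count = 1;
	td.Usage = D3D11_USAGE_DEFAULT;
	td.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
	td.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
	ID3D11Texture2D* tex = NULL;
	dev->CreateTexture2D(&td, NULL, &tex);

	ID3D11ShaderResourceView* srv = NULL;
	dev->CreateShaderResourceView(tex, NULL, &srv);

	// Steps 2-3: copy the screen contents into mip 0, then build the chain.
	ctx->CopySubresourceRegion(tex, 0, 0, 0, 0, frame, 0, NULL);
	ctx->GenerateMips(srv);

	// Step 4: copy the 1x1 top of the chain to a staging texture and read it.
	D3D11_TEXTURE2D_DESC sd = td;
	sd.Width = sd.Height = 1;
	sd.MipLevels = 1;
	sd.Usage = D3D11_USAGE_STAGING;
	sd.BindFlags = 0;
	sd.MiscFlags = 0;
	sd.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
	ID3D11Texture2D* staging = NULL;
	dev->CreateTexture2D(&sd, NULL, &staging);
	ctx->CopySubresourceRegion(staging, 0, 0, 0, 0, tex, mips - 1, NULL);

	// This Map is the CPU/GPU synchronization point described above.
	D3D11_MAPPED_SUBRESOURCE m;
	ctx->Map(staging, 0, D3D11_MAP_READ, 0, &m);
	const BYTE* p = (const BYTE*)m.pData;		// BGRA byte order
	COLORREF c = RGB(p[2], p[1], p[0]);
	ctx->Unmap(staging, 0);

	staging->Release(); srv->Release(); tex->Release();
	return c;
}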


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#6 Dimension10   Members   -  Reputation: 123


Posted 17 July 2013 - 12:03 PM

That's a good idea, perfect for my usage. Though I worry about initializing devices and windows: how much initialization is generally required before I can create a texture and sample it down to one pixel?
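
[Editor's note: for what it's worth, D3D11 can run headless: creating textures, generating mips, and reading back need no window or swap chain, only a device and context. A minimal sketch, with error handling omitted:]

#include <d3d11.h>		// link with d3d11.lib

// Minimal, windowless D3D11 setup -- enough to create textures, run
// GenerateMips, and read results back. No HWND or swap chain required.
ID3D11Device* dev = NULL;
ID3D11DeviceContext* ctx = NULL;
HRESULT hr = D3D11CreateDevice(NULL,			// default adapter
			D3D_DRIVER_TYPE_HARDWARE,
			NULL, 0,			// no software module, no flags
			NULL, 0,			// default feature levels
			D3D11_SDK_VERSION,
			&dev, NULL, &ctx);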





