Detect if something changed on screen



I'm trying to figure out the fastest way to determine if something changed on the screen. The obvious way is to do a screen grab and compare it with a previous screen grab pixel by pixel; this, however, is very slow. In DirectDraw there's a method on a surface called GetUniquenessValue which indicates whether the contents of the surface have changed. Is this something that can be used to detect if something changed on the screen? I'm open to other ideas as well.

Is this a 2D or 3D application?

Why don't you just check whether an object has moved position, or, if you have animation functions, whether one of them ran? If the function is true, then obviously something has changed. Just set up some boolean flags in the functions that can potentially change something.

Hmmm, interesting problem. The most applicable solution will depend on:

- how accurate it needs to be ("might have changed" vs "has changed").

- what result you need ("something changed", "this many pixels changed", "these pixels changed").

- where the pixels came from (was it something YOU just rendered, something another app just rendered, something from a sampling device such as a camera that might have extra information to help you).

- what you do with the answer; if this is for some sort of early-out then bear in mind that the "has it changed" test might be more expensive than just doing the work.

Personally I'd use D3D and the GPU to do the work. Off the top of my head and taking the details from your post as a brief:

1a) Grab the screen to a texture [Texture A]
1b) The previous frame was already grabbed to a texture [Texture B]
1c) If this is a continuous thing, use those in a double buffered fashion

2a) Disable all alpha blending
2b) Disable all stencil testing
2c) Enable Z buffering & Z writes
2d) Enable alpha testing such that only pixels where a!=0 are rendered

3a) Clear the screen and Z buffer
3b) Or you can use an offscreen render target and Z buffer

4) Write a pixel shader that does something like this (HLSL-ish):
sampler2D TextureA : register(s0);
sampler2D TextureB : register(s1);

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    float4 a = tex2D(TextureA, uv);
    float4 b = tex2D(TextureB, uv);
    float4 c = b - a;
    float d = dot(c, c);       // 0 only where the pixels are identical
    return float4(1, 1, 1, d); // alpha carries the change metric
}

5) Begin a hardware occlusion query (D3DQUERYTYPE_OCCLUSION)

6) Render a fullscreen quad using the above pixel shader [renders either to the screen or an offscreen render target]

7) End the hardware occlusion query (D3DQUERYTYPE_OCCLUSION)

8) Do some meaningful work for "a while"

9a) Get the result back from the occlusion query. The occlusion query tells you how many pixels passed the Z test.
9b) Clearing the Z buffer means that all pixels in your quad should pass the Z test, but: Alpha test happens before Z test and if a pixel fails alpha test, it gets rejected before it gets to the Z test.

- occlusion queries are fast, but are asynchronous, so "a while" in step 8 could mean "a few frames" - which depending on what you need this for could mean unacceptable latency.

- the result in 9a should tell you how many pixels passed the Z test - but hardware vendors have disagreed on the definition of this in the past - and even changed it between driver revisions. Some chips/drivers might return the number of *fragments* that pass - which will give you different results if you use any kind of anti-aliasing. Because of this I only ever compare occlusion query results against results from the same machine rather than against any constant (so don't put any "if num_pixels_changed==52" kind of logic in there).

If you can't use queries (latency is the only reason I can see not to), I'd probably still go with D3D and pixel shaders, doing some sort of downsample/filter using a dot product like the one above to reduce the comparison. The main problem with that is that you'll need to read back from GPU memory at some stage, which can be slow and also risks serialising the whole pipeline (losing CPU<->GPU parallelism throws away a lot of the benefit of having a GPU).

Oh, and small note - if this is for something like collision detection in a normal game application, do as Brigs suggests. Detecting if things changed on screen is 100% the wrong way to go about things on modern hardware - do it with graphics independent collision data.

Thanks for your quick replies. The application isn't a game so I can't figure out if something changed directly through collision detection etc. It's more like a VNC type app which needs to detect screen changes when the user is working on the desktop and transmit the changed areas.

I'll look into pixel shaders and let you know if I get it working.

If it's VNC-like, use OS-specific APIs.

In Windows you can hook every message for every HWND. Check for WM_ERASEBKGND or WM_PAINT and you should be set with the exact rectangles that changed (besides the mouse cursor, obviously).
