Reading a pixel from a texture

Recommended Posts

G'day. I'm programming in VC++ 6, using Direct3D 9. I have a render target which is about 4x4 pixels, and it contains important pixel information (it all relates to a collision system). I need to extract this information so I can detect a collision. This is currently what I use:
IDirect3DSurface9* RTTSurface = NULL; // global offscreen surface, created once and reused

IDirect3DSurface9* surface = NULL;

// Get information about our render target texture
D3DSURFACE_DESC desc;
m_RenderTargetTexture->GetLevelDesc(0, &desc);

if(!RTTSurface)
{
	// Create an offscreen plain surface in system memory (first call only)
	if(FAILED(m_display.d3dDevice->CreateOffscreenPlainSurface(desc.Width, desc.Height,
		desc.Format, D3DPOOL_SYSTEMMEM, &RTTSurface, NULL)))
		MessageBox(0, "CreateOffscreenPlainSurface failed", "Error", MB_OK | MB_ICONHAND);
}

// Get the top-level surface of the render target
if(FAILED(m_RenderTargetTexture->GetSurfaceLevel(0, &surface)))
	MessageBox(0, "GetSurfaceLevel failed", "Error", MB_OK | MB_ICONHAND);

// Copy the render target data into the system-memory surface
if(FAILED(m_display.d3dDevice->GetRenderTargetData(surface, RTTSurface)))
	MessageBox(0, "GetRenderTargetData failed", "Error", MB_OK | MB_ICONHAND);

// Lock the offscreen plain surface for reading
D3DLOCKED_RECT rect;
if(FAILED(RTTSurface->LockRect(&rect, NULL, D3DLOCK_READONLY)))
	MessageBox(0, "LockRect failed", "Error", MB_OK | MB_ICONHAND);

COLORREF* pData = (COLORREF*)rect.pBits;
// ... read the pixels (respecting rect.Pitch), then:
// RTTSurface->UnlockRect();
// surface->Release();

And at this point pData is a pointer to the pixels (with rect.Pitch giving the row stride). Is there any way this code can be optimised? Is there a faster way of reading back from the graphics card? Thanks for reading :)

Well, you should of course get the description of your render target and create the offscreen plain surface only when the render target changes size or format. Other than that, there is not much you can do about reading data back across the AGP or PCIe bus, except reducing how often you do so.

You said the texture contains collision-detection information. This sounds interesting; would you mind elaborating a bit more? Perhaps then we can get some clever ideas from other people bouncing around in this thread.

I read a PowerPoint presentation on NVIDIA's web site about their research with the people behind Havok on executing physics on GPUs. At some point there was a mention of object data (positions + orientations) being returned to the application via lockable vertex buffers. I don't quite understand how that's possible, but it could be something worth considering.
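The first suggestion (recreate the offscreen surface only when the render target's size or format changes) might be sketched like this; the `SurfaceCache` name and fields are illustrative, not from the original post, and the actual D3D calls are left in comments since they need a live device:

```cpp
// Tracks the last-seen render-target description so the system-memory
// surface is only (re)created when the description actually changes.
struct SurfaceCache {
    unsigned width  = 0;
    unsigned height = 0;
    int      format = 0;   // stands in for D3DFORMAT here

    // Returns true when the cached offscreen surface must be (re)created.
    bool needsRecreate(unsigned w, unsigned h, int fmt) {
        if (w == width && h == height && fmt == format)
            return false;  // cached surface still matches: reuse it
        width = w; height = h; format = fmt;
        return true;
        // On 'true', real code would do something like:
        //   if (RTTSurface) RTTSurface->Release();
        //   d3dDevice->CreateOffscreenPlainSurface(w, h, (D3DFORMAT)fmt,
        //       D3DPOOL_SYSTEMMEM, &RTTSurface, NULL);
    }
};
```

With this, the CreateOffscreenPlainSurface call drops out of the per-frame path entirely for a fixed-size 4x4 target.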

The code itself is fine, but the manner in which you use it is critical. If the routine is being used once during a loading screen, then it's not even worth thinking about thinking about optimising it. If you are using it in a loop, however, then you'll want to make sure that you lock the texture only once and store the pointer, reusing it as many times as possible, until you are finished. Repeatedly locking and unlocking the same texture unnecessarily won't do your framerate any favours.

Regards
Admiral

As has been suggested, only call CreateOffscreenPlainSurface() when you have to - on device creation or when the dimensions/format need changing. Creating/releasing resources in a tight loop is usually A Bad Thing™.

It's also quite likely you'll be synchronizing your GPU and CPU with this sort of operation - which is also A Bad Thing™ [smile]

Rendering to a surface and reading back from it in the same frame requires the GPU to actually finish rendering before it can service the read-back request, and as there tends to be a deep command queue (several frames' worth) you'll basically be forcing lock-step. Parallelism makes GPUs happy, for much the same reasons as in regular concurrent programming...

To get around the latter, you'd be best off with a "bounded buffer" approach. Create an array of surfaces to render to (somewhere between 3 and 6 is probably a good idea) and always lock the oldest surface in the queue. If you're rendering to surface N, then download/lock surface (N + 1); there's a much higher chance the (N + 1) surface has already passed through the command queue and so has no pending dependencies that would force synchronisation.
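The index bookkeeping for that bounded buffer might look like the sketch below (the `ReadbackRing` name is illustrative; the D3D surfaces each slot would hold are only described in comments):

```cpp
// Ring of 'count' readback slots. Each frame renders into one slot and
// locks the oldest slot, which has had count-1 frames to drain through
// the GPU command queue without forcing a CPU/GPU sync.
struct ReadbackRing {
    int count;   // surfaces in flight; 3-6 as suggested above
    int frame;   // frames issued so far

    explicit ReadbackRing(int n) : count(n), frame(0) {}

    int renderSlot() const { return frame % count; }        // write here
    int lockSlot()   const { return (frame + 1) % count; }  // oldest slot
    void advance()         { ++frame; }
    // In real code each slot would pair a render-target surface with a
    // D3DPOOL_SYSTEMMEM surface filled via GetRenderTargetData.
};
```

Note that for the first count-1 frames the locked slot hasn't been rendered to yet, so the reads only become meaningful once the pipeline has filled.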

Both of the above are covered in the Forum FAQ (e.g. #14), had you considered looking [wink]

hth
Jack

Hello,

This is very similar to what I'm trying to accomplish, except I'm more of a newbie to DirectX. I'm using Direct3D 9 in Visual Studio .NET 2003.

Problem:
I am handed a texture that was a movie frame, and it was not created as a managed texture.
I have to read and analyze the pixel color data in RGB format.
Note: I am also using some C wrapper macros for the C++ D3D functions.

D3DSURFACE_DESC surfaceDesc;
IDirect3DTexture9_GetLevelDesc(texture, 0, &surfaceDesc);

D3DLOCKED_RECT locked;
d3dErr = texture->LockRect(0, &locked, NULL, 0); // this HRESULT needs checking

BYTE* bytePointer = (BYTE*)locked.pBits;
*outPBuffer = (BYTE*)malloc(4 * surfaceDesc.Width * surfaceDesc.Height + 1);

// then I loop over the buffer, copying data byte by byte for analysis
texture->UnlockRect(0);

The problem is that bytePointer is NULL.
I'm assuming it's because I'm not creating a managed texture.

1. Do I need to render to a managed texture to be able to simply read (or copy) the pixel buffer?

2. Is there a simpler way to read pixel data from a texture? Performance is not a huge issue here (but fast is good).

I don't know what the situation is now, but a few years ago I had serious problems working with small render targets (for picking). I sent NVIDIA a repro that could get the computer stuck (they fixed it after a few months), and ATI simply returned wrong results. Both improved with time, but I don't know if they're perfect now (and there's no chance I'll find my testing program now). In those days I worked with a GeForce2 and a Radeon 7500 (they were a bit old even then), so a lot could have changed since.

I had another code path that did a similar thing using occlusion queries. You might want to consider that.
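For reference, a D3D9 occlusion query roughly follows the pattern below. The query calls are sketched in comments because they need a live device, and the zero-pixels-means-hit interpretation is just one possible collision scheme, not necessarily the one used above:

```cpp
// Occlusion-query alternative: draw the test geometry with the depth
// test enabled and ask the GPU how many pixels passed. In a depth-based
// collision scheme, zero visible pixels means the object is fully
// blocked by the scene.
//
// The D3D9 calls involved (shown as comments):
//   IDirect3DQuery9* query = NULL;
//   d3dDevice->CreateQuery(D3DQUERYTYPE_OCCLUSION, &query);
//   query->Issue(D3DISSUE_BEGIN);
//   /* ... draw the test geometry ... */
//   query->Issue(D3DISSUE_END);
//   DWORD visiblePixels = 0;
//   while (query->GetData(&visiblePixels, sizeof(DWORD),
//                         D3DGETDATA_FLUSH) == S_FALSE)
//       ;  // still busy; better to poll again next frame than stall
//   query->Release();

// Interpreting the count is plain logic:
bool fullyOccluded(unsigned long visiblePixels) {
    return visiblePixels == 0;
}
```

The appeal over LockRect is that GetData can be polled without blocking, so the CPU never has to wait for the GPU to drain its command queue.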

Quote:
Original post by freekquency23
The problem is the bytePointer buffer is NULL.


Have you considered looking at the return value of the LockRect call? Also, have you enabled the debug runtime? It makes sense to check both.

Quote:
Original post by freekquency23
1. Do I need to render to a managed texture to be able to simply read (or copy) the pixel buffer?

2. Is there a simpler way to read pixel data from a texture? Performance is not a huge issue here (but fast is good).


1 - No; you cannot render to a managed texture (a render target has to be in D3DPOOL_DEFAULT).
2 - Reading data back from a texture is never really practical, given that the D3D API and 3D hardware aren't built for it. If you really need to do it, consider using GetRenderTargetData, which is usually optimized.

Quote:
Original post by freekquency23
I am handed a texture that was a movie frame, and was not created as a managed texture.


So does that mean you have the original data somewhere? Or was this movie frame only ever composited on the GPU?

LeGreg
