Archived

This topic is now archived and is closed to further replies.

crypting

Performance problems converting offscreen surfaces into bitmap


Hi, I have a scene rendered onto an offscreen surface (IDirect3DSurface9) and need to copy a bitmap of this surface into a provided BYTE buffer. To render offscreen I use D3DXCreateRenderToSurface(..) and D3DXCreateTexture(..) with D3DUSAGE_RENDERTARGET and D3DPOOL_DEFAULT. After rendering the scene, I use IDirect3DSurface9::GetDC to retrieve the device context and copy a bitmap of the scene into the provided BYTE buffer.

GetDC fails on default pool (D3DPOOL_DEFAULT) surfaces unless they are dynamic (D3DUSAGE_DYNAMIC) or lockable render targets, so I create another offscreen surface with IDirect3DDevice9::CreateOffscreenPlainSurface in D3DPOOL_SYSTEMMEM and call IDirect3DDevice9::GetRenderTargetData to copy my first offscreen surface into this new one before calling GetDC. This happens every frame I render, since it's part of a DirectShow application (the bitmap is provided as a continuously updating video stream).

It all works, but the performance is very poor. Is there a better way to copy an offscreen rendered surface into a bitmap? Can I avoid IDirect3DDevice9::GetRenderTargetData by modifying this approach? Or is there a better way to copy the surface into a compatible bitmap to fill my provided BYTE buffer? Thanks!

Here is the initialization of the surfaces and the function I use to copy the surface contents into my BYTE buffer:
// Create our dynamic texture for use by the "render to" surface...
hr = D3DXCreateTexture( g_pD3DDevice,
                        RENDERTOSURFACE_WIDTH,
                        RENDERTOSURFACE_HEIGHT,
                        1, D3DUSAGE_RENDERTARGET,
                        D3DFMT_X8R8G8B8,
                        D3DPOOL_DEFAULT,
                        &g_pDynamicTexture );
if( FAILED(hr) ) exit(-1);

// Create an off-screen "render to" surface...
D3DSURFACE_DESC desc;
g_pDynamicTexture->GetSurfaceLevel( 0, &m_pTextureSurface );
m_pTextureSurface->GetDesc( &desc );

hr = D3DXCreateRenderToSurface( g_pD3DDevice,
                                desc.Width,
                                desc.Height,
                                desc.Format,
                                TRUE,
                                D3DFMT_D16,
                                &m_pRenderToSurface );
if( FAILED(hr) ) exit(-1);

// System-memory surface needed to retrieve the device context.
// Copied into from device memory every time before calling GetDC.
hr = g_pD3DDevice->CreateOffscreenPlainSurface(
                   RENDERTOSURFACE_WIDTH,
                   RENDERTOSURFACE_HEIGHT,
                   D3DFMT_X8R8G8B8,
                   D3DPOOL_SYSTEMMEM,
                   &g_pSurface, NULL );
if( FAILED(hr) ) exit(-1);



   
HBITMAP CopySurfaceToBitmap(LPDIRECT3DSURFACE9 pD3DSurface, BYTE *pData, BITMAPINFO *pHeader)
{
    HDC     hScrDC, hMemDC;          // surface DC and memory DC
    HBITMAP hBitmap, hOldBitmap;     // handles to device-dependent bitmaps
    int     nX, nY;                  // top-left of rectangle to grab
    int     nWidth, nHeight;         // DIB width and height

    if (pD3DSurface == NULL)
        return NULL;

    // Create a DC for the surface and a memory DC compatible with it.
    if (FAILED(pD3DSurface->GetDC(&hScrDC)))
        return NULL;
    hMemDC = CreateCompatibleDC(hScrDC);

    D3DSURFACE_DESC desc;
    pD3DSurface->GetDesc( &desc );

    // Rectangle to grab: the whole surface.
    nX = 0;
    nY = 0;
    nWidth  = desc.Width;
    nHeight = desc.Height;

    // Create a bitmap compatible with the surface DC and select it
    // into the memory DC.
    hBitmap = CreateCompatibleBitmap(hScrDC, nWidth, nHeight);
    hOldBitmap = (HBITMAP) SelectObject(hMemDC, hBitmap);

    // BitBlt surface DC to memory DC.
    BitBlt(hMemDC, 0, 0, nWidth, nHeight, hScrDC, nX, nY, SRCCOPY);

    // Select the old bitmap back into the memory DC; SelectObject
    // returns the bitmap that now holds the surface contents.
    hBitmap = (HBITMAP) SelectObject(hMemDC, hOldBitmap);

    // Copy the bitmap data into the provided BYTE buffer.
    GetDIBits(hScrDC, hBitmap, 0, nHeight, pData, pHeader, DIB_RGB_COLORS);

    // Clean up. The caller is responsible for DeleteObject on the
    // returned bitmap.
    pD3DSurface->ReleaseDC(hScrDC);
    DeleteDC(hMemDC);

    return hBitmap;
}
[edited by - crypting on April 5, 2004 8:12:48 PM]

Currently, the only way to copy a surface from video memory to system memory is with GetRenderTargetData.

If you need to do this every frame, perhaps an alternative is to get DirectShow to render to system memory instead of video memory by default. If you're going to transfer the entire video image from video memory to system memory every frame, then there really is no need to have DirectShow render to video memory in the first place (unless you're using this data for something else later on). You might as well have it render to system memory from the beginning and save yourself the step of having to call GetRenderTargetData.

neneboricua

Well,

I'm using VMR-9 for DirectShow rendering, and it only supports video memory render targets. I'm using VMR-9 because I want to mix several streams: for example, two offscreen rendered surfaces (the user can interact, moving objects with the mouse), two videos, and one real-time camera.

I just found examples using D3DXCreateRenderToSurface, which needs (if I understand it right) a texture created with D3DXCreateTexture and the following main parameters: D3DUSAGE_RENDERTARGET and D3DPOOL_DEFAULT.

That's the only reason I use another surface created with CreateOffscreenPlainSurface and D3DPOOL_SYSTEMMEM: to copy video memory to system memory and then be able to use GetDC.

So, if there is no way to make my DirectShow application render to system memory using VMR-9, is there any way to render my D3D scene to an offscreen surface created in D3DPOOL_SYSTEMMEM in the first place?

Or is it possible to get a bitmap directly from video memory, as an alternative to GetDC, which only works with system memory?

Thanks again



[edited by - crypting on April 6, 2004 7:55:01 AM]

The problem is that in DX9, render targets cannot be created in system memory. My knowledge of DirectShow is somewhat limited. Could VMR7 offer the functionality you need?

You might also want to check the DirectShow newsgroup. I'm sure this topic has come up there many times.

microsoft.public.win32.programmer.directx.video

neneboricua

quote:

The problem is that in DX9, render targets cannot be created in system memory. My knowledge of DirectShow is somewhat limited. Could VMR7 offer the functionality you need?


I'm really interested in using VMR9; your suggestion only addresses the VMR9 side of the problem. Does this mean there is no other way to retrieve a bitmap from a surface than the IDirect3DSurface9::GetDC method?

quote:

You might also want to check the DirectShow newsgroup. I'm sure this topic has come up there many times.

microsoft.public.win32.programmer.directx.video



Yes, I'm also checking that newsgroup, but still without success. Maybe my searches aren't good enough.

Thanks again

[edited by - crypting on April 7, 2004 8:17:43 AM]

quote:
Original post by crypting
I'm really interested in using VMR9; your suggestion only addresses the VMR9 side of the problem. Does this mean there is no other way to retrieve a bitmap from a surface than the IDirect3DSurface9::GetDC method?


There are other ways to do it, but they all require that the source image be in system memory. For example, you could lock the surface and copy the bits out yourself, but again, that would require the surface to be in system memory.

neneboricua

Hi again

I was trying to copy the bits out myself by overriding CSourceStream::FillBuffer. I need to fill the sample's data buffer with a 256x256 IDirect3DSurface9.

As you suggested, I lock the surface and then copy the BYTEs to the sample's data buffer.

If I just copy all the bytes with a plain for loop, the sample appears with a "vertical flip" effect. The video subtype is ARGB32, format 256x256, 32 bits. The surface is a 256x256 X8R8G8B8 one.

Where can I read/learn how this "manual" copy works (different formats, etc.)? I can't find it in the SDK documentation.

Thanks for your patience


[edited by - crypting on April 11, 2004 7:05:33 PM]

Ok, I've already done it.

It works, but I have some questions... first the code, then the questions.


D3DSURFACE_DESC surfaceDesc;
pD3DSurface->GetDesc(&surfaceDesc);

D3DLOCKED_RECT d3dlr;
BYTE *pSurfaceBuffer;

// D3DLOCK_DONOTWAIT can fail with D3DERR_WASSTILLDRAWING, so check it.
HRESULT hr = pD3DSurface->LockRect(&d3dlr, 0, D3DLOCK_DONOTWAIT);
if (FAILED(hr))
    return hr;

// Avoiding vertical flip: start at the last surface row.
pSurfaceBuffer = (BYTE *) d3dlr.pBits + d3dlr.Pitch * (surfaceDesc.Height - 1);

// Video sample pitch, forcing ARGB 32 bits.
int m_lVidPitch = (surfaceDesc.Width * 4 + 4) & ~(4);

for (int i = 0; i < (int)surfaceDesc.Height; i++)
{
    BYTE *pDataOld = pData;
    BYTE *pSurfaceBufferOld = pSurfaceBuffer;

    for (int j = 0; j < (int)surfaceDesc.Width; j++)
    {
        pData[0] = pSurfaceBuffer[0];
        pData[1] = pSurfaceBuffer[1];
        pData[2] = pSurfaceBuffer[2];
        pData[3] = pSurfaceBuffer[3];

        pData += 4; pSurfaceBuffer += 4;
    }

    // Next video sample row...
    pData = pDataOld + m_lVidPitch;
    // ...previous surface row.
    pSurfaceBuffer = pSurfaceBufferOld - d3dlr.Pitch;
}

pD3DSurface->UnlockRect();


1) First, I don't know how to obtain the video sample pitch, so I force ARGB 32 bits and compute it "manually", since that's the format I'm working with:
int m_lVidPitch = (surfaceDesc.Width * 4 + 4) & ~(4);
In fact, I took the line above from a sample in the SDK; I don't really understand the final "& ~(4)".
How does this work? Is it possible to obtain the pitch of a video sample as I do with any D3D surface?

2)Copying BYTEs from a surface into my sample data buffer makes my video appear with a vertical flip effect. To solve this I simply begin copying from the last row of the surface to the first one, main lines for this:
//avoid the vertical flip effect
pSurfaceBuffer = (BYTE *) d3dlr.pBits + d3dlr.Pitch*(surfaceDesc.Height - 1);
[...]
//external for, previous surface row
pSurfaceBuffer = pSurfaceBufferOld - d3dlr.Pitch;

Why this behaviour? I mean, why the vertical flip effect if I just copy BYTEs from the surface to the video sample?

3) Would it be faster to copy DWORDs instead of BYTEs?

EDITED:

4) I finally added a line (pasted below) to my loop to make green pixels appear transparent. I add the filter graph to the ROT, and the filter using this code reports an ARGB 32-bit format, but I don't get any transparency... why?

if ((pData[0]==0x00) && (pData[1]==0xFF) && (pData[2]==0x00)) pData[3] = 0x00;


Thanks again



[edited by - crypting on April 12, 2004 8:28:36 AM]

quote:
Original post by crypting
1) First, I don't know how to obtain the video sample pitch, so I force ARGB 32 bits and compute it "manually", since that's the format I'm working with:
int m_lVidPitch = (surfaceDesc.Width * 4 + 4) & ~(4);
In fact, I took the line above from a sample in the SDK; I don't really understand the final "& ~(4)".
How does this work? Is it possible to obtain the pitch of a video sample as I do with any D3D surface?


It's been way too long since I played with DShow to know why that &~(4) thing is there. I thought that VMR9 returned a D3D9 texture. If that's the case, you should be able to get the pitch of the video sample like you do with any D3D surface.
quote:

2)Copying BYTEs from a surface into my sample data buffer makes my video appear with a vertical flip effect. To solve this I simply begin copying from the last row of the surface to the first one, main lines for this:
//avoid the vertical flip effect
pSurfaceBuffer = (BYTE *) d3dlr.pBits + d3dlr.Pitch*(surfaceDesc.Height - 1);
[...]
//external for, previous surface row
pSurfaceBuffer = pSurfaceBufferOld - d3dlr.Pitch;

Why this behaviour? I mean, why the vertical flip effect if I just copy BYTEs from the surface to the video sample?


I ran into the same thing. Sorry, but I don't remember exactly why this is. I think it had to do with the way that video devices output their data. As you've found, it's not too hard to deal with.
quote:

3) Would it be faster to copy DWORDs instead of BYTEs?


Probably. But to do this, you will probably need to know the exact pitch of the surface. And remember that in D3D, pixel "formats are listed from left to right, most significant bit (MSB) to least significant bit (LSB). For example, D3DFORMAT_ARGB is ordered from the MSB channel A (alpha), to the LSB channel B (blue). When traversing surface data, the data is stored in memory from LSB to MSB, which means that the channel order in memory is from LSB (blue) to MSB (alpha)."

That quote was taken from the first paragraph of the D3DFORMAT section of the D3D docs.
quote:

4) I finally added a line (pasted below) to my loop to make green pixels appear transparent. I add the filter graph to the ROT, and the filter using this code reports an ARGB 32-bit format, but I don't get any transparency... why?

if ((pData[0]==0x00) && (pData[1]==0xFF) && (pData[2]==0x00)) pData[3] = 0x00;


Sorry, can't help you with this one.

neneboricua

[edited by - neneboricua19 on April 12, 2004 2:16:11 PM]

quote:

It's been way too long since I played with DShow to know why that &~(4) thing is there. I thought that VMR9 returned a D3D9 texture. If that's the case, you should be able to get the pitch of the video sample like you do with any D3D surface.



Well, I don't really know if I can ask for the VMR9 rendering surface at this point, since I'm still in the source filter. But the data must be stored in a Direct3D surface, so I'll look that up in the docs, since it would make things quite a bit easier.

Thanks a lot for all your help

I'll write back when I solve the color key problem (if I do) :D

See you!
