
crypting

Blending multiple D3D scenes with VMR9 video (allocator-presenter)


Hi, I'm trying to composite multiple offscreen-rendered Direct3D scenes with video using the VMR9. What I want is to mix several individual scenes (not just one) with multiple video streams.

I've tried the IVMRMixerBitmap9 interface before, but it only lets me work with a single D3D image, and only on top of the video. I would like to blend all of the video streams and Direct3D scenes together, and use IVMRMixerControl9 to configure the streams, setting parameters such as alpha, Z-order, position, etc.

So my main problem is finding the correct way to feed the offscreen-rendered Direct3D surfaces into streams (each scene in a different stream), so I can mix them with the other video streams through IVMRMixerControl9 as usual. Is there any sample in the SDK showing this? What would be the correct way?

Thanks in advance...

[edited by - crypting on March 22, 2004 5:49:25 AM] [edited by - crypting on March 22, 2004 9:46:14 AM]
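For the stream-parameter side, IVMRMixerControl9 takes per-stream (per input pin) settings: SetAlpha, SetZOrder, and SetOutputRect, where the output rect is a VMR9NormalizedRect in 0..1 composition-space coordinates rather than pixels. Here is a minimal sketch of that coordinate mapping; NormalizedRect is a local stand-in for the real VMR9NormalizedRect struct so the snippet compiles without the DirectX SDK headers:

```cpp
#include <cassert>

// Local stand-in for the DirectShow VMR9NormalizedRect struct
// (left/top/right/bottom as floats in 0.0 - 1.0 composition space).
struct NormalizedRect {
    float left, top, right, bottom;
};

// Map a pixel-space rectangle on the composition surface to the
// normalized 0..1 coordinates that IVMRMixerControl9::SetOutputRect expects.
NormalizedRect ToNormalized(int x, int y, int w, int h,
                            int surfaceW, int surfaceH)
{
    NormalizedRect r;
    r.left   = static_cast<float>(x)     / surfaceW;
    r.top    = static_cast<float>(y)     / surfaceH;
    r.right  = static_cast<float>(x + w) / surfaceW;
    r.bottom = static_cast<float>(y + h) / surfaceH;
    return r;
}
```

With the real API you would QueryInterface the VMR9 filter for IVMRMixerControl9 and then, per stream ID, call something like SetOutputRect(stream, &rect), SetAlpha(stream, 0.5f), SetZOrder(stream, z).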

I'm taking a look at the "Supplying a Custom Allocator-Presenter for VMR-9" topic. It says:


quote:

1. Implement a class that supports the IVMRSurfaceAllocator9 and IVMRImagePresenter9 interfaces.
[...]
6.
[...]
Create Direct3D surfaces that match the parameters given in the InitializeDevice method. You can use the VMR-9 filter's IVMRSurfaceAllocatorNotify9::AllocateSurfaceHelper method to allocate the surface.
Typically you will want the video frames to be drawn onto a texture surface, so that you can render the texture onto a Direct3D primitive. In that case, you may need to add the VMR9AllocFlag_TextureSurface flag to the VMR9AllocationInfo structure (the lpAllocInfo parameter). If the device does not support textures in the native video format, you might need to create a separate texture surface, and then copy the video frames from the video surface to the texture
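The quoted steps could be sketched as a class outline like the one below. This is illustrative only (it needs the DirectX 9 SDK headers to build, the class and member names are hypothetical, and IUnknown reference counting and error handling are omitted):

```cpp
#include <d3d9.h>
#include <vmr9.h>
#include <vector>

// Skeleton of a custom allocator-presenter per the documented steps.
// NOT a complete implementation: IUnknown methods and locking omitted.
class CAllocator : public IVMRSurfaceAllocator9, public IVMRImagePresenter9
{
public:
    // --- IVMRSurfaceAllocator9 ---
    STDMETHODIMP InitializeDevice(DWORD_PTR dwUserID,
                                  VMR9AllocationInfo *lpAllocInfo,
                                  DWORD *lpNumBuffers)
    {
        // Ask for texture surfaces so frames can be drawn on a D3D quad.
        lpAllocInfo->dwFlags |= VMR9AllocFlag_TextureSurface;

        m_surfaces.resize(*lpNumBuffers);
        // Let the VMR-9 allocate surfaces matching the negotiated format.
        return m_lpIVMRSurfAllocNotify->AllocateSurfaceHelper(
            lpAllocInfo, lpNumBuffers, &m_surfaces[0]);
    }
    STDMETHODIMP TerminateDevice(DWORD_PTR dwID);
    STDMETHODIMP GetSurface(DWORD_PTR dwUserID, DWORD SurfaceIndex,
                            DWORD SurfaceFlags,
                            IDirect3DSurface9 **lplpSurface);
    STDMETHODIMP AdviseNotify(
        IVMRSurfaceAllocatorNotify9 *lpIVMRSurfAllocNotify);

    // --- IVMRImagePresenter9 ---
    STDMETHODIMP StartPresenting(DWORD_PTR dwUserID);
    STDMETHODIMP StopPresenting(DWORD_PTR dwUserID);
    STDMETHODIMP PresentImage(DWORD_PTR dwUserID,
                              VMR9PresentationInfo *lpPresInfo);

private:
    IVMRSurfaceAllocatorNotify9      *m_lpIVMRSurfAllocNotify;
    std::vector<IDirect3DSurface9 *>  m_surfaces;
};
```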



But I want just the opposite: I want my offscreen-rendered surface (which contains my D3D scene) drawn onto the video frame.

Taking a look at the VMR9Allocator sample to see how to create a class that supports the IVMRSurfaceAllocator9 and IVMRImagePresenter9 interfaces: it overrides IVMRImagePresenter9::PresentImage and uses StretchRect or GetContainer to copy the video frame surface into a Direct3D texture.

m_D3DDev->SetRenderTarget( 0, m_renderTarget );

if( m_privateTexture != NULL )
{
    // We created a private texture:
    // blt the decoded image onto that texture.
    CComPtr<IDirect3DSurface9> surface;
    FAIL_RET( m_privateTexture->GetSurfaceLevel( 0, &surface.p ) );

    // Copy the full video surface onto the texture's surface.
    FAIL_RET( m_D3DDev->StretchRect( lpPresInfo->lpSurf, NULL,
                                     surface, NULL,
                                     D3DTEXF_NONE ) );

    FAIL_RET( m_scene.DrawScene( m_D3DDev, m_privateTexture ) );
}
else
{
    // The textures were allocated by the VMR;
    // all we need to do is get them from the surface.
    CComPtr<IDirect3DTexture9> texture;
    FAIL_RET( lpPresInfo->lpSurf->GetContainer( IID_IDirect3DTexture9,
                                                (LPVOID*)&texture.p ) );
    FAIL_RET( m_scene.DrawScene( m_D3DDev, texture ) );
}


But I don't know how to do it the other way around. Any hints?
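One possible way to do it the other way around, as a hedged, untested sketch: instead of copying the video frame into a texture and drawing the scene with it, treat the video frame surface itself as the render target and draw the pre-rendered scene texture over it with alpha blending. This assumes the mixer surface is a valid render target (e.g. allocated with VMR9AllocFlag_3DRenderTarget); m_sceneTexture is a hypothetical member holding the offscreen-rendered D3D scene:

```cpp
// Sketch: composite the offscreen scene directly onto the video frame.
// m_sceneTexture is hypothetical; error handling mirrors the sample's
// FAIL_RET macro.
HRESULT CAllocator::BlendSceneOverVideo( VMR9PresentationInfo *lpPresInfo )
{
    // Render directly onto the decoded video frame surface.
    FAIL_RET( m_D3DDev->SetRenderTarget( 0, lpPresInfo->lpSurf ) );

    // Enable alpha blending so the scene composites over the video
    // instead of overwriting it.
    FAIL_RET( m_D3DDev->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE ) );
    FAIL_RET( m_D3DDev->SetRenderState( D3DRS_SRCBLEND,
                                        D3DBLEND_SRCALPHA ) );
    FAIL_RET( m_D3DDev->SetRenderState( D3DRS_DESTBLEND,
                                        D3DBLEND_INVSRCALPHA ) );

    // Draw a textured quad carrying the offscreen scene
    // (m_scene.DrawScene in the sample draws such a quad).
    FAIL_RET( m_scene.DrawScene( m_D3DDev, m_sceneTexture ) );
    return S_OK;
}
```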

Thanks

[edited by - crypting on March 22, 2004 8:37:03 AM]
