Copy pixels

1 comment, last by djalexd 18 years, 1 month ago
Hello, and sorry if this topic has already been discussed; I just couldn't find it. What I'm trying to do is copy a region from a texture into a smaller texture. For this I'm using vertex and pixel shaders. This is the code, where the small texture's dimensions are (128.0f, 128.0f):

=========================================
LPDIRECT3DSURFACE9 m_oldRenderT, m_oldDepthSt, m_renderSurf;
UpdateVertex quad[4];

quad[0].x = 0;      quad[0].y = 0;      quad[0].z = 0;
quad[1].x = 0;      quad[1].y = 128.0f; quad[1].z = 0;
quad[2].x = 128.0f; quad[2].y = 0;      quad[2].z = 0;
quad[3].x = 128.0f; quad[3].y = 128.0f; quad[3].z = 0;

quad[0].u = 0; quad[0].v = 0;
quad[1].u = 0; quad[1].v = 1;
quad[2].u = 1; quad[2].v = 0;
quad[3].u = 1; quad[3].v = 1;

m_pDevice->SetVertexDeclaration(m_pUpdateDecl);
m_pDevice->SetVertexShader(m_pUnsampleVS);
m_pDevice->SetPixelShader(m_pUnsamplePS);

m_pDevice->GetRenderTarget(0, &m_oldRenderT);
m_pDevice->GetDepthStencilSurface(&m_oldDepthSt);
b_pSmallTexture->GetSurfaceLevel(0, &m_renderSurf);
m_pDevice->SetRenderTarget(0, m_renderSurf);

m_pDevice->BeginScene();
m_pDevice->SetTexture(0, m_pBigTexture);
m_pDevice->SetTexture(1, 0);
m_pDevice->SetTexture(2, 0);
m_pDevice->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, &quad[0], sizeof(UpdateVertex));
m_pDevice->EndScene();
=========================================

OK, let me explain:
1. I set up the quad that gets rendered with its corners at (0, 0), (128, 0), (0, 128) and (128, 128), those being the corners of the small texture that needs to be updated.
2. I save the old render target and depth surface and set the new render target to the surface of b_pSmallTexture.
3. I just render.

Next is the vertex shader used:

=========================================
uniform float Size;
uniform float2 Viewport;

struct OUTPUT
{
    vector position  : POSITION;
    float2 texcoords : TEXCOORD0;
};

OUTPUT Update(float3 pos : POSITION, float2 texcoords : TEXCOORD0)
{
    OUTPUT output;
    output.position  = float4(float2(pos.x, -pos.y) + float2(-1.0, 1.0) / Viewport, 0.0, 1.0);
    output.texcoords = texcoords * Size;
    return output;
}
=========================================

Viewport is set to (128.0f, 128.0f). Is that OK? The pixel shader generates the output color given the offset of the small texture within the big one. I'm sure the pixel shader acts correctly, but my questions are:
1. Did I do something wrong? Aren't those the steps to render to a texture?
2. Did I map the vertices correctly? I think I may have to set some matrices, right?
3. Is the Viewport parameter in the vertex shader supposed to map the vertices to texture space?

Thanks for any answers; I'd really appreciate it if someone could help me. One last thing: when I move the camera, I get a blank screen for a few dozen milliseconds; I believe it's because of the render target being changed. Has anyone encountered this? What is the solution?
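For reference, here is a minimal sketch of the render-target switch described in steps 2 and 3, using the same member names as the post above; it also puts the original render target and depth surface back at the end, since the snippet above saves them but doesn't show them being restored. This is a sketch only, not the exact code in question:

=========================================
// Save the current targets, switch to the small surface, draw, restore.
LPDIRECT3DSURFACE9 oldRenderTarget = NULL;
LPDIRECT3DSURFACE9 oldDepthStencil = NULL;
LPDIRECT3DSURFACE9 smallSurface    = NULL;

m_pDevice->GetRenderTarget(0, &oldRenderTarget);
m_pDevice->GetDepthStencilSurface(&oldDepthStencil);
b_pSmallTexture->GetSurfaceLevel(0, &smallSurface);

// Setting render target 0 also resets the viewport to the full 128x128 surface.
m_pDevice->SetRenderTarget(0, smallSurface);
m_pDevice->SetRenderState(D3DRS_ZENABLE, FALSE); // assumption: no depth test needed for the copy

m_pDevice->BeginScene();
m_pDevice->SetTexture(0, m_pBigTexture);
m_pDevice->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(UpdateVertex));
m_pDevice->EndScene();

// Put the original target and depth surface back before drawing the main scene.
m_pDevice->SetRenderTarget(0, oldRenderTarget);
m_pDevice->SetDepthStencilSurface(oldDepthStencil);

oldRenderTarget->Release();
oldDepthStencil->Release();
smallSurface->Release();
=========================================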
I'm not sure what's wrong, but there might be an alternate solution: if both of your textures are created with the D3DUSAGE_RENDERTARGET flag, you can just use IDirect3DTexture9::GetSurfaceLevel() to get their surface levels, then use IDirect3DDevice9::StretchRect() to copy a subrectangle of one onto the other. Note that there are heavy restrictions on this, such as both textures having to be render targets. It seems "b_pSmallTexture" is a render target, but I don't know about "m_pBigTexture".



That would work like:
=========================================
IDirect3DSurface9* pBigSurface = NULL;
RECT SubRect = { ... }; // Insert subrectangle here...

m_pBigTexture->GetSurfaceLevel(0, &pBigSurface);

// StretchRect takes a filter as its last argument; D3DTEXF_NONE is fine
// when the source and destination rectangles are the same size.
m_pDevice->StretchRect(pBigSurface, &SubRect, m_renderSurf, NULL, D3DTEXF_NONE);

pBigSurface->Release();
=========================================

Here is the MSDN link to it:
http://msdn.microsoft.com/archive/default.asp?url=/archive/en-us/directx9_c/directx/graphics/reference/d3d/interfaces/idirect3ddevice9/StretchRect.asp

If "m_pBigTexture" isn't a render target (and you want a lame hack), then you could draw the texture with a 1:1 scale in one of the corners, then use StretchRect() from the backbuffer (since it IS a render target).

  |----- StretchRect() your subrect from this region of the backbuffer.
  v
o--------------o
| Big |        |
| Tex |        |
|-----o        |
|              |
o--------------o
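A sketch of that fallback (assuming the big texture has already been drawn 1:1 into the top-left corner of the backbuffer and the wanted region starts at (0, 0); adjust srcRect to taste):

=========================================
// Copy a 128x128 region of the backbuffer into the small render-target surface.
IDirect3DSurface9* pBackBuffer = NULL;
m_pDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackBuffer);

RECT srcRect = { 0, 0, 128, 128 }; // assumed location of the region inside the backbuffer

// Same-size copy, so no filtering is needed.
m_pDevice->StretchRect(pBackBuffer, &srcRect, m_renderSurf, NULL, D3DTEXF_NONE);

pBackBuffer->Release();
=========================================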

Until then, I will just yield to anyone who can properly debug shaders. =)

To answer some of your questions: you need an identity world/view matrix and an orthographic projection matrix if you want the XY positions of the vertices (Z discarded) to map to pixels.
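For example, a D3DX setup along those lines might look like this (a sketch for a 128x128 target, not the poster's code; with a custom vertex shader the matrix would instead be uploaded as a shader constant and applied in the shader, since shaders ignore SetTransform):

=========================================
// Identity world/view plus an off-center ortho projection so that a vertex at
// (x, y) lands on pixel (x, y) of a 128x128 render target (y grows downward).
D3DXMATRIX world, view, proj;
D3DXMatrixIdentity(&world);
D3DXMatrixIdentity(&view);
D3DXMatrixOrthoOffCenterLH(&proj, 0.0f, 128.0f, 128.0f, 0.0f, 0.0f, 1.0f);

m_pDevice->SetTransform(D3DTS_WORLD, &world);
m_pDevice->SetTransform(D3DTS_VIEW, &view);
m_pDevice->SetTransform(D3DTS_PROJECTION, &proj);
=========================================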

Hope that helps some?
Unfortunately I can't use the StretchRect method. The pixel shader adds some noise to the resulting texture, so I can't get rid of that pass; only the vertex shader solution is viable for what I want.

I don't need the application or the vertex shader debugged; the application runs, but the results are wrong. I just need someone to tell me what the code is missing (like matrix setup) or what is wrong. For example, what is "float2(position.x, -position.y) + float2(-1.0, 1.0) / Viewport"? I just don't understand that expression; maybe it's a transform from world space to texture space?
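For what it's worth, the usual D3D9 mapping from pixel coordinates to clip space looks like the sketch below (plain C++ just to show the math, not the shader in question); the quoted expression seems to apply only the Y flip and the half-pixel offset, not the scale-and-shift into [-1, 1]:

=========================================
// Sketch: conventional D3D9 pixel-to-clip-space mapping, including the
// half-pixel offset so texels line up with pixels.
struct ClipPos { float x, y; };

ClipPos PixelToClip(float px, float py, float viewportW, float viewportH)
{
    ClipPos c;
    c.x =  (px / viewportW) * 2.0f - 1.0f - 1.0f / viewportW;   // scale to [-1,1], shift half a pixel left
    c.y = -((py / viewportH) * 2.0f - 1.0f) + 1.0f / viewportH; // flip Y, shift half a pixel up
    return c;
}
=========================================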

The reply was very helpful, thanks

