Hello, and sorry if this topic has already been discussed; I just couldn't find it. What I'm trying to do is copy a region from a texture into a smaller texture, using vertex and pixel shaders. Here is the code,
where the small texture's dimensions are (128.0f, 128.0f):
LPDIRECT3DSURFACE9 m_oldRenderT, m_oldDepthSt, m_renderSurf;

quad[0].x = 0.0f;   quad[0].y = 0.0f;   quad[0].z = 0.0f;
quad[1].x = 0.0f;   quad[1].y = 128.0f; quad[1].z = 0.0f;
quad[2].x = 128.0f; quad[2].y = 0.0f;   quad[2].z = 0.0f;
quad[3].x = 128.0f; quad[3].y = 128.0f; quad[3].z = 0.0f;

quad[0].u = 0.0f; quad[0].v = 0.0f;
quad[1].u = 0.0f; quad[1].v = 1.0f;
quad[2].u = 1.0f; quad[2].v = 0.0f;
quad[3].u = 1.0f; quad[3].v = 1.0f;

device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(UpdateVertex));
OK, let me explain:
1. I set the quad's four vertices to (0, 0), (0, 128), (128, 0) and (128, 128), those being the corners of the small texture that needs to be updated.
2. I save the old render target and depth-stencil surface, then set the render target to m_SmallTexture's surface.
3. I render the quad.
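For reference, the save/set/restore sequence in steps 2 and 3 usually looks something like the sketch below. This is an assumption about the surrounding code, not a copy of it: `device` stands for the post's `IDirect3DDevice9*`, and the member names are taken from the declarations above.

```
// Sketch only (assumed context, not the original code).
device->GetRenderTarget(0, &m_oldRenderT);          // save old render target (AddRef'd)
device->GetDepthStencilSurface(&m_oldDepthSt);      // save old depth-stencil (AddRef'd)

m_SmallTexture->GetSurfaceLevel(0, &m_renderSurf);  // surface of the small texture
device->SetRenderTarget(0, m_renderSurf);
device->SetDepthStencilSurface(NULL);               // a 128x128 blit pass needs no depth buffer

// ... draw the quad here (BeginScene/EndScene as appropriate) ...

device->SetRenderTarget(0, m_oldRenderT);           // restore
device->SetDepthStencilSurface(m_oldDepthSt);

m_oldRenderT->Release();                            // release the AddRef'd saves
m_oldDepthSt->Release();
m_renderSurf->Release();
```

One common pitfall is forgetting the `Release()` calls: `GetRenderTarget` and `GetDepthStencilSurface` increment the reference count, so skipping them leaks surfaces every frame.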
Here is the vertex shader:

uniform float Size;
uniform float2 Viewport;

struct OUTPUT
{
    float4 position  : POSITION;
    float2 texcoords : TEXCOORD0;
};

OUTPUT Update(float3 pos : POSITION, float2 texcoords : TEXCOORD0)
{
    OUTPUT output;
    output.position = float4(float2(pos.x, -pos.y) +
    output.texcoords = texcoords * Size;
    return output;
}
Viewport is set to (128.0f, 128.0f). Is that OK?
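Since the position expression is cut off in the post, here is a common D3D9 pattern for this kind of pass-through vertex shader. The mapping below (divide by `Viewport`, flip y, apply the D3D9 half-pixel offset) is an assumption about what the missing part should contain, not the original shader:

```
// Sketch only: the divide-by-Viewport mapping and the 0.5 offset are
// assumptions, not taken from the original post.
uniform float Size;
uniform float2 Viewport;   // render-target size in pixels, e.g. (128, 128)

struct OUTPUT
{
    float4 position  : POSITION;
    float2 texcoords : TEXCOORD0;
};

OUTPUT Update(float3 pos : POSITION, float2 texcoords : TEXCOORD0)
{
    OUTPUT output;
    // Map pixel coordinates [0..Viewport] to clip space [-1..1], flip y
    // (clip space is y-up), and shift by half a pixel so texels line up
    // with pixels (a D3D9 peculiarity).
    float2 clipPos = ((pos.xy - 0.5f) / Viewport) * 2.0f - 1.0f;
    output.position  = float4(clipPos.x, -clipPos.y, 0.0f, 1.0f);
    output.texcoords = texcoords * Size;
    return output;
}
```

With a shader like this, `Viewport` must match the current render target's size, so (128.0f, 128.0f) would be correct for the small texture.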
The pixel shader computes the output color from the offset of the small texture within the big one. I'm fairly sure the pixel shader is correct, but my questions are:
1. Did I do something wrong? Aren't those the right steps for rendering to a texture?
2. Did I map the vertices correctly? Do I need to set any transform matrices?
3. Is the Viewport parameter in the vertex shader what maps the vertices into the render target's space?
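Questions 2 and 3 both come down to how the pixel coordinates are mapped into clip space. No matrices are needed if the vertex shader does the mapping itself; in D3D9 the one subtlety is the half-pixel offset between pixel and texel centers. The math can be sketched as a small standalone helper (the names here are illustrative, not from the post):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper (not from the original post): converts a vertex given in
// pixel coordinates (origin top-left, y down) on a render target of the given
// size into D3D9 clip space (origin center, y up, range [-1, 1]). The 0.5
// shift is the D3D9 half-pixel offset needed so texels map exactly onto pixels.
struct Float2 { float x, y; };

Float2 PixelToClip(float x, float y, float width, float height)
{
    Float2 r;
    r.x =  ((x - 0.5f) / width)  * 2.0f - 1.0f;   // [0..w] -> [-1..1], shifted half a pixel
    r.y = -(((y - 0.5f) / height) * 2.0f - 1.0f); // same mapping, plus the y flip
    return r;
}
```

Feeding the quad corners (0..128) through this puts them half a pixel outside the exact clip-space corners, which is precisely what makes texel centers land on pixel centers in D3D9.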
Thanks in advance for any answers; I'd really appreciate the help.
One last thing: when I move the camera, I get a blank screen for a few dozen milliseconds; I believe it's because of the render target being changed. Has anyone encountered this? What is the solution?