I'm working on a post-processing system, and everything works fine so far except for one thing: though my downsampler works, I can't sample the result back up. With my basic approach, the up-sampled output looks like this:
It's pretty obvious what's going on, though I don't really know why: since I'm using a screensize/4 texture for downsampling, my upsampler now just renders that little texture to the screen.
Here is my shader (more or less taken from the DirectX SDK sample):
texture ColorMap;
texture NormalMap;
texture PositionMap;
texture ImageMap;
texture g_txSceneColor;
texture g_txSceneNormal;
texture g_txScenePosition;
sampler2D ColorSampler = sampler_state
{
    Texture   = <ColorMap>;
    AddressU  = Clamp;
    AddressV  = Clamp;
    MinFilter = Point;
    MagFilter = Linear;
    MipFilter = Linear;
};
sampler2D NormalSampler = sampler_state
{
    Texture   = <NormalMap>;
    AddressU  = Clamp;
    AddressV  = Clamp;
    MinFilter = Point;
    MagFilter = Linear;
    MipFilter = Linear;
};
sampler2D PositionSampler = sampler_state
{
    Texture   = <PositionMap>;
    AddressU  = Clamp;
    AddressV  = Clamp;
    MinFilter = Point;
    MagFilter = Linear;
    MipFilter = Linear;
};
sampler2D ImageSampler = sampler_state
{
    Texture   = <ImageMap>;
    AddressU  = Clamp;
    AddressV  = Clamp;
    MinFilter = Point;
    MagFilter = Linear;
    MipFilter = Linear;
};
//-----------------------------------------------------------------------------
// Pixel Shader: UpFilterPS
// Desc: Performs upfiltering to scale the image to the original size.
//-----------------------------------------------------------------------------
float4 UpFilterPS( float2 Tex : TEXCOORD0 ) : COLOR0
{
    return tex2D( ImageSampler, Tex );
}
//-----------------------------------------------------------------------------
// Technique: PostProcess
// Desc: Performs post-processing effect that up-filters.
//-----------------------------------------------------------------------------
technique PostProcess
{
    pass p0
    <
        float fScaleX = 4.0f;
        float fScaleY = 4.0f;
    >
    {
        VertexShader = null;
        PixelShader  = compile ps_2_0 UpFilterPS();
        ZEnable      = false;
    }
}
And that's my render method:
LPDIRECT3DTEXTURE9 CEffect::Apply(LPDIRECT3DTEXTURE9 InTexture, LPDIRECT3DTEXTURE9 PositionMap,
                                  LPDIRECT3DTEXTURE9 NormalMap, LPDIRECT3DTEXTURE9 ColorMap)
{
    m_Effect->SetTexture("ImageMap",    InTexture);
    m_Effect->SetTexture("PositionMap", PositionMap);
    m_Effect->SetTexture("NormalMap",   NormalMap);
    m_Effect->SetTexture("ColorMap",    ColorMap);

    float RenderTargetW = m_RenderTargetW;
    float RenderTargetH = m_RenderTargetH;

    ClearTexture(m_Texture, 0x00000000);
    SetRenderTarget(0, m_Texture);

    static PlaneVertex axPlaneVertices[] =
    {
        { 0,             0,             .5f, 1, 0 + .5f / RenderTargetW, 0 + .5f / RenderTargetH },
        { RenderTargetW, 0,             .5f, 1, 1 + .5f / RenderTargetW, 0 + .5f / RenderTargetH },
        { RenderTargetW, RenderTargetH, .5f, 1, 1 + .5f / RenderTargetW, 1 + .5f / RenderTargetH },
        { 0,             RenderTargetH, .5f, 1, 0 + .5f / RenderTargetW, 1 + .5f / RenderTargetH }
    };

    m_Effect->Begin(NULL, 0);
    m_Effect->BeginPass(0);
    m_lpDevice->SetFVF(D3DFVF_XYZRHW | D3DFVF_TEX1);
    m_lpDevice->DrawPrimitiveUP(D3DPT_TRIANGLEFAN, 2, axPlaneVertices, sizeof(PlaneVertex));
    m_Effect->EndPass();
    m_Effect->End();

    return m_Texture;
}
The returned texture (i.e. the render target) is used as input for the next shader. Here is how I create that texture:
void CEffect::Load(LPD3DXEFFECT Effect, float Scale)
{
    m_Effect        = Effect;
    m_RenderTargetW = SCR_WIDTH  / Scale;
    m_RenderTargetH = SCR_HEIGHT / Scale;
    D3DXCreateTexture(m_lpDevice, m_RenderTargetW, m_RenderTargetH, 0,
                      D3DUSAGE_RENDERTARGET, D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, &m_Texture);
}
That's the interesting part: I pass a scaling value depending on the operation. Normally it's 1; for downsampling it's 4. Logically I would set the scale to 1 for upsampling, but somehow that approach produces the effect seen above. So I tried setting the scale to 0.25f, since the SDK sample also says the target should be 4 times the size (which still seems weird to me, considering the memory consumption of a texture 4 times the back-buffer size). With that, however, all I get is a black screen.
Does anyone have an idea? Even if it's not about post-processing as such, but just about rendering a smaller texture onto a larger one via upsampling, I'd be really glad for any hints. Am I missing something?