[DirectX 9] Upsampling (postprocessing)

Started by
3 comments, last by Juliean 13 years ago
Hi,

I'm working on a postprocessing system, and everything works fine so far except one thing: my downsampler works, but I can't sample back up. With my basic approach, upsampling looks like this:
[attached image: wgufk97u.png]

Pretty obvious what's going on, though I don't really know why: since I'm using a screensize/4 texture for downsampling, my upsampler now just renders that little texture to the screen.

Here is my shader (more or less taken from the DirectX SDK example):


texture ColorMap;
texture NormalMap;
texture PositionMap;
texture ImageMap;

texture g_txSceneColor;
texture g_txSceneNormal;
texture g_txScenePosition;

sampler2D ColorSampler = sampler_state
{
    Texture   = <ColorMap>;
    AddressU  = Clamp;
    AddressV  = Clamp;
    MinFilter = Point;
    MagFilter = Linear;
    MipFilter = Linear;
};
sampler2D NormalSampler = sampler_state
{
    Texture   = <NormalMap>;
    AddressU  = Clamp;
    AddressV  = Clamp;
    MinFilter = Point;
    MagFilter = Linear;
    MipFilter = Linear;
};
sampler2D PositionSampler = sampler_state
{
    Texture   = <PositionMap>;
    AddressU  = Clamp;
    AddressV  = Clamp;
    MinFilter = Point;
    MagFilter = Linear;
    MipFilter = Linear;
};
sampler2D ImageSampler = sampler_state
{
    Texture   = <ImageMap>;
    AddressU  = Clamp;
    AddressV  = Clamp;
    MinFilter = Point;
    MagFilter = Linear;
    MipFilter = Linear;
};


//-----------------------------------------------------------------------------
// Pixel Shader: UpFilterPS
// Desc: Performs upfiltering to scale the image to the original size.
//-----------------------------------------------------------------------------
float4 UpFilterPS( float2 Tex : TEXCOORD0 ) : COLOR0
{
    return tex2D( ImageSampler, Tex );
}


//-----------------------------------------------------------------------------
// Technique: PostProcess
// Desc: Performs a post-processing effect that up-filters.
//-----------------------------------------------------------------------------
technique PostProcess
{
    pass p0
    <
        float fScaleX = 4.0f;
        float fScaleY = 4.0f;
    >
    {
        VertexShader = null;
        PixelShader  = compile ps_2_0 UpFilterPS();
        ZEnable      = false;
    }
}


And that's my render method:

LPDIRECT3DTEXTURE9 CEffect::Apply(LPDIRECT3DTEXTURE9 InTexture, LPDIRECT3DTEXTURE9 PositionMap, LPDIRECT3DTEXTURE9 NormalMap, LPDIRECT3DTEXTURE9 ColorMap)
{
    m_Effect->SetTexture("ImageMap", InTexture);
    m_Effect->SetTexture("PositionMap", PositionMap);
    m_Effect->SetTexture("NormalMap", NormalMap);
    m_Effect->SetTexture("ColorMap", ColorMap);

    float RenderTargetW = m_RenderTargetW;
    float RenderTargetH = m_RenderTargetH;

    ClearTexture(m_Texture, 0x00000000);
    SetRenderTarget(0, m_Texture);

    static PlaneVertex axPlaneVertices[] =
    {
        { 0,             0,             .5f, 1, 0 + .5f / RenderTargetW, 0 + .5f / RenderTargetH },
        { RenderTargetW, 0,             .5f, 1, 1 + .5f / RenderTargetW, 0 + .5f / RenderTargetH },
        { RenderTargetW, RenderTargetH, .5f, 1, 1 + .5f / RenderTargetW, 1 + .5f / RenderTargetH },
        { 0,             RenderTargetH, .5f, 1, 0 + .5f / RenderTargetW, 1 + .5f / RenderTargetH }
    };

    m_Effect->Begin(NULL, 0);
    m_Effect->BeginPass(0);
    m_lpDevice->SetFVF(D3DFVF_XYZRHW | D3DFVF_TEX1);
    m_lpDevice->DrawPrimitiveUP(D3DPT_TRIANGLEFAN, 2, axPlaneVertices, sizeof(PlaneVertex));
    m_Effect->EndPass();
    m_Effect->End();

    return m_Texture;
}


The returned texture (=the render target) is used as input for the next shader. Here is how I create the texture:

void CEffect::Load(LPD3DXEFFECT Effect, float Scale)
{
    m_Effect = Effect;
    m_RenderTargetW = SCR_WIDTH / Scale;
    m_RenderTargetH = SCR_HEIGHT / Scale;
    D3DXCreateTexture(m_lpDevice, m_RenderTargetW, m_RenderTargetH, 0, D3DUSAGE_RENDERTARGET, D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, &m_Texture);
}


That's the interesting part. I pass a scaling value depending on the operation: normally it's 1, for downsampling it's 4. From a logical point of view I would set the scale to 1 for upsampling, but somehow that produces the effect seen above. So I tried setting the scale to 0.25f, since the SDK sample also said the target should be four times the size (which still seems odd to me, considering the memory consumption of a texture four times the size of the backbuffer). However, now all I get is a black screen.

Does anyone have an idea? Even if it's not about post-processing but just about rendering a smaller texture to a larger one, I'd be really glad. Am I missing something?

I just use StretchRect to do that; not sure it's the best way though.

[quote]I just use StretchRect to do that; not sure it's the best way though.[/quote]

Basically I'd agree that StretchRect would do, but I want my postprocessing framework to be as flexible as possible. Using StretchRect for upsampling would mean having to handle exceptions from regular effects, and that's not what I want. Also, I'd like to be able to have postprocessing effects that do additional work while upsampling, so it's basically not an option.
If you're rendering to different sized render-targets, IIRC you've got to change your viewport size. You don't seem to be setting the viewport states anywhere.
[EDIT]No, I forgot that SetRenderTarget also sets the viewport state.

You're using fixed-function vertex processing - what's the current state of the fixed-function matrices (e.g. the projection matrix)?
@Hodgman: Thanks for your suggestion, but the viewport size, as you said in your edit, isn't the problem. Well, I found out what's wrong:

static PlaneVertex axPlaneVertices[] =

This line messed everything up. I tried it out, and upsampling on its own worked, but if I downsampled first, it didn't. So I deleted 'static', and now it works.
I thought static would only apply per instance of the class, but a static local is actually initialized just once for the whole program, so the vertex data kept the render-target dimensions from the very first call.

This topic is closed to new replies.
