Pixels to Texels mapping


My situation is:
- there are two textures of size 512x512;
- the result of rendering is written into one of these textures;
- the viewport size is equal to the back-buffer size, 512x384.

When the scene is rendered into the first texture, it is necessary to read out from the second texture the value of the texel at precisely the position where the pixel will be put in the first texture. For this purpose you may use the following HLSL shaders:
float2 texCoordScale = float2(1.0, 384.0/512.0); // viewport is 512x384, texture is 512x512
float2 halfTexel     = float2(0.5/512.0, 0.5/512.0);

// Maps a clip-space position to homogeneous texture coordinates
// (still scaled by w, so tex2Dproj can do the projective divide).
float4 calcTexCoord(float4 vPos)
{
  float4 vTex = vPos;

  vTex.xy += vPos.w;             // [-w..w] -> [0..2w]
  vTex.xy /= 2.0;                // [0..2w] -> [0..w]
  vTex.y   = vPos.w - vTex.y;    // flip y: NDC +1 is the top row
  vTex.xy *= texCoordScale;      // viewport rect -> texture rect
  vTex.xy += halfTexel*vPos.w;   // sample at texel centers

  return vTex;
}

VSO_POS_TEX0 VS(VSI_POS i)
{
  VSO_POS_TEX0 o = (VSO_POS_TEX0)0;

  o.vPos  = mul(i.vPos, WorldViewProj);
  o.vTex0 = calcTexCoord(o.vPos);

  return o;
}

sampler2D srcSampler; // note: "sampler" by itself is a reserved word in HLSL

float4 PS(VSO_POS_TEX0 i) : COLOR
{
  float3 tVal = tex2Dproj(srcSampler, i.vTex0).xyz;
  ...
}
Thus, tVal should contain the required result. Generally everything works properly. But sometimes, at certain camera positions, the value is read out incorrectly for one or more pixels in the scene, for example with a displacement of 1 texel to the right. I have checked the program on the REF rasterizer and the problem does not arise there, at least in all the known critical camera positions. Maybe there is not enough accuracy in the hardware calculations?.. But it is strange that the program works correctly more often than incorrectly. Is it possible to fix this problem?
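For reference, the mapping that calcTexCoord performs can be checked outside the shader. The sketch below is plain Python (the helper names are mine, not from the shaders above) and assumes the D3D9 convention that NDC (-1, +1) maps to the centre of the top-left pixel; it confirms that a pixel at (px, py) reads the centre of texel (px, py):

```python
# Numeric sanity check of calcTexCoord, assuming D3D9 conventions
# (NDC (-1, +1) maps to the centre of the top-left pixel) with a
# 512x384 viewport rendering into a 512x512 texture.
VIEW_W, VIEW_H = 512, 384
TEX_W = TEX_H = 512

def calc_tex_coord(x, y, w):
    """Python port of the HLSL calcTexCoord; returns homogeneous (u*w, v*w)."""
    u = (x + w) / 2.0            # [-w..w] -> [0..w]
    v = w - (y + w) / 2.0        # same, plus the y flip
    v *= VIEW_H / TEX_H          # texCoordScale.y = 384/512
    u += 0.5 / TEX_W * w         # half-texel offset
    v += 0.5 / TEX_H * w
    return u, v

def pixel_to_clip(px, py, w=1.0):
    """Clip-space position whose D3D9 raster sample point is pixel (px, py)."""
    return (px / VIEW_W * 2.0 - 1.0) * w, (1.0 - py / VIEW_H * 2.0) * w, w

x, y, w = pixel_to_clip(10, 20, w=3.0)
u, v = calc_tex_coord(x, y, w)
# After the projective divide (what tex2Dproj does), pixel (10, 20)
# should land on the centre of texel (10, 20), up to float rounding:
print(u / w * TEX_W - 0.5, v / w * TEX_H - 0.5)
```

The check also works for any w, which is why the shader keeps the coordinates homogeneous until tex2Dproj divides them.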

It seems to me that you're trying to do some kind of image-space work. Correct me if I'm wrong, but I think that what you're trying to do is figure out the texture coordinates of a particular point on the surface you are rendering to. If this is the case, then I think your calculations are probably doing a bit too much.

After the projection matrix is applied to geometry, vertices are in the range of [ -1..1 ] in the x and y directions, and [ 0..1 ] in the z direction.

VSO_POS_TEX0 VS(VSI_POS i)
{
  VSO_POS_TEX0 o = (VSO_POS_TEX0)0;
  o.vPos = mul(i.vPos, WorldViewProj);

  // Assuming VSO_POS_TEX0::vTex0 is a float2 variable.
  // Note the y flip: in Direct3D, NDC y = +1 is the top of the
  // screen while texture v grows downward.
  o.vTex0 = float2(o.vPos.x + 1.0f, 1.0f - o.vPos.y) * 0.5f;
  return o;
}


If your pixel shader is doing some kind of computation based on the geometry rendered, then it should be fine. But if the only information you need is the color that was rendered in the previous pass (basically, you're doing some kind of post-processing work), then you could just render a screen-aligned quad instead of the original geometry for your second pass.
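For the post-processing route, the classic D3D9 detail is the half-pixel offset between pixel and texel centres. A minimal sketch of how such a quad lines up (plain Python; the function names are illustrative, and it assumes pre-transformed, XYZRHW-style coordinates):

```python
# Sketch: vertex data for a D3D9-style screen-aligned quad, assuming
# pre-transformed coordinates. Shifting the quad by half a pixel makes
# each pixel centre sample the matching texel centre.
def fullscreen_quad(width, height):
    # (x, y, u, v) for the four corners; z and rhw omitted for brevity
    return [(-0.5,         -0.5,          0.0, 0.0),
            (width - 0.5,  -0.5,          1.0, 0.0),
            (-0.5,         height - 0.5,  0.0, 1.0),
            (width - 0.5,  height - 0.5,  1.0, 1.0)]

def u_at_pixel(px, width):
    """u interpolated at the centre of pixel column px across the quad above."""
    x0, x1 = -0.5, width - 0.5
    return (px - x0) / (x1 - x0)   # u runs 0..1 over [x0, x1]

# Pixel 10 interpolates u = (10 + 0.5)/512, i.e. the centre of texel 10:
print(u_at_pixel(10, 512) * 512 - 0.5)  # -> 10.0
```

Without the -0.5 shift, every pixel would sample exactly on a texel boundary, which is where one-texel displacements tend to appear with point sampling.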

Hope this helps and I hope I understood your question,
neneboricua

Yes, I am doing some kind of computation based on the geometry rendered. It's something like custom Z-buffering.
Scaling by the vPos.w multiplier is needed to eliminate the perspective correction in the texture coordinate interpolation. The 2D vertex positions must be linearly interpolated in screen space.
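The effect of the w multiply can be checked numerically. The rasterizer interpolates each attribute a perspective-correctly, as lerp(a/w) / lerp(1/w); pre-multiplying the attribute by w and letting tex2Dproj divide by the interpolated w cancels that correction. A small sketch (plain Python, names are mine):

```python
# Why multiplying the texcoord by w and using tex2Dproj yields
# screen-space linear interpolation.
def persp_interp(a0, a1, w0, w1, t):
    """What the rasterizer does for an attribute: interpolate a/w and 1/w
    linearly in screen space, then divide."""
    num = (1 - t) * (a0 / w0) + t * (a1 / w1)
    den = (1 - t) * (1 / w0) + t * (1 / w1)
    return num / den

u0, u1 = 0.25, 0.75   # screen-space texcoords at the two edge endpoints
w0, w1 = 1.0, 4.0     # very different depths
t = 0.5               # halfway along the edge in screen space

plain  = persp_interp(u0, u1, w0, w1, t)            # perspective-correct u
premul = persp_interp(u0 * w0, u1 * w1, w0, w1, t)  # interpolate u*w instead
q      = persp_interp(w0, w1, w0, w1, t)            # interpolated w

print(plain)        # -> 0.35 (pulled toward the near vertex)
print(premul / q)   # -> 0.5, the screen-linear midpoint (tex2Dproj's divide)
```

So with the premultiplied coordinates the divide in tex2Dproj recovers exactly the screen-space linear value, which is what a pixel-to-texel mapping needs.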
So, any other ideas?

[edited by - Igor Pavlov on January 15, 2004 11:55:28 AM]
