Screenspace to texturespace?


Hi, I have a texture the size of my screen (1280x1024) which I want to sample in my pixelshader, to blend it with the rendered scene. Now I'm having trouble converting the projected pixel coords to sampling coords. Here's some pseudo-code to explain what I'm trying to do:
Vertexshader
{
   output.pos = mul(inPos, xMatWVP);
   output.screenPos = ??
}


Pixelshader
{
   (...)
   color.rgb *= tex2D(ScreenMapSampler, input.screenPos);
}
I took this from an article which is supposed to do it, but it doesn't work, and I don't understand what's happening there anyway:
   OUT.vScreenCoord.x = ( OUT.vPosition.x * 0.5 + OUT.vPosition.w * 0.5 );
   OUT.vScreenCoord.y = ( OUT.vPosition.w * 0.5 - OUT.vPosition.y * 0.5 );
   OUT.vScreenCoord.z = OUT.vPosition.w;
   OUT.vScreenCoord.w = OUT.vPosition.w;
Can someone help me figure this out please?

Why not pass in the coordinates as per-vertex attributes? That's the typical way of doing things.

Alternatively, output.pos = mul(inPos, xMatWVP); should place your geometry in projection space. This is defined in [-1..+1] range, which requires only a little massaging to be in [0..1] texture space...
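Something along these lines should do the trick. Just a rough sketch, not tested, and it assumes you pass the full clip-space position down as a float4 and do the divide by W per pixel:

// vertex shader: pass the clip-space position straight through
output.screenPos = output.pos;

// pixel shader: perspective divide, then remap [-1..+1] to [0..1] (Y is flipped in texture space)
float2 ndc = input.screenPos.xy / input.screenPos.w;
float2 uv = float2(ndc.x * 0.5f + 0.5f, -ndc.y * 0.5f + 0.5f);
color.rgb *= tex2D(ScreenMapSampler, uv);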

hth
Jack

Quote:
Why not pass in the coordinates as per-vertex attributes? That's the typical way of doing things.


I don't understand what you mean here. Afaik the texcoords in the vertex attributes pertain to the mesh textures. What I need to know is where the current pixel is on screen so that I know at what position to sample my screen-sized texture.

Quote:
Alternatively, output.pos = mul(inPos, xMatWVP); should place your geometry in projection space. This is defined in [-1..+1] range, which requires only a little massaging to be in [0..1] texture space...


Thank you, that's what I thought, but I wasn't sure because I couldn't get it to work. This is how I attempt to translate from [-1..1] to [0..1]:


output.screenPos.x = output.pos.x / output.pos.w / 2.0f + 0.5f;
output.screenPos.y = -output.pos.y / output.pos.w / 2.0f + 0.5f;


But the result is wrong:

[screenshot]

combined with:

[screenshot]

=

[screenshot]

And the shadows move around when I move the camera (which they are obviously not supposed to):

[screenshot]


I can't seem to figure out what's wrong...

Debugging printf-style or with PIX is a good start. If you're sending those values through to the PS then PIX's "pixel history" feature should give you some data to work with, or you could just output them from the PS in colour channels.

If you change

color.rgb *= tex2D(ScreenMapSampler, input.screenPos);

to be

color.rgb = float3( input.screenPos.x, input.screenPos.y, 0.0f );

You should end up with black in the top-left, red in the top-right, green in the bottom-left and yellow in the bottom-right.

Quote:
output.screenPos.x = output.pos.x / output.pos.w / 2.0f + 0.5f;
output.screenPos.y = -output.pos.y / output.pos.w / 2.0f + 0.5f;
Off the top of my head:

output.screenPos.x = 0.5f * (output.pos.x + 1.0f);
output.screenPos.y = 1.0f - (0.5f * (output.pos.y + 1.0f));


I'm pretty sure you don't need to divide by W for this part...

hth
Jack

I have never worked with PIX, nor do I know how to output text on the screen xD. That's because I'm an extreme C++ noob and just started Windows/DirectX programming right away. Hehe, no comment on that please :P

Anyway,
Your code is mathematically the same as mine (without the W-comp), just written differently.
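(Expanding it out: 0.5f * (x + 1.0f) = x / 2.0f + 0.5f, and 1.0f - (0.5f * (y + 1.0f)) = -y / 2.0f + 0.5f, so the only real difference between the two is the divide by W.)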

Nevertheless I used your code and got these results:

[screenshot]

[screenshot]

When I do divide by the W-component I get this:
[screenshot]

I figure your color trick is supposed to show gradients rather than a black, a red, a green and a yellow rectangle... right?

thanks for the help so far...

Quote:
Original post by Viperrr
Hi,

I have a texture the size of my screen (1280x1024) which I want to sample in my pixelshader, to blend it with the rendered scene. Now I'm having trouble converting the projected pixel coords to sampling coords.
Can someone help me figure this out please?


Hi,

you don't have to pass the position from the vertexshader to the pixelshader, because the pixelshader already "knows" its screenspace position. The following code is for Direct3D 9:


struct psInput {
//
// add your data here
//
float2 screen : VPOS;
};


// pixelshader
void ps(in psInput input, out psOutput output)
{
// "input.screen" contains the current pixel position is screenspace.
float2 tex = float2((input.screen.x + 0.5f) / 1280.0f, (input.screen.y + 0.5f) / 1024.0f);
// now you can use "tex" as your texture coordinate to sample your texture.

// ...
}
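If you don't want to hard-code the 1280x1024, you could pass the screen size in as a shader constant instead, something like this (just a sketch; the constant name is made up):

float2 ScreenSize; // set from the application, e.g. (1280, 1024)

float2 tex = (input.screen + 0.5f) / ScreenSize;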




Mr X


