Screenspace to texturespace?

Started by Viperrr
7 comments, last by jollyjeffers 16 years, 6 months ago
Hi, I have a texture the size of my screen (1280x1024) which I want to sample in my pixelshader, to blend it with the rendered scene. Now I'm having trouble converting the projected pixel coords to sampling coords. Here's some pseudo-code to explain what I'm trying to do:

Vertexshader
{
   output.pos = mul(inPos, xMatWVP);
   output.screenPos = ??
}


Pixelshader
{
   (...)
   color.rgb *= tex2D(ScreenMapSampler, input.screenPos);
}
I took this from an article which is supposed to do it, but it doesn't work, and I don't understand what's happening there anyway:

   OUT.vScreenCoord.x = ( OUT.vPosition.x * 0.5 + OUT.vPosition.w * 0.5 );
   OUT.vScreenCoord.y = ( OUT.vPosition.w * 0.5 - OUT.vPosition.y * 0.5 );
   OUT.vScreenCoord.z = OUT.vPosition.w;
   OUT.vScreenCoord.w = OUT.vPosition.w;
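For reference, those four lines fold the [-1..+1] to [0..1] remap (and the y-flip) into homogeneous coordinates, so they are meant to be consumed by tex2Dproj, which performs the divide by w per pixel. A minimal sketch of the matching pixelshader line, assuming the sampler name from the pseudo-code above:

```
// tex2Dproj divides vScreenCoord.xy by vScreenCoord.w per pixel, yielding
// ( x/w * 0.5 + 0.5,  0.5 - y/w * 0.5 ): the [0..1] texture-space position.
color.rgb *= tex2Dproj(ScreenMapSampler, input.vScreenCoord).rgb;
```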
Can someone help me figure this out please?
Why not pass in the coordinates as per-vertex attributes? That's the typical way of doing things.

Alternatively, output.pos = mul(inPos, xMatWVP); should place your geometry in projection space. This is defined in [-1..+1] range, which requires only a little massaging to be in [0..1] texture space...
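A minimal sketch of that massaging, reusing the names from the pseudo-code above; note that the divide by w has to happen per pixel (after interpolation), not in the vertexshader:

```
// Vertexshader: pass the clip-space position along in a spare TEXCOORD.
output.pos = mul(inPos, xMatWVP);
output.screenPos = output.pos;   // keep it homogeneous; don't divide by w yet

// Pixelshader: divide by w per pixel, then remap [-1..+1] to [0..1], flipping y.
float2 uv = input.screenPos.xy / input.screenPos.w;
uv = float2(0.5f, -0.5f) * uv + 0.5f;
color.rgb *= tex2D(ScreenMapSampler, uv).rgb;
```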

hth
Jack

<hr align="left" width="25%" />
Jack Hoxley <small>[</small><small> Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]</small>

Quote:Why not pass in the coordinates as per-vertex attributes? That's the typical way of doing things.


I don't understand what you mean here. Afaik the texcoords in the vertex attributes pertain to the mesh textures. What I need to know is where the current pixel is on screen so that I know at what position to sample my screen-sized texture.

Quote:Alternatively, output.pos = mul(inPos, xMatWVP); should place your geometry in projection space. This is defined in [-1..+1] range, which requires only a little massaging to be in [0..1] texture space...


Thank you, that's what I thought, but I wasn't sure because I couldn't get it to work. This is how I attempt to translate from [-1..1] to [0..1]:

output.screenPos.x = output.pos.x / output.pos.w / 2.0f + 0.5f;
output.screenPos.y = -output.pos.y / output.pos.w / 2.0f + 0.5f;


But the result is wrong:

[screenshot]

combined with:

[screenshot]

=

[screenshot]

And the shadows move around when I move the camera (which they are obviously not supposed to):
[screenshot]


I can't seem to figure out what's wrong...
Debugging printf-style or PIX is a good start. If you're sending those values through to the PS then PIX's "pixel history" feature should give you some data to work with, or you could just output them from the PS in colour channels.

If you change

color.rgb *= tex2D(ScreenMapSampler, input.screenPos);

to be

color.rgb = float3( input.screenPos.x, input.screenPos.y, 0.0f );

You should end up with black in the top-left, red in the top-right, green in the bottom-left and yellow in the bottom-right.

Quote:output.screenPos.x = output.pos.x / output.pos.w / 2.0f + 0.5f;
output.screenPos.y = -output.pos.y / output.pos.w / 2.0f + 0.5f;
Off the top of my head:

output.screenPos.x = 0.5f * (output.pos.x + 1.0f);
output.screenPos.y = 1.0f - (0.5f * (output.pos.y + 1.0f));

I'm pretty sure you don't need to divide by W for this part...

hth
Jack


I have never worked with PIX, nor do I know how to output text on the screen xD. That's because I'm an extreme C++ noob and just started windows/directx programming right away. Hehe, no comment on that please :P

Anyway,
Your code is mathematically the same as mine (without the W-comp), just written differently.

Nevertheless I used your code and got these results:

[screenshot]

[screenshot]


When I do divide by the W-component I get this:
[screenshot]

I figure your color trick is supposed to show gradients rather than a black, a red, a green and a yellow rectangle... right?

thanks for the help so far...
Quote:Original post by Viperrr
Hi,

I have a texture the size of my screen (1280x1024) which I want to sample in my pixelshader, to blend it with the rendered scene. Now I'm having trouble converting the projected pixel coords to sampling coords.
Can someone help me figure this out please?


Hi,

you don't have to pass the position from the vertexshader to the pixelshader, because the pixelshader already "knows" its screenspace position. The following code is for Direct3D 9:

struct psInput
{
    //
    // add your data here
    //
    float2 screen : VPOS;
};

// pixelshader
void ps(in psInput input, out psOutput output)
{
    // "input.screen" contains the current pixel position in screenspace.
    float2 tex = float2((input.screen.x + 0.5f) / 1280.0f,
                        (input.screen.y + 0.5f) / 1024.0f);

    // now you can use "tex" as your texture coordinate to sample your texture.
    // ...
}


Mr X

[Edited by - Mr X on October 21, 2007 3:54:56 PM]
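One small tweak worth considering to the code above: the hardcoded 1280.0f/1024.0f divisors tie the shader to a single resolution. A hedged sketch, using a hypothetical xScreenSize constant that the application would set each time the backbuffer size changes:

```
float2 xScreenSize;   // set from the app, e.g. (1280, 1024); hypothetical name

// Offset by half a pixel to hit texel centers, then normalize to [0..1].
float2 tex = (input.screen + 0.5f) / xScreenSize;
```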
Thank you Mr X! that works perfectly!

I didn't know about the VPOS input semantic. I learned something :)

[screenshot]
Quote:Original post by Viperrr
Thank you Mr X! that works perfectly!

I didn't know about the VPOS input semantic. I learned something :)



Just so ya know, VPOS is ps_3_0 and above only.
Just be warned that VPOS is for ps_3_0 only - your code will not work on earlier D3D9 hardware (anything before the Radeon X1*** or GeForce 6x00).

hth
Jack


This topic is closed to new replies.
