
Convert view space to world space issues

Meltac

Hi all!

 

In a 3D game that I'm writing a (post-process) pixel shader for, I'm trying to transform a view-space coordinate into a world-space coordinate. My HLSL code looks roughly like this:

 

float3 world_pos = mul(view_pos, (float3x3)m_WV) + camera_pos;

 

This works, but only for certain view angles and camera positions. E.g. when I look at the position in question "from the south" it appears where it should (I mark the transformed position on screen with a colored sphere), but when I turn the camera by more than about 20 degrees, or shift the camera position so that I look "from the east", the transformation is completely off.

 

I must be missing something here, but I don't know what. I've tried normalizing, transposing, and some other basic modifications / additions to my code, but didn't find a working solution.

 

Any hints?

belfegor

I think you need to transform it with the inverse of the view matrix to undo the "view" part.

float3 world_pos = mul(view_pos, ViewInverse);

Maybe you don't even need to do this. Why do you need the world position? Maybe you can set everything up to work in world-view space?

This way you can save yourself a lot of inverting, which is expensive.

Meltac

Maybe you don't even need to do this. Why do you need the world position? Maybe you can set everything up to work in world-view space?

This way you can save yourself a lot of inverting, which is expensive.

 

Thanks. I need to check whether a certain point in screen/view space lies between two points for which I have the world-space coordinates. It needs to be a real "in between" in world space, so I probably can't simply do it all in screen space. Furthermore, I need other geometric checks besides "in between", such as distance-to, lies-on-the-same-line, etc. Those checks and calculations are pretty straightforward in world space but would become quite painful and error-prone in view/screen space.
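
Just to illustrate the kind of checks I mean - a rough sketch, where world_pos, point_a and point_b are placeholder names for the positions involved:

// Rough sketch of the world-space checks I have in mind (placeholder names).
// "In between" test: project world_pos onto the segment point_a -> point_b.
float3 ab = point_b - point_a;
float  t  = dot(world_pos - point_a, ab) / dot(ab, ab);
bool   is_between = (t >= 0.0 && t <= 1.0);

// Distance to the segment's line, also usable as an "on the same line" test.
float3 closest      = point_a + t * ab;
float  dist_to_line = distance(world_pos, closest);
bool   is_on_line   = (dist_to_line < 0.1);   // some tolerance in meters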

 

I searched this forum for inverting a matrix in HLSL, but people didn't really help the ones asking for it and just kept saying "don't do that!" - so, ignoring that advice, how WOULD I calculate the inverse of the view matrix (or whatever matrix I need here) in the HLSL shader? And no, transposing doesn't do the trick here, I've already tried that.

belfegor

Oh, you are the guy who makes the STALKER mods? :)

 

Can I see the file where m_WV is defined? Maybe the view-inverse is provided but under a different name.

 

I have Clear Sky unpacked, and in the hmodel.h file I see something that might be it:

uniform half3x4 m_v2w;

Since STALKER is using "raw" shaders, I think column-major is the default (unless it is overridden somewhere), so you need to swap matrix and vector in the mul op:

//float3 V = mul( VECTOR, MATRIX );
float3 V = mul( MATRIX, VECTOR );
Meltac

Oh, you are the guy who makes the STALKER mods? :)

 

Yes I am.

 

 

 


Can I see the file where m_WV is defined? Maybe the view-inverse is provided but under a different name.

 

Here are the relevant file contents / matrix definitions - I have literally tried them all:

uniform half3x4 m_W;
uniform half3x4 m_V;
uniform half4x4 m_P;
uniform half3x4 m_WV;
uniform half4x4 m_VP;
uniform half4x4 m_WVP;
uniform half4 timers;
uniform half4 fog_plane;
uniform float4 fog_params;
uniform half4 fog_color;
uniform float3 L_sun_color;
uniform half3 L_sun_dir_w;
uniform half3 L_sun_dir_e;
uniform half4 L_hemi_color;
uniform half4 L_ambient;
uniform float3 eye_position;
uniform half3 eye_direction;
uniform half3 eye_normal;
uniform float4 dt_params;

I have Clear Sky unpacked, and in the hmodel.h file I see something that might be it:

uniform half3x4 m_v2w;

 

Yes, that's in my hmodel.h as well, but since I'm doing a post-process (i.e. working in a different shading stage) I doubt I can make use of that matrix (but I'll check).

 

 

 


Since STALKER is using "raw" shaders, I think column-major is the default (unless it is overridden somewhere), so you need to swap matrix and vector in the mul op:

//float3 V = mul( VECTOR, MATRIX );
float3 V = mul( MATRIX, VECTOR );
 

 

Hmm, that's strange. I've seen lots of places in the default STALKER shaders where it's done the way I have it - but I'll check that as well. I haven't seen any pragma specifying row-major so far, though.
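
For reference, such an override would look roughly like this - I have NOT found anything of the sort in the stock shaders, so whatever the compiler default is should apply:

// What an explicit packing override would look like (not present in the stock shaders):
#pragma pack_matrix(row_major)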

 

Btw, what do you mean by "raw" shaders?

 

 

EDIT:

To come back to my initial post, would you say that getting the right results under *some* conditions with the code I posted is pure coincidence, or why is that? I just need to make sure that it's me doing something wrong, and not the X-Ray engine providing wrong matrix data.

belfegor


Here are the relevant file contents / matrix definitions - I have literally tried them all:

... //code snip

Nothing in there suggests what you need. :(

 


Yes, that's in my hmodel.h as well, but since I'm doing a post-process (i.e. working in a different shading stage) I doubt I can make use of that matrix (but I'll check).

The name m_v2w suggests that it is a "view to world" matrix; if you look further down in the same file, in the hmodel function:

...
half3 nw = mul( m_v2w, normal );

They take the view-space normal (probably from the g-buffer, I guess) and calculate nw, the normal in world space.
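
Note that a normal is a direction, so only the rotation part of the matrix matters; for a position you need the translation as well, which you get by passing w = 1. A sketch, assuming m_v2w really is an affine view-to-world matrix with the translation in its fourth column (and the mul(MATRIX, VECTOR) order from above):

// Direction (e.g. a normal): rotation only, no translation.
float3 dir_w = mul( (float3x3)m_v2w, normal );

// Position: w = 1 applies the translation column as well,
// so no extra "+ camera_pos" is needed afterwards.
float3 pos_w = mul( m_v2w, float4( view_pos, 1.0 ) );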

The question is: is m_v2w available/passed in that "shading stage"?

 


Hmm, that's strange. I've seen lots of places in the default STALKER shaders where it's done the way I have it - but I'll check that as well. I haven't seen any pragma specifying row-major so far, though.

The matrices might be passed transposed; then this order would work:

float3 V = mul( VECTOR, MATRIX );

You need to check the documentation (if any?) to see how they expose/pass their matrices.

 


Btw, what do you mean by "raw" shaders?

I meant that they are not using the Effect framework, in which row-major is the default.

Meltac

Ok, thanks.

 

I'll check the m_v2w matrix from hmodel; you might be right about that. And yes, I meant that the engine seems to pass that matrix to the HLSL shaders in the geometry and/or lighting stage of the graphics pipeline, but not in the post-processing stage where I do my stuff.

 

 

 


You need to check documentation (if any?)

 

Hehe, that's one of the best STALKER jokes I've ever heard. :D

(There is absolutely NO documentation about the shaders, neither official nor unofficial - otherwise I wouldn't be here so often.)

 

 

I'll post my progress when I've had a chance to check these things.

Meltac

Ok, I tried this yesterday. *Something* worked - but I'm not yet sure about it.

 

I've replaced my previous view-space-to-world-space conversion with this:


float3 world_pos = mul( m_v2w, view_pos ).xyz + camera_pos;

Indeed, the engine seems to pass some value for m_v2w after I add it to my HLSL shader code:


uniform float3x4 m_v2w;

However, the result is strange. I have this debug code to check whether the transformation is correct:


if (distance(world_pos, check_pos) < 1.0)
   return float4(1,0,0,1);

This is supposed to render a red sphere with a 1-meter radius around the spot whose world-space coordinate I want to check against (check_pos). That worked before (with the code from my initial post), but as mentioned, only under specific conditions (within a limited range of camera positions and directions).

 

NOW the result is completely different. Instead of rendering a sphere around the check position regardless of the camera position, the shader now renders a red circle around the player if and only if he is within 1 meter of the check position!?

 

The good news is that the circle stays there regardless of the camera direction - that wasn't the case before. But I don't know how to interpret the new result. My first thought was that adding the camera position to the transformation might no longer be necessary and might be causing this output, since the new result obviously depends on the player's / camera's position:


float3 world_pos = mul( m_v2w, view_pos ).xyz;

But removing the camera_pos part from the code doesn't help either, as then nothing is rendered at the check position at all.

 

Any ideas???

kauna

float3 world_pos = mul( m_v2w, view_pos ).xyz + camera_pos;

 

If m_v2w is the view-to-world matrix, then you don't need to add camera_pos, since the translation is already in the matrix.

 

Cheers!

 

[edit] I didn't notice that you tried this already.

Meltac

If m_v2w is the view-to-world matrix, then you don't need to add camera_pos, since the translation is already in the matrix.

 

Yes, I've already tried that, without success. We're assuming that m_v2w is the view-to-world matrix (unfortunately there's no documentation for this engine):

 

 

 


The name m_v2w suggests that it is a "view to world" matrix; if you look further down in the same file, in the hmodel function:

...
half3 nw = mul( m_v2w, normal );

They take the view-space normal (probably from the g-buffer, I guess) and calculate nw, the normal in world space.

 

So, if this assumption is correct, what could I still be doing wrong?

belfegor

I don't know what might be wrong now, but I would suggest trying the same thing in view space, as it should give you the same results as in world space.

 

You can transform your check_pos into view space and do the comparison with the view-space position:

float3 check_pos_vs = mul( m_V, check_pos ).xyz;
if (distance(view_pos, check_pos_vs) < 1.0)
    return float4(1,0,0,1);

 

and let me know if that works.

Meltac

Thanks for the suggestion. I've just tried that. It doesn't work with m_V, but using m_VP instead does!

 

At least approximately. Namely, the check sphere is rendered at the right spot, but it moves a bit with the player's / camera's movement. That's another reason why I wanted to do it all in world space. The engine seems to provide either the view-space position sampler (s_position) or the transformation matrix (m_VP) based on an approximate camera position that disregards any applied camera physics effects such as head bobbing. That way I can't do my calculations precisely enough, and the rendering produces graphical glitches. Doing it all in world space shouldn't cause those side effects (I was hoping).

 

BTW, using m_V as intended instead of m_VP causes roughly the same effect as described before: the check sphere is rendered when the player / camera is located within < 1.0 meters of the check position. Does that ring a bell?

belfegor

Head bobbing does not matter at all; you need to debug and find the actual problem elsewhere.

 

Maybe you should post the whole shader so I can see the whole picture. How do you obtain view_pos? From the g-buffer?

Meltac

The whole shader contains way too much code irrelevant to the topic to post here, but I've extracted all the parts of interest:

uniform float3x4 m_W;
uniform float3x4 m_V;
uniform float4x4 m_P;
uniform float3x4 m_WV;
uniform float4x4 m_VP;
uniform float4x4 m_WVP;

uniform sampler2D s_position;

float3 pos_1_world = float3( 146.73, 0.70, -85.29);       // world-space position to check against

float3 uv_posxy = tex2D(s_position, center).xyz;           // position sampled from the position buffer

float4 check_pos_vs = mul(m_VP, float4(pos_1_world, 1));   // check position transformed by m_VP

if (distance(uv_posxy, check_pos_vs) < 1.0)
    final += float4(1,0,0,1);

That's all I've got. All the uniform variables are just references to matrices or sampler states passed by the engine. How the engine calculates them, and what buffers or registers might be involved, I don't know. I basically just use what the engine gives me.

 

Whether or not camera (post-)effects such as head bobbing matter I cannot say for sure, but looking at the result, everything is fine and correct as long as the camera only turns or moves slowly in one direction; with bigger and/or abrupt movement changes such as sprinting, jumping, or leaning sideways, the result starts getting off.

belfegor

float3 uv_posxy = tex2D(s_position, center).xyz;

The variable name "center" used for the uv coordinate suggests that it is constant for every pixel (the center of the screen?), though it should not be, because you need to sample the view-space position from the g-buffer per pixel. How is it passed from the vertex shader (if at all)?

TheChubu


The engine seems to provide either the view-space position sampler (s_position) or the transformation matrix (m_VP) based on an approximate camera position that disregards any applied camera physics effects such as head bobbing.

One of the pros of view space is that the camera's position is the origin of the space, (0,0,0), so you don't need any additional parameter. If you do your calculations in view space, head bobbing or not, the camera's position will be (0,0,0).
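
For example, the distance from the camera to whatever a pixel shows is just the length of its view-space position; no camera position parameter is involved:

// view_pos from the position buffer is already relative to the camera,
// so the camera-to-point distance needs no camera_pos at all.
float dist_from_camera = length(view_pos);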

Meltac

float3 uv_posxy = tex2D(s_position, center).xyz;

The variable name "center" used for the uv coordinate suggests that it is constant for every pixel (the center of the screen?), though it should not be, because you need to sample the view-space position from the g-buffer per pixel. How is it passed from the vertex shader (if at all)?

 

 

Sorry, I missed that when extracting the relevant parts from the shader code. "center" refers to the uv coordinate passed from the previous shader unit (which in this case is not a vertex shader but another pixel shader, as I'm in the post-processing stage), not to the screen center. Misleading, I know.

belfegor

You cannot have two pixel shaders running simultaneously; you probably meant a "helper" function. If you don't want to post the whole shader, at least show me the whole "pipeline" for the uv coordinates (how they are calculated/obtained). I suspect them because the rest of the code looks correct.

 

It doesn't matter that you are in the "post-processing stage", you must have a vertex shader; even if you don't set it yourself, it may be set for you behind the scenes / implicitly by their shader system.

Meltac

You are right, there must be some (implicit) vertex shader that has probably just not been made accessible / changeable by the devs.

 

I didn't mean running two pixel shaders simultaneously, nor did I mean a "helper" function. I just wanted to say that I don't have (access to) any vertex shader in the post-processing stage, so the buffer input I get in that post-process shader is basically the output (i.e. color etc.) of the last pixel shader before the pipeline passes the data on to post-processing. It doesn't matter in this case whether there's a "hidden" vertex shader in between, because it would basically just hand over its own inputs without doing any vertex manipulation or the like.

 

So, here's the portion showing the input of the coordinates to the pixel shader:

 

struct v2p
{
    float4 tc0: TEXCOORD0;    // Center
    float4 tc1: TEXCOORD1;    // LT
    float4 tc2: TEXCOORD2;    // RB
    float4 tc3: TEXCOORD3;    // RT
    float4 tc4: TEXCOORD4;    // LB
    float4 tc5: TEXCOORD5;    // Left / Right
    float4 tc6: TEXCOORD6;    // Top / Bottom
};

 

float4 main(v2p I) : COLOR
{
    float2 center = I.tc0.xy;

    ...
}

 

Normally pretty much everything (sampling color / position and the like) is done using the center coordinate; that's also the case in the original / unmodded shader. That's why I'm using that coordinate as input for all my stuff as well.

Meltac

That seems fine, there must be something else wrong then.

 

As I said, I suspect that the engine passes some strange / unusual / wrong data to the shader; maybe the transformation matrix is not intended to be used that way in the post-processing stage, or something. I'm not sure, though. Unfortunately, I've met nobody so far who could confirm or explain this.

Meltac

Have you tried to invert the view matrix in the shader by hand?

 

I asked how to do that earlier in this thread, but unfortunately nobody replied to it. And in other threads here people just keep replying "don't!" when somebody asks for it. That said, I would have tried it if I knew how...
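
If I understand the theory correctly, for a pure rotation-plus-translation view matrix the inverse could be built by hand, roughly like this (untested sketch, assuming m_V stores the rotation in its upper 3x3 part and the translation in its fourth column, with the mul(MATRIX, VECTOR) order):

// Untested sketch: invert a rigid (rotation + translation) view matrix by hand.
float3x3 R  = (float3x3)m_V;                          // rotation part
float3   T  = float3(m_V[0].w, m_V[1].w, m_V[2].w);   // translation part (4th column)
float3x3 Ri = transpose(R);                           // inverse of a rotation is its transpose
float3   world_pos = mul(Ri, view_pos - T);           // undo translation, then rotation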

Meltac

Thanks, I'll check that. If nothing else, it would at least serve as proof for or against the correctness of the matrix provided by the engine (and of my code using it).
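
One quick sanity check that comes to mind: the view-space origin is the camera, so transforming it to world space should (roughly) match the eye_position constant the engine already provides - if the two differ wildly, the matrix (or my use of it) is wrong. A sketch, assuming m_v2w is the view-to-world matrix:

// Sanity check: the view-space origin transformed to world space should be the camera position.
float3 cam_from_matrix = mul( m_v2w, float4(0, 0, 0, 1) );
if (distance(cam_from_matrix, eye_position) > 0.01)
    return float4(0, 1, 0, 1);   // tint the pixel green if the matrix looks wrong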
