Stackmann0

How to get the 3D position for the point with (0,0) UV coordinates?


1 minute ago, Stackmann0 said:

I don't think that's correct. I mean, you can search UV space for the triangle that contains (x, y) (the point whose 3D position we're trying to find). But using that point's barycentric coordinates to find the 3D position doesn't sound right to me, since the mapping between the two triangles (the one in UV space and the one in 3D space) isn't involved. So I don't know whether there is a solution to this problem in the general case, as you mentioned in your second reply, or maybe I'm missing something or confusing things.

It is an established solution and it's easy to demonstrate working; just code it up. The transform isn't necessary because you are simply interpolating a triangle in both cases...

However...

AFAIK what may be causing the confusion is that, strictly speaking, the texture mapping used in general 3D rendering is not 'physically correct' in the way you are imagining. If you use a fish-eye projection for a camera and draw a triangle, in theory the texture should also be distorted, but if you render it on standard 3D games hardware it will not be distorted. Only the vertices go through the transform matrix; the fragment shader, AFAIK, is typically given simple linear interpolation. This may not be the case in a ray tracer.

So, you are actually right in a way I think. :) 
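The barycentric lookup described above can be sketched in a few lines. This is a minimal illustration (hypothetical function name, assuming NumPy): compute the barycentric weights of the query point against the triangle's UV coordinates, then apply the same weights to the triangle's 3D vertices.

```python
import numpy as np

def uv_to_3d(uv, tri_uv, tri_pos):
    """Map a UV-space point to 3D via barycentric interpolation.

    uv      : (2,) query point in UV space
    tri_uv  : (3, 2) UV coordinates of the triangle's vertices
    tri_pos : (3, 3) 3D positions of the same vertices
    Returns the interpolated 3D position, or None if uv lies outside
    this triangle (so the caller should test the next triangle).
    """
    a, b, c = tri_uv
    v0, v1, v2 = b - a, c - a, uv - a
    # Standard barycentric solve from dot products
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    if min(u, v, w) < 0.0:
        return None  # point is outside this triangle in UV space
    # The same weights interpolate the 3D vertices, because the
    # UV -> 3D map is linear within each triangle.
    return u * tri_pos[0] + v * tri_pos[1] + w * tri_pos[2]
```

For the thread's original question, calling this with `uv = (0, 0)` on whichever triangle contains the UV origin yields the sought 3D position.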

1 hour ago, lawnjelly said:

You can do something like that; it is essentially doing exactly the same as the very first suggestion (using barycentric coordinates), except in an extremely roundabout fashion (a round trip via the GPU).

Of course, it depends what the actual use case is and whether the conversion is rare or needed as a realtime lookup. There are many cases where having a UV -> 3D mapping for the entire texture is more useful than e.g. using the barycentric method per point, and using the GPU is an option to create this. In my own use cases I've been fine using the CPU to calculate this conversion texture; however, if you needed to recalculate it on a per-frame basis, the GPU might be an option, bearing in mind the potential for pipeline stalls if you have to read the result back.

Considering that the OP was talking about a mesh that he renders on screen, the data is already on the GPU and the result may be required on the GPU. I just wanted to point out a potential alternative. And while he did ask for the point corresponding to UV=(0,0), I doubt he only needs that one point in any real application.
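The "bake a UV -> 3D map for the whole texture" idea can be done on the CPU too. Here is a rough sketch (hypothetical function name, assuming NumPy): rasterize each triangle over its texel bounding box in UV space and store the barycentric-interpolated 3D position per texel, with NaN marking uncovered texels.

```python
import numpy as np

def bake_position_map(tris_uv, tris_pos, size):
    """Rasterize triangles in UV space into a size x size 'position map'.

    tris_uv  : iterable of (3, 2) UV triangles
    tris_pos : iterable of (3, 3) matching 3D triangles
    Each covered texel stores the interpolated 3D position; uncovered
    texels stay NaN.
    """
    posmap = np.full((size, size, 3), np.nan)
    for uv_tri, pos_tri in zip(tris_uv, tris_pos):
        uv_tri = np.asarray(uv_tri, float)
        pos_tri = np.asarray(pos_tri, float)
        a, b, c = uv_tri
        T = np.column_stack((b - a, c - a))  # 2x2 matrix of UV edge vectors
        if abs(np.linalg.det(T)) < 1e-12:
            continue  # triangle is degenerate in UV space
        # Texel bounding box of the triangle, clamped to the texture
        lo = np.clip(np.floor(uv_tri.min(axis=0) * size).astype(int), 0, size)
        hi = np.clip(np.ceil(uv_tri.max(axis=0) * size).astype(int), 0, size)
        for y in range(lo[1], hi[1]):
            for x in range(lo[0], hi[0]):
                uv = (np.array([x, y]) + 0.5) / size  # texel center in UV
                v, w = np.linalg.solve(T, uv - a)     # barycentric v, w
                u = 1.0 - v - w
                if min(u, v, w) >= 0.0:
                    posmap[y, x] = (u * pos_tri[0] + v * pos_tri[1]
                                    + w * pos_tri[2])
    return posmap
```

A GPU version would instead render the mesh with its UVs as vertex positions and write world position into a render target; the per-texel result is the same.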

54 minutes ago, Stackmann0 said:

I don't think that's correct. I mean, you can search UV space for the triangle that contains (x, y) (the point whose 3D position we're trying to find). But using that point's barycentric coordinates to find the 3D position doesn't sound right to me, since the mapping between the two triangles (the one in UV space and the one in 3D space) isn't involved. So I don't know whether there is a solution to this problem in the general case, as you mentioned in your second reply, or maybe I'm missing something or confusing things.

The mapping M is piecewise-linear, i.e., it is linear within each triangle. Therefore, the interpolation in your first post is 100% correct, because within the triangle M and therefore M⁻¹ are linear.


4 minutes ago, l0calh05t said:

Considering that the OP was talking about a mesh that he renders on screen, the data is already on the GPU and the result may be required on the GPU. I just wanted to point out a potential alternative.

Sorry, I should have worded that better; it is a good alternative solution. :)

8 minutes ago, l0calh05t said:

The mapping M is piecewise-linear, i.e., it is linear within each triangle. Therefore, the interpolation in your first post is 100% correct, because within the triangle M and therefore M⁻¹ are linear.

Unless, maybe, you are looking for NDC (post-projection) coordinates. But those are still linear before the perspective divide, so you could still apply the same method, just on the homogeneous coordinates, and do the division by w as a last step.
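That last step can be sketched concretely (hypothetical function name, assuming NumPy): interpolate the homogeneous clip-space vertices with the barycentric weights first, and only then divide by w.

```python
import numpy as np

def bary_to_ndc(weights, clip_verts):
    """Barycentric interpolation in clip space, perspective divide last.

    weights    : (3,) barycentric weights summing to 1
    clip_verts : (3, 4) homogeneous (x, y, z, w) clip-space vertices
    Returns the (3,) NDC position of the interpolated point.
    """
    # Interpolate while still homogeneous (linear in clip space)...
    clip = np.asarray(weights, float) @ np.asarray(clip_verts, float)
    # ...then divide by w as the very last step.
    return clip[:3] / clip[3]
```

Interpolating after the divide instead would give the perspective-incorrect result discussed earlier in the thread.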


I think I understand now; the solution posted at the start of the thread is indeed correct. Thank you @lawnjelly and @l0calh05t for your explanations :)
