
texture mapping and triangles



I'm starting to think about how to add texture mapping to my raytracer. Currently I'm a little confused about how to set up the intrinsic UV coordinates of the intersected point: after the intersection I have the UV relative to the triangle, but when I pass the information to the texture mapper I need coordinates relative to the mesh, am I correct? For example, UVs for the sphere go from 0 to 1. I thought of adding UVs to the vertices, but how do I calculate them? Spheres and planes have an intrinsic surface representation; is there something similar for meshes? I mean, when I apply the mapper (Sphere Mapping, Planar Mapping, UV Mapping) I already need 'neutral' UVs... Do 3ds and other 3D apps store this kind of information, or are their UV coords already mapped? Thank you!
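For triangle meshes, the usual trick is to store a UV pair per vertex and interpolate it at the hit point using the barycentric coordinates your intersection test already produces. A minimal sketch of that interpolation (the struct and function names here are just illustrative):

```cpp
#include <cassert>

struct Vec2 { float u, v; };

// Interpolate per-vertex texture coordinates at a hit point.
// (b1, b2) are the barycentric coordinates returned by a standard
// ray/triangle intersection test; the remaining weight is b0 = 1 - b1 - b2.
Vec2 interpolateUV(const Vec2& uv0, const Vec2& uv1, const Vec2& uv2,
                   float b1, float b2)
{
    float b0 = 1.0f - b1 - b2;
    return { b0 * uv0.u + b1 * uv1.u + b2 * uv2.u,
             b0 * uv0.v + b1 * uv1.v + b2 * uv2.v };
}
```

With per-vertex UVs in place, the result is already "relative to the mesh": whoever authored the UVs decided how the 0-1 range spans the surface.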

Nobody out here has tips to share?
While you think about an answer, I have another question: I thought of using polymorphism to apply wrapping policies (what to do when texture coordinates are outside the 0-1 range), but if the virtual function overhead is greater than that of a switch statement, I will use a switch to select the appropriate policy. I was unable to find any information comparing the performance of virtual calls versus switch statements. Any ideas?
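For what it's worth, the switch-based version of the wrapping policies can be quite small. A rough sketch with an illustrative enum and the usual repeat/clamp/mirror behaviours:

```cpp
#include <cassert>
#include <cmath>

// Texture-coordinate wrapping selected by a plain switch instead of a
// virtual call. The enum and mode names are illustrative.
enum class WrapMode { Repeat, Clamp, Mirror };

float wrap(float t, WrapMode mode)
{
    switch (mode) {
    case WrapMode::Repeat:
        return t - std::floor(t);            // keep the fractional part
    case WrapMode::Clamp:
        return t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
    case WrapMode::Mirror: {
        float f = std::fmod(std::fabs(t), 2.0f);
        return f > 1.0f ? 2.0f - f : f;      // reflect back into [0,1]
    }
    }
    return t;
}
```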

Quote:
Original post by cignox1
Spheres and planes have an intrinsic surface representation; is there something similar for meshes? I mean, when I apply the mapper (Sphere Mapping, Planar Mapping, UV Mapping) I already need 'neutral' UVs... Do 3ds and other 3D apps store this kind of information, or are their UV coords already mapped?

UV mapping is typically performed by the artist, in the modelling software. Pretty much every model format stores texture coordinates.

Quote:
I thought of using polymorphism to apply wrapping policies (what to do when texture coordinates are outside the 0-1 range), but if the virtual function overhead is greater than that of a switch statement, I will use a switch to select the appropriate policy. I was unable to find any information comparing the performance of virtual calls versus switch statements. Any ideas?

A switch statement is typically more efficient than a virtual function call, but can be much harder to implement and maintain. Texture wrapping is usually specified per-texture, when the texture is loaded/created.

Quote:
Original post by swiftcoder
UV mapping is typically performed by the artist, in the modelling software. Pretty much every model format stores texture coordinates.

Thank you, but I'm not sure I understood your answer. Suppose the modeller creates a mesh but doesn't specify UV coordinates. Either the model is exported without UVs, or the program generates a default set somehow. Now I load the mesh and want to apply a texture with planar mapping (I simply want to wrap my mesh): in order to do this I must know the coordinates of the intersected point in a parametric form on the mesh surface (just like with spheres and planes), but I don't know how to translate from the per-triangle coordinate pair (the UV from the intersection routine) to a per-mesh pair (0-1 around the whole mesh). I suppose the real UV coordinates are given per texture and must be specified with the texture, not necessarily with the mesh...

Quote:

A switch statement is typically more efficient than a virtual function call, but can be much harder to implement and maintain. Texture wrapping is usually specified per-texture, when the texture is loaded/created.


It's just a few operations, 4 different wrapping functions. If this is faster I'll do it this way, since I already have several levels of virtual functions to follow (mesh, camera, texture, surface, shader and so on). If in the future I want to make it more flexible, moving to a polymorphic solution is a matter of a few lines.
Thank you!

I know nothing of spheres, but I know texture mapping on meshes. You calculate UV coordinates for each vertex (or use artist-supplied ones) and interpolate them based on the point of the triangle you're drawing. You'll want perspective-correct interpolation, since simple linear interpolation in screen space will cause texture warping:

http://en.wikipedia.org/wiki/Texture_mapping#Perspective_correctness

If you don't have artist-supplied UVs (which is unrealistic), you generate your UVs (again, per vertex, not per pixel as you draw) using some rule. For planar mapping, for example, project all vertex positions onto a plane and use the resulting 2D coordinates as UV coords.
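That planar projection can be sketched in a few lines, assuming the mesh's bounding rectangle in the projection plane is known in advance (all names here are illustrative, and the XY plane is just one possible choice):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Generate a planar-mapped UV for one vertex by projecting it onto the
// XY plane and normalizing into [0,1] against the mesh's bounding
// rectangle in that plane.
Vec2 planarUV(const Vec3& p, float minX, float maxX, float minY, float maxY)
{
    return { (p.x - minX) / (maxX - minX),
             (p.y - minY) / (maxY - minY) };
}
```

Run once per vertex at load time; afterwards the mesh textures exactly like one with artist-supplied UVs.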

Thank you, but I have no problem texturing the triangle once I have the UVs; my question is how I should calculate them (an architectural problem, we could say). That is, once the ray intersection has been performed, I have the UV coordinates of the intersection point on the surface (the shape can be a sphere, a triangle or a plane, for example). The shader gets these values and asks the texture for the user-selected 'coordinate mapper' (possible choices are spherical, planar, UV, cubic, screen and so on).
Now, what I thought was that this mapper takes the UV coordinates as they were generated by the intersection routine, which means they represent an intrinsic description of the point (at least they do for spheres, planes and triangles), and remaps them using the selected mapping method. My question is how I should handle UV maps for meshes, since their UVs have already been mapped.

Suppose, for example, that the UVs in the model on disc were created using a planar mapping. I load this model and decide to map my texture using another mapping. This won't work, UNLESS the new mapping method doesn't require the UV coords of the intersection point. So far I think that this is the case (spherical uses the normal, planar the actual point, and so on), but I wonder if there exists a natural way to set UV maps on a mesh that is more or less equivalent to what happens with a single triangle, a sphere or a plane.
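For instance, a spherical mapping that only needs the unit surface normal at the hit point (so no intrinsic per-mesh parameterisation at all) might look like this; the axis and seam conventions are just one possible choice:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Spherical mapping: derive (u, v) directly from the unit surface normal,
// so the mesh needs no stored UVs for this mapper to work.
Vec2 sphericalUV(const Vec3& n)
{
    const float pi = 3.14159265358979f;
    float u = 0.5f + std::atan2(n.z, n.x) / (2.0f * pi);  // longitude
    float v = std::acos(n.y) / pi;                        // latitude, n normalized
    return { u, v };
}
```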

Anyway, sorry for the long post, and thank you for your help. (Actually I just want to know; I don't really need it anymore...)

