Texture matrix from triangle

1 comment, last by Slaaitjuh 10 years, 10 months ago

Hello,

I was wondering if anyone can help me with the following idea. I am currently working on a CSG library. I made a function to import models: the models get converted to planes, and later on rebuilt into vertices. For material editing I am using texture matrices per plane. I would like to continue with this approach, but I am facing a problem, and have an idea:

When reading an existing model, I can get the position, normal, and texture coordinates for a given vertex/triangle. Since I can get the three vertices and texture coordinates of a triangle, I was wondering if there is a way to construct a matrix from those parameters that recreates the texture coordinates after the CSG process.

In other words, the matrix would calculate a vertex's texture coordinates by transforming its position.
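
To make that concrete, this is roughly the construction I have in mind for a single triangle (an untested sketch using System.Numerics; the class and method names are just illustrative):

```csharp
using System;
using System.Numerics;

static class TextureMatrixBuilder
{
    // Builds a 4x4 matrix that maps a position lying in the triangle's
    // plane to its (u, v) texture coordinates. Uses System.Numerics'
    // row-vector convention: Vector3.Transform(p, m) computes (p, 1) * m.
    public static Matrix4x4 FromTriangle(
        Vector3 p0, Vector3 p1, Vector3 p2,
        Vector2 uv0, Vector2 uv1, Vector2 uv2)
    {
        Vector3 e1 = p1 - p0;
        Vector3 e2 = p2 - p0;
        Vector3 n = Vector3.Cross(e1, e2); // plane normal

        // Rows e1, e2, n form a basis; inverting it expresses p - p0 as
        // coefficients (a, b, c) such that p - p0 = a*e1 + b*e2 + c*n.
        var basis = new Matrix4x4(
            e1.X, e1.Y, e1.Z, 0,
            e2.X, e2.Y, e2.Z, 0,
            n.X,  n.Y,  n.Z,  0,
            0,    0,    0,    1);
        if (!Matrix4x4.Invert(basis, out Matrix4x4 inverse))
            throw new ArgumentException("Degenerate triangle.");

        // The coefficients then pick up the UV deltas, with uv0 as offset.
        Vector2 d1 = uv1 - uv0;
        Vector2 d2 = uv2 - uv0;
        var uvs = new Matrix4x4(
            d1.X,  d1.Y,  0, 0,   // the e1 direction maps to uv1 - uv0
            d2.X,  d2.Y,  0, 0,   // the e2 direction maps to uv2 - uv0
            0,     0,     0, 0,   // the normal direction maps to zero
            uv0.X, uv0.Y, 0, 1);  // p0 itself maps to uv0

        return Matrix4x4.CreateTranslation(-p0) * inverse * uvs;
    }
}
```

Usage would then be something like `Vector3 uvw = Vector3.Transform(position, matrix);`, with `uvw.X` and `uvw.Y` as the texture coordinates. As far as I can tell this is exact for any point in the triangle's plane, provided the UV mapping is affine across that plane.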

The reason I would like this to work is that I could reuse the texture coordinates from the original models, making the result look much better.

I would prefer examples in C# if I have a choice, but any language will do :-) Ideas are very welcome as well, and I can provide any clarification needed.

Thanks,

I'm not completely clear about your intended use case, but I'll give it a shot anyways :) So you have a process that converts a 3D mesh (a list of vertices plus triangle connectivity information) into a new representation: a list of planes, together with the locations within those planes that define the vertices. Is that correct?

Then if you want a single transformation that produces the texture coordinates from the positions, there are some strict requirements that I think make this impractical. You would need a uniform mapping from 3D mesh surface area to texture space, which is typically not the case. In addition, the original texture coordinates would have to respect a mapping to and from the position information, which is also not normally the case, since texture information is usually just packed in as efficiently as possible.

So if I have understood properly what you want, I don't think there is a generic way to map between positions and texture coordinates unless you impose this mapping on the original input models. Have you considered just storing the texture data along with your planar representation of the positions?
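
For example, something along these lines, where the CSG clipping steps interpolate the stored UVs whenever they split an edge (all type names here are just illustrative, not from any real library):

```csharp
using System.Collections.Generic;
using System.Numerics;

// Sketch: keep the texture coordinates next to the plane geometry so they
// survive the CSG process instead of being reconstructed afterwards.
struct PlaneVertex
{
    public Vector3 Position; // location on the plane
    public Vector2 Uv;       // original texture coordinate, carried along
}

class CsgPlane
{
    public Vector3 Normal;
    public float Distance;            // plane equation: dot(Normal, p) == Distance
    public List<PlaneVertex> Polygon; // boundary loop, with UVs attached
}
```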

I am sorry for replying this late; I did not notice there was a reply. Thanks for that.

Your first assumption is correct. The problem is that we perform CSG on these planes, which creates new sub-vertices and so on. My idea for recalculating the texture coordinates was to produce a uniform texture matrix per plane that somehow preserves the original intent of the texture map. I guess I have to agree with you that this would be impractical, maybe even impossible.

I considered barycentric coordinates (remember which plane had which vertices/UVs, and do the math), yet the results were often not spot on. Many vertices ended up with slightly offset texture coordinates, which looks unnatural.
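
The math itself was the standard barycentric computation, roughly this (simplified, untested sketch with System.Numerics types):

```csharp
using System.Numerics;

// Interpolates UVs for a point q in the plane of triangle (p0, p1, p2).
static Vector2 InterpolateUv(
    Vector3 q,
    Vector3 p0, Vector3 p1, Vector3 p2,
    Vector2 uv0, Vector2 uv1, Vector2 uv2)
{
    Vector3 e1 = p1 - p0, e2 = p2 - p0, ep = q - p0;
    float d11 = Vector3.Dot(e1, e1);
    float d12 = Vector3.Dot(e1, e2);
    float d22 = Vector3.Dot(e2, e2);
    float dp1 = Vector3.Dot(ep, e1);
    float dp2 = Vector3.Dot(ep, e2);
    float denom = d11 * d22 - d12 * d12; // zero for degenerate triangles

    float b = (d22 * dp1 - d12 * dp2) / denom; // weight of p1
    float c = (d11 * dp2 - d12 * dp1) / denom; // weight of p2
    float a = 1.0f - b - c;                    // weight of p0

    return a * uv0 + b * uv1 + c * uv2;
}
```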

Another way was to do planar world mapping, but I wanted to avoid that since it only gives nice results on a box; mapping a sphere or cone that way is not pretty.
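
By planar world mapping I mean roughly this: project the position onto the two axes perpendicular to the normal's dominant component (illustrative sketch):

```csharp
using System.Numerics;

static Vector2 PlanarUv(Vector3 p, Vector3 normal, float scale)
{
    Vector3 a = Vector3.Abs(normal);
    if (a.X >= a.Y && a.X >= a.Z) return new Vector2(p.Y, p.Z) * scale; // X-facing
    if (a.Y >= a.X && a.Y >= a.Z) return new Vector2(p.X, p.Z) * scale; // Y-facing
    return new Vector2(p.X, p.Y) * scale;                               // Z-facing
}
```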

In a previous version I created the planes, looked up the texture coordinates, and interpolated them along with the vertex data. The problem I have with this is that it requires a lookup table from the original model to the CSG model. That is a weak link, which I'd like to avoid.

Do you have any tips/ideas I did not try yet?

Thanks again for your clear explanation.
