Ok, let's see what we have here.
We have generated a plane and, by intersecting it with the other brep planes, figured out it's going to have n points p[sub]0[/sub]...p[sub]n-1[/sub]. I assume you've already mangled the whole brep and solidified the points. What we need to find are two vectors s and t, the generators for texture space on the plane.
Recall that Quake performs "cubic" mapping, where each face is mapped using the basis plane giving the least distortion. So the first step is to compare the plane normal against the canonical generators. Loop over an array like this:
const float baseAxis[6][3][3] = {
{ { 0.0f, 1.0f, 0.0f }, { 1.0f, 0.0f, 0.0f }, { 0.0f, 0.0f, 1.0f } },
{ { 0.0f, -1.0f, 0.0f }, { 1.0f, 0.0f, 0.0f }, { 0.0f, 0.0f, 1.0f } },
{ { 1.0f, 0.0f, 0.0f }, { 0.0f, 0.0f, 1.0f }, { 0.0f, 1.0f, 0.0f } },
{ {-1.0f, 0.0f, 0.0f }, { 0.0f, 0.0f, 1.0f }, { 0.0f, 1.0f, 0.0f } },
{ { 0.0f, 0.0f, 1.0f }, { 1.0f, 0.0f, 0.0f }, { 0.0f, 1.0f, 0.0f } },
{ { 0.0f, 0.0f, -1.0f }, { 1.0f, 0.0f, 0.0f }, { 0.0f, 1.0f, 0.0f } }
};
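If it helps, here's a minimal sketch of that selection loop ([font="Courier New"]bestBaseAxis[/font] is just a name I made up):

```cpp
#include <cfloat>

// Canonical (normal, s-axis, t-axis) triplets, copied from the table above.
static const float baseAxis[6][3][3] = {
    { { 0.0f,  1.0f,  0.0f }, { 1.0f, 0.0f, 0.0f }, { 0.0f, 0.0f, 1.0f } },
    { { 0.0f, -1.0f,  0.0f }, { 1.0f, 0.0f, 0.0f }, { 0.0f, 0.0f, 1.0f } },
    { { 1.0f,  0.0f,  0.0f }, { 0.0f, 0.0f, 1.0f }, { 0.0f, 1.0f, 0.0f } },
    { {-1.0f,  0.0f,  0.0f }, { 0.0f, 0.0f, 1.0f }, { 0.0f, 1.0f, 0.0f } },
    { { 0.0f,  0.0f,  1.0f }, { 1.0f, 0.0f, 0.0f }, { 0.0f, 1.0f, 0.0f } },
    { { 0.0f,  0.0f, -1.0f }, { 1.0f, 0.0f, 0.0f }, { 0.0f, 1.0f, 0.0f } }
};

// Return the index c of the entry whose normal baseAxis[c][0] has the
// largest dot product with the face normal.
int bestBaseAxis(const float polyNormal[3]) {
    int best = 0;
    float bestDot = -FLT_MAX;
    for (int i = 0; i < 6; ++i) {
        const float d = polyNormal[0] * baseAxis[i][0][0]
                      + polyNormal[1] * baseAxis[i][0][1]
                      + polyNormal[2] * baseAxis[i][0][2];
        if (d > bestDot) { bestDot = d; best = i; }
    }
    return best;
}
```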
Keep in mind that I had to fiddle with generators a bit as I'm using Direct3D/Renderman texture coord convention. You probably want to check GtkRadiant for generators which will likely fit you better.
Use the entry maximizing [font="Courier New"]dot(polyNormal, baseAxis[i][0])[/font] over i; let this index be c. Then [font="Courier New"](sa = baseAxis[c][1], ta = baseAxis[c][2])[/font] will be the canonical basis for texture coordinate generation.
Now adapt the canonical generators sa and ta to the scale and rotation. In principle you need to perform a 2D transform here, but since values such as 0, 90, 180, 270 deg are common, I suggest short-circuiting those cases to steer clear of floating point approximations.
Let sv be the index of the first nonzero component in sa, and tv the same for ta. Let sinv = sin(angle), cosv = cos(angle). Perform the rotation and scale like:
s[sv] = float((cosv * sa[sv] - sinv * sa[tv]) / plane.texMods.sScale);
s[tv] = float((sinv * sa[sv] + cosv * sa[tv]) / plane.texMods.sScale);
t[sv] = float((cosv * ta[sv] - sinv * ta[tv]) / plane.texMods.tScale);
t[tv] = float((sinv * ta[sv] + cosv * ta[tv]) / plane.texMods.tScale);
s[3] = plane.texMods.sOff * plane.texMods.sScale;
t[3] = plane.texMods.tOff * plane.texMods.tScale;
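The short-circuit for those exact angles might look like this; [font="Courier New"]computeSinCos[/font] and the degrees convention are my assumptions, not something from the snippet:

```cpp
#include <cmath>

// Compute sin/cos of the texture rotation angle (in degrees), returning
// exact values for the common axis-aligned cases to avoid floating point
// noise, and falling back to std::sin/std::cos otherwise.
void computeSinCos(float angleDeg, float &sinv, float &cosv) {
    if (angleDeg == 0.0f)        { sinv =  0.0f; cosv =  1.0f; }
    else if (angleDeg == 90.0f)  { sinv =  1.0f; cosv =  0.0f; }
    else if (angleDeg == 180.0f) { sinv =  0.0f; cosv = -1.0f; }
    else if (angleDeg == 270.0f) { sinv = -1.0f; cosv =  0.0f; }
    else {
        const float rad = angleDeg * 3.14159265358979f / 180.0f;
        sinv = std::sin(rad);
        cosv = std::cos(rad);
    }
}
```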
Again, keep in mind that since I have D3D9 convention the matrix might be different for you. You can still check out GtkRadiant.
Now you have an (s,t) texture coordinate generator basis for points on the plane. Those texture coordinates however will be generated in world units, so you need to define a "period" for each texture. That's the texture dimensions in pixels... doubled? Or halved? I don't remember. What I do remember is that I put the texture dimensions in the x and y texture axes, but I cannot track the flow of this info right now. Just try.
Loop on all the points and just project them in texture space, like:
qcoords[p * 2 + 0] = Plane::Dot3(s, point);
qcoords[p * 2 + 1] = Plane::Dot3(t, point);
qcoords[p * 2 + 0] /= texSizes; // better: multiply by a precomputed inverse, this runs for every point
qcoords[p * 2 + 1] /= texSizet;
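Spelled out as a full loop with the inverses precomputed. Names follow the snippet above but the function itself is my sketch; how the offsets stored in s[3]/t[3] enter depends on your convention, so they're left out here:

```cpp
// Plain 3-component dot product, like Plane::Dot3 in the snippet above.
float Dot3(const float a[4], const float b[3]) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Project n points (xyz triplets) into texture space, dividing by the
// texture period via precomputed reciprocals instead of a per-point divide.
void projectPoints(const float s[4], const float t[4],
                   const float *points, int n,
                   float texSizeS, float texSizeT, float *qcoords) {
    const float invS = 1.0f / texSizeS;
    const float invT = 1.0f / texSizeT;
    for (int p = 0; p < n; ++p) {
        const float *pt = &points[p * 3];
        qcoords[p * 2 + 0] = Dot3(s, pt) * invS;
        qcoords[p * 2 + 1] = Dot3(t, pt) * invT;
    }
}
```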
You're still not done. Loop on all the points+texcoords again and make sure you try to "center them around the origin". This is especially important for rotating shaders as they will typically offset by half a texture. If you don't, your fans will go all over the place.
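One possible way to do the recentering, shifting by whole texture periods so the mapping itself is unchanged; this is just a sketch of the idea, not necessarily what Quake's tools do:

```cpp
#include <cmath>

// Shift all texcoords so the face's minimum lands in [0,1): subtract the
// floor of the per-axis minimum. Whole-period shifts don't change how the
// texture tiles, but keep the values small and near the origin.
void centerTexCoords(float *qcoords, int n) {
    if (n == 0) return;
    float minS = qcoords[0], minT = qcoords[1];
    for (int p = 1; p < n; ++p) {
        if (qcoords[p * 2 + 0] < minS) minS = qcoords[p * 2 + 0];
        if (qcoords[p * 2 + 1] < minT) minT = qcoords[p * 2 + 1];
    }
    const float shiftS = std::floor(minS);
    const float shiftT = std::floor(minT);
    for (int p = 0; p < n; ++p) {
        qcoords[p * 2 + 0] -= shiftS;
        qcoords[p * 2 + 1] -= shiftT;
    }
}
```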
I thought that, when compiled to BSP, the map's texture coordinates already had stretch accounted for. Now I'm not sure, am I wrong?
If memory serves it will store the generators explicitly, so a lot of fuss is gone. But I'm fairly sure it doesn't store the coords explicitly, as this is fundamentally redundant. Anyway, BSP mangling wreaks havoc on pretty much anything that is not BSP... although the convex hull capability is fairly slick, sometimes the splits are just so "wrong" it's hard to believe.