
# Special texture-to-object mapping - matrix problems


## Recommended Posts

This isn't the easiest thing for me to explain, so I will first tell you what I'm trying to accomplish and what I have so far.

First, some background. We have a standard square texture. Yay. This texture emits light. Yay again. However, logically, only the portions of this texture that actually have a light fixture should emit light. Yet the current calculations (I'm working from already existing lighting tool source code) simply assume the entire texture emits light, which makes things easier, as I will explain. Each face is divided into 32x32 patches in a face-aligned grid pattern (using the U/V texture vectors for those calculations). When the entire texture emits light, we can simply say, "Hey, each one of these patches will emit light as well!" and call it a day.

What I'm trying to do is this: allow the mapper (if it isn't obvious by now, I'm working with HLRAD, which works with the Half-Life engine) to specify only certain regions of this texture for light emission.

Here's my situation (read on!). Let's say we have our region polygons defined. We also have our U/V vectors, along with an offset for each vector. At first glance, it would seem that all I have to do is scale each point depending on the texture scale, then translate into object space using simple matrix multiplication - (translation matrix * rotation matrix * point) for each point in the polygon. My dilemma comes from the fact that my region polygons must repeat regardless of offset, to match the textures, which repeat inherently! For example (let's assume a simple x/y plane), if I have a 10x10 texture with some region polygons mapped to a 100x100 face (also assume a scale of 1 for both U and V), then those region polygons have to repeat themselves across the entire face 10 times in both directions (a total of 100 sets), even if the offset is 0 and I only have one set of physical region polygons initialized.
This requirement means that we cannot rely on the offset to correctly transform the region polygons into the right position for the actual clipping against the patches, but only to set up the right "offset" from which to "loop and check" (hey, maybe that's why it's just called an offset!). By "loop and check", I mean we use the U and V vectors as translation components. Place our region polygons, which are in object space (aligned with U/V) but not offset correctly, far away along negative U and V (by a factor of the logical width and height to maintain the proper offset once we reach any patches - logical width = real texture width * the U (width) scale, logical height = real texture height * the V (height) scale), and then step their physical position along U and V by their logical width and height until we intersect the patches we are working with. Do clipping on a per-iteration basis, and continue looping until our region polygons are far away along positive U and V. (Just in case: "far away" means outside the allowable map bounds.)

That's the scoop. The problem is, this seems like overkill beyond anything I've ever seen before, and it also sounds like it would run SLOW! I mean, planar polygon clipping isn't the fastest thing to do, and most of the time it would be done in vain, only to discover that the polygons don't even intersect! And since we're not on a nice x/y plane, even simple bounding-box collision detection is a big issue.

For those of you who have a vague idea of what I've just explained: do you know of an easier way to do this? Would it be possible to simply create several logical width/height-aligned copies of my region polygons in texture space in a predictable manner, so that when I transform to object space I can jump immediately into patch/region polygon clipping? Well, if anyone can understand my jumbled post, feel free to reply!
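For concreteness, the tiling step described above can be sketched in 2D texture space. Rather than marching the polygons in from outside the map bounds, the tile index range can be computed directly from the face's texture-space bounding box, so only tiles that can actually touch the face are visited. Everything here (`Vec2`, `tileRegionPolygon`) is an illustrative sketch, not HLRAD code:

```cpp
#include <cmath>
#include <vector>

// One physical region polygon is replicated across the face's
// texture-space bounding box in steps of the logical texture size
// (logical size = texture size * scale, per the description above).
struct Vec2 { double x, y; };

std::vector<std::vector<Vec2>> tileRegionPolygon(
    const std::vector<Vec2>& region,   // one region polygon, texture space
    double logicalW, double logicalH,  // logical tile width/height
    double minS, double maxS,          // face bounding box along U
    double minT, double maxT)          // face bounding box along V
{
    std::vector<std::vector<Vec2>> copies;
    // Tile indices covering the bounding box, inclusive of partial tiles.
    int i0 = (int)std::floor(minS / logicalW);
    int i1 = (int)std::ceil (maxS / logicalW);
    int j0 = (int)std::floor(minT / logicalH);
    int j1 = (int)std::ceil (maxT / logicalH);
    for (int i = i0; i < i1; ++i)
        for (int j = j0; j < j1; ++j) {
            std::vector<Vec2> copy = region;
            for (Vec2& p : copy) { p.x += i * logicalW; p.y += j * logicalH; }
            copies.push_back(copy);
        }
    return copies;
}
```

With the 10x10 texture on a 100x100 face from the example, this produces the expected 100 copies; each copy would then be clipped against the patches.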
EDIT: Just thought of this, but how about if, instead of transforming the region polygons to object space, I transform all the patches to texture space, do the simple clipping there, and then transform the resulting patches back to object space? My problem is that I've never worked with any sort of mapping before, let alone 3D graphics, and I have no idea which way would turn out better. This latter one sounds good, though. Not that speed is much of an issue - nothing here has to be performed in realtime. However, I would like something easy to code for my first dive into texture mapping (well, sort of).

[edited by - Zipster on June 4, 2002 2:51:40 AM] [edited by - Zipster on June 7, 2002 6:32:36 PM]

##### Share on other sites
Uh huh? Frankly, I have no idea what you are talking about.

I understood that you are splitting up polygons into patches and applying "light textures". Sounds like radiosity area lights to me. Do you want to define area lights by using a kind of masking texture?

Anyway, your problem seems fairly complex. You should perhaps split it into several sub-problems and ask about each one individually. This will make it easier for people to follow you.

/ Yann

##### Share on other sites
Yay, I confused the moderator :D

The diagram of the problem looks clear and concise in my head. Why can't you see that?!?!

Anyway, picture a polygon P at some orientation in object space, subdivided into at most 32x32 grid squares. The grid used to subdivide this polygon was aligned with the polygon's plane, so these aren't any weird-shaped squares. Yup, they're squares.

Now, imagine I have a few polygons in texture space (the x/y plane) that are within the width W and height H of a texture Q (all points are within the texture). The insides of these polygons define where light will be emitted. Let's call this collection of polygons light masks. They are logically attached to the texture (they have to be, to keep the texture and light mask patterns consistent).

Let's now step back for just a moment. When we map texture Q to polygon P, the texture should repeat over and over, regardless of the size of polygon P. This means that no matter how big the face is, the texture will repeat. Now, we mentioned earlier that our light masks are attached to the texture. This means that they have to "follow" the texture. Thus, if we have a face large enough that the texture has to repeat itself several times, these light masks also have to repeat in a pattern consistent with the texture in order to get the proper lighting patterns.

Problem? We only have one physical instance of these light masks in texture space. Yet they must follow a potentially huge texture map, and potentially mask the face several times at different points. This is my dilemma. My thoughts right now lean towards translating polygon P to texture space with texture Q, so that I can then perform some sort of loop across the whole of the x/y plane (or at least the bounding box of P in texture space), correctly masking the lighting of P, even if the texture repeats a lot and/or has a small scale.
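One way to avoid replicating the light masks at all: since the masks tile with period W x H, a texture-space point can be wrapped back into the canonical tile and tested against the single physical mask instance. A minimal sketch of the wrap, assuming nothing about HLRAD's own data structures (`wrapCoord` is an illustrative name):

```cpp
#include <cmath>

// Wrap a texture-space coordinate into [0, size). std::fmod keeps the
// sign of its first argument, so negative results must be shifted up
// by one period to land inside the canonical tile.
double wrapCoord(double s, double size)
{
    double w = std::fmod(s, size);
    if (w < 0.0)
        w += size;
    return w;
}
```

A point at (103, 7) on a 10x10 texture wraps to (3, 7), which can then be point-in-polygon tested against the one stored set of light masks instead of a tiled copy.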

I hope that description was more clear!

EDIT: To clarify, I will transform everything back to object space when I'm done.

[edited by - Zipster on June 6, 2002 2:23:47 AM]

##### Share on other sites
Well, as I expected, I am having some problems with my transformation. I've checked dozens of sites, and they all appear to justify my current matrix math, but the points in my resulting polygons are all screwy. I've tested the code with all rectangles, meaning all resulting polygons after clipping and such should be rectangles as well, but I'm getting crazy values. X/Y-aligned planes result in Y/Z planes after transforming back and forth, and the results aren't even rectangles... it's crazy, seriously!

Hopefully you can help! I will describe what I know and how I'm doing it, and you can see if I am correct (which I shouldn't be!). What I have is basic information:

tx->vecs[s/t][XYZ offset] -> Two vectors, S (index 0) and T (index 1), each with four floating-point values: X, Y, Z, and an offset. We know for a fact that T will always be to the "right" of S (i.e. if S were 1,0,0 then T would be 0,-1,0). I'm still not sure that a V vector of 0,-1,0 would differ from 0,1,0, as long as the coordinate system used for the points (world space) remains consistent and the normal always points in the same direction. The length of the vectors equals the inverse of the texture scale in each direction. Just in case, that means if the S and T scales were both 2, the length of the vectors would be 0.5.
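That length/scale relationship is easy to sanity-check in isolation; a tiny sketch (illustrative helper, not part of the tool's code):

```cpp
#include <cmath>

// Per the description above, the S/T vector length is the inverse of
// the texture scale along that axis, so scale = 1 / |vec|.
double textureScale(double x, double y, double z)
{
    double len = std::sqrt(x * x + y * y + z * z);
    return 1.0 / len;
}
```

A vector of length 0.5 (e.g. 0.5, 0, 0) yields a scale of 2, matching the example in the paragraph above.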

I then generate N using V x U and normalize. To transform from object space to texture space, I thought what I would have to do is transform by the "vector matrix" (you'll see what I mean below) and then translate by the offset. According to what I've read, this means I would have to do the matrix multiplication in this order ->

Translation Matrix * "Vector Matrix" * Point

These are my matrices:
```
   Translation Matrix   "Vector Matrix"     Point
          ||                  ||              ||
          \/                  \/              \/
   [ 1  0  0  -Cx ]   [ Ux Uy Uz  0 ]   [ Px ]
   [ 0  1  0  -Cy ] * [ Vx Vy Vz  0 ] * [ Py ]
   [ 0  0  1  -Cz ]   [ Nx Ny Nz  0 ]   [ Pz ]
   [ 0  0  0   1  ]   [ 0  0  0   1 ]   [ 1  ]

Simplify:

   [ Ux Uy Uz -Cx ]   [ Px ]
   [ Vx Vy Vz -Cy ] * [ Py ]
   [ Nx Ny Nz -Cz ]   [ Pz ]
   [ 0  0  0   1  ]   [ 1  ]

Finally:                              . = DotProduct

   [ UxPx + UyPy + UzPz - Cx ]   [ U.P - Cx ]
   [ VxPx + VyPy + VzPz - Cy ] = [ V.P - Cy ]
   [ NxPx + NyPy + NzPz - Cz ]   [ N.P - Cz ]
   [           1             ]   [     1    ]
```

Is this correct? It seems kind of funny how it comes out to a dot product, but it makes sense to me. Also, since I neglected to normalize U and V before, the points should be scaled in texture space by the appropriate amount.

Now, let's say I do my stuff. Shazam, now I want all my points back in object space. Logic would dictate that I need to perform the operations in reverse order, with a transposed "Vector Matrix" and negated offsets in my translation vector.
Doing the math:
```
                     Don't negate
                         ||
                         \/
[ Ux Vx Nx 0 ]   [ 1  0  0  Cx ]   [ Px ]
[ Uy Vy Ny 0 ] * [ 0  1  0  Cy ] * [ Py ]
[ Uz Vz Nz 0 ]   [ 0  0  1  Cz ]   [ Pz ]
[ 0  0  0  1 ]   [ 0  0  0  1  ]   [ 1  ]

Simplify:

[ Ux Vx Nx (UxCx + VxCy + NxCz) ]   [ Px ]
[ Uy Vy Ny (UyCx + VyCy + NyCz) ] * [ Py ]
[ Uz Vz Nz (UzCx + VzCy + NzCz) ]   [ Pz ]
[ 0  0  0            1          ]   [ 1  ]

Finally:

[ Ux(Px + Cx) + Vx(Py + Cy) + Nx(Pz + Cz) ]
[ Uy(Px + Cx) + Vy(Py + Cy) + Ny(Pz + Cz) ]
[ Uz(Px + Cx) + Vz(Py + Cy) + Nz(Pz + Cz) ]
[                   1                     ]
```
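One thing worth double-checking in this derivation: because U and V are deliberately left unnormalized (their length is the inverse of the texture scale), the "Vector Matrix" is not orthonormal, and the transpose of a non-orthonormal matrix is not its inverse. Assuming U, V, and N are mutually perpendicular and |N| = 1, the exact inverse divides each basis vector by its squared length. A minimal sketch under those assumptions (`Vec3` and `textureToObject` are illustrative, not HLRAD code):

```cpp
struct Vec3 { double x, y, z; };

// If  s = U.P - Cx,  t = V.P - Cy,  n = N.P - Cz  with U, V, N mutually
// perpendicular and |N| = 1, the exact inverse is
//   P = (s + Cx) * U/|U|^2 + (t + Cy) * V/|V|^2 + (n + Cz) * N
// The plain transpose only inverts the matrix when |U| = |V| = 1, which
// is exactly what unnormalized texture vectors violate.
Vec3 textureToObject(Vec3 U, Vec3 V, Vec3 N,
                     double s, double t, double n,
                     double Cx, double Cy, double Cz)
{
    double u2 = U.x * U.x + U.y * U.y + U.z * U.z;  // |U|^2
    double v2 = V.x * V.x + V.y * V.y + V.z * V.z;  // |V|^2
    double a = (s + Cx) / u2;
    double b = (t + Cy) / v2;
    double c = n + Cz;                              // |N|^2 assumed 1
    return { a * U.x + b * V.x + c * N.x,
             a * U.y + b * V.y + c * N.y,
             a * U.z + b * V.z + c * N.z };
}
```

With U = (0.5, 0, 0) and V = (0, 0.5, 0) (scale 2), the point (2, 4, 6) maps forward to (1, 2, 6) and this inverse recovers (2, 4, 6) exactly, whereas the bare transpose would not.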

My code:

```cpp
static void ProjectWindingArrayToTextureSpace(const texinfo_t* tx)
{
	Winding **winding = windingArray;
	vec3_t *P;
	Winding *t_winding;
	vec3_t newpoint;
	vec3_t U, V, N;
	int x, y;

	// Set up U V N
	VectorCopy(tx->vecs[0], U);
	VectorCopy(tx->vecs[1], V);
	CrossProduct(V, U, N);
	VectorNormalize(N);

	// Z will always be zero
	newpoint[2] = 0.0f;

	for (x = 0; x < g_numwindings; x++, winding++)
	{
		t_winding = new Winding();
		for (y = 0, P = (*winding)->m_Points; y < (*winding)->m_NumPoints; y++, P++)
		{
			newpoint[0] = DotProduct(U, *P) - tx->vecs[0][3];
			newpoint[1] = DotProduct(V, *P) - tx->vecs[1][3];
			t_winding->addPoint(newpoint);
		}

		// Delete the old, stupid winding!
		delete *winding;
		// Replace with new, transformed winding
		*winding = t_winding;
	}
}

static void ProjectWindingArrayBackToObjectSpace(const texinfo_t* tx)
{
	Winding **winding = windingArray;
	vec3_t *P;
	Winding *t_winding;
	vec3_t newpoint;
	vec3_t U, V, N;
	int x, y;
	float PxCx, PyCy;	// Precomputed reused values

	// Set up U V N
	VectorCopy(tx->vecs[0], U);
	VectorCopy(tx->vecs[1], V);
	CrossProduct(V, U, N);
	VectorNormalize(N);

	for (x = 0; x < g_numwindings; x++, winding++)
	{
		t_winding = new Winding();
		for (y = 0, P = (*winding)->m_Points; y < (*winding)->m_NumPoints; y++, P++)
		{
			// This is the only real change in the code
			// Precompute Px + Cx and Py + Cy
			// (note the parentheses: *P[0] parses as *(P[0]) and *P[3]
			// would read past the current point entirely)
			PxCx = (*P)[0] + tx->vecs[0][3];
			PyCy = (*P)[1] + tx->vecs[1][3];

			newpoint[0] = U[0]*PxCx + V[0]*PyCy + N[0]*(*P)[2];
			newpoint[1] = U[1]*PxCx + V[1]*PyCy + N[1]*(*P)[2];
			newpoint[2] = U[2]*PxCx + V[2]*PyCy + N[2]*(*P)[2];
			t_winding->addPoint(newpoint);
		}

		// Delete the old, stupid winding!
		delete *winding;
		// Replace with new, once-again transformed winding
		*winding = t_winding;
	}
}
```

Everything I've read doesn't indicate that I should do anything different, and while I think I am doing it the right way, I just don't know enough about this to be sure!
