Saldan

Member Since 28 Jan 2011
Offline Last Active May 30 2012 10:41 PM

Posts I've Made

In Topic: Best practice for vertex normal calculations?

30 May 2012 - 01:14 AM

Are you trying to avoid storing or recalculating all the triangle normals?


I'm not 100% sure about that, to be honest. The reason I was going for this approach is that the models can get quite large and complex, and I figured temporarily storing triangles as pointers to vertices, rather than as sets of new vertices, would reduce the amount of data being moved around in each model, as well as facilitate sharing of vertex normals between triangles. In the end I do store triangles as a series of floats, though, since the vertices are generated and discarded as needed, and since uploading floats to the GPU as VBOs seemed more straightforward.
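
For illustration, the sort of layout I was weighing up looks roughly like this (a minimal sketch with made-up names, not my actual classes): each triangle stores three indices into a shared vertex array, so a position and its normal exist only once no matter how many triangles touch them.

#include <vector>

struct Vertex
{
	float Position[3];
	float Normal[3];
};

struct Triangle
{
	unsigned int Index[3];	//indices into Mesh::Vertices, so vertices (and their normals) are shared
};

struct Mesh
{
	std::vector<Vertex>   Vertices;
	std::vector<Triangle> Triangles;
};

An index list like this can also be uploaded alongside the vertex VBO and drawn with glDrawElements, so sharing vertices doesn't have to conflict with keeping the GPU upload simple.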

Honestly, though, I'm new at programming meshes like this and I'm not sure what the best approaches are.

Another thing: you should weight your triangle normals in the vertex normal calculation. Not all triangle normals affect the vertex normal by the same amount. Consider a cube. If you want to smooth shade the cube (I know it's stupid), you know that the vertex normals should point in a (1, 1, 1) direction, for example (I didn't normalize it, but you get the idea). A vertex in a cube is shared by 3 sides, but that can mean four triangles, because at some corners a side is represented by 2 triangles. Without weighting, the result is distorted, because that side's normal gets added twice.


Oh gods, that does make sense. That's surely at least a part of the problem. Hm... Could you elaborate a little on your angle-weighting? Do you mean the angle of the line defined by two vertices (or vertex locations) shared by two different triangles?
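
In the meantime, my rough understanding of the angle-weighting idea looks something like the sketch below (all names here are illustrative, not from my actual code): each triangle adds its face normal to each of its three vertices, scaled by the angle of the triangle's corner at that vertex, and the sums are normalized at the end.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  Sub(Vec3 a, Vec3 b)   { return Vec3{ a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Cross(Vec3 a, Vec3 b) { return Vec3{ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static Vec3  Normalize(Vec3 a)     { float l = std::sqrt(Dot(a, a)); return Vec3{ a.x/l, a.y/l, a.z/l }; }

//Positions plus a flat index list (3 indices per triangle) in, one normal per vertex out
void ComputeAngleWeightedNormals(const std::vector<Vec3>& Positions,
                                 const std::vector<unsigned int>& Indices,
                                 std::vector<Vec3>& Normals)
{
	Normals.assign(Positions.size(), Vec3{ 0.0f, 0.0f, 0.0f });
	for (std::size_t t = 0; t + 2 < Indices.size(); t += 3)
	{
		unsigned int I[3] = { Indices[t], Indices[t + 1], Indices[t + 2] };
		Vec3 FaceNormal = Normalize(Cross(Sub(Positions[I[1]], Positions[I[0]]),
		                                  Sub(Positions[I[2]], Positions[I[0]])));
		for (int c = 0; c < 3; ++c)
		{
			//Angle between the two triangle edges that meet at this corner
			Vec3 e1 = Normalize(Sub(Positions[I[(c + 1) % 3]], Positions[I[c]]));
			Vec3 e2 = Normalize(Sub(Positions[I[(c + 2) % 3]], Positions[I[c]]));
			float Angle = std::acos(std::max(-1.0f, std::min(1.0f, Dot(e1, e2))));
			Normals[I[c]].x += FaceNormal.x * Angle;
			Normals[I[c]].y += FaceNormal.y * Angle;
			Normals[I[c]].z += FaceNormal.z * Angle;
		}
	}
	for (std::size_t v = 0; v < Normals.size(); ++v)
		if (Dot(Normals[v], Normals[v]) > 0.0f)	//skip vertices no triangle touches
			Normals[v] = Normalize(Normals[v]);
}

If that's the right idea, then a face that happens to be split into two triangles at a corner contributes the same total weight there as a face covered by a single triangle, which would be exactly the cube problem you described.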

That said, your pyramid will look distorted anyway, because you smooth sharp edges too. Take a look at a smooth-shaded triangle in any 3D modelling software. It will look something like that.


Yes, I was expecting it to look weird since, as you said, it's got sharp edges. However, I was rather expecting the weirdness to be uniform along the edges of the pyramid (like the edges were highlighted or something), rather than jagged. Presumably, if the weirdness was uniform, it could be reduced by having a smoother model.

In some of the more complex models I have, there are sharp edges (90° or less) that I'd rather not have smoothed, but duller edges that I would like to smooth out. I read a little about smoothing groups on Wikipedia just now, but I'd need some kind of algorithm that can efficiently calculate and modify smoothing groups on the fly, because my meshes are supposed to be not only arbitrary but also, to an extent, deformable. Ideally, an algorithm that identifies whether a neighboring triangle is at too sharp an angle to be included in the normal modification process, or something similar. I suppose I could just compare triangle normals and, if they are too different, ignore the neighboring triangle in the calculations...
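
Something like the check below is what I had in mind for the "too sharp an angle" test (a sketch only; Vec3 and Dot are the same illustrative helpers as in the sketch above, and the 60° threshold is an arbitrary choice).

#include <cmath>

//True if two unit-length face normals are within CreaseAngleDegrees of each
//other, i.e. the shared edge should be smoothed rather than kept sharp
bool SharesSmoothEdge(Vec3 FaceNormalA, Vec3 FaceNormalB, float CreaseAngleDegrees)
{
	float CosThreshold = std::cos(CreaseAngleDegrees * 3.14159265f / 180.0f);
	//For unit vectors, the dot product is the cosine of the angle between them
	return Dot(FaceNormalA, FaceNormalB) >= CosThreshold;
}

Triangles failing this test against the triangle whose vertex normal is being built would simply be skipped during accumulation, which approximates smoothing groups without storing them explicitly.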

In Topic: How to Use a Quaternion as a Camera Orientation

21 May 2012 - 07:57 AM

Good news! I've solved the problem. The issue was that I should have been pushing a matrix created from the inverse of the Orientation (as well as reversing all the transformation data, i.e. the rotation and translation). What I do now is

	//Rotate camera
	glMultMatrixf( m_PlayerCamera.GetInvertedMatrix() );

	//Translate camera
	sf::Vector3<float> CameraPos = m_PlayerCamera.GetPosition();
	glTranslatef(CameraPos.x, CameraPos.y, CameraPos.z);


Where GetInvertedMatrix() looks like this:

GLfloat* Camera::GetInvertedMatrix()
{
	//Get a new quaternion, the inverse of the Orientation
	Quaternion InvertedOrientation = m_Orientation;
	InvertedOrientation.Invert();

	//Fill the matrix with the orientation
	InvertedOrientation.RotationMatrix(m_RotationMatrix);

	//Return that matrix
	return m_RotationMatrix;
}

Once I do this, everything works as intended.

In case anybody in the future is looking for the way to Invert() a quaternion, here's my code:

void Quaternion::Invert()
{
	//The inverse is the conjugate divided by the squared length of the quaternion
	float InvLengthSq = 1.0f / (x*x + y*y + z*z + w*w);
	x *= -InvLengthSq;
	y *= -InvLengthSq;
	z *= -InvLengthSq;
	w *= InvLengthSq;
}
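
A quick standalone way to sanity-check an Invert() like this, without touching the rest of the class: for q = (w, x, y, z), the inverse is the conjugate divided by the squared length, and multiplying q by its inverse should give the identity quaternion (1, 0, 0, 0). The little program below is just an illustration, not part of my engine.

#include <cstdio>

int main()
{
	//An arbitrary test quaternion
	float w = 0.5f, x = 0.5f, y = 0.5f, z = 0.5f;

	//Inverse: conjugate divided by the squared length
	float LenSq = w*w + x*x + y*y + z*z;
	float iw = w / LenSq, ix = -x / LenSq, iy = -y / LenSq, iz = -z / LenSq;

	//Hamilton product q * q^-1; should print roughly 1 0 0 0
	float rw = w*iw - x*ix - y*iy - z*iz;
	float rx = w*ix + x*iw + y*iz - z*iy;
	float ry = w*iy + y*iw + z*ix - x*iz;
	float rz = w*iz + z*iw + x*iy - y*ix;

	std::printf("%f %f %f %f\n", rw, rx, ry, rz);
	return 0;
}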

In Topic: How to Use a Quaternion as a Camera Orientation

21 May 2012 - 12:31 AM

Thanks for the reply!

"Doing the reverse" means in this case m_TurnSpeed becomes "-m_TurnSpeed"
And, when translating, m_MoveSpeed is also negated "- m_MoveSpeed"

But don't change the order of the quaternion multiplcations, it's not the same. The idea behind the "camera matrix" (actually called view matrix) is that, if you can't the camera, move the whole world. This means if the camera is moved 2 units to the left, it's the same as if everything else moves 2 units to the right. Same with rotations, etc. You get the idea.


See, I thought this was the case, but when I try that out (inverting m_TurnSpeed and m_MoveSpeed, leaving the order of the quaternion multiplication alone), what happens is the rotations are in the wrong direction, and they don't add up. So for example, if I turn to the right around the Y axis (by pressing the key meant to turn to the left), then try to rotate around the Z axis, I end up rotating around the global Z axis, not the camera's Z axis, i.e. I end up rotating around the camera's X axis (if it's a 90° turn). When I leave the values alone and reverse the multiplication order, rotation *appears* to work as expected (i.e. rotation around the Y axis, then around the Z axis, leads to a rotation around the new, post-Y-rotation Z axis and not the old one). It might not be, of course; if that's the case, then there's a deeper problem.
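
One general rule that seems relevant here, assuming the common convention where a vector is rotated as v' = q * v * q^-1 (note that the MultVect code later in this post multiplies in the opposite order, so the roles could end up swapped). "Offset" below stands in for any of the per-axis offset quaternions.

	//Same Offset quaternion in both lines; only the multiplication order differs
	m_Orientation = Offset * m_Orientation;	//applies Offset about the world (global) axes
	m_Orientation = m_Orientation * Offset;	//applies Offset about the camera's own (local) axes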

If you still don't get it working after negating those variables, then the way you're extracting the translation part and then applying movement doesn't look quite right. I can't really tell without debugging the numbers.
If I'm not mistaken, that code should make the camera spin around the origin when trying to rotate, rather than rotating around its center. But in the models you don't see it glitch like that, because you normalize the matrix and use glTranslate to let OpenGL perform the translation for you, so it miraculously works.


Hm, that could be a problem. Here is what MultVect() looks like:

sf::Vector3<float> Quaternion::MultVect(sf::Vector3<float> Vector)
{
	//From http://www.idevgames.com/articles/quaternions
	Quaternion VectorQuat = Quaternion();
	VectorQuat.x = Vector.x;
	VectorQuat.y = Vector.y;
	VectorQuat.z = Vector.z;
	VectorQuat.w = 0.0;
	Quaternion Inverse = (*this);
	Inverse.Invert();
	//Note the order: this computes (q^-1 * v * q) rather than (q * v * q^-1)
	Quaternion Result = Inverse * VectorQuat * (*this);
	sf::Vector3<float> ResultVector;
	ResultVector.x = Result.x;
	ResultVector.y = Result.y;
	ResultVector.z = Result.z;
	return ResultVector;
}

I do use glTranslatef() when rendering the scene. Here is what that code looks like (it is called before anything else in the scene is rendered):

	//Rotate camera
	glMultMatrixf(m_PlayerCamera.GetMatrix());
	//Translate camera
	sf::Vector3<float> CameraPos = m_PlayerCamera.GetPosition();
	glTranslatef(CameraPos.x, CameraPos.y, CameraPos.z);

But in the camera code you don't have that, so, if I'm not mistaken, you are unable to move it, right?


It actually moves quite smoothly, just in the wrong direction. If I do the following, I get near-perfect movement so long as I don't try to rotate around the camera's Z axis.

	//Create new quaternions for each axis
	Quaternion xOffset = Quaternion();
	xOffset.FromAxisAngle(Rotation.x * m_TurnSpeed, 1.0, 0.0, 0.0);
	Quaternion yOffset = Quaternion();
	yOffset.FromAxisAngle(Rotation.y * m_TurnSpeed, 0.0, 1.0, 0.0);
	Quaternion zOffset = Quaternion();
	zOffset.FromAxisAngle(Rotation.z * m_TurnSpeed, 0.0, 0.0, 1.0);
	//Multiply the new quaternions into the orientation
	m_Orientation = m_Orientation * yOffset * zOffset * xOffset;
	//Get matrix
	m_Orientation.Normalize();
	m_Orientation.RotationMatrix(m_RotationMatrix);
	//Translate camera
	Translation = m_Orientation.MultVect(Translation);
	m_Position.x += (Translation.x * m_MoveSpeed);
	m_Position.y += (Translation.y * m_MoveSpeed);
	m_Position.z -= (Translation.z * m_MoveSpeed);

Now, mathematically, I'm almost certain something isn't right here, since that's not what I'd expect the proper operations to look like.

In Topic: Quaternion Object Rotation

22 March 2012 - 11:04 AM

Found it!

It turns out my implementation of quaternions wasn't correct after all. The code for quaternion multiplication I had was either wrong, or I copied it wrong. Either way, fixing it by referring to this site helped: http://www.arcsynthesis.org/gltut/Positioning/Tut08%20Quaternions.html

For the record, the correct quaternion multiplication code I now use is:

Quaternion Quaternion::operator* (Quaternion OtherQuat)
{
    float A = (OtherQuat.a * a) - (OtherQuat.x * x) - (OtherQuat.y * y) - (OtherQuat.z * z);
    float X = (OtherQuat.a * x) + (OtherQuat.x * a) + (OtherQuat.y * z) - (OtherQuat.z * y);
    float Y = (OtherQuat.a * y) + (OtherQuat.y * a) + (OtherQuat.z * x) - (OtherQuat.x * z);
    float Z = (OtherQuat.a * z) + (OtherQuat.z * a) + (OtherQuat.x * y) - (OtherQuat.y * x);
    Quaternion NewQuat = Quaternion();
    NewQuat.a = A;
    NewQuat.x = X;
    NewQuat.y = Y;
    NewQuat.z = Z;
    return NewQuat;
}
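
If anyone wants a quick sanity check for a multiplication operator like this, one hand-constructed case is easy to verify (assuming the same a/x/y/z members and default constructor as above): a 90° rotation about the Z axis is q = (cos 45°, 0, 0, sin 45°), and multiplying it by itself should give a 180° rotation about Z, i.e. roughly (0, 0, 0, 1).

//90-degree rotation about the Z axis: q = (cos 45°, 0, 0, sin 45°)
Quaternion Rot90;
Rot90.a = 0.7071068f;
Rot90.x = 0.0f;
Rot90.y = 0.0f;
Rot90.z = 0.7071068f;

//Both operands are identical, so multiplication order doesn't matter here;
//the result should come out as roughly a = 0, x = 0, y = 0, z = 1
Quaternion Rot180 = Rot90 * Rot90;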

Many thanks to you, alvaro! I otherwise would have just assumed this was beyond my capabilities.

The object now rotates easily along all its local axes, no matter what previous rotations took place. Awesome!

In Topic: Quaternion Object Rotation

22 March 2012 - 09:18 AM

All right, I'll do some more intensive debugging and get back with the results.
