

Member Since 28 Jan 2011
Offline Last Active May 30 2012 10:41 PM

Topics I've Started

Best practice for vertex normal calculations?

29 May 2012 - 11:37 PM

I'm looking to create arbitrarily-shaped meshes and calculate the proper vertex normals for each vertex. As I understand it (and I may be wrong), if you want smooth shading, each vertex needs a normal that represents the combined normals of all the triangles that vertex is involved in. For instance, if you have a straight edge along your model where one surface extends down along Y starting at 0 and the other extends right along X starting at 0, both running infinitely along the Z axis, then the vertices on that edge need normals that point exactly in the +Y, -X direction. I'm sure that's not the best description, but you get my gist. Otherwise, if the vertices in each triangle merely carry that triangle's own normal, you'll get entirely flat shading.

(EDIT: Just to be clear, I mean that given a triangle A, the vertices A1, A2 and A3 shouldn't all have identical normals - rather, each vertex's normal should be influenced by the normals of other triangles as well, such that, for example, vertices A2 and A3 should each have a normal also influenced by the normal of the separate triangle A2-A3-A4.)

So, first question - is that indeed the way to achieve smoother-looking shading?

If it is, I've got something of a problem, as the image below (which should be a pyramid) no doubt makes clear. The triangles themselves are all placed accurately, but I'm not calculating my normals properly. I should mention that the way I build my triangles internally is that I have a fixed set of vertices and, based on a number of parameters, choose some of these vertices as points in an arbitrary mesh (which are first stored as pointers, and then ultimately uploaded to a vertex buffer object as a series of floats). Since vertices store their normals as well as their position, I thought it would be easy to ensure that each time a vertex is used by a triangle, that triangle's normal influences the vertex's total normal. It appears that isn't the case.

Currently, every time I make a new triangle, I calculate its normal and then add that normal to the existing normals of the three vertices involved. After making all my triangles, I reduce the length of each vertex's normal to 1. I've also tried adding them and then dividing by the number of triangles involved, but that doesn't work and it doesn't sound like it should work, either.

This is my normal calculation. I run this for all triangles (which contain pointers to their Verts 1, 2, and 3).

//Edge vectors from Vert1 to Vert2 and Vert3
float A[3] = {Vert2.x - Vert1.x, Vert2.y - Vert1.y, Vert2.z - Vert1.z};
float B[3] = {Vert3.x - Vert1.x, Vert3.y - Vert1.y, Vert3.z - Vert1.z};

//Cross product A x B gives the face normal
float nX = A[1] * B[2] - A[2] * B[1];
float nY = A[2] * B[0] - A[0] * B[2];
float nZ = A[0] * B[1] - A[1] * B[0];

//Flip the normal to match my winding order
nX *= -1;
nY *= -1;
nZ *= -1;

//Accumulate the face normal into each of the triangle's vertices
Vert1.nx += nX;
Vert1.ny += nY;
Vert1.nz += nZ;

Vert2.nx += nX;
Vert2.ny += nY;
Vert2.nz += nZ;

Vert3.nx += nX;
Vert3.ny += nY;
Vert3.nz += nZ;

This is how I normalize the normals. I run this for all Verts after having run the above code for all triangles.

float Length = sqrt(Vert.nx * Vert.nx + Vert.ny * Vert.ny + Vert.nz * Vert.nz);

Vert.nx /= Length;
Vert.ny /= Length;
Vert.nz /= Length;

Clearly, the results aren't what I want. I was hoping somebody might be able to help me arrive at a better solution than this. Any advice?
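To make the scheme concrete, here's a stripped-down, self-contained version of the accumulate-then-normalize idea (the Vert struct and function names are simplified for this post, and I've left out my winding-order sign flip). One detail I've picked up: adding the raw cross products, without normalizing each face normal first, weights each face's contribution by its area, which is apparently usually what you want:

```cpp
#include <cmath>

// Illustrative vertex type: position plus an accumulated normal.
struct Vert {
    float x, y, z;
    float nx = 0, ny = 0, nz = 0;
};

// Add the triangle's raw cross product (A x B) to each vertex's normal.
// Skipping the per-face normalization here weights each face by its area.
void accumulateFaceNormal(Vert& v1, Vert& v2, Vert& v3) {
    float ax = v2.x - v1.x, ay = v2.y - v1.y, az = v2.z - v1.z;
    float bx = v3.x - v1.x, by = v3.y - v1.y, bz = v3.z - v1.z;
    float nx = ay * bz - az * by;
    float ny = az * bx - ax * bz;
    float nz = ax * by - ay * bx;
    v1.nx += nx; v1.ny += ny; v1.nz += nz;
    v2.nx += nx; v2.ny += ny; v2.nz += nz;
    v3.nx += nx; v3.ny += ny; v3.nz += nz;
}

// Normalize once, only after every triangle has contributed.
void normalizeVertexNormal(Vert& v) {
    float len = std::sqrt(v.nx * v.nx + v.ny * v.ny + v.nz * v.nz);
    if (len > 0.0f) { v.nx /= len; v.ny /= len; v.nz /= len; }
}
```

A quick sanity check: run this over two coplanar triangles sharing an edge, and every smoothed normal should come out as exactly (0, 0, 1).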

How to Use a Quaternion as a Camera Orientation

20 May 2012 - 11:23 PM

So a while ago I was trying to figure out how to rotate objects using quaternions, and I eventually succeeded. Now I'm trying to adapt the same system for use in my camera. I am coding in C++, using OpenGL and SFML. Here is how I rotate and translate objects at the moment:

	//Create change quaternions for each axis
	Quaternion xOffset = Quaternion();
	xOffset.FromAxisAngle(Rotation.x * m_TurnSpeed, 1.0, 0.0, 0.0);
	Quaternion yOffset = Quaternion();
	yOffset.FromAxisAngle(Rotation.y * m_TurnSpeed, 0.0, 1.0, 0.0);
	Quaternion zOffset = Quaternion();
	zOffset.FromAxisAngle(Rotation.z * m_TurnSpeed, 0.0, 0.0, 1.0);

	//Multiply the change quats by the current quat
	m_Orientation = yOffset * zOffset * xOffset * m_Orientation;

	//Get matrix

	Translation = m_Orientation.MultVect(Translation);

	m_Position.x += (Translation.x * m_MoveSpeed);
	m_Position.y += (Translation.y * m_MoveSpeed);
	m_Position.z += (Translation.z * m_MoveSpeed);

I apply m_RotationMatrix using glMultMatrixf(), and m_Position using glTranslatef().

This works fine for rotating and translating objects, but when I apply it to my camera, obviously things don't work. I say obviously because, as far as I know, if I want to change the direction I'm "looking at" in OpenGL, I need to apply the reverse of the changes I'd apply to an object. However, I'm not quite sure how to do this, and all my attempts have failed.

Leaving everything exactly the same, I find rotations don't add up and translations are backwards or inverted. Inverting the input doesn't help.

I believe that using

m_Orientation = m_Orientation * yOffset * zOffset * xOffset;

instead of the previous form is appropriate for the camera, and this is enough to fix all the rotation issues, but I simply can't manage to get translations to work after that. I can get close by adding the X and Y changes to the position normally and then subtracting the Z change from the position, but this still leads to wonky behavior when moving and rotating around the Z axis at the same time, and the object ends up spiraling away from the point it is supposed to be pointing towards.

So, any tips on how to adapt the above behavior for proper use as a camera?

Also, when applying a camera in OpenGL, do I need to use glPushMatrix() and glPopMatrix()? I ask because that helped some tests run better than others, but it doesn't seem to make sense to do so, since those are basically for pushing a new local space for transformations onto the stack (I'm horribly mangling my terminology, I'm sure).

Quaternion Object Rotation

16 March 2012 - 07:02 AM

So I'm trying to code something similar to a 3D space sim, where I want six degrees of freedom. Apparently the best way to achieve 6DoF is with quaternions, so I've spent the last two days reading up about them and trying to get some code to work. I am using SFML 1.6 and OpenGL, nothing else.

I have a Quaternion class that I'm fairly certain is mathematically correct, and can create a quaternion from an axis and an angle; it normalizes itself, and the multiplication operator is overloaded to perform a proper multiplication.

However, I don't understand how to transform the player's input into a quaternion representing a change in rotation. The player can input changes to the Pitch, Yaw and Roll, potentially all three at once. The object they are controlling is then supposed to rotate according to this input along its LOCAL axes.

Suppose the player wanted to change the Pitch by 30°, the Yaw by 45° and the Roll by 10° all at the same time (I'm using that example because it's the most complex possible situation my code will have to deal with). So I have these nice little values, 30-45-10 (which I could also convert to radians), but what do I do with them?

I've read dozens of tutorials, and what little I can make sense of them seems to indicate I need to make the change quaternion like this (pseudocode):

Quaternion xChange = Quaternion.FromAxisAngle(30, 1, 0, 0);
Quaternion yChange = Quaternion.FromAxisAngle(45, 0, 1, 0);
Quaternion zChange = Quaternion.FromAxisAngle(10, 0, 0, 1);
Quaternion totalChange = zChange * yChange * xChange;

Where FromAxisAngle takes the parameters (float Angle, float X, float Y, float Z). Then I apparently need to perform this on the object's current Orientation, also a quaternion:

Orientation = totalChange * Orientation;

And finally derive a matrix from Orientation and pass that to glMultMatrixf().

Unfortunately, my current code doesn't really work. Trying to change the yaw or roll just makes the object shiver, and trying to change the pitch makes the object spin correctly for a moment and then start to roll and grow enormously large at the same time.

The thing is, deep down I know my code shouldn't work, because quaternion multiplications are non-commutative. I can't just willy-nilly multiply quaternions for three different orientations with one another - the order will end up influencing the result, right? What do I do when a particular axis isn't supposed to be rotating at the moment?

I've got this not-working system in place, but I've tried half a dozen other configurations that don't work either. Rotation is the user-input rotation around the X, Y and Z axes respectively. I'm not doing any translation at the moment.

void Entity::Transform(sf::Vector3<float> Translation, sf::Vector3<float> Rotation)
{
	if (Rotation.x != 0)
	{
		Quaternion Offset = Quaternion();
		Offset.FromAxisAngle(Rotation.x, 1, 0, 0);
		m_Orientation = Offset * m_Orientation;
	}
	if (Rotation.y != 0)
	{
		Quaternion Offset = Quaternion();
		Offset.FromAxisAngle(Rotation.y, 0, 1, 0);
		m_Orientation = Offset * m_Orientation;
	}
	if (Rotation.z != 0)
	{
		Quaternion Offset = Quaternion();
		Offset.FromAxisAngle(Rotation.z, 0, 0, 1);
		m_Orientation = Offset * m_Orientation;
	}
	//Derive a rotation matrix here
}

Can somebody tell me how I'm supposed to transform my player input into the appropriate quaternion transformations?

Note that I've also tried directly creating a quaternion from stored PYR variables, but this didn't work and I was told by someone reading my code that that was the wrong way to implement this sort of thing (unfortunately, his explanation of the right way to do it was extremely vague, at least to me).
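For completeness, here is the stripped-down, self-contained shape of what I *think* the tutorials are describing: build delta quaternions from this frame's input only, post-multiply them onto the current orientation (which, as I understand it, is what makes the rotation happen about the LOCAL axes), and renormalize every frame so floating-point error doesn't accumulate. All names here are simplified for the post, not my actual class:

```cpp
#include <cmath>

// Minimal quaternion for the sketch; names are illustrative only.
struct Quat {
    float w = 1, x = 0, y = 0, z = 0;

    // Build a quaternion from an angle in radians and a unit axis.
    static Quat fromAxisAngle(float angle, float ax, float ay, float az) {
        float s = std::sin(angle * 0.5f);
        return { std::cos(angle * 0.5f), ax * s, ay * s, az * s };
    }

    // Hamilton product.
    Quat operator*(const Quat& q) const {
        return { w*q.w - x*q.x - y*q.y - z*q.z,
                 w*q.x + x*q.w + y*q.z - z*q.y,
                 w*q.y - x*q.z + y*q.w + z*q.x,
                 w*q.z + x*q.y - y*q.x + z*q.w };
    }

    // Renormalize so accumulated float error doesn't scale the object.
    void normalize() {
        float len = std::sqrt(w*w + x*x + y*y + z*z);
        w /= len; x /= len; y /= len; z /= len;
    }
};

// Per frame: build small deltas from THIS frame's pitch/yaw/roll input,
// then post-multiply so they rotate about the object's local axes.
Quat applyLocalPYR(Quat orientation, float pitch, float yaw, float roll) {
    Quat dYaw   = Quat::fromAxisAngle(yaw,   0, 1, 0);
    Quat dPitch = Quat::fromAxisAngle(pitch, 1, 0, 0);
    Quat dRoll  = Quat::fromAxisAngle(roll,  0, 0, 1);
    orientation = orientation * dYaw * dPitch * dRoll;
    orientation.normalize();
    return orientation;
}
```

Note that a zero input angle just produces the identity quaternion, so an axis that isn't rotating this frame contributes nothing - which would answer my own "what do I do with an idle axis" worry, if the rest of this is right.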

Using Cellular Automata to create heightmaps

18 August 2011 - 01:58 AM

Hey there!

So I've now got the hang of C++ and SFML, and am starting to wade into OpenGL programming.

One of my current projects is to create a procedural world generation program. I'm not using Perlin Noise, and I don't want to (partly for the challenge, partly to experiment and see how things could be done in other ways, and partly because I want a level of conceptual control over the finished world that Perlin doesn't seem to offer easily). For the moment, most of the program is based on modifying cells in a Voronoi diagram in accordance with certain rules, and the results aren't all that bad:


But the code is somewhat bloated, and I want to trim the fat.

I've been hearing a lot about using cellular automata for terrain generation, but I've googled a lot and most of what I find is either highly technical material I can't make sense of, or simple examples that only differentiate between land and water, or that have similar limitations. I want to look at what others have done with cellular automata to model pseudo-naturalistic heightmaps, i.e. mountain ranges, rivers, erosion, valleys, etc. So far, I've not found much to go on.
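For concreteness, this is the level of rule I have in mind when I say "cellular automaton on a heightmap": every interior cell relaxes toward the mean of its four neighbours each step, which behaves like crude diffusion/erosion. The grid layout, function name and rate constant here are just illustrative choices, not a known-good terrain recipe:

```cpp
#include <vector>

// One automaton step over a w-by-rows heightmap stored row-major.
// Each interior cell moves toward the mean of its 4 neighbours at the
// given rate (rate = 1 replaces the cell with the neighbour mean).
// Border cells are left untouched for simplicity.
std::vector<float> caStep(const std::vector<float>& h, int w, int rows, float rate) {
    std::vector<float> out = h;
    for (int y = 1; y < rows - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            float mean = (h[(y - 1) * w + x] + h[(y + 1) * w + x] +
                          h[y * w + x - 1] + h[y * w + x + 1]) * 0.25f;
            out[y * w + x] = h[y * w + x] + rate * (mean - h[y * w + x]);
        }
    }
    return out;
}
```

From what I've read, the interesting behaviour comes from making the rule conditional - for example, only moving material to a neighbour when the slope exceeds some talus threshold, which seems to be the classic thermal-erosion automaton.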

Can anyone point me in an interesting direction?

UBC's Master of Digital Media

18 July 2011 - 04:19 AM

So I'm considering applying for the Master of Digital Media in Vancouver next year (at the University of British Columbia); my goal is game design. I've been googling it, though, and I've not found many people commenting on whether the program is good or not. I know the program is well connected with industry players, or at least appears to be, but I don't know much more than that. I found a single review that makes things sound pretty good, but one review isn't always enough to go on. Is anyone here familiar with UBC's program, or graduates who've come from there? Is anyone in a position to offer comment?

Also, I'm in the process of preparing a digital portfolio for the application. I've spoken to the recruitment officer at UBC, and she told me the best portfolio would be one that demonstrates my interest in game design as well as the ability to use C++. I've learnt C++ over the past 2-3 months, have gotten into SFML and am now settling in comfortably with the language and working on some projects that interest me (an RTS prototype which is about 1/4 finished; a planet generator that simulates tectonics, erosion, hydrography and climate; and a statistical analysis tool), but I want to make sure my portfolio is as strong and relevant as possible.

What kind of projects would be best to show off? What sort of projects would help me both develop skills and demonstrate qualities a Digital Media program might want? Applications are in June 2012, so I've got a little less than a year to get things together.