
EarthBanana

Member Since 18 Mar 2013
Offline Last Active Today, 12:55 AM

Topics I've Started

Transforming rotation

05 October 2014 - 10:30 PM

I am trying to allow my camera to have different "focus points" around the scene, so that when the user rotates the camera, it rotates around that focus point. In "focus mode", when the user clicks on an item in the scene, that item becomes the camera's rotation point.

 

To do this I simply made a transform from the position of the clicked item and a quaternion holding the camera's rotation about that item. Basically, the clicked item acts as the camera's parent.

 

The transforms are multiplied like this -

 

camQuaternion.getRotationTransposeMatrix() * camPosition.getInverseTranslationMatrix() * mParentTransform.getInverse()

 

Where the parent inverse is just

 

parentQuaternion.getRotationTransposeMatrix() * parentPosition.getInverseTranslationMatrix()
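
In GLM terms (purely for illustration - my engine uses its own math classes, and the function name here is made up), the whole composition looks roughly like this, with the quaternion/position pairs standing in for the camera and focus-object transforms:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

glm::mat4 buildViewMatrix(const glm::quat & camRot, const glm::vec3 & camPos,
                          const glm::quat & parentRot, const glm::vec3 & parentPos)
{
    // Inverse of the camera's local transform: transpose(R_cam) * T(-camPos)
    glm::mat4 camInv = glm::transpose(glm::mat4_cast(camRot)) *
                       glm::translate(glm::mat4(1.0f), -camPos);

    // Inverse of the parent's (focus object's) transform, built the same way
    glm::mat4 parentInv = glm::transpose(glm::mat4_cast(parentRot)) *
                          glm::translate(glm::mat4(1.0f), -parentPos);

    return camInv * parentInv;
}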

 

When in "focus mode", I then apply all rotations to the parent rather than to the camera.

 

This all works great, but I'm stuck on one thing.

 

How do I set the camera's local position and rotation when the focus point changes so that the camera does not move at all?

 

Or, in other words, how do I transform the camera's rotation/translation from the coordinate space of one clicked object to the coordinate space of another clicked object so that the camera does not appear to move at all to the user when the focus point changes?

 

Right now, when you click on another object, the camera jumps so that it is in the same position/rotation with respect to the new focus object as it was with the old focus object.
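
If it helps to state it in matrix form, what I think I need is to keep the camera's world transform constant across the re-parent - something like this (GLM again, purely a sketch of the relationship, and "relocalize" is just a made-up name):

#include <glm/glm.hpp>

// Keep the camera's world transform fixed when the focus (parent) changes:
//   oldParentWorld * oldLocal == newParentWorld * newLocal
// so the camera's new local transform relative to the new parent would be:
glm::mat4 relocalize(const glm::mat4 & oldParentWorld,
                     const glm::mat4 & newParentWorld,
                     const glm::mat4 & oldLocal)
{
    return glm::inverse(newParentWorld) * oldParentWorld * oldLocal;
}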


Normals question

14 August 2014 - 11:40 AM

So I have had a strange happening that I can't figure out. It's been a day and a half of trying, so I thought I would make a post and see if anyone has any insights.

 

Our team's artist has made several models in Blender - let's just say a tile, a dwarf, a robot, and a bridge. When these items are rendered, the tile has light on the correct side and the other objects do not - they are lit from the back. My first reaction was to flip the normals on the objects and see if the light then rendered on the correct side - it does. But when I draw the normals they appear to be correct, so I'm not sure what's going on. Here is an example - the first bridge shows the normals before flipping, and the second shows the normals after flipping. The light appears on the correct side after flipping.

 

(before flipping)

Attached File: bridge1.png (510.36KB)

 

(after flipping)

Attached File: bridge2.png (384.75KB)

 

So if the normals are simply incorrect, that's not a big deal. But the thing is that when I flip the normals by multiplying by negative 1, the result is not exactly correct either. I think that's because the normals need to be reflected over the model's vertical axis rather than flipped outright. If I do that, then the specular lighting is correct. That is, if the normal points from the model to the upper left like \, where the comma is the model, then the new orientation needs to be ,/ rather than '\ - if that makes any sense at all.
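
To make that concrete, the difference looks something like this (GLM just for illustration, assuming the model's vertical axis is Y - both function names are made up):

#include <glm/glm.hpp>

// What I tried first - flip the normal by negating every component
glm::vec3 flipNormal(const glm::vec3 & n)
{
    return -n;
}

// What actually seems to be needed - reflect the normal over the model's
// vertical axis, i.e. keep the vertical component and negate the horizontal ones
glm::vec3 reflectNormalOverVertical(const glm::vec3 & n)
{
    return glm::vec3(-n.x, n.y, -n.z);
}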

 

I don't understand why Blender would be exporting models with normals like this, which makes me think it has to be a mistake in my code somewhere.

 

Could it have to do with the coordinate system of the normals? I use Assimp for importing and generate normals when they aren't available, but all of these models come with normals.
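
For reference, my import is roughly this (a simplified sketch - the function name and flag set here are just illustrative):

#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>
#include <string>

const aiScene * importScene(Assimp::Importer & importer, const std::string & path)
{
    // aiProcess_GenSmoothNormals only generates normals for meshes that have
    // none - all of these models already come with their own normals
    return importer.ReadFile(path,
        aiProcess_Triangulate |
        aiProcess_CalcTangentSpace |
        aiProcess_GenSmoothNormals);
}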

 

Any insights would be helpful, thanks

 

 


Crash on glGet functions

04 August 2014 - 11:57 AM

There are a few models created by our artist where he used a program to scan in some 3D items.

 

He imported them into Blender, touched them up, and exported them to .dae format so our engine could import them.

 

The engine draws the object with glDrawElementsInstanced just fine, but will crash on any glGet type of command after the draw. This doesn't happen for all models - it seems to be only the ones with high vertex counts or the ones exported by this scanning program. I'm trying to figure out possible causes for this error - anyone have any ideas? I should also mention that I use a deferred shading system, and the crash is after the geometry pass. Below is my geometry pass shader (vertex and fragment - custom format but self-explanatory).

FRAGMENT BEGIN
gbufferdefault.fsh
#version 410

in vec2 texCoords;
in vec3 normal;
in vec3 tangent;
in vec4 worldPos;
flat in uint refID;

uniform sampler2D diffuseMap;
uniform sampler2D opacityMap;
uniform sampler2D normalMap;
uniform uint entityID;
uniform uint colorMode;
uniform vec4 fragColOut;
uniform bool hasDiffuseMap = false;
uniform bool hasNormalMap = false;
uniform bool hasOpacityMap = false;

layout (location = 0) out vec4 colorOut;
layout (location = 1) out vec3 worldPosOut;
layout (location = 2) out vec3 normalOut;
layout (location = 3) out uvec3 pickingOut;

vec3 calculateNormalMap()
{
    vec3 norm = normalize(normal);
    vec3 tang = normalize(tangent);
    tang = normalize(tang - dot(tang, norm) * norm);
    vec3 biTangent = cross(tang, norm);
	vec3 bmNormal = vec3(1.0,1.0,1.0);

	if (colorMode == 0 && hasNormalMap)
		bmNormal = texture(normalMap, texCoords).xyz;

    bmNormal = 2.0 * bmNormal - vec3(1.0, 1.0, 1.0);
    vec3 nNormal;
    mat3 transBumpMap = mat3(tang, biTangent, norm);
    nNormal = transBumpMap * bmNormal;
    nNormal = normalize(nNormal);
    return nNormal;
}

void main()
{
	vec3 difCol = texture(diffuseMap, texCoords).rgb;
	float alpha = 1.0;
	if (hasOpacityMap)
		alpha = texture(opacityMap, texCoords).a;

	if (alpha != 1.0)
		discard;

        worldPosOut = worldPos.xyz;

	if (colorMode == 0 && hasDiffuseMap)
		colorOut = vec4(difCol,alpha);
	else
		colorOut = fragColOut;

    normalOut = vec3(calculateNormalMap());
    pickingOut = uvec3(entityID, refID, 0.0);
}
FRAGMENT END

VERTEX BEGIN
gbufferdefault.vsh
#version 410

layout (location = 0) in vec3 position;
layout (location = 1) in vec2 tex;
layout (location = 2) in vec3 norm;
layout (location = 3) in vec3 tang;
layout (location = 4) in ivec4 boneIDs;
layout (location = 5) in vec4 boneWeights;
layout (location = 6) in vec4 trans1;
layout (location = 7) in vec4 trans2;
layout (location = 8) in vec4 trans3;
layout (location = 9) in vec4 trans4;
layout (location = 10) in uint referenceID;

uniform mat4 nodeTransform;
uniform mat4 projCamMat;
uniform mat4 boneTransforms[100];
uniform int hasBones;

mat4 transform;
vec4 localPos;

out vec2 texCoords;
out vec3 normal;
out vec3 tangent;
out vec4 worldPos;
flat out uint refID;

void main()
{
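	// Rebuild the per-instance model matrix from the four vec4 attributes;
	// each output column gathers one component from each input vec4, i.e. a
	// transpose of the matrix as it is laid out in the instance buffer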
	transform[0] = vec4(trans1.x, trans2.x, trans3.x, trans4.x);
	transform[1] = vec4(trans1.y, trans2.y, trans3.y, trans4.y);
	transform[2] = vec4(trans1.z, trans2.z, trans3.z, trans4.z);
	transform[3] = vec4(trans1.w, trans2.w, trans3.w, trans4.w);

	if (hasBones == 1)
	{
		mat4 boneTrans = boneTransforms[boneIDs.x] * boneWeights.x;
		boneTrans += boneTransforms[boneIDs.y] * boneWeights.y;
		boneTrans += boneTransforms[boneIDs.z] * boneWeights.z;
		boneTrans += boneTransforms[boneIDs.w] * boneWeights.w;
		localPos = boneTrans * vec4(position, 1.0);
		normal = (transform * boneTrans * vec4(norm, 0.0)).xyz;
		tangent = (transform * boneTrans * vec4(tang, 0.0)).xyz;
	}
	else
	{
		localPos = nodeTransform * vec4(position, 1.0);
		normal = (transform * nodeTransform * vec4(norm, 0.0)).xyz;
		tangent = (transform * nodeTransform * vec4(tang, 0.0)).xyz;
	}

	worldPos = transform * localPos;
	texCoords = tex;
	refID = referenceID;
        gl_Position = projCamMat * worldPos;
} 

Things I have tried:

 

Using glDrawElements instead of glDrawElementsInstanced
Disabling, in the fragment shader, all writes except to colorOut (location = 0)
Using Assimp to detect/delete any vertices that fall under GL_POINTS or GL_LINES instead of GL_TRIANGLES (there were none detected)
Making sure that all indices are within the bounds of the vertex position array
Making sure that the position, texCoord, normal, tangent, boneID, and boneWeight buffers are filled with valid data
Disabling the fragment shader completely

 

My render pass for the gbuffer is below

			while (dcIter != currentSet.end())
			{
				// Set the node transform for this draw call's sub mesh
				currentShader->setUniform("nodeTransform", dcIter->mSubMesh->mNode->mWorldTransform);

				// Set the entity ID used by the picking output in the fragment shader
				currentShader->setUniform("entityID", dcIter->mEntID);

				// If there is no final transform part then that just means there is no animation - use the nodeTransform instead
				// NOTE : I may need to update the node transform to contain all parent transforms also in the future
				if (dcIter->mAnimTransforms != NULL)
				{
					currentShader->setUniform("hasBones", int(1));
					for (NSuint boneI = 0; boneI < dcIter->mAnimTransforms->size(); ++boneI)
						currentShader->setUniform("boneTransforms[" + std::to_string(boneI) + "]", (*dcIter->mAnimTransforms)[boneI]);
				}
				else
					currentShader->setUniform("hasBones", int(0));

				// Check to make sure each buffer is allocated before setting the shader attribute : un-allocated buffers
				// are fairly common because not every mesh has tangents for example.. or normals.. or whatever
				dcIter->mSubMesh->mPosBuf.bind();
				currentShader->vertexAttribPtr(NSShader::Position, 3, GL_FLOAT, GL_FALSE, sizeof(NSVec3Df), 0);

				dcIter->mSubMesh->mTexBuf.bind();
				currentShader->vertexAttribPtr(NSShader::TexCoords, 2, GL_FLOAT, GL_FALSE, sizeof(NSVec2Df), 0);

				dcIter->mSubMesh->mNormBuf.bind();
				currentShader->vertexAttribPtr(NSShader::Normal, 3, GL_FLOAT, GL_FALSE, sizeof(NSVec3Df), 0);

				dcIter->mSubMesh->mTangBuf.bind();
				currentShader->vertexAttribPtr(NSShader::Tangent, 3, GL_FLOAT, GL_FALSE, sizeof(NSVec3Df), 0);

				dcIter->mSubMesh->mBoneBuf.bind();
				currentShader->vertexAttribIPtr(NSShader::BoneID, 4, GL_INT, sizeof(NSMesh::SubMesh::BoneWeightIDs), 0);
				currentShader->vertexAttribPtr(NSShader::BoneWeight, 4, GL_FLOAT, GL_FALSE, sizeof(NSMesh::SubMesh::BoneWeightIDs), 4*sizeof(NSuint));

				dcIter->mTransformBuffer.bind();
				currentShader->vertexAttribPtr(NSShader::InstTrans1, 4, GL_FLOAT, GL_FALSE, sizeof(NSMatrix4Df), 0);
				currentShader->vertexAttribDiv(NSShader::InstTrans1, 1);

				currentShader->vertexAttribPtr(NSShader::InstTrans2, 4, GL_FLOAT, GL_FALSE, sizeof(NSMatrix4Df), sizeof(NSVec4Df));
				currentShader->vertexAttribDiv(NSShader::InstTrans2, 1);

				currentShader->vertexAttribPtr(NSShader::InstTrans3, 4, GL_FLOAT, GL_FALSE, sizeof(NSMatrix4Df), sizeof(NSVec4Df) * 2);
				currentShader->vertexAttribDiv(NSShader::InstTrans3, 1);

				currentShader->vertexAttribPtr(NSShader::InstTrans4, 4, GL_FLOAT, GL_FALSE, sizeof(NSMatrix4Df), sizeof(NSVec4Df) * 3);
				currentShader->vertexAttribDiv(NSShader::InstTrans4, 1);

				dcIter->mTransformIDBuffer.bind();
				currentShader->vertexAttribIPtr(NSShader::RefID, 1, GL_UNSIGNED_INT, sizeof(NSuint), 0);
				currentShader->vertexAttribDiv(NSShader::RefID, 1);

				// If the indice buffer has not been allocated then return without doing anything.. that means there is something wrong
				if (!dcIter->mSubMesh->mIndiceBuf.isAllocated())
				{
#ifdef NSDEBUG
					assert(mDebug != NULL);
					mDebug->print("NSRenderSystem::_drawGeometry: Cannot render geometry - Indice buffer not allocated");
#endif
					return;
				}

				dcIter->mSubMesh->mIndiceBuf.bind();

				if (!dcIter->mSubMesh->mHasTexCoords)
				{
					currentShader->setUniform("colorMode", NSuint(1));
					NSVec4Df col = (*matIter)->getColor();
					currentShader->setUniform("fragColOut", col);
				}

				glDrawElementsInstanced(GL_TRIANGLES, dcIter->mSubMesh->mIndices.size(), GL_UNSIGNED_INT, 0, dcIter->mNumTransforms);
				//NSuint ret = glGetError();
				++dcIter;
			}

The renderer has a list of draw calls for each material and a list of materials for each shader. The loop above is the rendering procedure for the draw calls of a given material - the textures for that material have already been enabled.

 

I'm not looking for any definitive answers here - just tips on more things to try from people who have had similar issues before. Any suggestions are welcome - I really need to get this one figured out.
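
One more thing I'm planning to try is hooking up OpenGL debug output so the driver can report what it's unhappy about at the offending call instead of crashing later in a glGet - roughly like this (sketch only; it needs a debug context with KHR_debug or GL 4.3+, which may or may not be available on my 4.1 context, and I'm assuming the GL headers/loader are already included):

#include <cstdio>

static void GLAPIENTRY glDebugCB(GLenum source, GLenum type, GLuint id,
                                 GLenum severity, GLsizei length,
                                 const GLchar * message, const void * userParam)
{
    fprintf(stderr, "GL debug: %s\n", message);
}

void enableGLDebugOutput()
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);   // report at the call that caused the error
    glDebugMessageCallback(glDebugCB, NULL);
}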

 

Thanks


Saving Question

09 July 2014 - 11:49 PM

I have a component system - users of my engine can create their own components by subclassing the Component class - and each entity has a list of components.

 

When serializing/deserializing components, the problem is that during deserialization the engine loading code needs to allocate sub-classed component objects without actually knowing the sub-class type (it is user defined). What is a good way to do this so that the engine user does not have to rewrite any engine code?

 

I.e., in the engine I have a Component class with the pure virtual functions serialize(FileStream & fStream) and deserialize(FileStream & fStream),

so the engine user has to write their own versions of these functions. This works great for serializing to a file - but how do I deserialize?

 

If the engine code is something like

 

serialize:
    for each component in entity
        currentComp->serialize(fStream)

deserialize:
    for each component found in save file
        Component * toAdd = new Component()   // how do I allocate custom user-defined components here?
        entity->addComponent(toAdd)
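
The kind of thing I've been picturing is a factory/registry where each user-defined component type registers a creation function under a type name, and that type name also gets written to the save file. A rough sketch of what I mean (all of the names here are made up, not actual engine code):

#include <functional>
#include <map>
#include <string>

class Component;

// Hypothetical registry mapping a component type name (as stored in the save
// file) to a function that allocates that component type
class ComponentFactory
{
public:
    typedef std::function<Component * ()> Creator;

    static void registerType(const std::string & typeName, Creator creator)
    {
        creators()[typeName] = creator;
    }

    static Component * create(const std::string & typeName)
    {
        std::map<std::string, Creator>::iterator it = creators().find(typeName);
        return (it != creators().end()) ? it->second() : NULL;
    }

private:
    static std::map<std::string, Creator> & creators()
    {
        static std::map<std::string, Creator> m;
        return m;
    }
};

// The engine user would register each custom component once, e.g. at startup:
//   ComponentFactory::registerType("HealthComponent", [] { return new HealthComponent(); });
//
// and deserialization would then look roughly like:
//   std::string typeName = readTypeName(fStream);   // written out by serialize
//   Component * toAdd = ComponentFactory::create(typeName);
//   toAdd->deserialize(fStream);
//   entity->addComponent(toAdd);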

 

Thanks for any help


Error handling

19 May 2014 - 07:39 PM

I've been working on a few different projects, and in all of them I have handled errors in different ways. I am always unsure of myself when it comes to this - I never really know when I should let stuff fail, when I should print to log files, when I should return something indicating an error, when I should use error states, etc.

 

I know it's pretty common to let stuff that should break your code break it - so that there is a fixable crash - but I often find that when I do this I am allowing things that shouldn't break my code to break it... like an assert(pointer != NULL) resulting in a crash when some allowable condition is creating a NULL pointer - just some condition I didn't think about when I originally wrote the code.
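
For example (completely made-up code, just to show the kind of situation I mean):

#include <cassert>
#include <string>

class Entity;

// A lookup that can legitimately fail - a NULL result is an allowable
// condition here, not a programming error
Entity * findEntity(const std::string & name);

void focusOn(const std::string & name)
{
    Entity * ent = findEntity(name);
    assert(ent != NULL);   // crashes on a condition that was actually allowable
    // ... use ent ...
}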

 

Anyway - do you have any methodology for this, anything you have found that works best for you for error handling in general? Any important remarks on logging? At what level do you do the error checking, and at what level do you handle it? Does anyone ever use standard exceptions?

