Jossos

OpenGL spazzing out for no reason


So I'm making a C++ and OpenGL app with bone transformations, and it is spazzing out for no reason. And I really mean no reason. I spent weeks on this app without any problems; the character walked and everything looked nice. Today I added shadows to the shader, no problem. Later on it started going crazy: the matrices looked totally distorted and the screen filled with random triangles flying everywhere. Seizure-inducing.

 

Luckily I had backed up this program about 8 times. Every single previous version is now doing the exact same thing. No other project is acting this way.

 

What the hell is going on? I can't find any problem in the code. This happened right after an NVIDIA driver update, so I'm not sure how to go about fixing this...

 

EDIT: If I freeze the animation (as in, only ever render the same animation time), it still spazzes out. What is going on?? It doesn't happen when I take the bone transformations out of the shader, though.

Edited by Jossos

Could be a lot of things. Have you tried a clean reboot of the system, just to get that out of the way?

Alternatively, it sounds like you are walking into undefined behavior (either on the C++ side or the way you are treating OpenGL), but that is really hard to say without any code.
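If the problem is on the OpenGL side, one cheap way to start narrowing it down is to drain the GL error queue around the suspect calls. A minimal sketch (the helper below is just an illustration, not code from the project), assuming GLEW is already initialized:

#include <GL/glew.h>
#include <cstdio>

// Drains the OpenGL error queue and reports where the check was made.
// Note: glGetError only reports invalid API usage; it will not catch every
// kind of undefined behavior, but it is a cheap first pass.
static void CheckGLError(const char* where)
{
	for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
		std::fprintf(stderr, "GL error 0x%04X at %s\n", err, where);
}

// Usage: sprinkle after suspect calls, e.g.
//   glUniformMatrix4fv(location, 1, GL_TRUE, matrix);
//   CheckGLError("upload bone matrix");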


The thing is, this exact code worked perfectly earlier. I even went through my backups, and they all have the same problem, despite all of them working perfectly fine when I made the copies.

 

Yes, I rebooted.

 

This is what I don't understand:

 

It's the bone transformations causing the error. If I take them out, it all works fine.

 

If I set the bone transforms only once at the beginning and never touch them again, it still spazzes. If they are unchanged, the entire scene should just be the exact same image every frame, but for some reason the polys still go haywire. It makes absolutely no sense.


I can only agree with BitMaster. You are probably seeing undefined behavior in either OpenGL or C++. You are lucky, though, since you can clearly see the error, with a 100% reproduction rate no less.

 

Perhaps start by trying the project out on a different computer.


I don't have another computer available to me right now.

 

Another thing I did was test another app with bone transformations, and it worked fine.

 

The difference between the two apps is that the one with the problems is initialized with SDL, while the working app uses GLUT.

void GRAPHICSMANAGER::InitializeGraphics(int width, int height, const std::string& title, bool fullscreen)
{
	SDL_Init(SDL_INIT_EVERYTHING);

	SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
	SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
	SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
	SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
	SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 32);
	SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);
	SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

	m_fullscreen = fullscreen;

	if (m_fullscreen)
	{
		m_screenWidth	= GetSystemMetrics(SM_CXSCREEN);
		m_screenHeight	= GetSystemMetrics(SM_CYSCREEN);

		m_window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, m_screenWidth, m_screenHeight, SDL_WINDOW_FULLSCREEN | SDL_WINDOW_OPENGL);
	}
	else
	{
		m_screenWidth = width;
		m_screenHeight = height;

		m_window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, m_screenWidth, m_screenHeight, SDL_WINDOW_OPENGL);
	}
	
	m_glContext = SDL_GL_CreateContext(m_window);

	GLenum status = glewInit();

	if (status != GLEW_OK)
		std::cerr << "Glew failed to initialize!\n";

	glEnable(GL_DEPTH_TEST);

	//glEnable(GL_BLEND);
	//glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

	glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
}

Maybe it's this? I don't know.
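For what it's worth, one thing the snippet above never does is pin down which OpenGL version and profile the context should have, so the driver is free to hand back whatever its default is, and that default can change with a driver update. A minimal sketch of pinning it explicitly, assuming SDL2 and GLEW (the 3.3 core profile here is only an example, and whether this relates to the actual problem is a guess):

	// Request a specific context version/profile *before* SDL_GL_CreateContext
	SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
	SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
	SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

	m_glContext = SDL_GL_CreateContext(m_window);
	if (!m_glContext)
		std::cerr << "SDL_GL_CreateContext failed: " << SDL_GetError() << "\n";

	glewExperimental = GL_TRUE;	// older GLEW versions need this for core profiles
	GLenum status = glewInit();
	if (status != GLEW_OK)
		std::cerr << "Glew failed to initialize: " << glewGetErrorString(status) << "\n";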


I'm adding "Spazzing Out" to my list of JIRA bug types!  

 

If all your previous working versions are broken too, then the update changed something. But the bug could have been there all along, and the new driver's speed-ups simply exposed the problem. I would start with a new version and slowly add the code back in, piece by piece, testing at each step until you find the problem.

 

With things like this, 99% of the time, even when you know it isn't your code, when it is impossible that your code is wrong and it must be the driver or the update... it's your code.

Edited by Glass_Knife


Texture ID = 3452816845.

 

I'm starting to think more and more that the driver broke SDL applications that dare to use bone transforms.

 

EDIT: OK, to add to the confusion: when I run in debug mode, the application looks different than when I run it from the .exe.

 

In debug mode, the model is normal and walking, but the polys are spazzing out.

Run from the .exe, the model is completely distorted, the bones are way off, and the polys are spazzing out as well. Also, no texture?????

 

What on earth is actually going on?

Edited by Jossos

It's highly unlikely that something is actively trying to sabotage you. Again, every sign points to undefined behavior on your part, especially since it happened "right after nvidia update" and now also happens with your old code. Something in the driver probably changed, and a previously unnoticed error you made now consistently produces problems; that's a very common result of invoking undefined behavior somewhere.

While it's not unheard of for graphics drivers to contain the occasional bug, you would still need to do some more investigation instead of saying "it worked before, so it must work now".

Edit: The fact that Release and Debug mode show such differences further suggests undefined behavior, probably because uninitialized memory (which looks very different in Debug and Release builds) is used somewhere.

Edited by BitMaster
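To illustrate the point with a made-up minimal example (not code from the project): the Debug runtime typically fills uninitialized storage with a recognizable pattern, while Release leaves whatever bytes happen to be there, so the exact same mistake can look completely different in the two builds.

#include <cstdio>

struct Mat4 { float m[16]; };	// hypothetical stand-in for a bone matrix

int main()
{
	Mat4 bone;	// BUG: never initialized

	// In a Debug build (MSVC) the bytes are usually a fill pattern such as 0xCC/0xCD;
	// in a Release build they are whatever was left on the stack or heap.
	// Reading them is undefined behavior either way -- the symptoms just differ.
	std::printf("%f\n", bone.m[0]);
	return 0;
}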


Well, I'm happy to report that I was right all along.

 

After downloading and installing the driver, it turns out I needed to go into my system and tell the GPU to update its driver (again?). I don't understand this driver business, but all I know is that it works now.

 

I appreciate all your help. I actually found and tweaked a few little things in my code because you guys made me look.

 

:)


Hello everyone. I'm appropriately bumping this thread because, once again, OpenGL is spazzing out just as it did before. Only this time, if I manually go into the system and tell it to update the driver, it says I have the latest version, so I can't do what I did last time. Please help, as I can't do anything productive while this is happening.


[screenshot]

 

I'll also add that, as I said above, this is occurring in projects that worked perfectly fine before but are now all spazzing out in the same way, without any new changes.

Edited by Jossos

If OpenGL projects other than your own are also now "spazzing out", then it is likely a hardware issue. If it is only your own, then it is highly likely that, as the other posters have been saying, you are in undefined behavior territory. In that case you are doing something wrong, but there is no set way for the driver to deal with it. It might spaz out. It might silently ignore it and seemingly function perfectly. It might nuke your hard drive. It might end the universe. That is what undefined behavior means.


If you are seeing displaced vertices and corruption over time, then you may have a heat or voltage issue. I have had it many times before, the last time because my power supply could not deliver enough watts to the GPU (and the machine overall).

 

Here are some diagnosable examples from a random Google search:

http://www.playtool.com/pages/artifacts/artifacts.html

 

I had what happens in the second image, slow corruption over time. It was fixed by upgrading to a better PSU; it was not bad GPU memory.

Edited by Kaptein


This smells a lot like uninitialized memory or undefined behaviour to me.

 

Can you post code for where you load your matrices, your vertex data and how you submit draw calls (glDrawArrays, glDrawElements, etc)?


Get ready for code overload:

 

 

LOADING

bool MESH::Load(const char* filename)
{
	Clear();

	m_hasTexture = false;
	m_numBones = 0;

	std::vector<VECTOR3f> positions;
	std::vector<VECTOR3f> normals;
	std::vector<VECTOR2f> texCoords;
	std::vector<VertexBoneData> bones;
	std::vector<uint> indices;

	uint numVertices = 0;
	uint numIndices = 0;

	// Use Assimp to get the data from the file. The importer owns the scene,
	// so we use the member importer (m_Importer) rather than a local one that
	// would destroy the scene when it goes out of scope.
	m_pScene = m_Importer.ReadFile(filename, aiProcess_Triangulate | aiProcess_GenSmoothNormals | aiProcess_FlipUVs);

	if (!m_pScene)
	{
		printf("Error parsing '%s': '%s'\n", filename, m_Importer.GetErrorString());
		return false;
	}

	m_globalInverseTransform = m_pScene->mRootNode->mTransformation;
	m_globalInverseTransform.Inverse();

	// Make all vectors the appropriate size for the data
	m_objects.resize(m_pScene->mNumMeshes);

	for (uint i = 0; i < m_objects.size(); i++)
	{
		m_objects[i].numVertices	= m_pScene->mMeshes[i]->mNumVertices;
		m_objects[i].numIndices		= m_pScene->mMeshes[i]->mNumFaces * 3;
		m_objects[i].startVertex	= numVertices;
		m_objects[i].startIndex		= numIndices;

		numVertices += m_objects[i].numVertices;
		numIndices	+= m_objects[i].numIndices;
	}

	positions.reserve(numVertices);
	normals.reserve(numVertices);
	texCoords.reserve(numVertices);
	bones.resize(numVertices);
	indices.reserve(numIndices);

	// Initialize the meshes

	for (uint i = 0; i < m_objects.size(); i++)
	{
		const aiMesh* paiMesh = m_pScene->mMeshes[i];
		const aiVector3D Zero3D(0.0f, 0.0f, 0.0f);

		// Populate Vertex Attributes
		for (uint j = 0; j < paiMesh->mNumVertices; j++)
		{
			const aiVector3D* aiTexCoord	= paiMesh->HasTextureCoords(0) ? &(paiMesh->mTextureCoords[0][j]) : &Zero3D;

			positions.push_back(VECTOR3f(paiMesh->mVertices[j].x, paiMesh->mVertices[j].y, paiMesh->mVertices[j].z));
			normals.push_back(VECTOR3f(paiMesh->mNormals[j].x, paiMesh->mNormals[j].y, paiMesh->mNormals[j].z));
			texCoords.push_back(VECTOR2f(aiTexCoord->x, aiTexCoord->y));
		}

		// Populate Indices
		for (uint j = 0; j < paiMesh->mNumFaces; j++)
		{
			const aiFace& face = paiMesh->mFaces[j];
			assert(face.mNumIndices == 3);

			indices.push_back(face.mIndices[0]);
			indices.push_back(face.mIndices[1]);
			indices.push_back(face.mIndices[2]);
		}

		// Load Bones for current Mesh
		for (uint j = 0; j < paiMesh->mNumBones; j++)
		{
			uint boneIndex = 0;
			std::string boneName(paiMesh->mBones[j]->mName.data);

			if (m_boneMapping.find(boneName) == m_boneMapping.end())
			{
				boneIndex = m_numBones;
				m_numBones++;
				m_boneInfo.push_back(BoneInfo());
				m_boneInfo[boneIndex].BoneOffset = paiMesh->mBones[j]->mOffsetMatrix;
				m_boneMapping[boneName] = boneIndex;
			}
			else
				boneIndex = m_boneMapping[boneName];

			m_boneMapping[boneName] = boneIndex;
			m_boneInfo[boneIndex].BoneOffset = paiMesh->mBones[j]->mOffsetMatrix;

			for (uint k = 0; k < paiMesh->mBones[j]->mNumWeights; k++)
			{
				uint VertexID = m_objects[i].startVertex + paiMesh->mBones[j]->mWeights[k].mVertexId;
				float weight = paiMesh->mBones[j]->mWeights[k].mWeight;
				bones[VertexID].AddBoneData(boneIndex, weight);
			}
		}
	}

	// Load the material (only using one for now)
	// We assume the texture file is in the same directory as the model file
	const aiMaterial* pMaterial = m_pScene->mMaterials[0];
	aiString path;
	if (pMaterial->GetTexture(aiTextureType_DIFFUSE, 0, &path, NULL, NULL, NULL, NULL, NULL) == AI_SUCCESS)
	{
		std::string::size_type startIndex = std::string(path.C_Str()).find_last_of("\\");
		if (startIndex == std::string::npos)
			startIndex = std::string(path.C_Str()).find_last_of("/");

		std::string texFilename;

		if (startIndex != std::string::npos)
		{
			texFilename = std::string(path.C_Str()).substr(startIndex + 1);
		}
		else
			texFilename = path.C_Str();

		// get texture path, and make it lower case		
		std::string texPath = std::string(filename).substr(0, std::string(filename).find_last_of("/")) + "/";
		
		for (uint i = 0; i < texFilename.length(); i++)
			texPath += texFilename[i];

		// Load Texture
		if (!m_texture.Load(texPath.c_str()))
		{
			printf("Failed to load texture: \"%s\"", texPath.c_str());
		}
		else
			m_hasTexture = true;
	}

	// Find the first child of the root node that has an animation channel,
	// staying inside the bounds of mChildren
	const aiNodeAnim* pNodeAnim = NULL;

	for (uint i = 0; !pNodeAnim && i < m_pScene->mRootNode->mNumChildren; i++)
		pNodeAnim = FindNodeAnim(m_pScene->mAnimations[0], m_pScene->mRootNode->mChildren[i]->mName.data);

	if (pNodeAnim)
	{
		m_animInfo.numKeyframes = pNodeAnim->mNumPositionKeys;
		m_animInfo.frameDuration = m_pScene->mAnimations[0]->mDuration / (pNodeAnim->mNumPositionKeys);
	}

	// Skip the last keyframe, as it is the same as the first
	m_animInfo.MaxAnimTime = m_animInfo.frameDuration * (m_animInfo.numKeyframes - 1);

	// Initialize buffers with the data
	InitBuffers(positions, normals, texCoords, indices, bones);

	return true;
}
bool MESH::InitBuffers(const std::vector<VECTOR3f>& positions,
					   const std::vector<VECTOR3f>& normals,
					   const std::vector<VECTOR2f>& texCoords,
					   const std::vector<uint>& indices,
					   const std::vector<VertexBoneData>& bones)
{
	// Create the VAO
	glGenVertexArrays(1, &m_VAO);
	glBindVertexArray(m_VAO);

	// Create the buffers for the vertices attributes
	glGenBuffers(ArraySizeInElements(m_buffers), m_buffers);



	// Generate and populate the buffers with vertex attributes and the indices
	glBindBuffer(GL_ARRAY_BUFFER, m_buffers[POS_VB]);
	glBufferData(GL_ARRAY_BUFFER, sizeof(positions[0]) * positions.size(), &positions[0], GL_STATIC_DRAW);
	glEnableVertexAttribArray(POSITION_LOCATION);
	glVertexAttribPointer(POSITION_LOCATION, 3, GL_FLOAT, GL_FALSE, 0, 0);

	glBindBuffer(GL_ARRAY_BUFFER, m_buffers[TEXCOORD_VB]);
	glBufferData(GL_ARRAY_BUFFER, sizeof(texCoords[0]) * texCoords.size(), &texCoords[0], GL_STATIC_DRAW);
	glEnableVertexAttribArray(TEX_COORD_LOCATION);
	glVertexAttribPointer(TEX_COORD_LOCATION, 2, GL_FLOAT, GL_FALSE, 0, 0);

	glBindBuffer(GL_ARRAY_BUFFER, m_buffers[NORMAL_VB]);
	glBufferData(GL_ARRAY_BUFFER, sizeof(normals[0]) * normals.size(), &normals[0], GL_STATIC_DRAW);
	glEnableVertexAttribArray(NORMAL_LOCATION);
	glVertexAttribPointer(NORMAL_LOCATION, 3, GL_FLOAT, GL_FALSE, 0, 0);

	if (m_numBones > 0)
	{
		glBindBuffer(GL_ARRAY_BUFFER, m_buffers[BONE_VB]);
		glBufferData(GL_ARRAY_BUFFER, sizeof(bones[0]) * bones.size(), &bones[0], GL_STATIC_DRAW);
		glEnableVertexAttribArray(BONE_ID_LOCATION);
		glVertexAttribIPointer(BONE_ID_LOCATION, 4, GL_INT, sizeof(VertexBoneData), (const GLvoid*)0);
		glEnableVertexAttribArray(BONE_WEIGHT_LOCATION);
		glVertexAttribPointer(BONE_WEIGHT_LOCATION, 4, GL_FLOAT, GL_FALSE, sizeof(VertexBoneData), (const GLvoid*)16);
	}

	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_buffers[INDEX_BUFFER]);
	glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices[0]) * indices.size(), &indices[0], GL_STATIC_DRAW);
	


	glBindVertexArray(0);

	return true;
}
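// Note (illustrative annotation, not part of the original file): the bone attribute
// pointers above assume a specific VertexBoneData layout -- 4 integer IDs at offset 0
// followed by 4 float weights at offset 16, i.e. roughly:
//
//     struct VertexBoneData
//     {
//         uint  IDs[4];       // bytes 0..15
//         float Weights[4];   // starts at byte 16
//     };
//
// If the real struct differs (extra members, padding, different counts), the
// hard-coded (const GLvoid*)16 offset would read the wrong bytes.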
void MESH::Clear()
{
	if (m_buffers[0] != 0)
		glDeleteBuffers(ArraySizeInElements(m_buffers), m_buffers);

	if (m_VAO != 0)
	{
		glDeleteVertexArrays(1, &m_VAO);
		m_VAO = 0;
	}

	m_objects.clear();	// clear() actually removes the elements (empty() only tests for emptiness)
}

DRAWING

void GAME::RenderModels()
{
	// Draw Geometry / Scene Objects
	m_pipeline.MatrixMode(MODEL_MATRIX);
	m_pipeline.PushMatrix();

	stdShader->Bind();

		m_pipeline.CalcMatrices(stdShader->GetProgramID());
		plane.Render();
		
	stdShader->Unbind();
	skinShader->Bind();

		m_pipeline.RotateY(angle);
		m_pipeline.Scale(0.10f);
		//m_pipeline.Translate(0.0f, 0.0f, -6.0f);
		//m_pipeline.Translate(0.0f, 1.0f, 0.0f);

		std::vector<MATRIX4f> transforms;

		float time = m_timer.m_secsPassedSinceStart / ninja.GetAnimTime() * 0.5f;

		ninja.BoneTransform(time, transforms);

		for (uint i = 0; i < transforms.size(); i++)
			glUniformMatrix4fv(m_boneLocations[i], 1, GL_TRUE, (const GLfloat*)transforms[i].m_matrix);

		m_pipeline.Translate(0.0f, 0.0f, 1.0f);
		m_pipeline.CalcMatrices(skinShader->GetProgramID());
		ninja.Render();

	skinShader->Unbind();

	m_pipeline.PopMatrix();
}
void MESH::Render()
{
	glBindVertexArray(m_VAO);

	if (m_hasTexture)
		m_texture.Bind(GL_TEXTURE0);

	for (uint i = 0; i < m_objects.size(); i++)
	{
		glDrawElementsBaseVertex(GL_TRIANGLES,
								 m_objects[i].numIndices,
								 GL_UNSIGNED_INT,
								 (void*)(sizeof(uint) * m_objects[i].startIndex),
								 m_objects[i].startVertex);
	}

	if (m_hasTexture)
		m_texture.Unbind(GL_TEXTURE0);

	glBindVertexArray(0);
}
// This function puts the bones where they're supposed to be for the time (in seconds) passed in. At the moment it's used as a loop.
void MESH::BoneTransform(float& TimeInSeconds, std::vector<MATRIX4f>& transforms)
{
	MATRIX4f identity;
	identity.LoadIdentity();

	float ticksPerSecond = (float)(m_pScene->mAnimations[0]->mTicksPerSecond != 0 ? m_pScene->mAnimations[0]->mTicksPerSecond : 25.0f);
	float TimeInTicks = TimeInSeconds * ticksPerSecond;
    //float AnimationTime = fmod(TimeInTicks, (float)m_pScene->mAnimations[0]->mDuration);
	float AnimationTime = fmod(TimeInTicks, m_animInfo.MaxAnimTime);
	

	ReadNodeHeirarchy(AnimationTime, m_pScene->mRootNode, identity);

	transforms.resize(m_numBones);

	for (uint i = 0; i < m_numBones; i++)
		transforms[i] = m_boneInfo[i].FinalTransformation;
}
void MESH::ReadNodeHeirarchy(float AnimationTime, const aiNode* pNode, const MATRIX4f& ParentTransform)
{
	std::string nodeName(pNode->mName.data);

	const aiAnimation* pAnimation = m_pScene->mAnimations[0];

	MATRIX4f NodeTransformation(pNode->mTransformation);

	const aiNodeAnim* pNodeAnim = FindNodeAnim(pAnimation, nodeName);

	if (pNodeAnim)
	{
		// Interpolate scaling and generate scaling transformation matrix
        aiVector3D Scaling;
        CalcInterpolatedScaling(Scaling, AnimationTime, pNodeAnim);
        MATRIX4f ScalingM;
        ScalingM.InitScale(Scaling.x, Scaling.y, Scaling.z);
        
        // Interpolate rotation and generate rotation transformation matrix
        aiQuaternion RotationQ;
        CalcInterpolatedRotation(RotationQ, AnimationTime, pNodeAnim);        
        MATRIX4f RotationM = MATRIX4f(RotationQ.GetMatrix());

        // Interpolate translation and generate translation transformation matrix
        aiVector3D Translation;
        CalcInterpolatedPosition(Translation, AnimationTime, pNodeAnim);
        MATRIX4f TranslationM;
        TranslationM.InitTranslate(Translation.x, Translation.y, Translation.z);
        
        // Combine the above transformations
        NodeTransformation = TranslationM * RotationM * ScalingM;
	}

	MATRIX4f GlobalTransformation = ParentTransform * NodeTransformation;

	if (m_boneMapping.find(nodeName) != m_boneMapping.end())
	{
		uint boneIndex = m_boneMapping[nodeName];
		m_boneInfo[boneIndex].FinalTransformation = m_globalInverseTransform * GlobalTransformation * m_boneInfo[boneIndex].BoneOffset;
	}

	for (uint i = 0; i < pNode->mNumChildren; i++)
		ReadNodeHeirarchy(AnimationTime, pNode->mChildren[i], GlobalTransformation);
}

Now, this is where I suspect the problem may lie; however, I don't really understand it:

void MESH::CalcInterpolatedPosition(aiVector3D& Out, float AnimationTime, const aiNodeAnim* pNodeAnim)
{
    if (pNodeAnim->mNumPositionKeys == 1) {
        Out = pNodeAnim->mPositionKeys[0].mValue;
        return;
    }
            
    uint PositionIndex = FindPosition(AnimationTime, pNodeAnim);
    uint NextPositionIndex = (PositionIndex + 1);
    assert(NextPositionIndex < pNodeAnim->mNumPositionKeys);
    float DeltaTime = (float)(pNodeAnim->mPositionKeys[NextPositionIndex].mTime - pNodeAnim->mPositionKeys[PositionIndex].mTime);
    float Factor = (AnimationTime - (float)pNodeAnim->mPositionKeys[PositionIndex].mTime) / DeltaTime;
	
	//assert(Factor >= 0.0f && Factor <= 1.0f);
    const aiVector3D& Start = pNodeAnim->mPositionKeys[PositionIndex].mValue;
    const aiVector3D& End = pNodeAnim->mPositionKeys[NextPositionIndex].mValue;
    aiVector3D Delta = End - Start;
    Out = Start + Factor * Delta;
}
void MESH::CalcInterpolatedRotation(aiQuaternion& Out, float AnimationTime, const aiNodeAnim* pNodeAnim)
{
	// we need at least two values to interpolate...
    if (pNodeAnim->mNumRotationKeys == 1) {
        Out = pNodeAnim->mRotationKeys[0].mValue;
        return;
    }
    
    uint RotationIndex = FindRotation(AnimationTime, pNodeAnim);
    uint NextRotationIndex = (RotationIndex + 1);
    assert(NextRotationIndex < pNodeAnim->mNumRotationKeys);
    float DeltaTime = (float)(pNodeAnim->mRotationKeys[NextRotationIndex].mTime - pNodeAnim->mRotationKeys[RotationIndex].mTime);
    float Factor = (AnimationTime - (float)pNodeAnim->mRotationKeys[RotationIndex].mTime) / DeltaTime;
    
	//assert(Factor >= 0.0f && Factor <= 1.0f);
    const aiQuaternion& StartRotationQ = pNodeAnim->mRotationKeys[RotationIndex].mValue;
    const aiQuaternion& EndRotationQ   = pNodeAnim->mRotationKeys[NextRotationIndex].mValue;    
    aiQuaternion::Interpolate(Out, StartRotationQ, EndRotationQ, Factor);
    Out = Out.Normalize();
}
void MESH::CalcInterpolatedScaling(aiVector3D& Out, float AnimationTime, const aiNodeAnim* pNodeAnim)
{
    if (pNodeAnim->mNumScalingKeys == 1) {
        Out = pNodeAnim->mScalingKeys[0].mValue;
        return;
    }

    uint ScalingIndex = FindScaling(AnimationTime, pNodeAnim);
    uint NextScalingIndex = (ScalingIndex + 1);
    assert(NextScalingIndex < pNodeAnim->mNumScalingKeys);
    float DeltaTime = (float)(pNodeAnim->mScalingKeys[NextScalingIndex].mTime - pNodeAnim->mScalingKeys[ScalingIndex].mTime);
    float Factor = (AnimationTime - (float)pNodeAnim->mScalingKeys[ScalingIndex].mTime) / DeltaTime;

	//assert(Factor >= 0.0f && Factor <= 1.0f);
    const aiVector3D& Start = pNodeAnim->mScalingKeys[ScalingIndex].mValue;
    const aiVector3D& End   = pNodeAnim->mScalingKeys[NextScalingIndex].mValue;
    aiVector3D Delta = End - Start;
    Out = Start + Factor * Delta;
}

You will notice I commented out the assertion because I do actually get values outside 0 and 1. I must confess I don't really understand this function.

 

Any help is greatly appreciated!


"You will notice I commented out the assertion because I do actually get values outside 0 and 1."

So what do you think will happen to Factor at the end of the animation, or if two keyframes are on the same timestamp, etc.?


L. Spiro
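In other words: if AnimationTime lands on or past the last keyframe, or two keys share a timestamp, DeltaTime can become zero (or negative) and Factor can fall outside [0, 1], which scales the interpolated transforms into garbage. A minimal sketch of a guarded factor computation (the helper name is just an illustration, and whether clamping is the right behavior for a given animation is a separate design decision):

#include <algorithm>	// std::max / std::min

// Computes a safe interpolation factor between two keyframe times.
static float SafeFactor(float animTime, float keyTime, float nextKeyTime)
{
	float delta = nextKeyTime - keyTime;
	if (delta <= 0.0f)		// duplicate or out-of-order keys: avoid dividing by zero
		return 0.0f;

	float factor = (animTime - keyTime) / delta;
	return std::min(1.0f, std::max(0.0f, factor));	// clamp to [0, 1]
}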


Sorry everyone, I'm an idiot.

 

My shader:

void main()
{
	mat4 BoneTransform;
	BoneTransform += gBones[BoneIDs[0]] * Weights[0];
	BoneTransform += gBones[BoneIDs[1]] * Weights[1];
	BoneTransform += gBones[BoneIDs[2]] * Weights[2];
	BoneTransform += gBones[BoneIDs[3]] * Weights[3];

BoneTransform is never initialized, so it just ends up full of garbage. Changed to:

void main()
{
	mat4 BoneTransform = gBones[BoneIDs[0]] * Weights[0];
	BoneTransform += gBones[BoneIDs[1]] * Weights[1];
	BoneTransform += gBones[BoneIDs[2]] * Weights[2];
	BoneTransform += gBones[BoneIDs[3]] * Weights[3];

All better

Edited by Jossos


The moral of the story is: anytime you think something's happening for no reason ... there's a reason.

 

The second moral of the story is: anytime you think the error is in a driver, tool, or OS component ... it's in your code.

 

Of course there are exceptions (we've all had fun with buggy OpenGL drivers) but these are good guidelines and should always be the first line of attack when dealing with errors or weird behaviours.
