

Member Since 06 Sep 2011
Last Active Oct 29 2014 09:56 AM

Topics I've Started

[GLSL] NVIDIA vs ATI shader problem

28 March 2014 - 01:33 PM



I am trying to get my engine running on ATI/AMD (previously I only ran it on NVIDIA).


My shader uses diffuse lighting to render a sphere and a plane (both with the same material).

This works totally fine on NVIDIA graphics cards.

On ATI it compiles, but shows really weird results when executed.

The sphere appears to have some issue with fetching the texture, and the plane does not appear at all.

I also tried compiling it with AMD's Shader Analyzer and it compiled without errors or warnings.


Here is the vertex shader:

#version 130 // in/out qualifiers require at least GLSL 1.30 (the fragment shader already declares it)

uniform mat4 gWVP;
uniform mat4 gNormalMatrix;
uniform mat4 gModelViewMatrix;

in vec3 inPosition;
in vec3 inNormal;
in vec2 inTexcoord;
in vec3 inTangent;

out vec2 vTexcoord;

//all of these are in eyespace
out vec3 vPosition;
out vec3 vNormal;
out vec3 vTangent;

void main()
{
	gl_Position = gWVP * vec4(inPosition, 1.0);

	vNormal = normalize((gNormalMatrix * vec4(inNormal, 0.0)).xyz);
	vTangent = normalize((gNormalMatrix * vec4(inTangent, 0.0)).xyz);

	vPosition = (gModelViewMatrix * vec4(inPosition, 1.0)).xyz;

	vTexcoord = inTexcoord;
}

And the fragment shader:

#version 130

const int LIGHT_TYPE_NONE = 0;
const int LIGHT_TYPE_DIRECTIONAL = 1; // used below; value inferred from the gap between NONE and POINT
const int LIGHT_TYPE_POINT = 2;

struct Light {
	int lightType;
	vec3 position;
	vec4 diffuse;
	float intensity;
	float constantAttenuation;
	float linearAttenuation;
	float quadraticAttenuation;
};

const int NUM_LIGHTS = 4;
uniform Light gLights[NUM_LIGHTS];
uniform vec4 gGlobalAmbient;

vec4 calculateDiffuse(Light light, vec4 surfaceColor, vec3 normal, vec3 lightDir)
{
	vec4 outColor = vec4(0.0);

	vec3 normalizedLightDir = normalize(lightDir);
	float NdotL = max(dot(normal, normalizedLightDir), 0.0);

	if (light.lightType == LIGHT_TYPE_DIRECTIONAL) {
		if (NdotL > 0.0) {
			outColor += surfaceColor * light.diffuse * light.intensity * NdotL;
		}
	} else if (light.lightType == LIGHT_TYPE_POINT) {
		float dist = length(lightDir);
		if (NdotL > 0.0) {
			float attenuation = 1.0 / (light.constantAttenuation +
						light.linearAttenuation * dist +
						light.quadraticAttenuation * dist * dist);

			outColor += surfaceColor * light.diffuse * light.intensity * attenuation * NdotL;
		}
	}

	return outColor;
}

uniform sampler2D gMainTexture;

in vec2 vTexcoord;

in vec3 vPosition;
in vec3 vNormal;
in vec3 vTangent;

out vec4 outColor;

void main(void)
{
	vec4 texDiffuse = texture(gMainTexture, vTexcoord);

	vec3 normal = normalize(vNormal);
	vec3 tangent = normalize(vTangent);
	vec3 bitangent = cross(normal, tangent);

	mat3 tangentSpaceMatrix = mat3(
		tangent.x, bitangent.x, normal.x,
		tangent.y, bitangent.y, normal.y,
		tangent.z, bitangent.z, normal.z);

	outColor = texDiffuse * gGlobalAmbient;

	for (int i = 0; i < NUM_LIGHTS; i++) {
		vec3 lightDir = vec3(0.0);
		if (gLights[i].lightType == LIGHT_TYPE_DIRECTIONAL) {
			lightDir = normalize(tangentSpaceMatrix * gLights[i].position);
		} else if (gLights[i].lightType == LIGHT_TYPE_POINT) {
			lightDir = tangentSpaceMatrix * (gLights[i].position - vPosition);
		}

		if (gLights[i].lightType != LIGHT_TYPE_NONE) {
			outColor += calculateDiffuse(gLights[i], texDiffuse, vec3(0.0, 0.0, 1.0), lightDir);
		}
	}
}
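For reference, the point-light term in calculateDiffuse can be mirrored on the CPU to sanity-check values against what the shader produces on each vendor. A minimal C++ sketch (the function name is mine, not part of the engine):

```cpp
#include <algorithm>
#include <cmath>

// CPU-side mirror of the point-light factor in calculateDiffuse.
// The shader multiplies this by surfaceColor * light.diffuse * light.intensity.
float pointLightFactor(float nDotL, float dist,
                       float constantAtt, float linearAtt, float quadraticAtt) {
    float ndl = std::max(nDotL, 0.0f);   // max(dot(N, L), 0.0) in the shader
    if (ndl <= 0.0f) return 0.0f;        // surface faces away from the light
    float attenuation = 1.0f / (constantAtt + linearAtt * dist
                                + quadraticAtt * dist * dist);
    return attenuation * ndl;
}
```

Comparing a few of these values against a pixel captured from both cards narrows down whether the divergence is in the lighting math or in the inputs (attributes/uniforms) reaching the shader.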

I hope someone can point out the issue to me or help me find a way to debug it (since it's not a compiler error).


Thanks in advance!

[GLSL] Question about normal mapping

26 March 2014 - 12:36 PM

I took a look at some tutorials on the internet about how to do normal mapping, and they all convert the light direction into tangent space.


In my shader setup every shader has a "#define NUM_LIGHTS" and a respective array of structs for each light.


Every tutorial I looked at only calculated the lightDir for one light and passed it on to the fragment shader with the "out" modifier.

How would I go about passing, for example, 4 lightDirs to the fragment shader, without creating a vec3 lightDir0; vec3 lightDir1; etc. for each light?

I looked up arrays with the out modifier, but it seems those don't work on every card?
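For reference, the array form in question looks like this (a sketch in GLSL 1.30 syntax; the names vLightDirs and NUM_LIGHTS are mine):

```glsl
// Vertex shader: one varying array instead of lightDir0, lightDir1, ...
const int NUM_LIGHTS = 4;
out vec3 vLightDirs[NUM_LIGHTS];

// inside main(), after building the tangent-space matrix:
// for (int i = 0; i < NUM_LIGHTS; i++)
//     vLightDirs[i] = tangentSpaceMatrix * (gLights[i].position - vPosition);

// Fragment shader: matching declaration.
in vec3 vLightDirs[NUM_LIGHTS];
```

Varying arrays are core in GLSL 1.30, though older drivers were known to handle them poorly, which may be what the "don't work on every card" reports refer to. Each array element still counts against the hardware's varying-component limit.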


Hopefully someone can clarify this for me.


Have a nice day!

wglChoosePixelFormatARB for MSAA very slow

23 March 2014 - 10:28 AM

I am using the tutorial from NeHe Productions (http://nehe.gamedev.net/tutorial/fullscreen_antialiasing/16008/) on how to enable MSAA on a GL context.


It all works fine, but the function call to wglChoosePixelFormatARB() is very, very slow.

It usually takes about 2-3 seconds (my PC is a gaming PC, so it should run much faster).


Does anyone know why this function is so slow and is there a workaround/solution?

[ASSIMP] Importing collada file with biped animations from 3dsmax

10 March 2014 - 04:14 AM

Hey guys!


I am currently working with my artist so we can get a workflow up and running to import static and skinned meshes into our own engine.

He is using the biped tool in 3ds max.

At the moment we have 2 options to go for:

-ASSIMP and importing .dae(Collada) files

-FBXSDK and importing .fbx(obviously)


We tried to go the ASSIMP way, and I am having a problem with bipedal animations. Normal bone animations work with example files from the internet. Does anyone have experience with this kind of workflow and can explain the weird behaviour in the attached screenshot?


Secondly, I tried using the FBXSDK, but it's a pain in the ass. I am having trouble with simply loading normals...


To anyone out there having a custom engine or experience in those things: How would you/do you handle biped animations?


Best regards,


[FBX SDK] Importing static model problem

09 March 2014 - 04:51 PM



I am trying to import a model with the help of the FBX SDK. Now, it's not the best documented SDK, so please help me with this one.

The following code loads vertices, normals, UVs and indices. This then gets rendered by OpenGL. The rendering part works fine (already tested with static geometry). The problem is loading the normals. So here is the code:

int polygonCount = pMesh->GetPolygonCount();
int numIndices = polygonCount * 3;
int numVertices = pMesh->GetControlPointsCount();

FbxGeometryElementNormal *pNormalElement = pMesh->GetElementNormal(0);
FbxGeometryElementUV *pUVElement = pMesh->GetElementUV(0);

FbxLayerElement::EMappingMode normalMappingMode = pNormalElement->GetMappingMode();
FbxLayerElement::EMappingMode uvMappingMode = pUVElement->GetMappingMode();

FbxLayerElement::EReferenceMode normalReferenceMode = pNormalElement->GetReferenceMode();
FbxLayerElement::EReferenceMode uvReferenceMode = pUVElement->GetReferenceMode();

unsigned short *pIndices = new unsigned short[numIndices];
for (int i = 0; i < polygonCount; i++)
{
	for (int j = 0; j < 3; j++)
	{
		int polygonVertex = pMesh->GetPolygonVertex(i, j);
		pIndices[(i * 3) + j] = polygonVertex;

		FbxVector4 position = pMesh->GetControlPointAt(polygonVertex);
		pVertices[polygonVertex].x = position.mData[0];
		pVertices[polygonVertex].y = position.mData[1];
		pVertices[polygonVertex].z = position.mData[2];

		FbxVector4 normal = pNormalElement->GetDirectArray().GetAt(polygonVertex);
		pVertices[polygonVertex].nx = normal.mData[0];
		pVertices[polygonVertex].ny = normal.mData[1];
		pVertices[polygonVertex].nz = normal.mData[2];

		FbxVector2 uv = pUVElement->GetDirectArray().GetAt(polygonVertex);
		pVertices[polygonVertex].uvx = uv.mData[0];
		pVertices[polygonVertex].uvy = uv.mData[1];
	}
}
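Which index GetDirectArray().GetAt() expects depends on the element's mapping and reference modes queried above. A self-contained sketch of that decision (the local enums and the helper name are mine; the real SDK values are FbxGeometryElement::eByControlPoint, eByPolygonVertex, eDirect, and eIndexToDirect):

```cpp
#include <vector>

// Mirrors the FBX SDK's layer-element addressing rules.
enum MappingMode { ByControlPoint, ByPolygonVertex };
enum ReferenceMode { Direct, IndexToDirect };

// controlPointIndex: result of GetPolygonVertex(i, j)
// polygonVertexIndex: running corner counter, i.e. i * 3 + j
int directArrayIndex(MappingMode mapping, ReferenceMode reference,
                     int controlPointIndex, int polygonVertexIndex,
                     const std::vector<int>& indexArray) {
    // Mapping mode picks which index is meaningful at all...
    int i = (mapping == ByControlPoint) ? controlPointIndex
                                        : polygonVertexIndex;
    // ...reference mode decides whether it goes through the index array
    // (GetIndexArray() in the SDK) before hitting the direct array.
    return (reference == Direct) ? i : indexArray[i];
}
```

If the normals here are mapped by polygon vertex (common when an exporter writes split normals), feeding the control-point index to GetAt() picks essentially arbitrary normals, which would match the random-facing normals in the screenshot.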

And this code then gives me the output you can see in the attached screenshot (it's supposed to be Superman). The colors are just the normals used as colors. Also, the normals face in random directions, so most of the time I see through the mesh.


Am I using the FBX SDK correctly? I am really not sure about the index in the function call pNormalElement->GetDirectArray().GetAt(polygonVertex);


I hope someone can help me.


Thanks in advance!