Eldritch

OpenGL: Overcoming the limitations of OpenGL lights, second coming


Due to apparent acts of crime against the crown, my last inn was closed down, so here goes again; I will try my utmost not to anger her Majesty this time ;) I wrote a couple of shaders that should light my scene with diffuse and specular lighting after the initial ambient pass, where only ambient lighting is used. But, as can be seen in the screenshot below, the lighting is quite off. Here are my shaders:

Vertex
varying vec3 normal;
varying vec3 lightDir;

uniform vec4 LightPosition;

void main()
{
	normal = gl_NormalMatrix * gl_Normal;
	
	vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
	
	lightDir = normalize(vec3(LightPosition - eyePos));
	
	gl_Position = ftransform();
	
	gl_TexCoord[0] = gl_MultiTexCoord0;
}


Fragment
uniform sampler2D TexMap1;
uniform vec4 LightDiffuse;
uniform vec4 LightSpecular;

varying vec3 normal;
varying vec3 lightDir;

void main()
{
	vec4 diffuse = LightDiffuse * gl_FrontMaterial.diffuse;
	vec4 specular = LightSpecular * gl_FrontMaterial.specular;
	
	vec4 texel = texture2D(TexMap1, gl_TexCoord[0].st);
	
	vec4 color = diffuse * texel * max(dot(normal, lightDir), 0.0) + specular;
	
	gl_FragColor = color;
}


The rendering loop looks like this:
void CRenderer::Render3D()
{
	// Render ambient pass.
	glDisable(GL_BLEND);
	iActiveLight = -1;
    iAmbientPass = true;
	float col[4] = {0, 0, 0, 1};
	glLightfv(GL_LIGHT1, GL_DIFFUSE, col);
	glLightfv(GL_LIGHT1, GL_SPECULAR, col);
	float amb[4] = {0.1f, 0.1f, 0.1f, 1};
	glLightfv(GL_LIGHT1, GL_AMBIENT, amb);
	for (unsigned int i = 0; i < i3DObjects.size(); i++)
	{
		i3DObjects[i]->Draw();
	}

	// Render light passes.
    iAmbientPass = false;
	glDepthFunc(GL_LEQUAL);
	glBlendFunc(GL_ONE, GL_ONE);
    glEnable(GL_BLEND);
	//for (unsigned int i = 0; i < i3DObjects.size(); i++)
	{
		for (int j = 0; j < 4; j++)
		{
			CLight* L = CScenegraph::Instance()->GetLight(j);
			if (L != NULL)
			{
				iActiveLight = j;
				glLightfv(GL_LIGHT1, GL_POSITION, L->pos);
				glLightfv(GL_LIGHT1, GL_DIFFUSE, L->diff);
				glLightfv(GL_LIGHT1, GL_AMBIENT, col);
				glLightfv(GL_LIGHT1, GL_SPECULAR, L->spec);
				glLightf(GL_LIGHT1, GL_CONSTANT_ATTENUATION, L->att[0]);
				glLightf(GL_LIGHT1, GL_LINEAR_ATTENUATION, L->att[1]);
				glLightf(GL_LIGHT1, GL_QUADRATIC_ATTENUATION, L->att[2]);

				for (unsigned int i = 0; i < i3DObjects.size(); i++)
				{
					i3DObjects[i]->Draw();
				}
			}
		}
	}
	glBlendFunc(GL_SRC_ALPHA, GL_ZERO);
	glDepthFunc(GL_LESS);
}


The rendering of objects looks like this:
void CMesh::Draw()
{
	for (unsigned int i = 0; i < iMeshObj.size(); i++)
	{
		float zero[4] = {0, 0, 0, 1};
		if (CRenderer::Instance()->IsInAmbientPass() == true)
		{
			glMaterialfv(GL_FRONT, GL_DIFFUSE, zero);
			glMaterialfv(GL_FRONT, GL_AMBIENT, iMeshObj[i]->iAmbient);
			glMaterialfv(GL_FRONT, GL_SPECULAR, zero);
		}
		else
		{
			glMaterialfv(GL_FRONT, GL_DIFFUSE, iMeshObj[i]->iDiffuse);
			glMaterialfv(GL_FRONT, GL_AMBIENT, zero);
			glMaterialfv(GL_FRONT, GL_SPECULAR, iMeshObj[i]->iSpecular);
			glMaterialf(GL_FRONT, GL_SHININESS, iMeshObj[i]->iShininess);
		}

		// Check for frustum containment on the object bounding box.
		if (CFrustum::Instance()->CubeIsInFrustum(iMeshObj[i]->iBBox.GetCenter(),
												  iMeshObj[i]->iBBox.GetExtents().GetX()) == true)
		{
			// Check for and enable possible shader and/or multi-texture.
			if (iMeshObj[i]->iMultiTexturing == true)
			{
				CRenderer::Instance()->EnableMultiTexturing(iMeshObj[i]->iMaterial, iMeshObj[i]->iMultiMaterial);
			}
			else
			{
				glActiveTextureARB(GL_TEXTURE0_ARB);
				glBindTexture(GL_TEXTURE_2D, CRenderer::Instance()->GetMaterial(iMeshObj[i]->iMaterial));
			}

			if (CRenderer::Instance()->IsInAmbientPass() == false)
			{
				CShaderManager::Instance()->UseShader(iMeshObj[i]->iShader);
			}
			else
			{
				CShaderManager::Instance()->UseShader("Ambient");
			}

			CShaderManager::Instance()->SetShaderParams(iMeshObj[i]);

			// Check for frustum containment on the partition nodes.
			for (unsigned int j = 0; j < iMeshObj[i]->iPartition.iNodes.size(); j++)
			{
				if (CFrustum::Instance()->CubeIsInFrustum(iMeshObj[i]->iPartition.iNodes[j]->iBBox.GetCenter(),
														  iMeshObj[i]->iPartition.iNodes[j]->iBBox.GetExtents().GetX()) == true)
				{
					glCallList(iMeshObj[i]->iPartition.iNodes[j]->iDisplayList);
				}
			}

			CShaderManager::Instance()->NoShader();

			if (iMeshObj[i]->iMultiTexturing == true)
			{
				CRenderer::Instance()->DisableMultiTexturing();
			}
		}
	}
}


And now, ladies and gentlemen, the scene... Notice the "+" shape the lights seem to be causing, with the brightest part in the middle? Also, the lighting looks very much like some form of per-vertex lighting, which I find peculiar. I am not over-driving any of the light properties or material properties in a way that should cause the kind of over-brightening seen in the screenshot. That said, I readily admit that the multiplication is probably off somehow, causing a sort of over-exposure effect. I am begging for help; I cannot figure out why the lighting is breaking up like this. Rendering only one light pass works, but as soon as I add more light passes, it starts to look like something out of this world :)

My suggestion is most likely useless (they always are), but what happens if you do the calculations without the specular part?

No suggestions are useless and all are welcome :)

Without the specular component in the shader, the lighting becomes less bright. There are still traces of the "+" shape, and the lighting gets brighter the closer the light is to the center of the screen, meaning that if I turn away from the light, everything goes dark.

Your specular calculation looks way off to me; all you're doing is passing in a uniform and multiplying it by another uniform. Your specular will just end up being constant for the whole scene, and then you add this (probably quite large) constant onto all your lighting.

First, as mrbig suggested, remove the specular and check that everything works. Then you'll have to write some proper specular calculations; you have to take the light position and the eye position into account, and there is plenty of info on the internet if you look.
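
To illustrate what a proper (Blinn-style) specular term involves, here is a minimal fragment-shader sketch. It assumes the vertex shader also passes the eye-space vertex position as a varying ecPos (your current vertex shader does not), and that the LightPosition uniform is given in eye space. It is not a drop-in for the shaders above, just the shape of the calculation:

uniform vec4 LightPosition;   // assumed to be in eye space
uniform vec4 LightSpecular;

varying vec3 normal;          // eye-space normal from the vertex shader
varying vec4 ecPos;           // eye-space vertex position (assumed varying)

void main()
{
	vec3 n = normalize(normal);
	vec3 lightDir = normalize(vec3(LightPosition - ecPos));
	vec3 eyeDir = normalize(-ecPos.xyz);          // the camera sits at the origin in eye space
	vec3 halfVec = normalize(lightDir + eyeDir);  // Blinn half vector

	float NdotHV = max(dot(n, halfVec), 0.0);
	vec4 specular = LightSpecular * gl_FrontMaterial.specular *
	                pow(NdotHV, gl_FrontMaterial.shininess);

	gl_FragColor = specular;   // specular term only, for illustration
}

The key point is the eye direction: in eye space the camera is at the origin, so normalize(-ecPos.xyz) points towards the viewer, and the half vector between that and the light direction drives the highlight.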

Okay. I noticed that the problem with the "lighting getting darker the more I turn away from the light" exists even with the specularity on.

Maybe you could try out the shaders in this tutorial: http://www.clockworkcoders.com/oglsl/tutorial5.htm
They worked fine for me.

Hehehe, the story of my life. Things that work for others simply don't work for me :)

With the clockworkcoders tutorial, I got a lot of z-fighting (even with the depth func set to GL_LEQUAL, as suggested yesterday), and the other anomalies remain as well.

Looking through your code, I have a few suggestions and comments. Using

normal = normalize(gl_NormalMatrix * gl_Normal);

in your vertex code removes the possibility that your vertex normals are too long. Similarly, using normalize(normal) in your fragment shader gets rid of problems with large polys. I'll ignore the specular term, as it's just completely wrong (as already mentioned by OrangyTang). By the way, all the attenuation settings for your lights need to be handled in your shaders as well if you expect them to have any effect.

You use the parameter LightPosition to get your eye-space light coordinate into your shader. I guess you are setting this in CShaderManager::Instance()->SetShaderParams(iMeshObj[i])? The OpenGL light positions are available in gl_LightSource[i].position, which your shaders don't use. Personally, if I write a shader that is supposed to be a drop-in replacement for OpenGL fixed functionality, I try to reuse the fixed-function parameters as much as possible, i.e. get all the light parameters from the fixed-function state.
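
As a sketch of how that can look in practice, here is a minimal fragment shader that renormalizes per fragment, applies distance attenuation, and pulls all the light parameters from the fixed-function state for GL_LIGHT1. It assumes the vertex shader passes the eye-space position as a varying ecPos and writes gl_TexCoord[0], as your shaders already do; it is not a drop-in replacement, just an illustration:

uniform sampler2D TexMap1;

varying vec3 normal;   // eye-space normal from the vertex shader
varying vec4 ecPos;    // eye-space vertex position (assumed varying)

void main()
{
	// Re-normalize per fragment; interpolation shortens the varying normal.
	vec3 n = normalize(normal);

	// gl_LightSource[1].position is whatever glLightfv(GL_LIGHT1, GL_POSITION, ...)
	// set, already transformed into eye space by the driver.
	vec3 toLight = vec3(gl_LightSource[1].position - ecPos);
	float dist = length(toLight);
	vec3 lightDir = toLight / dist;

	// Attenuation factors set with glLightf(GL_LIGHT1, GL_*_ATTENUATION, ...).
	float att = 1.0 / (gl_LightSource[1].constantAttenuation +
	                   gl_LightSource[1].linearAttenuation * dist +
	                   gl_LightSource[1].quadraticAttenuation * dist * dist);

	vec4 texel = texture2D(TexMap1, gl_TexCoord[0].st);
	vec4 diffuse = gl_FrontMaterial.diffuse * gl_LightSource[1].diffuse;

	gl_FragColor = att * diffuse * texel * max(dot(n, lightDir), 0.0);
}

Done this way, the C++ render loop does not change at all; the shader simply reads the same state the fixed-function pipeline would have used.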

Another helpful link

http://www.lighthouse3d.com/opengl/glsl/index.php?pointlight

I took the opportunity to actually sit down with pen and paper, and came up with a pair of shaders that handle three lights at any given time, and the result looks beautiful! I am going to share them with the rest of you so you can avoid the hell I have been through these past few days.

What you need to complement the shaders (note that I am not claiming they are optimal in any way, but they do their job, so I am happy) is a system that finds the three lights in the scene closest to your object and copies their parameters into GL_LIGHT1 through GL_LIGHT3.

Anyways, here are the shaders:

Vertex Shader

varying vec3 normal;
varying vec4 ecPos;

void main()
{
	// Calculate normal space.
	normal = normalize(gl_NormalMatrix * gl_Normal);

	// Calculate view space.
	ecPos = gl_ModelViewMatrix * gl_Vertex;

	// Set vertex position.
	gl_Position = ftransform();

	// Set texture coord.
	gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}




Fragment Shader

uniform sampler2D TexMap1;

varying vec3 normal;
varying vec4 ecPos;

void main()
{
	// Calculate global ambience.
	vec4 ambientGlobal = gl_LightModel.ambient * gl_FrontMaterial.ambient;

	// Calculate normal.
	vec3 n = normalize(normal);

	// Initiate color to global ambience.
	vec4 color = ambientGlobal;

	// For each of the 3 lights affecting this object, calculate their color addition.
	for (int i = 1; i < 4; i++)
	{
		// Calculate light's direction.
		vec3 vec = vec3(gl_LightSource[i].position - ecPos);
		vec3 lightDir = normalize(vec);

		// Calculate light distance.
		float dist = length(vec);

		// Calculate half vector.
		vec3 halfVector = normalize(gl_LightSource[i].halfVector.xyz);

		// Calculate diffuse component.
		vec4 diffuse = gl_FrontMaterial.diffuse * gl_LightSource[i].diffuse;

		// Calculate ambient component.
		vec4 ambient = gl_FrontMaterial.ambient * gl_LightSource[i].ambient;

		// Calculate dot product between normal and light.
		float NdotL = max(dot(n, normalize(lightDir)), 0.0);

		// Calculate attenuation.
		float att = 1.0 / (gl_LightSource[i].constantAttenuation +
		                   gl_LightSource[i].linearAttenuation * dist +
		                   gl_LightSource[i].quadraticAttenuation * dist * dist);

		// Add attenuation and diffuse and ambient components.
		color += att * (diffuse * NdotL + ambient);

		// Calculate dot product between normal and half vector.
		vec3 halfV = normalize(halfVector);
		float NdotHV = max(dot(n, halfV), 0.0);

		// Add specular component and calculate highlight.
		color += att * gl_FrontMaterial.specular * gl_LightSource[i].specular *
		         pow(NdotHV, gl_FrontMaterial.shininess);
	}

	// Multiply color with texture color.
	color = color * texture2D(TexMap1, gl_TexCoord[0].st);

	// Set fragment color.
	gl_FragColor = color;
}




And here is how the scene looks now, as it should :)



Finally! :) Thanks to all.

A good idea is to divide the final color, before the multiplication with the texture color, by the number of lights you are using in the shader (in my case, by 3). It averages the values and makes the result look smoother and less washed out.

Quote:
Original post by Eldritch
A good idea is to divide the final color, before the multiplication with the texture color, by the number of lights you are using in the shader (in my case, by 3). It averages the values and makes the result look smoother and less washed out.

No, that's a bad idea, as it means the intensity of one light can affect another. If you're getting washed-out colours, it means your light intensities are too bright in the first place.

