## 3 posts in this topic

Posted (edited)

I have implemented the depth map and can render it correctly.

The problem I am having is with implementing the shadows themselves. Since this is part of a bigger project that uses deferred rendering, the code does not match the code in the tutorial exactly, but I am using the same concept.

Vertex shader (lighting pass):

```glsl
#version 440
layout (location = 0) in vec3 vertexPos;
layout (location = 1) in vec2 texCoords;

out vec2 TexCoords;
out vec4 FragPosLightSpace;

uniform mat4 model;
uniform sampler2D gPosition;
uniform mat4 lightSpaceMatrix;

void main()
{
    gl_Position = vec4(vertexPos, 1.0f);
    TexCoords = texCoords;
    vec3 FragmentPos = vec3(model * texture(gPosition, texCoords));
    FragPosLightSpace = lightSpaceMatrix * vec4(FragmentPos, 1.0);
}
```

Fragment shader (lighting pass):

```glsl
#version 440
out vec4 FragColor;
in vec2 TexCoords;
in vec4 FragPosLightSpace;

uniform sampler2D gPosition;
uniform sampler2D gNormal;
uniform sampler2D gAlbedoSpec;
uniform sampler2D depthMap;

struct Light {
    vec3 Position;
    vec3 Color;

    float Linear;
    float Quadratic;
};

const int NR_LIGHTS = 32;
uniform Light lights[NR_LIGHTS];
uniform vec3 viewPos;

float ShadowCalculation(vec4 fragPosLightSpace)
{
    // Perform perspective divide
    vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
    // Transform to [0,1] range
    projCoords = projCoords * 0.5 + 0.5;
    // Get closest depth value from light's perspective (using [0,1] range fragPosLight as coords)
    float closestDepth = texture(depthMap, projCoords.xy).r;
    // Get depth of current fragment from light's perspective
    float currentDepth = projCoords.z;
    // Check whether current frag pos is in shadow
    float shadow = 0.0;
    if(currentDepth > closestDepth)
    {
        shadow = 1.0;
    }
    return shadow;
}

void main()
{
    // Retrieve data from G-buffer
    vec3 FragPos = texture(gPosition, TexCoords).rgb;
    vec3 Normal = texture(gNormal, TexCoords).rgb;
    vec3 color = texture(gAlbedoSpec, TexCoords).rgb;
    float Specular = texture(gAlbedoSpec, TexCoords).a;

    vec3 lighting = color * 0.1; // hard-coded ambient
    vec3 viewDir = normalize(viewPos - FragPos);
    for(int i = 0; i < NR_LIGHTS; ++i)
    {
        // Diffuse
        vec3 lightDir = normalize(lights[i].Position - FragPos);
        vec3 diffuse = max(dot(Normal, lightDir), 0.0) * color * lights[i].Color;
        // Specular
        vec3 halfwayDir = normalize(lightDir + viewDir);
        float spec = pow(max(dot(Normal, halfwayDir), 0.0), 16.0);
        vec3 specular = lights[i].Color * spec * Specular;
        // Attenuation
        float distance = length(lights[i].Position - FragPos);
        float attenuation = 1.0 / (1.0 + lights[i].Linear * distance + lights[i].Quadratic * distance * distance);
        diffuse *= attenuation;
        specular *= attenuation;
        lighting += diffuse + specular;
    }
    FragColor = vec4(lighting, 1.0f);

    // Test depthmap
    float depthValue = texture(depthMap, TexCoords).r;
    //FragColor = vec4(vec3(depthValue), 1.0);
}
```

ShadowCalculation is the function that determines whether a position is in shadow or not, and it pretty much follows the same concept as the tutorial.
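For reference, the way I intend to combine the shadow factor with the lighting is roughly the following, taken from the tutorial's approach rather than from my exact code:

```glsl
// Tutorial-style sketch (not my exact code): inside the light loop in main(),
// scale each light's contribution by the shadow factor from ShadowCalculation().
float shadow = ShadowCalculation(FragPosLightSpace);   // 1.0 means the fragment is occluded
lighting += (1.0 - shadow) * (diffuse + specular);
```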

Now if I run all this, all I get is a white screen. I thought it might be because I had the shadow value set wrong, so I tried setting float shadow = 1.0 and then, in the if-statement, setting shadow to 0.0. Now I don't get a completely white screen, but the shadows are still not showing. I feel like I am close to a solution but have gotten stuck, and I would appreciate it if someone could tell me what the problem is or could be.

Edited by ChobitsTheZero

The shadow map is supposed to be a depth map, but you're taking vectors out of it in the vertex shader... using completely unrelated texture coordinates, no less. This doesn't make any sense whatsoever.

You could think of the shadow map texture as a lookup table for ray tracing the primary light rays, i.e. the rays between the fragment positions and the light source. You basically have to transform the fragment in two different ways, for two different "cameras", to get the values you need for this calculation. One of the cameras represents the main display, and the other represents the light. The light camera was previously used to create the depth map, in which each texel represents a single ray of light, and the depth value is the distance a light ray can travel before it gets obstructed. To perform the shadow calculation, you test whether the distance traveled by the light is shorter than the actual distance to the fragment being shaded, which tells you whether the light is obstructed before it reaches the fragment in question.

Thus, your vertex shader shouldn't do texture lookups; it should just output the position of the vertex in light space, i.e. lightSpaceMatrix * vec4(vertexPos, 1)
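For a regular forward-rendered geometry pass that would look roughly like the sketch below; the camera matrix names here are assumptions, so adapt them to your setup. In your deferred lighting pass, where you draw a fullscreen quad, the equivalent is to apply the same lightSpaceMatrix transform in the fragment shader to the world-space position you sample from gPosition.

```glsl
#version 440
layout (location = 0) in vec3 vertexPos;

out vec4 FragPosLightSpace;

uniform mat4 model;            // object-to-world (name assumed)
uniform mat4 view;             // camera view matrix (name assumed)
uniform mat4 projection;       // camera projection matrix (name assumed)
uniform mat4 lightSpaceMatrix; // light projection * light view

void main()
{
    vec4 worldPos = model * vec4(vertexPos, 1.0);
    gl_Position = projection * view * worldPos;      // the vertex as seen by the main camera
    FragPosLightSpace = lightSpaceMatrix * worldPos; // the same vertex as seen by the light's "camera"
}
```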


> The shadow map is supposed to be a depth map, but you're taking vectors out of it in the vertex shader... using completely unrelated texture coordinates, no less. This doesn't make any sense whatsoever.
>
> You could think of the shadow map texture as a lookup table for ray tracing the primary light rays, i.e. the rays between the fragment positions and the light source. You basically have to transform the fragment in two different ways, for two different "cameras", to get the values you need for this calculation. One of the cameras represents the main display, and the other represents the light. The light camera was previously used to create the depth map, in which each texel represents a single ray of light, and the depth value is the distance a light ray can travel before it gets obstructed. To perform the shadow calculation, you test whether the distance traveled by the light is shorter than the actual distance to the fragment being shaded, which tells you whether the light is obstructed before it reaches the fragment in question.
>
> Thus, your vertex shader shouldn't do texture lookups; it should just output the position of the vertex in light space, i.e. lightSpaceMatrix * vec4(vertexPos, 1)

I see! That really helps! I have been short on time and really stressed due to exams, so I basically had to rush through it all. But I shouldn't make excuses; I guess I'm just bad at understanding how exactly 3D programming works.


Posted (edited)

Okay, so I did what you said, but it seems there is another problem that I can't really figure out. There are shadows now, but they don't seem to be in the correct position: instead of appearing behind the models, they are in front of them and slightly above them. I thought that maybe I had to multiply vec4(vertexPos, 1) by model, but that also gave a weird result.

Edit: Never mind, I was able to solve it; apparently the lightSpaceMatrix had some issues.

Edited by ChobitsTheZero
