# OpenGL Directional light shadow mapping


## Recommended Posts

I have a scene like this (the yellow light is from a point light; there's also a chair just outside the view to the left):

[attachment=26364:1.png]

If I do not use a light's view matrix at all, i.e. I only use an orthographic projection matrix when rendering the shadow maps, it looks OK. The cascade order is top left, top right, bottom left, bottom right.

[attachment=26365:asd.png]

Now, if I use an orthographic projection matrix together with a rotation matrix (as in the code below) based on the light's direction, it instead looks like this:

[attachment=26366:asd2.png]

This is not correct; for example, the boxes are completely missing, even though they are encompassed by the orthographic projection.

The resulting shadows then look like this:

[attachment=26369:asd3.png]

Some parts of the shadow are correct, but there are major artifacts.

Here's how I build the matrix:

```cpp
Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir /* == Vec3(-1.0f, -1.0f, -1.0f) in this example */)
{
    // "cameraFrustrum" contains the 8 corners of the camera's frustum in world space
    float maxZ = cameraFrustrum[0].z, minZ = cameraFrustrum[0].z;
    float maxX = cameraFrustrum[0].x, minX = cameraFrustrum[0].x;
    float maxY = cameraFrustrum[0].y, minY = cameraFrustrum[0].y;
    for (uint32_t i = 1; i < 8; i++)
    {
        if (cameraFrustrum[i].z > maxZ) maxZ = cameraFrustrum[i].z;
        if (cameraFrustrum[i].z < minZ) minZ = cameraFrustrum[i].z;
        if (cameraFrustrum[i].x > maxX) maxX = cameraFrustrum[i].x;
        if (cameraFrustrum[i].x < minX) minX = cameraFrustrum[i].x;
        if (cameraFrustrum[i].y > maxY) maxY = cameraFrustrum[i].y;
        if (cameraFrustrum[i].y < minY) minY = cameraFrustrum[i].y;
    }

    Vec3 right = glm::normalize(glm::cross(glm::normalize(lightDir), Vec3(0.0f, 1.0f, 0.0f)));
    Vec3 up = glm::normalize(glm::cross(glm::normalize(lightDir), right));

    Mat4 lightViewMatrix = Mat4(Vec4(right, 0.0f),
                                Vec4(-up, 0.0f),        // why do I need to negate this btw?
                                Vec4(lightDir, 0.0f),
                                Vec4(0.0f, 0.0f, 0.0f, 1.0f));

    return OrthographicMatrix(minX, maxX, maxY, minY, maxZ, minZ) * lightViewMatrix;
}
```

It was my understanding (based on topics like https://www.opengl.org/discussion_boards/showthread.php/155674-Shadow-maps-for-infinite-light-sources) that all I needed for directional-light shadow mapping was an orthographic projection matrix and a rotation matrix (with no translation component, since the directional light has no position; and as I've understood it, translation doesn't matter anyway, since the projection is orthographic, meaning there is no perspective).

Then what is causing the errors in the 3rd image? Is there something I am missing?

Edited by KaiserJohan

##### Share on other sites

> Which is not correct, for example, the boxes are completely missing, even though they are encompassed in the orthographic projection

So how are you culling them?

L. Spiro

##### Share on other sites

If you want your camera to move past its original position, then you'll need a light matrix to do a transform. The directional light has no position, but your orthographic projection still needs to know where it's pointing and what its bounds are.

##### Share on other sites

> > Which is not correct, for example, the boxes are completely missing, even though they are encompassed in the orthographic projection
>
> So how are you culling them?
>
> L. Spiro

Not sure I follow; they are within the bounds of the orthographic matrix (as seen in the 2nd image), so why are they not visible after I apply the rotation matrix?

> If you want your camera to move past it's original position, then you'll need a light matrix to do a transform.

What has this got to do with the (scene) camera? Or do you mean the camera as in the light's view? If so, my understanding of orthographic projection was that I didn't need to translate the camera at all.

> The directional light has no position, but your orthographic projection still needs to know where it's pointing at and what it's bounds are.

"Where it's pointing" - defined by the rotation matrix? "What its bounds are" - defined by the orthographic projection matrix, built from the max/min of the camera frustum corners? Is there anything I'm missing?

##### Share on other sites

> so why are they not visible after I apply the rotation matrix?

I guess that was my point.
Why isn’t it visible? How are you culling it? How are you drawing it?
Are you sure it is in view? Because if so, and you actually sent valid commands to draw it, it would be there with the rest of the objects.

L. Spiro

##### Share on other sites

> > so why are they not visible after I apply the rotation matrix?
>
> I guess that was my point.
> Why isn’t it visible? How are you culling it? How are you drawing it?
> Are you sure it is in view? Because if so, and you actually sent valid commands to draw it, it would be there with the rest of the objects.
>
> L. Spiro

I'm positive the problem is related to the projection and/or the light's view matrix. Is there anything I'm missing when building the light's projection/view matrix? Is it true that the view matrix needs no translation at all, just the rotation, as in my code above?

Edited by KaiserJohan

##### Share on other sites

> It is true the view matrix needs no translation at all and just the rotation as in my code above?

False. The projection determines the size and shape of a frustum "box." That box has no direct relationship to world coordinates. The view matrix determines the position and orientation of objects that will (eventually) be included in or excluded from the frustum volume. Thinking about it in a somewhat backwards sense: the view matrix "positions" the frustum volume in the world, and objects which are not in the volume don't get rendered.
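To make that concrete for the shadow-map case, here is a minimal sketch (plain stand-in types rather than glm, and a hypothetical `lightSpaceBounds` helper): the ortho bounds must be taken from the frustum corners *after* they are transformed into light space, since taking the min/max of the world-space corners measures the box along the wrong axes.

```cpp
#include <algorithm>
#include <cassert>
#include <cfloat>

// Minimal stand-in for the glm vector type, just for illustration.
struct Vec3 { float x, y, z; };

// Apply a light *view* rotation to a point; a plain 3x3 row-vector
// multiply here for brevity.
Vec3 mulRot(const float m[3][3], const Vec3& p) {
    return { m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z,
             m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z,
             m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z };
}

// Compute ortho bounds from frustum corners transformed into light space.
void lightSpaceBounds(const float lightView[3][3], const Vec3 corners[8],
                      Vec3& outMin, Vec3& outMax) {
    outMin = {  FLT_MAX,  FLT_MAX,  FLT_MAX };
    outMax = { -FLT_MAX, -FLT_MAX, -FLT_MAX };
    for (int i = 0; i < 8; ++i) {
        Vec3 p = mulRot(lightView, corners[i]);
        outMin.x = std::min(outMin.x, p.x); outMax.x = std::max(outMax.x, p.x);
        outMin.y = std::min(outMin.y, p.y); outMax.y = std::max(outMax.y, p.y);
        outMin.z = std::min(outMin.z, p.z); outMax.z = std::max(outMax.z, p.z);
    }
}
```

With, say, a 90° rotation about Z, a box spanning [0,1] in x ends up spanning [-1,0] in light space; min/max taken before the rotation would miss that entirely.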

##### Share on other sites

> > It is true the view matrix needs no translation at all and just the rotation as in my code above?
>
> False. The projection determines the size and shape of a frustum "box." That box has no direct relationship to world coordinates. The view matrix determines the position and orientation of objects that will (eventually) be included/excluded from the frustum volume. Thinking in sort a backwards sense, the view matrix "positions" the frustum volume in the world, and objects which are not in the volume don't get rendered.

That makes sense!

So, if I already have the world position of the box, how do I build a view matrix that doesn't change the position, only the orientation, of the objects?

I.e., I have the world-space corners of the camera frustum, which will build the ortho box. I want the objects in it, but viewed from a certain direction; how do I build such a matrix?

##### Share on other sites

MJP has a good sample on how to do that here: https://mynameismjp.wordpress.com/2009/02/17/deferred-cascaded-shadow-maps/.

Essentially you make an AABB around the game camera, find the center of the box, and project a point from that position in the opposite direction of your light's direction, scaled by some factor. The scale factor determines how far your light position will be from the camera; the closer you are, the better resolution you'll get in your shadow map, but you may cull valid occluders because they are behind the camera or such. There is a lot of fiddling with that value to get it right, in my experience.
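The "back up from the center" step described above can be sketched like this (stand-in vector types, and a hypothetical `backoff` parameter standing for the hand-tuned scale factor mentioned; not MJP's actual code):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Place the shadow camera by stepping from the frustum center
// *against* the light direction by 'backoff' world units.
Vec3 shadowCameraPos(const Vec3 corners[8], Vec3 lightDir, float backoff) {
    Vec3 center{ 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 8; ++i) center = center + corners[i];
    center = center * (1.0f / 8.0f);
    return center + normalize(lightDir) * -backoff;
}
```

A larger `backoff` keeps more potential occluders in front of the shadow camera at the cost of depth precision, which is the trade-off the fiddling is about.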

##### Share on other sites

First, a (minor) thing: you should normalize your lightDir as well. Also, your lightView matrix is actually a lookAt matrix.

I've implemented a 1-cascade shadow map, and the solution is very similar. This is based on an nvidia paper and other resources which can be found on the internet.

As the first step, I calculate a view matrix like you do. This is actually the same as your code (just normalize your direction!):

```cpp
view.lookAt(Vector3::Zero, lightDir, Vector3::Up);
```

After that, I calculate the bounds of the view-frustum points "looked at from the direction of the light":

```cpp
Vector3 min = Vector3::Max;
Vector3 max = Vector3::Min;

for (int i = 0; i < BoundingFrustum::NumCorners; i++)
{
    transformed = Vector3::transformCoord(corners[i], view);

    min = Vector3::getMin(transformed, min);
    max = Vector3::getMax(transformed, max);
}
```

Then my projection matrix is a simple ortho:

```cpp
proj.orthographicOffCenter(-1, 1, -1, 1, min.z, max.z);
```

But I'm also using another matrix, called cropMat, which will "position and clip":

```cpp
const float32 scaleX = 2.0f / (max.x - min.x);
const float32 scaleY = 2.0f / (max.y - min.y);
const float32 offsetX = -0.5f * (min.x + max.x) * scaleX;
const float32 offsetY = -0.5f * (min.y + max.y) * scaleY;

cropMat.m00 = scaleX;
cropMat.m11 = scaleY;
cropMat.m22 = 1.0f;
cropMat.m30 = offsetX;
cropMat.m31 = offsetY;
cropMat.m33 = 1.0f;
```

The final viewProj matrix is calculated by multiplying the view, projection, and crop matrices. NOTE that I'm using DX-style matrices; be careful with the matrix row/column order.

As a side note:

You can use a sphere instead of a box. With a sphere you lose some precision, but it also removes the flickering when the camera rotates:

```cpp
Vector3 center;
for (int i = 0; i < BoundingFrustum::NumCorners; i++)
    center += corners[i];
center /= BoundingFrustum::NumCorners;
center = Vector3::transformCoord(center, view);

const float32 radius = Vector3::distance(corners[BoundingFrustum::FLB], corners[BoundingFrustum::NRT]);

max = center + Vector3(radius);
```

You can also apply a rounding matrix which removes flickering when the camera moves, something like this:

```cpp
// round to texel
Vector3 origin = Vector3::transformCoord(Vector3::Zero, viewProj);
origin *= halfSize;

Vector3 rounding;
rounding.x = Math::round(origin.x) - origin.x;
rounding.y = Math::round(origin.y) - origin.y;
rounding /= halfSize;

roundMat.translate(rounding);
viewProj *= roundMat;
```

Edit:
This is old code; it may contain bugs. :)
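The texel-snapping math above can be checked in isolation. This is a sketch (with a hypothetical `snapOffset` helper, not the code from the post) of the per-axis offset that moves the light-space origin onto a whole texel, where `halfSize` is half the shadow-map resolution:

```cpp
#include <cassert>
#include <cmath>

// Take a coordinate of the light-space origin in NDC, convert to texel
// units, and compute the NDC offset that lands it on a whole texel.
float snapOffset(float originNdc, float halfSize) {
    float texels = originNdc * halfSize;
    return (std::round(texels) - texels) / halfSize;
}
```

After adding the returned offset, the origin sits on a texel boundary, so a moving camera no longer makes shadow edges swim between frames.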

Edited by csisy

##### Share on other sites

> I have the world space corners of the camera frustrum, that will build the ortho box. I want the objects in it, but viewed from a certain direction

This is not what I said to do in the previous topic.

L. Spiro

##### Share on other sites

> MJP has a good sample on how to do that here: https://mynameismjp.wordpress.com/2009/02/17/deferred-cascaded-shadow-maps/.
>
> Essentially you make an AABB around the game camera, find the center of the box, project a point from that position in the opposite direction of your light's direction scaled by some factor. The scale factor determines how far your light pos will be from the camera, the closer you are the better resolution you'll get in your shadow map, but you may cull valid occluders since they are behind the camera or such. There is a lot of fiddling with that value to get it right, in my experience.

Is there no optimal way to find this scale/offset value other than manual testing?

> First (minor) thing is you should normalize your lightDir as well. And your lightView matrix is actually a lookAt matrix.
>
> [...]
>
> Edit:
> This is an old code, can contains bugs.

Nice! A couple of questions:

1. Why transform the bounding frustum into the light's view space?

2. What is the point of the crop matrix? Couldn't the 'scale' go into making the ortho matrix? What is the purpose of the 'offset' - to translate the view matrix from the center?

> > I have the world space corners of the camera frustrum, that will build the ortho box. I want the objects in it, but viewed from a certain direction
>
> This is not what I said to do in the previous topic.
>
> L. Spiro

I couldn't get a grasp on the plane math, but I'm planning on revisiting that after I get the basic version working correctly.

Edited by KaiserJohan

##### Share on other sites

You can calculate the AABB for the entire scene and then set the camera to the max height of the scene. You may lose some resolution, but it'll work in a generic sense.

##### Share on other sites

> Nice! A couple of questions:
>
> 1. Why transform the bounding frustrum into lights view space?
> 2. What is the point of the crop matrix? Couldn't the 'scale' go into making the ortho matrix? What is the purpose of the 'offset' - to translate the view matrix from the center?

Short answer: you need the min/max values from the light's point of view.

Here you can find the answers, this is a nice nvidia paper. It was a great reference for me, at least. :) The algorithm is described on the 7th page.

Another good reference for PSSM.
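As a sanity check of the crop-matrix formulas: per axis they are just an affine remap of light-space [min, max] onto NDC [-1, 1], which answers the "what is the scale/offset for" question. A minimal sketch (hypothetical `cropRemap` helper, not code from the paper):

```cpp
#include <cassert>

// Remap x from light-space [mn, mx] to NDC [-1, 1] using the same
// scale/offset formulas as the crop matrix above.
float cropRemap(float x, float mn, float mx) {
    float scale  = 2.0f / (mx - mn);
    float offset = -0.5f * (mn + mx) * scale;
    return x * scale + offset;
}
```

So the crop matrix tightens the generic [-1, 1] ortho projection around whatever slab of light space the frustum actually occupies; folding the same scale/offset into the ortho matrix directly would be equivalent.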

##### Share on other sites

Resurrecting this thread, as I've done some more work on this.

I have it working - but it wastes so much resolution on empty space using the code below, which does not take the scene objects into account. See image:

Code:

```cpp
Mat4 DX11DirectionalLightPass::CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir)
{
    Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), glm::normalize(lightDir), Vec3(0.0f, 1.0f, 0.0f));

    Vec4 transf = lightViewMatrix * cameraFrustrum[0];
    float maxZ = transf.z, minZ = transf.z;
    float maxX = transf.x, minX = transf.x;
    float maxY = transf.y, minY = transf.y;
    for (uint32_t i = 1; i < 8; i++)
    {
        transf = lightViewMatrix * cameraFrustrum[i];

        if (transf.z > maxZ) maxZ = transf.z;
        if (transf.z < minZ) minZ = transf.z;
        if (transf.x > maxX) maxX = transf.x;
        if (transf.x < minX) minX = transf.x;
        if (transf.y > maxY) maxY = transf.y;
        if (transf.y < minY) minY = transf.y;
    }

    Mat4 orthoMat = OrthographicMatrix(-1, 1, 1, -1, -maxZ, -minZ);

    const float scaleX = 2.0f / (maxX - minX);
    const float scaleY = 2.0f / (maxY - minY);
    const float offsetX = -0.5f * (minX + maxX) * scaleX;
    const float offsetY = -0.5f * (minY + maxY) * scaleY;

    Mat4 cropMat(1.0f);
    cropMat[0][0] = scaleX;
    cropMat[1][1] = scaleY;
    cropMat[3][0] = offsetX;
    cropMat[3][1] = offsetY;

    return cropMat * orthoMat * lightViewMatrix;
}
```


Reading the nvidia CSM sample, it suggests using an AABB around all objects in each split, comparing it to the frustum's AABB, and taking the smallest of the two - like this:

```cpp
Mat4 DX11DirectionalLightPass::CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const RenderableMeshes& meshes, const Vec3& lightDir)
{
    Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), glm::normalize(lightDir), Vec3(0.0f, 1.0f, 0.0f));

    // frustum aabb
    Vec4 transf = lightViewMatrix * cameraFrustrum[0];
    float maxZ = transf.z, minZ = transf.z;
    float maxX = transf.x, minX = transf.x;
    float maxY = transf.y, minY = transf.y;
    for (uint32_t i = 1; i < 8; i++)
    {
        transf = lightViewMatrix * cameraFrustrum[i];

        if (transf.z > maxZ) maxZ = transf.z;
        if (transf.z < minZ) minZ = transf.z;
        if (transf.x > maxX) maxX = transf.x;
        if (transf.x < minX) minX = transf.x;
        if (transf.y > maxY) maxY = transf.y;
        if (transf.y < minY) minY = transf.y;
    }

    // min/max corners of the AABB of all the objects in the frustum
    Vec3 modelMin = minAABB(meshes, cameraFrustrum, lightViewMatrix);
    Vec3 modelMax = maxAABB(meshes, cameraFrustrum, lightViewMatrix);

    // take the smaller of the frustum AABB and the models' AABB
    if (modelMin.x > minX) minX = modelMin.x;
    if (modelMax.x < maxX) maxX = modelMax.x;
    if (modelMin.y > minY) minY = modelMin.y;
    if (modelMax.y < maxY) maxY = modelMax.y;
    if (modelMin.z > minZ) minZ = modelMin.z;
    if (modelMax.z < maxZ) maxZ = modelMax.z;

    Mat4 orthoMat = OrthographicMatrix(-1, 1, 1, -1, -maxZ, -minZ);

    const float scaleX = 2.0f / (maxX - minX);
    const float scaleY = 2.0f / (maxY - minY);
    const float offsetX = -0.5f * (minX + maxX) * scaleX;
    const float offsetY = -0.5f * (minY + maxY) * scaleY;

    Mat4 cropMat(1.0f);
    cropMat[0][0] = scaleX;
    cropMat[1][1] = scaleY;
    cropMat[3][0] = offsetX;
    cropMat[3][1] = offsetY;

    return cropMat * orthoMat * lightViewMatrix;
}
```


The result looks bonkers:

It is indeed tighter than the first... but it clips some of the objects - mainly the tower in the back. How do I adjust it so that all the objects are in full view, but no larger?

Any idea why this is?

Edited by KaiserJohan

##### Share on other sites

This is how I imagine the issue looks:

[attachment=26729:pssm1.png]

The bottom two show the problem as I understand it: the objects' AABB is smaller than the frustum AABB, but after applying the light's view matrix the full geometry is not visible. Is there any way to make sure the WHOLE objects' AABB is visible AFTER applying the light's view matrix? I'm sure this is simple in math terms, but I have no idea where to start.
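One standard way to handle this (my suggestion, not something stated in the posts above) is to transform all 8 corners of the world-space AABB by the light's view matrix and re-fit a new AABB around the transformed corners; the rotated box is contained in the refit box by construction, so no geometry inside the original AABB can fall outside the bounds. A sketch with plain stand-in types and a 3x3 rotation standing in for the light view matrix:

```cpp
#include <algorithm>
#include <cassert>
#include <cfloat>

struct Vec3 { float x, y, z; };

// Rotate every corner of the box [mn, mx] and fit a new AABB around
// the result.
void refitAabb(const float rot[3][3], Vec3 mn, Vec3 mx,
               Vec3& outMin, Vec3& outMax) {
    outMin = {  FLT_MAX,  FLT_MAX,  FLT_MAX };
    outMax = { -FLT_MAX, -FLT_MAX, -FLT_MAX };
    for (int i = 0; i < 8; ++i) {
        // Pick each corner from the min/max extents.
        Vec3 c = { (i & 1) ? mx.x : mn.x,
                   (i & 2) ? mx.y : mn.y,
                   (i & 4) ? mx.z : mn.z };
        Vec3 p = { rot[0][0]*c.x + rot[0][1]*c.y + rot[0][2]*c.z,
                   rot[1][0]*c.x + rot[1][1]*c.y + rot[1][2]*c.z,
                   rot[2][0]*c.x + rot[2][1]*c.y + rot[2][2]*c.z };
        outMin.x = std::min(outMin.x, p.x); outMax.x = std::max(outMax.x, p.x);
        outMin.y = std::min(outMin.y, p.y); outMax.y = std::max(outMax.y, p.y);
        outMin.z = std::min(outMin.z, p.z); outMax.z = std::max(outMax.z, p.z);
    }
}
```

The refit box is generally larger than the tightest oriented fit, but it can never clip geometry the way intersecting the two min/max boxes directly does.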

##### Share on other sites

Sorry to bump again, but does anyone have any hints on how to fix this?

##### Share on other sites

> Reading nvidia CSM sample, it suggests using a AABB around all objects in each split, and comparing it to the frustrums AABB, and take the smallest value of them

That’s what I told you to do in your sister topic to this one.
Make a maximum bounding box, then tighten it around all the objects in it.

> It is indeed tighter than the first... but it clips some of the objects - mainly, the tower in the back.

Is that really a problem? Is the part of the tower that is clipped actually casting a visible shadow?
The code you posted only shrinks the original light frustum, not enlarges it. The first step is to make the largest frustum that can possibly cast shadows into your scene (which is why the refinement step only shrinks it—making it larger wouldn’t make sense), so if the first pass cut off parts of objects then it is because it determined they are not able to produce a visible shadow.

By the way, not on-topic, but it certainly doesn't hurt to point out that you should probably change how you generate mipmaps so that you are using either a Kaiser filter or a Lanczos filter. Based on how blurry your tile texture is getting in the distance, I'm guessing you are using a box filter, as most tools would have you do. This really isn't the best way to go.
http://www.number-none.com/product/Mipmapping,%20Part%201/index.html

L. Spiro

Edited by L. Spiro
