Shadow Mapping / z-Buffer texture on ATI card

Started by smani; 9 comments, last by dpadam450 13 years, 4 months ago
Hi,
I am having some problems getting shadow mapping to work on an ATI card. I guess the problem is best described by these two images:

Scene rendered with nvidia card (GF9600GT, Linux Driver 260.19.12):
http://nas.dyndns.org/temp/nvidia.png
Scene rendered with ati card (HD3400, Linux Catalyst 10.11):
http://nas.dyndns.org/temp/ati.png

For some reason the depth buffer obtained by the ATI card is missing lots of the geometry of the scene...

Code: http://nas.dyndns.org/temp/demo.zip, relevant portions:
Initialization:
///*** CREATE DEPTH TEXTURE ***///
shadowmap.width = 640;
shadowmap.height = 480;
shadowmap.biasMatrix << 0.5, 0.0, 0.0, 0.5,
                        0.0, 0.5, 0.0, 0.5,
                        0.0, 0.0, 0.5, 0.5,
                        0.0, 0.0, 0.0, 1.0;
LookAtf(shadowmap.modelViewMatrix, lightPosition, Vector3f::Zero(), Vector3f(0.f, 1.f, 0.f));
glUseProgram(shadowprog.program);
glUniformMatrix4fv(shadowprog.modelViewMatrixLocation, 1, GL_FALSE, shadowmap.modelViewMatrix.data());

glGenTextures(1, &shadowmap.texture);
glBindTexture(GL_TEXTURE_2D, shadowmap.texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); // Remove artifacts when sampling outside the domain of the shadow map
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, shadowmap.width, shadowmap.height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);

///*** CREATE SHADOW FRAMEBUFFER OBJECT ***///
glGenFramebuffers(1, &shadowmap.fbo);
glBindFramebuffer(GL_FRAMEBUFFER, shadowmap.fbo);
glDrawBuffer(GL_NONE); // Tell OpenGL that no color texture will be attached to the currently bound FBO
glReadBuffer(GL_NONE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowmap.texture, 0); // Attach depth texture to FBO
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE){
    std::cerr << "GL_FRAMEBUFFER_COMPLETE failed, CANNOT use FBO\n";
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
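
(A side note on the wrap mode: I am not 100% sure GL_CLAMP is right here - per the spec it clamps to the border color and is deprecated since GL 3.0, and NVIDIA drivers reportedly treat it like clamp-to-edge while ATI follows the spec, so the safer, portable spelling may be:)

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);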

Drawing:
///*** RENDER FROM LIGHT POV ***///
// Bind framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, shadowmap.fbo);
glClear(GL_DEPTH_BUFFER_BIT);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
// Draw
glCullFace(GL_FRONT);
glUseProgram(shadowprog.program);
for(unsigned i=0; i<models.size(); ++i){
    glBindVertexArray(models[i].vao);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, models[i].indices);
    glDrawElements(GL_TRIANGLES, models[i].n_indices, GL_UNSIGNED_INT, 0);
}
glBindVertexArray(0);
glCullFace(GL_BACK);
// Store matrix
Matrix4f textureMatrix = shadowmap.biasMatrix*shadowmap.perspectiveMatrix*shadowmap.modelViewMatrix;



This would not be the first time the ATI driver has shown problems; on the other hand, many people say that ATI is stricter about OpenGL conformance while NVIDIA is more permissive, so the code may be wrong and only render correctly on the NVIDIA card by chance.

Does anyone have any hints?
Thanks!
smani
Rule #1: never use non-power-of-two textures... especially on ATI.
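
For instance, something along these lines at startup (floorPow2 is just a hypothetical helper for illustration; a fixed 512x512 works too):

// Largest power of two <= v (hypothetical helper)
unsigned floorPow2(unsigned v){
    unsigned p = 1;
    while(p <= v/2) p *= 2;
    return p;
}

shadowmap.width  = floorPow2(640); // -> 512
shadowmap.height = floorPow2(480); // -> 256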

NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal

Thanks for your reply - unfortunately the issue persists with a 512x512 depth texture.
Is the shadow map being rendered from one fixed spot, or is it moving? The sphere is obviously not drawn the same in both images. If the shadow map location isn't locked in place, lock it and post a screenshot from both cards to get a direct comparison.

NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal

The shadow map is static; it is only the observer camera that rotates around the scene (indeed, I could have computed the shadow texture once and reused it for all subsequent frames - but I am testing the framework for a more complex project with moving shadows).

Anyhow, the two shadow maps _should_ be identical.
Also try turning off culling, and make sure your projection matrix has a far plane distant enough to capture all the objects (only relevant if your shadow map is not locked in place).
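
For example (a sketch only - PerspectiveMatrixf and sceneRadius are assumptions about your framework, in the spirit of the LookAtf() helper in your code):

glDisable(GL_CULL_FACE); // test with culling off instead of glCullFace(GL_FRONT)

// Make the light frustum deep enough to contain the whole scene; the light
// looks at the origin, so its distance is just the length of lightPosition.
float sceneRadius = 50.f; // assumed bound on the scene extent
float zNear = 1.f;
float zFar  = lightPosition.norm() + sceneRadius;
PerspectiveMatrixf(shadowmap.perspectiveMatrix, 45.f,
                   float(shadowmap.width)/float(shadowmap.height), zNear, zFar);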

NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal

Mh, still no luck...
Are you using a shader?

Based on your two images, the shadow is not being rendered from the same spot: looking at the ground, the sphere's shadow tells you that in one image the light is higher up and looking further down.

Try putting the camera at the light's position on both GPUs and validate that you are actually seeing the same two images when drawing to the screen.
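
Something like this for the debug pass (the drawprog uniform locations are assumptions, mirroring shadowprog.modelViewMatrixLocation in your code):

// Debug: feed the shadow pass matrices to the normal draw program, so the
// screen shows exactly what the shadow map "sees".
glUseProgram(drawprog.program);
glUniformMatrix4fv(drawprog.modelViewMatrixLocation, 1, GL_FALSE, shadowmap.modelViewMatrix.data());
glUniformMatrix4fv(drawprog.perspectiveMatrixLocation, 1, GL_FALSE, shadowmap.perspectiveMatrix.data());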

NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal

Well, actually the screenshots were obtained using the exact same program on both GPUs. The only difference is that they were taken at different viewer angles (note: the shadow map is always the same, since the light position is fixed). Anyhow, taking the screenshot at the same observer angle yields http://nas.dyndns.org/temp/ati.png (updated screenshot). I am quite sure I am violating some OpenGL rules when creating the shadow map, causing my program to work by chance on the NVIDIA GPU, which tends to be more forgiving, and not on the ATI GPU - it's just that I'm still clueless :s
OK, I am getting near the solution.

Apparently it has nothing to do with the depth buffer, but with how I generate the VAO and its associated shader attribute pointers:

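// (assumption: model.vao was generated with glGenVertexArrays and bound
// with glBindVertexArray before this excerpt - it is unbound at the end)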
glGenBuffers(1, &model.vertices);
glBindBuffer(GL_ARRAY_BUFFER, model.vertices);
glBufferData(GL_ARRAY_BUFFER, model.n_vertices*4*sizeof(float), shape.vertices, GL_STATIC_DRAW);
glUseProgram(drawprog.program);
glVertexAttribPointer(drawprog.vertexLocation, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(drawprog.vertexLocation);
glUseProgram(shadowprog.program);
glVertexAttribPointer(shadowprog.vertexLocation, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(shadowprog.vertexLocation);

glGenBuffers(1, &model.normals);
glBindBuffer(GL_ARRAY_BUFFER, model.normals);
glBufferData(GL_ARRAY_BUFFER, model.n_vertices*3*sizeof(float), shape.normals, GL_STATIC_DRAW);
glUseProgram(drawprog.program);
glVertexAttribPointer(drawprog.normalLocation, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(drawprog.normalLocation);

glGenBuffers(1, &model.indices);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, model.indices);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, model.n_indices*sizeof(unsigned), shape.indices, GL_STATIC_DRAW);

glBindVertexArray(0);


If I comment out the

glUseProgram(drawprog.program);
glVertexAttribPointer(drawprog.normalLocation, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(drawprog.normalLocation);

block, the shadow shader works fine. It appears that the vertex attribute pointers get mixed up between the multiple programs. Is this even legal to do? I thought that with vertex arrays, the vertex attrib pointers get saved automatically in the context of the VAO?
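
Edit: for reference, a minimal sketch of how the collision could be ruled out - forcing both programs to use the same attribute locations before linking (the attribute names "vertex" and "normal" are assumptions about my shaders):

// glVertexAttribPointer() state lives in the bound VAO, keyed by attribute
// location - it does not depend on glUseProgram(). If drawprog.normalLocation
// happens to equal shadowprog.vertexLocation, the later call overwrites the
// earlier one. Binding fixed locations before linking avoids the collision.
glBindAttribLocation(drawprog.program,   0, "vertex");
glBindAttribLocation(drawprog.program,   1, "normal");
glBindAttribLocation(shadowprog.program, 0, "vertex");
glLinkProgram(drawprog.program);
glLinkProgram(shadowprog.program);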

