Shadow mapping works on Intel but not Nvidia?

I recently had some free time at work, and since the Intel graphics card in my machine supports OpenGL 3, I decided to see if my engine would run... it does. So I figured the only thing it was really missing right now was my shadow mapping, converted it over, and it worked with flying colors. It only took me about three hours to add to the renderer.

So I thought that was too easy, especially for an Intel card. I brought the code home, committed it to my SVN, and compiled it on my home computer with a GeForce 550 made by Nvidia themselves, which supports OpenGL 4.2. Suddenly the shadows don't work at all; in fact, the whole screen is black...

Has anyone else run into this? I've been faithful to Nvidia, and the engine has to work on both Nvidia and ATI, so this is very strange. Normally it's the Intel cards I'm fighting with.

Anyway, I was just hoping for some quick advice, maybe a couple of things to check out... here are some screenshots.

Here is a screenshot from work of the shadows working on the Intel GMA card:
finally_organized_added_shadows.png


Here is the shadow map as generated on my Nvidia card (it is exactly the same as the one from the Intel card!):
shadow_map.png


But alas, like I said, the screen is black...

I looked at it in gDEBugger and there seem to be no errors, and specifically no OpenGL errors.

Once again, any thoughts would be greatly appreciated!
Douglas Eugene Reisinger II
Projects/Profile Site
This is one of the main problems with OpenGL: what works for you may not work for someone else (or works on one of your machines but not on the others).

It could very easily be a vendor bug. On earlier Nvidia cards I ran into a bug in which they were simply unable to set a bool to true or false.
More recently, all textures were black on my low-end Nvidia card but perfectly fine everywhere else. It turned out to be an inconsistent mipmap-level calculation rule that caused Nvidia to consider my mipmaps incomplete, whereas they were complete on every other card.
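One way to take that kind of completeness ambiguity out of the equation is to state the mipmap level range explicitly when the texture is created. A minimal sketch, assuming a 2D texture where only level 0 has been uploaded (textureId is a placeholder, not from any real code here):

// Hypothetical sketch: make a texture unambiguously mipmap-complete so
// stricter drivers accept it.
glBindTexture(GL_TEXTURE_2D, textureId);
// Only level 0 was uploaded, so say so explicitly...
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
// ...or, if mipmapped filtering is actually wanted, build the full chain instead:
// glGenerateMipmap(GL_TEXTURE_2D);
// glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);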


There will probably not be anyone who can help with this without having hit exactly the same issue in the past.
You will have to read the OpenGL specification very carefully and look for anything you are doing that is only "mostly" defined rather than explicitly defined by the standard.
Be sure you are not using extensions that only one family of cards supports.

If you are using framebuffer objects, attach the color buffers first, then the depth buffer.
I have heard that some cards have problems if they are attached in the wrong order.
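For reference, a minimal sketch of that attachment order, assuming colorTex and depthTex are textures you have already created (the names and error handling are placeholders, not anything from your engine):

// Hypothetical FBO setup: color attachment first, then depth, then a
// completeness check before rendering into it.
GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // Log and bail out here; an incomplete FBO will silently render garbage.
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);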

It is these kinds of little things you need to check carefully.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Yeah, I've run into this kind of thing before. My Nvidia card is rather new too! It's a 550 and is actually made by Nvidia. Oh well, I'll keep digging. Thanks for the info; maybe it's the FBO thing.
Douglas Eugene Reisinger II
Projects/Profile Site
The FBO color and depth attachments are definitely in the proper order! I actually have an Nvidia GeForce GTX 550 Ti. The thing I don't quite get is that everything works right up until the lighting part. I mean, the shadow map is correctly stored in the depth buffer, and my shader hasn't changed except that I switched from shadow2DProj to textureProj, because the former is deprecated. But the shadow term always evaluates to 0. Perhaps it's a precision difference between the cards, but everything else that deals with FBOs works, including an SSAO effect based on depth. I'll keep digging, though.
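For reference, this is roughly what I understand the replacement is supposed to look like; a sketch only, with made-up names rather than my actual shader:

#version 130

// Hypothetical fragment-shader excerpt. textureProj on a sampler2DShadow is
// the non-deprecated replacement for shadow2DProj: it does the projective
// divide and the depth comparison itself, returning 0.0 (shadowed) to 1.0 (lit).
uniform sampler2DShadow shadow_map;
in vec4 shadow_coord;   // vertex position transformed by the shadow_mapping_bias matrix

out vec4 frag_color;

void main()
{
    float lit = textureProj(shadow_map, shadow_coord);
    frag_color = vec4(vec3(lit), 1.0);
}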
Douglas Eugene Reisinger II
Projects/Profile Site
Turns out it was me not following the OpenGL spec properly, and Nvidia just enforces it more strictly...

I needed to add the correct texture parameters...

My code went from:

graphics->SetActiveTexture(2);
shaders->SetInt("shadow_mapping_enabled", 1);
shaders->SetMatrix4("shadow_mapping_bias", shadow_mapping_bias * light->GetProjectionMatrix() * light->GetViewMatrix());
graphics->BindTexture(light->GetShadowBuffer()->GetDepthTextureId(), 2);


To:

graphics->SetActiveTexture(2);
// Enable hardware depth comparison on the shadow map so the shadow sampler
// returns a comparison result instead of raw depth values.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);
shaders->SetInt("shadow_mapping_enabled", 1);
shaders->SetMatrix4("shadow_mapping_bias", shadow_mapping_bias * light->GetProjectionMatrix() * light->GetViewMatrix());
graphics->BindTexture(light->GetShadowBuffer()->GetDepthTextureId(), 2);


The problem is, the first light is not generating a shadow, but I'm sure that's just a lingering effect of not setting the parameters back.
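Side note in case it helps anyone later: on a 3.x context the ARB suffixes aren't needed, and since texture parameters are stored on the texture object itself, it is probably safer to set them while the shadow map is actually bound (or once when it is created) rather than on whatever happens to be bound to the unit. A sketch of what I mean, reusing the same ids as above:

// Core (non-ARB) names for the same state, available since OpenGL 3.0.
// Texture parameters stick to the texture object, so bind the shadow map first.
glBindTexture(GL_TEXTURE_2D, light->GetShadowBuffer()->GetDepthTextureId());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);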
Douglas Eugene Reisinger II
Projects/Profile Site
If you are using shaders, you should be performing the depth test against the depth texture yourself from within the shader, not via those built-in compare modes.
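In other words, bind the shadow map with GL_TEXTURE_COMPARE_MODE set to GL_NONE, sample it as a plain sampler2D, and compare the depths yourself. A rough sketch of that approach with placeholder names (not anyone's actual shader):

#version 130

// Hypothetical manual shadow test: read the stored depth and compare it
// against the fragment's light-space depth in the shader.
uniform sampler2D shadow_map;
in vec4 shadow_coord;   // light-space position after the bias matrix

out vec4 frag_color;

void main()
{
    vec3 proj = shadow_coord.xyz / shadow_coord.w;         // manual projective divide
    float stored = texture(shadow_map, proj.xy).r;         // depth written by the shadow pass
    float lit = (proj.z - 0.0005 <= stored) ? 1.0 : 0.0;   // small bias against shadow acne
    frag_color = vec4(vec3(lit), 1.0);
}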


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Then why did it work without those lines on the Intel card? I believe I am doing it... the spec says:

"If a non-shadow texture call is made to a sampler that represents a depth texture with depth comparisons turned on, then results are undefined. If a shadow texture call is made to a sampler that represents a depth texture with depth comparisons turned off, then results are undefined."
Douglas Eugene Reisinger II
Projects/Profile Site
