Shadow Mapping

Started by
20 comments, last by Neutrinohunter 16 years, 2 months ago
I have a few questions about shadow mapping. I have managed to get an implementation working, but I have various artefacts, and there are things I want to add.

Artefact: http://img209.imageshack.us/img209/644/artefactqn5.jpg

See the lines on the left-hand side of the picture; those are artefacts produced at a 512x512 resolution. I'm wondering what causes them - the light projection matrix? Also, how would I go about getting a high-quality shadow map which blends nicely onto a surface? At the moment it's very flat shaded (i.e. all black).

neutrinohunter
It's caused by the modelview-projection matrix of the light. You're projecting a "pyramid" that has its apex at the light source and grows in the direction of the light; anything outside it gets a garbage projection. How do you solve that artifact? There are two ways.

The tricky way - don't render directional lights, only omnidirectional lights. These are processed by rendering depth maps for the six faces of a cube.

The correct way - do everything in a shader. If the projected coordinate is outside the shadow map's limits (which is what produces the artifact you've got), don't shadow that pixel. If you want further explanation, just say.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

Yes please :)

I've incorporated a p-buffer to see how it can improve the quality, and I'm eventually going to work on other shadowing methods (VSM, PSSM, CSM, etc.) once I've done some reading.

Haven't you also got your own technique which you are developing?

I would eventually like to write a shader version, but if there is a way to improve on this in the meantime, I would also like to hear it.

Thanks, neutrinohunter
The shadows on the torus look very strange to me. They are just not consistent.

You must have made some mistake in the depth determination.

By the way, why did you add an ambient light source to your scene? It makes the scene look even stranger. If you really want an ambient light source, you should replace your shadow with the ambient light color.

Also, if you want that much from your shadow algorithm, don't use shadow mapping. Use shadow volumes instead. Although there are tricks to make shadow-mapping artifacts less obvious, they are not worth implementing.
It's because I added an option to change the bias and didn't set it in that example.

I would like to know the tricks, as they would be helpful before I write a shader version. I am implementing shadow volumes too; I just wanted to see if I could get a better-quality shadow map.

Vilem Otte - I just read through my post and it seems contradictory :) I would be grateful for any explanation of your shader option and, if possible, your own shadow algorithm.

Thanks, Neutrinohunter
Quote:
...
I would like to know the tricks, as they would be helpful before I write a shader version. I am implementing shadow volumes too; I just wanted to see if I could get a better-quality shadow map.
...
Thanks, Neutrinohunter


People use shadow mapping for a better frame rate. I don't think you can find a shadow-mapping algorithm that has better quality than shadow volumes.

There are some variants of the shadow-mapping algorithm that try to fix its artifacts (e.g. Adaptive Shadow Maps, http://www.graphics.cornell.edu/pubs/2001/FFBG01.html). However, it is simpler and easier to just use shadow volumes when the shadow-mapping artifacts matter.
ma_hty - Well, I must disagree with you. I (and many other people) use shadow mapping to get soft penumbra shadows at much higher quality than can be achieved with shadow volumes, and at a much higher frame rate. So I really think I get better quality than shadow volumes (well, I'm using some tricks with raytracing to get better shadow filtering, like distance-based shadow attenuation, penumbra-size attenuation, etc.). I can show an example of my method here; it's called PCMLSM:
http://www.gamedev.net/community/forums/topic.asp?topic_id=478346

Let's get back to Neutrinohunter. I'll first explain why NOT to use the hardware shadow-mapping extensions, and then show a method using GLSL shaders (the best way - Cg is just for NVidia!).
1. Hardware shadow mapping (GL_ARB_shadow, GL_ARB_depth_texture, GL_SGIX_shadow, GL_NV_shadow_mapping, ...) has some advantages:
1. It's a little faster.
2. It's supported on many of today's GPUs.
But many more disadvantages:
1. Omnidirectional lights (lights that cast shadows into the whole environment, represented by a point or an area rather than a direction) cannot be done with HW shadow maps.
2. The Radeon HD 2xxx and 3xxx series have a bug in HW shadow mapping!
3. Precision is at most 24-bit depth (with shader shadow mapping, up to 128-bit depth!), so you'd need a very small bias.
Shader shadow mapping has many more advantages (supported on all GPUs with shader support!, much easier to filter bilinearly, makes variance shadow mapping possible) and just two disadvantages: it's a little slower (around 10%-20%) and harder to implement (well, not that hard).

So now to the method (this is going to be my longest post :D). Shadow mapping is a multi-pass algorithm (you need a first pass to render the depth, or an INDEXED texture; the second pass does the projecting and comparing). Let's go for the first pass:
1st pass:
You do everything the usual way, like this:
1. Clear the buffer and load the identity matrix (standard glClear and glLoadIdentity).
2. Set the camera perspective like this:
glMatrixMode(GL_PROJECTION); // We'll operate on the projection matrix, which takes care of polygon rasterization
glLoadIdentity();            // Load identity matrix
gluPerspective(45.0f, 1.0f, 1.0f, 1500.0f); // Set angle of view (1st parameter), aspect ratio (2nd), near clipping plane (3rd) and far clipping plane (4th)
glMatrixMode(GL_MODELVIEW);  // We'll operate on the modelview matrix, which takes care of transformations
glLoadIdentity();            // Load identity matrix

3. Now set the camera to your camera's view and get the inverse matrix (this is a little harder), like this:
// Set using e.g. gluLookAt; camera is my camera class (it has 3 vectors)
gluLookAt(camera.mPos.x,  camera.mPos.y,  camera.mPos.z,
          camera.mView.x, camera.mView.y, camera.mView.z,
          camera.mUp.x,   camera.mUp.y,   camera.mUp.z);
// Now get the inverse camera matrix into a float[16]
camera.GetInverseMatrix(CameraInverseMatrix);

where GetInverseMatrix does this (it's not a standard general matrix inverse; it can be done a simpler way because we don't need every piece of information):
void CCamera::GetInverseMatrix(float mCameraInverse[16])
{
	float m[16] = {0};
	// Get OpenGL's modelview matrix into m
	glGetFloatv(GL_MODELVIEW_MATRIX, m);
	// Transpose the rotation part
	mCameraInverse[0]  = m[0]; mCameraInverse[1] = m[4]; mCameraInverse[2]  = m[8];
	mCameraInverse[4]  = m[1]; mCameraInverse[5] = m[5]; mCameraInverse[6]  = m[9];
	mCameraInverse[8]  = m[2]; mCameraInverse[9] = m[6]; mCameraInverse[10] = m[10];
	mCameraInverse[3]  = 0.0f; mCameraInverse[7] = 0.0f; mCameraInverse[11] = 0.0f;
	mCameraInverse[15] = 1.0f;
	// Invert the translation: -R^T * t
	mCameraInverse[12] = -(m[12] * m[0]) - (m[13] * m[1]) - (m[14] * m[2]);
	mCameraInverse[13] = -(m[12] * m[4]) - (m[13] * m[5]) - (m[14] * m[6]);
	mCameraInverse[14] = -(m[12] * m[8]) - (m[13] * m[9]) - (m[14] * m[10]);
}
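This shortcut works because the camera's modelview matrix is a rigid transform (rotation plus translation), whose inverse is the transposed rotation with a rotated, negated translation. A minimal CPU-side sketch of the same trick, with no OpenGL (the helper names are my own), checking that M * M^-1 gives the identity:

```cpp
#include <cassert>
#include <cmath>

// Column-major 4x4 multiply, the same layout OpenGL uses: out = a * b
void mat4Mul(const float a[16], const float b[16], float out[16]) {
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            out[c * 4 + r] = 0.0f;
            for (int k = 0; k < 4; ++k)
                out[c * 4 + r] += a[k * 4 + r] * b[c * 4 + k];
        }
}

// Rigid-transform inverse: transpose the 3x3 rotation, then set
// the translation to -R^T * t (exactly what GetInverseMatrix does)
void rigidInverse(const float m[16], float inv[16]) {
    inv[0] = m[0]; inv[1] = m[4]; inv[2]  = m[8];
    inv[4] = m[1]; inv[5] = m[5]; inv[6]  = m[9];
    inv[8] = m[2]; inv[9] = m[6]; inv[10] = m[10];
    inv[3] = inv[7] = inv[11] = 0.0f; inv[15] = 1.0f;
    inv[12] = -(m[12] * m[0]) - (m[13] * m[1]) - (m[14] * m[2]);
    inv[13] = -(m[12] * m[4]) - (m[13] * m[5]) - (m[14] * m[6]);
    inv[14] = -(m[12] * m[8]) - (m[13] * m[9]) - (m[14] * m[10]);
}

// Build a test matrix: rotation about Z by `angle`, plus a translation
void makeRigid(float angle, float tx, float ty, float tz, float m[16]) {
    float c = std::cos(angle), s = std::sin(angle);
    float r[16] = { c, s, 0, 0,  -s, c, 0, 0,  0, 0, 1, 0,  tx, ty, tz, 1 };
    for (int i = 0; i < 16; ++i) m[i] = r[i];
}

bool isIdentity(const float m[16], float eps) {
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            if (std::fabs(m[c * 4 + r] - (c == r ? 1.0f : 0.0f)) > eps)
                return false;
    return true;
}
```

Note that this only works for rigid transforms; a modelview matrix containing scaling would need a full inverse.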

4. Set the view to the light's view (I assume a directional, not an omnidirectional, light today), e.g. with the standard gluLookAt().
5. Turn on the shadow-map shader and send it one float - the far clipping plane. The shader code will look like this (I'm using an approach I found on codesampler, because it's more accurate than just using depth):
Vertex Shader
// pass the vertex position in eye space (here: the light's view space) to the fragment shader
varying vec4 vertex;

void main()
{
	// vertex in eye space = modelview matrix (mat4) * vertex (vec4)
	vertex = gl_ModelViewMatrix * gl_Vertex;
	// Final position (multiplying the vertex by the modelview and projection matrices gives the rasterized vertex position)
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}


Fragment Shader
// vertex position from the vertex shader
varying vec4 vertex;
// far clipping plane, passed as a uniform from OpenGL
uniform float zFar;

void main()
{
	// bit shifts and carry mask for encoding a float as a vec3 (decode later with dot())
	vec3 bitSh = vec3(256.0 * 256.0, 256.0, 1.0);
	vec3 bitMask = vec3(0.0, 1.0 / 256.0, 1.0 / 256.0);
	// distance from the light to the vertex, divided by zFar (white at the far clipping plane, black at the near clipping plane)
	float dist = length(vertex) / zFar;
	// take the fractional parts of the shift vector multiplied by the distance...
	vec3 comp = fract(bitSh * dist);
	// ...and subtract the masked carry from them
	comp -= comp.xxy * bitMask;
	// This gives us the depth as RGB values (24-bit precision without HDR textures; with RGBA32F shadow maps it would be 96-bit, or 128-bit if the alpha channel is used too)
	gl_FragColor = vec4(comp.x, comp.y, comp.z, 1.0);
}
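The encode here and the dot-product decode used in the second pass are exact inverses of each other: the carry subtraction removes exactly the part of each channel that the higher channel already stores. A quick CPU sketch of the round trip (my own helper names; quantization to 8-bit texture channels is left out):

```cpp
#include <cassert>
#include <cmath>

// GLSL-style fract()
static float fract(float x) { return x - std::floor(x); }

// Mirror of the fragment shader: split a normalized depth in [0,1) into 3 channels
void encodeDepth(float dist, float comp[3]) {
    const float bitSh[3]   = {256.0f * 256.0f, 256.0f, 1.0f};
    const float bitMask[3] = {0.0f, 1.0f / 256.0f, 1.0f / 256.0f};
    float c0 = fract(bitSh[0] * dist);
    float c1 = fract(bitSh[1] * dist);
    float c2 = fract(bitSh[2] * dist);
    // comp -= comp.xxy * bitMask (the swizzle uses the pre-subtraction values)
    comp[0] = c0 - c0 * bitMask[0];
    comp[1] = c1 - c0 * bitMask[1];
    comp[2] = c2 - c1 * bitMask[2];
}

// Mirror of the second-pass shader: dot(comp, bitShifts) recovers the depth
float decodeDepth(const float comp[3]) {
    return comp[0] / (256.0f * 256.0f) + comp[1] / 256.0f + comp[2];
}
```

Expanding the decode algebraically, the c0 and c1 terms cancel and the result collapses back to fract(dist), which is dist itself for depths in [0,1).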

6. (Back in the application) Turn off the shadow-map shader and get the modelview and projection matrices into two float[16] variables like this:
glGetFloatv(GL_PROJECTION_MATRIX, Projection);
glGetFloatv(GL_MODELVIEW_MATRIX, ModelView);

7. Now render to a texture (or grab it from the framebuffer/p-buffer).

Second pass: this happens in the "main" OpenGL part, where you render the whole scene - the so-called process. Everything we've done so far was pre-process; there is also a post-process (which takes care of e.g. bloom effects).
1. Bind the shadow map to one of the texture units (or the main texture unit) and load the bias matrix (float Bias[16] = {0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0}) into the texture matrix. Then multiply it with the light's projection, modelview, and camera inverse matrices like this:
// Bind the texture (must be done after selecting the texture unit, if you set one)
glBindTexture(GL_TEXTURE_2D, Shadow.texture);
// We'll operate on the texture matrix
glMatrixMode(GL_TEXTURE);
// Load the bias matrix (the texture begins in the lower-left corner, the shadow map in the middle)
glLoadMatrixf(Bias);
// Multiply by the light's projection matrix (this gives us the texture projection)
glMultMatrixf(Projection);
// Multiply by the light's modelview matrix (this gives us the texture projection after transforming)
glMultMatrixf(ModelView);
// Multiply by the camera inverse (to cancel out the camera's transform in the texture)
glMultMatrixf(CameraInverse);
glMatrixMode(GL_MODELVIEW);
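Conceptually this glMultMatrixf sequence composes a single matrix, Bias * Projection * ModelView * CameraInverse, and the Bias part exists only to remap the light's clip-space range [-1,1] into texture-space [0,1]. A small sketch (column-major layout as OpenGL uses; the helper name is my own) demonstrating just that remapping:

```cpp
#include <cassert>
#include <cmath>

// Column-major 4x4 matrix times vec4, as OpenGL stores matrices
void mat4TransformPoint(const float m[16], const float v[4], float out[4]) {
    for (int r = 0; r < 4; ++r)
        out[r] = m[0 * 4 + r] * v[0] + m[1 * 4 + r] * v[1] +
                 m[2 * 4 + r] * v[2] + m[3 * 4 + r] * v[3];
}

// The bias matrix from the post: scale by 0.5, then offset by 0.5 on each axis
static const float Bias[16] = {
    0.5f, 0.0f, 0.0f, 0.0f,
    0.0f, 0.5f, 0.0f, 0.0f,
    0.0f, 0.0f, 0.5f, 0.0f,
    0.5f, 0.5f, 0.5f, 1.0f
};
```

The clip-space corner (-1,-1,-1) lands on texture coordinate (0,0,0) and (1,1,1) lands on (1,1,1), so after the full matrix chain, shadowProj.xy can be used directly as shadow-map texture coordinates.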

2. In the "scene" shader (which we now turn on), which can be e.g. a BRDF lighting shader, you must do:
Vertex Shader
// before main
varying vec3 lightDist;
varying vec4 shadowProj;
uniform vec3 lightPos;
...
// in main
...
// shadow projection = texture matrix of the shadow map's texture unit (0 if it's the main unit) * vertex in eye space
shadowProj = gl_TextureMatrix[ShadowMapTextureUnit] * (gl_ModelViewMatrix * gl_Vertex);
// lightDist is used for the depth comparison; for a static scene leave it like this, for a dynamic scene you'd need the light's inverse transform, or to do everything in world space
lightDist = lightPos - gl_Vertex.xyz;
...


Fragment Shader
// before main
varying vec3 lightDist;
varying vec4 shadowProj;
uniform sampler2D shadowMap;
uniform float zFar;
...
// in main
...
// bit shifts for decoding the RGB channels back into a float
vec3 bitShifts = vec3(1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
// resolve the projection by dividing shadowProj.xyz by its w component (we're projecting a "box" into infinity, not growing its sides)
vec3 pos = shadowProj.xyz / shadowProj.w;
// Shadow-map bias; I chose the number 2.5. length(lightDist) is the distance from the light to the pixel, zFar is the far clipping plane. With 128-bit precision no bias would be necessary, because that precision is very BIG.
float bias = (length(lightDist) - 2.5) / zFar;
// dot(texture, bitShifts) decodes the shadow map's vec3 back into a float, which we compare against the bias. If the stored depth is greater, there is no shadow; otherwise there is (further explanation of this comparison method is on Wikipedia).
float shadow = clamp(float(dot(texture2D(shadowMap, pos.xy).xyz, bitShifts) > bias), 0.0, 1.0);
...
// Finally, multiply vec4 finalBRDF (which is e.g. texture or lighting, computed or sent into the shader) by shadow - but only if the pixel is inside the projection pyramid; if not, render it without shadow
if(pos.x < 0.05 || pos.x > 0.95 || pos.y < 0.05 || pos.y > 0.95)
	gl_FragColor = finalBRDF;
else
	gl_FragColor = finalBRDF * shadow;
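The decision logic of this fragment shader, sketched on the CPU with hypothetical values (the stored shadow-map depth is assumed to be already decoded back into a float, and all names here are my own): a fragment is shadowed when the depth the light recorded is smaller than the fragment's biased, normalized distance to the light, and fragments projecting outside the map are left unshadowed - which is exactly the fix for the artifact from the start of the thread.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Returns 1.0 for a lit fragment, 0.0 for a shadowed one
float shadowFactor(Vec3 projPos, float storedDepth, Vec3 lightDist, float zFar) {
    // Outside the shadow map's projection: render unshadowed (no artifact)
    if (projPos.x < 0.05f || projPos.x > 0.95f ||
        projPos.y < 0.05f || projPos.y > 0.95f)
        return 1.0f;
    // Biased distance from light to fragment, normalized by the far plane
    float len = std::sqrt(lightDist.x * lightDist.x +
                          lightDist.y * lightDist.y +
                          lightDist.z * lightDist.z);
    float bias = (len - 2.5f) / zFar;
    // Lit if the nearest occluder the light saw is at (or beyond) this fragment
    return storedDepth > bias ? 1.0f : 0.0f;
}
```

For example, with zFar = 1500 and a fragment 600 units from the light, a stored occluder depth of 0.2 (an occluder at ~300 units) shadows it, while a stored depth of 0.4 (the fragment seeing itself) leaves it lit.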

And that's it. I hope this is understandable, because I'm a not-so-good teacher. You asked about a way to solve it without shaders - it's possible, but damn slow: use your light pyramid (which is hard to calculate) as a stencil "shadow volume"; if a pixel passes, do the shadow-map comparison; if the test fails, do nothing. Anyway, I don't like the method without shaders.

About my own technique - sorry, I won't describe it right here and now, because I haven't got the time. I'm preparing an article about it, but I have no time, so it should arrive sometime this year (I hope as soon as possible - I can't promise, but I'll have more time in February).

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

Vilem Otte

When you say your method (PCMLSM) is faster, with better quality, than shadow volumes, there should be some kind of common ground for comparison, right? If so, how do you measure "faster"? And how do you measure "better"?

Have you looked at what other people do with shadow volumes before saying that?

Is your method a novel algorithm? If so, please let me know when it is published or patented. I would be excited to learn about your novel algorithm.

If it is not a novel algorithm, can you tell me what algorithm you are actually using?

Gary
Well, faster and better - it depends which side you're looking at it from. For example, the algorithm produces much softer shadows than can be achieved with shadow volumes. Compared with penumbra shadows from shadow volumes, it's much faster (if you don't take softness into account, because those aren't as soft). If you want sharper shadows, it's better to use shadow volumes. I can say just this: none of these algorithms is the best. The best would be area shadows using rays (yes, I'm talking about raytracing), but that's not possible even on today's high-end hardware. So for me it's the better algorithm, and not just for me.

I have looked at what others can do with shadow volumes (I'm not a shadow-volume expert, but I know how to achieve penumbra shadows with them). It's a really nice shadow algorithm, but it's geometry-based, and for this kind of filter I needed something texture-based (I know shadow volumes can be converted to a texture, but that takes time). I also prefer softer shadows (more realistic), without artifacts when the light size is extreme, plus I need a texture-based algorithm.

Well, it's a combined NEW algorithm and it's still in development. It uses shadow maps, but not just one shadow map - it uses multi-layered shadow maps (ML means Multi-Layered), and even more than one multi-layered shadow map (it depends on the quality you need). It hasn't been published or patented (it's still not finished - there are some bottlenecks; we know how to solve them, but it takes time to debug the method so that it's fast (it can be much faster than today's version) and much more accurate). It can simulate area lights (it was originally designed to handle area lights, because in the real world no light source has a size of zero). I'm going to write a long whitepaper (including a demo example) on how to implement PCMLSM, but it uses realtime raytracing to calculate some more "things", and I can't release anything from the raytracing API I developed for the project I'm working on now. I'm going to release it somehow; it's a combined method of rasterization (OpenGL, probably possible with DirectX too) and raytracing (my own realtime raytracing API).

Anyway, it's a novel algorithm, but it uses some other algorithms to look much better. Standard shadow mapping, of course; it can use parallel-split shadow maps (for better-quality shadows when rendering larger areas) - tested! - or probably any of the perspective-warp techniques (PSM, LiSPSM, TSM, ...) - not tested! It's filtered using a bilinear percentage-closer filter (PCF); it could possibly use a variance-shadow-map filter too (for a faster, softer filter), but that hasn't been tested. The whole algorithm is based on getting coefficients from rays and blurring shadow values (+ radiosity softening!) from shadow maps taken from so-called "volumetric points" inside the light's area. I'll explain the whole algorithm in the whitepaper I'm planning to write (but I don't know exactly when - probably February, but I can't promise).

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

There are quite a lot of variations of shadow volumes for rendering soft shadows. I just searched Google with the keywords shadow volume, penumbra and shadow, and already got quite a lot of relevant, well-documented algorithms on the first page.

Here's one of them.

http://www2.imm.dtu.dk/visiondag/VD03/grafisk/tomasmoeller.html

Is your method better than those?

To be frank, I don't believe your subjective judgment. In particular, the conclusions you've arrived at are mostly based on your feelings.

Although I don't know whether your algorithm (yours?) is really better or not, one thing is quite obvious: you don't have enough information to tell either.

Here is my suggestion: until you have enough evidence, don't mention shadow volumes when you present your work. It is just not a good idea for you to do that.

