Neutrinohunter

Shadow Mapping


I have a few questions about shadow mapping. I have managed to get an implementation working, but I have various artefacts, and there are things I want to add.

Artefact: http://img209.imageshack.us/img209/644/artefactqn5.jpg

See the lines on the left-hand side of the picture; those are artefacts produced at a 512x512 resolution. I'm wondering what causes them. The light projection matrix?

Also, how would I go about getting a high-quality shadow map that blends nicely onto a surface? At the moment it's very flat shaded (i.e. all black).

neutrinohunter

It's caused by the modelview/projection matrix you set for the light projection. You're projecting a "pyramid" that has its apex at the light source and grows in the direction of the light. How do you solve that artefact? There are two ways.

The tricky way: don't render directional lights, only omnidirectional lights. They're processed by projecting depth maps onto the six sides of a cube.

The correct way: do everything in a shader. If your projection is out of bounds (the artefact you've got), don't shadow that pixel. If you want further explanation, just say.
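
A minimal GLSL sketch of that test (placeholder names: shadowProj is the projected shadow coordinate before the divide by w, pos is it after the divide, shadowTerm is the result of the depth comparison and sceneColor is whatever you would normally output; a complete version appears later in this thread):

// only darken the pixel when its projected coordinate actually lands inside the map;
// outside the light's frustum (or behind the light, shadowProj.w <= 0) leave it fully lit
float lit = 1.0;
if (shadowProj.w > 0.0 &&
    pos.x >= 0.0 && pos.x <= 1.0 &&
    pos.y >= 0.0 && pos.y <= 1.0)
    lit = shadowTerm;          // 0.0 = shadowed, 1.0 = lit
gl_FragColor = sceneColor * lit;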

Yes please :)

I've incorporated a p-buffer to see how it can improve the quality, and I'm eventually going to work on other shadowing methods (VSM, PSSM, CSM, etc.) once I've done some reading.

Haven't you also got your own technique which you are developing?

I would eventually like to write a shader version, but if there is a way to improve on this in the meantime, I would also like to hear it.

Thanks, neutrinohunter

The shadows on the torus look very strange to me. They are just not consistent.

You must have made some mistake in the depth determination.

By the way, why did you add an ambient light source to your scene? It makes the scene look even stranger. If you really want an ambient light source, you should replace your shadow colour with the ambient light colour.

Also, if you want that much from your shadow algorithm, don't use shadow mapping. Use shadow volumes instead. Although there are tricks to make shadow-mapping artifacts less obvious, they are not worth implementing.

It's because I added an option to change the bias and didn't set it in that example.

I would like to know the tricks, as it would be helpful before I make a shader version. I am implementing shadow volumes too; I just wanted to see if I could get a better-quality shadow map.

Vilem Otte - I just read through my post and it seems contradictory :) I would be grateful for any explanation of your shader option and, if possible, your own shadow algorithm.

Thanks, Neutrinohunter

Quote:

...
I would like to know the tricks, as it would be helpful before I make a shader version. I am implementing shadow volumes too; I just wanted to see if I could get a better-quality shadow map.
...
Thanks, Neutrinohunter


People use shadow mapping for a better frame rate. I don't think you will find a shadow-mapping algorithm with better quality than shadow volumes.

There are variants of shadow mapping that try to fix its artifacts (e.g. Adaptive Shadow Maps, http://www.graphics.cornell.edu/pubs/2001/FFBG01.html ). However, it is simpler and easier to use shadow volumes when the shadow-mapping artifacts matter.

ma_hty - Well, I must disagree with you. I (and many other people) use it to get penumbra soft shadows at much better quality than can be achieved with shadow volumes, and at a much higher frame rate. So I really think I get better quality than shadow volumes (I'm using some tricks with ray tracing to get better shadow filtering, like distance-based shadow attenuation, penumbra-size attenuation, etc.). I can show an example of my method here; it's called PCMLSM:
http://www.gamedev.net/community/forums/topic.asp?topic_id=478346

Let's get back to Neutrinohunter. I'll first explain why NOT to use the hardware shadow-mapping extensions, and then show how to do it with GLSL shaders (the best way - Cg is just for NVIDIA!).
Hardware shadow mapping (GL_ARB_shadow, GL_ARB_depth_texture, GL_SGIX_shadow, GL_NV_shadow_mapping, ...) has some advantages:
1. It's a little faster.
2. It's supported on many of today's GPUs.
But it has more disadvantages:
1. Omnidirectional lights cannot be done with HW shadow maps (lights that cast shadows into the whole environment, represented by a point or an area, not a direction).
2. The Radeon HD 2xxx and 3xxx series have a bug in HW shadow mapping!
3. Precision is at most 24-bit depth (with shader shadow mapping you can go up to 128-bit depth, so you'd only need a very small bias).
Shader shadow mapping has many more advantages (supported on all GPUs with shader support, much easier to filter bilinearly, variance shadow mapping becomes possible) and just two disadvantages: it's a little slower (around 10%-20%) and harder to implement (well, not so hard).
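
For completeness, a minimal sketch of what the hardware (ARB) depth-compare setup looks like, so it is clear what the shader path replaces - shadowTex is just a placeholder for a depth texture already rendered from the light's point of view:

// fixed-function shadow test: the texture unit compares the R texture coordinate
// against the stored depth and returns 0 or 1 instead of the raw depth value
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_INTENSITY);
// the shadow-map coordinates themselves come from glTexGen or the texture matrix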

So now to the method (this is going to be my longest post :D). Shadow mapping is a multi-pass algorithm (the first pass renders the depth, or an indexed texture; the second pass does the projection and comparison). Let's go through the first pass:
1st pass:
You do it all the usual way, like this:
1. Clear the buffer and load the identity matrix (standard glClear and glLoadIdentity).
2. Set the camera perspective like this:

glMatrixMode(GL_PROJECTION); // We'll operate on the projection matrix, which takes care of polygon rasterization
glLoadIdentity(); // Load identity matrix
gluPerspective(45.0f, 1.0f, 1.0f, 1500.0f); // Set angle of view (1st parameter), aspect ratio (2nd), near clipping plane (3rd) and far clipping plane (4th)
glMatrixMode(GL_MODELVIEW); // We'll operate on the modelview matrix, which takes care of transformations
glLoadIdentity(); // Load identity matrix

3. Now set the camera to your camera view and get the inverse matrix (this is a little harder), like this:

// Set it using e.g. gluLookAt; camera is my camera class (it has 3 vectors)
gluLookAt( camera.mPos.x, camera.mPos.y, camera.mPos.z,
camera.mView.x, camera.mView.y, camera.mView.z,
camera.mUp.x, camera.mUp.y, camera.mUp.z);
// Now get the inverse camera matrix into a float[16]
camera.GetInverseMatrix(CameraInverseMatrix);

where GetInverseMatrix does this (it's not a standard matrix inversion; it can be done in a simpler way because the modelview here is a rigid transform, so transposing the rotation part and undoing the translation is enough):

void CCamera::GetInverseMatrix(float mCameraInverse[16])
{
// float [16] - variable m
float m[16] = {0};

// Get opengl's modelview matrix into m
glGetFloatv(GL_MODELVIEW_MATRIX, m);

mCameraInverse[0] = m[0]; mCameraInverse[1] = m[4]; mCameraInverse[2] = m[8];
mCameraInverse[4] = m[1]; mCameraInverse[5] = m[5]; mCameraInverse[6] = m[9];
mCameraInverse[8] = m[2]; mCameraInverse[9] = m[6]; mCameraInverse[10] = m[10];
mCameraInverse[3] = 0.0f; mCameraInverse[7] = 0.0f; mCameraInverse[11] = 0.0f;
mCameraInverse[15] = 1.0f;

mCameraInverse[12] = -(m[12] * m[0]) - (m[13] * m[1]) - (m[14] * m[2]);
mCameraInverse[13] = -(m[12] * m[4]) - (m[13] * m[5]) - (m[14] * m[6]);
mCameraInverse[14] = -(m[12] * m[8]) - (m[13] * m[9]) - (m[14] * m[10]);
}

4. Set the view to the light's view (I'll assume a directional, not an omnidirectional, light today), using e.g. the standard gluLookAt().
5. Turn on the shadow-map shader and send it one float - the far clipping plane. The shader code will look like this (I'm using an approach I found on codesampler, because it's more accurate than just using the depth):
Vertex Shader

// pass the vertex position in the light's eye space (the modelview here is the light's view)
varying vec4 vertex;

void main()
{
// vertex in eye space = modelview matrix (mat4) * vertex (vec4)
vertex = gl_ModelViewMatrix * gl_Vertex;
// Position of the rasterized vertex (multiplying by modelview and projection gives the clip-space position)
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}


Fragment Shader

// vertex position from the vertex shader
varying vec4 vertex;
// Far clipping plane, passed as a uniform from OpenGL
uniform float zFar;

void main()
{
// bit shifts and mask for packing the depth into RGB (to encode a float as vec3 use fract as below; to decode, use dot)
vec3 bitSh = vec3(256.0 * 256.0, 256.0, 1.0);
vec3 bitMask = vec3(0.0, 1.0 / 256.0, 1.0 / 256.0);

// distance from the light to the vertex divided by zFar (white at the far clipping plane, black at the near clipping plane)
float dist = length(vertex) / zFar;
// take the fract of the encoding vector multiplied by the distance
vec3 comp = fract(bitSh * dist);
// and subtract the masked carried-over part
comp -= comp.xxy * bitMask;

// This gives the depth packed into RGB (better precision - 24 bit without HDR textures; with RGBA32F shadow maps this would be 96-bit precision, or 128-bit if the alpha channel is used too)
gl_FragColor = vec4(comp.x, comp.y, comp.z, 1.0);
}

6. (Back in the application) Turn off the shadow-map shader and get the modelview and projection matrices into two float[16] variables, like this:

glGetFloatv(GL_PROJECTION_MATRIX, Projection);
glGetFloatv(GL_MODELVIEW_MATRIX, ModelView);

7. Now render to a texture (or take it from the framebuffer/p-buffer; see the sketch below).
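
If you don't have an FBO or p-buffer set up yet, a minimal sketch of grabbing the pass straight from the back buffer into the shadow-map texture (shadowTexture and shadowMapSize are placeholder names; the viewport for the first pass must of course be shadowMapSize x shadowMapSize):

// copy the packed-depth image we just rendered from the framebuffer into the texture
glBindTexture(GL_TEXTURE_2D, shadowTexture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, shadowMapSize, shadowMapSize);
// clear before the second (camera) pass
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);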

Second Pass: This is in the "main" OpenGL part, where you render the whole scene (the so-called process; everything we've done so far was pre-process, and there is also post-process, which takes care of e.g. bloom effects).
1. Bind the shadow map to one of the texture units (or the main texture unit) and load the bias matrix (float Bias[16] = {0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0}) into the texture matrix. Then multiply it by the light's projection, modelview and camera-inverse matrices, like this:

// Bind the texture (must be done after selecting the texture unit, if you use one)
glBindTexture(GL_TEXTURE_2D, Shadow.texture);
// We'll operate on the texture matrix
glMatrixMode(GL_TEXTURE);
// Load the bias matrix (maps clip space [-1,1] to texture space [0,1])
glLoadMatrixf(Bias);
// Multiply by the light's projection matrix (this gives us the texture projection)
glMultMatrixf(Projection);
// Multiply by the light's modelview matrix (texture projection after the light transform)
glMultMatrixf(ModelView);
// Multiply by the camera inverse (to cancel out the camera transform applied to the vertices)
glMultMatrixf(CameraInverseMatrix);
glMatrixMode(GL_MODELVIEW);

2. In the "scene" shader (which we'll turn on), which can be e.g. a BRDF lighting shader, you must do this:
Vertex Shader

// before main
varying vec3 lightDist;
varying vec4 shadowProj;
uniform vec3 lightPos;
...
// in main
...
// shadow projection = texture matrix of the shadow-map texture unit (0 if it's the main unit) times the vertex in eye space
shadowProj = gl_TextureMatrix[ShadowMapTextureUnit] * (gl_ModelViewMatrix * gl_Vertex);
// lightDist is used for the comparison; for a static scene this is enough, for a dynamic scene you'd need the light-inverse transform, or do everything in world space
lightDist = lightPos - gl_Vertex.xyz;
...


Fragment Shader

// before main
varying vec3 lightDist;
varying vec4 shadowProj;
uniform float zFar;
uniform sampler2D shadowMap;
...
// in main
...
// bit shifts for decoding (RGB colours back into a float)
vec3 bitShifts = vec3(1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
// the projection is resolved by dividing shadowProj.xyz by its w component (perspective divide)
vec3 pos = shadowProj.xyz / shadowProj.w;
// Shadow-map bias. I chose the number 2.5; length(lightDist) is the distance from the light to the pixel, zFar is the far clipping plane. With 128-bit precision the bias wouldn't be necessary, because that precision is very high.
float bias = (length(lightDist) - 2.5) / zFar;
// The shadow term comes from comparing the decoded texture value (dot(texture, bitShifts) turns the stored vec3 back into a float) against the biased distance. If the stored value is greater there is no shadow, otherwise there is - further explanation of this comparison is on Wikipedia.
float shadow = clamp(float(dot(texture2D(shadowMap, pos.xy).xyz, bitShifts) > bias), 0.0, 1.0);
...
// Finally multiply vec4 finalBRDF (e.g. texture or lighting, computed or sent into the shader) by the shadow term, but only if the fragment lies inside the projection pyramid; otherwise render it without shadow.
if(pos.x > 0.05 && pos.x < 0.95 && pos.y > 0.05 && pos.y < 0.95)
gl_FragColor = finalBRDF * shadow;
else
gl_FragColor = finalBRDF;

And that's it. I hope this is understandable, because I'm not so good a teacher. You asked about a way to solve it without shaders - it's possible, but damn slow: use your light pyramid (which is hard to calculate) as a stencil "shadow volume"; where it passes, do the shadow-map comparison, and where the test fails, do nothing. Anyway, I don't like the method without shaders.

About my own technique - sorry, I won't describe it right here and now, because I haven't got the time. I'm preparing an article about it, but I have no time, so it will be sometime this year (as soon as possible, I hope - I can't promise, but I'll have some more time in February).

Vilem Otte

When you said your method (PCMLSM) is faster and has better quality than shadow volumes, there should be some kind of common ground for comparison, right? If so, how do you measure "faster"? And how do you measure "better"?

Have you looked at what other people do with shadow volumes before saying that?

Is your method a novel algorithm? If so, please let me know when it is published or patented. I would be excited to learn about your novel algorithm.

If it is not a novel algorithm, can you tell me what algorithm you are actually using?

Gary

Well, faster and better - it depends on which side you look at it from. For example, this algorithm produces much softer shadows than can be achieved with shadow volumes. Compared with shadow-volume penumbra shadows it is much faster (if you don't take softness into account, because those aren't as soft). If you want sharper shadows, it would be better to use shadow volumes. I can say just this: neither of these algorithms is the best. The best is to use area shadows computed with rays (yes, I'm talking about ray tracing), but that's not possible in real time even on today's high-end hardware. So for me it's the better algorithm, and not just for me.

I have looked at what others can do with shadow volumes (I'm not a shadow-volume expert, but I know how to achieve penumbra shadows with them). They are a really nice shadow algorithm, but they are geometry-based, and for this kind of filter I needed something texture-based (I know they can be converted to a texture, but that takes time). I also prefer softer shadows (more realistic), without artifacts when the light is extremely large, plus I need a texture-based algorithm.

It's a combined NEW algorithm and it's still in development. It uses shadow maps, but not just one shadow map - it uses multi-layered shadow maps (ML means Multi-Layered), and even more than one multi-layered shadow map (depending on the quality you need). It hasn't been published or patented (it's still not complete - there are some bottlenecks; we know how to solve them, but it takes time to debug the method so it is fast (it can be much faster than today's version) and more accurate). It can simulate area lights (it was originally designed to handle area lights, because in the real world no light source has size zero). I'm going to write a long whitepaper (including a demo) on how to implement PCMLSM, but it uses real-time ray tracing to calculate some additional "things", and I can't release anything from the ray-tracing API I developed for the project I'm working on now. I'm going to release it somehow; it's a combined method of rasterization (OpenGL, probably possible with DirectX too) and ray tracing (our own real-time ray-tracing API).

Anyway, it's a novel algorithm, but it uses some other algorithms to look much better. Standard shadow mapping, of course; it can use parallel-split shadow maps (to get better-quality shadows when rendering larger areas) - tested! - or probably any of the perspective-warp techniques (PSM, LiSPSM, TSM, ...) - not tested! It's filtered using a bilinear percentage-closer filter (PCF), and could possibly use a variance-shadow-map filter instead (to get a faster and softer filter), but that hasn't been tested. The whole algorithm is based on getting coefficients from rays and blurring the shadow values (plus radiosity softening!) from shadow maps rendered from so-called "volumetric points" inside the light's area. I'll explain the whole algorithm in the whitepaper I'm planning to write (I don't know exactly when, probably February, but I can't promise).
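
For readers who haven't met PCF before, a generic 3x3 percentage-closer filter loop looks roughly like this in GLSL. This is NOT PCMLSM, only the standard building block mentioned above; shadowMap, pos, bias and bitShifts are reused from the second-pass fragment shader earlier in the thread, and the 512.0 map size is an assumption:

// average the binary depth comparison over a 3x3 neighbourhood of shadow-map texels
float texel = 1.0 / 512.0;              // assumed shadow-map resolution
float lit = 0.0;
for (int x = -1; x <= 1; x++)
{
    for (int y = -1; y <= 1; y++)
    {
        vec2 offset = vec2(float(x), float(y)) * texel;
        float stored = dot(texture2D(shadowMap, pos.xy + offset).xyz, bitShifts);
        lit += (stored > bias) ? 1.0 : 0.0;
    }
}
lit /= 9.0;                             // 0.0 = fully shadowed, 1.0 = fully lit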

There are quite a lot of variations of shadow volumes for rendering soft shadows. I just searched Google with the keywords shadow volume, penumbra and shadow, and got quite a lot of relevant and well-documented algorithms on the first page.

Here is one of them.

http://www2.imm.dtu.dk/visiondag/VD03/grafisk/tomasmoeller.html

Is your method better than those?

To be frank, I don't believe your subjective judgement, especially since the conclusions you arrived at are mostly based on your feelings.

Although I don't know whether your algorithm (yours?) is really better or not, one thing is quite obvious: you don't have enough information to tell either.

Here is my suggestion. Until you have enough evidence, don't mention shadow volumes when you present your stuff. It is just not a good idea for you to do that.

It's quite an impressive technique, but it has a few disadvantages - it's strongly polygon-based. If you used a scene like the ones I use in engine demos (a few million polygons), calculating the silhouette and creating the shadow volume would be really expensive. For example, with 100,000 polygons in the shadow volume and a 32x32 light grid, that's 102,400,000 - more than 102 million polygons! My method would use far fewer polygons, so it would be much faster in a complex scene, with comparable quality if I set the light to a similar size. Also, my soft shadows are softened further by indirect illumination (and that makes a big difference). They get 50-70 fps on scenes of several thousand polygons, without effects? I get around 40-50 fps on modern graphics hardware in a scene with several million polygons and many more effects (HDR, bloom, supersampled antialiasing, parallax/normal bump mapping, reflections and refractions, etc.). So this is the main difference between shadow mapping and shadow volumes - speed. Even if the quality is comparable, shadow mapping is more polygon-independent. So we're back where we started: it depends on how you look at these two shadow solutions. I personally don't like shadow volumes much (memory use, silhouette finding, edge extrusion, ... - huge memory usage, huge CPU usage, small GPU usage - well, that's relative, because filtering would be the bottleneck of that method). Shadow maps aren't without disadvantages either (aliasing; if dynamic, they need a lot of GPU fill rate, ...), and filtering is a bottleneck too, but not as big a one as with shadow volumes. Personally I like real-time ray-traced area shadows the most, but they're too expensive to do in real time on today's hardware.

Well, yes - I'm subjective about my method (more subjective than objective), because I like shadow mapping more (maybe you think I'm crazy, but I render high-poly models almost all the time (not in the Desert demo), so shadow volumes never rendered fast for me). As you said, my decision was based on the scenes I'm using (high-poly), so shadow mapping is the only choice for me. I needed high-quality filtering (physically plausible, really soft and FAST), so I developed my own method based on what I know from ray tracing (how to achieve area shadows using rays, etc.).

There's the main problem - you haven't seen my filtering in action at high quality (the Desert Improved demo uses low quality - just a few layers, small "volumetric points", a small number of rays for the ray-traced "filtering", etc.), and I haven't released the whitepaper (I haven't even completed it). So you - and almost everyone else who isn't on the development team of the project I'm working on now - haven't seen it.

Quote:

Here is my suggestion. Until you have enough evidence, don't mention shadow volumes when you present your stuff. It is just not a good idea for you to do that.


I'll remember that until I write the whitepaper.

I don't know; there are about 30 different variants of SM, all claiming to produce shadows equivalent to shadow-volume techniques. It does seem like image-space techniques are becoming the current trend.

Oh, by the way, the application for that technique doesn't even show anything, let alone a shadow. I am very sceptical myself of any whitepaper which purports to address the tiniest thing and claims to be a novel algorithm.

Thanks, Vilem - you seem to be the only person who says he has no time to spare yet stacks up screen-long posts again and again ;)

Just a few questions:
1) The code you posted is for simple shadow mapping?
2) What's wrong with directional lights?
3) How slow are we talking about with the shaders?

Solving the bad artefacts - I'm guessing a clip plane in front of the light would do the job?

Thanks. If I have time I'll look into implementing the code when I see the whitepaper.

Neutrinohunter


Well, image space really is the current trend, because those techniques need less CPU/GPU power than world-space, texture-space, etc.

Quote:

Thanks, Vilem - you seem to be the only person who says he has no time to spare yet stacks up screen-long posts again and again ;)


:D I've been programming over 10 hours a day this month, trying to coordinate the development team as well, and programming my own project with very accurate, very detailed documentation, so I hope you understand that I wouldn't like to write another whitepaper right now and improve your code a little on top of that. So I'm having a break for a day or two, during which I just answer on GameDev, DevMaster, etc. I hope you understand.

Just a few answers :D
1.) Yes, it's for simple shadow mapping. It's similar to the technique posted on www.codesampler.com - specifically:
http://www.codesampler.com/usersrc/usersrc_6.htm#oglu_simple_fbo_shadow_mapping
2.) Nothing. It's just that for directional lights you need only one render to generate the shadow map. For omnidirectional lights (sometimes referred to as point lights) you need two renders to texture (dual-paraboloid mapping), or six renders to texture (cube shadow mapping) - see the sketch after this list.
3.) I get around 1200 fps with just shaders (on a simple scene - that codesampler scene), so it's not slow, it's damn fast. I was only saying that it's a little slower than hardware shadow mapping (which could probably give, say, 1300 fps). And last but not least: it's much easier to filter shader shadow mapping than hardware shadow mapping (e.g. using VSM).
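
A rough sketch of the six-face loop from point 2 (cube shadow mapping). This is not code from the thread: lightPos, zFar and renderSceneDepthToFace are placeholder names, and the up vectors follow the usual OpenGL cube-map face conventions.

// one depth render per cube face, 90 degree FOV, aspect 1.0,
// so the six frusta exactly cover the sphere around the light
struct CubeFace { GLenum target; float dir[3]; float up[3]; };
CubeFace faces[6] = {
{ GL_TEXTURE_CUBE_MAP_POSITIVE_X, { 1, 0, 0}, {0,-1, 0} },
{ GL_TEXTURE_CUBE_MAP_NEGATIVE_X, {-1, 0, 0}, {0,-1, 0} },
{ GL_TEXTURE_CUBE_MAP_POSITIVE_Y, { 0, 1, 0}, {0, 0, 1} },
{ GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, { 0,-1, 0}, {0, 0,-1} },
{ GL_TEXTURE_CUBE_MAP_POSITIVE_Z, { 0, 0, 1}, {0,-1, 0} },
{ GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, { 0, 0,-1}, {0,-1, 0} } };

for (int i = 0; i < 6; i++)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(90.0, 1.0, 1.0, zFar);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(lightPos.x, lightPos.y, lightPos.z,
              lightPos.x + faces[i].dir[0], lightPos.y + faces[i].dir[1], lightPos.z + faces[i].dir[2],
              faces[i].up[0], faces[i].up[1], faces[i].up[2]);
    // placeholder: render the depth pass into the i-th cube face (e.g. via an FBO attachment)
    renderSceneDepthToFace(faces[i].target);
}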

Quote:
Solving the bad artefacts - I'm guessing a clip plane in front of the light would do the job?


Dunno, I've never tried that, but one solution is to use that pyramid together with the stencil buffer.

Vilem Otte, sorry to be mean, but please don't talk like a shadow-volume expert when you are, in fact, a beginner. There is nothing wrong with not mentioning shadow volumes. However, it is very odd to me when you make false statements about them.

Why do you keep trying to judge something you don't even understand? (Funny, isn't it?)

Yeah, I understand. I'm not a shadow-volume expert - I was just able to achieve penumbra shadows with them. I'm oriented towards ray tracing (especially real-time ray tracing and combined methods), so I need the rasterizing part to render into a texture as fast as possible - which is why I chose shadow mapping, and that's what I'm primarily focused on.

Quote:

Why do you keep trying to judge something you don't even understand? (Funny, isn't it?)


Oh... I know what you mean. Well, I was judging only the main topics and issues (the ones I had with my own shadow-volume algorithm), so I was judging based on my experience. But you're probably right - I don't know every shadow-volume method (and their filtering), so I should hold back from "judging" next time (especially criticism of other methods, until I've seen them and read more about them).

Anyway, today I've got to continue work on development (after two days of break). I'll show up here (I mean on the GameDev.net forum) from time to time, but if you want a quicker answer, please write to my mail - vilem.otte@post.cz - because I'll be informed about it immediately (instead of a forum post). That is mainly for Neutrinohunter.

EDIT: example of PCMLSM shadows versus ray-traced area shadows - http://www.otte.cz/engine/data/PCMLSMvsRT.jpg

[Edited by - Vilem Otte on January 29, 2008 6:38:16 AM]

Thanks, Vilem, you've been fantastic. Yes, I definitely understand. Either way, once I finish this project I'll look into your whitepaper anyhow - I always want to look at the newest stuff :)

I think I'll try to get an FBO and a p-buffer working properly and implement some of the frustum-changing algorithms (TSM, PSM) and see what quality I get. Then I will try shaders for all the algorithms I've got so far.

You've given me a lot of ideas for improvements lately, and I'll have to get cracking on them to see if they help! :)

Cheers, I'll send an email when I have something conclusive and improved to contribute.

neutrinohunter


NB: I should also add that the application I am creating is meant to be an analysis tool for shadow algorithms. The idea is that I provide basic functionality to change things within the same scene. I'm hoping it might come in useful once I improve the model loading (at the moment only MD2 and 3DS are supported), because I find a lot of papers suffer from "picture-perfect syndrome", where the demo is naff but the pictures always look great and are adjusted to fit the scene.

neutrinohunter

Howdy. I've managed to improve the shadow mapping somewhat, but I'm having problems with resolutions greater than my viewport.

If I go above that resolution I use an FBO to render to. But when I try to bind the buffer before drawing the shadow map over the scene, I'm not getting a COMPLETE_EXT code from the FBO (i.e. there's something wrong with the FBO, it's not attaching properly, and I can't seem to narrow down the problem). I've looked at other working code that uses FBOs for SM, and code that doesn't, and there's pretty much no difference from mine.


FBO Creation

//create the texture
glGenTextures( 1, &texture );
glBindTexture( GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

/* generates the FBO and attaches the previous texture to it */
glGenFramebuffersEXT ( 1, &fbo );
glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT (GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, texture, 0);
glDrawBuffer (GL_FALSE);
glReadBuffer (GL_FALSE);
glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, 0);

//Binding
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glEnable( GL_TEXTURE_2D );
glBindTexture( GL_TEXTURE_2D, texture );
GLenum status = glCheckFramebufferStatusEXT( GL_FRAMEBUFFER_EXT );
//assert(status == GL_FRAMEBUFFER_UNSUPPORTED_EXT);
assert(glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
== GL_FRAMEBUFFER_COMPLETE_EXT);





It could be something silly I'm doing, but I've checked the FBO spec and the tutorial by phantom and I can't see anything.

Any help would be greatly appreciated.
neutrinohunter

It's me again :D.

I think you've got some bugs in the code. You're using just the framebuffer, not a so-called renderbuffer (which is a storage object containing a 2D array of pixels).


// Creation of FBO:
// Texture generation - depth map with unsigned byte storage, bilinearly filtered and clamped to edge
glGenTextures( 1, &texture );
glBindTexture( GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Generate the frame buffer - one framebuffer for unsigned int fbo
glGenFramebuffersEXT(1, &fbo);
// Generate the render buffer - one renderbuffer for unsigned int rbo
// It's used as storage for depth!
glGenRenderbuffersEXT(1, &rbo);
// Bind render buffer rbo
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, rbo);
// Set storage as GL_DEPTH_COMPONENT24 with size width x height
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, width, height);

// Get the FBO status into status
GLenum status = glCheckFramebufferStatusEXT(GL_RENDERBUFFER_EXT);

// Test whether FBO creation succeeded
switch(status)
{
// Everything is OK!
case GL_FRAMEBUFFER_COMPLETE_EXT:
break;
// FBO isn't supported on your HW (with the defined parameters!)
case GL_FRAMEBUFFER_UNSUPPORTED_EXT:
// Show message box
MessageBox(hWnd, "Framebuffer is not supported on your HW", "Error", MB_OK|MB_ICONEXCLAMATION);
// Exit application
PostQuitMessage(0);
break;
}

// Binding FBO
// Bind the framebuffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
// Bind the renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, rbo);
// Attach the 2D texture to the framebuffer, with the chosen attachment point (GL_DEPTH_ATTACHMENT_EXT or GL_COLOR_ATTACHMENT0_EXT)
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, texture, 0);
// Attach the renderbuffer rbo as the depth storage for the framebuffer
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, rbo);

// Set the OpenGL viewport of the offscreen buffer to width x height
glViewport(0, 0, width, height);

// Clear colour to black
glClearColor(0, 0, 0, 0);

// Clear the colour buffer, depth buffer and stencil buffer (of this render target)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// Reset the matrix
glLoadIdentity();

// Unbinding FBO
// Set the bound framebuffer to zero (none)
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
// Set the bound renderbuffer to zero (none)
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);


Well, with this you can go up to the maximum renderbuffer size your GPU supports.
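
If you want to check that maximum at run time, the query is just this (a minimal sketch):

// largest renderbuffer dimension the driver allows
GLint maxSize = 0;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE_EXT, &maxSize);
// compare your requested shadow-map resolution against maxSize before creating the FBO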

Thanks, Vilem. Sorry for the long wait - I haven't been at my computer in a while.

Unfortunately, when I bind the FBO I don't seem to get GL_FRAMEBUFFER_COMPLETE_EXT, and nothing draws to the screen.

I'm doing:

Render() {
Clear Buffer Bits
for each light {
generateShadowMap()
setupCamera();
setClipPlanes();
drawAmbientPassIfNeeded();
bindFrameBufferObject();
drawScene();
releaseFrameBufferObject();
}
//swapBuffers is internally called by QT
}

generateShadowMap() {
generateLightFrustum()
bindFrameBufferObject();
setFlags() //GL_FLAT etc..
drawScene();
releaseFrameBufferObject();
createProjectionMatrix() for Shadow Map
}

Is there something I am doing terribly wrong, or would it be better to see the whole code? If so, I'll post it :)

As to your second part, I think I understand the frustum idea.
Basically, each of the four vectors will be +-(lightDir, normalised)*cos(+-fov) in direction, then I would have to extrude these vectors over a certain distance to make sure they strike the surface, then render the polygon into the stencil buffer twice, once with DECR and once with INCR?

neutrinohunter

PS: Just out of interest, are these lines correct:


// Get the FBO status into status
GLenum status = glCheckFramebufferStatusEXT(GL_RENDERBUFFER_EXT);
// Test whether FBO creation succeeded
switch(status)
{
// Everything is OK!
case GL_FRAMEBUFFER_COMPLETE_EXT:
break;
// FBO isn't supported on your HW (with the defined parameters!)
case GL_FRAMEBUFFER_UNSUPPORTED_EXT:



or should they be GL_RENDERBUFFER_COMPLETE_..., etc.?

[Edited by - Neutrinohunter on February 2, 2008 4:41:29 AM]

Anyway, about what you are doing:


Render()
{
Clear Buffer Bits
for each light
{
generateShadowMap()
setupCamera();
setClipPlanes();
drawAmbientPassIfNeeded();
// Why do you bind the framebuffer again here? You just need to
// bind the framebuffer's TEXTURE. If you bind the FBO again you
// will lose every bit of data in it, so if you want
// post-processing you need another frame buffer object.

// This erases your offscreen buffer, and
bindFrameBufferObject();
// everything you draw HERE goes into it (not to the screen)
drawScene();
// Everything you drew is in it (another binding would
// erase it again)
releaseFrameBufferObject();
}
//swapBuffers is internally called by QT
}

generateShadowMap()
{
generateLightFrustum();
// You now render to the offscreen buffer
bindFrameBufferObject();
setFlags();
drawScene();
// Everything you have drawn is inside it
releaseFrameBufferObject();
createProjectionMatrix();
}


You need one frame buffer object for every texture you want to render to (this means one for every shadow map). If you want to use a texture from a frame buffer object, you just need to call glBindTexture(GL_TEXTURE_2D, texture); - not bind the FBO again (for that texture I presume you have a UINT texture; at the beginning of the file as the FBO texture). This could be causing the issues you're seeing (like an empty FBO - black, and nothing rendered on screen).
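
In other words, roughly this (a sketch; drawSceneFromLight/drawSceneFromCamera stand for your own render calls, shadowFBO/shadowTexture for your wrapper's handles):

// depth pass: render INTO the FBO
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, shadowFBO);
drawSceneFromLight();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);   // back to the window framebuffer

// lighting pass: only the texture is needed - do NOT bind the FBO again
glBindTexture(GL_TEXTURE_2D, shadowTexture);
drawSceneFromCamera();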

To the second part - as I said, I haven't tested this on shadow maps yet (I only tested it on projected textures long ago, so I can't remember the exact equations, etc.). But I've got some time now, so if you want, I can put together a solution and write up how to do it (even post some code). But it should be along those lines.

To that PS - to test whether you get GL_FRAMEBUFFER_COMPLETE_EXT, try this:

// Get the FBO status into status
GLenum status = glCheckFramebufferStatusEXT(GL_RENDERBUFFER_EXT);
// Test whether FBO creation succeeded
switch(status)
{
// Everything is OK!
case GL_FRAMEBUFFER_COMPLETE_EXT:
MessageBox(NULL, "Framebuffer created", "Information", MB_OK);
break;
// FBO isn't supported on your HW (with the defined parameters!)
case GL_FRAMEBUFFER_UNSUPPORTED_EXT:
MessageBox(NULL, "Framebuffer incomplete", "Information", MB_OK);
break;
}

[Edit] Perhaps I should rephrase what I said.

Currently I do get a GL_FRAMEBUFFER_COMPLETE_EXT status from glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT). However, when I render I get either a white background or a black background, as if there is no comparison going on, maybe? I'm not sure.

My current code for rendering is as follows:


JLightManager * lightManager = scene->getLightManager();
JModelManager * modelManager = scene->getModelManager();
int numLights = lightManager->getNumberOfLightsUsed();

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glLoadIdentity();
for(int i = 0; i < numLights; i++) {

//Get the light and its position
JLight * light = lightManager->getLight(i);
JVector3D lightPos = light->getPosition();
float dist = lightPos.x * lightPos.x + lightPos.z * lightPos.z;//distance from origin
JVector3D lightDir = light->getViewingDirection(); //Normalised light direction
JVector3D planeEq(lightDir.x, 0, lightDir.z);
planeEq.SelfNormalize();

GLdouble clipPlane [4] = {-planeEq.x, -planeEq.y, -planeEq.z, dist};
//GLdouble clipPlane [4] = {0,1,0,0};
glClipPlane(GL_CLIP_PLANE0, clipPlane);

//Generate the Shadow Map from LightSource
generateShadowMap(lightPos);


//Render Scene using Shadow Map
glViewport(0, 0, m_WindowWidth, m_WindowHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, 1.0, 0.1, 1000.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
scene->updateCameraView();

glDisable(GL_STENCIL_TEST);

// Track light position

GLfloat lPos[4] = {lightPos.x, lightPos.y, lightPos.z, 1.0};
glLightfv(GL_LIGHT0, GL_POSITION, lPos);

//Clear the window with current clearing color
glClear(GL_DEPTH_BUFFER_BIT);

// Set up shadow comparison
frameBufferObject->releaseBuffer();
glEnable(GL_TEXTURE_2D);

if(usingPixelBuffer) {
frameBufferObject->initialiseBuffer();
//glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE,
GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
}
else {
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE,
GL_COMPARE_R_TO_TEXTURE);
}
// Set up the eye plane for projecting the shadow map on the scene
JUtilities utility;
utility.enableTextureGen();


// Draw objects in the scene
scene->renderInternalProperties();
modelManager->render();


//glStencilFunc( GL_EQUAL, 0, 1 );
//scene->renderInternalProperties();
//modelManager->render();


utility.disableTextureGen();
glDisable(GL_ALPHA_TEST);
glDisable(GL_TEXTURE_2D);
glDisable(GL_CLIP_PLANE0);
}

if (showShadMap) {
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMatrixMode(GL_TEXTURE);
glPushMatrix();
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
//if(usingPixelBuffer) frameBufferObject->initialiseBuffer();
glDisable(GL_LIGHTING);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Show the shadowMap at its actual size relative to window
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f);
glVertex2f(((GLfloat)shadowMapSize/(GLfloat)m_WindowWidth)*2.0-1.0f,
-1.0f);
glTexCoord2f(1.0f, 1.0f);
glVertex2f(((GLfloat)shadowMapSize/(GLfloat)m_WindowWidth)*2.0-1.0f,
((GLfloat)shadowMapSize/(GLfloat)m_WindowHeight)*2.0-1.0f);
glTexCoord2f(0.0f, 1.0f);
glVertex2f(-1.0f,
((GLfloat)shadowMapSize/(GLfloat)m_WindowHeight)*2.0-1.0f);
glEnd();
glDisable(GL_TEXTURE_2D);
glEnable(GL_LIGHTING);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
gluPerspective(45.0f, 1.0f, 1.0f, 1000.0f);
glMatrixMode(GL_MODELVIEW);
}



[Disclaimer] It's more than likely this code can be improved and may have redundant parts.

The relevant parts:
frameBufferObject->releaseBuffer() - this unbinds the FBO (with the second parameter set to 0);
frameBufferObject->initialiseBuffer() - this binds the texture.

I will post some screenshots if people think that will help.

Hope this better explains what the problem is.
neutrinohunter

[Edited by - Neutrinohunter on February 2, 2008 2:52:41 PM]

