# achieving the same effect in this video...


## Recommended Posts

hello everyone, i'm trying to achieve this lighting for sprites as seen in this video:

[media]
[/media]

i've created a program which takes a model and splits it into its color map, normal map, and position map.

the video says he created a height map for each sprite for accurate positions, but my problem is: how do i do that?

at first, i thought i'd create a map of the position in view space for each pixel, and these are the three results at the moment:

Color map:

Normal map:

and the weird position map:

these are the shaders for how i'm trying to do this:

Vertex:
```glsl
#version 150

uniform mat4 ProjMatrix;
uniform mat4 ViewMatrix;
uniform mat4 MeshMatrix;

in vec3 i_Vertex;
in vec2 i_TexCoord;
in vec3 i_Normal;

out vec2 o_TexCoord;
out vec3 o_Normal;
out vec4 o_Vertex;

void main(void){
    mat4 ViewModel = ViewMatrix*MeshMatrix;
    mat3 NormalMatrix = transpose(inverse(mat3(ViewModel)));
    o_Vertex = ViewModel*vec4(i_Vertex, 1.0f);
    gl_Position = ProjMatrix*o_Vertex;
    o_TexCoord = i_TexCoord;
    o_Normal = normalize(NormalMatrix*i_Normal);
    return;
}
```

pixel:

```glsl
#version 150

uniform sampler2D Texture;
uniform vec4 g_Color;

in vec2 o_TexCoord;
in vec3 o_Normal;
in vec4 o_Vertex;

out vec4 o_ColorBuffer;
out vec4 o_NormalBuffer;
out vec4 o_PosBuffer;

void main(void){
    o_ColorBuffer = g_Color*texture2D(Texture, o_TexCoord);
    if(o_ColorBuffer.a<=0.0f) discard;
    o_ColorBuffer.rgb *= o_ColorBuffer.a;
    o_NormalBuffer = vec4((o_Normal+1.0f)*0.5f, 1.0f); //Map between 0-1 for normals
    o_PosBuffer = o_Vertex; //don't know how to accurately map?
}
```

Edited by slicer4ever

##### Share on other sites
The heightmap is the component of the position map that points outwards from the monitor.
Of course, that's only if you created the position map correctly.

I'm not sure what you are asking exactly. The editor you are making is working with 3D models, right? So you have the depth coordinate.
Are you asking how to map to the 0...1 depth range?
I guess it depends on the game that's using it, but I would do the mapping by assuming that the depth of the 3D model is the same as the width.
Or maybe I'd calculate the bounding box with a square base.

So if the 3D model is, say, 125 units "wide", then a depth of -125 units would be 0, and 0 would be 1 in the heightmap. This is independent of the final resolution of the rendered and saved sprite image.
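That mapping is easy to sanity-check numerically. A quick sketch in Python (the function name and the 125-unit extent are just the example values from above, not anything from the actual editor):

```python
def depth_to_height(z, depth_extent=125.0):
    """Map an in-editor depth value in [-depth_extent, 0] to a [0, 1] heightmap value."""
    return (z + depth_extent) / depth_extent

print(depth_to_height(-125.0))  # back of the model -> 0.0
print(depth_to_height(0.0))     # front of the model -> 1.0
print(depth_to_height(-62.5))   # halfway through  -> 0.5
```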

I guess you get the idea. Edited by szecs

##### Share on other sites
So from what I see in the position map you have X = Red, Y = Green, Z = Blue.
Because you take the position after the view transform, you have no Z (this is why there is no blue in your picture).
That doesn't really help, because the way you take the position now, it basically just tells us this:

```
                     -Y
                      ^
                      |       Red (the negative Y will just make the green component 0)
                      |
-X ----------------- Zero ------------------> +X
                      |
      Green           |       Yellow from the positive Red+Green (you can try this in paint)
                      |
                     +Y
```

I think it would be more useful to have your position taken from the local space of the model (don't multiply with the world and view matrices).
This way when you compute the light amount you have more or less a position that can be used to correctly determine the light intensity at a certain point in your model.

An alternative would be to save only the depth and reconstruct the position from it. Here is the process in more detail.
http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/ Edited by clickalot

##### Share on other sites
hey guys, thanks so far, i've been working with it, and have come up with decent results, but there's still clearly something wrong.

first off, let me post my shader code currently:

Vertex for encoding images:
```glsl
#version 150

uniform mat4 ProjMatrix;
uniform mat4 ViewMatrix;
uniform mat4 MeshMatrix;

in vec3 i_Vertex;
in vec2 i_TexCoord;
in vec3 i_Normal;

out vec2 o_TexCoord;
out vec3 o_Normal;
out vec4 o_Vertex;

void main(void){
    mat3 NormalMatrix = transpose(inverse(mat3(MeshMatrix)));
    o_Vertex = MeshMatrix*vec4(i_Vertex, 1.0f);
    o_TexCoord = i_TexCoord;
    o_Normal = normalize(NormalMatrix*i_Normal);
    gl_Position = ProjMatrix*ViewMatrix*o_Vertex;
    return;
}
```
pixel for encoding the images:
```glsl
#version 150

uniform sampler2D Texture;
uniform vec4 g_Color;

in vec2 o_TexCoord;
in vec3 o_Normal;
in vec4 o_Vertex;

out vec4 o_ColorBuffer;
out vec4 o_NormalBuffer;
out vec4 o_PosBuffer;

void main(void){
    o_ColorBuffer = g_Color*texture2D(Texture, o_TexCoord);
    if(o_ColorBuffer.a<=0.0f) discard;
    o_ColorBuffer.rgb *= o_ColorBuffer.a;
    o_NormalBuffer = vec4((o_Normal+1.0f)*0.5f, 1.0f);
    o_PosBuffer = vec4((normalize(o_Vertex.xyz)+1.0f)*0.5f, length(o_Vertex.xyz));
}
```

and here are the shaders for drawing/doing lighting:
```glsl
#version 150

uniform mat4 ProjMatrix;
uniform mat4 ViewMatrix;
uniform mat4 MeshMatrix;

in vec3 i_Vertex;
in vec2 i_TexCoord;

out vec2 o_TexCoord;

void main(void){
    gl_Position = ProjMatrix*ViewMatrix*MeshMatrix*vec4(i_Vertex,1.0f);
    o_TexCoord = i_TexCoord;
    return;
}
```

```glsl
#version 150
#extension GL_EXT_gpu_shader4: enable

uniform sampler2D TextureColorMap;
uniform sampler2D TextureNormalMap;
uniform sampler2D TextureHeightMap;
uniform mat4 ViewMatrix;
uniform mat4 MeshMatrix;
uniform vec4 g_Color;
uniform int i_LightCount;
uniform samplerBuffer i_LightBuffer;

in vec2 o_TexCoord;
out vec4 o_Color;

void main(void){
    vec4 Texel = texture2D(TextureColorMap, o_TexCoord);
    if(Texel.a<=0.1f) discard;
    o_Color = vec4(0.1f, 0.1f, 0.1f, 1.0f)*g_Color*Texel;
    mat4 ViewModel = ViewMatrix;
    mat3 NormMatrix = transpose(inverse(mat3(ViewModel)));
    vec4 o_NormMap = texture2D(TextureNormalMap, o_TexCoord);
    vec4 o_HeightMap = texture2D(TextureHeightMap, o_TexCoord);
    vec3 o_Normal = NormMatrix*(o_NormMap.rgb*2.0f-1.0f);
    vec4 o_Vertex = ViewModel*vec4((o_HeightMap.rgb*2.0f-1.0f)*20.0f*(o_HeightMap.a), 1.0f); //the 20.0 is scaling for the sprite which has a radius of 20.
    for(int i=0;i<i_LightCount*3;){
        vec4 Light_Pos = texelFetchBuffer(i_LightBuffer, i++);
        vec4 Light_Dif = texelFetchBuffer(i_LightBuffer, i++);
        vec4 Light_Aten = texelFetchBuffer(i_LightBuffer, i++);
        vec3 Aux = Light_Pos.xyz-o_Vertex.xyz;
        float NdotL = max(dot(o_Normal, normalize(Aux)), 0.0f);
        float D = length(Aux);
        float Att = 1.0f/(Light_Aten.x+Light_Aten.y*D+Light_Aten.z*D*D);
        o_Color += Att*(Light_Dif*NdotL);
    }
    o_Color.rgb *= o_Color.a;
}
```
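as a side note, the per-light math in that loop is easy to check on the CPU. this is a hypothetical Python transcription of a single light's diffuse contribution (the function and parameter names are mine, not from the shader):

```python
import math

def light_contribution(surface_pos, normal, light_pos, light_diffuse, atten):
    """One point light's diffuse term, mirroring the shader loop:
    Att * diffuse * max(dot(N, normalize(L - P)), 0)."""
    aux = [l - p for l, p in zip(light_pos, surface_pos)]
    d = math.sqrt(sum(a * a for a in aux))
    ndotl = max(sum(n * a / d for n, a in zip(normal, aux)), 0.0)
    att = 1.0 / (atten[0] + atten[1] * d + atten[2] * d * d)
    return [att * ndotl * c for c in light_diffuse]

# A white light 2 units straight above a surface facing +Y, constant attenuation only:
print(light_contribution((0, 0, 0), (0, 1, 0), (0, 2, 0), (1.0, 1.0, 1.0), (1.0, 0.0, 0.0)))
```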

here are the current results:

using a sphere works perfectly (although it is the simplest example to use):
Color Map:

Normal Map:

Height Map:

Results:

A:

B:

here is an irregularly shaped home that i tried encoding; as you can see, it doesn't follow the light correctly:

Color Map:

Normal Map:

Height Map:

and this is the weird results:

The green dot is the light's center.

@Clickalot, i'm attempting to keep it all within model space, but i can't seem to translate it correctly in drawing.

i'll take a look at your alternative depth approach, thanks=-).

i feel i'm close, hopefully only a bit more. Edited by slicer4ever

##### Share on other sites
Your first post's position map looked much more like a correct thing, although it's not mapped correctly. Red is the X, green is the Y, so I guess blue is the z.

For the mapping, normalization is not good; it will make all the vectors the same length. You are using the .a channel for storing the lengths, but that's total nonsense. Sit down and think it through.

There are features, behavior that you have to decide. It's a design question, not a programming one. And the design depends on how you will use the sprites.

So some questions:
*Do you want square sprites? or is it okay to have non square, non-power of two sprites?
*Do you want to stretch the sprites to fit inside a square, or do you want to fit the sprite into the square without stretching?
*How is the coordinate system set up in the editor?

Mapping means scaling to 0...1 boundaries. And the scale factor only depends on the overall size of the model in the editor, so it's a constant; that's why you mustn't use normalization, which is a totally different thing.

You have to do some bounding box fitting depending on what you want. If you want the sprites to stretch into a cube, or you want to use non-square sprites, then make the bounding box fit perfectly to the model in all axes. If you don't want to stretch and you want to (or are only able to) use square sprites, fit the bounding cube so it fits the biggest extension of the model.

Let's assume that the bottom-left-front of the model's bounding box is at x0,y0,z0. Let the depth of the bounding box be DEPTH (in the editor's units), the width of the box be WIDTH, the height HEIGHT. If the bounding box is a cube, then obviously these are the same values.

Then something like:

```glsl
o_PosBuffer.x = (o_Vertex.x-x0)/WIDTH;  // subtract 0.5 if you want to map to -0.5...0.5. Multiply the whole with 2 for mapping to -1...1
o_PosBuffer.y = (o_Vertex.y-y0)/HEIGHT;
o_PosBuffer.z = (o_Vertex.z-z0)/DEPTH;
```

For rendering in game, you have to multiply the depth values by the in-game depth of the bounding box of the model you have as a sprite.
I don't know how you could store that depth, which is why I suggested fitting a square-base box (or a cube) to the model in the editor, so that the in-game depth will be the same as the width of the sprite.
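For illustration, here is the encode/decode pair sketched in hypothetical Python (`origin` and `size` stand in for the x0/y0/z0 and WIDTH/HEIGHT/DEPTH values above; the example numbers are made up):

```python
def encode(p, origin, size):
    """Map an in-editor position to [0, 1] per axis, as in the o_PosBuffer lines."""
    return tuple((pi - oi) / si for pi, oi, si in zip(p, origin, size))

def decode(e, origin, size):
    """Invert the mapping: scale back out of [0, 1] into editor units."""
    return tuple(ei * si + oi for ei, oi, si in zip(e, origin, size))

origin, size = (-10.0, 0.0, -5.0), (20.0, 40.0, 20.0)
p = (5.0, 10.0, -1.0)
e = encode(p, origin, size)
print(e)                        # every component lands inside [0, 1]
print(decode(e, origin, size))  # round-trips to the original position
```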

Sorry, it was simpler to write up this than to study through the code and correct it. And I hope it's understandable. Edited by szecs

##### Share on other sites

A heightmap is ONE value per pixel; I don't know how you can have such a colorful (so THREE-component) heightmap.
And I don't understand why you use normalization for `o_PosBuffer`.

my apologies, i suppose heightmap is not the correct word (i was only going off the technical description in the video at the time), nor is it a pure depth map (although click's alternative means i could just save a depth map). it's more of a positional map per pixel, so that i can translate back to that position for lighting calculations. essentially, i'm trying to preserve the 3D aspect of the model from a particular perspective.

i normalize the vertex's position so that it's mapped between -1 and 1, then add 1 and divide by two (or multiply by .5) to map between 0 and 1 for output into the color channels, and then use the length of that vertex (stored in the alpha channel) to multiply back to its original position when i go to do lighting calculations on the pixel. Edited by slicer4ever
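for what it's worth, that encode/decode round trip can be checked outside the shader. a hypothetical Python sketch (names mine); note it only round-trips exactly if the alpha channel can hold the raw length, which an 8-bit [0, 1] channel cannot once the length exceeds 1:

```python
import math

def encode(p):
    """Pack direction into [0,1] rgb, raw length into alpha."""
    l = math.sqrt(sum(c * c for c in p))
    n = [c / l for c in p]                      # normalize
    return [(c + 1.0) * 0.5 for c in n] + [l]   # rgb in [0,1], length in .a

def decode(rgba):
    """Unpack to [-1,1] and rescale by the stored length."""
    return [(c * 2.0 - 1.0) * rgba[3] for c in rgba[:3]]

p = [3.0, -4.0, 12.0]        # length 13
print(decode(encode(p)))     # recovers [3, -4, 12] (alpha held the raw 13)
```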

##### Share on other sites
I find it kind of strange that it's working even for the sphere, since the object's normal map and position texture seem to be the same.
I'll just suppose that you just pasted the wrong picture.
```glsl
vec4 o_Vertex = ViewModel*vec4((o_HeightMap.rgb*2.0f-1.0f)*20.0f*(o_HeightMap.a),1.0f);
```
Shouldn't it be * o_HeightMap.a instead of *20 since that is where you stored the length?

I don't understand why enclosing it in a box would help. You can just save the positions and be ok.
Just storing the depth is another optimization, but first you should get it to work by saving positions, and then you can try the thing mentioned in that article.

p.s. szecs's advice about not normalizing the position is valid. Since you store the length in alpha I assume that you can store the x, y, z in the rgb channels without normalizing them, thus not needing the alpha-channel anymore. (saving memory and avoiding the operation of multiplying the normalized vector with the length when rendering the lights) Edited by clickalot

##### Share on other sites
I edited the post heavily. You mustn't normalize the positions, because it will make all the positions unit length.

Hmm... Wait a minute. If you do this normalization and store the length, and use the same method to render in the game, then maybe it should be fine?

```glsl
vec4 o_Vertex = ViewModel*vec4((o_HeightMap.rgb*2.0f-1.0f)*20.0f*(o_HeightMap.a),1.0f);
```

Even if it would work, your solution is much more complicated than mine.

Um..... do you want to dynamically rotate the sprites in the game? My connection is too slow to watch the video. Edited by szecs

##### Share on other sites

I find it kind of strange that it's working even for the sphere, since the object's normal map and position texture seem to be the same.
I'll just suppose that you just pasted the wrong picture.

```glsl
vec4 o_Vertex = ViewModel*vec4((o_HeightMap.rgb*2.0f-1.0f)*20.0f*(o_HeightMap.a),1.0f);
```
Shouldn't it be * o_HeightMap.a instead of *20 since that is where you stored the length?

i thought the same at first, but theoretically a unit sphere's vertex positions should equal its normals, so in theory they both would look the same. (correct?)

the 20 is derived from the quad's size i use to draw the final image in the scene. i believe the issue is with how i store the length, since any length above one unit is going to be clamped to 1 in the final output, so i did this:

```glsl
o_PosBuffer = vec4((normalize(o_Vertex.xyz)+1.0f)*0.5f, length(o_Vertex.xyz)/16.0f); //should allow objects within 16.0f size to map correctly
```

```glsl
vec4 o_Vertex = vec4((o_HeightMap.rgb*2.0f-1.0f)*(o_HeightMap.a)*16.0f*20.0f,1.0f); //unmaps the height map's .a
```

but this produces weird results, even with the ball, and i'm not certain why (the resulting image's alpha channel looks correct).
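one speculative way to see what the /16.0f packing costs: if the render target is 8 bits per channel, both the packed direction and the length/16 get snapped to 256 steps, and the *16 (and the quad scale) on decode amplify that error. a hypothetical Python illustration (the quantizer just simulates an 8-bit channel; this is my guess at a contributing factor, not a confirmed diagnosis):

```python
import math

def quantize8(x):
    """Simulate storing a [0, 1] value in an 8-bit color channel."""
    return round(min(max(x, 0.0), 1.0) * 255) / 255

p = [3.0, -4.0, 12.0]              # length 13, inside the 16-unit budget
l = math.sqrt(sum(c * c for c in p))
stored = [quantize8((c / l + 1.0) * 0.5) for c in p] + [quantize8(l / 16.0)]
decoded = [(c * 2.0 - 1.0) * stored[3] * 16.0 for c in stored[:3]]
err = max(abs(d - o) for d, o in zip(decoded, p))
print(decoded)  # close to [3, -4, 12], but not exact
print(err)      # small but nonzero; the decode multipliers scale it up
```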

##### Share on other sites
My last thought:

The normalizing method may be good, but you have to scale (map) the lengths that you store in the .a channel. Simply divide the .a values by half of the in-editor bounding cube's width.

With my method, you could eliminate the need for the position map, and you could just use the .a component of the normal map as the depth map.
Plus a lot of expensive normalization and stuff would be eliminated as well. Edited by szecs

##### Share on other sites

I edited the post heavily. You mustn't normalize the positions, because it will make all the positions unit length.

Hmm... Wait a minute. If you do this normalization and store the length, and use the same method to render in the game, then maybe it should be fine?

```glsl
vec4 o_Vertex = ViewModel*vec4((o_HeightMap.rgb*2.0f-1.0f)*20.0f*(o_HeightMap.a),1.0f);
```

Even if it would work, your solution is much more complicated than mine.

Um..... do you want to dynamically rotate the sprites in the game? My connection is too slow to watch the video.

the sprites won't be rotated in game (other than billboarding toward the camera, and the camera is fixed, so in theory i shouldn't have to worry about the billboard rotation).
if i normalize and store the length, it should work back correctly, since you're multiplying by the value that you divided each component by.

yes, i do agree that is most likely the problem line; for now i've hard coded the quad's size (which is 40/40, but is offset into the center, so it should be a radius of 20).

if you do ever get a chance, i highly recommend giving the video a watch, it's really awe-inspiring what the person has done. he has one other video that included weather/snow and it... it kind of brings a tear to the eyes how good the effects look.

My last thought:

The normalizing method may be good, but you have to scale (map) the lengths that you store in the .a channel. Simply divide the .a values by half of the in-editor bounding cube's width.

With my method, you could eliminate the need of the position map, and you could just use the .a component of the normal map.
Plus a lot of expensive normalization and stuff would be eliminated as well.

not certain if you saw my previous post, but i did think that was the potential issue (and it may be); however, when i try to scale the value back (which for now is set to 16), it even breaks the sphere's correct mapping for some reason, even though in theory the result should be correct.

edit: i've been looking a bit closer at the ball that i thought was correct, and it seems to be slightly off; not exactly certain why, but i suspect it has to do with that single line. Edited by slicer4ever

##### Share on other sites

the 20 is derived from the quad's size i use to draw the final image in the scene. i believe the issue is with how i store the length, since any length above one unit is going to be clamped to 1 in the final output, so i did this:

```glsl
o_PosBuffer = vec4((normalize(o_Vertex.xyz)+1.0f)*0.5f, length(o_Vertex.xyz)/16.0f); //should allow objects within 16.0f size to map correctly
```

```glsl
vec4 o_Vertex = vec4((o_HeightMap.rgb*2.0f-1.0f)*(o_HeightMap.a)*16.0f*20.0f,1.0f); //unmaps the height map's .a
```

but this produces weird results, even with the ball, and i'm not certain why (the resulting image's alpha channel looks correct).

Maybe I am not paying enough attention, but I still don't get the need to multiply by 20.
First of all, you store the position like this:

```glsl
o_PosBuffer = vec4((normalize(o_Vertex.xyz)+1.0f)*0.5f, length(o_Vertex.xyz));
```

So basically you have in .xyz a normalized direction vector, and some length that may be clamped to 1 depending on what format you're using.

I don't understand the need to multiply this by the arbitrary value of 20 in order to get the position in object space back.
I call it arbitrary since that 20 was not used to encode it in the first place.

You just get position vectors that have a correct orientation, but all of them will have a length of 20, which is then transformed with the world-view matrix.
This is why I believe the sphere example seems correct, but it's not. You can try taking a cube and test with that.

One potential reason is that I believe that you want to scale it correctly for the 2.5D scene, but the *20.0f doesn't do that. Edited by clickalot

##### Share on other sites

One potential reason is that I believe that you want to scale it correctly for the 2.5D scene, but the *20.0f doesn't do that.

this is exactly what the 20 is supposed to be for; it's just a hard coded number at the moment, which is probably why it's confusing. what is your suggestion for scaling back to the 2.5D scene? my original thinking was that taking the half width/height would do it, but it doesn't sound like it.

##### Share on other sites
You have to divide the length values in the editor by half of the width of the bounding box in the editor.
Then in the game, you have to multiply these .a values by half of the in-game, rendered width (in pixels) of the bounding box.

Note that these widths are not necessarily the same.

Why can't you do this? Why do you have to use arbitrary, hard coded values?

////EDIT wait a minute again.

The whole thing is a mess; that's why it's so hard to spot the mistake.
Where is the origin of the coordinate system in the editor? Is it inside the object? Is it in the middle of the object, or does it have some arbitrary position?
If the origin is in the center of the object, only then does the half-width thing apply. You seem to use absolute values for the coordinates of the object in the editor, so that could be one cause of error.

Then, you "decode" the data in the game, with the totally arbitrary absolute coordinates of the sprite?
Do I get it right? If so, the whole thing will be totally screwed.

I still suggest using the method I proposed; the way you do it now will be ridiculously complex even if you manage to solve it.

Blah, maybe I'm wrong again, I give up. Edited by szecs

##### Share on other sites

One potential reason is that I believe that you want to scale it correctly for the 2.5D scene, but the *20.0f doesn't do that.

this is exactly what the 20 is supposed to be for; it's just a hard coded number at the moment, which is probably why it's confusing. what is your suggestion for scaling back to the 2.5D scene? my original thinking was that taking the half width/height would do it, but it doesn't sound like it.

Ok, so instead of multiplying by 20, it would make more sense to multiply by NewBoxWidth/OldBoxWidth.
OldBoxWidth = width of the box (in pixels) enclosing the object when the 3 textures are saved.
NewBoxWidth = width (in pixels) of the box enclosing your object during the game rendering.

So the whole pipeline would be like this.
Instead of saving positions, you save Position / MaxPos. So you don't have pos.xyz normalized; instead you map them from [-MaxPos...MaxPos] -> [-1..1]. You can also pack them to [0..1] if you desire, but i don't see much gain in this.

The object-space positions are defined in the interval [-MaxPos...MaxPos].

In the game you reconstruct this position back, multiplying in.xyz by MaxPos (and unpacking them from [0..1] to [-1..1] if needed).

Ok, so the only thing that remains to be done is scaling it, which can be done by the procedure that I described above.
Btw, OldBoxWidth is in pixels and represents, let's say, the texture width, while NewBoxWidth represents the item's box's in-game dimension in pixels.
These have nothing to do with MaxPos; MaxPos is in some abstract units, definitely not pixels.

Your scaling, even when you added the [-16...16] interval, will be 20 times bigger, while you actually want a smaller object in the game. Edited by clickalot
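Putting that pipeline together as a hypothetical Python sketch (MaxPos, OldBoxWidth and NewBoxWidth are the quantities from this post; the example numbers are made up):

```python
def encode(p, max_pos):
    """Editor side: map object-space components from [-max_pos, max_pos] to [-1, 1]."""
    return [c / max_pos for c in p]

def decode(e, max_pos, old_box_width, new_box_width):
    """Game side: recover object-space units, then rescale by the on-screen box ratio."""
    scale = new_box_width / old_box_width   # in-game pixels / bake-time pixels
    return [c * max_pos * scale for c in e]

p = [3.0, -4.0, 12.0]
e = encode(p, max_pos=16.0)
print(decode(e, max_pos=16.0, old_box_width=128.0, new_box_width=64.0))
# half-size on screen -> positions come back at half scale
```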