achieving the same effect in this video...


I edited the post heavily. You mustn't normalize the positions, because it will make all the positions unit length.



Hmm... Wait a minute. If you do this normalization and store the length, and use the same method to render in the game, then maybe it should be fine?

But I'm not at all sure about this here:
vec4 o_Vertex = ViewModel*vec4((o_HeightMap.rgb*2.0f-1.0f)*20.0f*(o_HeightMap.a),1.0f); //unpacks the direction from [0,1], scales by the stored length and the 20-unit quad half-size, then transforms to view space

Even if it would work, your solution is much more complicated than mine.


Um..... do you want to dynamically rotate the sprites in the game? My connection is too slow to watch the video.


The sprites won't be rotated in game (other than billboarding toward the camera, and since the camera is fixed, in theory I shouldn't have to worry about the billboard rotation).
If I normalize and store the length, it should decode back correctly, since you're multiplying by the same value that each component was divided by.
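Roughly, the round trip I'm counting on looks like this (just a sketch, reusing the buffer names from the snippets in this thread; it only holds if the stored .a never gets clamped):

// encode (editor render): pack the direction into [0,1] and keep the raw length in .a
o_PosBuffer = vec4((normalize(o_Vertex.xyz) + 1.0f) * 0.5f, length(o_Vertex.xyz));
// decode (in game): unpack the direction and multiply by the stored length
vec3 reconstructed = (o_HeightMap.rgb * 2.0f - 1.0f) * o_HeightMap.a; // == original o_Vertex.xyz if .a wasn't clamped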

Yes, I do agree that is most likely the problem line; for now I've hard-coded the quad's size (it's 40x40, but offset to the center, so it should be a radius of 20).

If you ever do get a chance, I highly recommend giving the video a watch. It's really awe-inspiring what the person has done; he has one other video that includes weather/snow and it... it kind of brings a tear to the eye how good the effects look.

My last thought:

The normalizing method may be good, but you have to scale (map) the lengths that you store in the .a channel. Simply divide the .a values by half of the in-editor bounding cube's width.
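A minimal sketch of what I mean on the encode side (editorHalfWidth here is a uniform you would add, holding half the in-editor bounding cube's width; it is not in your current code):

// map the length into [0,1] so it survives an 8-bit .a channel
o_PosBuffer = vec4((normalize(o_Vertex.xyz) + 1.0f) * 0.5f, length(o_Vertex.xyz) / editorHalfWidth);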


With my method you could eliminate the need for the position map and just use the .a component of the normal map.
Plus a lot of expensive normalization and other work would be eliminated as well.

Not certain if you saw my previous post, but I did think that was the potential issue (and it may be). However, when I try to scale the value back (which for now is set to 16), it even breaks the sphere's correct mapping for some reason, even though in theory the result should be correct.

Edit: I've been looking a bit closer at the ball that I thought was correct, but it seems to be slightly off. I'm not exactly certain why, but I suspect it has to do with that single line.

The 20 is derived from the quad's size I use to draw the final image in the scene. I believe the issue is with how I store the length, since anything above unit size is going to be clamped to 1 in the final output, so I did this:

o_PosBuffer = vec4((normalize(o_Vertex.xyz)+1.0f)*0.5f, length(o_Vertex.xyz)/16.0f); //should allow objects within 16.0f size to map correctly

vec4 o_Vertex = vec4((o_HeightMap.rgb*2.0f-1.0f)*(o_HeightMap.a)*16.0f*20.0f,1.0f); //unmaps the height map .a (x16) and scales by the 20-unit quad half-size

but this produces weird results, even with the ball, and I'm not certain why (the resulting image's alpha channel looks correct).


Maybe I am not paying enough attention, but I still don't get the need to multiply by 20.
First of all you store the position like this:
o_PosBuffer = vec4((normalize(o_Vertex.xyz)+1.0f)*0.5f, length(o_Vertex.xyz));
So basically, in .xyz you have a normalized direction vector, and in .a some length that may be clamped to 1 depending on what format you're using.

I don't understand the need to multiply this by the arbitrary value of 20 in order to get the position in object space back.
I call it arbitrary since that 20 was not used to encode it in the first place.

You just get position vectors that have the correct orientation, but once the stored length is clamped they will all come back with a length of 20, which is then transformed by the view-model matrix.
This is why I believe the sphere example only seems correct: every surface point of a sphere is the same distance from the center, so a uniform wrong length still looks like a sphere. You can try taking a cube and testing with that.
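To make that concrete (a sketch only, assuming the position buffer is an 8-bit target so the stored .a is clamped to [0,1]):

float storedA = clamp(length(o_Vertex.xyz), 0.0f, 1.0f);   // what an 8-bit .a channel actually keeps
vec3 decoded = normalize(o_Vertex.xyz) * 20.0f * storedA;  // what the game-side multiply rebuilds
// every vertex whose real length was >= 1.0 comes back with length(decoded) == 20.0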

One potential reason is that I believe that you want to scale it correctly for the 2.5D scene, but the *20.0f doesn't do that.



This is exactly what the 20 is supposed to be for; it's just a hard-coded number at the moment, which is probably why it's confusing. What is your suggestion for scaling back to the 2.5D scene? My original thinking was that taking the half width/height would do it, but it doesn't sound like it.
You have to divide the length values in the editor by the half-width of the bounding box in the editor.
Then, in the game, you have to multiply these .a values by the in-game rendered half-width (in pixels) of the bounding box.

Note that these two widths do not necessarily have to be the same.
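On the decode side that would look something like this (just a sketch; inGameHalfWidth is a uniform you would supply with the in-game rendered half-width in pixels, it is not part of your existing shaders):

// unpack the direction and rescale the stored [0,1] length into in-game units
vec4 o_Vertex = vec4((o_HeightMap.rgb * 2.0f - 1.0f) * (o_HeightMap.a * inGameHalfWidth), 1.0f);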

Why can't you do this? Why do you have to use arbitrary, hard coded values?

////EDIT wait a minute again.

The whole thing is a mess; that's why it's so hard to spot the mistake.
Where is the origin of the coordinate system in the editor? Is it inside the object? Is it in the middle of the object, or does it have some arbitrary position?
If the origin is in the center of the object, only then does the half-width approach apply. You seem to use absolute values for the coordinates of the object in the editor, so that could be one cause of error.

Then you "decode" the data in the game with the totally arbitrary absolute coordinates of the sprite?
Do I get it right? If so, the whole thing will be totally screwed.


I still suggest using the method I proposed; the way you do it now will be ridiculously complex if you even manage to solve it.




Blah, maybe I'm wrong again, I give up.

[quote name='clickalot' timestamp='1340033939' post='4950286']
One potential reason is that I believe that you want to scale it correctly for the 2.5D scene, but the *20.0f doesn't do that.


This is exactly what the 20 is supposed to be for; it's just a hard-coded number at the moment, which is probably why it's confusing. What is your suggestion for scaling back to the 2.5D scene? My original thinking was that taking the half width/height would do it, but it doesn't sound like it.
[/quote]

OK, so instead of multiplying by 20, it would make more sense to multiply by NewBoxWidth/OldBoxWidth.
OldBoxWidth = width (in pixels) of the box enclosing the object when the 3 textures are saved.
NewBoxWidth = width (in pixels) of the box enclosing your object during the game rendering.

So the whole pipeline would look like this.
Instead of saving positions, you save Position / MaxPos. So pos.xyz is not normalized; instead you map it from [-MaxPos..MaxPos] -> [-1..1]. You can also pack it to [0..1] if you want, but I don't see much gain in this.

The object-space positions are defined in the interval [-MaxPos..MaxPos].

In the game you reconstruct this position by multiplying in.xyz by MaxPos (and unpacking from [0..1] to [-1..1] first, if needed).

OK, so the only thing that remains to be done is scaling it, and that can be done by the procedure I described above.
By the way, OldBoxWidth is in pixels and represents, let's say, the texture width, while NewBoxWidth represents the item's in-game box dimension in pixels.
These have nothing to do with MaxPos. MaxPos is in some abstract units, definitely not pixels.
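Putting the whole pipeline into shader form (a sketch only; MaxPos, OldBoxWidth and NewBoxWidth are uniforms you would have to feed in yourself, and the [0..1] packing is the optional step mentioned above):

// editor pass: map object space [-MaxPos..MaxPos] -> [-1..1], then pack to [0..1] for storage
o_PosBuffer.rgb = (o_Vertex.xyz / MaxPos) * 0.5f + 0.5f;

// game pass: unpack, go back to object space, then rescale for the 2.5D scene
vec3 objPos = (o_HeightMap.rgb * 2.0f - 1.0f) * MaxPos;
vec3 scenePos = objPos * (NewBoxWidth / OldBoxWidth);
vec4 o_Vertex = ViewModel * vec4(scenePos, 1.0f); // then transform as before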

Your scaling, even with the [-16..16] interval you added, will be 20 times bigger, while you actually want a smaller object in the game.

