Worms - Terrain normals

6 comments, last by kloffy 15 years, 4 months ago
Hi everybody, I'm working on a Worms 2D clone. Currently, for terrain collision detection, I'm storing a big texture where one color means "SOLID" and the other colors mean "FREE TO MOVE". When a collision occurs I'd like to compute the normal at that point, so I can reflect the velocity vector and get a correct (and hopefully realistic) bounce. I assume I should examine the 8 adjacent pixels, and I can do that, but I don't know what to do with that information. Help needed! Thanks in advance, and sorry for my poor English.
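For reference, the bounce I have in mind once I know a unit normal at the collision point would be something like this (just a sketch; the function name and damping factor are placeholders):

# Sketch: reflect the velocity (vx, vy) off a unit normal (nx, ny)
# and dampen the result a bit. restitution is a made-up parameter.
def reflect(vx, vy, nx, ny, restitution=0.8):
    # standard reflection: v' = v - 2 * (v . n) * n
    dot = vx * nx + vy * ny
    rx = vx - 2.0 * dot * nx
    ry = vy - 2.0 * dot * ny
    return rx * restitution, ry * restitution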
The easiest way I could think of would be to take the two neighboring pixels to the collision pixel, and create a line using them. Then take the normal of that line.
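Something along these lines, where (x1, y1) and (x2, y2) are the two neighboring solid pixels (just a rough sketch, the function name is made up):

import math

# Rough sketch: the line through the two neighboring solid pixels gives
# the surface direction; rotating it 90 degrees gives a normal.
def normal_from_two_pixels(x1, y1, x2, y2):
    dx, dy = x2 - x1, y2 - y1            # direction along the surface
    nx, ny = -dy, dx                     # perpendicular to that direction
    length = math.hypot(nx, ny)
    return (nx / length, ny / length) if length else (0.0, 0.0)

You'd still have to flip the normal if it happens to point into the terrain (checking it against the incoming velocity would do).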

What do you think?
Yes, that's the way I guess it should be done. BUT, since the collision pixel has 8 adjacent pixels, how should I pick the right 2?

For example:

If an object collides with a vertical surface, the two pixels should be the one above and the one below. If the object collides with a horizontal surface, I should pick the pixels to the left and to the right... ALSO... what if the surface is neither horizontal nor vertical? How do I determine the angle of the surface just by looking at the pixels?
Well, if the surface is vertical then there are only two adjacent pixels that form a continuous line: the one above and the one below. So, assuming that objects cannot sink into the ground, you want the two solid pixels that are next to clear pixels.
So you need to determine which two pixels have a clear pixel next to them (see the sketch after these examples):

1=solid
0=non-solid
2=collision point
Vertical wall (left side)
110
120
110
Horizontal wall (floor)
000
121
111
Diagonal
100
120
111
Different diagonal
100
120
110
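Something along these lines, just as a sketch - solid(x, y) stands for whatever lookup you already have into your collision texture:

# Sketch: collect the solid neighbours of the collision pixel that also
# touch at least one clear pixel - those are the surface pixels to build
# the line from. solid(x, y) is assumed to be your existing lookup.
NEIGHBOURS = [(-1, -1), (0, -1), (1, -1),
              (-1,  0),          (1,  0),
              (-1,  1), (0,  1), (1,  1)]

def surface_pixels(cx, cy, solid):
    result = []
    for dx, dy in NEIGHBOURS:
        x, y = cx + dx, cy + dy
        if not solid(x, y):
            continue
        # keep solid neighbours that have at least one clear neighbour
        if any(not solid(x + ex, y + ey) for ex, ey in NEIGHBOURS):
            result.append((x, y))
    return result

In the diagonal cases this can return more than two pixels, so you would still pick the pair that lies on opposite sides of the collision pixel.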
If you had the positions the projectile passed through on the way to the collision point, it would make the selection a bit easier (because you could determine the direction and choose appropriate pixels to build the line from).

Hell, on second thought, you may not need that extra information. If at the point of collision you can still grab the velocity, you can see where the projectile was going to be (collision+1 frame) and use the pixels adjacent to that spot to form your normal.

310
120
110

0 = nothing
1 = terrain
2 = collision point
3 = collision+1 frame (this is a terrain piece)

So, in that example, the projectile went up from the bottom right corner, and hit that #3 spot. Thus, I would use the spots to the right of and below the #3 to form a normal.
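Very roughly, something like this (a sketch, assuming grid coordinates with x to the right and y increasing downward):

# Sketch: step the collision cell one cell along the velocity to find
# the "collision + 1 frame" cell. sign() gives -1, 0 or +1 per axis.
def sign(v):
    return (v > 0) - (v < 0)

def next_cell(cx, cy, vx, vy):
    return cx + sign(vx), cy + sign(vy)

In the grid above, with the projectile moving up and to the left, that step lands on the #3 cell.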

Note also that there would be some special cases; here's one that just came to mind if you used this method, and I'm sure you could think of more.

000
020
000

There was just a random pixel floating off in space that got collided with. This is definitely possible in Worms, assuming you don't apply gravity to the terrain. In this case, I would say just negate the Y component and dampen it a bit, or something.

Then, there's also this case if you can't form a normal.

300
020
000

There's only one extra pixel, and the bullet came from the bottom right corner again. For this, there is definitely no easy way out. You may just have to hard-code a check for this case, because you can't just form a normal between the collision point and the collision+1 point.
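One crude fallback for these degenerate cases would be to bounce the projectile straight back along its own path and dampen it, roughly like this (the damping factor is made up):

# Sketch: when no sensible surface line exists, reverse the velocity
# and dampen it so the projectile doesn't tunnel through the pixel.
def fallback_bounce(vx, vy, restitution=0.5):
    return -vx * restitution, -vy * restitution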

I'm not sure this is the best solution, but hopefully it provides a bit of food for thought!
This is a little bit off-topic, but maybe it could be useful for you: 2d Worms Terrain Generator Tutorial
Hi, I was meaning to reply to this thread for several days but I kept getting interrupted. I wrote most of a reply yesterday but then my laptop battery died. Argh! Anyway, here goes:

In image processing language, the operation you are looking for is called 'gradient estimation', and for your purposes the gradient and the normal are essentially the same thing. There are several ways to compute the gradient, but the easiest approach is a method called 'central difference'.

I realise that you have a binary image (values are either on or off), but let's start by considering a greyscale image which represents a heightmap. If you place a ball on the surface of the heightmap it will start rolling downhill in a particular direction - this direction of steepest descent is what the gradient estimation operator gives you. As we will see later, it is also the normal you want from your image.

For a heightmap the gradient you compute will have two components (x and y), and these can be computed separately. For a given pixel we ask 'how much is the height changing in the x direction?' and 'how much is the height changing in the y direction?'. For the first question we take the difference between the pixels to the left and right, and for the second we take the difference between the pixels above and below. Let's look at an example 3x3 heightmap:

1 2 5
3 5 7
4 7 9

We want to find the gradient at the center pixel. You can see by eye that the values in the top left are generally lower than the bottom right, so we expect our gradient to point up and to the left. By applying central difference:

x component = left - right = 3 - 7 = -4
y component = top - bottom = 2 - 7 = -5

Which gives a vector of (-4, -5). I'm considering the positive directions to be right and down but of course you might want to change this, and you might also want to normalise the vector.
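In code it could look something like this (just a sketch - I'm using the same sign convention as above, indexing the heightmap as a list of rows, and normalising at the end):

import math

# Sketch of central difference on a heightmap stored as a list of rows,
# using left - right and top - bottom as above (+x right, +y down).
def gradient(heightmap, x, y):
    gx = heightmap[y][x - 1] - heightmap[y][x + 1]
    gy = heightmap[y - 1][x] - heightmap[y + 1][x]
    length = math.hypot(gx, gy)
    return (gx / length, gy / length) if length else (0.0, 0.0)

example = [[1, 2, 5],
           [3, 5, 7],
           [4, 7, 9]]
print(gradient(example, 1, 1))   # (-4, -5) normalised, roughly (-0.62, -0.78)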

Ok, so now try it on a binary image. 1 is for solid material, and 0 is for empty space:

0 0 1
0 1 1
1 1 1

x component = left - right = 0 - 1 = -1
y component = top - bottom = 0 - 1 = -1

Well, it's basically correct in that it points up and to the left. However, when operating on a binary image like this you have the problem that there are only 8 possible output vectors - your normal will always point up, down, left, right, or at 45 degrees.

There are two solutions to this. The first is that you could blur your binary image so that, for example, each pixel is replaced by the average of its four neighbours. The result is then some fraction, rather than just being 0 or 1.

The second option is to use a larger 'filter', which doesn't just look at the immediate neighbours but might instead look 2 or 3 pixels in each direction. You will want to weight the pixels (so that those nearby have more influence than those further away) but there are several standard 'filters' which do this. However, I would recommend starting with the blurring approach.
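A quick sketch of the blurring option, assuming the terrain is stored as a list of rows of 0s and 1s and simply clamping at the edges:

# Sketch: blur each sample by averaging its four neighbours, then apply
# the same central difference as before. Edge handling is just clamping.
def sample(grid, x, y):
    x = max(0, min(x, len(grid[0]) - 1))
    y = max(0, min(y, len(grid) - 1))
    return grid[y][x]

def blurred(grid, x, y):
    return (sample(grid, x - 1, y) + sample(grid, x + 1, y) +
            sample(grid, x, y - 1) + sample(grid, x, y + 1)) / 4.0

def smooth_gradient(grid, x, y):
    gx = blurred(grid, x - 1, y) - blurred(grid, x + 1, y)
    gy = blurred(grid, x, y - 1) - blurred(grid, x, y + 1)
    return gx, gy        # normalise before using as a normal

This gives you intermediate directions between the 8 fixed ones, because partially solid neighbourhoods now contribute fractional values.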

To give you some reassurance, central difference is the exact approach I use in my project. I have a 3D volume rather than a 2D image, but I still need the normals/gradients for physics and for lighting. My project is basically the 3D equivalent of yours. Have a look at the link in my signature if you are interested.
I think the suggestions are pretty much on the right track. Thinking about this, I remembered another algorithm that is similar: hq3x. You might want to take a look at it.

