Simple method for implementing terrain collision?



bros, can you just give me a hint of what's the way to do it? Because I did some research, and there seems to be more than one way of doing it.

1st way:   Cast a ray down and check intersection with terrain.

2nd way: Divide the terrain into grids and find the height of the terrain at the specific point I'm standing on

3rd way:  Screenshot the world from above and use the values from the depth buffer to get some terrain stuff. This is easiest, but it's bad, right?

EDIT: Actually, it's bad if I do it every frame, but I can do it only once and save to some image or something.

And just one more off-topic question, if you have time.

Guys, can't I just start using Unity, since it will do most of the hard stuff for me? The reason is that I've been making this game of mine for 5-6 months now and I don't see any results at all.

I'm kind of a patient person, but I spent all this time bothering with loading models, loading skeletal animation, wasting time with the OpenGL API, figuring out how to manage game resources, making simple state machines for animation transitions, implementing AABBs and simple raycasting stuff, tinkering with shaders to add shadows and Blinn-Phong, bothering with quaternions to do interpolation, and ultimately my game still sucks 6 months later.

Do the skills I'm learning actually have any value, or am I just doing all this for fun (not so much fun when debugging)?

Edited by codeBoggs


Terrain is generally implemented as a height map.  In that regard, your option #3 is basically correct:

Screenshot the world from above and use the values from the depth buffer to get some terrain stuff. This is easiest, but it's bad, right?

The height map is a simple grid.  Your object is at (x,y,z). Ignore the object's height, look up the height map value directly under the object, and compare it against the height of the object.  For a small object like a player's collision capsule you probably only have a single contact point.  For more complex objects you may have more than one contact point or contact area.

If your terrain changes as a result of your game, you will need to update your height map as well.
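A minimal sketch of that lookup in C++ (the struct, names, and the 1-world-unit-per-cell assumption are mine, for illustration only):

```cpp
#include <cassert>
#include <vector>

// Hypothetical heightmap: a row-major grid of pre-decoded heights.
struct HeightMap {
    int width, depth;
    std::vector<float> data;          // data[z * width + x]
    float at(int x, int z) const {
        // Clamp to the edges instead of returning a bogus 0 height.
        if (x < 0) x = 0;
        if (x >= width) x = width - 1;
        if (z < 0) z = 0;
        if (z >= depth) z = depth - 1;
        return data[z * width + x];
    }
};

// Simplest ground "collision": push the object back up if it sank
// below the terrain. Assumes 1 world unit == 1 grid cell.
float resolveGround(const HeightMap& hm, float x, float y, float z) {
    float ground = hm.at(static_cast<int>(x), static_cast<int>(z));
    return (y < ground) ? ground : y;
}
```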

Actually, it's bad if I do it every frame

Why do you think that?

Objects that move by physics need to have collisions checked every physics update.  Typically, if you use a spatial grid, you can reduce it down to under 3 objects that need collision tests. Terrain is probably the easiest to collide with: since it is a heightmap, you can directly query for the quads under the object, then do both broad-phase and narrow-phase detection easily.  For other objects you need to compare two potentially complicated physics meshes with both broad-phase and narrow-phase detection.


Ok, I got the height map loaded and I have a terrain that I generated using that height map in Blender.

Now how do I know exactly what pixel from the map corresponds to the place I'm currently standing at?


I'd hope you have some XY coordinate telling you where in the world you are (assuming XY is the horizontal plane).

The area covered by (xmin, ymin)..(xmax, ymax) in the world maps onto the (0,0)..(xsize, ysize) of your height map.
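In code that mapping is just a scaling; a sketch (hypothetical function name, assuming nearest-cell rounding and clamping at the edges):

```cpp
#include <cassert>

// Map a world coordinate in [worldMin, worldMax] onto a heightmap
// index in [0, size-1]. Names are made up for illustration.
int worldToMapIndex(float w, float worldMin, float worldMax, int size) {
    float t = (w - worldMin) / (worldMax - worldMin);  // 0..1 across the map
    int i = static_cast<int>(t * (size - 1) + 0.5f);   // round to nearest cell
    if (i < 0) i = 0;                                  // clamp off-map queries
    if (i > size - 1) i = size - 1;
    return i;
}
```

You'd call it once for X and once for Y to get the pixel under the player.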

Edited by Alberth

a hint of what's the way to do it

i use a combo of heightmap and collision map for Caveman 3.0.

the height map handles steep slopes, and the collision map handles things that stick out of the ground, like trees, rock outcroppings, buildings, etc.

when you go to move, you check a short distance ahead and get the ground height.   that combined with the height where you're at lets you compute the slope of the rise / fall in front of you. if it's too steep, you "collide"; if it drops off too quickly, you slide and fall.  if the heightmap doesn't mess you up, then you check for banging into trees etc, using the collision map.
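A sketch of that slope test (my own naming and thresholds, not the actual Caveman 3.0 code):

```cpp
#include <cassert>

// Outcome of trying to move a short step forward.
enum class MoveResult { Walk, Collide, SlideAndFall };

// Compare the ground height a short distance ahead with the ground
// height where you stand, and classify the slope between them.
MoveResult checkStep(float hereHeight, float aheadHeight, float stepDist,
                     float maxClimbSlope, float maxDropSlope) {
    float slope = (aheadHeight - hereHeight) / stepDist;
    if (slope > maxClimbSlope)  return MoveResult::Collide;       // too steep up
    if (slope < -maxDropSlope)  return MoveResult::SlideAndFall;  // drops away
    return MoveResult::Walk;
}
```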

Edited by Norman Barrows


my heightmap texture is 1801x1801, does it need to be a power of 2?

Edited by codeBoggs


my heightmap texture is 1801x1801, does it need to be a power of 2?

No, not unless whatever you're using has some special requirements.


This is my function that gets a pixel from the image:

ILubyte getPixelFromImage( int x, int y, Image img )
{
    if( x < 0 || x >= img.width || y < 0 || y >= img.height )
    {
        return 0;
    }
    return img.data[ y*img.width + x ];
}


the img.data is all the decoded pixel bytes from the .png file. But how can I be sure that I use grayscale and not rgb? And I heard somewhere that human vision is not linear, which is why height maps need to be stored linearly, not using sRGB or something like that.

And the function above returns a byte that can be from 0 to 255; how do I regulate the height? Because 255.0f above the ground is too much.

EDIT: Problem with the DevIL library, again. It doesn't work with 16-bit png; I need to figure out something else. These C++ image libraries are more trouble than they are worth.

EDIT 2: From devil doc:

If an image is in a format that is non-native to DevIL, it is automatically converted to a useable format. All 16-bit images are handled this way, being converted to 24-bit images on load. To use your own functions to load unsupported image formats or to override DevIL's loading functions, read the tutorial on Registering

Can I somehow read the height map as 24 bit? If I can, what should I read, R, G or B?

Edited by codeBoggs


Generally a bad idea to use jpeg or other lossy formats.  They're great for sending compressed images over the internet for human viewing, but they're terrible for most other applications.

Some encoders and decoders can work with them in lossless ways, but because of the way decoders work even a lossless-encoded image can suffer from artifacts on decompression.

The generic standards for this type of data are TIFF and TGA formats that support various compression options. Some engines will just store it as raw data that gets zipped up, or pgm format that gets zipped.


frob, DevIL has some problems with .tga and I don't want to bother with a different library now. Even if .png isn't ideal, it's not a big deal; for now I just want to make it work, but I can't, because it converts from 16-bit to 24-bit.

EDIT: No, the documentation is old. My image is loaded as 1 byte per pixel.

I couldn't find where I mess up. I will show you the code, in case someone sees something.

Image getDataFromImage( std::string path )
{
    Image img;

    //Generate and set current image ID
    ILuint imgID = 0;
    ilGenImages( 1, &imgID );
    ilBindImage( imgID );

    //Load the file into the bound image
    if( !ilLoadImage( path.c_str() ) )
    {
        printf( "Failed to load %s\n", path.c_str() );
    }

    //Put some stuff into my Image struct.
    img.width = ilGetInteger(IL_IMAGE_WIDTH);
    img.height = ilGetInteger(IL_IMAGE_HEIGHT);
    img.data = ilGetData();
    img.bpp = ilGetInteger(IL_IMAGE_BPP);
    printf( "%d\n", img.bpp );

    return img;
}


Getting the pixel from the 8-bit image.


GLuint getPixelFromImage( int x, int y, Image img )
{
    if( x < 0 || x >= img.width || y < 0 || y >= img.height )
    {
        return 0;
    }
    return (GLuint)img.data[ ( y*img.width + x ) ];
}

Edited by codeBoggs

But how can I be sure that I use grayscale and not rgb

Grayscale doesn't exist in PNG, I think. At least I always find it as an RGB image, with 256 colours, where each colour is a RGB triple (x, x, x), ie all the same value.

Now if you have a paletted image, the usual approach is that colour i = RGB(i, i, i). That makes the index value equal to the color component value, and you can use the palette index as value in the greyscale.

As for knowing what you got, read the image library documentation, it normally has query functions to ask about the type of image that it loaded. It should also explain how the image array that you have is laid out. I expect single bytes if you have a paletted image, but check. For a non-paletted image, a 3-byte data type doesn't exist in C/C++, different libraries take different solutions. Some switch to a 4-byte type, others stay with single byte values, and put each channel (each colour component) in a different byte.

Edit: Some modern image libraries always convert to full RGB, as that's what a GPU needs.

And I heard somewhere that the human vision is not linear

how do I regulate the height? Because 255.0f above the ground is too much.

Scale the range to what you need to have.

Can I somehow read the height map as 24 bit? If I can, what should I read, R, G or B?

What does that mean? 24-bit often means 8-bit R, 8-bit G, 8-bit B; or is a single channel (one colour component) 24-bit?

Edited by Alberth


Alberth, I use this website: http://terrain.party/

What it does is that it allows me to download the heightmap of every place on the planet I want. And it stores it as a 16-bit png 1081x1081 res grayscale image because this is the best that the satellites can do. And using Blender, I can recreate the whole surface as a 3d model only by using the heightmap, it's super cool.

And the way I see it, 8-bit means 255 different options.

So if the png is 16bit, then it stores 65 535 different values of gray.

This means that in order to read one number, I need to parse 2 bytes. Should I worry about big-endian stuff or something like that?

From what I've read and tried, DevIL doesn't work with 16-bit grayscale pngs; that's why I need to convert it, for example, to 32-bit, but I don't know what happens under the hood when 16-bit grayscale gets converted to 32-bit rgba.

There are programs that do this, but what's the logic of that? It seems to me that I will lose precision from 65535 to 255, right?

I'm doing something very wrong, because it doesn't work, I get wrong values and my character moves like this:   /\/\/\/\/\/\/\/\/\/.

Edited by codeBoggs

And the way I see it, 8-bit means 255 different options. So if the png is 16bit, then it stores 65 535 different values of gray.

That sounds correct, except that the first value you can store is 0 rather than 1, so it's 256 and 65536 different values.

This means that in order to read one number, I need to parse 2 bytes. Should I worry about big-endian stuff or something like that?

I would assume the PNG format has covered that, unless you want to write a PNG decoder yourself. If you load such an image without a library's 16-bit PNG support, you'll also run into that problem.
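If you do end up handling the raw decoded bytes yourself: the PNG format stores 16-bit samples big-endian (high byte first), so combining the two bytes of one sample looks like this (hypothetical helper, not from any particular library):

```cpp
#include <cassert>
#include <cstdint>

// Combine two consecutive raw bytes of a 16-bit PNG sample.
// PNG sample bytes are big-endian: high byte comes first.
uint16_t sample16BE(const uint8_t* p) {
    return static_cast<uint16_t>((p[0] << 8) | p[1]);
}
```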

There are programs that do this, but what's the logic of that. It seems to me that I will lose precision from 65535 to 255, right?

That sounds likely for a generic conversion program.

The question is, is that bad?

I'm doing something very wrong, because it doesn't work, I get wrong values and my character moves like this: /\/\/\/\/\/\/\/\/\/.

Your character is just jumpy :D

I can agree it doesn't look right, but that's about all I can conclude.

You have a long chain of things that you do to make the character aware of its height, and the problem is at least in one of the links of the chain. Question is, which one.

At a somewhat higher level, there are 3 parts.

- The satellite image converted to a heightmap, loaded by DevIL, where you lose a factor of ~256 in precision (probably)

- Code that derives the height from the loaded image

- Code that plots the character at the right height.

Any of this code can be wrong. Likely I missed a few steps.

There is also the other option, namely all code does what it is supposed to do, but you didn't realize some of the effects that it has. In other words, the code is ok, your ideas need to change. I name this option "computer outsmarts designer". It's a fun phase, where you get confronted with weird effects that are totally correct, but unexpected.

Before you can conclude the latter though, you need to prove the problem is not in the former chain of code. The usual process for that is called debugging, and it's an art in itself.

So far you have established that the end result is not as desired/expected, but is it wrong in the sense that the code is not doing what it is supposed to do?

Can you get the value that the code reading the height map is producing?

Also, can you get the height of your character?

These should match according to the conversion you implemented. In this way you can check the code that computes the height is doing its job.

It will tell you Z=<some value> for your character. If you look at the screen, is the character drawn at the height that you expect? Maybe you want to add objects in the world with a known height so you have a reference.

This can tell you whether the drawing code draws things where they are supposed to be drawn.

If this all works, the chain from the image to the plotted height is good, so the problem is perhaps earlier. My next step would be to use a heightmap with known properties that I prepared myself. For example, flat, but one ridge, or some other known pattern.

Load the image, see how the program reacts. Does it do what you think it should do? ( Edit: "should do" in the sense that the code does what it is programmed to do!)

If so, the problem is clearly somewhere in the conversion and/or loading of the satellite image. If it is not, it's time to think hard why a self-prepared image doesn't do what you want it to do.
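Generating such a known test map takes only a few lines; a sketch (flat floor plus a one-cell-wide ridge; names are made up):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Build a known 8-bit test pattern: a flat floor with a single
// one-cell-wide ridge down the middle column. If the character
// misbehaves on this, the bug is in the lookup, not the data.
std::vector<uint8_t> makeRidgeMap(int w, int h,
                                  uint8_t floorVal, uint8_t ridgeVal) {
    std::vector<uint8_t> map(w * h, floorVal);
    int mid = w / 2;
    for (int y = 0; y < h; ++y)
        map[y * w + mid] = ridgeVal;   // row-major, same layout as before
    return map;
}
```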

Edited by Alberth


Can you get the value that the code reading the height map is producing? Also, can you get the height of your character? These should match according to the conversion you implemented. In this way you can check the code that computes the height is doing its job

Yes, the mistake was here. The height data was totally wrong. And I found out the fault was in DevIL (and my fault too, for using that forgotten lib). But basically, the DevIL lib doesn't know how to handle 8-bit and 16-bit images, and it tries to convert them to 32-bit, but in my case something goes wrong and I end up with a totally wrong 8-bit grayscale image.

What I did was open the image with Paint and export it as a 32-bit .png; then I just read the R values, because they are all the same. And now the heightmap works perfectly.

Thanks a lot man, again. I missed that Heureka moment because you kind of debugged it for me. But it's still kind of cool. I even made a gif.

[attachment=34209:heightmapgif.gif]

Edited by codeBoggs


I thought the Heureka moment was when you see it's actually doing what you thought it should do :D

I mostly just showed you the kind of reasoning that goes behind finding the cause of a problem, hopefully you can apply that a next time.

Image looks great :)


Grayscale doesn't exist in PNG, I think.

PNG supports grayscale at several bit depths and both with and without alpha.

Alberth, I use this website: http://terrain.party/ What it does is that it allows me to download the heightmap of every place on the planet I want. And it stores it as a 16-bit png 1081x1081 res grayscale image because this is the best that the satellites can do. And using Blender, I can recreate the whole surface as a 3d model only by using the heightmap, it's super cool.

dude i need that for AIRSHIPS!

ok, now i'm REALLY interested! <g>

try a dump or convert and see what they mean by "greyscale". odds are it's the same value for r, g, and b.  so you have a 16-bit value, and you can read any channel, r, g, or b.  on the website, they probably supply a scale factor too that scales the 64K values to real altitudes, as well as the number of real-world meters between pixels.

worst comes to worst, you look up the spec for 16-bit png and write some code that reads a png and extracts the required data into a simple 2d array of bytes, words, etc., as the case may be, then write it out as a bin file.   custom file formats designed for the game in question are almost always the most efficient way to go.

Edited by Norman Barrows


sweet!

is that your own code rendering the imported satellite heightmap data?

odds are it's the same value for r, g, and b

Got the 16bit from the website, converted it to 32bit with Paint, and r, g, b are the same, yes.

is that your own code rendering the imported satellite heightmap data?

Yes, but it's not hard at all. I even posted it here somewhere. But I use a library to load the image. When you export the data from the website, you get 4-5 images in an archive; choose the one with the name "merged". The others may have some problems with the height values.

Edited by codeBoggs


You're going to want to interpolate between the 4 nearest heightmap positions. There are two ways to do this.

The first way is to just bilinearly interpolate using the object's fractional XY position, interpolating the heights of both p1--p2 and p3--p4 along the X axis (if p1-p4 are laid out in a 'Z' shape). Then interpolate along the Y axis using the object's fractional position between the top edge and bottom edge of the terrain grid square, interpolating the two height values resulting from the first X-axis lerps.

The second way is better suited if your terrain grid squares are being rendered as a pair of triangles, which means you only want to interpolate between the 3 points that the entity is above, instead of all 4. All you have to do is determine which triangle's right angle the entity is closest to, and then interpolate between the heights of the points of that triangle.

If you use the first method, and you are drawing two triangles per heightmap square, you could get entities that interpolate above/below the ground a bit, but it's much simpler to do.
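A sketch of the first (bilinear) method, with my own parameter names; h00..h11 are the four corner heights of the grid square the object is inside, and fx/fz are its fractional position within that square (0..1):

```cpp
#include <cassert>

// Bilinear height sampling across one terrain grid square.
float bilerpHeight(float h00, float h10, float h01, float h11,
                   float fx, float fz) {
    float near = h00 + (h10 - h00) * fx;   // lerp along X on the near edge
    float far  = h01 + (h11 - h01) * fx;   // lerp along X on the far edge
    return near + (far - near) * fz;       // lerp the two results along Z
}
```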

Edited by deftware


deftware, haven't seen you in a long time. Thanks for posting. :rolleyes:

You're going to want to interpolate between the 4 nearest heightmap positions.

Why do I need to interpolate? You mean that my movement can be smoother?

Can't I just use an ultra-high-resolution heightmap or something?

My movement seems fine for now, although I noticed it could be smoother.

Can you elaborate more on this, more on the "why" part, I mean? :huh:

Edited by bogosaur5000


Can't I just use an ultra-high-resolution heightmap or something? My movement seems fine for now, although I noticed it could be smoother. Can you elaborate more on this, more on the "why" part, I mean? :huh:
More pixels doesn't give you more precision in height.

Assume a standard greyscale heightmap. That gives you 256 height levels. I don't know how your world looks, but let's assume height runs from 0 to 100.

That means grey-level 0 == height 0, and grey-level 255 == height 100. ie height = grey-level * 100/255.

That means height at grey-level 1 = 1 * 100 / 255 = 0.39. The question is now, is a vertical jump of 0.39 acceptable (the smallest possible jump you can express in grey-scale).

In any case, since grey-scale goes up or down in full units, there will always be jumps no matter how small.

Interpolation adds a smooth transition between the different height levels, which means you never see a jump.
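The arithmetic above as a tiny function (same assumed 0..100 world height range):

```cpp
#include <cassert>
#include <cmath>

// Scale an 8-bit grey level (0..255) into a 0..100 world height.
// The smallest step you can express is 100/255, about 0.39 units.
float greyToHeight(int grey) {
    return grey * 100.0f / 255.0f;
}
```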

Interpolation adds a smooth transition between the different height levels, which means you never see a jump.

Oh, I got it. So the bigger the difference between the lowest and highest point on my map is, the bigger the jump is.

You sold me on interpolation, guys. I'll do it. :lol:  I'll do the simplest one.

Edited by bogosaur5000


Oh, I got it. So the bigger the difference between the lowest and highest point on my map is, the bigger the jump is.

no, the bigger the smallest possible height difference is between some altitude and the next one up or down, the bigger the jump is. IE if you use a 256 grayscale with one unit = 0.1 meters., your heightmap can store values from zero to 25.5 meters, at a resolution of 0.1 meters. its a step function.   like a staircase.   interpolation turns it into a linear function, like laying a flat board over the staircase.   it goes from zero to 25.5 meters with the resolution of floats or doubles - your chose of implementation.