# Low poly terrain


Subscribe to our subreddit to get all the updates from the team!

Recently I've been tackling more organic low-poly terrains. The default way of creating indices for a 3D grid geometry is the following (credits) :

A way to introduce simple variations that make the geometry slightly more complicated, and thus more organic, is to vertically swap the indices of each adjacent quad. In other words, each quad adjacent to a given quad is its vertical mirror.
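As a minimal sketch of that idea (hypothetical helper, not the author's actual code): for a grid of quads, flip the split diagonal of every other quad in a checkerboard pattern, so each quad mirrors its neighbours.

```cpp
#include <cstdint>
#include <vector>

// Build triangle indices for a (width x height) grid of quads over a
// vertex grid of (width + 1) x (height + 1) vertices, flipping the
// split diagonal on alternate quads so adjacent quads mirror each other.
std::vector<std::uint32_t> buildIndices(int width, int height)
{
    std::vector<std::uint32_t> indices;
    indices.reserve(static_cast<std::size_t>(width) * height * 6);
    for (int z = 0; z < height; ++z) {
        for (int x = 0; x < width; ++x) {
            std::uint32_t tl = z * (width + 1) + x; // top-left vertex
            std::uint32_t tr = tl + 1;              // top-right
            std::uint32_t bl = tl + (width + 1);    // bottom-left
            std::uint32_t br = bl + 1;              // bottom-right
            if ((x + z) % 2 == 0) {
                // split along the tl-br diagonal
                indices.insert(indices.end(), { tl, bl, br, tl, br, tr });
            } else {
                // mirrored quad: split along the tr-bl diagonal
                indices.insert(indices.end(), { tl, bl, tr, tr, bl, br });
            }
        }
    }
    return indices;
}
```

With a noisy heightmap, the alternating diagonals break up the uniform "hatching" of a regular grid and make the faceting read as more organic.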

Finally, by not sharing the vertices, and hence creating two independent triangles per quad, this is the result with a coherent noise generator (Joise) :

I did something similar to this using a ridged multifractal, but I was subdividing roughly equilateral triangles because I was generating a planet based on an icosahedron. It looked pretty cool from a distance, but after I ran to the top of some mountains I noticed a problem: when I generated a ridge line, the top would sometimes end up jagged, like the teeth of a saw, depending on which way the ridge was running relative to the mesh. Subdividing further just made the saw teeth smaller, so it didn't really fix anything. The way I solved it was, as a final subdivision, to check the values at each corner and at the midpoints of the edges, and do a special custom final subdivision based on that information. If you run into this problem, you could try it.

10 hours ago, Gnollrunner said:

I did something similar to this using a ridged multifractal, but I was subdividing roughly equilateral triangles because I was generating a planet based on an icosahedron. It looked pretty cool from a distance, but after I ran to the top of some mountains I noticed a problem: when I generated a ridge line, the top would sometimes end up jagged, like the teeth of a saw, depending on which way the ridge was running relative to the mesh. Subdividing further just made the saw teeth smaller, so it didn't really fix anything. The way I solved it was, as a final subdivision, to check the values at each corner and at the midpoints of the edges, and do a special custom final subdivision based on that information. If you run into this problem, you could try it.

It's quite complicated, or at least it seems so when it's only words in an English paragraph. Any chance of you making a blog post about this so I could understand it better?

This terrain looks amazing. Fantastic work.

I am wondering how you will be resolving collisions for this type of terrain?

2 minutes ago, Scouting Ninja said:

This terrain looks amazing. Fantastic work.

I am wondering how you will be resolving collisions for this type of terrain?

Thanks!

Hmmm, heightmap collision detection already exists, not against the heightmap geometry itself but against the heightmap data. However, it's true that it's meant for smoother terrain. I guess if you took that algorithm and modified it to lerp correctly according to the geometry, it would work like a charm.
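A sketch of that modification (hypothetical helper, assuming a unit grid and quads split along the top-left/bottom-right diagonal): instead of the usual bilinear interpolation, pick the triangle that actually covers the query point and interpolate within its plane, so the collision height matches the rendered faceted mesh exactly.

```cpp
#include <cmath>
#include <vector>

// Height lookup that matches a faceted mesh: select the covering
// triangle, then evaluate its plane. Assumes unit grid spacing,
// 0 <= x,z strictly inside the grid, and a tl-br split; handling the
// checkerboard-flipped quads would add one parity test.
float groundHeight(const std::vector<std::vector<float>>& heights,
                   float x, float z)
{
    int gx = static_cast<int>(std::floor(x));
    int gz = static_cast<int>(std::floor(z));
    float fx = x - gx; // fractional position inside the quad, in [0, 1)
    float fz = z - gz;

    float tl = heights[gz][gx];
    float tr = heights[gz][gx + 1];
    float bl = heights[gz + 1][gx];
    float br = heights[gz + 1][gx + 1];

    if (fz >= fx) // triangle tl-bl-br
        return tl + (bl - tl) * fz + (br - bl) * fx;
    else          // triangle tl-tr-br
        return tl + (tr - tl) * fx + (br - tr) * fz;
}
```

Each branch is a plane equation through the triangle's three corners, so a character standing on the terrain never sinks into or floats above a facet.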

Also, if you have a lot of memory and CPU to spare, you could create triangles for the physics engine for each triangle in the geometry.

54 minutes ago, thecheeselover said:

it to lerp correctly according to the geometry, it would work like a charm.

I would really like to see the implementation when you do it. I do wonder if a character will be able to move safely on the terrain.

I would be really grateful if you can share your findings.

1 hour ago, thecheeselover said:

It's quite complicated, or at least it seems so when it's only words in an English paragraph. Any chance of you making a blog post about this so I could understand it better?

Here's what I was talking about, but now that I think about it, it may not be such a big problem for you since you're using more of a standard mesh than I was. If I remember right, when you subdivide, you split the triangles into two halves, so you can have edges at any 45-degree angle if you subdivide enough. I was subdividing into 4 parts to keep the triangles equilateral. However, on the final subdivision I had to do something special. I've kind of abandoned height maps now and have gone to voxels, so it's not a big deal anymore.

You could use a sine function to get rid of the spikes.

I made terrain the same way, with adjustable height only per vertex, not horizontal position; the texture will look a bit stretched.

I also made the collision code for the 2 triangles (note that DirectX might handle this differently than other APIs, with a different triangle orientation).

Try making a plane with only 1 vertex raised higher; then you can see 1 flat triangle and 1 that isn't flat.

1 hour ago, thecheeselover said:

Also, if you have a lot of memory and CPU to spare, you could create triangles for the physics engine for each triangle in the geometry.

With my old code I implemented JIT (Just in Time) terrain for the physics. It worked pretty well, and I simply did ball-to-mesh collision. I had to write it myself because I used the quadtree that the fractal functions placed everything in at run time. I figure for any real game you will need it anyway, because you need to hit trees and stuff. Here's a very old screenshot. I just shaded it with raw simplex noise implemented in HLSL. As such it has horrible aliasing problems, but I kind of fixed that later by playing with the amplitude as you get far from the camera. In any case, this world was about 1000 km in diameter and you could run all over it with these three collision balls, which were a stand-in for a character. Not that there was much to see, since there was no real shading yet, but it was kind of cool that you could never hit a wall, so you could go forever. It did have a moon and a sun, though, and everything orbited. I'm recoding all this stuff now for my voxel engine.

1 hour ago, Scouting Ninja said:

I would really like to see the implementation when you do it. I do wonder if a character will be able to move safely on the terrain.

I would be really grateful if you can share your finds.

I'll add this to my long term todo list.

17 minutes ago, the incredible smoker said:

You could use a sine function to get rid of the spikes.

I made terrain the same way, with adjustable height only per vertex, not horizontal position; the texture will look a bit stretched.

I also made the collision code for the 2 triangles (note that DirectX might handle this differently than other APIs, with a different triangle orientation).

Try making a plane with only 1 vertex raised higher; then you can see 1 flat triangle and 1 that isn't flat.

It was intended to look like a crumpled piece of paper, but thanks for the tip. By your last sentence, do you mean 1 shared vertex of a plane that is made of two triangles?

19 minutes ago, Gnollrunner said:

With my old code I implemented JIT (Just in Time) terrain for the physics.

Do you mean that you added a physics library through JIT or that you created triangles for the terrain's physics on the fly?

Nice, by the way.

8 minutes ago, thecheeselover said:

Do you mean that you added a physics library through JIT or that you created triangles for the terrain's physics on the fly?

JIT terrain is just what I called it. It's written in C++ and DirectX. The thing is, there is no way to store planet-sized mesh data on disk; it's just too big. So all the terrain is generated by the fractal functions as you move. There are actually 2 different sets of data: one for the graphics and one for the physics. The reason is that the graphics model can update a lot slower, say every second or so. Note this is not the frame rate, which is fast because there wasn't much GPU work; it just means the model of the terrain (i.e. the level of detail). If I tried to use it for collision, there is the possibility that if something lags, you would be building terrain under your feet, which is bad news for collision. On the other hand, I can update the physics terrain just right around the player and not worry about stuff that he can't touch, so I can do it very fast. The way it's written there are no race conditions. It has to update the terrain before you get there, so yes, the collision is safe, albeit slightly complex to implement. There is a system of nested bounding spheres.

5 minutes ago, Gnollrunner said:

On the other hand I can update the physics terrain just right around the player and not worry about stuff that he can't touch, so I can do it very fast.

Cool, I actually never thought that games could do this in a performant way.

2 minutes ago, thecheeselover said:

Cool, I actually never thought that games could do this in a performant way.

I'm not sure which games, if any, use run-time procedural generation. I assume some do, perhaps No Man's Sky; however, from what I gather the planets there aren't that large, so maybe not. But there are a lot of space games in the works, and some may use it.

@Cheeselover : no, just that all planes consist of 2 triangles.

Once you have the collision working, you can add plants and trees.

All very basic stuff.

Make a GetY( x , z ) function to get the ground position for placing things.

Also make a horizontal GetCollision function if you're going to add walls.

Put all level blocks in zones and handle zone changing, so you can have a fast collision algorithm.

Remember not to add grabbable items close to the zone borders; items are also added per zone.

You can also add a water plane for pools.

That is how I made the most basic version.

Making an editor for this also isn't difficult.

I also want to improve it with the ability to add bridges; very nice.

The only thing is, I gave up on all this (I don't want to demotivate you);

it cost too much work, while I'd like to release a simple game first. Maybe later I will rewrite everything.
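The zone idea mentioned above can be sketched as a fixed grid of cells, each holding only the collidable shapes that overlap it, so a query scans a handful of shapes instead of the whole level. This is a hypothetical illustration, not the poster's code; the `Quad` fields mirror the `l`/`r`/`f`/`b` bounds used in the collision listing below.

```cpp
#include <vector>

// Axis-aligned quad with a single height, like the flat weedfields below.
struct Quad { float l, r, f, b, y; };

// Partition the level into fixed-size square zones; each zone keeps its
// own list of quads, so collision queries stay fast as the level grows.
class ZoneGrid {
public:
    ZoneGrid(int cols, int rows, float zoneSize)
        : cols_(cols), rows_(rows), size_(zoneSize), zones_(cols * rows) {}

    void add(const Quad& q) {
        // register the quad in every zone its bounds overlap
        for (int zz = clampRow(q.f); zz <= clampRow(q.b); ++zz)
            for (int zx = clampCol(q.l); zx <= clampCol(q.r); ++zx)
                zones_[zz * cols_ + zx].push_back(q);
    }

    // quads that might contain (x, z): only one zone's list is scanned
    const std::vector<Quad>& candidates(float x, float z) const {
        return zones_[clampRow(z) * cols_ + clampCol(x)];
    }

private:
    int clampCol(float x) const {
        int c = static_cast<int>(x / size_);
        return c < 0 ? 0 : (c >= cols_ ? cols_ - 1 : c);
    }
    int clampRow(float z) const {
        int r = static_cast<int>(z / size_);
        return r < 0 ? 0 : (r >= rows_ ? rows_ - 1 : r);
    }
    int cols_, rows_;
    float size_;
    std::vector<std::vector<Quad>> zones_;
};
```

The per-zone item lists mentioned above work the same way; keeping items away from zone borders avoids having to register them in two zones at once.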

Here is the collision code to get the ground level (the Y position).

You have to make 4 lists of weedfields:

A normal weedfield is totally flat: all vertices have the same Y position.

A WeedfieldX has exactly the same Y position for all left vertices, and likewise for all right vertices (it slopes only along X).

A WeedfieldY has the same Y position for all front vertices, and the same for all back vertices (it slopes only along Z).

A WeedfieldZ has 3 or 4 vertices at different Y positions.

Once you have these sorted, you can use this code:


//-----------------------------------------------------------------------------------------
float LevelMesh::getGroundLevel( float x , float z )
{
    // WeedField ground level: fully flat quads, one shared height
    sWeedField* nextwf = firstwf;
    while( nextwf )
    {
        if( x >= nextwf->l && x <= nextwf->r &&
            z >= nextwf->f && z <= nextwf->b )
        {
            return nextwf->y;
        }
        nextwf = nextwf->next;
    }

    // WeedFieldX ground level: slopes along X only
    sWeedFieldX* nextwfx = firstwfx;
    while( nextwfx )
    {
        if( x >= nextwfx->l && x <= nextwfx->r &&
            z >= nextwfx->f && z <= nextwfx->b )
        {
            float fTempBalance = ( ( x - nextwfx->r ) - ( nextwfx->l - nextwfx->r ) ) * nextwfx->fAbsRScale;
            return XFade( nextwfx->y1 , nextwfx->y2 , fTempBalance );
        }
        nextwfx = nextwfx->next;
    }

    // WeedFieldY ground level: slopes along Z only
    sWeedFieldY* nextwfy = firstwfy;
    while( nextwfy )
    {
        if( x >= nextwfy->l && x <= nextwfy->r &&
            z >= nextwfy->f && z <= nextwfy->b )
        {
            float fTempBalance = ( ( z - nextwfy->b ) - ( nextwfy->f - nextwfy->b ) ) * nextwfy->fAbsBScale;
            return XFade( nextwfy->y1 , nextwfy->y2 , fTempBalance );
        }
        nextwfy = nextwfy->next;
    }

    // WeedFieldZ ground level: 3 or 4 corners at different heights
    sWeedFieldZ* nextwfz = firstwfz;
    while( nextwfz )
    {
        if( x >= nextwfz->l && x <= nextwfz->r &&
            z >= nextwfz->f && z <= nextwfz->b )
        {
            float fTempBalanceFB = ( ( z - nextwfz->b ) - ( nextwfz->f - nextwfz->b ) ) * nextwfz->fAbsBScale;
            float fTempBalanceLR = ( ( x - nextwfz->r ) - ( nextwfz->l - nextwfz->r ) ) * nextwfz->fAbsRScale;

            // Replace the corner that lies off the covering triangle's plane,
            // then interpolate bilinearly: with the adjusted corner the quad
            // is planar, so the bilinear blend reproduces the triangle's
            // plane exactly. NOTE: the original listing stopped after the
            // y1/y4 adjustments; the return statements are a reconstruction
            // of the missing interpolation step.
            if( fTempBalanceFB > fTempBalanceLR ) // Driehoek 1
            {
                float y1 = nextwfz->y3 - ( nextwfz->y4 - nextwfz->y2 );
                return XFade( XFade( y1 , nextwfz->y2 , fTempBalanceLR ) ,
                              XFade( nextwfz->y3 , nextwfz->y4 , fTempBalanceLR ) ,
                              fTempBalanceFB );
            }
            else // Driehoek 2
            {
                float y4 = nextwfz->y2 - ( nextwfz->y1 - nextwfz->y3 );
                return XFade( XFade( nextwfz->y1 , nextwfz->y2 , fTempBalanceLR ) ,
                              XFade( nextwfz->y3 , y4 , fTempBalanceLR ) ,
                              fTempBalanceFB );
            }
        }
        nextwfz = nextwfz->next;
    }

    return fEndOfLevel; // outside every field: end-of-level height
}



XFade is something like a + fade * ( b - a );
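Spelled out as code, that helper would plausibly be a plain linear interpolation (an assumed definition, since the poster only gives the formula):

```cpp
// Linear interpolation: returns a at fade = 0, b at fade = 1.
inline float XFade( float a , float b , float fade )
{
    return a + fade * ( b - a );
}
```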

Driehoek = dutch for triangle.

3 hours ago, the incredible smoker said:

The only thing is, I gave up on all this (I don't want to demotivate you);

it cost too much work, while I'd like to release a simple game first. Maybe later I will rewrite everything.

This is actually an old blog post of mine that was reordered because of a problem with gamedev.net. I'm actually making a game with a friend. Even though it is ambitious, we try to simplify it so it can be achieved by just the two of us.

We're not trying to make an engine, just a rogue-lite procedural vaporwave game. Consider looking at our subreddit if you're interested.

Anyway, generation-wise we're doing OK, but thank you for the tips.
