chaosvine

Creating a Cross Platform Isometric game with terrain generation. Help?


Hi, so I'm working on a 2D isometric-style game with block terrain generation. So far I have set up the terrain generation with both Direct2D (DirectX 11) on Windows and SFML on Linux. I'd like to make the game cross-platform, and so far that aspect is working well.

 

I have a few questions maybe someone can fill me in on, but first I should show you my code so far (at least the terrain-generation part) and the output I'm getting. Like so:

for (int x = 0; x < MapWidth; x++)
{
    for (int y = 0; y < MapHeight; y++)
    {
        // Sample the noise over the [128, 256) region and derive the
        // column depth from it.
        float nOffX = (float)x / (float)(MapWidth - 1);
        float nOffY = (float)y / (float)(MapHeight - 1);
        float nX = 128.0f + nOffX * (256.0f - 128.0f);
        float nY = 128.0f + nOffY * (256.0f - 128.0f);
        int MaxDepth = lround(perlinNoise.GetValue(nX, nY, 0) * 10);
        MaxDepth = noise::ClampValue(MaxDepth, 0, MapDepth);

        for (int z = -1; z < MaxDepth - layer; z++)
        {
            // Adjust for isometric coordinates.
            int isoX = (((x - (z * 2)) + y) * TileWidth / 4);
            int isoY = ((y - x) * TileHeight / 2);
            isoX += (600 / 4) - (TileWidth / 2);
            isoY += (800 / 2) - (TileHeight / 2);

            // Note: sf::Image::copy takes (source, destX, destY, ...),
            // so isoX must come before isoY here.
            terrainImage.copy(rawImage, isoX, isoY, sf::IntRect(0, 0, 32, 32), true);
        }
    }
}

So this is the code I'm using to create the terrain in isometric-adjusted 2D space. This is the SFML version from Linux, for those not familiar. The code in DirectX is essentially the same, except I use the Draw methods there for creating the sprite texture. I'm creating it as one texture and then drawing it to the screen, as I figured that I should probably use one draw call for the terrain or I would have major performance issues when rendering huge chunks of blocks. The output of this code looks like this:

 

[Screenshot: Rc6Ybkj.png]
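As a side note (not from the original post): the inner-loop transform could be pulled into a small helper so the same projection can be reused later, e.g. for converting mouse clicks back to grid cells. The tile sizes and the 600/800 offsets are copied from the snippet above; the function and struct names are made up.

```cpp
#include <cassert>

// Tile metrics and centering offsets copied from the snippet above
// (600/800 are assumed to be the window dimensions).
constexpr int TileWidth  = 32;
constexpr int TileHeight = 32;

struct ScreenPos { int x; int y; };

// The same grid-to-isometric transform as the inner loop, factored out.
ScreenPos gridToIso(int x, int y, int z)
{
    ScreenPos p;
    p.x = ((x - (z * 2)) + y) * TileWidth / 4;
    p.y = (y - x) * TileHeight / 2;
    p.x += (600 / 4) - (TileWidth / 2);
    p.y += (800 / 2) - (TileHeight / 2);
    return p;
}
```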

 

The style of game I'm imagining is something like Dwarf Fortress with graphics, mixed with elements from other genres such as RPGs and RTSes. In other words, it's a "kingdom" style of game: as the player you build and defend your kingdom of isometric 2D blocks from invaders, monsters, other kingdoms, etc.

 

I realize this style of game has been done before in a couple of instances, but I'd like to make my own, as I love this style of game a lot. And I think I could make a really good entry in the god-game or kingdom-builder genre, if you will.

 

This leads me to a few questions about creating a game like this:

 

Question 1: How do I go about getting better output from the noise and terrain-generation algorithms I'm using? I need the terrain to look a bit more natural, with hills, valleys, and possibly even mountains, when I move to a much larger grid, say something like 1024 by 1024 tiles.

 

Question 2: Is there a better way to go about generating the terrain? And will the way I've chosen lead to any issues when I start doing pathfinding stuff for the characters?

 

Question 3: How do I go about adding and removing blocks?

 

Question 4: How should I go about drawing the blocks in real time in a "layered" fashion? I need the player to be able to move through the layers of blocks, adding and removing blocks in a mining-style fashion.

 

That's all the questions I have for now, and probably the most essential ones for making this style of game. Any help or thoughts on my project/game would be greatly appreciated!

 

Edit: Also, sorry if this is in the wrong forum... I'm kind of new here and not used to posting in this forum. If it is, could a moderator please move it to the correct forum, where I'm more likely to get help with this topic? I wasn't sure where to put this, since it's cross-platform, cross-API development and the questions are kind of generalized... thanks again.

Edited by chaosvine


Ok, I have another question as well. How do I go about creating multiple textures and placing them side by side while keeping the map continuous? The reason I want to do this is that there are also performance issues with rendering huge textures for large maps of, say, several thousand block tiles. I would need to split the texture into parts or chunks, wouldn't I? But how do I go about lining them up and rendering to the correct texture for each block?


You will probably want to do this pretty much the same way as you would a normal 3D chunk-based game, the only differences being that you store each chunk's graphics as a texture instead of a 3D model, draw your world as textures instead of meshes, and disallow camera rotation. Whatever material you find on 3D voxel games mostly applies to yours as well.

 

Keep the graphics and the game logic as separated as possible, so you have your 3D game world on one side and your isometric rendering on the other. Pathfinding should be as simple as it gets since you have a simple 3D grid (though you might need to apply optimizations if you have many agents).

 

As for coming up with pretty procedural generation, just google it (using words like procedural/random/voxel/terrain/3D/world/generation); there's plenty of material available.

Note that if your world is finite in size, you might add some iterative simulation after the procedural generation is done, because some things are not easy to do procedurally. (Dwarf Fortress, for example, simulates how different civilizations develop over hundreds of years before calling the map finished.)

 

The rendering should probably be chunk-based (this doesn't mean the underlying 3D grid has to be chunk-based, just the rendering), so you don't have to redraw everything when something is modified. If your world is deep (or there is even a chance it might one day be), use a 3D grid of chunks; otherwise use a 2D grid. Chunks should probably be power-of-two sized and cube shaped (this usually makes some optimizations easier to implement), though make sure you can easily change the chunk size in a single place instead of hard-coding it all over the code.
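To illustrate the "single place for chunk size" and per-chunk redraw advice, here is a minimal sketch (all names are hypothetical, not from any real API): each chunk carries a dirty flag, so only the chunk containing a modified block gets its texture re-rendered.

```cpp
#include <cassert>
#include <vector>

// Chunk size lives in exactly one place; power-of-two and cube-shaped.
constexpr int CHUNK_SIZE = 16;

struct Chunk {
    bool dirty = true; // re-render this chunk's texture only when true
};

// Map a block coordinate to the coordinate of the chunk containing it.
inline int chunkCoord(int blockCoord) { return blockCoord / CHUNK_SIZE; }

// When the block at (bx, by, bz) changes, mark only its chunk for redraw.
void markDirty(std::vector<Chunk>& chunks, int chunksPerAxis,
               int bx, int by, int bz)
{
    int cx = chunkCoord(bx);
    int cy = chunkCoord(by);
    int cz = chunkCoord(bz);
    chunks[(cz * chunksPerAxis + cy) * chunksPerAxis + cx].dirty = true;
}
```

In a real renderer you would also mark neighboring chunks dirty when the edited block sits on a chunk boundary, since its face may be drawn over by the neighbor's texture.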

 

The game should keep track of a 'camera height'. This would be the height of the currently viewed layer in layer-view mode.

I would add some kind of 'fog' that hides everything a certain depth below camera height, so you don't need a complicated culling algorithm. (Otherwise really deep chunks on the other side of the map could still be visible, because they're so deep and far enough toward the top of the screen that they end up in the current view.)
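For instance (a sketch, not from the post), the fog can double as culling by computing a brightness per block: anything at or past the fog range shades to zero and can be skipped entirely.

```cpp
#include <cassert>

// Returns a brightness in [0, 255] for a block, based on how far below the
// camera's current layer it sits. 0 means fully fogged: skip drawing it.
// With SFML you could multiply this into the sprite color, e.g. sf::Color(s, s, s).
int depthShade(int cameraHeight, int blockZ, int fogRange)
{
    int depth = cameraHeight - blockZ;
    if (depth <= 0)        return 255; // at or above the viewed layer
    if (depth >= fogRange) return 0;   // fully hidden; acts as cheap culling
    return 255 - (depth * 255) / fogRange;
}
```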

 

For layer view, I would re-draw all visible chunks that intersect the current camera height, such that blocks in those chunks above the camera height are not drawn and some shadows are drawn to cover the normally invisible blocks. Chunks entirely above the camera height can just be skipped and not drawn at all.

An alternative is to split the intersecting chunks into one-block-high layer textures, to avoid re-rendering them every time you go a layer up or down. Whether this is worth it depends on how expensive rendering a chunk texture is and how much memory you can spare for it.

Another alternative is to render your chunks as meshes (where individual blocks are still 2D sprites). This way, blocks behind other blocks in a chunk still get uploaded to the GPU even if they're not visible (unlike a chunk stored as one texture, where hidden blocks are simply drawn over when generating on the CPU). Then, using some shaders and a clever technique to fill up the holes, you could create the layered view without touching the uploaded chunk graphics at all.


I recommend you take a look at Final Fantasy Tactics Advance. The maps weren't randomly generated, but they were well made enough to let the user see what he's doing without the ability to move the camera.

 

I can provide you an answer to question 1.

Instead of depending on editing the data directly, you can make a few easy assumptions that let you leverage another technology.

 

The level will always be in a grid of x and y spaces, and z will represent its height.

 

You can generate a texture using noise maps to edit how your data looks, and even clamp certain areas with multiply, add, subtract, etc.

 

When that texture is done, use it to create your map.


I'm creating it as one texture and then drawing it to the screen as I figured that I should probably use one draw call for the terrain or I would have major performance issues when rendering huge chunks of blocks.


It seems to me like you've taken the "performance problems" away from the graphics card (which is equipped to handle them) and given them to the CPU instead, inside of a draw loop that copies pixels into an image before then uploading that image to the graphics card. In the process, you've created more performance issues than the ones you thought you might have by doing it another way. Did you try doing it other ways, and make this decision based on data, or did you go by what you "thought" would happen?

What you ought to do is go back to the drawing board and write an algorithm that actually makes use of that expensive, fancy high technology that your monitor is plugged into, because it can move pixels around FAR more quickly than your CPU can.

That being said...

1) I have written about this kind of thing a few times in the past, for Game Developer Magazine as well as in my dev journal. The gist is that instead of using a 2D Perlin noise map, you use a 3D Perlin noise function to perturb a 3-dimensional step function oriented along the vertical axis (the ground). This provides volumetric data, and when combined with other fractals, such as a pair of multiplied ridged fractals to carve out caves, it can produce a nicely detailed representation. It's a bit more complicated, though...
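A minimal sketch of that idea follows. The noise function here is a cheap deterministic stand-in, not real Perlin noise; for actual terrain you would swap in a proper 3D noise implementation (e.g. libnoise's module::Perlin). Names and scaling factors are assumptions.

```cpp
#include <cassert>
#include <cmath>

// Cheap deterministic stand-in for a 3D noise function returning ~[-1, 1].
// Replace with a real 3D Perlin/simplex implementation for actual use.
double noise3(double x, double y, double z)
{
    return std::sin(x * 12.9898 + y * 78.233 + z * 37.719);
}

// Volumetric terrain test: a step function along the vertical axis (solid
// below groundHeight), perturbed by 3D noise. Positive density = solid rock.
bool isSolid(double x, double y, double z,
             double groundHeight, double amplitude)
{
    double density = (groundHeight - z)
                   + noise3(x * 0.05, y * 0.05, z * 0.05) * amplitude;
    return density > 0.0;
}
```

The larger the amplitude relative to the step function, the more overhangs and floating islands you get; carving caves would multiply in additional fractals as described above.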

2) You might consider using shorter blocks, if pathfinding is an issue. The thing about using cubes is that when a character traverses from one cube to the next, the traversal requires a very steep path of motion. It is simply not "realistic" that a walking character should be able to climb that steeply. Instead, you might consider using half, quarter or even eighth cubes as the basis of your world. Of course, this increases the vertical resolution of the terrain, and thus increases the total number of primitives to be drawn.

3) You need a structure that facilitates random access of a large volumetric data set. The quickest random access, of course, would be an array. Shamus Young some time back did a programming experiment into Minecraft-like block worlds similar to what you are doing, though fully 3D. He termed his experiment Project Octant, because he thought from the start that an octree would be an appropriate structure. And octrees can, indeed, optimize the operations involved in rendering and culling. However, when it came to editing the terrain dynamically, the abstraction of the octree started to get in his way and penalize him, significantly, in the area of performance. So much so that he ditched the octree in favor of a basic dumb array, which is blindingly fast as far as dynamic editing, but doesn't help you much at all in rendering.

Fortunately, a top-down orthographically projected isometric world doesn't suffer anywhere near the culling problems faced by one in which the camera can be facing any direction at any given time. You can do a quick rough pass of culling using the AABB of your view frustum, which distills down to a range of X, Y, and Z coordinates that can be used to iterate the cell grid. Easy peasy.
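A hedged sketch of that rough pass (names are made up): clamp the view volume's coordinate ranges to the grid and iterate only those cells.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// A block world stored as a flat array, indexed (z * h + y) * w + x.
struct Grid {
    int w, h, d;
    std::vector<unsigned char> cells;
    unsigned char at(int x, int y, int z) const {
        return cells[(z * h + y) * w + x];
    }
};

// Visit every cell inside the box [x0,x1] x [y0,y1] x [z0,z1], clamped to
// the grid. The box comes from the AABB of the view frustum, as described.
template <typename F>
void forEachInView(const Grid& g,
                   int x0, int x1, int y0, int y1, int z0, int z1, F visit)
{
    x0 = std::max(x0, 0); x1 = std::min(x1, g.w - 1);
    y0 = std::max(y0, 0); y1 = std::min(y1, g.h - 1);
    z0 = std::max(z0, 0); z1 = std::min(z1, g.d - 1);
    for (int z = z0; z <= z1; ++z)
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                visit(x, y, z, g.at(x, y, z));
}
```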

4) If the blocks are opaque, or feature only alpha-tested transparency, they should be sorted and drawn from front to back, to take advantage of depth rejection when drawing deeper blocks. (If a pixel is already drawn and the depth test fails, then the deeper pixel is not rendered.) If the blocks feature partial transparency or translucency, they will need to be drawn back to front, resulting in quite a bit of overdraw. You can hybridize it by drawing in two passes: first draw the opaque objects front to back, then draw the alpha-blended objects back to front.

5) Each block will consist of a bit of geometry onto which a texture is projected. In this thread I talk about it a bit more. Each block is a specially UV-mapped bit of geometry that is mapped with a texture based on its type.

Additionally, you could try alternative texturing schemes. For example, my current project uses tri-planar texturing (on hex blocks instead of cubes) with scaling so that the repeating textures do not align with cell boundaries, which offsets some of the texture repetition.


 


Thanks for the replies guys. I'll try to address what people have said here.

 

So my reasoning was that should I make say 100,000 or more draw calls to create individual sprites for each block then I would get huge performance hits wouldn't I? Rendering the blocks to a texture reduces the number of draw calls required by quite a bit...

 

As for the pathfinding: in the final version of the terrain I intend to have half-blocks forming the faces of hilled areas so they're smoother, while mountains, I thought, should remain more difficult to traverse.

 

I don't particularly understand 3D noise all that well as I've only worked with 2D and 1D noise patterns before. But I suppose I could try it. Any tips in that regard?

 

Are you saying that I should be doing this in 3D rather than 2D? I'm not particularly interested in that approach if I can help it.


I have a slightly different question for you. Since you are using SFML as part of your cross platform support, why not use SFML for all platforms, as it supports Windows, Linux, and Mac?

 

Well, I had originally designed this on Windows, then got interested in cross-platform development recently and picked this project back up. I have already considered this, and I'm actually leaning towards SFML, as it is more concise and lightweight for 2D purposes, though I am more familiar with programming DirectX. I haven't made up my mind yet.


You can generate a texture using noise maps to edit how your data looks. [...] When that texture is done, use it to create your map.


Hmm, interesting idea; however, I'd prefer to have the maps randomly, or rather coherently, generated, so that they're slightly different on each playthrough.



Thanks for these suggestions; you've given me a bit of a direction to go in. Even before I read this, I had decided earlier to do the rendering and layout in chunks of blocks. So what I'm doing now is separating the chunks into different textures and then drawing those to the screen, like this little update here:

 

[Screenshot: RXhfabR.png]

 

This is actually about four different textures (all filled with the same block pattern, as I haven't implemented a way to make them different yet). The computer draws this quite well, but it is statically generated, which leaves me with a bit of a problem: how to go about dynamically changing the texture each frame.

 

Also, I still need to improve the algorithm I'm using to create a more natural look, so I'm looking into options.


Thanks for the replies guys. I'll try to address what people have said here.
 
So my reasoning was that should I make say 100,000 or more draw calls to create individual sprites for each block then I would get huge performance hits wouldn't I? Rendering the blocks to a texture reduces the number of draw calls required by quite a bit...


The thing is, a GPU is much quicker at drawing pixels than the CPU is, but you're using the CPU to do it thousands of times rather than the GPU. If you use a texture atlas, you can build a single vertex buffer from all the visible cells and draw the whole thing in a single draw call. And it would be much faster than calling image::Copy 100,000 times on the CPU.
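To make the atlas-plus-vertex-buffer idea concrete, here is a sketch of the quad math; the tile size and atlas layout are assumptions. With SFML, each group of four entries would become the four corners of one quad in an sf::VertexArray(sf::Quads), drawn with the atlas texture in a single draw call.

```cpp
#include <cassert>
#include <vector>

// One vertex: screen position plus texture coordinate into the atlas.
struct Vertex { float px, py, tx, ty; };

constexpr int TileSize     = 32; // pixels per tile in the atlas (assumed)
constexpr int AtlasColumns = 8;  // tiles per atlas row (assumed)

// Append one textured quad for a block of the given type at screen (sx, sy).
void appendBlockQuad(std::vector<Vertex>& verts, float sx, float sy, int tileType)
{
    float tu = float(tileType % AtlasColumns) * TileSize;
    float tv = float(tileType / AtlasColumns) * TileSize;
    verts.push_back({sx,            sy,            tu,            tv});
    verts.push_back({sx + TileSize, sy,            tu + TileSize, tv});
    verts.push_back({sx + TileSize, sy + TileSize, tu + TileSize, tv + TileSize});
    verts.push_back({sx,            sy + TileSize, tu,            tv + TileSize});
}
```

Building one such buffer per chunk keeps the per-frame cost at one draw call per chunk, however many blocks it contains.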

As for the pathfinding: in the final version of the terrain I intend to have half-blocks forming the faces of hilled areas so they're smoother, while mountains, I thought, should remain more difficult to traverse.
 
I don't particularly understand 3D noise all that well as I've only worked with 2D and 1D noise patterns before. But I suppose I could try it. Any tips in that regard?


I posted some links.

Are you saying that I should be doing this in 3D rather than 2D? I'm not particularly interested in that approach if I can help it.


It doesn't need to have a movable camera or a perspective projection, but doing it in 3D (i.e., using the hardware as it was designed) is definitely preferable to performing 100,000 image-copy calls on the CPU. The GPU hardware is very well suited to moving lots of pixels around, so why not let it do its job?


Ok, so I tried it using sprite objects only (I think this is what you were referring to when you said to use the GPU hardware). It renders very well, actually... and I should have known this would work, based on my experience rendering a lot of instanced objects with small geometry in 3D applications before. The SFML documentation even says in the tutorial that the best performance comes from modifying the objects as necessary while sourcing all the textures from the same atlas. I don't know why I was actually worried about this. However, I now have another question...

 

So I'm allocating an array of sprites (in this instance 16 * 64 * 64 long) that I'm using to draw the sprites to the screen. However, I was under the impression that I'd have to delete this array at shutdown... but when I try to delete the array, the debug output window says it's an invalid pointer. I'm confused: do I still need to delete this array of sprite objects, or is SFML taking care of that for me somehow? I couldn't find anything about this in the documentation.

Edited by chaosvine


So I'm allocating an array of sprites (in this instance 16 * 64 * 64 long) that I'm using to draw the sprites to the screen. However, I was under the impression that I'd have to delete this array at shutdown... but when I try to delete the array, the debug output window says it's an invalid pointer. I'm confused: do I still need to delete this array of sprite objects, or is SFML taking care of that for me somehow? I couldn't find anything about this in the documentation.

 

A) We need to see code to comment on your crash, but in general, SFML does not free memory that you manually allocated. It does, however, free memory that you had SFML allocate for you.

B) Use a std::vector unless you have a reason to micro-manage your memory (which, in this case, it doesn't seem like you do). Raw calls to free/malloc/new/delete are frowned upon in higher-level code (because they're not needed and don't help) and are reserved primarily for lower-level code.

C) I'd suggest using a 1-dimensional array (vector) and treating it as 3D when you need 3D. It's just more convenient and, in some cases, more efficient.
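A small illustration of point C (the dimensions are just examples matching the 16 * 64 * 64 array mentioned above): one flat vector plus an index helper behaves like a 3D array without nested allocations.

```cpp
#include <cassert>
#include <vector>

// Example dimensions for a W x H x D grid stored in one flat vector.
constexpr int W = 64, H = 64, D = 16;

// Flatten (x, y, z) into a single index; iterating x fastest gives
// cache-friendly row-major access.
inline int index3(int x, int y, int z) { return (z * H + y) * W + x; }
```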

D) sf::Sprites are fine, but if you ever need to, you can eke out faster performance by ditching the sprites. There's no need to ditch the entire API, though; for now I'd just stick with sf::Sprite, as it's probably sufficiently fast for your needs.


Yeah, sorry, I forgot to show code. I've used vectors extensively in the past; I'm just a bit rusty on some things, and I forgot that they automatically allocate and deallocate memory.

 

As far as dimensions of the array, I'm already using a 1D array like a 3D array. :)

 

Yeah, I've been told I could get more performance from a vertex array, but I'm not sure what that is exactly. I have yet to look it up; I was a bit tired last night...

 

I'm assuming it's something similar to the vertex buffers in OpenGL / DirectX, in which case I'd at least have a starting point for converting the algorithm.


So I've looked up vertex arrays. To my understanding, they're basically a vertex buffer already wrapped in a dynamic array for you (very convenient). However, this poses the new question of how to use them for my purposes; specifically, I'm not sure how I would create a new object or mesh from them that would display the textures properly. =/

 

I'm going to look through the links JTippetts provided in his post above. Perhaps there's a clue there. Otherwise, maybe someone could fill me in to help with my search?

