07/10/19 12:34 PM

    Tiles to Curves: Fun With Voronoi Graphs (part 1)

    General and Gameplay Programming

    GameDev.net

    titleImage.png

    This is a blog about our development of Unexplored 2: The Wayfarer's Legacy. This game features state-of-the-art content generation, generative storytelling, emergent gameplay, adaptive music, and a vibrant art style.

     

The content generator of Unexplored 2 generates tile maps. Typical output looks like this (you can read more about our level generator here):

    roadTilesCropped.png

     

These tile maps are stacked, using different tiles to indicate ground types (grass or dirt in this example) and various decorations. In this case there are some bushes (large green circles), rocks (black circles), plants (small green circles), flowers (white circles), and decorative textures (grey squares). There are also special tiles that indicate gameplay data, such as the spawning point marked with an 's'. In addition, tiles can be tagged with extra information such as elevation or special subtypes.

Tile maps are a convenient data structure for generators to work with. But they are also quite rigid, and the grid often has a tendency to be visible in game. Yet in our case, when the data is loaded into the game and assets are placed, the result looks like this:

    Complete.png

     

    I like to think we did a pretty good job at hiding the tiles, and here's how we did it.

     

    Voronoi Magic

The trick is that individual tiles are matched to cells in a Voronoi diagram, which can be used to generate much more natural-looking shapes. A Voronoi diagram is created by seeding a plane with random points and partitioning it into cells so that each point in the plane belongs to the cell of its closest seed point. There are quite a few interesting applications of Voronoi diagrams for procedural content generation (PCG).
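As a rough illustration of that nearest-seed rule, here is a minimal brute-force Python sketch (not the game's implementation; the helper name, seed count, and grid size are invented for the example):

```python
import random

def nearest_seed(point, seeds):
    """Return the index of the seed closest to the given point.

    Every point in the plane belongs to the Voronoi cell of its nearest
    seed, so this lookup is all you need to rasterize a diagram.
    """
    px, py = point
    best, best_dist = -1, float("inf")
    for i, (sx, sy) in enumerate(seeds):
        d = (px - sx) ** 2 + (py - sy) ** 2  # squared distance is enough for comparison
        if d < best_dist:
            best, best_dist = i, d
    return best

# Rasterize a small diagram from 32 random seeds onto a 64x64 pixel grid.
random.seed(1)
seeds = [(random.uniform(0, 64), random.uniform(0, 64)) for _ in range(32)]
cells = [[nearest_seed((x + 0.5, y + 0.5), seeds) for x in range(64)] for y in range(64)]
```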

    A typical Voronoi diagram created from a random but fairly even distribution of seed points looks something like this:

    Voronoi.png

     

For Unexplored 2 we use a different distribution of seed points. To start with, we seed one point for each tile. That way we are certain every tile in the tilemap can be mapped to one cell in the Voronoi diagram.

Now, if you place the seed points in the middle of each cell, you end up with a straight grid that looks exactly like a tile map (for this image and the others below I also made a checkered version where half of the tiles are rendered yellow so you can see the patterns a little better):

Straight.png StraightCheckered.png
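Expressed in code, "one seed per tile, placed at the tile's center" might look like the following Python sketch (assuming unit-sized tiles; the `centered_seeds` helper is hypothetical):

```python
def centered_seeds(width, height):
    """One seed per tile, placed at the tile's center (tiles are 1x1 units).

    With this layout every Voronoi cell is exactly the square tile it came
    from, so the diagram reproduces the original tile map.
    """
    return {(x, y): (x + 0.5, y + 0.5) for y in range(height) for x in range(width)}
```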

     

A simple way of making this look better is to randomize the position of each seed point. When shifting the points, it helps to make sure each point does not move outside its original tile.

    The result looks something like this:

Randomized.png RandomizedCheckered.png
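A possible way to write that constrained randomization, again assuming unit-sized tiles and a made-up `jittered_seeds` helper; keeping the offset below half a tile width guarantees a seed never leaves its own tile:

```python
import random

def jittered_seeds(width, height, max_offset=0.45):
    """One seed per tile, randomly offset but kept inside its own tile."""
    seeds = {}
    for y in range(height):
        for x in range(width):
            seeds[(x, y)] = (x + 0.5 + random.uniform(-max_offset, max_offset),
                             y + 0.5 + random.uniform(-max_offset, max_offset))
    return seeds
```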

     

Better, but very noisy, and you don't get nice flowing lines this way. It can be improved by 'relaxing' the Voronoi diagram (a standard technique I won't go into here), but it will always stay a little noisy, and it is difficult to effectively suggest shapes at a scale larger than the individual tiles.

To get around this, you need to move the points around more deliberately than simple random displacement. Different types of movement have very different effects. For example, using Perlin noise can create interesting curved tile maps, or you can turn the whole thing into hexagonally shaped tiles simply by moving every other row of seed points to the left:

     

HexGrid.png HexGridCheckered.png
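The hexagonal variant only needs a row-parity offset on top of the centered seeds. A Python sketch (the half-tile shift is an assumed value):

```python
def hex_seeds(width, height, shift=0.5):
    """Centered seeds, but every other row shifted left to get hexagon-like cells."""
    seeds = {}
    for y in range(height):
        for x in range(width):
            offset = -shift if y % 2 == 1 else 0.0  # shift odd rows to the left
            seeds[(x, y)] = (x + 0.5 + offset, y + 0.5)
    return seeds
```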

     

    The real breakthrough comes when we start moving around the seed points in particular patterns to create rounded corners. The first step of this process is already taken inside the level generator. Corners are detected between ground types and the corner tiles are marked with different shapes, indicating how they should be deformed to generate a better-looking environment:

    roadGroundTilesCropped.png

     

    In this case, elevation differences also cause corners to appear in the tilemap. That's the reason you see the extra rounded corners in the grass in the top right and bottom left where slopes were generated.

The game uses this information to displace the seed points of the Voronoi graph. Each rounded corner shifts the location of the seed point (see image below). In addition, it also shifts the seed points of its four orthogonal neighbors. This process is cumulative; seed points can be shifted multiple times if they are near several corners. However, after all displacements are processed, the seed points are randomized a little (about 10% of the width of a tile in either direction), and the final displacement is restricted to a maximum of 40% of the width of a tile.

    displacement.png
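A rough Python sketch of that displacement pass as described above: cumulative shifts from each corner, a small random offset of about 10% of a tile width, and a final clamp to 40% of a tile width. The per-corner shift strength, the half-strength applied to orthogonal neighbors, and clamping by length (rather than per axis) are assumptions made for the example:

```python
import random

def displace_for_corners(seeds, corners, corner_shift=0.35,
                         jitter=0.10, max_displacement=0.40):
    """Shift seeds toward rounded corners, add jitter, clamp the result.

    `seeds` maps tile coordinates to seed positions (in tile widths).
    `corners` maps a tile coordinate to a unit direction its rounded corner
    pushes toward; the exact per-shape offsets are the game's own data and
    are not reproduced here. Each corner also nudges its four orthogonal
    neighbors (at half strength in this sketch), and shifts accumulate.
    """
    offsets = {key: [0.0, 0.0] for key in seeds}
    for (cx, cy), (dx, dy) in corners.items():
        for nx, ny, weight in [(cx, cy, 1.0), (cx + 1, cy, 0.5), (cx - 1, cy, 0.5),
                               (cx, cy + 1, 0.5), (cx, cy - 1, 0.5)]:
            if (nx, ny) in offsets:
                offsets[(nx, ny)][0] += dx * corner_shift * weight
                offsets[(nx, ny)][1] += dy * corner_shift * weight

    displaced = {}
    for key, (sx, sy) in seeds.items():
        # Small random offset of about 10% of a tile width in either direction.
        ox = offsets[key][0] + random.uniform(-jitter, jitter)
        oy = offsets[key][1] + random.uniform(-jitter, jitter)
        # Restrict the final displacement to 40% of a tile width.
        length = (ox * ox + oy * oy) ** 0.5
        if length > max_displacement:
            ox, oy = ox / length * max_displacement, oy / length * max_displacement
        displaced[key] = (sx + ox, sy + oy)
    return displaced
```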

     

    The result is already pretty astonishing:

Displaced.png DisplacedCheckered.png

     

    But we're not there yet...

     

    Smart Decoration

    The overall shape is better, but the edges are still very straight and somewhat ragged in appearance. The way we cover that up is by using curved assets placed along the edges where the colors are different. The real trick, however, is that one curve is often placed over two edges, using their relative angles to determine the direction of the curve.

    borders.png
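To make the "one curve over two edges" idea concrete, here is a hypothetical helper that measures the signed angle between two consecutive border edges; how that angle maps to a particular curved asset is the game's own logic and isn't shown here:

```python
import math

def bend_angle(edge_a, edge_b):
    """Signed angle in degrees between two consecutive border edges.

    Each edge is a pair of (x, y) endpoints, with edge_a ending where
    edge_b begins. A positive angle bends one way and a negative angle
    the other, which could be used to pick (and orient) a curved asset
    that spans both edges.
    """
    ax, ay = edge_a[1][0] - edge_a[0][0], edge_a[1][1] - edge_a[0][1]
    bx, by = edge_b[1][0] - edge_b[0][0], edge_b[1][1] - edge_b[0][1]
    return math.degrees(math.atan2(ax * by - ay * bx, ax * bx + ay * by))
```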

     

    The result looks like this:

    DisplacedBorders.png

     

    Next, we use 3D assets to give extra texture to the cliffs:

    DisplacedBordersCliffs.png

     

And finally, we add the other assets to fill out the level. The placement of these assets is dictated by the level data generated earlier, and in general follows a simple philosophy: we use smaller assets to surround larger ones, creating nice, natural transitions. Of particular note are the rocks added to the bottom of cliffs to create more variety and to visually dampen the vertical slopes dictated by the gameplay:

     

    Local Variation

Rounded corners are not the only displacement rule we use. For example, near artificial structures (such as the ruined walls below) you want the edges to be straighter:

    ruinsComplete.png

     

In our system, this effect is easy to achieve: we simply introduce a different displacement rule that makes sure tiles featuring artificial structures are not displaced. The generator uses smaller squares to mark these tiles, and the game simply ignores all displacements for them:

ruinTilesCropped.png ruinGroundTilesCropped.png
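In code, such a rule can be as simple as falling back to the undisplaced, grid-aligned seed for any tile flagged as artificial (a sketch; the flag set and helper name are assumptions):

```python
def final_seeds(centered, displaced, artificial_tiles):
    """Use the untouched grid seed for flagged tiles, the displaced one otherwise."""
    return {key: (centered[key] if key in artificial_tiles else displaced[key])
            for key in centered}
```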

     

    If you look at the ground you can clearly see how specific areas can be made straight while others curve more naturally:

    ruinsGround.png

     

    Isn't that neat?

There are a few other rules you can easily mix in with this technique. For example, we occasionally force the tiles into a hexagonal pattern to make sure narrow paths are wide enough to traverse. And I am sure we will find uses for other patterns as well.

    This is one of the many reasons I really love Voronoi diagrams. Another time I will write about how we use them to generate and decorate Unexplored 2 world maps.

     

    If you are interested in learning more about the game please check us out on Fig.co or on Steam.

     

    Note: This article was originally published on the Ludomotion website, and is reproduced here with the kind permission of the author.







