

Our community blogs

  1. While making a roguelike game, procedural generation has to be quick yet intriguing enough for the generated level to be fun to just pick up and play.

    There are many ways to generate and lay out procedurally generated rooms. In The Binding Of Isaac, for example, you have many different types of regular room presets.

    The generator just picks a preset based on things like room placement and size. Because those rooms are always of fixed size, this is a nice compromise. By having handmade presets the generated level is somewhat believable (i.e. there are no gaps or obstacles below a room door or secret room and whatnot).


    Another example would be Nuclear Throne

    The game takes a different approach to procedural generation by keeping it relatively simple. Because it's not room-based like The Binding Of Isaac, there are more things like caves and large open areas. The gameplay also plays into this, as the player needs to eliminate every enemy to spawn a portal to the next level.


    Because my game is somewhat more inspired by The Binding of Isaac, the right way to procedurally generate rooms is to use presets, and this is how I generate special rooms.

    However, there's a big difference between The Binding Of Isaac and my game: my regular rooms aren't always the same size. This means that rather than having presets of regular rooms as well as special rooms, I need something more flexible and, most importantly, dynamic.

    The anatomy of a Room

    In my game, as I've said in a previous post, levels are big two-dimensional arrays from which the level geometry is generated. Every room of a level is made using a BSP tree. I won't go into much detail on how rooms are generated, but in essence, we create a grid from which we trace a path between two rooms and sparsely attach bonus rooms along the way.

    Because I already have room sizes and whatnot from that level generation, I could reuse the same idea for room layouts.

    Within rooms, I've also set special anchor points from which props (or more precisely, prop formations, more on that later...) could be generated.


    Basic Layouts

    The idea here is to have room layout presets. Each preset is an array of prop formations, and each formation is linked to a specific anchor point.

    A formation has a two-dimensional boolean array that indicates whether or not there should be a prop at each cell.


    Let's take, for example, a diamond array:


    The dimensions of the array depend on the room's dimensions. Here's how it's done:

    \( size = \left \lceil \frac{2\max(RoomSize_{x},RoomSize_{y})}{3} \right \rceil \)
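
    To make the formula concrete, here's a quick standalone sketch (the project's own code is Unity C#; this Python version and its function name are mine):

```python
import math

def formation_size(room_w, room_h):
    """Formation array dimension, derived from the room's larger side."""
    return math.ceil(2 * max(room_w, room_h) / 3)

# A 12x8 room yields a formation array of size 8.
```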

    In order to fill in the array's content, we actually use common image manipulation algorithms...

    Bresenham's Line Algorithm

    The first algorithm used is Bresenham's Line Algorithm.

    Its purpose is simply to render a line, described by two endpoints, onto a raster image.

    To put it simply, we get the deviation (delta, or "d" for short) in both X and Y between the two endpoints of the line and compare them.

    Depending on which is bigger, we simply increment the point along that axis and colour it in.


    Here's the implementation:

    public void TraceLine(Vector2Int p0, Vector2Int p1)
    {
      int dx = Mathf.Abs(p1.x - p0.x), sx = p0.x < p1.x ? 1 : -1;
      int dy = Mathf.Abs(p1.y - p0.y), sy = p0.y < p1.y ? 1 : -1;
      int err = (dx > dy ? dx : -dy) / 2, e2;

      while (true)
      {
        m_propArray[p0.x][p0.y] = true;
        if (p0.x == p1.x && p0.y == p1.y) { break; }
        e2 = err;
        if (e2 > -dx) { err -= dy; p0.x += sx; }
        if (e2 < dy)  { err += dx; p0.y += sy; }
      }
    }
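
    Since the C# version leans on Unity's Vector2Int, here is an engine-free Python sketch of the same loop (grid representation and names are my own) that is easy to test outside the editor:

```python
def trace_line(grid, p0, p1):
    """Bresenham's line: mark grid cells along the line from p0 to p1."""
    (x0, y0), (x1, y1) = p0, p1
    dx, sx = abs(x1 - x0), 1 if x0 < x1 else -1
    dy, sy = abs(y1 - y0), 1 if y0 < y1 else -1
    err = (dx if dx > dy else -dy) // 2
    while True:
        grid[x0][y0] = True
        if (x0, y0) == (x1, y1):
            return
        e2 = err
        if e2 > -dx:   # drifted too far vertically: step in x
            err -= dy
            x0 += sx
        if e2 < dy:    # drifted too far horizontally: step in y
            err += dx
            y0 += sy
```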

    Midpoint Circle Algorithm

    The midpoint circle algorithm is an algorithm used to render a circle onto an image.

    The idea is somewhat similar to Bresenham's Line Algorithm, but rather than drawing a line we draw a circle.

    To do this we also need, for simplicity's sake, to divide the circle into 8 pieces, called octants. We can do this because circles are always symmetric. (It's also a nice way to unroll loops.)


    Here's the implementation:

    private void TraceCircle(Vector2Int center, int r, AbstractPropFormation formation)
    {
      int d = (5 - r * 4) / 4;
      int x = 0;
      int y = r;

      do
      {
        // check that each pixel location is within the bounds of the image before setting it
        if (IsValidPoint(center + new Vector2Int(x,y)))   { formation.m_propArray[center.x + x][center.y + y] = true; }
        if (IsValidPoint(center + new Vector2Int(x,-y)))  { formation.m_propArray[center.x + x][center.y - y] = true; }
        if (IsValidPoint(center + new Vector2Int(-x,y)))  { formation.m_propArray[center.x - x][center.y + y] = true; }
        if (IsValidPoint(center + new Vector2Int(-x,-y))) { formation.m_propArray[center.x - x][center.y - y] = true; }
        if (IsValidPoint(center + new Vector2Int(y,x)))   { formation.m_propArray[center.x + y][center.y + x] = true; }
        if (IsValidPoint(center + new Vector2Int(y,-x)))  { formation.m_propArray[center.x + y][center.y - x] = true; }
        if (IsValidPoint(center + new Vector2Int(-y,x)))  { formation.m_propArray[center.x - y][center.y + x] = true; }
        if (IsValidPoint(center + new Vector2Int(-y,-x))) { formation.m_propArray[center.x - y][center.y - x] = true; }
        if (d < 0)
        {
          d += 2 * x + 1;
        }
        else
        {
          d += 2 * (x - y) + 1;
          y--;
        }
        x++;
      } while (x <= y);
    }
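
    Again, here is an engine-free Python sketch of the same octant-based loop (names are mine), handy for checking the symmetry outside Unity:

```python
def trace_circle(grid, cx, cy, r):
    """Midpoint circle: mark the eight octant-symmetric cells per step."""
    d = (5 - r * 4) // 4
    x, y = 0, r

    def plot(px, py):
        # bounds check, mirroring IsValidPoint in the C# version
        if 0 <= px < len(grid) and 0 <= py < len(grid[0]):
            grid[px][py] = True

    while x <= y:
        for ox, oy in ((x, y), (x, -y), (-x, y), (-x, -y),
                       (y, x), (y, -x), (-y, x), (-y, -x)):
            plot(cx + ox, cy + oy)
        if d < 0:
            d += 2 * x + 1
        else:
            d += 2 * (x - y) + 1
            y -= 1
        x += 1
```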

    Flood Fill Algorithm

    This is quite a classic, but it's still useful nevertheless.

    The idea is to progressively fill a connected section of an image with a specific colour, stopping at cells that are already set.

    The implementation uses a coordinate queue rather than recursion for optimization's sake.

    We also fill the image in a west-east orientation: from each seed point we fill towards the west first, then towards the east, and finally queue up the north and south neighbours.


    Here's the implementation:

    public void Fill(Vector2Int point)
    {
      Queue<Vector2Int> q = new Queue<Vector2Int>();
      q.Enqueue(point);

      while (q.Count > 0)
      {
        Vector2Int currentPoint = q.Dequeue();
        if (!m_propArray[currentPoint.x][currentPoint.y])
        {
          Vector2Int westPoint = currentPoint, eastPoint = new Vector2Int(currentPoint.x + 1, currentPoint.y);

          while ((westPoint.x >= 0) && !m_propArray[westPoint.x][westPoint.y])
          {
            m_propArray[westPoint.x][westPoint.y] = true;
            if ((westPoint.y > 0) && !m_propArray[westPoint.x][westPoint.y - 1])
            {
              q.Enqueue(new Vector2Int(westPoint.x, westPoint.y - 1));
            }
            if ((westPoint.y < m_propArray[westPoint.x].Length - 1) && !m_propArray[westPoint.x][westPoint.y + 1])
            {
              q.Enqueue(new Vector2Int(westPoint.x, westPoint.y + 1));
            }
            westPoint.x--;
          }

          while ((eastPoint.x <= m_propArray.Length - 1) && !m_propArray[eastPoint.x][eastPoint.y])
          {
            m_propArray[eastPoint.x][eastPoint.y] = true;
            if ((eastPoint.y > 0) && !m_propArray[eastPoint.x][eastPoint.y - 1])
            {
              q.Enqueue(new Vector2Int(eastPoint.x, eastPoint.y - 1));
            }
            if ((eastPoint.y < m_propArray[eastPoint.x].Length - 1) && !m_propArray[eastPoint.x][eastPoint.y + 1])
            {
              q.Enqueue(new Vector2Int(eastPoint.x, eastPoint.y + 1));
            }
            eastPoint.x++;
          }
        }
      }
    }

    Formation Shapes

    Each formation also has a specific shape. These shapes simply define the content of the formation array. We can build these shapes using the previously mentioned algorithms. There are 9 different types of shapes as of now.

    Vertical line


    A simple vertical line with a width of one

    Horizontal line


    A simple horizontal line with a width of one



    Diamond

    A rather nice diamond shape, especially pretty in corners



    Circle

    The circle is rendered using the Midpoint Circle Algorithm. Especially pretty in the center of rooms



    Cross

    A simple cross shape, i.e. a vertical and a horizontal line aligned at the center.

    X Shape


    An "X" shaped cross, i.e. two perpendicular diagonal lines aligned at the center.



    Triangle

    An isosceles triangle.



    Square

    A solid block. Every cell of the formation is true.



    Checkerboard

    A nice variation of the square shape. Every other cell is false.

    There might be more types of shapes as time goes by, but that's about it for now.
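
    Most of these shape fills boil down to a few lines each. As an illustration, here is a hedged Python sketch (helper names are mine) of the two simplest ones, the solid block and the checkerboard:

```python
def solid_block(size):
    """Square shape: every cell of the formation is true."""
    return [[True] * size for _ in range(size)]

def checkerboard(size):
    """Checkerboard shape: every other cell is false."""
    return [[(i + j) % 2 == 0 for j in range(size)] for i in range(size)]
```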

    Placing props

    Once the array is set, we simply need to place the actual props in the room.

    Each formation is of an actual type, i.e. rocks, ferns, etc. 

    (For simplicity's sake, let's say that every prop is a 1x1x1 m cube. This will simplify future steps.)

    In order to find their positions, we simply align the array's center with the formation's specified anchor point.


    For each prop formation, we then instantiate a prop for each true cell while checking whether or not the prop would fall outside its room.
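
    A sketch of that placement step in Python (the real code is Unity C#; the names and the exact centering convention here are assumptions):

```python
def place_props(formation, anchor, room_w, room_h):
    """Map each true formation cell to a room coordinate by centering the
    array on its anchor point, skipping cells that fall outside the room."""
    half = len(formation) // 2
    positions = []
    for i, row in enumerate(formation):
        for j, filled in enumerate(row):
            if not filled:
                continue
            x, y = anchor[0] + i - half, anchor[1] + j - half
            if 0 <= x < room_w and 0 <= y < room_h:
                positions.append((x, y))
    return positions
```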


    Afterwards, we do a precise position check to make sure no props are partially or fully outside the room.


    Finally, we make sure none of the room's connections are obstructed by props.


    And voilà, we have a nicely decorated room.


    In Game Screenshots

    Here are a couple of screenshots of what it looks like in-game:



  2. Today I've been thinking about the rights issues around my little Frogger game for the GameDev challenge. It is quite commonplace to make clones of games as a learning experience and for jams, but it is worth spending a little time thinking about rights issues.

    I got to thinking about this because sometimes I notice a promising game where the assets are obviously ripped from other games/sources without licence. I too had no qualms about this kind of approach when, for example, learning a new programming language without intending to distribute the result. Indeed, this can be a valid approach in production, with placeholder graphics, as long as you are sure to change the final versions (there is a danger of tripping up on this one; even big companies can mess it up!).


    I remember vividly being told of the dangers of plagiarism in my education, in the world of science, but equally applicable to games and artwork etc.

    Once you spend some time programming or making artwork, sound or music, you begin to realise the effort that goes into making the results. How would you feel if someone took your hard work, used it for profit and attempted to pass it off as their own? I like to think of it as treating others' work how I would like mine to be treated.

    On top of the legal aspects, when applying for jobs, many employers will take a very dim view of plagiarism. If you thought it was okay to 'steal' another's work for something on your CV, what's to stop you thinking that it is okay to copy another's work during your job, and expose them to huge risks? Quite apart from the fact that the interviewers may have personal experience of being plagiarised, in many cases your CV will be filed directly in the rubbish bin.

    Creative Commons and Open Source

    Luckily, in the case of assets, instead of infringing on others' works without permission, a huge number of people (including myself!) have decided to make some of their work available for free for others to use under a number of licences, such as Creative Commons, often for nothing other than an attribution in the credits. (And think of the huge contribution of open source software. I am writing this now on an operating system that is open source and has been freely given away by its authors for the good of everyone. That, to me, is fantastic!)

    I remember spending some time compiling the credits list for my tower defence game, making sure as well as I could that everyone was properly credited, every model, animation, sound, piece of music. It took some time, but I feel much better knowing that I had used the work as the authors intended, quite apart from feeling more secure from a legal standpoint, even for a free game. And also, speaking as an author myself, it is quite fun to google yourself and find where others have used your work, and encourages you to share more.



    Things seem pretty cut and dried for outright copying of assets, but the situation is more complex when it comes to assets that are 'based on' others' work, and for things like game titles and game designs. Some of this intellectual property (IP) protection is based on trademarks rather than copyright. I am no expert in this, and would encourage further reading and/or consulting a specialist lawyer if in any doubt. Also note that IP laws vary between countries, and there are various agreements that attempt to harmonize things in different areas.

    Of course, whether you run into trouble making a clone game depends not only on skirting around the applicable law, but on whether the rights owners are able/willing to take action against you (or the publishing platform). Nintendo, for instance, is known to be quite aggressive in pursuing infringement, whereas Sega has sometimes suggested they are happy with fan games:


    I do understand both points of view. Indeed in some jurisdictions I understand that legally the rights owner *needs* to take action in order to retain control over the rights (this seems nonsensical, but there you are). So a company being overly-aggressive may have been forced to do this by the legal system.

    Anyway, from my little research into Frogger, my first guess is that the rights are with Konami / Sega / both. Of course, you never know. Companies sometimes sell the rights to IP, or one goes out of business and the rights are then assigned to a creditor. Who is to say that a future rights owner will not take a different approach to enforcement?

    In the Wild


    It seems there are a number of Frogger clones out there, some successful and profitable (e.g. Crossy Road). Of course, that does not mean making a Frogger clone is 'ok' in any way; it just suggests that the likelihood of running into trouble is lower than if there were no Frogger clones on the market.

    Currently I am thinking I will gradually modify the title / some aspects of gameplay so I can make it available for download after the challenge. I really should have thought of this sooner though, and made my main character something else, or put a different slant on it, like 'Game of Frogs', or 'Crossy Frog' (that one is taken!). :)

    Some light reading


  3. Instead of one big ugly image, this time I'll give you lots of smaller ugly ones. :D


    I added a new main branch to the old Mind Map, this one describes how the actual servers will be configured.  This is a general overview of course, I'm building for a single (Physical/Virtual) server installation first then I'll add in data replication and failover capabilities later.


    A new server type was added as well, the Social Server, it does some fairly obvious types of things.. ;)

    After looking at the big picture and, well, spending some time painting it first, I started to see some ways to optimize and organize things better. For example, I've completely moved the main "Player Attitude Network Relay", or Position/Attitude Reflector (the thing that makes network player character coordinates propagate to other player clients over the internet... blah!). It was going to live in the Avatar Server, but that handles authentication and quite a few other very crucial gameplay roles. So now it lives where it makes far more sense, at the point in the system where its data is needed the most: the Action Server, which handles all of the fast, action-related decision making. The Avatar Server still handles propagating the player character's data (features, wardrobe, etc.), just not the deluge of player movement data. This makes it far easier to design this part of the system to scale, which is important because this is one of the critical points for concerns of scale. As long as EVERY Action Server maintains accurate positional buffers, it doesn't matter WHICH Action Server processes the client's messages. Keeping the positional buffers in sync will probably require the addition of high-speed intermediary "database" servers and all that jazz.


    I ramble, but I'm making some good progress towards a cohesive plan, and it's making everything start to feel like it's coming together.

    The hacknplan data entry is still very much in progress; I've started adding tasks to keep myself on track with adding data to it... haha, sounds redundant, but it's helping me stay on track.


    Here's the Game Design Model I was talking about in my last thread, I'm enjoying the simplicity of it all.


    It's essentially just the tree structured view of my Mind Map, so it's pretty easy to keep these two tools in sync.  I add child items where necessary and attach tasks to whatever branch/child I want.

    The main "Board" view is just a standard KanBan style system, but it's simple and easy to work with, it integrates well with the Game Design Model and seems to be fairly stable.


    Here I'll attach the whole of the latest Mind Map revision, for the curious and/or gluttons of punishment.


    I'm happy with my progress so far.  Slowly approaching the Maintenance point.  Then the full code sprint starts again.  I'm still coding, so I don't lose my place, just not at the pace I would like with all the admin work I've given myself.  Anyhow, enough talking about stuff I've already done, I've got stuff to do! :D

  4. Hi there!

    Do you want to learn how to create your own sound design for your games?

    Here is how you can very easily make some bow and arrow sounds for your RPGs or adventure games!


  5. Corona Labs is pleased to announce that the Steamworks plugin is now open source. The Steamworks plugin is used by PC and macOS games published to Valve's Steam service, and adds support for leaderboards, achievements, user profile data, and microtransactions.

    Now you can download the repository for the plugin and add your own features and extensions to it. You will have to have a Steam developer account to be able to test the plugin. Follow Steamworks documentation (available on Steam’s developer portal) to learn how to enable Steamworks debugging and development for your game. You can get the plugin source at our GitHub repository.

    You can learn more about the Steamworks plugin in our previous announcement.


  6. This week my artist finished working on the new alien type, called Fast Alien. After he finished, I spent some time incorporating it into the game, which means that I did some coding and managed animations.



    After I finished with this new alien, I started implementing the new GUI system. As of now, I have finished the settings and main menu screens. This is how it looks now:




    I also started preparing for the release of the game. I signed up for Steam Direct, sent them my documents, paid the registration fee, got my documents approved, and next week I will start populating the Steam store page.




  7. I just finished the new poster artwork for MercTactics. I hope you like it.

    Watch this space, there are a lot of new improvements and I will be posting the new game trailer and beta demo soon!


  8. Hello everyone. :) I've been busy working on materials for a project, and thought I would share some quick renders of one material. This material would be used in a 'lava'-based environment. The material has built-in controls to adjust to several different states, as seen below.





    I'm pretty happy with this material and its dynamic ability to adjust to several environment layouts. :) Also... I'm hoping to write up another entry on my 'Frogger' GameDev.net Challenge progress within a day or so, as my time has been pretty limited. :D

    Thanks for stopping by! :) 

  9. The Rundown

    Been working hard lately on the next big patch for Uagi, and I wanted to run some things by everyone. This is not really a patch post but an announcement to discuss the plans for Uagi over the next few months; a road map, some would call it.

    My current goal is to release Uagi into Steam Full Access on November 30th. This may still change, but I am pretty set on this goal. Very excited. :)

    The next big patch will be released either mid or end of October, depending on how things go. I also plan to do a second patch a couple of weeks before full release; this will include the final end-game content and should be rolling out in mid November.

    Uagi-Saba has now officially been in development for 894 days (2.4 years).


    Uagi On Steam 



    UG Live

    For those who don't know I stream development of Uagi daily on Twitch. Stop by sometime to see the inner workings of Uagi, don't be afraid to say hello!   https://www.twitch.tv/undergroundies

    Upcoming Patch Content

     I really want to take my time with this one so I can squeeze in a lot of new content. The next patch will be an optional update for the player and will include the following.



    New And Improved Body Part Color System

    Previously, Mystics had three color genes that determined their overall skin pigment (covering the whole body). Now, with the use of a helpful extension, I am able to color individual body parts! This is something I have wanted since the start of development. Now each body part has its own three unique color genes that determine its color. This makes for some pretty amazing color combinations, and every player is guaranteed to have a completely unique mystic.




    New Room Biomes 

    I will be introducing new room biomes with the next patch. Chiseled stones and vines make up this lovely new set of rooms. These will have their own unique set of object spawns.




    Balance Of Resource Management And Creature Care 

    Resource management has started to become a bit more important than creature breeding in terms of balance. Uagi-Saba is a creature breeding game, and I want it to feel more that way. Some have said there is too much running around/busy work, and that leaves less time for breeding and other things. I completely agree, and I'm working on making changes to help with this. Though I still want there to be a challenge, I don't want it to be too hard to hatch a few mystics and have a good time without stressing about resources and timers too much.

    New Mystic Skill System 

    The player will be able to teach each of their mystics one of three skills, depending on whether the mystic has the required stats. The skills are "Smash", "Repair" and "Decipher", and all will be needed to proceed to the end game. Though the next patch will not include the game's ending, the player will still be able to teach and use skills.


    Other Notable Changes

    -Improved and expanded tutorial.
    -New Mystic body parts.
    -Balance changes to resources and other game mechanics.
    -More lore, computer logs and more to read.
    -A variety of new cave plants will be added in the next patch, more life and movement to sanctuary.



  Patch List Below
    -Re-arranged the main player HUD a tad.
    -Fixed bug crash involving the new furnace buttons.
    -Removed old temp system.
    -Furnaces now only have one temperature stage.
    -Removed Kelvin from the HUD.
    -The change in temperature per cycle is now displayed on the HUD.
    -Added three new back pieces to the gene pool, two from the beast family and one from the material family.
    -Two of the three new back pieces are only available through breeding with wild mystics, I have made more of the back pieces now available to default mystics.
    -Added new buttons to the build menu. Button on far left now opens skills when pressed.
    -Pressing "Q" now opens and closes skills.
    -Though I have added new skill buttons they have all been turned off until next patch, come next patch they will all be usable.
    -Fixed problem where eggs created through breeding would sometimes destroy egg pedestals.





    Recent Entries

    TribaJam is going to be a 3-month game jam lasting from December 1st, 3PM EST to March 3rd, 3PM EST.

    At 3PM on December 1st, I will release the theme in a blog post on my IndieDB account, DabbingTree. When you learn the theme, you are allowed to start your game! I will also explain how to submit your game in that blog post.



    1. You must use the theme.

    2. You are not allowed to start it before the jam starts. All finished entries posted in the first 7 days will be instantly disqualified.

    3. No using copyrighted material AT ALL!

    4. No advertising in your entry. After the jam, do whatever you want with your game. But no advertising in the entry.

    5. Only one entry per person. No throwing together 40 games and entering them all.

    6. You are allowed to team with any number of people.


    Supported Systems:

    Xbox One






    Your game entry will be all yours. We will not take the winning game or any others.

    I invite game developers of every kind to come and make a game!

  10. Tales of Vastor - Progress #10 - Big announcement



    • What's done?
    • What's next?
    • Release dates

    What's done?

    Knight animations

    I was working on the remaining animations for the knight, which were mostly power attacks. Here are some of them:

    Knight - Focused hit

    Knight - Holy Strike

    Knight - Lightning dash

    The text you see above the model shows so-called events. I use them within the code to determine what to do next, like moving to the enemy, handling the damage, showing effects, etc.

    Power attack icons

    The power attacks are pretty much finished for the knight. Here's an image showing the icons of the attacks:

    Power attack icons

    Moonlight tower

    An important point on your journey is the moonlight tower. As I don't want to spoil its purpose, I will just leave the image here:

    Moonlight tower

    What's next?

    • New animations for the mage
    • Refactoring animations and models
    • Story implementation

    Release dates

    The big announcement of this update is the change of the release dates. The beta was planned for release by the end of October. As my controlling sheet tells me, there are more than 100 hours of work left to cover the content of the beta version, so I will delay its release to the end of December.

    The final release will be delayed to the end of Q1 2019 in order to provide a version that is stable and contains the planned content.

    So, what to expect by the end of October? I want to release an updated alpha version. Older versions are available in the download section: Indiedb.com

    You can subscribe to the beta list by using this link: Join the beta
    The beta version is for signed up people only, so be sure to subscribe and get it as soon as it is released.

    I still need lots of feedback, so please spread the word and support Tales of Vastor.

    If you have feedback, you can contact me via mail or direct message whenever you want. Be sure to take a look at Twitter as well, since there will be more frequent updates.

    Thank you!

  11. This week's update was pre-staged because of some real life work obligations. Apologies for any content which was missed in the lag time between staging and deployment here.

    Official Game Guru News

    https://steamcommunity.com/app/266310/discussions/0/1730963192547852809/ - Per synchromesh, it appears the long-standing 'disappearing model' issue may be resolved. Time will tell!

    Also this came out this last week: https://www.thegamecreators.com/post/gameguru-melee-weapons-pack-updated-2
    So apparently more updates to the already really decent melee weapons pack were done. They added 6 more melee weapons.  Pretty impressive stuff for those who already own it!

    Lastly all of the incremental AI updates are here: https://forum.game-guru.com/thread/220075?page=2#msg2606583

    We also have our list of winners from the Survey (all of these people got free DLC!):
    • Archor
    • Simon Cleaton
    • Rogy 
    Very cool, congrats all!

    What's Good in the Store

    The same stuff as last week, sadly.

    Free Stuff

    Nothing new for free as of this writing.

    Third Party Tools and Tutorials

    There's some pretty good info in this thread about how to use styleguru, which is a fantastic third party program that can help you customize your Game-Guru menus.

    Random Acts of Creativity

    Particle Test by Kevan Hampton - https://youtu.be/08zG7FczACM

    Extraction Point X11 https://forum.game-guru.com/thread/220128
    Now the above should really get your attention. Someone put a lot of hard work and care into this, but used mostly stock game assets. They have a very good cutscene and HUD, as well as a neatly modified menu. It sets a good standard for Game-Guru game-making attempts.

    Environmental Particles in GG Loader: https://youtu.be/ItEVjI0HX9Q

    Pirate Mike's walk through of his upcoming game: https://youtu.be/wFLuupCC5A4

    Someone made a Kindle e-book with Game-Guru graphics... speaking as someone who's made a Kindle book before, it's no picnic to arrange those things. I am fairly impressed!

    In My Own Works:

    Recently this week I started work on the 'how to make a desert' walkthrough. It's coming along well, though it is trickier than I first thought. Here's a screenshot:

    This is far from final, and sadly the book will only include B&W stills, but regardless, the basic shape is there. There are more grasses, background materials and the like to add, but atmospherically speaking it's what I wanted.
    I also have this for the semi-arid type of desert, but it needs even more work:

    Piece by piece this book is coming along.  Just two more desert types to do for the book and take all the necessary screenshots, do final cleanup, etc.

    I also need to rework my 'how to make a city' section as well, which falls into a similar situation.


  12. Gnollrunner
    Latest Entry

    For this entry we implemented the ubiquitous Ridged Multi-fractal function. It's not so interesting in and of itself, but it does highlight a few features that were included in our voxel engine. First, as we mentioned, being a voxel engine, it supports full 3D geometry (caves, overhangs and so forth) and not just height-maps. However, if we look at a typical world, these features are the exception rather than the rule. It therefore makes sense to optimize the height-map portion of our terrain functions. This is especially true since our voxels are vertically aligned, which means there will be many places where the same height calculation is repeated. Even if we look at a single voxel, nearly the same calculation is used for a lower corner and its corresponding upper corner, the only difference being the subtraction from the voxel vertex position.

    Enter the unit sphere! In our last entry we talked about explicit voxels, with edges, faces and vertexes. However, all edges and faces are not created equal. Horizontal faces (in our case the triangular faces) and horizontal edges contain a special pointer that references their corresponding parts in a unit sphere. The unit sphere can be thought of as residing in the center of each planet. Like our world octree, it is formed from a subdivided icosahedron, only it is not extruded and is organized into a quadtree instead of an octree, being more 2D in nature. Vertexes in our unit sphere can be used to cache height-map function values to avoid repeated calculations. We also use our unit sphere to help with the horizontal part of our voxel subdivision operation. By referencing the unit sphere, we only have to multiply a unit sphere vertex by a height value to generate voxel vertex coordinates. Finally, our unit sphere is also used to provide coordinates during the ghost-walking process we talked about in our first entry. Without it, our ghost-walking would be more computationally expensive, as it would have to calculate spherical coordinates on each iteration instead of just calculating heights, which are quite simple to calculate as they are all generated by simply averaging two other heights.
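
    The vertex cache and the scale-by-height trick can be sketched like this (Python for illustration only; the names are mine, and the engine's real data structures are its own voxel/quadtree code):

```python
def make_height_lookup(height_fn):
    """Cache the height-map value per unit-sphere vertex so vertically
    aligned voxel corners reuse a single calculation."""
    cache = {}
    def height_at(unit_vertex):
        if unit_vertex not in cache:
            cache[unit_vertex] = height_fn(unit_vertex)
        return cache[unit_vertex]
    return height_at

def voxel_vertex(unit_vertex, height_at):
    """A voxel vertex is just the unit-sphere vertex scaled by the height."""
    h = height_at(unit_vertex)
    return tuple(c * h for c in unit_vertex)
```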

Ownership of unit sphere faces is a bit complex. Ostensibly they are owned by all the voxel faces that reference them (and therefore add to their reference counters). However, this presents a bit of a problem: they are also used in ghost-walking, which happens every LOD/re-chunking iteration, and in fact they may or may not end up being referenced by voxel faces, depending on whether mesh geometry is found.  Even if no geometry is found, we may want to keep them for the next ghost-walk search.  To solve this problem, we implemented undead objects. Unit sphere faces can become undead, and can even be created that way if they are built by the ghost-walker.  While undead they are kept in a special list which keeps them pseudo-alive, and they carry an undead-life value. When they are touched by the ghost-walker that value is renewed; if they go untouched for a few iterations, they become truly dead and are destroyed.
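A minimal sketch of how such an undead list might work. The names and the life value of 3 are my assumptions, not the engine's:

```javascript
const UNDEAD_LIFE = 3; // iterations a face survives untouched (assumed)

class UnitFace {
    constructor() {
        this.refCount = 0;           // voxel faces referencing this face
        this.undeadLife = UNDEAD_LIFE;
    }
}

class UndeadList {
    constructor() { this.faces = new Set(); }

    // Called when the ghost-walker touches a face: renew its life.
    touch(face) {
        face.undeadLife = UNDEAD_LIFE;
        this.faces.add(face);
    }

    // Called once per LOD/re-chunking iteration.
    tick() {
        for (const face of this.faces) {
            if (face.refCount > 0) {
                this.faces.delete(face);      // revived: owned by real geometry again
            } else if (--face.undeadLife <= 0) {
                this.faces.delete(face);      // truly dead: would be destroyed here
            }
        }
    }
}

const list = new UndeadList();
const face = new UnitFace();
list.touch(face);
list.tick(); // life 3 -> 2
list.tick(); // life 2 -> 1
list.tick(); // life 1 -> 0, face is removed
```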

    Picture time again.....


    So here is our Ridged Multi-Fractal in wire frame.  We'll flip it around to show our level transition........


Here's a place that needs a bit of work. The chunk level transitions are correct, but they are probably more complex than they need to be. We use a very general voxel tessellation algorithm, since we have to handle various combinations of vertical and horizontal transitions. We will probably optimize this later, especially for the common cases, but for now it serves its purpose. Next up we are going to try to add threads: we plan to use a separate thread (or threads) for the LOD/re-chunk operations, and another for the graphics.

    • explains an O(n) algorithm that calculates 2D distance fields by operating on rows and treating samples as overlapping quadratic parabolas
    • shows ideas to visualize distance fields, generate tiling noise and some use-cases of distance field functions
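The row-wise parabola method from the first link is short enough to sketch. This is a transcription of the standard Felzenszwalb–Huttenlocher 1D squared-distance pass (the 2D version simply runs it over each row, then over each column of the result); a large finite value stands in for infinity so the envelope arithmetic stays well defined:

```javascript
// 1D squared Euclidean distance transform via the lower envelope of
// parabolas (Felzenszwalb & Huttenlocher). f holds sampled costs:
// 0 at seed cells, a huge value (here 1e20) elsewhere.
function distanceTransform1D(f) {
    const n = f.length;
    const d = new Array(n);
    const v = new Array(n);     // x-positions of parabolas in the envelope
    const z = new Array(n + 1); // boundaries between adjacent parabolas
    let k = 0;
    v[0] = 0;
    z[0] = -Infinity;
    z[1] = Infinity;
    for (let q = 1; q < n; q++) {
        // Intersection of the parabola at q with the rightmost envelope parabola.
        let s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k]);
        while (s <= z[k]) {
            k--; // the new parabola hides the old one; pop it
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k]);
        }
        k++;
        v[k] = q;
        z[k] = s;
        z[k + 1] = Infinity;
    }
    k = 0;
    for (let q = 0; q < n; q++) {
        while (z[k + 1] < q) k++; // advance to the parabola covering q
        d[q] = (q - v[k]) * (q - v[k]) + f[v[k]];
    }
    return d;
}
```

Running it on a row with seeds at indices 1 and 4, e.g. `distanceTransform1D([1e20, 0, 1e20, 1e20, 0])`, yields the squared distances `[1, 0, 1, 1, 0]`.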

    • slides from XDC (X.Org Developer’s Conference)
    • Vulkan timeline semaphores
      • allow increasing a 64-bit value on signal and wait on “greater than” a target value
      • unified system for CPU and GPU waits
      • look at how to implement them in the drivers
    • many more talks about OS-level graphic topics
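To make the timeline-semaphore semantics concrete, here is a toy JavaScript model (my sketch, unrelated to the driver implementations the talk covers): the payload is a single monotonically increasing 64-bit value, signals raise it, and a wait completes once the payload reaches the target.

```javascript
// Toy model of a timeline semaphore. BigInt stands in for the 64-bit payload.
class TimelineSemaphore {
    constructor() {
        this.value = 0n;
        this.waiters = [];
    }
    signal(v) {
        if (v <= this.value) throw new Error('timeline value must increase');
        this.value = v;
        // Wake every waiter whose target value has been reached.
        this.waiters = this.waiters.filter(w => {
            if (this.value >= w.target) { w.resolve(this.value); return false; }
            return true;
        });
    }
    wait(target) {
        if (this.value >= target) return Promise.resolve(this.value);
        return new Promise(resolve => this.waiters.push({ target, resolve }));
    }
}

const sem = new TimelineSemaphore();
sem.wait(2n).then(v => console.log(`woke at ${v}`));
sem.signal(1n); // not enough yet; the waiter stays queued
sem.signal(2n); // payload reaches the target; the waiter is released
```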

    • compute shader based adaptive GPU tessellation technique using Mesh Shaders on Turing
    • up to ~25% rendering time reduction at high tessellation rates

    • explains the Vulkan ray-tracing extension
    • contains an overview of the ray tracing pipeline, the new shader types and how to interact with the API
    • shows how to generate the acceleration structure, update and compact it as required

    • explains the mathematical foundation behind deep composition that allows compositing of volumetric effects such as fog

    • walkthrough of the steps required to render the Moana scene in the authors custom path tracer
    • uses a binning scheme on rays combined with on-demand geometry loading to be able to render the scene on a 32 GB RAM machine

    • discusses a change to the SDL render back-end that will batch CPU rendering commands to reduce the number of draw calls required
    • this will improve performance significantly
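The batching idea in general terms (a generic JavaScript illustration of the technique, not SDL's actual code): queue the primitive commands and issue one backend draw call per contiguous run that shares state, rather than one call per primitive.

```javascript
// Queue draw commands and flush them in batches: one backend call per
// contiguous run of commands sharing the same state (here, the color).
class BatchedRenderer {
    constructor(backend) {
        this.backend = backend;
        this.queue = [];
    }
    drawRect(color, rect) {
        this.queue.push({ color, rect }); // no backend call yet
    }
    flush() {
        let i = 0;
        while (i < this.queue.length) {
            let j = i;
            while (j < this.queue.length && this.queue[j].color === this.queue[i].color) j++;
            this.backend.draw(this.queue.slice(i, j)); // one call for the whole run
            i = j;
        }
        this.queue = [];
    }
}

const calls = [];
const r = new BatchedRenderer({ draw: batch => calls.push(batch.length) });
r.drawRect('red', {});
r.drawRect('red', {});
r.drawRect('green', {});
r.drawRect('green', {});
r.drawRect('green', {});
r.flush(); // 5 rects, but only 2 backend draw calls
```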

    • next part of the series on gfx-hal usage (low-level graphics API for Rust)
    • adds support for loading and using vertex buffers

    • explains a water ripple system implementation that uses a top-down projection of ripples onto the water surface in a separate rendering pass

    • updated SDF function for a capped cone, round cone, and an ellipsoid

    If you are enjoying the series and getting value from it, please consider supporting this blog.


  13. Greedy Goblin
    Latest Entry

Games usually (if not always) require some way to manage state changes... and I'm sure most of you (if not all of you) know far more about State Machines than I do.  And I'm certain that I could learn a heck of a lot from reading up on the subject to build a state machine that works beautifully and makes my code look amazing etc etc.

Pfft.. never mind all that... I'm building this game 'off the cuff' as it were, making it up as I go along and following the principle of 'I build what I need when I need it, and only insofar as it adequately fulfils the requirements at that time'.  I don't try to plan ahead (not in any granular sense anyway), I'm not building a reusable one-size-fits-all game engine, I'm not trying to make the code beautiful, win any awards or even make any money from the darn thing.  It just needs to perform well enough for what I want it to do.

    So my immediate requirement is that I have a way to manage the player switching from walking to running to whatever.  If I can use it elsewhere for other things then great... and I'll be honest, I do like reusable code so I tend to naturally sway toward that.  What I'm trying to avoid is getting myself stuck in a rut, spending weeks/months deliberating over the smallest details because it's got to be 'perfect' and then realising I've still got 99.5% of the game to build!  Quick and dirty is OK in my world.

    I often approach things from a top-down perspective.  This boils down to:

    'How do I want to instruct the computer to do x, y or z?'

So for this particular requirement: how do I want to instruct the game that the player can change from walking to running and running to walking, or from walking/running to falling (assuming I make that a player state - which I do), but not from sleeping to running, for example?  Hell, I don't even know all the states that I want yet, but these are the ones I have a feel for so far:

    • Walking
    • Running
    • Skiing
    • Driving
    • Falling
    • Drowning
    • Sleeping
    • Eating

    Introducing 'When'

    I thought it might be nice to be able to write something like this in my player setup:

    // Configure valid player state transitions
    When( this.playerState ).changes().from( PLAYER_STATES.WALKING ).to( PLAYER_STATES.RUNNING ).then( function () { } );
    When( this.playerState ).changes().from( PLAYER_STATES.RUNNING ).to( PLAYER_STATES.WALKING ).then( function () { } );
    When( this.playerState ).changes().from( PLAYER_STATES.WALKING ).to( PLAYER_STATES.SKIING ).then( function () { } );
    When( this.playerState ).changes().from( PLAYER_STATES.SKIING ).to( PLAYER_STATES.WALKING ).then( function () { } );
    When( this.playerState ).changes().from( PLAYER_STATES.WALKING, PLAYER_STATES.RUNNING, PLAYER_STATES.SKIING ).to( PLAYER_STATES.FALLING ).then( function () { } );

    There's probably a library for something like this out there, but heck, where's the fun in that?!


So I create a new 'Stateful' object that represents a state (in this case the playerState) and its allowed transitions, plus a 'When' function so I can write the code exactly as above:

const Stateful = function () { };
Stateful.isStateful = function ( obj ) {
    return obj.constructor && obj.constructor.name === Stateful.name;
};
Stateful.areEqual = function ( v1, v2 ) {
    return v1.equals ? v1.equals( v2 ) : v1 == v2;
};
Stateful.prototype = {
    constructor: Stateful,
    set: function ( v ) {
        let newState = typeof ( v ) === "function" ? new v() : v;
        for ( let i = 0; i < this.transitions.length; i++ ) {
            let transition = this.transitions[i];
            if ( transition && typeof ( transition.callback ) === "function" ) {
                let fromMatch = Stateful.areEqual( transition.vFrom, this );
                let toMatch = Stateful.areEqual( transition.vTo, newState );
                if ( fromMatch && toMatch ) {
                    // We can only change to the new state if a valid transition exists.
                    this.previousState = Object.assign( Object.create( {} ), this );
                    Object.assign( this, newState );
                    transition.callback( this.previousState, this );
                }
            }
        }
    },
    transitions: Object.create( Object.assign( Array.prototype, {
        from: function ( vFrom ) {
            this.vFrom = typeof ( vFrom ) === "function" ? new vFrom() : vFrom;
            return this;
        },
        to: function ( vTo ) {
            this.vTo = typeof ( vTo ) === "function" ? new vTo() : vTo;
            return this;
        },
        remove: function ( fn ) {
            this.vFrom = this.vFrom === undefined ? { equals: function () { return true; } } : this.vFrom;
            this.vTo = this.vTo === undefined ? { equals: function () { return true; } } : this.vTo;
            for ( let i = 0; i < this.length; i++ ) {
                let transition = this[i];
                let fromMatch = Stateful.areEqual( this.vFrom, transition.vFrom );
                let toMatch = Stateful.areEqual( this.vTo, transition.vTo );
                let fnMatch = fn === undefined ? true : transition.callback == fn;
                if ( fromMatch && toMatch && fnMatch ) {
                    delete this[i];
                }
            }
        }
    } ) )
};

function When( statefulObj ) {
    if ( !Stateful.isStateful( statefulObj ) ) {
        throw "Argument must be a Stateful object";
    }
    return {
        changes: function () {
            return {
                from: function ( ...vFrom ) {
                    this.vFrom = vFrom;
                    return this;
                },
                to: function ( ...vTo ) {
                    this.vTo = vTo;
                    return this;
                },
                then: function ( fn ) {
                    if ( typeof ( fn ) === "function" ) {
                        this.vFrom = this.vFrom === undefined ? [true] : this.vFrom;
                        this.vTo = this.vTo === undefined ? [true] : this.vTo;
                        for ( let i = 0; i < this.vFrom.length; i++ ) {
                            for ( let j = 0; j < this.vTo.length; j++ ) {
                                statefulObj.transitions.push( {
                                    vFrom: typeof ( this.vFrom[i] ) === "function" ? new this.vFrom[i]() : this.vFrom[i],
                                    vTo: typeof ( this.vTo[j] ) === "function" ? new this.vTo[j]() : this.vTo[j],
                                    callback: fn
                                } );
                            }
                        }
                    } else {
                        throw "Supplied argument must be a function";
                    }
                }
            };
        }
    };
}

I drop the aforementioned 'When' statements into my Player setup, remove the old 'if' statements that were previously controlling changes between walking and running, and insert the new playerState.set() calls where appropriate.


    "run": ( pc, keyup ) => {
      if ( keyup ) {
        _this.player.playerState.set( PLAYER_STATES.WALKING );
      } else {
        _this.player.playerState.set( PLAYER_STATES.RUNNING );

    And it seems to work!  (Yes I was actually surprised by that) 😂
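For anyone who wants to see the gating behaviour in miniature, here's a self-contained stand-in (not the Stateful implementation itself, just the core idea of a whitelist of transitions that silently ignores anything not registered):

```javascript
// Allowed transitions as "from->to" keys; anything not listed is ignored.
const allowed = new Set( [
    'walking->running',
    'running->walking',
    'walking->falling',
    'running->falling'
] );

function trySetState( current, next ) {
    // Only change state if the transition was registered.
    return allowed.has( `${current}->${next}` ) ? next : current;
}

const a = trySetState( 'walking', 'running' );  // 'running'
const b = trySetState( 'sleeping', 'running' ); // stays 'sleeping' (blocked)
```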

    p.s. I've switched to using Bandicam for screen capture as it seems far superior to what I was using previously.


    Recent Entries

    Froggy's Arcade

    This is a learning project for me. I kinda know what needs to be done, just not necessarily how to do it.
    This is not an entry into the Frogger competition but might become one if I make steady progress.

    This will not use a prebuilt engine.

    TODO - Will become more detailed.

    "World" is the 3D arcade environment with no collision detection and a constant floor level.
    "Game" is an arcade game inside the 3D environment. Game is a non-interactive display only to start with.

    Create a generic game cabinet model and the artwork.
Program basic 2D ball bouncing display monitor that can be positioned in 3D.
    Program basic world FPS camera controls and make some walls and a floor.
    Get arcade game assembly working in world. The assembly is the cabinet with the working display that can be positioned in the world.
    Get several copies of arcade game working in world.
    Make ceiling, doors, front windows, signs.
Create some arcade-like prop models such as a gumball machine, soda machine, and change machine.
    Update ball game assembly to have controls for a paddle and make playable from environment.
    Add sounds to game.
    Create Frogger game using ball game as a starting template.
    Create arcade full of Frogger games
    Create street full of arcades.
    Create town full of streets.
    Create county full of towns.
    Create state full of counties.
    Create Frogger USA.
    Sorry, got carried away...

    Possible features:
    Collision detection.
    Basic physics to knock over things.
    Basic scene graph.
    Better lighting than ambient.



  14. We are trying to design our new features:
-Tons of power-ups

    -Realtime Multiplayer

    -Questing System

    -Reward System

    Give us some luck, and some coffee if you like the idea

I have found it to be better in one aspect than multiple. Coding is not my thing. I'd rather provide you assets, sound design, and world development. I can provide scripts and direction. I cannot provide coding. :P


    Therefore, this project is going on the shelf. In the back.

  16. Awoken
    Latest Entry

    Hello GameDev,

I've been busy working on Dynamic Assets.  In order to successfully incorporate them this time around I've needed to do a whole host of programming across all the servers for this game, and I thought 'why not do a blog about the servers'.  I've never programmed a server before this project and really have no idea what I'm doing.  I just get stuff working and I'm happy.  But it seems Node.js is very intuitive, and so simple that you don't need to be a rocket scientist to figure stuff out.  So, a quick overview of my servers, why I have 3 so far, and why I will probably end up with 5 or 6.

    Relay Server
The relay server hosts the website and is responsible for user authentication and what-not.  While users are connected to the simulation, the relay acts as a relay ( imagine that ) between all the clients and the back-end servers.  It relies on socket.io to communicate with the clients and ZeroMQ to communicate with the back-end servers.

    Data Server
The data server holds a static version of the world and all its contents.  Its purpose is to provide all the information needed by a newly connected client.  This way the demand for data from newly connected clients doesn't hamper the simulation while it's running.  Of course it needs occasional updates as the terrain changes and the assets change.  And soon, updated Simulin positions too.

    Simulation Server
This hosts the path-finding and the tiny bits of user functionality that I've incorporated thus far.  But basically the simulation server is the work-horse behind the simulation, or at least it will be.  I plan on breaking this up into 3 separate servers.  My wish list is to have 2 path-finding servers, each hosting the world information necessary for path-finding, with requests toggled between the two ( cutting my path-finding time in half ), and an AI server which will handle whatever it is the Simulin are doing.


Now, my assumption is that each new instance of a Node.js server will utilise its own processing core?  Am I wrong about this?  I sure hope not, because I figure that if I spread the needs of the project over multiple servers it will make better use of a computer's abilities.  Maybe each Node.js server will operate in parallel?

Because I'm using many different servers to do all the stuff I need to do, it's taking much longer to program all this.  Plus, I recently switched all my THREE.js geometry over to BufferGeometry.  The servers hold world data as vertex objects and face objects, but the clients hold world information as buffer arrays, which require a little trickery to update correctly.  Keeps me on my toes.


Anyways, here are some videos of my website to check out.

And here is a video of three connected clients: one client is adding stuff to the world, and the other two are receiving the updated content from a different perspective.

    Thanks for checking it out!

  17. I've got a broad production level puzzle I've been trying to figure out and I've never been sure if I've been working with all of the pieces. I'll try to explain the problem and my considerations.

    1) I need to create AI for all of my characters and it needs to be good enough that it's convincing.
    2) My approach has been to quickly write Finite State Machine scripts in C++. It has worked well enough.
    3) During development, I change stuff in my game and I will be adding in new mechanics.
    4) Every time I change something important, I need to refactor my FSM AI to account for it. This becomes a "tax" I need to pay, and when I just have a few relatively simple characters, it's not intolerable.

    Now, I put on my magical hat of farseeing +3, and I see a future where my game has dozens of characters, each with FSM scripts. The overhead tax I need to pay increases proportionately, and it gets to the point where I need to spend equal amounts of time maintaining AI scripts as I do with building out game systems, which ultimately slows down the pace of development.

    Here's where I get a bit conflicted. One side of me says, "Premature optimization! YAGNI! Solving problems you don't have!"
    The other side of me says, "But these are real problems you're going to have if you don't head them off early and it's just going to be more expensive to fix them later. This has a compounding cost in terms of refactoring effort. Nip it in the bud now. Work smarter, not harder."
    And the business side of me says, "Will this help me sell more copies today? What's urgent? If this costs me X months and I don't sell Y copies to sustain development work, then my effort is misdirected. I should be focused exclusively on building what sells!"

    I think all are sage, prudent voices of reason to listen to and have merit behind them.

My creative engineering side has been quietly asking, "How can I avoid paying that increasing maintenance tax as development goes forward?" and the answer I keep circling back to is "Design an AI system that doesn't need maintenance. It'll have a steeper upfront cost, but the investment will pay off in saved time over the course of development." Easier said than done; building such a system is a tall order which teams of other, much smarter people have attempted with limited success. My engineering approach has been to dig into current research and development in machine learning, particularly the work being done by Google-owned DeepMind. They've been able to create model-free AI systems which can learn how to play any game with no instructions on how to play it. Their AI systems have beaten world champions at the game Go. A year ago I said, "That sounds like something I need for my game!" (cue: ridiculous groans).

    The reality is that I don't have a long history and deep background in machine learning AI systems. I didn't know anything about artificial neural networks, reinforcement learning, CNN's, or any other acronym super smart AI people mention off the cuff. I fumbled in the dark trying to understand these various AI systems to see if they could help me build an AI system for my game. I explored quite a few dead ends over the months. It's hard to recognize a dead end when you're in the middle of development, and it's also important not to get mentally entrenched on one track. I have to take a step back and look at the bigger picture to recognize these tracks and traps.
    One track/trap is to think, "I need to have an artificial neural network! That's the only way to make this work"
Another one is, "I need to have machine learning, and my implementation of machine learning needs to match the AI industry's definition! It's not machine learning if it's not an implementation of back propagation!"
    Another trap: "My machine learning type of AI should convince AI professionals it's real!"

All of that is obviously nonsense and nobody is saying this -- it's an invented internal narrative, which creates unnecessary constraints, which creates traps.

    Russel (Magforce7) has helped me see these traps and stay focused on what matters, but I'm still fumbling my way in the dark within a maze. I'm just taking inspiration from the ideas proposed by these other AI systems, but creating my own and trying to keep it as simple as possible while trying to minimize the amount of future work I need to do to maintain the system.

    So, here are my design goals for the AI system:
    1) It shouldn't require a lot of maintenance and upkeep after it has been deployed
    2) It should ideally be flexible: Updates to the game mechanics shouldn't cause AI system code updates
    3) The AI controlled characters should appear reasonably intelligent and perform plausibly intelligent behaviors.
    4) There should be support for variation in behavior between characters of the same class, and between characters of different classes.
    5) I don't want to write any special case code. If I start writing FSM code, I'm doing it wrong.
    6) Complex behavior should be emergent rather than designed.
    7) It should be relatively simple. Integrating things into the AI system should be fast, easy and non-technical.

    Designing a system to meet these broad goals has been very challenging, but I think I've done it. I have spent several months thinking about how people and animals think and trying to create a model for intelligence which is consistent with biological intelligence. It's taken a lot of internal reflection on my own mind and thought processes, and doing a bit of external research.


    A small side note on the external internet research: There are a lot of different psychological models for how the mind works, but almost none of it is scientifically backed with empirical evidence or tested for correctness. Pretty much, all the research and proposed intelligence models out there are guesswork, some of which contradict other proposed models. That leads me to believe that even the experts don't know very much and to make things even more complicated, there are a lot of wacky pseudoscience people mixed in.

One important distinction with proposing broad intelligence models is that, at the end of the day, the model must be computable. If you can't reduce an intelligence model into computable systems, then it is no good for AI, and it probably isn't a well-defined model anyway and gets lumped in with all the other guesswork people have proposed. I've come up with a few of these myself; when I haven't been able to reduce them to data structures or algorithms, I've had to throw them out.

Anyways, on to my model! I've decided to structure it as a set of loosely coupled systems: if one particular module is flawed and needs to be refactored, it shouldn't mean the whole system is flawed and needs to be refactored. Here are the modules:


Properties:
The mind does not use or work with objects, but with the set of properties for those objects. It's critical to make this distinction for the sake of pattern matching.
Memory:
Memory is transient, stateful information which is used to choose the most optimal behavior. All memory comes with an expiration time. When the expiration time is up, the memory is lost and forgotten. The importance of a memory determines how long it persists, and its importance is driven by relevance to internal motivations and the number of recalls. The constant trimming of memory state is what prevents cognitive overload.
Sensory Input:
Sensory input is how an agent gets stateful information about the environment around itself. Sensory input information is fed directly into transient memory. There is no filter applied at the sensory input level. Sensor inputs get fed sets of properties created by external objects in the surrounding environment.
Behavior:
Behavior is a type of response or interaction an agent can have with itself or the external world around it. Behavior choice is the only tangible evidence we have of an agent's internal intelligence, so choosing the correct or wrong behaviors will determine whether the agent passes an intelligence test.
Motivators:
Every character has internal needs it is trying to satisfy through the use of behaviors. Motivators are what drive behavior choice in the agent's given environmental context. Motivators are defined by a name, a normalized value, and a set of behaviors which have been discovered to change the motivation value one way or another.
Reward (emergent):
Reward is the summed result of all motivations when a behavior effect has been applied to an object. The amount of reward gained is exponentially proportionate to the motivation satisfaction, using an F(X,W) = W(10X^3) equation, where X is normalized and represents motivational need, and W represents a weight. If you are manually assigning reward values to actions or properties, you're doing it wrong.
Knowledge:
Knowledge is a collection of abstract ideas and concepts which are used to identify relationships and associations between things. The abstractions can be applied to related objects to make predictive outcomes, even though there is no prior history of experience. Knowledge is stored as a combination of property sets, behaviors, motivators, and reward experiences. Knowledge is transferable between agents.
    Knowledge reflection: This is an internal process where we look at our collection of assembled knowledge sets and try to infer generalizations, remove redundancies, streamline connections, and make better organized sense of things. In humans, this crystallization phase typically happens during sleep or through intentional reflection.
Mind:
The mind is the central repository for storing memory, knowledge, motivators, and behavior sets, and it chooses behaviors based on these four areas of cognition. This is also where behavior planning/prediction happens, via a dynamically generated behavior graph with each node weighted by an anticipated reward value evaluated through knowledge.

    A picture or class diagram would be helpful in understanding this better. But let me describe the general workflow here for AI characters.

Each character has a mind. The mind has memory, knowledge, motivators, and a list of possible behaviors to choose from. The mind is very similar to the finite state machine. Each character has sensory inputs (eyesight, hearing, smell, etc.). The only way an AI character can know about the environment around it is through its sensory inputs. The sensory inputs are just a bunch of data feeds which go directly into memory.

Memory is where we contain transient state about the environment around us. Memory can persist even after a sensory feed is cut -- closing your eyes doesn't cause objects to disappear, so we have object permanence. Objects stored in memory have "importance value" filters applied to them, and they also have expiration times. Ultimately, the memory in a mind contains all relevant state information!

Our mind has a list of all possible behaviors it can perform, so there needs to be a way to choose the most optimal behavior from the behavior list given the current memory state. This means there needs to be some sort of internal decision-making process which quickly evaluates the optimal behavior. How do we build this without creating a bunch of FSM scripts? Because if that's what we end up doing, then we have failed: we are just creating overly complicated FSMs and scripted behavior models which need to be maintained and updated.

Here's the trick where it gets interesting... When we get objects through our sensory inputs, we don't store references to those instanced objects in memory. Instead, we store a set of descriptive tags for those objects. We don't choose our behavior based on the objects in memory, but on the memory's abstract representation of those objects. Our brain also has a knowledge repository of behaviors, tag sets, and their effects on motivators. Our goal is to choose behaviors which create the most reward for our character, and reward is determined by the sum of motivators satisfied (more on this below).

Our agent doesn't intrinsically know how much reward certain behavior and tag sets generate, so it needs to query its internal knowledge repo. The knowledge repo is where abstract reasoning happens. Since we're not working with instanced objects directly, but rather abstract representations of those objects, we can look at the tag sets which define our objects in memory, do pattern matching against the tag sets in knowledge, find which sets of knowledge are relevant to the object, and then look at historical motivation satisfaction (not historical rewards). Essentially, we're looking at objects, querying our knowledge banks for similarly related objects, asking about our past experiences, and then projecting the best-matched experience onto the current object. We're trying to match the most motivationally satisfying behavior to the object, and that becomes our most optimal behavior towards that particular object. We repeat this process for all objects in transient memory, keep a high score of the most rewarding behavior, and then choose that as our most optimal behavior.

    What's interesting is that this applies abstraction to objects and doesn't require thousands of training cycles. Imagine an AI character reaches out and touches a burning candle flame. That creates a negative reward for that action. The AI looks at the set of properties which define that candle flame and stores it in knowledge and associates the negative motivational experience. Let's define this by the property set {A,B,C,X,Y}. Now, some time passes, and the AI is looking at a campfire, which has the property set {C,G,H,J,X,Y}. It queries its knowledge base and sees that there is a set intersection between {A,B,C,X,Y} and {C,G,H,J,X,Y} which is {C,X,Y}. It can then say that there is a relationship between the campfire and the candle flame and based on its experience with the candle flame, it can project a predicted outcome to what would happen to its motivations if it touches the campfire, without ever actually having touched the campfire. In other words, we can make generalizations and apply those generalizations to related objects.
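The candle/campfire example reduces to a set intersection; a quick sketch using the placeholder tags from the paragraph above:

```javascript
// Tag sets are the abstract representations the mind works with.
const candle   = new Set(['A', 'B', 'C', 'X', 'Y']);      // known: touching it hurt
const campfire = new Set(['C', 'G', 'H', 'J', 'X', 'Y']); // never touched

function intersection(a, b) {
    return new Set([...a].filter(tag => b.has(tag)));
}

const shared = intersection(candle, campfire); // {C, X, Y}

// A non-empty overlap lets the agent project the candle's negative
// motivational outcome onto the campfire without ever touching it.
const predictPain = shared.size > 0;
```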

I was initially making the mistake of defining how much reward each tag was worth. This is wrong; don't do this. Let's talk about reward calculation, how it's related to motive satisfaction, and how this can vary by character class and character instance. Here is a general set of motives every character will have:

    • Existence / Self Preservation
    • Pain avoidance
    • Hunger
    • Sex
    • Love / Affection
    • Comfort
    • Greed
    • Morality
    • Justice
    • Fear
    • Power
    • Curiosity

    This is not a complete set of motives, you can add more as you think of them, but the central idea here is that our underlying motives/goals are what truly drive our actions and behaviors. It's the undercurrent to everything we do as humans and animals.
    The general equation for evaluating reward is going to be defined as:


    F(a, b, w) = w(10a^3 + -10b^3);
    a = normalized motivation before the behavior
    b = normalized motivation after the behavior
    w = weight factor

    We're essentially calculating the sum of all F(a,b,w) for all changes in motivation factors. Let's look specifically at hunger to illustrate how this works:

    In our knowledge repo, we have the following:


    Eating [TagSet]:
            Base Effects: 
                hunger -.25
                pleasure -.1

    which describes the effects on your motivations when you eat a loaf of bread: It satisfies hunger and creates a little bit of pleasure (the bread is tasty!).
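As a rough sketch, such a knowledge entry could be stored as a plain dictionary of motive deltas and applied directly (the dictionary layout is my assumption, not a fixed design):

```python
# Hypothetical layout for a knowledge-repo entry: a behavior's base
# effects are simple deltas applied to the character's motive values.
eating_bread = {"hunger": -0.25, "pleasure": -0.1}

def apply_effects(motives, effects):
    """Return the motive values after a behavior's base effects."""
    return {m: motives.get(m, 0.0) + effects.get(m, 0.0)
            for m in set(motives) | set(effects)}

motives = {"hunger": 0.5, "pleasure": 0.0}
after = apply_effects(motives, eating_bread)
print(after["hunger"], after["pleasure"])   # 0.25 -0.1
```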
    We have a few different character actors which have an eating behavior:
    1) Humans
    2) Zombies
    3) Cows
    4) Termites
    For reference, humans like to eat breads and meats, as long as the meat is not human flesh. Zombies are exclusively carnivores who eat any kind of meat and have no qualms about eating human flesh. Cows are herbivores who only eat grass and nothing else. Termites are a type of insect which only eats wood and nothing else. We 100% definitely do not want to write a state machine describing these behaviors and conditions! Let the characters learn for themselves through trial and error and abstraction!

    So, our token human is hungry and we represent this by setting his initial hunger motivation value to 0.5. In front of him is a loaf of bread and he has prior experience/knowledge with that bread, as described above in the quote. Using our equation, how much reward would he get for performing the "eat" behavior on the bread multiple times?


    F(a, b, w) = w(1000a^3 - 1000b^3);

    very hungry (0.5), eat bread!
    hunger a: 0.5        =    125
    hunger b: 0.25        =    -15.625
        reward: 125 + -15.625 = 109.375
    not very hungry (0.25), eat bread.
    hunger a: 0.25        =    15.625
    hunger b: 0.        =    -0
        reward: 15.625 + 0 = 15.625
    full (0), eat bread:
    hunger a: 0        =    0
    hunger b: -.25        =    -15.625
        reward: 0 + -15.625 = -15.625

    As our human continues to eat bread, it satisfies his hunger and it becomes decreasingly rewarding to continue eating bread, to the point that it becomes a disincentivized behavior when he can't eat anymore (represented by the F(X)=X^3 graph).
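The bread sequence can be reproduced with a small function. Two details are inferred from the worked figures rather than stated outright: the scale constant that matches the numbers is 1000, and the "full" case only comes out negative if the magnitude of the after-value is used, so overshooting past the satisfied state is penalized just like undershooting:

```python
def reward(a, b, w=1.0):
    # a: normalized motivation before the behavior, b: after, w: weight.
    # The 1000 scale and the abs() on the after-term are my reading of
    # the worked numbers; they reproduce all three bread cases.
    return w * (1000 * a**3 - 1000 * abs(b)**3)

print(reward(0.5, 0.25))    # 109.375  (very hungry -> big reward)
print(reward(0.25, 0.0))    # 15.625   (not very hungry -> small reward)
print(reward(0.0, -0.25))   # -15.625  (full -> eating is punished)
```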

    Let's place a zombie and a human side by side and put a chunk of human flesh in front of them both. The knowledge looks like this:


    eat human flesh:
        Base effects:
            hunger -.3
            morality +2

    It satisfies hunger, but generates a moral crisis! Here's where weights come into play.


    very hungry human, eat human flesh:
        hunger before: 0.75            =    421.875
        hunger after:  .45            =    91.125
        morality before: -1            =    -1000
        morality after: 1            =    1000
            reward: 1*(421.875 + -91.125) + 1*(-1000 + -1000)     = -1669.25    (very bad moral cost!)
    very hungry zombie, eat human flesh: (no moral weight)
        hunger before: 0.75            =    421.875
        hunger after:  .45            =    91.125
        morality before: -1            =    -1000
        morality after: 1            =    1000
            reward: 1*(421.875 + -91.125) + 0*(-1000 + -1000)     = 330.75    (good)

    Internally, each character has a fixed weight on the influence of morality on their reward modifier. Zombies have no morality, so they are completely unaffected. Our particular human has a strong moral conscience, so eating human flesh would be deeply objectionable. We *could* adjust the human's morality weighting down to 0.8 or so, and if they eventually get hungry enough, the morality cost could be overridden by the motivation to eat, and we'd have a cannibal. Notice that no extra code needs to be written to create these special behavior cases: these numbers can be adjusted in a spreadsheet to change behavior patterns.
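Summing the weighted per-motive terms reproduces both outcomes above; the dictionary encoding here is just one possible layout, and the cubic is the same one used for the bread numbers (1000 scale, magnitude on the after-term, both inferred from the worked figures):

```python
def reward(a, b, w):
    # Same cubic as the bread example: before-term keeps its sign,
    # after-term is always a cost proportional to |b|^3.
    return w * (1000 * a**3 - 1000 * abs(b)**3)

def total_reward(before, after, weights):
    """Sum the per-motive rewards, scaled by this character's weights."""
    return sum(reward(before[m], after[m], weights[m]) for m in before)

before = {"hunger": 0.75, "morality": -1.0}
after  = {"hunger": 0.45, "morality":  1.0}

human  = {"hunger": 1.0, "morality": 1.0}   # strong moral conscience
zombie = {"hunger": 1.0, "morality": 0.0}   # no morality at all

print(total_reward(before, after, human))   # ~ -1669.25 (very bad moral cost)
print(total_reward(before, after, zombie))  # ~ 330.75   (good)
```

Only the weight tables differ between the two characters; the code path is identical, which is the whole point of driving behavior from data.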

    We also don't want to go through the process of describing what behaviors can be performed with particular objects. That would add extra work. Let's say we have a wooden door. It's entirely possible and allowable for the human and the zombie to eat the door (or anything for that matter). But how do we prevent them from doing so? If they attempt to eat something they aren't supposed to eat, we simply don't change a single motivating value. They will both learn that eating wood doors does not help them satisfy their driving motives, so when it comes to choosing rewarding behaviors, this would score a big fat zero. If we have an idle behavior which scores a minimum reward of 1, then the characters would prefer to idle around doing nothing before they'd go around eating wooden doors. It's a bit hilarious that the threshold between idling and eating wooden doors is so small though.
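Behavior selection then reduces to picking the highest predicted reward. A toy sketch with made-up learned values (the 109.375 is the very-hungry bread reward from above; the idle minimum of 1 is the threshold just described):

```python
# Pick the behavior with the highest predicted reward.
def choose(behaviors):
    return max(behaviors, key=behaviors.get)

learned = {
    "eat bread": 109.375,   # learned: satisfies hunger when hungry
    "eat door":  0.0,       # tried it once, no motive changed
    "idle":      1.0,       # fixed minimum reward for idling
}
print(choose(learned))      # eat bread

# Once full, bread scores negative and idling wins out over door-eating.
learned["eat bread"] = -15.625
print(choose(learned))      # idle
```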

    Taking a few steps back, I think I've got all of the working pieces together now and it's mostly going to be a matter of implementing this. One piece that's still missing is future planning and look-ahead. If you are hungry, standing outside of a house, and you look in through the window and see a ham sandwich you want to eat, then there is an intermediate step of moving to a door and opening it. This series of chained actions has a cost which needs to be factored into the reward calculation, and the system would also need to be capable of working towards goals whose objects don't exist yet (such as deciding to plant crops to get food to eat; the food doesn't exist in the present). For the last week or so, I've been building this AI system out and I've got a rough working prototype. I'm still implementing the underlying framework and discovering design oversights and errors, but I think once this is working, I'll have a fairly unique type of AI capable of abstract reasoning, learning, planning, and optimized behavior.

    I suspect that lots of different characters with slight variations in weights could generate an interesting system of interactions, and that an economy of competing interests could emerge from these underlying motive-satisfaction-driven behaviors. I think this is also reflective of real life. It's been making me look at people very differently for the last few days, and it's been blowing my mind a bit.

  18. After my first (semi-failed) algorithm, I started to think about a new approach.

    The idea was to utilize simple mechanics such as Ray Marching to extract a polygon mesh from a signed distance function (SDF).

    So, this is what I came up with:
    The algorithm is divided into 2 separate steps. Ray Marching (or what I call in this case "Ray Sampling") and Mesh Construction from said samples.

    1. Ray Marching.
      Ray march the "scene" (i.e. the SDF), most likely from the player's perspective, at a low resolution. I mostly used 192x108 in my tests. Current GPUs have no problem whatsoever doing this in real time.
      Instead of saving the color at the "hit point", as is usual when ray marching, I save the point itself in a buffer, accompanied by the normal of the SDF at that exact point.

      What we end up with after the first step is a collection of 3D points that resemble the SDF ("samples"), plus the normals at those positions.
    2. Mesh Construction.
      Construct a polygon mesh from those samples by simply connecting neighbouring pixels with each other. Lastly, scale up the mesh to account for the low resolution that we have used when ray marching. (I haven't done this yet in the images/videos you can see at the bottom)
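Both steps might look something like this minimal CPU sketch (a toy sphere SDF stands in for the sine-based terrain, and the finite-difference normals, buffer layout, and quad-splitting scheme are my assumptions about details the post doesn't spell out):

```python
import math

def sdf(p):
    # Toy SDF: a unit sphere at the origin.
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def normal(p, eps=1e-4):
    # Central-difference gradient of the SDF, normalized.
    g = [sdf((p[0]+eps, p[1], p[2])) - sdf((p[0]-eps, p[1], p[2])),
         sdf((p[0], p[1]+eps, p[2])) - sdf((p[0], p[1]-eps, p[2])),
         sdf((p[0], p[1], p[2]+eps)) - sdf((p[0], p[1], p[2]-eps))]
    length = math.sqrt(sum(c*c for c in g))
    return tuple(c / length for c in g)

def sample(origin, direction, max_steps=128, hit_eps=1e-4):
    # Step 1, "ray sampling": sphere-trace the SDF, but keep the hit
    # point and its normal instead of shading a color.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t*d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < hit_eps:
            return p, normal(p)
        t += d
    return None, None   # ray missed the surface

def build_mesh(points, w, h):
    # Step 2: connect neighbouring samples of the w x h buffer into
    # triangles, skipping cells where any corner missed the surface.
    tris = []
    for y in range(h - 1):
        for x in range(w - 1):
            a = points[y*w + x]
            b = points[y*w + x + 1]
            c = points[(y+1)*w + x]
            d = points[(y+1)*w + x + 1]
            if None in (a, b, c, d):
                continue
            tris.append((a, b, c))   # split each screen-space quad
            tris.append((b, d, c))   # into two triangles
    return tris

# A tiny 4x4 "screen" of parallel rays looking at the sphere.
w = h = 4
points = []
for j in range(h):
    for i in range(w):
        ox = -0.3 + 0.2 * i
        oy = -0.3 + 0.2 * j
        p, _ = sample((ox, oy, -3.0), (0.0, 0.0, 1.0))
        points.append(p)
print(len(build_mesh(points, w, h)))   # 8 triangles... no: 2*(4-1)*(4-1) = 18
```

Every ray here hits, so the full grid yields 2 * (w-1) * (h-1) = 18 triangles; a real pass at 192x108 would work the same way, just with misses left as holes.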

    I think the results look quite good. There are problems that I'm still trying to solve, of course, such as some weird aliasing (yes, I do know the root of that problem).
    It currently runs at about 40-70 fps, i.e. somewhere between 10 and 25 ms per mesh. (Only the 1st step is parallelized and I haven't done much to optimize the algorithm.)

    The Pros

    1. No complex, underlying data structure such as a voxel grid
    2. Can run in realtime with no problems, especially if optimized
    3. No Level-Of-Detail required, which is one of the most painful things when writing a voxel engine. The mesh is as detailed as the image constructed by the Ray Marcher. (Which is pretty good, it's just small! Scaling up a complete mesh works way better than scaling up an image :) )
    4. Enables sharp features, caves etc. (because, duh, it's ray marching.)
    5. Completely relies on SDFs (2D "heightmap" or even higher-dimensional SDFs). Meaning, we could deform the mesh in real time by applying simple CSG operations to the SDF.
    6. Infinite terrain for free! (We're only rendering what the player can see, if the SDF is "endless" so is our terrain)

    The Cons

    1. Right now, there's no precomputation. I'm thinking about the possibility of precomputing a mesh by taking "snapshots" from different perspectives. However, at the moment, it's all realtime.
    2. Only creating a mesh for what we see also means that AI etc. that is not close to the player has no mesh information to rely on.
    3. I don't know yet. Will update with more cons when I find 'em. Maybe you have some ideas?



    All results have been generated using a simple SDF consisting of two sine curves.



    A huge terrain constructed by taking "snap shots" from above.


    The same mesh in wireframe.



    Wireframe close up.


  19. k0fe
    Latest Entry

    Yesterday I watched Tomb Raider (2018) and it was ok. Just ok.

    But I thought the idea of a hidden island was so cool that I decided to make one in my game.


    So I had to make an island which is not shown on the map and is also hard to see from the mainland. There are a few tricks, like making a lot of small empty islands so the hidden one will be masked behind 'em. Or maybe some kind of fog of war.


    In the DS Zelda (sadly, I don't remember the exact game name, but there was a ship as a main feature) there was a puzzle about moving to the next area, where you should find a map with the correct route (you can also simply follow it if you know it) or you'll always get caught in a storm and the ship will be moved back.


    I think I should make a quest where the so-called quest giver is a book's chapters (like in Skyrim, I guess).

    The player must first find out about the island in the book; then he can talk to some NPCs about it, find a sailor who can take him right to the island, and so on.

    This should be a fun side quest.

  20. impossible_climb_by_godintraining.jpg

         I am shocked at how quickly this Wednesday snuck up on me. Guess that's a good thing. Another week closer to my short term goals, but on the other hand there's still next to no progress being made on BGP. So caught up in the daily routine and so mentally drained after hours of office work, it's incredibly easy to understand how people fall into "the grind" trap you know? 

        Just wanna drag yourself home, flop down somewhere (preferably with pizza at hand) and just watch stuff, endlessly scroll social media, wish you were happier and doing awesome things, and fall asleep. But I gotta stay on track. Even if it's just 2-3 hours at a time. Even if these little tasks like "Just write 8 YouTube scripts already!" end up taking forever, just like every other small self-assigned task.

     Pretty sure I'm waist deep in burnout, but not in any particular situation to be able to alleviate that stress, so... Onward! Here's today's blog: https://www.yotesgames.com/2018/10/battle-gem-ponies-devlog-191-not-single.html

  • Blog Comments

    • IP doesn't refer to ideas though. As long as you're not using the IP of another party, it doesn't matter if you make a 'clone'. If this wasn't the case, the movie industry would be shut down due to the endless number of action flicks with the same story line. The same applies to how everyone is making 'Battle Royale' games. If the name 'Frogger' is trademarked, then you cannot use it under the same class and capacity. Trademark laws will also depend on the country and how far they can reach. This is why you can have two trademarked names under different classifications (Frogger in software/games, and Frogger in clothing). Did you know "Apple" is trademarked in more than one classification and by more than one party? You can do a search and see many industries using "Apple": http://tmsearch.uspto.gov/bin/showfield?f=toc&state=4809%3A9wt3ga.2.1&p_search=searchstr&BackReference=&p_L=100&p_plural=yes&p_s_PARA1=Apple&p_tagrepl~%3A=PARA1%24FM&expr=PARA1+or+PARA2&p_s_PARA2=&p_tagrepl~%3A=PARA2%24ALL&a_default=search&f=toc&state=4809%3A9wt3ga.2.1&a_search=Submit+Query There is also an interesting story where Apple lost their case against a company using the name Steve Jobs; the company even used a 'J' with a leaf and a bite taken out of it, and Apple still lost: https://globalnews.ca/news/3939095/apple-italian-fashion-brand-steve-jobs-trademark/ I honestly think this was a test to see if they could get away with it. Remember when Apple was sued by the Beatles: http://ultimateclassicrock.com/beatles-sue-apple/ https://www.theguardian.com/technology/2006/mar/29/news.newmedia & https://en.wikipedia.org/wiki/Apple_Corps_v_Apple_Computer You also cannot use any assets made by another party without proper consent. There is far too much misinformation online regarding IP rights, and people will continue to repeat it, causing a lot of confusion.
I've already dealt with an IP case before, and you need a lot of evidence to build a case of infringement that extends beyond having a similar idea or concept. At the end of the day we all pull ideas from other sources, and if IP infringement were able to touch "ideas", we would all be guilty of IP theft.
    • I don't know much about game design protection; I think it is quite difficult to protect, but you are right about the name. Great that you are having a go too! Really looking forward to seeing your version and the others'; like you say, the requirements are quite lenient and you can make it quite different!
    • I personally don't think the Frogger game mechanics are the problem; the name 'Frogger', on the other hand, may be. Also, as soon as you have your own art, there is no problem with that. If everything goes well (with my time management, mainly), I will have the game for Frogger completed too, although I took a somewhat different approach to the game than you (I'm not even trying to make a clone, just to satisfy the rules).
    • Yeah, I've been busy making the game production quality. I hope to release the final version this year.
