
About this blog

Games, graphics, and a few other things

Entries in this blog

CC Ricers
My hobby programming plate was kind of full at one moment. In addition to the game idea I have for Project SeedWorld, I have started working on a Pong-based game for Mini LD 58, and I also have my open-source Bummerman game. Though not a serious project, I plan to hold on to it to learn multiplayer networking code. Mini LD is already over, so I can go back to work "full-time" on the voxel game. I already have it working in a MonoGame project now.


Putting all the Code Together
Between this and other small games I have worked on in the past, I have seen a familiar codebase develop over time, and I've started to see what others mean about deferring your plans to build a game engine. It happened most recently with the voxel game, and even before then with pet projects I haven't shown in public.

Over time, I have developed a "Screen Element" system which works like Game State management, but there is no manager class, and I prefer to call them Game Screens or Screen Elements. With Bummerman, I also have a custom ECS framework that works for this particular game but I have copied the code over to start making the platformer.

So I have two sets of code that I notice I can copy to other projects, in two separate folders. One is for the ECS and the other for Screen Elements. I can then start with a Game class that loads the first Screen Element for the game and in the main gameplay logic use ECS.

Porting SeedWorld to MonoGame

Today I have started work on taking the important voxel engine code and updating it to run with a game using the ECS and MonoGame. It's a success so far! Project SeedWorld can now go multi-platform.

I am not using procedural noise functions this time around. This game will be made using custom tiles, which are represented as 16x16x16 chunks. Right now I am still using voxel streaming to load and unload chunks, but that may change soon as it's not completely necessary now. As before, it loops through all the possible chunk areas to see if any need updating.

Tile information will be handled in the ECS framework. First, a Level object will read tile data, which is just a series of number IDs that point to voxel object files. A Tile component will store the tile position, tile ID and rotation. It will also be accompanied with a Collision component. I may not want voxel-perfect precision but instead have hit areas represented with simpler shapes.
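As a rough sketch of the Level read step, here is one way the tile data could be parsed; the packing scheme (rotation in the high bits, 0 meaning "empty") and all names are my assumptions, not the actual file format:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: a row-major list of tile IDs becomes
// (tilePos, tileId, rotation) entries, with 0 as "empty" and rotation
// packed into the high bits of the raw ID (an assumption for illustration).
int levelWidth = 4;
int[] rawIds = { 0, 5, 5, 0,
                 1, 0, 262, 1 };   // 262 = id 6 with rotation 1 (256 * 1 + 6)

var tiles = new List<(int x, int y, int id, int rotation)>();
for (int i = 0; i < rawIds.Length; i++)
{
    if (rawIds[i] == 0) continue;     // skip empty cells
    int id = rawIds[i] % 256;         // low byte: which voxel object file
    int rotation = rawIds[i] / 256;   // high bits: number of 90-degree turns
    tiles.Add((i % levelWidth, i / levelWidth, id, rotation));
}

Console.WriteLine(tiles.Count);       // 5 non-empty tiles
```

Each entry would then back one Tile component (position, ID, rotation) plus its Collision component.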


By the way, MonoGame has one big annoyance: it has no built-in SMAA support! You'll have to fix it the hard way by adding this feature to the source, get around it with a post-process shader, or be more gutsy like I was and use a super-sampled render target at 2x the native resolution (in other words, 4K resolution!)


If you compare it to the first screenshot, the framerate actually did not drop much at all. This is at 3840x2160 downsampled to 1920x1080, running on a GTX 750 Ti. But the game became more GPU bound when I started rendering a lot more chunks.


This is because of the draw call count: one per chunk, just the way it was when using XNA. But I think this time I will combine several chunks into one mesh to reduce the draw calls. In the meantime, I will be working on a voxel tile loader so it can generate pre-made chunks from voxel sprites. It will basically be a tile editor, but in 3D. I want to give players the ability to customize and make their own maps for the game.
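The merging idea can be sketched like this; the data layout (flat position floats, plain int indices) and names are hypothetical, and a real version would merge full vertex structures:

```csharp
using System;
using System.Collections.Generic;

// Sketch: each chunk owns its vertices and indices; merging concatenates
// the vertex lists and rebases each chunk's indices by the running vertex
// offset, so the group renders in one draw call instead of one per chunk.
var chunkVerts = new List<float[]>   // one flat xyz position list per chunk
{
    new float[] { 0, 0, 0,  1, 0, 0,  0, 1, 0 },          // chunk A: 3 verts
    new float[] { 5, 0, 0,  6, 0, 0,  5, 1, 0,  6, 1, 0 } // chunk B: 4 verts
};
var chunkIndices = new List<int[]>
{
    new int[] { 0, 1, 2 },           // chunk A: one triangle
    new int[] { 0, 1, 2, 2, 1, 3 }   // chunk B: two triangles
};

var mergedVerts = new List<float>();
var mergedIndices = new List<int>();
int vertexOffset = 0;

for (int c = 0; c < chunkVerts.Count; c++)
{
    mergedVerts.AddRange(chunkVerts[c]);
    foreach (int i in chunkIndices[c])
        mergedIndices.Add(i + vertexOffset);  // rebase into the merged buffer
    vertexOffset += chunkVerts[c].Length / 3; // 3 floats per vertex
}

Console.WriteLine(mergedIndices.Count);       // 9 indices in one buffer
```

The trade-off is that a merged mesh has to be rebuilt whenever any chunk in the group changes, so group size is worth tuning.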

Hopefully next time I will be able to show the process of making a map with the tile editor and show some of the game features with it.
CC Ricers
I have released my first open source game code to GitHub for my Bomberman clone, Bummerman. Here is the project link:

This game uses MonoGame 3.2 for the build, and the Lidgren network library. The solution is set up for a Windows/Windows 8 build; I'm not yet able to try building it for Linux or Mac OS X.

What started out as a small side project seems to be turning into a full-fledged game with many details. As you may know, it began with a custom ECS framework, with game logic added on top to test it out. Then I got interested in adding networked multiplayer. So I'm making this a longer-term project and a stepping stone to build other games from.

Current features are: one power-up, support for keyboard and controller buttons/D-pad (no joystick input yet), up to 4 players, and rudimentary Game Screen states. I plan to finish this game with added polish and more features.

There is just enough to show the gameplay interaction and flow of game states. Upon starting the game, you are greeted with a message on the screen telling you to choose between hosting a game server or connecting to one as a client. After making your choice, the game starts and you can control your character.

What's left to do

The game is very much a work in progress, so there are a lot of bugs and things missing from it currently. I'm a noob at networking code, and this is the first time I've tried it in a game. Networking is nothing more than sending test messages, which have no impact on the gameplay whatsoever. But it works at least!

There need to be more power-ups, and input handling can only deal with one unique set of inputs, so there's still the issue of one set of inputs controlling all the characters on the screen. I might need to add the ability to deep-copy components that have reference-type variables. I didn't include sprite assets because I am currently using copyrighted sprites as placeholders. I will add free placeholders later.

Next Mini Ludum Dare [s]is around the corner[/s] has just begun, and I hope to make it my first one to participate in and try this codebase out on other things. Probably not the network code, though, since that is still very basic. The theme was just announced as "Pong", so it shouldn't be too hard to make something out of that.

Feedback and comments are welcome. I hope this code could be of use to someone wanting to see yet another approach to coding a game.
CC Ricers
I've been going back to work on my Entity-Component-System framework. Aside from being a side project, I also plan to use it for my voxel platform game. I've already created a Minimum Viable Product with it: the Bomberman clone game I mentioned a few posts back. Animations are still very buggy and there is no AI implemented, but a barebones 2-4 player version is working.

Previously I initialized all the Components, Systems, and Entity templates in the framework code. While I did this to test out the game, it's not good for portability, so I moved all that initialization code out and updated the framework so that it can accept new Components, Systems, and templates from outside.

Finally, I isolated the code into its own assembly, so it is possible to just link it as a DLL. This also meant I had to remove any XNA/MonoGame specific classes, and all the rendering has to be done from outside. In short, it really is meant for game logic only, and your game will have to handle the rendering separately.

The framework itself is lightweight (and I hope it stays that way), and only consists of 5 files/classes: Component, EntityManager, EntitySystem, EntityTemplate, and SystemManager. The SystemManager handles all the high-level control and flow of the EntitySystems, which you make custom systems from. EntityTemplate is a simple class used as a blueprint to add Components that define an Entity, and is deep-cloneable. EntityManager handles the creation of Entities from these templates, and also the organization of their components. Despite its name, there is no Entity class. I think I will rename this manager to "ComponentManager" in another revision.

The Bomberman game has the following components:

  • Bomb
  • Collision
  • InputContext
  • PlayerInfo
  • PowerUp
  • ScreenPosition
  • Spread
  • Sprite
  • TilePosition
  • TimedEffect

    They are used by the following systems:

    • BombSystem
    • CollisionSystem
    • ExplosionSystem
    • InputSystem
    • MovementSystem
    • PlayerSystem
    • PowerUpSystem
    • TileSystem

      Some of the systems are more generic than others. There are a couple of systems like the Bomb system or Power-up system that have very specific logic pertaining to the game, while others like the Input system are pretty abstract and can be adapted to other game types with little or no change. Some are Drawable systems so they have an extra Draw() function that is called in a separate part of the game loop.

      The funny thing is that I was going to talk about using Messages in this update, but in the time between, I did away with them completely. Messages were a static list of Message objects that was in the base System class. They were mostly used for one-time player triggered events (like setting a bomb) and every system had access to them, but I decided to just pass along the InputContext component into the systems that will be dependent on player input.

      Setup and Gameplay

      The game is started by initializing all the components and systems and then creating the entire set of Entities using a Level class. This class has only one job: to lay out the level. Specifically, it adds the components needed to make the tiles, sprites, and players. My implementation of the game pre-allocates 9 Bomb entities (the maximum a player can have) for each player.

      Each player can be custom controlled, but right now that's facing issues since I moved from invoking methods to instantiate new Entities to deep-cloning them. This works well as long as none of the components have reference types.

      The only Component that has reference types is the InputContext component, as it needs to keep a Dictionary of the available control mappings. This breaks with deep-cloning, and thus with multiple players they all share the same control scheme. On top of that, it makes the component too bloated, especially with helper functions to initialize the mappings. So I am figuring out how to use value types only to represent an arbitrary group of control mappings.
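The pitfall can be shown in a few lines; the mapping names and keys here are made up, but the mechanism is exactly the one described above: a field-by-field copy still shares any reference-type member.

```csharp
using System;
using System.Collections.Generic;

// Sketch of the cloning pitfall: copying the Dictionary *reference* (which
// is what a naive member copy of an InputContext component does) leaves
// both players pointing at the same mapping table.
var player1Mappings = new Dictionary<string, ConsoleKey> { ["Left"] = ConsoleKey.A };

var player2Mappings = player1Mappings;          // copies the reference only
player2Mappings["Left"] = ConsoleKey.LeftArrow; // remap player 2...
Console.WriteLine(player1Mappings["Left"]);     // ...and player 1 changed too!

// One fix: copy element-by-element into a fresh container (or store the
// mappings as a plain array of value pairs, as the post is considering).
var player3Mappings = new Dictionary<string, ConsoleKey>(player1Mappings);
player3Mappings["Left"] = ConsoleKey.D;         // independent copy
```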

      The game starts immediately after setup, and every InputContext that is tied in with a PlayerInfo controls a player. Movement around the level is handled with the Movement System, while placing and remote-detonating bombs is handled with the Bomb System.

      The Input System detects key and button presses from an InputContext's available control mappings, and changes two integers, "current action" and "current state", based on it. It is up to other systems to determine what it should do with these values, if needed.
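A minimal sketch of that two-integer handoff, with made-up key bindings and action codes; the point is that the Input System only reports the pair, and other systems interpret it:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical control mappings: key -> action code.
var mappings = new Dictionary<ConsoleKey, int>
{
    [ConsoleKey.Spacebar] = 1,   // action 1 = place bomb
    [ConsoleKey.UpArrow] = 2     // action 2 = move up
};

int currentAction = 0, currentState = 0;

// The Input System's job ends here: translate a key event into two ints.
void ProcessKey(ConsoleKey key, bool pressed)
{
    if (!mappings.TryGetValue(key, out int action)) return;
    currentAction = action;
    currentState = pressed ? 1 : 0;  // 1 = pressed, 0 = released
}

ProcessKey(ConsoleKey.Spacebar, true);
Console.WriteLine($"{currentAction} {currentState}");  // "1 1"
```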

      The Tile System is responsible for keeping sprites aligned to a tile grid, or giving them their closest "tile coordinates" which is important in knowing where a bomb should be placed, for example.
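The coordinate math behind that is small; assuming 16-pixel tiles (the tile size is my assumption), flooring gives the "closest tile coordinates" for any pixel position:

```csharp
using System;

// Sketch: pixel position -> tile coordinates, and snapping back to the grid.
int tileSize = 16;

(int, int) ToTileCoords(float x, float y) =>
    ((int)Math.Floor(x / tileSize), (int)Math.Floor(y / tileSize));

// Snapping re-aligns a sprite with the grid (e.g. where a bomb lands).
(float, float) SnapToGrid(float x, float y)
{
    var (tx, ty) = ToTileCoords(x, y);
    return (tx * tileSize, ty * tileSize);
}

Console.WriteLine(ToTileCoords(37f, 22f));  // (2, 1)
```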

      Collision System is self-explanatory. It handles different collision types varying by enum, to differentiate solid objects, destructible objects or damaging objects, as well as the response (it pushes players off walls, for example). If a player walks into an explosion, the Collision System knows.

      An Explosion System is used to propagate the explosions in pre-set directions. By default it's in all 4 cardinal directions with a bomb's Power attribute copied to a Spread component, subtracting one with each tile. It keeps creating more explosions until this attribute reaches 0 or it hits a wall.
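The spread rule above can be sketched on a tiny grid; the level layout and values (0 = floor, 1 = wall) are made up for illustration:

```csharp
using System;

// Sketch: from the bomb's tile, step outward in each cardinal direction,
// marking explosions and subtracting one power per tile until power runs
// out or a wall stops that arm.
int[,] level =
{
    { 0, 0, 0, 0, 0 },
    { 0, 1, 0, 1, 0 },
    { 0, 0, 0, 0, 0 },
};
bool[,] exploded = new bool[3, 5];

void Detonate(int row, int col, int power)
{
    exploded[row, col] = true;
    (int dr, int dc)[] dirs = { (0, 1), (0, -1), (1, 0), (-1, 0) };
    foreach (var (dr, dc) in dirs)
    {
        int r = row, c = col, p = power;
        while (p > 0)
        {
            r += dr; c += dc; p--;              // one tile out, one less power
            if (r < 0 || r >= 3 || c < 0 || c >= 5) break;
            if (level[r, c] == 1) break;        // wall stops this arm
            exploded[r, c] = true;
        }
    }
}

Detonate(1, 2, power: 2);
// Walls at (1,1) and (1,3) block the sideways arms; up/down spread freely.
Console.WriteLine(exploded[1, 2] + " " + exploded[1, 3] + " " + exploded[0, 2]);  // True False True
```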

      The Power-up System tracks power-up tile locations against the players' own tile locations, so if two identical locations are found, we know a player is over a power-up and it can be applied.

      There used to be a system for drawing sprites, but I decided to remove it and have the rendering be done outside the ECS scope. This makes the code more portable and you can use your own renderer.

      Now that the game is done (with minimal specs), I am ready to extend its use to produce one of the games I have wanted to make for a while: a top-down arena-style shooter. This game will share components and systems for player movement, tile collision, and power-ups (which will be changed simply to items). I plan to make it in 2D at first but eventually switch the renderer to 3D and also offer customizable maps.
CC Ricers
Here's what I have been working on for the past week:


Just a 3D scene of space. But all programmed in HLSL! I have been inspired by Inigo Quilez's work and also his other website, ShaderToy, with its impressive technical approaches to computer art, and wanted to learn a new way to put my shader coding abilities to the test. I still don't know how to do most of the techniques behind the most popular effects, but ray marching is no longer the intimidating dark magic I once took it to be, and I think I did well for the first week. It could come in handy in games, maybe by using the shader that made this particular image as a background for a space shooter/sim.

Also, I got the program to compile successfully with MonoGame. I finally got around the problems with compiling custom shaders and corrupt MGFX files, and figured out the build configuration needed to run them with Windows 7/8 on the SharpDX API. I mean, it's about time. Vulkan was just announced in detail to the public, and up until last week I was still using Shader Model 3.0. I think I needed to step beyond the limits of older graphics APIs and try at least learning something new.

Here's the project configuration I used for MonoGame on Win7, in case someone else is trying to figure it out:


This compiles the entire project perfectly and you don't need to use any external tools to build the shader content.

Going in a New Direction

And now back to Project SeedWorld.

This project, which I also talk about exclusively on my own blog, has been put aside for a couple of weeks. What really happened is that I needed to reconsider the scope of my project. I felt out of my depth. See, what I really wanted to make all along is something like Cube World but with a different set of game features, more focused questing, and the added perk of being regularly updated by yours truly. But it didn't end up going down the smooth trail I wanted it to.

For most of the project I was working on getting the voxel generation and rendering engine to work well and mostly bug-free. It can now render large procedural landscapes. Physics is all custom made, and I can make a player avatar walk and jump around in the world. I finished fixing the last major graphical bug that it had, and then... well, I have a large world. That I can run and jump in. Suddenly I felt overwhelmed with how much more I still had to do.

I started adding some GUI code to test rendering and controls, but I needed to add something that resembles gameplay, even if only the vaguest kind. I could throw a bunch of objects into the world (what they are isn't important) just to make the player pick them up. But I needed to add an inventory menu to track those objects. I felt that it wasn't interesting enough.

A Change of Genre

I have also been taking a look at other games (completed and in-progress) to inspire me, and voxel games in particular, such as Staxel and The Deer God, have been influential in making the decision to scale down my game. So as of now, Project SeedWorld will be a platform game in the vein of the 2D Super Mario games. It might still have item management and RPG-like progression elements, but most of the focus will be on run-and-jump platforming action. I think I would have more fun making a game of this genre, and I also stand a greater chance of making the game fun.

But the voxels will stay. I am gonna keep most of my voxel engine as it is, just with changes necessary to fulfill the needs of making a platform game. Voxels will be reduced in size, so it will be like Staxel. I will probably restrict the movement to mostly 2D, but graphics will still be full 3D. Just now with the added benefit of being portable with MonoGame (possibly the next big challenge).

Will the worlds/levels be procedurally made? That depends on how easy it is to make procedural levels. I've researched a bit and read an article or two, and it's not as simple as throwing in a bunch of noise functions and tweaking their parameters till you get something that looks nice. The world generation will depend greatly on the actions of the player, and this could mean many, MANY hours of play testing.

So there you have it, and now begins my work with a new type of game. Guess I just needed a break again from the project to think more clearly on what I should do. Also, because I already have most of the voxel engine already made, hopefully the start will be a lot easier.
CC Ricers
This post is just to spill my general thoughts out there on what I am doing in the gamedev world, since now I have several things that I'd like to do.

Project SeedWorld is still ongoing as a voxel engine but I am not yet sure on what direction the game should be heading. It is meant to be an RPG, but some similar games such as Forge Quest look a lot like my goal and are leaps and bounds ahead of mine in development. I could possibly just keep the voxel engine and use it to make another type of game.

My Twitter account has been booming in activity ever since I started posting gamedev stuff. This has been my most successful attempt at using social media to promote my own projects. Not to mention I have traded comments and follows with other programmers doing similar things. It's very motivating to say the least.

Let's not also forget my older dev blog, which is also the basis for this journal. Over a year ago this blog was mainly about the development of my own graphics engine. A couple of people have wanted to use it, and although its source is available on CodePlex, there are no samples or documentation. I am planning to distribute it on GitHub and touch up the code a bit to get more support.

Finally, my Bomberman game on the custom ECS framework is approaching playable status. I've stomped out a couple of bugs related to power-ups and the resetting of entities. The next features to add are support for multiple players with different controls, a settings menu, and AI players, in that order. I have been at a standstill lately just trying to come up with some nice visual assets. I'm not a great 2D artist and am not really feeling a lot of the free sprites I've run across. Maybe I should just wing it and use the original Bomberman sprites as placeholders, and then worry about what the game should look like.

If that wasn't enough stuff to keep me busy, Unreal Engine 4 just became completely free and I am now wanting to give it a try. My last experience with Unreal Engine has been UE2 making some practice maps with UT2004. But I don't expect to make anything significant with it soon.

(Edit) This editor also randomly eats up carriage returns.
CC Ricers
Over the last week and a half, I have been working on my own ECS framework. This is a side project away from my main voxel game, but it is something I wanted to do in order to be able to improve my productivity with making games more quickly. Inspired by Phil's Bomberman tutorial, I have implemented my own Bomberman clone with my own made-from-scratch ECS framework (though some conventions and names were adapted).


Bomberman-like (brand new genre) game in progress. Using open source sprites but will change eventually.

My own framework has less code than Phil's, but as long as the whole game still works on top of it, I prefer that. Right now, I don't need two Entities to check if they are exactly the same, and I don't need to serialize them for other purposes. And I certainly don't need scripts at the moment. This framework has gone through two main iterations, which differed mostly in how Components are stored in the game.

As you may know, in an Entity-Component-System framework Components are just simple data containers; they don't have any game logic, but are mutable by the game logic code, which resides in the Systems. Entities in my framework do not exist as classes at all, but are rather just numbers implied in the Components, as you'll soon see.

Framework Structure

The ECS framework in its current state consists of your typical Component and System base classes, which you can build specific Systems and Components from. Here are the important data structures:

  • A Dictionary of Component arrays indexed by enum. Usually a small number of different types.
  • Component arrays which are accessed by index, so constant time here.
  • Systems take references of important Component arrays. Iterating them takes linear time.
  • List of Systems which are always executed in the order that they were added.
  • Static array of Messages in Systems. For now, they are just being created by the Input System. Could possibly be made non-static.

    The main point of setup and entity processing comes from the SystemManager class, which stores a list of all the different Systems and calls their Draw and Process functions in the game loop.

    This class also has an instance of an EntityManager class, which is passed to the System constructors. The EntityManager class is where all the Component arrays are stored (as the base Component class), and where the Systems get all the Components they need. Components are pooled at startup, setting each array to a fixed size X for the max number of entities (though in C# it's straightforward to resize an array if needed).

    The arrays themselves are in a Dictionary, using an Enum for Component type as key. They are declared as arrays of the base class, but added in as derived classes:

        public Dictionary<ComponentType, Component[]> components { get; private set; }

        // Add Tile Position components
        components.Add(ComponentType.TilePosition, new Components.TilePosition[maxEntities]);

    This makes it possible to re-cast them back into their derived classes, but fortunately we only have to do this on startup. Systems get the arrays of Components they need upon initialization, cast to the proper type:

        // Inside a System that uses Collision and TilePosition components
        Components.Collision[] collision;
        Components.TilePosition[] tilePosition;

        public CollisionSystem(EntityManager entityManager) : base(entityManager)
        {
            // Load important components
            collision = components[ComponentType.Collision] as Components.Collision[];
            tilePosition = components[ComponentType.TilePosition] as Components.TilePosition[];
        }

    No further casting is needed for entire arrays after this point. The only casting done while the game is running is for getting certain Components at a given index.

    When an ECS is more like a CS

    Not dealing with Computer Science here, but with Components and Systems only. There are no entities in the framework, or at least not as objects. There is no Entity class; instead, entity IDs are stored in the components themselves and also referred to indirectly by the array indexes. The Components are accessed sequentially by the Systems, and you can be sure that any Components in the same location of their respective arrays together make up an entity.

    The EntityManager also has an integer variable, TotalEntities, for the total number of entities active in the game. It tells each System how far into the Component arrays it should iterate. An entity is "removed" by replacing the removed entity's components with the components of the last active entity in the array. TotalEntities is reduced by 1, and this is the new index marker that tells the EntityManager where it should add Components to make a new entity.
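The removal trick can be sketched with two parallel component arrays; the component names here are hypothetical, but the swap-with-last mechanism is the one described above:

```csharp
using System;

// Sketch: component arrays are parallel, so "removing" entity i means
// overwriting slot i with the last live entity's components in every
// array and shrinking the live count.
int totalEntities = 4;
int[] health = { 10, 20, 30, 40, 0, 0, 0, 0 };   // one component array
int[] tileX  = {  1,  2,  3,  4, 0, 0, 0, 0 };   // a parallel one

void RemoveEntity(int i)
{
    int last = totalEntities - 1;
    health[i] = health[last];    // last entity's components fill the hole
    tileX[i]  = tileX[last];
    totalEntities--;             // systems now iterate one slot fewer
}

RemoveEntity(1);                 // remove the entity with health 20
Console.WriteLine(totalEntities);  // 3
Console.WriteLine(health[1]);      // 40 (the old last entity moved here)
```

The cost is constant time per removal, at the price of not preserving entity order, which is fine when systems treat each entity independently.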

    Since arrays are fixed size, the number of entities should not exceed the size provided in the pool. You can usually test and easily find out what a suitable size is for simpler games. I want to improve this in the future by making the EntityManager resize the arrays to a much larger size if it should reach the limit (which should generally be avoided anyway to maintain good performance).

    Component Organization

    In the first iteration, the framework had arrays of each Component type as concrete classes. Each derives from a base Component class, but the arrays were declared as the derived Component classes. So you had arrays of different classes named spriteComponents, screenPositionComponents, etc. This was inflexible for two reasons. First, adding a new component type meant also adding code for it to do a type check in the function to "Add" an entity:

        // Get proper EntityPrefab method
        Type prefabsType = typeof(EntityPrefabs);
        MethodInfo theMethod = prefabsType.GetMethod("Create" + templateName);

        // Call method to create new template
        newTemplate = (EntityTemplate)theMethod.Invoke(null, new object[] { nextEntity });

        // Check every array for proper insertion
        foreach (Component component in newTemplate.componentList)
        {
            if (component is Components.Sprite)
                components.components[ComponentType.Sprite][nextEntity] = (component as Components.Sprite);
            if (component is Components.Bomb)
                components.bomb[nextEntity] = (component as Components.Bomb);
            if (component is Components.Collision)
                components.collision[nextEntity] = (component as Components.Collision);
            // Etc...
        }

    This has since been improved, and adding Components to an array no longer requires manually going through every possible Component type:

        // Check every array for insertion
        foreach (Component component in newTemplate.componentList)
            components[component.type][nextEntity] = component;

    Entity Prefabs

    Every game using ECS benefits from having pre-assembled entities to use right off the bat. It's a logical way to plan the rules of your game and what kind of game objects it will have. I use a small class called EntityTemplate which stores a list of Components. A class called EntityPrefab contains different methods (CreatePlayer, CreateSolidBlock, etc.) to return a new copy of a template, and its Components are added to the pool.

    You still have to invoke EntityPrefab methods through reflection, since the method is dynamically chosen with the "templateName" String parameter. I would like to replace this with simply adding prefabs to a List of EntityTemplates, so you just select them from a list. In hindsight this should have been the more obvious approach, but I was taking from Phil_T's approach to making entity prefabs.

    Getting into the Game

    I will talk about this in the next post, since I've probably gone long enough already! Then I'll be able to go into more detail on how the game uses the framework. But since the first draft of this post and now, I have also made some more improvements on the ECS code and ironed out some game bugs too. The game is getting closer to being playable!
CC Ricers

Testing UI

First, a late thanks to the staff at GDnet for featuring one of my entries. It was certainly not expected, and I'm glad it was insightful to a lot of people. Back to business on my game work: I am moving away from working on the voxel terrain engine so I can start adding relevant gameplay features. I needed a GUI to make certain things easier to test, especially user configuration for controls and voxel rendering, and also for debugging things that just make more sense to do at runtime than to recompile the code many times.

With that said, I was browsing different open-source UI libraries that are made to work with XNA. Ones I have tried before or investigated are NeoForce, Squid, NuclearWinter, and Ruminate. The NeoForce library looks too big for my needs and too dependent on external XML files. Squid looks nice and has enough documentation to get you started. NuclearWinter has less documentation, but the sample code makes a lot of sense to me, plus it has a nice default skin. Ruminate is actually made for MonoGame, so a couple of extra dependencies are needed, and it doesn't support lists or tables. Out of these four, I chose NuclearWinter's UI.

While it lacks a lot of documentation, the samples are straightforward to see how they are put together. More significantly, it has been used by the same developers in an ambitious project of their own, CraftStudio, which is a GUI program that lets anyone build games without needing programming experience. I imagine that if it can be used for that it should certainly be robust and work well enough for my game. Within two days of downloading it, I managed to get a custom settings menu in my game, which is launched in a separate game screen.


The controls don't do anything yet, except for "Test" which closes the UI and returns control of the character. Also, since NuclearWinter is actually a collection of different libraries to handle other things like sound, game states and animation, I had to include all the code, for now at least. These systems compile into a single DLL. But my only interest is in the UI and the input library that it relies on.

Since I use my own Game Screen system, I modified the code so it no longer creates or updates that library's own Game State Manager class; it's not required for creating the screen area the UI goes into. The audio code was easy to remove, as it lives in a separate DLL, and I was also able to remove the custom Collections it used. There's also a bunch of code files in the root that I haven't gotten around to checking yet.

When all the modifications are done, I should have just the UI and input code compiled into the library. Even then, the input code is very Windows-centric and I don't know how much work it will take to make it platform-independent. Maybe I should have forked the project and worked on the changes from there. It would make the changes easier to document, and I could offer people the UI library as a standalone.
CC Ricers

I have fixed a problem that has bugged me for quite some time. For most of the development of SeedWorld, up until last week, no chunk was able to access voxels from its immediate neighbors. The voxel collision code avoided this shortcoming by finding the right chunk from the group of chunks to test whether a block is solid or empty, given world coordinates as input. But sometimes you need to find a voxel based on local coordinates, relative to a chunk's location. This was evident in the mesh building process, which ray casts from each visible voxel to determine whether the voxel needs to be shaded darker, and by how much.


The edge voxels were a problem because a ray could not travel past the edges of its chunk. Any ray that reached the edge was considered "not blocked", so the voxel received full light. This created a seam of lighter-colored voxels around the edges.
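To make this concrete, here is a small sketch (hypothetical names, not the actual SeedWorld code) of the world-to-chunk coordinate mapping that neighbor lookups rely on, assuming 32-unit chunks:

```csharp
// Sketch: map a world coordinate to a chunk index and a local 0-31
// coordinate, assuming 32-unit chunks. Names are illustrative.
static class ChunkMath
{
    public const int ChunkSize = 32;

    public static int ChunkIndex(int world)
    {
        // Floor division, which stays correct for negative coordinates
        return (int)System.Math.Floor((double)world / ChunkSize);
    }

    public static int LocalCoord(int world)
    {
        int local = world % ChunkSize;
        return local < 0 ? local + ChunkSize : local;
    }
}
```

Floor division matters here: plain integer division in C# truncates toward zero, which would map world coordinate -1 to chunk 0 instead of chunk -1.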




Since each chunk now has access to the eight neighbors surrounding its sides, you can simply make the ray "step into" these chunks and continue traveling the distance it's supposed to, instead of ending prematurely. My first attempt at the fix didn't go well. I was modifying the starting coordinates of each ray and using those to find the neighbor chunk. It ended up looking worse:




These seams appeared because the ray was checking against the solid voxel it started from, so it always subtracted contribution from the light, making the edges dark. The fix was to update the ray coordinates after each step, check whether they leave the chunk's 0-31 range, and then pick the correct neighbor chunk to continue in, resetting the local coordinates. Now the seams are gone and the shading is correct.
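The wrapping step can be sketched like this, as a simplified single-axis version with assumed names:

```csharp
// Sketch of the "step into the neighbor" fix, assuming 32-unit chunks.
// After each ray step, the local coordinate is checked against the 0-31
// range; if it left the chunk, the matching neighbor offset is returned
// and the coordinate wraps back around.
static class RayStep
{
    public const int ChunkSize = 32;

    // Returns -1, 0 or +1 for the neighbor chunk on this axis and wraps
    // the local coordinate back into 0..31.
    public static int Wrap(ref int local)
    {
        int neighbor = 0;
        if (local < 0) { neighbor = -1; local += ChunkSize; }
        else if (local >= ChunkSize) { neighbor = 1; local -= ChunkSize; }
        return neighbor;
    }
}
```

Doing this per step (rather than once at the ray origin) is what keeps a ray from testing the voxel it started from.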




Now the chunk boundaries don't look as obvious. This was pretty satisfying to fix, satisfying enough that I will move on to other parts of the game. There are still visible seams at height intervals (because rays don't have neighbors to check on the Y axis), but this is still a lot better than seeing an entire grid of lines going across the landscape, so it's not something I am focused on improving at the moment.


As for what I will be working on next, I have been looking at some UI libraries to see which I will add to the game. I've already picked one to try for the moment, and if it is easy enough to use without having to break or re-code a large part of my game, I'll stick with it and start adding some game features.

CC Ricers
Another quick no-pic update. I've spent a lot of time in the voxel chunk rendering code, perhaps too much time, but part of it has to do with converting chunks to 32x32x32 size.

Previously every chunk was 32x256x32 in size (256 being the height). That seemed like a good idea at the time, since I could query only X and Z in world coordinates and not worry about height when it comes to lighting or neighbors. But it took too long to generate one chunk on my old laptop, which I want to use as a baseline spec for running the game on older hardware. So I redid everything in the chunk management code.
It took a while but it was worth the trouble. Chunks are now cubic, and still queried individually for collision detection and rendering, but with more granularity in loading times. Making voxels from 3D noise is no longer prohibitively expensive, which was also a very nice thing to discover.

Chunks with the same X and Z coordinates are grouped in a class called ChunkStack. For 256 units of height, each ChunkStack contains 8 chunks. This class makes it easier to organize chunks by distance, since I have 8 times fewer things to sort, and it also lets each stack share the same 2-dimensional data for generating its chunks (so I don't have to redraw the same portion of a height map or biome map 8 times, once per chunk). From the height data, an average height can be computed in order to find good maximum and minimum heights for each ChunkStack. In theory this would permit worlds whose heights span thousands of units, despite the stacks themselves being limited in height.
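A minimal sketch of what such a ChunkStack might look like; the members are illustrative assumptions, not the real class:

```csharp
// Sketch of the ChunkStack idea: 8 cubic chunks sharing one column's
// worth of 2D generation data. Members are assumptions for illustration.
class ChunkStack
{
    public const int ChunkSize = 32;
    public const int WorldHeight = 256;
    public const int ChunksPerStack = WorldHeight / ChunkSize; // = 8

    public int X, Z;                                   // stack location in chunk coords
    public object[] Chunks = new object[ChunksPerStack];

    // Shared per-column data, built once instead of 8 times
    public float[,] HeightMap = new float[ChunkSize, ChunkSize];
    public byte[,] BiomeMap = new byte[ChunkSize, ChunkSize];

    public int MinHeight, MaxHeight; // derived from HeightMap

    public void ComputeHeightBounds()
    {
        MinHeight = int.MaxValue;
        MaxHeight = int.MinValue;
        foreach (float h in HeightMap)
        {
            if ((int)h < MinHeight) MinHeight = (int)h;
            if ((int)h > MaxHeight) MaxHeight = (int)h;
        }
    }
}
```

The height bounds are what would let a stack skip generating (or even allocating) chunks entirely above or below the terrain.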

Finally, I added the ability for a chunk to connect to up to 26 neighbors (top and bottom chunks have fewer), and this is where loading and rendering meshes gets very flexible.

Continuing on an Old Game

In non-voxel related news, I'm thinking of going back to finish the top-down shooter game which I barely started, but have talked about already. It was supposed to be simple and a way to get back into XNA, but ECS coding bogged me down, so I'm gonna do away with it completely for this game. The most I have managed to do with it is move a 3D model around an empty room. But it should be something I can finish in a short time, so this will not be a big sidetrack from my voxel game. Just something I want to do quick and dirty, and still be enjoyable.

3D will not be a requirement this time, just a potential upgrade after finishing the game in 2D. The game will be, as originally planned, a twitch-combat style shooter with multiplayer support. Graphics will be more pixelated/retro in style, with grayscale colors.
CC Ricers
I didn't get as much time as I wanted to work on my game this week. Spent too much time with Skyrim (coming very late to the game, but nonetheless). Still, there are some new updates to show. First, I have re-opened my Twitter account for game dev related stuff, so follow me there. Also, I am now able to load models created with the MagicaVoxel editor, which will likely take up a big part of my game development time in the future.

I discovered MagicaVoxel from Giawa's dev blog; he has already written code to load its models into his program, and that code is available for use. After modifying it a bit, I was able to get a model loading correctly into my game. I had to flip some coordinate values around, because Magica treats the Z axis as vertical, while I use the Y axis for that. Also, I had trouble expanding the 16-bit color values back into a 32-bit integer. The colors were approximate but still looked off, especially with skin tones and lighter colors, so my importer code stores voxels with 24-bit color values. Here is a comparison between the model in Magica and in the game with my importer:


Models made with Magica use a palette of 256 colors which can be user-defined. My model format in the game uses the first 768 bytes to store the RGB values for the palette (alpha not supported yet), and the rest of the bytes are X, Y, Z and color index for each solid voxel. At the end I store two more X, Y, Z locations to get the bounding box of the model. This helps me center and position it for player movement.
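A hedged sketch of a reader for the format described above. The exact byte layout is an assumption here: 768 palette bytes (256 RGB triples), then 4-byte records of (x, y, z, colorIndex), with the final two records holding the bounding-box corners (min, then max):

```csharp
// Sketch of a reader for the described model format. The record layout
// (4 bytes per voxel, min/max corners stored as the last two records)
// is assumed for illustration.
struct Voxel { public byte X, Y, Z, ColorIndex; }

static class VoxelModelReader
{
    public static void Read(byte[] data,
        out byte[][] palette,
        out System.Collections.Generic.List<Voxel> voxels,
        out Voxel boundsMin, out Voxel boundsMax)
    {
        // First 768 bytes: 256 RGB palette entries (no alpha yet)
        palette = new byte[256][];
        for (int i = 0; i < 256; i++)
            palette[i] = new byte[] { data[i * 3], data[i * 3 + 1], data[i * 3 + 2] };

        // Remaining bytes: one 4-byte record per solid voxel
        voxels = new System.Collections.Generic.List<Voxel>();
        int records = (data.Length - 768) / 4;
        for (int r = 0; r < records; r++)
        {
            int o = 768 + r * 4;
            voxels.Add(new Voxel { X = data[o], Y = data[o + 1], Z = data[o + 2], ColorIndex = data[o + 3] });
        }

        // Assumed: the last two records are the bounding-box corners
        boundsMax = voxels[voxels.Count - 1];
        boundsMin = voxels[voxels.Count - 2];
        voxels.RemoveRange(voxels.Count - 2, 2);
    }
}
```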

The code for converting the voxels to meshes is a lot like the code for the chunk meshes. The process is split up into three steps: copy all voxels to a 3D array, determine the visible voxels, and finally check neighbors to build the voxel cubes with the right combination of sides.

As with the chunks, each step is also thread-able to speed up loading of models. It now seems possible to condense a lot of this code into one generic class or set of functions that can work with any kind of voxel object. But the code still needs more cleaning up in order to make this happen.
One trick I use to make this code more readable is to pad the voxel array with a 1 unit "wall" of empty voxels on each side. For example, if I want to import a model 32x32x128 in size, I load it into a 34x34x130 array. Then instead of looping from 0 to 31 or 0 to 127, I loop from 1 to 32, 1 to 128, etc. This way I can guarantee that each voxel in the model doesn't have any neighbors that fall out of bounds in the array.
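The padding trick can be sketched like so (a simplified version with assumed names, using bool for solid/empty):

```csharp
// Sketch of the padded-array trick: a model of size (sx, sy, sz) goes
// into a (sx+2, sy+2, sz+2) array, so every real voxel at indices
// 1..sx etc. can check all of its neighbors without any bounds tests.
static class PaddedGrid
{
    public static bool[,,] Load(bool[,,] model)
    {
        int sx = model.GetLength(0), sy = model.GetLength(1), sz = model.GetLength(2);
        var padded = new bool[sx + 2, sy + 2, sz + 2]; // border stays empty
        for (int x = 0; x < sx; x++)
            for (int y = 0; y < sy; y++)
                for (int z = 0; z < sz; z++)
                    padded[x + 1, y + 1, z + 1] = model[x, y, z];
        return padded;
    }
}
```

Neighbor checks like `padded[x - 1, y, z]` are then always in bounds for any real voxel, at the cost of a slightly larger array.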


Now it's time for me to get a bit creative. Time to start making my own voxel models for the landscape, as I will be adding grass, bushes, etc. and coming up with items that can be picked up.
CC Ricers
This week I have been working on adding more natural features: rivers and biomes (or at least the start of them). As part of making the world more varied and less same-y, I decided it's a good time to add bodies of water! I thought about oceans, but I wasn't sure yet how to modify the height on a large scale. I want to support negative height values for ocean floors, but the voxel engine doesn't support them yet.

So I went with rivers. I took the simple way out for this and used the same noise functions I've been using for everything else. Eventually I settled on a stripped down version of the pattern used for the big mountains so that rivers naturally travel in the valleys and creases between them.


There is also a bit of erosion added, which is more visible in the flatter, lower areas. Noise values are converted to absolute values (negatives become positive), so areas near 0 are lowered in height, and the areas closest to 0 are where the rivers are made.
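The erosion rule described above, sketched with an assumed noise range of [-1, 1] and made-up threshold and scale values:

```csharp
// Sketch of the ridged-noise river idea. Taking the absolute value folds
// the noise pattern so its zero-crossings become creases; heights near
// zero get eroded down, and the very closest areas become river beds.
// RiverThreshold and the erosion scale are assumptions, not game values.
static class RiverCarve
{
    public const float RiverThreshold = 0.05f; // assumed cutoff for water

    // Returns the carved height; isRiver is set where water goes.
    public static float Apply(float noise, float height, out bool isRiver)
    {
        float ridged = System.Math.Abs(noise);
        isRiver = ridged < RiverThreshold;
        // Erosion is strongest at the crease and fades out around it
        float erosion = System.Math.Max(0f, RiverThreshold * 2f - ridged);
        return height - erosion * 20f; // assumed erosion scale
    }
}
```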


(The more eagle-eyed among you might notice some faint grid lines running along the mountains. This is a visual bug caused by raycasting being unable to access voxel info from adjacent chunks; it will be fixed eventually.)

I also want to fix the rivers in steeper areas, because the water looks a bit odd taking on the shape of the mountains. But it's a good start. The rivers tend to travel through the flatter areas, because I put a hard cap on the height at which they can begin.

And then come biomes. These depend on a set of variables provided by, yep, more noise functions. A set of Simplex noise patterns with very low frequencies changes subtly on the local scale (from the player's point of view) but varies greatly across the entire world. They're essentially the same patterns that made the world maps in this previous post, and they will drive other variables such as humidity and temperature. Biomes affect the color of surface blocks and, eventually, vegetation, enemy types and materials scattered around.

My approach to biomes will not be to use different classes for different biomes, or even to divide the world up into discrete regions. Instead, the regions will be formed implicitly by the noise patterns. I will go into more detail as I flesh out the biome system later. Here is an example of a desert biome.


For now, I just hardcoded the terrain to show the desert biome using the desert surface colors. What I really wanted to show here is how rivers change the humidity on a very local level. If we look at the other side of the peak we can see some sort of an oasis.


Humidity in this case can also mean the moisture level in the ground. Areas are grassy around rivers, even in the desert. With smart object placement, there will also be shrubs and trees in these areas.


That's all for now. I want to get the biome rules more defined in order to make the biomes transition well over different parts of the world.
CC Ricers

Day/Night Cycle

Here's a quick update: I have implemented a day/night cycle for the game. It's not physically accurate, but it looks good enough. This short video demonstrates it.

There is now a skydome in the background, which uses a lookup texture to pick colors based on the time of day and the altitude of the sky. Objects brighten and darken with the time of day using a sine function, to which I added various coefficients to get the transitions looking how I want them. Objects are also lit differently with ambient colors based on the sky and the light angle.
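A sketch of sine-driven sun brightness, with assumed coefficients rather than the game's actual numbers, where timeOfDay runs 0..24 and sunrise/sunset sit at 6:00/18:00:

```csharp
// Sketch of driving sunlight brightness with a sine of the time of day.
// The 1.2 coefficient (sharpening dawn/dusk) and the 0.1 night floor are
// assumptions for illustration.
static class DayNight
{
    public const float MinLight = 0.1f; // assumed night-time ambient floor

    public static float SunBrightness(float timeOfDay) // 0..24 hours
    {
        // Sine that is 0 at 6:00 and 18:00 and peaks at noon
        float raw = (float)System.Math.Sin((timeOfDay - 6f) / 12f * System.Math.PI);
        // Coefficient widens the fully-lit part of the day
        float bright = raw * 1.2f;
        return System.Math.Min(1f, System.Math.Max(MinLight, bright));
    }
}
```

A lookup texture indexed by time of day, as considered below, would replace this function with artist-editable data.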

There are a few adjustments still to be made to the sky colors. I'm considering using another lookup texture for the brightness of objects, which would replace the sine function operations and be much easier to adjust.

I've also been working on some code cleanup and re-organization. Chunks are now handled as a group by a ChunkManager class, which makes the code in the game state much easier to read through. I also tweaked the camera physics and controls, as well as the voxel shader code.
CC Ricers
It's now month two of the development of my SeedWorld engine and the game it will be used for. In the first month I already made a lot of progress. The features I completed that month are:

  • A voxel rendering system to draw a world out of cubes
  • Procedural generation of landscape and trees using noise and shape functions
  • Optimized loading and streaming of voxel chunks, including multithreading
  • Move a box around the world, interacting via physics and collision detection

    The engine is shaping up to be a good tech demo already, so here's hoping for another productive month. This week I've already pushed two more features, one being a rendering code change and the other completely new and related to world generation. So get ready for a long post; there is a lot to cover here.

    Building a larger world

    First I will talk about the world map generation. No doubt many people who have wondered how to procedurally generate a world map have run across Amit P's article on polygonal map generation. I wanted to run with some of these ideas and kept them in the back of my head. It turns out that someone already made an XNA implementation of Amit's code to create randomized 3D islands. I downloaded the source and ran the program, but in the end the code was overkill for me; I have to start with something simpler. Here is a picture of a finished map, the result of one night's work.


    With that said, I did not follow Amit's approach to making islands, not even closely. But I did know I wanted to generate the map using Voronoi diagrams. I will also eventually add specific biomes to the map, which is one of the reasons I am doing this. The world will be big but finite, and heights and biomes will affect the way voxels are drawn and rendered from the player's point of view.

    I made a new ScreenElement class for my game's Screen System to put all my code in and test it. The awesome thing about using a system for game states or screen states is that you can experiment with things in a stand-alone state like a mini program. To generate the height map, I use the same Simplex noise functions, but layering on just two noise patterns of different frequencies. Then I add a height mask to lower the elevation on the edges of the map, so all sides are surrounded by ocean.

    The noise function and region plotter take a seed to generate the map, and pressing a key generates a new map from a new random seed. The seeds all come from a pseudo-random number generator, so the same starting seed gives you the same list of maps every time. The process takes a couple of seconds for a map of the size above, but it will only be done once per new game, when a new seed is chosen.

    Batching the models

    After deciding I'm good on the maps for now (I'll get to the biomes later), I moved on to planning the rendering of object models other than the chunks of voxels/blocks. Since the beginning, the Chunk class has had its own Draw function to render its mesh. The wireframe box used to test movement in the world is drawn with a BoundingBoxRenderer class.

    This was not going to scale well with different kinds of objects like monsters, items or NPCs, so I decided that before I even begin turning the wireframe box that represents the player into a solid one, I should refactor my rendering code to be more flexible. I took ideas from the XNA SpriteBatch class to draw models in batches, taking in their transformations and the current effect and drawing them in one go. The default behavior of SpriteBatch is to defer rendering of sprites until you call SpriteBatch.End().

    Similarly, I created a ModelBatch class that loads different meshes into a Dictionary, using meshes as keys and lists of matrices as the values. This way several transforms can apply to one mesh, rendering it more than once. Like SpriteBatch, you can begin and end it several times each frame to separate the batches by effect or render state. This gives you a more organized way to render different groups of objects with different effects. The ModelBatch pseudocode is as follows, as briefly as I can describe it:

    class MeshData
    {
        public IndexBuffer ib;
        public VertexBuffer vb;
        public BoundingBox bBox;
    }

    class ModelBatch
    {
        private Dictionary<MeshData, List<Matrix>> meshInstances, cachedMeshInstances;
        private Queue<MeshData> meshQueue;
        private int initialBatchSize = // some large enough number

        // Other rendering resources such as graphics device, current effect,
        // current camera and renderStates go here

        public ModelBatch(GraphicsDevice)
        {
            // Initialize data structures, reserving a size for each with initialBatchSize
            // Queue gets size initialBatchSize + 1 for insertion purposes
            // Set GraphicsDevice
        }

        public Begin(Camera, Effect, RenderStates etc.)
        {
            // Set current effect, camera and renderStates here.
            // RenderStates can be applied here instead of in Draw()
        }

        public Add(MeshData, Matrix)
        {
            // Matrix is the world transformation for a mesh
            // Search meshInstances for a matching MeshData object
            // If found, add the Matrix transform to its Matrix list
            // If not found, search cachedMeshInstances for a match
            // If found in cachedMeshInstances, clear its Matrix list and copy it
            // to meshInstances
            // If not found in cachedMeshInstances, add the MeshData and a new
            // Matrix list to meshInstances
        }

        public Draw()
        {
            // Set effect parameters that apply uniformly to the whole batch
            // (camera view, projection, etc.) For now, the default technique applies.
            // For each MeshData and Matrix list in meshInstances:
            //     Set vertex and index buffers
            //     For each Matrix in the Matrix list:
            //         Set the world transform for the effect
            //         If in the view frustum, draw the mesh with this transform
            //     Finished rendering instances for this mesh; check if it's present
            //     in cachedMeshInstances. If not found, add it to the cache.
            // Done drawing all meshes, clear meshInstances
        }
    }
    Some things not mentioned in the code are checking for null index buffers and vertex buffers, frustum culling details and using the meshQueue to limit cache size. The last one is important enough to mention, though.

    First, I decided to use caching because frequently emptying and re-filling a Dictionary with a large number of meshes (such as the voxel chunk meshes) adds a lot of memory to the managed heap, calling in the garbage collector every couple of frames. Before switching to this code, the Chunk objects always kept a local copy of the mesh and didn't need to pass it anywhere else so this wasn't a problem until now.

    This is why the Add function checks the cache Dictionary first: it can reuse the old reference to a MeshData object instead of copying in a new one. There is no guarantee the matrix transformations stayed unchanged from the last frame, though, so those always get copied fresh.

    The Queue of MeshData is for keeping a constant size for the cache. Without it, the cache won't function properly and the ModelBatch will continue to grow the cache as it finds more unique meshes to render. This becomes a problem as you move through the world and new Chunk meshes need to be created, as well as other meshes for different monsters and randomly made NPCs.

    For it to work properly, the cache should remove the oldest MeshData to make room for a new one. The MeshData queue does this well, dequeuing from the front in order to remove the oldest object from cachedMeshInstances. The code reads like this:

    // Done rendering a mesh, release it to the cached list
    if (!cachedMeshInstances.ContainsKey(mesh))
    {
        // Add to cache if not found already
        cachedMeshInstances.Add(mesh, new List<Matrix>());

        // Limit the cache to the batch size if it's reaching the limit,
        // removing the oldest mesh
        if (meshQueue.Count == initialBatchSize)
            cachedMeshInstances.Remove(meshQueue.Dequeue());

        // Add the new mesh to the queue
        meshQueue.Enqueue(mesh);
    }
    Additionally, the cache must be set to a large enough size so that the game always has plenty of empty slots to fill with meshes. If the cache is too small, there will always be more unique meshes to render than there are items in the cache, causing the cache to churn every frame even when no new meshes are introduced to the batch.

    The queue/cache size depends on the implementation of the scene. For instance, if I know I will have about 2000 chunks rendered at maximum (since the view radius is fixed), I might want to reserve a size of at least 3000 for meshes scattered on the ground, characters and items. This sounds like a lot, but the memory increase from these data structures is not really much compared to the actual mesh data, and most important of all, no frequent heap allocations/deallocations.

    Those familiar with XNA might notice that the behavior of Draw() is similar to SpriteBatch's End(), and Add() to SpriteBatch's Draw(). I chose these function names because they make sense to me, but I'll probably change them to the SpriteBatch names for consistency.

    Next I'll start work on adding mesh support to the Player class in order to have something for the player avatar, using the same ModelBatch to render it.
CC Ricers
Started off the new year well with my game. I finally got to the point where my collision box code works almost the way I want it to! I say "almost" because there is one slight bug, but nothing game-breaking. Adapting some code from the XNA platform game sample helped a lot. The most difficult part was adding in the Z axis for proper collision response with horizontal movement, and last night I fudged around with the code a lot until I got it to work predictably and sensibly.

Since it's all based on a platform game sample, I'm also left with many arbitrary numbers for variables that compute things such as drag, jumping time, acceleration, etc. I understand how these numbers affect the movement, but I don't like that they don't seem to have any meaningful relation to each other. Oh well, at least I can use them for many things, like reducing the drag when you're on slippery surfaces.

Also, I have uploaded the first video of my voxel engine in action (at 60 FPS to boot). This recording is actually 2 days old, so no collision box demo here.

Other updates include some changes to the graphics code to get it to work with the XNA Reach profile. This didn't involve many changes, just limiting the number of vertices in each buffer to a 16-bit amount and changing some shader semantics. I really want to keep this compatibility, because I want the visuals to be simple yet still look great while running on low-powered graphics cards, in order to attract more potential players. The video above might as well be running on the Reach profile, as it looks exactly the same.

Back to the collision code. The slight bug I found is that when you collide with a wall, you keep sliding in one direction until the bounding box touches the edge of the next block. This only happens when moving in one direction along each axis (e.g. I can slide indefinitely while moving up the X axis but not down it), and you do not get stuck on the wall; you just stop.

So I can now render a bounding box that moves around the world and can jump. The camera is still not tied to player movement, which makes moving around awkward, so that's next on the list of things to fix. After that, I will post another video showing it.
CC Ricers
This is probably the last entry I'll add this year, so I hope it's a good enough one! My SeedWorld engine has now reached a milestone: the first physics code. Also, trees are generated more realistically and the lighting is much improved.

I have raycasting code that emits rays from every visible voxel and shades each one a different darkness, with falloff applied over distance. This is all done on the CPU for now, but I would like to use the GPU if possible. Here is a visual example of the ray casting, stepped in cubes.


The code still has the limitation of not being able to check voxels of neighbor chunks, but the falloff of shade is so great you'd hardly notice these inconsistencies unless you look straight up at the bottom of an overhang (such as under a tree). I want to correct that later.

Also, some normal-based ambient lighting! It is simple but can have a drastic effect on the image. Each side of a cube can be lit with a different ambient color, so it is like sampling indirect lighting from a skybox texture, but much simpler since cubes have only 6 sides. This Wolfire article gives a simple explanation of how it works. It looks much better than having the same dull gray color on all sides. Diffuse colors are also gamma-corrected in the shader.
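The per-face ambient idea can be sketched like this; the six colors below are placeholders, not the game's values:

```csharp
// Sketch of per-face ambient lighting: each of the 6 cube face directions
// gets its own ambient color, approximating indirect sky/ground light.
// The RGB values are illustrative placeholders.
static class FaceAmbient
{
    // Order: +X, -X, +Y (up), -Y (down), +Z, -Z
    static readonly float[][] Colors =
    {
        new[] { 0.55f, 0.58f, 0.62f },
        new[] { 0.45f, 0.47f, 0.52f },
        new[] { 0.60f, 0.68f, 0.80f }, // sky-tinted top
        new[] { 0.30f, 0.28f, 0.25f }, // ground-tinted bottom
        new[] { 0.50f, 0.52f, 0.56f },
        new[] { 0.48f, 0.50f, 0.54f },
    };

    // Cube faces have axis-aligned normals, so one component is nonzero
    public static float[] ForNormal(int nx, int ny, int nz)
    {
        if (nx != 0) return nx > 0 ? Colors[0] : Colors[1];
        if (ny != 0) return ny > 0 ? Colors[2] : Colors[3];
        return nz > 0 ? Colors[4] : Colors[5];
    }
}
```

With only six possible normals, the lookup can live in the shader as six constants indexed by a per-vertex normal index.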

Here is a scene showing just the ambient lighting added to the regular shading from the normals:


And the same scene with the diffuse color multiplied:


Colors pop out a lot more now; everything looks more vivid, which is the look I am going for. The ambient colors are hardcoded for now, but will be dynamic once I get a day/night system going. The colors were still too bright here, but that has since been fixed and the lighting adjusted some more.


Also you may notice that the trees are a bit more complex in shape, at least with the leaves. I generate the "boughs" in random sizes from the center, going in a fixed radial pattern. Each tree can have from 4 to 7 of them. This makes better looking trees than just randomly positioning them from the center, which made some trees look very unbalanced.

The shader code has been refined, and the vertex data as well. Up until yesterday, all the chunk meshes were rendered with absolute positions sent to the shader. Now I just send local positions and transform them with a matrix. To be honest, I peeked at the graphics of Cube World with PIX for an idea of how it does things. Its approach is interesting: after the vertex shader is applied, none of the vertices in view exceeded absolute values of 300 in world coordinates. It seems that it not only uses local coordinates but also makes the world "scroll" around the character so as to keep everything near the origin.

Since the cubes all sit at integer coordinates and chunks are 32 units in size, I was able to greatly reduce the vertex data passed to the shader. Each position is now stored in a Byte4 instead of a Vector3, and likewise the color and normal information. This reduces the vertex size to a svelte 8 bytes. The fourth byte, usually the w component, can be used to store additional information such as AO shading, and possibly material properties that affect how it reflects color. So now the AO is added to the color in the shader instead of "baked into" the vertex. This makes the lighting much more flexible to change.
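One possible field packing for such an 8-byte vertex, sketched below; the exact layout is an assumption, not the engine's actual struct:

```csharp
// Sketch of an 8-byte voxel vertex. Local positions fit in a byte per
// axis since chunks are 32 units; the spare fourth bytes hold AO and a
// face-normal index. The field assignment is assumed for illustration.
struct PackedVoxelVertex
{
    public byte X, Y, Z, AO;          // Byte4 #1: local position + AO level
    public byte R, G, B, NormalIndex; // Byte4 #2: color + face normal index
}

static class VertexPack
{
    public static PackedVoxelVertex Pack(int x, int y, int z, int ao,
        byte r, byte g, byte b, int normalIndex)
    {
        return new PackedVoxelVertex
        {
            X = (byte)x, Y = (byte)y, Z = (byte)z, AO = (byte)ao,
            R = r, G = g, B = b, NormalIndex = (byte)normalIndex
        };
    }
}
```

Compared to three Vector3 fields (36 bytes), this is a 4.5x reduction in vertex bandwidth, which adds up quickly with thousands of chunk meshes.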

First Collision Detection

Another important step forward for the engine is built-in collision detection. So far all I have to show for it is keeping the camera above the surface of the ground. A bounding box is made around the camera; the code uses the box's position in the world to get the nearby blocks and checks whether any of them is solid, which counts as a collision. See, just like 2D tile collision. The collision response is crude: it just moves the camera one unit up when the box collides with the ground. Later on I will add intersection functions to measure the depth of intersection, needed to do meaningful physics with characters and other objects.
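The tile-style test described above can be sketched like this, where the solidity query is an assumed callback standing in for the world lookup:

```csharp
// Sketch of AABB-vs-voxel-grid collision: find the range of unit blocks
// the box overlaps, and ask the world whether any of them is solid.
// isSolid is an assumed callback standing in for the block lookup.
static class BoxCollision
{
    public static bool Intersects(
        float minX, float minY, float minZ,
        float maxX, float maxY, float maxZ,
        System.Func<int, int, int, bool> isSolid)
    {
        // Blocks occupy integer cells, so flooring gives the covered range
        for (int x = (int)System.Math.Floor(minX); x <= (int)System.Math.Floor(maxX); x++)
            for (int y = (int)System.Math.Floor(minY); y <= (int)System.Math.Floor(maxY); y++)
                for (int z = (int)System.Math.Floor(minZ); z <= (int)System.Math.Floor(maxZ); z++)
                    if (isSolid(x, y, z))
                        return true;
        return false;
    }
}
```

Because the box only ever spans a handful of cells, the loop tests very few blocks per frame regardless of world size.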

The way I get the blocks from the world is kind of crazy. It is a function that actually re-generates all the procedural data required to find out what type of block is at a specific location. For now that just means getting the height value from the noise functions at a particular location, but I'll also add a check for other objects being built that occupy that block, so the camera can collide with trees as well. The reason it's done this way is to save time (for now), but also because when chunks are generated, all the block/voxel data stored in the cache falls out of scope after the mesh is made and is replaced with the data for the next chunk. The chunk objects are mostly left with just the index and vertex buffers used to render them.

This way of getting the blocks is actually very quick, because the bounding box doesn't intersect with many of them. I'm curious how well it will work with many moving things on the screen, especially ones almost as big as trees. Fortunately my game will not feature a lot of block creation or destruction, so that helps as well. Eventually, though, I want to keep a separate cache of the nearby block data.

Later on, I'll start work on the Player class, its collision detection, and also code a camera that can be moved around the player for easy navigation.
CC Ricers
Here's some more progress on the SeedWorld voxel engine. Last week was mostly occupied with tweaking stuff: noise outputs, heightmap parameters, and color gradients. It's been a ton of trial and error to get a landscape I was happy with, and I think I needed a new perspective in order not to burn myself out on it.

I added some basic voxel shadow casting code. All it does is, for every X, Z coordinate on the surface, start from the topmost voxel and move down, checking if a voxel is solid or empty. The first solid voxel blocks sunlight and the rest of the voxels are darkened. It's simple, but with some shadow smoothing the effect will look even better.
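That column scan, as a sketch with assumed names:

```csharp
// Sketch of the column scan described above: walk down each (x, z)
// column from the top; the first solid voxel receives sunlight, and
// everything below it is marked as shadowed.
static class ColumnShadow
{
    // solid[x, y, z] -> shadowed[x, y, z]
    public static bool[,,] Compute(bool[,,] solid)
    {
        int sx = solid.GetLength(0), sy = solid.GetLength(1), sz = solid.GetLength(2);
        var shadowed = new bool[sx, sy, sz];
        for (int x = 0; x < sx; x++)
            for (int z = 0; z < sz; z++)
            {
                bool blocked = false;
                for (int y = sy - 1; y >= 0; y--) // top to bottom
                {
                    shadowed[x, y, z] = blocked;
                    if (solid[x, y, z]) blocked = true; // first solid voxel blocks the sun
                }
            }
        return shadowed;
    }
}
```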

I know how to make a landscape that can alternate between long, mostly flat plains, rolling hills and mountain ranges. But the more I looked and moved around it, the more it looked the same. It looked boring. Then I remembered that the environment was far from complete; there are no trees, boulders or other clutter filling up the place. So I decided to start adding trees. The first step I took was making a function that can produce a voxel sphere. These became the leaves for the tree. Then I added a square pillar for the trunk.


Now at least I'm getting somewhere. Here are a bunch of spheres casting shadows on the ground.


Enjoy that green while it lasts, because I soon removed it. This helped me just look at the details of the landscape and not think of how repetitive it looks with everything being grass green. All the code for coloring voxels was hard-coded into the Chunk class, and this had to be moved sooner or later. Voxel color of natural features will probably be controlled by Biome classes or something similar.

So for now, everything is colored white with AO and directional lighting. Here are some screenshots of Grayscale Land.



The only condition in the tree-generation code is that trees are placed where the elevation is lower than 80. But trees don't naturally occur in such an orderly fashion, so it's time to mix it up. My tree generation consists of finding a surface voxel to "plant" the tree on, then using some loops to place cubes near the bottom for the trunk. For the leaves, a function determines whether a location is inside a sphere, and leaf cubes are placed anywhere inside it.
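The sphere membership test for leaf placement is tiny; a sketch with assumed names:

```csharp
// Sketch of the leaf-placement test: a leaf cube goes wherever its
// location falls inside the sphere around the foliage center.
static class LeafSphere
{
    public static bool InsideSphere(int x, int y, int z,
        int cx, int cy, int cz, float radius)
    {
        int dx = x - cx, dy = y - cy, dz = z - cz;
        // Compare squared distances to avoid a square root per voxel
        return dx * dx + dy * dy + dz * dz <= radius * radius;
    }
}
```

Stretching the three axis terms by different factors turns the sphere into the ellipsoids mentioned below.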

The placement now needs to be randomized. Each chunk can be identified by its location in the world, and I can use a pseudo-random number generator that will have a different seed for each chunk. Here is how I determined the seed:

[font='courier new']chunkSeed = ((chunkOffsetX << 16) + chunkOffsetY) ^ worldSeed[/font]

The chunk's 2D location creates a 32-bit int that is then XOR'ed with the seed used everywhere for generating the surface of the world. You might notice that if you walk 65536 blocks in one direction (maybe if you're a voxel granddad) the same seeds will be produced again. I don't think it's an issue for now, as that's a very far distance to travel, and repeated object placement will likely go unnoticed if you are moving around the world for that long. The world seed will still have an influence on this item placement later on, so things may not even be 100% the same then.
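Sketching that seeding scheme (the engine is C#; here Python's stdlib PRNG stands in, and the explicit 32-bit mask mimics the int wrap-around that causes the repetition mentioned above):

```python
import random

def chunk_seed(chunk_x, chunk_y, world_seed):
    # Pack the 2D chunk coordinate into 32 bits, then mix in the world seed.
    return (((chunk_x << 16) + chunk_y) & 0xFFFFFFFF) ^ world_seed

def chunk_rng(chunk_x, chunk_y, world_seed):
    # A PRNG that differs per chunk but is stable across runs.
    return random.Random(chunk_seed(chunk_x, chunk_y, world_seed))
```

Because only the low 32 bits survive, the same seed comes back after 2^16 steps of the X coordinate, which is exactly the wrap-around described above.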

Random values are used for almost everything in tree generation, from its location in the chunk, trunk height and foliage dimensions. Spheres can now be ellipsoids and I can probably add several of them in one tree to make their forms look more varied and natural.

There will be cases where parts of a tree intersect other chunks, so those chunks need that tree's information. What did I do here? For every chunk, I also compute the random values for all 8 chunks surrounding it, and then piece that information together, rendering the parts of each tree that fall inside the chunk.

That sounds terribly inefficient on paper, and it probably is, but chunks are still built as fast as ever. The tree generation code is still fairly cheap compared to computing 3D simplex noise, plus I do not have to loop through every single voxel to see if something needs to go there (I'd have to check for out-of-bounds voxels anyway, otherwise the program would crash trying to access an array element that isn't there).

So here are the results of how that looks:


(Not Bad image macro goes here)
Keep in mind this is with just one tree randomly placed in each chunk, where elevation is below 80. I first thought I needed 3 or 4 trees per chunk to look decent, but I plan on making the trees bigger anyways. A dense jungle might get away with 2 per chunk, though.

Next, I'll refine the tree generation to make more interesting shapes for the trunk and leaves, and then work on varying the colors with the trees and terrain!
CC Ricers
One week later into my voxel engine, which I now call the SeedWorld engine, I am still facing a lot of technical issues but have still made a lot of progress. I finally have an octave noise function that I am very satisfied with, creating those very believable rolling hills you see a lot in procedural landscapes. Here is a breakdown of the current technical specs of the voxel world generation.

  • Voxel data is discarded as soon as chunk meshes are made; chunks store only vertex data at the minimum*
  • Far draw distance (I want to emphasize this on faster PCs)
  • World divided into 32x32x256 chunks, with an area roughly 2000x2000 in size for the visible portion
  • Multi-threaded support for voxel and mesh generation

Future specs include:

  • Material determines the attributes in game, color gradients, and sounds for interaction feedback
  • Persistent storage for voxels only in the player's immediate surroundings*
  • Different biomes which affect the visuals and interactivity of the materials

*This supports interactivity for making the world destructible, but only where it makes sense (near the player), and keeps the managed memory footprint low.

When I was still tweaking different combinations of noise patterns, I could only come up with either very large, smooth, round hills, or many little but very bumpy ones. No repetition, but very bland to look at.

I had the basic idea down: combine many layers of Simplex noise of different frequencies, offsetting the X and Y for each of them just a little. But I had a derp moment when I realized I should be reducing the amplitude (effectively, the height variation) as I increase the frequency for best results. JTippets' article on world generation really helped here.
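That octave combination fits in a few lines. This is a minimal sketch, with a cheap deterministic placeholder where a real 2D Simplex noise call would go:

```python
import math

def noise2(x, y):
    # Placeholder in [-1, 1]; a real engine would call 2D Simplex noise here.
    return math.sin(x * 12.9898 + y * 78.233)

def octave_noise(x, y, octaves=5, persistence=0.5):
    total, amplitude, frequency, max_amp = 0.0, 1.0, 1.0, 0.0
    for i in range(octaves):
        ox, oy = i * 17.0, i * 31.0            # offset each octave a little
        total += noise2(x * frequency + ox, y * frequency + oy) * amplitude
        max_amp += amplitude
        amplitude *= persistence               # less height variation...
        frequency *= 2.0                       # ...as the detail gets finer
    return total / max_amp                     # normalize back to [-1, 1]
```

The persistence value controls how rough the terrain looks: closer to 1.0 gives bumpy noise everywhere, closer to 0 gives the big smooth hills.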

Here are some screenshots of various builds, in order of progression. Here is "revision 2", which follows the first build mentioned in my last journal entry:


Already in revision 2 I have added optimized mesh generation to remove hidden faces. The wireframe render shows this well.
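The hidden-face rule itself is simple: a cube face goes into the mesh only when the neighboring voxel in that direction is empty. A sketch (the actual engine is C#; this uses a set of coordinates for brevity):

```python
# The six axis-aligned face normals of a cube.
FACES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def visible_faces(solid):
    """solid: a set of (x, y, z) voxel coordinates. Yields (voxel, normal)."""
    for (x, y, z) in solid:
        for (dx, dy, dz) in FACES:
            # Emit the face only if nothing is covering it.
            if (x + dx, y + dy, z + dz) not in solid:
                yield (x, y, z), (dx, dy, dz)
```

Two touching cubes share a hidden pair of faces, so they produce 10 faces instead of 12.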

Revision 3 shows the vast improvements in terrain generation that I mentioned previously. The draw distance is improved, and noise patterns create much more natural looking hills and valleys. Color is determined by height variation and whether or not the block is a "surface" block. The white patches you see are sides of steep hills that don't have the top face visible.


Between revisions 3 and 4 I was trying out ways to speed up voxel generation, mostly with octrees. That didn't work out as planned, for reasons I will state later in this post. So I went back to my previous way of adding voxels. The biggest feature update here is simple vertex ambient occlusion through extensive neighbor voxel lookups.



It is a subtle update, but it greatly improves the appearance of the landscape. I applied the AO method that was discussed on the 0FPS blog. The solution is actually simple to do, but the tedious part was combining the numerical ID lookups for all the neighbor voxels so that each side is lit correctly. I should really change those numbers into enums for voxel locations so the code is less confusing.
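For reference, the per-vertex rule from that 0FPS article boils down to this: for each face vertex, test the two voxels adjacent along the face's edges (side1, side2) and the diagonal voxel (corner). 0 is darkest, 3 is fully open:

```python
def vertex_ao(side1, side2, corner):
    # If both edge neighbors are solid, the vertex is fully pinched and the
    # corner voxel can't matter (it isn't even visible).
    if side1 and side2:
        return 0
    return 3 - (side1 + side2 + corner)
```

The tedious part the entry mentions is wiring up which of the 20 neighbor voxels map to side1/side2/corner for each of the 4 vertices on each of the 6 faces.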

Here is a screenshot showing just the AO effect.


It is around revision 4 when I also made a Git repo for the project, and it has also been uploaded to a private Bitbucket account.

Performance stats, you say? Unfortunately I am not yet counting the FPS in the engine, and I believe my use of a stopwatch to track chunk update times is wrong: when it reads 15 milliseconds (about 67 FPS) the program becomes incredibly slow, as if it were updating only twice per second, but at 10 milliseconds or less, the program runs silky smooth without any jerky movement.

What I can tell you, though, is that currently I am sticking to updating just one 32x32x256 chunk per frame in order to keep that smooth framerate. At 60 chunks per row, it's still quick enough for the world generation to catch up to movement up to around 25 blocks/second. This is throttled by a variable that tells the program how many "dirty" chunks per frame it should update. My processor is a Pentium G3258, a value CPU but still decent for many modern games (as long as they are not greatly dependent on multi-threading), especially since it is overclockable. I have mine overclocked to 4.2 GHz. If you have a CPU that can run 4 threads, or has 4 cores or more, you should be able to update several chunks per frame very easily.

About using octrees: I did not perceive any performance gains from them so far. I wanted to use octrees as a way to find potentially visible voxels without the brute-force option of going through every voxel in the array. The good news: I got the octrees to technically work (and did some nice test renders), and I also learned how to do so using Z-curve ordering and Morton encoding, so at least I gained some interesting knowledge. The bad news: reducing the number of voxel lookups with octrees did not let me update more chunks per frame, which was the ultimate goal. So I am putting aside the octree-related code for now; maybe it will come in handy later.
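For anyone curious, Morton (Z-order) encoding just interleaves the bits of x, y and z into one index, so spatially close voxels tend to land close together in a flat array. A sketch using the standard bit-spreading constants for 10-bit coordinates:

```python
def part1by2(n):
    # Spread the low 10 bits of n so there are two zero bits between each.
    n &= 0x000003FF
    n = (n ^ (n << 16)) & 0xFF0000FF
    n = (n ^ (n << 8)) & 0x0300F00F
    n = (n ^ (n << 4)) & 0x030C30C3
    n = (n ^ (n << 2)) & 0x09249249
    return n

def morton3(x, y, z):
    # x's bits land at positions 0, 3, 6, ...; y at 1, 4, 7, ...; z at 2, 5, 8, ...
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)
```

Each octree level then corresponds to dropping three bits off the Morton index.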

Persistent local voxel storage concept, and future updates

The persistent storage for local voxels is definitely something I want to implement and make a key feature of my engine. Keeping voxel data for the entire visible world is usually wasteful; it really only makes sense to know what is immediately around you. After all, if you have a pickaxe, you are not going to reach a block that is 500 meters away. This data storage will update as you move around the world, storing at most 4 chunks' worth of voxels.

This can be applied further with other objects that may interact with the world surface. Say you are a mage that can cast a destructive fireball. Upon impact, you want to update the voxel data in the area around the fireball so it can make a crater. Or an ice ball to freeze the surface. Obviously you want these calculations to be done very quickly, so it sounds like a good way to stress test the engine with lots of fireballs and who knows what else being thrown around.

Other features I want to add soon are pseudo-random rock formations and slope-steepness measurement, which will help in generating other pseudo-random elements. I'll probably add those voxel trees first, in order to add more to the landscape.
CC Ricers
I've decided to go back to XNA for my shooter game on Windows 7. Two reasons why: using custom effects is unwieldy with MonoGame on Windows 7, and I'm wasting my time trying to figure out why they are not being built correctly; and making my game multi-platform is more of a long-term goal. MonoGame is more geared to porting or writing code on platforms that don't natively support XNA, and because there are many tutorials on it for those platforms, building custom effects there may be more straightforward. That is really the only issue I'm having with the content pipeline.

So for now I will go back to focus my efforts on making a prototype using XNA only and then when it's in a playable state, start porting for the other platforms.

Voxel World - First Attempt

Meanwhile, I decided to try my hand at making a voxel-based world because I realized that, for a while, I have been interested in the idea of creating a huge world that would at least be interesting to look at and walk around on. Also, I want to find out what challenges come with rendering such a world.

Eventually, I will want to make some sort of action/adventure game with the voxel/cube world engine once it is fleshed out well enough. So far it has been a mostly smooth experience seeing what goes into the code. To get a head start I picked out some pre-existing code for 2D and 3D simplex noise. I quickly learned how readily the noise functions make a continuous texture without being repetitive, which is of huge importance when making a procedurally generated world.

I started working on this on Saturday, and made some decent progress by Sunday night, making a cube world with 3D Simplex noise and some mesh optimization to avoid rendering hidden cubes. The custom shader is very simple and low bandwidth, taking only a Vector3 for local position and a Byte4 for color. The mesh-building code also adds a shade of green to cubes that are visible from above, and the rest are brownish in color. This creates a somewhat convincing grass-and-dirt look. Finally I implemented some basic brute-force culling to avoid rendering invisible chunks. Quick and dirty procedural world!



Some problems I found with this voxel-cube generator: I frequently ran into out-of-memory exceptions when I went any higher than 192^3 cubes. I was splitting the world into 32^3-sized chunks, but it didn't really help the memory problem. My guess is that the triple-nested loops used to calculate the noise values are wound too tight, and the storage might benefit from single-dimensional arrays. Also, I was storing the chunks in a resizable list, but it makes more sense to use a fixed-size array to keep track of them. Finally, while interesting to look at, the shapes produced by 3D noise weren't very desirable, so I switched to 2D to make a more believable height map. From there, I will experiment with some 3D noise to get some interesting rock formations and caves going.
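The single-dimensional array idea is just manual index math: store X*Y*Z voxels in one flat block of memory and compute the offset by hand. A sketch of the indexing convention (any ordering works as long as reads and writes agree):

```python
def flat_index(x, y, z, size_x, size_y):
    # x varies fastest, then y, then z.
    return x + size_x * (y + size_y * z)
```

In C# this maps to a single `byte[size_x * size_y * size_z]` instead of nested arrays, which avoids the per-row object overhead and keeps voxels contiguous for the cache.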

Streaming the Voxel World

The next steps are to stream new chunks of the world into view as you move the camera around. I've already made some progress with this, for now only by showing a 2D noise texture that "scrolls" when the camera moves past a certain distance from the center chunk. Currently, when the world requests new chunks to be added, the cube meshes for all chunks are re-built. This of course is not efficient, but I have come across bugs in how chunks are ordered when I try to re-build only some of the chunks.

The world is drawn as a grid of n x n chunks, where n is an odd number so there is an actual "center" chunk for the camera to sit in. Each chunk has a 2D coordinate called the offset, which tells its world position. For a chunk that is 32 units long and wide, its local coordinates run from 0 to 31, and the offset vector is added to give the chunk's cubes their world positions.
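The local-plus-offset idea in a sketch; whether the offset is stored in chunk units or block units is an implementation detail, so this assumes chunk units with 32-block chunks:

```python
CHUNK_SIZE = 32

def cube_world_pos(chunk_offset, local_pos):
    ox, oz = chunk_offset          # the chunk's 2D offset coordinate
    lx, ly, lz = local_pos         # local x/z in 0..31
    return (ox * CHUNK_SIZE + lx, ly, oz * CHUNK_SIZE + lz)
```

This is also why re-offsetting a chunk is cheap: only the offset changes, and the mesh is rebuilt with the same local coordinates.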

To update chunks I have planned the following series of steps to do on every frame.

For each chunk in the grid:

  • Get the X and Z distances between its offset position and the camera's position
  • If X or Z is too far away from the camera:

    • Set a new offset for the chunk according to its distance from the camera
    • The chunk is automatically tagged as 'dirty' when its offset is updated

Set a "chunks rebuilt" variable to 0

(2nd loop) For each chunk in the grid:

  • If the chunk is 'dirty':

    • Re-build the chunk's mesh with the new offset location
    • Increment "chunks rebuilt" by 1
    • If n chunks have been re-built, exit the loop

Hopefully this plan will make it possible to update only part of the world, with the proper offset coordinates fed to each chunk. I split the chunk updates into two separate loops so it is clear which ones need to re-build their mesh, and to avoid discontinuities in updating. I think that recalculating the correct offset coordinates for "faraway" chunks will actually be the trickiest part to get right, since it depends on the X and Z values of the camera.

As far as the "exit the loop" business goes in the second loop, that is meant to minimize any CPU stalling when re-building chunks. Calculating the noise values and determining what cubes need to be added to the mesh is going to be taxing for the CPU, so I decided it is better to just update a few chunks every frame. I'd rather have some chunks be visibly "in progress" than try to re-build all dirty chunks and abruptly stall or lag the game when the player enters another section of the world.
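The two-pass plan can be sketched as a runnable toy. The dict fields and the grid_span arithmetic here are illustrative, not the real engine:

```python
def update_chunks(chunks, cam_x, cam_z, max_dist, grid_span, budget):
    # Pass 1: re-offset faraway chunks, which marks them dirty.
    for c in chunks:
        dx, dz = c["ox"] - cam_x, c["oz"] - cam_z
        if abs(dx) > max_dist:
            c["ox"] -= grid_span if dx > 0 else -grid_span
            c["dirty"] = True
        if abs(dz) > max_dist:
            c["oz"] -= grid_span if dz > 0 else -grid_span
            c["dirty"] = True
    # Pass 2: rebuild at most `budget` dirty chunks this frame.
    rebuilt = 0
    for c in chunks:
        if c["dirty"]:
            c["mesh"] = ("built at", c["ox"], c["oz"])   # stand-in for mesh build
            c["dirty"] = False
            rebuilt += 1
            if rebuilt >= budget:   # spread the cost over several frames
                break
    return rebuilt
```

Chunks left dirty simply get picked up on a later frame, which is the "visibly in progress" behavior described above.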

So far I haven't seen any visible dips in framerate when 81 32x32 chunks are updated in one frame, but it's still good to have the option of splitting updates across several frames when needed. Next I will go back and optimize the mesh building so that only the outward-facing faces are drawn, rather than the entire cube.
CC Ricers
I guess it's pretty soon after my last journal entry, but I've made quick progress with my game code the past couple of days. First of all, I finally got the Entity-Component System code working in the project.

In more detail:

  • PlayerSystem sets movement and commands for all players

    • For AI, PlayerAction chooses a target out of a list of possible targets
    • For human controlled players, it reads and updates PlayerInput
    • Velocity is then updated based on one of the components mentioned above

  • PhysicsSystem applies Velocity to the Transformation matrix of objects
  • PowerUpSystem tracks all available power-ups and other pickup items and checks for player interaction
  • RenderingSystem is used with Transform and Sprite to draw the graphics on screen

The results are not exactly the same, but I was mostly focused on translating most of the program logic and making the code compile again. Last week I completed the move to ECS, but more exciting is that I also moved to MonoGame 3.2 and am building it as a MonoGame application. I decided that it's a lot easier to do this sooner rather than later, especially since my code base is still not so big and I am still prototyping in 2D.

One of the big hurdles in moving from XNA to MonoGame is re-doing your content pipeline. XNA had it all set up for you, for processing different types of images, models and sounds, and made it possible to write your own content processors. It worked like an awesome magic black box that takes care of all the conversion for you outside of the actual game, and then you just do a few function calls in your game to load the assets blazingly fast. MonoGame didn't have that for a long time. They recently added a custom Content Builder, but I have no use for it yet.

Processing image files is now a part of MonoGame, and I also found a way to add in SpriteFont support to the build process as well. Keeping with my XNA projects, I even got it to work with the Nuclex SpriteFont Processor so now all my text can be rendered very nice and smooth-looking. The next big step with the content is re-adding the code for my 3D engine piece by piece and hopefully bringing in support for normal and spec-mapped models and all my custom shaders again.

      Experiments With Pathfinding

But for now, I am focusing on an important part of the game AI: pathfinding. As mentioned before, my goal is to make a 3D top-down shooter, and a lot of the inspiration comes from a similar indie shooter simply called Kong. It started development around 2007 and was ported to XNA on Xbox 360 a few years later. It's a simple game, but one that looks potentially very fun with multiple players (the online scene is dead at the moment). This is the basis for what I want my game to be like.

I do not know exactly how the bots in the game work, but I am thinking back to the days of Unreal Tournament 2004, when bots used waypoints and pickup items as guides to get around the map. Waypoints, though straightforward to implement, are now considered obsolete and tend to produce unconvincing AI movement. It is preferred to use a navigation mesh, a grid, or some combination of both, which lets AI players move freely over areas instead of sticking to a fixed network of paths.

I decided to go with grid-based movement. Two reasons for this: there are several algorithms that make it easy to apply grid-based movement, even when movement in the game is not restricted to squares, and the maps in my game will be built using a 3D cube grid. No, it will not look like Minecraft, but having 3D objects snap to a grid makes it easier for me to build a map editor, as well as more intuitive for people to use.

Yesterday I was able to get a simple A* pathfinding demo working in my browser, using plain old JavaScript and no external libraries. Most of the help came from Brian Grinstead's A* algorithm code, but I still had to make a couple of tweaks to get it working in my demo. I used the simpler, but less efficient, version, and have it support 45-degree diagonal movement. Here is a screenshot of the demo:


The drop-down box lets you add/remove walls, set a start point and end point, and then calculate the path. It's a little buggy in cell selection, but this is just a web-page prototype, so I just cared about making the algorithm work. In its final version I want it to support Euclidean distances so it can follow any kind of diagonal direction, but it's more likely that will happen long after I integrate the pathfinding code into the game.
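For reference, a compact grid A* in the same spirit as the demo. This is a sketch, not the demo's actual code (which is JavaScript); it assumes uniform step costs with a Chebyshev heuristic, and it deliberately keeps the demo's corner-cutting behavior discussed below:

```python
import heapq

def astar(grid, start, goal):
    """grid[y][x]: 0 = open, 1 = wall. 8-way movement, unit step cost."""
    h = lambda p: max(abs(p[0] - goal[0]), abs(p[1] - goal[1]))
    counter = 0                                  # tie-breaker for the heap
    open_heap = [(h(start), counter, start)]
    came_from = {start: None}
    g = {start: 0}
    while open_heap:
        _, _, cur = heapq.heappop(open_heap)
        if cur == goal:                          # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                nx, ny = x + dx, y + dy
                if not (0 <= ny < len(grid) and 0 <= nx < len(grid[0])):
                    continue
                if grid[ny][nx]:                 # wall
                    continue
                ng = g[cur] + 1
                if (nx, ny) not in g or ng < g[(nx, ny)]:
                    g[(nx, ny)] = ng
                    came_from[(nx, ny)] = cur
                    counter += 1
                    heapq.heappush(open_heap, (ng + h((nx, ny)), counter, (nx, ny)))
    return None                                  # no path exists
```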

One oversight in the diagonal movement is that this code allows moving through diagonal walls. It comes from how the original code checks neighboring cells. For instance, if there are obstacles to your north and east, but not northeast, you can still move northeast. In a game, this makes a character look as if he is "tunneling" through the wall. It's even worse if there are tiles with diagonal walls, making it much more obvious that you shouldn't be able to pass through there.


Needless to say, the pathfinding is not yet in a usable state. It's an easy fix, though. There are only four pairs of directions where, if both walls are present, it's safe to assume that the diagonal between them is also impassable. Thus, a wall should be added there to the list of neighbor cells to check.
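That fix can be written as a single predicate: a diagonal step (dx, dy) from (x, y) is legal only if both of its orthogonal "shoulder" cells are open. A sketch using the same grid convention as above, with bounds checks omitted for brevity:

```python
def diagonal_allowed(grid, x, y, dx, dy):
    # grid[y][x]: 0 = open, 1 = wall; dx and dy are each -1 or +1.
    return not grid[y][x + dx] and not grid[y + dy][x]
```

Dropping this check into the neighbor-gathering loop removes the "tunneling through corners" artifact described above.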

Once I have made that fix, I will port the code to C# and run it in a separate simulation to find out its real-time performance. I want to have several AI players make their way to a goal from random points in a maze first, and then make that goal movable so their paths update in real time. Keep in mind I still have to add collision detection to the walls, especially for the human-controlled player. However, this will already be a huge step forward in development, not only for this game, but also in my understanding of game mechanics I have not previously explored in detail.
CC Ricers
Moving right into making a new game to revive my C# and XNA skills, I decided to go with a top-down shooter. I already have a code base to start with, which includes some basic AI. The AI is a big one for me, since for as long as I can remember I have avoided this subject and haven't really done anything with it. It's one of the areas of game development that was still tricky to me, and I didn't fully know how to start. But an article here on GameDev has helped clear up my mind a lot about it: this article about State Machines in games.

I'm familiar with the State Machine concept and have used patterns common to them to move a user through screens and game modes. However, the significant part of the article comes in the section "A More Complex Set of Machines". It uses State Machine patterns to create game actors that feel almost like living entities in the game. This is where I wanted to pursue the idea further.

Fortunately for me, the sample code for this part is in C#, and I got to examining it right away. The tutorial used puppies and toys as the things to interact with. I could easily picture what you can do once you start replacing graphics and adding your own behaviors. The article even hints at turning it into a shooter as well. The first thing I decided to change is the API: it uses WinForms, and that is definitely not going to be used in my game. Replacing the draw functions was trivial, and then it came down to moving all the state machine related code into my XNA game.

Now I have the puppies moving around chasing balls and sleeping on mats again, but this time it's hardware accelerated (aww yeah). With WinForms elements removed I no longer got any text feedback on the puppies' stats so I added some Sprite Font text to show the debug info. I put four puppies in my simulation and each one has their stats displayed clearly on the top of the screen like a basic HUD. If you squint your eyes hard enough you can almost see some sort of 2D competitive game with 4 AI players. Almost.


A Component Entity System architecture

After getting the code ported to XNA, it's time for the next big change: adding a Component-Entity System. I wrote one a long time ago for practice, and I'm picking up from there. So far I have a SpriteRenderer system working with Sprite and Transform components. But the dogs don't move anymore, because the behavior code has been made ineffective. This is where I have to do some thinking: how do I integrate the AI into the Component-Entity System? Should AI be a distinct component, or part of a PlayerActor component?

As a distinct component, I attempted to have a system act on the PlayerAction component (what I call the AI choices), but there is a setback in having PlayerAction as an abstract class from which other PlayerActions are derived. The idea was to have the system loop through the Entities with PlayerActions and perform their updates, with each Update function returning either a new type of PlayerAction or null. The system removes the old PlayerAction component from the Entity if a new one is returned, or does nothing otherwise.

I thought this would work, but the class detection was not working. I am using reflection to find the class, but the behavior was unexpected with abstract classes. For instance, suppose I have an AI with an Idle object that derives from the abstract PlayerAction. Idle should be detected as a PlayerAction, but instead it is detected as Idle. So when an update tells it to replace the PlayerAction, the system won't be able to do it, because it is not reading the right class.

That's when I realized that this method of Component detection is inefficient, not to mention wordy. Reflection has its costs, especially if you have hundreds of Entities to loop through. Then I also end up with code like this:

[font='courier new']Components.Transform transform = gameEntity.Component<Components.Transform>();[/font]

Although it's lengthy mostly because I have my Components in a Components namespace to avoid some name clashes with XNA-specific classes. What I should be doing instead is using bit masks, with an Enum as a Component ID passed as an argument to the Component function. (The bit mask can be better understood in a key-lock analogy, as explained in this article.) Then finding appropriate Entities and Components becomes easier.
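The key-lock idea fits in a few lines (the game is C#; this Python sketch uses illustrative component names): each component type is one bit, an entity's "key" is the OR of its component bits, and a system's "lock" is the mask of the components it requires:

```python
from enum import IntFlag, auto

class Comp(IntFlag):
    TRANSFORM = auto()
    SPRITE = auto()
    VELOCITY = auto()
    PLAYER_ACTION = auto()

def matches(entity_mask, system_mask):
    # The entity "fits the lock" if every required bit is set in its key.
    return (entity_mask & system_mask) == system_mask

RENDER_MASK = Comp.TRANSFORM | Comp.SPRITE   # e.g. a sprite renderer's lock
```

This sidesteps the reflection problem entirely: detection is a single integer AND instead of a runtime type check.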

AI interacting with the world

Now I need to choose whether to move the AI actions into a bigger component instead. The AI needs to interact with items of interest in the world. With the puppy example, it would be toys and sleeping mats. In the original code, the puppies are given information on how many playthings there are and where they are. Changing the code to an ECS pattern means that all these things will have their positions stored in their own components. The PlayerAction component no longer has direct access to the positions, health, speed, etc. of other things; in fact, not even of the player itself! As it is now, the component just cares about what it does and not what it's controlling. But for certain actions, it needs to know.

Maybe I should bring the focus of the AI onto the player's own information. The puppies in the simulation do not care about other puppies, only about themselves. A puppy goes to a sleeping mat when it's low on energy and then finds a ball to chase once its energy is full. This is okay, because it is an easy start for changing the code into what I want it to do. I could make an exception for the PlayerAction component by giving it a reference to the Entity it's part of. Then it will have direct access to its statistics and location. I am fine with this, because if it's supposed to be the brains of the player, it should know almost everything about itself. As far as updating the action goes, the AI System makes a limited list of Entities that would be of possible interest to the player and sends that to the Update function.

I am settled on this design for now. The first thing I have to do is make Components accessible by ID and use bit masks to select them in the Systems. Implementing the different behaviors should then be cake.
CC Ricers

Returning to form?

What's going on everyone! I haven't touched this journal (or this website) for over a year. I also had a blog that I stopped updating for just as long. For those followers that still remember me, you might be wondering what has happened?

Well, during the time when I was game coding and talking about coding here, I was working freelance as a web developer for a year. That didn't go well for me, since running my own business has never been one of my strengths nor one of my interests. I did it because I had to get by somehow during my long search for a full-time job, but projects were so short and infrequent that I sometimes went weeks without work. Last spring I finally got a full-time job, which took time and focus away from other hobbies such as computer modding, but also let me improve my finances. And unfortunately that meant putting off my game development projects.

Now I'm back because, well, I no longer have that job >_> The company I was with had financial difficulties, so they made tough choices I wasn't happy about. Not exactly a new thing for me; I've worked for a few startups before that, for one reason or another, imploded somewhere along the way. However, I did get some good web dev experience out of it, especially with Javascript (once considered my weak point). Hopefully my job hunt will go much quicker this time.

So while I'm keeping an eye on my own budget, I decided to get back into the C#/XNA programming I was familiar with. This also means I am taking a new angle.
Previously I was focusing a lot on making a graphics engine. Going back to where I left off, I was going to be making a shooter game. So now I want to focus on making a complete game, and it doesn't hurt to add some polish near the end. The graphics engine is a tool I can add to my arsenal for making a game, and it's very likely I'll use it here. My start on the new game will begin with the next journal entry.
CC Ricers
It's been almost two weeks since I started my new game (and I originally wanted to post this on Sunday), and so far the progress has been disappointingly unproductive. At least it has been for the coding and implementation. I've been reading articles on Entity-Component Systems for games, and most of my time was spent planning out the components and a few systems to build my game code on. And this is on top of the screen system, the state pattern that sets the flow for the game's modes.

All I have to show for it so far is a movable sphere on an invisible floor with a fixed camera. Most of the work is behind the scenes, in designing and planning the code. Given the month-long deadline I gave myself, I don't want to get carried away doing this. Also, I just got a new job and have to work 40 hours a week (which I have not done in over a year), so I don't get as much time for this as I wanted.

Components and Systems in the game

In the relatively little time I spent working on the game last week, most of the focus was on a design based on this. I have made Transform, Geometry, Movement, PlayerControl and Light components which already work with systems. Also, I found someone who wrote an article on using the ECS design for the same genre of game that I'm making, so this helps a lot!

There are no specific classes for game entities/objects, just a generic Entity class that takes any kind of component, and only one of each. Some components are specific to the game, and others are more general purpose. I'm fine with this; the low number of components still makes the code manageable.

I want to use more diagrams in my articles, so I will make one soon to give an idea of how the classes are laid out. For now I'll just write them out here:

Components:

Transform: Stores position, rotation, and scale
Movement: Applies change to transform
Geometry: Model representation of object
Light: Stores color, intensity (and other attributes)
PlayerControl: Stores input for player actions
FollowTarget: References another Entity to follow (through Transform)

Example entities:

Geometry - Transform -> any entity represented as a visible object
Geometry - Transform - Movement - PlayerControl -> a controllable player entity
Geometry - Transform - Movement - Bullet -> a bullet entity
Camera - Transform -> a camera entity
Light -> a light entity

Systems:

BasicRenderer -> draws visible objects with a default shader
PlayerController -> takes input from the player, updates movements
BulletCollision -> adds, removes and updates bullets, checks for collisions (soon)

FollowTarget will be used for making enemies that always chase the player, and to move the camera with the player.

I need to figure out a good way of grouping and pooling the entities. Currently I have a Scene class that keeps a list of visible entities to draw, and the lights. This is good for passing to the BasicRenderer, and you can use any Camera you want with it. Additionally, I store the bullet entities in another list, which gets passed to the BulletCollision system.

The problem is that when objects have extra behaviors, such as bullets, they end up referenced in more than one place. When I erase a bullet from the bullet list, the bullet still exists in memory: it remains in the Scene's list, so it is still drawn, frozen in place. So for now I may just put ALL entities into one list, giving me a single place to add and remove them, and have each system filter the list in real time.
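The one-big-list idea could be sketched like this (a toy illustration, with components reduced to string tags for brevity; the real thing would filter on component types):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical minimal entity: just enough to demonstrate filtering.
public class Entity
{
    public HashSet<string> Tags = new HashSet<string>();
    public bool Has(string component) { return Tags.Contains(component); }
}

public class World
{
    // One master list: the single place to add and remove entities.
    public List<Entity> Entities = new List<Entity>();

    // Each system filters the master list every frame instead of
    // keeping its own copy, so removal happens in exactly one place.
    public IEnumerable<Entity> Drawables()
    {
        return Entities.Where(e => e.Has("Geometry") && e.Has("Transform"));
    }

    public IEnumerable<Entity> Bullets()
    {
        return Entities.Where(e => e.Has("Bullet"));
    }
}
```

Removing a bullet is then just `Entities.Remove(bullet)`, and it disappears from both the renderer's and the collision system's view on the next frame, with no frozen leftovers.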

That basically sums up how far I've gotten with the game. Like I said, it's just a sphere (a placeholder for a player model) moved around with the keyboard against an empty background. I will be adding a "target practice" test before introducing enemies with some sense of AI. If there's more time next week, I'll perhaps do more work on the graphics as well.
CC Ricers

What? Another game already? That's right, but this one will not be as big as my racing game project, which I expect to be ongoing for several months and likely at least a year. No, this game will be a short-term project, planned for only one month as part of the One Game A Month quest. I want to get into the habit of finishing games quicker. (Maybe then I could rename the blog Electronic Meteor Games! Imagine that.) I want a game I can make quickly and easily, one that also leverages the coding experience I've gained so far. So it will re-use some of the code I'm working on right now, refactored to fit the needs of the game.


The game will be a twin-stick, top-down shooter. The idea may not be original, but carrying it out should be fairly easy. I don't have a name for it yet; I only know that its features will include multiple levels, upgradeable weapons, local multiplayer (not sure yet if I can finish online networking code in a month), and a cool lighting atmosphere for the graphics. So, basically, what one would expect from a top-down shooter. Characters and setting will be fairly abstract and basic. I don't have much know-how for modeling human characters, so it will be robots blasting other robots.


Here are the main goals I intend to follow for the month-long project:


  • Simplistic but nice-to-look-at graphics and setting
  • Multiple weapons and enemy types
  • Controller support (gotta really get a controller for that though :P )
  • Learn some more AI programming as I go along
  • Use what I learned from the Meteor Engine for the graphics
  • A lighting mechanic to hide/show parts of the map (somewhat of a "fog of war" for a shooter)


I have mostly been inspired to do a top-down shooter by some of the fast-paced games being put up on Steam Greenlight. It's a genre that is simple, fun, and engaging for many people, and I believe that a (stripped-down) top-down shooter can be another good project for budding programmers, comparable to platform games. So for this month, I will be slowing down progress on the racing game to work on this one.


On the AI side, I have been reading this set of tutorials on creating a state machine. Many game programmers are familiar with the game-state-switching pattern for structuring a complete game. These tutorials take it further by applying the pattern in other ways, like setting up rooms for a dungeon crawler, or computer-controlled AI characters that follow their own motives. The latter is the one I'm most interested in. I plan to adapt the tutorial code for this game to give me a head start on the AI. It won't be pretty, but the functionality is what counts here.
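The general shape of such a character state machine (my own sketch, not the tutorial's code) looks something like this:

```csharp
// Each state decides what the agent does each frame and when to
// hand control to a different state. Names here are illustrative.
public interface IState
{
    void Enter(Agent agent);
    void Update(Agent agent);
    void Exit(Agent agent);
}

public class Agent
{
    private IState current;

    // Swap states with proper Enter/Exit bookkeeping.
    public void ChangeState(IState next)
    {
        if (current != null) current.Exit(this);
        current = next;
        current.Enter(this);
    }

    public void Update()
    {
        if (current != null) current.Update(this);
    }
}

// Example: chase the player until out of range, then switch states.
public class ChaseState : IState
{
    public void Enter(Agent agent) { /* play run animation, etc. */ }
    public void Exit(Agent agent) { }
    public void Update(Agent agent)
    {
        // Move toward the player here; when a condition is met, call
        // agent.ChangeState(...) with another state, e.g. an idle one.
    }
}
```

Enemies with different motives then become nothing more than different sets of states wired to the same Agent class.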


For graphics, I mentioned the Meteor Engine, but I will not be using it as-is. Rather, the game will have its own graphics code that takes several ideas from the engine. It will be a trimmed-down, sort of "lite" version of the engine code, relying mainly on deferred rendering. The intent is to provide a setting with many moving lights, and most outdoor daytime scenes aren't good for that. Features will include mostly dark rooms, bullets that light up the room along their path, reflective surfaces on characters and level objects, and point-light shadows. A lot of the visual inspiration comes from the Frank Engine demo, so expect the game to look more or less like that.


I will code this in XNA, as usual, but I will also try to keep it portable to MonoGame. I have been researching this for a while, but my attempts to port any of my code to other platforms haven't gone well so far. MonoGame (in its current 3.0 version) on Mac seems to be a no-go with Snow Leopard; the Apple SDKs aren't up to date with what MonoDevelop uses, so I would have to upgrade Xcode to 4.2, which requires a Lion upgrade. I'm not up to doing that right now, so it will likely run on Linux before Mac :P Cross-platform support is not part of the month-long deadline; like online multiplayer, it's just something I would like to do later to take the game further.


I would like to start programming the game today if I want to finish it before the 30th. The plan for today: use a placeholder model for the character, draw everything with basic graphics, and make the character shoot in all directions. At that point it's not very different, logically, from a scrolling shoot-em-up. So look forward to more posts about my month-long game. It's been a while since I actually released a game, and I want this to be the most complete one I've released so far.

CC Ricers
Guess I spoke too soon about wondering how to pick parts of the terrain, because I figured it out! Since I'm going to use BEPU for the physics engine, I just let BEPU do the dirty work. Using it to create a terrain height field and do ray intersection testing is pretty intuitive. Storing 4 million points is no problem for it, but I may look into its source code to see how it does the intersection tests so efficiently.
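The picking call itself ends up being short. This is roughly how it looks with BEPU v1, going from memory, so treat the exact names (`Space.RayCast`, `RayCastResult.HitData`) as needing verification against the real API:

```csharp
using BEPUphysics;           // Space, RayCastResult
using Microsoft.Xna.Framework;

public static class TerrainPicker
{
    // Cast the mouse ray into the physics space and let BEPU walk its
    // height field, instead of testing terrain triangles by hand.
    public static bool Pick(Space space, Ray mouseRay, out Vector3 hitPoint)
    {
        RayCastResult result;
        if (space.RayCast(mouseRay, 10000f, out result))
        {
            hitPoint = result.HitData.Location;
            return true;
        }
        hitPoint = Vector3.Zero;
        return false;
    }
}
```

The mouse ray comes from the usual `Viewport.Unproject` of the cursor position at the near and far planes; the returned location is exactly the brush-precision hit point I was after.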

In the meantime, though, I can move on to creating the brush and mesh-placement tools. Mesh placement should be easy, since I want most objects to touch the ground. Placed meshes will also be treated as brushes, so you can fill the landscape with loads of objects quickly. For now, I have this video as a proof of concept.


Some ideas on the placement tools:
- Mesh brushes will likely use Poisson disk sampling, as demonstrated here, so the spacing of objects looks natural and you don't have to think much about how their placement looks.
- Objects can still be changed individually if you wish. A single Selection tool will make it possible to change an object's scaling and location.
- Rotation can either orient all objects toward the surface normal or ignore the Y component. Rocks, for example, are fine rotating in all directions, but trees and buildings should generally point up.
- A toggle switch for each object so you can snap its vertical position to the ground surface in one click.
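For the mesh brush, even the naive "dart throwing" form of Poisson disk sampling would do for a start: generate random candidates inside the brush circle and reject any that land too close to an already accepted point. A sketch with hypothetical names, not actual editor code:

```csharp
using System;
using System.Collections.Generic;

public static class PoissonBrush
{
    // Naive dart-throwing Poisson disk sampling over a circular brush.
    // Slower than Bridson's grid-based algorithm, but trivial to write
    // and fine for a few dozen objects per brush stroke.
    public static List<(float X, float Z)> Scatter(
        float cx, float cz, float radius, float minDist,
        int attempts, Random rng)
    {
        var points = new List<(float X, float Z)>();
        for (int i = 0; i < attempts; i++)
        {
            // Uniform random point inside the brush circle (sqrt keeps
            // the distribution uniform by area, not clumped at center).
            double angle = rng.NextDouble() * 2 * Math.PI;
            double dist = radius * Math.Sqrt(rng.NextDouble());
            float x = cx + (float)(dist * Math.Cos(angle));
            float z = cz + (float)(dist * Math.Sin(angle));

            // Reject candidates closer than minDist to an accepted point.
            bool tooClose = false;
            foreach (var p in points)
            {
                float dx = p.X - x, dz = p.Z - z;
                if (dx * dx + dz * dz < minDist * minDist)
                {
                    tooClose = true;
                    break;
                }
            }
            if (!tooClose) points.Add((x, z));
        }
        return points;
    }
}
```

Each returned (X, Z) pair then gets a terrain-height lookup for its Y, plus the orientation rule above (surface normal or straight up) depending on the object type.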

Physical interactions with the objects will come a bit later. I will at least need a wrapper class to associate mesh instances with BEPU Entities.
CC Ricers

Long post ahead! This will mention several things. First, I still want to keep working on the Meteor Engine, but doing it concurrently with a game means I will not put as much time into it as I once did. However, I will try to keep you updated in a better manner, so to separate game progress from engine progress, I will headline them separately. (A changelog for the engine can be found on my blog if you're interested.) I also want to do more visual documentation of my work. As of now I am at a crossroads with my game, trying not to juggle too many things at once. I guess the scope of the project is starting to catch up with me, but I do not want to see it become another piece of abandonware.

The puzzle known as terrain picking

Now back to the game. I'm at a point where I have several options on how to continue with my racing game. I gave it a temporary name, Custom Racer, for now. It's gotten to where I have to document and pre-visualize more plans for taking on the various aspects of development. Time to break out a pad of paper and start drawing out some stuff!

So I have a terrain viewer, some test menus, and a screen system. I wanted to move on to the terrain editor first, and I have some idea of how to implement parts of the GUI: activating different states, editing modes, etc. I tried an immediate-mode GUI sample that works, but I decided against it and instead want to build on the menu system to make the editor GUI.

But I am stuck on how to write the functions that edit the terrain; more specifically, the terrain-to-ray intersection code. It's stumping me. I have a RayPicker class that can cast a ray out from the spot where the mouse is clicked, and it can pick and highlight the chunks of terrain the ray intersects. Progress! It looks neat and all, but I need to find out exactly where in the terrain the ray hits. I know how to apply triangle intersection code for a finer level of picking within the mesh chunks; that's not the problem. The problem is that I also want to know exactly where in the triangle the ray hit. That kind of precision is needed for my terrain brushes.

If I limited myself to only picking triangles, the picked point could be any of a triangle's three vertices. Looking straight down, all the triangles look like right triangles, so I could simply pick the point halfway along the longest edge. But then I'm still snapping to points on triangle edges, and I don't want the brush cursor (and its area) to be limited to jumping from point to point.

I'm using third-party code for triangle picking, and I don't know what some of its single-letter variables mean; I'm hoping they are barycentric coordinates so I can project the exact location on the map from them. Looking at this Lighthouse3D article for sample code, it returns a point with two coordinates, while barycentric coordinates have three. However, barycentric coordinates sum to one, so two of them are enough to recover the third (w = 1 - u - v); those two values may well be barycentric after all.
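For what it's worth, the standard Möller-Trumbore ray/triangle test returns exactly two barycentric coordinates, u and v; the third is implied since they sum to one, and the exact hit point falls out as (1 - u - v)*A + u*B + v*C. A sketch using System.Numerics.Vector3 (XNA's Vector3 has the same Cross/Dot calls):

```csharp
using System;
using System.Numerics;

public static class TrianglePicker
{
    // Möller-Trumbore ray/triangle intersection. On a hit, u and v are
    // the barycentric weights of b and c; w = 1 - u - v belongs to a.
    public static bool Intersect(
        Vector3 origin, Vector3 dir,
        Vector3 a, Vector3 b, Vector3 c,
        out Vector3 hitPoint)
    {
        hitPoint = Vector3.Zero;
        Vector3 e1 = b - a, e2 = c - a;
        Vector3 p = Vector3.Cross(dir, e2);
        float det = Vector3.Dot(e1, p);
        if (Math.Abs(det) < 1e-7f) return false;   // ray parallel to triangle

        float invDet = 1f / det;
        Vector3 s = origin - a;
        float u = Vector3.Dot(s, p) * invDet;
        if (u < 0f || u > 1f) return false;

        Vector3 q = Vector3.Cross(s, e1);
        float v = Vector3.Dot(dir, q) * invDet;
        if (v < 0f || u + v > 1f) return false;

        float t = Vector3.Dot(e2, q) * invDet;
        if (t < 0f) return false;                  // hit is behind the ray

        // Exact point inside the triangle, not snapped to any vertex.
        hitPoint = (1f - u - v) * a + u * b + v * c;
        return true;
    }
}
```

So no snapping to vertices or edges is needed: the brush cursor can sit anywhere on the triangle's surface.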

Cleaning up the menu system

Also, I've been fiddling with the menu system, trying to make it easier to use. This is not as big a priority as the terrain editor, but some of its components will be used in the GUI anyway. The menu system is a mess of sprite batching, quad rendering, and skin objects with optional XML to load. All Menu components are drawn pretty much outside the context of the Meteor Engine. Not good for tight integration, you say, but since they are just 2D sprites, that approach will have to do for the time being. Ideally, the best way to make interactive menu movements and events is with scripting, but I'm not ready to deal with the added hassle of a scripting engine. I'm fine with data-driven behaviors for now.

While working on this, I renamed the abstract GameScreen class to ScreenElement. The new name makes more sense because I tend to think of "the screen" as the program's window where everything is drawn, with several of these elements coming together to fill it up. I'm deciding whether to make each text element and button its own ScreenElement, complete with transition animations.

Instanced models and interactions

This one has been on my list for a while and has more to do with the graphics engine itself: how to interact with specific instances of a model. Right now they have no IDs; they're just numbers in a list. Great for stuff you want to set and forget, but if you want to make a hundred boxes and have them all interact as physical objects, there has to be a way to keep updating all of their positions.

I knew that way back when, I did get physics working with instanced objects, and it's a good thing I still have the project. One of my first 3D XNA projects was a test program where you control a ball, similar to Marble Blast or Super Monkey Ball, subject to the laws of physics. It would spit out five boxes in different directions, which then became part of the world. It was also my first time using BEPU Physics, and thanks to its straightforward sample demo, I got it working in my program quickly.

Looking back at the code, I remembered how I accomplished this: I basically fed the physics Entity data into my drawing code and always stored it alongside a Model. Straightforward for a simple game, but now I want to keep my graphics code and physics code separate. I can't simply write a function that takes both a physics Entity and a Model mesh to create a new instance, because then the engine would have to be aware of physics-related classes. So I will either make a wrapper class in the game to associate the Entities with the Models, or write a class extension to the engine. The class will also need to keep pumping the Entity data into the renderer to update it every frame.
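A sketch of the game-side wrapper option, with MeshInstance standing in for whatever per-instance handle the engine ends up exposing (a hypothetical name, not an actual engine class):

```csharp
// Game-side glue: pairs a BEPU physics body with a renderer instance,
// so the engine itself never has to reference BEPU types.
public class PhysicsModelLink
{
    private readonly BEPUphysics.Entities.Entity body;
    private readonly MeshInstance instance;   // hypothetical engine handle

    public PhysicsModelLink(BEPUphysics.Entities.Entity body, MeshInstance instance)
    {
        this.body = body;
        this.instance = instance;
    }

    // Called once per frame from the game's update loop: pump the
    // physics transform into the renderer's copy of the instance.
    public void Sync()
    {
        instance.WorldTransform = body.WorldTransform;
    }
}
```

The game keeps a list of these links and calls Sync() on each one per frame; the hundred physical boxes then stay a plain instanced draw on the engine side while BEPU owns their motion.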

That's a lot of stuff to think about, but writing it out here helps me plan it. There have got to be other programmers out there who tend not to pre-visualize their projects for one reason or another, but somehow follow through with them anyway.
