Search Results

There were 532 results tagged with game

  1. Narrative-Gameplay Dissonance

    The Problem

    Many gamers have experienced the scenario where they must sacrifice their desire to roleplay in order to optimize their gameplay ability. Maybe you betray a friend with a previously benevolent character or miss out on checking out the scenery in a particular area, all just to get that new ability or character that you know you would like to have for future gameplay.

    The key problem here is one of Narrative-Gameplay Dissonance. The game's immersion is shattered when you are forced to confront the realities that...

    1. the game has challenges.
    2. it is in your best interest to optimize your character for those challenges.
    3. it may be better for you, the player, rather than you, the character, to choose one gameplay option over another despite the narrative baggage it carries.

    What To Do...

    One of the most important elements of any role-playing game is the sense of immersion players have. An experience can be poisoned if the game doesn’t have believability, consistency, and intrigue. As such, when a player plays a game that is advertised as having a strong narrative, there is an implied relationship between the narrative designer and the player. The player agrees to invest their time and emotions in the characters and world. In return designers craft an experience that promises to keep them immersed in that world, one worth living in. In the ideal case, the player never loses the sense that they are the character until something external jolts them out of flow.

    To deal with the problem we are presented with, we must answer a fundamental question:
    Do you want narrative and gameplay choices intertwined such that decisions in one domain preclude a player’s options in the other?

    If you would prefer that players make narrative decisions for narrative reasons and gameplay decisions for gameplay reasons, then a new array of design constraints must be established.
    • Narrative decisions should not...
      • impact the types of gameplay mechanics the player encounters.
      • impact the degree of difficulty.
      • impact the player’s access to equipment and/or abilities.
    • Gameplay decisions should not...
      • impact the player's access to characters/environments/equipment/abilities.
      • impact the direction of plot points, both minor and major.
    Examples of these principles in action include The Witcher 2: Assassins of Kings and Shadowrun: Dragonfall.

    In The Witcher 2, I can go down two entirely distinct narrative paths, and while the environments/quests I encounter may be different, I will still encounter...

    1. the same diversity/frequency of combat encounters and equipment drops.
    2. the same level of difficulty in each level's challenges.
    3. the same quality of equipment.

    In Shadowrun, players can outline a particular knowledge base for their character (Gang, Street, Academic, etc.) that is independent of their role or abilities. You can be a spirit-summoning Shaman who knows about both street life and high society. The narrative options presented to players thus hinge on a narrative decision made at the start, rather than on gameplay decisions that determine what skills/abilities they can acquire.


    To be fair, there are a few caveats to these constraints; it can be perfectly reasonable for a roleplaying decision to affect the game mechanics. One example would be if you wanted to pull a Dark Souls and implement a natural game difficulty assignment based on the mechanics your character exploits. In Dark Souls, you can experience an “easy mode” in the form of playing as a mage. Investing in range-based skills with auto-refilling ammo fundamentally makes the game easier to beat than short-range skills that involve more risk. It is important to note, however, that the game is still very difficult to beat, even with a mage focus, so the premise of the series' gameplay (“Prepare to Die”) remains in effect despite the handicap.

    Another caveat scenario is when the player makes a decision at the very beginning of the game that impacts which portions of the game they can access or which equipment/abilities they can use. Star Wars: The Old Republic has drastically different content and skills available based on your initial class decision. In this case, you are essentially playing a different game, but with similar mechanics. In addition, those mechanics are independent of one another; it is not as if choosing to be a Jedi in one playthrough somehow affects your options as a Smuggler the next go-around. There are two dangers inherent in this scenario, though. Players may become frustrated if they can reasonably see two roles having access to the same content, but are limited by these initial role decisions. If different "paths" converge into a central path, players may also dislike facing a narrative decision that clearly favors one class over another in a practical sense, reducing the decision to a mere calculation.


    Should you wish to avoid such dissonance, here are some suggestions for particular cases that might help ensure that your gameplay and narrative decisions remain independent of each other.

    Case 1: Multiple Allied or Playable Characters

    Conduct your narrative design such that the skills associated with a character are not directly tied to their nature, but instead to some independent element that can be switched between characters. The goal here is to ensure that a player is able to maintain both a preferred narrative state and a preferred gameplay state when selecting skills or abilities for characters and/or selecting team members for their party.


    The skills associated with a character are based on weapon packs that can be swapped at will. The skills for a given character are completely determined by the equipment they carry. Because any character can then fill any combat role, story decisions are kept independent from gameplay decisions. Regardless of how I want to design my character or team, the narrative interaction remains firmly in the player's control.

    Case 2: Branching Storyline

    Design your quests such that…

    1. gameplay-related artefacts (either awarded by quests or available within a particular branching path) can be found in all paths/questlines so that no quest/path is followed solely for the sake of acquiring the artefact. Or at the very least, allow the player to acquire similarly useful artefacts so that the difference does not affect the player’s success rate of overcoming obstacles.
    2. level design is kept unique between branches, but those paths have comparable degrees of difficulty / gameplay diversity / etc.
    3. narrative differences are the primary distinctions you emphasize.


    I’ve been promised a reward by the mayor if I can solve the town’s troubles. A farmer and a merchant are both in need of assistance. I can choose which person to help first. With the farmer, I must protect his farm from bandits. With the merchant, I must identify who stole his merchandise. Who I help first will have ramifications later on. No matter what I do, I will encounter equally entertaining gameplay, the same amount of experience, and the same prize from the mayor. Even if I only had to help one of them, I should still be able to meet these conditions. I also have the future narrative impacted by my decision, implying a shift in story and/or level design later on.

    Case 3: Exclusive Skill-Based Narrative Manipulation

    These would be cases where your character can exclusively invest in a stat or ability that gives them access to unique dialogue choices. In particular, if you can develop your character along particular "paths" of a tree (or some equivalent exclusive choice) and if the player must ultimately devote themselves to a given sub-tree of dialogue abilities, then there is the possibility that the player may lose the exact combination they long for.

    Simply ensure that the decision of which super-dialogue-ability can be used is separated from the overall abilities of the character. Therefore, the player doesn't have to compromise their desire to explore a particular path of the narrative simply because they wish to also use particular combat abilities associated with the same sub-set of skills. I would also suggest providing methods for each sub-tree of skills to grant abilities which eventually bring about the same or equivalently valuable conclusions to dialogue decisions.


    I can lie, intimidate, or mind control people based on my stats. If I wish to fight in melee, then I really need high Strength. In other games, that might imply an inefficiency in mind control and an efficiency with intimidation (but I really want to roleplay as a mind-hacking warrior). Also, there are certain parts of the game I want to experience that can only be reached by selecting mind-control-associated dialogue options. Thankfully, I actually do have this option. And even if I had the option of using intimidation or lying where mind control is also available, regardless of my decisions, my quest will be completed and I will receive the same type of rewards (albeit with possibly different narrative consequences due to my method).


    If you are like me and you get annoyed when narrative and gameplay start backing each other into corners, then I hope you’ll be able to take advantage of these ideas. Throw in more ideas in the comments below if you have your own. Comments, criticisms, suggestions, all welcome in further discussion. Let me know what you think. Happy designing!

  2. Need help creating sport-themed game.

    Team Wanted

    I know, I know... 17 years old, "what the hell are you doing even trying to make a game?" Well, you see, I only jumped on this boat a few months ago, did a little research, and BOOM! An idea crashed into my head.

    I've been trying to shake it, but it looks like the bug bit me, and bit me hard. The reason I can't shake this is because (yeah, like all game designers) I think this will be something extraordinary and successful. I've been looking for a team to help me on this designing and creating journey, as I have only a limited amount of skill.

    So yeah, if anyone is interested, you know what to do... (that "what" is to comment)


    • Jul 27 2015 08:29 AM
    • by Tank16
  3. Help me out with my GameDev dream!


    My name is Frantisek Hetes. I'm an 18-year-old high school student who is passionately interested in game development in his free time.


    Hey guys, I'm making my dream come true: I got accepted to a summer academy (makeschool.com/summer-academy), but I lack funding. If you know someone who can help me out, or you have time to help a good cause yourself, please check me out here and share among your friends. Love you all; this means the world to me.

    everything is explained in the link - http://t.co/xpvtrQEjz3

    I'm using Tilt to crowdfund, which means that until I tilt, no money will be taken from your bank account.


  4. Looking to join/form a game dev team

    passionate about programming, looking to start building a portfolio

    Hi there. I have been an amateur programmer for about 10 years, mainly Java. I've dabbled a bit with C++, a bit more with an Arduino, and a tiny bit with HTML. I've made a few **simple** console-style games using arrays of chars to represent walls, monsters, etc. Currently I'm trying to wrap my head around Java Swing.

    I've loved programming for years and went to college for a bit for it. I've been in the navy working with nuclear reactors, though, and haven't had a ton of time to code until very recently.

    Anyways, that's me. Sorry if I'm posting in the wrong spot. I'm in a hurry at the moment but just had to start trying to join a team. My programming skills are probably minimal, but I am very determined. I would prefer to join or start a team working on a 2D RPG-style game, but I'm open to many options; games are not a requirement, just a preference. Shoot me a line if you're interested. Thanks for reading :D

  5. The OpenGL State Conundrum


    Over the years, OpenGL has accumulated a colossal amount of global state. Rather than taking heaps of parameters for each and every function call, OpenGL relies on a large amount of parameters being *bound* or *set* to context-wide binding points before each call. These range from bound programs, buffers, textures and samplers, to fixed-function blending, depth testing and framebuffers.

    From my travels around the wider regions of the Internet, I have found that most games and their respective engines use some form of *state management system*. In my opinion, this layer should do three important things:

    • Check that the required state changes are made before relevant OpenGL commands. (e.g. glBindBuffer before glBufferData)
    • Ensure that those state changes aren't performed more than once.
    • Provide type-safe enums and object handles (possibly wrapped inside objects if you wish to go fully OOP)

    In this blog post I'll aim to outline some of the issues with OpenGL's state machine architecture and provide some different solutions for robust state management.

    The Problem With Global State

    It is important to understand the problem of hidden global state, and thus I have prepared a small code snippet that involves creating a Vertex Buffer and a Vertex Array and linking them together. I'll use this code sample as a basis for my gripes about the current state of this issue (pardon the pun).

    GLuint vertexArrayHandle;
    glGenVertexArrays(1, &vertexArrayHandle);
    glBindVertexArray(vertexArrayHandle);
    GLuint vertexBufferHandle;
    glGenBuffers(1, &vertexBufferHandle);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBufferHandle);
    glBufferData(GL_ARRAY_BUFFER, sizeof(bufferData), bufferData, GL_STATIC_DRAW);
    // The linking happens here: with the vertex array bound, the attribute
    // pointer captures the currently bound GL_ARRAY_BUFFER.
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    For the average OpenGL programmer, this is dead-easy stuff. However if we take a step back for a moment to look at the workings inside OpenGL's big black stateful box, we see a labyrinth of issues.

    Bind to Use and Edit

    Bind-to-edit is a classic OpenGL paradigm employed by many of OpenGL's objects. Instead of referencing the object directly within functions like glBufferData and glUniform*, one must first bind it to one of several binding points using functions like glBindBuffer and glUseProgram.

    GL_ARRAY_BUFFER is one such binding point, and is used in the above code snippet.

    The problem with the bind-to-edit system is that we have introduced a bunch of global variables into our code. What global variables, you may ask? OpenGL hides them from us, but every call to glBindBuffer or glUseProgram is essentially changing a global variable (or more accurately a variable global to the current OpenGL context).

    Singletons are supposedly 'better' than static methods but are in fact crude masks for the real evil that lies within. The same applies to OpenGL's state. The fancy calls to glBind*** may look nicer than global variables, but they are really the same evil.

    Direct State Access

    Direct State Access is an interesting extension that has just recently been integrated into core OpenGL 4.5. It aims to solve some of the above issues through creating new OpenGL functions such as glNamedBufferData which take an explicit object handle instead of relying on the binding and unbinding.

    Although in the long term this will fix many of the issues that plague OpenGL, at the moment this extension/4.5 feature is too new to be used in production.

    Towards a Type-Safe Wrapper

    One way to tame this global state, as the wish list above suggests, is to wrap object handles inside type-safe classes so that buffers and vertex arrays are referenced through objects rather than raw GLuints and context-wide binding points:

    VertexArray container = new VertexArray();
    VertexBuffer buffer = new VertexBuffer(VertexBuffer.Type.ARRAY_BUFFER, VertexBuffer.Usage.STATIC_DRAW);
    container.attach(buffer, new VertexAttribute[] { new VertexAttribute(0, 3, Type.FLOAT, 0, 0) });



  6. Looking for an Artist...

    I need a dedicated artist to help me with a game I'm making.

    Someone who can devote time to this project and who preferably has experience with 2D sprites.

    If interested, shoot me an email and we'll discuss specifics.
    Note, I will ask for examples of your work just so I know what I'm dealing with.

    Be advised, this is not a paying project. This is simply for fun and experience.

    Thank you

  7. [iOS] One or Two?

    Hello fellow gamers,

    I'm new here but I'd like to announce the release of my first game. It was made using Apple's new programming language Swift. It took me around 3-4 months to plan, design and develop it. It would be awesome if you could try it and give me some feedback or suggestions on how to improve it.
    I'm also glad to answer any questions regarding its development.


    One or Two is a fun and exciting Indie game challenging your reaction.

    The gameplay is pretty simple, but it's hard to master. Take a look at the object in the circle and find out how many objects of that kind you can find in the game area. Then press the corresponding button: One or Two?

    There are 100 levels ready to be played, each harder than the previous one. Try to be as fast as possible without making any mistakes. Set a new high score and discover all the elements!

    One or Two offers a gorgeous user interface: hand-picked colors and clean shapes create a unique style. You can also choose to turn on Dark Mode, providing a new experience and making playing at night even better.

    Play the game and compare your score to players all around the world. Additionally, One or Two offers several achievements for you to unlock. There are even some hidden ones that can't wait to be discovered. Share your high score on Facebook or Twitter and challenge your friends!

    What level can you reach?
    Download One or Two right now and find out!

    • Apr 27 2015 07:10 AM
    • by Vyax
  8. Awesome sniper shooter game Stick Squad 3

    Awesome stick sniper shooter game out on Android

    In their ongoing stick-shooter series, Canadian indie developer Brutal Studio now brings you the third episode, Stick Squad - Modern Shooter, for Android phones and tablets. The game has 20 new main missions and over 60 objectives in a brand new location. There are new bad-ass guns (handguns, an assault rifle, and of course sniper rifles), a shooting range, and upgrades as always. Some new missions will test your sniping skills, requiring you to calibrate your gun to compensate for wind and distance. One of the best stick shooter series yet!

    Stick Squad is the studio's most successful series to date and so plans are already in the works to make several more of them. For iOS players, the game should be out in the next few days for iPhones and iPads. Look out for more fun and excitement from Brutal Studio!

    Download now on Android: https://play.google.com/store/apps/details?id=air.StickSquad3Android



    Brutal Studio

  9. Need voice acting for your game?

    If you need voice-over work but don't know where to look... then stop: you have found the right character voice actor.

    My name is Timothy Banfield, and I have been in the voice-over industry for over nine years now. I have voiced everything from games to commercials, ads, and shows. I have professional acting training and years of experience in character acting and performing. I can bring your projects to life with no problem. I have a very warm, natural, well-spoken "Guy Next Door" voice, which I am most known for. I am also a talented impressionist who can do over 200 impressions, meaning I have a lot of range and can bring my voice even closer to what you want. My impersonation talents landed me a spot on the Tonight Show back in 2011. Having me put my voice to your project is something you won't regret. Also, please rate my audition; I would very much appreciate it. Thank you for your time.


  10. Arcade Machines and Gaming Library Pigaco - Devlog #0: Introduction & Architecture

    The very first arcade consoles had only a single game on each machine. Later, as the hardware evolved, software grew alongside the better machines, and different games on the same screen became possible. During this development, programmers needed to solve tough challenges like making games compatible with processors other than the ones they were designed to run on, or sharing processor time between running programs. With Linux, the groundwork for these problems is done, and we (luckily) do not think about processor instructions very often anymore when creating games, instead focusing more on the games themselves.

    With the rise of home gaming consoles, arcade machines quickly began to fade, and arcade shops slowly became rarer and rarer. Developers made games for the new platforms, and the money was definitely in the PC and console market. This still holds true to this day, but there has not been any new development in public arcade machines and the software required to create such a machine. Big projects like EmulationStation aim to replace bulky machines, making arcade gaming more suitable for home use and combining multiple emulators into a single, good-looking UI. Projects like OUYA want to create a new home console like the PlayStation or Xbox systems, but cannot be used to host a real arcade machine in a store in your neighborhood. This goal needs a project of its own.

    Enter Pigaco

    The pigaco project (short for Raspberry Pi Gaming Console) wants to create software to make building arcade-style games on real arcade machines easier and to streamline setting up machines in public places. The Raspberry Pi in the name is a remnant of the past and no longer applies to the project; it was only chosen because a Raspberry Pi was used to develop its networked input system.

    Libraries and Tools

    • libpiga: C++ library to make interfacing with pigaco and its components easy. (C++) (Already working)
    • pigaco: Application which lists games and applications on the console and provides shared objects (C++/SDL2) (Already working)
    • networked client: Minimal application to send inputs over the network to the console (C++) (Already working)
    • pihud: GUI-Library which has been integrated with arcade control options in mind. (C++/SDL2) (Already working, but with only a small widget collection)
    • consolemgr: Manager interface to control multiple consoles with a simple, graphical UI. (Qt/QML) (Recently started)
    • Smartphone App: Based on consolemgr - used to control consoles and send arcade inputs to a console (more players) (Qt/QML) (not started yet)
    • BomberPi: Sample game (bomberman clone), which uses the piga library to get its inputs. (C++/SDL2)(Already playable)
    • Hosts (Raspberry Pi GPIO, PC keyboard, ...): Sources of inputs to process (C++/anything that can receive inputs) (RPi GPIO and keyboard already working)

    Repository and Resources

    The repository can be found on GitHub. A daily generated online version of the documentation can be found here. I advise against using the APIs in their current form for serious development because they may (and most likely will) change in the near future. Any contribution, pointer, or comment which brings us closer to this far-fetched goal of easier arcade machines is welcome, though, and I am looking forward to seeing some of you in my messages.


    The project is structured into 3 parts:

    1. C++ library to communicate between the programs over shared memory and the network (libpiga)
    2. Host program to handle inputs, platform-specific code, and start other applications (pigaco)
    3. Input-hosts which get linked dynamically by pigaco or connect themselves to pigaco over the network to read inputs from buttons and joysticks, and to send commands to pigaco

    To get a better understanding of how the system is built, look at the following image. It illustrates how the parts work together in a better structure than I can express in the text. (taken from the architecture documentation page)



    The library handles communication between games/applications and the host program pigaco. This works using shared memory (boost interprocess), the event system of the host computer (keyboard simulation), and the network (over enet; to communicate with external input sources).

    The library is built with C++ and should be usable from any C++-compatible engine. If the engine is not compatible with the language, or the developers do not want to modify their product to fit into the pigaco system, the arcade machine maintainer can specify keyboard mappings (and soon mouse mappings too) to be mapped to the game inputs sent from the shared library. This can be accomplished with configuration files in the YAML format in the game's directory, which look similar to the one from BomberPi.

    The documentation of the library is not yet finished, the most complete one can be found with the BomberPi game. Also, the class which is used to integrate a game directly over shared memory with pigaco is piga::Interface, which is already documented a bit.


    Pigaco manages games, shows a game chooser dialog, runs games, receives inputs from its input hosts, and processes packets coming from the network. To save resources, it shuts down its own rendering while a game is running and resumes it afterwards. The GUI is self-made to fit the multi-user approach of the input system, whose requirements are not really met by other UI frameworks. The program uses SDL2 for the graphics and is not very pretty yet, but it already works well, and the moving background is better than nothing.

    If you want to see a screenshot of how it looks, you can look at the following picture. If you don't care about the looks and also think that they will get better once the backend is working fine, just ignore the graphical horror contained within it.


    Most of the managing stuff is done in piga::Host, but the documentation for this class still has a long way to go before becoming anything useful.

    Input Hosts and Networked Clients

    Input hosts are shared libraries with a common interface (a few functions defined in the host header file), which are loaded by pigaco. It then loops through each loaded host and checks for new inputs for every player, which will then be sent to the running game/application.

    Networked clients, on the other hand, control the console over the network from a remote location. Maintainers of arcade machines can specify whether a console can be accessed by this network protocol and with what password. They can also allow networked inputs without passwords, so that more players can join a single game using their smartphones as inputs for the console.

    The smartphone app and the manager application are not implemented yet (sadly), but networked inputs are working perfectly well already. The networked client is the best example of a working implementation for this.

    Motivation and Future Goals

    My motivation to start this project is easily explained: I always wanted to build games and tools that can be enjoyed by other people. I already created smaller projects which were liked by my friends and colleagues, and the next step for me was to create something bigger. The piga library can be used by everyone who enjoys experimenting with new projects, and the console which I'm currently building myself will surely be fun for my friends and me too. With the upcoming port of my old game SFML-Sidescroller, more piga-compatible games will follow.

    In the near future, I'd like to finish building my own arcade machine and improve the documentation of piga. Also, the manager application consolemgr should be ready in the next few weeks, so I can control the console from my laptop or phone and develop games on my machine in a better way. Once the whole system works a little better, I would like to post a video on YouTube about the console, along with some footage of people playing games on the machine.

    A far-fetched goal is its own networked service with gamer authentication and multiplayer gameplay over the network (this project would then be called Pine - like the tree, but as an abbreviation of Pigaco Network), but this sounds way too ambitious to be achieved someday. It's just good to always have something even greater and harder to accomplish, so that I can work towards it and learn new things along the way. Do not count on that feature; it is just a goal way down on the roadmap.

    Along the way I want to write some articles about the development of the libraries and about the methods used to write the backend for the console here on gamedev.net. I found some articles here before, and now I feel like I can share my findings here too, instead of just silently pushing my work onto GitHub without telling anybody in the outside world. I hope somebody will find some useful information in these articles, or will even become interested in the project and want to collaborate or make arcade games!

  11. In Fight - 3D Online Fighting

    “In Fight” is a fighting game that can be launched directly from the browser. No prior registration or client download is needed, since the game is developed for social network platforms like Facebook. This game reflects everything that is meant by the word “fighting”. Be ready to smash real, human-controlled enemies in a 3D world.

    Inviting friends and challenging them to a duel is possible through social networks. No complicated actions are needed – everything is simple and straightforward.

    There are things that make “In Fight” special. One of them is a level-based system that gives players an opportunity to improve their character with different techniques and deadly combinations, an experience well known from RPGs. The variety of martial arts will make your character truly unique and sometimes even undefeatable.

    We took care with character customization. Players can choose between 18 models (9 male and 9 female) that can be dressed with different items, beginning with common jeans and ending with a variety of useful costumes. Naturally, equipment will boost the character and make it stronger. It is also possible to tweak the character's appearance: sunglasses, capes, hats – you will definitely find something to suit your character's style.

    Our game supports more than just the keyboard. Players can plug their beloved controller into the computer and enjoy the game as if they were playing on a console.


    • Feb 18 2015 11:45 PM
    • by Aram_B
  12. Game Developing Team (Steam Group)

    Hello designers

    This is an organized Steam group for all game designers. If you haven't heard of Steam, it is a marketplace where you can get software and games. The recommended software for beginners is Blender. Here is a link to the software: http://www.blender.org/. Positions in the game developing team are coders, animation specialists, and sculptors. When you have downloaded Steam, you can join the group here: http://steamcommunity.com/groups/bjgamedevteam.


    I will be taking care of the money by using PayPal. The donations will go towards better software and equipment. Donations are very much appreciated. Here is my PayPal if you want to support this: just go to Send at the top of the screen and type in jaydoncrenshaw@gmail.com or my phone number: 331-302-1746. The group comes together and brainstorms ideas for games, and a specific group of people then get into a Steam call and work on the game.


    : none yet
    Sunday: none, Monday: Discussion, Tuesday: Work day, Wednesday: none, Thursday: Work day, Friday: Discussion, Saturday: none.
    Thank you and I hope you contribute to the steam group if you like.

  13. Visual Tools For Debugging Games


    How much of your time do you spend writing code? How much of your time do you spend fixing code?

    Which would you rather be doing? This is a no-brainer, right?

    As you set out to develop a game, having a strategy for how you are going to illuminate the "huh?" moments well before you are on the 15th level is going to pay dividends early and often in the product life cycle. This article discusses strategies and lessons learned from debugging the test level for the 2D top down shooter, Star Crossing.


    Intrinsic Tools

    Before discussing the tools you do not have by default, it seems prudent to list out the ones you will generally have available in most modern development tool chains.
    • Console output via printf(...). With more advanced loggers built into your code base, you can generate oceans worth of output or a gentle trickle of nuanced information as needed. Or you can just have it print "here 1", "here 2", etc. To get output, you have to actually put in code just for the purpose of outputting it. This usually starts with some basic outputs for things you know are going to be helpful, then degenerates into 10x the number of logging messages for specific issues you are working on.
    • Your actual "debugger", which allows you to set breakpoints, inspect variables, and gnash your teeth when you try to have it display the contents of a std::map. This is your first line of defense and probably the one you learned to use in your crib.
    • A "profiler" which allows you to pinpoint where your code is sucking down the frame rate. You usually only break this out (1) when things go really wrong with your frame rate, (2) when you are looking for that memory leak that is crashing your platform, or (3) when your boss tells you to run before shipping even though the frame rate is good and the memory appears stable, because you don't really know if the memory is stable until you check.
    All these tools are part of the total package you start out with (usually). They will carry you well through a large part of the development, but will start to lose their luster when you are debugging AI, physics, etc. That is to say, when you are looking at stuff that is going on in real time, it is often very hard to put the break point at the right place in the code or pull useful information from the deluge of console output.

    Random Thoughts

    If your game has randomness built into it (e.g. random damage, timeouts, etc.), you may run into serious trouble duplicating failure modes. Someone may even debate whether the randomness is adding value to your game because of the headaches associated with debugging it. As part of the overall design, a decision was made early on to enable not-so-random-randomness as follows:
    • A "cycle clock" was constructed. This is lowest "tick" of execution of the AI/Physics of the game.
    • The cycle clock was set to 0 at the start of every level, and proceeded up from there. There is, of course, the possibility that the game may be left running forever and overflow the clock. Levels are time limited, so this is not a concern here (consider yourself caveated).
    • A simple static class provided the API for random number generation and setting the seed of the generator. This allowed us to put anything we want inside of the generation so the "clients" did not know or care what the actual "rand" function was.
    • At the start of every tick, the tick value was used to initialize the seed for the random number system.
    This allowed completely predictable random number generation for the purposes of debugging. This also has an added benefit, if it stays in the game, of the game evolving in a predictable way, at least at the start of a level. Once the user generates their own "random input", all bets are off.
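    The approach above can be sketched as a small static facade over a standard generator. This is an illustrative reconstruction, not Star Crossing's actual code; the class and method names are assumptions:

    ```cpp
    #include <cstdint>
    #include <random>

    // Hypothetical static RNG facade: clients never see the underlying
    // generator, so it can be swapped without touching call sites.
    class GameRandom {
    public:
        // Re-seed at the start of every AI/physics tick with the tick index,
        // making each tick's "random" numbers fully reproducible.
        static void SeedForTick(uint32_t tick) { Engine().seed(tick); }

        static uint32_t NextUInt() {
            return static_cast<uint32_t>(Engine()());
        }

        static int RangedInt(int lo, int hi) {
            std::uniform_int_distribution<int> dist(lo, hi);
            return dist(Engine());
        }

    private:
        static std::mt19937& Engine() {
            static std::mt19937 engine;
            return engine;
        }
    };
    ```

    Seeding with the same tick value reproduces the same sequence, which is exactly what makes failure modes repeatable during debugging.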

    Pause, Validate, Continue

    The screenshot below shows a scene from the game with only the minimal debugging information displayed, the frame rate.


    The first really big problem with debugging a real-time game is that, well, it is going on in real-time. In the time it takes you to take your hand off the controls and hit the pause button (if you have a pause button), the thing you are looking at could have moved on.

    To counter this, Star Crossing has a special (configurable) play mode where taking your finger off the "Steer" control pauses the game immediately. When the game is paused, you can drag the screen around in any direction, zoom in/out with relative impunity, and focus in on the specific region of interest without the game moving on past you. You could even set a breakpoint (after the game is paused) in the debugger to dig deeper, or look at the console output, which is preferable to watching it stream by.

    A further enhancement of this would be to add a "do 1 tick" button while the game was paused. While this may not generate much motion on screen, it would allow seeing the console output generated from that one cycle.
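    Such a pause / single-step control could be sketched as below. This is a hypothetical helper, not taken from the actual code base; the main loop would consult ShouldTick() before running each cycle:

    ```cpp
    // Hypothetical pause / single-step controller for the main loop.
    class SimControl {
    public:
        void SetPaused(bool paused) { m_paused = paused; }

        // The "do 1 tick" button sets this flag.
        void RequestSingleTick() { m_stepOnce = true; }

        // Called once per frame: returns true if a logic tick should run.
        bool ShouldTick() {
            if (!m_paused) return true;
            if (m_stepOnce) {        // paused, but a single step was requested
                m_stepOnce = false;  // consume the request: exactly one tick
                return true;
            }
            return false;
        }

    private:
        bool m_paused   = false;
        bool m_stepOnce = false;
    };
    ```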

    The frame rate (1) is ALWAYS displayed in debug builds even when not explicitly debugging. It might be easy to miss a small slowdown if you don't have the number on the screen. But even a small drop means that you have exhausted the available time in several frames (multiple CPU "spikes" in a row) so it needs attention.

    The visual debugging information can be turned on/off by a simple toggle (2). So you can leave it on, turn it on for a quick look and turn it off, etc. When it is on, it drops the frame rate, so it usually stayed off unless something specific was being looked at. On the positive side, this had the effect of slowing down the game a bit during on-screen debugging, which allowed seeing more details. Of course, this effect could also be achieved by slowing down the main loop update.

    Debug Level 1

    The screen shot below shows the visual debugging turned on.



    At the heart of the game is a physics engine (Box2D). Every element in the game has a physical interaction with the other elements. Once you start using the physics, you must have the ability to see the bodies it generates. Your graphics are going to be on the screen but there are physics elements (anchor points, hidden bodies, joints, etc.) that you need to also see.

    The Box2D engine itself has the capacity to display the physics information (joints, bodies, AABB, etc.). It had to be slightly modified to work with Star Crossing's zooming system and also to make the bodies mostly transparent (1). The physics layer was placed low in the layer stack (and it could be turned on/off by header include options). With the graphics layer(s) above the physics, the alignment of the sprites with the bodies they represented was easy to check. It was also easy to see where joints were connected, how they were pulling, etc.


    Star Crossing is laid out on a floating point "grid". The position in the physics world of all the bodies is used extensively in console debug output (and can be displayed in the labels under entities...more on this later). When levels are built, a rough "plan" of where items are placed is drawn up using this grid. When the debug information is turned on, major grid locations (2) are displayed. This has the following benefits:
    • If something looks like it is cramped or too spaced out, you can "eye ball" guess the distance from the major grid points and quickly change the positions in the level information.
    • The information you see on screen lines up with the position information displayed in the console.
    • Understanding the action of distance based effects is easier because you have a visual sense of the distance as seen from the entity.

    Entity Labels

    Every "thing" in the game has a unique identifier, simply called "ID". This value is displayed, along with the "type" of the entity, below it.
    • Since there are multiple instances of many entities, having the ID helps when comparing data to the console.
    • The labels are also present during the regular game, but only show up when the game is paused. This allows the player to get a bit more information about the "thing" on the screen without an extensive "what is this" page.
    • The labels can be easily augmented to display other information (state, position, health, etc.).
    • The labels scale in size based on zooming level. This helps eye-strain a lot when you zoom out or in.
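    The zoom-compensating label scale in the last bullet boils down to an inverse relationship. A tiny sketch, assuming a cameraZoom factor where values above 1 mean "zoomed in" (the names are illustrative):

    ```cpp
    // Keep a debug label roughly constant in on-screen size regardless of
    // camera zoom: shrink it in world units when zoomed in, grow it when
    // zoomed out. Assumes cameraZoom > 0.
    inline float LabelScaleForZoom(float cameraZoom, float baseScale = 1.0f) {
        return baseScale / cameraZoom;
    }
    ```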

    Debug Level 2

    While the player is able to move to any position (that the physics will allow), AI driven entities in the game use a combination of steering behaviors and navigation graphs to traverse the Star Crossing world.


    Navigation Grid

    The "navigation grid" (1) is a combination of Box2D bodies laid out on a grid as well as a graph with each body as a node and edges connecting adjacent bodies. The grid bodies are used for collision detection, dynamically updating the graph to mark nodes as "blocked" or "not blocked".

    The navigation grid is not always displayed (it can be disabled...it eats up cycles). When it is displayed, it shows exactly which cells an entity is occupying. This is very helpful for the following:
    • Watching the navigation path generation and ensuring it is going AROUND blocked nodes.
    • The path following behavior does a "look ahead" to see if the NEXT path edge (node) is blocked before entering (and recomputes a path if it is). This took a lot of tweaking to get right and having the blocked/unblocked status displayed, along with some "whiskers" from the entity really helped.
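    The look-ahead check in the last bullet can be sketched as follows; NavGraph and the node indexing here are assumptions for illustration, not the game's real data structures:

    ```cpp
    #include <deque>
    #include <vector>

    // Illustrative stand-in for the navigation graph: a per-node blocked
    // flag, updated dynamically as grid bodies detect collisions.
    struct NavGraph {
        std::vector<bool> blocked;
        bool IsBlocked(int node) const { return blocked[node]; }
    };

    // Returns true when the follower should recompute its path because the
    // NEXT node on its current path (not the one it occupies) is blocked.
    bool NeedsRepath(const NavGraph& graph, const std::deque<int>& path) {
        if (path.size() < 2) return false;  // nothing ahead to check
        return graph.IsBlocked(path[1]);    // peek at the next node
    }
    ```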

    Navigation Grid Numbers

    Each navigation grid node has a label that it can display (2). These numbers were put to use as follows:
    • Verifying that the path the AI is following matches up with the grid by displaying the navigation graph index of the grid node. For example, an AI that must perform a "ranged attack" does this by locating an empty node a certain distance from the target (outside its physical body), navigating to that node, pointing towards the target, and shooting. At one point, the grid was a little "off" and the attack position was inside the body of the target, but only in certain cases. The "what the heck is that" moment occurred when it was observed that the last path node was inside the body of the target on the screen.
    • Star Crossing uses an influence-mapping based approach to steer between objects. When a node becomes blocked or unblocked, the influence of all blockers in and around that node is updated. The path search uses this information to steer "between" blocking objects (these are the numbers in the image displayed). It is REALLY HARD to know if this is working properly without seeing the paths and the influence numbers at the same time.

    Navigation Paths

    It is very difficult to debug a navigation system without looking at the paths that are coming from it (3). In the case of the paths from Star Crossing, only the last entity doing a search is displayed (to save CPU cycles). The "empty" red circle at the start of the path is the current target the entity is moving toward. As it removes nodes from its path, the current circle "disappears" and the next circle is left "open".

    One of the reasons for going to influence based navigation was because of entities getting "stuck" going around corners. Quite often, a path around an object with a rectangular shape was "hugging" its perimeter, then going diagonally to hug the next perimeter segment. The diagonal move had the entity pushing into the rectangular corner of the object it was going around. While the influence based approach solved this, it took a while to "see" why the entity was giving up and re-pathing after trying to burrow into the building.

    Parting Thoughts

    While a lot of very specific problems were worked through, the methods used to debug them, beyond the "intrinsic tools", are not terribly complex:

    1. You need a way to measure your FPS. This is included directly in many frameworks or is one of the first examples they give when teaching you how to use the framework.
    2. You need a way to enable/disable the debug data displayed on your screen.
    3. You need a way to hold the processing "still" while you can look around your virtual world (possibly poking and prodding it).
    4. You need a system to display your physics bodies, if you have a physics engine (or something that acts similar to one).
    5. You need a system to draw labels for "interesting" things and have those labels "stick" to those things as they move about the world.
    6. You need a way to draw simple lines for various purposes. This may be a little bit of a challenge because of how the screen gets redrawn, but getting it working is well worth the investment.
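    Item 6, the simple line drawer, is often implemented as a list of lines that is re-submitted every frame; a minimal sketch with assumed names:

    ```cpp
    #include <cstddef>
    #include <vector>

    struct DebugLine { float x1, y1, x2, y2; };

    // Hypothetical per-frame debug line list. Systems push lines during the
    // update; the renderer draws and clears them, so lines must be
    // re-submitted every frame, keeping them in sync with the simulation.
    class DebugDraw {
    public:
        void AddLine(float x1, float y1, float x2, float y2) {
            m_lines.push_back({x1, y1, x2, y2});
        }

        // Called after the scene is drawn; a real version would issue draw
        // calls for each line here before clearing.
        void Flush() {
            m_lines.clear();
        }

        std::size_t Count() const { return m_lines.size(); }

    private:
        std::vector<DebugLine> m_lines;
    };
    ```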

    These items are not a substitute for your existing logging/debugger system, they are a complement to it. These items are somewhat "generic". You can get a lot of mileage out of simple tools, though, if you know how to use them.

    Article Update Log

    30 Jan 2015: Initial release

  14. Are You Letting Others In?


    A good friend and colleague of mine recently talked about the realization of not letting others in on some of his projects. He expressed how limiting it was to try and do everything by himself. Limiting to his passion and creativity on the project. Limiting to his approach. Limiting to the overall scope and impact of the project. This really struck a chord with me as I’ve recently pushed to do more collaborating in my own projects. In an industry that is so often one audio guy in front of a computer, bringing in people with differing, new approaches is not only freeing, it’s refreshing.

    The Same Ol' Thing

    If you’ve composed for any amount of time, you’ve noticed that you develop ruts in the grass. I know I have. Same chord progressions. Same melodic patterns. Same approaches to composing a piece of music. Bringing in new people to help branch out exposes your work to new avenues. New opportunities. So, on your next project I’d challenge you to ask yourself – am I letting others in? Even to just evaluate the mix and overall structure of the piece? To review the melody and offer up suggestions? I’ve been so pleasantly surprised and encouraged by sharing my work with others during the production process. It’s made me a better composer, better engineer and stronger musician. Please note that while this can be helpful for any composer at ANY stage of development, it's most likely going to work best with someone with at least some experience and some set foundation. This is why I listed this article as "intermediate."

    Get Out of the Cave

    In an industry where so many of us tend to hide away in our dark studios and crank away on our masterpieces, maybe we should do a bit more sharing? When it’s appropriate and not guarded by NDA, of course! So reach out to your friends and peers. Folks that play actual instruments (gasp!) can see how to breathe life into your pieces and make suggestions as to how your piece can be stronger. More emotional. For example, I’d written out a flute ostinato that worked well for the song but was very challenging for a live player to perform. My VST could handle it all day… but my VST also doesn’t have to breathe. We made it work in a recording studio environment, but if I ever wanted to have that piece performed live, I’d need to rethink that part some.

    Using live musicians or collaborating can also be more inspiring and much more affordable than you might first think! Consult with folks who are talented and knowledgeable at production and mixing, because even the best song can suck with terrible production. I completely realize you cannot, and most likely WILL NOT, collaborate on every piece you do. But challenging yourself with new approaches and ideas is always a good thing. Maybe you’ll use them or maybe you’ll confirm that your own approach is the best for a particular song. Either way, you’ll come out ahead for having passed your piece across some people you admire and respect.

    My point?

    Music composition and production is a lifelong path. No one person can know everything. This industry is actually much smaller than first impressions suggest, and folks are willing to help out! Buy them a beer or coffee, or do an exchange of services. When possible, throw cash. Or just ask and show gratitude! It’s definitely worked for me and I think it would work for you as well. The more well versed you are, the better. It will never hurt you.

    Article Update Log

    28 January 2015: Initial release

    GameDev.net Soapbox logo design by Mark "Prinz Eugn" Simpson

  15. Game Programming Courses

    Rocking it – that is what most say about the gaming industry these days, and why not? Most gaming honchos are raking in the moolah like no man’s business. Technological advancements in gaming have changed the era, and across varied platforms too. Gaming is so different now from what it was back in the late 70s and early 80s. Those were the days when simple programs in plastic were doled out for entertainment. However, as time passed and with the introduction of smaller game consoles and microchips, the scene changed. With more and more developments in the making, there is ten times the student inflow wanting to make the most of popular Game Programming Courses.

    A better understanding

    For those who wish to study such courses, it is a must to know what they would have as a takeaway at the end of the day. The subject matter should be understood well, and even if it is a well-known institute, one needs to check with due diligence whether the learner's appetite will be satiated or not.
    The instructors or trainers in such courses need to be highly accredited with certifications and licenses, ideally someone who has gifted the world with their own gaming creativity and innovation too. Choose an institute that has professionals who were once pioneers of the domain; Singapore has many such institutes that hire and work with the same.

    Hands-on experience and a practical approach are a must, and students interested in Game Programming Courses for iPhone should check with ex-students or professionals of the college whether that is offered or not. It is important to get clarity on this point and also on the number of hours.

    The world out there is competitive and fierce, and there are institutes that take advantage of students by providing them with half-baked information. Don’t fall prey to such scamsters or frauds; do your homework and choose a reputed hub which will bring the best out of you. Remember, the boom has only just begun for game developers and the industry. It all depends on which path you want to take, so check in for Game Programming Courses for iPhone through colleges and learning hubs that indeed resonate with you, and not because a celebrity says so!

  16. Free to play Finger VS Guns now available on Android

    Finger VS Guns available now

    Canadian indie game developer Brutal Studio has released the awaited sequel to Finger VS Axes, now featuring guns! Finger VS Guns is the sequel to our successful action game but this time your finger will taste bullets! Fans of the first game will love this reloaded version, which features new and intense levels, new optional weapons and more humorous fun.


    From the creators of Stick Squad, Punch My Face and the Sift Heads brand, Brutal Studio brings you yet another original and unique game for your enjoyment... now available on all Android phones and tablets. The game is free to play but offers optional in-app purchases. Players can also pay a small fee to remove any intrusive ads from the game. A version for iPhones and iPads will be available in the next few weeks.

    Look out for more fun and excitement from Canadian developer Brutal Studio!

  17. How to create game rankings in WiMi5


    Each game created with WiMi5 has a ranking assigned to it by default. Using it is optional, and the decision to use it is totally up to the developer. It’s very easy to handle, and we can sum it up in three steps, which are explained below.

    1. Firstly, the setup is performed in the Ranking section of the project’s properties, which can be accessed from the dashboard.
    2. Then, when you’ve created the game in the editor, you’ll have a series of blackboxes available to handle the rankings.
    3. Finally, once it’s running, a button will appear in the game’s toolbar (if the developer has set it up that way) with which the player can access the scoreboard at any time.

    It is important to know that a player can only appear on a game’s scoreboard if they are registered.

    Setting up the Rankings

    Note:  Remember that since it’s still in beta, the settings are limited. So we need you to keep a few things in mind:

    - The game is only assigned one table, which is based both on the score obtained in the game and on the date that score was obtained. In the future, the developer will be able to create and personalize their own tables.

    - The ranking has several setup options available, but most of them are preset and can’t be modified. In later versions, they will be.

    In the dashboard, in the project’s properties, you can access the rankings’ setup by clicking on the Rankings tab. As you can observe, there is a default setting.


    For configuring the ranking, you will have the following options:

    Display the button in the game’s toolbar

    This option is selected by default, and allows the "show rankings" button to appear on the toolbar that’s seen in every game. If you’re not going to use rankings in your game, or don’t want that button to appear in order to have more control over when the scoreboard will be shown, all you have to do is deactivate this option. The button looks like this:


    Only one result per user

    NOTE: The modification of this option is turned off for the time being.

    This allows you to set up whether a player can have multiple results on the scoreboard or only their best one. This option is turned off by default, meaning all the player’s matches with top scores will appear.

    It’s important to note that if this option is turned off, the player’s best score is the only one that will appear highlighted, not the last. If the player has a lower score in a future match, it will be included in the ranking, but it may not be shown since there’s a better score already on the board.

    Timeframe Selection

    NOTE: The modification of this option is turned off for the moment, and is only available in all-time view, which is further limited to the 500 best scores.

    This allows the scoreboard to have different timeframes for the same lists: all-time, daily, monthly, and yearly. The all-time view is chosen by default, which, as its name implies, has no time limit.

    Match data used in rankings

    NOTE: The modification of this option is turned off for the moment. The internationalization of public texts is not enabled.

    This allows you to define what data will be the sorting criteria for the scoreboard. The Points item is used by default; another criterion available is Date (automatically handled by the system), which includes the date and time the game’s score is entered.

    Each datum that is configured has the following characteristics, reflected in the board's columns:
    • Priority: This indicates the datum’s importance in the sorting of the scoreboard. The lower the number, the more important the datum is. In the default case, for example, the first criterion is points; if there are equal points, then by date.
    • ID: Unique name by which the datum is identified. This identifier is what will appear in the blackboxes that allow the rankings’ data to be managed.
    • Type: The type of data.
    • List order: Ascending or descending order. In the default case, it will be descending for points (highest number first), and should points be equal, by ascending order by date (oldest games first).
    • Visible: This indicates if it should be shown in the scoreboard. That is to say, the datum is taken into account to calculate the ranking position, but it is not shown later on the scoreboard. In the default case, the points are visible, but the date and time are not.
    • Header title: The public name of the datum. This is the name the player will see on the scoreboard.
    It is possible to add additional data with the “+” button, to modify the characteristics of the existing data, as well as to modify the priority order.

    Once the rankings configuration is reviewed, the next step is to use them in the development of the game.

    Making use of the rankings in the editor

    When you’re creating your game in the editor, you’ll find in the Blackboxes tab in the LogicChart that there’s a group of blackboxes called Rankings, in which you will find all the blackboxes related to the management of rankings. From this link you can access these blackboxes’ technical documentation. Nevertheless, we’re going to go over them briefly and see a simple example of their use.

    SubmitScore Blackbox

    This allows the scores of the finished matches to be sent.


    This blackbox will be different depending on the data you’ve configured on the dashboard. In the default case, points is the input parameter.

    When the blackbox’s submit activator is activated, the value of that parameter will be sent as a new score for the user identified in the match.

    If the done output signal is activated, this will indicate that the operation was successful, whereas if the error signal is activated, that will indicate that there was an error and the score could not be stored.

    If the accessDenied signal is activated, this will mean that a non-registered user tried to send a score, which will allow you to treat this matter in a special way (like showing an error message, etc.).

    Finally, there is the onPrevious signal. If you select the blackbox and go to its properties, you’ll see there is one called secure that can be activated or deactivated. If you activate it, this blackbox will not send the game’s result as long as the answer to a previously-sent result is still pending. Therefore, onPrevious will activate if you try to send a game result when the answer to a previously-sent result is still pending, and the blackbox’s secure mode is activated.

    ShowRanking Blackbox

    This allows the scoreboard to be displayed (for example, at the end of a match). It has the same result as when a player presses the “see rankings” button on the game’s toolbar.


    When this show input is activated, the scoreboard will be displayed. If it was displayed successfully, the shown output signal will be activated, and when the player closes the board, that will activate the closed output signal, which will allow us to also personalize the flow of how it’s run.

    Example of the use of the blackboxes

    If you want, for example, to send a player’s final score at the end of a match so that it appears on the scoreboard, you can do that this way:

    Suppose you have a Script that manages the end of the game and exposes a parameter with the points the player has accumulated up to that moment:


    You’d have to create a SubmitScore blackbox...


    …and join the gameEnded output (which, let’s say, is activated when the game has ended) to the submit input in the blackbox you just created. It will also be necessary to indicate the variable holding the score we want to send – points from the DetectGameEnd script in our case. So, click and drag it to the SubmitScore blackbox to assign it. With these two actions, you’ll get the following:


    And the game would then be ready to send scores. The player could check the board at any time by clicking on the menu button that was created for just that purpose, as we saw in the setup section.

    However, you may want the scoreboard to appear automatically once the match is over. To do that, use the ShowRanking blackbox which, for example, could be joined to the done output in the SubmitScore blackbox and thus show the scores as soon as the score has been sent and confirmed:


    And with that you have a game that can send scores and show the scoreboard.

    Running the game

    Once a game that is able to send match results is developed, you have to remember that it behaves differently in “editing” than in “testing” and “production” mode.

    By “editing” mode, we mean when the game is run directly in the editor, in the Preview frame, to perform quick tests. In this mode, the scores are stored, but only temporarily. When we stop running it, those scores are deleted. Also, in this mode, the scoreboard is “simulated”; it’s not real. This means that there’s no way to access the toolbar button, since it doesn’t really exist.

    Sending and storing “real” scores is done by testing the games or playing their published versions. To test them, first you have to deploy them with the Deploy option on the Tools roll-down menu, and then with the Test option (from the menu or from the dialogue box you see after the Deploy one) you can start testing your game. In this mode, the scoreboard is real, not simulated, so the match scores are added to the ranking definitively.


    Dealing with rankings in WiMi5 is very easy. Just configure the rankings you want to use, then use the rankings blackboxes and let your players challenge each other for the best score. If you don't want to use a ranking in your game, just click on the settings to hide this feature.

    • Jan 23 2015 10:26 AM
    • by hafo
  18. Multi-threaded Keyboard Input for Games

    Usually, the user inputs fired during one frame are requested in the game loop at the start of the next frame and handled via callbacks before the game logic takes place. This is known as polling inputs.

    What is the problem with polling inputs just after you get them, or handling them all entirely in a single frame? The problem is that inputs handled this way are disconnected from the game simulation; they push the game toward behaving like an event-based application. User interaction (most of the time) should fire game events, and therefore inputs can be considered game events themselves, at a lower degree of abstraction.

    We all know that games aren't event-based applications, even if there are game events to be fired. This idea comes into sharp contrast when one uses a fixed time-step simulation.


    Updating the game simulation each frame (which includes A.I., physics, and animation, for example) by the elapsed frame time since the last frame update means that the game behaviour varies with the frame rate. That's the main reason why some game programmers update their games each frame by fixed time slices.

    It is well known that determinism is important in modern games, and that it is achieved by using the fixed time-step approach. However, its implementation details are off-topic here, so the reader should be comfortable with fixed time-step games or at least understand how they work. Thus, a single fixed time-step update is the same as a game logical update.

    A single game logical update advances the current game time by a fixed time interval, generally about 16.6 ms (60 Hz) or 33.3 ms (30 Hz), and the game may be updated n times each frame depending on the real frame time (see the loop below). The game logic time should stay very close to the current time, meaning the computer is updating the game as fast as it can.

    The basic game loop of a fixed time-step simulation looks like this:

    m_tRenderTime.Update();
    UINT64 ui64CurTime = m_tRenderTime.CurMicros();
    while ( ui64CurTime - m_tLogicTime.CurTime() > FIXED_TIME_STEP ) {
            // Update the logical side of the game.
            m_tLogicTime.UpdateBy( FIXED_TIME_STEP );
    }
    where m_tRenderTime.Update() advances the render time by the real elapsed frame time and m_tLogicTime.UpdateBy( FIXED_TIME_STEP ) advances the game logic time by FIXED_TIME_STEP time units. (I personally prefer microseconds and recommend using them.)

    What happens if we poll all input events at the beginning of the current frame (before the loop starts), press a button at some point, and release it during a game logical update? If we update n times and the button state changes mid-update, the game tick will see the button as if it had been pressed for the entire frame. This is not a big problem if we're updating the game in small steps, because the next frame arrives quickly; but if the current frame time is considerably larger than the time step, then knowing the button state only at the beginning of the frame can get us into trouble. To avoid this issue we need to time-stamp all input events when they occur. This opens some doors, such as measuring how long buttons were held before being released, and synchronizing them with the game simulation.

    Every input event should be time-stamped so that the game can consume the right amount of input on the game side. The input system should be able to collect inputs anywhere and process them somewhere later. You may have noticed that what we're doing here is basically buffering all inputs. This is a good idea: if we store the time-stamped input events, we can process them later rather than instantly, and we become able to synchronize inputs with the fixed time-step updates.

    The way to keep input events synchronized with the game simulation is to consume, on each logical update, only the inputs that occurred up to the current game logical time. Example:

    Current time = 1000 ms;

    Fixed time-step = 100 ms;

    Game logic updates = 1000 ms / 100 ms = 10;

    Game time = 0 ms;

    Input Buffer:

    X-down at 700 ms;

    X-up at 850 ms;

    Y-down at 860 ms;

    Y-up at 900 ms;

    The 1st logical update eats 100 ms of input. Since there are no inputs up to that time to be consumed, we go to the next logical update (the same holds for updates 2 through 6);


    The 7th logical update eats 100 ms of input. Because the game logic time has been updated 6 times by 100 ms, the game time is 600 ms. There were still no inputs up to that time, so we continue with the remaining updates;

    The 8th update. The game time is 800 ms, so the time-stamped X-down event must be consumed. The current duration of the X press is the current game time minus the time stamp, that is, 800 ms - 700 ms = 100 ms. Now the game can check whether a button has been held for a certain amount of time, which is usable information. We also know that a mechanism could be fired here, because it is the first time the user presses the X button (in this example, of course, since there was no X-down before). Another thing we could do at this point is map the X button to an engine-understandable input key, log it into the input system, and then remap it into the game;

    The 9th update. Game time = 900 ms. X-up and Y-down, along with their time stamps, can be consumed. The X button was released at 850 ms, so its total duration is the release time stamp minus the press time stamp, that is, 850 ms - 700 ms = 150 ms. You may want to log this change somewhere on the game side. Y was pressed, so we repeat for it what we did for X in the last update;

    and finally...

    The 10th update. The current game time is 1000 ms. We repeat for Y what we did for X in the previous update, and we're done.


    The Time Class

    At this point you should already know how computer timing works and how to create an appropriate timer class. The timer class below stores microseconds as its standard time unit in order to avoid numerical drift. Small intervals can be converted to seconds or milliseconds and stored in doubles or floats, but they are never accumulated in the timer class itself. People tend to accumulate time in floats, forgetting that time grows without bound and should be accumulated precisely; the computer has (and will always have) storage and precision limits when it comes to time measurements.

    #ifndef __TIME_H__
    #define __TIME_H__
    typedef unsigned long long UINT64;
    class CTime {
    public :
    	CTime();
    	void Update();
    	void UpdateBy(UINT64 _ui64Ticks);
    	inline UINT64 CurTime() const { return m_ui64CurTime; }
    	inline UINT64 CurMicros() const { return m_ui64CurMicros; }
    	inline UINT64 DeltaMicros() const { return m_ui64DeltaMicros; }
    	inline float DeltaSecs() const { return m_fDeltaSecs; }
    	inline void SetFrequency(UINT64 _ui64Resolution) { m_ui64Resolution = _ui64Resolution; }
    	inline CTime& operator=(const CTime& _tTime) {
    		m_ui64LastRealTime = _tTime.m_ui64LastRealTime;
    		return (*this);
    	}
    protected :
    	UINT64 RealTime() const;
    	UINT64 m_ui64Resolution;
    	UINT64 m_ui64CurTime;
    	UINT64 m_ui64LastTime;
    	UINT64 m_ui64LastRealTime;
    	UINT64 m_ui64CurMicros;
    	UINT64 m_ui64DeltaMicros;
    	float m_fDeltaSecs;
    };
    #endif //#ifndef __TIME_H__

    #include "CTime.h"
    #include <Windows.h>

    CTime::CTime() : m_ui64Resolution(0ULL), m_ui64CurTime(0ULL), m_ui64LastTime(0ULL), m_ui64LastRealTime(0ULL),
    	m_ui64CurMicros(0ULL), m_ui64DeltaMicros(0ULL), m_fDeltaSecs(0.0f) {
    	::QueryPerformanceFrequency( reinterpret_cast<LARGE_INTEGER*>(&m_ui64Resolution) );
    	m_ui64LastRealTime = RealTime();
    }

    UINT64 CTime::RealTime() const {
    	UINT64 ui64Ret;
    	::QueryPerformanceCounter( reinterpret_cast<LARGE_INTEGER*>(&ui64Ret) );
    	return ui64Ret;
    }

    void CTime::Update() {
    	UINT64 ui64TimeNow = RealTime();
    	UINT64 ui64DeltaTime = ui64TimeNow - m_ui64LastRealTime;
    	m_ui64LastRealTime = ui64TimeNow;
    	UpdateBy( ui64DeltaTime );
    }

    void CTime::UpdateBy(UINT64 _ui64Ticks) {
    	m_ui64LastTime = m_ui64CurTime;
    	m_ui64CurTime += _ui64Ticks;
    	UINT64 ui64LastMicros = m_ui64CurMicros;
    	m_ui64CurMicros = m_ui64CurTime * 1000000ULL / m_ui64Resolution;
    	m_ui64DeltaMicros = m_ui64CurMicros - ui64LastMicros;
    	m_fDeltaSecs = m_ui64DeltaMicros * static_cast<float>(1.0 / 1000000.0);
    }

    As you can see, the timer's delta seconds (useful for the physics step, for instance) are derived after storing the delta microseconds, so the conversion is made with minimal numerical drift.

    The Keyboard Class

    If you're running Windows®, you may have noticed that the pre-processed (or raw) input messages can be polled using the Win32 API message queue.

    It is mandatory to keep the input polling on the same thread that created the window, but it is not mandatory to keep our game simulation on that thread. In order to separate the raw input processing from the game logic, we can leave the message queue running on the main thread and keep the game simulation and rendering on another thread. Since only the game logic affects what will be rendered, we don't need synchronization inside the game-logic/render thread in the next example.

    I hope the classes stay clear as you read.

    enum KB_KEY_EVENTS { KE_KEYDOWN, KE_KEYUP };

    class CKeyboardBuffer {
    public :
    	enum { KB_TOTAL_KEYS = 256 };
    	void OnKeyDown(unsigned int _ui32Key) {
    		CLocker lLocker(m_csCritic);
    		KB_KEY_EVENT keEvent;
    		keEvent.keEvent = KE_KEYDOWN;
    		keEvent.ui64Time = m_tTime.CurMicros();
    		m_keKeyEvents[_ui32Key].push_back(keEvent);
    	}
    	void OnKeyUp(unsigned int _ui32Key) {
    		CLocker lLocker(m_csCritic);
    		KB_KEY_EVENT keEvent;
    		keEvent.keEvent = KE_KEYUP;
    		keEvent.ui64Time = m_tTime.CurMicros();
    		m_keKeyEvents[_ui32Key].push_back(keEvent);
    	}
    protected :
    	struct KB_KEY_EVENT {
    		KB_KEY_EVENTS keEvent;
    		unsigned long long ui64Time;
    	};
    	CCriticalSection m_csCritic;
    	CTime m_tTime;
    	std::vector<KB_KEY_EVENT> m_keKeyEvents[KB_TOTAL_KEYS];
    };

    LRESULT CALLBACK CWindow::WindowProc(HWND _hWnd, UINT _uMsg, WPARAM _wParam, LPARAM _lParam) {
    	switch (_uMsg) {
    		// Forward key messages to the window's keyboard buffer instance.
    		case WM_KEYDOWN : { m_kbKeyboardBuffer.OnKeyDown(static_cast<unsigned int>(_wParam)); break; }
    		case WM_KEYUP : { m_kbKeyboardBuffer.OnKeyUp(static_cast<unsigned int>(_wParam)); break; }
    	}
    	return ::DefWindowProc(_hWnd, _uMsg, _wParam, _lParam);
    }

    Importantly, in order to request events up to a given time from the keyboard buffer in the game thread, the thread-safe keyboard timer must be synchronized with the real frame timer so that we consume the correct time stamps. This is done simply by using an assignment operator that internally copies the render timer's last real time into the thread-safe timer's last real time.

    //Before the game starts we synchronize every timer. (An input buffer like this exists for every type of device.)
    INT CGame::Init(/*...*/) {
    	m_pkbKeyboardBuffer->m_tTime = m_tRenderTime = m_tLogicTime;
    	//...
    	return 1;
    }

    Note that you need to set the game logic timer frequency so that you get the right amount of microseconds, seconds or milliseconds. Since the logic timer is updated by a fixed interval that is already expressed in microseconds, we don't want that interval divided by the real hardware frequency. All we need to do is set the game logic timer frequency to 1000000.

    Now we have time-stamped events. The thread that listens for inputs keeps running in the background, so it cannot interfere with our simulation directly. The game thread then reads from the keyboard buffer.

    We saw that the responsibility of the keyboard buffer is to buffer key presses and releases. What would be useful now is a way of using the keyboard on the game side, requesting information such as: a) "For how long was the key pressed?"; or b) "What is the current duration of the key?". Therefore we'll use the keyboard buffer to update a keyboard class that contains all the key states and their durations, to be accessed by the game.

    class CKeyboard {
    	friend class CKeyboardBuffer;
    public :
    	inline bool KeyIsDown(unsigned int _ui32Key) const {
    		return m_kiCurKeys[_ui32Key].bDown;
    	}
    	inline unsigned long long KeyDuration(unsigned int _ui32Key) const {
    		return m_kiCurKeys[_ui32Key].ui64Duration;
    	}
    protected :
    	struct KB_KEY_INFO {
    		/* The key is down. */
    		bool bDown;
    		/* The time the key was pressed. This is needed to calculate its duration. */
    		unsigned long long ui64TimePressed;
    		/* This should be logged, but it's here just for simplicity. */
    		unsigned long long ui64Duration;
    	};
    	KB_KEY_INFO m_kiCurKeys[CKeyboardBuffer::KB_TOTAL_KEYS];
    	KB_KEY_INFO m_kiLastKeys[CKeyboardBuffer::KB_TOTAL_KEYS];
    };

    Now the keyboard can be used as the final keyboard in the game, and we need to transfer the data coming from the keyboard buffer into it. We will give our game an instance of the keyboard buffer class.

    On each logical update we request from the thread-safe keyboard buffer all the keyboard events up to the current game logical time and transfer them to our game-side keyboard buffer; then we update the game-side keyboard that will be used by the game. We'll implement two functions in our keyboard buffer: one that transfers the thread-safe inputs, and another that updates the keyboard with its current key events.

    void CKeyboardBuffer::UpdateKeyboardBuffer(CKeyboardBuffer& _kbOut, unsigned long long _ui64MaxTimeStamp) {
    	CLocker lLocker(m_csCritic); //Enter our critical section.
    	for (unsigned int I = KB_TOTAL_KEYS; I--;) {
    		std::vector<KB_KEY_EVENT>& vKeyEvents = m_keKeyEvents[I];
    		for (std::vector<KB_KEY_EVENT>::iterator J = vKeyEvents.begin(); J != vKeyEvents.end();) {
    			const KB_KEY_EVENT& keEvent = *J;
    			if (keEvent.ui64Time < _ui64MaxTimeStamp) {
    				_kbOut.m_keKeyEvents[I].push_back(keEvent); //Transfer to the game-side buffer.
    				J = vKeyEvents.erase(J); //Eat key event. This is not optimized.
    			}
    			else {
    				++J;
    			}
    		}
    	}
    	//Leave our critical section (CLocker unlocks on destruction).
    }

    void CKeyboardBuffer::UpdateKeyboard(CKeyboard& _kKeyboard, unsigned long long _ui64CurTime) {
    	for (unsigned int I = KB_TOTAL_KEYS; I--;) {
    		CKeyboard::KB_KEY_INFO& kiCurKeyInfo = _kKeyboard.m_kiCurKeys[I];
    		CKeyboard::KB_KEY_INFO& kiLastKeyInfo = _kKeyboard.m_kiLastKeys[I];
    		std::vector<KB_KEY_EVENT>& vKeyEvents = m_keKeyEvents[I];
    		for (std::vector<KB_KEY_EVENT>::iterator J = vKeyEvents.begin(); J != vKeyEvents.end(); ++J) {
    			const KB_KEY_EVENT& keEvent = *J;
    			if ( keEvent.keEvent == KE_KEYDOWN ) {
    				if ( !kiLastKeyInfo.bDown ) {
    					//The key was just pressed; record the press time.
    					kiCurKeyInfo.bDown = true;
    					kiCurKeyInfo.ui64TimePressed = keEvent.ui64Time;
    				}
    			}
    			else {
    				//Calculate the total duration of the key press.
    				kiCurKeyInfo.bDown = false;
    				kiCurKeyInfo.ui64Duration = keEvent.ui64Time - kiCurKeyInfo.ui64TimePressed;
    			}
    			kiLastKeyInfo.bDown = kiCurKeyInfo.bDown;
    			kiLastKeyInfo.ui64TimePressed = kiCurKeyInfo.ui64TimePressed;
    			kiLastKeyInfo.ui64Duration = kiCurKeyInfo.ui64Duration;
    		}
    		if ( kiCurKeyInfo.bDown ) {
    			//The key is being held. Update its duration.
    			kiCurKeyInfo.ui64Duration = _ui64CurTime - kiCurKeyInfo.ui64TimePressed;
    		}
    		vKeyEvents.clear(); //Clear the buffer for the next request.
    	}
    }

    Now we can request up-to-time inputs and use them in the game logical update. Example:

    bool CGame::Tick() {
    	m_tRenderTime.Update(); //Update by the real elapsed time.
    	UINT64 ui64CurMicros = m_tRenderTime.CurMicros();
    	while (ui64CurMicros - m_tLogicTime.CurTime() > FIXED_TIME_STEP) {
    		m_tLogicTime.UpdateBy(FIXED_TIME_STEP); //One logical update.
    		UINT64 ui64CurGameTime = m_tLogicTime.CurTime(); //In microseconds, since the logic frequency is 1000000.
    		m_pkbKeyboardBuffer->UpdateKeyboardBuffer( m_kbKeyboardBuffer, ui64CurGameTime ); //The window's thread-safe keyboard buffer.
    		m_kbKeyboardBuffer.UpdateKeyboard(m_kKeyboard, ui64CurGameTime); //Our non-thread-safe game-side buffer updates our keyboard with its key events.
    		UpdateGameState(); //We can use m_kKeyboard now at any time in our game state.
    	}
    	return true;
    }


    What we did in this article was divide the input system into small pieces synchronized with our logical game simulation. There are open doors for optimization. Once the game has all the input information it can start mapping and logging it. For the moment, what matters is that input is synchronized with the logical game time and the game can interface with it while losing a minimum of input events.

    Note that this way of managing inputs can add some complexity to your code base. For small demos or simple applications, the old polling approach can still win in favor of simplicity.








  19. 4 Simple Things I Learned About the Industry as a Beginner

    For the last year or so I have been working professionally at a AAA mobile game studio. This year has been a huge eye-opener for me. Although this article is really short and concise, I'd really like to share these (seemingly minor) tips with anyone who is thinking about joining, or has already joined and is starting out in, the professional game development industry.

    All my teen years I had only one dream: to become a professional game developer, and it finally happened. I was thrilled, but as it turns out, I was not ready. At the time of this post I'm still a student, hopefully getting my bachelor's degree in 2016. Juggling school and a corporate job (because it is a corporation, after all) has been really damaging to my grades and my social life, but hey, I knew what I signed up for. In the meantime I met lots of really cool and talented people, from whom I have learned tons. Not necessarily programming skills (although I did manage to pick up quite a few tricks there as well), but how to behave in such an environment, how to handle stress, how to speak with non-technical people about technical things. These turned out to be essential skills, in some cases way more important than the technical skills you have to have in order to be successful at your job. Now, don't misunderstand me: the fact that I wasn't ready doesn't mean I got fired. In fact I really enjoyed and loved the environment of pro game development, but I simply couldn't spend so much time on it anymore; it had started to become a health issue. I got a new job, still programming, although not game development. A lot more laid back, in a totally different industry though. I plan to return to game development as soon as possible.

    So, in summary, I'd like to present a few main points of interest for those who are new to the industry, or who are contemplating becoming professional game developers.

    1. It’s not what you’ve been doing so far

    So far you’ve been pretty much doing what projects you wanted, how you wanted them. It will not be the case anymore. There are deadlines, there are expectations to be met, there is profit that needs to be earned. Don’t forget that after all, it is a business. You will probably do tasks which you are interested in and you love them, but you will also do tedious, even boring ones.

    2. Your impact will not be as great as it has been before

    Ever implemented a whole game? Perhaps whole systems? Yeah, it's different here. You will probably only get to work on parts of systems, or maybe just tweak them and fix bugs (especially as a beginner). These games are way bigger than what we're used to as hobbyist game developers; you have to adapt to the situation. Most of the people working on a project specialize in some area (networking, graphics, etc.). Also, I noticed that many people on the team - including myself; I always went with rendering engines, that's my thing :D - have never done a full game by themselves (and that is okay).

    3. You WILL have to learn to talk properly with managers/leads, designers, artists

    If you’re working alone, you’re a one man team and you’re the god of your projects. In a professional environment talking to non-technical people about technical things may very well make the difference between you getting to the next level, or getting fired. It is an essential skill that can be easily learned through experience. In the beginning however, keep your head low.

    4. You WILL have to put in extra effort

    If you’re working on your own hobby project, if a system gets done 2 days later than you originally wanted it to, it’s not a big deal. However, in this environment, it could set back the whole team. There will be days when you will have to work overtime, for the sake of the project and your team.

    Essentially, I could boil all this down to two words: COMMUNICATION and TEAMWORK.

    If you really enjoy developing games, go for the professional environment; if you're not sure about it, avoid it. All of the people who manage to be successful here love what they do. Love it or quit it.

    14 Jan 2015: Initial release

    • Jan 20 2015 12:15 PM
    • by Azurys
  20. Banshee Engine Architecture - Introduction

    This article is imagined as part of a larger series that will explain the architecture and implementation details of Banshee game development toolkit. In this introductory article a very general overview of the architecture is provided, as well as the goals and vision for Banshee. In later articles I will delve into details about various engine systems, providing specific implementation information.

    The intent of the articles is to teach you how to implement various engine systems, see how they integrate into a larger whole, and give you an insight into game engine architecture. I will be covering various topics, from low level run time type information and serialization systems, multithreaded rendering, general purpose GUI system, input handling, asset processing to editor building and scripting languages.

    Since Banshee is very new and most likely unfamiliar to the reader I will start with a lengthy introduction.

    What is Banshee?

    It is a free and modern multi-platform game development toolkit. It aims to provide a simple yet powerful environment for creating games and other graphical applications. A wide range of features are available, from a math and utility library, to DirectX 11 and OpenGL render systems, all the way to asset processing, a fully featured editor and C# scripting.

    At the time of writing, the project is in active development, but its core systems are considered feature complete and a fully working version of the engine is available online. In its current state it can be compared to libraries like SDL or XNA, but with a wider scope. Work is progressing on various high-level systems, as described by the list of features below.

    Currently available features

    • Design
      • Built using C++11 and modern design principles
      • Clean layered design
      • Fully documented
      • Modular & plugin based
      • Multiplatform ready
    • Renderer
      • DX9, DX11 and OpenGL 4.3 render systems
      • Multi-threaded rendering
      • Flexible material system
      • Easy to control and set up
      • Shader parsing for HLSL9, HLSL11 and GLSL
    • Asset pipeline
      • Easy to use
      • Asynchronous resource loading
      • Extensible importer system
      • Available importer plugins for:
        • FBX, OBJ, DAE meshes
        • PNG, PSD, BMP, JPG, ... images
        • OTF, TTF fonts
        • HLSL9, HLSL11, GLSL shaders
    • Powerful GUI system
      • Unicode text rendering and input
      • Easy to use layout based system
      • Many common GUI controls
      • Fully skinnable
      • Automatic batching
      • Support for texture atlases
      • Localization
    • Other
      • CPU & GPU profiler
      • Virtual input
      • Advanced RTTI system
      • Automatic object serialization/deserialization
      • Debug drawing
      • Utility library
        • Math, file system, events, thread pool, task scheduler, logging, memory allocators and more

    Features coming soon (2015 & 2016)

    • WYSIWYG editor
      • All in one editor
      • Scene, asset and project management
      • Play-in-editor
      • Integration with scripting system
      • Fully customizable for custom workflows
      • One click multi-platform building
    • C# scripting
      • Multiplatform via Mono
      • Full access to .NET library
      • High level engine wrapper
    • High quality renderer
      • Fully deferred
      • Physically based shading
      • Global illumination
      • Gamma correct and HDR rendering
      • High quality post processing effects
    • 3rd party physics, audio, video, network and AI system integration
      • FMOD
      • Physx
      • Ogg Vorbis
      • Ogg Theora
      • Raknet
      • Recast/Detour


    You might want to retrieve the project source code to better follow the articles to come - in each article I will reference source code files that you may view for exact implementation details. I will be touching on features currently available and will update the articles as new features are released.

    You may download Banshee from its GitHub page:


    The ultimate goal for Banshee is to be a fully featured toolkit that is easy to use, powerful, well designed and extensible, so that it may rival AAA engine quality. I'll touch upon each of those factors and explain how exactly it attempts to accomplish them.

    Ease of use

    Banshee's interface (both code- and UI-wise) was created to be as simple as possible without sacrificing customizability. Banshee is designed in layers: the lowest layers provide the most general-purpose functionality, while higher layers reference lower layers and provide more specialized functionality. Most people will be happy with the simpler, more specialized functionality, but the lower-level functionality is there if they need it - and it wasn't designed as an afterthought either.

    The highest level is imagined as a multi-purpose editor that deals with scene editing, asset import and processing, animation, particles, terrain and similar. The entire editor is designed to be extensible without deep knowledge of the engine - a special scripting interface is provided only for the editor. Each game requires its own custom workflow and set of tools, which is reflected in the editor design.

    On a layer below lies the C# scripting system. C# allows you to write the high-level functionality of your project more easily and safely. It provides access to the large .NET library and, most importantly, has extremely fast iteration times, so you may test your changes within seconds of making them. All compilation is done in the editor and you may jump into the game immediately after it is done - this applies even if you are modifying the editor itself.


    Below the C# scripting layer lie two separate, speedy C++ layers that give you direct access to the engine core, renderer and rendering APIs. Not everyone's performance requirements can be satisfied at the high level, and that's why even the low-level interfaces had a lot of thought put into them.

    Banshee is a fully multithreaded engine designed with performance in mind. The renderer thread runs completely separately from the rest of your code, giving you maximum CPU resources for the best graphical fidelity. Resources are loaded asynchronously, avoiding stalls, and internal buffers and systems are designed to avoid CPU-GPU synchronization points.

    Additionally Banshee comes with built-in CPU and GPU profilers that monitor speed, memory allocations and resource usage for squeezing the most out of your code.

    Power doesn’t only mean speed, but also features. Banshee isn’t just a library, but aims to be a fully featured development toolkit. This includes an all-purpose editor, a scripting system, integration with 3rd party physics, audio, video, networking and AI solutions, high fidelity renderer, and with the help of the community hopefully much more.


    A major part of Banshee is the extensible all-purpose editor. Games need custom tools that make development easier and allow your artists and designers to do more. This can range from simple data input for game NPC stats to complex 3D editing tools for your in-game cinematics. The GUI system was designed to make it as easy as possible to design your own input interfaces, and a special scripting interface has been provided that exposes the majority of editor functionality for a variety of other uses.

    Aside from being a big part of the editor, extensibility is also prevalent throughout the lower layers of the engine. Anything not considered core is built as a plugin that inherits a common abstract interface. This means you can build your own plugins for various engine systems without touching the rest of the engine. For example, the DX9, DX11 and OpenGL render system APIs are all built as plugins, and you may switch between them with a single line of code.

    Quality design

    A great deal of effort has been spent to design Banshee the right way, with no shortcuts. The entire toolkit, from the low level file system library to GUI system and the editor has been designed and developed from scratch following modern design principles and using modern technologies, solely for the purposes of Banshee.

    It has been made modular and decoupled as much as possible to allow people to easily replace or update engine systems. The plugin-based architecture keeps all the specialized code outside of the engine core, which makes it easier to tailor the engine to your own needs by extending it with new plugins. It also makes the engine easier to learn, as you have clearly defined boundaries between systems; this is further supported by the layered architecture, which reduces class coupling and makes the direction of dependencies even clearer. Additionally, every non-trivial method, from the lowest to the highest layer, is fully documented.

    From its inception it has been designed to be a multi-platform and a multi-threaded engine.

    Platform-specific functionality is kept to a minimum and is cleanly encapsulated in order to make porting to other platforms as easy as possible. This is further supported by its render API interface which already supports multiple popular APIs, including OpenGL.

    Its multithreaded design makes communication between the main and render thread clear and allows you to perform rendering operations from both, depending on developer preference. Resource initialization between the two threads is handled automatically which further allows operations like asynchronous resource loading. Async operation objects provide functionality similar to C++ future/promise and C# async/await concepts. Additionally you are supplied with tools like the task scheduler that allow you to quickly set up parallel operations yourself.


    Now that you have an idea of what Banshee is trying to accomplish, I will describe the general architecture in a bit more detail, starting with the top-level design: the four primary layers shown in the image below.


    The layers were created for two reasons:
    • To give developers a chance to pick the level of functionality they need. Some people will want just core and utility and start working on their own engine while others might be just interested in game development and will stick with the editor layer.
    • To decouple code. Lower layers do not know about higher levels and low level code never caters to specialized high level code. This makes the design cleaner and forces a certain direction for dependencies.
    Lower levels were designed to be more general-purpose than higher levels. They provide very general techniques usually usable in various situations, and they attempt to cater to everyone. Higher levels, on the other hand, provide much more focused and specialized techniques. This might mean relying on very specific rendering APIs, platforms or plugins, but it also means using newer, fancier and perhaps not as widely accepted techniques (e.g. some new rendering algorithm).


    BansheeUtility

    This is the lowest layer of the engine. It is a collection of very decoupled and separate systems that are likely to be used throughout all of the higher layers - essentially a collection of tools that are in no way tied into a larger whole. Most of the functionality isn't even game-engine specific, like providing file-system access, file path parsing or events. Other things that belong here are the math library, object serialization and RTTI system, and threading primitives and managers, among various others.


    BansheeCore

    It is the second lowest layer and the first layer that starts to take the shape of an actual engine. This layer provides some very game-specific modules tied into a coherent whole, but it tries to be very generic and offer something that every engine might need instead of focusing on very specialized techniques. Render API wrappers exist here, but actual render APIs are implemented as plugins so you are not constrained to a specific subset. The scene manager, renderer, resource management, importers and others all belong here, and all are implemented in an abstract way so that they can be implemented/extended by higher layers or plugins.


    BansheeEngine

    The second highest layer and the first layer with a more focused goal. It is built upon BansheeCore but relies on a specific sub-set of plugins and implements systems like the scene manager and renderer in a specific way. For example, the DirectX 11 and OpenGL render systems are referenced by name, as is the Mono scripting system, among others. A renderer that follows a specific set of techniques and algorithms determining how all objects are rendered also belongs here.


    And finally, the top layer is the editor. Although it is named as such, it also relies heavily on the scripting system and the C# interface, as those are primarily used through the editor. It is an extensible, multi-purpose editor that provides functionality for level editing, compiling script code, editing script objects, playing in the editor, importing assets, and publishing the game. It can also do much more, as it can easily be extended with your own custom sub-editors. Want a shader node editor? You can build one yourself without touching the complex bits of the engine; there is an entire scripting interface built solely for editor extensions.

    The figure below shows a more detailed structure of each layer as it is currently designed (expect it to change as new features are added). Also note the plugin slots, which allow you to extend the engine without actually changing the core.


    In future chapters I will explain the major systems in each of the layers. These explanations should give you insight into how to use them, but also reveal why and how they were implemented. First, however, I'd like to focus on a quick guide to getting started with your first Banshee project, in order to give readers a bit more perspective (and some code!).

    Example application

    This section is intended to show you how to create a minimal application in Banshee. The example will primarily use the BansheeEngine layer, which is the high-level C++ interface. Otherwise inclined users may use the lower-level C++ interface and access the rendering API directly, or use the higher-level C# scripting interface. We will delve into those interfaces in more detail in later chapters.

    One important thing to mention is that I will not give instructions on how to set up the Banshee environment, and I will also omit some less relevant code. This chapter is intended just to give some perspective; the interested reader can head to the project website and check out the example project or the provided tutorial.


    Each Banshee program starts with a call to the Application class. It is the primary entry point into Banshee, handles startup, shutdown and the primary game loop. A minimal application that just creates an empty window looks something like this:

    RENDER_WINDOW_DESC renderWindowDesc;
    renderWindowDesc.videoMode = VideoMode(1280, 720);
    renderWindowDesc.title = "My App";
    renderWindowDesc.fullscreen = false;
    Application::startUp(renderWindowDesc, RenderSystemPlugin::DX11);

    When starting up the application you are required to provide a structure describing the primary render window, as well as a render system plugin to use. When startup completes, your render window will show up, and you can then run your game code by calling runMainLoop. In this example we haven't set up any game code, so the loop will just run the internal engine systems. When the user is done with the application, the main loop returns and shutdown is performed: all objects are cleaned up and plugins unloaded.


    Since our main loop isn't currently doing much, we will want to add some game code to perform certain actions. However, in order for any of those actions to be visible, we need some resources to display on the screen. We will need at least a 3D model and a texture. To get resources into Banshee you can either load a preprocessed resource using the Resources class, or you may import a resource from a third-party format using the Importer class. We'll import a 3D model from the FBX file format and a texture from the PSD file format.

    HMesh dragonModel = Importer::instance().import<Mesh>("C:\\Dragon.fbx");
    HTexture dragonTexture = Importer::instance().import<Texture>("C:\\Dragon.psd");

    Game code

    Now that we have some resources, we can add some game code to display them on the screen. Every bit of game code in Banshee is created in the form of Components. Components are attached to SceneObjects, which can be positioned and oriented around the scene. You will often create your own components, but for this example we only need two built-in component types: Camera and Renderable. Camera sets up a viewport into the scene and outputs what it sees into a target surface (our window in this example), and Renderable allows us to render a 3D model with a specific material.

    HSceneObject sceneCameraSO = SceneObject::create("SceneCamera");
    // "window" is the primary render window created during startup
    HCamera sceneCamera = sceneCameraSO->addComponent<Camera>(window);
    sceneCameraSO->setPosition(Vector3(40.0f, 30.0f, 230.0f));
    sceneCameraSO->lookAt(Vector3(0, 0, 0));
    HSceneObject dragonSO = SceneObject::create("Dragon");
    HRenderable renderable = dragonSO->addComponent<Renderable>();

    I have skipped material creation as it will be covered in a later chapter, but it is enough to say that it involves importing a couple of GPU programs (e.g. shaders), using them to create a material, and then attaching the previously loaded texture, among a few other minor things.

    You can check out the source code and the ExampleProject for a more comprehensive introduction, as I didn't want to turn this article into a tutorial when there already is one.


    This concludes the introduction. I hope you enjoyed this article, and I'll see you next time, when I'll be talking about implementing a run-time type information system in C++, as well as a flexible serialization system that handles everything from simple config files to entire resources and even entire level hierarchies.

  21. Setting Realistic Deadlines, Family, and Soup

    Jan. 23, 2015. This is my goal. My deadline. And I'm going to miss it.

    Let me explain. As I write this article, I am also making soup. Trust me, it all comes together at the end.

    Part I: Software Estimation 101

    I've been working on Archmage Rises full time for three months, and part time for about five months before that. In round numbers, I'm about 1,000 hours in.

    You see, I have been working without a specific deadline because of a little thing I know from business software called the “Cone of Uncertainty”:


    In business software, the customer shares an idea (or “need”)—and 10 out of 10 times, the next sentence is: "When will you be done, and how much will it cost?"

    Looking at the cone diagram, when is this estimate most accurate? When you are done! You know exactly how long it takes and how much it will actually cost when you finish the project. When do they want the estimate? At the beginning—when accuracy is nil! For this reason, I didn't set a deadline; anything I said would be wrong and misleading to all involved.

    Even when my wife repeatedly asked me.

    Even when the head of Alienware called me and asked, “When will it ship?”

    I focused on moving forward in the cone so I could be in a position to estimate a deadline with reasonable accuracy. In fact, I have built two prototypes which prove the concept and test certain mechanics. Then I moved into the core features of the game.

    Making a game is like building a sports car from a kit.
    … but with no instructions
    … and many parts you have to build yourself (!)

    I have spent the past months making critical pieces. As each is complete, I put it aside for final assembly at a later time. To any outside observer, it looks nothing like a car—just a bunch of random parts lying on the floor. Heck! To ME, it looks like a bunch of random parts on the floor. How will this ever be a road worthy car?

    Oh, hold on. Gotta check the soup.
    Okay, we're good.

    This week I finished a critical feature of my story editor/reader, and suddenly the heavens parted and I could see how all the pieces fit together! Now I'm in a place where I can estimate a deadline.

    But before I get into that, I need to clarify what deadline I'm talking about.

    Vertical Slice, M.V.P. & Scrum

    Making my first game (Catch the Monkey), I learned a lot of things developers should never do. In my research after that project, I learned how game-making is unique and different from business software (business software has to work correctly; games have to work correctly and be fun) and requires a different approach.

    Getting to basics, a vertical slice is a short, complete experience of the game. Imagine you are making Super Mario Bros. You build the very first level (World 1-1) with complete mechanics, power ups, art, music, sound effects, and juice (polish). If this isn't fun, if the mechanics don't work, then you are wasting your time building the rest of the game.

    The book Lean Startup has also greatly influenced my thinking on game development. In it, the author argues to fail quickly, pivot, and then move in a better direction. The mechanism to fail quickly is to build the Minimum Viable Product (MVP). Think of web services like HootSuite, Salesforce, or Amazon. Rather than build the "whole experience," you build the absolute bare minimum that can function so that you can test it out on real customers and see if there is any traction to this business idea. I see the Vertical Slice and MVP as interchangeable labels for the same idea.

    A fantastic summary of Scrum.

    Finally, Scrum is the iterative, incremental software development methodology I think works best for games (I'm quite familiar with the many alternatives). Work is organized into User Stories and (in the pure form) estimated in Story Points. By abstracting the estimates, the cone of uncertainty is built in. I like that. Scrum also says that when you build something, you build it completely and always leave the game able to run. Meaning, you don't mostly get a feature working and then move on to another task; you make it 100% rock solid: built, tested, bug fixed. You do this because it eliminates Technical Debt.


    What's technical debt? Well, like real debt, it is something you have to pay later. So if the story engine has several bugs in it but I leave them to fix "later," that is technical debt I have to pay at some point. People who get things to 90% and then move on to the next feature create tons of technical debt in the project. This seriously undermines the ability to complete the project, because the amount of technical debt is completely unknown and likely to hamper forward progress. I have experienced this personally on my projects. I have heard this is a key contributor to "crunch" in the game industry.

    Hold on: Gotta go put onions and peppers in the soup now.

    A second and very important reason to never accrue technical debt is it completely undermines your ability to estimate.

    Let's say you are making the Super Mario Bros. World 1-1 vertical slice. Putting aside knowing if your game is fun or not, the real value of completing the slice is the ability to effectively estimate the total effort and cost of the project (with reasonable accuracy). So let's say World 1-1 took 100 hours to complete across the programmer, designer, and artist with a cost of $1,000. Well, if the game design called for 30 levels, you have a fact-based approach to accurate estimating: It will take 3,000 hours and $30,000. But the reverse is also helpful. Let's say you only have $20,000. Well right off the bat you know you can only make 20 levels. See how handy this is?!?

    Still, you can throw it all out the window when you allow technical debt.

    Let me illustrate:
    Let's say the artist didn't do complete work. Some corners were cut and treated as "just a prototype," so only 80% of the effort was expended. Let's say the programmer left some bugs and hardcoded a section just to work for the slice. Call it 75% of the real total. Well, now your estimates will be way off. The more iterations (levels) and scale (employees) you multiply by your vertical slice cost, the worse off you are. This is a sure-fire way to doom your project.
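    The arithmetic behind slice-based estimating, and how an incomplete slice distorts it, can be sketched in a few lines (a hypothetical illustration with names of my own; this is not tooling from Axosoft or anything else mentioned here):

    ```cpp
    #include <cassert>

    // Illustrative sketch of slice-based estimating: extrapolate total effort
    // from a vertical slice, and show how an incomplete slice understates it.
    struct SliceEstimate
    {
        double hours;  // effort to complete the slice (e.g. World 1-1)
        double cost;   // dollars spent on the slice
    };

    // Straight extrapolation: slice effort/cost times the number of planned levels.
    double projectHours(const SliceEstimate& s, int levels) { return s.hours * levels; }
    double projectCost(const SliceEstimate& s, int levels)  { return s.cost * levels; }

    // If the slice was only partially "done" (corners cut, bugs left behind),
    // the true per-level effort is the observed effort divided by the
    // completeness fraction, so the naive estimate is too low.
    double trueProjectHours(const SliceEstimate& s, int levels, double completeness)
    {
        return (s.hours / completeness) * levels;
    }
    ```

    With the numbers from the Super Mario Bros. example (100 hours, $1,000, 30 levels) the straight extrapolation gives 3,000 hours and $30,000; if the slice really represented only 75% of the true per-level effort, the honest figure is closer to 4,000 hours.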

    So when will you be done?

    So bringing this back to Archmage Rises, I now have built enough of the core features to be able to estimate the rest of the work to complete the MVP vertical slice. It is crucial that I get the slice right and know my effort/costs so that I can see what it will take to finish the whole game.

    I set up the seven remaining sprints into my handy dandy SCRUM tool Axosoft, and this is what I got:


    That wasn't very encouraging. :-) One of the reasons is that, as I have ideas or interact with fans on Facebook or the forums, I write user stories in Axosoft so I don't forget them. This means the number of user stories has grown since I began tracking the project in August; it's been growing faster than I have been completing them. So the software is telling the truth: Based on your past performance, you will never finish this project.

    I went in and moved all the "ideas" out of the actual scheduled sprints with concrete work tasks, and this is what I got:


    January 23, 2015

    This is when the vertical slice is estimated to be complete. I am just about to tell you why it's still wrong, but first I have to add cream and milk to the soup. Ok! Now that it's happily simmering away, I can get to the second part.

    Part II: Scheduling the Indie Life

    I am 38 and have been married to a beautiful woman for 15 years. Over these years, my wife has heard ad nauseam that I want to make video games. When she married me, I was making pretty good coin leading software projects for large e-commerce companies in Toronto. I then went off on my own. We had some very lean years as I built up my mobile software business.

    We can't naturally have kids, so we made a “Frankenbaby” in a lab. My wife gave birth to our daughter Claire. That was two years ago.


    My wife is a professional and also works. We make roughly the same income. So around February of this year, I went to her and said, "This Archmage thing might have legs, and I'd like to quit my job and work on it full time." My plan was to live off her—a 50% drop in household income. Oh, and on top of that, I'd also like to spend thousands upon thousands of dollars on art, music, tools, and any games that catch my fancy on Steam.

    It was a sweetheart offer, don't you think?

    I don't know what it is like to be the recipient of an amazing opportunity like this, but I think her choking and gasping for air kind of said it all. :-)

    After thought and prayer, she said, "I want you to pursue your dream. I want you to build Archmage Rises."

    Now I write this because I have three game devs in my immediate circle—each of whom is currently working from home and living off their spouse's income. Developers have written me asking how they can talk with their spouse about this kind of major life transition.

    Lesson 1: Get “Buy In,” not Agreement

    A friend’s wife doesn't really want him to make video games. She loves him, so when they had that air-gasping indie game sit down conversation she said, "Okay"—but she's really not on board.

    How do you think it will go when he needs some money for the game?
    Or when he's working hard on it and she feels neglected?
    Or when he originally said the game will take X months but now says it will take X * 2 months to complete?


    Yep! Fights.

    See, by not "fighting it out" initially, by one side just caving, what really happened was that one of them said, "I'd rather fight about this later than now." Well, later is going to come. Over and over again. Until the core issue is resolved.

    My friend and I believe marriage is a committed partnership for life. We're in it through thick and thin, no matter how stupid or crazy it gets. It's not roommates sharing an Internet bill; this is life together.

    So they both have to be on the same page, because the marriage is more important than any game. Things break down and go horribly wrong when the game/dream is put before the marriage. This means if she is really against it deep down, he has to be willing to walk away from the game. And he is, for her.

    One thing I got right off the bat is my wife is 100% partnered with me in Archmage Rises. Whether it succeeds or fails, there are no fights or "I told you so"s along the way.

    Lesson 2: Do Your Part


    So why am I making soup? Because my wife is out there working, and I'm at home. Understandably, I have taken on more of the domestic duties. That's how I show her I love her and appreciate her support. I didn't "sell" domestic duties in order to get her buy-in; it is a natural response. So, with me working downstairs, I can make soup for dinner tonight, load and unload the dishwasher, watch Claire, and generally reduce the household burden on her as she takes on the breadwinning role.

    If I shirk household duties and focus solely on the game (and the game flops!), boy oh boy is there hell to pay.

    Gotta check that soup. Yep, we're good.

    Lesson 3: Do What You Say

    Claire is two. She loves to play ball with me. It's a weird game with a red nerf soccer ball where the rules keep changing from catching, to kicking, to avoiding the ball. It's basically Calvinball. :-)


    She will come running up to my desk, pull my hand off the mouse, and say, "Play ball?!" Sometimes I'm right in the middle of tracking down a bug, but at other times I'm not that intensely involved in the task. The solution is to either play ball right now (I've timed it with a stop watch; it only holds her interest for about seven minutes), or promise her to play it later. Either way, I'm playing ball with Claire.

    And this is important, because to be a crappy dad and have a great game just doesn't look like success to me. To be a great dad with a crappy game? Ya, I'm more than pleased with that.

    Now Claire being two, she doesn't have a real grasp of time. She wants to go for a walk "outside" at midnight, and she wants to see the moon in the middle of the afternoon. So when I promise to play ball with her "later," there is close to a 0% chance of her remembering or even knowing when later is. But who is responsible in this scenario for remembering my promise? Me. So when I am able, say in between bugs or end of the work day, I'll go find her and we'll play ball. She may be too young to notice I'm keeping my promises, but when she does begin to notice I won't have to change my behavior. She'll know dad is trustworthy.

    Lesson 4: Keep the Family in the Loop like a Board of Directors

    If my family truly is partnered with me in making this game, then I have to understand what it is like from their perspective:

    1. They can't see it
    2. They can't play it
    3. They can't help with it
    4. They don't know how games are even made
    5. They have no idea if what I am making is good, bad, or both


    They are totally in the dark. Now what is a common reaction to the unknown? Fear. We generally fear what we do not understand. So I need to understand that my wife secretly fears what I'm working on won't be successful, that I'm wasting my time. She has no way to judge this unless I tell her.

    So I keep her up to date with the ebb and flow of what is going on. Good or bad. And because I tell her the bad, she can trust me when I tell her the good.

    A major turning point was the recent partnership with Alienware. My wife can't evaluate my game design, but if a huge company like Alienware thinks what I'm doing is good, that third party perspective goes a long way with her. She has moved from cautious to confident.

    The Alienware thing was a miracle out of the blue, but that doesn't mean you can't get a third party perspective on your game (a journalist?) and share it with your significant other.

    Lesson 5: Life happens. Put It in the Schedule.

    I've been scheduling software developers for 20 years. I no longer program in HTML3, but I still make schedules—even if it is just for me.

    Customers (or publishers) want their projects on the date you set. Well, actually, they want it sooner—but let's assume you've won that battle and set a reasonable date.

    If there is one thing I have learned in scheduling large team projects, it is that unknown life things happen. The only way to handle that is to put something in the schedule for it. At my mobile company, we use a rule of 5.5-hour days. That means a 40-hour-a-week employee does 27.5 hours a week of active project time; the rest is lunch, doctor appointments, meetings, phone calls with the wife, renewing their mortgage, etc. Over a 7-8 month project, there is enough buffer built in to handle the unexpected sick kid, sudden funeral, etc.
    Also, plug in statutory holidays, one sick day a month, and any vacation time. You'll never regret including it; you'll always regret not including it.
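    As a back-of-the-envelope sketch, the buffering rule above amounts to (function names are my own, not from any scheduling tool):

    ```cpp
    #include <cassert>

    // Back-of-the-envelope sketch of the 5.5-hour-day rule: only about
    // 5.5 of a day's 8 hours are active project time, which builds buffer
    // into the calendar for life to happen.
    double activeHoursPerWeek(double workDaysPerWeek, double productiveHoursPerDay)
    {
        return workDaysPerWeek * productiveHoursPerDay;  // 5 * 5.5 = 27.5
    }

    // Calendar weeks required for a given amount of active project work,
    // assuming a 5-day week at 5.5 productive hours per day.
    double calendarWeeksNeeded(double activeProjectHours)
    {
        return activeProjectHours / activeHoursPerWeek(5.0, 5.5);
    }
    ```

    So 275 hours of real project work is scheduled as ten calendar weeks, not the seven a naive 40-hour week would suggest.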

    That's great for work, but it doesn't work for the indie at home.


    To really dig into the reasons why would be another article, so I'll just jump to the conclusion:

    1. Some days, you get stuck making soup. :-)
    2. Being at home and dealing with kids ranges from playing ball (short) to trips to the emergency room (long)
    3. Being at home makes you the "go to" family member for whatever crops up. "Oh, we need someone to be home for the furnace guy to do maintenance." Guess who writes blogs and just lost an hour of his day watching the furnace guy?
    4. There are many, many hats to wear when you’re an indie. From art direction for contract artists to keeping everyone organized, there is a constant stream of stuff outside your core discipline you'll just have to do to keep the game moving forward.
    5. Social media marketing may be free, but writing articles and responding to forum and Facebook posts takes a lot of time. More importantly, it takes a lot of energy.

    After three months, I have not been able to come up with a good rule of thumb for how much programming work I can get done in a week. I've been tracking it quite precisely for the last three weeks, and it has varied widely. My goal is to hit six hours of programming in an 8-12 hour day.

    Putting This All Together


    Oh, man! This butternut squash soup is AMAZING! I'm not much of a soup guy, and this is only my second attempt at it—but this is hands-down the best soup I've ever had at home or in a restaurant! See the endnotes for the recipe—because you aren't truly indie unless you are making a game while making soup!

    So in order to try to hit my January 23rd deadline, I need to get more programming done. One way to achieve this is to stop writing weekly dev blogs and switch to a monthly format. It's ironic that writing fewer blogs makes it look like less progress is being made, when it's the exact opposite! I hope to gain back 10 hours a week by moving to a monthly format.

    I'll still keep updating the Facebook page regularly. Because, well, it's addictive. :-)

    So along the lines of Life Happens, it is about to happen to me. Again.

    We were so impressed with Child 1.0 we decided to make another. Baby Avery is scheduled to come by C-section one week from today.

    How does this affect my January 23rd deadline? Well, a lot.
    • Will baby be healthy?
    • Will mom have complications?
    • How will a newborn disrupt the disposition or sleeping schedule of a two-year-old?
    These are all things I just don't know. I'm at the front end of the cone of uncertainty again. :-)



    Agile Game Development with Scrum – great book on hows and whys of Scrum for game dev. Only about the first half is applicable to small indies.

    Axosoft SCRUM tool – Free for single developers; contact support to get a free account (it's not advertised)

    You can follow the game I'm working on, Archmage Rises, by joining the newsletter and Facebook page.

    You can tweet me @LordYabo


    Indie Game Developer's Butternut Squash Soup
    (about 50 minutes; approximately 130 calories per 250ml/cup serving)

    Dammit Jim I'm a programmer not a food photographer!

    I created this recipe as part of a challenge to my wife that I could make a better squash soup than the one she ordered in the restaurant. She agrees, this is better! It is my mashup of three recipes I found on the internet.
    • 2 butternut squash (about 3.5 pounds), seeded and quartered
    • 4 cups chicken or vegetable broth
    • 1 tablespoon minced fresh ginger (about 50g)
    • 1/4 teaspoon nutmeg
    • 1 yellow onion diced
    • Half a red pepper diced (or whole if you like more kick to your soup)
    • 1 tablespoon kosher salt
    • 1 teaspoon black pepper
    • 1/3 cup honey
    • 1 cup whipping cream
    • 1 cup milk
    Peel squash, seed, and cut into small cubes. Put in a large pot with broth on a low boil for about 30 minutes.
    Add red pepper, onion, honey, ginger, nutmeg, salt, pepper. Place over medium heat and bring to a simmer for approximately 6 minutes. Using a stick blender, puree the mixture until smooth. Stir in whipping cream and milk. Simmer 5 more minutes.

    Serve with a dollop of sour cream in the middle and sprinkling of sour dough croutons.

  22. A Room With A View

    A Viewport allows for a much larger and richer 2-D universe in your game. It allows you to zoom in, pan across, and scale the objects in your world based on what the user wants to see (or what you want them to see).

    The Viewport is a software component (written in C++ this time) that participates in a larger software architecture. UML class and sequence diagrams (below) show how these interactions are carried out.

    The algorithms used to create the Viewport are not complex. The ubiquitous line equation, y = m·x + b, is all that is needed to create the effect of the Viewport. The aspect ratio of the screen is also factored in so that "squares can stay squares" when rendered.

    Beyond the basic use of the Viewport, allowing entities in your game to map their position and scale onto the display, it can also be a larger participant in the story your game tells and the mechanics of making your story work efficiently. Theatrical camera control, facilitating the level of detail, and culling graphics operations are all real-world uses of the Viewport.

    NOTE: Even though I use Box2D for my physics engine, the concepts in this article are independent of that, or even of using a physics engine at all.

    The Video

    The video below shows this in action.

    The Concept

    The world is much bigger than what you can see through your eyes. You hear a sound. Where did it come from? Over "there". But you can't see that right now. You have to move "there", look around, see what you find. Is it an enemy? A friend? A portal to the bonus round? By only showing your player a portion of the bigger world, they are goaded into exploring the parts they cannot see. This way lies a path to immersion and entertainment.

    A Viewport is a slice of the bigger world. The diagram below shows the basic concept of how this works.


    The Game World (left side) is defined to be square and in meters, the units used in Box2D. The world does not have to be square, but it means one less parameter to carry around and worry about, so it is convenient.

    The Viewport itself is defined as a scale factor of the respective width/height of the Game World, with the width of the Viewport additionally scaled by the aspect ratio of the screen. This, too, is convenient: if the Viewport were "square" like the world, it would have to lie either completely inside the non-square Device Screen or with part of it outside the Device Screen, making it unusable for the "IsInView" operations that are useful (see Other Uses at the end).

    The "Entity" is deliberately shown as partially inside the Viewport. When displayed on the Device Screen, it is also only shown as partially inside the view. Its aspect on the screen is not skewed by the size of the screen relative to the world size. Squares should stay squares, etc.

    The "nuts and bolts" of the Viewport are linear equations mapping the two corner points (top left, bottom right) in the coordinate system of the world onto the screen coordinate system. From a "usage" standpoint, it maps the positions in the simulated world (meters) to a position on the screen (pixels). There will also be times when it is convenient to go the other way and map from pixels to meters. The Viewport class handles the math for the linear equations, computing them when needed, and also provides interfaces for the pixel-to-meter or meter-to-pixel transformations.
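    A minimal sketch of that mapping, assuming a square world and the corner-point linear equations described above (this is an illustration, not the demo's actual Viewport class):

    ```cpp
    #include <cassert>
    #include <cmath>

    // Illustrative Viewport sketch: world meters -> screen pixels via the
    // line equation y = m*x + b on each axis, with the viewport width scaled
    // by the screen aspect ratio so that "squares stay squares".
    struct Viewport
    {
        float left, right, top, bottom;  // world-space corners of the view (meters)
        float screenW, screenH;          // screen size (pixels)

        void worldToPixel(float wx, float wy, float& px, float& py) const
        {
            float mx = screenW / (right - left);  // slope: pixels per meter (x)
            float bx = -mx * left;                // offset: 'left' maps to pixel 0
            float my = screenH / (top - bottom);  // slope: pixels per meter (y)
            float by = -my * bottom;              // offset: 'bottom' maps to pixel 0
            px = mx * wx + bx;
            py = my * wy + by;
        }

        // The inverse mapping, for the times it is convenient to go from
        // pixels back to meters.
        void pixelToWorld(float px, float py, float& wx, float& wy) const
        {
            wx = px * (right - left) / screenW + left;
            wy = py * (top - bottom) / screenH + bottom;
        }
    };

    // Build a viewport from a center point, the (square) world size, a scale
    // factor, and the screen size; the width follows the aspect ratio.
    Viewport makeViewport(float cx, float cy, float worldSize, float scale,
                          float screenW, float screenH)
    {
        float aspect = screenW / screenH;
        float halfH = 0.5f * worldSize * scale;  // viewport half-height (meters)
        float halfW = halfH * aspect;            // half-width scaled by aspect
        return Viewport{ cx - halfW, cx + halfW, cy + halfH, cy - halfH,
                         screenW, screenH };
    }
    ```

    Because the width is derived from the height via the aspect ratio, the pixels-per-meter slope comes out identical on both axes, which is exactly the "squares stay squares" property.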

    Note that the size of the Game World used here is also deliberately ambiguous. While the size of individual Box2D objects should be between 0.1m and 10m, the world can be much larger as needed, within realistic use of the float32 precision used in Box2D. That being said, the Viewport size is based on a scale factor of the Game World size, but it is conceivable (and legal) to move the Viewport outside of the "established" Game World. What happens when you view things "off the grid" is entirely up to your game design.

    Classes and Sequences

    The Viewport does not live by itself in the ecosystem of the game architecture. It is a component that participates in the architecture. The diagram below shows the major components used in the Missile Demo application.


    The main details of each class have been omitted; we're more interested in the overall component structure than internal APIs at this point.

    Main Scene

    The MainScene (top left) is the container for all the visual elements (CCLayer-derived objects) and owner of an abstract interface, the MovingEntityIFace. Only one instance exists at a time. The MainScene creates a new one when signaled by the DebugMenuLayer (user input) to change the Entity. Commands to the Entity are also executed via the MainScene. The MainScene also acts as the holder of the Box2D world reference.

    Having the MainScene tie everything together is perfectly acceptable for a small single-screen application like this demonstration. In a larger multi-scene system, some sort of UI Manager approach would be used.

    Viewport and Notifier

    The Viewport (lower right) is a Singleton. This is a design choice. The motivations behind it are:
    • There is only one screen the user is looking at.
    • Lots of different parts of the graphics system may use the Viewport.
    • It is much more convenient to do it as a "global" singleton than to pass the reference around to all potential consumers.
    • Deriving it from the SingletonDynamic template ensures that it follows the Init/Reset/Shutdown model used for all the Singleton components. Its life cycle is entirely predictable: it always exists.

    Having certain parts of the design as a singleton may make it hard to envision how you would handle other types of situations, like a split screen or a mini-map. If you needed to deal with such a situation, at least one strategy would be to factor the base functionality of the "Viewport" into a class and then construct a singleton to handle the main viewport, one for the mini-map, etc. Essentially, if you only have "one" of something, the singleton pattern is helping you to ensure ease of access to the provider of the feature and also guaranteeing the life cycle of that feature matches into the life cycle of your design.

    This is (in my mind) absolutely NOT the same thing as a variable that can be accessed and modified without an API from anywhere in your system (i.e. a global variable). When you wrap it and control the life cycle, you get predictability and a place to put a convenient debug point. When you don't, you have fewer guarantees about the initial state, and you have to put debug points at every place that touches the variable to figure out how it evolves over time. That inversion (one debug point vs. lots of debug points) can crush your productivity.

    If you felt that the singleton approach was not for you, or not per company or group policy, etc., you could create an instance of that "viewport" class and pass it to all the interested consumers as a reference. You would still need a place for that instance to live, and you would need to manage its life cycle.

    You have to weigh the design goals against the design and make a decision about what constitutes the best tool for the job, often balancing conflicting goals, design requirements, and the strong opinions of your peers. Rising to the real challenges this represents is a practical reality of "the job". And possibly why indie developers like to work independently.

    The Notifier is also pictured to highlight its importance; it is an active participant when the Viewport changes. The diagram below shows exactly this scenario.


    The user places both fingers on the screen and begins to move them together (1.0). This move is received by the framework and interpreted by the TapDragPinchInput as a Pinch gesture, which it signals to the MainScene (1.1). The MainScene calls SetCenter on the Viewport (1.2) which immediately leads to the Viewport letting all interested parties know the view is changing via the Notifier (1.3). The Notifier immediately signals the GridLayer, which has registered for the event (1.4). This leads to the GridLayer recalculating the position of its grid lines (1.5). Internally, the GridLayer maintains the grid lines as positions in meters. It will use the Viewport to convert these to positions in pixels and cache them off. The grid is not actually redrawn until the next draw(...) call is executed on it by the framework.

    The first set of transactions was executed synchronously as the user moved their fingers; each time a new touch event came in, the change was made. The next sequence (starting with 1.6) is initiated when the framework calls the Update(...) method on the main scene. This causes an update of the Box2D physics model (1.7). At some point later, the framework calls the draw(...) method on the Box2dDebugLayer (1.8). This uses the Viewport to calculate the display positions of all the Box2D bodies (and other elements) it will display (1.9).

    These two sequences demonstrate the two main types of Viewport update sequences. The first is triggered by a direct change of the view, leading to events that trigger immediate updates. The second is driven by the framework on every major update of the model (as in MVC).


    The general method for mapping the world space limits (Wxmin, Wxmax) onto the screen coordinates (0,Sxmax) is done by a linear mapping with a y = mx + b formulation. Given the two known points for the transformation:

    Wxmin (meters) maps onto (pixel) 0 and
    Wxmax (meters) maps onto (pixel) Sxmax
    Solving y0 = m*x0 + b and y1 = m*x1 + b yields:

    m = Sxmax/(Wxmax - Wxmin) and
    b = -Wxmin*Sxmax/(Wxmax - Wxmin) (= -m * Wxmin)

    We replace (Wxmax - Wxmin) with scale*(Wxmax-Wxmin) for the x dimension and scale*(Wymax-Wymin)/aspectRatio in the y dimension.

    The value (Wxmax - Wxmin) = scale*worldSizeMeters (xDimension)

    The value Wxmin = viewport center - 1/2 the width of the viewport


    In code, this is broken into two operations. Whenever the center or scale changes, the slope/offset values are calculated immediately.

    void Viewport::CalculateViewport()
    {
       // Bottom Left and Top Right of the viewport
       _vSizeMeters.width = _vScale*_worldSizeMeters.width;
       _vSizeMeters.height = _vScale*_worldSizeMeters.height/_aspectRatio;
       _vBottomLeftMeters.x = _vCenterMeters.x - _vSizeMeters.width/2;
       _vBottomLeftMeters.y = _vCenterMeters.y - _vSizeMeters.height/2;
       _vTopRightMeters.x = _vCenterMeters.x + _vSizeMeters.width/2;
       _vTopRightMeters.y = _vCenterMeters.y + _vSizeMeters.height/2;
       // Scale from Pixels/Meters
       _vScalePixelToMeter.x = _screenSizePixels.width/(_vSizeMeters.width);
       _vScalePixelToMeter.y = _screenSizePixels.height/(_vSizeMeters.height);
       // Offset based on the screen center.
       _vOffsetPixels.x = -_vScalePixelToMeter.x * (_vCenterMeters.x - _vScale*_worldSizeMeters.width/2);
       _vOffsetPixels.y = -_vScalePixelToMeter.y * (_vCenterMeters.y - _vScale*_worldSizeMeters.height/2/_aspectRatio);
       _ptmRatio = _screenSizePixels.width/_vSizeMeters.width;
    }

    Note:  Whenever the viewport changes, we emit a notification to the rest of the system to let interested parties react. This could be broken down into finer detail for changes in scale vs. changes in the center of the viewport.

    When a conversion from world space to viewport space is needed:

    CCPoint Viewport::Convert(const Vec2& position)
    {
       float32 xPixel = position.x * _vScalePixelToMeter.x + _vOffsetPixels.x;
       float32 yPixel = position.y * _vScalePixelToMeter.y + _vOffsetPixels.y;
       return ccp(xPixel,yPixel);
    }

    And, occasionally, we need to go the other way.

    /* To convert a pixel to a position (meters), we invert
     * the linear equation to get x = (y-b)/m.
     */
    Vec2 Viewport::Convert(const CCPoint& pixel)
    {
       float32 xMeters = (pixel.x-_vOffsetPixels.x)/_vScalePixelToMeter.x;
       float32 yMeters = (pixel.y-_vOffsetPixels.y)/_vScalePixelToMeter.y;
       return Vec2(xMeters,yMeters);
    }
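    Since the two conversions are exact inverses of each other, a round trip through them should return the original point. A self-contained sketch with simplified stand-in types (Pt replaces Vec2/CCPoint; the scale and offset numbers are arbitrary):

```cpp
#include <cassert>
#include <cmath>

// Simplified stand-in for Vec2/CCPoint to keep the sketch self-contained.
struct Pt { float x, y; };

// Shared scale/offset, as cached by CalculateViewport().  Values arbitrary.
static const Pt kScale  = { 48.0f, 48.0f };   // pixels per meter
static const Pt kOffset = { 240.0f, 160.0f }; // screen-center offset in pixels

// Forward mapping (meters -> pixels): y = m*x + b.
inline Pt ToPixels(Pt meters)
{
   Pt p = { meters.x * kScale.x + kOffset.x,
            meters.y * kScale.y + kOffset.y };
   return p;
}

// Inverse mapping (pixels -> meters): x = (y-b)/m.
inline Pt ToMeters(Pt pixels)
{
   Pt m = { (pixels.x - kOffset.x) / kScale.x,
            (pixels.y - kOffset.y) / kScale.y };
   return m;
}
```

    Any world point converted to pixels and back should come out unchanged (within float precision), which is a handy invariant to assert in a debug build.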

    Position, Rotation, and PTM Ratio

    Box2D creates a physics simulation of objects between the sizes of 0.1m and 10m (according to the manual, if the scaled size is outside of this, bad things can happen...the manual is not lying). Once you have your world up and running, you need to put the representation of the bodies in it onto the screen. To do this, you need its rotation (relative to x-axis), position, and a scale factor to convert the physical meters to pixels. Let's assume you are doing this with a simple sprite for now.

    The rotation is the easiest. Just ask the b2Body what its rotation is and convert it to degrees with CC_RADIANS_TO_DEGREES(...). Use this for the angle of your sprite.

    The position is obtained by asking the body for its position in meters and calling the Convert(...) method on the Viewport. Let's take a closer look at the code for this.

    /* To convert a position (meters) to a pixel, we use
     * the y = mx + b conversion.
     */
    CCPoint Viewport::Convert(const Vec2& position)
    {
       float32 xPixel = position.x * _vScalePixelToMeter.x + _vOffsetPixels.x;
       float32 yPixel = position.y * _vScalePixelToMeter.y + _vOffsetPixels.y;
       return ccp(xPixel,yPixel);
    }

    This is about as simple as it gets in the math arena. A linear equation to map the position from the simulated physical space (meters) to the Viewport's view of the world on the screen (pixels). A key nuance here is that the scale and offset are calculated ONLY when the viewport changes.

    The scale is called the pixel-to-meter ratio, or just PTM Ratio. If you look inside the CalculateViewport method, you will find this rather innocuous piece of code:

       _ptmRatio = _screenSizePixels.width/_vSizeMeters.width;

    The PTM Ratio is computed dynamically based on the width of the viewport (_vSizeMeters). Note that it could be computed based on the height instead; just be sure to define the aspect ratio, etc., appropriately.

    If you search the web for articles on Box2D, whenever they get to the display portion, they almost always have something like this:

    #define PTM_RATIO 32

    Which is to say, every physical body is represented by a ratio of 32 pixels (or some other value) for each meter in the simulation. The original iPhone screen was 480 x 320, and Box2D represents objects on the scale of 0.1m to 10m, so a full-sized object would take up the full width of the screen. However, it is a fixed value, which is fine.

    Something very interesting happens though, when you let this value change. By letting the PTM Ratio change and scaling your objects using it, the viewer is given the illusion of depth. They can move into and out of the scene and feel like they are moving into and out of the scene in the third dimension.
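    The relationship is easy to see with a few numbers. A sketch of the dynamic PTM Ratio as a function of the viewport scale (the screen and world widths below are illustrative, not taken from the article's app):

```cpp
#include <cassert>

// Sketch: the PTM Ratio falls out of the viewport width in meters,
// which is scale * worldWidthMeters (the screen width is fixed).
inline float PtmRatio(float screenWidthPixels, float worldWidthMeters, float scale)
{
   return screenWidthPixels / (scale * worldWidthMeters);
}
```

    Zooming out (a larger scale means more meters visible on screen) shrinks the PTM Ratio, so every body draws smaller; zooming in does the opposite. That per-frame change in apparent size is what sells the illusion of depth.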

    You can see this in action when you use the pinch operation on the screen in the App. The Box2DDebug uses the Viewport's PTM Ratio to change the size of the displayed polygons. It can be (and has been) used to scale sprites as well, so that you can zoom in/out.

    Other Uses

    With a little more work or a few other components, the Viewport concept can be expanded to yield other benefits. All of these uses are complementary. That is to say, they can all be used at the same time without interfering with each other.


    The Viewport itself is "dumb". You tell it to change and it changes. It has no concept of time or motion; it only executes at the time of command and notifies (or is polled) as needed. To execute theatrical camera actions, such as panning, zooming, or combinations of the two, you need a "controller" for the Viewport that has a notion of state. This controller is the camera.

    Consider the following API for a Camera class:

    class Camera
    {
    public:
       // If the camera is performing any operation, return true.
       bool IsBusy();
       // Move/Zoom the Camera over time.
       void PanToPosition(const vec2& position, float32 seconds);
       void ZoomToScale(float32 scale, float32 seconds);
       // Expand/Contract the displayed area without changing
       // the scale directly.
       void ExpandToSize(float32 size, float32 seconds);
       // Stop the current operation immediately.
       void Stop();
       // Called every frame to update the Camera state
       // and modify the Viewport.  The dt value may
       // be actual or fixed in a fixed timestep
       // system.
       void Update(float32 dt);
    };

    This interface presents a rudimentary Camera. This class interacts with the Viewport over time when commanded. You can use this to create cut scenes, quickly show items/locations of interest to a player, or other cinematic events.
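    A minimal sketch of what the PanToPosition/Update pair might do internally, assuming simple linear interpolation toward the target. The real Camera would push the interpolated center into the Viewport each frame; here we just track the center so the idea stands alone:

```cpp
#include <cassert>
#include <cmath>

struct Vec2f { float x, y; };

// Sketch of a camera pan: interpolate the viewport center toward a
// target position over a fixed duration, driven by Update(dt).
class PanCamera
{
public:
   PanCamera() : _busy(false), _elapsed(0), _duration(0) { _pos.x = _pos.y = 0; }

   void PanToPosition(const Vec2f& target, float seconds)
   {
      _start = _pos; _target = target;
      _duration = seconds; _elapsed = 0; _busy = true;
   }

   void Update(float dt)
   {
      if (!_busy) return;
      _elapsed += dt;
      float t = _elapsed / _duration;
      if (t >= 1.0f) { t = 1.0f; _busy = false; }   // Pan complete.
      // Linear interpolation; an ease-in/out curve would be a drop-in change.
      _pos.x = _start.x + (_target.x - _start.x) * t;
      _pos.y = _start.y + (_target.y - _start.y) * t;
      // A real Camera would call Viewport::SetCenter(_pos) here.
   }

   bool IsBusy() const { return _busy; }
   Vec2f Position() const { return _pos; }

private:
   bool _busy;
   float _elapsed, _duration;
   Vec2f _pos, _start, _target;
};
```

    The IsBusy() flag lets game logic wait for a cut-scene pan to finish before, say, spawning the next wave.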

    A more sophisticated Camera could keep track of a specific entity and move the viewport automatically if the entity started to move too close to the viewable edge.

    Level of Detail

    In a 3-D game, objects that are of little importance to the immediate user, such as objects far off in the distance, don't need to be rendered with high fidelity. If it is only going to be a "dot" to you, do you really need 10k polygons to render it? The same is true in 2-D as well. This is the idea of "Level of Detail".

    The PTMRatio(...) method/member of the Viewport gives the number of pixels an object will occupy given its size in meters. If you use this to adjust the scale of your displayed graphics, you can create elements that are "sized" properly for the screen relative to the other objects and the zoom level. You can ALSO substitute other graphics when the displayed object will appear to be little more than a blob. This can cut down dramatically on the GPU load and improve the performance of your game.

    For example, in Space Spiders Must Die!, each Spider is not a single sprite, but a group of sprites loaded from a sprite sheet. This sheet must be loaded into the GPU, the graphics drawn, then another sprite sheet loaded in for other objects. When the camera is zoomed all the way out, we could get a lot more zip out of the system if we didn't have to swap out the sprite sheet at all and just drew a single sprite for each spider. A much smaller series of "twinkling" sprites could easily replace the full-size spider.
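    The sprite-substitution decision could look like the sketch below. The 16-pixel threshold and the representation names are invented for illustration; they are not from the actual game:

```cpp
#include <cassert>
#include <string>

// Level-of-detail sketch: below some on-screen size in pixels, substitute
// a cheap single "twinkle" sprite for the full multi-sprite spider.
// The threshold and names are hypothetical.
inline std::string ChooseSpiderRepresentation(float sizeMeters, float ptmRatio)
{
   const float kMinFullDetailPixels = 16.0f;
   float sizePixels = sizeMeters * ptmRatio;   // on-screen size
   return sizePixels < kMinFullDetailPixels ? "twinkle" : "fullSprite";
}
```

    Because the PTM Ratio already changes with zoom, this one comparison automatically degrades detail as the camera pulls out.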

    Culling Graphics Operations

    If an object is not in view, why draw it at all? Well...you might still draw it...if the cost of keeping it from being drawn exceeds the cost of drawing it. In Cocos2D-x, it can get sticky to figure out whether or not you are really getting a lot by "flagging" elements off the screen and controlling their visibility (the GPU would probably handle it from here).

    However, there is a much less ambiguous situation: skeletal animations. Rather than use a lot of animated sprites (and sprite sheets), we tend to use Spine to create skeletal animated sprites. These absolutely use a lot of calculations which are completely wasted if you can't see the animation because it is off camera. To save CPU cycles, which are even more limited these days than GPU cycles for the games we make, we can let the AI for the animation keep running but only update the "presentation" when needed.

    The Viewport provides a method called IsInView(...) just for this purpose. Using it, you can flag entities as "in view" or "not in view". Internally, the representation used for the entity can make the decision to update or not based on this.
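    A plausible implementation of such an IsInView(...) check is a simple axis-aligned overlap test between the entity's bounding box and the viewport rectangle, both in meters (the actual Viewport signature may differ):

```cpp
#include <cassert>

struct Rect { float xmin, ymin, xmax, ymax; };

// Sketch of an IsInView(...) test: axis-aligned overlap between the
// viewport rectangle and an entity's bounding box, both in meters.
inline bool IsInView(const Rect& viewport, const Rect& entity)
{
   // Separated on the x axis?
   if (entity.xmax < viewport.xmin || entity.xmin > viewport.xmax) return false;
   // Separated on the y axis?
   if (entity.ymax < viewport.ymin || entity.ymin > viewport.ymax) return false;
   return true;   // Overlapping (at least partially visible).
}
```

    Note that an entity straddling the viewport edge counts as in view, which is what you want: a half-visible animation still needs its presentation updated.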


    The Viewport has uses that allow you to create a richer world for the player to "live" in, both by providing "depth" via zooming and by allowing you to keep content outside the Viewport. It also provides opportunities to improve the graphics processing efficiency of your game.

    Get the source code for this post, hosted on GitHub, by clicking here.

    Article Update Log

    21 Nov 2014: Added update about singleton usage.
    6 Nov 2014: Initial release

  23. What's In Your Toolbox?

    Big things are made of little things. Making things at all takes tools. We all know it is not the chisel that creates the sculpture, but the hand that guides it. Still, having a pointy chisel to break the rock is probably better than using your hand.

    In this article, I'll enumerate the software tools that I use to put together various parts of my software. I learned about these tools by reading sites like this one, so feel free to contribute your own. I learned how to use them by setting a small goal for myself and figuring out whether or not the tool could help me achieve it. Some made the cut. Some did not. Some may be good for you. Others may not be.

    Software Tools

    Each entry lists the name, what it is used for, its cost, and where to find it, followed by notes.

    1. Cocos2d-x (C++ Graphical Framework; Free; www.cocos2d-x.org): Has lots of stuff out of the box and a relatively light learning curve. We haven't used it cross-platform (yet) but many have before us, so no worries.
    2. Box2D (2-D Physics; Free; www.box2d.org): No longer the default for cocos2d-x :( but still present in the framework. I still prefer it over Chipmunk. Now you know at least two to try...
    3. Gimp (Bitmap Graphics Editor; Free; www.gimp.org): Above our heads but has uses for slicing, dicing, and mutilating images. Great for doing backgrounds.
    4. Inkscape (Vector Graphics Editor; Free; www.inkscape.org): Our favorite tool for creating vector graphics. We still suck at it, but at least the tool doesn't fight us.
    5. Paper (Graphics Editor for iPad; ~$10; App Store): This is an incredible sketching tool. We use it to generate graphics, spitball ideas for presentations, and create one-offs for posts.
    6. Spine (Skeletal Animation; ~$100; www.esotericsoftware.com): I *wish* I had enough imagination to get more out of this incredible tool.
    7. Physics Editor (See Notes; $20; www.codeandweb.com): Turns images into shape data that Box2D can use. Has some annoyances but very solid on the whole.
    8. Texture Packer (See Notes; $40; www.codeandweb.com): Puts images together into a single file so that you can batch them as sprites.
    9. Python (Scripting Language; Free; www.python.org): At some point you will need a scripting language to automate something in your build chain. We use python. You can use whatever you like.
    10. Enterprise Architect (UML Diagrams; ~$130-$200; www.sparxsystems.com): You probably won't need this but we use it to create more sophisticated diagrams when needed. We're not hard core on UML, but we are serious about ideas and a picture is worth a thousand words.
    11. Reflector (See Notes; ~$15; Mac App Store): This tool lets you show your iDevice screen on your Mac. Which is handy for screen captures without the (very slow) simulator.
    12. XCode (IDE; Free; Mac App Store): Cocos2d-x works in multiple IDEs. We are a Mac/Windows shop. Game stuff is on iPads, so we use XCode. Use what works best for you.
    13. Glyph Designer (See Notes; $40; www.71squared.com): Creates bitmapped fonts with data. Seamless integration with Cocos2d-x. Handy when you have a lot of changing text to render.
    14. Particle Designer (See Notes; $60; www.71squared.com): Helps you design the parameters for particle emitter effects. Not sure if we need it for our stuff but we have used these effects before and may again. Be sure to block out two hours of time...the temptation to tweak is incredible.
    15. Sound Bible (See Notes; Free; www.soundbible.com): Great place to find sound clips. Usually the license is just attribution, which is a great karmic bond.
    16. Tiled QT (See Notes; Free; www.mapeditor.org): A 2-D map editor. Cocos2d-x has import mechanisms for it. I haven't needed it, but it can be used for tile/orthogonal map games. May get some use yet.


    A good developer (or shop) uses the tools of others as needed, and develops their own tools for the rest. The tools listed here are specifically software that is available "off the shelf". I did not list a logging framework (because I use my own) or a unit test framework (more complex discussion here) or other "tools" that I have picked up over the years and use to optimize my work flow.

    I once played with Blender, the fabulous open-source 3-D rendering tool. It has about a million "knobs" on it. Using it, I realized I was easily overwhelmed by it, but also that my own tools could easily overwhelm somebody else who was unfamiliar with them and had not taken the time to figure out how to get the most out of them.

    The point of all this is that every solid developer I know figures out the tools to use in their kit and tries to get the most out of them. Not all hammers fit in all hands, though.

    Article Update Log

    5 Nov 2014: Initial Release

  24. Making a Game with Blend4Web Part 6: Animation and FX

    This time we'll speak about the main stages of character modeling and animation, and also will create the effect of the deadly falling rocks.

    Character model and textures

    The character data was placed into two files. The character_model.blend file contains the geometry, the material and the armature, while the character_animation.blend file contains the animation for this character.

    The character model mesh is low-poly:


    This model - just like all the others - lacks a normal map. The color texture was entirely painted on the model in Blender using the Texture Painting mode:


    The texture was then supplemented (4) with the baked ambient occlusion map (2). Its color (1) was initially much paler than required, and was enhanced (3) with the Multiply node in the material. This allowed fine tuning of the final texture's saturation.


    After baking we received the resulting diffuse texture, from which we created the specular map. We brightened up this specular map in the spots corresponding to the blade, the metal clothing elements, the eyes and the hair. As usual, in order to save video memory, this texture was packed into the alpha channel of the diffuse texture.


    Character material

    Let's add some nodes to the character material to create the highlighting effect when the character contacts the lava.


    We need two height-dependent procedural masks (2 and 3) to implement this effect. One of these masks (2) will paint the feet in the lava-contacting spots (yellow), while the other (3) will paint the character legs just above the knees (orange). The material specular value is output (4) from the diffuse texture alpha channel (1).


    Character animation

    Because the character is seen mainly from afar and from behind, we created a simple armature with a limited number of inverse kinematics controlling bones.


    A group of objects, including the character model and its armature, has been linked to the character_animation.blend file. After that we've created a proxy object for this armature (Object > Make Proxy...) to make its animation possible.

    At this game development stage we need just three animation sequences: looping run, idle and death animations.


    Using the specially developed tool - the Blend4Web Anim Baker - all three animations were baked and then linked to the main scene file (game_example.blend). After export from this file the animation becomes available to the programming part of the game.


    Special effects

    During the game the red-hot rocks will keep falling on the character. To visualize this, a set of 5 elements is created for each rock:

    1. the geometry and the material of the rock itself,
    2. the halo around the rock,
    3. the explosion particle system,
    4. the particle system for the smoke trail of the falling rock,
    5. and the marker under the rock.

    The above-listed elements are present in the lava_rock.blend file and are linked to the game_example.blend file. Each element from the rock set has a unique name for convenient access from the programming part of the application.

    Falling rocks

    For diversity, we made three rock geometry types:


    The texture was created by hand in the Texture Painting mode:


    The material is generic, without the use of nodes, with the Shadeless checkbox enabled:


    For the effect of glowing red-hot rock, we created an egg-shaped object with the narrow part looking down, to imitate rapid movement.


    The material of the shiny areas is entirely procedural, without any textures. First of all we apply a Dot Product node to the geometry normals and vector (0, 0, -1) in order to obtain a view-dependent gradient (similar to the Fresnel effect). Then we squeeze and shift the gradient in two different ways and get two masks (2 and 3). One of them (the widest) we paint to the color gradient (5), while the other is subtracted from the first (4) to use the resulting ring as a transparency map.


    The empty node group named NORMAL_VIEW is used for compatibility: in the Geometry node the normals are in the camera space, but in Blend4Web - in the world space.


    The red-hot rocks will explode upon contact with the rigid surface.


    To create the explosion effect we'll use a particle system with a pyramid-shaped emitter. For the particle system we'll create a texture with an alpha channel - this will imitate fire and smoke puffs:


    Let's create a simple material and attach the texture to it:


    Then we setup a particle system using the just created material:


    Activate particle fade-out with the additional settings on the Blend4Web panel:


    To increase the size of the particles during their life span we create a ramp for the particle system:


    Now the explosion effect is up and running!


    Smoke trail

    When the rock is falling a smoke trail will follow it:


    This effect can be set up quite easily. First of all let's create a smoke material using the same texture as for explosions. In contrast to the previous material this one uses a procedural blend texture for painting the particles during their life span - red in the beginning and gray in the end - to mimic the intense burning:


    Now let's proceed to the particle system. A simple plane with its normal oriented down serves as an emitter. This time the emission is looping and lasts longer:


    As before this particle system has a ramp for reducing the particles size progressively:


    Marker under the rock

    It remains only to add a minor detail - the marker indicating the spot to which the rock is falling, just to make the player's life easier. We need a simple unwrapped plane. Its material is fully procedural, no textures are used.


    The Average node is applied to the UV data to obtain a radial gradient (1) with its center in the middle of the plane. We are already familiar with the further procedures. Two transformations result in two masks (2 and 3) of different sizes. Subtracting one from the other gives the visual ring (4). The transparency mask (6) is tweaked and passed to the material alpha channel. Another mask is derived after squeezing the ring a bit (5). It is painted in two colors (7) and passed to the Color socket.



    At this stage the gameplay content is ready. After merging it with the programming part described in the previous article of this series we may enjoy the rich world packed with adventure!

    Link to the standalone application

    The source files of the models are part of the free Blend4Web SDK distribution.

    • Feb 06 2015 01:56 AM
    • by Spunya
  25. How to Create a Scoreboard for Lives, Time, and Points in HTML5 with WiMi5

    This tutorial gives a step-by-step explanation on how to create a scoreboard that shows the number of lives, the time, or the points obtained in a video game.

    To give this tutorial some context, we’re going to use the example project StunPig in which all the applications described in this tutorial can be seen. This project can be cloned from the WiMi5 Dashboard.


    We require two graphic elements to visualize the values of the scoreboards: a “Lives” Sprite which represents the number of lives, and as many Font or Letter Sprites as needed to represent the value of each digit to be shown. The “Lives” Sprite has four animations or image states, one linked to each of the four numerical values of the lives counter.


    The Font or Letter Sprite is a Sprite with 11 animations or image states, which are linked to each of the ten digits 0-9, plus an extra one for the colon (:).


    Example 1. How to create a lives scoreboard

    To manage the lives, we'll need a numeric value for them, which in our example is a number between 0 and 3 inclusive, and a graphic representation, which in our case is three orange-colored stars that change to white as lives are lost, until all of them are white when the number of lives is 0.


    To do this, in the Scene Editor, we must create the instance of the sprite used for the stars. In our case, we’ll call them “Lives”. To manipulate it, we’ll have a Script (“lifeLevelControl”) with two inputs (“start” and “reduce”), and two outputs (“alive” and “death”).


    The “start” input initializes the lives by assigning them a numeric value of 3 and displaying the three orange stars. The “reduce” input lowers the numeric value of lives by one and displays the corresponding stars. As a consequence of triggering this input, one of the two outputs is activated. The “alive” output is activated if, after the reduction, the number of lives is greater than 0. The “death” output is activated when, after the reduction, the number of lives equals 0.

    Inside the Script, we do everything necessary to change the value of lives: displaying the Sprite in relation to the number of lives, triggering the correct output as a function of the number of lives, and, in our example, also playing a negative fail sound when the number of lives goes down.

    In our “lifeLevelControl” Script, we have a “currentLifeLevel” parameter which contains the number of lives, and a parameter which contains the “Lives” Sprite, which is the element on the screen which represents the lives. This Sprite has four animations of states, “0”, “1”, “2”, and “3”.


    The “start” input connector activates the ActionOnParam “copy” blackbox which assigns the value of 3 to the “currentLifeLevel” parameter and, once that’s done, it activates the “setAnimation” ActionOnParam blackbox which displays the “3” animation Sprite.

    The “reduce” input connector activates the “-” ActionOnParam blackbox which subtracts from the “currentLifeLevel” parameter the value of 1. Once that’s done, it first activates the “setAnimation” ActionOnParam blackbox which displays the animation or state corresponding to the value of the “CurrentLifeLevel” parameter and secondly, it activates the “greaterThan” Compare blackbox, which activates the “alive” connector if the value of the “currentLifeLevel” parameter is greater than 0, or the “death” connector should the value be equal to or less than 0.
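    The "lifeLevelControl" flow above can be expressed as plain code rather than blackboxes. This is a sketch of the equivalent logic only, not WiMi5's actual API:

```cpp
#include <cassert>

// Sketch of the lifeLevelControl script: "start" resets to 3 lives,
// "reduce" decrements and reports alive (true) or death (false).
// The WiMi5 version does this with copy/"-"/setAnimation/greaterThan
// blackboxes; only the control flow is reproduced here.
class LifeLevelControl
{
public:
   LifeLevelControl() : _currentLifeLevel(3) {}

   void Start() { _currentLifeLevel = 3; }   // also shows the "3" animation

   // Returns true for the "alive" output, false for "death".
   bool Reduce()
   {
      _currentLifeLevel -= 1;                // also shows the matching animation
      return _currentLifeLevel > 0;
   }

   int Lives() const { return _currentLifeLevel; }

private:
   int _currentLifeLevel;
};
```

    Three calls to Reduce() after Start() walk the counter down 3, 2, 1, 0, with only the last call taking the "death" branch.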

    Example 2. How to create a time scoreboard or chronometer

    In order to manage time, we'll have as a base a numerical time value in thousandths of a second that runs during the round, and a graphic element to display it. This graphic element will be 5 instances of a Sprite that has 10 animations or states, which are the numbers 0-9.



    In our case, we’ll display the time in seconds and thousandths of a second as you can see in the image, counting down; the time starts at the total time and decreases until it reaches zero, finishing the round.

    To do this in the Scenes editor, we must create the 6 instances of the different sprites used for each segment of the time display: the tens and units of seconds, and the hundreds, tens, and units of milliseconds, as well as the colon. In our case, we’ll call them “second.unit”, “second.ten”, “millisec.unit”, “millisec.ten” and “millisec.hundred”.


    In order to manage this time, we’ll have a Script (“RoundTimeControl”) which has 2 inputs (“start” and “stop”) and 1 output (“end”), as well as an exposed parameter called “roundMillisecs” and which contains the value of the starting time.


    The “start” input activates the countdown from the total time and displays the decreasing value in seconds and milliseconds. The “stop” input stops the countdown, freezing the current time on the screen. When the stipulated time runs out, the “end” output is activated, which determines that the time has run out. Inside the Script, we do everything needed to control the time and display the Sprites in relation to the value of time left, activating the “end” output when it has run out.

    In order to use it, all we need to do is put the time value in milliseconds in, either by placing it directly in the “roundMillisecs” parameter or by using a blackbox to assign it. Once that’s been assigned, we activate the “start” input, which will display the countdown until we activate the “stop” input or reach 0, in which case the “end” output will be activated. We can use that output, for example, to remove a life or trigger whatever else we’d like.


    In the “RoundTimeControl” Script, we have a fundamental parameter, “roundMillisecs”, which contains and defines the playing time value in the round. Inside this Script, we also have two other Scripts, “CurrentMsecs-Secs” and “updateScreenTime”, which group together the actions I’ll describe below.

    The activation of the “start” connector activates the “start” input of the Timer blackbox, which starts the countdown. As the defined time counts down, this blackbox updates the “elapsedTime” parameter with the time that has passed since the clock began counting, activating its “updated” output. This occurs from the very first moment and is repeated until the last time the time is checked, when the “finished” output is triggered, announcing that time has run out. Given that the total run time does not have to be a multiple of the interval between time updates and checks, the final value of the elapsedTime parameter will most likely be slightly greater than the requested time, which is something that will have to be kept in mind when necessary.

    The “updated” output tells us we have a new value in the “elapsedTime” parameter and will activate the “CurrentTimeMsecs-Secs” Script which calculates the total time left in total milliseconds and divides it into seconds and milliseconds in order to display it. Once this piece of information is available, the “available” output will be triggered, which will in turn activate the “update” input of the “updateScreenTime” Script which places the corresponding animations into the Sprites displaying the time.

    In the “CurrentMsecs-Secs” Script, we have two fundamental parameters to work with: “roundMillisecs”, which contains and defines the value of playing time in the round, and “elapsedTime”, which contains the amount of time that has passed since the clock began running. In this Script, we calculate the time left and then break down that time in milliseconds into seconds and milliseconds; the latter is done in the “CalculateSecsMillisecs” Script, which I’ll be getting to.


    The activation of the “get” connector starts the calculation of the time remaining: the “-” ActionOnParam blackbox subtracts the elapsed time contained in the “elapsedTime” parameter from the total running time contained in the “roundMillisecs” parameter. The result, stored in the “currentTime” parameter, is the time left in milliseconds.

    Once that has been calculated, the “greaterThanOrEqual” Compare blackbox is activated, which compares the value contained in “currentTime” (the time left) to 0. If it is greater than or equal to 0, it activates the “CalculateSecsMillisecs” Script, which breaks the remaining time down into seconds and milliseconds and, when done, triggers the “available” output connector. If it is less than 0, the “copy” ActionOnParam blackbox first sets the remaining time to zero before the “CalculateSecsMillisecs” Script is activated.
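    The subtraction and clamping just described can be sketched as follows (a minimal Python illustration; the variable names mirror the parameters, but the function itself is hypothetical):

```python
def remaining_time(round_millisecs, elapsed_time):
    # "-" ActionOnParam: roundMillisecs - elapsedTime -> currentTime
    current_time = round_millisecs - elapsed_time
    # "greaterThanOrEqual" false branch: the "copy" blackbox sets
    # currentTime to 0, absorbing the overshoot of the final timer update
    if current_time < 0:
        current_time = 0
    return current_time
```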


    In the “CalculateSecsMillisecs” Script, we have as an input the value of the time left in milliseconds, contained in the “currentTime” parameter. The Script breaks this input value down into its remaining seconds and milliseconds, providing them to the “currentMillisecs” and “currentSecs” parameters. The activation of its “get” input connector activates the “lessThan” Compare blackbox, which checks whether the value contained in the “currentTime” parameter is less than 1000.

    If it is less, the “true” output is triggered. This means there are no whole seconds left, so the entire value of “currentTime” is copied into the “currentMillisecs” parameter by a “copy” ActionOnParam blackbox, and the “currentSecs” parameter is given the value zero via another “copy” ActionOnParam blackbox. After this, the Script has the values it provides, so it activates its “done” output.

    On the other hand, if the check run by the “lessThan” Compare blackbox determines that “currentTime” is not less than 1000, it activates its “false” output. This activates the “/” ActionOnParam blackbox, which divides the “currentTime” parameter by 1000, storing the result in the “totalSecs” parameter. Once that is done, the “floor” ActionOnParam is activated, which leaves the whole part of “totalSecs” in the “currentSecs” parameter.

    After this, the “-” ActionOnParam is activated, which subtracts “currentSecs” from “totalSecs”, giving us the decimal part of “totalSecs”, and stores it in “currentMillisecs”. The “*” ActionOnParam blackbox then multiplies the “currentMillisecs” parameter by 1000, converting this decimal fraction of a second back into milliseconds and storing the result in “currentMillisecs” (overwriting the previous value). The Script now has the values it provides, so it activates its “done” output.
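    Putting the two branches together, the “CalculateSecsMillisecs” breakdown amounts to the following (an illustrative Python sketch, not WiMi5 code; the rounding call is an assumption added to guard against floating-point error):

```python
import math

def calculate_secs_millisecs(current_time):
    # "lessThan" true branch: under one second, everything is milliseconds
    if current_time < 1000:
        return 0, current_time  # (currentSecs, currentMillisecs)
    total_secs = current_time / 1000        # "/" blackbox -> totalSecs
    current_secs = math.floor(total_secs)   # "floor" -> currentSecs
    # "-" then "*": decimal part of totalSecs, converted back to ms
    current_millisecs = round((total_secs - current_secs) * 1000)
    return current_secs, current_millisecs
```

    For instance, 61250 ms breaks down into 61 s and 250 ms.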

    When the “CalculateSecsMillisecs” Script finishes, it activates its “done” output, which in turn triggers the “available” output of the “CurrentMsecs-Secs” Script. This activates the “updateScreenTime” Script via its “update” input. That Script handles displaying the data obtained in the previous Script, available in the “currentMillisecs” and “currentSecs” parameters.


    The “updateScreenTime” Script in turn contains two Scripts, “setMilliSeconds” and “setSeconds”, which are activated when the “update” input is activated, and which set the time value in milliseconds and seconds respectively when their “set” inputs are activated. Both Scripts are practically the same, since they take a time value and place the Sprites related to the units of that value in the corresponding animations. The difference between the two is that “setMilliseconds” controls 3 digits (tenths, hundredths, and thousandths), while “setSeconds” controls only 2 (units and tens).


    The first thing the “setMilliseconds” Script does when activated is convert the value to be represented, “currentMillisecs”, to text via the “toString” ActionOnParam blackbox. This text is kept in the “numberAsString” parameter. Once the text has been obtained, we divide it into characters, grouping them into a collection of Strings via the “split” ActionOnParam. It is very important to leave the content of the “separator” parameter of this blackbox empty, even though in the image you can see two quotation marks in the field. This collection of characters is gathered into the “digitsAsStrings” parameter. Later, based on the value of milliseconds to be presented, one animation or another will be set in the Sprites.

    Should the time value to be presented be less than 10, which is checked by the “lessThan” Compare blackbox against the value 10, the “true” output is activated, which in turn activates the “setWith1Digit” Script. Otherwise, the blackbox’s “false” output is activated, and it proceeds to check whether the time value is less than 100, via another “lessThan” Compare blackbox against the value 100. If this blackbox activates its “true” output, it in turn activates the “setWith2Digits” Script. Finally, if this blackbox activates its “false” output, the “setWith3Digits” Script is activated.


    The “setWith1Digit” Script takes the first of the collection of characters, and uses it to set the animation of the Sprite that corresponds with the units contained in the “millisec.unit” parameter. The remaining Sprites (“millisec.ten” and “millisec.hundred”) are set with the 0 animation.


    The “setWith2Digits” Script takes the first of the collection of characters and uses it to set the animation of the Sprite corresponding to the tenths place, contained in the “millisec.ten” parameter, and the second character of the collection to set the animation of the Sprite corresponding to the units place, contained in the “millisec.unit” parameter. The “millisec.hundred” Sprite is given the animation for 0.


    The “setWith3Digits” Script takes the first of the collection of characters and uses it to set the animation of the Sprite corresponding to the hundredths place, contained in the “millisec.hundred” parameter; the second character of the collection sets the animation of the Sprite corresponding to the tenths place, contained in the “millisec.ten” parameter; and the third character of the collection sets the animation of the Sprite corresponding to the units place, contained in the “millisec.unit” parameter.
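    Taken together, the three “setWithNDigits” Scripts effectively left-pad the digit string with the “0” animation. The combined effect can be sketched like this (the dictionary keys mirror the Sprite parameter names, but the function itself is hypothetical):

```python
def millisec_sprites(current_millisecs):
    # "toString" then "split" with an empty separator -> digitsAsStrings
    digits = list(str(current_millisecs))
    # setWith1Digit / setWith2Digits fill the missing leading places
    # with the "0" animation
    while len(digits) < 3:
        digits.insert(0, "0")
    hundred, ten, unit = digits
    return {"millisec.hundred": hundred,
            "millisec.ten": ten,
            "millisec.unit": unit}
```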


    The “setSeconds” Script, when activated, converts the value to be represented, “currentSecs”, to text via the “toString” ActionOnParam blackbox. This text is kept in the “numberAsString” parameter. Once the text is obtained, we divide it into characters, gathering them into a collection of Strings via the “split” ActionOnParam blackbox. It is very important to leave the content of the “separator” parameter of this blackbox blank, even though you can see two quotation marks in the field. This collection of characters is collected in the “digitsAsStrings” parameter. Later, based on the value of the seconds to be shown, one animation or another will be placed in the Sprites.

    If the time value to be presented is less than 10, as checked by the “lessThan” Compare blackbox against the value 10, the “true” output is activated; the first character of the collection is taken and used to set the animation of the Sprite corresponding to the units place, contained in the “second.unit” parameter. The other Sprite, “second.ten”, is given the animation for 0.

    If the time value to be presented is not less than ten, the “false” output of the blackbox is activated; the first character of the collection is used to set the animation of the Sprite corresponding to the tens place, contained in the “second.ten” parameter, and the second character is used to set the animation of the Sprite corresponding to the units place, contained in the “second.unit” parameter.

    Example 3. How to create a points scoreboard.

    In order to manage the number of points, we’ll use as a base the whole-number value of these points, which we’ll be increasing, plus a graphic element to display it. This graphic element will be 4 instances of a Sprite that has 10 animations or states, one for each of the numbers from 0 to 9.


    In our case, we’ll display the points up to 4 digits, meaning scores can go up to 9999, as you can see in the image, starting at 0 and then increasing in whole numbers.


    For this, in the Scene editor, we must create the four instances of the different Sprites used for each of the numerical places used to count points: units, tens, hundreds, and thousands. In our case, we’ll call them “unit point”, “ten point”, “hundred point”, and “thousand point”. To manage the score, we’ll have a Script (“ScorePoints”), which has 2 inputs (“reset” and “increment”), as well as an exposed parameter called “pointsToWin”, which contains the value of the points to be added in each increment.


    The “reset” input sets the current score value to zero, and the “increment” input adds the points won in each increment, contained in the “pointsToWin” parameter, to the current score.

    In order to use it, we only need to set the value of the points to win in each increment, either by putting it in the “pointsToWin” parameter or by using a blackbox that assigns it. Once it is set, we can activate the “increment” input, which will increase the score and show it on the screen. Whenever we want, we can begin again by resetting the counter to zero via the “reset” input.

    Inside the Script, we do everything necessary to perform these actions and to represent the current score on the screen, displaying the 4 Sprites (units, tens, hundreds, and thousands) according to that value. When the “reset” input is activated, a “copy” ActionOnParam blackbox sets the “scorePoints” parameter, which contains the value of the current score, to 0. When the “increment” input is activated, a “+” ActionOnParam blackbox adds the “pointsToWin” parameter, which contains the value of the points won in each increment, to the “scorePoints” parameter. After either activation, the “ScoreOnScreen” Script is activated via its “update” input.
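    As a compact summary of the “reset”/“increment” behaviour, here is a hypothetical Python equivalent (WiMi5 itself needs no code; the class and method names are purely illustrative):

```python
class ScorePoints:
    def __init__(self, points_to_win):
        self.points_to_win = points_to_win  # exposed "pointsToWin" parameter
        self.score_points = 0               # "scorePoints" parameter

    def reset(self):
        # "reset" input: a "copy" blackbox sets scorePoints to 0
        self.score_points = 0

    def increment(self):
        # "increment" input: a "+" blackbox adds pointsToWin to scorePoints
        self.score_points += self.points_to_win
```

    For example, with a pointsToWin of 50, two activations of “increment” leave the score at 100, and “reset” returns it to 0.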


    The “ScoreOnScreen” Script has a connector to the “update” input and shares the “scorePoints” parameter, which contains the value of the current score.



    Once the “ScoreOnScreen” Script is activated via its “update” input, it begins by converting the score value contained in the “scorePoints” parameter into text via the “toString” ActionOnParam blackbox. This text is kept in the “numberAsString” parameter. Once the text has been obtained, we divide it into characters and group them into a collection of Strings via the “split” ActionOnParam.

    This collection of characters is gathered into the “digitsAsStrings” parameter. Later, based on the value of the score to be presented, one animation or another will be set for the 4 Sprites. If the value of the score is less than 10, as checked by the “lessThan” Compare blackbox against the value 10, its “true” output is activated, which activates the “setWith1Digit” Script.

    If the value is 10 or greater, the blackbox’s “false” output is activated, and it checks whether the value is less than 100. If the “lessThan” Compare blackbox finds that it is, its “true” output is activated, which in turn activates the “setWith2Digits” Script.

    If the value is 100 or greater, the “false” output of the blackbox is activated, and it proceeds to check whether the value is less than 1000, via the “lessThan” Compare blackbox against the value 1000. If this blackbox activates its “true” output, it activates the “setWith3Digits” Script. If it activates its “false” output, the “setWith4Digits” Script is activated.



    The “setWith1Digit” Script takes the first character from the collection of characters and uses it to set the animation of the Sprite that corresponds to the units place contained in the “unit.point” parameter. The remaining Sprites (“ten.point”, “hundred.point” and “thousand.point”) are set with the “0” animation.



    The “setWith2Digits” Script takes the first of the collection of characters and uses it to set the animation of the Sprite corresponding to the tens place, contained in the “ten.point” parameter, and the second character of the collection to set the animation of the Sprite corresponding to the units place, contained in the “unit.point” parameter. The remaining Sprites (“hundred.point” and “thousand.point”) are set with the “0” animation.



    The “setWith3Digits” Script takes the first of the collection of characters and uses it to set the animation of the Sprite corresponding to the hundreds place, contained in the “hundred.point” parameter; the second character in the collection sets the animation of the Sprite corresponding to the tens place, contained in the “ten.point” parameter; and the third character sets the animation of the Sprite corresponding to the units place, contained in the “unit.point” parameter. The remaining Sprite (“thousand.point”) is set with the “0” animation.



    The “setWith4Digits” Script takes the first character of the collection of characters and uses it to set the animation of the Sprite corresponding to the thousands place as contained in the “thousand.point” parameter; the second is set with the animation for the Sprite corresponding to the hundreds place as contained in the “hundred.point” parameter; the third is set with the animation for the Sprite corresponding to the tens place as contained in the “ten.point” parameter; and the fourth is set with the animation for the Sprite corresponding to the units place as contained in the “unit.point” parameter.
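    As with the milliseconds display, all four branches boil down to left-padding the score’s digit string to 4 places with the “0” animation. A hypothetical Python sketch (the dictionary keys mirror the Sprite parameter names):

```python
def score_sprites(score_points):
    # "toString" + "split" with an empty separator -> digitsAsStrings
    digits = list(str(score_points))
    # setWith1Digit..setWith4Digits fill the missing leading places
    # with the "0" animation
    while len(digits) < 4:
        digits.insert(0, "0")
    places = ["thousand.point", "hundred.point", "ten.point", "unit.point"]
    return dict(zip(places, digits))
```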

    As you can see, it is not necessary to write code when you work with WiMi5. The whole logic of this scoreboard has been created by dragging and dropping blackboxes in the LogicChart. You also have to set and configure parameters and Scripts, but all of the work is done visually. We hope you have enjoyed this tutorial and understood how to create scoreboards.

    • Oct 21 2014 12:21 PM
    • by hafo