Search Results

There were 523 results tagged with game

  1. In Fight - 3D Online Fighting

    “In Fight” is a fighting game that runs directly in the browser. No registration or client download is needed, since the game is built for social network platforms such as Facebook. The game embraces everything the word “fighting” implies: be ready to smash real, human-controlled opponents in a 3D world.

    Inviting friends and challenging them to a duel is handled through the social networks themselves. No complicated steps are needed – everything is simple and straightforward.

    Several things make “In Fight” special. One is a level-based progression system that lets players improve their character with different techniques and deadly combinations – a familiar mechanic borrowed from RPGs. The variety of martial arts will make your character truly unique and sometimes even unbeatable.

    We also took care with character customization. Players choose from 18 models (9 male and 9 female) that can be dressed in different items, from common jeans to a variety of useful costumes. As you would expect, equipment boosts the character and makes it stronger. It is also possible to tweak the character’s appearance: sunglasses, capes, hats – you will definitely find something to suit your character’s style.

    Our game supports more than just the keyboard. Players can plug their favorite controller into the computer and enjoy the game as if they were playing it on a console.


    • Feb 18 2015 11:45 PM
    • by Aram_B
  2. Game Developing Team (Steam Group)

    Hello designers,

    This is an organized Steam group for all game designers. If you haven't heard of Steam, it is a marketplace where you can get software and games. The recommended software for beginners is Blender; here is a link to the software: http://www.blender.org/. Positions on the game developing team are coders, animation specialists, and sculptors. Once you have downloaded Steam, you can join the group here: http://steamcommunity.com/groups/bjgamedevteam.


    I will be handling the money through PayPal. Donations will go toward better software and equipment and are very much appreciated. If you want to support this, just go to Send at the top of the screen and enter jaydoncrenshaw@gmail.com or my phone number: 331-302-1746. The group comes together to brainstorm ideas for games, and a specific group of people then joins a Steam call to work on the game.


    : none yet
    Sunday: none, Monday: Discussion, Tuesday: Work day, Wednesday: none, Thursday: Work day, Friday: Discussion, Saturday: none.
    Thank you, and I hope you contribute to the Steam group if you like it.

  3. Visual Tools For Debugging Games


    How much of your time do you spend writing code? How much of your time do you spend fixing code?

    Which would you rather be doing? This is a no-brainer, right?

    As you set out to develop a game, having a strategy for how you are going to illuminate the "huh?" moments well before you are on the 15th level is going to pay dividends early and often in the product life cycle. This article discusses strategies and lessons learned from debugging the test level for the 2D top down shooter, Star Crossing.


    Intrinsic Tools

    Before discussing the tools you do not have by default, it seems prudent to list out the ones you will generally have available in most modern development tool chains.
    • Console output via printf(...). With more advanced loggers built into your code base, you can generate oceans worth of output or a gentle trickle of nuanced information as needed. Or you can just have it print "here 1", "here 2", etc. To get output, you have to actually put in code just for the purpose of outputting it. This usually starts with some basic outputs for things you know are going to be helpful, then degenerates into 10x the number of logging messages for specific issues you are working on.
    • Your actual "debugger", which allows you to set breakpoints, inspect variables, and gnash your teeth when you try to have it display the contents of a std::map. This is your first line of defense and probably the one you learned to use in your crib.
    • A "profiler" which allows you to pinpoint where your code is sucking down the frame rate. You usually only break this out (1) when things go really wrong with your frame rate, (2) when you are looking for that memory leak that is crashing your platform, or (3) when your boss tells you to run before shipping even though the frame rate is good and the memory appears stable, because you don't really know if the memory is stable until you check.
    All these tools are part of the total package you start out with (usually). They will carry you well through a large part of the development, but will start to lose their luster when you are debugging AI, physics, etc. That is to say, when you are looking at stuff that is going on in real time, it is often very hard to put the break point at the right place in the code or pull useful information from the deluge of console output.

    Random Thoughts

    If your game has randomness built into it (e.g. random damage, timeouts, etc.), you may run into serious trouble duplicating failure modes. Someone may even debate whether the randomness is adding value to your game because of the headaches associated with debugging it. As part of the overall design, a decision was made early on to enable not-so-random-randomness as follows:
    • A "cycle clock" was constructed. This is the lowest "tick" of execution of the AI/physics of the game.
    • The cycle clock was set to 0 at the start of every level, and proceeded up from there. There is, of course, the possibility that the game may be left running forever and overflow the clock. Levels are time limited, so this is not a concern here (consider yourself caveated).
    • A simple static class provided the API for random number generation and setting the seed of the generator. This allowed us to put anything we want inside of the generation so the "clients" did not know or care what the actual "rand" function was.
    • At the start of every tick, the tick value was used to initialize the seed for the random number system.
    This allowed completely predictable random number generation for the purposes of debugging. This also has an added benefit, if it stays in the game, of the game evolving in a predictable way, at least at the start of a level. Once the user generates their own "random input", all bets are off.
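    The static-class random number API described above can be sketched like this (the class and function names here are illustrative, not Star Crossing's actual code; xorshift32 stands in for whatever "rand" function sits behind the API):

```cpp
#include <cstdint>

// Clients call RanNumGen::GetRandom() without knowing the generator used.
// The game loop calls SetSeed(tickValue) at the start of every cycle, so a
// replayed level produces an identical stream of "random" numbers.
class RanNumGen {
public:
    static void SetSeed(uint32_t seed) { s_state = seed ? seed : 1; }

    // xorshift32; any PRNG can live behind this call.
    static uint32_t GetRandom() {
        uint32_t x = s_state;
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        return s_state = x;
    }

    // Convenience: uniform-ish value in [lo, hi] inclusive.
    static uint32_t GetRandom(uint32_t lo, uint32_t hi) {
        return lo + GetRandom() % (hi - lo + 1);
    }

private:
    static uint32_t s_state;
};
uint32_t RanNumGen::s_state = 1;
```

    Reseeding with the same tick value always replays the same sequence, which is exactly the property that makes failure modes reproducible.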

    Pause, Validate, Continue

    The screenshot below shows a scene from the game with only the minimal debugging information displayed, the frame rate.


    The first really big problem with debugging a real-time game is that, well, it is going on in real-time. In the time it takes you to take your hand off the controls and hit the pause button (if you have a pause button), the thing you are looking at could have moved on.

    To counter this, Star Crossing has a special (configurable) play mode where taking your finger off the "Steer" control pauses the game immediately. When the game is paused, you can drag the screen around in any direction, zoom in/out with relative impunity, and focus in on the specific region of interest without the game moving on past you. You could even set a breakpoint (after the game is paused) in the debugger to dig deeper or look at the console output. Which is preferable to watching it stream by.

    A further enhancement of this would be to add a "do 1 tick" button while the game was paused. While this may not generate much motion on screen, it would allow seeing the console output generated from that one cycle.
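    The pause-on-release behavior plus the proposed "do 1 tick" button can be sketched with a small helper (hypothetical names; the real control flow lives in the game's update loop):

```cpp
// Releasing the "Steer" control pauses the simulation immediately; a step
// button queues exactly one cycle to run while paused. Camera drag/zoom
// keep updating regardless, so you can inspect the frozen world.
struct DebugStepper {
    bool paused = false;
    int  pendingTicks = 0;   // single-step requests queued while paused

    void OnSteerReleased() { paused = true; }
    void OnSteerPressed()  { paused = false; }
    void OnStepButton()    { if (paused) pendingTicks += 1; }

    // Called once per frame; returns true when the simulation should tick.
    bool ShouldTick() {
        if (!paused) return true;
        if (pendingTicks > 0) { --pendingTicks; return true; }
        return false;        // hold the world still
    }
};
```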

    The frame rate (1) is ALWAYS displayed in debug builds even when not explicitly debugging. It might be easy to miss a small slowdown if you don't have the number on the screen. But even a small drop means that you have exhausted the available time in several frames (multiple CPU "spikes" in a row) so it needs attention.

    The visual debugging information can be turned on/off by a simple toggle (2), so you can leave it on, turn it on for a quick look and turn it off, etc. When it is on, it drops the frame rate, so it usually stays off unless something specific is being looked at. On the positive side, this has the effect of slowing down the game a bit during on-screen debugging, which allows seeing more details. Of course, the same effect could be achieved by slowing down the main loop update.

    Debug Level 1

    The screen shot below shows the visual debugging turned on.



    At the heart of the game is a physics engine (Box2D). Every element in the game has a physical interaction with the other elements. Once you start using the physics, you must have the ability to see the bodies it generates. Your graphics are going to be on the screen but there are physics elements (anchor points, hidden bodies, joints, etc.) that you need to also see.

    The Box2D engine itself has a capacity to display the physics information (joints, bodies, AABB, etc.). It had to be slightly modified to work with Star Crossing's zooming system and also to make the bodies mostly transparent (1). The physics layer was placed low in the layer stack (and it could be turned on/off by header include options). With the graphics layer(s) above the physics, the alignment of the sprites with the bodies they represented was easy to check. It was also easy to see where joints were connected, how they were pulling, etc.
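    In Box2D 2.x, what gets drawn is selected through bit flags on a b2Draw subclass (e_shapeBit, e_jointBit, e_aabbBit, etc., passed to SetFlags, with the world rendered via DrawDebugData). The snippet below mimics that flag scheme with plain bitmasks so the on/off category toggling can be shown self-contained; the bit values mirror Box2D's but treat them as an assumption for your version:

```cpp
#include <cstdint>

// Bit values mirror Box2D 2.x's b2Draw flags.
enum DebugDrawBits : uint32_t {
    e_shapeBit        = 0x0001,  // solid shapes
    e_jointBit        = 0x0002,  // joint connections
    e_aabbBit         = 0x0004,  // broad-phase boxes
    e_centerOfMassBit = 0x0010,  // body centers
};

// Stand-in for the flag management on a b2Draw-style debug renderer.
struct DebugDrawFlags {
    uint32_t flags = 0;
    void Set(uint32_t f)       { flags = f; }
    void Append(uint32_t f)    { flags |= f; }
    void Clear(uint32_t f)     { flags &= ~f; }
    bool Has(uint32_t f) const { return (flags & f) == f; }
};
```

    Tying a key or on-screen toggle to Append/Clear is how the "quick look" workflow described above is usually wired up.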


    Star Crossing is laid out on a floating point "grid". The position in the physics world of all the bodies is used extensively in console debug output (and can be displayed in the labels under entities...more on this later). When levels are built, a rough "plan" of where items are placed is drawn up using this grid. When the debug information is turned on, major grid locations (2) are displayed. This has the following benefits:
    • If something looks like it is cramped or too spaced out, you can eyeball the distance from the major grid points and quickly change the positions in the level information.
    • The information you see on screen lines up with the position information displayed in the console.
    • Understanding the action of distance based effects is easier because you have a visual sense of the distance as seen from the entity.

    Entity Labels

    Every "thing" in the game has a unique identifier, simply called "ID". This value is displayed, along with the "type" of the entity, below it.
    • Since there are multiple instances of many entities, having the ID helps when comparing data to the console.
    • The labels are also present during the regular game, but only show up when the game is paused. This allows the player to get a bit more information about the "thing" on the screen without an extensive "what is this" page.
    • The labels can be easily augmented to display other information (state, position, health, etc.).
    • The labels scale in size based on zooming level. This helps eye-strain a lot when you zoom out or in.
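    The two label behaviors listed above, sticking to the entity and counter-scaling against the zoom, can be sketched as follows (all names are illustrative):

```cpp
struct Vec2 { float x, y; };

// A label that follows its entity and stays a readable on-screen size.
struct EntityLabel {
    int  entityID;
    Vec2 offset{0.0f, -0.5f};  // sits just below the entity in world units

    Vec2 WorldPosition(Vec2 entityPos) const {
        return { entityPos.x + offset.x, entityPos.y + offset.y };
    }

    // Dividing by the camera zoom cancels it out, so the rendered text
    // stays the same pixel size whether you zoom in or out.
    float Scale(float cameraZoom) const { return 1.0f / cameraZoom; }
};
```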

    Debug Level 2

    While the player is able to move to any position (that the physics will allow), AI driven entities in the game use a combination of steering behaviors and navigation graphs to traverse the Star Crossing world.


    Navigation Grid

    The "navigation grid" (1) is a combination of Box2D bodies laid out on a grid as well as a graph with each body as a node and edges connecting adjacent bodies. The grid bodies are used for collision detection, dynamically updating the graph to mark nodes as "blocked" or "not blocked".

    The navigation grid is not always displayed (it can be disabled...it eats up cycles). When it is displayed, it shows exactly which cells an entity is occupying. This is very helpful for the following:
    • Watching the navigation path generation and ensuring it is going AROUND blocked nodes.
    • The path following behavior does a "look ahead" to see if the NEXT path edge (node) is blocked before entering (and recomputes a path if it is). This took a lot of tweaking to get right, and having the blocked/unblocked status displayed, along with some "whiskers" from the entity, really helped.
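    The look-ahead check described above can be sketched like this (hypothetical interfaces; the real version would plug into the navigation graph and steering behaviors):

```cpp
#include <deque>
#include <set>

// Before committing to the next edge, the follower checks whether that
// node has become blocked and flags a re-path if so.
struct PathFollower {
    std::deque<int> path;        // node indices still to visit
    bool needsRepath = false;

    // Called each tick with the currently blocked node indices.
    // Returns the node to steer toward, or -1 when there is none.
    int NextTarget(const std::set<int>& blocked) {
        if (path.empty()) return -1;
        if (blocked.count(path.front())) {
            needsRepath = true;  // NEXT node is blocked: replan
            return -1;
        }
        return path.front();
    }

    void ArrivedAtTarget() { if (!path.empty()) path.pop_front(); }
};
```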

    Navigation Grid Numbers

    Each navigation grid node has a label that it can display (2). These numbers were put to use as follows:
    • Verifying the path the AI is going on matches up with the grid by displaying the navigation graph index of the grid node. For example, an AI that must perform a "ranged attack" does this by locating an empty node a certain distance from the target (outside its physical body), navigating to that node, pointing towards the target, and shooting. At one point, the grid was a little "off" and the attack position was inside the body of the target, but only in certain cases. The "what the heck is that" moment occurred when it was observed that the last path node was inside the body of the target on the screen.
    • Star Crossing uses an influence mapping based approach to steer between objects. When a node becomes blocked or unblocked, the influence of all blockers in and around that node is updated. The path search uses this information to steer "between" blocking objects (these are the numbers in the image displayed). It is REALLY HARD to know if this is working properly without seeing the paths and the influence numbers at the same time.

    Navigation Paths

    It is very difficult to debug a navigation system without looking at the paths that are coming from it (3). In the case of the paths from Star Crossing, only the last entity doing a search is displayed (to save CPU cycles). The "empty" red circle at the start of the path is the current target the entity is moving toward. As it removes nodes from its path, the current circle "disappears" and the next circle is left "open".

    One of the reasons for going to influence based navigation was because of entities getting "stuck" going around corners. Quite often, a path around an object with a rectangular shape was "hugging" its perimeter, then going diagonally to hug the next perimeter segment. The diagonal move had the entity pushing into the rectangular corner of the object it was going around. While the influence based approach solved this, it took a while to "see" why the entity was giving up and re-pathing after trying to burrow into the building.

    Parting Thoughts

    While a lot of very specific problems were worked through, the methods used to debug them, beyond the "intrinsic tools", are not terribly complex:

    1. You need a way to measure your FPS. This is included directly in many frameworks or is one of the first examples they give when teaching you how to use the framework.
    2. You need a way to enable/disable the debug data displayed on your screen.
    3. You need a way to hold the processing "still" while you can look around your virtual world (possibly poking and prodding it).
    4. You need a system to display your physics bodies, if you have a physics engine (or something that acts similar to one).
    5. You need a system to draw labels for "interesting" things and have those labels "stick" to those things as they move about the world.
    6. You need a way to draw simple lines for various purposes. This may be a little bit of a challenge because of how the screen gets redrawn, but getting it working is well worth the investment.

    These items are not a substitute for your existing logging/debugger system, they are a complement to it. These items are somewhat "generic". You can get a lot of mileage out of simple tools, though, if you know how to use them.

    Article Update Log

    30 Jan 2015: Initial release

  4. Are You Letting Others In?


    A good friend and colleague of mine recently talked about the realization of not letting others in on some of his projects. He expressed how limiting it was to try and do everything by himself. Limiting to his passion and creativity on the project. Limiting to his approach. Limiting to the overall scope and impact of the project. This really struck a chord with me as I’ve recently pushed to do more collaborating in my own projects. In an industry that is so often one audio guy in front of a computer, bringing in people with differing, new approaches is not only freeing, it’s refreshing.

    The Same Ol' Thing

    If you’ve composed for any amount of time, you’ve noticed that you develop ruts in the grass. I know I have. Same chord progressions. Same melodic patterns. Same approaches to composing a piece of music. Bringing in new people to help branch out exposes your work to new avenues. New opportunities. So, on your next project I’d challenge you to ask yourself – am I letting others in? Even just to evaluate the mix and overall structure of the piece? To review the melody and offer up suggestions? I’ve been so pleasantly surprised and encouraged by sharing my work with others during the production process. It’s made me a better composer, better engineer and stronger musician. Please note that while this can be helpful for any composer at ANY stage of development, it’s most likely going to work best for someone with at least some experience and some set foundation. This is why I listed this article as "intermediate."

    Get Out of the Cave

    In an industry where so many of us tend to hide away in our dark studios and crank away on our masterpieces, maybe we should do a bit more sharing? When it’s appropriate and not guarded by NDA, of course! So reach out to your friends and peers – folks that play actual instruments (gasp!) – and see how they can breathe life into your pieces and suggest how a piece can be stronger. More emotional. For example, I’d written out a flute ostinato that worked well for the song but was very challenging for a live player to perform. My VST could handle it all day… but my VST also doesn’t have to breathe. We made it work in a recording studio environment, but if I ever wanted to have that piece performed live, I’d need to rethink that part some.

    Using live musicians or collaborating can also be more inspiring and much more affordable than you might first think! Consult with folks who are talented and knowledgeable at production and mixing, because even the best song can suck with terrible production. I completely realize you cannot, and most likely WILL NOT, collaborate on every piece you do. But challenging yourself with new approaches and ideas is always a good thing. Maybe you’ll use them or maybe you’ll confirm that your own approach is the best for a particular song. Either way, you’ll come out ahead for having passed your piece across some people you admire and respect.

    My point?

    Music composition and production is a lifelong path. No one person can know everything. This industry is actually much smaller than first impressions suggest, and folks are willing to help out! Buy them a beer or coffee, or do an exchange of services. When possible, throw cash. Or just ask and show gratitude! It’s definitely worked for me and I think it would work for you as well. The more well versed you are, the better. It will never hurt you.

    Article Update Log

    28 January 2015: Initial release

    GameDev.net Soapbox logo design by Mark "Prinz Eugn" Simpson

  5. Game Programming Courses

    Rocking it is, that is what most say about the gaming industry these days, and why not? Most gaming honchos are raking in the moolah like nobody's business. Technological advancements have transformed gaming, across varied platforms too. Gaming is so different now from what it was back in the late 70s and early 80s, when simple programs in plastic cartridges were doled out for entertainment. As time passed and smaller game consoles and microchips were introduced, the scene changed. With more and more developments in the making, ten times as many students now want to make the most of popular Game Programming Courses.

    A better understanding

    For those who wish to study such courses, it is a must to know what they would have as a takeaway at the end of the day. The course material should be well understood, and even if it is a well-known institute, one needs to check with due diligence whether the learner's appetite will be satiated.
    The instructors or trainers in such courses need to be highly accredited, with certifications and licenses – ideally someone who has gifted the world with their own gaming creativity and innovation. Choose an institute staffed by professionals who were once pioneers of the domain; Singapore has many such institutes that hire and work with them.

    Hands-on experience and a practical approach are a must, and students interested in Game Programming Courses for iPhone should check with ex-students or professionals of the college whether that is offered. It is important to get clarity on this point and on the number of hours too.

    The world out there is competitive and fierce, and there are institutes that take advantage of students by providing them with half-baked information. Don't fall prey to such scamsters or frauds; do your homework and choose a reputed hub that will bring the best out of you. Remember, the boom has only just begun for game developers and the industry. It all depends on which path you want to take, so check in for Game Programming Courses for iPhone through colleges and learning hubs that truly resonate with you, and not because a celebrity says so!

  6. Free to play Finger VS Guns now available on Android

    Finger VS Guns available now

    Canadian indie game developer Brutal Studio has released the awaited sequel to Finger VS Axes, now featuring guns! Finger VS Guns is the sequel to our successful action game but this time your finger will taste bullets! Fans of the first game will love this reloaded version, which features new and intense levels, new optional weapons and more humorous fun.


    From the creators of Stick Squad, Punch My Face and the Sift Heads brand, Brutal Studio brings you yet another original and unique game for your enjoyment... now available on all Android phones and tablets. The game is free to play but offers optional in-app purchases. Players can also pay a small fee to remove any intrusive ads from the game. A version for iPhones and iPads will be available in the coming weeks.

    Look out for more fun and excitement from Canadian developer Brutal Studio!

  7. How to create game rankings in WiMi5


    Each game created with WiMi5 has a ranking assigned to it by default. Using it is optional, and the decision to use it is totally up to the developer. It’s very easy to handle, and we can sum it up in three steps, which are explained below.

    1. Firstly, the setup is performed in the Ranking section of the project’s properties, which can be accessed from the dashboard.
    2. Then, when you’ve created the game in the editor, you’ll have a series of blackboxes available to handle the rankings.
    3. Finally, once it’s running, a button will appear in the game’s toolbar (if the developer has set it up that way) with which the player can access the scoreboard at any time.

    It is important to know that a player can only appear on a game’s scoreboard if they are registered.

    Setting up the Rankings

    Note:  Remember that since it’s still in beta, the settings are limited. So we need you to keep a few things in mind:

    - The game is only assigned one table, which is based both on the score obtained in the game and on the date that score was obtained. In the future, the developer will be able to create and personalize their own tables.

    - The ranking has several setup options available, but most of them are preset and can’t be modified. In later versions, they will be.

    In the dashboard, in the project’s properties, you can access the rankings’ setup by clicking on the Rankings tab. As you can observe, there is a default setting.


    For configuring the ranking, you will have the following options:

    Display the button in the game’s toolbar

    This option is selected by default, and allows the button to show rankings to appear on the toolbar that’s seen in every game. If you’re not going to use rankings in your game, or don’t want that button to appear, in order to have more control over when the scoreboard will be shown, all you have to do is deactivate this option. The button looks like this:


    Only one result per user

    NOTE: The modification of this option is turned off for the time being.

    This allows you to set up whether a player can have multiple results on the scoreboard or only their best one. This option is turned off by default, meaning all the player’s matches with top scores will appear.

    It’s important to note that if this option is turned off, the player’s best score is the only one that will appear highlighted, not the last. If the player has a lower score in a future match, it will be included in the ranking, but it may not be shown since there’s a better score already on the board.

    Timeframe Selection

    NOTE: The modification of this option is turned off for the moment, and is only available in all-time view, which is further limited to the 500 best scores.

    This allows the scoreboard to have different timeframes for the same lists: all-time, daily, monthly, and yearly. The all-time view is chosen by default, which, as its name implies, has no time limit.

    Match data used in rankings

    NOTE: The modification of this option is turned off for the moment. The internationalization of public texts is not enabled.

    This allows you to define what data will be the sorting criteria for the scoreboard. The Points item is used by default; another criterion available is Date (automatically handled by the system), which includes the date and time the game’s score is entered.

    Each configured datum has the following characteristics, reflected in the board columns:
    • Priority: This indicates the datum’s importance in the sorting of the scoreboard. The lower the number, the more important the datum is. In the default case, for example, the first criterion is points; if there are equal points, then by date.
    • ID: Unique name by which the datum is identified. This identifier is what will appear in the blackboxes that allow the rankings’ data to be managed.
    • Type: The type of data.
    • List order: Ascending or descending order. In the default case, it will be descending for points (highest number first), and should points be equal, by ascending order by date (oldest games first).
    • Visible: This indicates if it should be shown in the scoreboard. That is to say, the datum is taken into account to calculate the ranking position, but it is not shown later on the scoreboard. In the default case, the points are visible, but the date and time are not.
    • Header title: The public name of the datum. This is the name the player will see on the scoreboard.
    It is possible to add additional data with the “+” button, to modify the characteristics of the existing data, as well as to modify the priority order.
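    The default sort described above – points descending first, then date ascending on ties – behaves like an ordinary two-key comparator. The sketch below illustrates that ordering only; the struct and field names are made up, not WiMi5's internal API:

```cpp
#include <algorithm>
#include <vector>
#include <cstdint>

struct ScoreEntry {
    int      points;
    uint64_t date;   // e.g. a timestamp set automatically by the system
};

// Priority 1: points, descending. Priority 2: date, ascending (older first).
bool RankingLess(const ScoreEntry& a, const ScoreEntry& b) {
    if (a.points != b.points) return a.points > b.points;
    return a.date < b.date;
}
```

    Sorting a board with std::sort(board.begin(), board.end(), RankingLess) yields the order the default configuration describes.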

    Once the rankings configuration is reviewed, the next step is to use them in the development of the game.

    Making use of the rankings in the editor

    When you’re creating your game in the editor, you’ll find in the Blackboxes tab in the LogicChart that there’s a group of blackboxes called Rankings, in which you will find all the blackboxes related to the management of rankings. From this link you can access these blackboxes’ technical documentation. Nevertheless, we’re going to go over them briefly and see a simple example of their use.

    SubmitScore Blackbox

    This allows the scores of the finished matches to be sent.


    This blackbox will be different depending on the data you’ve configured on the dashboard. In the default case, points is the input parameter.

    When the blackbox’s submit activator is activated, the value of that parameter will be sent as a new score for the user identified in the match.

    If the done output signal is activated, this will indicate that the operation was successful, whereas if the error signal is activated, that will indicate that there was an error and the score could not be stored.

    If the accessDenied signal is activated, this will mean that a non-registered user tried to send a score, which will allow you to treat this matter in a special way (like showing an error message, etc.).

    Finally, there is the onPrevious signal. If you select the blackbox and go to its properties, you’ll see there is one called secure that can be activated or deactivated. If you activate it, this blackbox will not send the game’s result as long as the answer to a previously-sent result is still pending. Therefore, onPrevious will activate if you try to send a game result when the answer to a previously-sent result is still pending, and the blackbox’s secure mode is activated.

    ShowRanking Blackbox

    This allows the scoreboard to be displayed (for example, at the end of a match). It has the same result as when a player presses the “see rankings” button on the game’s toolbar.


    When this show input is activated, the scoreboard will be displayed. If it was displayed successfully, the shown output signal will be activated, and when the player closes the board, that will activate the closed output signal, which will allow us to also personalize the flow of how it’s run.

    Example of the use of the blackboxes

    If you want, for example, to send a player’s final score at the end of a match so that it appears on the scoreboard, you can do that this way:

    Suppose you have a Script that manages the end of the game and shares a parameter with the points the player has accumulated up to that moment:


    You’d have to create a SubmitScore blackbox...


    …and join the gameEnded output (which, let’s say, is activated when the game has ended) to the submit input in the blackbox you just created. You also need to indicate the variable holding the score we want to send – points from the DetectGameEnd script in our case. So, click and drag it to the SubmitScore blackbox to assign it. With these two actions, you’ll get the following:


    And the game would then be ready to send scores. The player could check the board at any time by clicking on the menu button that was created for just that purpose, as we saw in the setup section.

    However, you might want the scoreboard to appear automatically once the match is over. To do that, use the ShowRanking blackbox, which could, for example, be joined to the done output in the SubmitScore blackbox and thus show the scores as soon as the score has been sent and confirmed:


    And with that you have a game that can send scores and show the scoreboard.

    Running the game

    Once a game that is able to send match results is developed, you have to remember that it behaves differently in “editing” than in “testing” and “production” mode.

    By “editing” mode, we mean when the game is run directly in the editor, in the Preview frame, to perform quick tests. In this mode, the scores are stored, but only temporarily. When we stop running it, those scores are deleted. Also, in this mode, the scoreboard is “simulated”; it’s not real. This means that there’s no way to access the toolbar button, since it doesn’t really exist.

    Sending and storing “real” scores is done by testing the games or playing their published versions. To test them, first you have to deploy them with the Deploy option in the Tools drop-down menu, and then with the Test option (from the menu or from the dialogue box you see after the Deploy one) you can start testing your game. In this mode, the scoreboard is real, not simulated, so the match scores are added to the ranking definitively.


    Dealing with rankings in WiMi5 is very easy. Just configure the rankings you want to use, then use the rankings blackboxes and let your players challenge one another for the best score. If you don't want to use a ranking in your game, just click on the settings to hide this feature.

    • Jan 23 2015 10:26 AM
    • by hafo
  8. Game-synchronized and Multi-threaded Keyboard Input


    An input system in a simulation can be difficult to visualize. Even with tons of information available on the web about input handling in games, the correct approach is rarely found.

    There is a lot of low-level information that could be covered in an input system article, but here I'll be more direct, since this isn't a book.

    Intuition says that each frame the simulation gets updated, we must request inputs and handle them on the game-logic side via callbacks or virtual functions, making a player jump or shoot at an enemy. Requesting inputs at a certain time is also known as polling. Polling is something we can't avoid, since at some point we poll all the inputs that occurred since the last request; so we'll use the term to mean requesting input events at any time.

    What is the problem with polling every frame? The problem is that we don't know how fast our game is being updated, and you may not want to update faster than real time. Updating a game faster (or slower) than real time means you're advancing the game at a variable rate; the game will run slower on one computer when it could be running faster, or vice versa. The concept of a fixed time-step is mandatory in modern games (and very unified too), and its implementation details are off topic, so I'll assume you know what a fixed time-step simulation is.

    One fixed time-step update will be referred to here as one logical update, more specifically a game logical update. A logical update advances our current game time by a fixed time-step, which is generally 16.6ms or 33.3ms. If you know what a logical update is with respect to real time, you know that we can update our game N times each frame; the game logic time should stay very close to the current time, meaning we're updating the game as fast as we can (we're performing logical updates up to the current time).

    So, you should already know that the basic game logic loop of a fixed-time step game simulation is:

    m_tRenderTime.Update(); //Update by the real elapsed time.
    UINT64 ui64CurTime = m_tRenderTime.CurMicros();
    while ( ui64CurTime - m_tLogicTime.CurTime() > FIXED_TIME_STEP ) {
    	m_tLogicTime.UpdateBy( FIXED_TIME_STEP );
    	//Update the logical side of the game.
    }

    where m_tRenderTime.Update() updates the render timer by the real elapsed time converted to microseconds (we want maximum precision for time accumulation), and m_tLogicTime.UpdateBy( FIXED_TIME_STEP ) advances the game by FIXED_TIME_STEP microseconds.

    What happens if we press a button at some point, poll all input events at the beginning of a frame (before the loop starts), and release that button during a game logical update? The answer is that if we update N times and the button changes state in between, the button will be seen as if it were pressed during the entire frame. This is not a problem if you're updating in small steps, because you'll jump to the next frame faster, but if the current time is considerably larger than the time step, you can get into trouble by only knowing the state of the button at the beginning of that frame. To avoid that issue, you'll need to time-stamp all input events when they occur in order to measure their durations, and synchronize them with the simulation.

    Every input should be time-stamped so that the game side can consume the right amount of input at any time; the input system should be able to request inputs anywhere (in a frame update, a logical update, etc.) and process them somewhere later. You may have noticed that what we're doing here is basically buffering all inputs. That's a good idea, because keeping time-stamped inputs until some later moment means we can process all input events before each logical update, synchronizing them with the game simulation according to their time-stamps: input actions, input durations, etc., all synchronized with the game's logical time, which is exactly what should be done.

    What's the ideal solution to keep our input system synchronized with the game simulation? The answer is to consume, on each logical update, the inputs that occurred up to the current game time. Example:

    Current time = 1000ms.

    Fixed time-step = 100ms.

    Total game updates = 1000ms/100ms = 10.

    Game time = 0.

    Input Buffer:

    X-down at 700ms;

    X-up at 850ms;

    Y-down at 860ms;

    Y-up at 900ms;

    The 1st logical update eats 100ms of input. No inputs here; go to the next logical update(s).


    The 7th logical update eats 100ms of input. Because the logical game time was updated 6 times by 100ms, our game time is 600ms, but there are no inputs up to the game time, so we continue with the remaining updates...

    8th update. Game time = 800ms. X-down, along with its time-stamp, can be consumed. The current duration of X is the current game time minus the time the key was pressed; that is, the duration of button X is 800ms - 700ms = 100ms. Now the game is able to check whether a button has been held for a certain amount of time, which is a good thing for every type of game. We know that (in this example) we can fire an input action here, because it's the first time the X button has been pressed (there was no X-down before). Since we have all inputs in this logical update, we can re-map X to some game-side input action or log it.

    9th update. Game time = 900ms. X-up and Y-down, along with their time-stamps, can be consumed. The X button was released, which means the total duration of X is the release time-stamp minus the time the key was pressed; that is, the duration of button X = 850ms - 700ms = 150ms. You may want to log that somewhere on the game side. Y was pressed, so we repeat for Y what we did for X in the last update.

    and finally...

    10th update. Game time = 1000ms. We repeat for Y what we did for X in the last update, and we're done.


    If you're on Windows®, you may have noticed that we can poll pre-processed or raw input events using its message queue. It's mandatory to keep input polling on the same thread the window was created on, but it's not mandatory to keep our game simulation running on it. To try to increase our input frequency, we can let our pre-processed input system poll input events on the main thread, and run our game simulation and rendering on other thread(s); since only the game affects what will be rendered, we don't need synchronization in the game-logic/render thread.

    I hope what each class does stays clear as you read.


    INT CWindow::ReceiveWindowMsgs() {
    	::SetThreadPriority( ::GetCurrentThread(), THREAD_PRIORITY_HIGHEST ); //Just an example.
    	MSG mMsg;
    	while ( m_bReceiveWindowMsgs ) {
    		::WaitMessage(); //Yields control to other threads when this thread has no messages in its message queue.
    		while ( ::PeekMessage( &mMsg, m_hWnd, 0U, 0U, PM_REMOVE ) ) {
    			::DispatchMessage( &mMsg );
    		}
    	}
    	return static_cast<INT>(mMsg.wParam);
    }

    To request inputs up to the current time X on the game thread, we must synchronize our thread-safe input buffer's time-stamping timer with all game timers (the render time and logic time should also be synchronized), so that no timer is more advanced than another, and we can measure correct intervals (time-stamps in our case) at any time.

    //Before the game starts. InputBuffer stands for every possible type of device. We synchronize every timer.
    void CEngine::Init(/*...*/) {
            m_pwWindow->KeyboardBuffer().m_tTime = m_pgGame->m_tRenderTime = m_pgGame->m_tLogicTime;
    }

    Here, we'll use the keyboard as the main input source I've described, but this can be translated to any other type of device (gamepad, touch, etc.). Let's call our thread-safe input buffer the keyboard buffer.

    When a key is pressed (or released), we should process the window message in the window procedure. That can be done as follows:

    class CKeyboardBuffer {
    public :
    	enum KB_KEY_EVENTS {
    		KE_KEYDOWN,
    		KE_KEYUP
    	};
    	enum KB_KEY_TOTALS {
    		KB_TOTAL_KEYS = 256UL
    	};
    	void OnKeyDown( unsigned int _ui32Key ) {
    		CLocker lLocker( m_csCritic ); //Enter our critical section.
    		KB_KEY_EVENT keEvent;
    		keEvent.keEvent = KE_KEYDOWN;
    		keEvent.ui64Time = m_tTime.CurMicros();
    		m_keKeyEvents[_ui32Key].push_back( keEvent );
    	}
    	void OnKeyUp( unsigned int _ui32Key ) {
    		CLocker lLocker( m_csCritic );
    		KB_KEY_EVENT keEvent;
    		keEvent.keEvent = KE_KEYUP;
    		keEvent.ui64Time = m_tTime.CurMicros();
    		m_keKeyEvents[_ui32Key].push_back( keEvent );
    	}
    protected :
    	struct KB_KEY_EVENT {
    		KB_KEY_EVENTS keEvent;
    		unsigned long long ui64Time;
    	};
    	CCriticalSection m_csCritic;
    	CTime m_tTime;
    	std::vector<KB_KEY_EVENT> m_keKeyEvents[KB_TOTAL_KEYS];
    };

    LRESULT CALLBACK CWindow::WindowProc( HWND _hWnd, UINT _uMsg, WPARAM _wParam, LPARAM _lParam ) {
    	//Assume the CWindow instance was associated with the HWND (e.g., via ::SetWindowLongPtrW()).
    	CWindow * pwThis = reinterpret_cast<CWindow *>(::GetWindowLongPtrW( _hWnd, GWLP_USERDATA ));
    	switch ( _uMsg ) {
    		case WM_KEYDOWN : {
    			pwThis->KeyboardBuffer().OnKeyDown( static_cast<unsigned int>(_wParam) );
    			return 0L;
    		}
    		case WM_KEYUP : {
    			pwThis->KeyboardBuffer().OnKeyUp( static_cast<unsigned int>(_wParam) );
    			return 0L;
    		}
    	}
    	return ::DefWindowProcW( _hWnd, _uMsg, _wParam, _lParam );
    }

    Now we have time-stamped events; the thread that listens for inputs is always running in the background, so it cannot interfere with our simulation directly.

    We have a buffer that holds keyboard information; what we need now is to process it in our game logical update. We saw that the responsibility of the keyboard buffer is to buffer inputs. What we want now is a way of using the keyboard on the game side, requesting information such as "For how long was the key pressed?", "What is the current duration of the key?", etc. Instead of logging all the inputs to be processed on the game side (which is the correct way, and trivial too), we'll keep it simple here and use the keyboard buffer to update a keyboard class, which holds all key states and durations for our simple keyboard interface that can be accessed by the game.

    class CKeyboard {
            friend class CKeyboardBuffer;
    public :
    	inline bool KeyIsDown( unsigned int _ui32Key ) const {
    		return m_kiCurKeys[_ui32Key].bDown;
    	}
    	inline unsigned long long KeyDuration( unsigned int _ui32Key ) const {
    		return m_kiCurKeys[_ui32Key].ui64Duration;
    	}
    protected :
    	struct KB_KEY_INFO {
    		/* The key is down. */
    		bool bDown;
    		/* The time the key was pressed. This is needed to calculate its duration. */
    		unsigned long long ui64TimePressed;
    		/* This should be logged but it's here just for simplicity. */
    		unsigned long long ui64Duration;
    	};
    	KB_KEY_INFO m_kiCurKeys[CKeyboardBuffer::KB_TOTAL_KEYS];
    	KB_KEY_INFO m_kiLastKeys[CKeyboardBuffer::KB_TOTAL_KEYS];
    };

    The keyboard is now able to be used as our final keyboard interface in the game state, but we still need to transfer the data coming from the keyboard buffer. We will give our game an instance of CKeyboardBuffer. Each logical update, we request all keyboard events up to the current game logical time from the thread-safe window-side keyboard buffer and transfer them to our game-side keyboard buffer; then we update the game-side keyboard that will be used by the game. We'll implement two functions in our keyboard buffer: one that transfers the thread-safe inputs and another that updates a keyboard with its current key events.

    void CKeyboardBuffer::UpdateKeyboardBuffer( CKeyboardBuffer& _kbOut, unsigned long long _ui64MaxTimeStamp ) {
    	CLocker lLocker( m_csCritic ); //Enter our critical section.
    	for ( unsigned int I = KB_TOTAL_KEYS; I--; ) {
    		std::vector<KB_KEY_EVENT>& vKeyEvents = m_keKeyEvents[I];
    		for ( std::vector<KB_KEY_EVENT>::iterator J = vKeyEvents.begin(); J != vKeyEvents.end(); ) {
    			const KB_KEY_EVENT& keEvent = *J;
    			if ( keEvent.ui64Time < _ui64MaxTimeStamp ) {
    				_kbOut.m_keKeyEvents[I].push_back( keEvent );
    				J = vKeyEvents.erase( J ); //Eat the key event. This is not optimized.
    			}
    			else {
    				++J;
    			}
    		}
    	}
    } //Leave our critical section.

    void CKeyboardBuffer::UpdateKeyboard( CKeyboard& _kKeyboard, unsigned long long _ui64CurTime ) {
    	for ( unsigned int I = KB_TOTAL_KEYS; I--; ) {
    		CKeyboard::KB_KEY_INFO& kiCurKeyInfo = _kKeyboard.m_kiCurKeys[I];
    		CKeyboard::KB_KEY_INFO& kiLastKeyInfo = _kKeyboard.m_kiLastKeys[I];
    		std::vector<KB_KEY_EVENT>& vKeyEvents = m_keKeyEvents[I];
    		for ( std::vector<KB_KEY_EVENT>::iterator J = vKeyEvents.begin(); J != vKeyEvents.end(); ++J ) {
    			const KB_KEY_EVENT& keEvent = *J;
    			if ( keEvent.keEvent == KE_KEYDOWN ) {
    				if ( kiLastKeyInfo.bDown ) {
    					//Auto-repeat; the key is already down, nothing to do.
    				}
    				else {
    					//The time that the key was pressed.
    					kiCurKeyInfo.bDown = true;
    					kiCurKeyInfo.ui64TimePressed = keEvent.ui64Time;
    				}
    			}
    			else {
    				//Calculate the total duration of the key press.
    				kiCurKeyInfo.bDown = false;
    				kiCurKeyInfo.ui64Duration = keEvent.ui64Time - kiCurKeyInfo.ui64TimePressed;
    			}
    			kiLastKeyInfo.bDown = kiCurKeyInfo.bDown;
    			kiLastKeyInfo.ui64TimePressed = kiCurKeyInfo.ui64TimePressed;
    			kiLastKeyInfo.ui64Duration = kiCurKeyInfo.ui64Duration;
    		}
    		if ( kiCurKeyInfo.bDown ) {
    			//The key is being held. Update its duration.
    			kiCurKeyInfo.ui64Duration = _ui64CurTime - kiCurKeyInfo.ui64TimePressed;
    		}
    		vKeyEvents.clear(); //Clear the buffer for the next request.
    	}
    }

    Now we can request up-to-time inputs and use them in a game logical update. Example:

    bool CGame::Tick() {
    	m_tRenderTime.Update(); //Update by the real elapsed time.
    	UINT64 ui64CurMicros = m_tRenderTime.CurMicros();
    	while ( ui64CurMicros - m_tLogicTime.CurTime() > FIXED_TIME_STEP ) {
    		m_tLogicTime.UpdateBy( FIXED_TIME_STEP ); //Advance the game logic time.
    		UINT64 ui64CurGameTime = m_tLogicTime.CurTime();
    		m_pkbKeyboardBuffer->UpdateKeyboardBuffer( m_kbKeyboardBuffer, ui64CurGameTime ); //The window keyboard buffer pointer.
    		m_kbKeyboardBuffer.UpdateKeyboard( m_kKeyboard, ui64CurGameTime ); //Our non-thread-safe game-side buffer updates our keyboard with its key events.
    		UpdateGameState(); //We can use m_kKeyboard now at any time in our game state.
    	}
    	return true;
    }


    I've seen many people asking how the time class is implemented in order to get everything working, so I'll post it here.

    The time class handles time in microseconds in order to avoid any unnecessary conversions between frames. Small intervals are converted only when you have the microseconds, and they're not accumulated. It's incredible how people tend to accumulate time in floating-point numbers, forgetting that time is unbounded and should be accumulated in the most precise form available, since the computer is finite. Precision is not an optimization; it is mandatory.

    #ifndef __TIME_H__
    #define __TIME_H__

    typedef unsigned long long UINT64;

    class CTime {
    public :
    	CTime();

    	void Update();
    	void UpdateBy( UINT64 _ui64Ticks );

    	inline UINT64 CurTime() const { return m_ui64CurTime; }
    	inline UINT64 CurMicros() const { return m_ui64CurMicros; }
    	inline UINT64 DeltaMicros() const { return m_ui64DeltaMicros; }
    	inline float DeltaSecs() const { return m_fDeltaSecs; }
    	inline void SetResolution( UINT64 _ui64Resolution ) { m_ui64Resolution = _ui64Resolution; }

    	inline CTime& operator=( const CTime& _tTime ) {
    		m_ui64LastRealTime = _tTime.m_ui64LastRealTime;
    		return (*this);
    	}
    protected :
    	UINT64 RealTime() const;

    	UINT64 m_ui64Resolution;
    	UINT64 m_ui64CurTime;
    	UINT64 m_ui64LastTime;
    	UINT64 m_ui64LastRealTime;
    	UINT64 m_ui64CurMicros;
    	UINT64 m_ui64DeltaMicros;
    	float m_fDeltaSecs;
    };
    #endif //#ifndef __TIME_H__

    #include "CTime.h"
    #include <Windows.h>

    CTime::CTime() : m_ui64Resolution( 0ULL ), m_ui64CurTime( 0ULL ), m_ui64LastTime( 0ULL ), m_ui64LastRealTime( 0ULL ),
    	m_ui64CurMicros( 0ULL ), m_ui64DeltaMicros( 0ULL ), m_fDeltaSecs( 0.0f ) {
    	::QueryPerformanceFrequency( reinterpret_cast<LARGE_INTEGER*>(&m_ui64Resolution) );
    	m_ui64LastRealTime = RealTime();
    }

    UINT64 CTime::RealTime() const {
    	UINT64 ui64Ret;
    	::QueryPerformanceCounter( reinterpret_cast<LARGE_INTEGER*>(&ui64Ret) );
    	return ui64Ret;
    }

    void CTime::Update() {
    	UINT64 ui64TimeNow = RealTime();
    	UINT64 ui64DeltaTime = ui64TimeNow - m_ui64LastRealTime;
    	m_ui64LastRealTime = ui64TimeNow;
    	UpdateBy( ui64DeltaTime );
    }

    void CTime::UpdateBy( UINT64 _ui64Ticks ) {
    	m_ui64LastTime = m_ui64CurTime;
    	m_ui64CurTime += _ui64Ticks;

    	UINT64 ui64LastMicros = m_ui64CurMicros;
    	m_ui64CurMicros = m_ui64CurTime * 1000000ULL / m_ui64Resolution;
    	m_ui64DeltaMicros = m_ui64CurMicros - ui64LastMicros;
    	m_fDeltaSecs = m_ui64DeltaMicros * static_cast<float>(1.0 / 1000000.0);
    }

    As you can see, the delta in seconds (which is useful for physics, for instance) is converted only after the time has been accumulated in microseconds. If the delta time were derived from an accumulated floating-point number, you'd be losing precision every frame.

    There is something you should notice here. Remember that I said the input time should be synchronized with the game logic time in order to eat the right amount of input? The assignment operator overload takes care of that; all that is needed is an assignment. But if you need to convert the time to microseconds or seconds, you need to set the resolution of the game logic timer so that one tick equals one microsecond; that way you get the right amount of microseconds, milliseconds, seconds, etc. All we need to do is set the game logic time resolution to 1000000ULL ticks per second.

    //Before the game starts. InputBuffer stands for every possible type of device. We synchronize every timer.
    //Let's add a define over here. You may use this constant during the game.
    #ifndef ONE_MICROSECOND
    #define ONE_MICROSECOND 1000000ULL
    #endif //#ifndef ONE_MICROSECOND

    void CEngine::Init(/*...*/) {
            m_pgGame->m_tLogicTime.SetResolution( ONE_MICROSECOND ); //One tick = one microsecond.
            m_pwWindow->KeyboardBuffer().m_tTime = m_pgGame->m_tRenderTime = m_pgGame->m_tLogicTime;
    }

    This is important. If you don't do this, you will have a different frequency each time your computer starts. Any timer that accumulates time like this should have one microsecond as its resolution.

    That's it. The time class isn't hard to learn just by reading it; at this point (having a fixed time-step) you should already know how computer timing works.

    What we've done here is divide the input system into small pieces synchronized with our logical game simulation. I haven't optimized anything, because it wasn't my intention to write production code here, and many articles never get into the implementation because it depends on how many devices you have. Once you have all the information, you can start re-mapping or logging it. For the moment, what matters is that the input is synchronized with the logical game time, and that the game is able to interface with it without losing input information.

    Send me a message if you have any questions and I'll answer as best I can, with or without code (but at least try to visualize the solution by yourself first).

    Thanks for stopping by.

    • Feb 19 2015 01:18 PM
    • by Irlan
  9. 4 Simple Things I Learned About the Industry as a Beginner

    For the last year or so I have been working professionally at a AAA mobile game studio. This year has been a huge eye-opener for me. Although this article is really short and concise, I'd really like to share these (seemingly minor) tips with anyone who is thinking about joining, or has perhaps already joined and is starting out in, the professional game development industry.

    All my teen years I had only one dream: to become a professional game developer, and it finally happened. I was most excited, but as it turns out, I was not ready. At the time of this post, I'm still a student, hopefully getting my bachelor's degree in 2016. Juggling school and a corporate job (because it is a corporation, after all) has been really damaging to my grades and to my social life, but hey, I knew what I signed up for. In the meantime I met lots of really cool and talented people, from whom I have learned tons. Not necessarily programming skills (although I did manage to pick up quite a few tricks there as well), but how to behave in such an environment, how to handle stress, how to speak with non-technical people about technical things. These turned out to be essential skills, in some cases way more important than the technical skills you have to have in order to be successful at your job. Now, don't misunderstand me: the fact that I wasn't ready doesn't mean I got fired. In fact, I really enjoyed and loved the environment of pro game development, but I simply couldn't spend so much time anymore; it had started to become a health issue. I got a new job, still programming, although not game development: a lot more laid back, in a totally different industry. I plan to return to the game development area as soon as possible.

    So, in summary, I’d like to present a few main points of interest for those who are new to the industry, or are maybe contemplating becoming game developers in a professional area.

    1. It’s not what you’ve been doing so far

    So far you’ve been pretty much doing what projects you wanted, how you wanted them. It will not be the case anymore. There are deadlines, there are expectations to be met, there is profit that needs to be earned. Don’t forget that after all, it is a business. You will probably do tasks which you are interested in and you love them, but you will also do tedious, even boring ones.

    2. Your impact will not be as great as it has been before

    Ever implemented a whole game? Perhaps whole systems? Yeah, it's different here. You will probably only get to work on parts of systems, or maybe just tweak them and fix bugs (especially as a beginner). These games are way bigger than what we're used to as hobbyist game developers; you have to adapt to the situation. Most of the people working on a project specialize in some area (networking, graphics, etc.). Also, I noticed that many of the people on the team - including myself; I always went with rendering engines, that's my thing :D - have never done a full game by themselves (and that is okay).

    3. You WILL have to learn to talk properly with managers/leads, designers, artists

    If you’re working alone, you’re a one man team and you’re the god of your projects. In a professional environment talking to non-technical people about technical things may very well make the difference between you getting to the next level, or getting fired. It is an essential skill that can be easily learned through experience. In the beginning however, keep your head low.

    4. You WILL have to put in extra effort

    If you’re working on your own hobby project, if a system gets done 2 days later than you originally wanted it to, it’s not a big deal. However, in this environment, it could set back the whole team. There will be days when you will have to work overtime, for the sake of the project and your team.

    Essentially, I could boil all this down to two words: COMMUNICATION and TEAMWORK.

    If you really enjoy developing games, go for the professional environment; if you're not sure about it, avoid it. All of the people who manage to be successful here do it by loving what they do. Love it or quit it.

    14 Jan 2015: Initial release

    • Jan 20 2015 12:15 PM
    • by Azurys
  10. Banshee Engine Architecture - Introduction

    This article is imagined as part of a larger series that will explain the architecture and implementation details of Banshee game development toolkit. In this introductory article a very general overview of the architecture is provided, as well as the goals and vision for Banshee. In later articles I will delve into details about various engine systems, providing specific implementation information.

    The intent of the articles is to teach you how to implement various engine systems, see how they integrate into a larger whole, and give you an insight into game engine architecture. I will be covering various topics, from low level run time type information and serialization systems, multithreaded rendering, general purpose GUI system, input handling, asset processing to editor building and scripting languages.

    Since Banshee is very new and most likely unfamiliar to the reader I will start with a lengthy introduction.

    What is Banshee?

    It is a free & modern multi-platform game development toolkit. It aims to provide a simple yet powerful environment for creating games and other graphical applications. A wide range of features is available, from a math and utility library, to DirectX 11 and OpenGL render systems, all the way to asset processing, a fully featured editor and C# scripting.

    At the time of writing, the project is in active development, but its core systems are considered feature-complete and a fully working version of the engine is available online. In its current state it can be compared to libraries like SDL or XNA, but with a wider scope. Work is progressing on various high-level systems, as described by the list of features below.

    Currently available features

    • Design
      • Built using C++11 and modern design principles
      • Clean layered design
      • Fully documented
      • Modular & plugin based
      • Multiplatform ready
    • Renderer
      • DX9, DX11 and OpenGL 4.3 render systems
      • Multi-threaded rendering
      • Flexible material system
      • Easy to control and set up
      • Shader parsing for HLSL9, HLSL11 and GLSL
    • Asset pipeline
      • Easy to use
      • Asynchronous resource loading
      • Extensible importer system
      • Available importer plugins for:
        • FBX, OBJ, DAE meshes
        • PNG, PSD, BMP, JPG, ... images
        • OTF, TTF fonts
        • HLSL9, HLSL11, GLSL shaders
    • Powerful GUI system
      • Unicode text rendering and input
      • Easy to use layout based system
      • Many common GUI controls
      • Fully skinnable
      • Automatic batching
      • Support for texture atlases
      • Localization
    • Other
      • CPU & GPU profiler
      • Virtual input
      • Advanced RTTI system
      • Automatic object serialization/deserialization
      • Debug drawing
      • Utility library
        • Math, file system, events, thread pool, task scheduler, logging, memory allocators and more

    Features coming soon (2015 & 2016)

    • WYSIWYG editor
      • All in one editor
      • Scene, asset and project management
      • Play-in-editor
      • Integration with scripting system
      • Fully customizable for custom workflows
      • One click multi-platform building
    • C# scripting
      • Multiplatform via Mono
      • Full access to .NET library
      • High level engine wrapper
    • High quality renderer
      • Fully deferred
      • Physically based shading
      • Global illumination
      • Gamma correct and HDR rendering
      • High quality post processing effects
    • 3rd party physics, audio, video, network and AI system integration
      • FMOD
      • Physx
      • Ogg Vorbis
      • Ogg Theora
      • Raknet
      • Recast/Detour


    You might want to retrieve the project source code to better follow the articles to come; in each article I will reference source code files that you may view for exact implementation details. I will touch on the features currently available and will update the articles as new features are released.

    You may download Banshee from its GitHub page:


    The ultimate goal for Banshee is to be a fully featured toolkit that is easy to use, powerful, well designed and extensible, so that it may rival AAA engine quality. I'll try to touch upon each of those factors and let you know exactly how it attempts to accomplish that.

    Ease of use

    Banshee's interface (both code- and UI-wise) was created to be as simple as possible without sacrificing customizability. Banshee is designed in layers, with the lowest layers providing the most general-purpose functionality, while higher layers reference lower layers and provide more specialized functionality. Most people will be happy with the simpler, more specialized functionality, but the lower-level functionality is there if they need it, and it wasn't designed as an afterthought either.

    The highest level is imagined as a multi-purpose editor that deals with scene editing, asset import and processing, animation, particles, terrain and similar. The entire editor is designed to be extensible without deep knowledge of the engine; a special scripting interface is provided just for the editor. Each game requires its own custom workflow and set of tools, which is reflected in the editor design.

    One layer below lies the C# scripting system. C# allows you to write the high-level functionality of your project more easily and safely. It provides access to the large .NET library and, most importantly, has extremely fast iteration times, so you may test your changes within seconds of making them. All compilation is done in the editor and you may jump into the game immediately after it is done; this even applies if you are modifying the editor itself.


    Below the C# scripting layer lie two separate, speedy C++ layers that give you direct access to the engine core, the renderer and the rendering APIs. Not everyone's performance requirements can be satisfied at the high level, and that's why even the low-level interfaces had a lot of thought put into them.

    Banshee is a fully multithreaded engine designed with performance in mind. The renderer thread runs completely separately from the rest of your code, giving you maximum CPU resources for the best graphical fidelity. Resources are loaded asynchronously, avoiding stalls, and internal buffers and systems are designed to avoid CPU-GPU synchronization points.

    Additionally Banshee comes with built-in CPU and GPU profilers that monitor speed, memory allocations and resource usage for squeezing the most out of your code.

    Power doesn’t only mean speed, but also features. Banshee isn’t just a library, but aims to be a fully featured development toolkit. This includes an all-purpose editor, a scripting system, integration with 3rd party physics, audio, video, networking and AI solutions, high fidelity renderer, and with the help of the community hopefully much more.


    A major part of Banshee is the extensible all-purpose editor. Games need custom tools that make development easier and allow your artists and designers to do more. This can range from simple data input for game NPC stats to complex 3D editing tools for your in-game cinematics. The GUI system was designed to make it as easy as possible to design your own input interfaces, and a special scripting interface has been provided that exposes the majority of editor functionality for variety of other uses.

    Aside from being a big part of the editor, extensibility is also prevalent throughout the lower layers of the engine. Anything not considered core is built as a plugin that inherits a common abstract interface. This means you can build your own plugins for various engine systems without touching the rest of the engine. For example, the DX9, DX11 and OpenGL render system APIs are all built as plugins, and you may switch between them with a single line of code.

    Quality design

    A great deal of effort has been spent designing Banshee the right way, with no shortcuts. The entire toolkit, from the low-level file system library to the GUI system and the editor, has been designed and developed from scratch following modern design principles and using modern technologies, solely for the purposes of Banshee.

    It has been made as modular and decoupled as possible, to allow people to easily replace or update engine systems. The plugin-based architecture keeps all the specialized code outside of the engine core, which makes it easier to tailor the engine to your own needs by extending it with new plugins. It also makes the engine easier to learn, since you have clearly defined boundaries between systems; this is further supported by the layered architecture, which reduces class coupling and makes the direction of dependencies even clearer. Additionally, every non-trivial method, from the lowest to the highest layer, is fully documented.

    From its inception it has been designed to be a multi-platform and a multi-threaded engine.

    Platform-specific functionality is kept to a minimum and is cleanly encapsulated in order to make porting to other platforms as easy as possible. This is further supported by its render API interface which already supports multiple popular APIs, including OpenGL.

    Its multithreaded design makes communication between the main and render threads clear and allows you to perform rendering operations from either, depending on developer preference. Resource initialization between the two threads is handled automatically, which enables operations like asynchronous resource loading. Async operation objects provide functionality similar to C++ future/promise and C# async/await concepts. Additionally, you are supplied with tools like the task scheduler that let you quickly set up parallel operations yourself.
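    To make the future/promise pattern mentioned above concrete, here is a minimal sketch using only the C++ standard library (this is not Banshee's actual API; `loadResourceBlocking` is a stand-in I made up for an expensive load operation):

    ```cpp
    #include <future>
    #include <iostream>
    #include <string>

    // Stand-in for expensive disk I/O and parsing.
    std::string loadResourceBlocking(const std::string& path)
    {
        return "contents of " + path;
    }

    int main()
    {
        // Kick off the load on another thread; this thread keeps running.
        std::future<std::string> pending =
            std::async(std::launch::async, loadResourceBlocking, "Dragon.fbx");

        // ... do other work here while the load is in flight ...

        // Synchronize only at the point of use; get() blocks until ready.
        std::cout << pending.get() << std::endl;
        return 0;
    }
    ```

    An engine's async resource handles behave in essentially this way: the caller pays the synchronization cost only when the result is actually needed.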


    Now that you have an idea of what Banshee is trying to accomplish, I will describe the general architecture in a bit more detail, starting with the top-level design: the four primary layers shown in the image below.


    The layers were created for two reasons:
    • To give developers a chance to pick the level of functionality they need. Some people will want just core and utility and start working on their own engine while others might be just interested in game development and will stick with the editor layer.
    • To decouple code. Lower layers do not know about higher levels and low level code never caters to specialized high level code. This makes the design cleaner and forces a certain direction for dependencies.
    Lower layers were designed to be more general purpose than higher ones. They provide very general techniques usable in a variety of situations, and they attempt to cater to everyone. Higher layers, on the other hand, provide much more focused and specialized techniques. This might mean relying on specific rendering APIs, platforms or plugins, but it also means using newer, fancier and perhaps not as widely accepted techniques (e.g. some new rendering algorithm).


    This is the lowest layer of the engine. It is a collection of very decoupled, separate systems that are likely to be used throughout all of the higher layers: essentially a collection of tools that are in no way tied into a larger whole. Most of the functionality isn’t even game-engine specific, like providing file-system access, file path parsing or events. Other things that belong here include the math library, object serialization and the RTTI system, and threading primitives and managers, among various others.
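    To give a feel for the kind of general-purpose utility functionality this layer provides, here is a small file-path-parsing example; it uses `std::filesystem` from the C++ standard library as a stand-in, since Banshee's own path class isn't shown in this article:

    ```cpp
    #include <filesystem>
    #include <iostream>

    int main()
    {
        // Decompose an asset path into its parts, the sort of operation
        // a utility layer typically wraps for the rest of the engine.
        std::filesystem::path assetPath = "Assets/Models/Dragon.fbx";

        std::cout << assetPath.stem().string() << "\n";        // base name
        std::cout << assetPath.extension().string() << "\n";   // file type
        std::cout << assetPath.parent_path().string() << "\n"; // directory
        return 0;
    }
    ```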


    It is the second-lowest layer and the first that starts to take the shape of an actual engine. This layer provides some very game-specific modules tied into a coherent whole, but it tries to be very generic and offer something every engine might need, instead of focusing on specialized techniques. Render API wrappers exist here, but the actual render APIs are implemented as plugins, so you are not constrained to a specific subset. The scene manager, renderer, resource management, importers and others all belong here, and all are implemented abstractly so that they can be implemented or extended by higher layers or plugins.


    This is the second-highest layer and the first with a more focused goal. It is built upon BansheeCore but relies on a specific subset of plugins and implements systems like the scene manager and renderer in a specific way. For example, the DirectX 11 and OpenGL render systems are referenced by name, as is the Mono scripting system, among others. A renderer following a specific set of techniques and algorithms that determine how all objects are rendered also belongs here.


    And finally, the top layer is the editor. Although it is named as such, it also relies heavily on the scripting system and the C# interface, as those are primarily used through the editor. It is an extensible multi-purpose editor that provides functionality for level editing, compiling script code, editing script objects, playing in the editor, importing assets and publishing the game, but also much more, as it can easily be extended with your own custom sub-editors. Want a shader node editor? You can build one yourself without touching the complex bits of the engine; there is an entire scripting interface built solely for editor extensions.

    The figure below shows a more detailed structure of each layer as it is currently designed (expect it to change as new features are added). Also note the plugin slots, which allow you to extend the engine without actually changing the core.


    In future chapters I will explain the major systems in each of the layers. These explanations should give you insight into how to use them, but also reveal why and how they were implemented. First, however, I’d like to focus on a quick guide to getting started with your first Banshee project, to give readers a bit more perspective (and some code!).

    Example application

    This section is intended to show you how to create a minimal application in Banshee. The example will primarily use the BansheeEngine layer, which is the high-level C++ interface. Otherwise inclined users may use the lower-level C++ interface and access the rendering API directly, or use the higher-level C# scripting interface. We will delve into those interfaces in more detail in later chapters.

    One important thing to mention is that I will not give instructions on how to set up the Banshee environment, and I will also omit some less relevant code. This chapter is intended just to give some perspective; the interested reader can head to the project website and check out the example project or the provided tutorial.


    Each Banshee program starts with the Application class. It is the primary entry point into Banshee, handling startup, shutdown and the primary game loop. A minimal application that just creates an empty window looks something like this:

    RENDER_WINDOW_DESC renderWindowDesc;
    renderWindowDesc.videoMode = VideoMode(1280, 720);
    renderWindowDesc.title = "My App";
    renderWindowDesc.fullscreen = false;
    Application::startUp(renderWindowDesc, RenderSystemPlugin::DX11);

    When starting up the application you are required to provide a structure describing the primary render window, as well as which render system plugin to use. When startup completes, your render window will show up, and you can then run your game code by calling runMainLoop. In this example we haven’t set up any game code, so the loop will just run the internal engine systems. When the user is done with the application, the main loop returns and shutdown is performed: all objects are cleaned up and plugins unloaded.


    Since our main loop isn’t currently doing much, we will want to add some game code to perform certain actions. However, in order for any of those actions to be visible, we need some resources to display on the screen: at least a 3D model and a texture. To get resources into Banshee you can either load a preprocessed resource using the Resources class, or import a resource from a third-party format using the Importer class. We'll import a 3D model from an FBX file and a texture from a PSD file.

    HMesh dragonModel = Importer::instance().import<Mesh>("C:\\Dragon.fbx");
    HTexture dragonTexture = Importer::instance().import<Texture>("C:\\Dragon.psd");

    Game code

    Now that we have some resources, we can add some game code to display them on the screen. Every bit of game code in Banshee is created in the form of Components. Components are attached to SceneObjects, which can be positioned and oriented around the scene. You will often create your own components, but for this example we only need two built-in component types: Camera and Renderable. Camera sets up a viewport into the scene and outputs what it sees to a target surface (our window in this example), and Renderable renders a 3D model with a specific material.

    HSceneObject sceneCameraSO = SceneObject::create("SceneCamera");
    HCamera sceneCamera = sceneCameraSO->addComponent<Camera>(window);
    sceneCameraSO->setPosition(Vector3(40.0f, 30.0f, 230.0f));
    sceneCameraSO->lookAt(Vector3(0, 0, 0));
    HSceneObject dragonSO = SceneObject::create("Dragon");
    HRenderable renderable = dragonSO->addComponent<Renderable>();

    I have skipped material creation, as it will be covered in a later chapter; it is enough to say that it involves importing a couple of GPU programs (e.g. shaders), using them to create a material, and then attaching the previously loaded texture, among a few other minor things.

    You can check out the source code and the ExampleProject for a more comprehensive introduction; I didn't want to turn this article into a tutorial when there already is one.


    This concludes the introduction. I hope you enjoyed this article, and I'll see you next time, when I'll be talking about implementing a run-time type information system in C++, as well as a flexible serialization system that handles everything from saving simple config files to entire resources and even entire level hierarchies.

  11. Setting Realistic Deadlines, Family, and Soup

    Jan. 23, 2015. This is my goal. My deadline. And I'm going to miss it.

    Let me explain. As I write this article, I am also making soup. Trust me, it all comes together at the end.

    Part I: Software Estimation 101

    I've been working on Archmage Rises full time for three months and part time about 5 months before that. In round numbers, I’m about 1,000 hours in.

    You see, I have been working without a specific deadline because of a little thing I know from business software called the “Cone of Uncertainty”:


    In business software, the customer shares an idea (or “need”)—and 10 out of 10 times, the next sentence is: "When will you be done, and how much will it cost?"

    Looking at the cone diagram, when is this estimate most accurate? When you are done! You know exactly how long it takes and how much it will actually cost when you finish the project. When do they want the estimate? At the beginning—when accuracy is nil! For this reason, I didn't set a deadline; anything I said would be wrong and misleading to all involved.

    Even when my wife repeatedly asked me.

    Even when the head of Alienware called me and asked, “When will it ship?”

    I focused on moving forward in the cone so I could be in a position to estimate a deadline with reasonable accuracy. In fact, I have built two prototypes which prove the concept and test certain mechanics. Then I moved into the core features of the game.

    Making a game is like building a sports car from a kit.
    … but with no instructions
    … and many parts you have to build yourself (!)

    I have spent the past months making critical pieces. As each is complete, I put it aside for final assembly at a later time. To any outside observer, it looks nothing like a car—just a bunch of random parts lying on the floor. Heck! To ME, it looks like a bunch of random parts on the floor. How will this ever be a road worthy car?

    Oh, hold on. Gotta check the soup.
    Okay, we're good.

    This week I finished a critical feature of my story editor/reader, and suddenly the heavens parted and I could see how all the pieces fit together! Now I'm in a place where I can estimate a deadline.

    But before I get into that, I need to clarify what deadline I'm talking about.

    Vertical Slice, M.V.P. & Scrum

    Making my first game (Catch the Monkey), I learned a lot of things developers should never do. In my research after that project, I learned how game-making is unique and different from business software (business software has to work correctly; games have to work correctly and be fun) and requires a different approach.

    Getting to basics, a vertical slice is a short, complete experience of the game. Imagine you are making Super Mario Bros. You build the very first level (World 1-1) with complete mechanics, power ups, art, music, sound effects, and juice (polish). If this isn't fun, if the mechanics don't work, then you are wasting your time building the rest of the game.

    The book Lean Startup has also greatly influenced my thinking on game development. In it, the author argues to fail quickly, pivot, and then move in a better direction. The mechanism for failing quickly is to build the Minimum Viable Product (MVP). Think of web services like HootSuite, Salesforce, or Amazon. Rather than build the "whole experience," you build the absolute bare minimum that can function, so that you can test it on real customers and see if the business idea has any traction. I see the Vertical Slice and the MVP as interchangeable labels for the same idea.

    A fantastic summary of Scrum.

    Finally, Scrum is the iterative, incremental software development methodology I think works best for games (I'm quite familiar with the many alternatives). Work is captured in User Stories and (in the pure form) estimated in Story Points. By abstracting the estimates, the cone of uncertainty is built in. I like that. Scrum also says that when you build something, you build it completely and always leave the game able to run. Meaning, you don't get a feature mostly working and then move on to another task; you make it 100% rock solid: built, tested, bug-fixed. You do this because it eliminates Technical Debt.


    What's technical debt? Well, like real debt, it is something you have to pay later. So if the story engine has several bugs in it but I leave them to fix "later," that is technical debt I will have to pay at some point. People who get things to 90% and then move on to the next feature create tons of technical debt in a project. This seriously undermines the ability to complete the project, because the amount of technical debt is completely unknown and likely to hamper forward progress. I have experienced this personally on my projects, and I have heard it is a key contributor to "crunch" in the game industry.

    Hold on: Gotta go put onions and peppers in the soup now.

    A second and very important reason to never accrue technical debt is it completely undermines your ability to estimate.

    Let's say you are making the Super Mario Bros. World 1-1 vertical slice. Putting aside knowing whether your game is fun or not, the real value of completing the slice is the ability to estimate the total effort and cost of the project with reasonable accuracy. So let's say World 1-1 took 100 hours to complete across the programmer, designer, and artist, with a cost of $1,000. If the game design calls for 30 levels, you have a fact-based approach to accurate estimating: it will take 3,000 hours and $30,000. The reverse is also helpful: let's say you only have $20,000. Right off the bat you know you can only make 20 levels. See how handy this is?!
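    The slice-based arithmetic above is simple enough to sketch in a few lines of code (the numbers are the hypothetical ones from the example, not real project data):

    ```cpp
    #include <iostream>

    int main()
    {
        // Measured on the completed World 1-1 vertical slice.
        const double hoursPerLevel = 100.0;
        const double costPerLevel  = 1000.0; // dollars

        // Forward: scale the measured slice up to the full design.
        const int plannedLevels = 30;
        std::cout << "Estimated hours: " << hoursPerLevel * plannedLevels << "\n"; // 3000
        std::cout << "Estimated cost: $" << costPerLevel * plannedLevels << "\n";  // 30000

        // Reverse: with a fixed budget, how many levels can we afford?
        const double budget = 20000.0;
        std::cout << "Affordable levels: "
                  << static_cast<int>(budget / costPerLevel) << "\n"; // 20
        return 0;
    }
    ```

    The catch, as the next paragraphs explain, is that both directions are only as trustworthy as the slice measurements themselves.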

    Still, you can throw it all out the window when you allow technical debt.

    Let me illustrate:
    Let's say the artist didn't do complete work: some corners were cut and treated as "just a prototype," so only 80% of the effort was expended. Let's say the programmer left some bugs and hardcoded a section just to get it working for the slice; call it 75% of the real total. Now your estimates will be way off, and the more iterations (levels) and scale (employees) you multiply your vertical slice cost by, the worse off you are. This is a sure-fire way to doom your project.

    So when will you be done?

    So bringing this back to Archmage Rises, I now have built enough of the core features to be able to estimate the rest of the work to complete the MVP vertical slice. It is crucial that I get the slice right and know my effort/costs so that I can see what it will take to finish the whole game.

    I set up the seven remaining sprints into my handy dandy SCRUM tool Axosoft, and this is what I got:


    That wasn't very encouraging. :-) One reason is that as I have ideas, or interact with fans on Facebook or the forums, I write user stories in Axosoft so I don't forget them. This means the number of user stories has grown since I began tracking the project in August, and it's been growing faster than I have been completing them. So the software is telling the truth: based on my past performance, I will never finish this project.

    I went in and moved all the "ideas" out of the actual scheduled sprints with concrete work tasks, and this is what I got:


    January 23, 2015

    This is when the vertical slice is estimated to be complete. I am just about to tell you why it's still wrong, but first I have to add cream and milk to the soup. Ok! Now that it's happily simmering away, I can get to the second part.

    Part II: Scheduling the Indie Life

    I am 38 and have been married to a beautiful woman for 15 years. Over these years, my wife has heard ad nauseam that I want to make video games. When she married me, I was making pretty good coin leading software projects for large e-commerce companies in Toronto. I then went off on my own. We had some very lean years as I built up my mobile software business.

    We can't naturally have kids, so we made a “Frankenbaby” in a lab. My wife gave birth to our daughter Claire. That was two years ago.


    My wife is a professional and also works; we make roughly the same income. So around February of this year, I went to her and said, "This Archmage thing might have legs, and I'd like to quit my job and work on it full time." My plan was to live off her—a 50% drop in household income. Oh, and on top of that, I'd also like to spend thousands upon thousands of dollars on art, music, tools, and any games that catch my fancy on Steam.

    It was a sweetheart offer, don't you think?

    I don't know what it is like to be the recipient of an amazing opportunity like this, but I think her choking and gasping for air kind of said it all. :-)

    After thought and prayer, she said, "I want you to pursue your dream. I want you to build Archmage Rises."

    Now, I write this because I have three game devs in my immediate circle, each of whom is currently working from home and living off their spouse's income. Developers have written to me asking how they can talk with their spouse about this kind of major life transition.

    Lesson 1: Get “Buy In,” not Agreement

    A friend’s wife doesn't really want him to make video games. She loves him, so when they had that air-gasping indie game sit down conversation she said, "Okay"—but she's really not on board.

    How do you think it will go when he needs some money for the game?
    Or when he's working hard on it and she feels neglected?
    Or when he originally said the game will take X months but now says it will take X * 2 months to complete?


    Yep! Fights.

    See, by not "fighting it out" initially, by one side just caving, what really happened was that one of them said, "I'd rather fight about this later than now." Well, later is going to come. Over and over again. Until the core issue is resolved.

    My friend and I believe marriage is a committed partnership for life. We're in it through thick and thin, no matter how stupid or crazy it gets. It's not roommates sharing an Internet bill; this is life together.

    So they both have to be on the same page, because the marriage is more important than any game. Things break down and go horribly wrong when the game/dream is put before the marriage. This means if she is really against it deep down, he has to be willing to walk away from the game. And he is, for her.

    One thing I got right off the bat is that my wife is 100% partnered with me on Archmage Rises. Whether it succeeds or fails, there will be no fights or "I told you so"s along the way.

    Lesson 2: Do Your Part


    So why am I making soup? Because my wife is out there working, and I’m at home. Understandably, I have taken on more of the domestic duties. That's how I show her I love her and appreciate her support. I didn't "sell" domestic duties in order to get her buy-in; it is a natural response. So with me working downstairs, I can make soup for dinner tonight, load and unload the dishwasher, watch Claire, and generally reduce the household burden on her as she takes on the bread-winning role.

    If I shirk household duties and focus solely on the game (and the game flops!), boy oh boy is there hell to pay.

    Gotta check that soup. Yep, we're good.

    Lesson 3: Do What You Say

    Claire is two. She loves to play ball with me. It's a weird game with a red nerf soccer ball where the rules keep changing from catching, to kicking, to avoiding the ball. It's basically Calvin ball. :-)


    She will come running up to my desk, pull my hand off the mouse, and say, "Play ball?!" Sometimes I'm right in the middle of tracking down a bug, but at other times I'm not that intensely involved in the task. The solution is to either play ball right now (I've timed it with a stop watch; it only holds her interest for about seven minutes), or promise her to play it later. Either way, I'm playing ball with Claire.

    And this is important, because to be a crappy dad and have a great game just doesn't look like success to me. To be a great dad with a crappy game? Ya, I'm more than pleased with that.

    Now Claire being two, she doesn't have a real grasp of time. She wants to go for a walk "outside" at midnight, and she wants to see the moon in the middle of the afternoon. So when I promise to play ball with her "later," there is close to a 0% chance of her remembering or even knowing when later is. But who is responsible in this scenario for remembering my promise? Me. So when I am able, say in between bugs or end of the work day, I'll go find her and we'll play ball. She may be too young to notice I'm keeping my promises, but when she does begin to notice I won't have to change my behavior. She'll know dad is trustworthy.

    Lesson 4: Keep the Family in the Loop like a Board of Directors

    If my family truly is partnered with me in making this game, then I have to understand what it is like from their perspective:

    1. They can't see it
    2. They can't play it
    3. They can't help with it
    4. They don't know how games are even made
    5. They have no idea if what I am making is good, bad, or both


    They are totally in the dark. Now what is a common reaction to the unknown? Fear. We generally fear what we do not understand. So I need to understand that my wife secretly fears what I'm working on won't be successful, that I'm wasting my time. She has no way to judge this unless I tell her.

    So I keep her up to date with the ebb and flow of what is going on. Good or bad. And because I tell her the bad, she can trust me when I tell her the good.

    A major turning point was the recent partnership with Alienware. My wife can't evaluate my game design, but if a huge company like Alienware thinks what I'm doing is good, that third party perspective goes a long way with her. She has moved from cautious to confident.

    The Alienware thing was a miracle out of the blue, but that doesn't mean you can't get a third party perspective on your game (a journalist?) and share it with your significant other.

    Lesson 5: Life happens. Put It in the Schedule.

    I've been scheduling software developers for 20 years. I no longer program in HTML3, but I still make schedules—even if it is just for me.

    Customers (or publishers) want their projects on the date you set. Well, actually, they want it sooner—but let's assume you've won that battle and set a reasonable date.

    If there is one thing I have learned in scheduling large team projects, it is that unknown life things happen. The only way to handle that is to put something in the schedule for it. At my mobile company, we use a rule of 5.5-hour days. That means a 40-hour-a-week employee does 27.5 hours a week of active project time; the rest is lunch, doctor appointments, meetings, phone calls with the wife, renewing their mortgage, etc. Over a 7-8 month project, there is enough buffer built in there to handle the unexpected kid sick, sudden funeral, etc.
    Also, plug in statutory holidays, one sick day a month, and any vacation time. You'll never regret including it; you'll always regret not including it.

    That's great for work, but it doesn't work for the indie at home.


    To really dig into the reasons why would be another article, so I'll just jump to the conclusion:

    1. Some days, you get stuck making soup. :-)
    2. Being at home and dealing with kids ranges from playing ball (short) to trips to the emergency room (long)
    3. Being at home makes you the "go to" family member for whatever crops up. "Oh, we need someone to be home for the furnace guy to do maintenance." Guess who writes blogs and just lost an hour of his day watching the furnace guy?
    4. There are many, many hats to wear when you’re an indie. From art direction for contract artists to keeping everyone organized, there is a constant stream of stuff outside your core discipline you'll just have to do to keep the game moving forward.
    5. Social media marketing may be free, but writing articles and responding to forum and Facebook posts takes a lot of time. More importantly, it takes a lot of energy.

    After three months, I have not been able to come up with a good rule of thumb for how much programming work I can get done in a week. I've been tracking it quite precisely for the last three weeks, and it has varied widely. My goal is to hit six hours of programming in an 8-12 hour day.

    Putting This All Together


    Oh, man! This butternut squash soup is AMAZING! I'm not much of a soup guy, and this is only my second attempt at it—but this is hands-down the best soup I've ever had at home or in a restaurant! See the endnotes for the recipe—because you aren't truly indie unless you are making a game while making soup!

    So in order to try to hit my January 23rd deadline, I need to get more programming done. One way to achieve this is to stop writing weekly dev blogs and switch to a monthly format. It's ironic that writing fewer blog posts makes it look like less progress is being made, when the opposite is true! I hope to gain back 10 hours a week by moving to a monthly format.

    I'll still keep updating the Facebook page regularly. Because, well, it's addictive. :-)

    So along the lines of Life Happens, it is about to happen to me. Again.

    We were so impressed with Child 1.0 we decided to make another. Baby Avery is scheduled to come by C-section one week from today.

    How does this affect my January 23rd deadline? Well, a lot.
    • Will baby be healthy?
    • Will mom have complications?
    • How will a newborn disrupt the disposition or sleeping schedule of a two-year-old?
    These are all things I just don't know. I'm at the front end of the cone of uncertainty again. :-)



    Agile Game Development with Scrum – great book on hows and whys of Scrum for game dev. Only about the first half is applicable to small indies.

    Axosoft SCRUM tool – Free for single developers; contact support to get a free account (it's not advertised)

    You can follow the game I'm working on, Archmage Rises, by joining the newsletter and Facebook page.

    You can tweet me @LordYabo


    Indie Game Developer's Butternut Squash Soup
    (about 50 minutes; approximately 130 calories per 250ml/cup serving)

    Dammit Jim I'm a programmer not a food photographer!

    I created this recipe as part of a challenge to my wife that I could make a better squash soup than the one she ordered in the restaurant. She agrees, this is better! It is my mashup of three recipes I found on the internet.
    • 2 butternut squash (about 3.5 pounds), seeded and quartered
    • 4 cups chicken or vegetable broth
    • 1 tablespoon minced fresh ginger (about 50g)
    • 1/4 teaspoon nutmeg
    • 1 yellow onion diced
    • Half a red pepper diced (or whole if you like more kick to your soup)
    • 1 tablespoon kosher salt
    • 1 teaspoon black pepper
    • 1/3 cup honey
    • 1 cup whipping cream
    • 1 cup milk
    Peel squash, seed, and cut into small cubes. Put in a large pot with broth on a low boil for about 30 minutes.
    Add red pepper, onion, honey, ginger, nutmeg, salt, pepper. Place over medium heat and bring to a simmer for approximately 6 minutes. Using a stick blender, puree the mixture until smooth. Stir in whipping cream and milk. Simmer 5 more minutes.

    Serve with a dollop of sour cream in the middle and sprinkling of sour dough croutons.

  12. A Room With A View

    A Viewport allows for a much larger and richer 2-D universe in your game. It allows you to zoom in, pan across, and scale the objects in your world based on what the user wants to see (or what you want them to see).

    The Viewport is a software component (written in C++ this time) that participates in a larger software architecture. UML class and sequence diagrams (below) show how these interactions are carried out.

    The algorithms used to create the Viewport are not complex. The ubiquitous line equation, y = mx + b, is all that is needed to create the effect of the Viewport. The aspect ratio of the screen is also factored in so that "squares can stay squares" when rendered.

    Beyond the basic use of the Viewport, allowing entities in your game to map their position and scale onto the display, it can also be a larger participant in the story your game tells and in the mechanics of making your game work efficiently. Theatrical camera control, level-of-detail management, and culling of graphics operations are all real-world uses of the Viewport.

    NOTE: Even though I use Box2D for my physics engine, the concepts in this article are independent of that or even using a physics engine for that matter.

    The Video

    The video below shows this in action.

    The Concept

    The world is much bigger than what you can see through your eyes. You hear a sound. Where did it come from? Over "there". But you can't see that right now. You have to move "there", look around, see what you find. Is it an enemy? A friend? A portal to the bonus round? By only showing your player a portion of the bigger world, they are goaded into exploring the parts they cannot see. This way lies a path to immersion and entertainment.

    A Viewport is a slice of the bigger world. The diagram below shows the basic concept of how this works.


    The Game World (left side) is defined to be square and in meters, the units used in Box2D. The world does not have to be square, but a square world means one less parameter to carry around and worry about, so it is convenient.

    The Viewport itself is defined as a scale factor of the Game World's width and height. The width of the Viewport is additionally scaled by the aspect ratio of the screen, which is also convenient: if the Viewport were "square" like the world, it would have to lie either completely inside the non-square Device Screen or with part of it completely outside the screen, making it unusable for the useful "IsInView" operations (see Other Uses at the end).

    The "Entity" is deliberately shown as partially inside the Viewport. When displayed on the Device Screen, it is also only shown as partially inside the view. Its aspect on the screen is not skewed by the size of the screen relative to the world size. Squares should stay squares, etc.

    The "nuts and bolts" of the Viewport are linear equations mapping the two corner points (top left, bottom right) in the coordinate system of the world onto the screen coordinate system. From a "usage" standpoint, it maps the positions in the simulated world (meters) to a position on the screen (pixels). There will also be times when it is convenient to go the other way and map from pixels to meters. The Viewport class handles the math for the linear equations, computing them when needed, and also provides interfaces for the pixel-to-meter or meter-to-pixel transformations.

    Note that the size of the Game World used is also deliberately left ambiguous. While individual Box2D objects should be between 0.1m and 10m, the world itself can be much larger as needed, within the realistic limits of the float32 precision used in Box2D. That being said, the Viewport size is based on a scale factor of the Game World size, but it is conceivable (and legal) to move the Viewport outside of the "established" Game World size. What happens when you view things "off the grid" is entirely up to your game design.

    Classes and Sequences

    The Viewport does not live by itself in the ecosystem of the game architecture. It is a component that participates in the architecture. The diagram below shows the major components used in the Missile Demo application.


    The main details of each class have been omitted; we're more interested in the overall component structure than internal APIs at this point.

    Main Scene

    The MainScene (top left) is the container for all the visual elements (CCLayer-derived objects) and owner of an abstract interface, the MovingEntityIFace. Only one instance exists at a time. The MainScene creates a new one when signaled by the DebugMenuLayer (user input) to change the Entity. Commands to the Entity are also executed via the MainScene. The MainScene also acts as the holder of the Box2D world reference.

    Having the MainScene tie everything together is perfectly acceptable for a small single-screen application like this demonstration. In a larger multi-scene system, some sort of UI Manager approach would be used.

    Viewport and Notifier

    The Viewport (lower right) is a Singleton. This is a design choice. The motivations behind it are:
    • There is only one screen the user is looking at.
    • Lots of different parts of the graphics system may use the Viewport.
    • It is much more convenient to do it as a "global" singleton than to pass the reference around to all potential consumers.
    • Deriving it from the SingletonDynamic template ensures that it follows the Init/Reset/Shutdown model used for all the Singleton components. Its life cycle is entirely predictable: it always exists.

    Having certain parts of the design as a singleton may make it hard to envision how you would handle other types of situations, like a split screen or a mini-map. If you needed to deal with such a situation, at least one strategy would be to factor the base functionality of the "Viewport" into a class and then construct a singleton to handle the main viewport, another for the mini-map, etc. Essentially, if you only have "one" of something, the singleton pattern helps you ensure easy access to the provider of the feature and also guarantees that the life cycle of that feature matches the life cycle of your design.

    This is (in my mind) absolutely NOT the same thing as a variable that can be accessed and modified without an API from anywhere in your system (i.e. a global variable). When you wrap it and control the life cycle, you get predictability and a single place to put a convenient debug point. When you don't, you have fewer guarantees about the initial state and you have to put debug points at every place that touches the variable to figure out how it evolves over time. That inversion (one debug point vs. lots of debug points) can crush your productivity.

    If you felt that the singleton approach was not for you, or that it ran against company or team policy, you could create an instance of that "viewport" class and pass it to all the interested consumers as a reference. You will still need a place for that instance to live, and you will need to manage its life cycle.
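    As a concrete illustration, here is a minimal sketch of what an Init/Shutdown-managed singleton template can look like (hypothetical code; the actual SingletonDynamic template from the demo is not reproduced here):

```cpp
#include <cassert>

// Hypothetical sketch of a SingletonDynamic-style template: the instance is
// created/destroyed through explicit Init/Shutdown calls so its life cycle
// is predictable, rather than living as a bare global variable.
template <typename T>
class SingletonDynamic
{
public:
   static bool Init()            // create the single instance at a known point
   {
      if (_instance == nullptr)
         _instance = new T();
      return true;
   }
   static void Shutdown()        // destroy it at a known point
   {
      delete _instance;
      _instance = nullptr;
   }
   static T& Instance()          // the one global access point
   {
      assert(_instance != nullptr);
      return *_instance;
   }
private:
   static T* _instance;
};

template <typename T>
T* SingletonDynamic<T>::_instance = nullptr;

// Example consumer (hypothetical): any component can reach it via Instance().
struct Counter : public SingletonDynamic<Counter>
{
   int value = 0;
};
```

Because every access goes through Instance(), there is exactly one place to set a breakpoint when debugging how the state evolves.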

    You have to weigh the design goals against the design and make a decision about what constitutes the best tool for the job, often using conflicting goals, design requirements, and the strong opinions of your peers. Rising to the real challenges this represents is a practical reality of "the job". And possibly why indie developers like to work independently.

    The Notifier is also pictured to highlight its importance; it is an active participant when the Viewport changes. The diagram below shows exactly this scenario.


    The user places both fingers on the screen and begins to move them together (1.0). This move is received by the framework and interpreted by the TapDragPinchInput as a Pinch gesture, which it signals to the MainScene (1.1). The MainScene calls SetCenter on the Viewport (1.2) which immediately leads to the Viewport letting all interested parties know the view is changing via the Notifier (1.3). The Notifier immediately signals the GridLayer, which has registered for the event (1.4). This leads to the GridLayer recalculating the position of its grid lines (1.5). Internally, the GridLayer maintains the grid lines as positions in meters. It will use the Viewport to convert these to positions in pixels and cache them off. The grid is not actually redrawn until the next draw(...) call is executed on it by the framework.

    The first set of transactions were executed synchronously as the user moved their fingers; each time a new touch event came in, the change was made. The next sequence (starting with 1.6) is initiated when the framework calls the Update(...) method on the main scene. This causes an update of the Box2D physics model (1.7). At some point later, the framework calls the draw(...) method on the Box2dDebugLayer (1.8). This uses the Viewport to calculate the display positions of all the Box2D bodies (and other elements) it will display (1.9).

    These two sequences demonstrate the two main types of Viewport update. The first is triggered by a direct change of the view, leading to events that trigger immediate updates. The second is driven by the framework on every major update of the model (as in MVC).


    The general method for mapping the world space limits (Wxmin, Wxmax) onto the screen coordinates (0,Sxmax) is done by a linear mapping with a y = mx + b formulation. Given the two known points for the transformation:

    Wxmin (meters) maps onto (pixel) 0 and
    Wxmax (meters) maps onto (pixel) Sxmax
    Solving y0 = m*x0 + b and y1 = m*x1 + b yields:

    m = Sxmax/(Wxmax - Wxmin) and
    b = -Wxmin*Sxmax/(Wxmax - Wxmin) (= -m * Wxmin)

    We replace (Wxmax - Wxmin) with scale*(Wxmax-Wxmin) for the x dimension and scale*(Wymax-Wymin)/aspectRatio in the y dimension.

    The value (Wxmax - Wxmin) = scale*worldSizeMeters (xDimension)

    The value Wxmin = viewport center - 1/2 the width of the viewport
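    Plugging concrete numbers into the derivation can make it easier to follow. The values below are illustrative assumptions, not from the demo: a 960-pixel-wide screen and a viewport spanning the world x-range [25m, 75m] (a 100m world at scale 0.5, centered at 50m):

```cpp
#include <cassert>
#include <cmath>

// Worked example of the y = m*x + b mapping derived above: the visible world
// x-range [Wxmin, Wxmax] maps onto the screen pixel range [0, Sxmax].
inline float SlopeM(float wxMin, float wxMax, float sxMax)
{
   return sxMax / (wxMax - wxMin);               // m = Sxmax/(Wxmax - Wxmin)
}

inline float OffsetB(float wxMin, float wxMax, float sxMax)
{
   return -SlopeM(wxMin, wxMax, sxMax) * wxMin;  // b = -m * Wxmin
}

inline float MetersToPixels(float xMeters, float wxMin, float wxMax, float sxMax)
{
   return SlopeM(wxMin, wxMax, sxMax) * xMeters + OffsetB(wxMin, wxMax, sxMax);
}
```

With these numbers, m = 960/50 = 19.2 pixels per meter and b = -480, so 25m maps to pixel 0, 50m to pixel 480, and 75m to pixel 960, exactly as the two known points require.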


    In code, this is broken into two operations. Whenever the center or scale changes, the slope/offset values are calculated immediately.

    void Viewport::CalculateViewport()
    {
       // Bottom Left and Top Right of the viewport
       _vSizeMeters.width = _vScale*_worldSizeMeters.width;
       _vSizeMeters.height = _vScale*_worldSizeMeters.height/_aspectRatio;
       _vBottomLeftMeters.x = _vCenterMeters.x - _vSizeMeters.width/2;
       _vBottomLeftMeters.y = _vCenterMeters.y - _vSizeMeters.height/2;
       _vTopRightMeters.x = _vCenterMeters.x + _vSizeMeters.width/2;
       _vTopRightMeters.y = _vCenterMeters.y + _vSizeMeters.height/2;
       // Scale from Pixels/Meters
       _vScalePixelToMeter.x = _screenSizePixels.width/(_vSizeMeters.width);
       _vScalePixelToMeter.y = _screenSizePixels.height/(_vSizeMeters.height);
       // Offset based on the screen center.
       _vOffsetPixels.x = -_vScalePixelToMeter.x * (_vCenterMeters.x - _vScale*_worldSizeMeters.width/2);
       _vOffsetPixels.y = -_vScalePixelToMeter.y * (_vCenterMeters.y - _vScale*_worldSizeMeters.height/2/_aspectRatio);
       _ptmRatio = _screenSizePixels.width/_vSizeMeters.width;
    }

    Note:  Whenever the viewport changes, we emit a notification to the rest of the system to let interested parties react. This could be broken down into finer detail for changes in scale vs. changes in the center of the viewport.

    When a conversion from world space to viewport space is needed:

    CCPoint Viewport::Convert(const Vec2& position)
    {
       float32 xPixel = position.x * _vScalePixelToMeter.x + _vOffsetPixels.x;
       float32 yPixel = position.y * _vScalePixelToMeter.y + _vOffsetPixels.y;
       return ccp(xPixel,yPixel);
    }

    And, occasionally, we need to go the other way.

    /* To convert a pixel to a position (meters), we invert
     * the linear equation to get x = (y-b)/m.
     */
    Vec2 Viewport::Convert(const CCPoint& pixel)
    {
       float32 xMeters = (pixel.x-_vOffsetPixels.x)/_vScalePixelToMeter.x;
       float32 yMeters = (pixel.y-_vOffsetPixels.y)/_vScalePixelToMeter.y;
       return Vec2(xMeters,yMeters);
    }
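    As a sanity check, the forward and inverse maps are exact inverses of each other. A standalone sketch (not the Viewport class itself) makes the round trip explicit:

```cpp
#include <cassert>
#include <cmath>

// The forward map y = m*x + b (meters to pixels) and its inverse
// x = (y - b)/m (pixels to meters) must round-trip to the same value.
inline float Forward(float meters, float m, float b) { return m * meters + b; }
inline float Inverse(float pixels, float m, float b) { return (pixels - b) / m; }
```

Any position converted to pixels and back should come out unchanged (to within float precision), which is a cheap unit test for the two Convert(...) methods.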

    Position, Rotation, and PTM Ratio

    Box2D creates a physics simulation of objects between the sizes of 0.1m and 10m (according to the manual, if the scaled size is outside of this, bad things can happen...the manual is not lying). Once you have your world up and running, you need to put the representation of the bodies in it onto the screen. To do this, you need its rotation (relative to x-axis), position, and a scale factor to convert the physical meters to pixels. Let's assume you are doing this with a simple sprite for now.

    The rotation is the easiest. Just ask the b2Body what its rotation is and convert it to degrees with CC_RADIANS_TO_DEGREES(...). Use this for the angle of your sprite.

    The position is obtained by asking the body for its position in meters and calling the Convert(...) method on the Viewport. Let's take a closer look at the code for this.

    /* To convert a position (meters) to a pixel, we use
     * the y = mx + b conversion.
     */
    CCPoint Viewport::Convert(const Vec2& position)
    {
       float32 xPixel = position.x * _vScalePixelToMeter.x + _vOffsetPixels.x;
       float32 yPixel = position.y * _vScalePixelToMeter.y + _vOffsetPixels.y;
       return ccp(xPixel,yPixel);
    }

    This is about as simple as it gets in the math arena. A linear equation to map the position from the simulated physical space (meters) to the Viewport's view of the world on the screen (pixels). A key nuance here is that the scale and offset are calculated ONLY when the viewport changes.

    The scale is called the pixel-to-meter ratio, or just PTM Ratio. If you look inside the CalculateViewport method, you will find this rather innocuous piece of code:

       _ptmRatio = _screenSizePixels.width/_vSizeMeters.width;

    The PTM Ratio is computed dynamically based on the width of the viewport (_vSizeMeters). Note that it could be computed based on the height instead; be sure to define the aspect ratio, etc., appropriately.

    If you search the web for articles on Box2D, whenever they get to the display portion, they almost always have something like this:

    #define PTM_RATIO 32

    Which is to say, every meter in the simulation is represented by 32 pixels (or some other fixed value). The original iPhone screen was 480 x 320, and Box2D represents objects on the scale of 0.1m to 10m, so a full-sized 10m object would span 320 pixels of the screen. However, it is a fixed value. Which is fine.

    Something very interesting happens, though, when you let this value change. By letting the PTM Ratio change and scaling your objects with it, the viewer is given the illusion of depth. They can zoom into and out of the scene and feel like they are moving through a third dimension.

    You can see this in action when you use the pinch operation on the screen in the App. The Box2DDebug uses the Viewport's PTM Ratio to change the size of the displayed polygons. It can be (and has been) used to scale sprites as well, so that you can zoom in/out.
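    As an illustration of that zoom effect, a sprite's scale can be made to track the current PTM Ratio. This is a sketch with assumed names, not the demo's actual sprite code:

```cpp
#include <cassert>
#include <cmath>

// Sketch (assumed names): scale a sprite so a body of bodyWidthMeters spans
// the right number of pixels at the current zoom. As the user pinches, the
// PTM Ratio changes and the sprite appears to move in/out of the screen.
inline float SpriteScaleFor(float bodyWidthMeters,
                            float spriteWidthPixels,
                            float ptmRatio)
{
   // Pixels the body should span on screen, divided by the sprite's
   // native width in pixels, gives the node's scale factor.
   return (bodyWidthMeters * ptmRatio) / spriteWidthPixels;
}
```

For example, a 2m body drawn with a 64-pixel-wide sprite at a PTM Ratio of 32 gets a scale of exactly 1.0; doubling the PTM Ratio (zooming in) doubles the scale.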

    Other Uses

    With a little more work or a few other components, the Viewport concept can be expanded to yield other benefits. All of these uses are complementary. That is to say, they can all be used at the same time without interfering with each other.


    The Viewport itself is "dumb". You tell it to change and it changes. It has no concept of time or motion; it only executes at the time of command and notifies (or is polled) as needed. To execute theatrical camera actions, such as panning, zooming, or combinations of the two, you need a "controller" for the Viewport that has a notion of state. This controller is the camera.

    Consider the following API for a Camera class:

    class Camera
    {
    public:
       // If the camera is performing any operation, return true.
       bool IsBusy();
       // Move/Zoom the Camera over time.
       void PanToPosition(const Vec2& position, float32 seconds);
       void ZoomToScale(float32 scale, float32 seconds);
       // Expand/Contract the displayed area without changing
       // the scale directly.
       void ExpandToSize(float32 size, float32 seconds);
       // Stop the current operation immediately.
       void Stop();
       // Called every frame to update the Camera state
       // and modify the Viewport.  The dt value may
       // be actual or fixed in a fixed timestep
       // system.
       void Update(float32 dt);
    };

    This interface presents a rudimentary Camera. This class interacts with the Viewport over time when commanded. You can use this to create cut scenes, quickly show items/locations of interest to a player, or other cinematic events.

    A more sophisticated Camera could keep track of a specific entity and move the viewport automatically if the entity started to move too close to the viewable edge.
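    To make the idea concrete, here is a minimal sketch of the pan portion of such a Camera, with Vec2 and the actual Viewport hookup stubbed out (assumed names and logic, not the demo's code):

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in types for the sketch.
struct Vec2 { float x, y; };

// The Camera owns the pan state (start, target, elapsed time) and pushes
// the interpolated center into the Viewport each frame.
class PanCamera
{
public:
   void PanToPosition(const Vec2& target, float seconds)
   {
      _start = _center; _target = target;
      _duration = seconds; _elapsed = 0.0f; _busy = true;
   }
   bool IsBusy() const { return _busy; }
   // Called every frame; dt may come from a fixed timestep system.
   void Update(float dt)
   {
      if (!_busy) return;
      _elapsed += dt;
      float t = _elapsed / _duration;
      if (t >= 1.0f) { t = 1.0f; _busy = false; }
      // Linear interpolation; an ease-in/out curve could be applied to t.
      _center.x = _start.x + (_target.x - _start.x) * t;
      _center.y = _start.y + (_target.y - _start.y) * t;
      // In the real system: Viewport::Instance().SetCenter(_center);
   }
   const Vec2& Center() const { return _center; }
private:
   Vec2 _center {0, 0}, _start {0, 0}, _target {0, 0};
   float _duration = 0.0f, _elapsed = 0.0f;
   bool _busy = false;
};

// Helper for the example: pan from the origin to (10, 0) over 1 second in
// four fixed 0.25s steps and return the final x position.
inline float DemoPanFinalX()
{
   PanCamera cam;
   cam.PanToPosition(Vec2{10.0f, 0.0f}, 1.0f);
   for (int i = 0; i < 4; ++i) cam.Update(0.25f);
   return cam.Center().x;
}
```

Because the Camera only ever *drives* the Viewport, the Viewport itself stays "dumb" and reusable.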

    Level of Detail

    In a 3-D game, objects that are of little importance to the immediate user, such as objects far off in the distance, don't need to be rendered with high fidelity. If it is only going to be a "dot" to you, do you really need 10k polygons to render it? The same is true in 2-D as well. This is the idea of "Level of Detail".

    The PTMRatio(...) method of the Viewport gives the number of pixels an object will occupy, given its size in meters. If you use this to adjust the scale of your displayed graphics, you can create elements that are "sized" properly for the screen relative to the other objects and the zoom level. You can ALSO substitute other graphics when the displayed object would appear to be little more than a blob. This can cut down dramatically on the GPU load and improve the performance of your game.
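    One way to act on that idea is to bucket entities by their projected on-screen size. The thresholds and names below are illustrative assumptions, not from the demo:

```cpp
#include <cassert>

// Sketch: pick a cheaper representation when the entity would occupy
// only a few pixels on screen at the current zoom level.
enum class DetailLevel { Full, Simple, Dot };

inline DetailLevel DetailFor(float entitySizeMeters, float ptmRatio)
{
   float pixels = entitySizeMeters * ptmRatio;      // projected on-screen size
   if (pixels >= 64.0f) return DetailLevel::Full;   // full sprite-sheet version
   if (pixels >= 16.0f) return DetailLevel::Simple; // single sprite
   return DetailLevel::Dot;                         // "twinkling" blob
}
```

Each entity can re-evaluate its level whenever the Viewport notifies a scale change, swapping representations only on bucket transitions.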

    For example, in Space Spiders Must Die!, each Spider is not a single sprite, but a group of sprites loaded from a sprite sheet. This sheet must be loaded into the GPU, the graphics drawn, and then another sprite sheet loaded in for other objects. When the camera is zoomed all the way out, we could get a lot more zip out of the system if we didn't have to swap the sprite sheet at all and just drew a single sprite for each spider. A much smaller series of "twinkling" sprites could easily replace the full-size spider.

    Culling Graphics Operations

    If an object is not in view, why draw it at all? Well...you might still draw it...if the cost of keeping it from being drawn exceeds the cost of drawing it. In Cocos2D-x, it can get sticky to figure out whether or not you are really getting a lot by "flagging" elements off the screen and controlling their visibility (the GPU would probably handle it from here).

    However, there is a much less ambiguous situation: skeletal animations. Rather than use a lot of animated sprites (and sprite sheets), we tend to use Spine to create skeletal animated sprites. These use a lot of calculations which are completely wasted if you can't see the animation because it is off camera. To save CPU cycles, which are even more limited these days than GPU cycles for the games we make, we can let the AI for the animation keep running but only update the "presentation" when needed.

    The Viewport provides a method called IsInView(...) just for this purpose. Using it, you can flag entities as "in view" or "not in view". Internally, the representation used for the entity can make the decision to update or not based on this.
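    Internally, such an IsInView(...) check can be as simple as an axis-aligned overlap test between the entity's bounds and the viewport's bottom-left/top-right corners, all in meters. A sketch (the real method's signature may differ):

```cpp
#include <cassert>

// Simple axis-aligned bounding box in meters.
struct AABB { float minX, minY, maxX, maxY; };

// An entity is "in view" if its box overlaps the viewport box on both axes.
inline bool IsInView(const AABB& entity, const AABB& viewport)
{
   return entity.maxX >= viewport.minX && entity.minX <= viewport.maxX &&
          entity.maxY >= viewport.minY && entity.minY <= viewport.maxY;
}
```

Entities straddling the edge count as visible, which is what you want: a partially visible skeletal animation still needs its presentation updated.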


    The Viewport lets you create a richer world for the player to "live" in, both by providing "depth" via zooming and by letting content exist outside the visible view. It also provides opportunities to improve the graphics processing efficiency of your game.

    Get the source code for this post, hosted on GitHub, by clicking here.

    Article Update Log

    21 Nov 2014: Added update about singleton usage.
    6 Nov 2014: Initial release

  13. What's In Your Toolbox?

    Big things are made of little things. Making things at all takes tools. We all know it is not the chisel that creates the sculpture, but the hand that guides it. Still, having a pointy chisel is probably better to break the rock than your hand.

    In this article, I'll enumerate the software tools that I use to put together various parts of my software. I learned about these tools by reading sites like this one, so feel free to contribute your own. I learned how to use them by setting a small goal for myself and figuring out whether or not the tool could help me achieve it. Some made the cut. Some did not. Some may be good for you. Others may not be.

    Software Tools

    # | Name | Used For | Cost | Link | Notes
    1 | Cocos2d-x | C++ Graphical Framework | Free | www.cocos2d-x.org | Has lots of stuff out of the box and a relatively light learning curve. We haven't used it cross-platform (yet) but many have before us, so no worries.
    2 | Box2D | 2-D Physics | Free | www.box2d.org | No longer the default for cocos2d-x :( but still present in the framework. I still prefer it over Chipmunk. Now you know at least two to try...
    3 | Gimp | Bitmap Graphics Editor | Free | www.gimp.org | Above our heads but has uses for slicing, dicing, and mutilating images. Great for doing backgrounds.
    4 | Inkscape | Vector Graphics Editor | Free | www.inkscape.org | Our favorite tool for creating vector graphics. We still suck at it, but at least the tool doesn't fight us.
    5 | Paper | Graphics Editor (iPad) | ~$10 | App Store | This is an incredible sketching tool. We use it to generate graphics, spitball ideas for presentations, and create one-offs for posts.
    6 | Spine | Skeletal Animation | ~$100 | www.esotericsoftware.com | I *wish* I had enough imagination to get more out of this incredible tool.
    7 | Physics Editor | See Notes | $20 | www.codeandweb.com | Creates data to turn images into data that Box2D can use. Has some annoyances but very solid on the whole.
    8 | Texture Packer | See Notes | $40 | www.codeandweb.com | Puts images together into a single file so that you can batch them as sprites.
    9 | Python | Scripting Language | Free | www.python.org | At some point you will need a scripting language to automate something in your build chain. We use python. You can use whatever you like.
    10 | Enterprise Architect | UML Diagrams | ~$130-$200 | www.sparxsystems.com | You probably won't need this but we use it to create more sophisticated diagrams when needed. We're not hard core on UML, but we are serious about ideas and a picture is worth a thousand words.
    11 | Reflector | See Notes | ~$15 | Mac App Store | This tool lets you show your iDevice screen on your Mac. Which is handy for screen captures without the (very slow) simulator.
    12 | XCode | IDE | Free | Mac App Store | Cocos2d-x works in multiple IDEs. We are a Mac/Windows shop. Game stuff is on iPads, so we use XCode. Use what works best for you.
    13 | Glyph Designer | See Notes | $40 | www.71squared.com | Creates bitmapped fonts with data. Seamless integration with Cocos2d-x. Handy when you have a lot of changing text to render.
    14 | Particle Designer | See Notes | $60 | www.71squared.com | Helps you design the parameters for particle emitter effects. Not sure if we need it for our stuff but we have used these effects before and may again. Be sure to block out two hours of time...the temptation to tweak is incredible.
    15 | Sound Bible | See Notes | Free | www.soundbible.com | Great place to find sound clips. Usually the license is just attribution, which is a great karmic bond.
    16 | Tiled QT | See Notes | Free | www.mapeditor.org | A 2-D map editor. Cocos2d-x has import mechanisms for it. I haven't needed it, but it can be used for tile/orthogonal map games. May get some use yet.


    A good developer (or shop) uses the tools of others as needed, and develops their own tools for the rest. The tools listed here are specifically software that is available "off the shelf". I did not list a logging framework (because I use my own) or a unit test framework (more complex discussion here) or other "tools" that I have picked up over the years and use to optimize my work flow.

    I once played with Blender, the fabulous open-source 3-D rendering tool. It has about a million "knobs" on it. Using it, I was easily overwhelmed, and I realized that my own tools could just as easily overwhelm somebody else who was unfamiliar with them and had not taken the time to figure out how to get the most out of them.

    The point of all this is that every solid developer I know figures out the tools to use in their kit and tries to get the most out of them. Not all hammers fit in all hands, though.

    Article Update Log

    5 Nov 2014: Initial Release

  14. Making a Game with Blend4Web Part 6: Animation and FX

    This time we'll speak about the main stages of character modeling and animation, and also will create the effect of the deadly falling rocks.

    Character model and textures

    The character data was placed into two files. The character_model.blend file contains the geometry, the material and the armature, while the character_animation.blend file contains the animation for this character.

    The character model mesh is low-poly:


    This model - just like all the others - lacks a normal map. The color texture was entirely painted on the model in Blender using the Texture Painting mode:


    The texture was then combined (4) with the baked ambient occlusion map (2). Its color (1) was initially much paler than required, and was enhanced (3) with a Multiply node in the material. This allowed fine tuning of the final texture's saturation.


    After baking we received the resulting diffuse texture, from which we created the specular map. We brightened up this specular map in the spots corresponding to the blade, the metal clothing elements, the eyes and the hair. As usual, in order to save video memory, this texture was packed into the alpha channel of the diffuse texture.


    Character material

    Let's add some nodes to the character material to create the highlighting effect when the character contacts the lava.


    We need two height-dependent procedural masks (2 and 3) to implement this effect. One of these masks (2) will paint the feet in the lava-contacting spots (yellow), while the other (3) will paint the character legs just above the knees (orange). The material specular value is output (4) from the diffuse texture alpha channel (1).


    Character animation

    Because the character is seen mainly from afar and from behind, we created a simple armature with a limited number of inverse-kinematics control bones.


    A group of objects, including the character model and its armature, was linked into the character_animation.blend file. After that, we created a proxy object for the armature (Object > Make Proxy...) to make its animation possible.

    At this game development stage we need just three animation sequences: looping run, idle and death animations.


    Using the specially developed tool - the Blend4Web Anim Baker - all three animations were baked and then linked to the main scene file (game_example.blend). After export from this file the animation becomes available to the programming part of the game.


    Special effects

    During the game, red-hot rocks will keep falling on the character. To visualize this, a set of 5 elements is created for each rock:

    1. the geometry and the material of the rock itself,
    2. the halo around the rock,
    3. the explosion particle system,
    4. the particle system for the smoke trail of the falling rock,
    5. and the marker under the rock.

    The above-listed elements are present in the lava_rock.blend file and are linked to the game_example.blend file. Each element from the rock set has a unique name for convenient access from the programming part of the application.

    Falling rocks

    For diversity, we made three rock geometry types:


    The texture was created by hand in the Texture Painting mode:


    The material is generic, without the use of nodes, with the Shadeless checkbox enabled:


    For the effect of glowing red-hot rock, we created an egg-shaped object with the narrow part looking down, to imitate rapid movement.


    The material of the shiny areas is entirely procedural, without any textures. First of all, we apply a Dot Product node to the geometry normals and the vector (0, 0, -1) in order to obtain a view-dependent gradient (similar to the Fresnel effect). Then we squeeze and shift the gradient in two different ways to get two masks (2 and 3). One of them (the wider) is painted with the color gradient (5), while the other is subtracted from the first (4), and the resulting ring is used as a transparency map.
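    For reference, the node math just described can be sketched as scalar code (illustrative only; the real effect is built with Blender material nodes, and the remap ranges here are assumptions):

```cpp
#include <cassert>
#include <cmath>

// The view-dependent gradient is the dot product of the surface normal
// with the fixed vector (0, 0, -1), i.e. simply -nz.
inline float Gradient(float nx, float ny, float nz)
{
   return nx * 0.0f + ny * 0.0f + nz * -1.0f;
}

// "Squeeze and shift": remap the gradient over [lo, hi] and clamp to [0, 1].
// Two different (lo, hi) ranges give the two masks; subtracting the narrow
// mask from the wide one leaves the ring used as a transparency map.
inline float Mask(float g, float lo, float hi)
{
   float t = (g - lo) / (hi - lo);
   return t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
}
```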


    The empty node group named NORMAL_VIEW is used for compatibility: in the Geometry node the normals are in camera space, but in Blend4Web they are in world space.


    The red-hot rocks will explode upon contact with the rigid surface.


    To create the explosion effect we'll use a particle system with a pyramid-shaped emitter. For the particle system we'll create a texture with an alpha channel - this will imitate fire and smoke puffs:


    Let's create a simple material and attach the texture to it:


    Then we set up a particle system using the newly created material:


    Activate particle fade-out with the additional settings on the Blend4Web panel:


    To increase the size of the particles during their life span we create a ramp for the particle system:


    Now the explosion effect is up and running!


    Smoke trail

    When the rock is falling a smoke trail will follow it:


    This effect can be set up quite easily. First of all let's create a smoke material using the same texture as for explosions. In contrast to the previous material this one uses a procedural blend texture for painting the particles during their life span - red in the beginning and gray in the end - to mimic the intense burning:


    Now we proceed to the particle system. A simple plane with its normal oriented down will serve as an emitter. This time the emission is looping and drawn out over a longer period:


    As before, this particle system has a ramp, here for progressively reducing the particle size:


    Marker under the rock

    It remains only to add a minor detail - the marker indicating the spot to which the rock is falling, just to make the player's life easier. We need a simple unwrapped plane. Its material is fully procedural, no textures are used.


    The Average node is applied to the UV data to obtain a radial gradient (1) with its center in the middle of the plane. We are already familiar with the further procedures. Two transformations result in two masks (2 and 3) of different sizes. Subtracting one from the other gives the visual ring (4). The transparency mask (6) is tweaked and passed to the material alpha channel. Another mask is derived after squeezing the ring a bit (5). It is painted in two colors (7) and passed to the Color socket.



    At this stage the gameplay content is ready. After merging it with the programming part described in the previous article of this series we may enjoy the rich world packed with adventure!

    Link to the standalone application

    The source files of the models are part of the free Blend4Web SDK distribution.

    • Feb 06 2015 01:56 AM
    • by Spunya
  15. How to Create a Scoreboard for Lives, Time, and Points in HTML5 with WiMi5

    This tutorial gives a step-by-step explanation on how to create a scoreboard that shows the number of lives, the time, or the points obtained in a video game.

    To give this tutorial some context, we’re going to use the example project StunPig in which all the applications described in this tutorial can be seen. This project can be cloned from the WiMi5 Dashboard.


    We require two kinds of graphic elements to visualize the scoreboard values: a “Lives” Sprite which represents the number of lives, and as many Font or Letter Sprites as needed to represent the value of each digit to be shown. The “Lives” Sprite has four animations or image states, linked to each of the four numerical values of the lives level.

    image01.png image27.png

    The Font or Letter Sprite is a Sprite with 11 animations or image states: one linked to each of the ten digits 0-9, plus an extra one for the colon (:).

    image16.png image10.png

    Example 1. How to create a lives scoreboard

    To manage the lives, we’ll need a numeric value for them, which in our example is a number between 0 and 3 inclusive, and a graphic representation: in our case, three orange-colored stars which change to white as lives are lost, until all of them are white when the number of lives is 0.


    To do this, in the Scene Editor, we must create the instance of the sprite used for the stars. In our case, we’ll call them “Lives”. To manipulate it, we’ll have a Script (“lifeLevelControl”) with two inputs (“start” and “reduce”), and two outputs (“alive” and “death”).


    The “start” input initializes the lives by assigning them a numeric value of 3 and displaying the three orange stars. The “reduce” input lowers the numeric value of lives by one and displays the corresponding stars. As a consequence of triggering this input, one of the two outputs is activated. The “alive” output is activated if, after the reduction, the number of lives is greater than 0. The “death” output is activated when, after the reduction, the number of lives equals 0.

    Inside the Script, we do everything necessary to change the value of lives, display the Sprite corresponding to the number of lives, and trigger the correct output as a function of the number of lives; in our example, we also play a negative “fail” sound when the number of lives goes down.

    In our “lifeLevelControl” Script, we have a “currentLifeLevel” parameter which contains the number of lives, and a parameter which contains the “Lives” Sprite, which is the element on the screen which represents the lives. This Sprite has four animations of states, “0”, “1”, “2”, and “3”.


    The “start” input connector activates the ActionOnParam “copy” blackbox which assigns the value of 3 to the “currentLifeLevel” parameter and, once that’s done, it activates the “setAnimation” ActionOnParam blackbox which displays the “3” animation Sprite.

    The “reduce” input connector activates the “-” ActionOnParam blackbox, which subtracts 1 from the “currentLifeLevel” parameter. Once that’s done, it first activates the “setAnimation” ActionOnParam blackbox, which displays the animation or state corresponding to the value of the “currentLifeLevel” parameter, and second, it activates the “greaterThan” Compare blackbox, which activates the “alive” connector if the value of “currentLifeLevel” is greater than 0, or the “death” connector if the value is 0 or less.
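    WiMi5 expresses this logic as connected blackboxes rather than code, but the same control flow can be sketched in ordinary C++ (hypothetical code; the names mirror the script's parameters):

```cpp
#include <cassert>

// Sketch of the "lifeLevelControl" script's logic as plain code.
class LifeLevelControl
{
public:
   // "start" input: 3 lives, and the "Lives" sprite would show animation "3".
   void Start() { _currentLifeLevel = 3; }
   // "reduce" input: subtract one, show the matching animation, then route
   // to the "alive" (true) or "death" (false) output.
   bool Reduce()
   {
      _currentLifeLevel -= 1;
      // setAnimation(_currentLifeLevel) would update the sprite here.
      return _currentLifeLevel > 0;   // the "greaterThan" compare
   }
   int Level() const { return _currentLifeLevel; }
private:
   int _currentLifeLevel = 3;
};

// Example: run "start" then n "reduce" pulses and report whether the
// "alive" output would still be the active one.
inline bool AliveAfter(int reductions)
{
   LifeLevelControl control;
   control.Start();
   bool alive = true;
   for (int i = 0; i < reductions; ++i)
      alive = control.Reduce();
   return alive;
}
```

After two reductions the character is still alive (one star left); the third reduction routes to the “death” output.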

    Example 2. How to create a time scoreboard or chronometer

    In order to manage time, we’ll have as a base a numerical time value that will run in thousandths of a second in the round and a graphic element to display it. This graphic element will be 5 instances of a Sprite that will have 10 animations or states, which will be the numbers from 0-9.



    In our case, as you can see in the image, we’ll display the time in seconds and thousandths of a second, counting down; the time starts at the total round time and decreases until it reaches zero, at which point it finishes.

    To do this, in the Scenes editor, we must create the 6 instances of the different sprites used for each segment of the time display: the tens of seconds, the units of seconds, the tenths, hundredths, and thousandths of a second, as well as the colon. In our case, we’ll call them “second.unit”, “second.ten”, “millisec.unit”, “millisec.ten” and “millisec.hundred”.


    In order to manage this time, we’ll have a Script (“RoundTimeControl”) which has 2 inputs (“start” and “stop”) and 1 output (“end”), as well as an exposed parameter called “roundMillisecs” which contains the starting time value.


    The “start” input activates the countdown from the total time and displays the decreasing value in seconds and milliseconds. The “stop” input stops the countdown, freezing the current time on the screen. When the stipulated time runs out, the “end” output is activated, which determines that the time has run out. Inside the Script, we do everything needed to control the time and display the Sprites in relation to the value of time left, activating the “end” output when it has run out.

    In order to use it, all we need to do is set the time value in milliseconds, either by placing it directly in the “roundMillisecs” parameter or by using a blackbox that assigns it. Once that’s been assigned, we activate the “start” input, which will display the countdown until we activate the “stop” input or reach 0, in which case the “end” output will be activated; we can use it, for example, to remove a life or trigger whatever else we’d like.


    In the “RoundTimeControl” Script, we have a fundamental parameter, “roundMillisecs”, which contains and defines the playing time value in the round. Inside this Script, we also have two other Scripts, “CurrentMsecs-Secs” and “updateScreenTime”, which group together the actions I’ll describe below.

    The activation of the “start” connector activates the “start” input of the Timer blackbox, which starts the countdown. As the defined time counts down, this blackbox updates the “elapsedTime” parameter with the time that has passed since the clock began counting, activating its “updated” output. This occurs from the very first moment and is repeated until the last check of the time, when the “finished” output is triggered, announcing that time has run out. Given that the time to run does not have to be a multiple of the interval between updates, the final value of the “elapsedTime” parameter will most likely be slightly greater than the stipulated time, which is something to keep in mind when necessary.

    The “updated” output tells us we have a new value in the “elapsedTime” parameter and activates the “CurrentMsecs-Secs” Script, which calculates the total time left in milliseconds and divides it into seconds and milliseconds in order to display it. Once this piece of information is available, the “available” output is triggered, which in turn activates the “update” input of the “updateScreenTime” Script, which sets the corresponding animations on the Sprites displaying the time.

    In the “CurrentMsecs-Secs” Script, we have two fundamental parameters to work with: “roundMillisecs”, which contains and defines the value of playing time in the round, and “elapsedTime”, which contains the amount of time that has passed since the clock began running. In this Script, we calculate the time left and then break down that time in milliseconds into seconds and milliseconds; the latter is done in the “CalculateSecsMillisecs” Script, which I’ll be getting to.


    The activation of the “get” connector starts the calculation of the time remaining: the “-” ActionOnParam blackbox subtracts the elapsed time contained in the “elapsedTime” parameter from the total run time contained in the “roundMillisecs” parameter. This value, stored in the “currentTime” parameter, is the time left in milliseconds.

    Once that has been calculated, the “greaterThanOrEqual” Compare blackbox is activated, which compares the value contained in “CurrentTime” (the time left) to the value 0. If it is greater than or equal to 0, it activates the “CalculateSecsMillisecs” Script which breaks down the remaining time into seconds and milliseconds, and when this is done, it triggers the “available” output connector. If it is less, before activating the “CalculateSecsMillisecs” Script, we activate the ActionOnParam “copy” blackbox which sets the time remaining value to zero.


    In the “CalculateSecsMillisecs” Script, we have the value of the time left in milliseconds contained in the “currentTime” parameter as an input. The Script breaks down this input value into its value in seconds and its value in milliseconds remaining, providing them to the “CurrentMilliSecs” and “CurrentSecs” parameters. The activation of its “get” input connector activates the “lessThan” Compare blackbox. This performs the comparison of the value contained in the “currentTime” parameter to see if it is less than 1000.

    If it is less, the “true” output is triggered. This means there are no whole seconds, so the whole value of “currentTime” is used as the value of the “currentMillisecs” parameter, copied there by the “copy” ActionOnParam blackbox; the seconds are 0, so the value of zero is assigned to the “currentSecs” parameter via another “copy” ActionOnParam blackbox. After this, the Script has the values it provides, so it activates its “done” output.

    On the other hand, if the check run by the “lessThan” Compare blackbox determines that “currentTime” is 1000 or greater, it activates its “false” output. This activates the “/” ActionOnParam blackbox, which divides the “currentTime” parameter by 1000, storing the result in the “totalSecs” parameter. Once that is done, the “floor” ActionOnParam is activated, which leaves the whole part of “totalSecs” in the “currentSecs” parameter.

    After this, the “-” ActionOnParam is activated, which subtracts “currentSecs” from “totalSecs”, giving us the decimal part of “totalSecs”, and stores it in “currentMillisecs”. The “*” ActionOnParam blackbox then multiplies the “currentMillisecs” parameter by 1000, converting the decimal fraction of a second back into milliseconds and overwriting the previous value. After this, the Script has the values it provides, so it activates its “done” output.
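    The chain of blackboxes just described (“-”, the zero clamp, “/”, “floor”, “-”, and “*”) amounts to a short calculation. Here is a sketch in JavaScript with illustrative names; the Math.round guard against floating-point residue is an addition of mine, not part of the WiMi5 graph:

```javascript
// Illustrative sketch of "CurrentMsecs-Secs" + "CalculateSecsMillisecs":
// derive the on-screen seconds and milliseconds from the round length
// and the elapsed time.
function calculateSecsMillisecs(roundMillisecs, elapsedTime) {
  // "-" blackbox, clamped to zero (the "greaterThanOrEqual" branch)
  var currentTime = Math.max(0, roundMillisecs - elapsedTime);
  var totalSecs = currentTime / 1000;           // "/" blackbox
  var currentSecs = Math.floor(totalSecs);      // "floor" blackbox
  // "-" then "*" blackboxes: decimal part converted back to milliseconds
  var currentMillisecs = Math.round((totalSecs - currentSecs) * 1000);
  return { currentSecs: currentSecs, currentMillisecs: currentMillisecs };
}
```

    For example, with a 10000 ms round and 8766 ms elapsed, this yields 1 second and 234 milliseconds remaining.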

    When the “CalculateSecsMillisecs” Script finishes, it activates its “done” output, which in turn activates the “available” output of the “CurrentMsecs-Secs” Script; this activates the “updateScreenTime” Script via its “update” input. This Script handles displaying the data obtained in the previous Script, available in the “currentMillisecs” and “currentSecs” parameters.


    The “updateScreenTime” Script in turn contains two Scripts, “setMilliSeconds” and “setSeconds”, which are activated when the “update” input is activated, and which set the time value in milliseconds and seconds respectively when their “set” inputs are activated. Both Scripts are practically the same, since they take a time value and place the Sprites related to the units of that value in the corresponding animations. The difference between the two is that “setMilliseconds” controls 3 digits (tenths, hundredths, and thousandths), while “setSeconds” controls only 2 (units and tens).


    The first thing the “setMilliseconds” Script does when activated is convert the value to be represented, “currentMillisecs”, into text via the “toString” ActionOnParam blackbox. This text is kept in the “numberAsString” parameter. Once the text has been obtained, we divide it into characters, grouping them into a collection of Strings via the “split” ActionOnParam. It is very important to leave the content of the “separator” parameter of this blackbox empty, even though in the image you can see two quotation marks in the field. This collection of characters is gathered into the “digitsAsStrings” parameter. Later, based on the value of the milliseconds to be presented, one animation or another will be set in the Sprites.

    If the time value to be presented is less than 10, which is checked by the “lessThan” Compare blackbox against the value 10, the “true” output is activated, which in turn activates the “setWith1Digit” Script. If the time value is 10 or greater, the blackbox’s “false” output is activated, and it proceeds to check whether the time value is less than 100, via another “lessThan” Compare blackbox against the value 100. If this blackbox activates its “true” output, it in turn activates the “setWith2Digits” Script. Finally, if this blackbox activates the “false” output, the “setWith3Digits” Script is activated.


    The “setWith1Digit” Script takes the first of the collection of characters, and uses it to set the animation of the Sprite that corresponds with the units contained in the “millisec.unit” parameter. The remaining Sprites (“millisec.ten” and “millisec.hundred”) are set with the 0 animation.


    The “setWith2Digits” Script takes the first of the collection of characters, and uses it to set the animation of the Sprite corresponding to the tenths place number contained in the “millisec.ten” parameter, the second character of the collection to set the Sprite animation corresponding to the units contained in the “millisec.unit” parameter and the “millisec.hundred” Sprite is given the animation for 0.


    The “setWith3Digits” Script takes the first of the collection of characters, and uses it to set the animation of the Sprite corresponding to the hundredths contained in the “millisec.hundred” parameter, the second character of the collection to set the animation of the Sprite corresponding to the tenths place value, contained in the “millisec.ten” parameter, and the third character of the collection to set the animation of the Sprite corresponding to the units place value contained in the “millisec.unit” parameter.


    The “setSeconds” Script, when activated, first converts the value to be represented, “currentSecs”, into text via the “toString” ActionOnParam blackbox. This text is kept in the “numberAsString” parameter. Once the text is obtained, we divide it into characters, gathering them into a collection of Strings via the “split” ActionOnParam blackbox. It is very important to leave the content of the “separator” parameter of this blackbox blank, even though you can see two quotation marks in the field. This collection of characters is collected in the “digitsAsStrings” parameter. Later, based on the value of the seconds to be shown, one animation or another will be set in the Sprites.

    If the time value to be presented is less than 10, as checked by the “lessThan” Compare blackbox against the value 10, the “true” output is activated; the first character of the collection is taken and used to set the animation of the Sprite corresponding to the units place value contained in the “second.unit” parameter. The other Sprite, “second.ten”, is given the animation for 0.

    If the time value to be presented is 10 or greater, the “false” output of the blackbox is activated; the first character from the collection is used to set the animation of the Sprite corresponding to the tens place value contained in the “second.ten” parameter, and the second character is used to set the animation of the Sprite corresponding to the units place value contained in the “second.unit” parameter.
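    Both “setMilliseconds” and “setSeconds” follow the same pattern: convert the number to text, split it into characters, and give the leading Sprites the “0” animation when digits are missing. A compact sketch of that pattern in JavaScript (the helper name is illustrative, not WiMi5 API):

```javascript
// Illustrative helper mirroring "setMilliseconds" (width 3) and
// "setSeconds" (width 2): returns the animation name for each digit
// Sprite, most significant digit first.
function digitsForDisplay(value, width) {
  var digitsAsStrings = String(value).split(""); // "toString" + "split"
  while (digitsAsStrings.length < width) {
    digitsAsStrings.unshift("0"); // leading Sprites get the "0" animation
  }
  return digitsAsStrings;
}
```

    digitsForDisplay(7, 3) yields ["0", "0", "7"], which matches what the “setWith1Digit” branch does with the millisecond Sprites.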

    Example 3. How to create a points scoreboard.

    In order to manage the number of points, we’ll have as a base the whole number value of these points that we’ll be increasing and a graphic element to display it. This graphic element will be 4 instances of a Sprite that will have 10 animations or states, which will be each of the numbers from 0 to 9.


    In our case, we’ll display the points up to 4 digits, meaning scores can go up to 9999, as you can see in the image, starting at 0 and then increasing in whole numbers.


    For this, in the Scene editor, we must create the four instances of the different Sprites used for each of the numerical places used to count points: units, tens, hundreds, and thousands. In our case, we’ll call them “unit.point”, “ten.point”, “hundred.point”, and “thousand.point”. To manage the score, we’ll have a Script (“ScorePoints”) which has 2 inputs (“reset” and “increment”), as well as an exposed parameter called “pointsToWin” which contains the value of the points to be added with each increment.


    The “reset” input sets the current score value to zero, and the “increment” input adds the points won with each increment, contained in the “pointsToWin” parameter, to the current score.

    To use it, we only need to set the value of the points to be won with each increment, either by putting it in the “pointsToWin” parameter or by using a blackbox that assigns it. Once that’s set, we can activate the “increment” input, which will increase the score and show it on the screen. Whenever we want, we can begin again by resetting the counter to zero via the “reset” input.

    Inside the Script, we do everything necessary to perform these actions and to represent the current score on the screen, displaying the 4 Sprites (units, tens, hundreds, and thousands) in relation to that value. When the “reset” input is activated, a “copy” ActionOnParam blackbox sets the value of the “scorePoints” parameter, which contains the current score, to 0. When the “increment” input is activated, a “+” ActionOnParam blackbox adds the “pointsToWin” parameter, which contains the points won with each increment, to the “scorePoints” parameter. After either activation, the “ScoreOnScreen” Script is activated via its “update” input.
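    As a plain JavaScript sketch (illustrative names, not WiMi5 API), the reset/increment logic together with the four-digit display reduces to:

```javascript
// Illustrative sketch of the "ScorePoints" Script: reset or increment
// the score and compute the animation for each of the four digit Sprites.
function createScorePoints(pointsToWin) {
  var scorePoints = 0;
  function digits() {
    // "toString" + "split", padded to 4 Sprites with the "0" animation
    var d = String(scorePoints).split("");
    while (d.length < 4) d.unshift("0");
    return d; // [thousand.point, hundred.point, ten.point, unit.point]
  }
  return {
    reset: function () {            // "copy" blackbox: score back to 0
      scorePoints = 0;
      return digits();
    },
    increment: function () {        // "+" blackbox: add pointsToWin
      scorePoints += pointsToWin;
      return digits();
    }
  };
}
```

    With pointsToWin set to 25, two increments display “0025” and then “0050”.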


    The “ScoreOnScreen” Script has a connector to the “update” input and shares the “scorePoints” parameter, which contains the value of the current score.



    Once the “ScoreOnScreen” Script is activated by its “update” input, it begins converting the score value contained in the “scorePoints” parameter into text via the “toString” ActionOnParam blackbox. This text is gathered in the “numberAsString” parameter. Once the text has been obtained, we divide it into characters and group them into a collection of Strings via the “split” ActionOnParam.

    This collection of characters is gathered into the “digitsAsStrings” parameter. Later, based on the value of the score to be presented, one animation or another will be set for the 4 Sprites. If the value of the score is less than 10, as checked by the “lessThan” Compare blackbox against the value 10, its “true” output is activated, which activates the “setWith1Digit” Script.

    If the value is 10 or greater, the blackbox’s “false” output is activated, and it checks whether the value is less than 100. When the “lessThan” Compare blackbox finds that the value is less than 100, its “true” output is activated, which in turn activates the “setWith2Digits” Script.

    If the value is 100 or greater, the “false” output of the blackbox is activated, and it proceeds to check whether the value is less than 1000, via the “lessThan” Compare blackbox against the value 1000. If this blackbox activates its “true” output, it then activates the “setWith3Digits” Script. If it activates the “false” output, the “setWith4Digits” Script is activated.
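    The cascade of “lessThan” comparisons that picks “setWith1Digit” through “setWith4Digits” is equivalent to a small selector function, sketched here with an illustrative name:

```javascript
// Illustrative selector mirroring the "lessThan" Compare cascade:
// returns how many digit Sprites must be set for a given score.
function digitCountForScore(value) {
  if (value < 10) return 1;   // "setWith1Digit"
  if (value < 100) return 2;  // "setWith2Digits"
  if (value < 1000) return 3; // "setWith3Digits"
  return 4;                   // "setWith4Digits"
}
```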



    The “setWith1Digit” Script takes the first character from the collection of characters and uses it to set the animation of the Sprite that corresponds to the units place contained in the “unit.point” parameter. The remaining Sprites (“ten.point”, “hundred.point” and “thousand.point”) are set with the “0” animation.



    The “setWith2Digits” Script takes the first of the collection of characters and uses it to set the animation of the Sprite corresponding to the tens place contained in the “ten.point” parameter, and the second character of the collection sets the animation of the Sprite corresponding to the units place contained in the “unit.point” parameter. The remaining Sprites (“hundred.point” and “thousand.point”) are set with the “0” animation.



    The “setWith3Digits” Script takes the first of the collection of characters and uses it to set the animation of the Sprite corresponding to the hundreds place contained in the “hundred.point” parameter; the second character in the collection sets the animation of the Sprite corresponding to the tens place contained in the “ten.point” parameter; and the third character sets the animation of the Sprite corresponding to the units place contained in the “unit.point” parameter. The remaining Sprite, “thousand.point”, is set with the “0” animation.



    The “setWith4Digits” Script takes the first character of the collection of characters and uses it to set the animation of the Sprite corresponding to the thousands place as contained in the “thousand.point” parameter; the second is set with the animation for the Sprite corresponding to the hundreds place as contained in the “hundred.point” parameter; the third is set with the animation for the Sprite corresponding to the tens place as contained in the “ten.point” parameter; and the fourth is set with the animation for the Sprite corresponding to the units place as contained in the “unit.point” parameter.

    As you can see, it is not necessary to write code when you work with WiMi5. The whole logic of these scoreboards has been created by dragging and dropping blackboxes in the LogicChart. You also have to set and configure parameters and Scripts, but all the work is done visually. We hope you have enjoyed this tutorial and understood how to create scoreboards.

    • Oct 21 2014 12:21 PM
    • by hafo
  16. BeMyGuess

    Secret Agent Morgan is the best in his profession. A man you can trust to achieve every goal. But this specific mission is on the edge now.

    He has only a few seconds to find the secret number that unlocks the safebox. The safebox contains the very secrets of the Syndicate of The Burnt Dragon. And Agent Morgan needs to know everything about it. Can you help him guess it before the alarm starts?

    There are two tables of possible answers in front of you. As you progress, answers will be ruled out... Use the rollers to create guesses and assist Agent Morgan in his quest...

    A fun and simple Puzzle game, available on Android.

    [attachment=23674:1.png] [attachment=23675:2.png] [attachment=23676:3.png] [attachment=23677:4.png] [attachment=23678:5.png]


  18. What's new in 1.4

    We are glad to announce that WaveEngine 1.4 (Dolphin) is out! This is probably our biggest release yet, with a lot of new features.

    New Demo

    Alongside the 1.4 release of Wave Engine, we have published in our GitHub repository a new sample to show all the features included in this new version.
    In this sample you play Yurei, a little ghost character that slides through a dense forest and a haunted house.
    Some key features:

    • The new Camera 2D is crucial to follow the little ghost across the way.
    • Parallax scrolling effect done automatically with the Camera2D perspective projection.
    • Animated 2D model using a Spine model with FFD transforms.
    • Image Effects to change the look and feel of the scene to make it scarier.

    The source code of this sample is available in our GitHub sample repository


    A binary version for Windows PC is available here.

    Camera 2D

    One of the major improvements in 2D games using Wave Engine is the new Camera 2D feature.
    With a Camera 2D, you can pan, zoom and rotate the display area of the 2D world. So from now on making a 2D game with a large scene is straightforward.
    Additionally, you can change the Camera 2D projection:

    • Orthogonal projection. The camera renders 2D objects uniformly, with no sense of perspective. This was the most common projection used.
    • Perspective projection. The camera renders 2D objects with a sense of perspective.

    Now it’s easy to make a parallax scrolling effect using perspective projection in Camera 2D; you only need to set the DrawOrder property properly to specify each entity’s depth between the background and the foreground.

    More info about this.

    Image Effects library

    This new release comes with an extension library called WaveEngine.ImageEffects, which allows users to add multiple postprocessing effects to their games with just one line of code.
    The first version of this library has more than 20 image effects to improve the visual quality of each development.
    All these image effects have been optimized to work in real time on Mobile devices.
    Custom image effects are also allowed and all image effects in the current library are published as OpenSource.


    More info about image effects library.

    Skeletal 2D animation

    In this new release we have improved the integration with the Spine skeletal 2D animation tool to support Free-Form Deformation (FFD).
    This new feature allows you to move individual mesh vertices to deform the image.


    Read more.

    Transform2D & Transform3D with real hierarchy

    One of the most requested features by our users was the implementation of a real Parent / Child transform relationship.
    Now, when an Entity is a Parent of another Entity, the Child Entity will move, rotate, and scale in the same way as its Parent does. Child Entities can also have children, forming an Entity hierarchy.
    Transform2D and Transform3D components now have new properties to deal with entity hierarchy:

    • LocalPosition, LocalRotation and LocalScale properties are used to specify transform values relative to the parent.
    • Position, Rotation and Scale properties are now used to set global transform values.
    • Transform2D now inherits from the Transform3D component, so you can deal with 3D transform properties in 2D entities.


    More info about this.

    Multiplatform tools

    We have been working on rewriting all our tools using GTK# to make them available on Windows, MacOS and Linux.
    We want to offer the same development experience to all our developers regardless of their OS.


    More info about this.

    Workflow improved

    With this WaveEngine version we have also improved the developer workflow, because one of the most tedious tasks when working on multiplatform games is asset management.
    With this new workflow, all these tasks are performed automatically and transparently for developers, with significant benefits:

    • Reduced development time and increased productivity
    • An improved process for porting to other platforms
    • Developers are isolated from managing WPK files

    All our samples and quickstarters have been updated to this new workflow; see the GitHub repository.


    More about new workflow.

    Scene Editor in progress

    After many developer requests, we have started to create a Scene Editor tool. Today we are excited to announce that we are already working on it.

    It will be a multiplatform tool, so developers will be able to use it from Windows, MacOS, or Linux.


    Community thread about this future tool.

    Digital Boss Monster powered by WaveEngine

    The Kickstarter for the Boss Monster card game for iOS & Android has been an outstanding success; the game will be developed using WaveEngine over the next few months.


    If you want to see a cool video of the prototype here is the link.

    Don't miss the chance to be part of this kickstarter, only a few hours left (link)

    More Open Source

    We keep on publishing source code of some extensions for WaveEngine:

    Image Effects Library: WaveEngine image library, with more than 20 lenses (published)
    Complete code in our Github repository.

    Using Wave Engine in your applications built on Windows Forms, GTKSharp and WPF

    With this new version we want to help every developer using Wave Engine in their Windows desktop applications, like game teams that need to build their own game level editor, or university research groups that need to integrate research technologies with a Wave Engine renderer and show three-dimensional results. Right now, in our Wave Engine GitHub samples repository, you can find demo projects that show how to integrate Wave Engine with Windows Forms, GtkSharp or Windows Presentation Foundation technologies.

    Complete code in our Github repository.

    Better Visual Studio integration

    Current supported editions:

    • Visual Studio Express 2012 for Windows Desktop
    • Visual Studio Express 2012 for Web
    • Visual Studio Professional 2012
    • Visual Studio Premium 2012
    • Visual Studio Ultimate 2012
    • Visual Studio Express 2013 for Windows Desktop
    • Visual Studio Express 2013 for Web
    • Visual Studio Professional 2013
    • Visual Studio Premium 2013
    • Visual Studio Ultimate 2013

    We help you port your Wave Engine project from version 1.3.5 to the new 1.4 version

    This new version includes some important changes, so we want to help every Wave Engine developer port their game projects to the new 1.4 version.

    More info about this.

    The complete changelog of WaveEngine 1.4 (Dolphin) is available here.

    Download WaveEngine Now (Windows, MacOS, Linux)


  19. Banshee Game Development Toolkit - Introduction



  20. Making a Game with Blend4Web Part 4: Mobile Devices

    This is the fourth part of the Blend4Web gamedev tutorial. Today we'll add mobile device support and program the touch controls. Before reading this article, please look at the first part of this series, in which the keyboard controls are implemented. We will use the Android and iOS 8 platforms for testing.

    Detecting mobile devices

    In general, mobile devices don't perform as well as desktops, so we'll lower the rendering quality. We'll detect a mobile device with the following function:

    function detect_mobile() {
        if (navigator.userAgent.match(/Android/i)
         || navigator.userAgent.match(/webOS/i)
         || navigator.userAgent.match(/iPhone/i)
         || navigator.userAgent.match(/iPad/i)
         || navigator.userAgent.match(/iPod/i)
         || navigator.userAgent.match(/BlackBerry/i)
         || navigator.userAgent.match(/Windows Phone/i)) {
            return true;
        } else {
            return false;
        }
    }

    The init function now looks like this:

    exports.init = function() {
        if (detect_mobile())
            var quality = m_cfg.P_LOW;
        else
            var quality = m_cfg.P_HIGH;

        m_app.init({
            canvas_container_id: "canvas3d",
            callback: init_cb,
            physics_enabled: true,
            quality: quality,
            show_fps: true,
            alpha: false,
            physics_uranium_path: "uranium.js"
        });
    }

    As we can see, a new initialization parameter - quality - has been added. In the P_LOW profile there are no shadows and post-processing effects. This will allow us to dramatically increase the performance on mobile devices.

    Control elements on the HTML page

    Let's add the following elements to the HTML file:

    <!DOCTYPE html>
        <div id="canvas3d"></div>
        <div id="controls">
            <div id="control_circle"></div>
            <div id="control_tap"></div>
            <div id="control_jump"></div>
        </div>

    1. The control_circle element will appear when the screen is touched, and will be used for directing the character.
    2. The control_tap element is a small marker, following the finger.
    3. The control_jump element is a jump button located in the bottom right corner of the screen.

    By default all these elements are hidden (visibility property). They will become visible after the scene is loaded.

    The styles for these elements can be found in the game_example.css file.

    Processing the touch events

    Let's look at the callback which is executed at scene load:

    function load_cb(root) {
        _character = m_scs.get_first_character();
        _character_body = m_scs.get_object_by_empty_name("character",
                /* ... */);

        var right_arrow = m_ctl.create_custom_sensor(0);
        var left_arrow  = m_ctl.create_custom_sensor(0);
        var up_arrow    = m_ctl.create_custom_sensor(0);
        var down_arrow  = m_ctl.create_custom_sensor(0);
        var touch_jump  = m_ctl.create_custom_sensor(0);

        if (detect_mobile()) {
            document.getElementById("control_jump").style.visibility = "visible";
            setup_control_events(right_arrow, up_arrow,
                                 left_arrow, down_arrow, touch_jump);
        }

        setup_movement(up_arrow, down_arrow);
        setup_rotation(right_arrow, left_arrow);
    }

    The new things here are the 5 sensors created with the controls.create_custom_sensor() method. We will change their values when the corresponding touch events are fired.

    If the detect_mobile() function returns true, the control_jump element is shown and the setup_control_events() function is called to set up the values for these new sensors (passed as arguments). This function is quite large, so we'll look at it step by step.
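    The body of detect_mobile() is not shown in this article. A common sketch, assuming a simple user-agent check (an assumption, not the tutorial's actual implementation), might look like this:

```javascript
// A possible detect_mobile() implementation based on the user-agent string.
// It takes the string as a parameter for testability; in the browser you
// would call it with navigator.userAgent. This is an assumed sketch only.
function detect_mobile(user_agent) {
    return /Android|iPhone|iPad|iPod|BlackBerry|Windows Phone/i.test(user_agent);
}
```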

    var touch_start_pos = new Float32Array(2);
    var move_touch_idx;
    var jump_touch_idx;
    var tap_elem = document.getElementById("control_tap");
    var control_elem = document.getElementById("control_circle");
    var tap_elem_offset = tap_elem.clientWidth / 2;
    var ctrl_elem_offset = control_elem.clientWidth / 2;

    First of all, variables are declared for storing the touch start point and the touch indices that correspond to the character's movement and jumping. The tap_elem and control_elem HTML elements are required in several callbacks.

    The touch_start_cb() callback

    In this function the beginning of a touch event is processed.

    function touch_start_cb(event) {
        var h = window.innerHeight;
        var w = window.innerWidth;
        var touches = event.changedTouches;
        for (var i = 0; i < touches.length; i++) {
            var touch = touches[i];
            var x = touch.clientX;
            var y = touch.clientY;
            if (x > w / 2) // right side of the screen
                continue;
            touch_start_pos[0] = x;
            touch_start_pos[1] = y;
            move_touch_idx = touch.identifier;
            tap_elem.style.visibility = "visible";
            tap_elem.style.left = x - tap_elem_offset + "px";
            tap_elem.style.top  = y - tap_elem_offset + "px";
            control_elem.style.visibility = "visible";
            control_elem.style.left = x - ctrl_elem_offset + "px";
            control_elem.style.top  = y - ctrl_elem_offset + "px";
        }
    }

    Here we iterate through all the changed touches of the event (event.changedTouches) and discard the touches from the right half of the screen:

        if (x > w / 2) // right side of the screen

    If this condition is met, the touch is skipped; otherwise (the touch happened on the left half of the screen) we save the touch point in touch_start_pos and the index of this touch in move_touch_idx. After that we render 2 elements at the touch point: control_tap and control_circle. This will look on the device screen as follows:


    The touch_jump_cb() callback

    function touch_jump_cb(event) {
        var touches = event.changedTouches;
        for (var i = 0; i < touches.length; i++) {
            var touch = touches[i];
            m_ctl.set_custom_sensor(jump, 1);
            jump_touch_idx = touch.identifier;
        }
    }

    This callback is called when the control_jump button is touched.


    It just sets the jump sensor value to 1 and saves the corresponding touch index.

    The touch_move_cb() callback

    This function is very similar to the touch_start_cb() function. It processes finger movements on the screen.

        function touch_move_cb(event) {
            m_ctl.set_custom_sensor(up_arrow, 0);
            m_ctl.set_custom_sensor(down_arrow, 0);
            m_ctl.set_custom_sensor(left_arrow, 0);
            m_ctl.set_custom_sensor(right_arrow, 0);
            var h = window.innerHeight;
            var w = window.innerWidth;
            var touches = event.changedTouches;
            for (var i = 0; i < touches.length; i++) {
                var touch = touches[i];
                var x = touch.clientX;
                var y = touch.clientY;
                if (x > w / 2) // right side of the screen
                    continue;
                tap_elem.style.left = x - tap_elem_offset + "px";
                tap_elem.style.top  = y - tap_elem_offset + "px";
                var d_x = x - touch_start_pos[0];
                var d_y = y - touch_start_pos[1];
                var r = Math.sqrt(d_x * d_x + d_y * d_y);
                if (r < 16) // don't move if control is too close to the center
                    continue;
                var cos = d_x / r;
                var sin = -d_y / r;
                if (cos > Math.cos(3 * Math.PI / 8))
                    m_ctl.set_custom_sensor(right_arrow, 1);
                else if (cos < -Math.cos(3 * Math.PI / 8))
                    m_ctl.set_custom_sensor(left_arrow, 1);
                if (sin > Math.sin(Math.PI / 8))
                    m_ctl.set_custom_sensor(up_arrow, 1);
                else if (sin < -Math.sin(Math.PI / 8))
                    m_ctl.set_custom_sensor(down_arrow, 1);
            }
        }

    The values of d_x and d_y denote how far the marker has shifted from the point where the touch started. From these offsets the distance r to that point is calculated, as well as the cosine and sine of the direction angle. Together they fully define the required behavior for any finger position by means of simple trigonometry.

    As a result the ring is divided into 8 sectors, each assigned its own combination of the right_arrow, left_arrow, up_arrow and down_arrow sensors.
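    To make the sector logic easier to follow, here is the same math restated as a standalone pure function (returning flags instead of calling set_custom_sensor()):

```javascript
// Standalone restatement of the touch_move_cb() sector math: given the
// finger offset (d_x, d_y) from the touch start point, decide which
// direction sensors fire. Screen Y grows downward, hence the -d_y.
function direction_sensors(d_x, d_y) {
    var sensors = {right: 0, left: 0, up: 0, down: 0};
    var r = Math.sqrt(d_x * d_x + d_y * d_y);
    if (r < 16) // dead zone around the center
        return sensors;
    var cos = d_x / r;
    var sin = -d_y / r;
    // cos(3*PI/8) and sin(PI/8) are both ~0.38, so each axis has a
    // 45-degree neutral band; diagonal directions set two flags at once.
    if (cos > Math.cos(3 * Math.PI / 8))
        sensors.right = 1;
    else if (cos < -Math.cos(3 * Math.PI / 8))
        sensors.left = 1;
    if (sin > Math.sin(Math.PI / 8))
        sensors.up = 1;
    else if (sin < -Math.sin(Math.PI / 8))
        sensors.down = 1;
    return sensors;
}

console.log(direction_sensors(100, 0));  // {right: 1, left: 0, up: 0, down: 0}
console.log(direction_sensors(70, -70)); // {right: 1, left: 0, up: 1, down: 0}
```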

    The touch_end_cb() callback

    This callback resets the sensors' values and the saved touch indices.

        function touch_end_cb(event) {
            var touches = event.changedTouches;
            for (var i = 0; i < touches.length; i++) {
                if (touches[i].identifier == move_touch_idx) {
                    m_ctl.set_custom_sensor(up_arrow, 0);
                    m_ctl.set_custom_sensor(down_arrow, 0);
                    m_ctl.set_custom_sensor(left_arrow, 0);
                    m_ctl.set_custom_sensor(right_arrow, 0);
                    move_touch_idx = null;
                    tap_elem.style.visibility = "hidden";
                    control_elem.style.visibility = "hidden";
                } else if (touches[i].identifier == jump_touch_idx) {
                    m_ctl.set_custom_sensor(jump, 0);
                    jump_touch_idx = null;
                }
            }
        }

    Also, when the movement touch ends, the corresponding control elements are hidden:

        tap_elem.style.visibility = "hidden";
        control_elem.style.visibility = "hidden";


    Setting up the callbacks for the touch events

    And the last thing happening in the setup_control_events() function is setting up the callbacks for the corresponding touch events:

        document.getElementById("canvas3d").addEventListener("touchstart", touch_start_cb, false);
        document.getElementById("control_jump").addEventListener("touchstart", touch_jump_cb, false);
        document.getElementById("canvas3d").addEventListener("touchmove", touch_move_cb, false);
        document.getElementById("canvas3d").addEventListener("touchend", touch_end_cb, false);
        document.getElementById("controls").addEventListener("touchend", touch_end_cb, false);

    Please note that the touchend event is listened for on two HTML elements. That is because the user can release a finger both inside and outside of the controls element.

    Now we have finished working with events.

    Including the touch sensors into the system of controls

    Now we only have to add the created sensors to the existing system of controls. Let's check out the changes using the setup_movement() function as an example.

    function setup_movement(up_arrow, down_arrow) {
        var key_w     = m_ctl.create_keyboard_sensor(m_ctl.KEY_W);
        var key_s     = m_ctl.create_keyboard_sensor(m_ctl.KEY_S);
        var key_up    = m_ctl.create_keyboard_sensor(m_ctl.KEY_UP);
        var key_down  = m_ctl.create_keyboard_sensor(m_ctl.KEY_DOWN);
        var move_array = [
            key_w, key_up, up_arrow,
            key_s, key_down, down_arrow
        ];
        var forward_logic  = function(s){return (s[0] || s[1] || s[2])};
        var backward_logic = function(s){return (s[3] || s[4] || s[5])};
        function move_cb(obj, id, pulse) {
            if (pulse == 1) {
                switch (id) {
                case "FORWARD":
                    var move_dir = 1;
                    m_anim.apply(_character_body, "character_run_B4W_BAKED");
                    break;
                case "BACKWARD":
                    var move_dir = -1;
                    m_anim.apply(_character_body, "character_run_B4W_BAKED");
                    break;
                }
            } else {
                var move_dir = 0;
                m_anim.apply(_character_body, "character_idle_01_B4W_BAKED");
            }
            m_phy.set_character_move_dir(obj, move_dir, 0);
            m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
        }
        m_ctl.create_sensor_manifold(_character, "FORWARD", m_ctl.CT_TRIGGER,
            move_array, forward_logic, move_cb);
        m_ctl.create_sensor_manifold(_character, "BACKWARD", m_ctl.CT_TRIGGER,
            move_array, backward_logic, move_cb);
        m_anim.apply(_character_body, "character_idle_01_B4W_BAKED");
        m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
    }

    As we can see, the only things that have changed are the set of sensors in move_array and the forward_logic() and backward_logic() logic functions, which now depend on the touch sensors as well.
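    The role of a logic function is easy to see in isolation. Below, the forward logic from the listing above is applied to a few sample sensor-value snapshots (the arrays stand in for the current values of move_array):

```javascript
// The manifold's logic function receives the current values of all sensors
// in move_array, in order: [key_w, key_up, up_arrow, key_s, key_down, down_arrow].
// FORWARD fires if any of the first three is active, so the keyboard keys and
// the touch control are fully interchangeable.
var forward_logic = function(s) { return (s[0] || s[1] || s[2]); };

console.log(forward_logic([0, 0, 1])); // 1: the touch up_arrow sensor alone triggers FORWARD
console.log(forward_logic([1, 0, 0])); // 1: so does the W key
console.log(forward_logic([0, 0, 0])); // 0: no input, no movement
```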

    The setup_rotation() and setup_jumping() functions have changed in a similar way. They are listed below:

    function setup_rotation(right_arrow, left_arrow) {
        var key_a     = m_ctl.create_keyboard_sensor(m_ctl.KEY_A);
        var key_d     = m_ctl.create_keyboard_sensor(m_ctl.KEY_D);
        var key_left  = m_ctl.create_keyboard_sensor(m_ctl.KEY_LEFT);
        var key_right = m_ctl.create_keyboard_sensor(m_ctl.KEY_RIGHT);
        var elapsed_sensor = m_ctl.create_elapsed_sensor();
        var rotate_array = [
            key_a, key_left, left_arrow,
            key_d, key_right, right_arrow,
            elapsed_sensor
        ];
        var left_logic  = function(s){return (s[0] || s[1] || s[2])};
        var right_logic = function(s){return (s[3] || s[4] || s[5])};
        function rotate_cb(obj, id, pulse) {
            var elapsed = m_ctl.get_sensor_value(obj, "LEFT", 6);
            if (pulse == 1) {
                switch (id) {
                case "LEFT":
                    m_phy.character_rotation_inc(obj, elapsed * ROT_SPEED, 0);
                    break;
                case "RIGHT":
                    m_phy.character_rotation_inc(obj, -elapsed * ROT_SPEED, 0);
                    break;
                }
            }
        }
        m_ctl.create_sensor_manifold(_character, "LEFT", m_ctl.CT_CONTINUOUS,
            rotate_array, left_logic, rotate_cb);
        m_ctl.create_sensor_manifold(_character, "RIGHT", m_ctl.CT_CONTINUOUS,
            rotate_array, right_logic, rotate_cb);
    }

    function setup_jumping(touch_jump) {
        var key_space = m_ctl.create_keyboard_sensor(m_ctl.KEY_SPACE);
        var jump_cb = function(obj, id, pulse) {
            if (pulse == 1) {
                m_phy.character_jump(obj);
            }
        }
        m_ctl.create_sensor_manifold(_character, "JUMP", m_ctl.CT_TRIGGER,
            [key_space, touch_jump], function(s){return s[0] || s[1]}, jump_cb);
    }

    And the camera again

    In the end let's return to the camera. Keeping in mind the community feedback, we've introduced the possibility to tweak the stiffness of the camera constraint. Now this function call is as follows:

        m_cons.append_semi_soft_cam(camera, _character, CAM_OFFSET, CAM_SOFTNESS);

    The CAM_SOFTNESS constant is defined in the beginning of the file and its value is 0.2.


    At this stage, programming the controls for mobile devices is finished. In the next tutorials we'll implement the gameplay and look at some other features of the Blend4Web physics engine.

    Link to the standalone application

    The source files of the application and the scene are part of the free Blend4Web SDK distribution.

    • Sep 03 2014 02:02 PM
    • by Spunya
  21. The Art of Feeding Time: Branding

    Although a game's branding rarely has much to do with its gameplay, it's still a very important forward-facing aspect to consider.

    Initial concepts for a Feeding Time logo.

    For Feeding Time's logo, we decided to create numerous designs and get some feedback before committing to a single concept.

    Our early mockups featured both a clock and various types of food. Despite seeming like a perfect fit, the analog clock caused quite a bit of confusion in-game. We wanted a numerical timer to clearly indicate a level's duration, but this was criticized when placed on an analog clock background. Since the concept already prompted some misunderstandings -- and a digital watch was too high-tech for the game's rustic ambiance -- we decided to avoid it for the logo.

    The food concepts were more readable than the clock, but Feeding Time was meant to be a game where any type of animal could make an appearance. Consequently we decided to avoid single food-types to prevent the logo from being associated with just one animal.

    Even more logo concepts. They're important!

    A few more variations included a placemat and a dinner bell, but we didn't feel like these really captured the look of the game. We were trying to be clever, but the end results weren't quite there.

    We felt that the designs came across as somewhat sterile, resembling the perfect vector logos of large conglomerates that looked bland compared to the in-game visuals.

    Our final logo.

    Ultimately we decided to go with big, bubbly letters on top of a simple apéritif salad. It was bright and colourful, and fit right in with the restaurant-themed UI we were pursuing at the time. We even used the cloche-unveiling motif in the trailer!

    One final extra touch was a bite mark on the top-right letter. We liked the idea in the early carrot-logo concept, and felt that it added an extra bit of playfulness.

    Initial sketches for the app icon.

    The app icon was a bit easier to nail down, as the small amount of space led us to set aside our rule of avoiding specific foods and animals. We still tried out a few different sketches, but the dog-and-bone was easily the winner. It matched the in-game art, represented the core of the gameplay, and was fairly readable at all resolutions.

    To help us gauge the clarity of the icon, we used the App Icon Template.

    This package contains a large Photoshop file with a Smart Object embedded in various portholes and device screenshots. The Smart Object can be replaced with any logo to quickly get a feel for how it appears at different resolutions and how it is framed within the AppStore. This was particularly helpful with the bordering, as iOS 7 increased the corner radius, making the icons appear rounder.

    Final icon iterations for Feeding Time.

    Despite a lot of vibrant aesthetics, we still felt that Feeding Time was missing a face: a central identifying character.

    Our first shot at a "mascot" was a grandmother who sent the player to various parts of the world in order to feed their hungry animals. A grandmother fretting over everyone having enough to eat is a fairly identifiable concept, and it nicely fit in with the stall-delivery motif.

    Our initial clerk was actually a babushka with some not-so-kindly variations.

    However, there was one problem: the introductory animation showed the grandmother tossing various types of food into her basket and random animals periodically snatching 'em away.

    We thought this sequence did a good job of previewing the gameplay in a fairly cute and innocuous fashion, but the feedback was quite negative. People were very displeased that all the nasty animals were stealing from the poor old woman!

    People were quite appalled by the rapscallion animals when the clerk was played by a kindly grandma.

    It was a big letdown as we really liked the animation, but much to our surprise we were told it'd still work OK with a slightly younger male clerk. A quick mockup later, and everyone was pleased with the now seemingly playful shenanigans of the animals!

    Having substituted the kindly babushka for a jolly uncle archetype, we also shrunk down the in-game menus and inserted the character above them to add an extra dash of personality.

    The clerk as he appears over two pause menus, a bonus game in which the player gets a low score, and a bonus game in which the player gets a high score.

    The clerk made a substantial impact by keeping the player company on their journey, so we decided to illustrate a few more expressions. We also made these reflect the player's performance, helping to link the character with in-game events such as bonus-goal completion and minigame scores.

    The official Feeding Time website complete with our logo, title-screen stall and background, a happy clerk, and a bunch of dressed up animals.

    Finally, we used the clerk and various game assets for the Feeding Time website and other Incubator Games outlets. We made sure to support 3rd generation iPads with a resolution of 2048x1536, which came in handy for creating various backgrounds, banners, and icons used on our Twitter, Facebook, YouTube, tumblr, SlideDB, etc.

    Although branding all these sites wasn't a must, it helped to unify our key message: Feeding Time is now available!

    Article Update Log

    30 July 2014: Initial release

  22. Making a Game with Blend4Web Part 2: Models for the Location

    In this article we will describe the process of creating the models for the location - geometry, textures and materials. This article is aimed at experienced Blender users that would like to familiarize themselves with creating game content for the Blend4Web engine.

    Graphical content style

    In order to create the game atmosphere, a non-photorealistic cartoon setting was chosen. The character and environment proportions have been deliberately exaggerated to give the gaming process a comic, light-hearted feel.

    Location elements

    This location consists of the following elements:
    • the character's action area: 5 platforms on which the main game action takes place;
    • the background environment, the role of which will be performed by less-detailed ash-colored rocks;
    • lava covering most of the scene surface.
    At this stage the source blend files of models and scenes are organized as follows:


    1. env_stuff.blend - the file with the scene's environment elements which the character is going to move on;
    2. character_model.blend - the file containing the character's geometry, materials and armature;
    3. character_animation.blend - the file which has the character's group of objects and animation (including the baked one) linked to it;
    4. main_scene.blend - the scene which has the environment elements from other files linked to it. It also contains the lava model, collision geometry and the lighting settings;
    5. example2.blend - the main file, which has the scene elements and the character linked to it (in the future more game elements will be added here).

    In this article we will describe the creation of simple low-poly geometry for the environment elements and the 5 central islands. As the game is intended for mobile devices we decided to manage without normal maps and use only the diffuse and specular maps.

    Making the geometry of the central islands


    First of all we will make the central islands in order to settle on the scene scale. This process can be divided into 3 steps:

    1) A flat outline of the future islands using single vertices, which were later joined into polygons and triangulated for convenient editing when needed.


    2) The Solidify modifier was used for the flat outline with the parameter equal to 0.3, which pushes the geometry volume up.


    3) At the last stage the Solidify modifier was applied to get the mesh for hand editing. The mesh was subdivided where needed at the edges of the islands. According to the final vision cavities were added and the mesh was changed to create the illusion of rock fragments with hollows and projections. The edges were sharpened (using Edge Sharp), after which the Edge Split modifier was added with the Sharp Edges option enabled. The result is that a well-outlined shadow has appeared around the islands.

    Note:  It's not recommended to apply modifiers (using the Apply button). Enable the Apply Modifiers checkbox in the object settings on the Blend4Web panel instead; as a result the modifiers will be applied to the geometry automatically on export.


    Texturing the central islands

    Now that the geometry for the main islands has been created, let's move on to texturing and setting up the material for baking. The textures were created using a combination of baking and hand-painting techniques.

    Four textures were prepared altogether.


    At the first stage, let's define the color, adding small spots and cracks to create the effect of a rough, stony and dusty rock. To paint these bumps, texture brushes were used, which can be downloaded from the Internet or drawn by yourself if necessary.


    At the second stage the ambient occlusion effect was baked. Because the geometry is low-poly, relatively sharp transitions between light and shadow appeared as a result. These can be slightly blurred with a Gaussian Blur filter in a graphical editor.


    The third stage is the most time consuming - painting the black and white texture by hand in the Texture Painting mode. It was laid over the other two, lightening and darkening certain areas. It's necessary to keep the model's geometry in mind so that the darker areas mostly end up in cracks, with the brighter ones on the sharp geometry angles. A generic brush was used with stylus pressure sensitivity turned on.


    The color turned out to be monotonous so a couple of withered places imitating volcanic dust and stone scratches have been added. In order to get more flexibility in the process of texturing and not to use the original color texture, yet another texture was introduced. On this texture the light spots are decolorizing the previous three textures, and the dark spots don't change the color.


    You can see how the created textures were combined on the auxiliary node material scheme below.


    The color of the diffuse texture (1) was multiplied by itself to increase contrast in dark places.

    After that the color was burned a bit in the darker places using baked ambient occlusion (2), and the hand-painted texture (3) was layered on top - the Overlay node gave the best result.

    At the next stage the texture with baked ambient occlusion (2) was layered again - this time with the Multiply node - in order to darken the textures in certain places.

    Finally the fourth texture (4) was used as a mask, using which the result of the texture decolorizing (using Hue/Saturation) and the original color texture (1) were mixed together.

    The specular map was made from applying the Squeeze Value node to the overall result.

    As a result we have the following picture.


    Creating the background rocks

    The geometry of rocks was made according to a similar technology although some differences are present. First of all we created a low-poly geometry of the required form. On top of it we added the Bevel modifier with an angle threshold, which added some beveling to the sharpest geometry places, softening the lighting at these places.


    The rock textures were created approximately in the same way as the island textures. This time a texture with decolorizing was not used because such a level of detail is excessive for the background. Also the texture created with the texture painting method is less detailed. Below you can see the final three textures and the results of laying them on top of the geometry.


    The texture combination scheme was also simplified.


    First comes the color map (1), over which goes the baked ambient occlusion (2), and finally - the hand-painted texture (3).

    The specular map was created from the color texture. To do this a single texture channel (Separate RGB) was used, which was corrected (Squeeze Value) and given into the material as the specular color.

    There is another special feature in this scheme which makes it different from the previous one - the dirty map baked into the vertex color and overlaid (Overlay node) in order to create contrast between the cavities and elevations of the geometry.


    The final result of texturing the background rocks:


    Optimizing the location elements

    Let's start optimizing the elements we have and preparing them for display in Blend4Web.

    First of all we need to combine all the textures of the above-mentioned elements (background rocks and the islands) into a single texture atlas and then re-bake them into a single texture map. To do this, let's combine the UV maps of all the geometry into a single UV map using the Texture Atlas addon.

    Note:  The Texture Atlas addon can be activated in Blender's settings under the Addons tab (UV category)


    In the texture atlas mode, let's place the UV maps of every mesh so that they fill up the future texture area evenly.

    Note:  It's not necessary to follow the same scale for all elements. It's recommended to allow more space for foreground elements (the islands).


    After that let's bake the diffuse texture and the specular map from the materials of rocks and islands.


    Note:  In order to save video memory, the specular map was packed into the alpha channel of the diffuse texture. As a result we got only one file.

    Let's place all the environment elements into a separate file (i.e. a library): env_stuff.blend. For convenience we will put them on different layers. Let's place the mesh bottom of every element at the origin. Every separate element needs its own group with the same name.


    After the elements were gathered in the library, we can start creating the material. The material for all the library elements - both for the islands and the background rocks - is the same. This will let the engine automatically merge the geometry of all these elements into a single object which increases the performance significantly through decreasing the number of draw calls.

    Setting up the material

    The previously baked diffuse texture (1), into the alpha channel of which the specular map is packed, serves as the basis for the node material.


    Our scene includes lava with which the environment elements will be contacting. Let's create the effect of the rock glowing and being heated in the contact places. To do this we will use a vertex mask (2), which we will apply to all library elements - and paint the vertices along the bottom geometry line.


    The vertex mask was modified several times by the Squeeze Value node. First of all the less hot color of the lava glow (3) is placed on top of the texture using a more blurred mask. Then a brighter yellow color (4) is added near the contact places using a slightly tightened mask - in order to imitate a fritted rock.

    Lava should illuminate the rock from below. So in order to avoid shadowing in lava-contacting places we'll pass the same vertex mask into the Emit material's socket.

    We have one last thing to do - pass (5) the specular value from the diffuse texture's alpha channel to the Spec material's socket.


    Object settings

    Let's enable the "Apply Modifiers" checkbox (as mentioned above) and also the "Shadows: Receive" checkbox in the object settings of the islands.



    Let's create exact copies of the island's geometry (named _collision for convenience). For these meshes we'll replace the material by a new material (named collision), and enable the "Special: Collision" checkbox in its settings (Blend4Web panel). This material will be used by the physics engine for collisions.

    Let's add the resulting objects into the same groups as the islands themselves.



    We've finished creating the library of the environment models. In one of the upcoming articles we'll demonstrate how the final game location was assembled and also describe making the lava effect.

    Link to the standalone application

    The source files of the application and the scene are part of the free Blend4Web SDK distribution.

    • Aug 18 2014 09:27 AM
    • by Spunya
  23. Making a Game with Blend4Web Part 3: Level Design

    This is the third article in the Making a Game series. In this article we'll consider assembling the game scene using the models prepared at the previous stage, setting up the lighting and the environment, and also we'll look in detail at creating the lava effect.

    Assembling the game scene

    Let's assemble the scene's visual content in the main_scene.blend file. We'll add the previously prepared environment elements from the env_stuff.blend file.

    Open the env_stuff.blend file via the File -> Link menu, go to the Group section, add the geometry of the central islands (1) and the background rocks (2) and arrange them on the scene.


    Now we need to create the surface geometry of the future lava. The surface can be inflated a bit to deepen the effect of the horizon receding into the distance. Let's prepare 5 holes in the center, copying the outlines of the 5 central islands, for the vertex mask which we'll introduce later.

    We'll also copy this geometry and assign the collision material to it as it is described in the previous article.


    A simple cube will serve as the environment, with its center located at the horizon level for convenience. The cube's normals must be directed inside.

    Let's set up a simple node material for it. We get a vertical gradient (1), located at the level of the proposed horizon, from the Global socket. After squeezing and shifting it a bit with the Squeeze Value node (2) we add the color (3). The result is passed directly into the Output node without the use of an intermediate Material node in order to make this object shadeless.


    Setting up the environment

    We'll set up the fog under the World tab using the Fog density and Fog color parameters. Let's enable ambient lighting with the Environment Lighting option and setup its intensity (Energy). We'll select the two-color hemispheric lighting model Sky Color and tweak the Zenith Color and Horizon Color.


    Next place two light sources into the scene. The first one of the Sun type will illuminate the scene from above. Enable the Generate Shadows checkbox for it to be a shadow caster. We'll put the second light source (also Sun) below and direct it vertically upward. This source will imitate the lighting from lava.


    Then add a camera for viewing the exported scene. Make sure that the camera's Move style is Target (look at the camera settings on the Blend4Web panel), i.e. the camera is rotating around a certain pivot. Let's define the position of this pivot on the same panel (Target location).

    Also, distance and vertical angle limits can be assigned to the camera for convenient scene observation in the Camera limits section.


    Adding the scene to the scene viewer

    At this stage a test export of the scene can be performed: File -> Export -> Blend4Web (.json). Let's add the exported scene to the list of the scene viewer external/deploy/assets/assets.json using any text editor, for example:

            "name": "Tutorials",
                    "name": "Game Example",
                    "load_file": "../tutorials/examples/example2/main_scene.json"

    Then we can open the scene viewer apps_dev/viewer/viewer_dev.html with a browser, go to the Scenes panel and select the scene which is added to the Tutorials category.


    The tools of the scene viewer are useful for tweaking scene parameters in real time.

    Setting up the lava material

    We'll prepare two textures by hand for the lava material, one is a repeating seamless diffuse texture and another will be a black and white texture which we'll use as a mask. To reduce video memory consumption the mask is packed into the alpha channel of the diffuse texture.


    The material consists of several blocks. The first block (1) constantly shifts the UV coordinates for the black and white mask using the TIME (2) node in order to imitate the lava flow movement.


    The TIME node is basically a node group with a reserved name. This group is replaced by the time-generating algorithm in the Blend4Web engine. To add this node it's enough to create a node group named TIME which has an output of the Value type. It can be left empty or can have for example a Value node for convenient testing right in Blender's viewport.

    In the other two blocks (4 and 5) the modified mask stretches and squeezes the UV in certain places, creating a swirling flow effect for the lava. The results are mixed together in block 6 to imitate the lava flow.

    Furthermore, the lava geometry has a vertex mask (3), which is used to blend in a pure color (7) at the final stage to visualize the lava's burning hot spots.


    To simulate the lava glow the black and white mask (8) is passed to the Emit socket. The mask itself is derived from the modified lava texture and from a special procedural mask (9), which reduces the glow effect with distance.


    This concludes the assembly of the game scene. The result can be exported and viewed in the engine. In one of the upcoming articles we'll show the process of modeling and texturing the visual content for the character and preparing it for the Blend4Web engine.


    Link to the standalone application

    The source files of the application and the scene are part of the free Blend4Web SDK distribution.

    • Aug 18 2014 09:36 AM
    • by Spunya
  24. Making a Game with Blend4Web Part 1: The Character

    Today we're going to start creating a fully-functional game app with Blend4Web.


    Let's set up the gameplay. The player - a brave warrior - moves around a limited number of platforms. Melting hot stones keep falling on him from the sky; the stones should be avoided. Their number increases with time. Different bonuses which give various advantages appear on the location from time to time. The player's goal is to stay alive as long as possible. Later we'll add some other interesting features but for now we'll stick to these. This small game will have a third-person view.

    In the future, the game will support mobile devices and a score system. And now we'll create the app, load the scene and add the keyboard controls for the animated character. Let's begin!

    Setting up the scene

    Game scenes are created in Blender and then are exported and loaded into applications. Let's use the files made by our artist which are located in the blend/ directory. The creation of these resources will be described in a separate article.

    Let's open the character_model.blend file and set up the character. We'll do this as follows: switch to the Blender Game mode and select the character_collider object - the character's physical object.


    Under the Physics tab we'll specify the settings as pictured above. Note that the physics type must be either Dynamic or Rigid Body, otherwise the character will be motionless.

    The character_collider object is the parent of the "graphical" character model, which therefore follows the invisible physical model. Note that the lowest points of the capsule and the avatar differ slightly in height. This compensates for the Step height parameter, which lifts the character above the surface so that it can pass over small obstacles.

    Now let's open the main game_example.blend file, from which we'll export the scene.


    The following components are linked to this file:

    1. The character group of objects (from the character_model.blend file).
    2. The environment group of objects (from the main_scene.blend file) - this group contains the static scene models and also their copies with the collision materials.
    3. The baked animations character_idle_01_B4W_BAKED and character_run_B4W_BAKED (from the character_animation.blend file).

    To link components from another file go to File -> Link and select the file. Then go to the corresponding datablock and select the components you wish. You can link anything you want - from a single animation to a whole scene.

    Make sure that the Enable physics checkbox is turned on in the scene settings.

    The scene is ready; let's move on to programming.

    Preparing the necessary files

    Let's place the following files into the project's root:

    1. The engine b4w.min.js
    2. The addon for the engine app.js
    3. The physics engine uranium.js

    The files we'll be working with are: game_example.html and game_example.js.

    Let's link all the necessary scripts to the HTML file:

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
        <script type="text/javascript" src="b4w.min.js"></script>
        <script type="text/javascript" src="app.js"></script>
        <script type="text/javascript" src="game_example.js"></script>
        <style>
            body {
                margin: 0;
                padding: 0;
            }
        </style>
    </head>
    <body onload="b4w.require('game_example_main').init();">
        <div id="canvas3d"></div>
    </body>
    </html>

    Next we'll open the game_example.js script and add the following code:

    "use strict";

    if (b4w.module_check("game_example_main"))
        throw "Failed to register module: game_example_main";

    b4w.register("game_example_main", function(exports, require) {

    var m_anim  = require("animation");
    var m_app   = require("app");
    var m_main  = require("main");
    var m_data  = require("data");
    var m_ctl   = require("controls");
    var m_phy   = require("physics");
    var m_cons  = require("constraints");
    var m_scs   = require("scenes");
    var m_trans = require("transform");
    var m_cfg   = require("config");

    var _character;
    var _character_body;

    var ROT_SPEED = 1.5;
    var CAMERA_OFFSET = new Float32Array([0, 1.5, -4]);

    exports.init = function() {
        m_app.init({
            canvas_container_id: "canvas3d",
            callback: init_cb,
            physics_enabled: true,
            alpha: false,
            physics_uranium_path: "uranium.js"
        });
    }

    function init_cb(canvas_elem, success) {
        if (!success) {
            console.log("b4w init failure");
            return;
        }
        window.onresize = on_resize;
        on_resize();
        load();
    }

    function on_resize() {
        var w = window.innerWidth;
        var h = window.innerHeight;
        m_main.resize(w, h);
    }

    function load() {
        m_data.load("game_example.json", load_cb);
    }

    function load_cb(root) {
    }

    });

    If you have read the Creating an Interactive Web Application tutorial, there won't be much new stuff for you here. At this stage all the necessary modules are linked and the init function and two callbacks are defined. The app window can also be resized using the on_resize function.

    Pay attention to the additional physics_uranium_path initialization parameter which specifies the path to the physics engine file.

    The global variable _character is declared for the physics object while _character_body is defined for the animated model. Also the two constants ROT_SPEED and CAMERA_OFFSET are declared, which we'll use later.

    At this stage we can run the app and look at the static scene with the character motionless.

    Moving the character

    Let's add the following code into the loading callback:

    function load_cb(root) {
        _character = m_scs.get_first_character();
        _character_body = m_scs.get_object_by_empty_name("character",
                                                         "character_body");

        setup_movement();
        setup_rotation();
        setup_jumping();

        m_anim.apply(_character_body, "character_idle_01");
        m_anim.play(_character_body);
        m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
    }

    First we save the physical character model to the _character variable. The animated model is saved as _character_body.

    The last three lines are responsible for setting up the character's starting animation.
    • animation.apply() - applies the animation with the given name,
    • animation.play() - starts playing it back,
    • animation.set_behavior() - changes the animation behavior, in our case making it cyclic.
    Please note that skeletal animation should be applied to the character object which has an Armature modifier set up in Blender for it.

    Before defining the setup_movement(), setup_rotation() and setup_jumping() functions, it's important to understand how Blend4Web's event-driven model works. We recommend reading the corresponding section of the user manual. Here we will only take a glimpse of it.

    In order to generate an event when certain conditions are met, a sensor manifold should be created.

    You can check out all the possible sensors in the corresponding section of the API documentation.

    Next we have to define a logic function, which describes in which state (true or false) the sensors of the manifold should be for the sensor callback to receive a positive result. Then we should create the callback itself, in which the desired actions will be performed. Finally, the controls.create_sensor_manifold() function should be called for the sensor manifold, which is responsible for processing the sensors' values. Let's see how this works in our case.
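    Before moving on to the real API, the pattern can be illustrated with a self-contained toy model (this is not the Blend4Web controls module, just a sketch of its trigger-type semantics):

```javascript
// Toy model of a sensor manifold: a logic function is evaluated over the
// sensor values, and in trigger mode the callback fires only when the
// logic result changes.
function createManifold(id, logicFn, callback) {
    var lastPulse = 0;
    return {
        update: function(sensorValues) {
            var pulse = logicFn(sensorValues) ? 1 : 0;
            if (pulse !== lastPulse) {  // fire on change only
                lastPulse = pulse;
                callback(id, pulse);
            }
        }
    };
}

var events = [];
var manifold = createManifold("FORWARD",
    function(s) { return s[0] || s[1]; },                    // logic function
    function(id, pulse) { events.push(id + ":" + pulse); }); // callback

manifold.update([1, 0]);  // key pressed: callback fires with pulse 1
manifold.update([1, 0]);  // no change: nothing fires
manifold.update([0, 0]);  // key released: callback fires with pulse 0
// events is now ["FORWARD:1", "FORWARD:0"]
```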

    Define the setup_movement() function:

    function setup_movement() {
        var key_w     = m_ctl.create_keyboard_sensor(m_ctl.KEY_W);
        var key_s     = m_ctl.create_keyboard_sensor(m_ctl.KEY_S);
        var key_up    = m_ctl.create_keyboard_sensor(m_ctl.KEY_UP);
        var key_down  = m_ctl.create_keyboard_sensor(m_ctl.KEY_DOWN);

        var move_array = [
            key_w, key_up,
            key_s, key_down
        ];

        var forward_logic  = function(s){return (s[0] || s[1])};
        var backward_logic = function(s){return (s[2] || s[3])};

        function move_cb(obj, id, pulse) {
            if (pulse == 1) {
                switch(id) {
                case "FORWARD":
                    var move_dir = 1;
                    m_anim.apply(_character_body, "character_run");
                    break;
                case "BACKWARD":
                    var move_dir = -1;
                    m_anim.apply(_character_body, "character_run");
                    break;
                }
            } else {
                var move_dir = 0;
                m_anim.apply(_character_body, "character_idle_01");
            }

            m_phy.set_character_move_dir(obj, move_dir, 0);

            m_anim.play(_character_body);
            m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
        }

        m_ctl.create_sensor_manifold(_character, "FORWARD", m_ctl.CT_TRIGGER,
            move_array, forward_logic, move_cb);
        m_ctl.create_sensor_manifold(_character, "BACKWARD", m_ctl.CT_TRIGGER,
            move_array, backward_logic, move_cb);
    }

    Let's create 4 keyboard sensors - for the W, S, up arrow and down arrow keys. We could have made do with just two, but we want to mirror the controls on the letter keys as well as on the arrow keys. We'll append them to the move_array.

    Now to define the logic functions. We want the movement to occur upon pressing one of two keys in move_array.

    This behavior is implemented through the following logic function:

    function(s) { return (s[0] || s[1]) }

    The most important things happen in the move_cb() function.

    Here obj is our character. The pulse argument becomes 1 when any of the defined keys is pressed. We decide if the character is moved forward (move_dir = 1) or backward (move_dir = -1) based on id, which corresponds to one of the sensor manifolds defined below. Also the run and idle animations are switched inside the same blocks.

    Moving the character is done through the following call:

    m_phy.set_character_move_dir(obj, move_dir, 0);

    Two sensor manifolds for moving forward and backward are created at the end of the setup_movement() function. They have the CT_TRIGGER type, i.e. they snap into action every time the sensor values change.

    At this stage the character is already able to run forward and backward. Now let's add the ability to turn.

    Turning the character

    Here is the definition for the setup_rotation() function:

    function setup_rotation() {
        var key_a     = m_ctl.create_keyboard_sensor(m_ctl.KEY_A);
        var key_d     = m_ctl.create_keyboard_sensor(m_ctl.KEY_D);
        var key_left  = m_ctl.create_keyboard_sensor(m_ctl.KEY_LEFT);
        var key_right = m_ctl.create_keyboard_sensor(m_ctl.KEY_RIGHT);
        var elapsed_sensor = m_ctl.create_elapsed_sensor();

        var rotate_array = [
            key_a, key_left,
            key_d, key_right,
            elapsed_sensor
        ];

        var left_logic  = function(s){return (s[0] || s[1])};
        var right_logic = function(s){return (s[2] || s[3])};

        function rotate_cb(obj, id, pulse) {
            var elapsed = m_ctl.get_sensor_value(obj, "LEFT", 4);
            if (pulse == 1) {
                switch(id) {
                case "LEFT":
                    m_phy.character_rotation_inc(obj, elapsed * ROT_SPEED, 0);
                    break;
                case "RIGHT":
                    m_phy.character_rotation_inc(obj, -elapsed * ROT_SPEED, 0);
                    break;
                }
            }
        }

        m_ctl.create_sensor_manifold(_character, "LEFT", m_ctl.CT_CONTINUOUS,
            rotate_array, left_logic, rotate_cb);
        m_ctl.create_sensor_manifold(_character, "RIGHT", m_ctl.CT_CONTINUOUS,
            rotate_array, right_logic, rotate_cb);
    }

    As we can see it is very similar to setup_movement().

    The elapsed sensor was added, which constantly generates a positive pulse. This allows us to get the time elapsed since the previous rendered frame inside the callback, using the controls.get_sensor_value() function. We need it to correctly calculate the turning speed.

    The type of sensor manifolds has changed to CT_CONTINUOUS, i.e. the callback is executed in every frame, not only when the sensor values change.

    The following method turns the character around the vertical axis:

    m_phy.character_rotation_inc(obj, elapsed * ROT_SPEED, 0)

    The ROT_SPEED constant is defined to tweak the turning speed.
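    To see why the increment is multiplied by the elapsed time, consider this self-contained sketch (plain JavaScript, no engine calls): the per-frame rotation scales with the frame duration, so the total turn over one second is the same at any frame rate.

```javascript
var ROT_SPEED = 1.5;

// Sum the per-frame increments over a number of equal-length frames.
function totalRotation(frameCount, frameDuration) {
    var angle = 0;
    for (var i = 0; i < frameCount; i++)
        angle += frameDuration * ROT_SPEED;  // same as elapsed * ROT_SPEED
    return angle;
}

// One second at 60 fps and one second at 30 fps both give about
// 1.5 radians of rotation.
```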

    Character jumping

    The last control setup function is setup_jumping():

    function setup_jumping() {
        var key_space = m_ctl.create_keyboard_sensor(m_ctl.KEY_SPACE);

        var jump_cb = function(obj, id, pulse) {
            if (pulse == 1) {
                m_phy.character_jump(obj);
            }
        }

        m_ctl.create_sensor_manifold(_character, "JUMP", m_ctl.CT_TRIGGER,
            [key_space], function(s){return s[0]}, jump_cb);
    }

    The space key is used for jumping. When it is pressed the following method is called:

    m_phy.character_jump(obj);

    Now we can control our character!

    Moving the camera

    The last thing we cover here is attaching the camera to the character.

    Let's add yet another function call - setup_camera() - into the load_cb() callback.

    This function looks as follows:

    function setup_camera() {
        var camera = m_scs.get_active_camera();
        m_cons.append_semi_soft_cam(camera, _character, CAMERA_OFFSET);
    }

    The CAMERA_OFFSET constant defines the camera position relative to the character: 1.5 meters above (Y axis in WebGL) and 4 meters behind (Z axis in WebGL).

    This function finds the scene's active camera and creates a constraint for it to follow the character smoothly.
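    Ignoring the character's orientation (which the real constraint takes into account), the offset works as a simple translation. A minimal sketch, not the constraints API:

```javascript
var CAMERA_OFFSET = [0, 1.5, -4];

// Place the camera relative to the character: 1.5 units up, 4 units back.
function offsetPosition(charPos, offset) {
    return [charPos[0] + offset[0],
            charPos[1] + offset[1],
            charPos[2] + offset[2]];
}

// offsetPosition([2, 0, 1], CAMERA_OFFSET) -> [2, 1.5, -3]
```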

    That's enough for now. Let's run the app and enjoy the result!


    Link to the standalone application

    The source files of the application and the scene are part of the free Blend4Web SDK distribution.

    • Aug 18 2014 09:27 AM
    • by Spunya
  25. The Art of Feeding Time: Animation

    While some movement was best handled programmatically, Feeding Time's extensive animal cast and layered environments still left plenty of room for hand-crafted animation. The animals in particular required experimentation to find an approach that could retain the hand-painted texturing of the illustrations while also harking back to hand-drawn animation.

    old_dogsketch.gif old_dogeat.gif

    An early pass involved creating actual sketched frames, then slicing the illustration into layers and carefully warping those into place to match each sketch. Once we decided to limit all the animals to just a single angle, we dispensed with the sketch phase and settled on creating the posed illustrations directly. When the finalized dog image was ready, a full set of animations was created to test our planned lineup.

    The initial approach was to include Sleep, Happy, Idle, Sad, and Eat animations. Sleep would play at the start of the stage, then transition into Happy upon arrival of the delivery, then settle into Idle until the player attempted to eat food, resulting in Sad for incorrect choices and Eat for correct ones.

    dog2_sleeping.gif dog3_happy.gif dog1_idle.gif dog_sad2.gif dog3_chomp.gif

    Ultimately, we decided to cut Sleep because its low visibility during the level intro didn’t warrant the additional assets. We also discovered that having the animals rush onto the screen in the beginning of the level and dart away at the end helped to better delineate the gameplay phase.

    There were also plans to play either Happy or Sad at the end of each level for the animals that ate the most and the least food. The reaction to this, however, was almost overwhelmingly negative! Players hated the idea of always making one of the animals sad regardless of how many points they scored, so we quickly scrapped the idea.

    The Happy and Sad animations were still retained to add a satisfying punch to a successful combo and to inform the player when an incorrect match was attempted. As we discovered, a sad puppy asking to be thrown a bone (instead of, say, a kitty’s fish) proved to be a great deterrent for screen mashing and worked quite well as a passive tutorial.

    While posing the frames one by one was effectively employed for the Dog, Cat, Mouse, and Rabbit, a more sophisticated and easily iterated upon approach was developed for the rest of the cast:

    monkeylayers.gif jaw_cycle.gif lip_pull.gif

    With both methods, hidden portions of the animals' faces, such as teeth and tongues, were painted beneath separated layers. With the improved method, however, these layers could be posed and keyframed much more freely, with a variety of puppet and warp tools at our disposal, making modifications to posing or frame rate much simpler.

    monkey_eat.gif beaver_eating.gif lion_eat.gif

    The poses themselves are often fairly extreme, but this was done to ensure that the motion was legible on small screens and at a fast pace in-game:


    For Feeding Time’s intro animation and environments, everything was illustrated in advance on its own layer, making animation prep a smoother process than separating the flattened animals had been.

    The texture atlas comprising the numerous animal frames grew to quite a large size — this is just a small chunk!


    Because the background elements wouldn’t require the hand-drawn motion of the animals, our proprietary tool “SLAM” was used to give artists the ability to create movement that would otherwise have to be input programmatically. With SLAM, much like Adobe Flash, artists can nest many layers of images and timelines, all of which loop within one master timeline.

    SLAM’s simple interface focuses on maximizing canvas visibility and allows animators to fine-tune image placement by numerical values if desired:


    One advantage over Flash (and the initial reason SLAM was developed) is its ability to output final animation data in a succinct, clean format, which makes it much easier to port assets to other platforms.

    Besides environments, SLAM also proved useful for large scale effects, which would otherwise bloat the game’s filesize if rendered as image sequences:


    Naturally, SLAM stands for Slick Animation, which is what we set out to create with a compact number of image assets. Hopefully ‘slick’ is what you’ll think when you see it in person, now that you have some insight into how we set things into motion!

    Article Update Log

    16 July 2014: Initial release