  1. Banshee Engine Architecture - Introduction

    This article is the first in a larger series that will explain the architecture and implementation details of the Banshee game development toolkit. This introductory article provides a very general overview of the architecture, as well as the goals and vision for Banshee. In later articles I will delve into the details of various engine systems, providing specific implementation information.

    The intent of these articles is to teach you how to implement various engine systems, show how they integrate into a larger whole, and give you insight into game engine architecture. I will cover a range of topics, from low-level run-time type information and serialization systems, multithreaded rendering, a general purpose GUI system, input handling and asset processing, to editor building and scripting languages.

    Since Banshee is very new and most likely unfamiliar to the reader, I will start with a lengthy introduction.

    What is Banshee?


    It is a free & modern multi-platform game development toolkit. It aims to provide simple yet powerful environment for creating games and other graphical applications. A wide range of features are available, ranging from a math and utility library, to DirectX 11 and OpenGL render systems all the way to asset processing, fully featured editor and C# scripting.

    At the time of writing the project is in active development, but its core systems are considered feature complete and a fully working version of the engine is available online. In its current state it can be compared to libraries like SDL or XNA, but with a wider scope. Work is progressing on various high-level systems, as described by the list of features below.

    Currently available features

    • Design
      • Built using C++11 and modern design principles
      • Clean layered design
      • Fully documented
      • Modular & plugin based
      • Multiplatform ready
    • Renderer
      • DX9, DX11 and OpenGL 4.3 render systems
      • Multi-threaded rendering
      • Flexible material system
      • Easy to control and set up
      • Shader parsing for HLSL9, HLSL11 and GLSL
    • Asset pipeline
      • Easy to use
      • Asynchronous resource loading
      • Extensible importer system
      • Available importer plugins for:
        • FBX, OBJ, DAE meshes
        • PNG, PSD, BMP, JPG, ... images
        • OTF, TTF fonts
        • HLSL9, HLSL11, GLSL shaders
    • Powerful GUI system
      • Unicode text rendering and input
      • Easy to use layout based system
      • Many common GUI controls
      • Fully skinnable
      • Automatic batching
      • Support for texture atlases
      • Localization
    • Other
      • CPU & GPU profiler
      • Virtual input
      • Advanced RTTI system
      • Automatic object serialization/deserialization
      • Debug drawing
      • Utility library
        • Math, file system, events, thread pool, task scheduler, logging, memory allocators and more

    Features coming soon (2015 & 2016)

    • WYSIWYG editor
      • All in one editor
      • Scene, asset and project management
      • Play-in-editor
      • Integration with scripting system
      • Fully customizable for custom workflows
      • One click multi-platform building
    • C# scripting
      • Multiplatform via Mono
      • Full access to .NET library
      • High level engine wrapper
    • High quality renderer
      • Fully deferred
      • Physically based shading
      • Global illumination
      • Gamma correct and HDR rendering
      • High quality post processing effects
    • 3rd party physics, audio, video, network and AI system integration
      • FMOD
      • PhysX
      • Ogg Vorbis
      • Ogg Theora
      • Raknet
      • Recast/Detour

    Download


    You might want to retrieve the project source code to better follow the articles to come - in each article I will reference source code files that you may view for exact implementation details. I will be touching on features currently available and will update the articles as new features are released.

    You may download Banshee from its GitHub page:
    https://github.com/BearishSun/BansheeEngine

    Vision


    The ultimate goal for Banshee is to be a fully featured toolkit that is easy to use, powerful, well designed and extensible, so that it may rival AAA engine quality. I'll touch upon each of those factors and explain how exactly Banshee attempts to accomplish them.

    Ease of use

    The Banshee interface (both code- and UI-wise) was created to be as simple as possible without sacrificing customizability. Banshee is designed in layers: the lowest layers provide the most general purpose functionality, while higher layers reference lower layers and provide more specialized functionality. Most people will be happy with the simpler, more specialized functionality, but the lower-level functionality is there if they need it, and it wasn't designed as an afterthought either.

    The highest level is imagined as a multi-purpose editor that deals with scene editing, asset import and processing, animation, particles, terrain and similar. The entire editor is designed to be extensible without deep knowledge of the engine - a special scripting interface is provided just for the editor. Each game requires its own custom workflow and set of tools, which is reflected in the editor design.

    On a layer below lies the C# scripting system. C# allows you to write the high-level functionality of your project more easily and safely. It provides access to the large .NET library and, most importantly, has extremely fast iteration times, so you may test your changes within seconds of making them. All compilation is done in the editor and you may jump into the game immediately after it finishes - this even applies if you are modifying the editor itself.

    Power

    Below the C# scripting layer lie two separate, fast C++ layers that allow you to access the engine core, renderer and rendering APIs directly. Not everyone's performance requirements can be satisfied at the high level, which is why even the low-level interfaces had a lot of thought put into them.

    Banshee is a fully multithreaded engine designed with performance in mind. The renderer thread runs completely separately from the rest of your code, giving you maximum CPU resources for the best graphical fidelity. Resources are loaded asynchronously, avoiding stalls, and internal buffers and systems are designed to avoid CPU-GPU synchronization points.

    Additionally Banshee comes with built-in CPU and GPU profilers that monitor speed, memory allocations and resource usage for squeezing the most out of your code.

    Power doesn’t only mean speed, but also features. Banshee isn’t just a library, but aims to be a fully featured development toolkit. This includes an all-purpose editor, a scripting system, integration with 3rd party physics, audio, video, networking and AI solutions, high fidelity renderer, and with the help of the community hopefully much more.

    Extensibility

    A major part of Banshee is the extensible all-purpose editor. Games need custom tools that make development easier and allow your artists and designers to do more. This can range from simple data input for game NPC stats to complex 3D editing tools for your in-game cinematics. The GUI system was designed to make it as easy as possible to design your own input interfaces, and a special scripting interface has been provided that exposes the majority of editor functionality for a variety of other uses.

    Aside from being a big part of the editor, extensibility is also prevalent throughout the lower layers of the engine. Anything not considered core is built as a plugin that inherits a common abstract interface. This means you can build your own plugins for various engine systems without touching the rest of the engine. For example, the DX9, DX11 and OpenGL render system APIs are all built as plugins and you may switch between them with a single line of code.

    Quality design

    A great deal of effort has been spent to design Banshee the right way, with no shortcuts. The entire toolkit, from the low level file system library to GUI system and the editor has been designed and developed from scratch following modern design principles and using modern technologies, solely for the purposes of Banshee.

    It has been made as modular and decoupled as possible to allow people to easily replace or update engine systems. The plugin-based architecture keeps all the specialized code outside of the engine core, which makes it easier to tailor the engine to your own needs by extending it with new plugins. It also makes the engine easier to learn, as you have clearly defined boundaries between systems; this is further supported by the layered architecture, which reduces class coupling and makes the direction of dependencies even clearer. Additionally, every non-trivial method, from the lowest to the highest layer, is fully documented.

    From its inception it has been designed to be a multi-platform and a multi-threaded engine.

    Platform-specific functionality is kept to a minimum and is cleanly encapsulated in order to make porting to other platforms as easy as possible. This is further supported by its render API interface which already supports multiple popular APIs, including OpenGL.

    Its multithreaded design makes communication between the main and render threads clear and allows you to perform rendering operations from both, depending on developer preference. Resource initialization between the two threads is handled automatically, which allows operations like asynchronous resource loading. Async operation objects provide functionality similar to the C++ future/promise and C# async/await concepts. Additionally, you are supplied with tools like the task scheduler that let you quickly set up parallel operations yourself.
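
    To illustrate the future/promise idea the async operation objects resemble, here is a tiny example written against standard C++11 rather than Banshee's own API (the engine's actual async interface is not shown in this article):

    #include <future>
    #include <iostream>
    
    // Standard C++ illustration of the future/promise concept; this is not
    // Banshee's async operation API, just the general pattern it resembles.
    int loadExpensiveResource()
    {
        // Pretend this parses a large file on a worker thread.
        return 42;
    }
    
    int main()
    {
        std::future<int> pending = std::async(std::launch::async, loadExpensiveResource);
    
        // The caller keeps doing other work while the load runs in the background...
        std::cout << "Loading in the background..." << std::endl;
    
        // ...and only blocks when the result is actually needed.
        std::cout << "Loaded: " << pending.get() << std::endl;
        return 0;
    }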

    Architecture


    Now that you have an idea of what Banshee is trying to accomplish, I will describe the general architecture in a bit more detail, starting with the top-level design: the four primary layers shown in the image below.


    [attachment=24594:BansheeLayers.png]


    The layers were created for two reasons:
    • To give developers a chance to pick the level of functionality they need. Some people will want just the core and utility layers and will start building their own engine, while others might just be interested in game development and will stick with the editor layer.
    • To decouple code. Lower layers do not know about higher levels and low level code never caters to specialized high level code. This makes the design cleaner and forces a certain direction for dependencies.
    Lower layers were designed to be more general purpose than higher ones. They provide general techniques usable in a variety of situations and attempt to cater to everyone. Higher layers, on the other hand, provide much more focused and specialized techniques. This might mean relying on very specific rendering APIs, platforms or plugins, but it also means using newer, fancier and perhaps not as widely accepted techniques (e.g. some new rendering algorithm).

    BansheeUtility

    This is the lowest layer of the engine. It is a collection of decoupled, separate systems that are likely to be used throughout all of the higher layers - essentially a collection of tools that are in no way tied into a larger whole. Most of the functionality isn't even game-engine specific, such as file-system access, file path parsing or events. Other things that belong here are the math library, object serialization and the RTTI system, and threading primitives and managers, among various others.

    BansheeCore

    This is the second lowest layer and the first that starts to take the shape of an actual engine. It provides game-specific modules tied into a coherent whole, but it tries to be very generic and offer something that every engine might need instead of focusing on specialized techniques. Render API wrappers exist here, but the actual render APIs are implemented as plugins, so you are not constrained to a specific subset. The scene manager, renderer, resource management, importers and others all belong here, and all are implemented in an abstract way so that they can be implemented or extended by higher layers or plugins.

    BansheeEngine

    This is the second highest layer and the first with a more focused goal. It is built upon BansheeCore but relies on a specific sub-set of plugins and implements systems like the scene manager and renderer in a specific way. For example, the DirectX 11 and OpenGL render systems are referenced by name, as is the Mono scripting system, among others. A renderer that follows a specific set of techniques and algorithms, determining how all objects are rendered, also belongs here.

    BansheeEditor

    Finally, the top layer is the editor. Although it is named as such, it also heavily relies on the scripting system and the C# interface, as those are primarily used through the editor. It is an extensible, multi-purpose editor that provides functionality for level editing, compiling script code, editing script objects, playing in the editor, importing assets and publishing the game - but also much more, as it can be easily extended with your own custom sub-editors. Want a shader node editor? You can build one yourself without touching the complex bits of the engine: there is an entire scripting interface built just for editor extensions.

    The figure below shows a more detailed structure of each layer as it is currently designed (expect it to change as new features are added). Also note the plugin slots that allow you to extend the engine without actually changing the core.


    [attachment=24595:BansheeComplexLayers.png]


    In future chapters I will explain the major systems in each of the layers. These explanations should give you insight into how to use them, but also reveal why and how they were implemented. First, however, I'd like to focus on a quick guide to getting started with your first Banshee project, in order to give the reader a bit more perspective (and some code!).

    Example application


    This section is intended to show you how to create a minimal application in Banshee. The example will primarily use the BansheeEngine layer, which is the high-level C++ interface. Otherwise-inclined users may use the lower-level C++ interface and access the rendering API directly, or use the higher-level C# scripting interface. We will delve into those interfaces in more detail in later chapters.

    One important thing to mention is that I will not give instructions on how to set up the Banshee environment and will also omit some less relevant code. This chapter is intended just to give some perspective but the interested reader can head to the project website and check out the example project or the provided tutorial.

    Startup


    Each Banshee program starts with a call to the Application class. It is the primary entry point into Banshee, handles startup, shutdown and the primary game loop. A minimal application that just creates an empty window looks something like this:

    // Describe the primary render window.
    RENDER_WINDOW_DESC renderWindowDesc;
    renderWindowDesc.videoMode = VideoMode(1280, 720);
    renderWindowDesc.title = "My App";
    renderWindowDesc.fullscreen = false;
    
    // Start the engine with the DirectX 11 render system, run the main loop
    // until the user closes the application, then clean everything up.
    Application::startUp(renderWindowDesc, RenderSystemPlugin::DX11);
    Application::instance().runMainLoop();
    Application::shutDown();
    

    When starting up the application you are required to provide a structure describing the primary render window and the render system plugin to use. When startup completes, your render window will show up and you can run your game code by calling runMainLoop. In this example we haven't set up any game code, so the loop will just run the internal engine systems. When the user is done with the application, the main loop returns and shutdown is performed: all objects are cleaned up and plugins are unloaded.

    Resources


    Since our main loop isn’t currently doing much we will want to add some game code to perform certain actions. However in order for any of those actions to be visible we need some resources to display on the screen. We will need at least a 3D model and a texture. To get resources into Banshee you can either load a preprocessed resource using the Resources class, or you may import a resource from a third-party format using the Importer class. We'll import a 3D model using an FBX file format, and a texture using the PSD file format.

    // Import a mesh and a texture from third-party formats.
    // (Note that backslashes in the Windows paths must be escaped.)
    HMesh dragonModel = Importer::instance().import<Mesh>("C:\\Dragon.fbx");
    HTexture dragonTexture = Importer::instance().import<Texture>("C:\\Dragon.psd");
    

    Game code


    Now that we have some resources we can add some game code to display them on the screen. Every bit of game code in Banshee is created in the form of Components. Components are attached to SceneObjects, which can be positioned and oriented around the scene. You will often create your own components, but for this example we only need two built-in component types: Camera and Renderable. Camera lets us set up a viewport into the scene and outputs what it sees to a target surface (our window in this example), and Renderable lets us render a 3D model with a specific material.

    // Create a scene object with a Camera component that outputs to the
    // primary render window ("window") created during startup.
    HSceneObject sceneCameraSO = SceneObject::create("SceneCamera");
    HCamera sceneCamera = sceneCameraSO->addComponent<Camera>(window);
    sceneCameraSO->setPosition(Vector3(40.0f, 30.0f, 230.0f));
    sceneCameraSO->lookAt(Vector3(0, 0, 0));
    
    // Create a scene object with a Renderable component that displays the
    // imported dragon mesh ("dragonMaterial" is created as described below).
    HSceneObject dragonSO = SceneObject::create("Dragon");
    HRenderable renderable = dragonSO->addComponent<Renderable>();
    renderable->setMesh(dragonModel);
    renderable->setMaterial(dragonMaterial);
    

    I have skipped material creation, as it will be covered in a later chapter; it is enough to say that it involves importing a couple of GPU programs (i.e. shaders), using them to create a material and then attaching the previously loaded texture, among a few other minor things.
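
    Purely as a rough sketch of those steps (the identifiers below are placeholders, not necessarily Banshee's real API; the material chapter will use the proper interfaces), it might look something like this:

    // Hypothetical sketch only; these names are placeholders, not confirmed Banshee API.
    HGpuProgram vsProgram = Importer::instance().import<GpuProgram>("C:\\DragonVS.hlsl");
    HGpuProgram psProgram = Importer::instance().import<GpuProgram>("C:\\DragonPS.hlsl");
    
    HMaterial dragonMaterial = createDragonMaterial(vsProgram, psProgram); // placeholder helper
    dragonMaterial->setTexture("diffuse", dragonTexture);                  // parameter name assumed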

    You can check out the source code and the ExampleProject for a more comprehensive introduction, as I didn't want to turn this article into a tutorial when there already is one.

    Conclusion


    This concludes the introduction. I hope you enjoyed this article, and I'll see you next time, when I'll be talking about implementing a run-time type information system in C++, as well as a flexible serialization system that handles everything from simple config files to entire resources and even entire level hierarchies.

  2. Setting Realistic Deadlines, Family, and Soup

    Jan. 23, 2015. This is my goal. My deadline. And I'm going to miss it.

    Let me explain. As I write this article, I am also making soup. Trust me, it all comes together at the end.

    Part I: Software Estimation 101


    I've been working on Archmage Rises full time for three months and part time about 5 months before that. In round numbers, I’m about 1,000 hours in.

    You see, I have been working without a specific deadline because of a little thing I know from business software called the “Cone of Uncertainty”:


    [Image: the Cone of Uncertainty]


    In business software, the customer shares an idea (or “need”)—and 10 out of 10 times, the next sentence is: "When will you be done, and how much will it cost?"

    Looking at the cone diagram, when is this estimate most accurate? When you are done! You know exactly how long it takes and how much it will actually cost when you finish the project. When do they want the estimate? At the beginning—when accuracy is nil! For this reason, I didn't set a deadline; anything I said would be wrong and misleading to all involved.

    Even when my wife repeatedly asked me.

    Even when the head of Alienware called me and asked, “When will it ship?”

    I focused on moving forward in the cone so I could be in a position to estimate a deadline with reasonable accuracy. In fact, I have built two prototypes which prove the concept and test certain mechanics. Then I moved into the core features of the game.

    Making a game is like building a sports car from a kit.
    … but with no instructions
    … and many parts you have to build yourself (!)

    I have spent the past months making critical pieces. As each is complete, I put it aside for final assembly at a later time. To any outside observer, it looks nothing like a car—just a bunch of random parts lying on the floor. Heck! To ME, it looks like a bunch of random parts on the floor. How will this ever be a road worthy car?

    Oh, hold on. Gotta check the soup.
    Okay, we're good.

    This week I finished a critical feature of my story editor/reader, and suddenly the heavens parted and I could see how all the pieces fit together! Now I'm in a place where I can estimate a deadline.

    But before I get into that, I need to clarify what deadline I'm talking about.

    Vertical Slice, M.V.P. & Scrum


    Making my first game (Catch the Monkey), I learned a lot of things developers should never do. In my research after that project, I learned how game-making is unique and different from business software (business software has to work correctly; games have to work correctly and be fun) and requires a different approach.

    Getting to basics, a vertical slice is a short, complete experience of the game. Imagine you are making Super Mario Bros. You build the very first level (World 1-1) with complete mechanics, power ups, art, music, sound effects, and juice (polish). If this isn't fun, if the mechanics don't work, then you are wasting your time building the rest of the game.

    The book Lean Startup has also greatly influenced my thinking on game development. In it, the author argues for failing quickly, pivoting, and then moving in a better direction. The mechanism for failing quickly is to build the Minimum Viable Product (MVP). Think of web services like HootSuite, Salesforce, or Amazon. Rather than build the "whole experience," you build the absolute bare minimum that can function so that you can test it out on real customers and see if there is any traction to this business idea. I see the Vertical Slice and MVP as interchangeable labels for the same idea.



    A fantastic summary of Scrum.


    Finally, Scrum is the iterative, incremental software development methodology I think works best for games (I'm quite familiar with the many alternatives). Work is organized into User Stories and (in the pure form) estimated in Story Points. By abstracting the estimates, the cone of uncertainty is built in. I like that. Scrum also says that when you build something, you build it completely and always leave the game able to run. Meaning, you don't mostly get a feature working and then move on to another task; you make it 100% rock solid: built, tested, bug fixed. You do this because it eliminates Technical Debt.


    debt.jpg


    What's technical debt? Well like real debt, it is something you have to pay later. So if the story engine has several bugs in it but I leave them to fix "later," that is technical debt I have to pay at some point. People who get things to 90% and then move on to the next feature create tons of technical debt in the project. This seriously undermines the ability to complete the project because the amount of technical debt is completely unknown and likely to hamper forward progress. I have experienced this personally on my projects. I have heard this is a key contributor to "crunch" in the game industry.

    Hold on: Gotta go put onions and peppers in the soup now.

    A second and very important reason to never accrue technical debt is it completely undermines your ability to estimate.

    Let's say you are making the Super Mario Bros. World 1-1 vertical slice. Putting aside knowing if your game is fun or not, the real value of completing the slice is the ability to effectively estimate the total effort and cost of the project (with reasonable accuracy). So let's say World 1-1 took 100 hours to complete across the programmer, designer, and artist with a cost of $1,000. Well, if the game design called for 30 levels, you have a fact-based approach to accurate estimating: It will take 3,000 hours and $30,000. But the reverse is also helpful. Let's say you only have $20,000. Well right off the bat you know you can only make 20 levels. See how handy this is?!?

    Still, you can throw it all out the window when you allow technical debt.

    Let me illustrate:
    Let's say the artist didn't do complete work. Some corners were cut and treated as "just a prototype," so only 80% effort was expended. Let's say the programmer left some bugs and hardcoded a section just to work for the slice. Call it a 75% effort of the real total. Well, now your estimates will be way off. The more iterations (levels) and scale (employees) you multiply by your vertical slice cost, the worse off you are. This is a sure-fire way to doom your project.

    So when will you be done?


    So bringing this back to Archmage Rises, I now have built enough of the core features to be able to estimate the rest of the work to complete the MVP vertical slice. It is crucial that I get the slice right and know my effort/costs so that I can see what it will take to finish the whole game.

    I set up the seven remaining sprints in my handy-dandy Scrum tool, Axosoft, and this is what I got:


    never.jpg


    That wasn't very encouraging. :-) One of the reasons is that as I have ideas, or interact with fans on Facebook or the forums, I write user stories in Axosoft so I don't forget them. This means the number of user stories has grown since I began tracking the project in August. It's been growing faster than I have been completing them. So the software is telling the truth: based on my past performance, I will never finish this project.

    I went in and moved all the "ideas" out of the actual scheduled sprints with concrete work tasks, and this is what I got:


    better.jpg


    January 23, 2015

    This is when the vertical slice is estimated to be complete. I am just about to tell you why it's still wrong, but first I have to add cream and milk to the soup. Ok! Now that it's happily simmering away, I can get to the second part.

    Part II: Scheduling the Indie Life


    I am 38 and have been married to a beautiful woman for 15 years. Over these years, my wife has heard ad nauseam that I want to make video games. When she married me, I was making pretty good coin leading software projects for large e-commerce companies in Toronto. I then went off on my own. We had some very lean years as I built up my mobile software business.

    We can't naturally have kids, so we made a “Frankenbaby” in a lab. My wife gave birth to our daughter Claire. That was two years ago.


    Test_Tube_Baby2.jpg


    My wife is a professional and also works. We make roughly the same income. So around February of this year, I went to her and said, "This Archmage thing might have legs, and I'd like to quit my job and work on it full time." My plan was to live off her - a 50% drop in household income. Oh, and on top of that, I'd also like to spend thousands upon thousands of dollars on art, music, tools - and any games that catch my fancy on Steam.

    It was a sweetheart offer, don't you think?

    I don't know what it is like to be the recipient of an amazing opportunity like this, but I think her choking and gasping for air kind of said it all. :-)

    After thought and prayer, she said, "I want you to pursue your dream. I want you to build Archmage Rises."

    Now I write this because I have three game devs in my immediate circle - each of whom is currently working from home and living off their spouse's income. Developers have written me asking how they can talk with their spouse about this kind of major life transition.

    Lesson 1: Get “Buy In,” not Agreement


    A friend’s wife doesn't really want him to make video games. She loves him, so when they had that air-gasping indie game sit down conversation she said, "Okay"—but she's really not on board.

    How do you think it will go when he needs some money for the game?
    Or when he's working hard on it and she feels neglected?
    Or when he originally said the game will take X months but now says it will take X * 2 months to complete?


    pp42_marriage_conflict.jpg


    Yep! Fights.

    See, by not "fighting it out" initially, by one side just caving, what really happened was that one of them said, "I'd rather fight about this later than now." Well, later is going to come. Over and over again. Until the core issue is resolved.

    My friend and I believe marriage is a committed partnership for life. We're in it through thick and thin, no matter how stupid or crazy it gets. It's not roommates sharing an Internet bill; this is life together.

    So they both have to be on the same page, because the marriage is more important than any game. Things break down and go horribly wrong when the game/dream is put before the marriage. This means if she is really against it deep down, he has to be willing to walk away from the game. And he is, for her.

    One thing I got right off the bat is my wife is 100% partnered with me in Archmage Rises. Whether it succeeds or fails, there are no fights or "I told you so"s along the way.

    Lesson 2: Do Your Part


    Helping-Each-Other.jpg


    So why am I making soup? Because my wife is out there working, and I’m at home. Understandably so, I have taken on more of the domestic duties. That's how I show her I love her and appreciate her support. I didn't "sell" domestic duties in order to get her buy-in; it is a natural response. So with me working downstairs, I can make soup for dinner tonight, load and unload the dishwasher, watch Claire, and generally reduce the household burden on her as she takes on the bread-winning role.

    If I shirk household duties and focus solely on the game (and the game flops!), boy oh boy is there hell to pay.

    Gotta check that soup. Yep, we're good.

    Lesson 3: Do What You Say


    Claire is two. She loves to play ball with me. It's a weird game with a red nerf soccer ball where the rules keep changing from catching, to kicking, to avoiding the ball. It's basically Calvin ball. :-)


    redball.jpg


    She will come running up to my desk, pull my hand off the mouse, and say, "Play ball?!" Sometimes I'm right in the middle of tracking down a bug, but at other times I'm not that intensely involved in the task. The solution is to either play ball right now (I've timed it with a stop watch; it only holds her interest for about seven minutes), or promise her to play it later. Either way, I'm playing ball with Claire.

    And this is important, because to be a crappy dad and have a great game just doesn't look like success to me. To be a great dad with a crappy game? Ya, I'm more than pleased with that.

    Now Claire being two, she doesn't have a real grasp of time. She wants to go for a walk "outside" at midnight, and she wants to see the moon in the middle of the afternoon. So when I promise to play ball with her "later," there is close to a 0% chance of her remembering or even knowing when later is. But who is responsible in this scenario for remembering my promise? Me. So when I am able, say in between bugs or end of the work day, I'll go find her and we'll play ball. She may be too young to notice I'm keeping my promises, but when she does begin to notice I won't have to change my behavior. She'll know dad is trustworthy.

    Lesson 4: Keep the Family in the Loop like a Board of Directors


    If my family truly is partnered with me in making this game, then I have to understand what it is like from their perspective:

    1. They can't see it
    2. They can't play it
    3. They can't help with it
    4. They don't know how games are even made
    5. They have no idea if what I am making is good, bad, or both


    Board-table.jpg


    They are totally in the dark. Now what is a common reaction to the unknown? Fear. We generally fear what we do not understand. So I need to understand that my wife secretly fears what I'm working on won't be successful, that I'm wasting my time. She has no way to judge this unless I tell her.

    So I keep her up to date with the ebb and flow of what is going on. Good or bad. And because I tell her the bad, she can trust me when I tell her the good.

    A major turning point was the recent partnership with Alienware. My wife can't evaluate my game design, but if a huge company like Alienware thinks what I'm doing is good, that third party perspective goes a long way with her. She has moved from cautious to confident.

    The Alienware thing was a miracle out of the blue, but that doesn't mean you can't get a third party perspective on your game (a journalist?) and share it with your significant other.

    Lesson 5: Life happens. Put It in the Schedule.


    I've been scheduling software developers for 20 years. I no longer program in HTML3, but I still make schedules—even if it is just for me.

    Customers (or publishers) want their projects on the date you set. Well, actually, they want it sooner—but let's assume you've won that battle and set a reasonable date.

    If there is one thing I have learned in scheduling large team projects, it is that unknown life things happen. The only way to handle that is to put something in the schedule for it. At my mobile company, we use a rule of 5.5-hour days. That means a 40-hour-a-week employee does 27.5 hours a week of active project time; the rest is lunch, doctor appointments, meetings, phone calls with the wife, renewing their mortgage, etc. Over a 7-8 month project, there is enough buffer built in to handle the unexpected sick kid, sudden funeral, etc.
    Also, plug in statutory holidays, one sick day a month, and any vacation time. You'll never regret including it; you'll always regret not including it.

    That's great for work, but it doesn't work for the indie at home.


    1176429_orig.jpg


    To really dig into the reasons why would be another article, so I'll just jump to the conclusion:

    1. Some days, you get stuck making soup. :-)
    2. Being at home and dealing with kids ranges from playing ball (short) to trips to the emergency room (long)
    3. Being at home makes you the "go to" family member for whatever crops up. "Oh, we need someone to be home for the furnace guy to do maintenance." Guess who writes blogs and just lost an hour of his day watching the furnace guy?
    4. There are many, many hats to wear when you’re an indie. From art direction for contract artists to keeping everyone organized, there is a constant stream of stuff outside your core discipline you'll just have to do to keep the game moving forward.
    5. Social media marketing may be free, but writing articles and responding to forum and Facebook posts takes a lot of time. More importantly, it takes a lot of energy.

    After three months, I have not been able to come up with a good rule of thumb for how much programming work I can get done in a week. I've been tracking it quite precisely for the last three weeks, and it has varied widely. My goal is to hit six hours of programming in an 8-12 hour day.

    Putting This All Together


    jigsaw.jpg


    Oh, man! This butternut squash soup is AMAZING! I'm not much of a soup guy, and this is only my second attempt at it—but this is hands-down the best soup I've ever had at home or in a restaurant! See the endnotes for the recipe—because you aren't truly indie unless you are making a game while making soup!

    So in order to try to hit my January 23rd deadline, I need to get more programming done. One way to achieve this is to stop writing weekly dev blogs and switch to a monthly format. It's ironic that writing fewer blogs makes it look like less progress is being made, when it's the exact opposite! I hope to gain back 10 hours a week by moving to a monthly format.

    I'll still keep updating the Facebook page regularly. Because, well, it's addictive. :-)

    So along the lines of Life Happens, it is about to happen to me. Again.

    We were so impressed with Child 1.0 we decided to make another. Baby Avery is scheduled to come by C-section one week from today.

    How does this affect my January 23rd deadline? Well, a lot.
    • Will baby be healthy?
    • Will mom have complications?
    • How will a newborn disrupt the disposition or sleeping schedule of a two-year-old?
    These are all things I just don't know. I'm at the front end of the cone of uncertainty again. :-)

    SDG

    Links:


    Agile Game Development with Scrum – great book on hows and whys of Scrum for game dev. Only about the first half is applicable to small indies.

    Axosoft SCRUM tool – Free for single developers; contact support to get a free account (it's not advertised)

    You can follow the game I'm working on, Archmage Rises, by joining the newsletter and Facebook page.

    You can tweet me @LordYabo

    Recipes:


    Indie Game Developer's Butternut Squash Soup
    (about 50 minutes; approximately 130 calories per 250ml/cup serving)


    soup.jpg
    Dammit Jim I'm a programmer not a food photographer!


    I created this recipe as part of a challenge to my wife that I could make a better squash soup than the one she ordered in the restaurant. She agrees, this is better! It is my mashup of three recipes I found on the internet.
    • 2 butternut squash (about 3.5 pounds), seeded and quartered
    • 4 cups chicken or vegetable broth
    • 1 tablespoon minced fresh ginger (about 50g)
    • 1/4 teaspoon nutmeg
    • 1 yellow onion diced
    • Half a red pepper diced (or whole if you like more kick to your soup)
    • 1 tablespoon kosher salt
    • 1 teaspoon black pepper
    • 1/3 cup honey
    • 1 cup whipping cream
    • 1 cup milk
    Peel squash, seed, and cut into small cubes. Put in a large pot with broth on a low boil for about 30 minutes.
    Add red pepper, onion, honey, ginger, nutmeg, salt, pepper. Place over medium heat and bring to a simmer for approximately 6 minutes. Using a stick blender, puree the mixture until smooth. Stir in whipping cream and milk. Simmer 5 more minutes.

    Serve with a dollop of sour cream in the middle and sprinkling of sour dough croutons.

  3. A Room With A View

    A Viewport allows for a much larger and richer 2-D universe in your game. It allows you to zoom in, pan across, and scale the objects in your world based on what the user wants to see (or what you want them to see).

    The Viewport is a software component (written in C++ this time) that participates in a larger software architecture. UML class and sequence diagrams (below) show how these interactions are carried out.

    The algorithms used to create the viewport are not complex. The ubiquitous line equation, y = mx + b, is all that is needed to create the effect of the Viewport. The aspect ratio of the screen is also factored in so that "squares can stay squares" when rendered.

    Beyond the basic use of the Viewport, allowing entities in your game to map their position and scale onto the display, it can also be a larger participant in the story your game tells and the mechanics of making your story work efficiently. Theatrical camera control, facilitating the level of detail, and culling graphics operations are all real-world uses of the Viewport.

    NOTE: Even though I use Box2D for my physics engine, the concepts in this article are independent of that - or even of using a physics engine at all.

    The Video


    The video below shows this in action.




    The Concept


    The world is much bigger than what you can see through your eyes. You hear a sound. Where did it come from? Over "there". But you can't see that right now. You have to move "there", look around, see what you find. Is it an enemy? A friend? A portal to the bonus round? By only showing your player a portion of the bigger world, they are goaded into exploring the parts they cannot see. This way lies a path to immersion and entertainment.

    A Viewport is a slice of the bigger world. The diagram below shows the basic concept of how this works.


    [attachment=24500:Viewport-Concept.png]


    The Game World (left side) is defined to be square and in meters, the units used in Box2D. The world does not have to be square, but it means one less parameter to carry around and worry about, so it is convenient.

    The Viewport itself is defined as a scale factor of the respective width/height of the Game World. The width of the Viewport is scaled by the aspect ratio of the screen, which is also convenient. If the Viewport were "square" like the world, it would have to lie either completely inside the non-square Device Screen or with part of it completely outside the Device Screen, which would make it unusable for the handy "IsInView" operations (see Other Uses at the end).

    The "Entity" is deliberately shown as partially inside the Viewport. When displayed on the Device Screen, it is also only shown as partially inside the view. Its aspect on the screen is not skewed by the size of the screen relative to the world size. Squares should stay squares, etc.

    The "nuts and bolts" of the Viewport are linear equations mapping the two corner points (top left, bottom right) in the coordinate system of the world onto the screen coordinate system. From a "usage" standpoint, it maps the positions in the simulated world (meters) to a position on the screen (pixels). There will also be times when it is convenient to go the other way and map from pixels to meters. The Viewport class handles the math for the linear equations, computing them when needed, and also provides interfaces for the pixel-to-meter or meter-to-pixel transformations.

    Note that the size of the Game World used here is also deliberately ambiguous. While the size of individual Box2D objects should be between 0.1m and 10m, the world itself can be much larger as needed, within realistic use of the float32 precision used in Box2D. That being said, the Viewport size is based on a scale factor of the Game World size, but it is conceivable (and legal) to move the Viewport outside of the "established" Game World size. What happens when you view things "off the grid" is entirely up to your game design.

    Classes and Sequences


    The Viewport does not live by itself in the ecosystem of the game architecture. It is a component that participates in the architecture. The diagram below shows the major components used in the Missile Demo application.


    [attachment=24501:Missile-Demo-Main-Components.png]


    The main details of each class have been omitted; we're more interested in the overall component structure than internal APIs at this point.

    Main Scene


    The MainScene (top left) is the container for all the visual elements (CCLayer-derived objects) and the owner of an abstract interface, the MovingEntityIFace. Only one instance of the entity exists at a time; the MainScene creates a new one when signaled by the DebugMenuLayer (user input) to change the Entity. Commands to the Entity are also executed via the MainScene. The MainScene also acts as the holder of the Box2D world reference.

    Having the MainScene tie everything together is perfectly acceptable for a small single-screen application like this demonstration. In a larger multi-scene system, some sort of UI Manager approach would be used.

    Viewport and Notifier


    The Viewport (lower right) is a Singleton. This is a design choice. The motivations behind it are:
    • There is only one screen the user is looking at.
    • Lots of different parts of the graphics system may use the Viewport.
    • It is much more convenient to do it as a "global" singleton than to pass the reference around to all potential consumers.
    • Deriving it from the SingletonDynamic template ensures that it follows the Init/Reset/Shutdown model used for all the Singleton components. Its life cycle is entirely predictable: it always exists.

    Having certain parts of the design as a singleton may make it hard to envision how you would handle other types of situations, like a split screen or a mini-map. If you needed to deal with such a situation, at least one strategy would be to factor the base functionality of the "Viewport" into a class and then construct a singleton to handle the main viewport, another instance for the mini-map, etc. Essentially, if you only have "one" of something, the singleton pattern helps you ensure ease of access to the provider of the feature and also guarantees that the life cycle of that feature matches the life cycle of your design.

    This is (in my mind) absolutely NOT the same thing as a variable that can be accessed and modified without an API from anywhere in your system (i.e. a global variable). When you wrap it and control the life cycle, you get predictability and a place to put a convenient debug point. When you don't, you have fewer guarantees of initial state and you have to put debug points at every point that touches the variable to figure out how it evolves over time. That inversion (one debug point vs. lots of debug points) can crush your productivity.

    If you felt that the singleton approach was not for you, or it ran against company or team policy, you could create an instance of that "viewport" class and pass it to all the interested consumers as a reference, as sketched below. You will still need a place for that instance to live and you will need to manage its life cycle.
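
    A minimal sketch of that factoring, with assumed names (the demo itself only ships the singleton Viewport; SetScale and the exact method set here are assumptions):

    // Sketch with assumed names: the mapping logic lives in a plain class,
    // and the screen-wide viewport is a singleton built on top of it.
    // A mini-map (or a non-singleton design) would own its own ViewportBase.
    class ViewportBase
    {
    public:
       void SetCenter(const Vec2& centerMeters);
       void SetScale(float32 scale);
       CCPoint Convert(const Vec2& positionMeters);
       Vec2 Convert(const CCPoint& positionPixels);
    };
    
    class Viewport : public ViewportBase, public SingletonDynamic<Viewport>
    {
       // Main-screen viewport; consumers reach it via Viewport::Instance().
    };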

    You have to weigh the design goals against the design and make a decision about what constitutes the best tool for the job, often using conflicting goals, design requirements, and the strong opinions of your peers. Rising to the real challenges this represents is a practical reality of "the job". And possibly why indie developers like to work independently.




    The Notifier is also pictured to highlight its importance; it is an active participant when the Viewport changes. The diagram below shows exactly this scenario.


    [attachment=24499:Pinch-Changes-Viewport.png]


    The user places both fingers on the screen and begins to move them together (1.0). This move is received by the framework and interpreted by the TapDragPinchInput as a Pinch gesture, which it signals to the MainScene (1.1). The MainScene calls SetCenter on the Viewport (1.2), which immediately leads to the Viewport letting all interested parties know the view is changing via the Notifier (1.3). The Notifier immediately signals the GridLayer, which has registered for the event (1.4). This leads to the GridLayer recalculating the position of its grid lines (1.5). Internally, the GridLayer maintains the grid lines as positions in meters. It will use the Viewport to convert these to positions in pixels and cache them off. The grid is not actually redrawn until the next draw(...) call is executed on it by the framework.

    The first set of transactions were executed synchronously as the user moved their fingers; each time a new touch event came in, the change was made. The next sequence (starting with 1.6) is initiated when the framework calls the Update(...) method on the main scene. This causes an update of the Box2D physics model (1.7). At some point later, the framework calls the draw(...) method on the Box2dDebugLayer (1.8). This uses the Viewport to calculate the display positions of all the Box2D bodies (and other elements) it will display (1.9).

    These two sequences demonstrate the two main types of Viewport update. The first is triggered by a direct change of the view, leading to events that trigger immediate updates. The second is driven by the framework on every major update of the model (as in MVC).

    Algorithms


    The general method for mapping the world space limits (Wxmin, Wxmax) onto the screen coordinates (0,Sxmax) is done by a linear mapping with a y = mx + b formulation. Given the two known points for the transformation:

    Wxmin (meters) maps onto (pixel) 0 and
    Wxmax (meters) maps onto (pixel) Sxmax
    Solving y0 = m*x0 + b and y1 = m*x1 + b for the two points yields:

    m = Sxmax/(Wxmax - Wxmin) and
    b = -Wxmin*Sxmax/(Wxmax - Wxmin) (= -m * Wxmin)

    We replace (Wxmax - Wxmin) with scale*(Wxmax-Wxmin) for the x dimension and scale*(Wymax-Wymin)/aspectRatio in the y dimension.

    The value (Wxmax - Wxmin) = scale*worldSizeMeters (xDimension)

    The value Wxmin = viewport center - 1/2 the width of the viewport

    etc.

    In code, this is broken into two operations. Whenever the center or scale changes, the slope/offset values are calculated immediately.

    void Viewport::CalculateViewport()
    {
       // Bottom Left and Top Right of the viewport
       _vSizeMeters.width = _vScale*_worldSizeMeters.width;
       _vSizeMeters.height = _vScale*_worldSizeMeters.height/_aspectRatio;
    
       _vBottomLeftMeters.x = _vCenterMeters.x - _vSizeMeters.width/2;
       _vBottomLeftMeters.y = _vCenterMeters.y - _vSizeMeters.height/2;
       _vTopRightMeters.x = _vCenterMeters.x + _vSizeMeters.width/2;
       _vTopRightMeters.y = _vCenterMeters.y + _vSizeMeters.height/2;
    
       // Scale from Pixels/Meters
       _vScalePixelToMeter.x = _screenSizePixels.width/(_vSizeMeters.width);
       _vScalePixelToMeter.y = _screenSizePixels.height/(_vSizeMeters.height);
    
       // Offset based on the screen center.
       _vOffsetPixels.x = -_vScalePixelToMeter.x * (_vCenterMeters.x - _vScale*_worldSizeMeters.width/2);
       _vOffsetPixels.y = -_vScalePixelToMeter.y * (_vCenterMeters.y - _vScale*_worldSizeMeters.height/2/_aspectRatio);
    
       _ptmRatio = _screenSizePixels.width/_vSizeMeters.width;
    
       Notifier::Instance().Notify(Notifier::NE_VIEWPORT_CHANGED);
    }
    

    Note:  Whenever the viewport changes, we emit a notification to the rest of the system to let interested parties react. This could be broken down into finer detail for changes in scale vs. changes in the center of the viewport.


    When a conversion from world space to viewport space is needed:

    CCPoint Viewport::Convert(const Vec2&amp; position)
    {
       float32 xPixel = position.x * _vScalePixelToMeter.x + _vOffsetPixels.x;
       float32 yPixel = position.y * _vScalePixelToMeter.y + _vOffsetPixels.y;
       return ccp(xPixel,yPixel);
    }
    

    And, occasionally, we need to go the other way.

    /* To convert a pixel to a position (meters), we invert
     * the linear equation to get x = (y-b)/m.
     */
    Vec2 Viewport::Convert(const CCPoint&amp; pixel)
    {
       float32 xMeters = (pixel.x-_vOffsetPixels.x)/_vScalePixelToMeter.x;
       float32 yMeters = (pixel.y-_vOffsetPixels.y)/_vScalePixelToMeter.y;
       return Vec2(xMeters,yMeters);
    }
    

    Position, Rotation, and PTM Ratio


    Box2D creates a physics simulation of objects between the sizes of 0.1m and 10m (according to the manual, if the scaled size is outside of this, bad things can happen...the manual is not lying). Once you have your world up and running, you need to put the representation of its bodies onto the screen. To do this, you need each body's rotation (relative to the x-axis), its position, and a scale factor to convert physical meters to pixels. Let's assume you are doing this with a simple sprite for now.

    The rotation is the easiest. Just ask the b2Body what its rotation is and convert it to degrees with CC_RADIANS_TO_DEGREES(...). Use this for the angle of your sprite.

    The position is obtained by asking the body for its position in meters and calling the Convert(...) method on the Viewport. Let's take a closer look at the code for this.

    /* To convert a position (meters) to a pixel, we use
     * the y = mx + b conversion.
     */
    CCPoint Viewport::Convert(const Vec2&amp; position)
    {
       float32 xPixel = position.x * _vScalePixelToMeter.x + _vOffsetPixels.x;
       float32 yPixel = position.y * _vScalePixelToMeter.y + _vOffsetPixels.y;
       return ccp(xPixel,yPixel);
    }
    

    This is about as simple as it gets in the math arena. A linear equation to map the position from the simulated physical space (meters) to the Viewport's view of the world on the screen (pixels). A key nuance here is that the scale and offset are calculated ONLY when the viewport changes.

    The scale is called the pixel-to-meter ratio, or just PTM Ratio. If you look inside the CalculateViewport method, you will find this rather innocuous piece of code:

       _ptmRatio = _screenSizePixels.width/_vSizeMeters.width;
    

    The PTM Ratio is computed dynamically based on the width of the viewport (_vSizeMeters.width). Note that it could be computed based on the height instead; be sure to define the aspect ratio, etc., appropriately.

    If you search the web for articles on Box2D, whenever they get to the display portion, they almost always have something like this:

    #define PTM_RATIO 32
    

    Which is to say, every physical body is represented by a ratio of 32 pixels (or some other value) for each meter in the simulation. The original iPhone screen was 480 x 320, and Box2D represents objects on the scale of 0.1m to 10m, so a full-sized object would take up the full width of the screen. However, it is a fixed value, which is fine.

    Something very interesting happens, though, when you let this value change. By letting the PTM Ratio change and scaling your objects using it, the viewer is given the illusion of depth. They can move into and out of the scene and feel as if they are moving in the third dimension.

    You can see this in action when you use the pinch operation on the screen in the App. The Box2DDebug uses the Viewport's PTM Ratio to change the size of the displayed polygons. It can be (and has been) used to scale sprites as well, so that you can zoom in and out.
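
    Putting position, rotation, and the PTM Ratio together for a sprite might look roughly like this. This is only a sketch: it assumes Vec2 is Box2D's b2Vec2, that the Viewport exposes the same Instance() accessor as the Notifier, and that body, sprite, and spriteSizeMeters already exist in the calling code.

    // Sketch: drive a CCSprite from a b2Body using the Viewport.
    // Assumptions: Vec2 == b2Vec2, Viewport::Instance() accessor, and the
    // variables body, sprite, and spriteSizeMeters exist in the caller.
    Vec2 posMeters = body->GetPosition();
    
    // Position: meters -> pixels via the linear mapping.
    sprite->setPosition(Viewport::Instance().Convert(posMeters));
    
    // Rotation: Box2D angle (radians, counter-clockwise) -> Cocos2D rotation
    // (degrees, clockwise), hence the negation.
    sprite->setRotation(-CC_RADIANS_TO_DEGREES(body->GetAngle()));
    
    // Scale: use the dynamic PTM Ratio so the sprite tracks the zoom level.
    float32 pixelWidth = spriteSizeMeters * Viewport::Instance().PTMRatio();
    sprite->setScale(pixelWidth / sprite->getContentSize().width);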

    Other Uses


    With a little more work or a few other components, the Viewport concept can be expanded to yield other benefits. All of these uses are complementary. That is to say, they can all be used at the same time without interfering with each other.

    Camera


    The Viewport itself is "Dumb". You tell it change and it changes. It has no concept of time or motion; it only executes at the time of command and notifies (or is polled) as needed. To execute theatrical camera actions, such as panning, zooming, or combinations of panning and zooming, you need a "controller" for the Viewport that has a notion of state. This controller is the camera.

    Consider the following API for a Camera class:

    class Camera
    {
    public:
       // If the camera is performing any operation, return true.
       bool IsBusy();
    
       // Move/Zoom the Camera over time.
       void PanToPosition(const vec2&amp; position, float32 seconds);
       void ZoomToScale(float32 scale, float32 seconds);
    
       // Expand/Contract the displayed area without changing
       // the scale directly.
       void ExpandToSize(float32 size, float32 seconds);
    
       // Stop the current operation immediately.
       void Stop();
    
       // Called every frame to update the Camera state
       // and modify the Viewport.  The dt value may 
       // be actual or fixed in a fixed timestep
       // system.
       void Update(float32 dt);
    };
    

    This interface presents a rudimentary Camera. This class interacts with the Viewport over time when commanded. You can use this to create cut scenes, quickly show items/locations of interest to a player, or other cinematic events.

    A more sophisticated Camera could keep track of a specific entity and move the viewport automatically if the entity started to move too close to the viewable edge.
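
    A sketch of how such a tracking update could look follows. The accessor names (GetCenterMeters, GetHalfSizeMeters, the tracked entity's GetPosition, and the _trackedEntity member) are assumptions for illustration, not the demo's actual API; only SetCenter appears in the article itself.

    // Sketch of an entity-tracking Camera::Update; accessor names are assumed.
    void Camera::Update(float32 dt)
    {
       // (dt would drive smoothing/easing in a fuller implementation.)
       Vec2 target = _trackedEntity->GetPosition();               // assumed accessor
       Vec2 center = Viewport::Instance().GetCenterMeters();      // assumed accessor
       Vec2 halfView = Viewport::Instance().GetHalfSizeMeters();  // assumed accessor
    
       // Let the target roam the middle 80% of the view before panning.
       float32 marginX = 0.8f * halfView.x;
       float32 marginY = 0.8f * halfView.y;
    
       if (target.x > center.x + marginX) center.x = target.x - marginX;
       if (target.x < center.x - marginX) center.x = target.x + marginX;
       if (target.y > center.y + marginY) center.y = target.y - marginY;
       if (target.y < center.y - marginY) center.y = target.y + marginY;
    
       Viewport::Instance().SetCenter(center);
    }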

    Level of Detail


    In a 3-D game, objects that are of little importance to the immediate user, such as objects far off in the distance, don't need to be rendered with high fidelity. If it is only going to be a "dot" to you, do you really need 10k polygons to render it? The same is true in 2-D as well. This is the idea of "Level of Detail".

    The PTMRatio(...) method of the Viewport gives the number of pixels an object will occupy given its size in meters. If you use this to adjust the scale of your displayed graphics, you can create elements that are "sized" properly for the screen relative to the other objects and the zoom level. You can ALSO substitute other graphics when the displayed object would appear to be little more than a blob. This can cut down dramatically on the GPU load and improve the performance of your game.

    For example, in Space Spiders Must Die!, each Spider is not a single sprite, but a group of sprites loaded from a sprite sheet. This sheet must be loaded into the GPU, the graphics drawn, and then another sprite sheet loaded in for other objects. When the camera is zoomed all the way out, we could get a lot more zip out of the system if we didn't have to swap the sprite sheet at all and just drew a single sprite for each spider. A much smaller set of "twinkling" sprites could easily replace the full-size spiders.
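    A hedged sketch of that decision (the threshold, sizes, and sprite names are invented for illustration):

    // Pick a cheap stand-in sprite when the object would be tiny on screen.
    function chooseSpiderSprite(ptmRatio, spiderSizeMeters) {
        var pixels = spiderSizeMeters * ptmRatio;  // on-screen size of the spider
        if (pixels < 32)
            return "spider_twinkle";      // single sprite, no sheet swap needed
        return "spider_full_detail";      // full multi-sprite skeletal spider
    }

    console.log(chooseSpiderSprite(8, 2));   // zoomed out: "spider_twinkle"
    console.log(chooseSpiderSprite(32, 2));  // zoomed in:  "spider_full_detail"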

    Culling Graphics Operations


    If an object is not in view, why draw it at all? Well... you might still draw it, if the cost of keeping it from being drawn exceeds the cost of drawing it. In Cocos2d-x, it can be hard to tell whether you really gain much by "flagging" off-screen elements and controlling their visibility (the GPU would probably cull them anyway).

    However, there is a much less ambiguous situation: skeletal animations. Rather than use a lot of animated sprites (and sprite sheets), we tend to use Spine to create skeletally animated sprites. These require a lot of calculations which are completely wasted if you can't see the animation because it is off camera. To save CPU cycles, which are even more limited these days than GPU cycles for the games we make, we can let the AI for the animation keep running but only update the "presentation" when needed.

    The Viewport provides a method called IsInView(...) just for this purpose. Using it, you can flag entities as "in view" or "not in view". Internally, the representation used for the entity can make the decision to update or not based on this.
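    A minimal sketch of how that flag might be used (the entity structure and update methods are hypothetical; only IsInView(...) is named by the article):

    function updateEntities(viewport, entities, dt) {
        for (var i = 0; i < entities.length; i++) {
            var entity = entities[i];
            entity.inView = viewport.IsInView(entity.position);
            entity.updateLogic(dt);             // AI / physics keep running regardless
            if (entity.inView)
                entity.updatePresentation(dt);  // skeletal animation only when visible
        }
    }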

    Conclusion


    A Viewport has uses that allow you to create a richer world for the player to "live" in, both by providing "depth" via zooming and by letting you keep content outside the visible area. It also provides opportunities to improve the graphics processing efficiency of your game.

    Get the source code for this post, hosted on GitHub, by clicking here.

    Article Update Log

    21 Nov 2014: Added update about singleton usage.
    6 Nov 2014: Initial release

  4. What's In Your Toolbox?

    Big things are made of little things. Making anything at all takes tools. We all know it is not the chisel that creates the sculpture, but the hand that guides it. Still, a sharp chisel breaks rock better than a bare hand.

    In this article, I'll enumerate the software tools that I use to put together various parts of my software. I learned about these tools by reading sites like this one, so feel free to contribute your own. I learned how to use them by setting a small goal for myself and figuring out whether or not the tool could help me achieve it. Some made the cut. Some did not. Some may be good for you. Others may not be.

    Software Tools


    # | Name | Used For | Cost | Link | Notes
    1 | Cocos2d-x | C++ Graphical Framework | Free | www.cocos2d-x.org | Has lots of stuff out of the box and a relatively light learning curve. We haven't used it cross-platform (yet) but many have before us, so no worries.
    2 | Box2D | 2-D Physics | Free | www.box2d.org | No longer the default for Cocos2d-x :( but still present in the framework. I still prefer it over Chipmunk. Now you know at least two to try...
    3 | Gimp | Bitmap Graphics Editor | Free | www.gimp.org | Above our heads but has uses for slicing, dicing, and mutilating images. Great for doing backgrounds.
    4 | Inkscape | Vector Graphics Editor | Free | www.inkscape.org | Our favorite tool for creating vector graphics. We still suck at it, but at least the tool doesn't fight us.
    5 | Paper | Graphics Editor (iPad) | ~$10 | App Store | An incredible sketching tool. We use it to generate graphics, spitball ideas for presentations, and create one-offs for posts.
    6 | Spine | Skeletal Animation | ~$100 | www.esotericsoftware.com | I *wish* I had enough imagination to get more out of this incredible tool.
    7 | Physics Editor | See Notes | $20 | www.codeandweb.com | Turns images into collision data that Box2D can use. Has some annoyances but very solid on the whole.
    8 | Texture Packer | See Notes | $40 | www.codeandweb.com | Puts images together into a single file so that you can batch them as sprites.
    9 | Python | Scripting Language | Free | www.python.org | At some point you will need a scripting language to automate something in your build chain. We use Python. You can use whatever you like.
    10 | Enterprise Architect | UML Diagrams | ~$130-$200 | www.sparxsystems.com | You probably won't need this, but we use it to create more sophisticated diagrams when needed. We're not hardcore on UML, but we are serious about ideas, and a picture is worth a thousand words.
    11 | Reflector | See Notes | ~$15 | Mac App Store | Lets you show your iDevice screen on your Mac, which is handy for screen captures without the (very slow) simulator.
    12 | XCode | IDE | Free | Mac App Store | Cocos2d-x works in multiple IDEs. We are a Mac/Windows shop. Game stuff is on iPads, so we use XCode. Use what works best for you.
    13 | Glyph Designer | See Notes | $40 | www.71squared.com | Creates bitmapped fonts with data. Seamless integration with Cocos2d-x. Handy when you have a lot of changing text to render.
    14 | Particle Designer | See Notes | $60 | www.71squared.com | Helps you design the parameters for particle emitter effects. Not sure if we need it for our stuff, but we have used these effects before and may again. Be sure to block out two hours of time... the temptation to tweak is incredible.
    15 | Sound Bible | See Notes | Free | www.soundbible.com | Great place to find sound clips. Usually the license is just attribution, which is a great karmic bond.
    16 | Tiled QT | See Notes | Free | www.mapeditor.org | A 2-D map editor. Cocos2d-x has import mechanisms for it. I haven't needed it yet, but it can be used for tile/orthogonal map games. May get some use yet.

    Conclusion


    A good developer (or shop) uses the tools of others as needed, and develops their own tools for the rest. The tools listed here are specifically software that is available "off the shelf". I did not list a logging framework (because I use my own) or a unit test framework (more complex discussion here) or other "tools" that I have picked up over the years and use to optimize my work flow.

    I once played with Blender, the fabulous open-source 3-D rendering tool. It has about a million "knobs" on it. Using it, I realized I was easily overwhelmed by it, but I also realized that my tools could easily overwhelm somebody else if they were unfamiliar with them and did not take the time to figure out how to get the most out of them.

    The point of all this is that every solid developer I know figures out the tools to use in their kit and tries to get the most out of them. Not all hammers fit in all hands, though.

    Article Update Log


    5 Nov 2014: Initial Release

  5. Making a Game with Blend4Web Part 6: Animation and FX

    This time we'll speak about the main stages of character modeling and animation, and also will create the effect of the deadly falling rocks.

    Character model and textures


    The character data was placed into two files. The character_model.blend file contains the geometry, the material and the armature, while the character_animation.blend file contains the animation for this character.

    The character model mesh is low-poly:


    gm06_img02.jpg?v=20141022153048201407181


    This model - just like all the others - lacks a normal map. The color texture was entirely painted on the model in Blender using the Texture Painting mode:


    gm06_img03.jpg?v=20141022153048201407181


    The texture was then supplemented (4) with the baked ambient occlusion map (2). Its color (1) was initially much paler than required, so it was enhanced (3) with a Multiply node in the material. This allowed for fine tuning of the final texture's saturation.


    gm06_img04.jpg?v=20141022153048201407181


    After baking we received the resulting diffuse texture, from which we created the specular map. We brightened up this specular map in the spots corresponding to the blade, the metal clothing elements, the eyes and the hair. As usual, in order to save video memory, this texture was packed into the alpha channel of the diffuse texture.


    gm06_img05.jpg?v=20141022153048201407181


    Character material


    Let's add some nodes to the character material to create the highlighting effect when the character contacts the lava.


    gm06_img06.jpg?v=20141022153048201407181


    We need two height-dependent procedural masks (2 and 3) to implement this effect. One of these masks (2) will paint the feet in the lava-contacting spots (yellow), while the other (3) will paint the character legs just above the knees (orange). The material specular value is output (4) from the diffuse texture alpha channel (1).


    gm06_img07.jpg?v=20141022153048201407181


    Character animation


    Because the character is seen mainly from afar and from behind, we created a simple armature with a limited number of inverse kinematics controlling bones.


    gm06_img08.jpg?v=20141022153048201407181


    A group of objects, including the character model and its armature, has been linked to the character_animation.blend file. After that we've created a proxy object for this armature (Object > Make Proxy...) to make its animation possible.

    At this game development stage we need just three animation sequences: looping run, idle and death animations.


    gm06_img09.jpg?v=20141022153048201407181


    Using the specially developed tool - the Blend4Web Anim Baker - all three animations were baked and then linked to the main scene file (game_example.blend). After export from this file the animation becomes available to the programming part of the game.


    gm06_img10.jpg?v=20141030124956201407181


    Special effects

    During the game, red-hot rocks will keep falling on the character. To visualize this, a set of 5 elements is created for each rock:

    1. the geometry and the material of the rock itself,
    2. the halo around the rock,
    3. the explosion particle system,
    4. the particle system for the smoke trail of the falling rock,
    5. and the marker under the rock.

    The above-listed elements are present in the lava_rock.blend file and are linked to the game_example.blend file. Each element from the rock set has a unique name for convenient access from the programming part of the application.

    Falling rocks

    For diversity, we made three rock geometry types:


    gm06_img12.jpg?v=20141022153048201407181


    The texture was created by hand in the Texture Painting mode:


    gm06_img13.jpg?v=20141030124956201407181


    The material is generic, without the use of nodes, with the Shadeless checkbox enabled:


    gm06_img14.jpg?v=20141030124956201407181


    For the effect of glowing red-hot rock, we created an egg-shaped object with the narrow part looking down, to imitate rapid movement.


    gm06_img15.jpg?v=20141030124956201407181


    The material of the shiny areas is entirely procedural, without any textures. First of all we apply a Dot Product node to the geometry normals and the vector (0, 0, -1) in order to obtain a view-dependent gradient (similar to the Fresnel effect). Then we squeeze and shift the gradient in two different ways to get two masks (2 and 3). The wider one we paint with the color gradient (5), while the other is subtracted from the first (4) so the resulting ring can be used as a transparency map.


    gm06_img16.jpg?v=20141030124956201407181


    The empty node group named NORMAL_VIEW is used for compatibility: in the Geometry node the normals are in camera space, while in Blend4Web they are in world space.
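    The mask math itself is simple enough to sketch outside of Blender. Below is a rough JavaScript rendition of the idea; the squeeze/shift constants are invented and only illustrate the shape of the calculation, not the actual node values:

    // 1) view-dependent gradient, 2)/3) two squeezed-and-shifted masks,
    // 4) their difference used as a ring-shaped transparency mask.
    function glowMasks(normal) {
        // Dot product of the world-space normal with (0, 0, -1).
        var gradient = -normal[2];

        var clamp = function(v) { return Math.min(Math.max(v, 0), 1); };

        // Squeeze and shift the gradient in two different ways (made-up constants).
        var wideMask   = clamp((gradient - 0.1) / 0.6);  // mask 2
        var narrowMask = clamp((gradient - 0.3) / 0.4);  // mask 3

        return {
            color: wideMask,              // painted with the color gradient (5)
            alpha: wideMask - narrowMask  // ring used as the transparency map (4)
        };
    }

    console.log(glowMasks([0, 0, -1]));      // facing the camera: both masks saturate, ring alpha is 0
    console.log(glowMasks([0.92, 0, -0.4])); // intermediate angle: the ring band shows up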

    Explosions


    The red-hot rocks will explode upon contact with the rigid surface.


    gm06_img17.jpg?v=20141030124956201407181


    To create the explosion effect we'll use a particle system with a pyramid-shaped emitter. For the particle system we'll create a texture with an alpha channel - this will imitate fire and smoke puffs:


    gm06_img18.jpg?v=20141030124956201407181


    Let's create a simple material and attach the texture to it:


    gm06_img19.jpg?v=20141030124956201407181


    Then we setup a particle system using the just created material:


    gm06_img20.jpg?v=20141030124956201407181


    Activate particle fade-out with the additional settings on the Blend4Web panel:


    gm06_img21.jpg?v=20141030124956201407181


    To increase the size of the particles during their life span we create a ramp for the particle system:


    gm06_img22.jpg?v=20141030124956201407181


    Now the explosion effect is up and running!


    gm06_img23.jpg?v=20141030124956201407181


    Smoke trail


    When the rock is falling a smoke trail will follow it:


    gm06_img11.jpg?v=20141030124956201407181


    This effect can be set up quite easily. First of all let's create a smoke material using the same texture as for explosions. In contrast to the previous material this one uses a procedural blend texture for painting the particles during their life span - red in the beginning and gray in the end - to mimic the intense burning:


    gm06_img24.jpg?v=20141030124956201407181


    Now let's proceed to the particle system. A simple plane with its normal oriented downward will serve as an emitter. This time the emission loops and is more drawn out:


    gm06_img25.jpg?v=20141030124956201407181


    As before, this particle system has a ramp, this time for reducing the particle size progressively:


    gm06_img26.jpg?v=20141030124956201407181


    Marker under the rock


    It only remains to add a minor detail: the marker indicating the spot where the rock will land, just to make the player's life easier. We need a simple unwrapped plane. Its material is fully procedural; no textures are used.


    gm06_img27.jpg?v=20141030124956201407181


    The Average node is applied to the UV data to obtain a radial gradient (1) with its center in the middle of the plane. We are already familiar with the further procedures. Two transformations result in two masks (2 and 3) of different sizes. Subtracting one from the other gives the visual ring (4). The transparency mask (6) is tweaked and passed to the material alpha channel. Another mask is derived after squeezing the ring a bit (5). It is painted in two colors (7) and passed to the Color socket.


    gm06_img28.jpg?v=20141030124956201407181


    Conclusion


    At this stage the gameplay content is ready. After merging it with the programming part described in the previous article of this series we may enjoy the rich world packed with adventure!

    Link to the standalone application

    The source files of the models are part of the free Blend4Web SDK distribution.

    • Oct 30 2014 02:21 PM
    • by Spunya
  6. How to Create a Scoreboard for Lives, Time, and Points in HTML5 with WiMi5

    This tutorial gives a step-by-step explanation on how to create a scoreboard that shows the number of lives, the time, or the points obtained in a video game.

    To give this tutorial some context, we’re going to use the example project StunPig in which all the applications described in this tutorial can be seen. This project can be cloned from the WiMi5 Dashboard.

    image07.png

    We require two kinds of graphic elements to visualize the scoreboard values: a "Lives" Sprite which represents the number of lives, and as many Font or Letter Sprites as needed to represent the value of each digit to be shown. The "Lives" Sprite has four animations or image states, one for each of the four possible values of the lives counter.


    image01.png image27.png


    The Font or Letter Sprite is a Sprite with 11 animations or image states: one for each of the digits 0-9, plus an extra one for the colon (:).


    image16.png image10.png


    Example 1. How to create a lives scoreboard


    To manage the lives, we'll need a numeric value for them, which in our example is a number between 0 and 3 inclusive, and a graphic representation, in our case three orange stars which turn white as lives are lost, until all of them are white when the number of lives is 0.


    image12.png


    To do this, in the Scene Editor, we must create the instance of the sprite used for the stars. In our case, we’ll call them “Lives”. To manipulate it, we’ll have a Script (“lifeLevelControl”) with two inputs (“start” and “reduce”), and two outputs (“alive” and “death”).


    image13.png


    The “start” input initializes the lives by assigning them a numeric value of 3 and displaying the three orange stars. The “reduce” input lowers the numeric value of lives by one and displays the corresponding stars. As a consequence of triggering this input, one of the two outputs is activated. The “alive” output is activated if, after the reduction, the number of lives is greater than 0. The “death” output is activated when, after the reduction, the number of lives equals 0.

    Inside the Script, we do everything necessary to change the number of lives, display the Sprite state that corresponds to it, and trigger the correct output depending on that number; in our example we also play a fail sound when the number of lives goes down.

    In our “lifeLevelControl” Script, we have a “currentLifeLevel” parameter which contains the number of lives, and a parameter which contains the “Lives” Sprite, which is the element on the screen which represents the lives. This Sprite has four animations of states, “0”, “1”, “2”, and “3”.


    image14.png


    The “start” input connector activates the ActionOnParam “copy” blackbox which assigns the value of 3 to the “currentLifeLevel” parameter and, once that’s done, it activates the “setAnimation” ActionOnParam blackbox which displays the “3” animation Sprite.

    The “reduce” input connector activates the “-” ActionOnParam blackbox which subtracts from the “currentLifeLevel” parameter the value of 1. Once that’s done, it first activates the “setAnimation” ActionOnParam blackbox which displays the animation or state corresponding to the value of the “CurrentLifeLevel” parameter and secondly, it activates the “greaterThan” Compare blackbox, which activates the “alive” connector if the value of the “currentLifeLevel” parameter is greater than 0, or the “death” connector should the value be equal to or less than 0.
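    WiMi5 builds this logic visually, but it is small enough to restate in plain JavaScript as an orientation aid (the names below are hypothetical and this is not WiMi5 API code):

    var currentLifeLevel = 0;

    // Stand-in for the "setAnimation" ActionOnParam blackbox.
    function setAnimation(sprite, animationName) {
        console.log(sprite + " -> animation '" + animationName + "'");
    }

    // "start" input: three lives, show the "3" animation.
    function start() {
        currentLifeLevel = 3;
        setAnimation("Lives", String(currentLifeLevel));
    }

    // "reduce" input: lose a life, update the sprite, fire "alive" or "death".
    function reduce(onAlive, onDeath) {
        currentLifeLevel -= 1;
        setAnimation("Lives", String(currentLifeLevel));
        if (currentLifeLevel > 0)
            onAlive();
        else
            onDeath();
    }

    start();
    reduce(function() { console.log("alive"); }, function() { console.log("death"); });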

    Example 2. How to create a time scoreboard or chronometer


    In order to manage time, we'll have as a base a numerical time value, which runs in thousandths of a second during the round, and a graphic element to display it. This graphic element consists of 5 instances of a Sprite that has 10 animations or states, one for each of the digits 0-9.


    image10.png

    image20.png


    In our case, we'll display the time in seconds and thousandths of a second as you can see in the image, counting down: the time starts at the total round time and decreases until it reaches zero.

    To do this in the Scene editor, we must create the 6 instances of the different sprites used for each segment of the time display: the tens of seconds, the units of seconds, the tenths, hundredths, and thousandths of a second, plus the colon. In our case, we'll call them "second.unit", "second.ten", "millisec.unit", "millisec.ten" and "millisec.hundred".


    screenshot_309.png


    In order to manage this time, we'll have a Script ("RoundTimeControl") which has 2 inputs ("start" and "stop") and 1 output ("end"), as well as an exposed parameter called "roundMillisecs" which contains the value of the starting time.


    image31.png


    The “start” input activates the countdown from the total time and displays the decreasing value in seconds and milliseconds. The “stop” input stops the countdown, freezing the current time on the screen. When the stipulated time runs out, the “end” output is activated, which determines that the time has run out. Inside the Script, we do everything needed to control the time and display the Sprites in relation to the value of time left, activating the “end” output when it has run out.

    In order to use it, all we need to do is set the time value in milliseconds, either by placing it directly in the "roundMillisecs" parameter or by using a blackbox that assigns it. Once that's been assigned, we activate the "start" input, which will display the countdown until we activate the "stop" input or reach 0, in which case the "end" output is activated; we can use that, for example, to remove a life or trigger whatever else we'd like.


    image04.png


    In the “RoundTimeControl” Script, we have a fundamental parameter, “roundMillisecs”, which contains and defines the playing time value in the round. Inside this Script, we also have two other Scripts, “CurrentMsecs-Secs” and “updateScreenTime”, which group together the actions I’ll describe below.

    The activation of the "start" connector activates the "start" input of the Timer blackbox, which starts the countdown. As the defined time counts down, this blackbox updates the "elapsedTime" parameter with the time that has passed since the clock began counting, activating its "updated" output. This occurs from the very first moment and is repeated until the last check, when the "finished" output is triggered, announcing that time has run out. Since the total run time is not necessarily a multiple of the interval between timer updates, the final value of the elapsedTime parameter will most likely slightly exceed the configured time, which has to be kept in mind when it matters.

    The “updated” output tells us we have a new value in the “elapsedTime” parameter and will activate the “CurrentTimeMsecs-Secs” Script which calculates the total time left in total milliseconds and divides it into seconds and milliseconds in order to display it. Once this piece of information is available, the “available” output will be triggered, which will in turn activate the “update” input of the “updateScreenTime” Script which places the corresponding animations into the Sprites displaying the time.

    In the "CurrentMsecs-Secs" Script, we have two fundamental parameters to work with: "roundMillisecs", which contains and defines the value of playing time in the round, and "elapsedTime", which contains the amount of time that has passed since the clock began running. In this Script, we calculate the time left and then break that time in milliseconds down into seconds and milliseconds; the latter is done in the "CalculateSecsMillisecs" Script, which I'll get to.


    image19-1024x323.png


    The activation of the "get" connector starts the calculation of the time remaining: the "-" ActionOnParam blackbox subtracts the elapsed time contained in the "elapsedTime" parameter from the total run time contained in the "roundMillisecs" parameter. This value, stored in the "CurrentTime" parameter, is the time left in milliseconds.

    Once that has been calculated, the “greaterThanOrEqual” Compare blackbox is activated, which compares the value contained in “CurrentTime” (the time left) to the value 0. If it is greater than or equal to 0, it activates the “CalculateSecsMillisecs” Script which breaks down the remaining time into seconds and milliseconds, and when this is done, it triggers the “available” output connector. If it is less, before activating the “CalculateSecsMillisecs” Script, we activate the ActionOnParam “copy” blackbox which sets the time remaining value to zero.


    image30-1024x294.png


    In the “CalculateSecsMillisecs” Script, we have the value of the time left in milliseconds contained in the “currentTime” parameter as an input. The Script breaks down this input value into its value in seconds and its value in milliseconds remaining, providing them to the “CurrentMilliSecs” and “CurrentSecs” parameters. The activation of its “get” input connector activates the “lessThan” Compare blackbox. This performs the comparison of the value contained in the “currentTime” parameter to see if it is less than 1000.

    If it is less, the "true" output is triggered. This means there are no whole seconds left, so the whole value of "CurrentTime" is copied into the "CurrentMilliSecs" parameter by a "copy" ActionOnParam blackbox, and the "currentSecs" parameter is set to zero by another "copy" ActionOnParam blackbox. After this, the Script has the values it needs to provide, so it activates its "done" output.

    On the other hand, if the check performed by the "lessThan" Compare blackbox determines that "currentTime" is greater than 1000, it activates its "false" output. This activates the "/" ActionOnParam blackbox, which divides the "currentTime" parameter by 1000, storing the result in the "totalSecs" parameter. Once that is done, the "floor" ActionOnParam is activated, which stores the whole part of "totalSecs" in the "currentSecs" parameter.

    After this, the "-" ActionOnParam is activated, which subtracts "currentSecs" from "totalSecs", giving us the decimal part of "totalSecs", and stores it in "currentMillisecs". The "*" ActionOnParam blackbox then multiplies the "currentMillisecs" parameter by 1000 to convert the fractional seconds back into milliseconds, overwriting the previous value of "CurrentMillisecs". After this, the Script has the values it needs to provide, so it activates its "done" output.
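    The arithmetic described above boils down to a few lines; here is a plain JavaScript sketch of it (a hypothetical helper, not WiMi5 blackbox code):

    function calculateSecsMillisecs(roundMillisecs, elapsedTime) {
        // Time remaining, clamped at zero because the last timer tick may overshoot.
        var currentTime = Math.max(roundMillisecs - elapsedTime, 0);

        // Break the remainder into whole seconds and leftover milliseconds.
        var currentSecs = Math.floor(currentTime / 1000);
        var currentMillisecs = currentTime - currentSecs * 1000;

        return { secs: currentSecs, millisecs: currentMillisecs };
    }

    console.log(calculateSecsMillisecs(10000, 3210));  // { secs: 6, millisecs: 790 }
    console.log(calculateSecsMillisecs(10000, 10050)); // overshoot -> { secs: 0, millisecs: 0 }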

    When the "CalculateSecsMillisecs" Script finishes, it activates its "done" output, which triggers the "available" output of the "CurrentMsecs-Secs" Script; this in turn activates the "updateScreenTime" Script via its "update" input. This Script handles displaying the data obtained in the previous Script, which is available in the "CurrentMillisecs" and "CurrentSecs" parameters.


    image06.png


    The “updateScreenTime” Script in turn contains two Scripts, “setMilliSeconds” and “setSeconds”, which are activated when the “update” input is activated, and which set the time value in milliseconds and seconds respectively when their “set” inputs are activated. Both Scripts are practically the same, since they take a time value and place the Sprites related to the units of that value in the corresponding animations. The difference between the two is that “setMilliseconds” controls 3 digits (tenths, hundredths, and thousandths), while “setSeconds” controls only 2 (units and tens).


    image11.png


    The first thing the "setMilliseconds" Script does when activated is convert the value to be displayed, "currentMillisecs", to text via the "toString" ActionOnParam blackbox. This text is stored in the "numberAsString" parameter. Once the text has been obtained, we divide it into characters, collected as a group of Strings, via the "split" ActionOnParam. It is very important to leave the content of the "separator" parameter of this blackbox empty, even though in the image you can see two quotation marks in the field. This collection of characters is gathered into the "digitsAsStrings" parameter. Later, based on the value of the milliseconds to be presented, one animation or another will be set in the Sprites.

    Should the time value to be presented be less than 10, which is checked by the “lessThan” Compare blackbox against the value 10, the “true” output is activated which in turn activates the “setWith1Digit” Script. Should the time value be greater than 10, the blackbox’s “false” output is activated, and it proceeds to check if the time value is less than 100, which is checked by the “lessThan” Compare blackbox against the value 100. If this blackbox activates its “true” output, this in turn activates the “setWith2Digits” Script. Finally, if this blackbox activates the “false” output, the “setWith3Digits” Script is activated.
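    In plain JavaScript the digit-splitting step looks roughly like this (hypothetical names; WiMi5 does the same thing with the toString/split blackboxes and the setWith1/2/3Digits Scripts):

    // Stand-in for the "setAnimation" ActionOnParam blackbox.
    function setAnimation(spriteName, animationName) {
        console.log(spriteName + " -> animation '" + animationName + "'");
    }

    function setMilliseconds(currentMillisecs, sprites) {
        var digits = String(currentMillisecs).split(""); // e.g. 47 -> ["4", "7"]

        // Left-pad with "0" so there are always three digits to hand out.
        while (digits.length < 3)
            digits.unshift("0");

        setAnimation(sprites.hundred, digits[0]);
        setAnimation(sprites.ten, digits[1]);
        setAnimation(sprites.unit, digits[2]);
    }

    setMilliseconds(47, { hundred: "millisec.hundred", ten: "millisec.ten", unit: "millisec.unit" });
    // millisec.hundred -> "0", millisec.ten -> "4", millisec.unit -> "7"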


    image15.png


    The “setWith1Digit” Script takes the first of the collection of characters, and uses it to set the animation of the Sprite that corresponds with the units contained in the “millisec.unit” parameter. The remaining Sprites (“millisec.ten” and “millisec.hundred”) are set with the 0 animation.


    image22.png


    The “setWith2Digits” Script takes the first of the collection of characters, and uses it to set the animation of the Sprite corresponding to the tenths place number contained in the “millisec.ten” parameter, the second character of the collection to set the Sprite animation corresponding to the units contained in the “millisec.unit” parameter and the “millisec.hundred” Sprite is given the animation for 0.


    image29.png


    The "setWith3Digits" Script takes the first of the collection of characters and uses it to set the animation of the Sprite corresponding to the hundreds digit, contained in the "millisec.hundred" parameter; the second character of the collection sets the animation of the Sprite corresponding to the tens digit, contained in the "millisec.ten" parameter; and the third character sets the animation of the Sprite corresponding to the units digit, contained in the "millisec.unit" parameter.


    image18.png


    The "setSeconds" Script, when activated, first converts the value to be displayed, "currentSecs", to text via the "toString" ActionOnParam blackbox. This text is stored in the "numberAsString" parameter. Once the text is obtained, we divide it into characters, gathering them into a collection of Strings via the "split" ActionOnParam blackbox. It is very important to leave the content of the "separator" parameter of this blackbox blank, even though you can see two quotation marks in the field. This collection of characters is gathered into the "digitsAsStrings" parameter. Later, based on the value of the seconds to be shown, one animation or another will be set in the Sprites.

    If the time value to be presented is less than 10, it’s checked by the “lessThan” Compare blackbox against the value of 10, which activates the “true” output; the first character of the collection is taken and used to set the animation of the Sprite corresponding to the units place value contained in the “second.unit” parameter. The other Sprite, “second.ten”, is given the animation for 0.

    If the time value to be presented is greater than ten, the “false” output of the blackbox is activated, and it proceeds to pick the first character from the collection of characters and we use it to set the animation of the Sprite corresponding to the tens place value contained in the “second.ten” parameter, and the second character of the character collection is used to set the animation of the Sprite corresponding to the units place value contained in the “second.unit” parameter.

    Example 3. How to create a points scoreboard.


    In order to manage the number of points, we'll have as a base the whole-number value of the points, which we'll keep increasing, and a graphic element to display it. This graphic element will be 4 instances of a Sprite that has 10 animations or states, one for each of the digits 0 to 9.


    image10.png


    In our case, we’ll display the points up to 4 digits, meaning scores can go up to 9999, as you can see in the image, starting at 0 and then increasing in whole numbers.


    image08.png


    For this, in the Scene editor, we must create the four instances of the different Sprites used for each of the numerical places used to count points: units, tens, hundreds, and thousands. In our case, we'll call them "unit point", "ten point", "hundred point", and "thousand point". To manage the score, we'll have a Script ("ScorePoints") which has 2 inputs ("reset" and "increment"), as well as an exposed parameter called "pointsToWin" which contains the value of the points to be added on each increment.


    image09.png


    The "reset" input sets the current score value to zero, and the "increment" input adds the points won on each increment, contained in the "pointsToWin" parameter, to the current score.

    In order to use it, we only need to set the value of the points to be won on each increment, either by putting it in the "pointsToWin" parameter or by using a blackbox that assigns it. Once that's set, we can activate the "increment" input, which will increase the score and show it on the screen. Whenever we want, we can start again by resetting the counter to zero via the "reset" input.

    Inside the Script, we do everything necessary to perform these actions and to represent the current score on the screen, displaying the 4 Sprites (units, tens, hundreds, and thousands) according to that value. When the "reset" input is activated, a "copy" ActionOnParam blackbox sets the value of the "scorePoints" parameter, which contains the current score, to 0. When the "increment" input is activated, a "+" ActionOnParam blackbox adds the "pointsToWin" parameter, which contains the points won on each increment, to the "scorePoints" parameter. After either activation, the "StoreOnScreen" Script is activated via its "update" input.
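    Restated as a small JavaScript sketch (hypothetical names, not WiMi5 API code), the score handling amounts to:

    var scorePoints = 0;
    var pointsToWin = 50;   // points awarded on each increment

    function reset() {
        scorePoints = 0;
        scoreOnScreen(scorePoints);
    }

    function increment() {
        scorePoints += pointsToWin;
        scoreOnScreen(scorePoints);
    }

    // Display the score on the 4 digit sprites, capped at 9999.
    function scoreOnScreen(value) {
        var digits = String(Math.min(value, 9999)).split("");
        while (digits.length < 4)
            digits.unshift("0");
        console.log("thousand.point=" + digits[0] + " hundred.point=" + digits[1] +
                    " ten.point=" + digits[2] + " unit.point=" + digits[3]);
    }

    reset();      // 0 0 0 0
    increment();  // 0 0 5 0
    increment();  // 0 1 0 0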


    image03.png


    The “StoreOnScreen” Script has a connector to the “update” input and shares the “scorePoints” parameter, which contains the value of the current score.


    image00.png

    image28-1024x450.png


    Once the “ScoreOnScreen” Script is activated by its “update” input, it begins converting the score value contained in the “scorePoints” parameter into text via the “toString” ActionOnParam blackbox. This text is gathered in the “numberAsString” parameter. Once the text has been obtained, we divide it into characters and group them into a collection of Strings via the “split” ActionOnParam.

    This collection of characters is gathered into the “digitsAsStrings” parameter. Later, based on the value of the score to be presented, one animation or another will be set for the 4 Sprites. If the value of the score is less than 10, as checked by the “lessThan” Compare blackbox against the value 10, its “true” output is activated, which activates the “setWith1Digit” Script.

    If the value is greater than 10, the blackbox’s “false” output is activated, and it checks to see if the value is less than 100. When the “lessThan” Compare blackbox checks that the value is less than 100, its “true” output is activated, which in turn activates the “setWith2Digits” Script.

    If the value is greater than 100, the “false” output of the blackbox is activated, and it proceeds to see if the value is less than 1000, which is checked by the “lessThan” Compare blackbox against the value of 1000. If this blackbox activates its “true” output, this will then activate the “setWith3Digits” Script. If the blackbox activates the “false” output, the “setWith4Digits” Script is activated.


    image21.png

    image05.png


    The “setWith1Digit” Script takes the first character from the collection of characters and uses it to set the animation of the Sprite that corresponds to the units place contained in the “unit.point” parameter. The remaining Sprites (“ten.point”, “hundred.point” and “thousand.point”) are set with the “0” animation.


    image24.png

    image02.png


    The "setWith2Digits" Script takes the first of the collection of characters and uses it to set the animation of the Sprite corresponding to the tens place contained in the "ten.point" parameter, and the second character of the collection sets the animation of the Sprite corresponding to the units place contained in the "unit.point" parameter. The remaining Sprites ("hundred.point" and "thousand.point") are set with the "0" animation.


    image25.png

    image17.png


    The "setWith3Digits" Script takes the first of the collection of characters and uses it to set the animation of the Sprite corresponding to the hundreds place contained in the "hundred.point" parameter; the second character in the collection sets the animation of the Sprite corresponding to the tens place contained in the "ten.point" parameter; and the third character in the collection sets the animation of the Sprite corresponding to the units place contained in the "unit.point" parameter. The remaining Sprite ("thousand.point") is set with the "0" animation.


    image23.png

    image26.png


    The “setWith4Digits” Script takes the first character of the collection of characters and uses it to set the animation of the Sprite corresponding to the thousands place as contained in the “thousand.point” parameter; the second is set with the animation for the Sprite corresponding to the hundreds place as contained in the “hundred.point” parameter; the third is set with the animation for the Sprite corresponding to the tens place as contained in the “ten.point” parameter; and the fourth is set with the animation for the Sprite corresponding to the units place as contained in the “unit.point” parameter.

    As you can see, it is not necessary to write code when you work with WiMi5. The whole logic of these scoreboards has been created by dragging and dropping blackboxes in the LogicChart. You also have to set and configure parameters and Scripts, but all the work is done visually. We hope you have enjoyed this tutorial and understood how to create scoreboards.

    • Oct 21 2014 12:21 PM
    • by hafo
  7. BeMyGuess

    Secret Agent Morgan is the best in his profession. A man you can trust to achieve every goal. But this specific mission is on the edge now.

    He has only a few seconds to find the secret number that unlocks the safebox. The safebox that contains the very secrets of the Syndicate of The Burnt Dragon. And Agent Morgan needs to know everything about it. Can you help him guess it before the alarm starts?

    There are two tables of possible answers in front of you. As you progress, answers will be ruled out... Use the rollers to create guesses and assist Agent Morgan in his quest...


    A fun and simple Puzzle game, available on Android.

    [attachment=23674:1.png] [attachment=23675:2.png] [attachment=23676:3.png] [attachment=23677:4.png] [attachment=23678:5.png]

  9. What's new in 1.4

    We are glad to announce that WaveEngine 1.4 (Dolphin) is out! This is probably our biggest release yet, with a lot of new features.

    New Demo

    Alongside the 1.4 release of Wave Engine, we have published in our GitHub repository a new sample to show off the features included in this new version.
    In this sample you play Yurei, a little ghost character that slides through a dense forest and a haunted house.
    Some key features:

    • The new Camera 2D is crucial to follow the little ghost across the way.
    • Parallax scrolling effect done automatically with the Camera2D perspective projection.
    • Animated 2D model using Spine model with FFD transforms.
    • Image Effects to change the look and feel of the scene to make it scarier.

    The source code of this sample is available in our GitHub sample repository

    fuN6rWVl.png

    Binary version for Windows PC, Here.

    Camera 2D

    One of the major improvements in 2D games using Wave Engine is the new Camera 2D feature.
    With a Camera 2D, you can pan, zoom and rotate the display area of the 2D world. So from now on making a 2D game with a large scene is straightforward.
    Additionally, you can change the Camera 2D projection:

    • Orthogonal projection. The camera renders 2D objects uniformly, with no sense of perspective. This was the most commonly used projection.
    • Perspective projection. The camera renders 2D objects with a sense of perspective.

    Now it's easy to create a parallax scrolling effect using the perspective projection in Camera 2D; you only need to set the DrawOrder property properly to specify the entity's depth value between the background and the foreground.



    More info about this.

    Image Effects library


    This new release comes with an extension library called WaveEngine.ImageEffects. It allows users to add multiple postprocessing effects to their games with a single line of code.
    The first version of this library has more than 20 image effects to improve the visual quality of each development.
    All these image effects have been optimized to work in real time on mobile devices.
    Custom image effects are also allowed, and all image effects in the current library are published as open source.

    UhuEBdG.png


    More info about image effects library.

    Skeletal 2D animation

    In this new release we have improved the integration with Spine Skeletal 2D animation tool to support Free-Form deformation (FFD).
    This new feature allows you to move individual mesh vertices to deform the image.

    3lLWb0O.png

    Read more.

    Transform2D & Transform3D with real hierarchy

    One of the most requested features from our users was the implementation of a real Parent/Child transform relationship.
    Now, when an Entity is the Parent of another Entity, the Child Entity moves, rotates, and scales in the same way as its Parent does. Child Entities can also have children, forming an Entity hierarchy.
    Transform2D and Transform3D components now have new properties to deal with entity hierarchy:

    • LocalPosition, LocalRotation and LocalScale properties are used to specify transform values relative to the parent.
    • Position, Rotation and Scale properties are now used to set global transform values.
    • Transform2D now inherits from the Transform3D component, so you can deal with 3D transform properties in 2D entities.

    15kHRT4.png

    More info about this.

    Multiplatform tools

    We have been working on rewriting all our tools using GTK# to make them available on Windows, MacOS and Linux.
    We want to offer the same development experience to all our developers regardless of their OS.

    D9yh7y1.png

    More info about this.

    Workflow improved

    With this WaveEngine version we have also improved the developer workflow, because one of the most tedious tasks when working on multiplatform games is asset management.
    With the new workflow all these tasks are performed automatically and transparently for developers, providing some major benefits:

    • Reduced development time and increased productivity
    • Improved the process of porting to other platforms
    • Isolated the developer from managing WPK files

    All our samples and quickstarters have been updated to this new workflow; see the GitHub repository.

    Ue21Rm0.png

    More about new workflow.

    Scene Editor in progress

    After many developer requests we have started to create a Scene Editor tool, and today we are excited to announce that we are already working on it.

    It will be a multiplatform tool, so developers will be able to use it from Windows, MacOS, or Linux.

    fNSslO2.png

    Community thread about this future tool.

    Digital Boss Monster powered by WaveEngine

    The Kickstarter for the Boss Monster card game for iOS & Android has been an outstanding success; the game will be developed using WaveEngine over the next few months.

    3zdq0il.jpg

    If you want to see a cool video of the prototype here is the link.

    Don't miss the chance to be part of this kickstarter, only a few hours left (link)

    More Open Source

    We keep on publishing source code of some extensions for WaveEngine:

    Image Effects Library: WaveEngine image library, with more than 20 lenses (published)
    Complete code in our Github repository.

    Using Wave Engine in your applications built on Windows Forms, GTKSharp and WPF

    With this new version we want to help every developer using Wave Engine in their Windows desktop applications, like game teams that need to build their own game level editor, or university research groups that need to integrate research technologies with a Wave Engine renderer and show three-dimensional results. Right now, located in our Wave Engine GitHub Samples repository, you can find demo projects that show how to integrate Wave Engine with Windows Forms, GtkSharp or Windows Presentation Foundation technologies.

    Complete code in our Github repository.

    Better Visual Studio integration

    Current supported editions:

    • Visual Studio Express 2012 for Windows Desktop
    • Visual Studio Express 2012 for Web
    • Visual Studio Professional 2012
    • Visual Studio Premium 2012
    • Visual Studio Ultimate 2012
    • Visual Studio Express 2013 for Windows Desktop
    • Visual Studio Express 2013 for Web
    • Visual Studio Professional 2013
    • Visual Studio Premium 2013
    • Visual Studio Ultimate 2013

    We help you port your Wave Engine project from version 1.3.5 to the new 1.4 version

    This new version contains some important changes, so we want to help every Wave Engine developer port their game projects to the new 1.4 version.

    More info about this.

    Complete Changelog of WaveEngine 1.4 (Dolphin), Here.

    Download WaveEngine Now (Windows, MacOS, Linux)

    4508c900-53eb-489c-9ace-780d222364fd.png

  10. Banshee Game Development Toolkit - Introduction

    Introduction


    *This is a suggested template to get you started. Feel free to modify it to suit your article. Introduce the topic you are going to write about briefly. Provide any relevant background information regarding your article that may be helpful to readers.

    Main Section Title


    Explaining the Concept

    Subheading



    This is the tutorial part of your article. What are you trying to convey to your readers? In this main body section you can put all your descriptive text and pictures (you can drag and drop pictures right into the editor!).

    Using the Code


    (Optional) If your article is about a piece of code create a small description about how the code works and how to use it.

    /* Code block here */
    

    Interesting Points


    Did you stumble upon any weird gotchas? .. things people should look out for? How would you describe your experience writing this article.

    Conclusion


    Wrap up any loose ends for your article. Be sure to restate what was covered in the article. You may also suggest additional resources for the reader to check out if desired.

    Article Update Log


    Keep a running log of any updates that you make to the article. e.g.

    6 Feb 2020: Added additional code samples
    4 Feb 2020: Initial release

  11. Making a Game with Blend4Web Part 4: Mobile Devices

    This is the fourth part of the Blend4Web gamedev tutorial. Today we'll add mobile devices support and program the touch controls. Before reading this article, please look at the first part of this series, in which the keyboard controls are implemented. We will use the Android and iOS 8 platforms for testing.

    Detecting mobile devices


    In general, mobile devices do not perform as well as desktops, so we'll lower the rendering quality. We'll detect a mobile device with the following function:

    function detect_mobile() {
        if( navigator.userAgent.match(/Android/i)
         || navigator.userAgent.match(/webOS/i)
         || navigator.userAgent.match(/iPhone/i)
         || navigator.userAgent.match(/iPad/i)
         || navigator.userAgent.match(/iPod/i)
         || navigator.userAgent.match(/BlackBerry/i)
         || navigator.userAgent.match(/Windows Phone/i)) {
            return true;
        } else {
            return false;
        }
    }
    

    The init function now looks like this:

    exports.init = function() {
    
        if(detect_mobile())
            var quality = m_cfg.P_LOW;
        else
            var quality = m_cfg.P_HIGH;
    
        m_app.init({
            canvas_container_id: "canvas3d",
            callback: init_cb,
            physics_enabled: true,
            quality: quality,
            show_fps: true,
            alpha: false,
            physics_uranium_path: "uranium.js"
        });
    }
    

    As we can see, a new initialization parameter - quality - has been added. In the P_LOW profile there are no shadows or post-processing effects. This will allow us to dramatically increase performance on mobile devices.

    Controls elements on the HTML page


    Let's add the following elements to the HTML file:

    <!DOCTYPE html>
    <body>
        <div id="canvas3d"></div>
    
        <div id="controls">
            <div id ="control_circle"></div>
            <div id ="control_tap"></div>
            <div id ="control_jump"></div>
        </div>
    </body>
    

    1. The control_circle element will appear when the screen is touched, and will be used for directing the character.
    2. The control_tap element is a small marker that follows the finger.
    3. The control_jump element is a jump button located in the bottom right corner of the screen.

    By default all these elements are hidden (visibility property). They will become visible after the scene is loaded.

    The styles for these elements can be found in the game_example.css file.

    Processing the touch events


    Let's look at the callback which is executed at scene load:

    function load_cb(root) {
        _character = m_scs.get_first_character();
        _character_body = m_scs.get_object_by_empty_name("character",
                                                             "character_body");
    
        var right_arrow = m_ctl.create_custom_sensor(0);
        var left_arrow  = m_ctl.create_custom_sensor(0);
        var up_arrow    = m_ctl.create_custom_sensor(0);
        var down_arrow  = m_ctl.create_custom_sensor(0);
        var touch_jump  = m_ctl.create_custom_sensor(0);
    
        if(detect_mobile()) {
            document.getElementById("control_jump").style.visibility = "visible";
            setup_control_events(right_arrow, up_arrow,
                                 left_arrow, down_arrow, touch_jump);
        }
    
        setup_movement(up_arrow, down_arrow);
        setup_rotation(right_arrow, left_arrow);
    
        setup_jumping(touch_jump);
    
        setup_camera();
    }
    

    The new things here are the 5 sensors created with the controls.create_custom_sensor() method. We will change their values when the corresponding touch events are fired.

    If the detect_mobile() function returns true, the control_jump element is shown and the setup_control_events() function is called to set up the values for these new sensors (passed as arguments). This function is quite large, so we'll look at it step by step.

    var touch_start_pos = new Float32Array(2);
    
    var move_touch_idx;
    var jump_touch_idx;
    
    var tap_elem = document.getElementById("control_tap");
    var control_elem = document.getElementById("control_circle");
    var tap_elem_offset = tap_elem.clientWidth / 2;
    var ctrl_elem_offset = control_elem.clientWidth / 2;
    

    First of all the variables are declared for saving the touch point and the touch indices, which correspond to the character's moving and jumping. The tap_elem and control_elem HTML elements are required in several callbacks.

    The touch_start_cb() callback


    In this function the beginning of a touch event is processed.

    function touch_start_cb(event) {
        event.preventDefault();
    
        var h = window.innerHeight;
        var w = window.innerWidth;
    
        var touches = event.changedTouches;
    
        for (var i = 0; i < touches.length; i++) {
            var touch = touches[i];
            var x = touch.clientX;
            var y = touch.clientY;
    
            if (x > w / 2) // right side of the screen
                break;
    
            touch_start_pos[0] = x;
            touch_start_pos[1] = y;
            move_touch_idx = touch.identifier;
    
            tap_elem.style.visibility = "visible";
            tap_elem.style.left = x - tap_elem_offset + "px";
            tap_elem.style.top  = y - tap_elem_offset + "px";
    
            control_elem.style.visibility = "visible";
            control_elem.style.left = x - ctrl_elem_offset + "px";
            control_elem.style.top  = y - ctrl_elem_offset + "px";
        }
    }
    

    Here we iterate through all the changed touches of the event (event.changedTouches) and discard the touches from the right half of the screen:

        if (x > w / 2) // right side of the screen
            break;
    

    If this condition is met, we save the touch point touch_start_pos and the index of this touch move_touch_idx. After that we'll render 2 elements in the touch point: control_tap and control_circle. This will look on the device screen as follows:


    gm04_img01.jpg?v=20140827183625201406251



    The touch_jump_cb() callback


    function touch_jump_cb (event) {
        event.preventDefault();
    
        var touches = event.changedTouches;
    
        for (var i = 0; i < touches.length; i++) {
            var touch = touches[i];
            m_ctl.set_custom_sensor(jump, 1);
            jump_touch_idx = touch.identifier;
        }
    }
    

    This callback is called when the control_jump button is touched:


    gm04_img02.jpg?v=20140827183625201406251



    It just sets the jump sensor value to 1 and saves the corresponding touch index.

    The touch_move_cb() callback


    This function is very similar to the touch_start_cb() function. It processes finger movements on the screen.

        function touch_move_cb(event) {
            event.preventDefault();
    
            m_ctl.set_custom_sensor(up_arrow, 0);
            m_ctl.set_custom_sensor(down_arrow, 0);
            m_ctl.set_custom_sensor(left_arrow, 0);
            m_ctl.set_custom_sensor(right_arrow, 0);
    
            var h = window.innerHeight;
            var w = window.innerWidth;
    
            var touches = event.changedTouches;
    
            for (var i=0; i < touches.length; i++) {
                var touch = touches[i];
                var x = touch.clientX;
                var y = touch.clientY;
    
                if (x > w / 2) // right side of the screen
                    break;
    
                tap_elem.style.left = x - tap_elem_offset + "px";
                tap_elem.style.top  = y - tap_elem_offset + "px";
    
                var d_x = x - touch_start_pos[0];
                var d_y = y - touch_start_pos[1];
    
                var r = Math.sqrt(d_x * d_x + d_y * d_y);
    
                if (r < 16) // don't move if control is too close to the center
                    break;
    
                var cos = d_x / r;
                var sin = -d_y / r;
    
                if (cos > Math.cos(3 * Math.PI / 8))
                    m_ctl.set_custom_sensor(right_arrow, 1);
                else if (cos < -Math.cos(3 * Math.PI / 8))
                    m_ctl.set_custom_sensor(left_arrow, 1);
    
                if (sin > Math.sin(Math.PI / 8))
                    m_ctl.set_custom_sensor(up_arrow, 1);
                else if (sin < -Math.sin(Math.PI / 8))
                    m_ctl.set_custom_sensor(down_arrow, 1);
            }
        }
    

    The values of d_x and d_y denote by how much the marker is shifted relative to the point in which the touch started. From these increments the distance to this point is calculated, as well as the cosine and sine of the direction angle. This data fully defines the required behavior depending on the finger position by means of simple trigonometric transformations.

    As a result the ring is divided into 8 sectors, each assigned its own combination of the right_arrow, left_arrow, up_arrow and down_arrow sensors.
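
    To make these thresholds more tangible, here is a small standalone check (not part of the tutorial code; the numbers are purely illustrative). Both boundary values, Math.cos(3 * Math.PI / 8) and Math.sin(Math.PI / 8), are roughly 0.38, so a finger dragged diagonally up and to the right sets two sensors at once:

        var d_x = 30;                             // finger moved 30 px to the right...
        var d_y = -30;                            // ...and 30 px up (screen Y grows downwards)
        var r = Math.sqrt(d_x * d_x + d_y * d_y); // ~42.4, safely above the 16 px dead zone
        var cos = d_x / r;                        // ~0.71 > 0.38, so right_arrow is set
        var sin = -d_y / r;                       // ~0.71 > 0.38, so up_arrow is set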

    The touch_end_cb() callback


    This callback resets the sensors' values and the saved touch indices.

        function touch_end_cb(event) {
            event.preventDefault();
    
            var touches = event.changedTouches;
    
            for (var i=0; i < touches.length; i++) {
    
                if (touches[i].identifier == move_touch_idx) {
                    m_ctl.set_custom_sensor(up_arrow, 0);
                    m_ctl.set_custom_sensor(down_arrow, 0);
                    m_ctl.set_custom_sensor(left_arrow, 0);
                    m_ctl.set_custom_sensor(right_arrow, 0);
                    move_touch_idx = null;
                    tap_elem.style.visibility = "hidden";
                    control_elem.style.visibility = "hidden";
    
                } else if (touches[i].identifier == jump_touch_idx) {
                    m_ctl.set_custom_sensor(jump, 0);
                    jump_touch_idx = null;
                }
            }
        }
    

    Also, when the ending touch is the movement touch, the corresponding control elements are hidden:

        tap_elem.style.visibility = "hidden";
        control_elem.style.visibility = "hidden";
    


    gm04_img04.jpg?v=20140827183625201406251



    Setting up the callbacks for the touch events


    And the last thing happening in the setup_control_events() function is setting up the callbacks for the corresponding touch events:

        document.getElementById("canvas3d").addEventListener("touchstart", touch_start_cb, false);
        document.getElementById("control_jump").addEventListener("touchstart", touch_jump_cb, false);
    
        document.getElementById("canvas3d").addEventListener("touchmove", touch_move_cb, false);
    
        document.getElementById("canvas3d").addEventListener("touchend", touch_end_cb, false);
        document.getElementById("controls").addEventListener("touchend", touch_end_cb, false);
    

    Please note that the touchend event is listened for on two HTML elements. That is because the user can release his/her finger both inside and outside of the controls element.

    Now we have finished working with events.

    Including the touch sensors into the system of controls


    Now we only have to add the created sensors to the existing system of controls. Let's check out the changes using the setup_movement() function as an example.

    function setup_movement(up_arrow, down_arrow) {
        var key_w     = m_ctl.create_keyboard_sensor(m_ctl.KEY_W);
        var key_s     = m_ctl.create_keyboard_sensor(m_ctl.KEY_S);
        var key_up    = m_ctl.create_keyboard_sensor(m_ctl.KEY_UP);
        var key_down  = m_ctl.create_keyboard_sensor(m_ctl.KEY_DOWN);
    
        var move_array = [
            key_w, key_up, up_arrow,
            key_s, key_down, down_arrow
        ];
    
        var forward_logic  = function(s){return (s[0] || s[1] || s[2])};
        var backward_logic = function(s){return (s[3] || s[4] || s[5])};
    
        function move_cb(obj, id, pulse) {
            if (pulse == 1) {
                switch(id) {
                case "FORWARD":
                    var move_dir = 1;
                    m_anim.apply(_character_body, "character_run_B4W_BAKED");
                    break;
                case "BACKWARD":
                    var move_dir = -1;
                    m_anim.apply(_character_body, "character_run_B4W_BAKED");
                    break;
                }
            } else {
                var move_dir = 0;
                m_anim.apply(_character_body, "character_idle_01_B4W_BAKED");
            }
    
            m_phy.set_character_move_dir(obj, move_dir, 0);
    
            m_anim.play(_character_body);
            m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
        };
    
        m_ctl.create_sensor_manifold(_character, "FORWARD", m_ctl.CT_TRIGGER,
            move_array, forward_logic, move_cb);
        m_ctl.create_sensor_manifold(_character, "BACKWARD", m_ctl.CT_TRIGGER,
            move_array, backward_logic, move_cb);
    
        m_anim.apply(_character_body, "character_idle_01_B4W_BAKED");
        m_anim.play(_character_body);
        m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
    }
    

    As we can see, the only changes are the set of sensors in move_array and the forward_logic() and backward_logic() logic functions, which now depend on the touch sensors as well.

    The setup_rotation() and setup_jumping() functions have changed in a similar way. They are listed below:

    function setup_rotation(right_arrow, left_arrow) {
        var key_a     = m_ctl.create_keyboard_sensor(m_ctl.KEY_A);
        var key_d     = m_ctl.create_keyboard_sensor(m_ctl.KEY_D);
        var key_left  = m_ctl.create_keyboard_sensor(m_ctl.KEY_LEFT);
        var key_right = m_ctl.create_keyboard_sensor(m_ctl.KEY_RIGHT);
    
        var elapsed_sensor = m_ctl.create_elapsed_sensor();
    
        var rotate_array = [
            key_a, key_left, left_arrow,
            key_d, key_right, right_arrow,
            elapsed_sensor,
        ];
    
        var left_logic  = function(s){return (s[0] || s[1] || s[2])};
        var right_logic = function(s){return (s[3] || s[4] || s[5])};
    
        function rotate_cb(obj, id, pulse) {
    
            var elapsed = m_ctl.get_sensor_value(obj, "LEFT", 6);
    
            if (pulse == 1) {
                switch(id) {
                case "LEFT":
                    m_phy.character_rotation_inc(obj, elapsed * ROT_SPEED, 0);
                    break;
                case "RIGHT":
                    m_phy.character_rotation_inc(obj, -elapsed * ROT_SPEED, 0);
                    break;
                }
            }
        }
    
        m_ctl.create_sensor_manifold(_character, "LEFT", m_ctl.CT_CONTINUOUS,
            rotate_array, left_logic, rotate_cb);
        m_ctl.create_sensor_manifold(_character, "RIGHT", m_ctl.CT_CONTINUOUS,
            rotate_array, right_logic, rotate_cb);
    }
    
    function setup_jumping(touch_jump) {
        var key_space = m_ctl.create_keyboard_sensor(m_ctl.KEY_SPACE);
    
        var jump_cb = function(obj, id, pulse) {
            if (pulse == 1) {
                m_phy.character_jump(obj);
            }
        }
    
        m_ctl.create_sensor_manifold(_character, "JUMP", m_ctl.CT_TRIGGER,
            [key_space, touch_jump], function(s){return s[0] || s[1]}, jump_cb);
    }
    

    And the camera again


    In the end let's return to the camera. Keeping in mind the community feedback, we've introduced the possibility to tweak the stiffness of the camera constraint. Now this function call is as follows:

        m_cons.append_semi_soft_cam(camera, _character, CAM_OFFSET, CAM_SOFTNESS);
    

    The CAM_SOFTNESS constant is defined at the beginning of the file and its value is 0.2.
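
    For reference, a minimal sketch of these declarations could look like this (the CAM_OFFSET value is an assumption, carried over unchanged from the first part of this series):

        var CAM_SOFTNESS = 0.2;                           // camera constraint stiffness
        var CAM_OFFSET = new Float32Array([0, 1.5, -4]);  // assumed: 1.5 m above, 4 m behind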

    Conclusion


    At this stage, programming the controls for mobile devices is finished. In the next tutorials we'll implement the gameplay and look at some other features of the Blend4Web physics engine.

    Link to the standalone application

    The source files of the application and the scene are part of the free Blend4Web SDK distribution.

    • Sep 03 2014 02:02 PM
    • by Spunya
  12. The Art of Feeding Time: Branding

    Although a game's branding rarely has much to do with its gameplay, it's still a very important forward-facing aspect to consider.


    ft_initial_logos.jpg
    Initial concepts for a Feeding Time logo.


    For Feeding Time's logo, we decided to create numerous designs and get some feedback before committing to a single concept.

    Our early mockups featured both a clock and various types of food. Despite seeming like a perfect fit, the analog clock caused quite a bit of confusion in-game. We wanted a numerical timer to clearly indicate a level's duration, but this was criticized when placed on an analog clock background. Since the concept already prompted some misunderstandings -- and a digital watch was too high-tech for the game's rustic ambiance -- we decided to avoid it for the logo.

    The food concepts were more readable than the clock, but Feeding Time was meant to be a game where any type of animal could make an appearance. Consequently we decided to avoid single food-types to prevent the logo from being associated with just one animal.


    ft_further_logos.jpg
    Even more logo concepts. They're important!


    A few more variations included a placemat and a dinner bell, but we didn't feel like these really captured the look of the game. We were trying to be clever, but the end results weren't quite there.

    We felt that the designs came across as somewhat sterile, resembling the perfect vector logos of large conglomerates that looked bland compared to the in-game visuals.


    ft_logo_bite_white.jpg
    Our final logo.


    Ultimately we decided to go with big, bubbly letters on top of a simple apéritif salad. It was bright and colourful, and fit right in with the restaurant-themed UI we were pursuing at the time. We even used the cloche-unveiling motif in the trailer!

    One final extra touch was a bite mark on the top-right letter. We liked the idea in the early carrot-logo concept, and felt that it added an extra bit of playfulness.


    ft_icon_concepts.jpg
    Initial sketches for the app icon.


    The app-icon was a bit easier to nail down as we decided not to avoid specific foods and animals due to the small amount of space. We still tried out a few different sketches, but the dog-and-bone was easily the winner. It matched the in-game art, represented the core of the gameplay, and was fairly readable at all resolutions.

    To help us gauge the clarity of the icon, we used the App Icon Template.

    This package contains a large Photoshop file with a Smart Object embedded in various portholes and device screenshots. The Smart Object can be replaced with any logo to quickly get a feel for how it appears in different resolutions and how it is framed within the App Store. This was particularly helpful with the bordering, as iOS 7 increased the corner radius, making the icons appear rounder.


    ft_final_icon.jpg
    Final icon iterations for Feeding Time.


    Despite a lot of vibrant aesthetics, we still felt that Feeding Time was missing a face: a central identifying character.

    Our first shot at a "mascot" was a grandmother that sent the player to various parts of the world in order to feed its hungry animals. A grandmother fretting over everyone having enough to eat is a fairly identifiable concept, and it nicely fit in with the stall-delivery motif.


    ft_babushkas.jpg
    Our initial clerk was actually a babushka with some not-so-kindly variations.


    However, there was one problem: the introductory animation showed the grandmother tossing various types of food into her basket and random animals periodically snatching 'em away.

    We thought this sequence did a good job of previewing the gameplay in a fairly cute and innocuous fashion, but the feedback was quite negative. People were very displeased that all the nasty animals were stealing from the poor old woman!


    animation_foodsteal.gif
    People were quite appalled by the rapscallion animals when the clerk was played by a kindly grandma.


    It was a big letdown as we really liked the animation, but much to our surprise we were told it'd still work OK with a slightly younger male clerk. A quick mockup later, and everyone was pleased with the now seemingly playful shenanigans of the animals!

    Having substituted the kindly babushka for a jolly uncle archetype, we also shrunk down the in-game menus and inserted the character above them to add an extra dash of personality.


    ft_clerk_lineup.jpg
    The clerk as he appears over two pause menus, a bonus game in which the player gets a low score, and a bonus game in which the player gets a high score.


    The clerk made a substantial impact keeping the player company on their journey, so we decided to illustrate a few more expressions. We also made these reflect the player's performance, helping to link it with in-game events such as bonus-goal completion and minigame scores.


    ft_website.jpg
    The official Feeding Time website complete with our logo, title-screen stall and background, a happy clerk, and a bunch of dressed up animals.


    Finally, we used the clerk and various game assets for the Feeding Time website and other Incubator Games outlets. We made sure to support 3rd generation iPads with a resolution of 2048x1536, which came in handy for creating various backgrounds, banners, and icons used on our Twitter, Facebook, YouTube, tumblr, SlideDB, etc.

    Although branding all these sites wasn't a must, it helped to unify our key message: Feeding Time is now available!

    Article Update Log


    30 July 2014: Initial release

  13. Making a Game with Blend4Web Part 2: Models for the Location

    In this article we will describe the process of creating the models for the location - geometry, textures and materials. This article is aimed at experienced Blender users that would like to familiarize themselves with creating game content for the Blend4Web engine.

    Graphical content style


    In order to create the game atmosphere, a non-photorealistic cartoon setting was chosen. The character and environment proportions have been deliberately exaggerated to give the gameplay a comic, light-hearted feel.

    Location elements


    This location consists of the following elements:
    • the character's action area: 5 platforms on which the main game action takes place;
    • the background environment, the role of which will be performed by less-detailed ash-colored rocks;
    • lava covering most of the scene surface.
    At this stage the source blend files of models and scenes are organized as follows:


    ex02_p02_img01.jpg?v=2014072916520120140


    1. env_stuff.blend - the file with the scene's environment elements which the character is going to move on;
    2. character_model.blend - the file containing the character's geometry, materials and armature;
    3. character_animation.blend - the file which has the character's group of objects and animation (including the baked one) linked to it;
    4. main_scene.blend - the scene which has the environment elements from other files linked to it. It also contains the lava model, collision geometry and the lighting settings;
    5. example2.blend - the main file, which has the scene elements and the character linked to it (in the future more game elements will be added here).

    In this article we will describe the creation of simple low-poly geometry for the environment elements and the 5 central islands. As the game is intended for mobile devices we decided to manage without normal maps and use only the diffuse and specular maps.

    Making the geometry of the central islands


    ex02_p02_img02.jpg?v=2014073110210020140


    First of all we will make the central islands in order to get settled with the scene scale. This process can be divided into 3 steps:

    1) A flat outline of the future islands using single vertices, which were later joined into polygons and triangulated for convenient editing when needed.


    ex02_p02_img03.jpg?v=2014072916520120140


    2) The Solidify modifier was used for the flat outline with the parameter equal to 0.3, which pushes the geometry volume up.


    ex02_p02_img04.jpg?v=2014072916520120140


    3) At the last stage the Solidify modifier was applied to get the mesh for hand editing. The mesh was subdivided where needed at the edges of the islands. According to the final vision cavities were added and the mesh was changed to create the illusion of rock fragments with hollows and projections. The edges were sharpened (using Edge Sharp), after which the Edge Split modifier was added with the Sharp Edges option enabled. The result is that a well-outlined shadow has appeared around the islands.

    Note:  It's not recommended to apply modifiers (using the Apply button). Enable the Apply Modifiers checkbox in the object settings on the Blend4Web panel instead; as a result the modifiers will be applied to the geometry automatically on export.


    ex02_p02_img05.jpg?v=2014073110210020140


    Texturing the central islands


    Now that the geometry for the main islands has been created, let's move on to texturing and setting up the material for baking. The textures were created using a combination of baking and hand-drawing techniques.

    Four textures were prepared altogether.


    ex02_p02_img06.jpg?v=2014072916520120140


    At the first stage let's define the color, adding small spots and cracks to create the effect of a rough, stony and dusty rock. To paint these bumps, texture brushes were used, which can be downloaded from the Internet or drawn yourself if necessary.


    ex02_p02_img07.jpg?v=2014072916520120140


    At the second stage the ambient occlusion effect was baked. Because the geometry is low-poly, relatively sharp transitions between light and shadow appeared as a result. These can be slightly blurred with a Gaussian Blur filter in a graphical editor.


    ex02_p02_img08.jpg?v=2014072916520120140


    The third stage is the most time consuming - painting the black and white texture by hand in Texture Painting mode. It was laid over the other two, lightening and darkening certain areas. It's necessary to keep the model's geometry in mind so that the darker areas end up mostly in cracks and the brighter ones on the sharp angles of the geometry. A generic brush was used with stylus pressure sensitivity turned on.


    ex02_p02_img09.jpg?v=2014072916520120140


    The color turned out to be monotonous, so a couple of faded spots imitating volcanic dust and stone scratches were added. In order to get more flexibility in the texturing process without touching the original color texture, yet another texture was introduced. On this texture the light spots decolorize the previous three textures, while the dark spots leave the color unchanged.


    ex02_p02_img10.jpg?v=2014072916520120140


    You can see how the created textures were combined on the auxiliary node material scheme below.


    ex02_p02_img11.jpg?v=2014072916520120140


    The color of the diffuse texture (1) was multiplied by itself to increase contrast in dark places.
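
    A quick numeric check shows why this works: with channel values in the 0-1 range, bright areas barely change when squared (0.9 × 0.9 = 0.81) while dark areas drop sharply (0.3 × 0.3 = 0.09), so the gap between light and shadow widens.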

    After that the color was burned a bit in the darker places using baked ambient occlusion (2), and the hand-painted texture (3) was layered on top - the Overlay node gave the best result.

    At the next stage the texture with baked ambient occlusion (2) was layered again - this time with the Multiply node - in order to darken the textures in certain places.

    Finally the fourth texture (4) was used as a mask, using which the result of the texture decolorizing (using Hue/Saturation) and the original color texture (1) were mixed together.

    The specular map was made from applying the Squeeze Value node to the overall result.

    As a result we have the following picture.


    ex02_p02_img12.jpg?v=2014072916520120140


    Creating the background rocks


    The geometry of the rocks was made using a similar technique, although there are some differences. First of all we created low-poly geometry of the required form. On top of it we added the Bevel modifier with an angle threshold, which added bevels to the sharpest parts of the geometry, softening the lighting there.


    ex02_p02_img13.jpg?v=2014072916520120140


    The rock textures were created approximately in the same way as the island textures. This time a texture with decolorizing was not used because such a level of detail is excessive for the background. Also the texture created with the texture painting method is less detailed. Below you can see the final three textures and the results of laying them on top of the geometry.


    ex02_p02_img14.jpg?v=2014072916520120140


    The texture combination scheme was also simplified.


    ex02_p02_img15.jpg?v=2014072916520120140


    First comes the color map (1), over which goes the baked ambient occlusion (2), and finally - the hand-painted texture (3).

    The specular map was created from the color texture. To do this a single texture channel (Separate RGB) was used, which was corrected (Squeeze Value) and given into the material as the specular color.

    There is another special feature in this scheme which makes it different from the previous one - the dirty map baked into the vertex color, overlaid (Overlay node) in order to create contrast between the cavities and elevations of the geometry.


    ex02_p02_img16.jpg?v=2014072916520120140


    The final result of texturing the background rocks:


    ex02_p02_img17.jpg?v=2014072916520120140


    Optimizing the location elements


    Let's start optimizing the elements we have and preparing them for display in Blend4Web.

    First of all we need to combine all the textures of the above-mentioned elements (background rocks and the islands) into a single texture atlas and then re-bake them into a single texture map. To do this, let's combine the UV maps of all the geometry into a single UV map using the Texture Atlas addon.

    Note:  The Texture Atlas addon can be activated in Blender's settings under the Addons tab (UV category)


    ex02_p02_img18.jpg?v=2014072916520120140


    In the texture atlas mode, let's place the UV maps of every mesh so that they fill the future texture area evenly.

    Note:  It's not necessary to follow the same scale for all elements. It's recommended to allow more space for foreground elements (the islands).


    ex02_p02_img19.jpg?v=2014072916520120140


    After that let's bake the diffuse texture and the specular map from the materials of rocks and islands.


    ex02_p02_img20.jpg?v=2014072916520120140


    Note:  In order to save video memory, the specular map was packed into the alpha channel of the diffuse texture. As a result we got only one file.


    Let's place all the environment elements into a separate file (i.e. a library): env_stuff.blend. For convenience we will put them on different layers. Let's place the bottom of each element's mesh at the origin of coordinates. Each separate element will need its own group with the same name.


    ex02_p02_img21.jpg?v=2014072916520120140


    After the elements have been gathered in the library, we can start creating the material. The material for all the library elements - both the islands and the background rocks - is the same. This lets the engine automatically merge the geometry of all these elements into a single object, which increases performance significantly by decreasing the number of draw calls.

    Setting up the material


    The previously baked diffuse texture (1), into the alpha channel of which the specular map is packed, serves as the basis for the node material.


    ex02_p02_img22.jpg?v=2014072916520120140


    Our scene includes lava with which the environment elements will be contacting. Let's create the effect of the rock glowing and being heated in the contact places. To do this we will use a vertex mask (2), which we will apply to all library elements - and paint the vertices along the bottom geometry line.


    ex02_p02_img23.jpg?v=2014072916520120140


    The vertex mask was modified several times by the Squeeze Value node. First of all the less hot color of the lava glow (3) is placed on top of the texture using a more blurred mask. Then a brighter yellow color (4) is added near the contact places using a slightly tightened mask - in order to imitate a fritted rock.

    Lava should illuminate the rock from below, so in order to avoid shadowing in the lava-contacting places we'll pass the same vertex mask into the material's Emit socket.

    We have one last thing to do - pass (5) the specular value from the diffuse texture's alpha channel to the material's Spec socket.


    ex02_p02_img24.jpg?v=2014072916520120140


    Object settings


    Let's enable the "Apply Modifiers" checkbox (as mentioned above) and also the "Shadows: Receive" checkbox in the object settings of the islands.


    ex02_p02_img25.jpg?v=2014072916520120140


    Physics


    Let's create exact copies of the island's geometry (named _collision for convenience). For these meshes we'll replace the material by a new material (named collision), and enable the "Special: Collision" checkbox in its settings (Blend4Web panel). This material will be used by the physics engine for collisions.

    Let's add the resulting objects into the same groups as the islands themselves.


    ex02_p02_img26.jpg?v=2014072916520120140


    Conclusion


    We've finished creating the library of the environment models. In one of the upcoming articles we'll demonstrate how the final game location was assembled and also describe making the lava effect.

    Link to the standalone application

    The source files of the application and the scene are part of the free Blend4Web SDK distribution.

    • Aug 18 2014 09:27 AM
    • by Spunya
  14. Making a Game with Blend4Web Part 3: Level Design

    This is the third article in the Making a Game series. In this article we'll consider assembling the game scene using the models prepared at the previous stage, setting up the lighting and the environment, and also we'll look in detail at creating the lava effect.

    Assembling the game scene


    Let's assemble the scene's visual content in the main_scene.blend file. We'll add the previously prepared environment elements from the env_stuff.blend file.

    Open the env_stuff.blend file via the File -> Link menu, go to the Group section, add the geometry of the central islands (1) and the background rocks (2) and arrange them on the scene.


    ex02_p03_img02.jpg?v=2014080111562320140


    Now we need to create the surface geometry of the future lava. The surface can be inflated a bit to deepen the effect of the horizon receding into the distance. Let's prepare 5 holes in the center, copying the outlines of the 5 central islands, for the vertex mask which we'll introduce later.

    We'll also copy this geometry and assign the collision material to it as it is described in the previous article.


    ex02_p03_img03.jpg?v=2014080111562320140


    A simple cube will serve as the environment, with its center located at the horizon level for convenience. The cube's normals must be directed inward.

    Let's set up a simple node material for it. We get a vertical gradient (1) located at the level of the proposed horizon from the Global socket. After squeezing and shifting it with the Squeeze Value node (2), we add the color (3). The result is passed directly into the Output node without the use of an intermediate Material node in order to make this object shadeless.


    ex02_p03_img04.jpg?v=2014080111562320140


    Setting up the environment


    We'll set up the fog under the World tab using the Fog density and Fog color parameters. Let's enable ambient lighting with the Environment Lighting option and set up its intensity (Energy). We'll select the two-color hemispheric lighting model Sky Color and tweak the Zenith Color and Horizon Color.


    ex02_p03_img05.jpg?v=2014080111562320140


    Next place two light sources into the scene. The first one of the Sun type will illuminate the scene from above. Enable the Generate Shadows checkbox for it to be a shadow caster. We'll put the second light source (also Sun) below and direct it vertically upward. This source will imitate the lighting from lava.


    ex02_p03_img06.jpg?v=2014080111562320140


    Then add a camera for viewing the exported scene. Make sure that the camera's Move style is Target (look at the camera settings on the Blend4Web panel), i.e. the camera is rotating around a certain pivot. Let's define the position of this pivot on the same panel (Target location).

    Also, distance and vertical angle limits can be assigned to the camera for convenient scene observation in the Camera limits section.


    ex02_p03_img07.jpg?v=2014080111562320140


    Adding the scene to the scene viewer


    At this stage a test export of the scene can be performed: File -> Export -> Blend4Web (.json). Let's add the exported scene to the list of the scene viewer external/deploy/assets/assets.json using any text editor, for example:

        {
            "name": "Tutorials",
            "items":[
    
                ...
    
                {
                    "name": "Game Example",
                    "load_file": "../tutorials/examples/example2/main_scene.json"
                },
    
                ...
            ]
       }   
    

    Then we can open the scene viewer apps_dev/viewer/viewer_dev.html with a browser, go to the Scenes panel and select the scene which is added to the Tutorials category.


    ex02_p03_img08.jpg?v=2014080111562320140


    The tools of the scene viewer are useful for tweaking scene parameters in real time.

    Setting up the lava material


    We'll prepare two textures by hand for the lava material: one is a repeating seamless diffuse texture and the other is a black and white texture which we'll use as a mask. To reduce video memory consumption the mask is packed into the alpha channel of the diffuse texture.


    ex02_p03_img09.jpg?v=2014080111562320140


    The material consists of several blocks. The first block (1) constantly shifts the UV coordinates for the black and white mask using the TIME (2) node in order to imitate the lava flow movement.


    ex02_p03_img10.jpg?v=2014080111562320140


    Note:  
    The TIME node is basically a node group with a reserved name. This group is replaced by the time-generating algorithm in the Blend4Web engine. To add this node it's enough to create a node group named TIME which has an output of the Value type. It can be left empty or can have for example a Value node for convenient testing right in Blender's viewport.


    In the other two blocks (4 and 5) the modified mask stretches and squeezes the UV in certain places, creating a swirling flow effect for the lava. The results are mixed together in block 6 to imitate the lava flow.

    Furthermore, the lava geometry has a vertex mask (3), using which a clean color (7) is added in the end to visualize the lava's burning hot spots.


    ex02_p03_img11.jpg?v=2014080111562320140


    To simulate the lava glow the black and white mask (8) is passed to the Emit socket. The mask itself is derived from the modified lava texture and from a special procedural mask (9), which reduces the glow effect with distance.

    Conclusion


    This is where the assembling of the game scene is finished. The result can be exported and viewed in the engine. In one of the upcoming articles we'll show the process of modeling and texturing the visual content for the character and preparing it for the Blend4Web engine.


    ex02_p03_img12.jpg?v=2014080111562320140



    Link to the standalone application

    The source files of the application and the scene are part of the free Blend4Web SDK distribution.

    • Aug 18 2014 09:36 AM
    • by Spunya
  15. Making a Game with Blend4Web Part 1: The Character

    Today we're going to start creating a fully-functional game app with Blend4Web.

    Gameplay


    Let's set up the gameplay. The player - a brave warrior - moves around a limited number of platforms. Melting hot stones keep falling on him from the sky; the stones should be avoided. Their number increases with time. Different bonuses which give various advantages appear on the location from time to time. The player's goal is to stay alive as long as possible. Later we'll add some other interesting features but for now we'll stick to these. This small game will have a third-person view.

    In the future, the game will support mobile devices and a score system. And now we'll create the app, load the scene and add the keyboard controls for the animated character. Let's begin!

    Setting up the scene


    Game scenes are created in Blender and then are exported and loaded into applications. Let's use the files made by our artist which are located in the blend/ directory. The creation of these resources will be described in a separate article.

    Let's open the character_model.blend file and set up the character. We'll do this as follows: switch to the Blender Game mode and select the character_collider object - the character's physical object.


    ex02_img01.jpg?v=20140717114607201406061


    Under the Physics tab we'll specify the settings as pictured above. Note that the physics type must be either Dynamic or Rigid Body, otherwise the character will be motionless.

    The character_collider object is the parent of the "graphical" character model, which will therefore follow the invisible physical model. Note that the lowest points of the capsule and the avatar differ a bit in height. This was done to compensate for the Step height parameter, which lifts the character above the surface in order to pass over small obstacles.

    Now let's open the main game_example.blend file, from which we'll export the scene.


    ex02_img02.jpg?v=20140717114607201406061


    The following components are linked to this file:

    1. The character group of objects (from the character_model.blend file).
    2. The environment group of objects (from the main_scene.blend file) - this group contains the static scene models and also their copies with the collision materials.
    3. The baked animations character_idle_01_B4W_BAKED and character_run_B4W_BAKED (from the character_animation.blend file).

    NOTE:
    To link components from another file go to File -> Link and select the file. Then go to the corresponding datablock and select the components you wish. You can link anything you want - from a single animation to a whole scene.

    Make sure that the Enable physics checkbox is turned on in the scene settings.

    The scene is ready; let's move on to programming.

    Preparing the necessary files


    Let's place the following files into the project's root:

    1. The engine b4w.min.js
    2. The addon for the engine app.js
    3. The physics engine uranium.js

    The files we'll be working with are: game_example.html and game_example.js.

    Let's link all the necessary scripts to the HTML file:

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
        <script type="text/javascript" src="b4w.min.js"></script>
        <script type="text/javascript" src="app.js"></script>
        <script type="text/javascript" src="game_example.js"></script>
    
        <style>
            body {
                margin: 0;
                padding: 0;
            }
        </style>
    
    </head>
    <body>
    <div id="canvas3d"></div>
    </body>
    </html>
    

    Next we'll open the game_example.js script and add the following code:

    "use strict"
    
    if (b4w.module_check("game_example_main"))
        throw "Failed to register module: game_example_main";
    
    b4w.register("game_example_main", function(exports, require) {
    
    var m_anim  = require("animation");
    var m_app   = require("app");
    var m_main  = require("main");
    var m_data  = require("data");
    var m_ctl   = require("controls");
    var m_phy   = require("physics");
    var m_cons  = require("constraints");
    var m_scs   = require("scenes");
    var m_trans = require("transform");
    var m_cfg   = require("config");
    
    var _character;
    var _character_body;
    
    var ROT_SPEED = 1.5;
    var CAMERA_OFFSET = new Float32Array([0, 1.5, -4]);
    
    exports.init = function() {
        m_app.init({
            canvas_container_id: "canvas3d",
            callback: init_cb,
            physics_enabled: true,
            alpha: false,
            physics_uranium_path: "uranium.js"
        });
    }
    
    function init_cb(canvas_elem, success) {
    
        if (!success) {
            console.log("b4w init failure");
            return;
        }
    
        m_app.enable_controls(canvas_elem);
    
        window.onresize = on_resize;
        on_resize();
        load();
    }
    
    function on_resize() {
        var w = window.innerWidth;
        var h = window.innerHeight;
        m_main.resize(w, h);
    };
    
    function load() {
        m_data.load("game_example.json", load_cb);
    }
    
    function load_cb(root) {
    
    }
    
    });
    
    b4w.require("game_example_main").init();
    

    If you have read the Creating an Interactive Web Application tutorial there won't be much new here. At this stage all the necessary modules are linked, and the init function and two callbacks are defined. The app window can also be resized using the on_resize function.

    Pay attention to the additional physics_uranium_path initialization parameter which specifies the path to the physics engine file.

    The global variable _character is declared for the physics object while _character_body is defined for the animated model. Also the two constants ROT_SPEED and CAMERA_OFFSET are declared, which we'll use later.

    At this stage we can run the app and look at the static scene with the character motionless.

    Moving the character


    Let's add the following code into the loading callback:

    function load_cb(root) {
        _character = m_scs.get_first_character();
        _character_body = m_scs.get_object_by_empty_name("character",
                                                         "character_body");
    
        setup_movement();
        setup_rotation();
        setup_jumping();
    
        m_anim.apply(_character_body, "character_idle_01");
        m_anim.play(_character_body);
        m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
    }
    

    First we save the physical character model to the _character variable. The animated model is saved as _character_body.

    The last three lines are responsible for setting up the character's starting animation.
    • animation.apply() - sets up the animation by the corresponding name,
    • animation.play() - plays it back,
    • animation.set_behavior() - changes the animation behavior, in our case making it cyclic.
    NOTE:
    Please note that skeletal animation should be applied to the character object which has an Armature modifier set up in Blender for it.

    Before defining the setup_movement(), setup_rotation() and setup_jumping() functions it's important to understand how Blend4Web's event-driven model works. We recommend reading the corresponding section of the user manual. Here we will only take a glimpse of it.

    In order to generate an event when certain conditions are met, a sensor manifold should be created.

    NOTE:
    You can check out all the possible sensors in the corresponding section of the API documentation.

    Next we have to define a logic function describing which state (true or false) the manifold's sensors should be in for the sensor callback to receive a positive result. Then we create a callback containing the actions to perform. And finally the controls.create_sensor_manifold() function, which is responsible for processing the sensors' values, should be called for the sensor manifold. Let's see how this works in our case.

    Define the setup_movement() function:

    function setup_movement() {
        var key_w     = m_ctl.create_keyboard_sensor(m_ctl.KEY_W);
        var key_s     = m_ctl.create_keyboard_sensor(m_ctl.KEY_S);
        var key_up    = m_ctl.create_keyboard_sensor(m_ctl.KEY_UP);
        var key_down  = m_ctl.create_keyboard_sensor(m_ctl.KEY_DOWN);
    
        var move_array = [
            key_w, key_up,
            key_s, key_down
        ];
    
        var forward_logic  = function(s){return (s[0] || s[1])};
        var backward_logic = function(s){return (s[2] || s[3])};
    
        function move_cb(obj, id, pulse) {
            if (pulse == 1) {
                switch(id) {
                case "FORWARD":
                    var move_dir = 1;
                    m_anim.apply(_character_body, "character_run");
                    break;
                case "BACKWARD":
                    var move_dir = -1;
                    m_anim.apply(_character_body, "character_run");
                    break;
                }
            } else {
                var move_dir = 0;
                m_anim.apply(_character_body, "character_idle_01");
            }
    
            m_phy.set_character_move_dir(obj, move_dir, 0);
    
            m_anim.play(_character_body);
            m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
        };
    
        m_ctl.create_sensor_manifold(_character, "FORWARD", m_ctl.CT_TRIGGER,
            move_array, forward_logic, move_cb);
        m_ctl.create_sensor_manifold(_character, "BACKWARD", m_ctl.CT_TRIGGER,
            move_array, backward_logic, move_cb);
    }
    

    Let's create 4 keyboard sensors - for the W, S, up arrow and down arrow keys. We could have made do with just two, but we want to mirror the controls on the letter keys as well as on the arrow keys. We'll append them to the move_array.

    Now to define the logic functions. We want the movement to occur upon pressing one of two keys in move_array.

    This behavior is implemented through the following logic function:

    function(s) { return (s[0] || s[1]) }
    

    The most important things happen in the move_cb() function.

    Here obj is our character. The pulse argument becomes 1 when any of the defined keys is pressed. We decide if the character is moved forward (move_dir = 1) or backward (move_dir = -1) based on id, which corresponds to one of the sensor manifolds defined below. Also the run and idle animations are switched inside the same blocks.

    Moving the character is done through the following call:

    m_phy.set_character_move_dir(obj, move_dir, 0);
    

    Two sensor manifolds for moving forward and backward are created in the end of the setup_movement() function. They have the CT_TRIGGER type i.e. they snap into action every time the sensor values change.

    At this stage the character is already able to run forward and backward. Now let's add the ability to turn.

    Turning the character


    Here is the definition for the setup_rotation() function:

    function setup_rotation() {
        var key_a     = m_ctl.create_keyboard_sensor(m_ctl.KEY_A);
        var key_d     = m_ctl.create_keyboard_sensor(m_ctl.KEY_D);
        var key_left  = m_ctl.create_keyboard_sensor(m_ctl.KEY_LEFT);
        var key_right = m_ctl.create_keyboard_sensor(m_ctl.KEY_RIGHT);
    
        var elapsed_sensor = m_ctl.create_elapsed_sensor();
    
        var rotate_array = [
            key_a, key_left,
            key_d, key_right,
            elapsed_sensor
        ];
    
        var left_logic  = function(s){return (s[0] || s[1])};
        var right_logic = function(s){return (s[2] || s[3])};
    
        function rotate_cb(obj, id, pulse) {
    
            var elapsed = m_ctl.get_sensor_value(obj, "LEFT", 4);
    
            if (pulse == 1) {
                switch(id) {
                case "LEFT":
                    m_phy.character_rotation_inc(obj, elapsed * ROT_SPEED, 0);
                    break;
                case "RIGHT":
                    m_phy.character_rotation_inc(obj, -elapsed * ROT_SPEED, 0);
                    break;
                }
            }
        }
    
        m_ctl.create_sensor_manifold(_character, "LEFT", m_ctl.CT_CONTINUOUS,
            rotate_array, left_logic, rotate_cb);
        m_ctl.create_sensor_manifold(_character, "RIGHT", m_ctl.CT_CONTINUOUS,
            rotate_array, right_logic, rotate_cb);
    }
    

    As we can see it is very similar to setup_movement().

    The elapsed sensor was added, which constantly generates a positive pulse. This allows us to get the time elapsed since the previous rendered frame inside the callback, using the controls.get_sensor_value() function. We need it to calculate the turning speed correctly.

    The type of sensor manifolds has changed to CT_CONTINUOUS, i.e. the callback is executed in every frame, not only when the sensor values change.

    The following method turns the character around the vertical axis:

    m_phy.character_rotation_inc(obj, elapsed * ROT_SPEED, 0)
    

    The ROT_SPEED constant is defined to tweak the turning speed.

    Character jumping


    The last control setup function is setup_jumping():

    function setup_jumping() {
        var key_space = m_ctl.create_keyboard_sensor(m_ctl.KEY_SPACE);
    
        var jump_cb = function(obj, id, pulse) {
            if (pulse == 1) {
                m_phy.character_jump(obj);
            }
        }
    
        m_ctl.create_sensor_manifold(_character, "JUMP", m_ctl.CT_TRIGGER, 
            [key_space], function(s){return s[0]}, jump_cb);
    }
    

    The space key is used for jumping. When it is pressed the following method is called:

    m_phy.character_jump(obj)
    

    Now we can control our character!

    Moving the camera


    The last thing we cover here is attaching the camera to the character.

    Let's add yet another function call - setup_camera() - into the load_cb() callback.

    This function looks as follows:

    function setup_camera() {
        var camera = m_scs.get_active_camera();
        m_cons.append_semi_soft_cam(camera, _character, CAMERA_OFFSET);
    }
    

    The CAMERA_OFFSET constant defines the camera position relative to the character: 1.5 meters above (Y axis in WebGL) and 4 meters behind (Z axis in WebGL).

    This function finds the scene's active camera and creates a constraint for it to follow the character smoothly.

    That's enough for now. Let's run the app and enjoy the result!

    ex02_img03.jpg?v=20140717114607201406061

    Link to the standalone application

    The source files of the application and the scene are part of the free Blend4Web SDK distribution.

    • Aug 18 2014 09:27 AM
    • by Spunya
  16. The Art of Feeding Time: Animation

    While some movement was best handled programmatically, Feeding Time‘s extensive animal cast and layered environments still left plenty of room for hand-crafted animation. The animals in particular required experimentation to find an approach that could retain the hand-painted texturing of the illustrations while also harkening to hand-drawn animation.


    old_dogsketch.gif old_dogeat.gif


    An early pass involved creating actual sketched frames, then slicing the illustration into layers and carefully warping those into place to match each sketch. Once we decided to limit all the animals to just a single angle, we dispensed with the sketch phase and settled on creating the posed illustrations directly. When the finalized dog image was ready, a full set of animations was created to test our planned lineup.

    The initial approach was to include Sleep, Happy, Idle, Sad, and Eat animations. Sleep would play at the start of the stage, then transition into Happy upon arrival of the delivery, then settle into Idle until the player attempted to eat food, resulting in Sad for incorrect choices and Eat for correct ones.


    dog2_sleeping.gif dog3_happy.gif dog1_idle.gif dog_sad2.gif dog3_chomp.gif


    Ultimately, we decided to cut Sleep because its low visibility during the level intro didn’t warrant the additional assets. We also discovered that having the animals rush onto the screen in the beginning of the level and dart away at the end helped to better delineate the gameplay phase.

    There were also plans to play either Happy or Sad at the end of each level for the animals that ate the most and the least food. The reaction to this, however, was almost overwhelmingly negative! Players hated the idea of always making one of the animals sad regardless of how many points they scored, so we quickly scrapped the idea.

    The Happy and Sad animations were still retained to add a satisfying punch to a successful combo and to inform the player when an incorrect match was attempted. As we discovered, a sad puppy asking to be thrown a bone (instead of, say, a kitty’s fish) proved to be a great deterrent for screen mashing and worked quite well as a passive tutorial.

    While posing the frames one by one was effectively employed for the Dog, Cat, Mouse, and Rabbit, a more sophisticated and easily iterated upon approach was developed for the rest of the cast:


    monkeylayers.gif jaw_cycle.gif lip_pull.gif


    With both methods, hidden portions of the animal's faces such as teeth and tongues were painted beneath separated layers. In the improved method, however, these layers could be much more freely posed and keyframed with a variety of puppet and warp tools at our disposal to make modifications to posing or frame rate much simpler.


    monkey_eat.gif beaver_eating.gif lion_eat.gif


    The poses themselves are often fairly extreme, but this was done to ensure that the motion was legible on small screens and at a fast pace in-game:


    allframes.png


    For Feeding Time’s intro animation and environments, everything was illustrated in advance on its own layer, making animation prep a smoother process than separating the flattened animals had been.

    The texture atlas comprising the numerous animal frames grew to quite a large size — this is just a small chunk!


    ft_animals_atlas.jpg


    Because the background elements wouldn’t require the hand-drawn motion of the animals, our proprietary tool “SLAM” was used to give artists the ability to create movement that would otherwise have to be input programmatically. With SLAM, much like Adobe Flash, artists can nest many layers of images and timelines, all of which loop within one master timeline.

    SLAM’s simple interface focuses on maximizing canvas visibility and allows animators to fine-tune image placement by numerical values if desired:


    slamscreen.jpg


    One advantage over Flash (and the initial reason SLAM was developed) is its capability to output final animation data in a succinct and clean format which maximizes our capability to port assets to other platforms.

    Besides environments, SLAM also proved useful for large scale effects, which would otherwise bloat the game’s filesize if rendered as image sequences:


    slamconfetti.jpg


    Naturally, SLAM stands for Slick Animation, which is what we set out to create with a compact number of image assets. Hopefully ‘slick’ is what you’ll think when you see it in person, now that you have some insight into how we set things into motion!

    Article Update Log


    16 July 2014: Initial release

  17. Waiting for review... or "ok, it was fun, now what?"

    The flop


    As an indie who loves to work inside my own world with my own rules (no deadlines, no budget), this is like "... ok, ok, I need to get a life and move on, and well, let's also try to make this thing something real at last, something that people can touch and react to… let's go and see what happens!".

    This is the kind of attitude that will always make you fail when trying to make a dent in the App Store. Your game will probably sink to the deepest ranks of the App Store in 2-3 days and never recover.

    No one will care about you, your "firm" or your game, because the truth is that you never existed; your shadow is too short.

    The issue


    Exactly.

    The real problem with app marketing is not just game quality or marketing budget. If it were that simple, the solution would be just to throw in more money. But it's not. It has always been about reach, or as it's properly called today, "social reach".

    When I worked on my first game, TapTapGo (see it on the App Store here), I did it for the sake of getting the experience of both creating something on the magnificent Apple platform and also publishing it and trying to make an impact. I learned a lot about marketing then. And I also felt depressed.

    I tested almost every tool and tactic to try to get attention from potential gamers: preview videos, social and forum engagement (and joining #IDRTG), PR kits (almost no one replied to my pitches), paid reviews with Gnome Escape (which gave my first game a mix of good and bad reviews), ASO (Sensor Tower and others) and many more.

    It was exhausting (and expensive). And I could not see any benefit from those actions. I watched the Apple charts every day (and sometimes every hour). I was checking the iTunes Connect app and other statistics apps on my phone during my walks.

    After 2-3 months of releasing a few updates and working hard to climb the App Store ladder, I threw in the towel and decided to move on, to take a break and make space for new ideas.

    Conclusion


    My reflection on that experience is that, even though I pushed very hard to market my game, my reach was very low. I had few contacts to share my game with, and even though my game got a few thousand downloads during the first weeks, I could not see any network effect. Every action and every penny I spent trying to expand my reach had only a brief, unsuccessful effect, because the real problem was that I didn't have any social attractor linked to my game.

    I didn't take care to build a fan base that could provide enough reach so that every update would bring in more fans. Instead, I got only superficial downloads, driven by isolated bursts of spending.

    So, next time, don't rush to release your game. Make sure you have a good reason to do it. Why are you going to release the game? What are your expectations? Have you taken the steps to build proper reach through a fan base for when launch day comes? Because if you haven't, you will be disappointed. And you won't be able to explain why you feel that way.

    In the next posts I will describe my efforts to create some noise before Noir Run, our second game, gets released.

    UT6422u.png

    Stay tuned!

    - Rob.
    @KronnectGames


    Article Update Log

    13 Jul 2014: Initial release

  18. Blackhole by Fiolasoft reaches over 110% support & will be shared with over 67,000 people

    Blackhole will reach a massive number of gamers on release

    [attachment=22564:blackhole.jpg]
    Two weeks after the company Fiolasoft published their indie game Blackhole on Epocu, it has reached over 110% support (332 supporters) and will be shared with over 67,000 people on the end date (in 38 days).

    We're excited to see how much it will help the promotion of the game and their playerbase. We also hope this success story will inspire more indie developers to promote their games through various platforms.


    You can see the Blackhole campaign here: http://epocu.com/campaigns/blackhole/

    • Jul 09 2014 10:28 AM
    • by ahogen
  19. Epocu – a free and easy way to hype & test upcoming games

    Hey guys!

    We're a small Danish company called Kunju Studios who love games and have previously worked on tools for gamers and game developers.

    Over the past months we have been working on a way to put the work and ideas of indie game developers in the spotlight. What we’ve come up with is Epocu.

    Envisioned to be a hub for upcoming games and game concepts, Epocu offers developers a unique opportunity to test their ideas and assumptions before investing time in them. Upon presenting an idea, developers can set an interest goal to see if they can reach a critical mass of people interested in their game concept. Once the interest goal is achieved, a message of your choice will be shared through your supporters' social media at the time you think you would benefit most from the buzz and attention.

    [attachment=22419:2.jpg]

    We hope you will check it out and help us make the platform that indie game development deserves. Epocu is completely free to use and always will be, in the hope that the attention created helps you find investors and a fan base, as well as lay down a solid foundation for a Kickstarter campaign.

    [attachment=22418:1.jpg]

    If you're interested in learning more about us, you can check us out in more detail here: http://epocu.com
    Feel free to leave us your feedback or suggestions in the comments; we will do our best to answer you as soon as possible! :-)

  24. Game in the making (The Doomed Ones)

    Intro:
    I started putting together plans for a low-res 3D FPS not too long ago.

    It's set in a decent-sized city, let's say Atlanta, Georgia.
    The game will follow in the vein of the TV show Revolution, but with a bit of a twist.

    Rough Story Line:
    About 15 years ago a small outbreak of a virus known as virus b-200342 somehow escaped from a top-secret research facility.
    The virus was then released into the water supply of New York City. The virus, later called the Doom Virus (I'm looking for another name for it),
    made its way into the system of nearly every living occupant of New York. It acted like an STD, traveling from person to person in every way it could, waiting to unleash its madness. After a month of spreading from person to person, the government began delivering vaccines, which were later determined to only protect the uninfected from a future doom. Nearly 98% of the world's population had already been infected by then. The crisis was believed to be averted, and so everyone went on with their everyday lives. But then it happened: in one sweeping action, every infected person was, in a sense, "activated." Their veins began glowing blue and their minds went blank, focused on one thing... food. Not your normal kind of food either; more along the lines of eating those not of their kind. This happened 10 years before the story begins; now the infected mope around, looking for food but tired, in a sort of zombie-like state, ready to attack whenever bothered.

    This is where the game comes in. You are a survivor; your character's traits are determined by your prologue (physical appearance, past life, and mental abilities). You have been looking for other survivors for years and have just decided to head down to the Atlanta area in hopes of finding some. Of course, the place is swarming with the Doomed Ones, yet you find signs of the living. You proceed into Atlanta, and that is where your story begins.

    Game Characteristics:
    I love the idea of open-world survival games: the thought of doing everything you can to survive, working your way around, trying to form a group, and kicking some ass.
    So here are my main game characteristics, or mechanics.

    • Survivability (Needing to eat and drink, also medicines and craftable splints)
    • Weapons (Not so much high-tech and accurate; mostly handmade and crude, but a few precision ones hidden about.)
    • The Doomed AI (Stumbling about, but when they detect a human they will walk or crawl at a jog-like speed, possibly pouncing at you.)
    • Food and Water (You cannot be infected, so you can drink from the tap if you find a working sink; food is scarce but findable)
    • Medicine (A few hospitals can be found on the map, full of meds and bandaging material, possibly craftable from herbs and cloth)
    • Player Creation (Focused around people from age 16 to 40, customizable in all ways)
    • Possible MMO (100 player servers, with faction creation and base building)
    • Base building (The ability to scavenge wood and parts from vehicles and utilities to craft base items)
    • Terrain (A large infected city with little power)
    Hiring:
    I'd love it if I could get a few or many more people in on this project! Of course, there will not be pay until it is released on Steam.
    Just email me @ Lane_cork97@hotmail.com if you want to help, or add me on Skype @ Lcorak97

    Feedback:
    Give me your feedback! It's the best thing I could use right now to determine whether or not I want to go on with this project.

    Thank you for taking the time to read this.

  25. The Process of Creating Music

    Hello, everyone! My name is Arthur Baryshev and I'm a music composer and sound designer. I compose soundtracks for video games, and I'm the manager of the IK-Sound studio. I've been collaborating with MagicIndie Softworks for a long time now, and I would like to share my experience of how I compose tracks for the games forged by these guys.

    How it begins


    You have to know that my music is "born" even before I write its first note. First, I study whichever game I have to compose music for. I am usually given a short description of the game, the concept art, the list of needed tracks, and some other references. Immediately after that, I brainstorm ideas and form a general image of the music I am to compose: the style, the musical instruments I am going to use, the mood, and so on...

    I often lean back in my armchair and soak in the concept art's slide show. The first impressions are extremely important because they are usually the most powerful and the closest to what I have to create.

    From the very beginning, it is crucially important to maintain a clear dialogue with the lead game designer. If we understand each other, then we are already halfway there. The result can be a stylistically unified soundtrack that highlights each action you take in the game, just like in Brink of Consciousness: Dorian Gray Syndrome. By the way, you can hear the soundtrack for this game here:

    Just click on the image below
    [image: Brink of Consciousness: Dorian Gray Syndrome]

    Some words about the process


    After all is set into motion and everything is agreed upon, I start composing. My cornerstone is my virtual orchestra. I use orchestral and "live" instruments in virtually every track I compose. This gives my tracks a distinctive flavour and truly brings them to life.

    I send my sketches, which are usually about 15 to 30 seconds long, to the developers, and only after they give me their seal of approval do I finish them. Once I've decided on the final version of a track, I bring it to perfection by polishing it and adding new details.

    Many soundtracks are based upon leitmotifs: melodies that set the tone of the game. Speaking of leitmotifs, a good example is the soundtrack I wrote for Nearwood. In the main menu, from the very first second, you can hear a very memorable, even catchy tune. This melody is later reused in various cut-scenes, giving them a distinctive mood. You can listen to the tunes from this game here:

    Just click on the image below
    [image: Nearwood - Collector's Edition]

    When developing a particular song or tune for a game, one should keep in mind the following:
    • How the music will fit with the overall sound theme;
    • Whether it will be annoying or intrusive to the person playing the game;
    • Whether the track can be looped cleanly (a minimal playback sketch follows below);
    • And so on, and so forth...
    This is vitally important! The track could be a musical masterpiece, a Ninth Symphony, but if it is poorly implemented, it will ruin the entire experience. When everything is set, every track is completed, and each has found its place in the game, you can sit back and admire the results.
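
    To make the looping point concrete, here is a minimal sketch of how a programmer might play a composer's track on an endless loop in-game. This is my own illustration, not part of the article's workflow: it assumes SDL2 with the SDL_mixer library, and "main_menu_theme.ogg" is a hypothetical file name. The musical half of the job is exporting the track so that its ending flows back into its beginning; the code then simply asks the mixer to repeat it indefinitely.

    // Minimal looping-music sketch (assumes SDL2 + SDL_mixer are installed).
    #include <SDL.h>
    #include <SDL_mixer.h>
    #include <cstdio>

    int main()
    {
        if (SDL_Init(SDL_INIT_AUDIO) != 0) {
            std::printf("SDL_Init failed: %s\n", SDL_GetError());
            return 1;
        }

        // 44.1 kHz, default sample format, stereo, 2048-sample chunks.
        if (Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 2048) != 0) {
            std::printf("Mix_OpenAudio failed: %s\n", Mix_GetError());
            SDL_Quit();
            return 1;
        }

        // Hypothetical file name; the track should be authored so its tail
        // leads back into its head, otherwise the loop seam will be audible.
        Mix_Music* theme = Mix_LoadMUS("main_menu_theme.ogg");
        if (theme != nullptr) {
            Mix_PlayMusic(theme, -1); // -1 = loop indefinitely
            SDL_Delay(10000);         // keep playing for 10 seconds in this demo
            Mix_FreeMusic(theme);
        }

        Mix_CloseAudio();
        SDL_Quit();
        return 0;
    }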

    Well, that's all ... Oh, wait!


    I am now working on the Cyberline Racing and Saga Tenebrae projects, which are currently in full development. They are set in very different worlds, which require entirely different approaches: I compose heavy metal and electronic music for one, and soulful fantasy music for the other. Guess which is which?

    Here's a sneak peek at a fresh battle composition from the upcoming Saga Tenebrae and a demo OST (about half the OST, I'd say) from Cyberline Racing:

    Just click on the images below
    [images: Saga Tenebrae and Cyberline Racing artwork]

    And before I go, I will say that composing the music is only half the work. The second half, which is just as important as the first, consists of sound design and sound effects. I'll talk to you about sound design a bit later. Good luck and stay awesome! ;)
