Search Results

There were 506 results tagged with game

  1. What's new in 1.4

    We are glad to announce that WaveEngine 1.4 (Dolphin) is out! This is probably our biggest release to date, with a lot of new features.

    New Demo

    Alongside the 1.4 release of Wave Engine, we have published a new sample in our GitHub repository to show off the features included in this version.
    In this sample you play Yurei, a little ghost character who glides through a dense forest and a haunted house.
    Some key features:

    • The new Camera 2D is crucial for following the little ghost along the way.
    • Parallax scrolling effect achieved automatically with the Camera2D perspective projection.
    • Animated 2D model using a Spine model with FFD transforms.
    • Image Effects to change the look and feel of the scene and make it scarier.

    The source code of this sample is available in our GitHub samples repository.


    A binary version for Windows PC is available here.

    Camera 2D

    One of the major improvements in 2D games using Wave Engine is the new Camera 2D feature.
    With a Camera 2D, you can pan, zoom and rotate the display area of the 2D world. So from now on making a 2D game with a large scene is straightforward.
    Additionally, you can change the Camera 2D projection:

    • Orthogonal projection. The camera renders 2D objects uniformly, with no sense of perspective. This is the most commonly used projection.
    • Perspective projection. The camera renders 2D objects with a sense of perspective.

    Now it’s easy to create a parallax scrolling effect using perspective projection in Camera 2D: you only need to set the DrawOrder property appropriately to specify each entity's depth value between the background and the foreground.
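    A perspective projection produces parallax because a layer's on-screen displacement shrinks as its depth grows. Here is a minimal sketch of the underlying math in plain JavaScript (a generic pinhole model for illustration, not the WaveEngine API; all names are our own):

```javascript
// With a perspective projection, a point's horizontal screen coordinate is
// roughly focal * x / depth, so when the camera pans, layers that are deeper
// in the scene shift less on screen -- which is the parallax effect that the
// depth-based DrawOrder gives you for free.
function screenShift(focal, pan, depth) {
    return focal * pan / depth;
}

var focal = 1;
var pan = 10; // the camera pans 10 world units

var foregroundShift = screenShift(focal, pan, 1); // shallow layer: shifts 10
var backgroundShift = screenShift(focal, pan, 5); // deep layer: shifts 2
```

    Entities placed further away therefore scroll more slowly than foreground entities, with no extra scripting.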



    More info about this.

    Image Effects library


    This new release comes with an extension library called WaveEngine.ImageEffects, which allows users to add multiple postprocessing effects to their games with a single line of code.
    The first version of this library includes more than 20 image effects to improve the visual quality of each project.
    All these image effects have been optimized to work in real time on mobile devices.
    Custom image effects are also supported, and all image effects in the current library are published as open source.



    More info about image effects library.

    Skeletal 2D animation

    In this new release we have improved the integration with the Spine skeletal 2D animation tool to support Free-Form Deformation (FFD).
    This new feature allows you to move individual mesh vertices to deform the image.


    Read more.

    Transform2D & Transform3D with real hierarchy

    One of the most requested features by our users was the implementation of a real Parent / Child transform relationship.
    Now, when an Entity is a Parent of another Entity, the Child Entity will move, rotate, and scale in the same way as its Parent does. Child Entities can also have children, forming an Entity hierarchy.
    Transform2D and Transform3D components now have new properties to deal with entity hierarchy:

    • LocalPosition, LocalRotation and LocalScale specify transform values relative to the parent.
    • Position, Rotation and Scale now set global transform values.
    • Transform2D now inherits from the Transform3D component, so you can work with 3D transform properties on 2D entities.
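    The parent/child rule can be sketched in a few lines of plain JavaScript (a hypothetical minimal 2D model for illustration, not the actual WaveEngine components): a child's global position is its local position scaled and rotated by the parent's transform, then offset by the parent's position.

```javascript
// Hypothetical minimal 2D transform model (not the WaveEngine API):
// global = parentPosition + rotate(parentRotation, parentScale * localPosition)
function toGlobal(parentTransform, localPos) {
    var cos = Math.cos(parentTransform.rotation);
    var sin = Math.sin(parentTransform.rotation);
    var x = localPos.x * parentTransform.scale;
    var y = localPos.y * parentTransform.scale;
    return {
        x: parentTransform.x + x * cos - y * sin,
        y: parentTransform.y + x * sin + y * cos
    };
}

var parentTransform = { x: 100, y: 50, rotation: 0, scale: 2 };
var childLocal = { x: 10, y: 0 };
var childGlobal = toGlobal(parentTransform, childLocal); // { x: 120, y: 50 }
```

    Moving, rotating or scaling the parent changes only the parent's transform, and every child's global position follows automatically, which is exactly the behavior a real hierarchy provides.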


    More info about this.

    Multiplatform tools

    We have been rewriting all our tools using GTK# to make them available on Windows, MacOS and Linux.
    We want to offer the same development experience to all our developers, regardless of their OS.


    More info about this.

    Improved workflow

    In this WaveEngine version we have also improved the developer workflow, since one of the most tedious tasks when working on multiplatform games is asset management.
    With the new workflow all these tasks are performed automatically and transparently for developers, with clear benefits:

    • Reduced development time and increased productivity
    • An improved process for porting to other platforms
    • Developers isolated from managing WPK files

    All our samples and quickstarters have been updated to this new workflow; see the GitHub repository.


    More about new workflow.

    Scene Editor in progress

    After many developer requests, today we are excited to announce that we have started working on a Scene Editor tool.

    It will be a multiplatform tool, so developers will be able to use it on Windows, MacOS, or Linux.


    Community thread about this future tool.

    Digital Boss Monster powered by WaveEngine

    The Kickstarter for the Boss Monster card game for iOS & Android has been an outstanding success; the game will be developed using WaveEngine over the coming months.


    If you want to see a cool video of the prototype, here is the link.

    Don't miss the chance to be part of this Kickstarter: only a few hours left (link).

    More Open Source

    We continue to publish the source code of extensions for WaveEngine:

    Image Effects Library: the WaveEngine image library, with more than 20 lenses (published).
    The complete code is in our GitHub repository.

    Using Wave Engine in your applications built on Windows Forms, GTKSharp and WPF

    With this new version we want to help developers use Wave Engine in their Windows desktop applications: game teams that need to build their own level editor, or university research groups that need to integrate research technologies with the Wave Engine renderer and display three-dimensional results. In our Wave Engine GitHub samples repository you can now find demo projects that show how to integrate Wave Engine with Windows Forms, GtkSharp and Windows Presentation Foundation.

    Complete code in our Github repository.

    Better Visual Studio integration

    Current supported editions:

    • Visual Studio Express 2012 for Windows Desktop
    • Visual Studio Express 2012 for Web
    • Visual Studio Professional 2012
    • Visual Studio Premium 2012
    • Visual Studio Ultimate 2012
    • Visual Studio Express 2013 for Windows Desktop
    • Visual Studio Express 2013 for Web
    • Visual Studio Professional 2013
    • Visual Studio Premium 2013
    • Visual Studio Ultimate 2013

    We help you port your Wave Engine projects from version 1.3.5 to 1.4

    This version includes some important changes, so we want to help every Wave Engine developer port their game projects to the new 1.4 version.

    More info about this.

    The complete changelog of WaveEngine 1.4 (Dolphin) is available here.

    Download WaveEngine Now (Windows, MacOS, Linux)


  2. Banshee Game Development Toolkit - Introduction


  3. Making a Game with Blend4Web Part 4: Mobile Devices

    This is the fourth part of the Blend4Web gamedev tutorial. Today we'll add mobile devices support and program the touch controls. Before reading this article, please look at the first part of this series, in which the keyboard controls are implemented. We will use the Android and iOS 8 platforms for testing.

    Detecting mobile devices


    In general, mobile devices do not perform as well as desktops, so we'll lower the rendering quality for them. We'll detect a mobile device with the following function:

    function detect_mobile() {
        return /Android|webOS|iPhone|iPad|iPod|BlackBerry|Windows Phone/i
                .test(navigator.userAgent);
    }
    

    The init function now looks like this:

    exports.init = function() {
    
        var quality = detect_mobile() ? m_cfg.P_LOW : m_cfg.P_HIGH;
    
        m_app.init({
            canvas_container_id: "canvas3d",
            callback: init_cb,
            physics_enabled: true,
            quality: quality,
            show_fps: true,
            alpha: false,
            physics_uranium_path: "uranium.js"
        });
    }
    

    As we can see, a new initialization parameter - quality - has been added. In the P_LOW profile there are no shadows and post-processing effects. This will allow us to dramatically increase the performance on mobile devices.

    Controls elements on the HTML page


    Let's add the following elements to the HTML file:

    <!DOCTYPE html>
    <body>
        <div id="canvas3d"></div>
    
        <div id="controls">
        <div id="control_circle"></div>
        <div id="control_tap"></div>
        <div id="control_jump"></div>
        </div>
    </body>
    

    1. The control_circle element appears when the screen is touched and is used for directing the character.
    2. The control_tap element is a small marker that follows the finger.
    3. The control_jump element is a jump button located in the bottom right corner of the screen.

    By default all these elements are hidden (visibility property). They will become visible after the scene is loaded.

    The styles for these elements can be found in the game_example.css file.

    Processing the touch events


    Let's look at the callback which is executed at scene load:

    function load_cb(root) {
        _character = m_scs.get_first_character();
        _character_body = m_scs.get_object_by_empty_name("character",
                                                             "character_body");
    
        var right_arrow = m_ctl.create_custom_sensor(0);
        var left_arrow  = m_ctl.create_custom_sensor(0);
        var up_arrow    = m_ctl.create_custom_sensor(0);
        var down_arrow  = m_ctl.create_custom_sensor(0);
        var touch_jump  = m_ctl.create_custom_sensor(0);
    
        if(detect_mobile()) {
            document.getElementById("control_jump").style.visibility = "visible";
            setup_control_events(right_arrow, up_arrow,
                                 left_arrow, down_arrow, touch_jump);
        }
    
        setup_movement(up_arrow, down_arrow);
        setup_rotation(right_arrow, left_arrow);
    
        setup_jumping(touch_jump);
    
        setup_camera();
    }
    

    The new things here are the 5 sensors created with the controls.create_custom_sensor() method. We will change their values when the corresponding touch events are fired.

    If the detect_mobile() function returns true, the control_jump element is shown and the setup_control_events() function is called to set up the values of these new sensors (passed as arguments). This function is quite large, so we'll look at it step by step.

    var touch_start_pos = new Float32Array(2);
    
    var move_touch_idx;
    var jump_touch_idx;
    
    var tap_elem = document.getElementById("control_tap");
    var control_elem = document.getElementById("control_circle");
    var tap_elem_offset = tap_elem.clientWidth / 2;
    var ctrl_elem_offset = control_elem.clientWidth / 2;
    

    First of all, variables are declared to save the touch start point and the touch indices that correspond to the character's movement and jumping. The tap_elem and control_elem HTML elements are required in several callbacks.
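    Why save the indices at all? With multi-touch, the finger that steers the character and the finger that presses jump arrive in the same event lists, and a touch's identifier is the only reliable way to tell them apart later. A standalone sketch of the idea (plain JavaScript with fake touch objects; the helper names are our own, not part of the tutorial code):

```javascript
// Each active touch keeps a stable `identifier` for its whole lifetime, so we
// remember which identifier started movement and which one is jumping, and
// use that to classify later touchmove/touchend events.
var move_touch_idx = null;
var jump_touch_idx = null;

function register_touch(touch, isJumpButton) {
    if (isJumpButton)
        jump_touch_idx = touch.identifier;
    else
        move_touch_idx = touch.identifier;
}

function classify_touch(touch) {
    if (touch.identifier === move_touch_idx) return "move";
    if (touch.identifier === jump_touch_idx) return "jump";
    return "ignore";
}

register_touch({ identifier: 0 }, false); // first finger starts steering
register_touch({ identifier: 1 }, true);  // second finger hits the jump button
```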

    The touch_start_cb() callback


    In this function the beginning of a touch event is processed.

    function touch_start_cb(event) {
        event.preventDefault();
    
        var h = window.innerHeight;
        var w = window.innerWidth;
    
        var touches = event.changedTouches;
    
        for (var i = 0; i < touches.length; i++) {
            var touch = touches[i];
            var x = touch.clientX;
            var y = touch.clientY;
    
            if (x > w / 2) // right side of the screen
                break;
    
            touch_start_pos[0] = x;
            touch_start_pos[1] = y;
            move_touch_idx = touch.identifier;
    
            tap_elem.style.visibility = "visible";
            tap_elem.style.left = x - tap_elem_offset + "px";
            tap_elem.style.top  = y - tap_elem_offset + "px";
    
            control_elem.style.visibility = "visible";
            control_elem.style.left = x - ctrl_elem_offset + "px";
            control_elem.style.top  = y - ctrl_elem_offset + "px";
        }
    }
    

    Here we iterate through all the changed touches of the event (event.changedTouches) and discard the touches from the right half of the screen:

        if (x > w / 2) // right side of the screen
            break;
    

    If this condition is met, we save the touch point in touch_start_pos and the index of this touch in move_touch_idx. After that, two elements are rendered at the touch point: control_tap and control_circle. On the device screen this looks as follows:


    gm04_img01.jpg



    The touch_jump_cb() callback


    function touch_jump_cb (event) {
        event.preventDefault();
    
        var touches = event.changedTouches;
    
        for (var i = 0; i < touches.length; i++) {
            var touch = touches[i];
            m_ctl.set_custom_sensor(jump, 1);
            jump_touch_idx = touch.identifier;
        }
    }
    

    This callback is called when the control_jump button is touched:


    gm04_img02.jpg



    It just sets the jump sensor value to 1 and saves the corresponding touch index.

    The touch_move_cb() callback


    This function is very similar to the touch_start_cb() function. It processes finger movements on the screen.

        function touch_move_cb(event) {
            event.preventDefault();
    
            m_ctl.set_custom_sensor(up_arrow, 0);
            m_ctl.set_custom_sensor(down_arrow, 0);
            m_ctl.set_custom_sensor(left_arrow, 0);
            m_ctl.set_custom_sensor(right_arrow, 0);
    
            var h = window.innerHeight;
            var w = window.innerWidth;
    
            var touches = event.changedTouches;
    
            for (var i=0; i < touches.length; i++) {
                var touch = touches[i];
                var x = touch.clientX;
                var y = touch.clientY;
    
                if (x > w / 2) // right side of the screen
                    break;
    
                tap_elem.style.left = x - tap_elem_offset + "px";
                tap_elem.style.top  = y - tap_elem_offset + "px";
    
                var d_x = x - touch_start_pos[0];
                var d_y = y - touch_start_pos[1];
    
                var r = Math.sqrt(d_x * d_x + d_y * d_y);
    
                if (r < 16) // don't move if control is too close to the center
                    break;
    
                var cos = d_x / r;
                var sin = -d_y / r;
    
                if (cos > Math.cos(3 * Math.PI / 8))
                    m_ctl.set_custom_sensor(right_arrow, 1);
                else if (cos < -Math.cos(3 * Math.PI / 8))
                    m_ctl.set_custom_sensor(left_arrow, 1);
    
                if (sin > Math.sin(Math.PI / 8))
                    m_ctl.set_custom_sensor(up_arrow, 1);
                else if (sin < -Math.sin(Math.PI / 8))
                    m_ctl.set_custom_sensor(down_arrow, 1);
            }
        }
    

    The values of d_x and d_y denote how far the marker has shifted relative to the point where the touch started. From these increments the distance to that point is calculated, as well as the cosine and sine of the direction angle. This data fully defines the required behavior for any finger position by means of simple trigonometric transformations.

    As a result the ring is divided into 8 sectors, each assigned its own set of sensors: right_arrow, left_arrow, up_arrow, down_arrow.
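    The sector test can be distilled into a standalone function (same thresholds as the code above; the screen y axis points down, hence the sign flip on the sine):

```javascript
// Returns which direction sensors would fire for a finger offset (d_x, d_y)
// from the touch start point. cos(3*PI/8) equals sin(PI/8), so one threshold
// (~0.3827) splits the ring into 8 sectors: 4 pure directions plus 4 diagonal
// sectors where two sensors fire at once.
function direction_flags(d_x, d_y) {
    var r = Math.sqrt(d_x * d_x + d_y * d_y);
    var cos = d_x / r;
    var sin = -d_y / r;        // flip: screen y grows downward
    var lim = Math.cos(3 * Math.PI / 8);
    return {
        right: cos > lim,
        left:  cos < -lim,
        up:    sin > lim,
        down:  sin < -lim
    };
}

direction_flags(100, 0);    // { right: true, left: false, up: false, down: false }
direction_flags(100, -100); // up-right diagonal: both right and up fire
```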

    The touch_end_cb() callback


    This callback resets the sensors' values and the saved touch indices.

        function touch_end_cb(event) {
            event.preventDefault();
    
            var touches = event.changedTouches;
    
            for (var i=0; i < touches.length; i++) {
    
                if (touches[i].identifier == move_touch_idx) {
                    m_ctl.set_custom_sensor(up_arrow, 0);
                    m_ctl.set_custom_sensor(down_arrow, 0);
                    m_ctl.set_custom_sensor(left_arrow, 0);
                    m_ctl.set_custom_sensor(right_arrow, 0);
                    move_touch_idx = null;
                    tap_elem.style.visibility = "hidden";
                    control_elem.style.visibility = "hidden";
    
                } else if (touches[i].identifier == jump_touch_idx) {
                    m_ctl.set_custom_sensor(jump, 0);
                    jump_touch_idx = null;
                }
            }
        }
    

    Also, when the touch that was steering the character ends, the corresponding control elements are hidden:

        tap_elem.style.visibility = "hidden";
        control_elem.style.visibility = "hidden";
    


    gm04_img04.jpg



    Setting up the callbacks for the touch events


    And the last thing happening in the setup_control_events() function is setting up the callbacks for the corresponding touch events:

        document.getElementById("canvas3d").addEventListener("touchstart", touch_start_cb, false);
        document.getElementById("control_jump").addEventListener("touchstart", touch_jump_cb, false);
    
        document.getElementById("canvas3d").addEventListener("touchmove", touch_move_cb, false);
    
        document.getElementById("canvas3d").addEventListener("touchend", touch_end_cb, false);
        document.getElementById("controls").addEventListener("touchend", touch_end_cb, false);
    

    Please note that the touchend event is listened for on two HTML elements. That is because the user can release a finger both inside and outside of the controls element.

    Now we have finished working with events.

    Including the touch sensors into the system of controls


    Now we only have to add the created sensors to the existing system of controls. Let's check out the changes using the setup_movement() function as an example.

    function setup_movement(up_arrow, down_arrow) {
        var key_w     = m_ctl.create_keyboard_sensor(m_ctl.KEY_W);
        var key_s     = m_ctl.create_keyboard_sensor(m_ctl.KEY_S);
        var key_up    = m_ctl.create_keyboard_sensor(m_ctl.KEY_UP);
        var key_down  = m_ctl.create_keyboard_sensor(m_ctl.KEY_DOWN);
    
        var move_array = [
            key_w, key_up, up_arrow,
            key_s, key_down, down_arrow
        ];
    
        var forward_logic  = function(s){return (s[0] || s[1] || s[2])};
        var backward_logic = function(s){return (s[3] || s[4] || s[5])};
    
        function move_cb(obj, id, pulse) {
            if (pulse == 1) {
                switch(id) {
                case "FORWARD":
                    var move_dir = 1;
                    m_anim.apply(_character_body, "character_run_B4W_BAKED");
                    break;
                case "BACKWARD":
                    var move_dir = -1;
                    m_anim.apply(_character_body, "character_run_B4W_BAKED");
                    break;
                }
            } else {
                var move_dir = 0;
                m_anim.apply(_character_body, "character_idle_01_B4W_BAKED");
            }
    
            m_phy.set_character_move_dir(obj, move_dir, 0);
    
            m_anim.play(_character_body);
            m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
        };
    
        m_ctl.create_sensor_manifold(_character, "FORWARD", m_ctl.CT_TRIGGER,
            move_array, forward_logic, move_cb);
        m_ctl.create_sensor_manifold(_character, "BACKWARD", m_ctl.CT_TRIGGER,
            move_array, backward_logic, move_cb);
    
        m_anim.apply(_character_body, "character_idle_01_B4W_BAKED");
        m_anim.play(_character_body);
        m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
    }
    

    As we can see, the only changes are the set of sensors in move_array and the forward_logic() and backward_logic() functions, which now depend on the touch sensors as well.

    The setup_rotation() and setup_jumping() functions have changed in a similar way. They are listed below:

    function setup_rotation(right_arrow, left_arrow) {
        var key_a     = m_ctl.create_keyboard_sensor(m_ctl.KEY_A);
        var key_d     = m_ctl.create_keyboard_sensor(m_ctl.KEY_D);
        var key_left  = m_ctl.create_keyboard_sensor(m_ctl.KEY_LEFT);
        var key_right = m_ctl.create_keyboard_sensor(m_ctl.KEY_RIGHT);
    
        var elapsed_sensor = m_ctl.create_elapsed_sensor();
    
        var rotate_array = [
            key_a, key_left, left_arrow,
            key_d, key_right, right_arrow,
            elapsed_sensor,
        ];
    
        var left_logic  = function(s){return (s[0] || s[1] || s[2])};
        var right_logic = function(s){return (s[3] || s[4] || s[5])};
    
        function rotate_cb(obj, id, pulse) {
    
            var elapsed = m_ctl.get_sensor_value(obj, "LEFT", 6);
    
            if (pulse == 1) {
                switch(id) {
                case "LEFT":
                    m_phy.character_rotation_inc(obj, elapsed * ROT_SPEED, 0);
                    break;
                case "RIGHT":
                    m_phy.character_rotation_inc(obj, -elapsed * ROT_SPEED, 0);
                    break;
                }
            }
        }
    
        m_ctl.create_sensor_manifold(_character, "LEFT", m_ctl.CT_CONTINUOUS,
            rotate_array, left_logic, rotate_cb);
        m_ctl.create_sensor_manifold(_character, "RIGHT", m_ctl.CT_CONTINUOUS,
            rotate_array, right_logic, rotate_cb);
    }
    
    function setup_jumping(touch_jump) {
        var key_space = m_ctl.create_keyboard_sensor(m_ctl.KEY_SPACE);
    
        var jump_cb = function(obj, id, pulse) {
            if (pulse == 1) {
                m_phy.character_jump(obj);
            }
        }
    
        m_ctl.create_sensor_manifold(_character, "JUMP", m_ctl.CT_TRIGGER,
            [key_space, touch_jump], function(s){return s[0] || s[1]}, jump_cb);
    }
    

    And the camera again


    Finally, let's return to the camera. In response to community feedback, we've introduced the possibility to tweak the stiffness of the camera constraint. The function call now looks like this:

        m_cons.append_semi_soft_cam(camera, _character, CAM_OFFSET, CAM_SOFTNESS);
    

    The CAM_SOFTNESS constant is defined in the beginning of the file and its value is 0.2.
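    How the softness value translates into camera motion isn't spelled out here, but a common model for this kind of constraint is per-frame smoothing, where the camera covers a fixed fraction of its remaining distance to the target every frame. A sketch under that assumption (plain JavaScript, not the Blend4Web API):

```javascript
// Hypothetical smoothing model (an assumption, not Blend4Web internals): each
// frame the camera moves a fraction `softness` of the remaining distance
// toward its target, so smaller values give a softer, laggier follow and
// a value of 1.0 snaps instantly.
var CAM_SOFTNESS = 0.2; // the value used in the tutorial

function soften(camPos, targetPos, softness) {
    return camPos + (targetPos - camPos) * softness;
}

var cam = 0;
var target = 10;
for (var frame = 0; frame < 3; frame++)
    cam = soften(cam, target, CAM_SOFTNESS);
// After 3 frames the camera has covered 1 - 0.8^3 = 48.8% of the distance.
```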

    Conclusion


    At this stage, programming the controls for mobile devices is finished. In the next tutorials we'll implement the gameplay and look at some other features of the Blend4Web physics engine.

    Link to the standalone application

    The source files of the application and the scene are part of the free Blend4Web SDK distribution.

    • Sep 03 2014 02:02 PM
    • by Spunya
  4. The Art of Feeding Time: Branding

    Although a game's branding rarely has much to do with its gameplay, it's still a very important forward-facing aspect to consider.


    Initial concepts for a Feeding Time logo.


    For Feeding Time's logo, we decided to create numerous designs and get some feedback before committing to a single concept.

    Our early mockups featured both a clock and various types of food. Despite seeming like a perfect fit, the analog clock caused quite a bit of confusion in-game. We wanted a numerical timer to clearly indicate a level's duration, but this was criticized when placed on an analog clock background. Since the concept already prompted some misunderstandings -- and a digital watch was too high-tech for the game's rustic ambiance -- we decided to avoid it for the logo.

    The food concepts were more readable than the clock, but Feeding Time was meant to be a game where any type of animal could make an appearance. Consequently we decided to avoid single food-types to prevent the logo from being associated with just one animal.


    Even more logo concepts. They're important!


    A few more variations included a placemat and a dinner bell, but we didn't feel like these really captured the look of the game. We were trying to be clever, but the end results weren't quite there.

    We felt that the designs came across as somewhat sterile, resembling the polished vector logos of large conglomerates, and they looked bland next to the in-game visuals.


    Our final logo.


    Ultimately we decided to go with big, bubbly letters on top of a simple apéritif salad. It was bright and colourful, and fit right in with the restaurant-themed UI we were pursuing at the time. We even used the cloche-unveiling motif in the trailer!

    One final extra touch was a bite mark on the top-right letter. We liked the idea in the early carrot-logo concept, and felt that it added an extra bit of playfulness.


    Initial sketches for the app icon.


    The app icon was a bit easier to nail down, as the small amount of space meant we no longer had to avoid specific foods and animals. We still tried out a few different sketches, but the dog-and-bone was easily the winner. It matched the in-game art, represented the core of the gameplay, and was fairly readable at all resolutions.

    To help us gauge the clarity of the icon, we used the App Icon Template.

    This package contains a large Photoshop file with a Smart Object embedded in various portholes and device screenshots. The Smart Object can be replaced with any logo to quickly get a feel for how it appears at different resolutions and how it is framed within the AppStore. This was particularly helpful with the bordering, as iOS 7 increased the corner radius, making the icons appear rounder.


    Final icon iterations for Feeding Time.


    Despite all the vibrant aesthetics, we still felt that Feeding Time was missing a face: a central identifying character.

    Our first shot at a "mascot" was a grandmother who sent the player to various parts of the world to feed the hungry animals there. A grandmother fretting over everyone having enough to eat is a fairly identifiable concept, and it fit nicely with the stall-delivery motif.


    Our initial clerk was actually a babushka with some not-so-kindly variations.


    However, there was one problem: the introductory animation showed the grandmother tossing various types of food into her basket and random animals periodically snatching 'em away.

    We thought this sequence did a good job of previewing the gameplay in a fairly cute and innocuous fashion, but the feedback was quite negative. People were very displeased that all the nasty animals were stealing from the poor old woman!


    People were quite appalled by the rapscallion animals when the clerk was played by a kindly grandma.


    It was a big letdown as we really liked the animation, but much to our surprise we were told it'd still work OK with a slightly younger male clerk. A quick mockup later, and everyone was pleased with the now seemingly playful shenanigans of the animals!

    Having swapped the kindly babushka for a jolly uncle archetype, we also shrunk down the in-game menus and inserted the character above them to add an extra dash of personality.


    The clerk as he appears over two pause menus, a bonus game in which the player gets a low score, and a bonus game in which the player gets a high score.


    The clerk made a substantial impact by keeping the player company on their journey, so we decided to illustrate a few more expressions. We also made these reflect the player's performance, helping to link them with in-game events such as bonus-goal completion and minigame scores.


    The official Feeding Time website complete with our logo, title-screen stall and background, a happy clerk, and a bunch of dressed up animals.


    Finally, we used the clerk and various game assets for the Feeding Time website and other Incubator Games outlets. We made sure to support 3rd generation iPads with a resolution of 2048x1536, which came in handy for creating various backgrounds, banners, and icons used on our Twitter, Facebook, YouTube, tumblr, SlideDB, etc.

    Although branding all these sites wasn't a must, it helped to unify our key message: Feeding Time is now available!

    Article Update Log


    30 July 2014: Initial release

  5. Making a Game with Blend4Web Part 2: Models for the Location

    In this article we will describe the process of creating the models for the location - geometry, textures and materials. This article is aimed at experienced Blender users that would like to familiarize themselves with creating game content for the Blend4Web engine.

    Graphical content style


    In order to create the game atmosphere, a non-photorealistic cartoon setting was chosen. The character and environment proportions have been deliberately exaggerated to give the gameplay a comical, lighthearted feel.

    Location elements


    This location consists of the following elements:
    • the character's action area: 5 platforms on which the main game action takes place;
    • the background environment, the role of which will be performed by less-detailed ash-colored rocks;
    • lava covering most of the scene surface.
    At this stage the source blend files of models and scenes are organized as follows:


    ex02_p02_img01.jpg


    1. env_stuff.blend - the file with the scene's environment elements which the character is going to move on;
    2. character_model.blend - the file containing the character's geometry, materials and armature;
    3. character_animation.blend - the file which has the character's group of objects and animation (including the baked one) linked to it;
    4. main_scene.blend - the scene which has the environment elements from other files linked to it. It also contains the lava model, collision geometry and the lighting settings;
    5. example2.blend - the main file, which has the scene elements and the character linked to it (in the future more game elements will be added here).

    In this article we will describe the creation of simple low-poly geometry for the environment elements and the 5 central islands. As the game is intended for mobile devices we decided to manage without normal maps and use only the diffuse and specular maps.

    Making the geometry of the central islands


    ex02_p02_img02.jpg?v=2014073110210020140


    First of all we will make the central islands in order to establish the scene scale. This process can be divided into 3 steps:

    1) First, a flat outline of the future islands was created from single vertices, which were then joined into polygons and triangulated for convenient editing where needed.


    ex02_p02_img03.jpg?v=2014072916520120140


    2) The Solidify modifier, with its parameter set to 0.3, was then added to the flat outline, extruding the geometry into a volume.


    ex02_p02_img04.jpg?v=2014072916520120140


    3) At the last stage the Solidify modifier was applied to get the mesh for hand editing. The mesh was subdivided where needed at the edges of the islands. According to the final vision cavities were added and the mesh was changed to create the illusion of rock fragments with hollows and projections. The edges were sharpened (using Edge Sharp), after which the Edge Split modifier was added with the Sharp Edges option enabled. The result is that a well-outlined shadow has appeared around the islands.

    Note:  It's not recommended to apply modifiers (using the Apply button). Enable the Apply Modifiers checkbox in the object settings on the Blend4Web panel instead; as a result the modifiers will be applied to the geometry automatically on export.


    ex02_p02_img05.jpg?v=2014073110210020140


    Texturing the central islands


    Now that the geometry for the main islands has been created, let's move on to texturing and setting up the material for baking. The textures were created using a combination of baking and hand-painting techniques.

    Four textures were prepared altogether.


    ex02_p02_img06.jpg?v=2014072916520120140


    At the first stage let's define the color, adding small spots and cracks to create the effect of rough, dusty stone. To paint these bumps, texture brushes were used; they can be downloaded from the Internet or drawn by yourself if necessary.


    ex02_p02_img07.jpg?v=2014072916520120140


    At the second stage the ambient occlusion effect was baked. Because the geometry is low-poly, relatively sharp transitions between light and shadow appeared as a result. These can be slightly blurred with a Gaussian Blur filter in a graphical editor.


    ex02_p02_img08.jpg?v=2014072916520120140


    The third stage is the most time consuming - painting the black and white texture by hand in the Texture Painting mode. It was laid over the other two, lightening and darkening certain areas. It's necessary to keep the model's geometry in mind so that the darker areas fall mostly in cracks, with the brighter ones on the sharp geometry angles. A generic brush was used with stylus pressure sensitivity turned on.


    ex02_p02_img09.jpg?v=2014072916520120140


    The color turned out to be monotonous, so a few pale spots imitating volcanic dust and stone scratches were added. In order to get more flexibility while texturing without altering the original color texture, yet another texture was introduced. On this texture the light spots decolorize the previous three textures, while the dark spots leave the color unchanged.


    ex02_p02_img10.jpg?v=2014072916520120140


    You can see how the created textures were combined on the auxiliary node material scheme below.


    ex02_p02_img11.jpg?v=2014072916520120140


    The color of the diffuse texture (1) was multiplied by itself to increase contrast in dark places.

    After that the color was burned a bit in the darker places using baked ambient occlusion (2), and the hand-painted texture (3) was layered on top - the Overlay node gave the best result.

    At the next stage the texture with baked ambient occlusion (2) was layered again - this time with the Multiply node - in order to darken the textures in certain places.

    Finally the fourth texture (4) was used as a mask, using which the result of the texture decolorizing (using Hue/Saturation) and the original color texture (1) were mixed together.

    The specular map was made from applying the Squeeze Value node to the overall result.
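
    For reference, the Mix node modes used in this scheme boil down to simple per-channel formulas. Below is a plain JavaScript sketch of the standard blend math (values normalized to the 0-1 range); this is an illustration, not Blend4Web code:

```javascript
// Per-channel blend formulas matching the standard Multiply and
// Overlay mix modes (inputs and outputs in the 0..1 range).

// Multiply: always darkens; used here to burn the color in the
// ambient-occluded areas.
function blendMultiply(base, blend) {
  return base * blend;
}

// Overlay: multiplies dark areas and screens light ones, so it both
// darkens below the 0.5 midpoint and lightens above it - which is why
// it worked best for layering the hand-painted texture.
function blendOverlay(base, blend) {
  return base < 0.5
    ? 2 * base * blend
    : 1 - 2 * (1 - base) * (1 - blend);
}
```

    A mid-gray (0.5) blend value leaves the base untouched under Overlay, which makes the hand-painted texture easy to reason about: paint lighter than gray to lighten, darker to darken.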

    As a result we have the following picture.


    ex02_p02_img12.jpg?v=2014072916520120140


    Creating the background rocks


    The geometry of rocks was made according to a similar technology although some differences are present. First of all we created a low-poly geometry of the required form. On top of it we added the Bevel modifier with an angle threshold, which added some beveling to the sharpest geometry places, softening the lighting at these places.


    ex02_p02_img13.jpg?v=2014072916520120140


    The rock textures were created approximately in the same way as the island textures. This time a texture with decolorizing was not used because such a level of detail is excessive for the background. Also the texture created with the texture painting method is less detailed. Below you can see the final three textures and the results of laying them on top of the geometry.


    ex02_p02_img14.jpg?v=2014072916520120140


    The texture combination scheme was also simplified.


    ex02_p02_img15.jpg?v=2014072916520120140


    First comes the color map (1), over which goes the baked ambient occlusion (2), and finally - the hand-painted texture (3).

    The specular map was created from the color texture. To do this a single texture channel (Separate RGB) was used, which was corrected (Squeeze Value) and given into the material as the specular color.

    There is another special feature in this scheme which makes it different from the previous one - the dirty map baked into the vertex color, overlayed (Overlay node) in order to create contrast between the cavities and elevations of the geometry.


    ex02_p02_img16.jpg?v=2014072916520120140


    The final result of texturing the background rocks:


    ex02_p02_img17.jpg?v=2014072916520120140


    Optimizing the location elements


    Let's start optimizing the elements we have and preparing them for display in Blend4Web.

    First of all we need to combine all the textures of the above-mentioned elements (background rocks and the islands) into a single texture atlas and then re-bake them into a single texture map. To do this, let's combine the UV maps of all the geometry into a single UV map using the Texture Atlas addon.

    Note:  The Texture Atlas addon can be activated in Blender's settings under the Addons tab (UV category).


    ex02_p02_img18.jpg?v=2014072916520120140


    In the texture atlas mode, let's place the UV maps of every mesh so that they fill the future texture area evenly.

    Note:  It's not necessary to follow the same scale for all elements. It's recommended to allow more space for foreground elements (the islands).


    ex02_p02_img19.jpg?v=2014072916520120140


    After that let's bake the diffuse texture and the specular map from the materials of rocks and islands.


    ex02_p02_img20.jpg?v=2014072916520120140


    Note:  In order to save video memory, the specular map was packed into the alpha channel of the diffuse texture. As a result we got only one file.
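
    The packing itself amounts to writing the grayscale specular values into the fourth byte of each RGBA pixel. A hedged JavaScript sketch of the idea (the article does this during baking in Blender; the function names here are made up for illustration):

```javascript
// Pack a grayscale specular map into the alpha channel of RGBA pixel
// data (one byte per channel, as in a canvas ImageData buffer).
// Both inputs must cover the same pixel count.
function packSpecularIntoAlpha(diffuseRGBA, specularGray) {
  var out = new Uint8Array(diffuseRGBA);   // copy RGB (and old A) as-is
  for (var i = 0; i < specularGray.length; i++)
    out[i * 4 + 3] = specularGray[i];      // overwrite A with specular
  return out;
}

// The material side then reads the specular value back from alpha.
function unpackSpecular(rgba, pixelIndex) {
  return rgba[pixelIndex * 4 + 3];
}
```

    This trades the alpha channel for the specular map, which is fine here because the rock geometry is fully opaque.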


    Let's place all the environment elements into a separate file (i.e. a library): env_stuff.blend. For convenience we will put them on different layers. Let's place the bottom of each element's mesh at the coordinate origin. For every element we'll need a separate group with the same name.


    ex02_p02_img21.jpg?v=2014072916520120140


    After the elements are gathered in the library, we can start creating the material. The material for all the library elements - both the islands and the background rocks - is the same. This lets the engine automatically merge the geometry of all these elements into a single object, which significantly improves performance by reducing the number of draw calls.

    Setting up the material


    The previously baked diffuse texture (1), into the alpha channel of which the specular map is packed, serves as the basis for the node material.


    ex02_p02_img22.jpg?v=2014072916520120140


    Our scene includes lava with which the environment elements will be contacting. Let's create the effect of the rock glowing and being heated in the contact places. To do this we will use a vertex mask (2), which we will apply to all library elements - and paint the vertices along the bottom geometry line.


    ex02_p02_img23.jpg?v=2014072916520120140


    The vertex mask was modified several times by the Squeeze Value node. First of all the less hot color of the lava glow (3) is placed on top of the texture using a more blurred mask. Then a brighter yellow color (4) is added near the contact places using a slightly tightened mask - in order to imitate a fritted rock.

    Lava should illuminate the rock from below, so in order to avoid shadowing in the lava-contacting places we'll pass the same vertex mask into the material's Emit socket.

    We have one last thing to do - pass (5) the specular value from the diffuse texture's alpha channel to the material's Spec socket.


    ex02_p02_img24.jpg?v=2014072916520120140


    Object settings


    Let's enable the "Apply Modifiers" checkbox (as mentioned above) and also the "Shadows: Receive" checkbox in the object settings of the islands.


    ex02_p02_img25.jpg?v=2014072916520120140


    Physics


    Let's create exact copies of the island's geometry (named _collision for convenience). For these meshes we'll replace the material by a new material (named collision), and enable the "Special: Collision" checkbox in its settings (Blend4Web panel). This material will be used by the physics engine for collisions.

    Let's add the resulting objects into the same groups as the islands themselves.


    ex02_p02_img26.jpg?v=2014072916520120140


    Conclusion


    We've finished creating the library of the environment models. In one of the upcoming articles we'll demonstrate how the final game location was assembled and also describe making the lava effect.

    Link to the standalone application

    The source files of the application and the scene are part of the free Blend4Web SDK distribution.

    • Aug 18 2014 09:27 AM
    • by Spunya
  6. Making a Game with Blend4Web Part 3: Level Design

    This is the third article in the Making a Game series. In this article we'll consider assembling the game scene using the models prepared at the previous stage, setting up the lighting and the environment, and also we'll look in detail at creating the lava effect.

    Assembling the game scene


    Let's assemble the scene's visual content in the main_scene.blend file. We'll add the previously prepared environment elements from the env_stuff.blend file.

    Open the env_stuff.blend file via the File -> Link menu, go to the Group section, add the geometry of the central islands (1) and the background rocks (2) and arrange them on the scene.


    ex02_p03_img02.jpg?v=2014080111562320140


    Now we need to create the surface geometry of the future lava. The surface can be inflated a bit to deepen the effect of the horizon receding into the distance. Let's prepare 5 holes in the center, matching the outlines of the 5 central islands, for the vertex mask which we'll introduce later.

    We'll also copy this geometry and assign the collision material to it as it is described in the previous article.


    ex02_p03_img03.jpg?v=2014080111562320140


    A simple cube will serve us as the environment with its center located at the horizon level for convenience. The cube's normals must be directed inside.

    Let's set up a simple node material for it. Take a vertical gradient (1), located at the level of the proposed horizon, from the Global socket. After squeezing and shifting it with the Squeeze Value node (2) we add the color (3). The result is passed directly into the Output node, without an intermediate Material node, in order to make this object shadeless.
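
    Conceptually, the gradient remaps world-space height around the horizon into a 0-1 blend factor and mixes two colors with it. A rough JavaScript sketch of that math (the actual material does this with Squeeze Value and Mix nodes; the range parameter here is invented):

```javascript
// Blend factor from world height: 0 below the horizon band, 1 well
// above it; 'range' controls how sharp the transition is (assumed).
function skyBlend(y, horizonY, range) {
  var t = (y - horizonY) / range + 0.5;  // center the band on the horizon
  return Math.min(1, Math.max(0, t));    // clamp to [0, 1]
}

// Linear mix between horizon and zenith colors (RGB triplets).
function mixColor(horizonColor, zenithColor, t) {
  return horizonColor.map(function (c, i) {
    return c + (zenithColor[i] - c) * t;
  });
}
```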


    ex02_p03_img04.jpg?v=2014080111562320140


    Setting up the environment


    We'll set up the fog under the World tab using the Fog density and Fog color parameters. Let's enable ambient lighting with the Environment Lighting option and set up its intensity (Energy). We'll select the two-color hemispheric lighting model Sky Color and tweak the Zenith Color and Horizon Color.


    ex02_p03_img05.jpg?v=2014080111562320140


    Next place two light sources into the scene. The first one of the Sun type will illuminate the scene from above. Enable the Generate Shadows checkbox for it to be a shadow caster. We'll put the second light source (also Sun) below and direct it vertically upward. This source will imitate the lighting from lava.


    ex02_p03_img06.jpg?v=2014080111562320140


    Then add a camera for viewing the exported scene. Make sure that the camera's Move style is Target (look at the camera settings on the Blend4Web panel), i.e. the camera is rotating around a certain pivot. Let's define the position of this pivot on the same panel (Target location).

    Also, distance and vertical angle limits can be assigned to the camera for convenient scene observation in the Camera limits section.


    ex02_p03_img07.jpg?v=2014080111562320140


    Adding the scene to the scene viewer


    At this stage a test export of the scene can be performed: File -> Export -> Blend4Web (.json). Let's add the exported scene to the scene viewer's list external/deploy/assets/assets.json using any text editor, for example:

    {
        "name": "Tutorials",
        "items": [

            ...

            {
                "name": "Game Example",
                "load_file": "../tutorials/examples/example2/main_scene.json"
            },

            ...
        ]
    }
    

    Then we can open the scene viewer apps_dev/viewer/viewer_dev.html with a browser, go to the Scenes panel and select the scene which is added to the Tutorials category.


    ex02_p03_img08.jpg?v=2014080111562320140


    The tools of the scene viewer are useful for tweaking scene parameters in real time.

    Setting up the lava material


    We'll prepare two textures by hand for the lava material: one a repeating seamless diffuse texture, the other a black and white texture which we'll use as a mask. To reduce video memory consumption, the mask is packed into the alpha channel of the diffuse texture.


    ex02_p03_img09.jpg?v=2014080111562320140


    The material consists of several blocks. The first block (1) constantly shifts the UV coordinates for the black and white mask using the TIME (2) node in order to imitate the lava flow movement.
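
    Conceptually, the shift in block (1) just adds a time-scaled offset to the U coordinate and wraps it back into the 0-1 tile so the seamless texture repeats. A hedged JavaScript sketch of the idea (the real work happens inside the node material; the speed value is arbitrary):

```javascript
// Scroll a UV coordinate over time and wrap it back into the 0..1
// tile so the seamless texture repeats; 'speed' is an assumed value.
function scrollUV(uv, time, speed) {
  var u = (uv[0] + time * speed) % 1;
  if (u < 0) u += 1;                 // keep the result in [0, 1)
  return [u, uv[1]];
}
```

    Because the texture is seamless, the wrap at the tile boundary is invisible, so the lava appears to flow continuously.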


    ex02_p03_img10.jpg?v=2014080111562320140


    Note:  
    The TIME node is basically a node group with a reserved name. This group is replaced by the time-generating algorithm in the Blend4Web engine. To add this node it's enough to create a node group named TIME which has an output of the Value type. It can be left empty, or contain, for example, a Value node for convenient testing right in Blender's viewport.


    In the other two blocks (4 and 5) the modified mask stretches and squeezes the UV in certain places, creating a swirling flow effect for the lava. The results are mixed together in block 6 to imitate the lava flow.

    Furthermore, the lava geometry has a vertex mask (3), using which a clean color (7) is added in the end to visualize the lava's burning hot spots.


    ex02_p03_img11.jpg?v=2014080111562320140


    To simulate the lava glow the black and white mask (8) is passed to the Emit socket. The mask itself is derived from the modified lava texture and from a special procedural mask (9), which reduces the glow effect with distance.
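
    The distance-dependent part of the glow can be thought of as a simple falloff multiplied into the mask. A rough JavaScript sketch of the idea (the actual procedural mask is built with material nodes; maxDist is an invented parameter):

```javascript
// Emission intensity: the texture mask attenuated linearly with
// distance from the viewer, vanishing entirely at maxDist.
function lavaEmit(maskValue, dist, maxDist) {
  var falloff = Math.max(0, 1 - dist / maxDist);
  return maskValue * falloff;
}
```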

    Conclusion


    This is where the assembling of the game scene is finished. The result can be exported and viewed in the engine. In one of the upcoming articles we'll show the process of modeling and texturing the visual content for the character and preparing it for the Blend4Web engine.


    ex02_p03_img12.jpg?v=2014080111562320140



    Link to the standalone application

    The source files of the application and the scene are part of the free Blend4Web SDK distribution.

    • Aug 18 2014 09:36 AM
    • by Spunya
  7. Making a Game with Blend4Web Part 1: The Character

    Today we're going to start creating a fully-functional game app with Blend4Web.

    Gameplay


    Let's set up the gameplay. The player - a brave warrior - moves around a limited number of platforms. Red-hot stones keep falling on him from the sky and must be avoided; their number increases with time. Different bonuses which give various advantages appear on the location from time to time. The player's goal is to stay alive as long as possible. Later we'll add some other interesting features, but for now we'll stick to these. This small game will have a third-person view.

    In the future, the game will support mobile devices and a score system. And now we'll create the app, load the scene and add the keyboard controls for the animated character. Let's begin!

    Setting up the scene


    Game scenes are created in Blender and then are exported and loaded into applications. Let's use the files made by our artist which are located in the blend/ directory. The creation of these resources will be described in a separate article.

    Let's open the character_model.blend file and set up the character. We'll do this as follows: switch to the Blender Game mode and select the character_collider object - the character's physical object.


    ex02_img01.jpg?v=20140717114607201406061


    Under the Physics tab we'll specify the settings as pictured above. Note that the physics type must be either Dynamic or Rigid Body, otherwise the character will be motionless.

    The character_collider object is the parent for the "graphical" character model, which, therefore, will follow the invisible physical model. Note that the lower point heights of the capsule and the avatar differ a bit. It was done to compensate for the Step height parameter, which lifts the character above the surface in order to pass small obstacles.

    Now let's open the main game_example.blend file, from which we'll export the scene.


    ex02_img02.jpg?v=20140717114607201406061


    The following components are linked to this file:

    1. The character group of objects (from the character_model.blend file).
    2. The environment group of objects (from the main_scene.blend file) - this group contains the static scene models and also their copies with the collision materials.
    3. The baked animations character_idle_01_B4W_BAKED and character_run_B4W_BAKED (from the character_animation.blend file).

    NOTE:
    To link components from another file go to File -> Link and select the file. Then go to the corresponding datablock and select the components you wish. You can link anything you want - from a single animation to a whole scene.

    Make sure that the Enable physics checkbox is turned on in the scene settings.

    The scene is ready; let's move on to programming.

    Preparing the necessary files


    Let's place the following files into the project's root:

    1. The engine b4w.min.js
    2. The addon for the engine app.js
    3. The physics engine uranium.js

    The files we'll be working with are: game_example.html and game_example.js.

    Let's link all the necessary scripts to the HTML file:

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
        <script type="text/javascript" src="b4w.min.js"></script>
        <script type="text/javascript" src="app.js"></script>
        <script type="text/javascript" src="game_example.js"></script>
    
        <style>
            body {
                margin: 0;
                padding: 0;
            }
        </style>
    
    </head>
    <body>
    <div id="canvas3d"></div>
    </body>
    </html>
    

    Next we'll open the game_example.js script and add the following code:

    "use strict";
    
    if (b4w.module_check("game_example_main"))
        throw "Failed to register module: game_example_main";
    
    b4w.register("game_example_main", function(exports, require) {
    
    var m_anim  = require("animation");
    var m_app   = require("app");
    var m_main  = require("main");
    var m_data  = require("data");
    var m_ctl   = require("controls");
    var m_phy   = require("physics");
    var m_cons  = require("constraints");
    var m_scs   = require("scenes");
    var m_trans = require("transform");
    var m_cfg   = require("config");
    
    var _character;
    var _character_body;
    
    var ROT_SPEED = 1.5;
    var CAMERA_OFFSET = new Float32Array([0, 1.5, -4]);
    
    exports.init = function() {
        m_app.init({
            canvas_container_id: "canvas3d",
            callback: init_cb,
            physics_enabled: true,
            alpha: false,
            physics_uranium_path: "uranium.js"
        });
    }
    
    function init_cb(canvas_elem, success) {
    
        if (!success) {
            console.log("b4w init failure");
            return;
        }
    
        m_app.enable_controls(canvas_elem);
    
        window.onresize = on_resize;
        on_resize();
        load();
    }
    
    function on_resize() {
        var w = window.innerWidth;
        var h = window.innerHeight;
        m_main.resize(w, h);
    };
    
    function load() {
        m_data.load("game_example.json", load_cb);
    }
    
    function load_cb(root) {
    
    }
    
    });
    
    b4w.require("game_example_main").init();
    

    If you have read the Creating an Interactive Web Application tutorial, there won't be much new for you here. At this stage all the necessary modules are linked, and the init function and two callbacks are defined. The on_resize function resizes the rendering area whenever the app window changes size.

    Pay attention to the additional physics_uranium_path initialization parameter which specifies the path to the physics engine file.

    The global variable _character is declared for the physics object while _character_body is defined for the animated model. Also the two constants ROT_SPEED and CAMERA_OFFSET are declared, which we'll use later.

    At this stage we can run the app and look at the static scene with the character motionless.

    Moving the character


    Let's add the following code into the loading callback:

    function load_cb(root) {
        _character = m_scs.get_first_character();
        _character_body = m_scs.get_object_by_empty_name("character",
                                                         "character_body");
    
        setup_movement();
        setup_rotation();
        setup_jumping();
    
        m_anim.apply(_character_body, "character_idle_01");
        m_anim.play(_character_body);
        m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
    }
    

    First we save the physical character model to the _character variable. The animated model is saved as _character_body.

    The last three lines are responsible for setting up the character's starting animation.
    • animation.apply() - applies the animation with the given name,
    • animation.play() - plays it back,
    • animation.set_behavior() - changes the animation behavior, in our case making it cyclic.
    NOTE:
    Please note that skeletal animation should be applied to the character object which has an Armature modifier set up in Blender for it.

    Before defining the setup_movement(), setup_rotation() and setup_jumping() functions, it's important to understand how Blend4Web's event-driven model works. We recommend reading the corresponding section of the user manual. Here we will only take a brief look at it.

    In order to generate an event when certain conditions are met, a sensor manifold should be created.

    NOTE:
    You can check out all the possible sensors in the corresponding section of the API documentation.

    Next we have to define a logic function describing which states (true or false) the manifold's sensors must be in for the sensor callback to receive a positive result. Then we should create a callback containing the actions to perform. Finally, the controls.create_sensor_manifold() function should be called for the sensor manifold; it is responsible for processing the sensors' values. Let's see how this works in our case.
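
    Stripped of the engine specifics, a trigger-type manifold is essentially edge detection on the result of the logic function. A simplified JavaScript sketch of the pattern (not the actual Blend4Web implementation; the pulse values are reduced to 1/0 here):

```javascript
// Minimal sketch of a CT_TRIGGER-style manifold: the logic function
// maps the sensor state array to a boolean, and the callback fires
// only when that boolean changes (edge-triggered).
function createTriggerManifold(id, logicFn, callback) {
  var prev = false;
  return function update(sensorValues) {
    var cur = !!logicFn(sensorValues);
    if (cur !== prev) {
      prev = cur;
      callback(id, cur ? 1 : 0);  // pulse 1 on activation, 0 on release
    }
  };
}
```

    Feeding the same sensor state twice produces no second pulse, which is exactly why the movement callbacks below only switch animations on state changes.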

    Define the setup_movement() function:

    function setup_movement() {
        var key_w     = m_ctl.create_keyboard_sensor(m_ctl.KEY_W);
        var key_s     = m_ctl.create_keyboard_sensor(m_ctl.KEY_S);
        var key_up    = m_ctl.create_keyboard_sensor(m_ctl.KEY_UP);
        var key_down  = m_ctl.create_keyboard_sensor(m_ctl.KEY_DOWN);
    
        var move_array = [
            key_w, key_up,
            key_s, key_down
    ];
    
    var forward_logic  = function(s){return (s[0] || s[1])};
    var backward_logic = function(s){return (s[2] || s[3])};
    
        function move_cb(obj, id, pulse) {
            if (pulse == 1) {
                switch(id) {
                case "FORWARD":
                    var move_dir = 1;
                    m_anim.apply(_character_body, "character_run");
                    break;
                case "BACKWARD":
                    var move_dir = -1;
                    m_anim.apply(_character_body, "character_run");
                    break;
                }
            } else {
                var move_dir = 0;
                m_anim.apply(_character_body, "character_idle_01");
            }
    
            m_phy.set_character_move_dir(obj, move_dir, 0);
    
            m_anim.play(_character_body);
            m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
        };
    
        m_ctl.create_sensor_manifold(_character, "FORWARD", m_ctl.CT_TRIGGER,
            move_array, forward_logic, move_cb);
        m_ctl.create_sensor_manifold(_character, "BACKWARD", m_ctl.CT_TRIGGER,
            move_array, backward_logic, move_cb);
    }
    

    Let's create 4 keyboard sensors - for the up arrow, the down arrow, and the S and W keys. Two would have been enough, but we want to mirror the controls on the letter keys as well as on the arrow keys. We'll append them to the move_array.

    Now to define the logic functions. We want the movement to occur upon pressing one of two keys in move_array.

    This behavior is implemented through the following logic function:

    function(s) { return (s[0] || s[1]) }
    

    The most important things happen in the move_cb() function.

    Here obj is our character. The pulse argument becomes 1 when any of the defined keys is pressed. We decide if the character is moved forward (move_dir = 1) or backward (move_dir = -1) based on id, which corresponds to one of the sensor manifolds defined below. Also the run and idle animations are switched inside the same blocks.

    Moving the character is done through the following call:

    m_phy.set_character_move_dir(obj, move_dir, 0);
    

    Two sensor manifolds for moving forward and backward are created at the end of the setup_movement() function. They have the CT_TRIGGER type, i.e. they snap into action every time the sensor values change.

    At this stage the character is already able to run forward and backward. Now let's add the ability to turn.

    Turning the character


    Here is the definition for the setup_rotation() function:

    function setup_rotation() {
        var key_a     = m_ctl.create_keyboard_sensor(m_ctl.KEY_A);
        var key_d     = m_ctl.create_keyboard_sensor(m_ctl.KEY_D);
        var key_left  = m_ctl.create_keyboard_sensor(m_ctl.KEY_LEFT);
        var key_right = m_ctl.create_keyboard_sensor(m_ctl.KEY_RIGHT);
    
        var elapsed_sensor = m_ctl.create_elapsed_sensor();
    
        var rotate_array = [
            key_a, key_left,
            key_d, key_right,
            elapsed_sensor
    ];
    
    var left_logic  = function(s){return (s[0] || s[1])};
    var right_logic = function(s){return (s[2] || s[3])};
    
        function rotate_cb(obj, id, pulse) {
    
            var elapsed = m_ctl.get_sensor_value(obj, "LEFT", 4);
    
            if (pulse == 1) {
                switch(id) {
                case "LEFT":
                    m_phy.character_rotation_inc(obj, elapsed * ROT_SPEED, 0);
                    break;
                case "RIGHT":
                    m_phy.character_rotation_inc(obj, -elapsed * ROT_SPEED, 0);
                    break;
                }
            }
        }
    
        m_ctl.create_sensor_manifold(_character, "LEFT", m_ctl.CT_CONTINUOUS,
            rotate_array, left_logic, rotate_cb);
        m_ctl.create_sensor_manifold(_character, "RIGHT", m_ctl.CT_CONTINUOUS,
            rotate_array, right_logic, rotate_cb);
    }
    

    As we can see it is very similar to setup_movement().

    The elapsed sensor was added; it constantly generates a positive pulse. This allows us to get the time elapsed since the previous rendering frame inside the callback, using the controls.get_sensor_value() function. We need it to correctly calculate the turning speed.

    The type of sensor manifolds has changed to CT_CONTINUOUS, i.e. the callback is executed in every frame, not only when the sensor values change.

    The following method turns the character around the vertical axis:

    m_phy.character_rotation_inc(obj, elapsed * ROT_SPEED, 0)
    

    The ROT_SPEED constant is defined to tweak the turning speed.
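
    Scaling by the elapsed time is what makes the turn rate frame-rate independent: a second of held key produces the same total rotation whether it is rendered in 30 or 60 frames. A quick sanity check in plain JavaScript:

```javascript
// Accumulating elapsed * ROT_SPEED yields the same total rotation
// regardless of how many frames the second is cut into.
var ROT_SPEED = 1.5; // radians per second, as in the article

function totalRotation(frameTimes) {
  return frameTimes.reduce(function (sum, dt) {
    return sum + dt * ROT_SPEED;
  }, 0);
}
```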

    Character jumping


    The last control setup function is setup_jumping():

    function setup_jumping() {
        var key_space = m_ctl.create_keyboard_sensor(m_ctl.KEY_SPACE);
    
        var jump_cb = function(obj, id, pulse) {
            if (pulse == 1) {
                m_phy.character_jump(obj);
            }
        }
    
        m_ctl.create_sensor_manifold(_character, "JUMP", m_ctl.CT_TRIGGER, 
        [key_space], function(s){return s[0]}, jump_cb);
    }
    

    The space key is used for jumping. When it is pressed the following method is called:

    m_phy.character_jump(obj)
    

    Now we can control our character!

    Moving the camera


    The last thing we cover here is attaching the camera to the character.

    Let's add yet another function call - setup_camera() - into the load_cb() callback.

    This function looks as follows:

    function setup_camera() {
        var camera = m_scs.get_active_camera();
        m_cons.append_semi_soft_cam(camera, _character, CAMERA_OFFSET);
    }
    

    The CAMERA_OFFSET constant defines the camera position relative to the character: 1.5 meters above (Y axis in WebGL) and 4 meters behind (Z axis in WebGL).

    This function finds the scene's active camera and creates a constraint for it to follow the character smoothly.
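
    Ignoring the smoothing that append_semi_soft_cam applies (and the character's rotation), a rigid version of this constraint would simply add the offset to the character's position each frame. A simplified sketch:

```javascript
// Simplified rigid follow: place the camera at the character position
// plus the offset. The real semi-soft constraint also smooths the
// camera's motion over time, which this sketch omits.
var CAMERA_OFFSET = [0, 1.5, -4]; // 1.5 m up (Y), 4 m behind (Z)

function followCamera(characterPos, offset) {
  return [
    characterPos[0] + offset[0],
    characterPos[1] + offset[1],
    characterPos[2] + offset[2]
  ];
}
```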

    That's enough for now. Let's run the app and enjoy the result!

    ex02_img03.jpg?v=20140717114607201406061

    Link to the standalone application

    The source files of the application and the scene are part of the free Blend4Web SDK distribution.

    • Aug 18 2014 09:27 AM
    • by Spunya
  8. The Art of Feeding Time: Animation

    While some movement was best handled programmatically, Feeding Time‘s extensive animal cast and layered environments still left plenty of room for hand-crafted animation. The animals in particular required experimentation to find an approach that could retain the hand-painted texturing of the illustrations while also evoking the feel of hand-drawn animation.


    old_dogsketch.gif old_dogeat.gif


    An early pass involved creating actual sketched frames, then slicing the illustration into layers and carefully warping those into place to match each sketch. Once we decided to limit all the animals to just a single angle, we dispensed with the sketch phase and settled on creating the posed illustrations directly. When the finalized dog image was ready, a full set of animations was created to test our planned lineup of animations.

    The initial approach was to include Sleep, Happy, Idle, Sad, and Eat animations. Sleep would play at the start of the stage, then transition into Happy upon arrival of the delivery, then settle into Idle until the player attempted to eat food, resulting in Sad for incorrect choices and Eat for correct ones.


    dog2_sleeping.gif dog3_happy.gif dog1_idle.gif dog_sad2.gif dog3_chomp.gif


    Ultimately, we decided to cut Sleep because its low visibility during the level intro didn’t warrant the additional assets. We also discovered that having the animals rush onto the screen in the beginning of the level and dart away at the end helped to better delineate the gameplay phase.

    There were also plans to play either Happy or Sad at the end of each level for the animal that ate the most and the least food. The reaction to this, however, was almost overwhelmingly negative! Players hated the idea of always making one of the animals sad regardless of how many points they scored, so we quickly scrapped the idea.

    The Happy and Sad animations were still retained to add a satisfying punch to a successful combo and to inform the player when an incorrect match was attempted. As we discovered, a sad puppy asking to be thrown a bone (instead of, say, a kitty’s fish) proved to be a great deterrent for screen mashing and worked quite well as a passive tutorial.

    While posing the frames one by one was effectively employed for the Dog, Cat, Mouse, and Rabbit, a more sophisticated and more easily iterated approach was developed for the rest of the cast:


    monkeylayers.gif jaw_cycle.gif lip_pull.gif


    With both methods, hidden portions of the animals' faces, such as teeth and tongues, were painted beneath separated layers. In the improved method, however, these layers could be posed and keyframed much more freely, with a variety of puppet and warp tools at our disposal that made modifying poses or frame rates much simpler.


    monkey_eat.gif beaver_eating.gif lion_eat.gif


    The poses themselves are often fairly extreme, but this was done to ensure that the motion was legible on small screens and at a fast pace in-game:


    allframes.png


    For Feeding Time’s intro animation and environments, everything was illustrated in advance on its own layer, making animation prep a smoother process than separating the flattened animals had been.

    The texture atlas comprising the numerous animal frames grew to quite a large size — this is just a small chunk!


    ft_animals_atlas.jpg


    Because the background elements wouldn’t require the hand-drawn motion of the animals, our proprietary tool “SLAM” was used to give artists the ability to create movement that would otherwise have to be input programmatically. With SLAM, much like Adobe Flash, artists can nest many layers of images and timelines, all of which loop within one master timeline.

    SLAM’s simple interface focuses on maximizing canvas visibility and allows animators to fine-tune image placement by numerical values if desired:


    slamscreen.jpg


    One advantage over Flash (and the initial reason SLAM was developed) is its ability to output final animation data in a succinct and clean format, which makes it much easier to port assets to other platforms.

    Besides environments, SLAM also proved useful for large scale effects, which would otherwise bloat the game’s filesize if rendered as image sequences:


    slamconfetti.jpg


    Naturally, SLAM stands for Slick Animation, which is what we set out to create with a compact number of image assets. Hopefully ‘slick’ is what you’ll think when you see it in person, now that you have some insight into how we set things into motion!

    Article Update Log


    16 July 2014: Initial release

  9. Waiting for review... or "ok, it was fun, now what?"

    The flop


    As an indie who loves to work inside my own world with my own rules (no deadlines, no budget), this is like: "...ok, ok, I need to get a life and move on. And well, let's also try to make this thing something real at last, something that people can touch and react to… let's go and see what happens!"

    This is the kind of attitude that will always make you fail when trying to make a dent in the App Store. Your game will probably sink to the deepest ranks of the App Store in 2-3 days and never recover from there.

    No one will care about you, your "firm", or your game, because the truth is that you never existed; your shadow is too short.

    The issue


    Exactly.

    The real problem with app marketing is not just game quality or marketing budget. If it were that simple, the solution would be just to throw in more bucks. But it's not. It has always been about reach, or as it's properly called today, "social reach".

    When I worked on my first game, TapTapGo (see it on the App Store here), I did it for the sake of gaining the experience of creating something on the magnificent Apple platform, but also of publishing it and trying to make an impact. I learnt a lot about marketing then. And I also felt depressed.

    I tested almost every tool and tactic to try to get attention from potential gamers: preview videos, social and forum engagement (and joining #IDRTG), PR kits (almost no one replied to my pitches), paid reviews with Gnome Escape (which gave my first game a mix of good and bad reviews), ASO (Sensor Tower and others) and many more.

    It was exhausting (and expensive), and I could not see any benefit from those actions. I watched the Apple charts every day (and sometimes every hour). I checked the iTunes Connect app and other statistics apps on my phone during my walks.

    After 2-3 months of releasing a few updates and working hard to push my game up the App Store ladder, I threw in the towel and decided to move on, take a break, and make room for new ideas.

    Conclusion


    My reflection on that experience is that, even though I pushed very hard to market my game, my reach was very low. I had few contacts to share my game with, and even though my game got a few thousand downloads during the first weeks, I could not see any network effect. Every action, every penny I spent trying to expand my reach had only a brief effect, because the real problem was that I didn’t have any social attractor linked to my game.

    I hadn’t cared about building a fan base that could provide enough reach for each update to bring in more fans. Instead, I got only superficial downloads, forced by scattered investment.

    So, next time, don’t rush to release your game. Make sure you have a good reason to do so. Why are you going to release the game? What are your expectations? Have you taken the steps to build proper reach through a fan base by the time launch day comes? Because if you haven’t, you will be disappointed. And you won’t be able to explain why you feel that way.

    In the next posts I will describe my efforts to create some noise before Noir Run, our second game, gets released.

    UT6422u.png

    Stay tuned!

    - Rob.
    @KronnectGames


    Article Update Log

    13 Jul 2014: Initial release

  10. Blackhole by Fiolasoft reaches over 110% support & will be shared with over 67,000 people.

    Blackhole will reach a massive number of gamers on release

    [attachment=22564:blackhole.jpg]
    Two weeks after the company Fiolasoft published their indie game Blackhole on Epocu, it has reached over 110% support (332 supporters) and will be shared with over 67,000 people on the end date (in 38 days).

    We're excited to see how much it will help the promotion of the game and grow their player base. We also hope this success story will inspire more indie developers to promote their games through various platforms.


    You can see the Blackhole campaign here: http://epocu.com/campaigns/blackhole/

    • Jul 09 2014 10:28 AM
    • by ahogen
  11. Epocu – a free and easy way to hype & test upcoming games

    Hey guys!

    We're a small Danish company called Kunju Studios; we love games and have previously worked on tools for gamers and game developers.

    Over the past months we have been working on a way to put the work and ideas of indie game developers in the spotlight. What we’ve come up with is Epocu.

    Envisioned as a hub for upcoming games and game concepts, Epocu offers developers a unique opportunity to test their ideas and assumptions before investing time in them. Upon presenting an idea, developers can set an interest goal to see whether they can reach a critical mass of people interested in their game concept. Once the interest goal you set is achieved, a message of your choice is shared through your supporters' social media at the time you think you would benefit most from the buzz and attention.

    [attachment=22419:2.jpg]

    We hope you will check it out and help us build the platform that indie game development deserves. Epocu is completely free to use and always will be, in the hope that the attention created helps you find investors and a fan base, as well as lay down a solid foundation for a Kickstarter campaign.

    [attachment=22418:1.jpg]

    If you’re interested in learning more about us, you can check it out in more detail here: http://epocu.com
    Feel free to leave us your feedback or suggestions in the comments; we will do our best to answer you as soon as possible! :-)

  15. Epocu – a free and easy way to hype upcoming games

    Hey guys,
    We're a small Danish company called Kunju Studios; we love games and have previously worked on tools for gamers and game developers.

    Over the past months we have been working on a way to put the work and ideas of indie game developers in the spotlight. What we’ve come up with is Epocu.

    Envisioned as a hub for upcoming games and game concepts, Epocu offers developers a unique opportunity to test their ideas and assumptions before investing time in them.

    Upon presenting an idea, developers can set an interest goal to see whether they can reach a critical mass of people interested in their game concept. Once the interest goal you set is achieved, a message of your choice is shared through your supporters' social media at the time you think you would benefit most from the buzz and attention.

    We hope you will check it out and help us build the platform that indie game development deserves. Epocu is completely free to use and always will be, in the hope that the attention created helps you find investors and a fan base, as well as lay down a solid foundation for a Kickstarter campaign.

    If you’re interested in learning more about us, you can check it out in more detail here: http://epocu.com
    Feel free to leave us your feedback or suggestions in the comments; we will do our best to answer you as soon as possible! :-)

  16. Game in the making (The Doomed Ones)

    Intro:
    I started to put together plans for a low-res 3D FPS not too long ago.

    It's based in a decent-sized city, let's say Atlanta, Georgia.
    The game will follow in the vein of the TV show Revolution, but with a bit of a twist.

    Rough Story Line:
    About 15 years ago, a small outbreak of a virus known as virus b-200342 was somehow released from a top-secret research facility.
    The virus was then released into the water supply of New York City. The virus, later to be called the Doom Virus (I'm looking for another name for it),
    made its way into the system of nearly every living occupant of New York. It acted like an STD, traveling from person to person in every way it could, waiting to initiate its madness. After a month of spreading from person to person, the government began delivering vaccines, which were later determined to only save the non-infected from their chance of a future doom. Nearly 98% of the world population had already been infected by then. The crisis was believed to be averted, and so everyone went on with their everyday lives. But then it happened: in one swooping action, every infected person was somehow "activated." Their veins began glowing blue and their minds went blank, focusing on one thing... food. Not your normal kind of food either; they would rather eat those not of their kind. This happened 10 years before the story begins; now the infected mope around, tired but looking for food, in a sort of zombie-like state, ready to attack whenever bothered.

    This is where the game comes in. You are a survivor; the characteristics of your survivor are determined by your prologue (physical appearance, past life, and mental abilities). You have been looking for other survivors for years and have just decided to head down to the Atlanta area in hopes of finding survivors of some sort. Of course, the place is swarming with the Doomed Ones, yet you find signs of the living. You then proceed into Atlanta, and that is where your story begins.

    Game Characteristics:
    I love the idea of open-world survival games... the thought of doing everything you can to survive, working your way around, trying to form a group and kick some ass.
    So here are my main game characteristics, or mechanics:

    • Survivability (Needing to eat and drink, also medicines and craftable splints)
    • Weapons (Not so much high-tech and accurate.. possibly handmade and crude, but a few precision ones hidden about.)
    • The Doomed AI (Stumbling about, but when they detect a human they will walk or crawl at a jog-like speed, possibly pouncing at you.)
    • Food and Water (You cannot be infected so you can drink from the tap if you find a working sink, food is scarce but find-able)
    • Medicine (A few hospitals can be found on the map, full of meds and bandaging material, possible craftable from herbs and cloth)
    • Player Creation (Focused around people from age 16 to 40, customizable in all ways)
    • Possible MMO (100 player servers, with faction creation and base building)
    • Base building (The ability to scavenge wood and parts from vehicles and utilities to craft base items)
    • Terrain (A large infected city with little power)
    Hiring:
    I'd love it if I could get a few (or many) more people in on this project! Of course, there will be no pay until it is released on Steam.
    Just email me at Lane_cork97@hotmail.com if you want to help, or add me on Skype: Lcorak97

    Feedback:
    Give me your feedback! It's the best thing I could use right now to determine whether or not I want to go on with this project.

    Thank you for taking the time to read this.

  17. The Process of Creating Music

    Hello, everyone! My name is Arthur Baryshev and I’m a music composer and sound designer. I compose sound tracks for video games, and I’m the manager of the IK-Sound studio. I’ve been collaborating with MagicIndie Softworks for a long time now and I would like to share my experience with you about how I compose tracks for the games forged by these guys.

    How does it begin?


    You have to know that my music is “born” even before I write its first note. First, I study whichever game I have to compose music for. I am usually given a short description of the game, the concept art, the list of needed tracks, and some other references. Immediately after that, I brainstorm any ideas I have and form a general image of the music I am to compose: the style, the musical instruments I am going to use, the mood, and so on…

    I often lean back in my armchair and soak in a slide show of the concept art. First impressions are extremely important because they are usually the most powerful and the closest to what I have to create.

    From the very beginning, it is crucially important to maintain a clear dialogue with the lead game designer. If we understand each other, then we are already halfway there. The result can be a stylistically unified soundtrack which highlights each action that you take in the game. Just like in Brink of Consciousness: Dorian Gray Syndrome; by the way, you can hear the soundtrack for this game here:

    Just click on the image below
    85736_1361445455_brink-consciousness-dor

    Some words about the process


    After all is set into motion and everything is agreed upon, I start composing. My cornerstone is my virtual orchestra. I use orchestra and “live” instruments in virtually every track I compose. This gives my tracks a distinctive flavour and truly brings them to life.

    I send my sketches, which are usually about 15 to 30 seconds long, to the developers, and only after they give me their seal of approval do I finish them. After I’ve decided upon the final version of a track, I bring it to perfection by polishing it or adding new details.

    Many soundtracks are based upon leitmotifs: melodies that set the tone of the game. Speaking of leitmotifs, a good example would be the soundtrack I wrote for Nearwood. In the main menu, from the very first second, you can hear a very memorable, even catchy, tune. This melody is afterward used in various cut-scenes, which gives them a distinctive mood. You can listen to the tunes from this game here:

    Just click on the image below
    nearwood-collectors-edition_feature.jpg

    When developing a particular song or tune for a game, one should keep in mind the following:
    • How the music will fit with the overall sound theme;
    • Whether it will be annoying and intrusive to the person who plays the game;
    • Will I be able to loop the track;
    • And so on, and so forth…
    This is vitally important! A track could be a musical masterpiece, a 9th Symphony, but if it is poorly implemented it will ruin the entire experience. When all is set, every track is completed and has found its place in the game, you can sit back and admire the results.

    Well, that's all ... Oh, wait!


    Now I am working on the Cyberline Racing and Saga Tenebrae projects, both currently in full development. They are set in different worlds, which requires entirely different approaches: I compose heavy metal and electronic music for one, and soulful fantasy music for the other. Guess which is for which?

    Here’s a sneak peek at a fresh battle composition from the upcoming Saga Tenebrae and a demo OST (about half the OST, I'd say) from Cyberline Racing:

    Just click on the images below
    artworks-000067634085-5kyszy-t200x200.jp artworks-000063798800-6eb6yt-t200x200.jp

    And before I go, I will say that composing the music is only half the work. The second half, which is as important as the first, consists of the sound design and sound effects. I’ll talk to you about the sound design a bit later. Good luck and stay awesome! ;)

  18. Super Mario Bros Quest

    Introduction


    *This is a suggested template to get you started. Feel free to modify it to suit your article. Introduce the topic you are going to write about briefly. Provide any relevant background information regarding your article that may be helpful to readers.

    Main Section Title


    Explaining the Concept


    This is the tutorial part of your article. What are you trying to convey to your readers? In this main body section you can put all your descriptive text and pictures (you can drag and drop pictures right into the editor!).

    Using the Code


    (Optional) If your article is about a piece of code create a small description about how the code works and how to use it.

    /* Code block here */
    

    Interesting Points


    Did you stumble upon any weird gotchas? .. things people should look out for? How would you describe your experience writing this article.

    Conclusion


    Wrap up any loose ends for your article. Be sure to restate what was covered in the article. You may also suggest additional resources for the reader to check out if desired.

    Article Update Log


    Keep a running log of any updates that you make to the article. e.g.

    6 Feb 2020: Added additional code samples
    4 Feb 2020: Initial release

  21. Outcast gets an HD Reboot on Kickstarter

    Outcast Reboot HD Kickstarter


    Namur, Belgium – April 7, 2014 -


    Fresh3D Inc. today announced that it has launched a Kickstarter campaign to bring an HD reboot of the award-winning open-world game ‘Outcast’ to PC/Mac and potentially next-gen consoles. ’Outcast’ is an action-adventure game developed in 1999 for the PC. It was one of the first 3D games to offer non-linear gameplay, free-roaming environments, combat against clever reactive AI, excellent voice acting, symphonic music, and 20+ hours of highly engrossing adventure. It was rated 90% by PC Gamer and received the Editors’ Choice award.

    The Kickstarter campaign begins today and runs for thirty days. Reward tiers include digital and physical releases of the game, an art book, early access to the forum and beta version, and exclusive backer T-shirts and statuettes, among many other exclusive Kickstarter backer-only content. To pledge or watch the campaign video, visit the Kickstarter page https://www.kickstarter.com/projects/outcast-reboot-hd/outcast-reboot-hd

    About Fresh3D Inc.


    Fresh3D Inc. is a subsidiary of Fresh3D sarl (http://www.fresh3d.com), an independent game developer, publisher and technology provider since 2004. With the help of their proprietary FreshEngine™ technology for PSP™, PlayStation®3, Xbox One, PlayStation®4, PS Vita and PC, they develop quality products within reasonable timeframes and budgets. The company manages all the IP rights of the 'Outcast' franchise, recently purchased from ATARI Europe by its creators.

    Contact


    Additional information can be obtained by contacting Fresh3D (contact@fresh3d.com)

  22. Most used model software

    Introduction


    Hi, I'd like to know which software is most used in big game companies: Maya, Blender, or 3ds Max?
    Thanks

  23. New Game Developer hits mobile market

    New Game Developer hits mobile market.


    New game developer Pop Casual hits the mobile market with two Android games. The studio has a long history of game creation, notably in the online Flash market. Their five years of experience give the studio a head start on the creative and development process. For their first game, Pop Casual jumped on the trend wagon, launching a Flappy Bird parody named Slappy Turd. The company’s second title, Gold Hunters, a puzzle platformer, shows off a little more of the team’s creative capabilities. The game mixes the addictive mind quest of a puzzle game with the fun gold-throwing action of an action game.

    As the Pop Casual portfolio grows, the team plans on developing better games with a more interactive experience. “We want to produce games for all ages, ranging from casual puzzle games and platform action games to intense shooter games.” Currently the two Pop Casual titles are available on the Android market and online, with iOS versions coming in the following weeks. Look out for new titles such as RepStyle and more!

    Currently Pop Casual’s first title Slappy Turd is free and available on Google Play: https://play.google.com/store/apps/details?id=air.SlappyTurdAndroid , while Gold Hunters offers a free and a paid version on Google Play as well: https://play.google.com/store/search?q=%22pop%20casual%22%20%22gold%20hunters%22&c=apps

    Look out for some great new game releases from this promising new game studio.

  24. Algo-Bot: lessons learned from our kickstarter failure

    Previous article: When a game teaches you

    UPDATE 20th of March 2014: Algo-Bot has been Greenlit yesterday on Steam

    DISCLAIMER: Before reading this article, keep in mind that every Kickstarter campaign is different. I can’t guarantee that my advice is the key to success, since a Kickstarter depends in part on luck.

    If you'd like to help us, upvote our game on our Steam Greenlight page here (before Valve decides to drop the service).

    ABOUT ALGO-BOT


    To give you a bit of context, Algo-Bot is a challenging 3D puzzle game in which programming logic is the player's weapon.

    Players don’t directly control the robots (yes you can control several ones) but instead, players manipulate sequential commands to order them around and complete the level objectives. In the beginning, players are limited to telling the robots where to go: straight, left, right… But as players progress in the game, it gets much more complicated with the introduction of more advanced elements such as functions, variables, conditions, and other, more advanced, programming principles.

    OK, now you know what kind of game it is and how we compare to other games. Let's move on to the Kickstarter topic.

    GAME OVER


    In January Fishing Cactus launched Algo-Bot on Kickstarter. It was our first experience on the platform and we were pretty confident about the success of our game. Why wouldn’t it be a success? Even as a niche game, everyone who played it liked it. We were ready. Our Kickstarter page was nice, our video looked very pro. Moreover we read all about “how to run a successful Kickstarter” annnnnd we failed... Things happen!

    When I say that we failed, that’s not completely accurate. Less than two weeks after launch day, all of us secretly knew that it wouldn’t make it, but when you’ve worked so hard on something it’s even harder to admit defeat. We had two options: run it to the end and fail, or cancel it. After analyzing the situation, the second option looked more appropriate and left us more in control of our fate.

    Step 1: Cancel a Kickstarter


    To cancel your project you have to push that cancel button on the page. I noticed that it felt a bit like pushing the green button to launch your project. You feel excited, insecure and full of doubts. You ask your team twice if they are really sure they want to cancel it. It’s like: “ok I’m doing it” “I’m really doing it! Is everyone sure about it?” “I mean it, I’ll do it”. Then you press it and it feels so wrong and so right at the same time. You failed but you learnt so much from it.

    Step 2: Analyze the situation


    When you are running a Kickstarter you have access to quite a complete dashboard. You can see who your backers are, where your traffic comes from, your funding progress and what pledges are the most popular. You can’t see how many people have visited your page but you can evaluate it by seeing how many times people clicked on your video. This dashboard is really helpful during and after the Kickstarter.

    The first day, our backers were mostly people living in our country, which is not a lot when you live in Belgium. They were friends, family members, people from our network, or people who simply found us via the geo-localization on Kickstarter’s home page. The others found our project because they are Kickstarter regulars who probably sorted the game category by launch date. Or you suddenly appeared in the default “sorted by magic” view, which, according to Kickstarter, shows what's bubbling up right now across categories and subcategories.

    “It’s not about money. It’s about backers”


    That first day we raised 2% of our goal, which is clearly not enough. According to many sources, if you haven’t reached 10% within the first 24 hours you’re screwed, unless you’re Notch… or Tim Schafer. Are you Notch? No you’re not-ch. Anyway, what experience taught us is that your success is not about money, it’s about backers. Of course, the money you raise that first day is important, but less so than the number of your backers. Having two backers at $500 is way less powerful than one hundred backers at $10: it shows that your project is valuable, and it helps you catch the staff's attention and potentially become featured. That first day, your goal is not to raise money but to raise a community.

    There are approximately 20 new game projects on Kickstarter every 24 hours, and the “discover” page sorted by launch date contains 20 projects. This means the shelf life of your project shrinks every time a new project appears on the page. Of course, backers can still find your project by scrolling down and pressing “load more”, but that requires involvement from the backer and a very catchy picture ;) So, these first 24 hours are crucial and won’t give you a single moment to breathe.

    One more thing: keep in mind that Kickstarter is extremely viral. If backers love your project, they will spread the word around them. But past the first day, unless you’re featured, don’t count on Kickstarter to drive huge traffic to your page. The same goes for magazines. If you are not on a big one like Mashable, Forbes or Rock Paper Shotgun, don’t believe you’ll create a buzz with one isolated article. It will drive 10, maybe 20 backers, which won’t be enough compared to what Facebook, Twitter, and blogs can generate.

    With all these elements in hand, we were able to spot some of our mistakes: message and visibility.

    Step 3: Spot the errors


    The visibility


    The first error we made was to think that our network was enough to bring in backers. You can see in the picture below that our network drove 2% more money than Kickstarter did. But that only says that our friends are generous. Like I said earlier, money is not our goal; we want to know who our backers are and where they come from. Like many Kickstarter projects, we had built a small community around the game, approx. 600 people, but clearly not enough to support the launch of the campaign.

    Kickstarter_dashboard_referrers_1.png

    According to the tab below, our backers mainly came from Kickstarter itself; social networks only come in second place and videogame websites in third. This proves that our social networking had really gone wrong. What’s funny is remembering how glad we were to see more and more backers finding us via Kickstarter, when it clearly meant that nobody was talking about our game outside of Kickstarter. Ouch!

    Kickstarter_dashboard_referrers_2.png

    The message


    Preview

    Speaking of Kickstarter traffic, one of our biggest errors was to broadcast the wrong message about our game.


    project_tumb_presa.png


    On the preview of our game you can read “Code smarter. Not Harder”. It doesn't say anything about the game. It only says that it’s something about coding, targeted at a niche. Moreover, it makes the target audience confusing. Is this game for programmers? Is this even a game?

    Then you read the small description: “Awards winning 3D game in which you achieve your missions using Code Logic. No developers were harmed during the making of this”

    What does the description tell us about the project?


    • It’s an award-winning game
    • It’s a game and it’s 3D


    So far, the message is on target and not confusing at all. It brings value to the game and helps justify the $60,000 goal.

    • Achieve your missions using Code Logic
    • No developers were harmed during the making of this (...)


    The third message confirms that this game is about coding. But it doesn’t say more than the title already told us. Does this game teach you code logic? Is it for beginners or experts? Is it for kids? And what exactly is “code logic”?

    The little joke at the end shows that we are funny, or try to be. But it’s not relevant here, because it says nothing about the game.

    You have 130 characters (136 but fit to 130 to avoid display issues), use them wisely!

    Project page

    As said earlier, you can’t see how many people have visited your page but you can estimate it by knowing how many people played your video.

    Kickstarter_dashboard_videostats.png

    4,848 people played our video. That’s cool, but that’s not what matters to us right now. Playing a Kickstarter video requires an investment from the potential backer: before playing it, he will scroll down your page to get an overview of your project and see if it’s of interest.

    To stay faithful to the “coding” theme, we found nice code-related titles for our paragraphs. I say code-related, but it wasn’t really code; it just looked like it, and we kept it simple enough to be accessible to non-programmers. So, “Genesis” became “cmd starts Algo-Bot 1.0” and “Gameplay” became “bool Algo-Bot (string world, string characters, string controls)”. We still think it was funny, but it was a very bad idea.

    Private jokes are PRIVATE jokes, and we lost a lot of potential backers here because they were lost and didn’t know what it was about. They just couldn’t find their way through the “coding” mess. Seeing titles like that is scary when you are not used to it. At that moment we lost all the puzzle lovers and the parents who wanted to teach programming to their kids, and kept only the programmers. When a programmer pointed out the issue, we made the required changes, but it was too late, because the first 24 hours were long gone.

    The more fearless among them maybe read the first paragraph and certainly came away with this: “that teaches basic coding and logic skills: aha! A game that teaches stuff... hmmm... many teaching games suck!” and, in the best case, “but it sounds interesting enough, so let's see how it looks and what it is all about”. According to the backers who read our texts, they were way too long, so instead of reading what the game was about they would rather play the video.

    Video

    Our backers really enjoyed our video. One said it was the best Kickstarter video ever. We thank him for that, but obviously our video wasn’t that good, because it didn’t describe the product well enough. Gameplay is the key.

    Our video starts with an animated sequence that runs for 1:30 but it is too long especially because it is placed directly at the start and fails to show gameplay prominently. You have to wait 2:10 to see 40 seconds of gameplay which is not enough to understand how the gameplay exactly works. Come to the point as quickly as possible. Show how the actual game looks and what the gameplay is like straight away.

    I’ll add that our video doesn’t communicate whether coding knowledge is required, which is very important for newbies and puzzle lovers. Again, it doesn't tell the viewer who the target audience of the game is.

    At the very end we say where the game is going once funded, what is already finished and what is still needed. Most of our backers missed it; we should have placed it much earlier, as it was a key element.

    To finish with our video, I’ll point out that 4:56 (including the irrelevant 10 seconds of ending) wasn’t a clever length for a Kickstarter. When you don't get straight to the point, people will skip parts or just stop the video, which isn’t good at all. We now know that one of the statistics Kickstarter uses to decide who gets featured on the main page is the number of “finished plays”, i.e. the number of times your video has been watched all the way through. So, keep your video simple, stupid and SHORT.

    Kickstarter_dashboard_videostats2.png

    One last thing, about the gameplay video this time: add commentary or background music. We made a gameplay video with no sound and it was awfully boring to watch. Many of our potential backers stopped it after a few seconds.

    Goal and Stretch-Goals

    We set our goal at $60,000. For that sum we promised to deliver several features, such as a level editor, the ability to code in-game and a lot of polish. While the gameplay is in an OK state and rather well defined now, the overall aspects of the game need more work: sound, UI, a better lighting system, particle FX to add life, a more user-friendly interface, better responsiveness, and an extended story and plot to keep the game interesting in the long run.

    After considering it, we should have lowered our goal and put more features as stretch-goals.

    Rewards

    For this part we asked our 250 backers for feedback. In their judgment, the prices were quite good, but the pledges looked messy because there were too many superfluous rewards between the interesting ones.

    Let’s see what our dashboard says about our rewards and compare it with what our backers say.

    Kickstarter_dashboard_rewardpopularity2.

    You can see in the tab our three most popular rewards. 52% of our backers pledged for the digital downloadable copy for PC. They told us that they didn’t care about rewards. This pledge was an early bird limited to 800, which is too high a number for an early bird when you are not famous.

    Then, 30% pledged for another early bird, limited to 500. For $20 they would have received the following digital rewards (in bold the rewards that really interested them):

    1. Exclusive wallpapers
    2. Infinite gratitude
    3. Digital downloadable copy for PC
    4. Algo-Bot papercraft
    5. Your name in the credits
    6. Participate in project development surveys
    7. Vote for the future programming language the game will support

    What does it tell us? Our niche wanted to be part of it! They wanted to influence the development and not just have their names in the credits. It brings us back to the core of Kickstarter. It’s not about selling your game. It’s about sharing a dream.

    “It’s not about selling your game. It’s about sharing a dream.”


    The last pledge confirms it well. For $80 they would have had early access to our level editor and been able to submit their levels for the final version of the game. They didn’t care about the t-shirt or the poster. What they really wanted was to participate!

    To sum up what backers really want:

    1. Participate in your project (surveys, level editor, vote, design an element of the game,…)
    2. Alpha/Beta access with a decent price
    3. Multiple copies of the game
    4. Physical rewards

    “Well what? But you just said that they didn’t care about your goodies!” Calm down: I said that OUR NICHE didn’t care about goodies, but it’s important to have some physical rewards for people outside your niche. Imagine an uncle who wants to give the game to his niece. He thinks it’s a really good idea, but he’s afraid his niece won’t appreciate the gift at first sight... Oh wait! What is this? A cute plushie of the robot! That would complete the gift nicely! See?

    Another problem was the way we presented the rewards. Each time, we wrote out the entire previous pledge plus the new rewards. I’m not saying that’s bad; it’s actually good to do... when you have a reasonable number of rewards per pledge. So prefer to write it as “previous pledge + new gifts” if you plan to have a lot more.

    Finally, don’t just list your rewards in the rewards column. Add a more visual description of your pledges on the page, plus a table, because some people are more comfortable with that.

    PRESS START TO CONTINUE


    A possible next article will cover the reboot plan.

    Meanwhile, we launched the game on Steam Greenlight with the aim of building a strong community there. Never forget that building a community before running a Kickstarter is very important.

  25. Development of the Game: From an Idea on a Napkin to a Campaign on Kickstarter

    Introduction


    Hello, my name is Andrey Vlasenko. I live in Kharkiv, Ukraine. I am a software developer and work as CIO at ApexTech. I want to tell you about the creation of our game, "Demolition Lander". To start with, watch this small trailer, which will give you an idea of what the result looks like (gameplay footage begins at the 50-second mark).




    Birth of the idea


    There is nothing worse than an idea stuck in your head. At first it just looms at the back of your mind; then, after an incubation period, it makes you act. Having worked in enterprise software for about 7 years, I firmly decided that I wanted to try game development and create a game. A lot of time passed after this decision was made, and then one day the idea suddenly took concrete shape.

    Over lunch, my friend Sasha (CEO of ApexTech) and I discussed what kinds of games there used to be and what each of us liked to play. We both remembered the classic Atari game Lunar Lander. During the discussion, I began to draw a ship with two engines on a napkin, and it just happened that these engines were not symmetrical and pointed in different directions. We both smiled. That was it! The mechanics of Lunar Lander, but with ships equipped with two engines, each controlled by a separate joystick. "It will be a hit!"

    The concept was later extended with destructible levels and action elements, but more on that below.


    2a6a55f195b58b5ee1a461ce585ba156.jpg
    the first ship


    The first prototype. Multiplatform support. Selection of game engine. First mistakes


    I should start with the fact that the company decided to give this project the green light, but I was the only one assigned to it.

    We had planned the game for mobile devices, so naturally I thought about a cross-platform implementation from the start. There were two candidate engines: the cocos2d family and Unity. After long discussions, Unity was discarded because at the time it was good for 3D but not so good for 2D. In addition, I had some experience with cocos2d, so the choice was made in its favour.

    The first prototype was built on cocos2d JavaScript. The advantages of this technology are:
    • code in JavaScript
    • the same code runs on virtually all platforms, including web browsers with HTML5 support
    The cons were no less weighty:
    • no matter what the engine's developers assure you, it runs slowly compared to native implementations
    • implementing platform-dependent functionality is pretty messy and clunky
    In other words, for a full game you need to implement all the heavy and platform-dependent parts (like Game Center or In-App Purchases) natively, then expose them through JavaScript bindings and use them from the main JavaScript code. It is neither a fast nor a very pleasant process.


    2712953f1d74150c8528e6081746d161.jpg
    the first prototype


    Having stepped on this rake, we reconsidered our position. We decided to build only an iOS version on cocos2d-iphone for now. Despite the name, the engine works well both on the iPhone and on other Apple devices running iOS.

    We started to implement physics using Chipmunk 2D. Some might ask: why not Box2D? The answer is simple: at the time of the first prototype, cocos2d's JavaScript bindings had been implemented only for Chipmunk. In the transition to native cocos2d-iphone, we decided to leave it as it was, and never regretted the decision.

    Gameplay. Is there one?


    A few weeks of development passed. The prototype was ready. The game mechanics were ready. Everything was flying, engines were spinning, levels were large. It seemed that just a bit more work and it could go into the App Store. But playing the game was boring, and that means one thing: the gameplay was missing. Here are the elements of the game mechanics that had been implemented at that point:
    • a big level with a bunch of flat ground stations on which you must land for refuelling
    • a ship equipped with two engines, which can be operated synchronously by one joystick or separately by two joysticks
    • the goal of the game: finish the level without breaking the ship
    After collective brainstorming the following elements were planned:
    • tuning the ship in all specifications
    • spheres with energy, which helps you to buy upgrades and fly to new distant planets
    • boxes for ship repair
    • underground caves
    • fuel cans for caves
    • destructible surface of the level
    • weapons for ships – bombs
    • danger zones – a zone which damages a ship over some time period
    • active and passive traps – mines and anti-aircraft guns
    • artifacts unique to each level, hidden deep in the caves
    • the aim of the game remains the same
    While approving the new features it was decided to expand the team to 4 people.

    Dimensions of a level


    I have always felt excited playing games that offer a large space to move around in without restrictions or additional loading pauses, and I was keen to give Demolition Lander a similar feel. Levels in the game are huge. If you drew an entire level, you would get an image 65,536 by 16,384 pixels in size. And this is not the limit – levels could be enlarged to 65,536 by 65,536 without any loss of performance, but I imagined the playing process as horizontal flight above the ground with periodic exploration of underground caves.

    Levels in the game consist of the following elements:
    • level mask – a greyscale relief texture, 2048 by 512 (scaled by 32 when overlaid)
    • textures for soil, crust and the crust pattern, plus a texture for mixing them all together, applied over the stretched mask
    • a tile map with textures of the sky and stars
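    The size arithmetic behind the mask approach can be sketched in a few lines (values taken from the article; the helper name is my own, for illustration):

    ```python
    # A 2048x512 greyscale mask, scaled 32x, covers the full level.
    MASK_W, MASK_H = 2048, 512
    SCALE = 32

    level_w, level_h = MASK_W * SCALE, MASK_H * SCALE
    print(level_w, level_h)  # 65536 16384

    def mask_coords(world_x, world_y):
        """Map a world-space point back to the mask texel that controls it."""
        return world_x // SCALE, world_y // SCALE

    print(mask_coords(40000, 9000))  # (1250, 281)
    ```

    So a single texel of the mask governs a 32-by-32 patch of the rendered level, which is what keeps the mask small enough to edit at runtime.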


    06a184746091db1a398776d73a964714.jpg
    level mask

    5d20ebd926ca590487bd821f956309a0.jpg
    textures of terrain, crust, crust pattern and texture mix

    ba337a0f35ef929d438e2bf805f76131.jpg
    tile map of the sky

    c71429225e09c7aba0cfff403cd94169.jpg
    rendered level as a result


    The rendering algorithm for a level consists of two parts: loading the sky layers and creating a parallax effect; and loading the land textures and initializing a shader that mixes them.
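    The parallax half of that algorithm boils down to scrolling each sky layer at a fraction of the camera's speed. A minimal illustrative sketch (layer names and factors are assumptions, not the game's actual values):

    ```python
    def parallax_offset(camera_x, layer_factor):
        """layer_factor: 0.0 = pinned to the sky, 1.0 = moves with the world."""
        return camera_x * layer_factor

    # Distant layers get smaller factors, so they appear to move more slowly.
    layers = {"stars": 0.1, "far_clouds": 0.3, "near_clouds": 0.6}
    camera_x = 1000.0
    offsets = {name: parallax_offset(camera_x, f) for name, f in layers.items()}
    print(offsets)  # {'stars': 100.0, 'far_clouds': 300.0, 'near_clouds': 600.0}
    ```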

    Creating the destructible land surface


    One of the most spectacular features of the game demanded a very careful approach to implementation. Destruction of the surface must be fast and must keep the physical boundaries of the level synchronized with its graphical display.

    To create and update the physical boundaries of the land, we used a feature of Chipmunk 2D Pro that scans the level's terrain mask texture and creates a set of tiles with boundary lines. Later, as the ship travels through the level, the surrounding tiles with lines are reprocessed and the old ones are unloaded from memory.

    The deformation itself is done by changing the mask texture in the corresponding places and reprocessing the tiles with physical boundaries. Graphically it shows up instantly in the next frame, because the physics engine and the terrain shader use the same mask.
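    The deformation step can be sketched as follows: carve a circular crater into a greyscale mask (0 = empty, 255 = solid) and record which physics tiles must be rebuilt. Names, the tile size, and the list-of-lists mask are illustrative assumptions, not the game's actual code:

    ```python
    TILE = 64  # tile size in mask texels (assumed)

    def carve_crater(mask, cx, cy, radius):
        """Zero out mask texels inside the crater and return the dirty tiles."""
        h, w = len(mask), len(mask[0])
        dirty_tiles = set()
        for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
            for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
                if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                    mask[y][x] = 0                           # clear the texel
                    dirty_tiles.add((x // TILE, y // TILE))  # rebuild later
        return dirty_tiles

    mask = [[255] * 256 for _ in range(256)]
    dirty = carve_crater(mask, 128, 128, 40)
    print(sorted(dirty))  # [(1, 1), (1, 2), (2, 1), (2, 2)]
    ```

    Only the tiles in `dirty` need their boundary lines re-scanned, which is what keeps the per-explosion cost bounded.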


    154c9ba276914fb39da1102155492c4f.jpg
    physics debug layer is rendered


    Is the game ready? – No


    When you have a fully working game with exciting mechanics, it seems the App Store is closer than ever. But levels with names like Test1 and Test2, and one rectangular ship without any sane design, suggest otherwise. It was time for design and content.

    We engaged in creating the content ourselves. Types of ships, their names, characteristics, names of the planets, the number of in-game money at every level – this all was decided at the following brainstorming sessions.

    For the art design we decided to hire a freelancer. And not just any freelancer: a citizen of India with an outstanding portfolio, excellent English and what seemed to be common sense. People can be surprisingly deceptive. It started with him periodically just not getting in touch, and ended with drawings of disgusting quality and misunderstandings of the basic things we wanted him to do. At the same time, his behavior was surprisingly consistent: "OK, sir. No problem. It will be done".

    Spoiler


    After a week of wasted time the freelancer was fired and we were looking for a designer to join the team on a permanent basis.

    Searching for an art designer is easier for those who understand something about artwork. We do not. We judged by portfolios: if we liked one, we invited the person for a meeting. That is how we stumbled upon a unique "creator". His portfolio was amazing: a great number of beautiful, high-quality graphics in a variety of styles, hand sketches, 3D models. We invited him over and chatted for a bit. He seemed to be an adequate person. In the evening, a colleague sent everyone a link to one of the candidate's pictures. It wasn’t his drawing, and neither was about 80% of the rest of the portfolio. Moral: people lie, and Google Images is good at finding pictures.

    In the end we managed to find an artist who coped quite successfully with the assigned objectives and became an indispensable part of our team.


    6c72902ca12329adb5cb9e4a734b8e7b.jpg
    anti-aircraft gun and a black hole

    d3f677125856cd7d8381b40c5af20cfd.jpg
    explosion of a bomb


    Optimization and memory leaks


    Another couple of months passed. Design and content were ready. On actual levels, the game performed too slowly and crashed periodically after running for a while. We began to search for memory leaks and ways to optimize performance.

    Among the memory leaks, the weakest spot was Objective-C blocks and their interaction with ARC. In second place: a badly constructed object graph, namely strong references between objects that contain each other (even through multiple nested objects).
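    Strong reference cycles are not unique to ARC; the same pattern and the same fix can be illustrated in Python with `weakref` (class names here are hypothetical, chosen for the example):

    ```python
    import weakref

    class Level:
        def __init__(self):
            self.entities = []  # strong references: level owns its entities

    class Entity:
        def __init__(self, level):
            # Weak back-reference to the owner instead of a strong one,
            # so level -> entity -> level never forms a retain cycle.
            self._level = weakref.ref(level)

        @property
        def level(self):
            return self._level()  # None once the level has been freed

    level = Level()
    e = Entity(level)
    level.entities.append(e)
    assert e.level is level

    del level         # no cycle, so the Level is reclaimed immediately
    print(e.level)    # None
    ```

    In ARC terms this corresponds to a `weak` back-pointer (or a weakly captured `self` inside a block) breaking the cycle of mutual strong references.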

    The performance optimization came down to one principle: draw and process only the objects within sight of the ship and within reach of its bombs.
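    That culling principle can be sketched as a simple overlap test against an active region around the ship (its view plus bomb range). All names and numbers below are illustrative assumptions:

    ```python
    def intersects(a, b):
        """Axis-aligned bounding-box overlap test; boxes are (x, y, w, h)."""
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def active_objects(objects, ship_box, margin):
        """Keep only objects near the ship; everything else is skipped."""
        x, y, w, h = ship_box
        region = (x - margin, y - margin, w + 2 * margin, h + 2 * margin)
        return [o for o in objects if intersects(o["box"], region)]

    objects = [
        {"name": "mine", "box": (900, 500, 32, 32)},
        {"name": "turret", "box": (5000, 500, 64, 64)},  # far off-screen
    ]
    visible = active_objects(objects, ship_box=(800, 480, 128, 96), margin=300)
    print([o["name"] for o in visible])  # ['mine']
    ```

    The distant turret is neither drawn nor simulated, which is exactly why the cost per frame stays flat even on huge levels.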

    Summary. Readiness - 90%


    Half a year of development. An almost finished game. And the end of our capacity to fund it ourselves.

    Remaining:
    • art on some planets / levels
    • refinement of sound and music
    • optimization for older Apple devices
    • testing and once again testing
    Scheduled:
    • multiplayer
    • more types of weapons
    • planets and more levels
    • port to other mobile platforms, Ouya, PC and Mac

    What could have been done better


    Firstly – do not fantasize about development timelines. Our initial expectation was to spend a couple of months.

    Secondly, find an art designer for the team and start the art alongside the development; it would have saved a lot of time. And no freelancers.

    And thirdly – we should have shown the game to everyone it might interest, from the first prototype to the current state. That could have built up a base of people who already "knew" and were "talking" about Demolition Lander by the time the project launched on Kickstarter.

    Campaign on Kickstarter



    The Kickstarter campaign deserves a separate article, which, if I get the chance, I will definitely write. In short, the fundamental success factors for funding are: a fan base, and awareness of the creator's brand and/or the product. We have now launched a second campaign to raise funds for Demolition Lander; the first had to be canceled due to a failed PR strategy.

    What will happen with the second one is not yet clear, but the worry is building up.
