

About Spunya

  1. It seems the story of Experience Curiosity, a Blend4Web-powered simulator developed by NASA, is not quite over. This interactive web application featuring the famous Mars rover was nominated for the highly prestigious Webby Award (aka the Internet Oscar), granted to the most exemplary online projects of the year. After a preliminary stage of public online voting, the judges named this project the Webby Award winner in the category of Government & Civil Innovation Websites. Intriguingly, NASA's second nominated project, the Unity-powered Spacecraft 3D mobile app, wasn't as lucky! It is a tradition for Webby winners to give a short "speech" of just five words at the award ceremony. This time the phrase was "Rockin' and Rovin' on Mars", voiced by NASA's representative Brian Kumanchik, the lead and art director of the project. Brian is a veteran of interactive CG who migrated to Blender six years ago. The decision of NASA's engineers to adopt Blend4Web for the Mars rover simulator was due to a number of reasons: it works in browsers without plug-ins, is open source, is integrated with Blender, and offers a fully fledged physics engine out of the box.
  2. Hello everyone! If you are interested in web game development, HTML5 and WebGL in particular, you might like an upcoming event: the first Blend4Web Conference, which will take place on May 1 in Moscow. The primary conference language is English, and we are going to organize a live web broadcast as well! We have created a dedicated page for the conference on our website, where you can easily register or watch the talks live. The schedule will be available soon. We would be glad to see you there, read your comments, and answer any of your questions!
  3. Thanks! We have tested many Android devices, but there are really a lot of them, so your device probably has a configuration we haven't tested yet. On OS X it should work properly, although on a Mac mini there can be serious performance issues. Chrome has better WebGL stability, even on a Mac.
  4. In this article we will describe the process of creating the models for the location: geometry, textures and materials. This article is aimed at experienced Blender users who would like to familiarize themselves with creating game content for the Blend4Web engine.

Graphical content style

In order to create the game atmosphere, a non-photorealistic cartoon setting was chosen. The character and environment proportions were deliberately exaggerated to give the gameplay a comic, light-hearted feel.

Location elements

The location consists of the following elements:
- the character's action area: 5 platforms on which the main game action takes place;
- the background environment, a role performed by less-detailed ash-colored rocks;
- lava covering most of the scene surface.

At this stage the source blend files of the models and scenes are organized as follows:
- env_stuff.blend - the scene's environment elements on which the character will move;
- character_model.blend - the character's geometry, materials and armature;
- character_animation.blend - the file with the character's group of objects and its animation (including the baked one) linked to it;
- main_scene.blend - the scene with the environment elements from the other files linked to it; it also contains the lava model, the collision geometry and the lighting settings;
- example2.blend - the main file, with the scene elements and the character linked to it (more game elements will be added here later).

In this article we will describe the creation of simple low-poly geometry for the environment elements and the 5 central islands. As the game is intended for mobile devices, we decided to do without normal maps and use only diffuse and specular maps.

Making the geometry of the central islands

First of all we will make the central islands in order to establish the scene scale.
This process can be divided into 3 steps:

1) A flat outline of the future islands was made from single vertices, which were then joined into polygons and triangulated for convenient editing later.

2) The Solidify modifier, with its parameter set to 0.3, was used to push volume out of the flat outline.

3) At the last stage the Solidify modifier was applied to get a mesh for hand editing. The mesh was subdivided where needed at the edges of the islands. Following the final vision, cavities were added and the mesh was reshaped to create the illusion of rock fragments with hollows and projections. The edges were sharpened (using Edge Sharp), after which the Edge Split modifier was added with the Sharp Edges option enabled. As a result, a well-outlined shadow appears around the islands.

It's not recommended to apply modifiers directly (using the Apply button). Instead, enable the Apply Modifiers checkbox in the object settings on the Blend4Web panel; the modifiers will then be applied to the geometry automatically on export.

Texturing the central islands

Now that the geometry for the main islands has been created, let's move on to texturing and setting up the material for baking. The textures were created using a combination of baking and hand-painting techniques. Four textures were prepared altogether.

At the first stage we define the color, adding small spots and cracks to create the effect of rough, stony, dusty rock. To paint these bumps, texture brushes were used, which can be downloaded from the Internet or drawn yourself if necessary.

At the second stage the ambient occlusion effect was baked. Because the geometry is low-poly, relatively sharp transitions between light and shadow appeared as a result. These can be slightly softened with a Gaussian Blur filter in a graphics editor.

The third stage is the most time-consuming: painting a black and white texture by hand in the Texture Painting mode.
It was laid over the other two, lightening and darkening certain areas. It's necessary to keep the model's geometry in mind, so that the darker areas mostly fall into cracks and the brighter ones onto sharp geometry angles. A generic brush was used with stylus pressure sensitivity turned on. The color turned out too monotonous, so a couple of faded spots imitating volcanic dust and stone scratches were added.

In order to get more flexibility while texturing, without altering the original color texture, yet another texture was introduced. On this texture the light spots decolorize the previous three textures, while the dark spots leave the color unchanged.

You can see how the created textures were combined in the auxiliary node material scheme below. The color of the diffuse texture (1) was multiplied by itself to increase contrast in dark places. After that the color was darkened a bit using the baked ambient occlusion (2), and the hand-painted texture (3) was layered on top; the Overlay node gave the best result. At the next stage the texture with baked ambient occlusion (2) was layered again, this time with the Multiply node, in order to darken the textures in certain places. Finally, the fourth texture (4) was used as a mask to mix the decolorized result (via Hue/Saturation) with the original color texture (1). The specular map was obtained by applying a Squeeze Value node to the overall result. The result is the following picture.

Creating the background rocks

The geometry of the rocks was made with a similar technique, although with some differences. First we created low-poly geometry of the required form. On top of it we added the Bevel modifier with an angle threshold, which bevels the sharpest parts of the geometry, softening the lighting at these places.
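The Multiply and Overlay operations used in these texture combination schemes can be expressed numerically. Below is a minimal JavaScript sketch of the per-channel blend formulas (channel values normalized to [0, 1]); the function names are our illustration, not part of any Blender or Blend4Web API:

```javascript
// Per-channel blend modes as used in the texture combination scheme.
// Inputs are normalized channel values in [0, 1].

// Multiply darkens: the result is never brighter than either input.
function blendMultiply(base, blend) {
    return base * blend;
}

// Overlay darkens dark areas and lightens light ones,
// increasing contrast while preserving mid-tones.
function blendOverlay(base, blend) {
    return base < 0.5
        ? 2 * base * blend
        : 1 - 2 * (1 - base) * (1 - blend);
}

// Multiplying a texture by itself increases contrast in dark places:
// values below 1 are pushed further down, e.g. 0.5 becomes 0.25.
function selfMultiply(channel) {
    return blendMultiply(channel, channel);
}
```

This is why self-multiplying the diffuse color deepens the dark areas while leaving pure white untouched, and why Overlay was the best choice for layering the hand-painted texture: it shifts shadows and highlights without flattening the mid-tones.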
The rock textures were created in approximately the same way as the island textures. This time the decolorizing texture was not used, because that level of detail is excessive for the background. The texture created with the texture painting method is also less detailed. Below you can see the final three textures and the result of layering them over the geometry.

The texture combination scheme was simplified as well. First comes the color map (1), over which goes the baked ambient occlusion (2), and finally the hand-painted texture (3). The specular map was created from the color texture: a single texture channel was extracted (Separate RGB), corrected (Squeeze Value) and passed to the material as the specular color. There is another special feature in this scheme which makes it different from the previous one: a dirty map baked into the vertex color is overlaid (Overlay node) to create contrast between the cavities and elevations of the geometry. The final result of texturing the background rocks:

Optimizing the location elements

Let's optimize the elements we have and prepare them for display in Blend4Web. First of all we need to combine all the textures of the above-mentioned elements (the background rocks and the islands) into a single texture atlas and then re-bake them into a single texture map. To do this, let's combine the UV maps of all the geometry into a single UV map using the Texture Atlas addon. The Texture Atlas addon can be activated in Blender's settings under the Addons tab (UV category).

In the texture atlas mode, let's place the UV maps of every mesh so that they fill the future texture area evenly. It's not necessary to keep the same scale for all elements; it's recommended to allow more space for the foreground elements (the islands). After that, let's bake the diffuse texture and the specular map from the materials of the rocks and islands.
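Packing several UV maps into one atlas amounts to scaling and offsetting each mesh's UV coordinates into its own sub-rectangle of the shared texture. A hedged sketch of that remapping (the tile layout and function name are our illustration, not the addon's actual code):

```javascript
// Remap a UV coordinate from a mesh's own [0,1]x[0,1] space into a
// sub-rectangle (tile) of a shared texture atlas.
// tile = {x, y, w, h}: origin and size of the region in atlas space.
function remapToAtlas(uv, tile) {
    return [
        tile.x + uv[0] * tile.w,
        tile.y + uv[1] * tile.h
    ];
}

// Foreground elements (the islands) get a bigger tile, i.e. more
// texture space, than the background rocks (hypothetical layout).
var ISLAND_TILE = { x: 0.0,  y: 0.0, w: 0.75, h: 0.75 };
var ROCK_TILE   = { x: 0.75, y: 0.0, w: 0.25, h: 0.25 };
```

For example, the center of an island's UV map, [0.5, 0.5], would land at [0.375, 0.375] in atlas space with the tile above, while the rocks are squeezed into a much smaller corner, which is exactly the "more space for foreground elements" recommendation.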
In order to save video memory, the specular map was packed into the alpha channel of the diffuse texture, so we end up with a single file.

Let's place all the environment elements into a separate file (i.e. a library): env_stuff.blend. For convenience we will put them on different layers. Let's place the bottom of every mesh at the origin, and create a separate group with the same name for every separate element.

After the elements have been gathered in the library, we can start creating the material. The material for all the library elements, both the islands and the background rocks, is the same. This lets the engine automatically merge the geometry of all these elements into a single object, which increases performance significantly by decreasing the number of draw calls.

Setting up the material

The previously baked diffuse texture (1), with the specular map packed into its alpha channel, serves as the basis for the node material. Our scene includes lava, which the environment elements will be in contact with. Let's create the effect of the rock glowing and heating up in the contact places. To do this we will use a vertex mask (2), applied to all the library elements by painting the vertices along the bottom line of the geometry. The vertex mask was modified several times with the Squeeze Value node. First, the cooler color of the lava glow (3) is placed on top of the texture using a more blurred mask. Then a brighter yellow color (4) is added near the contact places using a slightly tightened mask, to imitate heat-fused rock. Lava should illuminate the rock from below, so to avoid shadowing in the lava-contacting places we pass the same vertex mask into the material's Emit socket. One last thing remains: to pass (5) the specular value from the diffuse texture's alpha channel to the material's Spec socket.
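The packing of a specular map into the diffuse texture's alpha channel can be illustrated on flat RGBA pixel arrays, as found in a canvas ImageData buffer. This is a sketch of the idea, not engine code:

```javascript
// Pack a grayscale specular map into the alpha channel of a diffuse
// RGBA image. Both inputs are flat arrays of 8-bit channel values;
// the diffuse image is in RGBA order, as in a canvas ImageData buffer.
function packSpecularIntoAlpha(diffuseRGBA, specularGray) {
    var out = new Uint8ClampedArray(diffuseRGBA);
    for (var i = 0; i < specularGray.length; i++)
        out[i * 4 + 3] = specularGray[i]; // alpha <- specular intensity
    return out;
}
```

The RGB channels stay untouched; only the (otherwise unused) alpha bytes are overwritten, which is why this trick costs no extra texture memory.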
Object settings

Let's enable the Apply Modifiers checkbox (as mentioned above) and also the Shadows: Receive checkbox in the object settings of the islands.

Physics

Let's create exact copies of the islands' geometry (named _collision for convenience). For these meshes we'll replace the material with a new one (named collision) and enable the Special: Collision checkbox in its settings (Blend4Web panel). This material will be used by the physics engine for collisions. Let's add the resulting objects to the same groups as the islands themselves.

Conclusion

We've finished creating the library of environment models. In one of the upcoming articles we'll demonstrate how the final game location was assembled, and also describe making the lava effect.

Link to the standalone application

The source files of the application and the scene are part of the free Blend4Web SDK distribution.
  5. Today we're going to start creating a fully functional game app with Blend4Web.

Gameplay

Let's set up the gameplay. The player, a brave warrior, moves around a limited number of platforms. Melting-hot stones keep falling on him from the sky and must be avoided; their number increases with time. Different bonuses which give various advantages appear on the location from time to time. The player's goal is to stay alive as long as possible. Later we'll add some other interesting features, but for now we'll stick to these. This small game will have a third-person view; in the future it will support mobile devices and a score system. For now we'll create the app, load the scene and add keyboard controls for the animated character. Let's begin!

Setting up the scene

Game scenes are created in Blender and then exported and loaded into applications. Let's use the files made by our artist, which are located in the blend/ directory. The creation of these resources will be described in a separate article.

Let's open the character_model.blend file and set up the character as follows: switch to the Blender Game mode and select the character_collider object, the character's physical object. Under the Physics tab, specify the settings as pictured above. Note that the physics type must be either Dynamic or Rigid Body, otherwise the character will be motionless.

The character_collider object is the parent of the "graphical" character model, which will therefore follow the invisible physical model. Note that the lowest points of the capsule and the avatar differ a bit in height. This compensates for the Step height parameter, which lifts the character above the surface in order to pass over small obstacles.

Now let's open the main game_example.blend file, from which we'll export the scene. The following components are linked to this file:

- The character group of objects (from the character_model.blend file).
- The environment group of objects (from the main_scene.blend file); this group contains the static scene models and also their copies with the collision materials.
- The baked animations character_idle_01_B4W_BAKED and character_run_B4W_BAKED (from the character_animation.blend file).

NOTE: To link components from another file, go to File -> Link and select the file, then go to the corresponding datablock and select the components you need. You can link anything you want, from a single animation to a whole scene.

Make sure that the Enable physics checkbox is turned on in the scene settings. The scene is ready; let's move on to programming.

Preparing the necessary files

Let's place the following files into the project's root:

- the engine, b4w.min.js;
- the app.js addon for the engine;
- the physics engine, uranium.js.

The files we'll be working with are game_example.html and game_example.js. Let's link all the necessary scripts to the HTML file and style the page:

    body {
        margin: 0;
        padding: 0;
    }

Next we'll open the game_example.js script and add the following code:

    "use strict"

    if (b4w.module_check("game_example_main"))
        throw "Failed to register module: game_example_main";

    b4w.register("game_example_main", function(exports, require) {

    var m_anim  = require("animation");
    var m_app   = require("app");
    var m_main  = require("main");
    var m_data  = require("data");
    var m_ctl   = require("controls");
    var m_phy   = require("physics");
    var m_cons  = require("constraints");
    var m_scs   = require("scenes");
    var m_trans = require("transform");
    var m_cfg   = require("config");

    var _character;
    var _character_body;

    var ROT_SPEED = 1.5;
    var CAMERA_OFFSET = new Float32Array([0, 1.5, -4]);

    exports.init = function() {
        m_app.init({
            canvas_container_id: "canvas3d",
            callback: init_cb,
            physics_enabled: true,
            alpha: false,
            physics_uranium_path: "uranium.js"
        });
    }

    function init_cb(canvas_elem, success) {
        if (!success) {
            console.log("b4w init failure");
            return;
        }
        m_app.enable_controls(canvas_elem);

        window.onresize = on_resize;
        on_resize();
        load();
    }

    function on_resize() {
        var w = window.innerWidth;
        var h = window.innerHeight;
        m_main.resize(w, h);
    };

    function load() {
        m_data.load("game_example.json", load_cb);
    }

    function load_cb(root) {
    }

    });

    b4w.require("game_example_main").init();

If you have read the Creating an Interactive Web Application tutorial, there won't be much new here. At this stage all the necessary modules are linked, the init function and two callbacks are defined, and the app window can be resized with the on_resize function. Note the additional physics_uranium_path initialization parameter, which specifies the path to the physics engine file.

The global variable _character is declared for the physics object, while _character_body is for the animated model. The two constants ROT_SPEED and CAMERA_OFFSET are also declared for later use. At this stage we can already run the app and look at the static scene with the motionless character.

Moving the character

Let's add the following code to the loading callback:

    function load_cb(root) {
        _character = m_scs.get_first_character();
        _character_body = m_scs.get_object_by_empty_name("character",
                                                         "character_body");

        setup_movement();
        setup_rotation();
        setup_jumping();

        m_anim.apply(_character_body, "character_idle_01");
        m_anim.play(_character_body);
        m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
    }

First we save the physical character model to the _character variable, and the animated model to _character_body. The last three lines set up the character's starting animation: animation.apply() sets up the animation by name, animation.play() plays it back, and animation.set_behavior() changes the animation behavior, in our case making it cyclic.

NOTE: Skeletal animation should be applied to the character object which has an Armature modifier set up for it in Blender.
Before defining the setup_movement(), setup_rotation() and setup_jumping() functions, it's important to understand how Blend4Web's event-driven model works. We recommend reading the corresponding section of the user manual; here we'll only glance over it.

In order to generate an event when certain conditions are met, a sensor manifold should be created.

NOTE: You can check out all the possible sensors in the corresponding section of the API documentation.

Next we have to define the logic function, describing which state (true or false) the sensors of the manifold should be in for the sensor callback to receive a positive result. Then we create a callback, which performs the actual actions. Finally, the controls.create_sensor_manifold() function is called for the sensor manifold; it is responsible for processing the sensors' values. Let's see how this works in our case. Define the setup_movement() function:

    function setup_movement() {
        var key_w    = m_ctl.create_keyboard_sensor(m_ctl.KEY_W);
        var key_s    = m_ctl.create_keyboard_sensor(m_ctl.KEY_S);
        var key_up   = m_ctl.create_keyboard_sensor(m_ctl.KEY_UP);
        var key_down = m_ctl.create_keyboard_sensor(m_ctl.KEY_DOWN);

        var move_array = [
            key_w, key_up,
            key_s, key_down
        ];

        var forward_logic  = function(s){return (s[0] || s[1])};
        var backward_logic = function(s){return (s[2] || s[3])};

        function move_cb(obj, id, pulse) {
            if (pulse == 1) {
                switch(id) {
                case "FORWARD":
                    var move_dir = 1;
                    m_anim.apply(_character_body, "character_run");
                    break;
                case "BACKWARD":
                    var move_dir = -1;
                    m_anim.apply(_character_body, "character_run");
                    break;
                }
            } else {
                var move_dir = 0;
                m_anim.apply(_character_body, "character_idle_01");
            }

            m_phy.set_character_move_dir(obj, move_dir, 0);

            m_anim.play(_character_body);
            m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
        };

        m_ctl.create_sensor_manifold(_character, "FORWARD", m_ctl.CT_TRIGGER,
                move_array, forward_logic, move_cb);
        m_ctl.create_sensor_manifold(_character, "BACKWARD", m_ctl.CT_TRIGGER,
                move_array, backward_logic, move_cb);
    }

We create 4 keyboard sensors: for the up arrow, the down arrow, and the W and S keys. We could have managed with two, but we want to mirror the controls on the letter keys as well as on the arrow keys. We append them to move_array.

Now for the logic functions. We want movement to occur when either of two keys in move_array is pressed. This behavior is implemented with the following logic function:

    function(s) { return (s[0] || s[1]) }

The most important things happen in the move_cb() function. Here obj is our character. The pulse argument becomes 1 when any of the defined keys is pressed. We decide whether the character moves forward (move_dir = 1) or backward (move_dir = -1) based on id, which corresponds to one of the sensor manifolds defined below. The run and idle animations are also switched in the same blocks. Moving the character is done through the following call:

    m_phy.set_character_move_dir(obj, move_dir, 0);

Two sensor manifolds, for moving forward and backward, are created at the end of the setup_movement() function. They have the CT_TRIGGER type, i.e. they snap into action every time the sensor values change.

At this stage the character is already able to run forward and backward. Now let's add the ability to turn.
Turning the character

Here is the definition of the setup_rotation() function:

    function setup_rotation() {
        var key_a     = m_ctl.create_keyboard_sensor(m_ctl.KEY_A);
        var key_d     = m_ctl.create_keyboard_sensor(m_ctl.KEY_D);
        var key_left  = m_ctl.create_keyboard_sensor(m_ctl.KEY_LEFT);
        var key_right = m_ctl.create_keyboard_sensor(m_ctl.KEY_RIGHT);

        var elapsed_sensor = m_ctl.create_elapsed_sensor();

        var rotate_array = [
            key_a, key_left,
            key_d, key_right,
            elapsed_sensor
        ];

        var left_logic  = function(s){return (s[0] || s[1])};
        var right_logic = function(s){return (s[2] || s[3])};

        function rotate_cb(obj, id, pulse) {
            var elapsed = m_ctl.get_sensor_value(obj, "LEFT", 4);

            if (pulse == 1) {
                switch(id) {
                case "LEFT":
                    m_phy.character_rotation_inc(obj, elapsed * ROT_SPEED, 0);
                    break;
                case "RIGHT":
                    m_phy.character_rotation_inc(obj, -elapsed * ROT_SPEED, 0);
                    break;
                }
            }
        }

        m_ctl.create_sensor_manifold(_character, "LEFT", m_ctl.CT_CONTINUOUS,
                rotate_array, left_logic, rotate_cb);
        m_ctl.create_sensor_manifold(_character, "RIGHT", m_ctl.CT_CONTINUOUS,
                rotate_array, right_logic, rotate_cb);
    }

As we can see, it is very similar to setup_movement(). An elapsed sensor was added, which constantly generates a positive pulse. This allows us to get the time elapsed since the previous rendering frame inside the callback, using the controls.get_sensor_value() function. We need it to calculate the turning speed correctly.

The type of the sensor manifolds has changed to CT_CONTINUOUS, i.e. the callback is executed in every frame, not only when the sensor values change. The following method turns the character around the vertical axis:

    m_phy.character_rotation_inc(obj, elapsed * ROT_SPEED, 0)

The ROT_SPEED constant is defined to tweak the turning speed.
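Multiplying by the elapsed time is what makes the turn rate frame-rate independent: the accumulated angle after one second is the same whether the scene renders at 60 or at 30 frames per second. A quick numerical check of this idea, outside the engine:

```javascript
var ROT_SPEED = 1.5; // radians per second, as in the article

// Simulate accumulating the rotation for one second at a given
// frame rate, mirroring what rotate_cb does with the elapsed sensor.
function angleAfterOneSecond(fps) {
    var angle = 0;
    var elapsed = 1 / fps; // seconds since the previous frame
    for (var frame = 0; frame < fps; frame++)
        angle += elapsed * ROT_SPEED;
    return angle;
}
```

Without the elapsed factor (i.e. a fixed increment per frame), a 60 fps machine would turn the character twice as fast as a 30 fps one.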
Character jumping

The last control setup function is setup_jumping():

    function setup_jumping() {
        var key_space = m_ctl.create_keyboard_sensor(m_ctl.KEY_SPACE);

        var jump_cb = function(obj, id, pulse) {
            if (pulse == 1) {
                m_phy.character_jump(obj);
            }
        }

        m_ctl.create_sensor_manifold(_character, "JUMP", m_ctl.CT_TRIGGER,
                [key_space], function(s){return s[0]}, jump_cb);
    }

The space key is used for jumping. When it is pressed, the following method is called:

    m_phy.character_jump(obj)

Now we can control our character!

Moving the camera

The last thing we'll cover here is attaching the camera to the character. Let's add yet another function call, setup_camera(), to the load_cb() callback. This function looks as follows:

    function setup_camera() {
        var camera = m_scs.get_active_camera();
        m_cons.append_semi_soft_cam(camera, _character, CAMERA_OFFSET);
    }

The CAMERA_OFFSET constant defines the camera position relative to the character: 1.5 meters above (the Y axis in WebGL) and 4 meters behind (the Z axis in WebGL). This function finds the scene's active camera and creates a constraint for it to follow the character smoothly.

That's enough for now. Let's run the app and enjoy the result!

Link to the standalone application

The source files of the application and the scene are part of the free Blend4Web SDK distribution.
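The target point that the semi-soft camera constraint pulls the camera toward can be pictured as the character's position plus CAMERA_OFFSET. The sketch below is our simplification (it ignores the character's rotation and the constraint's smoothing, which the real constraints.append_semi_soft_cam handles internally):

```javascript
// CAMERA_OFFSET: 1.5 m above (Y) and 4 m behind (-Z) the character,
// in WebGL coordinates, as in the article.
var CAMERA_OFFSET = new Float32Array([0, 1.5, -4]);

// Simplified camera target: the character's world position plus the
// offset, component by component (no rotation, no smoothing).
function cameraTarget(characterPos, offset) {
    return [
        characterPos[0] + offset[0],
        characterPos[1] + offset[1],
        characterPos[2] + offset[2]
    ];
}
```

For a character standing at [2, 0, 5], the camera target comes out at [2, 1.5, 1]: above and behind, which is exactly the third-person framing the game needs.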
  6. This time we'll talk about the main stages of character modeling and animation, and also create the effect of the deadly falling rocks.

Character model and textures

The character data is split between two files: character_model.blend contains the geometry, the material and the armature, while character_animation.blend contains the animation for the character. The character model mesh is low-poly.

This model, just like all the others, lacks a normal map. The color texture was entirely painted on the model in Blender using the Texture Painting mode. The texture was then supplemented (4) with the baked ambient occlusion map (2). Its color (1) was initially much paler than required and was enhanced (3) with a Multiply node in the material, which allowed for fine-tuning the final texture's saturation.

After baking we received the resulting diffuse texture, from which we created the specular map. We brightened this specular map in the spots corresponding to the blade, the metal clothing elements, the eyes and the hair. As usual, in order to save video memory, this texture was packed into the alpha channel of the diffuse texture.

Character material

Let's add some nodes to the character material to create the highlighting effect where the character contacts the lava. We need two height-dependent procedural masks (2 and 3) to implement this effect. One of these masks (2) paints the feet in the lava-contacting spots (yellow), while the other (3) paints the character's legs up to just above the knees (orange). The material's specular value is taken (4) from the diffuse texture's alpha channel (1).

Character animation

Because the character is seen mainly from afar and from behind, we created a simple armature with a limited number of inverse kinematics control bones. A group of objects, including the character model and its armature, has been linked to the character_animation.blend file.
After that we created a proxy object for this armature (Object > Make Proxy...) to make its animation possible. At this stage of game development we need just three animation sequences: looping run, idle and death animations. Using a specially developed tool, the Blend4Web Anim Baker, all three animations were baked and then linked to the main scene file (game_example.blend). After export from this file, the animation becomes available to the programming part of the game.

Special effects

During the game, red-hot rocks will keep falling on the character. To visualize this, a set of 5 elements is created for each rock: the geometry and material of the rock itself, the halo around the rock, the explosion particle system, the particle system for the smoke trail of the falling rock, and the marker under the rock. These elements live in the lava_rock.blend file and are linked to the game_example.blend file. Each element of the rock set has a unique name for convenient access from the programming part of the application.

Falling rocks

For diversity, we made three rock geometry types. The texture was painted by hand in the Texture Painting mode. The material is generic, without nodes, with the Shadeless checkbox enabled.

For the effect of glowing red-hot rock, we created an egg-shaped object with the narrow part pointing down, to suggest rapid movement. The material of the shiny areas is entirely procedural, without any textures. First we apply a Dot Product node to the geometry normals and the vector (0, 0, -1) in order to obtain a view-dependent gradient (similar to the Fresnel effect). Then we squeeze and shift the gradient in two different ways and get two masks (2 and 3). One of them (the widest) is painted with the color gradient (5), while the other is subtracted from the first (4) to use the resulting ring as a transparency map.
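The view-dependent gradient is just the dot product of the surface normal with the vector (0, 0, -1); squeezing and shifting that scalar in two ways and subtracting the results leaves a ring. A numeric sketch of the idea (the logistic `squeeze` below is our stand-in for Blender's Squeeze Value node; the function names are ours):

```javascript
function dot3(a, b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// View-dependent gradient: zero where the normal is perpendicular to
// the view axis (silhouette), +/-1 where it is parallel to it.
function viewGradient(normal) {
    return dot3(normal, [0, 0, -1]);
}

// Logistic squeeze of a value around a center with a given sharpness;
// an assumed approximation of Blender's Squeeze Value node.
function squeeze(value, width, center) {
    return 1 / (1 + Math.exp(-(value - center) * width));
}

// Two differently squeezed copies of the gradient; subtracting the
// narrower mask from the wider one leaves a ring-shaped band.
function ringMask(g, wide, narrow) {
    return squeeze(g, wide.width, wide.center) -
           squeeze(g, narrow.width, narrow.center);
}
```

The band peaks between the two squeeze centers and falls off on both sides, which is what produces the glowing rim when the difference is used as a transparency map.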
The empty node group named NORMAL_VIEW is used for compatibility: in the Geometry node the normals are in camera space, but in Blend4Web they are in world space. Explosions The red-hot rocks will explode upon contact with a rigid surface. To create the explosion effect we'll use a particle system with a pyramid-shaped emitter. For the particle system we'll create a texture with an alpha channel - this will imitate fire and smoke puffs: Let's create a simple material and attach the texture to it: Then we set up a particle system using the just-created material: Activate particle fade-out with the additional settings on the Blend4Web panel: To increase the size of the particles during their life span we create a ramp for the particle system: Now the explosion effect is up and running! Smoke trail While a rock is falling, a smoke trail follows it: This effect can be set up quite easily. First of all let's create a smoke material using the same texture as for the explosions. In contrast to the previous material, this one uses a procedural blend texture for painting the particles during their life span - red in the beginning and gray in the end - to mimic the intense burning: Now proceed to the particle system. A simple plane with its normal oriented down will serve as an emitter. This time the emission is looping and more drawn out: As before, this particle system has a ramp for progressively reducing the particle size: Marker under the rock All that remains is to add a minor detail - the marker indicating the spot the rock is falling towards, just to make the player's life easier. We need a simple unwrapped plane. Its material is fully procedural; no textures are used. The Average node is applied to the UV data to obtain a radial gradient (1) with its center in the middle of the plane. The further steps are already familiar. Two transformations result in two masks (2 and 3) of different sizes. Subtracting one from the other gives the visible ring (4). 
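As a plain-JavaScript illustration of the math behind this marker material (these helper names are assumptions for the sketch, not the actual Blender node setup), the radial gradient and the mask subtraction come down to the following:

```javascript
function clamp01(x) { return Math.min(1, Math.max(0, x)); }

// Radial gradient centered at UV (0.5, 0.5), analogous to the
// Average-based gradient (1): 0 at the center, ~1 at the plane edges.
function radialGradient(u, v) {
  var dx = u - 0.5, dy = v - 0.5;
  return Math.sqrt(dx * dx + dy * dy) * 2;
}

// A soft threshold ("squeeze and shift") producing a 0..1 mask.
function mask(value, threshold, softness) {
  return clamp01((value - threshold) / softness);
}

// Masks (2) and (3) of different sizes; their difference (4) is a ring
// that equals 1 only in the band between the two thresholds.
function ringMask(u, v) {
  var g = radialGradient(u, v);
  var outer = mask(g, 0.3, 0.1); // larger disk
  var inner = mask(g, 0.6, 0.1); // smaller disk
  return outer - inner;
}
```

Sampling `ringMask` at the plane center gives 0, at a middle radius gives 1, and near the edge gives 0 again - exactly the visible ring shape.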
The transparency mask (6) is tweaked and passed to the material alpha channel. Another mask is derived after squeezing the ring a bit (5). It is painted in two colors (7) and passed to the Color socket. Conclusion At this stage the gameplay content is ready. After merging it with the programming part described in the previous article of this series we may enjoy the rich world packed with adventure! Link to the standalone application The source files of the models are part of the free Blend4Web SDK distribution.
  7. We continue our gamedev series about the adventures of Pyatigor! (Yes, this is how we decided to name our protagonist.) In this article, we'll explain how to create environment FX and other details which make the game world more diverse and dynamic. Environment FX By running the game or just viewing the screenshot above, you can see new elements in the scene. Let's take a look at them one by one: heat haze, i.e. optical distortion caused by differences in air temperature; smoking near the perimeter of the level; lava flaming; small rocks floating in lava; smoking in the sky. First of all, let's talk about heat haze, smoking and lava flaming. All these effects were created using dynamically updated materials. For the heat haze material, a solid encircling geometry in the form of a cylinder (1) was created around the islands. As a result, all objects behind this geometry will be distorted when viewed from the center. For the smoking material, geometry around the islands was created as well, but with gaps (2). The lava flaming geometry (3) is situated in places where other objects make contact with lava. Heat Haze This effect is based on the refraction effect coupled with UV animation of a normal map. Let's take a look at the material. The normal map (1) is at the heart of this material and is used twice with different scaling. Thanks to a Blend4Web-specific Time node, whose output is added to one of the UV channels (2), this texture glides across the material creating an illusion of rising heated air. The normal map is passed to a Refraction node (3), which is yet another Blend4Web-specific node. Also, a mask is generated (4) to be passed into this Refraction node in order to specify where distortions will be observed and where they will not. The Levels Of Quality nodes (5) situated before the final color and alpha cause this material to disappear at low quality settings, where the refraction effect is not available. 
The above picture shows how it works. On the left the original red sphere is shown ("clean"), then the mask is pictured ("mask"), and further to the right the normal map is shown ("normal") which glides along the UV. This results in visible distortions ("refraction") of the sphere when observing it through the material. Smoking Material The material for the smoking effect is made similarly to the heat haze. It is based on a tiled texture resembling smoke (1), which is passed to the alpha channel of the material. It moves along the UV coordinates under the influence of the Time node (2) and is combined, at different scales, with the vertex color (3 and 4). This vertex color fades the material out at the edges of the geometry. In the above picture you can see how it works. In this case, black corresponds to fully transparent areas. Lava Flaming Lava flames are located near bunches of stones. Their geometry is constructed of groups of scattered polygons, which are painted with a black and white vertex color mask, darker towards the top. Again, this material uses the same UV animation principle. Moreover, it uses the same tiled smoke texture (1). With a Time node it is shifted along the UV in three different directions (2). The resulting color obtained from this shifting is combined with the vertex color, and then all this is used to generate the alpha mask (3). In addition, this texture is mixed a bit differently, painted with fire-like colors (4) and passed into the diffuse channel of the output. Floating Stones In order to add further details, I have also added small stones floating in the lava. While the source .blend file contains only five different stones, I managed to make seven variations by adding or excluding different stones from the groups. For optimization purposes, I re-used the island material for these stones. If you launch the game, you may notice that these stones are slightly rocking. 
This effect was achieved using procedural vertex animation, namely Wind Bending, which is normally used for grass, bushes and so on. This animation can be enabled for objects under the Blend4Web panel. In this particular case I only needed to tweak two parameters: Angle, the maximum object inclination, and Frequency, which is how fast the bending happens. The wind bending effect is a simple and resource-conserving way to deform geometry compared with other types of animation. Its settings are described in detail in the user manual. Smoking in the Sky If you look at the sky, you may notice that it is now much more diverse because of the smoking. I did that with a dynamically updated material. Once again, I re-used the smoke texture (1) and made it shift with a Time node (2). The important distinction from the above-mentioned materials is that the texture moves not through the UV coordinates but through global coordinates. The only thing left was to paint this texture with the right colors (3). It is also worth noting the Levels Of Quality node, which switches the material to a primitive two-color gradient in the low quality mode. The Levels Of Quality node allows creating parallel settings inside a single material for rendering at different quality modes. Now the scene looks much more lively and detailed. However, the most interesting things are still ahead: I mean the gameplay elements for the player to interact with and for which this small virtual space was created. But that will be covered in one of the following articles - don't miss it! Launch the game! Move with WASD. Attack with Enter. Kill the golems, collect the gems and put them into the obelisks. Each obelisk requires 4 gems. Golems can knock gems out of the obelisks. The source files will be included in the upcoming release of the free Blend4Web SDK.
  8. This is the third article in the Making a Game series. In this article we'll consider assembling the game scene using the models prepared at the previous stage, setting up the lighting and the environment, and we'll also look in detail at creating the lava effect. Assembling the game scene Let's assemble the scene's visual content in the main_scene.blend file. We'll add the previously prepared environment elements from the env_stuff.blend file. Open the env_stuff.blend file via the File -> Link menu, go to the Group section, add the geometry of the central islands (1) and the background rocks (2) and arrange them on the scene. Now we need to create the surface geometry of the future lava. The surface can be inflated a bit to deepen the effect of the horizon receding into the distance. Let's prepare 5 holes in the center, copying the outlines of the 5 central islands, for the vertex mask which we'll introduce later. We'll also copy this geometry and assign the collision material to it as described in the previous article. A simple cube will serve as the environment, with its center located at the horizon level for convenience. The cube's normals must be directed inside. Let's set up a simple node material for it. We get a vertical gradient (1), located at the level of the proposed horizon, from the Global socket. After some squeezing and shifting with the Squeeze Value node (2) we add the color (3). The result is passed directly into the Output node without the use of an intermediate Material node in order to make this object shadeless. Setting up the environment We'll set up the fog under the World tab using the Fog density and Fog color parameters. Let's enable ambient lighting with the Environment Lighting option and set up its intensity (Energy). We'll select the two-color hemispheric lighting model Sky Color and tweak the Zenith Color and Horizon Color. Next, place two light sources into the scene. 
The first one of the Sun type will illuminate the scene from above. Enable the Generate Shadows checkbox for it to be a shadow caster. We'll put the second light source (also Sun) below and direct it vertically upward. This source will imitate the lighting from lava. Then add a camera for viewing the exported scene. Make sure that the camera's Move style is Target (look at the camera settings on the Blend4Web panel), i.e. the camera is rotating around a certain pivot. Let's define the position of this pivot on the same panel (Target location). Also, distance and vertical angle limits can be assigned to the camera for convenient scene observation in the Camera limits section. Adding the scene to the scene viewer At this stage a test export of the scene can be performed: File -> Export -> Blend4Web (.json). Let's add the exported scene to the list of the scene viewer external/deploy/assets/assets.json using any text editor, for example: { "name": "Tutorials", "items":[ ... { "name": "Game Example", "load_file": "../tutorials/examples/example2/main_scene.json" }, ... ] } Then we can open the scene viewer apps_dev/viewer/viewer_dev.html with a browser, go to the Scenes panel and select the scene which is added to the Tutorials category. The tools of the scene viewer are useful for tweaking scene parameters in real time. Setting up the lava material We'll prepare two textures by hand for the lava material, one is a repeating seamless diffuse texture and another will be a black and white texture which we'll use as a mask. To reduce video memory consumption the mask is packed into the alpha channel of the diffuse texture. The material consists of several blocks. The first block (1) constantly shifts the UV coordinates for the black and white mask using the TIME (2) node in order to imitate the lava flow movement. The TIME node is basically a node group with a reserved name. This group is replaced by the time-generating algorithm in the Blend4Web engine. 
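Conceptually, the engine substitutes a steadily growing time value which the material then adds to the UV coordinates of the mask. A minimal plain-JavaScript sketch of this UV scrolling (the function name and shape are assumptions for illustration, not engine code):

```javascript
// Scroll a UV coordinate by speed * time, wrapping into [0, 1) so that a
// seamless (repeating) texture tiles correctly while appearing to flow.
function scrollUV(uv, speed, timeSec) {
  function wrap(x) { return x - Math.floor(x); }
  return [wrap(uv[0] + speed[0] * timeSec),
          wrap(uv[1] + speed[1] * timeSec)];
}
```

Each frame the growing time value shifts the mask's UV a little further, which the eye reads as a continuous lava flow.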
To add this node it's enough to create a node group named TIME which has an output of the Value type. It can be left empty or can have for example a Value node for convenient testing right in Blender's viewport. In the other two blocks (4 and 5) the modified mask stretches and squeezes the UV in certain places, creating a swirling flow effect for the lava. The results are mixed together in block 6 to imitate the lava flow. Furthermore, the lava geometry has a vertex mask (3), using which a clean color (7) is added in the end to visualize the lava's burning hot spots. To simulate the lava glow the black and white mask (8) is passed to the Emit socket. The mask itself is derived from the modified lava texture and from a special procedural mask (9), which reduces the glow effect with distance. Conclusion This is where the assembling of the game scene is finished. The result can be exported and viewed in the engine. In one of the upcoming articles we'll show the process of modeling and texturing the visual content for the character and preparing it for the Blend4Web engine. Link to the standalone application The source files of the application and the scene are part of the free Blend4Web SDK distribution.
  9. We continue the exciting process of creating a mini Blend4Web game. Now we'll introduce some gameplay elements: red-hot rocks which fall from the sky and damage the character. New objects in the Blender scene Let's prepare new game objects in the blend/lava_rock.blend file: There are 3 sorts of falling rocks: rock_01, rock_02, rock_03. Smoke trails for these rocks - 3 identical particle system emitters, parented to the rock objects: smoke_emitter_01, smoke_emitter_02, smoke_emitter_03. Particle systems for the rock explosions: burst_emitter_01, burst_emitter_02, burst_emitter_03. Markers that appear under the falling rocks: mark_01, mark_02, mark_03. We'll describe the creation of these objects in one of the next articles. For convenience, let's put all these objects into a single group, lava_rock, and link this group to the main file game_example.blend. Then we double the number of these objects in the scene by copying the empty object that carries the duplication group. As a result we obtain a pool of 6 falling rocks, which we'll access by the names of the two empty objects - lava_rock and lava_rock.001. Health bar Let's add four HTML elements to render the health bar. These elements will move when our character receives damage. The corresponding style descriptions have been added to the game_example.css file. Constants and variables First of all, let's initialize some new constants for gameplay tweaking and also a global variable for the character's hit points: var ROCK_SPEED = 2; var ROCK_DAMAGE = 20; var ROCK_DAMAGE_RADIUS = 0.75; var ROCK_RAY_LENGTH = 10; var ROCK_FALL_DELAY = 0.5; var LAVA_DAMAGE_INTERVAL = 0.01; var MAX_CHAR_HP = 100; var _character_hp; var _vec3_tmp = new Float32Array(3); The _vec3_tmp typed array is created for storing intermediate calculation results in order to reduce the load on the JavaScript garbage collector. Let's set the _character_hp value to MAX_CHAR_HP in the load_cb() function - our character is in full health when the game starts. 
_character_hp = MAX_CHAR_HP; Falling rocks - initialization The stack of function calls now looks like this: var elapsed_sensor = m_ctl.create_elapsed_sensor(); setup_movement(up_arrow, down_arrow); setup_rotation(right_arrow, left_arrow, elapsed_sensor); setup_jumping(touch_jump); setup_falling_rocks(elapsed_sensor); setup_lava(elapsed_sensor); setup_camera(); For performance reasons, elapsed_sensor is initialized only once and passed as an argument to the functions. Let's look at the new function for setting up the rock falling: function setup_falling_rocks(elapsed_sensor) { var ROCK_EMPTIES = ["lava_rock","lava_rock.001"]; var ROCK_NAMES = ["rock_01", "rock_02", "rock_03"]; var BURST_EMITTER_NAMES = ["burst_emitter_01", "burst_emitter_02", "burst_emitter_03"]; var MARK_NAMES = ["mark_01", "mark_02", "mark_03"]; var falling_time = {}; ... } The first thing we see is the population of arrays with the names of the falling rocks and related objects. The falling_time dictionary serves for tracking the time passed since each rock started falling. 
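To illustrate this bookkeeping (a simplified plain-JavaScript sketch with assumed helper names, not the tutorial's exact code): each pooled rock gets its own entry in falling_time, keyed by the rock's unique name, and the elapsed sensor's value is accumulated into it every frame:

```javascript
var falling_time = {};

// Register a rock in the timer dictionary, as done in the setup loop.
function init_rock_timer(rock_name) {
  falling_time[rock_name] = 0;
}

// Accumulate the frame's elapsed time (what rock_fall_cb receives from
// the elapsed sensor); returns true once the fall delay has passed.
function update_rock_timer(rock_name, elapsed, fall_delay) {
  falling_time[rock_name] += elapsed;
  return falling_time[rock_name] >= fall_delay;
}
```

Because every rock has its own key, the six pooled rocks can fall on independent schedules while sharing one dictionary.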
Falling rocks - sensors Let's set up sensors to describe the behavior of each falling rock within the double loop: for (var i = 0; i < ROCK_EMPTIES.length; i++) { var dupli_name = ROCK_EMPTIES[i]; for (var j = 0; j < ROCK_NAMES.length; j++) { var rock_name = ROCK_NAMES[j]; var burst_name = BURST_EMITTER_NAMES[j]; var mark_name = MARK_NAMES[j]; var rock = m_scs.get_object_by_dupli_name(dupli_name, rock_name); var burst = m_scs.get_object_by_dupli_name(dupli_name, burst_name); var mark = m_scs.get_object_by_dupli_name(dupli_name, mark_name); var coll_sens_lava = m_ctl.create_collision_sensor(rock, "LAVA", true); var coll_sens_island = m_ctl.create_collision_sensor(rock, "ISLAND", true); var ray_sens = m_ctl.create_ray_sensor(rock, [0, 0, 0], [0, -ROCK_RAY_LENGTH, 0], false, null); m_ctl.create_sensor_manifold(rock, "ROCK_FALL", m_ctl.CT_CONTINUOUS, [elapsed_sensor], null, rock_fall_cb); m_ctl.create_sensor_manifold(rock, "ROCK_CRASH", m_ctl.CT_SHOT, [coll_sens_island, coll_sens_lava], function(s){return s[0] || s[1]}, rock_crash_cb, burst); m_ctl.create_sensor_manifold(rock, "MARK_POS", m_ctl.CT_CONTINUOUS, [ray_sens], null, mark_pos_cb, mark); set_random_position(rock); var rock_name = m_scs.get_object_name(rock); falling_time[rock_name] = 0; } } The outer loop iterates through the dupli-groups (remember - there are just two of them). The inner loop processes the rock objects, explosion particle systems (burst) and markers. It's not required to process the smoke trail particle systems because they are parented to the falling rocks and follow them automatically. The coll_sens_lava and coll_sens_island sensors detect collisions of the rocks with the lava surface and the ground. The third create_collision_sensor() function argument means that we want to receive the collision point coordinates inside the callback. The ray_sens sensor detects the distance between the falling rock and the object under it, and is used to place the marker. 
The created ray starts at the [0,0,0] object coordinates and ends 10 meters beneath it. The last argument - null - means that collisions will be detected with any objects regardless of their collision_id. Falling rocks - sensor manifolds Then we use the sensor model that we learned about in the previous articles. Three sensor manifolds are formed with the just-created sensors: ROCK_FALL is responsible for the rock falling, ROCK_CRASH processes impacts with the ground and the lava, and MARK_POS places the marker under the rock. Also, let's randomly position the rock at some height with the set_random_position() function: function set_random_position(obj) { var pos = _vec3_tmp; pos[0] = 8 * Math.random() - 4; pos[1] = 4 * Math.random() + 2; pos[2] = 8 * Math.random() - 4; m_trans.set_translation_v(obj, pos); } Last but not least - the time for tracking the rock falling is initialized to zero in the falling_time dictionary: falling_time[rock_name] = 0; The rock names are used as keys in this object. These names are unique despite the fact that several identical objects are present in the scene. The thing is that in Blend4Web the resulting name of an object which is linked using a duplication group is composed of the group name and the original name of the object, e.g. lava_rock.001*rock_03. Callback for the falling time The rock_fall_cb() callback is as follows: function rock_fall_cb(obj, id, pulse) { var elapsed = m_ctl.get_sensor_value(obj, id, 0); var obj_name = m_scs.get_object_name(obj); falling_time[obj_name] += elapsed; if (falling_time[obj_name]
  10. Thanks! Honestly, I think this kind of application is the future of gaming. Developing for only one platform and getting a result on every device is a really nice approach, in my opinion.
  11. This is the fourth part of the Blend4Web gamedev tutorial. Today we'll add support for mobile devices and program the touch controls. Before reading this article, please look at the first part of this series, in which the keyboard controls are implemented. We will use the Android and iOS 8 platforms for testing. Detecting mobile devices In general, mobile devices do not perform as well as desktops, so we'll lower the rendering quality. We'll detect a mobile device with the following function: function detect_mobile() { if( navigator.userAgent.match(/Android/i) || navigator.userAgent.match(/webOS/i) || navigator.userAgent.match(/iPhone/i) || navigator.userAgent.match(/iPad/i) || navigator.userAgent.match(/iPod/i) || navigator.userAgent.match(/BlackBerry/i) || navigator.userAgent.match(/Windows Phone/i)) { return true; } else { return false; } } The init function now looks like this: exports.init = function() { if(detect_mobile()) var quality = m_cfg.P_LOW; else var quality = m_cfg.P_HIGH; m_app.init({ canvas_container_id: "canvas3d", callback: init_cb, physics_enabled: true, quality: quality, show_fps: true, alpha: false, physics_uranium_path: "uranium.js" }); } As we can see, a new initialization parameter - quality - has been added. In the P_LOW profile there are no shadows or post-processing effects. This will allow us to dramatically increase the performance on mobile devices. Control elements on the HTML page Let's add the following elements to the HTML file: The control_circle element will appear when the screen is touched, and will be used for directing the character. The control_tap element is a small marker following the finger. The control_jump element is a jump button located in the bottom right corner of the screen. By default all these elements are hidden (the visibility property). They will become visible after the scene is loaded. The styles for these elements can be found in the game_example.css file. 
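As an aside, the user-agent checks inside detect_mobile() can be collapsed into a single regular expression. Here is a standalone variant (the UA string is passed in explicitly so it can be exercised outside a browser; this is a sketch, not the tutorial's code):

```javascript
// Same platform list as detect_mobile() above, collapsed into one regex.
function is_mobile_ua(ua) {
  return /Android|webOS|iPhone|iPad|iPod|BlackBerry|Windows Phone/i.test(ua);
}
```

Note that user-agent sniffing is inherently approximate; it is used here only to pick a default rendering quality, not for anything critical.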
Processing the touch events Let's look at the callback which is executed when the scene is loaded: function load_cb(root) { _character = m_scs.get_first_character(); _character_body = m_scs.get_object_by_empty_name("character", "character_body"); var right_arrow = m_ctl.create_custom_sensor(0); var left_arrow = m_ctl.create_custom_sensor(0); var up_arrow = m_ctl.create_custom_sensor(0); var down_arrow = m_ctl.create_custom_sensor(0); var touch_jump = m_ctl.create_custom_sensor(0); if(detect_mobile()) { document.getElementById("control_jump").style.visibility = "visible"; setup_control_events(right_arrow, up_arrow, left_arrow, down_arrow, touch_jump); } setup_movement(up_arrow, down_arrow); setup_rotation(right_arrow, left_arrow); setup_jumping(touch_jump); setup_camera(); } The new things here are the 5 sensors created with the controls.create_custom_sensor() method. We will change their values when the corresponding touch events are fired. If the detect_mobile() function returns true, the control_jump element is shown and the setup_control_events() function is called to set up the values for these new sensors (passed as arguments). This function is quite large and we'll look at it step by step. var touch_start_pos = new Float32Array(2); var move_touch_idx; var jump_touch_idx; var tap_elem = document.getElementById("control_tap"); var control_elem = document.getElementById("control_circle"); var tap_elem_offset = tap_elem.clientWidth / 2; var ctrl_elem_offset = control_elem.clientWidth / 2; First of all, variables are declared for saving the touch point and the touch indices which correspond to the character's moving and jumping. The tap_elem and control_elem HTML elements are required in several callbacks. The touch_start_cb() callback In this function the beginning of a touch event is processed. 
function touch_start_cb(event) { event.preventDefault(); var h = window.innerHeight; var w = window.innerWidth; var touches = event.changedTouches; for (var i = 0; i < touches.length; i++) { var touch = touches[i]; var x = touch.clientX; var y = touch.clientY; if (x > w / 2) // right side of the screen break; touch_start_pos[0] = x; touch_start_pos[1] = y; move_touch_idx = touch.identifier; tap_elem.style.visibility = "visible"; tap_elem.style.left = x - tap_elem_offset + "px"; tap_elem.style.top = y - tap_elem_offset + "px"; control_elem.style.visibility = "visible"; control_elem.style.left = x - ctrl_elem_offset + "px"; control_elem.style.top = y - ctrl_elem_offset + "px"; } } Here we iterate through all the changed touches of the event (event.changedTouches) and discard the touches from the right half of the screen: if (x > w / 2) // right side of the screen break; If this condition is not met, we save the touch point touch_start_pos and the index of this touch move_touch_idx. After that we render 2 elements at the touch point: control_tap and control_circle. This will look on the device screen as follows: The touch_jump_cb() callback function touch_jump_cb (event) { event.preventDefault(); var touches = event.changedTouches; for (var i = 0; i < touches.length; i++) { var touch = touches[i]; m_ctl.set_custom_sensor(jump, 1); jump_touch_idx = touch.identifier; } } This callback is called when the control_jump button is touched. It just sets the jump sensor value to 1 and saves the corresponding touch index. The touch_move_cb() callback This function is very similar to the touch_start_cb() function. It processes finger movements on the screen. 
function touch_move_cb(event) { event.preventDefault(); m_ctl.set_custom_sensor(up_arrow, 0); m_ctl.set_custom_sensor(down_arrow, 0); m_ctl.set_custom_sensor(left_arrow, 0); m_ctl.set_custom_sensor(right_arrow, 0); var h = window.innerHeight; var w = window.innerWidth; var touches = event.changedTouches; for (var i=0; i < touches.length; i++) { var touch = touches[i]; var x = touch.clientX; var y = touch.clientY; if (x > w / 2) // right side of the screen break; tap_elem.style.left = x - tap_elem_offset + "px"; tap_elem.style.top = y - tap_elem_offset + "px"; var d_x = x - touch_start_pos[0]; var d_y = y - touch_start_pos[1]; var r = Math.sqrt(d_x * d_x + d_y * d_y); if (r < 16) // don't move if control is too close to the center break; var cos = d_x / r; var sin = -d_y / r; if (cos > Math.cos(3 * Math.PI / 8)) m_ctl.set_custom_sensor(right_arrow, 1); else if (cos < -Math.cos(3 * Math.PI / 8)) m_ctl.set_custom_sensor(left_arrow, 1); if (sin > Math.sin(Math.PI / 8)) m_ctl.set_custom_sensor(up_arrow, 1); else if (sin < -Math.sin(Math.PI / 8)) m_ctl.set_custom_sensor(down_arrow, 1); } } The values d_x and d_y denote how much the marker has shifted relative to the point where the touch started. From these increments the distance to that point is calculated, as well as the cosine and sine of the direction angle. This data fully defines the required behavior depending on the finger position, by means of simple trigonometric transformations. As a result, the ring is divided into 8 sectors, each with its own set of sensors: right_arrow, left_arrow, up_arrow, down_arrow. The touch_end_cb() callback This callback resets the sensors' values and the saved touch indices. 
function touch_end_cb(event) { event.preventDefault(); var touches = event.changedTouches; for (var i=0; i < touches.length; i++) { if (touches[i].identifier == move_touch_idx) { m_ctl.set_custom_sensor(up_arrow, 0); m_ctl.set_custom_sensor(down_arrow, 0); m_ctl.set_custom_sensor(left_arrow, 0); m_ctl.set_custom_sensor(right_arrow, 0); move_touch_idx = null; tap_elem.style.visibility = "hidden"; control_elem.style.visibility = "hidden"; } else if (touches[i].identifier == jump_touch_idx) { m_ctl.set_custom_sensor(jump, 0); jump_touch_idx = null; } } } Also, for the move event the corresponding control elements become hidden: tap_elem.style.visibility = "hidden"; control_elem.style.visibility = "hidden"; Setting up the callbacks for the touch events The last thing happening in the setup_control_events() function is setting up the callbacks for the corresponding touch events: document.getElementById("canvas3d").addEventListener("touchstart", touch_start_cb, false); document.getElementById("control_jump").addEventListener("touchstart", touch_jump_cb, false); document.getElementById("canvas3d").addEventListener("touchmove", touch_move_cb, false); document.getElementById("canvas3d").addEventListener("touchend", touch_end_cb, false); document.getElementById("controls").addEventListener("touchend", touch_end_cb, false); Please note that the touchend event is listened for on two HTML elements. That is because the user can release his/her finger both inside and outside of the controls element. Now we have finished working with events. Including the touch sensors into the system of controls Now we only have to add the created sensors to the existing system of controls. Let's check out the changes using the setup_movement() function as an example. 
function setup_movement(up_arrow, down_arrow) { var key_w = m_ctl.create_keyboard_sensor(m_ctl.KEY_W); var key_s = m_ctl.create_keyboard_sensor(m_ctl.KEY_S); var key_up = m_ctl.create_keyboard_sensor(m_ctl.KEY_UP); var key_down = m_ctl.create_keyboard_sensor(m_ctl.KEY_DOWN); var move_array = [ key_w, key_up, up_arrow, key_s, key_down, down_arrow ]; var forward_logic = function(s){return (s[0] || s[1] || s[2])}; var backward_logic = function(s){return (s[3] || s[4] || s[5])}; function move_cb(obj, id, pulse) { if (pulse == 1) { switch(id) { case "FORWARD": var move_dir = 1; m_anim.apply(_character_body, "character_run_B4W_BAKED"); break; case "BACKWARD": var move_dir = -1; m_anim.apply(_character_body, "character_run_B4W_BAKED"); break; } } else { var move_dir = 0; m_anim.apply(_character_body, "character_idle_01_B4W_BAKED"); } m_phy.set_character_move_dir(obj, move_dir, 0); m_anim.play(_character_body); m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC); }; m_ctl.create_sensor_manifold(_character, "FORWARD", m_ctl.CT_TRIGGER, move_array, forward_logic, move_cb); m_ctl.create_sensor_manifold(_character, "BACKWARD", m_ctl.CT_TRIGGER, move_array, backward_logic, move_cb); m_anim.apply(_character_body, "character_idle_01_B4W_BAKED"); m_anim.play(_character_body); m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC); } As we can see, the only changed things are the set of sensors in the move_array and inside the forward_logic() and backward_logic() logic functions, which now depend on the touch sensors as well. The setup_rotation() and setup_jumping() functions have changed in a similar way. 
They are listed below: function setup_rotation(right_arrow, left_arrow) { var key_a = m_ctl.create_keyboard_sensor(m_ctl.KEY_A); var key_d = m_ctl.create_keyboard_sensor(m_ctl.KEY_D); var key_left = m_ctl.create_keyboard_sensor(m_ctl.KEY_LEFT); var key_right = m_ctl.create_keyboard_sensor(m_ctl.KEY_RIGHT); var elapsed_sensor = m_ctl.create_elapsed_sensor(); var rotate_array = [ key_a, key_left, left_arrow, key_d, key_right, right_arrow, elapsed_sensor, ]; var left_logic = function(s){return (s[0] || s[1] || s[2])}; var right_logic = function(s){return (s[3] || s[4] || s[5])}; function rotate_cb(obj, id, pulse) { var elapsed = m_ctl.get_sensor_value(obj, "LEFT", 6); if (pulse == 1) { switch(id) { case "LEFT": m_phy.character_rotation_inc(obj, elapsed * ROT_SPEED, 0); break; case "RIGHT": m_phy.character_rotation_inc(obj, -elapsed * ROT_SPEED, 0); break; } } } m_ctl.create_sensor_manifold(_character, "LEFT", m_ctl.CT_CONTINUOUS, rotate_array, left_logic, rotate_cb); m_ctl.create_sensor_manifold(_character, "RIGHT", m_ctl.CT_CONTINUOUS, rotate_array, right_logic, rotate_cb); } function setup_jumping(touch_jump) { var key_space = m_ctl.create_keyboard_sensor(m_ctl.KEY_SPACE); var jump_cb = function(obj, id, pulse) { if (pulse == 1) { m_phy.character_jump(obj); } } m_ctl.create_sensor_manifold(_character, "JUMP", m_ctl.CT_TRIGGER, [key_space, touch_jump], function(s){return s[0] || s[1]}, jump_cb); } And the camera again In the end let's return to the camera. Keeping in mind the community feedback, we've introduced the possibility to tweak the stiffness of the camera constraint. Now this function call is as follows: m_cons.append_semi_soft_cam(camera, _character, CAM_OFFSET, CAM_SOFTNESS); The CAM_SOFTNESS constant is defined in the beginning of the file and its value is 0.2. Conclusion At this stage, programming the controls for mobile devices is finished. 
In the next tutorials we'll implement the gameplay and look at some other features of the Blend4Web physics engine. Link to the standalone application The source files of the application and the scene are part of the free Blend4Web SDK distribution.
  12. Well, here we're speaking about Blend4Web - a WebGL engine plus an addon for Blender. It currently doesn't have a special API for 2D games, but there are no fundamental differences compared to 3D. If you're asking about the Blender Game Engine, then I've seen some simple 2D games made with it, so all I can tell you is that creating such games is possible =). It's probably the best choice if you're familiar with Blender.
  13. Thanks for the positive review! The next chapters will be here soon.   It is a quote from our site. I didn't get what you wanted to say. Sorry.