
Blogs

Featured Entries

  • Seven Tips for Starting Game Developers

    By Ruslan Sibgatullin

    Originally posted on Medium

    Well, it's been a ride. My first game, Totem Spirits, is now live. I'm not going to tell you how awesome the game is (you can try it yourself :) ). Instead, I want to share my own experience as a developer and highlight some useful tips for those interested in game development. First, some short background about myself: I'm 26 now, with about 22 years of game-playing experience (that's right, I played my first games at age 3 or 4, one of them being Age of Empires) and slightly more than three years of professional experience as a Java developer. Alright, let's dive into the topic itself. Here are the seven tips I discovered while creating the game:

    1. The team is the main asset. Yes, even the smallest game dev studios have a team of a few people. I give a standing ovation to those who are able to create a whole game entirely by themselves (I know of only one such example). My team consisted of one artist, one UX designer/artist, one sound designer, and myself as programmer/game designer/UX designer. And here comes the first tip. Tip 1: Delegate the work you are not qualified for to professionals. A few examples of why: first, I tried to find the sounds myself, spent a few days on it, and ended up with a terrible mix of unsuitable and poorly created sound samples. Then I found a guy who made a great set of sounds for less than $15. The first version of the promo video was, well, horrible, because I thought I was quite good at that sort of thing. Fortunately, I met a UX designer who made the cool version you can find at the beginning of this post. I can see now why there are so many, let's say, strange-looking games with horrible art assets and unlistenable music: you just can't have the same level of professionalism in everything.

    2. Game development is not free. You will have to spend your time and/or your money. If you want to create a good-looking and playable product, you need to invest in it. To be honest, I don't think each and every product out there in the markets can be called a "game", since many of them are barely playable. As for my game, I spent about $1200 on development and slightly more than two years of my life. I still think it was worth every penny and every minute, since I gained a lot of programming experience that boosted my professional career. Tip 2: Take it seriously; investments are necessary.

    3. Respect the product. The development process is painful, and you will want to quit several (many) times. But if the game you're building is one you would enjoy playing yourself, the process becomes more interesting and gains additional meaning. The third tip is my main keynote. Tip 3: Build a game you would want to play yourself.

    4. Share it with your closest friends and relatives, BUT... Tip 4: ...choose beta testers wisely. If you don't want to pay extra money for professional testers, then friends, colleagues, and relatives are going to be the first to test the game. Try to find out what kinds of games they like, since not all of them necessarily represent your target audience. I also suggest sharing the product no earlier than the beta stage; otherwise you will need to explain a lot of game rules, which harms the user experience, and you gain almost nothing useful out of it.

    5. Make use of your strengths. It will cost you less if you know how to code or how to create assets. In my case, I didn't need to hire programmers or game designers. No one can implement your idea better than you, which is why I suggest the following. Tip 5: Take on as many roles in the project as possible. But do not forget about tip 1.

    6. Don't waste too much time on planning. You still need some kind of roadmap and game design document, just keep it light. Tip 6: Make documentation flexible. You will probably need to change it many times. In my case, a lot of great ideas came during the development process itself. And don't be afraid to share your ideas within the team and listen to their ideas as well!

    7. You will hate your game at some point. That may sound sad, but it's true. After the ten-thousandth launch you just hate the game. You may be tempted to start a new "better", "more interesting" project at that point, but please: Tip 7: Don't give up! Make it happen. Share the game with the world, since you've put a lot of effort into it.

    I discovered these tips mostly for myself, and I'm more than sure that to game industry giants the list above may sound like baby talk. Nevertheless, I still think it might be useful for those dreaming of creating the best game ever.
    • 8 comments
    • 1171 views
  • How I halved apk size

    By Ruslan Sibgatullin

    Originally posted on Medium

    You've been coding your game hard for several months (or even years), your artist has made a lot of high-quality assets, and the game is finally ready to be launched. Congratulations! You did a great job. Now take a look at the apk size and be prepared to be scared. What is the size: 60, 70, or even 80 megabytes? As strange as it might sound in the era of 128GB smartphones, I have some bad news: that size is too big. That's exactly what happened to me after I finished the game Totem Spirits. In this article I want to share several pieces of advice on how to reduce the size of a release apk file without losing quality. Please note that for development I used the quite popular game development engine Libgdx, but the tips below should be applicable to other frameworks as well. Moreover, my case is a rather simple 2D game with a lot of sprites (i.e. images), so it might not be that useful for large 3D products.

    To keep you motivated to read further, here is the final result: I managed to halve the apk size, from 64MB to 32.36MB.

    Memory management

    The very first thing that needs to be done properly is memory management. You should only have the necessary objects loaded into memory and release resources once they are no longer in use. This topic requires a lot of detail, so I'd rather cover it in a separate article. Next, I want to analyze the size of the current apk file. My game has four different types of game resources:

    1. Intro — the resources for the intro screen. Loaded before the game starts, disposed immediately after loading is done. (~0.5MB)
    2. In-menu resources — used in the menu only (location backgrounds, buttons, etc.). Loaded during the intro stage and when a player exits a game level. Disposed during "in-game resources" loading. (~7.5MB images + ~5.4MB music)
    3. In-game resources — used on game levels only (objects, game backgrounds, etc.). Loaded during game level loading, disposed when a player exits the game level. Note that these resources are not disposed when a player navigates between levels. (~4.5MB images + ~10MB music)
    4. Common — used in all three above. Loaded during the intro stage, disposed only once the game is closed. This one also includes fonts. (~1.5MB)

    The summed size of all resources is ~30MB, so we can conclude that the size of the apk is basically the size of all its assets; the code base is only ~3MB. That's why I want to focus on the assets first (still, the code will be discussed too).

    Images optimization

    The first thing to do is make the images smaller without harming their quality. Fortunately, there are plenty of services that offer exactly this. I used this one. This alone resulted in an 18MB reduction! Comparing the two images from the original post (one not optimized, one optimized), the sizes are 312KB and 76KB respectively, so the optimized image is four times smaller, but a human eye can't notice the difference.

    Images combination

    You should combine near-identical images programmatically rather than shipping several almost identical images (especially if they are quite big). Consider the before/after example from the original post (the God of Fire and God of Water screens): rather than having four full-size images with different gods on the same background, I have only one big background image and four smaller images of the gods that are then combined programmatically into one image. Although the reduction is not that big here (~2MB), in some cases it can make a difference.

    Images format

    I consider this my biggest mistake so far. I had several images without transparency saved in PNG format. The JPG version of those images is six times more lightweight! Once I converted all images without transparency to JPG, the apk became 5MB smaller.

    Music optimization

    At first the music quality was 256 kbps. I reduced it to 128 kbps and saved 5MB more. I still think the tracks could be compressed even further.
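    The image-combination idea above can be reproduced with nothing but the Java standard library. This is a minimal sketch, not the game's actual loading code; the class and file layout are illustrative:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class ImageCombiner {
    // Draw a smaller foreground (e.g. a character portrait) onto a copy of a
    // shared background at the given offset, so only one full-size background
    // image needs to ship in the apk instead of four.
    public static BufferedImage combine(BufferedImage background,
                                        BufferedImage foreground,
                                        int x, int y) {
        BufferedImage result = new BufferedImage(
                background.getWidth(), background.getHeight(),
                BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = result.createGraphics();
        g.drawImage(background, 0, 0, null); // shared backdrop first
        g.drawImage(foreground, x, y, null); // variant layered on top
        g.dispose();
        return result;
    }
}
```

    In a Libgdx project you would do the equivalent composition with the engine's own drawing API at load time; the point is simply that the variants are assembled in code rather than stored as separate full-size files.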
    Please share in the comments if you have ever used 64 kbps in your games.

    Texture packs

    This item might be a bit Libgdx-specific, although similar functionality should exist in other engines as well. A texture pack is a way to organize a bunch of images into one big pack. In code you then treat each pack as one unit, which is quite handy for memory management. But you should combine images wisely. At first my resources were packed quite badly; once I separated all transparent and non-transparent images, I gained about 5MB more.

    Dependencies and an optimal code base

    Now let's look at the other side of the development process: coding. I will not dive into too many details about code-writing here (it deserves a separate article as well), but I still want to share some general rules that I believe can be applied to any project. The most important thing is to reduce the number of third-party dependencies in the project. Do you really need to add Apache Commons if you use only one method from StringUtils? Or Gson if you just don't like the built-in JSON functionality? Well, you do not. I used Libgdx as the game development engine and am quite happy with it; I'm quite sure I'll use this engine again for my next game. And do I need to say that your code should be written in the most optimal way? :) Well, now I've mentioned it. Although most of the tips shared here can be applied at a late development stage, some of them (especially memory management) should be designed in from the very beginning of a project. Stay tuned for more programming articles!
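    The memory-management discipline described earlier (keep only the resources a screen needs, dispose them once nothing uses them) can be sketched engine-agnostically. The names below are illustrative, not Libgdx's actual API; a real cache would hold the loaded texture or music object alongside the count:

```java
import java.util.HashMap;
import java.util.Map;

public class ResourceCache {
    // Maps a resource name to its reference count.
    private final Map<String, Integer> refCounts = new HashMap<>();

    // Acquire a resource: load it on first use, otherwise just bump the count.
    public void acquire(String name) {
        refCounts.merge(name, 1, Integer::sum);
    }

    // Release a resource: dispose it once nothing references it anymore.
    public void release(String name) {
        Integer count = refCounts.get(name);
        if (count == null) {
            throw new IllegalStateException("release without acquire: " + name);
        }
        if (count == 1) {
            refCounts.remove(name); // a real cache would free GPU/audio memory here
        } else {
            refCounts.put(name, count - 1);
        }
    }

    public boolean isLoaded(String name) {
        return refCounts.containsKey(name);
    }
}
```

    The screen transitions from the article map onto this directly: the menu screen acquires the in-menu pack, a level acquires the in-game pack and releases the menu pack, and the common pack is acquired once at startup.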
    • 4 comments
    • 1349 views
  • Day 38 of 100 Days of VR: Creating a VR First Person Shooter I

    By Josh Chang

    Welcome to Day 38! Today, we're going to talk about the limitations of mobile VR and make some changes in our game to fix things. We've already started to fix some things, specifically adding event triggers to our enemies, but there are still many more things to solve! Here's a quick list of things I want to tackle from what we encountered 2 days ago.

    From a technical limitation:

    - We can't move
    - We only have one input, which is clicking

    Some actual technical problems:

    - The enemies are all black
    - We don't have any of our UIs anymore

    We're going to address these problems over the next couple of days. Today we're going to focus on the technical limitations of mobile VR. Today's priorities are:

    - Discussing how to change our game design to accommodate our new limitations
    - Implementing our new designs

    Edit, important note: after playing around with the Cardboard in Unity today and looking at this article about Google Cardboard's inputs, it seems that we don't have to use the Google VR SDK. Unity already has most of the internal integration necessary to make a VR app. Everything we had already works; the reason I initially thought there was a problem is because of how we did raycasting. Specifically, our raycasting code targeted where our mouse/finger was touching, not the middle of the screen! More on this later.

    Step 1: Changing the Game to Fit our Mobile Limitations

    As mentioned before, in the Google Cardboard we have 3 limitations:

    - We can't move our character's position
    - We only have tapping as an input to interact with the game
    - Our cursor will always be in the middle of the screen

    Even with the Daydream Viewer, we will have the first 2 limitations. However, with the new Daydream Standalone device coming out, we'll have world-space tracking, finally allowing us to track the player's movements without requiring external devices like the Vive does! Anyway, back on topic.
    Considering these 3 limitations, here are my thoughts on what needs to change in our game:

    - Because we can't move, we should place our character in a more central location for the enemies to reach us
    - Because we can no longer run away, we should make the enemies weaker so that we don't get swarmed
    - Because we only have one input, we can shoot but we can't reload, so we should get rid of the reload system

    Essentially, we're going to create a shooter with our player in the center and enemies coming from all around us.

    Step 2: Implementing Our New Designs

    Now that we have everything we want to do planned, let's get started on the actual implementation!

    Step 2.1: Placing the Character in the Middle

    Let's place the character in the middle of where our spawn points are set. After playing around with it, I think the best spot would be at Position: (100, 1, 95).

    - Select Player in our hierarchy.
    - In the Transform component, set our Position to X: 100, Y: 1, Z: 95.

    Step 2.2: Making the Enemies Weaker

    Next up, let's make the enemies weaker. In the Enemy Health script component attached to our Knight, Bandit, and Zombie prefabs, let's change their health values. In order of size from largest to smallest: Zombie > Knight > Bandit. Let's set the health to:

    - Zombie: 4 HP
    - Knight: 2 HP
    - Bandit: 1 HP

    Here's how we change the health:

    - In Assets > Prefabs select our prefabs; in this case, let's choose Zombie.
    - In the Inspector, select the Enemy Health (Script) component and change Health to 4.
    - Make the same change to the other 2 prefabs.

    Step 2.3: Remove our ammo system

    Now it's time to go back to the Player Shooting Controller (Script) component that we disabled yesterday. I want to keep the animation and sound effects we had when shooting our gun, but I'm going to get rid of the ammo and the need to reload.
Here are my changes: using UnityEngine; using System.Collections; public class PlayerShootingController : MonoBehaviour { public float Range = 100; public float ShootingDelay = 0.1f; public AudioClip ShotSfxClips; public Transform GunEndPoint; //public float MaxAmmo = 10f; private Camera _camera; private ParticleSystem _particle; private LayerMask _shootableMask; private float _timer; private AudioSource _audioSource; private Animator _animator; private bool _isShooting; //private bool _isReloading; //private LineRenderer _lineRenderer; //private float _currentAmmo; //private ScreenManager _screenManager; void Start () { _camera = Camera.main; _particle = GetComponentInChildren<ParticleSystem>(); Cursor.lockState = CursorLockMode.Locked; _shootableMask = LayerMask.GetMask("Shootable"); _timer = 0; SetupSound(); _animator = GetComponent<Animator>(); _isShooting = false; //_isReloading = false; //_lineRenderer = GetComponent<LineRenderer>(); //_currentAmmo = MaxAmmo + 10; //_screenManager = GameObject.FindWithTag("ScreenManager").GetComponent<ScreenManager>(); } void Update () { _timer += Time.deltaTime; // Create a vector at the center of our camera's viewport //Vector3 lineOrigin = _camera.ViewportToWorldPoint(new Vector3(0.5f, 0.5f, 0.0f)); // Draw a line in the Scene View from the point lineOrigin in the direction of fpsCam.transform.forward * weaponRange, using the color green //Debug.DrawRay(lineOrigin, _camera.transform.forward * Range, Color.green); if (Input.GetButton("Fire1") && _timer >= ShootingDelay /*&& !_isReloading && _currentAmmo > 0*/) { Shoot(); if (!_isShooting) { TriggerShootingAnimation(); } } else if (!Input.GetButton("Fire1") /*|| _currentAmmo <= 0*/) { StopShooting(); if (_isShooting) { TriggerShootingAnimation(); } } /*if (Input.GetKeyDown(KeyCode.R)) { StartReloading(); }*/ } private void StartReloading() { _animator.SetTrigger("DoReload"); StopShooting(); _isShooting = false; //_isReloading = true; } private void TriggerShootingAnimation() 
{ _isShooting = !_isShooting; _animator.SetTrigger("Shoot"); //print("trigger shoot animation"); } private void StopShooting() { _audioSource.Stop(); _particle.Stop(); } public void Shoot() { //print("shoot called"); _timer = 0; Ray ray = _camera.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));//_camera.ScreenPointToRay(Input.mousePosition); RaycastHit hit = new RaycastHit(); _audioSource.Play(); _particle.Play(); //_currentAmmo--; //_screenManager.UpdateAmmoText(_currentAmmo, MaxAmmo); //_lineRenderer.SetPosition(0, GunEndPoint.position); //StartCoroutine(FireLine()); if (Physics.Raycast(ray, out hit, Range, _shootableMask)) { print("hit " + hit.collider.gameObject); //_lineRenderer.SetPosition(1, hit.point); //EnemyHealth health = hit.collider.GetComponent<EnemyHealth>(); EnemyMovement enemyMovement = hit.collider.GetComponent<EnemyMovement>(); if (enemyMovement != null) { enemyMovement.KnockBack(); } /*if (health != null) { health.TakeDamage(1); }*/ } /*else { _lineRenderer.SetPosition(1, ray.GetPoint(Range)); }*/ } // called from the animation finished /*public void ReloadFinish() { _isReloading = false; _currentAmmo = MaxAmmo; _screenManager.UpdateAmmoText(_currentAmmo, MaxAmmo); }*/ private void SetupSound() { _audioSource = gameObject.AddComponent<AudioSource>(); _audioSource.volume = 0.2f; _audioSource.clip = ShotSfxClips; } public void GameOver() { _animator.SetTrigger("GameOver"); StopShooting(); print("game over called"); } } I’ve kept what I commented out, here’s the clean version of our script. 
    using UnityEngine;
    using System.Collections;
    
    public class PlayerShootingController : MonoBehaviour
    {
        public float Range = 100;
        public float ShootingDelay = 0.1f;
        public AudioClip ShotSfxClips;
        public Transform GunEndPoint;
    
        private Camera _camera;
        private ParticleSystem _particle;
        private LayerMask _shootableMask;
        private float _timer;
        private AudioSource _audioSource;
        private Animator _animator;
        private bool _isShooting;
    
        void Start()
        {
            _camera = Camera.main;
            _particle = GetComponentInChildren<ParticleSystem>();
            Cursor.lockState = CursorLockMode.Locked;
            _shootableMask = LayerMask.GetMask("Shootable");
            _timer = 0;
            SetupSound();
            _animator = GetComponent<Animator>();
            _isShooting = false;
        }
    
        void Update()
        {
            _timer += Time.deltaTime;
    
            if (Input.GetButton("Fire1") && _timer >= ShootingDelay)
            {
                Shoot();
                if (!_isShooting)
                {
                    TriggerShootingAnimation();
                }
            }
            else if (!Input.GetButton("Fire1"))
            {
                StopShooting();
                if (_isShooting)
                {
                    TriggerShootingAnimation();
                }
            }
        }
    
        private void TriggerShootingAnimation()
        {
            _isShooting = !_isShooting;
            _animator.SetTrigger("Shoot");
        }
    
        private void StopShooting()
        {
            _audioSource.Stop();
            _particle.Stop();
        }
    
        public void Shoot()
        {
            _timer = 0;
            Ray ray = _camera.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));
            RaycastHit hit = new RaycastHit();
            _audioSource.Play();
            _particle.Play();
    
            if (Physics.Raycast(ray, out hit, Range, _shootableMask))
            {
                print("hit " + hit.collider.gameObject);
                EnemyMovement enemyMovement = hit.collider.GetComponent<EnemyMovement>();
                if (enemyMovement != null)
                {
                    enemyMovement.KnockBack();
                }
            }
        }
    
        private void SetupSound()
        {
            _audioSource = gameObject.AddComponent<AudioSource>();
            _audioSource.volume = 0.2f;
            _audioSource.clip = ShotSfxClips;
        }
    
        public void GameOver()
        {
            _animator.SetTrigger("GameOver");
            StopShooting();
            print("game over called");
        }
    }

    Looking through the Changes

    We removed a lot of the code that was part of the reloading system.
    We basically removed any mention of our ammo and reloading; however, I kept the changes involved with the shooting animation, shooting sound effects, and shooting rate. There were only 2 changes made:

    - I changed the input we use to shoot from GetMouseButton to GetButton("Fire1"). I believe these behave the same here, but I'm making the change anyway. Either option returns true when we're touching the screen on our mobile device.
    - I also changed the Ray in our raycasting system. Before, we cast a ray from where our mouse was located, which we had previously locked to the center of the screen. After we got rid of the code that locked the cursor to the middle, we needed a new way to target the middle. Instead of firing the raycast from our mouse position, we now fire it from the middle of our camera's viewport, which fixes our problem on a mobile device.

    Go ahead and play the game now; we should have a playable game. There are 2 things that will happen when we shoot:

    - We'll shoot a raycast, and if it hits an enemy, they'll be pushed back
    - The enemy's trigger event will detect that we clicked down on the enemy, so they'll take some damage

    At this point, we have a problem: if we hold down on the screen, we'll push the enemy back, but they'll only be hit once! That's because we only have logic that deals with an OnClick event, not logic that checks whether the user is currently holding down on the enemy. We're going to fix this problem tomorrow; I've done a lot of investigation work with raycasts today and want to take a break!

    Step 2.4: Changing the ScreenManager script

    One more thing we need to do before we leave: the Unity compiler will complain about a missing reference in our ScreenManager, specifically the MaxAmmo variable that we got rid of.
    Let's just get rid of it:

    using UnityEngine;
    using UnityEngine.UI;
    
    public class ScreenManager : MonoBehaviour
    {
        public Text AmmoText;
    
        void Start()
        {
            PlayerShootingController shootingController = Camera.main.GetComponentInChildren<PlayerShootingController>();
            //UpdateAmmoText(shootingController.MaxAmmo, shootingController.MaxAmmo);
        }
    
        public void UpdateAmmoText(float currentAmmo, float maxAmmo)
        {
            AmmoText.text = currentAmmo + "/" + maxAmmo;
        }
    }

    And we're good to go! Technically speaking, we won't be using this script anymore either.

    Conclusion

    And another day's worth of work has ended! I learned a lot about VR today, such as: we don't need ANYTHING that the Google VR SDK provides! Unity as a game engine already provides us with everything we need to make a VR experience; Google's SDK is more of a utility kit that helps make implementation easier. The TL;DR of what I learned today is that we don't have to be fixed on using Unity's raycasting script; we don't need it, and we can continue to use what we already have. However, for the sake of learning, I'm going to continue re-implementing our simple FPS with the Google Cardboard assets! We'll continue tomorrow on Day 39! See you then!

    Day 37 | 100 Days of VR | Day 39
    • 2 comments
    • 692 views
  • Marching cubes

    By thecheeselover

    I have had difficulties recently with the Marching Cubes algorithm, mainly because the principal source of information on the subject was somewhat vague and incomplete to me. I need a lot of precision to understand something complicated. Anyhow, after a lot of struggle, I have been able to code in Java a less hardcoded program than the given source, because who doesn't like the cuteness of Java compared to mean-looking C++? Oh, and by hardcoding, I mean something like this:

    cubeindex = 0;
    if (grid.val[0] < isolevel) cubeindex |= 1;
    if (grid.val[1] < isolevel) cubeindex |= 2;
    if (grid.val[2] < isolevel) cubeindex |= 4;
    if (grid.val[3] < isolevel) cubeindex |= 8;
    if (grid.val[4] < isolevel) cubeindex |= 16;
    if (grid.val[5] < isolevel) cubeindex |= 32;
    if (grid.val[6] < isolevel) cubeindex |= 64;
    if (grid.val[7] < isolevel) cubeindex |= 128;

    By no means am I saying that my code is better or more performant; it's actually ugly. However, I absolutely loathe hardcoding.

    Here's the result with a scalar field generated using the coherent noise library Joise:
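    The branch ladder above just sets bit i of the cube index whenever corner i is below the isolevel, so it collapses into a loop. Here is a minimal Java sketch in the same spirit as the author's rewrite (the names are mine, not his actual code):

```java
public class MarchingCubes {
    // Compute the 8-bit cube index from the 8 corner samples of a cell
    // without hardcoding each bit: bit i corresponds to corner i.
    public static int cubeIndex(double[] cornerValues, double isolevel) {
        int index = 0;
        for (int i = 0; i < 8; i++) {
            if (cornerValues[i] < isolevel) {
                index |= 1 << i;
            }
        }
        return index;
    }
}
```

    The resulting index is then used exactly as in the original C++ source, as a lookup into the edge and triangle tables.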
    • 0 comments
    • 832 views
  • ANL Editor, GC Editor, The Future

    By JTippetts

    Following along from the previous post about the node graphs, I have lately pulled the node graph stuff out and have also started work on a standalone editor for noise functions. https://github.com/JTippetts/ANLEditor The editor uses the node graph functionality, along with an output node that provides various functions, to allow one to create graphs of noise that can be used to create textures. The output node allows you to map either a Grayscale or RGBA image output (the Volume button does nothing for now). It can analyze a connected grayscale function to give you a histogram of how the function's output is distributed, and it calculates a set of scale/add factors that can be used to remap the output of the function to the [0,1] range. It also allows you to specify seamless mapping settings and to export images to file. It's all still fairly rudimentary and I still haven't settled on a final save/load format, but all that is in progress. I have also started creating an editor of sorts for Goblinson Crusoe, using some of this editor functionality. It's still in its infancy, but eventually it will allow me to create areas and area pieces for use in randomly generating maps.

    Lately, I've been doing some thinking about what I want to do with Goblinson Crusoe. It has become clear to me that it will probably never be a commercial release. It will probably never be on Steam or anything like that. I just don't have the time. I wasted a LOT of my early years spinning my wheels and going nowhere, and now I have so many other things that have to come first (family, full-time job, home ownership, etc.) that I just don't think I'd ever realistically finish this thing. If I could work on it full-time, perhaps, but mortgage, bills, and the necessity of providing insurance and safety nets for my wife and kids mean that isn't feasible. However, I do enjoy working on it, and don't want to just abandon it. Nor do I want it to never see some kind of release.
And I would like to see some kind of return on my huge time investment over the years. So I've been thinking of making GC a free and open-source project, linked with a Patreon and/or PayPal for goodwill donations. At least that way, if I die or life gets in the way of further development, at the very least it wouldn't disappear completely. Plus, I might get at least a few dollars from appreciative people along the way. What do you all think? Any advice on how to structure such a thing? Good idea/bad idea?
    • 0 comments
    • 971 views

Our community blogs

  1. The free mobile zombie roadkill action racing game 'Dead Run : Road of Zombie' has been updated to version 1.0.6

    Enjoy more fun playing with the cinematic camera and missions.

    Download on Google Play

     

    **************** v1.0.5 Update****************

    Cinematic camera movement

    Extended play time

    Added control tips

    Added slopes

    Display combo kill numbers


  2. Silverpath Online is currently in alpha testing, with an active event that spawns bosses in the event room.

    You can join for free and report bugs, glitches, or any other problems you encounter.

    The friends and community system is currently in progress, and your feedback is important to me. Alpha users will also receive prizes in the beta based on their end-of-alpha rankings.

    https://play.google.com/apps/testing/com.ogzzmert.online.game

     


  3. Time for an update.  


    So what have I been working on over the last couple of weeks? Firstly, the lighting and particle systems are now activated. The particle system is pretty unintrusive, with the most notable aspect being the chimney smoke rising from the different steampunk engines. Alongside this there is now a bit of splashing water and a few sparks flying around.

    Much more noticeable is the lighting system, as demonstrated in the new screenshots. There is now a day/night cycle. I spent quite a long time making sure that the night was not too dark, and I already have a game setting allowing the cycle to be turned off (while this loses a lot of the atmosphere, having daylight only slightly improves performance, since no other lights need to be active, and maximises visibility). Introducing other lights was a bit more problematic than expected. Firstly, it took a while to fine-tune the light fall-off, and secondly I upgraded the code quite a bit. Originally, the light manager would always choose the lights nearest to the player, meaning that a maximum of 7 lights (beyond the sunlight) could be active in any scene. Workable, but it did mean that more distant lights would suddenly flick on. The new logic activates the lights nearest to each game object or map tile currently being drawn, allowing a much greater number of lights to be shown in any scene. In general, the list of lights to activate is pre-calculated as each map section is loaded, with only lighting for moving objects being calculated on the fly. So far this seems to be working nicely: if I overloaded a particular area with lights there could still be light pop-up, but with sensible level design this can be avoided. I did consider pre-baking the lighting, but with the day/night cycle and the desire to alter light intensity and even colour on the fly this was going to be too complex, and the performance of the current solution seems to be very good.
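    The per-object light selection described above boils down to sorting the scene's lights by distance to the thing being drawn and keeping the closest few. A minimal sketch, with illustrative names and 2D positions for brevity (a real engine would carry color, intensity, and radius as well):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class LightSelector {
    // A typical per-draw light budget, as in the blog's 7-light limit.
    public static final int MAX_ACTIVE_LIGHTS = 7;

    // Return the `max` lights nearest to the object at (ox, oy).
    // Each light is a float[]{x, y}; squared distance avoids a sqrt per light.
    public static List<float[]> nearestLights(List<float[]> lights,
                                              float ox, float oy, int max) {
        List<float[]> sorted = new ArrayList<>(lights);
        sorted.sort(Comparator.comparingDouble(
                (float[] l) -> (l[0] - ox) * (l[0] - ox) + (l[1] - oy) * (l[1] - oy)));
        return sorted.subList(0, Math.min(max, sorted.size()));
    }
}
```

    Running this per object or map tile (and caching the result when the map section loads) is what lets the total light count in a scene exceed the per-draw budget without distant lights popping on around the player.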


    The other task I've been working on is the introduction of two new map zones. The objective was to introduce something distinct from what has been done so far, and to this end I have been working on a wilderness zone and an industrial zone. The wilderness zone completely failed to work. It's a beginner zone, so there was no intention to overload it with complex gameplay, but even so it's just empty and uninteresting; back to the drawing board on that one. As for the industrial zone, this one is going better. There are a number of new models in use and a few more to add, with a couple of objectives in mind. First, the aim is to create a little of the confusion of a steampunk factory: pipes, big machines, smoke and steam. Secondly, to hint at the downside of (steampunk) industrialisation, with grimier texturing and even the addition of waste stacks (handily blocking off the player's progression, requiring them to navigate more carefully). An early draft is shown in the screenshot below. The ground texturing needs to be changed, with the green grass replaced by rock and sand, and I will also be working on the lighting and fog to draw in the view and create a darker scene even in the middle of the day. The scene may be a bit too busy at the moment, but I will see how I feel once these changes are made.

    Hope the update was interesting; as before, any feedback is most welcome.

  4. Originally posted on Troll Purse development blog.

    Unreal Engine 4 is an awesome game engine, and the Editor is just as good. There are a lot of built-in tools for games (especially shooters) and some excellent tutorials out there, so here is one more. Today's topic is different methods of programming player-world interaction in Unreal Engine 4 in C++. While the context is specific to UE4, it can easily translate to any game with a similar architecture.


    Interaction via Overlaps

    By far, the most common approach in tutorials for player-world interaction is to use Trigger Volumes or Trigger Actors. This makes sense: it is a decoupled way to set up interaction, and it leverages classes already provided by the engine. Here is a simple example where overlap code is used to interact with the player:

    Header

    // Fill out your copyright notice in the Description page of Project Settings.
    
    #pragma once
    
    #include "CoreMinimal.h"
    #include "GameFramework/Actor.h"
    #include "InteractiveActor.generated.h"
    
    UCLASS()
    class GAME_API AInteractiveActor : public AActor
    {
    	GENERATED_BODY()
    
    public:
    	// Sets default values for this actor's properties
    	InteractiveActor();
    
        virtual void BeginPlay() override;
    
    protected:
    	UFUNCTION()
    	virtual void OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult);
    
    	UFUNCTION()
    	virtual void OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex);
    
        UFUNCTION()
        virtual void OnPlayerInputActionReceived();
    
    	UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = Interaction)
    	class UBoxComponent* InteractionTrigger;
    };
    

    This is a small header file for a simple base Actor class that can handle overlap events and a single input action. From here, one can start building up the various entities within a game that respond to player input. For this to work, the player pawn or character has to overlap the InteractionTrigger component. This puts the InteractiveActor into the input stack for that specific player. The player then triggers the input action (via a keyboard key press, for example), and the code in OnPlayerInputActionReceived executes. Here is a layout of the executing code.

    Source

    // Fill out your copyright notice in the Description page of Project Settings.
    
    #include "InteractiveActor.h"
    #include "Components/BoxComponent.h"
    
    // Sets default values
    AInteractiveActor::AInteractiveActor()
    {
    	PrimaryActorTick.bCanEverTick = true;
    
    	RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));
    	RootComponent->SetMobility(EComponentMobility::Static);
    
    	InteractionTrigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Interaction Trigger"));
    	InteractionTrigger->InitBoxExtent(FVector(128, 128, 128));
    	InteractionTrigger->SetMobility(EComponentMobility::Static);
    	InteractionTrigger->OnComponentBeginOverlap.AddUniqueDynamic(this, &AInteractiveActor::OnInteractionTriggerBeginOverlap);
    	InteractionTrigger->OnComponentEndOverlap.AddUniqueDynamic(this, &AInteractiveActor::OnInteractionTriggerEndOverlap);
    
    	InteractionTrigger->SetupAttachment(RootComponent);
    }
    
    void AInteractiveActor::BeginPlay()
    {
        Super::BeginPlay();

        if(InputComponent == nullptr)
        {
            // ConstructObject is deprecated; NewObject is the current factory function.
            InputComponent = NewObject<UInputComponent>(this, TEXT("Input Component"));
            InputComponent->bBlockInput = bBlockInput;
        }

        InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AInteractiveActor::OnPlayerInputActionReceived);
    }
    
    void AInteractiveActor::OnPlayerInputActionReceived()
    {
        // This is where logic for the actor receiving input will be executed. You could add something as simple as a log message to test it out.
    }
    
    void AInteractiveActor::OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult)
    {
    	// GetController lives on APawn, not AActor, so cast first.
    	APawn* OtherPawn = Cast<APawn>(OtherActor);
    	if (OtherPawn)
    	{
            APlayerController* PC = Cast<APlayerController>(OtherPawn->GetController());
            if(PC)
            {
                EnableInput(PC);
            }
    	}
    }
    
    void AInteractiveActor::OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex)
    {
    	// GetController lives on APawn, not AActor, so cast first.
    	APawn* OtherPawn = Cast<APawn>(OtherActor);
    	if (OtherPawn)
    	{
            APlayerController* PC = Cast<APlayerController>(OtherPawn->GetController());
            if(PC)
            {
                DisableInput(PC);
            }
    	}
    }
    

    Pros and Cons

    The positives of the collision volume approach are the ease of implementation and the strong decoupling from the rest of the game logic. The negatives are that interaction is broad relative to the game space, and that a new trigger volume must be added for each interactive object in the scene.

    Interaction via Raytrace

    Another popular method is to ray trace from the player's viewpoint for any interactive world items the player can interact with. This method usually relies on inheritance to handle player interaction within the interactive object class. It eliminates the need for an extra collision volume per item and allows more precise interaction targeting.
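    Outside the engine, the core of such a trace is just a ray intersection query. As an engine-agnostic illustration of what a line trace computes against a box-shaped target, here is a minimal slab-method ray-vs-AABB test in plain C++ (the names and types are mine, not UE4 API):

    ```cpp
    #include <algorithm>
    #include <cmath>

    // Minimal 3D vector for the sketch.
    struct Vec3 { float x, y, z; };

    // Slab method: the ray hits the box only if the intervals where it is
    // inside each pair of parallel slabs (x, y, z) all overlap.
    // Returns true on hit and writes the entry distance to OutDistance.
    bool RayIntersectsBox(const Vec3& Origin, const Vec3& Dir,
                          const Vec3& BoxMin, const Vec3& BoxMax,
                          float MaxDistance, float& OutDistance)
    {
        float TMin = 0.0f;
        float TMax = MaxDistance;
        const float O[3]  = { Origin.x, Origin.y, Origin.z };
        const float D[3]  = { Dir.x, Dir.y, Dir.z };
        const float Lo[3] = { BoxMin.x, BoxMin.y, BoxMin.z };
        const float Hi[3] = { BoxMax.x, BoxMax.y, BoxMax.z };

        for (int Axis = 0; Axis < 3; ++Axis)
        {
            if (std::fabs(D[Axis]) < 1e-8f)
            {
                // Ray parallel to this slab: miss unless the origin is inside it.
                if (O[Axis] < Lo[Axis] || O[Axis] > Hi[Axis]) return false;
                continue;
            }
            float T1 = (Lo[Axis] - O[Axis]) / D[Axis];
            float T2 = (Hi[Axis] - O[Axis]) / D[Axis];
            if (T1 > T2) std::swap(T1, T2);
            TMin = std::max(TMin, T1);
            TMax = std::min(TMax, T2);
            if (TMin > TMax) return false; // slab intervals no longer overlap
        }
        OutDistance = TMin;
        return true;
    }
    ```

    The engine performs this kind of test (against arbitrary collision shapes, via the physics scene) when you call its line trace functions; the sketch only shows the geometric idea.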

    Source

    AInteractiveActor.h

    // Fill out your copyright notice in the Description page of Project Settings.
    
    #pragma once
    
    #include "CoreMinimal.h"
    #include "GameFramework/Actor.h"
    #include "InteractiveActor.generated.h"
    
    UCLASS()
    class GAME_API AInteractiveActor : public AActor
    {
    	GENERATED_BODY()
    
    public:
    public:
        virtual void OnReceiveInteraction(class APlayerController* PC);
    };
    

    AMyPlayerController.h

    // Fill out your copyright notice in the Description page of Project Settings.
    
    #pragma once
    
    #include "CoreMinimal.h"
    #include "GameFramework/PlayerController.h"
    #include "AMyPlayerController.generated.h"
    
    UCLASS()
    class GAME_API AMyPlayerController : public APlayerController
    {
    	GENERATED_BODY()
    
    public:
        AMyPlayerController();

        virtual void SetupInputComponent() override;

        float MaxRayTraceDistance;

    private:
        AInteractiveActor* GetInteractiveByCast();

        void OnCastInput();
    };
    

    These header files define the minimum functions needed to set up raycast interaction. Note that there are two files here, as two classes need modification to support input. This is more work than the first method using trigger volumes. However, all input binding is now constrained to a single class - the ACharacter or, if you design it differently, the APlayerController. Here, the latter was used.

    The logic flow is straightforward. The player points the center of the screen towards an object (ideally a HUD crosshair aids the aim) and presses the input button bound to Interact. From there, OnCastInput() is executed. It invokes GetInteractiveByCast(), which returns either the first interactive actor hit by the camera ray or nullptr if there is no hit. Finally, AInteractiveActor::OnReceiveInteraction(APlayerController*) is invoked. That final function is where inherited classes implement interaction-specific code.

    The simple execution of the code is as follows in the class definitions.

    AInteractiveActor.cpp

    void AInteractiveActor::OnReceiveInteraction(APlayerController* PC)
    {
        //nothing in the base class (unless there is logic ALL interactive actors will execute, such as cosmetics (i.e. sounds, particle effects, etc.))
    }
    

    AMyPlayerController.cpp

    AMyPlayerController::AMyPlayerController()
    {
        MaxRayTraceDistance = 1000.0f;
    }
    
    void AMyPlayerController::SetupInputComponent()
    {
        Super::SetupInputComponent();
        InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AMyPlayerController::OnCastInput);
    }
    
    void AMyPlayerController::OnCastInput()
    {
        AInteractiveActor* Interactive = GetInteractiveByCast();
        if(Interactive != nullptr)
        {
            Interactive->OnReceiveInteraction(this);
        }
    }
    
    AInteractiveActor* AMyPlayerController::GetInteractiveByCast()
    {
        FVector CameraLocation;
    	FRotator CameraRotation;
    
    	GetPlayerViewPoint(CameraLocation, CameraRotation);
    	FVector TraceEnd = CameraLocation + (CameraRotation.Vector() * MaxRayTraceDistance);
    
    	FCollisionQueryParams TraceParams(TEXT("RayTrace"), true, GetPawn());
    	TraceParams.bTraceAsyncScene = true;
    
    	FHitResult Hit(ForceInit);
    	GetWorld()->LineTraceSingleByChannel(Hit, CameraLocation, TraceEnd, ECC_Visibility, TraceParams);
    
        AActor* HitActor = Hit.GetActor();
        if(HitActor != nullptr)
        {
            return Cast<AInteractiveActor>(HitActor);
        }
    	else
        {
            return nullptr;
        }
    }
    

    Pros and Cons

    One pro of this method is that control of input stays in the player controller while the implementation of each input action is still owned by the Actor that receives it. One con is that the interaction fires every time the player presses the button, and the interactive state is not detected continuously without refactoring to use a Tick function override.

    Conclusion

    There are many methods for player-world interaction within a game world. For Actors in Unreal Engine 4 that allow player interaction, two such methods are collision volume overlaps and ray tracing from the player controller. There are several other methods out there that could also be used. Hopefully, the two implementations presented help you decide how to approach player-world interaction in your game. Cheers!

     

     


  5. Hi there,

    this week I was working on the following stuff.

    Forest Strike - Dev Blog 4


    Scaling issues

    I was struggling with scaling issues. As you might have seen, the "pixels" displayed on the screen did not always have the same size. This issue is now fixed and it looks way better. Check it out:

    Forest Strike - Scaling issue fix
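    A common way to guarantee uniformly sized pixels is to render at a fixed base resolution and only ever scale up by a whole number. A minimal sketch of that idea in C++ (hypothetical names; not the actual Forest Strike code):

    ```cpp
    #include <algorithm>

    // For crisp pixel art, every source pixel should map to the same whole
    // number of screen pixels. Pick the largest integer scale that still
    // fits the window, and never drop below 1.
    int PixelScale(int WindowW, int WindowH, int BaseW, int BaseH)
    {
        const int ScaleX = WindowW / BaseW; // integer division floors the ratio
        const int ScaleY = WindowH / BaseH;
        return std::max(1, std::min(ScaleX, ScaleY));
    }
    ```

    Any leftover window space is then letterboxed rather than stretched, which is what keeps every "pixel" the same size.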

    Mouse dragging

    Because of the scaling fix, not all tiles are visible at once any more. In order to navigate the map, you can now use the mouse to drag the camera around. You will use this feature on larger maps to navigate your characters and get an overview.

    Forest Strike - Mouse dragging
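    The usual pattern for mouse dragging is to latch the camera and cursor positions when the button goes down, then offset the camera by the cursor delta on every move event. A small C++ sketch of the idea (names are mine, not the game's code):

    ```cpp
    // Drag-to-pan: remember where the camera and cursor were on button-down,
    // then offset the camera by however far the cursor has moved since.
    struct Camera2D
    {
        int CamX = 0, CamY = 0;         // current camera position (world pixels)
        int GrabCamX = 0, GrabCamY = 0; // camera position at drag start
        int GrabMX = 0, GrabMY = 0;     // mouse position at drag start
        bool Dragging = false;

        void OnMouseDown(int MX, int MY)
        {
            Dragging = true;
            GrabCamX = CamX; GrabCamY = CamY;
            GrabMX = MX; GrabMY = MY;
        }

        void OnMouseMove(int MX, int MY)
        {
            if (!Dragging) return;
            // Moving the mouse right drags the world right, i.e. the camera left.
            CamX = GrabCamX - (MX - GrabMX);
            CamY = GrabCamY - (MY - GrabMY);
        }

        void OnMouseUp() { Dragging = false; }
    };
    ```

    Computing from the latched positions (instead of accumulating per-frame deltas) keeps the drag exact no matter how many move events arrive.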

    Title screen

    Finally, I implemented a title screen. From here, you can start a new game, open your settings, and exit the game. It is still under development, so it might change a bit. The background image should stay the same.

    Forest Strike - Title Screen


    That's it for this update. Be sure to follow this blog to stay up to date. :3

    Thank you for reading! :D

    As always, if you have questions or any kind of feedback feel free to post a comment or contact me directly.

    Additionally, if you want to know where I get my ideas regarding pixel arts from, you can check out my Pinterest board.

     
  6. Hello everyone!

    Oh, I'm so delighted with the number of views! And gamedev.net even featured our entry on their Facebook page! Thank you for finding this blog interesting! 

    In the last entry, I made a brief introduction of our Egypt: Old Kingdom game. It's not just based on history; we're basically trying to recreate history in game form. Of course, it requires a tremendous amount of research!


    Sometimes people ask us: "Why did you choose Hierakonpolis/Memphis as the main location, and not Thinis or some other important settlements?"

    The reply is: in order to make the game truly historical, our location of choice has to be very well researched. We need a lot of information about the location: events, personalities, buildings, lifestyle. 

    The research was done by the game designer, Mikhail, and I think he could now get his master's degree as an Egyptologist because he knows A LOT about Ancient Egypt thanks to it!  xD He did the research by himself for Bronze Age and Marble Age, but then it got too hard to keep up with both research and game design. For the next game, Predynastic Egypt, we contacted the scientists from the Center for Egyptological Studies of the Russian Academy of Sciences (CES RAS). We're lucky they agreed to help! Predynastic Egypt was the first game made with their support.

    For Egypt: Old Kingdom, Mikhail created a huge database containing most of the known events, places and personalities of the Old Kingdom period:

    5a64470c34901_dffca616076ebecc0b4ebead239864a41.thumb.png.72ad3711babe7a4f859a3caca6d4afbe.png

    Every little thing about the period is studied thoroughly in order to immerse the player deeper in the game. We learn about kings' deeds and their authority, whether they properly worshipped the gods, whether they started wars. We study the climate, soil, vegetation and natural disasters of the period. We learn about the appearance of ancient Egyptians, their dress, their food, their houses.

    Sketches of Egyptians' appearance:

    5a64479231389_4c1e49e02c7cbef62c4d37a685bfcddd1.png.ea1763c326a99dad0ad0f6a0a19e9d10.png

    When the database is ready, Mikhail goes over it with the scientists. They check everything, correct what's necessary, and provide more information and details. Like every other science, history has a lot of controversial points. For example, "The White Walls" of Memphis is something scientists can't agree about. There are two major opinions about what it could be:

    1. It is the walls of a palace. 

    2. It is the walls of burial grounds.


    In our game, we don't want to take sides, so the scientists of CES RAS inform us about such "dangerous" topics as well. This way we can avoid the controversy and let players decide which theory they prefer.

    This is Mikhail (on the left) discussing the game events with the scientists :) In the middle is Galina Belova, one of the most famous Russian Egyptologists. The director of CES RAS is on the right.

    5a644bfd20667_de9a44b1a23e268d5a7f26dd5510e15b1.thumb.jpg.adb125edf29c7c9fc6eec38db75997c2.jpg

    During this part of the work we sort all of the events into groups: the most important events, which must be in the game; less important events, which can benefit the atmosphere of the game; and insignificant events.

    When this part of the work is done and all of the information is sorted, the design of the game begins. Throughout the process we keep in touch with the scientists, because some events are not easy to turn into a game at all.

    For example, one of our goals is to let the player fully experience the life of Ancient Egypt. We want the player to think like the Ancient Egyptians and to experience the same difficulties. In order to do that we have to know what the Egyptians were thinking, and, through the gameplay, we have to put the player in the same conditions the Egyptians faced.

    Ancient Egyptians strongly believed that if they did not worship their ancestors and gods properly, the country would suffer all kinds of disasters. This belief was unconscious and unconditional; that's why they built all those funeral complexes and made sacrifices, trying to please their ancestors. Even cities were built as a way to please the gods and ancestors! They were sure that if they stopped worshipping them properly, the country would be doomed, because the ancestors would stop protecting them.

    We wanted to nudge the player to build all these pyramids for the same reasons the Egyptians did, and this is how the "Divine favor" stat appeared. The stat is needed mostly to maintain the gods' cults, and the player earns it by working in temples and worshipping ancestors. But what really makes the player feel like an Egyptian is a feature of the "Divine favor" stat: it degrades by 0.1 every turn. This happens because people are dying; hence, there are more and more ancestors to worship. If the player does not pay attention to this stat and it degrades too much, more and more disasters start to happen, such as fires, earthquakes and droughts. It greatly influences the economy and the result of the game.
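    A mechanic like this is only a few lines of code. In the C++ sketch below, only the 0.1-per-turn decay comes from the post; the starting value, the threshold and the disaster-chance formula are invented for illustration:

    ```cpp
    // Sketch of the "Divine favor" loop: the stat decays by a fixed 0.1
    // each turn (ancestors keep accumulating), and once it falls below a
    // threshold the chance of disasters grows towards certainty.
    struct DivineFavor
    {
        double Value = 5.0;             // current favor (starting value invented)
        double DecayPerTurn = 0.1;      // fixed decay from the post
        double DisasterThreshold = 2.0; // below this, disasters start (invented)

        void EndTurn(double EarnedFromWorship)
        {
            Value += EarnedFromWorship; // temples and ancestor worship
            Value -= DecayPerTurn;      // more ancestors to please every turn
            if (Value < 0.0) Value = 0.0;
        }

        // 0 while favor is healthy, rising towards 1 as favor approaches zero.
        double DisasterChance() const
        {
            if (Value >= DisasterThreshold) return 0.0;
            return (DisasterThreshold - Value) / DisasterThreshold;
        }
    };
    ```

    The constant drain is what creates the pressure: a player who ignores worship sees the disaster chance climb every single turn, just as the designers intend.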

    That's how we turn history into a game. It can be fun and challenging! There are many other examples of similar transitions. We'll definitely keep working with scientists, not only Russian but also foreign. In fact, we hope to engage more and more people in the process of game making.

    That's it for now. Thank you for reading! Comments are very welcome!

    If you would like to know more about the game and follow our social media, here are links:

    Egypt: Old Kingdom on Steam;

    Predynastic Egypt on Steam;

    Our community on Facebook;

    Our Twitter.

    Three weeks have passed, and a few new people have installed the game. I had the chance to deploy the game to junior's x86 tablet, where it currently crashes. The tablet does have a gyro sensor, which I thought worked fine when I used it on Windows Phone, but apparently the code isn't crash-proof enough.

    I've been looking mostly for crashes now, and a handful (well, two in the last three days) did appear. The crash info is all over the place: some crashes have a really great stack trace, others nothing, and some are in between. I reckon this is heavily affected by users fiddling with telemetry settings. 

    What I'm also missing at first glance is the configuration of the crashed app. Since the build produces executables for x86, x64 and ARM, it'd be nice to know which of these was the culprit. What's always there is the name, IP, Windows build version and device type of the device that was running the game.

    While the stack traces sometimes help, they are only that: you don't get a full dump or local watch info. So you can get lucky and the location of the problem is clear, or you're out of luck. In these last two crashes the stack trace hints at a null pointer in a method that is used throughout the game (GUI displaying a texture section). I suspect it happens during startup, when a code path enters a function before it is ready. In these cases I can only add a safety check and cleanly jump out of the function. Build, upload, re-certify, and the next try is in one to three days.
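    Such a safety check is usually just a guard clause at the top of the suspect function. A hypothetical C++ sketch of the idea - the types and names are stand-ins, not the actual game code:

    ```cpp
    #include <cstdio>

    // Stand-ins for the real engine types; only what the sketch needs.
    struct Texture { int Width = 0, Height = 0; };
    struct Rect { int X, Y, W, H; };

    // If the widget is asked to draw before its texture is ready, log and
    // bail out cleanly instead of dereferencing a null pointer.
    bool DrawTextureSection(const Texture* Tex, const Rect& Src)
    {
        if (Tex == nullptr)
        {
            std::fprintf(stderr, "DrawTextureSection: texture not ready, skipping draw\n");
            return false; // cleanly jump out; the caller simply draws nothing
        }
        // ... the actual blit of Src from Tex would go here ...
        return true;
    }
    ```

    The downside, as noted, is that this hides the symptom rather than fixing the startup ordering, but it turns a hard crash into a one-frame visual glitch.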

     

    Currently I'm struggling to get remote debugging working on the tablet. I can deploy the app via the enabled web portal, but the remote debugger is not properly recognized by Visual Studio. I was hoping for USB debugging, as that works nicely on the phone, but had no luck with it.

     

    Well, here's to the next version hoping to get those crashes fixed!

     


    Recent Entries

    Latest Entry

    1.png.1f031b7b43238d7170f55af0367bfcbd.png

    Here goes my game.

    This challenge is sooooo convenient for me, as I have no ability to draw... :(

    3.png.b3d3a24587e40fb00c4964ae3bda5b20.png

    By finishing it I learned a lot about Cocos Creator, which is good at UI effects.

    Thanks a lot for the Challenge!

     

    Download (Windows only):

    https://www.dropbox.com/s/q0l37r5urhqgtup/MissileCommandRelease.zip?dl=0

     

    Source code:

    https://github.com/surevision/Missile_Command_Challenge

     

    Screenshot:


    1.png.1f031b7b43238d7170f55af0367bfcbd.png

    title

    2.png.9fdfd0204fa9f90ae63e86c7050f8773.png

    gameplay

     

    For QLMesh (and some other projects), I am running my own fork of the Asset Import Library.  The difference: it is an amalgamated build - all sources are merged into one file (including dependencies). Since Assimp recently switched to miniz, I have replaced the remaining references to zlib with miniz, so zlib is not required either.

    drwxr-xr-x  85 piecuchp  staff     2890 Jan 17 23:34 assimp
    -rw-r--r--   1 piecuchp  staff  4921627 Jan 17 23:34 assimp.cpp
    -rw-r--r--   1 piecuchp  staff  2893785 Jan 17 23:34 private/assimp.h

     

    Everything you need to build assimp is:

    g++ -c -std=c++11 code-portable/assimp.cpp
    

    or just add assimp.cpp to your project/IDE (you can find the code-portable directory in my repo).

    One disclaimer: I have only tested this amalgamation under OSX with QLMesh. The main reason for the amalgamation is that it makes compilation/recompilation on different platforms with different configurations much easier.

    A side-effect is that the single-file assimp.cpp compiles really fast (like 10x faster on my MacBook than the original project files).

    (http://pawelp.ath.cx/)(http://komsoft.ath.cx/)(https://itunes.apple.com/us/app/qlmesh/id1037909675)

    model2-9.jpg


    Recent Entries

    I have released my first free prototype!

    https://yesindiedee.itch.io/is-this-a-game

    How terrifying!

    It is strange: I have been working towards the moment of releasing something to the public for all of my adult life, and now that I have, I find it pretty scary.

    I have been a developer now for over 20 years and in that time I have released a grand total of 0 products.

    The Engine

    The engine is designed to be flexible with its components, but so far it uses

    OpenGL, OpenAL, Python (scripting) and Cg; everything else is built in.

    The Games

    When I started developing a game I had a pretty grand vision: a 3D exploration game called Cavian (image attached). And yep, it was far too complex for my first release. Maybe I will go back to it one day.

    I took a year off after that. I had to sell most of my stuff anyway, as not releasing games isn't great for your financial situation.

    THE RELEASE

    When I came back I was determined to actually release something! I lowered my sights to a car game. It is basically finished, but unfortunately my laptop is too old to handle the deferred lighting (Thinkpad X220, Intel graphics), so I can't really test it. I'm going to wait until I can afford a better computer before releasing it.

    Still determined to release something I decided to focus more on the gameplay than graphics.

    Is This A Game?

    Now I have created an experimental prototype. It's released and everything: https://yesindiedee.itch.io/is-this-a-game 

    So far I don't know if it even runs on another computer. Any feedback would be greatly appreciated!

    If you have any questions about any part of the creation of this game - design, coding, scripting, graphics, deployment - just ask, and I will try to make a post on it.

    Have a nice day. I have been lurking on here for ages but never really said anything...

    I like my cave

     

    ScreenSat27.jpg


    Recent Entries

    Last week we made a draft design document to visualize the global structure of the game. 
    choconoa_levels.png

    Because the levels are linear with a lot of verticality, and the camera is free, we are trying to find a good solution for the boundaries. Of course they will be heavily filled with environment deco, but we don't want to spend a lot of time making sure there are no holes! And we don't want invisible walls that are not obvious. So I tried a depth-faded magic wall (using depth fade for the intersection between the wall and other objects, plus a camera depth fade) like this:

    giphy.gif

    Now the chantilly path automatically conforms to the ground:

    giphy.gif

    As much as possible we try to save time by creating tools like this:

    giphy.gif
     

    This will be a short technical one, for anyone else facing the same problem. I can't pretend to have had a clue what I was doing here - I can only describe the procedure I followed in the hope it will help others. I found little information online on this subject.

    I am writing an Android game and want to put in gamepad support for analogue controllers. This has proved incredibly difficult, because the Android Studio emulator has no built-in support for trying out gamepad functionality. So I bought a Tronsmart Mars G02 wireless gamepad (it comes with a USB wireless dongle). It also supports Bluetooth.

    The problem I faced was that the gamepad worked fine on my Android TV box, but wasn't working under Linux Mint, let alone in the emulator, and wasn't working via Bluetooth on my tablet and phone. I needed it working in the emulator, ideally, to be able to debug (as the Android TV box was too far away). 

    Here is how I solved it, for anyone else facing the same problem: first the problem of getting the gamepad recognised under Linux, and then the separate problem of getting it seen by the Android emulator (the latter may work under Windows too).

    Under Linux

    Unfortunately I couldn't get Bluetooth working, as my Bluetooth stack wasn't up to date and none of my devices were seeing the gamepad. I plugged in the USB wireless dongle, but no joy.

    It turns out the way to find out what is going on with usb devices is to use the command:

    lsusb

    This gives a list of attached devices, along with a vendor ID and product ID (in the form 20bc:5500).

    It was identifying my dongle as an Xbox 360 controller. Yay! That was something at least, so I installed an Xbox 360 gamepad driver following this guide:

    https://unixblogger.com/2016/05/31/how-to-get-your-xbox-360-wireless-controller-working-under-your-linux-box/

    sudo apt-get install xboxdrv

    sudo xboxdrv --detach-kernel-driver

    It still didn't seem to do anything, but I needed to test whether it worked, so I installed a joystick test app, 'jstest-gtk', using apt-get.

    The xbox gamepad showed up but didn't respond.

    Then I remembered reading in the gamepad manual that I might have to switch the controller mode for PC from D-input to X-input. I did this and it appeared as a PS3 controller (with a different USB ID), and it was working in the jstest app!! :)

    Under Android Emulator

    Next stage was to get it working in the Emulator. I gather the emulator used with Android Studio is qemu and I found this article:

    https://stackoverflow.com/questions/7875061/connect-usb-device-to-android-emulator

    I followed the instructions here, basically:

    Navigate to emulator directory in the android sdk.

    Then to run it from command line:

    ./emulator -avd YOUR_VM -qemu -usb -usbdevice host:1234:abcd

    where the host is your usb vendor and id from lsusb command.

    This doesn't work straight off; you need to give it a udev rule to be able to talk to the USB port. I think this grants permission, but I'm not sure.

    http://reactivated.net/writing_udev_rules.html

    Navigate to etc/udev/rules.d folder

    You will need to create a file in there with your rules. You will need root privileges for this (choose to open the folder as root in Nemo or use the appropriate method for your OS).

    I created a file called '10-local.rules' following the article.

    In this I inserted the udev rule suggested in the stackoverflow article:

    SUBSYSTEM!="usb", GOTO="end_skip_usb"
    ATTRS{idVendor}=="2563", ATTRS{idProduct}=="0575", TAG+="uaccess"
    LABEL="end_skip_usb"
    SUBSYSTEM!="usb", GOTO="end_skip_usb"
    ATTRS{idVendor}=="20bc", ATTRS{idProduct}=="5500", TAG+="uaccess"
    LABEL="end_skip_usb"

    Note that I actually put in two sets of rules, because the USB vendor ID seemed to change once I had the emulator running. It originally gave me an UNKNOWN USB DEVICE error or some such in the emulator, so watch out that the USB ID has not changed. I suspect only the latter rule was needed in the end.

    To get the udev rules 'refreshed', I unplugged and replugged the usb dongle. This may be necessary.

    Once all this was done, and the emulator was 'cold booted' (you may need to wipe the data first for it to work) the emulator started, connected to the usb gamepad, and it worked! :)

    This whole procedure was a bit daunting for me as a Linux newbie, but if at first you don't succeed, keep trying and googling. Because the USB device is simply passed through to the emulator, the first step of getting it recognised by Linux itself may not be necessary; I'm not sure. A modified version of the technique may also work for getting a gamepad running under Windows.

    NeutrinoParticles is a real-time particle effects editor, a new and extraordinary editor on the market: www.neutrinoparticles.com

    What makes this editor recognisably different from other editors is that it allows you to export effects as source code in JavaScript or C#, which makes them extremely compact and fast. And it is absolutely FREE.

    MacOS and Linux users may use WINE to run the editor for now. Native packages will be available soon.

    Particles Effect Editor, JavaScript, C#, Unity, PIXI Engine, Generic HTML

    The software ships with renderers for JavaScript (PIXI Engine, generic HTML) and for C# (Unity, generic C#).

    For example, if you use PIXI in your projects, you only need to copy/paste several lines of code to make an effect work.

    Moon2.png

    Fire3.png

    Birds5.png

    Home Page (54).png

    We've uploaded a video to our Dailymotion account: a full-length recording of the intro text scene we've been working on for the game.  This was interesting to do because we've never done timed, sequenced animations before.  The intro briefly details the lore: who you are (the good guy), who the bad guy is, and some history behind your powers and training.  This is one of the first things we've made for the game that is actually a playable part (not a test level, or rigging to get controls right).  Follow for more updates on the game; we hope to show the tutorial section quite soon.

    Crystal Dissention Intro Text Trailer - Dailymotion video


    Recent Entries

    Game Programming Resources

    5a610531495ba_gameprogrammingresources2.thumb.png.0024fd7e0c8f4a6533bb2b56faab4c32.png

    Rodrigo Monteiro, who has been making games for twenty years now, started a thread on Twitter sharing his favorite game programming resources. I collected those, along with a few responses, and indexed them into a Twitter moment.

    Here’s what was in the thread: 

    Game Networking: https://gafferongames.com/categories/game-networking/

    Development and Deployment of Multiplayer Online Games by IT Hare / 'No Bugs' Hare is a multiplayer game programming resource split into nine volumes, the first of which is available on Amazon.

    Linear Algebra: 

     

    Geometry – Separating Axis Theorem (for collision detection): http://www.metanetsoftware.com/technique/tutorialA.html

    How to implement 2D platformer games: http://higherorderfun.com/blog/2012/05/20/the-guide-to-implementing-2d-platformers/

    Pathfinding: https://www.redblobgames.com/pathfinding/a-star/introduction.html

    OpenGL Tutorial: https://learnopengl.com/

    Audio Programming: https://jackschaedler.github.io/circles-sines-signals/index.html

    OpenAL Effects Extension Guide (for game audio): http://kcat.strangesoft.net/misc-downloads/Effects%20Extension%20Guide.pdf

    Entity Component Systems provide an alternative to object-oriented programming.

    Entity Systems are the future of MMOG development: http://t-machine.org/index.php/2007/09/03/entity-systems-are-the-future-of-mmog-development-part-1/

    What is an entity system framework for game development? http://www.richardlord.net/blog/ecs/what-is-an-entity-framework.html

    Understanding Component-Entity-Systems: https://www.gamedev.net/articles/programming/general-and-gameplay-programming/understanding-component-entity-systems-r3013/

    Alan Zucconi blogs about shaders and game math for developers on his site: https://www.alanzucconi.com/tutorials/

    AI Steering Behaviours: http://www.red3d.com/cwr/boids/

    Bartosz Olszewski blogs about game programming here: gamesarchitecture.com

    How to write a shader to scale pixel art: https://colececil.io/blog/2017/scaling-pixel-art-without-destroying-it/

    Here’s a podcast on C++ programming: http://cppcast.com/archives/

    Game Programming Patterns: http://gameprogrammingpatterns.com/

    Note: This post was originally published on my blog as game programming resources.

  12. Main character: Zoile.

    First shared concept art! Please give me your thoughts about him (comment below, share and subscribe). My first intention was to reveal the plot, but I felt the blog was missing some visual support. 10 followers and we will unlock a new character next week :)

    Today's question: who were your favorite game, cartoon, or comic book heroes, and why?

    [Images: Zoile with and without his armor]

    Zoile is the main character of a group of three. In our game, you will have the chance to control 3 main characters, each with individual weaknesses and strengths. You will be able to control them all at the same time, all the time! How you will control them will be covered in an upcoming post, but it promises to be a fun and unique way of handling different skill sets. I remember one of my all-time favorite arcade games, the 1989 Teenage Mutant Ninja Turtles. My favorite character was Michelangelo, but Donatello had this long stick that could reach enemies from a much greater distance, making him the best choice to fight the first boss, Rocksteady, who I felt was the strongest in the game besides Shredder. It was always annoying to have to choose between the one I liked and the one most capable of handling the situation. I felt the same thing playing many RPGs like Diablo, Diablo II, and WoW, where you just can't invest enough time in all classes and you always dreamed of mixing and matching class skills to get the best possible character build for your play-style. How we approach that problem will be revealed soon.

     

     

    Quote

    At this point, you may feel that ETs are outdated, unrefreshing, and filled with clichés; think again. I swear that our anti-heroes are different, and that the plot and the philosophical, humorous twists that will be explored in this universe are very different and refreshing from what you have in mind.

    Zoile is the self-proclaimed leader of our squad. He is particularly strong physically for his race and possesses a quick wit; few have dared to challenge him. His descent gave him a high level of self-confidence, and Zoile quickly became full of himself. Afraid of no one, he sees himself as one of the best fighters of his race and has an unwavering pride in his homeland. He was born to a war hero and has always dreamed of becoming a full member of the prestigious flying squad of the 452b Pekler Interplanetary Army. After several failures in the admissions exam, Zoile became very mean and bitter towards others. After 4 years of attempts, he was finally accepted as a low-rank recruit. It is said that his father intervened with the council so he could have the chance to demonstrate his value. Unfortunately, many disciplinary problems and conflicts with other members have confined him to low-rank functions. After several difficult years with the squad, the high council allowed him to create a small squadron with two members of his choice for his first mission. He unsurprisingly selected the only two members with whom a sincere friendship had developed. The two members accepted the honor after a convincing patriotic speech from Zoile, and since then he has taken on his role with great honor. Despite his strong will and frequent fights with his two friends, he is willing to give everything to show everyone that his team is the best, because he is the best leader, and a good leader can bring weak soldiers to great honor.

    You can see Zoile with and without his armor set and a typical ray-gun weapon. The Zarin are a humanoid skinny race from a nearby planet.

    Zarin, as a race, are mostly peaceful, minding their own business and seeking knowledge. They have had an equivalent length of evolution compared to humans, but they took a different path. They are scared about a recent discovery they made that puts their world at risk of what they call the "multicolored tall people invasion". More info about the Zarin will be shared in an upcoming post!

     

    Thanks, please share your opinion; it's very valuable. 5 comments and we will publish a video tutorial on how to draw him.

     

     

     

  13. I'm a man on a Mobile Gaming Quest (MGQ) to play a new mobile game every day, and document my first impressions here and on YouTube. Below is the latest episode.

    Run, swipe, die. Rinse and repeat. Seriously, Glitch Dash looks gorgeous but might just be the most difficult arcade game I've played on mobile (well, apart from Flappy Bird). Avoiding the swinging hammers and laser beams is pure torture, but extremely satisfying when you finally complete each level. 

    The game's currently in beta, but I decided to include it as I was having a lot of fun with it, and I figured some of you might want to sign up for the beta.

    In terms of monetization, you start out with ten lives, which you'll quickly burn through, and get 10 new lives after 120 seconds, or immediately by watching an ad. Luckily, we can also remove the life system entirely through a $2 IAP. 

    My thoughts on Glitch Dash:


    Google Play: https://www.signupanywhere.com/signup/nvip99qq
    iOS: https://www.signupanywhere.com/signup/nvip99qq

    Subscribe on YouTube for more commentaries: https://goo.gl/xKhGjh
    Or join me on Facebook: https://www.facebook.com/mobilegamefan/
    Or Instagram: https://www.instagram.com/nimblethoryt/
    Or Twitter: https://twitter.com/nimblethor

  14. Latest Entry

    New inventory interface framework

    DOMEN KONESKI

    I established a new inventory interface framework by writing a new logical layer on top of the current system. By doing this we can now save a lot of coding and design time. The system is generic, which means UI elements are built on the fly and asynchronously. For example, we load sprites from the pool if they already exist in memory; otherwise we load them asynchronously from the resources on disk. Everything feels seamless now, which was the primary goal while recreating the system.
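    The pool-or-load flow described above can be sketched generically like this (class and function names are illustrative, not Floatlands' actual code; the disk loader is a stand-in callback):

```python
# Sketch of a load-from-pool-else-load-from-disk sprite cache.
# `load_from_disk` is a hypothetical callback standing in for the slow path.

class SpritePool:
    def __init__(self, load_from_disk):
        self.cache = {}                    # sprite name -> loaded sprite
        self.load_from_disk = load_from_disk

    def get(self, name):
        # Fast path: sprite already pooled in memory.
        if name in self.cache:
            return self.cache[name]
        # Slow path: load from resources (disk) and pool it for next time.
        sprite = self.load_from_disk(name)
        self.cache[name] = sprite
        return sprite

disk_loads = []
pool = SpritePool(lambda name: disk_loads.append(name) or f"<sprite:{name}>")
pool.get("sword_icon")   # first request hits the disk
pool.get("sword_icon")   # second request is served from the pool
print(disk_loads)        # ['sword_icon'] -- disk touched only once
```

    In a real engine the slow path would be asynchronous (a coroutine or callback), but the cache-first structure is the same.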


    The new logical layer introduces new item-manipulation techniques while your inventory is open:

    • Hovering over an item shows a description panel holding information about the item, such as its name, description, and quality;
    • Dragging one item onto another makes swapping or merging possible;
    • Right-clicking an item splits it, if there is enough room in your inventory;
    • Left-clicking while holding left Shift instantly transfers an item to another inventory panel.
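    The swap/merge and split rules from the list above can be sketched as follows (slot layout, stack limit, and function names are assumptions for illustration, not Floatlands internals):

```python
# Illustrative sketch of inventory swap/merge and split rules.
# MAX_STACK and the slot representation are made-up assumptions.

MAX_STACK = 64

def merge_or_swap(slots, src, dst):
    a, b = slots[src], slots[dst]
    if a and b and a["item"] == b["item"]:
        moved = min(a["count"], MAX_STACK - b["count"])  # merge same items
        b["count"] += moved
        a["count"] -= moved
        if a["count"] == 0:
            slots[src] = None
    else:
        slots[src], slots[dst] = b, a                    # otherwise swap

def split(slots, src):
    a = slots[src]
    if a is None or a["count"] < 2 or None not in slots:
        return False                                     # needs room in inventory
    half = a["count"] // 2
    a["count"] -= half
    slots[slots.index(None)] = {"item": a["item"], "count": half}
    return True

slots = [{"item": "log", "count": 10}, {"item": "log", "count": 20}, None]
merge_or_swap(slots, 0, 1)   # drag slot 0 onto slot 1: same item, so they merge
split(slots, 1)              # right-click: half the stack moves to a free slot
print(slots)
```

    The "enough room" check on split mirrors the rule in the bullet list: splitting fails when no empty slot exists.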


    Harvesting world resources

    DOMEN KONESKI

    World resources such as stone veins, metal veins, niter veins, crystals and trees now have a harvesting feature: items (wooden logs, ores) drop while mining/harvesting the resource, which significantly improves the resource-gathering experience of Floatlands:


    Critters

    ANDREJ KREBS

    We started adding some more critters to the game. These will be harmless small animals and robots that will add some more life to the game. So far I’ve modeled, rigged and animated the hermit crab and the spherical robot. I also added a missing animation to the rabbit.

    Foliage

    ANDREJ KREBS

    I have prepared some undergrowth foliage to make the nature more varied and interesting. They’re made in a similar way the grass, bushes and treetops are made, with alpha transparent textures on planes.

     


     

    Companions

    MITO HORVAT

    Roaming around the world can sometimes feel a bit lonely. So we decided to implement a companion to follow you and keep you company as you play. Companions won't just follow you around and be useless; they'll be a valuable asset early in the game. For instance, they'll help you gather resources with their set of utilities. They'll even provide a source of light during the night!

    As with any concept-design task I'm given, I usually sketch a bunch of ideas in a small collage. In this case we then printed it, and everyone in the office picked their 3 favourites.

    [Images: companion concept collage and sketches]

    Based on that decision, I combined the elements of the companions everyone picked and came up with the following concept. It's also important that the companion has elements you can find in the game, to give it a unique aesthetic look.

  15. I took the last two weeks of December off for holidays, so no production was done for Spellbound during that time. I met up with my friend Russel (Magforce7) for an afternoon at my office and gave him a demo of Spellbound in VR. He works for Firaxis, so it was interesting to compare notes on development and production. Without a doubt, he's a lot more experienced with production and development, so I tried to glean as many tips and tricks as I could. It was also his first time trying VR, so I gave him a bunch of quick VR demos so that he could get familiar with the medium and how to interface with it. It's interesting to compare the differences between producing a traditional video game vs. a room scale VR video game.

    In terms of production, I've written out the complete narrative manuscript for Episode 1 of Spellbound and have begun shopping it around to anyone willing to read it. It's not "done" by any stretch; it's just the first draft, and the first draft is always going to be susceptible to lots of revisions. Currently, it's about 40 pages in length, which is about what I had expected. Now I need to go through and do a ton of polishing passes. I think of the story sort of like one of those JPG images that loads over a slow internet connection. The very first version of the image is a highly artifacted mess that barely resembles the actual image, but with each pass the resolution improves and the details get more refined, until you end up with a perfectly clear image.

    With regards to writing narrative for a VR game, I think the pass process is going to be a lot more convoluted. The first pass is just trying to write the story itself and figure out what the story even is. The writer explores a bunch of different directions and the final product is the choices by the writer which yield the most interesting story. But, you can't just take the story of a writer and plop it into a VR game and call it perfect. In fact, the writer must keep in mind the medium they're writing for and what the capabilities of that medium are. If you're writing a script for a movie, you have to think about what scenes you're going to create and possibly consider a shot list, and also think about the actors who will portray your characters and the acting style. You can effectively frame the shot of the scene to show exactly what you want the audience to see. That's a great amount of power and control over the audience experience. Writing for VR is completely backwards. I have to preface this by saying that I'm a novice writer and have never written a script, much less, a script for VR, so take my words with a hefty grain of salt. My writing technique mostly consists of putting myself into the body of the character. I am that character. That character has a personal history. A personality. A style. Stated interests, and unstated secret interests and ambitions. Character flaws, and character strengths. I see the scene from the eyes of the character, see the state of the world, listen to what was just said, and then react based on the traits of my embodied character. The character should always be trying to progress their ambitions. Character conflict should happen when ambitions collide. When it comes to VR games, the protagonist is the player themselves, so you have to keep in mind that the protagonist has agency which the writer can't control. They experience the story from the first person perspective, through the eyes of the character they embody. 
So, whatever happens to the main character also happens to the player. With VR, the player brings their own body and hands into the scene, so those are things the writer can interface with. Maybe the player gives a bow to a king? What if they don't bow before royalty? Maybe when you meet a new character, they extend a hand to give a handshake? What happens if you don't shake their hand? Maybe a character comes forward to give the player a huge hug? The secret sauce for VR is finding these new ways to develop interpersonal connections with characters in the world and using that to drive story and player experience. I try to keep this at the forefront of my mind when writing for VR -- first hand player experience is king. I also want to give my characters depth, so I do this mostly through subtle narrative exposition, mostly in the form of ambient banter between characters. For the sake of simplicity of production, the main character doesn't have narrative conversation choices. This means I don't have to create conversation trees or user interfaces for dialogue choices and the flow of dialogue can be seamless and uninterrupted.

    I am starting to audition for character voices. I've got a list of local voice actor talents and am asking a few of them to send me a few demo lines from the manuscript and a quote for their day rates. It's hugely inspiring to hear the voices of the characters saying the lines I've written. It feels like these characters might actually exist somewhere outside of my imagination, and I owe it to them to give them the very best lines I can come up with to portray their nature and ambitions correctly. A few people have read my manuscript and given mostly positive feedback, so that suggests that I'm roughly on the right track. I'm going to spend a few days taking it to various writers meet up groups and getting critical feedback, so this will help me immensely to get to a higher level of polish and clarity. If you're interested in reading the manuscript and my production notes, feel free to read the google doc and supply feedback in the comments below:

    https://docs.google.com/document/d/1IvNYNf9NqtdikD6UZuGq-rUo9yU5LVqqgIlWsz6n2Qs/edit?usp=sharing

    (Note: It's a work in progress, so you can see changes happening live as I edit it.)

    The ideal is to write a story which is so compelling that it grabs people and makes them want to read it. I want to be able to drop a 40 page manuscript in someone's lap and tell them to read it. They'll be thinking, "oh god, more bullshit. I don't want to read this crappy novice writing. I'll humor them and read two pages." So, they read two pages. It's good. They decide to read another page. It's also good. In fact, it's getting better. They turn to the next page to keep going. Wow. It's actually a decent story. They keep turning pages. Forty pages later, they're surprised to have read the whole thing and they are left wanting more. It's a pleasant surprise. The story should be good enough that it stands strongly on its own legs. It doesn't need anything else to be a compelling experience. Now, if you experience the same story in VR, and characters act out their lines, and the voice acting is stellar, the experience of the story is just multiplied by the talent and quality. This is the ideal I'm shooting for. Spellbound will be a story-centered VR game, rather than a game which happens to have a shallow story layered on top. It's worth taking the time to nail the story and get it right, so I'm taking my time.

    When the manuscript is complete, I'll have voice actors voice out each of the characters. I really don't want to have to do a lot of dialogue resamples, so I need to make sure that the first time a line is voiced is also the last time it's voiced. The goal is to avoid revisions. So, how do I do this? My current plan is to polish the story and get it as close to perfection as possible. Hiring voice actors costs money. When I drop voiced lines into the game, I am going to need to know whether the line works in the current scene with the current context. So, a part of the creative writing process will require me to experience the scene and adapt the writing for context and length through a bunch of iterations. I'm going to voice act my own characters with a crappy headphone mic and use these assets as placeholders. It'll be a really good way for me to quickly iterate on the character interactions and player experience. I kind of feel silly, like I'm just playing with dolls who are having a conversation with each other. But hey, maybe that's really the core of script writing in Hollywood too?

    On a personal note, I've decided to give up all social media for a month. No facebook, no twitter, no reddit, no youtube, etc. The primary reason is because it costs me too much time. Typically, my day starts by waking up, pulling out my laptop and checking twitter and facebook for updates to my news feed. That costs me about 30-45 minutes before I get out of bed. Then I go to work. I get to work an hour later, start a build or compile, and since it's going to take 5 minutes to complete, I decide "Hey, I'll spend five minutes checking facebook while I wait.". That five minutes turns into twenty minutes without me realizing it. And this happens ten times a day. I can easily waste hours of my day on social media without consciously realizing it. It adds up, especially over the course of days and weeks. And for what? To stay updated and informed on the latest developments in my news feeds? Why do I actually care about that? What value does it add to my life? How is my life better? Or, is my life actually better? What if social media is actually unhealthy? What if it's like cigarettes? Cigarettes cause lung cancer with prolonged use, so maybe social media causes mental health problems like depression, low self worth and narcissism with prolonged use? What if social media is inherently an anti-social activity? Anyways, I've consciously decided to abstain for a full month without social media as an experiment. So far, I'm five days in and realizing how much I was using it as an outlet for self expression. Something happens to me and my default reaction is, "Oh, this would be good to share in a post!", and now I realize "Oh, I can't share this on social media. Who am I actually trying to share this with? Why am I trying to share this? Can I just forget about sharing and just relish the experience in this fleeting moment?" 
The secondary effect of abstaining from social media is that I'm also trying to pull away from technology a bit more so I can find a more healthy balance between technology and life. Currently, if I'm not staring at a screen, I'm at a loss for what to do with my time. Should I really live my whole life staring at glowing rectangles? Is there more to life than that? How would I feel if I'm laying on my deathbed and reflecting on my life, realizing that I spent most of it looking at screens?

    I need new hobbies and passions outside of screens. So, I've picked up my old love for reading by starting in on some fantasy books. Currently, I'm well on my way through "The Way of Kings" by Brandon Sanderson. I'm reading it slowly, digesting it sentence by sentence, and thinking about it from the eyes of a writer instead of a reader. It's a remarkably different experience. He's got some wonderfully clever lines in his book, and there are some great pieces of exposition which the author uses as a proxy to share his own attitudes and life philosophies. I am going to steal some of these writing techniques and use them myself.

    I'm also still doing VR contract work on the side in order to make money to finance my game project. The side work is picking up slightly and I'm getting better at it. I have this ambitious idea for a new way to create VR content using 360 video and pictures. Most clients are trying to capture an experience or create a tour of something in VR and taking audiences through it. Essentially, it's mostly just video captured in 360 and then projected onto the inside of a sphere, and then setting the player camera at the center of the sphere. It's somewhat simple to implement. My critique is that this isn't a very compelling virtual reality experience because it's really just a passive experience in a movie theater where the screen wraps all around the viewer. There's very little interaction. So, my idea is to flip this around. I'd like to take a 360 camera and place it at various locations, take a photograph/video, and then move the camera. Instead of having a cut to the next scene, the viewer decides when to cut and where to cut. So, let's pretend that we're creating a virtual reality hike. We incrementally move the 360 camera down the trail, 50 feet at a time, for the entire length of the hike. A hike may not be perfectly sequentially linear, there may be areas where you take a detour to experience a look out on the side of the trail. So, on the conceptual data structure level, we are going to have a connected node graph arranged spatially, and the viewer will transition between connected nodes based on what direction they want to go on the hiking trail. I'll have ambisonic audio recording, so you'll be able to hear birds chirping in the trees and a babbling brook in the side of the trail, etc. The key difference here is that the viewer drives the pace of the experience, so they can spend as much or as little time as they want, experiencing an environment/scene, and since they can control what nodes to visit next, they have agency over their entire experience. 
This is the magic of VR, and if I get a prototype proof of concept working, I think it can be a new type of service to sell to clients. I can go around Washington State and create virtual recreations of hikes for people to experience. There are some beautiful hikes through the Cascade mountains. We have a desert on the eastern half of Washington, filled with sage brush and basalt lava rocks. We also have a temperate rainforest on the Olympic peninsula, where we get 300+ inches of rain a year, with six feet of moss hanging off of tree branches. The geography, flora and fauna are somewhat unique to Washington State, so if I can create a library of interactive virtual reality experiences of various parts of our state, it would make for some pretty cool virtual tours. It would almost be as good as visiting in person and a good way to preview a place you might want to experience. If it proves a popular form of content, I can expand my library by offering virtual reality tours of other parts of the world people wouldn't otherwise be able to visit. Would you like to explore the tropical jungles of Costa Rica? Would you like to climb the mountains of Nepal? Would you like to walk around in Antarctica? Would you like to go to the Eiffel Tower? If I do this right, I could create a fun VR travel channel and add some educational elements to the experience. It would also be a good way for me to get out of the office and experience the world. I'm currently working on building a prototype proof of concept to figure out the technical side and user interface, and will probably have something rough built out by the end of the month. This could turn into a cool new way to do interactive cinema in VR. I haven't seen anyone else do something like this before, but I may just be underinformed.
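    The connected node graph described above can be sketched very simply; the node names and the trail layout here are made-up examples, but the structure (capture points as nodes, walkable transitions as edges) is the one the post describes:

```python
# Sketch of the spatial node graph for the 360-degree tour idea: each node is a
# camera capture point, and edges are the directions a viewer may move next.

class TourGraph:
    def __init__(self):
        self.edges = {}  # node -> set of connected nodes

    def connect(self, a, b):
        # Trail links are two-way: you can always walk back the way you came.
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def choices(self, node):
        # The viewer, not a pre-cut video, decides which node to visit next.
        return sorted(self.edges.get(node, set()))

trail = TourGraph()
trail.connect("trailhead", "bridge")
trail.connect("bridge", "waterfall")
trail.connect("bridge", "lookout")   # optional detour off the main trail

print(trail.choices("bridge"))  # ['lookout', 'trailhead', 'waterfall']
```

    Playback then becomes: show the 360 capture for the current node, present the connected nodes as directions, and cut only when the viewer picks one, which is exactly how the pacing stays in the viewer's hands.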

  16. Meteor Bombardment 1 Devblog 01

    • Genre: Fixed Shooter
    • Engine: Unity
    • Platform: PC
    • Art Style: 8-bit Pixel Graphics
    • Current State: Technical Design Phase - 30% Complete

     

    Game Description

    Aliens from a distant planet have begun redirecting meteors and attack ships at Earth in order to wipe out as much of the population as possible before they invade. Using the only salvaged alien attack ship, you must work to destroy the meteors before they impact Earth and kill off its population.

     

    Development Status Overview

    Conceptual design for the game is complete. Technical Design has begun, which involves defining how meteors and attack ships will travel, how many hits are needed to destroy meteors and attack ships, as well as level design theory.

    Attached to this blog is the album for the game which includes the conceptual design image. When Technical Design is completed images regarding the technical aspects will be uploaded to the same album, with a new developer blog posted.

     

    Project General Goals

    1. Concept Design
    2. Technical Design
    3. Recruit Team
    4. Develop Game
    5. Test Game
    6. Launch as Free Title
  17. 2018 has already been a busy year. The Gears of Eden team has been hard at work as we prep for our Alpha 2 release. For our art and dev team, that means designing and implementing in-game resources. For our writing team, that means research and planning. But, combined, that means we get the chance to test our cool new toys and show them off for everyone to see.

    To accomplish this, our team recently held our first Gears of Eden Discord Day. In case you don't know, Discord is an app that allows gamedevs to make their own chat servers and share images and info with people who join (here's YOUR invite). We shared our new rover design, talked about influences for the game, explained how this project got started, and streamed the first meeting between our old rover and the new one. Check out the first meeting: https://www.twitch.tv/videos/208097308?t=12m04s

    Since then, the art team has been hard at work on base design and updating our UI. We've gone over quite a few models looking to find the right fit for our game. Because the bases are to be used by rovers and are modular (read: expandable!), we decided that the design must be functional rather than just aesthetically pleasing.

    [Image: small base design concept samples]

    The images above were just a few samples that we reviewed, and we are getting closer and closer to deciding what the base design will be for Alpha 2. Based on these images, our team was able to render a sample of the base in-game.

    [Image: in-game render of a sample base]

    As mentioned, our UI is being updated to be more intuitive and provide better information for players. Instead of clicking on the gears to craft using the inventory, there will be separate tabs implemented. The Crafting tab will show all blueprints collected, which you will then be able to craft from if you have the resources.

     

    With that in mind, we've been doing some more Twitch streaming. Some of that has been development videos, and some of that has been members of our team showing off some games we enjoy playing. Here you can see Sledge going over the new UI, testing out the new rover, and doing some crafting: https://www.twitch.tv/videos/210491947?t=02m54s

     

    We're making a lot of progress, but Alpha 2 is going to be a critical phase for us. Right now, we're doing all our development at our own cost, with a small team. Once Alpha 2 is out, we're going to have to find a way to secure some financial backing if we want to finish our demo in a reasonable timeframe. That's where you come in. We really, really need your help in growing our audience. Please engage with us, and follow us on our various social media accounts to help spread the word to others. Like, comment, share. And, if you're able, you could always support our endeavors at our donation rewards page, or through Patreon. We literally cannot make this game without you. Thank you so much!

    This is going to be so fun! We can't wait to show you everything we've been working on these past few months, and it'll be a great stamp on this stage of development! If you want to see how we get all this done as we get it done, follow us on Twitter, Twitch, and Facebook for all the latest and greatest news on Gears of Eden.

  18. Corona’s engineers have slipped a couple of really cool features into recent daily builds that may not have caught your attention.

    Emitter Particles and Groups

    Previously, the particles emitted from an emitter became part of the stage until they expired. This would create problems with the relative positioning of the particles if your app needed to move the emitter. If you moved the emitter as part of a parent group, it didn’t create a natural look. Emitters can now have their particles be part of the parent group the emitter is in. This was added to daily build 2018.3199.

    To use this feature, you can set emitter.absolutePosition to the parent group of the emitter. Previously you had the options of true or false to determine whether positioning was absolute or relative to the emitter; by passing a group, positions are now relative to that group. You can download a sample project to see the feature in action.
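    The difference between stage-absolute and group-relative particle positions comes down to a coordinate transform. A generic sketch of the idea (illustrative math only, not Corona's implementation): a particle stored relative to a group keeps its place in the group when the group moves, because its world position is recomputed from the group's offset.

```python
# Generic sketch of group-relative positioning: a particle spawned relative
# to a group follows the group when the group moves.

def world_position(group_offset, local_pos):
    # World position = group offset + particle's local (group-relative) position.
    return (group_offset[0] + local_pos[0], group_offset[1] + local_pos[1])

group = [100, 50]
particle_local = (10, 5)                      # spawned relative to the group
print(world_position(group, particle_local))  # (110, 55)

group[0] += 40                                # the emitter's parent group moves...
print(world_position(group, particle_local))  # (150, 55): particle moved with it
```

    A stage-absolute particle would instead keep its original world coordinates when the group moves, which is what caused the unnatural look described above.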

    Controlling iOS system gestures

    On iOS, when you swipe down from the top, you get the Notifications panel, and when you swipe up from the bottom, you get Control Center. If you have UI elements in your game near the top or bottom, in areas likely to trigger swipe gestures, it would be nice to be able to control that. Now you can! Starting with daily build 2018.3193, you can use native.setProperty( "preferredScreenEdgesDeferringSystemGestures", true ) to have those swipes show just a swipe arrow, requiring a second swipe to activate the panels.

    We have more great things in the pipeline so watch this space for news and updates.


    View the full article

  19.  

    Mobile games still represent the fastest-growing niche among apps. The mobile game market worldwide is expected to reach $46 billion this year. In spite of this staggering growth, just 10% of game apps can actually be called commercially successful in terms of the growth and ROI they achieve. Naturally, developers will keep rethinking their strategies and planning more effective ways to market mobile games. Based on recent experience, what are the key marketing tips for mobile games we can consider in 2018? Let us have a look.


    1) Localization will be key

    To make your game connect with its audience in different parts of the world, it needs to speak the language of your audience in each local market. Many markets are far from dominated by any single language, and often these markets offer the biggest untapped potential for new game apps. While localising the game's language is crucial, there are other considerations as well.

    Localisation should also extend to the selection of payment methods, which should be made available according to the penetration of those methods in the respective markets. For instance, markets with lower credit card penetration should offer game players other methods of payment. In some markets, third-party payment solutions or third-party publishing rights may be good options, while in others they may not.

    2) Consider the latest In-app purchase (IAPs) trends

    Throughout 2017, in-app purchases dominated the mobile app marketing space and were the most effective revenue-earning strategy, bringing in $37 billion in app revenue in 2017 alone. In spite of the fact that just 5% of game players actually end up spending money through IAPs, this monetisation avenue is credited with 2000% more profit compared to other avenues.

    In the months to come, IAP will become more specific in targeting players, with new tools and tweaks like limited-period events, behavioural incentives, and dynamic pricing. We can also expect more games to adopt several different types of virtual currency for payment. Specially targeted offers for certain player segments will also be a key trend.

    3) Consider these social media hacks

    Social media will continue to feature prominently in mobile game marketing. A few effective social media hacks and strategies will dominate mobile game marketing in 2018 and beyond.

    When planning the social media marketing for a new game app, prioritise platforms based on the type of user your game is targeted at. There are plenty of social platforms, but an app that works on Facebook may not work well on Pinterest. It depends on the audience.

    When it comes to marketing your game on Facebook, build up excitement for several months prior to launch, and time the release based on that reaction to generate maximum buzz.

    Pinterest can be a great medium if you pin screenshots and app-related images to the platform's visual database in an appealing manner. Pinterest works especially well if you have a separate website for the app to draw and engage traffic.

    Reddit, on the other hand, can be a good platform for tracking information and spotting marketing opportunities for your game app. Lastly, make use of social analytics to track and monitor your players and their activity.

    4) Paid games

    You may already have dismissed paid apps as a monetisation strategy, but in the last year alone several paid mobile games ranked among the highest grossing, and paid apps brought in $29 billion in revenue. Yes, nearly 98% of the apps in the Play Store are free, but to your surprise, many free games now mix strategies by offering paid sister apps. Value additions like new graphical content in these paid sister apps can actually boost engagement from the audience.

    5) Game ads rewarding players

    Mobile game players are more accustomed to promotional content than other app users. Still, for in-app ads to garner substantial revenue, a game typically needs a huge audience of players. This is why game ads need to be thought of in a completely new light. Rewarding players for watching ads has emerged as a really effective strategy: around 80% of game players prefer watching video ads in exchange for in-game rewards.

    6) In-game sponsorship

    Sponsored content within mobile games has remained another popular feature of many games in recent years. It started back in 2015, when Angry Birds players were allowed to kill the awful pigs with Honey Nut Cheerios for two whole weeks as a cross-promotion. Soon, several other games followed this trend, incorporating elements of other apps into their gameplay for sponsorship purposes. It works especially well for developers who have multiple game apps, including a few successful ones. This year, we can expect mobile game developers to reduce their dependence on in-app purchases by embracing these rewarded and sponsored ads.

    7) Merchandising game products

    Merchandising game-related products to players is still effective for many mobile games, but it requires a certain level of commercial success first. Only when your game has a widespread following and enjoys niche branding can you market in-game characters shaped into real-life products like t-shirts, stuffed toys, game-inspired cars, and even notebooks or coffee mugs.

    In conclusion

    All these strategies and avenues have one thing in common: connecting with the audience in a more specific and targeted manner. In 2018, we can expect these strategies to evolve further.


    Recent Entries

    It's been a while since I've been on this site.  Been busy at work, but as with all contracting, sometimes work gets light, which is the case as of the new year.  So I saw this challenge, and thought it might be fun to brush up on my skills.  I've been working mainly with embedded systems and C#, so I haven't touched C++ in a while, and when I have, it's been with an old compiler that's not even C++11 compliant.  So, I installed Visual Studio 2017, and decided to make the best use of this.

    Time is short, and I don't exactly have media to use, so I decided to just go out and start to learn Direct2D.  I have little experience with any modern form of C++, and zero experience with Direct2D and XAudio.  Whereas I didn't mind learning Direct2D, I fully admit XAudio presented a bit of a problem.  In the end, I blatantly stole Microsoft's tutorial and have a barebones sound system working.  And unlike the Direct2D part, I didn't bother to spend much time learning what it does, so it's still a mystery to me.  I'm not entirely sure I released everything correctly.  The documentation said releasing IXAudio2 would release its objects, and when I tried to manually delete buffers, things blew up, so I just let it be.  There are most likely memory leaks there.

    As you can plainly tell, this is by far the worst entry in the challenge.  This is as much of a learning experience as an attempt to get something out the door.  I figured, if I couldn't be anything close to modern, at least be efficient.  And I failed at that miserably.  Originally I wrote this in C.  Excluding the audio files, it came out to a whopping 16 KB in size, and memory usage was roughly 6 MB.  And then I decided to start to clean up my spaghetti code (I said start, never said I finished), and every time I thought I was getting more clever, the program grew in size and memory usage.  As of right now, it's 99 KB and takes up roughly 30 MB RAM on 720p resolution.  I haven't really checked for memory leaks yet, and I'm sure they exist (beyond just the audio).  In reality, I'd prefer to clean up a lot of the code.  (And I found a few errors with memory management, so I need to track down where I went wrong.  I removed allocating memory for the time being and pushed everything onto the stack.)

    The other thing is, this code is ugly.  Towards the end, I just started taking a patchwork approach rather than keeping it clean.  I was originally hoping for modularity, but that got destroyed later on.  And I'd love to replace the pointers that are thrown liberally throughout the code with smart pointers.

    Unlike the other entries, I only have missiles for the gameplay.  I didn't include UFOs, airplanes, smart bombs, or warheads.  I just don't feel I had enough time.  Yes, there's still a couple weeks to go, but I'd prefer to clean up what I have than add new features.  And unfortunately, I was a bit shortsighted, which caused problems later on.  There are multiple places where the code is far more verbose than it needs to be, because I wasn't properly focused on the correct areas.  I wanted to make it scalable, and I focused on making the game a 1:1 ratio internally, yet displayed 16:9 to the user, which caused massive problems later on.  I ended up having to do math on pretty much every piece of graphics and game logic, whereas if I had just displayed it as 1:1, or handled the internals in 16:9, I could have shaved off a thousand lines of code.  And it also caused problems with hit detection, which is another reason I didn't bother adding in anything but missiles.
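    For readers wondering what that per-call math looks like: a minimal sketch of the kind of mapping involved, with hypothetical names (the entry's actual code is not shown), assuming logical coordinates in a unit square stretched onto a 16:9 viewport:

```cpp
// Hypothetical helper types -- not from the entry's actual source.
// Game logic lives in a 1.0 x 1.0 logical space; the display is 16:9,
// so every draw call and hit test has to pass through a transform
// like this. The two axes scale by different factors, which distorts
// distances and angles -- the root of the ellipse hit-test trouble.
struct Viewport {
    float width;   // e.g. 1280
    float height;  // e.g. 720
};

struct Point {
    float x;
    float y;
};

// Map a logical point in [0,1] x [0,1] onto screen pixels.
Point logicalToScreen(Point p, const Viewport& vp) {
    return { p.x * vp.width, p.y * vp.height };
}

// Inverse mapping, e.g. for converting a cursor position back
// into game-logic coordinates.
Point screenToLogical(Point p, const Viewport& vp) {
    return { p.x / vp.width, p.y / vp.height };
}
```

    Doing the internals in the display's own aspect ratio (or rendering 1:1) removes this layer entirely, which is the simplification the entry says would have saved around a thousand lines.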

    The hit detection was a mess.  I had everything mapped out.  The game was going to work whether a missile went 1 pixel a second, or 1000 pixels a nanosecond.  Calculating moving objects and collision with circles or boxes is easy.  Unfortunately, I was using ellipses.  And while there are formulas for that, I'll admit my eyes started to glaze over at the amount of math that would be required.  In the end, I decided to leave it buggy, and only detect if it was currently in a static ellipse, which is easy and fast enough to calculate.  I mean, I guess if the program freezes up, the user was going to lose a city/silo anyway, or lose it if the missile was traveling at light speed, but it's still a bug, and still annoys me, especially since everything else was calculated regardless of what the user sees. (*EDIT* Thinking about this more, the solution was right in front of me the entire time.  Just squish the world back to 1:1 and do the hit detection that way).
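    The "static ellipse" fallback described above is indeed cheap: a point (px, py) lies inside an axis-aligned ellipse centered at (cx, cy) with radii rx and ry exactly when ((px-cx)/rx)^2 + ((py-cy)/ry)^2 <= 1. A minimal sketch, with hypothetical names (not the entry's actual code):

```cpp
// Static point-in-ellipse test: cheap enough to run every frame.
// An axis-aligned ellipse centered at (cx, cy) with radii rx and ry
// contains (px, py) when ((px-cx)/rx)^2 + ((py-cy)/ry)^2 <= 1.
bool pointInEllipse(float px, float py,
                    float cx, float cy,
                    float rx, float ry) {
    float dx = (px - cx) / rx;
    float dy = (py - cy) / ry;
    return dx * dx + dy * dy <= 1.0f;
}
```

    Because this only answers "is the missile inside right now", a fast-moving missile can tunnel straight through the ellipse between two frames without ever testing positive, which is exactly the bug the entry accepts as a trade-off.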

    Controls:

    1, 2, and 3 control the missiles, and the arrow keys control the cursor.  Escape for the menu, and Enter for selection.  I've only tested this on Windows 10, as I'm pretty sure it requires those libraries.  It's a 64-bit executable.

    MCDemo.png

  • Blog Comments

    • Since I've given away so much of the spine of the Astral Invasion story that I wasn't originally meaning to, I'll add this as well.  Especially considering the Triumph songs I posted, and how perfect the final lines of this one are.  I'll also mention that this is all closely related to the Time of the Titans chess set in Armageddon Chess, and only someone who has gotten into the story spread across this whole blog, and really taken it in, will be likely to make much sense of all of this beyond what is apparent on the surface.  Although after reading just Space Hockey, and what I have given away about Astral Invasion in Space Hockey and since posting it, the trailer and theme songs of Fallen Angel Rising found in the Armageddon Chess blog post will have a lot more meaning to you.
    • That's really just another way of saying "they don't hire game designers in this business" which they've been insisting isn't true for 35 years.  They get downright offended by it.  But like I said in an earlier post, I've understood for a long time now that it is because they have a completely different definition of the term. I might, as a hobby after I give up on this, mess with something like GameMaker just to do something for myself.  That would be a very long road to making my games, and I don't have a long road anymore.  I will be 50 later this year, and I was born with a genetic condition that makes me already older than I should expect to live.  But my family has a history of living to an old age for someone with this problem.  So I probably only have 10 or 15 years left.  Making simple board-game like things in a generic editor is not going to get me to making PDU games in any kind of amount of time that I have left.  Really, I should have been making computer games in the early 1990s, the computer game industry has never liked board game designers right from the beginning.  I know, I was there.  So I was really just born into the wrong career at exactly the wrong time in history. And I do have skills to help other than just writing it.  I did all 30 levels of Sinistar: Unleashed across four levels of difficulties through the raw data files in under 3 months.  As soon as I have something to work with... "a game is never finished, someone wearing a suit eventually rips it from your hands and puts it on a shelf".  I would never be happy with it, or consider it to be "finished", there would always be more than I could possibly do before it shipped.  And finishing the story of any one game is a monumental task when there are 11 other games intricately woven through it as well. I'm not going to get to the PDU in any amount of time that I have playing with a generic editor.  
If those kinds of things had existed 20 years ago, I'm sure I would have done a lot with them, but in 2018 it's a little late for me for that.
    • In the business where I used to work, none of us was great at what we were doing, really. And again, times have changed. Today artists with some basic scripting skills create games, some of them very good. We are heading towards creating professional games just by clicking. If you really want to create computer games, you can. You could start with something simpler, like Game Maker. I have worked for 4 months on a problem that I'm still unable to solve. I'm not good at this stuff. There are a few people who finally developed seemingly good solutions for the problem after a decade of research - maybe. Academic experts, so I do not understand what they say in their scientific papers. I really don't like working on the problem, it's kind of boring, and most of the code I've already written is useless. But I must succeed - otherwise I'm totally stuck. And nobody will join me to help even if I claim my vision is worth it. I have to show it really works first, I guess. So, I really think you should do the same. Even if it's just for fun and still won't work after a year. Writing design documents is not enough. What you do is like going to a record company without a demo tape and not being willing to sing, compose, or play an instrument.
    • It's not my thing, I would never become truly good at it.  You need to be great at it to be of any help in your business.  I've worked with "AAA" programmers before.  I would never become good enough at it to be more of a help to them than a hindrance.  Someone good would be in an endless state of cleaning up my mess.  I wouldn't try to become an artist, either, because I have no talent for it.  I believe there are only two ways for a game designer to find a way into the computer game industry.  You either need to be a programmer or an artist, and then you can become a part of the committee.  Not really a game designer, but at the same time you will be "designing games". To be a true game designer, designing the games and creating the background/story, you need to be both a businessman and a designer.  This was as true in the hobbyist game industry as it is in yours, it was just a lot cheaper and easier to do with board games.  Often a one-man operation running out of a spare bedroom.  There is a little more to it than that if you want to make computer games, and either way you have to be two things if you want to be a game designer who is "creating their own art".  And that is my problem, I only do one thing.  I am as terrible a businessman as I am a programmer or artist.  Space Hockey is actually the fourth time that I have tried to start my own company to make games.  The only time I ever came close was when my father, for whom, unlike me, business is his thing, devoted a tiny bit of his time to help and almost did it.  But I won't ever pull that off on my own, as you can probably see from my two-post attempt, which is all I can think to do along those lines. I really am very good at what I do, but there really is only one thing that I do well. What the heck, another Astral Invasion Cindy song...
    • Thank you! A mobile export is optionally planned after the desktop version is finished. Actually, it should not be a big issue as I am using Game Maker to develop the game, which comes with a mobile export.