White Balancing Images

Now I realise this is stretching slightly outside the normal realms of gamedev, but while working on my texture tools I've turned my attention again to something that is often a problem for photographers: white balance. White balance is easy to correct when you have a RAW image file (before white balance has been applied), but I've always found it considerably more difficult to correct when all you have to work with is a JPG or a video stream. Hence the general advice to photographers to get white balance right in the camera (using a grey card, preset balance etc.). Of course the world is not ideal, and sometimes you end up with images that have a blue or yellow cast.

I've found the colour balance correction tools built into things like Photoshop and the GIMP to be pretty awful. Often when you pick a gray / white point you can very roughly correct things overall, but you get colour tinges in other areas of the photo, and it never looks as good as a correctly balanced photo.

[Images: Tungsten (blue cast - needs correcting); Neutral (correct, reference image); Attempt to colour correct in the GIMP using white and gray points (overly saturated result)]

I've worked through various naive approaches to balancing photos in the past. I started long ago with very basic multipliers on the sRGB 8 bit data. First mistake of course .. this is gamma compressed, and to do anything sensible with images you have to first convert them to linear with a higher bit depth. I now know better and use linear in nearly all my image manipulation.

Colour Spaces

The next step is colour spaces. I must admit I have found this very difficult to wrap my head around. I have used various other colour spaces like LAB and HSL in the past, and just used conversion functions without really knowing what they were doing. This past week I have finally got my head around the CIE XYZ colour space (I think!), and have functions to convert from linear sRGB to XYZ and back. Next I learned that in order to do chromatic adaptation (the technical term for white balance!) it is common to transform to yet another colour space called LMS, which is meant to more accurately represent the colour cone cells in the eye.

Testing this all out is luckily rather easy. All you need is a RAW photo: export it as a JPG with different white balance settings, then attempt to transform the 'wrong' JPG into the 'right' JPG. Usually I would do this myself, but there are some rather convenient ready-made images here:
https://en.wikipedia.org/wiki/Color_balance

So I've been using these to test, trying to convert the tungsten (blue cast) image to neutral. I've had various other ideas, but first wanted to try something very simple: get the photo back into the initial colour space / gamma, alter the white balance, then convert it back again. To do this I converted both the blue (tungsten) image and the reference (neutral) image to linear sRGB, then to XYZ, then to LMS. I found the average pixel colour in both images, then found the multiplier that would convert the average colour from the blue image to the neutral one. Then I applied this multiplier to each pixel of the blue image (in LMS space), and finally converted back to sRGB for viewing.
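For reference, here is roughly what this multiplier approach boils down to in code. This is a minimal C++ sketch, assuming the commonly published D65 sRGB-to-XYZ matrix and the Bradford XYZ-to-LMS matrix; an actual RAW converter may well use different matrices and working spaces, which is exactly the uncertainty discussed below.

#include <cmath>
#include <vector>

struct Vec3 { float v[3]; };
struct Mat3 { float m[9]; };

static Vec3 mul(const Mat3 &m, const Vec3 &a)
{
    Vec3 r;
    for (int i = 0; i < 3; i++)
        r.v[i] = m.m[i*3+0]*a.v[0] + m.m[i*3+1]*a.v[1] + m.m[i*3+2]*a.v[2];
    return r;
}

// General 3x3 inverse (cofactor method), used for the trip back to sRGB.
static Mat3 invert(const Mat3 &mm)
{
    const float *a = mm.m;
    float det = a[0]*(a[4]*a[8]-a[5]*a[7]) - a[1]*(a[3]*a[8]-a[5]*a[6]) + a[2]*(a[3]*a[7]-a[4]*a[6]);
    float id = 1.0f / det;
    Mat3 r;
    r.m[0] =  (a[4]*a[8]-a[5]*a[7])*id; r.m[1] = -(a[1]*a[8]-a[2]*a[7])*id; r.m[2] =  (a[1]*a[5]-a[2]*a[4])*id;
    r.m[3] = -(a[3]*a[8]-a[5]*a[6])*id; r.m[4] =  (a[0]*a[8]-a[2]*a[6])*id; r.m[5] = -(a[0]*a[5]-a[2]*a[3])*id;
    r.m[6] =  (a[3]*a[7]-a[4]*a[6])*id; r.m[7] = -(a[0]*a[7]-a[1]*a[6])*id; r.m[8] =  (a[0]*a[4]-a[1]*a[3])*id;
    return r;
}

// sRGB transfer function (values 0..1).
static float srgb_to_lin(float s) { return (s <= 0.04045f) ? s / 12.92f : std::pow((s + 0.055f) / 1.055f, 2.4f); }
static float lin_to_srgb(float l) { return (l <= 0.0031308f) ? l * 12.92f : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f; }

static const Mat3 SRGB_TO_XYZ = {{ 0.4124f, 0.3576f, 0.1805f,
                                   0.2126f, 0.7152f, 0.0722f,
                                   0.0193f, 0.1192f, 0.9505f }};
static const Mat3 XYZ_TO_LMS  = {{ 0.8951f,  0.2664f, -0.1614f,    // Bradford
                                  -0.7502f,  1.7135f,  0.0367f,
                                   0.0389f, -0.0685f,  1.0296f }};

static Vec3 srgb_to_lms(const Vec3 &c)
{
    Vec3 lin = {{ srgb_to_lin(c.v[0]), srgb_to_lin(c.v[1]), srgb_to_lin(c.v[2]) }};
    return mul(XYZ_TO_LMS, mul(SRGB_TO_XYZ, lin));
}

static Vec3 lms_to_srgb(const Vec3 &c)
{
    Vec3 lin = mul(invert(SRGB_TO_XYZ), mul(invert(XYZ_TO_LMS), c));
    return {{ lin_to_srgb(lin.v[0]), lin_to_srgb(lin.v[1]), lin_to_srgb(lin.v[2]) }};
}

// Scale each LMS channel so the average colour of the cast image maps onto the
// average colour of the reference image, then apply that gain to every pixel.
void correct_cast(std::vector<Vec3> &cast_img, const std::vector<Vec3> &ref_img)
{
    Vec3 avg_c = {{0, 0, 0}}, avg_r = {{0, 0, 0}};
    for (const Vec3 &p : cast_img) { Vec3 l = srgb_to_lms(p); for (int i = 0; i < 3; i++) avg_c.v[i] += l.v[i]; }
    for (const Vec3 &p : ref_img)  { Vec3 l = srgb_to_lms(p); for (int i = 0; i < 3; i++) avg_r.v[i] += l.v[i]; }

    Vec3 gain;
    for (int i = 0; i < 3; i++)
        gain.v[i] = (avg_r.v[i] / (float)ref_img.size()) / (avg_c.v[i] / (float)cast_img.size());

    for (Vec3 &p : cast_img)
    {
        Vec3 l = srgb_to_lms(p);
        for (int i = 0; i < 3; i++) l.v[i] *= gain.v[i];
        p = lms_to_srgb(l);
    }
}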
Results

[Images: LMS colour space; Linear RGB colour space]

The results were slightly better doing this process in LMS space versus linear RGB space. The background turned a better gray, rather than still having a blue tinge as with the RGB space. The result looks pleasing to the eye, but you can still tell something is 'off', and the white point on the colour checker chart in the photo is visibly off. As well as the photo, I also superimpose a plot of the RGB values (converted value against expected value) to show how accurate the process is. A perfect result would give a straight line plot. Clearly there is quite a bit of colour variation not corrected for in the process. My current thinking is that there are 2 main things going on here:

1. To do the conversion, the colour space / gamma should exactly match that used at the same stage in the RAW conversion. Maybe it doesn't. Is the RAW conversion done in linear camera space rather than any standard colour space? I don't know. I've attempted to dig into some open source RAW converters to get this information (DCRaw and RawTherapee) but not had much luck.

2. To get things back to the state when the white balance was applied in the RAW converter, you not only have to reverse colour spaces, you have to reverse any other modifications that were made to the image (picture styles, possibly sharpening etc.). This is very unlikely to be possible, so this technique may not be able to produce a perfect result.

Aside from the simple idea of applying multipliers, my other idea is essentially a 3D lookup table of colour conversions. If (and that is a big if) there is a 1:1 mapping of input colours from the blue image to reference colours in the neutral image, it should be possible to simply look up the conversion needed. You could do this by going through the blue image until you found a matching pixel, then finding the corresponding pixel in the neutral image and using that. In theory this should produce a perfect result on the test image, by definition (if the 1:1 mapping holds). I should say at this point the intended use is that if you can find a mapping for a specific camera to get from one white balance setting to another, you can then apply this to correct images that were taken BEFORE the mapping was found. So if you are attempting to convert a different photo, it is likely that there will be pixels that do not match the reference images. So some kind of interpolation, and perhaps a lookup table, would be needed, unless you were okay with a dithered result.

At this point I'm going to try the lookup table approach. I suspect it may give better results, but not perfect, because I do fear that picture styles / sharpening may have made a perfect correction impossible, much like the entropy-reversing idea of putting a shattered teacup back together, from Stephen Hawking and our old friend Hannibal Lecter.

Ideas?

Anyway, this blog post was originally meant to be a question for the forum, but grew too large. So my question would be whether any of you guys have experience in this, and advice or ideas to offer? Through my research into this whole colour thing I have felt like a real newbie, and it is quite a big field with a number of people working on it, so I'm sure there must be some established ideas for this kind of thing.

lawnjelly


Node Editor

Just a quick update to show I've just started getting the GUI working for the texture tools. The node editor is still a work in progress but it seems to do the job. I still haven't done any significant work on the methods yet, but that is the fun stuff .. I've been getting the boring interface and framework working first. There are just some simple test methods as yet; I will be adding more complex blending tech etc. soon. It is now pretty easy to add new methods (like plugins) without worrying about the rest of the program.

lawnjelly


Texture tools

After attempting to texture some models recently using projection painting in 3d paint, the new healing brush is proving fantastic at healing up those edges between projections, but I must admit I get very frustrated trying to find suitable reference images. Part of the problem, I have decided, is that I'd often like to have larger, more homogeneous areas of texture to clone from. Given that I have a reasonable healing implementation working, it struck me I should be able to write some algorithms to do this little job for me, to provide better source material for painting.

I thought about putting this ability directly into 3d paint, however it seemed to make more sense to do a separate small utility app for this kind of thing, which might be useful to more people. So, eager not to repeat the major mistake I made with 3d paint, that of under-engineering the initial program, I decided to make a positive effort and spend a few days building a solid backbone for the texturing program, so it will be easy to maintain and add to in the future.

Instead of making a Photoshop-like affair, this will be a very focused app. At the moment I'm thinking in terms of a node based editor with some input textures and methods, producing intermediate and final textures for export. I'm planning for you to be able to move the nodes in the UI, and assign inputs, outputs and parameters. Although the UI is not yet operational, the framework is getting there and I've implemented a first test method.

I decided one useful first pass, before other methods, would be to equalize the colours across an image. Here is an example run on a skin photo: left is before, right is after. Bland and boring on the right, but that is what I am going for; it should be easier to clone etc.

The way the method works is it first finds the average colour of the entire image, then gaussian blurs the image. For each pixel it then finds the difference between the blurred colour and the average colour, and adds this difference (with a multiplier) to the original pixel colour. This has the effect of reducing local colour contrast, or increasing it, depending on the sign of the multiplier.

Anyway, obviously loads more methods to come, maybe some using variations of the healing technique from Georgiev's paper. All colours are converted to floats, and can be converted to linear, and to HSL or LAB colour spaces.
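As a rough illustration of the method (not the actual tool code - the Image type and the gaussian_blur() helper here are placeholders), the equalize pass might look like this:

#include <cstddef>
#include <vector>

struct Colour { float r, g, b; };

struct Image
{
    int width = 0, height = 0;
    std::vector<Colour> pixels;   // float RGB, width * height
};

// Placeholder: any separable gaussian blur will do.
Image gaussian_blur(const Image &src, float radius);

// Reduce (or exaggerate) local colour variation. With a negative multiplier each pixel
// is pushed away from its local (blurred) colour towards the global average, flattening
// out patchy areas; a positive multiplier does the opposite.
Image equalize_colours(const Image &src, float blur_radius, float multiplier)
{
    // 1. Average colour of the whole image.
    Colour avg = { 0, 0, 0 };
    for (const Colour &c : src.pixels) { avg.r += c.r; avg.g += c.g; avg.b += c.b; }
    const float n = (float)src.pixels.size();
    avg.r /= n; avg.g /= n; avg.b /= n;

    // 2. A blurred copy gives the 'local' colour at each pixel.
    Image blurred = gaussian_blur(src, blur_radius);

    // 3. Per pixel: difference between local and average colour, scaled and added back.
    Image out = src;
    for (std::size_t i = 0; i < src.pixels.size(); i++)
    {
        out.pixels[i].r = src.pixels[i].r + (blurred.pixels[i].r - avg.r) * multiplier;
        out.pixels[i].g = src.pixels[i].g + (blurred.pixels[i].g - avg.g) * multiplier;
        out.pixels[i].b = src.pixels[i].b + (blurred.pixels[i].b - avg.b) * multiplier;
    }
    return out;
}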

lawnjelly

Project Management

Something I feel doesn't get discussed enough in the world of gamedev, and something that in general we (more often than not) tend to be incredibly bad at, is project management. This isn't something I'm an expert at myself, I get it wrong plenty of times, but at least I'd like to stimulate a little thought on the subject. One of the most crucial aspects of project management is the interplay between project scope, timescales and scheduling. It doesn't matter whether you are managing an AAA game with a multimillion dollar budget, a one-man bedroom indie game or your first game, this all affects you.

Scale & Duration

The first point is that as you increase the scale of a project, the time needed to complete it will increase. You would think that there would be some kind of linear relationship between the scale and the time needed, but in practice there can be a tendency for the time needed to increase almost exponentially with scale. Real life observations show that it can be incredibly difficult to predict the time taken to complete a complex task. If we were, for example, asking how long it would take a man to harvest a small field of potatoes by hand, we could find out how long it took him to do 1 metre squared, multiply it by the size of the field, leave some time for breaks etc. and come up with some rough estimate. Unfortunately developing software is not like harvesting a field. Some tasks can be made more like this (artwork production for instance), but things like programming are absolutely nothing like this model. You can add more artists, but if you add more programmers expecting a productivity increase, expect a shock (see The Mythical Man-Month).

Dunning-Kruger

Couple this difficulty in making time estimates with something called the Dunning-Kruger effect - roughly, that those with the least knowledge or skill in an area tend to most overestimate their own ability (see Wikipedia for the formal definition). This blog post is probably a great example of Dunning-Kruger! Those people who have little knowledge of a task will tend to be very bad at estimating how long that task will take to complete. Unfortunately, the people typically responsible for making time estimates are exactly those people who tend to have little knowledge of the subject area. This is often management, and / or 'yes men'. (Sycophants / yes men (or yes people) are those types that follow the strategy of agreeing with everything their boss says, or providing extremely optimistic estimates. This is a very widely used strategy for promotion, constantly reassuring the superior and providing short term appreciation; then when things inevitably go wrong, it is usually easy to blame an external event or third party.)

Other types of people who often have little knowledge of a task are beginners, and those choosing to work on something they haven't done before (most of us). So essentially most of us are prone to the Dunning-Kruger effect. With beginners this almost always shows itself in new posts to forums introducing themselves as knowing nothing about programming or artwork, but expecting to complete a game that would require a team of 50+ experienced staff and years of work, in a couple of weeks. Presumably they expect there is some app for doing this, and they will just press a 'make game' button after selecting the right parameters. It is easy to see the error in beginners, but this same thing tends to happen to all of us, and we have to actively fight against the tendency.

Real World

So how long do things really take? Well, a wise man once told me, as a rule of thumb, if you are familiar with the subject area, it will on the whole take at least 3x as long as your estimate. What does this mean? Well, if you are in charge of developing a typical 18 month commercial game, you should be aiming for something you believe you can complete in 6 months. No, that's not a prototype, that's the whole thing. Once all the things have gone wrong (and they will) it will easily expand to take the full allotted time. Best case, if it is finished ahead of time, you have extra time for testing and polish. If you are an indie and you want to complete a game over a 6 month schedule, you should be aiming for something you can finish in 2 months. If you are a beginner and you are aiming for something you can finish in 2 weeks, you should be aiming for something you can finish in 1-2 days. Beginners are the worst at estimating, and are usually wildly off, and typically don't have the skills to complete what they had in mind, so often can't finish their original project until months or years later.

How can you battle this effect? The best advice I have heard to battle these problems is to aim small. This applies to everyone, but 100x more to beginners. There are a billion unfinished shelved projects out there in the world. Don't be one of those people. Choose something you *can* realistically complete, something 3x to 10x smaller than your original vision (depending on your skill level). If you are a beginner this means start by making tic-tac-toe. Make Pac-Man, make tower defence, gradually increase your skill set so you can make better games (and better doesn't always mean bigger). If you are learning OpenGL / DirectX, by all means learn to make your own engine, but be under no illusions that you will make something that others will want to use. Commercial engines now tend to be made by small armies of experienced devs, and have to work across a huge variety of hardware, with testing and all kinds of considerations you haven't even thought of. If you are an indie and you dream of making the next amazing RPG to rival the big companies, have a think: how much artwork, how much sound, how many voiceovers, how many scripts, how many levels do you need to create? Aside from the programming, creating the assets for such games is a major undertaking. Have a look at the credits list of a game you would wish to emulate. If you are a company, with possible funding, be very realistic about what you can create in your allotted 18 months. Aim to be leveraging and reusing your own and others' technology, because creating everything from scratch every time is just not possible. Aim for that 6 month timescale, get estimates from all the tech people (not just from 'yes men'), and use them all to make a balanced decision on what you think is realistic.

lawnjelly


Tower Defence - Post Mortem

Background

It's been a few days since I put the latest alpha of my entry for the Tower Defence challenge on itch.io and my project page: https://lawnjelly.itch.io/ramsbottom

I think I've covered the requirements for the challenge, and made the game a bit above just the requirements so it is a bit more fun to play and has some longevity. The reason I entered this time is because I'd been watching the previous challenges with a little envy, and had been waiting for one that seemed simple enough (I think the last one I looked at had multiplayer, and I knew that could be a bit of a can of worms). My usual low level C++ / OpenGL approach would probably be overkill for a small, short-timescale game, so I decided it would be a good opportunity to try out the Unity engine, which a lot of people are using currently.

What went right

1. Using Unity. Rapid development, well suited to this type of small game.

2. Attempting to get as much of the challenge completed asap, then leaving further time for more features / polish. I finished much of the base functionality in the first week, then spent time on and off in the next few weeks just making it better. There are lots of advantages to getting something 'finished' up front, and this is a development model I am trying to move towards. You can 'call time' at any point and still have a functional product. Unforeseen events always seem to appear and limit the time you can spend on a project. This approach guarantees that even in this situation you will still have a 'product' rather than a half-done version of your 'glorious vision'.

3. Using the asset store, not building all the models myself, and using sites such as freesound for the sound, and creative commons music. For small learning games such as this it didn't make sense for me to make the assets. I know it takes me 2/3 of the time to make artwork etc., and while I am improving at it, I am better at (and enjoy) programming more than making artwork.

4. Finding some good tutorials to learn Unity (then throwing out their approaches!). There are some great tutorials out there (Brackeys for instance), and these are good for learning Unity specific stuff, but in some cases I could instantly see better ways of doing things. I put this down to many tutorials being pitched at total beginners, who are happy to get anything on the screen. But, for example, using the Unity editor to lay out levels just seemed ridiculous and limiting.

What went wrong

1. C#. I hate it, absolute abomination of a language. I spent more time than should ever be necessary screaming at the damn thing; it makes Visual Basic look like Shakespeare. I could write a whole blog post just on the things about it that make me seethe, but yeah, if I could avoid ever having to use it again, that would be great.

2. MonoDevelop. Yeah, see point 1. Pretty bad. I might have to see if I can get another editor working if I use Unity again. I hear VS Code may be worth a go (I'm on Linux). MonoDevelop seemed really keen to reformat my code in stupid ways I couldn't turn off, and kept trying to autocomplete words incorrectly (which I also couldn't turn off).

3. Lack of debugging support. This may have been due to my setup; it might not be straightforward to get debugging working on Linux (I'm assuming that with Unity it is possible to do step by step debugging?). This meant huge problems debugging anything but the simplest code (I had to resort to lots of Debug.Log statements).

4. Unity editor. I'm not really a drag and drop sort of guy. I tried to avoid having half the game 'code' being a particular setup in the drag and drop editor. I'm not even sure how to back that stuff up; I'm sure if I'd had a crash I could have lost the lot. Come to think of it, did I have to back up all the assets too? With all that .meta stuff? I don't know. At least with code you can zip it up small and keep lots of backups. There should be an option in the menu to save your entire project in a compressed form without all the bloated assets etc., just the stuff that is a pain to lose.

5. Unity build times. I had massive problems with excessive build times taking hours, particularly when changing platform; it kept baking lightmaps (or maybe something with shaders?) when as far as I knew I had tried to turn them off. Eventually, more by luck than judgement, I found that deleting some skydome assets I had imported and deleting (rather than turning off) an extra light finally cured the problem. Far too little debugging info is given out by the build tool logs to let you know WHY your builds are taking hours. Googling reveals I was not the only one with this problem. Don't just tell me 'baking lightmaps', tell me which light is causing this, which objects, etc.

Conclusion

Overall I found the challenge very worthwhile. There are several of us working on it, and bouncing ideas around and spurring each other on works very well. Also a little hint of friendly competition is good too! I managed to get a fair basic grounding in Unity, and have a better idea of whether it would be worthwhile using in any future projects. I may use it for a couple more small games, or evaluate some more current engines (Unreal, or perhaps something more code orientated).

Doing such small projects is also great for experiencing and practising the whole development cycle, including release and marketing. This can give a much better perspective on how much time you should invest in different stages, and improve your ability to schedule / finish larger projects. It is something I would recommend to beginners through to advanced developers.

lawnjelly


Tower Defence - nearing the finish line

After an initial burst of energy last month, I knew I wouldn't have as much time available for working on the Tower Defence challenge up until the June 30th finish. However I did manage to get much of it working already, and now I am mostly at the 'finishing up' stage.

Changes

More music tracks. I have also added some more game modes. The normal mode is a scripted set of 12 levels; random generates 20 random levels increasing in difficulty; and 'budget' is a shorter game of 6 random levels, but you are given an estimated amount of money to beat all the enemies at the beginning and you can't win any more cash, so you have to make it last.

Cross platform

I have made some effort to make the game cross platform. It works well on Linux, Windows and Android now. To that end there is a certain amount of 'designing for the lowest common denominator'. I had to scale back the particle effects so that it would work well on tablets and mobile phones. If I was spending more time on it I would scale the number of particles according to hardware.

Graphics Quality Setting

Inside the game pause menu there is now a performance slider where you can trade off graphics quality against speed. It sets the Unity GraphicsQuality setting. This works well, although I did encounter a snag on Android: the higher quality settings on some devices were causing complete graphical corruption, possibly when trying to switch anti-aliasing. While I could try and blame this on Unity, I do know that devices can be bad at reporting their capabilities, and you often have to just 'try it and see what happens'. What was, however, a design oversight is that when you set a GraphicsQuality that causes corruption, while you can quit out of the game from the OS, Unity actually SAVES your choice of graphics quality, so the next time you run it it is also corrupted. The only way around this (on Android) was to go into App settings and delete the cached app data. Clearly this was not ideal for users. As a bodge compromise, when you originally start the game it reads the default GraphicsQuality, persists this separately, then resets to that default every time you run the game. This means any changes to quality are lost on each startup, which is annoying, but better than the graphical corruption. Another alternative would be to get the user to click on an 'OK' box within 15 seconds to confirm that graphical changes worked, but I did not have time for this.

Gamepad / Keyboard input

While a lot of tasks have gone very well, I spent a lot of yesterday and today getting keyboard and gamepad input working. Most people will play the game using mouse or touchscreen, and it works best with these. However my Android TV box has no touchscreen, and I find it annoying how few Android game developers support other means of input. I had previously managed to get gamepad input working on Android using the Android SDK via Java / NDK, however configuring input in Unity seems rather overcomplicated. I read several recommendations that the Unity input system was poor, and that it was better to buy the InControl or Rewired assets to do this. I tried the free version of InControl but couldn't get it to recognise my gamepad. So I went back to the built-in Unity InputManager and finally managed to (barely) get it working. The selection of joystick axes seems non-standardized, so I have no idea whether my setup will work with other joysticks / gamepads, not having the time or facilities for further testing.

As well as simply getting a gamepad working, there is also the issue that the game was designed for mouse. I had hoped that navigating the menus would be a simple affair, but it took me a while to work out how to select a default menu item when entering each menu screen, without which you could not change the selection. Many people I googled had had similar problems; I am not clear on the 'correct' way to do it. In addition many of the UI items did not show a large enough change in colour to indicate that they were highlighted, and I had to change this manually.

Gameplay with keyboard / gamepad

The biggest hurdle was how to place towers in a game designed for you to pick a spot with the mouse. I decided to have a cursor which you can move up, down, left and right across the map. This was a bit fiddly to get working with the input, and is still not perfect, but does allow you to play the game, albeit not as efficiently as with a mouse / touchscreen. Finally I had to cover tower selection via keyboard / gamepad. To do this I simply had a button cycle through the towers available, and had to put in an indicator arrow to show which was currently selected. I have now tried it and it even works well on my Android TV box with gamepad, which has surprised even myself. Performance has been very good, even on low power devices. That is probably largely down to careful design, due to me being already familiar with performance bottlenecks on this hardware (fill rate is usually the biggest issue), and using mostly simple mobile diffuse shaders.

Future

I suspect from now until release I will mostly be concerned with testing. The input in particular has introduced all kinds of subtle potential bugs from different combinations of input devices. I would ideally like a high score table, but handling virtual keyboard input on different platforms puts me off, unless I use a simple initial-entry system.

lawnjelly


Tower Defence second Alpha Version

This is my second alpha version for my gamedev Tower Defence challenge entry. Fingers crossed the links will work.

Linux64 (40 megs) https://www.dropbox.com/s/jfzzh3rluzd6a3h/TowerD_012_Linux_x86_64.zip?dl=0
Win32 (32 megs) https://www.dropbox.com/s/8uj1zo8izp0weg3/TowerD_012_win32.zip?dl=0
Android (49 megs) https://www.dropbox.com/s/60rnljff8al5ugw/TowerD_012.apk?dl=0

I did miss out on several days because my internet got taken out by a massive lightning storm, and the Unity editor refused to run without an internet connection... However, new features:

3 new enemies .. bird, spider and boss
2 new towers .. ballista, plasma tower
Each level is now scripted, each wave and enemy
Some terrain improvements
Projectile models (bullet, cannonball, plasma, bolt)
Tooltips on tower buttons
Lots of changes to game rules / balancing
Templates for each tower and each actor type
Player has 3 lives

Tip: I don't think I gave enough cash for the first level in this build, but you can usually win by building 3 of the first cheap tower and 1 of the second tower clumped together in a group; that way the towers can defend each other and they aren't too expensive. Once you get past the first level the cash reward for completing the level is quite generous.

Android Build - edit

After some considerable effort, I finally got an Android build working. The build kept hanging while trying to bake lightmaps (which I had been trying to turn off), but after several hours of experimentation and changing options randomly I finally got it working. The APK size is nearly 50 megs, no idea why it's larger than even the Linux build. But it does run, choppy on my old tablet when there's much action happening, but it shows potential. I think the particle effects will be slowing it down so I'll add options to turn them down. Also, with a tablet touchscreen there is no preview of whether you can build on a spot, because there is no 'mousemove' equivalent, only a touch, which is like a click to build. [Also I'll have to scale up the tower selection buttons on mobile because it's easy to click to build instead of hitting the button when you don't have a mouse.] FIXED

lawnjelly


Alpha Version

In the spirit of release early and often, I have a playable alpha version of my challenge entry. I am not quite sure where is reliable to upload to, so I have been trying dropbox, which I've had issues with in the past. Fingers crossed these links might work; please let me know if the links work / it installs and runs. The zip file is about 30-40 megs, doesn't require any installation, and is typical Unity fare.

Linux 64 bit: https://www.dropbox.com/s/fmjzmi249a3otph/TowerD_010_Linux_x86_64.zip?dl=0
Windows 32 bit: https://www.dropbox.com/s/i7bxe1pub56rpc4/TowerD_010_win32.zip?dl=0

I've been developing on Linux but just tested the Windows version on my ancient laptop and it seems to work. Very much a test version; I need to put in credits for things like the music and assets (need to work out how to do scrolling credits).

Instructions

When you start, choose new game. You start with a certain amount of cash, and enemies start appearing in waves. You must build towers to destroy them before they destroy your base. Click one of the 3 tower types to select it for building, then click on a location that comes up gray to build. Each tower will cost you money to build and to repair. You get cash when you kill enemies and on completing a level. The enemies will move down the path towards your base but will also attack towers. If a tower is destroyed you can click on it to repair it. The enemies get so close to towers when attacking that the towers cannot defend themselves, so you often have to build towers in pairs (this was more by accident than design lol).

Cameras

There are currently 3 camera types, and you can toggle between them by clicking the magnifying glass icon. Overview shows the whole map from above, area mode tries to show the area where the action is taking place, and follow cam will follow the enemy of your choosing close up. You can select which enemy to follow by clicking close to it on the path.

The path the enemies follow is darker than the rest of the terrain. You can't build on the path, or on squares that have buildings or other props on them (these will indicate red). This means you have to be strategic about where to build towers. There's still quite a bit I intend to add, like multiple lives, a high score table, balancing the difficulty as you progress, and more towers / enemies.

lawnjelly


Tower Defence Progress

After a few days away from home I've done a bit more on the game. I must admit I'm kind of looking forward to wrapping up and working on the next thing, so am thinking more in terms of what I need to finish.

The towers now have separate logic and firing / sounds. There are just 3 so far but I'll probably add another 2 or 3. I don't know if I have the energy to put in an upgrade path for tower technology .. maybe if people play it lol.

The enemies now attack towers as well as your base. The AI for this is pretty simple, and they have no collision detection, so they just go for the tower that attacked them. Maybe I will put in collision detection, but it potentially opens up a can of worms (avoiding stuff) so I may leave it on the todo list. Towers have health, and once they are destroyed they stop working and have smoke coming out. You can repair them by clicking; this costs money in proportion to how damaged they are.

The building placement is a bit better now; there is a zone around houses where other buildings will not be placed, so they aren't too close together. The buildings and props serve no function as yet except that they prevent you building towers on that spot, so you have to be a little strategic about where you build.

As I'm a little worried the Unity terrain is not practical, both in terms of installation size and performance, I've just today put in some procedural texturing of the ground block underneath the play area, which I am using as an alternative to Unity terrain. It looks okay now. After some research into how this was possible, I did it with just software texture splatting, precreating the texture on level load. On the terrain front I plan to vary this and the buildings / trees etc. as you progress through the levels, so it doesn't all look the same.

I should probably also put in a high score table, as it feels a bit of a let down when you get a long way and then fail a level. But that depends how easy it is to put in a keyboard to type in a name, as it might not be played on a desktop. Actually I've no idea if it will work on other platforms, but I'm hoping Unity will handle most of that automagically. The controls are pretty tablet friendly so far.

A big issue to confront will be installation size. I'm not sure yet what this depends on, but it seems ridiculous from the test builds I've done (187 megs last time). Hopefully I can get some kind of build log to find out what is being included in the assets, and strip out a load of unneeded stuff and oversize textures etc.
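For what it's worth, software splatting of this kind is just a per-texel weighted blend of a few ground textures, driven by a weight (splat) map, done once on level load. A rough C++ sketch, with made-up texture and weight types rather than the actual game code:

#include <cstdint>
#include <vector>

// A simple RGBA8 texture; all layers assumed the same size for simplicity.
struct Texture
{
    int width = 0, height = 0;
    std::vector<uint8_t> rgba;   // width * height * 4
};

// Blend 'layers' (e.g. grass, dirt, path) into one ground texture using per-texel
// weights. weights[l][i] is the contribution of layer l at texel i, and the weights
// for each texel are assumed to sum to 1.
Texture splat(const std::vector<Texture> &layers,
              const std::vector<std::vector<float>> &weights)
{
    Texture out = layers[0];                       // copy dimensions
    const int num_texels = out.width * out.height;

    for (int i = 0; i < num_texels; i++)
    {
        float accum[4] = { 0, 0, 0, 0 };
        for (size_t l = 0; l < layers.size(); l++)
        {
            float w = weights[l][i];
            for (int c = 0; c < 4; c++)
                accum[c] += w * layers[l].rgba[i * 4 + c];
        }
        for (int c = 0; c < 4; c++)
            out.rgba[i * 4 + c] = (uint8_t)(accum[c] + 0.5f);
    }
    return out;
}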
 

lawnjelly


Tower Defence video 2

Just posting this quickly as I will be off for 5 days or so and unable to access a PC / work on the game, so a little update.

Got sound working, although it needs some tweaking. I need variations for the sounds, but the programming side is all done. I have a sound manager that pools sound sources; I don't know if that is more efficient than putting sound sources on all the gameobjects. I also figured out how to put animation events on the animations to play sounds in sync (footsteps, sword etc.).

Made a couple more quick towers in blender, although there is no separate logic for the towers yet, they are all firing the same bullets etc. Changed from using my native huts to some medieval assets from the Unity asset store. As these houses are bigger than one grid square there is now a bit more complex logic for placing them, rotating them until they fit on the map. Also the waypoints now lead to the front of the big building which will be your base (I might change the building model though).

And added some particle effects. Made a pool system for these, and got the effects themselves again from the asset store. There are small explosions, smoke (for your base when it is being hit), flames, muzzle flashes, and blood (although this is my old test particle system I made). Did a little tweaking to the area camera to make sure more of the relevant action was 'in frame'.

Although all the towers are firing the same so far, I will have them fire different projectiles at different speeds / damage / range. And maybe have some of the enemies attack the towers so you have to repair them. I've got a feeling play balancing might be a bit time consuming at the end, making it not too hard or too easy, and having it get increasingly difficult as you progress...

lawnjelly


Tower Defence first video

Finally made a video: This doesn't have the terrain showing; I figured out it is quicker to make a build with terrain switched off, and it increases the filesize quite a bit, so I might drop the terrain, not sure yet. I put in support yesterday for more than one enemy type; I have got a third type but haven't put it in yet. These will have different strengths / health / speed.

I also spent ages debugging yesterday from a nasty C# references bug, the first time I've had to do any major debugging, and it was difficult because I only have Debug.Log statements to rely on; as yet I have no debugging in MonoDevelop. The reason for the bug was that I'm using pooling for my game objects, so as not to keep newing and creating Unity objects; they are just reused, with position and visibility switched. So I have an intermediate 'reference' (another reference!) actor which stores which pooled actor of each type is in an actor slot. This makes the adding and deleting actor code slightly more complex. Anyway, when deleting an array element, I switched the last array element with the one to be deleted, then decremented the count. However, in C# the = operator does not copy the data like in C++, it copies the reference, so all kinds of unpredictable behaviour was resulting. Anyway, bug solved, crisis averted!

I also put in some basic auto cameras. They work pretty well. There is an overview of the whole board, which does not change, a follow cam which zooms in on a particular actor, and an area cam, which treats all active actors as an area to focus on. The follow cam actually follows a point just ahead of the actor, because there is a delay from the smoothing. The area cam needs a little tweaking to get better.

Next I need the other enemy type, and different tower types. And I need to put in a special big building or something for your base, maybe with some particle effects to show it being destroyed.
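The swap-with-last removal itself is a standard trick; the gotcha was purely that C# assignment copies references for class types rather than the underlying data. A minimal C++ sketch of the pattern (illustrative only, not the actual game code):

#include <vector>

struct ActorSlot
{
    int  pooled_actor_id = -1;   // which pooled actor (of its type) occupies this slot
    int  actor_type      = 0;
    bool active          = false;
};

struct ActorList
{
    std::vector<ActorSlot> slots;
    int count = 0;               // number of live slots at the front of the vector

    // Remove slot n by overwriting it with the last live slot, then shrinking the count.
    // Order is not preserved, but removal is O(1).
    void remove(int n)
    {
        count--;
        slots[n] = slots[count];   // in C++ this copies the slot's data;
                                   // in C# assigning a class instance would only copy the
                                   // reference, which is exactly the bug described above
                                   // (a struct, or an explicit field-by-field copy, avoids it)
        slots[count] = ActorSlot{};
    }
};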

lawnjelly


Tower Defence - Day 7

I actually got to spend more time on the tower defence game today.

Things I got done:
In game UI - cash, enemies remaining and player base healthbar
Cash system, paying for towers, bonus cash on completing a level and killing enemies
Waves of enemies (very rudimentary)
Main menu and options menu, and changing from game to menu scenes
Animations on completing a level, or having your base destroyed

It is all coming together now for the most basic version.

Main things I still have to add:
Different enemies, wave variations
Different towers
Upgrading and repairing towers
Sound

But the basic version is almost completely playable now; I just have a small bug where it doesn't preserve cash between levels. The healthbars are also showing through a couple of the animations, but I'll just hang them off an empty node tomorrow and cull them. I am quite pleased with what I have achieved in a week of part timing, with most of that spent watching tutorials and learning. Much of this I learned / copied from the Brackeys tutorials in terms of the basics of using Unity (https://www.youtube.com/user/Brackeys/videos), including some of the user interface stuff today. I can't wait to get some funky cameras going and perhaps do some videos.

lawnjelly


Tower Defence Day 6

Just a quick one, and not a real day, as I only got to spend an hour or two in the morning yesterday before another day trip. In fact none of these 'days' are really days of work, just on and off playing with the game in between other tasks.

I changed from the overcomplicated paths to just generating some random waypoints, and having manhattan walks in between them. In order to prevent paths crossing, I simply make sure each random waypoint is further along the y coord towards the destination. This seems to work fine for now; I will refine it at a later stage.

I tried out the Unity terrain system in order to make the environment around the play area a bit more interesting. I figured out how the asset store worked, and have decided to replace my old native model with some free character assets. Using free stuff should be fine as this is just a test game anyway. And I'm kind of hoping that if these models are rigged right then I might have more luck getting ragdoll working.

The terrain thing I'm not sure about. It was kind of quick, but I've somehow broken the shadows (will have to investigate). And I did a first 'build' of the game and it came out at something ridiculous like 82 megs, which may have been due to the terrain. This is one of the issues I have with things like Unity: it's very easy with a few mouse clicks to end up with something completely unoptimal for rendering, and hugely overbloated. Couple this with the fact that many people have powerful development PCs, and it gives them a false sense of what is possible. Of course this was just a first play with the terrain to evaluate it, not sure I will use it. It would be nice with some gaps for lakes. One thing is that to make the levels different I'd probably have to figure out how to alter the terrain procedurally, so suddenly something that was an added 'extra' can potentially become a time sink... anyway it is not a necessary element so I won't spend too long on it.

Edit: Got the shadows working again, it was just a shadow distance setting in the quality section.
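The waypoint / manhattan walk generation described above might look roughly like this - a simplified sketch rather than the actual game code, with a made-up grid representation:

#include <cstdlib>
#include <vector>

struct Cell { int x, y; };

// Generate a path of random waypoints from 'start' to 'end', where each waypoint's y
// advances monotonically towards the destination (which prevents the path crossing
// itself), then join consecutive waypoints with manhattan walks (move in x, then y).
std::vector<Cell> generate_path(Cell start, Cell end, int num_waypoints, int map_width)
{
    std::vector<Cell> waypoints;
    waypoints.push_back(start);
    for (int i = 1; i < num_waypoints; i++)
    {
        int y = start.y + ((end.y - start.y) * i) / num_waypoints;   // monotonic in y
        int x = std::rand() % map_width;
        waypoints.push_back({ x, y });
    }
    waypoints.push_back(end);

    // Expand to a cell-by-cell manhattan walk.
    std::vector<Cell> path;
    Cell cur = start;
    path.push_back(cur);
    for (const Cell &wp : waypoints)
    {
        while (cur.x != wp.x) { cur.x += (wp.x > cur.x) ? 1 : -1; path.push_back(cur); }
        while (cur.y != wp.y) { cur.y += (wp.y > cur.y) ? 1 : -1; path.push_back(cur); }
    }
    return path;
}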

lawnjelly


Tower Defence Days 4 & 5

Day 4
Actors sink into ground when dead
Auto generation of waypoint paths / building plots
Refactoring
Failed trial with ragdoll

Day 5
Run animation working
Watching tutorial videos for user interface
Selection of spots for building
UI for selecting which towers to build
Build on mouse click (with fail when the spot is already full)

It's felt like progress has slowed down, however this is partly an illusion, because it is easier to see progress when graphical things change than when behind-the-scenes things change. However I did have several problems with basic C# stuff (probably because I haven't really got the time or inclination to learn the language).

C#

I have to say I don't know whether it's my lack of knowledge, but I've found C# very painful to use so far. I used it once before for a little GUI app, but here in Unity it seems more like a scripting language: okay for the basics, but none of the Unity tutorials have stuff like creating your own types (there were no built-in integer vectors in the version I am using!!), and the seeming need for references everywhere is a big WTF (it is possible I should be using struct rather than class?). I shudder to think what is going on below the surface. I may even end up investigating the JavaScript support, as I don't remember that being as painful to work with. This was coupled with MonoDevelop not saving my files properly; I had to alter the key bindings to save all with Ctrl-S to make sure all my files were being updated.

Ideally I'd like one of these rapid engines that was programmatical rather than GUI based. Drag and drop is great for non-programmers, but I'm just finding it painful: watching 5 seconds of a tutorial video, alt-tabbing to Unity, finding the exact spot the guy clicked, back to the video, etc. And one that works with C++. I did briefly try Unreal a year or so ago but it was so bloated it hardly ran on my old machine. Incidentally I'm finding the videos by Brackeys very useful on youtube, even if some of the stuff like the lack of enums makes me groan; I know it is aimed at beginners. He goes through topics rapidly so it's less boring and time wasting...
https://www.youtube.com/watch?v=IlKaB1etrik

Pathfinding

Anyway .. back to the game. The most interesting bit has been deciding how to make paths for the enemies and plots of land for building on. There's probably a great algorithm for this, but most of the tutorials had manual map creation, so I just had a quick stab at it. At the moment I create a bunch of rectangular 'blocks' or plots of land on which to build buildings, which are not allowed to be walked on. Then I run the Floyd-Warshall algorithm to find the paths from every cell to every other cell (this is simple to implement). At the moment I just choose 2 far-off points for the enemies to run between to create the waypoints, but with all the paths calculated the enemies could theoretically spawn anywhere.

I tried last night to get ragdoll working, as it would be kind of cool to have the enemies thrown off their feet when they get hit. But I got loads of tearing with my model, something in the export it doesn't like, so I may spend more time on this when I get everything else working. Getting skinned characters exported from blender in 'just the right' way seems to be a bit fiddly. Ragdoll would be kind of cool if I get to firing monkeys out of cannons with physics.

The structure of my game code still seems a tad messy, with some duplication of similar classes, because I'm learning Unity. I'll have to clear it up quite a bit if I release the source at the end.
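For anyone curious, Floyd-Warshall over the walkable grid cells really is only a few lines. An illustrative C++ sketch (the adjacency test and cell indexing are placeholders); the 'next' table is what you follow to recover an actual path:

#include <vector>

// All-pairs shortest paths over walkable grid cells (Floyd-Warshall).
// dist[a][b] is the path length from cell a to cell b, next[a][b] the first step to take.
// Cells are indexed 0..n-1; 'adjacent' returns true for walkable 4-neighbours.
void floyd_warshall(int n, bool (*adjacent)(int, int),
                    std::vector<std::vector<int>> &dist,
                    std::vector<std::vector<int>> &next)
{
    const int INF = 1 << 29;
    dist.assign(n, std::vector<int>(n, INF));
    next.assign(n, std::vector<int>(n, -1));

    for (int a = 0; a < n; a++)
    {
        dist[a][a] = 0;
        next[a][a] = a;
        for (int b = 0; b < n; b++)
            if (a != b && adjacent(a, b)) { dist[a][b] = 1; next[a][b] = b; }
    }

    // Relax every pair through every possible intermediate cell k.
    for (int k = 0; k < n; k++)
        for (int a = 0; a < n; a++)
            for (int b = 0; b < n; b++)
                if (dist[a][k] + dist[k][b] < dist[a][b])
                {
                    dist[a][b] = dist[a][k] + dist[k][b];
                    next[a][b] = next[a][k];
                }
}

Note it is O(n^3) in the number of cells, so it only really makes sense for a smallish map, computed once at level load.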

Tower Defence Day 3

Another day with more time spent wrestling with Unity rather than writing much code .. there doesn't seem to be a great deal of consistency in the commands to do things with the engine, and I seem to spend a lot of time googling how to do something simple (like turning a particle effect off and on, or getting access to it). Of course I'm sure this gets better with familiarity with the quirks. I'm gradually trying to get used to MonoDevelop, but I can't figure out how to fix its atrocious text navigation with ctrl-arrow left and right. I've also worked out how to put things in separate C# files, to make things a bit easier to navigate; it isn't like #include in c++.

I had started getting enemies running between waypoints last night. So today I added:
Enemies turning to face the direction they are running
Firing bullets (or projectiles) from guns
Aiming system for the towers
Healthbars for the enemies
Enemy alive / dead states and respawn
Blood particle system for hits

The aiming system is quite fun. As the bullets travel slowly, if the towers fire while aiming at a moving enemy, they usually miss. So the aiming system uses some maths to predict ahead of time where the bullet will hit the enemy, given its velocity and the velocity of the bullet. This actually works when the enemies are running in a straight line, but all bets are off if they turn a corner. This can work well with different tower types firing bullets of different velocity. Higher velocity bullets are more likely to hit, however lower velocity bullets could do more damage? At the moment the calculations are 2D, but it could be fun having some 3D projectiles launched into the air, maybe with area effects.

The placement of waypoints is manual so far, so I want to write an auto system for this, and make sure that towers / buildings will not be built along the path (unless maybe placed by the player as a block).

(shamelessly ripped from https://gamedev.stackexchange.com/questions/14469/2d-tower-defense-a-bullet-to-an-enemy)

// Aiming with slow bullets has to take account of the fact the enemy is moving.
// So we use some maths to predict where the enemy will be when the bullet hits,
// and the correct angle to aim 'ahead'. This works when the enemy is going in a straight
// line, however when it turns a corner the shot will probably miss.
Vector2 ptT = main.m_Actors.m_Actors[m_TargetActorID].m_ptPos;
Vector2 velT = main.m_Actors.m_Actors[m_TargetActorID].GetVelocity();
Vector2 totarget = ptT - m_ptLoc;

float a = Vector2.Dot(velT, velT) - (m_VelBullet * m_VelBullet);
float b = 2 * Vector2.Dot(velT, totarget);
float c = Vector2.Dot(totarget, totarget);

float p = -b / (2 * a);
float q = (float)Mathf.Sqrt((b * b) - 4 * a * c) / (2 * a);

float t1 = p - q;
float t2 = p + q;
float t;
if (t1 > t2 && t2 > 0)
{
    t = t2;
}
else
{
    t = t1;
}

Vector2 aimSpot = ptT + (velT * t);
Vector2 bulletPath = aimSpot - m_ptLoc;

// unused as yet...
// float ticksToImpact = bulletPath.magnitude / m_VelBullet;

float target_angle = MaCommon.VectorToAngle(bulletPath);

// move the aim towards target angle...
SetYaw_Funky(MaCommon.SmoothAngle(m_Yaw, target_angle, 20.0f));

Tower Defence - Day 2

Well, suffice to say that finally getting some sunshine and nice weather here in the UK is anathema to making any kind of progress in game development, as I spent the day motorbiking in Wales. In fact, my mum suggests that part of the reason the UK does well in intellectual pursuits is because we have such godawful weather, and you have to find something to do while you are trapped indoors for 360 days of the year.

But today I managed a second day of wrestling with Unity and MonoDevelop. I'm developing a passionate hatred of both, particularly MonoDevelop, as the defaults seem to try to screw with every bit of text formatting, and there seems to be no obvious way of turning all the auto stuff OFF. But it does seem to give some intellisense, something that isn't properly working in Geany. And given that debugging these C# scripts has so far been a nightmare, I'll be thankful for having intellisense. Apparently you can do stepping through scripts somehow, but I haven't worked it out yet, and am having to rely on Debug logs (which seem overly verbose), and running the thing, changing a line, running again etc. It's like being back in the 80s.

Anyway, despite this I'm gradually making progress. I still haven't a clue how C# really works, how it handles references to objects etc., so it's a miracle anything is working really. I made a very dodgy placeholder gun turret in blender and got it imported, and put in a framework for the towers, an array to hold them, and fixed updates and frame updates to do their logic and interpolation for rendering. It took far too long debugging to get Atan2 working for the orientations to point the guns at the enemies, but finally it seems to be working. The guns smoothly move to point at their target enemy.

For the enemy I've just imported one of my native models; he has a run animation but I haven't got it playing yet, I'll worry about that later. The next stage I guess is to start making some paths and get the enemies to move along them towards the player 'base', then get the towers firing some projectiles. I could use spears, or monkeys really.
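The classic gotcha with this kind of turret aiming is wrapping the angle difference into the -180..180 range before smoothing, otherwise the gun occasionally spins the long way round. A tiny illustrative C++ sketch (not the game's actual code, and the axis convention and smoothing fraction are assumptions):

#include <cmath>

// Yaw (in degrees) needed to face from the tower to the target on the ground plane.
float yaw_to_target(float tower_x, float tower_z, float target_x, float target_z)
{
    return std::atan2(target_x - tower_x, target_z - tower_z) * 180.0f / 3.14159265f;
}

// Move 'current' yaw a fraction of the way towards 'target' yaw each tick,
// always turning the short way round.
float smooth_angle(float current, float target, float fraction)
{
    float diff = target - current;
    while (diff > 180.0f)  diff -= 360.0f;   // wrap into -180..180
    while (diff < -180.0f) diff += 360.0f;
    return current + diff * fraction;
}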

Tower Defence Challenge

Ok so I needed a distraction, so I might have a go at this tower defence challenge at Gamedev. I've been meaning to have a play with Unity .. I'm a c++ guy who tends to do *everything* themselves, so this is quite a change, using a rapid development environment and *gasp* someone else's engine. That said, I'm gonna have some teething issues with trying to learn Unity and C# from scratch at the same time as doing the challenge.

So I followed some tutorials last night on youtube and this morning, and I have a tentative feel for how I might be arranging stuff in this drag and drop editor thingy. I still end up having to google how to do every little thing, but I'm sure it will get faster. I want to make a global array of map locations to put your towers in, so each location is either ground for walking, a tower of some type, or some decoration. Instead of making all the graphics specifically for it, I thought I'd rip off some of the models from my jungle game. I haven't got anything resembling a tower yet, but I'm sure I can build something quick in blender.

The process of importing models from blender seems a bit clunky (I was hoping to be able to import a whole lot at once) but is just about doable; it won't recognise the textures in a blend file so I have to set them up manually. So I now have it creating a grid of prefabs; it should be easy to import some more prefabs and vary them according to some map generator. Come to think of it, I should try playing a tower defence game for research, I don't really know how they work...

UI Improvements

These past few days I have been putting in some improvements to my dodgy custom GUI system.

Introduction

I wrote this originally a few years ago and have gradually been tweaking things as I use it, kind of like Han Solo and the Millennium Falcon. Writing my basic GUI was not all that tricky (it started out as a game UI), and quite fun as a learning experience. I'm sure writing a really good one takes a lot more thought. As a quick intro, I was able to take lots of shortcuts because it is totally a software GUI: it renders everything on the CPU, then transfers whatever changed to the GPU (using OpenGL / OpenGL ES, but any flavour of API will do for this). It can also render OpenGL into 3D windows. It is very low footprint, allowing windowed apps in the realms of 200k or so, and is pretty good with RAM too. It is multi-platform: windows, linux and android so far, and should be fine on iOS.

Improvements

The first and most obvious thing was that I wanted to improve how it looked. Being a programmer I'm usually more interested in spending time on code than on tweaking design issues, but I have to admit the previous version looked something like Windows 3.1. I resolved to dive into the 21st century and do a 'dark theme', and at the same time revamped how widget colours and filling are handled.

In classic 'seemed like a good idea at the time' fashion, the widgets were drawing themselves, calling some common functions but essentially making the design decisions in a bunch of widget classes. This is all great object orientated design, until you come to try and make the widget colour schemes etc. work with each other, and you have to keep track of 2 dozen classes. So at the expense of 'infinite customization' I have simplified it all: there is now a massive enum of all the widget types, and a common routine uses a switch statement to categorize them into a few different 'fill styles'. Then the design task becomes one of trying to make the smaller list of 6 or so fill styles work with each other, rather than every damn widget. It also makes it easier to make themes.

The old code did gradients and a few fancy things, but the new code only does flat shading and a few edge styles. I might add a few more things if I have time, like curved corners and possibly gradients, but gradients make the font rendering a bit more involved (if the font background matches the fill colour, you can just copy rather than blend fonts).

Anyway, this is things so far. I haven't exported a second font yet so the fonts look samey, and I haven't figured out a good way of emphasizing the data blocks on the left, but I'll get there. I could put more edge accenting on everything like the last version, but I found it a bit distracting to the eye. And the borders are all only 1 pixel wide at the moment; it is possible a graded 2 pixel border will look better.

Aside from the looks, I want to carefully go over the 'API' exposed to the user to see if I can simplify it. A simpler API is easier to use, quicker to use, and less prone to mistakes.
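To illustrate the idea (a sketch only - the real widget list and style names will differ), the centralization amounts to mapping the big widget enum down to a handful of fill styles in one place:

// Every widget type in the GUI...
enum class WidgetType
{
    BUTTON, CHECKBOX, RADIO, SLIDER, EDIT_BOX, LIST_BOX,
    WINDOW_BACKGROUND, TITLE_BAR, TOOLTIP, MENU_ITEM,
    // ... etc
};

// ...maps onto a much smaller set of fill styles. The theme only has to make
// these few styles work together, instead of two dozen widget classes.
enum class FillStyle
{
    RAISED,       // clickable things
    SUNKEN,       // editable / input areas
    FLAT_PANEL,   // backgrounds
    ACCENT,       // title bars, selected items
    POPUP,        // tooltips, menus
};

FillStyle fill_style_for(WidgetType type)
{
    switch (type)
    {
    case WidgetType::BUTTON:
    case WidgetType::CHECKBOX:
    case WidgetType::RADIO:
    case WidgetType::SLIDER:            return FillStyle::RAISED;
    case WidgetType::EDIT_BOX:
    case WidgetType::LIST_BOX:          return FillStyle::SUNKEN;
    case WidgetType::TITLE_BAR:
    case WidgetType::MENU_ITEM:         return FillStyle::ACCENT;
    case WidgetType::TOOLTIP:           return FillStyle::POPUP;
    case WidgetType::WINDOW_BACKGROUND:
    default:                            return FillStyle::FLAT_PANEL;
    }
}

The theme / colour scheme then only has to be designed against the handful of fill styles, and adding a new widget type is one extra case in the switch.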

lawnjelly


Evaluating Box2D

I've been using my own internal physics so far for my jungle game, but had been meaning to try out Box2D as I had heard good things. Whether to use third party middleware for physics comes down to a few pros and cons for me:

Middleware
Mostly written already
Tested
Written by specialists
May require customization to work well for the game
May have assumptions that don't work with the game

Internal
Deceptively quick to get *something* working, but a lot of special cases to deal with
Programming / testing / research time consuming
Can be easily tailored to the game

In comparison to quite a bit of middleware I've tried, I found Box2D very easy to compile, and then to slot into the game. I was able to have it effectively swapped in and out with a simple #define.

Implementation

It only took me yesterday to get much of the functionality working, which must be a record for me with middleware. First I added trees and buildings as static fixtures when the map was loaded: circles for trees and boxes for building sections. Then as dynamic actors are added (animals / floating boxes etc.) they are added as circles and boxes. Each tick (I use a fixed tick rate) I can apply an impulse to the Box2D actors, then read back their positions after stepping the physics. Only the on-screen actors are simulated each tick. This was easy to slot in, as my own physics basically does the same.

Actor Facing

What does slightly complicate things is when I try to rotate the actors to face the direction they are travelling. This is quite simple with a circular actor, but long box-shaped actors (especially crocodiles and elephants) can get 'trapped' trying to rotate between obstacles, and there can be jitter as the actor tries to rotate but the physics blocks it. This is an ongoing problem I am attempting to solve, with e.g. temporal smoothing, or perhaps some AI rather than relying on the physics.

Response

Overall I found Box2D worked great for collision detection, but handling the response in the actors may take a bit of fiddling. When a Box2D actor encounters a tree, for example, he bounces or slides around it without penetrating. However in my internal engine, I am able to apply a force to push the actor away from the tree in proportion to how close it is to the centre. Although this is not 'realistic' in terms of physics, it gives a better feel to movement, and helps prevent actors getting stuck between obstacles. On the other hand Box2D looks great for the boats, and their collision points / responses are calculated far better than by my own code.

Of course the other snag is that Box2D is 2D, and my game is 3D, having flying objects, roofs etc. I am confident I can add in a third dimension, but at this point I am deciding whether it will be worth it, or whether it will be better to stay with the internal physics (except maybe for boats?), and spend the time instead refining the internal physics.

Performance

I did also, incidentally, do some crude performance timings. From memory, Box2D was taking around 50-500 microseconds for a tick, versus about 10-30 microseconds for internal. This difference was to be expected as the internal physics is simpler. This was on my PC; the target is low power mobile phones so this will obviously be longer. I haven't timed it on mobile, but I'd estimate 5-10x the time, so it could possibly cause a dropped frame, although I don't foresee it being as major a bottleneck as rendering.
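To make the Implementation section above concrete, the integration is essentially the following (a rough sketch rather than the actual game code; the damping value and iteration counts are placeholders):

#include <vector>
#include <box2d/box2d.h>   // <Box2D/Box2D.h> in older versions

b2World world(b2Vec2(0.0f, 0.0f));   // top-down game, so no gravity in the 2D plane

// Static circle fixture for a tree (boxes for buildings work the same way with b2PolygonShape).
b2Body *add_tree(float x, float y, float radius)
{
    b2BodyDef def;
    def.type = b2_staticBody;
    def.position.Set(x, y);
    b2Body *body = world.CreateBody(&def);

    b2CircleShape circle;
    circle.m_radius = radius;
    body->CreateFixture(&circle, 0.0f);
    return body;
}

// Dynamic circle for an animal / floating box.
b2Body *add_actor(float x, float y, float radius)
{
    b2BodyDef def;
    def.type = b2_dynamicBody;
    def.position.Set(x, y);
    def.linearDamping = 2.0f;            // stops actors sliding forever
    b2Body *body = world.CreateBody(&def);

    b2CircleShape circle;
    circle.m_radius = radius;
    body->CreateFixture(&circle, 1.0f);  // density
    return body;
}

// Game logic pushes an actor in the direction it wants to move.
void apply_movement(b2Body *body, const b2Vec2 &impulse)
{
    body->ApplyLinearImpulse(impulse, body->GetWorldCenter(), true);
}

// Called once per fixed tick, after impulses have been applied to the on-screen actors.
void step_and_read_back(float dt, const std::vector<b2Body *> &actors)
{
    world.Step(dt, 8, 3);                // velocity / position iteration counts
    for (b2Body *b : actors)
    {
        b2Vec2 pos  = b->GetPosition();
        float angle = b->GetAngle();
        // ... copy pos / angle back into the corresponding game-side actor here
        (void)pos; (void)angle;
    }
}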

lawnjelly

Matching feet to terrain - Part 2

(This is a follow up to my blog entry here: https://www.gamedev.net/blogs/entry/2264506-matching-feet-to-terrain-using-ik/ )

This is a quick one just to show progress with getting 4 legged creatures matched to the terrain. While it was quite quick to get 2 legged creatures matching their feet to terrain using IK, 4 legged creatures have proved to be a whole other ball game entirely. The few examples I've seen of 4 legged IK on the web have been using full body iterative solutions, so maybe I should be going this way. However, using what I already had working, I've so far been staying with the simple analytical solution for the legs, and using some maths and forward kinematics for the rest of the body.

First thing I did was try changing the pitch of the torso. I have been using a rotation based around the pelvis; I'm not sure if this is the best approach but it seems to look rightish. Of course in real life you do not always hit a slope head on, so I wanted to be able to roll the torso too. As well as this there is a twist in the spine so the body can follow the terrain better - a stiff spine looks very rigid (but is simpler). A sketch of deriving pitch and roll from the terrain normal follows this entry.

Then it is a case of lining up the legs to compensate for gravity (this isn't always switched on) and doing the IK legs to the terrain under each foot. I've had quite a lot of problems with legs not being long enough to reach the ground, particularly with the short legged crocodile. You sometimes have to move the belly up so it doesn't hit the ground, but then the legs are too short to reach the ground!

As well as the central body there is also a forward kinematic system for moving the head and the tail. It has been very difficult making things generic (fixing one thing on the crocodile would break the elephant and vice versa) but I am getting there. There are also other creatures which are semi 4 legged (the monkey and chimp) but those are easier for the system to calculate. There are still some bugs of feet going through the ground, and glitches, but it doesn't have to be perfect.
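As an illustration of the sort of maths involved, here is a minimal sketch of one way to derive the torso pitch and roll from the terrain normal and the creature's facing direction. This is not the game's actual code - the vector type, helper functions and sign conventions are assumptions for the example.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Assumed helpers.
    static Vec3 Cross(const Vec3 &a, const Vec3 &b) {
        return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
    }
    static float Dot(const Vec3 &a, const Vec3 &b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Given the terrain normal under the creature and its horizontal facing
    // direction (both unit length, y up), find how much to pitch and roll the torso.
    void TorsoPitchRoll(const Vec3 &normal, const Vec3 &forward, float &pitch, float &roll)
    {
        // Side vector, perpendicular to forward in the ground plane.
        Vec3 up   = { 0.0f, 1.0f, 0.0f };
        Vec3 side = Cross(up, forward);

        // Pitch: slope of the terrain along the direction of travel.
        pitch = asinf(-Dot(normal, forward));

        // Roll: slope of the terrain across the direction of travel.
        roll = asinf(Dot(normal, side));
    }

In practice the spine twist could come from evaluating something like this at the front and rear feet and blending between the two results along the spine.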

lawnjelly

Matching feet to terrain using IK

For a long time I had been delaying finding a solution to feet etc interpenetrating terrain in my game. Finally I asked for suggestions here, and came to the conclusion that Inverse Kinematics (IK) was probably the best solution: https://www.gamedev.net/forums/topic/694967-animating-characters-on-sloping-ground/

There seem to be quite a few 'ready built' solutions for Unity and Unreal, but I'm doing this from scratch so had to figure it out myself. I will detail here the first foray into getting IK working; some more steps remain to make it into a working solution.

Inverse Kinematics - how is it done?

The two main techniques for IK seem to be an iterative approach such as CCD or FABRIK, or an analytical solution where you directly calculate the answer. After some research, CCD and FABRIK looked pretty simple, and I will probably implement one of these later. However, for a simple 2 bone chain such as a leg, I decided that the analytical solution would probably do the job, and possibly be more efficient to calculate. The idea is that, based on some school maths, we can calculate the change in angle of the knee joint in order for the foot to reach a required destination. The formula I used was based on the 'law of cosines': https://en.wikipedia.org/wiki/Law_of_cosines - I will not detail it here but it is easy enough to look up (a small sketch is included at the end of this entry).

For the foot itself I used a different system: I calculated the normal of the ground under the foot in the collision detection, then matched the orientation of the foot to the ground. My test case was to turn off the animation and just have animals in an idle pose, and get the IK system to try to match the feet to the ground as I move them around. The end effect is like ice skating over the terrain. First I attempted to get it working with the main hero character.

Implementing

The biggest hurdle was not understanding IK itself, but implementing it within an existing skeletal animation system. At first I considered changing the positions of the bones in local space (relative to the joint), but then realised it would be better to calculate the IK in world space (actually model space in my case), then somehow interpolate between the local space animation rotations and the world space IK solution.

I was quite successful in getting it working until I came to blending between the animation solution and the IK solution. The problems I was having seemed to stem from my animation system concatenating transforms using matrices, rather than quaternions and translates. As a result, I was ending up trying to decompose a matrix to a quaternion in order to perform blends to and from IK. This seemed a bit ridiculous, and I had always been meaning to see whether I could work the animation system entirely using quaternion / translate pairs rather than matrices; it would clearly make things much easier for IK. So I went about converting the animation system. I wasn't even absolutely sure it would work, but after some fiddling, yay! It was working. I now do all the animation blending / concatenation / IK as quaternions & translates, then only as a final stage convert the quaternion / translate pairs to matrices, for faster skinning. This made it far easier in particular to rotate the foot to match the terrain.

Another snag I found was that Blender seemed to be exporting some bones with an 'extra' rotation, i.e. if you use an identity local rotation the skin doesn't always point along the bone axis. I did some tests with an ultra simple 3 bone rig, trying to figure out what was causing this (perhaps I had set up my rig wrong?) but no joy. It is kind of hard to explain and I'm sure there is a good reason for it. But I had to compensate for this in my foot rotation code.

Making it generic

To run the IK on legs, I set up each animal with a number of legs, and the foot bone ID, number of bones in the chain etc. Thus I could reuse the same IK routines for different animals, just changing these IK chain lists. I also had to change the polarity of the IK angles in some animals... maybe because some legs work 'back to front' (look at the anatomy of e.g. a horse rear leg).

The IK now appears to be working on most of the animals I have tested. This basic solution simply bends the knees when the ground level is higher than the foot placed by the animation. This works passably with 2 legged creatures, but it is clear that with 4 legged creatures such as the elephant I will also have to rotate the back / pelvis to match the terrain gradient, and perhaps adjust the leg angles correspondingly to line up with gravity. At the moment the elephant looks like it is sliding in snow down hills.

Blending

Blending the IK solution with the animation is kind of tricky to get looking perfect. It is clear that when the foot from the animation is at ground level or below, the IK solution should be blended in fully. At a small height above the ground I gradually blend back from the IK into the animation. This 'kind of' works, but doesn't look as good as the original animation; I'm sure I will tweak it.

Another issue is that when one leg is on an 'overhang', you can end up with a situation where the fully outstretched leg cannot reach the ground. I have seen that others offset the skeleton downwards in these cases, which I will experiment with. Of course this means that the other leg may have a knee bent further than is physically possible. So there are limits to what can be achieved without rotating the animal's pelvis / back.
 
 Anyway this is just a description of the trials I had, hopefully helpful to those who haven't done IK, and maybe it will generate some tips from those of you who have already solved these problems.
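For anyone who hasn't done this before, here is a minimal sketch of the analytical knee calculation described above, using the law of cosines. The function name and conventions are mine for the example, not the code from the game.

    #include <algorithm>
    #include <cmath>

    // Returns the interior knee angle (radians) for a 2 bone chain so that the
    // foot lands at distance 'distHipToFoot' from the hip.
    // lenUpper / lenLower are the thigh and shin lengths.
    float SolveKneeAngle(float lenUpper, float lenLower, float distHipToFoot)
    {
        // Clamp to the reachable range so acos stays in its domain
        // (fully outstretched leg when the target is too far away).
        float d = std::max(std::fabs(lenUpper - lenLower),
                           std::min(distHipToFoot, lenUpper + lenLower));

        // Law of cosines: d^2 = a^2 + b^2 - 2ab * cos(knee)
        float cosKnee = (lenUpper * lenUpper + lenLower * lenLower - d * d)
                        / (2.0f * lenUpper * lenLower);

        return std::acos(cosKnee);  // pi when straight, smaller as the knee bends
    }

The angle between the upper bone and the hip-to-foot direction comes out of the same formula with the sides swapped, which is enough to place a simple 2 bone leg.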

lawnjelly

 

Wireless USB gamepad working in Linux and Android Emulator!

This will be a short technical one for anyone else facing the same problem. I can't pretend to have a clue what I was doing here, only the procedure I followed in the hope it will help others - I found little information online on this subject.

I am writing an Android game and want to put in gamepad support, for analogue controllers. This had proved incredibly difficult, because the Android Studio emulator has no built in support for trying out gamepad functionality. So I had bought a Tronsmart Mars G02 wireless gamepad (it comes with a usb wireless dongle, and also supports bluetooth). The problem I faced was that the gamepad worked fine on my Android tv box device, but wasn't working under Linux Mint, let alone in the emulator, and wasn't working via bluetooth on my tablet and phone. I needed it working in the emulator ideally, to be able to debug (as the Android tv box was too far away).

Here is how I solved it, for anyone else facing the same problem: firstly the problem of getting the gamepad working and seen under linux, and then the separate problem of getting it seen under the Android emulator (this may work under Windows too).

Under Linux

Unfortunately I couldn't get the bluetooth working as I didn't have up to date bluetooth, and none of my devices were seeing the gamepad. I plugged in the usb wireless dongle but no joy. It turns out the way to find out what is going on with usb devices is to use the command:

    lsusb

This gives a list of devices attached, along with a vendor id and device id (in the form 20bc:5500). It was identifying my dongle as an Xbox 360 controller. Yay! That was something at least, so I installed an xbox 360 gamepad driver by following https://unixblogger.com/2016/05/31/how-to-get-your-xbox-360-wireless-controller-working-under-your-linux-box/ :

    sudo apt-get install xboxdrv
    sudo xboxdrv --detach-kernel-driver

It still didn't seem to do anything, but I needed to test whether it worked, so I installed a joystick test app, 'jstest-gtk', using apt-get. The xbox gamepad showed up but didn't respond. Then I realised I had read in the gamepad manual that I might have to switch the controller mode for PC from D-input mode to X-input. I did this and it appeared as a PS3 controller (with a different USB id), and it was working in the jstest app!!

Under Android Emulator

The next stage was to get it working in the emulator. I gather the emulator used with Android Studio is qemu, and I found this article: https://stackoverflow.com/questions/7875061/connect-usb-device-to-android-emulator

I followed the instructions there. Navigate to the emulator directory in the android sdk, then run it from the command line:

    ./emulator -avd YOUR_VM -qemu -usb -usbdevice host:1234:abcd

where the host is your usb vendor and device id from the lsusb command. This doesn't work straight off - you need to give it a udev rule to be able to talk to the usb port. I think this gives it permission, I'm not sure: http://reactivated.net/writing_udev_rules.html

Navigate to the etc/udev/rules.d folder. You will need to create a file in there with your rules, and you will need root privileges for this (choose to open the folder as root in Nemo, or use the appropriate method for your OS). I created a file called '10-local.rules' following the article. In this I inserted the udev rule suggested in the stackoverflow article:

    SUBSYSTEM!="usb", GOTO="end_skip_usb"
    ATTRS{idVendor}=="2563", ATTRS{idProduct}=="0575", TAG+="uaccess"
    LABEL="end_skip_usb"
    SUBSYSTEM!="usb", GOTO="end_skip_usb"
    ATTRS{idVendor}=="20bc", ATTRS{idProduct}=="5500", TAG+="uaccess"
    LABEL="end_skip_usb"

Note that I actually put in two sets of rules, because the usb vendor ID seemed to change once I had the emulator running (it originally gave me an UNKNOWN USB DEVICE error or some such in the emulator), so watch that the usb ID has not changed. I suspect only the latter rule was needed in the end. To get the udev rules 'refreshed', I unplugged and replugged the usb dongle; this may be necessary.

Once all this was done, and the emulator was 'cold booted' (you may need to wipe the data first for it to work), the emulator started, connected to the usb gamepad, and it worked! This whole procedure was a bit daunting for me as a linux newbie, but if at first you don't succeed keep trying and googling. Because the usb device is simply passed to the emulator, the first step of getting it recognised by linux itself may not be necessary, I'm not sure. And a modified version of the technique may work for getting a gamepad working under windows.

lawnjelly

Android Build and Performance

In the last few weeks I've been focusing on getting the Android build of my jungle game working and tested. Last time I did this I was working from Windows, but now I've totally migrated to Linux I wasn't sure how easily everything would go. In the end, it turns out that support for Linux is great - in fact it was easier than getting things up and running on Windows, with no special drivers needed. Android Studio, and particularly the emulators, seem to be better than last time, with x86 emulators running near native speed, and much quicker APK uploads to the emulators (although still slow to the devices; I gather I can improve this by updating them to a higher Android version, but then they are less good for testing).

The devices I have at home are an old Cat B15 phone (800x480, with a GPU that seems to date from 2006!), a Nexus 7 2012 tablet, and finally an Amlogic S905X TV media player (2017). Funnily enough the TV box has been the most involved to get working.

CPU issues

My first issue to contend with was a 'SIGBUS illegal alignment' error when running on the phone. After tracking it down, it turns out this particular Arm CPU is very picky about the alignment of data. It is usually good practice to keep structures well aligned, but x86 is very forgiving, and I use quite a few structs #pragma packed to 1 byte, particularly in serialization. Some padding in the structures sorted this.

Next I spent many hours trying to figure out a strange bug whereby the object lighting worked fine on emulators, but looked wrong on the device. I had a suspicion it was a signed / unsigned issue in values for diffuse light in a shader input, but I couldn't see anything wrong with the code. Almost unbelievably, when I tracked it down, it turned out there wasn't anything wrong with the code. The problem was that on the x86 compiler a 'char' defaults to signed char, but on the ARM compiler 'char' defaults to unsigned!! This is an interesting choice (apparently on the ARM chip unsigned may be faster) but it goes against the usual convention for short, int etc. It was easy enough to fix by flipping a compiler switch. I guess I should really be using explicit signed / unsigned types (a tiny example follows below). It has always struck me as somewhat weird that C is so vague with the built-in types, with the number of bits and the sign, given that changing these usually gives bugs.

GPU issues

The biggest problem I tend to have with OpenGL ES devices is the 'precision' specifiers in shaders. You can fill them in however you want on the desktop, but it just ignores them and uses high precision. However, different devices have different capabilities for lowp, mediump and highp, in both vertex and fragment shaders. What would be really helpful is if the guys making the emulators / OpenGL ES on the desktop could allow it to emulate the lower precision, allowing us to debug precision on the desktop. Alas no, I couldn't figure out a way to get this to work. It may be impossible using the hardware OpenGL ES, but the emulator can also use SwiftShader, so maybe they could implement this?

My biggest problem was that my worst performing device for precision was actually my newest, the TV box. It is built for super fast decoding of video at high resolution, but the fragment shaders are a minimal 10 bit precision affair, and the fill rate is poor for a 1080P device.
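For anyone hitting the same char signedness issue: on GCC / Clang the switch is -fsigned-char (or -funsigned-char to go the other way), but being explicit in the code is more portable. A trivial illustration of what I mean by explicit types (not from the game's code):

    #include <cstdint>

    // 'char' may be signed or unsigned depending on the compiler / target (x86 vs ARM).
    // The fixed width types from <cstdint> remove the ambiguity entirely.
    int8_t  diffuseSigned   = -5;   // always signed, on every platform
    uint8_t diffuseUnsigned = 250;  // always unsigned, on every platform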
The precision problems on the TV box were coupled with the fact that I couldn't usb connect it up to the desktop for debugging - I was literally compiling an APK, putting it on a usb stick (or dropbox), taking it to the bedroom, installing, and running. This is not ideal, and I will look into either seeing if ADB will run over my LAN or getting another low precision device for testing. I won't go into detail on the precision issues here, as I wrote more on this in a post here: https://www.gamedev.net/forums/topic/694188-debugging-precision-issues-in-opengl-es-2

As a quick summary, 10 bits of precision in the fragment shader can lead to sampling error in any maths done there, especially in texture coordinate maths. I was able to fix some of my problems by moving the tex coordinate calculations to the vertex shader, which has more precision. Then, it turns out that my TV box (and presumably many such chipsets) supports an extra high precision path in the fragment shader, *as long as you don't touch the input data*. This allows them to do accurate uv coords on large texture maps, because they don't go through the 10 bit precision.

Menus

I've written a rudimentary menu system for the game, with tickboxes, sliders and listboxes. This has enabled me to put in a bunch of debugging features I can turn on and off on devices, to try and find out what affects performance, without recompiling.

Another trick from my console days is that I have put in some simple graphical performance bars. I record the last 60 frames into a circular buffer and store things like the frame duration, and when certain game tasks took place (a minimal sketch of the recording side follows below). In my case the big issue is when a 'scroll' event takes place, as I render horizontal and vertical tiles of the landscape as you move about it. In the diagram the blue bar is where a scroll happens, the green bar is where the ground scroll happens, and the red is the frame duration. It doesn't show much on the desktop as the GPU is fast, but on the slow devices I often get a dropped frame on the scrolls, so I am trying to reduce this. I can turn on and off various aspects of the scrolling / rendering to track down what causes performance issues. Certainly PCF shadows are a big ask on mobiles, as is the ground (terrain) shader.

On my first incarnation of the game I pre-rendered everything (graphics + shadows) out to a massive texture at loadup and just scrolled through it as you moved. This is great for performance, but unfortunately uses a shedload of memory if you want big maps. And phones don't have lots of memory.
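Going back to the performance bars: the recording side is only a few lines. A minimal sketch of the kind of circular buffer involved (the field names are illustrative, not the game's actual code):

    #include <cstdint>

    // Keep the timings of the last 60 frames, so they can be drawn as bars each frame.
    struct FrameRecord
    {
        float durationMs   = 0.0f;  // red bar
        bool  scrolled     = false; // blue bar - a scroll event happened this frame
        bool  groundScroll = false; // green bar - the ground tiles were redrawn
    };

    class FrameHistory
    {
    public:
        void Add(const FrameRecord &rec)
        {
            m_records[m_head] = rec;
            m_head = (m_head + 1) % NUM_FRAMES;
        }
        // Get a record from 'framesAgo' frames back (0 = most recent).
        const FrameRecord &Get(uint32_t framesAgo) const
        {
            return m_records[(m_head + NUM_FRAMES - 1 - framesAgo) % NUM_FRAMES];
        }
    private:
        static const uint32_t NUM_FRAMES = 60;
        FrameRecord m_records[NUM_FRAMES];
        uint32_t    m_head = 0;
    };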
Because of that memory limit, a lot of technical effort has gone into writing the scrolling system, which redraws the background in horizontal and vertical tiles as you move about. This is much more tricky with an angled landscape than with a top-down 90 degree view, and even more tricky when you have to render shadow maps as you move.
Having identified the shadow map pass as being a bottleneck, I did some quick calculations for my max map size (approx 16384x16384) and decided that I could probably get away with pre-rendering the shadow map to a 2048x2048 texture. Alright it isn't very high resolution, but it beats turning shadows off completely.
This is working fine, and avoids a lot of ugly issues from scrolling the shadow map. To render out the shadow map I render a bunch of 256x256 tiles and copy them to the final shadow map. This fixed some of the slowness, then I realised I could go a step further. Much of the PCF shadow slowdown was from rendering the landscape shadows. The buildings and objects are much rarer, so I figured I could pre-render a low-res landscape shadow texture and use this when scrolling, then only do the expensive PCF / simple shadows on the static and dynamic objects. This worked a treat, and incidentally solves at a stroke the precision issues I was having with the shadow shader on the 10 bit hardware.

Joysticks

As well as supporting touchscreens and keyboards, I want to support gamepads, so I bought a bluetooth / wireless gamepad for xmas. It works great with the TV box via the wireless dongle; unfortunately, the bluetooth doesn't seem to work with my old phone and tablet, or my desktop. So it has been very difficult / impossible to debug getting the analog joystick working. And, in an oversight(?) for the emulator, there doesn't seem to be an option for emulating a gamepad. I can get a D pad but I don't think it is analog. So after some stabs in the dark with the docs I am still facing gamepad focus issues, and will have to wait till I have a suitable device to debug this.

That's all for now folks!

lawnjelly

 

Native Variation and Conversations

Just a little progress video. As well as getting the scripting working a bit more, I've been placing villages and naming map areas. The area names are generated by putting together randomly chosen syllables (a toy sketch of the idea is at the end of this entry).

Morphing

For variation, the natives now use realtime bones like the player, and there is a morphing system so you can have fat / thin / muscular natives etc (I have only created fat and thin so far to test, but it works fine).

UV Maps

As well as the morphing variation, each native has a uv map so it can use different textures for different uv islands (parts of the body). This will allow e.g. wearing different clothing, different faces, jewellery etc. At the moment I've just put some red, green, blue and white over the different areas as placeholder until I create more textures.

The conversations are all random from script, just for a test - choosing random animals and people to make random snippets of talk. I will probably make this more purposeful, giving each villager names and relations so they can further the plot.

Next

Next up I am putting in attachments so they can carry spears etc, and the player can carry a sword. Also I may be able to have a canoe as an attachment off the root nodes so they can canoe on the lakes. I will also add female natives. I could do them as morphs, but I think it will be easier for texturing etc to have a different female model.
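As a flavour of how simple the area name generation can be, here is a toy sketch - the syllable list and rules here are made up for the example, not the game's own:

    #include <cctype>
    #include <cstdlib>
    #include <string>
    #include <vector>

    // Build an area name from 2 or 3 randomly chosen syllables.
    std::string GenerateAreaName()
    {
        static const std::vector<std::string> syllables =
            { "ka", "ru", "mba", "to", "shi", "wan", "go", "la" };

        int count = 2 + (rand() % 2);
        std::string name;
        for (int i = 0; i < count; ++i)
            name += syllables[rand() % syllables.size()];

        name[0] = static_cast<char>(toupper(name[0]));
        return name;
    }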

lawnjelly

 

Sept 2017 Update

Lots of improvements since moving my development to Linux. All the software has been working great; just as a recap I am using mainly QT Creator, Blender, Geany, Gimp, and Audacity.

Shared Textures

One of the big changes I made was to the export toolchain for static 3d models - buildings etc. Originally each model had its own texture (which got put in a texture atlas). This allows things like baked ambient occlusion, but was not very efficient with texture memory once you had lots of objects, and made it more difficult to get uniform looks and easily change the texture 'palette'. So I improved my blender python export script to handle multiple models in the same scene, and detect shared textures. A second converter reads in the txt export and builds the binary model file to be used in game - it builds texture atlases, modifies texture coords and packs the data into a game friendly format. This has worked very well and I've modelled a number of new buildings, boxes, barrels etc.

Map Generation

I've finally improved the map generator. Initially I had been using just simple random positions for map objects, which usually resulted in a mess with buildings on top of each other etc. I have replaced this with a similar physics based spacing system to the one I used in the old version of the game. Another snag is that buildings would often get placed on sloping ground. This looks bad in game because you get half the building underground, and half suspended in thin air. To improve this I added a flattening stage to the terrain, where the area around objects that are in a 'flattening group' gets flattened. This leads to problems of its own, like terrain 'cliffs', which are smoothed afterwards. This is still being tweaked.

Actor Movement

I've made some improvements to the physics - the movement speed takes account of the gradient of the land now, so you no longer get fast moves up and down cliffs (a rough sketch of this kind of scaling is at the end of this entry). The handling of elevation and altitude is also now more sensible, making the jumping and flying better. The physics is currently all cylinder based, and you can jump (and sit) on top of other objects and actors, and pass below them. Still being tweaked... and I need to come up with a solution for non-cylindrical buildings - perhaps some solution made with 1 or 2 orientated bounding boxes. The movement controllers for yaw, pitch and roll are also improved, now storing a velocity for each, which helps prevent yo-yoing artefacts. The bats look much better with some roll, and they also pitch in comical ways, I might have to turn that down lol.

Animation Notes

The animation toolchain now supports notes in the animation, to mark the frames where footsteps, attack noises etc should be. Here is a video of everything working so far:

What is next?

Next on my todo list is modelling more animals, and adding realtime bones animation for the main player. I figured bones animation was probably too expensive on mobiles for the animals, so I pre-render their bones animation out to vertex tweening animation. The same is currently true of the player, but if possible I want to use bones in realtime so I can get responsive animation blending and more flexibility. The downside is I will probably have to write 3 codepaths: software skinning, and 2 versions of the skinning shader, as I have read articles suggesting that not all OpenGL ES 2.0 hardware supports a standard way of skinning.
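As an idea of the sort of tweak the gradient handling involves, here is a minimal sketch of gradient-scaled movement speed. The falloff values and names are made up for the example - this is not the game's actual movement code.

    #include <algorithm>
    #include <cmath>

    // Scale an actor's horizontal move speed by the slope it is walking on,
    // so steep climbs (and descents) are slower than movement on flat ground.
    float SlopeAdjustedSpeed(float baseSpeed, float heightChange, float horizontalDist)
    {
        if (horizontalDist <= 0.0f)
            return baseSpeed;

        float gradient = std::fabs(heightChange) / horizontalDist; // rise over run
        float scale    = 1.0f / (1.0f + gradient * 2.0f);          // arbitrary falloff
        return baseSpeed * std::max(scale, 0.25f);                 // never below 25% speed
    }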

lawnjelly
