
Entries in this blog

Random game mode and options

Just a little post to say I have uploaded a new version of my frogger game (011) to my project page. There is now a random game mode, and an options screen where you can change the difficulty, toggle full screen, and adjust the sound and music volume, plus an as-yet-untested MacOSX version. The fullscreen toggle may be a bit dodgy; it works a couple of times on my Linux PC but then shows some graphics corruption, and I don't know whether that is a Godot problem or the graphics drivers. Adding the random game mode and difficulty took a lot of playtesting. Difficulty mostly changes the speed of the enemies, but there are some other effects too. When you get really good you can play on max difficulty; make sure to have lots of caffeine before trying this.

lawnjelly

Frogger - Post Mortem

Finally made a first release of my frogger game for the gamedev challenge yesterday.

What went right

Using Godot Engine. Godot and GDScript were very quick to learn and get started with, and are very good for these types of small-scale games. Overall I preferred it to Unity, which I used last challenge.

Using the skin modifier for modelling in Blender. This enabled me to make game creature models fast, around a day for modelling, rigging, animating and texturing a model.

Using 3d paint for texturing creatures. Having spent many months developing 3d paint, it is really starting to pay off in quick texturing of assets. Blender can do this texturing too, but the workflow is much faster in 3d paint.

What went wrong

3D sound broken in Godot. I had to do some bodging to get any kind of positional sound working, and it is flaky at best. I hope they will fix this soon.

Android support not yet working. My Android hardware / emulator only seems to support OpenGL ES 2, and Godot only supports ES 3 up until the 3.1 release. I tried the 3.1 alpha but no joy as yet.

Creating art assets took most of my time, approximately 2/3 of the development time (I am not an artist!).

Moving house. I only realistically had the first 3 weeks to work a lot on the game, so I tried to finish as much as possible early on. I do not even have access to a computer / internet at the new house yet.

Dealing with different aspect ratios. I don't really deal with this as yet; I may have to address it.

Normal mapping the assets. I tried this on the cars, but it is very finicky to get working right and I don't have much experience. It took a lot of time and the difference was negligible, so I dropped it.

Procedural terrain texturing. Implemented, but it was too slow in GDScript, so I pre-created 5 terrain textures and just used them in the levels. The same algorithm was fast enough in C# in Unity, so I think GDScript is currently several times slower. (However, I do prefer the GDScript language to C#.)

No wheels on cars. This is just funny, I always intended to put them in but never got round to it!

Dropping lots of features due to lack of time. This is typical of gamedev in general, but luckily I had enough features to make the game playable. There is already support for other pickups like score and poison etc., I just didn't have time to make the models.

lawnjelly

Frogger - birds

Been a bit slow on the progress front the past few days, maybe because I was making more assets, which is slow: a lily pad, a snake and a bird. These are now in the game, although I haven't put in sound effects yet. I need to re-export the snake because it has lost the motion of the root node, which is why the slithering looks super silly lol.

Today I've been starting to put in some basic functionality for creatures that are not on rails, the first one being the bird. The AI is pretty simple at the moment but it looks passable; they just try to get at you and peck, although the peck does nothing yet. I was thinking about maybe having the peck reduce health rather than be an insta-kill, or do something like freeze you or move you off path so you are more likely to get hit by something else. There is collision detection between the birds but not with the traffic etc., as I can see it getting expensive in GDScript, so I might have the free-moving creatures on their own specific maps.

I have also done some clever jiggery-pokery with the fixed timestep. The frog and input are now running at 60 ticks per second, and everything else at 10 ticks per second. I was previously running everything at 30 tps; however, this is a waste once you start having AI creatures, 10 is fine for them, and 60 tps gives nice responsive keyboard input for the player.
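As a rough illustration of the idea, here is a minimal sketch of driving two tick rates from a single per-frame update, using accumulated time. This is illustrative Python rather than the actual GDScript from the game, and the constants and function names are made up for the example:

# Illustrative sketch only: two fixed tick rates driven from one per-frame
# update, using the total elapsed milliseconds since the level started.

PLAYER_TICK_MS = 1000 // 60    # roughly 60 ticks per second for the frog / input
AI_TICK_MS = 1000 // 10        # 10 ticks per second for AI creatures

player_ticks_done = 0
ai_ticks_done = 0

def player_tick():
    pass    # read input, move the frog

def ai_tick():
    pass    # birds, snakes, traffic logic

def frame_update(total_ms):
    """Call once per rendered frame with the elapsed time in milliseconds."""
    global player_ticks_done, ai_ticks_done

    # Catch up on however many player ticks should have happened by now.
    wanted = total_ms // PLAYER_TICK_MS
    while player_ticks_done < wanted:
        player_tick()
        player_ticks_done += 1

    # Same idea at the lower AI rate.
    wanted = total_ms // AI_TICK_MS
    while ai_ticks_done < wanted:
        ai_tick()
        ai_ticks_done += 1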

lawnjelly

Using the Skin Modifier in Blender to quickly model creatures

Introduction

After spending many hours painstakingly attempting to model creatures entirely by hand, I finally discovered (a couple of years ago) the skin modifier in Blender, which is a fantastic, quick way to build organic creatures and shapes, especially for the artistically challenged like myself, and it also makes rigging a breeze. I thought I would write a quick description for those unfamiliar with it.

If you want ultimate control, particularly for a low poly creature, there is no substitute for manually creating polys. However, this can be very time consuming and tedious. If you are instead in a position where you are willing to trade off speed of creation against 'perfect rendering efficiency', or are making medium / high poly models, or models for later sculpting, then one of the options available is the skin modifier. Instead of modelling the skin by hand, you place the joints (as vertices) of a creature to build a kind of skeleton, and allow the skin modifier to automagically generate a skin around that skeleton.

Process

Typically I start off by creating a plane, then go into edit mode and merge the vertices to 1 in the centre. Next, set up the modifier stack to create the skin. At the top of the stack goes a mirror modifier, because most animals are bilaterally symmetrical. Next goes the skin modifier, which creates a simple box-like skin around the skeleton. Finally, add a subdivision surface (subsurf) modifier to smooth the skin and make it more organic.

Once the modifier stack is ready you can begin modelling. In the case of this bird I started with a top-down view. Select the start vertex (there should now be a 'blob' around the single merged vertex), and create the skeleton by pressing 'e' to extrude and place a new vertex. I did this several times to create a backbone for the bird. You can then create wings and legs by picking one of the vertices in the backbone and extruding to the side. If you follow this process you can form a rough top-down skeleton. It doesn't have to be exact, because it is easy to adjust; that is one of the beauties of the skin modifier. I find it useful to google pictures of the skeleton of the animal for reference.

Next, look at side views and adjust the up / down position of the vertices (joints). The legs needed to go downwards, and the head slightly up. Once I am happy with the basics of the structure, I start to fill it out. You do this by selecting a vertex, then pressing 'ctrl-a' and dragging with the mouse, which makes the skin thicker or thinner at that vertex. This can quickly give you a reasonable shape. You can refine the shape further by pressing 'ctrl-a' then limiting the scaling to either the x or y axis by pressing 'x' or 'y' before dragging. I used this to give a broad flat tail and wings.

Conclusions

Pretty soon you can build a pretty good model. You can tweak a few things in the skin modifier; in particular, setting a root vertex (e.g. the pelvis) can make later animation easier. The skin modifier also makes rigging easy: once you are happy with your skeleton, make a copy of the whole thing (so you don't lose the original), then choose 'create armature' from the skin modifier. This will create an armature and link it to the mesh so it is ready for posing and animating! I also typically choose smooth shading in the skin modifier, then manually add hard edges in mesh edit mode (ctrl-e, used in combination with the edge split modifier). I also use this to select seams for UV mapping.

Note that once I finish the skin modifier version I usually have to do a little manual tweaking of the polys, because there are some things it is not good at. Anyway, this has been a brief introduction to the method; I would encourage trying it and following some youtube tutorials.

After some decimating and very rough texturing (~640 tris)
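For those who prefer scripting, here is a small Blender Python sketch of setting up that starting point (a plane collapsed to a single vertex, with the mirror / skin / subsurf stack on top). It is only an illustration written against the bpy API, not part of my actual workflow:

import bpy

# Add a plane, collapse it to a single vertex, then add the
# mirror -> skin -> subdivision surface modifier stack described above.
bpy.ops.mesh.primitive_plane_add()
obj = bpy.context.active_object

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.merge(type='CENTER')             # merge all verts to one at the centre
bpy.ops.object.mode_set(mode='OBJECT')

bpy.ops.object.modifier_add(type='MIRROR')    # bilateral symmetry
bpy.ops.object.modifier_add(type='SKIN')      # box-like skin around the vertex skeleton
bpy.ops.object.modifier_add(type='SUBSURF')   # smooth, more organic surface
obj.modifiers[-1].levels = 2                  # subdivision level

From there you extrude vertices ('e') and resize the skin at each joint ('ctrl-a') exactly as described above.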

lawnjelly

Frogger - night mode

I spent a couple of days adding a procedural splatting terrain texturing system, similar to the one I used in Tower Defence. However, it runs slower than the Unity version; I suspect GDScript is currently quite a bit slower in Godot (3.05) than C# in Unity. As a result I've thought about pre-generating some terrain textures as heavily compressed .jpg files and using those instead of doing it on the fly. This is an option, but I've left it procedural for the time being. The road and rivers don't need the procedural system, so there is less area to cover, so I may get away with it. An advantage of procedural texturing, as well as variety, is that I can change the terrain around buildings etc. should I put them in.

I've started adding some more cameras too. You can now switch between a traditional top-down ortho camera and a low-down perspective camera that follows the frog and shows closeups where necessary. I've also added an easy way to lay out each level: I specify the number of tiles of each type (grass, river, road at the moment), then for each row I can specify the type of traffic, speed, etc. (a rough sketch of this kind of row description is below).

Lastly I've been playing with the lighting, experimenting with the spotlight in Godot, possibly for a night mode. I don't know how it will affect performance if I do a mobile version, but the spotlight is certainly fun, and it changes the gameplay a little as you sometimes can't see the vehicles coming. I've been thinking of putting a 'spyfrog' slant on things, and this lighting would work for a spy.

I still have to get around to thoroughly debugging the collision areas. I'll probably attach some visible bounding quads to each object to check that the bound matches up with the visual rep; it's quite a bit off in some cases, which is why you die jumping on certain bits of logs etc. I will add lily pads soon, and have realised that if I make a snake, it can act just like any other bit of traffic, just moving on the grass.
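Here is a hypothetical sketch of what such a row-based level description could look like. The real game is written in GDScript and uses its own field names, so everything here is illustrative only:

# Hypothetical, illustrative level description; field names are made up.
LEVEL_1 = {
    "rows": [
        {"terrain": "grass"},
        {"terrain": "road",  "traffic": "car",   "speed": 2.0, "direction": +1, "count": 3},
        {"terrain": "road",  "traffic": "lorry", "speed": 1.5, "direction": -1, "count": 2},
        {"terrain": "river", "traffic": "log",   "speed": 1.0, "direction": +1, "count": 3},
        {"terrain": "grass"},
    ],
}

def build_level(level):
    # In the game, each row would spawn a 'row' node that owns and recycles
    # its vehicles / logs as they wrap around the screen.
    for y, row in enumerate(level["rows"]):
        print(y, row["terrain"], row.get("traffic", "none"))

build_level(LEVEL_1)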

lawnjelly

Intellectual Property and Clone Games

I'm just today thinking about the rights issues around my little frogger game for the gamedev challenge. It is quite commonplace to make clones of games as a learning experience and for jams, but it is worth spending a little time thinking about rights issues. I got to thinking about this because sometimes I notice a promising game where the assets are obviously ripped from other games / sources without a licence. I too had no qualms about this kind of approach when, e.g., learning to program a new language and not intending to distribute the result. Indeed this can be a valid approach in production, with placeholder graphics, as long as you are sure to change the final versions (there is a danger of tripping up on this one; even big companies can mess it up!).

Plagiarism

I remember vividly being told of the dangers of plagiarism in my education, in the world of science, but it is equally applicable to games, artwork, etc. Once you spend some time programming, making artwork, sound or music, you begin to realise the effort that goes into making the results. How would you feel if someone took your hard work, used it for profit and attempted to pass it off as their own? I like to think of it as treating others' work how I would like mine to be treated. On top of the legal aspects, when applying for jobs, many employers will take a very dim view of plagiarism. If you thought it was okay to 'steal' another's work for something on your CV, what's to stop you thinking it is okay to copy another's work during your job, and expose them to huge risks? Quite apart from the fact that the interviewers may have personal experience of being plagiarised, in many cases your CV will be filed directly in the rubbish bin.

Creative Commons and Open Source

Luckily, in the case of assets, instead of infringing on others' works without permission, a huge number of people (including myself!) have decided to make some of their work available for free for others to use under a number of licences, such as Creative Commons, often for nothing other than an attribution in the credits. (And think of the huge contribution of open source software. I am writing this now on an operating system that is open source and has been freely given away by the authors for the good of everyone. That to me is fantastic!) I remember spending some time compiling the credits list for my tower defence game, making sure as well as I could that everyone was properly credited: every model, animation, sound, piece of music. It took some time, but I feel much better knowing that I had used the work as the authors intended, quite apart from feeling more secure from a legal standpoint, even for a free game. And also, speaking as an author myself, it is quite fun to google yourself and find where others have used your work, and it encourages you to share more.

https://opengameart.org/
https://freesound.org/

Copyright++

Things seem pretty cut and dried for outright copying of assets, but the situation is more complex when it comes to assets that are 'based on' others' work, and for things like game titles and game designs. Some of this intellectual property (IP) protection is based on trademarks rather than copyright. I am no expert in this, and would encourage further reading and/or consulting a specialist lawyer if in any doubt. Also note that IP laws vary between countries, and there are various agreements that attempt to harmonize things in different areas. Of course, whether you run into trouble making a clone game depends not only on skirting around the applicable law, but on whether the rights owners are able / willing to take action against you (or the publishing platform). Nintendo, for instance, are known to be quite aggressive in pursuing infringement, whereas Sega have sometimes suggested they are happy with fan games: https://kotaku.com/sega-takes-shot-at-nintendo-encourages-fans-to-keep-ma-1786527246

I do understand both points of view. Indeed, in some jurisdictions I understand that the rights owner legally *needs* to take action in order to retain control over the rights (this seems nonsensical, but there you are). So a company being overly aggressive may have been forced into it by the legal system. Anyway, from my little research into frogger, my first guess is that the rights are with Konami / Sega / both. Of course you never know. Companies sometimes sell the rights to IP, or one goes out of business and the rights are assigned to a creditor. Who is to say that a future rights owner will not take a different approach to enforcement?

In the Wild

It seems there are a number of frogger clones out there, some successful and profitable (e.g. Crossy Road). Of course that does not mean making a frogger clone is 'ok' in any way; it just suggests that the likelihood of running into trouble is lower than if there were no frogger clones on the market. Currently I am thinking I will gradually modify the title / some aspects of the gameplay so I can make it available for download after the challenge. I really should have thought of this sooner though, and made my main character something else, or put a different slant on it, like 'game of frogs', or 'crossy frog' (that one is taken!). :)

Some light reading

https://en.wikipedia.org/wiki/Copyright_and_video_games
https://www.newmediarights.org/guide/legal/Video_Games_law_Copyright_Trademark_Intellectual_Property
https://en.wikipedia.org/wiki/Berne_Convention
https://en.wikipedia.org/wiki/TRIPS_Agreement
https://en.wikipedia.org/wiki/Digital_Millennium_Copyright_Act
https://en.wikipedia.org/wiki/WIPO_Copyright_Treaty
https://en.wikipedia.org/wiki/Directive_on_Copyright_in_the_Digital_Single_Market

lawnjelly

Frogger - animation

I have now added some animations from Blender to Godot. Mostly they went in very easily, and the AnimationTreePlayer makes it easy to set up node graphs for the animation logic, although I couldn't work out how to do certain state changes; I gather this is being changed in the new version. The animations look good in the game, especially the frog. I also added logic for controlling the yaw of the frog as you move. The only bug I've encountered with the animations is that, particularly in a window, it sometimes shows a streaked, corrupted black triangle down the screen. I initially thought it was to do with shadow mapping, but now I think there may be a bug in the engine's vertex buffer / shader code somewhere. I did wonder whether I had exported my animations wrongly, but the process is pretty simple, so on balance I think it may be an engine bug in 3.05. I will see if it is fixed in the 3.1 alpha or my compiled-from-source version. Next I aim to work some more on the core game, fixing collision / the river, and adding progression in skill / variation through the levels. Then I will look at some different cameras, and at dealing properly with traffic leaving the screen in different aspect ratios. I also want to look at revamping the background graphics; I'm not quite sure what to go for yet.

lawnjelly

Frogger - programming day 4

Quick update to show how everything is going. Ignoring the fact that the river doesn't drown you, collision bugs, etc., I've been pressing on with getting the major features in. There are now menus, UI (basic to start), game state logic, winning, losing, and just today I was putting in sound. Sound has been quite tricky because 3D sound and listeners seem to be currently broken in Godot (3.05 at least), so I've had to bodge around the broken bits considerably. If they fix it I can solve some of these things, but for now sound will have to be pretty simple. There is still loads of stuff to fix, as well as putting in some kind of progression as you complete levels. I was thinking of making roads / rivers wider, or multiple versions with more traffic and different speeds / densities. I might also put in some other stuff like a fly to eat, an otter and a snake, depending on time. I also figured out that some performance issues were occurring because Godot was defaulting to a 4096x4096 shadow map (!); gawd knows how that was decided to be a good choice lol.

lawnjelly

Frogger - programming day 2

Just a quick update to show how my frogger challenge entry is coming along. I had a day off yesterday, and today I have put in collision detection and a few other things. You don't die yet if you drown, the crocs and turtles are going the wrong way, and the collision detection has lots of bugs, but it is getting there. My plan is to first get a working version, then flesh it out as time allows. I will put in UI soon and work out how to do menus etc. Once it is playable and meets the challenge I will put in animation, and improve the backgrounds, cameras, etc.

lawnjelly

Frogger - Models and First Version

As I'm using Godot for this challenge there is no asset store, so I'm attempting to make all the assets myself. I'm no artist, and I find making artwork pretty tedious and time consuming; that said, I'm gradually getting more efficient at it. So I spent the first few days making artwork, which is mostly in my frogger gallery. Yesterday evening and today I started actually programming the game in Godot. I'm pretty much making this up as I go from zero experience with the engine and GDScript (like when I used Unity for Tower Defence). So far I am pretty positive about the Godot experience; my only criticism so far would be the lack of built-in interpolation (see this post), but it has been fairly easy to work around. This time I'm mainly aiming at desktop rather than mobile, partly because I have no idea how good the Android support is in Godot. I will cross that bridge when I come to it, but the game should in theory run fine on mobile too.

The whole engine is node based, and you can have deeply nested nodes (something I'm not sure Unity supports), and handily you can make your game out of smaller scenes which you just pull in, in the editor or programmatically. I like the paradigm, and the node-based scene graph is similar to NDL or Ogre, so I'm familiar with how to use it.

My first step was to try and move the frog around the screen. For interpolation I store previous and current positions on each tick (I started out at 10 ticks per second). It was pretty easy to get the frog moving left, right, up and down. However, from playing frogger games in a browser I realised the movement is all in 'jumps', almost like a grid (though not when on logs etc.). So instead of holding a key to move the frog, each keypress gives a new destination location, and the frog makes its way to the destination, and won't respond to other moves until it has reached it. This is now working pretty well (a rough sketch of the idea is at the end of this post).

For the vehicles etc. I wanted a generic system; there is no point programming the same thing multiple times. I have separated the lines of traffic into 'row' objects, and each row can contain multiple cars of a certain type, with a certain speed and direction. There is no collision yet, but I'll just use simple AABB checks. Handily, because the frog can only be on a maximum of 2 rows at a time, you only need to check collision against the cars / logs on those rows. I've also figured out how to preload the different vehicles as scenes so I can create instances which get reused as they go off the edge of the screen. I need to re-export the models as I have some newer versions now. There are also no wheels yet, and no animations; I'll deal with them later.

Overall it is coming together well for just a day of programming, which is a testament to how easy Godot is to use. My first aim is to get the frog playable across the screen to get to the lily pads, and to die if you hit vehicles etc., and to have some UI for the score and lives. I have no idea how to do menus yet in Godot, but I think it will involve more scenes so will hopefully be easy. And while I'm starting with the traditional top-down orthographic camera, which makes wraparound of vehicles easy, I'd like a more free-moving camera; I'll just have to think of a way of making the vehicles fade in and out.
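Here is a rough sketch of that jump-to-destination movement, with previous / current tick positions kept for interpolation. It is illustrative Python rather than the game's GDScript, and the constants (e.g. JUMP_TICKS) are assumptions for the example:

# Illustrative sketch (not the game's GDScript). A keypress sets a destination,
# further input is ignored until the frog arrives, and previous / current tick
# positions are kept so rendering can interpolate between them.

JUMP_TICKS = 3      # ticks taken to complete one hop (assumed value)
TILE = 1.0          # one grid square

class Frog:
    def __init__(self):
        self.pos_prev = (0.0, 0.0)   # position at the previous tick
        self.pos_curr = (0.0, 0.0)   # position at the current tick
        self.dest = None             # where the current hop is heading

    def try_move(self, dx, dy):
        # Ignore input while a hop is still in progress.
        if self.dest is None:
            x, y = self.pos_curr
            self.dest = (x + dx * TILE, y + dy * TILE)

    def tick(self):
        self.pos_prev = self.pos_curr
        if self.dest is None:
            return
        step = TILE / JUMP_TICKS
        x, y = self.pos_curr
        dx, dy = self.dest[0] - x, self.dest[1] - y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= step:
            self.pos_curr = self.dest    # arrived: accept new input next tick
            self.dest = None
        else:
            self.pos_curr = (x + dx / dist * step, y + dy / dist * step)

    def render_pos(self, fraction):
        # Interpolate between the previous and current tick for smooth drawing.
        px, py = self.pos_prev
        cx, cy = self.pos_curr
        return (px + (cx - px) * fraction, py + (cy - py) * fraction)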

lawnjelly

Fixing your Timestep and evaluating Godot

The past few days I have been playing with the Godot engine with a view to using it for some GameDev challenges. Last time, for Tower Defence, I used Unity, but updating it has broken my version, so I am going to try a different 'rapid development' engine. So far I've been very impressed by Godot: it installs down to a small size, isn't bloated and slow with a million files, and it is very easy to change versions of the engine (I compiled it from source to have the latest version as an option). Unfortunately, after working through a couple of small tutorials, I get the impression that Godot suffers from the same frame judder problem I had to deal with in Unity. Let me explain (tipping a hat to Glenn Fiedler's article).

Some of the first games ran on fixed hardware, so they didn't have to make a big deal about timing; each 'tick' of the game was a frame that was rendered to the screen. If the screen rendered at 30fps, the game ran at 30fps for everyone. This was used on PCs for a bit, but the problem was that some hardware was faster than others, and there were some games that ran too fast or too slow depending on the PC. Clearly something had to be done to enable games to deal with different speed CPUs and refresh rates.

Delta Time?

The obvious answer was to sample a timer at the beginning of each frame, and use the difference (delta) in time between the current frame and the previous one to decide how far to step the simulation. This is great, except that things like physics can produce different results when given shorter and longer timesteps; for instance, a long pause while jumping due to a hard disk whirring could give enough time for your player to jump into orbit. Physics (and other logic) tends to work best and be simpler when given fixed, regular intervals. Fixed intervals also make it far easier to get deterministic behaviour, which can be critical in some scenarios (lockstep multiplayer games, recorded gameplay etc.).

Fixed Timestep

If you know you want your gameplay to have a 'tick' every 100 milliseconds, you can calculate how many ticks should have completed at the start of any frame:

// some globals
int iCurrentTick = 0;

void Update()
{
    // Assuming our timer starts at 0 on level load:
    // (ideally you would use a higher resolution than milliseconds, and watch for overflow)
    int iMS = gettime();

    // ticks required since start of game
    int iTicksRequired = iMS / 100;

    // number of ticks that are needed this frame
    iTicksRequired -= iCurrentTick;

    // do each gameplay / physics tick
    for (int n = 0; n < iTicksRequired; n++)
    {
        TickUpdate();
        iCurrentTick++;
    }

    // finally, the frame update
    FrameUpdate();
}

Brilliant! Now we have a constant tick rate, and it deals with different frame rates. Providing the tick rate is high enough (say 60fps), the positions when rendered look kind of smooth. This, ladies and gentlemen, is about as far as Unity and Godot typically get.

The Problem

However, there is a problem, which can be illustrated by taking the tick rate down to something that could be considered 'ridiculous', like 10 or fewer ticks per second. The problem is that frames don't coincide exactly with ticks. At a low tick rate, several frames will be rendered with dynamic objects in the same position before they 'jump' to the next tick position. The same thing happens at high tick rates: if the tick rate does not exactly match the frame rate, you will get some frames that have 1 tick, some with 0 ticks, some with 2. This appears as a 'jitter' effect. You know something is wrong, but you can't put your finger on it.
Semi-Fixed Timestep

Some games attempt to fix this by running as many fixed timesteps as possible within a frame, then a smaller timestep to make up the difference to the delta time. However, this brings with it many of the same problems we were trying to avoid by using a fixed timestep (especially the lack of deterministic behaviour).

Interpolation

The established solution that is commonly used to deal with both these extremes is to interpolate, usually between the current and previous values for position, rotation etc. Here is some code:

// ticks required since start of game
iTicksRequired = iMS / 100;

// remainder
iMSLeftOver = iMS % 100;

// ... gameplay ticks

// finally, the frame update
float fInterpolationFraction = iMSLeftOver / 100.0f;
FrameUpdate(fInterpolationFraction);

// very pseudocodey, just an example of the translate for one object
void FrameUpdate(float fInterpolationFraction)
{
    // where pos is a Vector3 translate
    m_Pos_render = m_Pos_previous + ((m_Pos_current - m_Pos_previous) * fInterpolationFraction);
}

The more astute among you will notice that if we interpolate between the previous and current positions, we are actually interpolating *back in time*. We are in fact going back by exactly 1 tick. This results in smooth movement between positions, at the cost of a 1 tick delay. This delay is unacceptable, you may be thinking! However, the chances are that many of the games you have played have had this delay, and you have not noticed. In practice, fast twitch games can set their tick rate higher to be more responsive. Games where this isn't so important (e.g. RTS games) can reduce processing by dropping the tick rate. My Tower Defence game runs at 10 ticks per second, for instance, and many networked multiplayer games have low update rates and rely on interpolation and extrapolation. I should also mention that some games attempt to deal with the 'fraction' by extrapolating into the future rather than interpolating back a tick. However, this can bring in new sets of problems, such as lerping into colliding situations, and snapping.

Multiple Tick Rates

Something which doesn't get mentioned much is that you can extend this concept and have different tick rates for different systems. You could, for example, run your physics at 30tps (ticks per second) and your AI at 10tps (an exact multiple, for simplicity). Or use tps to scale down processing for far away objects.

How do I retrofit frame interpolation to an engine that does not support it fully?

With care is the answer, unfortunately. There appears to be some support for interpolation in Unity for rigid bodies (Rigidbody.interpolation), so this is definitely worth investigating if you can get it to work; I ended up having to support it manually (ref 7) (if you are not using the internal physics, the internal mechanism may not be an option). Many people have had issues dealing with jitter in Godot, and I am as yet not aware of support for interpolation in 3.0 / 3.1, although there is some hope of allowing interpolation from the Bullet physics engine in the future. One option for engine devs is to leave interpolation to the physics engine. This would seem to make a lot of sense (avoiding duplication of data, a global mechanism), however there are many circumstances where you may not wish to use physics but still use interpolation (short of making everything a kinematic body).
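To tie the above together, here is a small, self-contained Python sketch of the whole loop (fixed timestep plus render interpolation). It is purely illustrative, not engine code:

import time

TICK_MS = 100                      # 10 ticks per second, as in the example above

class Object2D:
    def __init__(self):
        self.prev_x = 0.0          # position at the previous tick
        self.curr_x = 0.0          # position at the current tick

    def tick(self):
        self.prev_x = self.curr_x
        self.curr_x += 5.0         # move a fixed amount per tick

    def render_x(self, fraction):
        # Interpolate back in time between the previous and current tick.
        return self.prev_x + (self.curr_x - self.prev_x) * fraction

def run(duration_s=2.0):
    obj = Object2D()
    start = time.monotonic()
    ticks_done = 0
    while True:
        ms = int((time.monotonic() - start) * 1000)
        if ms >= duration_s * 1000:
            break
        ticks_required = ms // TICK_MS          # ticks wanted since start
        while ticks_done < ticks_required:      # catch up on any owed ticks
            obj.tick()
            ticks_done += 1
        fraction = (ms % TICK_MS) / TICK_MS     # leftover fraction of a tick
        print(f"render at x = {obj.render_x(fraction):.2f}")
        time.sleep(1 / 60)                      # pretend to render at ~60 fps

if __name__ == "__main__":
    run()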
It would be nice to have internal support of some kind, but if this is not available, to support this correctly you should explicitly separate the following:

1. transform CURRENT (tick)
2. transform PREVIOUS (tick)
3. transform RENDER (where to render this frame)

The transform depends on the engine and object, but it will typically be things like translate, rotate and scale which need interpolation. All of these should be accessible from the game code, as they all may be required, particularly 1 and 3. 1 would be used for most gameplay code, and 3 is useful for frame operations like following a player with a camera. The problem that exists today in some engines is that in some situations you may wish to manually move a node (for interpolation) and this in turn throws the physics off etc., so you have to be very careful shoehorning these techniques in.

Delta smoothing

One final point to totally throw you. Consider that typically we have been relying on a delta (difference) in time that is measured from the start of one frame (as seen by the app) to the start of the next frame (as seen by the app). However, in modern systems the frame is not actually rendered between these two points. The commands are typically issued to a graphics API but may not actually be rendered until some time later (consider the case of triple buffering). As such, the delta we measure is not actually the time difference between the 2 rendered frames, it is the delta between the 2 submitted frames. A dropped frame may, for instance, show very little difference in the delta between the submitted frames, but double the delta between the rendered frames. This is somewhat of a 'chicken and egg' problem: we need to know how long the frame will take to render in order to decide what to render, and where, but in order to know how long the frame will take to render, we need to decide what to render, and where! On top of this, a dropped frame 2 frames ago could cause an artificially high delta in later submitted frames if they are capped to vsync! Luckily, in most cases the solution is to stay well within performance bounds and keep a steady frame rate at the vsync cap. But in any situation where we are on the border of getting dropped frames (perhaps a high refresh rate monitor?) it becomes a potential problem. There are various strategies for trying to deal with this, for instance smoothing delta times, or working with multiples of the vsync interval, and I would encourage further reading on the subject (ref 3).

References

1 https://gafferongames.com/post/fix_your_timestep/
2 http://www.kinematicsoup.com/news/2016/8/9/rrypp5tkubynjwxhxjzd42s3o034o8
3 http://frankforce.com/?p=2636
4 http://fabiensanglard.net/timer_and_framerate/index.php
5 http://lspiroengine.com/?p=378
6 http://www.koonsolo.com/news/dewitters-gameloop/
7 https://forum.unity.com/threads/case-974438-rigidbody-interpolation-is-broken-in-2017-2.507002/

lawnjelly

Texture Tools progress - 3D LUTs, heal tiling

Just a little progress update to show how my little texture tools app is progressing. I actually got waylaid for far too long investigating white balance correction. It is not at all important for this app, and is more something I am interested in for correcting photos and video.

White balance correction

Though my early efforts involved colour space conversion in the hope of making white balance correction easier, I am now of the opinion that the best way to do it is through 3D colour look-up tables (LUTs). I've previously used multiple 1D LUTs many times (in fact they are used extensively in 3d paint), but not 3D. The advantage of 3D LUTs is that, as well as independent control of contrast / gamma / balance for R, G and B, you can have complex interaction between the colour channels, allowing changes to things like colour saturation. The downsides of 3D LUTs are firstly that they take more memory, such that it is common to use sizes such as 33x33x33 rather than 256x256x256, and secondly that, as a result of the lower resolution, you have to implement 3D interpolation. I implemented 3D tetrahedral interpolation based on some slides from Nvidia, and I understand it should give better results than simple trilinear interpolation.

For my white balance experiments I was thus trying to create a LUT to go from one image (e.g. with tungsten white balance) to an identical image with correct white balance. I tried a variety of approaches and was most successful with an iterative 'monte carlo' scheme that optimized the LUT values towards the best fit for converting one image to the other. This did produce nigh-on perfect results, however it was quite slow, and I had to use approaches like performing runs on mipmaps of the large image. On top of the problem of producing a LUT to convert between the 2 test images, another problem was that the LUT had 'blanks' in it, that is, areas of the LUT where there was no data to create a best fit. I ended up trying 3D natural neighbour interpolation for this.

However, when it came to testing out the LUTs, I tried to find an existing graphics editor that would load my created LUTs. Gimp wouldn't seem to load LUTs, but after some research I found an excellent plugin called G'MIC which would load 'Hald' format LUTs. I also discovered that the author had attempted to do exactly the same thing as me (a LUT from 2 reference images), and his worked 10x better than mine lol. Firstly it calculated the LUT very quickly, and secondly, and more importantly, it managed to guess the 'in between' blanks in the LUT as well as the values that were in the images. I took this as a sign that I had wasted far too much time researching this, so I resolved to study the G'MIC source code and work out how he did it (for future reference) and abandon my own efforts in this direction. Unfortunately the plugin is written in a somewhat impenetrable scripting language, but I will try to work it out when I have some spare time (or email the author lol). However, something I have left in texture tools is a method for loading 3D LUTs and applying them in the pipeline, as it could be quite handy for users (there are a number of freely available LUTs around for colour balancing, or for getting specific 'looks').

Simple Methods

So, back to work on the actual app, I quickly added a few more basic methods (these are all added as nodes which can be connected up with inputs and outputs). I've put in levels, hue / saturation, crop and resize.
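To give a flavour of what applying a 3D LUT involves, here is an illustrative Python / NumPy sketch using plain trilinear interpolation (the app itself uses tetrahedral interpolation, which is a little more involved; this is not the app's actual code):

import numpy as np

def apply_lut_3d(image, lut):
    # image: float array (..., 3) in [0, 1]
    # lut:   float array (N, N, N, 3), indexed as lut[r, g, b]
    n = lut.shape[0]
    scaled = np.clip(image, 0.0, 1.0) * (n - 1)
    lo = np.floor(scaled).astype(int)      # lower LUT cell corner per channel
    hi = np.minimum(lo + 1, n - 1)         # upper corner, clamped to the table
    f = scaled - lo                        # fractional position inside the cell

    r0, g0, b0 = lo[..., 0], lo[..., 1], lo[..., 2]
    r1, g1, b1 = hi[..., 0], hi[..., 1], hi[..., 2]
    fr, fg, fb = f[..., 0:1], f[..., 1:2], f[..., 2:3]

    # Blend the 8 surrounding LUT entries: along r, then g, then b.
    c00 = lut[r0, g0, b0] * (1 - fr) + lut[r1, g0, b0] * fr
    c10 = lut[r0, g1, b0] * (1 - fr) + lut[r1, g1, b0] * fr
    c01 = lut[r0, g0, b1] * (1 - fr) + lut[r1, g0, b1] * fr
    c11 = lut[r0, g1, b1] * (1 - fr) + lut[r1, g1, b1] * fr
    c0 = c00 * (1 - fg) + c10 * fg
    c1 = c01 * (1 - fg) + c11 * fg
    return c0 * (1 - fb) + c1 * fb

# Example: a 33x33x33 identity LUT leaves the image unchanged.
n = 33
axis = np.linspace(0.0, 1.0, n)
identity = np.stack(np.meshgrid(axis, axis, axis, indexing='ij'), axis=-1)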
I then converted my healing code to work with float data, and made a method for simple tiling textures.

Tiling

This 'heal tile' method automates a very common operation in photoshop / gimp. First it offsets the image by 50% in x and y, so the mismatched edges form a cross shape centred on the image. Next it heals the two borders, using source material from the original un-offset image. Then it finally offsets by 50% again so the image is back in its original position. The percentage of the border that is used for the healing can be changed with a slider. (A rough sketch of these offset steps is at the end of this post.)

Synthesizing larger images

So that is the very basic functionality working. My next phase is to look at means of synthesizing a larger image from a smaller reference texture. I am not quite sure how to do this yet; JoeJ has suggested a very interesting paper by Eric Heitz and Fabrice Neyret which I have finally vaguely understood (there is no source code), so I may have a go at doing a version of their technique. I may also try some simpler techniques of splatting areas on top of each other. One extra feature I just put in today is that you can paint an alpha layer on the source images, to mark out what you want to use as source material for these latter techniques (as a photo may contain other junk aside from the texture of interest).
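For reference, here is an illustrative sketch of the offset half of the 'heal tile' method described above. The healing step itself is left as a placeholder since that code isn't shown here, and the function and parameter names are made up for the example:

import numpy as np

def heal_tile(image, border_frac=0.1):
    # 1. Offset by 50% in x and y so the mismatched edges form a central cross.
    # 2. Heal that cross, using the original un-offset image as source.
    # 3. Offset by 50% again to restore the original layout.
    h, w = image.shape[:2]
    offset = np.roll(np.roll(image, h // 2, axis=0), w // 2, axis=1)

    border = int(min(h, w) * border_frac)            # width of the healed strip
    healed = heal_cross(offset, source=image, border=border)

    return np.roll(np.roll(healed, -(h // 2), axis=0), -(w // 2), axis=1)

def heal_cross(img, source, border):
    # Placeholder for the actual healing of the central horizontal and
    # vertical seams (e.g. Georgiev-style seamless cloning from 'source').
    return img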

lawnjelly

White Balancing Images

Now I realise this is stretching slightly outside the normal realms of gamedev, but while working on my texture tools I've turned my attention again to something that is often a problem for photographers: white balance. While white balance is easy to correct when you have a RAW image file (before white balance has been applied), I've always found it considerably more difficult to correct when all you have to work with is a JPG or a video stream. Hence the general advice to photographers to get white balance right in the camera (using a grey card, preset balance, etc.). Of course the world is not ideal, and sometimes you end up with images that have a blue or yellow cast. I've found the colour balance correction tools built into things like Photoshop and the Gimp to be pretty awful. Often when you pick a gray / white point you can very roughly correct things overall, but you get colour tinges in other areas of the photo, and it never looks as good as a correctly balanced photo.

[Images: Tungsten (blue cast - needs correcting); Neutral (correct, reference image); Attempt to colour correct in Gimp using white and gray points (overly saturated result)]

I've worked through various naive approaches to balancing photos in the past. I started long ago with very basic multipliers on the sRGB 8 bit data. First mistake, of course: this is gamma compressed, and to do anything sensible with images you have to first convert them to linear with a higher bit depth. I now know better and use linear in nearly all my image manipulation.

Colour Spaces

The next step is colour spaces. I must admit I have found this very difficult to wrap my head around. I have used various other colour spaces like LAB and HSL in the past and just used the conversion functions without really knowing what they were doing. This past week I have finally got my head around the CIE XYZ colour space (I think!), and have functions to convert from linear sRGB to XYZ and back. Next I learned that in order to do chromatic adaptation (the technical term for white balance!) it is common to transform to yet another colour space called LMS space, which is meant to more accurately represent the colour cone cells in the eye.

Anyway, testing this all out is luckily rather easy. All you need is a RAW photo: export it as JPG with different white balance settings, then attempt to transform the 'wrong' JPG image into the 'right' JPG image. Usually I would do this myself, but there are some rather convenient ready-made images here:
https://en.wikipedia.org/wiki/Color_balance

So I've been using these to test, trying to convert the tungsten (blue cast) image to neutral. I've had various other ideas, but first wanted to try something very simple: attempt to get the photo back into the initial colour space / gamma, alter the white balance, then convert it back again. To do this I converted both the blue (tungsten) image and the reference (neutral) image to linear sRGB, then to XYZ, then to LMS. I found the average pixel colour in both images, then found the multiplier that would convert the average colour from the blue image to the neutral one. Then I applied this multiplier to each pixel of the blue image (in LMS space), and finally converted back to sRGB for viewing. (A small sketch of this multiplier approach is at the end of this post.)

Results

[Images: LMS colour space; Linear RGB colour space]

The results were slightly better doing this process in LMS space versus linear RGB space. The background turned a better gray, rather than still having a blue tinge as with the RGB space. The result looks pleasing to the eye, but you can still tell something is 'off', and the white point on the colour checker chart in the photo is visibly off. As well as the photo, I also superimpose a plot of the RGB values, converted value against expected value, to show how accurate the process is. A perfect result would give a straight line plot. Clearly there is quite a bit of colour variation not corrected for in the process.

My current thinking is that there are 2 main things going on here:

To do the conversion, the colour space / gamma should exactly match that of the same stage in the RAW conversion. Maybe it doesn't. Is the RAW conversion done in linear camera space rather than any standard colour space? I don't know. I've attempted to dig into some open source RAW converters to get this information (DCRaw and RawTherapee) but have not had much luck.

To get things to the state when the white balance was applied in the RAW converter, you not only have to reverse colour spaces, you have to reverse any other modifications that were made to the image (picture styles, possibly sharpening etc.). This is very unlikely to be possible, so this technique may not be able to produce a perfect result.

Aside from the simple idea of applying multipliers, my other idea is essentially a 3D lookup table of colour conversions. If (and that is a big if) there is a 1:1 mapping of input colours from the blue image to reference colours in the neutral image, it should be possible to simply look up the conversion needed. You could do this by going through the blue image until you found a matching pixel, then finding the corresponding pixel in the neutral image and using that. In theory this should produce a perfect result on the test image, by definition (if the 1:1 mapping holds). I should say at this point that the intended use is that if you can find a mapping for a specific camera to get from one white balance setting to another, you can then apply this to correct images that were taken BEFORE the mapping was found. So if you are attempting to convert a different photo, it is likely that there will be pixels that do not match the reference images, and some kind of interpolation and perhaps a lookup table would be needed, unless you were okay with a dithered result. At this point I'm going to try the lookup table approach. I suspect it may give better results, but not perfect, because I do fear that picture styles / sharpening may have made a perfect correction impossible, much like the entropy-reversing idea of putting a shattered teacup back together, from Stephen Hawking and our old friend Hannibal Lecter.

Ideas?

Anyway, this blog post was originally meant to be a question for the forum, but grew too large. So my question would be whether any of you guys has experience in this, and advice or ideas to offer? Throughout this research into the whole colour thing I have felt like a real newbie; it is quite a big field with a number of people working on it, and I'm sure there must be some established ideas for this kind of thing.
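As mentioned above, here is a small illustrative Python sketch of the average-colour multiplier approach. To keep it short it works in linear sRGB only (the better-performing LMS variant, via XYZ, is omitted), and it is not the actual texture tools code:

import numpy as np

def srgb_to_linear(s):
    # Standard sRGB decoding (inputs in [0, 1]).
    return np.where(s > 0.04045, ((s + 0.055) / 1.055) ** 2.4, s / 12.92)

def linear_to_srgb(l):
    l = np.clip(l, 0.0, 1.0)
    return np.where(l > 0.0031308, 1.055 * l ** (1 / 2.4) - 0.055, 12.92 * l)

def balance_by_reference(bad_srgb, ref_srgb):
    # Scale the miscast image so its average colour matches the reference.
    # Both inputs are float arrays (..., 3) in [0, 1].
    bad = srgb_to_linear(bad_srgb)
    ref = srgb_to_linear(ref_srgb)

    # Per-channel multiplier mapping the average of the bad image onto the
    # average of the reference image.
    gain = ref.reshape(-1, 3).mean(axis=0) / bad.reshape(-1, 3).mean(axis=0)

    return linear_to_srgb(bad * gain)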

lawnjelly

Node Editor

Just a quick update to show I've just started getting the GUI working for texture tools. The node editor is still a work in progress but it seems to do the job. I still haven't done any significant work on the methods yet but that is the fun stuff .. I've been getting the boring interface and framework working first. There are just some simple test methods as yet, I will be adding more complex blending etc tech soon. It is now pretty easy to add new methods (like plugins) without worrying about the rest of the program.

lawnjelly

Texture tools

After attempting to texture some models recently, using projection painting in 3d paint, the new healing brush is proving fantastic at healing up those edges between projections, but I must admit I get very frustrated trying to find suitable reference images. Part of the problem, I have decided, is that I'd often like to have larger, more homogeneous areas of texture to clone from. Given that I have a reasonable healing implementation working, it struck me I should be able to have some algorithms do this little job for me, to provide better source material for painting.

I thought about putting this ability directly into 3d paint; however, it seemed to make more sense to make a separate small utility app for this kind of thing, which might be useful to more people. So, eager not to repeat the major mistake I made with 3d paint, that of under-engineering the initial program, I decided to make a positive effort and spend a few days building a solid backbone for the texturing program, so it will be easy to maintain and add to in the future. Instead of making a photoshop-like affair, this will be a very focused app; at the moment I'm thinking in terms of a node based editor with some input textures and methods, producing intermediate and final textures for export. I'm planning for you to be able to move the nodes in the UI, and assign inputs, outputs and parameters. Although the UI is not yet operational, the framework is getting there and I've implemented a first test method.

I decided one useful first pass, before other methods, would be to equalize the colours across an image. Here is an example run on a skin photo: left is before, right is after. Bland and boring on the right, but that is what I am going for; it should be easier to clone etc. The way the method works is it first finds the average colour of the entire image, then gaussian blurs the image. For each pixel it then finds the difference between the blurred colour and the average colour, then adds this difference (with a multiplier) to the original pixel colour. This has the effect of reducing local colour contrast, or increasing it, depending on the sign of the multiplier. (A rough sketch is below.)

Anyway, obviously loads more methods to come, maybe some using variations of the healing technique from Georgiev's paper. All colours are converted to floats, and can be converted to linear, and to HSL or LAB colour spaces.
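Here is a rough Python / NumPy sketch of that equalize method as described. It is illustrative only; the parameter names and defaults are made up, and the real app works on its own float image types:

import numpy as np
from scipy.ndimage import gaussian_filter

def equalize_colour(image, sigma=30.0, multiplier=-1.0):
    # image: float array (H, W, 3), ideally already converted to linear.
    # With a negative multiplier the large-scale colour variation is flattened
    # towards the overall average; a positive multiplier exaggerates it instead.
    mean = image.reshape(-1, 3).mean(axis=0)                    # average colour of the whole image
    blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))   # low-frequency colour, channels untouched
    diff = blurred - mean                                       # how the local area differs from the average
    return np.clip(image + multiplier * diff, 0.0, 1.0)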

lawnjelly

Project Management

Something I feel doesn't get discussed enough in the world of gamedev, and something that in general we (more often than not) tend to be incredibly bad at, is project management. This isn't something I'm an expert at myself, I get it wrong plenty of times, but at least I'd like to stimulate a little thought on the subject. One of the most crucial aspects of project management is the interplay between project scope and timescales / scheduling. It doesn't matter whether you are managing an AAA game with a multimillion dollar budget, or a one man bedroom indie game or first game, this all affects you.

Scale & Duration

The first point is that as you increase the scale of a project, the time needed to complete it will increase. You would think that there would be some kind of linear relationship between the scale and the time needed, but in practice there is a tendency for the time needed to increase almost exponentially with scale. Real life observations show that it can be incredibly difficult to predict the time taken to complete a complex task. If we were, for example, asking how long it would take a man to harvest a small field of potatoes by hand, we could find out how long it took him to do 1 metre squared, multiply it by the size of the field, leave some time for breaks etc. and come up with a rough estimate. Unfortunately, developing software is not like harvesting a field. Some tasks can be made more like this (artwork production for instance), but things like programming are absolutely nothing like this model. You can add more artists, but if you add more programmers expecting a proportional productivity increase, expect a shock (see The Mythical Man-Month).

Dunning-Kruger

Couple this difficulty in making time estimates with something called the Dunning-Kruger effect: broadly (paraphrasing Wikipedia), a cognitive bias whereby people with little knowledge or skill in an area tend to overestimate their competence in it. (This blog post is probably a great example of Dunning-Kruger!) Those people who have little knowledge of a task will tend to be very bad at estimating how long that task will take to complete. Unfortunately, the people typically responsible for making time estimates are exactly those people who tend to have little knowledge of the subject area. This is often management, and / or 'yes men'. (Sycophants / yes men (or yes people) are those types that follow the strategy of agreeing with everything their boss says, or providing extremely optimistic estimates. This is a very widely used strategy for promotion: constantly reassuring the superior provides short term appreciation, and then when things inevitably go wrong, it is usually easy to blame an external event or third party.) Other types of people who often have little knowledge of a task are beginners, and those choosing to work on something they haven't done before (most of us). So essentially most of us are prone to this Dunning-Kruger effect.

With beginners this almost always shows itself in new posts to forums introducing themselves as knowing nothing about programming or artwork, but expecting to complete a game such as one requiring teams of 50+ experienced staff and years of work, in a couple of weeks. Presumably they expect there is some app for doing this, and they will just press a 'make game' button after selecting the right parameters. It is easy to see the error in beginners, but this same thing tends to happen to all of us, and we have to actively fight against the tendency.

Real World

So how long do things really take?
Well, a wise man once told me, as a rule of thumb, that if you are familiar with the subject area, a project will on the whole take at least 3x as long as your estimate. What does this mean? Well, if you are in charge of developing a typical 18 month commercial game, you should be aiming for something you believe you can complete in 6 months. No, that's not a prototype, that's the whole thing. Once all the things have gone wrong (and they will) it will easily expand to take the full allotted time. In the best case, if it is finished ahead of schedule, you have extra time for testing and polish. If you are an indie and you want to complete a game over a 6 month schedule, you should be aiming for something you can finish in 2 months. If you are a beginner and you are aiming for something you can finish in 2 weeks, you should be aiming for something you can finish in 1-2 days. Beginners are the worst at estimating, and are usually wildly out, and typically don't have the skills to complete what they planned, so often don't gain the skills to complete their original project until months or years later.

How can you battle this effect?

The best advice I have heard to battle these problems is to aim small. This applies to everyone, but 100x more to beginners. There are a billion unfinished, shelved projects out there in the world. Don't be one of those people. Choose something you *can* realistically complete, something 3x to 10x smaller than your original vision (depending on your skill level). If you are a beginner this means starting by making tic-tac-toe. Make pac man, make tower defence, and gradually increase your skill set so you can make better games (and better doesn't always mean bigger). If you are learning OpenGL / DirectX, by all means learn to make your own engine, but be under no illusions that you will make something that others will want to use. Commercial engines now tend to be made by small armies of experienced devs, and have to work on a huge variety of hardware, with testing and all kinds of considerations you haven't even thought of.

If you are an indie and you dream of making the next amazing RPG to rival the big companies, have a think: how much artwork, how much sound, how many voiceovers, how many scripts, how many levels do you need to create? Aside from the programming, creating the assets for such games is a major undertaking. Have a look at the credits list of a game you would wish to emulate. If you are a company, with possible funding, be very realistic about what you can create in your allotted 18 months. Aim to be leveraging and reusing your own and others' technology, because creating everything from scratch every time is just not possible. Aim for that 6 month timescale, get estimates from all the tech people (especially not just from 'yes men'), and use them all to make a balanced decision on what you think is realistic.

lawnjelly

Tower Defence - Post Mortem

Background It's been a few days since I put my latest alpha of my entry for the Tower Defence challenge on itch.io and my project page: https://lawnjelly.itch.io/ramsbottom I think I've covered the requirements for the challenge, and made the game a bit above just the requirements so it is a bit more fun to play and has some longevity. The reason I entered this time is because I'd been watching the previous challenges with a little envy, and had been waiting for one that seemed simple enough (I think the last one I looked at had multiplayer and I knew that could be a bit of a bag of worms). My usual low level c++ / opengl approach would probably be overkill for a small / low timescale game, so I decided it would be a good opportunity for me to try out Unity engine, which a lot of people are using currently. What went right 1. Using Unity Rapid development, well suited for this type of small game. 2. Attempting to get as much of the challenge completed asap, then leaving further time for more features / polish. I finished much of the base functionality in the first week, then spent time on and off in the next few weeks just making it better. There are lots of advantages to getting something 'finished' up front, and this is a development model I am trying to move towards. You can 'call time' at any time, and still have a functional product. Unforeseen events always seem to appear and limit the time you can spend on a project. This approach guarantees that even in this situation you will still have a 'product' rather than a half-done version of your 'glorious vision'. 3. Using the asset store, not building all the models myself, and using sites such as freesound for the sound, and creative commons music. For small learning games such as this it didn't make sense for me to make the assets. I know it takes me 2/3 of the time to make artwork etc, and while I am improving at it, I am better at (and enjoy) programming more than making artwork. 4. Finding some good tutorials to learn Unity (then throwing out their approaches!). There are some great tutorials out there (brackys for instance), and these are good for learning unity specific stuff, but in some cases I could instantly see better ways of doing things. I put this down to many tutorials being pitched at total beginners, who are happy to get anything on the screen. But e.g. using Unity editor to lay out levels just seemed ridiculous and limiting. What went wrong 1. C# . I hate it, absolute abomination of a language. I spent more time than should ever be necessary screaming at the damn thing, it makes visual basic look like Shakespeare. I could write a whole blog post just on the things about it that make me seethe, but yeah, if I could avoid ever having to use it again, that would be great. 2. Monodevelop Yeah, see point 1. Pretty bad. I might have to see if I can get another editor working if I use Unity again. I hear VS code may be worth a go (I'm on Linux). Monodevelop seemed really keen to reformat my code in stupid ways I couldn't turn off, and kept trying to autocomplete words incorrectly (that I also couldn't turn off). 3. Lack of debugging support. This may have been due to my setup, it might not be straightforward to get debugging working on Linux (I'm assuming with Unity it is possible to do step by step debugging?). This meant huge problems debugging anything but the simplest code (I had to resort to lots of Debug.Log statements). 4. Unity editor. I'm not really a drag and drop sort of guy. 
I tried to avoid having half the game 'code' be a particular setup in the drag-and-drop editor. I'm not even sure how to back that stuff up; if I'd had a crash I could have lost the lot. Come to think of it, did I need to back up all the assets too, with all the .meta files? I don't know. At least with code you can zip it up small and keep lots of backups. There should be a menu option to save your entire project in compressed form without all the bloated assets, just the stuff that is a pain to lose.

5. Unity build times. I had massive problems with excessive build times, taking hours, particularly when changing platform. It kept baking lightmaps (or maybe doing something with shaders?) even though, as far as I knew, I had turned them off. Eventually, more by luck than judgement, I found that deleting some skydome assets I had imported and deleting (rather than turning off) an extra light finally cured the problem. Far too little information is given out by the build logs to let you know WHY your builds are taking hours; Googling reveals I was not the only one with this problem. Don't just tell me 'baking lightmaps', tell me which light and which objects are causing it.

Conclusion

Overall I found the challenge very worthwhile. There are several of us working on it, and bouncing ideas around and spurring each other on works very well. A little hint of friendly competition is good too!
I managed to get a fair basic grounding in Unity, and have a better idea of whether it would be worthwhile using in any future projects. I may use it for a couple more small games, or evaluate some other current engines (Unreal, or perhaps something more code-orientated).

Doing such small projects is also great for experiencing and practising the whole development cycle, including release and marketing. This can give a much better perspective on how much time you should invest in the different stages, and improve your ability to schedule and finish larger projects. It is something I would recommend to everyone from beginners through to advanced developers.

lawnjelly

Tower Defence - nearing the finish line

After an initial burst of energy last month, I knew I wouldn't have much time available for working on the Tower Defence challenge up until the June 30th finish. However, I did manage to get much of it working already and I am now mostly at the 'finishing up' stage.

Changes

More music tracks.

I have added some more game modes. The normal mode is a scripted set of 12 levels; 'random' generates 20 random levels of increasing difficulty; and 'budget' is a shorter game of 6 random levels where you are given an estimated amount of money to beat all the enemies at the start and can't win any more cash, so you have to make it last.

Cross platform

I have made some effort to make the game cross platform, and it now works well on Linux, Windows and Android. To that end there is a certain amount of 'designing for the lowest common denominator'. I had to scale back the particle effects so that it would run well on tablets and mobile phones. If I were spending more time on it I would scale the number of particles according to the hardware.

Graphics quality setting

Inside the game pause menu there is now a performance slider where you can trade off graphics quality against speed. It sets the Unity graphics quality level. This works well, although I did encounter a snag on Android: the higher quality settings were causing complete graphical corruption on some devices, possibly when trying to switch anti-aliasing. While I could try to blame this on Unity, I know that devices can be bad at reporting their capabilities, and you often have to just try it and see what happens.

What was a design oversight, however, is that when you set a graphics quality that causes corruption, Unity SAVES your choice, so the next time you run the game it is also corrupted. You can quit out from the OS, but the only way around it (on Android) was to go into the app settings and delete the cached app data. Clearly this was not ideal for users. As a bodge compromise, when you first start the game it reads the default graphics quality, persists this separately, then resets to that default every time you run the game. This means any changes to quality are lost on each startup, which is annoying, but better than the graphical corruption. Another alternative would be to ask the user to click an 'OK' box within 15 seconds to confirm that the graphics changes worked, but I did not have time for this. (There is a small sketch of the reset-on-startup idea a little further down.)

Gamepad / keyboard input

While a lot of tasks have gone very well, I spent much of yesterday and today getting keyboard and gamepad input working. Most people will play the game using a mouse or touchscreen, and it works best with these. However, my Android TV box has no touchscreen, and I find it annoying how few Android game developers support other means of input. I had previously managed to get gamepad input working on Android using the Android SDK via Java / NDK, but configuring input in Unity seems rather overcomplicated. I read several recommendations that the Unity input system was poor and that it was better to buy the InControl or Rewired assets instead. I tried the free version of InControl but couldn't get it to recognise my gamepad, so I went back to the built-in Unity InputManager and finally managed to (barely) get it working. The selection of joystick axes seems non-standardized, so I have no idea whether my setup will work with other joysticks / gamepads, not having the time or facilities for further testing.
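Going back to the graphics quality workaround for a moment, the reset-on-startup idea boils down to something like the sketch below. This is illustrative only rather than the actual game code; the class and preference key names are made up.

using UnityEngine;

// Runs once at startup; assumes it lives on an object in the first scene.
public class GraphicsQualityGuard : MonoBehaviour
{
    const string DefaultQualityKey = "DefaultGraphicsQuality"; // made-up key name

    void Awake()
    {
        // First ever run: remember the project's default quality level.
        if (!PlayerPrefs.HasKey(DefaultQualityKey))
        {
            PlayerPrefs.SetInt(DefaultQualityKey, QualitySettings.GetQualityLevel());
            PlayerPrefs.Save();
        }

        // Every run: fall back to the known-good default, so a quality level
        // that corrupts rendering on this device cannot persist across restarts.
        QualitySettings.SetQualityLevel(PlayerPrefs.GetInt(DefaultQualityKey), true);
    }
}

The in-game slider can still call QualitySettings.SetQualityLevel as normal; the point is just that whatever it sets can never survive a restart.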
Returning to input: as well as simply getting a gamepad working, there is also the issue that the game was designed for the mouse. I had hoped that navigating the menus would be a simple affair, but it took me a while to work out how to select a default menu item when entering each menu screen, without which you cannot change the selection at all. Many people I found when googling had had similar problems, and I am still not clear on the 'correct' way to do it (see the rough sketch at the end of this post). In addition, many of the UI items did not show a large enough change in colour to indicate that they were highlighted, so I had to change this manually.

Gameplay with keyboard / gamepad

The biggest hurdle was how to place towers in a game designed for picking a spot with the mouse. I decided to have a cursor which you can move up, down, left and right across the map. This was a bit fiddly to get working with the input, and is still not perfect, but it does allow you to play the game, albeit not as efficiently as with a mouse / touchscreen. Finally I had to cover tower selection via keyboard / gamepad. To do this I simply had a button cycle through the available towers, and put in an indicator arrow to show which one is currently selected.

I have now tried it, and it even works well on my Android TV box with a gamepad, which has surprised even me. Performance has been very good, even on low-power devices. That is probably largely down to careful design, being already familiar with the performance bottlenecks on this hardware (fill rate is usually the biggest issue), and using mostly simple mobile diffuse shaders.

Future

I suspect that from now until release I will mostly be concerned with testing. The input in particular has introduced all kinds of subtle potential bugs from different combinations of input devices. I would ideally like a high score table, but handling virtual keyboard input on different platforms puts me off, unless I use a simple initialling system.
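Here is the rough sketch mentioned above for the menu selection problem. It is one way of giving each menu screen a default selected item so that gamepad / keyboard navigation has somewhere to start; it isn't necessarily exactly what I ended up with, and the field name is invented.

using UnityEngine;
using UnityEngine.EventSystems;

// Attach to each menu screen; wire firstSelected to the button that should
// start highlighted, so gamepad / keyboard navigation has somewhere to begin.
public class MenuDefaultSelection : MonoBehaviour
{
    public GameObject firstSelected; // e.g. the 'New Game' button (made-up name)

    void OnEnable()
    {
        if (EventSystem.current != null)
        {
            // Clear first, then set: re-selecting an already selected object
            // can otherwise be ignored when the menu is shown again.
            EventSystem.current.SetSelectedGameObject(null);
            EventSystem.current.SetSelectedGameObject(firstSelected);
        }
    }
}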

lawnjelly

Tower Defence second Alpha Version

This is my second alpha version of my gamedev Tower Defence challenge entry. Fingers crossed the links will work.

Linux64 (40 megs): https://www.dropbox.com/s/jfzzh3rluzd6a3h/TowerD_012_Linux_x86_64.zip?dl=0
Win32 (32 megs): https://www.dropbox.com/s/8uj1zo8izp0weg3/TowerD_012_win32.zip?dl=0
Android (49 megs): https://www.dropbox.com/s/60rnljff8al5ugw/TowerD_012.apk?dl=0

I did miss out on several days because my internet got taken out by a massive lightning storm, and the Unity editor refused to run without an internet connection... However, new features:

3 new enemies: bird, spider and boss
2 new towers: ballista, plasma tower
Each level is now scripted, per wave and per enemy
Some terrain improvements
Projectile models (bullet, cannonball, plasma, bolt)
Tooltips on tower buttons
Lots of changes to game rules / balancing
Templates for each tower and each actor type
Player has 3 lives

Tip: I don't think I gave enough cash for the first level in this build, but you can usually win by building 3 of the first cheap tower and 1 of the second tower clumped together in a group; that way the towers can defend each other and they aren't too expensive. Once you get past the first level, the cash reward for completing a level is quite generous.

Android build (edit)

After considerable effort, I finally got an Android build working. The build kept hanging while trying to bake lightmaps (which I had been trying to turn off), but after several hours of experimentation and changing options randomly I finally got it going. The APK size is nearly 50 megs, and I have no idea why it is larger than even the Linux build. But it does run: choppy on my old tablet when there is a lot of action happening, but it shows potential. I think the particle effects are slowing it down, so I'll add options to turn them down. Also, with a tablet touchscreen there is no preview of whether you can build on a spot, because there is no 'mousemove' equivalent, only a touch, which acts like a click to build. [Also I'll have to scale up the tower selection buttons on mobile, because it's easy to build by accident instead of hitting the button when you don't have a mouse.] FIXED

lawnjelly

Alpha Version

In the spirit of 'release early and often', I have a playable alpha version of my challenge entry. I am not quite sure where is reliable to upload to, so I have been trying Dropbox, which I've had issues with in the past. Fingers crossed these links work; please let me know if they do and whether it installs and runs. The zip file is about 30-40 megs, doesn't require any installation, and is typical Unity fare.

Linux 64 bit: https://www.dropbox.com/s/fmjzmi249a3otph/TowerD_010_Linux_x86_64.zip?dl=0
Windows 32 bit: https://www.dropbox.com/s/i7bxe1pub56rpc4/TowerD_010_win32.zip?dl=0

I've been developing on Linux but just tested the Windows version on my ancient laptop and it seems to work. It is very much a test version; I still need to put in credits for things like the music and assets (and need to work out how to do scrolling credits).

Instructions

When you start, choose New Game. You start with a certain amount of cash, and enemies appear in waves. You must build towers to destroy them before they destroy your base. Click one of the 3 tower types to select it for building, then click on a location that comes up grey to build. Each tower costs money to build and to repair. You get cash when you kill enemies and on completing a level. The enemies move down the path towards your base but will also attack towers. If a tower is destroyed you can click on it to repair it. The enemies get so close to towers when attacking that the tower cannot defend itself, so you often have to build towers in pairs (this was more by accident than design, lol).

Cameras

There are currently 3 camera types, and you can toggle between them by clicking the magnifying glass icon. Overview shows the whole map from above, area mode tries to show the area where the action is taking place, and follow cam follows the enemy of your choosing close up. You can select which enemy to follow by clicking close to it on the path.

The path the enemies follow is darker than the rest of the terrain. You can't build on the path, or on squares that have buildings or other props on them (these will indicate red), so you have to be strategic about where to build towers. There's still quite a bit I intend to add, like multiple lives, a high score table, balancing the difficulty as you progress, and more towers / enemies.

lawnjelly

Tower Defence Progress

After a few days away from home I've done a bit more on the game. I must admit I'm kind of looking forward to wrapping up and working on the next thing, so I am thinking more in terms of what I need to finish.

The towers now have separate logic and firing / sounds. There are just 3 so far but I'll probably add another 2 or 3. I don't know if I have the energy to put in an upgrade path for tower technology... maybe if people play it, lol.

The enemies now attack towers as well as your base. The AI for this is pretty simple, and they have no collision detection, so they just go for the tower that attacked them. Maybe I will put in collision detection, but it potentially opens up a barrel of worms (avoidance etc.) so I may leave it on the todo list. Towers have health, and once they are destroyed they stop working and have smoke coming out. You can repair them by clicking; this costs money in proportion to how damaged they are.

The building placement is a bit better now: there is a zone around houses where other buildings will not be placed, so they aren't too close together. The buildings and props serve no function as yet except that they prevent you building towers on that spot, so you have to be a little strategic about where you build.

As I'm a little worried the Unity terrain is not practical, both in terms of installation size and performance, I've just today put in some procedural texturing of the ground block underneath the play area, which I am using as an alternative to Unity terrain. It looks okay now. After some research into how this was possible, I did it with just software texture splatting, pre-creating the texture on level load (there is a rough sketch of the idea at the end of this post). On the terrain front I plan to vary this and the buildings / trees etc. as you progress through the levels, so it doesn't all look the same.

I should probably also put in a high score table, as it feels like a bit of a let-down when you get a long way and then fail a level. But that depends on how easy it is to put in a keyboard to type in a name, as it might not be played on a desktop. Actually I've no idea if it will work on other platforms, but I'm hoping Unity will handle most of that automagically. The controls are pretty tablet friendly so far.

A big issue to confront will be installation size. I'm not sure yet what this depends on, but it seems ridiculous in my test builds (187 megs last time). Hopefully I can get some kind of build log to find out what is being included in the assets, and strip out a load of unneeded stuff and oversize textures.
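For anyone curious, the splatting itself is nothing clever. A simplified sketch of the idea is below (not the actual game code; the class name and weighting function are made up). It just blends a few source textures by weight into a single Texture2D once, at level load:

using UnityEngine;

// Simplified software texture splatting: blend base textures into one
// Texture2D on level load, driven by per-pixel weights (e.g. from noise).
public static class GroundSplatter
{
    // 'layers' are readable source textures (grass, dirt, rock...);
    // 'weights' returns a blend weight per layer at normalized coords (u, v).
    public static Texture2D Create(int size, Texture2D[] layers,
                                   System.Func<float, float, int, float> weights)
    {
        var result = new Texture2D(size, size, TextureFormat.RGB24, true);
        var pixels = new Color[size * size];

        for (int y = 0; y < size; y++)
        {
            for (int x = 0; x < size; x++)
            {
                float u = x / (float)size;
                float v = y / (float)size;

                Color sum = Color.black;
                float total = 0f;
                for (int i = 0; i < layers.Length; i++)
                {
                    float w = weights(u, v, i);
                    // Assumes the source textures are readable and set to
                    // repeat, so sampling beyond 0..1 makes them tile.
                    sum += layers[i].GetPixelBilinear(u * 8f, v * 8f) * w;
                    total += w;
                }
                pixels[y * size + x] = total > 0f ? sum / total : sum;
            }
        }

        result.SetPixels(pixels);
        result.Apply(); // upload to the GPU once, at level load
        return result;
    }
}

Doing the work once per level keeps the per-frame cost at zero; the only price is a short pause while the level loads.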
 

lawnjelly

Tower Defence video 2

Just posting this quickly as I will be away for 5 days or so and unable to access a PC or work on the game, so a little update.

Got sound working. It needs some tweaking and I need variations for the sounds, but the programming side is all done. I have a sound manager that pools sound sources; I don't know if that is more efficient than putting sound sources on all the game objects (there is a rough sketch of the pooling idea at the end of this post). I also figured out how to put animation events on the animations to play sounds in sync (footsteps, sword etc.).

Made a couple more quick towers in Blender, although there is no separate logic for the towers yet; they are all firing the same bullets. Changed from using my native huts to some medieval assets from the Unity asset store. As these houses are bigger than one grid square there is now a bit more complex logic for placing them, rotating them until they fit on the map. The waypoints now lead to the front of the big building which will be your base (I might change the building model though).

And added some particle effects. Made a pool system for these too, and got the effects themselves again from the asset store. There are small explosions, smoke (for your base when it is being hit), flames, muzzle flashes, and blood (although this last is my old test particle system). Did a little tweaking to the area camera to make sure more of the relevant action stays 'in frame'.

Although all the towers fire the same so far, I will have them fire different projectiles at different speeds / damage / range, and maybe have some of the enemies attack the towers so you have to repair them. I've got a feeling play balancing might be a bit time consuming at the end: making it not too hard or too easy, and having it get increasingly difficult as you progress...
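For what it's worth, the sound pooling is nothing fancy either. A minimal sketch of the idea (simplified and with made-up names, not the actual game code):

using UnityEngine;

// A small pool of AudioSources reused for one-shot 3D sounds, instead of
// giving every game object its own AudioSource component.
public class SoundPool : MonoBehaviour
{
    public int poolSize = 8;
    AudioSource[] sources;

    void Awake()
    {
        sources = new AudioSource[poolSize];
        for (int i = 0; i < poolSize; i++)
        {
            // Each pooled source lives on its own child object so it can be
            // positioned in 3D where the sound should come from.
            var go = new GameObject("PooledAudio" + i);
            go.transform.SetParent(transform);
            sources[i] = go.AddComponent<AudioSource>();
            sources[i].playOnAwake = false;
            sources[i].spatialBlend = 1f; // fully 3D
        }
    }

    public void PlayAt(AudioClip clip, Vector3 position, float volume = 1f)
    {
        // Grab the first source that isn't busy; if all are busy, drop the sound.
        for (int i = 0; i < sources.Length; i++)
        {
            if (!sources[i].isPlaying)
            {
                sources[i].transform.position = position;
                sources[i].clip = clip;
                sources[i].volume = volume;
                sources[i].Play();
                return;
            }
        }
    }
}

An animation event can then call into something like PlayAt with the footstep or sword clip and the creature's position.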

lawnjelly

Tower Defence first video

Finally made a video. This doesn't have the terrain showing; I figured out it is quicker to make a build with the terrain switched off, and it increases the file size quite a bit, so I might drop the terrain, not sure yet.

I put in support yesterday for more than one enemy type. I have got a third type but haven't put it in yet; they will have different strengths / health / speed.

I also spent ages yesterday debugging a nasty C# references bug, the first time I've had to do any major debugging, and it was difficult because I only have Debug.Log statements to rely on, as I don't yet have debugging working in MonoDevelop. The cause of the bug is that I'm using pooling for my game objects, so as not to constantly create new Unity objects; they are just reused, with their position and visibility switched. So I have an intermediate 'reference' (another reference!) actor which stores which pooled actor of each type is in an actor slot. This makes the adding and deleting actor code slightly more complex. When deleting an array element, I switched the last array element with the one to be deleted, then decremented the count. However, in C# the = operator does not copy the data like in C++, it copies the reference, so all kinds of unpredictable behaviour resulted. Anyway, bug solved, crisis averted! (There's a small sketch of the pitfall at the end of this post.)

I also put in some basic auto cameras, and they work pretty well. There is an overview of the whole board, which does not change; a follow cam which zooms in on a particular actor; and an area cam, which treats all active actors as an area to focus on. The follow cam actually follows a point just ahead of the actor, because there is a delay from the smoothing. The area cam needs a little tweaking to get better.

Next I need the other enemy type, and different tower types. And I need to put in a special big building or something for your base, maybe with some particle effects to show it being destroyed.
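For anyone who hasn't been bitten by this before, here is a boiled-down sketch of the shape of the problem. The types and names are invented, not my actual pooling code:

using UnityEngine;

// Minimal illustration of the reference-semantics pitfall described above.
// ActorSlot is a class (a reference type), like most Unity-side wrappers.
public class ActorSlot
{
    public int pooledIndex;
    public bool active;
}

public class PoolPitfallDemo : MonoBehaviour
{
    ActorSlot[] slots = new ActorSlot[8];
    int count;

    void Start()
    {
        for (int i = 0; i < 3; i++)
            slots[count++] = new ActorSlot { pooledIndex = i, active = true };

        // Swap-and-delete slot 0, C++ style.
        slots[0] = slots[count - 1]; // BUG: slots[0] and slots[2] now point at the SAME object
        count--;

        // 'Tidying up' the old tail slot therefore also trashes the element
        // we just moved into slot 0 - the source of the unpredictable behaviour.
        slots[count].active = false;

        Debug.Log(slots[0].active); // prints False, not True

        // One fix: clear the stale reference instead of mutating through it,
        // e.g. slots[count] = null; another is to copy the fields you need,
        // or make the slot a struct.
    }
}

Coming from C++ it is an easy trap: with value types (structs) the assignment really would copy the data, but with classes you just end up with two slots pointing at one object.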

lawnjelly

Tower Defence - Day 7

I actually got to spend more time on the tower defence game today.

Things I got done

In-game UI - cash, enemies remaining and player base healthbar
Cash system - paying for towers, bonus cash on completing a level and killing enemies
Waves of enemies (very rudimentary)
Main menu and options menu, and changing between game and menu scenes
Animations on completing a level, or having your base destroyed

It is all coming together now for the most basic version.

Main things I still have to add

Different enemies, wave variations
Different towers
Upgrading and repairing towers
Sound

The basic version is almost completely playable now; I just have a small bug where it doesn't preserve cash between levels (see the note on carrying state across scene loads at the end of this post). The healthbars are also showing through a couple of the animations, but I'll just hang them off an empty node tomorrow and cull them.

I am quite pleased with what I have achieved in a week of part-timing, with most of that spent watching tutorials and learning. Much of it I learned / copied from the Brackeys tutorials on the basics of using Unity (https://www.youtube.com/user/Brackeys/videos), including some of the user interface stuff today. I can't wait to get some funky cameras going and perhaps do some videos.
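On the cash-between-levels bug: I haven't tracked it down yet, but for reference, a common way to keep simple values alive across Unity scene loads is a small persistent object along these lines (a sketch only; the class name is made up):

using UnityEngine;

// Simple persistent holder for values that must survive scene changes,
// such as the player's cash between levels.
public class GameState : MonoBehaviour
{
    public static GameState Instance { get; private set; }

    public int Cash;
    public int Lives;

    void Awake()
    {
        if (Instance != null)
        {
            // A copy already survived from a previous scene; discard this one.
            Destroy(gameObject);
            return;
        }
        Instance = this;
        DontDestroyOnLoad(gameObject); // keep this object alive across scene loads
    }
}

Anything in a newly loaded scene can then read or write GameState.Instance.Cash instead of keeping its own copy.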

lawnjelly

Tower Defence Day 6

Just a quick one, and not a real day, as I only got to spend an hour or two in the morning yesterday before another day trip. In fact none of these 'days' are really full days of work, just on-and-off playing with the game in between other tasks.

I changed from the overcomplicated paths to just generating some random waypoints, with Manhattan walks in between them. To prevent the path crossing itself, I simply make sure each random waypoint is further along the y coordinate towards the destination. This seems to work fine for now, and I will refine it at a later stage (there's a rough sketch of the idea at the end of this post).

I tried out the Unity terrain system to make the environment around the play area a bit more interesting. I also figured out how the asset store works, and have decided to replace my old native model with some free character assets. Using free stuff should be fine as this is just a test game anyway, and I'm kind of hoping that if these models are rigged right I might have more luck getting ragdolls working.

The terrain I'm not sure about. It was quick, but I've somehow broken the shadows (will have to investigate). And I did a first 'build' of the game and it came out at something ridiculous like 82 megs, which may have been due to the terrain. This is one of the issues I have with things like Unity: it's very easy, with a few mouse clicks, to end up with something completely unoptimal for rendering and hugely bloated. Couple this with the fact that many people have powerful development PCs, and it gives them a false sense of what is possible. Of course this was just a first play with the terrain to evaluate it, and I'm not sure I will use it. It would be nice to have some gaps for lakes. The thing is, to make the levels different I'd probably have to figure out how to alter the terrain procedurally, so suddenly something that was an added 'extra' can potentially become a time sink... anyway, it is not a necessary element so I won't spend too long on it.

Edit: Got the shadows working again; it was just a shadow distance setting in the quality section.
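To make the path generation concrete, here is a rough sketch of the waypoint idea (simplified and with made-up names, not the actual game code). Each waypoint is forced to be strictly further along y than the previous one, so the Manhattan walks joining them can never cross:

using System.Collections.Generic;
using UnityEngine;

// Random waypoints that always advance along y (so the path can't double
// back on itself), joined by Manhattan (axis-aligned) walks on the grid.
public static class PathGenerator
{
    public static List<Vector2Int> Generate(int width, int height, int numWaypoints)
    {
        var waypoints = new List<Vector2Int>();
        waypoints.Add(new Vector2Int(Random.Range(0, width), 0)); // entry at the top edge

        for (int i = 1; i <= numWaypoints; i++)
        {
            // Each waypoint is strictly further along y than the last,
            // spread roughly evenly towards the destination row.
            int y = (height - 1) * i / (numWaypoints + 1);
            y = Mathf.Max(y, waypoints[i - 1].y + 1);
            waypoints.Add(new Vector2Int(Random.Range(0, width), y));
        }
        waypoints.Add(new Vector2Int(Random.Range(0, width), height - 1)); // the base

        // Join consecutive waypoints with a Manhattan walk: across in x, then down in y.
        var path = new List<Vector2Int> { waypoints[0] };
        for (int i = 1; i < waypoints.Count; i++)
        {
            Vector2Int cur = path[path.Count - 1];
            Vector2Int target = waypoints[i];
            while (cur.x != target.x) { cur.x += (int)Mathf.Sign(target.x - cur.x); path.Add(cur); }
            while (cur.y < target.y) { cur.y += 1; path.Add(cur); }
        }
        return path;
    }
}

Because every horizontal run sits on its own y row, the resulting path never doubles back on itself, which is all I need for now.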

lawnjelly
