I spent a couple of days adding a procedural splatting terrain texturing system, similar to the one I used in Tower Defence. However, it runs slower than the Unity version; I suspect GDScript is currently quite a bit slower in Godot (3.0.5) than C# in Unity.
As a result I've thought about pre-generating some terrain textures as .jpg, compressing them heavily and using those instead of generating on the fly. This is an option, but I've left it procedural for the time being. The road and rivers don't need the procedural system, so there is less area to cover, and I may get away with it. One advantage of procedural, as well as variety, is that I can change the terrain around buildings etc. should I put them in.
I've started adding some more cameras too. You can now switch between a traditional top-down orthographic camera, and a low-down perspective camera that follows the frog and shows closeups where necessary.
I've added an easy way to lay out each level: I specify the number of tiles of each type (grass, river, road at the moment), then for each row I can specify the type of traffic, speed etc.
And lastly I've been playing with the lighting, experimenting with the spotlight in Godot, possibly for a night mode. I don't know how it will affect performance if I do a mobile version, but certainly the spotlight is fun, and changes the gameplay a little as you sometimes can't see the vehicles coming. I've been thinking of putting a 'spy frog' slant on things, and this lighting would work for a spy.
I still have to get around to thoroughly debugging the collision areas. I'll probably attach some visible bounding quads to each object to check the bounds match up with the visual rep; it's quite a bit off in some cases, which is why you die jumping on certain bits of logs etc.
I will add lily pads soon, and have realised that if I make a snake, it can act just like any other bit of traffic, just moving on the grass.
I'm just today thinking about the rights issues of my little frogger game for the gamedev challenge. It is quite commonplace to make clones of games as a learning experience and for jams, but it is worth spending a little time thinking about rights issues.
I got to thinking about this because sometimes I notice a promising game, where the assets are obviously ripped from other games / sources without licence. I too had no qualms about this kind of approach when e.g. learning to program a new language, and not intending to distribute the result. Indeed this can be a valid use in production, with placeholder graphics, as long as you are sure to change the final versions (there is a danger of tripping up on this one, even big companies can mess this up!).
I remember vividly being told of the dangers of plagiarism in my education, in the world of science, but equally applicable to games and artwork etc.
Once you spend some time either programming, making artwork, sound or music, you begin to realise the effort that goes into making the results. How would you feel if someone took your hard work, used it for profit and attempted to pass it off as their own? I like to think of it that I would like to treat others work how I would like mine to be treated.
On top of the legal aspects, when applying for jobs, many employers will take a very dim view of plagiarism. If you thought it was okay to 'steal' another's work for something on your CV, what's to stop you thinking that it is okay to copy another's work during your job, and expose them to huge risks? Quite apart from the fact that the interviewers may have personal experience of being plagiarised, in many cases your CV will be filed directly in the rubbish bin.
Creative Commons and Open Source
Luckily in the case of assets, instead of infringing on others' works without permission, a huge number of people (including myself!) have decided to make some of their work available for free for others to use under a number of licences, such as Creative Commons, often for nothing other than an attribution in the credits. (And think of the huge contribution of open source software. I am writing this now on an operating system that is open source, and has been freely given away by the authors for the good of everyone. That to me is fantastic!)
I remember spending some time compiling the credits list for my tower defence game, making sure as well as I could that everyone was properly credited, every model, animation, sound, piece of music. It took some time, but I feel much better knowing that I had used the work as the authors intended, quite apart from feeling more secure from a legal standpoint, even for a free game. And also, speaking as an author myself, it is quite fun to google yourself and find where others have used your work, and encourages you to share more.
Things seem pretty cut and dried for outright copying of assets, but the situation is more complex when it comes to assets that are 'based on' others' work, and for things like game titles and game designs. Some of this intellectual property (IP) protection is based on trademarks, rather than copyright. I am no expert in this, and would encourage further reading and/or consulting a specialist lawyer if in any doubt. Also note that IP laws vary in different countries, and there are various agreements that attempt to harmonize things in different areas.
Of course, whether you run into trouble making a clone game depends not only on skirting around the applicable law, but on whether the rights owners are able / willing to take action against you (or the publishing platform). Nintendo for instance are known to be quite aggressive in pursuing infringement, whereas Sega have sometimes suggested they are happy with fan games:
I do understand both points of view. Indeed in some jurisdictions I understand that legally the rights owner *needs* to take action in order to retain control over the rights (this seems nonsensical, but there you are). So a company being overly-aggressive may have been forced to do this by the legal system.
Anyway, for my little research into frogger, my first guess is that the rights are with Konami / Sega / both. Of course you never know. Companies sometimes sell the rights to IP, or one goes out of business and the rights are then assigned to a creditor. Who is to say that a future rights owner will not take a different approach to enforcement?
In the Wild
It seems there are a number of frogger clones out there, with some successful and profitable (e.g. Crossy Road). Of course that does not mean making a frogger clone is 'ok' in any way, it just suggests that the likelihood of running into trouble is lower than if there were no frogger clones on the market.
Currently I am thinking I will gradually modify the title / some aspects of gameplay so I can make it available for download after the challenge. I really should have thought of this sooner though, and made my main character something else, or put a different slant on it, like 'game of frogs', or 'crossy frog' (that one is taken!). :)
Some light reading
https://en.wikipedia.org/wiki/Copyright_and_video_games
https://www.newmediarights.org/guide/legal/Video_Games_law_Copyright_Trademark_Intellectual_Property
https://en.wikipedia.org/wiki/Berne_Convention
https://en.wikipedia.org/wiki/TRIPS_Agreement
https://en.wikipedia.org/wiki/Digital_Millennium_Copyright_Act
https://en.wikipedia.org/wiki/WIPO_Copyright_Treaty
https://en.wikipedia.org/wiki/Directive_on_Copyright_in_the_Digital_Single_Market
I have now added some animations from Blender to Godot. Mostly they went in very easily, and the AnimationTreePlayer makes it easy to set up node graphs for the animation logic, although I couldn't work out how to do certain state changes. I gather this is being changed in the new version though.
The animations look good in the game, especially the frog. I also added logic for controlling the yaw of the frog as you move.
The only bug I've encountered in the animations is that, particularly in a window, a streaked, corrupted black triangle sometimes shows down the screen. I initially thought it was to do with shadow mapping, but now I think there may be a bug in the engine vertex buffer / shader code somewhere. I did wonder if I had exported my animations wrongly, but the process is pretty simple, so on balance I think it may be an engine bug in 3.0.5. I will see if it is fixed in the 3.1 alpha or my compiled-from-source version.
Next I aim to work some more on the core game, fixing collision / river, and adding progression in skill / variation through levels. Then I will look at some different cameras, and dealing properly with traffic leaving the screen in different aspect ratios. Also I want to look at revamping the background graphics, not quite sure what to go for yet.
Quick update to show how everything is going. Ignoring that the river doesn't drown you yet, the collision bugs etc., I've been pressing on with getting the major features in. There are now menus, UI (basic to start), game state logic, winning, losing, and just today I was putting in sound.
Sound has been quite tricky because 3D sound and listeners seem to be currently broken in Godot (3.0.5 at least), so I've had to bodge around the broken bits considerably. If they fix it I can solve some of these things, but for now sound will have to be pretty simple.
Still loads of stuff to fix, as well as putting in some kind of progression as you complete levels. I was thinking of making roads / rivers wider, or multiple versions with more traffic, different speeds / densities. Also I might put in some other stuff like a fly to eat, an otter and a snake, depending on time.
I also figured out some performance issues were occurring because Godot was defaulting to a 4096x4096 shadow map (!), gawd knows how this was decided to be a good choice lol.
Just a quick update to show how my frogger challenge entry is coming along. I had a day off yesterday and have today put in collision detection and a few other things. You don't die yet if you drown, the crocs and turtles are going the wrong way, and the collision detection has lots of bugs, but it is getting there.
My plan is to first get a working version, then flesh it out as time allows. I will put in UI soon and work out how to do menus etc.
Once it is playable and meets the challenge I will put in animation, improve backgrounds, cameras etc.
As I'm using Godot for this challenge there is no asset store, so I'm attempting to make all the assets myself. I'm no artist, and I find making artwork pretty tedious and time consuming .. that said I'm gradually getting more efficient at it. So I spent the first few days making artwork which is mostly in my frogger gallery.
Yesterday evening and today I started actually programming the game in Godot. I'm pretty much making this up as I go from zero experience with the engine and GDScript (like when I used Unity for Tower Defence). So far I am pretty positive about the Godot experience; my only criticism so far would be the lack of built-in interpolation (see this post), but it has been fairly easy to work around.
This time I'm mainly aiming at desktop rather than mobile, partly because I have no idea how good the android support is in Godot. I will cross that bridge when I come to it, but the game should in theory run fine on mobile too.
The whole engine is node based, and you can have deeply nested nodes (something I'm not sure Unity supports), and handily you can make your game out of smaller scenes which you just pull in in the editor or programmatically. I like the paradigm, and the node based scene graph is similar to NDL or Ogre so I'm familiar with how to use it.
My first step was to try and move the frog around the screen. For interpolation I store previous and current positions on each tick (started out at 10 ticks per second). It was pretty easy to get the frog moving left, right, up and down. However, from playing frogger games in a browser I realised the movement is all in 'jumps', almost like a grid (though not when on logs etc). So instead of holding a key to move the frog, each keypress gives a new destination location; the frog makes its way to the destination, and won't respond to other moves until it has reached it. This is now working pretty well.
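The jump logic can be sketched roughly like this (in C-style code rather than GDScript; all the names here are illustrative, not from the actual game code):

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch only - names are made up, not from the actual game.
struct Frog
{
	float x = 0.0f, y = 0.0f;         // position as of the last tick
	float destX = 0.0f, destY = 0.0f; // where the current jump is heading
	bool moving = false;

	// Called on a keypress; further presses are ignored mid-jump.
	bool RequestJump(float dx, float dy)
	{
		if (moving) return false;
		destX = x + dx;
		destY = y + dy;
		moving = true;
		return true;
	}

	// Called once per fixed tick; steps towards the destination.
	void Tick(float step)
	{
		if (!moving) return;
		float dx = destX - x, dy = destY - y;
		float dist = std::sqrt(dx * dx + dy * dy);
		if (dist <= step) { x = destX; y = destY; moving = false; }
		else              { x += step * dx / dist; y += step * dy / dist; }
	}
};
```

The key point is that `RequestJump` rejects input until the current jump completes, which is what gives the grid-like feel.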
For the vehicles etc I wanted a generic system, no point in programming the same thing multiple times. I have separated the lines of traffic into 'row' objects, and each row can contain multiple cars of a certain type, with certain speed and direction. There is no collision yet but I'll use just simple AABB checks. Handily, because the frog can only be on a maximum of 2 rows at a time, you only need to collide check against the cars / logs on those rows.
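The row idea with simple AABB checks might look something like this (a C-style sketch, not the actual GDScript; names are mine):

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch of the row system.
struct AABB { float minX, minY, maxX, maxY; };

bool Overlaps(const AABB &a, const AABB &b)
{
	return a.minX < b.maxX && b.minX < a.maxX &&
	       a.minY < b.maxY && b.minY < a.maxY;
}

struct Row
{
	float speed;                // signed: negative moves left, positive right
	std::vector<AABB> vehicles; // cars / logs currently in this row
};

// The frog can only span 2 rows at once, so only those need checking.
bool FrogHit(const AABB &frog, const std::vector<Row> &rows, int rowA, int rowB)
{
	int toCheck[2] = { rowA, rowB };
	for (int r : toCheck)
	{
		if (r < 0 || r >= (int)rows.size()) continue;
		for (const AABB &v : rows[r].vehicles)
			if (Overlaps(frog, v)) return true;
	}
	return false;
}
```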
I've also figured out how to preload the different vehicles as scenes so I can create instances which get reused as they go off the edge of the screen. I need to re-export the models as I have some newer versions now.
There are also no wheels yet, and no animations, I'll deal with them later.
Overall it is coming together well for just a day of programming, which is a testament to how easy Godot is to use. My first aim is to get the frog playable over the screen to get to lily pads, and die if you hit vehicles etc., and have some UI for the score and lives. I have no idea how to do menus yet in Godot but I think it will involve more scenes so will hopefully be easy.
And while I'm starting with the traditional top-down orthographic camera, which makes wraparound of vehicles easy, I'd like a more free-moving camera; I'll just have to think of a way of making the vehicles fade in and out.
The past few days I have been playing with Godot engine with a view to using it for some Gamedev challenges. Last time for Tower Defence I used Unity, but updating it has broken my version so I am going to try a different 'rapid development' engine.
So far I've been very impressed by Godot, it installs down to a small size, isn't bloated and slow with a million files, and is very easy to change versions of the engine (I compiled it from source to have latest version as an option).
Unfortunately, after working through a couple of small tutorials I get the impression that Godot suffers from the same frame judder problem I had to deal with in Unity.
Let me explain: (tipping a hat to Glenn Fiedler's article)
Some of the first games ran on fixed hardware, so timing wasn't a big deal: each 'tick' of the game was a frame that was rendered to the screen. If the screen rendered at 30fps, the game ran at 30fps for everyone.
This was used on PCs for a bit, but the problem was that some hardware was faster than others, and there were some games that ran too fast or too slow depending on the PC.
Clearly something had to be done to enable the games to deal with different speed CPUs and refresh rates.
The obvious answer was to sample a timer at the beginning of each frame, and use the difference (delta) in time between the current frame and the previous to decide how far to step the simulation.
This is great except that things like physics can produce different results when you give it shorter and longer timesteps, for instance a long pause while jumping due to a hard disk whirring could give enough time for your player to jump into orbit. Physics (and other logic) tends to work best and be simpler when given fixed regular intervals. Fixed intervals also makes it far easier to get deterministic behaviour, which can be critical in some scenarios (lockstep multiplayer games, recorded gameplay etc).
If you know you want your gameplay to have a 'tick' every 100 milliseconds, you can calculate how many ticks you want to have complete at the start of any frame.
// some globals
int iCurrentTick = 0;

void Update()
{
	// Assuming our timer starts at 0 on level load:
	// (ideally you would use a higher resolution than milliseconds, and watch for overflow)
	int iMS = gettime();

	// ticks required since start of game
	int iTicksRequired = iMS / 100;

	// number of ticks that are needed this frame
	iTicksRequired -= iCurrentTick;

	// do each gameplay / physics tick
	for (int n = 0; n < iTicksRequired; n++)
	{
		iCurrentTick++;
		TickUpdate();
	}

	// finally, the frame update
	FrameUpdate();
}
Brilliant! Now we have a constant tick rate, and it deals with different frame rates. Providing the tick rate is high enough (say 60 ticks per second), the positions when rendered look kind of smooth. This, ladies and gentlemen, is about as far as Unity and Godot typically get.
However, there is a problem. The problem can be illustrated by taking the tick rate down to something that could be considered 'ridiculous', like 10 or less ticks per second. The problem is, that frames don't coincide exactly with ticks. At a low tick rate, several frames will be rendered with dynamic objects in the same position before they 'jump' to the next tick position.
The same thing happens at high tick rates. If the tick does not exactly match the frame rate, you will get some frames that have 1 tick, some with 0 ticks, some with 2. This appears as a 'jitter' effect. You know something is wrong, but you can't put your finger on it.
Some games attempt to fix this by running as many fixed timesteps as possible within a frame, then a smaller timestep to make up the difference to the delta time. However this brings with it many of the same problems we were trying to avoid by using fixed timestep (lack of deterministic behaviour especially).
The established solution that is commonly used to deal with both these extremes is to interpolate, usually between the current and previous values for position, rotation etc. Here is some code:
// ticks required since start of game
int iTicksRequired = iMS / 100;
int iMSLeftOver = iMS % 100;

// ... gameplay ticks

// finally, the frame update uses the fraction of a tick left over
float fInterpolationFraction = iMSLeftOver / 100.0f;
FrameUpdate(fInterpolationFraction);

// very pseudocodey, just an example of translation for one object
void FrameUpdate(float fInterpolationFraction)
{
	// where pos is a Vector3 translate
	m_Pos_render = m_Pos_previous + ((m_Pos_current - m_Pos_previous) * fInterpolationFraction);
}
The more astute among you will notice that if we interpolate between the previous and current positions, we are actually interpolating *back in time*. We are in fact going back by exactly 1 tick. This results in a smooth movement between positions, at a cost of a 1 tick delay.
'This delay is unacceptable!' you may be thinking. However the chances are that many of the games you have played have had this delay, and you have not noticed.
In practice, fast twitch games can set their tick rate higher to be more responsive. Games where this isn't so important (e.g. RTS games) can reduce processing by dropping tick rate. My Tower Defence game runs at 10 ticks per second, for instance, and many networked multiplayer games will have low update rates and rely on interpolation and extrapolation.
I should also mention that some games attempt to deal with the 'fraction' by extrapolating into the future rather than interpolating back in time. However, this can bring in new sets of problems, such as lerping into colliding situations, and snapping.
Multiple Tick Rates
Something which doesn't get mentioned much is that you can extend this concept, and have different tick rates for different systems. You could for example, run your physics at 30tps (ticks per second), and your AI at 10tps (an exact multiple for simplicity). Or use tps to scale down processing for far away objects.
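As a tiny illustration, with physics at 30tps and AI at 10tps the AI can simply run on every third physics tick, driven from the same fixed tick loop (names here are made up for the example):

```cpp
#include <cassert>

// Illustrative only: physics at 30tps, AI at 10tps (an exact multiple).
int g_PhysicsTicks = 0;
int g_AITicks = 0;

void PhysicsTick() { g_PhysicsTicks++; }
void AITick()      { g_AITicks++; }

void RunTicks(int nTicks)
{
	for (int n = 0; n < nTicks; n++)
	{
		PhysicsTick();
		if (g_PhysicsTicks % 3 == 0) // 30tps / 10tps = every 3rd tick
			AITick();
	}
}
```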
How do I retrofit frame interpolation to an engine that does not support it fully?
With care is the answer unfortunately. There appears to be some support for interpolation in Unity for rigid bodies (Rigidbody.interpolation) so this is definitely worth investigating if you can get it to work, I ended up having to support it manually (ref 7) (if you are not using internal physics, the internal mechanism may not be an option). Many people have had issues with dealing with jitter in Godot and I am as yet not aware of support for interpolation in 3.0 / 3.1, although there is some hope of allowing interpolation from Bullet physics engine in the future.
One option for engine devs is to leave interpolation to the physics engine. This would seem to make a lot of sense (avoiding duplication of data, global mechanism), however there are many circumstances where you may not wish to use physics, but still use interpolation (short of making everything a kinematic body). It would be nice to have internal support of some kind, but if this is not available, to support this correctly, you should explicitly separate the following:
1. transform CURRENT (tick)
2. transform PREVIOUS (tick)
3. transform RENDER (where to render this frame)
The transform depends on the engine and object, but it will typically be things like translate, rotate and scale which would need interpolation.
All these should be accessible from the game code, as they all may be required, particularly 1 and 3. 1 would be used for most gameplay code, and 3 is useful for frame operations like following a player with a camera.
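A sketch of how these three might be kept on a node (translation only; all the names are illustrative):

```cpp
#include <cassert>

// Sketch of keeping the three transforms separate.
struct Vec3 { float x, y, z; };

struct InterpNode
{
	Vec3 curr   = { 0, 0, 0 }; // 1. CURRENT  - written by gameplay / physics ticks
	Vec3 prev   = { 0, 0, 0 }; // 2. PREVIOUS - snapshot taken at the start of each tick
	Vec3 render = { 0, 0, 0 }; // 3. RENDER   - what the renderer / camera should use

	// Call at the start of each tick with the new tick position.
	void BeginTick(Vec3 newPos) { prev = curr; curr = newPos; }

	// Call each frame, f = fraction through the current tick (0..1).
	void UpdateRender(float f)
	{
		render.x = prev.x + (curr.x - prev.x) * f;
		render.y = prev.y + (curr.y - prev.y) * f;
		render.z = prev.z + (curr.z - prev.z) * f;
	}
};
```

Gameplay code reads and writes `curr`; a follow camera would read `render`.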
The problem that exists today in some engines is that in some situations you may wish to manually move a node (for interpolation) and this in turn throws the physics off etc, so you have to be very careful shoehorning these techniques in.
One final point to totally throw you. Consider that typically we have been relying on a delta (difference) in time that is measured from the start of one frame (as seen by the app) and the start of the next frame (as seen by the app). However, in modern systems, the frame is not actually rendered between these two points. The commands are typically issued to a graphics API but may not be actually rendered until some time later (consider the case of triple buffering). As such the delta we measure is not actually the time difference between the 2 rendered frames, it is the delta between the 2 submitted frames.
A dropped frame may for instance have very little difference in the delta for the submitted frames, but have double the delta between the rendered frames. This is somewhat a 'chicken and the egg' problem. We need to know how long the frame will take to render in order to decide what to render, where, but in order to know how long the frame will take to render, we need to decide what to render, and where!!
On top of this, a dropped frame 2 frames ago could cause an artificially high delta in later submitted frames if they are capped to vsync!
Luckily in most cases the solution is to stay well within performance bounds and keep a steady frame rate at the vsync cap. But in any situation where we are on the border between getting dropped frames (perhaps a high refresh monitor?) it becomes a potential problem.
There are various strategies for trying to deal with this, for instance by smoothing delta times, or working with multiples of the vsync interval, and I would encourage further reading on this subject (ref 3).
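One of the simplest strategies is a moving average over the last few measured frame deltas; this is illustrative only (real implementations often also clamp or snap the result to multiples of the vsync interval):

```cpp
#include <cassert>

// Smooth the measured delta so a single short / long frame
// doesn't cause a visible hitch.
float SmoothDelta(float history[], int count, float newDelta)
{
	// shift the history along and insert the newest measurement
	for (int i = count - 1; i > 0; i--)
		history[i] = history[i - 1];
	history[0] = newDelta;

	float sum = 0.0f;
	for (int i = 0; i < count; i++)
		sum += history[i];
	return sum / count;
}
```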
Just a little progress update to show how my little texture tools app is progressing. I actually got waylaid for far too long investigating white balance correction. It is not at all important for this app, and more something I am interested in for correcting photos and video.
White balance correction
Though my early efforts had involved colour space conversion in the hope of making white balance correction easier, I am now of the opinion that the best way to do it is through 3D colour look up tables (LUTs). I've previously used multiple 1D LUTs many times (in fact they are used extensively in 3D Paint), but not 3D. The advantage of 3D LUTs is that, as well as independent control of contrast / gamma / balance for R, G and B, you can have complex interactions between the colour channels, allowing you to change things like colour saturation.
The downside to 3D LUTs is firstly that they take more memory, such that it is common to use sizes such as 33x33x33 rather than 256x256x256, and secondly that as a result of the lower resolution you have to implement 3D interpolation. I implemented 3D tetrahedral interpolation based on some slides from Nvidia, and I understand it should give better results than simple trilinear interpolation.
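For reference, here is roughly what the simpler trilinear version looks like (a sketch storing one output channel per LUT entry for brevity; a real LUT stores RGB per entry, and the tetrahedral variant differs only in the inner blend):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Trilinear lookup on an n x n x n LUT.
struct Lut3D
{
	int n;                   // e.g. 33
	std::vector<float> data; // n*n*n entries

	float At(int r, int g, int b) const { return data[(r * n + g) * n + b]; }

	// r, g, b are inputs in 0..1
	float Sample(float r, float g, float b) const
	{
		float fr = r * (n - 1), fg = g * (n - 1), fb = b * (n - 1);
		int ir = (int)fr, ig = (int)fg, ib = (int)fb;
		// clamp so the +1 neighbours stay in range at the top edge
		if (ir > n - 2) ir = n - 2;
		if (ig > n - 2) ig = n - 2;
		if (ib > n - 2) ib = n - 2;
		float tr = fr - ir, tg = fg - ig, tb = fb - ib;

		// blend along b, then g, then r
		float c00 = At(ir, ig, ib) * (1 - tb) + At(ir, ig, ib + 1) * tb;
		float c01 = At(ir, ig + 1, ib) * (1 - tb) + At(ir, ig + 1, ib + 1) * tb;
		float c10 = At(ir + 1, ig, ib) * (1 - tb) + At(ir + 1, ig, ib + 1) * tb;
		float c11 = At(ir + 1, ig + 1, ib) * (1 - tb) + At(ir + 1, ig + 1, ib + 1) * tb;
		float c0 = c00 * (1 - tg) + c01 * tg;
		float c1 = c10 * (1 - tg) + c11 * tg;
		return c0 * (1 - tr) + c1 * tr;
	}
};
```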
For my white balance experiments I was thus trying to create a LUT to go from one image (e.g. with tungsten white balance) to an identical image with correct white balance. I tried a variety of approaches and ended up being most successful with an iterative 'Monte Carlo' scheme that optimized the LUT values towards the best fit to convert from one image to the other. This did produce nigh on perfect results, however it was quite slow, and I had to use approaches like performing runs on mipmaps of the large image size.
On top of the problem of producing a LUT to convert between the 2 test images, another problem was the LUT had 'blanks' in it. That is, areas of the LUT where there was no data to create a best fit. I ended up trying 3D natural neighbour interpolation for this.
However when it came to testing out the LUTs I tried to find an existing graphics editor that would load my created LUTs to test them out. Gimp wouldn't seem to load LUTs, however after some research I found an excellent plugin called G'Mic which would load 'hald' format LUTs. I also discovered that the author had attempted to do exactly the same thing as me (LUT from 2 reference images), and his worked 10x better than mine lol. Firstly it calculated it very quickly, and secondly and more importantly, it managed to guess the 'in between' blanks in the LUT as well as the values that were in the images.
I took this as a sign that I had wasted far too much time researching this, so I resolved to study the G'Mic source code and work out how he did it (for future reference) and abandon my own efforts in this direction. Unfortunately the plugin was written in a somewhat impenetrable scripting language, but I will try to work it out when I have some spare time (or email the author lol ).
However something I have left in texture tools is a method for loading 3D LUTs and applying them in the pipeline as it could be quite handy for users (there are a number of freely available LUTs around for colour balancing, or getting specific 'looks').
So back to work on the actual app I quickly added a few more basic methods (these are all added as nodes which can be connected up with inputs and outputs). I've put in levels, hue saturation, crop, resize. I then converted my healing code to work with float data, and made a method for simple tiling textures.
This 'heal tile' method automates a very common operation in Photoshop / GIMP. First it offsets the x and y by 50%, so the mismatched edges form a cross shape centred on the image. Next it heals the two borders, using source from the original un-offset image. Then it finally offsets by 50% again so the image is back in its original position. The percentage of the border that is used for the healing can be changed with a slider.
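The offset-with-wrap step might look like this (a sketch on a single-channel float image; the healing itself is omitted):

```cpp
#include <cassert>
#include <vector>

// Offset an image by 50% in x and y, wrapping pixels around the edges,
// which moves the mismatched tile seams into the middle of the image.
std::vector<float> OffsetWrapHalf(const std::vector<float> &src, int w, int h)
{
	std::vector<float> dst(src.size());
	int ox = w / 2, oy = h / 2;
	for (int y = 0; y < h; y++)
		for (int x = 0; x < w; x++)
			dst[((y + oy) % h) * w + ((x + ox) % w)] = src[y * w + x];
	return dst;
}
```

For even dimensions, applying the offset twice restores the original image, which is why the final step puts everything back in place.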
Synthesizing larger images
So that is the very basic functionality working. My next phase is to look at means of synthesizing a larger image from a smaller reference texture. I am not quite sure how to do this yet. JoeJ has suggested a very interesting paper by Eric Heitz and Fabrice Neyret which I have finally vaguely understood (there is no source code), so I may have a go at doing a version of their technique.
I may also try some simpler techniques of splatting areas on top of each other.
One extra feature I just put in today is you can paint an alpha layer on the source images, to mark out where you want to use as source material for these latter techniques (as a photo may contain other junk aside from the texture of interest).
Now I realise this is stretching slightly outside the normal realms of gamedev, but while working on my texture tools I've turned my attention again to something that is often a problem for photographers, white balance.
While white balance is something that is easy to correct when you have a RAW image file (before white balance has been applied), I've always found it considerably more difficult to correct for when all you have to work with is a JPG, or video stream. Hence the general advice to photographers to get white balance right in the camera (using grey card preset balance etc).
Of course the world is not ideal and sometimes you end up with images that have a blue or yellow cast. I've found the colour balance correction tools built into things like Photoshop and GIMP to be pretty awful. Often when you pick a gray / white point you can very roughly correct things overall, but you get colour tinges in other areas of the photo, and it never looks as good as a correctly balanced photo.
Tungsten (blue cast - needs correcting)
Neutral (correct, reference image)
Attempt to colour correct in Gimp using white and gray points (overly saturated result)
I've worked through various naive approaches to balancing photos in the past. I started long ago with the very basic multipliers on the sRGB 8 bit data. First mistake of course .. this is gamma compressed, and to do anything sensible with images you have to first convert them to linear with a higher bit depth. I now know better and use linear in nearly all my image manipulation.
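For reference, the standard sRGB transfer function looks like this (note it is a piecewise curve with a linear toe, not a plain 2.2 power):

```cpp
#include <cassert>
#include <cmath>

// sRGB (gamma compressed, 0..1) -> linear light
float SRGBToLinear(float s)
{
	return (s <= 0.04045f) ? s / 12.92f
	                       : std::pow((s + 0.055f) / 1.055f, 2.4f);
}

// linear light -> sRGB (gamma compressed)
float LinearToSRGB(float l)
{
	return (l <= 0.0031308f) ? l * 12.92f
	                         : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
}
```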
The next step is colour spaces. I must admit I have found this very difficult to wrap my head around. I have used various other colour spaces like LAB and HSL in the past and just have used conversion functions without really knowing what they were doing.
This past week I have finally got my head around the CIE XYZ colour space (I think!), and have functions to convert from linear sRGB to XYZ and back.
Next I learned that in order to do chromatic adaptation (the technical term for white balance!) it is common to transform to yet another colour space called LMS space, which is meant to more accurately represent the colour cone cells in the eye.
Anyway, testing this all out is luckily rather easy. All you need is a RAW photo, exported as JPG with different white balance settings; then attempt to transform the 'wrong' JPG image into the 'right' one.
Usually I would do this myself but there are some rather convenient already made images here: https://en.wikipedia.org/wiki/Color_balance
So I've been using these to test, trying to convert the tungsten (blue cast) to neutral.
I've had various other ideas, but first wanted to try something very simple: get the photo into the initial colour space / gamma, alter the white balance, then convert it back again.
To do this I converted both the blue (tungsten) image and the reference (neutral) to sRGB linear, then to XYZ, then to LMS. I found the average pixel colour in both images, then found the multiplier that would convert the average colour from the blue to the neutral. Then I applied this multiplier to each pixel of the blue image (in LMS space), then finally converted back to sRGB for viewing.
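A sketch of that per-channel gain step, using the standard linear sRGB (D65) to XYZ matrix and the Bradford XYZ to LMS matrix (the function and type names are mine, not from the app):

```cpp
#include <cassert>
#include <cmath>

struct Col3 { float x, y, z; };

// 3x3 matrix * vector
Col3 Mul(const float m[9], Col3 v)
{
	return { m[0] * v.x + m[1] * v.y + m[2] * v.z,
	         m[3] * v.x + m[4] * v.y + m[5] * v.z,
	         m[6] * v.x + m[7] * v.y + m[8] * v.z };
}

const float RGB_TO_XYZ[9] = {  0.4124564f, 0.3575761f, 0.1804375f,
                               0.2126729f, 0.7151522f, 0.0721750f,
                               0.0193339f, 0.1191920f, 0.9503041f };
const float XYZ_TO_RGB[9] = {  3.2404542f, -1.5371385f, -0.4985314f,
                              -0.9692660f,  1.8760108f,  0.0415560f,
                               0.0556434f, -0.2040259f,  1.0572252f };
const float XYZ_TO_LMS[9] = {  0.8951f,  0.2664f, -0.1614f,   // Bradford
                              -0.7502f,  1.7135f,  0.0367f,
                               0.0389f, -0.0685f,  1.0296f };
const float LMS_TO_XYZ[9] = {  0.9869929f, -0.1470543f, 0.1599627f,
                               0.4323053f,  0.5183603f, 0.0492912f,
                              -0.0085287f,  0.0400428f, 0.9684867f };

// gain = (average LMS of the neutral image) / (average LMS of the cast image),
// applied per pixel in LMS space, then converted back to linear sRGB.
Col3 Balance(Col3 linearRGB, Col3 gain)
{
	Col3 lms = Mul(XYZ_TO_LMS, Mul(RGB_TO_XYZ, linearRGB));
	lms = { lms.x * gain.x, lms.y * gain.y, lms.z * gain.z };
	return Mul(XYZ_TO_RGB, Mul(LMS_TO_XYZ, lms));
}
```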
LMS colour space
Linear RGB colour space
The results showed a slightly better result doing this process in LMS space versus linear RGB space. The background turned a better gray, rather than still having a blue tinge with the RGB space. The result looks pleasing to the eye but you can still tell something is 'off', and the white point on the colour checker chart in the photo is visibly off.
As well as the photo, I also superimpose a plot of the RGB values with converted value against expected value, to show how accurate the process is. A perfect result would give a straight line plot. Clearly there is quite a bit of colour variation not corrected for in the process.
My current thinking is that there are 2 main things going on here:
1. To do the conversion, the colour space / gamma should exactly match that of the same stage in the RAW conversion. Maybe it doesn't. Is the RAW conversion done in linear camera space rather than any standard colour space? I don't know. I've attempted to dig into some open source RAW converters to get this information (DCRaw and RawTherapee) but not had much luck.
2. To get things to the state when the white balance was applied in the RAW converter, you not only have to reverse colour spaces, you have to reverse any other modifications that were made to the image (picture styles, possibly sharpening etc). This is very unlikely to be possible, so this technique may not be able to produce a perfect result.
Aside from the simple idea of applying multipliers, my other idea is essentially a 3d lookup table of colour conversions. If (and that is a big if) there is a 1:1 mapping of input colours from the blue image to reference colours in the neutral image, it should be possible to simply lookup the conversion needed. You could do this by simply going through the blue image till you found a matching pixel, then find the corresponding pixel in the neutral image and use this. In theory this should produce a perfect result in the test image by definition (if the 1:1 mapping holds).
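A minimal sketch of the exact-match version of that idea, assuming 8-bit pixels. Unmatched colours are left untouched here; as discussed below, a real version would need to interpolate from nearby table entries instead.

```python
import numpy as np

def build_lut(blue, neutral):
    # map each colour seen in the blue image to the colour at the same
    # pixel position in the neutral image; assumes the mapping is 1:1
    lut = {}
    for b, n in zip(blue.reshape(-1, 3), neutral.reshape(-1, 3)):
        lut[tuple(b)] = tuple(n)
    return lut

def apply_lut(img, lut):
    # convert pixels found in the table; unmatched pixels are left alone
    out = img.copy()
    flat = out.reshape(-1, 3)
    for i, px in enumerate(flat):
        flat[i] = lut.get(tuple(px), tuple(px))
    return out
```

By definition this reproduces the neutral test image perfectly when applied to the blue test image, provided the 1:1 mapping really holds.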
I should say at this point the intended use is that if you can find a mapping for a specific camera to get from one white balance setting to another, you can then apply this to correct images that were taken BEFORE the mapping was found. So if you are attempting to convert a different photo it is likely that there will be pixels that do not match the reference images. So some kind of interpolation and perhaps lookup table would be needed, unless you were okay with a dithered result.
At this point I'm going to try the lookup table approach. I suspect it may give better results, but not perfect, because I do fear that picture styles / sharpening may have made a perfect correction impossible, much like the entropy reversing idea of putting a shattered teacup back together, from Stephen Hawking and our old friend Hannibal Lecter.
Anyway, this blog post was originally meant to be a question for the forum, but grew too large. So my question would be whether any of you guys have experience in this, and advice or ideas to offer? Researching this whole colour thing I felt like a real newbie; it is quite a big field with a number of people working on it, so I'm sure there must be some established ideas for this kind of thing.
Just a quick update to show I've just started getting the GUI working for texture tools. The node editor is still a work in progress but it seems to do the job.
I still haven't done any significant work on the methods yet but that is the fun stuff .. I've been getting the boring interface and framework working first. There are just some simple test methods as yet, I will be adding more complex blending etc tech soon. It is now pretty easy to add new methods (like plugins) without worrying about the rest of the program.
After attempting to texture some models recently, using projection painting in 3d paint, the new healing brush is proving fantastic at healing up those edges between projections, but I must admit I get very frustrated trying to find suitable reference images. Part of the problem I have decided is that I'd often like to be able to have larger, more homogeneous areas of texture to clone from.
Given that I have a reasonable healing implementation working, it struck me I should be able to have some algorithms for doing this little job for me, to provide better source material for painting. I thought about putting this ability directly into 3d paint, however, it seemed to make more sense to do a separate small utility app for this kind of thing, which might be useful to more people.
So, eager to not make the major mistake I made with 3d paint, that of under-engineering the initial program, I decided to make a positive effort and spend a few days building a solid backbone to the texturing program, so it will be easy to maintain and add to in the future.
Instead of making a photoshop like affair, this will be a very focused app, and at the moment I'm thinking in terms of a node based editor with some input textures, and methods, producing intermediate and final textures for export. I'm planning for you to be able to move the nodes in the UI, assign inputs and outputs and parameters.
Although the UI is not yet operational, the framework is getting there and I've implemented a first test method. I decided one useful first pass before other methods would be to equalize the colours across an image. Here is an example, run on a skin photo: left is before, right is after. Bland and boring on the right, but that is what I am going for; it should be easier to clone etc. The way the method works is it first finds the average colour in the entire image, then gaussian blurs the image. For each pixel it then finds the difference between the blurred colour and the average colour, then adds this difference (with a multiplier) to the original pixel colour.
This has the effect of reducing local colour contrast, or increasing colour contrast depending on the sign of the multiplier.
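In code the method boils down to a few lines. This sketch uses a cheap box blur standing in for the gaussian, just to keep it dependency-free; the real thing uses a proper gaussian blur.

```python
import numpy as np

def box_blur(img, radius):
    # cheap box blur standing in for the gaussian (edges clamped)
    h, w = img.shape[:2]
    acc = np.zeros_like(img, dtype=float)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ys = np.clip(np.arange(h) + dy, 0, h - 1)
            xs = np.clip(np.arange(w) + dx, 0, w - 1)
            acc += img[ys][:, xs]
            n += 1
    return acc / n

def equalize(img, radius=2, strength=-1.0):
    # out = img + strength * (local mean - global mean)
    # negative strength flattens local colour variation, positive exaggerates it
    avg = img.mean(axis=(0, 1))
    local = box_blur(img, radius)
    return np.clip(img + strength * (local - avg), 0.0, 1.0)
```

With a flattening strength, a pixel deep inside a uniform patch gets pulled all the way to the global average, which is exactly the bland, easy-to-clone result shown in the example.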
Anyway, obviously loads more methods to come, maybe some using variations of the healing technique from Georgiev's paper. All colours are converted to floats, and can be converted to linear, HSL or LAB colour spaces.
Something I feel doesn't get discussed enough in the world of gamedev, and something that in general we (more often than not) tend to be incredibly bad at, is project management. This isn't something I'm an expert at myself, I get it wrong plenty of times, but at least I'd like to stimulate a little thought on the subject.
One of the most crucial aspects of project management is the interplay between project scope and timescales / scheduling. It doesn't matter whether you are managing an AAA game with a multimillion dollar budget, or a one man bedroom indy game or first game, this all affects you.
Scale & Duration
The first point is that as you increase the scale of a project, the time needed to complete it will increase. You would think that there would be some kind of linear relationship between the scale and the time needed, but in practice there can be a tendency for the time needed to increase almost exponentially with scale.
Real life observations show that it can be incredibly difficult to predict the time taken to complete a complex task. If we were, for example, asking how long it would take a man to harvest a small field of potatoes by hand, we could find out how long it took him to do 1 metre squared, multiply it by the size of the field, leave some time for breaks etc and come up with some rough estimate.
Unfortunately developing software is not like harvesting a field. Some tasks can be made more like this (artwork production for instance), but things like programming are absolutely nothing like this model. You can add more artists but if you add more programmers expecting a productivity increase expect a shock (see the mythical man month).
Couple this difficulty in making time estimates with something called the Dunning-Kruger effect: the tendency of people with low ability at a task to overestimate their ability at it.
This blog post is probably a great example of Dunning-Kruger!
Those people who have little knowledge of a task will tend to be very bad at estimating how long that task will take to complete. Unfortunately, the people typically responsible for making time estimates are exactly those people who tend to have little knowledge of the subject area. This is often management, and / or 'yes men'.
(Sycophants / yes men are those types who follow the strategy of agreeing with everything their boss says, or providing extremely optimistic estimates. This is a very widely used strategy for promotion: constantly reassuring the superior provides short term appreciation, then when things inevitably go wrong, it is usually easy to blame an external event or third party.)
Other types of people who often have little knowledge of a task are beginners, and those choosing to work on something they haven't done before (most of us). So essentially most of us are prone to this Dunning-Kruger effect.
With beginners this almost always shows itself in new posts to forums introducing themselves as knowing nothing about programming or artwork, but expecting to complete a game such as one requiring teams of 50+ experienced staff and years of work, in a couple of weeks. Presumably they expect there is some app for doing this, and they will just press a 'make game' button, after selecting the right parameters. It is easy to see the error in beginners, but this same thing tends to happen with all of us and we have to actively fight against the tendency.
So how long do things really take? Well, a wise man once told me, as a rule of thumb, if you are familiar with the subject area, it will on the whole take at least 3x as long as your estimate.
What does this mean?
Well, if you are in charge of developing a typical 18 month commercial game, you should be aiming for something you believe you can complete in 6 months. No, that's not a prototype, that's the whole thing. Once all the things have gone wrong (and they will) it will easily expand to take the full allotted time. Best case, if it is finished ahead of time, you have extra time for testing and polish.
If you are an indy and you want to complete a game over a 6 month schedule, you should be aiming for something you can finish in 2 months. If you are a beginner aiming for something you can finish in 2 weeks, you should instead be aiming for something you can finish in 1-2 days. Beginners are the worst at estimating, and are usually wildly out, and typically don't yet have the skills to complete what they planned, so often can't finish their original project until months or years later.
How can you battle this effect?
The best advice I have heard to battle these problems is to aim small. This applies to everyone but 100x more to beginners. There are a billion unfinished shelved projects out there in the world. Don't be one of those people.
Choose something you *can* realistically complete, something 3x to 10x smaller than your original vision (depending on your skill level).
If you are a beginner this means start by making tic-tac-toe. Make pac man, make tower defence, gradually increase your skill set so you can make better games (and better doesn't always mean bigger).
If you are learning OpenGL / DirectX by all means learn to make your own engine but be under no illusions that you will make something that others will want to use. Commercial engines now tend to be made by small armies of experienced devs, and have to work under conditions of a huge variety of hardware, with testing and all kinds of considerations you haven't even thought of.
If you are an indy and you dream of making the next amazing rpg to rival the big companies, have a think, how much artwork, how much sound, how many voiceovers, how many scripts, how many levels do you need to create? Aside from the programming, creating the assets for such games is a major undertaking. Have a look at the credits list on a game you would wish to emulate.
If you are a company with possible funding, be very realistic about what you can create in your allotted 18 months. Aim to be leveraging and reusing your own and others' technology, because to create everything from scratch every time is just not possible. Aim for that 6 month timescale, get estimates from all the tech people (not just from 'yes men'), and use them all to make a balanced decision on what you think is realistic.
It's been a few days since I put my latest alpha of my entry for the Tower Defence challenge on itch.io and my project page:
I think I've covered the requirements for the challenge, and made the game a bit above just the requirements so it is a bit more fun to play and has some longevity.
The reason I entered this time is because I'd been watching the previous challenges with a little envy, and had been waiting for one that seemed simple enough (I think the last one I looked at had multiplayer and I knew that could be a bit of a bag of worms). My usual low level c++ / opengl approach would probably be overkill for a small / low timescale game, so I decided it would be a good opportunity for me to try out Unity engine, which a lot of people are using currently.
What went right
1. Using Unity
Rapid development, well suited for this type of small game.
2. Attempting to get as much of the challenge completed asap, then leaving further time for more features / polish.
I finished much of the base functionality in the first week, then spent time on and off in the next few weeks just making it better.
There are lots of advantages to getting something 'finished' up front, and this is a development model I am trying to move towards.
You can 'call time' at any time, and still have a functional product. Unforeseen events always seem to appear and limit the time you can spend on a project. This approach guarantees that even in this situation you will still have a 'product' rather than a half-done version of your 'glorious vision'.
3. Using the asset store, not building all the models myself, and using sites such as freesound for the sound, and creative commons music.
For small learning games such as this it didn't make sense for me to make the assets. I know it takes me 2/3 of the time to make artwork etc, and while I am improving at it, I am better at (and enjoy) programming more than making artwork.
4. Finding some good tutorials to learn Unity (then throwing out their approaches!).
There are some great tutorials out there (brackys for instance), and these are good for learning unity specific stuff, but in some cases I could instantly see better ways of doing things. I put this down to many tutorials being pitched at total beginners, who are happy to get anything on the screen. But e.g. using Unity editor to lay out levels just seemed ridiculous and limiting.
What went wrong
1. C#.
I hate it, absolute abomination of a language. I spent more time than should ever be necessary screaming at the damn thing, it makes visual basic look like Shakespeare. I could write a whole blog post just on the things about it that make me seethe, but yeah, if I could avoid ever having to use it again, that would be great.
2. Monodevelop.
Yeah, see point 1. Pretty bad. I might have to see if I can get another editor working if I use Unity again. I hear VS Code may be worth a go (I'm on Linux). Monodevelop seemed really keen to reformat my code in stupid ways I couldn't turn off, and kept trying to autocomplete words incorrectly (which I also couldn't turn off).
3. Lack of debugging support.
This may have been due to my setup, it might not be straightforward to get debugging working on Linux (I'm assuming with Unity it is possible to do step by step debugging?). This meant huge problems debugging anything but the simplest code (I had to resort to lots of Debug.Log statements).
4. Unity editor.
I'm not really a drag and drop sort of guy. I tried to avoid having half the game 'code' being a particular setup in the drag and drop editor. I'm not even sure how to backup that stuff, I'm sure if I'd have had a crash I could have lost the lot. Come to think of it, did I have to backup all the assets too? With all that .meta stuff? I don't know. At least with code you can zip it up small and keep lots of backups. There should be an option in the menu to save your entire project in a compressed form without all the bloated assets etc, just the stuff that is a pain to lose.
5. Unity build times.
I had massive problems with excessive build times taking hours when changing platform particularly, it kept baking lightmaps (or maybe something with shaders?) when as far as I knew I had tried to turn them off. Eventually more by luck than judgement, I found that deleting some skydome assets I had imported and deleting (rather than turning off) an extra light finally cured the problem. Far too little debugging info is given out by the build tool logs, to enable you to know WHY your builds are taking hours. Googling reveals I was not the only one with this problem. Don't just tell me 'baking lightmaps', tell me which light is causing this, which objects etc etc.
Overall I found the challenge very worthwhile. There are several of us working on it, and bouncing ideas around and spurring each other on works very well. Also a little hint of friendly competition is good too!
I managed to get a fair basic grounding in Unity, and have a better idea of whether it would be worthwhile using in any future projects. I may use it for a couple more small games, or evaluate some more current engines (Unreal, or perhaps something more code orientated).
Doing such small projects is also great for experiencing and practising the whole development cycle including release and marketing. This can give a much better perspective on how much time you should invest in different stages, and improve your ability to schedule / finish larger projects. It is something I would recommend to beginners through to advanced developers.
After an initial burst of energy last month, I knew I wouldn't have so much time available for working on Tower Defence challenge up until the June 30th finish. However I did manage to get much of it working already and now I am mostly at the 'finishing up' stage.
More music tracks
I have added some more game modes. The 'normal' mode is a scripted set of 12 levels, 'random' generates 20 random levels increasing in difficulty, and 'budget' is a shorter game of 6 random levels where you are given an estimated amount of money to beat all the enemies at the beginning, and you can't win any more cash, so you have to make it last.
I have made some effort to make the game cross platform. It works well on Linux, Windows and Android now. To that end there is a certain amount of 'designing for the common denominator'. I had to scale back the particle effects so that it would work well on tablets and mobile phones. If I was spending more time on it I would scale the number of particles according to hardware.
Graphics Quality Setting
Inside the game pause menu there is now a performance slider where you can trade off graphics quality against speed. It sets the unity GraphicsQuality setting. This works well, although I did encounter a snag on Android: the higher quality settings on some devices were causing complete graphical corruption, possibly when switching anti-aliasing. While I could try and blame this on Unity, I do know that devices can be bad at reporting their capabilities, and you often have to just 'try it and see what happens'.
What was however a design oversight is that when you set a GraphicsQuality that causes corruption, while you can quit out of the game from the OS, Unity actually SAVES your choice of graphics quality, so the next time you run it it is also corrupted. The only way around this (on Android) was to go into App settings and delete the cached app data. Clearly this was not ideal for users.
As a bodge compromise, when you originally start the game it reads the default GraphicsQuality, persists this separately, then resets to the default every time you run the game. This means any changes to quality are lost on each startup, which is annoying, but better than the graphical corruption. Another alternative would be to get the user to click on an 'OK' box within 15 seconds to confirm that graphical changes worked, but I did not have time for this.
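The startup logic boils down to something like this little sketch (the engine class and pref names are invented stand-ins, not Unity API):

```python
class StubEngine:
    # stand-in for the engine's quality API (names invented)
    def __init__(self, quality):
        self.quality = quality
    def get_quality(self):
        return self.quality
    def set_quality(self, quality):
        self.quality = quality

def reset_quality_on_startup(prefs, engine):
    # first ever run: record the engine's default in our own settings;
    # every run after: force quality back to that recorded default, so a
    # corrupting choice saved by the engine can never persist
    if "default_quality" not in prefs:
        prefs["default_quality"] = engine.get_quality()
    engine.set_quality(prefs["default_quality"])
```

The key point is that the recorded default is persisted separately from the engine's own saved setting, so whatever the player (or a crash) leaves behind, the next launch always starts from a known-good state.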
Gamepad / Keyboard input
While a lot of tasks have gone very well, I spent a lot of yesterday and today getting keyboard and gamepad input working. Most people will play the game using mouse, or touchscreen, and it works best with these. However my Android TV box has no touchscreen, and I find it annoying how few Android game developers support other means of input.
I had previously managed to get gamepad input working on Android using Android SDK via java / NDK, however configuring input on Unity seems rather overcomplicated. I read several recommendations that the Unity input system was poor, and it was better to buy InControl or Rewired assets to do this.
I tried the free version of InControl but couldn't get it to recognise my gamepad. So I went back to the in built Unity InputManager and finally managed to (barely) get it working. The selection of joystick axes seems non-standardized, so I have no idea whether my setup will work with other joysticks / gamepads, not having the time or facilities for further testing.
As well as simply getting a gamepad working, there is also the issue that the game was designed for mouse.
I had hoped that navigating the menus would be a simple affair, but it took me a while to work out how to select a default menu item when entering each menu screen, without which you could not change the selection. Many people I googled had had similar problems, I am not clear on the 'correct' way to do it. In addition many of the UI items did not show a large enough change in colour to indicate that they were highlighted, and I had to change this manually.
Gameplay with keyboard / gamepad
The biggest hurdle was how to place towers in a game designed for you to pick a spot with the mouse? I decided to have a cursor which you can move up down left and right across the map. This was a bit fiddly to get working with the input, and is still not perfect, but does allow you to play the game, albeit not as efficiently as with a mouse / touchscreen.
Finally I had to cover tower selection via keyboard / gamepad. To do this I simply had a button cycle through towers available, and had to put in an indicator arrow to show which was currently selected.
I have now tried it and it even works well on my Android TV box with gamepad, which has surprised even myself. Performance has been very good, even on low power devices. That is probably largely down to careful design due to me being already familiar with performance bottlenecks on this hardware (fill rate is usually the biggest issue), and using mostly simple mobile diffuse shaders.
I suspect from now until release I will mostly be concerned with testing. The input in particular has introduced all kinds of subtle potential bugs from different combinations of input devices. I would ideally like a high score table but handling virtual keyboard input on different platforms puts me off, unless I use a simple initialing system.
This is my second alpha version for my gamedev Tower Defence challenge entry. Fingers crossed the links will work.
Linux64 (40 megs)
Win32 (32 megs)
Android (49 megs)
I did miss out on several days because my internet got taken out due to a massive lightning storm, and Unity editor refused to run without an internet connection...
However new features:
3 new enemies .. bird, spider and boss
2 new towers .. ballista, plasma tower
Each level is now scripted, each wave and enemy
Some terrain improvements
Projectile models (bullet, cannonball, plasma, bolt)
Tooltips on tower buttons
Lots of changes to game rules / balancing
Templates for each tower and each actor type
Player has 3 lives
I don't think I gave enough cash for the first level in this build, but you can win usually by building 3 of the first cheap tower and 1 of the second tower clumped together in a group, that way the towers can defend each other and they aren't too expensive. Once you get past the first level the cash reward for completing the level is quite generous.
Android Build - edit
After some considerable effort, I finally got an Android build working. The build kept hanging while trying to bake lightmaps (which I had been trying to turn off), but after several hours experimentation and changing options randomly I finally got it working.
The APK size is nearly 50 megs, no idea why larger than even the linux build. But it does run, choppy on my old tablet when there's much action happening, but shows potential. I think the particle effects will be slowing it down so I'll add options to turn them down.
Also with a tablet touchscreen there is no preview of whether you can build on a spot because there is no 'mousemove' equivalent, only a touch, which is like a click to build. [Also I'll have to scale up the tower selection buttons on mobile because it's easy to click to build instead of hitting the button when you don't have a mouse.] FIXED
In the spirit of release early and often I have a playable alpha version of my challenge entry. I am not quite sure where is reliable to upload to so I have been trying dropbox which I've had issues with in the past, fingers crossed these links might work, please let me know if the links work / it installs and runs. The zip file is about 30-40 megs and doesn't require any installation, and is typical unity fare.
Linux 64 bit:
Windows 32 bit:
I've been developing on linux but just tested the windows version on my ancient laptop and it seems to work.
Very much a test version, I need to put in credits for things like the music and assets (need to work out how to do scrolling credits).
When you start choose new game. You start with a certain amount of cash, and enemies start appearing in waves. You must build towers to destroy them before they destroy your base.
Click one of the 3 tower types to select it for building, then click on a location that comes up gray to build. Each tower will cost you money to build and to repair. You get cash when you kill enemies and on completing a level. The enemies will move down the path towards your base but will also attack towers. If a tower is destroyed you can click on it to repair it. The enemies get so close to towers when attacking that the towers cannot defend themselves, so you often have to build towers in pairs (this was more by accident than design lol).
There are currently 3 camera types, and you can toggle between them by clicking the magnify glass icon. Overview shows the whole map from above, area mode tries to show the area where the action is taking place, and follow cam will follow the enemy of your choosing closeup. You can select which enemy to follow by clicking close to it on the path.
The path the enemies follow is darker than the rest of the terrain. You can't build on the path, or on squares that have buildings or other props on them (these will indicate red). This means you have to be strategic about where to build towers.
There's still quite a bit I intend to add, like multiple lives, high score table, balance the difficulty as you progress, more towers / enemies.
After a few days away from home I've done a bit more on the game. I must admit I'm kind of looking forward to wrapping up and working on the next thing, so am thinking more in terms of what I need to finish.
The towers now have separate logic and firing / sounds. There are just 3 so far but I'll probably add another 2 or 3. Don't know if I have the energy to put in an upgrade path for tower technology .. maybe if people play it lol.
The enemies now attack towers as well as your base. The AI for this is pretty simple, and they have no collision detection, so just go for the tower that attacked them. Maybe I will put in collision detection but it potentially opens up a barrel of worms (avoiding stuff) so I may leave on the todo list.
Towers have health and once they are destroyed they stop working and have smoke coming out. You can repair them by clicking, this costs money in proportion to how damaged they are.
The building placement is a bit better now, there is a zone around houses where other buildings will not be placed so they aren't too close together. The buildings and props serve no function as yet except they prevent you building towers on that spot, so you have to be a little strategic to where you build.
As I'm a little worried the unity terrain is not practical, both in terms of installation size and performance, I've just today put in some procedural texturing of the ground block underneath the play area, which I am using as an alternative to unity terrain. It looks okay now. After some research into how this was possible, I did it with software texture splatting, pre-creating the texture on level load.
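The core of software splatting is just a per-pixel weighted blend of tile textures into one baked texture. A minimal sketch (how the weight maps are generated, e.g. from noise or height, is omitted):

```python
import numpy as np

def splat(layers, weights):
    # layers:  (n, h, w, 3) candidate tile textures (grass, dirt, rock...)
    # weights: (n, h, w)    per-layer weight maps (from noise, height, etc.)
    # returns the baked (h, w, 3) texture: a per-pixel weighted blend
    w = weights / weights.sum(axis=0, keepdims=True)
    return (layers * w[..., None]).sum(axis=0)
```

Baking this once at level load is what makes it cheap at runtime: after the blend, the terrain renders as a single ordinary texture.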
On the terrain front I plan to vary this and the buildings / trees etc as you progress through the levels, so it doesn't look all the same.
I should probably also put in a high score table, as it feels a bit of a let down when you get a long way then fail a level. But that depends how easy it is to put in a keyboard to type in a name, as it might not be played on a desktop. Actually I've no idea if it will work on other platforms but I'm hoping unity will handle most of that automagically. The controls are pretty tablet friendly so far.
A big issue to confront will be installation size. I'm not sure yet what this depends on but it seems ridiculous when I've done test builds (187 megs last time). Hopefully I can get some kind of build log to find out what is being included in assets and strip out a load of unneeded stuff and oversize textures etc.
Just posting this quickly as I will be off for 5 days or so and unable to access PC / work on game, so a little update.
Got sound working; it needs some tweaking and I need variations for the sounds, but the programming side is all done. I have a sound manager that pools sound sources, don't know if that is more efficient than putting sound sources on all the gameobjects. I also figured out how to put animation events on the animations to play sounds in sync (footsteps, sword etc).
Made a couple more quick towers in blender, although there is no separate logic for the towers yet, they are all firing same bullets etc.
Changed from using my native huts to some medieval assets from Unity asset store. As these houses are bigger than one grid square there is now a bit more complex logic for placing them, rotating them until they fit on the map. Also the waypoints now lead to the front of the big building which will be your base (I might change the building model though).
And added some particle effects. Made a pool system for these, and got the effects themselves again from asset store. There are small explosions, smoke (for your base when being hit), flames, and muzzle flashes, and blood (although this is my old test particle system I made).
Did a little tweaking to the area camera to make sure more of the relevant action was 'in frame'.
Although all the towers are firing the same so far, I will have them fire different projectiles at different speeds / damage / range. And maybe have some of the enemies attack the towers so you have to repair them.
I've got a feeling play balancing might be a bit time consuming at the end, making it not too hard or too easy and having it get increasingly difficult as you progress...
Finally made a video:
This doesn't have the terrain showing; I figured out it is quicker to make a build with the terrain switched off. The terrain also increases the filesize quite a bit, so I might drop it, not sure yet.
I put in support yesterday for more than one enemy type; I have got a third type but haven't put it in yet. These will have different strengths / health / speed. I also spent ages yesterday debugging a nasty c# references bug, the first time I've had to do any major debugging, and it was difficult because I only have debug.log statements to rely on, as yet I have no debugging in monodevelop.
The reason for the bug was that I'm using pooling for my game objects, so as not to new and destroy unity objects; they are just reused, with position and visibility switched. So I have an intermediate 'reference' (another reference!) actor which stores which pooled actor of each type is in each actor slot. This makes the actor adding and deleting code slightly more complex. Anyway, when deleting an array element I switched the last array element with the one to be deleted, then decremented the count. However, in c# the = operator does not copy the data like in c++, it copies the reference, so all kinds of unpredictable behaviour was resulting. Anyway, bug solved, crisis averted!
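The pattern and the pitfall look like this in miniature (Python classes have the same reference semantics as C# classes, so the same trap applies; names are invented for illustration):

```python
class ActorSlot:
    def __init__(self, pooled_id):
        self.pooled_id = pooled_id   # which pooled actor this slot refers to
        self.active = True

def remove_at(slots, count, i):
    # swap-and-pop deletion: overwrite slot i with the last live slot and
    # shrink the count; slots beyond the returned count are garbage.
    # NOTE: this is a *reference* copy, as with C# classes -- after it,
    # slots[i] and slots[count - 1] are the same object, so mutating one
    # "slot" mutates the other. That aliasing was the source of the bug.
    slots[i] = slots[count - 1]
    return count - 1
```

The fix is either to copy the fields explicitly (value semantics) or to never touch a slot past the live count again.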
I also put in some basic auto cameras. They work pretty well. There is an overview of the whole board, which does not change, a follow cam which zooms in on a particular actor, and an area cam, which treats all active actors as an area to focus on. The follow cam actually follows a point just ahead of the actor, because there is a delay from the smoothing. The area cam needs a little tweaking to get better.
Next I need the other enemy type, and different tower types. And I need to put in a special big building or something for your base, maybe with some particle effects to show it being destroyed.
I actually got to spend more time on tower defence game today. Things I got done
In game UI - cash, enemies remaining and player base healthbar
Cash system, paying for towers, bonus cash on completing level and killing enemies
Waves of enemies (very rudimentary)
Main menu and options menu, and changing from game to menu scenes
Animations on completing a level, or having base destroyed
It is all coming together now for the most basic version.
Main things I still have to add
Different enemies, wave variations
Upgrading and repairing towers
But the basic version is almost completely playable now, I just have a small bug where it doesn't preserve cash between levels.
The healthbars are also showing through a couple of the animations but I'll just hang them off an empty node tomorrow and cull them.
I am quite pleased with what I have achieved in a week of part-time work, with most of that spent watching tutorials and learning. Much I learned / copied from the Brackeys tutorials in terms of the basics of using Unity (https://www.youtube.com/user/Brackeys/videos), including some of the user interface stuff today.
I can't wait to get some funky cameras going and perhaps doing some videos.
Just a quick one, and not a real day, as I only got to spend an hour or two in the morning yesterday before another day trip. In fact none of these 'days' are really days of work, just on and off playing with the game in between other tasks.
I changed from the overcomplicated paths to just generating some random waypoints, and having Manhattan walks in between them. To prevent paths crossing I simply make sure each random waypoint is further along the y coord towards the destination. This seems to work fine for now; I will refine it at a later stage.
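In sketch form (Python, with hypothetical grid dimensions and function names), the trick is to sort the random y coordinates so each waypoint is strictly further towards the destination, then join consecutive waypoints with an axis-aligned walk:

```python
import random

def make_waypoints(n, width, height, rng=random):
    # strictly increasing y means the path can never double back,
    # so the Manhattan segments between waypoints cannot cross
    ys = sorted(rng.sample(range(height), n))
    return [(rng.randrange(width), y) for y in ys]

def manhattan_walk(a, b):
    # axis-aligned walk from a to b: cover x first, then y
    # (assumes b is further on in y, which make_waypoints guarantees)
    (x0, y0), (x1, y1) = a, b
    step = 1 if x1 >= x0 else -1
    cells = [(x, y0) for x in range(x0, x1, step)]
    cells += [(x1, y) for y in range(y0, y1 + 1)]
    return cells
```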
I tried out the unity terrain system in order to make the environment around the play area a bit more interesting.
I figured out how the asset store worked, and have decided to replace my old native model with some free character assets. Using free stuff should be fine as this is just a test game anyway. And I'm kind of hoping that if these models are rigged right then I might have more luck getting ragdoll working.
The terrain thing I'm not sure about. It was quick, but I've somehow broken the shadows (will have to investigate). And I did a first 'build' of the game and it came out at something ridiculous like 82 megs, which may have been due to the terrain. This is one of the issues I have with things like Unity: it's very easy, with a few mouse clicks, to end up with something far from optimal for rendering, and hugely bloated. Couple this with the fact that many people have powerful development PCs, and it gives them a false sense of what is possible.
Of course this was just a first play with the terrain to evaluate it; I'm not sure I will use it. It would be nice with some gaps for lakes. One thing is that to make the levels different I'd probably have to figure out how to alter the terrain procedurally, so suddenly something that was an added 'extra' can potentially become a time sink... anyway, it is not a necessary element so I won't spend too long on it.
Edit : Got the shadows working again, it was just some shadow distance setting in the quality section.
Actors sink into ground when dead
Auto generation of waypoint paths / building plots
Failed trial with ragdoll
Run animation working
Watching tutorial videos for user interface
Selection of spots for building
UI for selecting which towers to build
Build on mouse click (with fail when full already)
It's felt like progress has slowed down, however this is partly an illusion because it is easier to see progress when graphical things change and not when behind the scenes things change. However I did have several problems with basic C# stuff (probably because I haven't really got the time or inclination to learn the language).
Ideally I'd like one of these rapid engines that was programmatic rather than GUI based. Drag and drop is great for non-programmers, but I'm just finding it painful: watching 5 seconds of a tutorial video, alt-tabbing to Unity, finding the exact spot the guy clicked, back to the video, etc. And one that works with C++. I did briefly try Unreal a year or so ago but it was so bloated it hardly ran on my old machine.
Incidentally I'm finding the videos by Brackeys on YouTube very useful, even if some of the stuff like the lack of enums makes me groan, I know it is aimed at beginners. He goes through topics rapidly so it's less boring and time wasting... https://www.youtube.com/watch?v=IlKaB1etrik
Anyway .. back to the game. The most interesting bit has been deciding how to make paths for the enemies and plots of land for building on. There's probably a great algorithm for this, but most of the tutorials had manual map creation, so I just had a quick stab at it. At the moment I create a bunch of rectangular 'blocks' or plots of land on which to build buildings, which are not allowed to be walked on. Then I run the Floyd-Warshall algorithm to find all the paths from every cell to every other (this is simple to implement). At the moment I just choose 2 far-off points for the enemies to run between to create the waypoints. But with all the paths calculated, the enemies could theoretically spawn anywhere.
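For anyone unfamiliar with it, the Floyd-Warshall part really is short. Here's a sketch in Python rather than my C# (the cell indices, weights and the 'next hop' table for reconstructing a waypoint list are illustrative, not my actual code):

```python
INF = float('inf')

def floyd_warshall(n, edges):
    # all-pairs shortest paths over n cells;
    # edges: (u, v, w) undirected walkable connections
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    nxt = [[j if i == j else None for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = dist[v][u] = w
        nxt[u][v] = v
        nxt[v][u] = u
    for k in range(n):                  # try every cell as an intermediate hop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def path(nxt, u, v):
    # walk the next-hop table to recover the cell-to-cell path
    if nxt[u][v] is None:
        return []
    out = [u]
    while u != v:
        u = nxt[u][v]
        out.append(u)
    return out
```

With the `nxt` table precomputed, turning any spawn point / base pair into a waypoint list is a cheap lookup, which is why enemies could spawn anywhere.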
I tried last night to get ragdoll working as it would be kind of cool to have the enemies thrown off their feet when they get hit. But I got loads of tearing with my model, something in the export it doesn't like so I may spend more time on this when I get everything else working. Getting skinned characters exported from blender in 'just the right' way seems to be a bit fiddly. Ragdoll would be kind of cool if I get to firing monkeys out of cannons with physics.
The structure of my game code still seems a tad messy, with some duplication of similar classes, because I'm learning Unity. I'll have to clean it up quite a bit if I release the source at the end.
Another day with more time spent wrestling with Unity rather than writing much code .. there doesn't seem to be a great deal of consistency in the commands to do things with the engine, and I seem to spend a lot of time googling how to do something simple (like turning a particle effect on and off, or getting access to it). Of course I'm sure this gets better with familiarity with its quirks. I'm gradually trying to get used to MonoDevelop but I can't figure out how to fix its atrocious text navigation with ctrl-arrow left and right. I've also worked out how to put things in separate C# files, to make things a bit easier to navigate; it isn't like #include in C++.
I had started getting enemies running between waypoints last night. So today I added:
Enemies turning to face the direction they are running
Firing bullets (or projectiles) from guns
Aiming system for the towers
Healthbars for the enemies
Enemy alive / dead states and respawn.
Blood particle system for hits
The aiming system is quite fun. As the bullets travel slowly, if the towers fire when aiming at a moving enemy, they usually miss. So the aiming system uses some maths to predict ahead of time where the bullet will hit the enemy given its velocity and the velocity of the bullet. This actually works when the enemies are running in a straight line, but all bets are off if they turn a corner.
This can work well with different tower types firing bullets of different velocity. Higher velocity bullets are more likely to hit, however lower velocity bullets could do more damage?
At the moment the calculations are 2d but it could be fun having some 3d projectiles launched into the air, maybe with area effects.
The placement of waypoints is manual so far, so I want to write an auto system for this, and make sure that towers / buildings will not be built along the path (unless maybe placed by the player as a block).
(shamelessly ripped from https://gamedev.stackexchange.com/questions/14469/2d-tower-defense-a-bullet-to-an-enemy)
// Aiming with slow bullets has to take account of the fact the enemy is moving.
// So we use some maths to predict where the enemy will be when the bullet hits,
// and the correct angle to aim 'ahead'. This works when the enemy is going in a straight
// line, however when it turns a corner the shot will probably miss.
Vector2 ptT = main.m_Actors.m_Actors[m_TargetActorID].m_ptPos;
Vector2 velT = main.m_Actors.m_Actors[m_TargetActorID].GetVelocity();
Vector2 totarget = ptT - m_ptLoc;
float a = Vector2.Dot(velT, velT) - (m_VelBullet * m_VelBullet) ;
float b = 2 * Vector2.Dot(velT, totarget);
float c = Vector2.Dot(totarget, totarget);
float p = -b / (2 * a);
float q = (float)Mathf.Sqrt((b * b) - 4 * a * c) / (2 * a);
float t1 = p - q;
float t2 = p + q;
float t;                     // time to impact: pick the smallest positive root
if (t1 > t2 && t2 > 0)
    t = t2;
else
    t = t1;
Vector2 aimSpot = ptT + (velT * t);
Vector2 bulletPath = aimSpot - m_ptLoc;
// unused as yet...
// float ticksToImpact = bulletPath.magnitude / m_VelBullet;
float target_angle = MaCommon.VectorToAngle(bulletPath);
// move the aim towards target angle...
SetYaw_Funky (MaCommon.SmoothAngle (m_Yaw, target_angle, 20.0f));
Well suffice to say that finally getting some sunshine and nice weather here in the UK is anathema to making any kind of progress in game development, as I spent the day motorbiking in Wales. In fact, my mum suggests that part of the reason the UK does well in intellectual pursuits is because we have such godawful weather, and you have to find something to do while you are trapped indoors for 360 days of the year.
But today I managed a second day of wrestling with Unity and MonoDevelop. I'm developing a passionate hatred of both, particularly MonoDevelop, as the defaults seem to screw with every bit of text formatting, and there seems to be no obvious way of turning all the auto stuff OFF. But it does seem to give some IntelliSense, something that isn't properly working in Geany. And given that debugging these C# scripts has so far been a nightmare, I'll be thankful for having IntelliSense.
Apparently you can do stepping through scripts somehow, but I haven't worked it out yet, and am having to rely on Debug logs (which seem overly verbose), and running the thing, changing a line, running again etc. It's like being back in the 80s.
Anyway despite this I'm gradually making progress. I still haven't a clue how C# really works, how it handles references to objects etc, so it's a miracle anything is working really.
I made a very dodgy placeholder gun turret in blender and got it imported, and put in a framework for the towers, an array to hold them, and fixed update and frame updates to do their logic and interpolation for rendering.
Took far too long debugging to get Atan2 working for orientations to point the guns at the enemies, but finally it seems to be working. The guns smoothly move to point at their target enemy.
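The gist of the orientation maths, sketched in Python for brevity (Unity's `Mathf.Atan2` behaves the same way): atan2 takes dy and dx as separate arguments, so it gets the quadrant right where a plain atan(dy/dx) would not, and it copes with dx being zero. The function name and degree convention here are my own illustration:

```python
import math

def yaw_to_target(gun_pos, enemy_pos):
    # angle in degrees from the gun to the enemy; atan2(dy, dx) handles
    # all four quadrants and the dx == 0 case that atan(dy/dx) can't
    dx = enemy_pos[0] - gun_pos[0]
    dy = enemy_pos[1] - gun_pos[1]
    return math.degrees(math.atan2(dy, dx))
```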
For the enemy I've just imported one of my native models, he has a run animation but I haven't got it playing yet, I'll worry about that later.
The next stage I guess is to start making some paths and get the enemies to move along them towards the player 'base', then get the towers firing some projectiles. I could use spears, or monkeys really.
Ok so I needed a distraction, so I might have a go at this tower defence challenge at GameDev. I've been meaning to have a play with Unity .. I'm a C++ guy who tends to do *everything* themselves, so this is quite a change, using a rapid development environment and *gasp* someone else's engine. That said, I'm gonna have some teething issues trying to learn Unity and C# from scratch at the same time as doing the challenge.
So I followed some tutorials last night on youtube and this morning, and I have a tentative feel for how I might be arranging stuff in this drag and drop editor thingy. I still end up having to google how to do every little thing but I'm sure it will get faster.
I want to make a global array of map locations to put your towers in, so each location is either ground for walking, a tower of some type or some decoration. Instead of making all the graphics specifically for it, I thought I'd rip off some of the models from my jungle game. I haven't got anything resembling a tower yet, but I'm sure I can build something quick in blender.
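The map array idea is simple enough to sketch (Python here, and the cell types and dimensions are just illustrative placeholders for whatever I end up with in C#):

```python
from enum import Enum

class Cell(Enum):
    GROUND = 0      # walkable ground
    TOWER = 1       # a tower of some type (the type would live in a parallel structure)
    DECORATION = 2  # scenery; blocks building but isn't a tower

def make_map(width, height):
    # 2D grid of cells, all ground to start with
    return [[Cell.GROUND for _ in range(width)] for _ in range(height)]
```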
The process of importing models from Blender seems a bit clunky (I was hoping to be able to import a whole lot at once) but is just about doable; it doesn't seem to recognise the textures in a blend file so I have to set them up manually.
So I now have it creating a grid of prefabs, it should be easy to import some more prefabs, vary them according to some map generator. Come to think of it I should try playing a tower defence game for research, I don't really know how they work...