## Water

So, I ended up doing the "skirts" method I spoke of in the last post. And in conjunction with the default Urho3D water shader (with a few small tweaks, and more to come to eliminate artifacts on the corners of the water blocks) it actually looks pretty decent. The water animates a noise texture that ripples the ground beneath. It also uses a reflection texture (which I have set to black at the moment) if desired. I might tweak the water shader further, but for now I'm happy with it. I've also got all the small issues sorted out from the change to multi-level water. It wasn't a large change, but I was surprised at how many different parts of the code worked from the assumption that water was determined by elevation < 9. I thought I had contained it better than that, but I did get all the various spellcasting, pathfinding and object spawning oddities worked out.

## Multi-level Water

So, I'm trying to figure out how to do water. Right now, I am doing water the "brain dead" way; any tile below a certain height is water, and water is created as a simple hexagonal plane with a partially transparent blue material applied. It works okay, but the ultimate end goal is to have water play a more involved role in the landscape. I'd like to have rivers, waterfalls, etc... and that means that I need to rethink how I do it. I'm trying to come up with ideas for the geometry of water.

Here is a shot of how the current water system might look if I use it unmodified for multi-level water:

Clearly, I need some sort of geometry to tie the pieces together. My first thought is to create a sort of "skirt" piece that is attached to the side of a water hex if that water hex has neighbors whose water height is lower than its own. Might end up looking something like this:

The issue with that, of course, is that I have to oversize the skirts to avoid Z-fighting with the underlying ground hex, and that means that skirt pieces overlap with adjacent skirt pieces on different hexes and with the water hex of the lower water levels. In combination with the alpha-blending, this creates bands or regions of darker color where two pieces of water blend together. I could use waterfall particle systems to help obscure this overlap, I think. Alternatively, I could use a solid material instead of a partially transparent one:

I don't like the look of it, though. Large areas of flat water look terrible. Granted, there will need to be improvements to the actual water material to make it look better regardless of how I handle the geometry, but for now I'm sorta stuck on how best to do this. I do have some ideas as to how I could perform the geometry stitching of the skirts to minimize overlap, but it'll take the creation of special pieces, and some special-case code. Not too difficult, I suppose, but still seems kinda messy.

## New Gameplay Video

I'm on vacation in California, which means I kinda have some time to work on stuff, but it's split up by blocks of frantic activity. I'll tweak a few things, then head off to Knott's Berry Farm to burn in the sun while the kids ride on rides too small for me. Then I'll fiddle a few more things, then take the kids swimming. So while I'm getting some stuff done, it's all in a sort of disorganized tangle. I did decide to upload a new gameplay video. Once again, it's pretty poor quality. (Some day, I'll own a rig decent enough to make high-quality videos. But that day is not today.)

It's about 10 minutes of random gameplay. I was kinda distracted while playing by a 5 year old who wanted my help building electronic circuits with a snap kit toy he recently got, so there are some pauses. Also, there is still stuttering in some spots, which won't be cured until I fully convert everything from Lua to C++. (That's an ongoing project, but I am getting quite a bit of progress done while here in Cali.)

The stuttering is from the Lua garbage collector, which has been an ongoing problem with the Lua bindings to Urho3D. Throughout development of this game, it has been at times worse and at times better. A recent change (unsure which one) made it worse again, enough that I'm finally going to just convert everything except UI stuff to C++ components.

## Doors, Dungeons, MakeHuman, Bug Fixing

It's been a little bit of an art push lately. First of all, I started work on a dungeon tile set. Up there is my first stab at it. I created a couple different wall variations, a door and a hex-pattern tile ground texture (used in conjunction with existing sand and gravel textures). Don't have anything in the way of doodads or decorations yet. Doors are still kinda tricky. I had a conversation with riuthamus about it. The gist of doors in this game is that a door needs to work with any configuration of walls around it, so trying to do artwork for a traditional-looking door and choosing alternates to match up with the surrounding walls was getting to be too difficult. I had already implemented doors some time ago that utilize portcullis-like behavior: when you open the door, it slides into the ground. Closing it brings it back up again. The door in the above shot works the same. The issue lies in creating a graphic that looks door-like, even though it doesn't look like a traditional door. I'm not sure there's a perfect solution for it. But at least when you hover over a door, a popup appears with the label 'Door'. Hopefully that's enough of a clue for people to figure it out.

I've also started experimenting with MakeHuman. The ogre in this shot is a result of that experiment:

It was a quick effort. I just used some of the clothes provided with MakeHuman (hence the jeans and button-up shirt, articles of clothing that would be quite difficult to obtain in the Goblinson Crusoe universe) and ran some of the various sliders for the mesh deformation all the way to 11 to try to get an ogre-ish form. The experiment worked pretty well, I think, certainly well enough to warrant further experimentation. As a bonus, MH will export a skeleton rig to fit the mesh, though I still have to rig it with IK and animate. As it turns out, I'm still terrible at animating. Who knew?

I spent some more time doing miscellaneous cleanup. Fixed a bug that caused creatures to die multiple times if they died in a round with multiple dots on them. (They would die once for each dot because I wasn't checking for isdead in between dot applications.) Formalized the construction of summoning spells, so that a flashy spell effect is played when things are summoned. Added some flashy effects for things dying. Moved and rearranged some data tables again. You know, crazy shit like that.

## The Weirdness of Turn-Based Games

So, lately I've been working on the DoTs/HoTs mentioned in the previous entry, as well as the framework for ground effects: ignited ground, lava, etc... And in the process I have yet again stumbled upon exactly how weird a turn-based game really is; or, at least, one done in the manner in which I am making this one.

Here's the setup: Goblinson Crusoe is built on an Action Points-based turn system. A Turn consists of 10 Rounds. Each Turn, all entities that want to act are gathered into a list, then each is given an opportunity to act for 10 Rounds. Moving, casting spells, attacking, harvesting loot, etc... these all consume Action Points until all points are used up, or until the unit chooses to end its turn early, at which point the next unit in line is given the chance to act. When all units have acted, the next Turn is started. So, while the units perform their actions consecutively, the abstraction is that these actions are ostensibly happening "at the same time". That's the weirdness of a turn-based game. You take a turn, then I take a turn, but it's supposed to be like these turns happen all at the same time. The abstraction really breaks down upon analysis, and there's really no way to make it better outside of moving it to a real-time simulation.
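The Turn/Round structure described above could be sketched like this. All of the names here (`Unit`, `TakeTurn`, `RunTurn`) are hypothetical stand-ins rather than GC's actual code, and for simplicity one action point is treated as one round:

```cpp
#include <string>
#include <vector>

constexpr int kRoundsPerTurn = 10;

struct Unit {
    std::string name;
    int actionPoints = 0;
    bool wantsToEndEarly = false;
};

// Give one unit its full allotment of rounds, consuming Action Points
// until they run out or the unit chooses to end its turn early.
int TakeTurn(Unit& u) {
    u.actionPoints = kRoundsPerTurn;
    int roundsUsed = 0;
    while (u.actionPoints > 0 && !u.wantsToEndEarly) {
        // A real unit would spend points on moves, spells, attacks, etc.
        // here; this stub just burns them one round at a time.
        --u.actionPoints;
        ++roundsUsed;
    }
    return roundsUsed;
}

// One full Turn: every gathered unit acts consecutively, even though the
// fiction is that they are all acting "at the same time".
void RunTurn(std::vector<Unit>& units) {
    for (Unit& u : units)
        TakeTurn(u);
}
```

The consecutive loop in `RunTurn` is exactly where the abstraction leaks: every unit gets its whole 10 Rounds before the next one starts.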

One way that this flawed abstraction bit me in the ass is with these DoTs/HoTs and ground effects. You see, in this system, time only really passes for a unit if that unit actually acts during a Turn. For example, control passes to the player who moves 10 hexes. That's 10 rounds worth of time, and at each step an event is fired to indicate a round has passed. DoTs, HoTs and time-based ground effects wait for these events in order to know when to do their thing. After all, you can't have an over-time effect without the "over time" part.

The problem I ran into is that while mobs such as enemies, GC, GC's minions, etc... are all active, acting objects, some things that otherwise can take damage are not. Things such as walls and doors, which have no active combat capability. They block tiles until destroyed, that's it. And the weirdness that resulted is that these units were effectively immune to DoTs. No time passed for them, so no ticks of damage were ever applied.

That's not what I wanted. I mean, obviously, a burn effect should burn a wooden door down.

It took a little bit of figuring to come up with a workaround that didn't completely break the system. The workaround is that walls and doors and such ARE active objects, but they have a different aggro component and an ai controller that does only one thing: ends its turn. The normal aggro component collects damage events and tracks them according to various damage sources, and if any damage was taken or any proximity events occurred, it signals that the unit is ready and willing to act that Turn. The fortification aggro component, however, only tracks DoT/HoT and ground effect events. If any of those are applied to the unit, then the unit wants to act. If they all expire, then the unit no longer wants to act. In the case of fortifications, "acting" means to simply end its turn without doing anything. End Turn will cancel out any remaining Action Points, add them up, and send off a Time Passed event for the amount remaining, meaning that an entire 10 Rounds worth of time will be sent for the unit at once. The result is that if any fortification is hit for periodic damage in a Turn, then the camera will pan briefly to that unit while the damage numbers tick off, then will move on.
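A minimal sketch of that "act by ending the turn" trick might look like the following. The names (`Fortification`, `Dot`, `EndTurn`, `OnTimePassed`) are illustrative assumptions, not the actual components; the point is that a wall flushes all of its remaining points as elapsed time in one lump, so any DoT on it gets a full Turn's worth of ticks at once:

```cpp
#include <algorithm>
#include <vector>

struct Dot {
    int damagePerRound;
    int roundsLeft;
};

struct Fortification {
    int hitPoints;
    std::vector<Dot> dots;

    // The fortification aggro component only cares about over-time effects.
    bool WantsToAct() const { return !dots.empty(); }

    // "Acting" for a wall: spend nothing, and convert every remaining
    // Action Point into a single lump Time Passed event.
    void EndTurn(int remainingPoints) { OnTimePassed(remainingPoints); }

    void OnTimePassed(int rounds) {
        for (Dot& d : dots) {
            int ticks = std::min(rounds, d.roundsLeft);
            hitPoints -= ticks * d.damagePerRound;
            d.roundsLeft -= ticks;
        }
        // Expired dots drop off, so the wall stops wanting to act.
        dots.erase(std::remove_if(dots.begin(), dots.end(),
                                  [](const Dot& d) { return d.roundsLeft <= 0; }),
                   dots.end());
    }
};
```

With this shape, a wall with no dots on it never enters the turn order at all, which is what keeps the system zippy.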

It doesn't really feel optimal, but I guess it's about the best I can do. The turn-based system in GC is fairly zippy, and simulation speed can be increased if desired, but it's still not great that the player has to spend some few seconds of a combat Turn watching DoTs hit all of the walls engulfed by the last Inferno spell he cast. But still, it seems to work okay.

It's always nice when I get these relatively large, relatively unbroken blocks of time in which to get things done. And it's always sad when they come to an end. Night shifts at work start tonight, and I'm working some extra days in the next couple weeks, so this thing'll probably be put back on the back burner once again. :(

## DoTs, HoTs, Aggro Bugs, Smarter Siege-breaking, Banners

Added a quick text banner on level load, to give the player a hint or reminder of the scenario type. While right now the scenario type is "random collection of enemies scattered across a randomized terrain", eventually I'll have different scenarios such as "boss fight", "base assault", "base defense", "dungeon crawl", etc... This, I hope, will help to provide variety in the kinds of gameplay a player can take on.

A couple days ago, I implemented a DoTs and HoTs system: damage-over-time and heal-over-time. Spells and effects can apply dots or hots (either timed or permanent) that deal or heal a set amount of damage of one or more specified types. The way the system works is that whenever a unit is acting (moving, casting, attacking), the amount of "real" turn time (i.e., actual action points used, not amounts modified by speed effects) is accumulated and used to apply damage or healing whenever time accumulates to one round or longer. So, as the unit is walking along, periodically there will be damage applied or healing done.
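The accumulator described above can be sketched in a few lines. This is a hypothetical reduction (the name `OverTimeEffect` and the sign convention are mine, not GC's): real action-point time accrues, and every time a whole round's worth has passed, a tick fires.

```cpp
struct OverTimeEffect {
    double perRoundAmount;    // negative for damage, positive for healing
    double accumulated = 0.0; // fractional round time carried between actions

    // Called with the "real" AP cost of each action the unit performs.
    // Returns the total amount to apply for any whole rounds elapsed.
    double Accumulate(double realTimeSpent) {
        accumulated += realTimeSpent;
        int wholeRounds = static_cast<int>(accumulated);
        accumulated -= wholeRounds;   // keep the fractional remainder
        return wholeRounds * perRoundAmount;
    }
};
```

Carrying the fractional remainder is what makes several small moves add up to a tick, instead of each cheap action being rounded down to nothing.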

Now, the DoT/HoT system is working pretty well. However, in the process I uncovered another bug in the AI. When I first went to test the system with a test DoT, I noticed that certain units would stop acting correctly after a DoT would hit them. These units would work correctly before the DoT, but once the first damage was applied they would stop moving or acting in any way. Confused, I dug into the code. Turns out, in the Aggro component (which tracks damage sources and is used to figure out who the unit hates the most) I wasn't filtering incoming damage sources based on origin. It wasn't a big deal before, when all damage applied to a unit was external; however, the DoT damage is tagged as being applied by the unit itself. This resulted in the mobs damaging themselves, getting pissed at themselves, and trying to kill themselves using skills and attacks that can't be used on themselves. It was a simple fix, but demonstrative, I think, of the kind of weird shit you have to tackle when doing AI.
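The fix itself amounts to a one-line guard in the aggro bookkeeping. A sketch, with hypothetical names (`AggroTable`, `OnDamage`) standing in for the actual Aggro component:

```cpp
#include <map>

using UnitId = int;

struct AggroTable {
    UnitId owner;                  // the unit this aggro table belongs to
    std::map<UnitId, int> threat;  // damage accumulated per source

    void OnDamage(UnitId source, int amount) {
        // The fix: self-inflicted DoT ticks are tagged with the unit
        // itself as the source, and must not generate threat, or the mob
        // ends up trying to kill itself.
        if (source == owner)
            return;
        threat[source] += amount;
    }
};
```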

Tonight, I rewrote and cleaned up the heuristic function for enemy pathfinding as well. It runs faster, making pathfinding more snappy. I also started tweaking AIs to better handle incidental enemy units encountered while pathing, for example to bust through a barrier or wall owned by Crusoe. The additions and changes make them much more effective at busting through any fortifications GC puts up. I still need to do more work on this, though, as there are still a few oddities. Still, it is nice that GC can no longer sit untouchable by melee units behind a ring of walls.

## Summoning, Healing, Combat Stats

Lately, I've been doing a lot of cleanup and some more experiments with AI. The combat stats system still isn't nailed down 100%, but I am getting closer. The AI revamp is going pretty well. Most of the behaviors are cleaned up, and enemies are now much more effective at traversing terrain using whatever methods are available, and casting spells that are given to them. Enemies can build bridges to cross water much more effectively (the pathing on previous iterations was broken, but it's fixed now), they can open doors if the doors belong to their faction (making it more effective to 'fort up' and let mobs come and go), and so forth. The AI system much more closely resembles a behavior tree than the random collection of bullshit that it was before, making it much easier to assemble complicated behavior systems from small, manageable parts. Currently in progress is a system to allow selecting attacks based on obstacles encountered while pathing to a target.

Central to the AI system is the pathfinder, which will select a path to approach a selected target. The pathfinder can be tweaked to try to avoid enemies, seek out resource nodes to loot, etc... It is possible that, given the unit's specified heuristic function, along the way the unit will need to attack something that is not the ultimate target, be it an enemy door or wall or another hostile unit. In previous iterations, the mob would use melee strikes only to assault these incidental objects. I'm currently working on sub-behaviors to allow the mob to more intelligently select attacks or spells to use. For example, wooden walls will probably fall faster to fire-based attacks than they will to the pounding of tiny little goblin fists.

Some of the folks in the chat have been hassling me to release something that they can test out, so I'm working toward that. It won't be pretty, and there really still isn't much 'game' here. But what I have so far is still a pretty interesting taste of what the final game will hopefully be like. Currently, the test level is populated by two types of enemies: melee swordsmen (green gobbos wielding green fiery swords) and shaman (red gobbos with a staff). The swordsmen pretty much will only approach and attack with a melee hit. The shaman are much more interesting, in that they have a single-target heal, an AoE heal, and a summon spell. They will first attempt to find a bro to heal, and if nobody needs healing they'll either approach GC and rain fireballs on his head or they will summon in Choppers to attack. Choppers are tiny 1/2 scale swordsmen with low hitpoints and high melee damage. GC can kill them quickly, but if the shaman aren't taken care of, their numbers may soon become overwhelming.

## Performance: GC is being made on a potato.

I haven't posted about GC in a while. I have done work (most of it regarding the combat system) but a lot of the work is still "on paper" as opposed to "in the game".

The performance of the game has been a concern of mine for quite some time now. Granted, my dev machine is not a powerhouse. In truth, it is the polar opposite of a powerhouse, unless your definition of "powerhouse" is "HP laptop from Costco with the finest integrated onboard Intel graphics." If that's your definition of powerhouse, though, you have worse problems than I do.

Anyway, the performance is crap. I mean, really. It's crap. I added the grass model you see in the shot above, and my framerate plunged to, like, 11. Sometimes 8. (This is in borderless full-screen, so 1366x768 on my lump of clay.) As you can imagine, testing out combat configurations and systems at these rates is frustrating, so often I'll disable vegetation and replace the hex tile material with plain white when testing that stuff. Still, in the back of my mind lurks the thought that it just isn't performant enough. People with roughly my system specs should be able to play a game of this level of graphical fidelity, and expect at least a somewhat decent level of performance.

I had fastcall22 do a test on his computer, and he said he was getting a pretty consistent 144 fps at 2560x1440 resolution. He's got an Nvidia GTX 1070 card. So, right now, about the best I can say is that depending on your system specs, you can expect somewhere between 5 and 144 fps playing my game. That's just... I don't know. Kinda scary.

What's the thought on ignoring shitty laptops from Costco with integrated Intel graphics? I mean, I know they're garbage. But still... 5 fps, on a machine purchased in 2015. Thoughts?

The alternative would be to scrap the quad-planar tile material and use traditional models, but everyone I talk to (including myself; yes, I talk to myself) likes the quad-planar tiles. They're kinda cool, and certainly unique. I don't really want to give them up. But if it comes down to that, or fielding a shitload of complaints about crappy performance, then I'd say I need to do what I have to. Assuming this thing ever releases, anyway, which given my glacial pace of development is looking highly doubtful.

## If I were to make Golem now...

If I were to go back and remake Golem today, what would I do? Where would I start? As I have mentioned in previous entries, I would almost certainly use an engine now instead of rolling my own. (Although, to be fair, back then there wasn't nearly the wide range of available engines to do this stuff with, so rolling your own was almost a given.) Without having to worry about the low level details, I would be free to worry more about the stuff that makes it an actual game. I've become a better programmer, so I like to think that I would make fewer mistakes now, mistakes that would turn the game into spaghetti. Still, though, even with all that I've learned in the last couple decades about game making, I think I would probably start Golem now with the same thing I started with back then: drawing the ground.

Think about it: in an ARPG like Diablo, you spend a LOT of time looking at the ground. Camera above the ground looking down, dude walking around. The ground fills the frame nearly 100% of the time. To me, poring over screenshots of Diablo and Diablo 2, it seemed the natural place to start. Draw the ground, then figure out what kinds of other stuff to draw on top of it. In 1997, my options for drawing the ground were somewhat limited.

### Then...

When I first read Designing Isometric Game Environments by Nate Goudie in a print issue of Dr. Dobb's Journal in 1997, I had already played Diablo 1. I had also already created a number of small prototype tile-based RPGs, starting in the late 80s. I had dipped into assembly language programming in the early 90s, to write "fast" tile blitting routines. I had a basic understanding of how tile maps were supposed to work, so I took what I knew and what I read in that isometric article, and I dove in.

Golem went through various different stages as I learned about tiles and tilemaps. From base tiles with no transitions, to transition permutations, to alpha-blended transitions for greater flexibility. I spent a lot of time looking at screenshots, playing with paint programs, trying to hand-draw some dungeon floors. I remember in the early 90s I had found a program for DOS called NeoPaint, and I was still using it in the late 90s to try to draw dungeon floors, right up until I discovered GIMP. I struggled to try to reproduce Diablo and especially Diablo 2, without really knowing or understanding how such things were done. At some point, I had the epiphany that they most likely created their tiles from photographic sources. Digital photography was starting to take off, and I remember stumbling on Mayang's textures in the early 2000s. Most of the tiles in the Golem screenshots still in existence were created from texture sources downloaded from Mayang's. I remember just being floored at the number of textures available there, and during that time I really worked on learning various editing techniques to make tiles from photo sources, to make them seamless and create transitions.

In the beginning, all of the render and blit operations were written in assembly. The first generation of Golem map rendering had its roots in a tile map system I wrote in the early 90s for DOS. At that time, my computing budget was limited: hard drive space was at a premium, and all drawing had to be done in assembly language to get the kind of performance a game required. With the switch to isometric in 97, I devised a very simple run-length encoding scheme to encode the graphic tiles. See, the isometric tiles are diamond shaped:

From the very first, I understood how wasteful this configuration was. All that empty space, all those transparent pixels. My early thought was to encode the image so that only the opaque pixels (plus some book-keeping for row offsets) were stored. That saved a lot of disk space, plus I wrote custom assembly routines to draw these RLE sprites with pretty decent performance.
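In modern C++ rather than 90s assembly, the idea reduces to something like this. It's a reconstruction, not the original code, and it assumes the diamond shape gives each scanline a single opaque run flanked by transparent pixels:

```cpp
#include <cstdint>
#include <vector>

constexpr std::uint8_t kTransparent = 0;

// Per row, store only the leading offset and the opaque run; the
// transparent wedges on either side of the diamond are never stored.
struct RleRow {
    int offset;                        // leading transparent pixels skipped
    std::vector<std::uint8_t> pixels;  // the opaque run itself
};

RleRow EncodeRow(const std::vector<std::uint8_t>& row) {
    std::size_t first = 0, last = row.size();
    while (first < row.size() && row[first] == kTransparent) ++first;
    while (last > first && row[last - 1] == kTransparent) --last;
    RleRow out;
    out.offset = static_cast<int>(first);
    out.pixels.assign(row.begin() + first, row.begin() + last);
    return out;
}

// "Blitting" a run is a straight copy into the destination scanline --
// no per-pixel transparency test needed, which is where the speed came from.
void BlitRow(const RleRow& r, std::vector<std::uint8_t>& dest, int x) {
    for (std::size_t i = 0; i < r.pixels.size(); ++i)
        dest[x + r.offset + i] = r.pixels[i];
}
```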

The first iteration, I had no idea about transitions. I had played a lot of JRPGs on Nintendo systems, so I was pretty familiar with tile maps. All those first Golem tilemaps were transitionless. I don't have any of those screenshots anymore, but it was essentially like this:

Those are newer textures; the ones I made for Golem initially were... pretty terrible. I thrashed around a lot in those days, trying to figure out techniques and learn tricks. Eventually, I read about tilemap transitions (probably in the gamedev.net resource section) and started creating permutations. Dirt to grass, dirt to water, grass to water, etc... If you've ever done transition permutations like that, you know that the number of permutations can explode pretty rapidly. Thus, this later revision of Golem's map rendering had to carefully constrain terrain placement. Gone were the days when I could just throw whatever tilemap configuration at the renderer I wanted; I had to be careful in placing types so that, for example, I didn't end up with a dirt tile that bordered grass on one side and water on another. That just wouldn't work. Still, even given the limitations of tile permutations like that, the simple addition of transitions was a pretty good leap forward in quality. The result was something like this:

Eventually, though, that limitation started to really chafe. Just... you know, sometimes you really want to have a dirt tile with grass on one side and water on the other. You know? So I learned some more tricks, read some more articles. Somewhere around this time, I switched to DirectDraw for rendering, then moved on to Direct3D. We're talking early versions here. I kinda wanna say I started with DirectX 4. DirectX had been a thing for a few years, and I probably stuck with the old assembly stuff for far too long, but you gotta understand, kids. Back then, the internet was still mostly just sacks of bytes carried around on the backs of mules. You kids and your reddits and your facespaces, you just don't know what hardship is. Hardship is connecting to the World Wide Web with a dialup modem, hanging three T-shirts over your computer to try to muffle the connection noise because there's no volume control on it and your roommates are trying to sleep. Hardship is not even knowing that marvelous wonders such as DirectX or OpenGL even exist, because the only real reference you have is Michael Abrash's ponderous tome on graphics programming, with its chapters on Mode X, already a relic of a bygone era by that point.

Anyway, with the switch to modern APIs, things really changed. I could use alpha blending, something my old assembly routines didn't really allow. Now, I could create the transitions as alpha-blended decals overlaid on top of base tiles. Suddenly, I could have that mythical dirt tile bordered by grass and water, without having to create approximately 900,000 different terrain permutations. It was amazing!
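The blend that made those stacked decals possible is the standard "source over destination" composite. A hypothetical per-pixel helper (the `Rgba`/`AlphaOver` names are mine) shows why any number of transition decals can pile up on one base tile:

```cpp
struct Rgba { float r, g, b, a; };

// Standard "over" operator: src composited on top of dst. Applying it
// repeatedly stacks any number of transition decals on a base tile.
Rgba AlphaOver(const Rgba& src, const Rgba& dst) {
    float outA = src.a + dst.a * (1.0f - src.a);
    auto mix = [&](float s, float d) {
        return outA > 0.0f
                   ? (s * src.a + d * dst.a * (1.0f - src.a)) / outA
                   : 0.0f;
    };
    return {mix(src.r, dst.r), mix(src.g, dst.g), mix(src.b, dst.b), outA};
}
```

For example, a 50%-alpha grass decal over an opaque water tile yields an opaque pixel that is half grass, half water, with no permutation tile ever authored.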

Again, those early dev screens are lost. But here is a quick example of how that can look:

Any number of transitions could be overlaid upon one another to create as complex a transition as was required. Of course, this required me learning a few more tricks on tile creation. (Again, those early tiles were pretty garbage, though I was getting better by that point.)

It was around this time that I started branching out into a lot of different areas. I was building the character control stuff, combat stuff, inventory handling and management, etc... I was learning how to model in 3D, using the amazing Blender software that I had discovered. Blender was in its infancy, having just been released as open source by a crowd-funding campaign. But even then, it was pretty powerful and it opened up a LOT of options and avenues for me. I was learning how to create walls and gates and skulls and creatures. It was an amazing time.

I started to dislike how the tile transition schemes were plagued by the appearance of grid-based artifacts. You could see the tile-basedness of it. I let that ride for a long time, though. At some point later (after I had already abandoned Golem) I started experimenting with decal-based splatting systems that would splat terrain and terrain decorations down on top of a terrain, randomizing locations of splats to mitigate the grid constraints. I could lay down a base terrain, then splat decoration on top of it using alpha-masked splats.

With the rapidly expanding hard drive sizes and graphics card capabilities, the RLE system I had encoded my stuff in all along quickly became a liability. I didn't really need to optimize for hard drive space anymore, and it required an extraneous step of unpacking the encoded sprite to an OpenGL texture. I never did replace that system; I just worked around it. But since I had plenty of space, I could now do tile variations: different variations of a single terrain type that could be mixed and matched to eliminate the appearance of repetition. Even just having 2 variations of a given type made a large difference:

http://i.imgur.com/2HpPN1a.png

And that's pretty much where I left it all off. Using alpha-blended transitions or detail splats to hide repetition, texture variations, etc... I was able to make some pretty decent stuff, slightly approaching what Diablo 2 accomplished (if with nowhere near the artistic skill). Unfortunately, this was the end. The awkwardness of the graphics pipeline, combined with the general shittiness of the codebase, killed the project. Eventually, my old hardware was upgraded to new and Golem was not copied over. Pretty soon, it was nothing but rotting bits on a disconnected hard drive in the bottom of a dusty, spider-webbed cardboard box.

### Now...

So, back to the question of if I were creating Golem now, what would I do? First and foremost, I would ditch the tile-based transition scheme altogether. Using a custom shader to blend different terrain types together using a blend texture is the way I would go. It's what I've been using in my Goblinson Crusoe project for a long time now. Using such a scheme has quite a few advantages. The grid is no longer as large a problem. There is some repetition, due to using tiling textures, but the actual transition configurations are no longer tile-bound. A couple different base textures, a map to blend them together, and some revised ground terrain generation rules for the random generator and you're in business:

Couple it with the decal detail splatting to add some decorations, and you can get some really nice results with relatively low performance cost.

The base version allows 5 terrain types (black in the blend map is the base, R, G, B and A channels control the upper layers). But you could throw another blend texture in there for 9 types if desired. Texture arrays make it easy, since you can bind as many textures as you need.
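The blend-map math boils down to a weighted sum per fragment. Here is a plain C++ stand-in for what the shader would do, under my own assumptions about the weighting (leftover weight after the R/G/B/A layers goes to the base, so black means pure base); names like `BlendTerrain` are illustrative:

```cpp
#include <algorithm>

struct Color { float r, g, b, a; };

// blend: the sampled blend-map texel. base + layers: texels from the five
// terrain textures. R/G/B/A each weight one overlay layer; whatever weight
// remains goes to the base terrain.
Color BlendTerrain(const Color& blend, const Color& base,
                   const Color layers[4]) {
    float w[4] = {blend.r, blend.g, blend.b, blend.a};
    float baseW = std::max(0.0f, 1.0f - (w[0] + w[1] + w[2] + w[3]));
    Color out{base.r * baseW, base.g * baseW, base.b * baseW, 1.0f};
    for (int i = 0; i < 4; ++i) {
        out.r += layers[i].r * w[i];
        out.g += layers[i].g * w[i];
        out.b += layers[i].b * w[i];
    }
    return out;
}
```

A second blend texture would add four more weights the same way, which is where the 9-type variant comes from.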

I put together a quick test of it. The funny thing is that it only took me about an hour to get the framework up, and implement a quick randomly generated ground terrain and have it scroll around with the WASD keys. The power of a pre-existing engine, coupled with the countless hours I spent up to that point learning about how all this shit works and writing shaders and camera code for other projects. What took me literally years in the early days to accomplish, I can now accomplish in an afternoon and with better results besides. Such is progress.

If I were to make Golem now, this is how I would start.

## Golem

In the previous post, I commented how I didn't see any use in re-factoring and re-building Golem, the way evolutional is re-factoring Manta-X. I'd like to elaborate on that a little bit.

It's not that I don't find any value in the game itself; Golem is a game that I most definitely would like to revisit at some point. I feel like I had a lot of good ideas with it, and I liked the setting and story I was writing for it. It was fun to play, as far as I got with it, blending some of my favorite aspects of Diablo and the various roguelikes I was playing at the time. I could certainly see myself going back to it some day.

However, that doesn't mean that I have any need to try to refactor it as it stands. You see, refactoring that thing is a trap. A warm, cozy, comfortable trap. I could jump in and start refactoring. I've got a thousand ideas even now, as I write, for what I could do to start improving things. There are things I would do better, things I could rewrite, etc... I've got an idea for a message passing scheme that... well, let's just say that this is how it starts. I spent decades of my life in this trap. Decades. Call it framework-itis, if you want. Call it engine-sickness. Whatever. It's a trap.

I have started to work on countless framework/engine projects in the past. I've written numerous prototypes, demonstrating some new aspect of engine technology that I hadn't explored yet. I've written object frameworks, message passing schemes, resource handlers. There is always a reason for me to jump back into that mess, but the thing is I already know how that is going to end. I already know how my time will be wasted.

I've mentioned before, I'm not much of a software engineer. I'm better than I once was, certainly, which just makes it more tempting to say, "I can do better this time." I could dive in, write some framework code, munge around with more modern OpenGL versions, build a new object framework, yadda, yadda, yadda. But eventually I'm going to run into some roadblock. Some poorly designed bit, some system I lack the knowledge or skill to architect gracefully. The spaghetti-ing will start. I'll begin running headlong into limitations of the engine, I'll start kludging in bits to make something work. Before you know it, months or years have passed and I am right back where I was for just so many years of my life: writing engine code, rather than writing a game, and getting bogged down in a mess of code that is spiraling out of control. That's how it happens with me. It's a road I've been down before. I've never been a good-enough programmer to avoid the trap, and I don't think that I ever will be.

It's already started, you see. I've got a project skeleton in my projects folder right now. I've got some new libraries I've never used before copied into that project, libraries that I am itching to get a feel for and learn about. The skeleton project builds without errors or warnings; it opens a window, loads some shaders and draws a red/green/blue triangle. I've copied over the source files from Golem. I've started writing a tool to un-crunch the data files in order to recover the assets and try to make use of them again. It has begun.

Over the years, since I joined this site (officially in 2004, unofficially somewhat before then, in the days of the splash screen), I have come into contact with a lot of really talented and skilled programmers. evolutional has been hugely influential, off and on through the years. Washu, Promit, swiftcoder, SiCrane, Fruny, phantom, superpig, hplus0603, Hodgman, Servant, Apoch, eppo.... the list goes on and on. A lot of these people know programming in a way I never will, and I was always a little bit envious of that. But more than anything, over the years I've learned to accept my limitations and understand that I need to play to my strengths. Writing low-level framework code in an elegant and efficient manner, regardless of how I enjoy it, is not one of my strengths and never has been.

So the chief thing I would do differently this time around would be to use an engine. I've been using Urho3D for quite some time, and have become very comfortable with it. However, if I use an engine then automatically, by default, the vast majority of the existing Golem code becomes an unusable non-sequitur. Animation handling? GUI? Object hierarchy? Just no real practical way to shoe-horn it into Urho3D's existing component-based framework. Not without completely re-writing everything, which is pretty much what I would end up doing. The re-factor then becomes a completely new project altogether. It would be less labor-intensive to just scratch everything and start fresh, building directly against the architecture of the engine. It would certainly align more closely with the way I currently think about game development. I have left behind many of the attitudes and beliefs that I held when I was working on Golem. I don't think I could fit in those pants anymore.

But I wouldn't start work on Golem again right now, not with Goblinson Crusoe already on my plate. Nuh uh, no way. One medium-scale RPG is already too much work for one dude. No sense adding another.

## On The Topic Of Retrospectives...

Evolutional's recent post about digging through some old code inspired me to take a look at some of my own. I had recently found a copy of the working development folder of my old game, Golem, on a hard drive in a box, and while I've peeked into it out of morbid curiosity, I haven't felt the inclination to dig very deep until now.

Golem was an isometric ARPG heavily inspired by Diablo and, later, Diablo 2. I had grown up playing the original Atari console and, later, the original Nintendo and the Super Nintendo. I didn't get to have a whole lot of time playing PC games. My first PC RPG was Wizardry 4, and holy shit was that a mind-bender. I played JRPGs on the Nintendo, mostly: Final Fantasy, Dragon Warrior, etc... Loved Secret of Mana, loved the Zelda games, loved Illusion of Gaia. But playing Diablo was like an awakening. On PC, I had mostly played Nethack up to that point: turn-based dungeon delving rendered in awkward ASCII graphics. But Diablo... well, Diablo was kinda like Nethack (not nearly so deep, obviously) but it looked good. To my young eye, the isometric graphics of Diablo were stunning. Playing Diablo literally altered the course of my life, in ways that I only really now understand. While I had dabbled in game development since the early days of DOS and the advent of 256 color graphics in ModeX, it wasn't until I played Diablo that I really had in mind a vision of what I wanted to do and make.

I started work on Golem some time around 1998. Some of it was just scribblings in notebooks made during lunch break at work, some of it was assembly-language tile drawing routines I wrote later, some of it was C code written for the old Borland Turbo C++ compiler. Over the years, it evolved away from assembly and into OpenGL as I learned new technologies. Many of the first posts I made here on gamedev.net were in regards to this game. In a way, I've been working on this game (in some form or another) ever since. I've certainly spent a lot of time on isometric game prototype development, at any rate, even if little of that original code is still in use. But it was during the summer of 2004 that Golem, in its final (sadly, incomplete) form, took shape. I was between jobs, having returned (briefly) from Arizona to Wyoming. I had not yet met my wife, bought a house, had kids, or any of that. Everything I owned, I could fit in the back of my old Toyota 4x4. I slept on a cot in my dad's basement. By day, I hiked the juniper and cedar covered hills just outside of town, and by night I hunched over my keyboard, listening to Soundgarden on repeat and furiously pounding out 69,094 lines of the shittiest code you ever did see.

Evolutional's retrospective is a thoughtful look at how a maturing developer revisits his old code with an eye toward understanding how he was then, and understanding how he has improved over the years since that code was written. He starts the whole thing off with a look at the project's structure, and while he has criticisms about how he structured things, I gotta say this: Manta-X was a paragon of order and structure compared to this:

Take a look; drink it in. That's the root level source directory for Golem. Are there source files there? Sure are. Are they mixed in with intermediate build .o files? You betcha. Are there miscellaneous game data scattered all throughout like nuggets of gold? You're damned right there are! In the finest code-spaghetti Inception tradition, there are even archives of the game code directory in the game code directory. At a quick glance, I see bitmap fonts; I see sprite files for various player characters and props; I see tile sets; I see UI graphics. Sure, there's a data folder in there, and sure during later development all game-relevant data was finally sorted and organized into the data/ folder structure (it's actually pretty clean now), but all that old testing and development cruft is still in there, polluting the source tree alongside build artifacts like little nuggets of poo fouling the bed sheets.

Rest assured, I have improved--in this regard, at least.

In the last few hours, I've opened up several source files at random and perused their contents, trying to get back into the mindset of Josh v2004, or VertexNormal as I was known back then. This process can also be summed up as 'what the bloody hell was VertexNormal thinking?'

Elephant in the room here: yes, there are singletons. Hoo boy, are there ever singletons. In the final day, when I have to stand before the judgement bar of God and account for my actions here in this mortal life, the question is gonna be asked, sternly and with great gravity: "Did you ever abuse the singleton pattern?" And since God sees all and I will not be able to lie, squirm or weasel, I'm gonna have to say: "Absolutely, I did. Many, many times." It's not going to be a proud moment. (In this envisioning, God looks a lot how I imagine Washu looks; make of that what you will.)

So, rough idea of how it's organized from a high level. We've got a Map, and a MapBuilder. The map, of course, is the level. The MapBuilder is a utility I wrote to encapsulate (and I use the term encapsulate loosely, and with liberal license) the various things I needed to randomly generate levels. We've got a ScriptContext. We have a MiniMap. We have a GlobalAlloc object allocator. We have an EffectFactory. We have an SDLWrap (a wrapper of all things SDL related, naturally). We have a BuildInterface (whatever that is). We have a ScriptInterface, a PlayerInventory, an ItemFactory, and more. All of them, singletons.

All of them.

Look, I kind of get what VertexNormal was doing here. I get it. Software engineering has never been my strong suit, and to this day I continue to make terrible decisions. And back then, I didn't even know what the term 'dependency injection' meant. But man, you guys. Man.

Some day, someone is gonna write a book on how to kill a project, and under the chapter titled "Dependency Spaghetti Through Singleton Abuse" there will be no text; just a link to the repo of Golem. Every knotty little problem, every little grievance and aggravation that led to me ultimately abandoning Golem, finally and completely, can be traced back to the abuse of singletons. Singletons are everywhere, threading a knotty little web of dependency throughout the entire project. Dive too deeply into the source, and the threads of that web can strangle you.

Now, I'm humorously critical of this project, but the important thing about it is that when I look at this project, I see a kid who was trying his hardest to learn. He was learning about object lifetime management, learning about encapsulation, learning about how to build a game in general. He was making mistakes and working to rectify them, and expanding his knowledge in the process. I don't see any value in doing with this project what evolutional is doing with Manta-X; unlike Manta-X, I don't think there is anything of interest to be had from that here. To refactor this thing would be a large task, and ultimately pointless since there is little here that I haven't accomplished, more efficiently and more effectively, in later prototypes. But I think it IS kind of a valuable exercise to look at it and think about how I would do things differently now, knowing what I know in the present.

Aside from the singleton abuse, there are a few issues.

1) VertexNormal still didn't have a good grasp on object ownership and lifetime. The existence of so many singletons demonstrates this, but so too does the structure of the various object factories and allocators. Knowing who owns what, who is responsible for what, and who is just using what, is a large part of game engineering.

2) VertexNormal made the mistake of using custom binary asset formats for everything, even in early stages of development. That meant that I was constantly fussing about with building tools to pack/unpack assets, even as asset formats were constantly changing and evolving. A lot of time was wasted maintaining and modifying the various sprite packing utilities. I had the idea of saving memory by storing graphics as run-length encoded files, using a custom RLE scheme I had devised. It was a carryover from the earlier days, when the rendering was done in assembly and the graphics were rendered directly from these RLE data chunks. But with the switch to OpenGL, the pipeline was modified to include a step for unpacking these graphics to textures, meaning that the steps of packing and unpacking were made extraneous. Disk space was cheap by that point, so there was just no point to keep fighting the binary packing system as I did. I'm amazed that I implemented as many objects as I did, considering each was a binary packed asset that had to be specially constructed. It is especially ironic that I struggled with the binary asset descriptions, when I had already embedded Lua into the project, and Lua is in itself a fantastic tool for data description. These days, all my assets are described as Lua tables rather than binary blobs, and it's easy enough to compile them to binary format.

3) VertexNormal was still figuring out how to draw isometric games and make them look good. The tile-basedness of Golem is painfully obvious in every screenshot. Which isn't necessarily a bad thing, but I made some bad choices with how I was drawing the levels that made it difficult to mitigate the tile-basedness. These days, I make greater use of shaders and techniques that can eliminate the grid and increase flexibility. I'll probably try to talk more about this later, but it is remarkable to realize exactly how much time and effort I have spent on figuring out how to draw isometric levels.

4) This was VertexNormal's first introduction to Lua. It's where I learned to love the language, even if back then I had absolutely no friggin idea how to properly make Lua and C++ work together, in a way that helps rather than hinders. I had difficulties with passing objects back and forth across the interface, hence the existence of the ScriptContext class which was used to set various 'current' objects for manipulation by script. I still struggle with exact implementations, but I have a much better grasp on things now. In Golem, it is obvious I had troubles knowing whether Lua or C++ should be responsible for a given task. Many things that I would now implement in script were then implemented in C++, and vice versa. This is an ongoing learning process.

There are other issues, but these are the main takeaways from my high-level strafing run.

It was a fun little excursion. It reminded me of who I was, back when I thought I had, just, a whole lot to say. I was full of energy, full of excitement. There were a thousand articles I was going to write, talking about all the things I was learning and figuring out, articles that I just never seemed to get to because I was always off chasing something else. It was fun to revisit that. Much of that excitement has faded, now. I no longer feel like I really have a lot to say, or that anybody really cares what I have to say anyway. That's not fishing for comments or arguments to the contrary or anything, it's just the truth as I see it. While I still enjoy doing this stuff, it's not my main passion or focus anymore, and hasn't really been for quite some time. Every day that passes, the field/hobby/industry pulls further ahead of me. People are doing stuff now that we could only dream of back in the day. And with how my time is split between work, family, writing, woodworking, hiking, hunting and fishing, etc, etc, etc... Game development just doesn't consume nearly as much of my emotional and mental capacity as it once did. It's a melancholy thought.

## Haste, slow, and refactoring

As part of the AI and control refactor, I refactored the basic components that provide the turn-basedness for the unit controllers.

Previously, the turn system was controlled by 2 components: the Turn Scheduler, which runs scene-wide and provides a 'heartbeat' on a specified timer; and the command queue component per combatant, that listens for the heartbeat and synchronizes a unit's actions to that beat. This allows me to control the pace of the gameplay; by changing the interval of the heartbeat, the battle can be sped up or slowed down.

However, having the combat heartbeat controlled globally like this has some disadvantages. First, unit actions only take place on heartbeats, so if the player selects a movement command, the unit will not start to move until the next heartbeat begins. This results in a noticeable hesitation that gets worse as the battle speed is slowed down. Second, a global heartbeat can't vary per unit, and it might be useful to speed up or slow down the update interval depending on certain factors, such as the presence of a haste or slow effect on the unit.

So, I refactored. The Turn Scheduler no longer provides the heartbeat. Rather, the heartbeat is encoded into the command queue executor component, which knows about any haste or slow effects on the unit. In addition to this refactor, I actually implemented haste and slow effects to test it all out.
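The refactored per-unit heartbeat might look something like this minimal sketch. The class and method names here are invented for illustration (the real component lives inside Urho3D's component framework); the point is that each unit's executor owns its own timer, scaled by that unit's haste/slow factor:

```cpp
#include <functional>
#include <utility>

// Illustrative sketch, not the actual GC code: a per-unit heartbeat whose
// interval is scaled by the unit's haste/slow speed factor.
class CommandQueueExecutor {
public:
    CommandQueueExecutor(float baseInterval, std::function<void()> onBeat)
        : baseInterval_(baseInterval), onBeat_(std::move(onBeat)) {}

    // >1 means hastened (beats come faster), <1 means slowed.
    void SetSpeedFactor(float f) { speedFactor_ = f; }

    // Called once per frame with the elapsed time in seconds.
    void Update(float dt) {
        accumulator_ += dt;
        const float interval = baseInterval_ / speedFactor_;
        while (accumulator_ >= interval) {
            accumulator_ -= interval;
            onBeat_(); // execute the next queued command on this unit's beat
        }
    }

private:
    float baseInterval_;
    float speedFactor_ = 1.0f;
    float accumulator_ = 0.0f;
    std::function<void()> onBeat_;
};
```

Because the timer lives on the unit rather than in a scene-wide scheduler, a hastened unit simply ticks more often than its neighbors, with no global coordination needed.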

Haste/Slow comes in 3 different flavors: modify movement speed, modify casting speed, and modify attack speed. Thus, you could have a bonus to cast speed and a penalty to attack speed, and the system correctly accounts for portioning out action points accordingly. Additionally, the movement/cast/attack speed modifiers are used to speed up or slow down the rate of animation for a particular action, as well as the duration of time it takes to complete the action. This change has the benefit that if a unit has a haste spell on to speed its movement, when it moves it will move across the map more quickly. While the animation speed and timings are visual-only, the haste/slow modifiers also affect the number of action points the unit is given.

For example, say you have a movement speed modifier of +25% and a cast speed modifier of +10%. If the action points are set to grant 10 points per unit in a given turn, then the unit will actually receive 12 movement points, 11 casting points, and 10 attack points. Using movement points will reduce the casting and attacking points proportionally. So in a given turn, the unit could take 12 steps, or cast 11 spells that take 1 turn to cast, or make 10 attacks that take 1 turn to execute, or some combination of the three. (Actually, the unit receives 12.5 movement points, but the points are truncated.)
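The point math above is simple enough to sketch in a couple of lines (this is my reading of the described behavior, not the actual GC code):

```cpp
// Scale base action points by a speed modifier and truncate, as described:
// 10 base points with a +25% movement modifier -> 12.5 -> 12 points.
int scaledActionPoints(int basePoints, float speedModifier) {
    return static_cast<int>(basePoints * (1.0f + speedModifier)); // truncates
}
```

A slow effect works the same way with a negative modifier: -25% on 10 base points yields 7.5, truncated to 7.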

The three numbers next to the character portrait indicate how many points the unit can use for movement(green), casting(blue) or attacking(red).

I have made (and am still in the process of refining) some other changes to how player units control. Initially, a movement command was executed by selecting the Move button or pressing the hotkey 'm', which would display the movement preview grid. Left-clicking on the grid would initiate a move to the clicked location and hide the preview grid. Similarly, executing actions such as using a workbench or looting a tree required that the action button be selected or the hotkey be pressed, then the target clicked. It's kind of clunky. Even the simplest of actions, such as clear-cutting a grove for wood resources, requires just way too many clicks and keypresses to execute. So I have begun to refine things a bit. Instead of having to manually select move, you just click to move. If your unit is in a waiting state, then the move will be calculated and executed. Similarly with using/looting/melee attacking. Just click on the thing to loot or use. Also, I have added functionality to bind a skill to the right mouse button, so that skill may be used without the 'select skill, select target, left click' loop. Instead, right-clicking will execute the bound skill automatically at the mouse cursor location. I am still in the process of refining everything, and a lot of the UI hasn't really been updated to reflect these changes, but already the gameplay is much tighter and less clunky, by far.
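The click handling described above boils down to a small dispatch decision. A hypothetical sketch (the enum values and function name are invented for illustration, not taken from the GC source):

```cpp
// What was clicked, and what command it should produce. Illustrative only.
enum class ClickTarget { Ground, Usable, Enemy };
enum class Command { None, Move, Use, MeleeAttack, BoundSkill };

Command dispatchClick(bool rightButton, ClickTarget target, bool unitWaiting) {
    if (!unitWaiting) return Command::None;       // unit is busy; ignore input
    if (rightButton)  return Command::BoundSkill; // bound skill at cursor
    switch (target) {
        case ClickTarget::Ground: return Command::Move;
        case ClickTarget::Usable: return Command::Use;
        case ClickTarget::Enemy:  return Command::MeleeAttack;
    }
    return Command::None;
}
```

The 'select mode, then click' loop collapses into a single function of what was clicked and the unit's current state, which is where the reduction in clicks and keypresses comes from.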

## Highlighting

In between work, sleep, family, home and yard projects, and being sick (all the usual bull-shit cop-out excuses you have all come to expect from me over the years) I have been getting in some work on Goblinson Crusoe.

So, the Great AI Refactor has begun in earnest, and as I suspected it would, it has run its tentacles all throughout the project. Up til now, the enemies have all been running off a script object creatively titled TestRandomWanderAI, which obviously randomly selects between FireShotgun, Melee and Area Heal to cast, and which tries to approach GC if a path presents itself. (The class has, uh... grown, somewhat, since it was first created.) Creation-time conditionals in the class designate some loose class-ish tendencies to favor one or the other of the skills. For example, the red-skinned 'shamans' favor fire and healing, while the blue-skinned 'bruisers' favor melee. I've kept this TestRandomWander component around, because inside it are templates and patterns for many of the actions an AI needs to perform. But it is no longer instanced into any of the enemies. In its place is the more generic AIController component.

AIController supplies a healthy set of generic functions (many of them derived, with refactoring and improvement, from the patterns inside TestRandomWander) that provide common functionality. However, instead of the hard-coded think() method of TestRandomWander, the AIController must be supplied a think() function at creation time. Much of the hard work of abstracting out the think() process was already done; I just needed to formalize the structure of the AIController, and re-factor out some of the behavior into separate think() functions supplied when instancing a mob of a given type. I've now finished specifying the existing shaman, bruiser and looter AIs, as a proof-of-concept for myself, and the new system works well. A great deal of 'test code' and placeholder stuff has vanished, in favor of more flexible, more 'real' code.
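The shape of that pattern, supplying the think() function at creation time rather than hard-coding it, might look something like this minimal sketch (names and helpers are illustrative; the real component is an Urho3D script object):

```cpp
#include <functional>
#include <string>

// Illustrative sketch of the AIController pattern: generic helpers live on
// the controller, while the per-archetype think() is injected at creation.
class AIController {
public:
    using ThinkFn = std::function<std::string(AIController&)>;

    explicit AIController(ThinkFn think) : think_(std::move(think)) {}

    // Generic helpers the think functions can call (stubbed here).
    bool CanSeeTarget() const { return targetVisible_; }
    void SetTargetVisible(bool v) { targetVisible_ = v; }

    std::string Think() { return think_(*this); } // returns the chosen action

private:
    ThinkFn think_;
    bool targetVisible_ = false;
};

// A 'bruiser' think function favoring melee, as a hypothetical example:
std::string bruiserThink(AIController& ai) {
    return ai.CanSeeTarget() ? "Melee" : "Wander";
}
```

Instancing a shaman, bruiser, or looter then just means handing the same controller a different think function, which is what let the hard-coded TestRandomWander behavior be factored apart.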

However, some expected issues are rearing their heads. So far, I have been operating on a very loose, ad-hoc 'design' for the combat system, as regards stats (which stats? what do they do?), buffs, how damage is specified in skill casts, and so forth. I have purposely kept the system flexible. I don't, for example, have a specific set of damage types in play. Damage types are possible, and I use quite a few of them. For example, all the test fire skills deal some measure of fire and siege damage. And some of the test equipment confers bonuses such as +fire_attack and -siege_defense, as a proof of concept. But any given skill could specify any kind of arbitrary damage type if desired. You could conceivably have a Fireball that does 16 to 47 base points of "happy silt stuffing" damage. This damage would apply, taking into consideration any "happy silt stuffing" attack and defense bonuses, if any. The flexibility has been nice, but I think it's probably time to start nailing down concrete damage types and thus simplifying the combat resolution code.

Additionally, with the addition of the Melee skill some time back, I ran into a conflict. When I first started fleshing out the skill system, along with the set of test skills, I imagined the skill system operating such that each skill had a different specification for each rank of the skill, and the damage ranges for the skill would be encoded in the data description for the skill. For example, a Fireball with one single point (assuming a Diablo 2-style skill point advancement system, as I currently envision) might do 3 to 6 damage, while a L2 Fireball might do 7 to 10. Numbers are for illustrative purposes only. I would list the various ranks of Fireball in an array and index it by the caster's Fireball skill level to get the proper one to cast. Segregating it by ranks this way more easily allows me to add extra 'juicy bits' to higher-ranked skills. For example, adding additional bounces to the bouncing fireball spell, adding some kind of DoT ignition debuff at higher ranks, etc...
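A per-rank table like that is straightforward to sketch. The numbers below match the illustrative 3-6 / 7-10 example above; the struct fields and clamping behavior are my own assumptions, not real GC data:

```cpp
#include <algorithm>
#include <vector>

// Illustrative per-rank skill data: damage range plus a 'juicy bit' (bounces).
struct SkillRank { int minDamage; int maxDamage; int bounces; };

struct Skill {
    std::vector<SkillRank> ranks; // ranks[0] corresponds to skill level 1

    // Index by the caster's skill level, clamped to the defined ranks.
    const SkillRank& RankFor(int skillLevel) const {
        int i = std::clamp(skillLevel - 1, 0, static_cast<int>(ranks.size()) - 1);
        return ranks[i];
    }
};
```

Each rank being a full, self-contained specification is what makes it easy to bolt extras like additional bounces or a DoT debuff onto the higher entries.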

The system has worked well, but Melee plays differently. In fact, in most games of this basic nature, Melee plays differently. It makes sense to encode the base damage of a fireball skill in the skill itself, and buff/debuff the final values based on caster stats. However, the base damage of a melee swing typically is determined by the equipped weapon, rather than the skill itself. And so, just like that, the addition of Melee has caused a necessary re-design. All of a sudden, for one particular special skill, I have to obtain the base damage ranges from somewhere other than the skill description itself. It's a relatively minor issue on the face of it. Just call (?) and obtain melee ranges, where (?) is... And there we have the next issue. I don't have the formalized equipment system in place yet. I have A equipment system in place, but not THE equipment system. The test weapons in place can do things such as switch stance and apply stat buffs/debuffs, but that's it. There is nowhere I can currently call up to ask "what is my base damage range for a melee swing?" That ? just doesn't exist yet. And so, with the melee controller, I have added another set of hard-coded test values, to my shame. I need that melee-ing bruiser to do SOMETHING when he swings. So I hardcode some values, and add yet more items to the TODO list.

Still, these kinds of small-but-annoying issues help to refine the various interfaces between components that make up the guts of a game like this. A simple question of "where do base range values come from?" can have far-reaching implications in your design. So many factors can affect it--weapon type, character level, skill level, etc--and all of these factors can possibly live in different places depending on the structure of your combat-enabled entities.

Additional things have also cropped up in this refactor. At any given time, I have more units on the battlefield now than I used to, and a lot more 'things' happening around the map. So my non-polished UI is really starting to show its cracks. I made a few enhancements, but I still have far to go. For example, I added the capability of highlighting the currently-hovered unit with a halo color of green for friendly units, red for enemies, and yellow for neutrals, in order to help make decisions on the battlefield:

This really helps, especially since I'm using the same goblin model and skin(s) for so many different unit types, both friend and foe. However, I still have not updated the hover UI panel to show things such as health, class/unit type, etc... all the vital information you need to decide whether to approach a unit or run away. (The UI, I fear, is going to continue to haunt me for a long time.)

Another thing that has cropped up is a big one that I have been putting off for a while: performance. In fact, I've been systematically making this problem worse, with my shader-based shenanigans. But the performance is really starting to become critical now, as I add more units, more lights, and just more stuff in general, to the play field. I've made some preliminary stabs at optimizing things: LoD models and materials, tweaking draw distance to strike a happy medium between tactical awareness and performance, etc... But I am still hitting low double-digits to high single-digits framerate on a depressingly consistent basis, and the visual lag and general unresponsiveness makes it feel janky and rough.

That quad-planar shader, with its 32+ texture samples per light in forward rendering, is just one expensive heavyweight bastard, and I really need to start figuring out how to take back some of my framerate. For now, I have 'disabled' the quad-planar texturing on the vertical surfaces of the hex cells; those are now simply wrapped in a standard diffuse+normalmap texture, with only the cap still receiving the quad-planar. The long and short of it is, the quad-planar shader is probably going to go away. In its place, I am working on a shader that blends from the diff+normal of the hex pillar's vertical surfaces, to the existing texture blend that occurs on the tops of the hexes. I have decided it probably isn't necessary to permit more than one stone/dirt type on a single hex cell, as long as the 'interesting' blending occurring on the tops remains. And by making that simple adjustment, I can eliminate a metric fuckton of texture samples per light. Even with just the temporary placeholder material with a single rock texture, I have already gained back enough framerate to put me back in the high 20s, low 30s on a fairly consistent basis, and with more LoD refinement I can certainly get even more. Hopefully, I can manage to keep the look-and-feel I have enjoyed so far (if with a few limitations that didn't exist before).

Anyway, just thought I'd drop a few lines to let folks know I am still working on GC, if slowly and extremely painfully.

Edit:

Some fun things in this shot. I added a code path to these goblin types to enable them to build bridges in order to reach Crusoe. These particular guys are Fireball casters, and I gave them Chain Fireball as well. In this shot, 3 of the mobs started building bridges to get to GC, then one of them randomly chose to cast a Chain Fireball at GC. The chain struck the other two goblins, obliterated a number of already-constructed bridge sections, and triggered a 3-way brawl between the goblin and his now-pissed-off colleagues.

AI is fun.

## Installing

Finally got a day off, with the leisure time to install Linux. Been booting off the USB for days now. During the 'trial' period, I tried out a number of different distros, just to refresh my head on what's out there. I used to be a big Ubuntu fan. Debian-based distros are by far my favorite (long familiarity, most likely). But I didn't really like the new desktop environment in Ubuntu. Unity Desktop it's called, I believe. I had the issue that the taskbar buttons are lined up along the left edge of the screen, and the taskbar is 'sticky', meaning that it makes the mouse pointer stick to it slightly. This causes a hesitation as the mouse crosses from one monitor to the other, and after a mere handful of hours this behavior started to infuriate me beyond what most people would find reasonable. I hated it so bad, just thinking about it makes my hands shake. It's the little things...

I ended up installing Mint. It's Debian-based, had a very easy install with no bugs or glitches. I'm starting to really like the Cinnamon desktop environment. Got my development setup all put together, installed the Radeon drivers, got Urho3D building, got my game building and running, got Blender going, even got that Instant Meshes re-topo tool I talked about working, etc... Everything seems to be mostly hunky-dory (absent an issue with GC that I am working on debugging, specifically with the quad-planar shader I talked about a couple posts back).

I really love how developer-friendly Linux is. It's something I've missed, having been back on Windows for so long. With the package-based software system of Debian, it's extremely easy to download and install all the various developer packages that I love to use.

An added bonus is that my game achieves essentially the same performance as on Windows/D3D11. It's not fantastic; a solid 18 or so fps. But it'll do.

## Mint

To whom it may concern,

I am writing this missive while booted into Linux Mint from an ancient USB stick. How I came to this sad, low, lonely state is a tale of hubris and pride. You see, I am a habitual dismisser of Windows system restore disk nag screens. "System restore?" I cry. "Pah! I don't have time for that right now. Begone, thou pesky fiend! Begone!" Pride knotted my heart and furrowed my brow, each time that I swept the nag screen from my cluttered desktop with a snarl. What use have I for system restores? Nothing bad shall happen. Not anytime soon, anyway.

Ah, hubris.

And so, I have a USB stick jutting like a swollen tongue from the front of my ancient HP desktop machine. I peer at a screen that looks as if it came to my desk straight from the darkened hollows of 1998. Once upon a time, I was a fervent user of Linux. Once, I crowed from the rooftops about the virtues of Gentoo and Slack and Debian. Once, I believed it the height of computational prowess to compile my kernel from source, custom configured to my machine through the fervent work of Cheetos-stained fingers dancing madly across the pleasing feel and click of a mechanical keyboard. Once, I thought nothing of peering at a desktop filled with flat grays, bleak and morose, adorned with a deep black terminal window astrewn with command-line spew. But those days are behind me. Far behind.

To a spoiled Windows user, it feels like a strange hell. I have a soft spot in my heart yet for this beast Linux, though. Like that friend, whom I've known for years. The one who ate many dried acrylic paint chips in art class, and who had a penchant for dosing himself with voltage straight from the terminals of the ignition coil in a '79 Chevy. Sometimes, that friend was a pretty cool guy, good for some solid laughs. But other times, no matter what I did, he was determined to wear a pancake like a hat upon his head, and all I could do was ignore him and hope that the febrile gleam in his eye would soon fade. I look upon this bleak, smoke-colored interface through the mist of years, much as I look through the boxes of ancient photographs in the closet to refresh those memories of laughter and howls of pain filling the grease-smeared walls of the auto-mech shop. I look back, peering through the years, and think, "sweet mother of pete, that dude was crazier than a shit-house rat. And what the hell is up with this desktop screen that looks like a pre-release version of Windows 95?"

No sound comes from my speakers. They should work, I think. My machine is old and mainstream, no funky hardware to be found. Alas, David Draiman silently mouths the words of an ancient Simon and Garfunkel tune upon my second browser tab with impassioned, yet forever silent, emotion. Sing loudly, David. I have ears, but I can't hear a friggin thing.

Weep for me, my friends. I have accidentally erased the Firefox shortcut from the bottom taskbar, and I lack the technical knowledge to restore it to its rightful place, just to the right of the tiny little white smudge and the word 'Menu'. Weep for me, for I fear that should this browser window of mine, with its pale gray panes and panels--should it close, I shall never discover how to open it once more. I click, I drag, I munge and fudge and muck-about, to no avail. That taskbar, that hateful and dreary thing with its completely bullshit right-click context menu, glares back at me stubbornly, starkly absent the comforting orange and blue swoosh of the Firefox logo.

I must close this letter now. My second screen is flickering to black. I fear something has gone amiss. No doubt, the kernel is on the verge of a panic. I know the feeling well. I feel the bleakness closing in, and should the kernel panic, I fear that I shall not be far behind.

Farewell for now.

HA! It never fails. As soon as I resolve to work on much-needed logic and AI refactoring, I let myself get distracted with a visual thing again. But this fix was relatively easy, and is something I've been meaning to do for quite a while.

As mentioned in previous entries, the ground tiles in GC use tri-planar texture mapping. If you are not familiar with tri-planar mapping: essentially, an object is textured with 3 different terrain textures, each texture being projected along one of the X, Y and Z axes. The textures are blended using 3 blending factors derived from the normal of the object mesh. In tri-planar mapping, calculating the blending factors is as simple as taking the X, Y or Z component of the normal. So for any given fragment, the final diffuse color is calculated as abs(normal.x) * texture2D(Texture1, position.zy) + abs(normal.y) * texture2D(Texture2, position.zx) + abs(normal.z) * texture2D(Texture3, position.xy).
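To make the blending concrete, here's a minimal Python sketch of that per-fragment math. The sample_* functions are hypothetical stand-ins for the texture2D lookups (they just return constant colors here), and I've added a normalization step that the formula above omits:

```python
# Tri-planar blending sketch. The sample_* functions stand in for the
# texture2D lookups; they return constant colors so the blending itself
# can be inspected.
def sample_x(p): return (1.0, 0.0, 0.0)  # projected along X (uses p.zy)
def sample_y(p): return (0.0, 1.0, 0.0)  # projected along Y (uses p.zx)
def sample_z(p): return (0.0, 0.0, 1.0)  # projected along Z (uses p.xy)

def triplanar(normal, position):
    # Blend factors are just the absolute components of the unit normal.
    bx, by, bz = (abs(c) for c in normal)
    # Normalize so the weights sum to 1 (the prose formula skips this;
    # without it, diagonal normals come out slightly darker).
    total = bx + by + bz
    bx, by, bz = bx / total, by / total, bz / total
    cx, cy, cz = sample_x(position), sample_y(position), sample_z(position)
    return tuple(bx * a + by * b + bz * c for a, b, c in zip(cx, cy, cz))
```

A fragment whose normal points straight up, (0, 1, 0), receives only the Y-projected texture; a 45-degree normal between X and Y receives an even mix of the two.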

This kind of texturing would be perfect if Goblinson Crusoe took place on a cube grid, since each major face of a cube gets full blending from one of the 3 projections, and the only mixing of textures takes place on smooth, rounded corners where the normal is transitioning between faces. Here is a quick 2D diagram, showing how two textures (the blue and green lines) are projected along the X and Y axes onto a square.

However, since GC uses a hex grid, the 3-plane projection means that of the 6 vertical faces of a hex tile, 2 receive "unblended" texturing and 4 receive "blended" texturing, where the final texture is a mix of two different texture projections. While I use the same texture for the X and Z axis projections, it still becomes obvious that textures are being blended, as much of the detail becomes muddled and mixed. You can see how this happens in this 2D diagram:

The green texture, projected along the Y axis, ends up projecting onto both of the faces of the hex that point sort-of along the Y axis, but the blue texture also gets projected onto those 2 faces, resulting in a mix of the two. The face that points along X gets only the blue texture on it.

You can see the mixing taking place in this shot:

The faces toward the camera are mixed, because the Z axis projection projects against the corner between the faces, rather than squarely against a face. No matter how you twist the projection, at least 2 of the 3 face orientations end up mixed. You can see that the texture detail becomes clearest in that shot as the normal rounds the corner, drawing nearer and nearer to a "pure" Z axis projection.

The tri-planar mapping is easy because the X, Y and Z projection axes are orthogonal to one another. If, for example, the normal's Y component is 1, then its X and Z components are guaranteed to be 0.

However, it is possible to construct a planar mapping that takes the hex shape into account. By projecting 4 textures against the shape, 1 from the top and 3 from the side, each of the 3 side projections aligning with 2 of the faces of the hex piece, you reduce or eliminate the mixing that takes place on the surfaces. You can see from this diagram how it should work:

Each of the textures, red, green and blue, is projected along one of 3 axes that each run through 2 faces. In this case, each face should receive singular mapping from one of the textures, and the only mixing that should take place is on the corners if smooth shading is used.

However, the math for the blending becomes a little bit more difficult now, because the 3 projections are not orthogonal to one another. You can calculate a blending factor for each of the 3 planes by taking the dot product of the normal and the axis that runs through each pair of faces. However, if the dot product for the axis that receives the red texture is 1, meaning that face should fully receive red, the dot product with the axis that receives blue will be larger than 0. It will, in fact, be equal to 0.5, meaning that the face that should be fully red ends up with red mixed at 1 and blue mixed at 0.5, resulting in a purple shade instead.

In order to pull this off, you have to do some hackery. The axes for the 3 face orientations end up being (0.5, 0.8660254), (1,0) and (0.5, -0.8660254) (2D case, of course). So you can see that a normal that is aligned with the red or green projection axes, dot-producted with the blue axis, results in 0.5. Similarly, a normal aligned with blue, dot-producted against the red or green axes, will also result in 0.5. So the solution is to make 0.5 the new 0.

What I mean is, you can perform the dot-product calculation for a given axis, then apply the formula (dp-0.5)*2. Thus, when the normal projects fully along the blue axis and 0.5 along the red and green, after the calculation you end up with a blend factor of 1 for blue and 0 for red and green.
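A quick Python sanity check of that remap in the 2D case (axes as given above; the shader's saturate() becomes a clamp at 0):

```python
import math

# 2D hex face axes: each runs through 2 opposing faces of the hex.
RED   = (0.5,  math.sqrt(3) / 2)   # (0.5, 0.8660254)
GREEN = (0.5, -math.sqrt(3) / 2)
BLUE  = (1.0,  0.0)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def blend_factor(normal, axis):
    # (dp - 0.5) * 2 remaps the 0.5 "bleed" from neighboring axes to 0,
    # clamped below at 0 like the shader's saturate().
    return max(0.0, abs(dot(normal, axis)) - 0.5) * 2.0
```

A normal aligned with the blue axis, (1, 0), gets a blend factor of 1 for blue and 0 for red and green, exactly as the prose describes.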

In practice, I end up with the following blend factor calculation:

```hlsl
float3 n1 = float3(0.5, 0, 0.86602540378443864676372317075294);
float3 n3 = float3(0.5, 0, -0.86602540378443864676372317075294);
float3 nrm = normalize(psin.Normal);
float4 blending;
blending.y = abs(nrm.y);
blending.x = saturate(abs(dot(nrm, n1)) - 0.5) * 2;
blending.z = saturate(abs(nrm.x) - 0.5) * 2;
blending.w = saturate(abs(dot(nrm, n3)) - 0.5) * 2;
blending = normalize(blending);
float b = blending.x + blending.y + blending.z + blending.w;
blending = blending / b;
```

And the result seems to work pretty well:

It does add an extra texture and normal lookup for each of the terrain texture types, but even on my machine it doesn't make a real difference on the framerate, and the reduction of texture mixing artifacts is more than worth it.

## Creature AI

I'm in a creature AI phase right now. It's actually a bunch of code that I haven't really touched in a while, and as with all such code I have had to spend some time getting familiar with it again. I still only have a relative handful of skill options, so I haven't really dug in depth into complex AI behaviors. And a lot of what I have in place is still placeholder or rough initial prototype stuff. Digging through it has revealed a few bugs though.

For example, I equipped a few of the semi-unique test mobs (they're big) with the Flame Shotgun spell, and it has revealed some interesting behavior. Flame Shotgun targeting is a cone, and all destroyables in the cone are added to the target list, so any mob that uses it on GC stands a pretty good chance of mowing down whatever trees, bushes, and random buddies happen to stand in the way. However, I still have AI code in the controller to distinguish friend from foe, and that code path will actually prevent a mob from attacking an ally, even though my initial intention was only to discourage such. Consequently, it is possible for two mobs to fire off a Flame Shotgun in GC's general direction, hit one another, and 'get mad' at each other. They will then switch targets, path until they are directly adjacent to one another, and then deadlock as each attempts to murder the other but can't, because the glitch in the controller prevents it.

I fixed the glitch, but it has revealed deeper issues that have me convinced a general refactor is in order. So that'll probably be my task for the next few weeks, in between work and stuff. Also, spring and all that the arrival of spring entails (lawn stuff, garden stuff, house stuff, you know... stuff.)

## Bringing back the mountains

I spent some time recently working out some bugs with the tri-planar shader. I believe the bugs are fixed now (and it required some fixes in the Urho3D library to do it), but I don't really understand what was causing them in the first place, so I'm still a touch nervous about them. At any rate, I've taken up the Quest Map UI again.

In a previous entry, I introduced the quest map as a 2D scene rendered to a texture and displayed on a scroll atop the Map Table model and in the placeholder interface widget for the quest map. The process that made the shader bugs apparent to me, however, was an experiment to bring back the old world-map look of the original 3D Goblinson Crusoe conversion. Back then, I started playing with representations of hex tiles modeled as hills, mountains, etc... I still like the style and feel of that world map, so I whipped up a quick test to see how I like it as the Quest Map.

In short: I like it quite a bit.

In the first shot, you can see the Map Table object with the scroll map on top. For that rendering of the map scene, I blend the texture with a version of the map scroll texture, whereas in the UI widget the rendered map is un-blended. My vision for it is to have various objects populating the map, representing the missions/quests available in the quest log. Each quest will have an entry in the list to the left. You can click on a quest in the quest list, and the map camera will zoom to that location. (Right now, the camera is just animated with a circle animator).

I will probably do another iteration on the actual tiles, with some updated textures and a more consistent style among tilesets. Pictured up there is a mish-mash of various experiments and tests, with different base textures, so some of the tiles don't really match the others that well.

But all in all, this version of the quest map is the one that I like the best. I'll try to juice it up a bit (better water texturing, aforementioned new tilesets, some animated doodads like seabirds, clouds, splashes in the water, etc...)

## New textures with the tri-planar shader

SoTL and MarkS talked me back into using the tri-planar shader I've been using so far. Above are a couple shots of some of the textures I've been working on the past few days.

A couple things:

The textures jibe together a bit better than the previous sets. Previously, I was using photo textures, with normal maps derived from the luminance of the diffuse, which isn't really the best way to go. In these textures, normal maps are derived from the particle bakes, so the normal map works a bit better.

A recent commit to Urho3D provided the ability to use texture arrays in the GL and D3D11 branches, something that has vastly simplified the tri-planar shader. Previously, I was required to combine all of the terrain textures into a single texture atlas, and use sampling trickery to obtain each of the pixel samples. While it worked well enough, it caused quite a bit of blurring to occur at oblique view angles, and often it produced texture seams at oblique angles as well. The new shader eliminates a great many shader instructions and produces a superior visual result. Additionally, I can more easily pick and choose textures for different terrain sets, without having to bake them into a dedicated texture atlas. I can simply specify which textures I want in an XML file. On the downside, it shuts me out from using the D3D9 render branch, as texture arrays are not supported in D3D9.

## Textures

Did a quick little test to see how well some of the textures I made in making the previous journal entry "read" in Goblinson Crusoe. Looks pretty good, if you ask me.

## Dirt And Rock Textures Using Blender Particle Systems

Reading alfith's entry on rocks reminded me of a technique I often use for creating dirt, stone, and pebble textures in Blender using particle systems. Since alfith's rocks are fresh in my mind (and near the top in my file history in Blender) I'll go ahead and use them to demonstrate the technique in its entirety.

The Blender Hair particle system type is useful for distributing particles as objects across the surface of a mesh. I have demonstrated how to use this type of system before, to create rendered forest scenes. The idea is basically the same: use a particle system to scatter a bunch of objects (in this case, rocks) across a surface. Since I have a .blend file with some of alfith's rocks in it, that's where I'll start: with some rocks. In this case, I have 3 rocks scaled to 3 different sizes:

In case you haven't read alfith's entry yet, go ahead and do so now. I'll wait. Essentially, he takes a default cube in Blender and slices it with several randomized planes, using boolean operators to take chunks from it. After the slicing, he bevels the edges and smooths it a little bit. Easy peasy, and the result is quite convincingly pebble-like. alfith provides the Python script he uses to build the rocks, in case you are curious.

After building a few rocks, I scaled them to different sizes. Then, all the rocks are combined together into a Group:

The Group functionality is found under the Object Data tab. Select the object, navigate to Object Data, and click Add To Group. If no Group has been created yet, you can create it on the first rock. All subsequent rocks should be added to the same group.

With this Group, you now have a tool for controlling the size distribution of the rocks. When the particle system is instanced, it will select objects randomly from the specified group. In this example case, I use 3 different sizes of rocks, resulting in a basically equal distribution of small, medium and large rocks. If a greater prevalence of small rocks is desired, you can add more small rocks to the group. Similarly if larger rocks are preferred. Some variance is added to the sizing process in the particle system itself, as well.
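Since the group is just a uniform random pool, padding it with duplicates is equivalent to weighting the choice. A toy Python illustration of the idea (not Blender API code):

```python
import random

# A group with 3 small rocks and 1 large rock: the particle system picks
# uniformly from the group, so small rocks appear roughly 3 times as often.
group = ["small", "small", "small", "large"]

random.seed(42)  # fixed seed so the run is repeatable
picks = [random.choice(group) for _ in range(10000)]
# Expect roughly 7500 "small" picks to 2500 "large".
```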

The rig I use for this purpose looks like this:

The camera is set to Ortho and oriented so that it sits above the origin on the Z axis, looking directly downward. The render size is set to the desired texture size (say, 512x512) and the Orthographic Scale is set to 1. The particle system is instanced on a plane of size 1x1 that sits centered at the origin, facing upward toward the positive Z axis. The simple explanation of how the particle system is created is as follows:

1) Create a plane. Press 's' to scale, type 0.5. When a plane is first created, it is sized 2x2. If desired, you can set Orthographic Scale to 2 in the camera settings, and not worry about scaling the plane at all. It's all the same either way.

2) With the plane selected, navigate to the Particle tab and create a new particle system. It should look like this:

3) Set the parameters of the particle system. To save time, I'll post some images of a representative setup along with the result, then explain what things are and what to tweak.

The Particle System type is changed from Emitter to Hair. Hair systems are pretty much what they say on the tin: a way to grow hair. In this case, the hair is shaped like rocks. The Advanced toggle is checked, which gives access to additional fields such as the Rotation sub-section, which also is toggled. The system emits from Faces by default, which is the desired behavior; this will cause rocks to emit from across the entire plane. By default, the Number of particles is 1000. You can tweak this to achieve your desired rock scatter density. Under Rotation, the Initial Orientation is changed to Global Z, and the Random slider underneath this is slid all the way to 1. The Random slider underneath the Phase entry is also slid all the way to the right, ending at 2. These sliders, together, give each rock particle a randomized rotation, so that the rocks do not align with one another. Under the Physics section, the Size parameter is set to 0.01, and the Random Size slider is turned all the way up to 1. Finally, under the Render section, the Group button is selected, and the group containing the rocks is chosen. The result of all of this should look something like this:

It's possible that you won't be happy with your immediate results, in which case you'll need to tweak. The chief things to tweak are:

1) Number of particles. Crank this up or down to alter the density of rocks.

2) Size distribution of rocks in the group. Add more rocks to alter the distribution.

3) Size parameter under the Physics tab. If all of your rocks are generally too large or too small, rather than scaling the individual rocks in the Group you can modify this scaling parameter.

4) Seed parameter in the header of the particle system. This gives a different pattern of distribution for each seed.

Now, you could go ahead and bake displacement/normal/ao/render from this (well, render once you have assigned materials and rigged some lights) if you want, but it won't be seamless. Also, if you want to create a set of textures that are different-yet-similar, and which tile with one another, this simple setup isn't sufficient. For that, some modifications must be made.

To show the rig I use for seamless texture sets, here is an image.

It looks a little tricky, so here is an annotated image:

In this image, you can see that I have instanced multiple differently-sized planes and aligned them with one another. Each plane has a duplicate of the rock particle system from the main plane. The plane labeled A is the main system. Objects labeled with the same letter are duplicates of each other. The differences between particle systems are:

a) Adjusted the number of particles based on the area of the plane. The main portion has 2000 in this case, the thin edge strips have 250, and the corner squares have 30.
b) Altered the Seed parameter of the particle system to give different patterns.

The pieces labeled B in the image share the same Seed; similarly, the pieces labeled C, D, E, F and G.

The reasoning for this pattern is that the edge strips are duplicated from one side onto the other, providing rock overlap as if from an adjacent tile of the same configuration, without having to actually duplicate the entire 1x1 square plane with its 2000 particles. The reasoning for the double sets of edge strips is that the Seed for the particle system on A can be altered to give different patterns for the center of the tile, but since the seeds for B,C,D,E,F and G stay the same, the edges of the tile will also stay the same, ensuring that all tiles created by varying A's seed will tile with all other variants seamlessly.
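The same idea can be modeled in a few lines of Python, with made-up strip widths and particle counts: the center scatter varies per seed, while the edge scatters come from fixed seeds and are duplicated on the opposite side, so every variant tile shares identical edges.

```python
import random

STRIP = 0.1  # width of the edge strips, in tile units (an assumed value)

def make_points(seed, n, x0, y0, w, h):
    """Scatter n points in the rectangle [x0, x0+w) x [y0, y0+h)."""
    rng = random.Random(seed)
    return [(x0 + rng.random() * w, y0 + rng.random() * h) for _ in range(n)]

def make_tile(center_seed):
    # Center region: varies per tile (the analogue of varying plane A's seed).
    pts = make_points(center_seed, 200, STRIP, STRIP, 1 - 2 * STRIP, 1 - 2 * STRIP)
    # Edge strips: fixed seeds, duplicated onto the opposite side (the B..G
    # analogue), so every tile's edges match every other tile's edges.
    # Corner squares are omitted here for brevity.
    left = make_points(100, 25, 0.0, 0.0, STRIP, 1.0)
    pts += left + [(x + 1 - STRIP, y) for (x, y) in left]
    bottom = make_points(101, 25, 0.0, 0.0, 1.0, STRIP)
    pts += bottom + [(x, y + 1 - STRIP) for (x, y) in bottom]
    return pts
```

Two tiles built with different center seeds differ in the middle but agree exactly inside the edge strips, which is what makes the rendered textures tile with one another.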

Now, with this setup it's time to start baking. But there's a problem: Blender doesn't like to Bake from a particle system. The typical method for baking a normal/ambient occlusion/displacement map from this setup is to create a plane, sized 1x1, and UV map it to fill the whole image. Then, translate it to lie directly above the rig, and use the Bake menu to bake out your maps. But, like I said, Blender doesn't like to bake from particle systems. There are a couple ways you can get around this:

1) Convert your particle systems to meshes, and bake from those. This involves selecting each of the planes, going to the Modifier stack for the plane and, on the entry for the particle system, pressing the big Convert button. This will make all of those rocks into real objects (duplicates of the corresponding rock from the rock Group). If you go this route, prepare for your face count to explode; the more particles you use, the more explody it will get. But once you have converted the meshes, you can then bake your maps.

2) Construct Cycles-based node materials for your objects, and render using those materials to get your various maps. If I have a particularly heavy particle system, this is the route I typically use. Here is the basic material setup for that:

Now, it might look a little complex, but it's not. Basically, there are 3 sections: AO, displacement and normal. On the far right is the output node, and to the left of that are 3 nodes that represent each of the 3 setups. The top node is the simplest, providing the built-in AO node. To bake the AO map, just connect the output of this node to the Surface input of the Material Output node, and render. The output should look like this:

If desired, you can connect the output of the AO node to a Brightness/Contrast modifier node to tweak the range of the AO map.

Second, connect the output of the Diffuse BSDF node in the center to the Surface input. This is the Displacement map chain. It obtains the coordinate value of the point, takes the dot product of this position against the vector (0,0,1), feeds the output of that operation to a Brightness/Contrast node so you can adjust the range, then feeds the result as a color to the output shader. The output of this operation should look something like this:
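As a sketch in Python: the dot product against (0,0,1) just extracts the Z coordinate, and the adjustment is a brightness/contrast remap (the pivot-at-0.5 contrast formula here is my assumption; Blender's node may compute it slightly differently):

```python
def displacement_value(position, brightness=0.0, contrast=1.0):
    """Height-to-gray conversion for the displacement bake: dot the point
    against (0,0,1) to get its height, then apply a brightness/contrast
    adjustment and clamp to the displayable [0,1] range."""
    height = position[0] * 0.0 + position[1] * 0.0 + position[2] * 1.0
    value = (height - 0.5) * contrast + 0.5 + brightness
    return min(1.0, max(0.0, value))
```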

The third set of nodes is used to bake a normal map. The normal is obtained from the Geometry input, split apart into separate X, Y and Z components. The Z is discarded, while the X and Y are modified by adding 1 and multiplying by 0.5. The value of 2 is substituted for Z, then the whole thing is combined back into a vector and fed to an Emission shader. (The emission shader is necessary so that lighting is not applied to the final output.) The result should look something like this:

The normal output material has a couple of Math nodes right after the split operation. You can modify the constants in those math nodes to scale the X and Y components of the normal, to make the bumps more or less pronounced. For example, with a value of 2 for both X and Y scaling:
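The whole normal-to-color chain, including that X/Y scaling, can be sketched in Python. Treating the substituted Z constant as clamping to full blue is my reading of the setup, not something spelled out in the node graph:

```python
def normal_to_color(normal, scale_xy=1.0):
    """Pack a mesh normal into normal-map colors the way the node chain
    does: scale X and Y (bump strength), remap from [-1,1] to [0,1],
    and force the blue channel to full (the substituted Z constant,
    which clamps to 1 when written to the image)."""
    x, y, _ = normal
    r = (x * scale_xy + 1.0) * 0.5
    g = (y * scale_xy + 1.0) * 0.5
    b = 1.0
    return (r, g, b)
```

Note that the scaling happens before the remap into color space, which matters for correctness; a flat up-facing normal (0,0,1) maps to the familiar (0.5, 0.5, 1.0) normal-map blue.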

So, now I've got an AO map, a Displacement map and a Normal map. It's time to take them into Gimp. Then, I search through my library of seamless textures. I'm looking for two simple dirt-like patterns without a lot of detail to them. I've got quite a few such textures kicking around on my hard drive. This step might take me a while to get something I like. Essentially, what I do is find a texture to act as the base dirt and a texture to act as the stones. Then I use the Displacement map as a mask to blend the stones layer on top of the dirt layer, adjusting the brightness and contrast of the displacement mask to get something I like. Sometimes, I won't even blend two layers like this; I might instead just use a single dirt texture. Whatever I decide upon, though, after I have a good diffuse color map that I like, I load up the AO map and multiply the AO against the diffuse to get a final diffuse map:
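Those Gimp layer operations amount to a per-pixel lerp followed by a multiply; in Python terms:

```python
def compose_diffuse(dirt, stones, mask, ao):
    """Per-pixel composite of the Gimp steps: use the displacement value
    as a blend mask between the dirt and stone colors, then multiply the
    result by the ambient occlusion value."""
    blended = tuple(d * (1.0 - mask) + s * mask for d, s in zip(dirt, stones))
    return tuple(c * ao for c in blended)
```

With mask 0 you get pure dirt, with mask 1 pure stones, and the AO term darkens the crevices either way.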

Now, for my terrain shaders I typically use a texturing scheme such as this method, which uses the displacement map to modify the blend factor when blending between terrain types. In preparation for that, I'll combine the displacement map with the diffuse map, displacement in the alpha channel. Then, I'll fire up a test terrain and see how it looks:
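The linked method boils down to the usual heightmap-blending trick. A hedged Python sketch of the idea (the depth parameter and exact formula here are my assumptions, not the linked article's code): each layer's weight is boosted by its displacement value, and only layers within a thin band below the highest surface contribute, which makes stones poke through the dirt at the seams instead of cross-fading mushily.

```python
def height_blend(w1, h1, w2, h2, depth=0.2):
    """Height-modified blend between two terrain layers. w1/w2 are the
    plain blend weights, h1/h2 the displacement (alpha channel) values,
    and depth controls how sharp the transition is."""
    a1 = w1 + h1
    a2 = w2 + h2
    peak = max(a1, a2)
    # Only contributions within 'depth' of the highest surface survive.
    b1 = max(0.0, a1 - (peak - depth))
    b2 = max(0.0, a2 - (peak - depth))
    total = b1 + b2
    return b1 / total, b2 / total
```

With equal weights, the layer with the taller displacement wins outright; only where the heights are close do the two actually mix.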

It can often take a lot of tweaking and iterating to get something I like. I'll experiment with diffuse colors, normal sharpness, rock shape, etc... Lots of different places in there to experiment.

Attached to this post is a zip archive of the .blend file used to generate the rock maps, in case you are interested in playing with stuff for yourself.

Edit: Re-uploaded the .blend file with gamma correction fixed and the normal shader tweaked to provide "more correct" output.

Edit: Re-uploaded yet another version of the blend file. This version has the material nodes grouped together, with 2 named inputs: displacement scaling and normal scaling to adjust how those outputs are scaled, and 3 named outputs: AO, Displace and Normal:

Expanding the group:

Some changes:

1) Tweaked the Displacement node chain to use a color ramp and a scaling factor, rather than a Bright/contrast node. The Color Ramp can be more easily tweaked to govern the gradient than a bright/contrast node, and the scaling factor can be used to quickly adjust the overall brightness.
2) Tweaked the Normal node chain a little bit, to get rid of some confusion. The scaling of X and Y needs to happen before the normal is converted into color space; pulling it out as a group input makes this even more clear.
3) The gamma change mentioned in the previous edit. By default, Blender does some color management stuff, including a gamma correction pass, that will push your colors out of whack. I've switched it to Raw color management, pushed the gamma to 1, and the colors are now unadjusted. This change also helps the ambient occlusion render to have greater contrast.

Edit: A shot of some more textures I made tonight:


## Instant Meshes Retopo

I'm terrible at art.

Let me back up. There are some aspects of art, at least video game art creation, in which I have achieved an "acceptable" level of skill. In others I have achieved "pretty okay". But there are many, many areas where I have achieved merely an "ass-tastic" level of skill. Retopo is one of these.

Retopology of a high-detail base mesh (such as from a sculpt) into a low-detail mesh suitable for use in-game is kind of a black art, imo. With much practice, one can become quite adept at structuring the topology of a mesh to follow the flowing lines and surfaces of an organic base mesh. The keyword is "much practice." Luckily, for those of us who have so many areas to master that we simply can't spend the hours and hours of time required to truly master such a skill, there are tools that can aid in the process. Recent releases of Blender offer many tools to help in the retopo process, including the remesh modifier.

In the past, I have made a few attempts at hand- or tool-assisted retopo of character meshes (such as the goblin featured in my game). Most often, however, I simply fall back to some combination of Blender's Decimate and Shrinkwrap modifiers, unwilling to spend a whole lot more time on it. Given that most of my projects use isometric Diablo-alike viewpoints, this is no big deal, as very rarely are you ever near enough to one of my meshes to notice just how terrible the retopo is. Seriously, it's terrible:

That's the current low-poly of Goblinson Crusoe. From far away, you can't really see how ugly it is, and certainly the ancient and venerable trick of normal mapping helps to hide the ugliness even more, but still: it's just not a clean mesh.

Today, I happened to stumble upon another tool: Instant Field-Aligned Meshes. The page provides links to the technical paper upon which the tool is based, the Github repo for the source code, and some pre-built binaries. It's a relatively simple tool, built to demonstrate the concept highlighted in the paper. But even so, it's a quite useful little program in its own right.

The tool allows importing .OBJ or .PLY data. I tend to work in .OBJ a lot, since the Sculptris tool will export to .OBJ directly. Once a mesh is imported, the Instant Meshes tool will start you on a journey of two steps. The first step, initiated by clicking the Solve button underneath the Orientation Field sub-section, will trigger the generation of an orientation field overlaid upon the mesh, that lets you see how the topology will flow:

The colored hash marks give you an idea of the general flow of the mesh that will result. Once this step is completed, you can use the three tools (represented as a Comb, a Magnet and a Ghost) to re-order the flow, if desired, to more closely match how you want the topology to flow across the mesh. In particular, the comb tool is useful. By tracing lines on the mesh, you can re-orient the flow fields:

The tool also lets you manipulate "singularities", places where the orientation field flows outward in multiple directions. For example, you could attract singularities to one region, such as an elbow or the point of a shoulder, or repel them away from another.

Once the field is completed, you initiate the second step by clicking Solve underneath the Position Field sub-section, at which point an overlay of the final mesh will be placed over the object:

In this step, you can now use the provided tools to specify edge paths. For example, if you have a sharp brow ridge, you can specify an edge path along the ridge in order to ensure that an edge of the mesh follows the ridge. The algorithm works fairly effectively at following these areas regardless. At any rate, once you are happy with the result, you can export an .OBJ mesh to be imported to Blender for UV mapping and texture baking. The result, as you can see, is a far cleaner mesh than my poor Decimate mesh:

As a programmer artist, I am entirely delighted at the quality of that retopo. Could a professional do better? Most likely. Will it provide an acceptable mesh flow for even close-up views of the character? Indeed it will. And it only took about 5 minutes using the Instant Meshes tool.

## Rendering Tree Sprites

It's been awhile since I did anything 2D-related, and I'm a little bit worried that some of those particular hard-earned skills are starting to fade from my weak brain. Last night, I was talking with an individual in the GD.net chat about trees. This person was drawing trees and looking for advice on how to make them, so I shared a little bit about my own personal workflow for 2D trees. I've talked about it somewhat before, but in the interest of archiving one little bit of digital knowledge that I risk losing, I'd like to take the time to talk about it a bit more in-depth.

I first started making trees many years ago, for the original Golem isometric RPG I was working on at the time. In the course of my research back then, I stumbled across a Python script for Blender that implemented an L-system for generating trees. The script is now long-defunct, non-functional for anything beyond Blender v2.49, and the site for it is gone. About the only thing I can find for it anymore is the archive at archive.org, and reading the comments in the sidebar about how progress on the next build is being made and should be released soon is kind of depressing. It's like a reflection of my own life. At any rate, you can see the site for an earlier version still in existence. But the sad fact is, the script is dead. It did serve as a nice introduction to the concept of L-system tree generation, however, and I do keep a copy of it (along with a zip of Blender 2.49b so I can run it). But in the modern world, there are other alternatives.

One of the chief alternatives is ngPlant, an open-source L-system generator. It is still under development, and being a stand-alone tool rather than a script built in Blender itself, it is not tied to any particular Blender version. Using ngPlant, you create a tree then export it to meshes for import to Blender. ngPlant allows the loading and saving of parameter templates, and there is a library site called 3D Plants Collection where .NGP files for ngPlant can be found that implement generators for various real-world plant and tree varieties. The site is somewhat related to the older Blender Greenhouse site.

The ngPlant generator is pretty neat. When you first start the program, you begin with a "tree" that consists of a single branch layer, forming a conical trunk. You can add additional branch layers in a hierarchy, with each successive layer of branches "growing" from the parent. At a certain depth, you convert a layer of branches to leaves. Some variants use multiple leaf layers, with different leaf texture selections for each layer. Trees with leaves and flowers, for example. This journal entry is not intended to be a tutorial for how to use ngPlant. If you are interested, I highly recommend downloading it and playing with some of the templates found at the 3D Plants Collection site to see how the various settings and sliders can be used to generate interesting vegetation forms. It's a fun toy to play with in its own right.

Once you've tweaked the sliders and made some cool stuff, you can export. I typically export to Alias-Wavefront .OBJ format, simply because Blender has no issues importing that format. The exporter gives you a few options that can help to optimize your export. For example, full-bodied tree canopies come as a result of having fairly deeply-nested branch levels. By going 3 or 4 levels deep on the nesting, you can get a thick, nicely-formed tree canopy, but the cost is having 3 or 4 levels of branches. As you get deeper in the branch hierarchy, the branches become smaller and more numerous, until you get to the level of twigs. Sadly, exporting all of these layers isn't really optimal. The smaller twig layers contribute a huge amount of geometry, but only a small amount toward the final render's visual appearance; many of those twigs won't even be visible at all. So ngPlant makes it possible to mark a branch layer as Hidden, and to choose not to export Hidden layers. This way, you can hide the final twig layers so they won't be exported.

Exporting will also copy the bark and leaf textures you assign in ngPlant to the folder you select as destination. The templates at the 3D Plants Collection site come with some decent textures, but I highly recommend spending some time with a digital camera and the Gimp, creating a library of your own leaf and bark textures. It gives you an excuse to go out into the hills or woodlands on a nice day. While you're out there, take a few pictures of whole trees as well. Having reference is a big help when you are tweaking ngPlant templates to fit what you see.

So, you've built a cool tree in ngPlant and exported it to .OBJ. Time to open up Blender and import. Now, before you go all crazy with generating trees and trying to render them, it is important that you settle on a consistent lighting scheme so that they are all visually compatible with one another. For this purpose, I will usually create a lighting and rendering rig, setting up the lights and the camera just right, then saving that Blender file and using it as a template when importing trees.

Creating the Lighting Rig

The first thing I do is fire up Blender and delete the default cube and camera. Deleting the camera isn't strictly necessary (it's easy enough to reset its parameters) but it's just a habit I have gotten myself into. I will also usually delete the default point lamp. I want to start with a nice, empty scene. Once I'm there, I create a new camera. The new camera will be created at the origin (assuming you haven't moved the 3D cursor, anyway) pointing downward along the Z axis. Select the camera, press 'r' to rotate, press 'x' to constrain the rotation to the X axis, and type 90 then Enter to rotate it by 90 degrees. Now it will be pointing along the Y axis. Press 'g' to grab it, press 'y' to constrain it to the Y axis, and type, say, -10 to move it 10 units back. Now it will be located away from the origin, pointed back toward the origin.

Now, at the bottom of the render view is a drop-down box:

This box can be used to select the pivot location for rotation operations. On startup, it is usually set to Median, meaning that rotation operations will pivot around the median center point of the selected object(s). If you choose 3D Cursor instead, then the rotation will be performed around the location of the 3D cursor, which should still be set at the origin. Once changed to 3D Cursor, select the camera again, press 'r' to rotate, 'x' to constrain to the X axis, then type -30 and Enter. This rotates the camera up around the origin to a location 30 degrees above the horizon, which is suitable for isometric games using the common 2:1 tile ratio. Of course, the angle you set here needs to reflect your actual in-game angle.

If I am going to be using the same lighting rig to pre-render landscape tiles and graphics for an isometric game, then at this point I will select the camera, press 'r' to rotate, press 'z' to constrain to the Z axis, and type 45 then enter to rotate the camera around the Z-axis. While not strictly necessary, this step can help in getting the lighting to be consistent between landscape tiles and graphics such as trees.

Once the camera is in place, I go to the Camera options menu:

There, I change the camera to Orthographic, and change the Orthographic Scale to something reasonable, like 2. I typically export my trees scaled to 1 or 2 units high, since I like to work at smaller scales.

In order to assist in lighting, I typically create a ground plane at the origin. I create a plane, scale it by, say, 4. This will help me when choosing the shadow angle. It will also come in handy later when rendering shadows.

Next, it's time to set up the lights. When rendering isometric graphics like this, I almost always stick to using only Sun lamps. They provide directional lighting, which is necessary when illuminating tile objects that need to tile with one another. In order to keep the lighting consistent between tiles and objects, it is necessary to use the same Sun lamp setup for the trees. So, I'll go ahead and create an initial sun lamp.

Lighting is subjective, and dependent entirely upon the look and feel desired for the game. It is common that I will spend a LOT of time tweaking this part. Lighting rigs can change depending on environment theme as well, so in any given isometric project you might create a dozen of these templates. In order to assist in setting up the lighting conditions, I will usually import a landscape object. Trees work, of course, but they tend to be a tad heavyweight, requiring a bit of time to render, so something lighter might be more appropriate. It's an iterative process: setting lights, rendering, tweaking, etc...

Basically, I'm looking for proper illumination from the chosen viewpoint angle, and a nice-looking shadow cast on the ground plane. I will spend some time tweaking the Size parameter on the Sun lamp to dial in the softness of the shadow. Larger sizes equal softer shadows, smaller sizes equal sharper shadows. At these scales, I find that somewhere around 0.01 or 0.05 is a nice place to be, but it again depends on the game.

I like to use the Blender Cycles renderer, because the shader-like structure of the material system is a nice addition to the pipeline. However, the stochastic nature of the render, with the accompanying introduction of high-frequency noise, means that I have to crank the Samples in the final render up a LOT in order to get tile pieces to properly match their lighting with each other.

Once I'm somewhat satisfied with the lighting rig, then it's time to start working on the materials for the tree. ngPlant exports each branch layer as a separate mesh object. I typically will select all the branch layers and use Ctrl+J to join them into a single mesh. Add a simple texture mapping material to the trunk, with the bark texture specified as an image source, and render to see how it looks.

For the leaves, the material is slightly different. It uses the alpha channel as an input to mix between a diffuse shader, whose color comes from the color channel of the texture, and a transparent shader. This implements an alpha mask using the leaf texture's alpha channel. Additionally, a mix shader with a solid color can be used to grant seasonal variation. Or, a mix shader with a noise component can grant color variation throughout the canopy. For example:

At this point, once I've tweaked the lighting and the shadow, I save the .blend file, because after this I will be making some destructive edits, so this will give a base to work from for both paths.

First of all, in order to make the final tree render, I delete the ground plane. We want a transparent background, so I tick the Transparent check box in render settings:

Also have to select RGBA for the output format, so it will properly save the alpha channel:

And render:

At this point, I'll usually throw together a quick test background in gimp to see what the tree is going to look like on ground that is roughly equivalent to what will be in the game:

That looks okay. This process usually takes a lot of tweaking. I'll render, and decide I don't like the particular angle the tree is at. Or I'll render, paste it on a background, and decide the lighting is off. Or maybe I don't like the foliage colors. Something almost always needs to be changed, sometimes multiple times, but the end result is usually a tree sprite I'm happy with.

But it kinda looks off, doesn't it? It really, really needs a shadow.

In 3D, shadows are Easy(TM). (For certain definitions of the word "easy"). That is, shadowing is a render process integrated into the pipeline and performed at run time. But in a 2D game like this, you can't really do that. That tree sprite lacks depth information that would be required for run-time shadows. Instead, you have to 'bake' the shadow into the sprite, either as part of the sprite itself or as a separate shadow sprite that is drawn prior to the tree sprite. So, to get the shadow, I will load up the base save for the tree, the one that still has the background plane. And then I gotta do some stuff.

This part is why I really, really, REALLY like the Blender Cycles renderer. It makes the shadow phase of a sprite dead simple. Here is what you do:

You setup a material like the above, and assign it to both the trunk and the leaves of the tree. (It's not 100% accurate for the leaves; for that, you would also need to add alpha masking, but I find that it's close enough and renders faster this way.)

In Blender Cycles, the renderer does its thing using various rays that are traced, or stepped, through the scene. At any given time, a particular ray will be labeled according to its type or phase. The Light Path node lets you select a behavior based on the type of ray currently being processed. In this case, I select based on whether a ray is a Shadow ray or not. If it is a Shadow ray, then the ray is 'drawn' (ie, considered when generating the output). Otherwise, the ray is 'ignored', ie 'drawn' using a completely transparent material. This means that only the shadow will be drawn when rendered; the object itself is rendered completely transparent. Observe:

Nifty, yeah? After the shadow is rendered, I can take it into the Gimp and do some stuff to it.

First, I adjust the contrast a bit, until the white parts are fully white. If I used a slightly tinted sun lamp, then I also will desaturate the shadow, to remove any color information. Then I invert, so that the white parts become black, the black parts become white, and everything in between is flipped upside down:

At this point, I will choose a solid color that is close to my desired shadow color, and use the inverted shadow as an alpha mask. Again, this typically takes some tweaking to get it exactly right, but the end result looks something like this:

Now, you can either use that shadow as a pre-pass sprite, or you can combine the tree and shadow into a single sprite:

The shadow seems a little too dense, so I'd probably go back and adjust the brightness of the inverted shadow mask, iterating on it until I got something I liked.
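
Incidentally, the invert-and-mask work can be expressed as simple per-pixel math. This is just a sketch of what the Gimp steps above amount to; the function name, the density parameter, and the 8-bit RGBA layout are my own assumptions, not anything from the actual pipeline:

```cpp
#include <cstdint>

struct RGBA { std::uint8_t r, g, b, a; };

// Turn one pixel of the rendered shadow pass into a pixel of the final
// shadow sprite: invert the luminance so shadowed areas become opaque,
// then apply a solid shadow color with the inverted value as alpha.
RGBA shadowPixel(std::uint8_t renderedLuminance,  // white = fully lit, black = full shadow
                 RGBA shadowColor,                // the chosen solid shadow color
                 float density)                   // overall shadow strength, 0..1
{
    std::uint8_t inverted = 255 - renderedLuminance;  // the 'invert' step
    float alpha = (inverted / 255.0f) * density;      // the 'alpha mask' step
    return { shadowColor.r, shadowColor.g, shadowColor.b,
             static_cast<std::uint8_t>(alpha * 255.0f + 0.5f) };
}
```

Lowering the density value here corresponds to reducing the brightness of the inverted mask.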

Time to check it against the test background:

The neat thing about doing trees like this is that, once you have done the work for a single tree, you can rotate the tree and re-render to get multiple variations. Rotating the tree makes it look like an entirely new tree in most cases. You can also alter things like the leaf texture, or the foliage blend colors to get different canopy appearances. You can remove the canopy leaves altogether, leaving just the tree trunk (though, if you do this, I suggest un-hiding another level or two of the exported branches, to get more twigs in view.) You have lots of leeway to adjust lighting, materials, shadows, etc... and the process lets you do that quickly, without having to re-draw trees by hand.

## ANL Expression Parsing

Over the last several years, my noise library (ANL) has been a sort of sink for whatever miscellaneous time I get in between work, kids, the house, game development, game playing, etc... It has evolved quite significantly from its roots as basically a libNoise knockoff. One of the key goals I have always had in mind has been to decrease the redundancy and boiler-plate involved in creating complex noise functions.

In the beginning, ANL followed libNoise in its structure. Noise functions were composed by chaining instances of various function classes. Creating a noise function was very clunky, as you had to instance the function objects then chain them manually by parameter passing. It was a pointer-indirection hell with a LOT of manual typing redundancy. Observe:

```cpp
anl::CImplicitFractal frac(anl::FBM, anl::GRADIENT, anl::QUINTIC, octaves, freq, false);
anl::CImplicitAutoCorrect ac(0, 1);
ac.setSource(&frac);
anl::CImplicitSelect sel(0, 1, &ac, 0.5, 0.1);
```

It worked, but it took a lot of typing. Over time, I simplified things with various interface classes and some Lua code that allowed me to specify function chains using Lua tables. The latest major re-write of the library formulates noise functions as simple arrays of instructions in a virtual-machine-ish fashion. The primary interface is quite similar to the old tree-builder interface added to the initial version, ie you can build modules in this manner:

```cpp
anl::CKernel k;
k.select(k.constant(0), k.constant(1),
         k.simplefBm(anl::GRADIENT, anl::QUINTIC, 6, 2, 123123),
         k.constant(0.5), k.constant(0.1));
```

This format is much more concise, but still requires the mechanism of making function calls on k.

Something I have wanted to implement for several years now is the ability to construct a noise function from an expression. The other day in the gd.net chat, we were talking about implementing simple calculator functionality to evaluate an expression, and it got me motivated to start working on an ExpressionBuilder class for ANL that can parse an expression string and construct a function chain from it. This morning, I pushed a commit of my initial work on this. This expression builder functionality lets you write expressions such as:

```
clamp(gradientBasis(3,rand)*0.5+0.5,0,1)+sin(x)*3
```

and the code will parse the expression and generate the functions within a kernel, returning the index of the final function in the chain.

If you've never written an expression parser/evaluator, then know that there are essentially 3 steps to the process:

1) Split the input string into a stream of 'tokens'.
2) Convert the token stream into a format the computer can easily evaluate.
3) Evaluate the final expression and return an answer.

The first tokenizer a new programmer is likely to encounter is a simple string-split operation, which splits up a string into chunks based on whitespace. Such a split operation might look like this:

```cpp
std::vector<std::string> tokenize(const std::string &s)
{
    std::istringstream st(s);
    std::vector<std::string> vec;
    std::string token;
    while(st >> token) vec.push_back(token);
    return vec;
}
```

This simple code accepts a string as input, and returns a vector of the individual tokens. However, the issue with a simple tokenizer like this is that individual tokens must be delineated with whitespace. The above example expression, then, would have to be written as:

```
clamp ( gradientBasis ( 3 , rand ) * 0.5 + 0.5 , 0 , 1 ) + sin ( x ) * 3
```

which, obviously, is annoying as hell. Make the simple mistake of omitting a space, and suddenly what should be 2 separate tokens get merged into a single token that probably doesn't correctly match a token pattern. The expression parser wants tokens to 'match' a pattern in order to determine what each token is. You have your identifiers (start with a letter, can contain some combination of letters, numbers and _ characters), your numbers (contain digits, possibly a leading -, possibly a decimal point somewhere, etc..), your operators (math operations such as +, -, ^, etc..), your parentheses (open and close), and so forth. These tokens have to match the correct pattern so the evaluator knows what each one is, but if two tokens get 'squished' together because you forgot a space, the result is likely a parse error, where the parser erroneously interprets the squished tokens as a different token altogether, or simply can't interpret them as any kind of valid token.

So, the 'simple' tokenizer won't work for general expressions. That means you have to write a more robust (ie, more complicated) tokenizer. Such a thing is going to iterate the string character by character, attempting to match patterns as it goes. It will "eat" whitespace as it works (ie, before attempting to parse a token it will discard any leading whitespace), and attempt to build a token of a particular type based on the first non-whitespace character it encounters. Did it encounter a letter? Then the token is likely either a function name or a variable name, so parse an identifier. Starting from that first character, it will read characters until it encounters one that is not valid for an identifier: an operator, a parenthesis, a comma, or whatever. As soon as this pattern-breaking character is encountered, the tokenizer packages up the characters it has read, labels them as a FUNCTION or VARIABLE token, and inserts the token into the token stream. Then it continues on, parsing the operator or parenthesis it encountered.
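
A minimal sketch of such a character-by-character tokenizer might look like the following. This is illustrative only, not ANL's actual tokenizer; the token types and names are my own, and it punts on details like negative numbers and multi-character operators:

```cpp
#include <cctype>
#include <cstddef>
#include <string>
#include <vector>

enum class TokenType { Number, Identifier, Operator, OpenParen, CloseParen, Comma };

struct Token
{
    TokenType type;
    std::string text;
};

std::vector<Token> tokenizeExpression(const std::string &s)
{
    std::vector<Token> tokens;
    std::size_t i = 0;
    while(i < s.size())
    {
        char c = s[i];
        if(std::isspace((unsigned char)c)) { ++i; continue; }   // eat whitespace
        if(std::isalpha((unsigned char)c) || c == '_')
        {
            // Identifier: a function or variable name.
            std::size_t start = i;
            while(i < s.size() && (std::isalnum((unsigned char)s[i]) || s[i] == '_')) ++i;
            tokens.push_back({TokenType::Identifier, s.substr(start, i - start)});
        }
        else if(std::isdigit((unsigned char)c) || c == '.')
        {
            // Number: digits with an optional decimal point.
            std::size_t start = i;
            while(i < s.size() && (std::isdigit((unsigned char)s[i]) || s[i] == '.')) ++i;
            tokens.push_back({TokenType::Number, s.substr(start, i - start)});
        }
        else if(c == '(') { tokens.push_back({TokenType::OpenParen, "("}); ++i; }
        else if(c == ')') { tokens.push_back({TokenType::CloseParen, ")"}); ++i; }
        else if(c == ',') { tokens.push_back({TokenType::Comma, ","}); ++i; }
        else
        {
            // Anything else is treated as a single-character operator.
            tokens.push_back({TokenType::Operator, std::string(1, c)});
            ++i;
        }
    }
    return tokens;
}
```

Feeding it clamp(x,0,1)+3 yields ten tokens, each labeled with its type, with no spaces required in the input.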

At the end of the tokenizing, you end up with a token stream. This stream is essentially the same as the expression itself. It is in the same order, the chief difference is that it is 'split up' into easy-to-digest chunks, and each chunk is labeled as to what 'type' it is, ie NUMBER, FUNCTION, OPERATOR, and so forth. However, it's still not in a format that the computer can easily evaluate.

Computers are different from you and me. You or I could take an expression like x+3*y-9 and figure out an answer. You know from math class that multiplication comes first, so you're going to multiply 3 by y. Then you're going to add that to x, and subtract 9 from the whole thing. For us, it's 'easy' to interpret such an expression string. But the computer has a hard time interpreting it in the initial format. Part of it lies in the formal idea of operator 'precedence'. You know that multiplication comes first, so you skip to that part first. Your natural language processing and pattern processing 'brain' knows how to find the pieces that have to be calculated first. But a computer has to be specifically told which operations to do, and in what order to do them, and it is difficult to figure that out inherently from an expression such as we are used to looking at.

The trick is to convert the expression from its current format (commonly called 'infix') to a format that it can work with more easily, called 'postfix'. Infix simply means that the operators are 'inside' the operation. Postfix means that the operators come at the end of the operation. For example, the expression 4*3 in infix would equate to the expression 4 3 * in postfix. Similarly, the infix expression 5*(3+5) would convert to 5 3 5 + *.

A postfix operation specifies the operands first, followed by the operator to use on the operands. It is 'easy' for us to read an infix expression, but hard for us to read the postfix format of the expression, whereas it is 'easy' for the computer to read the postfix, and harder for it to read the infix. So, once an expression has been successfully converted into a stream of valid tokens, the next step is conversion from infix to postfix.

The algorithm I use is called the Shunting yard algorithm. This algo is so-named due to its similarity to how rail cars are split up and assembled into trains in a rail yard. The linked wikipedia article describes the algorithm fairly well. Essentially, it is a series of 'rules' for how each token in a stream is to be processed. The algorithm uses 2 data structures: the output vector (which will be a token stream converted to postfix) and an operator stack, onto which operator tokens or function tokens can be pushed. The algorithm works by iterating the token stream from first to last, and for each token:

1) If it's a number or variable token, push it into the output vector.
2) If it's a function token, push it onto the operator stack.
3) If it's an argument separator (comma), pop operators off of the top of the operator stack and push them into the output vector, until a left parenthesis ( is encountered (leave the parenthesis on the stack).
4) If it's an operator, compare the operator's precedence with the precedence of the operator currently at the top of the stack (if the stack is not empty), and if the precedence of the operator being considered is less-than-or-equal to the one on top of the stack, then pop that operator off the stack and push it into the output vector. Keep going until either the stack is empty, or the operator on top has lower precedence. Then, push the operator being considered onto the stack.
5) If it's a left parenthesis, push it onto the stack.
6) If it's a right parenthesis, then pop operators off of the stack and push them into the output vector, until you get to a left parenthesis. Pop that off and discard. If the token now on top of the stack is a function, pop it off and push it into the output vector.
7) Once the end of the input stream is reached, pop all remaining operators off of the stack and push them into the vector. Then, return the vector to the caller.
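
Here's a compact sketch of those rules in code, working on a pre-split stream of string tokens. It's illustrative only (not ANL's implementation): it assumes left-associative binary operators, it distinguishes a function from a variable by peeking for a following '(', and the names are my own:

```cpp
#include <cctype>
#include <cstddef>
#include <map>
#include <stack>
#include <string>
#include <vector>

std::vector<std::string> toPostfix(const std::vector<std::string> &tokens)
{
    static const std::map<std::string, int> prec = {{"+", 1}, {"-", 1}, {"*", 2}, {"/", 2}};
    std::vector<std::string> output;
    std::stack<std::string> ops;

    for(std::size_t i = 0; i < tokens.size(); ++i)
    {
        const std::string &t = tokens[i];
        if(std::isdigit((unsigned char)t[0]))
            output.push_back(t);                                 // rule 1: number
        else if(std::isalpha((unsigned char)t[0]))
        {
            // A name followed by '(' is a function (rule 2); otherwise a variable (rule 1).
            if(i + 1 < tokens.size() && tokens[i + 1] == "(") ops.push(t);
            else output.push_back(t);
        }
        else if(t == ",")
        {
            // rule 3: pop operators to the output until the left parenthesis.
            while(!ops.empty() && ops.top() != "(") { output.push_back(ops.top()); ops.pop(); }
        }
        else if(prec.count(t))
        {
            // rule 4: pop operators of greater-or-equal precedence, then push this one.
            while(!ops.empty() && prec.count(ops.top()) && prec.at(ops.top()) >= prec.at(t))
            { output.push_back(ops.top()); ops.pop(); }
            ops.push(t);
        }
        else if(t == "(")
            ops.push(t);                                         // rule 5
        else if(t == ")")
        {
            // rule 6: pop to the output until '(', discard it, then pop a function if present.
            while(!ops.empty() && ops.top() != "(") { output.push_back(ops.top()); ops.pop(); }
            if(!ops.empty()) ops.pop();                          // discard the '('
            if(!ops.empty() && std::isalpha((unsigned char)ops.top()[0]))
            { output.push_back(ops.top()); ops.pop(); }
        }
    }
    // rule 7: flush whatever operators remain.
    while(!ops.empty()) { output.push_back(ops.top()); ops.pop(); }
    return output;
}
```

Given the token stream for 4*(3+5), this produces 4 3 5 + *, matching the examples below.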

The result should be the expression converted to postfix notation, barring any errors.

The postfix notation has the characteristic that all of the parentheses and commas in the expression are eliminated, so that no parentheses or comma tokens end up in the output stream. Only operands (number or variable), functions and operators are present in the stream. The operands and operators are ordered such that if the postfix token stream is evaluated, the order implied by the parentheses in the original expression is preserved. For example, in the expression 4*3+5 the resulting postfix will be 4 3 * 5 +, whereas with the expression 4*(3+5) the postfix will be 4 3 5 + *.

Evaluating a postfix is a fairly simple operation, involving yet another stack. The stack this time is used to hold operands (numbers or variable tokens).

To evaluate a postfix stream, simply iterate the stream, and for each token:

1) If it's a number, push the number onto the operand stack
2) If it's a variable, look up the value of the variable and push it onto the stack
3) If it's an operator, pop 2 values off the stack (right, left), perform the operation described by the operator using the two operands, and push the result onto the stack.
4) If it's a function, pop as many operands as the function requires off the stack, call the function, and push the result back onto the stack.

When all is said and done, there should be one value left on top of the stack. This is the result of the expression.
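
In code, that loop sketches out like so, again on string tokens rather than ANL's instruction indices. The variable map and the tiny one-argument function table are illustrative assumptions:

```cpp
#include <cctype>
#include <cmath>
#include <functional>
#include <map>
#include <stack>
#include <string>
#include <vector>

double evaluatePostfix(const std::vector<std::string> &postfix,
                       const std::map<std::string, double> &vars)
{
    // A tiny one-argument function table, just for illustration.
    static const std::map<std::string, std::function<double(double)>> funcs = {
        {"sin",  [](double v) { return std::sin(v); }},
        {"sqrt", [](double v) { return std::sqrt(v); }}};

    std::stack<double> operands;
    for(const std::string &t : postfix)
    {
        if(std::isdigit((unsigned char)t[0]) || t[0] == '.')
            operands.push(std::stod(t));                  // rule 1: number
        else if(vars.count(t))
            operands.push(vars.at(t));                    // rule 2: variable lookup
        else if(funcs.count(t))
        {
            // rule 4: pop the argument, call the function, push the result.
            double v = operands.top(); operands.pop();
            operands.push(funcs.at(t)(v));
        }
        else
        {
            // rule 3: binary operator; the right operand is on top of the stack.
            double right = operands.top(); operands.pop();
            double left  = operands.top(); operands.pop();
            if(t == "+")      operands.push(left + right);
            else if(t == "-") operands.push(left - right);
            else if(t == "*") operands.push(left * right);
            else              operands.push(left / right);   // "/"
        }
    }
    return operands.top();   // the single remaining value is the answer
}
```

Feeding it the postfix form of 4*(3+5), ie 4 3 5 + *, yields 32.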

The evaluator I implemented for the ExpressionBuilder works similarly to this, except rather than using a stack for operands, I use a stack for CInstructionIndex values returned from the various functions of CKernel. When evaluating the postfix stream, if a token is a number or variable, then the number/variable is passed to a call to CKernel::constant() and the resulting instruction index is pushed onto the stack. If a token is an operator, such as *, then 2 indices will be popped and the corresponding math function in CKernel is called, ie CKernel::multiply(left,right), and the result pushed onto the stack. And so it goes, until the final entry on the stack is the index of the final function in the chain.

The ExpressionBuilder implements most of the functions in the CKernel interface. (I still have to figure out the best way to implement the fractal functions, though.) Some of the CKernel functions, such as x(), are implemented as variables rather than functions, meaning that in the expression you can use x instead of having to use x() in order to get the value of the x part of the input coordinate. Saves a little bit of typing. Similarly, radial is implemented as a variable rather than a function.

The ExpressionBuilder also implements 3 'special' variables: rand, rand01 and index. The rand token, when encountered, results in a PRNG call to get a random number, which is passed to CKernel::seed(). The rand01 token results in a PRNG call to get a random number converted into the range 0,1 and passed to CKernel::constant(). The index token is not yet implemented; it's what I'll be working on today. This token will allow you to 'index' the results of previously-evaluated expressions as a means of 'including' them into the current expression. For certain function chains that might be used repeatedly throughout a larger function chain, this is the way to go.

So far, the code I have pushed works. I haven't tested it very thoroughly, and I don't completely trust the tokenizer, but I'll continue to work on it throughout the days of my time off. I'm pretty happy to have finally started this project, though.

## Jasper's Place

So, I haven't worked on GC in quite awhile. I've been playing a bit of Path of Exile lately, so my urge to play/work on a turn-based RPG is at a pretty low ebb.

I typically play games at my PC in the dining room. I have dual monitors, and on the other one I've typically got Thomas the Train cartoons or Leapfrog counting/letters cartoons running for my kids. Both kids will sit there and watch or draw or whatever while I play. But lately, my oldest (my son) has been more interested in watching me play games. Since PoE tends to be a tad... ah... grim, I haven't been playing it quite as much while he watches. For the most part, most of the action is okay, but some of the areas (specifically, the battlegrounds in Act 3, the Lunaris Temple with its piles of bodies and rivers of blood, and the whole last half of Act 4 which takes place inside the greasy, gross bowels of a huge monster) get a little sketchy. So lately, instead of playing PoE, I've been working with my son, Jasper, on creating Jasper's Place.

Jasper's Place is a point-n-click area exploration toy. Fascinated by the action in PoE, where you click somewhere and your little guy moves there, Jasper will happily click around for an hour or more in a safely cleared Coasts map. So I decided to make a little toy that he can play with, without running the risk of rip-ing one of my hardcore characters to an overlooked mob while I'm not paying attention. The beauty of Jasper's Place is that Jasper is taking an active part in creating it. It's still pretty sparse, but the whole project so far has been a fantastic way for me to spend time with my son, indoctrinating him in the ways of game development. Both of us have been having a blast.

A level in Jasper's Place starts as a heightmap. Some time ago, I created a rudimentary heightmap editor, which allows us to draw mountains and valleys and hills. A couple of simple filters let us populate it quickly with Perlin noise, then draw splines to create rivers and roads and whatnot.

Jasper tells me where to make the roads and rivers. (He's only 3, so while he tries to do some of it, the tools aren't really all that toddler-friendly). I then export the heightmap to Blender, where we paint it.

The heightmap editor includes terrain painting that paints to a texture blend map draped over the terrain. But Jasper loves caves. He wants caves in Jasper's Place, and Urho3D's Terrain component doesn't play nicely with caves. (Specifically, there is no way yet of 'snipping' out holes.) So instead, I apply the heightmap as a displacement to a plane in Blender, then export the plane as a single object for the game. And instead of performing terrain blends from a texture, I have modified the tri-planar terrain shader from Goblinson Crusoe, tweaked to obtain the blend factors from the RGB components of the vertex color layer. This allows Jasper and me to go into Blender's vertex paint mode and quickly paint on terrain. By now, he knows that red equates to stone, green equates to grass and blue equates to sand, while black brings back the dirt. When it comes time for terrain painting, I assist a little bit, but for the most part I let him go crazy.

When it's time for the caves, I have to help him out quite a bit. He tells me where he wants a cave to open, and I'll build it. To create the caves, I start by extruding the edges of the displaced heightmap down and filling the extruded area to create a solid. Then I can do boolean operations with various tubes, created using Bezier curves. Using a boolean difference option, I 'subtract' the tubes from the heightmap solid. Then I delete the extruded parts of the heightmap (to avoid extraneous geometry) and do some cleanup of the edges where the cave was subtracted.

Here is an underside shot of an example cave:

On the right is the curve/tube that was used to bore the cave. Shot of the cave opening:

Once the caves are dug, we paint:

The terrain shader defaults to a dirt fill, so anywhere there is black is dirt. Successive layers are 'piled' on by the red, green and blue channels, meaning that up to 4 terrain types are supported. This part is Jasper's favorite part. Kid loves to paint. Once it's painted, I export and load it up in the toy.
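
For what it's worth, the blend works out to something like the following per fragment. This is sketched as plain C++ rather than actual shader code, the tri-planar texture sampling is omitted, and the layer assignments are just the ones mentioned above:

```cpp
struct Color { float r, g, b; };

// Linear interpolation between two colors.
static Color mix(Color x, Color y, float t)
{
    return { x.r + (y.r - x.r) * t,
             x.g + (y.g - x.g) * t,
             x.b + (y.b - x.b) * t };
}

// Dirt is the base fill; the red, green and blue vertex color channels
// pile stone, grass and sand on top of it, for up to 4 terrain types.
Color blendTerrain(Color dirt, Color stone, Color grass, Color sand,
                   float r, float g, float b)   // vertex color channels, 0..1
{
    Color c = mix(dirt, stone, r);
    c = mix(c, grass, g);
    return mix(c, sand, b);
}
```

Black vertex color (r = g = b = 0) falls through to pure dirt, which is why painting black "brings back the dirt."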

The toy is pretty simple. It just loads a level, creates a navigation mesh and lets Jasper point-n-click to move Goblinson Crusoe around. I bounded the play area with rock walls, because it baffled Jasper that the world would just suddenly end. But he understands that a wall can stop him, so that works.

Now he's talking happily about building a castle, so I'm going to build some simple castle pieces so he can show me where to put them.