Adventures of Goblinson Crusoe


JTippetts

Following along from the previous post about the node graphs, I have lately pulled the node graph stuff out and started work on a standalone editor for noise functions: https://github.com/JTippetts/ANLEditor

The editor uses the node graph functionality, along with an output node that provides various functions, to allow one to create graphs of noise that can be used to create textures.

sU7aQW1.png

The output node allows you to map the output to either a Grayscale or RGBA image (the Volume button does nothing for now). It can analyze a connected grayscale function to give you a histogram of how the function's output is distributed, and it calculates a pair of scale/add factors that can be used to remap the output of the function to the 0,1 range. It also allows you to specify seamless mapping settings, and to export images to file. It's all still fairly rudimentary and I still haven't settled on a final save/load format, but all that is in progress.
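For reference, the scale/add remapping is just a linear fit from the function's observed range onto 0,1. A minimal sketch of how such factors might be derived (hypothetical helper, not the editor's actual code):

#include <algorithm>
#include <functional>

struct RemapFactors { double scale, add; };

// Sample the function over the mapping domain, find its min/max, and derive
// scale/add so that remapped = value * scale + add lands in [0,1].
RemapFactors ComputeRemapFactors(const std::function<double(double, double)>& fn,
                                 int samples = 256)
{
    double lo = 1e30, hi = -1e30;
    for (int y = 0; y < samples; ++y)
        for (int x = 0; x < samples; ++x)
        {
            double v = fn((double)x / samples, (double)y / samples);
            lo = std::min(lo, v);
            hi = std::max(hi, v);
        }
    double scale = (hi > lo) ? 1.0 / (hi - lo) : 1.0;
    return { scale, -lo * scale };
}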

I have also started creating an editor of sorts for Goblinson Crusoe, using some of this editor functionality. It's still in its infancy, but eventually it will allow me to create areas and area pieces for use in randomly generating maps. 

Lately, I've been doing some thinking about what I want to do with Goblinson Crusoe. It has become clear to me that it will probably never be a commercial release. It will probably never be on Steam or anything like that. I just don't have the time. I wasted a LOT of my early years spinning my wheels and going nowhere, and now I have so many other things that have to come first (family, full-time job, home-ownership, etc...) that I just don't think I'd ever realistically finish this thing. If I could work on it full-time, perhaps, but mortgage, bills, and the necessity of providing insurance and safety nets for my wife and kids means that isn't feasible.

However, I do enjoy working on it, and don't want to just abandon it. Nor do I want it to never see some kind of release. And I would like to see some kind of return on my huge time investment over the years. So I've been thinking of making GC a free and open-source project, linked with a Patreon and/or PayPal for goodwill donations. At least that way, if I die or life gets in the way of further development, at the very least it wouldn't disappear completely. Plus, I might get at least a few dollars from appreciative people along the way.

What do you all think? Any advice on how to structure such a thing? Good idea/bad idea?

JTippetts

I've been working on the node graph editor for noise functions in the context of the Urho3D-based Terrain Editor I have been working on. It's a thing that I work on every so often, when I'm not working on Goblinson Crusoe or when I don't have a whole lot of other things going on. Lately, it's been mostly UI stuff plus the node graph stuff. The thing is getting pretty useful, although it is still FAR from polished, and a lot of stuff is still just broken.

Today, I worked on code to allow me to build and maintain a node graph library. The editor has a tool, as mentioned in the previous entry, to allow me to use a visual node graph system to edit and construct chains/trees/graphs of noise functions. These functions can be pretty complex:

70xCrMd.png

I'm working on code to allow me to save these graphs as they are, and also to save them as Library Nodes. Saving a graph as a Library Node works slightly differently than just saving the node chain. Saving it as a Library Node allows you to import the entire thing as a single 'black box' node. In the above graph, I have a fairly complex setup with a cellular function distorted by a couple of billow fractals. In the upper left corner are some constant and seed nodes, explicitly declared. Each node has a number of inputs that can receive a connection. If there is no connection, when the graph is traversed to build the function, those inputs are 'hardwired' to the constant value they are set to. But if you wire up an explicit seed or constant node to an input, then when the graph is saved as a Library Node, those explicit constants/seeds will be converted to the input parameters for a custom node representing the function. For example, the custom node for the above graph looks like this:

h5X6wLF.png

Any parameter to which a constant node was attached is now tweakable, while the rest of the graph node is an internal structure that the user can not edit. By linking the desired inputs with a constant or seed node, they become the customizable inputs of a new node type.

(A note on the difference between Constant and Seed. They are basically the same thing: a number. Any input can receive either a constant or a seed or any chain of constants, seeds, and functions. However, there are special function types such as Seeder and Fractal which can iterate a function graph and modify the value of any seed functions. This is used, for example, to re-seed the various octaves of a fractal with different seeds to use different noise patterns. Seeder lets you re-use a node or node chain with different seeds for each use. Only nodes that are marked as Seed will be altered.)

With the node graph library functionality, it will be possible to construct a node graph and save it for later, useful for certain commonly-used patterns that are time-consuming to set up, which pretty much describes any node graph using domain turbulence.

With that node chain in hand, it is easy enough to output the function to the heightmap:

eBOmKpe.jpg

Then you can quickly apply the erosion filter to it:

D2oAHnC.jpg

Follow that up with a quick Cliffify filter to set cliffs:

WhASGoV.jpg

And finish it off with a cavity map filter to place sediment in the cavities:

pzY6ey4.jpg

The editor now lets you zoom the camera all the way in with the scroll wheel, then when on the ground you can use WASD to rove around the map seeing what it looks like from the ground.

v0IOTtR.jpg

Still lots to do on this, such as, you know, actually saving the node graph to file. But already it's pretty fun to play with.

JTippetts

So the last couple days I've been working on something that I've wanted to do for a long time now.

E3Wnvvp.png

I've been building it as part of a terrain editor I've been working on. It's still mostly incomplete, but so far you can create nodes, drag links to link/unlink them, then output them to a greyscale image. In the works, I've got a few node types to implement, and a lot of glue code to write for saving/loading node graphs, hooking them up to generate terrain and terrain layer weights, etc... But it's coming along pretty nicely.

In the process, I have made a few fixes and additions to the noise library, to help with matters. One of the main additions is the Fractal node. The library, of course, has had fractals since the beginning, but the way they were implemented made it tough to allow them in the node graph editor. So instead, I implemented a special function type that can take as input a chain of functions for the layer noise, as well as other functions to provide the fractal parameters. Internally, the function will iterate over the octaves and calculate the noise value. At each octave, the layer function chain is re-seeded with a new seed. This traverses the function graph and sets a new seed for any values of Seed type in the chain. This small change has opened up some easier ways of making fractals.
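Conceptually, the Fractal node behaves something like the sketch below. The Layer interface and names here are made up for illustration; this is not ANL's actual API. The point is simply that the layer chain is re-seeded before each octave is sampled and accumulated:

// A sketch, not the library's real API: a "layer" is anything that can be
// re-seeded and evaluated. The fractal re-seeds the whole layer chain at each octave.
struct Layer
{
    virtual void Reseed(unsigned seed) = 0;          // re-seed every Seed node in the chain
    virtual double Evaluate(double x, double y) = 0; // sample the chain
    virtual ~Layer() = default;
};

double EvaluateFractal(Layer& layer, unsigned baseSeed, int octaves,
                       double x, double y, double lacunarity = 2.0, double gain = 0.5)
{
    double sum = 0.0, amplitude = 1.0, frequency = 1.0;
    for (int octave = 0; octave < octaves; ++octave)
    {
        layer.Reseed(baseSeed + octave);   // each octave gets its own noise pattern
        sum += layer.Evaluate(x * frequency, y * frequency) * amplitude;
        frequency *= lacunarity;
        amplitude *= gain;
    }
    return sum;
}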

Additionally, I have added a Seeder module, which implements this internal re-seeding operation. I have implemented also a Randomize module. The randomize module takes a seed or seed chain, and uses it to randomize an output value from within a range.

It's kinda weird to talk about, so instead I'll demonstrate using a real-world solution to a common problem. Here is a fractal image of ridged noise:

nZYkXO3.png

This fractal is generated by layering successive layers, where the input is a Value noise basis passed through an Abs operation before being summed with previous layers. It creates the Ridged Multifractal variant, but you can see in that image that there are grid-based artifacts. These kinds of artifacts are common; each layer of noise is generated on a grid, and so the artifacts tend to sort of amplify as the fractal is built. You can mitigate it somewhat using different values for lacunarity (which is the value that scales the frequency for each successive layer) but that can only slightly reduce the visible appearance of artifacts, not eliminate them altogether. A long time ago, I advocated for applying a randomized axial rotation to each layer of noise, rotating the noise function around a specifiable axis in order to un-align the individual grid bases, and prevent the grid biases from amplifying one another. Previously, these rotations were built directly into the fractals, but that is no longer necessary. The new Randomizer and Fractal nodes now make this easy to incorporate in a more flexible way (or eliminate, if you really want artifacts):

49DwnX2.png

In this screenshot, you can see that I have set up a fractal node, and for the layer source I specify a gradient basis fed through an Abs function. That function in turn feeds a RotateDomain node, which feeds the layer input of the fractal. Attached to the angle input on the fractal is a Randomize node, that randomizes a value in the range 0 to 3. The result is this:

fbnNdcT.png

You can see that the grid artifacts are gone. The fractal iterates the layers, and at each layer it re-seeds the Layer input chain, so any node marked as a seed is re-set to a new value each time. That means the Gradient basis node (which has a seed) is re-seeded, and the Randomize node that specifies the rotation angle is re-seeded: each noise layer generates a different pattern than the other layers, and is rotated by a different amount around the Z axis. This misaligns the grid biases, preventing them from amplifying each other, and gives a nice non-artifacty fractal pattern.
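In 2D terms, the rotation trick amounts to rotating each layer's sample coordinates by its own random angle before sampling the basis. A minimal sketch of that idea (not the library's actual RotateDomain implementation):

#include <cmath>

// Rotate the 2D sample position about the origin (i.e., around the Z axis)
// before sampling a layer's basis. Giving each layer a different angle
// mis-aligns the layers' grid lattices so their biases don't stack up.
inline void RotateDomain2D(double angle, double x, double y,
                           double& outX, double& outY)
{
    double c = std::cos(angle), s = std::sin(angle);
    outX = x * c - y * s;
    outY = x * s + y * c;
}

// Usage inside an octave loop (angle drawn from a per-octave random value):
//   double rx, ry;
//   RotateDomain2D(randomAngle, x * frequency, y * frequency, rx, ry);
//   sum += basis(rx, ry) * amplitude;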

I still have quite a bit to do in implementing the rest of the noise functions in ANL. But there you go, that's what I'm working on right now.

JTippetts

It's been a while since I posted. I've been kinda reluctant to just post more project updates on GC, since that kind of thing gets a little tedious and pretentious sometimes. "Hey, guys, take five minutes out of your precious time to look at this screenshot of an incremental improvement to creature AI, that doesn't really show anything that I've actually been working on because that kind of thing doesn't show up well in static screenshots! But ain't it SHINY?!"

GC is still in progress; at least, as much as it can be given all the other stuff going on. Kid starting kindergarten, camping for several days in Yellowstone, other kid starting preschool, work (as always), yardwork, working some more on the basement, etc, etc, etc...

While thinking about some further UI upgrades, however, I was doing some reading through old links. I was thinking about health bars, and revisited a link from my bookmarks that dealt with the resource bubble meters in Diablo 3. Diablo 3 was an awful game, still one of my most hated, but damned if Blizz can't make things shiny and pretty. The article at https://simonschreibt.de/gat/diablo-3-resource-bubbles/ speculates a little bit about how the resource bubbles are implemented graphics-wise, and I thought it might be interesting to take a stab at making one, even if ultimately it isn't useful for Goblinson Crusoe.

The article above talks about diving through the D3 assets and finding a circular mesh with distorted UV coordinates that provide the basis for the spherical distortion of the moving texture map. However, I elected to go with the idea mentioned in https://simonschreibt.de/gat/007-legends-the-world/ of using a texture map that encodes the UV coordinates for the spherical distortion. To generate the map, I fired up Blender and created a sphere and a custom Cycles material for it. After some thrashing around, the node setup I ended up with was this:

KXg6Gj1.png

The material takes the normal of the sphere and splits it into X, Y and Z components. It multiplies X and Y by the cosine of Z, then scales and translates the results into the range 0,1 by multiplying by 0.5 and adding 0.5. Then it sets the red channel from the X result, the green channel from Y, and applies the resulting color as an emission material with a power of 1 (which outputs the color without any lighting or shading). The result looks like this:

JnKI4rh.png
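The same encoding is easy enough to reproduce on the CPU if you'd rather not go through Blender. A sketch, assuming a unit hemisphere facing the viewer; it mirrors the math described above, not the actual Blender node graph:

#include <cmath>
#include <cstdint>
#include <vector>

// For each pixel, reconstruct the front-facing sphere normal, multiply X and Y
// by cos(Z), remap to [0,1], and store the results in the red and green channels.
std::vector<std::uint8_t> BakeBubbleUVTexture(int size)
{
    std::vector<std::uint8_t> pixels(size * size * 3, 0);
    for (int y = 0; y < size; ++y)
        for (int x = 0; x < size; ++x)
        {
            double nx = (x + 0.5) / size * 2.0 - 1.0;
            double ny = 1.0 - (y + 0.5) / size * 2.0;   // flip so +Y points up
            double r2 = nx * nx + ny * ny;
            if (r2 > 1.0) continue;                      // outside the sphere: leave black
            double nz = std::sqrt(1.0 - r2);             // front-facing hemisphere normal
            double u = nx * std::cos(nz) * 0.5 + 0.5;
            double v = ny * std::cos(nz) * 0.5 + 0.5;
            std::uint8_t* p = &pixels[(y * size + x) * 3];
            p[0] = (std::uint8_t)(u * 255.0);            // red   = distorted U
            p[1] = (std::uint8_t)(v * 255.0);            // green = distorted V
        }
    return pixels;
}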

The way this texture is used is that the resource bubble is drawn as a rectangular plane, texture mapped across its face from (0,0) in the upper left to (1,1) in the lower right. This texture coordinate is used to sample the bubble UV texture, and the result is used to sample the diffuse color texture. This causes the diffuse color to be distorted as if it were being drawn across the surface of a sphere:

j605OT4.png

6h5nGVr.png

For the diffuse color, I grabbed a random photo of stars, snipped a piece, and made it seamless. To achieve the complex-seeming swirling motion effect, the star map is duplicated 3 times, with each layer being translated by some vector multiplied by elapsed time, and the three layers multiplied together and scaled up.

YXHvAbC.png

You can't see the motion in a still photo, of course, but in real-time it has 3 layers being moved in different directions at different speeds, and the effect is actually quite mesmerizing.

To implement the resource bubble representing different levels, I added a uniform to the shader, Level, that can be specified as any value in the range 0,1. To set the specific level, you set the value of the uniform based on the ratio of the resource's current value to its maximum value; i.e., for mana, you would set the uniform to (CurrentMana / MaximumMana). This level is used with a smoothstep function to cut off the diffuse texture at a given height. Of course, using the smoothstep as-is would create a straight cut-off, so to make it more interesting I provide a noise texture and use it to distort the cut-off function. This noise texture is animated, and the effect is to make the top of the resource fluid look 'frothy'. Also, a clip texture (in a real application, I would probably make this the alpha channel of the normal texture instead of a separate texture) is used to clip the outsides of the bubble to black.

Q4VqoCr.png

Now, I felt that the surface of the fluid, even with animated froth, looked a little plain. And if you look at the D3 resource bubble, there is a 'line' of glowing material on the surface of the fluid that provides emphasis to the level. So to implement that, I used another pair of smoothstep functions, based on the fluid level, to isolate a 'band' centered on the top of the fluid. This band is used to scale the brightness of the fluid, and is distorted by the same noise texture as the fluid surface.

QGudqV1.png

This gives the effect that light is shining on the surface of the liquid, making it stand out.

Finally, I overlaid a texture that contains the reflections/streaks for the glass. To implement this texture, I used the sphere in Blender and applied a glossy shader with some lights. This one was done in haste, but it looks okay regardless.

Kl5AVxf.png

In a real application, I would spend a little more time on that one to make it look better. This glass texture is applied additively to the final fragment color.

BfBTsaK.png

In motion, it looks pretty cool:

 

The final GLSL shader code for Urho3D looks like this:

#include "Uniforms.glsl"
#include "Samplers.glsl"
#include "Transform.glsl"
#include "ScreenPos.glsl"

varying vec2 vTexCoord;
varying vec4 vWorldPos;

uniform sampler2D sGlass0;
uniform sampler2D sClip1;
uniform sampler2D sNoise2;
uniform sampler2D sNormalTex3;
uniform sampler2D sStars4;

uniform float cLevel;

void VS()
{
    mat4 modelMatrix = iModelMatrix;
    vec3 worldPos = GetWorldPos(modelMatrix);
    gl_Position = GetClipPos(worldPos);
    vTexCoord = GetTexCoord(iTexCoord);
    vWorldPos = vec4(worldPos, GetDepth(gl_Position));
}

void PS()
{
    // Invert the level so a full resource corresponds to a low cutoff in UV space.
    float level = 1.0 - cLevel;
    // Spherical distortion: look up the pre-baked UV map, then use it to sample the stars.
    vec2 newuv = texture(sNormalTex3, vTexCoord).xy;
    // Clip texture masks everything outside the bubble to black.
    float clip = texture(sClip1, vTexCoord).x;
    // Animated noise distorts the surface height so the cutoff looks frothy.
    float maskval = vTexCoord.y + texture(sNoise2, vTexCoord + vec2(0.1, 0.3) * cElapsedTimePS).x * 0.05;
    // Cut the fluid off at the current level.
    float mask = smoothstep(level - 0.01, level + 0.01, maskval);
    // Bright band centered on the top of the fluid.
    float glowline = min(smoothstep(level - 0.05, level, maskval),
        smoothstep(level + 0.05, level, maskval)) * clip * 5 + 1;

    // Three scrolling star layers multiplied together and scaled up, brightened by
    // the glow line, with the glass reflection texture added on top.
    gl_FragColor = clip * mask * texture(sStars4, newuv + vec2(0.1, 0) * cElapsedTimePS) *
        texture(sStars4, newuv + vec2(0.01, 0.03) * cElapsedTimePS) *
        texture(sStars4, newuv + vec2(-0.01, -0.02) * cElapsedTimePS) * 4.0 * glowline + texture(sGlass0, vTexCoord);
}

If you want to see it in action, here is a downloadable Urho3D sample (I can't promise it'll stay active for long; I tried to upload it as an attachment, but it kept failing mysteriously):

https://drive.google.com/file/d/0B_HwlEqgWzFbR0R6UzJGUTJTSHM/view?usp=sharing

Github repo: https://github.com/JTippetts/D3ResourceBubbles

On Windows, extract the archive, navigate to the root, and execute the run.bat batch file. It should open up a 256x256 window with the animated resource bubble filling the frame.

 

This little project was a fun diversion while waiting to take my daughter to preschool. I don't tinker with shaders very often anymore, so it was nice to do so. It shows how even cool effects like this can be relatively simple underneath.

JTippetts

Water

screen_2017_7_15_16_11_36.thumb.png.277f247aee4c8ecf132ad352d64af124.png

screen_2017_7_15_16_50_15.thumb.png.44d77f3ca7de9052e10d8eba7d9245d3.png

 

So, I ended up doing the "skirts" method I spoke of in the last post. And in conjunction with the default Urho3D water shader (with a few small tweaks, and more to come to eliminate artifacts on the corners of the water blocks) it actually looks pretty decent. The water animates a noise texture that ripples the ground beneath. It also uses a reflection texture (which I have set to black at the moment) if desired. I might tweak the water shader further, but for now I'm happy with it. I've also got all the small issues sorted out from the change to multi-level water. It wasn't a large change, but I was surprised at how many different parts of the code worked from the assumption that water was determined by elevation < 9. I thought I had contained it better than that, but I did get all the various spellcasting, pathfinding and object spawning oddities worked out.

JTippetts

So, I'm trying to figure out how to do water. Right now, I am doing water the "brain dead" way; any tile below a certain height is water, and water is created as a simple hexagonal plane with a partially transparent blue material applied. It works okay, but the ultimate end goal is to have water play a more involved role in the landscape. I'd like to have rivers, waterfalls, etc... and that means that I need to rethink how I do it. I'm trying to come up with ideas for the geometry of water.

Here is a shot of how the current water system might look if I use it unmodified for multi-level water:

water1.png.a769e6226e009cbc7c0b5893cb4980c6.png

Clearly, I need some sort of geometry to tie the pieces together. My first thought is to create a sort of "skirt" piece that is attached to the side of a water hex if that water hex has neighbors whose water height is lower than its own. Might end up looking something like this:

water2.png.006bcfd18fc640ad67d4537dfcaa1003.png
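In code terms, the rule I have in mind is roughly the following (hypothetical types and names, not actual GC code): walk each water hex's six edges and emit a skirt wherever the neighbor's water surface sits lower.

#include <vector>

struct WaterHex { int q, r; float waterHeight; };
struct Skirt { int q, r, edge; float topHeight, bottomHeight; };

// neighborHeight(q, r, edge) is assumed to return the water height of the hex
// across 'edge', or 0 if that neighbor has no water at all.
template <typename NeighborHeightFn>
std::vector<Skirt> BuildSkirts(const std::vector<WaterHex>& hexes, NeighborHeightFn neighborHeight)
{
    std::vector<Skirt> skirts;
    for (const WaterHex& h : hexes)
        for (int edge = 0; edge < 6; ++edge)
        {
            float nh = neighborHeight(h.q, h.r, edge);
            if (nh < h.waterHeight)                       // lower neighbor: this edge needs a skirt
                skirts.push_back({h.q, h.r, edge, h.waterHeight, nh});
        }
    return skirts;
}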

The issue with that, of course, is that I have to oversize the skirts to avoid Z-fighting with the underlying ground hex, and that means that skirt pieces overlap with adjacent skirt pieces on different hexes and with the water hex of the lower water levels. In combination with the alpha-blending, this creates bands or regions of darker color where two pieces of water blend together. I could use waterfall particle systems to help obscure this overlap, I think. Alternatively, I could use a solid material instead of a partially transparent one:

water3.png.765e6ea2b2774a08dd101fcd8a5ae9df.png

I don't like the look of it, though. Large areas of flat water look terrible. Granted, there will need to be improvements to the actual water material to make it look better regardless of how I handle the geometry, but for now I'm sorta stuck on how best to do this. I do have some ideas as to how I could perform the geometry stitching of the skirts to minimize overlap, but it'll take the creation of special pieces, and some special-case code. Not too difficult, I suppose, but still seems kinda messy.

JTippetts

I'm on vacation in California, which means I kinda have some time to work on stuff, but it's split up by blocks of frantic activity. I'll tweak a few things, then head off to Knott's Berry Farm to burn in the sun while the kids ride on rides too small for me. Then I'll fiddle a few more things, then take the kids swimming. So while I'm getting some stuff done, it's all in a sort of disorganized tangle. I did decide to upload a new gameplay video. Once again, it's pretty poor quality. (Some day, I'll own a rig decent enough to make high-quality videos. But that day is not today.)

 

It's about 10 minutes of random gameplay. I was kinda distracted while playing by a 5 year old who wanted my help building electronic circuits with a snap kit toy he recently got, so there are some pauses. Also, there is still stuttering in some spots, which won't be cured until I fully convert everything from Lua to C++. (That's an ongoing project, but I am getting quite a bit of progress done while here in Cali.)

The stuttering is from the Lua garbage collector, which has been an ongoing problem with the Lua bindings to Urho3D. Throughout development of this game it has been at times worse and at times better. A recent change (unsure which one) made it worse again, enough that I'm finally going to just convert everything except UI stuff to C++ components.

 

JTippetts

screen_2017_6_19_3_9_51.thumb.png.d9d97850bf6642b5bd6ad60f0257c448.png

 

It's been a little bit of an art push lately. First of all, I started work on a dungeon tile set. Up there is my first stab at it. I created a couple different wall variations, a door and a hex-pattern tile ground texture (used in conjunction with existing sand and gravel textures). Don't have anything in the way of doodads or decorations yet. Doors are still kinda tricky. I had a conversation with riuthamus about it. The gist of doors in this game is that a door needs to work with any configuration of walls around it, so trying to do artwork for a traditional-looking door and choosing alternates to match up with the surrounding walls was getting to be too difficult. I had already implemented doors some time ago that utilize portcullis-like behavior: when you open the door, it slides into the ground. Closing it brings it back up again. The door in the above shot works the same. The issue lies in creating a graphic that looks door-like, even though it doesn't look like a traditional door. I'm not sure there's a perfect solution for it. But at least when you hover over a door, a popup appears with the label 'Door'. Hopefully that's enough of a clue for people to figure it out.

 

I've also started experimenting with MakeHuman. The ogre in this shot is a result of that experiment:

screen_2017_6_19_4_7_15.thumb.png.0d9386cab37750cfb6a4bc9309fa4e62.png

 

It was a quick effort. I just used some of the clothes provided with MakeHuman (hence the jeans and button-up shirt, articles of clothing that would be quite difficult to obtain in the Goblinson Crusoe universe) and ran some of the various sliders for the mesh deformation all the way to 11 to try to get an ogre-ish form. The experiment worked pretty well, I think, certainly well enough to warrant further experimentation. As a bonus, MH will export a skeleton rig to fit the mesh, though I still have to rig it with IK and animate. As it turns out, I'm still terrible at animating. Who knew?

 

I spent some more time doing miscellaneous cleanup. Fixed a bug that caused creatures to die multiple times if they died in a round with multiple dots on them. (They would die once for each dot because I wasn't checking for isdead in between dot applications.) Formalized the construction of summoning spells, so that a flashy spell effect is played when things are summoned. Added some flashy effects for things dying. Moved and rearranged some data tables again. You know, crazy shit like that.

 

JTippetts

So, lately I've been working on the DoTs/HoTs mentioned in the previous entry, as well as the framework for ground effects: ignited ground, lava, etc... And in the process I have yet again stumbled upon exactly how weird a turn-based game really is; or, at least, one done in the manner in which I am making this one.


Here's the setup: Goblinson Crusoe is built on an Action Points-based turn system. A Turn consists of 10 Rounds. Each Turn, all entities that want to act are gathered into a list, then each are given an opportunity to act for 10 Rounds. Moving, casting spells, attacking, harvesting loot, etc... these all consume Action Points until all points are used up, or until the unit chooses to end its turn early, at which point the next unit in line is given the chance to act. When all units have acted, the next Turn is started. So, while the units perform their actions consecutively, the abstraction is that these actions are ostensibly happening "at the same time". That's the weirdness of a turn-based game. You take a turn, then I take a turn, but it's supposed to be like these turns happen all at the same time. The abstraction really breaks down upon analysis, and there's really no way to make it better outside of moving it to a real-time simulation.


One way that this flawed abstraction bit me in the ass is with these DoTs/HoTs and ground effects. You see, in this system, time only really passes for a unit if that unit actually acts during a Turn. For example, control passes to the player who moves 10 hexes. That's 10 rounds worth of time, and at each step an event is fired to indicate a round has passed. DoTs, HoTs and time-based ground effects wait for these events in order to know when to do their thing. After all, you can't have an over-time effect without the "over time" part.


The problem I ran into is that while mobs such as enemies, GC, GC's minions, etc... are all active, acting objects, some things that otherwise can take damage are not. Things such as walls and doors, which have no active combat capability. They block tiles until destroyed, that's it. And the weirdness that resulted is that these units were effectively immune to DoTs. No time passed for them, so no ticks of damage were ever applied.


That's not what I wanted. I mean, obviously, a burn effect should burn a wooden door down.


It took a little bit of figuring to come up with a workaround that didn't completely break the system. The workaround is that walls and doors and such ARE active objects, but they have a different aggro component and an AI controller that does only one thing: ends its turn. The normal aggro component collects damage events and tracks them according to various damage sources, and if any damage was taken or any proximity events occurred, it signals that the unit is ready and willing to act that Turn. The fortification aggro component, however, only tracks DoT/HoT and ground effect events. If any of those are applied to the unit, then the unit wants to act. If they all expire, then the unit no longer wants to act. In the case of fortifications, "acting" means to simply end its turn without doing anything. End Turn will cancel out any remaining Action Points, add them up, and send off a Time Passed event for the amount remaining, meaning that an entire 10 Rounds worth of time will be sent for the unit at once. The result is that if any fortification is hit for periodic damage in a Turn, then the camera will pan briefly to that unit while the damage numbers tick off, then will move on.
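A rough sketch of what that fortification "act" step boils down to (illustrative names, not the actual component code):

// Illustrative sketch of the fortification AI controller described above.
// On its turn it simply ends the turn, converting all remaining Action Points
// into a single "time passed" notification so DoTs/HoTs get their ticks at once.
struct FortificationController
{
    bool wantsToAct = false;   // set by the fortification aggro component whenever
                               // a DoT/HoT or ground effect is applied to the unit

    template <typename SendTimePassedFn>
    void Act(int actionPointsRemaining, SendTimePassedFn sendTimePassed)
    {
        // "Acting" is just ending the turn: cancel the remaining points and
        // report them as elapsed time so periodic effects can tick.
        if (actionPointsRemaining > 0)
            sendTimePassed(actionPointsRemaining);
    }
};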


It doesn't really feel optimal, but I guess it's about the best I can do. The turn-based system in GC is fairly zippy, and simulation speed can be increased if desired, but it's still not great that the player has to spend some few seconds of a combat Turn watching DoTs hit all of the walls engulfed by the last Inferno spell he cast. But still, it seems to work okay.


It's always nice when I get these relatively large, relatively unbroken blocks of time in which to get things done. And it's always sad when they come to an end. Night shifts at work start tonight, and I'm working some extra days in the next couple weeks, so this thing'll probably be put back on the back burner once again. :(

JTippetts

Added a quick text banner on level load, to give the player a hint or reminder of the scenario type. While right now the scenario type is "random collection of enemies scattered across a randomized terrain", eventually I'll have different scenarios such as "boss fight", "base assault", "base defense", "dungeon crawl", etc... This, I hope, will help to provide variety in the kinds of gameplay a player can take on.


A couple days ago, I implemented a DoTs and HoTs system. Damage-over-time, heal-over-time. Spells and effects can apply dots or hots (either timed or permanent) which deal or heal a set amount of one or more specified damage types. The way the system works is that whenever a unit is acting (moving, casting, attacking), the amount of "real" turn time (i.e., actual action points used, not modified amounts used due to speed modifiers) is accumulated and used to apply damage or healing whenever time accumulates to one round or longer. So, as the unit is walking along, periodically there will be damage applied or healing done.
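A minimal sketch of that accumulation rule (illustrative only; it assumes one action point equals one round of time, which is just a stand-in for whatever the real conversion is):

// Over-time effect bookkeeping: accumulate "real" action-point time, and apply
// one tick of damage or healing for every full round's worth that accumulates.
struct OverTimeEffect
{
    static constexpr float PointsPerRound = 1.0f; // assumption: 1 AP == 1 round
    float accumulated = 0.0f;
    float amountPerTick = 5.0f;                   // damage (or healing, if negative)

    template <typename ApplyFn>
    void AdvanceTime(float realPointsSpent, ApplyFn applyTick)
    {
        accumulated += realPointsSpent;
        while (accumulated >= PointsPerRound)
        {
            accumulated -= PointsPerRound;
            applyTick(amountPerTick);             // one tick per full round elapsed
        }
    }
};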


Now, the DoT/HoT system is working pretty well. However, in the process I uncovered another bug in the AI. When I first went to test the system with a test DoT, I noticed that certain units would stop acting correctly after a DoT would hit them. These units would work correctly before the DoT, but once the first damage was applied they would stop moving or acting in any way. Confused, I dug into the code. Turns out, in the Aggro component (which tracks damage sources and is used to figure out who the unit hates the most) I wasn't filtering incoming damage sources based on origin. It wasn't a big deal before, when all damage applied to a unit was external; however, the DoT damage is tagged as being applied by the unit itself. This resulted in the mobs damaging themselves, getting pissed at themselves, and trying to kill themselves using skills and attacks that can't be used on themselves. It was a simple fix, but demonstrative, I think, of the kind of weird shit you have to tackle when doing AI.


Tonight, I rewrote and cleaned up the heuristic function for enemy pathfinding as well. It runs faster, making pathfinding more snappy. I also started tweaking AIs to better handle incidental enemy units encountered while pathing, for example to bust through a barrier or wall owned by Crusoe. The additions and changes make them much more effective at busting through any fortifications GC puts up. I still need to do more work on this, though, as there are still a few oddities. Still, it is nice that GC can no longer sit untouchable by melee units behind a ring of walls.

JTippetts

Lately, I've been doing a lot of cleanup and some more experiments with AI. The combat stats system still isn't nailed down 100%, but I am getting closer. The AI revamp is going pretty well. Most of the behaviors are cleaned up, and enemies are now much more effective at traversing terrain using whatever methods are available, and casting spells that are given to them. Enemies can build bridges to cross water much more effectively (the pathing on previous iterations was broken, but it's fixed now), they can open doors if the doors belong to their faction (making it more effective to 'fort up' and let mobs come and go), and so forth. The AI system much more closely resembles a behavior tree than the random collection of bullshit that it was before, making it much easier to assemble complicated behavior systems from small, manageable parts. Currently in progress is a system to allow selecting attacks based on obstacles encountered while pathing to a target.

Central to the AI system is the pathfinder, which will select a path to approach a selected target. The pathfinder can be tweaked to try to avoid enemies, seek out resource nodes to loot, etc... It is possible that, given the unit's specified heuristic function, along the way the unit will need to attack something that is not the ultimate target, be it an enemy door or wall or another hostile unit. In previous iterations, the mob would use melee strikes only to assault these incidental objects. I'm currently working on sub-behaviors to allow the mob to more intelligently select attacks or spells to use. For example, wooden walls will probably fall faster to fire-based attacks than they will to the pounding of tiny little goblin fists.

IXDygR1.jpg

Some of the folks in the chat have been hassling me to release something that they can test out, so I'm working toward that. It won't be pretty, and there really still isn't much 'game' here. But what I have so far is still a pretty interesting taste of what the final game will hopefully be like. Currently, the test level is populated by two types of enemies: melee swordsmen (green gobbos wielding green fiery swords) and shamans (red gobbos with a staff). The swordsmen pretty much will only approach and attack with a melee hit. The shamans are much more interesting, in that they have a single-target heal, an AoE heal, and a summon spell. They will first attempt to find a bro to heal, and if nobody needs healing they'll either approach GC and rain fireballs on his head or they will summon in Choppers to attack. Choppers are tiny 1/2 scale swordsmen with low hitpoints and high melee damage. GC can kill them quickly, but if the shamans aren't taken care of, the Choppers' numbers may soon become untenable.


U2rD09X.jpg

JTippetts

I haven't posted about GC in a while. I have done work (most of it regarding the combat system) but a lot of the work is still "on paper" as opposed to "in the game".

The performance of the game has been a concern of mine for quite some time now. Granted, my dev machine is not a powerhouse. In truth, it is the polar opposite of powerhouse, unless your definition of "powerhouse" is "HP laptop from Costco with the finest integrated onboard Intel graphics." If that's your definition of powerhouse, though, you have worse problems than I do.

Anyway, the performance is crap. I mean, really. It's crap. I added the grass model you see in the shot above, and my framerate plunged to, like, 11. Sometimes 8. (This is in borderless full-screen, so 1366x768 on my lump of clay.) As you can imagine, testing out combat configurations and systems at these rates is frustrating, so often I'll disable vegetation and replace the hex tile material with plain white when testing that stuff. Still, in the back of my mind lurks the thought that it just isn't performant enough. People with roughly my system specs should be able to play a game of this level of graphical fidelity, and expect at least a somewhat decent level of performance.

I had fastcall22 do a test on his computer, and he said he was getting a pretty consistent 144 fps at 2560x1440 resolution. He's got an Nvidia GTX 1070 card. So, right now, about the best I can say is that depending on your system specs, you can expect somewhere between 5 and 144 fps playing my game. That's just... I don't know. Kinda scary.

What's the thought on ignoring shitty laptops from Costco with integrated Intel graphics? I mean, I know they're garbage. But still... 5 fps, on a machine purchased in 2015. Thoughts?

The alternative would be to scrap the quad-planar tile material and use traditional models, but everyone I talk to (including myself; yes, I talk to myself) likes the quad-planar tiles. They're kinda cool, and certainly unique. I don't really want to give them up. But if it comes down to that, or fielding a shitload of complaints about crappy performance, then I'd say I need to do what I have to. Assuming this thing ever releases, anyway, which given my glacial pace of development is looking highly doubtful.

JTippetts

If I were to go back and remake Golem today, what would I do? Where would I start? As I have mentioned in previous entries, I would almost certainly use an engine now instead of rolling my own. (Although, to be fair, back then there wasn't nearly the wide range of available engines to do this stuff with, so rolling your own was almost a given.) Without having to worry about the low level details, I would be free to worry more about the stuff that makes it an actual game. I've become a better programmer, so I like to think that I would make fewer mistakes now, mistakes that would turn the game into spaghetti. Still, though, even with all that I've learned in the last couple decades about game making, I think I would probably start Golem now with the same thing I started with back then: drawing the ground.

Think about it: in an ARPG like Diablo, you spend a LOT of time looking at the ground. Camera above the ground looking down, dude walking around. The ground fills the frame nearly 100% of the time. To me, poring over screenshots of Diablo and Diablo 2, it seemed the natural place to start. Draw the ground, then figure out what kinds of other stuff to draw on top of it. In 1997, my options for drawing the ground were somewhat limited.

Then...



When I first read Designing Isometric Game Environments by Nate Goudie in a print issue of Dr. Dobbs Journal in 1997, I had already played Diablo 1. I had also already created a number of small prototype tile-based RPGs, starting in the late 80s. I had dipped into assembly language programming in the early 90s, to write "fast" tile blitting routines. I had a basic understanding of how tile maps were supposed to work, so I took what I knew and what I read in that isometric article, and I dove in.

Golem went through various different stages as I learned about tiles and tilemaps. From base tiles with no transitions, to transition permutations, to alpha-blended transitions for greater flexibility. I spent a lot of time looking at screenshots, playing with paint programs, trying to hand-draw some dungeon floors. I remember in the early 90s I had found a program for DOS called NeoPaint, and I was still using it in the late 90s to try to draw dungeon floors, right up until I discovered GIMP. I struggled to try to reproduce Diablo and especially Diablo 2, without really knowing or understanding how such things were done. At some point, I had the epiphany that they most likely created their tiles from photographic sources. Digital photography was starting to take off, and I remember stumbling on Mayang's textures in the early 2000s. Most of the tiles in the Golem screenshots still in existence were created from texture sources downloaded from Mayang's. I remember just being floored at the number of textures available there, and during that time I really worked on learning various editing techniques to make tiles from photo sources, to make them seamless and create transitions.

In the beginning, all of the render and blit operations were written in assembly. The first generation of Golem map rendering had its roots in a tile map system I wrote in the early 90s for DOS. At that time, my computing budget was limited. Hard drive space was at a premium, all drawing had to be done in assembly language to get the kind of performance a game required. With the switch to isometric in 97, I devised a very simple run-length encoding scheme to encode the graphic tiles. See, the isometric tiles are diamond shaped:

XhAUIOQ.png

From the very first, I understood how wasteful this configuration was. All that empty space, all those transparent pixels. My early thought was to encode the image so that only the opaque pixels (plus some book-keeping stuff for row offsets) were stored. That saved a lot of disk space, plus I wrote custom assembly routines to draw these RLE sprites with pretty decent performance.
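For illustration, here is a minimal sketch of that style of row-based RLE (the original assembly format is long gone, so the exact layout here is just an assumption): each row stores a skip count, a run length, and then only the opaque pixels.

#include <cstdint>
#include <vector>

// Per row: how many transparent pixels to skip, how many opaque pixels follow,
// then the opaque pixels themselves. Assumes at most one opaque run per row,
// which holds for a solid diamond-shaped tile.
struct RLERow { std::uint16_t skip, count; std::vector<std::uint32_t> pixels; };

std::vector<RLERow> EncodeRLE(const std::vector<std::uint32_t>& image,
                              int width, int height, std::uint32_t transparent)
{
    std::vector<RLERow> rows(height);
    for (int y = 0; y < height; ++y)
    {
        const std::uint32_t* row = &image[y * width];
        int first = 0, last = width;
        while (first < width && row[first] == transparent) ++first;        // leading gap
        while (last > first && row[last - 1] == transparent) --last;       // trailing gap
        rows[y].skip = (std::uint16_t)first;
        rows[y].count = (std::uint16_t)(last - first);
        rows[y].pixels.assign(row + first, row + last);
    }
    return rows;
}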

For the first iteration, I had no idea about transitions. I had played a lot of JRPGs on Nintendo systems, so I was pretty familiar with tile maps. All those first Golem tilemaps were transitionless. I don't have any of those screenshots anymore, but it was essentially like this:

x5NImTo.png

Those are newer textures; the ones I made for Golem initially were... pretty terrible. I thrashed around a lot in those days, trying to figure out techniques and learn tricks. Eventually, I read about tilemap transitions (probably in the gamedev.net resource section) and started creating permutations. Dirt to grass, dirt to water, grass to water, etc... If you've ever done transition permutations like that, you know that the number of permutations can explode pretty rapidly. Thus, this later revision of Golem's map rendering had to carefully constrain terrain placement. Gone were the days when I could just throw whatever tilemap configuration at the renderer I wanted; I had to be careful in placing types so that, for example, I didn't end up with a dirt tile that bordered grass on one side and water on another. That just wouldn't work. Still, even given the limitations of tile permutations like that, the simple addition of transitions was a pretty good leap forward in quality. The result was something like this:

iXrpwty.png

Eventually, though, that limitation started to really chafe. Just... you know, sometimes you really want to have a dirt tile with grass on one side and water on the other. You know? So I learned some more tricks, read some more articles. Somewhere around this time, I switched to DirectDraw for rendering, then moved on to Direct3D. We're talking early versions here. I kinda wanna say I started with DirectX 4. DirectX had been a thing for a few years, and I probably stuck with the old assembly stuff for far too long, but you gotta understand, kids. Back then, the internet was still mostly just sacks of bytes carried around on the backs of mules. You kids and your reddits and your facespaces, you just don't know what hardship is. Hardship is connecting to the World Wide Web with a dialup modem, hanging three T-shirts over your computer to try to muffle the connection noise because there's no volume control on it and your roommates are trying to sleep. Hardship is not even knowing that marvelous wonders such as DirectX or OpenGL even exist, because the only real reference you have is Michael Abrash's ponderous tome on graphics programming, with its chapters on Mode X, already a relic of a bygone era by that point.

Anyway, with the switch to modern APIs, things really changed. I could use alpha blending, something my old assembly routines didn't really allow. Now, I could create the transitions as alpha-blended decals overlaid on top of base tiles. Suddenly, I could have that mythical dirt tile bordered by grass and water, without having to create approximately 900,000 different terrain permutations. It was amazing!

Again, those early dev screens are lost. But here is a quick example of how that can look:

b7VEaB8.png

Any number of transitions could be overlaid upon one another to create as complex a transition as was required. Of course, this required me learning a few more tricks on tile creation. (Again, those early tiles were pretty garbage, though I was getting better by that point.)

It was around this time that I started branching out into a lot of different areas. I was building the character control stuff, combat stuff, inventory handling and management, etc... I was learning how to model in 3D, using the amazing Blender software that I had discovered. Blender was in its infancy, having just been released as open source by a crowd-funding campaign. But even then, it was pretty powerful and it opened up a LOT of options and avenues for me. I was learning how to create walls and gates and skulls and creatures. It was an amazing time.

I started to dislike how the tile transition schemes were plagued by the appearance of grid-based artifacts. You could see the tile-basedness of it. I let that ride for a long time, though. At some point later (after I had already abandoned Golem) I started experimenting with decal-based splatting systems that would splat terrain and terrain decorations down on top of a terrain, randomizing locations of splats to mitigate the grid constraints. I could lay down a base terrain, then splat decoration on top of it using alpha-masked splats.

TVxWp0Q.png

With the rapidly expanding hard drive sizes and graphics card capabilities, the RLE system I had encoded my stuff in all along quickly became a liability. I didn't really need to optimize for hard drive space anymore, and it required an extraneous step of unpacking the encoded sprite to an OpenGL texture. I never did replace that system; I just worked around it. But since I had plenty of space, I could now do tile variations: different variations of a single terrain type that could be mixed and matched to eliminate the appearance of repetition. Even just having 2 variations of a given type made a large difference:

http://i.imgur.com/2HpPN1a.png

Adding more variations only made it better.

And that's pretty much where I left it all off. Using alpha-blended transitions or detail splats to hide repetition, texture variations, etc... I was able to make some pretty decent stuff, slightly approaching what Diablo 2 accomplished (if with nowhere near the artistic skill). Unfortunately, this was the end. The awkwardness of the graphics pipeline, combined with the general shittiness of the codebase, killed the project. Eventually, my old hardware was upgraded to new and the Golem was not copied over. Pretty soon, it was nothing but rotting bits on a disconnected harddrive in the bottom of a dusty, spider-webbed cardboard box.

Now...



So, back to the question of if I were creating Golem now, what would I do? First and foremost, I would ditch the tile-based transition scheme altogether. Using a custom shader to blend different terrain types together using a blend texture is the way I would go. It's what I've been using in my Goblinson Crusoe project for a long time now. Using such a scheme has quite a few advantages. The grid is no longer as large a problem. There is some repetition, due to using tiling textures, but the actual transition configurations are no longer tile-bound. A couple different base textures, a map to blend them together, and some revised ground terrain generation rules for the random generator and you're in business:

7AHEqEq.jpg

Couple it with the decal detail splatting to add some decorations, and you can get some really nice results with relatively low performance cost.

The base version allows 5 terrain types (black in the blend map is the base, R, G, B and A channels control the upper layers). But you could throw another blend texture in there for 9 types if desired. Texture arrays make it easy, since you can bind as many textures as you need.
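The blend itself is just a weighted sum of the layer samples. Here is a CPU-side sketch of the math the shader performs, assuming the base layer simply fills whatever weight the R/G/B/A channels leave over:

#include <algorithm>

struct Color { float r, g, b; };

// Blend five terrain layers using one RGBA blend-map sample. In the real thing
// this runs in the fragment shader with the layer textures sampled per-pixel.
Color BlendTerrain(const Color layers[5], float blendR, float blendG, float blendB, float blendA)
{
    float w[5];
    w[1] = blendR; w[2] = blendG; w[3] = blendB; w[4] = blendA;
    w[0] = std::max(0.0f, 1.0f - (w[1] + w[2] + w[3] + w[4]));  // base layer gets the remainder
    Color out{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 5; ++i)
    {
        out.r += layers[i].r * w[i];
        out.g += layers[i].g * w[i];
        out.b += layers[i].b * w[i];
    }
    return out;
}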

I put together a quick test of it. The funny thing is that it only took me about an hour to get the framework up, and implement a quick randomly generated ground terrain and have it scroll around with the WASD keys. The power of a pre-existing engine, coupled with the countless hours I spent up to that point learning about how all this shit works and writing shaders and camera code for other projects. What took me literally years in the early days to accomplish, I can now accomplish in an afternoon and with better results besides. Such is progress.

If I were to make Golem now, this is how I would start.

JTippetts

Golem

In the previous post, I commented how I didn't see any use in re-factoring and re-building Golem, the way evolutional is re-factoring Manta-X. I'd like to elaborate on that a little bit.

It's not that I don't find any value in the game itself; Golem is a game that I most definitely would like to revisit at some point. I feel like I had a lot of good ideas with it, and I liked the setting and story I was writing for it. It was fun to play, as far as I got with it, blending some of my favorite aspects of Diablo and the various roguelikes I was playing at the time. I could certainly see myself going back to it some day.

However, that doesn't mean that I have any need to try to refactor it as it stands. You see, refactoring that thing is a trap. A warm, cozy, comfortable trap. I could jump in and start refactoring. I've got a thousand ideas even now, as I write, for what I could do to start improving things. There are things I would do better, things I could rewrite, etc... I've got an idea for a message passing scheme that... well, let's just say that this is how it starts. I spent decades of my life in this trap. Decades. Call it framework-itis, if you want. Call it engine-sickness. Whatever. It's a trap.

I have started to work on countless framework/engine projects in the past. I've written numerous prototypes, demonstrating some new aspect of engine technology that I hadn't explored yet. I've written object frameworks, message passing schemes, resource handlers. There is always a reason for me to jump back into that mess, but the thing is I already know how that is going to end. I already know how my time will be wasted.

I've mentioned before, I'm not much of a software engineer. I'm better than I once was, certainly, which just makes it more tempting to say, "I can do better this time." I could dive in, write some framework code, munge around with more modern OpenGL versions, build a new object framework, yadda, yadda, yadda. But eventually I'm going to run into some roadblock. Some poorly designed bit, some system I lack the knowledge or skill to architect gracefully. The spaghetti-ing will start. I'll begin running headlong into limitations of the engine, I'll start kludging in bits to make something work. Before you know it, months or years have passed and I am right back where I was for just so many years of my life: writing engine code, rather than writing a game, and getting bogged down in a mess of code that is spiraling out of control. That's how it happens with me. It's a road I've been down before. I've never been a good-enough programmer to avoid the trap, and I don't think that I ever will be.

It's already started, you see. I've got a project skeleton in my projects folder right now. I've got some new libraries I've never used before copied into that project, libraries that I am itching to get a feel for and learn about. The skeleton project builds without errors or warnings; it opens a window, loads some shaders and draws a red/green/blue triangle. I've copied over the source files from Golem. I've started writing a tool to un-crunch the data files in order to recover the assets and try to make use of them again. It has begun.

I gotta stop the madness.

Over the years, since I joined this site (officially in 2004, unofficially somewhat before then, in the days of the splash screen), I have come into contact with a lot of really talented and skilled programmers. evolutional has been hugely influential, off and on through the years. Washu, Promit, swiftcoder, SiCrane, Fruny, phantom, superpig, hplus0603, Hodgman, Servant, Apoch, eppo.... the list goes on and on. A lot of these people know programming in a way I never will, and I was always a little bit envious of that. But more than anything, over the years I've learned to accept my limitations and understand that I need to play against my strengths. Writing low-level framework code in an elegant and efficient manner, regardless of how I enjoy it, is not one of my strengths and never has been.

So the chief thing I would do differently this time around would be to use an engine. I've been using Urho3D for quite some time, and have become very comfortable with it. However, if I use an engine then automatically, by default, the vast majority of the existing Golem code becomes an unusable non-sequitur. Animation handling? GUI? Object hierarchy? Just no real practical way to shoe-horn it into Urho3D's existing component-based framework. Not without completely re-writing everything, which is pretty much what I would end up doing. The re-factor then becomes a completely new project altogether. It would be less labor-intensive to just scratch everything and start fresh, building directly against the architecture of the engine. It would certainly align more closely with the way I currently think about game development. I have left behind many of the attitudes and beliefs that I held when I was working on Golem. I don't think I could fit in those pants anymore.

But I wouldn't start work on Golem again right now, not with Goblinson Crusoe already on my plate. Nuh uh, no way. One medium-scale RPG is already too much work for one dude. No sense adding another.

JTippetts

Evolutional's recent post about digging through some old code inspired me to take a look at some of my own. I had recently found a copy of the working development folder of my old game, Golem, on a hard drive in a box, and while I've peeked into it out of morbid curiosity, I haven't felt the inclination to dig very deep until now.

Golem was an isometric ARPG heavily inspired by Diablo and, later, Diablo 2. I had grown up playing the original Atari console and, later, the original Nintendo and the Super Nintendo. I didn't get to have a whole lot of time playing PC games. My first PC RPG was Wizardry 4, and holy shit was that a mind-bender. I played JRPGs on the Nintendo, mostly: Final Fantasy, Dragon Warrior, etc... Loved Secret of Mana, loved the Zelda games, loved Illusion of Gaia. But playing Diablo was like an awakening. On PC, I had mostly played Nethack up to that point: turn-based dungeon delving rendered in awkward ASCII graphics. But Diablo... well, Diablo was kinda like Nethack (not nearly so deep, obviously) but it looked good. To my young eye, the isometric graphics of Diablo were stunning. Playing Diablo literally altered the course of my life, in ways that I only really now understand. While I had dabbled in game development since the early days of DOS and the advent of 256 color graphics in ModeX, it wasn't until I played Diablo that I really had in mind a vision of what I wanted to do and make.

9Qg4fkn.jpg

I started work on Golem some time around 1998. Some of it was just scribblings in notebooks made during lunch break at work, some of it was assembly-language tile drawing routines I wrote later, some of it was C code written for the old Borland Turbo C++ compiler. Over the years, it evolved away from assembly and into OpenGL as I learned new technologies. Many of the first posts I made here on gamedev.net were in regards to this game. In a way, I've been working on this game (in some form or another) ever since. I've certainly spent a lot of time on isometric game prototype development, at any rate, even if little of that original code is still in use. But during the summer of 2004 is when Golem in its final (sadly, incomplete) form took shape. I was between jobs, having returned (briefly) from Arizona to Wyoming. I had not yet met my wife, bought a house, had kids, or any of that. Everything I owned, I could fit in the back of my old Toyota 4x4. I slept on a cot in my dad's basement. By day, I hiked the juniper and cedar covered hills just outside of town, and by night I hunched over my keyboard, listening to Soundgarden on repeat and furiously pounding out 69,094 lines of the shittiest code you ever did see.

EDyY4Gt.jpg

Evolutional's retrospective is a thoughtful look at how a maturing developer revisits his old code with an eye toward understanding how he was then, and understanding how he has improved over the years since that code was written. He starts the whole thing off with a look at the project's structure, and while he has criticisms about how he structured things, I gotta say this: Manta-X was a paragon of order and structure compared to this:

7ge0ruW.png

Take a look; drink it in. That's the root level source directory for Golem. Are there source files there? Sure are. Are they mixed in with intermediate build .o files? You betcha. Are there miscellaneous game data scattered all throughout like nuggets of gold? You're damned right there are! In the finest code-spaghetti Inception tradition, there are even archives of the game code directory in the game code directory. At a quick glance, I see bitmap fonts; I see sprite files for various player characters and props; I see tile sets; I see UI graphics. Sure, there's a data folder in there, and sure during later development all game-relevant data was finally sorted and organized into the data/ folder structure (it's actually pretty clean now), but all that old testing and development cruft is still in there, polluting the source tree alongside build artifacts like little nuggets of poo fouling the bed sheets.

Rest assured, I have improved--in this regard, at least.

In the last few hours, I've opened up several source files at random and perused their contents, trying to get back into the mindset of Josh v2004, or VertexNormal as I was known back then. This process can also be summed up as 'what the bloody hell was VertexNormal thinking?'

Elephant in the room here: yes, there are singletons. Hoo boy, are there ever singletons. In the final day, when I have to stand before the judgement bar of God and account for my actions here in this mortal life, the question is gonna be asked, sternly and with great gravity: "Did you ever abuse the singleton pattern?" And since God sees all and I will not be able to lie, squirm or weasel, I'm gonna have to say: "Absolutely, I did. Many, many times." It's not going to be a proud moment. (In this envisioning, God looks a lot how I imagine Washu looks; make of that what you will.)

So, rough idea of how it's organized from a high level. We've got a Map, and a MapBuilder. The map, of course, is the level. The MapBuilder is a utility I wrote to encapsulate (and I use the term encapsulate loosely, and with liberal license) the various things I needed to randomly generate levels. We've got a ScriptContext. We have a MiniMap. We have a GlobalAlloc object allocator. We have an EffectFactory. We have an SDLWrap (a wrapper of all things SDL related, naturally). We have a BuildInterface (whatever that is). We have a ScriptInterface, a PlayerInventory, an ItemFactory, and more. All of them, singletons.

All of them.

Look, I kind of get what VertexNormal was doing here. I get it. Software engineering has never been my strong suit, and to this day I continue to make terrible decisions. And back then, I didn't even know what the term 'dependency injection' meant. But man, you guys. Man.

Some day, someone is gonna write a book on how to kill a project, and under the chapter titled "Dependency Spaghetti Through Singleton Abuse" there will be no text; just a link to the repo of Golem. Every knotty little problem, every little grievance and aggravation that led to me ultimately abandoning Golem, finally and completely, can be traced back to the abuse of singletons. Singletons are everywhere, threading a knotty little web of dependency throughout the entire project. Dive too deeply into the source, and the threads of that web can strangle you.

Now, I'm humorously critical of this project, but the important thing is that when I look at it, I see a kid who was trying his hardest to learn. He was learning about object lifetime management, learning about encapsulation, learning about how to build a game in general. He was making mistakes and working to rectify them, and expanding his knowledge in the process. I don't see any value in doing with this project what evolutional is doing with Manta-X; unlike Manta-X, I don't think there's much of interest to be gained from that here. To refactor this thing would be a large task, and ultimately pointless, since there is little here that I haven't accomplished, more efficiently and more effectively, in later prototypes. But I think it IS kind of a valuable exercise to look at it and think about how I would do things differently now, knowing what I know in the present.

Aside from the singleton abuse, there are a few issues.

1) VertexNormal still didn't have a good grasp on object ownership and lifetime. The existence of so many singletons demonstrates this, but so too does the structure of the various object factories and allocators. Knowing who owns what, who is responsible for what, and who is just using what, is a large part of game engineering.

2) VertexNormal made the mistake of using custom binary asset formats for everything, even in early stages of development. That meant that I was constantly fussing about with building tools to pack/unpack assets, even as asset formats were constantly changing and evolving. A lot of time was wasted maintaining and modifying the various sprite packing utilities. I had the idea of saving memory by storing graphics as run-length encoded files, using a custom RLE scheme I had devised. It was a carryover from the earlier days, when the rendering was done in assembly and the graphics were rendered directly from these RLE data chunks. But with the switch to OpenGL, the pipeline was modified to include a step for unpacking these graphics to textures, meaning that the steps of packing and unpacking were made extraneous. Disk space was cheap by that point, so there was just no point in continuing to fight the binary packing system the way I did. I'm amazed that I implemented as many objects as I did, considering each was a binary packed asset that had to be specially constructed. It is especially ironic that I struggled with the binary asset descriptions, when I had already embedded Lua into the project, and Lua is in itself a fantastic tool for data description. These days, all my assets are described as Lua tables rather than binary blobs, and it's easy enough to compile them to binary format. (There's a small sketch of what I mean by Lua-table asset descriptions just after this list.)

3) VertexNormal was still figuring out how to draw isometric games and make them look good. The tile-basedness of Golem is painfully obvious in every screenshot. Which isn't necessarily a bad thing, but I made some bad choices with how I was drawing the levels that made it difficult to mitigate the tile-basedness. These days, I make greater use of shaders and techniques that can eliminate the grid and increase flexibility. I'll probably try to talk more about this later, but it is remarkable to realize exactly how much time and effort I have spent on figuring out how to draw isometric levels.

4) This was VertexNormal's first introduction to Lua. It's where I learned to love the language, even if back then I had absolutely no friggin idea how to properly make Lua and C++ work together, in a way that helps rather than hinders. I had difficulties with passing objects back and forth across the interface, hence the existence of the ScriptContext class which was used to set various 'current' objects for manipulation by script. I still struggle with exact implementations, but I have a much better grasp on things now. In Golem, it is obvious I had troubles knowing whether Lua or C++ should be responsible for a given task. Many things that I would now implement in script were then implemented in C++, and vice versa. This is an ongoing learning process.
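
As an aside to point 2 up there: here's the sort of thing I mean by describing assets as Lua tables. The structure and field names below are purely illustrative (this is not GC's actual asset format), but the idea is the same: plain, readable data that a tool can always compile down to a binary blob later, if that ever actually matters.

-- hypothetical asset description file, loaded as a plain Lua table
return {
    name   = "goblin_shaman",
    sprite = "sprites/goblin_shaman.png",
    frames = { width = 64, height = 64, count = 8 },
    sounds = { attack = "sfx/fire_whoosh.ogg", death = "sfx/goblin_death.ogg" },
    stats  = { hp = 40, move_speed = 1.0 },
}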

There are other issues, but these are the main takeaways from my high-level strafing run.

It was a fun little excursion. It reminded me of who I was, back when I thought I had, just, a whole lot to say. I was full of energy, full of excitement. There were a thousand articles I was going to write, talking about all the things I was learning and figuring out, articles that I just never seemed to get to because I was always off chasing something else. It was fun to revisit that. Much of that excitement has faded, now. I no longer feel like I really have a lot to say, or that anybody really cares what I have to say anyway. That's not fishing for comments or arguments to the contrary or anything, it's just the truth as I see it. While I still enjoy doing this stuff, it's not my main passion or focus anymore, and hasn't really been for quite some time. Every day that passes, the field/hobby/industry pulls further ahead of me. People are doing stuff now that we could only dream of back in the day. And with how my time is split between work, family, writing, woodworking, hiking, hunting and fishing, etc, etc, etc... Game development just doesn't consume nearly as much of my emotional and mental capacity as it once did. It's a melancholy thought.

JTippetts

As part of the AI and control refactor, I refactored the basic components that provide the turn-basedness for the unit controllers.

Previously, the turn system was controlled by 2 components: the Turn Scheduler, which runs scene-wide and provides a 'heartbeat' on a specified timer; and a per-combatant command queue component, which listens for the heartbeat and synchronizes the unit's actions to that beat. This allows me to control the pace of the gameplay; by changing the interval of the heartbeat, the battle can be sped up or slowed down.

However, having the combat heartbeat controlled globally like this has some disadvantages. The first is that unit actions only take place on heartbeats, so if the player selects a movement command, the unit will not start to move until the start of the next heartbeat. This results in a noticeable hesitation that gets worse as the battle speed is slowed down. The second is that it might be useful to speed up or slow down the update interval on a per-unit basis, depending on certain factors such as the presence of a haste or slow effect on the unit, and a single global heartbeat doesn't really allow for that.

So, I refactored. The Turn Scheduler no longer provides the heartbeat. Rather, the heartbeat is encoded into the command queue executor component, which knows about any haste or slow effects on the unit. In addition to this refactor, I actually implemented haste and slow effects to test it all out.

Haste/Slow comes in 3 different flavors: modify movement speed, modify casting speed, and modify attack speed. Thus, you could have a bonus to cast speed and a penalty to attack speed, and the system portions out action points accordingly. Additionally, the movement/cast/attack speed modifiers are used to speed up or slow down the rate of animation for a particular action, as well as the duration of time it takes to complete the action. This has the benefit that if a unit has a haste spell speeding its movement, it will actually move across the map more quickly. While the animation speed and timings are visual-only, the haste/slow modifiers also affect the number of action points the unit is given.

For example, say you have a movement speed modifier of +25% and a cast speed modifier of +10%. If the action points are set to grant 10 points per unit in a given turn, then the unit will actually receive 12 movement points, 11 casting points, and 10 attack points. Using movement points will reduce the casting and attacking points proportionally. So in a given turn, the unit could take 12 steps, or cast 11 spells that take 1 turn to cast, or make 10 attacks that take 1 turn to perform, or some combination of the three. (Actually, the unit receives 12.5 movement points, but the points are truncated.)
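
Roughly, the point math works out like the sketch below. This is just a Lua illustration of the idea, not the actual executor code, and the names are made up:

-- Grant per-category action points, scaled by haste/slow modifiers and truncated.
local function grantActionPoints(basePoints, speedMods)
    return {
        move   = math.floor(basePoints * (1 + (speedMods.move or 0))),
        cast   = math.floor(basePoints * (1 + (speedMods.cast or 0))),
        attack = math.floor(basePoints * (1 + (speedMods.attack or 0))),
    }
end

-- With the numbers from above (+25% movement, +10% casting, 10 base points):
-- grantActionPoints(10, { move = 0.25, cast = 0.10 })
--   --> move = 12 (12.5 truncated), cast = 11, attack = 10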

pzHZrg8.jpg

The three numbers next to the character portrait indicate how many points the unit can use for movement(green), casting(blue) or attacking(red).

I have made (and am still in the process of refining) some other changes to how player units control. Initially, a movement command was executed by selecting the Move button or pressing the hotkey 'm', which would display the movement preview grid. Left-clicking on the grid would initiate a move to the clicked location and hide the preview grid. Similarly, executing actions such as using a workbench or looting a tree required that the action button be selected or the hotkey be pressed, then the target clicked. It's kind of clunky. Even the simplest of actions, such as clear-cutting a grove for wood resources, requires just way too many clicks and keypresses to execute. So I have begun to refine things a bit. Instead of having to manually select move, you just click to move. If your unit is in a waiting state, then the move will be calculated and executed. Similarly with using/looting/melee attacking. Just click on the thing to loot or use. Also, I have added functionality to bind a skill to the right mouse button, so that skill may be used without the 'select skill, select target, left click' loop. Instead, right-clicking will execute the bound skill automatically at the mouse cursor location. I am still in the process of refining everything, and a lot of the UI hasn't really been updated to reflect these changes, but already the gameplay is much tighter and less clunky, by far.

JTippetts

Highlighting

In between work, sleep, family, home and yard projects, and being sick (all the usual bull-shit cop-out excuses you have all come to expect from me over the years) I have been getting in some work on Goblinson Crusoe.

So, the Great AI Refactor has begun in earnest, and as I suspected it would, it has run its tentacles all throughout the project. Up til now, the enemies have all been running off a script object creatively titled TestRandomWanderAI, which obviously randomly selects between FireShotgun, Melee and Area Heal to cast, and which tries to approach GC if a path presents itself. (The class has, uh... grown, somewhat, since it was first created.) Creation-time conditionals in the class designate some loose class-ish tendencies to favor one or the other of the skills. For example, the red-skinned 'shamans' favor fire and healing, while the blue-skinned 'bruisers' favor melee. I've kept this TestRandomWander component around, because inside it are templates and patterns for many of the actions an AI needs to perform. But it is no longer instanced into any of the enemies. In its place is the more generic AIController component.

AIController supplies a healthy set of generic functions (many of them derived, with refactoring and improvement, from the patterns inside TestRandomWander) that provide common functionality. However, instead of the hard-coded think() method of TestRandomWander, the AIController must be supplied a think() function at creation time. Much of the hard work of abstracting out the think() process was already done; I just needed to formalize the structure of the AIController, and re-factor out some of the behavior into separate think() functions supplied when instancing a mob of a given type. I've now finished specifying the existing shaman, bruiser and looter AIs, as a proof-of-concept for myself, and the new system works well. A great deal of 'test code' and placeholder stuff has vanished, in favor of more flexible, more 'real' code.
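
In a rough Lua sketch, the shape of it is something like this (names and structure are illustrative only, not the actual component):

-- The controller owns the generic helpers; each mob type supplies its own think().
local AIController = {}
AIController.__index = AIController

function AIController.new(node, thinkFn)
    return setmetatable({ node = node, think = thinkFn }, AIController)
end

-- Generic helpers that any think() function can lean on.
function AIController:findPathToward(target) --[[ ... ]] end
function AIController:castSkill(skill, target) --[[ ... ]] end
function AIController:pickNearestEnemy() --[[ ... ]] end

function AIController:onTurn()
    if self.think then self:think() end
end

-- Instancing a mob type is then just a matter of choosing a think function:
-- local shaman  = AIController.new(node, ShamanThink)
-- local bruiser = AIController.new(node, BruiserThink)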

However, some expected issues are rearing their heads. So far, I have been operating on a very loose, ad-hoc 'design' for the combat system, as regards stats (which stats? what do they do?), buffs, how damage is specified in skill casts, and so forth. I have purposely kept the system flexible. I don't, for example, have a specific set of damage types in play. Damage types are possible, and I use quite a few of them. For example, all the test fire skills deal some measure of fire and siege damage. And some of the test equipment confers bonuses such as +fire_attack and -siege_defense, as a proof of concept. But any given skill could specify any kind of arbitrary damage type if desired. You could conceivably have a Fireball that does 16 to 47 base points of "happy silt stuffing" damage. This damage would apply, taking into account whatever "happy silt stuffing" attack and defense bonuses happen to exist. The flexibility has been nice, but I think it's probably time to start nailing down concrete damage types and thus simplifying the combat resolution code.
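
To show what that flexibility looks like in practice, here's a toy version of the idea. The resolution formula, names, and numbers are all made up for illustration; the real combat code does not look like this:

-- Damage is just a table keyed by arbitrary type strings; each type is checked
-- against matching attack/defense bonuses, if any happen to exist.
local function resolveDamage(damage, attacker, defender)
    local total = 0
    for dtype, amount in pairs(damage) do
        local atk = attacker.bonuses[dtype .. "_attack"] or 0
        local def = defender.bonuses[dtype .. "_defense"] or 0
        total = total + math.max(0, amount * (1 + atk) * (1 - def))
    end
    return total
end

-- Any key works, which is both the charm and the problem:
-- resolveDamage({ fire = 12, siege = 4, happy_silt_stuffing = 30 }, caster, target)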

With the addition of the Melee skill some time back, though, I ran into a conflict. When I first started fleshing out the skill system, along with the set of test skills, I imagined the skill system operating such that each skill had a different specification for each rank of the skill, and the damage ranges for the skill would be encoded in the data description for the skill. For example, a Fireball with one single point (assuming a Diablo 2-style skill point advancement system, as I currently envision) might do 3 to 6 damage, while a L2 Fireball might do 7 to 10. Numbers are for illustrative purposes only. I would list the various ranks of Fireball in an array and index it by the caster's Fireball skill level to get the proper one to cast. Segregating it by ranks this way makes it easier to add extra 'juicy bits' to higher-ranked skills: for example, adding additional bounces to the bouncing fireball spell, adding some kind of DoT ignition debuff at higher ranks, etc...
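
The data for that looks roughly like this (same disclaimer as always: illustrative structure and numbers, not the real skill descriptions):

local Fireball = {
    ranks = {
        { min = 3,  max = 6 },                -- 1 point invested
        { min = 7,  max = 10 },               -- 2 points invested
        { min = 12, max = 16, bounces = 1 },  -- higher ranks pick up extra 'juicy bits'
    }
}

-- Index the rank list by the caster's skill level to get the spec to cast from.
local function getRankSpec(skill, level)
    return skill.ranks[math.min(level, #skill.ranks)]
end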

The system has worked well, but Melee plays differently. In fact, in most games of this basic nature, Melee plays differently. It makes sense to encode the base damage of a fireball skill in the skill itself, and buff/debuff the final values based on caster stats. However, the base damage of a melee swing typically is determined by the equipped weapon, rather than the skill itself. And so, just like that, the addition of Melee has caused a necessary re-design. All of a sudden, for one particular special skill, I have to obtain the base damage ranges from somewhere else other than the skill description itself. It's a relatively minor issue on the face of it. Just call (?) and obtain melee ranges, where (?) is... And there we have the next issue. I don't have the formalized equipment system in place yet. I have A equipment system in place, but not THE equipment system. The test weapons in place can do things such as switch stance and apply stat buffs/debuffs, but that's it. There is nowhere I can currently call up to ask "what is my base damage range for a melee swing?" That ? just doesn't exist yet. And so, with the melee controller, I have added another set of hard-coded test values, to my shame. I need that melee-ing bruiser to do SOMETHING when he swings. So I hardcode some values, and add yet more items to the TODO list.

Still, these kinds of small-but-annoying issues help to refine the various interfaces between components that make up the guts of a game like this. A simple question of "where do base range values come from?" can have far-reaching implications for your design. So many factors can affect it--weapon type, character level, skill level, etc--and all of those factors can live in different places depending on the structure of your combat-enabled entities.

Additional things have also cropped up in this refactor. At any given time, I have more units on the battlefield now than I used to, and a lot more 'things' happening around the map. So my unpolished UI is really starting to show its cracks. I made a few enhancements, but I still have far to go. For example, I added the capability to highlight the currently-hovered unit with a halo color of green for friendly units, red for enemies, and yellow for neutrals, in order to help make decisions on the battlefield:

XeQ1e9D.jpg

This really helps, especially since I'm using the same goblin model and skin(s) for so many different unit types, both friend and foe. However, I still have not updated the hover UI panel to show things such as health, class/unit type, etc... all the vital information you need to decide whether to approach a unit or run away. (The UI, I fear, is going to continue to haunt me for a long time.)

Another thing that has cropped up is a big one that I have been putting off for a while: performance. In fact, I've been systematically making this problem worse, with my shader-based shenanigans. But the performance is really starting to become critical now, as I add more units, more lights, and just more stuff in general, to the play field. I've made some preliminary stabs at optimizing things: LoD models and materials, tweaking draw distance to strike a happy medium between tactical awareness and performance, etc... But I am still hitting framerates in the low double digits to high single digits on a depressingly consistent basis, and the visual lag and general unresponsiveness make it feel janky and rough.

That quad-planar shader, with its 32+ texture samples per light in forward rendering, is just one expensive heavyweight bastard, and I really need to start figuring out how to take back some of my framerate. For now, I have 'disabled' the quad-planar texturing on the vertical surfaces of the hex cells; those are now simply wrapped in a standard diffuse+normalmap texture, with only the cap still receiving the quad-planar. The long and short of it is, the quad-planar shader is probably going to go away. In its place, I am working on a shader that blends from the diff+normal of the hex pillar's vertical surfaces to the existing texture blend that occurs on the tops of the hexes. I have decided it probably isn't necessary to permit more than one stone/dirt type on a single hex cell, as long as the 'interesting' blending occurring on the tops remains. And by making that simple adjustment, I can eliminate a metric fuckton of texture samples per light. Even with just the temporary placeholder material with a single rock texture, I have already gained back enough framerate to put me back in the high 20s, low 30s on a fairly consistent basis, and with more LoD refinement I can certainly get even more. Hopefully, I can manage to keep the look-and-feel I have enjoyed so far (if with a few limitations that didn't exist before).

Anyway, just thought I'd drop a few lines to let folks know I am still working on GC, if slowly and extremely painfully.

Edit:

sZhtfqT.jpg

Some fun things in this shot. I added a code path to these goblin types to enable them to build bridges in order to reach Crusoe. These particular guys are Fireball casters, and I gave them Chain Fireball as well. In this shot, 3 of the mobs started building bridges to get to GC, then one of them randomly chose to cast a Chain Fireball at GC. The chain struck the other two goblins, obliterated a number of already-constructed bridge sections, and triggered a 3-way brawl between the caster and his now-pissed-off colleagues.

AI is fun.

JTippetts

Installing

Finally got a day off, with the leisure time to install Linux. Been booting off the USB for days now. During the 'trial' period, I tried out a number of different distros, just to refresh my head on what's out there. I used to be a big Ubuntu fan. Debian-based distros are by far my favorite (long familiarity, most likely). But I didn't really like the new desktop environment in Ubuntu. Unity Desktop it's called, I believe. I had the issue that the taskbar buttons are lined up along the left edge of the screen, and the taskbar is 'sticky', meaning that it makes the mouse pointer stick to it slightly. This causes a hesitation as the mouse crosses from one monitor to the other, and after a mere handful of hours this behavior started to infuriate me beyond what most people would find reasonable. I hated it so bad, just thinking about it makes my hands shake. It's the little things...

I ended up installing Mint. It's Debian-based, had a very easy install with no bugs or glitches. I'm starting to really like the Cinnamon desktop environment. Got my development setup all put together, installed the Radeon drivers, got Urho3D building, got my game building and running, got Blender going, even got that Instant Meshes re-topo tool I talked about working, etc... Everything seems to be mostly hunky-dory (absent an issue with GC that I am working on debugging, specifically with the quad-planar shader I talked about a couple posts back).

I really love how developer-friendly Linux is. It's something I've missed, having been back on Windows for so long. With the package-based software system of Debian, it's extremely easy to download and install all the various developer packages that I love to use.

An added bonus is that my game achieves essentially the same performance as on Windows/D3D11. It's not fantastic; a solid 18 or so fps. But it'll do.

JTippetts

Mint

To whom it may concern,

I am writing this missive while booted into Linux Mint from an ancient USB stick. How I came to this sad, low, lonely state is a tale of hubris and pride. You see, I am a habitual dismisser of Windows system restore disk nag screens. "System restore?" I cry. "Pah! I don't have time for that right now. Begone, thou pesky fiend! Begone!" Pride knotted my heart and furrowed my brow, each time that I swept the nag screen from my cluttered desktop with a snarl. What use have I for system restores? Nothing bad shall happen. Not anytime soon, anyway.

Ah, hubris.

And so, I have a USB stick jutting like a swollen tongue from the front of my ancient HP desktop machine. I peer at a screen that looks as if it came to my desk straight from the darkened hollows of 1998. Once upon a time, I was a fervent user of Linux. Once, I crowed from the rooftops about the virtues of Gentoo and Slack and Debian. Once, I believed it the height of computational prowess to compile my kernel from source, custom configured to my machine through the fervent work of Cheetos-stained fingers dancing madly across the pleasing feel and click of a mechanical keyboard. Once, I thought nothing of peering at a desktop filled with flat grays, bleak and morose, adorned with a deep black terminal window astrewn with command-line spew. But those days are behind me. Far behind.

To a spoiled Windows user, it feels like a strange hell. I have a soft spot in my heart yet for this beast Linux, though. Like that friend, whom I've known for years. The one who ate many dried acrylic paint chips in art class, and who had a penchant for dosing himself with voltage straight from the terminals of the ignition coil in a '79 Chevy. Sometimes, that friend was a pretty cool guy, good for some solid laughs. But other times, no matter what I did, he was determined to wear a pancake like a hat upon his head, and all I could do was ignore him and hope that the febrile gleam in his eye would soon fade. I look upon this bleak, smoke-colored interface through the mist of years, much as I look through the boxes of ancient photographs in the closet to refresh those memories of laughter and howls of pain filling the grease-smeared walls of the auto-mech shop. I look back, peering through the years, and think, "sweet mother of pete, that dude was crazier than a shit-house rat. And what the hell is up with this desktop screen that looks like a pre-release version of Windows 95?"

No sound comes from my speakers. They should work, I think. My machine is old and mainstream, no funky hardware to be found. Alas, David Draiman silently mouths the words of an ancient Simon and Garfunkel tune upon my second browser tab with impassioned, yet forever silent, emotion. Sing loudly, David. I have ears, but I can't hear a friggin thing.

Weep for me, my friends. I have accidentally erased the Firefox shortcut from the bottom taskbar, and I lack the technical knowledge to restore it to its rightful place, just to the right of the tiny little white smudge and the word 'Menu'. Weep for me, for I fear that should this browser window of mine, with its pale gray panes and panels--should it close, I shall never discover how to open it once more. I click, I drag, I munge and fudge and muck-about, to no avail. That taskbar, that hateful and dreary thing with its completely bullshit right-click context menu, glares back at me stubbornly, starkly absent the comforting orange and blue swoosh of the Firefox logo.

I must close this letter now. My second screen is flickering to black. I fear something has gone amiss. No doubt, the kernel is on the verge of a panic. I know the feeling well. I feel the bleakness closing in, and should the kernel panic, I fear that I shall not be far behind.

Farewell for now.

JTippetts

HA! It never fails. As soon as I resolve to work on much-needed logic and AI refactoring, I let myself get distracted with a visual thing again. But this fix was relatively easy, and is something I've been meaning to do for quite a while.

As mentioned in previous entries, the ground tiles in GC use tri-planar texture mapping. If you are not familiar with tri-planar mapping, essentially any object is textured with 3 different terrain textures, each texture being projected along one of the X, Y and Z axes. The textures are blended using 3 blending factors derived from the normal of the object mesh. In triplanar mapping, calculating the blending factors is as simple as taking the X, Y or Z component of the normal. So for any given fragment, the final diffuse color is calculated as abs(normal.x) * texture2D(Texture1, position.zy) + abs(normal.y)*texture2D(Texture2, position.zx) + abs(normal.z)*texture2D(Texture3, position.xy).
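
Written out as plain code (Lua here just for readability; in practice this lives in the shader), the blend weights come straight from the normal, and they are typically renormalized so the three of them sum to 1:

-- n is the (normalized) surface normal of the fragment
local function triplanarWeights(n)
    local wx, wy, wz = math.abs(n.x), math.abs(n.y), math.abs(n.z)
    local sum = wx + wy + wz
    return wx / sum, wy / sum, wz / sum
end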

This kind of texturing would be perfect if Goblinson Crusoe took place on a cube grid, since each major face of a cube gets full blending from one of the 3 projections, and the only mixing of textures takes place on smooth, rounded corners where the normal is transitioning between faces. Here is a quick 2D diagram, showing how two textures (the blue and green lines) are projected along the X and Y axes onto a square.

aUVh86I.png

However, since GC uses a hex grid, the 3-plane projection means that of the 6 vertical faces of a hex tile, 2 of them receive "unblended" texturing, and 4 of them receive "blended" texturing, where the final texture is a mix of two different texture projections. While I use the same texture for the X and Z axis projections, it still becomes obvious that textures are being blended, as much of the detail becomes muddled and mixed. You can see how this happens in this 2D diagram:

yhyXXZw.png

The green texture, projected along the Y axis, ends up projecting onto both of the faces of the hex that point sort-of along the Y axis, but the blue texture also gets projected onto those 2 faces, resulting in a mix of the two. The face that points along X gets only the blue texture on it.

You can see the mixing taking place in this shot:

JHNrAmm.jpg

The faces toward the camera are mixed, because the Z axis projection projects against the corner between the faces, rather than squarely against a face. No matter how you twist the projection, at least 2 faces out of 3 end up mixed. You can see that the texture detail becomes clearest in that shot as the normal rounds the corner, since as the normal transitions it draws nearer and nearer to a "pure" Z axis projection.

The tri-planar mapping is easy, because the X, Y and Z components of a normalized normal are orthogonal to one another. If, for example, Y=1, then Z and X are guaranteed to be 0.

However, it is possible to construct a planar mapping that takes the hex shape into account. By projecting 4 textures against the shape, 1 from the top and 3 from the sides, each of the 3 side projections aligning with 2 of the faces of the hex piece, you reduce or eliminate the mixing that takes place on the surfaces. You can see from this diagram how it should work:

Hh55Gll.png

Each of the textures (red, green and blue) is projected against one of 3 axes that each run through 2 faces. In this case, each face should receive singular mapping from one of the textures, and the only mixing that should take place is on the corners if smooth shading is used.

However, the math for the blending becomes a little bit more difficult now, because the 3 projections are not orthogonal to one another. You can calculate a blending factor for each of the 3 planes by taking the dot product of the normal and the axis that runs through each pair of faces. For example, if the dot product for the axis that receives the red texture is 1, meaning that face should fully receive red, the dot product with the axis that receives blue will still be larger than 0. It will, in fact, be equal to 0.5, meaning that the face that should be fully red ends up with red mixed at 1 and blue mixed at 0.5, resulting in a purple shade instead.

In order to pull this off, you have to do some hackery. The axes for the 3 face orientations end up being (0.5, 0.8660254), (1,0) and (0.5, -0.8660254) (2D case, of course). So you can see that a normal that is aligned with the red or green projection axes, dot-producted with the blue axis, results in 0.5. Similarly, a normal aligned with blue, dot-producted against the red or green axes, will also result in 0.5. So the solution is to make 0.5 the new 0.

What I mean is, you can perform the dot-product calculation for a given axis, then apply the formula (dp-0.5)*2. Thus, when the normal projects fully along the blue axis and 0.5 along the red and green, after the calculation you end up with a blend factor of 1 for blue and 0 for red and green.

In practice, I end up with the following blend factor calculation:

// Axes through the two pairs of angled hex faces (the third face axis is just X)
float3 n1 = float3(0.5, 0, 0.86602540378443864676372317075294);
float3 n3 = float3(0.5, 0, -0.86602540378443864676372317075294);
float3 nrm = normalize(psin.Normal);

float4 blending;
blending.y = abs(nrm.y);                              // top (cap) projection
blending.x = saturate(abs(dot(nrm, n1)) - 0.5) * 2;   // first angled side axis
blending.z = saturate(abs(nrm.x) - 0.5) * 2;          // axis straight along X
blending.w = saturate(abs(dot(nrm, n3)) - 0.5) * 2;   // second angled side axis
blending = normalize(blending);

// Renormalize so the four weights sum to 1
float b = blending.x + blending.y + blending.z + blending.w;
blending = blending / b;

And the result seems to work pretty well:

ofCcw3g.jpg
2dW75ek.jpg

It does add an extra texture and normal lookup for each of the terrain texture types, but even on my machine it doesn't make a real difference on the framerate, and the reduction of texture mixing artifacts is more than worth it.

JTippetts

Creature AI

ulaF0Wy.jpg
CMGEDS9.jpg

I'm in a creature AI phase right now. It's actually a bunch of code that I haven't really touched in a while, and as with all such code I have had to spend some time getting familiar with it again. I still only have a relative handful of skill options, so I haven't really dug in-depth into complex AI behaviors. And a lot of what I have in place is still placeholder or rough initial prototype stuff. Digging through it has revealed a few bugs, though.

qT1DwZ7.jpg

For example, I equipped a few of the semi-unique test mobs (they're big) with the Flame Shotgun spell, and it has revealed some interesting behavior. Flame Shotgun targeting is a cone, and all destroyables in the cone are added to the target list, so any mob that uses it on GC stands a pretty good chance of mowing down whatever trees, bushes, and random buddies happen to stand in the way. However, I still have AI code in the controller to distinguish friend from foe, and that code path will actually prevent a mob from attacking an ally, even though my initial intention was only to discourage such. Consequently, it is possible for two mobs to fire off a Flame Shotgun in GC's general direction, hit one another, and 'get mad' at each other as a result. They will then switch targets, path until they are directly adjacent to one another, and then deadlock as they each attempt to murder the other but can't because the glitch in the controller prevents it.

I fixed the glitch, but it has revealed deeper issues that have me convinced a general refactor is in order. So that'll probably be my task for the next few weeks, in between work and stuff. Also, spring and all that the arrival of spring entails (lawn stuff, garden stuff, house stuff, you know... stuff.)

JTippetts

I spent some time recently working out some bugs with the tri-planar shader. I believe the bugs are fixed now (and it required some fixes in the Urho3D library to do it), but I don't really understand what was causing them in the first place, so I'm still a touch nervous about them. At any rate, I've taken up the Quest Map UI again.

In a previous entry, I introduced the quest map as a 2D scene rendered to a texture and displayed on a scroll atop the Map Table model and in the placeholder interface widget for the quest map. The process that made the shader bugs apparent to me, however, was an experiment to bring back the old world-map look of the original 3D Goblinson Crusoe conversion. Back then, I started playing with representations of hex tiles modeled as hills, mountains, etc... I still like the style and feel of that world map, so I whipped up a quick test to see how I like it as the Quest Map.

eFlB7UP.jpg

gJ5gejI.jpg

In short: I like it quite a bit.

In the first shot, you can see the Map Table object with the scroll map on top. For that rendering of the map scene, I blend the texture with a version of the map scroll texture, whereas in the UI widget the rendered map is un-blended. My vision for it is to have various objects populating the map, representing the missions/quests available in the quest log. Each quest will have an entry in the list to the left. You can click on a quest in the quest list, and the map camera will zoom to that location. (Right now, the camera is just animated with a circle animator).

I will probably do another iteration on the actual tiles, with some updated textures and a more consistent style among tilesets. Pictured up there is a mish-mash of various experiments and tests, with different base textures, so some of the tiles don't really match the others that well.

But all in all, this version of the quest map is the one that I like the best. I'll try to juice it up a bit (better water texturing, aforementioned new tilesets, some animated doodads like seabirds, clouds, splashes in the water, etc...)

JTippetts

232ILDK.jpg
u8d0uzA.jpg

SoTL and MarkS talked me back into using the tri-planar shader I've been using so far. Above are a couple shots of some of the textures I've been working on the past few days.

A couple things:

The textures jibe together a bit better than the previous sets. Previously, I was using photo textures, with normal maps derived from the luminance of the diffuse, which isn't really the best way to go. In these textures, normal maps are derived from the particle bakes, so the normal map works a bit better.

A recent commit to Urho3D provided the ability to use texture arrays in the GL and D3D11 branches, something that has vastly simplified the tri-planar shader. Previously, I was required to combine all of the terrain textures into a single texture atlas, and use sampling trickery to obtain each of the pixel samples. While it worked well enough, it caused quite a bit of blurring to occur at oblique view angles, and often it produced texture seams at oblique angles as well. The new shader eliminates a great many shader instructions and produces a superior visual result. Additionally, I can more easily pick and choose textures for different terrain sets, without having to bake them into a dedicated texture atlas. I can simply specify which textures I want in an XML file. On the downside, it shuts me out from using the D3D9 render branch, as texture arrays are not supported in D3D9.

JTippetts

Textures

qUltO72.jpg

Did a quick little test to see how well some of the textures I made while putting together the previous journal entry "read" in Goblinson Crusoe. Looks pretty good, if you ask me.

JTippetts

Reading alfith's entry on rocks reminded me of a technique I often use for creating dirt, stone, and pebble textures in Blender using particle systems. Since alfith's rocks are fresh in my mind (and near the top in my file history in Blender) I'll go ahead and use them to demonstrate the technique in its entirety.

The Blender Hair particle system type is useful for distributing particles as objects across the surface of a mesh. I have demonstrated how to use this type of system before, to create rendered forest scenes. The idea is basically the same: use a particle system to scatter a bunch of objects (in this case, rocks) across a surface. Since I have a .blend file with some of alfith's rocks in it, that's where I'll start: with some rocks. In this case, I have 3 rocks scaled to 3 different sizes:

ZgaDvv4.png

In case you haven't read alfith's entry yet, go ahead and do so now. I'll wait. Essentially, he takes a default cube in Blender and slices it with several randomized planes, using boolean operators to take chunks from it. After the slicing, he bevels the edges and smooths it a little bit. Easy peasy, and the result is quite convincingly pebble-like. alfith provides the Python script he uses to build the rocks, in case you are curious.

After building a few rocks, I scaled them to different sizes. Then, all the rocks are combined together into a Group:

zYVVJlS.png

The Group functionality is found under the Object Data tab. Select the object, navigate to Object Data, and click Add To Group. If no Group has been created yet, you can create it on the first rock. All subsequent rocks should be added to the same group.

With this Group, you now have a tool for controlling the size distribution of the rocks. When the particle system is instanced, it will select objects randomly from the specified group. In this example case, I use 3 different sizes of rocks, resulting in a basically equal distribution of small, medium and large rocks. If a greater prevalence of small rocks is desired, you can add more small rocks to the group. Similarly if larger rocks are preferred. Some variance is added to the sizing process in the particle system itself, as well.

The rig I use for this purpose looks like this:

WoxP2cT.png

The camera is set to Ortho and oriented so that it sits above the origin on the Z axis, looking directly downward. The render size is set to the desired texture size (say, 512x512) and the Orthographic Scale is set to 1. The particle system is instanced on a plane of size 1x1 that sits centered at the origin, facing upward toward the positive Z axis. The simple explanation of how the particle system is created is as follows:

1) Create a plane. Press 's' to scale, type 0.5. When a plane is first created, it is sized 2x2. If desired, you can set Orthographic Scale to 2 in the camera settings, and not worry about scaling the plane at all. It's all the same either way.

2) With the plane selected, navigate to the Particle tab and create a new particle system. It should look like this:

vPC07fz.png

3) Set the parameters of the particle system. To save time, I'll post some images of a representative setup along with the result, then explain what things are and what to tweak.

re4MkVP.png

The Particle System type is changed from Emitter to Hair. Hair systems are pretty much what they say on the tin: a way to grow hair. In this case, the hair is shaped like rocks. The Advanced toggle is checked, which gives access to additional fields such as the Rotation sub-section, which is also toggled. The system emits from Faces by default, which is the desired behavior; this will cause rocks to emit from across the entire plane. By default, the Number of particles is 1000. You can tweak this to achieve your desired rock scatter density. Under Rotation, the Initial Orientation is changed to Global Z, and the Random slider underneath this is slid all the way to 1. The Random slider underneath the Phase entry is also slid all the way to the right, ending at 2. These sliders, together, give each rock particle a randomized rotation, so that the rocks do not align with one another. Under the Physics section, the Size parameter is set to 0.01, and the Random Size slider is turned all the way up to 1. Finally, under the Render section, the Group button is selected, and the group containing the rocks is chosen. And the result of all of this should look something like this:

JoEbEb7.png

It's possible that you won't be happy with your immediate results, in which case you'll need to tweak. The chief things to tweak are:

1) Number of particles. Crank this up or down to alter the density of rocks.

2) Size distribution of rocks in the group. Add more rocks to alter the distribution.

3) Size parameter under the Physics tab. If all of your rocks are generally too large or too small, rather than scaling the individual rocks in the Group you can modify this scaling parameter.

4) Seed parameter in the header of the particle system. This gives a different pattern of distribution for each seed.

Now, you could go ahead and bake displacement/normal/ao/render from this (well, render once you have assigned materials and rigged some lights) if you want, but it won't be seamless. Also, if you want to create a set of textures that are different-yet-similar, and which tile with one another, this simple setup isn't sufficient. For that, some modifications must be made.

To show the rig I use for seamless texture sets, here is an image.

qW4rE9t.png

It looks a little tricky, so here is an annotated image:

6l0gUO0.png

In this image, you can see that I have instanced multiple differently-sized planes and aligned them with one another. Each plane has a duplicate of the rock particle system from the main plane. The plane labeled A is the main system. Objects labeled with the same letter are duplicates of each other. The differences between particle systems are:

a) Adjusted the number of particles based on the area of the plane. The main portion has 2000 in this case, the thin edge strips have 250, and the corner squares have 30.
b) Altered the Seed parameter of the particle system to give different patterns.

The pieces labeled B in the image share the same Seed; similarly, the pieces labeled C, D, E, F and G.

The reasoning for this pattern is that the edge strips are duplicated from one side onto the other, providing rock overlap as if from an adjacent tile of the same configuration, without having to actually duplicate the entire 1x1 square plane with its 2000 particles. The reasoning for the double sets of edge strips is that the Seed for the particle system on A can be altered to give different patterns for the center of the tile, but since the seeds for B,C,D,E,F and G stay the same, the edges of the tile will also stay the same, ensuring that all tiles created by varying A's seed will tile with all other variants seamlessly.

Now, with this setup it's time to start baking. But there's a problem: Blender doesn't like to Bake from a particle system. The typical method for baking a normal/ambient occlusion/displacement map from this setup is to create a plane, sized 1x1, and UV map it to fill the whole image. Then, translate it to lie directly above the rig, and use the Bake menu to bake out your maps. But, like I said, Blender doesn't like to bake from particle systems. There are a couple ways you can get around this:

1) Convert your particle systems to meshes, and bake from those. This involves selecting all of the planes, going to the Modifier stack for each plane and, on the entry for the particle system, pressing the big Convert button. This will make all of those rocks into real objects (duplicates of the corresponding rock from the rock Group). If you go this route, prepare for your face count to explode; the more particles you use, the more explody it will get. But once you have converted the meshes, you can then bake your maps.

2) Construct Cycles-based node materials for your objects, and render using those materials to get your various maps. If I have a particularly heavy particle system, this is the route I typically use. Here is the basic material setup for that:

eARNfug.png

Now, it might look a little complex, but it's not. Basically, there are 3 sections: AO, displacement and normal. On the far right is the output node, and to the left of that are 3 nodes that represent each of the 3 setups. The top node is the simplest, providing the built-in AO node. To bake the AO map, just connect the output of this node to the Surface input of the Material Output node, and render. The output should look like this:

I2Q73O3.png

If desired, you can connect the output of the AO node to a Brightness/Contrast modifier node to tweak the range of the AO map.

Second, connect the output of the Diffuse BSDF node in the center to the Surface input. This is the Displacement map chain. The way it works is it obtains the coordinate value of the point, takes the dot product of this position against the vector (0,0,1), feeds the output value of that operation to a Brightness/Contrast node so you can adjust the range, then feeds that result as a color to the output shader. The bright/contrast can be tweaked to find a good range. The output of this operation should look something like this:

QoxUkPh.png

The third set of nodes is used to bake a normal map. The normal is obtained from the Geometry input, split apart into separate X, Y and Z components. The Z is discarded, while the X and Y are modified by adding 1 and multiplying by 0.5. The value of 2 is substituted for Z, then the whole thing is combined back into a vector and fed to an Emission shader. (The emission shader is necessary so that lighting is not applied to the final output.) The result should look something like this:

gTxx3ub.png

The normal output material has a couple of Math nodes right after the split operation. You can modify the constants in those math nodes to scale the X and Y components of the normal, to make the bumps more or less pronounced. For example, with a value of 2 for both X and Y scaling:

5r74YFe.png

So, now I've got an AO map, a Displacement map and a Normal map. It's time to take them into Gimp. Then, I search through my library of seamless textures. I'm looking for two simple dirt-like patterns without a lot of detail to them. I've got quite a few such textures kicking around on my hard drive. This step might take me awhile to get something I like. Essentially, what I do is find a texture to act as the base dirt and a texture to act as the stones. Then I use the Displacement map as a mask to blend the stones layer on top of the dirt layer, adjusting the brightness and contrast of the displacement mask to get something I like. Sometimes, I won't even blend two layers like this; I might instead just use a single dirt texture. Whatever I decide upon, though, after I have a good diffuse color map that I like, I load up the AO map and multiply the AO against the diffuse to get a final diffuse map:

r7wbBGi.png

Now, for my terrain shaders I typically use a texturing scheme such as this method which uses the displacement map to modify the blend factor when blending between terrain types. In preparation for that, I'll combine the displacement map with the diffuse map, displacement in the alpha channel. Then, I'll fire up a test terrain and see how it looks:

ZCI1tBy.jpg

It can often take a lot of tweaking and iterating to get something I like. I'll experiment with diffuse colors, normal sharpness, rock shape, etc... Lots of different places in there to experiment.

Attached to this post is a zip archive of the .blend file used to generate the rock maps, in case you are interested in playing with stuff for yourself.

alfith_rocks_bake.zip

Edit: Re-uploaded the .blend file with gamma correction fixed and the normal shader tweaked to provide "more correct" output.

Edit: Re-uploaded yet another version of the blend file. This version has the material nodes grouped together, with 2 named inputs: displacement scaling and normal scaling to adjust how those outputs are scaled, and 3 named outputs: AO, Displace and Normal:

ke9erFe.png

Expanding the group:

2I1bp96.png

Some changes:

1) Tweaked the Displacement node chain to use a color ramp and a scaling factor, rather than a Bright/contrast node. The Color Ramp can be more easily tweaked to govern the gradient than a bright/contrast node, and the scaling factor can be used to quickly adjust the overall brightness.
2) Tweaked the Normal node chain a little bit, to get rid of some confusion. The scaling of X and Y needs to happen before the normal is converted into color space; pulling it out as a group input makes this even more clear.
3) The gamma change mentioned in the previous edit. By default, Blender does some color management stuff, including a gamma correction pass, that will push your colors out of whack. I've switched it to Raw color management, pushed the gamma to 1, and the colors are now unadjusted. This change also helps the ambient occlusion render to have greater contrast.

Edit: A shot of some more textures I made tonight:

JFElgW4.jpg

Edit:

udnPzHd.jpg