Dyeing to finish character composition

posted in A Keyboard and the Truth for project 96 Mill
Published June 27, 2013
'lo again all!

I spent a good chunk of time during the last interim between alpha 0.7 and 0.8 investigating runtime character composition.

The first step was figuring out whether I could actually create the required art layers, and whether they would be visually passable; that turned out quite well, so since then I've been on to the next challenge.

Actually getting the system integrated into the game.

Which generally breaks down into two areas of complexity:
1. The required data and code structure in the game/engine to support composition.

2. The shortcomings of HTML5 Canvas.


Changing the game

Revel uses a very simple system for representing characters: a loaded Map has a number of Sprite objects on it, and each sprite object is 'backed' by some subclass of 'Entity'.

A sprite is responsible for its own position and general display/motion, but it takes almost all of its cues from the entity object, polling it regularly for vital information.

The key piece of polled information in this case is 'getAnimation(id)': the sprite asks the entity what it should look like, and the entity is expected to return an AnimationTemplate for the requested id (id being an action such as idle, move, etc.).

The AnimationTemplate contains information such as what image page to use, how many rows/cols are in the image, anchoring offsets, the expected animation FPS, etc.
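
To make that concrete, here's a rough sketch of the sort of data an AnimationTemplate carries and the pull-style getAnimation call (the field names are my own guesses from the description above, not the engine's actual code):

```typescript
// Hypothetical sketch of an AnimationTemplate; field names are assumptions.
interface AnimationTemplate {
    imagePage: string;  // which image page (sprite sheet) to draw from
    rows: number;       // frame rows in the image
    cols: number;       // frame columns in the image
    anchorX: number;    // anchoring offsets, in pixels
    anchorY: number;
    fps: number;        // expected animation FPS
}

// The pull model: the sprite polls its backing entity for what to display.
interface Entity {
    getAnimation(id: string): AnimationTemplate; // id is an action: 'idle', 'move', ...
}
```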

So before composition, it was common to specify in the CharacterTemplate (a flyweight-pattern shared object that 'backed' character, NPC, and player instances) which animations to use for given actions.

However, now all of that information is subject to change based on various circumstances (hair dye, hair styling, worn equipment, etc.).


Moving to a Parametric System

So now, instead of defining that NPCs and characters should use specific animation templates, we start to design characters parametrically:

species, gender, skinColor, hairStyle, hairColor

Those are some of the base parameters that character templates for NPCs and PCs alike need in order to be drawn.

These settings are specified at the character template level, and hairStyle and hairColor can be overridden at the characterState level (in essence, per character you can change your hair color and style; if not specified, they default to the template-wide settings).

As a side note, I should add: specify dyes/colors as integer palette indexes; this allows you to collectively modify the actual colors used without changing each instance, which is especially useful since hairColor overrides will be saved out to a player save file.
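
For illustration, here's a minimal sketch of that parametric setup, with hair settings overridable per character and colors stored as palette indexes (the palette values and every name here are hypothetical):

```typescript
// Hypothetical palette: dye indexes map to actual colors in one place, so
// tweaking the palette retroactively changes every saved character.
const HAIR_PALETTE = ['#3b2a1a', '#101010', '#c9a34e', '#a33c2f'];

interface CharacterTemplate {
    species: string;
    gender: string;
    skinColor: number;   // palette index
    hairStyle: number;   // template-wide defaults
    hairColor: number;   // palette index
}

interface CharacterState {
    hairStyle?: number;  // optional per-character overrides
    hairColor?: number;  // saved to the player save file as an index
}

// Resolve the effective hair color: the state override wins, the template is the fallback.
function resolveHairColor(template: CharacterTemplate, state: CharacterState): string {
    return HAIR_PALETTE[state.hairColor ?? template.hairColor];
}
```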

Animations to Layers

So before, the Sprite was requesting a single animation for a given action; very clean, but sadly this will no longer cut it. A composed character is built up of around five or so layers.

After a number of tests, I decided to always draw layer by layer instead of composing and caching to a specific animation sheet per character; doing so creates a lot of specialized image caching, and the need for that caching mechanic frankly screws up the 'pull' nature of the sprite asking for graphics, instead requiring detecting, caching, hashing, and delivering brand new magically created sprite sheets at runtime.

So now the sprite and entities have a new function, getLayers(layers, id), and the idea here is simple: the entity delivers layers in back-to-front order for the requested action id.

But what is a layer? A layer is a two-element array where the 0th element is an AnimationTemplate and the 1st element is a color (it is an actual color, not an index, as this is only data delivery; nothing here is persisted, but it could be an index too).

The sprite gets these layers, and renders them back to front using color modulation to turn the greyscale component sources into colorized versions.
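
As a minimal sketch (Layer follows the two-element-array shape described above; drawTinted is a hypothetical helper standing in for the Canvas tinting workaround covered further down):

```typescript
// A layer as described above: [AnimationTemplate, color].
type Layer = [AnimationTemplate, string];

// Hypothetical helper: draws one frame of a greyscale template tinted by color.
// A naive Canvas implementation of the tinting is sketched later in this post.
declare function drawTinted(ctx: CanvasRenderingContext2D, template: AnimationTemplate,
                            frame: number, color: string, x: number, y: number): void;

// The sprite's draw pass: layers arrive back-to-front, so drawing
// them in order produces the composed character.
function drawComposedSprite(ctx: CanvasRenderingContext2D, layers: Layer[],
                            frame: number, x: number, y: number): void {
    for (const [template, color] of layers) {
        drawTinted(ctx, template, frame, color, x, y);
    }
}
```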

How are these layers made? In getLayers, for a character, we use those previously mentioned parameters (gender, species, skinColor, etc.) to pick the proper animation template(s): for instance, human-female-body-idle.png and human-female-hair000-idle.png.
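
Here's an illustrative sketch of how getLayers might assemble those layers from the parameters; the filename convention matches the examples above, but lookupTemplate, SKIN_PALETTE, and the exact layer ordering are my own stand-ins:

```typescript
// Hypothetical helpers: lookupTemplate maps a sheet name like
// 'human-female-body-idle' to a loaded AnimationTemplate; SKIN_PALETTE is
// analogous to the hair palette sketched earlier.
declare function lookupTemplate(sheetName: string): AnimationTemplate;
declare const SKIN_PALETTE: string[];

function getLayers(layers: Layer[], id: string,
                   template: CharacterTemplate, state: CharacterState): void {
    const base = `${template.species}-${template.gender}`;
    const hairStyle = state.hairStyle ?? template.hairStyle;

    // Back-to-front order: body first, then hair; equipment layers would follow.
    layers.push([lookupTemplate(`${base}-body-${id}`),
                 SKIN_PALETTE[template.skinColor]]);
    layers.push([lookupTemplate(`${base}-hair${String(hairStyle).padStart(3, '0')}-${id}`),
                 resolveHairColor(template, state)]);
}
```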


Oh right... those HTML5 Canvas shortcomings...

Sadly there is an extra step that no self-respecting modern graphics API should require, but what are ya gonna do?

Color modulation: in a nutshell, HTML5 Canvas doesn't have it.

It seems like it is forthcoming, but that doesn't help us today.

Color modulation is really handy; it allows you to take greyscale images, and tint them with any pure color you wish:

DEST = SRC * COLOR

In most 2D APIs you specify that you want to use in-image alpha and/or constant alpha, plus a modulation color, which could be white to get an unchanged source result (C times 1 is C).

So to accomplish this in Canvas, you've gotta jump through some hoops. The easiest and most naive way is to have an offscreen canvas large enough to store your biggest single image; for me that would be 128x128.

When you wish to draw a color-modulated image, you follow these steps:

1. clear the offscreen canvas
2. copy the image to the offscreen canvas at source size, and location 0,0
3. get the pixels from the canvas
4. loop through the pixels, setting each pixel to itself multiplied by the tint color
5. set the pixels back into the canvas
6. draw from the offscreen canvas to your final intended destination
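
Here's a minimal sketch of those six steps; the 128x128 scratch size matches my largest frame, and the helper name drawTintedImage (plus the assumption that colors arrive as '#rrggbb' strings) is mine:

```typescript
// Naive Canvas color modulation: copy to a scratch canvas, multiply every
// pixel by the tint, then draw the result to the real destination.
const scratch = document.createElement('canvas');
scratch.width = scratch.height = 128;          // big enough for the largest single frame
const scratchCtx = scratch.getContext('2d')!;

function hexToRgb(hex: string): [number, number, number] {
    const n = parseInt(hex.slice(1), 16);
    return [(n >> 16) & 255, (n >> 8) & 255, n & 255];
}

function drawTintedImage(ctx: CanvasRenderingContext2D, img: HTMLImageElement,
                         color: string, dx: number, dy: number): void {
    const [r, g, b] = hexToRgb(color);
    const w = img.width, h = img.height;

    scratchCtx.clearRect(0, 0, scratch.width, scratch.height); // 1. clear the offscreen canvas
    scratchCtx.drawImage(img, 0, 0);                           // 2. copy at source size to 0,0
    const pixels = scratchCtx.getImageData(0, 0, w, h);        // 3. get the pixels
    const data = pixels.data;
    for (let i = 0; i < data.length; i += 4) {                 // 4. DEST = SRC * COLOR
        data[i]     = (data[i]     * r) / 255;
        data[i + 1] = (data[i + 1] * g) / 255;
        data[i + 2] = (data[i + 2] * b) / 255;
        // alpha (data[i + 3]) is left untouched
    }
    scratchCtx.putImageData(pixels, 0, 0);                     // 5. set the pixels back
    ctx.drawImage(scratch, 0, 0, w, h, dx, dy, w, h);          // 6. draw to the destination
}
```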

This is actually pretty speedy for being incredibly naive; if you have small characters (64x64) and only plan to composite your main character, I venture to say you could get away with this.

...however, I'm using composition a lot, with tens of composed sprites on screen, and this just won't cut it; so we need to make it faster.

The answer, as is common, is to trade CPU for RAM.

We take the same process of copying to an offscreen canvas and color modulating, but instead we make a much larger canvas, say 2048x2048.

And when we copy our graphic to this canvas, we use a simple bin-packing algorithm (shelf) to find the next available empty spot; we place our image there, do our work, and make note that we have the image, in the requested color, and where it is.

Next time we ask for that image in the same color, we simply draw from that cached position instead of the expensive copy/pixel-manipulate operation.

If ever we run out of room in the offscreen canvas, we clear the entire canvas, dump the atlas data, and begin to re-shelf all over again; as long as your offscreen canvas has room for a few scenes' worth of data, this will avoid any ballooning or the need to add additional atlas pages.
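
To show the idea, here's a sketch of that shelf-packed tint cache; the structure and names are my own illustration, not the engine's code:

```typescript
// Shelf-packed atlas cache for tinted images (illustrative names throughout).
const ATLAS_SIZE = 2048;
const atlas = document.createElement('canvas');
atlas.width = atlas.height = ATLAS_SIZE;
const atlasCtx = atlas.getContext('2d')!;

// Hypothetical helper: does the same pixel-multiply as the naive version above,
// writing the tinted copy straight into the atlas at (dx, dy).
declare function tintIntoCanvas(ctx: CanvasRenderingContext2D, img: HTMLImageElement,
                                tint: string, dx: number, dy: number): void;

interface AtlasEntry { x: number; y: number; w: number; h: number; }
let cache = new Map<string, AtlasEntry>();   // key: imageId + tint color
let shelfX = 0, shelfY = 0, shelfH = 0;      // current shelf cursor

function resetAtlas(): void {
    atlasCtx.clearRect(0, 0, ATLAS_SIZE, ATLAS_SIZE);
    cache = new Map();
    shelfX = shelfY = shelfH = 0;
}

// Find (or create) the cached tinted copy and return its atlas rectangle.
function getTinted(imageId: string, img: HTMLImageElement, tint: string): AtlasEntry {
    const key = `${imageId}|${tint}`;
    const hit = cache.get(key);
    if (hit) return hit;                     // cheap path: already tinted and shelved

    const w = img.width, h = img.height;
    if (shelfX + w > ATLAS_SIZE) {           // current shelf is full: start a new one
        shelfX = 0;
        shelfY += shelfH;
        shelfH = 0;
    }
    if (shelfY + h > ATLAS_SIZE) {           // atlas is full: dump everything and re-shelf
        resetAtlas();
    }

    // Expensive path: tint into the atlas at the shelf position, then remember it.
    tintIntoCanvas(atlasCtx, img, tint, shelfX, shelfY);
    const entry = { x: shelfX, y: shelfY, w, h };
    cache.set(key, entry);
    shelfX += w;
    shelfH = Math.max(shelfH, h);
    return entry;
}
```

Drawing then becomes a plain ctx.drawImage(atlas, e.x, e.y, e.w, e.h, dx, dy, e.w, e.h) from the cached rectangle, and the expensive pixel loop only runs the first time a given image/tint pair is requested after a reset.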




After all of this work, in theory you are rewarded with a much easier system for creating varied characters, with modest increases to image resource size and less artwork overall.

Comments

Navyman

Really enjoyed the detail you shared here and excite to hear more about this game.

June 27, 2013 03:46 PM