General programming blog-type thing

## Is there a hexagonal analog of Conway's Game of Life?

The short answer is that in the hexagonal case the best analog of Conway's Game of Life -- in my opinion as someone who has been a CA hobbyist for 30 years or so -- is an original creation which I will describe for the first time in detail in this blog post. (Note: The links to cellular automata in this post go to a custom web-based player which may not run correctly on mobile devices.)

Regarding a hexagonal Game of Life, a key thing to understand is that Conway Life isn't just the rule; it is the rule (Life 2333) running on a square grid with an 8-cell neighborhood, i.e. the neighborhood of the four squares directly adjacent to a square plus the four diagonally adjacent to it. You can, of course, apply the same rule on a hexagonal grid using the natural six-hexagon neighborhood, but what you get won't look anything like Life. It will look like this: Life 2333 on a hexagon grid.
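For concreteness, here is the rule itself as a single-cell transition function -- a Python sketch of my own, not code from the player linked above. The point is that the function is agnostic about the lattice; only the neighborhood you count over changes:

```python
def next_state(alive, live_neighbors):
    """Conway's rule (Life 2333: survive on 2 or 3, born on exactly 3).

    Pass the count from the 8-cell square neighborhood and you get
    Conway Life; pass the count from the 6-cell hex neighborhood and
    you get the hex variant discussed above.
    """
    if alive:
        return 1 if live_neighbors in (2, 3) else 0
    return 1 if live_neighbors == 3 else 0
```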

So if the above does not qualify as a hexagonal Game of Life, then what would? Well, at the very least we need a glider. Carter Bays, a professor at the University of South Carolina, presented a Conway-like rule that admits a glider on the hexagonal lattice in 2005, Life 3,5/2 in his notation. However, by Bays' own admission "this rule is not as rich as Conway's Life", and indeed when we run Bays' Hex Life on random input we do not see gliders: Life 3,5/2 on random input. The problem is that its glider is too big to occur randomly; it is in a sense artificial. Part of the beauty of Conway Life is that gliders are frequently spontaneously generated. The other thing you'll notice about Bays' Life 3,5/2 is that it descends into still-lifes and oscillators too quickly. Conway Life is dynamic: it sprawls and grows, descending into bounded chaos before finally decaying into still-lifes and oscillators.

To summarize, we want a hexagonal cellular automaton that

1. Has a glider that is frequently generated by random initial input.
2. Frequently exhibits bounded growth from random initial input.

It is my contention that there is no simple totalistic rule over two states and the hexagonal grid using the natural 6-cell neighborhood that exhibits both 1 and 2.

In order to achieve what we want, we need to drop one of the constraints. Using my cellular automata breeding application Lifelike, I explored dropping the constraint that the rule must be a simple totalistic rule over two states. What I have come up with is a cellular automaton that uses a simple totalistic rule over three states, states 0 to 2. One way to view this move is to view the live cells in Conway Life as counters -- beans, pennies, whatever -- and imagine dropping the constraint that a cell can only contain one counter. Anyway, my rule is as follows:

• Take the sum S of the 6-cell neighborhood where dead=0, yellow=1, and red=2.
• If the cell is currently dead, it comes to life as yellow if S is exactly 4.
• If the cell is yellow, it goes to red if S is 1 to 4 inclusive or is exactly 6.
• If the cell is red, it stays red if S is 1 or 2 and goes to yellow if S is 4.
• Otherwise it is dead in the next generation.
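In code, the transition might look like the following Python sketch -- my paraphrase of the bullets above, not code from Lifelike:

```python
DEAD, YELLOW, RED = 0, 1, 2

def joe_life_next(cell, neighborhood):
    """Transition for one hex cell given its six neighbors' states."""
    S = sum(neighborhood)  # dead=0, yellow=1, red=2
    if cell == DEAD:
        return YELLOW if S == 4 else DEAD
    if cell == YELLOW:
        return RED if 1 <= S <= 4 or S == 6 else DEAD
    if cell == RED:
        if S in (1, 2):
            return RED
        if S == 4:
            return YELLOW
    return DEAD
```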

The above, which I call "Joe Life", is as rich as Conway Life: click this link to see it run. It exhibits bounded growth with about the same burn rate as Conway Life and features two gliders, the little fish and the big fish, which are frequently generated spontaneously.

The little fish

The big fish

I concede Conway's rule is more elegant than Joe Life's, but if one thinks about it, Conway Life's neighborhood is less natural than the six-hexagon neighborhood in that it is kind of weird to include the diagonally adjacent squares. So in my opinion, the elegance Joe Life lacks in its state table it makes up for in its neighborhood.

## Lifelike, or the Joy of Killing Time via Breeding Little Squiggles

A couple of weeks ago I got interested in this project but wanted full control of the code, wanted to know exactly what it is doing, wanted a bunch of features like the ability to import and export CA rules, and wanted the process not to be seeded by cellular automata already featuring gliders (which the web app seems to be). To this end I have pushed a project to GitHub that does something similar but, I hope, more transparently.
I'm calling it Lifelike Cellular Automata Breeder. It is a (C# WinForms) application in which, given some settings, a user can artificially select and breed cellular automata; i.e., it performs a genetic algorithm in which the user manually provides the fitness criteria interactively.
I decided to only allow a reproduction step in which I scramble together state tables in various ways, guessing that using "DNA" more complex than commensurate 2D tables of numbers wouldn't work well for a genetic process in this case. I characterize CA rules as applying only relative to a given number of states and a given, what I call, "cell structure" and "neighborhood function". Cell structure just means a lattice type and neighborhood, e.g. square with four neighbors; square with eight neighbors; hex with six; etc. "Neighborhood function" is an arbitrary function that, given the states of the n cells in some cell's neighborhood, returns an integer from 0 to r, where r depends on the neighborhood function and possibly the number of states. For example, Conway Life uses the neighborhood function I refer to as "alive cell count", and for an n-cell neighborhood r equals n, because the greatest number of alive cells that can surround a cell is just the size of the neighborhood. If the user has selected s total states, the state tables will be s by r.
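To make the state-table formulation concrete, here is a minimal Python sketch of how, in this scheme, a rule is just a table lookup. The names are mine, not Lifelike's, and strictly speaking the table needs r+1 columns since the neighborhood function can return anything from 0 through r:

```python
def alive_cell_count(neighborhood):
    # Conway Life's neighborhood function: the number of non-dead
    # neighbors, so r = n, the size of the neighborhood.
    return sum(1 for s in neighborhood if s != 0)

def step_cell(state_table, cell_state, neighborhood, nbhd_fn):
    # A rule is an s-by-(r+1) table: row = current state,
    # column = value of the neighborhood function.
    return state_table[cell_state][nbhd_fn(neighborhood)]

# Conway Life encoded this way (2 states, 8-cell neighborhood):
CONWAY = [
    [0, 0, 0, 1, 0, 0, 0, 0, 0],  # dead: born on exactly 3
    [0, 0, 1, 1, 0, 0, 0, 0, 0],  # alive: survives on 2 or 3
]
```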
Lifelike works as follows:

1. The user selects a number of states, a cellular structure, and a neighborhood function and kicks off the genetic process.
2. Lifelike sets the current generation to nil, where by "generation" we just mean a set of cellular automata that have been tagged with fitness values.
3. While the user has not clicked the "go to the next generation" button,

• If the current generation is nil, Lifelike randomly generates a cellular automaton, CA, from scratch by making an s by r state table filled with random numbers from 0 to s. (The random states are generated via a discrete distribution controllable by the user.) If the generation is not nil, Lifelike selects a reproduction method requiring k parents, selects k parents from the current generation such that this selection is weighted on the fitness of the automata, generates CA using the reproduction method and parents, and then possibly selects a random mutation function and mutates CA, selecting the mutation function via a discrete distribution controllable by the user and applying it with a "temperature" controlled by the user.
• Lifelike presents CA in a window.
• The user either skips CA, in which case it no longer plays a role in the algorithm, or applies a fitness value to it and adds it to the next generation.

4. When the user decides to go to the next generation, the selections the user just made become the new parent generation and processing continues.
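The fitness-weighted selection and temperature-driven mutation in step 3 can be sketched as follows -- Python for brevity, and the mutation function shown is just one plausible choice, not necessarily one of Lifelike's:

```python
import random

def pick_parents(generation, k, rnd=None):
    # generation: list of (state_table, fitness) pairs the user scored.
    # Selection is weighted on fitness, as described above.
    rnd = rnd or random.Random()
    tables, fitness = zip(*generation)
    return rnd.choices(tables, weights=fitness, k=k)

def mutate(state_table, temperature, n_states, rnd=None):
    # Reroll each table entry with probability `temperature`.
    rnd = rnd or random.Random()
    return [[rnd.randrange(n_states) if rnd.random() < temperature else v
             for v in row]
            for row in state_table]
```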

Here's a video of me playing with an early version of the application.

## Results, Musings, etc.

Briefly, Lifelike works.

You can produce interesting cellular automata with it, and there is a weird feeling that is hard to describe when you first see a tiny glider wiggle across the screen; however, the way it works is somehow more mundane than I thought it would be. I wonder if all genetic algorithms are like this. Most of what it produces is garbage. When you see something that isn't garbage you can select for it. However, Lifelike doesn't do magic. It doesn't magically find phenomena you want just because you have a fancy framework implemented to find such phenomena. For example, it is my belief at this point that there is no simple hex grid analog of Conway Life using alive cell count, a six-cell neighborhood, and two states. I think you could probably prove the non-existence of gliders in this configuration, but it would be a boring proof by exhaustion, and running Lifelike on that configuration is boring as well. Just because you, the user, are a step in a "genetic algorithm" doesn't somehow make it interesting.
The simple hex neighborhood negative result led me to ask the following question: what is the smallest change you can make to hex/6-cell/alive cell count/2 states to allow Conway Life-like behavior? If you google "Hex Life" you will find it is well known that interesting things can happen if you go to a 12-cell Star of David shaped neighborhood, but this seems inelegant to me because the simple hex neighborhood is so nice. The question, then, is whether gliders are possible in the simple hex lattice and neighborhood if we add one state and modify "alive cell count" in a trivial way. The answer turns out to be yes. There are beautiful rules that live in the hex lattice with the natural neighborhood if we have 0 = dead, 1 = Alive-A, 2 = Alive-B and, instead of the simple alive cell count, we use its natural analog when states can have values greater than one: sum of states.
Below is such a rule set, and it is probably the best thing that has come out of my work with Lifelike as far as I am concerned (so I am naming it Joe Life, assuming that it is unknown in the literature).
The above has a nice quality that Conway Life also has that I call "burn". This is a qualitative thing that is really hard to define, but it is what I look for now when I play with Lifelike: burn + gliders = a Life-like cellular automaton. "Burn" is the propensity of a cellular automaton configuration to descend into segregated regions of chaos that churn for a while before ultimately decaying into gliders, oscillators, and still-lifes. Some CAs burn faster than others; the above has a nice slow burn. CAs that exhibit steady, controlled burn turn out to be rare. Most CAs either die or devolve instantly into various flavors of unbounded chaos.
However, there turns out to be another quality that is neither death nor unbounded chaos and is sort of like the opposite of burn. See for example
(The above is hex 6-cell, four states, using a neighborhood function I call "state-based binary".) I have been calling this one "Armada" and generally refer to these kinds of CAs as armada-like. Armada-like cellular automata quickly decay completely into only weakly interacting gliders. For example, one from the literature that I would characterize as armada-like is Brian's Brain. Armada-like rules turn out to be more common than life-like rules. They're impressive when you first start finding them, but they are ultimately less interesting, to me at least. The best thing about armada-like rules is that they indicate that life-like rules are probably "nearby" in the space you are exploring in the genetic process.
They can also breed weird hybrids that defy classification, such as the following, which are all burn with large blob-like gliders and seem sometimes to live around the boundary between armada-like rules and life-like rules.
Magic Carpets (square, 4-cell / 4 states / sum of states)
or Ink Blots (hex, 6-cell / 3 states / "0-low-med-high")
My other major result is that life-like rules exist in the square 4-cell neighborhood if we allow an extra state and use the simple sum of states as the neighborhood function, but they can be boring looking, so instead here is an armada-like square 4-cell CA that is on the edge of being life-like.

The above uses the neighborhood function I call "2-state count", which enumerates all possible combinations of c1, the number of neighboring cells in state 1, and c2, the number of neighboring cells in state 2 or above, in an n-cell neighborhood, i.e. c1 + c2 <= n.
from The Curiously Recurring Gimlet Pattern
## Draak update

Below is gameplay video of a prototype of the puzzle game "Draak" that I am working on. Draak will be an iPad-only iOS game; the prototype below is written for Windows in C# on top of MonoGame.

[media]
[/media]

The initial idea for this game was about the art. I thought it would be cool to make a game that looks like an animated version of one of M. C. Escher's tessellations, particularly one that involves a similarity symmetry. So I lifted mechanics from a 1990s-era game that I always felt was a good game trapped in the typical square lattice, for which it was particularly ill-suited. The old game's Euclidean layout wasted a lot of space.

A lot of the work in the above may not be immediately apparent (which is good, I think). Specifically, there are actually two shapes of butterflies in the above, not just one. They look like this: and they are arrayed as in a checkerboard with a 90-degree rotation applied to cells of opposite parity -- the geometry might be clearer here, in which I create these butterfly tiles unadorned with my tool EscherDraw (before I had beziers working in EscherDraw). Thus any time a butterfly moves to a cell of opposite parity I need to not only handle the scale and rotation tweening imposed by the spiral lattice, I also have to simultaneously do the 90-degree rotation imposed by the butterfly tiles and morph the actual sprites from one to the other.

I created the above sprites as vector art using tile outlines exported from EscherDraw as SVG and wrote a Python script that morphs SVG, as publicly available software for morphing vector graphics is surprisingly non-existent. I am going to post about my vector morphing code here soon; I just need to clean it up for use by people other than me.

Source

## My Code for Doing Two Things That Sooner or Later You Will Want to do with Bezier Curves

I just added full cubic bezier curve support to the vector tessellation creation tool I am developing (see here). I'll include some video at a later date (Update: here) showing this feature in action but first I want to document and make publicly available some of the code involved: namely, a bezier class in C# that supports two functions that are not the easiest things in the world to implement. Both of these functions come up as soon as you try to do anything non-trivial with beziers and the literature online can be unhelpful.

The first function is splitting a cubic bezier into two curves at a given t parameter such that the concatenation of the two curves is congruent to the original. This turns out to be easy; however, I could find no actual code on offer, rather just descriptions of code. There is good coverage elsewhere on the theory so I won't go into it deeply here, but, briefly, splitting a bezier is easy because how one does so follows naturally from the sort of recursive definition of the bezier that is implied by De Casteljau's algorithm. Anyway, here is my code:

[code=csharp:0]
public Tuple<Bezier, Bezier> Split(double t)
{
    PointD E = Interpolate(t, CtrlPoint1, CtrlPoint2);
    PointD F = Interpolate(t, CtrlPoint2, CtrlPoint3);
    PointD G = Interpolate(t, CtrlPoint3, CtrlPoint4);
    PointD H = Interpolate(t, E, F);
    PointD J = Interpolate(t, F, G);
    PointD K = Interpolate(t, H, J);
    return new Tuple<Bezier, Bezier>(
        new Bezier(CtrlPoint1, E, H, K),
        new Bezier(K, J, G, CtrlPoint4)
    );
}

static private PointD Interpolate(double t, PointD pt1, PointD pt2)
{
    double x = (1.0 - t) * pt1.X + t * pt2.X;
    double y = (1.0 - t) * pt1.Y + t * pt2.Y;
    return new PointD(x, y);
}
[/code]
The other function is considerably harder: finding the nearest point on a bezier curve to a given point. If you scan the literature (i.e. perform the google search) you will find that what most people end up using is some code from an article in the book Graphics Gems I, "Solving the Nearest-Point-On-Curve Problem" by Philip J. Schneider. I, however, have a few problems with this code: (1) I don't really understand it; (2) it is long and I would need to port it to C#; and (3) someone on the internet claims it is incorrect, but given (1) I can't really evaluate the claim; see the comment that begins "There seem to be errors in the ControlPolygonFlatEnough function [...]" in the C code here.

But more relevantly I just don't get this code on a fundamental level. The nearest point on a bezier problem is difficult to solve mathematically but it is easy to formulate mathematically. I think that the Graphics Gems code obscures the straightforward aspect of how this problem can be formulated. Further, I don't have a problem using code that I don't understand; however, if I am going to port an opaque block of code from one language to another I'd like to keep that portion of code to a minimum and I can do this easily by structuring my code around the formulation of the problem that I understand.

The formulation I am talking about is as follows: if B(t) is a particular cubic bezier function, B'(t) is its derivative, and P is an arbitrary point, then [B(t) - P] · B'(t) = 0 when t is the parameter of the closest point on B to P. [B(t) - P] is a vector that points from P towards some point on the curve, and B'(t) is a vector that is tangent to the curve at that point; if the distance is to be a minimum then these two vectors will be perpendicular and thus their dot product will be zero. Since B(t) is cubic, its derivative will be quadratic, and thus when we take the dot product of [B(t) - P] and B'(t) we end up with a quintic equation of one variable. This leads naturally to a nearest point function that looks like the following at the top level:

[code=csharp:0]
public Tuple<PointD, double> FindNearestPoint(PointD pt)
{
    var polyQuintic = GetNearestBezierPtQuintic(
        _points[0].X, _points[0].Y, _points[1].X, _points[1].Y,
        _points[2].X, _points[2].Y, _points[3].X, _points[3].Y,
        pt.X, pt.Y
    );
    List<Complex> roots = FindAllRoots(polyQuintic);

    // Filter out roots with nonzero imaginary parts and roots
    // with real parts that are not between 0 and 1.
    List<double> candidates = roots.FindAll(
        root => root.Real > 0 && root.Real < 1.0 && Math.Abs(root.Imaginary) < ROOT_EPS
    ).Select(
        root => root.Real
    ).ToList();

    // add t=0 and t=1 ... the edge cases.
    candidates.Add(0.0);
    candidates.Add(1.0);

    // find the candidate that yields the closest point on the bezier to the given point.
    double t = double.NaN;
    PointD output = new PointD(double.NaN, double.NaN);
    double minDistance = double.MaxValue;
    foreach (double candidate in candidates)
    {
        var ptAtCandidate = GetPoint(candidate);
        double distance = DistSqu(ptAtCandidate, pt);
        if (distance < minDistance)
        {
            minDistance = distance;
            t = candidate;
            output = ptAtCandidate;
        }
    }
    return new Tuple<PointD, double>(output, t);
}
[/code]
which reduces the problem to implementing GetNearestBezierPtQuintic() and a numeric root-finding function for polynomials, given that quintic equations cannot be solved via a closed-form formula like the quadratic formula.

GetNearestBezierPtQuintic() -- which returns the coefficients of [B(t) - P] · B'(t) when fully expanded, given P and the control points of B -- turns out to be a little too much algebra to work out comfortably by hand, so I ran the open source symbolic mathematics application Sage on this Sage script, yielding the following when translated into C# (for the first time on the internet, as far as I can tell):

[code=csharp:0]
static private List<Complex> GetNearestBezierPtQuintic(
    double x_0, double y_0, double x_1, double y_1, double x_2, double y_2,
    double x_3, double y_3, double x, double y)
{
    double t5 = 3 * x_0 * x_0 - 18 * x_0 * x_1 + 27 * x_1 * x_1 + 18 * x_0 * x_2 - 54 * x_1 * x_2 +
        27 * x_2 * x_2 - 6 * x_0 * x_3 + 18 * x_1 * x_3 - 18 * x_2 * x_3 + 3 * x_3 * x_3 +
        3 * y_0 * y_0 - 18 * y_0 * y_1 + 27 * y_1 * y_1 + 18 * y_0 * y_2 - 54 * y_1 * y_2 +
        27 * y_2 * y_2 - 6 * y_0 * y_3 + 18 * y_1 * y_3 - 18 * y_2 * y_3 + 3 * y_3 * y_3;
    double t4 = -15 * x_0 * x_0 + 75 * x_0 * x_1 - 90 * x_1 * x_1 - 60 * x_0 * x_2 + 135 * x_1 * x_2 -
        45 * x_2 * x_2 + 15 * x_0 * x_3 - 30 * x_1 * x_3 + 15 * x_2 * x_3 -
        15 * y_0 * y_0 + 75 * y_0 * y_1 - 90 * y_1 * y_1 - 60 * y_0 * y_2 + 135 * y_1 * y_2 -
        45 * y_2 * y_2 + 15 * y_0 * y_3 - 30 * y_1 * y_3 + 15 * y_2 * y_3;
    double t3 = 30 * x_0 * x_0 - 120 * x_0 * x_1 + 108 * x_1 * x_1 + 72 * x_0 * x_2 - 108 * x_1 * x_2 +
        18 * x_2 * x_2 - 12 * x_0 * x_3 + 12 * x_1 * x_3 +
        30 * y_0 * y_0 - 120 * y_0 * y_1 + 108 * y_1 * y_1 + 72 * y_0 * y_2 - 108 * y_1 * y_2 +
        18 * y_2 * y_2 - 12 * y_0 * y_3 + 12 * y_1 * y_3;
    double t2 = 3 * x * x_0 - 30 * x_0 * x_0 - 9 * x * x_1 + 90 * x_0 * x_1 - 54 * x_1 * x_1 +
        9 * x * x_2 - 36 * x_0 * x_2 + 27 * x_1 * x_2 - 3 * x * x_3 + 3 * x_0 * x_3 +
        3 * y * y_0 - 30 * y_0 * y_0 - 9 * y * y_1 + 90 * y_0 * y_1 - 54 * y_1 * y_1 +
        9 * y * y_2 - 36 * y_0 * y_2 + 27 * y_1 * y_2 - 3 * y * y_3 + 3 * y_0 * y_3;
    double t1 = -6 * x * x_0 + 15 * x_0 * x_0 + 12 * x * x_1 - 30 * x_0 * x_1 + 9 * x_1 * x_1 -
        6 * x * x_2 + 6 * x_0 * x_2 -
        6 * y * y_0 + 15 * y_0 * y_0 + 12 * y * y_1 - 30 * y_0 * y_1 + 9 * y_1 * y_1 -
        6 * y * y_2 + 6 * y_0 * y_2;
    double t0 = 3 * x * x_0 - 3 * x_0 * x_0 - 3 * x * x_1 + 3 * x_0 * x_1 +
        3 * y * y_0 - 3 * y_0 * y_0 - 3 * y * y_1 + 3 * y_0 * y_1;
    return new List<Complex> {
        (Complex)t0 / t5, (Complex)t1 / t5, (Complex)t2 / t5,
        (Complex)t3 / t5, (Complex)t4 / t5, (Complex)1.0
    };
}
[/code]

I then decided to use Laguerre's Method to solve the equation.

I chose Laguerre's Method because it is optimized for polynomials, is less flaky than Newton-Raphson, is relatively concise, and is easy to port from C++ given Microsoft's implementation of complex numbers in the System.Numerics namespace. I ported the implementation of Laguerre's Method found in Data Structures and Algorithms in C++ by Goodrich et al. (here). This solver works well enough for my purposes; however, I think the way to get the optimal nearest-point-on-a-bezier algorithm would be to implement one of a class of newish algorithms that specifically target the quintic, e.g. "Solving the Quintic by Iteration" by Doyle and McMullen. No implementations of these algorithms seem to be publicly available, however, and translating that paper from academese into code is a research project that I don't have time for right now.
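For readers who want the root finder without chasing down the book, here is a compact sketch of Laguerre's Method with deflation, in Python. This is my own paraphrase of the textbook algorithm, not a port of the Goodrich et al. code; coefficients are in increasing-degree order, and in production you would want to polish each deflated root against the original polynomial:

```python
import cmath

def laguerre_root(coeffs, x0=0j, tol=1e-12, max_iter=200):
    """One root of the polynomial sum(coeffs[i] * x**i)."""
    n = len(coeffs) - 1
    x = complex(x0)
    for _ in range(max_iter):
        # Horner evaluation of p, p', p'' at x.
        p, dp, ddp = coeffs[-1], 0j, 0j
        for c in reversed(coeffs[:-1]):
            ddp = ddp * x + dp
            dp = dp * x + p
            p = p * x + c
        ddp *= 2
        if abs(p) < tol:
            return x
        G = dp / p
        H = G * G - ddp / p
        s = cmath.sqrt((n - 1) * (n * H - G * G))
        denom = max(G + s, G - s, key=abs)  # larger denominator is stabler
        x -= n / denom if denom != 0 else 1.0
    return x

def all_roots(coeffs):
    """All roots, by repeatedly finding one root and deflating."""
    coeffs = [complex(c) for c in coeffs]
    roots = []
    while len(coeffs) > 2:
        r = laguerre_root(coeffs)
        roots.append(r)
        # Synthetic division: divide out the factor (x - r).
        rev = [coeffs[-1]]
        for c in reversed(coeffs[1:-1]):
            rev.append(rev[-1] * r + c)
        coeffs = list(reversed(rev))
    roots.append(-coeffs[0] / coeffs[1])
    return roots
```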

Source

## Untitled Escher Game Update

I've decided my "untitled Escher game", i.e. Zoop on a double logarithmic spiral, is going to be named "Draak", which is the Dutch word for dragon. The title bears no relation to the game beyond the fact that I am going to use as its logo art based on the following novel tessellation of the plane, which I discovered while fooling around with the tessellation tool that I implemented to construct the sprites for this game, and also that I thought it would be cool to make the title art an ambigram with rotational symmetry, and I think I could figure out a way to do this with the word "draak".

Notes on a novel tiling of the plane. Rendered in EscherDraw.

## An Application for Creating Novel Escher-like Tessellations of the Plane

Over the past several weeks I have been investigating desktop applications for creating novel Escher-like tilings of the plane. Basically I've determined that none of what is publicly available is useful to me. The game I have in development will involve an animated Escher-like tessellation of a double logarithmic spiral -- see here. The "logarithmic spiral" part of it means the publicly available applications can't help me: they are too limited in what they will do and I can't extend them because they are proprietary, so I have created my own tool.

These applications are "too limited" in the sense that none encompass everything possible that one would regard as a 2D tessellation. You have some applications that simplistically shoehorn you into a few kinds of geometries ignoring the richness of the whole domain of tiled patterns. You have other applications that try to take the domain seriously and follow the mathematical literature on the subject. What this means, generally, is that they allow the user to construct a tessellation in terms of its "wallpaper group" which is the mathematical classification of a plane pattern based on the symmetries the pattern admits.

The latter seems like what one would want but in practice it isn't, at least it is not what I want. It is too heavy of a constraint to work only in terms of wallpaper group because there are all kinds of patterns one might be interested in that fall outside of this formalism: Escher's tessellations of the Poincare disk model of the hyperbolic plane, for example, or, say, any kind of aperiodic tiling or any tessellations that admit similarity as a symmetry, i.e. scaling, such as Escher's square limit etchings, or my case: various spiral patterns.

It is instructive that Escher himself did not work in terms of symmetry groups. He created his own classification system which Doris Schattschneider refers to as Escher's "layman's theory" of tilings in her book (the authoritative text on Escher's tessellations). Escher's classification system comes from a pragmatic perspective rather than a formal mathematical perspective. He starts with basic cells -- a square, a rectangle, an isosceles right triangle, etc. -- and then enumerates the ways in which one can surround the cell with itself that result in tilings of the plane. There is a good overview of his system here.

What I like about Escher's approach is that it is geared towards construction. When you try to make one of these things you start with a tile or a small set of simple tiles that tile the plane. You try to modify them until you get something you like. I decided to implement software that captures that approach. I didn't want to have to be too concerned with "wallpaper group" or other abstractions, which I don't personally find to be intuitive anyway.

What I came up with is the following application that is written in C#. Below is video in which I use the application to construct something like Escher's butterfly tessellation -- system IX-D in his layman's theory -- but on a double logarithmic spiral (This is going to be the basis for the art of the first level of my game; the butterflies will be animated, flapping their wings, etc.)

[media]
[/media]

The way the application works is that there are two kinds of tiles: normal tiles and tile references. Normal tiles are composed of "sides", and there are likewise two kinds of sides: normal sides and side references. Tile references and side references point to another tile or side and say, basically, "I am just like that tile or side, except apply this transformation to me". The transformations that tile references can apply to a tile are defined in terms of affine transformation matrices. The transformations that side references can apply are flips relative to the side's end points: "horizontal", "vertical", or both. The application then allows the user to add and move vertices on any sides that are on tiles marked as editable, and it resolves all the references in real time.
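The reference-resolution idea is easy to sketch. Below is a toy Python version with hypothetical names (the real application is C# and also handles side references and editability, omitted here): a tile reference resolves to another tile's vertices with an affine matrix applied.

```python
def apply_affine(m, pt):
    # m is a 3x3 row-major affine matrix; pt is an (x, y) pair.
    x, y = pt
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

class Tile:
    def __init__(self, verts):
        self.verts = list(verts)
    def resolve(self):
        return self.verts

class TileRef:
    def __init__(self, target, matrix):
        # "I am just like that tile, except apply this transformation to me."
        self.target, self.matrix = target, matrix
    def resolve(self):
        return [apply_affine(self.matrix, p) for p in self.target.resolve()]

# usage: a unit square and a copy rotated 90 degrees about the origin
square = Tile([(0, 0), (1, 0), (1, 1), (0, 1)])
rot90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
copy = TileRef(square, rot90)
```

Because a reference resolves its target on demand, editing the referenced tile's vertices automatically updates every copy, which is the behavior described above.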

Right now, all of the information about tiles and sides and which is a reference to which has to be hand coded as an XML file, which is a pain (for the above I wrote a separate Python program that generated the XML defining a double logarithmic spiral of tiles that interact with each other according to Escher's IX-D system), but it is an extremely flexible formulation (it could, for example, handle hyperbolic tessellations in theory ... if you added a kind of side that is a circular arc and moebius transformations to the set of transformations the applications knows how to apply to referenced tiles).
Eventually I'd like to release this application as open source, but I am not sure it will be useful to anyone but me in its current form. I need to incorporate the part that currently has to be hand-coded in XML into the actual GUI, fix bugs and bezier curves, and so forth, but please feel free to contact me if you are interested in the application in its current state or otherwise.

Source

## What I am working on

I'm working on a puzzle game for iPads that takes the mechanics of a somewhat obscure game from the 1990s(*) and puts them on a double logarithmic spiral.

The final app will be styled similarly to one of M. C. Escher's tessellations of the plane, with the tessellation changing at level breaks. I don't have a name yet...

Gameplay will be like the following, which is video of a prototype I wrote in C# (just WinForms to the GDI, nothing fancy). I'm going to write the real thing in Swift using Sprite Kit, unless Swift turns out to be too immature, in which case I'll use C++ and Cocos2d-x again.

[media]
[/media]
*: The identity of which I leave as an exercise for the reader.

Source

## Discretely Distributed Random Numbers in C#

Random numbers generated from a discrete distribution are commonly needed in game development.

By "discrete distribution" we just mean the roll you get from something like an unfair die, e.g. you want a random number from 0 to 5 but you want 4 and 5 to be twice as likely as 0, 1, 2, or 3. If we think of each possible random value as having a weight, then in this case 0, 1, 2, and 3 would have a weight of 1, and 4 and 5 would have a weight of 2.

A simple way to generate these kinds of random values is the following. Given some n such that we want random values ranging from 0 to n-1 where for each 0 <= i < n we have a weight w(i):

1. Build a data structure mapping cumulative weight to each value i. By cumulative weight we mean for each i the sum of w(0), ... , w(i-1).
2. Generate a random number r from 0 to W-1 inclusive where W is the total weight i.e. the sum of w(i) for all i.
3. If r is a cumulative weight in our data structure return the value associated with it; otherwise, find the value v that is the first item in the data structure that has a cumulative weight greater than r and return v-1.
Obviously the data structure in the above could just be a plain array searched linearly, but a better way to do it is to use a binary search tree, because step 3 will then be O(log n) rather than linear. In Java you can do this with a TreeMap. In C++ you can do this with a std::map (or in C++ you can do the whole thing with boost::random::discrete_distribution).
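As a minimal sketch of steps 1-3 -- in Python with the standard bisect module, and using an inclusive running total so that bisect_right finds the value directly, rather than the exclusive totals described above -- the whole idea looks like this:

```python
import bisect
import random

def make_sampler(weights, rng=random):
    """Build a sampler over 0..n-1 where value i is drawn with
    probability w(i)/W.  Step 1: precompute running totals (a sorted
    list), so each draw is a binary search, i.e. O(log n)."""
    cumulative = []
    total = 0
    for w in weights:
        total += w
        cumulative.append(total)  # cumulative[i] = w(0) + ... + w(i)

    def sample():
        r = rng.randrange(total)                   # step 2: r in [0, W-1]
        return bisect.bisect_right(cumulative, r)  # step 3: first bucket past r
    return sample
```

Usage: `sample = make_sampler([1, 1, 1, 1, 2, 2])` draws values 0-5, with 4 and 5 each twice as likely as the others.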

However, in C# you can't just use a SortedDictionary, even though it is a binary search tree under the hood, because it does not expose the equivalent of C++'s std::lower_bound and std::upper_bound, or of Java's TreeMap.floorEntry(...) and TreeMap.ceilingEntry(...). To perform the "otherwise" part of step 3 you need to be able to efficiently find the spot in the data structure where a key would go if it were present when it is in fact not, and there is no efficient way to do this with a SortedDictionary.

However, C#'s List<T> does support a BinarySearch method that returns the bitwise complement of the index of the next element larger than the item you searched for, so you can use that.

The downside of this whole approach is that there will be no way to efficiently add or remove items to the discrete distribution, but often you don't need this functionality anyway and the code to do the whole algorithm is very concise:
[code=csharp:0]
using System;
using System.Collections.Generic;
using System.Linq;

class DiscreteDistributionRnd
{
    private List<int> m_accumulatedWeights;
    private int m_totalWeight;
    private Random m_rnd;

    public DiscreteDistributionRnd(IEnumerable<int> weights, Random rnd = null)
    {
        int accumulator = 0;
        m_accumulatedWeights = weights.Select(
            (int prob) => {
                int output = accumulator;
                accumulator += prob;
                return output;
            }
        ).ToList();
        m_totalWeight = accumulator;
        m_rnd = (rnd != null) ? rnd : new Random();
    }

    public DiscreteDistributionRnd(Random rnd, params int[] weights) :
        this(weights, rnd)
    {
    }

    public DiscreteDistributionRnd(params int[] weights) :
        this(weights, null)
    {
    }

    public int Next()
    {
        int index = m_accumulatedWeights.BinarySearch(m_rnd.Next(m_totalWeight));
        return (index >= 0) ? index : ~index - 1;
    }
}
[/code]
where usage is like
[code=csharp:0]
DiscreteDistributionRnd rnd = new DiscreteDistributionRnd(3, 1, 2, 6);
int[] ary = new int[4] { 0, 0, 0, 0 };
for (int i = 0; i < 100000; i++)
    ary[rnd.Next()]++;
System.Diagnostics.Debug.WriteLine(
    "0 => {0}, 1 => {1}, 2 => {2}, 3 => {3}",
    (float)(ary[0] / 100000.0), (float)(ary[1] / 100000.0),
    (float)(ary[2] / 100000.0), (float)(ary[3] / 100000.0));
[/code]

Source

## Cocos2d-x + Box2d Breakout Updated

Cocos2d-x releases since version 3 have broken compatibility with old tutorials. This can especially be a problem when you want to do something slightly non-standard from the point-of-view of Cocos2d-x.

If you want to use physics in a Cocos2d-x game the current standard way to do this is to use the integrated physics classes, which are Chipmunk-based by default and can use Box2d too in some hybrid way the details of which are not at all clear. However, in my case I want to use Box2d, period, Box2d in a non-integrated manner for transparency and in order to leverage the vast amount of code that you get to peruse and possibly use by writing to vanilla Box2d. When doing something like this it can be hard to know where to start given that any sample code you find will be broken.

For setting up a Box2d/Cocos2d-x project there was always this BreakOut implementation for Cocos2d-iphone by Ray Wenderlich (link), which was transliterated to Cocos2d-x here, but against relatively ancient versions of both Box2d and Cocos2d-x. I've taken that code and updated it to Cocos2d-x version 3.2 and Box2d version 2.3.

Here is my updated version.

To use do the following:

1. Set up a cocos2d-x v3.2 project via the python script. This will give you Box2d v2.3 set up in your project without you having to do anything else.
2. Copy the source code in the above zip file into your project's Classes directory.
3. Copy the image files Ball.jpg, Block.jpg, and Paddle.jpg from here into your project's Resources directory.
4. Build.

(You can copy over the music/sound files too; I have the relevant calls commented out in the code above.)

Source

## Zzazzy 1.0.0

My iPad game Zzazzy has made it through the review process and is now on the App Store: see here.
I will be updating the page I have for it here this week with more detailed instructions than the app itself provides (there is a tutorial mode, but it really just covers the basics). Basically, it is an action puzzle game in which you lay tiles on a board to form interlocking words, but you're building words against the clock, where

• forming large grids of words puts time back on the clock
• not using tiles eventually leads to tile death, which takes time off the clock.

It's a continuous play type of game, like Tetris or something, but with a completely new game mechanic.

Screenshot below:

Source

## Snurtle and My Greatest Contribution to Mathematics, Such That It Was

This summer coming up will be 20 years since the summer after I graduated from college. I'm currently in that golden period in between software jobs -- starting a new one in March -- and, given the free time, have just done something which I have been meaning to do since that summer.
I generated the following image in a non-ad hoc manner, with re-useable software that I developed myself.

That year after college I was kind of shiftless. I chose to just not participate in the whole interview circus that normally accompanies one's senior year of MIT and let myself graduate without having plans of any kind for the future. I think that I was tired mostly and was kind of sick of the universe of engineering and all of its hurdles and dog and pony shows, and I think I kind of half-understood that this would be the last time in my life in which it would be possible for me to be totally free of that universe -- until, I guess, retirement.
Anyway, I lived in the slums of Boston -- that is, Roxbury -- with a friend of mine from high school and some roommates we found, one of whom ended up going to jail (but that is another story), worked temp jobs, and didn't do much of anything ... except for some reason I became obsessed with aperiodic tilings of the plane and substitution tilings generally and put a lot of effort into coming up with one of my own. I wanted to find, for reasons that aren't clear to me now, an aperiodic analog of the normal regular hexagon tiling.
It's kind of a blur -- I don't really remember where the above came from as a sequence of steps -- but the above is a patch of the best tiling I discovered during this period. It is generated via the following substitutions. The lengths of the sides of the triangles and trapezoids are multiples of φ, the golden ratio.

As far as I know, the above is original and has not been discussed in the literature but I never was able to come up with local matching rules on the hexagons to enforce aperiodicity.
At that time I did develop software for working on these sorts of structures, but what I came up with, in retrospect, wasn't The Right Thing. This wasn't altogether my fault. Those were different times: no GitHub, no free code. If I wanted to output into a vector format it would have to be my own vector format. If I wanted to render that format to the screen I would have to write code to render that vector format to the screen, and so on. Also, GUI applications were all the rage and were still new and shiny, so I was biased in that direction. I never really liked what I came up with then, and it wasn't portable anyway; it was a black-and-white Macintosh application in C.
Having the negative example of that project all those years ago made it easy to see what I actually needed: not a GUI application but a programming language. So last week, I wrote one: a little language for specifying recursive structures like the above and rendering them in SVG. I'm calling it Snurtle because it is basically a combination of Python (a snake) and the turtle-based graphics of Logo. I chose Python syntax because I wrote the Snurtle interpreter in Python and thus got tokenization for free using Python's "tokenize" module.
So, for example, the following simple substitution

is represented by the following Snurtle script:
[code=code:1]
sub square(n):
    terminal:
        poly("bisque", "orangered", 1):
            forward(n)
            turn(PI/2)
            forward(n)
            turn(PI/2)
            forward(n)
    nonterminal:
        rectangle(n)
        branch:
            turn(PI)
            forward(n/2)
            turn(-PI/2)
            forward(n/2)
            square(n/2)
        square(n/2)

sub rectangle(n):
    terminal:
        poly("lightskyblue", "orangered", 1):
            forward(n/2)
            turn(PI/2)
            forward(n)
            turn(PI/2)
            forward(n/2)
    nonterminal:
        square(n/2)
        turn(PI)
        square(n/2)
[/code]
yielding

which, I think, is self-explanatory except for the "branch" block. Branch blocks tell Snurtle to push the state of the turtle onto a stack and then pop it when exiting the branch, i.e. branch works like the bracket operators in L-systems. Also, the following Snurtle constructs are not illustrated in the above:

• flip blocks: similar in syntax to branch above. Tell Snurtle to multiply the angle arguments passed to turn statements by -1. (i.e. flipping them)
• stroke and fill blocks: similar to "poly" above.
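To make the branch semantics concrete, here is a minimal, hypothetical turtle in Python -- not Snurtle's actual implementation -- showing the push/pop behavior that branch blocks give you:

```python
import math

class Turtle:
    """A minimal turtle whose state can be pushed and popped, the way
    Snurtle's branch blocks (or '[' / ']' in an L-system) work."""
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0
        self._stack = []

    def forward(self, d):
        self.x += d * math.cos(self.heading)
        self.y += d * math.sin(self.heading)

    def turn(self, a):
        self.heading += a

    def push(self):   # entering a branch block
        self._stack.append((self.x, self.y, self.heading))

    def pop(self):    # leaving a branch block
        self.x, self.y, self.heading = self._stack.pop()
```

For example, forward(10), push, turn and wander, then pop: the turtle is back at (10, 0) with its original heading, and whatever was drawn inside the branch stays drawn.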

Anyway, here is my snurtle source code. Usage to generate the above would be:[quote]
snurtle.py -w -s square -k 500 -m 8 -o squares.html -c "10,10" snurtle_scripts\squares.snu[/quote]
where

• -w : wrap the generated SVG in HTML (so it can be viewed in browser)
• -s : initial substitution used to kick off the recursion
• -k : scale factor in SVG output
• -m : max stack depth before substituting in terminal blocks rather than nonterminal and ending the recursion
• -o : output filename
• -c : starting coordinate when generating SVG. (There is also a -d option which similarly specifies the width and height attributes of the SVG, but I am using the default in the above.)

Snurtle is pretty rough at this point, so check back here for updates if you are interested. In particular, substitutions can't currently have more than one argument, and I want to make the color, etc., parameters to poly blocks optional and maybe allow the user to specify z-order.

Source

## The World

For the game I'm working on I need to have sprites that travel along curving paths.

I'm talking about the sprites traveling along somewhat arbitrary curves, meaning curves that look good, not curves that result from gravity or other physical forces. If you need those kinds of curves, e.g. the parabolic trajectories of cannonballs, you need to simulate the forces acting on the sprites and that is not what I'm talking about in this post.

Caveat aside, an arbitrary curving path is a pretty common thing to need, but I think it is unnecessarily headache-inducing because curves in graphics are just confusing. Maybe you've found yourself thinking

• I don't know what the difference is between a spline, a bezier curve, bezier curves of various degrees, a B-spline, a t-spline, etc.
• I don't know which of the things mentioned above I need.
• Every time I try to read the Wikipedia article on these things the math gets heavy and my eyes glaze over

etc.?

So assuming it's not just me, as a public service I'm going to try to clear this up.

Short version: if you need a sprite that travels along a curving path from point A to point B in x amount of time, you probably need a cubic bezier curve, and generally, in 2d game programming, n cubic bezier curves concatenated together are probably all you will ever need. You can concatenate them yourself if you need to, so what you really need is a way to define a cubic bezier and a function to get the point along the bezier at some time t. Despite what you would think from trying to read the literature, this turns out to be trivial -- I mean less than a dozen lines of code.

More thoroughly, explaining away my bulleted list above:

• A spline is a more general term than a "bezier curve": a bezier curve is a particular polynomial function (that I will implement below) that defines a curve that goes from point A to point B given some control points. A bezier spline is an aggregation of n of these. A general spline can be an aggregation of other kinds of curves e.g. a B-spline is composed of a bunch of curves that are generalizations of bezier curves.
• The only kinds of beziers you need to be concerned with are quadratic and cubic beziers. Quadratic beziers are just parabolas and are not interesting. Cubic beziers are curves that go from point A to point B and are tangent to a given line at A and tangent to a given line at B. They are defined by A and B plus two other control points that define the tangent lines and the weight that the tangent lines have on the curve.
• Cubic bezier curves are easy to implement. See below.

So here is my curve "library"

Bezier.h
[code=code:1]
#include <utility>

class Bezier {
private:
    float x1_, y1_, x2_, y2_, x3_, y3_, x4_, y4_;

public:
    Bezier(float x1, float y1, float x2, float y2,
           float x3, float y3, float x4, float y4);
    std::pair<float, float> getPoint(float t) const;
};
[/code]

Bezier.cpp

[code=code:1]
#include "Bezier.h"

Bezier::Bezier(float x1, float y1, float x2, float y2,
               float x3, float y3, float x4, float y4) :
    x1_(x1), y1_(y1), x2_(x2), y2_(y2),
    x3_(x3), y3_(y3), x4_(x4), y4_(y4)
{
}

std::pair<float, float> Bezier::getPoint(float t) const {
    float x = (x1_ + t*(-x1_*3 + t*(3*x1_ - x1_*t))) +
              t*(3*x2_ + t*(-6*x2_ + x2_*3*t)) +
              t*t*(x3_*3 - x3_*3*t) +
              x4_*t*t*t;
    float y = (y1_ + t*(-y1_*3 + t*(3*y1_ - y1_*t))) +
              t*(3*y2_ + t*(-6*y2_ + y2_*3*t)) +
              t*t*(y3_*3 - y3_*3*t) +
              y4_*t*t*t;
    return std::pair<float, float>(x, y);
}
[/code]

You define a cubic bezier by constructing a Bezier object with four points. (x1,y1) and (x4,y4) are the start and end of the curve. The curve will be tangent to the line segment (x1,y1)-(x2,y2) at its start and tangent to (x3,y3)-(x4,y4) at its end. To get a point along the curve, call getPoint(t), where t=0.0 gives you (x1,y1), t=1.0 gives you (x4,y4), and 0.0 < t < 1.0 gives you the point at which 100t percent of the curve has been traversed, e.g. 0.5 is halfway (in terms of t, not arc length, unfortunately).
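On that arc-length caveat: a common workaround -- sketched here in Python as an illustration, not part of the C++ library above -- is to sample the curve finely, record cumulative chord lengths, and invert the table with a binary search to get an approximately constant-speed traversal:

```python
import bisect

def bezier_point(p1, p2, p3, p4, t):
    """Evaluate a cubic bezier (Bernstein form) at parameter t."""
    u = 1.0 - t
    x = u*u*u*p1[0] + 3*u*u*t*p2[0] + 3*u*t*t*p3[0] + t*t*t*p4[0]
    y = u*u*u*p1[1] + 3*u*u*t*p2[1] + 3*u*t*t*p3[1] + t*t*t*p4[1]
    return (x, y)

def arc_length_table(p1, p2, p3, p4, samples=256):
    """Cumulative chord lengths at evenly spaced t values."""
    pts = [bezier_point(p1, p2, p3, p4, i / samples) for i in range(samples + 1)]
    lengths = [0.0]
    for a, b in zip(pts, pts[1:]):
        lengths.append(lengths[-1] + ((b[0]-a[0])**2 + (b[1]-a[1])**2) ** 0.5)
    return lengths

def t_at_fraction(lengths, s):
    """Map s in [0,1] (fraction of total arc length) to an approximate t."""
    target = s * lengths[-1]
    i = bisect.bisect_left(lengths, target)
    return min(i, len(lengths) - 1) / (len(lengths) - 1)
```

A sprite that calls getPoint(t_at_fraction(lengths, elapsed/duration)) each frame then moves at roughly uniform speed along the curve, at the cost of one precomputed table per bezier.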

So that's it. Code is here. I also included a Win32 GDI project that draws cubic beziers, screenshot below. (The sample program is also a little example of how to write a very basic Win32 program, which these days younger programmers seem to appreciate as a sort of parlor trick...)

Source

## Syzygy Update

So, looking at the early entries of this blog, I must have started working on Syzygy around the beginning of 2012, because by March 2012 I had the Win32 prototype done. At that time, I didn't own a Macintosh, didn't own an iOS device, had never heard of cocos2d-x, and, professionally, was still writing image processing code for a company in Seattle. Since then that company was killed, and I moved from Seattle to Los Angeles ... but anyway, as of today, about a year later, I have the primary functionality of Syzygy running on my iPad, mostly re-using the source code from that prototype. I haven't really been working on it the whole time -- there was a lot of moving-to-California in there somewhere -- but here's a screenshot (click for full-size):

There's a common question in the mobile games forum of gamedev.net: "How can I make a game for iOS only using Windows?". The answer to this is either (1) you can't or (2) write your game to Marmalade or cocos2d-x on Windows and then when you are done get a friend with a Mac to let you register as an Apple developer, build under Xcode, and submit to the App store. I always say (1) is the serious answer and if you are unserious, or want to develop a really simple game, then go with (2). Basically I say this because you need to run your game on a device frequently and early, and I'm seeing the truth to this now.

Now that I have Syzygy running on a device, I'm seeing issues with input which are artifacts of running on an iPad. The prototype implemented mouse input as a stand-in for touch input. It turns out touch screens and mice aren't the same thing. The game plays on the device, but when you drag tiles your finger visually blocks the tile. You can't see the tile you are dragging -- this seems like it wouldn't be a big deal, but it kind of is. This sort of thing is the reason, in my opinion, that if you are not testing on a device during primary development then you are not being serious. (However, I'm under the impression that you need a Mac to even deploy to a device; there was a thread on Gamedev in which someone said you can deploy to an iOS device from a PC, so YMMV...)

So not sure what I'm going to do about this, I'm thinking of making the tile the user is dragging larger and offset upper-leftwardly while the user is dragging it. The problem with this is to make it look nice I'd have to have large versions of all the relevant art and some of it I don't even really remember how I rendered in the first place...

Source

gamedev.net recently linked to this video about the making of Marble Madness, which got me thinking about the raster-to-vector via contour extraction code I wrote in Python last year and the fact that, it being the future and all, I can probably find all of the art from Marble Madness unrolled into a single image file. So three clicks later and, oh yeah:

(Click the image for full size)

So I ran the above through my raster-to-vector converter. Here are the results (zipped; this is a huge file, over 20,000 SVG paths). This file kills Adobe Illustrator -- it took 15 minutes just to open it.

SVG for a single level is more manageable. Here's the 2nd level as unzipped SVG (curious to see if various browsers can handle this). Illustrator could handle this one pretty well, so I experimented with applying various vector filters. Below is a bit of it with the corners rounded on the paths (click for a larger version):

Not sure what this all amounts to ... just some stuff I did today. However I did learn

• Python is slow. It took a really long time to generate the big file -- seemed too long. I think C++ would've been an order of magnitude faster; that's my intuition anyway.
• My contour extraction code really works, which is kind of surprising -- I thought for sure running it on something like this would crash it. (It does still have the problem that it can't handle paletted raster image formats, but that's the only bug I encountered.)

Source

## Sprite Packing in Python

I've been working on my puzzle game Syzygy again, after a long hiatus, and am now writing to iOS/cocos2d-x rather than just working on the prototype I had implemented to Win32.

The way you get sprite data into cocos2d is by including as resources a sprite sheet image and a .plist file, which is XML specifying which sprite is where. Plists are apparently an old Mac thing -- I had never heard of the format. A .plist describing a lot of sprites would be a chore to write by hand, so there is a cottage industry of sprite packing applications.

I tried out one called TexturePacker and liked it a lot -- except that it is crippleware; I need a few features that are only in the full version; plus I can't stand crippleware; and I think $30 is too much for something that I can write myself over the weekend. So I decided to write my own sprite packer over the weekend. The result is pypacker, a python script: source code here.

Usage is like

[quote]pypacker -i <image_directory> -o <output_prefix> -m <mode> [-p][/quote]

where

• <image_directory> = a path to a directory containing image files (in any format supported by the python PIL module).
• <output_prefix> = a path + filename prefix for the two output files, e.g. given C:\foo\bar the script will generate C:\foo\bar.png and C:\foo\bar.plist.
• <mode> = the packing mode. Can be either "grow" or fixed dimensions such as "256x256". "grow" tells the algorithm to begin packing rectangles from a blank slate, expanding the packing as necessary. "256x256" et al. tell the algorithm to start with the given image size and pack sprites into it by subdivision, throwing an error if they won't all fit.
• -p = optional flag indicating you want the output image file dimensions padded to the nearest power-of-two-sized square.

The algorithm I used is a recursive bin packing algorithm in which sprites are placed one-by-one into a binary tree. I based it directly on Jake Gordon's work in Javascript for generating sprite sheets for use in CSS, described here, only my algorithm is sort of like version 2 of his, i.e. I fixed an issue that bugged me about his algorithm.
The core of the algorithm is a function that looks like this:
[code=code:1]
def pack_images(named_images, grow_mode, max_dim):
    root = ()
    while named_images:
        named_image = named_images.pop()
        if not root:
            if (grow_mode):
                root = rect_node((), rectangle(0, 0, named_image.img.size[0], named_image.img.size[1]))
            else:
                root = rect_node((), rectangle(0, 0, max_dim[0], max_dim[1]))
            root.split_node(named_image)
            continue
        leaf = find_empty_leaf(root, named_image.img)
        if (leaf):
            leaf.split_node(named_image)
        else:
            if (grow_mode):
                root.grow_node(named_image)
            else:
                raise Exception("Can't pack images into a %d by %d rectangle." % max_dim)
    return root
[/code]
We iterate through the images we want to pack. For each image, we try to find a rectangular node in the tree that can contain it. If one exists, we place the image in the node and subdivide the node such that the remaining space, not taken up by the image, is available in the tree (this is what split_node does). If such a node cannot be found, we throw an exception if we are not in 'grow' mode, or expand the root rectangle node to accommodate the new image if we are.

This routine is very similar to the Javascript implementation I linked to above. The difference is in the details of the structure of the binary tree. Jake Gordon's Javascript implementation uses a node type that stores an image in the upper left and has children that he calls 'right' and 'down', like this:

Since actual data is always burnt into the upper left, the tree can never subdivide into that space; we can never recurse into the upper left. This makes the grow_node routine awkward to write. When we grow the root we want either to extend to the right or to extend down; if the upper left can be a node rather than image data, this is a simple matter of creating a new node and making the existing root its upper or left child.
Anyway, Jake Gordon's implementation results in a packing tree that cannot both grow right and grow down simultaneously, because it would have been complicated to implement this. This limitation is not a problem practically as long as you sort the images from largest to smallest before running the packing algorithm -- a standard heuristic from the bin packing literature. I, however, wanted to see if the standard sorting heuristic really accomplishes anything; I wanted to be able to pack rectangles in random order. I therefore simplified the trinary node structure of the Javascript implementation into true binary nodes, oriented either horizontally or vertically, like this:

Further, now only leaves can contain images, and if a node is not a leaf it always has two valid, that is non-null, children. Using this type of tree structure makes the full grow_node routine more or less trivial. Beyond that, I'm using the following heuristics:

• If the orientation (horizontal or vertical) of a split is not forced, split with the orientation that will result in the new empty node having the largest area.
• If the orientation of growing the root rect is not forced, grow in the direction that leads to the smallest increase in the maximum side length of the root rectangle. (This heuristic enforces squarishness and is extremely important. Without it the grow version of the algorithm is basically unusable, and in this sense this grow heuristic can be considered part of the algorithm rather than a heuristic that can be swapped out.)

Sorting by size (max side length) turns out to be about a 6% improvement with this algorithm. Here's 500 rects packed with sorting (top) and without (bottom):

EDIT: Actually the version I posted isn't the one that can handle expanding right and down simultaneously. (Although it doesn't affect results, because sorting by size is hardcoded into the script.) Will post the newer version soon...
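For illustration, here is a minimal, hypothetical Python sketch of the fixed-size (non-growing) version of this kind of packing tree -- not the pypacker code itself. Leaves are empty or hold one image; internal nodes always have exactly two children, split horizontally or vertically:

```python
class Node:
    """Binary packing-tree node: a leaf is empty or holds one image;
    an internal node has exactly two children (one split)."""
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.image = None
        self.children = None

    def insert(self, img_w, img_h, name):
        """Place an img_w x img_h image; return its (x, y) or None."""
        if self.children is not None:            # internal node: recurse
            return self.children[0].insert(img_w, img_h, name) or \
                   self.children[1].insert(img_w, img_h, name)
        if self.image is not None or img_w > self.w or img_h > self.h:
            return None                          # occupied or too small
        if img_w == self.w and img_h == self.h:  # exact fit: store here
            self.image = name
            return (self.x, self.y)
        # split along the axis that leaves the larger free area
        dw, dh = self.w - img_w, self.h - img_h
        if dw * self.h >= dh * self.w:           # vertical split (left/right)
            self.children = (Node(self.x, self.y, img_w, self.h),
                             Node(self.x + img_w, self.y, dw, self.h))
        else:                                    # horizontal split (top/bottom)
            self.children = (Node(self.x, self.y, self.w, img_h),
                             Node(self.x, self.y + img_h, self.w, dh))
        return self.children[0].insert(img_w, img_h, name)
```

Because both children of a split are ordinary nodes, nothing is ever "burnt in" at the upper left, which is the property that makes growing the root in either direction straightforward.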
Source

## The Cocos2d-x Device Orientation Bug

This took me all morning to figure out. I'm posting here so there is clear information on this subject in at least one place on the internet.

The situation is this: when targeting iOS 6 using Cocos2d-x v2.0.2, there is a bug in the auto-generated "Hello, World" code in a fresh project such that the compiled game will not display in landscape orientation, even after following the steps in this item from the Cocos2d-x documentation (such as it exists). The solution is to follow the steps enumerated in this note from Walzer Wang.

This fix is probably already in v2.1, but 2.1 is still beta as far as I know, so this issue is probably still in a lot of code out there...

Source

## Thoughts on porting Syzygy to iOS

I've started trying to figure out the way in which I'm going to port Syzygy to iOS. I don't actually own a Mac -- though I may get one this weekend -- so this is all theoretical at this point.

What I have right now is an implementation written in C++ to the Win32 API. Part of this implementation is a very basic 2D game framework. This framework has an abstract widget class that has render and update methods. I didn't call this class "sprite" because it is more general (and basic) than a sprite class: it can be implemented as anything that knows how to update and draw itself. For example, I have text widgets that call the Win32 DrawText function in the draw method. Widgets are contained in GamePhase objects; GamePhases have a vector of lists of widgets, where each widget list represents a layer, so the order of a GamePhase's list vector effectively enforces a z-order. GamePhases have update and render functions that can do phase-specific rendering (e.g. draw a background) and then call the render and update methods of the widgets contained in the layers.
GamePhases also have a predicate IsPhaseComplete and an accessor GetNextPhase, which are used along with update and render to implement the game loop. That's basically it as far as a game engine goes.

This code clearly is not logically platform-dependent. In practice, Windows leaked into the implementation in the Render method, which takes an HDC as a formal parameter, and elsewhere where I wasn't being careful in avoiding Win32 types. So my initial plan for porting to iOS was to refactor the 2D game framework part of the codebase to be truly platform independent, and then to find an open source 2D drawing library that someone else implemented on top of OpenGL ES and reimplement the rendering code in terms of that library. The trouble is that this 2D OpenGL-based library surprisingly doesn't seem to exist. There is a project someone did called Gles2d, which is what I want, but it is orphaned and was never adapted to iOS anyway; it was implemented for the GamePark32 hardware, I believe.

So my options are:

1. Stick with the original plan and adapt the Gles2d codebase to iOS myself.
2. Stick with the original plan and write my own 2D-graphics-on-OpenGL-ES layer.
3. Throw out everything and reimplement the application in Objective-C with Cocos2d.
4. Keep whatever I can of my code and refactor to use the Marmalade framework in C++.
5. Keep whatever I can of my code and refactor to use Cocos2d-x in C++.
6. Stick with the original plan and write the platform-dependent drawing stuff to SDL 1.3.

Long story short, I think I'm going to do (5). (1) and (2) are just not work I feel like doing at this time. (3) would be a good solution, but I'd be locked into iOS and would have to gain more competence at Objective-C development than I feel like investing time-wise at this point -- however, I may end up doing things this way if it becomes clear that it is the easiest approach.
(4) is out because I don't think I need a very powerful game engine, Marmalade costs money, and I wouldn't be using most of it. I'm ruling out (6) because I don't really trust SDL 1.3 on iOS; maybe I'm wrong about this, but SDL doesn't officially support iOS, and it just seems like there would be problems.

So (5) ... Cocos2d-x is a reimplementation of the Cocos2d API in C++ rather than Objective-C. It is designed to be cross-platform (i.e. across iOS and Android specifically), and the project looks alive and well. There is even a Win32 build of it that uses PowerVR's GLES emulator for Windows, so I could in theory start work without actually owning a Macintosh. The only problem I see with Cocos2d-x is that documentation seems to be non-existent, and it is being developed by guys who are clearly speaking English as a second language, so I may have trouble finding answers to questions and so forth ... we will see.

Anyway, ... thoughts?

Source

## Syzygy for Win32, pre-pre-alpha release

I'm releasing a prototype version of a puzzle game, Syzygy, that I intend eventually to port to iOS and possibly Android. The prototype is written to the Win32 API and should run on basically any Windows system without installing anything. Syzygy can be downloaded here. Just unzip the three files into a directory and run the executable.

I have the Syzygy prototype parametrized such that a single XML file defines its gameplay. I'm looking for play testers who are interested in abstract puzzle games to play the game and provide feedback regarding good values for the definable parameters. If I get multiple helpful submissions I'll give $60 via paypal to whoever has the best revised XML file. Here's a brief explanation of the XML file.

The game is a Scrabble-like word game re-imagined as a one-player action puzzle game. Here's a screenshot (click on the image for a full-sized version):

Basically the game works as follows:

• The bar on the left is the game timer. When it is empty the game is over.
• Letter tiles randomly appear, and the player must position the tiles in a legal crossword-style grid by dragging them with the mouse pointer.
• When the player has positioned tiles such that they form two or more legal connected words, the player can double-click on one of the tiles to "lock them in", and the two or more words are then scored as follows (this is a modified version of the scoring used in the game Literaxx, a public domain Scrabble variant):

• Yellow tiles are 1 point, green tiles are 2, blue tiles are 3, and red tiles are 5
• A tile on a board cell of matching color receives triple its point value.
• The 2x and 3x board cells are double and triple word scores.
• There are two levels of parameter controlled bonuses for long words (see the PDF linked to above)
• The remaining time in the game timer is increased proportionally to the point value earned by a successful lock-in, and the player's score is increased by the score value of a successful lock-in times a level multiplier.
• Locked in tiles can be played off of but cannot be moved.
• Each tile has a bar timer widget on its right. When this timer expires, the tile disappears, negatively affecting the global timer if the tile that expires is not locked in.
• There are three kinds of special tiles

• Random tiles: Random tiles look like gray transparent letter tiles (the weird looking 'M' tile above is one). They cycle through the alphabet until they are dragged the first time at which point they behave like normal letter tiles with no point value.
• Bomb tiles: (pictured above) When the user drags a bomb tile onto a group of connected locked-in or non-locked-in letter tiles, the target tiles will be destroyed without affecting the user's score or game timer.
• Juice tiles: (appear as lightning bolt icons, not shown above) When the user drags a juice tile onto a group of connected locked-in or non-locked-in letter tiles, the tiles' local timer widgets receive additional time.
• The game levels up after a certain number of tiles are locked in. The game timer is re-filled at level transitions.
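To make the scoring rules above concrete, here is a hypothetical C++ sketch of my reading of them. None of these names come from the actual Syzygy code, and the real point values and multipliers are parameters defined in the XML file; this just shows how the tile values, matching-color triples, and 2x/3x word cells compose.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of the lock-in scoring described above.
enum class TileColor { Yellow, Green, Blue, Red };

struct PlacedTile {
    TileColor color;
    bool on_matching_cell;  // the board cell's color matches the tile's
    int word_multiplier;    // 1 normally, 2 on a "2x" cell, 3 on a "3x" cell
};

int TileValue(TileColor c) {
    switch (c) {
        case TileColor::Yellow: return 1;
        case TileColor::Green:  return 2;
        case TileColor::Blue:   return 3;
        case TileColor::Red:    return 5;
    }
    return 0;
}

// A tile on a matching-color cell counts triple; 2x/3x cells
// multiply the value of the whole word.
int ScoreWord(const std::vector<PlacedTile>& word) {
    int sum = 0, word_mult = 1;
    for (const PlacedTile& t : word) {
        int v = TileValue(t.color);
        if (t.on_matching_cell) v *= 3;
        sum += v;
        word_mult *= t.word_multiplier;
    }
    return sum * word_mult;
}
```

A lock-in would then sum ScoreWord over each of the two or more words formed, apply the long-word bonuses, and scale by the level multiplier.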

Oh, and graphics, sound, interface, and the Windows version itself are all temporary, so I'm not really looking for feedback about this sort of stuff -- just gameplay, feature ideas, and any reproducible crashes, I guess.

Source

## Raster-to-Vector plus not bitching about significant whitespace

So this weekend I learned Python and implemented a basic raster-to-vector converter in it. It converts whatever file formats the Python PIL module supports to SVG.
Here are the little guys from Joust embedded as SVG (this probably doesn't work on Internet Explorer, but if you're using IE you have bigger problems, brother):

See my blog, don't think I can embed here

and here are the fruits from Pac-Man (my favorite is the Galaxian fruit):

ditto

Specifically, I implemented classical contour extraction as described in the 1985 paper "Topological Structural Analysis of Digitized Binary Images by Border Following", which is the algorithm that OpenCV's cvFindContours uses, modified such that it supports color images rather than black-and-white bitmaps. (This is something that I may eventually have to do at work for real, i.e. in C++, so I thought it would be a good way to learn Python and make sure my modified algorithm was actually going to work -- it's not a trivial change, because color images allow two contours to be adjacent, which can't happen in a bitmap image.)
Here's the code. Usage is like:
python Sprite2Svg.py "c:\work\fruits.png" "c:\work\fruits.svg" 4 "#000000"
where the last two arguments are optional. The 3rd argument is a scale factor in pixels. The 4th argument is a background color that will be removed from the output. I think there's a bug right now in which the code doesn't support paletted images; trivial to fix, but I wanted to fix it in some general way and then forgot about it.
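To see why the color generalization isn't trivial: in a color image two differently-colored regions can share an edge, so their boundaries touch, whereas in a binary image a foreground contour always has background on its far side. Since the eventual real version would be C++, here is a toy C++ sketch of just that adjacency point -- this is not the border-following algorithm itself, and all names are mine:

```cpp
#include <cassert>
#include <vector>

// Toy illustration: mark the boundary cells of each colored region in a
// label grid (0 = background, >0 = color label). A cell is a boundary cell
// if any 4-neighbor is off-grid or carries a different label -- so two
// regions' boundaries can be directly adjacent, which never happens
// between two foreground contours in a binary image.
using Grid = std::vector<std::vector<int>>;

Grid BoundaryCells(const Grid& g) {
    const int h = static_cast<int>(g.size());
    const int w = static_cast<int>(g[0].size());
    Grid out(h, std::vector<int>(w, 0));
    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (g[y][x] == 0) continue;
            for (int k = 0; k < 4; ++k) {
                int nx = x + dx[k], ny = y + dy[k];
                // off-grid or different label => boundary cell
                if (nx < 0 || ny < 0 || nx >= w || ny >= h || g[ny][nx] != g[y][x]) {
                    out[y][x] = g[y][x];
                    break;
                }
            }
        }
    }
    return out;
}
```

In the grid {{1, 2}} both cells are boundary cells of their respective regions and sit side by side -- the case the binary algorithm never has to handle.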
Anyway, things I like about Python:

• Significant whitespace turns out to not be annoying. (Who knew?)
• Coroutines!... about twenty years ago I was in a class in which I had to write a compiler in a language called CLU. All I remember about CLU is that (a) I once apparently wrote a compiler in it and (b) Coroutines! -- well it had the generator/yield construct, anyway. I wish C++ had generator/yield
• It isn't Perl. Can't stress this one enough.

Things I don't like about Python:

• The thing with version 3 being better but nobody using it.
• The issue I'm talking about here is annoying and I think the "nonlocal" declaration isn't the best solution in the world.

Source

## Playing simultaneous sounds in Win32

There is a nice unscary Win32 API call for playing sounds that is unpretentiously called "PlaySound". It is extremely easy to use. Unfortunately, PlaySound(...) has one problem that makes it unsuitable for even casual game development: although it will play audio asynchronously, it will not play two sounds simultaneously. Depending on what you are doing this may not be a big deal, e.g. warning beeps in an editor app and such, but for anything that is media rich it basically means that you can't use PlaySound.

This leaves you with several options:

1. Use a third party library.

2. Use the new Windows Audio Session API which came in with Vista.

3. Use the (legacy) low-level Win32 MCI routines.

4. Use the (legacy) low-level Win32 waveOut interface.

So if you have serious audio needs, The Right Thing is 1. or 2. above.

A little background on what I'm doing: I'm working on a Win32 C++ implementation of a puzzle game that I am eventually going to directly port to iOS. I don't really need serious audio as I just want something that is playable for game design debugging, i.e. balancing. I do want simultaneous sound effects though because I plan on releasing this Win32 prototype to gather feedback and I think sound effects add playability in this case.

Anyway, I looked into 1. and 2. On 1., I couldn't find a library that was dirt simple enough to justify its use in my project given all I am really looking for is a drop-in replacement for PlaySound that supports asynchronous simultaneous audio. On 2., well, the API is dense; I'm not looking for a research project and I couldn't find a lot of sample code -- maybe, it's too new? or maybe I wasn't looking in the right places.

On 3., if you don't know what I'm talking about, I'm talking about this function and friends. I remember using the MCI (Media Control Interface) routines back in the day, probably around 1997, and they are pretty easy to use. It's a weird little API relative to the rest of Win32: you send command strings to a device object that look like "open 'foo.wav' alias Foo" and so forth, and I think they will do simultaneous audio output. The problem is the MCI routines don't let you play sounds from memory, and I want to play WAVE resources that are embedded in the executable. To use MCI I would have to write my resources to a temp directory at startup and then play the serialized files. This seemed too ugly.

Which leaves 4. The problem with 4. is that waveOut et al. are like the opposite of PlaySound: super low-level and absolutely not user-friendly. Fortunately there is a lot of code out there. In particular I found CWaveBox on CodeProject, by a user named Zenith__, that does basically what I need. I had lots of problems with this code, however, and ended up significantly re-working a version that was itself a re-work of the original CWaveBox, posted in the CodeProject comments. I chose to work with the comments version because it ran the demo app as well as the original did, but was much shorter.

The code I started with is verbose, baroque, and in C rather than C++. I re-factored it in the following ways:

• Replaced two C-style arrays: one with a std::vector and the other with an std::map
• Cleaned up the .h file by making things that should be private private and moving as many implementation details into the .cpp file as I could
• Simplified some of the thread synchronization by using WaitForSingleObject instead of looping and polling
• Got rid of all C-isms, i.e. mallocs, callocs, memsets, memcpys, etc., replacing them with C++-style allocation, std::copy, std::fill, etc.
• Got rid of magic constants by noticing many of them served as booleans, so replaced them with booleans
• Changed names of crazily named variables and functions
• Generally got rid of craziness ... we're talking goto's.

I honestly don't understand the code involved, which is weird given the amount of work I did on it. I mean, I get that there's a thread that's running and that it is playing chunks of waves in a loop, etc. That's the level at which I understand it ... kind of reminds me of this time I was working a DARPA contract and ported a function for converting from Military Grid Reference Numbers to longitude and latitude from Fortran to C without actually understanding the algorithm or knowing Fortran...

My code is here. Usage goes like this:

[source lang="c++"]
#include "WaveManager.h"
#include "resource.h"
// ...
{
    WaveManager wave_mgr( kMaxNumSimultaneousWaves );
    // ...
    wave_mgr.Play( ID_SND_BUZZ );
    // ...
    // etc.
}
[/source]

It actually works remarkably well.

However, the code still needs work if anyone is interested. There are a lot of functions returning error codes that are never checked. I think most of these should be changed to return void and should just throw if something serious happens. Also I think in the original implementation there could have been a race condition if you tried to load a wave after the playing thread was running. I wrapped the loading stuff in the critical section the playing thread uses, but this still might be a problem -- I'm not sure. If you use this code it is safest to load all your sound assets at start up before you start playing anything. But in terms of fixing it up more, I think the main thing is that someone who actually knows what they are doing vis-a-vis wave output could probably cut the verbosity in half or so.

Oh, and one more note on my work, I used some C++11 stuff in there ... basically I implemented loading from memory by taking the guts of the existing LoadFromFile implementation and making it into a function template that takes a handle type as the template parameter and has an additional formal parameter that is an std::function used for loading data. In the LoadFromFile case I instantiate the template with a file HANDLE as the template parameter and pass the function the Win32 call ReadFile wrapped thinly in a lambda. In the load from resource case, the template parameter is a custom buffer struct and the functor argument becomes, basically, a lambda wrapper around std::copy. But, anyway, just a heads up that this code will only compile under Visual Studio 2010 because of the lambdas. If you're interested in using it but don't do lambdas, it would be pretty easy to replace them with regular function pointers.
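For what it's worth, the shape of that refactoring looks something like the following. This is a stripped-down, hedged sketch -- the names are mine, not the real WaveManager code, and a plain memory buffer stands in for the Win32 resource case so it compiles anywhere:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Sketch of the load-template idea: one template carries the shared
// loading logic, and a std::function supplied per handle type does
// the raw reads.
using Byte = unsigned char;

struct MemoryBuffer {   // stands in for the load-from-resource case
    const Byte* data;
    size_t size;
    size_t pos;
};

template <typename HandleT>
bool LoadWaveData(HandleT& handle,
                  std::function<bool(HandleT&, Byte*, size_t)> read,
                  std::vector<Byte>& out, size_t n) {
    out.resize(n);
    return read(handle, out.data(), n);
}

// "Load from memory" instantiation: the reader is a lambda around std::copy.
bool LoadFromMemory(MemoryBuffer& buf, std::vector<Byte>& out, size_t n) {
    return LoadWaveData<MemoryBuffer>(
        buf,
        [](MemoryBuffer& b, Byte* dst, size_t count) {
            if (b.pos + count > b.size) return false;
            std::copy(b.data + b.pos, b.data + b.pos + count, dst);
            b.pos += count;
            return true;
        },
        out, n);
}
```

The file case instantiates the same template with a file HANDLE and a lambda around ReadFile, which is why the shared parsing logic only has to be written once.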

From The Curiously Recurring Gimlet Pattern

## Blitting with per-pixel alpha in Win32

I don't know how long the Win32 API has included an AlphaBlend() function. It came in whenever Msimg32.lib did, but I'm not sure when that was -- probably the Windows 98/2000 era, I guess.

It's always been a pain in the ass to use; global alpha is easy, but per-pixel alpha is a chore. You see a lot of people asking how to use this call on StackOverflow and other sites but basically never see a comprehensive reply. Generally the replies are variants of "It's easy, you just need to pre-multiply the alpha", which is true but unhelpful for two reasons: (1) doing so is a pain in the ass, i.e. show me some code, buddy, and, more importantly, (2) in order to burn the alpha values into the RGB you need to actually have image data that contains an alpha channel, but Win32 only natively supports loading BMPs, which generally don't.
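On the "show me some code" part, the pre-multiply itself is just scaling each color channel by alpha. Here's a minimal sketch, assuming a raw 32-bpp BGRA pixel buffer (the layout AlphaBlend's AC_SRC_ALPHA path expects); FreeImage's FreeImage_PreMultiplyWithAlpha does this same job for you:

```cpp
#include <cassert>
#include <cstdint>

// Pre-multiply alpha in place on a 32-bpp BGRA buffer: scale each color
// channel by alpha/255 so the AC_SRC_ALPHA blend composites correctly.
// (Sketch only -- the integer division here just truncates.)
void PremultiplyAlpha(uint8_t* pixels, int num_pixels) {
    for (int i = 0; i < num_pixels; ++i) {
        uint8_t* px = pixels + 4 * i;  // B, G, R, A
        const unsigned a = px[3];
        px[0] = static_cast<uint8_t>(px[0] * a / 255);
        px[1] = static_cast<uint8_t>(px[1] * a / 255);
        px[2] = static_cast<uint8_t>(px[2] * a / 255);
    }
}
```

After this pass, a fully transparent pixel becomes black with alpha 0, and a half-transparent pixel's color channels are roughly halved, which is exactly what the additive AC_SRC_OVER blend wants.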

So on (2), for completeness, I should say I think that it is possible to get Photoshop to spit out a BMP file with alpha information. I haven't tried it but the advanced options when saving a BMP have an option for the format "a8 r8 g8 b8". I always see it grayed out but am guessing that it's possible to do this somehow. Also I think that you can load PNG's using GDI+; I know next to nothing about GDI+ but if that's what you use I'm not sure the solution I propose below is worth it just to get out of having to write the pre-multiply function yourself.

However, the above aside, if you want alpha-blended images in your application you want to use PNG files, and if you are writing to Win32 you need to use a 3rd-party library. The two 3rd-party graphics libraries that people commonly use in Windows applications for things like loading PNGs and JPEGs are DevIL and FreeImage. I have no experience with DevIL and, frankly, it looks orphaned to me. What I suggest for blitting with semi-transparency in Win32 is FreeImage, which seems tailor-made for doing this.
So below is an implementation on top of FreeImage demonstrating:

• Loading PNG's (and other formats) as FreeImage data structures from Win32 resources.
• Converting from FreeImage to HBITMAPs with alpha burned in.
• Blitting the HBITMAPs with per-pixel alpha.

Here's the code I use for loading a PNG from a resource into FreeImage's data structures. FimgUtil::MemPtr and FimgUtil::Ptr are defined as boost::shared_ptr<FIMEMORY> and boost::shared_ptr<FIBITMAP>, respectively.
That is, I'm using smart pointers with custom deleters to call the appropriate FreeImage clean-up code on destruction. If this isn't your style, the following code can easily be modified to use raw pointers.

[source]

namespace {
bool GetResourceData(const char* name, BYTE*& ptr, unsigned long& size) {
    HRSRC hrsrc = FindResource(NULL, name, RT_RCDATA);
    if (hrsrc == NULL) {
        return false;
    }
    HGLOBAL handle = LoadResource(NULL, hrsrc);
    if (handle == NULL) {
        return false;
    }
    ptr = static_cast<BYTE*>(LockResource(handle));
    size = SizeofResource(NULL, hrsrc);
    return true;
}
}

FimgUtil::MemPtr FimgUtil::GetMemoryPtr(FIMEMORY* fimem) {
    return FimgUtil::MemPtr(fimem, FreeImage_CloseMemory);
}

FimgUtil::Ptr FimgUtil::GetBitmapPtr(FIBITMAP* fibmp) {
    return FimgUtil::Ptr(fibmp, FreeImage_Unload);
}

FimgUtil::Ptr FimgUtil::LoadFiBitmapFromResource(const char* rsrc_name, FREE_IMAGE_FORMAT format) {
    BYTE* data = NULL;
    unsigned long size = 0;
    if (! GetResourceData(rsrc_name, data, size)) {
        return FimgUtil::Ptr();
    }
    FimgUtil::MemPtr buff = GetMemoryPtr(FreeImage_OpenMemory(data, size));
    if (buff.get() == 0) {
        return FimgUtil::Ptr();
    }
    if (format == FIF_UNKNOWN) {
        format = FreeImage_GetFileTypeFromMemory(buff.get(), 0);
        if (format == FIF_UNKNOWN) {
            return FimgUtil::Ptr();
        }
    }
    return GetBitmapPtr(
        FreeImage_LoadFromMemory(format, buff.get(), 0)
    );
}
[/source]

To convert from an FIBITMAP* to an HBITMAP, rolled together with the above:
[source lang="cpp"]

HBITMAP FimgUtil::FiBitmapToWin32Bitmap(const FimgUtil::Ptr& src_ptr, bool premultiply_alpha) {
    if (premultiply_alpha) {
        FreeImage_PreMultiplyWithAlpha( src_ptr.get() );
    }
    HDC hdc_scr = GetDC(NULL);
    FIBITMAP* src = src_ptr.get();
    HBITMAP hbm = CreateDIBitmap( hdc_scr, FreeImage_GetInfoHeader(src),
                                  CBM_INIT, FreeImage_GetBits(src),
                                  FreeImage_GetInfo(src),
                                  DIB_RGB_COLORS );
    ReleaseDC(NULL, hdc_scr);
    return hbm;
}

HBITMAP FimgUtil::LoadPngResource( const char* rsrc_name, bool premultiply_alpha ) {
    FimgUtil::Ptr fibmp = LoadFiBitmapFromResource( rsrc_name, FIF_PNG );
    return FiBitmapToWin32Bitmap( fibmp, premultiply_alpha );
}
[/source]
and a wrapper for blitting:

[source]

void FimgUtil::BlitWithAlpha( HDC dst, int dst_x, int dst_y, int wd, int hgt, HDC src, int src_x, int src_y, float alpha ) {
    BLENDFUNCTION bf;
    ZeroMemory( &bf, sizeof(BLENDFUNCTION) );
    bf.BlendOp = AC_SRC_OVER;
    bf.BlendFlags = 0;
    bf.AlphaFormat = AC_SRC_ALPHA;
    bf.SourceConstantAlpha = static_cast<BYTE>( 255 * alpha );

    AlphaBlend( dst, dst_x, dst_y, wd, hgt, src, src_x, src_y, wd, hgt, bf );
}
[/source]

Source for my FreeImage utility functions is here.
Source