
Ramblings on code, etc.

## PlayFab Game Jam Postmortem

A couple of weeks ago I participated in a 48-hour game jam hosted by PlayFab here in Seattle, with fellow procedural planet veteran Alex Peterson, my good friend and composer [Leo Langinger](http://leolangingermusic.com/), and the fortunate last-minute addition of artist [Brent Rawls](https://brentrawls.wordpress.com/).

We were both surprised and excited to have won this game jam, especially given the number and quality of competing entries.

Our entry, the somewhat awkwardly-named Mad-scien-tile-ology, is a Unity-powered take on the classic 'match-3' game (a la Bejeweled or Candy Crush), with the addition of an all-consuming biological 'creep', which spreads across the game board as the player attempts to match tiles in order to slow its inexorable progress:

## What went right

• Separation of responsibilities
We had what I can only describe as an optimal team composition for such a short development cycle. Leo was able to focus on composing the music with a little sound design on the side, while Brent concentrated on the artwork, Alex handled all of the UI programming, and I wrote the gameplay logic.

• Time management
We hit the ground running, with an initial match-3 prototype playable early Saturday morning. Thereafter we went into planning mode, whiteboarded the roadmap, and scheduled checkpoints throughout the day for each deliverable and each integration point for assets. While the estimates weren't perfect, and we missed a solid handful of items, the organisation helped us to hit 90% of what we set out to do, and still get 2 relatively decent nights of sleep during the competition.

• Building on Unity
Alex and I have both played around with Unity in the past, but neither of us had ever shipped a full game with it. Unity represented fantastic time savings over building a game from scratch, and the asset pipeline alone saved us hours in wiring up the animations and audio.

• Having an artist, and an unusual art style
We hit on the idea of stop-motion papercraft before finding Brent, but honestly, were it not for his efforts it would have been a disaster. Brent ran with the idea and produced visuals that are striking and unusual. The real-world textures of the paper, the bold colour palette and the stop-motion animations really help the game stand out from other games of this type.

• Having a composer, and an original score
It's easy to underestimate the impact of music on a video game, and as one of the only teams with a professional composer, I think we had an advantage out of the gate. Leo composed original scores for the title screen, gameplay loop, and victory/loss conditions. The upbeat, detailed music really helps sell the 'mad science' theme, and in between composing he produced a full range of foley effects for gameplay events that ground the action on screen.

• Playtesting, playtesting, playtesting
We had a playable (if minimal) match-3 game from mid-morning Saturday, and that allowed us to playtest each new element as we added it. This can be a double-edged sword - when short on time, you can find yourself playing the game instead of implementing features - but it gave us a good idea of what did and didn't work in the context of the game, and allowed us to fit at least some balance tweaks into the time available.

The scrumboard

## What didn't go so well

• Version control + Unity = not so good
We are used to working with a variety of distributed version control systems, so at the start of the competition we threw everything into a git repository and went to town. Unfortunately, we quickly learned that Unity isn't terribly well suited to git. While the source files and assets are handled just fine, a great deal of the configuration and logical wiring is contained in the single main.scene file, and since that is a binary file, git sees it only as an opaque blob. After a couple of merges that resulted in having to rewire assets by hand, we fell back to editing separate scene files and copy/pasting into the main scene file before each merge.
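For anyone hitting the same wall: Unity can be switched to text serialization (Edit → Project Settings → Editor → Asset Serialization → Force Text), which turns scenes into diffable YAML, and Unity ships a Smart Merge tool (UnityYAMLMerge) that git can be pointed at. A rough sketch of the .gitattributes side, assuming the matching merge driver has been registered separately in git config:

```
# .gitattributes - route Unity's text-serialized assets through
# Unity's Smart Merge tool (UnityYAMLMerge).
# Assumes Asset Serialization is set to Force Text, and that an
# 'unityyamlmerge' merge driver has been registered in git config.
*.unity   merge=unityyamlmerge
*.prefab  merge=unityyamlmerge
*.asset   merge=unityyamlmerge
```

This doesn't make scene merges painless, but it at least gives git (and Unity's own merge logic) something better than an opaque blob to work with.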

• Time is the enemy
48 hours is not a long time, and irrespective of our planning, time grew increasingly tight as the competition progressed. While we were able to finish the game to a point we were fairly happy with, a number of features fell by the wayside, most notably highscores. We had intended to implement online leaderboards using our host PlayFab's SDK, but that work was deprioritised to make time for fixing critical gameplay bugs, and eventually we ran out of time.

• Last-minute changes are not your friend
This one largely follows from the last two points, but Alex and I both tweaked different elements right before we packaged the game for judging, and somewhere in our merge we managed to lose the explosion effect for the player's super-meter, and to drastically increase the pace and difficulty of the game in the final build. Neither change badly affected our ability to demonstrate the game, but the lesson learned is to put the pencils down and focus on testing in the final hours.

• Always be prepared to talk
Winning the contest came out of left field, and the surprise coupled with a general lack of sleep had us roughly ad-libbing our acceptance, and the subsequent quotes for the organiser's press release. While one shouldn't presume to win any competition, it turns out to be worth putting a few minutes of thought into what you would say if you do. Even a couple of sentences helps smooth over that deer-in-the-headlights moment.

Video game art 101

## What's next?

We're working on getting some of the more egregious bugs fixed, but if you're of a mind to see how it is all put together, the source code and Unity project are available over on GitLab. I don't have binaries available for download yet, but we'll try to make it available in playable form once we have a few more of the kinks worked out.

And I'd be remiss if I didn't give a shout out to PlayFab, for hosting (and catering!) a fantastic game jam, and our fellow competitors, who built some truly amazing games. Here's looking forward to next time.

Jam attendees checking out our game after the competition


## Data permanence (or a lack thereof)

When Google VP Vint Cerf warned that increased dependence on technology could lead to a 'digital dark age', he merely echoed the concern of everyone involved in the preservation of information in a digital world. While it is expedient to dismiss his claim as sensationalist and/or paranoid, Google's announcement yesterday that they are closing down the Google Code source code repositories provides an unfortunate echo to his cries.

When I received Google's email detailing the repositories I have ownership over, I found a number of University projects, some python sample code, an entry to a video game competition, my now-venerable python user interface library, and one more item which I had forgotten about: a collaboration some years back to build a video game.

Like most such ventures, the collaboration fell apart after a few short weeks, the project creator and I went our separate ways, and I never heard from him again. But now, with the code scheduled to be consigned to oblivion within a year, it seemed like a good time to reach out and formally put the repository to rest.

It was then that I realised just how easy it is to lose information forever. I have an email address for the project's creator, but it turned out to be a long-defunct hotmail account, in the name of the project, not the user. The handful of emails we exchanged don't list a real name, and mining various websites I was only able to find a possible first name, as well as a location of Christmas Island - a place so obscure I doubt he actually lived there. Team collaboration was largely accomplished through a private forum, but the project's website is long gone, the contents of the forum with it. The domain is still registered, but through a registrar in China, which doesn't list an owner in their whois records.

Long story short, unless he happens to read this blog post, I'll probably never hear from 'star.anger@hotmail.com' again. And in the greater scheme of things, it doesn't really matter: the game was never made, what small quantity of code made it to the repository will never be reused, and I doubt there is clear ownership of the code and assets regardless. The principle of it all still rankles, though.

For however short a time, a group of individuals came together to build something ambitious. That endeavour is over, the fleeting sense of camaraderie long gone. All that remains is an untouched repository and the half-remembrance of an anonymous typist behind a presumably-distant keyboard.

Who knows? Perhaps the other team members have stayed in touch. All that I know is that it's all too easy to lose track of people and things in a world based entirely on ones and zeroes...

(Originally posted at https://swiftcoder.wordpress.com/2015/03/13/permanence-or-a-lack-thereof/)

## Approaches to Resource Disposal

I'm working on developing a novel programming language, working title 'aevum'. As part of that process, I'll be writing a series of articles about various aspects of language design and development.

Every long-running computer program is going to need to obtain a variety of resources, and those resources are almost always finite. Memory, file handles, threads, GPU resources - all of these are relatively scarce, and exhausting the available supply will have dire consequences, anywhere from killing the program to crashing the computer on which it runs.

Given this scarcity, it is essential that we can dispose of these resources as soon as we are finished using them (or at least, before they are needed elsewhere). Although that sounds simple enough, it turns out that there are a couple of hurdles to overcome.

The first hurdle relates to ownership. As long as every resource is owned exactly once (i.e. a member variable of one object, or a local variable of one function), then disposal is trivial - a resource is disposed of as soon as its parent is disposed of. But requiring single ownership of every object comes with disadvantages of its own: with strict single ownership you can't easily maintain cyclic data structures such as bi-directional lists, graphs or trees.

On the other hand, if you elect to allow multiple ownership, you are then faced with the problem of how to determine when a resource is actually no longer being used. Obviously you can't dispose of it as long as even a single owner still exists, but how do you determine that the last owner is gone? You can explicitly keep track of the list of owners for each resource (a la reference counting), at the expense of both storage and performance, or you can at regular intervals scan the entire system to determine objects without owners (a la tracing garbage collectors), at the cost of determinism and performance.

Manual resource disposal

Manual resource disposal was once a staple of imperative languages, and while one might hope that it would be included here as a mere historical footnote, that is sadly not the case. The majority of garbage collected languages (including Java, Python and C#, to name but a few) make little explicit provision for the disposal of scarce non-memory resources. While they do offer some support for locally-scoped resources (python's with statement, or C#'s using statement), long-lived or shared resources have to be managed by manual calls to a dispose method, potentially augmented by a manual reference counting system.

Why is this less than ideal? Primarily because it places the burden of resource disposal squarely on the programmer. Not only does it require a significant amount of additional code, but forgetting to call the disposal methods in one of any number of places will prevent the disposal of those resources.

Tracing garbage collection
Tracing garbage collectors are the bread and butter of modern programming languages. I'm not going to describe the workings in detail - there are many resources on the subject. Suffice it to say that at regular intervals we trace through all reachable objects, mark them as live, and dispose of everything else. The typical criticism of garbage collectors is that they are not deterministic, and collection may occur at any time, interrupting the normal execution of the program. While that represents a serious problem in hard real-time applications, there are a variety of ways to work around the problem, and I am mostly interested in a more subtle manifestation of the same issue.

The tracing portion of a garbage collection cycle has to touch all reachable memory, and the collection phase has to free every orphaned object, both of which may take a significant amount of time. For this reason, garbage collection is typically delayed until the system detects that it is about to run out of memory. That's all very well, but what if it isn't memory we are about to run out of? The garbage collector doesn't know anything about, say, file handles, so even if you run out of file handles, as long as there is plenty of memory (and modern machines have plentiful memory) garbage collection won't be triggered.

The typical solution to this problem is to require manual disposal of non-memory resources, which results in the same drawbacks we have already discussed above.

Reference counting
Reference counting is the darling of the C++ world, and sees pretty wide use even in other languages. Again, I'm not going to describe the workings in detail, but the basics are to attach a reference count to each object, increment that count each time a reference to the object is created, and decrement the count each time such a reference is destroyed. If the reference count drops to zero you know that there are no references to the object, and it can be deleted immediately.

Reference counting offers one key advantage over garbage collection: all objects can be disposed as soon as they are no longer needed. This is excellent from the perspective of disposing non-memory resources, but it unfortunately goes hand-in-hand with a number of flaws.

The first flaw is that reference counting is vulnerable to cycles, where objects A and B either directly or indirectly refer to each other, thereby preventing the reference count of either from ever reaching zero. This flaw is further compounded by the fact that many common data structures (doubly-linked lists, bi-directional trees, cyclical graphs, etc.) involve exactly such circular references. We can mitigate this by making the programmer define which references actually confer ownership (strong vs weak references), but this adds significant mental overhead for the programmer, and just emulates the ideal case of single ownership. We can also allow cycles, and run cycle detection to break them, but that is roughly equivalent to garbage collection, and shares the same drawbacks.

The second flaw is that not only does the need to attach a reference count to every object consume additional memory, but the near-constant increment/decrement of reference counts also puts a considerable strain on the cache. This can be reduced by careful optimization of where reference counts are updated, and by deferring updates so as to batch them together, but the former adds to programmer complexity, and the latter drastically reduces the benefit of immediate disposal.

ARC (automatic reference counting)
Apple deployed reference counting as a crucial part of their API design with the advent of Mac OS X and their Objective-C frameworks. Initially this reference counting had to be done through manual calls to retain/release methods, and with the addition of some supporting constructs (such as autorelease pools) and a strongly documented convention for programmers to follow, this was very successful (albeit a little tedious).

After a brief foray into traditional garbage collection (which failed to meet the needs of the fast-growing iPhone ecosystem), they hit on a simpler idea: what if the compiler could perform reference counting for you? Mechanically following the same conventions provided to the programmer, and augmented by a couple of language annotations to influence behaviour, the compiler can guarantee to implement correct reference counting, and the optimiser can strip back out most redundant calls to remove much of the overhead thus introduced.

In general it is a very neat system, the major drawback being that it still relies on the programmer to annotate weak references correctly, and there remains some overhead in maintaining the necessary reference counts.

Rust
There are a number of interesting resource management ideas knocking around in the Mozilla foundation's Rust, namely unique pointers, borrowed references, and automatic region management.

Unique pointers are declared with a ~ (tilde character), and they uniquely own the object they point to. As in, no other object holds a pointer to it. If you assign one unique pointer to another, the contents are transferred (not copied) and the original no longer points to the owned object. Unique pointers make resource management a piece of cake, because if the pointer uniquely owns its contents, then we can destroy the contents as soon as the pointer goes out of scope.

Of course, as I mentioned in the introduction, there are a whole class of data structures which are very hard to create with only unique pointers, and that's where borrowed references come in. Rust lets you declare a reference with the & (ampersand character), and this can be used to 'borrow' a reference from a unique pointer. The borrowed reference refers to the same thing the unique pointer does, but it does not own the referenced object, and the compiler guarantees that the reference will not outlive the unique pointer (thus never impacting resource collection at all).

Since our references must be statically guaranteed not to outlive the matching unique pointer, we'd be quite limited in what we could do with these references. For example, we wouldn't be able to store such a reference in a container, because the lifetime of the container might outlast our unique pointer. And this is why we need automatic region management: regions define scopes within which lifetimes exist, and by limiting the reference to the region containing the unique pointer, we guarantee that the reference cannot outlive the pointer. But regions are hierarchical, and automatically created for every scope, so that as long as a container is owned by a child region of the region holding our unique pointer, we can add references to that container freely, secure in the knowledge that the container too will not outlive the original unique pointer.

And the best part is that the compiler can statically determine a bunch of this at compile time, and hand you back nice little errors when you violate regions, thus avoiding most of the runtime overhead. There are of course limitations: the programmer still has to be cognisant of the difference between unique pointers and borrowed references, and attempts to move referenced objects will occasionally induce head-scratching compiler errors. But overall, it is a very solid approach to purely deterministic resource disposal with minimal overhead.

Can we do better?
Maybe.

Apple's ARC and Rust's unique/borrowed/region system are both very promising approaches to improving the accuracy and performance of resource disposal, while lowering the cognitive load on the programmer and reducing the incidence of resource leaks. And they both avoid the crippling restrictions on programmers imposed by classical research approaches, such as complete immutability of data or linear types. However, both continue to have some cognitive overhead, and both are relatively new approaches with the possibility of as yet unforeseen complications.

But for now, the trend of compiler-centric resource disposal approaches seems to be here to stay.


## Fun with commas

This thread over at GameDev got me thinking, "can one assign Python-like tuples in C++?"

I don't want to pollute the thread in For Beginners with that discussion, but the answer is yes, even without C++11 initialiser lists:

```cpp
#include <iostream>

struct A {
    A &operator=(int i) { std::cout << "A = " << i << std::flush; return *this; }
    A &operator,(int i) { std::cout << ", " << i << std::flush; return *this; }
};

int main() {
    A a;
    a = 10, 20, 30;  // prints "A = 10, 20, 30"
    std::cout << std::endl;
}
```
Should you ever do this? Probably not. Though I'm guessing one of Boost's container libraries is doing exactly this.


## Bidding a Freelance Contract

Although I am gainfully employed at present, in the past I have made a good portion of my living in freelance work: websites, Facebook applications, database tools - even the odd carpentry project. The most essential skill involved in freelancing in any field? Communication. But the next most important skill is the ability to accurately estimate and bid for a contract.

If you're working a regular job, you are almost always paid by the hour. Freelance work is sometimes paid by the hour, but more often the client will want you to bid a fixed price for the entire project: a 'contract price'. And even if it is paid by the hour, the hourly rate isn't generally a pre-determined constant - you will have to get in there and negotiate the hourly rate you deserve.

So how does one estimate a fair bid for a contract? Let's take a look at some of the most important factors to consider:

Time
The first factors to consider are how long the project will take to complete, and when the client wants it completed. If the project is going to take a month and the client wants results in a week, you probably don't want to touch it. If it looks like a week's work and the client is expecting it to take 6 months, then you may be badly underestimating the amount of work.

How do you estimate how long a project will take? That comes down to practice and experience. You have worked in your field, you know roughly how long it takes you to complete each type of task - so you look at the project, break it down into its component parts, figure out how long each will take, and put it all together again. Is this difficult? Yes. Will you make mistakes? Yes. But over time you'll learn, and we will get to dealing with mistakes in a moment.

Expenses
The next factor is direct expenses. These are the expenses incurred directly by the job: do you need to buy new tools/computer/software in order to complete the job? Will you need to buy books or training? Will you need to travel as part of the job, or commute to the client's offices? All of this should be pretty straightforward to determine, and the prices for all of it should be easy to figure out - just add it all up.

After that comes living expenses, and these are a little more tricky: rent, utilities, cell phone bills, daily transport costs, food, entertainment... I am assuming you have a fairly good idea of your own cost of living, but if not, you need to figure this out as soon as possible. Once you have a good idea of your cost of living, multiply it by your time estimate, and add it to the rest of your expenses.

Taxes
So you have your expenses, and those form your bottom line: if you want to complete the job and keep a roof over your head, you have to make at least that much. But wait... The federal government wants a cut. And then there are state taxes. And social security. And don't forget health insurance - unless you already have a full-time job in addition to your freelance work, you'll need to pay for your own.

So you need to calculate all of this as well, and add it to your expenses. You probably already know your health insurance premiums, and there are tax and social security calculators on the internet, so this shouldn't present much in the way of difficulties.

Profit margin
Unless you are truly desperate (and there will be times when you are desperate), you don't want to be just barely managing to pay the bills. So we need to build some profit into this estimate. Ideally, you want to be making 30% profit or more, and obviously, the higher the better, as long as the market will bear it (ethical concerns aside, if you're just ripping off the client).

Risk management
So you have your bottom line, you've built in a tidy little profit, but there is still the matter of risk - and there are really two issues at play here. The first is estimation error: your time estimate may not be perfect, or the project may just hit unexpected snags and expenses. And the second is part and parcel of the very nature of freelancing: this isn't a regular job, so once the contract is over you are once again unemployed. This means that you need to account not only for living expenses during the contract, but also for living expenses while you search for your next contract. If you are lucky, you might have another contract lined up by the time you finish this one, but nothing in life is guaranteed.

So you have to build in a certain 'buffer' to absorb these risks. A common rule of thumb is to double your estimate, others go with 1.5x, but in the end this is a judgement call, based on your evaluation of risk. And also on your evaluation of how high you can actually bid, which brings us to...
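To make the arithmetic concrete, here is a minimal Python sketch of the estimate built up over the sections above. The helper name `estimate_bid` and every number in it are made-up placeholders, and the single flat tax rate is a deliberate simplification - substitute your own figures and local rates.

```python
def estimate_bid(weeks, direct_expenses, weekly_living,
                 tax_rate=0.30, profit_margin=0.30, risk_factor=1.5):
    """Rough contract bid, following the steps above: direct expenses,
    plus living costs over the estimated time, plus taxes, plus a
    profit margin, all multiplied by a risk buffer."""
    base = direct_expenses + weekly_living * weeks    # expenses + cost of living
    with_taxes = base * (1.0 + tax_rate)              # crude flat-rate tax estimate
    with_profit = with_taxes * (1.0 + profit_margin)  # target ~30% profit
    return with_profit * risk_factor                  # buffer for overruns and downtime

# e.g. a 4-week project, $500 of tools, $800/week cost of living
print(round(estimate_bid(4, 500.0, 800.0), 2))
```

Real tax codes are progressive and jurisdiction-specific, so treat this strictly as a back-of-the-envelope starting point, not a substitute for an accountant.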

Client expectations
At this point you hopefully have a very good idea of how much money you need to make for this contract to be worthwhile to you. So we reach what is perhaps the trickiest part of the entire process: judging how much the client is willing to pay. This is largely subjective, but it involves taking a good look at the client: does the client actually have a lot of money? Are they miserly with the money they have? Have they already budgeted a small/large amount for this project?

And very importantly: are they soliciting bids from other freelancers on this project, with which you need to compete? If you are the only person they have asked for a quote, then it often behooves you to pitch your quote a little high - you'll have a chance to negotiate the final price. But if you are bidding against other people, then you will rarely (if ever) have a chance to submit a later counter-bid.

And that's pretty much the gist of it - get out there and start estimating. I guarantee you'll make mistakes, maybe take a loss on a few projects, but if you develop the knack for it, it all evens out in the end.

A final word: subcontracting
If someone hires me to develop a website, I'm fine on the technical end of things, but I am no artist. So I'll need to hire a graphic designer to work with me for at least a portion of the project. This is tricky: you need to factor in all the same considerations for the artist that you do for yourself - the only saving grace being that you are absorbing most of the risk, which makes their calculation simpler. My best advice is to find your subcontractor early, and communicate with them on the bidding process. They will have a better idea than you about their own time estimates and expenses.

Source

## Logarithmic Spiral Distance Field

I have been playing around with distance field rendering, inspired by some of Inigo Quilez's work. Along the way I needed to define analytic distance functions for a number of fairly esoteric geometric primitives, among them the logarithmic spiral:

The distance function for this spiral is not particularly hard to derive, but the derivation isn't entirely straightforward, and it isn't documented anywhere else, so I thought I would share. I am only going to deal with logarithmic spirals centered on the origin, but the code is trivial to extend for spirals under translation.

Spirals are considerably more tractable in polar coordinates, so we start with the polar coordinate form of the logarithmic spiral equation:

$r = a e^{b\theta}$ (1)

Where (roughly speaking) a controls the overall scale of the spiral (the radius at $\theta = 0$), and b controls how tightly the spiral is wound.

Since we are given an input point in x,y Cartesian form, we need to convert that to polar coordinates as well:

$r_{target} = \sqrt{x^2 + y^2}, \quad \theta_{target} = \operatorname{atan2}(y, x)$

Now, we can observe that the closest point on the spiral to our input point must lie on the line running through our input point and the origin - draw the line on the graph above if you want to check for yourself. Since the logarithmic spiral crosses the same radial line once every 360°, this means that the closest point must be at an angle of:

$\theta_{final} = \theta_{target} + n \cdot 360^{\circ}$ (2)

Where n is an integer. We can combine (1) and (2) to arrive at an equation for r in terms of n:

$r = a e^{b(\theta_{target} + n \cdot 360^{\circ})}$ (3)

Which means we can find r if we know n. Unfortunately we don't know n, but we do know $r_{target}$, which is an approximation for the value of r. We start by rearranging equation (3) in terms of n:

$n = \dfrac{\ln(r/a)/b - \theta_{target}}{360^{\circ}}$ (4)

Now, feeding in the value of $r_{target}$ for r will give us an approximate value for n. This approximation will be a real number (a float, if you prefer), and we can observe from the graph above that the closest point must be at either the next larger or smaller integer value of n.

If we take the floor and ceil of our approximation for n, we will have both integer candidates, and can feed each value back into equation (3) to determine the two possible values of r, $r_1$ and $r_2$. The final step involves finding which of these is the closest, and the distance thereof:

$d = \min(|r_1 - r_{target}|, \; |r_2 - r_{target}|)$

And there you have it:

Distance field for a logarithmic spiral

The Python source code below produces the image shown above, as a 1000x1000 pixel PNM image written to stdout. If you aren't familiar with the PNM format, it is an exceedingly simple ASCII-based analogue of a bitmap image, and can be loaded directly in GIMP.
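For illustration, here is a complete P2 (ASCII greymap) file for a tiny 2x2 image, built and printed from Python: the `P2` magic number, the width and height, the maximum grey value, then one ASCII sample per pixel. The pixel values here are arbitrary placeholders.

```python
# A minimal, hypothetical P2 greymap: header followed by one
# whitespace-separated sample per pixel (2x2 image, values arbitrary).
pnm = (
    "P2\n"
    "2 2\n"      # width and height
    "255\n"      # maximum grey value
    "0 64\n"     # first row of pixels
    "128 255\n"  # second row of pixels
)
print(pnm, end="")
```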

```python
import math

def spiral(x, y, a=1.0, b=1.0):
    # calculate the target radius and theta
    r = math.sqrt(x*x + y*y)
    t = math.atan2(y, x)

    # early exit if the point requested is the origin itself
    # to avoid taking the logarithm of zero in the next step
    if r == 0:
        return 0

    # calculate the floating point approximation for n
    n = (math.log(r/a)/b - t)/(2.0*math.pi)

    # find the two possible radii for the closest point
    upper_r = a * math.pow(math.e, b * (t + 2.0*math.pi*math.ceil(n)))
    lower_r = a * math.pow(math.e, b * (t + 2.0*math.pi*math.floor(n)))

    # return the minimum distance to the target point
    return min(abs(upper_r - r), abs(r - lower_r))

# produce a PNM image of the result
if __name__ == '__main__':
    print 'P2'
    print '# distance field image for spiral'
    print '1000 1000'
    print '255'
    for i in range(-500, 500):
        for j in range(-500, 500):
            print '%3d' % min(255, int(spiral(i, j, 1.0, 0.5))),
        print
```
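As a quick sanity check, here is the same algorithm restated in Python 3 (the listing above uses Python 2 print statements): a point that lies exactly on the spiral should report a distance of essentially zero.

```python
import math

def spiral_distance(x, y, a=1.0, b=0.5):
    # same derivation as above: approximate n, then test floor and ceil
    r = math.hypot(x, y)
    t = math.atan2(y, x)
    if r == 0:
        return 0.0
    n = (math.log(r / a) / b - t) / (2.0 * math.pi)
    r1 = a * math.exp(b * (t + 2.0 * math.pi * math.floor(n)))
    r2 = a * math.exp(b * (t + 2.0 * math.pi * math.ceil(n)))
    return min(abs(r1 - r), abs(r2 - r))

# pick the point on the spiral at theta = 1 radian (a = 1, b = 0.5)...
theta = 1.0
r = math.exp(0.5 * theta)
# ...and check that its reported distance is numerically zero
print(spiral_distance(r * math.cos(theta), r * math.sin(theta)))
```

Note that the min over both the floor and ceil candidates also guards against the approximation for n landing a hair below an integer due to floating point rounding.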

Source

## The price of progress

I recently installed the beta of Microsoft Office 2010, and the first thing that struck me is how it performs noticeably worse on my 3.0 GHz quad-core AMD gaming rig, than Office '98 performed on a now 12-year-old PowerBook G3, powered by a little 250 MHz PPC processor.

You can probably guess the next stage of this little anecdote... Office '98 on that G3 performed ever-so-slightly worse than Office 4.0 on a truly antediluvian PowerBook 180, which sported a fantastic (for the time) 33 MHz Motorola 68030 CPU.

Now, I am not being entirely fair here - the spellchecker is much faster, the grammar checker didn't even exist back then, and various other ancillary features have been added and improved. But the core issue remains: Office 2010 (or 2007, which is not in beta) running on a very decent gaming rig takes longer to launch and is less responsive to keyboard input than Office 4.0 on a 33 MHz 68k.

And the problem isn't restricted to Microsoft products alone, as many pieces of software have suffered the same sort of creep, not least among them the Mac and Windows operating systems.

In the open-source world and among smaller developers this phenomenon is far less common: a well-configured Linux or BSD installation boots in a handful of seconds; Blender (sporting most of the features of expensive software such as 3DS Max and Maya) launches immediately and always remains responsive; and while Maxis' Spore takes minutes to start up and load a game, Eskil's Love throws you into the game in under 10 seconds.

My current computer is many thousands of times faster than that PowerBook 180, so in theory at least, we should be able to do far more, and do the same old things much faster. Why then the slowdown?

It can't be lack of resources - we are talking about companies such as Microsoft, Apple and Adobe, all with enormous R&D and development budgets, and teams of experienced programmers and engineers. Besides, the open-source guys manage just fine, some with just a handful of programmers, and most with no budget whatsoever.

It has been argued that programmer laziness (a.k.a. badly educated programmers) is to blame, but I am not sure this can be the entire story. Certainly the 'dumbing down' of university-taught computer science hasn't helped, nor has the widespread rise of languages that 'protect' the programmer from the hardware, nor the rise of programming paradigms that seek to abstract away low-level knowledge. But that is the topic of another rant, and is somewhat tangential to the topic at hand. Companies can afford to hire the best programmers, and could, if they wanted to, create the demand necessary to reform education practices.

And that brings us to the real heart of the issue: software developers measure success in terms of sales and profit. As long as your software sells, there is no need to spend money on making the software perform better. And if you happen to have a virtual monopoly, such as Microsoft's Office or Adobe's Photoshop, then there is no incentive to improve the customer's experience, beyond what is needed to sell them a new version each year.

However, when you lose such a monopoly, the game changes, and it generally changes for the better. When Firefox, Opera and later Safari started cutting a swathe into Microsoft's Internet Explorer monopoly, Microsoft was forced to adapt. The latest version of Internet Explorer is fast, standards-compliant, and relatively free of the virus infection risks that plagued earlier versions.

This outcome of the browser war has led at least a few to the conclusion that open-source is the answer, and that open-source will inevitably recreate what has been developed commercially, and either surpass that commercial product, or force it to evolve. Sadly, I don't see this happening particularly quickly, or on a wide scale - OpenOffice is playing catch-up in its efforts to provide an out-of-the-box replacement for Microsoft Office, GIMP lags far behind Photoshop, and Linux, despite widespread adoption in a few key fields (namely budget servers and embedded devices), still lags far behind Windows and Mac in many areas.

For many years this wasn't a problem - every few years you would buy a new computer, typically an order of magnitude faster than the computer it replaced. If new versions of your software consumed a few million more cycles, well, there were cycles to burn, and besides, the hardware companies needed a market for faster computers, didn't they?

Nowadays the pendulum is swinging in the opposite direction. Atom powered netbooks, Tegra powered tablets, ARM powered smartphones - all of these promise a full computing experience in tiny packages with minimal power consumption. Even though the iPhone in your hand is considerably more powerful than that 33 MHz PowerBook 180, it offers only a small fraction of the computing power of your shiny new laptop or desktop. And users expect a lot more than they did in the early nineties - animated full colour user interfaces, high definition streaming video and flash applications, oh, and don't drain the battery!

CPU cycles today are becoming as precious as they ever were, only now many of our programmers have no experience of squeezing every last drop of performance out of them. Has the business of software development come full circle, and once again become the territory of the elite 'low-level' programmer?

Source

## RFC 1149 implemented

This one goes out to all the networking students in the house:

Wired reports that a firm in South Africa successfully demonstrated that data transmission via flash drive equipped carrier pigeon is faster than their existing internet service (source).

Of course, any networking student worth his salt should know that this approach dates back to 1990, in the form of RFC 1149...

Source

## simplui 1.0.4 released

New in this version:

• Support for multiple windows
• Java-style flow layout
• Full batching
• Numerous performance enhancements and bug fixes
• Minor theme tweaks
• setuptools/easy_install support

Given the speed of development, simplui has moved to its own googlecode project:

You can obtain the source from Mercurial, or download the binary package there.

In addition, simplui has been integrated with setuptools/easy_install. You can find the package listing in the PyPI directory (here), or you can install immediately with easy_install:

easy_install simplui

(note that easy_install will not install the demo application and themes)

This release does come complete with a few caveats:

• simplui is only compatible with pyglet 1.1 maintenance - not the experimental version in trunk
• There is a bug in pyglet 1.1.3 which can cause crashes if un-patched.
• On Mac OS X, you may need to upgrade setuptools (sudo easy_install -s /usr/bin setuptools)

Source

## Community News

Various things have been snowballing recently, with the result that both development and blogging have fallen by the wayside. A few interesting things are happening however, take a look and see for yourself, after the jump...

The future of pyglet
Alex Holkner recently announced that he isn't able to continue development/maintenance of pyglet anymore. If you work with pyglet, consider weighing in on the newsgroup about the reorganisation of development:

simplui development
With the recent profusion of GUI toolkits and the discussion thereof on the pyglet mailing list, I have decided to formalise the development of simplui a bit. To that end, I have dedicated a section of the wiki to simplui. The information there is currently very sparse, but I hope to expand it in the near future.

http://wiki.darkcoda.com/wiki/simplui

In particular, I welcome discussion of future features and enhancements, on the road map and associated discussion page:

Happenings over @ GameDev.net
A while back I entered a prize-drawing for some GDNet goodies, by helping spread the word about Intel's Level Up 2009 contest. I was lucky enough to be picked as a winner, and the kind folks at GameDev sent me a satchel full of pens, mini GDNet frisbees, and sticky-headed darts in the shape of stick-figures - anyone know what they are intended for?

GameDev's editorial staff have approved a short n' sweet article by yours truly on the subject of spatial hashing. With a little luck it will appear on the site in the next week or so.

I figured I would also take this opportunity to mention that after 6 years and 4,000+ forum posts, I have become a member of GameDev's hallowed Top 50 Rated Posters (not including moderators and staff). Think that is worth a line on my resume?

Source

## simplui 1.0.3 released

No major features this time; instead a slew of small bug fixes, an update to the API, and a rewrite of the rendering code for performance (primarily through batching).

I wasn't intending to push a release out until more features were added, so consider this a maintenance release.

Source

## Simplui 1.0.2 released

The default themes provided by simplui

Today brings the 1.0.2 release of simplui. This is a beta release, previewing major enhancements, and I need as much feedback as possible on the new features. As such, this release isn't heavily optimised - that is on the wishlist for next release.

The big news for this release is theme support. The GUI is now fully skinned, using a variant of the nine-patch method, with code developed by Joe Wreschnig and Alex Holkner on the pyglet mailing list.

Each GUI frame can use a different theme (even at the same time!), and the theme can be changed at runtime. I have included two sample themes, one modelled on the Mac OS X 'Aqua' interface, and the other on the PyWidget GUI toolkit.

Also included are the usual crop of bug-fixes, including the squashing (hopefully for the last time) of the persistent event clipping bug.

As per usual, grab the tarball, or visit SVN, and let me know if you have any comments or suggestions.

Source

## Starfall: planet rendering

I just posted a quick youtube video to demonstrate the current state of the planet renderer. This is early development stuff, and the eye candy is minimal, but it should give you some idea of the scope.

I will follow up with a more technical blog post in the next few days, explaining all that is going on behind the scenes, and can't be seen in a video.

Part of the rationale behind this video is to streamline the whole video capture and posting process. Unfortunately, it hasn't been entirely straightforward so far. I went through a number of video capture tools before settling on FRAPS, which works well enough (though I would have preferred a free tool).

I have also had a terrible time converting the video for youtube - ATI's Avivo video converter is blazingly fast, but apparently produces an incompatible audio codec at any of the high-quality settings. I was forced to fall back to the CPU-based Auto Gordian Knot, which both does a worse job and is very slow on my little Athlon 64 X2.

I am now experimenting with ffmpeg, but the command line options are confusing to say the least. If anyone has any clues/tips/tricks for getting FRAPS encoded video (and audio) into a suitable format for youtube HD, please let me know.

Source

## simplui 1.0.1 released

I am going to be performing a number of small releases for simplui over the next few weeks, as features are added and bugs are patched.

Today's 1.0.1 release introduces a slider control, docstrings for all widget constructors detailing the keyword arguments, and a couple of bug fixes.

You can grab the release tarball, or check out the code directly from SVN.

If anyone feels like taking it for a spin, I could do with bug reports and feedback on the API.

Source