
Old Code

Last week I rediscovered a 12-year-old game project I was working on. I decided to look at it and write a blog series about it. Here's a summary: Part 1: Archaeology - The rediscovery and an overview of the repository structure
Part 2: First look at the code - I open the door and peek inside, outlining the high level structure or concepts - it's a look back in time to GameDev.net in 2004 (Enginuity era)
Part 3: PhysFS & Entities - I remove the dependency on PhysicsFS to get the game starting. I explore the Entity structure and find dragons.
Part 4: Compile Times - I start cleaning up the code and apply some header discipline to drastically improve compilation times
Part 5 - It lives! - I get the game running and stare into the face of a resurrected monster. I find that a lot of the systems and mechanics are no longer present or functional - as a result, the 'game' isn't a game at all anymore and won't be without significant work.
Part 6 - Cleanup - I start the cleanup of dead systems, removing a bunch of obviously unused junk. I extricate the smart pointer system in favour of a cleaner, more deterministic ownership model.
Part 7 - You didn't need it - I start to remove everything else that has no material benefit to the current feature set in the game. The goal is to keep only the code that is useful. A *lot* of code is removed.
Part 8 - Journey's End - I remove the final thorn in the code, the entity system, keeping only what is needed. I realise that there's no more to remove. It's been a fun journey and the current result is arguably the code that should have been written 12 years ago. A great lesson in not over-engineering things for the sake of it and evolving as you go. It's also a view into the past and a realisation that you can learn and change a lot in 12 years. I'm pondering what to do next. The codebase is still "old", but improving it will result in writing a lot of new code. I may modernise it a bit and then keep going. Not decided yet. I hope you enjoy reading.

evolutional

 

Post Mortem: Personal Labor Day "Game Jam" - Day 2

Day 2

I started day 2 pleased that I had a playable game at the end of the first day, but it wasn't finished yet. Cracking on, I set myself some goals for the day. The first three were continuations from the first day.

[indent=1]? Add "game over" detection (no more moves) & message
[indent=1]? Add game reset and "new game" logic
[indent=1]? Add scoring + display of scoring on screen
[indent=1]? Add high score and score load/save
[indent=1]? Add win condition (2048 tile) & message display
[indent=1]? Clean up graphics
[indent=1]? Improve user interface
[indent=1]? Add sounds
[indent=1]? Create Android build (stretch)


Game Over

Implementing this was pretty simple. For the game to be lost, you have to satisfy two conditions.
- There are no empty tiles on the board
- You cannot squash any tiles in any direction

I added this by keeping track of the number of occupied tiles on the board: whenever a tile was squashed the count went down, and whenever one spawned it went up. Simple. To check for "no tiles empty", it was a case of providing a property which took the total tile count and subtracted the occupied count. I added some unit tests around this to make sure the counts were ok. I'm glad I did, as I refactored something later and this broke in a subtle way...
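As a rough sketch of what that bookkeeping can look like (my own illustration; the names are invented, not the project's actual code):

// Hypothetical sketch of the occupancy counting described above.
public class BoardOccupancy
{
    private const int TotalTiles = 4 * 4;
    private int _occupiedTileCount;

    // "No tiles empty" falls out of simple arithmetic on the counter.
    public int EmptyTileCount
    {
        get { return TotalTiles - _occupiedTileCount; }
    }

    public bool IsFull
    {
        get { return EmptyTileCount == 0; }
    }

    // Called whenever a tile spawns.
    public void OnTileSpawned() { ++_occupiedTileCount; }

    // Called whenever two tiles are squashed into one.
    public void OnTilesSquashed() { --_occupiedTileCount; }
}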

The second check for a game over condition is the "can I squash anything?" check. I implemented this by duplicating the squash routines and making them return true if they would squash. This was a bit of a hack; I should really have refactored the squashing to take a flag so that I didn't have to copy-paste everywhere. I don't like code duplication, and having unit tests around the squashing should let me refactor this pretty quickly.

With the game over recognised, I needed to add the concept of "game state". I added this to my player component, but I could easily have added it to a new component. I had two states at this point:

public enum TileGameState
{
    Playing,
    GameOver,
}
Adding the game over message was also simple to begin with. I created a GuiTextLabel class and a GuiRenderingService which took this label and printed the contents to the screen. I implemented a GuiService (the name sucks) which kept track of the game state and used it to modify the visibility of the text label:

_gameOverLabel.Visible = (_player.State == TileGameState.GameOver);
This would be the start of my "battle" with the UI that took up a bunch of unexpected time.


New Game

I added a new state to my game state machine, "NewGame", which leaves us with:

public enum TileGameState
{
    NewGame,
    Playing,
    GameOver,
}
I hooked up the Spacebar key to be recognised during the game over state, resetting us to the new game state.

So the state transition of my game looks like:

New Game --> Playing --> Game Over --> New Game
Starting a new game currently involves several things:
- Reset the game board to the Empty tile
- Spawn the two initial tiles
- Reset the game state to "playing"

In my NumberGameService update loop, I react to this:

public void Update(GameTime gameTime)
{
    if (_player == null)
    {
        return;
    }

    var player = _player.GetComponent<NumberGamePlayerComponent>();
    if (player.State == TileGameState.GameOver)
    {
        return;
    }

    if (player.State == TileGameState.NewGame)
    {
        _gameBoardLogic.InitializeBoard();
        player.State = TileGameState.Playing;
        ResetPlayer(player);
    }

    // main game update...
}
Scoring

Scoring is really simple in this game. Whenever you squash two tiles, you score the sum of them - so squashing 4 and 4 together will yield a score of 8 and so on.

To add scoring, I went back to my player component and added "score", created a GuiLabel for the score, and added it to my GuiComponent. To update this label, I created a ValueFactory property, which was really just a lambda to get the value for the label. I hooked this up to some code which returned the player's score.
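Roughly, the idea is a label that pulls its value through a delegate each frame; a minimal sketch (my own guess at the shape, not the actual class):

using System;

public class GuiTextLabel
{
    public string Text { get; set; }

    // When set, the label refreshes its text from this delegate
    // instead of holding a static string.
    public Func<string> ValueFactory { get; set; }

    public void Update()
    {
        if (ValueFactory != null)
        {
            Text = ValueFactory();
        }
    }
}

Hooking it up is then a one-liner, along the lines of _scoreLabel.ValueFactory = () => _player.Score.ToString();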

To actually increase the score, it meant going back into my squash code and modifying the score. I refactored my squash code into a single method which did the move and squash for two tiles identified as being squashed:

private void SquashMoveTile(int x, int y, int nx, int ny)
{
    var t = GetBoardTile(x, y);
    var nextTile = GetBoardTile(nx, ny);
    if (t != nextTile)
    {
        throw new Exception();
    }

    SetBoardTile(x, y, t + 1);
    SetBoardTile(nx, ny, TileType.Empty);
    _score += ScoreForTile(t + 1);
    --_occupiedTileCount;
}
Because you can squash multiple tiles in a single move, I chose to let the squash code modify the player's score directly. I wish I'd not done this, as it's pretty ugly - I should have made this method return the score and then kept track of it locally. It's pretty easy to clean up, especially with my tests in place.


At this point, my game looked like this (yes, it still looks bad).




High Score & Profile Saving

Adding a high score was pretty easy. When the game transitions to Game Over, check the current score against the high score and update it if needed. When a new high score is detected, set a flag so that a message can be displayed on the UI. I also needed a new "high score" label. This led to code which looked a bit like this:

_playerScoreLabel.Text = string.Format("{0}", _player.Score);
_playerHighScoreLabel.Text = string.Format("{0}", _player.HighScore);
_gameOverPanel.Visible = (_player.State == TileGameState.GameOver);
if (_gameOverPanel.Visible)
{
    _gameOverLabel.Text = "GAME OVER";
    _gameOverNewHighScoreLabel.Visible = _player.NewHighScore;
}
As I was now grouping up labels, I added the concept of a "panel" to my GUI. It basically acts as a container for other elements and becomes their origin point; gameOverPanel contained two elements - gameOverLabel and gameOverNewHighScoreLabel. Both had positions relative to the main gameOverPanel, so I can move the panel and the child elements move with it. This is the beginning of an actual UI system, and as I said earlier - the start of some head scratching.

Now I had a high score, I needed to save it and load it when the game starts. To do this, I used the XNA StorageDevice subsystem and serialized a "SaveGame" class to JSON using JSON.Net. Sure, people can cheat and change this - but who cares, right?
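The serialization half of that is only a few lines with JSON.Net. A minimal sketch, assuming a SaveGame class holding just the high score (the storage plumbing through StorageDevice is omitted here, and these names are mine):

using Newtonsoft.Json;

public class SaveGame
{
    public int HighScore { get; set; }
}

public static class SaveGameIo
{
    // Writing: serialize to a string, then hand it to the storage container.
    public static string Save(int highScore)
    {
        return JsonConvert.SerializeObject(new SaveGame { HighScore = highScore });
    }

    // Reading is the reverse.
    public static SaveGame Load(string json)
    {
        return JsonConvert.DeserializeObject<SaveGame>(json);
    }
}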


Winning

To win in 2048, you must create the 2048 tile. I implemented this by keeping track of the highest tile created when squashing. As soon as that tile hits 2048, the game transitions to Game Over with a "won" flag set. It's a bit of a hack, but I used this flag to change the game over message to "You Won!" - and that's it.

With this in place, I had a fully playable game. You can win, lose, and try to beat your own high score. I actually found myself playing it a lot during development, and it felt pretty close to the 2048 experience.


Graphics

Let's face it - I'm no artist. Making things look nice is hard for me but it's the thing that makes the biggest difference to how people see your work. The first thing to do was to update the tile set. I ended up with this...



Not perfect, but it does the job.

The next step was to add some UI colors and a "skin".

Putting this together, I ended up with:



Looks a bit better than the "Mother of God!" original, but still not ideal. But it's the best I could do in the time.

I cheated on the skin: instead of 9-slicing the UI skin and using each element in the correct place on the panel (corners, sides, middle), I used a single stretched panel texture. This creates horrible scaling artefacts for larger panels, as you can see here:



That's probably simple to fix; but I've not done it yet.


User Interface

To this point, the UI is built from 5 primitive classes.
- GuiElement - the base element
- GuiElementContainer - a collection of elements
- GuiCanvas - the main UI parent (it is a container)
- GuiPanel - the sub panels you see, with textures and color
- GuiTextLabel - the text labels

Using these classes, I could compose the UI you saw earlier.

Here I was using absolute positioning for pretty much everything and realised that if I wanted to ship on Android, or even different Windows screen sizes, I'd need to make my UI scale.

So I sat about redesigning it. I had a look around at the various ways other people have done it. Notably:
- Urho3D
- CEGUI
- Unity 5

For this iteration, I settled on something comparable to CEGUI and Urho3D. That is, my co-ordinate system has a dimension type with both a Scale and an Offset. These dimensions are used for positions and sizes, allowing me to create elements that are 50% of their container's size and positioned in the center, with an additional offset on top. Part of doing this was adding things like text centering for my labels, which works - but it's crude and doesn't handle multiline text at the moment.
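A minimal sketch of that dimension type (my own reconstruction of the idea, not the actual code):

// A UI dimension that is part relative (Scale) and part absolute (Offset).
public struct UiDim
{
    public float Scale;   // fraction of the parent's size, e.g. 0.5f = 50%
    public float Offset;  // absolute pixels added on top

    public UiDim(float scale, float offset)
    {
        Scale = scale;
        Offset = offset;
    }

    // Resolve to pixels against the parent's size.
    public float Resolve(float parentSize)
    {
        return (parentSize * Scale) + Offset;
    }
}

So an element half the width of its parent, nudged 10px right, would use new UiDim(0.5f, 10f) for its x dimension.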

Implementing all of this took much longer than I expected and basically chewed up all of my remaining polish time - and it's still far from ideal. I didn't get to add sounds, for example, and I have no player feedback systems in place beyond the basic movement.

Where I am with this right now is that it's functional, but still has a bunch of shortcomings. For starters, I have to code my UI - I don't yet have layout xml files. Also, I don't have basic things such as min/max sizing or margins. I'd like to aspire to implementing something like the new Unity 5 system, which basically allows me to anchor corners to a part of their parent, allowing nice scaling. My UI still doesn't scale very nicely and it needs work before I can even think about letting it loose on a non-fixed screen size that I control.



Retrospective

Even a simple game such as 2048 can still take a bunch of work when creating it from scratch. The biggest thing I got from this is that I feel like I balanced my goals for the jam with the end result fairly well. I wanted a fully functional game and some tech that I could potentially reuse next time. I feel I achieved both of these things. Sure, I could have created this game in 4-6 hours in Unity or Unreal and I wouldn't have had any of the UI issues or anything else that caused me problems, but that wasn't the goal here.

Overall, I was relatively happy with the architecture of my game engine. It was really quick to iterate on and felt comfortable to work with. Sure, the Entity-Component and Service paradigm may have been overkill for 2048, but it ended up working well and leaves me with something to build on next time. I think the only real area that was problematic was that of the UI, but that's something I'll look at again next time around.

I did want to add a game over "screen", as well as other menu screens (see last N game history, etc), and it became apparent that because my systems weren't designed around screens or priorities, I had no clean way of filtering inputs, or pausing the gamestate. This is something I will seek to address for the next game on this codebase.

I was happy with how I structured my process. My goals led nicely into each other and provided something tangible when they were achieved. Having something on screen with the "Mother of God!" artwork was crude, but it let me accelerate to "playable" very quickly - and I was happy with that. I am also glad I put some unit tests around my game logic; they let me iterate over the rules quickly and with safety, providing early feedback when I broke something.

One small snag was that I was brand new to Pyxel Edit, so I was learning how it worked as I went. I have a few niggles with it, but I plan to keep using it after this to get the hang of it.

All in all, I had a bunch of fun doing this. The game may be shitty but it's created from scratch and is as playable as the game it clones. For the first time in years I got into the "flow" with my own projects and went with it for two days.


Take Homes

Here are some things that I'm taking with me, which could serve as broader advice to people wanting to do something similar (or to total beginners, in fact).

- Know your goals up front
- Make sure your goal is realistic for your ability/knowledge
- Make sure your goal is realistic for your time
- Pick your tools wisely; use things you feel productive in or can learn quickly
- Iterate quickly to playable, with horrid art all the way
- Add tests around critical systems; they will help you later
- Don't underestimate things such as user interfaces; they can become a sinkhole
- Know when something is "good enough" and move on to the next thing; come back to things when you need to
- Have fun!

As a side note, the game 2048 is a great project for a beginner to game development to take on. It covers most things you would need to do in a game and provides a couple of interesting challenges in how you might want to execute (and optimize) the moves.


Next Steps

As this turned out to be pretty successful, I'm planning on making another game. Perhaps a Space Invaders or Galaga clone, using bits I can salvage from this codebase (which should be a fair amount).

I may also write a follow-up post about the technical design of this codebase.

evolutional

 

Post Mortem: Personal Labor Day "Game Jam" - Day 1

Motivation

It's been a long time since I've done anything directly game related in my spare time. I work 40+ hours a week on video games and by the time I get home, I often can't summon up the motivation to do anything dev related at home.

Sure, I've still got ideas and desires to work on projects I've carried in my head (or as failed prototypes) for literally years, but when I sit down to work on them at home the mojo never seems to be there.

I was interested in participating in the recent Ludum Dare 33 game jam, but never got round to it - and I regret not doing it.

So, over labor day I decided to sit down and make a game; the simplest game I could put together in 48 hours with only me and no support.

Because I enjoy a little bit of the tech work (engines, frameworks, etc), I chose to start from scratch in MonoGame instead of going the traditional Unity/Unreal 4 route. I also wanted to make something that resembled a decent architecture and not just a smashed-together hack - primarily because I intend to do this again and wanted a base to work from.

The Game

The game I picked was 2048 - the crappy mobile game from Ketchapp (which is a clone of 1024, which is a clone of Threes). If you've never played it, it's a game about matching pairs of numbers together to make their double, and repeating until you make a 2048 tile.

The reasons for picking this game:
- I play this game
- Easy game mechanics
- Minimal graphics requirements
- Minimal audio requirements
- Achievable in a weekend
- I stand a good chance of finishing it


The Tools

To make this game, I used the following tools:
- Visual Studio 2013 (+ ReSharper 8)
- MonoGame 3.4
- Pyxel Edit
- paint.net

I didn't get around to implementing audio, so didn't use any tool for that.

Day 1 Goals

I didn't make a concrete plan for putting this together, but I did set mini "goals" for what I wanted to see.

Here's my list of day 1 goals and which were achieved.

- Lay down the foundations: resource system, entity system, component system
- Throw together some crude graphics and get a game board on screen
- Implement the input system & basic movement rules for tiles
- Implement the "squashing" of two adjacent tiles when you move them together
- Add "game over" detection (no more moves) & game reset
- Add scoring + display of scoring on screen

Implementing the basics of the framework was pretty simple. I put together an architecture I've played with in the past - the Entity-Component & Services architecture. The idea of this is that we have GameObjects (entities), which have Components attached. Unlike systems such as Unity, my components don't contain logic (or if they do, it's only calculations). Instead, logic is collected in GameServices, which have Start/Stop/Update methods.

To tie all this together, I have a simple Event system, which passes messages around to anyone who subscribes to a given type. I implemented a class called GameObjectComponentView, which subscribes to the GameObjectComponentAdded/Removed messages (and GameObjectDestroyed) and filters them based on specified component types. This object then maintains a list of objects it knows about, allowing you to enumerate over them knowing that each of them has the components you need. This formed the basis of most of my systems.
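To make the idea concrete, here's a stripped-down sketch of such a view (illustrative only; the real class hooks the event system rather than being called directly, and the GameObject stand-in here is minimal):

using System;
using System.Collections.Generic;
using System.Linq;

// Minimal stand-in for the engine's GameObject.
public class GameObject
{
    private readonly HashSet<Type> _components = new HashSet<Type>();
    public void AddComponent(Type t) { _components.Add(t); }
    public bool HasComponent(Type t) { return _components.Contains(t); }
}

// Maintains a live list of objects carrying all required component types.
public class GameObjectComponentView
{
    private readonly Type[] _required;
    private readonly List<GameObject> _matches = new List<GameObject>();

    public GameObjectComponentView(params Type[] required)
    {
        _required = required;
        // Real version: subscribe to GameObjectComponentAdded/Removed
        // and GameObjectDestroyed here.
    }

    public IEnumerable<GameObject> Objects { get { return _matches; } }

    // Called from the component added/removed event handlers.
    public void Reevaluate(GameObject obj)
    {
        bool qualifies = _required.All(obj.HasComponent);
        if (qualifies && !_matches.Contains(obj)) _matches.Add(obj);
        else if (!qualifies) _matches.Remove(obj);
    }
}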

To draw the main game grid, I decided to use a tile map. I created a crude TileMap2DComponent which held the state of the board (4x4) and some texture atlas information about how to draw each tile. All of the game board tiles were held in a single texture, with each tile being 64x64 (too small). My tile map had two layers: one held the actual pieces, and the other held a "frame" for each piece. I could have drawn the pieces with the frame over them, but I was lazy.

Tying this all together, I created my game service called NumberGameService, which used the TileMap2DComponent and my new NumberGamePlayerComponent.


Input System

Now I had something on screen, it was time to start thinking about inputs. I followed the standard approach of creating a simple "state map" and "action map". Essentially I had a json file which describes the key actions (press/unpress) and states (pressed/not pressed) and how to map them to game actions.

A snippet of this file is here:

{
  "ActionKeys": [
    { "Key": "Escape", "Action": "action.quit" },
    { "Key": "Space", "Action": "action.newgame" },
  ],
  "StateKeys": [
    { "Key": "Up", "Action": "action.move.up" },
    { "Key": "Down", "Action": "action.move.down" },
    { "Key": "Left", "Action": "action.move.left" },
    { "Key": "Right", "Action": "action.move.right" },
  ]
}
My input service would then monitor these keys and raise events with the corresponding action in them. For the day 1 implementation, I raised an event for each one - but this was changed in day 2.
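The polling side of that can be quite small. A sketch of the state-key half, assuming the mappings above have been deserialized into a list (the types and names here are my own illustration, not the project's code):

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework.Input;

public class KeyStateMapping
{
    public Keys Key { get; set; }       // e.g. Keys.Up
    public string Action { get; set; }  // e.g. "action.move.up"
}

public class InputService
{
    private readonly List<KeyStateMapping> _stateKeys;
    private readonly Action<string> _raiseAction;

    public InputService(List<KeyStateMapping> stateKeys, Action<string> raiseAction)
    {
        _stateKeys = stateKeys;
        _raiseAction = raiseAction;
    }

    // Poll once per tick; every held key raises its mapped action event.
    public void Update()
    {
        var keyboard = Keyboard.GetState();
        foreach (var mapping in _stateKeys)
        {
            if (keyboard.IsKeyDown(mapping.Key))
            {
                _raiseAction(mapping.Action);
            }
        }
    }
}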

Now I could raise an event based on input, it was a case of hooking my systems up to handle it. The Event Manager I wrote could handle this with ease; it was a case of making my NumberGameService subscribe to specific events and passing it a lambda/delegate to handle them, eg:

_context.EventService.Subscribe(this, (evts, e) =>
{
    // do stuff
});
This proved to be pretty flexible for everything I needed - but I did optimize it a bit later.


Game Logic

With inputs ready to go, I finally started to implement the game logic. Initially, I started this in my NumberGameService, but it rapidly became painful to verify and iterate on. To address this, I created a NumberGameBoardLogic class, which had the methods MoveLeft()/MoveRight()/MoveUp()/MoveDown() and would modify the NumberGamePlayerComponent and the TileMap2DComponent. With this class, I set about writing some unit tests, whereby I could set the board to a known state, simulate a move, and verify it did what it was supposed to.

An example would be:

[TestMethod]
public void MoveRight_MovesTileWhenSingleTileCanMoveRight()
{
    var uut = CreateUut((a, b) => 0);
    uut.ResetBoard();
    uut.SetBoardTile(2, 0, TileType.Number1);

    var result = uut.Move(PlayerMovementState.Right);

    result.Should().Be(true);
    uut.GetBoardTile(2, 0).Should().Be(TileType.Empty);
    uut.GetBoardTile(3, 0).Should().Be(TileType.Number1);
}
It's not a great example of a test, but I wrote several tests to cover all the various scenarios I could see. This proved incredibly useful for picking up places where I broke the code when I changed things - especially when I came to add scoring. If anything, I wish I'd created far more tests than I did.

Squashing Tiles

With the movement set up, I was ready to implement "squashing" and therefore scoring. To do this, I went back to my unit tests and started implementing the tests to show the results, then I went back and fixed the code to pass the tests. This proved to be useful, as I broke a few things that would have taken me a while to find!

The move and squash logic is brute force and isn't optimised; but for this project it was good enough!

It goes something like this:

bool MoveDown()
{
    var score = _score;
    var moves = DoMoveDown();
    SquashDown();
    if (_score - score <= 0)
    {
        return moves > 0;
    }
    DoMoveDown();
    return true;
}
In case you don't know how 2048 works: a move pushes all the tiles to the extreme of the direction you choose. So if you had a row like this:

|2| | |2|

And moved right, the result would be:

| | |2|2|

But then, because these tiles moved together and have the same number, they get squashed...

| | | |4|

There is a second move pass if a squash has occurred, because you can end up in situations like this:

|2|2|2|2|

This should become:

| | |4|4|

And not:

| |4| |4|

Or even:

| | | |8|
After each move, the game spawns a new low tile in an empty spot, and you keep going.

With all this implemented, I had something playable! And then I ran out of time - it was getting late and I needed a screen break. I felt pleased that I had a playable thing at the end of the day - and it was as fun as the 2048 game.

It's worth noting that it looked like shit. Here's the programmer art screenshot of how the game looked at the end of day 1...



To quote @EGVRoom when he saw this image... "Mother of God!". And it's true, it was bad.

With Day 1 wrapped up, I laid down a few goals for Day 2 - implement the Game Over/Win conditions (to make it a "game") and to polish what I had.

But I'll leave that for my next entry.

evolutional

 

Chamois - Fluent Assertion Syntax for C++

One library that I love in .NET is Fluent Assertions. It takes the assertion syntax in unit tests and wraps it up in a natural language syntax. In doing so, we get a nice looking structure for our tests that reads like a requirement, rather than a bunch of code.

For example:

assert(my_variable == true);

Sure, it's nice and terse for a programmer, but what if it were more fluent?

Assert::That(my_variable).Should().BeTrue();



Or a scenario where we want to check a range:

assert(my_variable >= min_value && my_variable <= max_value);

Oh crap, now we're getting more complex! But with a fluent syntax, it becomes more readable:

Assert::That(my_variable).Should().BeInRange(min_value, max_value);


Inspired by the brilliant .NET Fluent Assertions, I created Chamois, a header-only Fluent Assertion library for C++ Unit Testing.

The primary ways of expressing a test are as follows:

Chamois::Assert::That(value).Should().Be(expected_value);
Chamois::Assert::That(value).Should().NotBe(expected_value);


I currently support the following types:
- Integral numerics (short, int, long, float, double)
- Strings (via std::string/std::wstring, including const char* / const wchar_t*)
- Arrays (simple arrays)
- Pointers (naked pointers only)
- Any object by reference that supports the equality operator

Support for more standard library containers, smart pointers, etc is planned.



Currently, I only support Microsoft's test framework assertions, but I'll be adding more soon - including C-style assert for simple tests.

evolutional

 

On C++ Naming Conventions

I threw myself back into the deep end of C++ again a few months ago, having spent the last couple of years with an emphasis on C# and .NET.

One thing I've been thinking about is the old subject of coding style. Every software house tends to have a house style, so at work you just adopt that - but at home, on personal projects, I find myself drifting around stylistically, at least with naming conventions.

There are a few main styles that I tend to consider ok for me:

.NET Style
Microsoft have a standard set of style guides for .NET / C# programs which for the most part people adopt (although not always, even within Microsoft public code).

The .NET style is simple:
- PascalCase for class names
- PascalCase for public methods (all methods, in fact)
- Interface classes start with an "I"
- Public properties/variables are PascalCase
- camelCase for variable names
- Namespaces are PascalCase
- Assembly names are PascalCase

Code would look something like this:

namespace MyProject
{
    public interface IBar
    {
    }

    public class Foo : IBar
    {
        public void DoSomething(int parameterName);
        public int SomeProperty { get; set; }
    }
}
It's worth noting that the get/set property functions end up compiled as methods named "get_SomeProperty"/"set_SomeProperty" under the hood.


Java Style

Java programs also have a common style.
- PascalCase for class names
- Interface classes start with an "I" (but not always)
- camelCase for public methods
- Public properties/variables are camelCase, exposed via getXX()/setXX() methods (as Java doesn't have properties)
- camelCase for variable names
- Namespaces are lowercase
- Package names are lowercase

In Java, the above example looks something like:

package myproject;

public interface IBar
{
}

public class Foo implements IBar
{
    public void doSomething(int parameterName);
    public int getSomeProperty();
    public void setSomeProperty(int value);
}
C Style / C++ Standard Library Style

C++ seems to have a tonne of styles. One of the first you'll come across is that of the standard library, which appears to have adopted the C style convention, largely to remain consistent with the old code of yesteryear.

Here, we have:
- lowercase_delimited for class names
- Interface (abstract) classes aren't special
- lowercase_delimited for public methods
- Public properties/variables are lowercase_delimited; there doesn't seem to be a consistent get/set style (?)
- lowercase_delimited for variable names
- Namespaces are lowercase

Back to our example:

namespace myproject
{
    class bar
    {
    public:
        virtual ~bar() { }
    };

    class foo : public bar
    {
    public:
        void do_something(int parameter_name);
        int get_some_property();
        void set_some_property(int value);
    };
}
Looking through some other sources, the Google C++ guide seems to lean towards a blend of the Java style and the C++/standard library style.

eg:
- PascalCase for class names
- Interface classes start with an "I" (but not always)
- PascalCase for public methods
- Public properties/variables are lowercase_delimited, exposed via my_property() or set_my_property()
- camelCase for variable names
- Namespaces are lowercase_delimited

This leads to:

namespace my_project
{
    class Bar
    {
    public:
        virtual ~Bar() { }
    };

    class Foo : public Bar
    {
    public:
        void DoSomething(int parameterName);
        int some_property();
        void set_some_property(int value);
    };
}
With all this, the Google version seems to make a lot of sense.




What's your style and how did you pick it?

evolutional

 

First version of FlatBuffers in .NET

I've committed my first alpha version of the FlatBuffers port to .NET to GitHub.

Since my last post, I decided to port the Java version to .NET as directly as I could, which means usage between Java and C# should be similar.

I ported the JavaTest buffer creation code, which looks as follows:

var fbb = new FlatBufferBuilder(1);

// We set up the same values as monsterdata.json:
var str = fbb.CreateString("MyMonster");

Monster.StartInventoryVector(fbb, 5);
for (int i = 4; i >= 0; i--)
{
    fbb.AddByte((byte)i);
}
var inv = fbb.EndVector();

Monster.StartMonster(fbb);
Monster.AddHp(fbb, (short)20);
var mon2 = Monster.EndMonster(fbb);

Monster.StartTest4Vector(fbb, 2);
MyTest.Test.CreateTest(fbb, (short)10, (byte)20);
MyTest.Test.CreateTest(fbb, (short)30, (byte)40);
var test4 = fbb.EndVector();

Monster.StartMonster(fbb);
Monster.AddPos(fbb, Vec3.CreateVec3(fbb, 1.0f, 2.0f, 3.0f, 3.0, (byte)4, (short)5, (byte)6));
Monster.AddHp(fbb, (short)80);
Monster.AddName(fbb, str);
Monster.AddInventory(fbb, inv);
Monster.AddTestType(fbb, (byte)1);
Monster.AddTest(fbb, mon2);
Monster.AddTest4(fbb, test4);
var mon = Monster.EndMonster(fbb);

fbb.Finish(mon);
My test code reads a file output from C++ and asserts that it's valid (readable), then creates the same buffer in .NET and runs the same tests against it - exactly like the JavaTest does. My .NET version passes both sides, which means we should be compliant with the FlatBuffer format and can achieve interop with C++ and Java.

By being in C#, this should be usable from Unity - either as code, or binary. I haven't verified this yet though.

Now that I've got a working version, I can start to address my other concerns - that it doesn't "look" nice to .NET developers ;)

What this really means is that I'll be building a reflection based serialization layer on top of this, so we can do code-first and/or transparent POCO serialization from the FlatBuffer format.

EDIT:

After thinking about this, there are a few things I want to look at improving. The first version was a straight port, and this works fine but sticks out in C# like a sore thumb.

- Modify the C# code generator to create a property for get access (eg: not monster.Hp(); but monster.Hp;)
- Change the generated C# objects to XXXReader and XXXWriter; this lets me have a cleaner split of read/write concerns
- Refactor the FlatBufferBuilder into a FlatBufferReader and a FlatBufferWriter - again, to separate the concerns a bit
- Look into a nicer way of handling arrays/vectors
- Build a "typemap", effectively a definition of a FlatBuffer type which can be used to serialize or code-gen POCOs
- Add the serialization support for POCOs (mentioned above)
- Add a MediaTypeFormatter for ASP.NET / etc to allow simple model binding with the FlatBuffer format

evolutional

 

FlatBuffers in .NET

I've been tinkering with Google's new FlatBuffers protocol and have an experimental port of it running in .NET (C#).

FlatBuffers is interesting, and there are a couple of other alternatives in the form of ProtoBuf, Cap'n Proto, Simple Binary Encoding (SBE) and, to a lesser extent, JSON, XML and YAML. These protocols all share a common goal: to serialize your data into a format that's portable between applications - be it as an on-disk format (save game, level, etc) or as a wire object for communicating with network services.

JSON is really the go-to choice now for people who use web services a lot, and it works well as a configuration file format too. However, I've recently been noticing a trend towards binary encoded data, especially in the arena of games (client/server especially); the reasoning behind this is often the performance of both encode and decode, and the fact that text-based formats take up more network bandwidth.

One issue with binary protocols has always been that it can be extremely easy to change the schema and render the existing data invalid.

Take the simple object:

class Vector3
{
public:
    float X;
    float Y;
    float Z;
};

class Monster
{
public:
    int Health;
    Vector3 Position;
};
We would probably serialize it out with something like this (the body of this snippet was garbled in the original page; reconstructed here to mirror the deserialization below):

void SerializeMonster(std::ostream &stream, const Monster &monster)
{
    stream << monster.Health;
    SerializeVector3(stream, monster.Position);
}

void SerializeVector3(std::ostream &stream, const Vector3 &vector)
{
    stream << vector.X;
    stream << vector.Y;
    stream << vector.Z;
}
And deserialization would be something like this:

Monster* DeserializeMonster(std::istream &stream)
{
    auto monster = new Monster();
    stream >> monster->Health;
    monster->Position = *DeserializeVector3(stream);
    return monster;
}

Vector3* DeserializeVector3(std::istream &stream)
{
    auto vector = new Vector3();
    stream >> vector->X;
    stream >> vector->Y;
    stream >> vector->Z;
    return vector;
}
If I wanted to add additional fields to the Monster class (mana, color, weapons carried, etc), I would have to be very careful to keep my functions in sync; moreover, if I change the serialization order of anything, all my stored data is useless.

This problem gets even bigger if you start talking outside of a single application, such as sending data from your client to a network server that might have been written in .NET, Java or any other language.

A few years ago, Google released V2 of their Protobuf format, which aims to bring a bit of sanity to this problem. They defined their serialization objects in a simple IDL (interface definition language). The IDL used by Protobuf has a formal native type format, and your data is defined as "messages". Protobuf has a "compiler" which then generates your C++ (and other languages') serialization code from the IDL.

message Vector3 {
  required float x = 1;
  required float y = 2;
  required float z = 3;
}

message Monster {
  required int32 health = 1;
  required Vector3 position = 2;
}
A key thing to note here is the numbering used; this defines the serialization order of the field. Protobuf mandates that if you want to maintain backwards compatibility, you must not change the ordering of existing items; new items are appended with a higher sequence number and added as "optional" or with a default value. Anyone interested should look here for more details on this.

Protobuf does some clever encoding of the data to minimize the size; things such as packing numeric types and omitting null values.

Projects like FlatBuffers (and the others mentioned above) cite that the encoding/decoding step of Protobuf is expensive - both in terms of processing power and memory allocation. This can be especially true if done frequently - such as if your network server is communicating in protobuf format. In some of the online services I've worked on, serialization to/from the wire format to an internal format has been an area to focus optimization on.

FlatBuffers, Cap'n Proto and SBE take the stance that the data belonging to the object is laid out the same in memory as it is on disk/during transport, thus bypassing the encoding and object allocation steps entirely. This strategy enables simple mmap() use or streaming of the data, at the expense of data size. I'm not going to go into the pros/cons here, as the Cap'n Proto FAQ does a better job. However, all of these in-memory binary data formats acknowledge that having a "schema" which can be modified while retaining backwards compatibility is a good thing. Basically, they want the benefits of Protobuf, with less overhead.

So back to the topic of this post; how does .NET fit into this? Given that engines such as Unity provide C# support and XNA / MonoGame / FNA are all managed frameworks capable of targeting multiple platforms, FlatBuffers in .NET has a couple of obvious use cases:
- Data exchange between native code and C# editors/tools
- Game data files with a smaller footprint than JSON/XML/YAML
- Wire data exchange between FlatBuffer-enabled clients/servers (to a lesser extent, due to bandwidth concerns, but viable in the same datacentre)
- Interprocess communication between .NET and native services (same-box scenarios)

I started porting the FlatBuffers protocol to .NET and hit a slight dilemma: should we treat FlatBuffers as a serialization format, or should it also be the "memory" format too?

In Marc Gravell's Protobuf-net project, he explicitly took the stance that "rather than being "a protocol buffers implementation that happens to be on .NET", it is a ".NET serializer that happens to use protocol buffers" - the emphasis is on being familiar to .NET users (for example, working on mutable, code-first classes if you want)."

Right now, I'm facing a similar question. What is more important for .NET users? Is it the ability to code first and serialize, or to be able to access FlatBuffer data without serializing?

Here's an example of the code-first approach:

[FlatBuffersContract(ObjectType.Struct)]
public class Vector3
{
    public float X { get; set; }
    public float Y { get; set; }
    public float Z { get; set; }
}

[FlatBuffersContract(ObjectType.Object)]
public class Monster
{
    public int Health { get; set; }
    public Vector3 Position { get; set; }
}

var someBufferStream = ...;
var monster = FlatBuffers.Serializer.Deserialize<Monster>(someBufferStream);
We declare our various classes and use an attribute to annotate them. Then we explicitly take data from the stream to create new Monster (and Vector3) objects. From here, we can discard the buffer, as we have our .NET objects to work with.

Serialization would be the opposite: create your objects in .NET and call a Serialize() method to fill a Flatbuffer.

The benefit of this is that it is very .NET centric; we are used to code-first approaches and serializing to/from our entity classes. Conversely, we now have to either manually sync our IDL files or write a tool to generate them. We also begin allocating many objects - especially if the buffer is large and has a lot of depth.

An alternative approach is the accessor pattern, which the FlatBuffers Java port takes. Here, we effectively pass a reference to the data in FlatBuffer format and use accessor methods to read what we need. The accessor classes would be generated by the IDL parser tool.

public class MonsterAccessor
{
    public MonsterAccessor(FlatBufferStream flatBuffer, int objectOffset) { .. }

    public int Health
    {
        get { flatBuffer.ReadInt(... objectOffset + offset of health ...); }
    }

    public Vector3Accessor Position
    {
        get
        {
            return new Vector3Accessor(flatBuffer,
                flatBuffer.ReadOffset(... objectOffset + offset of position ...));
        }
    }
}

public class Vector3Accessor
{
    public Vector3Accessor(FlatBufferStream flatBuffer, int objectOffset) { .. }

    public float X
    {
        get { flatBuffer.ReadInt(... objectOffset + offset of x ...); }
    }

    public float Y
    {
        get { flatBuffer.ReadOffset(... objectOffset + offset of y ...); }
    }

    // etc
}

var someBufferStream = ...;
var offset = ...;

var monsterAccessor = new MonsterAccessor(someBufferStream, offset);
var monsterHealth = monsterAccessor.Health;
var vector3Accessor = monsterAccessor.Position;
// etc
In addition to the Reads, we would also supply "Write" methods to populate the buffer structures.

The benefit of this approach is that buffer access becomes lightweight; we are no longer deserializing whole objects if we don't need to, and we're not allocating memory (except for things like strings). We also get direct sync with the IDL version.
The downside is that working like this feels quite alien in the .NET world; the code to handle buffer access is very manual and, in the case of the position access, very clunky. It's likely that we will end up creating lots of accessors - unless we rely on everything being static, which also becomes pretty... nasty.

This latter approach is more akin to a "port" of Flatbuffers, rather than treating it like a serialization type.


So - with these approaches in mind; what would YOUR favoured access method for FlatBuffers be in .NET? Would you focus primarily on FlatBuffers as a serialization format, much like Marc Gravell did with protobuf-net? Or would you look at a true port? Or maybe there's some hybrid approach?

What would your use cases be?

evolutional

 

Porting Accidental Noise Library to .NET

I've been inspired by the images that JTippetts' Accidental Noise Library can generate. However, these days I do very little C++ work, so I felt like tinkering with a little project that would help me dip my toe back in this water and also have a bit of fun.

As a result, I decided to have a crack at porting Accidental to .NET. I'm aware that James Petruzzi has already done a partial port and that Mike Tucker has integrated this partially into Unity3D, but this wasn't the exercise for me.

I wanted to: a) have a little play with the noise; b) understand the library and routines a little more; and c) tinker with C++ again in a gentle porting project. If I have something cool or useful at the end of it, then it's a bonus.

Accidental is quite a nice system. It's effectively a series of modules that you configure and then chain together to get some pretty pictures out of the other end. Most modules typically have a "Source" and return results via a "Get" method; as such, the results of one module typically feed into another and so on until you get something useful out the other end.
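In C# terms, the pattern boils down to something like this (a sketch of the concept, not the actual ported interfaces):

// Each module produces a value for a coordinate; modules chain via Source.
public interface IModule2D
{
    double Get(double x, double y);
}

// Example: a module that transforms the output of its Source.
public class ScaleOffset2D : IModule2D
{
    public IModule2D Source { get; set; }
    public double Scale { get; set; }
    public double Offset { get; set; }

    public double Get(double x, double y)
    {
        return (Source.Get(x, y) * Scale) + Offset;
    }
}

A generator sits at the front of the chain, transforms stack behind it, and sampling the last module pulls values through the whole pipeline.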

The original Accidental library supports 2D (x,y), 3D (x,y,z), 4D (x,y,z,w) and 6D (x,y,z,w,u,v) modules/generators, all off the same base. In my current implementation, I'm deliberately splitting the various parts into their dimensional constituents. I don't know yet if this is the best approach, but it's making life easier for me.

I've ported over a bunch of stuff so far; but there's plenty to do.

In the original Accidental Noise Library, JTippetts used Lua to configure his pipeline. I'm going to look at alternative, more C#-friendly solutions for this - possibly JSON.

Right now, everything's just in code so seeing the results of a tweak can be a little slow.

Here's some pictures:



Fractal: Billow (Default), autocorrected to -1,1 range and scaled to 6,6.



Cellular: Scaled to 6,6



Fractal: fBM, frequency 3



Fractal: Rigid Multi, octaves: 3, scaled to 6,6

I'll probably push the source up to GitHub at some point soon.

evolutional

 

How Uncle Bob, Fowler & Beck, nCrunch and ReSharper changed my code

I've been trying to do Test Driven Development (TDD) for quite some time and struggled. I've always been what I would term a "traditional" coder - write something, debug it in some custom-rigged harness to make sure it works as expected, then move on to the next piece of the system, and repeat ad infinitum. This style of coding has got me through literally 20 years of development, so it can't be that bad, right?

The problem with this approach is that defects become harder to find and worse, if you change things you have no real idea how they affect other parts of the system. A few years ago, I started adding Unit tests to my code. I'd write my code and then add unit tests around it to break it. This would be good, as I'd end up with more confidence in my code and how it worked. I could change things and know quickly if I broke something. However, this style of coding ended up with untestable or "hard to test" code. I'd end up with methods which required complex setup scenarios to test - and then if I changed the inner workings of the code, all of the old tests would start failing due to the setup being wrong or out of date. Worse, if you needed to change a method signature or modify an interface, it would mean hundreds of tests to change. I'm ashamed to admit it, but at times like this it became easier to discard the tests or restart the class or project.

I felt like I was stuck in a loop. I wanted to "do good" and have tested code, but also I found it hard to achieve. Something needed to change.

I picked up a copy of "Clean Code" by "Uncle" Bob Martin. Anyone who's into "the agile scene" will know of him, he's some sort of evangelist god of good code. I read Clean Code from cover to cover and took a lot of it in. This book evangelizes testable code like no other. It also evangelizes "clean" code. Clean code is simply "code that you understand without documentation". Much of the book talks about removing coding "smells" and references Martin Fowler & Kent Beck's "Refactoring" in the process. I came away from Clean Code realising that I write dirty code. Really dirty code, and *this* is why I can't test it. I also realised that the reason I can't test it is because it was written without being able to be tested; and this is because it's dirty. And when I can't test it, I don't know if it works or not. In fact, I've written stuff that I don't know if it can be run or not.

I've spent most of my life writing code. Some of it is admittedly hacky, but even stuff I thought was "production" quality turned out to be hacky. To find out more, I went into Fowler & Beck's "Refactoring". This book is simply a mine of useful information. The single biggest takeaway from the book is that you must leave any code (including your own) in a better, more understandable and testable state than you found it. This can be as simple as giving variables good names (not "i", but something that makes sense in context) and removing duplication by reducing complex functions into smaller, more composable and understandable parts. It also reinforces SOLID concepts and makes you realise that a lot of the code you write isn't actually great OO code - even though you think it is.

So after spending a lot of the December holiday break with the stark realisation that I write dirty, untestable code whilst thinking I was doing "ok" and knowing that there is a better way, I decided to start doing it.

Two tools jumped out at me: ReSharper and nCrunch. First, nCrunch gives you instant visibility of whether your code is tested or not. Each and every line of executable code has a dot next to it: red, green or black. The dots update in near realtime to let you know if your code is working or not. Red means your code is covered by a failing test, green means it's covered by a passing test, and black is almost the worst kind - it means the line isn't covered by a test at all. In short, nCrunch takes away the barrier to attempting TDD and actually makes you embrace it. Having used it daily for a month, I now ensure that the "black" is minimal - in fact, it's easier to write a test and make it pass than to face figuring out how to cover the black. You actively try to avoid black... because not knowing is worse than knowing you screwed up. Black means you might have superfluous code - stuff you wrote without needing it.

One big thing in Fowler & Beck's "Refactoring" book is that you migrate your code to a cleaner state through refactoring. Refactoring is the act of keeping existing code usable whilst changing it to a new, cleaner state. A key factor in this is having the confidence to change it, which means having test coverage to prove you haven't broken it. My prior approach to refactoring was more of a scorched-earth process. I'd rip out old classes and methods, leaving code which didn't compile for hours, or even days... and then, after it all, find out my tests were broken and needed fixing. At this point, I'd often ditch the tests... which I know now is a cardinal sin. Fowler and Beck advocate slowly migrating to your new state whilst ensuring all tests pass with each change. nCrunch provides a mechanism to allow this - you know straight away when things don't compile and things don't work. So you fix them, and move on.

ReSharper provides a bunch of useful tools to help you refactor, be it renaming stuff, moving namespaces, changing scope, etc - just generally helping you clean code up line by line. Combined with nCrunch, I can make changes to code and have them verified within seconds - in fact, before I even think I'm done, I know I am... because everything is green.

With both of these tools, I've had code which has compiled for 99% of my working day. And I've had test coverage of over 90% for the most part. I realised the other day that I'd done a large chunk of coding without having to manually compile once and hadn't hit the debugger in days. Yet I knew my code worked, because my tests were all green... and there were literally hundreds of them. I recently added a new feature to my code using tests and refactoring; adding this feature resulted in the removal of the old behaviour. Using the techniques I'd learned and the new tools, I was able to add the new feature and migrate my codebase from the old one over the course of a few hours. When it came to the point of removing a fundamental class in the old system, I hit delete and had literally one failing test out of hundreds. And that test was to do with the old behaviour and not relevant to the new system. At this point, I knew I'd hit a pivotal moment in my life.

I'm now in a position that I never thought I'd be. I'm embracing TDD and adding or making large changes to my codebase without fear; I know with confidence that my code works and it's actually more understandable than it used to be. As a result, I feel like I'm becoming a better programmer. Not because I'm smart, but because I know what I write works. And this is a great position to be in.

evolutional

 

Cloudy Requests

For the past year or so I've been working on cloud services for Fable Legends. I've been thinking of writing some articles on the subject... So, an open request: if I were to write a few articles on writing Azure-based cloud services in C#, what would you want to see?

evolutional

 

Ocyphaps

I've changed quite a few things since my last entry. First of all, I hit a wall with my Unity rip-off component system. Quite simply, I found it really shitty to work with.

Instead, I took a cue from Juliean and took a look at Alec Thomas's EntityX system, and then at Richard Lord's Ash. After pondering these for a bit, I threw out what I had and started again.

I now have:
- Components: purely data
- GameObjects: a simple entity which "holds" components
- Systems: things which manage collections of components or game objects
- Messages: the way systems communicate

I'm looking at adding a "View" or "Node" system to let systems manage "Nodes" - groups of components. Ash has this concept, and it's quite interesting; it allows a system to say "subscribe me to objects with components A, B and D" without having to look them up.

Anyway, I picked up an old project idea and started to run with the new system to see how it worked, and I'm quite happy.

I like using code names for projects, mostly because I don't have to think of a game name on the spot. My latest project is called Ocyphaps. It's based on an old Amiga classic, with trappings and inspirations from Terraria.

evolutional

 

Data 'ponents

I've been working (slowly) on the Unity-like Component System that backs my current game project. As time goes on, it's becoming less and less like Unity.

So I was thinking over the "tagging" system I added last time and quickly came to a couple of realisations.

1) Having a "string" based tag is limiting, what I really want is an object-based tag that would let me attach arbitrary data to the entity
2) Following on from this, I realised that this was akin to having a data-component

Now I'm thinking of splitting my component system into DataComponents and BehaviourComponents. The key difference is that DataComponents would be pure data attached to the GameObject, while Behaviours are more like components are now, with Start/Update methods. I would also look at beefing up the message passing so that the primary route for inter-object communication is via async messaging.
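As a sketch of the split I have in mind (purely illustrative; none of this is written yet):

// Pure data: no logic, just state attached to a GameObject.
public abstract class DataComponent
{
}

public class HealthData : DataComponent
{
    public int Current;
    public int Max;
}

// Behaviours keep the lifecycle hooks and operate on the data components.
public abstract class BehaviourComponent
{
    public abstract void Start();
    public abstract void Update(float deltaTime);
}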

Why would I do this? The reason at the moment is very much down to the fact that I have a bunch of data associated with one component that I need to share with a couple of others. The way I'm doing it at the moment is to have components reference each other to grab the shared data. This feels bad. I could either consolidate them into one component, or try something new.

I'm thinking of doing the data component way.

evolutional

 

Articles submitted

This weekend I submitted not one, but two articles to GameDev.net. They're about Microsoft's awesome cloud service, Team Foundation Service. I hope they get approved.

This week I did awesome stuff with MonoGame. I can't talk about the project, but I'm blown away by how good MonoGame is, especially for Windows 8-based devices. If you're a fan of XNA (and if you're not, you should be), then you should really pick up MonoGame and have a play.

I'm still contemplating open sourcing my Unity-like XNA/MonoGame framework. I'm not sure what's holding me back, really. It is quite nerve-wracking making your code open.

evolutional

 

More on MonoGame/XNA Components

I've been slowly plugging away at the combat prototype for my game. I've kept a clean separation of the "Engine" and the "Game", with each in its own assembly. The Engine code is the foundation - a Unity-like GameObject + component model that sits on top of MonoGame (or XNA). I'm not going to talk about the game part, but rather the continuation of the Engine core.


Messaging

Lately, I've added a Messaging system. This was primarily due to needing to communicate between GameObjects. For component-component communication on the same object, I'm just referencing the components directly. I decided to go with messaging for inter-object communication because I really didn't want to start tightly coupling objects to each other. My messaging implementation is different to Unity's; I'll talk about that here.

When a component wants to hook a message, it must call "RegisterMessageHandler" on the GameObject it's attached to.

This has the following prototype:

void RegisterMessageHandler(string messageType, Action<IGameObject, IGameObjectMessage> handler);

As you can see, it takes a handler function as a parameter. The GameObject itself then listens for any messages hooked by its components and routes them to each component when they are received.

Each message must implement the interface IGameObjectMessage, which has a single required property - "MessageName". I'm tempted to remove this requirement entirely and make it work more like Components do - binding messages by their Type. This would give me something such as:

void RegisterMessageHandler<TMessage>(Action<IGameObject, TMessage> handler);
This feels more elegant. However, it may create an issue if components are destroyed and don't unhook their messages first. I'm contemplating using C#'s event system and following a Weak Event Pattern. Thoughts are appreciated.
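For reference, the core of a weak-handler approach might look like this (a sketch, not committed code; it holds a weak reference to the subscriber rather than to the delegate itself, and doesn't handle static methods):

using System;
using System.Reflection;

// Holds a weak reference to the subscriber so a destroyed component
// that forgot to unhook doesn't get kept alive by the handler.
public class WeakMessageHandler
{
    private readonly WeakReference _target;
    private readonly MethodInfo _method;

    public WeakMessageHandler(Delegate handler)
    {
        _target = new WeakReference(handler.Target);
        _method = handler.Method;
    }

    // Returns false once the subscriber has been collected,
    // so the dispatcher knows to prune this entry.
    public bool TryInvoke(object sender, object message)
    {
        var target = _target.Target;
        if (target == null) return false;
        _method.Invoke(target, new[] { sender, message });
        return true;
    }
}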

There are a few ways to send a message to other objects. I'll list them out:
- SendMessage - sends a message to a particular GameObject
- BroadcastMessage - sends a message to a GameObject and all its children
- SendMessageUpwards - sends the message to the GameObject's parent
- BroadcastMessageUpwards - sends the message to the GameObject's parent and its ancestors

One thing I'm debating with the Broadcast options is whether they should cascade all the way up and/or down the tree to children of children, etc.

I've also got a GameObjectGraph. Basically, a way of storing and accessing my "root" objects. I'm currently figuring out a nice way of exposing this to the world other than using a Singleton. It's likely to appear as a property on the GameObject in some way.

I need this because it acts as a sort of scene graph. It is the root object of the "world" and is called when I want to both Update() and Render() my scene. It also acts a place to fire global messages to notify all GameObjects in the world about something. At this level, I have:
BroadcastMessage - sends a message immediately to all objects, starting at the root and cascading downwards
QueueBroadcastMessage - drops a message onto a queue to be sent on the next tick, before Update() is called (see the sketch below)
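Here's a minimal sketch of how I picture the queued variant working at the graph level - the names and details are illustrative, not final code:

// Inside the GameObjectGraph (sketch)
private readonly Queue<IGameObjectMessage> pending = new Queue<IGameObjectMessage>();

public void QueueBroadcastMessage(IGameObjectMessage message)
{
    pending.Enqueue(message);
}

public void Update(GameTime gameTime)
{
    // Drain queued messages before updating, so every object sees them
    // at a consistent point in the frame
    while (pending.Count > 0)
        BroadcastMessage(pending.Dequeue());

    foreach (var root in this.RootObjects)
        root.Update(gameTime);
}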

The message handlers themselves have the signature:

void SomeHandler(IGameObject sender, IGameObjectMessage message);
The first parameter is the GameObject which sent the message. The astute among you will realise that it's actually a Component which sends messages, not the GameObject. It's debatable whether the first parameter should reference the component instead of the GameObject, as this would open up component-component direct communications.


Tagging

I've added a simple Tag collection to each object. These are text labels which can be searched on; a GameObject can have as many as needed. I currently use them for Linq queries over child objects:

foreach (var child in this.Children.Where(o => o.HasTag("sometag")))
{
    // do something with the tagged object
}

I'm contemplating extending this system to become a Triple Tag - basically namespace:predicate=value. This would make things nice and flexible, especially when combined with Linq queries. For example, if I had two sides in the game and I wanted to find all units of type "tank" on side A, I could write a Linq query such as the following (a sketch of a possible Tag type comes after it):

foreach (var tank in this.Children.Where(o => o.HasTagNamespace("SideA")
    && o.HasTagPredicate("UnitType")
    && o.HasTagValue("Tank")))
{
    // do something with each tank on side A
}
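If I go down that route, the tag itself could be a tiny immutable type. This is a speculative sketch, not code I've written yet:

// Speculative triple-tag type: namespace:predicate=value
public sealed class Tag
{
    public string Namespace { get; private set; }
    public string Predicate { get; private set; }
    public string Value { get; private set; }

    public Tag(string ns, string predicate, string value)
    {
        this.Namespace = ns;
        this.Predicate = predicate;
        this.Value = value;
    }

    // Parses "SideA:UnitType=Tank" into its three parts
    public static Tag Parse(string text)
    {
        var nsParts = text.Split(':');
        var predParts = nsParts[1].Split('=');
        return new Tag(nsParts[0], predParts[0], predParts[1]);
    }
}

The HasTagNamespace/HasTagPredicate/HasTagValue helpers would then just be Linq queries over the object's Tag collection.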

That's about it for now. There's a few current niggles and areas I need to start looking into. These are:
Object creation via cloning (allows me to have Prefabs)
Object destruction (queue to kill next frame)
Object serialisation to JSON/BSON (allows me to save/load object definitions from files)
General object management and tracking.


The current system is nice enough, I guess - it has quite a few rough edges and things I'm changing as I go - but I already plan on reusing this part of the system in other projects. I'm also contemplating open sourcing it on GitHub; I'm not sure though...

evolutional

More on Components

The past few days I've been implementing the Component-Entity model in my current game prototype. I didn't like the way it was progressing in my previous implementation: the objects were getting large and the inheritance was already getting out of hand. It was a classic case of abusing inheritance where I should have favoured composition. Although my code is a prototype, it may turn into more - and even if it doesn't, it was becoming unmanageable.

So I took the code from my recent journal entry and started working it in. It's heavily inspired by Unity's model, where the base GameObject is both a scene graph node and a component container. Unity's GameObject is also a sealed class, which prevents you from inheriting from it - you're stuck with what they give you.

The "base" components I've put in place so far:
Transform - A 2d transformation component which holds position, rotation, scale and things to do with placement in the world
Camera2d - A 2d camera which is the "window" on the world
Renderer - A 2d sprite renderer
Behaviour - A base component which is intended to form the basis of custom logic


Using these 4 components I've been able to pull apart my previous efforts and compose my world. I've then split my existing game logic into several behaviour components, covering everything from movement to targeting and weapon fire.

There are a few things I've noticed from Unity's implementation, and therefore mine. Most (in fact, almost all) GameObjects have a Transform component - so many that I've chosen to create one by default when a GameObject is constructed, and to add a shortcut property on GameObject which lets me get to the Transform quickly. I've also followed Unity's lead and added a GameObject property to retrieve the object this component is attached to.
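As a sketch, the default-construction behaviour looks something like this - simplified, with the component plumbing omitted:

// Sketch: create the near-ubiquitous Transform up front and expose a shortcut
public class GameObject
{
    public TransformComponent Transform { get; private set; }

    public GameObject()
    {
        // Created by default because almost every object needs one
        this.Transform = this.AddComponent<TransformComponent>();
    }

    // AddComponent<T>(), GetComponent<T>() and the rest omitted for brevity
}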

It's interesting, as some Component-Entity models dictate that components may only communicate with each other via messaging - that is, components have no knowledge of, or direct access to, each other. Unity's model breaks this in half: suddenly you're able to access other components on your object (or indeed on other objects) via this.GameObject.Transform, and so on. Personally, I don't mind this; in my system the Renderer needs to know the position of an object, and many behaviours need to modify it.

Right now I'm looking at the scene graph part of GameObjects. You can attach and remove them from each other and cascade Update() calls, but I'm now working on the matrix transform aspect, so that child objects can be positioned relative to their parents. That won't take long to sort out.
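The core of it is just composing the child's local matrix with its parent's world matrix. A sketch, assuming a 2d Transform with Position, Rotation and Scale fields and XNA's Matrix type:

// Inside the Transform component (sketch):
// a child's world transform is its local transform times its parent's
public Matrix GetWorldMatrix()
{
    Matrix local = Matrix.CreateScale(this.Scale)
                 * Matrix.CreateRotationZ(this.Rotation)
                 * Matrix.CreateTranslation(this.Position.X, this.Position.Y, 0f);

    return this.Parent == null
        ? local
        : local * this.Parent.GetWorldMatrix();
}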

I'm looking at adding a generic messaging and/or event system to the GameObjects/Components. This would allow other objects to subscribe and react to asynchronous messages from others. This would be handy for damage events, or anything else that needs to observe events from others.

I'll probably post another update with some details of the implementation.

evolutional

Test-driven: Or, how I saw the light

T minus 1 month.

In my current project I'm working with a rare-breed programmer. An SDET, a Software Development Engineer in Test. These guys are accomplished programmers but focus on software quality.

"Hey, I write quality software", I say, "It works, doesn't it?".


T minus 4 years.

Working in a software team at a financial shop. The Head of Software keeps talking about Agile software development, Test-driven Development, Continuous Integration, etc. Sounds like a plan. Try it. Nice idea; can't see how it works. "I always test my code," I say. I do. Console application which wraps the component, fire in data, breakpoint it, verify outputs. Sorted. Job done.


T minus 3 years 11 months.

Cajoled into writing "Unit Tests". Great: double the amount of code to write in the same amount of time = less functionality. Features creep, pressures mount, tests get disabled. Code rot. "I write quality software, it works, doesn't it?".


T minus 3 years 9 months.

Epic megafuckton of a bomb drops. Trying to debug, fix, unravel. Breakpoints are my friend. "It works on my machine". Late nights. Slipped deadlines. Broken original functionality. QA test team gets the blame. "Test should have spotted this, it's a basic bloody error!".


T minus 6 weeks.

A leopard doesn't need to change its spots. "I've been coding in OOP systems for years, I know the best practices!"


T minus 7 years.

Massive refactor of game engine. Deeply nested inheritance trees. Ugly! Decide to favour composition.


T minus 5 years.

Massive refactor of game engine. Too many hidden dependencies. This OO-lark is a pain!


T minus 3 weeks.

"We need to ensure high code quality on this, try looking at test driven development or something".

"No, that doesn't work. Tried it, takes too much time and gets in the way".


T minus 2 weeks.

Dependency Injection. Heard of it. Tried it in the past, found it to be useful in some situations. I'll learn that later. No rush.


T minus 7 days.

"This code you've written needs a real Cloud Service deployment!".

"Use dev fabric, it's what it's there for! I've written a load of tests to verify it works!"

"But you still need a hard external dependency!"

"That's life".


T minus 4 days.

Trying to test code which relies on something I have no control over. "I know this can happen, but I have no idea when or what causes it. It's part of Azure, how do I sort this?".


T minus 3 days.

"I need to learn this stuff". Spends long weekend reading, on YouTube and more, practicing, practicing, practicing.



T

The way I write code now is fundamentally different to how I wrote code 7 years ago, 7 months ago, even 7 days ago. I always prided myself on being a half decent software engineer, but ultimately I will confess - the majority of the code I've ever written in my lifetime has been very hard to isolate and test. I've followed best practices as much as possible, but have almost always ended up with hidden dependencies, internal state that's hard to affect or even worse, hard external dependencies.

Having embraced concepts like Dependency Injection, Continuous Integration, Mocking/Faking and Test-driven Development I finally feel like I've added security and confidence around the things I code.

I've tried all this in the past, even blogged about how I didn't get on with it. It's a tough sell. It adds time to development, perhaps even doubles it, but it's worth it.

Using DI and mocks, I can abstract away hard dependencies on external services or libraries and replace them with fakes. I can cause those obscure exceptions and make sure my code handles them. I don't have to wait for a blue moon, I don't have to slay 15 virgin princesses and hope that quantum mechanics favours my outcome today. I can fake the whole thing and force it. I have full control over the environment my code works in. It's like being a god of bits.
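A contrived example of the pattern - every type here is invented for illustration, not lifted from my project:

// The external service hides behind an interface the game code depends on
public interface ITableStore
{
    void Insert(string table, object entity);
}

// Production code receives the dependency via its constructor
public class ScoreService
{
    private readonly ITableStore store;

    public ScoreService(ITableStore store) { this.store = store; }

    public bool TrySaveScore(string player, int score)
    {
        try
        {
            store.Insert("scores", new { player, score });
            return true;
        }
        catch (TimeoutException)
        {
            // The path that's nearly impossible to hit against the real service
            return false;
        }
    }
}

// In a test, a fake forces the obscure failure on demand
public class ThrowingTableStore : ITableStore
{
    public void Insert(string table, object entity)
    {
        throw new TimeoutException();
    }
}

Construct new ScoreService(new ThrowingTableStore()) in a test and the failure path runs in milliseconds, no cloud required.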

What's more, it has fundamentally changed the way I code. It sounds odd, but in everything I code now, I see the coupling; I see how state migrates between related components. I've done this stuff before - after all, it's "good engineering practice" - but let's face it: I must have gotten it wrong. Doesn't matter if the odd one slips in there, does it?

I realise that to many of you this may be a "well, duh". But to me it's a huge swing - in attitude, mindset, approach and style. It's a paradigm shift that required a "click" to happen. I knew all this stuff before, but now I live it. I find it hard to see how I could go back to my old ways.

evolutional

Riff: Understanding Component-Entity-Systems

First of all, Boreal Games has posted a great intro article to Component-Entity Systems. If you've not read it yet, I strongly suggest you do.

This journal entry is a riff of that theme.

For those who don't know, Unity 3d is based heavily on the Entity-Component model. The root "entity" in Unity is the GameObject, which, along with a whole load of other things, is a container for components. Unity has a load of components ready-made out of the box, which shows how powerful this model can be: complex GameObjects are composed from a myriad of simple ones!

So, taking Boreal's article and Unity's model, it got me thinking: "how would you implement a similar system in your own .NET projects?". I'm using C#/.NET, which gives us a whole load of lovely type inference, but you can do something similar in C++ with some hacks/tweaks/black magic. I won't discuss that here, as I expect Boreal Games will cover it.

First of all, let's start with the basic definition of a component. In .NET I've defined it as a very simple interface:

public interface IComponent
{
    string Name { get; }
}


It's probably about as simple as you can get. It simply exposes its name, which is a friendly name for the component.

From here, I took a look at Unity's API documentation for the GameObject class. The following major component methods were obvious:
AddComponent - a factory method, creates a new component instance and adds to the object
GetComponent - a generic method which returns the added component of type TComponent, or null if it doesn't exist
GetComponents - returns all components associated with the object

On top of this were a bunch of child methods, which pulled back all components attached to the children of the GameObject, and a bunch of default strongly typed helper component properties, likely used for optimisation of commonly accessed components. For the sake of simplicity, I'm going to skip these and focus on the core functionality.

To do this, I specified an IComponentCollection interface.

public interface IComponentCollection
{
    TComponent GetComponent<TComponent>() where TComponent : class, IComponent;
    IEnumerable<IComponent> GetComponents();
    TComponent AddComponent<TComponent>() where TComponent : class, IComponent, new();
    void RemoveComponent<TComponent>() where TComponent : class, IComponent;
}


This is relatively simple; it allows the Add, Remove and Retrieval of components which conform to the IComponent interface we specified earlier. There's a bit of .NET generic foo which says the component type must be a class and have a default constructor, but other than that, it's simple.

So we have our component interface and our collection interface; let's implement them. For simplicity's sake, I chose to base my IGameObject interface on IComponentCollection. IGameObject has a bunch of other stuff in it, such as the InstanceId Guid and some other things you might want to add. You could choose to implement IComponentCollection in your own class if you needed to, but for this example it wasn't necessary.

public interface IGameObject : IComponentCollection
{
    bool IsActive { get; }
    Guid InstanceId { get; }
    void SetActive(bool status);
}



In this demo, I chose to pick a couple of obvious properties from Unity to demonstrate that the GameObject is more than just a component collection. You could very easily add some of the nice "helper" properties you saw on the Unity GameObject, which actually return existing component instances (via GetComponent).

Great. All the key interfaces are created, let's create an implementation of GameObject.

public class GameObject : IGameObject
{
    Dictionary<string, IComponent> components;

    public GameObject() : this(Guid.NewGuid()) { }

    public GameObject(Guid instanceId)
    {
        this.InstanceId = instanceId;
        this.components = new Dictionary<string, IComponent>();
        this.IsActive = true;
    }

    public bool IsActive { get; private set; }
    public Guid InstanceId { get; private set; }

    public TComponent GetComponent<TComponent>() where TComponent : class, IComponent
    {
        IComponent res = null;
        var t = components.TryGetValue(typeof(TComponent).Name, out res);
        if (t == false)
            return null;

        return res as TComponent;
    }

    public IEnumerable<IComponent> GetComponents()
    {
        return this.components.Values;
    }

    public TComponent AddComponent<TComponent>() where TComponent : class, IComponent, new()
    {
        var existing = this.GetComponent<TComponent>();
        if (existing != null)
            throw new ComponentExistsException(typeof(TComponent).Name);

        // Factory method
        var newComponent = new TComponent();
        this.components.Add(newComponent.GetType().Name, newComponent);
        return newComponent;
    }

    public void SetActive(bool status)
    {
        this.IsActive = status;
    }

    public void RemoveComponent<TComponent>() where TComponent : class, IComponent
    {
        var found = this.GetComponent<TComponent>();
        if (found == null)
            throw new ComponentNotFoundException(typeof(TComponent).Name);

        components.Remove(typeof(TComponent).Name);
    }
}



This class is really simple. It keeps an internal Dictionary keyed on component type name, mapping to the component instance.
GetComponent simply looks up the type name requested on the generic method and returns it if it exists; failing that, it returns null.
GetComponents simply returns the values in the dictionary.
AddComponent checks to see if the component is already there, if so throws a custom exception. If it's not there, we create a new instance from the IComponent implementation's default constructor and add to the dictionary.
RemoveComponent allows us to remove an existing component if it exists; if not, we throw a custom exception.

We can easily extend Add/Remove to accept a pre-built component instance. It's up to you to add this ;) - though a sketch of one possible shape follows.
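An untested sketch of that overload:

// Inside the GameObject class. Keyed on typeof(TComponent).Name so that
// GetComponent<TComponent>() finds the instance again later.
public TComponent AddComponent<TComponent>(TComponent component)
    where TComponent : class, IComponent
{
    if (component == null)
        throw new ArgumentNullException("component");

    if (this.GetComponent<TComponent>() != null)
        throw new ComponentExistsException(typeof(TComponent).Name);

    this.components.Add(typeof(TComponent).Name, component);
    return component;
}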

So with this in place, let's create a custom component. The simplest possible thing that many (if not all) GameObjects will have is the "Transform" component. In Unity this is relatively complete - in our example it simply has a Position of Vector3d type. Realistically, your type will be more like Unity's, with a Rotation and the various orientation vectors and matrices which help with using this in a 3d world.

Our TransformComponent looks something like this:

public class TransformComponent : IComponent
{
    public string Name { get { return "Transform"; } }

    public Vector3d Position { get; set; }
}


Very, very simple. As you see, it implements the IComponent interface and adds its own properties on top. We could add properties, methods, events - whatever we need.

As another example, I added a simple test component for the sake of unit testing.

public class TestFakeComponent : IComponent
{
    public string Name { get { return "this is a test"; } }

    public string SayHello()
    {
        return this.SayHello("Component System");
    }

    public string SayHello(string who)
    {
        return string.Format("Hello {0}", who);
    }
}

As long as we implement IComponent, we're all good.

Let's see this in action. Here's some code adapted from my unit tests to show usage...

var go = new GameObject();
var transform = go.AddComponent<TransformComponent>();
transform.Position = new Vector3d(1000, 2000, 3000);



We create a new GameObject, add a new transform component to it, then modify the Position on it.

After this, we can find the transform and do something with it...
var found = go.GetComponent<TransformComponent>();


Simple.

I've packaged up my Visual Studio 2012 solution for you to play with. It contains the basic implementation detailed here and some simple MSTest-based unit tests.

Have fun.

evolutional

Unity eh?

So in my last post I mentioned that I've decided to use Unity to build out my game. Having prototyped some basic concepts, I've quickly realised that my idea would better suit a 2d viewpoint. I've also realised that my target devices right now are Windows RT, Windows Phone 8 and Windows 8 Store apps - all of which are currently unsupported by Unity. This is something of a pain: whilst Unity have announced that they will support these devices at some point in the future, they've not given a date, nor is there any real certainty around it.

This is leading me away from Unity as a main tool, and making me look at MonoGame instead. MonoGame is at version 3 and right now supports the whole XNA API v4. This appeals, as I can use my normal IDE and .NET 4.5 (Unity is still on 2.0), and publish to my target devices easily. The downside is that I have more work to do, as XNA is a framework - and a simple one at that. I think Unity still has some use to me as a prototyping ground, but I think it'll end there... unless they release full Win8 ecosystem support in the next couple of weeks.

Oh well.


The paper design of the combat mechanic is going quite well. Hopefully when it's a bit more nailed down I can start sharing some of it. I'm working on a prototype to prove it out.

evolutional

Poking around Unity

For the past couple of weeks I've turned my time to dissecting the problem statement "If I were to build a simple casual game in the Supercell mould, what would it take?".

However, instead of planning it to any real degree, I just picked some tech and started throwing it together.

I'm relatively pleased with the result so far. A simple server running in Windows Azure and a simple text console "client" which I can perform the various actions for the game. I've got a few of the core features in place already, which is satisfying:
Simple tech-tree of buildings and upgrades which is data-driven (defined in a JSON doc)
Basic resource generation and harvesting mechanic
Server-side action validation and state persistence

I can "log" in from anywhere and play (in theory). Although right now there's nothing more than building, upgrading to an extra level and resource harvesting.
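As a flavour of the data-driven side, a building definition in the JSON doc might look something like this - the field names here are illustrative rather than my actual schema:

{
  "id": "lumber_mill",
  "cost": { "gold": 100, "wood": 50 },
  "buildTimeSeconds": 60,
  "requires": ["town_hall_level_2"],
  "produces": { "resource": "wood", "amountPerHour": 120 }
}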

I'm getting to the point where my text-based front end isn't satisfying, and I'm looking to put things together in a form that more closely resembles a game. A couple of issues arise here already: my intended target platform is Windows Phone 8/Windows 8, which seems to limit the options I have for out-of-the-box engines and SDKs from the get-go.

Initially I was planning on using MonoGame. This is basically an open-source XNA and would let me target WinPhone8 from day 1 and migrate to other platforms as needed. However, the downside of MonoGame is that you have to build lots of things you'd get as part of a full engine, or bolt on from third-party libraries. This is a bit of a bummer from a productivity standpoint.

I've decided to work in Unity 4 "free" for the time being. At the moment this will be mostly prototype work - trying to get the game as I want it in an environment that allows rapid iteration. The downside is that if I end up loving Unity, I'll have to fork out a load of cash and wait in the hope they provide WinPhone 8 support. But right now the benefits of using it as a prototyping ground are strong, so it feels like the simple option.

Now, to figure out how to actually use Unity :)

Toodles.

evolutional

2012 - A retrospective

First of all, I apologise for not being around as much as I used to be. GDNet used to be my #1 visited site, but lately I hardly ever poke around. I'll talk about that another time though.

I thought I'd do the "usual" retrospective of the year. I don't normally do these, but this year I have stuff to talk about. So here we go.

I started the year new in the role of Database Architect for a credit referencing and marketing company. I'd basically spent years climbing this big old ladder to get somewhere high up in the company. Hell, I'd been there 8 or 9 years; I'd been learning everything in my field, going to big events, doing exams and really trying hard to become the go-to person in my technology area. I love SQL Server and the data platform behind it - I spent so much effort learning details and getting into the real dirt of it all. I loved it... until I realised what my company thought the role of "architect" meant.

Instead of being the people involved in projects, helping with the hard, finicky stuff, they put their architects in the role of "clerk" - someone who plans only at the highest level, without getting their hands even remotely dirty in the detail. Everything I'd spent years learning to reach this acclaimed position basically had to fall away, replaced by a level of detail that was nothing more than boxes on a chart.

Anyone who's followed my journal (journey?) over the years will appreciate that this is not who I am. I like technical stuff. I like the dirty things in development. The high level, the abstract doesn't work for me when it's the sole focus of my job.

So at some point early in the year I got a LinkedIn request from Simon Sabin, one of the SQL Server GODS. I accepted, looked at his profile and saw an advert on the side - an advert from Microsoft, doing stuff with data for a game. I clicked it out of intrigue more than anything.

Fast forward a few weeks. I'd quit my old role. This job I'd spent so much effort getting to, I'd walked away. Wow. Who'd do that? Me, I guess.

I'd left because the role I saw on LinkedIn turned out to be working for Lionhead Studios. I laughed when I saw it - I was like "yeah, me, working at Lionhead Studios - I've played their games like... forever, right? I've never worked in games either, they won't take me". I applied anyway. Interviewed a couple of times, got an offer. Took it...

I LEFT MY ENTIRE LIFE.

This sounds odd, but I did. I left my house, the town I'd lived in for 10 years, all my friends, my job. Everything. My partner stayed up north with my cat, waiting to see if it worked out.

It did. I now work for Lionhead Studios. Microsoft Studios, EMEA. I've been here 6 months. I'm still here. And I see myself being here for quite some time.

Lionhead is quite frankly an excellent place to be. The studio is really just a breeding ground for brilliance - every day I see stuff that makes me think "wow, I work with people who can do this?!". I love the lunchtimes, playing MP games against (and with) each other. I love the SCRUM reviews where you can see what people have been working on. I love Creative Day and the side projects people have put their time into. I love the random chats with people I've only just met. I love feeling like I've known these people all my life. I love the passion, the energy and the enthusiasm of the people in the studio.

Yet we're part of a much, much wider team. I've been working with people from loads of other studios too - 343, Turn 10, Rare, the new London studio and others. I have regular contact with the Xbox LIVE team and a whole load of other awesome people at Microsoft. We're all in this together, part of a huge team of people who love games. Lionhead and Microsoft Studios are quite frankly an awesome place to be. You can see the future being made, and it's amazing.

This year has been a year of huge change. I have literally changed my entire life. Career, place of living, even friends. It's been hard at times but it's been totally worth it. I feel like my focus has changed completely. I'm working in an area I care about, for a team I care about, for a project I care about and for a company I care about. It's a good feeling.

Now just to get my partner and cat down to Guildford and I'll be happy.

Looking forward to 2013.



evolutional

Dusting off the old

I've recently decided there are two things I need to look at to get me back up to speed. One of them is DirectX 11 and more modern graphics coding. I'll talk about the other thing later on.

My old codebases were traditionally built upon OpenGL and the fixed-function pipeline. Yes, yes, I know. I was lazy and didn't want to learn all this fancy shader stuff - why did I need to, when I could just do what I wanted quickly in GL, and my games were never going to be that graphically good anyway, right? Well, in DX11 we basically have no choice in the matter, so it's about time I learned this stuff.

I had two choices: start afresh and implement a basic DX11 engine from scratch, or find an old project of mine, retrofit it and improve it over time. I chose the latter - an old shmup project I was working on in 2004 seemed to fit the bill, so I picked that up.

So far, it's mostly been about ripping out SDL, replacing it with raw Win32 code and then pulling out the OpenGL code - all whilst getting back up to speed with my old codebase. One thing that really struck me was how much my coding practices have improved since then. There's a lot of memory-management stuff I should, and will, be ripping out later (I used superpig's refcounted pointer class from Enginuity...!), and there are probably too many singletons around (although I've kept the calls to them clean). The asset loaders and resource management really need improving, moving away from what's there towards perhaps a more handle-based approach that loads/unloads on demand.

There's other stuff too, such as there being too much blurring between the various layers in the engine (input, view, logic) - so I'll probably be restructuring it all to follow a more MVC-style pattern which clearly separates the "views" from the control (input/network) and the game model (logic). A bit more data-driven stuff couldn't hurt either.

It could be quite an interesting project.

evolutional

Runtime Compiled C++

I came across the Runtime-Compiled C++ project the other day and I'm intrigued. I've always been fascinated by scripting, virtual machines, reflection and other dynamic runtime stuff that adds flexibility into development. The Unreal 4 engine also has this with the "Hot Reload" feature, so I can see it really becoming more popular.

I'm interested in what effect this new sort of dynamically compiled C++ will have on scripting in particular.

One of the use cases for scripting has always been that you can change a script and modify behaviour quickly without having to stop and recompile your game. Runtime-compiled C++ blows that out of the water, especially when you see that it also has crash protection built in (e.g. a crash won't kill your game).

Naturally, scripting will still have a valid place in development - script languages are often simpler for non-developers to use and often present functionality at a higher level than you'd get coding in C++ or another low-level language. But it does make me wonder whether this runtime-compiled stuff will move some people away from scripts.

I'm going to grab the runtime-compiled code and have a play around with it.

evolutional

Joining the 'biz

So this is kinda weird really. But also frickin' cool.

For as long as I can remember I've been doing game dev and game dev related stuff in my spare time. Mostly as a hobby, but also to help out people on this very site. I've done all sorts of stuff for GameDev.net - moderator, author, interviewer, news editor, etc - all in the name of me caring about games and the people who make them.

About 18 months ago, I decided to focus the majority of my effort on my career as a database professional - I wanted to become the best damn database guy I could, so I focussed a lot of time on that. I studied for the MCP exams, went to conferences, immersed myself in blogs and technical books - and loved every moment. Combined with a 10-month crunch at work, this meant I had to let the game side slip for a while; in many ways I'd considered leaving it behind as something I loved but couldn't focus on seriously.

Then recently I accidentally found a posting for a job which looked perfect for me - it combined my love of technology with my love of games and game dev. After interviewing a couple of times and meeting the people I'd potentially be working with, I KNEW it was perfect. I wanted this job more than any other. I felt like I'd found what I'd been after for years: a bunch of people I felt an instant connection with - easy, natural and passionate about their craft.

This story ends with a happy ending.

In July I start what I see as my dream job, working at Lionhead Studios. I'm giving up everything of my current world - job, house, friends, etc and moving 200 miles away to start a new life.

And I really can't wait.

Turns out if you dare to dream, sometimes they come true.

evolutional

Diablo 3

I may have to buy a new mouse soon; I think Diablo 3 just destroyed my old one.

evolutional

Taking your game to the Cloud with SQL Azure - Introducing SQL Azure

Taking your game to the Cloud with SQL Azure

Part 1 - Introducing SQL Azure

So you're writing an online game; chances are you've been thinking about how to store all the data your game needs. You'll have account/billing information, player data, game zone data, inventory data and a whole host of other items that make up the persistent world of your game.

On top of this, you'll want to audit a load of information to help you track issues, analyse hot spots in the game, predict player behaviour and identify fraud or cheating - when you add it all together, your game's data requirements really start to mount up.

The traditional route has pretty much been "well, I'll buy a server and put a database engine on it". Anyone who's ever set up a database platform will know there's a whole host of considerations: you have to plan your storage requirements, make sure the server you're buying will handle the peak transaction throughput and that your system will scale to your predicted player capacity; then you have to plan your high-availability strategy to ensure you hit your 99.9% uptime; then you have to host all of your kit, likely in a couple of places, to remove any single point of failure in the event of a disaster. All of this before you've even bought a box or installed a piece of software.

If you're planning an adventurous title - or even if you're not - you'll probably find that you're forced to over-specify your hardware or, even worse, make compromises that become very expensive to rectify if your game hits the limits of the hardware you have in place.

Imagine you've taken the plunge and set up your database server; now you have to manage your backups and your growing data files and transaction logs. You have to make sure your data is secured on disk to prevent people wandering off with all of your billing data. You have to manage your disks, making sure your file-placement strategy and I/O subsystem keep the database running with as low a latency as possible. And you have to provide 24/7 support to keep all of this spinning, because if you don't, your players are going to get mad and leave your game.

This is quite a lot to put in place. And if you don't do it, your game won't stand a chance.

Before you quit your idea and retreat back to doing single-player Pong clones, you'll be relieved to hear that much of the pain I've just described is taken care of by SQL Azure, Microsoft's cloud database platform. SQL Azure is part of the Windows Azure cloud platform, which provides a huge range of functionality and hosting solutions that I won't talk about here.

In the rest of this article, I'll talk more about SQL Azure and cover how you go about setting it up and creating your first database in the cloud!


What you'll need

To follow along with this article, you'll need the following software and services:
A Windows LIVE Account (plus a credit card and mobile phone to set up an Azure trial)
Visual Studio 2010 with SP1, or the Visual Studio 2010 shell
SQL Server Data Tools (SSDT)
Microsoft SQL Server Express 2012 RC Tools


Introducing SQL Azure

SQL Azure is a cloud-based relational database platform built upon Microsoft SQL Server, the same engine that's available for on-premise installations. However, SQL Azure and SQL Server have different release cycles, so their features and functionality differ; you should really consider them to be different platforms.

SQL Server has several features that aren't present in SQL Azure; some are missing simply because they don't make sense or aren't needed in the cloud. Conversely, SQL Azure supports the concept of Federations, useful for scaling out, which isn't available in the on-premise version of SQL Server. The following MSDN link lists the T-SQL features not currently supported by Azure.

The key benefits of Azure are:

[indent=1]Self-Management - As the platform is in the cloud, you don't have to incur any of the cost of buying hardware and hosting an on-premise SQL Server.

[indent=1]You can provision the storage you need, when you need it. This is a key benefit, as you can find that your initial costs are reduced because you don't have to buy all the storage you may need up front. This doesn't mean you can neglect storage and capacity planning, as you will still need to plan for the growth of your databases and the arbitrary size limits that Azure imposes upon them (1-5GB for Web and up to 150GB for Business).

[indent=1]Scalability - Azure lets you easily scale the size of your databases up to the fixed limits mentioned earlier. However, Azure provides a powerful scale-out mechanism in the form of Federation and sharding. These techniques allow you to horizontally partition your data across many databases, allowing you to elastically grow or shrink the number of these when you need them.

[indent=1]High Availability - The Azure platform has been designed for 99.9% uptime. Microsoft provides you with load balancing, redundancy and automatic failover, so you don't have to worry too much about keeping your service online.

SQL Azure Pricing

First off, SQL Azure isn't free, so you'll need to understand how much you'll be paying for the service. When costing up SQL Azure, you need to know three main things:
How many databases do you need?
How big is each database going to be?
How much data transfer are you going to be doing?

There are two SQL Azure editions, Web and Business, which effectively determine the size cap of the databases you're running. Regardless of edition, the price is based upon the size of each database hosted in Azure. Each database's cost starts at a fixed point, and then, depending on size, you pay for additional storage on top - until you hit the next boundary. Azure is billed in monthly cycles, but pricing is calculated daily, based on the size of your databases each day.

The third thing you need to consider when pricing up Azure is the amount of data transfer you'll be doing. The good news is that inbound transfers are free - any data you send to the cloud won't cost you anything. However, you will be charged for outbound transfers - currently $0.12 per GB in the US and Europe ($0.19/GB elsewhere). This means you'll have to think about the data you're pulling off your databases - make sure your queries are selective so you only retrieve what you need, or you'll end up paying for it!
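To put some numbers on that: if your game's clients pulled a combined 50GB out of a US or European datacentre over a month, the outbound transfer charge would be roughly 50 x $0.12 = $6.00, on top of your database storage costs.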

Here are the details for the Azure pricing plans.


Environment Setup

Great, now we've an idea about what SQL Azure is and how it's costed, we can start looking into getting set up and actually using it.
Before we begin, it's best to get your development environment set up. I recommend two key tools for working with SQL Server and SQL Azure; these are the SQL Server Data Tools and SQL Server Management Studio. Although it's possible to manage your Azure databases through the online portal, it's recommended that you work locally and publish to your Azure servers.

SQL Server Management Studio

SQL Server Management Studio (SSMS) is a database professional's staple tool for managing and querying SQL Server and Azure. If you don't already have the SQL Server 2008R2 or SQL Server 2012 client tools installed, you should get the SQL Server Express 2012 RC edition. It's worth installing both the local database engine and the management tools, as you'll likely want to work locally as well as on Azure.

SQL Server Data Tools

SSDT is based upon Visual Studio 2010; it provides a complete development environment for SQL Server and supports project deployment to Azure.

Whilst SSMS is more of a management and querying tool, SSDT is more of a project-centric development tool which can generate deployment packages for you. It's generally a good idea to work in a local environment and then deploy your changes to Azure when you're satisfied with them.

You can install SSDT by following the instructions in this link. The process is simple, but you will need to have either Visual Studio 2010 SP1 or the Visual Studio 2010 Shell installed already.


Setting up SQL Azure

Now that you've got the tools, you can go ahead and set up a SQL Azure account. Microsoft offer a 3 month trial that covers the entire Azure platform (not just SQL), so I'd recommend signing up to that whilst you can. The trial gives you the following limits:
750 hours compute
20GB storage
1GB database (Web edition)
20GB data transfer


To sign up for SQL Azure, you'll need to have a Windows LIVE account - most people may have one, but if you don't then you'll need to create one to use Azure. You'll also need a credit card for billing and a mobile phone to verify your identity. If you're signing up to the trial, then your credit card won't be billed unless you go over the very generous limits they impose on the trial, or let your service continue after the 3 months.
When you're ready, head over to https://windows.azure.com/ and complete your signup process.

Creating your first SQL Azure server

Now that you're signed up, you'll need to create your SQL Azure server. A Server in SQL Azure is purely a logical concept that's used as a container for up to 149 user databases.

To create your server, click "Database" in the left hand menu bar, then navigate to your subscription (Subscriptions>3-Month Free Trial); from here, click the "Create Server" icon in the ribbon menu.

When creating the server, you'll be asked for your region - this determines which Azure datacentre your service will be hosted from and can influence the cost of data transfers. It's also a good idea to pick a datacentre that's geographically close to you, to help reduce latency.

Next you'll be asked for a server administrator login. This is important, as it's the master login to manage your server and lets you do almost anything to your databases within Azure. Create an obscure name and use a strong password, and remember never to give this out to anyone!

Now you need to add a firewall rule to restrict who can connect to your SQL Azure server. This provides a layer of security, allowing you to lock down access to your databases to only your game servers or trusted clients (such as your development machine). If you're trying to connect from an IP that's not on the whitelist, SQL Azure will deny access to the server. At this step you should add the IP of the machine you're doing your development on; if you're on a dynamic IP, you'll have to remember to update this whenever your IP changes.
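If you'd rather script this than click through the portal, the whitelist can also (as far as I can tell) be managed with T-SQL against your server's master database - something along these lines, where the rule name and IP range are placeholders:

-- Run against the master database on your Azure server
exec sp_set_firewall_rule N'MyDevMachine', '203.0.113.42', '203.0.113.42'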

Creating your first SQL Azure database

Great - now we have a server set up (containing only a master database), it's time to create a database to play with.
There are a few ways of doing this:
Via the Azure management portal you used to create the server
Via the server manager web portal (https://yourservername.database.windows.net)
Via SQL Server Management Studio
Via SQL Server Data Tools

To mix things up a bit, I'll go with option 3 - SQL Server Management Studio. Fire it up and connect - to do this you'll need to know the following:
The name of your server (XXXXXX.database.windows.net)
Your server administration login credentials

SQL Azure doesn't accept Windows Authentication credentials, so you will have to connect with the SQL Server Credentials:




If you get an error here and are sure the credentials are all correct, then go back and check your SQL Azure firewall settings.

Now you're connected up, you can create a database. Open up a new query window by clicking the "New Query" button. From here, you can type the command:

    create database MyDB (maxsize = 1 GB, edition = 'web')

Anyone that's used SQL Server before will notice something different here - the create database command syntax is different under SQL Azure. In this example I'm telling Azure to create a database called "MyDB" with a maximum size of 1GB, using the Web edition of Azure.



Execute the command by hitting the "! Execute" button, and your database will be created.
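Incidentally, if you outgrow that 1GB cap later, you should be able to resize the database in place rather than recreating it, with something like:

    alter database MyDB modify (edition = 'business', maxsize = 10 GB)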

You can create your first table by opening a new query window and setting the database context of the connection to "MyDB".




Type the following statement and execute it to create your table and add a line of data:

    create table dbo.MyTable
    (
        MyTableId int not null identity primary key,
        MyField varchar(100) not null
    )

    insert dbo.MyTable(MyField) values ('Hello, SQL Azure!')

Finally, let's execute a select statement to query this table...
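The query itself is plain T-SQL, exactly as you'd write it on-premise, and should return the row we just inserted:

    select MyTableId, MyField from dbo.MyTable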



From here, we can create all manner of SQL Server objects - tables, stored procedures, views and so forth. I'll cover this in a bit more detail in subsequent parts, but feel free to play around in your new database.

In this article we learned about the benefits of using SQL Azure to host your game's database in the cloud. We also learned how to set up an Azure account, and create a SQL Azure database server. You also created your very first cloud-hosted database and executed a query against it.

In the next part of this article, I'll start exploring the sort of things you can use your database for - and demonstrate some examples of creating a database project and querying it from a simple application.

evolutional