Entries in this blog

For over eight years SVN was the cornerstone of the software engineering department at the company I work at, and it helped us develop high-quality medical simulation software. However, as our product portfolio and our team grew, SVN started to show weaknesses when resolving conflicts, hotfixing old releases and switching branches. So we decided it was time for a change. We evaluated several SCMs - namely TFS, Perforce, Mercurial, Plastic SCM and Git - and the clear winner, and honestly my personal favorite, was Git. No other SCM would allow us so much freedom in creating a superb, decentralized process for software development.
There is just one caveat: Git is not that great at handling large binary files, of which we have quite a few, since all our simulation assets are checked in together with the code into SVN. Yes, storage is cheap, but wasting space on binary data that cannot be properly diffed doesn't feel right. Besides, when working on SSDs, storage isn't that cheap anymore. Luckily, GitHub recently announced the Git Large File Storage (LFS) extension, including a reference implementation of a storage server, so we decided to jump on this opportunity and try it out.

Setting up the LFS reference server for a first test run was easy: our testing-infrastructure guy churned out a small script that takes over the configuration - which basically means setting a few environment variables - and then starts the server. For the moment we just blatantly ignore any security issues. The server ran smoothly, and since we're already using Atlassian products, we decided to install the trial version of Stash as well and got our developers to download SourceTree as a Git client. Our modelers and content creators in particular are not keen on using Git from the command line, so a nice UI will help get them to accept the new tool.
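The startup script for the reference server amounts to something like the following sketch. The variable names and paths here are assumptions recalled from the lfs-test-server README, not taken from our actual script, so double-check them against your version of the server:

#!/bin/sh
# Minimal sketch of starting the reference LFS server (lfs-test-server).
# The variable names below are assumptions based on the project's README;
# check your version of the server for the exact names.
export LFS_LISTEN="tcp://:8080"            # address the server listens on
export LFS_HOST="lfs.example.local:8080"   # externally visible hostname
export LFS_CONTENTPATH="/var/lib/lfs-data" # where the binary blobs are stored
export LFS_ADMINUSER="admin"               # credentials for the admin web UI
export LFS_ADMINPASS="secret"

./lfs-test-server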

Installation of the git-lfs client went smoothly as long as it could find the git executable in the $PATH variable; if not, the path can easily be supplied manually. Configuring LFS, registering Stash as a remote repository and initially checking in our codebase and resources went well when done from the command line.
Tracking large files is very easy: supply patterns similar to the ones in .gitignore files:
git lfs track "*.mp4"
On every push to the Stash repositories, the large files are checked in to the LFS server, and Stash only gets a text file containing a hash and file size for each file stored on the LFS server.
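What that text file looks like can be illustrated locally. In this sketch the oid hash and size are made-up placeholder values; only the overall three-line format (version / oid / size) is taken from the LFS pointer spec:

# Write out an example of what an LFS pointer file contains.
# The hash and size are invented placeholders; the three-line format
# (version / oid / size) is what LFS actually stores in the repository.
cat > pointer-example.txt <<'EOF'
version https://git-lfs.github.com/spec/v1
oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
size 12345
EOF

# A quick sanity check that the file has the pointer shape
grep -c '^oid sha256:' pointer-example.txt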

SourceTree did not find the LFS extension at first, until we realised that we needed to switch from "embedded Git" to system Git. But the troubles did not end there: the LFS server did not run on the same hostname, so the user needed to enter two sets of credentials for a push, and apparently SourceTree could not handle that additional dialog. After a bit of googling we could work around this: using
git config --local credential.helper store
we could enter the credentials once on the command line, and since they would not be asked for again, SourceTree could cope. What's not nice about this is that the credentials are stored in plain text on the user's machine. Here the winstore helper for Git comes in handy. This helper lets you use the Windows credential system for storing the credentials, which we considered reasonably safe; as a plus, one can use the Windows credential settings to manage them.
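As a sketch, the two configurations discussed above look like this (depending on the Git installation, the Windows credential helper is registered as "wincred" or provided by the separate git-credential-winstore tool - treat the exact helper name as an assumption and check what your Git version supports):

# Option 1: plain-text storage (works, but credentials end up
# unencrypted in a file in the user's home directory)
git config --local credential.helper store

# Option 2: use the Windows credential store instead.
# The helper name "wincred" is an assumption; some installations
# use the separate git-credential-winstore tool instead.
git config --global credential.helper wincred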

So basically we now have a running evaluation system with Stash, the git-lfs server, and users who can use the convenient UI of SourceTree to work with Git. So where do we go next?

  • First off, we will gather more experience with LFS and try to get rid of the credentials workaround by putting the LFS server on the same domain as Stash.
  • Then we will see if we can get Stash to work with the LFS extension.
  • And last but not least, we will fiddle around with the development workflow to see what works and what doesn't.
In a previous entry I wrote about my journey to find a suitable library for creating an interactive UI for viewing and modifying graphs, which led me to the Open Graph Drawing Framework (OGDF) for layouting my graph for drawing with Qt. So here is the code for the prototype that I wrote:

Setting up

First we set up a simple graph consisting of a parent with six children. I link child[0] to child[1] for a bit more complexity.
//------------------------------------------------------------------------------
// Creating example nodes and edges
//------------------------------------------------------------------------------
ogdf::Graph g; // Root data container for OGDF
auto parent = g.newNode();
for(uint i = 0; i < 6; i++)
{
    g.newNode();
    g.newEdge(parent, g.lastNode());
}
// An edge between two nodes for more complexity in the layout
// and to force ogdf to create bent edges
g.newEdge(g.firstNode()->succ(), g.firstNode()->succ()->succ());
Then I set up the QGraphicsView, which is straightforward from the documentation. For simplicity I draw every node as a square of 40x40 pixels. It is also possible to set individual dimensions (and other attributes) or elliptical shapes for single nodes using the GraphAttributes class, but for simplicity I leave it as it is.
QGraphicsView* contentWidget = new QGraphicsView();
QGraphicsScene* graphicsScene = new QGraphicsScene();
contentWidget->resize(500, 500);

Arranging the graph

After the setup, we invoke OGDF's layouting. The spatial information of the graph, among other attributes, is stored in the GraphAttributes class. There are quite a few layout algorithms provided by OGDF, but I found the PlanarizationLayout the easiest to use, and it yielded good results without too much tweaking.
//------------------------------------------------------------------------------
// Layouting
//------------------------------------------------------------------------------
// The GraphAttributes will contain the position and other layouting information
ogdf::GraphAttributes ga(g, ogdf::GraphAttributes::nodeGraphics |
                            ogdf::GraphAttributes::edgeGraphics |
                            ogdf::GraphAttributes::nodeTemplate |
                            ogdf::GraphAttributes::nodeStyle |
                            ogdf::GraphAttributes::edgeStyle);
ga.setAllHeight(40); // Set the dimensions of all nodes
ga.setAllWidth(40);
// Create and apply a layout on the graph
ogdf::PlanarizationLayout layout;
layout.call(ga);
// Resize the graphicsScene to the bounding box that was calculated in the layout
graphicsScene->setSceneRect(QRect(0, 0, ga.boundingBox().width(), ga.boundingBox().height()));


Drawing the nodes is straightforward. We iterate over all nodes in the graph, retrieve their positions from the GraphAttributes and then draw a square on the QGraphicsScene.
// Draw the nodes
QPolygonF square; // create a square of size 40, 40 to use for displaying later
square << QPointF(-20, -20) << QPointF(-20, 20) << QPointF(20, 20) << QPointF(20, -20) << QPointF(-20, -20);
// it is also possible to use the macro forall_nodes(iterateNode, g) from OGDF
for(ogdf::node iterateNode = g.firstNode(); iterateNode; iterateNode = iterateNode->succ())
{
    double x = ga.x(iterateNode);
    double y = ga.y(iterateNode);
    QGraphicsPolygonItem* squareItem = new QGraphicsPolygonItem(); // Create a QtPolygon for the node
    squareItem->setPolygon(square);
    squareItem->setFlags(squareItem->flags() | QGraphicsItem::ItemIsSelectable | QGraphicsItem::ItemSendsGeometryChanges);
    squareItem->setBrush(QColor(1, 1, 0, 1));
    squareItem->setPos(x, y);
    graphicsScene->addItem(squareItem);
}
Drawing the edges is a bit more complex, as an edge path can be a combination of straight lines and cubic Bezier splines. From the start point of an edge, the path can go straight to the end point or have multiple splines in between. Every cubic Bezier spline needs three points to be drawn, so we can check whether the number of points in an edge path is a multiple of three to determine if there are splines in between. I haven't tested whether the splines connect seamlessly or whether they can have straight sections in between.
// there is also a forall_edges(edge, g) macro
for(ogdf::edge edge = g.firstEdge(); edge; edge = edge->succ())
{
    auto polyLine = ga.bends(edge);
    QPainterPath path;
    // here we should check if the line really has at least one point
    auto pointIterator = polyLine.begin();
    path.moveTo((*pointIterator).m_x, (*pointIterator).m_y); // move the path to the starting point
    if(polyLine.size() % 3 != 0) // straight line to either the starting point of the spline or the end point
    {
        ++pointIterator;
    }
    for(uint i = 0; i < polyLine.size() / 3; ++i) // iterate over the splines. Every cubic bezier spline needs 3 points
    {
        /// get the three points and increase the iterator afterwards
        /// maybe we need a path.lineTo(point1) here as well, to connect multiple splines
        auto point1 = *(pointIterator++);
        auto point2 = *(pointIterator++);
        auto point3 = *(pointIterator++);
        path.cubicTo(point1.m_x, point1.m_y, point2.m_x, point2.m_y, point3.m_x, point3.m_y);
    }
    if(polyLine.size() % 3 == 2) // straight line to the end point
    {
        path.lineTo((*pointIterator).m_x, (*pointIterator).m_y);
        ++pointIterator;
    }
    graphicsScene->addPath(path); // Add the edge to the scene
}
And finally we show our widget:
contentWidget->setScene(graphicsScene); // Setting the scene to the graphicsView
contentWidget->show();
I am a visually oriented person, so when working with graphs, trees, state machines or basically anything that can be represented as a network, I like to have a visual representation of it. In the past I usually exported such structures to a .dot file and processed it with Graphviz into a JPG or PNG. A more convenient way, however, would be to view and modify such drawings directly in the application, so I decided to write a widget to do this for my current project.

I quickly identified two key questions that needed answering first:
1. How to display the graph in the application
2. How to compute the layout of the graph and route its edges

As the project I'm working on is Qt-based, the first question was easily answered: QGraphicsView and the connected classes are perfect for this. They even support dragging shapes around, selecting them and so on.

First iteration: Graphviz

Layouting a graph can be tricky, especially if one wants to minimize edge crossings and have nice curvy splines, or at least properly angled kinks, for the edges. So I started to look for a third-party library that I could interface with from C++ to do the layouting for me. Since I already knew that Graphviz is very powerful, it was my first try. A bit of sifting through the web turned up a nice tutorial about using Graphviz with Qt, and I was quickly able to get a running prototype up. But the joy soon ended when I tried to get the prototype fully integrated into our project: Graphviz is completely missing 64-bit support for Windows, and even after spending three days I could not get Graphviz to compile myself. So I started looking for alternatives.

Second Iteration: boost::graph

After a bit of asking around I stumbled upon boost::graph, which was nice, as I am already familiar with Boost. Layouting a graph also worked out of the box by just including some headers, without any prior compilation. However, boost::graph quickly showed its limitations: it has no functionality to route the edges at all. Hoping that this function was not actually missing but that I just hadn't found it yet, I was prepared to accept this and keep going. So I started looking around for theory on routing edges, which led me to OGDF, and that proved to be the solution.

Final iteration: OGDF

The name of the project alone - The Open Graph Drawing Framework - looked very promising; after all, this was exactly what I wanted to do. After downloading, compilation was very straightforward, with options for debug and release builds as well as 32- or 64-bit. The documentation is a bit flimsy, but with a bit of guesswork and rooting around in the reference documentation I quickly got another prototype running, with only around 20 lines of code for setting up and layouting the graph. Integrating the prototype into the existing project was straightforward, though I needed to change a few parameters when building the library so that the debug information was removed for the release version. So after this journey across three different libraries, I'm very excited to start bringing this up to a fully functional tool and to learn more about it.



Internal Quality is not negotiable

A quote by Martin Fowler (I think it's from Refactoring: Improving the Design of Existing Code) divides the quality of software into internal and external quality:

Internal quality is about the design of the software. This is purely the interest of development. If internal quality starts falling, the system will be less amenable to change in the future. [...] You need to be very careful about letting internal quality slip.
External quality is the fitness for purpose of the software. Its most obvious measure is the functional tests, and some measure of the bugs that are still loose when the product is released.

-- Martin Fowler

Literature about managing software projects and code quality often states that the internal quality of a product is never negotiable, no matter how urgently a feature for external quality needs to be implemented. Even the principles behind the Agile Manifesto say: "Continuous attention to technical excellence and good design enhances agility."
Since internal quality cannot be lowered, one should rather not implement the feature at all than compromise code quality. At the end of any iteration of development, the code should be clean, readable, well tested and versatile enough to accommodate future changes. Unfortunately, reality doesn't work that way. To this, the principles behind the Agile Manifesto say: "Working software is the primary measure of progress." Often enough we developers need to hurry up and hack a crucial feature in at the last minute for a demo with a customer, or during a crunch before a presentation at a conference or even a release. How often have we heard the sentence "Just make it work, we can clean up and maybe patch the software later", knowing well that "later" might as well be never. So we hack away furiously, violating our own code conventions, easing our conscience by adding a few "///@todo clean up..." comments into the code and getting unhappier the longer it takes.

Crunch and Refactor

So instead of fighting this contradiction and arguing that "this is not how it was planned", try to plan it. I call this method "Crunch and Refactor"; it consists of four simple rules. Crunch time is the time you spend quickly hacking new features into the project, while refactor time is the time when you clean up and polish those features.

  • Crunch time has to be defined in advance as a fixed time span
  • Refactor time follows immediately after the crunch
  • Refactor time is double the crunch time
  • Crunch time cannot happen right after refactor time

    Applying these rules in a project helps to produce quick results as well as a stable code base, but more importantly it also helps to keep the developers focused and happy.
    By defining the crunch times in advance, your team can plan their free time (or the lack of it) accordingly, and by setting a hard deadline everyone knows that the stressful time is limited. Being under constant high pressure without an end in sight is a fairly sure way to burnout, and seeing an end to it helps prevent that.
    So your stressful deadline has passed, you had your release party and hopefully impressed the world with your product; now it's time to clean up the kludges you left behind in the process. By starting the clean-up right after the end of the crunch, you ensure that all those hacks are still fresh in your team's minds, and that you don't fall into the "we'll clean up later" trap.
    By giving your team double the time to clean up that you spent crunching, the importance placed on good software quality is signaled not just to the developers, but also to the management outside the team. The other benefit is that, since the team's batteries are probably nearly drained after a stressful crunch, a more relaxed pace afterwards will help them recover their focus faster.
    The reason you're not allowed to crunch right after refactoring is to avoid oscillating between the two phases, as you might want to get back to a normal development cycle between two releases. This rule also helps enforce the first one, by forcing you to plan ahead.


    Of course "Crunch and Refactor" only works if you keep the phases relatively short. It is no use to hack away without any decent engineering for six months and then try to spend a year cleaning up afterwards. On the other hand, declaring every stressful day a crunch and then spending two days relaxing and cleaning up is not the way to go either, as that is just the normal fluctuation of work. Anything from one-week crunches up to a month seems reasonable.
    A good idea is to lay this methodology over an agile method like Scrum: define one sprint as a crunch sprint and the next two sprints as refactor sprints, thus separating the two phases even more clearly.