Paris GDC’08: Day One
Yet another journey into Gameland
And the journey began with my arrival at the hotel. A clean 23-square-meter room with a double bed, a small kitchen and a safe, that's all I need to feel good. As I'm writing this, I try
to imagine the sessions that I will attend tomorrow and the day after. Yes, it's Sunday evening, and all I can do is prepare for the coming days.
And the preparation is not that simple. Let's be clear: I'm alone, and I have to select a few lectures among 56 sessions in 5 different rooms. That's tough. If I have to cover another conference
this year or next, I'll try to get another game developer involved in order to cover more stuff. But this year, I'm alone. Which is a bit frustrating (again).
So what's coming tomorrow morning? At 9 am, Ralph Baer's history of video games runs opposite an optimization-minded session by Nicolas Thibieroz. The Media Molecule keynote follows –
as far as I know, they're going to present their work on Little Big Planet. After that, I have to attend the press conference. Okay, I would prefer to attend the research session about procedural
generation of content by Eric Galin, or even Howard Marks' presentation about free games. But I have to attend the press conference, so I'll do it. Hopefully it will not last too long, and I'll be
able to catch the end of one of them. And then it will be lunchtime.
Figure 1: ok, I found the conference...
The afternoon begins with two concurrent sessions about optimization – one is sponsored by Intel, the other deals with multi-core platforms. At the same time, Guillaume de Fondaumière et al.
are speaking about team management. 3:10 pm is the worst time for me: I'd like to attend 3 of the 5 lectures that take place at this time. Unfortunately, I lost this power a few years ago (when Romans
crucified me IIRC) so I have to choose between Jeremy Vickery's session about light and colors, McShaffry's MS Project talk and Servan Keondjan's lecture (this one looks good: it tries to imagine
what our future will look like). After that, the day will be a bit smoother: I will attend id's Matt Hooper talk – no way I'll miss that – and Chris Rock's session about the history of BioShock
– again, the only reason why I would not attend this one can be summarized in two words: « my » and « death ».
And day one will be over.
Day 2 will be as hot as day 1. It begins with Sten Huebler of Crytek, who will speak about level design in Crysis. At the same time, the research session of Samad Ahmadi deals with the
implementation of AI algorithms. Ben Cousins' keynote about Battlefield will follow. At 11:40 am, the programming session of Franc Hauselmann deals with game engine enhancement, while Jason Della Rocca
et al. will describe the intimate relation between producers and developers. At 2 pm, Diarmid Campbell will discuss camera-based games and Sean Kane will talk about IP rights. Adam Moravanszky's high
level physics talk will run opposite Vincent Scheurer's lecture about digital distribution.
Sadly, and because my train leaves Paris too early, I won't be able to attend the last keynote of Rob Pardo from Blizzard. In fact, I may also miss Nicolas Thibieroz's second session about
tessellation in a low-poly world using the GPU.
All in all, there are too many things to see, and not enough time to do it. Which, again, is a bit frustrating.
How to Create an Industry: the Making of the Brown Box and Pong
Ralph is the kind of guy who creates industries. And indeed, he created a whole industry. Not the kind of small, fragile market that appears at the beginning of each summer and disappears when
people start to think about their return to work, but the kind of industry that lasts and grows into a multi-billion dollar market. This industry even has a name: we call it the video game industry.
And that's what I'd call a pretty good achievement.
The venerable German-born inventor started to think about some TV-based gaming system in 1966 (see the scan of his first design notes here).
With some coworkers, he created the Brown Box, the very first programmable TV-based video game console. The concept was bought by Magnavox and later became the Magnavox Odyssey (shipped in 1972).
The original Brown Box (Mr. Baer brought one on stage) was designed for two players. Each player is supposed to manipulate a controller with two dials (one for the up-down direction and another
one for the left-right direction) in order to move a point or a cursor on screen. Mr. Baer and his assistant (sorry – I didn't hear his name) gave us a demonstration of the Brown
Box tennis game.
Figure 2: the Magnavox Odyssey
There is something strange about seeing this age-old device deliver its message: two bars, a line in the center and a ball that crosses the screen. If you think the ball moves in a straight
line, you would have been surprised by its strange movements – some kind of randomly sinusoidal path that seems quite difficult to get to the other side (to be honest, I'm pretty sure
that I wouldn't have been able to play this game). A few switches allow you to "drastically" change the game: both players on the same side (let's call this
"squash"), one player against a wall (let's call this "I have no friend") and so on.
This first talk was pleasing. Unfortunately, I wasn't able to attend the beginning of it (thanks to my sense of direction, I had a few issues, the main one being that I wasn't able to
find the Paris GDC building).
Alex Evans, Mark Healey of Media Molecule
Keynote: Media Molecule, Little Team, Big Ideas
The second talk of the day was also the first keynote of the conference. Alex and Mark chose a rather original form for their talk. Instead of running a bunch of slides behind them, they decided to
showcase their upcoming "Little Big Planet" game in a quite unique way. So they designed their presentation to fit in the game.
And that was beautiful. And fun.
Figure 3: Alex Evans and Mark Healey running a special level of Little Big Planet
The goal of this keynote was to present the game and the design process behind it.
Little Big Planet (or LBP for short) is heavily based upon the idea of user-generated content. There is a good reason for that: when the two veterans decided to create the company Media Molecule,
they didn't want to build a big development team. Even in small teams you'll get friction between people – and it would be even worse in a bigger team. But in the PS3 era,
the strength of a small team is inherently limited, so they designed the game to overcome these limits.
The base idea is: if we cannot do it, let’s empower the players so they’ll be able to create their own levels. Every aspect of the game can be customized by the user, from the avatar
costumes to the textures of the background elements.
But giving users the freedom to create their own world has its drawbacks. The first one is that if you give many tools to the player, he will run into many difficulties before he's
able to do something interesting. If you want to avoid that, you have to make the tools very simple to use. From what I saw during the keynote, they succeeded: repainting the background is as
easy as selecting the repainting tool, moving a cursor on screen and pressing a button to apply the texture. Changing the costume is even simpler: you can select the costume parts individually or you
can ask for a random costume. Placing a prop in the game level is also quite easy: select it in the corresponding panel and place it where you want. That’s all. Of course, since the game is
multiplayer, you can build and decorate levels in groups.
The same philosophy is responsible for a prominent part of the gameplay. While the game uses 3D technology, the player is constrained to a narrow band on the screen. This is really a gameplay
choice: when they began to think about how the game should feel, they tried pure 2D, pure 3D and then (after a few fights) settled on this 2.5D environment. The result improved the quality of the gameplay.
Honestly: LBP seems to be a really interesting game. The physics are smooth and fun, the graphics are stunning and the game is to be extended with user-generated content and other downloadable
content. During the Q&A session, someone asked Alex and Mark their opinion about questionable content. Of course, this is probably going to be a huge issue. While Mark is eagerly waiting
for any user-made content – be it classical or questionable – Alex is more pragmatic. Some moderation will take place to avoid this kind of issue (Mark told me later that Sony
is building a team to take care of that specific problem).
Last bit of information we got during the keynote: according to Mark Healey, LBP is going to be released in October '08.
Good thing about being French in France: at least some people speak your language. That was the case at the press conference – part of it was in English, but some speakers chose to speak
French. No problem for non-French speakers: translators were sitting in the back of the room. I remember last year's GDC in France – the French-speaking sessions were criticized a
bit. I still believe it's a good thing to have them, because not all French people speak English – and damned, we are in France :) (okay, that's a bad reason).
The conference started a bit late. Pierre Carde (from Connection Events) first presented the conference itself. I will skip the "I really thank you" part of that press conference (introduction of
the Imaginove and Cap Digital business clusters; they concentrate many companies of the same industry into a small place in order to create synergies). Good bits of this part include: confirmation of
the French government's strategy regarding the video game industry (pushing research as a way to improve it); presentation of a few research projects (autonomous characters at Paris 8 by Catherine
Pelachaud, the GENAC project (about procedural content generation) with Pierre Deltour of Widescreen Games and the PLAY ALL project with Olivier Veneri).
Figure 4: Antoine Villette from Darkworks, Media Molecule (Alex Evans and Mark Healey), Pierre Carde, Jamil Moledina, Paulina Bozek, Ben Cousins of EA DICE.
The second part of the conference was more industry-minded: guys from Media Molecule (Mark Healey and Alex Evans) were there, as well as EA DICE's Ben Cousins and Antoine Villette from Darkworks. I
think. Not sure about that. Jamil Moledina of Think Services explained why they wanted to bring the GDC to Paris, while other people explained why they came to the conference. Nothing very new here:
they came to share their ideas and to get new ideas from other people – to be inspired – or to reward their collaborators (Darkworks sent 60 people to the conference!).
Let's finish with a few good Q&As:
- While it's still a technical conference, will the GDC turn into a content-oriented conference? Yes and no; the market changes, and the goal of the conference is to keep track of the changes – or even better: to be ahead of the change. Content takes precedence over the technical details.
- The history of the European GDC is a bit chaotic – is it going to change? It seems that Paris is a convenient location for the conference. There is a good chance that future European GDCs will stay in Paris.
Designing a Game for Multi-core Platforms: Pitfalls & Performance Tuning
Lionel Lemarié began the afternoon sessions. He works in the profiling tools team at Sony and talked about multi-core optimization of games.
Let's present an overview of the problem: recent PC processors, the PS3 Cell and the Xbox 360 Xenon are all multi-core processors, although their underlying architectures are very
different. The Cell is made of one PPU (with 2 hardware threads) and a bunch of SPUs. The Xenon is made of 3 cores that support 2 hardware threads each. And recent PC processors are made of one to
many cores with 2 hardware threads each. The architecture differences have a great impact on software architecture, especially if you have to support all three platforms.
To make use of multi-core architectures, you have to use threads. Lionel suggests putting them in a thread pool. First, query the system to get the processor count, then create one
thread per logical processor. Of course, you may want to limit the number of threads to something workable. The threads are used to run a set of small, independent tasks whose list is prepared by the
low-priority main worker thread. You get one massive benefit (this architecture is scalable) and a few drawbacks (resource management and synchronization are going to be a bit annoying now).
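A minimal sketch of that setup in modern C++ (std::thread did not exist at the time, so this is certainly not the API used at Sony; the class and member names are mine): query the logical processor count, spin up one worker per processor, and let each worker pull small, independent tasks from a shared queue.

```cpp
#include <atomic>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal task pool: one worker per logical processor, each pulling
// small independent tasks from a shared queue.
class TaskPool {
public:
    TaskPool() {
        unsigned n = std::thread::hardware_concurrency(); // processor count
        if (n == 0) n = 1;                                // fallback
        for (unsigned i = 0; i < n; ++i)
            workers.emplace_back([this] { run(); });
    }
    ~TaskPool() {
        done = true;
        for (auto& w : workers) w.join();
    }
    void submit(std::function<void()> task) {
        std::lock_guard<std::mutex> lock(m);
        tasks.push(std::move(task));
    }
private:
    void run() {
        while (!done) {
            std::function<void()> task;
            {
                std::lock_guard<std::mutex> lock(m);
                if (tasks.empty()) continue; // real code would sleep or back off
                task = std::move(tasks.front());
                tasks.pop();
            }
            task();
        }
    }
    std::vector<std::thread> workers;
    std::queue<std::function<void()>> tasks;
    std::mutex m;
    std::atomic<bool> done{false};
};
```

A real engine would block or back off instead of busy-spinning on an empty queue, and would cap the thread count to something workable, as Lionel suggests.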
Regarding thread setup: let the system do the dirty work, do not forget the rest of the engine, give your main worker thread a lower priority – because it does not do that much –
and balance the work correctly. Balancing is pretty hard, because there are so many things to take into account: do you sleep or do you spin-lock on important resources? And if you do, how
fast do you spin?
Because, as you will have guessed, thread synchronization is pretty important. The PS3 SDK offers barriers as a synchronization primitive. A barrier is a fast primitive that allows
threads to wait until a condition is satisfied (typically: all threads have finished their tasks). It is possible to emulate the same behavior under Windows XP, but performance will be lower. Vista
has no barriers either; however, it implements condition variables, which are quite similar.
With this approach, designing a task does not depend much on the underlying architecture. Let's take a simple example. On the PS3, a classical task performs these operations:
- Get the next data block address.
- DMA the next data block.
- For each data block:
  - Wait for the DMA to end.
  - DMA the next data block.
  - Process the current block.
  - Send the data back using the DMA.
On the PC:
- Get the next data block address.
- Memcpy the data (optional).
- For each data block:
  - Memcpy the next data block (optional).
  - Process the current block.
  - Memcpy the data back (optional).
In the end, the architecture is quite similar.
The optimization team at Sony created tools to help visualize what really happens under the hood. Apart from their PS3 profiler SN Tuner, they developed an in-game profiler that displays
the profiling information on-screen. Each thread is represented by one line, and on each line, color codes are used to display running tasks, synchronization and idle time. Special colors are used to
represent draw call initialization and execution.
Figure 5: on-screen visualization of the running threads
Lionel did some experiments to verify the impact of the number of threads vs. the number of logical processors on a typical game (the falling-blocks game in fig. 5). No surprise here: your biggest
enemy is synchronization. His first attempt to go from one thread to many on a multi-core processor led to a performance decrease of more than 60%. To remedy the situation, he changed the
design of the task scheduler to create two FIFOs instead of one. The main thread is still responsible for filling these FIFOs, but the number of locks is drastically reduced. To summarize:
- Use mini-tasks to distribute the load among a thread pool. This architecture can be used on any multi-core platform. There are still some differences, but they are reasonably easy to work out.
- Verify the performance continuously within the game, with an on-screen profiler.
According to Lionel, this is a bit difficult to get right, but it’s worth the effort.
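The two-FIFO idea can be sketched as follows. How tasks are distributed between the queues, and which workers drain which queue, are my assumptions rather than details from the talk; the point is simply that with two queues and two locks, each lock sees roughly half the contention.

```cpp
#include <functional>
#include <mutex>
#include <queue>

// One FIFO plus its own lock.
struct TaskFifo {
    std::mutex m;
    std::queue<std::function<void()>> q;
};

// Two FIFOs instead of one: the main thread alternates between them,
// and each worker group drains only its own queue.
class TwoFifoScheduler {
public:
    void submit(std::function<void()> task) {
        TaskFifo& f = fifos[next++ % 2];      // alternate between the FIFOs
        std::lock_guard<std::mutex> lock(f.m);
        f.q.push(std::move(task));
    }
    // Workers in group 0 drain fifos[0], group 1 drains fifos[1].
    // Returns false when that FIFO is empty.
    bool try_run(int group) {
        TaskFifo& f = fifos[group % 2];
        std::function<void()> task;
        {
            std::lock_guard<std::mutex> lock(f.m);
            if (f.q.empty()) return false;
            task = std::move(f.q.front());
            f.q.pop();
        }
        task();
        return true;
    }
private:
    TaskFifo fifos[2];
    unsigned next = 0;
};
```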
Bend MS Project to your will – again.
Mike (game programmer, author and, as he says, "MS Project hater until 2005") already gave an earlier version of this session at GDC 2007. This talk is an improved version of the previous one
(hence the "again" part).
If you have ever had to manage a team or some resources, there is a good chance that you ended up using the well-known MS Project.
Speaking of MS Project, Mike told us that he spent too much time struggling with it. To be honest, this software has many problems. For instance, you have to get version 2007 to get a simple
feature that can change your life: undo. As a producer, it's a trap – you can end up spending your whole time using it, but this is not your job. It's a bit slow too.
But there is good stuff in it: the versatile calendar, its Gantt view, the possibility to get a history of your project, and the extensibility of the program. Mike spent some time developing a
toolbox to extend MS Project.
So, how should it be used? Mike gave us a list of tips to use in order to master the master.
- Put every feature you often use in the toolbar.
- Setup the project start-up date.
- Change the leveling order to "priority, standard". This is not the default value.
- Only after that can you create your task inventory. To do this correctly, start with what you know and ask a lot of questions. Given his experience, Mike identified a few key areas that are always present in every project. But remember that leveling is slow, so you should limit the number of tasks to a reasonable count (1500 tasks per file is good; split your project into multiple files if needed, but do not use master files).
- Organize tasks as features in categories and sub-categories, using custom cells.
- Don't forget non-development tasks.
- Use task properties to distinguish between tasks.
- Estimated times make good clues – a task with an estimated duration has to be followed more closely than a normal task.
- Don't use the link button – it's the most dangerous feature in MS Project. It will add constraints to the project where you should use priority (0 to 1000). Of course, predecessors and links have their utility – especially if they mirror what will happen in reality (a critical deliverable that moves from one person to another).
- The second most dangerous feature is the constraint type on tasks. It induces problems that are hard to understand, so you should constrain tasks only when you have to.
- Milestones (0-day tasks) can be scheduled easily by using predecessors in a different way. For example, 2SS+4wks says that the milestone falls 4 weeks after the start of predecessor 2. Again, there is no need to link a milestone to another task – milestones are linked together using this specific predecessor notation.
There are of course many other pro tips. Mike only gave us the ones that have the greatest impact on your project management. He also showed us how to use Excel pivot tables in
conjunction with MS Project (his toolbox automatically generates Excel files) to get additional information about how your project is going.
From DOOM to RAGE: Megatextures, Pushing Artistic Boundaries
If there is one thing you can't call id Software, it's followers. From their inception to their latest achievement, they have stretched the boundaries of technology. Of course, pushing
the limits of technology also has a price: you have to improve your art pipeline accordingly. This was the subject of Matt Hooper's talk.
Beware, techies: we discuss art and game design here, not technology. If you wanted more technical information about mega-textures from a code point of view, this was not the right session (I heard a
few programmers at the conference who were a bit disappointed when they realized this).
Once he had introduced his talk, Matt showed us two game trailers. The first one was the now-classical Doom 3 trailer. This was the occasion for our host to share some bits of the game design
process of Doom 3 with us. First observation: the frontier between game design and art is moving as worlds get more and more detailed. Three years ago, a designer at id was responsible for creating
95% of the game space. According to him, artists were basically reduced to making good-looking textures and models. When you increase the visual fidelity, the role of the artists becomes more
prevalent: if you want stunning art, you'd better let artists do it – as opposed to game designers. That's what happened at id Software when they began to work on Rage.
Then came the trailer of their upcoming title – Rage. For this game, the design team and the art team worked together. For example, the design team put together the basis of a racetrack
level, decided which parts of the world couldn't be touched by the artists yet, and fed the artists that information. Once they got it, the artists were basically free to do whatever they wanted to
populate the world. So they design textures, redo the modeling of some parts of the world (id Tech 5 has many ways to help in this case; more on this later) and so on. As Matt said, they can go crazy
about the world. This wouldn't be possible with the Doom 3 engine – at least, not at the same level. But id Tech 5 allows the artists to draw the world without being limited by details
such as the size of textures. They can achieve pixel accuracy without worrying too much about the technical issues. No texture tiling, no visible pattern – every pixel can be modified
independently of every other pixel in the scene.
You'd say that this is very good for outdoor scenes (where the idea of a mega-texture feels natural), but what about indoor scenes? Well, the same technology is applied successfully –
this is probably the biggest selling point of the technology. And when I say the same technology, I really mean the same technology. You no longer have to deal with different indoor and outdoor
behaviors. Everything works the same - everywhere. Matt showed us a few indoor examples – in a sort of before/after presentation. The “before” part is already very good. Then comes
the artist and the game editor. The “after” part is then completely outstanding: details everywhere, subtle effects and so on (unfortunately, I don’t have any shot to show you
– sorry for the inconvenience).
There are important points here: the first one is that indoor scenes are still built using the usual technologies, so artists and game designers don't have to learn a new way of working. The second
point is that if you ever try to change every pixel by hand, you're going to die before you finish a single scene. To help, the id Tech 5 editor implements stamps that allow you to
stamp a texture onto another one, or (using 3D brushes) to stamp the geometry itself. Stamping makes the editor easy to use.
Mega-textures are really big. You can zoom in for a few seconds before seeing any texture detail. id Software made them work on arbitrary geometry, and by doing so they managed to create a whole new way of building game worlds.
Keep Calm and Carry On: The History of BioShock
This is already part of the legend: the making of BioShock was difficult, to say the least. Let's begin with a bit of history.
Once upon a time there was a game called System Shock 2. This game was known to be incredibly good, but goodness doesn't always mean success – and the game's sales were very low. In
other words, SS2 was a failure – but a good one. Still, a failure is a failure, so when 2K Games decided to make a successor to SS2, they met a bit of resistance – or, as Chris
put it, "how do you sell a successor to a commercial failure?" To overcome this situation, the team decided to play a strange game: they created buzz around a successor to SS2 within
the gamer community in order to show producers around the world that such a successor would be successful.
In 2005, they signed.
And they had to build a game within 14 months – but they had no idea about what the game should be.
Figure 6: yep. You'd better keep calm.
The first three months were dedicated to building up the schedule and developing the initial concept of the game. The next four months were used to build a game demo. 2K Australia worked on the
technology to build a world-class console engine while 2K Boston worked on the game itself.
The initial concepts were quite fuzzy: the game would feature evolved creatures living in their own ecology. Unfortunately, two problems arose. First, a non-player-centric ecology limits
the player's involvement in the game. Sure, it's cool to see creatures fighting other creatures, but since most players buy games to play, it's a bit frustrating in the end. The other
problem was related to the art design: some creatures were supposed to create emotion, but how can some kind of insect do that? The first play tests showed that players would simply destroy them
instead of trying to understand them.
At this point, the team decided to stop everything and make one single room that would feel right. Ideally, the room should tell what the game really is. Creatures were reworked too, so that they
would fit the mood – instead of evolved creatures, they should insist on the notion of lost humanity in order to enable an emotional connection with the player.
Once this was done, the real game production could start again. By E3 2006, the team had a working prototype that was critically acclaimed. But there was still a big issue: while game sites were
quite enthusiastic, the gamers seemed to dismiss the game. The reason was the game's complexity: is BioShock an RPG? An FPS? Something else?
"Complex games need clear marketing", said Chris. At the Microsoft X06 event, they decided to emphasize the FPS part of the game, explaining that the other parts of the gameplay
were there to support it. Again, the event was a big success – players were now eagerly waiting for the game. More importantly, it showed the developers what the game should be.
Finalizing the game proved to be yet another big task. The gameplay was reoriented to focus more on the FPS experience, and the team soon realized that they had forgotten to take some
"details" into account. Nothing really important: do the players understand the game? What was the game script? And other "minor" things like that.
From this adventure, Chris drew an important conclusion: "always remember that you can screw everything up". To avoid that, you need to listen to everyone, doubt everything and, most
importantly, be honest with yourself and your team.