
Develop 2009

By Oli Wilkinson | Published Jul 30 2009 10:38 AM in Event Coverage




Conference Overview

The Develop conference is the UK’s premier games development conference and returned to Brighton for the 4th year running. The conference is
split across several tracks, covering Art, Audio, Business, Coding, Design and Production. New to the lineup this year was the Evolve conference, an entire day dedicated to the development of games
on emerging platforms, including iPhone and social networks.


The conference packed three intense days of ideas from the best in the games development industry today. Several of the speakers I saw evoked the notion that we are back in the frontier days of
games development, much akin to the 1980s, when many smaller studios and developers could reach the mass market with novel games such as Elite. The possibilities of the
online space are only just being realised and embraced, but developers wishing to exploit them must remain agile and adapt to each challenge as the market shifts and evolves. We’re in territory
where the consumer is king; only the games that meet the real needs of the consumer will succeed. Players are demanding more opportunities to play and want greater freedom to tell
their own stories, interact with people and be creative in their play. If we fail to measure up to this demand we’ll be passed over for those that provide the experiences players want, whether
they’re “hardcore” gamers or people with 10 minutes to play in their lunchtime.


Developers who leverage the innovative sides of the online environment will reap decent commercial rewards. The traditional subscription model used by World of Warcraft is losing its appeal;
new games should learn from the Asian markets and offer free-to-play supported by novel ways to raise real money through the monetisation of virtual item sales. The conference showed
several successful implementations, including some in unexpected channels such as static browser games.


For those with roots at the high end of the market, the online space will see changes as well, with networked consoles and PCs allowing developers to reach their customers in many new ways, from
virtual rewards to fully emergent behaviours that can only be realised by having thousands of players online simultaneously. We may also want to consider adding new routes into our games
via links to other platforms, such as the iPhone or Facebook. When doing so, however, be wary of taking the cheap option: pursue convergence only with ideas that add value and
maintain the core ideals of your product. You may also want to embrace online distribution, allowing your game either to be downloaded from the internet or played entirely through a streaming service
such as OnLive or Gaikai. Online systems such as PlayStation Home allow us to interact with our communities in new ways and to reach portions of the market that are unavailable to
the high end of the spectrum.


Whichever end of the spectrum you’re at, from sole developer on one platform to huge studio targeting multiple platforms, the message from this year’s Develop conference is clear: we
live in uncertain times, but we have many ways to innovate and drive our way forwards into the future. We need to be aware of our audience and tailor to their specific requirements whilst innovating
in how we do so. The best way to do this, as argued by several speakers, is to make the games we want to play, not what we think people want to play. All we have to do is make
careful choices and we can reap the rewards the future has to offer, be it as a static HTML-based game or a full AAA multiplatform title.


Tuesday 14th – Evolve Conference

“Resetting the Game” – David Perry, Acclaim
“Browser Based Games: The Past, the Present and the Future” – Jonathan Lindsay, Splitscreen Studios
“20 Great Innovations in Casual, Social and Mobile Games That You Should Steal” – Stuart Dredge, Pocket Gamer
“The Xbox LIVE Game Platform: Community Games for fun and profit” – Charlie Skillbeck, Microsoft Xbox
“A Game is a Game is a Game” – Dave Thompson, Denki
“Case Study: A Browser-based MMOG on Every Desktop” – Jim McNiven, Kerb Games
“Practical Applications of Online Convergence” – Paul Croft, Mediatonic
“The Long Tail and Games: How digital distribution games everything. Maybe.” – David Edery, Fuzbi.

Resetting the Game

Kicking off the Evolve conference was the “Resetting the Game” keynote presented by Dave Perry. In his session, Perry looked at some of the possibilities the online space
presents to game developers, even to the extent of declaring the virtual death of physical media and distribution methods. In this environment, he said, consumers have short attention
spans and want games that make it convenient to play. Current online games, he argued, require too many clicks to enter and have too many barriers to overcome before you can play. Citing
World of Warcraft as an example, Perry demonstrated that the game required over 30 clicks and interactions before a player even came close to the action. For many games, Perry argued, this
sort of demand will simply turn away potential players; future games will need to be creative and accessible to make it convenient for players to play.

In an online world, Perry asserted, the developers who embrace virtual distribution and online channels will be the ones to reap the rewards. Citing services like
Gaikai and OnLive, he showed how developers can deliver high-quality games to players in an extremely convenient way, even to the point of removing the requirement for specialist hardware or the
concept of a ‘platform’ entirely. Speaking specifically of the Gaikai platform, he showed how features of social networks can be leveraged to let friends join game sessions or share
saved sessions with each other, allowing friends to game together from a single click. Such features, Perry argued, would allow games to self-promote by taking advantage of viral trends on
social networking sites like Twitter and Facebook. Although the notion of friends playing together is being pushed, Perry argued that we also need to make games for strangers to play together, again
because it offers the most convenience to all players involved.


Turning to the future, Perry warned that the Western markets must act now to evolve or face being overtaken by the growing Chinese and Korean markets, which are already taking
ideas such as “free to play” to market. To survive, we must challenge our traditional business models by embracing micropayment models and figuring out how to monetise outside of
current retail distribution platforms. Perry’s session really set the tone for the rest of the day, with many other speakers echoing his sentiments.


Browser Based Games: The Past, the Present and the Future

Jonathan Lindsay took us through the past, the present and the future of browser-based games in his 11am session. After showing how browser-based games had evolved from turn-based HTML games into
real-time, interactive experiences thanks to technologies such as Flash, Jonathan took us to the present and future of the browser-based game by demonstrating how we can deliver real-time games
that are also fully 3D and massively multiplayer. The key feature of these games is that they run in a web browser and require no download and no install, being played entirely online; by
doing this, he argued, they become more accessible to gamers who want to play high-quality MMO games from virtually anywhere at any time. Demonstrating their free-to-play game Pirate Galaxy,
Splitscreen Studios illustrated several of the points made in Dave Perry’s keynote, such as how in-game items and features can be monetised without affecting the balance of the game.
The key to the design was to ensure that the game is balanced for all players, whether you choose to spend no money or to “buy your way through the grind”, as it were.
Lindsay then spoke of some of the trends they are seeing from their players. The first is that the traditional subscription model is dying out and must be offered in new and novel ways for
players to accept it; rather than subscribing up-front, the subscription should add value later in the cycle, when the player requires it.
Players expect fairness in the game, whether they have chosen to play without paying or have put money into advancing themselves. As game developers, we have a huge degree of
freedom to innovate in how in-game items are monetised, and we should all consider how even small ideas can bring in revenue. Finishing up, Lindsay cited some figures from Pirate Galaxy: overall
revenue is currently around €1.40 per player (including those who haven’t paid anything), but among the players who do buy items the average is €24 per player.
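Those two figures imply a paying-player conversion rate, since total revenue can be counted either across all players or across paying players only. A minimal sketch of the arithmetic (the €1.40 and €24 figures are from the talk; everything else is derived):

```python
# Back-of-the-envelope check on the Pirate Galaxy figures quoted above.
arpu = 1.40   # EUR per player per the talk, including non-payers
arppu = 24.0  # EUR per paying player

# total_revenue = players * arpu = payers * arppu
# => payers / players = arpu / arppu
conversion_rate = arpu / arppu
print(f"Implied paying-player conversion: {conversion_rate:.1%}")  # ~5.8%
```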

20 Great Innovations in Casual, Social and Mobile Games That You Should Steal

Next up, Stuart Dredge from Pocket Gamer ran through 20 innovative features in games that we should consider exploring more in our own. The first theme was bringing your friends into the
games you play: whether it’s their pictures, names or avatars, the contact-list features of social networking sites and iPhones allow games to customise the experience to the individual
player. Examples included using high-score boards as a competitive prompt for friends, or simply putting names or pictures of people you know into the game. The next theme was using the
player’s music collection as part of the game. Examples included Audiosurf, in which the player’s music forms the basis of the game, or basing music quizzes on the content of your own
library. A novel feature he believes could be used more is treating people’s music files as data for the game, in the same way as the barcode-battler systems of old. Next, Dredge
believes that location could be used well in games, especially on handhelds that are starting to feature GPS as standard; he believes it could lead to more games that factor your real-life location
into the play mechanics. Echoing some of the ideas from Dave Perry’s keynote, Dredge observed that some games are using sites such as Twitter to send and join game invites and brag about your
leaderboard stats. One obvious issue here is the potential for the noise to become annoying spam unless it can be turned off. Leading on from this, we could start using multiple online services to
connect the overall game experience; for example, the ability to auto-upload videos of in-game feats to YouTube, or to link mobile and console content together so that playing through the mobile
game earns an unlockable in the main game. Dredge also considered game rethinks and mash-ups, in which existing games are reinvented or two genres are brought together to open up
different possibilities. The session was very quick-fire, but the overall point was to walk away and see how you could innovate on these ideas for the new platforms we’re pondering.

The Xbox LIVE Game Platform: Community Games for fun and profit

After the lunch break, Charlie Skillbeck led a session about the Xbox LIVE Indie Games platform. Most of us will know this as using XNA to write games and play them on your PC and/or Xbox 360. The
first half of the session didn’t cover anything that isn’t already on the XNA Creators Club website, so I’m not going to discuss it at length. More interesting were the stats
coming out of the session: there are reported to be 338 games on the Indie Games platform now, with many more expected soon, as there have been over a million downloads of the XNA Game
Studio system to date. When asked, he wouldn’t talk about the money being made by any game currently on the system; several indie developers are claiming that it’s impossible to
make money on the platform. Skillbeck was keen to push the point that money can be made, but to do so you really want your game to stand out by ensuring it’s polished to the
nth degree, as the quality of games is rising all the time. For hobby developers and students, Skillbeck was keen to highlight that having a game on the Indie Games platform was
“better than a CV”, as it showed the determination to develop and publish your own title.

A Game is a Game is a Game

The next session I attended was entitled “A game is a game is a game”, led by Dave Thompson of Denki. The key point Thompson was trying to get across is that we need
to stop labelling games and players as casual or hardcore. These labels, he argues, can’t even be defined accurately, so they make little sense. Even worse, use of these labels can
actually put people off playing our games, because people don’t necessarily align with what they think a casual or hardcore gamer is. Stating that casual and hardcore aren’t genres,
Thompson argued that we should stop using the terms entirely and start focussing on the games themselves: make the best game you can and ensure it’s a game you love to play. If you make a
great game and love it, someone else will too, right?

Case Study: A Browser-based MMOG on Every Desktop

After a much-needed cup of coffee, I settled into the next session, a case study of a browser-based game created by Kerb Games. Jim McNiven introduced us to an old but popular HTML game
called Project Rockstar that Kerb has run and evolved for the past 8 years. Using the lessons learned from that experience, McNiven talked us through the company’s next browser-based game,
Sokator 442. When creating this game, Kerb Games faced several key questions: could they make a successful game in the highly competitive browser-game space?
Should they go with HTML/Flash or use different technologies? Could their game be marketed effectively and cheaply? Would people still pay for browser-based games? With these questions in mind, they
designed the game to be “snackable”: playable in 10-minute sessions at a time and place convenient to the player, with no downloads needed. The use of
browser technology and HTML/Flash led to a fast development time, reaching beta within 4 months of inception. In designing how to attract people to the game, they experimented with several models.
The first was an up-front signup system that caused a huge drop-off, with only 3% of people who started the signup actually completing it. By moving the signup process into the game itself, they
saw their take-up rate jump to 23%, a very positive step forward. The game is monetised in several ways, such as allowing small irritations to be removed for a fee (a technique also seen in Pirate
Galaxy) or allowing the player’s character to be customised. Traffic was brought to the game through affiliate schemes, where the linking portal takes a percentage of all cash earned from
the player after signup, and friend-recommendation schemes that reward people who bring friends on board via signup links shareable on sites like Facebook. With several thousand users
already and the average payment per player hovering around €12, the game has been a positive experience for Kerb Games. McNiven believes it shows that there is still interest in web-based games
and that they can be profitable. The key challenge, however, is that it is still very difficult to get traffic to the sites, with many online game portals being extremely precious about their
users and unwilling to risk losing them to an external game site.
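The signup change is worth quantifying. A small sketch of the funnel (the 3% and 23% completion rates are from the talk; the visitor count below is an invented round number purely for illustration):

```python
# Hypothetical funnel illustrating the effect of moving signup into the game.
visitors = 10_000  # invented figure for illustration only

upfront_signup_rate = 0.03   # standalone signup form before play
in_game_signup_rate = 0.23   # signup woven into the game itself

upfront_players = visitors * upfront_signup_rate   # 300 completed signups
in_game_players = visitors * in_game_signup_rate   # 2300 completed signups

uplift = in_game_signup_rate / upfront_signup_rate
print(f"{uplift:.1f}x more completed signups")  # ~7.7x
```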

Practical Applications of Online Convergence

Mediatonic’s Paul Croft stepped up to discuss the practical applications of online convergence. With all the talk about games having a great opportunity in going online, Croft described
several models used to bring games into the online space. The main ones used by a studio are commissioned games, in which an IP holder commissions a studio to create an online game for
them; “advergames”, mini-games designed to promote a product or brand; and fully-fledged games designed to make money for your own company. Both advergames and
commissioned games are generally used to pull traffic to large portal sites by embedding links and logos into the game at many stages. Croft argued that such games shouldn’t be tied down
and should be allowed to be spread, copied or embedded by anyone in order to achieve maximum exposure. These games should also be a full gaming experience tailored to the specific medium the
game runs on, should provide around 15 minutes of entertainment and should get the player into the action within 30 seconds, given the competition in the marketplace. Such games require anything
from a 5-person team taking 3 months to 10 people taking 6 months for the more high-end titles.

Indie games created for the online space are expected to produce a return for the company creating them. Croft indicated that this is a tough revenue stream, relying on
every game being developed rapidly and being a hit with players; a typical team for such a title would be 1-2 people with less than 2 months of development per title. Croft also stated that
relying on ads to monetise a game is extremely unreliable: on average you can expect $0.07 per player per month, less if you rely on portals to host your game (they take between 10% and 35% of
all revenue). The best way seen to monetise such games is via microtransactions, with games making $0.10 to $0.30 per player per month when posted in the social networking space. Croft warned that
such revenue often isn’t sustainable in a single title due to the rapid appearance of clones.
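Netting out the portal cut makes the comparison starker. The $0.07 ad figure, the 10-35% portal cut and the $0.10-$0.30 microtransaction range are from the talk; the rest is simple arithmetic:

```python
# Rough comparison of the per-player monthly revenue figures Croft quoted.
ad_revenue = 0.07            # USD per player per month from ads
portal_cut = (0.10, 0.35)    # portals take 10-35% of revenue

net_ads_best = ad_revenue * (1 - portal_cut[0])   # ~$0.063 after a 10% cut
net_ads_worst = ad_revenue * (1 - portal_cut[1])  # ~$0.046 after a 35% cut

microtx = (0.10, 0.30)       # USD per player per month via microtransactions

# Even the low end of the microtransaction range beats ads at their best:
print(microtx[0] > net_ads_best)  # True
```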


Moving away from market models, Croft then turned to the convergence of games across multiple online platforms as the key to profitability. As seen in other sessions, Croft spoke of games
using features of social networks to achieve collaboration or competition outside of the main gameplay itself; for example, a web-based level-design portal for an iPhone game through which
the community can share content. Some games are creating clients for multiple online platforms simultaneously, allowing people to play on Facebook, the iPhone or other mobile
platforms, giving the player maximum choice and convenience of access.


Finishing up, Croft summarised that online can be profitable for developers and offers many opportunities, but it’s a highly dynamic environment that requires you to constantly evolve and
innovate to survive. Developing for multiple platforms demands a highly adaptive team with a varied skillset and knowledge of your players’ usage patterns on those
platforms.


The Long Tail and Games: How digital distribution games everything. Maybe

Finishing up what was an extremely full day was the keynote “The Long Tail and Games: How digital distribution games everything. Maybe.”, delivered by David Edery of Fuzbi. Edery started
his talk with hard facts released by Sony and Microsoft about digital distribution on their platforms: 18% of Xbox LIVE Gold users download content, compared with 10% of PSN users, figures
that include free content offered on the platforms. When you bear in mind that many people with consoles aren’t connected online at all, the numbers are extremely low. As a result,
Edery asserted that the console market is not yet a viable place for digital distribution to become the primary way of providing content to gamers. Even without these sobering figures, Edery
argued that digital distribution won’t be the nirvana the Long Tail enthusiasts claim it to be. The issue is that the platform holders are still the gatekeepers, and these platform
holders have agendas. If you don’t fit their profile, he argued, your game will not be promoted in the digital space as much as others that do. Additionally, the platform holder has the
power to make ‘kings’ of games: if your game is featured in this way you reap the benefits and all of your competitors suffer harshly as a result, but if a competitor
receives the favour there is very little your game can do to recover on the platform, often fading into obscurity. Edery said that developers then turn to competing with each other for
the favour of the platform holder to ensure they don’t lose out.

Taking the flipside view, Edery stated that markets with no gatekeepers can become overwhelmed with content, creating a high barrier of entry of a different kind: too much content and
too few consumers of it. Chris Anderson’s book “The Long Tail” argues that in such markets every product sells a little; in
reality, Edery pointed out, the market is still hit-driven, and one look at iTunes or Amazon shows that 75% of the revenue comes from 3% of the products. That’s a lot of items making
little or no money at all! What we see in the current digital distribution environment is that the hits get bigger and many indies lose out in big ways to the major labels. In addition,
the current digital distribution channels on consoles and PCs don’t have common Long Tail features such as recommendation engines or user rating systems to ease searching and browsing.
He also argued that it’s impossible for a company to offer discounts or incentives to tempt particular groups of players into buying your game; the platform holders control the pricing
models in such a way that you have to offer such things to all people or none.
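The 75%/3% figure can be turned into a feel for just how hit-driven the market is. Only those two percentages are from the talk; the multipliers below are derived from them:

```python
# The hit-driven concentration Edery described: 3% of products take 75% of revenue.
top_products = 0.03
top_revenue = 0.75

# A hit earns, on average, this many times the catalogue-wide average product:
hit_multiplier = top_revenue / top_products               # ~25x
# Meanwhile the remaining 97% of products split just 25% of revenue:
tail_multiplier = (1 - top_revenue) / (1 - top_products)  # ~0.26x

print(f"hits earn ~{hit_multiplier:.0f}x the average product")
```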


Looking to the future, Edery is certain that digital distribution and online games will become mainstream, but he says we’re not there yet. To get there we have to embrace all sorts of
innovations emerging from the East, such as free-to-play games backed by microtransaction models, better access to games, and ways to protect certain communities from games that may cause
offence.


Wednesday 15th – Develop Conference

“Online Functionality for Your Next Game? Why Not go 100% Online?” – Dave Jones, Realtime Worlds
“Preparing for Larrabee” – Dr Doug Binks, Intel
“Designer mash-up: David Braben and Dave Jones play Elite and GTA”
“Lua Scripting Interactive Behaviour for Playstation Home” – Dave Evans, SCEE
“Driving 3D TVs Using Current Generation Consoles” – Aaron Allport & Andrew Oliver, Blitz Game Studios

Online Functionality for Your Next Game? Why Not go 100% Online?

Dave Jones from Realtime Worlds started the proceedings with the keynote “Online Functionality for Your Next Game? Why Not go 100% Online?”. Jones opened by
talking about the main principles he holds dear when designing games. Citing the original GTA as an example, he feels games of a contemporary nature are more accessible because people are
instantly familiar with the settings and surroundings. Lemmings was used to show that good games are built from very simple elements that lead to complicated compound behaviours and depth.
He also believes that humour and innovation are key to letting players have fun. As an example of all these elements, Jones showed videos of how people use the open-world and co-operative nature of
Realtime Worlds’ original game, Crackdown, as a sandbox for their own kind of fun. Such models of play were never conceived by the designers, with Jones noting “I don’t know what
game they were playing, but it definitely wasn’t Crackdown”.

Showing some footage of APB, the new game from Realtime Worlds, Jones talked about some of the opportunities his team gained by moving from the traditional single-player game to a game
that lives entirely online. While the game starts out as single-player missions, the dynamic world it is set in allows scenarios to escalate into huge multiplayer battles created
automatically by their dynamic matchmaking system. In many ways it is the extreme manifestation of your actions having an effect on the world, and an experience that couldn’t be
delivered in a game played offline. “Our players are our main content”, said Jones to highlight this. Embracing other features of a connected world, the player experience was said
to offer Creativity, Conflict and Celebrity. Creativity is delivered through innovative ways to interact with the world, as well as the ability to customise everything, from how your avatar
looks to the sounds played when they win a fight or die. The game features an in-game audio tracker that lets people compose music, and a system that lets people design clothing and decals
for things such as vehicles, all of which can be traded with other players in game. Celebrity status can be gained in everything from being the best clan to the worst, the best fashion
designer to the best musician: things that only really have meaning in a game that is fully online.


In many ways the session brought home some of the messages from Tuesday’s Evolve conference. It’s certainly obvious from the work of Realtime Worlds on APB that moving entirely online
has granted many more opportunities to deliver a great gaming experience than if they’d remained focussed on single player.


Preparing for Larrabee

Intel’s Doug Binks took the stage for the first Coding session of the conference, entitled “Preparing for Larrabee”. Prior to this session I had little or no knowledge of Larrabee, so
I was very interested to see what the fuss was about. The first thing to note is that the term Larrabee defines an architecture, not a specific product. So what is it
exactly? The Larrabee architecture describes a series of many-core cards that sit on the other side of the PCI bus and are designed primarily for rendering; to the layman, this describes a
graphics card. The architecture has multiple 64-bit x86 cores, an L2 cache and onboard memory accessible from each of the cores. Additionally, the specification includes
several hardware texture-fetch units, and each core has a 512-bit SIMD vector unit with scatter/gather instructions and register masking built in; effectively, each core has been designed to
be a vector-processing powerhouse.
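The quoted 512-bit vector width gives a feel for the per-core throughput on offer. Assuming standard 32-bit single-precision floats (my assumption, not a figure from the talk):

```python
# How many single-precision floats fit in one 512-bit SIMD register:
simd_width_bits = 512
float_bits = 32  # IEEE 754 single precision (assumed)

lanes = simd_width_bits // float_bits
print(lanes)  # 16 floats processed per vector instruction, per core
```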

The software part of the Larrabee architecture is a WDDM driver, meaning it is treated as a display card by Windows and can be used seamlessly with DirectX, for example. Developing for
Larrabee will be somewhat different to developing for a normal graphics card in that you write C or C++ code and compile it for the Larrabee cores, meaning no new language or HLSL
equivalent. The radical implication is that we are no longer confined to the current architecture of graphics cards and can write any code in C/C++ to render our graphics; it may even mean
that hardware-accelerated raytracers become the norm on Larrabee. As long as there are no OS calls, Dr Binks was keen to say, many libraries can be dropped in and compiled for
Larrabee without issue.


Although no technical details were given, Dr Binks gave us some high-level points to consider should we wish to take advantage of Larrabee when it becomes available. There are
three stages of migration. The first is the use of standard DirectX or OpenGL APIs: business as usual for us, as Larrabee will support the current APIs on release. The second phase is to start
migrating components over to Larrabee, so DirectX is still used but some of the functionality becomes Larrabee-native; for example, you may wish to skin or animate your models using C++ compiled
for Larrabee rather than HLSL code. This mixed-mode operation means you can create or manipulate DirectX resources such as vertex buffers in Larrabee code without worrying about moving data back
and forth over the PCI bus. The third phase is the 100% Larrabee-native renderer; this option is the most dramatic, as it sees you abandoning all of your DirectX or OpenGL code and implementing
your own rendering engine. It does, however, grant you the most freedom, such as the ability to use hardware-accelerated voxels or raytracers. It even means Larrabee could be used for
non-rendering functions such as physics or AI, though you should be aware of “who” consumes the data processed by Larrabee, to prevent too much data being pushed over the PCI
bus.


In preparing to move, Binks suggested that we migrate components over gradually, and that the best way is to start coding with Larrabee in mind now. We should be mindful of the
instruction cost of our graphics code and start turning off features we don’t use, even if enabling them is effectively free on current hardware. Binks suggests we start
decoupling our resources from the rendering engine and isolating our rendering engines from the underlying DirectX or OpenGL APIs we currently use. As Larrabee is entirely 64-bit, Dr
Binks recommends that we test our code on 64-bit systems now and compile it using Intel’s C/C++ compilers, to highlight any issues that may be present and to show which code
can or will be optimised on the vector unit. Due to the many-core, multi-threaded nature of Larrabee, we should start considering task-based parallelism and stop thinking in terms of
individual threads. There is no doubt that the advice from Intel is to move to many-core architectures now. In summary, the Larrabee architecture looks like an exciting development
and one that can offer us a lot of power; but to utilise that power, we should be preparing for it now.
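The advice to think in tasks rather than threads can be sketched generically. This is an illustrative example of the pattern, not Larrabee code (which would be C/C++ against Intel's toolchain): work is expressed as many small, independent tasks, and a pool maps them onto however many cores the hardware provides. The `skin_mesh` task here is a hypothetical stand-in.

```python
from concurrent.futures import ThreadPoolExecutor

def skin_mesh(mesh_id: int) -> str:
    # Stand-in for one independent unit of work, e.g. skinning one mesh.
    # Tasks share no mutable state, so the scheduler is free to run them
    # on any core in any order.
    return f"mesh {mesh_id} skinned"

# The pool picks a worker count to suit the machine, so the same code
# scales from a couple of cores to many without changes.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(skin_mesh, range(8)))

print(results[0])  # "mesh 0 skinned"
```

The key design point is that the program describes *what* can run in parallel, not *where* it runs; adding cores then speeds things up without restructuring the code.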


For those wanting to read more, visit the Visual Adrenaline site or the Larrabee site at http://software.intel.com/en-us/articles/larrabee/.


Designer mash-up: David Braben and Dave Jones play Elite and GTA

After the lunchtime break we were treated to a light-hearted session in which two of the UK’s industry legends played each other’s games and commented on the differences between them.
Dave Jones took keyboard in hand to play the seminal Elite on the BBC Micro. While leaving Jones struggling with the keys and difficult gameplay, Braben talked us through some of the more
historical moments in Elite’s development. Back then, he said, Elite was a huge risk to publishers, who argued that players wanted games with 3 lives designed for 10-minute play
sessions; people were also said to want 2D games and wouldn’t understand the concept of 3D. The control system of many games at the time was a simple joystick and one or two buttons, whereas
Elite had a full keyboard to get to grips with, and the ship was operated more like an aircraft, with roll, pitch and yaw rather than standard arcade controls. As if to highlight this point,
Dave Jones struggled to figure out which key moved the player to the right before uttering the comment “so many keys” in dismay.

Braben then went on to talk about how Elite evolved as a game, starting as a simple 3D tech demo of a spaceship before becoming a fighting game; the key elements of trading and loot-dropping
weren’t originally planned until Braben needed the fighting to actually lead to something. With the trading mechanic in place, the game took on a whole new set of choices, effectively
allowing the player to choose to be a pirate and shoot ships for cargo, or face being pirated themselves. Many of Elite’s revolutionary gameplay mechanics and design ideas simply
evolved into being because Braben and Bell added things they thought would be fun: they wrote the game for themselves. Elite sold out its first pressing of 50,000 units in the first week and
eventually sold between one and two million units.


Over 10 years later GTA was released by DMA Design. Like Elite, GTA started out as a simple idea demonstrated by a tech demo – Dave Jones wanted to see how a police chase game would look in
a top-down open city environment, and thus the game “Race ‘n Chase” was born. It was only when the developers found more fun in chasing down the pedestrians that the idea of GTA was
born and evolved from there. Jones said that GTA was built from very simple principles – with inspiration from pinball, the game mechanic was simply to achieve 1 million points. The
feedback of a pinball table was also applied to the game, giving the player the opportunity for combos and score multipliers with instant visual and audio feedback. Unlike the accidental
controversy of Elite’s launch, GTA’s controversy was actually planned, with DMA employing Max Clifford to drum up outrage over the apparent content of the game.


Both designers were asked to comment on how they found porting their games from the original platforms. Jones commented that when developing GTA, the PS1 and other consoles were very much an
afterthought – a far cry from today’s development platforms. Although GTA eventually reached the PS1, it was very much cut down from the original due to the huge difference in the
systems’ specifications. Braben told us how Elite had the opposite issue: because the original platform was so restricted, subsequent versions arrived on platforms with much more power, so
things like graphics and sound could be upgraded.


When commenting on the difference between then and now, both designers reiterated the evolutionary nature of their games and how it’s not viable to develop a large-budget game in the same
way in today’s environment. Games today are far more planned and designed for target markets; back then both designers just created the games they thought would be fun. Both said this ability
to ad-lib was key to how their games eventually turned out, as features added on a whim became the mainstay of the gameplay, with Braben commenting that
“freedom to innovate is hard to come by these days”.


As a final teaser, when asked if we’ll see another Elite game, Braben simply said “Yes” and moved quickly to the next question. One thing’s for sure: I imagine the
experience of creating this new game will be significantly different to how it was in 1984.


Lua Scripting Interactive Behaviour for Playstation Home

Dave Evans led my final session for the day with a talk about how Playstation 3’s Home can be used by your games to provide additional content and greater user interaction with your world.
Introducing the Home Development Kit (HDK), Evans talked about the basic architecture of coding for PS Home. Using a series of Maya/3D Studio Max plugins and a Windows-based object management tool,
developers can hook into Home in many ways; the most basic is the provision of Objects, which consist of geometry and a series of Lua scripts attached to events. Objects can be given to players as
rewards or purchased via the PSN Store. Moving up a level, Evans showed how a complete Scene can be put together to allow Home avatars to walk around and interact in an environment targeted
specifically at your game. Scenes can also feature images or videos streamed from your servers, or serve up text and images on billboards specified in an HTML-like markup language called
“HSML” or via the Media RSS feed system. Home also allows you to launch your game directly from the system, letting you create online games with people gathered from the Home environment
and thus providing greater access to your game.

Lua scripts are used in scenes to control everything from ambient behaviours such as sounds or particle systems all the way to scene-based mini-games in which multiple players can interact in a
full 3d environment, with control over most of the Home engine. The Home API consists of over 400 functions exposed to Lua in a low-level way – for example, you must obtain a handle to the
renderer system and use that, rather than calling a set of higher-level functions to achieve your goal. With this level of granularity, however, the API grants you a lot of power and can control
everything from area triggers and sound playback to network messages between mini-games and commerce points for microtransactions in your game. Finishing the session, Evans pointed out
that we should take advantage of Home as it offers new ways of interacting with our players. He stated that rewards are the quickest way to engage players in Home: grant them exclusive items
they can show off to their friends for completing specific or difficult objectives in your game. He also pointed out that using the game-launching features of
Home you can create custom lobbies for people to gather and play your game with each other.
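The handle-based, low-level style described above is a common pattern worth illustrating. The sketch below is a hypothetical mock-up in Python rather than Lua, and none of the names are from the real Home API: the point is simply that you acquire a handle to a subsystem first and then operate through it, rather than calling one high-level convenience function.

```python
# Hypothetical illustration of a handle-based engine API (invented names,
# not Sony's Home API): callers must first obtain a handle to a subsystem
# and then issue commands through that handle.
class Renderer:
    def set_billboard_text(self, billboard_id, text):
        return f"billboard {billboard_id}: {text}"

class Sound:
    def play(self, sample):
        return f"playing {sample}"

class Engine:
    def __init__(self):
        # Subsystems live behind the engine; scripts never touch them directly.
        self._systems = {"renderer": Renderer(), "sound": Sound()}

    def get_handle(self, system_name):
        return self._systems[system_name]

engine = Engine()
renderer = engine.get_handle("renderer")             # obtain the handle first...
msg = renderer.set_billboard_text(3, "High score!")  # ...then operate through it
```

The granularity means more calls per task, but it also means a script can reach any subsystem the engine exposes, which is exactly the trade-off described in the talk.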


It was interesting to see some of the things that could be done in PS Home, especially when you bear in mind the mini-games and microtransaction opportunities it gives you. However I was
disappointed to hear that to get into Home development you must still have a full Devkit, although a Home-only kit can be purchased. This severely limits the access of Indie game developers to the
platform, when arguably they would be the ones to provide the most innovative experiences in the system.


Driving 3D TVs Using Current Generation Consoles

My final session of the day was a talk about the state of 3d movies in the home and how the game industry could get involved. For those thinking that games and films are already 3d: we’re talking
about stereoscopic 3d systems, in which the images on the screen appear to have depth into the screen and a projection of 3d out of it. Cinema has begun to embrace the technology as it comes
hand-in-hand with digital distribution and, although all of the current 3d films are CGI, live-action films such as James Cameron’s Avatar are on their way. Several TV manufacturers such as
Samsung are already including 3d in their high-end televisions but, as Andrew Oliver notes, the feature has nothing to show it off. Games, he argues, will be amongst the first demonstrations of
this technology in the home, as they can arrive before 3d films and players are available. 3d comes in two formats, multiview and stereoscopic 3d. Multiview doesn’t require anything other than
the TV (and media to play on it), but the current technology suffers from picture issues. The most widely adopted 3d format today is stereoscopic 3d, in which two images are projected
from slightly different viewpoints, much like how our own eyes work. The drawback to this system is that it requires a set of glasses to be worn, something of a turn-off
for some people.

Now that we were aware of how 3d is delivered in the home, Blitz Game Studios demonstrated their method of presenting it. All 3d games require a 3d TV and a console capable of presenting full
HD 1080p resolution – sorry, Wii owners, we’re only going to see 3d on the Xbox and Playstation systems. The “active” style glasses that effectively “flip” between one eye
and the other require that games render at a full 60fps, otherwise the illusion is destroyed. Finally, the 3d effect requires two renderings of the current frame, each from a different
viewpoint. The astute reader will realise that to make our games 3d-ready we must be capable of rendering 2x1080p frames at 60fps – a fairly high benchmark to expect! As Oliver points out,
however, this benchmark raises the gaming experience for everyone, whether using 3d or not – we get 1080p resolution games running at 60fps. Once this resolution can be achieved, Blitz showed
us how the 3d effect could be enhanced through the use of parallax effects and a non-linear z-buffer to achieve greater 3d resolution in the foreground; camera FOV can be used to great effect
to draw the viewer’s attention to specific areas when zooming in or out.
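The two-renderings-per-frame requirement can be made concrete with a small sketch. This is not Blitz’s code; it is a minimal illustration (with an assumed interocular distance) of the core step: offset the camera half the eye separation along its local right axis in each direction, and draw the scene once from each viewpoint.

```python
# A minimal sketch (not Blitz's actual code) of the stereo step described
# above: every frame is rendered twice, from horizontally offset viewpoints.
EYE_SEPARATION = 0.065  # metres; a typical human interocular distance

def eye_positions(camera_pos, right_vector, separation=EYE_SEPARATION):
    # Offset the camera half the separation along its local right axis,
    # once in each direction, to get the two stereo viewpoints.
    half = separation / 2.0
    left = tuple(c - half * r for c, r in zip(camera_pos, right_vector))
    right = tuple(c + half * r for c, r in zip(camera_pos, right_vector))
    return left, right

def render_stereo_frame(camera_pos, right_vector, render):
    # 'render' stands in for the engine's draw call; it runs twice per
    # frame, which is why 2x1080p at 60fps is the benchmark quoted above.
    left_eye, right_eye = eye_positions(camera_pos, right_vector)
    return render(left_eye), render(right_eye)

frames = render_stereo_frame((0.0, 1.7, 0.0), (1.0, 0.0, 0.0), lambda p: p)
```

A production renderer would also use asymmetric projection frustums per eye, but the doubling of draw work shown here is the part that sets the performance bar.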


Of course, implementing 3d has its share of challenges. New decisions need to be made about the speed of the game, where in the depth buffer the HUD/UI sits, and how we avoid the edge-violation
problems that ruin the illusion. Oliver comments that the 3d effect can make people feel nauseous if misused, so too many moving elements on the screen should be avoided, as should quickly panning
camera cuts. Lighting and shadowing also need consideration, as shadows now effectively have screen depth as well – many small things can quickly spoil the illusion.
The challenges do not stop with the development of your game; the single biggest issue in 3d is that there is no standard for the type of glasses used, the polarity of the glasses (linear or circular)
or the TV screen output format itself. As developers we need to make sure that the game is usable on all current models of glasses and TVs, meaning that the code to output to each
format can be time-consuming to develop and test – let alone the cost of purchasing the hardware. Finally, how does the 3d effect get advertised to the masses? Given that you need
the hardware to see 3d, how do you sell that hardware to people without it, when pictures on the internet or in magazines cannot convey the effect?


My personal opinion is that 3d is neat – the game “Invincible Tiger” was great to watch in the expo with the glasses – but I have many concerns about the viability of this tech for games
at this point in time. The lack of a hardware standard is a huge sticking point, as is the requirement for a full 1080p game running at 60fps, let alone the requirement of rendering 2
frames at this resolution. Once these technical challenges are conquered, I’d like to see what other studios come up with.


Thursday 16th – Develop Conference

“The Wizards of OS: I Don’t Think We’re in C++ Anymore” – Doug Wolff, Eutechnyx
“Playstation: Cutting Edge Techniques” – Kish Hirani, Colin Hughes, SCEE
“Building LEGO Worlds, online, offline and everything in between” – Jonathan Smith (Traveller’s Tales), Henrik Lorensen (The LEGO Group) and Ryan Seabury
(NetDevil).
“Using GIT to Tame a Herd of Cats” – Lee Hammerton and Jake Turner, Crytek UK
“Rethinking Challenges in Games and Stories” - Ernest W. Adams

The Wizards of OS: I Don’t Think We’re in C++ Anymore

My first session of the day kicked off with a light-hearted look at how Python scripting was implemented in Eutechnyx’s game “Ride To Hell”. Looking back over a similar session
presented last year, Wolff compared the “then” and the “now”, showing how their original idea for using Python was forced to change over time. The original idea was that
Stackless Python would be implemented and used to control everything in the game; the game engine itself would be more akin to an Operating System than a game, in which a low-level API
exposed almost all of the engine’s features and offered the scripters full control over everything in the game. The game’s engine executable would be static, and
the only code to ever change when implementing this and future games would be the Python scripts and assets. However, this idea didn’t fully take off... Performance of Stackless
Python was the single biggest issue seen by the team – it was simply too slow to be used in the way they intended. Additionally, the freedom their low-level API presented was enormous;
in fact, it was too enormous. This was demonstrated by showing some of the code used to script missions in the game – a single “simple” mission was over 380 lines of Python
code, much of which was taken up checking the state of objects and acting accordingly. The team also found that the scripters often implemented the same ideas in many different ways due to the
flexibility at their disposal; as a result, people became less productive because the code was highly complex and non-standard. Debugging the script code was also hard, as
there was no way for a scripter to breakpoint their code and inspect variable values. A hybrid solution was eventually created, but for many coders it was difficult to move from the comfort of
Visual Studio to a black-box environment with no debugger.

In order to improve performance and tame the issues of code complexity, the team decided that the API should be scaled back significantly and abstracted into a series of game-specific actions and
goals. An example demonstrated this, with the original series of actions “walk to bike, get on bike, ride to destination, get off bike” condensed
into a single engine command, “travel to”, which implicitly took care of everything else. However, this meant that the goal of “Engine is the OS” quickly started to break down
as more game-specific code was needed in the engine. To implement this, the AI needed to be smarter than it was, allowing the game to make key decisions such as whether the character was close enough
to the destination to walk or should take his bike – all decisions that were scripted explicitly in the original implementation. The main issue when abstracting the API was
deciding how granular it needed to be – to decide this, the team scripted up several different mission types and identified the common traits and functionality between them.
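The abstraction described above can be sketched in a few lines. This is a hypothetical reconstruction (the names, threshold and structure are invented, not Eutechnyx’s code): the explicit four-step script collapses into one high-level command whose internals decide the details.

```python
# A hypothetical sketch of the API abstraction described above: four
# explicit scripted steps collapsed into one engine command that makes
# the low-level decisions itself.

# Before: the mission script drives every step explicitly.
def mission_leg_scripted(character, destination):
    return ["walk to bike", "get on bike",
            f"ride to {destination}", "get off bike"]

# After: a single high-level command; the engine's AI chooses whether
# walking or taking the bike is appropriate (the 50m threshold is invented).
def travel_to(character, destination, distance):
    if distance < 50:
        return [f"walk to {destination}"]
    return ["walk to bike", "get on bike",
            f"ride to {destination}", "get off bike"]

plan = travel_to("Jake", "the bar", distance=400)
```

The trade-off is visible even at this scale: the script shrinks dramatically, but the distance heuristic now lives in the engine, which is exactly how “Engine is the OS” began to erode.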


Summing up, Wolff highlighted that scripting allowed for much quicker iteration cycles in the game. New mission ideas could be prototyped and tested very quickly using Python, but there were
many issues in implementing the scripted code, the main one being performance – current consoles simply don’t have the power to deliver the original idea using Stackless
Python. The team still believe the original concept was right and will be working towards it in the future.


Playstation: Cutting Edge Techniques

Drawn by the prospect of a live demo of Sony’s Motion Controller, first shown at E3 this year, the Coding keynote session was packed. First up was Kish Hirani to deliver some key
points about Sony’s upcoming tech and general development information for Playstation developers. He began with the Mini PS3 Devkit that’s available to academia at a
cost of around 1,700 euros; the PSP Go Devkit was said to be available to all interested members of PS Devnet. Without much further ado, the prototype Motion Controller was brought out and Kish
launched into the archery demo shown at E3. The key points were that the controller requires the PS Eye, which is used to track the light balls on the end of each controller. Using the
internal gyros and accelerometers, the controller can be tracked even when not visible to the PS Eye, allowing actions such as reaching behind your back. The ball colour can be controlled by
the developer, and 4 different controllers can be tracked simultaneously in 3d space with no lag. Developers interested in using this tech can apply for a prototype now, but must be
approved by SCEE during the R&D phase of the controller. Sony are finishing up their AiLive middleware technology for the motion controller, which will be available at no cost to all PS
Devnet members.

Continuing his talk, Kish moved on to the latest developments in PS Eye technology. A new facial recognition library is available which can, amongst other things, detect the age and
gender of players as well as determine the position of individual face parts, such as the shape of a player’s mouth. An upcoming release of this library aims to provide full skeletal
recognition, a feature recently shown off in Microsoft’s Natal demo at E3. Keen to show the application of the PS Eye, Kish showed a demo movie of two games, both of which used augmented reality in
some way. The first was akin to a virtual pet game that allowed people to interact with the pet in front of the TV using gestures and natural movements – it also featured another technique seen in
Microsoft’s Natal demo, the ability to “scan” in a drawing and have the pet interact with it. The second game used the PSP’s camera technology to allow the
player to track and hunt down a virtual pet, an interesting use of the portable nature of the console to augment reality.


Colin Hughes then took the stage to talk in some detail about the latest techniques employed by Playstation developers (PSP and PS3) in delivering the current series of games on the platforms. The
session was extremely detailed and would have been of most benefit to PS3 developers, so I won’t go into depth here. I will, however, highlight some of the key points from Colin’s
talk. On the subject of preserving framerate on the PS3, Colin talked of using the screen area of an object to track and drop small objects once the framerate starts to suffer. The idea is that
the larger objects on the screen are the most important/visible to the player, so a falling framerate should mean that lesser objects can be dropped without sacrificing visuals. A technique used on
WipEout HD was to dynamically change the screen resolution to preserve a constant framerate – the screen can be rendered at 1024x768 in a graphically
complex scene and jump back to 1280x1080 when the complexity drops off – in many cases this is almost free, as many games drop to a lower resolution for post-processing anyway. Another
technique was shader LOD: reducing the complexity of shaders at distances where the effect isn’t visible anyway – for example, one could disable the parallax and normal mapping
shaders of an object in the distance. Used with a z-prepass before the application of shaders, you can avoid many unnecessary shader passes for objects that may not even be visible or are too distant
for it to matter; in many cases this technique was shown to save almost 10% of GPU time, even considering there are effectively two renderings – the
z-prepass and the final render of the screen. Deferred shading was also mentioned as a must, with games like Killzone 2 using 5 render targets packed with different types of data to achieve
scenes containing over 100 separate lights. Many of these techniques are not PS3-specific, so both PC and Xbox developers would get a lot out of researching them for their platforms.
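The screen-area heuristic described above is simple enough to sketch. The structure and thresholds below are my own invention, not Sony’s implementation: when the last frame ran over budget, the smallest on-screen objects are culled first, since the large ones matter most to the player.

```python
# A sketch of the screen-area framerate heuristic described above
# (thresholds and data layout invented): when frame time climbs past
# the 60fps budget, drop the smallest on-screen objects first.
TARGET_FRAME_MS = 16.7  # budget for ~60fps

def visible_set(objects, last_frame_ms, min_area=0.002):
    # objects: list of (name, screen_area), with area expressed as a
    # fraction of the screen. Under budget, draw everything; over
    # budget, cull objects whose screen area is below the threshold.
    if last_frame_ms <= TARGET_FRAME_MS:
        return [name for name, _ in objects]
    return [name for name, area in objects if area >= min_area]

scene = [("ship", 0.30), ("asteroid", 0.05), ("debris", 0.0005)]
drawn = visible_set(scene, last_frame_ms=21.4)  # over budget: debris culled
```

A real engine would hysteresis-smooth the decision over several frames to avoid popping, but the core idea is just this prioritisation by visible size.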


From here, Hughes talked more specifically about PS3-only optimisations that rely upon the Cell SPUs to take much of the load off the RSX unit and save GPU time. It’s possible to use the SPUs
for geometry processing, per-vertex lighting and even post-processing effects. However, using the SPUs requires you to be aware of the differences between the SPU and the RSX –
certain operations are more costly on the SPU, and the fact that the SPU doesn’t have direct texture access means that data needs to be copied to main memory. The SPUs
can’t work efficiently at HD resolution, so data needs to be scaled to fit – this makes them ideal for processing blur/glow effects but poor at high-quality image processing. The parallel
processing available on the SPUs allows for many complicated effects to be achieved, and in some cases they outperformed the traditional GPU solution. The accumulation of all these techniques is
available in Sony’s “Edge” library, available to all Devnet developers. Hughes notes that the decision to use the SPUs isn’t because the SPU is faster, but because they
are often idle at the time, so it makes sense to offload some of the processing while the GPU is busy. Clever use of the hardware can lead to highly advanced and optimised graphics.


Building LEGO Worlds, online, offline and everything in between

Kicking off the Design track’s keynote, Lorensen talked about how children don’t play with physical LEGO bricks as much as they once did, so the key issue faced by the company is how the
LEGO experience can be brought to the digital medium. Looking back on some of the ideas pushed in the Evolve track, it’s easy to see how the LEGO Group are in a perfect place to offer the
experience craved by many gamers whilst remaining true to their traditional roots. The ideas pushed by LEGO are those of a highly accessible and open system with a low entry barrier, acting as a
platform that encourages creative thought and user-generated content – all traits of both the traditional LEGO bricks and the LEGO games. To highlight this, Jonathan Smith showed off some of the
level-creation facilities in LEGO Indiana Jones and how the act of creation has been made as true to the traditional LEGO experience as possible – you control a minifigure to build
the level around you, just as you would if you were physically placing the bricks. This idea of creation has become the very fabric of everything from the LEGO website, on which users can create and share ideas,
to the upcoming MMORPG LEGO Universe, in which the world is literally built by the players. The move to digital LEGO was difficult and suffered many failed attempts,
the key challenge being replicating the physical experience of building in a virtual environment – the team believe the current and upcoming games have this experience in place.

In talking of the convergence of multiple games and media into the online space, the LEGO Group were keen to make sure that each move was planned and added value to the overall experience. For
example, they said it would be easy to “cash in” and take the easy route to short-term profit, but this would cheapen the overall product. Their idea is to make very distinct,
focussed products and deliberately design in links and convergence when it is the right thing to do. The idea is refreshing, especially as many developers clamour to link as
many media together as possible to make money quickly. The key driver in designing the LEGO games is to remember that players are driven to live out their own stories, and to allow for the emergent
gameplay and experimentation that comes from one’s own creativity. The idea of social play is extremely attractive to many, especially children, who enjoy interacting with the community around
them through chat, co-operative play or sharing LEGO designs and models. The LEGO Group pay special attention to their community and listen to their needs, as well as fostering a
culture wherein the “bad” elements are ousted by the community as a whole – the idea of a mature, self-moderating community must be appealing to many developers in the online space.


The talk was very well presented and I think served to show how some of the key ideas discussed in the Evolve conference could be approached by other developers and companies.


Using GIT to Tame a Herd of Cats

The primary purpose of this session was to explain how Free Radical Design (now Crytek UK) leveraged Linus Torvalds’ GIT source control management system to manage a large codebase and maintain
stability while it is being worked on by a large team. The problems they faced will be familiar to many large teams: with 80+ coders working on a single codebase that
spanned multiple platforms and multiple games, it was common to see over 100 commits to the system each day. With such high volumes of commits the potential for error was high, and the team
spent the majority of their time fighting off bugs that had been introduced somewhere in the cycle and repairing the codebase so that builds were still possible. The solution, it seemed, was to
reduce the percentage error per commit. To achieve this many things were put in place, such as unit testing, TDD and even codebase closures in which no commits were allowed after a specific time,
allowing people to spend the rest of the afternoon getting ready for the nightly build. This led to the team being incredibly unproductive, as everyone effectively had to down tools each night and
sit waiting for a stable build to emerge. The bigger the team got, the larger the problem grew, highlighting that the best solution would be to reduce the size of the teams. Obviously they
couldn’t lay people off for the sake of codebase stability, so it was decided that the teams needed to work in a hierarchy.

It was decided that features would be worked on by specific teams, in order to isolate and localise whole sections of the codebase. This was then mapped into a hierarchy: a team
of up to 5 developers was led by a senior developer, who in turn was led by a lead programmer, and so on all the way to the root of the tree. The senior developer is responsible for maintaining the
code quality of the 5 people under them, meaning code is only allowed into the main source repository when it all works together. At each point in this hierarchy people can be assured that the
code they receive is functional; if not, it becomes easy to track where the issues appeared, as the smaller team sizes localise the development effort. With people acting as gatekeepers of their
nodes, they ensure that only good-quality code is seen by others, and can expect that only good-quality code arrives to them.


When this hierarchy was decided upon, their existing use of Subversion as an SCM didn’t fit this way of working, so the team turned their attention to hierarchical SCM systems such as GIT,
Bazaar and Mercurial. GIT was chosen as it was designed specifically for source code by people who work with source code every day – the team managing the Linux kernel. Having
decided on GIT, the team began to migrate their SVN codebase – a task that took literally weeks. The rollout was gradual and focussed on the engine development
team, so many people were still left working in SVN, notably the game developers and artists; as a result there was constant maintenance in pushing and pulling between the SVN repository and the
GIT_HEAD main repository. The push from GIT to SVN was treated as a weekly milestone by the team and was used to focus the development and delivery of features to the game development teams.
An SVN pull into GIT was done daily, so the engine team always had the cutting-edge game code – the reason being that the team found it easier to map the SVN code into GIT than the GIT
hierarchy and layout into SVN.


Working in this hierarchical way encouraged developers to take advantage of source control locally, making literally hundreds of local commits per day and only pushing up to the next level when the
feature they were working on was fully complete. This provided benefits such as point-in-time recovery and diffs, offering easy resolution to issues such as “but it worked last
night...”. Likewise, the senior and lead developers could be assured that the majority of the code they receive has been checked by many people, removing many potential issues
before they break the main codebase. This way of working allowed people to create shared branches that were pulled and integrated constantly, allowing potentially code-breaking changes to be shared
and developed across many teams before being merged seamlessly into the main codebase.


Whilst it has many benefits, the process did have its issues. The main one was that you need a GIT expert or two on hand to resolve problems that can occur through incorrect use of the system. As
it’s such a powerful system it is possible to trash a repository fairly easily, and although it’s possible to recover quickly from these scenarios, you still need someone on hand to tell
you how. Secondly, as GIT is designed primarily for source code, keeping ever-changing binary assets such as art or sound in it can lead to many problems. People linking GIT to traditional
“flat” SCM systems such as Perforce or SVN will suffer issues, as the paradigm doesn’t translate back very well from GIT (for example, the lack of revision numbers in GIT). The team
also suffered issues with line endings, but didn’t detail what they were.


Overall, however, the Free Radical engine team had a positive experience using GIT and would recommend that people try it. Although Crytek don’t use GIT except in localised
situations, the team do miss the freedom the system allowed.


Rethinking Challenges in Games and Stories

My final session of the day, and indeed of the conference as a whole, was a talk by game design guru Ernest W. Adams. His talk was actually a rehash of one presented at GDC in 2007, due to his last-minute
inclusion at the Develop conference. However, it was interesting to see that it remains completely relevant, even 2 years after it was originally conceived.

The talk was split into two main parts; the first discussed the concept of difficulty in games and what we can do to provide the best experience for each player. The second discussed ways we
could move beyond the traditional challenge/reward concept we use in games. When discussing difficulty, Adams admitted that he was bad at games and often played them on the easiest setting to see the
content and narrative. Adams split difficulty into 6 influencing factors, 4 we can control and 2 we can’t. The ones we can control are the intrinsic skill required to meet a
challenge, the stress (time pressure) involved in meeting it, the power we provide the player in achieving the goal, and the previous experience of our game that the player has. We can make things
require more skill, or better twitch reactions from the player, or we can provide more or less assistance towards the goals provided. The 2 factors we have no control over are the native talent of
the player and their previous experience of other similar games. All six of these factors combine to create
the “perceived” difficulty of your game. Managing this perceived difficulty is paramount: if players see the game as too hard they’ll get frustrated; if they see it as too
easy they’ll get bored.
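The six factors can be turned into a toy model to make the idea concrete. The function below is my own construction, not a formula Adams presented: the two skill-and-stress inputs push the perceived-difficulty score up, while the four assistance-and-experience inputs pull it down.

```python
# A toy model (my own construction, not Adams' formula) of the six
# factors listed above, combined into one perceived-difficulty score.
def perceived_difficulty(intrinsic_skill, stress, player_power,
                         in_game_experience, native_talent,
                         genre_experience):
    # All inputs in [0, 1]. Skill demands and time pressure raise the
    # score; granted power, familiarity with this game, native talent
    # and experience of similar games lower it.
    raising = intrinsic_skill + stress
    lowering = (player_power + in_game_experience +
                native_talent + genre_experience)
    score = (raising / 2) * (1 - lowering / 8)
    return round(score, 3)

d = perceived_difficulty(0.8, 0.6, 0.3, 0.1, 0.5, 0.4)
```

A designer only controls the first four arguments, which is the practical point of the taxonomy: tune those to keep the score in the band between frustration and boredom for players whose last two inputs vary widely.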





