
GDC 2011

Modular Component-Based AI Systems

Posted in Summits, AI · 08 March 2011
The GDC 2011 AI Summit opened up with three heavy-hitters from the AI world (Brett Laming, Joel McGinnis, and Alex Champandard) discussing the merits and motivations behind component-based architectures.

Although the term has gained popularity in recent years, and most people in the room expressed at least some familiarity with the concepts, there remains a substantial amount of uncertainty as to what exactly a component architecture entails. To address this, the lecturers presented an outline of how and why component architecture gained the spotlight in modern games engineering, and provided some tips and important rules on how to approach component-based designs.


Historical Trends
As object-oriented programming took hold and languages like C++ finally gained enough traction in the games industry to see widespread adoption, the typical design methodology involved creating rich, deep hierarchies of inter-derived classes. This quickly ran into issues such as multiple inheritance's "diamond problem," brittle structure, and questions of how to deal with "non-inheritance" situations (where some but not all functionality of a branch of the inheritance tree is desired in a particular leaf class).
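
To make the diamond problem concrete, here is a tiny, hypothetical C++ sketch (the class names are invented for illustration and were not part of the talk):

    #include <iostream>

    struct Entity              { float health = 100.0f; };
    struct Flying    : Entity  { void Fly()  { std::cout << "fly\n"; } };
    struct Swimming  : Entity  { void Swim() { std::cout << "swim\n"; } };

    // The "diamond": FlyingFish inherits Entity twice, once through each branch,
    // so it carries two copies of Entity's data unless virtual inheritance is used.
    struct FlyingFish : Flying, Swimming {};

    int main()
    {
        FlyingFish f;
        // f.health = 50.0f;        // error: ambiguous -- which Entity subobject?
        f.Flying::health = 50.0f;   // must disambiguate explicitly
        f.Fly();
        f.Swim();
    }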

One reaction to this was to push functionality towards the roots of the inheritance tree, essentially creating "fat base classes." This proved even more problematic in practice: code bloat in the base classes hurt readability, clarity, and maintainability, while data bloat in the base classes led to immense memory waste and overhead, which became thoroughly unacceptable as games grew in scale.

A more promising direction was to make the entire engine core highly light-weight and extremely data-driven, where virtually all of the behaviour and richness of the game simulation was accomplished in data rather than directly modeled in code. This approach still has its proponents, but suffers from a critical weakness: it lacks natural hooking points and specificity by which one can drill into the running simulation and inspect or modify its state. Put simply, offloading the complexity into data (away from code) deprives us of all the benefits of code-modeled introspection and manipulation.


Enter the Component Model
A central observation behind the introduction (and indeed the widespread adoption) of component-based architectures is that there are fundamentally four things in a simulation which need to be elegantly captured:

  • Classification of entities (Is this a weapon? An item? A door? A sharp weapon? etc.)
  • Key properties (How much damage does this weapon do? How much does it weigh? Which direction does the door open, and what key(s) does it require?)
  • Defined mappings of inputs to outputs (Weapon damage values modify health values; keys modify door lock states; etc.)
  • Interchangeability (Can I use this weapon in place of that one?)
Component architectures provide a modeling tool for all four areas; although other approaches can say the same, components provide a compact and highly elegant manner in which to reach these goals.
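
For illustration, here is a minimal sketch of the "entity as a bag of components" idea; the component names are hypothetical and simply map onto the areas above (classification becomes "which components are present," key properties live inside small components, and interchangeability falls out of matching component sets):

    #include <iostream>
    #include <memory>
    #include <typeindex>
    #include <unordered_map>

    struct Component { virtual ~Component() = default; };

    // Key properties live in small, focused components.
    struct Damage : Component { float amount = 10.0f; };
    struct Weight : Component { float kilograms = 2.5f; };

    class Entity {
    public:
        template <typename T> T* Add() {
            auto c = std::make_unique<T>();
            T* raw = c.get();
            parts[typeid(T)] = std::move(c);
            return raw;
        }
        template <typename T> T* Get() const {
            auto it = parts.find(typeid(T));
            return it == parts.end() ? nullptr : static_cast<T*>(it->second.get());
        }
        template <typename T> bool Has() const { return parts.count(typeid(T)) != 0; }
    private:
        std::unordered_map<std::type_index, std::unique_ptr<Component>> parts;
    };

    int main()
    {
        Entity sword;
        sword.Add<Damage>()->amount = 25.0f;
        sword.Add<Weight>();

        // Classification: "is this a weapon?" becomes "does it carry a Damage component?"
        // Interchangeability: anything carrying Damage can stand in as a weapon.
        if (sword.Has<Damage>())
            std::cout << "deals " << sword.Get<Damage>()->amount << " damage\n";
    }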

The main difference between the component mode of thought and older, less desirable approaches is the notion of systems. Indeed, it is worth noting that proper application of component architecture demands rich use of systems; anything less will essentially collapse back into the same kind of fat architecture we were trying to escape in the first place. Moreover, in a systems-oriented model, granularity of functionality becomes desirable rather than problematic.

Systems are, fundamentally, the "glue" by which components are organized and compartmentalized. Moreover, systems formalize the interactions between components and other systems. This drives reusability in several key ways:

  • Inheritance can be used (sparingly!) to reuse logic and data relationships directly
  • The structure of interrelated components can be reused modularly
  • Data flow between components and systems can be interchanged as needed
  • Compartmentalization separates reusable elements into neat packages
  • As a bonus, parallelization can easily be accomplished between systems
Careful use of class inheritance, along with factory methods, serialization, and run-time type information (RTTI) frameworks, can provide a highly data-driven model without sacrificing the specificity and hooks of a richer code model. In addition, the deployment of systems can help identify dependencies and functional structure within the simulation itself, allowing for easier maintenance and iteration on existing code.

Another potential win of systems over game graphs and similar structures is the elimination of redundant searches. A system can keep track of all the components and entities relevant to it directly, thereby avoiding the need to constantly traverse the game universe looking for those entities. This in turn starkly highlights the lifetime relationships between various entities, which can be a major advantage when it comes time to do dependency analysis on the simulation itself.
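
A minimal sketch of that idea, assuming a hypothetical "movement" system that registers the data it cares about up front instead of searching the world every frame:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    struct Position { float x, y; };
    struct Velocity { float dx, dy; };

    class MovementSystem {
    public:
        std::size_t Register(Position p, Velocity v) {
            positions.push_back(p);
            velocities.push_back(v);
            return positions.size() - 1;       // handle the entity keeps
        }
        void Update(float dt) {
            // No world traversal: the system already owns exactly the data it needs.
            for (std::size_t i = 0; i < positions.size(); ++i) {
                positions[i].x += velocities[i].dx * dt;
                positions[i].y += velocities[i].dy * dt;
            }
        }
        const Position& At(std::size_t handle) const { return positions[handle]; }
    private:
        std::vector<Position> positions;       // contiguous storage: good locality
        std::vector<Velocity> velocities;
    };

    int main()
    {
        MovementSystem movement;
        std::size_t npc = movement.Register({0, 0}, {1, 2});
        movement.Update(0.016f);
        std::cout << movement.At(npc).x << ", " << movement.At(npc).y << "\n";
    }

Because the system owns its components outright, their lifetimes are visible at a glance, which is exactly the dependency-analysis benefit described above.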

Last but not least, components allow for late binding and re-binding of type information. Have a set of logic that relies on park benches, which suddenly needs to be rewritten to use dumpsters instead? The code change amounts to tweaking a single "tag" within the appropriate system, rather than making large numbers of tedious and fragile changes to raw code dependent on the actual "park bench" or "dumpster" classes. The data-driven aspects of component architectures become a major advantage in this sort of scenario.
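
As a rough sketch of that tag-driven rebinding (names entirely hypothetical), the system below depends on a data tag rather than a concrete "park bench" or "dumpster" class, so retargeting the behaviour is a one-string change:

    #include <iostream>
    #include <string>
    #include <vector>

    struct Entity { std::string tag; float x; };

    class LoiteringSystem {
    public:
        explicit LoiteringSystem(std::string targetTag) : target(std::move(targetTag)) {}
        void Update(const std::vector<Entity>& world) const {
            for (const auto& e : world)
                if (e.tag == target)
                    std::cout << "NPC loiters near " << e.tag << " at x=" << e.x << "\n";
        }
    private:
        std::string target;   // "park_bench" today, "dumpster" tomorrow; no code rewrite
    };

    int main()
    {
        std::vector<Entity> world = { {"park_bench", 3.0f}, {"dumpster", 8.0f} };
        LoiteringSystem loiter("dumpster");   // the single "tag" tweak described above
        loiter.Update(world);
    }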

It is worth reinforcing the fact that component models are not "an architecture" but rather a paradigm in which architectures can be created. As with virtually everything in the engineering and architectural realms, the exact details will depend highly on the specific game or simulation we are setting out to make.


Parallelization
Component models can be a very powerful tool on modern platforms where concurrently-executing code is a central aspect of engine design. One important observation is that AI work (and indeed simulation work in general) essentially consists of reading and writing properties of entities in the simulation, and potentially rearranging the logical structure of those entities (moving objects, creating new NPCs, recycling old assets, etc.). Envisioning this as a sort of circuit diagram is a useful technique: data flows "downstream" between systems each frame. Any mutation of game state that can be passed downstream to later systems can be accomplished using just the execution stack space, since later systems will always have safe access to that memory. However, any "upstream" communication needs to be delayed by a frame, by queuing a "message" which is read by the appropriate system in a subsequent tick. This decomposes nicely into a job/task system, which is a (deservedly) popular means of handling parallelism in modern engines.
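
Here is a minimal sketch of that frame-delayed "upstream" messaging, assuming a simple double-buffered queue (the message type and names are hypothetical):

    #include <iostream>
    #include <utility>
    #include <vector>

    struct DamageMessage { int targetId; float amount; };

    class MessageQueue {
    public:
        // Downstream systems post upstream messages during the current frame...
        void Post(DamageMessage m) { pending.push_back(m); }

        // ...and at the start of the next tick, last frame's messages become readable.
        void BeginFrame() {
            ready = std::move(pending);
            pending.clear();
        }
        const std::vector<DamageMessage>& Ready() const { return ready; }
    private:
        std::vector<DamageMessage> pending;   // written this frame
        std::vector<DamageMessage> ready;     // read this frame (written last frame)
    };

    int main()
    {
        MessageQueue damageQueue;

        // Frame 1: a system far downstream reports damage back "upstream".
        damageQueue.Post({42, 15.0f});

        // Frame 2: the upstream health system consumes what was queued last tick.
        damageQueue.BeginFrame();
        for (const auto& msg : damageQueue.Ready())
            std::cout << "entity " << msg.targetId << " takes " << msg.amount << " damage\n";
    }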

As with any other parallelization tasks, a few fundamental rules apply:

  • Minimize the volume of data propagated throughout the system
  • Further, minimize the lifetime of any data that does need to be passed around
  • When possible, derive data rather than duplicating it; there is no need to store mass, volume, and density when any two will suffice
  • Locality of reference is key; custom allocation is, as always, a major win here
  • NULL checks can be eliminated by using dummy, do-nothing objects instead of null pointers (see the sketch after this list)
  • Propagate RTTI information along with pointers in order to avoid duplicate virtual-table lookups
  • Vectorize component update operations via SIMD instruction sets
  • Perform jobs in batches across cores (helps with cache/false sharing issues)
  • Interleaved allocation is a powerful tool for leveraging SIMD and other parallelization techniques
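
To illustrate the NULL-check bullet above, here is a small sketch of the null-object idea (hypothetical types, not code from the talk): point at a shared do-nothing instance so the hot loop never has to branch on a null pointer.

    #include <iostream>

    struct Target {
        virtual ~Target() = default;
        virtual void ApplyDamage(float amount) = 0;
    };

    struct NpcTarget : Target {
        float health = 100.0f;
        void ApplyDamage(float amount) override { health -= amount; }
    };

    struct NullTarget : Target {
        void ApplyDamage(float) override {}    // deliberately does nothing
    };

    NullTarget g_nullTarget;                   // one shared dummy instance

    int main()
    {
        NpcTarget npc;
        Target* targets[] = { &npc, &g_nullTarget, &npc };   // never NULL by construction

        for (Target* t : targets)
            t->ApplyDamage(10.0f);             // no per-iteration NULL check needed

        std::cout << "npc health: " << npc.health << "\n";
    }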

Concluding Thoughts
The session was a great way to open the AI summit, cramming vast amounts of valuable advice and information into the one time-slot when everyone's mind was guaranteed to not already be turned into jelly. Although much of it was not new to those experienced with component architectures, there were plenty of nuggets to guide the decision-making process of both novice and veteran architects alike. An informal poll of the audience suggested that a substantial portion of those in attendance learned at least something valuable to take back to their own individual design efforts - the hallmark of a truly successful session.


The Circle of GDC Life

Posted 08 March 2011

So another GDC has come and gone - my fourth, personally - and it seems that the same old patterns held true yet again this year that have dominated the GDC experience so many times before:

  • Count down the weeks, then days, then hours before GDC
  • I fail to sleep properly the night before flying out
  • Something goes wrong with the flight out (this time, one of my connections was cancelled; no big deal, thankfully)
  • Arrive in San Francisco a day or so early, relax, prepare for the chaos of the coming week
  • Kick off the partying right on Sunday night
  • Stay up way too late and feel terrible on Monday morning, but full of adrenaline anyways
  • Repeat this process 4-5 times until it is time to depart San Francisco
  • Something goes wrong with the flight home (this year, we tried to collide with another plane while landing in Houston)
  • Eventually get home at some unholy hour, drive the long trek back from the airport, proceed to become unconscious for several days
  • Wake up with some kind of hobovirus from the conference and feel like crap for the next week
  • Start counting down the weeks until we can do it all over again

As so many others have rightly pointed out, GDC isn't really about the conference itself all that much. Sure, the sessions can be educational, inspiring, and entertaining; the parties are unquestionably worth a week trip in and of themselves; and the swag (although it seems to get lighter every year) is always fun. But the real win of attending GDC is the other people.

In my personal situation, I contract for a German company while living in Atlanta, GA in the USA. I literally know no one else in the area in the games industry, partly because the industry barely exists around here in the first place. I get precisely one week a year to talk shop with fellow industry professionals, and that is at GDC. So to me, the conference is a tremendously exciting outlet for all the technological, motivational, and personal gibberish that clogs up my brain tubes from the preceding year.

It's also pretty much the only way to recharge my batteries for the following year.

GDC 2011 was no exception to this rule, just as it was no exception to the many other patterns that have always held true of GDC for me. And I've come away with not just the customary three-inch stack of business cards, but with some genuinely fascinating contacts and opportunities that will no doubt heavily shape the year to come.

I've talked to people about how to further my career; how to improve my working process within that career; how to do more interesting and entertaining things during the course of performing my job; how to further my own personal interests both in and outside of computing; and as always there were a handful of folks that are just great friends to have around.

GDC is like a massive family reunion, except with less drama about that One Really Weird Uncle and substantially less crappy picnic potato salad. The games industry is truly a microcosm of humanity as a whole; there are people from all aspects of life, perspectives, beliefs, opinions, and feelings represented. What makes that slice of the species so powerful to me is that we're all there under a common banner - to make and enjoy great games. Nowhere else in the world have I found such a diverse crowd of folks all willing to set aside differences for something singular and common; nowhere else have I found so many people all willing to learn from each other instead of simply disagreeing vehemently (or violently).

It may seem melodramatic - especially if you've never had the privilege of immersing yourself in that sea of nearly 20,000 souls - but GDC is magic.


As is customary I'll be providing some coverage of the whole shebang here on GameDev.Net over the next few days, primarily focusing on the AI summit. Stay tuned for more sentimentality and drivel about the Event You Shouldn't Have Missed.


IGDA Business and Legal SIG official

Posted in Sessions, Summits, Education, Business/Management · 03 March 2011
A group of IGDA industry professionals and attorneys gathered yesterday at GDC to formalize the new Business and Legal SIG. The meeting was moderated by Dan Rosenthal; approximately 20-30 IGDA members and industry professionals were in attendance. Topics included goals for the new SIG, events, and organizational structure. In addition to informational white papers related to business and legal issues in the games industry, the members proposed increased political involvement as well as a greater focus on globalization and international issues for the SIG.

Members were particularly interested in seeing specific case studies for "freemium", free-to-play, and pay-to-play business models. Additional proposals included a one day business start-up summit and an entrepreneur track for future IGDA events to address the business and legal issues developers face when starting new projects and new studios. The Business and Legal SIG hopes to provide a wide variety of resources for industry professionals, including member-managed information compilation for regional and country tax credit incentives for studios and publishers.

Specifically, the new SIG discussed the possibility of creating a Business and Legal Wiki or other resource management tool that will tie into IGDA's Educational SIG. SIG members also proposed educational outreach in addition to resource compilation. Many members expressed the concern that valuable information doesn't reach academic programs related to game development.

Approximately six IGDA members volunteered to head the SIG's steering committee. If you would like to learn more about the new SIG, visit http://www.igda.org and sign up to join the mailing list.


Keep it Together: Encouraging Cooperative Behavior During Co-Op Play

Posted in Sessions, Design · 03 March 2011

Patrick Redding of Ubisoft Montreal formerly worked on Far Cry 2 (i.e., the best first-person game) and directed the co-op component of Splinter Cell: Conviction. He is now the game director at the new Ubisoft Toronto studio. He started by making a distinction between player cooperation and systemic cooperation. He referenced a blog post entitled Tahrir: The Game, posing the hypothetical scenario of making a game dealing with non-violent revolutions. "There's a misunderstanding that [...] through the fabric of the twitterverse" social media is somehow giving rise to these revolutions, but that's a fallacious thought. Revolutions start from within a country, with people dealing in high-stakes, dangerous situations banding together and forming strong social bonds with one another.

"Players become invested in the success of a collaborative partners" because of that ongoing process of negotiation with other players. It converts selfish motives into those associated with the collective goal of the group.

Redding went on to talk about Minecraft and a server he plays on with a bunch of friends -- though he actually rarely sees any of them online. It's largely a server with people building things of their own will. A few weeks ago, he took a break from the game and "it wasn't really a hard break to take." When he signed on a few weeks later, though, he opened the server and was greeted by giant land masses and statues; in his absence his friends had created an amazing Inception-like cityscape. "Players respond very, very positively to this collective agency."

"Why do players cooperate?," Redding asked. "How do we achieve the conditions in which shared intentionality" arises in large, big-budget AAA games. Redding went on to discuss the lessons he learned in the co-op component of Splinter Cell: Conviction.

The first big lesson is that "players will work together to optimize system input." In the detection model for Splinter Cell: Conviction, "2x the players [does not equal] 2x the detections," since the players end up working together for their common cause, even though double the detections is theoretically entirely possible. The next lesson Redding and his team learned is that "shared intentionality promotes individual self-expression." At the high level, players develop strategies. At the mid level, they create individual, lower-scope plans. At the low level, players make riskier choices in co-op than they do in single-player. And, at the mastery level, players in co-op are more willing to explore optional paths because they "know that they have a more reasonable chance of regrouping with their team and trying another approach if it doesn't work out."

"Players derive satisfaction from meaningful cooperation" and this meaningful cooperation results in players enjoying the game more. So much so, that players are much more willing to forgive flaws of the game than they are in single-player.

Redding then went into formal design tools, looking for tools that are genre-agnostic and systemic. He started this discussion with "cooperative dynamics," using the same notion of dynamics that Clint Hocking talked about on Wednesday. "Dynamics are what deliver the final game experience." Redding listed some of the dynamics that they used in Splinter Cell: Conviction, such as "gating/tethering" (a very prescriptive dynamic), "exotic challenges" ("altered camera/controls for some players"), "punitive systems" and "buffing systems," "asymmetric abilities," "survival/attrition," and "combined actions." In explicating the "combined action" dynamic, Redding elaborated with a definition I really liked: "Any game challenge attacks a discrete set of player skills: precision, timing, measurement, management, tactical choice, strategy, puzzle-solving." Beyond this, the solution to the challenge is largely left to player choice.

Redding closed with some of the lessons that he and his team learned from their experience on Splinter Cell: Conviction.


GDC 2011: Day 3

Posted 03 March 2011

Day 3 started, much like Day 2, at 5:00am, because for some reason I'm under the false assumption that I should continue waking up at my normal time all week.

That is a poor assumption.

Day 3's sessions started with the keynote from Nintendo president Satoru Iwata, entitled "Video Games Turn 25." Largely, the session was about Iwata recounting the early days of Nintendo and attempting to promote feelings of pride and ambition in the development community through a variety of anecdotes. This part of the session was actually great to listen to, but when Iwata began talking about the features and promise of the Nintendo 3DS specifically, the keynote became more of a light version of Nintendo's E3 press conference (Nintendo of America president Reggie Fils-Aime even came out at one point to talk at length about it).

What should have been the keynote was the next session, given by former Ubisoft Montreal Creative Director and now LucasArts Creative Director Clint Hocking (about whose site/twitter name I had a remarkable discovery). The session, entitled "Dynamics: The State of the Art," was general enough and entertaining enough to appeal to just about anyone at GDC -- not just the game design track it was on -- and contained an abundance of useful and insightful information. Hocking, whose GDC lectures are consistently amongst the best sessions that GDC has to offer, posited that before we bother talking about what specific video games mean, we need to understand "how they mean." Hocking's point is that we need to understand, at the most basic and the highest levels, how an interactive medium conveys meaning through play. No single part of this session was mind-blowing, but its tremendous holistic value cannot be overstated.

Next up was the GDC Microtalks, with Naughty Dog lead designer Richard LeMarchand presenting all of the individual speakers (ranging from David Jaffe to Colleen Macklin to Brenda Brathwaite) in his opening microtalk. It was in this opening microtalk that LeMarchand gave the theme for the session: "How you play." None of these talks provided new information, but each had a very sentimental core (except Jaffe's, which took a largely practical tone about the amount of time it takes to get into console games), with the takeaway being largely inspirational in nature.

It was around this point that I disliked that the main conference didn't have the same lunch break time instituted that all of the summits do. Not that my abilities to eat a sandwich while walking are particularly bad, but they are.

Frank Lantz's "Life and Death and Middle Pair: Go, Poker and the Sublime" was next, and it was a very interesting talk to hear, as I am largely unfamiliar with Go and a pretty poor poker player. Lantz's primary purpose was to illustrate the timeless nature and endless depth that both of these games have, and the way they pervade the mind of anyone who plays them. My favorite point was the relationship between the notion of "expected value" and probability in poker, and how it leads people to inadvertently come to understand the scientific method through a practical introduction to what is, essentially, Bayesian theory.

"The Failure Workshop" was next and, really, the main takeaway from the whole session was to prototype early and test out ideas before rat-holing into tangential work too early on.

My favorite talk of the day came from Kent Hudson, a game designer at LucasArts and former designer at 2K Marin (Bioshock and the in-production X-COM), entitled "Player-Driven Stories: How Do We Get There?" In the session, Hudson went over the theory and ideas behind a more systemically-driven game design that allows games to take a less prescripted approach to storytelling and offer a more involving player experience. The way to get there is to measure a player's actions more systemically and, specifically, their relationships to other entities in any given game. Through this relationship monitoring, the game can heuristically track a player's actions and, as necessary, react to the sum total or an individual component of all that collected data when the time is right.

Hudson referenced the three tenets of self-determination theory to determine what players really need in order to reach "happiness": autonomy (referred to as "agency" in the session), relatedness, and competence. It is through the successful recognition and embrace of these three pillars that a game can properly involve a player in its world. Hudson then took the necessary step from all of the theory into the practical world of AAA game development, highlighting that it is necessary to rethink the way AAA games approach content in order to fill out a game world with content flexible enough to respond to a variety of player stimuli. Hudson specifically referenced the removal of five major time- and money-consuming elements of content: VO, custom writing, environments, models, and animation, along with ways to really "own" a style that allows a development team to re-appropriate its budget as necessary for a game that isn't as prescripted as a lot of today's games typically are. Given that the last thing I wrote for my site was entitled "The Systemic Integrity of Expression," I agree fully.

It's somewhat sad that directly across the hall from Hudson's session, David Cage was saying things like "Game mechanics are evil. Mechanics are a limitation. We need to redefine what interacting means." Which, I mean, no.

After the day's sessions wrapped, it was time for the Independent Games Festival awards show and the Game Developers Choice Awards show. Unlike last year, the awards show was unexpectedly entertaining and completely hilarious thanks to IGF host Anthony Carboni and GDCA host Tim Schafer being thoroughly amazing. It weirded me out a little that, during the Game Developers Choice Awards, so many of the categories were filled with games that I had so little love for. The closest I got to rooting for a game was when Dragon Quest IX: Sentinel of the Starry Skies and Metal Gear Solid: Peace Walker were both up for a nomination (in the same mobile game category).

The day ended with some good fun at the Nidhogg tournament at the Eve Lounge and then some other miscellaneous happenings.



Electric Elephants and Communism: Clint Hocking on How Games Mean

Posted in Design · 03 March 2011

Clint Hocking gave a thought-provoking and interesting talk -- complete with allusions to electric elephants, communism, and Russian film icon Lev Kuleshov -- in an attempt to answer a question Chris Hecker raised during earlier GDCs. Specifically: How do games mean?

No, that's not a grammatical error or a joke. It's a play on the question "what do games mean?"

There are many problems with the question "what do games mean?" The largest might be that we don't have a particularly good way to systematically answer it. The question of "how do games mean?" (or, more grammatically, "how do games create meaning?") is more answerable, and more importantly, the answer is perhaps useful in creating games that are rich with meaning.

So how do games mean? I'll try to summarize Mr. Hocking's view here, although it's quite intricate. Mr. Hocking has an interesting answer to that question, but before getting to that, it might be instructive to look at the history of film.

Lev Kuleshov is an iconic figure in film. He's best known for discovering the Kuleshov Effect. From Wikipedia:

Kuleshov edited together a short film in which a shot of the expressionless face of Tsarist matinee idol Ivan Mozzhukhin was alternated with various other shots (a plate of soup, a girl, a little girl's coffin). The film was shown to an audience who believed that the expression on Mozzhukhin's face was different each time he appeared, depending on whether he was "looking at" the plate of soup, the girl, or the coffin, showing an expression of hunger, desire or grief respectively. Actually the footage of Mozzhukhin was the same shot repeated over and over again. Vsevolod Pudovkin (who later claimed to have been the co-creator of the experiment) described in 1929 how the audience "raved about the acting.... the heavy pensiveness of his mood over the forgotten soup, were touched and moved by the deep sorrow with which he looked on the dead child, and noted the lust with which he observed the woman. But we knew that in all three cases the face was exactly the same."[1]

Kuleshov used the experiment to indicate the usefulness and effectiveness of film editing. The implication is that viewers brought their own emotional reactions to this sequence of images, and then moreover attributed those reactions to the actor, investing his impassive face with their own feelings.


Here's the important bit in all of that: through the creative use of editing, the audience can be brought to find meaning in something that is otherwise ambiguous. Ivan Mozzhukhin's face was the same in all shots, but by allowing the viewer to interpret his reactions, the editing created meaning that was otherwise nonexistent.

So what does that have to do with meaning? Well, in Mr. Hocking's view, films create meaning through editing. Narrative might direct the experience, but editing is the basic building block -- the "how", if you will -- from which meaning is constructed.

Well that's great for film, but how do games create meaning?

Dynamics would be Mr. Hocking's answer.

Wait, what are dynamics?

Dynamics are the behavior of the game itself, the way it interacts with you in response to your actions. This is where meaning is created. In this sense, cutscenes or stories wouldn't be the primary creators of meaning -- because they aren't part of the behavior of the game. They can frame and reinforce the meaning of the dynamics, but they aren't where meaning is created.

As background for where this "dynamics" term comes from, one model of thinking about game design is the "Mechanics, Dynamics, Aesthetics" model. You can think of it this way:

Mechanics = Rules
Dynamics = Behaviors that arise from the rules
Aesthetics = Feelings that result from the players experiencing the behaviors

There's the "message model of meaning", in which mechanics overrule dynamics. You could think of this as a more authoritarian view of how you should view the game, in which the designer uses the rules of the game in order to enforce a specific message.

There's also the "abdication of authorship" model, in which dynamics overshadow mechanics. In this model, player agency is maximized, but the trade off is that with extended player agency, the designer's control over the experience is reduced.

Games can fall upon a spectrum in this sense, with some games reserving the creation of meaning for themselves, while other games allow the player more control over what the meaning of the game is.


The Failure Workshop

Posted in Sessions, Design · 02 March 2011

Kyle Gabler opened the Failure Workshop with the story of a game entitled "Robot and the Cities That Built Him," which was to be a project based on a seven-day experimental gameplay project. "Because we're game developers, we started by making a bunch of different units," he said. "This is an indie game! The robots are not destructive, they're a metaphor."

"But that wasn't big enough," he said. Then the game became "Robot and the cities... the musical!" And it started with bunnies jumping through the forest singing "it's a fuzzy wuzzy day." And this is the best thing ever, I think. "Their fuzzy wuzzy skins peels off revealing cold metal beneath."

"...but it's still not fun," he said "and so we did what we should have done six months earlier" and they made a quick prototype with as little production effort as possible. And then they realized the game wasn't fun "or deep or interesting in any way." "The second reason this is horrible is that we had lasers [...] and it just wasn't us. I don't know, I'll never make a game with a sword in it." 2D Boy extrapolated two things from their experience: "No amount of theming will save a bad idea" and the second thing was that "Trying to live up to a previous game is paralyzing."

Then George Fan (of Plants vs. Zombies and PopCap fame) took the stage. He opened his bit with a slide entitled "My Failure Story." He continued with a little background on his history as a child, doodling and sketching out game ideas in rough drawings on paper. He ended this with "Cat-Mouse-Foosball," the first game he made; "hey, [the design] worked for Bomberman, why couldn't it work for Cat-Mouse-Foosball." "I prototyped one level of the game and realized how poorly it played and never bothered with the rest," he said, citing all of the other peripheral design work he did at the start of the project as a waste of time. Fan then showed a demo of Cat-Mouse-Foosball that he made recently, and considered it an accurate representation, albeit ten years later, of how bad the game was.

"In 2001, I almost quit making games forever..." Fan says in another slide. "The first thing I had to do was recognize the distinction between a thing I was familiar with [illustration] and something I was not [game design]." "Games are more like this complicated machine" rather than a quick sketch someone can envision in their head or jot down on a piece of paper. "[Games just] aren't something you can keep in your head once." This led Fan to his first conclusion: "Start prototyping the game as soon as you can [...] you're not going to know if the game is fun or not until you're actually sitting there playing the game." He ended his presentation by saying "Don't give up! If you love what you do, you will persevere."

Next up in the Failure Workshop was Matthew Wegner, who founded Flashbang Studios (which started as a casual game development company with only three people). "After we had some money in the bank" a few years later, Flashbang went on to make a variety of games that were done in roughly eight weeks apiece and uploaded to Blurst.com. This whole process didn't make Flashbang any money. Wegner summed it up by saying eight weeks is too little time to make a game like World of Goo but too much time for a game like Canabalt. Flashbang's first release, Off-Road Velociraptor Safari, accounted for almost a third of all of their traffic. "We set Blurst up in such a way as if it failed [...] we still had a really great time."

Wegner then moved on to Off-Road Velociraptor Safari HD, which would take their most popular game and make an HD version of it for consoles. They spent three months on the HD version, building off of the web version and preparing it for publisher work. Wegner then showed off the result of this time with the trailer for Off-Road Velociraptor Safari HD. "We were definitely pushing in this HD direction [...] and it turned out to be a pretty big mistake. We were currently three people and by calling the game HD" Flashbang was setting unrealistic expectations for everyone who would play the game. "And it turned out that we actually hated working on this," the act of simply polishing an old game with better graphics and sanded edges. Wegner summarized this with a quote from Dean Karnazes: "Somewhere along the line we seem to have confused comfort with happiness."

Summarizing the issues with the HD approach, Wegner related a unicycle example to lead to his eventual point: "we weren't willing to fall backwards" and get weird and clever with the game they were working on. The team then redirected a little, attempting a more non-photorealistic, cartography-inspired look and, in a similar sense, experimenting with the gameplay mechanics. One of the design mistakes Flashbang drew from this redirection is that they failed at "design[ing] for players."

Chris Hecker ended the Failure Workshop with his "Rock Climbing Failure." "We're going to concentrate on the failures between 2001 and 2003," Hecker said, joking about ignoring the other independent failures. Hecker talked about everything he was involved with during this period of time that wasn't "ship[ping] the game." He then demoed the game.

"So what went wrong?" he asked. He then listed out: "Technology rat-holing. Non-game distractions. And lack of ass-in-chair." Hecker then demonstrated a ridiculous level of math in Mathematica. It was pretty great. Hecker said all of these failures all boil down to one single problem: "I was scared of game design," continuing, "Design is hard, unpredictable, mysterious, unstable." He concluded by pointing to Spy Party and saying that while all of the aforementioned problems still arise, he is solving the fundamental problem by making sure that everything he does is playable.


AI Roundtable

Posted in Programming · 02 March 2011

I am sitting now at the AI Roundtable, between Mike Lewis and Dave Mark. These guys were part of the crew running the full AI Summit, which I unfortunately missed. Roundtables are something I discovered late at last year's GDC, and I think they might be some of the best events; they give you more of a chance to engage directly with people, instead of just sitting through lectures.

* Dave Mark is discussing using real life as a model for building AI, by reflecting on how we make decisions in real life.
* How do you make complex AI stuff in the background (eg emotions etc) actually useful and visible to the user/player? Some games do it explicitly. Animation can do a lot as well. Ambiguity can be used so that the player simply makes up their own ideas about what is happening. Maybe the subconscious feel of the game is enough.
* It's dangerous to avoid doing AI because characters are only on screen for a few seconds. Maybe fewer characters could be on screen for longer, doing more interesting things.
* A lot of discussion about emotion. Candidly, I don't see the point...I don't want characters who get sad and angry and scared, really. I want characters who aren't stupid. That's the cure modern games seem to be missing...but I haven't played shooters in a long time.
* Talking about heavy animation blending in order to enable variety in how characters act and behave. It's difficult for me not to plug my company at this point, but we just don't have a product yet.
* The best designers are programmers. This is obvious to me. These non-tech people who want to still design the nuts and bolts of games need to get lost.
* More discussion about really enabling designers and junior programmers to build AI. Small scope custom scripting languages, designing around the designers, or teaching them code concepts. I'm imagining separate water fountains for coders and designers.
* Interesting suggestion -- go learn to use an appliance from a foreign country, with a language you don't understand. This is how designers feel about the tools engineers give them.
* Still more discussion about designers. I've never worked with a game designer, per se, so I don't have much perspective on the matter.
* Are designers like puppies? You have to train them just right or they leave a mess, from the sound of it.
* Another interesting way to approach AI, keeping compute power in mind -- use really stupid AI for characters until they actually become directly relevant to the player, and make up the rest from there. Not degrade existing AIs, but take uninteresting characters and promote them to much more detailed creations.
* In some games, making the enemies too smart isn't helpful because then you just lose. It's better to have them play out patterns that are complex and interesting but learnable. I'm not sure I agree with that -- depends on your market.
* When is the last time someone reviewed a co-op game and said the co-op AI was good?


SPU based Deferred Shading in Battlefield 3

Posted in Programming · 02 March 2011

Waiting right now for Christina Coffin's talk on SPU based deferred shading in Battlefield 3. I hear they use DX11 compute shader heavily on the PC side, so it'll be interesting to see how they leverage the SPUs.

* Previous DICE games were forward rendered, with just a few lights. Mirror's Edge, which they SHOULD be making a sequel to, used pre-rendered radiosity lighting. The goal of Battlefield 3 was to really expand the lighting and materials, and DICE picked deferred.
* Their PS3 implementation uses 5-6 SPUs in parallel with the GPU and CPU.
* GPU does the initial G-buffer fill, and then passes it off for shading by the SPU. Still waiting to see what that means. Are they using SPUs as fragment shaders?
* The framebuffer is sliced into 64x64 tiles, with a simple shared incrementing counter used for synchronization (see the sketch after this list).
* SPUs compute 16 pixels at a time, in SoA format at full float precision
* The GPU is still busy on other shading while the SPUs eat through the deferred shading
* About 8 ms of real time on 5 SPUs for deferred shading while the GPU does other work, which equals 40 ms of total maximum compute time contributed by the SPUs.
* Light culling is done with a two-stage tile hierarchy, after whole-camera and coarse Z culling of light volumes
* A branch is used to skip all 16 pixels if the attenuation makes them unlit. This is a net win despite the branching cost.
* The SPU workload is very even-pipe heavy due to all of the FPU work, so complex functions can be replaced with look-up tables that use strictly odd-pipe instructions; this can decrease a 21-cycle function to 4 cycles.
* There is still a pure GPU-based implementation of the shading pipeline that maintains visual parity, in order to facilitate debugging and validation.
* SPU job code can be reloaded into the live game, enabling bug fixes with quick iteration.
* My good friend Steven Tovey gets a call out at the end of the talk! He loves SPUs, so if you meet him you might wanna wear a dust mask.
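
Not the actual SPU code, obviously, but here is a portable sketch of the shared-counter tile scheduling described above, with std::atomic and std::thread standing in for the SPUs (tile counts are illustrative):

    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    constexpr int kTilesX = 20, kTilesY = 12;   // e.g. 1280x768 sliced into 64x64 tiles
    constexpr int kTileCount = kTilesX * kTilesY;

    std::atomic<int> g_nextTile{0};             // the shared incrementing counter

    void ShadeTile(int /*tileIndex*/) { /* per-tile deferred lighting would go here */ }

    void WorkerMain()
    {
        // Each worker claims the next unshaded tile; the counter hands out every
        // tile exactly once, so no further synchronization is required.
        for (;;) {
            int tile = g_nextTile.fetch_add(1, std::memory_order_relaxed);
            if (tile >= kTileCount)
                break;
            ShadeTile(tile);
        }
    }

    int main()
    {
        std::vector<std::thread> workers;
        for (int i = 0; i < 5; ++i)             // five workers, mirroring the 5-6 SPUs
            workers.emplace_back(WorkerMain);
        for (auto& w : workers)
            w.join();
        std::printf("shaded %d tiles\n", kTileCount);
    }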


GDC 2011 Keynote: Video Games Turn 25

Posted in Sessions, Business/Management · 02 March 2011

Satoru Iwata started his keynote by highlighting the worries of developers concerned with job security.

"You are the center of the video game universe," he said to the attendees of the GDC 25 keynote, attempting to ease those concerns.

Iwata then moved on to how he taught himself programming in his own time, joking that "I concluded that I pretty much had video games figured out." Iwata then talked about meeting and working with Shigeru Miyamoto, saying "I was convinced [my work] was technically superior." Miyamoto's games then outsold Iwata's by a ridiculous margin; Miyamoto 'taught' him that "Content is really king." Iwata confessed that this is when he learned that technology and engineering weren't everything.

In talking about the early days of Nintendo, Iwata said, compared to games today, they were "video game cave men." He talked about wearing different hats, making enough money to pay the rent, and other problems typical of any start-up that we generally don't think about in regards to a company like Nintendo anymore.

Iwata then talked about a large-scale survey Nintendo runs twice a year, started in Japan in 2005 and since expanded to other regions. Iwata showed a graph of the "Composition of the U.S. Gaming Population" and how the prominent gender of gamers between ages 4 and 75 switches from predominantly male to predominantly female. Iwata then moved on to show the active gaming population in the United States and Europe, both of which exceed 100M users (with the US exceeding 160M) as of October 2010.

The next topic Iwata tackled was that of "social networks" and "social games." He wanted to clarify the use of "social" in these terms and the widely-believed implication of the term "social game." He aims to redefine "social" as simply being a large group of people and the activities they choose to engage in with one another. Iwata then took the opportunity to promote the role that various Nintendo products have had in smaller-scale social groups (families and the like). "In those early days, being social only meant 'competing,'" Iwata said, citing how people would connect two Game Boys together with a link cable in order to duel in Tetris.

"I don't want it to seem that Nintendo is taking too much credit for its role in creating the social game," Iwata says. He then cites Call of Duty's role in multiplayer and Microsoft's "considerable investment" in Xbox Live.

The term "must-have" describes something that Iwata feels is so important that every gamer "must have it." "These tend to come from one of three sources," Iwata says, starting by citing hardware itself as a source (using the first Game Boy as an example and the role in incubating portable gaming it played). "Second, there were times when a game itself is must-have [...] names as diverse as Sonic the Hedghog, Just Dance, Grand Theft Auto, Guitar Hero, Angry Birds, The Legend of Zelda, and Tetris." "But, there is a third source of must-have that extends from neither hardware or software [...] it comes from the player itself. It's that social appeal of gaming." Iwata cites how Pokemon mechanically encouraged people to trade their Pokemon with their friends as a reason for why the franchise has been so hugely popular over the year. Iwata also cites "universal appeal" as a reason why so many of Nintendo's games have been so popular over the year.

The keynote then turned into a Nintendo press conference as Iwata took all these principles and talked about Kirby (which was a great story) and the Nintendo 3DS (which was a press conference spiel). Reggie Fils-Aime also appeared to talk further about the Nintendo 3DS. "It's a system to play games," Fils-Aime said on numerous occasions as he talked about Netflix and movie trailers.