fmx/07 Coverage - Day Three/Day Four

Published June 01, 2007 by Emmanuel Deloget, posted by Myopic Rhino

fmx/07 coverage, day three: the workshop day

On this day, I decided to attend more technical workshops. To be honest, they were not that technical, but they were still more in-depth than a plain presentation. All in all, that was - again - an enjoyable day, filled with passion, creativity, and - oh, well - speakers.

D3D10 Unleashed
by Bruce Dawson, Programmer, Xbox team, Microsoft

I learned something in this workshop - mainly that DirectX 10 is really a great API and that it's far superior to DirectX 9 in many key areas. Let me be blunt: DirectX 9 is about to die - not because of DirectX 10, but because its programming model is convoluted and difficult to set up, despite its rather nice object-oriented architecture. And that's probably the main reason that drove the creation of Microsoft's new graphics API.

To create DirectX 10, everything has been redesigned from the ground up. A new driver model is used (which, unfortunately, rules out porting DX10 to Windows XP; I'm still not sure about that, but hey, it's an MS product, I don't have the source code here, so I can't check my own assumptions), and D3D gets a brand new architecture which is easier to map to the hardware.

Bruce Dawson gave us a short presentation of the major changes in the D3D10 architecture.

  • While there are still some optional features in the API, the caps bits have disappeared, forcing all vendors to support a common set of features. It obviously makes things easier for the programmer, although some people might find the removal of the caps bits questionable (I know I do; what will happen when they release DX11, or even DX10.1? I predict the return of the caps bits).
  • The driver model implements a brand new unified and virtualized GPU memory management layer. The immediate consequence is that there are no more lost device cases to handle; moreover, the driver can now manage memory more efficiently, depending on the new usage types (which are far less cryptic than the previous ones; what does D3DUSAGE_DEFAULT mean, in the end?).
  • Everything has been done to accelerate batch rendering. The worst enemy of the batch renderer is the state system. It has been heavily reworked (states are now logically grouped, and wrapped in COM objects), and the net result is that you only have 5 different state blocks, and the core state management API is a list of 7 different functions - a minimal sketch follows this list. The same simplicity can be adopted by the hardware vendors as well, resulting in faster state changes - and as a consequence, faster rendering.
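
To make this concrete, here is a minimal sketch of the state object approach, assuming a valid ID3D10Device and omitting error handling and Release() calls; the descriptor structure and the two calls are the actual D3D10 API, the function itself is mine:

    #include <d3d10.h>

    // Describe a group of rasterizer states once, get an immutable COM state
    // object back, then bind the whole group with a single call.
    void BindSolidCullBackState(ID3D10Device* device)
    {
        D3D10_RASTERIZER_DESC rsDesc;
        ZeroMemory(&rsDesc, sizeof(rsDesc));
        rsDesc.FillMode        = D3D10_FILL_SOLID;
        rsDesc.CullMode        = D3D10_CULL_BACK;
        rsDesc.DepthClipEnable = TRUE;

        ID3D10RasterizerState* rasterizerState = NULL;
        if (SUCCEEDED(device->CreateRasterizerState(&rsDesc, &rasterizerState)))
            device->RSSetState(rasterizerState); // replaces a long series of render state calls
    }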

Of course, this list wouldn't be complete if I forgot to include the 3 other major new features of D3D10.

  • the shader model 4.0 generalizes shaders, and introduces a new stage in the rendering pipeline: the geometry shader. Geometry shaders are executed after the vertex shader and before the pixel shader. They allow the programmer to use the GPU to create new sets of vertices from the output of the vertex shader. For example, one can implement a particle system directly on the GPU, including the creation of new particles and the destruction of dead ones. Another possibility would be to create the geometry of a shadow volume directly on the GPU, instead of relying on the CPU to fill the vertex buffers (that would definitely reduce the bandwidth cost of this technique).
  • resources have been reworked and generalized. By themselves, resources hold little information about their type, but you can create views on resources - and thus interpret them as a specific kind of resource. For example, constant buffers are a new kind of resource that can be bound to a shader and that provides an efficient way to set up or update shader constants. The resource usage creation flag has also changed - you now have 4 possibilities when creating a resource (see the sketch after this list):
    • immutable resources - never updated
    • default resources - updated less than once per frame
    • dynamic resources - updated more than once per frame
    • staging resources - a fast path for transferring data between the CPU and the GPU
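
As an illustration, here is a minimal sketch of how the usage type is expressed at creation time for a vertex buffer (the function name and parameters are mine; the structures, flags and the CreateBuffer call are the actual D3D10 API):

    #include <d3d10.h>

    // Create an immutable vertex buffer: the usage type is chosen once, at
    // creation time, and tells the driver how the resource will be accessed.
    ID3D10Buffer* CreateStaticVertexBuffer(ID3D10Device* device,
                                           const void* initialData,
                                           UINT byteWidth)
    {
        D3D10_BUFFER_DESC desc;
        ZeroMemory(&desc, sizeof(desc));
        desc.ByteWidth = byteWidth;
        desc.Usage     = D3D10_USAGE_IMMUTABLE;    // never updated after creation
        desc.BindFlags = D3D10_BIND_VERTEX_BUFFER;

        D3D10_SUBRESOURCE_DATA init;
        init.pSysMem          = initialData;       // immutable resources need their data up front
        init.SysMemPitch      = 0;
        init.SysMemSlicePitch = 0;

        ID3D10Buffer* buffer = NULL;
        device->CreateBuffer(&desc, &init, &buffer);
        return buffer;                             // NULL on failure; error handling omitted
    }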

The simplicity of the architecture of this new (well, not that new, ok) API is quite appealing. As Bruce said, the long list of new features should allow new graphics techniques to emerge (for example, D3D10 is perfectly able to render a whole scene using a single DrawPrimitive call; the reason lies in SM 4.0: the GPU is now able to select the material to apply to an object).

COLLADA, A Khronos standard
by Rémi Arnaud, Graphics Architect, SCEA

Rémi gave us quite a thorough overview of COLLADA. He began by describing the design goals and the objectives of the standard. He was kind enough to allow us to host his slides - you can download the PDF using the download link on the sidebar to the right. (Available soon, sorry - Ed.)

Everything began with the rising cost of content production - the reason being that the overall complexity of the content is rising. The obvious solution to the problem is to use better tools, but here we face a list of issues: tools are more and more specialized, files are difficult to exchange between the different tools, not to mention that asset version management - which is more and more prevalent - is difficult if not impossible, due to the binary nature of these content files. As a consequence, there is a need for a new file format with (at least) the following properties:

  • it should be extensible
  • it should be modular
  • and it should support validation - to avoid invalid files

The COLLADA project began 4 years ago with the goal of liberating content by creating a file format that would not be tied to a tool vendor. The adoption of the format would then allow users to build a better toolchain, while at the same time creating synergies between the different industries.

The current version of the COLLADA standard (from the Khronos Group) is version 1.4.0.

So, what's all this buzz about COLLADA? Rémi was quite realistic when he described the pros and cons of either contributing to COLLADA or using it. Contributing to the file format is costly, but it makes your content accessible. Adopting COLLADA forces you to deal with a new file format, but on the other hand, its openness allows more synergies with the other tools you own. And there are a lot of tools that support COLLADA right now: Photoshop CS3, NVIDIA FX Composer 2 (yeah, we got a demo of that!), many commercial 3D packages (either natively or through a plug-in), and so on.

COLLADA brings many features together:

  • multiple COLLADA files can be aggregated into one file
  • they can contain information about the model, the material, the physics or animation
  • since the file is an XML file (a skeleton is sketched after this list),
    • it can be read by a human being
    • it can be automatically validated - using XML schemas
    • it can be tracked by source control management software
    • it can be extended by software - as long as it respects the XML schema.
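
To give an idea of what such a file looks like, here is a hand-written, heavily simplified sketch of a COLLADA 1.4 document skeleton (a real exported file would fill the libraries with actual data):

    <?xml version="1.0" encoding="utf-8"?>
    <COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.0">
      <asset>
        <up_axis>Y_UP</up_axis>    <!-- human-readable, diff-friendly metadata -->
      </asset>
      <library_materials/>         <!-- material definitions -->
      <library_geometries/>        <!-- meshes -->
      <library_animations/>        <!-- animation data -->
      <library_physics_models/>    <!-- physics data -->
      <scene/>                     <!-- ties the libraries together -->
    </COLLADA>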

Before he concluded his talk, Mr. Arnaud presented the different features using videos (which are not in the slides, sorry =)). For my part, I consider COLLADA to be a major innovation - something you can't miss if you're a tool programmer somewhere on earth, or even if you're an independent game programmer.

We all want emotions in Games!
by Gilles Monteil, Animation Researcher, Ubisoft

Gilles' talk was quite impressive - and I will try to cover it extensively, as he made very good points. He gave us a detailed analysis of what emotion is about, and explored some ways that could be used to bring emotion into games.

He began by giving us a few examples, most notably King Kong, which features something quite rare these days: in order to enhance the player's immersion in the game, it has no GUI. That brings up the following question: how can movies contribute to video games?

  • narration is a key movie feature, while games tend to use narration in a very limited way. Beyond Good and Evil, for example, uses narration only as a way to give missions to the player.
  • drama control is also a key feature of movies; games don't take much advantage of this, although it's a strong vector of emotion.
  • movies are less limited in their use of 3D: the camera is used to enhance drama or narration, lighting is often more precise, and so on.

Merging the storyline into gameplay is a difficult narration technique, but it can still be achieved. Half Life 2's short in-game cinematics are a brilliant example of the effectiveness of this technique. The downside is that while player immersion is improved, it requires a lot of work to achieve this level of quality, and there is an evident lack of tools to help game designers in this area. Another example is ICO: the reality of the interaction between the two characters is a key narration feature of the game, and a good way to generate emotions in the player.

Of course, new consoles and new technologies are bringing new strategies to create emotions. The Nintendo Wii and DS create completely new experiences, and satisfy our most basic emotional needs - by allowing us to safely experiment with new things. Sony and Microsoft, by pushing the technology, allow game developers to explore new ways to create emotions, but they are plagued by a common disease: they have to climb out of the Uncanny Valley. Characters are more and more realistic, but they still don't behave like human beings. There is clearly a gap that research must try to fill - and the best way to do this is probably to use procedural techniques, such as motion synthesis.

The main difference between cinema and games is interactivity. When you watch a movie, you are a spectator; when you play a game, you are an actor. Gilles notes that this is the same difference that we get between a speech and a debate. Can we overcome this limitation? Façade (a piece of interactive cinema) tries to capture the essence of movies (you are a spectator, watching the scene as it goes) while allowing you to interact with the other character, thanks to a very strong AI engine.

As he said earlier, the camera in games (most of the time) is not used to enhance drama. Its goal is to allow the player to play the game. But still, some techniques exist that would not require much work to be adapted to games. One of them is called the K effect, named after Kuleshov - a famous Russian filmmaker. The main idea behind the effect is that whatever you give it, your brain will try to find a meaning in it. The typical example features two consecutive images: the first image shows a man with an inexpressive face. The second image shows a gun on a table. The result is that even if there is no link between these two images, the brain will see one, and will build an emotion directly from that link. This can be adapted to games as well, as Gilles proved a few minutes later.

After this panorama, Gilles gave us a detailed explanation of what creates emotions in something that games have used for years: movements.

The first point comes as an answer to this question: if we have more action, do we have more emotion? The answer is "no", because action is not emotion - just like text is not emotion. But if you bring rhythm into the equation (acceleration, deceleration, ...), you modify the perception that people get from the action, and the result is the creation of empathy: "this person looks sad", "you look angry", and so on.

Games typically have a lot of possible actions, but they don't convey any emotional response. Changing the rhythm of these actions (through transitions, for example) would change the way people perceive them.
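
As a purely illustrative sketch (mine, not from the talk), here is the same displacement driven by two different rhythms; the eased version, with its acceleration and deceleration, is the one that starts to read as intent or mood:

    #include <cstdio>

    // The same motion, two rhythms: constant speed reads as mechanical, while
    // a smoothstep curve adds acceleration and deceleration - the kind of
    // timing change that makes an action start to convey attitude.
    float linearTiming(float t)    { return t; }
    float easeInOutTiming(float t) { return t * t * (3.0f - 2.0f * t); }

    // Position of something moving from 'from' to 'to', with t in [0, 1].
    float animate(float from, float to, float t, float (*timing)(float))
    {
        return from + (to - from) * timing(t);
    }

    int main()
    {
        for (float t = 0.0f; t <= 1.0f; t += 0.25f)
            std::printf("t=%.2f  linear=%5.2f  eased=%5.2f\n",
                        t, animate(0.0f, 10.0f, t, linearTiming),
                           animate(0.0f, 10.0f, t, easeInOutTiming));
        return 0;
    }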

The second point is about space - it deals with the position and the cardinality of things. Here, some artistic rules prevail (playing with the camera, choreography, animation rules), but the main thing to note is that our body expresses something even without moving (think of Rodin's Thinker). There is a clear parallel between this talk and Ken Perlin's talk the day before.

To prove his point, Gilles showed us an experimental video that was captured using the Rainbow Six engine. The video features a character and a glowing point in space. The face of the character is inexpressive, and only rhythm and space are controlled. When the goal is to express fear, the character tries to stay as far as possible from the glowing point.

Guess what: the result is stunning. Even though the character doesn't show any emotion (no facial animation), he still seems frightened (or curious, or angry). This is a clear application of the K effect Gilles mentioned previously. It surprised me to the point that I could even see the facial expression of the character change, despite the fact that I knew it was not the case. A big Wow for this demo.

Unfortunately, Gilles didn't have time to finish his talk. He still had a bunch of slides to present, one of them being particularly important, as it dealt with the third point: what he called dynamo-rhythm. Of course, without much explanation, I can't say that I understood this point very well. I promise I will get in touch with Gilles to get more information on the subject.

Eve Online: space ships to avatars
by Torfi Frans Olafsson, Technical Producer, CCP Games

The main goal of Torfi Frans Olafsson's talk was to present the pipeline and the ideas behind the latest evolution of Eve Online - the just released Revelations expansion set.

Eve Online is an atypical game in the MMO world: all the players are centralized on a single server (a cluster of Linux machines - incidentally, this cluster is considered to be a supercomputer), and every expansion set is free and distributed to every existing player.

What interested me in that talk was not so much the game itself - we had a fairly long presentation of the game and of its new expansion set (of course, this is the reason why this coverage is a bit shorter than the other ones). What I found most interesting is the way CCP Games works to create this game, more specifically how it handles game assets - and I'm not sure I have the right to say everything I know about it.

At the core of game asset creation, there is a concept drawing. But instead of doing a very detailed drawing of the future asset (something which is quite time consuming), the concept creator does very fast mock-ups - and he does a lot of them. Since these mock-ups don't represent a huge amount of work, changing them upon request doesn't create any emotional conflict (imagine a 3D modeler who spends 2 full days on his character concept work; then his manager tells him that he has to change the clothes of the characters. Now, imagine you are the ears of the manager...). Once a final decision has been taken, the concept is refined, and the model is created.

You might think that it's not much different from what happens in other game studios, but you couldn't be more wrong. The key difference lies in the people who are chosen to create the concept art. These people are not your run-of-the-mill artists: clothes are designed by a costume maker, buildings are designed by professional architects, and so on. All in all, it adds a whole new level of realism and authenticity to the game. You won't find bikini armors in the game (well, you do, but it has been done on purpose, not because "omg grrrl"). Buildings are not built of random boxes - they have a purpose, and their architecture is tied to this purpose.

Now, all I have to say is that the result is quite good. Of course, this kind of setup might cost more money than a traditional setup (I asked Mr. Olafsson, and he didn't want to give me the real numbers), but in the end you're building more intelligence and more consistency into your game. So that's not that bad.

fmx/07 coverage, day four: the final words

Those who read my journal know that I was unable to attend the talks before 3 PM for professional reasons. Since most of the interesting talks were early in the day, I didn't have much to do (well, I still had the chance to attend some other clever talks, don't worry). So I spent some time in the expo - and while it was not great, it was still not that bad. Let the journey begin!

XNA Game Studio Express
by Bruce Dawson, Programmer, Xbox team, Microsoft

This talk was originally scheduled with Pete Isensee, but (for a reason I don't know) Bruce took his place.

The goal of this workshop was to present the XNA platform to people who knew that it existed, but who were unaware of its potential. You have to understand that most programmers here are professional game programmers, and it seems that they tend to think that the XNA platform targets only hobbyist or independent game developers. They are wrong: XNA can also be used to create quick game prototypes.

First, he presented the basics of XNA: on Windows, it's built on the .NET platform; on Xbox 360, XNA uses the .NET Compact Framework. On top of the XNA framework, you'll find XNA Game Studio - as of today, only the Express edition (which uses Visual C# 2005 Express Edition to bootstrap) exists, but a professional edition is on the way, and might be released before the end of the year. Cross-platform (Windows, Xbox 360) development is possible, since at least 95% of the project code will be shared by both environments - of course, the code has to be written in C#. And, to finish with the generalities, development for the Xbox 360 costs $99 per year.

The good thing about XNA is that it abstracts the DirectX API. As a consequence, you don't have to know the DirectX details, the programming is easier, and in the end you'll spend more time focusing on the game itself instead of digging around in the low-level world. This is made even easier by the application model defined by the XNA framework: the very first lines of code you write are already part of your game. You don't have to write a complex engine before you start.
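
To make the idea concrete, here is a language-neutral sketch of that application model (written in C++ for consistency with the other listings - actual XNA code is C#, and the class and method names here merely mirror the concept): the framework owns the main loop and calls back into the handful of methods you override.

    #include <cstdio>

    // The framework provides the skeleton: it runs the loop and calls your
    // overrides, so your first lines of code are already game code.
    class Game
    {
    public:
        virtual ~Game() {}
        virtual void Initialize() {}                 // set things up, load content
        virtual void Update(float elapsedSeconds) {} // game logic, once per frame
        virtual void Draw() {}                       // rendering, once per frame

        void Run()                                   // the "framework" main loop
        {
            Initialize();
            for (int frame = 0; frame < 3; ++frame)  // a real loop runs until exit
            {
                Update(1.0f / 60.0f);
                Draw();
            }
        }
    };

    class MyGame : public Game
    {
        void Update(float) override { std::printf("update\n"); } // your game logic
        void Draw() override        { std::printf("draw\n"); }
    };

    int main()
    {
        MyGame game;
        game.Run();
        return 0;
    }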

Let's be more technical: the graphics part of the XNA framework uses the programmable pipeline of DirectX 9, and hides many details of the API (including lost devices, which is great news). It also abstracts the platform (Windows or Xbox 360) to ease porting. The audio system is built upon XACT. There is a direct consequence to this: the sound designer is more involved in the project, as he is responsible for defining the conditions under which sounds must be played. Input is simplified too: the framework handles the Xbox 360 controllers (PC, X360), the keyboard (PC, X360) and the mouse (PC only; don't even think about using the mouse if you're building an application that is supposed to work on both platforms). And, of course, there are other parts in the framework: a unified storage API, a powerful math API, and many other things.

The most interesting part (for me) is probably what Bruce said about the future possibilities that Microsoft is considering for XNA:

  • other languages (oh yeah!)
  • support for more advanced tools, based upon the professional versions of Visual Studio .NET
  • professional games (I'll remind you that Schizoid is on the way)
  • more community content - the framework is still missing a professional-grade physics library
  • GUI and network support

And so on. Bruce stressed that the XNA team is very attentive to user requests, so those who are interested in having their needs taken into account can go to connect.microsoft.com. You know what you have to do!

The Expo floor
I had some time this day to visit the expo floor. To be honest, there is not much to see there - and more importantly, nothing to get from any of the booths, except maybe some candies from an Adobe representative (they were quite good, by the way). So I decided to take in some product presentations instead of stealing pens or bags. And some of these presentations were actually quite good.

The first one I saw was about the new Adobe CS3 suite; more precisely, I was presented with the new Photoshop CS3 Ultra Expensive Edition (I'm not sure it's really called that). This new Photoshop is able to import 3D COLLADA files and put them into a layer. The 3D object can then be moved, rotated or scaled in this layer, but the most important feature lies elsewhere: the textures that belong to this 3D object are stored in separate sub-layers, allowing you to easily modify them and see the result in real time. In the end, it's probably the most powerful texture editing tool I have ever seen.

But before you can do that, you have to create your UV map - that's where Polygonal Design's UNFOLD3D comes in. The goal of this tool is to create UV maps from a mesh. The only operation you have to do is select the edges of your mesh that will be used as edge splits in the UV map. Then you can unfold the mesh into a very low distortion map. I really urge you to watch the videos of UNFOLD3D in action, as the whole thing is incredibly easy to use, and utterly powerful. To sum up: you no longer have to waste your time creating high quality UV maps. It was probably the most amazing technology product on the expo floor (from a game development point of view, of course). The price might look a bit high (from 399 EUR), but for this price, you won't waste any more time (not to mention that even if you're not a great artist, you'll be able to do something great).

Lionhead Animation
by Nanette Kaulig, Lionhead

Nanette's talk was about game models and the work that was required to get them to the level of quality you see in Black & White 2 and in the upcoming Fable 2. She mainly presented two creatures: the cow of B&W2 and (guess what...) the dog of Fable 2.

Both creatures share something: they require a huge number of hand-tuned animations to feel real. Every creature in B&W2 required roughly 300 keyframed animations (no motion capture; you can understand that no human can convincingly pretend to be a cow...). To test all these animations - as well as the models - a special animation editor was built by the studio engineers. Ms. Kaulig gave us a short demo of this editor: to be honest, while it's a useful tool, there is nothing really revolutionary about it. You can play animations, add a floor, tweak a few variables (in B&W2, for example, the evilness of the creature, so you can see how a creature changes when it becomes more evil or more good), and so on.

What's more interesting is the work that both the animators and the programmers had to do in order to incorporate the creatures into the game.

  • handling of foot planting: this is typically done by the software, but it can have an impact on the animations. So all animations have to be tested with regard to foot planting.
  • interaction with the surroundings and with other characters: animations are keyframed, which means that animators had to take care of how animations work when (for example) a creature is fighting another creature.
  • look at: this is typically implemented as a procedural change on top of the keyframed animation (a small sketch follows this list); but not all animations support it, and some that do look very unnatural when the head is moved.
  • facial expressiveness: this is an animator's job - but it has an impact on look at, for example.
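
To give an idea of what "procedural change" means here, a look-at system typically blends an extra rotation onto the keyframed head pose; this is my own illustrative sketch, not Lionhead's code:

    #include <algorithm>

    // Nudge the keyframed head yaw toward the look-at target, clamped to a
    // joint limit, with a weight so the effect can be faded out for the
    // animations that don't support it (weight = 0 disables it).
    float proceduralLookAt(float keyframedYaw, float targetYaw,
                           float maxTurn, float weight)
    {
        float delta = targetYaw - keyframedYaw;
        delta = std::max(-maxTurn, std::min(maxTurn, delta)); // respect the joint limit
        return keyframedYaw + weight * delta;                 // blend on top of the pose
    }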

The result of this work is that many finished animations end up in the waste basket, sometimes because something goes really wrong, and sometimes because, well, the animation is just too weird. Nanette showed us the animation of one of the cow's super powers (super powers were mostly removed later in development, and replaced by simpler magic): the cow went into a milk-spitting frenzy. While quite fun to watch, the weirdness of having a cow that pees milk on its enemies was a bit off-topic.

The same kind of work is currently being done on the dog of Fable 2 - at the time of the show, 165 animations had been created for the dog, and it was still a work in progress. Nanette showed us one of the latest dog videos, and it was absolutely stunning: the dog seems so real and full of life! Then she told us that the dog is anything but realistic - the modelers brought in many human features (notably the eyes, which are a mix of human and dog eyes) to improve the emotional link with the player. But the dog remains stunning - not to mention that it's quite a big job to animate (to give another example: the tongue alone is made of 9 bones).

Not only is it a hard job to animate, but there are also a few more things to handle:

  • foot planting: most dogs have four legs, which means that foot planting is a bit more complex than with a two-legged creature. The software has been built to take that into account, and both the front legs and the back legs are foot planted when the dog is on a slope.
  • interaction with the surroundings and with other characters: the dog loves its master, and shows it. This required the development of new algorithms to make sure that when the dog jumps on you, the result looks real and natural.
  • the look at system is also very important: the dog frequently looks at its master, and this has to be simulated. Dogs also have a tendency to always look around them, search for something on the ground, and so on. They can't concentrate on a single thing for very long. Of course, the goal of the look at system is to avoid breaking animations, and - of course - some animations just don't support the dog moving its head to look at something else.
  • navigation: when a biped runs, then walks, the blending between the two animations is quite natural. When a dog runs, then walks, this blending can't be taken for granted - in fact, it looks completely unnatural. So new animations (accelerate, decelerate) had to be created to handle this problem.

Ok, that was interesting (at least, the animation videos were really impressive). I must confess that I'm not an artist, so I have a hard time deciding what is difficult and what is not. At the end of the talk, I asked some questions of a professional animator who had the good idea of being French (yes, I believe it's a pretty good idea) - and he was somewhat impressed by the quality of the work of the Lionhead animators. So I guess I should be too. And the dog is really cute (you'll find some GDC videos on YouTube).

The Future of Games
by Matthew Jeffrey, Electronic Arts

This was the last talk of the conference - and it took place right before the EA party began. I don't know if I was just not very focused on the subject or if I am just dumb, but I'm not sure I understood the real subject of the talk. Not to mention that I'm a bit skeptical about his views.

Matthew did a strong analysis of the current game market. The "future" bits of his analysis boiled down to two main points:

  • a better use of internet users as a market target. Matthew considers that most internet game websites are not monetizing as much as they could. That's probably true. I mean, I can play Xpert Eleven for free. And in the end, that's why this kind of site works.
  • a better segmentation of the market, in order to reach all the different kinds of players. Those who still think that there are only two kinds of players (hardcore and casual) are a bit wrong. It seems that there are a lot of possible categories (techno addict, asocial gamer, ...) that need to be addressed, and the better you understand them, the better you can sell your product.

What I will remember about this talk is the demographic study. It was quite well done, but unfortunately I was so focused on finding the meaning of the talk that I didn't take any notes - and I forgot the figures. Overall, it was an interesting talk, although it should have been named differently. I was expecting a talk about how games will grow and how companies should organize their content pipeline (you know, fmx/07 is about digital content); I got a talk about some other possibilities of growth for the game industry.

Well, like I said, I'm not sure I understood everything correctly!

Photos courtesy of the fmx/07 staff.
