About this blog
A development blog for the episodic VR game Masters.
Entries in this blog
It's been a while since we last posted, so it's time for a quick update. While most of the interaction and fighting in Masters will be with and against human characters, the player will also occasionally face supernatural creatures. Today we present a mixture between a pit bull and a bat.
Modeling and texturing of the creature is nearly done, and we'll apply final tweaks once we've integrated it into our environments. The nice thing about creatures and monsters is how they allow us more creative freedom than human characters. Everybody knows what a human is supposed to look like, so even small inaccuracies can destroy believability. This is obviously much less of a factor when designing a pit bull bat.
The creature is currently being rigged by our animator, and we should soon be able to show it in motion, along with other characters and some cool VR gameplay footage of Masters. Stay tuned!
During the past few months we've been doing a lot of R&D and have generated many assets, scripts, effects and other elements. Now is the time to structure and organize the project, tying together everything we have so far and getting our content production workflow ready for prime time.
With in-engine level generation, lighting and effect design next on our schedule, we're at a point where we need to dial in the look and feel of Masters. Of course we find inspiration in movies and other games. We also discuss our many ideas and draw artwork to ensure that we're all on the same page regarding the intended style.
Luckily, we're a small team, and although we initially all had slightly different thoughts about what the game should be like, we quickly found a common denominator. We consider it essential to agree on the general direction of the game at an early stage so that all team members remain motivated.
Many readers have asked us about in-game videos, and they are coming up very soon. We've got a rough VR control scheme in place, and within the next few weeks we'll be ready to post actual footage of Masters. Stay tuned!
Our character workflow series continues as we dive right into rigging and skinning. We mentioned previously that it's important to leave some room between the various geometry elements in order to avoid interpenetration. In the case of our character this proved to be important indeed, as you will see in a moment. Of course these gaps shouldn't be too big either, or else things will start looking unnatural.
We used Maya's HumanIK to quickly generate a default rig. Requirements for a character will often differ from what the default rig provides, but nonetheless it's a great starting point and can be extended or modified easily. For instance, we added fingers to the rig because we need decent hand animation for all of our spell-casting characters.
It's advisable to model the character in a standard T-pose or whichever other pose is best suited for your bone setup. Then you can select the character geometry and let HumanIK calculate the rig for you. It's important that the Definition section of the rig displays everything in green; that way you know you're good to go. If there are any issues at this stage, you will have difficulty transferring the animation to Unity later on.
With regard to skinning, things have certainly gotten a lot better over the last 15 to 20 years. Maya's Bind Skin feature does a decent job of distributing bone weights across your character's geometry. Still, decent is not perfect, so manual cleanup can't be avoided.
To identify problem areas, we like to put the character in a "Van Damme" or a "Matrix" pose. It's very unlikely that the character will ever hit such extreme angles in the game, but if we can get all deformation to look clean in those poses, it will also look clean during all other types of animation we may throw at the rig.
After putting the Paint Skin Weights tool to good use and ensuring that our villain can do the splits without spazzing out, he's ready for Unity. The coat is simulated in real time rather than controlled by the bone animation, and if you're curious what the end result of all this work looks like, we have an in-engine video coming up for you very soon.
Maybe you've previously seen some cool Nvidia demos with awesome-looking hair and wondered why hair in games never matches this quality. To put it briefly, games have a whole lot more processing to do than just character hair. Tech demos typically focus on a single feature and throw a computer's entire processing power at it.
Of course we don't have that luxury; hair is only a tiny part of what makes up our game. For performance reasons we can't reach tech demo quality, but we still want the results to look decent because hair is a key visual component of most characters.
The standard way to do hair in games is to use a number of polygonal strips and map them with hair textures. This puts much less of a strain on the game engine than letting it calculate thousands of individual hair strands.
For Masters, we have developed a procedural workflow that allows us to easily iterate on hair styles. We use SideFX Houdini to do hair modeling, processing and rendering because it gives us tremendous control and gets hair into Unity quickly.
We first model a high resolution hair style using Houdini's standard hair tools. Afterwards, our custom asset generates around 400 polygon strips that roughly follow the hair and the shape of the head. A texture baking node then renders the high resolution hair onto the polygon patches.
We can output many different types of texture maps, such as color, normals, occlusion, displacement and so on. These texture maps are used to control shader parameters inside of Unity where we tweak the final look.
The remainder of this post gets quite technical as we take a more in-depth look at what we described above. If you want to skip this section, there's a video at the end of the article!
Modeling the high resolution hair is straightforward. We generate around 2,000 guide hairs and shape them the way we want them with standard tools. The full hair style consists of 40,000 strands which are scattered around the guides. Both the guide hairs and the high resolution hairs serve as an input to our custom Houdini-to-Unity asset.
We take the guide hairs as a starting point for the realtime version, then strategically delete 80% of them so that we're left with about 400 evenly distributed hair curves. From there, we calculate reasonable normals and upvectors in order to extrude the curves "sideways" and turn them into polygonal strips. We use the head mesh as a target for our upvectors, so that the polygon strips orient themselves towards the head.
In order not to waste any UV space later, we also calculate an individual width for each strip. We do this by first assigning each of the 40,000 high resolution hairs to its nearest polygon strip. Afterwards, we calculate the widest diameter of each hair bundle, and that diameter serves as a multiplier for the final strip's width.
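The actual assignment and width calculation happen inside Houdini, but the idea can be sketched in plain Python. Everything below is a simplified stand-in (2D root positions, brute-force nearest lookup); the function names and data layout are ours, not the actual asset's:

```python
import math

def assign_hairs_to_strips(hair_roots, strip_roots):
    """For each hair root, find the index of the nearest polygon strip
    (a simplified stand-in for Houdini's nearest-point lookup)."""
    assignment = []
    for h in hair_roots:
        nearest = min(range(len(strip_roots)),
                      key=lambda i: math.dist(h, strip_roots[i]))
        assignment.append(nearest)
    return assignment

def strip_widths(hair_roots, strip_roots):
    """Width of each strip = widest extent of its assigned hair bundle."""
    assignment = assign_hairs_to_strips(hair_roots, strip_roots)
    widths = [0.0] * len(strip_roots)
    for idx in set(assignment):
        bundle = [h for h, a in zip(hair_roots, assignment) if a == idx]
        # widest pairwise distance within the bundle
        widths[idx] = max(math.dist(a, b) for a in bundle for b in bundle)
    return widths
```

In the real asset these widths then act as multipliers when the UV layout is packed, so thin bundles get correspondingly thin strips.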
In the case of our villain character, the widest strip ended up being 4.5 centimeters (1.77″) while the thinnest strip is only 0.3 centimeters (0.12″) wide. UV space optimization succeeded! The UV layout itself is generated by two standard Houdini nodes that pack the map densely.
At this point we're nearly ready to render the textures. Unfortunately, with all polygon strips positioned on the head, there's no reasonable way to render the entire texture map at once. Render rays would hit all sorts of overlapping details, and the results would be subpar.
Initially we solved the problem by rendering each polygon strip individually and merging them into a single map in post. Although we wrote scripts to facilitate this process, it proved to be too cumbersome and time consuming.
Instead, we came up with another solution. Our asset now repositions and rotates the polygon patches and their assigned high resolution hairs into a grid-like structure. This does not modify the UVs at all; it only transforms the geometry in world space. By orienting all strips in roughly the same direction, we avoid intersecting rays and can render the entire texture map in a single pass.
Together with the rest of our procedural setup, we can now modify the original hair style at any time without worrying about what happens downstream. Everything we explained is calculated automatically without any manual labor involved. When we're ready to go back into Unity, we push a button to let Houdini recook all nodes and re-render the texture maps.
Once rendering is finished, we can immediately refresh Unity and take a look at the realtime version of the newly modified hair style. Of course it's also easy to upres or downres the realtime hair style. Right now it clocks in at around 2,500 polygons total, and going higher or lower is simply a matter of changing a parameter on the asset.
Regarding the shader setup inside of Unity, we're not doing anything fancy. We currently use only two maps: a diffuse map and a flow map. The flow map describes the direction in which the individual hair strands flow within UV space, which helps produce more realistic highlights on the shaded hair.
To wrap up this excessively long article, here's a short video that showcases the realtime hair in motion and gives a quick summary of our setup process.
After some trial and error we have come to the conclusion that it's best to build most of our scene geometry using a number of building blocks, such as wall or floor pieces of a fixed size. These small entities are easy to control and can be modified at any time. When instancing the pieces, changing the base piece will update all child pieces. This way we can make significant changes to our environments even after they've been built.
Small base elements are also helpful when it comes to UV mapping, lightmap generation and collision mesh generation. They have a low memory footprint, they load rapidly and last but not least they may even allow us to build scenes semi-procedurally. Either way, putting together game environments with this technique is a quick and easy process. It's a bit like playing with Lego bricks, except we get to fully customize their look and shape.
Goran, our designated environment artist, is certainly having fun with the concept of using simple elements to build scenes that look very complex. We are always amazed at how he starts with a few blocks and ends up with these fascinating spaces. We're even more amazed when he tells us that he used only three different pieces to generate the whole thing.
That said, scene geometry is only half of the equation. Lighting, too, will play a big role in getting the atmosphere right, and Goran is already taking Unity apart to get the most out of it in terms of visual fidelity. Masters will be a glorious sight to experience in VR, that's a promise!
Last week we showed you various texturing stages of our villain. He was still missing his head then, so let's take another look.
The head and face are prominent areas of every character, therefore special attention and care must be taken when preparing UVs and painting textures. We typically spend more time on it than we do for example on a piece of clothing. However, in terms of overall workflow the head texturing process is no different from what we described previously. We start with a lowpoly mesh, sculpt it and paint it.
When it comes to faces, the camera's field of view makes quite a difference. Setting up a large field of view results in lots of perspective, which optically widens facial features and makes them more prominent. A smaller field of view, on the other hand, flattens the face. It often makes sense to match the camera settings in the sculpting / texturing application to the game engine's camera.
A clean single-island UV layout is helpful if the face is to be textured using photographs. This is how we used to do it in the past, but ever since we discovered Substance Painter we prefer to paint head textures by hand. The application provides a lot of good tools to create believable skin.
Our villain character has already moved on to the rigging and animation stage, but of course he is still missing hair. We are currently trying out various methods of getting decent hair into Unity. This is a tricky subject, and we will discuss it in more detail in a later blog post. For now, here's a quick sneak peek at a high-res hair style we prepared. It remains to be seen how much detail we can get into the actual game.
This blog entry is cross-posted from our main blog at http://masters-game.com
Typically, artificial intelligence in video games does not have a lot to do with "intelligence." The algorithms are carefully hand-tweaked to the game's scenarios and made "good enough" to provide a decent challenge to the player. The problem with this approach is that the "intelligence" must be coded somehow. Therefore, programmers try to simplify the problem, for example by giving the AI a certain number of states.
Maybe an enemy AI has a neutral state in which it only walks around. When it hears a noise, it may switch into an alert state in which it actively searches for the origin of that noise. When it sees the player, it switches into an aggressive state and attacks. None of this has anything to do with learning or intelligence. It's a fixed set of rules defined by the programmer in an attempt to give the AI human-like behavior. Some games pull this off better than others.
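The hand-coded state machine described above might look something like this in Python. The states and transitions come from the example; the class itself is just an illustration, not code from our game:

```python
from enum import Enum, auto

class AIState(Enum):
    NEUTRAL = auto()     # walks around
    ALERT = auto()       # heard a noise, searches for its origin
    AGGRESSIVE = auto()  # sees the player, attacks

class EnemyAI:
    """Minimal fixed-rules state machine: no learning involved."""
    def __init__(self):
        self.state = AIState.NEUTRAL

    def update(self, heard_noise: bool, sees_player: bool) -> str:
        # Seeing the player always wins; a noise only escalates from neutral.
        if sees_player:
            self.state = AIState.AGGRESSIVE
        elif heard_noise and self.state == AIState.NEUTRAL:
            self.state = AIState.ALERT
        if self.state == AIState.NEUTRAL:
            return "patrol"
        if self.state == AIState.ALERT:
            return "search for noise source"
        return "attack player"
```

Every behavior here was explicitly written by the programmer, which is exactly the limitation discussed above.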
Obviously it's our goal to end up in the "better" category, but not only that: we are actively looking into alternative ways of giving the computer-controlled characters intelligent behavior. Currently, we are considering a type of machine learning called reinforcement learning. The idea behind it is to give the AI rewards for all actions it takes. These rewards can be positive, negative or even zero. It's similar to how animals are trained, and similar even to how humans learn. A child that touches the hot coffee mug is given a negative reward by the mug ("ouch, that's hot") and will remember to be careful next time.
You can train a reinforcement learning AI by defining an appropriate reward structure. For example, you may give a +5 reward whenever the AI manages to take health points from the player and a -3 reward whenever it loses health itself. In this example we'd be teaching the AI to be aggressive. It cares more about hurting the player than it cares about maintaining its own health. We might also give a -1 reward for every second the fight goes on, to encourage the AI to win as quickly as possible.
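As a rough sketch, the reward numbers above could drive a standard tabular Q-learning update. Only the reward values (+5, -3, -1) come from the example; the state and action names, and the choice of Q-learning specifically, are illustrative assumptions:

```python
ACTIONS = ["attack", "block", "dodge", "do nothing"]

def reward(damage_dealt: int, damage_taken: int, seconds: float) -> float:
    # +5 per hit point taken from the player, -3 per hit point lost,
    # -1 per second, to encourage aggressive and quick wins
    return 5 * damage_dealt - 3 * damage_taken - 1 * seconds

def q_update(q, state, action, r, next_state, alpha=0.1, gamma=0.9):
    """Nudge Q[state][action] toward the observed reward plus the
    discounted best value achievable from the next state."""
    best_next = max(q[next_state].values()) if next_state in q else 0.0
    q.setdefault(state, {a: 0.0 for a in ACTIONS})
    q[state][action] += alpha * (r + gamma * best_next - q[state][action])
```

After enough fights, the action with the highest Q-value in each state becomes the AI's preferred move.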
In a game like ours there are many possible actions an AI can take. In the early stages of learning, the machine will be pretty clueless about what to do. Remember, we're not directly telling it through code what to do! Let's say the player casts a fireball. The clueless AI stands there and decides to do nothing. The fireball hits it, and it loses health. This gives a -3 reward for each lost hit point, and the AI will begin to understand that the state "fireball approaching" should likely not be followed by the action "do nothing."
You can see that even though we never specifically tell the AI through code what to do against a fireball, it will experiment with all the possible actions. Through trial and error it will figure out the best actions to take in any given situation and learn to become good at the game. Probably too good for a human opponent, but that's a topic for another post.
We are in the unique position of having someone with an architectural background on our team. This is a luxury few game developers have, and so it was obvious which of us three would be responsible for environment and level design.
The building complex in Episode 1 will be one of the first things players get to experience when playing the game. It's an important location, and we wanted the beginning of our storyline to be visually memorable and breathtaking from the very first moment. The idea behind the building's design was to mix heavy brutalist architecture with a more contemporary setting. We wanted to create this familiar modern space while not making it bland and boring. Masters is a game whose storyline is filled with mystery and the supernatural, so we need atmosphere, tons and tons of it.
A big consideration was to make the entire area massively diverse in terms of geometry. Sure, you can do four walls and a ceiling; it's what everyone does. But we had big visual impact in mind: complex-looking structures, things that will take your breath away when looking at them in VR. The building's central section is a big atrium surrounded by irregularly displaced floors with balconies. At the edge of the building we also have a more linear semi-open atrium that visually extends to the outside.
Thanks to the complexity of this design we now have lots of options for different vantage points and places where action will occur. Our initial target renders have moody, maybe eerie lighting. The sky is overcast, and the building is dimly lit, with an emphasis on the mysterious glowing items inside the glass cabinets. Eventually the environment will have to be populated with various assets such as furniture, but for now these images give an excellent idea of the kind of atmosphere we shoot for.
It's difficult to do something completely new when it comes to games, even VR games. Our game is certainly not the first that allows you to cast spells. Although we're not completely reinventing the wheel, we're going to spin it in unusual ways. And we're convinced that things like our engaging storyline, the multi-segment gesture system or the impactful visuals will make Masters stand out from the crowd.
Today we'll briefly touch on a few technical details regarding character modeling for games. When creating a 3D character model, there are various aspects to keep in mind before passing it on to the rigging and animation department. Of course, some of your main concerns as a character artist will always be that your model has a great design, that it looks cool and believable and so on.
However, the technical side is equally important if the character is supposed to move at some point, which is a given for most use cases. So unless you're working on a statue, you may want to take a quick look at the following pictures.
Keeping an eye on the polygon count is still relevant in today's games, so don't splurge by adding unnecessary edge cuts or subdivisions. The exception is areas of the model that will bend or stretch heavily, such as elbow or knee joints. These sections can benefit from a few extra loops to allow for good deformation. Do this and you may get some free coffee from your rigging and animation colleagues for making their lives easier.
Another way to score points with the rigging department is to be mindful of elements that are very close together, such as pieces of clothing. Painting deformation weights in these regions can be tricky, so avoid overlaps in your model and keep some space between the various elements. If possible, don't merge or fuse the elements. Instead, keep them as separate objects or give them material IDs to allow for easy selection and weight painting.
Have a nice day!
A big focus in Masters will be the player's ability to cast spells. To do so, the player draws a gesture into the air; the game recognizes the gesture and casts the appropriate spell. For gesture recognition we make use of a custom-coded vanilla neural network. This is a machine learning algorithm that allows the system to learn any number of things, in our case the shape of spell gestures.
This works by first training the network with lots of example shapes. Once the network is trained, it can then recognize player input correctly. The great thing about this is that the training phase happens during development. So the game ships with a pre-trained network that allows for spot-on gesture recognition. Also, with a system like this in place, we may allow players to define their own spell shapes later in the game.
On the technical side, there are a few considerations that had to be worked out. Some of the more obvious pitfalls are that players won't all draw their shapes in the same size or in the same position. This is easy to fix through normalization.
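Normalization of this kind can be sketched in a few lines of Python. This is a generic illustration of the idea, not our actual in-game code:

```python
def normalize_gesture(points):
    """Translate a drawn 2D gesture to the origin and scale it to unit
    size, so position and size of the drawing no longer matter."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # center on the centroid
    cx = sum(xs) / len(xs)
    cy = sum(ys) / len(ys)
    centered = [(x - cx, y - cy) for x, y in points]
    # scale so the largest extent becomes 1 (guard against a single point)
    scale = max(max(abs(x) for x, _ in centered),
                max(abs(y) for _, y in centered)) or 1.0
    return [(x / scale, y / scale) for x, y in centered]
```

After this step, a tiny square drawn in a corner and a huge square drawn in the middle of the play space produce the same input for the network.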
But in order to allow for perfectly smooth, frustration-free gameplay, we also want players to be able to draw their shapes in any direction, clockwise or counter-clockwise, and to start drawing closed shapes (such as a square) from any point (i.e. not force the player to start drawing a square from one of the corners). While some of these things were a bit tricky to implement, in the end we pulled it off without too many issues.
When we were done coding the algorithms for recognizing user gestures, we faced a big problem: gibberish input. Machine learning algorithms like the one we use are trained to recognize a given number of things. Our neural network assumes that whatever you draw is one of the spell shapes that the network was trained with.
To give an example: you may draw the number 8, and if this shape was not part of the training set of shapes, the network may end up being convinced that you just drew a circle. Because out of all the shapes the network learned, a circle may be the closest shape to the one you drew. In order to solve this, we had to implement a rejection mechanism that detects unknown shapes.
This is essentially a separate filter that's applied after the neural network outputs its prediction. We run a few tests to figure out whether the network's guess makes sense given the user input, and if it does not, we tell the player that their spell attempt failed. Getting back to the example from before, the network would tell us that the player drew a circle. We would run our filters, find out that the 8 that was drawn is too different from a circle (the closest known shape), and fail the spell.
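One simple way to build such a rejection filter is to compare the normalized input against a stored template of the shape the network predicted. This is an illustration only; the distance metric and the threshold value are assumptions, not our exact implementation:

```python
import math

def shape_distance(drawn, template):
    """Mean point-to-point distance between two resampled, normalized
    gestures of equal length (lower = more similar)."""
    return sum(math.dist(a, b) for a, b in zip(drawn, template)) / len(drawn)

def accept_prediction(drawn, predicted_template, threshold=0.25):
    """Reject the network's guess when the drawn shape is too far from
    the closest known shape; the threshold here is illustrative."""
    return shape_distance(drawn, predicted_template) <= threshold
```

With a filter like this, an 8 classified as "circle" gets rejected because its distance to the circle template exceeds the threshold, and the spell fails instead of misfiring.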
We find machine learning a fascinating concept in general, and since our villains are supposed to be very powerful, iconic characters, we may use machine learning algorithms for enemy AI, too. This would allow enemies to improve during the course of the game, adapting to the user's play style. No promises on this one though; sometimes "faking it" ends up being the better choice, and time will tell. And by the way, a bit of "faking it" can't ever be avoided, because an AI that adapts perfectly to the player would have to be dumbed down in order to be beatable at all.
So much for the technical side of our gesture recognition system. Of course, these technical aspects and inner workings of the game are not relevant to players at all. They simply want to have fun, so we must provide them with a system that's both challenging and rewarding without being frustrating. While things are not set in stone yet, our current plan is to allow players to "write" spells using a very simple "sentence" structure. The image shows you what this might look like.
Essentially, we want to reuse gestures frequently so that players can get accustomed to them and learn them in a fun way. Early on in the game, players may only be required to draw the gesture for "Fire", which is the middle square of Example I in the image. Doing so may generate a little flame in the player's hand that might be used to light up a dark environment. Later, gestures can be extended and combined in many different ways. So the fire gesture might be extended into "Fire-Push" to launch a fireball or "Fire-Push-Burst" to cause a big explosion.
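Resolving a recognized gesture sequence into a spell could be as simple as a table lookup. The sequences below come from the examples above; the lookup structure itself is purely illustrative, not our actual spell system:

```python
# Gesture sequences map to spell effects; unknown sequences fail.
SPELLS = {
    ("Fire",): "small flame",
    ("Fire", "Push"): "fireball",
    ("Fire", "Push", "Burst"): "explosion",
}

def resolve_spell(gestures):
    """Return the spell for a recognized gesture sequence, or None
    when the sequence forms no known spell."""
    return SPELLS.get(tuple(gestures))
```

A structure like this also makes it cheap to add longer combinations later without touching the recognition code.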
We will have to do a lot of playtesting to find out how much complexity we can demand of players without things becoming frustrating. We don't want to overcomplicate the system, but we do want to give players a sense of achievement when they successfully cast a powerful spell.
To start off this blog, let us give you a quick introduction to ourselves and to our game.
We are a team of three and have been working together for four years. What makes us unique is the way in which we augment each other's strengths to form a workforce that's incredibly versatile. Originally we come from the gaming, architecture and media entertainment industries. Now we're putting our combined 30 years of content production experience together to make our first joint game project a reality.
The name of this project is MASTERS, and it is an ambitious story-based VR game that sets out to deliver a great gaming experience coupled with high production values and an exciting storyline. We don't want to give away too many plot details, but as the name suggests, during the course of the game players will encounter various members of a group generally referred to as the Masters. Oftentimes these encounters will be hostile, and players must defend themselves against their powerful foes.
While fighting is an important aspect of MASTERS, it is not a pure action game. Rather, we let players explore their environments, move the storyline forward and immerse themselves in the world we create. When fights occur, one way to engage in combat is by using a system that lets players draw various gestures into the air in order to cast a spell.
Speaking of "spells," we will likely give them a different name once we have the story fleshed out entirely. Spells, magic and other well-known elements from fantasy fiction and media will be in the game, but stylistically we are going for contemporary fantasy. We won't have witches, wizards, orcs or fairies. Our world is modern; it has cars and skyscrapers and phones. And it has mystical elements, supernatural beings, powers that transcend the laws of nature.
In this blog we will continually post progress updates for our game. Sometimes we'll write about general topics, sometimes we'll go into technical details about game development. Hopefully, we can find a good mix of in-depth tech info and broader subjects interesting to non-developers. Due to the immense amount of work required to complete this game, we are planning an episodic release schedule, with the first episode releasing in late 2017.