Unity 3.x: Enter the Third Dimension

Published January 27, 2012 by Will Goldstone, posted by GameDev.net
Before getting started with any 3D package, it is crucial to understand the environment you'll be working in.

As such, in this article by Will Goldstone, author of Unity 3.x Game Development Essentials, we'll make sure you're prepared by looking at some important 3D concepts before moving on to the core concepts and interface of Unity itself. You will learn about:

  • Coordinates and vectors
  • 3D shapes
  • Materials and textures
  • Rigidbody dynamics
  • Collision detection
  • GameObjects and Components
  • Assets and Scenes
  • Prefabs
  • Unity editor interface

Getting to grips with 3D


Let's take a look at the crucial elements of 3D worlds, and how Unity lets you develop games in three dimensions.

Coordinates


If you have worked with any 3D application before, you'll likely be familiar with the concept of the Z-axis. The Z-axis, in addition to the existing X for horizontal and Y for vertical, represents depth. In 3D applications, you'll see information on objects laid out in X, Y, Z format--this is known as the Cartesian coordinate method. Dimensions, rotational values, and positions in the 3D world can all be described in this way. As in other 3D documentation, you'll see such information written in parentheses, shown as follows:

(3, 5, 3)

This is mostly for neatness, and also because in programming these values must be written in this way. Regardless of their presentation, you can assume that any set of three values separated by commas will be in X, Y, Z order.

In the following image, a cube is shown at location (3,5,3) in the 3D world, meaning it is 3 units from 0 in the X-axis, 5 up in the Y-axis, and 3 forward in the Z-axis:

1444_01_14.png
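In Unity scripting, a position like this is expressed as a Vector3. As a minimal sketch (the class name here is purely illustrative), a script attached to the cube could place it at (3, 5, 3) like this:

using UnityEngine;

public class PlaceCube : MonoBehaviour
{
    void Start()
    {
        // X, Y, Z order: 3 across, 5 up, and 3 forward from world zero.
        transform.position = new Vector3(3f, 5f, 3f);
    }
}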



Local space versus world space


A crucial concept to begin looking at is the difference between local space and world space. In any 3D package, the world you will work in is technically infinite, and it can be difficult to keep track of the location of objects within it. In every 3D world, there is a point of origin, often referred to as the 'origin' or 'world zero', as it is represented by the position (0,0,0).

All world positions of objects in 3D are relative to world zero. However, to make things simpler, we also use local space (also known as object space) to define object positions in relation to one another. These relationships are known as parent-child relationships. In Unity, parent-child relationships can be established easily by dragging one object onto another in the Hierarchy. This causes the dragged object to become a child, and its coordinates are from then on read relative to the parent object. For example, if the child object is at exactly the same world position as the parent object, its position is said to be (0, 0, 0), even if the parent is not at world zero.

Local space assumes that every object has its own zero point, which is the point from which its axes emerge. This is usually the center of the object, and by creating relationships between objects, we can compare their positions in relation to one another. Such parent-child relationships mean that we can calculate distances from other objects using local space, with the parent object's position becoming the new zero point for any of its child objects.

This is especially important to bear in mind when working on art assets in 3D modeling tools, as you should always ensure that your models are created at (0, 0, 0) in the package that you are using. This ensures that when they are imported into Unity, their axes are read correctly.

We can illustrate this in 2D, as the same conventions will apply to 3D. In the following example:

1444_01_09.png


  • The first diagram (i) shows two objects in world space. A large cube exists at coordinates (3, 3), and a smaller one at coordinates (6, 7).
  • In the second diagram (ii), the smaller cube has been made a child object of the larger cube. As such, the smaller cube's coordinates are said to be (3, 4), because its zero point is the world position of the parent--the sketch after this list reproduces the same setup in code.
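As a rough sketch of the same idea in Unity scripting (the class name and values are purely illustrative), parenting one object to another changes how its position is reported:

using UnityEngine;

public class ParentingExample : MonoBehaviour
{
    void Start()
    {
        // Two cubes placed in world space, mirroring the 2D diagram.
        GameObject largeCube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        GameObject smallCube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        largeCube.transform.position = new Vector3(3f, 3f, 0f);
        smallCube.transform.position = new Vector3(6f, 7f, 0f);

        // Make the small cube a child of the large cube.
        smallCube.transform.parent = largeCube.transform;

        Debug.Log(smallCube.transform.position);      // world space: (6, 7, 0)
        Debug.Log(smallCube.transform.localPosition); // local space: (3, 4, 0)
    }
}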

Vectors


You'll also see 3D vectors described in Cartesian coordinates. Like their 2D counterparts, 3D vectors are simply lines drawn in the 3D world that have a direction and a length. Vectors can be moved in world space, but remain unchanged themselves. Vectors are useful in a game engine context, as they allow us to calculate distances, relative angles between objects, and the direction of objects.
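For example, a sketch of some common vector operations in a Unity script might look like the following (the 'target' reference is an assumed object assigned in the Inspector):

using UnityEngine;

public class VectorExample : MonoBehaviour
{
    public Transform target; // assumed: another object assigned in the Inspector

    void Update()
    {
        // The vector pointing from this object to the target.
        Vector3 toTarget = target.position - transform.position;

        float distance = toTarget.magnitude;                       // length of the vector
        Vector3 direction = toTarget.normalized;                   // same direction, length 1
        float angle = Vector3.Angle(transform.forward, toTarget);  // relative angle in degrees

        Debug.Log(distance + " " + direction + " " + angle);
    }
}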

Cameras


Cameras are essential in the 3D world, as they act as the viewport for the screen.

Cameras can be placed at any point in the world, animated, or attached to characters or objects as part of a game scenario. Many cameras can exist in a particular scene, but it is assumed that a single main camera will always render what the player sees. This is why Unity gives you a Main Camera object whenever you create a new scene.

Projection mode--3D versus 2D


The Projection mode of a camera states whether it renders in 3D (Perspective) or 2D (Orthographic). Ordinarily, cameras are set to Perspective Projection mode--the default for a camera in Unity--and as such have a pyramid-shaped Field of View (FOV). Cameras can also be set to Orthographic Projection mode in order to render in 2D; these have a rectangular field of view. An Orthographic camera can be used as the main camera to create completely 2D games, or as a secondary camera that renders Heads Up Display (HUD) elements such as a map or health bar.
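As a minimal sketch, the Projection mode can also be set from a script (the property names are real Camera settings, but the values and the hudCamera reference are assumptions for illustration):

using UnityEngine;

public class ProjectionSetup : MonoBehaviour
{
    public Camera hudCamera; // assumed: a secondary camera assigned in the Inspector

    void Start()
    {
        hudCamera.orthographic = true;     // 2D (Orthographic) projection
        hudCamera.orthographicSize = 5f;   // half of the vertical view size, in world units

        Camera.main.orthographic = false;  // 3D (Perspective) projection
        Camera.main.fieldOfView = 60f;     // vertical field of view, in degrees
    }
}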

In game engines, you'll notice that effects such as lighting and motion blur are applied to the camera to help simulate a person's eye view of the world--you can even add a few cinematic effects that the human eye will never experience, such as lens flares when looking at the sun!

Most modern 3D games utilize multiple cameras to show parts of the game world that the character camera is not currently looking at--like a 'cutaway' in cinematic terms. Unity does this with ease by allowing many cameras in a single scene, which can be scripted to act as the main camera at any point during runtime. Multiple cameras can also be used in a game to control the rendering of particular 2D and 3D elements separately as part of the optimization process. For example, objects may be grouped in layers, and cameras may be assigned to render objects in particular layers. This gives us more control over individual renders of certain elements in the game.
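For instance, assigning a camera to render only certain layers might look like the following sketch (the 'HUD' layer name is an assumption--layers are defined per project in the Tags and Layers settings):

using UnityEngine;

public class HudCameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera hudCamera = GetComponent<Camera>();

        // Restrict this camera to objects on the assumed "HUD" layer.
        hudCamera.cullingMask = 1 << LayerMask.NameToLayer("HUD");

        // Draw after (on top of) cameras with a lower depth value.
        hudCamera.depth = 1f;
    }
}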

Polygons, edges, vertices, and meshes


In constructing 3D shapes, all objects are ultimately made up of interconnected 2D shapes known as polygons. On importing models from a modeling application, Unity converts all polygons to polygon triangles. By combining many linked polygons, 3D modeling applications allow us to build complex shapes, known as meshes. Polygon triangles (also referred to as faces) are in turn made up of three connected edges. The locations at which these edges meet are known as points or vertices.

1444_01_05.png
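Mesh data can be read directly in a script. A minimal sketch of inspecting a mesh's vertices and triangles (the class name is illustrative; the object is assumed to have a MeshFilter) could look like this:

using UnityEngine;

public class MeshInfo : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        Debug.Log("Vertices: " + mesh.vertexCount);
        Debug.Log("Triangles: " + (mesh.triangles.Length / 3)); // three vertex indices per triangle
    }
}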



By knowing these locations, game engines are able to calculate points of impact, known as collisions, when using complex collision detection with Mesh Colliders--for example, in shooting games, to detect the exact location at which a bullet has hit another object. In addition to building 3D shapes that are rendered visibly, mesh data can have many other uses. For example, it can be used to specify a shape for collision that is less detailed than a visible object, but roughly the same shape. This can help save performance, as the physics engine needn't check a mesh in detail for collisions. This is seen in the following image from the Unity car tutorial, where the vehicle itself is more detailed than its collision mesh:

1444_01_10.png



In the second image, you can see that the amount of detail in the mesh used for the collider is far less than the visible mesh itself:

1444_01_11.png
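In scripting terms, assigning a simpler mesh to a Mesh Collider could be sketched as follows (the 'collisionMesh' reference is an assumed low-detail mesh assigned in the Inspector):

using UnityEngine;

public class SimplifiedCollider : MonoBehaviour
{
    public Mesh collisionMesh; // assumed: a low-detail version of the visible mesh

    void Start()
    {
        MeshCollider meshCollider = gameObject.AddComponent<MeshCollider>();
        meshCollider.sharedMesh = collisionMesh; // physics checks the simpler shape, not the rendered one
    }
}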



In game projects, it is crucial for the developer to understand the importance of the polygon count. The polygon count is the total number of polygons, often in reference to models, but also in reference to props, or an entire game level (or in Unity terms, 'Scene'). The higher the number of polygons, the more work your computer must do to render the objects onscreen. This is why we've seen an increase in the level of detail from early 3D games to those of today--simply compare the visual detail in id's Quake (1996) with that of Epic's Gears of War (2006), just a decade later. As a result of faster technology, game developers are now able to model 3D characters and worlds with a much higher polygon count and a resultant higher level of realism, and this trend will inevitably continue in the years to come.

This said, as more platforms emerge, such as mobile and online, games previously seen on dedicated consoles can now be played in a web browser thanks to Unity. As such, hardware constraints are as important now as ever, as lower-powered devices such as mobile phones and tablets are also expected to run 3D games. For this reason, when modeling any object to add to your game, you should consider polygonal detail, and where it is most required.

Materials, textures, and shaders


Materials are a common concept to all 3D applications, as they provide the means to set the visual appearance of a 3D model. From basic colors to reflective image-based surfaces, materials handle everything.

Let's start with a simple color and the option of using one or more images, known as textures. Within a single material, the material works with a shader, which is a script in charge of the style of rendering. For example, with a reflective shader, the material will render reflections of surrounding objects, but maintain its color or the look of the image applied as its texture.

In Unity, the use of materials is easy. Any materials created in your 3D modeling package will be imported and recreated automatically by the engine as reusable assets. You can also create your own materials from scratch, assigning images as textures and selecting a shader from a large built-in library. You may also write your own shader scripts, or copy and paste those written by fellow developers in the Unity community, giving you freedom to expand beyond the included set.
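As a rough sketch, creating and assigning a material from a script might look like the following (the 'Diffuse' shader name and the texture reference are assumptions for illustration):

using UnityEngine;

public class MaterialExample : MonoBehaviour
{
    public Texture2D crateTexture; // assumed: an image asset assigned in the Inspector

    void Start()
    {
        // Build a material around a built-in shader (shader name assumed).
        Material mat = new Material(Shader.Find("Diffuse"));
        mat.color = Color.white;
        mat.mainTexture = crateTexture;

        // Apply it to this object's renderer.
        GetComponent<Renderer>().material = mat;
    }
}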

When creating textures for a game in a graphics package such as Photoshop or GIMP, you must be aware of the resolution. Larger textures give you the chance to add more detail to your textured models, but are more intensive to render. Game textures imported into Unity will be scaled to a power-of-two resolution. For example:

  • 64px x 64px
  • 128px x 128px
  • 256px x 256px
  • 512px x 512px
  • 1024px x 1024px

Creating textures of these sizes with content that matches at the edges will mean that they can be tiled successfully by Unity. You may also use textures scaled to values that are not powers of two, but mostly these are used for GUI elements.




Rigidbody physics


For developers working with game engines, physics engines provide an accompanying way of simulating real-world responses for objects in games. Unity uses Nvidia's PhysX engine, a popular and highly accurate commercial physics engine.

In game engines, there is no assumption that an object should be affected by physics--firstly because it requires a lot of processing power, and secondly because there is often simply no need. For example, in a 3D driving game, it makes sense for the cars to be under the influence of the physics engine, but not the track or surrounding objects, such as trees, walls, and so on--they will remain static for the duration of the game. For this reason, when making games in Unity, a Rigidbody physics component is given to any object that you wish to be under the control of the physics engine--and ideally to any moving object, so that the physics engine is aware of it, which saves on performance.

Physics engines for games use the Rigidbody dynamics system of creating realistic motion. This simply means that instead of objects being static in the 3D world, they can have properties such as mass, gravity, velocity, and friction.

As the power of hardware and software increases, Rigidbody physics is becoming more widely applied in games, as it offers the potential for more varied and realistic simulation. We'll be utilizing rigid body dynamics as part of our prototype in this article.
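A brief sketch of what this looks like in practice (the class name, mass, and force values are assumptions for illustration):

using UnityEngine;

public class CannonBall : MonoBehaviour
{
    void Start()
    {
        // Hand this object over to the physics engine.
        Rigidbody body = gameObject.AddComponent<Rigidbody>();
        body.mass = 2f;          // heavier objects need more force to move
        body.useGravity = true;  // let gravity pull the object downwards

        // Apply a one-off push in the forward direction.
        body.AddForce(Vector3.forward * 500f);
    }
}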

Collision detection


More crucial in game engines than in 3D animation, collision detection is the way we analyze our 3D world for inter-object collisions. By giving an object a Collider component, we are effectively placing an invisible net around it. This net usually mimics its shape and is in charge of reporting any collisions with other colliders, making the game engine respond accordingly.

There are two main types of Collider in Unity--Primitives and Meshes. Primitive shapes in 3D terms are simple geometric objects such as Boxes, Spheres, and Capsules. Therefore, a primitive collider such as a Box collider in Unity has that shape, regardless of the visual shape of the 3D object it is applied to. Often, Primitive colliders are used because they are computationally cheaper or because there is no need for precision. A Mesh collider is more expensive, as it can be based upon the shape of the 3D mesh it is applied to; therefore, the more complex the mesh, the more detailed and precise the collider will be, and the more computationally expensive it will become. However, as shown in the Car tutorial example earlier, it is possible to assign a simpler mesh than that which is rendered, in order to create simpler and more efficient mesh colliders.

The following diagram illustrates the various types and subtypes of collider:


1444_01_15.png



For example, in a ten-pin bowling game, a simple Sphere collider will surround the ball, while the pins themselves will have either a simple Capsule collider, or for a more realistic collision, employ a Mesh collider, as this will be shaped the same as the 3D mesh of the pin. On impact, the colliders of any affected objects will report to the physics engine, which will dictate their reaction, based on the direction of impact, speed, and other factors.

In this example, employing a Mesh collider to fit exactly to the shape of the pin model would be more accurate but is more expensive in processing terms. This simply means that it demands more processing power from the computer, the cost of which is reflected in slower performance, and hence the term expensive.
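In scripting terms, a collider's reports arrive through callbacks such as OnCollisionEnter. The following sketch shows the idea for the bowling example (the 'Pin' tag is an assumption--tags are defined per project):

using UnityEngine;

public class BallCollisions : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        // Only react when the other collider belongs to a pin (assumed tag).
        if (collision.gameObject.CompareTag("Pin"))
        {
            // The first contact point reported for this collision.
            Debug.Log("Hit a pin at " + collision.contacts[0].point);
        }
    }
}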

Essential Unity concepts


Unity makes the game production process simple by giving you a set of logical steps to build any conceivable game scenario. Renowned for being non-game-type specific, Unity offers you a blank canvas and a set of consistent procedures to let your imagination be the limit of your creativity. Through its use of the GameObject concept, you are able to break down parts of your game into easily manageable objects, which are made of many individual Component parts. By making individual objects within the game and introducing functionality to them with each component you add, you are able to expand your game infinitely in a logical, progressive manner.

Component parts in turn have Variables--essentially properties of the component, or settings to control them with. By adjusting these variables, you'll have complete control over the effect that Component has on your object. The following diagram illustrates this:

1444_01_06.png



In the following image we can see a Game Object with a Light Component, as seen in the Unity interface:

1444_01_12.png



Now let's look at how this approach would be used in a simple gameplay context.

The Unity way--an example


If we wished to have a bouncing ball as part of a game, then we would begin with a sphere. This can quickly be created from the Unity menus, and will give you a new GameObject with a Sphere mesh (the 3D shape itself). Unity will automatically add a Renderer component to make it visible. Having created this, we can then add a Rigidbody component. A Rigidbody (Unity writes most two-word terms as a single word) is a component that tells Unity to apply its physics engine to an object. With this come properties such as mass, gravity, and drag, and also the ability to apply forces to the object, either when the player commands it or simply when it collides with another object.

Our sphere will now fall to the ground when the game runs, but how do we make it bounce? This is simple! The collider component has a variable called Physic Material--this is a setting for the physics engine, defining how it will react to other objects' surfaces. Here we can select Bouncy, a ready-made Physic Material provided by Unity as part of an importable package, and voila! Our bouncing ball is complete in only a few clicks.
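For comparison, the same setup could be sketched in a script rather than through the editor (this is only an illustrative equivalent--the class name is invented, and loading the Bouncy material from a Resources folder is an assumption about where it has been placed):

using UnityEngine;

public class BouncingBall : MonoBehaviour
{
    void Start()
    {
        // Create the sphere and place it above the ground.
        GameObject ball = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        ball.transform.position = new Vector3(0f, 5f, 0f);

        // Hand the ball over to the physics engine.
        ball.AddComponent<Rigidbody>();

        // Assign the ready-made Bouncy physic material (assumed to live in a Resources folder).
        PhysicMaterial bouncy = (PhysicMaterial)Resources.Load("Bouncy", typeof(PhysicMaterial));
        ball.GetComponent<Collider>().material = bouncy;
    }
}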

This streamlined approach for the most basic of tasks, such as the previous example, seems pedestrian at first. However, you'll soon find that by applying this approach to more complex tasks, they become very simple to achieve. Here is an overview of some further key Unity concepts you'll need to know as you get started.

Assets


These are the building blocks of all Unity projects. From textures in the form of image files, through 3D models for meshes, and sound files for effects, Unity refers to the files you'll use to create your game as assets. This is why in any Unity project folder all files used are stored in a child folder named Assets. This Assets folder is mirrored in the Project panel of the Unity interface; see The interface section in this article.

Scenes


In Unity, you should think of scenes as individual levels, or areas of game content--though some developers create entire games in a single scene, such as puzzle games, by dynamically loading content through code. By constructing your game with many scenes, you'll be able to distribute loading times and test different parts of your game individually. New scenes are also often used separately from the game scene you may be working on, in order to prototype or test a piece of potential gameplay.

Any currently open scene is what you are working on, as no two scenes can be worked on simultaneously. Scenes can be manipulated and constructed by using the Hierarchy and Scene views.

GameObjects


Any active object in the currently open scene is called a GameObject. Certain assets taken from the Project panel such as models and prefabs become game objects when placed (or 'instantiated') into the current scene. Other objects such as particle systems and primitives can be placed into the scene by using the Create button on the Hierarchy or by using the GameObject menu at the top of the interface. All GameObjects contain at least one component to begin with, that is, the Transform component. Transform simply tells the Unity engine the position, rotation, and scale of an object--all described in X, Y, Z coordinate (or in the case of scale, dimensional) order. In turn, the component can then be addressed in scripting in order to set an object's position, rotation, or scale. From this initial component, you will build upon GameObjects with further components, adding required functionality to build every part of any game scenario you can imagine.
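A short sketch of addressing the Transform component from a script (the values here are purely illustrative):

using UnityEngine;

public class TransformExample : MonoBehaviour
{
    void Start()
    {
        transform.position   = new Vector3(0f, 2f, 0f);       // position in X, Y, Z
        transform.rotation   = Quaternion.Euler(0f, 90f, 0f); // rotation built from X, Y, Z angles
        transform.localScale = new Vector3(2f, 2f, 2f);       // scale (dimensions) in X, Y, Z
    }
}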

In the following image, you can see the most basic form of a Game Object, as shown in the Inspector panel:

1444_01_13.png



GameObjects can also be nested in the Hierarchy, in order to create the parent-child relationships mentioned previously.

Components


Components come in various forms. They can be for creating behavior, defining appearance, and influencing other aspects of an object's function in the game. By attaching components to an object, you can immediately apply new parts of the game engine to your object. Common components of game production come built-in with Unity, such as the Rigidbody component mentioned earlier, down to simpler elements such as lights, cameras, particle emitters, and more. To build further interactive elements of the game, you'll write scripts, which are also treated as components in Unity. Try to think of a script as something that extends or modifies the existing functionality available in Unity or creates behavior with the Unity scripting classes provided.



Scripts


While being considered by Unity to be components, scripts are an essential part of game production, and deserve a mention as a key concept. We will write our scripts in both C Sharp (more often written as 'C#') and JavaScript. You should also be aware that Unity offers you the opportunity to write in Boo (a derivative of the Python language). We have chosen to focus primarily on C# and JavaScript, as these are the two main languages used by Unity developers, and Boo is not supported for scripting on mobile devices; for this reason, it is not advised to begin learning Unity scripting with Boo.

Unity does not require you to learn how the coding of its own engine works or how to modify it, but you will be utilizing scripting in almost every game scenario you develop. The beauty of Unity scripting is that any script you write for your game will be straightforward enough after a few examples, as Unity has its own built-in behaviour class called MonoBehaviour--a set of scripting instructions for you to call upon. For many new developers, getting to grips with scripting can be a daunting prospect, one that threatens to put off new Unity users who are more accustomed to design. If this is your first attempt at getting into game development, or you have no experience in writing code, do not worry. We will introduce scripting one step at a time, with a mind to showing you not only the importance, but also the power of effective scripting for your Unity games.
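To give a flavour of what this looks like, here is a minimal C# script sketch built on MonoBehaviour (the class name and rotation speed are invented for illustration):

using UnityEngine;

public class Spinner : MonoBehaviour
{
    public float degreesPerSecond = 90f; // a variable exposed in the Inspector

    // Update is called by Unity once per frame.
    void Update()
    {
        // Rotate around the Y-axis, scaled by frame time so the speed is consistent.
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
    }
}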

To write scripts, you'll use Unity's standalone script editor, MonoDevelop. This separate application can be found in the Unity application folder on your PC or Mac, and will be launched any time you edit a new script or an existing one. Amending and saving scripts in the script editor will update them in Unity as soon as you switch back to it. You may also designate your own script editor, such as Visual Studio, in the Unity preferences if you wish. MonoDevelop is recommended, however, as it offers auto-completion of code as you type and is natively developed and updated by Unity Technologies.

Prefabs


Unity's development approach hinges on the GameObject concept, but it also has a clever way to store objects as assets to be reused in different parts of your game, and then instantiated (also known as 'spawned' or 'cloned') at any time. By creating complex objects with various components and settings, you'll effectively be building a template for something you may want to spawn multiple instances of (hence 'instantiate'), with each instance then being individually modifiable.

Consider a crate as an example--you may have given the object in the game a mass, and written scripted behaviors for its destruction, and chances are you'll want to use this object more than once in a game, and perhaps even in games other than the one it was designed for.

Prefabs allow you to store the object, complete with its components and current configuration. Comparable to the MovieClip concept in Adobe Flash, prefabs can be thought of simply as empty containers that you fill with objects to form a data template you'll likely recycle.
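A brief sketch of spawning copies of a prefab from a script (the 'cratePrefab' reference is an assumed prefab asset assigned in the Inspector):

using UnityEngine;

public class CrateSpawner : MonoBehaviour
{
    public GameObject cratePrefab; // assumed: a prefab asset assigned in the Inspector

    void Start()
    {
        for (int i = 0; i < 3; i++)
        {
            // Each clone is an independent GameObject that can be modified separately.
            Instantiate(cratePrefab, new Vector3(i * 2f, 1f, 0f), Quaternion.identity);
        }
    }
}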

Summary


In this article, we looked at the core concepts of working in 3D and how game development works in Unity. With these fundamentals covered, you are ready to learn about the core windows that make up the Unity editor interface.