64 entries · 46 comments · 80,410 views

Entries in this blog

ericrrichards22
Typically, in a strategy game, in addition to the triangle mesh that we use to draw the terrain, there is an underlying logical representation, usually dividing the terrain into rectangular or hexagonal tiles. This grid is generally what is used to order units around, construct buildings, select targets and so forth. To do all this, we need to be able to select locations on the terrain using the mouse, so we will need to implement terrain/mouse-ray picking for our terrain, similar to what we have done previously, with model triangle picking.

We cannot simply use the same techniques that we used earlier for our terrain, however. For one, in our previous example, we were using a brute-force linear search to find the picked triangle out of all the triangles in the mesh. That worked in that case; however, the mesh we were trying to pick contained only 1,850 triangles. I have been using a terrain in these examples that, when fully tessellated, is 2049x2049 vertices, which means that our terrain consists of more than 8 million triangles. It's pretty unlikely that we could manage to use the same brute-force technique with that many triangles, so we need to use some kind of space-partitioning data structure to reduce the portion of the terrain that we need to consider for intersection.

Additionally, we cannot really perform a per-triangle intersection test in any case, since our terrain uses a dynamic LOD system. The triangles of the terrain mesh are only generated on the GPU, in the hull shader, so we don't have access to the terrain mesh triangles on the CPU, where we will be doing our picking. Because of these two constraints, I have decided to use a quadtree of axis-aligned bounding boxes to implement picking on the terrain. Using a quadtree speeds up our intersection testing considerably, since most of the time we will be able to exclude three-fourths of our terrain from further consideration at each level of the tree. This also maps quite nicely to the concept of a grid layout for representing our terrain, and allows us to select individual terrain tiles fairly efficiently, since the bounding boxes at the terminal leaves of the tree will thus encompass a single logical terrain tile. In the screenshot below, you can see how this works; the boxes drawn in color over the terrain are at double the size of the logical terrain tiles, since I ran out of video memory drawing the terminal bounding boxes, but you can see that the red ball is located in the upper quadrant of the white bounding box containing it.
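
To give a rough idea of the approach, a recursive test over the tree might look something like the sketch below. The QuadTreeNode type and its members are purely illustrative, not the actual classes from the repository, and SlimDX's Ray/BoundingBox types with the static Ray.Intersects helper are assumed:

```csharp
using System.Collections.Generic;
using SlimDX;

// Illustrative sketch of quadtree-based terrain picking: each node holds an axis-aligned
// bounding box; a missed box prunes that quarter of the terrain, and a hit at a terminal
// leaf identifies a single logical terrain tile. Names are hypothetical.
public class QuadTreeNode {
    public BoundingBox Bounds;       // AABB enclosing this node's patch of terrain
    public QuadTreeNode[] Children;  // four children, or null at a terminal leaf

    public void Intersects(Ray ray, List<QuadTreeNode> hits) {
        float dist;
        if (!Ray.Intersects(ray, Bounds, out dist)) {
            return;                  // exclude this entire subtree from further testing
        }
        if (Children == null) {
            hits.Add(this);          // terminal leaf: a candidate terrain tile
            return;
        }
        foreach (var child in Children) {
            child.Intersects(ray, hits);
        }
    }
}
```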

[screenshot]

Read More...
ericrrichards22
Minimaps are a common feature of many different types of games, especially those in which the game world is larger than the area the player can see on screen at once. Generally, a minimap allows the player to keep track of where they are in the larger game world, and in many games, particularly strategy and simulation games where the view camera is not tied to any specific player character, it allows the player to move their viewing location more quickly than by using the direct camera controls. Often, a minimap will also provide a high-level view of unit movement, building locations, fog-of-war and other game-specific information.

Today, we will look at implementing a minimap that will show us a bird's-eye view of our Terrain class. We'll also superimpose the frustum of our main rendering camera over the terrain, so that we can easily see how much of the terrain is in view, and we'll support moving our viewpoint by clicking on the minimap. All of this functionality will be wrapped up into a class, so that we can render multiple minimaps and place them wherever we like within our application window.

As always, the full code for this example can be downloaded from GitHub, at https://github.com/ericrrichards/dx11.git. The relevant project is the Minimap project. The implementation of this minimap code was largely inspired by Chapter 11 of Carl Granberg's Programming an RTS Game with Direct3D, particularly the camera frustum drawing code. If you can find a copy (it appears to be out of print, and copies are going for outrageous prices on Amazon...), I would highly recommend grabbing it.
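
As a taste of how the click-to-move piece can work, the sketch below maps a mouse click inside the minimap's screen rectangle back to a world-space position on the terrain. The helper, its parameters, and the top-down mapping are illustrative assumptions rather than the actual Minimap class:

```csharp
using System.Drawing;
using SlimDX;

// Hypothetical helper: convert a mouse click inside the minimap's screen rectangle
// into a world-space position on the terrain, so the main camera can be moved there.
public static class MinimapPicking {
    public static Vector3? ClickToWorld(Point mouse, Rectangle minimapRect, float terrainWidth, float terrainDepth) {
        if (!minimapRect.Contains(mouse)) return null;

        // Normalize the click to [0,1] across the minimap image
        var u = (mouse.X - minimapRect.Left) / (float)minimapRect.Width;
        var v = (mouse.Y - minimapRect.Top) / (float)minimapRect.Height;

        // The minimap is a top-down view centered on the terrain origin,
        // so remap to world-space x/z; y would be sampled from the heightmap.
        var x = (u - 0.5f) * terrainWidth;
        var z = (0.5f - v) * terrainDepth;
        return new Vector3(x, 0, z);
    }
}
```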

Read More...
ericrrichards22
Now that I'm finished up with everything that I wanted to cover from Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, I want to spend some time improving the Terrain class that we introduced earlier. My ultimate goal is to create a two-tiered strategy game, with a turn-based strategic level and either a turn-based or real-time tactical level. My favorite games have always been these kinds of strategic/tactical hybrids, such as (in roughly chronological order) Centurion: Defender of Rome, Lords of the Realm, Close Combat and the Total War series. In all of these games, the tactical combat is one of the main highlights of gameplay, and so the terrain that combat occurs upon is very important, both aesthetically and for gameplay.

Our first step will be to incorporate some of the graphical improvements that we have recently implemented into our terrain rendering. We will be adding shadow-mapping and SSAO support to the terrain in this installment. In the screenshots below, we have our light source (the sun) low on the horizon behind the mountain range. The first shot shows our current Terrain rendering result, with no shadows or ambient occlusion. In the second, shadows have been added, which, in addition to just showing shadows, has dulled down a lot of the odd-looking highlights in the first shot. The final shot shows both shadow-mapping and ambient occlusion applied to the terrain. The ambient occlusion adds a little more detail to the scene; regardless of its accuracy, I kind of like the effect, just to noise up the textures applied to the terrain, although I may tweak it to lighten up the darker spots a bit.

We are going to need to add another set of effect techniques to our shader effect to support shadow mapping, as well as a technique to draw to the shadow map, and another technique to draw the normal/depth map for SSAO. For the latter two techniques, we will need to implement a new hull shader, since I would like to have the shadow maps and normal-depth maps match the fully-tessellated geometry; using the normal hull shader that dynamically tessellates may result in shadows that change shape as you move around the map. For the normal/depth technique, we will also need to implement a new pixel shader. Our domain shader is also going to need to be updated, so that it creates the texture coordinates for sampling both the shadow map and the SSAO map, and our pixel shader will need to be updated to do the shadow and ambient occlusion calculations.

This sounds like a lot of work, but really, it is mostly a matter of adapting what we have already done. As always, you can download my full code for this example from GitHub at https://github.com/ericrrichards/dx11.git. This example doesn't really have a stand-alone project, as it came about as I was on my way to implementing a minimap, and thus these techniques are showcased as part of the Minimap project.

[screenshot: Basic Terrain Rendering]
[screenshot: Shadowmapping Added]
[screenshot: Shadowmapping and SSAO]

Read More...
ericrrichards22
Quite a while back, I presented an example that rendered water waves by computing a wave equation and updating a polygonal mesh each frame. This method produced fairly nice graphical results, but it was very CPU-intensive, and relied on updating a vertex buffer every frame, so it had relatively poor performance.

We can use displacement mapping to approximate the wave calculation and modify the geometry all on the GPU, which can be considerably faster. At a very high level, what we will do is render a polygon grid mesh, using two height/normal maps that we will scroll in different directions and at different rates. Then, for each vertex that we create using the tessellation stages, we will sample the two heightmaps, and add the sampled offsets to the vertex's y-coordinate. Because we are scrolling the heightmaps at different rates, small peaks and valleys will appear and disappear over time, resulting in an effect that looks like waves. Using different control parameters, we can control this wave effect, and generate either a still, calm surface, like a mountain pond at first light, or big, choppy waves, like the ocean in the midst of a tempest.

This example is based off of the final exercise of Chapter 18 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. The original code that inspired this example is not located with the other example for Chapter 18, but rather in the SelectedCodeSolutions directory. You can download my source code in full from https://github.com/ericrrichards/dx11.git, under the 29-WavesDemo project. One thing to note is that you will need to have a DirectX 11 compatible video card to execute this example, as we will be using tessellation stage shaders that are only available in DirectX 11.
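
The CPU side of the wave effect is mostly bookkeeping: each frame we advance two texture offsets at different rates and directions and upload them as shader constants for the domain shader to use. A minimal sketch, with made-up field names and scale/velocity values, might look like this:

```csharp
using SlimDX;

// Illustrative sketch of the CPU-side bookkeeping for the two scrolling wave heightmaps:
// each frame we advance two texture offsets at different speeds/directions, and the
// resulting texture transforms are uploaded to the effect as shader constants.
public class WaveMaps {
    private Vector2 _offset0, _offset1;
    // different directions and rates give the interference pattern that reads as waves
    private readonly Vector2 _velocity0 = new Vector2(0.01f, 0.03f);
    private readonly Vector2 _velocity1 = new Vector2(-0.01f, 0.03f);

    public Matrix WaveDispTransform0 { get; private set; }
    public Matrix WaveDispTransform1 { get; private set; }

    public void Update(float dt) {
        _offset0 += _velocity0 * dt;
        _offset1 += _velocity1 * dt;

        // Build texture transforms: tile the maps and translate them by the offsets.
        // The domain shader samples both maps with these coordinates and sums the heights.
        WaveDispTransform0 = Matrix.Scaling(2, 2, 1) * Matrix.Translation(_offset0.X, _offset0.Y, 0);
        WaveDispTransform1 = Matrix.Scaling(1, 1, 1) * Matrix.Translation(_offset1.X, _offset1.Y, 0);
    }
}
```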

Read More...
ericrrichards22
In real-time lighting applications, like games, we usually only calculate direct lighting, i.e. light that originates from a light source and hits an object directly. The Phong lighting model that we have been using thus far is an example of this; we only calculate the direct diffuse and specular lighting. We either ignore indirect light (light that has bounced off of other objects in the scene), or approximate it using a fixed ambient term. This is very fast to calculate, but not terribly physically accurate. Physically accurate lighting models can model these indirect light bounces, but are typically too computationally expensive to use in a real-time application, which needs to render at least 30 frames per second. However, using the ambient lighting term to approximate indirect light has some issues, as you can see in the screenshot below. This depicts our standard skull and columns scene, rendered using only ambient lighting. Because we are using a fixed ambient color, each object is rendered as a solid color, with no definition. Essentially, we are making the assumption that indirect light bounces uniformly onto all surfaces of our objects, which is often not physically accurate.

[screenshot]

Naturally, if we were actually modeling the way that light bounces within our scene, some portions of the scene would receive more indirect light than others. Some portions would receive the maximum amount of indirect light, while other portions, such as the nooks and crannies of our skull, should appear darker, since the surrounding geometry would, realistically, block many of the indirect light rays from reaching those surfaces.

In a classical global illumination scheme, we would simulate indirect light by casting rays from the object surface point in a hemispherical pattern, checking for geometry that would prevent light from reaching the point. Assuming that our models are static, this could be a viable method, provided we performed these calculations off-line; ray tracing is very expensive, since we would need to cast a large number of rays to produce an acceptable result, and each of those rays requires a large number of intersection tests. With animated models, this method very quickly becomes untenable; whenever the models in the scene move, we would need to recalculate the occlusion values, which is simply too slow to do in real-time.

Screen-Space Ambient Occlusion is a fast technique for approximating ambient occlusion, developed by Crytek for the game Crysis. We will initially draw the scene to a render target, which will contain the normal and depth information for each pixel in the scene. Then, we can sample this normal/depth surface to calculate occlusion values for each pixel, which we will save to another render target. Finally, in our usual shader effect, we can sample this occlusion map to modify the ambient term in our lighting calculation. While this method is not perfectly realistic, it is very fast, and generally produces good results. As you can see in the screen shot below, using SSAO darkens up the cavities of the skull and around the bases of the columns and spheres, providing some sense of depth.

[screenshot]

The code for this example is based on Chapter 22 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. The example presented here has been stripped down considerably to demonstrate only the SSAO effects; lighting and texturing have been disabled, and the shadow mapping effects in Luna's example have been removed. The full code for this example can be found at my GitHub repository, https://github.com/ericrrichards/dx11.git, under the SSAODemo2 project. A more faithful adaptation of Luna's example can also be found in the 28-SsaoDemo project.

Read More...
ericrrichards22
This weekend, I updated my home workstation from Windows 8 to Windows 8.1. Just before doing this, I had done a bunch of work on my SSAO implementation, which I was intending to write up here once I got back from a visit home to do some deer hunting and help my parents get their firewood in. When I got back, I fired up my machine and loaded up VS to run the SSAO sample and grab some screenshots. Immediately, my demo application crashed while trying to create the DirectX 11 device. I had done some work over the weekend to downgrade the vertex and pixel shaders in the example to SM4, so that they could run on my laptop, which has an older integrated Intel video card that only supports DX10.1. I figured that I had borked something up in the process, so I tried running some of my other, simpler demos. The same error message popped up: DXGI_ERROR_UNSUPPORTED. Now, I am running a GTX 560 Ti, so I know Direct3D 11 should be supported.

However, I have been using Nvidia's driver update tool to keep myself at the latest and greatest driver version, so I figured that perhaps the latest driver I downloaded had some bugs. Go to Nvidia's site, check for any updates. Looks like I have the latest driver. Hmm...

So I turned again to Google, trying to find some reason why I would suddenly be unable to create a DirectX device. The fourth result I found was this: http://stackoverflow.com/questions/18082080/d3d11-create-device-debug-on-windows-8-1. Apparently I needed to download the Windows 8.1 SDK. I'm guessing that, since I had VS installed prior to updating, I didn't get the latest SDK installed, and the Windows 8 SDK, which I did have installed, wouldn't cut it anymore, at least when trying to create a debug device. So I went ahead and installed the 8.1 SDK from here. Restart VS, rebuild the project in question, and now it runs perfectly. Argh. At least it's working again; I just wish I didn't have to waste an hour futzing around with it...

Originally posted at http://www.richardssoftware.net/2013/11/windows-81-and-slimdx.html

ericrrichards22
Shadow mapping is a technique to cast shadows from arbitrary objects onto arbitrary 3D surfaces. You may recall that we implemented planar shadows earlier using the stencil buffer. Although that technique worked well for rendering shadows onto planar (flat) surfaces, it does not work when we want to cast shadows onto curved or irregular surfaces, which limits its usefulness. Shadow mapping gets around these limitations by rendering the scene from the perspective of a light and saving the depth information into a texture called a shadow map. Then, when we are rendering our scene to the backbuffer, in the pixel shader we determine the depth value of the pixel being rendered, relative to the light position, and compare it to a sampled value from the shadow map. If the computed value is greater than the sampled value, then the pixel being rendered is not visible from the light, so the pixel is in shadow, and we do not compute the diffuse and specular lighting for it; otherwise, we render the pixel as normal. Using a simple point-sampling technique for shadow mapping results in very hard, aliased shadows: a pixel is either in shadow or lit. Therefore, we will use a sampling technique known as percentage closer filtering (PCF), which uses a box filter to determine how shadowed the pixel is. This allows us to render partially shadowed pixels, which results in softer shadow edges.
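
To make the depth comparison a little more concrete, the sketch below shows one common way to build the matrix that takes a world-space position into the shadow map's texture space, so the pixel shader can compare depths. The bounding-sphere framing and the near/far values are illustrative simplifications, with SlimDX math types assumed:

```csharp
using SlimDX;

// Sketch of building a "shadow transform": world space -> light view space -> light projection
// -> shadow map texture space, so a pixel's depth can be compared against the stored value.
public static class ShadowTransform {
    // Maps NDC [-1,1]x[-1,1] (y flipped) into texture space [0,1]x[0,1]
    private static readonly Matrix NdcToTexture = new Matrix {
        M11 = 0.5f, M22 = -0.5f, M33 = 1.0f,
        M41 = 0.5f, M42 = 0.5f, M44 = 1.0f
    };

    public static Matrix Build(Vector3 lightDir, Vector3 sceneCenter, float sceneRadius) {
        // Position the light "camera" outside the scene's bounding sphere, looking at its center
        var lightPos = sceneCenter - lightDir * (2.0f * sceneRadius);
        var view = Matrix.LookAtLH(lightPos, sceneCenter, Vector3.UnitY);

        // Orthographic volume fit around the bounding sphere (spans [r, 3r] from the light)
        var proj = Matrix.OrthoLH(2 * sceneRadius, 2 * sceneRadius, sceneRadius, 3 * sceneRadius);

        return view * proj * NdcToTexture;
    }
}
```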

This example is based on the example from Chapter 21 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. The full source for this example can be downloaded from my GitHub repository at https://github.com/ericrrichards/dx11.git, under the ShadowDemos project.

Read More...
ericrrichards22
I had promised that we would move on to discussing shadows, using the shadow mapping technique. However, when I got back into the code I had written for that example, I realized that I was really sick of handling all of the geometry for our stock columns & skull scene. So I decided that, rather than manage all of the buffer creation and litter the example app with all of the buffer counts, offsets, materials and world transforms necessary to create our primitive meshes, I would take some time and extend the BasicModel class with some factory methods to create geometric models for us, and leverage the BasicModel class to encapsulate and manage all of that data. This cleans up the example code considerably, so that next time when we do look at shadow mapping, there will be a lot less noise to deal with.

The heavy lifting for these methods is already done; our GeometryGenerator class already does the work of generating the vertex and index data for these geometric meshes. All that we have left to do is massage that geometry into our BasicModel's MeshGeometry structure, add a default material and textures, and create a Subset for the entire mesh. As the material and textures are public, we can safely initialize the model with a default material and null textures, since we can apply a different material or apply diffuse or normal maps to the model after it is created.

The full source for this example can be downloaded from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the ShapeModels project.

Read More...
ericrrichards22
Particle systems are a technique commonly used to simulate chaotic phenomena, which are not easy to render using normal polygons. Some common examples include fire, smoke, rain, snow, or sparks. The particle system implementation that we are going to develop will be general enough to support many different effects; we will be using the GPU's StreamOut stage to update our particle systems, which means that all of the physics calculations and logic to update the particles will reside in our shader code, so that by substituting different shaders, we can achieve different effects using our base particle system implementation.

The code for this example was adapted from Chapter 20 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, ported to C# and SlimDX. The full source for the example can be found at my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the ParticlesDemo project.

Below, you can see the results of adding two particle systems to our terrain demo. At the center of the screen, we have a flame particle effect, along with a rain particle effect.

[screenshot]

Read More...
ericrrichards22
Sorry for the hiatus, I've been very busy with work and life the last couple weeks. Today, we're going to look at loading meshes with skeletal animations in DirectX 11, using SlimDX and Assimp.Net in C#. This will probably be our most complicated example yet, so bear with me. This example is inspired by Chapter 25 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, although with some heavy modifications. Mr. Luna's code uses a custom animation format, which I found less than totally useful; realistically, we would want to be able to load skinned meshes exported in one of the commonly used 3D modeling formats. To facilitate this, we will again use the .NET port of the Assimp library, Assimp.Net. The code I am using to load and interpret the animation and bone data is heavily based on Scott Lee's Animation Importer code, ported to C#. The full source for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git under the SkinnedModels project. The meshes used in the example are taken from the example code for Carl Granberg's Programming an RTS Game with Direct3D.

Skeletal animation is the standard way to animate 3D character models. Generally, a character model will be represented by two structures: the exterior vertex mesh, or skin, and a tree of control points specifying the joints or bones that make up the skeleton of the mesh. Each vertex in the skin is associated with one or more bones, along with a weight that determines how much influence the bone should have on the final position of the skin vertex. Each bone is represented by a transformation matrix specifying the translation, rotation and scale that determines the final position of the bone. The bones are defined in a hierarchy, so that each bone's transformation is specified relative to its parent bone. Thus, given a standard bipedal skeleton, if we rotate the upper arm bone of the model, this rotation will propagate to the lower arm and hand bones of the model, analogously to how our actual joints and bones work.

Animations are defined by a series of keyframes, each of which specifies the transformation of each bone in the skeleton at a given time. To get the appropriate transformation at a given time t, we linearly interpolate between the two closest keyframes. Because of this, we will typically store the bone transformations in a decomposed form, specifying the translation, scale and rotation components separately, and build the transformation matrix at a given time from the interpolated components. A skinned model may contain many different animation sets; for instance, we'll commonly have a walk animation, an attack animation, an idle animation, and a death animation.
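
A minimal sketch of that per-bone interpolation might look like the code below; the Keyframe struct and its fields are assumptions for illustration, but the flow (lerp translation and scale, slerp rotation, recompose the matrix) is the standard approach:

```csharp
using SlimDX;

// Sketch of interpolating one bone's local transform at time t from its keyframes.
public struct Keyframe {
    public float Time;
    public Vector3 Translation;
    public Vector3 Scale;
    public Quaternion Rotation;
}

public static class BoneAnimation {
    public static Matrix Interpolate(Keyframe[] keys, float t) {
        if (t <= keys[0].Time) return ToMatrix(keys[0]);
        if (t >= keys[keys.Length - 1].Time) return ToMatrix(keys[keys.Length - 1]);

        // find the pair of keyframes bracketing t
        var i = 0;
        while (keys[i + 1].Time < t) i++;
        var k0 = keys[i];
        var k1 = keys[i + 1];
        var s = (t - k0.Time) / (k1.Time - k0.Time);

        var trans = Vector3.Lerp(k0.Translation, k1.Translation, s);
        var scale = Vector3.Lerp(k0.Scale, k1.Scale, s);
        var rot = Quaternion.Slerp(k0.Rotation, k1.Rotation, s);
        return Compose(scale, rot, trans);
    }

    private static Matrix ToMatrix(Keyframe k) {
        return Compose(k.Scale, k.Rotation, k.Translation);
    }

    private static Matrix Compose(Vector3 scale, Quaternion rot, Vector3 trans) {
        return Matrix.Scaling(scale.X, scale.Y, scale.Z) *
               Matrix.RotationQuaternion(rot) *
               Matrix.Translation(trans.X, trans.Y, trans.Z);
    }
}
```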

The process of loading an animated mesh can be summarized as follows:

  1. Extract the bone hierarchy of the model skeleton.
  2. Extract the animations from the model, along with all bone keyframes for each animation.
  3. Extract the skin vertex data, along with the vertex bone indices and weights.
  4. Extract the model materials and textures.
To draw the skinned model, we need to advance the animation to the correct frame, then pass the bone transforms to our vertex shader, where we will use the per-vertex bone indices and weights to transform the vertex position to the proper location.

Read More...
ericrrichards22
So far, we have either worked with procedurally generated meshes, like our boxes and cylinders, or loaded very simple text-based mesh formats. For any kind of real application, however, we will need the capability to load meshes created by artists using 3D modeling and animation programs, like Blender or 3DS Max. There are a number of potential solutions to this problem; we could write an importer to read the specific file format of the program we are most likely to use for our engine. This presents a new host of problems, however: unless we write another importer, we are limited to using models created with that modeling software, or we have to rely on or create plugins for our modeling software to reformat models created using different software. There is a myriad of different 3D model formats in more-or-less common use, some open-source, some proprietary, and for many of them it is surprisingly hard to find good, authoritative documentation. All this makes the prospect of creating a general-purpose, cross-format model importer a daunting task.

Fortunately, there exists an open-source library with support for loading most of the commonly used model formats, Assimp, or the Open Asset Import Library. Although Assimp is written in C++, there exists a port for .NET, AssimpNet, which we will be able to use directly in our code, without the hassle of wrapping the native Assimp library ourselves. I am not a lawyer, but it looks like both Assimp and AssimpNet have very permissive licenses that should allow one to use them in any kind of hobby or professional project.

While we can use Assimp to load our model data, we will need to create our own C# model class to manage and render the model. For that, I will be following the example of Chapter 23 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. This example will not be a straight conversion of his example code, since I will be ditching the m3d model format which he uses, and instead loading models from the old Microsoft DirectX .X format. The models I will be using come from the example code for Chapter 4 of Carl Granberg's Programming an RTS Game with Direct3D, although you may use any of the supported Assimp model formats if you want to use other meshes instead. The full source for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the AssimpModel project.
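
For reference, getting a scene out of a model file with AssimpNet boils down to something like the sketch below. This assumes the AssimpContext importer class from recent AssimpNet releases (older versions expose a very similar AssimpImporter), and the post-processing flags shown are just reasonable defaults, not necessarily the ones used in the project:

```csharp
using Assimp;

// Minimal sketch of importing a model with AssimpNet. The post-processing flags ask Assimp
// to triangulate the mesh, generate smooth normals, and flip the texture V coordinate.
public static class ModelImport {
    public static Scene Load(string filename) {
        var importer = new AssimpContext();
        var scene = importer.ImportFile(filename,
            PostProcessSteps.Triangulate |
            PostProcessSteps.GenerateSmoothNormals |
            PostProcessSteps.FlipUVs);

        // Each Assimp mesh becomes one subset of our model; positions, normals, texture
        // coordinates and material indices are all available on scene.Meshes[i].
        return scene;
    }
}
```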

[screenshot: A more complicated model with multiple subsets, from the Assimp test model collection]

Read More...
ericrrichards22
Previously, we have used our Terrain class solely with heightmaps that we have loaded from a file. Now, we are going to extend our Terrain class to generate random heightmaps as well, which will add variety to our examples. We will be using Perlin Noise, which is a method of generating naturalistic pseudo-random textures developed by Ken Perlin. One of the advantages of using Perlin noise is that its output is deterministic; for a given set of control parameters, we will always generate the same noise texture. This is desirable for our terrain generation algorithm because it allows us to recreate the same heightmap given the same initial seed value and control parameters.

Because we will be generating random heightmaps for our terrain, we will also need to generate an appropriate blendmap to texture the resulting terrain. We will be using a simple method that assigns the different terrain textures based on the heightmap values; a more complex simulation might model the effects of weather, temperature and moisture to assign diffuse textures, but simply using the elevation works quite well with the texture set that we are using.
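
As a rough illustration of the elevation-based approach, each blend-map texel can be computed from the normalized height alone, with one weight per color channel; the thresholds and band widths below are arbitrary examples, not the values used in the demo:

```csharp
using System;
using SlimDX;

// Illustrative sketch: derive blend weights for four terrain textures from elevation alone.
// Each channel of the returned value weights one diffuse texture (e.g. grass, dirt, rock, snow),
// ramping in over a band of heights; the weights are normalized so they sum to one.
public static class BlendMapBuilder {
    public static Vector4 WeightsForHeight(float height, float maxHeight) {
        var t = height / maxHeight;   // normalize elevation to [0, 1]

        var grass = Saturate(1.0f - Math.Abs(t - 0.15f) / 0.25f);
        var dirt  = Saturate(1.0f - Math.Abs(t - 0.45f) / 0.25f);
        var rock  = Saturate(1.0f - Math.Abs(t - 0.70f) / 0.25f);
        var snow  = Saturate(1.0f - Math.Abs(t - 0.95f) / 0.25f);

        var total = grass + dirt + rock + snow + 1e-5f;
        return new Vector4(grass / total, dirt / total, rock / total, snow / total);
    }

    private static float Saturate(float x) {
        return Math.Max(0.0f, Math.Min(1.0f, x));
    }
}
```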

The code for this example is heavily influenced by Chapter 4 of Carl Granberg's Programming an RTS Game with Direct3D, adapted into C# and taking advantage of some performance improvements that multi-core CPUs on modern computers offer us. The full code for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the RandomTerrainDemo project. In addition to the random terrain generation code, we will also look at how we can add Windows Forms controls to our application, which we will use to specify the parameters to use to create the random terrain.

Read More...
ericrrichards22
I know that I have been saying that I will cover random terrain generation in my next post for several posts now, and if all goes well, that post will be published today or tomorrow. First, though, we will talk about Direct2D, and using it to draw a loading screen, which we will display while the terrain is being generated to give some indication of the progress of our terrain generation algorithm.

Direct2D is a new 2D rendering API from Microsoft built on top of DirectX 10 (or DirectX 11 in Windows 8). It provides functionality for rendering bitmaps, vector graphics and text in screen-space using hardware acceleration. It is positioned as a successor to the GDI rendering interface, and, so far as I can tell, the old DirectDraw API from earlier versions of DirectX. According to the documentation, it should be possible to use Direct2D to render 2D elements to a Direct3D 11 render target, although I have had some difficulties in actually getting that use-case to work. It does appear to work excellently for pure 2D rendering, say for menu or loading screens, with a simpler syntax than using Direct3D with an orthographic projection.

We will be adding Direct2D support to our base application class D3DApp, and use Direct2D to render a progress bar with some status text during our initialization stage, as we load and generate the Direct3D resources for our scene. Please note that the implementation presented here should only be used while there is no active Direct3D rendering occurring; due to my difficulties in getting Direct2D/Direct3D interoperation to work correctly, Direct2D will be using a different backbuffer than Direct3D, so interleaving Direct2D drawing with Direct3D rendering will result in some very noticeable flickering when the backbuffers switch.

The inspiration for this example comes from Chapter 5 of Carl Granberg's Programming an RTS Game with Direct3D, where a similar progress screen is implemented using DirectX 9 and the fixed-function pipeline. With the removal of the ID3DXFont interface from newer versions of DirectX, as well as the lack of the ability to clear just a portion of a DirectX 11 DeviceContext's render target, a straight conversion of that code would require some fairly heavy lifting to implement in Direct3D 11, and so we will be using Direct2D instead. The full code for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git, and is implemented in the TerrainDemo and RandomTerrainDemo projects.

Read More...
ericrrichards22
Last time, we discussed terrain rendering, using the tessellation stages of the GPU to render the terrain mesh with distance-based LOD. That method required a DX11-compliant graphics card, since the hull and domain shader stages are new to Direct3D 11. According to the latest Steam Hardware Survey, nearly 65% of gamers have a DX11 graphics card, which is certainly the majority of potential users, and only likely to increase in the future. Of the remaining 35% of gamers, 31% are still using DX10 graphics cards. While we can safely ignore the small percentage of the market that is still limping along on DX9 cards (I myself still have an old laptop with a GeForce Go 7400, but that machine is seven years old and on its last legs), restricting ourselves to only DX11 cards cuts out a third of the potential users of our application. For that reason, I'm going to cover an alternative, CPU-based implementation of our previous LOD terrain rendering example. If you have the option, I would suggest that you only bother with the previous DX11 method, as tessellating the terrain mesh yourself on the CPU is more complex, more prone to error, less performant, and produces a somewhat lower-quality result; if you must support DX10 graphics cards, however, this method or one similar to it will do the job, while the hull/domain shader method will not.

We will be implementing this rendering method as an additional render path in our Terrain class, if we detect that the user has a DX10 compatible graphics card. This allows us to reuse a large chunk of the previous code. For the rest, we will adapt portions of the HLSL shader code that we previously implemented into C#, as well as use some inspirations from Chapter 4 of Carl Granberg's Programming an RTS Game with Direct3D. The full code for this example can be found at my GitHub repository, https://github.com/ericrrichards/dx11.git, under the TerrainDemo project.
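
The branch itself is straightforward; a minimal sketch of the check might look like the following, assuming the FeatureLevel property exposed by the SlimDX Direct3D11 Device wrapper:

```csharp
using SlimDX.Direct3D11;

// Sketch of choosing the terrain render path by feature level: DX11-class hardware gets the
// hull/domain shader path, while feature level 10.x falls back to the CPU-tessellated path.
// The Device.FeatureLevel property is assumed from the SlimDX Direct3D11 wrapper.
public static class TerrainRenderPath {
    public static bool UseGpuTessellation(Device device) {
        return device.FeatureLevel >= FeatureLevel.Level_11_0;
    }
}
```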

Read More...
ericrrichards22
A common task for strategy and other games with an outdoor setting is the rendering of the terrain for the level. Probably the most convenient way to model a terrain is to create a triangular grid, and then perturb the y-coordinates of the vertices to match the desired elevations. This elevation data can be determined by using a mathematical function, as we have done in our previous examples, or by sampling an array or texture known as a heightmap. Using a heightmap to describe the terrain elevations gives us finer-grained control over the details of our terrain, and also allows us to define the terrain easily, either by using a procedural method to create random heightmaps, or by creating an image in a paint program.
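
As a minimal sketch of that idea, the snippet below lays out a regular grid in x/z and sets each vertex's y-coordinate from the corresponding heightmap sample (SlimDX's Vector3 assumed; in the actual demo the heights end up being sampled on the GPU, but the concept is the same):

```csharp
using SlimDX;

// Illustrative sketch: build a grid of terrain vertices centered on the origin,
// with each vertex's height taken from the heightmap.
public static class TerrainGrid {
    public static Vector3[] Build(float[,] heightmap, float cellSpacing) {
        var rows = heightmap.GetLength(0);
        var cols = heightmap.GetLength(1);
        var verts = new Vector3[rows * cols];

        var halfWidth = (cols - 1) * cellSpacing * 0.5f;
        var halfDepth = (rows - 1) * cellSpacing * 0.5f;

        for (var i = 0; i < rows; i++) {
            var z = halfDepth - i * cellSpacing;
            for (var j = 0; j < cols; j++) {
                var x = -halfWidth + j * cellSpacing;
                verts[i * cols + j] = new Vector3(x, heightmap[i, j], z);
            }
        }
        return verts;
    }
}
```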

Because a terrain can be very large, we will want to optimize the rendering of it as much as possible. One easy way to save rendering cycles is to only draw the vertices of the terrain that can be seen by the player, using frustum culling techniques similar to those we have already covered. Another way is to render the mesh using a variable level of detail, by using the Hull and Domain shaders to render the terrain mesh with more polygons near the camera, and fewer in the distance, in a manner similar to that we used for our Displacement mapping effect. Combining the two techniques allows us to render a very large terrain, with a very high level of detail, at a high frame rate, although it does limit us to running on DirectX 11 compliant graphics cards.

We will also use a technique called texture splatting to render our terrain with multiple textures in a single rendering call. This technique involves using a separate texture, called a blend map, in addition to the diffuse textures that are applied to the mesh, in order to define which texture is applied to which portion of the mesh.

The code for this example was adapted from Chapter 19 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, with some additional inspirations from Chapter 4 of Carl Granberg's Programming an RTS Game with Direct3D. The full source for this example can be downloaded from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the TerrainDemo project.

Read More...
ericrrichards22
In our last example on normal mapping and displacement mapping, we made use of the new Direct3D 11 tessellation stages when implementing our displacement mapping effect. For the purposes of the example, we did not examine too closely the concepts involved in making use of these new features, namely the Hull and Domain shaders. These new shader types are sufficiently complicated that they deserve a separate treatment of their own, particularly since we will continue to make use of them for more complicated effects in the future.

The Hull and Domain shaders are covered in Chapter 13 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, which I had previously skipped over. Rather than use the example from that chapter, I am going to use the shader effect we developed for our last example instead, so that we can dive into the details of how the hull and domain shaders work in the context of a useful example that we have some background with.

The primary motivation for using the tessellation stages is to offload work from the CPU and main memory onto the GPU. We have already looked at a couple of the benefits of this technique in our previous post, but some of the advantages of using the tessellation stages are:

  • We can use a lower detail mesh, and specify additional detail using less memory-intensive techniques, like the displacement mapping technique presented earlier, to produce the final, high-quality mesh that is displayed.
  • We can adjust the level of detail of a mesh on-the-fly, depending on the distance of the mesh from the camera or other criteria that we define.
  • We can perform expensive calculations, like collisions and physics calculations, on the simplified mesh stored in main memory, and still render the highly-detailed generated mesh.

The Tessellation Stages

The tessellation stages sit in the graphics pipeline between the vertex shader and the geometry shader. When we render using the tessellation stages, the vertices created by the vertex shader are not really the vertices that will be rendered to the screen; instead, they are control points which define a triangular or quad patch, which will be further refined by the tessellation stages into vertices. For most of our usages, we will either be working with triangular patches, with 3 control points, or quad patches, with 4 control points, which correspond to the corner vertices of the triangle or quad. Direct3D 11 supports patches with up to 32 control points, which might be suitable for rendering meshes based on Bezier curves.

The tessellation stages can be broken down into three component stages:

  • Hull Shader Stage - The hull shader operates on each control point for a geometry patch, and can add, remove or modify its input control points before passing the patch on to the tessellator stage. The hull shader also calculates the tessellation factors for a patch, which instruct the tessellator stage how to break the patch up into individual vertices. The hull shader is fully programmable, meaning that we need to define an HLSL function that will be evaluated to construct the patch control points and tessellation factors.
  • Tessellator Stage - The tessellator stage is a fixed-function stage (meaning that we do not have to write a shader for it), which samples the input patch and generates a set of vertices that divide the patch, according to the tessellation factors supplied by the hull shader and a partitioning scheme, which defines the algorithm used to subdivide the patch. Vertices created by the tessellator are normalized; i.e. quad patch vertices are specified by their (u,v) coordinates on the surface of the quad, while triangle patch vertices use barycentric coordinates to specify their location within the triangle patch.
  • Domain Shader Stage - The domain shader is a programmable stage (we need to write a shader function for it), which operates on the normalized vertices input from the tessellator stage, and maps them into their final positions within the patch. Typically, the domain shader will interpolate the final vertex value from the patch control points using the (u,v) or barycentric coordinates output by the tessellator. The output vertices from the domain shader will then be passed along to the next stage in the pipeline, either the geometry shader or the pixel shader.

With these definitions out of the way, we can now dive into the displacement mapping effect from our previous example and examine just how the tessellation stages generate the displacement mapped geometry we see on the screen.
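
One small but easy-to-miss piece on the application side is that, when drawing through the tessellation stages, the input assembler must be fed control-point patch primitives rather than triangles. The sketch below shows the relevant state change, assuming SlimDX's PrimitiveTopology enumeration for the Direct3D 11 patch-list topologies:

```csharp
using SlimDX.Direct3D11;

// Sketch of issuing a draw call through the tessellation stages: the vertex buffer holds
// patch control points, and the topology tells the pipeline how many control points per patch.
public static class TessellationSetup {
    public static void DrawPatches(DeviceContext context, Buffer vertexBuffer, int vertexCount, int stride) {
        context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertexBuffer, stride, 0));
        // 4 control points per patch -> quad patches consumed by the hull shader
        context.InputAssembler.PrimitiveTopology = PrimitiveTopology.PatchListWith4ControlPoints;
        context.Draw(vertexCount, 0);
    }
}
```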

Read More...
ericrrichards22
Today, we are going to cover a couple of additional techniques that we can use to achieve more realistic lighting in our 3D scenes. Going back to our first discussion of lighting, recall that thus far, we have been using per-pixel Phong lighting. This style of lighting was an improvement upon the earlier method of Gouraud lighting, interpolating the vertex normals across the resulting surface pixels and calculating the color of an object per-pixel, rather than per-vertex. Generally, the Phong model gives us good results, but it is limited, in that we can only specify the normals to be interpolated from at the vertices. For objects that should appear smooth, this is sufficient to give realistic-looking lighting; for surfaces that have more uneven textures applied to them, the illusion can break down, since the specular highlights computed from the interpolated normals will not match up with the apparent topology of the surface.

[screenshot]

In the screenshot above, you can see that the highlights on the nearest column are very smooth, and match the geometry of the cylinder. However, the column has a texture applied that makes it appear to be constructed out of stone blocks, jointed with mortar. In real life, such a material would have all kinds of nooks and crannies and deformities that would affect the way light hits the surface and create much more irregular highlights than in the image above. Ideally, we would want to model those surface details in our scene, for the greatest realism. This is the motivation for the techniques we are going to discuss today.

One technique to improve the lighting of textured objects is called bump or normal mapping. Instead of just using the interpolated pixel normal, we will combine it with a normal sampled from a special texture, called a normal map, which allows us to match the per-pixel normal to the perceived surface texture, and achieve more believable lighting.

The other technique is called displacement mapping. Similarly, we use an additional texture to specify the per-texel surface details, but this time, rather than a surface normal, the texture, called a displacement map or heightmap, stores an offset that indicates how much the texel sticks out or is sunken in from its base position. We use this offset to modify the position of the vertices of an object along the vertex normal. For best results, we can increase the tessellation of the mesh using a domain shader, so that the vertex resolution of our mesh is as great as the resolution of our heightmap. Displacement mapping is often combined with normal mapping, for the highest level of realism.

[screenshot: Normal mapped columns]
[screenshot: Displacement mapped columns]

This example is based off of Chapter 18 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. You can download the full source for this example from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the NormalDisplacementMaps project.

NOTE: You will need to have a DirectX 11 compatible video card in order to use the displacement mapping method presented here, as it makes use of the Domain and Hull shaders, which are new to DX 11.

Read More...
ericrrichards22
Last time, we looked at using cube maps to render a skybox around our 3D scenes, and also how to use that sky cubemap to render some environmental reflections onto our scene objects. While this method of rendering reflections is relatively cheap performance-wise, and can give an additional touch of realism to background geometry, it has some serious limitations if you look at it too closely. For one, none of our local scene geometry is captured in the sky cubemap, so, for instance, you can look at our reflective skull in the center and see reflections of the distant mountains that should be occluded by the surrounding columns. This deficiency can be overlooked for minor details, or for surfaces with low reflectivity, but it really sticks out if you have a large, highly reflective surface. Additionally, because we are using the same cubemap for all objects, the reflections on any object in our scene are not totally accurate, as our cubemap sampling technique does not take into account the position of the environment-mapped object in the scene.

The solution to these issues is to render a cube map, at runtime, for each reflective object using Direct3D. By rendering the cubemap for each object on the fly, we can incorporate all of the visible scene details, (characters, geometry, particle effects, etc) in the reflection, which looks much more realistic. This is, of course, at the cost of the additional overhead involved in rendering these additional cubemaps each frame, as we have to effectively render the whole scene six times for each object that requires dynamic reflections.
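
The six per-face cameras all share the object's position, look down the positive and negative axes, and use a 90-degree field of view with a square aspect ratio so the faces tile seamlessly. A sketch of that setup, using SlimDX math types, might look like this:

```csharp
using System;
using SlimDX;

// Sketch of the six 90-degree views used to render a dynamic cube map around a point.
public static class CubeMapCameras {
    public static Matrix[] BuildViews(Vector3 position) {
        var targets = new[] {
            position + Vector3.UnitX, position - Vector3.UnitX,   // +X, -X
            position + Vector3.UnitY, position - Vector3.UnitY,   // +Y, -Y
            position + Vector3.UnitZ, position - Vector3.UnitZ    // +Z, -Z
        };
        var ups = new[] {
            Vector3.UnitY, Vector3.UnitY,
            -Vector3.UnitZ, Vector3.UnitZ,    // looking straight up/down needs a different up vector
            Vector3.UnitY, Vector3.UnitY
        };
        var views = new Matrix[6];
        for (var i = 0; i < 6; i++) {
            views[i] = Matrix.LookAtLH(position, targets[i], ups[i]);
        }
        return views;
    }

    // 90-degree vertical FOV with aspect ratio 1: exactly one cube face per render
    public static Matrix BuildProjection(float nearZ, float farZ) {
        return Matrix.PerspectiveFovLH((float)(Math.PI / 2), 1.0f, nearZ, farZ);
    }
}
```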

This example is based on the second portion of Chapter 17 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, with the code ported to C# and SlimDX from the native C++ used in the original example. You can download the full source for this example from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the DynamicCubeMap project.

Read More...
ericrrichards22
This time, we are going to take a look at a special class of texture, the cube map, and a couple of the common applications for cube maps, skyboxes and environment-mapped reflections. Skyboxes allow us to model far away details, like the sky or distant scenery, to create a sense that the world is more expansive than just our scene geometry, in an inexpensive way. Environment-mapped reflections allow us to model reflections on surfaces that are irregular or curved, rather than on flat, planar surfaces as in our Mirror Demo.

The code for this example is adapted from the first part of Chapter 17 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. You can download the full source for this example from my GitHub repository, https://github.com/ericrrichards/dx11.git, under the CubeMap project.

Read More...
ericrrichards22
So far, we have only been concerned with drawing a 3D scene to the 2D computer screen, by projecting the 3D positions of objects to the 2D pixels of the screen. Often, you will want to perform the reverse operation; given a pixel on the screen, which object in the 3D scene corresponds to that pixel? Probably the most common application for this kind of transformation is selecting and moving objects in the scene using the mouse, as in most modern real-time strategy games, although the concept has other applications.

The traditional method of performing this kind of object picking relies on a technique called ray-casting. We shoot a ray from the camera position through the selected point on the near-plane of our view frustum, which is obtained by converting the screen pixel location into normalized device coordinates, and then intersect the resulting ray with each object in our scene. The first object intersected by the ray is the object that is "picked."
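
In code, constructing that picking ray from a screen pixel looks roughly like the sketch below (SlimDX math types assumed, left-handed conventions as in the rest of these examples): the pixel is converted to normalized device coordinates, un-projected onto the near plane in view space, and then transformed into world space with the inverse view matrix.

```csharp
using SlimDX;

// Sketch of building a world-space picking ray from a screen pixel.
public static class Picking {
    public static Ray ComputeRay(int sx, int sy, int viewportWidth, int viewportHeight,
                                 Matrix proj, Matrix view) {
        // screen -> NDC -> view space (undo the projection's x/y scaling)
        var vx = (2.0f * sx / viewportWidth - 1.0f) / proj.M11;
        var vy = (-2.0f * sy / viewportHeight + 1.0f) / proj.M22;

        // ray in view space: origin at the eye, direction through the selected point
        var origin = new Vector3(0, 0, 0);
        var direction = new Vector3(vx, vy, 1.0f);

        // transform into world space with the inverse view matrix
        var invView = Matrix.Invert(view);
        origin = Vector3.TransformCoordinate(origin, invView);
        direction = Vector3.TransformNormal(direction, invView);
        direction.Normalize();

        return new Ray(origin, direction);
    }
}
```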

The code for this example is based on Chapter 16 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, with some modifications. You can download the full source from my GitHub repository at https://github.com/ericrrichards/dx11.git, under the PickingDemo project.

Read More...
ericrrichards22
Today, we are going to reprise our Camera class from the Camera Demo. In addition to the FPS-style camera that we have already implemented, we will create a Look-At camera, a camera that remains focused on a point and pans around its target. This camera will be similar to the very basic camera we implemented for our initial examples (see the Colored Cube Demo). While our FPS camera is ideal for first-person type views, the Look-At camera can be used for third-person views, or the "birds-eye" view common in city-builder and strategy games. As part of this process, we will abstract out the common functionality that all cameras will share from our FPS camera into an abstract base camera class.
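
The core of such a camera is easiest to express in spherical coordinates around the target: panning and zooming just adjust two angles and a radius, and the view matrix is rebuilt from them. The sketch below is illustrative only; the member names are not those of the actual camera classes:

```csharp
using System;
using SlimDX;

// Rough sketch of a look-at (orbit) camera: the position is stored in spherical coordinates
// around a target point, so panning and zooming just adjust the angles and radius.
public class OrbitCameraSketch {
    public Vector3 Target;           // the point the camera stays focused on
    private float _radius = 10.0f;   // distance from the target
    private float _alpha = 0.5f;     // rotation around the vertical axis
    private float _beta = 0.5f;      // elevation angle, kept away from the poles

    public void Pan(float dAlpha, float dBeta) {
        _alpha += dAlpha;
        _beta = Math.Max(0.05f, Math.Min((float)Math.PI - 0.05f, _beta + dBeta));
    }

    public void Zoom(float dRadius) {
        _radius = Math.Max(2.0f, _radius + dRadius);
    }

    public Matrix View {
        get {
            // convert spherical coordinates back to a cartesian offset from the target
            var position = Target + new Vector3(
                _radius * (float)(Math.Sin(_beta) * Math.Cos(_alpha)),
                _radius * (float)Math.Cos(_beta),
                _radius * (float)(Math.Sin(_beta) * Math.Sin(_alpha)));
            return Matrix.LookAtLH(position, Target, Vector3.UnitY);
        }
    }
}
```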

The inspiration for this example comes both from Mr. Luna's Camera Demo (Chapter 14 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0) and the camera implemented in Chapter 5 of Carl Granberg's Programming an RTS Game with Direct3D. You can download the full source for this example from my GitHub repository at https://github.com/ericrrichards/dx11.git under the CameraDemo project. To switch between the FPS and Look-At camera, use the F(ps) and L(ook-at) keys on your keyboard.

Read More...
ericrrichards22
One of the main bottlenecks to the speed of a Direct3D application is the number of Draw calls that are issued to the GPU, along with the overhead of switching shader constants for each object that is drawn. Today, we are going to look at two methods of optimizing our drawing code. Hardware instancing allows us to minimize the overhead of drawing identical geometry in our scene, by batching the draw calls for our objects and utilizing per-instance data to avoid the overhead in uploading our per-object world matrices. Frustum culling enables us to determine which objects will be seen by our camera, and to skip the Draw calls for objects that will be clipped by the GPU during projection. Together, the two techniques reap a significant increase in frame rate.
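
The culling side of this is conceptually simple: each frame, test every instance's bounding box against the camera frustum, copy only the visible world matrices into the dynamic instance buffer, and issue the instanced draw call with that count. A rough sketch, with a stand-in for the frustum test since the actual frustum class differs, follows:

```csharp
using System;
using System.Collections.Generic;
using SlimDX;

// Illustrative sketch of culling before an instanced draw: only visible instances' world
// matrices are gathered; the caller then writes them into the dynamic instance buffer and
// calls DrawIndexedInstanced with visible.Count instances.
public class InstancedObject {
    public Matrix World;
    public BoundingBox Bounds;
}

public static class InstanceCulling {
    public static List<Matrix> GetVisibleInstances(IEnumerable<InstancedObject> instances,
                                                   Func<BoundingBox, bool> frustumContains) {
        var visible = new List<Matrix>();
        foreach (var obj in instances) {
            if (frustumContains(obj.Bounds)) {
                visible.Add(obj.World);   // this world matrix goes into the per-instance buffer
            }
        }
        return visible;
    }
}
```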

The source code for this example was adapted from the InstancingAndCulling demo from Chapter 15 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. Additionally, the frustum culling code for this example was adapted from Chapter 5 of Carl Granberg's Programming an RTS Game with Direct3D (Luna's implementation of frustum culling relied heavily on xnacollision.h, which isn't really included in the base SlimDX). You can download the full source for this example from my GitHub repository at https://github.com/ericrrichards/dx11.git under the InstancingAndCulling project.

Read More...
ericrrichards22
Up until now, we have been using a fixed, orbiting camera to view our demo applications. This style of camera works adequately for our purposes, but for a real game project, you would probably want a more flexible type of camera implementation. Additionally, thus far we have been including our camera-specific code directly in our main application classes, which, again, works, but does not scale well to a real game application. Therefore, we will be splitting our camera-related code out into a new class (Camera.cs) that we will add to our Core library. This example maps to the CameraDemo example from Chapter 14 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. The full code for this example can be downloaded from my GitHub repository, https://github.com/ericrrichards/dx11.git, under the CameraDemo project.

We will be implementing a traditional first-person style camera, as one sees in many FPS and RPG games. Conceptually, we can think of this style of camera as consisting of a position in our 3D world, typically located at the position of the eyes of the player character, along with a vector frame of reference which defines the direction the character is looking. In most cases, this camera is constrained to only rotate about its X and Y axes; thus we can pitch up and down, or yaw left and right. For some applications, such as a space or aircraft simulation, you would also want to support rotation on the Z (roll) axis. Our camera will support two degrees of motion: back and forward in the direction of our camera's local Z (Look) axis, and left and right strafing along our local X (Right) axis. Depending on your game type, you might also want to implement methods to move the camera up and down on its local Y axis, for instance for jumping or climbing ladders. For now, we are not going to implement any collision detection with our 3D objects; our camera will operate very similarly to the Half-Life or Quake camera when using the noclip cheat.
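
Stripped of the view/projection bookkeeping, the core camera operations reduce to a handful of vector manipulations; the sketch below is illustrative of the idea rather than the actual Camera class:

```csharp
using SlimDX;

// Sketch of the core FPS camera operations: walking along the look axis, strafing along
// the right axis, and pitching/yawing by rotating the local frame.
public class FpsCameraSketch {
    public Vector3 Position = new Vector3(0, 2, -10);
    public Vector3 Right = Vector3.UnitX;
    public Vector3 Up = Vector3.UnitY;
    public Vector3 Look = Vector3.UnitZ;

    public void Walk(float d)   { Position += Look * d; }
    public void Strafe(float d) { Position += Right * d; }

    public void Pitch(float angle) {
        // rotate Up and Look about the local Right axis
        var r = Matrix.RotationAxis(Right, angle);
        Up = Vector3.TransformNormal(Up, r);
        Look = Vector3.TransformNormal(Look, r);
    }

    public void Yaw(float angle) {
        // rotate the whole frame about the world Y axis
        var r = Matrix.RotationY(angle);
        Right = Vector3.TransformNormal(Right, r);
        Up = Vector3.TransformNormal(Up, r);
        Look = Vector3.TransformNormal(Look, r);
    }
}
```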

Our camera class will additionally manage its view and projection matrices, as well as storing information that we can use to extract the view frustum. Below is a screenshot of our scene rendered using the viewpoint of our Camera class (This scene is the same as our scene from the LitSkull Demo, with textures applied to the shapes).

[screenshot]

Read More...
ericrrichards22
When I first learned about programming DirectX using shaders, it was back when DirectX 9 was the newest thing around. Back then, there were only two stages in the shader pipeline, the Vertex and Pixel shaders that we have been utilizing thus far. DirectX 10 introduced the geometry shader, which allows us to modify entire geometric primitives on the hardware, after they have gone through the vertex shader.

One application of this capability is rendering billboards. Billboarding is a common technique for rendering far-off objects or minor scene details, by replacing a full 3D object with a texture drawn to a quad that is oriented towards the viewer. This is much less performance-intensive, and for far-off objects and minor details, provides a good-enough approximation. As an example, many games use billboarding to render grass or other foliage, and the Total War series renders far-away units as billboard sprites (In Medieval Total War II, you can see this by zooming in and out on a unit; at a certain point, you'll see the unit "pop", which is the point where the Total War engine switches from rendering sprite billboards to rendering the full 3D model). The older way of rendering billboards required one to maintain a dynamic vertex buffer of the quads for the billboards, and to transform the vertices to orient towards the viewer on the CPU whenever the camera moved. Dynamic vertex buffers have a lot of overhead, because it is necessary to re-upload the geometry to the GPU every time it changes, along with the additional overhead of uploading four vertices per billboard. Using the geometry shader, we can use a static vertex buffer of 3D points, with only a single vertex per billboard, and expand the point to a camera-aligned quad in the geometry shader.

We'll illustrate this technique by porting the TreeBillboard example from Chapter 11 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. This demo builds upon our previous Alpha-blending example, adding some tree billboards to the scene. You can download the full code for this example from my GitHub repository, at https://github.com/ericrrichards/dx11.git under the TreeBillboardDemo project.

Read More...
ericrrichards22
In this post, we are going to discuss applications of the Direct3D stencil buffer, by porting the example from Chapter 10 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0 to C# and SlimDX. We will create a simple scene, consisting of an object (in our case, the skull mesh that we have used previously), and some simple room geometry, including a section which will act as a mirror and reflect the rest of the geometry in our scene. We will also implement planar shadows, so that our central object will cast shadows on the rest of our geometry when it is blocking our primary directional light. The full code for this example can be downloaded from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the MirrorDemo project.

Read More...