Richards Software Ramblings

Expanding our BasicModel Class

Posted 25 October 2013 · 561 views
C#, SlimDX, DirectX 11, Models

I had promised that we would move on to discussing shadows, using the shadow mapping technique. However, when I got back into the code I had written for that example, I realized that I was really sick of handling all of the geometry for our stock columns & skull scene. So I decided that, rather than manage all of the buffer creation and litter the example app with all of the buffer counts, offsets, materials and world transforms necessary to create our primitive meshes, I would take some time and extend the BasicModel class with some factory methods to create geometric models for us, and leverage the BasicModel class to encapsulate and manage all of that data. This cleans up the example code considerably, so that next time when we do look at shadow mapping, there will be a lot less noise to deal with.

The heavy lifting for these methods is already done; our GeometryGenerator class already does the work of generating the vertex and index data for these geometric meshes. All that we have left to do is massage that geometry into our BasicModel’s MeshGeometry structure, add a default material and textures, and create a Subset for the entire mesh. Since the material and textures are public, we can safely initialize the model with a default material and null textures; a different material, or diffuse and normal maps, can be applied to the model after it is created.
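To give a flavor of what one of these factory methods looks like, here is a rough sketch of a box-creation method. The member names below (Vertices, Indices, Subsets, Materials, DiffuseMapSRV, NormalMapSRV, ModelMesh) are assumptions based on the description above, not necessarily the exact names used in the repository code:

    // Hypothetical sketch of a BasicModel factory method; member names are assumed
    public static BasicModel CreateBox(Device device, float width, float height, float depth) {
        var ret = new BasicModel();
        // GeometryGenerator does the heavy lifting of producing the raw vertex and index data
        var box = GeometryGenerator.CreateBox(width, height, depth);

        foreach (var v in box.Vertices) {
            ret.Vertices.Add(new PosNormalTexTan(v.Position, v.Normal, v.TexC, v.TangentU));
        }
        ret.Indices.AddRange(box.Indices);

        // A single subset that spans the entire mesh
        ret.Subsets.Add(new MeshGeometry.Subset {
            VertexCount = box.Vertices.Count,
            FaceCount = box.Indices.Count / 3
        });

        // Default gray material and empty texture slots; since these are public,
        // callers can swap in a different material or diffuse/normal maps later
        ret.Materials.Add(new Material { Diffuse = new Color4(0.75f, 0.75f, 0.75f) });
        ret.DiffuseMapSRV.Add(null);
        ret.NormalMapSRV.Add(null);

        // Push the geometry into the MeshGeometry's GPU buffers
        ret.ModelMesh.SetVertices(device, ret.Vertices);
        ret.ModelMesh.SetIndices(device, ret.Indices);
        ret.ModelMesh.SetSubsetTable(ret.Subsets);
        return ret;
    }

With a method along those lines in place, creating one of our stock shapes in an example app becomes a one-liner, something like var box = BasicModel.CreateBox(Device, 1, 1, 1);.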

The full source for this example can be downloaded from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the ShapeModels project.



Particle Systems using Stream-Out in DirectX 11 and SlimDX

Posted 21 October 2013 · 1,278 views
C#, SlimDX, DirectX 11 and 3 more...

Particle systems are a technique commonly used to simulate chaotic phenomena, which are not easy to render using normal polygons. Some common examples include fire, smoke, rain, snow, or sparks. The particle system implementation that we are going to develop will be general enough to support many different effects; we will be using the GPU’s StreamOut stage to update our particle systems, which means that all of the physics calculations and logic to update the particles will reside in our shader code, so that by substituting different shaders, we can achieve different effects using our base particle system implementation.
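At a high level, the stream-out update amounts to ping-ponging between two vertex buffers: we draw last frame’s particles with a stream-out-only technique, capture the updated particles into a second buffer, and then swap the two buffers for the next frame. A rough sketch of that update pass, using SlimDX-style calls and assumed field names (_drawVB, _streamOutVB, _stride, _firstRun), might look like this:

    // Sketch only: field names and the effect technique are assumptions, not the demo's exact code
    private void UpdateParticles(DeviceContext dc, EffectTechnique streamOutTech) {
        // Feed last frame's particles in, and capture the updated ones via stream-out
        dc.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(_drawVB, _stride, 0));
        dc.StreamOutput.SetTargets(new StreamOutputBufferBinding(_streamOutVB, 0));

        streamOutTech.GetPassByIndex(0).Apply(dc);
        if (_firstRun) {
            dc.Draw(1, 0);   // kick the system off with the single emitter particle
            _firstRun = false;
        } else {
            dc.DrawAuto();   // draw however many particles were streamed out last frame
        }

        // Unbind the stream-out target (an empty binding), then swap the buffers
        dc.StreamOutput.SetTargets(new StreamOutputBufferBinding());
        var temp = _drawVB;
        _drawVB = _streamOutVB;
        _streamOutVB = temp;
    }

The particle update logic itself lives in the shader code of the stream-out technique, which is what lets us swap in different shaders to achieve different effects.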

The code for this example was adapted from Chapter 20 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0, ported to C# and SlimDX. The full source for the example can be found at my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the ParticlesDemo project.

Below, you can see the results of adding two particles systems to our terrain demo. At the center of the screen, we have a flame particle effect, along with a rain particle effect.

[Screenshot: flame and rain particle effects added to the terrain demo]


Skinned Models in DirectX 11 with SlimDX and Assimp.Net

Posted 15 October 2013 · 1,307 views
C#, SlimDX, Assimp, DirectX 11 and 2 more...

Sorry for the hiatus, I’ve been very busy with work and life the last couple weeks. Today, we’re going to look at loading meshes with skeletal animations in DirectX 11, using SlimDX and Assimp.Net in C#. This will probably be our most complicated example yet, so bear with me. This example is inspired by Chapter 25 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0, although with some heavy modifications. Mr. Luna’s code uses a custom animation format, which I found less than totally useful; realistically, we would want to be able to load skinned meshes exported in one of the commonly used 3D modeling formats. To facilitate this, we will again use the .NET port of the Assimp library, Assimp.Net. The code I am using to load and interpret the animation and bone data is heavily based on Scott Lee’s Animation Importer code, ported to C#. The full source for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git under the SkinnedModels project. The meshes used in the example are taken from the example code for Carl Granberg’s Programming an RTS Game with Direct3D.

Skeletal animation is the standard way to animate 3D character models. Generally, a character model will be represented by two structures: the exterior vertex mesh, or skin, and a tree of control points specifying the joints or bones that make up the skeleton of the mesh. Each vertex in the skin is associated with one or more bones, along with a weight that determines how much influence the bone should have on the final position of the skin vertex. Each bone is represented by a transformation matrix specifying the translation, rotation and scale that determines the final position of the bone. The bones are defined in a hierarchy, so that each bone’s transformation is specified relative to its parent bone. Thus, given a standard bipedal skeleton, if we rotate the upper arm bone of the model, this rotation will propagate to the lower arm and hand bones of the model, analogously to how our actual joints and bones work.

Animations are defined by a series of keyframes, each of which specifies the transformation of each bone in the skeleton at a given time. To get the appropriate transformation at a given time t, we linearly interpolate between the two closest keyframes. Because of this, we will typically store the bone transformations in a decomposed form, specifying the translation, scale and rotation components separately, building the transformation matrix at a given time from the interpolated components. A skinned model may contain many different animation sets; for instance, we’ll commonly have a walk animation, an attack animation, an idle animation, and a death animation.
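As a rough illustration of that interpolation, here is a sketch using the SlimDX math types; the Keyframe structure shown here is assumed for the example, but it holds the same decomposed data described above:

    // Illustrative only: interpolate a bone's transform between two bracketing keyframes
    public struct Keyframe {
        public float Time;
        public Vector3 Translation;
        public Vector3 Scale;
        public Quaternion Rotation;
    }

    public static Matrix Interpolate(Keyframe k0, Keyframe k1, float t) {
        // Blend factor in [0,1] between the two keyframes
        var lerp = (t - k0.Time) / (k1.Time - k0.Time);

        var translation = Vector3.Lerp(k0.Translation, k1.Translation, lerp);
        var scale = Vector3.Lerp(k0.Scale, k1.Scale, lerp);
        var rotation = Quaternion.Slerp(k0.Rotation, k1.Rotation, lerp);

        // Rebuild the bone matrix from the interpolated components: scale, then rotate, then translate
        return Matrix.Scaling(scale) * Matrix.RotationQuaternion(rotation) * Matrix.Translation(translation);
    }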

The process of loading an animated mesh can be summarized as follows:
  • Extract the bone hierarchy of the model skeleton.
  • Extract the animations from the model, along with all bone keyframes for each animation.
  • Extract the skin vertex data, along with the vertex bone indices and weights.
  • Extract the model materials and textures.

To draw the skinned model, we need to advance the animation to the correct frame, then pass the bone transforms to our vertex shader, where we will use the per-vertex bone indices and weights to transform the vertex position to its proper location.
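For illustration, the weighted blend that the vertex shader performs looks something like this, written here as CPU-side C# with SlimDX types (in the demo it is done in HLSL):

    // CPU-side illustration of the skinning calculation done in the vertex shader.
    // boneTransforms holds the final transform for each bone for the current frame.
    public static Vector3 SkinVertex(Vector3 position, int[] boneIndices, float[] weights, Matrix[] boneTransforms) {
        var result = Vector3.Zero;
        for (var i = 0; i < boneIndices.Length; i++) {
            // Transform the vertex by each influencing bone, then blend by that bone's weight
            var posW = Vector3.TransformCoordinate(position, boneTransforms[boneIndices[i]]);
            result += posW * weights[i];
        }
        return result; // the weights are assumed to sum to 1
    }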



Loading 3D Models using Assimp.Net and SlimDX

Posted 03 October 2013 · 2,088 views
C#, SlimDX, DirectX 11, Models and 1 more...

So far, we have either worked with procedurally generated meshes, like our boxes and cylinders, or loaded very simple text-based mesh formats. For any kind of real application, however, we will need the capability to load meshes created by artists using 3D modeling and animation programs, like Blender or 3DS Max. There are a number of potential solutions to this problem; we could write an importer to read the specific file format of the program we are most likely to use for our engine. This presents its own host of problems, however: unless we write another importer, we are limited to using models created with that modeling software, or we have to rely on, or create, plugins for our modeling software to reformat models created using different software. There is a myriad of different 3D model formats in more-or-less common use, some open-source, some proprietary, and for many of them it is surprisingly hard to find good, authoritative documentation. All this makes creating a general-purpose, cross-format model importer a daunting prospect.

Fortunately, there exists an open-source library with support for loading most of the commonly used model formats, Assimp, the Open Asset Import Library. Although Assimp is written in C++, there exists a port for .NET, AssimpNet, which we will be able to use directly in our code, without the hassle of wrapping the native Assimp library ourselves. I am not a lawyer, but it looks like both Assimp and AssimpNet have very permissive licenses that should allow their use in any kind of hobby or professional project.

While we can use Assimp to load our model data, we will need to create our own C# model class to manage and render the model. For that, I will be following the example of Chapter 23 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0. This example will not be a straight conversion of his example code, since I will be ditching the m3d model format which he uses, and instead loading models from the old standard Microsoft DirectX .X format. The models I will be using come from the example code for Chapter 4 of Carl Granberg’s Programming an RTS Game with Direct3D, although you may use any of the supported Assimp model formats if you want to use other meshes instead. The full source for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the AssimpModel project.
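To give an idea of how little code the import itself takes, here is a hedged sketch of pulling the geometry out of a model file with AssimpNet; depending on the library version, the entry point is AssimpContext (newer releases) or AssimpImporter (older ones), and the file path is just a placeholder:

    using Assimp;

    // Sketch of importing a mesh with AssimpNet; triangulate and generate normals up front
    var importer = new AssimpContext();
    var scene = importer.ImportFile("Models/example.x",
        PostProcessSteps.Triangulate | PostProcessSteps.GenerateSmoothNormals | PostProcessSteps.FlipUVs);

    foreach (var mesh in scene.Meshes) {
        // Each Assimp mesh maps naturally onto one subset of our model class
        var positions = mesh.Vertices;      // list of Vector3D
        var normals = mesh.Normals;
        var indices = mesh.GetIndices();    // flattened triangle index list
        var material = scene.Materials[mesh.MaterialIndex];
        // ...copy into our own vertex/index buffers and subset table here
    }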


[Image: A more complicated model with multiple subsets, from the Assimp test model collection.]


Generating Random Terrains using Perlin Noise

Posted 01 October 2013 · 2,353 views
C#, SlimDX, DirectX 11, Terrain and 3 more...

Previously, we have used our Terrain class solely with heightmaps that we have loaded from a file. Now, we are going to extend our Terrain class to generate random heightmaps as well, which will add variety to our examples. We will be using Perlin Noise, which is a method of generating naturalistic pseudo-random textures developed by Ken Perlin. One of the advantages of using Perlin noise is that its output is deterministic; for a given set of control parameters, we will always generate the same noise texture. This is desirable for our terrain generation algorithm because it allows us to recreate the same heightmap given the same initial seed value and control parameters.

Because we will be generating random heightmaps for our terrain, we will also need to generate an appropriate blendmap to texture the resulting terrain. We will be using a simple method that assigns the different terrain textures based on the heightmap values; a more complex simulation might model the effects of weather, temperature and moisture to assign diffuse textures, but simply using the elevation works quite well with the texture set that we are using.
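As a sketch of the idea, a purely elevation-based assignment might look something like the following; the height thresholds are made-up illustrative values, and in practice we would also blend the weights near the band boundaries rather than switching textures abruptly:

    // Illustrative only: pick blend-map weights for four texture layers from the
    // normalized elevation. Thresholds are placeholder values, tuned per texture set.
    private static float[] GetBlendWeights(float height, float maxHeight) {
        var h = height / maxHeight; // normalize to [0, 1]

        // Each channel of the blend map drives one diffuse texture layer
        if (h < 0.3f)  return new[] { 1f, 0f, 0f, 0f }; // grass
        if (h < 0.6f)  return new[] { 0f, 1f, 0f, 0f }; // dirt
        if (h < 0.85f) return new[] { 0f, 0f, 1f, 0f }; // rock
        return new[] { 0f, 0f, 0f, 1f };                // snow
    }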

The code for this example is heavily influenced by Chapter 4 of Carl Granberg’s Programming an RTS Game with Direct3D, adapted into C# and taking advantage of some performance improvements that multi-core CPUs on modern computers offer us. The full code for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the RandomTerrainDemo project. In addition to the random terrain generation code, we will also look at how we can add Windows Forms controls to our application, which we will use to specify the parameters to use to create the random terrain.



Drawing a Loading Screen with Direct2D in SlimDX

Posted 30 September 2013 · 777 views
C#, SlimDX, DirectX11, Direct2D

I know that I have been saying that I will cover random terrain generation in my next post for several posts now, and if all goes well, that post will be published today or tomorrow. First, though, we will talk about Direct2D, and using it to draw a loading screen, which we will display while the terrain is being generated to give some indication of the progress of our terrain generation algorithm.

Direct2D is a new 2D rendering API from Microsoft built on top of DirectX 10 (or DirectX 11 in Windows 8). It provides functionality for rendering bitmaps, vector graphics and text in screen-space using hardware acceleration. It is positioned as a successor to the GDI rendering interface, and, so far as I can tell, the old DirectDraw API from earlier versions of DirectX. According to the documentation, it should be possible to use Direct2D to render 2D elements to a Direct3D 11 render target, although I have had some difficulties in actually getting that use-case to work. It does appear to work excellently for pure 2D rendering, say for menu or loading screens, with a simpler syntax than using Direct3D with an orthographic projection.

We will add Direct2D support to our base application class D3DApp, and use it to render a progress bar with some status text during our initialization stage, as we load and generate the Direct3D resources for our scene. Please note that the implementation presented here should only be used while there is no active Direct3D rendering occurring; due to my difficulties in getting Direct2D/Direct3D interoperation to work correctly, Direct2D will be using a different backbuffer than Direct3D, so interleaving Direct2D drawing with Direct3D rendering will result in some very noticeable flickering when the backbuffers switch.
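To give a sense of what the Direct2D drawing looks like, here is a rough sketch of rendering the progress bar each time we update the loading status, assuming a WindowRenderTarget and a brush created during initialization; the member names here are illustrative rather than the actual helper code:

    // Sketch only: _d2dRenderTarget and _whiteBrush are assumed fields created at startup
    private void DrawProgress(float progress) {
        _d2dRenderTarget.BeginDraw();
        _d2dRenderTarget.Clear(new Color4(0, 0, 0));

        // Outline of the progress bar
        var bar = new RectangleF(100, 300, 600, 40);
        _d2dRenderTarget.DrawRectangle(_whiteBrush, bar);

        // Filled portion, proportional to progress in [0, 1]
        var fill = new RectangleF(bar.X, bar.Y, bar.Width * progress, bar.Height);
        _d2dRenderTarget.FillRectangle(_whiteBrush, fill);

        // (the status text would be drawn here with RenderTarget.DrawText and a DirectWrite TextFormat)

        _d2dRenderTarget.EndDraw();
    }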

The inspiration for this example comes from Chapter 5 of Carl Granberg’s Programming an RTS Game with Direct3D, where a similar progress screen is implemented using DirectX 9 and the fixed-function pipeline. With the removal of the ID3DXFont interface from newer versions of DirectX, as well as the lack of the ability to clear just a portion of a DirectX 11 DeviceContext’s render target, a straight conversion of that code would require some fairly heavy lifting to implement in Direct3D 11, and so we will be using Direct2D instead. The full code for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git, and is implemented in the TerrainDemo and RandomTerrainDemo projects.



Terrain LOD for DirectX 10 Graphics Cards, using SlimDX and Direct3D 11

Posted 27 September 2013 · 737 views
C#, SlimDX, DirectX11, DirectX10 and 6 more...

Last time, we discussed terrain rendering, using the tessellation stages of the GPU to render the terrain mesh with distance-based LOD. That method required a DX11-compliant graphics card, since the Hull and Domain shader stages are new to Direct3D 11. According to the latest Steam Hardware survey, nearly 65% of gamers have a DX11 graphics card, which is certainly the majority of potential users, and only likely to increase in the future. Of the remaining 35% of gamers, 31% are still using DX10 graphics cards. While we can safely ignore the small percentage of the market that is still limping along on DX9 cards (I still have a GeForce Go 7400 in my oldest laptop, but that machine is seven years old and on its last legs), restricting ourselves to DX11 cards alone cuts out a third of the potential users of your application. For that reason, I’m going to cover an alternative, CPU-based implementation of our previous LOD terrain rendering example. If you have the option, I would suggest that you only bother with the previous DX11 method, as tessellating the terrain mesh yourself on the CPU is more complex, more error-prone, less performant, and produces a somewhat lower-quality result; if you must support DX10 graphics cards, however, this method or one similar to it will do the job, while the hull/domain shader method will not.

We will be implementing this rendering method as an additional render path in our Terrain class, used when we detect that the user has a DX10-compatible graphics card. This allows us to reuse a large chunk of the previous code. For the rest, we will adapt portions of the HLSL shader code that we previously implemented into C#, as well as draw some inspiration from Chapter 4 of Carl Granberg’s Programming an RTS Game with Direct3D. The full code for this example can be found at my GitHub repository, https://github.com/ericrrichards/dx11.git, under the TerrainDemo project.
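Selecting between the two render paths is straightforward; a sketch of the feature-level check might look like the following (the two draw methods named here are just placeholders for the respective code paths):

    // A DX10-class card reports FeatureLevel.Level_10_0 or Level_10_1, so we fall
    // back to the CPU-tessellated path described in this post
    var useGpuTessellation = device.FeatureLevel >= FeatureLevel.Level_11_0;

    if (useGpuTessellation) {
        DrawTerrainWithHullDomainShaders(dc);  // GPU LOD path from the previous post
    } else {
        DrawTerrainCpuLod(dc);                 // the DX10 fallback covered here
    }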



Dynamic Terrain Rendering with SlimDX and Direct3D 11

Posted 21 September 2013 · 1,004 views
C#, DirectX 11, SlimDX, Terrain and 4 more...

A common task for strategy and other games with an outdoor setting is the rendering of the terrain for the level. Probably the most convenient way to model a terrain is to create a triangular grid, and then perturb the y-coordinates of the vertices to match the desired elevations. This elevation data can be determined by using a mathematical function, as we have done in our previous examples, or by sampling an array or texture known as a heightmap. Using a heightmap to describe the terrain elevations gives us finer-grained control over the details of our terrain, and also allows us to define the terrain easily, either by using a procedural method to create random heightmaps, or by creating an image in a paint program.
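As a quick sketch of the grid-plus-heightmap idea (using made-up parameter names rather than the actual Terrain class code):

    // Build the grid vertex positions: a (rows x cols) grid in the xz-plane,
    // with each vertex's y-coordinate sampled from the heightmap
    private static Vector3[] BuildGridPositions(float[,] heightmap, float cellSpacing) {
        var rows = heightmap.GetLength(0);
        var cols = heightmap.GetLength(1);
        var width = (cols - 1) * cellSpacing;
        var depth = (rows - 1) * cellSpacing;

        var positions = new Vector3[rows * cols];
        for (var z = 0; z < rows; z++) {
            for (var x = 0; x < cols; x++) {
                positions[z * cols + x] = new Vector3(
                    x * cellSpacing - width * 0.5f,  // center the grid on the origin
                    heightmap[z, x],                 // perturb y with the heightmap elevation
                    depth * 0.5f - z * cellSpacing);
            }
        }
        return positions;
    }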

Because a terrain can be very large, we will want to optimize the rendering of it as much as possible. One easy way to save rendering cycles is to only draw the vertices of the terrain that can be seen by the player, using frustum culling techniques similar to those we have already covered. Another way is to render the mesh using a variable level of detail, by using the Hull and Domain shaders to render the terrain mesh with more polygons near the camera, and fewer in the distance, in a manner similar to the one we used for our displacement mapping effect. Combining the two techniques allows us to render a very large terrain, with a very high level of detail, at a high frame rate, although it does limit us to running on DirectX 11 compliant graphics cards.

We will also use a technique called texture splatting to render our terrain with multiple textures in a single rendering call. This technique involves using a separate texture, called a blend map, in addition to the diffuse textures that are applied to the mesh, in order to define which texture is applied to which portion of the mesh.

The code for this example was adapted from Chapter 19 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0, with some additional inspirations from Chapter 4 of Carl Granberg’s Programming an RTS Game with Direct3D. The full source for this example can be downloaded from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the TerrainDemo project.



Diving into the Tessellation Stages of Direct3D 11

Posted 17 September 2013 · 850 views
C#, Direct3D 11, SlimDX and 3 more...

In our last example on normal mapping and displacement mapping, we made use of the new Direct3D 11 tessellation stages when implementing our displacement mapping effect. For the purposes of the example, we did not examine too closely the concepts involved in making use of these new features, namely the Hull and Domain shaders. These new shader types are sufficiently complicated that they deserve a separate treatment of their own, particularly since we will continue to make use of them for more complicated effects in the future.

The Hull and Domain shaders are covered in Chapter 13 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0, which I had previously skipped over. Rather than use the example from that chapter, I am going to use the shader effect we developed for our last example instead, so that we can dive into the details of how the hull and domain shaders work in the context of a useful example that we have some background with.

The primary motivation for using the tessellation stages is to offload work from the CPU and main memory onto the GPU. We have already looked at a couple of the benefits of this technique in our previous post, but some of the advantages of using the tessellation stages are:
  • We can use a lower detail mesh, and specify additional detail using less memory-intensive techniques, like the displacement mapping technique presented earlier, to produce the final, high-quality mesh that is displayed.
  • We can adjust the level of detail of a mesh on-the-fly, depending on the distance of the mesh from the camera or other criteria that we define.
  • We can perform expensive computations, like collision detection and physics, on the simplified mesh stored in main memory, and still render the highly-detailed generated mesh.
The Tessellation Stages
The tessellation stages sit in the graphics pipeline between the vertex shader and the geometry shader. When we render using the tessellation stages, the vertices created by the vertex shader are not really the vertices that will be rendered to the screen; instead, they are control points which define a triangular or quad patch, which will be further refined by the tessellation stages into vertices. For most of our usages, we will either be working with triangular patches, with 3 control points, or quad patches, with 4 control points, which correspond to the corner vertices of the triangle or quad. Direct3D 11 supports patches with up to 32 control points, which might be suitable for rendering meshes based on Bezier curves.
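On the application side, the most visible difference when using the tessellation stages is that we tell the input assembler to treat our vertex buffer as a list of control points rather than triangles; for a quad patch in SlimDX, for example, that looks something like this:

    // Interpret each group of 4 vertices as the control points of one quad patch
    dc.InputAssembler.PrimitiveTopology = PrimitiveTopology.PatchListWith4ControlPoints;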

The tessellation stages can be broken down into three component stages:
  • Hull Shader Stage – The hull shader operates on each control point for a geometry patch, and can add, remove or modify its input control points before passing the patch on to the tessellator stage. The Hull shader also calculates the tessellation factors for a patch, which instruct the tessellator stage how to break the patch up into individual vertices. The hull shader is fully programmable, meaning that we need to define an HLSL function that will be evaluated to construct the patch control points and tessellation factors.
  • Tessellator Stage – The tessellator stage is a fixed-function (meaning that we do not have to write a shader for it) stage, which samples the input patch and generates a set of vertices that divide the patch, according to the tessellation factors supplied by the hull shader and a partitioning scheme, which defines the algorithm used to subdivide the patch. Vertices created by the tessellator are normalized; i.e. quad patch vertices are specified by referring to them by their (u,v) coordinates on the surface of the quad, while triangle patch vertices use barycentric coordinates to specify their location within the triangle patch.
  • Domain Shader Stage – The domain shader is a programmable stage (we need to write a shader function for it), which operates on the normalized vertices input from the tessellator stage, and maps them into their final positions within the patch. Typically, the domain shader will interpolate the final vertex value from the patch control points using the uv or barycentric coordinates output by the tessellator. The output vertices from the domain shader will then be passed along to the next stage in the pipeline, either the geometry shader or the pixel shader.

With these definitions out of the way, we can now dive into the displacement mapping effect from our previous example and examine just how the tessellation stages generate the displacement mapped geometry we see on the screen.


Bump and Displacement Mapping with SlimDX and Direct3D 11

Posted 16 September 2013 · 673 views
C#, SlimDX, Direct3D 11 and 2 more...

Today, we are going to cover a couple of additional techniques that we can use to achieve more realistic lighting in our 3D scenes. Going back to our first discussion of lighting, recall that thus far, we have been using per-pixel Phong lighting. This style of lighting was an improvement upon the earlier method of Gouraud lighting, interpolating the vertex normals over the resulting surface pixels and calculating the color of an object per-pixel, rather than per-vertex. Generally, the Phong model gives us good results, but it is limited, in that we can only specify the normals to be interpolated from at the vertices. For objects that should appear smooth, this is sufficient to give realistic-looking lighting; for surfaces that have more uneven textures applied to them, the illusion can break down, since the specular highlights computed from the interpolated normals will not match up with the apparent topology of the surface.

[Screenshot: textured columns rendered with per-pixel Phong lighting, showing smooth specular highlights]

In the screenshot above, you can see that the highlights on the nearest column are very smooth, and match the geometry of the cylinder. However, the column has a texture applied that makes it appear to be constructed out of stone blocks, jointed with mortar. In real life, such a material would have all kinds of nooks and crannies and deformities that would affect the way light hits the surface and create much more irregular highlights than in the image above. Ideally, we would want to model those surface details in our scene, for the greatest realism. This is the motivation for the techniques we are going to discuss today.

One technique to improve the lighting of textured objects is called bump or normal mapping. Instead of just using the interpolated pixel normal, we will combine it with a normal sampled from a special texture, called a normal map, which allows us to match the per-pixel normal to the perceived surface texture, and achieve more believable lighting.

The other technique is called displacement mapping. Similarly, we use an additional texture to specify the per-texel surface details, but this time, rather than a surface normal, the texture, called a displacement map or heightmap, stores an offset that indicates how much the texel sticks out or is sunken in from its base position. We use this offset to modify the position of the vertices of an object along the vertex normal. For best results, we can increase the tessellation of the mesh using the tessellation stages (the hull and domain shaders), so that the vertex resolution of our mesh approaches the resolution of our heightmap. Displacement mapping is often combined with normal mapping, for the highest level of realism.
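The displacement itself is simple arithmetic; conceptually (shown here as C#, though in the actual effect this happens in the domain shader), it looks something like the following, where the height scale and the sign convention are assumptions for the sketch:

    // Offset the position along the normal by the sampled height; with heightmap
    // samples in [0,1], a sample of 1 leaves the vertex at its base position and
    // smaller values push it inward (one common convention for this technique)
    public static Vector3 Displace(Vector3 position, Vector3 normal, float sampledHeight, float heightScale) {
        return position + normal * (heightScale * (sampledHeight - 1.0f));
    }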

[Image: Normal mapped columns]
[Image: Displacement mapped columns]

This example is based off of Chapter 18 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0. You can download the full source for this example from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the NormalDisplacementMaps project.

NOTE: You will need to have a DirectX 11 compatible video card in order to use the displacement mapping method presented here, as it makes use of the Domain and Hull shaders, which are new to DX 11.

