
Entries in this blog

ericrrichards22

So far, we have either worked with procedurally generated meshes, like our boxes and cylinders, or loaded very simple text-based mesh formats. For any kind of real application, however, we will need the capability to load meshes created by artists using 3D modeling and animation programs, like Blender or 3DS Max. There are a number of potential solutions to this problem; we could write an importer to read the specific file format of the program we are most likely to use for our engine. This presents a whole host of new problems, however: unless we write additional importers, we are limited to using models created with that one modeling package, or we have to rely on (or create) plugins for our modeling software to convert models created using different software. There is a myriad of different 3D model formats in more-or-less common use, some of which are open-source, some of which are proprietary; for many of both kinds, good, authoritative documentation is surprisingly hard to find. All of this makes the prospect of creating a general-purpose, cross-format model importer a daunting task.

Fortunately, there exists an open-source library with support for loading most of the commonly used model formats: Assimp, the Open Asset Import Library. Although Assimp is written in C++, there exists a port for .NET, AssimpNet, which we will be able to use directly in our code, without the hassle of wrapping the native Assimp library ourselves. I am not a lawyer, but it looks like both Assimp and AssimpNet have very permissive licenses that should allow one to use them in any kind of hobby or professional project.

While we can use Assimp to load our model data, we will need to create our own C# model class to manage and render the model. For that, I will be following the example of Chapter 23 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. This example will not be a straight conversion of his example code, since I will be ditching the m3d model format which he uses, and instead loading models from the old standard Microsoft DirectX .X format. The models I will be using come from the example code for Chapter 4 of Carl Granberg's Programming an RTS Game with Direct3D, although you may use any of the supported Assimp model formats if you want to use other meshes instead. The full source for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the AssimpModel project.
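
To give a flavor of what this looks like in practice, here is a minimal sketch of importing a mesh with AssimpNet (current NuGet builds expose an AssimpContext class; earlier releases used a similar AssimpImporter). Treat the details as illustrative rather than as a drop-in piece of the demo:

[code]
using Assimp;

public static class MeshImporter {
    public static Scene LoadModel(string path) {
        using (var context = new AssimpContext()) {
            // Triangulate everything and generate normals where the
            // model doesn't supply them.
            return context.ImportFile(path,
                PostProcessSteps.Triangulate |
                PostProcessSteps.GenerateSmoothingNormals);
        }
    }
}

// Each Mesh in scene.Meshes then exposes vertex positions, normals,
// texture coordinates, and GetIndices(), which we copy into our own
// vertex and index buffers for rendering.
[/code]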

model2.JPG
A more complicated model with multiple subsets, from the Assimp test model collection.

Read More...
ericrrichards22
http://www.youtube.com/watch?feature=player_embedded&v=WIOQuEJSpEg
Watch the intrepid red blob wind its way through the mountain slopes!

Last time, we discussed the implementation of our A* pathfinding algorithm, as well as some commonly used heuristics for A*. Now we're going to put all of the pieces together and get a working example to showcase this pathfinding work.

We'll need to slightly rework our mouse picking code to return the tile in our map that was hit, rather than just the bounding box center. To do this, we're going to need to modify our QuadTree, so that the leaf nodes are tagged with the MapTile that their bounding boxes enclose.

We'll also revisit the function that calculates which portions of the map are connected, as the original method in Part 1 was horribly inefficient on some maps. Instead, we'll use a different method, which uses a series of depth-first searches to calculate the connected sets of MapTiles in the map. This method is much faster, particularly on maps that have more disconnected sets of tiles.
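
In sketch form, the method floods out from each unvisited walkable tile, stamping everything reachable from it with the same set id (the MapTile members here are illustrative; assume Set starts at -1 and MapTile is a reference type):

[code]
using System.Collections.Generic;
using System.Drawing;

// Label the connected sets of walkable tiles with an iterative DFS.
private static void CalculateConnectedSets(MapTile[,] tiles, int w, int h) {
    var set = 0;
    for (var x = 0; x < w; x++)
        for (var y = 0; y < h; y++)
            if (tiles[x, y].Walkable && tiles[x, y].Set < 0)
                FloodFill(tiles, w, h, x, y, set++);
}

private static void FloodFill(MapTile[,] tiles, int w, int h, int startX, int startY, int set) {
    var stack = new Stack<Point>();            // explicit stack avoids deep recursion
    stack.Push(new Point(startX, startY));
    while (stack.Count > 0) {
        var p = stack.Pop();
        if (p.X < 0 || p.Y < 0 || p.X >= w || p.Y >= h) continue;
        var tile = tiles[p.X, p.Y];
        if (!tile.Walkable || tile.Set >= 0) continue;
        tile.Set = set;                        // claim this tile for the current set
        stack.Push(new Point(p.X + 1, p.Y));
        stack.Push(new Point(p.X - 1, p.Y));
        stack.Push(new Point(p.X, p.Y + 1));
        stack.Push(new Point(p.X, p.Y - 1));
    }
}
[/code]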

We'll also need to develop a simple class to represent our unit, which will allow it to update and render itself, as well as maintain pathfinding information. The unit class implementation used here is based in part on material presented in Chapter 9 of Carl Granberg's Programming an RTS Game with Direct3D.

Finally, we'll add an additional texture map to our rendering shader, which will draw impassable terrain using a special texture, so that we can easily see the obstacles that our unit will be navigating around. You can see this in the video above; the impassable areas are shown with a slightly darker texture, with dark rifts.

The full code for this example can be found on my GitHub repository, https://github.com/ericrrichards/dx11.git, under the 33-Pathfinding project.

Read more...
ericrrichards22
Previously, we have used our Terrain class solely with heightmaps that we have loaded from a file. Now, we are going to extend our Terrain class to generate random heightmaps as well, which will add variety to our examples. We will be using Perlin Noise, which is a method of generating naturalistic pseudo-random textures developed by Ken Perlin. One of the advantages of using Perlin noise is that its output is deterministic; for a given set of control parameters, we will always generate the same noise texture. This is desirable for our terrain generation algorithm because it allows us to recreate the same heightmap given the same initial seed value and control parameters.
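
As a rough sketch of how a heightmap falls out of this: sum several octaves of noise, doubling the frequency and halving the amplitude at each octave, so large rolling features pick up progressively finer detail (Noise2 stands in for a 2D Perlin noise function; the parameter names are mine, not necessarily the demo's):

[code]
// Fractal sum of noise octaves; deterministic for a fixed seed, since
// Noise2 is a pure function of its inputs.
public static float[,] GenerateHeights(int size, int seed, int octaves,
                                       float baseFrequency, float baseAmplitude) {
    var heights = new float[size, size];
    for (var y = 0; y < size; y++) {
        for (var x = 0; x < size; x++) {
            float h = 0, freq = baseFrequency, amp = baseAmplitude;
            for (var i = 0; i < octaves; i++) {
                h += Noise2(seed, x * freq, y * freq) * amp;
                freq *= 2.0f;    // each octave adds finer detail...
                amp *= 0.5f;     // ...at a lower intensity
            }
            heights[x, y] = h;
        }
    }
    return heights;
}
[/code]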

Because we will be generating random heightmaps for our terrain, we will also need to generate an appropriate blendmap to texture the resulting terrain. We will be using a simple method that assigns the different terrain textures based on the heightmap values; a more complex simulation might model the effects of weather, temperature and moisture to assign diffuse textures, but simply using the elevation works quite well with the texture set that we are using.
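
In sketch form, this just means bucketing each (normalized) height sample into elevation bands and emitting one texture weight per channel (the band edges are made-up values, tuned to taste per texture set):

[code]
using SlimDX;

// Maps a height in [0,1] to RGBA blend weights; each channel selects one
// diffuse texture, e.g. X = sand, Y = grass, Z = rock, W = snow.
public static Vector4 BlendWeightForHeight(float t) {
    if (t < 0.3f)  return new Vector4(1, 0, 0, 0);  // lowlands: sand
    if (t < 0.6f)  return new Vector4(0, 1, 0, 0);  // middle elevations: grass
    if (t < 0.85f) return new Vector4(0, 0, 1, 0);  // high ground: rock
    return new Vector4(0, 0, 0, 1);                 // peaks: snow
}
[/code]

In practice you would blend smoothly across the band boundaries (and perhaps jitter them with a little noise) to avoid hard seams between textures.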

The code for this example is heavily influenced by Chapter 4 of Carl Granberg's Programming an RTS Game with Direct3D, adapted into C# and taking advantage of some performance improvements that multi-core CPUs on modern computers offer us. The full code for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the RandomTerrainDemo project. In addition to the random terrain generation code, we will also look at how we can add Windows Forms controls to our application, which we will use to specify the parameters to use to create the random terrain.

Read More...

ericrrichards22

A little over two years ago, I first saw Amit Patel's article on Polygonal Map Generation, and thought it was incredibly cool. The use of Voronoi regions created a very nice, slightly irregular look, compared to grid-based terrains. At the time, I had just finished up working on my DX11 random terrain code, and it looked like a fun project to try to tackle.

I then proceeded to spend several months messing around with different implementations of Fortune's Algorithm in C# to get started and generate the Voronoi polygons used to create a terrain along the lines of Amit's example. At this point, I've lost track of all of the different versions that I've sort of melded together to produce the code that I've ended up with, but some of the more influential are:


  • The original implementation of the algorithm by Steve Fortune
  • This JS version by Nicolas Garcia Belmonte
  • This C++ version, which is a port of the ActionScript version used in the original Amit Patel map generator.

The original goal was to create a map generator, suitable for a kind of overworld/strategic-level map. But, alas, life happened, and I got bogged down before I got that far. I did, however, wind up with a fairly cool tool for generating Voronoi diagrams. Because I had spent so much time trying to iron out bugs in my implementation of the algorithm, I ended up producing a WinForms application that allows you to step through the algorithm one iteration at a time, visualizing the sites that are added to the diagram, the vertices and edges, as well as the positions of the beach and sweep lines. Eventually I also worked in options to show the circles through three sites that define where a Voronoi vertex is located, as well as the Delaunay triangulation of the sites.
72_voronoi.PNG
Voronoi regions, with the edges drawn in white, and the sites as the blue points.

72_delauney.PNG
Delaunay triangulation, with triangle edges in green.

72_both.PNG
Showing both the Voronoi regions and the Delaunay triangles.

I won't pretend that this code is fantastic, but it's kind of interesting, and I worked at it for quite a while, so better to have it out here than moldering on a hard drive somewhere. If you'd like to see the source, it is available on GitHub. You can also download the executable below if you like - I make no promises that it will work everywhere, but it is a pretty standard .Net 4.5 Windows Forms application. I've also got some videos below the fold, if you'd like to see this in action.

Download Voronoi

Read more...

ericrrichards22
This time around, we are going to be porting the Hills Demo of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0 from C++ to C# and SlimDX.

This will be very similar to the previous entry, where we drew a colored box, with a few changes:

  • We'll start developing a class to generate mesh geometry, replacing functionality that previous versions of DirectX provided in the D3DX libraries (a minimal sketch of the idea follows this list).
  • We'll be using pre-compiled effect files, rather than compiling them at runtime.
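
A sketch of the sort of helper this becomes (the method's shape mirrors the old D3DX-style grid generation; the names are approximate): build an m x n grid of vertices in the xz-plane, then displace each vertex's height with a function of x and z to produce the hills.

[code]
using System;
using System.Collections.Generic;
using SlimDX;

// Builds the vertex positions of an m x n grid covering width x depth,
// centered at the origin in the xz-plane.
public static List<Vector3> CreateGrid(float width, float depth, int m, int n) {
    var verts = new List<Vector3>();
    float dx = width / (n - 1), dz = depth / (m - 1);
    for (var i = 0; i < m; i++) {
        var z = 0.5f * depth - i * dz;
        for (var j = 0; j < n; j++) {
            var x = -0.5f * width + j * dx;
            // Height function used by the Hills demo in Luna's book
            var y = 0.3f * (z * (float)Math.Sin(0.1f * x) +
                            x * (float)Math.Cos(0.1f * z));
            verts.Add(new Vector3(x, y, z));
        }
    }
    return verts;   // index generation (two triangles per quad) omitted
}
[/code]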

Read More...
ericrrichards22
For the last six or eight months, off and on, I've been trying to write some code that will create a Voronoi diagram from a set of random points, inspired by Amit Patel's Polygonal Map Generation demo. In the last week or so, I had a bit of a break-through, in that I finally managed to get an implementation of Fortune's Algorithm put together that would actually work and generate the Voronoi edges and vertices correctly so that I could render them. Since most of the existing implementations I've been trying to work from are either broken or an enormous pain in the ass to try to understand, I'm planning on writing up my implementation, once I'm finally happy with it. I've still got a fair way to go, since right now I'm only outputting the graph edges and vertices successfully, and while that renders prettily, I haven't gotten to the point that I have all of the really useful graph connectivity information being output yet. Hopefully, if everything goes well, I'll manage to finish that up by the end of the week, but this has proven to be a real bugger of a problem to solve.

As I mentioned, most of the implementations I've studied are either incomplete or broken. Particularly, I haven't found an example yet that correctly clips the edges of the generated Voronoi polygons to a rectangular area. So long as all you care about is rendering the graph to the screen, this isn't a big problem, since most 2D graphics libraries will happily draw lines that extend beyond the drawable area of the screen. However, if you're trying to subdivide a rectangular region into Voronoi polygons, it's kind of nice to have your edges actually clipped to that region. What I'm envisioning doing with this code eventually is using it to render 3D maps, subdivided into territories, but with an overlaid rectangular grid for moving units - think of the strategic map in Total War games or Lords of the Realm.

After I got frustrated with trying to make the broken clipping code I was working from perform correctly, I trolled Google and came across the Cohen-Sutherland line-clipping algorithm. This looked to be exactly what I needed, and wonder of wonders, the Wikipedia page actually featured a readable, reasonable example, rather than the obtuse academic pseudo-code you usually find there (see the Fortune's Algorithm article...). The only thing I would caution you about with the Wikipedia example is that it uses a coordinate system where the origin is the lower-left bound of the rectangle, as usually encountered in mathematics, rather than the upper-left origin we commonly use in computer graphics, so some tweaking is necessary.
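
Translated into C#, the heart of the algorithm looks roughly like this (a sketch following the Wikipedia description, adjusted for screen coordinates where y grows downward; the System.Drawing types stand in for the demo's own structures):

[code]
using System;
using System.Drawing;

[Flags]
enum OutCode { Inside = 0, Left = 1, Right = 2, Bottom = 4, Top = 8 }

static OutCode ComputeOutCode(float x, float y, RectangleF r) {
    var code = OutCode.Inside;
    if (x < r.Left) code |= OutCode.Left;
    else if (x > r.Right) code |= OutCode.Right;
    if (y < r.Top) code |= OutCode.Top;        // screen coords: top edge is min y
    else if (y > r.Bottom) code |= OutCode.Bottom;
    return code;
}

// Clips the segment p0-p1 to r in place; returns false if it lies entirely outside.
static bool ClipLine(ref PointF p0, ref PointF p1, RectangleF r) {
    var c0 = ComputeOutCode(p0.X, p0.Y, r);
    var c1 = ComputeOutCode(p1.X, p1.Y, r);
    while (true) {
        if ((c0 | c1) == OutCode.Inside) return true;   // both inside: accept
        if ((c0 & c1) != OutCode.Inside) return false;  // share an outside zone: reject
        // Pick an endpoint that is outside and slide it to the rectangle edge.
        var outCode = c0 != OutCode.Inside ? c0 : c1;
        float x = 0, y = 0;
        if (outCode.HasFlag(OutCode.Bottom)) {
            x = p0.X + (p1.X - p0.X) * (r.Bottom - p0.Y) / (p1.Y - p0.Y); y = r.Bottom;
        } else if (outCode.HasFlag(OutCode.Top)) {
            x = p0.X + (p1.X - p0.X) * (r.Top - p0.Y) / (p1.Y - p0.Y); y = r.Top;
        } else if (outCode.HasFlag(OutCode.Right)) {
            y = p0.Y + (p1.Y - p0.Y) * (r.Right - p0.X) / (p1.X - p0.X); x = r.Right;
        } else if (outCode.HasFlag(OutCode.Left)) {
            y = p0.Y + (p1.Y - p0.Y) * (r.Left - p0.X) / (p1.X - p0.X); x = r.Left;
        }
        if (outCode == c0) { p0 = new PointF(x, y); c0 = ComputeOutCode(p0.X, p0.Y, r); }
        else { p1 = new PointF(x, y); c1 = ComputeOutCode(p1.X, p1.Y, r); }
    }
}
[/code]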

The code for this example can be downloaded from my github repository, at https://github.com/ericrrichards/dx11.git. The algorithm is included in the Algorithms project, while the example code is in the CohenSutherlandExample project. The example code is a quick-and-dirty mess of GDI drawing code, so I'm going to focus on the algorithm code.

[screenshot: the generated lines before clipping]
[screenshot: the lines clipped to the rectangle]

Read more...
ericrrichards22
A couple of weeks ago as I was browsing HackerNews, I stumbled onto an article about creating spherical procedural maps by Andy Gainey. Procedural terrain/map generation is always something I find interesting, having grown up slightly obsessed with Civilization and its successors. Various methods of tiling a sphere in order to make a game grid have been bouncing around the back of my mind for a while now - unfortunately finding time to explore some of those ideas has been problematic. But, after reading through Mr. Gainey's piece and looking over the source for his demo, I got inspired to take a stab at something similar. My first attempt was to try to directly port his Javascript implementation, which got bogged down in the impedance mismatch between going from JS to C# and WebGL to DirectX at the same time, so I stepped back, and decided to come at the idea again, working in smaller steps.

The first step was to generate a spherical triangle mesh. For this purpose, the spherical mesh that we have previously worked with is not very well suited, as its topology is very inconsistent, particularly around the poles, as you can see in the screenshot below. Instead of this kind of latitude/longitude subdivision, we will use an alternate method, which starts from an icosahedron, then subdivides each face of the icosahedron into four triangles, and finally projects the resulting vertices onto the sphere. This produces a much more regular tessellation, known as a geodesic grid, as you can see in the right screenshot below. This tessellation is not without artifacts - at the vertices of the original icosahedron, the resulting geodesic will have five edges, while all other vertices have six, similar to a real-life soccer ball.


regularSphere.PNG
Latitude/longitude tessellation, with artifacts at the poles
geosphere.PNG
Geodesic tessellation

Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, Chapter 6, presented a method of generating a geodesic sphere, which I previously skipped over. The code for this example is based upon his example, with some improvements to eliminate redundant vertices.
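
The subdivision step itself is quite compact. A sketch (ignoring the vertex-deduplication bookkeeping mentioned above): split each triangle into four using its edge midpoints, and after the final pass push every vertex out to the sphere's radius.

[code]
using System.Collections.Generic;
using SlimDX;

// One subdivision pass: each triangle (a, b, c) becomes four smaller ones.
static void Subdivide(List<Vector3> verts, List<int> indices) {
    var newIndices = new List<int>();
    for (var i = 0; i < indices.Count; i += 3) {
        int ia = indices[i], ib = indices[i + 1], ic = indices[i + 2];
        // Naive midpoints; real code caches these per edge so that shared
        // edges don't produce the redundant vertices mentioned above.
        int iab = verts.Count; verts.Add((verts[ia] + verts[ib]) * 0.5f);
        int ibc = verts.Count; verts.Add((verts[ib] + verts[ic]) * 0.5f);
        int ica = verts.Count; verts.Add((verts[ic] + verts[ia]) * 0.5f);
        newIndices.AddRange(new[] {
            ia, iab, ica,   iab, ib, ibc,
            ica, ibc, ic,   iab, ibc, ica
        });
    }
    indices.Clear();
    indices.AddRange(newIndices);
}

// Projecting each vertex onto the sphere turns the icosahedron into a geodesic.
static void ProjectToSphere(List<Vector3> verts, float radius) {
    for (var i = 0; i < verts.Count; i++)
        verts[i] = Vector3.Normalize(verts[i]) * radius;
}
[/code]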

The code for this example can be obtained from my GitHub repository.


Read more...
ericrrichards22
Sorry for the hiatus, I've been very busy with work and life the last couple weeks. Today, we're going to look at loading meshes with skeletal animations in DirectX 11, using SlimDX and Assimp.Net in C#. This will probably be our most complicated example yet, so bear with me. This example is inspired by Chapter 25 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, although with some heavy modifications. Mr. Luna's code uses a custom animation format, which I found less than totally useful; realistically, we would want to be able to load skinned meshes exported in one of the commonly used 3D modeling formats. To facilitate this, we will again use the .NET port of the Assimp library, Assimp.Net. The code I am using to load and interpret the animation and bone data is heavily based on Scott Lee's Animation Importer code, ported to C#. The full source for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git under the SkinnedModels project. The meshes used in the example are taken from the example code for Carl Granberg's Programming an RTS Game with Direct3D.

Skeletal animation is the standard way to animate 3D character models. Generally, a character model will be represented by two structures: the exterior vertex mesh, or skin, and a tree of control points specifying the joints or bones that make up the skeleton of the mesh. Each vertex in the skin is associated with one or more bones, along with a weight that determines how much influence the bone should have on the final position of the skin vertex. Each bone is represented by a transformation matrix specifying the translation, rotation and scale that determines the final position of the bone. The bones are defined in a hierarchy, so that each bone's transformation is specified relative to its parent bone. Thus, given a standard bipedal skeleton, if we rotate the upper arm bone of the model, this rotation will propagate to the lower arm and hand bones of the model, analogously to how our actual joints and bones work.
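
The hierarchy propagation boils down to a single pre-order walk: each bone's to-root transform is its local transform concatenated with its parent's to-root transform. A sketch (Bone is a hypothetical node type; SlimDX's row-vector convention puts the child's transform on the left):

[code]
using System.Collections.Generic;
using SlimDX;

class Bone {
    public Matrix LocalTransform;    // relative to the parent bone
    public Matrix ToRoot;            // accumulated, filled in by the walk
    public List<Bone> Children = new List<Bone>();
}

static void ComputeToRoot(Bone bone, Matrix parentToRoot) {
    // Local transform applies first, then everything up the chain; rotating
    // an upper arm therefore carries the lower arm and hand along with it.
    bone.ToRoot = bone.LocalTransform * parentToRoot;
    foreach (var child in bone.Children)
        ComputeToRoot(child, bone.ToRoot);
}

// Kick off the walk with: ComputeToRoot(skeletonRoot, Matrix.Identity);
[/code]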

Animations are defined by a series of keyframes, each of which specifies the transformation of each bone in the skeleton at a given time. To get the appropriate transformation at a given time t, we linearly interpolate between the two closest keyframes. Because of this, we will typically store the bone transformations in a decomposed form, specifying the translation, scale and rotation components separately, and build the transformation matrix at a given time from the interpolated components. A skinned model may contain many different animation sets; for instance, we'll commonly have a walk animation, an attack animation, an idle animation, and a death animation.
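
Interpolating a single bone between two bracketing keyframes then looks something like this sketch (Keyframe is a hypothetical struct holding the decomposed components; the Lerp/Slerp helpers and matrix factories are SlimDX's):

[code]
using SlimDX;

struct Keyframe {
    public float Time;
    public Vector3 Translation;
    public Vector3 Scale;
    public Quaternion Rotation;
}

static Matrix Interpolate(Keyframe k0, Keyframe k1, float t) {
    // Normalized blend factor between the two keyframe times
    var s = (t - k0.Time) / (k1.Time - k0.Time);
    var scale = Vector3.Lerp(k0.Scale, k1.Scale, s);
    var rotation = Quaternion.Slerp(k0.Rotation, k1.Rotation, s); // spherical lerp for rotations
    var translation = Vector3.Lerp(k0.Translation, k1.Translation, s);
    // Rebuild the bone matrix from the interpolated components: S * R * T
    return Matrix.Scaling(scale) *
           Matrix.RotationQuaternion(rotation) *
           Matrix.Translation(translation);
}
[/code]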

The process of loading an animated mesh can be summarized as follows:

  1. Extract the bone hierarchy of the model skeleton.
  2. Extract the animations from the model, along with all bone keyframes for each animation.
  3. Extract the skin vertex data, along with the vertex bone indices and weights.
  4. Extract the model materials and textures.

To draw the skinned model, we need to advance the animation to the correct frame, then pass the bone transforms to our vertex shader, where we will use the vertex indices and weights to transform the vertex position to the proper location.
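
On the C# side, that boils down to setting an array of matrices on an effect variable each frame. A sketch using the SlimDX effects API (the shader variable name is an illustrative stand-in):

[code]
// finalTransforms[i] = boneOffset[i] * toRoot[i], recomputed each frame
// as the animation time advances.
var boneVar = effect.GetVariableByName("gBoneTransforms").AsMatrix();
boneVar.SetMatrixArray(finalTransforms);
[/code]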

Read More...

ericrrichards22
Just a quick update today. I've updated the 3D model loading code to use the latest version of AssimpNet that is on NuGet now. The latest code is updated on GitHub.

The biggest change appears to be that the AssimpImporter/Exporter classes have been merged into a single AssimpContext class that can do both. Some of the GetXXX methods to retrieve vertex elements and textures also appear to have been replaced with properties. The way that one hooks into the internal Assimp logging has also changed. Besides some additional file format support, I haven't discovered many other changes yet, but I haven't gotten around to exploring it that closely. It's mildly irritating that the interface has changed, but hey, at least it is being actively worked on (which is more than I can say for some of my stuff at times...).

I've been quite busy with work lately, so I haven't had a huge amount of time to work on anything here. I'm also a little burned out on doing straight DX11 graphics stuff, so I think I'm going to try to transition into doing some more applied work for a while. I've got some stuff in the pipeline on making simple games with Direct2D, and ideally I'd like to get involved with One Game a Month, to give me a little additional motivation.


Read more...
ericrrichards22
Tuesday was the anniversary of my first real post on this blog. For the most part, I've tried to keep my content here on the technical side of things, but, what the hell, this is a good time to reflect on a year of blogging - what went well, what went poorly, and where I'm going from here.

What I Meant to Accomplish (And What I Actually Accomplished...)


Content


I restarted this blog about a year ago to document my attempt at learning DirectX 11, using Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. In my day job, I mostly code ASP.NET and WinForms applications using C#, so I decided to convert the C++ examples from Mr. Luna's book into C#, using SlimDX as my managed DirectX wrapper. SlimDX appeared to me to be slightly more mature than its main competitor, SharpDX, as its documentation was a little more complete, and it had been out in the wild a little bit longer, so there was a bit more third-party information (StackOverflow, other blogs, GameDev.net forum postings, etc.) on it. I suppose I could have also gone with XNA, although Microsoft appears to have abandoned any new development on it (last release 9/16/2010...), and I felt like SlimDX's simple wrapper around the DirectX library would be easier to translate than shoehorning into the XNA model, not to mention wrassling with the XNA Content Pipeline.

My initial goal was to work through the book, converting each of the examples presented and then blogging about the process. Except for the chapters on Compute Shaders and Quaternions (which I have yet to tackle, mostly because the examples are not terribly interesting), I completed that goal by the middle of November of last year. From there, I started incorporating elements from Carl Granberg's Programming an RTS Game with Direct3D into the terrain rendering code that Luna's book presented, as well as dabbling in integrating Direct2D and SpriteTextRenderer to handle 2D drawing.

After that, my intention was to start working my way through Ian Millington's book, Game Physics Engine Development. This is about where I ran out of steam. Between the hassle of trying to reconcile the code examples from this book, which were based on OpenGL and somewhat less self-contained than what I had been working on previously, various issues in my personal and work life, and the general malaise of an especially cold, dark winter here in New England, my impetus to work on my side projects faded away spectacularly. If any of you followed this regularly, I'm sure you've noticed that it's been almost four months since I've posted anything new, and before that there was another dry spell of more than a month.

With the arrival of summer and a move that reduces both the strain on my finances and the number of hours per day I spend driving back and forth to work considerably, I've found that my mood has improved by leaps and bounds, and I have the extra energy to expend on coding outside of work again, finally. Right now, I've got a number of new things I'm working through, and a number of ideas in the pipeline that should be making their way up here in the near future.


Read More...
ericrrichards22
Particle systems are a technique commonly used to simulate chaotic phenomena, which are not easy to render using normal polygons. Some common examples include fire, smoke, rain, snow, or sparks. The particle system implementation that we are going to develop will be general enough to support many different effects; we will be using the GPU's StreamOut stage to update our particle systems, which means that all of the physics calculations and logic to update the particles will reside in our shader code, so that by substituting different shaders, we can achieve different effects using our base particle system implementation.
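
At a high level, each frame ping-pongs between two vertex buffers: one pass runs the particles through a stream-out geometry shader to produce the updated particle list, then the buffers are swapped and the updated list is drawn. A rough sketch of the SlimDX calls involved (buffer and technique names are illustrative, and input-assembler binding is omitted for brevity):

[code]
// Pass 1: advance the simulation entirely on the GPU via stream-out.
context.StreamOutput.SetTargets(new StreamOutputBufferBinding(streamOutVB, 0));
streamOutTech.GetPassByIndex(0).Apply(context);
if (firstRun) { context.Draw(1, 0); firstRun = false; } // seed with the emitter particle
else context.DrawAuto();     // draw however many particles currently exist
context.StreamOutput.SetTargets(null);                  // unbind the SO target

// Ping-pong: the buffer we just wrote becomes the buffer we draw from.
var temp = drawVB; drawVB = streamOutVB; streamOutVB = temp;

// Pass 2: render the current particle list.
drawTech.GetPassByIndex(0).Apply(context);
context.DrawAuto();
[/code]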

The code for this example was adapted from Chapter 20 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, ported to C# and SlimDX. The full source for the example can be found at my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the ParticlesDemo project.

Below, you can see the results of adding two particle systems to our terrain demo. At the center of the screen, we have a flame particle effect, along with a rain particle effect.

Read More...

ericrrichards22
This weekend, I updated my home workstation from Windows 8 to Windows 8.1. Just before doing this, I had done a bunch of work on my SSAO implementation, which I was intending to write up here once I got back from a visit home to do some deer hunting and help my parents get their firewood in. When I got back, I fired up my machine, and loaded up VS to run the SSAO sample to grab some screenshots. Immediately, my demo application crashed while trying to create the DirectX 11 device. I had done some work over the weekend to downgrade the vertex and pixel shaders in the example to SM4, so that they could run on my laptop, which has an older integrated Intel video card that only supports DX10.1. I figured that I had borked something up in the process, so I tried running some of my other, simpler demos. The same error message popped up: DXGI_ERROR_UNSUPPORTED. Now, I am running a GTX 560 Ti, so I know Direct3D 11 should be supported.

However, I have been using Nvidia's driver update tool to keep myself at the latest and greatest driver version, so I figured that perhaps the latest driver I downloaded had some bugs. Go to Nvidia's site, check for any updates. Looks like I have the latest driver. Hmm...

So I turned again to Google, trying to find some reason why I would suddenly be unable to create a DirectX device. The fourth result I found was this: http://stackoverflow.com/questions/18082080/d3d11-create-device-debug-on-windows-8-1. Apparently I need to download the Windows 8.1 SDK now. I'm guessing that, since I had VS installed prior to updating, I didn't get the latest SDK installed, and the Windows 8 SDK, which I did have installed, wouldn't cut it anymore, at least when trying to create a debug device. So I went ahead and installed the 8.1 SDK from here. Restart VS, rebuild the project in question, and now it runs perfectly. Argh. At least it's working again; I just wish I didn't have to waste an hour futzing around with it...

Originally posted at http://www.richardssoftware.net/2013/11/windows-81-and-slimdx.html

ericrrichards22
For the past few weeks, I've been once again noodling on the idea of starting a .NET port of a classic Id FPS. As a kid on my first computer, an off-brand 486 with DOS, I just hit the tail end of the good old days of shareware. And amongst all the floppy disks of kiddy and educational software, sliming Gruzzles couldn't really hold a candle to exploring Indiana Jones and the Last Crusade-esque Gothic castles and knifing Nazis.

While the original source-code for Wolfenstein 3D has been available for some time, it is a bit of a slog trying to wade through C code that was written 20 years ago, with near and far pointers, blitting directly to VGA memory, and hand-rolled assembly routines, let alone build the project successfully. Consequently, converting over to C# is a bit of a struggle, particularly for some of the low-level pointer manipulation and when loading in binary assets - it is very helpful to be able to step through both codebases side by side in the debugger to figure out any discrepancies.

Because of these difficulties, I have started looking at the Chocolate Wolfenstein 3D project, by Fabien Sanglard. Mr. Sanglard is a great student of the Id Software engine source code, and has done several very nice writeups of the different engines open-sourced by Id. He was even planning on writing a book-length analysis of Wolfenstein 3D, which hopefully is still underway. Chocolate Wolfenstein 3D is a more modernized C++ conversion of the original Id code, with some of the more gnarly bits smoothed out, and using SDL for cross-platform window-management, graphics, input, and sound. Even better, it can be built and run using current versions of Visual Studio.

The only problem I had with the Chocolate Wolfenstein 3D GitHub repository is that it is missing some dependencies and requires a small amount of tweaking in order to get it to build and run on Visual Studio 2013. These steps are not particularly difficult, but if you simply clone the repo and hit F5, it doesn't work right out of the box. If you are working on a Mac, there is a very nice guide on setting up the project in XCode, but I have not found a similar guide for Windows, so I decided to document the steps that I went through and share those here.

titlescreen.png


Read more...
ericrrichards22
Not really game related, but something I've been working on lately.

Recently, I have been using OWIN a good deal for developing internal web applications. One of the chief benefits of this is that OWIN offers the ability to host its own HTTP server, which allows me to get out of the business of installing and configuring IIS on Windows, which is one of the main points of pain when deploying the products I work on to our customers. Unfortunately, when I first started using OWIN, there was not a version of ASP.NET MVC available that was compatible with OWIN. Most of my previous experience with programming web servers has been based on MVC (except for briefly experiencing WebForms hell), so finding a similar framework that was compatible with OWIN was one of my first priorities.

In my search, I discovered Nancy, a fairly similar MVC-style framework which offered OWIN support. It also was capable of using the same Razor view engine as ASP.NET MVC, with some minor differences, so I was able to convert existing IIS ASP.NET MVC applications to OWIN/Nancy using most of the existing views and front-end code. At some point I plan to write an article illustrating how one would do this type of conversion, but for now, I'm going to examine one particular gotcha I discovered when converting my personal Netflix-type video application to OWIN/Nancy: serving HTML5 video files.
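
As a taste of what this looks like, here is a minimal Nancy module and OWIN self-host (a sketch; the route, view name, and port are placeholders):

[code]
using Nancy;
using Microsoft.Owin.Hosting;
using Owin;

public class VideosModule : NancyModule {
    public VideosModule() {
        // Routes are declared in the constructor, in place of MVC's
        // controller actions; View[] plays the role of returning a view.
        Get["/videos"] = _ => View["VideoList"];
    }
}

public class Startup {
    public void Configuration(IAppBuilder app) {
        app.UseNancy();   // extension method from the Nancy.Owin package
    }
}

// Self-hosted HTTP server - no IIS involved:
// using (WebApp.Start<Startup>("http://localhost:8080")) { Console.ReadLine(); }
[/code]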

Read more...
ericrrichards22
Many moons ago now, I picked up a copy of HLSL Development Cookbook by Doron Feinstein. I had intended to work my way through it after I finished up Luna's Introduction to 3D Game Programming with Direct3D 11.0, but winter and life kind of got in the way...

Another difficulty I had with this book was that the code samples made heavy use of DXUT, which was the semi-official Direct3D sample framework. With the Windows 8 transition, DXUT is sorta-kinda deprecated now, and besides, SlimDX doesn't really have support for it - SlimDX is more of a bare-metal binding to the underlying DirectX framework, which DXUT sits on top of.

So in any case, I was going to have to do a fair amount of rework to adapt the samples from this book to fit into the framework of code that I'd built up over the past year or so. Swapping the UI-related stuff out for WinForms controls hosted in my SlimDX app didn't seem as though it would be that hard, but a bigger stumbling block was the fact that all of the sample meshes provided with the code used the .sdkmesh format. SDKMesh is a 3D model format that used to be used in a lot of the DirectX sample code, after .X models kind of fell into disfavor (of course, now it appears they are using yet another model format, .CMO). SDKMesh is unfortunately not supported by Assimp, so I can't use Assimp.Net to load SDKMesh meshes. Fortunately, it is a relatively simple binary format. The documentation, at least that provided in the DirectX July 2010 SDK, is a little spotty, and not totally correct, but it's possible to figure things out by looking at the source code for DXUT and another C# SDKMesh loader that appears to have disappeared from the internet since I first found it...

Well, this has sat in my todo pile for long enough, let's get started. As always, you can get the code for this example from my public GitHub repository.

sdkmesh.png
Read more...
ericrrichards22
In our previous installment, we discussed the data structures that we will use to represent the graph which we will use for pathfinding on the terrain, as well as the initial pre-processing that was necessary to populate that graph with the information that our pathfinding algorithm will make use of. Now, we are ready to actually implement our pathfinding algorithm. We'll be using A*, probably the most commonly used graph search algorithm for pathfinding.


A* is one of the most commonly used pathfinding algorithms in games because it is fast, flexible, and relatively simple to implement. A* was originally a refinement of Dijkstra's graph search algorithm. Dijkstra's algorithm is guaranteed to determine the shortest path between any two nodes in a directed graph; however, because Dijkstra's algorithm only takes into account the cost of reaching an intermediate node from the start node, it tends to consider many nodes that are not on the optimal path. An alternative to Dijkstra's algorithm is Greedy Best-First search. Best-First uses a heuristic function to estimate the cost of reaching the goal from a given intermediate node, without reference to the cost of reaching the current node from the start node. This means that Best-First tends to consider far fewer nodes than Dijkstra, but is not guaranteed to produce the shortest path in a graph which includes obstacles that are not predicted by the heuristic.


A* blends these two approaches, evaluating each node by the cost function f(x) = g(x) + h(x), where g(x) is the actual cost of reaching the node from the start node and h(x) is the estimated cost of reaching the goal from the node. This allows A* to find the optimal shortest path while considering fewer nodes than pure Dijkstra's algorithm. The number of intermediate nodes expanded by A* is somewhat dependent on the characteristics of the heuristic function used. There are generally three cases of heuristics that can be used to control A*, which result in different performance characteristics:

  • When h(x) underestimates the true cost of reaching the goal from the current node, A* will expand more nodes, but is guaranteed to find the shortest path.
  • When h(x) is exactly the true cost of reaching the goal, A* will only expand nodes along the shortest path, meaning that it runs very fast and produces the optimal path.
  • When h(x) overestimates the true cost of reaching the goal from the current node, A* will expand fewer intermediate nodes. Depending on how much h(x) overestimates the true cost, this may result in paths that are not the true shortest path; however, this does allow the algorithm to complete more quickly.

For games, we will generally use heuristics of the third class. It is important that we generate good paths when doing pathfinding for our units, but it is generally not necessary that they be mathematically perfect; they just need to look good enough, and the speed savings are very important when we are trying to cram all of our rendering and update code into just a few tens of milliseconds, in order to hit 30-60 frames per second.


A* uses two sets to keep track of the nodes that it is operating on. The first set is the closed set, which contains all of the nodes that A* has previously considered; this is sometimes called the interior of the search. The other set is the open set, which contains those nodes which are adjacent to nodes in the closed set, but which have not yet been processed by the A* algorithm. The open set is generally sorted by the calculated cost of the node (f(x)), so that the algorithm can easily select the most promising new node to consider. Because of this, we usually consider the open list to be a priority queue. The particular implementation of this priority queue has a large impact on the speed of A*; for best performance, we need a data structure that supports fast membership checks (is a node in the queue?), fast removal of the best element in the queue, and fast insertions into the queue. Amit Patel provides a good overview of the pros and cons of different data structures for the priority queue on his A* page; I will be using a priority queue derived from BlueRaja's Priority Queue class, which is essentially a binary heap. For our closed set, the primary operations that we will perform are insertions and membership tests, which makes the .Net HashSet class a good choice.
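
Putting the pieces together, the core loop looks roughly like this sketch (the MapTile members and the priority queue API, modeled on BlueRaja's Enqueue/Dequeue/Contains/UpdatePriority methods, are simplified stand-ins for the actual classes):

[code]
public static List<MapTile> FindPath(MapTile start, MapTile goal) {
    var open = new PriorityQueue<MapTile>();   // ordered by f(x) = g(x) + h(x)
    var closed = new HashSet<MapTile>();       // the "interior" of the search
    start.G = 0;
    open.Enqueue(start, Heuristic(start, goal));
    while (open.Count > 0) {
        var current = open.Dequeue();          // most promising open node
        if (current == goal) return ReconstructPath(goal);
        closed.Add(current);
        foreach (var edge in current.Edges) {
            var neighbor = edge.To;
            if (closed.Contains(neighbor)) continue;
            var g = current.G + edge.Cost;     // actual cost from the start node
            if (!open.Contains(neighbor)) {
                neighbor.G = g;
                neighbor.Parent = current;
                open.Enqueue(neighbor, g + Heuristic(neighbor, goal));
            } else if (g < neighbor.G) {       // found a cheaper route to neighbor
                neighbor.G = g;
                neighbor.Parent = current;
                open.UpdatePriority(neighbor, g + Heuristic(neighbor, goal));
            }
        }
    }
    return null;                               // goal not reachable from start
}

private static List<MapTile> ReconstructPath(MapTile goal) {
    var path = new List<MapTile>();
    for (var tile = goal; tile != null; tile = tile.Parent) path.Add(tile);
    path.Reverse();                            // parent links run goal -> start
    return path;
}
[/code]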

Read more...
ericrrichards22
Howdy. Today, I'm going to discuss rendering UI text using the SlimDX SpriteTextRenderer library. This is a very nifty and light-weight extension library for SlimDX, hosted on CodePlex. In older versions of DirectX, it used to be possible to easily render sprites and text using the ID3DXSprite and ID3DXFont interfaces, but those have been removed in newer versions of DirectX. I've experimented with some other approaches, such as using Direct2D and DirectWrite or the DirectX Toolkit, but wasn't happy with the results. For whatever reason, Direct2D doesn't interop well with DirectX 11, unless you create a shared DirectX 10 device and jump through a bunch of hoops, and even then it is kind of a PITA. Likewise, I have yet to find C# bindings for the DirectX Toolkit, so that's kind of a non-starter for me; I'd either have to rewrite the pieces that I want to use with SlimDX, or figure out the marshaling to use the C++ dlls. So for that reason, the SpriteTextRenderer library seems to be my best option at the moment, and it turned out to be relatively simple to integrate into my application framework.


If you've used either the old DirectX 9 interfaces or XNA, then it'll be pretty intuitive how to use SpriteTextRenderer. The SpriteRenderer class has some useful methods to draw 2D sprites, which I haven't explored much yet, since I have already added code to draw screen-space quads. The TextBlockRenderer class provides some simple and handy methods to draw text up on the screen. Internally, it uses DirectWrite to generate sprite font textures at runtime, so you can use any installed system font, and specify the weight, style, and point size easily, without worrying about the nitty-gritty details of creating the font.


One limitation of the TextBlockRenderer class is that you can only use an instance of it to render text with a single font. Thus, if you want to use different font sizes or styles, you need to create different instances for each font that you want to use. Because of this, I've written a simple manager class, which I'm calling FontCache, which will provide a central point to store all the fonts that are used, as well as a default font if you just want to throw some text up onto the screen.
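
Since the FontCache class hasn't appeared in a post yet, here is the shape of the idea (a sketch; the member names are mine, and the TextBlockRenderer construction details are elided):

[code]
using System;
using System.Collections.Generic;
using SpriteTextRenderer;   // provides TextBlockRenderer

public class FontCache : IDisposable {
    private readonly Dictionary<string, TextBlockRenderer> _fonts =
        new Dictionary<string, TextBlockRenderer>();

    // Fallback font for quick-and-dirty text output.
    public TextBlockRenderer Default { get; private set; }

    public void RegisterFont(string key, TextBlockRenderer font) {
        if (Default == null) Default = font;  // first font registered is the default
        _fonts[key] = font;                   // one renderer per font/size/style combo
    }

    // Unknown keys fall back to the default font rather than throwing.
    public TextBlockRenderer this[string key] {
        get { return _fonts.ContainsKey(key) ? _fonts[key] : Default; }
    }

    public void Dispose() {
        foreach (var font in _fonts.Values) font.Dispose();
        _fonts.Clear();
    }
}
[/code]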


The new code for rendering text has been added to my pathfinding demo, available at my GitHub repository, at https://github.com/ericrrichards/dx11.git.
[screenshot: UI text drawn with SpriteTextRenderer]
(Looks much better without jpeg compression...)

Read more...
ericrrichards22
In real-time lighting applications, like games, we usually only calculate direct lighting, i.e. light that originates from a light source and hits an object directly. The Phong lighting model that we have been using thus far is an example of this; we only calculate the direct diffuse and specular lighting. We either ignore indirect light (light that has bounced off of other objects in the scene), or approximate it using a fixed ambient term. This is very fast to calculate, but not terribly physically accurate. Physically accurate lighting models can model these indirect light bounces, but are typically too computationally expensive to use in a real-time application, which needs to render at least 30 frames per second.

However, using the ambient lighting term to approximate indirect light has some issues, as you can see in the screenshot below. This depicts our standard skull and columns scene, rendered using only ambient lighting. Because we are using a fixed ambient color, each object is rendered as a solid color, with no definition. Essentially, we are making the assumption that indirect light bounces uniformly onto all surfaces of our objects, which is often not physically accurate.

[screenshot: the skull and columns scene rendered with only the fixed ambient term]

Naturally, some portions of our scene will receive more indirect light than other portions, if we were actually modeling the way that light bounces within our scene. Some portions of the scene will receive the maximum amount of indirect light, while other portions, such as the nooks and crannies of our skull, should appear darker, since fewer indirect light rays should be able to hit those surfaces because the surrounding geometry would, realistically, block those rays from reaching the surface.

In a classical global illumination scheme, we would simulate indirect light by casting rays from the object surface point in a hemispherical pattern, checking for geometry that would prevent light from reaching the point. Assuming that our models are static, this could be a viable method, provided we performed these calculations off-line; ray tracing is very expensive, since we would need to cast a large number of rays to produce an acceptable result, and performing that many intersection tests can be very expensive. With animated models, this method very quickly becomes untenable; whenever the models in the scene move, we would need to recalculate the occlusion values, which is simply too slow to do in real-time.

Screen-Space Ambient Occlusion is a fast technique for approximating ambient occlusion, developed by Crytek for the game Crysis. We will initially draw the scene to a render target, which will contain the normal and depth information for each pixel in the scene. Then, we can sample this normal/depth surface to calculate occlusion values for each pixel, which we will save to another render target. Finally, in our usual shader effect, we can sample this occlusion map to modify the ambient term in our lighting calculation. While this method is not perfectly realistic, it is very fast, and generally produces good results. As you can see in the screen shot below, using SSAO darkens up the cavities of the skull and around the bases of the columns and spheres, providing some sense of depth.
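
The resulting per-frame flow looks something like this (the method names loosely follow the Ssao helper class in Luna's example; treat this as a sketch of the pass sequence rather than the exact API):

[code]
// Pass 1: draw the scene's view-space normals and depths to a render target.
ssao.SetNormalDepthRenderTarget(depthStencilView);
DrawSceneToNormalDepthMap();

// Pass 2: a full-screen quad computes a per-pixel occlusion value from the
// normal/depth surface, then blur passes smooth out the sampling noise.
ssao.ComputeSsao(camera);
ssao.BlurAmbientMap(4);

// Pass 3: render the scene normally, sampling the occlusion map in the
// pixel shader to scale down the ambient lighting term.
DrawScene();
[/code]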

[screenshot: the same scene with SSAO applied - the cavities of the skull and the bases of the columns are darkened]

The code for this example is based on Chapter 22 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. The example presented here has been stripped down considerably to demonstrate only the SSAO effects; lighting and texturing have been disabled, and the shadow mapping effects in Luna's example have been removed. The full code for this example can be found at my GitHub repository, https://github.com/ericrrichards/dx11.git, under the SSAODemo2 project. A more faithful adaptation of Luna's example can also be found in the 28-SsaoDemo project.

Read More...

ericrrichards22
This was originally intended to be a single post on pathfinding, but it got too long, and so I am splitting it up into three or four smaller pieces. Today, we're going to look at the data structures that we will use to represent the nodes of our pathfinding graph, and at generating that graph from our terrain class.

When we were working on our quadtree to detect mouse clicks on the terrain, we introduced the concept of logical terrain tiles; these were the smallest sections of the terrain mesh that we wanted to hit when we did mouse picking, representing a 2x2 portion of our fully-tessellated mesh. These logical terrain tiles are a good starting point for generating what I am going to call our map: the 2D grid that we will use for pathfinding, placing units and other objects, defining areas of the terrain, AI calculations, and so forth. At the moment, there isn't really anything to these tiles, as they are simply a bounding box attached to the leaf nodes of our quad tree. That's not terribly useful by itself, so we are going to create a data structure to represent these tiles, along with an array to contain them in our Terrain class. Once we have a structure to contain our tile information, we need to extract that information from our Terrain class and heightmap, and generate the graph representing the tiles and the connections between them, so that we can use it in our pathfinding algorithm.
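
Concretely, the tile structure only needs a handful of fields to support everything described above (a sketch; the member names are mine, not necessarily those in the repository):

[code]
public class MapTile {
    public BoundingBox Bounds;      // world-space box from the quadtree leaf
    public Vector2 MapPosition;     // tile coordinates in the 2D logical grid
    public float Height;            // sampled from the heightmap
    public bool Walkable;           // false for steep slopes and obstacles
    public int Set = -1;            // connected-set id; -1 = not yet assigned

    // Graph connections to the (up to eight) neighboring tiles.
    public List<MapEdge> Edges = new List<MapEdge>();

    // Pathfinding bookkeeping, reset before each A* search.
    public float G;                 // cost from the start node
    public MapTile Parent;          // back-pointer for path reconstruction
}

public class MapEdge {
    public MapTile To;
    public float Cost;              // higher for diagonals or rough terrain
}
[/code]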

The pathfinding code implemented here was originally derived from Chapter 4 of Carl Granberg's Programming an RTS Game with Direct3D. I've made some heavy modifications, working from that starting point, using material from Amit Patel's blog and BlueRaja's C# PriorityQueue implementation. The full code for this example can be found on my GitHub repository, https://github.com/ericrrichards/dx11.git, under the 33-Pathfinding project.


Read more...
ericrrichards22

IBM Connect 2014

Yikes! It's been more than two weeks since my last post... It's been a busy two weeks, as my employer has been gearing up for and attending IBM's Connect 2014 conference in Orlando. So I've had less time than usual to work on my side projects here. Because of that, I'm going to go outside of my usual format, and recap the conference and some thoughts on it. These are my personal opinions, so bear in mind the old adage about the ubiquity and quality of opinions...

Read more...
ericrrichards22
Typically, in a strategy game, in addition to the triangle mesh that we use to draw the terrain, there is an underlying logical representation, usually dividing the terrain into rectangular or hexagonal tiles. This grid is generally what is used to order units around, construct buildings, select targets and so forth. To do all this, we need to be able to select locations on the terrain using the mouse, so we will need to implement terrain/mouse-ray picking for our terrain, similar to what we have done previously, with model triangle picking.

We cannot simply use the same techniques that we used earlier for our terrain, however. For one, in our previous example, we were using a brute-force linear searching technique to find the picked triangle out of all the triangles in the mesh. That worked in that case; however, the mesh that we were trying to pick only contained 1850 triangles. I have been using a terrain in these examples that, when fully tessellated, is 2049x2049 vertices, which means that our terrain consists of more than 8 million triangles. It's pretty unlikely that we could manage to use the same brute-force technique with that many triangles, so we need to use some kind of space-partitioning data structure to reduce the portion of the terrain that we need to consider for intersection.

Additionally, we cannot really perform a per-triangle intersection test in any case, since our terrain uses a dynamic LOD system. The triangles of the terrain mesh are only generated on the GPU, in the hull shader, so we don't have access to the terrain mesh triangles on the CPU, where we will be doing our picking. Because of these two constraints, I have decided on using a quadtree of axis-aligned bounding boxes to implement picking on the terrain. Using a quadtree speeds up our intersection testing considerably, since most of the time we will be able to exclude three-fourths of our terrain from further consideration at each level of the tree. This also maps quite nicely to the concept of a grid layout for representing our terrain, and allows us to select individual terrain tiles fairly efficiently, since the bounding boxes at the terminal leaves of the tree will thus encompass a single logical terrain tile. In the screenshot below, you can see how this works; the boxes drawn in color over the terrain are at double the size of the logical terrain tiles, since I ran out of video memory drawing the terminal bounding boxes, but you can see that the red ball is located in the upper quadrant of the white bounding box containing it.
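
The traversal itself is short. A sketch of the recursive intersection test, using SlimDX's static Ray.Intersects helper (the QuadTreeNode members are illustrative):

[code]
// Returns the leaf MapTile hit by the pick ray, or null if the ray misses.
public MapTile Intersect(Ray ray, QuadTreeNode node) {
    float distance;
    if (!Ray.Intersects(ray, node.Bounds, out distance))
        return null;                  // prune this quarter of the terrain
    if (node.Children == null)
        return node.Tile;             // terminal leaf = one logical terrain tile
    // Interior node: test all four children, keep the closest hit.
    MapTile best = null;
    var bestDist = float.MaxValue;
    foreach (var child in node.Children) {
        var hit = Intersect(ray, child);
        if (hit == null) continue;
        var d = Vector3.DistanceSquared(ray.Position, hit.Bounds.Minimum);
        if (d < bestDist) { bestDist = d; best = hit; }
    }
    return best;
}
[/code]

Measuring from the box's minimum corner is a cheap approximation of the closest hit; for terrain picking from a mostly-overhead camera, it is good enough.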

[screenshot: colored quadtree bounding boxes drawn over the terrain]

Read more...
ericrrichards22
Now that I'm finished up with everything that I wanted to cover from Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, I want to spend some time improving the Terrain class that we introduced earlier. My ultimate goal is to create a two-tiered strategy game, with a turn-based strategic level and either a turn-based or real-time tactical level. My favorite games have always been these kinds of strategic/tactical hybrids, such as (in roughly chronological order) Centurion: Defender of Rome, Lords of the Realm, Close Combat and the Total War series. In all of these games, the tactical combat is one of the main highlights of gameplay, and so the terrain that combat occurs upon is very important, both aesthetically and for gameplay.

Our first step will be to incorporate some of the graphical improvements that we have recently implemented into our terrain rendering. We will be adding shadow-mapping and SSAO support to the terrain in this installment. In the screenshots below, we have our light source (the sun) low on the horizon behind the mountain range. The first shot shows our current terrain rendering result, with no shadows or ambient occlusion. In the second, shadows have been added, which, in addition to just showing shadows, has dulled down a lot of the odd-looking highlights in the first shot. The final shot shows both shadow mapping and ambient occlusion applied to the terrain. The ambient occlusion adds a little more detail to the scene; regardless of its accuracy, I kind of like the effect, just to noise up the textures applied to the terrain, although I may tweak it a bit to lighten the darker spots up.

We are going to need to add another set of effect techniques to our shader effect to support shadow mapping, as well as a technique to draw to the shadow map, and another technique to draw the normal/depth map for SSAO. For the latter two techniques, we will need to implement a new hull shader, since I would like to have the shadow maps and normal/depth maps match the fully-tessellated geometry; using the normal hull shader that dynamically tessellates may result in shadows that change shape as you move around the map. For the normal/depth technique, we will also need to implement a new pixel shader. Our domain shader is also going to need to be updated, so that it creates the texture coordinates for sampling both the shadow map and the SSAO map, and our pixel shader will need to be updated to do the shadow and ambient occlusion calculations.

This sounds like a lot of work, but really, it is mostly a matter of adapting what we have already done. As always, you can download my full code for this example from GitHub at https://github.com/ericrrichards/dx11.git. This example doesn't really have a stand-alone project, as it came about as I was on my way to implementing a minimap, and thus these techniques are showcased as part of the Minimap project.

[screenshot: basic terrain rendering]
[screenshot: shadow mapping added]
[screenshot: shadow mapping and SSAO]

Read more...
ericrrichards22
Shadow mapping is a technique to cast shadows from arbitrary objects onto arbitrary 3D surfaces. You may recall that we implemented planar shadows earlier using the stencil buffer. Although that technique worked well for rendering shadows onto planar (flat) surfaces, it does not work well when we want to cast shadows onto curved or irregular surfaces, which renders it of relatively little use. Shadow mapping gets around these limitations by rendering the scene from the perspective of a light and saving the depth information into a texture called a shadow map. Then, when we are rendering our scene to the backbuffer, in the pixel shader, we determine the depth value of the pixel being rendered, relative to the light position, and compare it to a sampled value from the shadow map. If the computed value is greater than the sampled value, then the pixel being rendered is not visible from the light, so the pixel is in shadow, and we do not compute the diffuse and specular lighting for the pixel; otherwise, we render the pixel as normal.

Using a simple point sampling technique for shadow mapping results in very hard, aliased shadows: a pixel is either in shadow or lit. Therefore, we will use a sampling technique known as percentage closer filtering (PCF), which uses a box filter to determine how shadowed the pixel is. This allows us to render partially shadowed pixels, which results in softer shadow edges.
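
The one piece of supporting math worth pulling out is the shadow transform: a point is projected into the light's clip space, and the resulting NDC x/y coordinates are remapped from [-1, 1] to [0, 1] texture space (with y flipped). A sketch in SlimDX terms, using its row-vector matrix conventions:

[code]
// Remaps NDC [-1,1] to texture space [0,1]; M22 is negative because
// texture v runs top-down while NDC y runs bottom-up.
static readonly Matrix NdcToTexture = new Matrix {
    M11 = 0.5f, M22 = -0.5f, M33 = 1.0f,
    M41 = 0.5f, M42 = 0.5f,  M44 = 1.0f
};

// lightView/lightProj frame the scene from the light's point of view.
// The combined matrix is what the pixel shader uses to compute the
// texture coordinates for the shadow map lookup.
Matrix shadowTransform = lightView * lightProj * NdcToTexture;
[/code]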

This example is based on the example from Chapter 21 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. The full source for this example can be downloaded from my GitHub repository at https://github.com/ericrrichards/dx11.git, under the ShadowDemos project.

Read More...

ericrrichards22
When I first learned about programming DirectX using shaders, it was back when DirectX 9 was the newest thing around. Back then, there were only two programmable stages in the shader pipeline, the vertex and pixel shaders that we have been utilizing thus far. DirectX 10 introduced the geometry shader, which allows us to modify entire geometric primitives on the hardware, after they have gone through the vertex shader.

One application of this capability is rendering billboards. Billboarding is a common technique for rendering far-off objects or minor scene details, by replacing a full 3D object with a texture drawn to a quad that is oriented towards the viewer. This is much less performance-intensive, and for far-off objects and minor details, provides a good-enough approximation. As an example, many games use billboarding to render grass or other foliage, and the Total War series renders far-away units as billboard sprites (in Medieval II: Total War, you can see this by zooming in and out on a unit; at a certain point, you'll see the unit "pop", which is the point where the Total War engine switches from rendering sprite billboards to rendering the full 3D model).

The older way of rendering billboards required one to maintain a dynamic vertex buffer of the quads for the billboards, and to transform the vertices to orient towards the viewer on the CPU whenever the camera moved. Dynamic vertex buffers have a lot of overhead, because it is necessary to re-upload the geometry to the GPU every time it changes, along with the additional overhead of uploading four vertices per billboard. Using the geometry shader, we can use a static vertex buffer of 3D points, with only a single vertex per billboard, and expand the point to a camera-aligned quad in the geometry shader.
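
On the CPU side, all that remains is a small static buffer of points. A sketch with SlimDX (the vertex layout is a plausible position-plus-size input for the geometry shader, not necessarily the demo's exact structure):

[code]
using System.Runtime.InteropServices;
using SlimDX;
using SlimDX.Direct3D11;
using Buffer = SlimDX.Direct3D11.Buffer;

[StructLayout(LayoutKind.Sequential)]
struct TreePointSprite {
    public Vector3 Position;   // world-space center of the billboard
    public Vector2 Size;       // width/height the GS expands the point to
}

// One vertex per tree; the data never changes, so the buffer is immutable.
var stride = Marshal.SizeOf(typeof(TreePointSprite));
var stream = new DataStream(treePositions.Count * stride, true, true);
foreach (var p in treePositions)
    stream.Write(new TreePointSprite { Position = p, Size = new Vector2(24, 24) });
stream.Position = 0;
var treeVB = new Buffer(device, stream, new BufferDescription {
    BindFlags = BindFlags.VertexBuffer,
    SizeInBytes = (int)stream.Length,
    Usage = ResourceUsage.Immutable
});
[/code]

The buffer is then drawn with PrimitiveTopology.PointList, and the geometry shader emits a four-vertex, camera-facing triangle strip per point.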

We'll illustrate this technique by porting the TreeBillboard example from Chapter 11 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. This demo builds upon our previous Alpha-blending example, adding some tree billboards to the scene. You can download the full code for this example from my GitHub repository, at https://github.com/ericrrichards/dx11.git under the TreeBillboardDemo project.

Read More...
