Entries in this blog

ericrrichards22
http://www.youtube.com/watch?feature=player_embedded&v=WIOQuEJSpEg
Watch the intrepid red blob wind its way through the mountain slopes!

Last time, we discussed the implementation of our A* pathfinding algorithm, as well as some commonly used heuristics for A*. Now we're going to put all of the pieces together and get a working example to showcase this pathfinding work.

We'll need to slightly rework our mouse picking code to return the tile in our map that was hit, rather than just the bounding box center. To do this, we're going to need to modify our QuadTree, so that the leaf nodes are tagged with the MapTile that their bounding boxes enclose.

We'll also revisit the function that calculates which portions of the map are connected, as the original method in Part 1 was horribly inefficient on some maps. Instead, we'll use a different method, which uses a series of depth-first searches to calculate the connected sets of MapTiles in the map. This method is much faster, particularly on maps that have more disconnected sets of tiles.
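To make the approach concrete, here is a minimal sketch of the depth-first flood fill, using an explicit stack and 4-way adjacency for simplicity; the array-based representation and names are illustrative, not the actual MapTile/graph classes from the series:

```csharp
using System.Collections.Generic;

public static class ConnectivityLabeler {
    // Label each connected set of walkable tiles with a distinct id (0 = unwalkable),
    // using an iterative depth-first search with an explicit stack.
    public static int[,] LabelConnectedSets(bool[,] walkable) {
        int rows = walkable.GetLength(0), cols = walkable.GetLength(1);
        var setIds = new int[rows, cols];
        var nextId = 0;
        for (var r = 0; r < rows; r++) {
            for (var c = 0; c < cols; c++) {
                if (!walkable[r, c] || setIds[r, c] != 0) continue;
                nextId++;                                   // start a new connected set
                var stack = new Stack<int>();
                stack.Push(r * cols + c);                   // encode (row, col) as a single index
                while (stack.Count > 0) {
                    var index = stack.Pop();
                    int cr = index / cols, cc = index % cols;
                    if (setIds[cr, cc] != 0) continue;      // already labeled via another path
                    setIds[cr, cc] = nextId;
                    // Push the in-bounds, walkable, unlabeled neighbors (4-way adjacency).
                    if (cr > 0 && walkable[cr - 1, cc] && setIds[cr - 1, cc] == 0) stack.Push((cr - 1) * cols + cc);
                    if (cr < rows - 1 && walkable[cr + 1, cc] && setIds[cr + 1, cc] == 0) stack.Push((cr + 1) * cols + cc);
                    if (cc > 0 && walkable[cr, cc - 1] && setIds[cr, cc - 1] == 0) stack.Push(cr * cols + cc - 1);
                    if (cc < cols - 1 && walkable[cr, cc + 1] && setIds[cr, cc + 1] == 0) stack.Push(cr * cols + cc + 1);
                }
            }
        }
        return setIds;
    }
}
```

Two tiles end up in the same connected set exactly when some walkable path joins them, which is what the pathfinder needs to know before it bothers searching.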

We'll also need to develop a simple class to represent our unit, which will allow it to update and render itself, as well as maintain pathfinding information. The unit class implementation used here is based in part on material presented in Chapter 9 of Carl Granberg's Programming an RTS Game with Direct3D.

Finally, we'll add an additional texture map to our rendering shader, which will draw impassable terrain using a special texture, so that we can easily see the obstacles that our unit will be navigating around. You can see this in the video above; the impassable areas are shown with a slightly darker texture, with dark rifts.

The full code for this example can be found on my GitHub repository, https://github.com/ericrrichards/dx11.git, under the 33-Pathfinding project.

Read more...
ericrrichards22

So far, we have either worked with procedurally generated meshes, like our boxes and cylinders, or loaded very simple text-based mesh formats. For any kind of real application, however, we will need to have the capability to load meshes created by artists using 3D modeling and animation programs, like Blender or 3DS Max. There are a number of potential solutions to this problem; we could write an importer to read the specific file format of the program we are most likely to use for our engine. This does present a new host of problems, however: unless we write another importer, we are limited to just using models created with that modeling software, or we will have to rely on or create plugins to our modeling software to reformat models created using different software. There is a myriad of different 3D model formats in more-or-less common use, some of which are open-source, some of which are proprietary; many of both are surprisingly hard to find good, authoritative documentation on. All this makes the prospects of creating a general-purpose, cross-format model importer a daunting task.

Fortunately, there exists an open-source library with support for loading most of the commonly used model formats, ASSIMP, or Open Asset Import Library. Although Assimp is written in C++, there exists a port for .NET, AssimpNet, which we will be able to use directly in our code, without the hassle of wrapping the native Assimp library ourselves. I am not a lawyer, but it looks like both Assimp and AssimpNet have very permissive licenses that should allow one to use them in any kind of hobby or professional project.

While we can use Assimp to load our model data, we will need to create our own C# model class to manage and render the model. For that, I will be following the example of Chapter 23 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. This example will not be a straight conversion of his example code, since I will be ditching the m3d model format which he uses, and instead loading models from the standard old Microsoft DirectX X format. The models I will be using come from the example code for Chapter 4 of Carl Granberg's Programming an RTS Game with Direct3D, although you may use any of the supported Assimp model formats if you want to use other meshes instead. The full source for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the AssimpModel project.
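As a rough illustration of the import side, here is a hedged sketch using AssimpNet's AssimpContext entry point (older AssimpNet builds expose a similar AssimpImporter class instead); the post-processing flags and the way the data would be copied into buffers are assumptions for the sketch, not the post's actual loader:

```csharp
using Assimp;   // AssimpNet

public static class ModelImporter {
    // Hypothetical loader skeleton, not the post's actual model class.
    public static Scene Load(string filename) {
        var importer = new AssimpContext();
        // Triangulate everything and generate smooth normals if the file has none.
        var scene = importer.ImportFile(filename,
            PostProcessSteps.Triangulate | PostProcessSteps.GenerateSmoothNormals);

        foreach (var mesh in scene.Meshes) {
            // Each Assimp mesh becomes one subset of our model: copy positions, normals
            // and indices out into whatever vertex/index buffer format we render with.
            var positions = mesh.Vertices;     // list of Vector3D
            var normals = mesh.Normals;
            var indices = mesh.GetIndices();   // flat triangle index list
            // ... build Direct3D vertex and index buffers from these ...
        }
        return scene;
    }
}
```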


A more complicated model with multiple subsets, from the Assimp test model collection.

Read More...
ericrrichards22
This time, we are going to take the scene that we used for the Shapes Demo, and apply a three-point lighting shader. We'll replace the central sphere from the scene with the skull model that we loaded from a file in the Skull Demo, to make the scene a little more interesting. We will also do some work encapsulating our shader in a C# class, as we will be using this shader effect as a basis that we will extend when we look at texturing, blending and other effects. As always, the full code for this example can be found at my GitHub repository, https://github.com/ericrrichards/dx11.git; the project for this example can be found in the DX11 solution under Examples/LitSkull.

Left to Right: Rendering the scene with 1 key light, rendering the scene with key and fill lights, rendering the scene with key, fill and back lights

Read More...
ericrrichards22
I know that I have been saying that I will cover random terrain generation in my next post for several posts now, and if all goes well, that post will be published today or tomorrow. First, though, we will talk about Direct2D, and using it to draw a loading screen, which we will display while the terrain is being generated to give some indication of the progress of our terrain generation algorithm.

Direct2D is a new 2D rendering API from Microsoft built on top of DirectX 10 (or DirectX 11 in Windows 8). It provides functionality for rendering bitmaps, vector graphics and text in screen-space using hardware acceleration. It is positioned as a successor to the GDI rendering interface, and, so far as I can tell, the old DirectDraw API from earlier versions of DirectX. According to the documentation, it should be possible to use Direct2D to render 2D elements to a Direct3D 11 render target, although I have had some difficulties in actually getting that use-case to work. It does appear to work excellently for pure 2D rendering, say for menu or loading screens, with a simpler syntax than using Direct3D with an orthographic projection.

We will be adding Direct2D support to our base application class D3DApp, and using Direct2D to render a progress bar with some status text during our initialization stage, as we load and generate the Direct3D resources for our scene. Please note that the implementation presented here should only be used while there is no active Direct3D rendering occurring; due to my difficulties in getting Direct2D/Direct3D interoperation to work correctly, Direct2D will be using a different backbuffer than Direct3D, so interleaving Direct2D drawing with Direct3D rendering will result in some very noticeable flickering when the backbuffers switch.
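A hedged sketch of the drawing side, assuming a SlimDX Direct2D RenderTarget has already been created against the window (the post wires this up inside D3DApp); the rectangle sizes and colors are arbitrary, and the render target setup itself is omitted here:

```csharp
using System.Drawing;
using SlimDX;
using SlimDX.Direct2D;

public static class ProgressScreen {
    // Draw a simple progress bar; progress is in [0, 1].
    public static void DrawProgressBar(RenderTarget d2dRenderTarget, float progress) {
        d2dRenderTarget.BeginDraw();
        d2dRenderTarget.Clear(new Color4(Color.Black));

        var outline = new RectangleF(100, 200, 400, 30);
        var fill = new RectangleF(102, 202, 396 * progress, 26);   // inner bar scaled by progress

        using (var white = new SolidColorBrush(d2dRenderTarget, new Color4(Color.White)))
        using (var green = new SolidColorBrush(d2dRenderTarget, new Color4(Color.LimeGreen))) {
            d2dRenderTarget.DrawRectangle(white, outline);   // border
            d2dRenderTarget.FillRectangle(green, fill);      // filled portion
        }
        d2dRenderTarget.EndDraw();
    }
}
```

Status text would be drawn the same way with a DirectWrite TextFormat, which is left out of this sketch.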

The inspiration for this example comes from Chapter 5 of Carl Granberg's Programming an RTS Game with Direct3D, where a similar progress screen is implemented using DirectX 9 and the fixed-function pipeline. With the removal of the ID3DXFont interface from newer versions of DirectX, as well as the lack of the ability to clear just a portion of a DirectX 11 DeviceContext's render target, a straight conversion of that code would require some fairly heavy lifting to implement in Direct3D 11, and so we will be using Direct2D instead. The full code for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git, and is implemented in the TerrainDemo and RandomTerrainDemo projects.

Read More...
ericrrichards22
Typically, in a strategy game, in addition to the triangle mesh that we use to draw the terrain, there is an underlying logical representation, usually dividing the terrain into rectangular or hexagonal tiles. This grid is generally what is used to order units around, construct buildings, select targets and so forth. To do all this, we need to be able to select locations on the terrain using the mouse, so we will need to implement terrain/mouse-ray picking for our terrain, similar to what we have done previously, with model triangle picking.

We cannot simply use the same techniques that we used earlier for our terrain, however. For one, in our previous example, we were using a brute-force linear searching technique to find the picked triangle out of all the triangles in the mesh. That worked in that case; however, the mesh that we were trying to pick contained only 1850 triangles. I have been using a terrain in these examples that, when fully tessellated, is 2049x2049 vertices, which means that our terrain consists of more than 8 million triangles. It's pretty unlikely that we could manage to use the same brute-force technique with that many triangles, so we need to use some kind of space partitioning data structure to reduce the portion of the terrain that we need to consider for intersection.

Additionally, we cannot really perform a per-triangle intersection test in any case, since our terrain uses a dynamic LOD system. The triangles of the terrain mesh are only generated on the GPU, in the hull shader, so we don't have access to the terrain mesh triangles on the CPU, where we will be doing our picking. Because of these two constraints, I have decided on using a quadtree of axis-aligned bounding boxes to implement picking on the terrain. Using a quadtree speeds up our intersection testing considerably, since most of the time we will be able to exclude three-fourths of our terrain from further consideration at each level of the tree. This also maps quite nicely to the concept of a grid layout for representing our terrain, and allows us to select individual terrain tiles fairly efficiently, since the bounding boxes at the terminal leaves of the tree will thus encompass a single logical terrain tile. In the screenshot below, you can see how this works; the boxes drawn in color over the terrain are at double the size of the logical terrain tiles, since I ran out of video memory drawing the terminal bounding boxes, but you can see that the red ball is located on the upper quadrant of the white bounding box containing it.
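The traversal itself is simple; a hedged sketch (with illustrative class and member names, not the actual Terrain/QuadTree code) might look like this:

```csharp
using SlimDX;

// Illustrative sketch of the traversal, not the actual Terrain/QuadTree classes.
public class QuadTreeNode {
    public BoundingBox Bounds;        // axis-aligned box enclosing this node's patch of terrain
    public QuadTreeNode[] Children;   // four children, or null at the leaves (one logical tile each)

    // Returns the leaf node (terrain tile) hit by the pick ray, or null if the ray misses.
    public QuadTreeNode Pick(Ray ray) {
        float distance;
        if (!Ray.Intersects(ray, Bounds, out distance)) {
            return null;              // the ray misses this box, so it misses every child too
        }
        if (Children == null) {
            return this;              // leaf: this box encloses a single logical tile
        }
        QuadTreeNode closest = null;
        var closestDistance = float.MaxValue;
        foreach (var child in Children) {
            var hit = child.Pick(ray);
            if (hit == null) continue;
            float d;
            Ray.Intersects(ray, hit.Bounds, out d);
            if (d < closestDistance) {    // keep the nearest hit; the ray can cross several boxes
                closestDistance = d;
                closest = hit;
            }
        }
        return closest;
    }
}
```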

(Screenshot: the quadtree bounding boxes drawn in color over the terrain.)

Read more...
ericrrichards22

A little over two years ago, I first saw Amit Patel's article on Polygonal Map Generation, and thought it was incredibly cool. The use of Voronoi regions created a very nice, slightly irregular look, compared to grid-based terrains. At the time, I had just finished up working on my DX11 random terrain code, and it looked like a fun project to try to tackle.

I then proceeded to spend several months messing around with different implementations of Fortune's Algorithm in C# to get started and generate the Voronoi polygons used to generate a terrain along the lines of Amit's example. At this point, I've lost track of all of the different versions that I've sort of melded together to produce the code that I've ended up with in the end, but some of the more influential are:


  • The original implementation of the algorithm by Steve Fortune
  • This JS version by Nicolas Garcia Belmonte
  • This C++ version, which is a port of the ActionScript version used in the original Amit Patel map generator.

The original goal was to create a map generator, suitable for a kind of overworld/strategic level map. But, alas, life happened, and I got bogged down before I got that far. I did, however, wind up with a fairly cool tool for generating Voronoi diagrams. Because I had spent so much time trying to iron out bugs in my implementation of the algorithm, I ended up producing a WinForms application that allows you to step through the algorithm one iteration at a time, visualizing the sites that are added to the diagram, the vertices and edges, as well as the position of the beach and sweep lines. Eventually I also worked in options to show the circles through three sites that define where a Voronoi vertex is located, as well as the Delaunay triangulation of the sites.

Voronoi regions, with the edges drawn in white, and the sites as the blue points.

Delaunay triangulation, with triangle edges in green.

Showing both the Voronoi regions and the Delaunay triangles.

I won't pretend that this code is fantastic, but it's kind of interesting, and I worked at it for quite a while, so better to have it out here than moldering on a hard drive somewhere. If you'd like to see the source, it is available on GitHub. You can also download the executable below if you like - I make no promises that it will work everywhere, but it is a pretty standard .Net 4.5 Windows Forms application. I've also got some videos below the fold, if you'd like to see this in action.

Download Voronoi

Read more...

ericrrichards22
Minimaps are a common feature of many different types of games, especially those in which the game world is larger than the area the player can see on screen at once. Generally, a minimap allows the player to keep track of where they are in the larger game world, and in many games, particularly strategy and simulation games where the view camera is not tied to any specific player character, it allows the player to move their viewing location more quickly than by using the direct camera controls. Often, a minimap will also provide a high-level view of unit movement, building locations, fog-of-war and other game-specific information.

Today, we will look at implementing a minimap that will show us a bird's-eye view of our Terrain class. We'll superimpose the frustum for our main rendering camera over the terrain, so that we can easily see how much of the terrain is in view, and we'll also support moving our viewpoint by clicking on the minimap. All of this functionality will be wrapped up into a class, so that we can render multiple minimaps, and place them wherever we like within our application window.

As always, the full code for this example can be downloaded from GitHub, at https://github.com/ericrrichards/dx11.git. The relevant project is the Minimap project. The implementation of this minimap code was largely inspired by Chapter 11 of Carl Granberg's Programming an RTS Game with Direct3D, particularly the camera frustum drawing code. If you can find a copy (it appears to be out of print, and copies are going for outrageous prices on Amazon...), I would highly recommend grabbing it.
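As a rough sketch of the click-to-move idea (the names and the assumption that the minimap is a top-down view covering the whole terrain are illustrative, not the Minimap class's actual interface), mapping a click on the minimap back to a world position is just a linear remapping:

```csharp
using SlimDX;

public static class MinimapPicking {
    // Map a mouse click inside the minimap viewport to a world-space position on the terrain,
    // assuming the minimap is a top-down view covering the whole terrain, centered on the origin.
    public static Vector3 MinimapToWorld(
        int clickX, int clickY,                       // mouse position in screen pixels
        System.Drawing.Rectangle minimapRect,         // where the minimap is drawn on screen
        float terrainWidth, float terrainDepth) {     // world-space extents of the terrain
        // Normalize the click to [0, 1] across the minimap image.
        var u = (clickX - minimapRect.Left) / (float)minimapRect.Width;
        var v = (clickY - minimapRect.Top) / (float)minimapRect.Height;

        // Remap to world coordinates; the top of the minimap corresponds to +Z here.
        var worldX = (u - 0.5f) * terrainWidth;
        var worldZ = (0.5f - v) * terrainDepth;
        return new Vector3(worldX, 0, worldZ);
    }
}
```

The returned point can then be handed to the camera as its new look-at target.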

Read more...
ericrrichards22
I came across an interesting bug in the wrapper classes for my HLSL shader effects today. In preparation for creating a class to represent a game unit, for the purposes of demonstrating the terrain pathfinding code that I finished up last night, I had been refactoring my BasicModel and SkinnedModel classes to inherit from a common abstract base class, and after getting everything to the state that it could compile again, I had fired up the SkinnedModels example project to make sure everything was still rendering and updating correctly. I got called away to do something else, and ended up checking back in on it a half hour or so later, to find that the example had died with an OutOfMemoryException. Looking at Task Manager, this relatively small demo program was consuming over 1.5 GB of memory!

I restarted the demo, and watched the memory allocation as it ran, and noticed that the memory used seemed to be climbing quite alarmingly, 0.5-1 MB every time Task Manager updated. Somehow, I'd never noticed this before... So I started the project in Visual Studio, using the Performance Wizard to sample the .Net memory allocation, and let the demo run for a couple of minutes. Memory usage had spiked up to about 150MB, in this simple demo that loaded maybe 35 MB of textures, models, code and external libraries...

Read More...
ericrrichards22

This time around, we are going to revisit our old friend, the Waves Demo, and add textures to the land and water meshes. We will also be taking advantage of the gTexTransform matrix of our Basic.fx shader to tile our land texture multiple times across our mesh, to achieve more detail, and use tiling and translations on our water mesh texture to create a simple but very visually appealing animation for our waves. This demo corresponds to the TexturedHillsAndWaves demo from Chapter 8 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. You can download the full source for this example from my GitHub repository at https://github.com/ericrrichards/dx11.git.
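As a small sketch of the water animation idea (field names and scroll rates here are illustrative, not the demo's actual values), the texture transform is just a tiling scale combined with an offset that accumulates each frame:

```csharp
using SlimDX;

public class WaterTextureAnimation {
    private Vector2 _texOffset = Vector2.Zero;

    // Build the texture transform for the water mesh: tile the texture 5x and scroll it
    // a little further every frame, so the waves appear to move.
    public Matrix Update(float dt) {
        _texOffset.X += 0.1f * dt;     // scroll rates are arbitrary for this sketch
        _texOffset.Y += 0.05f * dt;

        var scale = Matrix.Scaling(5.0f, 5.0f, 1.0f);
        var offset = Matrix.Translation(_texOffset.X, _texOffset.Y, 0.0f);
        return scale * offset;         // passed to the shader as gTexTransform
    }
}
```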

Here is a still of our finished project this time:

(Screenshot: the textured hills and animated waves.)

I'm also going to try to upload a video of this demo in action, as the static screenshot doesn't quite do it justice:

https://www.youtube.com/watch?feature=player_embedded&v=Xb5k3NxiPeo

Read More...

ericrrichards22
Particle systems are a technique commonly used to simulate chaotic phenomena, which are not easy to render using normal polygons. Some common examples include fire, smoke, rain, snow, or sparks. The particle system implementation that we are going to develop will be general enough to support many different effects; we will be using the GPU's StreamOut stage to update our particle systems, which means that all of the physics calculations and logic to update the particles will reside in our shader code, so that by substituting different shaders, we can achieve different effects using our base particle system implementation.
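At a high level, the C# side of a stream-out particle update ping-pongs between two vertex buffers: one is drawn as input while the other is bound as the stream-out target, and then they are swapped. The sketch below is illustrative only - buffer creation, the stream-out geometry shader declaration, and the exact SlimDX binding calls are glossed over and may differ from the real implementation:

```csharp
using SlimDX.Direct3D11;
using Buffer = SlimDX.Direct3D11.Buffer;

public class ParticleSystemSketch {
    // Both buffers are assumed to be created with BindFlags.VertexBuffer | BindFlags.StreamOutput,
    // and the "update" technique's geometry shader compiled with a stream-out declaration.
    private Buffer _drawVB;
    private Buffer _streamOutVB;
    private const int ParticleStride = 32;   // illustrative vertex size in bytes

    public void Update(DeviceContext context) {
        // Feed last frame's particles in, and capture this frame's particles via stream-out.
        context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(_drawVB, ParticleStride, 0));
        context.StreamOutput.SetTargets(new StreamOutputBufferBinding(_streamOutVB, 0));

        // DrawAuto draws however many vertices the previous stream-out pass wrote.
        context.DrawAuto();

        // The real code unbinds the stream-out target here before the render pass,
        // then ping-pongs the buffers so next frame reads what was just written.
        var temp = _drawVB;
        _drawVB = _streamOutVB;
        _streamOutVB = temp;
    }
}
```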

The code for this example was adapted from Chapter 20 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, ported to C# and SlimDX. The full source for the example can be found at my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the ParticlesDemo project.

Below, you can see the results of adding two particle systems to our terrain demo. At the center of the screen, we have a flame particle effect, along with a rain particle effect.

(Screenshot: flame and rain particle systems over the terrain.)
Read More...

ericrrichards22
When I first learned about programming DirectX using shaders, it was back when DirectX 9 was the newest thing around. Back then, there were only two stages in the shader pipeline, the Vertex and Pixel shaders that we have been utilizing thus far. DirectX 10 introduced the geometry shader, which allows us to modify entire geometric primitives on the hardware, after they have gone through the vertex shader.

One application of this capability is rendering billboards. Billboarding is a common technique for rendering far-off objects or minor scene details, by replacing a full 3D object with a texture drawn to a quad that is oriented towards the viewer. This is much less performance-intensive, and for far-off objects and minor details, provides a good-enough approximation. As an example, many games use billboarding to render grass or other foliage, and the Total War series renders far-away units as billboard sprites (In Medieval Total War II, you can see this by zooming in and out on a unit; at a certain point, you'll see the unit "pop", which is the point where the Total War engine switches from rendering sprite billboards to rendering the full 3D model). The older way of rendering billboards required one to maintain a dynamic vertex buffer of the quads for the billboards, and to transform the vertices to orient towards the viewer on the CPU whenever the camera moved. Dynamic vertex buffers have a lot of overhead, because it is necessary to re-upload the geometry to the GPU every time it changes, along with the additional overhead of uploading four vertices per billboard. Using the geometry shader, we can use a static vertex buffer of 3D points, with only a single vertex per billboard, and expand the point to a camera-aligned quad in the geometry shader.
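To make the expansion concrete, here is the quad-building math sketched in C# (this mirrors what the geometry shader does per point; it is not the HLSL itself, and the y-axis-aligned behavior is an assumption typical for trees):

```csharp
using SlimDX;

public static class BillboardExpansion {
    // Expand a single point into the four corners of a camera-facing quad (y-axis aligned,
    // as is typical for trees). This mirrors the geometry shader's math; it is not the HLSL.
    public static Vector3[] Expand(Vector3 center, Vector2 size, Vector3 eyePosition) {
        var halfWidth = 0.5f * size.X;
        var halfHeight = 0.5f * size.Y;

        // Build a basis: "up" stays the world y-axis, "look" points from the quad to the eye.
        var up = new Vector3(0, 1, 0);
        var look = eyePosition - center;
        look.Y = 0;                      // keep the quad vertical
        look.Normalize();
        var right = Vector3.Cross(up, look);

        // The four corners of the quad, centered on the input point.
        return new[] {
            center + halfWidth * right - halfHeight * up,
            center + halfWidth * right + halfHeight * up,
            center - halfWidth * right - halfHeight * up,
            center - halfWidth * right + halfHeight * up,
        };
    }
}
```

Doing this on the GPU means the static point buffer never has to be touched when the camera moves.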

We'll illustrate this technique by porting the TreeBillboard example from Chapter 11 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. This demo builds upon our previous Alpha-blending example, adding some tree billboards to the scene. You can download the full code for this example from my GitHub repository, at https://github.com/ericrrichards/dx11.git under the TreeBillboardDemo project.

Read More...

ericrrichards22
In real-time lighting applications, like games, we usually only calculate direct lighting, i.e. light that originates from a light source and hits an object directly. The Phong lighting model that we have been using thus far is an example of this; we only calculate the direct diffuse and specular lighting. We either ignore indirect light (light that has bounced off of other objects in the scene), or approximate it using a fixed ambient term. This is very fast to calculate, but not terribly physically accurate. Physically accurate lighting models can model these indirect light bounces, but are typically too computationally expensive to use in a real-time application, which needs to render at least 30 frames per second. However, using the ambient lighting term to approximate indirect light has some issues, as you can see in the screenshot below. This depicts our standard skull and columns scene, rendered using only ambient lighting. Because we are using a fixed ambient color, each object is rendered as a solid color, with no definition. Essentially, we are making the assumption that indirect light bounces uniformly onto all surfaces of our objects, which is often not physically accurate.

(Screenshot: the skull and columns scene rendered with only the fixed ambient term.)
Naturally, some portions of our scene will receive more indirect light than other portions, if we were actually modeling the way that light bounces within our scene. Some portions of the scene will receive the maximum amount of indirect light, while other portions, such as the nooks and crannies of our skull, should appear darker, since fewer indirect light rays should be able to hit those surfaces because the surrounding geometry would, realistically, block those rays from reaching the surface.

In a classical global illumination scheme, we would simulate indirect light by casting rays from the object surface point in a hemispherical pattern, checking for geometry that would prevent light from reaching the point. Assuming that our models are static, this could be a viable method, provided we performed these calculations off-line; we would need to cast a large number of rays to produce an acceptable result, and performing that many intersection tests is very expensive. With animated models, this method very quickly becomes untenable; whenever the models in the scene move, we would need to recalculate the occlusion values, which is simply too slow to do in real-time.

Screen-Space Ambient Occlusion is a fast technique for approximating ambient occlusion, developed by Crytek for the game Crysis. We will initially draw the scene to a render target, which will contain the normal and depth information for each pixel in the scene. Then, we can sample this normal/depth surface to calculate occlusion values for each pixel, which we will save to another render target. Finally, in our usual shader effect, we can sample this occlusion map to modify the ambient term in our lighting calculation. While this method is not perfectly realistic, it is very fast, and generally produces good results. As you can see in the screen shot below, using SSAO darkens up the cavities of the skull and around the bases of the columns and spheres, providing some sense of depth.

(Screenshot: the scene with SSAO applied, darkening the crevices of the skull and the bases of the columns and spheres.)
The code for this example is based on Chapter 22 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. The example presented here has been stripped down considerably to demonstrate only the SSAO effects; lighting and texturing have been disabled, and the shadow mapping effects in Luna's example have been removed. The full code for this example can be found at my GitHub repository, https://github.com/ericrrichards/dx11.git, under the SSAODemo2 project. A more faithful adaptation of Luna's example can also be found in the 28-SsaoDemo project.

Read More...

ericrrichards22
As I mentioned last time, I'm going to move on from fiddling with my Terrain class for a little while, and start working on some physics code instead. I bought a copy of Ian Millington's Game Physics Engine Development some months ago and skimmed through it, but was too busy with other things to really get into the accompanying source code. Now, I do have some free cycles, so I'm planning on working through the examples from the book as my next set of posts.

Once again, the original source code is in C++, rather than the C# I'll be using. Millington's code also uses OpenGL and GLUT, rather than DirectX. Consequently, these aren't going to be straight ports like most of Frank Luna's examples were; I'll be porting the core physics code, and then for the examples, I'm just going to have to make something up that showcases the same features.

In any case, we'll start off with the simple particle physics of Chapters 3 & 4, and build a demo that simulates the ballistics of firing some different types of projectiles. You can find my source for this example on my GitHub page, at https://github.com/ericrrichards/dx11.git.
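As a taste of what the core particle class does, here is a minimal sketch of the integrator (after Millington's approach, with illustrative names and a made-up damping constant):

```csharp
using SlimDX;

public class Projectile {
    public Vector3 Position;
    public Vector3 Velocity;
    public Vector3 Acceleration;     // e.g. full gravity for the shell, reduced gravity for the fireball
    public float Damping = 0.995f;   // crude drag term, keeps the integration stable

    // Advance the projectile by dt seconds using simple Euler integration.
    public void Integrate(float dt) {
        if (dt <= 0f) return;
        Position += Velocity * dt;
        Velocity += Acceleration * dt;
        Velocity *= (float)System.Math.Pow(Damping, dt);   // frame-rate independent damping
    }
}
```

The four projectile types then just differ in their initial velocity, acceleration, and damping values.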
http://www.youtube.com/watch?feature=player_embedded&v=0X98m3WX8OA
Here you can see the four projectile types: 1) a pistol-type round, 2) a large artillery shell, 3) a fireball, 4) a bolt from a railgun or energy weapon.



Read more...
ericrrichards22
A couple of weeks ago as I was browsing HackerNews, I stumbled onto an article about creating spherical procedural maps by Andy Gainey. Procedural terrain/map generation is always something I find interesting, having grown up slightly obsessed with Civilization and its successors. Various methods of tiling a sphere in order to make a game grid have been bouncing around the back of my mind for a while now - unfortunately finding time to explore some of those ideas has been problematic. But, after reading through Mr. Gainey's piece and looking over the source for his demo, I got inspired to take a stab at something similar. My first attempt was to try to directly port his Javascript implementation, which got bogged down in the impedance mismatch between going from JS to C# and WebGL to DirectX at the same time, so I stepped back, and decided to come at the idea again, working in smaller steps.

The first step was to generate a spherical triangle mesh. For this purpose, the spherical mesh that we have previously worked with is not very well suited, as its topology is very inconsistent, particularly around the poles, as you can see in the screenshot below. Instead of this kind of latitude/longitude subdivision, we will instead use an alternate method, which starts from an icosahedron, then subdivides each face of the icosahedron into four triangles, and finally projects the resulting vertices onto the sphere. This produces a much more regular tessellation, known as a geodesic grid, as you can see in the right screenshot below. This tessellation is not without artifacts - at the vertices of the original icosahedron, the resulting geodesic will have five edges, while all other vertices have six edges, similar to a real-life soccer ball.


Latitude/longitude tessellation, with artifacts at the poles
Geodesic tessellation

Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, Chapter 6, presented a method of generating a geodesic sphere, which I previously skipped over. The code for this example is based upon his example, with some improvements to eliminate redundant vertices.
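A hedged sketch of one subdivision pass (not Luna's or my actual code) shows the key ideas: each triangle is split into four via edge midpoints, new vertices are pushed back onto the unit sphere, and a midpoint cache keyed by edge avoids duplicate vertices along shared edges:

```csharp
using System;
using System.Collections.Generic;
using SlimDX;

public static class GeosphereBuilder {
    // One subdivision pass: split every triangle into four by inserting edge midpoints,
    // projecting the new vertices onto the unit sphere. The midpoint cache, keyed by edge,
    // is what prevents duplicate vertices along edges shared by two triangles.
    public static void Subdivide(List<Vector3> vertices, List<int> indices) {
        var midpointCache = new Dictionary<long, int>();
        var newIndices = new List<int>();

        Func<int, int, int> midpoint = (i0, i1) => {
            // Order-independent key, so both triangles sharing this edge reuse the same vertex.
            var key = ((long)Math.Min(i0, i1) << 32) | (uint)Math.Max(i0, i1);
            int index;
            if (!midpointCache.TryGetValue(key, out index)) {
                var mid = Vector3.Normalize((vertices[i0] + vertices[i1]) * 0.5f);  // push onto the sphere
                index = vertices.Count;
                vertices.Add(mid);
                midpointCache.Add(key, index);
            }
            return index;
        };

        for (var i = 0; i < indices.Count; i += 3) {
            int a = indices[i], b = indices[i + 1], c = indices[i + 2];
            int ab = midpoint(a, b), bc = midpoint(b, c), ca = midpoint(c, a);
            // Each original triangle becomes four.
            newIndices.AddRange(new[] { a, ab, ca, ab, b, bc, ca, bc, c, ab, bc, ca });
        }
        indices.Clear();
        indices.AddRange(newIndices);
    }
}
```

Starting from the twelve unit-length icosahedron vertices and running this a few times gives the geodesic grid shown above.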

The code for this example can be obtained from my GitHub repository.


Read more...
ericrrichards22
For the past few weeks, I've been once again noodling on the idea of starting a .NET port of a classic Id FPS. As a kid on my first computer, an off-brand 486 with DOS, I just hit the tail end of the good old days of shareware. And amongst all the floppy disks of kiddy and educational software, sliming Gruzzles couldn't really hold a candle to exploring Indiana Jones and the Last Crusade-esque Gothic castles and knifing Nazis.

While the original source-code for Wolfenstein 3D has been available for some time, it is a bit of a slog trying to wade through C code that was written 20 years ago, with near and far pointers, blitting directly to VGA memory, and hand-rolled assembly routines, let alone build the project successfully. Consequently, converting over to C# is a bit of a struggle, particularly for some of the low-level pointer manipulation and when loading in binary assets - it is very helpful to be able to step through both codebases side by side in the debugger to figure out any discrepancies.

Because of these difficulties, I have started looking at the Chocolate Wolfenstein 3D project, by Fabien Sanglard. Mr. Sanglard is a great student of the Id Software engine source code, and has done several very nice writeups of the different engines open-sourced by Id. He was even planning on writing a book-length analysis of Wolfenstein 3D, which hopefully is still underway. Chocolate Wolfenstein 3D is a more modernized C++ conversion of the original Id code, with some of the more gnarly bits smoothed out, and using SDL for cross-platform window-management, graphics, input, and sound. Even better, it can be built and run using current versions of Visual Studio.

The only problem I had with the Chocolate Wolfenstein 3D GitHub repository is that it is missing some dependencies and requires a small amount of tweaking in order to get it to build and run on Visual Studio 2013. These steps are not particularly difficult, but if you simply clone the repo and hit F5, it doesn't work right out of the box. If you are working on a Mac, there is a very nice guide on setting up the project in XCode, but I have not found a similar guide for Windows, so I decided to document the steps that I went through and share those here.



Read more...
ericrrichards22
In our previous installment, we discussed the data structures that we will use to represent the graph which we will use for pathfinding on the terrain, as well as the initial pre-processing that was necessary to populate that graph with the information that our pathfinding algorithm will make use of. Now, we are ready to actually implement our pathfinding algorithm. We'll be using A*, probably the most commonly used graph search algorithm for pathfinding.


A* is one of the most commonly used pathfinding algorithms in games because it is fast, flexible, and relatively simple to implement. A* was originally a refinement of Dijkstra's graph search algorithm. Dijkstra's algorithm is guaranteed to determine the shortest path between any two nodes in a directed graph; however, because it only takes into account the cost of reaching an intermediate node from the start node, it tends to consider many nodes that are not on the optimal path. An alternative to Dijkstra's algorithm is Greedy Best-First search. Best-First uses a heuristic function to estimate the cost of reaching the goal from a given intermediate node, without reference to the cost of reaching the current node from the start node. This means that Best-First tends to consider far fewer nodes than Dijkstra, but is not guaranteed to produce the shortest path in a graph which includes obstacles that are not predicted by the heuristic.


A* blends these two approaches, by using a cost function (f(x)) to evaluate each node based on both the cost from the start node (g(x)) and the estimated cost to the goal (h(x)). This allows A* to find the optimal shortest path while considering fewer nodes than pure Dijkstra's algorithm. The number of intermediate nodes expanded by A* is somewhat dependent on the characteristics of the heuristic function used. There are generally three cases of heuristics that can be used to control A*, which result in different performance characteristics:

  • When h(x) underestimates the true cost of reaching the goal from the current node, A* will expand more nodes, but is guaranteed to find the shortest path.
  • When h(x) is exactly the true cost of reaching the goal, A* will only expand nodes along the shortest path, meaning that it runs very fast and produces the optimal path.
  • When h(x) overestimates the true cost of reaching the goal from the current node, A* will expand fewer intermediate nodes. Depending on how much h(x) overestimates the true cost, this may result in paths that are not the true shortest path; however, this does allow the algorithm to complete more quickly.

For games, we will generally use heuristics of the third class. It is important that we generate good paths when doing pathfinding for our units, but it is generally not necessary that they be mathematically perfect; they just need to look good enough, and the speed savings are very important when we are trying to cram all of our rendering and update code into just a few tens of milliseconds, in order to hit 30-60 frames per second.


A* uses two sets to keep track of the nodes that it is operating on. The first set is the closed set, which contains all of the nodes that A* has previously considered; this is sometimes called the interior of the search. The other set is the open set, which contains those nodes which are adjacent to nodes in the closed set, but which have not yet been processed by the A* algorithm. The open set is generally sorted by the calculated cost of the node (f(x)), so that the algorithm can easily select the most promising new node to consider. Because of this, we usually consider the open list to be a priority queue. The particular implementation of this priority queue has a large impact on the speed of A*; for best performance, we need to have a data structure that supports fast membership checks (is a node in the queue?), fast removal of the best element in the queue, and fast insertions into the queue. Amit Patel provides a good overview of the pros and cons of different data structures for the priority queue on his A* page; I will be using a priority queue derived from Blue Raja's Priority Queue class, which is essentially a binary heap. For our closed set, the primary operations that we will perform are insertions and membership tests, which makes the .Net HashSet class a good choice.
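Putting those pieces together, the core loop looks roughly like the sketch below. For self-containment it abstracts the graph behind delegates and scans a plain list for the lowest f = g + h, whereas the real code uses Blue Raja's binary-heap priority queue; the names are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class AStarSketch {
    // Generic A*: neighbors, step costs, and the heuristic are supplied as delegates.
    public static List<T> FindPath<T>(
        T start, T goal,
        Func<T, IEnumerable<T>> neighbors,
        Func<T, T, float> stepCost,     // g-cost of moving between two adjacent nodes
        Func<T, T, float> heuristic) {  // h(x): estimated cost from a node to the goal
        var closed = new HashSet<T>();                         // the interior of the search
        var open = new List<T> { start };                      // frontier (a binary heap in the real code)
        var gScore = new Dictionary<T, float> { { start, 0f } };
        var cameFrom = new Dictionary<T, T>();

        while (open.Count > 0) {
            // Pop the most promising node: lowest f(x) = g(x) + h(x).
            var current = open.OrderBy(n => gScore[n] + heuristic(n, goal)).First();
            if (EqualityComparer<T>.Default.Equals(current, goal)) {
                return ReconstructPath(cameFrom, current);
            }
            open.Remove(current);
            closed.Add(current);

            foreach (var neighbor in neighbors(current)) {
                if (closed.Contains(neighbor)) continue;
                var tentativeG = gScore[current] + stepCost(current, neighbor);
                float knownG;
                if (!gScore.TryGetValue(neighbor, out knownG) || tentativeG < knownG) {
                    gScore[neighbor] = tentativeG;             // found a cheaper route to this neighbor
                    cameFrom[neighbor] = current;
                    if (!open.Contains(neighbor)) open.Add(neighbor);
                }
            }
        }
        return null;                                           // no path between start and goal
    }

    private static List<T> ReconstructPath<T>(Dictionary<T, T> cameFrom, T current) {
        var path = new List<T> { current };
        while (cameFrom.TryGetValue(current, out current)) {
            path.Add(current);
        }
        path.Reverse();
        return path;
    }
}
```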

Read more...
ericrrichards22
This time, we are going to take a look at a special class of texture, the cube map, and a couple of the common applications for cube maps, skyboxes and environment-mapped reflections. Skyboxes allow us to model far away details, like the sky or distant scenery, to create a sense that the world is more expansive than just our scene geometry, in an inexpensive way. Environment-mapped reflections allow us to model reflections on surfaces that are irregular or curved, rather than on flat, planar surfaces as in our Mirror Demo.

The code for this example is adapted from the first part of Chapter 17 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. You can download the full source for this example from my GitHub repository, https://github.com/ericrrichards/dx11.git, under the CubeMap project.

Read More...

ericrrichards22

A few weeks back, I picked up a copy of James Buck's Mazes for Programmers: Code Your Own Twisty Little Passages. (The title is, of course, an homage to one of the original text adventure games, Colossal Cave Adventure, or Adventure for short, which featured a maze puzzle where every room was described as "You are in a maze of twisty little passages, all alike"). I read through most of it on my veranda in the mornings, looking out over Long Bay in Antigua while I was on vacation, wishing I had brought my laptop along with me (although my girlfriend is probably happy I did not...). Once I got back, I started diving into implementing some of the ideas from the book - it's been quite a joy, and a much needed break from debugging UCWA applications at the day job.

The code in the book is implemented in Ruby; however, I'm not a Rubyist, so I decided that I'd work through the examples in my language of choice, C#. The code is pretty simple, so porting it over has been pretty straightforward. I also figured it would be an interesting exercise for trying out some of the newer features in the more recent versions of C# - I've been stuck using C# 5.0 due to some tooling that we haven't had a chance to upgrade yet at work. Today, I'm going to mostly cover the examples from Chapter 2, which lays out the general plumbing of the framework that the rest of the book builds on, and implements two simple maze-building algorithms, the Binary Tree and Sidewinder algorithms. For the full code, you can check out my GitHub repository; things there may change slightly from the version presented here as I work through more of the book, but the general structure should remain similar.
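As a flavor of how simple the Binary Tree algorithm is, here is a hedged sketch (the grid representation is a stand-in, not the book's or the port's actual Cell/Grid classes): for each cell, carve a passage either north or east, chosen at random when both are available:

```csharp
using System;

public static class BinaryTreeMaze {
    // passages[row, col, 0] = passage carved north, passages[row, col, 1] = passage carved east.
    // Row 0 is taken as the northern edge of the grid here.
    public static bool[,,] Carve(int rows, int cols, Random rng) {
        var passages = new bool[rows, cols, 2];
        for (var r = 0; r < rows; r++) {
            for (var c = 0; c < cols; c++) {
                var canGoNorth = r > 0;
                var canGoEast = c < cols - 1;
                if (!canGoNorth && !canGoEast) continue;       // the north-east corner cell
                // Use whichever direction is available; flip a coin when both are.
                var goNorth = canGoNorth && (!canGoEast || rng.Next(2) == 0);
                if (goNorth) passages[r, c, 0] = true;
                else passages[r, c, 1] = true;
            }
        }
        return passages;
    }
}
```

Sidewinder is a small variation on the same idea, carving eastward runs of cells and punching one northern passage per run.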
Video:
http://richardssoftware.blob.core.windows.net/video/binaryTree.mp4

http://richardssoftware.blob.core.windows.net/video/sidewinder.mp4

Read more...

ericrrichards22
Last time, we looked at using cube maps to render a skybox around our 3D scenes, and also how to use that sky cubemap to render some environmental reflections onto our scene objects. While this method of rendering reflections is relatively cheap, performance-wise and can give an additional touch of realism to background geometry, it has some serious limitations if you look at it too closely. For one, none of our local scene geometry is captured in the sky cubemap, so, for instance, you can look at our reflective skull in the center and see the reflections of the distant mountains, which should be occluded by the surrounding columns. This deficiency can be overlooked for minor details, or for surfaces with low reflectivity, but it really sticks out if you have a large, highly reflective surface. Additionally, because we are using the same cubemap for all objects, the reflections at any object in our scene are not totally accurate, as our cubemap sampling technique does not differentiate on the position of the environment mapped object in the scene.

The solution to these issues is to render a cube map, at runtime, for each reflective object using Direct3D. By rendering the cubemap for each object on the fly, we can incorporate all of the visible scene details, (characters, geometry, particle effects, etc) in the reflection, which looks much more realistic. This is, of course, at the cost of the additional overhead involved in rendering these additional cubemaps each frame, as we have to effectively render the whole scene six times for each object that requires dynamic reflections.
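Conceptually, rendering the dynamic cube map means building six 90-degree views centered on the reflective object, one per cube face. A hedged sketch of the view matrices follows (the per-face up vectors follow the usual D3D cube-face convention; the helper itself is illustrative, not the demo's camera code):

```csharp
using System;
using SlimDX;

public static class DynamicCubeMapCameras {
    // Six view matrices, one per cube face, all positioned at the reflective object's center.
    public static Matrix[] BuildCubeFaceViews(Vector3 center) {
        var targets = new[] {
            center + Vector3.UnitX, center - Vector3.UnitX,   // +X, -X
            center + Vector3.UnitY, center - Vector3.UnitY,   // +Y, -Y
            center + Vector3.UnitZ, center - Vector3.UnitZ,   // +Z, -Z
        };
        // Up vectors chosen per face to match the D3D cube-face layout.
        var ups = new[] {
            Vector3.UnitY, Vector3.UnitY,
            -Vector3.UnitZ, Vector3.UnitZ,
            Vector3.UnitY, Vector3.UnitY,
        };
        var views = new Matrix[6];
        for (var i = 0; i < 6; i++) {
            views[i] = Matrix.LookAtLH(center, targets[i], ups[i]);
        }
        return views;
        // Each face is then rendered with a square, 90-degree projection, e.g.:
        // var proj = Matrix.PerspectiveFovLH((float)Math.PI / 2, 1.0f, 0.1f, 1000.0f);
    }
}
```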

This example is based on the second portion of Chapter 17 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, with the code ported to C# and SlimDX from the native C++ used in the original example. You can download the full source for this example from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the DynamicCubeMap project.

Read More...

ericrrichards22
A while back I picked up Frank Luna's Introduction to 3D Game Programming with DirectX 11. I've been meaning to start going through it for some time, but some personal things have cut into my programming time significantly. I recently acquired a two-year-old Australian Cattle Dog, and Dixie has kept me on the dead run for weeks... It's very difficult to muster up the courage to dive into a programming project when you feel half zombified from getting up at 4:30 to let the dog out so she will stop her adorable yet frenzied licking attacks on your fiancée.

But today, I'm going to start. I have not dealt with DirectX 11 or 10 before; it's only in the last year or so that I built myself a modern desktop to replace my badly aging laptop from 2006. I do have some experience playing around with DirectX 9, so it will be interesting to see what has changed.

As I mentioned, I'll be going through Introduction to 3D Game Programming with DirectX 11 by Frank D. Luna. At first glance, it seems very similar to his previous book, Introduction to 3D Game Programming with DirectX 9.0c: A Shader Approach, which I found to be very well-written, with easy-to-follow code examples. One wrinkle is that I'm going to be doing this in C#, with SlimDX, rather than in C++ and straight DirectX 11. At my job, I mostly write C#, and have developed a serious fondness for it. Additionally, I've found that attempting to port C++ code over to C# is a somewhat interesting exercise, and not a bad way to learn some of the darker and less-common corners of the C# standard libraries.

I plan on doing a post on each major example, so there may be a couple per chapter; I don't know what kind of pace my schedule will allow. If you're interested in looking at my code as I'm going along, you can clone my Git repository at https://github.com/ericrrichards/dx11.git. I'm not going to include the original book code (I couldn't find a license specifically for the code, and I'm struggling to parse the legalese on the inside cover of the book), but you can download it from the book's website at http://www.d3dcoder.net/d3d11.htm.

(This was originally posted at http://richardssoftware.blogspot.com/2013/07/learning-directx-11-via-slimdx.html)
ericrrichards22
I'm going to skip over the first few chapters of Introduction to 3D Game Programming with Direct3D 11.0. I've got a basic understanding of the linear algebra needed for 3D graphics, and I'm not eager to get into the weeds there, instead opting to go straight into the meat of DirectX coding. Besides the math prerequisites, these chapters also give an overview of the XNA Math C++ library, which I am going to forgo in favor of the SlimDX types instead.

With that out of the way, we will begin with Chapter 4, Direct3D Initialization. This chapter covers basic initialization of the Direct3D device, and lays out the application framework used for the rest of the book. If you've read any of Mr. Luna's other DirectX books, this framework will look very familiar, and even if you have not, it is basically a canonical example of a Windows game loop implementation. The framework provides a base class that handles creating a window, initializing DirectX, and provides some virtual methods that can be overridden in derived classes for application-specific logic. Additionally, this chapter introduces a timer class, which allows for updating the game world based on the elapsed time per frame, and for determining the frame rate, using the high-performance system timer.
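As a rough sketch of the timer idea, .NET's Stopwatch wraps the same high-performance counter; the book's GameTimer additionally supports pausing and tracks total running time, which this illustrative version omits:

```csharp
using System.Diagnostics;

public class SimpleGameTimer {
    private readonly Stopwatch _stopwatch = Stopwatch.StartNew();
    private double _lastTime;

    public float DeltaTime { get; private set; }   // seconds elapsed since the previous Tick

    // Call once per frame, before updating and drawing the scene.
    public void Tick() {
        var now = _stopwatch.Elapsed.TotalSeconds;
        DeltaTime = (float)(now - _lastTime);
        _lastTime = now;
    }
}
// Typical use in the message loop: timer.Tick(); UpdateScene(timer.DeltaTime); DrawScene();
```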
Read More...
ericrrichards22
Well, it's been two rough months since the last post. Work has been crazy, and I've been wrestling with finishing up my never-ending Voronoi mapping project. So I decided to switch gears and return to the HLSL Development Cookbook. Now that I've got code in place to handle loading the .sdkmesh model files used by the book's examples, the hardest part is done. Now it is just a matter of converting plain HLSL vertex and pixel shaders over into the Effects Framework format I've been using, and porting the DXUT C++ code over to C# and my SlimDX application framework.

The first chapter of the HLSL Development Cookbook covers classic forward lighting, which is the style of lighting that we have used throughout the examples posted here thus far (see Directional, Point and Spot lights and three-point lighting). Later, the book covers some more complicated lighting schemes, but we'll get to that in due time.

Up first on the slate is ambient lighting. If you've been reading along with my posts, you may recall that we briefly discussed ambient lighting when defining our Light and Material classes. Ambient lighting is something of a hack that was invented to fake the effect of indirect lighting (light that hits a surface after bouncing off one or more other surfaces) without having to actually perform the computations to model the light rays. Historically, back in the days before programmable shaders, this ambient lighting was expressed as constant color values for the lights and materials in the scene being rendered. So far, I have followed this simple scheme in the shaders I adapted from Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. As you can see in the screenshot on the left below, this method results in a flat ambient color, which conceals the surface details of the mesh. Combined with the diffuse and specular terms in the traditional Phong reflection equation, this method works well, but on its own it is underwhelming.

HLSL Development Cookbook presents an alternative method of calculating ambient lighting, called hemispheric ambient lighting. This method takes advantage of programmable pixel shaders, and allows us to calculate a varying ambient color per pixel, based on the interpolated surface normal and two constant color values representing the color of light that is bounced off the ground and off the ceiling/sky. As you can see in the screenshot on the right below, this lighting method preserves the surface details of the mesh (so long as different colors are selected for the top and bottom colors). As we'll see, this method of computing ambient lighting is also pretty cheap to do computationally.
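The heart of the technique is a one-line blend. Sketched in C# for clarity (in the demo it lives in the pixel shader, and treating colors as RGB vectors is just a convenience here):

```csharp
using SlimDX;

public static class HemisphericAmbient {
    // Blend a ground color and a sky color by how far the surface normal points up.
    // Colors are treated as RGB vectors here.
    public static Vector3 Evaluate(Vector3 unitNormal, Vector3 groundColor, Vector3 skyColor) {
        // Map the normal's y component from [-1, 1] to [0, 1]: 0 = facing down, 1 = facing up.
        var up = unitNormal.Y * 0.5f + 0.5f;
        return Vector3.Lerp(groundColor, skyColor, up);
    }
}
```

Because the result varies with the interpolated normal, the mesh's surface detail survives even when only the ambient term is rendered.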


Old-style flat ambient lighting
Hemispherical ambient lighting

As always, the full source code for this post is available at my GitHub repository.

Read more...
ericrrichards22
Today, we are going to cover a couple of additional techniques that we can use to achieve more realistic lighting in our 3D scenes. Going back to our first discussion of lighting, recall that thus far, we have been using per-pixel Phong lighting. This style of lighting was an improvement upon the earlier method of Gouraud lighting, by interpolating the vertex normals over the resulting surface pixels, and calculating the color of an object per-pixel, rather than per-vertex. Generally, the Phong model gives us good results, but it is limited, in that we can only specify the normals to be interpolated from at the vertices. For objects that should appear smooth, this is sufficient to give realistic-looking lighting; for surfaces that have more uneven textures applied to them, the illusion can break down, since the specular highlights computed from the interpolated normals will not match up with the apparent topology of the surface.

(Screenshot: the stone columns rendered with standard Phong lighting.)


In the screenshot above, you can see that the highlights on the nearest column are very smooth, and match the geometry of the cylinder. However, the column has a texture applied that makes it appear to be constructed out of stone blocks, jointed with mortar. In real life, such a material would have all kinds of nooks and crannies and deformities that would affect the way light hits the surface and create much more irregular highlights than in the image above. Ideally, we would want to model those surface details in our scene, for the greatest realism. This is the motivation for the techniques we are going to discuss today.


One technique to improve the lighting of textured objects is called bump or normal mapping. Instead of just using the interpolated pixel normal, we will combine it with a normal sampled from a special texture, called a normal map, which allows us to match the per-pixel normal to the perceived surface texture, and achieve more believable lighting.
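Sketched in C# for clarity (this mirrors the per-pixel HLSL, it is not the shader itself), the normal-map sample is unpacked from color space and rotated from tangent space into world space using the interpolated normal and tangent:

```csharp
using SlimDX;

public static class NormalMapping {
    // Unpack a normal-map sample from [0,1] color space and rotate it from tangent space
    // into world space using the interpolated surface normal and tangent (the TBN basis).
    public static Vector3 NormalSampleToWorld(Vector3 normalMapSample, Vector3 unitNormalW, Vector3 tangentW) {
        // Uncompress each component from [0, 1] back to [-1, 1].
        var normalT = 2.0f * normalMapSample - new Vector3(1.0f, 1.0f, 1.0f);

        // Build an orthonormal basis: re-orthogonalize the tangent against the normal.
        var n = unitNormalW;
        var t = Vector3.Normalize(tangentW - Vector3.Dot(tangentW, n) * n);
        var b = Vector3.Cross(n, t);

        // Transform the sampled normal from tangent space to world space.
        return normalT.X * t + normalT.Y * b + normalT.Z * n;
    }
}
```

The lighting calculation then uses this world-space normal in place of the interpolated vertex normal.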


The other technique is called displacement mapping. Similarly, we use an additional texture to specify the per-texel surface details, but this time, rather than a surface normal, the texture, called a displacement map or heightmap, stores an offset that indicates how much the texel sticks out or is sunken in from its base position. We use this offset to modify the position of the vertices of an object along the vertex normal. For best results, we can increase the tessellation of the mesh using a domain shader, so that the vertex resolution of our mesh is as great as the resolution of our heightmap. Displacement mapping is often combined with normal mapping, for the highest level of realism.


Normal mapped columns

Displacement mapped columns


This example is based off of Chapter 18 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. You can download the full source for this example from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the NormalDisplacementMaps project.


NOTE: You will need to have a DirectX 11 compatible video card in order to use the displacement mapping method presented here, as it makes use of the Domain and Hull shaders, which are new to DX 11.


Read More...
ericrrichards22
In our last example on normal mapping and displacement mapping, we made use of the new Direct3D 11 tessellation stages when implementing our displacement mapping effect. For the purposes of the example, we did not examine too closely the concepts involved in making use of these new features, namely the Hull and Domain shaders. These new shader types are sufficiently complicated that they deserve a separate treatment of their own, particularly since we will continue to make use of them for more complicated effects in the future.

The Hull and Domain shaders are covered in Chapter 13 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, which I had previously skipped over. Rather than use the example from that chapter, I am going to use the shader effect we developed for our last example instead, so that we can dive into the details of how the hull and domain shaders work in the context of a useful example that we have some background with.

The primary motivation for using the tessellation stages is to offload work from the CPU and main memory onto the GPU. We have already looked at a couple of the benefits of this technique in our previous post, but some of the advantages of using the tessellation stages are:


  • We can use a lower detail mesh, and specify additional detail using less memory-intensive techniques, like the displacement mapping technique presented earlier, to produce the final, high-quality mesh that is displayed.
  • We can adjust the level of detail of a mesh on-the-fly, depending on the distance of the mesh from the camera or other criteria that we define.
  • We can perform expensive calculations, like collisions and physics calculations, on the simplified mesh stored in main memory, and still render the highly-detailed generated mesh.

The Tessellation Stages

The tessellation stages sit in the graphics pipeline between the vertex shader and the geometry shader. When we render using the tessellation stages, the vertices created by the vertex shader are not really the vertices that will be rendered to the screen; instead, they are control points which define a triangular or quad patch, which will be further refined by the tessellation stages into vertices. For most of our usages, we will either be working with triangular patches, with 3 control points, or quad patches, with 4 control points, which correspond to the corner vertices of the triangle or quad. Direct3D 11 supports patches with up to 32 control points, which might be suitable for rendering meshes based on Bezier curves.

The tessellation stages can be broken down into three component stages:


  • Hull Shader Stage - The hull shader operates on each control point for a geometry patch, and can add, remove or modify its input control points before passing the patch on to the tessellator stage. The hull shader also calculates the tessellation factors for a patch, which instruct the tessellator stage how to break the patch up into individual vertices. The hull shader is fully programmable, meaning that we need to define an HLSL function that will be evaluated to construct the patch control points and tessellation factors.
  • Tessellator Stage - The tessellator stage is a fixed-function stage (meaning that we do not have to write a shader for it), which samples the input patch and generates a set of vertices that divide the patch, according to the tessellation factors supplied by the hull shader and a partitioning scheme, which defines the algorithm used to subdivide the patch. Vertices created by the tessellator are normalized; i.e. quad patch vertices are specified by their (u,v) coordinates on the surface of the quad, while triangle patch vertices use barycentric coordinates to specify their location within the triangle patch.
  • Domain Shader Stage - The domain shader is a programmable stage (we need to write a shader function for it), which operates on the normalized vertices input from the tessellator stage, and maps them into their final positions within the patch. Typically, the domain shader will interpolate the final vertex value from the patch control points using the uv or barycentric coordinates output by the tessellator, as sketched below. The output vertices from the domain shader will then be passed along to the next stage in the pipeline, either the geometry shader or the pixel shader.
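For a triangle patch, the interpolation the domain shader performs is just a barycentric weighted sum of the control points; here it is sketched in C# for clarity (the displacement-map offset along the normal that the actual effect applies afterwards is omitted):

```csharp
using SlimDX;

public static class DomainShaderMath {
    // Interpolate the final vertex position of a triangle patch from its three control points,
    // using the barycentric coordinates (which sum to 1) handed in by the tessellator.
    public static Vector3 InterpolatePatchVertex(Vector3 bary, Vector3 cp0, Vector3 cp1, Vector3 cp2) {
        return bary.X * cp0 + bary.Y * cp1 + bary.Z * cp2;
    }
}
```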

With these definitions out of the way, we can now dive into the displacement mapping effect from our previous example and examine just how the tessellation stages generate the displacement mapped geometry we see on the screen.

Read More...