  • entries: 64
  • comments: 46
  • views: 79,756

Entries in this blog

ericrrichards22

This weekend, I updated my home workstation from Windows 8 to Windows 8.1. Just before doing this, I had done a bunch of work on my SSAO implementation, which I was intending to write up here once I got back from a visit home to do some deer hunting and help my parents get their firewood in. When I got back, I fired up my machine, and loaded up VS to run the SSAO sample to grab some screenshots. Immediately, my demo application crashed, while trying to create the DirectX 11 device. I had done some work over the weekend to downgrade the vertex and pixel shaders in the example to SM4, so that they could run on my laptop, which has an older integrated Intel video card that only supports DX10.1. I figured that I had borked something up in the process, so I tried running some of my other, simpler demos. The same error message popped up: DXGI_ERROR_UNSUPPORTED. Now, I am running a GTX 560 Ti, so I know Direct3D 11 should be supported.

However, I have been using Nvidia's driver update tool to keep myself at the latest and greatest driver version, so I figured that perhaps the latest driver I downloaded had some bugs. Go to Nvidia's site, check for any updates. Looks like I have the latest driver. Hmm...

So I turned again to Google, trying to find some reason why I would suddenly be unable to create a DirectX device. The fourth result I found was this: http://stackoverflow.com/questions/18082080/d3d11-create-device-debug-on-windows-8-1. Apparently I need to download the Windows 8.1 SDK, now. I'm guessing that, since I had VS installed prior to updating, I didn't get the latest SDK installed, and the Windows 8 SDK, which I did have installed, wouldn't cut it anymore, at least when trying to create a debug device. So I went ahead and installed the 8.1 SDK from here. Restart VS, rebuild the project in question, and now it runs perfectly. Argh. At least it's working again; I just wish I didn't have to waste an hour futzing around with it...
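In case it's useful to anyone hitting the same error, the failure and the workaround look something like this in SlimDX (a minimal sketch; the real fix is installing the 8.1 SDK, but falling back to a non-debug device at least keeps the demos running):

```csharp
using SlimDX.Direct3D11;

public static class DeviceFactory {
    // Minimal sketch: try to create a debug device, falling back to a plain
    // hardware device if the debug layer is unavailable (e.g. the Windows 8.1
    // SDK is missing on Windows 8.1, which surfaces as DXGI_ERROR_UNSUPPORTED).
    public static Device Create() {
        try {
            return new Device(DriverType.Hardware, DeviceCreationFlags.Debug);
        } catch (Direct3D11Exception) {
            return new Device(DriverType.Hardware, DeviceCreationFlags.None);
        }
    }
}
```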

Originally posted at http://www.richardssoftware.net/2013/11/windows-81-and-slimdx.html

ericrrichards22

A little over two years ago, I first saw Amit Patel's article on Polygonal Map Generation, and thought it was incredibly cool. The use of Voronoi regions created a very nice, slightly irregular look, compared to grid-based terrains. At the time, I had just finished up working on my DX11 random terrain code, and it looked like a fun project to try to tackle.

I then proceeded to spend several months messing around with different implementations of Fortune's Algorithm in C# to get started and generate the Voronoi polygons used to generate a terrain along the lines of Amit's example. At this point, I've lost track of all of the different versions that I've melded together to produce the code I ended up with, but some of the more influential are:


  • The original implementation of the algorithm by Steve Fortune
  • This JS version by Nicolas Garcia Belmonte
  • This C++ version, which is a port of the ActionScript version used in the original Amit Patel map generator.

The original goal was to create a map generator, suitable for a kind of overworld/strategic level map. But, alas, life happened, and I got bogged down before I got that far. I did, however, wind up with a fairly cool tool for generating Voronoi diagrams. Because I had spent so much time trying to iron out bugs in my implementation of the algorithm, I ended up producing a WinForms application that allows you to step through the algorithm one iteration at a time, visualizing the sites that are added to the diagram, the vertices and edges, as well as the position of the beach and sweep lines. Eventually I also worked in options to show the circles through three sites that define where a Voronoi vertex is located, as well as the Delaunay triangulation of the sites.
72_voronoi.PNG
Voronoi regions, with the edges drawn in white, and the sites as the blue points.

72_delauney.PNG
Delaunay triangulation, with triangle edges in green.

72_both.PNG
Showing both the Voronoi regions and the Delaunay triangles.

I won't pretend that this code is fantastic, but it's kind of interesting, and I worked at it for quite a while, so better to have it out here than moldering on a hard drive somewhere. If you'd like to see the source, it is available on GitHub. You can also download the executable below if you like - I make no promises that it will work everywhere, but it is a pretty standard .Net 4.5 Windows Forms application. I've also got some videos below the fold, if you'd like to see this in action.

Download Voronoi

Read more...

ericrrichards22
We're going to wrap up our exploration of Chapter 8 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0 by implementing one of the exercises from the end of the chapter. This exercise asks us to render a cube, similar to our Crate Demo, but this time to show a succession of different textures, in order to create an animation, similar to a child's flip book. Mr. Luna suggests that we simply load an array of separate textures and swap them based on our simulation time, but we are going to go one step beyond, and implement a texture atlas, so that we will have all of the frames of animation composited into a single texture, and we can select the individual frames by changing our texture coordinate transform matrix. We'll wrap this functionality up into a little utility class that we can then reuse.
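As a rough sketch of the idea (names and layout are illustrative, not necessarily the final utility class): if the frames are laid out in a grid, selecting frame i is just a scale and a translation in texture space:

```csharp
using SlimDX;

// Sketch of the texture-atlas idea: frames laid out in a (cols x rows)
// grid; selecting frame i becomes a texture-coordinate transform
// (scale uv down to one cell, then offset to that cell's origin).
public static class TexAtlas {
    public static Matrix FrameTransform(int frame, int cols, int rows) {
        var sx = 1.0f / cols;
        var sy = 1.0f / rows;
        var x = (frame % cols) * sx;   // u-offset of the frame's cell
        var y = (frame / cols) * sy;   // v-offset of the frame's cell
        return Matrix.Scaling(sx, sy, 1.0f) * Matrix.Translation(x, y, 0.0f);
    }
}
```

Each frame, you would compute something like frame = (int)(time * frameRate) % (cols * rows) and upload the resulting matrix as the texture transform.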

firebox_thumb%25255B1%25255D.png?imgmax=800

https://www.youtube.com/watch?feature=player_embedded&v=6Q8o06YRxc0

Read More...

ericrrichards22

This time, we are going to take the scene that we used for the Shapes Demo, and apply a three-point lighting shader. We'll replace the central sphere from the scene with the skull model that we loaded from a file in the Skull Demo, to make the scene a little more interesting. We will also do some work encapsulating our shader in a C# class, as we will be using this shader effect as a basis that we will extend when we look at texturing, blending and other effects. As always, the full code for this example can be found at my Github repository https://github.com/ericrrichards/dx11.git; the project for this example can be found in the DX11 solution under Examples/LitSkull.
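To give a flavor of the rig before diving in, here is a sketch of the three lights in data, assuming the DirectionalLight structure from the earlier lighting demos (all values are illustrative):

```csharp
using SlimDX;

// Sketch of a three-point rig; DirectionalLight is assumed to be the
// structure from the earlier lighting demos, and the colors/directions
// here are made up for illustration.
public static class ThreePointRig {
    public static DirectionalLight[] Create() {
        return new[] {
            // Key: the main, brightest light.
            new DirectionalLight {
                Diffuse = new Color4(1.0f, 1.0f, 0.9f),
                Direction = new Vector3(0.57735f, -0.57735f, 0.57735f)
            },
            // Fill: dimmer, at an angle to the key, softens its shadows.
            new DirectionalLight {
                Diffuse = new Color4(0.4f, 0.4f, 0.4f),
                Direction = new Vector3(-0.57735f, -0.57735f, 0.57735f)
            },
            // Back: separates the silhouette from the background.
            new DirectionalLight {
                Diffuse = new Color4(0.2f, 0.2f, 0.2f),
                Direction = new Vector3(0.0f, -0.707f, -0.707f)
            }
        };
    }
}
```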

litskull1_thumb.png?imgmax=800 litskull2_thumb.png?imgmax=800 litskull3_thumb.png?imgmax=800

Left to Right: Rendering the scene with 1 key light, rendering the scene with key and fill lights, rendering the scene with key, fill and back lights

Read More...
ericrrichards22

Texturing 101


This time around, we are going to begin with a simple texturing example. We'll draw a simple cube, and apply a crate-style texture to it. We'll need to make some changes to our Basic.fx shader code, as well as the C# wrapper class, BasicEffect. Lastly, we'll need to create a new vertex structure, which will contain, in addition to the position and normal information we have been using, a uv texture coordinate. If you are following along in Mr. Luna's book, this would be Chapter 8, the Crate Demo. You can see my full code for the demo at https://github.com/ericrrichards/dx11.git, under DX11/CrateDemo.
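The new vertex format looks something like this (a sketch; the names are illustrative):

```csharp
using System.Runtime.InteropServices;
using SlimDX;

// Sketch of the new vertex format: position and normal as before, plus
// a uv texture coordinate. Sequential layout keeps the stride equal to
// the sum of the fields (12 + 12 + 8 = 32 bytes).
[StructLayout(LayoutKind.Sequential)]
public struct Basic32 {
    public Vector3 Position;
    public Vector3 Normal;
    public Vector2 Tex;

    public Basic32(Vector3 position, Vector3 normal, Vector2 uv) {
        Position = position;
        Normal = normal;
        Tex = uv;
    }

    public const int Stride = 32;
}
```

The matching input layout would declare POSITION, NORMAL and TEXCOORD elements at byte offsets 0, 12 and 24.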

crate_thumb%25255B1%25255D.png?imgmax=800

Read More...

ericrrichards22

This time around, we are going to revisit our old friend, the Waves Demo, and add textures to the land and water meshes. We will also be taking advantage of the gTexTransform matrix of our Basic.fx shader to tile our land texture multiple times across our mesh, to achieve more detail, and use tiling and translations on our water mesh texture to create a simple but very visually appealing animation for our waves. This demo corresponds to the TexturedHillsAndWaves demo from Chapter 8 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. You can download the full source for this example from my GitHub repository at https://github.com/ericrrichards/dx11.git.
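To give a flavor of how gTexTransform gets used here, a quick sketch (the tiling factor and scroll rates are made up for illustration):

```csharp
using SlimDX;

public class WaterTexAnimator {
    private Vector2 _offset;

    // Sketch: accumulate a small translation each frame to scroll the
    // water texture; dt is the frame time in seconds.
    public Matrix Update(float dt) {
        _offset.X += 0.1f * dt;
        _offset.Y += 0.05f * dt;
        return Matrix.Translation(_offset.X, _offset.Y, 0.0f);
    }

    // Tiling the land texture is just a scale of the uv coordinates.
    public static readonly Matrix LandTexTransform = Matrix.Scaling(5.0f, 5.0f, 0.0f);
}
```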

Here is a still of our finished project this time:

texHills_thumb%25255B1%25255D.png?imgmax=800

I'm also going to try to upload a video of this demo in action, as the static screenshot doesn't quite do it justice:

https://www.youtube.com/watch?feature=player_embedded&v=Xb5k3NxiPeo

Read More...

ericrrichards22
Typically, in a strategy game, in addition to the triangle mesh that we use to draw the terrain, there is an underlying logical representation, usually dividing the terrain into rectangular or hexagonal tiles. This grid is generally what is used to order units around, construct buildings, select targets and so forth. To do all this, we need to be able to select locations on the terrain using the mouse, so we will need to implement terrain/mouse-ray picking for our terrain, similar to what we have done previously, with model triangle picking.

We cannot simply use the same techniques that we used earlier for our terrain, however. For one, in our previous example, we were using a brute-force linear searching technique to find the picked triangle out of all the triangles in the mesh. That worked in that case; however, the mesh that we were trying to pick contained only 1850 triangles. I have been using a terrain in these examples that, when fully tessellated, is 2049x2049 vertices, which means that our terrain consists of more than 8 million triangles. It's pretty unlikely that we could manage to use the same brute-force technique with that many triangles, so we need to use some kind of space partitioning data structure to reduce the portion of the terrain that we need to consider for intersection.

Additionally, we cannot really perform a per-triangle intersection test in any case, since our terrain uses a dynamic LOD system. The triangles of the terrain mesh are only generated on the GPU, in the hull shader, so we don't have access to the terrain mesh triangles on the CPU, where we will be doing our picking. Because of these two constraints, I have decided to use a quadtree of axis-aligned bounding boxes to implement picking on the terrain. Using a quadtree speeds up our intersection testing considerably, since most of the time we will be able to exclude three-fourths of our terrain from further consideration at each level of the tree. This also maps quite nicely to the concept of a grid layout for representing our terrain, and allows us to select individual terrain tiles fairly efficiently, since the bounding boxes at the terminal leaves of the tree will thus encompass a single logical terrain tile. In the screenshot below, you can see how this works; the boxes drawn in color over the terrain are at double the size of the logical terrain tiles, since I ran out of video memory drawing the terminal bounding boxes, but you can see that the red ball is located in the upper quadrant of the white bounding box containing it.

bvh_thumb2.png?imgmax=800
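In rough outline, the quadtree intersection test looks something like the following sketch (assuming a simple node type with a SlimDX BoundingBox and four children):

```csharp
using SlimDX;

// Sketch of the quadtree pick: descend only into children whose
// bounding boxes the ray intersects, so roughly three-fourths of the
// terrain drops out at each level. Leaves map to logical terrain tiles.
public class QuadTreeNode {
    public BoundingBox Bounds;
    public QuadTreeNode[] Children;   // null at the leaves

    public QuadTreeNode Intersect(Ray ray) {
        float dist;
        if (!Ray.Intersects(ray, Bounds, out dist)) return null;
        if (Children == null) return this;   // leaf: this is the picked tile
        foreach (var child in Children) {
            var hit = child.Intersect(ray);
            // NOTE: a fuller version would compare hit distances and
            // return the nearest leaf, rather than the first one found.
            if (hit != null) return hit;
        }
        return null;
    }
}
```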

Read more "
ericrrichards22
Last time, we discussed terrain rendering, using the tessellation stages of the GPU to render the terrain mesh with distance-based LOD. That method required a DX11-compliant graphics card, since the Hull and Domain shader stages are new to Direct3D 11. According to the latest Steam Hardware survey, nearly 65% of gamers have a DX11 graphics card, which is certainly the majority of potential users, and only likely to increase in the future. Of the remaining 35% of gamers, 31% are still using DX10 graphics cards. While we can safely ignore the small percentage of the market that is still limping along on DX9 cards (I myself still have an old laptop with a GeForce Go 7400, but that machine is seven years old and on its last legs), restricting ourselves to only DX11 cards cuts out a third of the potential users of your application. For that reason, I'm going to cover an alternative, CPU-based implementation of our previous LOD terrain rendering example. If you have the option, I would suggest that you only bother with the previous DX11 method, as tessellating the terrain mesh yourself on the CPU is more complex, prone to error, less performant, and produces a somewhat lower quality result; if you must support DX10 graphics cards, however, this method or one similar to it will do the job, while the hull/domain shader method will not.

We will be implementing this rendering method as an additional render path in our Terrain class, which we will use if we detect that the user has a DX10-compatible graphics card. This allows us to reuse a large chunk of the previous code. For the rest, we will adapt portions of the HLSL shader code that we previously implemented into C#, as well as draw some inspiration from Chapter 4 of Carl Granberg's Programming an RTS Game with Direct3D. The full code for this example can be found at my GitHub repository, https://github.com/ericrrichards/dx11.git, under the TerrainDemo project.
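The render-path decision itself can be as simple as a feature-level check at startup, something like this sketch:

```csharp
using SlimDX.Direct3D11;

public static class TerrainPaths {
    // Sketch: choose the terrain render path from the hardware feature
    // level. Level_11_0 devices get the hull/domain-shader path;
    // DX10-class cards (Level_10_0/10_1) fall back to CPU tessellation.
    public static bool UseGpuTessellation() {
        return Device.GetSupportedFeatureLevel() >= FeatureLevel.Level_11_0;
    }
}
```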

dx10-terrain_thumb%25255B1%25255D.png?imgmax=800

Read More...

ericrrichards22
In real-time lighting applications, like games, we usually only calculate direct lighting, i.e. light that originates from a light source and hits an object directly. The Phong lighting model that we have been using thus far is an example of this; we only calculate the direct diffuse and specular lighting. We either ignore indirect light (light that has bounced off of other objects in the scene), or approximate it using a fixed ambient term. This is very fast to calculate, but not terribly physically accurate. Physically accurate lighting models can model these indirect light bounces, but are typically too computationally expensive to use in a real-time application, which needs to render at least 30 frames per second. However, using the ambient lighting term to approximate indirect light has some issues, as you can see in the screenshot below. This depicts our standard skull and columns scene, rendered using only ambient lighting. Because we are using a fixed ambient color, each object is rendered as a solid color, with no definition. Essentially, we are making the assumption that indirect light bounces uniformly onto all surfaces of our objects, which is often not physically accurate.

image_thumb%25255B2%25255D.png?imgmax=800

Naturally, some portions of our scene will receive more indirect light than other portions, if we were actually modeling the way that light bounces within our scene. Some portions of the scene will receive the maximum amount of indirect light, while other portions, such as the nooks and crannies of our skull, should appear darker, since fewer indirect light rays should be able to hit those surfaces because the surrounding geometry would, realistically, block those rays from reaching the surface.

In a classical global illumination scheme, we would simulate indirect light by casting rays from the object surface point in a hemispherical pattern, checking for geometry that would prevent light from reaching the point. Assuming that our models are static, this could be a viable method, provided we performed these calculations off-line; ray tracing is expensive, since we would need to cast a large number of rays to produce an acceptable result, and performing that many intersection tests takes far too long. With animated models, this method very quickly becomes untenable; whenever the models in the scene move, we would need to recalculate the occlusion values, which is simply too slow to do in real-time.

Screen-Space Ambient Occlusion is a fast technique for approximating ambient occlusion, developed by Crytek for the game Crysis. We will initially draw the scene to a render target, which will contain the normal and depth information for each pixel in the scene. Then, we can sample this normal/depth surface to calculate occlusion values for each pixel, which we will save to another render target. Finally, in our usual shader effect, we can sample this occlusion map to modify the ambient term in our lighting calculation. While this method is not perfectly realistic, it is very fast, and generally produces good results. As you can see in the screen shot below, using SSAO darkens up the cavities of the skull and around the bases of the columns and spheres, providing some sense of depth.

image_thumb%25255B3%25255D.png?imgmax=800
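To make the occlusion test concrete, here is a CPU-side sketch of the kind of per-sample function the SSAO shader evaluates on the GPU (the constants are illustrative, not the demo's exact values):

```csharp
using System;
using SlimDX;

public static class SsaoMath {
    // Sketch of the per-sample occlusion test: p is the view-space point
    // being shaded, n its normal, and r a potential occluder point
    // reconstructed from the normal/depth map.
    public static float Occlusion(Vector3 p, Vector3 n, Vector3 r) {
        const float surfaceEpsilon = 0.05f;
        const float fadeStart = 0.2f, fadeEnd = 2.0f;

        float distZ = p.Z - r.Z;               // how far in front of p is r?
        if (distZ <= surfaceEpsilon) return 0f; // behind or coplanar: no occlusion

        // Points in the plane of the surface don't occlude; weight by how
        // directly r sits above the surface at p.
        float dp = Math.Max(Vector3.Dot(n, Vector3.Normalize(r - p)), 0f);

        // Occlusion falls off linearly with distance, reaching zero at fadeEnd.
        float fade = Math.Max(0f, Math.Min(1f, (fadeEnd - distZ) / (fadeEnd - fadeStart)));
        return dp * fade;
    }
}
```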

The code for this example is based on Chapter 22 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. The example presented here has been stripped down considerably to demonstrate only the SSAO effects; lighting and texturing have been disabled, and the shadow mapping effects in Luna's example have been removed. The full code for this example can be found at my GitHub repository, https://github.com/ericrrichards/dx11.git, under the SSAODemo2 project. A more faithful adaptation of Luna's example can also be found in the 28-SsaoDemo project.

Read More...

ericrrichards22
This time, we are going to take a look at a special class of texture, the cube map, and a couple of the common applications for cube maps: skyboxes and environment-mapped reflections. Skyboxes allow us to model far away details, like the sky or distant scenery, to create a sense that the world is more expansive than just our scene geometry, in an inexpensive way. Environment-mapped reflections allow us to model reflections on surfaces that are irregular or curved, rather than on flat, planar surfaces as in our Mirror Demo.

The code for this example is adapted from the first part of Chapter 17 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. You can download the full source for this example from my GitHub repository, https://github.com/ericrrichards/dx11.git, under the CubeMap project.

cubemap_thumb%25255B3%25255D.png?imgmax=800

Read More...

ericrrichards22
Sorry for the hiatus, I've been very busy with work and life the last couple weeks. Today, we're going to look at loading meshes with skeletal animations in DirectX 11, using SlimDX and Assimp.Net in C#. This will probably be our most complicated example yet, so bear with me. This example is inspired by Chapter 25 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, although with some heavy modifications. Mr. Luna's code uses a custom animation format, which I found less than totally useful; realistically, we would want to be able to load skinned meshes exported in one of the commonly used 3D modeling formats. To facilitate this, we will again use the .NET port of the Assimp library, Assimp.Net. The code I am using to load and interpret the animation and bone data is heavily based on Scott Lee's Animation Importer code, ported to C#. The full source for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git under the SkinnedModels project. The meshes used in the example are taken from the example code for Carl Granberg's Programming an RTS Game with Direct3D.

Skeletal animation is the standard way to animate 3D character models. Generally, a character model will be represented by two structures: the exterior vertex mesh, or skin, and a tree of control points specifying the joints or bones that make up the skeleton of the mesh. Each vertex in the skin is associated with one or more bones, along with a weight that determines how much influence the bone should have on the final position of the skin vertex. Each bone is represented by a transformation matrix specifying the translation, rotation and scale that determines the final position of the bone. The bones are defined in a hierarchy, so that each bone's transformation is specified relative to its parent bone. Thus, given a standard bipedal skeleton, if we rotate the upper arm bone of the model, this rotation will propagate to the lower arm and hand bones of the model, analogously to how our actual joints and bones work.

Animations are defined by a series of keyframes, each of which specifies the transformation of each bone in the skeleton at a given time. To get the appropriate transformation at a given time t, we linearly interpolate between the two closest keyframes. Because of this, we will typically store the bone transformations in a decomposed form, specifying the translation, scale and rotation components separately, building the transformation matrix at a given time from the interpolated components. A skinned model may contain many different animation sets; for instance, we'll commonly have a walk animation, an attack animation, an idle animation, and a death animation.
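A sketch of that interpolation, assuming a simple keyframe type with decomposed components (the names are mine, not the demo's exact API):

```csharp
using SlimDX;

public struct Keyframe {
    public float Time;
    public Vector3 Translation;
    public Vector3 Scale;
    public Quaternion Rotation;
}

public static class BoneAnimation {
    // Sketch: interpolate a bone transform between the two keyframes
    // bracketing time t, then rebuild the matrix as scale * rotation *
    // translation. Translation/scale lerp; rotation slerps.
    public static Matrix Interpolate(Keyframe k0, Keyframe k1, float t) {
        float pct = (t - k0.Time) / (k1.Time - k0.Time);
        var s = Vector3.Lerp(k0.Scale, k1.Scale, pct);
        var p = Vector3.Lerp(k0.Translation, k1.Translation, pct);
        var q = Quaternion.Slerp(k0.Rotation, k1.Rotation, pct);
        return Matrix.Scaling(s) * Matrix.RotationQuaternion(q) * Matrix.Translation(p);
    }
}
```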

The process of loading an animated mesh can be summarized as follows:

  1. Extract the bone hierarchy of the model skeleton.
  2. Extract the animations from the model, along with all bone keyframes for each animation.
  3. Extract the skin vertex data, along with the vertex bone indices and weights.
  4. Extract the model materials and textures.

To draw the skinned model, we need to advance the animation to the correct frame, then pass the bone transforms to our vertex shader, where we will use the per-vertex bone indices and weights to transform the vertex position to the proper location.

skinnedModels_thumb%25255B1%25255D.png?imgmax=800

Read More...

ericrrichards22
I've decided to move my blog off of Blogger. Blogger was great for getting started, but it has just become too painful to fight with going forward. I'm sick of fighting the Blogger templating to force it to display my content the way that I want it to. Blogger has a habit of absolutely mangling the html that I try to post. For posts consisting mostly of plain-text and images, this is not that big a problem, but I have to spend a ton of time trying to get code-heavy posts to render in a readable way. Over the last year, I've spent far more time tweaking my posted html to get it right than it has taken me in the last week to write a home-rolled blog engine for this new site to use, even counting extracting my existing content out of Blogger and converting it into a new format.

Beyond that, there are a number of things that I would like to do going forward that I simply cannot do easily with Blogger, but I can do very easily if I have control over the site. I'm far more comfortable writing html, javascript and server-side code than I will ever be at bending Blogger to my will.

I'm sure there will be some speedbumps, but I think this is a much better solution for me, and ultimately, for you, readers. I'm nowhere near finished, but I think I've got the essentials ironed out, so I'm going to go ahead with the switch-over.

Things that should still work:

  • Existing links to my page should still be valid, at least for content. I've put in quite a bit of effort trying to get the old-style Blogger urls to play nicely with my new site.
  • All of the old content has been imported.
  • Images should still be fine, since those were all externally hosted and I haven't swapped those out yet. Eventually, I'd like to serve all my screenshots locally, so I can convert them to jpegs so pages will load faster (most are loss-less PNGs right now).

Things that are broken:

  • Comments are disabled for the moment. Extracting the primary content from Blogger was enough of a pain, so I haven't bothered with the old comments yet. I also need to add server-side support for comments. If you have any questions about any of the tutorials in the meantime, at the top of each post is a link with my name that will allow you to send me an email. I'll try to get back to you as quickly as I can.
  • RSS feeds - From my analytics, I don't think anybody actually used the blog feed that Blogger provided, so I'm not going to bother with implementing one for the new site unless there is some demand or I have some spare time and get inspired.
  • Some of the older posts might look a little wonky. I'm going through them as I have time and making sure that the content I extracted from Blogger renders decently, but it is time-consuming. Particularly some of the oldest posts, when I was still using the online Blogger editor, before I standardized my workflow on Windows Live Writer, may be a little bit off.

Thanks for bearing with me. If you notice anything unusual, feel free to send me an email (ericrrichards@gmail.com); it would be very helpful in pinpointing issues.

Visit the new site!
ericrrichards22
As I mentioned last time, I'm going to move on from fiddling with my Terrain class for a little while, and start working on some physics code instead. I bought a copy of Ian Millington's Game Physics Engine Development some months ago and skimmed through it, but was too busy with other things to really get into the accompanying source code. Now, I do have some free cycles, so I'm planning on working through the examples from the book as my next set of posts.

Once again, the original source code is in C++, rather than the C# I'll be using. Millington's code also uses OpenGL and GLUT, rather than DirectX. Consequently, these aren't going to be the kind of straight ports I did with most of Frank Luna's examples; I'll be porting the core physics code, and then for the examples, I'm just going to have to make something up that showcases the same features.

In any case, we'll start off with the simple particle physics of Chapters 3 & 4, and build a demo that simulates the ballistics of firing some different types of projectiles. You can find my source for this example on my GitHub page, at https://github.com/ericrrichards/dx11.git.
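The heart of the particle simulation is the integrator. A minimal sketch of the kind of Newton-Euler step Millington uses, ported to C#/SlimDX (the damping constant is illustrative):

```csharp
using System;
using SlimDX;

public class Particle {
    public Vector3 Position, Velocity, Acceleration;
    public float Damping = 0.99f;   // bleeds off velocity to fight numerical drift
    public float InverseMass;       // 0 => infinite mass, i.e. immovable

    // Sketch of the integration step: advance position and velocity by
    // dt seconds, applying damping proportionally to the timestep.
    public void Integrate(float dt) {
        if (InverseMass <= 0f) return;
        Position += Velocity * dt;
        Velocity = Velocity * (float)Math.Pow(Damping, dt) + Acceleration * dt;
    }
}
```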
http://www.youtube.com/watch?feature=player_embedded&v=0X98m3WX8OA
Here you can see the four projectile types: 1.) a pistol-type round, 2.) a large artillery shell, 3.) a fireball, 4.) a bolt from a railgun or energy weapon.



Read more "
ericrrichards22
This time up, we are going to add some additional shape types to our GeometryGenerator class, and look at how to redraw the same geometry at different locations and scales in our scene. This example corresponds to the ShapesDemo from Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0.

shapes_thumb%25255B1%25255D.png?imgmax=800
NOTE: The framerate is atrocious because I am running on an old Intel G45/43 integrated chipset... On a real 3D chipset, like my GTX 560 Ti, you would have several thousand frames per second.

As you can see, in addition to the Grid mesh that we implemented previously for the Hills Demo, we have added a box, spheres, and cylinders. We are also only keeping a single instance of each type of geometry in our vertex and index buffers, and drawing them with different world matrices to render multiple objects in our scene. Lastly, we are drawing in wireframe mode, which involves setting up some different render states from our previous examples.
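The redraw trick boils down to offset draws out of the shared buffers. A sketch (the constant-buffer update is abstracted behind a delegate here, since the effect wrapper details are not the point):

```csharp
using System;
using SlimDX;
using SlimDX.Direct3D11;

// Sketch: every shape lives in one shared vertex/index buffer pair; an
// object is a world matrix plus offsets into those buffers (recorded
// when the buffers were packed). setWorldViewProj stands in for the
// effect wrapper's constant-buffer update - illustrative only.
public class SceneObject {
    public Matrix World;
    public int IndexCount, IndexOffset, VertexOffset;
}

public static class ShapeDrawing {
    public static void DrawObjects(DeviceContext context, SceneObject[] objects,
                                   Matrix viewProj, Action<Matrix> setWorldViewProj) {
        foreach (var obj in objects) {
            setWorldViewProj(obj.World * viewProj);
            context.DrawIndexed(obj.IndexCount, obj.IndexOffset, obj.VertexOffset);
        }
    }
}
```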

Read More...
ericrrichards22
Shadow mapping is a technique to cast shadows from arbitrary objects onto arbitrary 3D surfaces. You may recall that we implemented planar shadows earlier using the stencil buffer. Although that technique worked well for rendering shadows onto planar (flat) surfaces, it does not work well when we want to cast shadows onto curved or irregular surfaces, which limits its usefulness considerably. Shadow mapping gets around these limitations by rendering the scene from the perspective of a light and saving the depth information into a texture called a shadow map. Then, when we are rendering our scene to the backbuffer, in the pixel shader, we determine the depth value of the pixel being rendered, relative to the light position, and compare it to a sampled value from the shadow map. If the computed value is greater than the sampled value, then the pixel being rendered is not visible from the light, and so the pixel is in shadow, and we do not compute the diffuse and specular lighting for the pixel; otherwise, we render the pixel as normal. Using a simple point sampling technique for shadow mapping results in very hard, aliased shadows: a pixel is either in shadow or lit; therefore, we will use a sampling technique known as percentage closer filtering (PCF), which uses a box filter to determine how shadowed the pixel is. This allows us to render partially shadowed pixels, which results in softer shadow edges.
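On the C# side, the interesting piece is building the transform that takes a world-space position into the shadow map's texture space. A sketch for a directional light (the bounds fitting and constants are illustrative):

```csharp
using SlimDX;

public static class ShadowHelper {
    // Sketch: build the world -> shadow-map-texture transform for a
    // directional light. sceneBounds is a sphere enclosing the scene;
    // the orthographic volume is fitted around it.
    public static Matrix BuildShadowTransform(Vector3 lightDir, BoundingSphere sceneBounds) {
        // Back the light "position" away from the scene along its direction.
        Vector3 lightPos = sceneBounds.Center - 2.0f * sceneBounds.Radius * lightDir;
        Matrix view = Matrix.LookAtLH(lightPos, sceneBounds.Center, Vector3.UnitY);

        float r = sceneBounds.Radius;
        Matrix proj = Matrix.OrthoOffCenterLH(-r, r, -r, r, 0.1f, 4.0f * r);

        // NDC [-1,1] -> texture space [0,1], with v flipped.
        var toTex = new Matrix {
            M11 = 0.5f, M22 = -0.5f, M33 = 1.0f,
            M41 = 0.5f, M42 = 0.5f, M44 = 1.0f
        };
        return view * proj * toTex;
    }
}
```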

This example is based on the example from Chapter 21 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. The full source for this example can be downloaded from my GitHub repository at https://github.com/ericrrichards/dx11.git, under the ShadowDemos project.

image_thumb%25255B1%25255D.png?imgmax=800

Read More...

ericrrichards22
For the past few weeks, I've been once again noodling on the idea of starting a .NET port of a classic Id FPS. As a kid on my first computer, an off-brand 486 with DOS, I just hit the tail end of the good old days of shareware. And amongst all the floppy disks of kiddy and educational software, sliming Gruzzles couldn't really hold a candle to exploring Indiana Jones and the Last Crusade-esque Gothic castles and knifing Nazis.

While the original source-code for Wolfenstein 3D has been available for some time, it is a bit of a slog trying to wade through C code that was written 20 years ago, with near and far pointers, blitting directly to VGA memory, and hand-rolled assembly routines, let alone build the project successfully. Consequently, converting over to C# is a bit of a struggle, particularly for some of the low-level pointer manipulation and when loading in binary assets - it is very helpful to be able to step through both codebases side by side in the debugger to figure out any discrepancies.

Because of these difficulties, I have started looking at the Chocolate Wolfenstein 3D project, by Fabien Sanglard. Mr. Sanglard is a great student of the Id Software engine source code, and has done several very nice writeups of the different engines open-sourced by Id. He was even planning on writing a book-length analysis of Wolfenstein 3D, which hopefully is still underway. Chocolate Wolfenstein 3D is a more modernized C++ conversion of the original Id code, with some of the more gnarly bits smoothed out, and using SDL for cross-platform window-management, graphics, input, and sound. Even better, it can be built and run using current versions of Visual Studio.

The only problem I had with the Chocolate Wolfenstein 3D GitHub repository is that it is missing some dependencies and requires a small amount of tweaking in order to get it to build and run on Visual Studio 2013. These steps are not particularly difficult, but if you simply clone the repo and hit F5, it doesn't work right out of the box. If you are working on a Mac, there is a very nice guide on setting up the project in XCode, but I have not found a similar guide for Windows, so I decided to document the steps that I went through and share those here.

titlescreen.png


Read more...
ericrrichards22
Not really game related, but something I've been working on lately.

Recently, I have been using OWIN a good deal for developing internal web applications. One of the chief benefits of this is that OWIN offers the ability to host its own HTTP server, which lets me get out of the business of installing and configuring IIS on Windows, one of the main pain points when deploying the products I work on to our customers. Unfortunately, when I first started using OWIN, there was not a version of ASP.NET MVC available that was compatible with OWIN. Most of my previous experience with programming web servers has been based on MVC (except for briefly experiencing WebForms hell), so finding a similar framework that was compatible with OWIN was one of my first priorities.

In my search, I discovered Nancy, a fairly similar MVC-style framework which offered OWIN support. It also was capable of using the same Razor view engine as ASP.NET MVC, with some minor differences, so I was able to convert existing IIS ASP.NET MVC applications to OWIN/Nancy using most of the existing views and front-end code. At some point I plan to write an article illustrating how one would do this type of conversion, but for now, I'm going to examine one particular gotcha I discovered when converting my personal Netflix-type video application to OWIN/Nancy: serving HTML5 video files.

Read more...
ericrrichards22
Quite a while back, I presented an example that rendered water waves by computing a wave equation and updating a polygonal mesh each frame. This method produced fairly nice graphical results, but it was very CPU-intensive, and relied on updating a vertex buffer every frame, so it had relatively poor performance.

We can use displacement mapping to approximate the wave calculation and modify the geometry all on the GPU, which can be considerably faster. At a very high level, what we will do is render a polygon grid mesh, using two height/normal maps that we will scroll in different directions and at different rates. Then, for each vertex that we create using the tessellation stages, we will sample the two heightmaps, and add the sampled offsets to the vertex's y-coordinate. Because we are scrolling the heightmaps at different rates, small peaks and valleys will appear and disappear over time, resulting in an effect that looks like waves. Using different control parameters, we can control this wave effect, and generate either a still, calm surface, like a mountain pond at first light, or big, choppy waves, like the ocean in the midst of a tempest.
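The per-frame CPU work is then tiny: just advance the two sets of texture offsets at their own rates. A sketch, with made-up rates and tiling factors:

```csharp
using SlimDX;

public class GpuWaves {
    private Vector2 _offset0, _offset1;

    // Sketch: scroll the two height/normal maps in different directions
    // and at different rates; the domain shader samples both maps and
    // sums the sampled heights into the vertex's y-coordinate.
    public void Update(float dt) {
        _offset0 += new Vector2(0.01f, 0.03f) * dt;   // slow drift
        _offset1 += new Vector2(0.05f, 0.02f) * dt;   // faster, different direction
    }

    public Matrix DispTexTransform0 =>
        Matrix.Scaling(2.0f, 2.0f, 1.0f) * Matrix.Translation(_offset0.X, _offset0.Y, 0.0f);
    public Matrix DispTexTransform1 =>
        Matrix.Scaling(1.0f, 1.0f, 1.0f) * Matrix.Translation(_offset1.X, _offset1.Y, 0.0f);
}
```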

This example is based on the final exercise of Chapter 18 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. The original code that inspired this example is not located with the other example for Chapter 18, but rather in the SelectedCodeSolutions directory. You can download my source code in full from https://github.com/ericrrichards/dx11.git, under the 29-WavesDemo project. One thing to note is that you will need to have a DirectX 11 compatible video card to execute this example, as we will be using tessellation stage shaders that are only available in DirectX 11.

image_thumb%25255B1%25255D.png?imgmax=800

Read more "

ericrrichards22
Howdy. Today, I'm going to discuss rendering UI text using the SlimDX SpriteTextRenderer library. This is a very nifty and light-weight extension library for SlimDX, hosted on CodePlex. In older versions of DirectX, it used to be possible to easily render sprites and text using the ID3DXSprite and ID3DXFont interfaces, but those have been removed in newer versions of DirectX. I've experimented with some other approaches, such as using Direct2D and DirectWrite or the DirectX Toolkit, but wasn't happy with the results. For whatever reason, Direct2D doesn't interop well with DirectX 11, unless you create a shared DirectX 10 device and jump through a bunch of hoops, and even then it is kind of a PITA. Likewise, I have yet to find C# bindings for the DirectX Toolkit, so that's kind of a non-starter for me; I'd either have to rewrite the pieces that I want to use with SlimDX, or figure out the marshaling to use the C++ dlls. So for that reason, the SpriteTextRenderer library seems to be my best option at the moment, and it turned out to be relatively simple to integrate into my application framework.


If you've used either the old DirectX 9 interfaces or XNA, then it'll be pretty intuitive how to use SpriteTextRenderer. The SpriteRenderer class has some useful methods to draw 2D sprites, which I haven't explored much yet, since I have already added code to draw screen-space quads. The TextBlockRenderer class provides some simple and handy methods to draw text up on the screen. Internally, it uses DirectWrite to generate sprite font textures at runtime, so you can use any installed system fonts, and specify the weight, style, and point size easily, without worrying about the nitty gritty details of creating the font.


One limitation of the TextBlockRenderer class is that you can only use an instance of it to render text with a single font. Thus, if you want to use different font sizes or styles, you need to create different instances for each font that you want to use. Because of this, I've written a simple manager class, which I'm calling FontCache, which will provide a central point to store all the fonts that are used, as well as a default font if you just want to throw some text up onto the screen.
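The cache itself is nothing fancy: a dictionary of renderers created on first use. A sketch of the idea (the TextBlockRenderer construction details are hidden behind a factory delegate, since I don't want to misquote its constructor here):

```csharp
using System;
using System.Collections.Generic;
using SpriteTextRenderer;   // the SlimDX SpriteTextRenderer library

// Sketch of the FontCache idea: one TextBlockRenderer per font/size,
// created on first use and reused afterwards, since each renderer
// instance can only draw a single font.
public class FontCache {
    private readonly Dictionary<string, TextBlockRenderer> _fonts =
        new Dictionary<string, TextBlockRenderer>();
    private readonly Func<string, float, TextBlockRenderer> _factory;

    public FontCache(Func<string, float, TextBlockRenderer> factory) {
        _factory = factory;
    }

    public TextBlockRenderer Get(string family, float size) {
        var key = family + "@" + size;
        TextBlockRenderer font;
        if (!_fonts.TryGetValue(key, out font)) {
            font = _factory(family, size);   // create on first use
            _fonts[key] = font;
        }
        return font;
    }
}
```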


The new code for rendering text has been added to my pathfinding demo, available at my GitHub repository, https://github.com/ericrrichards/dx11.git.
font_thumb%25255B1%25255D.jpg?imgmax=800
(Looks much better without jpeg compression...)

Read more "
ericrrichards22
Howdy, time for an update. I've mostly completed the first cut of my terrain pathfinding code; I'm creating the navigation graph, and I've got an implementation of A* finished that allows me to create a list of terrain nodes that represents the path between tile A and tile B. I'm going to hold off a bit on presenting all of that, since I haven't yet managed to get a nice-looking demo to show off the pathfinding. I need to do some more work to create a simple unit class that can follow the path generated by A*, and between work and life stuff, I haven't gotten the chance to round that out satisfactorily yet.

I've also been doing some pretty heavy refactoring on various engine components, both for design and performance reasons. After the last series of posts on augmenting the Terrain class, and in anticipation of adding even more functionality as I added pathfinding support, I decided to take some time and split out the code that handles Direct3D resources and rendering from the more agnostic logical terrain representation. I'm not looking to do this at the moment, but this might also make implementing an OpenGL rendering system less painful.

Going through this, I don't think I am done splitting things up. I'm kind of a fan of small, tightly focused classes, but I'm not necessarily an OOP junkie. Right now, I'm pretty happy with how I have split things out. I've got the Terrain class, which contains mostly the rendering independent logical terrain representation, such as the quad tree and picking code, the terrain heightmap and heightmap generation code, and the global terrain state properties (world-space size, initialization information struct, etc). The rendering and DirectX resource management code has been split out into the new TerrainRenderer class, which does all of the drawing and creates all of the DirectX vertex buffers and texture resources.

I'll spare you all the intermediate gyrations that this refactoring push put me through, and just post the resulting two classes. Resharper was invaluable in this process; if you have access to a full version of Visual Studio, I don't think there is a better way to spend $100. I shiver to think of how difficult this would have been without access to its refactoring and renaming tools.

Read More...
ericrrichards22

Alright, ready for the third installment of this ray tracing series? This time, we'll get some actual rays, and start tracing them through a scene. Our scene is still going to be empty, but we're starting to get somewhere. Although the book I'm working from is titled Ray Tracing in One Weekend, it's starting to look like my project is going to be more like Ray Tracing in One Year...

Once again, I'm going to put all of the relevant new code for this segment up here, but if you want to see the bits I've missed, check out my GitHub project. We will be circling back to the Vector3 structure I created last time, since I inevitably left out some useful operations...

The core of what a ray tracer does is to trace rays from an origin, often called the eye, for obvious reasons, through each pixel in the image, and then out into our scene to whatever objects lie beyond. We don't have any objects to actually hit, yet, but we are going to lay the groundwork to start doing that next time. Below, you can see the setup of our eye, the image plane, and the rays that shoot from the eye through the image and into the scene beyond.
71_eye-image.png
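In code, that setup boils down to something like this sketch (System.Numerics.Vector3 stands in here for the series' home-grown Vector3; the camera constants follow the book's simple axis-aligned setup):

```csharp
using System.Numerics;   // stand-in for the series' own Vector3

// Sketch: a ray is an origin plus a direction; the trace shoots one ray
// from the eye through each pixel of an nx-by-ny image plane.
public struct Ray {
    public Vector3 Origin, Direction;
    public Vector3 PointAt(float t) => Origin + t * Direction;
}

public static class Tracer {
    public static void TraceAll(int nx, int ny) {
        var origin = Vector3.Zero;                          // the eye
        var lowerLeft = new Vector3(-2.0f, -1.0f, -1.0f);   // image plane corner
        var horizontal = new Vector3(4.0f, 0.0f, 0.0f);     // image plane width
        var vertical = new Vector3(0.0f, 2.0f, 0.0f);       // image plane height

        for (var j = ny - 1; j >= 0; j--) {
            for (var i = 0; i < nx; i++) {
                var u = (float)i / nx;
                var v = (float)j / ny;
                var ray = new Ray {
                    Origin = origin,
                    Direction = lowerLeft + u * horizontal + v * vertical - origin
                };
                // color(ray) goes here, once there is something to hit.
            }
        }
    }
}
```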

Read more...

ericrrichards22

It's going to take me considerably longer than one weekend to build out a ray tracer...

Last time, I laid the groundwork to construct a PPM image and output a simple gradient image, like the one below.
70_vector.png
This time around, I'm going to focus on building some useful abstractions that will make work going forward easier. This breaks down into two areas:


  • A Vector3 class, which will be helpful for representing 3D points, directional vectors, RGB colors and offsets. We'll implement some useful operators and geometric methods in addition.
  • A Bitmap class, which will represent our output raster and handle the details of saving that raster out as a PPM image file.

Ultimately, we'll be producing the same image as in the last installment, but with considerably less boilerplate code, and lay the groundwork for making our lives much easier going forward when we get to some more meaty topics. As always, the full code is available on GitHub, but I'll be presenting the full code for this example in this post.
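For the curious, PPM is about the simplest image format there is; the writer boils down to something like this sketch (not necessarily the exact class from the repo):

```csharp
using System.IO;

// Sketch of a minimal PPM (P3) writer: a text header (magic number,
// width, height, max color value) followed by one "R G B" triple per pixel.
public class Bitmap {
    private readonly int[,] _r, _g, _b;
    public int Width { get; }
    public int Height { get; }

    public Bitmap(int width, int height) {
        Width = width; Height = height;
        _r = new int[height, width];
        _g = new int[height, width];
        _b = new int[height, width];
    }

    public void SetPixel(int x, int y, int r, int g, int b) {
        _r[y, x] = r; _g[y, x] = g; _b[y, x] = b;
    }

    public void SavePpm(string path) {
        using (var w = new StreamWriter(path)) {
            w.WriteLine("P3");
            w.WriteLine("{0} {1}", Width, Height);
            w.WriteLine("255");
            for (var y = 0; y < Height; y++)
                for (var x = 0; x < Width; x++)
                    w.WriteLine("{0} {1} {2}", _r[y, x], _g[y, x], _b[y, x]);
        }
    }
}
```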


Read more...

ericrrichards22

In this post, we are going to discuss applications of the Direct3D stencil buffer, by porting the example from Chapter 10 of Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0 to C# and SlimDX. We will create a simple scene, consisting of an object (in our case, the skull mesh that we have used previously), and some simple room geometry, including a section which will act as a mirror and reflect the rest of the geometry in our scene. We will also implement planar shadows, so that our central object will cast shadows on the rest of our geometry when it is blocking our primary directional light. The full code for this example can be downloaded from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the MirrorDemo project.
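SlimDX does the matrix heavy lifting for both effects. A sketch, with illustrative plane and light values (the stencil-state setup is the real meat of the post):

```csharp
using SlimDX;

// Sketch: reflect the skull about the mirror plane, and squash it onto
// the floor plane for the planar shadow. All values are illustrative.
Matrix skullWorld = Matrix.Translation(0.0f, 1.0f, -2.0f);

// Mirror in the xy-plane (normal +z).
var mirrorPlane = new Plane(0.0f, 0.0f, 1.0f, 0.0f);
Matrix reflectedWorld = skullWorld * Matrix.Reflection(mirrorPlane);

// Shadow flattened onto the floor (xz-plane, normal +y).
// The vector points toward the light; w = 0 marks it as directional.
var floorPlane = new Plane(0.0f, 1.0f, 0.0f, 0.0f);
var toLight = new Vector4(-0.57735f, 0.57735f, -0.57735f, 0.0f);
Matrix shadowWorld = skullWorld * Matrix.Shadow(toLight, floorPlane);
```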

mirror_thumb%25255B1%25255D.png?imgmax=800

Read More...

ericrrichards22
http://www.youtube.com/watch?feature=player_embedded&v=WIOQuEJSpEg
Watch the intrepid red blob wind its way through the mountain slopes!

Last time, we discussed the implementation of our A* pathfinding algorithm, as well as some commonly used heuristics for A*. Now we're going to put all of the pieces together and get a working example to showcase this pathfinding work.

We'll need to slightly rework our mouse picking code to return the tile in our map that was hit, rather than just the bounding box center. To do this, we're going to need to modify our QuadTree, so that the leaf nodes are tagged with the MapTile that their bounding boxes enclose.

We'll also revisit the function that calculates which portions of the map are connected, as the original method in Part 1 was horribly inefficient on some maps. Instead, we'll use a different method, which uses a series of depth-first searches to calculate the connected sets of MapTiles in the map. This method is much faster, particularly on maps that have more disconnected sets of tiles.
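The connected-set labeling is a standard flood fill. A sketch over a plain walkability grid (the real version operates on MapTiles):

```csharp
using System.Collections.Generic;

// Sketch: label connected sets of walkable tiles with an iterative
// depth-first search (flood fill). Tiles sharing a set id are mutually
// reachable; impassible tiles keep the id -1.
public static class ConnectivityMap {
    public static void LabelSets(bool[,] walkable, int[,] setIds) {
        int h = walkable.GetLength(0), w = walkable.GetLength(1);
        for (var y = 0; y < h; y++)
            for (var x = 0; x < w; x++)
                setIds[y, x] = -1;

        var nextSet = 0;
        for (var y = 0; y < h; y++) {
            for (var x = 0; x < w; x++) {
                if (!walkable[y, x] || setIds[y, x] >= 0) continue;
                // New set: flood-fill everything reachable from (x, y).
                var stack = new Stack<(int X, int Y)>();
                stack.Push((x, y));
                while (stack.Count > 0) {
                    var (cx, cy) = stack.Pop();
                    if (cx < 0 || cy < 0 || cx >= w || cy >= h) continue;
                    if (!walkable[cy, cx] || setIds[cy, cx] >= 0) continue;
                    setIds[cy, cx] = nextSet;
                    stack.Push((cx + 1, cy));
                    stack.Push((cx - 1, cy));
                    stack.Push((cx, cy + 1));
                    stack.Push((cx, cy - 1));
                }
                nextSet++;
            }
        }
    }
}
```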

We'll also need to develop a simple class to represent our unit, which will allow it to update and render itself, as well as maintain pathfinding information. The unit class implementation used here is based in part on material presented in Chapter 9 of Carl Granberg's Programming an RTS Game with Direct3D.

Finally, we'll add an additional texture map to our rendering shader, which will draw impassible terrain using a special texture, so that we can easily see the obstacles that our unit will be navigating around. You can see this in the video above; the impassible areas are shown with a slightly darker texture, with dark rifts.

The full code for this example can be found on my GitHub repository, https://github.com/ericrrichards/dx11.git, under the 33-Pathfinding project.

Read more "
ericrrichards22
In our previous installment, we discussed the data structures that we will use to represent the graph which we will use for pathfinding on the terrain, as well as the initial pre-processing that was necessary to populate that graph with the information that our pathfinding algorithm will make use of. Now, we are ready to actually implement our pathfinding algorithm. We'll be using A*, probably the most commonly used graph search algorithm for pathfinding.


A* is one of the most commonly used pathfinding algorithms in games because it is fast, flexible, and relatively simple to implement. A* was originally a refinement of Dijkstra's graph search algorithm. Dijkstra's algorithm is guaranteed to determine the shortest path between any two nodes in a directed graph; however, because it only takes into account the cost of reaching an intermediate node from the start node, it tends to consider many nodes that are not on the optimal path. An alternative to Dijkstra's algorithm is Greedy Best-First search. Best-First uses a heuristic function to estimate the cost of reaching the goal from a given intermediate node, without reference to the cost of reaching the current node from the start node. This means that Best-First tends to consider far fewer nodes than Dijkstra, but is not guaranteed to produce the shortest path in a graph which includes obstacles that are not predicted by the heuristic.


A* blends these two approaches, by using a cost function (f(x)) to evaluate each node based on both the cost from the start node (g(x)) and the estimated cost to the goal (h(x)). This allows A* to both find the optimum shortest path, while considering fewer nodes than pure Dijkstra's algorithm. The number of intermediate nodes expanded by A* is somewhat dependent on the characteristics of the heuristic function used. There are generally three cases of heuristics that can be used to control A*, which result in different performance characteristics:

  • When h(x) underestimates the true cost of reaching the goal from the current node, A* will expand more nodes, but is guaranteed to find the shortest path.
  • When h(x) is exactly the true cost of reaching the goal, A* will only expand nodes along the shortest path, meaning that it runs very fast and produces the optimal path.
  • When h(x) overestimates the true cost of reaching the goal from the current node, A* will expand fewer intermediate nodes. Depending on how much h(x) overestimates the true cost, this may result in paths that are not the true shortest path; however, this does allow the algorithm to complete more quickly.

For games, we will generally use heuristics of the third class. It is important that we generate good paths when doing pathfinding for our units, but it is generally not necessary that they be mathematically perfect; they just need to look good enough, and the speed savings are very important when we are trying to cram all of our rendering and update code into just a few tens of milliseconds, in order to hit 30-60 frames per second.


A* uses two sets to keep track of the nodes that it is operating on. The first set is the closed set, which contains all of the nodes that A* has previously considered; this is sometimes called the interior of the search. The other set is the open set, which contains those nodes which are adjacent to nodes in the closed set, but which have not yet been processed by the A* algorithm. The open set is generally sorted by the calculated cost of the node (f(x)), so that the algorithm can easily select the most promising new node to consider. Because of this, we usually consider the open list to be a priority queue. The particular implementation of this priority queue has a large impact on the speed of A*; for best performance, we need to have a data structure that supports fast membership checks (is a node in the queue?), fast removal of the best element in the queue, and fast insertions into the queue. Amit Patel provides a good overview of the pros and cons of different data structures for the priority queue on his A* page; I will be using a priority queue derived from Blue Raja's Priority Queue class, which is essentially a binary heap. For our closed set, the primary operations that we will perform are insertions and membership tests, which makes the .Net HashSet class a good choice.
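Putting that together, the core A* loop looks roughly like this sketch (Node is a hypothetical stand-in for the terrain graph nodes, and a sorted list stands in for the binary-heap priority queue to keep the sketch short):

```csharp
using System;
using System.Collections.Generic;

// Sketch of the A* core loop, not the repo's exact classes.
public class Node {
    public float G, F;                  // cost from start; G + heuristic
    public Node Parent;                 // how we reached this node
    public List<(Node Neighbor, float Cost)> Edges = new List<(Node, float)>();
}

public static class AStar {
    public static List<Node> FindPath(Node start, Node goal, Func<Node, Node, float> h) {
        var closed = new HashSet<Node>();   // the "interior" of the search
        var open = new List<Node> { start };
        start.G = 0;
        start.F = h(start, goal);

        while (open.Count > 0) {
            open.Sort((a, b) => a.F.CompareTo(b.F));   // cheapest F first
            var current = open[0];
            open.RemoveAt(0);
            if (current == goal) {
                var path = new List<Node>();
                for (var n = goal; n != null; n = n.Parent) path.Add(n);
                path.Reverse();
                return path;
            }
            closed.Add(current);
            foreach (var (neighbor, cost) in current.Edges) {
                if (closed.Contains(neighbor)) continue;
                var g = current.G + cost;
                var isNew = !open.Contains(neighbor);
                if (!isNew && g >= neighbor.G) continue;   // no improvement
                neighbor.Parent = current;
                neighbor.G = g;
                neighbor.F = g + h(neighbor, goal);
                if (isNew) open.Add(neighbor);
            }
        }
        return null;   // start and goal are not connected
    }
}
```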

    Read more "