
Entries in this blog

ericrrichards22

Last time, I had started working through Jamis Buck's Mazes for Programmers. I've made quite a lot of progress in the meantime, but haven't managed to keep up with posting about that progress. Today, we're going to work through the meat of Chapter 3, where we'll implement a simplified version of Dijkstra's Algorithm (really a breadth-first search), which we'll use to find paths through our mazes. We will also use this algorithm to help visualize the biases that exist in our two previous maze generation algorithms, by coloring the backgrounds of each cell according to its distance from a starting point.
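
To give a flavor of where we're headed, here is a minimal sketch of that distance flood, assuming a Cell class that exposes the neighbors it has been linked to by a carved passage; the names are illustrative rather than the exact types from my repository. Because every passage has the same cost, the "simplified Dijkstra" really is just a breadth-first search.

[code]
using System.Collections.Generic;

// Illustrative stand-in for the real Grid/Cell classes in the repository:
// a cell only needs to expose the neighbors it is linked to by a carved passage.
public class Cell {
    public List<Cell> Links { get; private set; }
    public Cell() { Links = new List<Cell>(); }
}

public static class Distances {
    // Flood outward from the root, one passage at a time, recording how many
    // steps it takes to reach each cell.
    public static Dictionary<Cell, int> From(Cell root) {
        var distances = new Dictionary<Cell, int> { { root, 0 } };
        var frontier = new Queue<Cell>();
        frontier.Enqueue(root);

        while (frontier.Count > 0) {
            var cell = frontier.Dequeue();
            foreach (var linked in cell.Links) {
                if (distances.ContainsKey(linked)) continue; // already reached by a shorter route
                distances[linked] = distances[cell] + 1;
                frontier.Enqueue(linked);
            }
        }
        return distances;
    }
}
[/code]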

You can download the code for this post by downloading or cloning the Post_2 branch from my GitHub. Going forward, I'm going to try and tag the code for each post in a new branch - I'm still used to the old SVN model, but I'm trying to figure out how to take advantage of Git a little more.

Below you can see an example of where we'll end up, drawing the longest path through a maze, and coloring each cell according to its distance along the path.
74_maxPath.png

Let's get started!


Read more...

ericrrichards22

A few weeks back, I picked up a copy of Jamis Buck's Mazes for Programmers: Code Your Own Twisty Little Passages. (The title is, of course, an homage to one of the original text adventure games, Colossal Cave Adventure, or Adventure for short, which featured a maze puzzle where every room was described as "You are in a maze of twisty little passages, all alike"). I read through most of it on my veranda in the mornings, looking out over Long Bay in Antigua while I was on vacation, wishing I had brought my laptop along with me (although my girlfriend is probably happy I did not...). Once I got back, I started diving into implementing some of the ideas from the book - it's been quite a joy, and a much-needed break from debugging UCWA applications at the day job.

The code in the book is implemented in Ruby; however, I'm not a Rubyist, so I decided that I'd work through the examples in my language of choice, C#. The code is fairly simple, so porting it over has been pretty straightforward. I also figured it would be an interesting exercise for trying out some of the newer features in the more recent versions of C# - I've been stuck using C# 5.0 due to some tooling that we haven't had a chance to upgrade yet at work. Today, I'm going to mostly cover the examples from Chapter 2, which lays out the general plumbing of the framework that the rest of the book builds on, and implements two simple maze-building algorithms, the Binary Tree and Sidewinder algorithms. For the full code, you can check out my GitHub repository; things there may change slightly from the version presented here as I work through more of the book, but the general structure should remain similar.
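
As a taste of just how simple these algorithms are, here is a rough sketch of the Binary Tree algorithm against a minimal cell type; the Grid/Cell classes in the repository carry more state than this, so treat the names as illustrative:

[code]
using System;
using System.Collections.Generic;

// Illustrative cell: just the two neighbors Binary Tree cares about, plus its links.
public class Cell {
    public Cell North, East;
    public List<Cell> Links = new List<Cell>();
    public void Link(Cell other) { Links.Add(other); other.Links.Add(this); }
}

public static class BinaryTreeMaze {
    private static readonly Random Rand = new Random();

    // For each cell, carve a passage either north or east (when such a neighbor exists).
    // This choice is what produces the algorithm's strong bias toward the north-east
    // corner, which the next post visualizes.
    public static void On(IEnumerable<Cell> cells) {
        foreach (var cell in cells) {
            var candidates = new List<Cell>();
            if (cell.North != null) candidates.Add(cell.North);
            if (cell.East != null) candidates.Add(cell.East);
            if (candidates.Count == 0) continue;   // the north-east corner cell

            cell.Link(candidates[Rand.Next(candidates.Count)]);
        }
    }
}
[/code]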
Video:
http://richardssoftware.blob.core.windows.net/video/binaryTree.mp4

http://richardssoftware.blob.core.windows.net/video/sidewinder.mp4

Read more...

ericrrichards22

A little over two years ago, I first saw Amit Patel's article on Polygonal Map Generation, and thought it was incredibly cool. The use of Voronoi regions created a very nice, slightly irregular look, compared to grid-based terrains. At the time, I had just finished up working on my DX11 random terrain code, and it looked like a fun project to try to tackle.

I then proceeded to spend several months messing around with different implementations of Fortune's Algorithm in C#, in order to generate the Voronoi polygons used to build a terrain along the lines of Amit's example. At this point, I've lost track of all of the different versions that I've sort of melded together to produce the code that I ended up with, but some of the more influential are:


  • The original implementation of the algorithm by Steve Fortune
  • This JS version by Nicolas Garcia Belmonte
  • This C++ version, which is a port of the ActionScript version used in the original Amit Patel map generator.
    The original goal was to create a map generator, suitable for a kind of overworld/strategic level map. But, alas, life happened, and I got bogged down before I got that far. I did, however, wind up with a fairly cool tool for generating Voronoi diagrams. Because I had spent so much time trying to iron out bugs in my implementation of the algorithm, I ended up producing a WinForms application that allows you to step through the algorithm one iteration at a time, visualizing the sites that are added to the diagram, the vertices and edges, as well as the position of the beach and sweep lines. Eventually I also worked in options to show the circles through three sites that define where a Voronoi vertex is located, as well as the Delaunay triangulation of the sites.
    72_voronoi.PNG
    Voronoi regions, with the edges drawn in white, and the sites as the blue points.

    72_delauney.PNG
    Delaunay triangulation, with triangle edges in green.

    72_both.PNG
    Showing both the Voronoi regions and the Delaunay triangles.

    I won't pretend that this code is fantastic, but it's kind of interesting, and I worked at it for quite a while, so better to have it out here than moldering on a hard drive somewhere. If you'd like to see the source, it is available on GitHub. You can also download the executable below if you like - I make no promises that it will work everywhere, but it is a pretty standard .Net 4.5 Windows Forms application. I've also got some videos below the fold, if you'd like to see this in action.

    Download Voronoi


    Read more...

ericrrichards22

Alright, ready for the third installment of this ray tracing series? This time, we'll get some actual rays, and start tracing them through a scene. Our scene is still going to be empty, but we're starting to get somewhere. Although the book I'm working from is titled Ray Tracing in One Weekend, it's starting to look like my project is going to be more like Ray Tracing in One Year...

Once again, I'm going to put all of the relevant new code for this segment up here, but if you want to see the bits I've missed, check out my GitHub project. We will be circling back to the Vector3 structure I created last time, since I inevitably left out some useful operations...

The core of what a ray tracer does is to trace rays from an origin, often called the eye, for obvious reasons, through each pixel in the image, and then out into our scene to whatever objects lie beyond. We don't have any objects to actually hit, yet, but we are going to lay the groundwork to start doing that next time. Below, you can see the setup of our eye, the image plane, and the rays that shoot from the eye through the image and into the scene beyond.
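
The ray itself is about as simple a type as you could ask for. Here's a minimal sketch, assuming the Vector3 structure from the last post (or any equivalent) with the usual operators defined:

[code]
// A ray is just an origin plus a direction; p(t) = origin + t * direction walks
// t units along it. Vector3 is assumed to come from the previous post.
public struct Ray {
    public Vector3 Origin;
    public Vector3 Direction;

    public Ray(Vector3 origin, Vector3 direction) {
        Origin = origin;
        Direction = direction;
    }

    // Evaluate the point t units along the ray from the origin
    public Vector3 PointAt(float t) {
        return Origin + Direction * t;
    }
}
[/code]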
71_eye-image.png

Read more...

ericrrichards22

It's going to take me considerably longer than one weekend to build out a ray tracer...

Last time, I laid the groundwork to construct a PPM image and output a simple gradient image, like the one below.
70_vector.png
This time around, I'm going to build some useful abstractions that will make the work going forward easier. This is going to focus on two areas:


  • A Vector3 class, which will be helpful for representing 3D points, directional vectors, RGB colors, and offsets. We'll also implement some useful operators and geometric methods (see the sketch below).
  • A Bitmap class, which will represent our output raster and handle the details of saving that raster out as a PPM image file.
    Ultimately, we'll be producing the same image as in the last installment, but with considerably less boilerplate code, and we'll lay the groundwork to make our lives much easier when we get to some meatier topics going forward. As always, the full code is available on GitHub, but I'll be presenting all of the code for this example in this post.
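
To give a sense of the shape of the Vector3 structure mentioned above, here is a pared-down sketch; the version in the repository has a good deal more in the way of operators and geometric helpers (cross products, reflection, and so on):

[code]
using System;

public struct Vector3 {
    public float X, Y, Z;

    public Vector3(float x, float y, float z) { X = x; Y = y; Z = z; }

    public float Length { get { return (float)Math.Sqrt(X * X + Y * Y + Z * Z); } }

    // The handful of operators that cover most of what a ray tracer needs
    public static Vector3 operator +(Vector3 a, Vector3 b) { return new Vector3(a.X + b.X, a.Y + b.Y, a.Z + b.Z); }
    public static Vector3 operator -(Vector3 a, Vector3 b) { return new Vector3(a.X - b.X, a.Y - b.Y, a.Z - b.Z); }
    public static Vector3 operator *(Vector3 v, float s) { return new Vector3(v.X * s, v.Y * s, v.Z * s); }

    public static float Dot(Vector3 a, Vector3 b) { return a.X * b.X + a.Y * b.Y + a.Z * b.Z; }

    public Vector3 Normalized() {
        var len = Length;
        return new Vector3(X / len, Y / len, Z / len);
    }
}
[/code]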


    Read more...

ericrrichards22

Hello Raytracing

Whew, it's been a while...

A few weeks ago, I happened across a new book by Peter Shirley, Ray Tracing in One Weekend. Longer ago than I like to remember, I took a computer graphics course in college, and the bulk of our project work revolved around writing a simple ray tracer in Java. It was one of the few really code-heavy CS courses I took, and I really enjoyed it; from time to time I pull that old project back down and port it over to whatever new language I'm trying to learn. One of the primary textbooks for that course was Fundamentals of Computer Graphics, aka "the tiger book," of which Mr. Shirley was also an author. Since that's one of the more accessible graphics textbooks I've encountered, I figured this new offering was worth a look. It didn't disappoint.

Ray Tracing in One Weekend is, true to its title, a quick read, but it packs a punch in its just under 50 pages. Even better, it's running at around $3 (or free if you have Kindle Unlimited). I paged through it in a couple of hours, then, as I often do, set about working through the code.

If you want to follow along, I've put my code up on Github, although it's still a work in progress; we're wrapping up a new release at the day job and so I've not had a huge amount of time to work on anything extra. Fortunately, each example I'll be doing here is pretty self-contained, and so all of the code will be up here. We'll start at the beginning, with a simple Hello World a la raytracing.
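
For the impatient, the whole thing boils down to something like the sketch below: write a plain-text PPM header, then emit one RGB triplet per pixel, with red ramping up across the columns and green down the rows. The image dimensions match the book's first example; the code in my repository is organized a little differently.

[code]
using System.IO;
using System.Text;

class HelloPpm {
    static void Main() {
        const int nx = 200, ny = 100;
        var sb = new StringBuilder();
        sb.AppendFormat("P3\n{0} {1}\n255\n", nx, ny);   // PPM header: ASCII format, dimensions, max channel value

        for (var j = ny - 1; j >= 0; j--) {              // top row of the image first
            for (var i = 0; i < nx; i++) {
                var r = (float)i / nx;                   // red ramps up left to right
                var g = (float)j / ny;                   // green ramps up bottom to top
                const float b = 0.2f;                    // constant blue
                sb.AppendFormat("{0} {1} {2}\n",
                    (int)(255.99f * r), (int)(255.99f * g), (int)(255.99f * b));
            }
        }
        File.WriteAllText("hello.ppm", sb.ToString());
    }
}
[/code]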


69_hello.png

Read more...

ericrrichards22
One of my favorite books on AI programming for games is Matt Buckland's Programming Game AI By Example. Many AI programming books lean more towards presenting topics and theories, leaving the dirty work of implementing the techniques and algorithms up to the reader. This book takes a very different tack, with each chapter featuring one or more fully implemented examples illustrating the techniques covered. Even better, it comes in a format similar to a trade-paperback, rather than a coffee table book-sized tome, so it's relatively handy to carry around and read a bit at a time, as programming books go. It is also decidedly focused on the kinds of real-world, relatively simple techniques that one would actually use in the majority of games. And, mixed in the right combinations, these simple techniques can be very powerful, as the second half of the book displays, building an increasingly sophisticated top-down, 2D shooter game.

What I particularly like about this book though, is that while it is presented as a book for programming game AI, it may be the best practical explanation of a number of fundamental AI techniques and patterns that I have seen. The lessons that I learned reading through this book have been just as applicable in my day-job as an enterprise developer as in my hobby work programming games, much more so than what I learned toiling through a semester-long AI course based on Artificial Intelligence: A Modern Approach...

The first real chapter of Mr. Buckland's book (leaving aside the obligatory math primer chapter), is devoted to finite state machines. Finite State Machines are one of the simpler ways of organizing decision making, and are probably one of the most intuitive. I'll let Mr. Buckland's definition stand by itself:
[quote]
A finite state machine is a device, or a model of a device, which has a finite number of states it can be in at any given time and can operate on input to either make transitions from one state to another or to cause an output or action to take place. A finite state machine can only be in one state at any moment in time.
Matt Buckland, Programming Game AI By Example, p. 44
[/quote]

I've been working on porting the first example from Mr. Buckland's FSM chapter from C++ to C#, featuring a mildly alcoholic, spaghetti Western gold miner named Bob. I'm going to focus mostly on the FSM-specific code here, but you can get the full code from https://github.com/ericrrichards/ai/tree/master/trunk/WestWorld1.
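
The heart of the design is small enough to sketch here: a State base class with Enter/Execute/Exit hooks, and a state machine that delegates the owner's update to whatever state is current. This is just the bare pattern, not the exact code from the book or my port; the WestWorld1 project fleshes it out with concrete states for Bob (mining, drinking, heading home, and so on), each of which simply subclasses State with the miner as the owner type.

[code]
// Bare-bones FSM pattern, in the style of the book's examples
public abstract class State<T> {
    public abstract void Enter(T owner);    // one-time setup when the state becomes current
    public abstract void Execute(T owner);  // called every update tick
    public abstract void Exit(T owner);     // cleanup when transitioning away
}

public class StateMachine<T> {
    private readonly T _owner;
    public State<T> CurrentState { get; private set; }

    public StateMachine(T owner, State<T> initialState) {
        _owner = owner;
        CurrentState = initialState;
    }

    public void Update() {
        if (CurrentState != null) CurrentState.Execute(_owner);
    }

    public void ChangeState(State<T> newState) {
        CurrentState.Exit(_owner);
        CurrentState = newState;
        CurrentState.Enter(_owner);
    }
}
[/code]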

67_WestWorld1.PNG


Read more...
ericrrichards22
For the past few weeks, I've been once again noodling on the idea of starting a .NET port of a classic Id FPS. As a kid on my first computer, an off-brand 486 with DOS, I just hit the tail end of the good old days of shareware. And amongst all the floppy disks of kiddy and educational software, sliming Gruzzles couldn't really hold a candle to exploring Indiana Jones and the Last Crusade-esque Gothic castles and knifing Nazis.

While the original source-code for Wolfenstein 3D has been available for some time, it is a bit of a slog trying to wade through C code that was written 20 years ago, with near and far pointers, blitting directly to VGA memory, and hand-rolled assembly routines, let alone build the project successfully. Consequently, converting over to C# is a bit of a struggle, particularly for some of the low-level pointer manipulation and when loading in binary assets - it is very helpful to be able to step through both codebases side by side in the debugger to figure out any discrepancies.

Because of these difficulties, I have started looking at the Chocolate Wolfenstein 3D project, by Fabien Sanglard. Mr. Sanglard is a great student of the Id Software engine source code, and has done several very nice writeups of the different engines open-sourced by Id. He was even planning on writing a book-length analysis of Wolfenstein 3D, which hopefully is still underway. Chocolate Wolfenstein 3D is a more modernized C++ conversion of the original Id code, with some of the more gnarly bits smoothed out, and using SDL for cross-platform window-management, graphics, input, and sound. Even better, it can be built and run using current versions of Visual Studio.

The only problem I had with the Chocolate Wolfenstein 3D GitHub repository is that it is missing some dependencies and requires a small amount of tweaking in order to get it to build and run on Visual Studio 2013. These steps are not particularly difficult, but if you simply clone the repo and hit F5, it doesn't work right out of the box. If you are working on a Mac, there is a very nice guide on setting up the project in XCode, but I have not found a similar guide for Windows, so I decided to document the steps that I went through and share those here.

titlescreen.png


Read more...
ericrrichards22
Just a quick update today. I've updated the 3D model loading code to use the latest version of AssimpNet that is on Nuget now. The latest code is updated on GitHub.

The biggest changes appear to be that the AssimpImporter/Exporter classes have been merged into a single AssimpContext class that can do both. Some of the GetXXX methods to retrieve vertex elements and textures also appear to have been replaced with properties. The way that one hooks into the internal Assimp logging has also changed. I haven't discovered many other changes yet, besides some additional file format support, but I haven't gotten around to exploring it that closely. It's mildly irritating that the interface has changed, but hey, at least it is being actively worked on (which is more than I can say for some of my stuff at times...).
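
For anyone else making the jump, the new-style import path looks roughly like the sketch below. I'm writing this from memory of the updated API, so double-check the post-process flags and property names against the AssimpNet documentation rather than taking this as gospel:

[code]
using Assimp;

public static class ModelImportExample {
    public static void Load(string path) {
        // AssimpContext replaces the old AssimpImporter/AssimpExporter split
        using (var context = new AssimpContext()) {
            var scene = context.ImportFile(path,
                PostProcessSteps.Triangulate | PostProcessSteps.GenerateSmoothNormals);

            foreach (var mesh in scene.Meshes) {
                // Vertex data is exposed as properties now, rather than GetXXX() methods
                var positions = mesh.Vertices;
                var normals = mesh.HasNormals ? mesh.Normals : null;
                var indices = mesh.GetIndices();
                // ...copy positions/normals/indices into our own vertex and index buffers here
            }
        }
    }
}
[/code]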

I've been quite busy with work lately, so I haven't had a huge amount of time to work on anything here. I'm also a little burned out on doing straight DX11 graphics stuff, so I think I'm going to try to transition into doing some more applied work for a while. I've got some stuff in the pipeline on making simple games with Direct2D, and ideally I'd like to get involved with One Game a Month, to give me a little additional motivation.


Read more...
ericrrichards22
Not really game related, but something I've been working on lately.

Recently, I have been using OWIN a good deal for developing internal web applications. One of the chief benefits of this is that OWIN offers the ability to host its own HTTP server, which allows me to get out of the business of installing and configuring IIS on Windows - one of the main pain points when deploying the products I work on to our customers. Unfortunately, when I first started using OWIN, there was not a version of ASP.NET MVC available that was compatible with it. Most of my previous experience with programming web servers has been based on MVC (except for briefly experiencing WebForms hell), so finding a similar framework that was compatible with OWIN was one of my first priorities.

In my search, I discovered Nancy, a fairly similar MVC-style framework which offered OWIN support. It also was capable of using the same Razor view engine as ASP.NET MVC, with some minor differences, so I was able to convert existing IIS ASP.NET MVC applications to OWIN/Nancy using most of the existing views and front-end code. At some point I plan to write an article illustrating how one would do this type of conversion, but for now, I'm going to examine one particular gotcha I discovered when converting my personal Netflix-type video application to OWIN/Nancy: serving HTML5 video files.
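
To set the stage, the naive approach looks something like the sketch below: a Nancy module that just streams the file back. The route and paths here are made up for illustration, and this is not the code from my application; the gotcha, which the rest of the post digs into, is that HTML5 video playback relies on HTTP range requests for seeking, and a plain streamed response like this doesn't honor them.

[code]
using System.IO;
using Nancy;
using Nancy.Responses;

public class VideoModule : NancyModule {
    public VideoModule() {
        // Naive version: stream the whole file, ignoring any Range header the browser sends
        Get["/video/{name}"] = parameters => {
            var path = Path.Combine("videos", (string)parameters.name + ".mp4");
            return new StreamResponse(() => File.OpenRead(path), "video/mp4");
        };
    }
}
[/code]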

Read more...
ericrrichards22
A couple of weeks ago as I was browsing HackerNews, I stumbled onto an article about creating spherical procedural maps by Andy Gainey. Procedural terrain/map generation is always something I find interesting, having grown up slightly obsessed with Civilization and its successors. Various methods of tiling a sphere in order to make a game grid have been bouncing around the back of my mind for a while now - unfortunately finding time to explore some of those ideas has been problematic. But, after reading through Mr. Gainey's piece and looking over the source for his demo, I got inspired to take a stab at something similar. My first attempt was to try to directly port his Javascript implementation, which got bogged down in the impedance mismatch between going from JS to C# and WebGL to DirectX at the same time, so I stepped back, and decided to come at the idea again, working in smaller steps.

The first step was to generate a spherical triangle mesh. For this purpose, the spherical mesh that we have previously worked with is not very well suited, as its topology is very inconsistent, particularly around the poles, as you can see in the screenshot below. Instead of that kind of latitude/longitude subdivision, we will use an alternate method, which starts from an icosahedron, then subdivides each face of the icosahedron into four triangles, and finally projects the resulting vertices onto the sphere. This produces a much more regular tessellation, known as a geodesic grid, as you can see in the right screenshot below. This tessellation is not without artifacts - at the vertices of the original icosahedron, the resulting geodesic sphere has five edges meeting, while all other vertices have six, similar to a real-life soccer ball.


regularSphere.PNG
Latitude/Longitude tessellation, with artifacts at the poles
geosphere.PNG
Geodesic tessellation

Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, Chapter 6, presented a method of generating a geodesic sphere, which I previously skipped over. The code for this example is based upon his example, with some improvements to eliminate redundant vertices.

The code for this example can be obtained from my GitHub repository.
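
The subdivision step itself is pretty compact. Here is a sketch of the core idea, assuming SlimDX's Vector3 (or any equivalent) and that the initial icosahedron vertex and index lists come from Luna's example; the midpoint cache is the piece that eliminates the redundant vertices mentioned above.

[code]
using System.Collections.Generic;
using SlimDX;

public static class GeoSphere {
    // Split every triangle into four by inserting edge midpoints, then push the
    // new vertices out onto the unit sphere. Call repeatedly for finer tessellation.
    public static void Subdivide(List<Vector3> vertices, List<int> indices) {
        var midpointCache = new Dictionary<long, int>();
        var newIndices = new List<int>();

        for (var i = 0; i < indices.Count; i += 3) {
            int i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];
            var m01 = Midpoint(i0, i1, vertices, midpointCache);
            var m12 = Midpoint(i1, i2, vertices, midpointCache);
            var m20 = Midpoint(i2, i0, vertices, midpointCache);

            // one triangle becomes four
            newIndices.AddRange(new[] { i0, m01, m20 });
            newIndices.AddRange(new[] { i1, m12, m01 });
            newIndices.AddRange(new[] { i2, m20, m12 });
            newIndices.AddRange(new[] { m01, m12, m20 });
        }
        indices.Clear();
        indices.AddRange(newIndices);
    }

    private static int Midpoint(int a, int b, List<Vector3> vertices, Dictionary<long, int> cache) {
        // Key the edge by its two endpoints, order-independent, so shared edges reuse the same vertex
        var key = a < b ? ((long)a << 32) | (uint)b : ((long)b << 32) | (uint)a;
        int index;
        if (cache.TryGetValue(key, out index)) return index;

        var mid = Vector3.Normalize((vertices[a] + vertices[b]) * 0.5f);  // project onto the unit sphere
        vertices.Add(mid);
        index = vertices.Count - 1;
        cache[key] = index;
        return index;
    }
}
[/code]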


Read more...
ericrrichards22
Moving along with Chapter 1 of the HLSL Development Cookbook, we're on to handling directional lighting. I've covered most of the theory behind directional lighting previously, so this is going to be quite brief.

To recap, in case anyone is unfamiliar with the term, a directional light is a light which illuminates the entire scene equally from a given direction. Typically this means a light source which is so large and far away from the scene being rendered, such as the sun or moon, that any attenuation in light intensity (via the inverse-square law) or variation in the direction from any point in the scene to the light source is negligible. Thus, we can model a directional light simply by using a directional vector, representing the direction of incoming light from the source, and the color of the light emitted by the light source.

Generally, we model the light hitting a surface by breaking it into two components, diffuse and specular, according to an empirical lighting equation called the Phong reflection model. Diffuse light is light that reflects from a surface equally in all directions; it is calculated from the angle between the surface normal vector and the vector from the surface to the light source. Specular light is light that is reflected off of glossy surfaces in a view-dependent direction. The direction of this specular reflection is controlled by the surface normal, the vector from the surface to the light, and the vector from the surface to the viewer of the scene, while the size and color of the specular highlights are controlled by properties of the material being lit: the specular exponent, approximating the "smoothness" of the object, and the specular color of the material. Many materials reflect specular highlights in all color frequencies, while others, mostly metals, absorb some frequencies more than others. For now, we're only going to consider the first class of materials.
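
Written out in C# syntax (the real thing lives in the pixel shader, in HLSL, but the math is the same), the per-pixel calculation looks roughly like the sketch below; all of the vectors are assumed to be normalized, and the material and light values are placeholders.

[code]
using System;
using SlimDX;

public static class DirectionalLighting {
    public static Vector3 Shade(
        Vector3 normal,        // interpolated surface normal
        Vector3 toLight,       // direction from the surface toward the light (negated light direction)
        Vector3 toEye,         // direction from the surface toward the camera
        Vector3 lightColor,
        Vector3 diffuseColor,
        Vector3 specularColor,
        float specularExponent) {

        // Lambert diffuse term: falls off with the angle between the normal and the light
        var nDotL = Math.Max(Vector3.Dot(normal, toLight), 0.0f);
        var diffuse = Modulate(lightColor, diffuseColor) * nDotL;

        // Phong specular term: reflect the light direction about the normal and compare it
        // with the view direction, sharpened by the specular exponent
        var specular = Vector3.Zero;
        if (nDotL > 0.0f) {
            var reflection = Vector3.Normalize(normal * (2.0f * nDotL) - toLight);
            var specFactor = (float)Math.Pow(Math.Max(Vector3.Dot(reflection, toEye), 0.0f), specularExponent);
            specular = Modulate(lightColor, specularColor) * specFactor;
        }

        return diffuse + specular;
    }

    // component-wise color multiply
    private static Vector3 Modulate(Vector3 a, Vector3 b) {
        return new Vector3(a.X * b.X, a.Y * b.Y, a.Z * b.Z);
    }
}
[/code]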

Below you can see a model being lit by directional light, in addition to the ambient lighting we used in the last example. Here, the directional light is coming downward and from the lower-left to the upper-right. Surfaces with normals facing more directly towards the light source are lit more brightly than surfaces which are angled partly away from the light, while surfaces facing away from the light are lit only by ambient lighting. In addition, you can see a series of brighter spots along the back of the rabbit model, where there are specular highlights.

Full code for this example can be downloaded from my GitHub repository.


directionalLighting.PNG

Read more...
ericrrichards22
Well, it's been two rough months since the last post. Work has been crazy, and I've been wrestling with finishing up my never-ending Voronoi mapping project. So I decided to switch gears and return to the HLSL Development Cookbook. With code in place to handle loading the .sdkmesh model files used by the book's examples, the hardest part is done. Now it is just a matter of converting plain HLSL vertex and pixel shaders over into the Effects Framework format I've been using, and porting the DXUT C++ code over to C# and my SlimDX application framework.

The first chapter of the HLSL Development Cookbook covers classic forward lighting, which is the style of lighting that we have used throughout the examples posted here thus far (see Directional, Point and Spot lights and three-point lighting). Later, the book covers some more complicated lighting schemes, but we'll get to that in due time.

Up first on the slate is ambient lighting. If you've been reading along with my posts, you may recall that we briefly discussed ambient lighting when defining our Light and Material classes. Ambient lighting is something of a hack that was invented to fake the effect of indirect lighting (light that hits a surface after bouncing off one or more other surfaces) without having to actually perform the computations to model the light rays. Historically, back in the days before programmable shaders, this ambient lighting was expressed as constant color values for the lights and materials in the scene being rendered. So far, I have followed this simple scheme in the shaders I adapted from Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. As you can see in the screenshot on the left below, this method results in a flat ambient color, which conceals the surface details of the mesh. Combined with the diffuse and specular terms in the traditional Phong reflection equation, this method works well, but on its own it is underwhelming.

HLSL Development Cookbook presents an alternative method of calculating ambient lighting, called hemispheric ambient lighting. This method takes advantage of programmable pixel shaders, and allows us to calculate a varying ambient color per pixel, based on the interpolated surface normal and two constant color values representing the color of light that is bounced off the ground and off the ceiling/sky. As you can see in the screenshot on the right below, this lighting method preserves the surface details of the mesh (so long as different colors are selected for the top and bottom colors). As we'll see, this method of computing ambient lighting is also pretty cheap to do computationally.
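
The whole technique boils down to a single lerp, shown here in C# syntax for clarity (in the example it's a couple of lines in the pixel shader); the ground and sky colors themselves are whatever you choose for the scene.

[code]
using SlimDX;

public static class HemisphericAmbient {
    // Blend from the "ground" color to the "sky" color based on how far the
    // (normalized) surface normal points up: normal.Y of -1 maps to 0, +1 maps to 1.
    public static Vector3 Ambient(Vector3 normal, Vector3 groundColor, Vector3 skyColor) {
        var up = normal.Y * 0.5f + 0.5f;
        return groundColor + (skyColor - groundColor) * up;   // lerp(ground, sky, up)
    }
}
[/code]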


flatAmbient.PNG
Old-style flat ambient lighting
hemisphericalAmbient.PNG
Hemispherical ambient lighting

As always, the full source code for this post is available at my github repository.

Read more...
ericrrichards22
Many moons ago now, I picked up a copy of HLSL Development Cookbook by Doron Feinstein. I had intended to work my way through it after I finished up Luna's Introduction to 3D Game Programming with Direct3D 11.0, but winter and life kind of got in the way...

Another difficulty I had with this book was that the code samples made heavy use of DXUT, the semi-official Direct3D sample framework. With the Windows 8 transition, DXUT is sorta-kinda deprecated now, and besides, SlimDX doesn't really have support for it - SlimDX is more of a bare-metal binding to the underlying DirectX framework, which DXUT sits on top of.

So in any case, I was going to have to do a fair amount of rework to adapt the samples from this book to fit into the framework of code that I'd built up over the past year or so. Swapping the UI-related stuff out for WinForms controls hosted in my SlimDX app didn't seem as though it would be that hard, but a bigger stumbling block was the fact that all of the sample meshes provided with the code used the .sdkmesh format. SDKMesh is a 3D model format that used to be used in a lot of the DirectX sample code, after .X models kind of fell into disfavor (of course, now it appears they are using yet another model format, .CMO). SDKMesh is unfortunately not supported by Assimp, so I can't use Assimp.Net to load SDKMesh meshes. Fortunately, it is a relatively simple binary format. The documentation, at least that provided in the DirectX July 2010 SDK, is a little spotty and not totally correct, but it's possible to figure things out by looking at the source code for DXUT and another C# SDKMesh loader that appears to have disappeared from the internet since I first found it...

Well, this has sat in my todo pile for long enough, let's get started. As always, you can get the code for this example from my public GitHub repository.

sdkmesh.png
Read more...
ericrrichards22
I've decided to move my blog off of Blogger. Blogger was great for getting started, but it has just become too painful to fight with going forward. I'm sick of fighting the Blogger templating to force it to display my content the way that I want it to. Blogger has a habit of absolutely mangling the html that I try to post. For posts consisting mostly of plain-text and images, this is not that big a problem, but I have to spend a ton of time trying to get code-heavy posts to render in a readable way. Over the last year, I've spent far more time tweaking my posted html to get it right than it has taken me in the last week to write a home-rolled blog engine for this new site to use, even counting extracting my existing content out of Blogger and converting it into a new format.

Beyond that, there are a number of things that I would like to do going forward that I simply cannot do easily with Blogger, but I can do very easily if I have control over the site. I'm far more comfortable writing html, javascript and server-side code than I will ever be at bending Blogger to my will.

I'm sure there will be some speedbumps, but I think this is a much better solution for me, and ultimately, for you, readers. I'm nowhere near finished, but I think I've got the essentials ironed out, so I'm going to go ahead with the switch-over.

Things that should still work:

  • Existing links to my page should still be valid, at least for content. I've put in quite a bit of effort trying to get the old-style Blogger urls to play nicely with my new site
  • All of the old content has been imported.
  • Images should still be fine, since those were all externally hosted and I haven't swapped those out yet. Eventually, I'd like to serve all my screenshots locally, so I can convert them to jpegs so pages will load faster (most are loss-less PNGs right now).
Things that are broken:

    • Comments are disabled for the moment. Extracting the primary content from Blogger was enough of a pain, so I haven't bothered with the old comments yet. I also need to add server-side support for comments. If you have any questions about any of the tutorials in the meantime, at the top of each post is a link with my name that will allow you to send me an email. I'll try to get back to you as quickly as I can.
    • RSS feeds - From my analytics, I don't think anybody actually used the blog feed that Blogger provided, but I'm not going to bother with implementing one for the new site unless there is some demand or I have some spare time and get inspired.
    • Some of the older posts might look a little wonky. I'm going through them as I have time and making sure that the content I extracted from Blogger renders decently, but it is time-consuming. Particularly some of the oldest posts, when I was still using the online Blogger editor, before I standardized my workflow on Windows Live Writer, may be a little bit off.
      Thanks for bearing with me. If you notice anything unusual, feel free to send me an [email="ericrrichards@gmail.com"]email[/email], it would be very helpful in pinpointing issues.

      Visit the new site!
ericrrichards22
For the last six or eight months, off and on, I've been trying to write some code that will create a Voronoi diagram from a set of random points, inspired by Amit Patel's Polygonal Map Generation demo. In the last week or so, I had a bit of a breakthrough, in that I finally managed to get an implementation of Fortune's Algorithm put together that would actually work and generate the Voronoi edges and vertices correctly so that I could render them. Since most of the existing implementations I've been trying to work from are either broken or an enormous pain in the ass to try to understand, I'm planning on writing up my implementation once I'm finally happy with it. I've still got a fair way to go, since right now I'm only outputting the graph edges and vertices successfully, and while that renders prettily, I haven't gotten to the point that I have all of the really useful graph connectivity information being output yet. Hopefully, if everything goes well, I'll manage to finish that up by the end of the week, but this has proven to be a real bugger of a problem to solve.

As I mentioned, most of the implementations I've studied are either incomplete or broken. Particularly, I haven't found an example yet that correctly clips the edges of the generated Voronoi polygons to a rectangular area. So long as all you care about is rendering the graph to the screen, this isn't a big problem, since most 2D graphics libraries will happily draw lines that extend beyond the drawable area of the screen. However, if you're trying to subdivide a rectangular region into Voronoi polygons, it's kind of nice to have your edges actually clipped to that region. What I'm envisioning doing with this code eventually is using it to render 3D maps, subdivided into territories, but with an overlaid rectangular grid for moving units - think of the strategic map in Total War games or Lords of the Realm.

After I got frustrated with trying to make the broken clipping code I was working from perform correctly, I trolled Google and came across the Cohen-Sutherland line-clipping algorithm. This looked to be exactly what I needed, and wonder of wonders, the Wikipedia page actually featured a readable, reasonable example, rather than the obtuse academic pseudo-code you usually find there (see the Fortune's Algorithm article...). The only thing I would caution you about with the Wikipedia example is that it uses a coordinate system where the origin is at the lower-left of the rectangle, as usually encountered in mathematics, rather than the upper-left origin we commonly use in computer graphics, so some tweaking is necessary.

The code for this example can be downloaded from my github repository, at https://github.com/ericrrichards/dx11.git. The algorithm is included in the Algorithms project, while the example code is in the CohenSutherlandExample project. The example code is a quick-and-dirty mess of GDI drawing code, so I'm going to focus on the algorithm code.
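
For the curious, the whole algorithm fits in a screenful. This is a condensed sketch using GDI types and the top-left origin convention mentioned above, rather than the exact API of the Algorithms project: classify each endpoint against the clip rectangle with a 4-bit outcode, then repeatedly pull the outside endpoint onto whichever boundary it violates.

[code]
using System.Drawing;

public static class CohenSutherland {
    const int Inside = 0, Left = 1, Right = 2, Bottom = 4, Top = 8;

    static int OutCode(float x, float y, RectangleF r) {
        var code = Inside;
        if (x < r.Left) code |= Left; else if (x > r.Right) code |= Right;
        if (y < r.Top) code |= Top; else if (y > r.Bottom) code |= Bottom;   // top-left origin: Top is the smaller Y
        return code;
    }

    // Clips the segment p0-p1 to the rectangle; returns false if it lies entirely outside.
    public static bool ClipSegment(RectangleF r, ref PointF p0, ref PointF p1) {
        float x0 = p0.X, y0 = p0.Y, x1 = p1.X, y1 = p1.Y;
        int c0 = OutCode(x0, y0, r), c1 = OutCode(x1, y1, r);

        while (true) {
            if ((c0 | c1) == 0) {              // both endpoints inside: trivially accept
                p0 = new PointF(x0, y0); p1 = new PointF(x1, y1);
                return true;
            }
            if ((c0 & c1) != 0) return false;  // both outside the same boundary: trivially reject

            // pick an endpoint that is outside and move it onto the violated boundary
            var outCode = c0 != 0 ? c0 : c1;
            float x = 0, y = 0;
            if ((outCode & Bottom) != 0)     { x = x0 + (x1 - x0) * (r.Bottom - y0) / (y1 - y0); y = r.Bottom; }
            else if ((outCode & Top) != 0)   { x = x0 + (x1 - x0) * (r.Top - y0) / (y1 - y0); y = r.Top; }
            else if ((outCode & Right) != 0) { y = y0 + (y1 - y0) * (r.Right - x0) / (x1 - x0); x = r.Right; }
            else if ((outCode & Left) != 0)  { y = y0 + (y1 - y0) * (r.Left - x0) / (x1 - x0); x = r.Left; }

            if (outCode == c0) { x0 = x; y0 = y; c0 = OutCode(x0, y0, r); }
            else { x1 = x; y1 = y; c1 = OutCode(x1, y1, r); }
        }
    }
}
[/code]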

image_thumb1.png?imgmax=800
After clipping:
image_thumb3.png?imgmax=800

Read more...
ericrrichards22
Tuesday was the anniversary of my first real post on this blog. For the most part, I've tried to keep my content here on the technical side of things, but, what the hell, this is a good time to reflect on a year of blogging - what went well, what went poorly, and where I'm going from here.

What I Meant to Accomplish (And What I actually Accomplished...)


Content


I restarted this blog about a year ago to document my attempt at learning DirectX 11, using Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. In my day job, I mostly code ASP.NET and WinForms applications using C#, so I decided to convert the C++ examples from Mr. Luna's book into C#, using SlimDX as my managed DirectX wrapper. SlimDX appeared to me to be slightly more mature than its main competitor, SharpDX, as its documentation was a little more complete, and it had been out in the wild a little bit longer, so there was a bit more third-party information (StackOverflow, other blogs, GameDev.net forum postings, etc.) on it. I suppose I could have also gone with XNA, although Microsoft appears to have abandoned any new development on it (last release 9/16/2010...), and I felt like SlimDX's simple wrapper around the DirectX library would be easier to translate than shoehorning into the XNA model, not to mention wrassling with the XNA Content Pipeline.

My initial goal was to work through the book, converting each of the examples presented and then blogging about the process. Except for the chapters on Compute Shaders and Quaternions (which I have yet to tackle, mostly because the examples are not terribly interesting), I completed that goal by the middle of November of last year. From there, I started incorporating elements from Carl Granberg's Programming an RTS Game with Direct3D into the terrain rendering code that Luna's book presented, as well as dabbling in integrating Direct2D and SpriteTextRenderer to handle 2D drawing.

After that, my intention was to start working my way through Ian Millington's book, Game Physics Engine Development. This is about where I ran out of steam. Between the hassle of trying to reconcile the code examples from this book, which were based on OpenGL and somewhat less self-contained than what I had been working on previously, various issues in my personal and work life, and the general malaise of an especially cold, dark winter here in New England, my impetus to work on my side projects faded away spectacularly. If any of you followed this regularly, I'm sure you've noticed that it's been almost four months since I've posted anything new, and before that there was another dry spell of more than a month.

With the arrival of summer and a move that reduces both the strain on my finances and the number of hours per day I spend driving back and forth to work considerably, I've found that my mood has improved by leaps and bounds, and I have the extra energy to expend on coding outside of work again, finally. Right now, I've got a number of new things I'm working through, and a number of ideas in the pipeline that should be making their way up here in the near future.

image_thumb%25255B5%25255D.png?imgmax=800

Read More...
ericrrichards22
Howdy. Today, I'm going to discuss rendering UI text using the SlimDX SpriteTextRenderer library. This is a very nifty and light-weight extension library for SlimDX, hosted on CodePlex. In older versions of DirectX, it used to be possible to easily render sprites and text using the ID3DXSprite and ID3DXFont interfaces, but those have been removed in newer versions of DirectX. I've experimented with some other approaches, such as using Direct2D and DirectWrite or the DirectX Toolkit, but wasn't happy with the results. For whatever reason, Direct2D doesn't interop well with DirectX 11, unless you create a shared DirectX 10 device and jump through a bunch of hoops, and even then it is kind of a PITA. Likewise, I have yet to find C# bindings for the DirectX Toolkit, so that's kind of a non-starter for me; I'd either have to rewrite the pieces that I want to use with SlimDX, or figure out the marshaling to use the C++ dlls. So for that reason, the SpriteTextRenderer library seems to be my best option at the moment, and it turned out to be relatively simple to integrate into my application framework.


If you've used either the old DirectX 9 interfaces or XNA, then it'll be pretty intuitive how to use SpriteTextRenderer. The SpriteRenderer class has some useful methods to draw 2D sprites, which I haven't explored much yet, since I have already added code to draw screen-space quads. The TextBlockRenderer class provides some simple and handy methods to draw text up on the screen. Internally, it uses DirectWrite to generate sprite font textures at runtime, so you can use any installed system font, and specify the weight, style, and point size easily, without worrying about the nitty-gritty details of creating the font.


One limitation of the TextBlockRenderer class is that you can only use an instance of it to render text with a single font. Thus, if you want to use different font sizes or styles, you need to create different instances for each font that you want to use. Because of this, I've written a simple manager class, which I'm calling FontCache, which will provide a central point to store all the fonts that are used, as well as a default font if you just want to throw some text up onto the screen.


The new code for rendering text has been added to my pathfinding demo, available at my GitHub repository, https://github.com/ericrrichards/dx11.git.
font_thumb%25255B1%25255D.jpg?imgmax=800
(Looks much better without jpeg compression...)

Read more...
ericrrichards22

IBM Connect 2014

Yikes! It's been more than two weeks since my last post... It's been a busy two weeks, as my employer has been gearing up for and attending IBM's Connect 2014 conference in Orlando. So I've had less time than usual to work on my side projects here. Because of that, I'm going to go outside of my usual format, and recap the conference and some thoughts on it. These are my personal opinions, so bear in mind the old adage about the ubiquity and quality of opinions...
1545900_10153775947190296_46999126_n_thumb%25255B1%25255D.jpg?imgmax=800

Read more...
ericrrichards22
As I mentioned last time, I'm going to move on from fiddling with my Terrain class for a little while, and start working on some physics code instead. I bought a copy of Ian Millington's Game Physics Engine Development some months ago and skimmed through it, but was too busy with other things to really get into the accompanying source code. Now, I do have some free cycles, so I'm planning on working through the examples from the book as my next set of posts.

Once again, the original source code is in C++, rather than the C# I'll be using. Millington's code also uses OpenGL and GLUT, rather than DirectX. Consequently, these aren't going to be the kind of straight ports I did with most of Frank Luna's examples; I'll be porting the core physics code, and then for the examples, I'm just going to have to make something up that showcases the same features.

In any case, we'll start off with the simple particle physics of Chapters 3 & 4, and build a demo that simulates the ballistics of firing some different types of projectiles. You can find my source for this example on my GitHub page, at https://github.com/ericrrichards/dx11.git.
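
The particle itself is about as simple as physics objects get. Here's a sketch of the core integration step, adapted from the book's particle class and using SlimDX's Vector3; the per-projectile values (mass, starting velocity, gravity, damping) are what distinguish the four types in the demo.

[code]
using System;
using SlimDX;

public class Particle {
    public Vector3 Position;
    public Vector3 Velocity;
    public Vector3 Acceleration;   // usually gravity, scaled per projectile type
    public float Damping = 0.99f;  // crude stand-in for drag
    public float InverseMass;      // 1/mass; zero means the particle can't be moved

    private Vector3 _forceAccumulator;

    public void AddForce(Vector3 force) {
        _forceAccumulator += force;
    }

    // Simple Euler integration, advancing the particle by dt seconds
    public void Integrate(float dt) {
        if (InverseMass <= 0.0f || dt <= 0.0f) return;

        Position += Velocity * dt;

        // constant acceleration plus whatever forces were applied this frame (a = F/m)
        var acceleration = Acceleration + _forceAccumulator * InverseMass;
        Velocity += acceleration * dt;

        // bleed off a little velocity each second so numerical error doesn't add energy
        Velocity *= (float)Math.Pow(Damping, dt);

        _forceAccumulator = Vector3.Zero;
    }
}
[/code]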
http://www.youtube.com/watch?feature=player_embedded&v=0X98m3WX8OA
Here you can see the four projectile types: 1) a pistol-type round, 2) a large artillery shell, 3) a fireball, 4) a bolt from a railgun or energy weapon.



Read more...
ericrrichards22
http://www.youtube.com/watch?feature=player_embedded&v=WIOQuEJSpEg
Watch the intrepid red blob wind its way through the mountain slopes!

Last time, we discussed the implementation of our A* pathfinding algorithm, as well as some commonly used heuristics for A*. Now we're going to put all of the pieces together and get a working example to showcase this pathfinding work.

We'll need to slightly rework our mouse picking code to return the tile in our map that was hit, rather than just the bounding box center. To do this, we're going to need to modify our QuadTree, so that the leaf nodes are tagged with the MapTile that their bounding boxes enclose.

We'll also revisit the function that calculates which portions of the map are connected, as the original method in Part 1 was horribly inefficient on some maps. Instead, we'll use a different method, which uses a series of depth-first searches to calculate the connected sets of MapTiles in the map. This method is much faster, particularly on maps that have more disconnected sets of tiles.
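
In outline, the new connectivity pass looks like the sketch below: start a depth-first search from each unvisited, walkable tile and stamp everything reachable from it with the same set number. MapTile here is an illustrative stand-in for the real tile class described in Part 1.

[code]
using System.Collections.Generic;

// Illustrative stand-in for the real map tile type
public class MapTile {
    public bool Walkable;
    public int Set = -1;                                    // -1 = not yet assigned to a connected set
    public List<MapTile> Neighbors = new List<MapTile>();
}

public static class Connectivity {
    public static void AssignConnectedSets(IEnumerable<MapTile> tiles) {
        var setId = 0;
        foreach (var tile in tiles) {
            if (!tile.Walkable || tile.Set >= 0) continue;  // impassable or already labeled
            FloodFill(tile, setId++);
        }
    }

    // Iterative depth-first search, so large maps don't blow the call stack
    private static void FloodFill(MapTile start, int setId) {
        var stack = new Stack<MapTile>();
        stack.Push(start);
        while (stack.Count > 0) {
            var tile = stack.Pop();
            if (tile.Set >= 0) continue;
            tile.Set = setId;
            foreach (var neighbor in tile.Neighbors) {
                if (neighbor.Walkable && neighbor.Set < 0) stack.Push(neighbor);
            }
        }
    }
}
[/code]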

We'll also need to develop a simple class to represent our unit, which will allow it to update and render itself, as well as maintain pathfinding information. The unit class implementation used here is based in part on material presented in Chapter 9 of Carl Granberg's Programming an RTS Game with Direct3D.

Finally, we'll add an additional texture map to our rendering shader, which will draw impassable terrain using a special texture, so that we can easily see the obstacles that our unit will be navigating around. You can see this in the video above; the impassable areas are shown with a slightly darker texture, with dark rifts.

The full code for this example can be found on my GitHub repository, https://github.com/ericrrichards/dx11.git, under the 33-Pathfinding project.

Read more...
ericrrichards22
In our previous installment, we discussed the data structures that we will use to represent the pathfinding graph for our terrain, as well as the initial pre-processing necessary to populate that graph with the information that our pathfinding algorithm will make use of. Now, we are ready to actually implement our pathfinding algorithm. We'll be using A*, probably the most commonly used graph search algorithm for pathfinding.


A* is one of the most commonly used pathfinding algorithms in games because it is fast, flexible, and relatively simple to implement. A* was originally a refinement of Dijkstra's graph search algorithm. Dijkstra's algorithm is guaranteed to determine the shortest path between any two nodes in a directed graph, however, because Dijkstra's algorithm only takes into account the cost of reaching an intermediate node from the start node, it tends to consider many nodes that are not on the optimal path. An alternative to Dijkstra's algorithm is Greedy Best-First search. Best-First uses a heuristic function to estimate the cost of reaching the goal from a given intermediate node, without reference to the cost of reaching the current node from the start node. This means that Best-First tends to consider far fewer nodes than Dijkstra, but is not guaranteed to produce the shortest path in a graph which includes obstacles that are not predicted by the heuristic.


A* blends these two approaches, using a cost function (f(x)) to evaluate each node based on both the cost from the start node (g(x)) and the estimated cost to the goal (h(x)). This allows A* to find the optimal shortest path while considering fewer nodes than pure Dijkstra's algorithm. The number of intermediate nodes expanded by A* is somewhat dependent on the characteristics of the heuristic function used. There are generally three cases of heuristics that can be used to control A*, which result in different performance characteristics:

  • When h(x) underestimates the true cost of reaching the goal from the current node, A* will expand more nodes, but is guaranteed to find the shortest path.
  • When h(x) is exactly the true cost of reaching the goal, A* will only expand nodes along the shortest path, meaning that it runs very fast and produces the optimal path.
  • When h(x) overestimates the true cost of reaching the goal from the current node, A* will expand fewer intermediate nodes. Depending on how much h(x) overestimates the true cost, this may result in paths that are not the true shortest path; however, this does allow the algorithm to complete more quickly.

    For games, we will generally use heuristics of the third class. It is important that we generate good paths when doing pathfinding for our units, but it is generally not necessary that they be mathematically perfect; they just need to look good enough, and the speed savings are very important when we are trying to cram all of our rendering and update code into just a few tens of milliseconds, in order to hit 30-60 frames per second.


    A* uses two sets to keep track of the nodes that it is operating on. The first set is the closed set, which contains all of the nodes that A* has previously considered; this is sometimes called the interior of the search. The other set is the open set, which contains those nodes which are adjacent to nodes in the closed set, but which have not yet been processed by the A* algorithm. The open set is generally sorted by the calculated cost of the node (f(x)), so that the algorithm can easily select the most promising new node to consider. Because of this, we usually consider the open list to be a priority queue. The particular implementation of this priority queue has a large impact on the speed of A*; for best performance, we need to have a data structure that supports fast membership checks (is a node in the queue?), fast removal of the best element in the queue, and fast insertions into the queue. Amit Patel provides a good overview of the pros and cons of different data structures for the priority queue on his A* page; I will be using a priority queue derived from Blue Raja's Priority Queue class, which is essentially a binary heap. For our closed set, the primary operations that we will perform are insertions and membership tests, which makes the .Net HashSet class a good choice.
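
    Putting those pieces together, the skeleton of the search looks like the sketch below. To keep it self-contained I've used a plain list for the open set and generic delegates for the graph, edge costs, and heuristic; the real implementation works against the terrain graph and uses the binary-heap priority queue for the open set, as described above.

[code]
using System;
using System.Collections.Generic;

public static class AStar {
    public static List<T> FindPath<T>(
        T start, T goal,
        Func<T, IEnumerable<T>> neighbors,
        Func<T, T, float> cost,       // g: actual cost between adjacent nodes
        Func<T, T, float> heuristic)  // h: estimated cost from a node to the goal
    {
        var cameFrom = new Dictionary<T, T>();
        var gScore = new Dictionary<T, float> { { start, 0f } };
        var closed = new HashSet<T>();           // the "interior" of the search
        var open = new List<T> { start };        // stand-in for a real priority queue

        while (open.Count > 0) {
            // pick the open node with the lowest f = g + h
            open.Sort((a, b) => (gScore[a] + heuristic(a, goal)).CompareTo(gScore[b] + heuristic(b, goal)));
            var current = open[0];
            open.RemoveAt(0);

            if (EqualityComparer<T>.Default.Equals(current, goal))
                return ReconstructPath(cameFrom, current);

            closed.Add(current);
            foreach (var next in neighbors(current)) {
                if (closed.Contains(next)) continue;
                var tentativeG = gScore[current] + cost(current, next);
                float known;
                if (!gScore.TryGetValue(next, out known) || tentativeG < known) {
                    gScore[next] = tentativeG;   // found a cheaper route to this node
                    cameFrom[next] = current;
                    if (!open.Contains(next)) open.Add(next);
                }
            }
        }
        return null;  // no path between start and goal
    }

    private static List<T> ReconstructPath<T>(Dictionary<T, T> cameFrom, T current) {
        var path = new List<T> { current };
        T prev;
        while (cameFrom.TryGetValue(current, out prev)) {
            path.Add(prev);
            current = prev;
        }
        path.Reverse();
        return path;
    }
}
[/code]

    Swapping the list-based open set for the binary heap is purely a performance change; the structure of the loop stays the same.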

    Read more...
ericrrichards22
This was originally intended to be a single post on pathfinding, but it got too long, and so I am splitting it up into three or four smaller pieces. Today, we're going to look at the data structures that we will use to represent the nodes of our pathfinding graph, and at generating that graph from our terrain class.

When we were working on our quadtree to detect mouse clicks on the terrain, we introduced the concept of logical terrain tiles; these were the smallest sections of the terrain mesh that we wanted to hit when we did mouse picking, representing a 2x2 portion of our fully-tessellated mesh. These logical terrain tiles are a good starting point for generating what I am going to call our map: the 2D grid that we will use for pathfinding, placing units and other objects, defining areas of the terrain, AI calculations, and so forth. At the moment, there isn't really anything to these tiles, as they are simply a bounding box attached to the leaf nodes of our quad tree. That's not terribly useful by itself, so we are going to create a data structure to represent these tiles, along with an array to contain them in our Terrain class. Once we have a structure to contain our tile information, we need to extract that information from our Terrain class and heightmap, and generate the graph representing the tiles and the connections between them, so that we can use it in our pathfinding algorithm.

The pathfinding code implemented here was originally derived from Chapter 4 of Carl Granberg's Programming an RTS Game with Direct3D. I've made some heavy modifications, working from that starting point, using material from Amit Patel's blog and BlueRaja's C# PriorityQueue implementation. The full code for this example can be found on my GitHub repository, https://github.com/ericrrichards/dx11.git, under the 33-Pathfinding project.
graph_thumb4_thumb.jpg?imgmax=800


    Read more...
ericrrichards22
I came across an interesting bug in the wrapper classes for my HLSL shader effects today. In preparation for creating a class to represent a game unit, for the purposes of demonstrating the terrain pathfinding code that I finished up last night, I had been refactoring my BasicModel and SkinnedModel classes to inherit from a common abstract base class, and after getting everything to the state that it could compile again, I had fired up the SkinnedModels example project to make sure everything was still rendering and updating correctly. I got called away to do something else, and ended up checking back in on it a half hour or so later, to find that the example had died with an OutOfMemoryException. Looking at Task Manager, this relatively small demo program was consuming over 1.5 GB of memory!

I restarted the demo, and watched the memory allocation as it ran, and noticed that the memory used seemed to be climbing quite alarmingly, 0.5-1 MB every time Task Manager updated. Somehow, I'd never noticed this before... So I started the project in Visual Studio, using the Performance Wizard to sample the .Net memory allocation, and let the demo run for a couple of minutes. Memory usage had spiked up to about 150MB, in this simple demo that loaded maybe 35 MB of textures, models, code and external libraries...
memprofiling_thumb%25255B1%25255D.jpg?imgmax=800

Read More...
ericrrichards22
Howdy, time for an update. I've mostly gotten the first cut of my terrain pathfinding code completed: I'm creating the navigation graph, and I've got an implementation of A* finished that allows me to create a list of terrain nodes representing the path between tile A and tile B. I'm going to hold off a bit on presenting all of that, since I haven't yet managed to put together a nice-looking demo to show off the pathfinding. I need to do some more work to create a simple unit class that can follow the path generated by A*, and between work and life stuff, I haven't gotten the chance to round that out satisfactorily yet.

I've also been doing some pretty heavy refactoring on various engine components, both for design and performance reasons. After the last series of posts on augmenting the Terrain class, and in anticipation of adding even more functionality as I added pathfinding support, I decided to take some time and split out the code that handles Direct3D resources and rendering from the more agnostic logical terrain representation. I'm not looking to do this at the moment, but this might also make implementing an OpenGL rendering system less painful, potentially.

Going through this, I don't think I am done splitting things up. I'm kind of a fan of small, tightly focused classes, but I'm not necessarily an OOP junkie. Right now, I'm pretty happy with how I have split things out. I've got the Terrain class, which contains mostly the rendering independent logical terrain representation, such as the quad tree and picking code, the terrain heightmap and heightmap generation code, and the global terrain state properties (world-space size, initialization information struct, etc). The rendering and DirectX resource management code has been split out into the new TerrainRenderer class, which does all of the drawing and creates all of the DirectX vertex buffers and texture resources.

I'll spare you all the intermediate gyrations that this refactoring push put me through, and just post the resulting two classes. Resharper was invaluable in this process; if you have access to a full version of Visual Studio, I don't think there is a better way to spend $100. I shiver to think of how difficult this would have been without access to its refactoring and renaming tools.

Read More...