Lyost's Journal



D3D11 Framework Update

Posted 10 March 2013

While there is still plenty of work to do on my D3D11 framework, I thought I'd post an update on its current status. What's currently in it:
  • keyboard input
  • mouse input
  • vertex shaders
  • pixel shaders
  • viewports
  • blend and depth stencil states
  • texturing (just loading from a file for now, will be expanding functionality later)
  • resizing/fullscreen toggling
A short list of things that I will be adding (I have a longer list of tasks for my own reference):
  • rest of the shader types
  • model loading
  • high dpi mouse support
  • joystick/xbox controller support
  • deferred contexts
  • sprite support (DirectXTK looks like it might make this part easy, especially the text support)
  • networking lib
It would be further along, but two main factors slowed down development. The first and main factor was that I wasn't able to work on it for most of January because work required a significant amount of extra hours. The second, more of a speed bump, was that MSDN notes several functions cannot be used in applications submitted to the Windows Store (the entire D3D11 reflection API, for example). While I am a long way off from doing anything with the Windows Store, if ever, I figured it might make this framework more useful and reduce maintenance down the line if I followed the recommendations and requirements for it. Part of that is compiling shaders at build time instead of runtime, so I upgraded to VS2012 to get integrated support for that instead of trying to roll my own content pipeline (which is not off the table for other content types, just no immediate plans for that feature).

Converting the project from VS2010 to VS2012 needed four notable changes, the first three of which are straightforward:
1. Removing references to the June 2010 DirectX SDK from include and linker paths, since the header and library files are now part of the VS2012 installation and available in default directories.
2. Switching from xnamath.h to directxmath.h, which in addition to changing the name of the include file also moves the types into the DirectX namespace. Since the rest of the D3D11 functions and types (e.g. ID3D11DeviceContext) are not in that namespace, it seems a bit inconsistent to have some functions/types inside the namespace and others outside of it.
3. The next easy change was to do build-time shader compilation and change the runtime loading mechanism to load the .cso file. This was a matter of splitting the vertex and pixel shaders into separate .hlsl files and setting their properties appropriately (mainly "Entrypoint Name" and "Shader Type"; I also like "Treat Warnings As Errors" on). The ContentManager class also needed its shader loading mechanism updated. Instead of calling D3DX11CompileFromFile in the CompileShader function, CompileShader was changed to LoadFile, which does a binary load of a file into memory. After that, the code is the same as before: calling CreateVertexShader or CreatePixelShader on the ID3D11Device instance (see the first sketch after this list). There was a little wrinkle in these edits regarding the current working directory when debugging versus the location of the .cso files. The .cso files were placed in $(OutDir) with the results of compiling the other projects in the solution (i.e. the .lib and .exe files, not the .obj for each code file), whereas the working directory when debugging was the code directory for the test program. This meant that specifying the filename without a path would fail to find the file at runtime. So either the output path for compiling the shaders needed to change or the working directory when debugging needed to change. I chose to change the working directory, since running in the same directory as the .exe file is a better match for how an application would be used outside of development. This also meant that the texture I was using needed to be copied to $(OutDir) as well, which was easy enough to add as a "Post-Build Event" (though I initially put in the copy command as "cp" since I'm used to Linux, but it didn't take long to remember the Windows name for the command is "copy").
4. This was the not-so-straightforward change: texture loading. Previously I was using D3DX11CreateShaderResourceViewFromFile, which is not available in VS2012. To get texture loading working again, I had to do a bit of research. I didn't want to get bogged down in loading various image file formats and reinventing the wheel (this entire project can probably be called reinventing the wheel, but the point of it is more about applying D3D11 and expanding my understanding of it). Luckily, right on the MSDN page for D3DX11CreateShaderResourceViewFromFile, there are links to recommended replacements. The recommended runtime replacement is DirectXTK, which compiled right out of the box for me, no playing with settings or paths to get it to compile. For using it, there was one problem I ran into (see the second sketch after this list for the basic usage). Initially I was using seafloor.dds from Tutorial 7 of the June 2010 DirectX Sample Browser. The DirectXTK Simple Win32 Sample comes with a texture file of the same name that looks the same. However, there is at least a slight difference in the file contents: the one from the June 2010 sample failed to load with ERROR_NOT_SUPPORTED, but the one from the DirectXTK sample works. After dealing with that, I decided to get rid of third-party content, and am now using a debug texture I had lying around, which is a .png file and works with DirectXTK just fine.
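For reference, here is a minimal sketch of the runtime side of change 3: loading a compiled .cso blob and creating a vertex shader from its bytecode. The function names and the lack of error handling are just for illustration, not the framework's actual code.

#include <d3d11.h>
#include <fstream>
#include <vector>

// binary load of a file into memory
std::vector<char> LoadFile(const char* path)
{
  std::ifstream file(path, std::ios::binary | std::ios::ate);
  std::vector<char> data((size_t)file.tellg());
  file.seekg(0, std::ios::beg);
  file.read(data.data(), data.size());
  return data;
}

// create a vertex shader from the compiled bytecode in a .cso file
ID3D11VertexShader* CreateVertexShaderFromCSO(ID3D11Device* device, const char* path)
{
  std::vector<char> bytecode = LoadFile(path);
  ID3D11VertexShader* shader = NULL;
  HRESULT hr = device->CreateVertexShader(bytecode.data(), bytecode.size(), NULL, &shader);
  return SUCCEEDED(hr) ? shader : NULL;
}

And a hedged sketch of the texture loading from change 4 through DirectXTK's WICTextureLoader (DDSTextureLoader.h provides the equivalent CreateDDSTextureFromFile for .dds files). The wrapper function here is my own, not part of DirectXTK.

#include <d3d11.h>
#include "WICTextureLoader.h" // from DirectXTK

// create a shader resource view from a .png (or any other WIC-supported format)
ID3D11ShaderResourceView* LoadTexture(ID3D11Device* device, const wchar_t* path)
{
  ID3D11ShaderResourceView* srv = NULL;
  HRESULT hr = DirectX::CreateWICTextureFromFile(device, path, NULL, &srv);
  return SUCCEEDED(hr) ? srv : NULL;
}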

Backing up for a moment to before the VS2012 upgrade: in my previous journal entry, "D3D11 Port of Particle System Rendering", I mentioned that I wanted to look into other ways of creating the input layout description for a vertex shader. The first thing I looked into was the shader reflection API, so that the input layout would automatically be determined by the vertex shader code. This was obviously before I knew the API would not be available to Windows Store applications or decided to follow those requirements. Aside from that, I ran into a different issue that prevented me from using it here. The vertex shader code knows what its inputs are, but it doesn't know which are per-vertex and which are per-instance. From looking at D3D11_INPUT_ELEMENT_DESC, knowing per-vertex or per-instance is important for setting the InputSlotClass and InstanceDataStepRate fields correctly. Since reflection wasn't an option here, I created a class to manage the input layout, not surprisingly named InputLayout. It does avoid the simple mistake I had made in the previous journal entry of incorrectly computing the aligned byte offsets, as well as avoiding spelling mistakes in semantic names and attempts to set more input layout entries than were allocated. In its current form, it does force non-instance vertex buffers into slot 0 and instance data into slot 1, precluding any additional vertex buffers, which I might revisit later once I hit a scenario that requires more.
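To make the per-vertex/per-instance distinction concrete, here is a hedged sketch of the kind of D3D11_INPUT_ELEMENT_DESC array this boils down to, with per-vertex data in slot 0 and per-instance data in slot 1. The semantics, formats, and variable names (device, vs_bytecode) are made up for illustration and are not the InputLayout class's actual output.

// Per-vertex position and texture coordinate in slot 0, per-instance world
// position in slot 1. InputSlotClass and InstanceDataStepRate are exactly the
// pieces of information the reflection API cannot provide, since only the
// application knows which inputs advance per instance.
D3D11_INPUT_ELEMENT_DESC layout[] =
{
  { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D11_INPUT_PER_VERTEX_DATA,   0 },
  { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D11_INPUT_PER_VERTEX_DATA,   0 },
  { "TEXCOORD", 1, DXGI_FORMAT_R32G32B32_FLOAT, 1,  0, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
};
// D3D11_APPEND_ALIGNED_ELEMENT can be used for AlignedByteOffset to avoid
// hand-computing offsets within a slot.
ID3D11InputLayout* input_layout = NULL;
device->CreateInputLayout(layout, sizeof(layout) / sizeof(layout[0]),
  vs_bytecode.data(), vs_bytecode.size(), &input_layout);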

For configuring the D3D11 rendering pipeline, my approach has been to get everything into a known state. Taking the example of setting a constant buffer on the vertex shader stage: if a particular constant buffer is not set, then whatever the previous value was will still be present the next time the shader is invoked. To avoid these stale states, when a VertexShader instance's MakeActive function is called, it sets all of the constant buffers to the last values the VertexShader instance received for them (the actual function for setting a constant buffer is in ShaderBase). Taking this known-state design a step further is where the RenderPipeline class comes in (once I get the parts done and integrated into it, anyway). The plan is that when an instance's MakeActive function is called, the vertex shader, pixel shader, their constant buffers, etc. all become active on the provided ID3D11DeviceContext. An obvious improvement over setting all of the pipeline info each time would be to only set the parts that don't match the last values for the ID3D11DeviceContext instance. However, I've been trying to get things working before I start looking into performance improvements.
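As a rough illustration of the known-state idea (this is my own sketch, not the framework's actual VertexShader/ShaderBase code): the class caches the last buffer set for each slot and re-applies all of them in MakeActive.

// Sketch: re-apply the shader and every cached constant buffer so the vertex
// shader stage is in a fully known state, regardless of what was bound before.
class VertexShader
{
  public:
    VertexShader(ID3D11VertexShader* shader)
      : m_shader(shader)
    {
      for (UINT i = 0; i < NUM_CBUFFERS; ++i)
      {
        m_cbuffers[i] = NULL;
      }
    }

    void SetConstantBuffer(UINT slot, ID3D11Buffer* buffer)
    {
      m_cbuffers[slot] = buffer;
    }

    void MakeActive(ID3D11DeviceContext* context)
    {
      context->VSSetShader(m_shader, NULL, 0);
      context->VSSetConstantBuffers(0, NUM_CBUFFERS, m_cbuffers);
    }

  private:
    static const UINT NUM_CBUFFERS = 4; // illustrative size

    ID3D11VertexShader* m_shader;
    ID3D11Buffer* m_cbuffers[NUM_CBUFFERS];
};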

Below is a screenshot of the test program, which uses instanced rendering for 2 cubes and 4 viewports to display them from different angles.
[Screenshot: instanced cubes in four viewports]



Attached is d3d11_framework.zip (126.77 KB), a zip file that has the source for the framework (MIT license, like normal for me). It does not include DirectXTK, which can be found here. To add it, put the DirectXTK include files in the framework's external_tools/DirectXTK/public_inc/ directory, and once DirectXTK is built, put the DirectXTK.lib file in external_tools/DirectXTK/lib/Debug/ or external_tools/DirectXTK/lib/Release/ depending on which configuration you built.


Public header generation

Posted 06 January 2013

In the course of expanding and fixing the D3D11 framework from my previous journal entry, I've found that there are types and functions that I want to be public within the library but not exposed outside of it (like internal scope in C#). Instead of purely using the pimpl idiom (for those unfamiliar with the term or how to apply it, these should help explain it: http://en.wikipedia.org/wiki/Opaque_pointer and http://www.gamedev.net/page/resources/_/technical/general-programming/the-c-pimpl-r1794), I decided on public and private headers, for which pimpl can be used to hide types. To automatically generate the public headers, I created a utility program that uses tokens in the private headers to generate the public ones (to be clear, the utility doesn't automatically do pimpl, that's up to the developer; the utility just does simple line exclusion). I did a few quick searches and didn't see a utility that already did this for C++, but it was simple enough to write my own.

There were a few choices for how to specify the start/stop tokens: comments, ifdef, or ifndef. So that IDE outlining can show the affected sections, and so the build of the library doesn't need an additional define (this is pure laziness since adding a define to the project settings is trivial), I went with lines starting with "#ifndef" and "#endif" which also contain "PUBLIC_HEADER" to mark the start/end of sections to exclude from the public header. The lines need to start with the "#ifndef" or "#endif" tokens so that, if needed, a line can be commented out rather than deleted and its position remembered for if/when it needs to be added back in. I've tested it successfully so far for hiding types (via pimpl's declare-and-only-use-pointers approach) and functions. An example usage (a sketch of the utility's core follows the example):
#ifndef FOO_H
#define FOO_H

class Foo
{
  public:
    // stuff...
    
#ifndef PUBLIC_HEADER
    // this function is excluded from the public header
    void blah();
#endif /* PUBLIC_HEADER */
    
    // stuff...
  private:
    // stuff...
};

#endif /* FOO_H */
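For what it's worth, the core of such a utility is just a line filter. Below is a minimal sketch of that filtering, under the assumption that PUBLIC_HEADER blocks are not nested; it is not the actual utility.

#include <fstream>
#include <string>

// Copy a private header to a public header, dropping everything between an
// "#ifndef ... PUBLIC_HEADER" line and its matching "#endif ... PUBLIC_HEADER"
// line (the marker lines themselves are dropped as well).
void GeneratePublicHeader(const char* src_path, const char* dst_path)
{
  std::ifstream src(src_path);
  std::ofstream dst(dst_path);
  std::string line;
  bool excluding = false;
  while (std::getline(src, line))
  {
    bool marker = line.find("PUBLIC_HEADER") != std::string::npos;
    if (!excluding && marker && line.compare(0, 7, "#ifndef") == 0)
    {
      excluding = true;
    }
    else if (excluding && marker && line.compare(0, 6, "#endif") == 0)
    {
      excluding = false;
    }
    else if (!excluding)
    {
      dst << line << '\n';
    }
  }
}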



For integrating it into the library build, I added a post-build event to make the public include directory and call the utility. That has the side effect that the test program in the solution can't check its code against the library headers until the library build is complete, and if an error is found during the test program build, clicking the error message opens the public header instead of the private one in the solution. I consider this okay since it is not the normal use case. The normal use case is a separate solution in which a released build of the library is added to "Additional Dependencies" and the path to the public headers is added to the "Additional Include Directories" project setting.

An alternative to generating public headers is to use the same headers for public and private, and switch on an ifdef when building the library. I didn't go this way primarily due to one issue: if a program build sets that define, all of the library's private details become available to the program.

Source (under MIT license as normal): public_header_generator.zip (8.62 KB)


D3D11 Port of Particle System Rendering - with source

Posted 29 December 2012

The D3D11 port of the rendering process for my particle system editor is complete (the VC++ 2010 project and source under the MIT license can be found at the end of this entry; you will need to edit the include and additional dependency paths to build). The hardest part of the project has been finding time for it, since my job has been demanding evenings and weekends recently to get a release out. But I've been on vacation this past week and have spent part of it finishing this port, which looking back took about 12 weeks of calendar time and vastly less actual time (at least half of which was this week).

I started this port after reading "Practical Rendering & Computation with Direct3D11" and thought it would be a good first project for applying what I had learned from that book. From reading it, and from using XNA on previous projects, I knew I wanted to create an interface layer to hide most of the details of D3D11 from the application, which is where the d3d11_framework project (also included below) came from. It is a partial recreation of the XNA framework in C++, though it is comparatively in very rough shape. It also takes into account the differences between D3D9/XNA and D3D11, such as using xnamath.h and passing an ID3D11DeviceContext* to the rendering functions so that multithreaded rendering is possible, though the creation of deferred contexts is currently not implemented. The current form is good enough to get through a sample project (a port of tutorial 7 from the D3D11 samples included in the June 2010 SDK, to which I added instanced rendering) and the particle system rendering. My plan for my next project is some cleanup work and expansion of the framework, including pushing parts of the particle system port into the framework. I already have a laundry list of tasks that is sure to grow. I had thought about a compute shader version of the particle system, but I'm going to hold off on that at least for now.

In the course of working on this project, I'm pretty sure I made every newbie mistake:
  • Using XMVECTOR and XMMATRIX in dynamically allocated types, which led to access violations when changing them since they may not be properly aligned. So now I use them only during calculations and store the results in XMFLOAT3/XMFLOAT4 or XMFLOAT4X4 (see the sketch after this list).
  • Missing that the order of the arguments to XMMatrixRotationRollPitchYaw(pitch, yaw, roll) is different than XNA (yaw, pitch, roll) and different than its function name (roll, pitch, yaw)
  • Constant buffers bound to the vertex shader stage are not automatically bound to the pixel shader stage (not sure if this is just XNA that does this or if it's D3D9)
  • For the layout description, having both aligned byte offsets for the instance entries set to 0 (one of the things on my to-do list for the framework is to look into making creating the layout description less error prone)
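A minimal sketch of that first fix (the struct and function names are just for illustration; with directxmath.h these types and functions live in the DirectX namespace): do the math in XMMATRIX locals, which get proper alignment, and store the result in an unaligned XMFLOAT4X4 member.

// Heap-allocated type: store an unaligned XMFLOAT4X4, never an XMMATRIX.
struct Entity
{
  XMFLOAT4X4 world;
};

void UpdateWorld(Entity* entity, float pitch, float yaw, float roll,
  float x, float y, float z)
{
  // XMMATRIX locals are fine since the compiler aligns them on the stack
  XMMATRIX world = XMMatrixMultiply(
    XMMatrixRotationRollPitchYaw(pitch, yaw, roll),
    XMMatrixTranslation(x, y, z));
  XMStoreFloat4x4(&entity->world, world);
}

// when the matrix is needed again (e.g. to fill a constant buffer):
//   XMMATRIX world = XMLoadFloat4x4(&entity->world);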
In addition to that, creating the C++ port made me wonder why I made the C# editor and XNA renderer multithreaded. The C++ version is completely single-threaded, since there really is no need for multithreading. The best answer I can come up with for why the C# versions are multithreaded is that C# makes it easy.

d3d11_framework: d3d11_framework.zip (137.35 KB)
particle system renderer: ParticleSystemEditor_D3D11.zip (66.41 KB)


Particle System Editor - Edit: Now with source

Posted 30 September 2012

[heading]Reason for the Project[/heading]
In the tank game, the particle system was probably one of the weaker parts of the game on the technical side. In addition, it was a tedious process to adjust it: run the game, get to a point where the particle system will be used, kick it off, exit, tweak it, and repeat. Which is why my project after that was this particle system editor.

[heading]Two Windows[/heading]
My plan was to create an editor UI using winforms and the rendering process in XNA. Since I want to start playing around with Direct3D 11 soon, including creating a renderer for this in it, I did not have the rendering window embedded in the editor window. Instead the winforms editor uses a TCP socket to connect to the rendering window. All the configuration settings and changes to them are sent over the socket. This way once I get to creating the D3D11 renderer, the winforms editor can be used without modification.
[Screenshots of the editor and rendering windows]

When I started this, I expected the winforms and XNA side of things to take about a week. Instead it ended up taking just over two weeks of development time (calendar time is a different story). A couple days of the extra development time went to trying out instanced rendering for the first time. I also moved the billboarding to the vertex shader (the tank game did it on the CPU) using the method described in section 1.2 of GPU Gems 2. Though instead of moving the vertices to the world space position in the vertex buffer, I just passed the world space position as part of the instance data. This allowed the vertex buffer containing the positions of the vertices to remain 3 or 4 vertices long (depending on particle shape) and unmodified.

[heading]Re-use[/heading]
So that the particle systems created are easy to use in future projects, the editor can save/load files using an XML format. I also wrote it to be easily expandable in case a new configuration parameter or feature is needed. In case I end up going really wild with it, I have been writing it thus far so that plug-ins could eventually be supported for existing categories of configuration parameters (i.e. adding a new dispersion type or emitter shape).
The particle system code itself is mostly re-usable but could use a variety of tweaks for a game implementation. The implementation in the renderer allows for the configuration to change at any time. In a game, that additional complexity is likely not needed. Also, many of the conditionals could be completely removed in a version for a game. And of course there would need to be a minor change to get rid of the looping that is in the current implementation (emitter runs for a configurable time period and after all the particles die, it waits a second before starting up again).

[heading]Next[/heading]
While I could expand this to include attaching particle systems to models, then defining triggers for when the particle systems should start/stop, I'm not going to do that (yet). Next up for me is to finish reading "Practical Rendering & Computation with Direct3D11". After that and maybe a few basic practice projects in D3D11 will be the D3D11 renderer for the particle system editor, which I'll mainly be doing for a combination of practice and code that I'll very likely need on a future game.

[heading]Download[/heading]
Since particle systems look better in action, if you want to try it, you can grab the editor (particle_system_editor.zip, 227.67 KB) and the renderer (xna_renderer.zip, 238.32 KB). Please note that these are just a zipping up of the directories I had Visual C# publish to. I haven't tried to use this feature before, and the only testing I've done on it was on one of my computers here, where it required installing the XNA Framework Redistributable 4.0 first. If I were going to release it as a real tool instead of "if you feel like it, feel free to give it a whirl", then there are other features I would like to include, such as:
  • Installer that includes both the editor and renderer
  • Being able to set more things to change over time, such as individual particle size, alpha (overriding the coloring setting in the current coloring modes)
  • Adding alpha to the color sets and sequences (currently alpha is used only in textures)
  • Lighting and materials
  • Models as particles (not sure if this is normally done, but could be cool for particle systems with a low number of active particles)
  • Add color selection to the dialog for adding/editing color set/sequence entries instead of using a button to launch a color selection dialog
  • Custom control for selecting colors that is faster to use
  • More intuitive UI for doing color sequences when interpolation is turned on
  • Make renderer part of editor window by default, while keeping client/server code as an option
  • Handle resizing the editor window/dialogs
  • Help menu
  • Ability to specify a custom port/IP/host name for the client/server portion
  • Renderer able to render multiple particle systems at once
  • Editor able to edit multiple particle systems (i.e. each particle system is its own tab in the editor UI)
  • Make color set/sequence % readable on dark colors
  • Add ability to unselect a color in the color set/sequence list boxes
[heading]Source[/heading]
I had been thinking about releasing the source for this project, and since the comments after the original posting said it would be useful, here it is: ParticleSystemEditor.zip (115.92 KB). That is a zip file of the Visual C# solution, project files, and source. I'm going with the MIT license for this (LICENSE file included in the same directory as the solution files), so have fun with the code.


XNA Game Complete

Posted 29 December 2011

[heading]Summary[/heading]
The primary goals of my tank game project were to learn about the XNA framework and to create a feature-complete game. I feel that I have accomplished both. I used the term "feature-complete game" instead of "complete game" because all the code is there and working, but content is lacking. There is 1 model for an enemy turret, 1 model for both the player and enemy tanks, the textures for all models are their UV maps, and there is only 1 level currently (level changes were tested by having the same level be the next level). As for what's in:
  • Support for multiple players in split screen
  • Main Menu for starting a campaign, selecting a level, or changing options
  • Basic particle systems for weapon impacts and object destruction
  • Location based triggers
  • Checkpoint respawn system, complete with fallback if there are too many deaths at the current checkpoint too quickly
  • State machine-based AI where the state machine is specified for each enemy in the level instead of enemy type
  • Navmesh and A* for pathfinding and smoothing
  • Ability for subsequent levels. Due to detecting when to switch to the next level as a trigger, the campaign actually doesn't have to be linear since each next level trigger can go to a different level and there can be multiple triggers in a given level.
Some key things that I've learned from this are:
  • XNA is great at creating quick test programs
  • XNA's Ray struct needs to have its Direction field be unit length or intersection methods don't give correct results
  • Blender 2.56 is significantly easier to use than the last version I tried
  • When creating a model, be careful with the object matrices if you plan to do anything beyond just drawing the model. Having an object matrix not be the identity matrix means the vertices aren't actually being updated in the modeling software and can produce unexpected results in a custom model processor
  • Using "SharedResource = true" means that what is supposed to be the same instance will in fact be different instances at run-time
  • Pathsmoothing is easy on a navmesh when all regions in a navmesh share a complete edge
Here are some screenshots of the game:
[Screenshot: The main menu (as with the rest of the game, nothing fancy)]

[Screenshot: Level start (player's tank in foreground, enemy tank a bit further back and to the side, wall on the left)]

[Screenshot: Weapon particle system]

[Screenshot: Particle system for killing an enemy tank]

[heading]What's next[/heading]
There are a few things I have planned for my next few projects. I want to spend some time playing with different graphics algorithms, as well as create a more complex and better-looking particle system. While I'm working on those, I'm also going to be reading a book on D3D11. Once all that is done, I plan to start working on my next game. I already have a few ideas to flesh out and will probably start the design doc before I actually finish any of the other projects :-)


Content Pipeline Navmeshes and SharedResource

Posted 21 November 2011

While working on a basic state machine AI for my tank game, I found that my A* pathfinding on a navmesh wasn't working. I had already verified that the A* implementation worked in a tester program, which created the navmesh at run-time. In the game, however, the content pipeline is used for loading the navmesh from an XML file. The A* implementation was using Object.ReferenceEquals to determine whether the destination region was reached, and was relying on each region's neighbor list referencing the same instances of regions. As it turns out, because each region's neighbor list has the [ContentSerializer(SharedResource = true)] attribute on its property, neighbors were not the same instances throughout the navmesh, so the destination detection did not work and neither did the closed list.

My navmesh is specified in an XML file enumerating the regions and their connecting edges. For the sake of this particular game, the regions of the navmesh are axis aligned boxes where connected regions must share a full edge. Because of the axis aligned boxes part, my class for storing the regions is named "Rect". I originally tried to declare its neighbor list property as:
[ContentSerializer]
public Rect[] Neighbors { get; private set; }
That produced a compiler error for a cyclic reference. This is because when region 0 is a neighbor of region 1, region 1 is in region 0's neighbor list and vice versa. From a quick search, the normal way to deal with this is to set the "SharedResource = true" parameter for the "ContentSerializer" attribute:
[ContentSerializer (SharedResource = true)]
public Rect[] Neighbors { get; private set; }
What I hadn't realized until debugging my A* implementation was that setting "SharedResource = true" made it so the region 1 instance in region 0's neighbor list was not the same instance as in the list of all regions in the navmesh. The particular test I used to determine this was:
if (Object.ReferenceEquals(m_regions[0],m_regions[1].Neighbors[0]))
{
  int foo = 3;
}
if (Object.ReferenceEquals(m_regions[0],m_regions[1].Neighbors[1]))
{
  int foo = 3;
}
With breakpoints set at both of the "int foo = 3;" lines, neither breakpoint was hit. It should also be noted that in my navmesh for this level each region is connected to only two other regions, which is why only the two if statements above were needed for this test. After debugging this, I found http://social.msdn.m...4-652207885ad8/ which (while old) confirms that setting SharedResource to true does result in different instances.

My solution for dealing with this was to store an ID number for each region and have the neighbor list that the XML importer populates be a list of those IDs. At run-time, after the navmesh is loaded, each region has a CompleteLoad function called on it with the list of all regions. This function takes care of converting the IDs to the actual instances, which allows my A* implementation to function properly.


Mesh's Matrix

Posted 22 October 2011

[heading]Model Processing[/heading]
While working on a model processor to determine the bounds of a model, I noticed something odd. The min/max were not what I expected and the total width did not match the model. The code is simple and taken from one of my tester programs where it worked correctly, so I was wondering why this model was an issue.
private void GetAllVertices(NodeContent input, LinkedList<Vector3> verts,
  ref BoundingBox box)
{
  MeshContent mesh = input as MeshContent;
  if (mesh != null)
  {
    // load the positions from the mesh node
    foreach (Vector3 pos in mesh.Positions)
    {
      // store the vertex
      verts.AddLast(pos);

      // update the bounding box
      if (pos.X < box.Min.X)
      {
        box.Min.X = pos.X;
      }
      if (pos.X > box.Max.X)
      {
        box.Max.X = pos.X;
      }
      if (pos.Y < box.Min.Y)
      {
        box.Min.Y = pos.Y;
      }
      if (pos.Y > box.Max.Y)
      {
        box.Max.Y = pos.Y;
      }
      if (pos.Z < box.Min.Z)
      {
        box.Min.Z = pos.Z;
      }
      if (pos.Z > box.Max.Z)
      {
        box.Max.Z = pos.Z;
      }
    }
  }
  else
  {
    // recurse on the children of a non-mesh node
    foreach (NodeContent child in input.Children)
    {
      GetAllVertices(child, verts, ref box);
    }
  }
}
This function is called from the model processor's Process method with box initialized so that its min values are float.MaxValue and its max values are float.MinValue. The part of storing the vertices in a linked list was for other functionality in this particular model processor (building a heightmap from the mesh).

Since this code works elsewhere, I inspected the fbx file. The fbx files I use are generated from the fbx exporter that comes with Blender 2.56.5, which generates text based fbx files; Autodesk's site says fbx files can also be binary. I noticed that the matrix in the PoseNode for the model was not the identity matrix, whereas in the models that work it is. Later exploration of the fbx files also showed that the Model node for the mesh has a "Lcl Rotation" property whose rotation in degrees matches the matrix.

The model showing the issue is the result of manipulating a Blender grid object. These objects are in the xy plane when created. Since I was creating terrain for a game where terrain is in the xz plane and y is the height, I rotated the object by -90 degrees around the x-axis, where the minus sign is to get the normal facing the right direction of +y. This is what created the issue. Setting rotation or scale in object mode does not edit the vertices of the mesh, instead it manipulates the object's world matrix. Since the model I was dealing with was a single mesh, the solution was simple, set the object rotation to 0 and in edit mode rotate by -90 around the x-axis. By applying the rotation in edit mode, the vertices' coordinates were actually changed by the rotation. To help avoid this problem with future models, I've updated the "Modeling Requirements" section of my design doc to specify that objects must have a rotation of 0 around each axis and a scale of 1.

[heading]How This Matters to Drawing[/heading]
Going through the process in the previous section helped me understand the details of a different section of code. When drawing a simple model in XNA, the code is typically along the lines of:
Matrix[] transforms = new Matrix[model.Bones.Count];
model.CopyAbsoluteBoneTransformsTo(transforms);
foreach (ModelMesh mesh in model.Meshes)
{
  foreach (BasicEffect effect in mesh.Effects)
  {
    effect.World = transforms[mesh.ParentBone.Index] * World;
    effect.View = cam.View;
    effect.Projection = cam.Projection;
  }

  mesh.Draw();
}
With World being a Matrix defined by the game. Why use "transforms[mesh.ParentBone.Index] * World" instead of just World? Neither MSDN nor "Learning XNA 4.0" by Aaron Reed goes into detail about why it is necessary. MSDN keeps referring to bones in its brief explanation, but doesn't mention that even models without a skeleton still have 1 bone in XNA (discussed in the next paragraph). "Learning XNA 4.0" says it's to handle models that are comprised of multiple meshes. However, this code also matters for single-mesh models that are not attached to a skeleton, such as my terrain model from the previous section.

All meshes in a model have a world space matrix attached to them. This matrix allows rotation and scaling of the mesh to be applied to the vertices that it is comprised of. XNA loads this matrix as if it were a bone and sets it as the parent bone of the mesh. If you skip applying this matrix, then your rendering of the model may not match what is shown in the software used to create the model.


My Search for Pitch and Roll

Posted 19 October 2011

[heading]Intro[/heading]
Don't let the title deceive you, this is not a simple rotation case. This is the subject that I've spent more time on than any other in my tank game so far, even more than the rest of the game combined. If I had found a solution, this dev journal entry would probably be named "Orientation of Objects Substantially Larger Than Heightmap Resolution". After a week[1] of working on it, I thought I would end up writing a tutorial on this, due to being unable to find reference material for this particular problem (aside from one page that I have been unable to find again and that isn't in the browser history when I go looking for it). After a month[1] plus of working on this, I decided it was time to move on as my motivation was getting sapped. I may come back to this problem later, but for now I want to get the rest of the game working.

This problem is determining the pitch and roll of the tanks based on the terrain they are over. Positioning an object on a heightmap using a single point is a fairly simple task and I could find an abundance of reference material on it. When using a single point, the orientation of the object can be determined by looking at the normal of the heightmap cell it is in or the face it is on. However, my tanks are substantially larger than the resolution of my heightmap. The tanks roughly have a footprint of 9.5 meters by 3.5 meters, and the heightmap is a regular 2d grid in xz with indices being 1 meter apart. I could increase distance between points on the heightmap, but thought that there must be a better solution.

While looking for the better solution, I spun off test programs so that I could debug different techniques and their parts separately from the whole game and not have to revert the changes in order to get the game back. This showed one of the strengths of XNA, how quickly a new program can get up and running so you can concentrate on what you are trying to do. Window creation, gathering input, model loading, basic rendering are all there once the project is created. The only thing missing is a basic camera class or struct, which of course I brought over from the game. While I'm not sold on XNA for large projects, for test programs the quick setup is very nice.

[heading]First Test Program[/heading]
The first test program was to debug part of an algorithm that didn't pan out. My first attempt was to gather all the entries in the heightmap that are within the bounding box of the object/tank, and from there try to determine the pitch and roll. For this and the next attempt, the real trick is going from heightmap values to pitch and roll, which I will discuss in the next section. This way of gathering heightmap values has a flaw in that it could cause the object to clip through an incline if the bounds didn't end on an integer value (due to the heightmap having a resolution of 1 meter and its alignment).

The part of the first attempt that the first test program dealt with is determining which entries in the heightmap are within the object's bounding box. There are 4 pieces of information needed for this search: the bounding box, the object's position, the object's rotation (just yaw, since determining pitch and roll is the ultimate goal), and the heightmap. The axis-aligned bounding box, which XNA's BoundingBox struct represents, is determined by the model. After doing a model processor for skeletal animation, writing one to determine the bounding box was trivial. The object's position and rotation are variables that can be adjusted by the user and are passed to the function that does the search. The heightmap is the class I put the search function in, and for the sake of this test it is just an 11 by 11 grid with all heights set to 0. The search has an additional twist in that the design doc specifies the origin of the model to be the center of mass[2], which is not necessarily the geometric center. This is easy to deal with, but worth mentioning. As with ray-sphere intersection on scaled, rotated, and translated spheres discussed in my previous dev journal entry, I created a matrix, but this one converts the entries of the heightmap, which are in world space, into the object space of the bounds. This matrix is:
Matrix.CreateTranslation(-box.Center()) * rot * Matrix.CreateTranslation(pos)
Where box.Center() is an extension method to get the center of the BoundingBox, rot is the object's rotation matrix for yaw, and pos is the position of the object in world space. As with the ray conversion, I multiplied each heightmap entry by the inverse of the matrix. The results of the conversions are run against BoundingBox's Contains method, and so long as it does not return ContainmentType.Disjoint, the position is in bounds.

In order to verify this part of the algorithm worked and to debug earlier iterations of the code, I drew a series of squares to represent the heightmap locations, colored white if not a match and green if they are within the bounds. The squares were centered at the heightmap positions and scaled down slightly so as not to overlap. An outline box was also drawn as a reference for where the bounds are. The model for the outline box was also the model used for creating the bounding box in this test program, rather than the tank model from the game. A screenshot is below (note: white squares that partially overlap the outline box are outside the bounds because it is the center of the square that is the entry in the heightmap, and the center is outside the outline box).
[Screenshot: heightmap entries within the bounding box]

[heading]Second Test Program[/heading]
The purpose of the second test program was to convert the heightmap values into pitch and roll, the hard part of this problem. A variant of the outline box model was used for this program, with the change being adding some height to the model. Since I didn't bother making the outline box anything more than a series of faces in either tester, in this tester I set the cull mode to none so that both front and back faces would be drawn. In addition to that model, several terrain test patches were created to cover all the basic scenarios and some more complicated ones. So that the object's position was fixed for the tests against the different terrain, instead of moving the object, I switched the terrain and ran the algorithm on the switch. Doing it this way was nice so that breakpoints were not being hit on every update cycle, just when there was an actual change.
[Screenshot of the second test program]
From taking the final code from the first test program and integrating it into the second, I quickly realized that this was not going to work. It's been quite a while since this part, so I don't remember all the problems. But from what I remember, the two main problems were:
1. The y value for the object was an unknown, and wouldn't be known until the pitch and roll were known.
2. By taking all of the heightmap entries within the bounds, I could compute the pitch and roll between any number of them, and how many there were would depend on the footprint of the object.
To handle problem 2, I tried to split the heightmap entries into quadrants within the bounds, trying to keep track of the entry that would cause the greatest pitch and/or roll. This is what made problem 1 an issue since to calculate the pitch and roll of a point relative to the center, both points needed to be known and the center was not yet known.

Those problems, and that reference page I couldn't re-find, caused me to switch to a completely different method. I would use control points[3] at defined positions on the bounding box of the object and deal with pitch and roll between the control points. In order to avoid clipping through the crest of a hill, I went with 8 control points: 4 at the corners and the other 4 along the perimeter, aligned with the center of mass. The numbers in the image below are the indices I used into the array that stored the points.
[Diagram: control point indices on the bounding box]
For each control point, the heightmap would be queried for the height at that point. After that, the points could be combined to compute various pitches and rolls. Since I had decided in the design doc that models should be facing the +x direction, pitch is rotation around the z axis and roll around the x axis. Based on XNA's description of the arguments in Matrix.CreateFromYawPitchRoll, it seems more typical to have the facing direction be along +z. In my next game, I will change the facing direction, but will keep my current one for this game and test programs. In the image below, the numbers are the indices in the arrays of floats for keeping track of the individual pitches and rolls between the control points. In case it's not clear, the 4 and 5 indices are between the corner control points and the rest are between a particular corner and a center of mass aligned control point.
[Diagram: indices of the pitches and rolls between the control points]
After computing the various pitches and rolls between the control points, the remaining questions were:
1. Which of those values should be used for pitch and roll?
2. What is the y value for the object's position?
I'll answer the second question first, as its answer was consistent across every method I tried for answering the first question. For each control point, I took the xz vector from subtracting the highest control point from the current one, applied the pitch and roll from the answer to the first question to that vector, then added the resulting vector to the highest control point. Since code is probably clearer (note: going through only the first 4 indices is because those are the corner control points and the only ones needed for the next part):
for (int i = 0; i < 4; ++i)
{
  if (i != max_index)
  {
    Vector3 xz = new Vector3(
      control_points_object[i].X - control_points_object[max_index].X,
      0,
      control_points_object[i].Z - control_points_object[max_index].Z);
    control_points_object[i] =
      Vector3.Transform(xz, pitch_roll) + control_points_object[max_index];
  }
}
After that, performing bilinear interpolation on the corner control points would result in the y value for the object's position.
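For completeness, that bilinear interpolation is the standard formula. Using my own notation (not code from the game), with the corner heights written as y00, y10, y01, and y11, and (u, v) the object center's fractional position within the rectangle formed by the corner control points:

y = (1 - u) * (1 - v) * y00 + u * (1 - v) * y10 + (1 - u) * v * y01 + u * v * y11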

The answer to the first question is where I got stuck. After trying a variety of methods, I asked on the forums in case someone else had solved this problem before. Apparently this isn't a common problem, as there were no responses, which had me hoping I could find a solution and post a tutorial on it in a dev journal. The methods I had tried before posting the question were taking the min, max, min magnitude, and max magnitude per side and combining the sides with the same variety of functions. Resorting to these combinations came well after thinking about the problem and trying/debugging the one that seemed reasonable. Some would give reasonable results in the default orientation, but break when I tested with other yaws (my test cases were yaws, in radians, of 0, pi/4, pi/2, 3*pi/4, pi, and -pi/4). I then tried to do a series of if/else cases for equalities and inequalities, which again worked in the default orientation and broke in several of the other terrain/yaw test cases.

Also, in the course of these initial attempts, it became clear that when both pitch and roll were non-zero, each needed to be scaled back. Rotating by both at their full values caused over-rotation and the object clipped through the terrain. The scaling factor I used after realizing its necessity was:
public float Scale
{
  get
  {
	return Math.Abs(Pitch) / (Math.Abs(Pitch) + Math.Abs(Roll) + .000001f);
  }
}
With the "+ .000001f" simply there to handle the case where pitch and roll are both 0. For applying the scaling factor, it is simply Scale * Pitch and (1 - Scale) * Roll.

The next time I worked on this after posting the question on the forums, I tried a brute force approach. Go through all combinations of pitches and rolls between the control points, assign a score to each combination, and pick the one with the best score. So that I could debug the scoring method, I made the test program keep all the combinations and flip to the next best combination with a button press. My goal with this brute force method was to find the right answer in all the test cases and analyze the results to find out what is the right combination method.

After debugging the scoring mechanism and code, I went with a two part scoring mechanism. The first part is that combinations whose control points all remain at or above their heightmap values are better than those that have any that dip below. After that, it is which combination has the smallest total increase for each control point's y value over their heightmap values. What I was shooting for with this scoring mechanism is finding the result that matched the terrain the closest without going below it.

From analyzing the results of that scoring mechanism, the min/max approaches didn't fit the results at all. So I adjusted my if/else method of computing the final result to be very close to the brute force method. It couldn't be an exact match because the analysis showed a contradiction: one test case with a particular inequality result required a particular index to be used, while a different test case with the same inequality result needed a different index. After adjusting the if/else method, I re-enabled that code, disabled the scoring mechanism, and ran through the test cases. While most of the results looked good, the contradiction case looked terrible.

[heading]Game Integration[/heading]
Since the scoring mechanism produced good results and at this point I wanted to get on with the game and not stay on this problem indefinitely (I might come back to it eventually), I tried plugging the scoring method into the game. Given what the game currently does and what I expect it to do, it should be able to handle the added performance cost of running through the combinations without a noticeable degradation to performance. Performance wasn't the problem. While the scoring results looked good in a static scene, when applied to a moving object the difference frame-to-frame was too severe. The camera is at a fixed following distance from the tank, so the tank's movement affects the camera which made the jitter very apparent. The jitter was due to only looking at the tank's current position on the terrain, so the best fit pitch and roll on that frame had no relation to the pitch and roll on the previous.

At this point, I've decided it's time to move on with my game as there are plenty of other aspects that need to be implemented. At the time of working on this problem, there wasn't a HUD, object-object collision detection, or any AI yet. The first two of those have since been added, and the third is the next topic to start looking at[4]. While I'm not sure if there is a way to combine the various pitches and rolls between the control points to get a satisfying result, I think this problem might be solvable by using physics to tell the tank when the ground level has increased or when the center of mass has crested a hill or cliff and can start tilting down. But physics simulation is a subject well beyond the scope of this particular game. So for now, my game is just going with level terrain. Since the goals of the game are to:
1. Get experience with XNA
2. Create a full game
3. Be a small game that can be developed quickly
I'm willing to sacrifice the detail of cresting hills and crossing short trenches, since that detail, combined with how much is left to do, means the "quickly" part of goal #3 has already passed.

[heading]Foot Notes[/heading]
[1] Just to be clear, these are calendar times and this has been spare-time work, definitely nowhere close to 40+ hour work weeks.
[2] The reason for models having their origin be the center of mass instead of their geometric center is that when I was doing the initial planning, I was thinking of integrating some physics into it. I have since decided that physics is outside the scope of this game due to the amount of complexity it brings, and this was supposed to be a quick-to-develop game.
[3] "Control points" might not be the conventional term, which could be why I can't re-find that reference page.
[4] I read "Programming Game AI by Example" by Mat Buckland a couple months ago and look forward to applying some of that.


Shooting and Hit Detection

Posted 16 October 2011

Since my plan for the tank game was for it to be a quick game to develop[1], I decided to just do hit-scan weapons instead of actually generating a projectile object and tracking it. Naturally, I went with a 2-tiered approach of a fast-rejection phase followed by a more precise test.

Due to my game objects being able to rotate, I went with bounding spheres for the fast-rejection phase. I could have dealt with adjusting an AABB (axis-aligned bounding box[2]) for rotation or used an OBB (oriented bounding box), but decided to go with bounding spheres for simplicity. XNA already provides a BoundingSphere struct that can be tested for intersection against a ray, so it was the natural choice to use that.

For the more precise phase, I could have gone against a collision mesh and gotten very accurate results from that. But for the sake of playing around a bit, I decided to take a different approach, which I think turned out to be a good exploration of XNA's Ray and BoundingSphere structs. I used an XML file to specify spheres that closely approximate the mesh. In order to allow these spheres to approximate the mesh, they can be scaled and translated along each axis as well as rotated about an arbitrary axis. Each sphere is also attached to a bone so that animation is taken into account.

[Image: Turret and tank models]

[Image: Turret and tank collision spheres]

Manipulating spheres this way is just applying what I learned from my college computer graphics course's ray tracer. If you have never written a ray tracer, I highly recommend the exercise. In that program, there were different primitives that we needed to support (spheres, triangles, cubes, cylinders, etc.). The primitives were all unit size and centered at the origin. In order to get a non-unit sphere or triangle, a scale operation had to be applied, which adjusted the primitive instance's world matrix. When it came to rendering, the ray was just multiplied by the inverse of the world matrix. It is worth noting that to use just the inverse, both the ray's position and direction were handled as Vector4s with the w component being 1 and 0 respectively.

In the tank game, for checking the collision spheres against the ray, I tried adjusting an instance of XNA's Ray struct in the same way and once again using XNA's BoundingSphere struct for the intersection test. This did not work. From looking at the documentation of BoundingSphere.Intersects, I didn't see a reason why it wouldn't. So the next step in my debugging was extracting the relevant values with breakpoints and manually doing the math for the hit and miss test cases. The math worked out correctly. It turns out that on the Ray struct's member documentation page, the direction is specified as a unit vector. There is no note that using a non-unit direction causes the Intersects method to give incorrect results. This is relevant because, by scaling the collision spheres, the converted ray's direction will not be unit length (unless the resulting scale is 1). My solution was to implement the ray-sphere intersection test myself in my CollisionSphere class. Another solution would have been to convert the ray and then normalize the direction. However, that gives back an incorrect t value, which would mess up determining which intersection is closest to the ray's source and positioning cross-hairs in 3D space. Maybe it's because of my prior experience with this, but I find it bizarre that XNA's Ray struct has this unit-length requirement.
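To spell out why normalizing the converted direction breaks the t value (my notation, not from the XNA docs): let M be a collision sphere's world matrix and p(t) = o + t * d the world-space ray with |d| = 1.

object-space ray: p'(t) = (M^-1 o) + t * (M^-1 d)    [same parameter t, since M * p'(t) = p(t)]

Solving the intersection with the unnormalized direction M^-1 d therefore returns a t that is directly a world-space distance and can be compared across spheres. If M^-1 d is normalized first, the solver instead returns s = t * |M^-1 d|, and since |M^-1 d| depends on each sphere's scale, those s values are neither world-space distances nor comparable between spheres.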

[1] It has already taken substantially longer than I expected, but I am learning not only the framework, but also more about game development. So, I still intend to complete this project.
[2] While I assume that most people reading this already know that AABB stands for axis-aligned bounding box, I prefer to define the meanings of all abbreviations at least once just in case someone is unfamiliar.


Starting with XNA

Posted 16 October 2011

[heading]Normal Mapping[/heading]


After reading and going through the code in "Learning XNA 4.0" by Aaron Reed, I created a few minor test programs to play with some basic rendering techniques while learning more of the XNA framework. In each of these programs, I used simple models to test against (cubes, spheres, and a curved surface in one case).

The first of the rendering techniques was doing per-pixel lighting manually instead of using BasicEffect's PreferPerPixelLighting property. The development of this program was straightforward and there wasn't anything of note.

After that I went on to normal mapping. There are plenty of tutorials online for normal mapping, so I'm not going to go into the details of the algorithm. The only hang-up I had in this program was getting the binormal and tangent. There are also plenty of tutorials online on how to compute those, but in XNA it turns out you don't need to do so manually. I didn't see this mentioned in the various references on normal mapping, so I thought I'd mention this little detail here. After you add the model to your content project, expand the "Content Processor" property for that model and set "Generate Tangent Frames" to "True". This way the XNA framework's model processor takes care of computing the binormal and tangent, which will then be available to your shaders. This was just a lesson in exploring your tools.

[heading]Skeletal Animation[/heading]


I found "Learning XNA 4.0" by Aaron Reed to be a good introduction to XNA. However, I think the main thing it is lacking is coverage of the content pipeline. While it covers a large variety of topics in XNA and provides good detail, the content pipeline is a big part of using XNA beyond the BasicEffect rendering mechanism. Even to use XNA's SkinnedEffect renderer, a custom model processor is needed. Luckily, Microsoft's App Hub has the Skinned Model example, which covers creating the relevant model processor. From analyzing it, I think it's a good example to learn from. It shows some additional classes/structs to help with loading, and shows that an additional library is needed to make the serialized data accessible to the content pipeline and the game at runtime.

For the purposes of playing with skeletal animation, I also wrote my own vertex and pixel shaders for rendering an animated test model that was basically just 3 joined cubes with 3 bones. To help with debugging both the computation of the bone matrices and the shaders, I switched the program between using the custom shaders and XNA's SkinnedEffect renderer. In the course of debugging the program, I was surprised to find out that the content pipeline automatically takes care of interpolating the keyframes. The test model only had 2 keyframes in the fbx file, but after loading there were 104 keyframes.





