
Chronicles of the Hieroglyph

Using NuGet to Manage Dependencies

Posted 01 September 2014
hieroglyph3, nuget, directxtk
Managing Dependencies
Hieroglyph 3 has two primary dependencies - Lua and DirectXTK. Lua is used to provide simple scripting support, while DirectXTK is used for loading textures. Both of these libraries are included in the Hieroglyph 3 repository in source form. This allows for easy building with different build configurations and options, but it also comes with a number of costs as well.

First of all, you have to manually update your own repository whenever your dependencies make changes - or risk falling behind the latest and greatest. In addition, since each of these dependencies contains lots of source files, they bulk up the repository, which makes cloning slower and increases its overall size.

Another big downside is that when you rebuild the entire solution, you have to rebuild all of the dependencies as well. This is sometimes a good thing (as mentioned above about the various build options), but in general it just adds time to the build process. Since the dependencies don't really change very often, a full rebuild ends up needlessly longer than it should be.

Managing Dependencies with NuGet
With the most recent commit of Hieroglyph 3, I have replaced the DirectXTK source distribution with a reference to a NuGet package. If you aren't familiar with NuGet, it is basically a package manager that you can use to automatically download and install pre-built dependencies. This is actually old news for .NET developers, who have had access to NuGet for quite some time. However, for native C++ developers, this is a relatively new facility for managing the type of dependencies discussed above.

The package manager console is built right into Visual Studio, making it easy to count on your users having access to it. Overall, I spent about 10 minutes trying things out, and with a single 'Install-Package directxtk' command, I was in business.

So now, I have a single XML file that references directxtk, and when you build, the needed library and include files are automatically downloaded if they haven't already been. This actually solves most of the issues mentioned above, without bulking up the repository with large lib files. I'm trying this out with the DirectXTK first, and if it works out well then I will also update the Lua dependency.
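For reference, the XML file in question is NuGet's packages.config, which looks roughly like this (the version number here is illustrative - NuGet records whatever version was actually installed):

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- One entry per installed package; 'native' marks a C++ package. -->
  <package id="directxtk" version="2014.9.1.1" targetFramework="native" />
</packages>
```

Since this small file is all that goes into the repository, the package contents themselves never need to be committed.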

In fact, if it works as well as advertised, I may even build a NuGet package out of Hieroglyph 3 for simple installation and use of the library...

Build & Project Configurations for Hieroglyph 3

Posted 09 July 2014
msbuild, hieroglyph3
Building Hieroglyph 3

Hieroglyph 3 has always had an 'SDK' folder into which the engine static library is built in its various configuration and platform incarnations, and into which the include (*.h, *.inl) files are copied. This gives a user of the engine an easy way to build the engine and grab the result for use in another project, without keeping a source copy of Hieroglyph 3 in that project. You can route the various versions of the static library output into different folders using some of the built-in macros to modify the build path. For example, I use the following setting for my Output Directory project property:
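(The exact value isn't captured here, but a typical macro-based Output Directory setting would look something like the following - this particular path is illustrative, not the engine's actual value:)

```
..\SDK\Library\$(PlatformName)\$(Configuration)\
```

The $(PlatformName) and $(Configuration) macros expand per build, so each configuration/platform pair lands in its own folder automatically.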


The sample applications included with the engine link against this SDK folder accordingly, and it works well in most situations. There are occasional issues where Visual Studio opens a copied header file from the SDK folder instead of the original source version, which leads to strange phantom bugs where edits you made earlier seem to disappear, but that is manageable with some diligence.

MSBuilding Hieroglyph 3

However, cleaning or building all configurations from the IDE is no fun - especially if you are building large projects that take some time. So I recently dove into using MSBuild from the command line and wrote a couple of batch files to build them all automatically. For example, here is the sdk_build.bat file:

msbuild Hieroglyph3_Desktop_SDK.sln /t:Rebuild /p:Configuration=Debug /m:4 /p:Platform=Win32
msbuild Hieroglyph3_Desktop_SDK.sln /t:Rebuild /p:Configuration=Release /m:4 /p:Platform=Win32
msbuild Hieroglyph3_Desktop_SDK.sln /t:Rebuild /p:Configuration=Debug /m:4 /p:Platform=x64
msbuild Hieroglyph3_Desktop_SDK.sln /t:Rebuild /p:Configuration=Release /m:4 /p:Platform=x64

This lets you fire and forget the build process, and it allows for automatically generating the output of your project. There is a corresponding version for cleaning as well. This is my first time using MSBuild from the command line (it is the same build system that the IDE uses), and I am quite happy with how easy it is to work with. One ulterior motive for experimenting with this is to eventually incorporate a continuous integration server into the project, which would also need some script-driven build setups.

Dependency Linking

One other recent change that I made to the project settings is to set the 'Link Library Dependencies' option to true for all of my static libraries that consume other static libraries. In the past, I always defaulted to making the end application collect and link to all static libraries used throughout the entire build chain. That started to get old really quickly once I started incorporating more and more extension libraries. For example, I have Kinect and Oculus Rift extensions which have their own static libraries. Then the Hieroglyph 3 project has dependencies on Lua and DirectXTK, which have their own static libs.

By using the 'Link Library Dependencies' setting, I no longer have to pass the libs down the build chain - each library can link in whatever it is using, and the applications have a significantly simpler burden to get up and running. Simpler is better in my book!
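In the project file itself, this setting corresponds to metadata on the project reference items. A sketch of the relevant .vcxproj fragment would look roughly like this:

```xml
<ItemDefinitionGroup>
  <ProjectReference>
    <!-- Pull referenced static libraries into this library's own output. -->
    <LinkLibraryDependencies>true</LinkLibraryDependencies>
  </ProjectReference>
</ItemDefinitionGroup>
```

You normally flip this through the IDE (Project Properties), but it is useful to know what actually changes in the project file when you do.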

Source Code Management

One final note, about source code management. Hieroglyph 3 has used SVN as its SCM for a long time. Times have changed, and open source development has come a long way since I started out on the project. I will be migrating the repository on Codeplex over to Git, which I think will make it much easier to accept contributions as well as to utilize modern open source tooling for history and tracking purposes. I use Git at work, and I really like its decentralized nature. It is time to move on...

Miscellaneous Stuff

I have also been playing around a little with the Visual Studio 2014 CTP, and some of the new C++ features that it brings with it. There is some good stuff in there (see here for some details) so check it out and see what you can do with them!

Also, it was recently announced that the CppCon sessions will be professionally video recorded. CppCon is going to be a big fat C++ fest with lots of great talks scheduled (6 tracks worth!), so if you haven't already registered, go do it now! The program and abstracts are available now, so take a look and see if it would be good for you to check it out!

Simple Mesh Loaders

Posted 26 June 2014
stl, obj, d3d11
Over the years, I have relied on the trusty old Milkshape3D file format for getting my 3D meshes into my engine. When I first started out in 3D programming, I didn't have a lot of cash to pick up one of the heavy duty modeling tools, so I shelled out the $20 for Milkshape and used that for most of my model loading needs. It came with a simple SDK that I used to understand the format, and then I wrote a file loader for it which worked just fine (despite my lack of experience writing such things...).

Later on, a PLY loader was written by Jack Hoxley (jollyjeffers for those of you who have been around here a while) while we were working on Practical Rendering and Computation with Direct3D 11. Other than these two formats, all other geometry loaded into the engine was procedurally created or just brute force created in code. I had been thinking of integrating AssImp for quite a while, and finally sat down to try it out.

While I have lots of respect for the authors of AssImp, and I think it is a great project that meets a big need, I decided not to incorporate it into Hieroglyph 3. In general, I don't like adding dependencies to the library unless they are absolutely needed. AssImp seemed potentially worth the hassle, so I spent a day or two reading its docs and trying to get a feel for how the API worked and what I would need to do to get it up and running. By the time I was done, I felt relatively confident that I could get something up and running quickly.

So I tried to integrate the building of AssImp into my solution and add it as one of the primary projects in the build chain. I messed with the project files for about 45 minutes, and finally decided that it wasn't meant to be - if I can't add a project into a solution seamlessly in the first few tries, then something isn't working. Either their build system is different, or I'm not understanding something, or whatever - I just didn't want to add a bunch of complexity to the engine just to add more file format capability.

Instead, I decided I would simply write some basic file loaders for the formats that I wanted to work with. To start out with, I implemented the STL loader, which was actually exceedingly easy to do. In fact, here is the complete code for the loader:
// This file is a portion of the Hieroglyph 3 Rendering Engine.  It is distributed
// under the MIT License, available in the root of this distribution and 
// at the following URL:
// http://www.opensource.org/licenses/mit-license.php
// Copyright (c) Jason Zink 
// This is a simple loader for STL binary files.  The usage concept is that the
// face data gets loaded into a vector, and the application can then use the face
// data as it sees fit.  This simplifies the loading of the files, while not 
// making decisions for the developer about how to use the data.
// Our face representation eliminates the unused AttributeByteCount to allow each
// face to align to 4 byte boundaries.  More information about the STL file format 
// can be found on the wikipedia page:
// http://en.wikipedia.org/wiki/STL_%28file_format%29.
#ifndef MeshSTL_h
#define MeshSTL_h
#include <vector>
#include <fstream>
#include <string>
#include "Vector3f.h"

namespace Glyph3 { namespace STL {

template<typename T>
void read( std::ifstream& s, T& item )
{
	s.read( reinterpret_cast<char*>(&item), sizeof(item) );
}

class MeshSTL
{
public:
	MeshSTL( const std::wstring& filename ) : faces()
	{
		unsigned int faceCount = 0;

		// Open the file for input, in binary mode, and put the marker at the end.
		// This lets us grab the file size by reading the 'get' marker location.
		// If the file doesn't open, simply return without loading.

		std::ifstream stlFile( filename, std::ios::in | std::ios::ate | std::ios::binary );
		if ( !stlFile.is_open() ) { return; }

		unsigned int fileSize = static_cast<unsigned int>( stlFile.tellg() );

		// Skip the header of the STL file, and read in the number of faces.  We
		// then ensure that the file is actually large enough to handle that many
		// faces before we proceed.

		stlFile.seekg( 80 );
		read( stlFile, faceCount );

		if ( fileSize < 84 + faceCount * FILE_FACE_SIZE ) { return; }

		// Now we read the face data in, and add it to our vector of faces.  We
		// provided an ifstream constructor for our face to allow constructing
		// the vector elements in place.  Before starting the loop, we reserve
		// enough space in the vector to ensure we don't need to reallocate while
		// loading (and skip all of the unneeded copying...).

		faces.reserve( faceCount );

		for ( unsigned int i = 0; i < faceCount; ++i ) {
			faces.emplace_back( stlFile );
		}
	}

	struct Face
	{
		Face( std::ifstream& s )
		{
			read( s, normal );	// Read normal vector
			read( s, v0 );		// Read vertex 0
			read( s, v1 );		// Read vertex 1
			read( s, v2 );		// Read vertex 2
			s.seekg( 2, std::ios_base::cur ); // Skip 2 bytes for unused data
		}

		Vector3f normal;
		Vector3f v0;
		Vector3f v1;
		Vector3f v2;
	};

	static const unsigned int FILE_FACE_SIZE = sizeof(Vector3f)*4 + sizeof(unsigned short);

	std::vector<Face> faces;
};

} }
#endif // MeshSTL_h

That's the whole thing - in a single header file. The general idea is to load the file contents into memory, and then let the developer decide how to use that data to generate the needed vertices and indices. I don't necessarily know the exact vertex layout in advance, so having flexibility is pretty important in Hieroglyph 3. Once I wrote this (which I would be happy to get criticism on, by the way!) I decided that I could also write an OBJ loader, along with the corresponding MTL file loader to go with it. I am quite honestly so happy that I went this path instead of using another third party library - now I just need to add a single header file, and I have access to a new format.
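As a sketch of what "letting the developer decide" looks like in practice, here is one hypothetical way an application might expand the loaded faces into a flat, draw-ready vertex array. The Vector3f here is a minimal stand-in mirroring the engine type's layout, and the Vertex struct is one layout an application might choose - the loader itself makes no such decision:

```cpp
#include <cassert>
#include <vector>

// Minimal stand-in for the engine's Vector3f (three packed floats).
struct Vector3f { float x, y, z; };

// Mirrors the loader's per-face data: one normal plus three positions.
struct Face
{
    Vector3f normal;
    Vector3f v0;
    Vector3f v1;
    Vector3f v2;
};

// One possible application-chosen vertex layout: position + normal.
struct Vertex
{
    Vector3f position;
    Vector3f normal;
};

// Expand the face list into a flat vertex array: three vertices per
// face, each carrying the face normal (flat shading).
std::vector<Vertex> FlattenFaces( const std::vector<Face>& faces )
{
    std::vector<Vertex> vertices;
    vertices.reserve( faces.size() * 3 );

    for ( const Face& f : faces )
    {
        vertices.push_back( { f.v0, f.normal } );
        vertices.push_back( { f.v1, f.normal } );
        vertices.push_back( { f.v2, f.normal } );
    }

    return vertices;
}
```

An application wanting smooth normals or indexed drawing would simply post-process the same face data differently, which is exactly the flexibility the loader is after.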

Oculus Rift + Hieroglyph 3 = Fun!

Posted 14 June 2014
rift, d3d11
I recently have been adding support to Hieroglyph 3 for the Oculus Rift. This post is going to discuss the process a little bit, and how the design of the Hieroglyph 3 engine ended up providing a hassle free option for adding Rift interaction to an application. Here's the first screen shot of the properly running output rendering:

Attached Image

I have been an admirer of the Rift for quite some time, and I wanted to find a way to integrate it into some of the sample applications in Hieroglyph. I'll assume most of you are already familiar with the device itself, but when you think about how to integrate it into an engine you are looking at two different aspects: 1) Input from the HMD's sensors, and 2) Output to the HMD's screen. If your engine is modular, it shouldn't be too hard to add a few new options for a camera object and a rendering pass object.

After working on the engine for many years, I was not at all interested in building and maintaining multiple copies of my sample applications just to support a different camera and rendering model. I work pretty much on my own on the engine, and my free time seems to be vanishingly small nowadays, so it was critical to find a solution that allows either a runtime decision between standard and HMD rendering, or a compile-time decision using a few definitions to conditionally choose the HMD. I'm pretty close to that point, and have a single new application (OculusRiftSample) set up for testing and integration.

Representing the HMD

The first step in getting Rift support was to build a few classes to represent the HMD itself. I am working with the OculusSDK 0.3.2 right now, which provides a C-API for interacting with the device. I basically created one class (RiftManager) that provides very simple RAII style initialization and uninitialization of the API itself, and then one class that would represent the overall HMD (RiftHMD).

RiftHMD is where most of the magic happens with the creation of an HMD object, lifetime management, and data acquisition and conversion to the Hieroglyph objects. The OculusSDK provides its own math types, so a few small conversion and helper functions to get the sensor orientation and some field of view values was necessary. You can check out the class here (header, source).

Getting the Input

Once you have a way to initialize the device and grab some sensor data, the first job is to apply that data to an object in your scene that will represent the camera movement of your user. In Hieroglyph 3 this is accomplished with an IController implementation called RiftController (header, source). This simple class takes a shared_ptr to a RiftHMD instance, and then reads the orientation and writes it to the entity that it is attached to.

All actors in Hieroglyph are composed of a Node3D and an Entity3D. The node is the core of the object, and the entity is attached to it. This allows for easily composing both local (via the entity) and absolute (via the node) motion of an object. For our Rift based camera, we attach the RiftController to the entity of the Camera actor. This lets you move around with the normal WASD controls, but also look around with the Rift too.
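The shape of that controller can be sketched as follows. Note that the Quaternion, RiftHMD, and Entity3D types here are minimal stand-ins invented for this sketch - the real RiftHMD wraps the OculusSDK C-API and Entity3D belongs to Hieroglyph 3's scene graph:

```cpp
#include <memory>
#include <utility>

// Stand-in types for this sketch only.
struct Quaternion { float w, x, y, z; };

struct RiftHMD
{
    // The real class converts the SDK's sensor reading to engine math types.
    Quaternion ReadOrientation() const { return sensorOrientation; }
    Quaternion sensorOrientation{ 1.0f, 0.0f, 0.0f, 0.0f };
};

struct Entity3D
{
    Quaternion rotation{ 1.0f, 0.0f, 0.0f, 0.0f };
};

// IController-style component: on every update, copy the HMD's sensor
// orientation onto whatever entity the controller is attached to.
class RiftController
{
public:
    explicit RiftController( std::shared_ptr<RiftHMD> hmd )
        : m_hmd( std::move( hmd ) ) {}

    void Update( Entity3D& entity ) const
    {
        entity.rotation = m_hmd->ReadOrientation();
    }

private:
    std::shared_ptr<RiftHMD> m_hmd;
};
```

Because the controller only touches the entity's local rotation, the node's WASD-driven translation composes with it for free - which is the whole point of the node/entity split.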

Rendering the Output

Rendering is also quite interesting for the Rift. The DK1 device has a 1280x800 display, but you don't actually render to it directly. Instead, you render to off-screen textures (at much higher resolutions) and then the SDK uses these textures as input to a final rendering pass that applies the distortion to your rendered images and maps that to the HMD's display. All of this stuff is nicely encapsulated into a specialized SceneRenderTask object called ViewRift (header, source).

This object creates the needed textures, sets up the state objects and viewports needed for rendering, and also supplies the actual rendering sequence needed for each eye. This construct where a rendering pass is encapsulated into an object has been one of the oldest and best design choices that I have ever made. I can't emphasize it enough - make your rendering code component based and you will be much happier in the long run! All rendering in Hieroglyph is done in these SceneRenderTask objects, which are just bound to the camera during initialization.

The Final Integration

So in the end, integration into an application follows these easy steps:

1. Create a RiftManager and RiftHMD instance.
2. Create the application's window according to the RiftHMD's resolution.
3. Create a RiftController and attach it to the Camera's entity.
4. Create a ViewRift object and bind it to the camera for rendering.
5. Put on the headset and look around :)

It is simple enough to meet my requirements of easy addition to existing samples. I still need to automate the process, but it is ready to go. Now I want to experiment with the device and see what types of new samples I could build that take advantage of the stereo vision capabilities. The device really is as cool as everyone says it is, so go out and give it a shot!

Unique Style of Flow Control

Posted 08 June 2014

I recently picked up a copy of the book "Developing Microsoft Media Foundation Applications" by Anton Polinger. I have been interested in adding some video capture and playback for my rendering framework, and finally got a chance to get started on the book.

What I immediately found interesting was that in the foreword on page xvi, the author describes a coding practice he uses throughout the examples. The idea is to use a 'do {} while(false)' pattern, putting all of the operations into the brackets of the do statement. The loop executes precisely once, and each operation is wrapped in a macro that breaks on an error. This effectively jumps to the end of the block when an error occurs, without requiring verbose return codes or exception handling.

I haven't ever seen this type of flow control, so I was wondering what your thoughts on this are. I would assume that the compiler would optimize out the loop altogether (eliminating the loop, but still executing the block) due to the always false condition, but the macros for checking failure would still end up jumping properly to the end of the block. It seems like a relatively efficient pattern for executing a number of heavy duty operations, while still retaining pretty good readability.
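A minimal sketch of the pattern looks like this. The BREAK_ON_FAIL macro and the RunPipeline function are stand-ins I made up for illustration - the book uses its own HRESULT-checking macros around real Media Foundation calls:

```cpp
// Hypothetical break-on-error macro in the spirit of the book's examples.
#define BREAK_ON_FAIL( hr ) if ( (hr) < 0 ) break;

// Runs three "heavy" steps; any failure jumps past the remaining steps
// to the single exit point after the do { } while ( false ) block.
int RunPipeline( bool failSecondStep, int& stepsCompleted )
{
    int hr = 0;
    stepsCompleted = 0;

    do
    {
        hr = 0;                           // step 1: always succeeds here
        BREAK_ON_FAIL( hr );
        ++stepsCompleted;

        hr = failSecondStep ? -1 : 0;     // step 2: simulated failure point
        BREAK_ON_FAIL( hr );
        ++stepsCompleted;

        hr = 0;                           // step 3: only reached on success
        BREAK_ON_FAIL( hr );
        ++stepsCompleted;
    }
    while ( false );

    // Single exit point: shared cleanup would go here.
    return hr;
}
```

The appeal is that every step shares one cleanup path at the bottom of the function, much like a 'goto cleanup' idiom but without the goto.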

I have asked around, and this seems to be at least a known practice. If there are any interesting use cases where this makes especially good sense, I would love to hear about them!

Thoughts on Direct3D 12

Posted 09 April 2014
At the BUILD 2014 conference, Max McMullen provided an overview of some of the changes coming in Direct3D 12. In case you missed it, take a look at it here. In general, I really enjoy checking out the API changes that are made with each iteration of D3D, so I wanted to take a short break from my WPF activities to consider how the new (preliminary) designs might impact my own projects.

Less Is More
When you take a look at the overall changes that are being discussed, you end up with less overhead but more responsibility - so less is more really does apply in this case. Most or all of the changes are designed to simplify the work of the driver and runtime at the expense of the application having to ensure that resources remain coherent while they are being used by the pipeline. This type of trade-off can be a double-edged sword, since it can require more work on your side to ensure that your program is correct. However, there have been a number of hints about significant tooling support - so I am initially encouraged that this is being considered by the D3D team.

My initial feeling when I saw all of these changes was that they are in fact quite reasonable. When I consider how I would modify Hieroglyph 3 to accommodate these changes, I don't see a major tear-up. Each change seems fairly well contained on the API side, and Max provided a pretty good rationale for why each change was needed. Here are the major areas of change that I noted, and some comments on how they fit with Hieroglyph 3.

Pipeline State Objects
The jump from D3D9 to D3D10/11 essentially saw a grouping of the various pipeline states into immutable objects. The concept was to reduce the number of times that you have to touch the API in order to set a pipeline up for a draw call. It sounds like D3D12 will take this all the way, and make most of the non-resource pipeline state into a single big state object - the Pipeline State Object (or PSO, as it is sure to be called...). This seems like an evolutionary step to me, continuing what was already started in the transition to D3D10.

Hieroglyph 3 already emulates this concept with the RenderEffectDX11 class, which encapsulates the pipeline state to be set when drawing a particular object. Each object can have its own state, and replacing this with a PSO will be fairly simple. Most likely the PSO can be created centrally in a cache of PSOs, and just handed out to whichever RenderEffectDX11 instance happens to match the same state. If none match, then we create a new entry in the PSO cache. Since the states are immutable, we don't have to worry about modifications, and the runtime objects' lifetimes can be managed centrally in the cache. If this makes the system faster, I'm all for it!
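The cache idea can be sketched like this. The PipelineStateDesc and PSOCache types are hypothetical stand-ins (the real description would hold blend, rasterizer, shader, and other state blocks rather than strings), but the lookup-or-create logic is the essence of the plan:

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical immutable state description; the string fields stand in
// for the real blend / rasterizer / shader state blocks a PSO captures.
struct PipelineStateDesc
{
    std::string rasterizer;
    std::string blend;

    bool operator==( const PipelineStateDesc& other ) const
    {
        return rasterizer == other.rasterizer && blend == other.blend;
    }
};

struct PipelineStateDescHash
{
    size_t operator()( const PipelineStateDesc& d ) const
    {
        return std::hash<std::string>()( d.rasterizer + "|" + d.blend );
    }
};

struct PipelineState
{
    PipelineStateDesc desc;   // the real object would hold the API handle
};

// Central cache: every effect asking for the same immutable state gets a
// handle to the same shared object, so lifetimes are managed in one place.
class PSOCache
{
public:
    std::shared_ptr<PipelineState> Acquire( const PipelineStateDesc& desc )
    {
        auto it = m_cache.find( desc );
        if ( it != m_cache.end() ) { return it->second; }

        auto pso = std::make_shared<PipelineState>( PipelineState{ desc } );
        m_cache.emplace( desc, pso );
        return pso;
    }

private:
    std::unordered_map<PipelineStateDesc,
                       std::shared_ptr<PipelineState>,
                       PipelineStateDescHash> m_cache;
};
```

Because the states are immutable, handing out shared pointers is safe - no instance can ever mutate a cached entry out from under another effect.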

Resource Hazard Management
Instead of the runtime actively watching for your resources to be bound either as an input or an output (but not allowing both simultaneously), Direct3D 12 will use an explicit resource barrier for you to indicate when a resource is transitioning from one to the other. I have actually run into problems with the way that Direct3D 11 handles this hazard management before, so this is a welcome change.

For example, in the MirrorMirror sample I do a multiple pass rendering sequence where you generate an environment map for each reflective object, followed by the final rendering pass where the reflective objects use the environment maps as shader resources. When you go to do the final rendering pass, you either have to set the output merger state or the pipeline state first. If you bind the pipeline state first, then the environment map gets bound to the pixel shader with a shader resource view. However, from the previous pass the environment map is still bound to the output merger with a render target view - so the runtime unbinds the oldest setting and issues a warning. If you set the states in the opposite order, then you get the same situation on the next frame when you try to bind the render target view for output.

This essentially forces you to either ignore the warning (and just take whatever performance hit it gives you) or you have to explicitly clear one of the states before configuring for the next rendering pass. Neither of these ever seemed like a good option - but in D3D12 I will have the ability to explicitly tell the runtime what I am doing. I like this change.

Descriptor Heaps and Tables
The next change to consider is how resources are bound to the pipeline. Direct3D 12 introduces the use of Descriptor Heaps and Tables, which sound like simple user mode PODs to point to resources. This moves the previous runtime calls for binding resources (mostly) out of the runtime and into the application code, which again should be a good thing.

In Hieroglyph 3, I currently use pipeline state monitors to manage the arrays of resource bindings at each corresponding pipeline stage. This is done mostly to prevent redundant state change calls, but this could easily be updated to accommodate the flexible descriptors. I'm more or less already managing a list of the resources that I want bound at draw time, so converting to using descriptors should be fairly easy. It will be interesting to try out different mechanisms here to see what gives better performance - i.e. should I keep one huge descriptor heap for all resources, or should I manage smaller ones for each object, or somewhere in between?

Bye Bye Immediate Context
The final major change that I noted is the removal of the immediate context. This one actually seems the least intrusive to me, due to the nature of the deferred context to immediate context relationship in the existing D3D11 API. Essentially both of these beasts use the same COM interface, but deferred contexts are used to generate command lists while immediate contexts consume them. This seems like a small distinction, but in reality you have to design your system so that it knows which context is the immediate one (or else you can't ever actually draw anything) and which are deferred. So they are the same interface only in theory...

In Hieroglyph 3, I would use deferred contexts to do a single rendering pass and generate a command list object. After all of these command lists were generated, I batched them up and fed them to the immediate context. The proposed changes in D3D12 are actually not all that different - they replace the immediate context with a Command Queue which more closely represents what is really going on under the covers with the GPU and driver. Porting to use such a command queue should be fairly easy (you just feed it command lists, same as immediate context), but updating to take advantage of the new monitoring of the command queue will be an interesting challenge.

There was also a Command Bundle concept introduced, which is essentially a mini-command list. These are expected to speed up the time it takes to generate GPU commands to match a particular sequence of API calls by caching those calls into a Command Bundle. This will introduce another challenging element into the system - how big or small should the command bundles be? When should you be using a command list instead of a command bundle? Most likely only profiling will tell, but it should be an interesting challenge to squeeze the most performance as possible out of the GPU, driver, and your application :).

So those are my thoughts about Direct3D 12. Overall I am quite positive about the performance benefits relative to the expected amount of additional effort it will require. There aren't any major show-stoppers that I can see, but of course it is still early days and the API can still change or introduce new elements before it is released.

I would be interested to hear if anyone else has considered this or found a particular piece of the talk interesting or if you see any issues with it. Now is the time to give feedback to the Direct3D team - so speak up and start the discussion!

Hieroglyph3 and WPF

Posted 02 April 2014
d3d11, wpf, ui
As I mentioned in my last entry, I am in the process of evaluating a number of different UI frameworks for use with Direct3D 11 integrated into them. This is mostly for editor style applications, and also an excuse for me to learn some other frameworks. The last few commits to the Hieroglyph 3 codebase have encompassed some work that I have done on WPF, and that is what I wanted to discuss in this post.

As a pure C++ developer, I haven't ever really spent lots of time with managed code. C# seems like a pretty cool language, and lots of people love it, so getting the chance to put together a small WPF based C# example is a cool learning experience for me. To get up to speed, I checked out a couple of PluralSight tutorials on WPF, and away I went.

In general, I have to say that XAML is really the star of this party. The hierarchical nature and the raw power of what you can do with XAML is just silly compared to any other tech that I have used in the past for UI design / layout. I suppose HTML + CSS would be the next closest thing, but the tooling that Microsoft provides for working with XAML is really top notch... The best part about this is that XAML is usable by native C++ on Windows Store apps, so anything I'm learning here will probably apply there too. But I guess that is the topic for another post...

For my sample application, I basically just wanted to show that it was possible to run some of my existing D3D11 code to generate some frames and get them up and running interactively on a WPF based UI. So I'll leave further discussion of WPF and XAML for other posts, and focus on how I got this working.

Direct3D Shared Resources
It turns out that there is already an easy way to interop with Direct3D 9 built right into WPF - an ImageSource object called D3DImage. This works great for connecting a D3D9 rendered scene to a WPF app, but I was after D3D11 integration. This isn't supported right out of the box, but it is possible with Direct3D9Ex. The key piece is that D3D9Ex is capable of sharing a surface with D3D11 devices, which allows you to follow a workflow like this:

1. Create a texture 2D in D3D11, specifying the shared miscellaneous flag
2. Get a shared handle from #1's texture
3. Create a texture in D3D9Ex from the shared handle
4. Render into your D3D11 texture
5. Use D3D9Ex to push that shared texture to D3DImage

There are additional details and requirements involved, but this gives you the overall gist of what needs to be done. I found a sample project by Jeremiah Morill that already implemented most of this work in a reusable C++/CLI assembly project, so I used this as the starting point.

After you get the interop stuff working, the next task is to get your native C++ code working in a managed application. This is a fairly well documented practice, as you can write native C++ code and wrap it with a managed C++/CLI wrapper to expose it to other managed code. This was also my first foray into this activity, so it took some experimentation - but it is doable!

The Demo
After all the integration work, additional assembly references, solution setup, project property modifications, and some playing around, I managed to get my sample up and running. What you see below is a simple WPF based C# application, with a main render window that is overlapped by a single button (I know, I know, it doesn't get much more exciting...).

Attached Image

However exciting this may look, it is actually quite relevant. The button being overlapped onto the rendering surface shows that there are no airgap issues here - you can put your UI elements on, over, or composited with your rendered scene. This is actually pretty cool, and a nice capability to have if you are building an editor. It is always nice to have the option to obscure some parts of the render target with UI elements in certain circumstances, so this is a good thing.

I am planning out an article on the whole process, so hopefully I can share all the details and gotchas fairly soon. I'm totally new to the article process here on GameDev.net, so we'll see how that goes :) Other UI frameworks and development activities are yet to come!

User Interface Frameworks

Posted by , 12 March 2014 - - - - - - · 926 views

Lately I have found myself looking for an easy (or easier) way to get some native D3D11 code to play nicely with a user interface framework. Way back when I first started out writing Hieroglyph, I had the basic idea that I could just make my own user interface in the engine. I think many people start out with this mindset, but it is rarely the best way to go. You have to build so many pieces to make it work well, and there is almost always one more thing that you have to add to take the next step in a project... It is so much better to have an existing framework that you can simply adopt and put your rendering code into, and you will be significantly more productive if you don't have to reinvent the wheel.

Native Frameworks
Unfortunately, there aren't many options available for native code. I don't count Win32 as a UI framework, but rather as a way to make some windows to render into. Technically you can create some basic UI elements with it, but it doesn't really qualify. MFC is an option, but it is a relatively old codebase and can feel pretty clunky until you have some real experience with it. There is also the downside that MFC isn't available with the Express versions of Visual Studio, which limits the audience that can use your code.

wxWidgets is another option that is open source and cross-platform. This solves the issue of being available on the Express SKU, but it has a design that is very reminiscent of MFC... There has to be something more modern available. Qt is another open source, cross-platform solution, but it is a really, really big framework. In the context of Hieroglyph 3, it would require all of its users to download and manage a whole other library in addition to Hieroglyph 3 itself, which is less than ideal (although it is a viable option).

Managed Frameworks
So if we decide not to use a native framework, but rather a managed one, then some of the issues from above go away too. Basically there are a couple of frameworks to choose from: Winforms and WPF. Both of these frameworks are available on Express SKUs, so there is no issue there. Since they are included "in the box", users don't have to manually download any additional libraries on their own. So these seem like viable options as well. The obvious downside here is that we have to create a native-to-managed interface, which requires careful and deliberate planning on how the UI framework has to interact with the native code. This is non-trivial work, especially if you haven't done it before...

Native and Managed Frameworks
There is one additional possibility that combines both of these worlds. On Windows 8, it is possible to build WinRT apps with native C++ and XAML. This gives you a built-in framework, available on Express SKUs, accessible from a native codebase - no managed code required. This is really attractive to me, since I have never been much of a managed developer. But there is always a catch... it is only available for WinRT applications. This significantly limits the people that have access to it (at least at the moment), but it still remains an interesting option.

So What To Do?
Well, currently Hieroglyph3 supports MFC, at least at a basic level. One of the users of the library has shown a sample of using wxWidgets with it in a similar way, so I may try to see if he will contribute that back to the community (@Ramon: Please consider it!). In addition to these options, I also want to explore Winforms and WPF. And in the long run, I want to use C++ and XAML as well. So I choose all of the above, except for Qt at the moment.

My plan is to build some library support for each of these frameworks as an optional add-on library, similar to how the current MFC support is kept separate from the core projects. As I go through each framework, I'll be discussing my impressions of it, as well as the integration process. Strangely, it seems that there isn't much centrally located information on this very important topic, and I hope to consolidate some of it here in this journal. I'll probably start out by describing the existing MFC solution, so stay tuned for that one next time.