
Chronicles of the Hieroglyph

We finally see some Hololens development details...

Posted 01 March 2016

After what seems like an eternity (but in reality was about 1 year) we now have quite a bit of information about the Hololens. Perhaps even more interesting is that we have a pretty good picture of what development for the device will look like. If you haven't seen it already, you can start looking on the Development Overview page for a good introduction.


Right off the bat, it was great to see that Microsoft is supporting both a Unity based development model and a more traditional development model where you can work in C++ or any of the other usual suspects for UWP applications (JavaScript, C#, or VB). I like playing around in Unity, but after being away from C++ for a while I start to get a little anxious... Anyways, there is lots of documentation there and some interesting tidbits about how applications will be interacting with the Hololens itself.


I'm also very happy to see some of the Academy content, which consists essentially of step-by-step tutorials to get you started. Having this type of content prior to the device being released is a nice change of pace for a big tech company, and I hope this is a trend that continues as more and more VR and AR headsets are released.


While I am fortunate enough to be in line for the first wave of headsets, it is also an excellent idea to make the emulator a first-class citizen. $3000 is a whole bunch of money, and the devices will be limited in supply initially, so providing a way for people to work on their programs without a device is a great thing that, again, I hope catches on in the industry. From the looks of it, the emulator should be pretty capable, so I'm looking forward to giving it a spin myself when it is released.


Our next checkpoint is going to be at the BUILD conference at the end of March, where I would expect the bits to finally be released and of course more details to be made available. Microsoft has been killing it with their Hololens announcements, so hopefully this one goes out with a bang to the developers!


So how about you guys - is anyone out there planning to build some AR programs with the Hololens? Do you have any opinions on the documentation after reading through it? This is just the beginning, so be sure to provide as much feedback as you can to the Hololens team as it will only help improve the final product in the end.


I am planning to integrate Hololens with Hieroglyph 3, so it should be fun to build up the framework around the Holographic APIs. I'll be sure to continue posting here about my progress, so stay tuned!

Wrapping up 2015, planning for 2016

Posted 29 December 2015

It's always a good practice to think about what you have done over the past year, and of course to think about what is coming in the next year. I have found this to be a good way to gain perspective on my various projects and activities, so I will indulge in some thinking out loud here.


This year I was extremely busy at work. Normally this would be to the detriment of my passion outside of work (3D software development) but in fact I have been incorporating my 3D work into the daily job. There is a program at work that allows us to pitch ideas for new products and services, and I have had some success in pitching concepts related to 3D programming. So while I have been relatively quiet on GameDev.net, I have in fact been working harder than ever on my craft of choice.


With that said, I have had the opportunity to use Hieroglyph 3 in some of these projects. It is interesting to go from a single developer working on the rendering framework to a larger organization having to use it as well - there are lots of things that I take for granted that have to be explained to other developers, and this provides some insight into what can be improved. There are a number of areas that I will indeed be iterating over, including the following:

  • NuGet Support: After over a year working with NuGet, I think it is time to remove its use from Hieroglyph. It reduces the source needed for building the library, but it also causes a delay in updating to new compiler versions due to the need for the NuGet package owner to update. I would like to find a better solution here, possibly with something like a source code version of NuGet.
  • Static Library Management: Over the years I have added some dependencies, some of which are integrated (Lua, DirectXTK) and others that are optional (MFC, WPF, OculusSDK, Kinect SDK, etc...). I'm not currently happy with the way these libraries are managed, and there are some cases where static libraries are handled differently depending on the source. I would like to improve optional library support, preferably also improving the project support system in Visual Studio as well.
  • Moving to Git: I have used SVN for Hieroglyph from the start, but it is time to move to Git - I think there isn't much more to be said about that one...
  • Scene Management: My original scene management system was loosely based on the Dave Eberly style scene graph, and it has evolved from there. However, with modern C++ coming of age, there are a number of new topics to explore. For example, the Actor system is inheritance based, but it could just as easily use static interfaces with templates. Object ownership is also an interesting area that I previously baked into a usage pattern, but it could be more explicitly defined as well.
  • Less Framework for more Flexible Framework: There are many times when I wished it was easier to use pieces of Hieroglyph without needing the whole thing. I would also like to move toward a less coupled usage pattern whenever possible, to make each piece more standalone.


Overall I am really happy with the experiences I have gained over the past year, and I really look forward to applying some of those experiences to the points above in 2016...


There are a lot of exciting things that will be happening in 2016 for the world of 3D programming. First and foremost is the arrival of the head mounted displays that we have been building up to over the past couple of years. On the VR side I have been working mostly with the Oculus developer kits, but some colleagues are also using the HTC Vive. Whichever side you choose to work on (or both) you will be happy with the arrival of the final headsets.


The one that I am most interested in, though, is the Hololens. Even with the limited field of view, I see the potential use cases for Hololens as significantly broader than those for closed-off VR headsets. Of course, nothing is really known about the development support yet, so it remains to be seen whether they get it right or not. In any case, I can't wait to get a developer kit and see what can be done with it!


There are other things that I would like to explore related to AR and VR on the engine side as well. In the past, you always built your engine to produce a 2D output image. This is technically still true for the new HMDs (albeit with two output 2D images) but they make you think about designing more for 3D content. There has been lots of discussion about managing GUIs in 3D, but I think there are still lots of areas to investigate where we can take advantage of keeping our content in 3D deeper into the pipeline.


There are lots more areas that I'll be exploring over the coming year, but I'll just have to write about them when we get there. I think it is going to be a great year, so happy new year and let's take advantage of the time we have to make some great stuff!

Visual Studio 2013 and Graphics Development

Posted 30 March 2015

Way back when D3D9 was first released, graphics debugging and performance analysis were a complete black art. It was really, really difficult to know what was going on, and when the graphical output wasn't quite what it was supposed to be, you really had to put on your detective hat and begin to deduce what the issue could be. It was even worse for performance topics, where you needed to figure out where to spend significant time optimizing or rearranging something to speed things up a bit.

I might make myself sound old, but nowadays it is ridiculously easy to track down both correctness errors and most performance issues. I recently finished up an application at work that is used for marketing, and the performance was pretty good but not great. I suspected that I was somehow GPU bound, since the framerate dropped more or less proportionally with render target resolution, but I wasn't sure exactly what the issue was. The rendering technique goes something like this:
  • Render the scene into a 'silhouette' buffer, used for identifying objects to be highlighted (only geometry that will be outlined gets rendered here)
  • Render the scene into the normal render target (whole scene is rendered here).
  • Render a full screen quad that samples the silhouette buffer and applies the highlights.
There is a normal desktop mode, and there is an Oculus Rift mode as well. Of course, the Rift mode decreases performance, since you render the whole sequence twice (once for each eye). I decided to give the Visual Studio 2013 performance and diagnostic tools a chance to diagnose what was going on, and I was totally surprised to see that there was a single draw call taking substantially more time than the others. If you aren't familiar with the performance tools, you just have to go to "Debug > Performance and Diagnostics", check CPU and GPU, and you're off to the races.

Due to the multi-pass technique that I was using, I figured it would be either a bandwidth issue or possibly a pixel shader limitation - but I totally didn't think it would be primarily from a single draw call... So I fired up the graphics debugger, took a frame capture, and looked in a similar place in the frame to see if I could trace back which draw call it was. Next I clicked on the "Frame Analysis" tab, where I found the following graph:

[Image: Frame Analysis graph showing one draw call taking substantially longer than the others]

I clicked on the obvious item and tracked it back to a simple OBJ-based model that was rendering a super high resolution mesh instead of the version intended for use in the application. So instead of going down the rabbit hole of figuring out whether my two-pass rendering was the issue, I trivially replaced the model and solved the problem.

So the moral of the story is this: make use of the modern debugging tools that are available in VS2013 and the forthcoming VS2015. They will help you understand your CPU and GPU utilization, find issues, and let you focus on the important, most-bang-for-the-buck tasks!

Using STL Algorithms

Posted 04 March 2015
C++, STL, algorithm
I have a small confession to make: I have very rarely used STL algorithms, even though I know I should. There are lots of really great reasons why I should, and there are top C++ developers out there who tell you to use them whenever they apply, but I just don't feel 100% comfortable with them yet. With that in mind, we can explore a recent foray into the use of an algorithm that made things both easier and harder at the same time :)

The Setup
I recently wrote some code that would access a vector of structures, where each structure is sorted according to one of its data members. The data member itself happened to be a float. There is nothing too fancy or special about the data structure at all. One particular use of the data stored in this vector is to find the two elements that surround a provided input value. In this case, the code is being used to perform an interpolation between those two structures, based on the location of the input value with respect to the two enclosing structures.

That sounds easy enough, but in fact it can lead to a morass of code littered with if statements, a for loop, and a few other things that are kind of 'old-school C++'. Since I was accessing two elements at a time, I was stuck with a for loop (or something equivalent), and because you have to handle the cases where the input is outside the two ends of the vector, there must be some branching involved too. By the time it was up and running, I went back to comment the code and ensure that when I come back three months from now I can still understand it. I was having a bit of trouble writing a good set of comments, which I thought was a bad sign...

The Swing
It was then that I decided to try to find an STL algorithm that could get me pretty close to the same result, hopefully with less code to maintain. In case you aren't familiar, I am talking about the contents of the algorithm header, which supplies several dozen prebuilt functions for some of the most common operations in computing with the STL data structures.

The less code you have to write, the better off you are in the long run. In addition, algorithms have (supposedly) well defined functionality, which means your code should be more expressive to boot. The trick is to know which algorithm will do what you want, and then to understand its semantics to make sure you are using it correctly. This latter part is where I fell down this time...

In case you haven't already guessed it, the algorithms that can be used for the searching task I mentioned above are std::lower_bound and std::upper_bound. If you haven't ever used them, I would recommend writing a few sample programs to play around with them - they are incredibly easy to use, and you don't need to fuss about the implementation. They just do their thing and return the result.

So I modified my function to use lower_bound, and then compared the result with the begin and end of the vector. The code was about 1/4 the size, very easy to understand, and easily described with comments. I thought it was a grand testament to the algorithm header - the stories were true, and I should be using algorithms whenever I can!

The Miss
The problem is that when I scrutinized the results of the function calls more closely, it turned out that I was always getting an iterator to the element above the input value. That didn't seem to make any sense, since a function named lower_bound should, I assumed, return the item just below my input! But no, that isn't how it works...

Let's assume a simple example of a vector of floats, populated like this:
std::vector<float> values = {1.0f, 2.0f, 3.0f};
Now if you call lower_bound and pass in a value of 1.5f, I would expect to get an iterator to the first element. It turns out that you will get the second element as a result. The reason is that the returned iterator is actually the position where you would insert the new value if you were going to insert it - the first element that is not less than the input. That's right - it isn't the element below your value, but rather the insertion location.

I thought to myself, "That's strange... I wonder what upper_bound does then...". It returns an iterator to the first item greater than your input value - which is what I would expect it to do. It turns out that the name lower_bound is not entirely in agreement with what most of us would call a lower bound. The only real difference between these two functions appears when the value you request is already in the vector (or there are multiples of that value in the vector). It is a very subtle difference, and maddeningly difficult to reason about when you are an hour or two into refactoring a relatively minor function.

The End
So the moral of the story is that algorithms are good, and they can help you. If given the choice, I would take the algorithm version of the function any day. But you must take responsibility for fully understanding what the algorithm does - and you will pay the consequences if you jump in without doing so!

If you want to try it out for yourself, here is the simple starter code that I used to play around with it:

#include <algorithm>
#include <iostream>
#include <vector>

// Standard main used here for portability (the original used the VS-specific
// stdafx.h/_tmain entry point), and the missing braces have been restored.
int main()
{
	std::vector<float> values{ 1.0f, 2.0f, 2.0f, 3.0f };

	auto lower_bound_result = std::lower_bound(values.begin(), values.end(), 2.0f);
	auto upper_bound_result = std::upper_bound(values.begin(), values.end(), 2.0f);

	std::cout << "Lower Bound: " << lower_bound_result - values.begin() << std::endl;
	std::cout << "Upper Bound: " << upper_bound_result - values.begin() << std::endl;

	return 0;
}
Microsoft's Hololens

Posted 19 February 2015

In general, I have always been interested in computer graphics. There are lots of different problems to solve, and if you like math/geometry then there really aren't many better ways to exercise your brain than working in this area. Once you know how to take geometry and project it onto an image, it can also be quite satisfying to do the opposite - start to investigate computer vision and understand how to take an image and convert it back into a set of objects. A while back, I integrated Kinect support into Hieroglyph 3 for just such a reason - you can do some really interesting stuff with a color map and a depth map of a scene, and even cooler stuff if you take that image sequence over time.

A natural extension of generating images for a monitor is to generate them for something like the Oculus Rift. This is actually not much different than regular computer graphics, except that you get the position and orientation of your camera from the Head Mounted Display (HMD), and then render the scene twice for two eyes. I also explored this by adding Oculus Rift support into Hieroglyph 3. It is really cool to play around with the technology and see how it works. Everyone talks about 'presence' with VR, and it really is a bit spooky how much you can get tricked into feeling like you are there instead of here.

However, in the end VR basically takes you from a 2D monitor window into a virtual world, and wraps that virtual world all around you. The problem is, there is no in between - you can't see anything of the real world when you have the HMD on. This presents all kinds of problems, including the question of how you interact with a virtual world when you can't see your own hands! There are lots of extremely smart people working on this very problem, many of them at Oculus I'm sure. But there is a solution to this problem already coming down the road, one that might just change the very nature of how we approach interfacing with the devices all around us: the Hololens.

[Image: the Microsoft Hololens headset]

If you think about it, the Hololens combines both computer graphics and computer vision. You get to put computer generated objects into the real world using computer vision. Of course, we don't know yet how much access we will get to the underlying technology (although that answer is apparently at least partially coming at Build according to a few interviews) but it is really cool to think you can interact with the basic structure of the world around you - at the same time you are wearing the HMD.

That totally solves the challenge of how to interact with the world around you with the HMD on. I don't know how well it is implemented, or how low the latency is on the headset, but if it works as everyone seems to be reporting, then I can't wait to get my hands on one of the development kits for the Hololens. Consider the example Minecraft demo that Microsoft showed off:

[Image: Microsoft's Minecraft demo for the Hololens]

Think of all the cool things you could do with access to the room's basic structure, and with the ability to selectively overwrite its contents. Have you ever played Portal? The opportunities are limitless...

What do you guys think - what will you do with this technology!?!?!

Data Design for Scene Graphs Pt.II

Posted 21 December 2014

Last time around, I described the current data layout of my primary scene graph class - the Entity3D. After some thought and further research into what direction I want to take the engine, I decided to make a few changes. Let's start with the printout of the current layout, after my recent changes:
2>  class Entity3D	size(288):
2>  	+---
2>   0	| m_pParent
2>   4	| ?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@ m_Name
2>  28	| Transform3D Transform
2>  216	| ?$ControllerPack@VEntity3D@Glyph3@@ Controllers
2>  232	| Renderable Visual
2>  252	| ParameterContainer Parameters
2>  268	| CompositeShape Shape
2>  284	| m_pUserData
2>  	+---
As compared to before, there is a huge reduction in size - from 396 bytes down to 288. Some of this was due to discovering some of my own incompetence (there was an extra unused matrix object in one of the objects that composed the Entity3D) and some due to actual design changes. I suppose this is a good advertisement for checking the resulting layout of your objects - to find extra matrices that don't need to be there!

The design changes show a general refactoring of the objects contents into separate classes. All of the scale, rotation, and position (and their matrices) have been refactored into a Transform3D object. The rendering related objects are now part of the Renderable class. The respective member functions have also been moved accordingly. This type of refactoring helps to consolidate the content, and it also makes it easier to include this same functionality in another class by simply including those new classes.

That leads to the other big change that I have made. Previously the Node3D class was a sub-class of Entity3D, which made both classes virtual (and hence gave them a vtable pointer). This is less than optimal since it messes with the cache, it makes every single Entity3D and Node3D bigger than it really needs to be (by at least one pointer), and it doesn't really buy you very much in either functionality or savings. So I decided to split the inheritance hierarchy and just make Entity3D and Node3D their own standalone classes.

The refactored objects I described above made it pretty easy to build a Node3D without inheriting its functionality. The only real hiccup was that the controller system had to be made template based, but that wasn't really an issue. Overall it was a pretty easy transition, and now the Node3D has a better defined purpose - to provide the links between objects in the scene graph. I'm pretty happy with the change so far...

In the end, there are some objects which were simply removed from the scene graph classes. This is primarily the bounding spheres, which will be relocated into the composite shape object. That work is still under way, so I'm sure I'll write more about it once it is ready!

Tune in to the Connect(); conference

Posted 10 November 2014
connect, microsoft, developer
Just a short entry for a quick plug. In case you haven't heard of it yet, there is a very developer focused event coming up this week from Microsoft. You can find all the details on the event page here: http://www.aka.ms/connect. There is some great stuff in store, so you won't be sorry to tune in!

Data Design for Scene Graphs

Posted 14 October 2014

CppCon 2014
In case you haven't heard, all of the sessions from CppCon 2014 are being released on the CppCon YouTube Channel. This is a fantastic way for you to catch up on the latest and greatest things that people are doing with C++ (especially modern C++), and the price is about as good as it gets. There are over 100 sessions, and they are being posted as they are processed, so be sure to check back periodically.

This post is related to one of the keynote talks by Mike Acton, titled "Data Oriented Design and C++". The talk itself was pretty good, with the delivery suffering only slightly from jumping from topic to topic (he is also more of a C fan than C++, but nobody is perfect). However, the concept and message that Mr. Acton was delivering came through loud and clear - you should be paying close attention to your data and how it is used. His examples showed his clear understanding and (what I consider to be) unique way of diagnosing what is going on. One of his examples was about the scene graph node class in Ogre3D, which he subsequently cut into and showed how he would change things.

Hieroglyph 3 Scene Graph
That got me thinking about the scene graph in Hieroglyph 3. This system is probably one of the oldest designs in the whole library, and has its origins in the Wild Magic library by Dave Eberly. The basic idea is to have a spatial class to represent your 3D objects (called Entity3D), and a node class which is derived from the spatial class (called Node3D) but which adds the scene graph connectivity. This system has worked for me for ages, and is a simple, clear design that is easy to reason about (at least to me it is...).

Prompted by the talk I mentioned above, I wanted to take a closer look at the data contained within my scene graph classes, and try to see if I am wasting memory and/or performance in an obvious way. I never tried to "optimize" this system, since it never showed up as a significant bottleneck before, but I think it will serve as a nice experiment - and it will let me explore some of the techniques for drilling into this type of information.

I am going to spread this analysis over a few blog posts, mostly so that I can provide the appropriate amount of coverage to each particular topic that I touch on. For this post, we will simply start out by identifying one easy to use tool in Visual Studio - the unsupported memory layout compiler flags. There have been many instances where Microsoft has mentioned two different memory layout flags that will dump layout info about an object to the output window in VS: /d1reportSingleClassLayoutCLASSNAME and /d1reportAllClassLayout. They do what it sounds like - basically just report the layout of one or all classes in the translation unit being compiled. You can add this to the 'Command Line' entry in the property pages of the CPP file containing the desired class, and then compile only that translation unit by selecting it and pressing CTRL+F7. Be sure to be consistent in how you test as well - using release mode is probably the most sensible, and stick to either 32- or 64-bit compilation.

The output can be a bit verbose, but within it you will find a definition for your class along with a byte offset for each member. An example is shown here:
1>  class Entity3D      size(396):
1>      +---
1>   0  | {vfptr}
1>   4  | Vector3f m_vTranslation
1>  16  | Matrix3f m_mRotation
1>  52  | Vector3f m_vScale
1>  64  | Matrix4f m_mWorld
1>  128 | Matrix4f m_mLocal
1>  192 | m_bPickable
1>  193 | m_bHidden
1>  194 | m_bCalcLocal
1>      | <alignment member> (size=1)
1>  196 | m_pParent
1>  200 | ?$vector@PAVIController@Glyph3@@V?$allocator@PAVIController@Glyph3@@@std@@ m_Controllers
1>  212 | ?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@ m_Name
1>  236 | EntityRenderParams m_sParams
1>  320 | ParameterContainer Parameters
1>  336 | Sphere3f m_ModelBoundingSphere
1>  356 | Sphere3f m_WorldBoundingSphere
1>  376 | CompositeShape CompositeShape
1>  392 | m_pUserData
1>      +---
There are a few things you can see right away without trying too hard:
  • The object is 396 bytes big
  • The class is part of an inheritance hierarchy, since it has a vfptr
  • There is some wasted space in the middle of the object due to alignment requirements
After seeing this, I started to dig in to what is really needed from these classes, and then started to identify a few different strategies for simplifying and reducing their sizes. More on those results in the next post or two!

One more thing...
On October 1st, I was re-awarded as a Visual C++ MVP! That marks six years running (yay!). Whenever I hit a milestone like this, I always like to reflect on how I got to that point - and GameDev.net is always a strong contributor to where I am today. I truly learned a ton from the people in this community, and I get a great deal of satisfaction from trying to contribute back - so thank you all for being part of something great!