After spending some time with the parameter system updates described previously (here and here), I have turned my attention to a few other improvements in the sample frameworks. During the development of our book and its corresponding samples, there was clearly a need to draw some text in the samples - both for identification purposes and for displaying statistics. Text rendering has changed its form a few times in the engine, so I thought I would recount how it used to be done and how I arrived at the current approach.
[subheading]DX9 Text Rendering[/subheading]
In the early days of the Hieroglyph engine, I was using D3D9 for all of my rendering. Conveniently, D3D9 included text rendering capabilities in the D3DX library. Naturally, I just used this simple interface to throw some text onto the screen in the application code. This worked quite well, and I didn't see any real problems with it.
In Hieroglyph 2, I expanded the possibilities of text rendering by including scene graph entities that represent text. This allowed for 3D text and for some fancy ways of animating it - since each text entity was part of the scene graph, it could be manipulated with all of my standard animation techniques. This also worked well enough, but it always felt a little shoe-horned to have a 2D rendering system fit into the 3D spatial scene graph.
[subheading]DX11 Text Rendering[/subheading]
I actually skipped over D3D10 and went straight to D3D11 for Hieroglyph 3. This was mostly a learning exercise, and since D3D11 is a superset of the D3D10 functionality it made sense. In D3D11 the text rendering situation is, for the most part, a mess. There is no direct support in D3DX. Using DirectWrite/Direct2D is often recommended, and it provides a very rich feature set, but it is clearly a workaround rather than a proper solution. Another possibility is to use GDI/GDI+ as the basis of the text rendering system and then roll your own text drawing routines. This is the method that the engine currently uses, and the implementation was performed by MJP.
The basic technique is to write out a texture containing the requisite font/size/characters, and then later generate a set of vertices that render the text by selecting the appropriate texture coordinates for each character. MJP's version efficiently uses instanced rendering, and the system works quite well with little hassle to set up. The problem with this setup was that it didn't follow the same rendering paradigms as the rest of the engine - in particular when it came to multithreaded rendering...
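A minimal sketch of the texture-atlas idea follows. All names here are hypothetical (this is not the engine's actual implementation): given a monospaced font atlas laid out as a grid of glyph cells starting at the space character, each character of a string becomes one instance carrying a screen offset and the glyph's normalized UV rectangle.

```cpp
#include <string>
#include <vector>

// Per-character instance data: a screen-space offset plus the glyph's
// rectangle in the font atlas texture (normalized UVs).
struct GlyphInstance {
    float x, y;           // screen position of the character quad
    float u0, v0, u1, v1; // atlas texture coordinates
};

// Build one instance per character by indexing into a grid-layout atlas.
// Assumes the atlas cells hold ASCII characters starting at ' ' (32),
// laid out row-major with atlasCols cells per row.
std::vector<GlyphInstance> BuildTextInstances(const std::string& text,
                                              float startX, float startY,
                                              float charW, float charH,
                                              int atlasCols, int atlasRows)
{
    std::vector<GlyphInstance> instances;
    float penX = startX;
    for (char c : text) {
        int index = c - ' ';       // cell index in the atlas
        int col = index % atlasCols;
        int row = index / atlasCols;
        GlyphInstance g;
        g.x = penX;
        g.y = startY;
        g.u0 = static_cast<float>(col) / atlasCols;
        g.v0 = static_cast<float>(row) / atlasRows;
        g.u1 = static_cast<float>(col + 1) / atlasCols;
        g.v1 = static_cast<float>(row + 1) / atlasRows;
        instances.push_back(g);
        penX += charW;             // advance the pen by one cell width
    }
    return instances;
}
```

With instanced rendering, this buffer is uploaded once per frame and a single draw call expands each instance into a textured quad in the vertex shader.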
[subheading]Multithreaded Text Rendering[/subheading]
Up until earlier today, the engine simply used the text rendering system as it was, but it didn't work in multithreaded rendering mode. This was a result of how D3D11 handles multithreaded command submission - the pipeline state of a device context is reset (in most cases) after generating or executing a command list. The engine uses batches of rendering called RenderViews, each of which configures the pipeline as needed. Since the text rendering occurred outside of these render views, it didn't work in the multithreaded scenario (although it worked fine in single-threaded mode, which just uses the immediate context).
The (well, at least my) solution to the problem is to create a render view for rendering a 2D text overlay. This essentially queues up any text drawing and then processes it in a batch, in the same manner as all of the other rendering payloads. The pipeline configuration is performed in the same standardized way, making for an efficient and clean implementation. There is also a further benefit to be gained from this setup...
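The queue-and-batch pattern can be sketched as follows. This is a simplified stand-in, not the engine's ViewTextOverlay: drawing only records a request, and processing consumes the whole queue at once, which is the single point where the real view would build its vertex data and configure the pipeline.

```cpp
#include <string>
#include <vector>

// A queued text draw request: the string plus its screen position.
struct TextEntry {
    std::string text;
    float x, y;
};

// Minimal overlay view: Draw() only records requests; ProcessBatch()
// consumes the entire queue in one pass, standing in for the point
// where the real view builds vertices and sets pipeline state once.
class TextOverlayView {
public:
    void Draw(const std::string& text, float x, float y) {
        m_entries.push_back({text, x, y});
    }

    // Returns the total number of character instances flushed, standing
    // in for the single instanced draw call the real view would issue.
    size_t ProcessBatch() {
        size_t total = 0;
        for (const TextEntry& e : m_entries)
            total += e.text.size();   // one instance per character
        m_entries.clear();            // the queue is consumed each frame
        return total;
    }

private:
    std::vector<TextEntry> m_entries;
};
```

Because all draw requests funnel through one processing step, the overlay behaves exactly like any other render view from the renderer's point of view.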
In general, setting up and creating the vertex data for rendering the text characters takes a relatively significant amount of CPU time. By packaging all of that work into a render view, and subsequently processing that view on a parallel rendering thread, that CPU cost is largely hidden. If a user has a multi-core machine, then all of that manual data manipulation is done more or less for free, as long as it takes less time than at least one of the other render views being processed. This provides a nice performance bonus in addition to the cleanliness of the implementation. It's a nice thing all the way around...
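That overlap can be illustrated with a small sketch, assuming each view's CPU-side work is an independent task (the function names and workloads here are invented for illustration): processing the views concurrently hides the overlay's cost behind the longest-running sibling view.

```cpp
#include <future>
#include <numeric>
#include <string>
#include <utility>
#include <vector>

// Stand-ins for per-view command list generation: each function does
// its CPU-side work and returns the size of its simulated output.
size_t ProcessSceneView() {
    std::vector<int> work(1000, 1);           // simulated scene traversal
    return std::accumulate(work.begin(), work.end(), size_t{0});
}

size_t ProcessTextOverlayView() {
    return std::string("frame stats").size(); // simulated vertex building
}

// Process both views concurrently; the overlay's CPU cost is hidden as
// long as it finishes before the longest-running sibling view does.
std::pair<size_t, size_t> ProcessViewsInParallel() {
    auto scene = std::async(std::launch::async, ProcessSceneView);
    auto text  = std::async(std::launch::async, ProcessTextOverlayView);
    return { scene.get(), text.get() };       // join before submission
}
```

In a real D3D11 renderer each task would record into its own deferred context, with the resulting command lists executed on the immediate context after the joins.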
The changes can be seen in the latest commit to the Hieroglyph 3 repository (primarily in the ViewTextOverlay class...). It is used in the SkinAndBones sample, and is manually processed sequentially at the moment... The sample programs will be updated in the next few days.