
New Parameter Management System for Hieroglyph 3

Posted by Jason Z, 28 May 2011 · 368 views

As described in my last entry (which seems like it was ages ago), I have been overhauling my parameter system in an attempt to speed things up when it comes to configuring the rendering pipeline for a particular draw call. The overall process is fairly automated already, but I wanted to try to minimize the number of string comparisons needed to perform a rendering pass. As a side note, this was identified as one of the bigger performance wasters in the multithreaded rendering sample for our book - so this isn't a wild goose chase :)


How it Currently Works


The tricky part is to design a system that isn't totally different from the current one, supports multiple threads using the parameters in parallel on the CPU, and allows very fast parameter lookups for both writing and reading, for every object in every frame. Given how frequently these operations are performed, there is the potential for a good speed improvement if a useful design can be found. The existing system actually provided for multiple parameter manager instances to be used in parallel, with one instance for each rendering thread. This allowed individual rendering passes to have their own 'ecosystem' of parameters that wouldn't be affected by anything being used by other threads; for example, the view matrix could be used by multiple threads in isolation without any problem.

This works, but the big drawback is that any client that wants to write a parameter uses a function call such as this:

ParameterManager::SetVectorParameter( std::wstring name, Vector4f value );

The name is used to look up a parameter instance in a std::map, and later on the user of the parameters (i.e. the renderer) reads the parameter data in the same way. This multiplies the number of lookups needed, and they are performed for every named parameter that is used in a shader. With D3D11's five programmable pipeline stages (plus the compute shader), this can add up to a lot of lookups...
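
For illustration, here is a minimal sketch of that pattern. The Vector4f stand-in, the map member, and the getter are my own placeholders rather than the actual Hieroglyph 3 code, but they show how every write and every read pays for a map lookup on the parameter name:

#include <map>
#include <string>

struct Vector4f { float x, y, z, w; };     // stand-in for the engine's vector type

class ParameterManager
{
public:
    void SetVectorParameter( std::wstring name, Vector4f value )
    {
        m_Vectors[name] = value;            // lookup #1: the writer resolves the name
    }

    Vector4f GetVectorParameter( const std::wstring& name ) const
    {
        auto it = m_Vectors.find( name );   // lookup #2: the renderer resolves it again
        return ( it != m_Vectors.end() ) ? it->second : Vector4f{};
    }

private:
    std::map<std::wstring, Vector4f> m_Vectors;
};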


The Initial Solution


My initial thought on how to solve this problem was to eliminate the lookup altogether. Instead, any client that writes a parameter value would initialize its own copy of the parameter and then just reference the parameter directly. This does indeed eliminate the cost of the lookup and has an amortized cost of zero, which is pretty hard to beat :). However, if every client has a direct reference to a named parameter, then it breaks the parallelizability of the whole system. I could have a render view reference a unique copy of each parameter per thread, but I don't guarantee that a render view will be run on the same thread from frame to frame... I needed something a little better.
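
A minimal sketch of that idea (the class and member names are purely illustrative, not the actual Hieroglyph 3 types): the client resolves the parameter once and then writes through the pointer every frame, so the per-frame name lookup disappears, but every thread ends up sharing the same storage.

struct Vector4f { float x, y, z, w; };     // stand-in for the engine's vector type

// A single named parameter, referenced directly by its writers and readers.
class VectorParameter
{
public:
    void     SetValue( const Vector4f& value ) { m_Value = value; }
    Vector4f GetValue() const                  { return m_Value; }
private:
    Vector4f m_Value;   // one copy only - two threads writing different
                        // values here will stomp on each other
};

class RenderView
{
public:
    // The reference is resolved once at initialization (one name lookup ever),
    // so the per-frame cost of setting the parameter is effectively zero.
    void SetViewParameterRef( VectorParameter* pParam ) { m_pViewParam = pParam; }

    void PreRender()
    {
        // No lookup per frame - but also no isolation between threads.
        m_pViewParam->SetValue( Vector4f{ 0.0f, 0.0f, -5.0f, 1.0f } );
    }

private:
    VectorParameter* m_pViewParam = nullptr;
};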


Final Solution


Instead, I decided to have multiple parameter manager instances reference a single std::map of parameters, and to store multiple copies of the data inside the parameter objects - one copy for each thread. Each parameter manager is then assigned an ID, and the rendering clients use the parameter manager for a given thread without knowing which copy of the data they are using (the parameter manager selects the right array index transparently). This allows parallel access to the data from multiple threads, while still providing the "pre-lookup" benefits mentioned above, and it actually reduces the overall complexity of the system. I am quite pleased with this solution, and it has the nice side benefit that existing code can run in single-threaded mode unmodified - so I stay backwards compatible with my own sample programs.
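
Here is a rough sketch of that layout, using the same illustrative names as before (this is my own simplification under assumed class names, not the Hieroglyph 3 source): the parameter object carries one value slot per thread, and each manager is bound to a thread ID that it uses to pick the right slot.

#include <array>
#include <map>
#include <memory>
#include <string>

struct Vector4f { float x, y, z, w; };      // stand-in for the engine's vector type

constexpr size_t NUM_THREADS = 4;           // assumed maximum number of rendering threads

// Each parameter object now holds one value slot per thread.
class VectorParameter
{
public:
    void     SetValue( const Vector4f& value, size_t threadID ) { m_Values[threadID] = value; }
    Vector4f GetValue( size_t threadID ) const                  { return m_Values[threadID]; }
private:
    std::array<Vector4f, NUM_THREADS> m_Values{};
};

using ParameterMap = std::map<std::wstring, std::unique_ptr<VectorParameter>>;

// Every manager references the same shared map but is bound to one thread ID,
// and it transparently selects that thread's slot inside each parameter.
class ParameterManager
{
public:
    ParameterManager( size_t threadID, ParameterMap& sharedParameters )
        : m_ThreadID( threadID ), m_Parameters( sharedParameters ) {}

    void SetVectorParameter( const std::wstring& name, const Vector4f& value )
    {
        GetOrCreateParameter( name )->SetValue( value, m_ThreadID );
    }

    Vector4f GetVectorParameter( const std::wstring& name )
    {
        return GetOrCreateParameter( name )->GetValue( m_ThreadID );
    }

    // Parameter creation is assumed to happen during single-threaded setup;
    // during parallel rendering only the per-thread value slots are touched.
    VectorParameter* GetOrCreateParameter( const std::wstring& name )
    {
        auto& pParam = m_Parameters[name];
        if ( !pParam ) pParam = std::make_unique<VectorParameter>();
        return pParam.get();
    }

private:
    size_t        m_ThreadID;
    ParameterMap& m_Parameters;
};

Creating one manager per rendering thread over the same shared map gives each thread its own view of the data, while a single-threaded program simply uses the manager for thread 0 and never notices the extra slots.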

The implementation took some time - the better part of a month (mostly due to limited free time, but still...). Now that it is in place, I can run some speed tests and see how much of a difference it makes.


The Results


As mentioned above, I wanted to improve the performance of the multithreaded sample program from our book, which makes it a natural choice for a before-and-after benchmark. The sample essentially maximizes the number of state changes by rendering reflective paraboloid maps with many surrounding objects. The demo is run in 512x512 output mode with three reflective objects and a varying number of objects around them (ranging from 200 to 1000), using 4 threads on a quad-core machine. The following chart shows the improvement, with FPS along the y-axis and the number of objects along the x-axis.

[Chart: framerate (FPS) vs. number of objects for the old and new parameter systems]

As you can see, the new method runs at roughly 150% of the framerate of the old one. To say the least, I am happy with the results :). With this improvement, I think the parameter system can be considered good enough; it just needs to be cleaned up before I push it to the CodePlex repository.

So the point is this - don't be afraid to fundamentally change the way you perform a very frequent task, since that is often where the big performance wins are! Even if it takes lots of work and time (and for hobbyists that is tough to swallow), it can really be worth it in the end.



