About this blog
Musings and progress reports from an independent game engine developer
Entries in this blog
Hey! I decided that I need a place to write about everything that is going on in my software development. So here it is.
My current project: A set of easy-to-use cross-platform C++ libraries aimed at application development and game engines. These will be released open-source when they are complete.
Framework: This is the base library that all of the other libraries build on. It contains classes needed in most applications, especially games: my own set of basic data structures, an ASCII/Unicode template string class supporting all encoding types, abstractions for streaming I/O, and thread and synchronization classes. It also includes a full-featured multi-dimensional math library with SIMD processing extensions (AltiVec and SSE).
Currently working on - updating old code to the newest quality standards and documenting stuff
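As a rough illustration of the kind of math classes described above, here is a minimal scalar sketch of a 3D vector type (the names are hypothetical, not the library's real API; the real SIMD versions would back these operations with SSE/AltiVec intrinsics):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of a math-library vector. A SIMD build would
// replace these scalar bodies with SSE or AltiVec intrinsics; this
// version only shows the interface shape.
struct Vector3
{
    float x, y, z;

    Vector3 operator+(const Vector3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vector3 operator*(float s) const { return {x * s, y * s, z * s}; }

    // Dot product: sum of componentwise products.
    float dot(const Vector3& o) const { return x*o.x + y*o.y + z*o.z; }

    // Cross product: vector perpendicular to both operands.
    Vector3 cross(const Vector3& o) const
    {
        return { y*o.z - z*o.y, z*o.x - x*o.z, x*o.y - y*o.x };
    }

    float magnitude() const { return std::sqrt(dot(*this)); }
};
```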
Sound: This will eventually be a full-featured sound processing and I/O library. It will support a wide range of multichannel DSP effects and sound file formats, and expose a comprehensive cross-platform device driver interface. It aims to be on par with tools like FMOD and Wwise. In addition, it will be paired with a new integrated version of GSound, my real-time acoustic simulation engine (aimed at being fast enough for games).
Currently working on - streaming sound file I/O and various DSP effects.
Physics: This library is a home-grown 3D physics engine supporting all standard collision primitive and constraint types. It uses GJK/EPA for convex collision detection and a sequential-impulse constraint solver. It will eventually support all commonly expected features and compete with the Bullet physics library.
Currently working on - triangle mesh vs. primitive collision detection.
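To give a flavor of the sequential-impulse approach mentioned above, here is a deliberately simplified single-contact solve along the contact normal (names and the 1D reduction are illustrative only; a real solver iterates over many contacts and clamps accumulated impulses):

```cpp
#include <cassert>

struct Body { double invMass; double velocity; };  // 1D, along the contact normal

// One sequential-impulse iteration for one contact: compute the impulse
// that removes approaching relative velocity (with restitution e), clamp
// it so the contact can only push, and apply it to both bodies.
double solveContact(Body& a, Body& b, double restitution)
{
    double relVel = b.velocity - a.velocity;   // > 0 means separating
    double j = -(1.0 + restitution) * relVel / (a.invMass + b.invMass);
    if (j < 0.0) j = 0.0;                      // contacts never pull
    a.velocity -= j * a.invMass;
    b.velocity += j * b.invMass;
    return j;
}
```

With equal masses and restitution 1, the two bodies simply exchange velocities, as expected for an elastic collision.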
Graphics: This is an extremely flexible graphics engine built on top of OpenGL. The main design goal was to maximize flexibility and usefulness while minimizing the required complexity. It uses an automatic shader attribute binding system that lets designers set the semantic usage of shader inputs and have the engine automatically supply the necessary information to the shader. It is renderer-type agnostic: it can be used to implement a deferred renderer or a forward renderer. The library is designed so that users can build complex, efficient, application-specific renderers with a generic material system in the least amount of code possible - it should take less than 500 lines of user code to write a complex data-driven renderer with dynamic lights and shadows.
Currently working on - adding support for shader uniform arrays and improving the material system.
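The semantic attribute binding idea above can be sketched roughly like this (all names here - AttributeUsage, BindingTable - are illustrative, not the engine's real API): the shader author tags each input with a usage, the engine registers a data source per usage, and draw-time resolution needs no shader-specific code.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch of semantic shader-input binding.
enum class AttributeUsage { Position, Normal, TexCoord, ModelViewProjection };

class BindingTable
{
public:
    // Declared by the shader/material author: input name -> semantic usage.
    void declareInput(const std::string& name, AttributeUsage usage)
    { usageForInput[name] = usage; }

    // Supplied by the engine: usage -> buffer/uniform handle.
    void provide(AttributeUsage usage, int handle)
    { handleForUsage[usage] = handle; }

    // Called at draw time: resolve a shader input to the engine-supplied
    // handle without any per-shader glue code.
    int resolve(const std::string& name) const
    { return handleForUsage.at(usageForInput.at(name)); }

private:
    std::map<std::string, AttributeUsage> usageForInput;
    std::map<AttributeUsage, int> handleForUsage;
};
```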
My timeline to finish all of these libraries is the end of this year. Physics and Graphics are around 60% complete, Sound is around 30% complete, Framework is 85% complete. Hopefully these will be a great resource for the game development community when they are released!
I've been making more progress on the audio library I'm developing. The main things I've finished in the last few days are the WASAPI device interface code for the Windows version, as well as a compressor/limiter effect.
For background, my audio library exposes a simple interface for accessing the available audio devices connected to a system.
While implementing an interface to Apple's Core Audio device system was fairly painless and robust, using WASAPI on Windows has been painful. Even having followed the documentation closely, I still had trouble getting everything to work as expected.
The exclusive/shared mode tradeoff is ridiculous (OS X has no such thing). The result is that multichannel audio devices can't expose any more channels in shared mode than are configured in the control panel (e.g. stereo or surround). I had heard that WASAPI was more pro-audio friendly, but it doesn't seem to allow true multitrack recording or playback in anything other than exclusive mode. The problem with exclusive mode is that it doesn't allow any other application access to the device - a big no-no if you're trying to get your app to play nice with the rest of the system.
It looks like I will be forced to add an ASIO layer to the Windows code in order to allow non-intrusive multitracking on systems with ASIO drivers installed.
The other major hurdle in adding device support was dealing with device events - mainly detecting when a device is connected or disconnected, or when a default device changes. For instance, I have a class called DefaultSoundDevice which automatically keeps track of the current default system input and output devices and switches to the proper device when either is changed in the system control panel.
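One plausible shape for this (a sketch with made-up names, not the library's real classes) is a simple listener registry: the platform layer - WASAPI's IMMNotificationClient on Windows, Core Audio property listeners on the Mac - fires an event, and something like DefaultSoundDevice reacts by swapping devices.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Illustrative device-change event dispatch.
class DeviceManager
{
public:
    using DeviceChangedCallback = std::function<void(const std::string&)>;

    // A DefaultSoundDevice-like object registers here to be told when
    // the system default output changes.
    void addDefaultOutputListener(DeviceChangedCallback cb)
    { listeners.push_back(std::move(cb)); }

    // Called from the platform-specific notification code.
    void notifyDefaultOutputChanged(const std::string& newDeviceID)
    {
        for (auto& cb : listeners)
            cb(newDeviceID);
    }

private:
    std::vector<DeviceChangedCallback> listeners;
};
```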
The next thing I have to handle is detecting the semantic usage of each output channel and forwarding this info to the client - which speaker does each channel correspond to? This way, surround and other multichannel configurations can be sure that audio is sent to the correct speakers.
Finally, I finished implementing a full-featured compressor/limiter in the DSP framework I am developing. It has all of the standard controls: threshold, ratio, attack, release, makeup gain, plus a few less-standard ones: variable soft knee, variable-length RMS peak-sensing window, and input gain.
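To show how the threshold, ratio, and soft-knee controls interact, here is a sketch of a compressor's static gain curve (parameter names are illustrative; the attack/release detector smoothing is omitted). Levels are in dB; inside the knee the transfer curve blends quadratically between unity gain and the full ratio.

```cpp
#include <cassert>
#include <cmath>

// Static compressor curve: input level in dB -> output level in dB.
// kneeDB is the total knee width; 0 gives a hard knee.
float compressedLevelDB(float inputDB, float thresholdDB, float ratio, float kneeDB)
{
    float over = inputDB - thresholdDB;
    if (2.0f * over <= -kneeDB)
        return inputDB;                        // below threshold: unity gain
    if (2.0f * over >= kneeDB)
        return thresholdDB + over / ratio;     // above the knee: full ratio
    // Inside the knee: quadratic interpolation between the two segments.
    float t = over + kneeDB * 0.5f;
    return inputDB + (1.0f / ratio - 1.0f) * t * t / (2.0f * kneeDB);
}
```

The makeup and input gain stages would simply add constant dB offsets after and before this curve, respectively.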
I've also finished WAVE file encoding/decoding, OGG file decoding (encoding to come), and a few utility classes that handle file recording/playback.
I've been trying to make some more progress lately on the general-purpose sound processing and I/O library I'm developing (as yet unnamed). Here are a few of the things I've been working on:
Filter Architecture: My engine is designed around audio processing objects called SoundFilters which act kind of like a VST or AU plugin. They provide a generic interface for processing input and output audio. I've recently reworked the class design to support multiple inputs and outputs per filter (which can themselves be any number of channels wide), as well as input and output names. Another addition has been an implementation of a generic parameter system. Filters subclass SoundFilter and override methods that provide an interface to generic-typed parameters. This allows filters to be used fully without knowing their actual type, similar to how generic parameters work on a VST or AU plugin. My system currently allows boolean, integer, float, and double-typed parameters.
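A stripped-down sketch of that parameter interface might look like this (method names and the GainFilter example are my illustration, not the library's actual API; I've also reduced the parameter types to float for brevity):

```cpp
#include <cassert>
#include <string>

// Illustrative generic-parameter interface in the spirit of a VST/AU
// plugin's parameter list: a host can enumerate, read, and write
// parameters without knowing the concrete filter type.
class SoundFilter
{
public:
    virtual ~SoundFilter() {}
    virtual int getParameterCount() const = 0;
    virtual std::string getParameterName(int index) const = 0;
    virtual bool getParameter(int index, float& value) const = 0;
    virtual bool setParameter(int index, float value) = 0;
};

// Example concrete filter exposing one parameter.
class GainFilter : public SoundFilter
{
public:
    int getParameterCount() const override { return 1; }
    std::string getParameterName(int index) const override
    { return index == 0 ? "Gain (dB)" : ""; }
    bool getParameter(int index, float& value) const override
    { if (index != 0) return false; value = gainDB; return true; }
    bool setParameter(int index, float value) override
    { if (index != 0) return false; gainDB = value; return true; }

private:
    float gainDB = 0.0f;
};
```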
Threaded Recording: One thing I've noticed in Apple's Logic is its tendency to halt recording if the destination disk is too slow. I've added capabilities to my engine that let it transparently buffer data to a separate encoding thread, which then writes the data to disk as fast as it can. This keeps slow or blocking disk I/O from stalling the audio rendering thread and allows more robust recording than Logic provides. I was able to record 100 mono 24-bit/44.1kHz WAV files to disk in real time on my 7200rpm laptop drive.
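The core of such a scheme is a thread-safe buffer queue between the audio thread and the writer thread - here is a minimal sketch under that assumption (not the library's real classes): the audio thread pushes rendered buffers and never touches the disk, while the encoding thread blocks on the queue and performs the slow file I/O.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Illustrative producer/consumer queue for threaded recording.
class BufferQueue
{
public:
    // Called from the audio thread; never blocks on disk I/O.
    void push(std::vector<float> buffer)
    {
        std::lock_guard<std::mutex> lock(mutex);
        buffers.push(std::move(buffer));
        condition.notify_one();
    }

    // Called from the writer thread; returns false once finish() has
    // been called and every pending buffer has been drained.
    bool pop(std::vector<float>& buffer)
    {
        std::unique_lock<std::mutex> lock(mutex);
        condition.wait(lock, [this] { return !buffers.empty() || done; });
        if (buffers.empty())
            return false;
        buffer = std::move(buffers.front());
        buffers.pop();
        return true;
    }

    void finish()
    {
        std::lock_guard<std::mutex> lock(mutex);
        done = true;
        condition.notify_one();
    }

private:
    std::queue<std::vector<float>> buffers;
    std::mutex mutex;
    std::condition_variable condition;
    bool done = false;
};
```

In the real engine the writer thread would encode each buffer and write it to the output file instead of just consuming it.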
Digital Filter Design: I know very little about how to actually design IIR filters and implement them in DSP code, so I've been trying to learn the math necessary to understand them. I'm working through a free online course from MIT: Signals and Systems. Hopefully I'll understand this stuff a lot better afterwards. I'm stuck on a few things in the library until I can get a good set of EQ filters done (high-pass, low-pass, high-shelf, low-shelf, parametric).
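For what it's worth, a common starting point for exactly this set of EQ filters is Robert Bristow-Johnson's "Audio EQ Cookbook" biquad formulas. Here is a sketch of its second-order low-pass (the shelf and parametric types come from the same cookbook with different coefficient formulas; struct layout and names are my own):

```cpp
#include <cassert>
#include <cmath>

// Second-order (biquad) IIR filter, Direct Form I.
struct Biquad
{
    double b0, b1, b2, a1, a2;         // coefficients, normalized so a0 == 1
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0;  // delay state

    // RBJ cookbook low-pass: cutoff frequency and Q at a given sample rate.
    static Biquad lowPass(double sampleRate, double cutoffHz, double q)
    {
        const double pi = 3.14159265358979323846;
        double w0 = 2.0 * pi * cutoffHz / sampleRate;
        double alpha = std::sin(w0) / (2.0 * q);
        double cosW0 = std::cos(w0);
        double a0 = 1.0 + alpha;
        Biquad f;
        f.b0 = (1.0 - cosW0) / 2.0 / a0;
        f.b1 = (1.0 - cosW0) / a0;
        f.b2 = f.b0;
        f.a1 = (-2.0 * cosW0) / a0;
        f.a2 = (1.0 - alpha) / a0;
        return f;
    }

    // y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    double process(double x)
    {
        double y = b0*x + b1*x1 + b2*x2 - a1*y1 - a2*y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return y;
    }
};
```

A quick sanity check on a low-pass is its DC gain: feeding a constant signal through it should converge to that same constant.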