it is based on a hacked version of my MIDI synth (extended to do 3D positional audio stuff and support arbitrary sound effects, as well as more channels, ...). (the MIDI synth then basically took over all the other sound-mixing).
while I was at it, I also threw in a dynamic environmental reverb effect (it tries to figure out what the location the player is in should sound like), ...
it isn't necessarily strictly realistic (I just sort of guessed things like how much various materials would reflect sound), but it sort of works I guess...
it does something similar to Minecraft-style block-lighting (with the listener taking the role of the torch), then takes into account which voxels were reached by approximately how much sound and how much sound they should reflect (per block type), then adds this value to the appropriate place in the reverb filter.
this is partly to try to make it so that sound mostly only bounces off the "surface" of the nearby walls/..., rather than every voxel inside a solid mass.
it takes into account the amount of sound reaching the block, the type of material, and the distance from the listener (mostly used to calculate the delay). the reverb is basically a FIR filter, and the distance is used to calculate where in the filter to add the sound-value (basically, a delay in terms of samples).
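roughly, the idea might be sketched like this (just an illustrative Python sketch, not the actual engine code: the 2D grid, the constant names, and the decay/reflectivity numbers are all made up for the example; the real thing is 3D and the material values were basically guessed anyway):

```python
import math
from collections import deque

SAMPLE_RATE = 44100
SPEED_OF_SOUND = 340.0   # m/s
VOXEL_SIZE = 1.0         # meters per voxel (assumed)
FIR_LEN = 4096
DECAY_PER_STEP = 0.0625  # energy lost per voxel step (guessed, like block light)

def flood_fill_energy(solid, lx, ly):
    """Minecraft-block-light-style fill, with the listener as the 'torch'.
    solid voxels receive energy (their exposed surface) but do not pass it
    on, so the interior of a solid mass stays 'dark'."""
    h, w = len(solid), len(solid[0])
    energy = [[0.0] * w for _ in range(h)]
    energy[ly][lx] = 1.0
    queue = deque([(lx, ly)])
    while queue:
        x, y = queue.popleft()
        e = energy[y][x] - DECAY_PER_STEP
        if e <= 0.0:
            continue
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nx < w and 0 <= ny < h and energy[ny][nx] < e:
                energy[ny][nx] = e
                if not solid[ny][nx]:   # surfaces reflect, interiors don't
                    queue.append((nx, ny))
    return energy

def build_reverb_fir(solid, reflect, lx, ly):
    """each reached solid voxel acts as a point reflector: its reflected
    energy goes into the FIR impulse response at the tap given by the
    round-trip delay (distance -> samples)."""
    fir = [0.0] * FIR_LEN
    energy = flood_fill_energy(solid, lx, ly)
    for y, row in enumerate(solid):
        for x, is_solid in enumerate(row):
            if not is_solid or energy[y][x] <= 0.0:
                continue
            dist = math.hypot(x - lx, y - ly) * VOXEL_SIZE
            tap = int(2.0 * dist / SPEED_OF_SOUND * SAMPLE_RATE)
            if tap < FIR_LEN:
                fir[tap] += energy[y][x] * reflect[y][x]
    return fir
```

so a wall 4 meters away ends up as a spike roughly 1000 samples into the filter (at 44.1kHz), scaled by how much sound reached it and the material's guessed reflectivity.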
for actual mixing though, the spatial location of the sound has no real effect (the reverb is basically just a post-filter).
basically, the "shape" exists, but only as 1-dimensional spikes along a sample-time axis. to calculate a sample's reverb, we do a dot-product of the filter with the recently-played samples (this "pulls forward" things like echoes and similar), and again for each sample.
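the per-sample dot-product part might look something like this (again just a naive sketch under assumed names; a real version would probably exploit the sparsity and/or vectorize rather than looping per tap):

```python
class ReverbState:
    """keeps the last len(fir) dry samples in a ring buffer; the reverb for
    each new sample is the dot product of the impulse response with the
    recent history (echoes get 'pulled forward' from past samples)."""
    def __init__(self, fir):
        self.fir = fir
        self.history = [0.0] * len(fir)
        self.pos = 0

    def process(self, sample):
        n = len(self.fir)
        self.history[self.pos] = sample
        wet = 0.0
        for i, coeff in enumerate(self.fir):
            if coeff != 0.0:   # the filter is mostly sparse spikes
                wet += coeff * self.history[(self.pos - i) % n]
        self.pos = (self.pos + 1) % n
        return sample + wet    # dry signal plus reverb tail
```

e.g. with a single spike of 0.5 at tap 2, an impulse comes back out two samples later at half volume, on top of the dry signal.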
as far as the algorithm is concerned though, each voxel is treated as a point in space which may reflect sound back to the listener.