Object lifecycle management

7 comments, last by Laval B 7 years, 5 months ago

Hello again everyone.

This post is related to another post I made a few weeks ago: http://www.gamedev.net/topic/682978-audio-system/#entry5314425. The link is just for reference.

In short, I'm working on the development of a 3D audio system. It's a personal project and I'm still in the prototype and experimentation phase. I have mostly been working on the algorithms for 3D sound rendering so far; Doppler shift and the distance model are the next things.

My problem is with the way the host application communicates with the mixer thread. Here are the basic objects involved in this part (a rough sketch of them in code follows the list):

- AudioSource: a struct that contains the mixing parameters that can be updated by the application (position, speed, orientation, state: play, pause, stop, etc.). This is a very lightweight object (64 bytes each), a POD in the simplest sense.

- MixerSource: an internal representation of a source containing data that is accessed only by the mixer (the current playback position, a pointer to the audio data buffer, etc.). This information is persistent for a source across updates. An AudioSource has a pointer to this structure (it could eventually be a handle of some sort to make it more opaque).

- AudioBuffer: this object contains the audio PCM data to be played by the source, along with its parameters (sample rate, etc.). A buffer can be shared by multiple sources and is accessed read-only by the mixer, which basically loads and processes about 2.5 to 3 ms worth of data every cycle.
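A rough sketch of what these structures look like in code, just to give an idea of the layout (the field names here are illustrative, not the exact ones):

#include <cstddef>

struct MixerSource;              // mixer-private, defined below

// POD updated by the application every frame or so (~64 bytes).
struct AudioSource
{
    float        position[3];
    float        speed[3];
    float        orientation[3];
    float        gain;
    int          state;          // play, pause, stop, ...
    MixerSource* internal;       // could eventually be an opaque handle
};

// PCM data and its parameters; shared between sources, read-only for the mixer.
struct AudioBuffer
{
    const short* samples;
    std::size_t  frameCount;
    int          sampleRate;
    int          channels;
};

// Mixer-side state that persists for a source across updates.
struct MixerSource
{
    const AudioBuffer* buffer;
    std::size_t        playbackPosition;   // in frames
    // filter state, resampling history, etc.
};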

So the application calls an update method that takes an array of AudioSources, a Listener and the number of AudioSources in the array. This represents an update of the sound configuration in the scene and can be done every frame or every few frames. A copy of this array is then made into a circular queue of AudioSource arrays (copying 64 sources takes less than a microsecond with memcpy) and the mixer thread just processes them. The goal of this method is to reduce the amount of synchronization between the two threads: when the mixer starts working on an update (a list of sources), it is working on its own copy.
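Here is a minimal sketch of how I picture that circular queue of snapshots (the Listener placeholder and all names are just for the example; the synchronization described in the next paragraph is left out):

#include <cstddef>
#include <cstring>

struct Listener { float position[3]; float orientation[3]; };  // placeholder

constexpr std::size_t kMaxSources = 64;
constexpr std::size_t kQueueSlots = 8;

// One queued update: a snapshot of the scene's sources plus the listener.
struct UpdateSlot
{
    AudioSource sources[kMaxSources];   // AudioSource as sketched above
    Listener    listener;
    std::size_t count;
};

class UpdateQueue
{
public:
    // Application thread: copy the sources into the next free slot.
    bool Push(const AudioSource* sources, const Listener& listener, std::size_t count)
    {
        if (m_write - m_read == kQueueSlots)
            return false;                                   // queue is full
        UpdateSlot& slot = m_slots[m_write % kQueueSlots];
        std::memcpy(slot.sources, sources, count * sizeof(AudioSource));
        slot.listener = listener;
        slot.count    = count;
        ++m_write;
        return true;
    }

    // Mixer thread: look at the oldest pending update, if any.
    const UpdateSlot* Peek() const
    {
        return (m_read == m_write) ? nullptr : &m_slots[m_read % kQueueSlots];
    }

    // Mixer thread: release the slot once it has been fully mixed.
    void Release() { ++m_read; }

private:
    UpdateSlot  m_slots[kQueueSlots];
    std::size_t m_read  = 0;
    std::size_t m_write = 0;
};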

For synchronization, I'm using a classic critical section/condition variable pair for now and it works great (a CRITICAL_SECTION / CONDITION_VARIABLE pair on Windows and a pthread_mutex_t / pthread_cond_t pair on Linux).
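In sketch form the protocol looks like this, shown here with the portable std::mutex / std::condition_variable equivalents; the flag stands in for "an update is pending in the queue":

#include <condition_variable>
#include <mutex>

std::mutex              g_mutex;
std::condition_variable g_cond;
bool                    g_updatePending = false;

// Application thread: after pushing an update into the queue, wake the mixer.
void SignalUpdate()
{
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        g_updatePending = true;
    }
    g_cond.notify_one();
}

// Mixer thread: sleep until the application has published an update.
void WaitForUpdate()
{
    std::unique_lock<std::mutex> lock(g_mutex);
    g_cond.wait(lock, [] { return g_updatePending; });
    g_updatePending = false;
}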

The problem I have is with deleting a source or a buffer. Adding is not a problem because a source will be part of an update only after it has been added. Deleting a source is another story: if the application wants to remove a source, it needs to synchronize and delete it, but there are copies of it (updates) in the queue that still hold a pointer to this source. There are different ways I could manage this, but I'm not really fond of any of them.

I would like to know your thoughts about this.

We think in generalities, but we live in details.
- Alfred North Whitehead
(std::mutex and) std::shared_ptr?

I guess my real question is why you're not using FMOD, since you mentioned it in your other post. Licensing concerns? Is this an exploratory project?

I mean it sounds like you're having fun, which is fine. (Some people juggle geese.) I'm just curious.
void hurrrrrrrr() {__asm sub [ebp+4],5;}

There are ten kinds of people in this world: those who understand binary and those who don't.

(std::mutex and) std::shared_ptr?

Yes, reference counting is the solution I was trying to avoid, but I guess there isn't much choice since buffers are shared. I still don't like the idea that the mixer thread has to do memory management.
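Something like this is roughly what it would look like, I suppose (just a sketch): the mixer-side source keeps the buffer alive, and the last reference can still be dropped onto a cleanup list rather than being freed inside the mix loop.

#include <cstddef>
#include <memory>
#include <vector>

struct AudioBuffer;   // PCM data, as described above

// The mixer-side source shares ownership of the buffer it is playing.
struct MixerSource
{
    std::shared_ptr<const AudioBuffer> buffer;
    std::size_t playbackPosition = 0;
};

// Instead of deleting inside the mix loop, the mixer can move expired
// references here and let another thread clear the list later.
std::vector<std::shared_ptr<const AudioBuffer>> g_expiredBuffers;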

We think in generalities, but we live in details.
- Alfred North Whitehead
Apart from music, you may not need to release things too aggressively unless you're in a limited-resource situation. Generally you want to have sound effects loaded and ready to fire so there's no latency when triggering them.

Even with music you would probably benefit from a single circular buffer rather than continually allocating and releasing decoded segments.
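Something along these lines, as a sketch: one ring allocated up front, the decoder fills it and the mixer drains it (a real version also has to clamp read/write spans at the wrap-around point):

#include <cstddef>
#include <vector>

// Fixed-size ring of decoded PCM frames for a streaming music track.
// The memory is allocated once; playback never touches the allocator.
class StreamRing
{
public:
    explicit StreamRing(std::size_t frames) : m_data(frames) {}

    // Decoder side: how much room is free, and where to write.
    std::size_t WritableFrames() const { return m_data.size() - (m_write - m_read); }
    float*      WritePtr()             { return &m_data[m_write % m_data.size()]; }
    void        CommitWrite(std::size_t frames) { m_write += frames; }

    // Mixer side: how much is ready, and where to read.
    std::size_t  ReadableFrames() const { return m_write - m_read; }
    const float* ReadPtr() const        { return &m_data[m_read % m_data.size()]; }
    void         CommitRead(std::size_t frames) { m_read += frames; }

private:
    std::vector<float> m_data;
    std::size_t m_write = 0;
    std::size_t m_read  = 0;
};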
void hurrrrrrrr() {__asm sub [ebp+4],5;}

There are ten kinds of people in this world: those who understand binary and those who don't.

Apart from music, you may not need to release things too aggressively unless you're in a limited-resource situation. Generally you want to have sound effects loaded and ready to fire so there's no latency when triggering them.

Even with music you would probably benefit from a single circular buffer rather than continually allocating and releasing decoded segments.

You are right. The only moment I can think of where there could be heavier allocation is when a level is unloaded to load another one. Using pools for source and buffer objects would also make allocation/deallocation times deterministic and short. Even the buffers' data could come from a linear allocator, I guess.
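For instance, a fixed-size pool along these lines would keep acquire/release O(1) and deterministic (just a sketch, not actual code):

#include <cstddef>
#include <new>

// Fixed-size pool with an index-based free list; Acquire/Release are O(1),
// so allocation time stays deterministic.
template <typename T, std::size_t N>
class Pool
{
public:
    Pool()
    {
        for (std::size_t i = 0; i < N; ++i)
            m_nextFree[i] = i + 1;              // N means "end of list"
        m_head = 0;
    }

    T* Acquire()
    {
        if (m_head == N) return nullptr;        // pool exhausted
        std::size_t index = m_head;
        m_head = m_nextFree[index];
        return new (Slot(index)) T();           // placement-new into the slot
    }

    void Release(T* object)
    {
        std::size_t index = static_cast<std::size_t>(
            reinterpret_cast<unsigned char*>(object) - m_storage) / sizeof(T);
        object->~T();
        m_nextFree[index] = m_head;
        m_head = index;
    }

private:
    void* Slot(std::size_t index) { return m_storage + index * sizeof(T); }

    alignas(T) unsigned char m_storage[N * sizeof(T)];
    std::size_t m_nextFree[N];
    std::size_t m_head;
};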

Thanks for the thoughts.

We think in generalities, but we live in details.
- Alfred North Whitehead

This problem sounds like the same issues that D3D/GL face, except they've got a user-thread, a driver-thread and the GPU (or D3D11/12/Vulkan have multiple user threads, which complicates things a bit more).

Basically -- if the user asks to deallocate a resource, the operation can't occur until after the GPU has finished with that resource, and, if the user wants to modify a resource that's in use by the GPU then some magic is required.

The first one is easy to implement. When deallocation of a resource is requested, put it into a "to be deallocated" list, which will be processed at the end of the next "Update".
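In sketch form (the names here are made up, not from any particular API), the user-facing release just queues the pointer, and the actual delete happens on the mixer's schedule:

#include <mutex>
#include <vector>

struct AudioBuffer { /* PCM data, sample rate, ... */ };

std::mutex                g_pendingLock;
std::vector<AudioBuffer*> g_pendingDeletes;

// User thread: request deallocation; nothing is actually freed here.
void ReleaseBuffer(AudioBuffer* buffer)
{
    std::lock_guard<std::mutex> lock(g_pendingLock);
    g_pendingDeletes.push_back(buffer);
}

// Mixer thread: run at the end of an update, once no queued work can
// still be referring to these buffers.
void ProcessPendingDeletes()
{
    std::vector<AudioBuffer*> doomed;
    {
        std::lock_guard<std::mutex> lock(g_pendingLock);
        doomed.swap(g_pendingDeletes);
    }
    for (AudioBuffer* buffer : doomed)
        delete buffer;
}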

The second one is a bit harder. The simplest solution involves locking. Whenever the user wants to modify a resource, have them lock the resource, and unlock it when they're finished. The lock operation waits for your mixer thread to finish what it's doing, then makes the mixer thread sleep until the user unlocks the resource. They can then modify resources whenever they like and it will behave just as deterministically as a single-threaded program would.

The issue with this method is performance though -- if the user thread does a lot of resource updates (or any at all, per update), then your throughput takes a massive hit.
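A crude sketch of that locking approach, using a single coarse lock shared by the user thread and the mixer (a real implementation would likely lock per resource):

#include <mutex>

struct AudioBuffer { short* samples; /* ... */ };

std::mutex g_resourceLock;    // held by the mixer for a whole mix pass

// User thread: blocks until the mixer finishes its current pass,
// then keeps the mixer parked until UnlockBuffer() is called.
short* LockBuffer(AudioBuffer& buffer)
{
    g_resourceLock.lock();
    return buffer.samples;    // write directly into the live buffer
}

void UnlockBuffer()
{
    g_resourceLock.unlock();  // the mixer may resume
}

// Mixer thread: cannot run while any user lock is held.
void MixOnePass()
{
    std::lock_guard<std::mutex> lock(g_resourceLock);
    // ... mix audio for this pass ...
}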

The other method is often called "orphaning" or "discard semantics" in graphics APIs. When the user requests to lock a buffer, instead of actually locking anything, you simply allocate a brand new buffer and return that to the user. The user can usually either request a "discard lock" (where you can give them a buffer with any old contents in it), or a standard lock, where after allocating your new buffer, you'd have to memcpy the old data into it first -- for use in cases where the user only wants to modify a small section of the data.

At the same time as allocating this new buffer, you add the old buffer into the "to be deallocated" list, so that it gets freed later on, when the mixer has finished with it. Ideally the user is referring to resources using some kind of handle, so you attach the new buffer pointer to their handle.
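A rough sketch of the discard path with a handle indirection (again, all names are illustrative):

#include <mutex>
#include <vector>

struct AudioBuffer
{
    std::vector<short> samples;
    int sampleRate = 0;
};

// The user refers to a buffer through a handle, so the pointer behind it
// can be swapped when the buffer is orphaned.
struct BufferHandle
{
    AudioBuffer* current = nullptr;
};

std::mutex                g_pendingLock;
std::vector<AudioBuffer*> g_pendingDeletes;   // freed later by the mixer

// Discard lock: hand the user a fresh buffer with the same parameters but
// "don't care" contents, and orphan the old one onto the deallocation list.
AudioBuffer* LockDiscard(BufferHandle& handle)
{
    AudioBuffer* old   = handle.current;
    AudioBuffer* fresh = new AudioBuffer;
    fresh->sampleRate  = old->sampleRate;
    fresh->samples.resize(old->samples.size());   // contents left unspecified

    {
        std::lock_guard<std::mutex> lock(g_pendingLock);
        g_pendingDeletes.push_back(old);          // mixer frees it when done
    }
    handle.current = fresh;                       // future lookups see the new buffer
    return fresh;
}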

The other method is often called "orphaning" or "discard semantics" in graphics APIs. When the user requests to lock a buffer, instead of actually locking anything, you simply allocate a brand new buffer and return that to the user. The user can usually either request a "discard lock" (where you can give them a buffer with any old contents in it), or a standard lock, where after allocating your new buffer, you'd have to memcpy the old data into it first -- for use in cases where the user only wants to modify a small section of the data.

At the same time as allocating this new buffer, you add the old buffer into the "to be deallocated" list, so that it gets freed later on, when the mixer has finished with it. Ideally the user is referring to resources using some kind of handle, so you attach the new buffer pointer to their handle.

This method is very interesting because, for one thing, it would allow me to modify/replace the content of a buffer that is currently in use. This is something I wasn't even considering at this point. The nice thing about it is that it isn't really that difficult to implement. The list of things to be deleted is a great idea. I will probably need some sort of reference counting because there may be multiple updates queued that use the same buffer (resource). Just a couple of questions:

1. Just to make sure I follow you, when you say "(where you can give them a buffer with any old contents in it)", do you mean recycling an old buffer (memory area)?

2. The list of buffers to be deallocated would be processed by the mixer thread when it's done with the current update?

Thank you very much for the idea.

We think in generalities, but we live in details.
- Alfred North Whitehead

1. Just to make sure I follow you, when you say "(where you can give them a buffer with any old contents in it)", do you mean recycling an old buffer (memory area)?

2. The list of buffers to be deallocated would be processed by the mixer thread when it's done with the current update?

1. Yeah, in graphics you usually have the option of a "lock-discard" operation, which locks the resource for writing by the user thread, but also results in the contents of the resource being undefined (requiring the user to completely fill it with new data). This means the driver-thread has a lot of flexibility. It can actually lock an existing resource, or it can malloc a new one, or it can recycle one that's been previously used for something else.

You might also want to offer a "lock-preserve" operation though, which locks a resource for writing by the user thread, but ensures that the resource given to the user does contain the previous state of that resource. This can mean actually locking the existing resource and returning it, or, if you're returning a malloced/recycled resource, then the driver-thread must memcpy the old data into that new allocation before handing it over to the user (sketched below, after point 2).

2. Yep. All resource lifetime operations are performed on the mixer/driver thread. The user thread just makes requests for these things to occur.
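Continuing the illustrative handle/orphan sketch from earlier in the thread, the preserve path only differs by the copy of the old contents into the new allocation:

#include <vector>

struct AudioBuffer
{
    std::vector<short> samples;
    int sampleRate = 0;
};

struct BufferHandle { AudioBuffer* current = nullptr; };

// lock-preserve: the new allocation starts out as a copy of the old one,
// then the old one is queued on the to-be-deallocated list as before.
AudioBuffer* LockPreserve(BufferHandle& handle)
{
    AudioBuffer* old   = handle.current;
    AudioBuffer* fresh = new AudioBuffer;
    fresh->sampleRate  = old->sampleRate;
    fresh->samples     = old->samples;      // copy the previous contents
    // ... push 'old' onto the pending-delete list, exactly as with discard ...
    handle.current = fresh;
    return fresh;
}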

One last question, if I may.

Would it be too much of a restriction to require that updates, as well as resource creation/destruction, can be done only by a single thread at a time? It wouldn't need to always be the same thread, and the construction of the update list could itself be done concurrently on multiple threads, but it could be submitted only from one thread (at a time). It simplifies many things, and having a single producer thread allows for optimisations.
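As a sketch of the kind of optimisation I mean (nothing implemented yet): with exactly one submitting thread and one mixer thread, the queue indices only need acquire/release atomics instead of a full mutex.

#include <atomic>
#include <cstddef>

// Single-producer / single-consumer ring of update slots.
template <typename Slot, std::size_t N>
class SpscQueue
{
public:
    bool Push(const Slot& value)                  // producer thread only
    {
        std::size_t write = m_write.load(std::memory_order_relaxed);
        if (write - m_read.load(std::memory_order_acquire) == N)
            return false;                         // full
        m_slots[write % N] = value;
        m_write.store(write + 1, std::memory_order_release);
        return true;
    }

    bool Pop(Slot& out)                           // consumer (mixer) thread only
    {
        std::size_t read = m_read.load(std::memory_order_relaxed);
        if (read == m_write.load(std::memory_order_acquire))
            return false;                         // empty
        out = m_slots[read % N];
        m_read.store(read + 1, std::memory_order_release);
        return true;
    }

private:
    Slot                     m_slots[N];
    std::atomic<std::size_t> m_write{0};
    std::atomic<std::size_t> m_read{0};
};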

We think in generalities, but we live in details.
- Alfred North Whitehead

