floatingwoods

allocating/destroying buffers from a different thread/fiber


Hello, I never paid attention to this before, but now I am suddenly wondering whether it is really OK. I have a main application thread that launches several auxiliary threads, or rather fibers. Those fibers allocate data that is sometimes released by the application thread. That worked fine until another library (Lua) output an error saying that the same thread should release its resources. I guess that is specific to that library, but it made me curious. Thanks

Quote:
Original post by Evil Steve
There's no problem with releasing something from a thread other than the one that created it, so long as the release function is thread safe. If you're deleting stuff, the heap is thread-safe, so that's fine.
But be aware that you are incurring the cost of a synchronisation - malloc/free or new/delete must lock a mutex to ensure two threads don't modify the allocation structures at the same time.

Quote:
Original post by swiftcoder
Quote:
Original post by Evil Steve
There's no problem with releasing something from a thread other than the one that created it, so long as the release function is thread safe. If you're deleting stuff, the heap is thread-safe, so that's fine.
But be aware that you are incurring the cost of a synchronisation - malloc/free or new/delete must lock a mutex to ensure two threads don't modify the allocation structures at the same time.


True in general, but modern allocators try to allocate/free items from a thread-local heap, which avoids this problem most of the time. Granted none of the default mallocs that I'm aware of use this kind of technique, but they're around. TBB has one, for example.
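The thread-local idea can be sketched with C++'s `thread_local` keyword. This is an illustrative toy with a single size class, not how TBB's allocator actually works: each thread caches freed blocks on its own list, so allocate/free pairs that stay on one thread never take a lock; only the fallback to the global heap does.

```cpp
#include <cassert>
#include <cstdlib>

// Toy per-thread free list for one block size (illustration only).
struct Node { Node* next; };

constexpr std::size_t kBlockSize = 64;       // single size class, for brevity
thread_local Node* tls_free_list = nullptr;  // this thread's cache

void* tl_alloc() {
    if (tls_free_list != nullptr) {   // fast path: pop from the local cache,
        Node* n = tls_free_list;      // no lock needed
        tls_free_list = n->next;
        return n;
    }
    return std::malloc(kBlockSize);   // slow path: global heap (may lock)
}

void tl_free(void* p) {
    // Push onto the local cache. Correct only when called on the owning
    // thread -- a cross-thread free is exactly where a real allocator must
    // add locking or an atomic "remote free" list.
    Node* n = static_cast<Node*>(p);
    n->next = tls_free_list;
    tls_free_list = n;
}
```

A real allocator also has to return cached blocks when a thread exits and handle many size classes; both are omitted here.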

Quote:
Original post by cache_hit
modern allocators try to allocate/free items from a thread-local heap, which avoids this problem most of the time. Granted none of the default mallocs that I'm aware of use this kind of technique, but they're around. TBB has one, for example.
Does that help in the case that I allocate a buffer on one thread, and then free it on another? I don't see a trivial way to implement that without locking, and you have to take care of the edge cases...

Quote:
Original post by swiftcoder
Quote:
Original post by cache_hit
modern allocators try to allocate/free items from a thread-local heap, which avoids this problem most of the time. Granted none of the default mallocs that I'm aware of use this kind of technique, but they're around. TBB has one, for example.
Does that help in the case that I allocate a buffer on one thread, and then free it on another? I don't see a trivial way to implement that without locking, and you have to take care of the edge cases...


No, I think it will still internally lock in that case, but the OP mentioned fibers, in which case it should be OK. You're right, though: in the general case it's hard (impossible?) to have a completely lock-free allocator.

Edit: After re-reading, he still says it's released on a different thread (not just a different fiber). It might be possible to restructure this so that the releasing simply happens on a different fiber of the same thread.

My guess is that Lua (I don't know much about it) is allocating/freeing the memory for you, and for speed it uses a single-threaded allocator. In that case you need to either do it on the same thread, as you said, or on a different fiber of the same thread (still the same thread, of course, just a different way of visualizing it).
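For reference, Lua does let you supply your own allocator: `lua_newstate` takes a `lua_Alloc` callback. Below is a sketch of a lock-guarded allocator with that callback's shape; the typedef is reproduced locally so the snippet compiles without the Lua headers, and the names `AllocFn` and `guarded_alloc` are mine, not Lua's. Whether this resolves the OP's exact error depends on how the Lua state is shared between threads; the sketch only shows how the allocation hook itself could be made thread-safe.

```cpp
#include <cstddef>
#include <cstdlib>
#include <mutex>

// Same shape as Lua's lua_Alloc callback (see lua.h). With the real API
// you would pass the function to lua_newstate(guarded_alloc, userdata).
typedef void* (*AllocFn)(void* ud, void* ptr, std::size_t osize, std::size_t nsize);

static std::mutex g_alloc_mutex;  // serialises any bookkeeping you add

// Lua's contract: nsize == 0 means "free ptr and return NULL";
// otherwise behave like realloc. Note malloc/realloc/free are already
// thread-safe on their own -- the mutex only matters once you wrap extra
// state (counters, pools) around them.
void* guarded_alloc(void* /*ud*/, void* ptr, std::size_t /*osize*/, std::size_t nsize) {
    std::lock_guard<std::mutex> lock(g_alloc_mutex);
    if (nsize == 0) {
        std::free(ptr);
        return nullptr;
    }
    return std::realloc(ptr, nsize);
}
```

Even with a thread-safe allocator, a single `lua_State` itself is not safe to use from two threads at once, which may be what the original error was really about.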
