Background resource loading - waiting for some item to finish
Members - Reputation: 209
Posted 20 November 2012 - 03:53 AM
I'm trying to implement asynchronous resource loading in my engine (using a separate thread for I/O and decompression) for the first time, and I couldn't find solutions to the following problems.
1) How should I implement waiting for a specific resource to be loaded?
For instance, when I drag an asset from the asset tree view and drop it onto the render viewport in my editor,
I need to block the main thread and wait until the resource data is fully loaded or a timeout is reached.
2) And how do I specify a timeout value for the load request?
I've read that it's usually done by using request ids/tokens/handles for manipulating individual requests (e.g.: wait/cancel/setPriority).
How is it usually done?
3) How to avoid temporary allocations/extraneous copies during resource loading?
It's bad for performance when the client allocates a temporary buffer, issues an async read request into that buffer, instantiates the resource, frees the buffer, and repeats this procedure, say, a hundred times.
Should I have a single streaming buffer that is used whenever the user doesn't specify her own output buffer?
I've put the whole thing on Pastebin, and I'd be very pleased if you could look through the code, spot errors, and propose improvements:
header file: http://pastebin.com/8pyHnrSh
source file: http://pastebin.com/WFQ1dPyW
Members - Reputation: 2060
Posted 20 November 2012 - 07:00 AM
2) std::future has timeout capability as well.
If you don't have a C++11 compiler, you could use boost::future as a replacement.
Edited by Rattrap, 20 November 2012 - 07:02 AM.
Moderators - Reputation: 40012
Posted 20 November 2012 - 08:21 AM
2) Yeah, often the function that initiates a request returns some kind of identifier that you can use to check on its status or abort the request. You probably don't need to build timeouts into the system at this level, as long as you've got the ability to abort at this level (even that isn't required for many games). Then the systems that use the file loader can implement user-triggered, time-based, etc. aborting, if they require it.
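A rough sketch of such an identifier-based API (names and layout are illustrative, not taken from anyone's actual code). The loader hands back an id when a request is issued; the caller can poll its status or abort it, and any timeout policy is layered on top by the calling system:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>

using RequestId = std::uint32_t;

enum class RequestStatus { Pending, Completed, Aborted, Unknown };

class LoadQueue {
public:
    RequestId issue(const std::string& path) {
        RequestId id = nextId_++;
        requests_[id] = RequestStatus::Pending;
        (void)path; // a real loader would enqueue the path for the I/O thread
        return id;
    }
    RequestStatus status(RequestId id) const {
        auto it = requests_.find(id);
        return it == requests_.end() ? RequestStatus::Unknown : it->second;
    }
    void abort(RequestId id) {
        auto it = requests_.find(id);
        if (it != requests_.end() && it->second == RequestStatus::Pending)
            it->second = RequestStatus::Aborted;
    }
    void markCompleted(RequestId id) { // called by the loader when a read finishes
        auto it = requests_.find(id);
        if (it != requests_.end() && it->second == RequestStatus::Pending)
            it->second = RequestStatus::Completed;
    }
private:
    RequestId nextId_ = 1;
    std::unordered_map<RequestId, RequestStatus> requests_;
};
```

With this shape, a time-based abort lives in the calling system: record the issue time, and if the status is still Pending after the deadline, call abort(id).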
3) With decompression, it's common for the decompression thread to keep its own buffers for re-use.
If you are reading a lot of files into temporary memory before parsing them and throwing the allocations away, it might make sense to allocate a small ring of buffers for the file loader to use as defaults. You'd have to collect some statistics on the size of your "temporary" files and how many are being loaded at once to make an educated guess as to the best sizes to use.
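The "small ring of buffers" idea could look roughly like this minimal sketch (class name and sizes are illustrative). The loader hands out scratch buffers in round-robin order instead of allocating a fresh one per request; the caller must be finished with a buffer before the ring wraps back around to it:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical ring of reusable scratch buffers for the file loader to use
// when the caller supplies no destination buffer.
class ScratchRing {
public:
    explicit ScratchRing(std::size_t bufferSize, std::size_t count = 4)
        : buffers_(count, std::vector<char>(bufferSize)) {}

    // Hands out the next buffer in round-robin order.
    std::vector<char>& acquire() {
        std::vector<char>& buf = buffers_[next_];
        next_ = (next_ + 1) % buffers_.size();
        return buf;
    }
private:
    std::vector<std::vector<char>> buffers_;
    std::size_t next_ = 0;
};
```

The statistics mentioned above (typical "temporary" file sizes, how many loads are in flight at once) would drive the bufferSize and count parameters.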
As for "allocate buffer, async read into buffer, instantiate resource from buffer, free buffer", you can often just "instantiate resource, async read into resource". I've split my async asset loading into a few steps to help with this --
When you issue an async request, you provide a callback for allocating the asset, and a callback for parsing the asset. When the file size becomes known (I store a table of sizes as part of the file system, which is resident in RAM, so this is immediate), the allocation callback is triggered to create the resource. Then the resource is filled in asynchronously, and when it's done the 'parsing' callback is triggered to finalize the resource. Assets can also report an array of sizes if they require some of their data to be streamed into different allocations (e.g. a texture header for the CPU and the actual texel data for the GPU).
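The two-callback flow described above can be sketched as follows (struct and function names are hypothetical; the "read" is simulated synchronously here, whereas a real engine would do it on the I/O thread). allocate() runs as soon as the file size is known, and parse() runs once the bytes have landed in that allocation:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <functional>
#include <vector>

// Two callbacks per request: one to create the resource once its size is
// known, one to finalize it after the read completes.
struct AsyncLoadRequest {
    std::function<void*(std::size_t)> allocate; // create the resource
    std::function<void(void*)>        parse;    // finalize it after the read
};

// Simulated loader: reads go straight into the resource's own memory,
// so no temporary buffer or extra copy is needed.
void completeLoad(const AsyncLoadRequest& req,
                  const std::vector<char>& fileBytes) {
    void* dst = req.allocate(fileBytes.size());          // size known up front
    std::memcpy(dst, fileBytes.data(), fileBytes.size()); // "async read into resource"
    req.parse(dst); // finalize: pointer patching, GPU upload, etc.
}
```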
4) If you want to write OS-specific code, you can usually implement async file loading without any extra threads, as OS file interactions are typically async natively (the blocking API is often a wrapper around the async API). However, if you want to do background decompression, that obviously does require some threading ;)
Edited by Hodgman, 20 November 2012 - 08:30 AM.
Members - Reputation: 209
Posted 20 November 2012 - 08:50 AM
1) I get a pointer and start manipulating the created mesh instance (e.g. move, rotate), or get an error screen.
2) Where can I find out more about efficient implementations of such a system (a load-request queue with fast access by request ids and minimal allocations)?
Right now my queues are dynamic arrays, and I perform a linear search to find requests by their ids.
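One common alternative to a linear search, sketched here with illustrative names: pack an array index plus a generation counter into the id, so lookup is a direct index (O(1)) and a stale id from a recycled slot is detected instead of silently aliasing a newer request:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct RequestId { std::uint32_t index; std::uint32_t generation; };

struct Slot {
    std::uint32_t generation = 0;
    bool          live       = false;
    int           payload    = 0; // stands in for the real request data
};

class RequestTable {
public:
    RequestId add(int payload) {
        std::uint32_t i = 0;
        for (; i < slots_.size(); ++i)  // a real version would keep a free list
            if (!slots_[i].live) break;
        if (i == slots_.size()) slots_.emplace_back();
        Slot& s = slots_[i];
        s.live = true;
        s.payload = payload;
        return RequestId{i, s.generation};
    }
    int* find(RequestId id) { // O(1): direct index plus generation check
        if (id.index >= slots_.size()) return nullptr;
        Slot& s = slots_[id.index];
        if (!s.live || s.generation != id.generation) return nullptr;
        return &s.payload;
    }
    void remove(RequestId id) {
        if (!find(id)) return;
        Slot& s = slots_[id.index];
        s.live = false;
        ++s.generation; // invalidate outstanding ids for this slot
    }
private:
    std::vector<Slot> slots_;
};
```

Slots are reused instead of freed, so steady-state operation does no allocations; add() here scans for a free slot, but a free list would make that O(1) as well.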
3) So, your advice would be to delegate the task of managing temporary memory to the client?
(Then I'll have to think about fighting fragmentation on the client side.)
(In the background thread I'll be doing reading, decompression, pointer patching, setting external references (dependencies) to proxy objects, and issuing requests to load them later.)