However, as the design developed and the library took shape, a number of its key elements prohibited this async ability from becoming a reality. Last week, however, I sat down and had a think about how I want this all to work.
So, the main thing about async transfers is that in order to make use of them it seems your thread has to be in an alertable state; based on my current investigations, that means you are either sleeping in said state or waiting on an event object. Now, for applications which have direct control over the async IO going on this isn't really a problem; however, we are trying to wrap all of this functionality away and present the user with a clean interface which handles everything behind the scenes.
Enter worker threads.
Now, the basic idea is that when the user issues an async load call, the library sets up the information needed for the load and then hands the data off to a worker thread, which starts the IO and waits for it to complete before processing it. As we don't want our worker threads constantly chewing up resources looking for new work, some kind of blocking system is needed.
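To make the "blocking system" idea concrete, here's a minimal sketch of a work queue where workers sleep until work arrives, using a condition variable in place of the platform message queues discussed below. The names (`WorkQueue`, `LoadRequest`) are my own for illustration, not part of GTL.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <utility>

// Hypothetical request type; a real one would carry format info, callbacks, etc.
struct LoadRequest {
    std::string path;  // file (or URL) to load
};

class WorkQueue {
public:
    // Called from the main thread when the user issues an async load.
    void push(LoadRequest req) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(req));
        }
        cond_.notify_one();  // wake exactly one sleeping worker
    }

    // Called by a worker thread; blocks (no busy-waiting) until work arrives.
    LoadRequest pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !queue_.empty(); });
        LoadRequest req = std::move(queue_.front());
        queue_.pop();
        return req;
    }

private:
    std::mutex mutex_;
    std::condition_variable cond_;
    std::queue<LoadRequest> queue_;
};
```

The key property is that a worker inside `pop()` consumes no CPU while the queue is empty, which is exactly what `GetMessage()` gives you for free on Win32.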
Enter message systems.
Each thread will basically be a loop around a GetMessage() call (on Win32; on Linux/OS X it looks like the message queue API should do the job); this call blocks until a message is retrieved. In this case we'll have two message types:
- New data to process
- Shut down (so the workers can exit cleanly when the library is torn down)
This does introduce a new requirement for a start-up function for GTL. The alternative was to fix things at compile time, but I feel the ability to adapt the number of worker threads to the number of cores at runtime would be sane. Also, the introduction of an init function could well remove the slight static hackery which GTL currently uses to initialise a few internal structures.
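Picking the worker count at runtime could look something like the following; `gtlWorkerCount` is a hypothetical helper of my own invention, not GTL's actual API, and the fallback logic is an assumption.

```cpp
#include <algorithm>
#include <thread>

// Decide how many worker threads to spawn: honour an explicit request,
// otherwise ask the hardware. hardware_concurrency() is allowed to
// return 0 when it can't tell, so clamp to at least one worker.
unsigned gtlWorkerCount(unsigned requested = 0) {
    if (requested != 0)
        return requested;  // caller override
    unsigned cores = std::thread::hardware_concurrency();
    return std::max(1u, cores);
}
```

An init function would call this once, spawn that many workers, and record their IDs for the dispatch queue described below.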
The start-up routine needs to create a group of worker threads, which enter a suspended state via the GetMessage() call, and keep track of their IDs in a queue.
When there is new work to do, the ID at the front of the queue is used to send a message to that thread giving it the details to process the data; the thread ID is then pushed onto the back of the queue.
(That all said, I think the queue magic is only needed on Windows; from what I can gather about the message queues on Linux/OS X, threads pull work off on a first come, first served basis.)
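The front-of-queue/back-of-queue rotation above amounts to round-robin dispatch, which can be sketched like so; plain indices stand in for Win32 thread IDs, and `Dispatcher` is my own name for it.

```cpp
#include <cstddef>
#include <queue>

// Round-robin dispatch over a fixed pool of workers: take the ID at the
// front of the queue, hand it the work, then rotate it to the back.
class Dispatcher {
public:
    explicit Dispatcher(std::size_t workers) {
        for (std::size_t i = 0; i < workers; ++i)
            ids_.push(i);  // start-up: record every worker's ID
    }

    // Pick the worker to message next.
    std::size_t next() {
        std::size_t id = ids_.front();
        ids_.pop();
        ids_.push(id);  // back of the queue for next time
        return id;
    }

private:
    std::queue<std::size_t> ids_;
};
```

With three workers, successive calls to `next()` cycle 0, 1, 2, 0, ... which spreads loads evenly even though each worker may take a different amount of time per job.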
This system could also be used for pulling resources from remote locations; libcurl is my library of choice for this, and I believe it blocks until the data has come down.
Once everything has been processed, a flag will be set in the structure which was passed back to the user to indicate the state of the data.
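Since the worker sets that flag while the user's thread may be reading it, it wants to be an atomic. A minimal sketch, assuming a simple three-state flag (the names and states are my guesses, not GTL's real interface):

```cpp
#include <atomic>

enum class LoadState { Pending, Loaded, Failed };

// The structure handed back to the user at async-load time; a real one
// would also carry a pointer to the decoded data.
struct AsyncResult {
    std::atomic<LoadState> state{LoadState::Pending};
};

// Worker side: set the flag once processing is done.
inline void markDone(AsyncResult& r, bool ok) {
    r.state.store(ok ? LoadState::Loaded : LoadState::Failed,
                  std::memory_order_release);
}

// User side: poll without blocking.
inline bool isReady(const AsyncResult& r) {
    return r.state.load(std::memory_order_acquire) != LoadState::Pending;
}
```

The release/acquire pairing also ensures that when the user sees `Loaded`, the worker's writes to the data itself are visible too.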
I'd like to keep the front API as much the same as possible, and it will also be able to perform normal file loading if the user wants. Something I need to look into is the possibility of async IO with PhysFS, as I don't yet know whether that will work with it.
I'm probably going to write the backend to be as data-agnostic as possible so that it can be used with little effort from both GTL and my audio loading library (which I would have been working on today if not for a really bad night's sleep, which has left me too tired).
So, that's the kind of direction things are going to go in. External-library wise, for Win32 at least, I'll be using native threads, as I can't see a way of sanely extracting the thread ID otherwise. As the Linux and OS X versions don't look like they'll require thread ID trickery, however, they might get written using something like Boost::Thread.
So, comments? Questions?