silvermace

Thread Synchronisation Object for EVERY Resource?



Hey all, I'm writing some threading code for my engine. There's a class called IResource with a getStatusMessage method that returns an std::string describing the resource's current loading status. Since I have background loading, access to getStatusMessage/setStatusMessage needs to be synchronised, and I've chosen a Critical Section for that. Now here's the kicker: I'll need another critsec to synchronise access to the actual resource data, so there will be two sync objects per resource, and there will probably be a few hundred resources during loading (and somewhat fewer at runtime). I've searched the net for critical-section usage, and I'm well versed with them, having used them in many business-type applications that required synchronisation, but I can't find any decent literature on high-volume usage or on design/architecture strategies such as pooling the sync objects. The only hint that it may be okay to have a lot of them is from MSDN:
Quote:
InitializeCriticalSectionAndSpinCount ...*snip*... Windows 2000 ...*snip*... Do not set this bit if you are creating a large number of critical section objects, because it consumes a significant amount of nonpaged pool. Note that this event is allocated on demand starting with Windows XP and the high-order bit is ignored.
Any advice regarding this matter, especially reuse strategies (if there are any worthwhile ones), would be greatly appreciated. Also, if I'm doing something very stupid, please let me know :)


concept:

class ILoaderCallback {
public:
    virtual void loading_started( IResource *p_resource ) = 0;
    virtual void loading_status( IResource *p_resource, const std::string &status ) = 0;
    virtual void loading_complete( IResource *p_resource ) = 0;
};



Your application registers with the loader and provides the above callback.

The loader then calls the appropriate events as needed.

The get/set approach doesn't work in this case, due to the asynchronous nature of loading.

I see, that's my current solution kind of inverted, as only one sync object would be required on the observer side. That's convenient, because I have the observer functionality built in already :).

That still leaves the question though: is it okay (it's pretty much a must, though..) to have a sync object per object (for the resource's actual data)?

Cheers,
-Danu

Check out InterlockedExchange.

There are also numerous other Interlocked functions that perform their operations atomically. These can remove the need for separate synchronization objects when you're basically just querying for state.

Happy coding.

Quote:
That still leaves the question though, is it okay (it's pretty much a must though..) to have a sync-object per-object (for the resource's actual data) ?


Why? Let's say I have a bitmap. Once it's loaded, it's there. It won't change. It may go away at some point, but once it's loaded, it's free for all.

The proper solution would be to make resources accessible via Boost smart pointers.

This allows you to use the resource system in an async fashion without any locks at all.


typedef boost::shared_ptr< IResource > ResourcePtr;

Loader loader;

...
{
    loader.load_resource( "map.bmp", this ); // this == callback
}
...
void loading_complete( ResourcePtr p_resource ) {
    engine.foo( ..., p_resource );
}
void resource_unloaded( ResourcePtr p_resource ) {
    engine.bar( ..., p_resource );
}



Now the only place where you might possibly need locks is within foo and bar methods where you update engine's internal state, but *only* if foo and bar can be called from multiple threads.

The way callbacks from loader are designed, they would be invoked from single loader thread, hence the handlers would never be called concurrently.

This feature set is intended for data streaming, so the renderer and physics engines must be able to accept data as it is made available; that requires staged locking and continuous "tests" for data availability across (possibly) several threads. The rendering engine is currently spread across two threads as well, so none of this is trivial. I respect and appreciate your advice thus far; expect more from me later :D

Then you need a streaming interface in the first place.

If the data is streamed via the network and you handle all the work yourself, there's Boost.Asio, which handles not only sockets but is a full asynchronous-programming framework. There are other async solutions as well.

You can no longer treat your resource like a static class; you need to make your application aware that something is being streamed.

One simple way to do this is lock-free queues, commonly used in audio streaming.


                         +-------+
[loader] ---data chunk-->|[queue]|---reassembly--> [local buffer] --use--> [renderer]
 thread1                 +-------+                                          thread2




Here, as the data is streamed, you put chunks into the queue (std::vector<char> or similar). This allows your renderer thread to poll without blocking or locking whenever it has time, and then assemble the resource locally in its own thread. This works without a problem for streaming data, where chunks are discarded after use.

If you're using bulky resources, such as meshes or images, then this type of streaming won't help you much, since you'll need all of the data in the first place.

The lock-free concept can also be applied to immutable resources (ones that cannot change during their lifetime). In the case of streaming, once a part of the resource has been consumed, it's no longer available.

In case of image:

class image;

// indicates that a new chunk has been loaded into the image
class available_chunk {
    friend class image;
public:
    long size() const { return data_length; }
private:
    available_chunk( long start, long length )
        : data_start( start ), data_length( length ) {}

    long data_start;
    long data_length;
};

class image {
public:
    friend class loader;

    void get_data( const available_chunk &chunk, std::vector< char > &bytes ) {
        // copy image_data[chunk.data_start] up to
        // image_data[chunk.data_start + chunk.data_length] into bytes
    }
    void get_all( const available_chunk &chunk, std::vector< char > &bytes ) {
        // copy image_data[0] up to
        // image_data[chunk.data_start + chunk.data_length] into bytes
    }
private:
    void append( char *new_data, int size ) {
        long old_start = available;
        // copy new_data into image_data[available]
        available += size;

        // for (all listeners)
        //     listener.push( new available_chunk( old_start, size ) );
    }

    char *image_data; // has the size of the entire image data
    long available;
};



The new architecture is now this:

                       available_chunk
                      lock-free queues
                      +-->[queue]------>[renderer]
[loader]--->[image]---+-->[queue]------>[physics]
                      +-->[queue]------>[anything]




Since available_chunk is entirely private, it becomes impossible for the application to cheat and provide invalid data. Since image's image_data is private, it is modified only by the single loader thread.

Users (renderer, physics, and anything else) remember the last available_chunk they received, and can use that information to request either the entire content loaded so far or just the latest chunk.

This entire design relies on the image hiding its data and handing out only available_chunk structures (which it guarantees to be valid, within bounds, and issued only as data is loaded).

The whole design can then be lock-free. The loader's single thread can be enforced by the image class (but not via a singleton), making it possible to have multiple loaders for multiple images.

I plan on doing something similar in my engine. My idea was more along the lines of:

// WARNING: UNTESTED CODE, FOR CONCEPT-SHARING PURPOSES ONLY
class Loader
{
public:
    virtual ~Loader() = 0; // virtual destructor for safe inheritance
    virtual void operator()() = 0; // loads the object
};
Loader::~Loader() {} // defined in a source file to avoid a link error

template <typename ResourceT, typename ResourceLoaderT>
class QueryableLoader : public Loader
{
public:
    QueryableLoader(ResourceT &resource_, ResourceLoaderT &resourceLoader_)
        : status(0.0f), resource(resource_), resourceLoader(resourceLoader_) {}
    float Status() const { return status; }
    void operator()() { resourceLoader(resource); } // loads resource through the resource loader
private:
    volatile float status; // loading status percentage (beware: volatile alone does not guarantee atomic access)
    ResourceT &resource; // reference to resource
    ResourceLoaderT &resourceLoader; // reference to resource loader
};



Loaders (which can be QueryableLoaders) are then pushed onto the loading queue to be loaded asynchronously.

It may be valuable to make "ResourceT &resource" and "ResourceLoaderT &resourceLoader" boost shared_ptrs to eliminate potential object-lifetime issues. Or you can have ResourceLoaderT passed by value and ResourceT as a boost shared_ptr; that way the ResourceLoaderT object doesn't have to be a pointer, and it is destroyed when the QueryableLoader is destroyed.

This is only thread-safe if the resource is not accessed until the object has been fully loaded.

Some benefits that motivate my thinking were:
1) Resources are not tied to I/O. Via the ResourceLoaderT object you can parameterize the way in which resources are loaded (from RAM, disk, network, script file, zip file, etc.)
2) Asynchronous loading is not tied to resources or I/O methods. They are parameterized via the ResourceT object and the ResourceLoaderT object, which is dynamically bound to the same interface via the Loader class.

Yeah, my architecture is something like that. I have a resource manager and two main resource interfaces: a resource and a resource loader. Users declare their resources and their intentions for them, and register callbacks for action upon load completion (and other related events); the manager uses the "no-f**k-off-and-come-back-next/n-frame" approach when the renderer or any other engine system wants a resource. Everywhere in the engine, handles are used rather than actual resource pointers, so there are no major issues with resource lifetime.
