D3D and multi threads
Posted by amarhys


Hello, I am working on a Myst-like game with a 360° panoramic graphics engine. Each view is composed of:
- 6 textures for the panoramic cube (top, bottom, left, right, front and back; each texture is 1512x1512);
- additional textures for local animations on individual cube faces (the number of animations, the number of textures per animation and the texture sizes depend on the view).

The panoramic textures are created once at initialization and are updated using the D3DXLoadSurfaceFromFile(...) function. The animation textures are created on the fly for each view (since the number and size of the textures are not predictable) using the D3DXCreateTextureFromFileEx(...) function.

When the player left-clicks the mouse to change from one view to another, it takes about 1 to 1.5 seconds to load the textures of the next view (depending on how many animations have to be loaded), and during this time the render() function is not executed, so the animations of the current view are frozen.

To resolve this issue, I changed the code to load the textures of the next view in another thread while the main thread keeps looping on the render() function. It works as follows, in pseudocode:

LOADING THREAD:
---------------
1. Wait for Start semaphore
2. Get parameters for textures to be loaded (global variable)
3. Load all needed textures
4. Signal End semaphore

MAIN THREAD:
------------
When the player left-clicks the mouse:
1. Set parameters for texture loading (global variable)
2. Signal Start semaphore
3. While (1)
   3.1 Check End semaphore. If signaled, break the while loop
   3.2 Execute render() function
4. Check status of the texture loading process (global variable)
5. Switch from current view to next view

With this implementation it works as expected, because the animations keep playing while the next view's textures are being loaded, but sometimes (by sometimes I mean it is not predictable) the D3DXCreateTextureFromFileEx(...) function used to load the animation textures returns E_OUTOFMEMORY. Obviously, I checked that the texture resources which are updated/loaded in the LOADING THREAD are not used at the same time by the render() function in the MAIN THREAD.

Is there an issue calling D3D functions from two concurrent threads?

Thank you in advance for your answer.

Cheers
Amarhys
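
A minimal sketch of the pattern described in the post above, using Win32 auto-reset events for the two "semaphores" (LoadRequest, LoadTextures, BuildRequestForNextView, SwitchToNextView and render are placeholder names, not the actual engine code):

    // Sketch only: the "Start"/"End" semaphores are modeled with auto-reset Win32 events.
    #include <windows.h>

    struct LoadRequest { /* parameters of the textures to load for the next view */ };

    // Placeholders for the real engine functions:
    void        LoadTextures(const LoadRequest&);   // calls D3DX on the loading thread (the problem!)
    LoadRequest BuildRequestForNextView();
    void        render();
    void        SwitchToNextView();

    HANDLE      g_startEvent = CreateEvent(NULL, FALSE, FALSE, NULL); // "Start"
    HANDLE      g_endEvent   = CreateEvent(NULL, FALSE, FALSE, NULL); // "End"
    LoadRequest g_request;                                            // global parameters

    DWORD WINAPI LoadingThread(LPVOID)
    {
        for (;;)
        {
            WaitForSingleObject(g_startEvent, INFINITE); // 1. wait for Start
            LoadTextures(g_request);                     // 2-3. read parameters, load textures
            SetEvent(g_endEvent);                        // 4. signal End
        }
    }

    // Main thread, when the player left-clicks:
    void OnViewChange()
    {
        g_request = BuildRequestForNextView();           // 1. set parameters
        SetEvent(g_startEvent);                          // 2. signal Start
        while (WaitForSingleObject(g_endEvent, 0) == WAIT_TIMEOUT)
            render();                                    // 3. keep rendering the current view
        SwitchToNextView();                              // 4-5. check status, switch view
    }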

Quote:
Original post by amarhys
Is there an issue calling D3D functions from two concurrent threads?
Are you creating the device with D3DCREATE_MULTITHREADED? If not, then D3D won't be in a thread-safe mode and, well, anything could happen [smile]

But, even with that flag it's generally considered A Bad Thing™ to be calling into D3D from multiple threads.

Worker threads are perfectly possible, but you want to set them up to do non-D3D work. For example, load the binary data from the file, parse/verify it (or whatever), and then hand it over to the core D3D thread, which will actually create the resource from it.
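
A sketch of that split (the function names and the MANAGED-pool / unknown-format choices below are illustrative assumptions, not something from this thread): the worker thread does nothing but read the file into memory, and the D3D thread turns those bytes into a texture.

    #include <d3dx9.h>
    #include <stdio.h>
    #include <vector>

    // Worker thread: pure file I/O, no D3D calls at all.
    std::vector<char> ReadFileBytes(const char* path)
    {
        std::vector<char> bytes;
        FILE* f = fopen(path, "rb");
        if (f)
        {
            fseek(f, 0, SEEK_END);
            long size = ftell(f);
            if (size > 0)
            {
                bytes.resize(size);
                fseek(f, 0, SEEK_SET);
                fread(&bytes[0], 1, bytes.size(), f);
            }
            fclose(f);
        }
        return bytes;
    }

    // Main (D3D) thread: create the texture from the in-memory bytes.
    IDirect3DTexture9* CreateTextureFromBytes(IDirect3DDevice9* device,
                                              const std::vector<char>& bytes)
    {
        if (bytes.empty()) return NULL;
        IDirect3DTexture9* tex = NULL;
        D3DXCreateTextureFromFileInMemoryEx(
            device, &bytes[0], (UINT)bytes.size(),
            D3DX_DEFAULT, D3DX_DEFAULT, D3DX_DEFAULT,   // width, height, mip levels
            0, D3DFMT_UNKNOWN, D3DPOOL_MANAGED,         // usage, format, pool
            D3DX_DEFAULT, D3DX_DEFAULT, 0,              // filter, mip filter, color key
            NULL, NULL, &tex);
        return tex;
    }

Note that this still leaves the D3DX decode/convert cost on the D3D thread; the suggestions later in this thread are about shrinking that part.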

There were a few presentations from GameFest/GDC covering good multi-threaded design - have a look on the MS site.

hth
Jack

Thanks for your answer jollyjeffers.

I had a look at the GameFest/GDC presentations about multi-threading on the MS website and, as you said, D3DCREATE_MULTITHREADED is not recommended.

However, the 1.5 seconds spent loading the textures from external files break down as follows:
- 0.3s for file accesses + copying the data into memory
- 1.2s for the D3DXLoadSurfaceFromFileInMemory(...) or D3DXCreateTextureFromFileInMemoryEx(...) call itself (I use DXT3 textures)

=> Most of the time is spent in the D3D functions, so using a thread for file I/O only is not very interesting from a performance point of view and will not really solve my problem.

A solution could be to move the texture-loading code back into the main thread and to call the render() function after each call to D3DXLoadSurfaceFromFileInMemory(...) or D3DXCreateTextureFromFileInMemoryEx(...), instead of waiting for all the textures to be loaded before executing render(). I am going to try that.

Cheers
Amarhys

I'd recommend doing only file open/read/close on the secondary thread, and leave all the d3d calls in the primary thread.

Depending on how you've got things set up, keep in mind that d3d might be doing things like creating mip-maps and format conversions. If you use .dds files with pre-created mip-maps in the final format (like DXT compressed), there will be a whole lot of work that won't have to happen after you load the texture.
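
For example (a sketch, assuming the .dds on disk is already DXT-compressed in its final form and an ANSI build; the function name is illustrative), D3DX can be told to take size, format and mip levels straight from the file so it does no conversion work at load time:

    #include <d3dx9.h>

    // Load a pre-compressed .dds exactly as stored on disk: width, height,
    // mip count and format (e.g. DXT3) come from the file, so D3DX does no
    // resizing, format conversion or mip generation.
    IDirect3DTexture9* LoadDDSAsIs(IDirect3DDevice9* device, const char* path)
    {
        IDirect3DTexture9* tex = NULL;
        D3DXCreateTextureFromFileEx(
            device, path,
            D3DX_FROM_FILE, D3DX_FROM_FILE,   // width, height from file
            D3DX_FROM_FILE,                   // mip levels from file
            0, D3DFMT_FROM_FILE,              // usage, format from file
            D3DPOOL_MANAGED,
            D3DX_FILTER_NONE, D3DX_FILTER_NONE,
            0, NULL, NULL, &tex);
        return tex;
    }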

Also, depending on your usage patterns, some people suggest pre-creating the d3d resource (in this case your texture) at application start-up, and then keeping a pool of them around. When you actually get your texture bits in memory from your other thread, you can just lock/copy the data into the existing resource. Before doing that, though, try the .dds route.
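
A rough sketch of that lock/copy idea (names illustrative; assumes the pooled texture was created in a lockable pool such as D3DPOOL_MANAGED, and that the loader thread has already produced uncompressed ARGB pixels):

    #include <d3d9.h>
    #include <string.h>

    // Copy ARGB pixel data, produced on the loader thread, into a texture that
    // was created once at start-up. Called on the main/D3D thread.
    void FillPooledTexture(IDirect3DTexture9* tex,
                           const void* srcPixels, UINT width, UINT height)
    {
        D3DLOCKED_RECT lr;
        if (SUCCEEDED(tex->LockRect(0, &lr, NULL, 0)))
        {
            const BYTE* src = (const BYTE*)srcPixels;
            BYTE*       dst = (BYTE*)lr.pBits;
            for (UINT y = 0; y < height; ++y)     // copy row by row to respect the pitch
            {
                memcpy(dst, src, width * 4);      // 4 bytes per ARGB texel
                src += width * 4;
                dst += lr.Pitch;
            }
            tex->UnlockRect(0);
        }
    }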

Thanks for your answer JasonBlochowiak.

There are no mip-maps in my textures (the panoramic cube face textures and the animation textures are almost always at the same distance from the camera) but you are right, D3D has to perform the conversion to DXT format.

I think I cannot pre-create the textures for the animations (I did that only for the cube faces) because my graphics engine supports up to 10 animations simultaneously with up to 64 frames per animation => potentially 640 textures at the same time (though that never happens in practice). The animation textures are usually small (about 256x256) but they can be larger in some rare cases (the maximum supported is the size of a cube face texture, i.e. 1512x1512). So, even in the best case, I would have to create 640 textures of 256x256 at start-up (about 160MB in ARGB), and I guess that is not a good idea.

I will try to call the render() function more often in the texture loading procedure.

Today, when the player left-clicks, I do this (if I go back to a single thread):


for (int i = 1; i <= n; i++)
    LoadTexture(i);   // placeholder for the actual per-texture loading code
render();             // only rendered once everything is loaded



I can try the following instead (the textures being loaded for the next view are not used by the render() function until the view is switched):


for (int i = 1; i <= n; i++)
{
    LoadTexture(i);   // placeholder for the actual per-texture loading code
    render();         // render a frame after each texture so animations keep playing
}



Maybe that will be enough to solve my problem. What matters most to me is not the total texture loading time (which is longer in the second case) but the fact that the animations keep playing during the texture loading procedure.

Cheers
Amarhys



I suppose your proposed solution would work, but it doesn't seem particularly 'clean' and might be a bit nasty to scale up at a later date. I think you can still solve this by using multi-threading.

According to your figures, the expensive part is creating and loading the resource. By using D3DX you roll these two into one atomic operation.

Referring back to Jason's comment:
Quote:
depending on your usage patterns, some people suggest pre-creating the d3d resource (in this case your texture) at application start-up, and then keeping a pool of them around. When you actually get your texture bits in memory from your other thread, you can just lock/copy the data into the existing resource.
This sort of resource 'staging' can be very efficient.

Create a double-buffered pool of resources, then stream in data and swap the rendering pointers around. You never actually create/release VRAM allocations, which can be a big win.
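
A minimal sketch of that double-buffered idea (all names hypothetical): both textures in a slot are created once at start-up, the loader fills the back one while the front one is being rendered, and the pointers are swapped on the D3D thread once the data is ready.

    #include <d3d9.h>
    #include <algorithm>

    // One slot of a double-buffered texture pool. Neither texture is ever
    // created or released after start-up, so switching views never touches
    // VRAM allocation.
    struct TextureSlot
    {
        IDirect3DTexture9* front;   // currently used by render()
        IDirect3DTexture9* back;    // being filled with the next view's data

        void Swap() { std::swap(front, back); }   // call on the D3D thread when loading is done
    };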

Obviously there is still the problem of encoding to DXT3, which is probably the more painful part. I see two options here:

1. Store your source artwork in DXT3 format so you don't have to do 'on the fly' compression. Add it as a build-time task for example.

2. Look into using a D3DDEVTYPE_NULLREF device to do the DXT3 compression on a separate thread. I've not tried this, but it should allow you to load and compress your data, obtain the pointer, and memcpy_s() it over to the master thread that actually writes it to the VRAM used for rendering.
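
Roughly, the idea might look like the sketch below (unverified; it assumes a second, NULLREF device created purely for the loader thread, and that creating a D3DPOOL_SCRATCH texture on it is enough for D3DX to do the DXT3 compression; all names are illustrative):

    #include <d3dx9.h>

    // Loader thread: compress an in-memory image to DXT3 via a SCRATCH-pool
    // texture on a NULLREF device, so no calls are made on the rendering device.
    // The main thread can then LockRect() the result and memcpy_s() the blocks
    // into the real texture used for rendering. Error handling omitted.
    IDirect3DTexture9* CompressToDXT3OffThread(IDirect3DDevice9* nullRefDevice,
                                               const void* fileData, UINT fileSize)
    {
        IDirect3DTexture9* scratch = NULL;
        D3DXCreateTextureFromFileInMemoryEx(
            nullRefDevice, fileData, fileSize,
            D3DX_DEFAULT, D3DX_DEFAULT, 1,            // width, height, single mip level
            0, D3DFMT_DXT3, D3DPOOL_SCRATCH,          // compress to DXT3 in system memory
            D3DX_FILTER_NONE, D3DX_FILTER_NONE,
            0, NULL, NULL, &scratch);
        return scratch;
    }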


hth
Jack
