

Agony

Preloading textures in background *solved, mostly*



(DirectX 9, C++, WinXP) I have textures that are 1024x1024. They can't be any smaller, since they are backgrounds for the screen and have to be pixel-perfect. I have two threads: one that renders, and one that loads resources within around 10-15 seconds of their being needed. The renderer must hold a solid frame rate. That isn't really all that hard, since my vertex count is always going to be at most a few hundred (around 20 at the moment), and I'm only using two 1024x1024 textures at once.

The problem is when I load a new texture for the background, which I do once every 10 to 20 seconds; it's sort of a slide show program. I've gotten all the rest of my loading code to run in the background without causing any problems. However, whenever the texture is first used, my frame rate drops briefly but noticeably. I suppose that's because the texture is being copied from system memory to video memory. So I did some research and found PreLoad(). Now it is even more obvious that this is the problem, because the slowdown happens precisely when I call PreLoad() on the texture. I've tried lowering the thread priority to its lowest setting right before the PreLoad() call, but it seems that DirectX (or the video driver) either insists on doing the operation all at once, or raises its priority internally, negating whatever priority changes I make.

What I need is to load the texture into video memory in the background so that it doesn't cause a noticeable performance hit, or to manually load pieces of it at a time, explicitly doing the same thing I want DirectX to do for me automatically. Any thoughts you have would be great.

My hardware (I'm at work, I use what they give me) is a Mini-ITX VIA EPIA M, with its onboard CLE266 graphics chipset and a 1 GHz C3 processor. I believe I have 32 MB of memory devoted to graphics; if not 32, then 64. Either way, that's more than enough for two or three 1024x1024 textures, all D3DFMT_A8R8G8B8 in D3DPOOL_MANAGED. I suppose they could be in the default pool if that would allow some other technique to be used, and they could have dynamic usage as well, if it would somehow help. I did try setting them to dynamic yesterday as a quick test, and within 10 seconds the program crashed, but that's probably some other issue altogether that should be fixable.

[edited by - Agony on March 5, 2004 5:27:54 PM]
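For context, here is a minimal sketch of the setup being described; names such as g_device and PrepareBackgroundTexture are hypothetical placeholders, not the actual code.

```cpp
#include <windows.h>
#include <d3d9.h>
#include <d3dx9.h>

extern IDirect3DDevice9* g_device;   // assumed: created by the render thread

// Runs on the loader thread: create the managed texture, fill it from a
// system-memory bitmap, then try to push it into video memory ahead of use.
IDirect3DTexture9* PrepareBackgroundTexture(const void* pixels, UINT pitch)
{
    IDirect3DTexture9* tex = NULL;
    if (FAILED(g_device->CreateTexture(1024, 1024, 1, 0,
                                       D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
                                       &tex, NULL)))
        return NULL;

    IDirect3DSurface9* surf = NULL;
    tex->GetSurfaceLevel(0, &surf);
    RECT src = { 0, 0, 1024, 1024 };
    D3DXLoadSurfaceFromMemory(surf, NULL, NULL, pixels, D3DFMT_A8R8G8B8,
                              pitch, NULL, &src, D3DX_FILTER_NONE, 0);
    surf->Release();

    // The hitch described above happens here: PreLoad() copies the whole
    // ~4 MB surface to video memory in one go, regardless of thread priority.
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);
    tex->PreLoad();
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_NORMAL);
    return tex;
}
```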

Share this post


Link to post
Share on other sites
Advertisement
Profile to make sure it's really the loading. I dynamically load lots of 256x256 JPG files, and I don't see any framerate drop.

How are you loading them, and what memory pool are you using? It may be that the images are too large. Maybe you can split the images into 256x256 tiles, load them tile by tile, and then re-composite them (see the sketch below).
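A rough sketch of this tiling suggestion, assuming the full 1024x1024 image is already decoded in system memory; g_device, CreateTiles, and the tile size are illustrative placeholders, not anyone's actual code.

```cpp
#include <d3d9.h>
#include <d3dx9.h>

extern IDirect3DDevice9* g_device;

static const UINT TILE = 256;   // 4x4 = 16 tiles per 1024x1024 background

void CreateTiles(const BYTE* pixels, UINT pitch, IDirect3DTexture9* tiles[16])
{
    for (UINT ty = 0; ty < 4; ++ty)
    {
        for (UINT tx = 0; tx < 4; ++tx)
        {
            IDirect3DTexture9* tex = NULL;
            g_device->CreateTexture(TILE, TILE, 1, 0, D3DFMT_A8R8G8B8,
                                    D3DPOOL_MANAGED, &tex, NULL);

            IDirect3DSurface9* surf = NULL;
            tex->GetSurfaceLevel(0, &surf);

            // Point at the tile's top-left pixel in the full image; the full
            // image pitch lets D3DX step to the next row of the tile.
            const BYTE* tileTopLeft = pixels + ty * TILE * pitch + tx * TILE * 4;
            RECT src = { 0, 0, (LONG)TILE, (LONG)TILE };
            D3DXLoadSurfaceFromMemory(surf, NULL, NULL, tileTopLeft,
                                      D3DFMT_A8R8G8B8, pitch, NULL, &src,
                                      D3DX_FILTER_NONE, 0);
            surf->Release();
            tiles[ty * 4 + tx] = tex;
        }
    }
}
```

Loading 16 small pieces instead of one 4 MB surface spreads the upload cost over time, at the price of more draw calls later.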

Share this post


Link to post
Share on other sites
They're created in the managed pool. I tried the default pool, and then it was slowing down during D3DXLoadSurfaceFromMemory(). I was hopeful about that, because then I could load smaller rectangles of the image at a time, but it turns out that loading even a single pixel was a problem. I suppose the locking of the texture was stalling things, although I still find it a little odd, since I'm not actually using the textures at the point when I create and fill them.
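For what it's worth, here is a rough sketch of that "smaller rectangles at a time" idea: copy one horizontal strip per call, so the work is spread over several frames instead of landing on one. The strip height, the g_strip counter, and the surrounding names are assumptions for illustration only.

```cpp
#include <d3d9.h>
#include <d3dx9.h>

static const LONG STRIP_H = 64;   // 1024 / 64 = 16 calls per full texture
static LONG g_strip = 0;          // which strip to copy next

// Returns true while more strips remain to be copied into the texture.
bool CopyNextStrip(IDirect3DTexture9* tex, const BYTE* pixels, UINT pitch)
{
    if (g_strip * STRIP_H >= 1024)
        return false;

    IDirect3DSurface9* surf = NULL;
    tex->GetSurfaceLevel(0, &surf);

    // Destination: one 1024 x STRIP_H band of the texture.
    RECT dst = { 0, g_strip * STRIP_H, 1024, (g_strip + 1) * STRIP_H };
    // Source: the matching rows of the system-memory bitmap.
    RECT src = { 0, 0, 1024, STRIP_H };
    const BYTE* stripTopLeft = pixels + g_strip * STRIP_H * pitch;

    D3DXLoadSurfaceFromMemory(surf, NULL, &dst, stripTopLeft,
                              D3DFMT_A8R8G8B8, pitch, NULL, &src,
                              D3DX_FILTER_NONE, 0);
    surf->Release();
    ++g_strip;
    return true;
}
```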

I have indeed done profiling, and it is precisely on the PreLoad() call that the problem occurs. Loading from the hard drive, doing system memory copies, etc., doesn't affect the render thread's performance. It does all of that stuff in the background, like it should. It's just when the texture is first used, or is PreLoad()ed, that the background thread takes away too much processing time from the renderer thread. I've tried pretty much everything I can think of to make sure that this is the problem, and to circumvent it, before bugging everyone here about it. But now I'm pretty stumped.

When you say to split them up into 256x256 images and then re-composite them, I suppose you mean just using more quads to render multiple textures? If I must do that, then I'll experiment with it, but for one, that'll make a lot of my design much uglier, and for two, I don't like the idea of having to make ~16 SetTexture() calls when 1 should do the job. Or by re-compositing them, do you mean somehow combining them back into one texture, or at least somehow treating them as if they were one texture, after they've been loaded into video memory? I didn't think that could be done, but if you know how, I'd love to hear it.

Yes, I meant rendering multiple quads. It's strange; my engine keeps reading new images in real time, I sometimes render 300 frames a second, and it doesn't skip a beat. But I have never used PreLoad(), and I haven't loaded 1024x1024 images like that, so I'm not sure.
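For reference, a minimal sketch of what "render multiple quads" could look like with the 16 tile textures from the earlier sketch: one pretransformed quad and one SetTexture() call per tile. The vertex layout, screen placement, and function names are assumptions, not code from this thread.

```cpp
#include <d3d9.h>

struct TLVertex { float x, y, z, rhw; float u, v; };
#define TLVERTEX_FVF (D3DFVF_XYZRHW | D3DFVF_TEX1)

// Re-composite the 4x4 grid of 256x256 tiles as a full-screen background.
void DrawTiledBackground(IDirect3DDevice9* dev, IDirect3DTexture9* tiles[16])
{
    dev->SetFVF(TLVERTEX_FVF);
    for (int ty = 0; ty < 4; ++ty)
    {
        for (int tx = 0; tx < 4; ++tx)
        {
            float x0 = tx * 256.0f, y0 = ty * 256.0f;
            // Two-triangle strip: top-left, top-right, bottom-left, bottom-right.
            TLVertex quad[4] = {
                { x0,          y0,          0.0f, 1.0f, 0.0f, 0.0f },
                { x0 + 256.0f, y0,          0.0f, 1.0f, 1.0f, 0.0f },
                { x0,          y0 + 256.0f, 0.0f, 1.0f, 0.0f, 1.0f },
                { x0 + 256.0f, y0 + 256.0f, 0.0f, 1.0f, 1.0f, 1.0f },
            };
            dev->SetTexture(0, tiles[ty * 4 + tx]);   // 16 SetTexture() calls per frame
            dev->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(TLVertex));
        }
    }
}
```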

In the scheduler, when a resource is going to be needed soon, I call a function from a DLL that our company has used for a few years, which renders a file to a bitmap. I use D3DXLoadSurfaceFromMemory() to copy it into a texture. Then, since I know I'm going to need the texture in video memory soon, and since I know very few other resources are already taking up video memory, I call PreLoad() on the texture. If I don't do this, then when I actually draw a quad that uses that texture, DirectX goes, "Oh, this is a managed texture, it isn't in video memory yet, and it is being used this frame. I'd better load it." That apparently stalls the whole graphics pipeline, so every time I display a new page, it pauses for about a tenth of a second. So I called PreLoad() ahead of time, because I figured that if I had 4 seconds available before the texture would be used, PreLoad() could slowly copy the texture to video memory over a second or so, without disrupting rendering of what is currently visible.

I'm gonna play around with drivers. I do know that there have been plenty of complaints concerning EPIA M CLE266 drivers; VIA seems to be pretty lazy with support for this system. This program would probably work fine on a lot of other platforms, but one of the goals is to be independent of which motherboard, graphics card, processor, etc. is used. Which shouldn't be difficult: just a few textures, only a few hundred vertices at the worst of times. No biggy. Urg...

I don't know if this will help, but it might help not to have the graphics card create 8 or 9 mipmap levels; try setting the mip level count to 1...

Hope this helps.
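A quick illustration of the suggestion (g_device is an assumed IDirect3DDevice9 pointer): the third argument to CreateTexture() is the mip level count, where 0 asks for a full mip chain and 1 creates only the top-level surface.

```cpp
#include <d3d9.h>

extern IDirect3DDevice9* g_device;   // assumed device pointer

IDirect3DTexture9* CreateNoMipTexture()
{
    IDirect3DTexture9* tex = NULL;
    g_device->CreateTexture(1024, 1024,
                            1,        // Levels = 1: top surface only, no mip chain
                            0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
                            &tex, NULL);
    return tex;
}
```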

I was just about ready to shoot myself, since I didn't remember what I had specified for mipmaps. 0 is such a sneaky value. But alas, I had already specified exactly 1 mip level, so that wasn't it. Thanks for the try, though.

Well, I've solved it, mostly. When I make my textures D3DUSAGE_DYNAMIC / D3DPOOL_DEFAULT, they don't cause any problems when loaded into video memory (roughly as sketched after this post). I still have that problem where it crashes after a while, although I've been studying that one carefully, and I'm sure it is another problem altogether. Aside from that, though, I have a 100% stable 60 frames per second, which is exactly what this product needs. That crashing is really weird, though. It seems to happen around the time I try to draw the 13th page. 13 is an evil number. I didn't believe it until now. I have been converted.

And thanks for the suggestions!


int main() { return *((int*)0); }
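A minimal sketch of the dynamic/default-pool approach described in this post, assuming the page bitmap is already in system memory; g_device, CreateDynamicBackground, and the row-copy loop are illustrative, not the actual program.

```cpp
#include <d3d9.h>
#include <string.h>

extern IDirect3DDevice9* g_device;

IDirect3DTexture9* CreateDynamicBackground(const BYTE* pixels, UINT srcPitch)
{
    IDirect3DTexture9* tex = NULL;
    if (FAILED(g_device->CreateTexture(1024, 1024, 1,
                                       D3DUSAGE_DYNAMIC,
                                       D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
                                       &tex, NULL)))
        return NULL;

    // Dynamic default-pool textures can be locked directly; D3DLOCK_DISCARD
    // tells the driver it may hand back fresh memory instead of stalling.
    D3DLOCKED_RECT lr;
    if (SUCCEEDED(tex->LockRect(0, &lr, NULL, D3DLOCK_DISCARD)))
    {
        const BYTE* src = pixels;
        BYTE* dst = (BYTE*)lr.pBits;
        for (UINT y = 0; y < 1024; ++y)
        {
            memcpy(dst, src, 1024 * 4);   // one row of A8R8G8B8 pixels
            src += srcPitch;
            dst += lr.Pitch;
        }
        tex->UnlockRect(0);
    }
    return tex;
}
```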

If you are using D3DPOOL_DEFAULT for your textures, are you freeing some of them from video memory? Unlike D3DPOOL_MANAGED, you don't get automatic loading and unloading of textures as they are needed, or eviction when the video memory is wanted for other managed resources.

Your crash is probably due to a surface creation failing because there is no more free video memory.
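To illustrate the point about explicit lifetime management with D3DPOOL_DEFAULT, a hedged sketch that releases the previous page's texture before creating the next one; the names reuse the hypothetical helpers from the earlier sketches, not the poster's actual code.

```cpp
#include <d3d9.h>

// From the earlier dynamic-texture sketch (assumed helper).
IDirect3DTexture9* CreateDynamicBackground(const BYTE* pixels, UINT srcPitch);

IDirect3DTexture9* g_currentPage = NULL;

void ShowNextPage(const BYTE* pixels, UINT srcPitch)
{
    if (g_currentPage)
    {
        g_currentPage->Release();   // free the old page's video memory
        g_currentPage = NULL;
    }
    g_currentPage = CreateDynamicBackground(pixels, srcPitch);
}
```

Without the Release() call, each new page leaks a 4 MB default-pool texture, which would exhaust a 32 MB or 64 MB framebuffer after roughly a dozen pages.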

