Texture Cache

8 comments, last by DrNecessiter
I need to rework my "disk to RAM" texture cache and have a question for anybody with applicable experience. I need to manage the loading of many gigabytes of textures into RAM; I let DirectX do the "RAM to VRAM" caching via the MANAGED flag.

Formerly, my cache consisted of a set of pre-allocated (created) texture objects. The user needed to know roughly how many textures of each size/type were needed. At runtime this is nice because there are no CreateTexture calls; the cache simply looks for the right-sized slot on an LRU basis. Of course, this system is very inflexible and potentially wasteful of memory.

My question is this: how "bad" is it to create and destroy textures at runtime? What I'd like to do is set an amount of RAM to be used by the cache. When the cache approaches the limit, new requests simply Destroy() old textures until there is enough room for the new one. This is clearly more flexible, but has several potential problems:

- "In the loop" Create() and Destroy() calls. How bad are these? Am I going to see a stutter when textures are created?
- Memory fragmentation. Probably not too big of an issue, because the textures are typically so much bigger than the memory page granularity.

Any thoughts / suggestions?
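For illustration, a minimal sketch of the budgeted LRU scheme described above, assuming Direct3D 9. TextureCache, TileKey, the DXT1 size math, and the elided fill step are placeholders, not actual code from this system:

#include <d3d9.h>
#include <cstddef>
#include <list>
#include <map>

struct TileKey {
    int level, x, y;
    bool operator<(const TileKey& o) const {
        if (level != o.level) return level < o.level;
        if (x != o.x) return x < o.x;
        return y < o.y;
    }
};

class TextureCache {
public:
    TextureCache(IDirect3DDevice9* dev, size_t budgetBytes)
        : m_dev(dev), m_budget(budgetBytes), m_used(0) {}

    IDirect3DTexture9* Acquire(const TileKey& key, UINT w, UINT h) {
        std::map<TileKey, Entry>::iterator it = m_entries.find(key);
        if (it != m_entries.end()) {
            // Cache hit: move to the front of the LRU list.
            m_lru.splice(m_lru.begin(), m_lru, it->second.lruPos);
            return it->second.tex;
        }
        size_t bytes = (size_t)w * h / 2;      // DXT1 is 4 bits per texel
        while (m_used + bytes > m_budget && !m_lru.empty())
            EvictOldest();                     // Release() until the new one fits

        IDirect3DTexture9* tex = 0;
        if (FAILED(m_dev->CreateTexture(w, h, 1, 0, D3DFMT_DXT1,
                                        D3DPOOL_MANAGED, &tex, 0)))
            return 0;
        // ...fill the texture with the tile's DXT1 bits here...
        m_lru.push_front(key);
        Entry e = { tex, bytes, m_lru.begin() };
        m_entries[key] = e;
        m_used += bytes;
        return tex;
    }

private:
    struct Entry {
        IDirect3DTexture9* tex;
        size_t bytes;
        std::list<TileKey>::iterator lruPos;
    };

    void EvictOldest() {
        TileKey victim = m_lru.back();         // least recently used tile
        m_lru.pop_back();
        std::map<TileKey, Entry>::iterator it = m_entries.find(victim);
        m_used -= it->second.bytes;
        it->second.tex->Release();             // "Destroy()" is just a COM Release
        m_entries.erase(it);
    }

    IDirect3DDevice9* m_dev;
    size_t m_budget, m_used;
    std::list<TileKey> m_lru;
    std::map<TileKey, Entry> m_entries;
};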
You will almost certainly see a stutter when you load a texture into memory, in my experience. It might seem like a waste of memory, but it is always best to get all the textures you need into memory before you need them.
Co-creator of Star Bandits -- a graphical science fiction multiplayer online game, in the style of "Trade Wars".
I'm mostly concerned with the ADDED overhead of creating the texture. Obviously I can't avoid the load.

And... since I have 65 gigabytes of texture... it ain't all gettin' loaded at startup :-)

(This is a geographic visualization system that pages texture in and out while flying around the scenery.)
I think that loading the textures from disk is going to be the bottleneck, not creating the textures.

Consider using compressed textures - you might be able to fit them all in memory.
John Bolton, Locomotive Games (THQ). Current project: Destroy All Humans (Wii). IN STORES NOW!
I do use compressed textures. Understand that this is a GIS system, and the source textures are potentially unlimited. My current database is 65 GIGABYTES of DXT1 (no alpha) texture.

I load textures in a parallel thread, which makes the loading nearly seamless. There is no "stutter" due to disk access because it all happens via DMA. However, the main thread will block on Create and Destroy calls, so those are what I am interested in right now.
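The handoff looks roughly like the sketch below. ReadNextTileFromDisk and CreateAndFillTexture are hypothetical placeholders, not this system's actual code; the worker thread does blocking disk I/O only, and the Create cost lands on the thread that owns D3D:

#include <windows.h>
#include <process.h>   // _beginthreadex starts LoaderThread
#include <deque>

struct LoadedTile { /* tile key, pointer to DXT1 bits, byte count, ... */ };

LoadedTile ReadNextTileFromDisk();                // hypothetical blocking read
void CreateAndFillTexture(const LoadedTile&);     // hypothetical: Create + fill

static CRITICAL_SECTION g_lock;                   // InitializeCriticalSection at startup
static std::deque<LoadedTile> g_ready;            // tiles read but not yet textured

// I/O thread: reads tiles and queues them; never touches D3D.
unsigned __stdcall LoaderThread(void*) {
    for (;;) {
        LoadedTile t = ReadNextTileFromDisk();
        EnterCriticalSection(&g_lock);
        g_ready.push_back(t);
        LeaveCriticalSection(&g_lock);
    }
}

// Main thread, once per frame: drain completed loads and pay the
// CreateTexture/Lock/Unlock cost here, outside the lock.
void PumpCompletedLoads() {
    std::deque<LoadedTile> batch;
    EnterCriticalSection(&g_lock);
    batch.swap(g_ready);
    LeaveCriticalSection(&g_lock);
    for (size_t i = 0; i < batch.size(); ++i)
        CreateAndFillTexture(batch[i]);
}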
No advice on offer, but I'm very impressed at 65 GB of data! How exactly are you planning to distribute this behemoth, and expect people to have room for it on their HD? Is this a game or some fancy professional app? What does GIS stand for?
GIS = Geographical Information System

It's basically a simulator for the Navy. They have many hundreds of gigabytes of phototexture at 1-meter resolution. I cut these up into tiles and load them as you fly along.

Theoretically, with a big enough storage system, you could fly around the world seamlessly.

We're currently using 120 GB external FireWire drives. Very nice and pretty fast, actually. Prices are dropping too.
Random thoughts:

1) How expensive the Create*() call is depends on lots of factors. For a managed texture, only the system memory copy needs to be created immediately, so IMO it should be "about as expensive as a malloc() plus a bit".

2) If the card doesn't do DXTn natively, then there'll be conversion going on when the texture is uploaded into VRAM.

3) Many chips use "swizzled" data formats for their textures, where the texels are reorganised to better match the access patterns used by rasterisers. This means any non-discarding Lock() call has to unswizzle into linear form, and the matching Unlock() has to reswizzle. Not sure if DXTn textures need swizzling - probably not, actually, due to their texel layout.

4a) "RAM" is too loose a term, and definitely not one that's relevant any more to user-mode programs running in Windows. So "RAM to VRAM" should really be "virtual to VRAM"...

4b) ...which means Windows already has a super optimised "disk to true RAM" caching scheme built in (run a low level HD monitor to see just how much virtual memory gets used - even with tons of memory chips in your machine and not much running - it does a surprisingly speedy job). The system memory copy of a managed D3D texture will be paged out on an LRU (or similar) basis anyway...

4c) If I were in your situation I'd definitely look at how I'd leverage that OS support when (re)designing the caching scheme (a memory-mapping sketch follows this list). Unfortunately you're limited to 2GB or 4GB address spaces under Win32, so that's not much of your 65GB, but it is enough to provide a significant streaming cache [assuming this data can be streamed - i.e. has some sort of location-based key]. May wanna start looking at Win64 too.

5) With that much data and the opportunity to fix the base platform for the customer (I suspect) - I'd also take a close look at some UMA-based motherboards with on-board T&L (the nForce series, for example), since they potentially simplify your problem and remove most upload costs.

6) Since you probably are streaming in one sense or other (i.e. you can predict which textures will be needed n frames from now), if you aren't already, you should definitely be helping D3D's texture manager with calls to PreLoad() and SetPriority() on the texture interfaces (see the example after this list).

7) General tip for managed textures anyway - create all your unmanaged textures (render targets etc.) BEFORE *ANY* of the managed ones are created. Evict all the managed resources if you do need to create an unmanaged one afterwards (see the creation-order example below).
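On 4c, a sketch of leaning on the OS file cache by memory-mapping the tile store. The file name and offset math are assumptions; note that MapViewOfFile offsets must be multiples of the 64KB allocation granularity:

#include <windows.h>

static HANDLE g_mapping;   // created once at startup

void OpenTileStore() {
    // Map the whole tile store read-only; the VM manager then pages tile
    // data in and out on demand - a "disk to true RAM" cache for free.
    HANDLE file = CreateFileA("tiles.dat", GENERIC_READ, FILE_SHARE_READ, 0,
                              OPEN_EXISTING, FILE_FLAG_RANDOM_ACCESS, 0);
    g_mapping = CreateFileMappingA(file, 0, PAGE_READONLY, 0, 0, 0);
}

// View one tile. Offsets passed to MapViewOfFile must be aligned to the
// 64KB allocation granularity, so align down and compensate.
const BYTE* MapTile(unsigned __int64 offset, SIZE_T bytes) {
    const unsigned __int64 gran = 64 * 1024;
    unsigned __int64 base = offset & ~(gran - 1);
    SIZE_T slack = (SIZE_T)(offset - base);
    const BYTE* view = (const BYTE*)MapViewOfFile(g_mapping, FILE_MAP_READ,
                                                  (DWORD)(base >> 32),
                                                  (DWORD)(base & 0xFFFFFFFF),
                                                  bytes + slack);
    // Caveat: UnmapViewOfFile() must be given 'view', not 'view + slack'.
    return view ? view + slack : 0;
}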
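On 6, PreLoad() and SetPriority() are real IDirect3DResource9 methods; a minimal example of applying them to tiles predicted to be visible soon (the prediction itself is application-specific):

#include <d3d9.h>
#include <cstddef>

// Hint D3D's texture manager about tiles expected to be needed shortly.
void HintUpcomingTiles(IDirect3DTexture9* const* tiles, size_t count) {
    for (size_t i = 0; i < count; ++i) {
        tiles[i]->SetPriority(1);  // higher-priority resources are evicted last
        tiles[i]->PreLoad();       // promote the managed copy to VRAM now
    }
}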
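And on 7, the creation-order rule in code form, assuming D3D9; the render-target parameters are just placeholders:

#include <d3d9.h>

void CreateResources(IDirect3DDevice9* device) {
    // Unmanaged (D3DPOOL_DEFAULT) resources first...
    IDirect3DTexture9* rt = 0;
    device->CreateTexture(1024, 1024, 1, D3DUSAGE_RENDERTARGET,
                          D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &rt, 0);

    // ...then all the D3DPOOL_MANAGED textures.

    // If a DEFAULT-pool resource must be created later anyway, flush the
    // managed pool's VRAM copies first so the allocation doesn't fail:
    device->EvictManagedResources();
}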



--
Simon O'Connor
ex-Creative Asylum
Programmer &
Microsoft MVP


Let me add my 2 cents.

Why are you thinking of creating/destroying the textures?

I would do as you said: create a number of textures and use them as a cache. Now, this cache could be built so that it can expand itself (need a cache object? no object "old enough" to be destroyed? let's just make a new one). This is how caching can work fully dynamically (a sketch follows below).
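A sketch of that expanding-pool idea. The names and the age threshold are illustrative; a real version would also bucket slots by size/format rather than assuming one size:

#include <d3d9.h>
#include <vector>

const DWORD kReuseAge = 60;   // frames before a slot counts as "old enough"

struct Slot { IDirect3DTexture9* tex; DWORD lastUsedFrame; };
std::vector<Slot> g_pool;     // all slots assumed same size/format here

IDirect3DTexture9* GetSlot(IDirect3DDevice9* dev, DWORD frame, UINT w, UINT h) {
    // Reuse the least recently used slot if it has sat idle long enough.
    Slot* oldest = 0;
    for (size_t i = 0; i < g_pool.size(); ++i)
        if (!oldest || g_pool[i].lastUsedFrame < oldest->lastUsedFrame)
            oldest = &g_pool[i];
    if (oldest && frame - oldest->lastUsedFrame > kReuseAge) {
        oldest->lastUsedFrame = frame;
        return oldest->tex;
    }
    // No slot old enough: expand the cache instead of destroying anything.
    Slot s = { 0, frame };
    if (FAILED(dev->CreateTexture(w, h, 1, 0, D3DFMT_DXT1,
                                  D3DPOOL_MANAGED, &s.tex, 0)))
        return 0;
    g_pool.push_back(s);
    return s.tex;
}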

Just from "my way of doing DX programming", I would be very negative towards creating/destroying texture instances at runtime.

NOW - on the negative side, I must say I would prefer to go with a static-size cache. Why? The risk of running out of memory DURING the program run is lower. I mean, the Navy pilot should not get a "sorry, not enough video memory". So I would stick with a (card-wise) static texture cache.

SICA - I still think your 4a/4b comments are totally off in this question. I am sure he has no problem loading the textures from the external system to RAM, and I don't think caching could help too much. These will be separate textures, and he will definitely NOT load them all :-)

Nice problem - I'm fighting something similar right now (trying to get a preconstructed fractal world - fractal in the editor - to load, for in-theory endless terrain :-)
Regards,
Thomas Tomiczek
THONA Consulting Ltd.
(Microsoft MVP C#/.NET)
quote: Original post by thona

SICA - I still think your 4a/4b comments are totally off in this question. I am sure he has no problem loading the textures from the external system to RAM, and I don't think caching could help too much. These will be separate textures, and he will definitely NOT load them all :-)


As my post mentioned, those were just my random [& rambling] thoughts on the entire problem and things as a whole, rather than a specific analysis or solution.

My point being to take the whole state of the PC and OS into account when considering solutions to: "I need to rework my disk to RAM texture cache".

IMO being very careful about pre-caching WILL have positive benefits if it's part of a streaming, load-spreading/balancing and pre-fetching based system.

I too doubt that he'll be loading ALL the textures, but I do think that taking the behaviour of the virtual memory manager into account when designing your loading and management scheme is ESSENTIAL - i.e. an older resource slot might not actually be in physical memory, so it really needs re-loading from disk.

Simon O'Connor

