Loading and caching resources

Started by
8 comments, last by Zipster 11 years, 5 months ago
Hi,

in my engine I have a ResourceManager class that loads resources (textures, shaders, materials, etc.) from the hard drive and caches them, e.g.:

ShaderPtr p1 = manager->loadShader("Color.psh");
...
ShaderPtr p2 = manager->loadShader("Color.psh");

When Color.psh is requested for the first time, the shader is loaded from the hard drive, a ShaderPtr is stored in a map, and that pointer is returned. When Color.psh is requested a second time, the shader is simply returned from the cache map instead of being loaded again.
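The scheme described above can be sketched roughly like this (a minimal, self-contained sketch; the Shader struct and the hypothetical loadShaderFromDisk() stand in for the real loading code):

```cpp
#include <map>
#include <memory>
#include <string>

struct Shader { std::string file; };
using ShaderPtr = std::shared_ptr<Shader>;

class ResourceManager
{
public:
    ShaderPtr loadShader(const std::string& file)
    {
        std::map<std::string, ShaderPtr>::iterator it = mShaders.find(file);
        if (it != mShaders.end())
            return it->second;                        // cache hit: reuse the instance
        ShaderPtr shader = loadShaderFromDisk(file);  // cache miss: do the actual I/O
        mShaders[file] = shader;
        return shader;
    }

private:
    // Hypothetical stand-in for the real disk loading.
    ShaderPtr loadShaderFromDisk(const std::string& file)
    {
        return std::make_shared<Shader>(Shader{file});
    }

    std::map<std::string, ShaderPtr> mShaders;
};
```

With this, both loadShader("Color.psh") calls return the very same instance.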

This works fine, but now I have a problem with textures. I have to pass the size of the texture to the load function, because internally I use D3DXCreateTextureFromFileEx() to load the texture, and if I do not explicitly specify the real size, DirectX will round it up to the next power-of-2 size, which I do not want.
So my loading looks like this:

TexturePtr t1 = manager->loadTexture("Grass.png", 500, 200); // A
...
TexturePtr t2 = manager->loadTexture("Grass.png", 300, 300); // B

In line A the manager really loads Grass.png from disk and creates a texture of size 500x200. In line B the manager detects that Grass.png is already loaded and just returns a reference to the existing 500x200 texture. Thus t2 now holds a texture object of size 500x200, even though I wanted the texture loaded at 300x300! (In other words: another instance with a different size.)

Do you guys have an idea how I could solve this problem? How can I keep a caching system while also supporting resources loaded with different parameters?
Thanks!
To distinguish between different objects, you need different IDs. Taking the name is often sufficient, but if you want to incorporate more data, you need to expand your identifier. In your case I would just expand it to e.g. "filename_width_height" (Grass.png_500_200), done automatically in the loadTexture method.
Treat the dimensions as extensions to the name, making it a different resource.
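The composite key could be built inside loadTexture roughly like this (makeTextureKey is a hypothetical helper name, not code from the poster):

```cpp
#include <sstream>
#include <string>

// Build a composite cache identifier from the filename and the
// requested dimensions, e.g. "Grass.png_500_200".
std::string makeTextureKey(const std::string& filename, int width, int height)
{
    std::ostringstream key;
    key << filename << "_" << width << "_" << height;
    return key.str();
}
```

Now "Grass.png" requested at 500x200 and at 300x300 yields two distinct keys, so the cache stores two separate textures.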

If possible you could make a texture manager that you can tell to first load a 500x300 texture, because you know you can derive the other ones from it, and then have some special code that makes the smaller sizes use the big one.

o3o

You may find that you end up needing more additional attributes on your cached assets, in which case you may want to use a small struct (with appropriate comparison operators) as the key, instead of a string.
I don't know how you store your texture pointers, but with a string identifier I'm guessing map<string, TexturePtr>. This will need to be extended to support additional data in the identifier.

Here you can implement a functor to create your own comparison function. It's easy to create, you can vary it for different types, and it's manageable. The problem is that you will need to implement a key struct for each type and combination. But since you currently need to create a load function for each resource type anyway, I guess this will be a minor problem.


struct TextureKey
{
    string name;      // texture name
    int sizeX, sizeY; // texture size

    bool operator<(const TextureKey &b) const
    {
        return std::strcmp(name.c_str(), b.name.c_str()) < 0 && sizeX == b.sizeX && sizeY == b.sizeY;
    }
};

...

map<TextureKey, TexturePtr> mTexturePtrs;


(Have not tried the above code, but it looks right :) )
David Gustavsson
Developer on "Hello space"
IndieDB
As Waterlimon said, you can add the dimensions to the string key, instead of only using the texture name.

The key could look something like this "texture.bmp 500 200".
I'd recommend against putting the dimensions in the string, if it can be avoided. Better to have type safety.

Cape's code has the right idea, but is incorrect. I believe the following would be more correct:

struct TextureKey
{
    std::string name;
    int width;
    int height;
};

bool operator<(const TextureKey &a, const TextureKey &b)
{
    // Incorrect: return (a.width < b.width) || (a.height < b.height) || (a.name < b.name);
    if(a.width < b.width)
    {
        return true;
    }
    else if(a.width == b.width)
    {
        if(a.height < b.height)
        {
            return true;
        }
        else if(a.height == b.height)
        {
            return a.name < b.name;
        }
    }
    return false;
}

map<TextureKey, TexturePtr> textureCache;
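If your compiler supports C++11, the chained comparisons can also be collapsed with std::tie, which is harder to get wrong (a sketch, not the poster's code; it yields the same width-then-height-then-name ordering):

```cpp
#include <string>
#include <tuple>

struct TextureKey
{
    std::string name;
    int width;
    int height;
};

// std::tie builds a tuple of references; tuples compare element by
// element, left to right, giving a correct strict weak ordering.
bool operator<(const TextureKey &a, const TextureKey &b)
{
    return std::tie(a.width, a.height, a.name)
         < std::tie(b.width, b.height, b.name);
}
```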
Thanks to all of you guys! I have chosen the custom key struct with operator<(). The extended-string approach would also have worked, but the struct is more flexible.

@rip-off: I think your operator<() contains a bug. Assume these values for a and b: {52, 30, "Foo"} and {50, 40, "Foo"}.
Width has the highest priority, so operator<(a, b) should return false. But your operator< will return true, since a.width < b.width is false but a.height < b.height evaluates to true.

My struct looks like this (I hope it's correct ;))

struct TextureCacheID {
    std::string textureFilename;
    uint width, height;

    bool operator<(const TextureCacheID& rhs) const {
        return (textureFilename < rhs.textureFilename) ||
               ((textureFilename == rhs.textureFilename) && (width < rhs.width)) ||
               ((textureFilename == rhs.textureFilename) && (width == rhs.width) && (height < rhs.height));
    }
};


[quote]
@rip-off: I think your operator<() contains a bug.
[/quote]
You are correct. I've edited the post to include a hopefully correct implementation.
I'm curious, why does the user need to specify the size of a texture asset? To me the texture size is part of the internal implementation, and can either be discovered at runtime by examining the texture header or by generating an asset manifest offline that contains this kind of information.
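For many formats that discovery is cheap. As a hedged sketch (my own illustration, not anyone's engine code): a PNG file stores its width and height as big-endian 32-bit integers at byte offsets 16 and 20, inside the IHDR chunk right after the 8-byte signature, so the real size can be read without decoding the image:

```cpp
#include <cstdint>
#include <cstdio>

// Sketch only: read the dimensions straight from a PNG header so the
// caller never has to pass them in. Bytes 16..23 hold the IHDR chunk's
// big-endian width and height fields.
bool getPngSize(const char* path, uint32_t* width, uint32_t* height)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f)
        return false;
    unsigned char h[24];
    bool ok = std::fread(h, 1, sizeof h, f) == sizeof h;
    std::fclose(f);
    if (!ok)
        return false;
    *width  = (uint32_t(h[16]) << 24) | (uint32_t(h[17]) << 16) | (uint32_t(h[18]) << 8) | h[19];
    *height = (uint32_t(h[20]) << 24) | (uint32_t(h[21]) << 16) | (uint32_t(h[22]) << 8) | h[23];
    return true;
}
```

The discovered size could then be passed on to D3DXCreateTextureFromFileEx() to suppress the power-of-2 rounding, without widening the cache key at all.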

This topic is closed to new replies.
