MartinSmith160

Improving loading times for 2D animations in DirectX


Hi Guys,

 

I was wondering if you could help me. I have a 2D game built using DirectX 9 and it's all working fine. However, I would like to reduce the game's load time, as there is a substantial amount of graphics to load. My sprite class uses LPDIRECT3DTEXTURE9 for the images, and an image is loaded using D3DXGetImageInfoFromFile().

 

This approach works perfectly; it's just that some animations have a large number of large individual frames to load, which increases both the load time and the overall size of the game by quite a bit. I was looking at replacing these animations with some form of Flash animation sequence, but I can't find a way to maintain the transparency of a Flash animation in DirectX.

 

Any help with this would be greatly appreciated.

 

All the best,

Martin


Pack all the data into one binary file, then use D3DX...LoadFromMemory or fill your buffers manually (or whatever your approach is).

I had ~3000 static objects, each with its own file, so loading took ~40 seconds; after packing everything into one file it now takes ~2 seconds.


Hi belfegor, thanks for the reply.

This sounds along the right lines, but I don't quite follow. Could you explain a little further? Do you mean I should save all of the graphics data into one binary file and then use that to load the DirectX textures?


Is this along the lines of what you mentioned:

 

- Create a struct which contains a set of D3DXIMAGE_INFO items

- I would then open my binary data file and initialize all the D3DXIMAGE_INFO items in the struct

- I would then pass these items into my sprite object's Load function instead of a filename.

 

This would cut out all the individual creations of the D3DXIMAGE_INFO objects, which is what happens now.

OK, so this is the load function. Could I just save out the ST_Texture objects and then load them back in? I'm worried about the DirectX contexts not being right.

bool ST_Texture::Load(LPDIRECT3DDEVICE9 d3dDevice, std::string filename, D3DCOLOR transcolor)
{
    //standard Windows return value
    HRESULT result;

    //get width and height from the image file
    result = D3DXGetImageInfoFromFile(filename.c_str(), &info);
    if (result != D3D_OK)
    {
        st_engine->LogError("Failed to load graphics file: " + filename, false, "ENGINE_ERRORS.txt");
        texture = NULL;
        return false;
    }

    //create the new texture by loading the image file
    result = D3DXCreateTextureFromFileEx(
        d3dDevice,         //Direct3D device object
        filename.c_str(),  //image filename
        info.Width,        //image width
        info.Height,       //image height
        1,                 //mip-map levels (1 for no chain)
        0,                 //usage flags (none)
        D3DFMT_UNKNOWN,    //surface format (take it from the file)
        D3DPOOL_DEFAULT,   //memory class for the texture
        D3DX_DEFAULT,      //image filter
        D3DX_DEFAULT,      //mip filter
        transcolor,        //color key for transparency
        &info,             //image file info (from the loaded file)
        NULL,              //color palette
        &texture);         //destination texture

    //make sure the texture was created correctly
    if (result != D3D_OK)
    {
        st_engine->LogError("Failed to load graphics file: " + filename, false, "ENGINE_ERRORS.txt");
        texture = NULL;
        return false;
    }

    return true;
}


Do you know how to read and write binary files?

 

Here is a simple example that reads an actual texture file and then uses D3DXCreateTextureFromFileInMemory (or you could use the Ex version, which works the same way):

 

std::ifstream is("texture.dds", std::ios::binary);
is.seekg(0, is.end);
auto length = is.tellg();
is.seekg(0, is.beg);
std::vector<char> vFile(length);
is.read(&vFile[0], vFile.size());
is.close();
hr = D3DXCreateTextureFromFileInMemory(d3d9device, (LPCVOID)&vFile[0], vFile.size(), &texture);
if (FAILED(hr))
{
    MessageBox(0, TEXT("D3DXCreateTextureFromFileInMemory() failed!"), 0, 0);
}

 

If you pack a bunch of textures into one binary, you can also write out whatever info you need (offset to a specific texture, dimensions, names, length in bytes, ...), and then read it back, for example:

 

 

std::ifstream is("packed_textures.bin", std::ios::binary);
... //find the offset to the specific texture
is.seekg(...); //put the cursor at that position
UINT len;
is.read((char*)&len, sizeof(UINT));
std::vector<char> vFile(len);
is.read(&vFile[0], vFile.size());
hr = D3DXCreateTextureFromFileInMemory(d3d9device, (LPCVOID)&vFile[0], vFile.size(), &texture);
...


Hi Belfegor,

 

Thanks for the code, that really helps. I fully understand the first section of code; I just don't get a few things about the second. When there is more than one texture in the binary file, I'm not sure how to search through the file to pick out each individual texture's information.

 

This might be related to the previous question: I'm not sure how you actually "pack" the textures into one binary file. All my graphics are .tga files. Would I have to write a program to convert each .tga into a .dds file and then save them all into one combined file? I have never used a .dds file before, so I'm not really sure how to use them.


All my Graphics are .tga files

No need to convert to .dds; from the documentation I linked for D3DXCreateTextureFromFileInMemory/Ex:

This function supports the following file formats: .bmp, .dds, .dib, .hdr, .jpg, .pfm, .png, .ppm, and .tga

 
You should build your own system for accessing the texture data.
I'll make a simple example for writing/packing:
...
struct TextureInfo
{
    std::string name; //file name/path (dds, tga...)
    D3DXIMAGE_INFO ii;
};
std::vector<TextureInfo> vTextureNames; //fill with info for the textures to be packed; use D3DXGetImageInfoFromFile to get the D3DXIMAGE_INFO
...
std::ofstream os("packed_textures.bin", std::ios::binary);
UINT texFilesCount = vTextureNames.size();
os.write((const char*)&texFilesCount, sizeof(UINT));

for (size_t i = 0; i < vTextureNames.size(); ++i)
{
    UINT nameLen = vTextureNames[i].name.size();
    os.write((const char*)&nameLen, sizeof(UINT)); //to know how much to read later
    os.write(vTextureNames[i].name.c_str(), vTextureNames[i].name.size());
    os.write((const char*)&vTextureNames[i].ii, sizeof(D3DXIMAGE_INFO));

    //open the texture file to copy it into our binary
    std::ifstream is(vTextureNames[i].name, std::ios::binary);
    is.seekg(0, is.end);
    UINT length = is.tellg();
    is.seekg(0, is.beg);
    std::vector<char> vFile(length);
    is.read(&vFile[0], vFile.size());
    is.close();

    os.write((const char*)&length, sizeof(UINT)); //to know how much to read later
    os.write((const char*)&vFile[0], vFile.size());
}
os.close();
Does this make some sense?
