Improving loading times for 2D animations in Directx


Recommended Posts

Hi Guys,

 

I was wondering if you could help me. I have a 2D game built using DirectX 9 and it's all working fine. I would, however, like to reduce the load time for the game, as there is a substantial amount of graphics to load. My sprite class uses LPDIRECT3DTEXTURE9 for the images, and an image's dimensions are read using D3DXGetImageInfoFromFile().

 

This approach works perfectly; it's just that I have some animations with a large number of large individual frames to load. This increases the load time and the size of the overall game by quite a bit. I was looking at a way to replace these animations with some form of Flash animation sequence, but I can't find a way to maintain the transparency of a Flash animation in DirectX.

 

Any help on this would be greatly appreciated,

 

All the best,

Martin


Pack all the data into one simple binary file, then use D3DX...LoadFromMemory or fill your buffers manually (or whatever your way is).

I had ~3000 static objects, and each had its own file, so it took ~40 seconds to load; after putting them all into one file it now takes ~2 seconds.


Hi belfegor, thanks for the reply.

 

This sounds along the right lines, but I don't really understand what you mean. Could you explain a little further? Do you mean I should save all the graphics data into one binary file and then use that to load the DirectX textures?


Is this along the lines of what you mentioned:

 

- Create a struct which contains a set of D3DXIMAGE_INFO items

- I would then open my binary data file and initialize all the D3DXIMAGE_INFO items in the struct

- I would then pass these items into my sprite object's load function instead of a filename.

 

Thereby cutting out all the individual creations of the D3DXIMAGE_INFO objects, which is what happens now.

OK, so this is the load function. Could I just save out the ST_Texture objects and then load them back in? I'm worried about the DirectX contexts not being right.

bool ST_Texture::Load(LPDIRECT3DDEVICE9 d3dDevice, std::string filename, D3DCOLOR transcolor)
{
	//standard Windows return value
	HRESULT result;

	//get width and height from bitmap file
	result = D3DXGetImageInfoFromFile(filename.c_str(), &info);
	if (result != D3D_OK)
	{
		st_engine->LogError("Failed to load graphics file: " + filename, false, "ENGINE_ERRORS.txt");
		texture = NULL;
		return 0;
	}

	//create the new texture by loading a bitmap image file
	result = D3DXCreateTextureFromFileEx(
		d3dDevice,          //Direct3D device object
		filename.c_str(),   //bitmap filename
		info.Width,         //bitmap image width
		info.Height,        //bitmap image height
		1,                  //mip-map levels (1 for no chain)
		0,                  //usage
		D3DFMT_UNKNOWN,     //surface format (default)
		D3DPOOL_DEFAULT,    //memory class for the texture
		D3DX_DEFAULT,       //image filter
		D3DX_DEFAULT,       //mip filter
		transcolor,         //color key for transparency
		&info,              //bitmap file info (from loaded file)
		NULL,               //color palette
		&texture );         //destination texture

	//make sure the bitmap texture was loaded correctly
	if (result != D3D_OK)
	{
		st_engine->LogError("Failed to load graphics file: " + filename, false, "ENGINE_ERRORS.txt");
		texture = NULL;
		return 0;
	}

	return 1;
}


Do you know how to read/write binary files?

 

Here is a simple example of reading from an actual texture file and then using D3DXCreateTextureFromFileInMemory (or you could use the Ex version; it's the same):

 

std::ifstream is("texture.dds", std::ios::binary);
is.seekg(0, is.end);
auto length = is.tellg();
is.seekg(0, is.beg);
std::vector<char> vFile(length);
is.read((char*)&vFile[0], vFile.size());
is.close();
HRESULT hr = D3DXCreateTextureFromFileInMemory(d3d9device, (LPCVOID)&vFile[0], vFile.size(), &texture);
if(FAILED(hr))
{
    MessageBox(0, TEXT("D3DXCreateTextureFromFileInMemory() failed!"), 0, 0);
}

 

If you were to pack a bunch of textures into one binary, you could write whatever info you need (offset to a specific texture, dimensions, names, length in bytes...), and then you could read, for example:

 

 

std::ifstream is("packed_textures.bin", std::ios::binary);
... //find offset to specific texture
is.seekg(...); // put cursor at that position
UINT len;
is.read( (char*)&len, sizeof(UINT) );
std::vector<char> vFile(len);
is.read((char*)&vFile[0], vFile.size());
hr = D3DXCreateTextureFromFileInMemory(d3d9device, (LPCVOID)&vFile[0], vFile.size(), &texture);
...
Edited by belfegor


Hi Belfegor,

 

Thanks for the code; that really helps. I fully understand the first section of code, I just don't get a few things regarding the second. When there is more than one texture in the binary file, I'm not sure how to search through the file to pick out each individual texture's information.

 

This might be related to the previous question: I'm not sure how you actually "pack" the textures into one binary file. All my graphics are .tga files. Would I have to write a program to convert each .tga into a .dds file and then save each one into one full file? I have never used a .dds file before, so I'm not really sure how to use them.


All my Graphics are .tga files

No need to convert to .dds; in the link I gave you for the D3DXCreateTextureFromFileInMemory/Ex documentation:

This function supports the following file formats: .bmp, .dds, .dib, .hdr, .jpg, .pfm, .png, .ppm, and .tga

 
You should make up your own system for accessing texture data.
I'll make a simple example for writing/packing:
...
struct TextureInfo
{
    std::string name; // file name/path (dds, tga...)
    D3DXIMAGE_INFO ii;
};
std::vector<TextureInfo> vTextureNames; // fill with textures info to be packed, use D3DXGetImageInfoFromFile to get D3DXIMAGE_INFO
...
std::ofstream os("packed_textures.bin", std::ios::binary);
UINT texFilesCount = vTextureNames.size();
os.write( (const char*)&texFilesCount, sizeof(UINT) );

for(size_t i = 0; i < vTextureNames.size(); ++i)
{
    UINT nameLen = vTextureNames[i].name.size();
    os.write( (const char*)&nameLen, sizeof(UINT) );  // to know how much to read later
    os.write( vTextureNames[i].name.c_str(), vTextureNames[i].name.size() );
    os.write( (const char*)&vTextureNames[i].ii, sizeof(D3DXIMAGE_INFO) );

    // opening texture file to copy it into our binary
    std::ifstream is(vTextureNames[i].name.c_str(), std::ios::binary);
    is.seekg(0, is.end);
    UINT length = is.tellg();
    is.seekg(0, is.beg);
    std::vector<char> vFile(length);
    is.read((char*)&vFile[0], vFile.size());
    is.close();

    os.write( (const char*)&length, sizeof(UINT) ); // to know how much to read later
    os.write( (const char*)&vFile[0], vFile.size() );
}
os.close();
Does this make some sense?


Hi Belfegor,

 

Just a quick question: each graphic file will have binary data for the D3DXIMAGE_INFO and the actual graphic data from the source graphic file. For a specific item, I will extract the binary data from the packed file and put it into a buffer. I was just wondering, does the data have to be in a specific order? So first I add the D3DXIMAGE_INFO bytes, then the graphic data, and then use that buffer in the D3DXCreateTextureFromFileInMemory function?

 

Thanks again for the help.

Martin

 
 


It doesn't matter what kind of data you put in or in what order; what matters is that you read it back in the same order you wrote it, and you need to know the types and sizes.

Sometimes you must first write the count of objects for array-like things such as vectors and strings before you write their contents, so you know how much to read later on:

 

write

 

std::vector<int> vInt;
...
std::size_t len = vInt.size();
os.write( (const char*)&len, sizeof(std::size_t) );
os.write( (const char*)&vInt[0], vInt.size() * sizeof(int) );

 

read

 

std::size_t len;
is.read( (char*)&len, sizeof(std::size_t) ); // first thing in file written is length so we read that first, sizeof(std::size_t) bytes
std::vector<int> vInt(len); // create buffer with enough space to read into it
is.read( (char*)&vInt[0], len * sizeof(int) ); // second thing in file is buffer to read, len * sizeof(int) bytes
Edited by belfegor


Hi Belfegor,

 

I have done what you said and it's working. I have written the code to do the following:

 

  • Pack the data into one big binary file
  • Extract the data in a usable form
  • Load an LPDIRECT3DTEXTURE9 using that data

 

My question now is: I think my extraction must be poor, because loading actually takes longer.

 

This Is my header code:

struct TextureInfo
{
    std::string name; // file name/path (dds, tga...)
    D3DXIMAGE_INFO ii;
};

struct Texture_Item
{
	std::string name;
	std::vector<char> info_buffer;
	std::vector<char> buffer;
};

std::vector<std::string> texture_filenames;
std::vector<TextureInfo> vTextureNames;
std::vector<Texture_Item> vTextureItems;

void CreateBinaryFile();
void ExtractBinaryData();

ST_Texture test_bulk_textures[3000];
ST_Sprite *test_bulk_sprites[3000];

 

Here is my Pack data function:

void GameApp::CreateBinaryFile()
{
	texture_filenames.clear();
	vTextureNames.clear();

	for(int i = 0; i < 3000; i++)
		texture_filenames.push_back("Graphics/cloud-day-3.tga");

	TextureInfo temp_texture_info;

	for(size_t i = 0; i < texture_filenames.size(); i++)
	{
		temp_texture_info.name = texture_filenames[i];
		D3DXGetImageInfoFromFile(temp_texture_info.name.c_str(), &temp_texture_info.ii);
		vTextureNames.push_back(temp_texture_info);
	}

	std::ofstream os("packed_textures.bin", std::ios::binary);
	UINT texFilesCount = vTextureNames.size();
	os.write( (const char*)&texFilesCount, sizeof(UINT) );

	for(size_t i = 0; i < vTextureNames.size(); ++i) 
	{
		UINT nameLen = vTextureNames[i].name.size();
		os.write( (const char*)&nameLen, sizeof(UINT) );  // to know how much to read later
		os.write( vTextureNames[i].name.c_str(), vTextureNames[i].name.size() );
		os.write( (const char*)&vTextureNames[i].ii, sizeof(D3DXIMAGE_INFO) );
	
		// opening texture file to copy it into our binary
		std::ifstream is(vTextureNames[i].name.c_str(), std::ios::binary);
		is.seekg (0, is.end);
		UINT length = is.tellg();
		is.seekg (0, is.beg);
		std::vector<char> vFile(length);
		is.read((char*)&vFile[0], vFile.size());
		is.close();
	
		os.write( (const char*)&length, sizeof(UINT) ); // to know how much to read later
		os.write( (const char*)&vFile[0], vFile.size() );
	}
	
	os.close();
}

 

This is my Unpack function:

 

void GameApp::ExtractBinaryData()
{
	vTextureItems.clear();

	Texture_Item temp_item;

	UINT total_texture_count = 0;
	UINT name_length = 0;
	UINT texture_byte_length = 0;

	std::ifstream is("packed_textures.bin", std::ios::binary);
	is.seekg (0, is.end);
	UINT length = is.tellg();
	is.seekg (0, is.beg);
	
	//Read total number of textures
	is.read( (char*) &total_texture_count, sizeof(UINT) );

	for(UINT i = 0; i < total_texture_count; i++)
	{
		temp_item.name = "";
		temp_item.info_buffer.clear();
		temp_item.buffer.clear();

		is.read( (char*) &name_length, sizeof(UINT) );
		
		temp_item.name.resize(name_length);
		is.read( (char*) &temp_item.name[0], temp_item.name.size() * sizeof(char));

		temp_item.info_buffer.resize(sizeof(D3DXIMAGE_INFO));
		is.read( (char*) &temp_item.info_buffer[0], sizeof(D3DXIMAGE_INFO) );

		is.read( (char*) &texture_byte_length, sizeof(UINT) );

		temp_item.buffer.resize(texture_byte_length);
		is.read( (char*) &temp_item.buffer[0], texture_byte_length);

		vTextureItems.push_back(temp_item);
	}

	is.close();
}

 

 

And this is my new load function to use that data:

 

bool ST_Texture::Load(LPDIRECT3DDEVICE9 d3dDevice,std::string filename, std::vector<char> d3d_info_buffer, std::vector<char> graphic_buffer, D3DCOLOR transcolor)
{
	//standard Windows return value
	HRESULT result;

	//get width and height from bitmap file
	/*result = D3DXGetImageInfoFromFileInMemory((LPCVOID)&d3d_info_buffer[0], d3d_info_buffer.size(), &info);
	if (result != D3D_OK) 	
	{
		st_engine->LogError("Failed to load graphics file: INFO BUFFER",false,"ENGINE_ERRORS.txt");
		texture = NULL;
		return 0;
	}*/

	//get width and height from bitmap file
	result = D3DXGetImageInfoFromFile(filename.c_str(), &info);
	if (result != D3D_OK) 	
	{
		st_engine->LogError("Failed to load graphics file: " + filename,false,"ENGINE_ERRORS.txt");
		texture = NULL;
		return 0;
	}

	result = D3DXCreateTextureFromFileInMemoryEx(
		d3dDevice,
		(LPCVOID)&graphic_buffer[0],
		graphic_buffer.size(),
		info.Width,
		info.Height,
		1,                     //mip-map levels (1 for no chain)
		D3DPOOL_DEFAULT,       //the type of surface (standard)
		D3DFMT_UNKNOWN,        //surface format (default)
		D3DPOOL_DEFAULT,       //memory class for the texture
		D3DX_DEFAULT,          //image filter
		D3DX_DEFAULT,          //mip filter
		transcolor,            //color key for transparency
		&info,                 //bitmap file info (from loaded file)
		NULL,                  //color palette
		&texture );            //destination texture

	if (result != D3D_OK) 	
	{
		st_engine->LogError("Failed to load graphics file: FROM BUFFER",false,"ENGINE_ERRORS.txt");
		texture = NULL;
		return 0;
	}

	return 1;

}

 

 

As you can see from the load function, I couldn't get the loading of the D3DXIMAGE_INFO from memory working; I'm not sure why. So I tried it the standard way and it worked, so at least I know the graphic binary data load is working.

 

My question is, am I doing something obviously wrong? Because my load time is huge now. The ExtractBinaryData() call takes ages and so does the actual load.

 

It worked great for one file, but then I tried your example of using 3000 items, and now it takes ages.

 

Sorry, I know that was a lot.


3000 textures? OMG!

 

I see you load the whole file into memory; that is not a good idea, as you will run out of memory fast.

Read one texture into a buffer, fill the texture, then discard that buffer.
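The read-one-discard-one idea could be sketched like this. It's a hypothetical round trip that uses a std::stringstream in place of the real packed_textures.bin, and two fake byte blobs standing in for texture files; real code would call D3DXCreateTextureFromFileInMemory on each blob before letting it go out of scope:

#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Write a single length-prefixed blob to the stream.
void WriteBlob(std::ostream& os, const std::vector<char>& blob)
{
    std::uint32_t len = static_cast<std::uint32_t>(blob.size());
    os.write(reinterpret_cast<const char*>(&len), sizeof(len));
    os.write(blob.data(), blob.size());
}

// Read the next length-prefixed blob; the caller discards the
// buffer once the texture has been created from it.
std::vector<char> ReadBlob(std::istream& is)
{
    std::uint32_t len = 0;
    is.read(reinterpret_cast<char*>(&len), sizeof(len));
    std::vector<char> blob(len);
    if (len > 0)
        is.read(blob.data(), len);
    return blob;
}

int main()
{
    // Pack two fake "texture files" into one binary stream.
    std::stringstream pack(std::ios::in | std::ios::out | std::ios::binary);
    WriteBlob(pack, {'T', 'G', 'A', '1'});
    WriteBlob(pack, {'T', 'G', 'A', '2'});

    // Unpack one blob at a time; each buffer goes out of scope
    // (is discarded) before the next one is read.
    for (int i = 0; i < 2; ++i)
    {
        std::vector<char> blob = ReadBlob(pack);
        // Real code would call D3DXCreateTextureFromFileInMemory here.
        std::cout << std::string(blob.begin(), blob.end()) << "\n";
    }
}

This way peak memory is one texture file plus the texture itself, rather than all 3000 files at once.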

 

...I couldn't get the loading of the D3DXIMAGE_INFO from memory working...

 

You are doing it all wrong: "graphic_buffer" holds the "file in memory", not "d3d_info_buffer", so:

 

result = D3DXGetImageInfoFromFileInMemory((LPCVOID)&graphic_buffer[0], graphic_buffer.size(), &info);

 

But you don't even need that, since you read that info from the file already. The code is a big mess now for a quick solution.

It should be read and then discarded:

 

//temp_item.info_buffer.resize(sizeof(D3DXIMAGE_INFO)); // NO!
//is.read( (char*) &temp_item.info_buffer[0], sizeof(D3DXIMAGE_INFO) );// NO!
...
D3DXIMAGE_INFO info;
is.read( (char*) &info, sizeof(D3DXIMAGE_INFO) );
 
result = D3DXCreateTextureFromFileInMemoryEx(
        ...,...,,....,
        &info, 
        NULL, //color palette
        &texture ); 
 

 

Let me set up a project and I will write a whole code sample for you.

Edited by belfegor


EDIT:

OK, here is the test in the attachment. I think the OS is doing some tricks behind our backs with file caching (or whatever), since I create 3000 textures from one file (as you do), and reading from one binary is just 100ms faster than the old "from file" method, although it took just ~1 second to load them all (Release mode), and I don't have an SSD.

I think it should be tested with different texture files, so it doesn't have a chance to do funny business and the results are valid for comparison.

Edited by belfegor


OK, cheers for making that code. I ran it with one of my graphics. After some testing, I think .tga is better than .png; they seem to load faster. Anyway, I made some changes to my code to reflect your feedback and now files are loading faster. I ran the 3000 test in your program and then in mine, and they were pretty much the same: around 4 seconds to load 3000 textures, which seems great.

 

but...

 

I added time measuring between the start and end of loading to my application, then tested the time it would take to load 3000 textures via the old method, using the D3DXCreateTextureFromFileEx function, and they weren't very far apart at all.

 

I will, however, start using this loading technique with my game and see if I start to see faster results.

 

Thanks again for the help, you're a true legend.

 

I'll let you know how I get on.


I wrote a test program that created 500 textures. All the textures were mixed, so different file types and directories. I used the standard load-from-file method and then the load-from-single-binary-file method, and measured the durations.

 

It's still faster to use the load-from-file method; the binary file takes longer. Not really sure what I'm going to do now. I might have to look into getting Flash video into the game instead of using all the big animations. I started off trying to get Flash in, but I couldn't get the transparency of a video to show through, which meant it was hard to mix the videos in with the other game graphics.

 

I will keep trying though.

 

Cheers, mate.


Do I understand correctly that you have each animation frame in a separate image file? Maybe you could try to put more frames into one image, in some kind of matrix, and then just change which part of the image the sprite is using, instead of which image the sprite is using. That could be faster overall.
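The source-rectangle math for a sprite sheet like that could be sketched as follows. This is a minimal, hypothetical example assuming frames laid out left-to-right, top-to-bottom in a fixed grid; in real D3D9 code the computed rect would be passed as the pSrcRect argument of ID3DXSprite::Draw:

#include <iostream>

// Pixel rectangle of one frame inside a sprite sheet; mirrors the
// Windows RECT layout (left, top, right, bottom).
struct FrameRect { int left, top, right, bottom; };

// Compute where frame `index` lives in a sheet holding `columns`
// frames per row, each frame being frameW x frameH pixels.
FrameRect FrameSource(int index, int columns, int frameW, int frameH)
{
    int col = index % columns;
    int row = index / columns;
    FrameRect r;
    r.left   = col * frameW;
    r.top    = row * frameH;
    r.right  = r.left + frameW;
    r.bottom = r.top + frameH;
    return r;
}

int main()
{
    // Frame 5 in a 4-column sheet of 64x64 frames:
    FrameRect r = FrameSource(5, 4, 64, 64);
    std::cout << r.left << "," << r.top << " -> "
              << r.right << "," << r.bottom << "\n"; // 64,64 -> 128,128
}

Advancing the animation then only changes the rect, not the bound texture, which avoids both per-frame file loads and per-frame texture switches.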


Hi Tom,

 

I have tried this approach before, but with the big animations (some have full-screen frames) the graphics card fails to load them, because it has a maximum texture size.

 

For example, I have an animation which has 60 frames, and each frame is 1024x768. If I tried to put that into a sprite sheet it would be huge, and from past memory most graphics cards have a max size of something like 2048x2048.

 

When I used this approach before, DirectX just rendered a white box the size of a single frame. Am I looking at this in the wrong way?
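The arithmetic on how many sheets such an animation would need can be worked through like this. It's a sketch assuming a square 2048x2048 cap; the real limit should be queried from the device caps (D3DCAPS9::MaxTextureWidth/MaxTextureHeight), and may differ per card:

#include <iostream>

// How many full sheets are needed to hold `frames` frames of
// frameW x frameH pixels, given a square max texture size.
int SheetsNeeded(int frames, int frameW, int frameH, int maxSize)
{
    int cols = maxSize / frameW;               // frames per row
    int rows = maxSize / frameH;               // rows per sheet
    int perSheet = cols * rows;                // frames per sheet
    if (perSheet == 0) return -1;              // a single frame doesn't fit
    return (frames + perSheet - 1) / perSheet; // round up
}

int main()
{
    // 60 frames of 1024x768 with a 2048x2048 limit:
    // 2 columns * 2 rows = 4 frames per sheet -> 15 sheets.
    std::cout << SheetsNeeded(60, 1024, 768, 2048) << "\n"; // prints 15
}

So full-screen frames don't rule sheets out entirely, but 15 sheets of 2048x2048 is still a lot of texture memory, which supports the point that video may be the better fit here.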

 

Cheers.


Oh, so we are talking about images that huge. Then it's also quite clear why you are having problems with loading times ;)

Maybe it really is time to move from this frame-by-frame animation to some video files?


Maybe, maybe. For some scenarios in the game a video won't work, but I guess that's more to do with the game design than a DirectX issue.

 

Cheers for the advice though.
