
Member Since 28 Mar 2012
Offline Last Active Jun 19 2013 02:09 AM

Posts I've Made

In Topic: Improving loading times for 2D animations in Directx

31 May 2013 - 07:09 AM

Maybe, maybe. For some scenarios in the game a video won't work, but I guess that's more to do with the game design than a DirectX issue.


Cheers for the advice though.

In Topic: Improving loading times for 2D animations in Directx

31 May 2013 - 04:29 AM

Hi Tom


I have tried this approach before, but with the big animations (some have full-screen frames) the graphics card fails to load them because it has a maximum texture size.


For example, I have an animation which has 60 frames and each frame is 1024x768. If I tried to put that into a sprite sheet it would be huge, and from memory most graphics cards have a max texture size of something like 2048x2048.


When using this approach before, DirectX just renders a white box the size of a single frame. Am I looking at this in the wrong way?



In Topic: Improving loading times for 2D animations in Directx

31 May 2013 - 02:46 AM

I wrote a test program that created 500 textures. All the textures were mixed, so different file types and directories. I used the standard load-from-file method, then the load from a single binary file, and measured the durations.


It's still faster to use the load-from-file method; the binary file takes longer. Not really sure what I'm going to do now. I might have to look into getting Flash video into the game instead of using all the big animations. I started off trying to get Flash in, but I couldn't get the transparency of a video to show through, which meant it was hard to mix the videos in with the other game graphics.


I will keep trying though.


Cheers mate.

In Topic: Improving loading times for 2D animations in Directx

30 May 2013 - 01:27 PM

OK, cheers for making that code. I ran it and used one of my graphics. After some testing I think .tga is better than .png; they seem to load faster. Anyway, I made some changes to my code to reflect your feedback and now files are loading faster. I ran the 3000 test in your program and then in mine, and they were pretty much the same: around 4 seconds to load 3000 textures, which seems great.




I added the time measuring between the start and end of loading to my application, then tested how long it would take to load 3000 textures via the old method, using the D3DXCreateTextureFromFileEx function, and they weren't very far apart at all.


I will, however, start using this loading technique in my game and see if I get faster results.


Thanks again for the help, you're a true legend.


I'll let you know how I get on.

In Topic: Improving loading times for 2D animations in Directx

30 May 2013 - 08:23 AM

Hi Belfegor,


I have done what you said and it's working. I have written the code to do the following:


  • Pack data into one big binary file

  • Extract the data in a usable form

  • load a LPDIRECT3DTEXTURE9 using that data


My question now: I think my extraction must be poor, because loading now takes longer.


This is my header code:

struct TextureInfo
{
	std::string name;   // file name/path (dds, tga...)
	D3DXIMAGE_INFO ii;  // image info, packed alongside the file bytes
};

struct Texture_Item
{
	std::string name;
	std::vector<char> info_buffer;
	std::vector<char> buffer;
};

std::vector<std::string> texture_filenames;
std::vector<TextureInfo> vTextureNames;
std::vector<Texture_Item> vTextureItems;

void CreateBinaryFile();
void ExtractBinaryData();

ST_Texture test_bulk_textures[3000];
ST_Sprite *test_bulk_sprites[3000];


Here is my Pack data function:

void GameApp::CreateBinaryFile()
{
	// texture_filenames is assumed to be filled already (3000 entries in the test)
	TextureInfo temp_texture_info;
	for(size_t i = 0; i < texture_filenames.size(); i++)
	{
		temp_texture_info.name = texture_filenames[i];
		D3DXGetImageInfoFromFile(temp_texture_info.name.c_str(), &temp_texture_info.ii);
		vTextureNames.push_back(temp_texture_info);
	}

	std::ofstream os("packed_textures.bin", std::ios::binary);
	UINT texFilesCount = vTextureNames.size();
	os.write( (const char*)&texFilesCount, sizeof(UINT) );

	for(size_t i = 0; i < vTextureNames.size(); ++i)
	{
		UINT nameLen = vTextureNames[i].name.size();
		os.write( (const char*)&nameLen, sizeof(UINT) );  // to know how much to read later
		os.write( vTextureNames[i].name.c_str(), nameLen );
		os.write( (const char*)&vTextureNames[i].ii, sizeof(D3DXIMAGE_INFO) );

		// open the texture file and copy its bytes into our binary
		std::ifstream is(vTextureNames[i].name.c_str(), std::ios::binary);
		is.seekg(0, is.end);
		UINT length = (UINT)is.tellg();
		is.seekg(0, is.beg);
		std::vector<char> vFile(length);
		is.read( &vFile[0], vFile.size() );
		os.write( (const char*)&length, sizeof(UINT) ); // to know how much to read later
		os.write( &vFile[0], vFile.size() );
	}
}


This is my Unpack function:


void GameApp::ExtractBinaryData()
{
	Texture_Item temp_item;

	UINT total_texture_count = 0;
	UINT name_length = 0;
	UINT texture_byte_length = 0;

	std::ifstream is("packed_textures.bin", std::ios::binary);

	// read the total number of textures
	is.read( (char*)&total_texture_count, sizeof(UINT) );

	for(UINT i = 0; i < total_texture_count; i++)
	{
		is.read( (char*)&name_length, sizeof(UINT) );
		temp_item.name.resize(name_length);           // size the string before reading into it
		is.read( &temp_item.name[0], name_length );

		temp_item.info_buffer.resize(sizeof(D3DXIMAGE_INFO));
		is.read( &temp_item.info_buffer[0], sizeof(D3DXIMAGE_INFO) );

		is.read( (char*)&texture_byte_length, sizeof(UINT) );
		temp_item.buffer.resize(texture_byte_length); // size the buffer before reading into it
		is.read( &temp_item.buffer[0], texture_byte_length );

		vTextureItems.push_back(temp_item);
	}
}






And this is my new load function that uses that data:


bool ST_Texture::Load(LPDIRECT3DDEVICE9 d3dDevice, std::string filename, std::vector<char> d3d_info_buffer, std::vector<char> graphic_buffer, D3DCOLOR transcolor)
{
	//standard Windows return value
	HRESULT result;

	//get width and height from the packed info buffer -- couldn't get this working
	/*result = D3DXGetImageInfoFromFileInMemory((LPCVOID)&d3d_info_buffer[0], d3d_info_buffer.size(), &info);
	if (result != D3D_OK)
	{
		st_engine->LogError("Failed to load graphics file: INFO BUFFER", false, "ENGINE_ERRORS.txt");
		texture = NULL;
		return 0;
	}*/

	//get width and height from the bitmap file instead
	result = D3DXGetImageInfoFromFile(filename.c_str(), &info);
	if (result != D3D_OK)
	{
		st_engine->LogError("Failed to load graphics file: " + filename, false, "ENGINE_ERRORS.txt");
		texture = NULL;
		return 0;
	}

	result = D3DXCreateTextureFromFileInMemoryEx(
		d3dDevice,             //Direct3D device
		&graphic_buffer[0],    //file bytes from the packed binary
		graphic_buffer.size(), //size of those bytes
		info.Width,            //texture width
		info.Height,           //texture height
		1,                     //mip-map levels (1 for no chain)
		0,                     //usage
		D3DFMT_UNKNOWN,        //surface format (default)
		D3DPOOL_DEFAULT,       //memory class for the texture
		D3DX_DEFAULT,          //image filter
		D3DX_DEFAULT,          //mip filter
		transcolor,            //color key for transparency
		&info,                 //bitmap file info (from loaded file)
		NULL,                  //color palette
		&texture );            //destination texture

	if (result != D3D_OK)
	{
		st_engine->LogError("Failed to load graphics file: FROM BUFFER", false, "ENGINE_ERRORS.txt");
		texture = NULL;
		return 0;
	}

	return 1;
}




As you can see from the load function, I couldn't get the loading of the D3DXIMAGE_INFO from memory working; I'm not sure why, so I tried it the standard way and it worked. So at least I know the graphic binary data load is working.


My question is, am I doing something obviously wrong? Because my load time is huge now. ExtractBinaryData() takes ages and so does the actual load.


It worked great for one file, but then I tried your example of using 3000 items; now it takes ages.


Sorry, I know that was a lot.