Precompiled HLSL shaders

1 comment, last by Dawoodoz 12 years, 6 months ago
I am going to save and load precompiled shaders automatically to reduce most of the loading time.

What is the safest way to store the loaded data in an ID3DBlob** so that I can return the same data type as when compiling a new shader? :)

Does the compiled shader returned by D3DX11CompileFromMemory depend in any way on the current graphics card's instruction set?
Is there a safe way to know if the shader was compiled for another graphics card? :unsure:
You don't need to concern yourself with different graphics cards: the blob the compiler returns is device-independent bytecode, and the driver translates it into the card's native instructions when you create the shader object from it.

Here is how I do it:

#include <windows.h>      // MessageBoxA
#include <d3dcompiler.h>  // D3DCreateBlob / ID3D10Blob (link with d3dcompiler.lib)
#include <fstream>
#include <string>

// -------------------------------------------------------
// Save compiled shader bytecode to a binary file
// -------------------------------------------------------
static void SaveShaderToBinaryFile(const std::string& file, ID3D10Blob* shaderBlob)
{
    // Nothing to save if compilation failed
    if (!shaderBlob)
        return;

    std::ofstream fout(file.c_str(), std::ios::binary);
    if (!fout)
        return;

    // Write the bytecode size first, then the raw bytecode itself
    int size = (int)shaderBlob->GetBufferSize();
    fout.write((const char*)&size, sizeof(int));
    fout.write((const char*)shaderBlob->GetBufferPointer(), size);
}

// -------------------------------------------------------
// Load compiled shader bytecode from a binary file
// -------------------------------------------------------
static ID3D10Blob* LoadShaderFromBinaryFile(const std::string& file)
{
    std::ifstream fin(file.c_str(), std::ios::binary);
    if (!fin)
    {
        std::string es = "LoadShaderFromBinaryFile failed: cannot open file: ";
        es.append(file);
        MessageBoxA(NULL, es.c_str(), "ERROR", MB_OK);
        return NULL; // bail out instead of reading from a bad stream
    }

    // Read the size header, then read the bytecode straight into a new blob
    int size = 0;
    fin.read((char*)&size, sizeof(int));
    if (size <= 0)
        return NULL;

    ID3D10Blob* shaderBlob = NULL;
    if (FAILED(D3DCreateBlob(size, &shaderBlob)))
        return NULL;

    fin.read((char*)shaderBlob->GetBufferPointer(), size);
    return shaderBlob;
}
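
For completeness, here is a rough sketch of how the two functions could be tied into a compile-or-load path. It is only an illustration: it uses D3DCompile rather than D3DX11CompileFromMemory, and the file name "shader_vs.cso", the entry point "main" and the profile "vs_4_0" are placeholders.

// Sketch only: reuse cached bytecode if the .cso file exists, otherwise
// compile the HLSL source and cache the result for next time.
static ID3D10Blob* GetVertexShaderBytecode(const std::string& hlslSource)
{
    ID3D10Blob* blob = NULL;

    std::ifstream cached("shader_vs.cso", std::ios::binary);
    if (cached) // a cached copy exists, reuse it
        blob = LoadShaderFromBinaryFile("shader_vs.cso");

    if (!blob) // no cache (or loading failed), compile from source
    {
        ID3D10Blob* errors = NULL;
        HRESULT hr = D3DCompile(hlslSource.c_str(), hlslSource.size(), NULL,
                                NULL, NULL, "main", "vs_4_0", 0, 0,
                                &blob, &errors);
        if (errors)
            errors->Release();
        if (FAILED(hr))
            return NULL;
        SaveShaderToBinaryFile("shader_vs.cso", blob);
    }
    return blob;
}

The returned blob is then used exactly like a freshly compiled one, e.g. device->CreateVertexShader(blob->GetBufferPointer(), blob->GetBufferSize(), NULL, &vertexShader).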
Thanks, then I just add my checksum and version number to the file. :)
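
Something along these lines, just as a sketch (the header layout and the checksum function here are made up for illustration):

// Hypothetical header written in front of the bytecode in the cache file.
struct ShaderCacheHeader
{
    unsigned int version;   // bumped whenever the HLSL source or compile flags change
    unsigned int checksum;  // checksum of the HLSL source text
    unsigned int size;      // bytecode size in bytes
};

// A trivial example checksum over the shader source text.
static unsigned int SimpleChecksum(const std::string& text)
{
    unsigned int sum = 0;
    for (size_t i = 0; i < text.size(); ++i)
        sum = sum * 31 + (unsigned char)text[i];
    return sum;
}

On load, the version and checksum are compared against the current source before the cached bytecode is trusted; on a mismatch the shader is simply recompiled and the file rewritten.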

This topic is closed to new replies.
