They are read-only once they are deserialized.
That solution looks awesome; if I can skip the serialization library I'm all for it, as I can then see no reason for it. Can you give a brief explanation of how this solution would work?
The function I posted shows that there is no work to be done in deserialization (it's just a cast of bytes to your structure type). The magic behind that is in the changes to your structures included above the function. The header that I include at the top is open-source (MIT license), or you can implement your own pretty easily by copying what the header does.
Pointers have been replaced with Offset<T>'s, which are basically just 32-bit integers that hold a relative offset value.
To convert an Offset<T> into a T* at runtime, the operation is:
int& offset = ...;            // given an integer offset variable
char* bytes = (char*)&offset; // cast the variable's address to a byte address
bytes += offset;              // increment the byte pointer by the value of the integer
T* object = (T*)bytes;        // cast the new address to the desired object type
In that header, this is done inside Offset<T>::operator->, so that you can use these offset variables as if they were pointers!
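For concreteness, a minimal Offset<T> along those lines might look like this. This is a sketch of the idea, not the actual header's implementation (the real one likely also handles const-ness, null offsets, etc.):

```cpp
#include <cstdint>

// Hypothetical minimal Offset<T> -- a sketch, not the actual header's code.
// The stored value is the distance in bytes from this field to the object
// it "points" at.
template <typename T>
struct Offset
{
    int32_t value; // relative offset in bytes

    T* get()
    {
        char* bytes = (char*)this; // address of the offset field itself
        bytes += value;            // add the relative offset
        return (T*)bytes;         // reinterpret as the target type
    }
    T* operator->() { return get(); }
    T& operator*()  { return *get(); }
};
```

With operator-> defined, member access through an Offset<T> reads exactly like access through a raw T*.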
Now that we're not using pointers (which are absolute memory addresses) and are instead using relative/local memory offsets, we're able to save them to disk. The same offset value can be stored on disk and used in RAM, so no processing is required on load -- instead, every time you use the Offset<T> variable as a pointer, you pay the cost of an extra addition instruction, which is negligible. In fact, performance may improve despite that extra addition, because the locality of all your data is now guaranteed to be great (it's all in a single contiguous allocation, whereas std::vectors are at the mercy of wherever new feels like storing your data).
The other change to your structures is I've replaced the std::vector<T>'s with List<T>'s from that header. This is a variable-sized structure, which begins with an int32 "header" containing the length of the array, and then the header is followed by the actual array data.
Because List<T> is a variable-sized struct, you can't easily embed it as a member: its size isn't known at compile time, so your member variable is only big enough to hold the header, and the actual array data would overlap with the other members. To fix that, I use offsets to lists in the modified version of your code:
struct Foo_Broken
{
    List<Bar> a; // header followed by data
    List<Bar> b; // uh oh, a's data will overwrite b's header
};
struct Foo_Fixed
{
    Offset<List<Bar>> a; // the list is somewhere else, not right here, no overflow
    List<Bar> b;         // this one doesn't *have* to use an offset.
    // Just be aware that now Foo_Fixed is a variable-sized structure, because it will be followed by b's data!
};
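For illustration, a minimal List<T> matching that description could look like this (again a sketch, not the actual header's code):

```cpp
#include <cstdint>

// Hypothetical minimal List<T> -- a 32-bit count immediately followed by
// `count` elements of T in the same block of memory. A sketch matching the
// description above, not the actual header's implementation.
template <typename T>
struct List
{
    int32_t count; // the "header": number of elements that follow

    T* data()            { return (T*)(this + 1); } // elements start right after the count
    T& operator[](int i) { return data()[i]; }
    int size() const     { return count; }
};
```

Note that data() just computes "one List<T> past this", i.e. the address 4 bytes after the count, which is exactly where the array contents live in the file.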
So that's the "deserialization"/runtime part covered, which is easy. The tricky part is the serialization routines to get data into this format.
Personally, I generate all my data from some C# tools, so I've made some extensions to C#'s BinaryWriter to help with this task... but the same ideas would work with any kind of "binary file writer" class that can write data of different sizes, tell you its position in the file, and jump back and forth in the file.
Say I wanted to write some data to match the C++ struct of:
struct Header
{
    Offset<List<float>> data1;
    Offset<List<u8>> data2;
};
I'd use some C# code like this:
List<float> data1 = ...
List<byte> data2 = ...
//^^ inputs
BinaryWriter writer = ...
//^^ output file
//first write some placeholder data for the header structure (two 32-bit offsets), but remember their positions
long headerData1pos = writer.WriteTemp32();
long headerData2pos = writer.WriteTemp32();
//now to write the List<float> data1 member
//first, rewind to headerData1pos, and overwrite it with the offset from there to here, then fast-forward back to here
writer.OverwriteTemp32(headerData1pos, writer.RelativeOffset(headerData1pos));
writer.Write32( data1.Count ); // write the list header - 32-bit array size
foreach( var data in data1 )
    writer.WriteFloat( data ); // write the array contents
//now to write the List<u8> data2 member
//again, rewind to the header, write the actual offset value in it, then fast-forward back to the end of the file
writer.OverwriteTemp32(headerData2pos, writer.RelativeOffset(headerData2pos));
writer.Write32( data2.Count ); // write the list header - 32-bit array size
foreach( var data in data2 )
    writer.Write8( data ); // write the array contents
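As an aside, WriteTemp32, OverwriteTemp32 and RelativeOffset aren't standard BinaryWriter methods -- they're small extensions, and the names above are from my own tools. A C++ sketch of what such a writer might look like (my guess at equivalent behavior, using an in-memory byte vector where a real file writer would seek):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical in-memory equivalent of the writer helpers used above.
// Writes are native-endian, so the output matches the hex dump below
// only on a little-endian machine.
struct BlobWriter
{
    std::vector<char> bytes;

    long Position() const { return (long)bytes.size(); }

    // Reserve 4 placeholder bytes and remember where they are.
    long WriteTemp32()
    {
        long pos = Position();
        bytes.resize(bytes.size() + 4);
        return pos;
    }

    // Distance from a previously reserved field to the current position.
    int32_t RelativeOffset(long fieldPos) const { return (int32_t)(Position() - fieldPos); }

    // Go back and fill in a placeholder with its final value.
    void OverwriteTemp32(long pos, int32_t value) { std::memcpy(&bytes[pos], &value, 4); }

    void Write32(int32_t v)  { Append(&v, 4); }
    void WriteFloat(float f) { Append(&f, 4); }
    void Write8(uint8_t b)   { Append(&b, 1); }

private:
    void Append(const void* p, size_t n)
    {
        const char* c = (const char*)p;
        bytes.insert(bytes.end(), c, c + n);
    }
};
```

The key design point is that an offset field is always patched at the moment its target is about to be written, so RelativeOffset is simply "current position minus field position".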
The resulting file's bytes can then be loaded into RAM in your C++ app, cast to a Header*, and it will just work.
To save space on disc, your file system / file loader can implement some kind of compression on storing/loading files if you want, like ZLIB/GZ/LZMA/etc...
If the inputs to the above routine were 1 float with hex value 0x12345678, and 4 bytes with the values 1, 2, 3 and 4, the output file would look like this (when interpreted as groups of 32-bit integers expressed as hex):
0: 0x00000008 // data1 offset - jump forward 8 bytes to line #2
1: 0x0000000C // data2 offset - jump forward 12 bytes to line #4
2: 0x00000001 // data1 list header - 1 item in array
3: 0x12345678 // our float value
4: 0x00000004 // data2 list header - 4 items in array
5: 0x04030201 // our 4 byte values (in little endian order, the right hand byte is written/read before the left hand byte).
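As a sanity check, that exact 24-byte layout can be built and read back in C++ using minimal stand-in Offset/List types (hypothetical simplified versions, not the actual header; little-endian assumed, as in the dump):

```cpp
#include <cstdint>
#include <cstring>

// Stand-in types (hypothetical minimal versions, just enough to read
// the blob; not the actual header's code).
template <typename T> struct Offset
{
    int32_t value;
    T* operator->() { return (T*)((char*)this + value); }
};
template <typename T> struct List
{
    int32_t count;
    T* data() { return (T*)(this + 1); }
};
struct Header
{
    Offset<List<float>>   data1;
    Offset<List<uint8_t>> data2;
};

// Build the 24-byte example blob exactly as in the hex dump above
// (assumes a little-endian machine, like the dump does).
void buildExampleBlob(char* blob)
{
    const uint32_t words[5] = { 8, 12, 1, 0x12345678, 4 };
    std::memcpy(blob, words, sizeof(words));
    const uint8_t tail[4] = { 1, 2, 3, 4 }; // becomes the word 0x04030201
    std::memcpy(blob + 20, tail, 4);
}
```

Casting the buffer to Header* and following the offsets lands on the list headers at bytes 8 and 16, with the element data immediately after each count -- no parsing step anywhere.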
[hr]
Question - Do adapters convert DXT textures to table/map when loading, or do they just store in DXT format and do the calculations per-render?
GPUs have dedicated hardware to perform DXT decompression on pixels at the last possible moment (right when the shader asks for a pixel). This greatly improves performance because it means that the texture is still compressed even in the texture cache, which means more data can be cached at once, and less bandwidth is required per texture fetch.
Also, if using DXT5 and mipmaps, these can make the image around 4x larger than if using DXT1 with no mipmaps, which is also an issue.
To use this as an excuse to expand on the above statement on DXT performance: mipmaps are also extremely important for performance (regardless of texture format), because they improve the locality of texture fetches and reduce fetch bandwidth (during minification). So you should use them in most cases. As mentioned by cr88192, with DXT textures, mipmaps are usually saved on disc, but with other formats like JPEG, they're often generated on load.