Model Format: Concept and Loading


11 replies to this topic

#1 LarryKing   Members   -  Reputation: 414

Posted 03 January 2013 - 06:37 PM

Hello all, I'm looking for some feedback regarding a model format and loader I've been working on over the past few days. I'm transitioning to C++ from C#, and this has been a great exercise thus far, but I feel this post might end up a bit lengthy.
 
Some background as to what I wanted to accomplish with the file format:

  • The model can contain a large number of meshes.
  • Each mesh can have an arbitrary number of vertex buffers; not that a mesh should have 8, 16, or even 65,535 vertex streams, just that it could.
  • Each mesh can have an arbitrary number of textures; again, see above.
  • Each mesh, and the model itself, should have some sort of bounding volume.
  • Fast loading; the model should use local pointers and require minimal live processing.
  • Possibility for compression.

 
After some research, it appears that loading the file directly into memory and then adjusting local pointers is one of the fastest ways to load an object. So the entire format mirrors the objects that make up my model.
 
A model contains: a pointer to some ModelTextures, a pointer to the MeshHeaders, a pointer to the MeshCullDatas, and a pointer to the MeshDrawDatas.
 
I've tried to implement some Data Oriented Design – a very different concept coming from C#. I've split up the meshes into arrays of data needed for different operations: Culling and Drawing.
 
Furthermore, I'm attempting to implement this as part of my content manager, so a ModelTexture is really just a wrapper around a shared_ptr<Texture2D> that is retrieved from another content cache.
 
All right, so here is what the model format looks like, I made a diagram!
*sorry it's so tall...
 
[Image: model format diagram - ModelFormat_zps271dc65c.png]
 
The actual files are exported from a tool I've written in C#. I'm loading Collada files via Assimp, calculating any user requested data and displaying the model via SharpDX in a WinForms app.
 
In the end, the model gets exported to the file by the exporter first writing each mesh data object to “virtual memory,” adjusting all of the pointers and finally using a binary writer to spit out the finished file.
 
Pretty straightforward for me, as I'm used to C#, but the scary stuff happens when we get to actually loading the file in C++.
 
First I load the entire file with an ifstream into a char[]. Then I cast the char[] to a Model. Now I need to offset the local pointers so that the model will work in memory; however, I read somewhere that you can't add pointers in C++, only subtract them, but to offset local pointers you need to add!
After much internet searching, I finally found a type, ptrdiff_t, that I could obtain from a pointer, add to, and then cast back to a pointer. The question then became, “Is this legal, what I'm doing?” For a full day I pondered before quizzically deciding that it should? be legal. I mean, how else would you offset pointers when you shouldn't just cast to an int?
The next problem arrived when I realized that I needed to somehow delete the model from memory as well. Again I wasn't sure, having cast a char[] to a Model, whether I could delete the model. I pretended I could and wrote the destructor. Miraculously it seemed to work! The “Memory” window in Visual Studio seemed to show that the object had been successfully deleted, although I'm still not sure if I need to call delete on the model's pointers, as they weren't created with new.
 
So now, I have all this code for loading a model, but I'm not sure if it's legal, safe, or even sensible!
 
Enough talk though, here's the code for loading the model:
 

std::shared_ptr<Ruined::Graphics::Model> ModelLoader::Load(const std::string &name)
{
	Ruined::Graphics::Model * model;

	std::ifstream file (m_BaseDirectory + name, std::ios::in|std::ios::binary|std::ios::ate);
	if (file.is_open())
	{
		// Get the file's total size (opened with ios::ate, so tellg() is at the end)
		unsigned int size = (unsigned int)file.tellg();
		// Create a char[] of the size to load the file into
		char* memblock = new char [size];
		// Seek to the beginning and read the file
		file.seekg (0, std::ios::beg);
		file.read (memblock, size);
		// Finally close the file
		file.close();

		// Cast the char[] to a Ruined::Graphics::Model pointer
		model = static_cast<Ruined::Graphics::Model *>((void*)memblock);

		// The location of the model in memory
		ptrdiff_t memOffset = (ptrdiff_t)model;

		// Offset the model's local pointers
		ptrdiff_t intOffset;

		// Mesh Headers
		model->MeshHeaders = (Ruined::Graphics::MeshHeader*)(memOffset + (ptrdiff_t)model->MeshHeaders);

		// Mesh Culling Datas
		model->MeshCullDatas = (Ruined::Graphics::MeshCullData*)(memOffset + (ptrdiff_t)model->MeshCullDatas);

		// Mesh Drawing Datas
		model->MeshDrawDatas = (Ruined::Graphics::MeshDrawData*)(memOffset + (ptrdiff_t)model->MeshDrawDatas);

		// Model's Ruined::Graphics::ModelTexture pointer
		model->Textures = (Ruined::Graphics::ModelTexture*)(memOffset + (ptrdiff_t)model->Textures);
				
		// Load the model's textures
		for(int t = 0; t < model->TextureCount; t++)
		{
			// Offset TextureName pointers
			model->Textures[t].TextureName = (char*)(memOffset + (ptrdiff_t)(model->Textures[t].TextureName));
			// Load the texture
			model->Textures[t].TextureContent = p_TextureCache->Load(model->Textures[t].TextureName);
		}

		HRESULT hresult;
		Ruined::Graphics::MeshDrawData * tempMeshD = nullptr;
		for(int m = 0; m < model->MeshCount; m++)
		{

			// Build the buffers
			tempMeshD = &model->MeshDrawDatas[m];

			// Offset Index Buffer (after patching, this still points at the raw
			// index data in the blob until CreateBuffer overwrites it below)
			tempMeshD->IndexBuffer = (ID3D11Buffer*)(memOffset + (ptrdiff_t)tempMeshD->IndexBuffer);

			// Offset Vertex Buffers
			tempMeshD->VertexBuffers = (ID3D11Buffer**)(memOffset + (ptrdiff_t)tempMeshD->VertexBuffers);

			// Offset Strides
			tempMeshD->Strides = (unsigned int*)(memOffset + (ptrdiff_t)tempMeshD->Strides);

			// Offset Resources
			intOffset = (ptrdiff_t)tempMeshD->Resources;
			tempMeshD->Resources = (ID3D11ShaderResourceView**)(memOffset + intOffset);

			// The patched location initially holds texture indices (unsigned int)
			// written by the exporter
			unsigned int * index = (unsigned int*)(memOffset + intOffset);

			// Overwrite those indices with the pointers from the model's textures
			for(int t = 0; t < model->MeshHeaders[m].ResourceCount; t++)		
				tempMeshD->Resources[t] = model->Textures[index[t]].TextureContent.get()->p_shaderResourceView;

			// Desc for the index buffer
			D3D11_BUFFER_DESC indexBufferDesc;		
			indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
			indexBufferDesc.ByteWidth = tempMeshD->IndexCount * (tempMeshD->IndexFormat == DXGI_FORMAT_R16_UINT ? sizeof(unsigned short) : sizeof(unsigned int));
			indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
			indexBufferDesc.CPUAccessFlags = 0;
			indexBufferDesc.MiscFlags = 0;

			D3D11_SUBRESOURCE_DATA indexData;
			indexData.pSysMem = tempMeshD->IndexBuffer;
			indexData.SysMemPitch = 0;
			indexData.SysMemSlicePitch = 0;

			hresult = p_Graphics->GetDevice()->CreateBuffer(&indexBufferDesc, &indexData, &tempMeshD->IndexBuffer);
			if(FAILED(hresult))
			{
				OutputDebugStringA("Failed to create Index Buffer");
			}

						
			// Create each vertex buffer
			Ruined::Graphics::MeshBufferDesc * tempDesc = (Ruined::Graphics::MeshBufferDesc*)(tempMeshD->VertexBuffers);
			for(unsigned int b = 0; b < tempMeshD->VertexBufferCount; b++)
			{
							
				// Each buffer gets a desc
				D3D11_BUFFER_DESC bufferDesc;
				bufferDesc.Usage = D3D11_USAGE_DEFAULT;
				bufferDesc.ByteWidth = tempDesc[b].BufferWidth;
				bufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
				bufferDesc.CPUAccessFlags = 0;
				bufferDesc.MiscFlags = 0;

				// Each buffer needs a subresource data
				D3D11_SUBRESOURCE_DATA subData;
				subData.pSysMem = (void*)((ptrdiff_t)tempDesc[b].Data + memOffset);
				subData.SysMemPitch = 0;
				subData.SysMemSlicePitch = 0;

				hresult = p_Graphics->GetDevice()->CreateBuffer(&bufferDesc, &subData, &(tempMeshD->VertexBuffers[b]));
				if(FAILED(hresult))
				{
					OutputDebugStringA("Failed to create Vertex Buffer");
				}
			}

						
		}

					
	}
	else
	{
		std::string errorMsg = "Failed to load Model: ";
		errorMsg += m_BaseDirectory + name + "\n";
		OutputDebugStringA(errorMsg.c_str());

		// Set model equal to something
		model = new Ruined::Graphics::Model();
	}

	std::shared_ptr<Ruined::Graphics::Model> sModel(model);

	return sModel;
}

So that this makes a little more sense, here is Model.h:

#pragma once
#ifndef _MODEL_H_
#define _MODEL_H_

#include "MeshHeader.h"
#include "MeshCullData.h"
#include "MeshDrawData.h"
#include "ModelTexture.h"

#include <memory>

namespace Ruined
{
	namespace Graphics
	{
		// Combine Model Header and Model Pointers
		struct __declspec(dllexport) Model
		{

		public:
			// Header
			unsigned short _FILETYPE;
			unsigned short _FILEVERSION;
			unsigned int ModelSize;
		
			unsigned short TextureCount;
			unsigned short MeshCount;
			DirectX::BoundingBox BoundingBox;

			// Pointers
			ModelTexture * Textures;
			MeshHeader * MeshHeaders;
			MeshCullData * MeshCullDatas;
			MeshDrawData * MeshDrawDatas;

		public:
			Model(void);
			
			~Model(void);
		};
	}
}

#endif

 
Here is ModelTexture.h:

#pragma once
#ifndef _MODELTEXTURE_H_
#define _MODELTEXTURE_H_

// Includes //
#include "Texture2D.h"

#include <memory>

namespace Ruined
{
	namespace Graphics
	{
		struct __declspec(dllexport) ModelTexture
		{
		public:
			char* TextureName;
			std::shared_ptr<Texture2D> TextureContent;
		};
	}
}

#endif

 
 
Here are the mesh objects:

#ifndef _MESHCULLDATA_H_
#define _MESHCULLDATA_H_

#include <DirectXCollision.h>

namespace Ruined
{
	namespace Graphics
	{
		struct __declspec(dllexport) MeshCullData
		{
		public:
			DirectX::BoundingBox BoundingBox;
		};
	}
}

#endif
#ifndef _MESHHEADER_H_
#define _MESHHEADER_H_

#include "MeshBufferDesc.h"

namespace Ruined
{
	namespace Graphics
	{
		enum MeshMask : unsigned short
		{
			Undefined       = 0x0000,
			Texture         = 0x0001,
			UVCoord         = 0x0002,
			Color           = 0x0004,
			Normal          = 0x0008,
			Tangent         = 0x0010,
			Binormal        = 0x0020,
			BoneIndices     = 0x0040,
			BoneWeights     = 0x0080,
			SplitBuffers    = 0x0100,
			AlphaBlend      = 0x4000
		};

		struct __declspec(dllexport) MeshHeader
		{
			MeshMask Mask;
			unsigned char UVStreamCount;
			unsigned char ColorStreamCount;
			unsigned short ResourceCount;
		};
	}
}

#endif
#ifndef _MESHDRAWDATA_H_
#define _MESHDRAWDATA_H_

#include <d3d11.h>

namespace Ruined
{
	namespace Graphics
	{
		struct __declspec(dllexport) MeshDrawData
		{
		public:
			unsigned int VertexBufferCount;
			ID3D11Buffer ** VertexBuffers;
			unsigned int * Strides;
			ID3D11Buffer * IndexBuffer;
			DXGI_FORMAT IndexFormat;
			ID3D11ShaderResourceView ** Resources;
			unsigned int IndexCount;
		};
	}
}

#endif
#ifndef _MESHBUFFERDESC_H_
#define _MESHBUFFERDESC_H_


namespace Ruined
{
	namespace Graphics
	{
		// Used for creating vertex buffers.
		// Only accessed at load time.
		struct __declspec(dllexport) MeshBufferDesc
		{
		public:
			unsigned int BufferWidth;
			void * Data;
		};
	}
}

#endif

Lastly here is the Model destructor:

Model::~Model(void)
{
	if(Textures != nullptr)
	{
		for(int t = 0; t < TextureCount; t++)
		{
			Textures[t].TextureContent.reset();
			Textures[t].TextureName = nullptr;
		}
	}

	if(MeshDrawDatas != nullptr)
	{
		for(int m = 0; m < MeshCount; m++)
		{
			if(MeshDrawDatas[m].IndexBuffer != nullptr)
			{
				MeshDrawDatas[m].IndexBuffer->Release();
				MeshDrawDatas[m].IndexBuffer = nullptr;
			}

			for(unsigned int v = 0; v < MeshDrawDatas[m].VertexBufferCount; v++)
			{
				if(MeshDrawDatas[m].VertexBuffers[v] != nullptr)
				{
					MeshDrawDatas[m].VertexBuffers[v]->Release();
					MeshDrawDatas[m].VertexBuffers[v] = nullptr;
				}
			}
		}
	}
}

 
Holy cow! That's one long post.
If anyone could take the time to read this, even just part of it, and lend me a hand, I would be very thankful.


Edited by LarryKing, 03 January 2013 - 06:40 PM.



#2 Hodgman   Moderators   -  Reputation: 27527


Posted 03 January 2013 - 07:13 PM

I read somewhere that you can't add pointers in C++, only subtract them, but to offset local pointers you needed to add!

Adding two pointers doesn't really make sense, but you're not trying to add two pointers! :-)

You've got a pointer to your memory block, and you've got an offset (which is an integer) that you want to add together. Adding pointers and integers is well defined -- adding 42 to a pointer of type "T*" will give you a pointer that's advanced by sizeof(T)*42 bytes, and seeing that sizeof(char)==1 adding integers to char*-type pointers is a fairly intuitive and common way of performing "pointer math".

 

Usually you'd do something like this to perform "pointer patching" -- converting local file offsets to real pointers:

struct Header
{
	union
	{
		Foo* foo_pointer;
		int foo_offset;
	};
};
static_assert( sizeof(int) == sizeof(void*), "the int type in the union must be the same size as a pointer type" );
//n.b. this means your file generator (C# tool) has to be aware of whether it's generating files for a 64-bit or 32-bit application!

char* memblock = new char [size]; // ...read the whole file into memblock...
Header* header = (Header*)memblock;
header->foo_pointer = (Foo*)(memblock + header->foo_offset); //n.b. memblock is a char*

Your code is OK though, because casting a pointer to a ptrdiff_t and back to a pointer works on every compiler I've ever used ;)

However, you can replace "memOffset" with "memblock" and it will work the same, but look a bit more intuitive.

Also, your casting of your "file offset pointers" to integers via e.g. "(ptrdiff_t)model->MeshHeaders" is basically equivalent to my union above, so there's no need to change it if it's more intuitive for you to do it this way.


In my engine, to make my file-formats independent of the actual pointer size (32/64 bit), and to simplify my loading routines, I usually avoid performing pointer-patching on-load, and instead do it on-demand each time the "pointer" is used. Also, I use offsets that are relative to the position of the offset variable itself, rather than ones that are relative to the beginning of the file, to facilitate this. If you're interested, see the Offset class in this header (also, Address is used for offsets that are relative to the beginning of some memory-block).

 

The next problem arrived when I realized that I needed to somehow delete the model from memory as well. Again not sure, as I had casted a char[] to a model, if I could delete the model. I pretended I could and wrote the destructor. Miraculously it seemed to work!

It might seem to work, but that sounds very bad. If the memory was created with char* buffer = new char [size], then it needs to be deleted with delete [] buffer (where buffer is a char*).

 

This gets complicated because in your file-failure case, you are allocating the memory with model = new Ruined::Graphics::Model(), in which case it needs to be deleted with delete model (where model is a Model*).

Personally, I'd remove that failure case and return NULL, or change it to allocate the memory consistently, with:

char* buffer = new char [sizeof(Ruined::Graphics::Model)]

 

Further, you can't just use a regular shared_ptr to clean up after you, because it will use the wrong type of delete -- you need to configure it to use a custom deleter that calls your own "destructor" function and then deletes the char array properly.


Edited by Hodgman, 03 January 2013 - 07:41 PM.


#3 LarryKing   Members   -  Reputation: 414


Posted 03 January 2013 - 07:49 PM

All right! It's good to know that you can offset a pointer with an int.

I also like the idea of offsets relative to the position of the offset variable, although I wonder what would happen if you had a negative offset; theoretically it wouldn't matter and would still work.

 

Now, regarding the custom deleter, the deleter would call a "delete" function of the model, correct?

Would it then cast the Model* to a char* before finally calling delete[] foochar?

 

I assume the deleter would look something like this:

 

void DeleteModel(Model* model)
{
     model->Shutdown();
     delete[] (char*)model;
}

 

Thank you.

 

EDIT:

Your offset template is a really good idea!

It took me a moment to understand what it was doing - still new to C++


Edited by LarryKing, 03 January 2013 - 08:02 PM.


#4 Krohm   Crossbones+   -  Reputation: 2960


Posted 04 January 2013 - 01:44 AM

Personally, I'd be extremely careful about considering those "pointers". They can change size and have odd values. As said, they are really offsets and should be treated as such.

I'd rather not put texture blobs in the model format directly. This functionality could be emulated using archives (I have to admit texture loading is still in the works for me).

Do not put API-dependent values in your blobs (such as DXGI_FORMAT). Well, you can, but it's going to look wrong in the long run.



#5 RobTheBloke   Crossbones+   -  Reputation: 2295


Posted 04 January 2013 - 05:51 AM

A small point.....

 

Stop exporting all of your structs! Consider this:

 

struct DLL_EXPORT Foo
{
};

 

This will cause all implicitly created methods (e.g. constructor, copy constructor, destructor, assignment) to be DLL exported, which, for a constructor that does exactly nothing, is completely pointless. You only need to export a class when it actually has member functions that need to be accessible from other dlls/exes.



#6 LarryKing   Members   -  Reputation: 414


Posted 04 January 2013 - 09:46 AM

Personally, I'd be extremely careful about considering those "pointers". They can change size and have odd values. As said, they are really offsets and should be treated as such.

I'd rather not put texture blobs in the model format directly. This functionality could be emulated using archives (I have to admit texture loading is still in the works for me).

Do not put API-dependent values in your blobs (such as DXGI_FORMAT). Well, you can, but it's going to look wrong in the long run.

 

Yeah, I'm adjusting the loader so that they are treated more as offsets.

I'm not storing the textures in the file; the blob just contains the character data for the texture's name. The model loader has no idea how to load the textures, it just requests the texture from the texture cache. The cache can then load the texture from wherever.

You're right about not putting the DXGI_FORMAT in there, I'll adjust that.

 

A small point.....

 

Stop exporting all of your structs! Consider this:

 

 

struct DLL_EXPORT Foo
{
};

 

 

This will cause all implicitly created methods (e.g. constructor, copy constructor, destructor, assignment) to be DLL exported, which, for a constructor that does exactly nothing, is completely pointless. You only need to export a class when it actually has member functions that need to be accessible from other dlls/exes.

 

All right. It looks like I'll have to adjust quite a few things in the library, as I've just been tacking the dllexports on to make Visual Studio be quiet.

 

Thanks for all of the feedback!



#7 Kwizatz   GDNet+   -  Reputation: 1182


Posted 04 January 2013 - 02:39 PM

I keep textures, meshes, animations and skeletons in different files; my model file is sort of an index of what files constitute a model, so I load and process a model file that in turn loads (or gets references to already loaded) meshes, animations, etc. This way I can mix and match different parts and avoid data redundancy (i.e. packing the same texture in two different files).

 

I removed multiple vertex buffers from my format and instead use a single interleaved buffer. This made sense for my purposes since it simplifies the code, and apparently it's the recommended way of doing it for mobile platforms (OpenGL ES); that's probably also true for the PC, though with a quick Google search you'll find there isn't much difference. Don't get me wrong: multiple vertex buffer support is good, and supporting interleaved as well is better.

 

Other things I noticed have already been covered.


Edited by Kwizatz, 04 January 2013 - 02:43 PM.


#8 LarryKing   Members   -  Reputation: 414


Posted 04 January 2013 - 04:11 PM

I keep textures, meshes, animations and skeletons in different files; my model file is sort of an index of what files constitute a model, so I load and process a model file that in turn loads (or gets references to already loaded) meshes, animations, etc. This way I can mix and match different parts and avoid data redundancy (i.e. packing the same texture in two different files).
 
I removed multiple vertex buffers from my format and instead use a single interleaved buffer. This made sense for my purposes since it simplifies the code, and apparently it's the recommended way of doing it for mobile platforms (OpenGL ES); that's probably also true for the PC, though with a quick Google search you'll find there isn't much difference. Don't get me wrong: multiple vertex buffer support is good, and supporting interleaved as well is better.
 
Other things I noticed have already been covered.


Yep, the model is really just a container of meshes. The textures are stored in other files - the model just contains the names of the textures so they can be loaded.

Similarly, all other types/objects (i.e. bones and animations) will be stored in separate files, though I haven't gotten to that stage yet.

 

The way I have my vertex buffers set up is that a mesh can have multiple vertex buffers, but doesn't necessarily need to. Each vertex buffer can also contain interleaved attributes.

When the files are opened in my exporter tool, you can create new, split up, and delete vertex buffers; however, it defaults to two vertex buffers per mesh.

 

So the default could look like this:

  • Buffer 1 - Position
  • Buffer 2 - TexCoord, Color, Normal, Tangent, Binormal...

 

The buffers could be split to look like anything really. ie:

  • Buffer 1 - Position, Color1
  • Buffer 2 - TexCoord1, TexCoord2, Color2
  • Buffer 3 - Normal, Tangent, Binormal

The big reason I opted for split vertex buffers is fast shadowing: I just have to bind the buffer that contains the position data (which I force to always be the first buffer).

 

I am worried, though, about how I should deal with getting the VertexLayout to the renderer. I'm not sure if I should store it per mesh, load it similarly to how I'm loading textures, or just have each material specify it.

The vertex layout itself could end up being 100 bytes per mesh, so storing it per-mesh doesn't really seem ideal.

 

Any ideas?

Thank you for your input.



#9 Kwizatz   GDNet+   -  Reputation: 1182


Posted 04 January 2013 - 05:56 PM

100 bytes per mesh is nothing really (considering you'll probably have between 1 and 10 meshes per model), but if all your meshes use the same layout, or some share layouts, you could add a "layout array" and have each mesh reference its layout by index. In the case that there is only one layout, you're only wasting 1, 2, 4, or 8 bytes (depending on your index type) per mesh, plus sizeof(your offset type) for the layout array's start pointer.

 

Also, I don't think I read it, or you didn't mention it: I think everyone assumes this is for characters/props and not for architectural/level models. Or is it for both? Not that important, but I guess if you are modeling a cathedral you might end up with more than 10 meshes in a single model.


Edited by Kwizatz, 04 January 2013 - 06:01 PM.


#10 LarryKing   Members   -  Reputation: 414


Posted 04 January 2013 - 07:18 PM

100 bytes per mesh is nothing really (considering you'll probably have between 1 and 10 meshes per model), but if all your meshes use the same layout, or some share layouts, you could add a "layout array" and have each mesh reference its layout by index. In the case that there is only one layout, you're only wasting 1, 2, 4, or 8 bytes (depending on your index type) per mesh, plus sizeof(your offset type) for the layout array's start pointer.

Also, I don't think I read it, or you didn't mention it: I think everyone assumes this is for characters/props and not for architectural/level models. Or is it for both? Not that important, but I guess if you are modeling a cathedral you might end up with more than 10 meshes in a single model.

 

So you're saying store the vertex layouts in the model, and have the meshes index to the matching layout. That might work.

After a quick thought, 100 bytes per mesh * a mean of 8 meshes per model * about 1000 models per level = ~781 KB. (Averages not from any real statistics.)

Not nearly as bad as I had imagined.

 

The other thing I could do is store the vertex layouts in separate files. That way meshes using the same layout would just use the same file, which I could cache.

If I did it this way I would only have to have one copy of each unique vertex layout in memory and on disk.

 

Regarding what a model would contain: the idea was that a Model would just be a generic class to store related Meshes. A mesh would have an index buffer, N vertex buffers, and N resources / textures. This way a model could contain, for example, all of the meshes that composed an automobile.

These models would then be used by objects that would also have a material. Now of course some checking would have to be done to make sure that the model would be compatible with the material. Luckily each mesh has a MeshMask, 16 bits specifying what attributes the mesh contains; the mesh header also keeps track of the number of TexCoord and vertex color streams as well as the number of texture resources. To check if a model and a material were compatible, I would just loop over each mesh and check if its mask and header were compatible with the materials requirements.

A level would then contain many objects that could be culled, sorted, and drawn.

 

My intent was for the model to be able to store pretty much any type of graphics mesh; whether it contains a weighted character, a tree, a sword, or a spaceship shouldn't matter, as the model is just a container for meshes. A model should also be able to store thousands of meshes (up to 65,536), assuming the file doesn't grow over 4GB in size. I don't think I'm missing anything that would prevent this, but I could very well have skipped right over something.

 

Again, thank you for your ideas.



#11 Hodgman   Moderators   -  Reputation: 27527


Posted 04 January 2013 - 08:59 PM

I also like the idea of offsets relative to the position of the offset, although I wonder what would happen if you had a negative offset;
Theoretically it wouldn't matter and would still work.
...
I assume the deleter would look something like this:
Yep, that's how you'd clean up your allocation.
 
Yeah, negative offsets "just work", seeing as I used a signed integer type. Also, seeing that an offset of 0 doesn't go anywhere (it takes you to the offset value itself, which isn't useful), I can use 0 as a "null pointer" value.
On the C# side (when generating data files), I've written an extension class for BinaryWriter (I've also added functions like Write32, Write16, etc, to make my file-writing code more explicit and obvious), that I use like this:
writer.Write32(buffers.Count());//num stream formats
long tocStreamFormats = writer.WriteTemp32();//offset
... later ...
writer.OverwriteTemp32(tocStreamFormats, writer.RelativeOffset(tocStreamFormats));//overwrite placeholder with relative offset
for (int i = 0; i != buffers.Count(); ++i)
{ ...
Do not put API-dependent values in your blobs (such as DXGI_FORMAT). Well, you can, but it's going to look wrong in the long run
There's nothing wrong with doing this, as long as you only support one API per platform and are ok with shipping different data archives per platform, and you've got a build-system that can automatically generate your files.
e.g. on every console game I've worked on, the data has had to be compiled once for PC, PS3 and 360 (with some files being shared across platforms, but many being different per build). If we have to modify some detail per platform, you just increment the version number on that model exporter, and the build system automatically recompiles all the models.

The upside to using API-specific structures in your files is that your loading code can be extremely simple (with no parsing step required), but yes, the downside for a cross-platform game is that you'd either have to ship different model files, or your model file would have to include structures for both D3D/GL/etc.
I am worried, though, about how I should deal with getting the VertexLayout to the renderer. I'm not sure if I should store it per mesh, load it similarly to how I'm loading textures, or just have each material specify.
I treat vertex formats as an external dependency, like textures, materials, etc...
Also BTW, I don't store texture-handles in the model file, only material-handles. The material can specify the shader, uniforms, textures, etc...
 
The complicated part with Vertex Layouts is that they're a bridge between both models and shaders. I deal with this by having a "streams.lua" file (or multiple of them), which describes model-side vertex layouts and shader-side vertex layouts, and which model layouts can be used with which shader layouts.
When writing my shaders, I also have to specifically name the shader-side vertex layout that I want to use from this file.
e.g. For a shader that requires position/normal and 2 tex-coords, and a shadow-shader that only requires position, I could write in this file:
StreamFormat("testStream",
{
    { -- stream 0
        { Float, 3, Position },
    },
    { -- stream 1
        { Float, 3, Normal,   0 },
        { Float, 2, TexCoord, 0 },
        { Float, 2, TexCoord, 1, "Projection1" },
    },
})
VertexFormat("testVertex",
{
    { "position", float3, Position },
    { "normal",   float3, Normal },
    { "texcoord", float2, TexCoord, 0 },
    { "texcoord", float2, TexCoord, 1 },
})
VertexFormat("testShadowVertex",
{
    { "position", float3, Position },
})
InputLayout( "testStream", "testVertex" )
InputLayout( "testStream", "testShadowVertex" )
When compiling a model, I look at which vertex-formats the current mesh needs to be compatible with (based on which shaders the artists have applied to the mesh), then find the set of stream-formats that are compatible with all of those vertex formats, and then compile the model using the stream-format from that set that best matches the data that's present in the DAE.

Above, there's only 1 stream format, which defines a position stream and a normal/tex0/tex1 stream. These could be stored in different vertex buffers, or the same vertex buffer using offsets. If the DAE didn't include some of those attributes, then the model would produce an error during the data-build process.
 
The graph of build-rules to create a model (.geo) file look something like this for me:
Source     C# Intermediate        Game Data

      +- temp/mat
      |
.DAE -+
      |
      +- temp/geo ------------+
                              |
                              +- data/geo
                              |
              +- temp/streams +
              |               |
.streams.lua -+               |
                              |
       +- temp/vlayout -------+
       |
.hlsl -+- temp/hlsl
       |
       +- temp/technique

Edited by Hodgman, 04 January 2013 - 09:31 PM.


#12 JohnnyCode   Members   -  Reputation: 182


Posted 07 January 2013 - 10:51 AM

The geometry format I did:

- Does not contain texture names; you assign textures to its material ids in an arbitrary way.

- The format was designed for instant loading, thus all elements (positions, normals, texcoords, material ids, vertex colors, tangents, bone weights, bone indices) are grouped in arrays matching the vertex buffers' layout: (positions, normals, texcoords) in one array, and so on.

- Apart from the geometry definition, it can contain object-space matrices of bone animation for every frame.

So it is like:

- triangle count

- vertex count

- index array

- positions, normals, texcoords array

- material ids, tangents, bone weights, bone ids, vertex color array

- number of animation frames

- number of bones

- bone names

- corresponding matrices, for every frame

 





