Vertex types with an array of unknown size at compile time for a vertex buffer

Started by
7 comments, last by Nicholas1991 11 years ago

Hi Folks,

This is some kind of beginner question, but I couldn't find the answer anywhere.

I have successfully used the following vertex type until now for simple textured geometry. An array of these vertices is then passed via IASetVertexBuffers(...):


struct VERTEXPOSITIONNORMALTEXTURED
{
	D3DXVECTOR3 position;
	D3DXVECTOR3 normal;
	D3DXVECTOR2 texcoord;

	static D3D11_INPUT_ELEMENT_DESC* GetVertexLayout()
	{
		static D3D11_INPUT_ELEMENT_DESC layout[] =
		{
			{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,
				0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
			{ "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT,
				0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
			{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,
				0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
		};

		return layout;
	}

	static UINT GetNumElements() { return 3; }
};
 

Now I would like to use more than one map on certain objects. I could of course create lots of vertex types with one to ten sets of UV coordinates to deal with objects that use one to ten maps, but that is not really an elegant solution. And I don't want to use a fixed-size array of ten either, because then most of the time I would store almost ten times as much data per vertex as necessary (most objects in-game only have a texture and no other maps, and most objects that use more than one map will probably not use all ten slots).

But to create an array whose size is unknown at compile time, I would have to use a pointer:


...
D3DXVECTOR3 position;
D3DXVECTOR3 normal;
D3DXVECTOR2* texcoords;
UINT numTexCoords;
...

The layout[] would then also be created at runtime. But what about the pointer? How do I access this array in my HLSL vertex shader?

Thanks in advance

Nicholas


I know my English isn't the best at the moment, so is anything about my question confusing? I mean, "it can't be done" is also an acceptable answer, if that's the case ^^.

Thanks in advance

Nicholas

Here is some more code to show what is frustrating me. I'm copy-and-pasting a lot, which is exactly what I want to avoid :/. (I have not compiled it yet, since I'm in the middle of rewriting parts of my code, so there might be some minor errors.)

First the VertexTypes:


struct VERTEX1MAP
{
	DirectX::XMFLOAT3 position;
	DirectX::XMFLOAT3 normal;
	DirectX::XMFLOAT2 mapcoord[1];
};

struct VERTEX2MAP
{
	DirectX::XMFLOAT3 position;
	DirectX::XMFLOAT3 normal;
	DirectX::XMFLOAT2 mapcoord[2];
};

struct VERTEX3MAP
{
	DirectX::XMFLOAT3 position;
	DirectX::XMFLOAT3 normal;
	DirectX::XMFLOAT2 mapcoord[3];
};


... and so on

And here are the methods used to fill the VertexArrays:


VERTEX1MAP BinaryMeshExporter::getFilledVertexType1(const Point3& position, const Point3& normal, const UVVert* uv, const pair<int,int>& uvAccess)
{
	VERTEX1MAP out;
	out.position = Point3ToXMFLOAT3(position);
	out.normal = Point3ToXMFLOAT3(normal);
	out.mapcoord[0] = XMFLOAT2(uv[0][uvAccess.first], -uv[0][uvAccess.second]);
	return out;	
}

VERTEX2MAP BinaryMeshExporter::getFilledVertexType2(const Point3& position, const Point3& normal, const UVVert* uv, const pair<int,int>& uvAccess)
{
	VERTEX2MAP out;
	out.position = Point3ToXMFLOAT3(position);
	out.normal = Point3ToXMFLOAT3(normal);
	for ( int i = 0; i < 2; i++ )
		out.mapcoord[i] = XMFLOAT2(uv[i][uvAccess.first], -uv[i][uvAccess.second]);
	return out;
}

VERTEX3MAP BinaryMeshExporter::getFilledVertexType3(const Point3& position, const Point3& normal, const UVVert* uv, const pair<int,int>& uvAccess)
{
	VERTEX3MAP out;
	out.position = Point3ToXMFLOAT3(position);
	out.normal = Point3ToXMFLOAT3(normal);
	for ( int i = 0; i < 3; i++ )
		out.mapcoord[i] = XMFLOAT2(uv[i][uvAccess.first], -uv[i][uvAccess.second]);
	return out;
}


... And so on

Now as you can see, they are basically identical except for the array size of mapcoord. And the fill methods are identical except for the vertex type they use.

Is there any kind of polymorphic concept for structs (without changing the struct's size)?

I'm really desperate to change the code above (for the better), but I don't see how :/.
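For what it's worth, the duplication above could be collapsed with a template over the map count. A minimal sketch, using stand-in types so it compiles outside the DirectX/3ds Max toolchain (the real code would use DirectX::XMFLOAT3/XMFLOAT2 and UVVert; the names here are illustrative assumptions):

```cpp
#include <cstddef>
#include <utility>

// Stand-ins for DirectX::XMFLOAT3 / XMFLOAT2 so this sketch is self-contained.
struct Float3 { float x, y, z; };
struct Float2 { float x, y; };

// One template replaces VERTEX1MAP, VERTEX2MAP, VERTEX3MAP, ...
// Each instantiation has exactly the same layout as the hand-written struct,
// so no per-vertex space is wasted on unused map slots.
template <std::size_t N>
struct VertexNMap
{
	Float3 position;
	Float3 normal;
	Float2 mapcoord[N];
};

// One function template replaces getFilledVertexType1/2/3/...
// The UV source is simplified to an array of Float3 standing in for UVVert.
template <std::size_t N>
VertexNMap<N> fillVertex(const Float3& position, const Float3& normal,
                         const Float3* uv, const std::pair<int, int>& uvAccess)
{
	VertexNMap<N> out;
	out.position = position;
	out.normal = normal;
	for (std::size_t i = 0; i < N; ++i)
	{
		// Component access by index, like UVVert's operator[].
		const float c[3] = { uv[i].x, uv[i].y, uv[i].z };
		out.mapcoord[i] = Float2{ c[uvAccess.first], -c[uvAccess.second] };
	}
	return out;
}
```

Since sizeof(VertexNMap<1>) differs from sizeof(VertexNMap<10>), the compile-time sizing problem stays solved; only the copy-and-paste goes away.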

One approach would be to separate the texcoords into a different slot. There's an informative GPU Gems article on this: Optimizing Resource Management with Multistreaming. The article is D3D9, but the idea translates well to D3D11 (streams = vertex buffer slots, vertex declaration = input layout).
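A split-stream layout might look like the following sketch (the slot assignment, offsets, and the buffer names in the comments are illustrative assumptions, not code from the thread):

```cpp
#include <d3d11.h>

// Hypothetical two-slot layout: position/normal live in vertex buffer slot 0,
// texcoords in slot 1, so the texcoord buffer can be swapped or omitted
// without touching the geometry stream.
static const D3D11_INPUT_ELEMENT_DESC twoSlotLayout[] =
{
	{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    1,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

// Binding both streams (geometryVB and texcoordVB are assumed to exist):
// ID3D11Buffer* buffers[2] = { geometryVB, texcoordVB };
// UINT strides[2] = { sizeof(float) * 6, sizeof(float) * 2 };
// UINT offsets[2] = { 0, 0 };
// context->IASetVertexBuffers(0, 2, buffers, strides, offsets);
```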

Even so, I wonder if this is really a good idea. For every new texcoord you introduce you get a different input layout, so your vertex shader has to be adjusted too. This does not sound as flexible as you intend.

Alternatively: instead of storing the texcoords in a vertex buffer stream, use a (structured) buffer or a texture. Provide the texcoord count (and maybe the stride) as shader constants and sample/load the texcoords in the vertex shader with a loop.
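A rough HLSL sketch of that idea (all names, the constant-buffer layout, and the MAX_MAPS cap are made-up assumptions for illustration; real code would also apply the WVP transform):

```hlsl
// All texcoords for the mesh in one structured buffer, laid out as
// g_NumTexCoordsPerVertex consecutive float2s per vertex.
StructuredBuffer<float2> g_TexCoords : register(t0);

cbuffer PerMesh : register(b0)
{
	uint g_NumTexCoordsPerVertex;  // how many maps this mesh actually uses
};

#define MAX_MAPS 10

struct VSOut
{
	float4 pos          : SV_Position;
	float2 uv[MAX_MAPS] : TEXCOORD0;  // consumes TEXCOORD0..9
};

VSOut VSMain(float3 pos : POSITION, float3 nrm : NORMAL, uint vid : SV_VertexID)
{
	VSOut o;
	o.pos = float4(pos, 1.0f);
	[loop]
	for (uint i = 0; i < MAX_MAPS; ++i)
	{
		// Load the real texcoords; zero out unused slots so every output
		// register is initialized.
		o.uv[i] = (i < g_NumTexCoordsPerVertex)
			? g_TexCoords[vid * g_NumTexCoordsPerVertex + i]
			: float2(0.0f, 0.0f);
	}
	return o;
}
```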

Your last suggestion is exactly what I am looking for.

So you suggest I store the UV information in a texture-like format and then access it in the vertex shader via a texture sampler? Then my exporter for 3ds Max could already store this information in the proper format. However, I read that sampling from a texture is slower than simply reading data from the input struct. Is that true?

Thanks for your help so far, much appreciated

Nicholas

Don't worry about efficiency too much until you've got something that works. Other than that: you won't know until you have several approaches you can profile and compare. See e.g. this thread, especially kauna's entry on why it could actually be more performant.

Now I'm not so sure anymore that the "late loading" of the texcoords is such a good idea after all: you will hit the "signature explosion" somewhere later anyway. Assuming you want to use texcoords the usual way, so that they interpolate across primitives, you have to provide all of them as vertex shader output before the pixel shader. So providing a setup for up to a maximum number of texcoords is probably not so bad after all, meaning e.g. ten shaders (or rather ten shader chains) and ten layouts. Splitting the streams is IMO still a good idea, though. How about code generation? I'm not sure whether the terms shader permutation and shader stitching apply here exactly, but google them; they might give you some ideas.
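The code-generation idea might be sketched like this; makeVertexInputStruct is a hypothetical helper, not something from the thread, that emits the HLSL input struct for one permutation so the N shader variants come from one template instead of being written by hand:

```cpp
#include <sstream>
#include <string>

// Generate the HLSL vertex-input struct for a given texcoord count.
// The generated source would then be fed to the shader compiler
// (e.g. D3DCompile) to produce one shader per permutation.
std::string makeVertexInputStruct(unsigned numMaps)
{
	std::ostringstream hlsl;
	hlsl << "struct VSIn\n{\n"
	        "    float3 position : POSITION;\n"
	        "    float3 normal   : NORMAL;\n";
	for (unsigned i = 0; i < numMaps; ++i)
		hlsl << "    float2 uv" << i << " : TEXCOORD" << i << ";\n";
	hlsl << "};\n";
	return hlsl.str();
}
```

The matching D3D11_INPUT_ELEMENT_DESC array can be built at runtime by the same loop, so layout and shader never drift apart.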

Why do you need so many texcoords anyway? Or rather, why the flexibility? Describe what your actual goal is; maybe there's a better or easier way.

Hm, the problem is, I'm no 3ds Max expert. Until now I thought I would need separate UV coordinates for each of the 12 maps. If I used the same UV coordinates for all 12 maps, my problem would be solved. However, I don't know whether someone more experienced with 3ds Max would appreciate the option of individual UV coordinates per map :/ (maybe some advanced techniques require a special UV arrangement). But that question should probably be posted in a different forum ^^.

But with such a small vertex format, splitting the streams would then be unnecessary, wouldn't it?

Yeah, definitely.

Can't help you with 3ds Max, sorry to say. But I really think you've got some unnecessary complication there. I don't have experience with a professional 3D content-creation toolchain, but I bet there has to be a way. 12 texcoords, nah, sounds fishy. At least for the current generation of games. More likely 12 textures or something (an atlas?), using the same, or maybe transformed, texcoord.

OK. Thanks for your help!

I'll move on to the 3ds max forum ^^.

Best wishes

Nicholas
