Different meshes, different shaders


Let's say I have a mesh that only has vertex positions and UVs. Let's say I have another mesh that has only position data and normals. Let's say I have another one that has positions, UVs and normal data. Do I make a struct that is tailored to each mesh? One struct that has only position and UV, another that has only position and normals, and so on?

For shaders, do I do the same? One shader that takes in position and UVs and another that takes position and normals, etc.?

Then I render all the meshes that have things in common together?

 

Or do I make one big struct that has UV, position, normal, binormal and tangent data and send it to one shader? If I do it this way, I'm sending extra unnecessary data to the GPU if one of the meshes doesn't have normals or UVs, for example.

 

 


Having a single structure for all kinds of combinations is no problem:


/* One triangle carrying every attribute for its three vertices (a, b, c). */
struct ctriangle {
	cvector3 a_position;
	cvector3 a_normal;
	cvector2 a_uv;
	cvector4 a_color;
	cvector3 b_position;
	cvector3 b_normal;
	cvector2 b_uv;
	cvector4 b_color;
	cvector3 c_position;
	cvector3 c_normal;
	cvector2 c_uv;
	cvector4 c_color;
};

/* A mesh is a fixed-capacity array of such triangles. */
struct cmesh {
	int32_t count;
	ctriangle triangles[CONFIG_MAX_MESH_TRIANGLES];
};

Meanwhile, in the vertex buffer structure, you can add booleans (has_normal, has_uv, has_color) determining what you will be using, and when pushing the mesh to the GPU, you check those booleans to decide what to extract from the cmesh/ctriangle structures.



struct cvertex_buffer {
	GLuint vbo_gl_id;
	bool has_normal;
	bool has_uv;
	bool has_color;
	GLsizei vertices_count;
	GLsizei max_vertices;
};

void push_mesh_to_vertex(cvertex_buffer *vb, cmesh *mesh);
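Here is a rough sketch of what push_mesh_to_vertex could look like with that approach (assuming the vector structs are plain packed floats, an interleaved layout in the VBO, and that <stdlib.h>, <string.h> and the GL headers are included; capacity and error checks are left out):

void push_mesh_to_vertex(cvertex_buffer *vb, cmesh *mesh)
{
	/* Per-vertex stride depends on which optional attributes this buffer uses. */
	size_t stride = sizeof(cvector3);
	if (vb->has_normal) stride += sizeof(cvector3);
	if (vb->has_uv)     stride += sizeof(cvector2);
	if (vb->has_color)  stride += sizeof(cvector4);

	unsigned char *scratch = (unsigned char *)malloc(stride * 3 * mesh->count);
	unsigned char *dst = scratch;

	for (int32_t i = 0; i < mesh->count; ++i) {
		const ctriangle *t = &mesh->triangles[i];
		const cvector3 *positions[3] = { &t->a_position, &t->b_position, &t->c_position };
		const cvector3 *normals[3]   = { &t->a_normal,   &t->b_normal,   &t->c_normal };
		const cvector2 *uvs[3]       = { &t->a_uv,       &t->b_uv,       &t->c_uv };
		const cvector4 *colors[3]    = { &t->a_color,    &t->b_color,    &t->c_color };

		for (int v = 0; v < 3; ++v) {
			/* Position is always written; optional attributes only when the buffer has them. */
			memcpy(dst, positions[v], sizeof(cvector3)); dst += sizeof(cvector3);
			if (vb->has_normal) { memcpy(dst, normals[v], sizeof(cvector3)); dst += sizeof(cvector3); }
			if (vb->has_uv)     { memcpy(dst, uvs[v],     sizeof(cvector2)); dst += sizeof(cvector2); }
			if (vb->has_color)  { memcpy(dst, colors[v],  sizeof(cvector4)); dst += sizeof(cvector4); }
		}
	}

	/* Upload only the bytes that were actually packed. */
	glBindBuffer(GL_ARRAY_BUFFER, vb->vbo_gl_id);
	glBufferSubData(GL_ARRAY_BUFFER, 0, (GLsizeiptr)(dst - scratch), scratch);
	vb->vertices_count = mesh->count * 3;
	free(scratch);
}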

Regarding shaders, you probably want separate shaders according to the kind of data that is sent (with/without normals, with/without UVs, with/without color). Not because of technical limitations per se, since you could send everything (position, normals, UVs, color) and set shader variables to determine what is used or not, but that would just make your shader more complex.
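Picking the right program per draw can then be a simple lookup keyed on the same flags; the names below (shader_programs, pick_shader) are made up for illustration:

enum {
	ATTR_NORMAL = 1 << 0,
	ATTR_UV     = 1 << 1,
	ATTR_COLOR  = 1 << 2,
};

/* One pre-built GL program per attribute combination. */
extern GLuint shader_programs[8];

GLuint pick_shader(const cvertex_buffer *vb)
{
	unsigned mask = 0;
	if (vb->has_normal) mask |= ATTR_NORMAL;
	if (vb->has_uv)     mask |= ATTR_UV;
	if (vb->has_color)  mask |= ATTR_COLOR;
	return shader_programs[mask];
}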

 

TL;DR:

A structure containing vertices, normals, UVs and colors.

A separate shader for each case (with/without normals, with/without UVs, with/without color).

Yeah, I was thinking the exact same thing. Alright, thanks for confirming.

You're better off asking this in the graphics sub-forum.  Perhaps you could ask a mod to move it.

Do you have a particular API in mind?

Anyway, you can use multiple vertex streams, so you can break your vertex buffers into multiple smaller streams.

Basically AoS vs. SoA.
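Just to illustrate the difference, something like the following (the layouts are only examples):

/* AoS: one interleaved stream holding every attribute per vertex. */
struct vertex_aos {
	float position[3];
	float normal[3];
	float uv[2];
};

/* SoA-style streams: one buffer per attribute, bound to separate vertex
 * buffer slots; a mesh without normals simply never binds the normal stream. */
struct position_stream { float position[3]; };
struct normal_stream   { float normal[3];   };
struct uv_stream       { float uv[2];       };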

edit -

For shaders, do I do the same? One shader that takes in position and UVs and another that takes position and normals, etc.?

Then I render all the meshes that have things in common together?

Yes.

-potential energy is easily made kinetic-

But I think it will be less management intensive if you simply assume default values for whatever attributes your mesh lacks, so you wouldn't need to switch all the time, since context switches cost performance too. So for meshes of the same "kind" I would use a standard layout that fills everything else with zeroes and then let the shader decide whether processing is needed or skipped.
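Roughly like this, as a sketch of the idea (not tied to any particular engine):

/* Every mesh of the same "kind" is imported into one fixed layout; missing
 * attributes are filled with neutral defaults at import time. */
struct standard_vertex {
	float position[3];
	float normal[3];   /* {0, 0, 0} if the source mesh has no normals    */
	float uv[2];       /* {0, 0} if the source mesh has no UVs           */
	float color[4];    /* {1, 1, 1, 1} if the source mesh has no colors  */
};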

17 minutes ago, Shaarigan said:

But I think it will be less management intensive if you simply assume default values for whatever attributes your mesh lacks, so you wouldn't need to switch all the time, since context switches cost performance too. So for meshes of the same "kind" I would use a standard layout that fills everything else with zeroes and then let the shader decide whether processing is needed or skipped.

Yes, but then it can become a problem for lighting (are these normals default values, or do I really have normals?), texturing (are those default UVs?) and so on.

The answer might look trivial for normals, since a null normal should not exist, but less so for UVs, colors...

I make shaders for the situations that we require, and then artists create models/materials that fit the requirements of the shaders.

If I'm making an "unlit" shader, then it won't require normals. When I import a model into our engine for use with that shader, I'll discard any normals that exist in the original model file.

If I'm making a normal-mapping shader, it will require normals and tangents. When importing a model into our engine for use with that shader, I'll report an error message if the original model file doesn't contain normals and tangents.

The actual memory layout of the vertex buffers is controlled by a config file. 
e.g. a shader might require positions/normals/UVs, but the vertex buffer format could be:
* struct Stream0 { vec3 position; vec3 normal; vec2 uv; }, or 
* struct Stream0 { vec3 position; }; struct Stream1 { vec3 normal; vec2 uv; } or
* struct Stream0 { vec3 position; }; struct Stream1 { vec3 normal; }; struct Stream2 { vec2 uv; } or
* struct Stream0 { vec3 position; u32 packed_normal; }; struct Stream1 { u16 u; u16 v; }, etc, etc...

So shaders declare a group of vertex-attributes that they require (e.g. position, normal, uv), and then this config file declares a whole bunch of potential buffer storage formats. When importing a model, we find the union of all the vertex-attribute groups that the model will be used with (based on the vertex shaders that it will be used with). We then find the sub-set of storage formats that are compatible with all of those vertex-attribute groups, pick the smallest one, and then import the model data to that buffer format. 

In D3D11, we then generate an Input Layout object that describes the mapping from the storage format to the VS attributes, or equivalent structures for other APIs.
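For instance, for the second storage format above (Stream0 = position, Stream1 = normal + uv), the generated input layout would look roughly like this (device, vsBytecode and vsBytecodeSize are assumed to already exist):

// Maps each vertex-shader attribute to its stream (InputSlot) and byte offset.
D3D11_INPUT_ELEMENT_DESC elements[] = {
	{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 1,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    1, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

ID3D11InputLayout *inputLayout = nullptr;
device->CreateInputLayout(elements, 3, vsBytecode, vsBytecodeSize, &inputLayout);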

12 hours ago, Infinisearch said:

You're better off asking this in the graphics sub-forum.  Perhaps you could ask a mod to move it.

Do you have a particular API in mind?

Anyway, you can use multiple vertex streams, so you can break your vertex buffers into multiple smaller streams.

Basically AoS vs. SoA.

edit -

Yes.

Oops, I thought I did post it in the graphics section. It was 4 AM when I posted this. I guess graphics is close enough to gameplay. :P

I'm using almost every graphics API out there: DX11, GL4+, GLES and Metal.

 

4 minutes ago, Hodgman said:

I make shaders for the situations that we require, and then artists create models/materials that fit the requirements of the shaders.

If I'm making an "unlit" shader, then it won't require normals. When I import a model into our engine for use with that shader, I'll discard any normals that exist in the original model file.

If I'm making a normal-mapping shader, it will require normals and tangents. When importing a model into our engine for use with that shader, I'll report an error message if the original model file doesn't contain normals and tangents.

The actual memory layout of the vertex buffers is controlled by a config file. 
e.g. a shader might require positions/normals/UVs, but the vertex buffer format could be:
* struct Stream0 { vec3 position; vec3 normal; vec2 uv; }, or 
* struct Stream0 { vec3 position; }; struct Stream1 { vec3 normal; vec2 uv; } or
* struct Stream0 { vec3 position; }; struct Stream1 { vec3 normal; }; struct Stream2 { vec2 uv; } or
* struct Stream0 { vec3 position; u32 packed_normal; }; struct Stream1 { u16 u; u16 v; }, etc, etc...

So shaders declare a group of vertex-attributes that they require (e.g. position, normal, uv), and then this config file declares a whole bunch of potential buffer storage formats. When importing a model, we find the union of all the vertex-attribute groups that the model will be used with (based on the vertex shaders that it will be used with). We then find the sub-set of storage formats that are compatible with all of those vertex-attribute groups, pick the smallest one, and then import the model data to that buffer format. 

In D3D11, we then generate an Input Layout object that describes the mapping from the storage format to the VS attributes, or equivalent structures for other APIs.


 

That's perfect. Thanks!

