noodleBowl

DX11 Models, model matrices, and rendering


I was thinking about how to render multiple objects: sprites, truck models, plane models, boat models, etc. I'm not too sure about this process.

Let's say I have a vector of Model objects:

class Model
{
    Matrix4 modelMat;    // this model's world transform
    VertexData vertices;
    Texture texture;
    Shader shader;
};

Since each model has its own model matrix, as all models should, does this mean I now need one draw call per model?

Each model that needs to be drawn could change the MVP matrix used by the bound vertex shader, meaning I have to keep updating/mapping the constant buffer my MVP matrix is stored in, which is used by the vertex shader.

Am I thinking about all of this wrong? Isn't this horribly inefficient?


The options:

1. One draw call per model, OR
2. Instancing, which is one call per N models of the same type, OR
3. Pre-transform the vertices of some static models so the vertices are in world space, OR
4. If DX12, use draw indirect, driven from the CPU or from a GPU-driven pipeline, OR
5. If DX11, use instancing with manual vertex fetch for clustered rendering, OR
6. If DX11, use draw indirect with either virtual texturing or a thin G-buffer with deferred texturing.

I think that's all of them.

edit - there's also merge instancing, but that's similar to the fifth one I listed.

edit2 - look into texture atlases to help batch draw calls.

Edited by Infinisearch

2 hours ago, noodleBowl said:

Since each model has its own model matrix, as all models should, does this mean I now need one draw call per model?

There are other options listed above.

2 hours ago, noodleBowl said:

Am I thinking about all of this wrong? Isn't this horribly inefficient?

You aren't necessarily thinking about this wrong, just incompletely, since there are other options. As far as inefficiency goes, it depends on the order you draw your models in for DX11. And you do have a "draw call budget" to think about, but it depends on your CPU/GPU load. Basically, if you are on DX11, the simple thing to do is sort by shader, then texture, then other state changes, and use instancing where possible.


This might be a stupid question, but what is considered a "static" model? Would a sprite be considered a static model, because it does not move even though you can animate it?

If I were to pre-transform my vertices, would it only work for static models?


It means no animation too. Think about it: if you pre-transform the vertices and then move them, they would need to be transformed again... what's the point? Think walls that don't move in any way, or buildings in a cityscape. I'm no expert on this technique since I never bothered using it. Maybe @Hodgman can explain the different variations of the technique better than me; I think I remember him mentioning it once.

But like I said, for now you're better off just batching properly and using instancing where possible, with texture atlases to make the batches bigger.

Oh, and here are some presentations and papers that describe some of the techniques:

http://www.humus.name/Articles/Persson_GraphicsGemsForGames.pptx

http://advances.realtimerendering.com/s2015/aaltonenhaar_siggraph2015_combined_final_footer_220dpi.pptx

 

Edited by Infinisearch


@Infinisearch Thanks for the above!

After looking at those presentations I do have some basic/general questions.

Using these as an example, let's say I have the following meshes:

Knight
Airplane
Cruise Ship

Because I would want to draw a variety of the above, which may all have different transforms, I cannot put them into one buffer and save on draw calls, correct (excluding instancing in this case)?

Even if I needed to draw something enough to warrant instancing, and they all had different positions, rotations, etc., can I still instance? I thought that for instancing to work, everything had to be the same.

34 minutes ago, noodleBowl said:

Because I would want to draw a variety of the above, which may all have different transforms, I cannot put them into one buffer and save on draw calls, correct (excluding instancing in this case)?

 

36 minutes ago, noodleBowl said:

Even if I needed to draw something enough to warrant instancing, and they all had different positions, rotations, etc., can I still instance? I thought that for instancing to work, everything had to be the same.

First of all, if they are of the same vertex type you should be able to stick them in the same buffer, thus reducing state changes between draw calls (look at the arguments to a draw call to understand what I mean). Alright, forget about pre-transforming vertices, since that would be for static objects only.

Option one is to use one draw call per model.

Option two: for each model that has exactly the same geometry data (not the transform and other constants), use instancing.

Option three is packing textures into a texture atlas (DX11 and before; DX12 is different) and then using instancing on the same models (but now with different textures in addition to different transforms and constants).

Option four is merge instancing, in which you combine instancing with manual vertex fetch and a texture atlas (with this you can have different geometry, textures, transforms, and constants). The only constraint is that the different models should be approximately the same size, otherwise you waste performance on degenerate triangles.

Option five is an extension of merge instancing in which, instead of using an instance size as big as the biggest model of the group, you use an instance size that is much smaller than the model size. This requires you to split your models into triangle clusters of the same size and potentially use triangle strips. But this technique allows you to do cluster-based GPU culling, which can be a big performance win.

Then there is draw indirect, which is different in DX11 and DX12; in DirectX 12 it will allow for some nice tricks on models that vary.

So to answer your question: for standard instancing in DX11, the geometry and texture have to be the same, but the transform and other constants like color can vary. In DX11, if you implement a texture atlas you would be able to vary the texture while using instancing, but the geometry would be the same. In DX11, if you use manual vertex fetching you throw away the post-transform vertex cache, but now you can use instancing to draw different geometry. There are two ways to do this; I described them above and posted links to the techniques in my previous post.

6 hours ago, Infinisearch said:

First of all if they are of the same vertex type you should be able to stick them in the same buffer thus reducing state changes between draw calls. (look at the arguments to a draw call to understand what I mean)

I'm actually not really sure what you are talking about here. The Draw call only has a VertexCount and a StartVertexLocation. Am I looking at the wrong function? The only thing I can think of is the D3D11_INPUT_ELEMENT_DESC needed for an input layout:

D3D11_INPUT_ELEMENT_DESC inputElementDescription[] = {
	{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

 

I'm looking at this tutorial on standard instancing and I don't 100% understand the input layout when it comes to the instance data. More specifically, I don't understand why they have changed the InputSlot to 1. Is this because they are binding 2 buffers, and using 1 would point to the second buffer (m_instanceBuffer) where the instance modifications are stored? Or is it really just that they are reusing a semantic (TEXCOORD) and the two bound buffers (m_vertexBuffer and m_instanceBuffer) are treated as one big buffer?

In the tutorial they create an InstanceType struct to hold the modifications they want to do to the vertex positions. But in the case of using a transform (model) matrix to do vertex data modifications, would it be done the same way instead of using a constant buffer?
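From reading the docs, my understanding is that InputSlot 1 ties those elements to the second buffer passed to IASetVertexBuffers; the buffers stay separate rather than being treated as one big buffer. For a per-instance world matrix it would look something like this (untested sketch, reusing the tutorial's m_vertexBuffer/m_instanceBuffer names; a 4x4 matrix has to be passed as four float4 rows):

```cpp
// Slot 0: per-vertex data (m_vertexBuffer).
// Slot 1: per-instance data (m_instanceBuffer), stepped once per instance.
D3D11_INPUT_ELEMENT_DESC layout[] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    // Per-instance world matrix: one float4 row per element, all in slot 1.
    { "WORLD", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0,
      D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "WORLD", 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT,
      D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "WORLD", 2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT,
      D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "WORLD", 3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT,
      D3D11_INPUT_PER_INSTANCE_DATA, 1 },
};

// Both buffers bound at once; the slot index in the layout picks which
// one each element is read from.
ID3D11Buffer* buffers[2] = { m_vertexBuffer, m_instanceBuffer };
UINT strides[2] = { sizeof(Vertex), sizeof(InstanceType) };
UINT offsets[2] = { 0, 0 };
context->IASetVertexBuffers(0, 2, buffers, strides, offsets);
```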

Edited by noodleBowl

8 hours ago, Infinisearch said:

Alright forget about pretransforming vertices since that would be for static objects only.

Wouldn't it make sense to pretransform dynamic meshes too?

Thinking of skinning, tessellating, etc. multiple times for each shadow map, I assume pre-transforming would be faster even if this means additional reads/writes to global memory. Drawing all models with one call is another advantage, GPU culling another; everything becomes less fragmented.

But I never tried that yet.

One thing I tried is to store a matrix index in the vertex data (position.w) and load the matrix per vertex. That worked surprisingly well, although on AMD it wastes registers. I did not notice a performance difference between drawing 2 million boxes with a unique matrix per box or just using one global transform. It seems it was rasterizer limited (the boxes were just textured, not lit).

 
