noodleBowl

Models, buffers, and rendering


Recommended Posts

I have been thinking about this lately and was wondering if I could get some guidance.
 
Let's say I have 4 models:
 

Truck
Boat
Airplane
Sprite (2D Quad)

 
They all use a different texture and have a different set of vertices. The vehicles use a vehicle shader and the Sprite uses its own sprite shader.
So we are looking at something like this:

Models and their resources

Truck
  Truck Vertices
  Truck Texture
  Vehicle Shader

Boat
  Boat Vertices
  Boat Texture
  Vehicle Shader

Airplane
  Airplane Vertices
  Airplane Texture
  Vehicle Shader

Sprite (2D Quad)
  Sprite Vertices
  Sprite Texture
  Sprite Shader

 
Now my question might be stupid, but how do you go about rendering all of these different things in a manner of best practice?
More specifically, how should all this data be organized so rendering is efficient?
 
Should each of these models boil down to its own class? E.g. the Truck model is really a Truck class, and I store all of the information (vertices, shader, texture) in that class.
So the Truck class has an ID3D11Buffer member for the vertex data required by the model, and all other models follow suit; e.g. the Sprite class also has its own ID3D11Buffer member which holds the 6 vertices of a quad.
 
Then when rendering I do something like:

//Pseudocode
foreach renderable //A renderable is one of the models as its class (Truck class, Sprite class, etc.)
{
	//1. Bind the renderable's texture
	//2. Bind the renderable's shader
	//3. Bind the renderable's vertex buffer
	//4. DirectX device context draw call
}

//Present all the things we just drew
swapChain->Present(0, 0);

 
Am I right in my assumptions, or is this actually a really horrible way to handle things?
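For what it's worth, the loop above can be sketched concretely. This is a minimal CPU-side sketch, with hypothetical plain-int handles standing in for the real D3D11 pointers (ID3D11ShaderResourceView*, ID3D11VertexShader*, ID3D11Buffer*) so it stays self-contained:

```cpp
#include <vector>

// Hypothetical stand-ins for the real D3D11 objects; plain ints keep the
// sketch self-contained and compilable without the D3D11 headers.
struct Renderable {
    int texture;       // renderable's texture handle
    int shader;        // renderable's shader handle
    int vertexBuffer;  // renderable's vertex buffer handle
    int vertexCount;   // number of vertices to draw
};

// Walks the renderables exactly as the pseudocode does; returns the number
// of draw calls issued, standing in for context->Draw(...).
int RenderAll(const std::vector<Renderable>& renderables) {
    int drawCalls = 0;
    for (const Renderable& r : renderables) {
        // 1. Bind r.texture      (PSSetShaderResources in real code)
        // 2. Bind r.shader       (VSSetShader / PSSetShader)
        // 3. Bind r.vertexBuffer (IASetVertexBuffers)
        // 4. Draw r.vertexCount vertices (context->Draw)
        ++drawCalls;
    }
    return drawCalls; // after the loop: swapChain->Present(0, 0);
}
```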


There is no standard practice, as most of the decisions behind how to render things efficiently are based on the use case (although there are general guidelines, e.g. minimize state changes and whatnot).

With that said, as different as the models may seem at a higher level, in the end all your shader will see is a bunch of vertices, whose format may include floats for position, normal, texture coordinates, and so on. So there is technically a single 'model' concept, with each instance having different data (I guess this could be interpreted as 3 different models), but the base 'model' object should not care whether it's a vehicle, a tree, or any other model you have. With respect to the shaders, why assume you need separate ones? If the model vertices are of the same format (e.g. all models have a vertex position, vertex normal, and texture coords), and assuming no special lighting or shading effects, then all N models can share the same shader.
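As a sketch of that shared-format idea, here is one common vertex layout (field names are illustrative, mirroring a typical POSITION/NORMAL/TEXCOORD input layout) that truck, boat, airplane, and sprite could all use:

```cpp
#include <cstddef>

// A single vertex layout shared by every model: position, normal, and
// texture coordinates, all plain floats.
struct Vertex {
    float position[3];
    float normal[3];
    float uv[2];
};

// One stride works for every model alike, so a single input layout and a
// single shader can service all of them.
constexpr unsigned kVertexStride = sizeof(Vertex);
```

With this, the only per-model differences are the buffer contents and the bound texture, not the format the shader consumes.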

The shader(s) just expect vertex data, textures, etc., and do not care who or what they belong to.

I personally moved away from class-to-mesh mapping because, at the end of the day, it wasn't much of a step to manage a mesh and the associated shader for that mesh. I still think I have too many different shaders and could boil it down more.

My advice: design a framework in which you can associate any mesh with any shader, allowing things to be grouped and batched up based on the shader in use. What you may find is a strong correlation. I would also suggest making your shader vertex definitions as common as possible. You don't have to use everything in the vertex shader, but it speeds up development if you limit the number of different vertex formats you have.


 

Now my question might be stupid, but how do you go about rendering all of these different things in a manner of best practice?

 

As already stated, you may not need different shaders. You can have a single shader that shades things uniformly throughout your game, expecting a similar set of data per draw no matter the mesh or subset requesting the render.

 

Even if you have shaders that shade things differently, you can combine them if you so choose into an uber-shader, where the logic of the smaller shaders is concatenated into one large shader whose texel shading path is determined by control-flow arguments submitted on the app side. Some may argue for one approach over the other, but like the gum commercial, it really is Up2U.
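As a CPU-side analogy of that uber-shader branching (the real work would be texel math inside a single HLSL pixel shader, with the flag fed through a constant buffer; the enum names and the 0.8 factor here are made up for illustration):

```cpp
// One function containing the logic of several smaller shaders, with the
// path chosen by a control-flow argument the app submits per draw.
enum class ShadeMode { Vehicle, Sprite };

// Illustrative "shading": vehicles get a lit path, sprites pass through.
float ShadeTexel(ShadeMode mode, float baseColor) {
    switch (mode) {
        case ShadeMode::Vehicle: return baseColor * 0.8f; // lit path
        case ShadeMode::Sprite:  return baseColor;        // unlit path
    }
    return baseColor;
}
```

The trade-off is real: one shader object and fewer binds, at the cost of branching inside the shader.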

 

However, in the context of efficient rendering with D3D11, from what I understand it usually follows the guideline of grouping like data. That is, group similar shaders, their respective meshes, and input layouts when bulk rendering, so as to eliminate unnecessary D3D calls and context switches on the GPU itself.


Early on, it doesn't matter. Unless you have thousands of instances based on hundreds of meshes, you don't necessarily need to worry about it. Stick all the vertices into one big vertex buffer and just change the start offset before drawing something. Track your shader state, and if it isn't a new shader, don't bind it. A lot of things nowadays use the same shader (deferred rendering).
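The "track your shader state" part is just a last-bound cache. A minimal sketch, with an int handle standing in for the real shader pointer you would compare:

```cpp
// Redundant-state filter: remember the last shader bound and skip the bind
// when it hasn't changed. Real code would compare ID3D11VertexShader*
// pointers instead of int handles.
struct ShaderStateCache {
    int current = -1; // -1 = no shader bound yet
    int binds = 0;    // how many real binds were issued

    void Bind(int shader) {
        if (shader == current) return; // already bound: skip the API call
        // VSSetShader/PSSetShader would go here.
        current = shader;
        ++binds;
    }
};
```

Combined with drawing similar things consecutively, this keeps redundant state changes out of the command stream without restructuring anything else.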


I think I'm more focused on the vertex buffers and how the data for the meshes, regardless of whether it's a truck, car, boat, cat, sprite, or whatever, should be organized in code.
 
 

However, in the context of efficient rendering with D3D11, from what I understand it usually follows the guideline of grouping like data. That is, group similar shaders, their respective meshes, and input layouts when bulk rendering, so as to eliminate unnecessary D3D calls and context switches on the GPU itself.

 

I think this captures my line of thinking best when I was writing this.

 

In my example, I was thinking that regardless of how many or which items I need to render, they would all have their resources at hand. E.g. all vehicles would use the Vehicle Shader (created once and then referenced).

But my issue came up with what to do about the buffers, like the vertex buffer. Now I'm starting to think that I should handle everything like the shaders: load the mesh and create whatever buffers I need once, then have a separate class that uses the mesh. E.g. I have a TruckMesh class that holds all of the buffer data (vertex, index, etc.), the texture, the shader to use, etc. But then I also have a Truck class which has all the properties needed by a truck (speed, turn radius, damage, etc.) and just references the TruckMesh, so it knows what resources to use when it needs to render.
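That split can be sketched as two structs: the shared resource created once, and the gameplay object that references it. Handles are hypothetical ints standing in for ID3D11Buffer*/texture views:

```cpp
#include <memory>

// The "TruckMesh"-style resource from the post: created once, holding the
// GPU-side data (placeholder int handles instead of D3D11 pointers).
struct MeshResource {
    int vertexBuffer;
    int indexBuffer;
    int texture;
    int shader;
};

// The gameplay-side object: its own properties, plus a reference to the
// shared mesh so the renderer knows which resources to bind.
struct Truck {
    float speed;
    float turnRadius;
    std::shared_ptr<MeshResource> mesh; // shared between instances, not owned
};
```

Any number of Truck instances then point at the one MeshResource, so the buffers exist exactly once no matter how many trucks are on screen.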


Memory pools provide maximum flexibility.

Assets of a particular type (mesh, texture, shader, etc.) are kept in a memory pool. Any combo of assets (that makes sense) can be bound to the pipeline.

Then an object becomes almost an ECS entity: just a list of the asset IDs it uses.
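A minimal sketch of that pool-plus-ID arrangement, with int payloads standing in for the real asset objects:

```cpp
#include <cstddef>
#include <vector>

// One pool per asset type; an ID is just an index into its pool. Real pools
// would hold ID3D11Buffer*/ID3D11ShaderResourceView* etc. instead of ints.
struct Pools {
    std::vector<int> meshes;
    std::vector<int> textures;
    std::vector<int> shaders;
};

// An object is then little more than the list of asset IDs it uses,
// much like an ECS entity referencing components.
struct Entity {
    std::size_t meshId;
    std::size_t textureId;
    std::size_t shaderId;
};

// To render an entity, look each ID up in its pool and bind the result.
int ResolveMesh(const Pools& pools, const Entity& e) {
    return pools.meshes[e.meshId];
}
```

Sorting entities by shaderId before resolving then gives the batching discussed earlier for free.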
