
How should I manage the relationship between resources and objects drawn in a 3D scene? (C++|OpenGL)


I'm trying to design a relatively simple 3D scene rendering engine. I also want it to be fairly high-level, so that it's easy to specify things like the position of a light, where an object should be, and so on.

 

The problem is that I'm finding it difficult to design and implement this, and more specifically to manage the relationship between resources and the in-game objects I'm drawing.

 

I can, at the moment, load a mesh, a texture, and a shader, specify the transform, and then draw. But that doesn't really solve much. If I want to create an entity, X, and draw it at a given position (by inserting it into the scene data), how should the rendering engine know X's mesh and texture? How do I specify what X is? Who manages its resources (and said resources' lifetimes)? And how do I cleanly decouple what knows what "visual assets" X should be composed of, what actually draws X, and the gameplay code?

 

I was thinking that maybe a solution would be to create a "SceneRenderer" class that takes a scene, a renderer, and some information on what X is composed of, and ties everything together. But then I may run into trouble: such a class is ambitious, I'm not sure how I'd "define what X is", and I would still need to deal with the resource management problem.

 

Sorry if what I'm saying is a bit confusing; I'm not really sure how to explain what I want to achieve. I don't want something that can draw an asset, I want something that can draw based on scene data managed by the gameplay code, which wouldn't contain actual information on how to draw, just what to draw and where. And I'm not too sure how to implement that, or whether I'm just going in the wrong direction with my thinking.

 

Just to be clear, I'm asking about the architectural decisions involved in creating a rendering engine, not actual code (code examples are welcome, though).


There are many ways to approach this, but in my implementation I prefer to have entities render themselves. Without getting really sophisticated in the renderer, you might otherwise end up with something too generic and limiting. Rendering is really a two-step process, because objects that can render themselves have no knowledge of the rendering pipeline in general, or of their placement within it. So I generally split the process into a "render" function and a "draw" function.

The render function simply submits an entry into the render queue with all the information the renderer needs to sort it for efficiency, choose the correct shader permutation, and fill in uniform buffers with matrices and such. The render entry includes the "draw" function as a callback. In C++11 this can be done with lambdas, std::bind and std::function, or good old C function pointers. The render queue collects all entries into a buffer for the frame, sorts them for the best drawing order, and then calls the draw callbacks in order.
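
To make that concrete, here is a minimal sketch of the render/draw split I'm describing. All of the names (RenderEntry, RenderQueue, MeshInstance) are illustrative, not taken from an actual engine:

```cpp
// Minimal sketch of the render/draw split described above.
// All names here are illustrative, not from an actual engine.
#include <algorithm>
#include <cstdint>
#include <functional>
#include <vector>

struct RenderEntry {
    uint64_t sortKey;                 // encodes layer, shader, material, depth...
    std::function<void()> draw;       // issues the actual GL calls for this entry
};

class RenderQueue {
public:
    void submit(uint64_t key, std::function<void()> drawFn) {
        entries.push_back({key, std::move(drawFn)});
    }
    void sortAndDraw() {
        std::sort(entries.begin(), entries.end(),
                  [](const RenderEntry& a, const RenderEntry& b) {
                      return a.sortKey < b.sortKey;
                  });
        for (auto& e : entries) e.draw();   // the "draw" step, in sorted order
        entries.clear();                    // the queue is rebuilt every frame
    }
private:
    std::vector<RenderEntry> entries;
};

// An entity's "render" step: package up what the renderer needs and submit it.
struct MeshInstance {
    uint64_t makeSortKey() const { return 0; /* bit-pack shader/material/depth */ }
    void drawGL() const { /* bind VAO and textures, glDrawElements(...) */ }

    void render(RenderQueue& queue) const {
        queue.submit(makeSortKey(), [this] { drawGL(); });
    }
};
```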

Thanks for the tips! But a few questions:

 

1. Your solution was to use an "immediate" rendering system, where entities submit themselves each frame. A lot of the examples I've seen use a "retained" mode, where the entities simply modify their information as needed, but the scene otherwise remains the same from frame to frame. Is there any reason why you decided to go with the "immediate" system? Any benefits to it? And how did you manage things like lighting?

2. You mention how you split rendering into two functions, but I'm not quite sure what the "draw" function/callback is for. Can you please elaborate?


1. Your solution was to use an "immediate" rendering system, where entities submit themselves each frame. A lot of the examples I've seen use a "retained" mode, where the entities simply modify their information as needed, but the scene otherwise remains the same from frame to frame. Is there any reason why you decided to go with the "immediate" system? Any benefits to it? And how did you manage things like lighting?

2. You mention how you split rendering into two functions, but I'm not quite sure what the "draw" function/callback is for. Can you please elaborate?

 

Well... not really an immediate-mode approach. An entity in the scene may contain a MeshInstance component. After the scene is frustum-culled for each active viewport, I run through the visible mesh instances and call mesh.render(entityId, scene). So entities don't actually submit themselves; the scene rendering process submits them, in a way. Entities just know how to package themselves up for the renderer to call back later, and they know how to actually draw themselves with OpenGL calls in the callback. For example, a mesh contains several drawsets (basically one for each unique material), so the render function for a mesh actually submits multiple entries into the render queue, one for each drawset. The terrain system does its drawing in a very different way than a mesh, but the render function is very similar. It just packages up render entries with a 64-bit key containing lots of state information embedded in bit fields, which the renderer uses to sort everything later.
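
For illustration, a 64-bit key like that can be packed with plain shifts and masks. The fields and widths below are just an example layout, not my actual format:

```cpp
// One possible layout for a 64-bit render key. The field choices and widths
// here are illustrative, not an actual engine's format.
#include <cstdint>

uint64_t makeRenderKey(uint8_t  sceneLayer,   // geometry vs. light volumes, etc. (4 bits)
                       uint8_t  viewport,     // viewport / pass index            (4 bits)
                       uint16_t shaderId,     // shader permutation               (12 bits)
                       uint16_t materialId,   // texture set / material           (16 bits)
                       uint32_t depth)        // quantized view-space depth       (28 bits)
{
    uint64_t key = 0;
    key |= (uint64_t)(sceneLayer & 0xF)     << 60;
    key |= (uint64_t)(viewport   & 0xF)     << 56;
    key |= (uint64_t)(shaderId   & 0xFFF)   << 44;
    key |= (uint64_t)(materialId & 0xFFFF)  << 28;
    key |= (uint64_t)(depth      & 0xFFFFFFF);
    return key;   // sorting by this key groups state changes and orders layers
}
```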

 

I use a deferred rendering approach, so I also submit light volume geometry into the queue, and define it in a different "scene layer" than the scene geometry (passed in 4 bits within the render key). The light volume scene layer is sorted to render after the scene geometry layer. The process is exactly the same as rendering a mesh though. An entity in the scene contains a LightInstance component. After frustum culling the light volume geometry, I run through all visible lights and call light.render(entityId, scene). This submits entries into the render queue for the deferred lighting pass, which happens after the "normal" scene geometry is rendered into the g-buffer. Later the renderer runs through the entries and calls the callback (the light's "draw" function) passing the unique instance state, world position, orientation, etc. The light drawing function is very different again from the mesh and terrain; basically just a free function that can draw a sphere, cone, or fullscreen quad depending on the light type.
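
The light's draw callback itself can be as simple as a switch on the light type. Everything below (LightInstanceState, the draw helpers) is a made-up sketch of that idea, not my actual code:

```cpp
// Hypothetical draw callback for a deferred light volume: a free function
// that picks the right proxy geometry for the light type.
enum class LightType { Point, Spot, Directional };

struct LightInstanceState {
    LightType type;
    float     position[3];
    float     radius;        // point light volume size
    float     coneAngle;     // spot light cone
};

void drawSphere(const LightInstanceState&)         { /* glDrawElements on a sphere mesh */ }
void drawCone(const LightInstanceState&)           { /* glDrawElements on a cone mesh   */ }
void drawFullscreenQuad(const LightInstanceState&) { /* fullscreen triangle/quad        */ }

void drawLight(const LightInstanceState& light) {
    switch (light.type) {
        case LightType::Point:       drawSphere(light);         break;
        case LightType::Spot:        drawCone(light);           break;
        case LightType::Directional: drawFullscreenQuad(light); break;
    }
}
```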

 

Splitting the rendering process into two steps allows me to fill up a render queue first with all unique data that the renderer needs to render the frame. There is still shared resource data (like mesh vb/ib/animation frames) not contained in the render queue, but the draw callback can access all that stuff. So this allows for render submissions to come from multiple threads. Drawing does not necessarily happen on the same thread as game update. If there are >= 4 cores detected, my engine decides that drawing will be done in parallel. Since the render queue + the shared resource itself contains everything needed to draw the frame, updating of the next frame can begin immediately in parallel on another thread while drawing takes place.
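
As a very rough sketch of that overlap (reusing the RenderQueue type from the earlier snippet, with a hypothetical updateAndSubmit() standing in for the game update and render submissions):

```cpp
// Rough sketch of overlapping update and draw with two queues. Assumes the
// RenderQueue type from the earlier snippet; updateAndSubmit() is a
// hypothetical function that runs gameplay and submits render entries.
#include <thread>

void updateAndSubmit(RenderQueue& queue);    // hypothetical: game update + render() calls

void runFrameLoop(bool& running) {
    RenderQueue queues[2];
    int fill = 0;                            // queue currently being filled by update
    updateAndSubmit(queues[fill]);           // prime the first frame

    while (running) {
        int draw = fill;
        fill = 1 - fill;
        // Update frame N+1 on a worker thread while frame N is drawn on the
        // thread that owns the GL context.
        std::thread updateThread([&] { updateAndSubmit(queues[fill]); });
        queues[draw].sortAndDraw();
        updateThread.join();                 // don't reuse a queue until both are done
    }
}
```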

 

It's a pretty big topic and I could go on, but I hope this clears up what I was saying a bit.

Edited by y2kiah


how should the rendering engine know X's mesh and texture?

X references the list of mesh(es) it needs, and each mesh references the textures and shaders it needs.
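
In data terms that can be as simple as plain IDs/handles, along the lines of this sketch (all of the names are made up for illustration):

```cpp
// Illustrative handle-based references: X only stores IDs, and the renderer
// resolves them into actual GPU resources. None of these names come from a
// real library.
#include <cstdint>
#include <string>
#include <vector>

using MeshId    = uint32_t;
using TextureId = uint32_t;
using ShaderId  = uint32_t;

struct Mesh {
    std::vector<TextureId> textures;    // the textures this mesh needs
    ShaderId               shader;      // the shader this mesh is drawn with
    // ...vertex/index buffer handles, vertex format, etc.
};

struct Entity {                         // "X" in the discussion above
    std::string         name;
    float               transform[16];  // world matrix
    std::vector<MeshId> meshes;         // which meshes make up this entity
};
```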

How do I specify what X is?

The renderer doesn't need to know that. It just needs a list of polygons, textures and shaders to send to the GPU.

Who manages its resources (and said resources' lifetimes)?

The rendering queue. It takes as input all the info needed about resources (like the references between resources mentioned above) to decide when one of X's resources is no longer needed, or when new resources must be loaded, but it does not hold any info about X itself: everything it holds is considered a primitive resource that can be sent directly to the GPU for rendering, even transformation matrices. The goal here is to have as little data as possible sent to the graphics card every frame, so the rendering queue must keep track of how many times each resource is referenced in the current frame, and discard resources which are no longer referenced. The rendering queue must also be optimized towards reducing the number of required draw calls, so when new resources are added, it must sort them based on specific criteria, like the textures and/or shaders they use.
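
As a rough sketch of that idea (per-frame reference counting plus sorting by shader/texture; the names and the exact policy are made up):

```cpp
// Rough sketch of a render queue that tracks per-frame resource references
// and sorts submissions by shader/texture to reduce state changes. All names
// and the exact policy are illustrative.
#include <algorithm>
#include <cstdint>
#include <tuple>
#include <unordered_map>
#include <vector>

struct Submission {
    uint32_t shaderId;
    uint32_t textureId;
    uint32_t meshId;
};

class ResourceTrackingQueue {
public:
    void submit(const Submission& s) {
        ++refCounts[s.meshId];              // mark this mesh as used this frame
        submissions.push_back(s);
    }

    void endFrame() {
        // Group by shader first, then texture, so Render() rebinds as little as possible.
        std::sort(submissions.begin(), submissions.end(),
                  [](const Submission& a, const Submission& b) {
                      return std::tie(a.shaderId, a.textureId) <
                             std::tie(b.shaderId, b.textureId);
                  });

        // ...hand the sorted list to Render() here...

        for (auto& entry : refCounts) {
            if (entry.second == 0) { /* not referenced this frame: candidate for unloading */ }
            entry.second = 0;               // reset the count for the next frame
        }
        submissions.clear();
    }

private:
    std::vector<Submission> submissions;
    std::unordered_map<uint32_t, uint32_t> refCounts;
};
```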

And how do I cleanly decouple what knows what "visual assets" X should be composed of, what actually draws X, and the gameplay code?

The drawing code should not be bound to X, and it does not need to care what X is. Drawing should be implemented in a single general-purpose Render() function that just takes the list of polygons, textures and shaders from the (optimized) render queue and passes them to the GPU. If you have things like different vertex formats for different meshes, then they (the vertex formats) should be added as a resource to the render queue (and referenced by X) and then used in the Render() function.

Gameplay logic is handled in a separate "Update" function, which fills in the higher-level data about X, like its position. The Update function handles the interactions between all the Xs and Ys, and then places them into the render queue. It does not handle resources. The resources are allocated/discarded only by the rendering queue, when X is added to the queue or when Y is removed.
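
The Render() side of this split might look roughly like the following, only rebinding the shader and texture when they change between entries (the Submission struct is from the previous sketch; drawMesh() is hypothetical):

```cpp
// Sketch of a single general-purpose Render() over an already-sorted list of
// submissions. The GL calls are standard OpenGL; everything else is illustrative.
#include <cstdint>
#include <vector>
// #include the header of your GL loader (glad, GLEW, ...) for glUseProgram etc.

void drawMesh(uint32_t meshId);   // hypothetical: binds the VAO and calls glDrawElements

void Render(const std::vector<Submission>& sorted) {
    uint32_t boundShader  = 0;
    uint32_t boundTexture = 0;
    for (const Submission& s : sorted) {
        if (s.shaderId != boundShader) {
            glUseProgram(s.shaderId);                    // switch shaders only when needed
            boundShader = s.shaderId;
        }
        if (s.textureId != boundTexture) {
            glBindTexture(GL_TEXTURE_2D, s.textureId);   // same for textures
            boundTexture = s.textureId;
        }
        drawMesh(s.meshId);
    }
}
```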

I'm not sure how I'd "define what X is"

Only X itself should know what it is and the ways it can interact with all the Ys. X should have virtual methods that can be called like this in the Update function: X->InteractWith(Y). Ys should also have virtual methods that tell X what they are, so X can decide what to do in the InteractWith method, like Y->Position(). Initially, you should start by defining only a single class for both X and Y, and stuff as much functionality in there as possible, along with the methods required for updating. Later, as you find that you need different types of info or behavior, you can start declaring X, Y and Z classes derived from that base class that override the base class's InteractWith(Y) and/or Position() methods. You could also try using dynamic_cast to tell X apart from Y and decide which methods to call between them, but ideally, X, Y and Z shouldn't declare additional methods for interaction... X should be able to figure out everything it needs to know about Y just by using the Y->Position() method that is already available in the base class and overridden by Y. Once you start using dynamic_cast, things get complicated fast.

Or, to put it another way: the interactions should not depend on what X or Y is; only on specific properties of X and Y, like the position. Only the end-user really needs to know that X is X and Y is Y, and for that you simply add something like a "Name" data member in the base class of X and Y, which holds the info needed to tell the user that X is X and Y is Y.

Or, an even simpler explanation: instead of thinking that the code (the interactions) must be written around what X is, try thinking about it the other way around: what X does (the code in its InteractWith method(s)) defines what X is.
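
A minimal sketch of that single-base-class idea (the Projectile class and its specific behavior are made-up examples; only the InteractWith/Position/Name structure follows what I described):

```cpp
// Minimal sketch of the single-base-class approach described above. The
// Projectile class and its behavior are made-up examples.
#include <string>
#include <utility>

struct Vec3 { float x, y, z; };

class Entity {
public:
    Entity(std::string name, Vec3 pos) : Name(std::move(name)), pos(pos) {}
    virtual ~Entity() {}

    virtual Vec3 Position() const { return pos; }

    // Interactions depend only on properties exposed by the base class,
    // not on what the other object "is".
    virtual void InteractWith(Entity& other) { (void)other; /* default: do nothing */ }

    std::string Name;   // only the end user needs to know that "X is X"

protected:
    Vec3 pos;
};

class Projectile : public Entity {          // one possible "X"
public:
    using Entity::Entity;
    void InteractWith(Entity& other) override {
        // Steer toward whatever we interact with, using only Position().
        Vec3 t = other.Position();
        pos.x += (t.x - pos.x) * 0.1f;
        pos.y += (t.y - pos.y) * 0.1f;
        pos.z += (t.z - pos.z) * 0.1f;
    }
};

// In Update(): for each pair that should interact, call x->InteractWith(*y);
```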

Edited by tonemgub
