3D Renderer Architecture with OpenGL

5 comments, last by Hodgman 5 years, 8 months ago

I've read many tutorials on OpenGL and how to render neat stuff, but I can't find any article on how to merge all these techniques together into an "engine" or a "renderer".

My thinking is that eventually I need this 3D renderer, physics engine, sound, networking, etc. all connected through an "Engine" class, so right now my question is how the renderer works (where do I put the shadow mapping code, the culling, etc.).

I'm pretty clueless, so any piece of information would be great.


Sorry to say, but that is where you come in. If all of that were already done, what would that leave you to do, besides just mimicking what already exists? If you know all the bits and pieces, then the design of how to fit them together is up to you, as everyone's use case will be different. There is no magic recipe for an 'engine' or 'renderer' architecture. However, there are a few best practices or common practices that smart people on this forum can give you pointers on. My suggestion is to dive in, maybe do a high-level design mock-up based on your intended use and then ask to have it critiqued, for example. Or better yet, figure out what it is you actually need before you start thinking about minute details... those will come in good time.

Haha. My piano teacher told me there is no shame in copying from good pianists, so I thought about doing the same here. Thanks

First, understand that an engine can be a simple while() loop that polls for input, moves an object, and draws it. You can build on that, continually adding new features, and refactor when you need to. I wouldn't worry about organizing it perfectly up front. I could post a long reply, but if you are starting your own engine from scratch, just implement the features you need/want to work on. Get it working and keep building new features.
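To make that concrete, here's a minimal sketch of such a loop in C++. pollInput, update and render are hypothetical stubs standing in for your own input handling, simulation and OpenGL code:

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical stubs -- replace with real input, simulation, and OpenGL code.
static int framesLeft = 3;
bool pollInput()      { return --framesLeft >= 0; }        // pretend the window closes after 3 frames
void update(float dt) { std::printf("update dt=%.4fs\n", dt); } // move objects, run physics, etc.
void render()         { /* glClear(...), draw calls, swap buffers */ }

int main()
{
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();

    // The whole "engine": poll input, advance the simulation, draw.
    while (pollInput())
    {
        auto now = clock::now();
        float dt = std::chrono::duration<float>(now - previous).count();
        previous = now;

        update(dt);
        render();
    }
    return 0;
}
```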

From a more advanced standpoint, the renderer typically runs on its own thread. It stores positions/locations separately from the ones the gameplay/physics code has. When a new rock is spawned that doesn't move, you could tell the renderer "here is an object, the 3D mesh it belongs to, and its location". For something like a movable character, the gameplay/physics thread will have some kind of pointer/handle so it can tell the render thread "hey, that object, this is where it is now". Things like that.
A Render class and a Camera class: the Camera could drive the rendering, and you can render shadows from a camera or the main player's view from a camera. You just have to start somewhere. If you just want to learn rendering tricks and shaders, then work towards that.
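As a rough illustration of the handle/message idea (all names here are made up for the sketch; a real render thread would track meshes and more than just transforms):

```cpp
#include <cstdint>
#include <mutex>
#include <unordered_map>
#include <vector>

struct Mat4 { float m[16]; };                 // placeholder transform type

using RenderHandle = std::uint32_t;

struct RenderCommand
{
    enum class Type { Spawn, SetTransform } type;
    RenderHandle handle;
    std::uint32_t meshId;                     // which 3D mesh to draw (Spawn only)
    Mat4 transform;
};

class RenderQueue
{
public:
    // Called from the gameplay/physics thread when a new object is created.
    RenderHandle spawn(std::uint32_t meshId, const Mat4& transform)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        RenderHandle h = m_nextHandle++;
        m_commands.push_back({RenderCommand::Type::Spawn, h, meshId, transform});
        return h;
    }

    // Called from the gameplay/physics thread: "hey, that object, this is where it is now".
    void setTransform(RenderHandle h, const Mat4& transform)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_commands.push_back({RenderCommand::Type::SetTransform, h, 0, transform});
    }

    // Called once per frame from the render thread: apply all pending commands
    // to the renderer's own copy of the transforms, then draw from that copy.
    // (A real renderer would also register the mesh on Spawn.)
    void drainInto(std::unordered_map<RenderHandle, Mat4>& renderTransforms)
    {
        std::vector<RenderCommand> pending;
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            pending.swap(m_commands);
        }
        for (const RenderCommand& c : pending)
            renderTransforms[c.handle] = c.transform;
    }

private:
    std::mutex m_mutex;
    std::vector<RenderCommand> m_commands;
    RenderHandle m_nextHandle = 1;
};
```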

NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal

How you put the code together depends on what you want to do with it. Defining an end goal for what you want your renderer to do is a good start in working out how it should be pieced together: performance, platform support, features, and whether it is going to render thousands of objects in a world, such as a voxel engine, or a small number of highly detailed models.

Looking at how other engines communicate with their rendering systems might also help. For instance, Unity uses components: a Camera views the scene, a MeshRenderer draws 3D geometry, and a single Light component handles a variety of different light types, including shadows.

Unity Developer, C#, C++, Game Developer

I am currently employed as a Unity Developer at New Moon Studios in York, working on a range of augmented and virtual reality applications and games.

In my low level renderer:

Resource lists hold buffer & texture bindings. State groups hold resource lists, UBO bindings, pipeline state (depth/stencil, blend, raster) and a high level shader technique binding. 

A shader technique contains many different shader program objects to be used by the same object - depth only rendering, forward shading, deferred g-buffer filling, etc. 

A draw-item is created from a collection of state groups, a draw description (linear/indexed, primitive type, number of primitives), and a shader-pass ID (shadow pass, forward shading pass, etc.). From those inputs, the minimal set of states and bindings can be extracted to form the draw item.
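Very roughly, in code, that might look something like this (the names and fields are illustrative placeholders, not anyone's actual engine code):

```cpp
#include <cstdint>
#include <vector>

using PassId = std::uint32_t;                       // shadow pass, forward pass, ...

struct ResourceList                                  // buffer & texture bindings
{
    std::vector<std::uint32_t> textures;             // GL texture names by slot
    std::vector<std::uint32_t> buffers;              // GL buffer names by slot
};

struct PipelineState                                 // depth/stencil, blend, raster
{
    bool depthTest = true, depthWrite = true;
    bool blendEnabled = false;
    bool cullBackFaces = true;
};

struct ShaderTechnique                               // one program per pass
{
    std::vector<std::uint32_t> programByPass;        // indexed by PassId
};

struct StateGroup
{
    const ResourceList*    resources = nullptr;
    std::uint32_t          uniformBuffer = 0;        // UBO binding
    PipelineState          pipeline;
    const ShaderTechnique* technique = nullptr;
};

struct DrawDescription
{
    bool indexed = false;
    std::uint32_t primitiveType = 0;                 // e.g. GL_TRIANGLES
    std::uint32_t primitiveCount = 0;
};

struct DrawItem                                      // the flattened, minimal state
{
    std::uint32_t   program = 0;                     // resolved from technique + pass ID
    PipelineState   pipeline;
    ResourceList    resources;
    std::uint32_t   uniformBuffer = 0;
    DrawDescription draw;
};

// Compile a draw item by walking the state groups (later groups override earlier
// ones in this sketch) and picking the shader program for the requested pass.
inline DrawItem makeDrawItem(const std::vector<StateGroup>& groups,
                             const DrawDescription& draw, PassId pass)
{
    DrawItem item;
    item.draw = draw;
    for (const StateGroup& g : groups)
    {
        if (g.resources)     item.resources = *g.resources;
        if (g.uniformBuffer) item.uniformBuffer = g.uniformBuffer;
        item.pipeline = g.pipeline;
        if (g.technique && pass < g.technique->programByPass.size())
            item.program = g.technique->programByPass[pass];
    }
    return item;
}
```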

In my high level renderer:

Rendering pipelines declare lists of render stages that they want to collect draw items for. A stage has a shader pass ID, render target(s) / depth target (FBO), a camera frustum, and a state group that will be appended to any draw items created for that stage.

A model is made up of nodes and meshes. Nodes have bounding volumes for visibility culling, and contain meshes. Meshes have a collection of draw items (potentially one for each stage declared by the current set of pipelines). When creating a model, the pipelines are queried to find out the list of potential stages, so that draw items for each stage can be pre-created.

Scenes are collections of models. 

To render a scene, the current set of pipelines first generates the list of stages that will be drawn. The scene then collects a list of draw items applicable to each stage (meshes that have a draw item for that stage / whose shader has a program for the stage's pass ID, and whose node is visible to the stage's camera frustum). The pipelines can then submit those lists of draw items in the appropriate order.
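Condensed into a sketch, that high-level flow might look like this. Again, Stage, Mesh, Model, Scene and Pipeline are assumed names, DrawItem stands in for the low-level draw items above, and the frustum test is stubbed out:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

struct DrawItem { /* compiled low-level states & bindings (see earlier sketch) */ };

struct Bounds  { /* bounding volume used for culling */ };
struct Frustum
{
    bool contains(const Bounds&) const { return true; }   // placeholder: no real culling
};

struct StateGroup                      // simplified stage-level state group
{
    std::vector<std::uint32_t> textures;
    std::uint32_t uniformBuffer = 0;
};

using PassId  = std::uint32_t;
using StageId = std::uint32_t;

struct Stage
{
    StageId       id = 0;
    PassId        pass = 0;            // depth-only, forward, g-buffer, ...
    std::uint32_t framebuffer = 0;     // render target(s) / depth target (FBO)
    Frustum       frustum;             // camera (or light) frustum for this stage
    StateGroup    stateGroup;          // appended to draw items created for this stage
};

struct Mesh
{
    // Pre-created when the model is built, one per stage the pipelines declared.
    std::vector<std::pair<StageId, DrawItem>> drawItemsByStage;
};

struct Node
{
    Bounds bounds;                     // used for visibility culling
    std::vector<Mesh*> meshes;
};

struct Model { std::vector<Node> nodes; };
struct Scene { std::vector<Model*> models; };

struct Pipeline
{
    virtual ~Pipeline() = default;
    virtual std::vector<Stage> declareStages() const = 0;
    virtual void submit(const Stage&, const std::vector<const DrawItem*>&) = 0;
};

// Collect every draw item that matches a stage's ID and whose node passes the
// stage's frustum test.
std::vector<const DrawItem*> collectDrawItems(const Scene& scene, const Stage& stage)
{
    std::vector<const DrawItem*> result;
    for (const Model* model : scene.models)
        for (const Node& node : model->nodes)
        {
            if (!stage.frustum.contains(node.bounds))
                continue;                              // culled
            for (const Mesh* mesh : node.meshes)
                for (const auto& [stageId, item] : mesh->drawItemsByStage)
                    if (stageId == stage.id)
                        result.push_back(&item);
        }
    return result;
}

// The pipelines declare their stages, the scene gathers draw items per stage,
// and the pipelines submit them in order.
void renderScene(const Scene& scene, const std::vector<Pipeline*>& pipelines)
{
    for (Pipeline* pipeline : pipelines)
        for (const Stage& stage : pipeline->declareStages())
            pipeline->submit(stage, collectDrawItems(scene, stage));
}
```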

To implement shadow mapping:

I'd add a new shadow stage at the start of my pipeline, which uses the depth-only shader pass ID and the light's frustum. I'd modify the forward shading stage to contain a texture binding for the shadow map and a UBO binding for any related data (e.g. the light's view matrix) in its state group, ensuring every object in the forward shading pass can access the shadow map. The scene will then frustum cull from the light's point of view and collect the shadow casters. The pipeline will submit these draws before the forward shading stage, which can then consume the results.
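Building on the Stage/Pipeline sketch above (it reuses those assumed types), a hypothetical stage list with shadow mapping added might look like this; the pass/stage identifiers and resource handles are made up for illustration:

```cpp
// The shadow stage comes first, uses the depth-only pass ID and the light's
// frustum; the forward stage's state group gains the shadow-map texture plus a
// UBO holding the light's matrices, so every forward-pass object can use them.
struct ShadowSetup
{
    std::uint32_t shadowMapFbo = 0, shadowMapTexture = 0;   // depth-only FBO + its texture
    std::uint32_t lightMatricesUbo = 0;                     // light view/projection data
    Frustum       lightFrustum;
};

std::vector<Stage> declareStagesWithShadows(const ShadowSetup& shadow,
                                            std::uint32_t sceneFbo,
                                            const Frustum& cameraFrustum)
{
    // Assumed pass / stage identifiers.
    constexpr PassId  kDepthOnlyPass = 0, kForwardPass  = 1;
    constexpr StageId kShadowStage   = 0, kForwardStage = 1;

    Stage shadowStage;
    shadowStage.id          = kShadowStage;
    shadowStage.pass        = kDepthOnlyPass;        // render depth only
    shadowStage.framebuffer = shadow.shadowMapFbo;
    shadowStage.frustum     = shadow.lightFrustum;   // cull from the light's view

    Stage forwardStage;
    forwardStage.id          = kForwardStage;
    forwardStage.pass        = kForwardPass;
    forwardStage.framebuffer = sceneFbo;
    forwardStage.frustum     = cameraFrustum;
    // Appended to every draw item created for this stage, so each object in the
    // forward pass can sample the shadow map and read the light's matrices.
    forwardStage.stateGroup.textures.push_back(shadow.shadowMapTexture);
    forwardStage.stateGroup.uniformBuffer = shadow.lightMatricesUbo;

    // The shadow stage is listed (and submitted) first; the forward stage
    // consumes its results.
    return { shadowStage, forwardStage };
}
```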

