Zondartul

Scene System?


Recommended Posts

I'm trying to figure out how to organize the parts of my game engine and how they talk to each other.

 

Previously, whenever I wanted to display something on screen, I just immediately made a draw call for the individual item in question (point, line, triangle, mesh, screen text, etc), and, using OpenGL 2.x (for silly historical reasons), that was fine. However, I ran into two problems:

  • The need to display something on screen can arise outside of a "renderEverything()" call, for instance if I have debug info to display while processing physics, or when processing user input. I don't yet use threads, but in the future, other threads might need to have something drawn too.
  • Redrawing the entire scene from scratch is very slow
  • Immediate-mode calls like glBegin(GL_LINES) ... glVertex3f() ... glEnd() do not exist in the OpenGL 3.x core profile that I want to move to (as far as I know)

So I figured I want to create a "scene system". The idea is that the scene system would accept requests to have things drawn, either once (during the next frame), or until the drawn object's lifetime runs out (i.e. next N frames). In turn, the scene system would be the only part of my code that has knowledge of OpenGL and has to link against it. It would figure out how to efficiently upload data to the GPU (compose OpenGL arrays / buffers, track if they need re-uploading or not) and be responsible for issuing draw calls.
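The submission API described above could be sketched roughly as follows. This is only an illustration of the idea, not a real implementation; the names `DrawRequest`, `SceneSystem`, `submit`, and `renderFrame` are invented here, and `meshId` stands in for whatever handle the engine would use for uploaded geometry.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical draw request: anything the engine wants on screen,
// alive for a fixed number of frames (1 = draw once, next frame).
struct DrawRequest {
    int meshId;           // handle to geometry (placeholder)
    int framesRemaining;
};

class SceneSystem {
public:
    // Any subsystem (physics debug, input handling, game logic)
    // can call this at any time, not just inside renderEverything().
    void submit(int meshId, int lifetimeFrames = 1) {
        requests.push_back({meshId, lifetimeFrames});
    }

    // Called once per frame by the render loop. This would be the only
    // place that knows about OpenGL: it would compose buffers, track
    // whether they need re-uploading, and issue the actual draw calls.
    void renderFrame() {
        // ... issue one draw call per live request here ...
        for (DrawRequest& r : requests) --r.framesRemaining;
        // Drop requests whose lifetime has expired (erase-remove idiom).
        requests.erase(
            std::remove_if(requests.begin(), requests.end(),
                           [](const DrawRequest& r) {
                               return r.framesRemaining <= 0;
                           }),
            requests.end());
    }

    std::size_t liveRequests() const { return requests.size(); }

private:
    std::vector<DrawRequest> requests;
};
```

A one-shot request survives exactly one `renderFrame()` call; a request with a longer lifetime is redrawn each frame until it expires.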

 

My question is, is there a name for such a system in industry? How is this problem commonly solved? Is having a "scene system" even a sane idea in the first place?

 

PS: I thought about posting this in Graphics Programming and Theory, but since the question is less about OpenGL/DirectX/geometry and more about high-level program design, I figured it's more appropriate here.

Edited by Zondartul


The idea of the "Scene Manager", as Sean mentioned, is normally to store the subset of data that could be drawn at any given time. Its job is also to handle the transformation hierarchies of objects and the camera. Remember that you don't move the camera, you move the world. So this is a very important part of the engine.

 

The scene manager can be implemented with a variety of techniques. Each one has upsides and downsides that you need to consider, and if need be, you'll likely find yourself adapting a data structure to suit your needs.

 

For a while, I've been using trees as a data structure for my scene manager. But I recently swapped its data structure to something more practical for what my engine is expected to handle.
 

Now I use sets made by arrays.

 

So I have a Master Scene, which is the super-set of all sets. Below that are the world scenes. The Master Scene has access to all data in its sets. Sub-set scenes are not allowed to concern themselves with the master scene. Dynamic objects are stored in the master scene and unioned with subsets.

The subsets are generally treated as the immediate area around a character. Because my game allows eight characters to be located anywhere in the world, this structure was necessary to limit the amount of data that gets computed.

 

When a character is in the overworld, an entire scene is a 13x13 block of cells around them. Each character that roams past that block obtains their own scene-set encased around them. When the player swaps characters, the camera tells the master scene to change its set, thus moving it to the new set. When it's time to render, the render core only receives the set of information that the active camera is in.
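The per-character cell window could be computed along these lines. The 13x13 window size comes from the post; `WORLD_CELLS`, the function name, and the clamping behavior at world edges are assumptions for illustration.

```cpp
#include <algorithm>

// Hypothetical: each character owns a window of overworld cells
// centered on it. 13x13 is from the post; WORLD_CELLS is assumed.
constexpr int WINDOW = 13;        // 13x13 block of cells
constexpr int HALF = WINDOW / 2;  // 6 cells each side of the character
constexpr int WORLD_CELLS = 256;  // assumed world dimension in cells

struct CellRange { int minX, maxX, minY, maxY; };

// Clamp the window at the world edges so the range stays in bounds.
CellRange sceneWindow(int cellX, int cellY) {
    return {
        std::max(cellX - HALF, 0), std::min(cellX + HALF, WORLD_CELLS - 1),
        std::max(cellY - HALF, 0), std::min(cellY + HALF, WORLD_CELLS - 1)
    };
}
```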

This is also extended to work with networking as well.

 

As far as how the data is stored...

The sets are nothing more than a somewhat ordered array of proxies, plus an array of pointers to their respective subsets, supersets, and unions.

 

The arrays are sorted by level of child depth using an insertion sort.
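The depth sort could look like this; the `Proxy` struct and its fields are invented for illustration. Insertion sort is a reasonable fit here because a scene array stays mostly sorted from one frame to the next, and insertion sort is nearly linear on almost-sorted input.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical scene proxy: 'depth' is how many parents sit above it
// in the hierarchy (0 = root-level object).
struct Proxy {
    int id;
    int depth;
};

// Insertion sort by child depth: parents (smaller depth) end up before
// their children, so a single forward pass can process the hierarchy
// top-down. Nearly O(n) when the array is already mostly sorted.
void sortByDepth(std::vector<Proxy>& proxies) {
    for (std::size_t i = 1; i < proxies.size(); ++i) {
        Proxy key = proxies[i];
        std::size_t j = i;
        while (j > 0 && proxies[j - 1].depth > key.depth) {
            proxies[j] = proxies[j - 1];
            --j;
        }
        proxies[j] = key;
    }
}
```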


It's job is also to handle the transformation hierarchies of objects and the camera.

This is not the scene manager’s job; the scene graph is an implicit nature of the actors in the scene.
Actors themselves can be parents and children of other actors. Actors themselves manage how a parent transform affects their own transforms, etc.

The scene manager will issue a “pre-update” call to each actor, which actors can use to calculate their final transforms taking their local transforms with their parents’ transforms, but to the scene manager this call is a black box. The actors could use it to make spaghetti for all the scene manager cares.
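A minimal sketch of that pre-update pattern, using a plain 2D translation as a stand-in for a full 4x4 transform (the composition pattern is the same either way). The `Actor`, `compose`, and field names are invented; only the "pre-update" call itself comes from the description above.

```cpp
// Minimal stand-in for a transform: a 2D translation. A real engine
// would compose 4x4 matrices, but the pattern is identical.
struct Transform {
    float x = 0, y = 0;
};

Transform compose(const Transform& parent, const Transform& local) {
    return {parent.x + local.x, parent.y + local.y};
}

struct Actor {
    Transform local;   // relative to parent
    Transform world;   // final transform, computed in preUpdate()
    Actor* parent = nullptr;

    // The "pre-update" call the scene manager issues. To the manager
    // this is a black box; the actor itself combines its local
    // transform with its parent's already-computed world transform.
    void preUpdate() {
        world = parent ? compose(parent->world, local) : local;
    }
};
```

Note the ordering requirement this implies: a parent's `preUpdate()` must run before its children's, which is one reason to keep the scene array sorted by hierarchy depth.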



In addition to what has already been said, I would point out that there is no reason to restrict yourself to having only a single scene at a time.
A real-life example I had where two scenes were the best solution was a golf game I made: you spend most of your time on the 3D field, but for swinging the club the golf course is shrunk to fit a certain area of the screen, and a 3D guy swinging a club is overlaid on top of it with a different set of projection and view transforms.



Render-queues are a good way to sort objects for rendering inside a single scene, but if we are talking about overlaying the UI, debug text, etc., then you would really want layers.
Usually engines have a fixed and predefined set of layers, each intended to render a specific part of the scene.
Usually there are about 16 layers, but most of them are blank or “reserved” for future use.
An example of a simple set of layers would be:
Layer 0: Player.
Layer 3: Solid 3D objects.
Layer 5: Terrain.
Layer 7: Skybox.
Layer 8: Translucent 3D objects.
Layer 10: Post processing.
Layer 12: UI/HUD.
Layer 15: Debug text.


Layers are drawn in order, so later layers are drawn on top of earlier layers.
The way each layer is drawn is custom. The 3D layers would use render queues, but the terrain layer would use a system specific to terrain (chunks, GeoMipmaps, etc.), and the skybox layer would just be a single standard draw call.
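The fixed layer table described above could be sketched like this. The 16-slot count and the layer indices come from the example list; the `Renderer` type and the use of `std::function` callbacks are invented for illustration (a real engine would likely dispatch to dedicated subsystems rather than closures).

```cpp
#include <array>
#include <functional>

// Hypothetical fixed layer table: 16 slots, most left blank or
// reserved, drawn strictly in index order so later layers paint
// over earlier ones.
constexpr std::size_t NUM_LAYERS = 16;

struct Renderer {
    // Each layer draws however it likes (render queue, terrain
    // system, a single skybox call) -- the renderer only fixes
    // the order in which layers run.
    std::array<std::function<void()>, NUM_LAYERS> layers{};

    void renderFrame() {
        for (std::size_t i = 0; i < NUM_LAYERS; ++i)
            if (layers[i]) layers[i]();
    }
};
```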



  • Redrawing the entire scene from scratch is very slow
Too bad. That is what you do. If it is slow, it is probably because you are using glBegin() etc., and are otherwise non-optimal.
The solution isn’t to try redrawing only what has changed, the solution is to fix the actual bottlenecks.

The idea is that the scene system would accept requests to have things drawn, either once (during the next frame), or until the drawn object's lifetime runs out (i.e. next N frames).

Issue calls on a per-frame basis. Never assume anything about any future Nth frame. You don’t know when the player is going to hit the button to go to the next stage and all your assets need to be unloaded or exchanged.


L. Spiro

Thanks for your replies everyone. Yes, it seems I am making a render queue. I think having several queues as "layers" is a good idea. Sorting a queue based on distance/translucency is also a good idea.

As for positioning, I'm thinking of just looping through everything with a .render() function and calling it. Within that function the renderable would push requests onto the render queue of the form "draw a tri-mesh at myPos+toLocal(offset)", and possibly call .render for its children if it has any. This would be done every frame.

The render queue would then process all the requests that came from the regular render loop, and those from elsewhere in the program, and figure out how to actually draw the primitives (lines, triangles, triangle meshes).

Space partitioning and occlusion would probably happen in the render loop, not the render queue, because those tasks depend on object logic, whereas all a render queue sees are primitives and render settings (the renderable would decide whether to call its children/push requests, or quit early if AABB is not in view)
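The "quit early if AABB is not in view" check could be sketched as below. A 2D rectangle overlap test stands in for a real 3D AABB-vs-frustum test to keep the example short; the function names are invented.

```cpp
// Axis-aligned bounding box in 2D. A real engine would test a 3D AABB
// against the camera frustum, but the early-out pattern is the same.
struct AABB {
    float minX, minY, maxX, maxY;
};

// True when the two boxes overlap on both axes.
bool intersects(const AABB& a, const AABB& b) {
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minY <= b.maxY && a.maxY >= b.minY;
}

// The renderable decides in the render loop whether to push requests:
// if its bounds miss the view, it (and all of its children) are
// skipped, and the render queue never sees them.
bool shouldRender(const AABB& bounds, const AABB& view) {
    return intersects(bounds, view);
}
```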

I wonder if there should be a "camera object", or if the perspective/ortho projection and screen size should be the concerns of the render queue.

Also, I'm not sure what it means to "move the world, not the camera". Isn't it all the same? I translate the projection matrix to look from a different point, or I translate the model-view matrix to move my shapes before drawing them, in the end the two get multiplied anyway. I'm not doing any per-vertex operations most frames either, so what does that saying even mean?
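One way to see the "move the world" idea concretely: there is no camera in the math, only a view transform that is the *inverse* of where you imagine the camera to be. For a pure translation the inverse is just negation, which a minimal sketch (names invented) can demonstrate:

```cpp
struct Vec3 { float x, y, z; };

// Express a world-space point in view (camera) space: subtract the
// camera position, i.e. translate the whole world by -cameraPos.
// A "camera" at (4,0,0) looking at a point at (10,0,0) is the same
// as a camera fixed at the origin with that point moved to (6,0,0).
Vec3 worldToView(const Vec3& p, const Vec3& cameraPos) {
    return {p.x - cameraPos.x, p.y - cameraPos.y, p.z - cameraPos.z};
}
```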


This is not the scene manager’s job; the scene graph is an implicit nature of the actors in the scene.
Actors themselves can be parents and children of other actors. Actors themselves manage how a parent transform affects their own transforms, etc.


To be fair, that presupposes that all engines treat transforms as an innate part of their game objects and do not have a completely independent scene hierarchy.

Or that the engine doesn't _store_ the transform in a separate data structure and then pretend like it's part of the game object itself.

I'm actually not at all a fan of the method you outline because it implies an awful lot of virtual function calls in hierarchy order which is ungood for CPUs' branch predictors. Also, I can't actually think of a real use case for it.

It also implies a lot of synchronization overhead between objects and the scene hierarchy. This is especially a huge problem for multi-threaded renderers. You really want those transforms to be outside objects so that you can snapshot the transform hierarchy for later rendering in parallel with game updates, or for handing off to a debugging system, or for interpolating between physics ticks, or so on.
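The snapshot idea can be sketched as follows, assuming transforms live in a flat array outside the game objects (the `TransformStore` type and field names are invented). Because the data is contiguous, a snapshot is a single copy, and a render thread can consume frame N's snapshot while the game thread keeps mutating the live array for frame N+1.

```cpp
#include <vector>

// Transforms stored outside the game objects, in a flat array indexed
// by object id. Snapshotting the whole hierarchy for the renderer is
// then one contiguous copy rather than a walk over scattered objects.
struct Transform { float x = 0, y = 0, z = 0; };

struct TransformStore {
    std::vector<Transform> live;      // written by game updates
    std::vector<Transform> snapshot;  // read by the renderer

    // In a real multi-threaded engine this would be synchronized
    // (e.g. double-buffered); here it is just a copy.
    void takeSnapshot() { snapshot = live; }
};
```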

In addition to what has already been said, I would point out that there is no reason to restrict yourself to having only a single scene at a time.


Very much this. I'd expand the sentiment to the need to support completely separate "spaces" with their own scenes, own physics world, and own copies of anything spatial. Which isn't how I'd implement the given golf game example, but still. :)

Engines without such a feature suck. Like, hard. Yes, that includes most of the major common ones in use today. Seriously, wtf guys. Have spaces. The end.

I wonder if there should be a "camera object", or if the perspective/ortho projection and screen size should be the concerns of the render queue.


A camera object is very common. They get put into the scene graph / hierarchy like anything else, meaning that you can manipulate the camera position and orientation by manipulating other objects (e.g., to keep the camera attached to a player). It also simplifies advanced features like movie playback, because the camera is just a transform and can be animated along splines and the like in a completely identical fashion to all other objects, rather than needing special treatment.

Also, I'm not sure what it means to "move the world, not the camera". Isn't it all the same?


You've got the math right. Changing a camera matrix "moves" the world into view space. Though I'm also unsure what that has to do with this conversation.
