Unity Scene structures and how Unity approaches it


I am intrigued by the way Unity designed its scene structure. There are GameObjects, which are placed in a hierarchy. The hierarchy takes care of transformation hierarchies and ownership of subnodes. Everything else is done by attaching components to GameObjects. This corresponds to the multi-view approach presented in http://www.realityprime.com/articles/scenegraphs-past-present-and-future . I can imagine that internally, whenever a component is attached/detached, subsystems like the renderer and the physics engine are notified, and each reacts accordingly if the component is of interest to it.
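To make the "notify on attach" idea concrete, here is a minimal sketch (not Unity's actual implementation; all names are illustrative) of a GameObject that informs subsystems when a component is attached, with each subsystem reacting only to component types it cares about:

```python
# Hypothetical sketch of component attachment with subsystem notification.
# None of these classes mirror Unity's real API; they just illustrate the idea.

class MeshRenderer:
    """Marker component: 'this object should be drawn'."""
    pass

class GameObject:
    def __init__(self, name, subsystems):
        self.name = name
        self.components = {}
        self.subsystems = subsystems

    def add_component(self, component):
        self.components[type(component).__name__] = component
        # Notify every subsystem; each decides whether it cares.
        for subsystem in self.subsystems:
            subsystem.on_component_added(self, component)

class Renderer:
    def __init__(self):
        self.visible = []

    def on_component_added(self, game_object, component):
        # The renderer only reacts to components it understands.
        if isinstance(component, MeshRenderer):
            self.visible.append(game_object)

renderer = Renderer()
go = GameObject("player", [renderer])
go.add_component(MeshRenderer())
print(len(renderer.visible))  # prints 1
```

The point of the sketch is that the subsystem does no interpretation: it only checks the component's type, which matches the "look if there is a component X attached" behavior described above.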

I've had a similar idea for a while, but I think the Unity approach is better. I thought of entities in a scene. Entities are some sort of property table, like JSON objects. Each subsystem then interprets the entities. For example, a "material" value is of interest to the renderer, but not to the network code. A "rotation" value is of interest to the renderer and to the physics engine, but not to the sound system, etc. In Unity, subsystems do not interpret; they just check whether a component X is attached to GameObject Y.
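The "each subsystem interprets the entity" idea can be sketched in a few lines; the property names and key sets here are assumptions chosen to match the examples in the paragraph:

```python
# Sketch of the "entity as property table" idea: each subsystem extracts
# only the keys it understands. Key names are illustrative assumptions.

entity = {
    "material": "brick",
    "rotation": (0.0, 90.0, 0.0),
    "volume": 0.8,
}

RENDERER_KEYS = {"material", "rotation"}
SOUND_KEYS = {"volume"}

def view_for(entity, keys):
    # A subsystem's "interpretation" is just the subset it cares about.
    return {k: v for k, v in entity.items() if k in keys}

renderer_view = view_for(entity, RENDERER_KEYS)
sound_view = view_for(entity, SOUND_KEYS)
```

Note that the interpretation step (filtering and converting properties) is exactly the per-subsystem work that Unity's "is component X attached?" check avoids.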

While I find the entity approach perhaps a bit more elegant, I think the Unity one is much more practical. What do you think?

I'm quite interested in this; I like how they solved some issues here and there.
For example, each entity makes sense with only a single transform, so it definitely makes sense to provide a dedicated transform variable. I suppose the engine actively looks for those components, and I can see how providing an explicit field could make lookups faster.
I'm not entirely sure what the point is of having camera or even particleEmitter. It looks to me as if those components are not widespread enough to justify their presence in the base class (in the case of camera) or not important enough (in the case of particleEmitter: how would you attach more than one emitter?).

I'd say I partly agree with you. The "entity" approach does feel "more elegant". The problem is that it really does not scale well.
There are a few threads about that; I can point out one I started some time ago.
What I can say, with respect to my limited experience with my own system, is that one can indeed get quite some mileage out of pure "entity" systems. Unless there's "magic", such as automatic syncing of the transform and the collider, there's going to be some glue code to keep the various components in sync. This glue is not so bad after all; to a certain degree, I like having the relationships explicitly documented in the code.

EDIT: pulled out comments about "scene graphs".

Note that I am not talking about entity trees. A common update() method is a terrible idea, for the reasons presented in the thread you linked. Instead, I see the entity as a collection of values. The whole idea came to me while I was thinking about MVC in games. With this system, subsystems would be notified of changes in the entity. For example, the input subsystem changes the position value, and then the renderer and the physics engine are notified of this change. The list of entities would be the model, the subsystems the views, and the controller would be split between the scene (which contains the entities) and the subsystems.

The difference between a common base class and my idea is that in my idea, subsystems can still (and typically do) have their own representation of the entity. For example, the renderer has its own objects containing vertex buffers, etc. The entity acts as a nexus, a central point for synchronizing all of these representations. However, as I said, it means that subsystems have to interpret the information in the entity, and this can cost performance. Another problem is flexibility: how well does this really map to real-world problems?

Also, this idea might fall apart if the representations cannot be kept cleanly separate. What about bounding volume hierarchies, for example? They might be of interest to both the renderer and the physics engine (and perhaps the sound system as well, to cull audio emitters).

To summarize my idea:
  • entities are simply JSON objects
  • each subsystem interprets this JSON object in its own way, creating internal representations of the entity
  • the MVC pattern is applied, with the entity acting as the model; if a value in the entity changes, listening subsystems are notified of the change
  • all subsystems are notified about newly created and destroyed entities

Advantages:
  • weak coupling
  • it can work in a multithreaded fashion, if the notifications to the listeners are done in a thread-safe manner
  • it is easy to add/remove subsystems
  • there is no central update method; subsystems run at their own speed

Drawbacks:
  • possible impact on performance, since the subsystems need to interpret the JSON data
  • sharing data between subsystems is not easy, which is relevant for example for BVHs
  • it is uncertain how well this maps to real-world scenarios
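The summary above can be sketched as a minimal observer-style entity (thread safety and entity creation/destruction events omitted for brevity; all names are assumptions, not an existing API):

```python
# Minimal sketch of the proposed MVC-style entity: setting a value notifies
# registered listener subsystems, which keep their own representations.

class Entity:
    def __init__(self):
        self.values = {}
        self.listeners = []

    def set(self, key, value):
        self.values[key] = value
        # The entity is the model; listeners are the views.
        for listener in self.listeners:
            listener.on_changed(self, key, value)

class PhysicsSystem:
    def __init__(self):
        self.changes = []

    def on_changed(self, entity, key, value):
        # Physics interprets only the spatial properties it cares about.
        if key in ("position", "rotation"):
            self.changes.append((key, value))

physics = PhysicsSystem()
player = Entity()
player.listeners.append(physics)
player.set("position", (1.0, 0.0, 0.0))
player.set("material", "brick")  # ignored by the physics listener
```

The interpretation cost mentioned in the drawbacks shows up in on_changed: every listener must inspect every change, even changes it ultimately discards.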

Basically you're taking the component model to its lowest common denominator: each value is a component.
I'm not quite sure you're going to make it work. It's too low-level to make sense to me (also consider that you'll need to figure out a way to avoid name clashes).
So to a certain degree, what you plan to do is a component model, and it will therefore have the advantages of the component model (in principle)... except for figuring out the relationships between the values. In a standard component model, those would be described in a component's documentation, but since your components are values... uhm... it looks a bit quirky to me. What you'll need to do is define a set of hard-coded rules that will have to be followed. Want to have both a graphics and a physics representation? The graphics representation will have to use a gra_xform property, while the physics will go for phys_xform. Need to sync those? Two routes:
  1. Add magic. As the rules grow more and more complicated, the benefit of separation in the component model is, IMHO, quickly lost.
  2. Add callbacks. I'd be scared of defining those: with no sub-components to refer to, I fear their number and definition could go out of control. But that's only my opinion. Of course we could define them with respect to the modified variable (by pointer?); whether this is possible would be language dependent, I guess, or require quite some wrapping. I'd be careful: none of this is even remotely as elegant as just having sub-components with their own properties.
I'll try to be more concise. You're stripping the component model down to its basics: property == component. By doing so, you go so low-level that I fear you might lose the benefits of components. At the end of the day, this takes a component model and degenerates it so much that it's basically back to an entity model.
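Route 2 can be sketched to show where the glue ends up living. The gra_xform/phys_xform names follow the example in this post and are purely illustrative; the callback machinery is a hypothetical sketch, not an existing framework:

```python
# Sketch of "add callbacks": keeping a graphics transform in sync with a
# physics transform via a per-property callback. Note that the relationship
# between the two values lives only in this registration, which is the
# documentation/coupling problem discussed in the thread.

def make_sync(entity, source_key, target_key):
    def on_changed(changed_key, value):
        if changed_key == source_key:
            entity[target_key] = value
    return on_changed

entity = {"gra_xform": (0, 0, 0), "phys_xform": (0, 0, 0)}
callbacks = [make_sync(entity, "phys_xform", "gra_xform")]

def set_value(entity, key, value):
    entity[key] = value
    for cb in callbacks:
        cb(key, value)

# Physics moves the object; the callback propagates it to graphics.
set_value(entity, "phys_xform", (5, 0, 0))
```

With one callback per synced property pair, the count grows with the number of properties rather than the number of components, which is the scaling concern raised here.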

Entity: "does entity X have property xform?" If yes, position it somewhere in the world. Entity properties --> behavior.
Component: "update component X, signal changes." The entity's behavior emerges from component behavior. The implication runs the other way around.
Think about why OOP is successful: stuff that is correlated goes in the same structure. The component model extends this by allowing composition.

I really appreciate JSON, but using it for a runtime representation... I'd be careful.

Weak coupling is only apparent, for the reasons above. Some system would still need to put those values together; the coupling is just moved out of the system, but it's still there, and I see no way to avoid documentation such as "use phys_xform to place the collision volume in the right place". The coupling is moved to a higher level, yes, but it's still there.

Multithreading capability is debatable. I suppose you could register a callback for each property (we're easily talking about hundreds of callbacks, if not thousands), but even this is only part of the whole picture: physics systems, for example, will tick at 20-30 Hz no matter what. Nonetheless, processor dispatch might eventually degenerate this into a fully sequential model (because you don't really want to use atomics for everything, do you?).

I'm sorry to write this but I think it would not scale well.
