SephireX

Engine Design Questions


Hi,

I have a few questions regarding best practice for engine design.

1. Should smart pointers be used at a low level in the engine such as in the renderer or do they cause a significant decrease in performance?

2. Should the physics engine be wrapped in an abstraction layer to allow for other physics engines? For example this would allow a change of physics engine later. Although the wrapper would likely have to change to facilitate the new one.

3. The Banshee3D engine is an example of an engine that defines a common interface for physics, sound, renderer, and rendering API, and creates the implementations as plugins. This seems like a nice, flexible approach compared to having the implementations as part of the main codebase. Are there any downsides to doing this? Of course, this engine is open source and intended to be general purpose. I think the author's idea was to let users switch more easily to different third-party libraries.

 

Edited by SephireX

Quote

Should smart pointers be used at a low level in the engine such as in the renderer or do they cause a significant decrease in performance?

In case you haven't already seen it, you might take a look at this thread. There's some interesting discussion there about object lifetime and ownership that might be relevant.

As for your specific question, it seems like impact on performance would depend on things like usage patterns and smart pointer implementation. If used properly (e.g. not transferring or sharing ownership unnecessarily, especially in performance-critical code), it seems at least possible that smart pointer usage wouldn't negatively impact performance to any significant degree. But, I'd check out the thread mentioned above, as it discusses this issue and some other related issues (and also includes discussion of some alternatives to smart pointers).

18 hours ago, SephireX said:

Should smart pointers be used at a low level in the engine such as in the renderer or do they cause a significant decrease in performance?

This depends on the level of memory management your engine works at. Big engines tend to manage their objects without smart/reference-counted pointers, or at least implement their own, because anything touching memory is runtime critical. The CPU can execute hundreds of millions of instructions per second, but memory and file access go through systems that sit apart from the CPU; anything physically far from the CPU is slow by definition.

The impact is even bigger for file access because of disk access times, cache misses, and everything else that happens along the way.

I work mostly with plain pointers at my API level because they're easy to manipulate; I can, for example, reinterpret an integer as a 4-byte array without any overhead. My memory model is based on allocators that track the number and size of allocations requested and raise an assertion for the programmer when leaks occur. Behind such an allocator may be the OS malloc or a chunk of memory allocated statically or dynamically during initialization.

Memory leaks are only one side of the problem; fragmentation is the other. Imagine allocating two 4-byte integers and one 8-byte long, giving a 16-byte layout. Release the long and allocate two more integers and everything is fine. But release the long and allocate a 2-byte short instead, and a 6-byte gap remains. If that gap sits between two integers, you can no longer fit a long there. The longer your engine/game runs, the more fragmented its memory becomes.

This is why engines tend to manage memory themselves, with different approaches: some have a garbage collector, others manage different-sized objects in different regions of a huge block they allocated up front.

What I want to say is that smart pointers (the C++ standard library ones, for example) will contribute to memory fragmentation, because by default those classes do nothing cleverer than malloc/new under the hood.

 

18 hours ago, SephireX said:

Should the physics engine be wrapped in an abstraction layer to allow for other physics engines? For example this would allow a change of physics engine later. Although the wrapper would likely have to change to facilitate the new one

This is a question you have to ask when developing for multiple platforms and/or closed source. Normally, a non-plugin approach has less management overhead and may be slightly faster (a few extra instructions per call, so nothing dramatic). I prefer a modular approach: a separate project per topic (the physics engine being one such topic), linked together in C++ as static libraries. Compile-time plugins, if you want to call them that.

The classic plugin system loads a library dynamically at runtime and lets the OS connect it into your running application. This can have a performance impact too.

 

18 hours ago, SephireX said:

The Banshee3D engine is an example of an engine that defines a common interface for physics, sound, renderer, rendering api and creates the implementations as plugins. This seems like a nice flexible approach instead of having the implementations as part of the main codebase. Are there any downsides to doing this?

OGRE is another example: it aims to be multi-platform, so it loads a DirectX, OpenGL, or Vulkan renderer at startup and then passes everything through that renderer.

The main question is: is the management overhead worth it if you use the same renderer/sound/physics engine 99% of the time? What are the benefits of a pluggable system?

The only reasons I see for an engine to support this, apart from extending functionality, are either in the editor, to better support the workflow between editing and production code, or when your engine is closed source but you want users to adapt it to different projects, for example replacing the 3D render pipeline with a 2D one.

A multi-platform engine doesn't benefit much from plugins over conditional compilation, and plugins make debugging your code very hard, especially when they affect each other.

Engines like Unreal, Lumberyard, and Urho (and mine) are built from source nowadays, so you have a custom tooling- or make-system-driven environment that selects the best solution for each platform you build for. They compile different modules into the components of the engine, and customization is a lot easier than struggling with plugins.

But don't get me wrong: they also define how their APIs expect to be used, so you'll have an interface anyway.


Thanks for the replies. I know it has been a while since I posted this but I was reading back over your answers today and had a few questions.

If a smart ptr is created from an existing raw pointer where its memory was not allocated using "new" but instead with a custom allocator, would that be okay? 

The reason I asked about the plugins issue is that I was thinking of forking the Banshee foundation framework, which is MIT-licensed and is just the engine part of the code. I think it's nicely and cleanly written and would save me a lot of time over writing everything from scratch. One advantage of the plugin system and dynamic linking is that you can choose the rendering API when starting the application instead of compiling two versions. If I run my Unreal application on Linux, I can pass -vulkan or -opengl as an argument and it will run with either. Though I do think wrapping PhysX and making the renderer pluggable is over-engineering.

The engine uses BSL, which is just HLSL with a few additions, and it can compile this to HLSL or GLSL. Now that the SPIRV-Cross compiler is available and can go between GLSL, HLSL, MSL, and SPIR-V, is it better to just use that?

 

Edited by SephireX

3 hours ago, SephireX said:

If a smart ptr is created from an existing raw pointer where its memory was not allocated using "new" but instead with a custom allocator, would that be okay? 

The short answer (I would think) is yes, provided of course that the smart pointer is implemented and/or configured appropriately. If that doesn't answer the question adequately, maybe you could elaborate a little on what you have in mind.

On 2/13/2019 at 4:55 PM, SephireX said:

Hi,

I have a few questions regarding best practice for engine design.

1. Should smart pointers be used at a low level in the engine such as in the renderer or do they cause a significant decrease in performance?

I use smart pointers in many places. If you don't use them, you still need to manage object lifetime somehow, so there will be performance considerations either way. That said, you shouldn't use them everywhere. Smart pointers designate ownership. There is nothing wrong with using smart pointers in your main data structures and passing around raw pointers for temporary use. Another strategy is to pass your smart pointers by reference. What you are trying to avoid here is the reference count increments, decrements, and checks. Keep in mind that unique_ptr is basically a raw pointer, so there is no performance hit; use them freely.

However, shared_ptr is a different story. There is a separate memory control block. If you use shared_ptr, use make_shared where possible; this combines the allocation of the object and the control block. Better yet, don't use shared_ptr at all. While it is a versatile implementation, it has some major drawbacks. First, the control block contains an extra reference count for weak_ptr references whether you need it or not. That's not so bad. Much worse is the fact that the pointer itself is typically TWICE the size of a raw pointer: one raw pointer for the control block and one for the object itself. You can write a small test program to see whether this is true in your standard library implementation.

In any case, if you are using a lot of them it's a huge memory hog, which of course has cache implications. If your design allows a standard base class, it's easy to put the reference count there and do your own smart pointer implementation. This is not really that difficult; you can do it in a few minutes if you know how.

If you want to get fancy, you can work smart pointers into a custom heap implementation. The pointers are aligned to some standard byte alignment, so you can reduce their size: for instance, 8-byte-aligned 32-bit heap pointers into a heap of up to 32 GB. Again you have cut your pointer size in half on a 64-bit machine. The two drawbacks are that the heap must be available in order to dereference your objects and that you are limited by the heap size; the latter is partially offset by the fact that you can have multiple heaps. Implementation is a bit trickier, but the memory savings can be drastic depending on what you are doing. While you're at it, you can do slab allocation and make a super-fast heap.
 

Quote

Should the physics engine be wrapped in an abstraction layer to allow for other physics engines? For example this would allow a change of physics engine later. Although the wrapper would likely have to change to facilitate the new one.

 

I don't, but then I'm doing procedural generation. I build geometry directly into an octree and do collision calculations right there, mainly because I'm doing JIT (just-in-time) streaming terrain. For a more typical use case, though, I imagine this is a reasonable option. I don't really have a strong opinion on this one; it's a design decision.

Edited by Gnollrunner

On 2/13/2019 at 8:55 AM, SephireX said:

1. Should smart pointers be used at a low level in the engine such as in the renderer or do they cause a significant decrease in performance?

Probably not, although using scoped smart pointers to own large buffers that you treat as memory pools is not the worst thing in the world. It's not even about a "decrease in performance" necessarily, it has more to do with the way smart pointers make you think about the resources in your engine. Low level code demands flexibility in how you store your data. You may want a traditional array of structures, but you may want to organize your data as a structure of arrays. You might create a data layout specifically for simd operations. There is another consideration, and that is if you are using C++, and the std library smart pointers like unique_ptr, shared_ptr, weak_ptr, you are introducing templates all over your code, asking your compiler to do a ton of extra work. This coding style has a major, MAJOR negative impact on compile times. I'm talking differences like 3 seconds vs. 10 minutes... it's orders of magnitude. That being said, smart pointers can be an OK thing at higher levels in your engine code, though I now prefer to avoid them personally (it wasn't always this way for me).

 

On 2/13/2019 at 8:55 AM, SephireX said:

2. Should the physics engine be wrapped in an abstraction layer to allow for other physics engines? For example this would allow a change of physics engine later. Although the wrapper would likely have to change to facilitate the new one.

Yes, you should have an abstraction layer, and it should not have to change to facilitate a new physics engine, or you have made the wrong abstraction. You need to think about your abstraction layer from the other direction: it should start from the game side, not be thought of as a wrapper for the physics engine. It should encapsulate the operations the game actually needs the physics engine to perform. Even if you don't plan to swap out the physics engine later, a clean boundary between the physics library and the game code is still crucial. A new physics engine implementation would just re-implement those operations using its own internal types and APIs. Backwards thinking is very prevalent in engine architecture -- you see it a lot with renderers that try to do OpenGL, DirectX, etc. -- and it leads to brittle, pointless abstractions that add work and complexity without any benefit. Wrappers do not require OOP interfaces, virtual functions, and so on; a wrapper can be a simple set of procedures and/or memory buffers.


@y2kiah

The engine API would expose only physics components and nothing about the physics engine or a wrapper. The components are themselves a wrapper for the user, written to suit the game programmer. My question is: should the engine code talk to the physics engine directly or through a physics engine wrapper, and why?

  1. No, don't use smart pointers in your API. This is a beginner mistake in terms of API design.
  2. I don't recommend wrapping your physics. Creating a good physics wrapper is quite difficult, and you'll end up wasting time unless you make it fairly high level; higher-level wrappers increase your chance of success. The worst case is a low-level wrapper that shrink-wraps the physics API you are trying to hide and cannot generalize to other physics engines. That is much worse than simply using the physics API directly.
  3. Yes, there are downsides to creating higher level wrappers. The biggest downside is these wrappers are never going to be strong representations of the underlying API without a lot of competence going into the implementation and design. Typically pieces of the underlying technology break off and disappear, such as error reporting or special case features only available for a certain technology but not others.

In my opinion picking a single good third party technology is much more important than trying to create an abstraction layer over them. Typically people try making abstraction layers without the skill to make a good one, and end up wrapping poorly chosen libraries under poorly thought through abstraction layers, resulting in a terrible game engine.

If you wanted an example of a good engine implementation with wisely chosen 3rd party libraries I suggest looking into the source and design of Love2D.

Edited by Randy Gaul

