Engine Core Features


So here again with another opinion/discussion topic, this time about which features normally belong in a game engine's core system, from the implementation point of view. Having invested a huge amount of time writing my own game engine project, and even more researching for it, I've found as many differences between engines as there are coding guidelines. Open source engines (Urho3D, Lumberyard) and those that offer their source code (Unreal, CryEngine, Phyre) in particular have changed over the years depending on current technical, coding, and hardware standards. They all contain third-party code and libraries that are kept apart from the core system but rely on it.

Now I'm at a point where I want to restructure/clean up my project(s): removing obsolete code, integrating prototyping code, and reorganizing the growing code base into a cleaner, better-managed file structure. The question that sooner or later had to emerge is which parts of the code base should be core features, in order to keep the core system clean and flexible. Again, reading many cross references pointed out some similarities but also huge differences (like Unreal's hashing/encryption code being in core), so I made a list of important things any game needs (I left out platform specifics and audio/renderer here because they are platform dependent and should stay in their own module/system):

  • Allocator (memory management)
  • Math (Vector, Matrix, Quaternion ...)
  • Threading (threads, locks)
  • Container (apart from the STL, Queue, Buffers, certain types of Array ...)
  • String (management, encoding, utils)
  • Input (seen a game without input?)
  • IO (reading/writing files and manage filesystem structure)
  • Type Handling

And a list of things that are optional and may or may not belong inside the core code:

  • Logging (because it is a development tool)
  • Profiler (see logging)
  • Serialization
  • Plugins
  • Events

In my opinion, logging and profiling code is heavily used during development but mostly stripped out when deploying a release version; not all games rely on events rather than other interaction models (like polling or fixed function loops), and the same goes for plugins and serializing/deserializing data. So what do you think: should anything from the second list move into the core system as a must-have, or is something missing from the must-have list entirely? And what else should go outside as its own module/system (into its own subfolder inside the engine project's code base)?

Curious to read your opinions ;)

Edited by Shaarigan


Cross platform GPU wrapper (preferably low level in the core module - with models/scenes/materials as a higher level module), Audio, 3D collision and rigid body dynamics are all just as core as input (for most games). Though yes these can be modules that live alongside the core... But in that case, input should be too.

A networking module needs friendly sockets, HTTP, SSL and encryption (some platforms require all traffic to be encrypted).

Algorithms for hashing, etc., make sense in the core; e.g. the hash map will reference them!

I don't have a string class in my engine; it's a good way to discourage people from using strings :P 

Under threading, I'd have a thread-pool and job graph, not individual threads and locks.

Core game IO should be a lot more limited than generic IO. Loading of named assets, and reading/writing profile / savegame files (no general OS disk access). These are really two different things - not two uses of one IO library. e.g. assets are always streamed asynchronously from known formats and never written to. Savegames are always stored in some OS-dependent path such as "my documents"/appdata/home/etc.

Profiling, logging, assertions are mandatory development tools required to do your job, so should be in the core. Same goes for a good memory manager with debugging features. I'm leaning more and more towards having something like ImGui and a line rendering API as core, simply for development/debugging purposes. 

If you plan on using a Scripting language (or even reloadable C++), a binding/reflection system makes sense as it will be pervasive. You can also use these bindings to generate serialisation/visualisation functions.


Forgot the memory management/allocator stuff, so I added it to the list :P I'm about to enforce strict memory management at all levels of the engine, to at least throw an assertion failure around your head when anything is left on the heap at scope exit.
 

59 minutes ago, Hodgman said:

Cross platform GPU wrapper (preferably low level in the core module - with models/scenes/materials as a higher level module)

I think you mean the graphics API (DX, GL, VK, whatever) here?

1 hour ago, Hodgman said:

3D collision and rigid body dynamics

That highly depends on the game; a puzzle game or 2D platformer might not need any of these, so I see them as an optional external module instead of core. I'm less sure about the input system here (which is one reason I wrote this post), but I think you're right that it belongs one level above the core rather than directly in it.

Networking should stay external, as not every game needs it, but as I have seen, Unreal for example hosts its HTTP stuff inside the core module. I personally would avoid this and instead keep networking low level on the one hand (to also have the possibility to write server code) while providing a more high-level layer for the game client.

Same thoughts for the crypto module. It consists of a lot of static functions utilizing AES, ECDSA, and XTEA, as well as different hashing functions (SHA, RIPEMD, and MD5 as a short hash), while the core system contains some text-related 32/64 bit hashing functions used, for example, to identify names.

I agree with your arguments about strings and IO in general. Assets should be converted to packages of engine-preprocessed data, and savegames should be serialized/deserialized in order to keep a level of error proofing, but these systems may still need to interact with disk/network/cloud storage themselves, so I kept this in my list ;) Nearly everything in my code already uses streams/buffered reading and static shared memory (aka memory mapped files) as needed, and passing a stream interface is preferable for any data processing function.

Under threading I currently have threads and locks as utility code wrapping the underlying OS APIs, while my JobPool and event system live in their own module, but I see that this might go into core too. What do you think about event-driven engines vs. polling/update-driven ones?

Profiler and logging are currently their own modules, as I thought of having them in the build during development but stripping them out when shipping. The profiler may be a point of discussion, but I now see logging, at least for support purposes, as a core feature, so maybe I'll move it there. Don't know about the profiler yet :)

Reflection is currently realized as an external module named MetaType that is intended to provide meta information, C#-like function invocation (for runtime class manipulation), and some serialization frontends. As I have seen this in core in various projects, I will potentially move it to core level, at least to support scripting and maybe editor UI interaction from the C# level.


Hardware/OS specific items should still have an interface layer in your core, such that it stays consistent no matter how you implement the backend. In that regard, you really should add input device handling (keyboard, mouse & joystick) and window management items. Depending on goals, a lot of folks roll window management into the rendering engine, but I tend to think it should be separated for quite a few reasons. Just my $0.02.



