Help for developing Game Engine Editor

Started by
9 comments, last by Vilem Otte 4 years, 4 months ago

Which UI library is the best? Qt, wxWidgets, ...?

I want to write it in C++; the engine is C++ and OpenGL.

Which ones do Godot, Blender, and Unity use?

Or are they using pure OpenGL?

If so, what about performance?


I've seen two approaches.

The first is to use a native/cross-platform GUI toolkit: Win32, WinForms, GTK, Qt, etc. With this, the game engine only draws into an output window embedded in the main editor window.

  • Pros: Complex UI is handled by the operating system/GUI toolkit.
  • Cons: Some of the editor code is tied to the GUI toolkit while the rest lives in the engine, which can make issues hard to track down.

The second is to use the engine to draw everything. This is the approach many game engines take nowadays, including Unity, Unreal, Godot, etc.

  • Pro: Editor code runs on the game engine, so if there's a problem, it's most likely in the engine.
  • Con: The engine needs to support complex GUI layouts, otherwise it will be difficult to write an editor.

Personally, I'd go with the first one if I were just starting to write the game engine. Once the engine is mature enough and supports more complex GUI layouts, I might move to the second.

http://9tawan.net/en/

Unreal is not entirely correct as an example; they use (or have used) .NET/CLI to code their editor, and internally they have their own UI library for the editor which is different from the one used in games. You can look into their source code, which they made available to Unreal devs and which is also on GitHub. However, it is not OpenGL but rather the renderer specific to the target platform; DirectX on Windows.

Unity is sadly closed source, but as far as I know, the editor is nothing more than an extended Unity Player. It instantiates a Unity Player process in the background and hooks into it to render the UI and so on. Rendering here is driven by the underlying graphics API; DirectX on Windows, for example. The editor code is C#, which means everything inside the editor is defined in a hidden C# assembly (which you can also look into on GitHub), along with the public GUI and EditorGUI APIs it exposes. It is, for example, possible to use C# reflection to hook into and customize the general Unity Editor UI.

I did this to exchange the Project Browser for a custom window that can still use the folder/asset tree alongside the asset view.

I don't know much about Godot (their code base looks horrible to me), but Urho3D, for example, also uses the engine's own rendering for its editor, with AngelScript on top to define the editor UI.

My targeted approach combines two things: a C#-to-C++ cross-language editor, and HTML with a custom GDI renderer to define the UI. The reason is simple: C# is a great language for tools and already provides a wide set of functionality for general application coding. You don't have to take care of memory, since the GC does that for you, so you don't have to define and maintain two different kinds of memory management in your game engine. C# (at least in WinForms) provides a window handle that can be passed to the engine code to instantiate a render context, for example in OpenGL, on any window or control you like, and GDI functions are really simple to use. Not to mention hot reloading when you change your engine's code at editor time.

The downside of this approach is that you need a marshalling layer between engine and editor code, but this can be auto-generated by a tool. You also expose only the functions you really need.

Using HTML/CSS to define your UI is an accepted approach in game development. There are already libraries that handle everything from parsing HTML5 and CSS3 to layout, and some even take over rendering entirely. Some are free to use, some are commercial, and even Unity has changed its UI model to a subset of HTML plus USS (Unity Style Sheets). It is simple to create UIs with HTML, test them in a browser of your choice, and once everything is correct, import them into a library or your own custom system built from the well-defined and clear HTML5 specs (don't waste your time with HTML4 and CSS2; they are obsolete).

My opinion on the topic is simple: take the editor for what it is, a tool, and keep your engine and game apart from it. Nobody gets sick from not being able to play the game inside the editor; it may even help people start thinking about what they are doing instead of mindlessly hitting the play button every minute.

Look up Dear ImGui. It's awesome, though it doesn't have everything you'll want (like drag and drop), and you might not be 100% happy with the performance. It's easy and fun to work with, though.

Video Game Programmer.
5 years in industry.

 

On 11/15/2019 at 6:21 AM, mr_tawan said:

I've seen two approaches.

The first is to use a native/cross-platform GUI toolkit: Win32, WinForms, GTK, Qt, etc. With this, the game engine only draws into an output window embedded in the main editor window.

  • Pros: Complex UI is handled by the operating system/GUI toolkit.
  • Cons: Some of the editor code is tied to the GUI toolkit while the rest lives in the engine, which can make issues hard to track down.

The second is to use the engine to draw everything. This is the approach many game engines take nowadays, including Unity, Unreal, Godot, etc.

  • Pro: Editor code runs on the game engine, so if there's a problem, it's most likely in the engine.
  • Con: The engine needs to support complex GUI layouts, otherwise it will be difficult to write an editor.

Personally, I'd go with the first one if I were just starting to write the game engine. Once the engine is mature enough and supports more complex GUI layouts, I might move to the second.

You mean creating the editor UI with the engine? Does that mean pure OpenGL or DirectX?

On 11/15/2019 at 8:00 AM, Shaarigan said:

 

Unity is sadly closed source but as far as I know, the editor is nothing else than an extended Unity Player

You mean the Unity editor is a kind of game made with the Unity engine?

Yes. Today's major engine editors work as modular systems allowing the loading and unloading of modules, effectively the same as a plugin to the game engine.

They start the game engine up, then load the editor as the main running module. At the editor program's request the engine can also load and unload your code as running modules, which allows you to play the game inside the editor.

When you run your game alone, it loads the engine up then loads your game as the main running module.

On 11/15/2019 at 11:33 PM, Net-Ninja said:

Look up Dear imgui. It's awesome. Though it doesn't have everything you'll want (like drag and drop)

@Net-Ninja It has had that for quite a while now (BeginDragDropSource, SetDragDropPayload, EndDragDropSource, BeginDragDropTarget, AcceptDragDropPayload, and EndDragDropTarget are the functions under the ImGui namespace you want to look at). Previously I had to simulate those functions on my own. I'm using them to drag & drop models into the scene, and also to drag & drop nodes within the scene graph.
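The functions named above fit together in a source/target pair inside the per-frame UI code. A minimal sketch of the pattern (assumes an initialized ImGui context; the payload type string and asset id are made up for illustration):

```cpp
// Source side: e.g. an entry in the asset browser.
ImGui::Button("cube.obj");
if (ImGui::BeginDragDropSource()) {
    int assetId = 42;  // hypothetical asset handle
    ImGui::SetDragDropPayload("ASSET_MODEL", &assetId, sizeof(assetId));
    ImGui::Text("Dragging cube.obj");  // preview shown under the cursor
    ImGui::EndDragDropSource();
}

// Target side: e.g. the scene view or a scene-graph node.
ImGui::Button("Scene");
if (ImGui::BeginDragDropTarget()) {
    // Only accept payloads whose type string matches.
    if (const ImGuiPayload* payload = ImGui::AcceptDragDropPayload("ASSET_MODEL")) {
        int droppedId = *static_cast<const int*>(payload->Data);
        // ... spawn the model with droppedId in the scene ...
        (void)droppedId;
    }
    ImGui::EndDragDropTarget();
}
```

The type string is what lets one target accept models while another accepts scene-graph nodes.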

@Repixel And an editor UI built with ImGui for a game engine can look, for example, like this:


Fig. 01 - Editor of my own game engine

Of course, it supports the majority of features lots of other engines do: in-editor play/pause/stop of the scene, load/save of the scene, undo/redo, and various components (actual components can be added on demand).

The editor is just a project under the engine. Although it directly outputs an executable, it could technically be just a DLL, as all projects share the same interface used for starting. The situation is similar for components: while the ones I'm using live directly in the engine core, you can add custom components as DLLs as long as they follow the same interface. The runtime viewer is another project under the same engine (again outputting an executable directly).

The same engine (and editor!) was used to build the scene for a small competition game here on GameDev.net, GROOM (shameless self-promotion):


Fig. 02 - GROOM, a simple game built with this engine

The game actually uses a completely different renderer (instead of the Direct3D 12 based one, a custom real-time path tracer written in OpenCL) and a different windowing system (SOIL instead of the native windows the editor uses). The rest is technically the same: scene management, resource loaders, the physics & collision system, etc. I actually used the editor to put the scene together and set up components for each of the objects you can see in GROOM.

EDIT: To demonstrate what the actual engine, runtime, or even GROOM looks like in terms of source (I tried to keep this short):


class Main : public Engine::System
{
private:
    // Attributes for the application
    // ...

public:
    // Engine system constructor
    Main(Engine::Log* log, Engine::Constants* options) : Engine::System("Main")
    {
        // Here we allocate everything we're going to need (resource managers, subsystems like renderer, physics, etc.)
        // ...
    }

    // Engine system destructor
    virtual ~Main()
    {
        // Here we deallocate everything we have allocated (destroy resource managers, de-init subsystems like renderer, physics, etc.)
        // ...
    }

    // Scene, subsystems, etc. initialization
    virtual bool Init()
    {
        // In the case of GROOM - here we load resources into managers, set up scenes, etc.
        // ...
        return true;
    }

    // Scene, subsystems, etc. deinitialization
    virtual void Shutdown()
    {
        // In the case of GROOM - here goes resource deinitialization, etc.
        // ...
    }

    // System update function
    virtual void Update()
    {
        // In the case of GROOM - here goes the actual game loop for the given scene
        // (there are 2 scenes - one for the main menu, one for the actual game level).
        // The loop here calls BVH update and render, apart from the actual game logic step.
        // ...
    }

    // System event handlers
    void Handle(const Engine::Keyboard::KeyPressed& kp)
    {
        // Handle key pressed event
        // ...
    }

    // Other event handlers here
    // ...
};

 

Now the actual application running this can be, for example, this (the simplest case):


int main()
{
    Engine::Log* log = new Engine::Log();
    log->AddOutput(new std::ofstream("Output.log"), Engine::Log::LOG_DEFAULT);

    Engine::Constants* options = new Engine::Constants(log, "Config.conf");

    Engine::Input* input = new Engine::Input();

    Main* m = new Main(log, options);

    Engine::Core::Instance()->Add(input);
    // Add other subsystems
    // ...

    Engine::Core::Instance()->SetLog(log);
    Engine::Core::Instance()->SetOptions(options);

    Engine::Core::Instance()->Run();

    Engine::Core::Instance()->Dispose();

    // Cleanup
    // ...
    delete m;
    delete input;
    delete options;
    delete log;

    return 0;
}

EDIT 2: Sorry for the long post.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

Of the options @mr_tawan mentioned, I'd recommend the first one. I do the same, which is not a good selling point yet :)

And indeed, a good point of this is that my editor code is well separated from the engine/game, and modules like importing content from external sources are not "polluting" the engine. Also, yes, some of my editing functionality has ended up in the engine, but I believe with more careful interface planning it could have been avoided.

The main difference between the two is the way the GUI is displayed. Using the operating system (WinGDI in my case) has the advantage that I can completely decouple the GUI and the engine. E.g. during some long process, like light baking, I can simply pause the engine and spend all my CPU on rendering while the GUI can still display a progress bar, messages, etc.

On 11/17/2019 at 2:40 AM, Vilem Otte said:

 


Fig. 02 - GROOM, a simple game built with this engine

The game actually uses a completely different renderer (instead of the Direct3D 12 based one, a custom real-time path tracer written in OpenCL) and a different windowing system (SOIL instead of the native windows the editor uses). The rest is technically the same: scene management, resource loaders, the physics & collision system, etc. I actually used the editor to put the scene together and set up components for each of the objects you can see in GROOM.

 

Real-time path tracing!?

It looks crazy. What is your hardware specification?

You can give it a try:

 

It runs quite smoothly in full HD on a 1st gen Ryzen with an RX 590, but it is playable even on Intel graphics on my laptop at a smaller resolution.

If a future challenge allows it, I'd like to use real-time path tracing again.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

This topic is closed to new replies.
