Creating a level editor alongside the game: how to approach it?

Started by
10 comments, last by mightypigeon 11 years, 1 month ago

It looks like I've started on the right track already, since my engine code is compiled into a DLL and the game project uses it. What I don't know yet is how to add editor features: for instance, rendering labels on objects, the rotate/scale/move gizmo, and other debug shapes. I suppose they just need to be treated as special cases in the rendering engine so they aren't affected by lighting, for example. It sounds like the most popular option is to keep the game and editor as totally separate projects. As a start, I'll keep both in one project but put the editor code and game code in separate folders.
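One way to special-case editor helpers like that is to tag render submissions as unlit and draw them in a separate overlay pass after the lit scene. A minimal sketch; the RenderItem, FramePasses, and partition names are made up for illustration, and a real engine would carry mesh/material handles and disable lighting (and perhaps depth write) for the overlay pass:

```cpp
#include <string>
#include <vector>

// Hypothetical render submission: a real engine would hold
// mesh/material handles rather than a name.
struct RenderItem {
    std::string name;
    bool unlit;  // editor helpers and debug shapes set this
};

// One frame's submissions split into the lit scene pass and an
// unlit overlay pass that is drawn afterwards with lighting disabled.
struct FramePasses {
    std::vector<RenderItem> lit;
    std::vector<RenderItem> overlay;
};

FramePasses partition(const std::vector<RenderItem>& submitted) {
    FramePasses p;
    for (const auto& item : submitted)
        (item.unlit ? p.overlay : p.lit).push_back(item);
    return p;
}
```

The point is that gizmos and labels never touch the lighting path at all; they are just a second, simpler pass.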

I'm sort of stuck there myself. I have a module in my code called HelpObjects; it's just handles to global vertex and index buffers, manipulated through Update() and Render() functions, for drawing scene help objects such as lights and cameras. It literally took hours to hand-code my camera with the two film reels, frustum box, etc., but it works really nicely now. The thing I'm stuck on is view scale. My camera gizmo, for example, is about 5x5x2.5 units in volume, which is massive in the game I'm working on, where generic units are treated as meters. My player's ship model is about 2 units long, and the six cameras that move with it to render its environment map are so large that you have to zoom far out just to see that the ship is inside a bunch of camera gizmos.

What I'm going to try is this: set up an orthographic camera and do ray picking. If any gizmos are in view, I'll project their 3D coordinates to 2D screen-space coordinates and draw my 3D gizmos in 2D ortho space as I would for sprites, but keep the rotation. The ortho view will always keep my camera gizmo the same size, just like you'd see in 3DS Max, Maya, Blender, etc.
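The world-to-screen step described above boils down to a perspective divide plus a viewport transform. A minimal sketch, assuming a row-major view-projection matrix; the worldToScreen name and the Mat4/Vec4 aliases are made up for illustration:

```cpp
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<float, 16>;  // row-major 4x4

Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        r[i] = m[i*4+0]*v[0] + m[i*4+1]*v[1] + m[i*4+2]*v[2] + m[i*4+3]*v[3];
    return r;
}

// Project a world-space point to pixel coordinates.
// Returns false if the point is behind the camera.
bool worldToScreen(const Mat4& viewProj, float wx, float wy, float wz,
                   float screenW, float screenH, float& sx, float& sy) {
    Vec4 clip = mul(viewProj, {wx, wy, wz, 1.0f});
    if (clip[3] <= 0.0f) return false;            // behind the camera
    float ndcX = clip[0] / clip[3];               // perspective divide
    float ndcY = clip[1] / clip[3];
    sx = (ndcX * 0.5f + 0.5f) * screenW;          // NDC [-1,1] -> pixels
    sy = (1.0f - (ndcY * 0.5f + 0.5f)) * screenH; // y flipped for screen space
    return true;
}
```

Once you have the pixel position, you can draw the gizmo at a fixed pixel size regardless of its distance from the camera, which is exactly the constant-size behavior the DCC tools have.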

That might work, but the ortho-to-perspective mismatch might distort the way the rotation looks...

I have no editors for now, but any helper geometry (what you refer to as gizmos) is subclassed from my engine's generic "Object" class and overrides the RenderHelpers() method to display itself. So when I set the RENDER_HELPERS flag on my scene, all helpers render. I also have flags for normals and bones on model instances that only get allocated, updated, and rendered when those flags are set. ;)
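That flag approach can be as simple as a bitmask on the scene. A sketch; the flag names mirror the post, but the Scene struct itself is hypothetical:

```cpp
#include <cstdint>

// Flag names mirror the post; everything else here is illustrative.
enum SceneFlags : uint32_t {
    RENDER_HELPERS = 1u << 0,  // cameras, lights, other gizmos
    RENDER_NORMALS = 1u << 1,  // per-vertex normal lines on model instances
    RENDER_BONES   = 1u << 2,  // skeleton visualization
};

struct Scene {
    uint32_t flags = 0;

    void render() const {
        // ...render the normal scene first...
        if (flags & RENDER_HELPERS) { /* render all helper objects */ }
        if (flags & RENDER_NORMALS) { /* allocate + render normal lines */ }
        if (flags & RENDER_BONES)   { /* allocate + render bone geometry */ }
    }
};
```

Keeping the flags per-scene (or per-viewport) also means an editor viewport can show helpers while the game view never pays for them.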


@mightypigeon: Did you build that using .NET with C# or VB? When I was working in C++ and DirectX, I'd build mine using the raw Win32 API, which is gross stuff from the 90s... Designing .NET applications is a breeze, but getting my C++ engine, built on top of OpenGL, to interact with .NET has never worked out for me.

EDIT: Woah, I asked before I looked up wxWidgets. You sir, answered my question before I even knew it haha

Back in the day I used C# for the interface, with a small helper DLL written in C++ that communicated with the engine through .NET interop. It was a pain.

Lately I've been doing it the other way around. I was playing around with WPF and enjoyed it, so I've now written a couple of the editor's dialogs using it and C#. They export to a COM library. After a lot of mucking around, the end result is that I can load those WPF windows from native C++ via COM, with no managed C++ or C++/CLI at all; I wanted to keep that away from the engine. I ended up pulling an all-nighter to get it going, and I'm having nightmares just thinking about it again, but it's one of those things that once you do it the first time, it's there forever. Definitely a learning experience!


It looks like I've started on the right track already, since my engine code is compiled into a DLL and the game project uses it. What I don't know yet is how to add editor features: for instance, rendering labels on objects, the rotate/scale/move gizmo, and other debug shapes. I suppose they just need to be treated as special cases in the rendering engine so they aren't affected by lighting, for example. It sounds like the most popular option is to keep the game and editor as totally separate projects. As a start, I'll keep both in one project but put the editor code and game code in separate folders.

Rendering ad-hoc lines, labels, and things like that is definitely worth implementing, even if you aren't writing an editor. It can be a life saver down the track, since you can quickly add debug drawing in your game code (drawing an AI's navigation path, for example).
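An immediate-mode API makes this painless to call from anywhere in game code: accumulate segments during the frame, then flush them once. A sketch; the DebugDraw class and its method names are hypothetical, and in a real renderer flush() would upload to a dynamic vertex buffer and issue the draw:

```cpp
#include <cstddef>
#include <vector>

// One queued debug line; a real engine would add color, duration, etc.
struct DebugLine {
    float ax, ay, az;
    float bx, by, bz;
};

class DebugDraw {
public:
    // Callable from any game system, any time during the frame.
    void line(float ax, float ay, float az, float bx, float by, float bz) {
        lines_.push_back({ax, ay, az, bx, by, bz});
    }

    std::size_t pending() const { return lines_.size(); }

    // Called once per frame by the renderer: upload the accumulated
    // lines to a dynamic vertex buffer, draw them unlit, then clear.
    void flush() { lines_.clear(); }

private:
    std::vector<DebugLine> lines_;
};
```

For an AI navigation path, you'd just call line() between each consecutive pair of waypoints each frame while the debug view is enabled.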

And speaking of gizmos, here is something I found a while back: https://github.com/CedricGuillemet/LibGizmo

Works great.


This topic is closed to new replies.
