
# The Concept of a Mesh Editor


I've always loved the concept of skinned mesh - skeletal animation. The mathematical basis really appeals to me. I remember the first time I got a DX9 application working with Frank Luna's skinned mesh class as a basis, and seeing Tiny walking, waving, etc. I had examined the DXSDK skinned mesh example, of course, but had been rather confused at all the arrays for bone indices, matrices, etc., because that example generalizes to a hierarchy that supports multiple meshes with individual textures. Luna used a single mesh with a single texture.

I spent a lot of time playing with the code, and examining actual data - dumping matrices to the debug output, dumping bone names with associated parent and children names, converting quaternions to matrices, etc. Relating that to the shader code to figure out how it all worked together was, by far, the best time I've spent in my programming "career."

Once I had a basic understanding of allocating a hierarchy, mesh containers, animation keyframes, etc., I was hooked. The biggest blessing was the text format of X-files. Seeing the structure and its data in a file, and examining the hierarchy loaded into my app gave me a lot of opportunity to understand the concept of skinned mesh animation. That was also my introduction to HLSL and shaders as I'd previously used just the DX9 pipeline for drawing "stuff." Now that I'm into D3D11, I really miss D3DXLoadMeshHierarchyFromX, mesh->GenerateAdjacency(), ID3DXEffect, etc. I now have my own x-file loader, mesh/skeleton hierarchy class, animation controller class, SRV manager class, shader manager class, etc. I never got into D3D11 effects, as I wanted to understand what was going on "under the hood" with writing and compiling shaders, setting constant buffers, input layouts, etc.

Luckily I found a DirectX x-file exporter for Blender. That took a while to fully understand with regard to creating and exporting multiple animations. Lots of experimentation with settings. However, once again, the text format version of X-files was a life-saver.

I found adding, editing and weighting vertices in Blender to be a little difficult. Probably just my laziness with regard to learning the Blender interface better. However, that was an excuse to convert my DX9 mesh/animation editor (below) to D3D11.

I also use the Win32 interface extensively to make life easier. I learned the Windows API early on, writing assembly code and using interrupts, just to see what I could do. Then DX7, 8 and 9 came along and I went on to child windows to do what I wanted to do. The above app shows a child window for DX9 rendering, and two GDI child windows specific to animation editing. It worked (after a fashion) but further development was interrupted by a friend here on gamedev talking so much about D3D11, and posting pix of all the awesome shader tricks he was capable of. So I started on the road to D3D11 and my DX9 work ground to a halt.

I still have a ways to go to be really proficient in D3D11, but I've experimented a lot and learned enough to want to go back and try to create a mesh editor in D3D11. N.B., Blender can (probably) do everything I need, but I love creating my own tools and classes for my asset pipeline. Also, an editor (versus a real-time game) has a tremendous advantage: timing and efficiency are not nearly as critical as they are for games. E.g., drawing a curve for a bone animation using GDI calls can be really S - L - O - W. However, for me anyway, anything on the order of 15 to 30 milliseconds for the rendering loop is just fine.

# The Concept

When I start a new project, I usually start out with pad and pencil and write out in words what I want the result or features of the program to be. Then I make a list of things in general terms that may be necessary to learn about, or difficulties I've run into in other projects that need to be "fixed."

One of the things I want to "fix" in the D3D11 editor is to eliminate the horrendous use of switch statements and booleans I have in my DX9 editor for function dispatching. That is, that app is a mega-class that handles everything. When the loop gets a WM_LBUTTONDOWN message, I have to determine what the current operation is to determine how to handle it. Is the user selecting something? Is it the start of a drag? What should be selected - a vertex, a face, a bone? What data needs to be displayed? Really ugly, and designed to ensure the wrong thing will always happen when the code is edited. In addition, the Undo/Redo stack was completely FUBAR. Saving the current state based on what was going to be done, had been done and what would be needed to restore the state...

Having promised myself that someday I really should organize my code, I started looking around at candidates for program architecture. I ran across L. Spiro's approach to game states, and it interested me. I started thinking in terms of modes in the editor, each mode defining its own response to user input. E.g., when in object mode, WM_LBUTTONDOWN means the user wants to start translating the entire object. In vertex editing mode, the user is selecting/deselecting a vertex. The actions are well-defined, and the Undo/Redo data is clear.

That leads to the common technique of the window procedure checking whether the window's user-data slot (GWLP_USERDATA, set and fetched with SetWindowLongPtr/GetWindowLongPtr so it holds a full pointer on 64-bit builds) is non-null and, if so, casting the stored value to a class pointer and dispatching to the class's MsgProc procedure. If each mode inherits from class VzMsgProc (which has a virtual MsgProc function), response to user input is easily redirected by pointing the user-data slot at the instance of the new mode class whenever the current mode changes. That's appealing to me.
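The dispatch idea can be sketched in a few lines. This is a portable mock-up, not the editor's actual code: the typedefs stand in for the Win32 types, and `g_userData` stands in for the GWLP_USERDATA slot that the real app would drive with SetWindowLongPtr/GetWindowLongPtr. The mode class names are illustrative.

```cpp
#include <cstdint>

// Stand-in typedefs so the sketch compiles anywhere; in the real editor
// these are the Win32 UINT and LRESULT, and the pointer lives in the
// window's GWLP_USERDATA slot.
using UINT    = unsigned int;
using LRESULT = std::intptr_t;

// Base class every mode inherits from; the window procedure knows only this type.
class VzMsgProc
{
public:
    virtual ~VzMsgProc() = default;
    virtual LRESULT MsgProc(UINT msg) = 0;
};

// Hypothetical modes - names and return values are illustrative only.
class ObjectMode : public VzMsgProc
{
public:
    LRESULT MsgProc(UINT) override { return 1; } // e.g., begin translating the object
};

class VertexMode : public VzMsgProc
{
public:
    LRESULT MsgProc(UINT) override { return 2; } // e.g., select/deselect a vertex
};

// Stand-in for the per-window user-data slot (GWLP_USERDATA).
static VzMsgProc* g_userData = nullptr;

// The window procedure: if user data is non-null, cast and dispatch to the mode.
LRESULT WndProc(UINT msg)
{
    if (g_userData != nullptr)
        return g_userData->MsgProc(msg);
    return 0; // in the real app: fall through to DefWindowProc
}
```

Switching modes is then a single pointer assignment; the window procedure itself never changes.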

So, at present, I have an app class (for app initialization and the run loop), an app manager class (which maintains persistent data such as pointers to instances of the graphics, main-window, shader-manager and SRV-manager classes, as well as utility functions), and several mode classes (each of which is passed a pointer to the app-manager instance). The architecture is shamelessly based on L. Spiro's concept and works very nicely.

Each mode can be written to do specific tasks without the need to worry about interfering with another mode's processes, as only one mode exists at a time! When a new mode is invoked, the previous mode passes persistent data as needed to the app manager before it's destroyed, and a new instance for the requested mode is created and becomes the current mode.
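The hand-off described above might look something like the following sketch. The class and member names here (`VzMode`, `SavePersistentData`, `selectedBoneName`) are assumptions for illustration, not taken from the actual editor; the point is that only one mode instance exists at a time, and the old mode gets a chance to push persistent data into the app manager before it's destroyed.

```cpp
#include <memory>
#include <string>

// Minimal sketch of the mode hand-off; names are illustrative.
class AppManager; // forward declaration - modes only hold a pointer

class VzMode
{
public:
    explicit VzMode(AppManager* mgr) : m_mgr(mgr) {}
    virtual ~VzMode() = default;

    // Called just before the mode is destroyed so it can hand
    // persistent data back to the app manager.
    virtual void SavePersistentData() {}

protected:
    AppManager* m_mgr;
};

class AppManager
{
public:
    // Only one mode exists at a time: the previous mode saves its
    // persistent data, is destroyed, and the new mode becomes current.
    void SetMode(std::unique_ptr<VzMode> newMode)
    {
        if (m_curMode)
            m_curMode->SavePersistentData();
        m_curMode = std::move(newMode); // old mode destroyed here
    }

    VzMode* GetCurMode() const { return m_curMode.get(); }

    // Example of persistent data a mode might hand back (an assumption).
    std::string selectedBoneName;

private:
    std::unique_ptr<VzMode> m_curMode;
};
```

Because the manager owns the current mode through a `unique_ptr`, destruction of the outgoing mode is automatic the moment the new one is installed.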

It took a day or two to get the architecture set up properly. I had to go back to basics when designing the classes to avoid circular dependencies - i.e., declaring (not defining) classes in headers. I had gotten into the (very bad) habit of throwing #include's into each class header that needed to know about another class. Mea culpa, mea culpa.
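The forward-declaration fix is worth a tiny example. Everything here is in one file for brevity, with comments marking what would live in each header; the class names echo the ones above but are still just illustrations. The rule of thumb: a pointer or reference member compiles against a mere declaration, so only the .cpp files that actually call through the pointer need the full #include.

```cpp
// --- What AppManager.h would contain: ---
class VzMode; // declaration only - no #include "VzMode.h" needed

class AppManager
{
public:
    VzMode* curMode = nullptr; // a pointer member needs only the declaration
};

// --- What VzMode.h would contain: ---
class AppManager; // declaration only, again - the cycle is broken

class VzMode
{
public:
    explicit VzMode(AppManager* mgr) : m_mgr(mgr) {}

private:
    AppManager* m_mgr;
};
```

Each .cpp then includes both headers and has the complete types it needs, while the headers never include each other.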

However, with that setup, it's relatively easy to design another mode and add it (somewhat) seamlessly to the editor.

My next trick, only because I discovered the possibility and want to try it out, is to generate keyboard accelerators for each mode. That is, keypresses in each mode are context-sensitive - "X" may mean one thing while editing a vertex, and another while extruding a face.

```cpp
// An experiment for creating custom accelerator tables.
// With FVIRTKEY the key member is a virtual-key code, which matches both
// upper- and lowercase, so duplicate 'c'/'C' entries aren't needed - and
// the FALT flag only applies to virtual keys anyway.
ACCEL modeAccel[] =
{
    { FVIRTKEY | FALT, 'C', IDM_VIEW_CULL_TOGGLE      },
    { FVIRTKEY,        'L', IDM_VIEW_LIGHTING_TOGGLE  },
    { FVIRTKEY,        'W', IDM_VIEW_WIREFRAME_TOGGLE },
};
int numEntries = sizeof(modeAccel) / sizeof(ACCEL);
hAccel = CreateAcceleratorTable(modeAccel, numEntries);
```

In the app's main loop:

```cpp
// If there are Window messages then process them.
while (PeekMessage(&msg, 0, 0, 0, PM_REMOVE))
{
    // Give the current mode's accelerator table first crack at the message.
    if (!TranslateAccelerator(msg.hwnd, appManager->GetCurMode()->hAccel, &msg))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}
```

Gotta do some thinking about generalizing the command identifiers, maybe as simple as IDM_C_ALT and IDM_W.

Nice work. Took me forever to write the skeleton animation stuff for my modeller. You're brave using Win API. I never want to go back there again since I found Qt :)

> Nice work.

Thanks. Luna's code (DX9) ran for me out-of-the-box, but I spent a long time getting the generalized multiple-meshes-multiple-textures code working. Only after that did I get Granberg's book and have it all laid out!

Re: Win API and Qt - I learned Win32 at my father's knee, have a lot of reusable code, and actually enjoy exploring Win32**. I do sometimes use external libraries (AntTweakBar, etc.) but wanted to limit the dependencies for this editor. Currently I'm using only the DXTK.

** It still amazes me that MS has maintained compatibility over the years. I keep Petzold at my side (1998) and it all still works like a charm.

I suppose I should give Qt a try sometime in the near future.
