
Window message processing - each object gets a look at them

Buckeye


At the moment I have a hierarchy of objects:

class VzAppManager - base class for storing persistent data and providing support - GetGraphics(), GetWindow(), SetEditMenu(), etc.

class MeshEdit3Manager - inherits from VzAppManager. Explicit class for my mesh editor. I could have just expanded VzAppManager to handle everything, but as this is my first attempt at adapting L. Spiro's gamestate architecture to an editor, I want to keep classes like VzAppManager reusable.

class VzMsgProc - base class which has a function: virtual LRESULT MsgProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam);
Used for main window procedure dispatching. I.e.,

```cpp
LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
    // GetWindowLongPtr/GWLP_USERDATA rather than GetWindowLongA/GWL_USERDATA,
    // so the stored pointer survives on 64-bit builds
    VzMsgProc* proc = (VzMsgProc*)GetWindowLongPtr(hWnd, GWLP_USERDATA);
    if (proc != NULL)
        return proc->MsgProc(hWnd, message, wParam, lParam);
    ... // default processing
}
```

class VzEditMode - base class for editing modes. Similar to L. Spiro's gamestate. VzEditMode inherits from VzMsgProc so any mode can be at the top of the messaging chain. This class also has L. Spiro's virtual methods:

```cpp
virtual void Init(VzAppManager *appManager) {}
virtual void Destroy(VzAppManager *appManager) {}
virtual void Tick(VzAppManager *appManager);
virtual void Draw(VzAppManager *appManager) {}
virtual LRESULT MsgProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam);
```

Now, for example, I have a class for editing the mesh object as a whole (not individual verts or faces).

class VzModeObject - inherits from VzEditMode and implements:

Init - gets the mesh pointer (which may be null if nothing's loaded or the mesh was deleted) from appManager, and sets up the main window's Edit menu and the keyboard shortcuts associated with that menu. At the moment, the Edit menu for this mode is just "Verts V" -> go into vertex edit mode through the menu, or via the V keyboard shortcut.

Tick - calls the app manager's CameraHandleMouseWheel() function, which zooms/unzooms the view; that's what I want the mouse wheel to do in this mode. Not implemented yet (TBD): checking for user commands to translate/scale/rotate the mesh.

Draw - draws the mesh with the current user's options for wireframe/solidfill, lighting on/off, cull-backface/cull-none, etc.

MsgProc - at the moment, just checks whether the window message is WM_COMMAND with a message ID of IDM_EDIT_MENUSTART - the ID sent by the Edit menu "Verts" option or its accelerator (keyboard shortcut). If so, mgr->SetNextMode( VZMODE_EDITVERTS ) is called and VzAppManager sets a flag to change the mode at the next Tick() call in the main loop.

MsgProc then simply returns mgr->MsgProc(hWnd, message, wParam, lParam) to give the MeshEdit3Manager instance a chance at the window messages.

MeshEdit3Manager::MsgProc does a bunch of editor-specific stuff - checks for commands to exit and to change wireframe, culling, and lighting states; stores keyboard input; stores flags and mouse positions for LMB down/up, mouse moves, and mouse-wheel state; and handles window resizing, calling Graphics() to resize the backbuffer, etc. [ Come to think of it, WM_CLOSE should probably be handled by VzAppManager instead. --> Added to the TODO list. ]

MeshEdit3Manager::MsgProc then returns the result of VzAppManager::MsgProc(hWnd, message, wParam, lParam).

VzAppManager::MsgProc (at the moment) handles just WM_PAINT and WM_DESTROY. If the message is handled, it returns 0; otherwise it returns the result of the one and only DefWindowProc call in the chain.

Of note:

```cpp
case WM_DESTROY:
    // the current mode and its MsgProc will soon be deleted,
    // so provide a MsgProc until the app is closed
    GetMainWindow().SetMsgProc(this);
    PostQuitMessage(0);
    break; // eventually returns 0
```

Next up, I'm thinking - set up VzMode_VertEdit - first part: how to display selected/unselected vertices.
7 Comments

Good approach. My modeller uses a similar system: there is a base Tool class from which each of the editing tools derives, and the Tool interface looks a lot like what you have. The ToolBar widget takes care of setting the current tool, and the View class just interacts with the base Tool interface which, as well as having methods for mouse move, press, etc., also supports an interface for custom drawing on the view, so the View is completely ignorant of what the current tool is actually doing.

 

Actually, there are up to four views in my editor on screen at any one time, so all the Tool methods take a pointer to a View as their first parameter; whichever view you click on, for example, is made available locally to the Tool method. That saved a great deal of pain.

 

It's a very nice, extensible approach you're taking.


Your Tool does sound similar to my Mode, and your View to my application manager.

 

I'm glad you mentioned the possibility of multiple views. Those are in my plans but I haven't implemented them yet, as I haven't decided whether to have a separate swapchain or just individual viewports for each. Either way is going to be a pain, but will have to be done. But your mention was a reminder for me to keep all the Draw routines flexible!

 

DX9 was convenient in that regard as multiple views were merely a class with a rectangle and display options (projection, view, raster/blend states, etc.). For each view, it was just a matter of setting the viewport from each rectangle and Present'ing to the same rectangle in the main window client. The rest of the client area was untouched, could be rendered to with GDI, and (as shown in a previous blog entry) the main window had GDI child windows arranged in it.


I ended up using swapchains even though I used D3D9, actually. I wrote a wrapper for Qt that gave you a GraphicsWidget which registered itself with the Direct3DDevice and did a lot of stuff under the hood to keep everything working. Client-side, you just created a class derived from GraphicsWidget as a normal Qt widget and it would be a standalone rendering context.

 

I think if you're using a framework, it's best to get as "on board" as you can with the way the framework works. I used Qt's signals/slots system for communicating between the View and the Tools so, thinking about it, the View didn't even know there was a Tool interface; it just fired signals like mousePressed(View *view, const QMouseEvent &e) and so on, and another part of the code was responsible for connecting and disconnecting them from whatever the current tool was.

 

Not sure how I would have approached it using Win32. Like you, I'm a seasoned Win32 developer, but I'd never want to go back to that for anything involving a GUI again.

I'd never want to go back to that for anything involving a GUI again.

 

Actually, Win32 is a comfort zone for me. Although Windows (through its window procedure) is an event-driven system, I'm more comfortable with a menu-driven system.

 

Your post, particularly the aspects of event-driven programming, got me thinking more about why I'm interested in the "mode-state" approach. I added another entry to this blog which mentions my avoidance of event-driven programming. I've done that (event-driven stuff) in the past, but, for me anyway, it gets messy.

 

 

I think if you're using a framework ...

Not sure what you mean exactly by a "framework." The editor app is just a console app with a (Win32) window as part of the interface. The framework (I suppose) I'm using is the messaging system. Am I not understanding your comment? Wouldn't be the first time I've failed to understand something.

No, sorry, I was referring to myself and my decision to use the Qt signal and slot mechanism in my own project.

Qt is also entirely event-driven; the main difference is that all the events are already mapped into strongly typed Q*Event classes, so you aren't messing around decoding message parameters. You just implement a virtual method on your child Widget and the framework takes care of the rest.

I've written similar wrappers for Win32 myself in the past; not as complex as Qt, of course, but a framework that basically translates Windows messages into virtual function calls on a class. It's a good approach.


Qt certainly seems to be a popular GUI choice. I haven't yet given it a try. Your recommendation is, however, duly noted, Aardvajk!


Allow me to throw some cold water on Qt, then. I tried v1 almost 20 years ago, and v5 two years ago... got the feeling it was luring me in with conveniences, goading me into 1990s-style GUI design, and sticking me with some genuinely badly designed (yet overengineered) stuff like the color/theme system... ugh.

 

But it might be the path of least resistance... certainly less than Win32 or GTK+.

