It's amazing how click-happy user interfaces are! Click this, click that, click that, click this, ask me stupid questions and require me to click "Next" each time... The mouse is a pointer; is it possible to infer intent from its location and "context"? If I'm staring at my blank desktop and then point at the Start Menu, you might as well open it up. It's not like I'm doing something else at the moment, so it's unlikely to irritate me. Don't do it if the pointer is engaged or captured - dragging a widget or icon, for instance.
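The hover-to-open idea above could be sketched as a dwell timer: open the target only after the pointer has rested on it for a moment, and never while the pointer is captured. This is a minimal sketch, not any real toolkit's API; the `HoverIntent` class, the dwell threshold, and the string target names are all hypothetical.

```python
import time

DWELL_SECONDS = 0.35  # assumed dwell threshold; would be tuned per user


class HoverIntent:
    """Open a hover target (e.g. the Start Menu) once the pointer rests
    on it, but never while the pointer is engaged or captured (dragging)."""

    def __init__(self, dwell=DWELL_SECONDS):
        self.dwell = dwell
        self.target = None   # what the pointer is currently resting on
        self.since = None    # when it started resting there

    def update(self, target, captured, now=None):
        """Feed pointer state each frame; return the target to open, or None."""
        now = time.monotonic() if now is None else now
        if captured or target is None:
            # Pointer is dragging something or over empty space: reset.
            self.target, self.since = None, None
            return None
        if target != self.target:
            # Pointer moved onto a new target: restart the dwell timer.
            self.target, self.since = target, now
            return None
        if now - self.since >= self.dwell:
            return target  # dwell satisfied: open it without a click
        return None
```

For example, hovering the (hypothetical) "start" target for longer than the dwell threshold opens it, while a drag in progress suppresses it entirely.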
Essentially, a click should initiate an action, and menu navigation is not an action. By reducing the number of clicks, I think we can reduce hand fatigue and the rate of RSIs. Of course, a system like this invites false positives - accidentally pointing at the Start Menu would bring it up and necessitate a click to cancel it, which is worse. That's easily remedied: moving the pointer outside the boundaries of the menu hierarchy cancels the menu. This raises its own problem, namely the likelihood of losing menus to awkward placement. Fortunately, that's a problem that's been solved: every new child (popup) menu should open with its midpoint anchored to the parent item, so that the maximum distance of travel in either direction is no more than half the height of the menu.
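The anchoring rule is just arithmetic: align the child menu's vertical midpoint with the parent item's midpoint, then clamp so the menu stays on screen. Here's a sketch under assumed coordinates (y grows downward, all values in pixels); the function name and parameters are my own, not from any toolkit.

```python
def child_menu_y(parent_item_top, parent_item_height,
                 child_menu_height, screen_height):
    """Vertical position for a child popup menu: its midpoint is anchored
    to the parent item's midpoint, so pointer travel to any child item is
    at most half the child menu's height. Clamped to stay on screen."""
    parent_mid = parent_item_top + parent_item_height / 2
    y = parent_mid - child_menu_height / 2
    # Clamp: never place the menu above the screen top or past the bottom.
    return max(0, min(y, screen_height - child_menu_height))
```

With a 200px-tall child menu anchored to a parent item centered at y = 110, the menu opens at y = 10 and the farthest item is 100px (half the menu height) from the pointer.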
In addition, a "boundary transition area" around windows, menus, etc. can make the mouse exhibit a little "resistance" to crossing into the next region, reducing the ease with which false positives occur.
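One simple way to implement that resistance is to damp pointer motion while it passes through a narrow band around a boundary, so crossing takes a more deliberate gesture. A minimal one-dimensional sketch, with an assumed band width and damping factor:

```python
BAND = 8          # assumed band width in pixels around the boundary
RESISTANCE = 0.3  # assumed damping factor applied inside the band


def apply_resistance(x, dx, edge, band=BAND, factor=RESISTANCE):
    """Move the pointer by dx, but damp the motion while the pointer is
    within `band` pixels of a vertical boundary at x = edge, so casual
    drift is less likely to cross into the next region."""
    in_band = abs(x - edge) <= band
    return x + (dx * factor if in_band else dx)
```

Away from the boundary the pointer moves at full speed; within the band the same physical motion yields a fraction of the travel, which is the "resistance" the user feels.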
In a system such as the above, though, keyboard focus has to be divorced from mouse focus. One of my dissatisfactions with the Windows GUI model is that keyboard focus is tied to both mouse focus and window order. Being able to point at a window and scroll its contents without raising it or stealing keyboard focus doesn't seem irrational to me.
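Divorcing the two focuses amounts to a routing rule in the window system: scroll events follow the pointer, key events follow keyboard focus, and neither touches the z-order. A toy sketch of that dispatch (the class and window names are hypothetical):

```python
class WindowSystem:
    """Sketch of decoupled focus: scroll events are routed to the window
    under the pointer; key events to the keyboard-focused window. Neither
    kind of event changes window order or reassigns focus."""

    def __init__(self):
        self.keyboard_focus = None  # window receiving keystrokes
        self.under_pointer = None   # window currently under the mouse

    def route(self, event_kind):
        """Return which window should receive an event of this kind."""
        if event_kind == "scroll":
            return self.under_pointer
        if event_kind == "key":
            return self.keyboard_focus
        return None
```

So you can keep typing into an editor while the wheel scrolls whatever document happens to be under the pointer - the behavior the paragraph above asks for.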
Next up, more GUI philosophy: windowing, palette menus, and why MDI is a bad idea.