I finally went to my graduation on Saturday, which involved sitting on the stage for two hours watching 500+ students graduate while I waited for my turn. Thankfully that is the last graduation I'll attend, and the very last formal education I'll endure. I'd post photos, but some of you may be planning on eating again someday.
It looks like I'll be heading to Canada for a research visit from October 1 until the end of December. I've got the flights and everything sorted out, but am now waiting on the necessary visa.
After about 4 months off, I've finally started doing some research again. After my thesis was submitted, I was seriously pissed off with camera control, and didn't want anything to do with it. I've had a few ideas for improvement, so I'll be looking into those while in Canada. I've had to postpone my current work until I go, since I have to prepare a journal paper.
Screenshot of the test application
This is the test application I use for most of my work on improving my constraint-based camera system. Most of the numbers will make sense to most people, but the crap in the top-right is strictly to do with the current camera setup.
The Height, Distance, and Orientation values define how the camera should be positioned in relation to the target (it's a pyramid, just on a bad angle). These are the constraints. The *Weight values indicate how important it is for each constraint to be satisfied. This basically allows you to control how the camera moves to satisfy certain constraints. So a low orientation weight means the camera will maintain distance and height, but doesn't care so much about rotation. The FrameCoherence weight controls the overall motion (smoothness) of the camera.
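To make the weighting idea concrete, here's a minimal sketch of how a weighted constraint cost could look. The function names and the normalisation are my own invention for illustration, not the actual system:

```python
import math

def height_violation(cam, target, desired_height):
    # How far the camera's height above the target is from the desired
    # height, normalised to [0, 1]. Purely illustrative.
    return min(1.0, abs((cam[1] - target[1]) - desired_height) / desired_height)

def distance_violation(cam, target, desired_dist):
    # Same idea for the camera-to-target distance.
    d = math.dist(cam, target)
    return min(1.0, abs(d - desired_dist) / desired_dist)

def total_cost(cam, constraints):
    # constraints: list of (violation_fn, weight) pairs.
    # A low weight means the solver cares less about that constraint,
    # which is exactly the "orientation weight" behaviour described above.
    return sum(w * fn(cam) for fn, w in constraints)
```

The solver then just looks for the camera position with the lowest total cost, so dropping a weight toward zero smoothly turns that constraint into a suggestion rather than a rule.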
The camera system works by searching a subset of the environment (as shown by DomainSize, Passes, and ScaleFactor). The constraint solver tries to find a camera position that best satisfies all of the constraints. Because not all constraints are weighted equally (and the target moves), the optimal camera position cannot be known ahead of time. My PhD was mostly on a quick way to find the optimal (near-optimal, since my method is incomplete) position.
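A toy version of that kind of search: sample candidate positions in a box around the current camera, keep the cheapest, shrink the box, and repeat. DomainSize, Passes, and ScaleFactor here are my reading of the parameters in the screenshot; the real solver is considerably more involved, and this naive grid sampling is just a sketch:

```python
import itertools

def search(cost, start, domain_size=4.0, passes=3, scale_factor=0.5, samples=5):
    # cost: maps an (x, y, z) camera position to a weighted constraint cost.
    # Each pass samples a grid around the best position found so far,
    # then shrinks the search domain by scale_factor for the next pass.
    best = start
    size = domain_size
    for _ in range(passes):
        step = size / (samples - 1)
        offsets = [i * step - size / 2 for i in range(samples)]
        candidates = [(best[0] + dx, best[1] + dy, best[2] + dz)
                      for dx, dy, dz in itertools.product(offsets, repeat=3)]
        best = min(candidates, key=cost)
        size *= scale_factor  # refine around the current best
    return best
```

Because the search only ever looks in a shrinking neighbourhood, it's fast but incomplete, which matches the "near-optimal" caveat above.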
What is cool about constraint-based camera systems is how easy they are to extend. To add occlusion avoidance, I just introduce a new constraint to avoid obstacles. This doesn't require any modification to the constraint solver. Since all constraints are weighted, you can control how the camera will move to avoid occlusions: a high weight will make the camera avoid occlusions quickly, while a low weight will avoid them slowly. In my current implementation I can make the camera avoid occlusions differently depending on where the occlusion is. Not that you'd want to, but you can.
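The point about extension without touching the solver can be sketched too: an occlusion constraint is just another cost term. Here I fake visibility with axis-aligned box obstacles and a coarse sampling of the line of sight; a real system would use proper ray casting, and everything here (names, box representation) is hypothetical:

```python
def segment_hits_box(a, b, box_min, box_max, steps=32):
    # Sample points along the segment a -> b and test each against an
    # axis-aligned box. Crude, but enough to illustrate the idea.
    for i in range(steps + 1):
        t = i / steps
        p = tuple(a[j] + t * (b[j] - a[j]) for j in range(3))
        if all(box_min[j] <= p[j] <= box_max[j] for j in range(3)):
            return True
    return False

def occlusion_violation(cam, target, boxes):
    # 1.0 if any obstacle blocks the camera's view of the target, else 0.0.
    # Plugged into the weighted sum like any other constraint, so the
    # solver needs no changes at all.
    if any(segment_hits_box(cam, target, lo, hi) for lo, hi in boxes):
        return 1.0
    return 0.0
```

Scaling this term's weight up or down is what produces the fast-versus-slow avoidance behaviour described above.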
The more common use of the constraints actually has more to do with cinematography. It's a straightforward process to create a new constraint for cinematic camera angles (e.g. two-shot, over-the-shoulder, etc.). The constraints can be turned on and off between frames to create scripted cinematics. This is one area where I'll be looking to make a few improvements soon.
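Swapping constraints between frames for a scripted sequence could look something like this. The shot names and their parameter presets are made-up examples, not the actual implementation:

```python
# Named constraint presets for a couple of standard shots. The values
# (distances, heights, weights) are purely illustrative.
SHOTS = {
    "two_shot":      {"distance": 6.0, "height": 1.7, "orientation_weight": 0.2},
    "over_shoulder": {"distance": 1.5, "height": 1.6, "orientation_weight": 1.0},
}

def active_constraints(timeline, frame):
    # timeline: list of (start_frame, shot_name), sorted by start frame.
    # Returns the constraint preset in effect at the given frame, so the
    # solver sees a different constraint set as the script progresses.
    current = timeline[0][1]
    for start, name in timeline:
        if frame >= start:
            current = name
    return SHOTS[current]
```

Because the solver re-evaluates the active constraints every frame anyway, switching presets is all it takes to cut from one shot to another.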
I guess that's probably the most confusing description of how my camera system works. If anyone wants more info, I'd be happy to try to explain it better [smile].