Camera Integration

My cameras are very, very simple, and come in two parts. I have the "Camera" object, which is just an eye, a target, an up vector and an FOV. This class is part of the renderer architecture, and when you set up a render list you add a camera to it. (Yes, this means each render list has only one camera.)
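
As a rough sketch, that renderer-side camera is little more than plain data (the field names here are mine, not from the actual engine):

    struct Vec3 { float x, y, z; };

    // Renderer-side camera: pure data, no behaviour attached.
    struct Camera
    {
        Vec3  eye;    // camera position
        Vec3  target; // look-at point
        Vec3  up;     // up vector
        float fovY;   // vertical field of view, in degrees
    };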

Secondly, in my logic system, I have CameraControllers. These have advanced behaviour that the renderer is unaware of, such as performing a raycast and switching between first and third person depending on the result. These objects are responsible for their own behaviour; one in particular that I use is the EntityTrackCameraController, which sits in one spot and tracks an entity. Another is the ThirdPersonCameraController, which is a typical third-person camera.
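
Continuing the sketch above, the split might look like this; the base class follows from the description, but the EntityTrackCameraController internals are my guess at the behaviour described:

    // Logic-side base class: the renderer never sees these.
    class CameraController
    {
    public:
        virtual ~CameraController() {}
        virtual void update(float delta) = 0;
        virtual Camera getCamera() const = 0;
    };

    struct Entity { Vec3 position; }; // stand-in for a real game entity

    // Sits in one spot and keeps its look-at target on a tracked entity.
    class EntityTrackCameraController : public CameraController
    {
    public:
        EntityTrackCameraController(const Vec3& pos, const Entity* e)
            : m_camera{ pos, e->position, Vec3{ 0, 1, 0 }, 60.0f }, m_entity(e) {}

        void update(float /*delta*/) override
        {
            m_camera.target = m_entity->position; // follow the entity
        }

        Camera getCamera() const override { return m_camera; }

    private:
        Camera        m_camera;
        const Entity* m_entity;
    };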

I update the camera controllers during the frame update, which means my gamestate class needs a pointer to the camera controller it is currently using; I then call camera_controller->update(delta) and it will do its thing. Pretty simple, really.
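
In sketch form (the GameState class and member names are assumptions, not the poster's actual code):

    class GameState
    {
    public:
        void update(float delta)
        {
            // ... other per-frame logic ...
            m_cameraController->update(delta); // the controller does its thing
        }

    private:
        CameraController* m_cameraController; // the camera currently in use
    };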

Each frame, when I gather a render list, I call camera_controller->getCamera() and it will return the renderer-friendly camera object. This object contains all the data the renderer needs to do a gluLookAt, set up a suitable viewport, and set the FOV. This means that the renderer is totally agnostic to any of the logic surrounding a camera controller, while the camera controllers are agnostic of any rendering API.
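
The hand-off might look something like this (RenderList and the near/far values are assumptions; the GL calls show what the renderer does with the plain Camera data):

    #include <GL/glu.h>

    struct RenderList { Camera camera; /* renderables, etc. */ };

    // Logic side: the controller produces a plain Camera for the list.
    void gatherRenderList(RenderList& list, CameraController& controller)
    {
        list.camera = controller.getCamera();
        // ... add renderables ...
    }

    // Renderer side: apply the Camera with fixed-function GL.
    void applyCamera(const Camera& cam, double aspect)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(cam.fovY, aspect, 0.1, 1000.0); // near/far assumed
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(cam.eye.x,    cam.eye.y,    cam.eye.z,
                  cam.target.x, cam.target.y, cam.target.z,
                  cam.up.x,     cam.up.y,     cam.up.z);
    }
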
I was actually about to ask how the cameras link in with the renderer. I like both ideas presented and might do a mixture of them.

I do have another question though: how would multiple cameras be supported in either of your systems? I can see that you would have one camera for the main player's view, but you could also have a camera that renders to a texture which is then displayed within the player's view. This would mean you would need to form some kind of list of active cameras? The same idea applies if you want to have multiple views on the one screen.
Quote:I do have another question though: how would multiple cameras be supported in either of your systems? I can see that you would have one camera for the main player's view, but you could also have a camera that renders to a texture which is then displayed within the player's view. This would mean you would need to form some kind of list of active cameras? The same idea applies if you want to have multiple views on the one screen.
Right, you'd need a list of cameras, just as you said. Presumably the cameras would render in a fixed order, so that (e.g.) certain views will render over others if needed, and so that any render targets (e.g. for render-to-texture) will be available for use by subsequent cameras.

Parameters associated with each camera might include (a quick sketch follows the list):

- Render order (relative to other cameras)
- Buffer-clearing options (e.g. whether or not to clear the color and depth buffers)
- Background clear color for color buffer clearing
- Projection parameters, including type (orthographic or perspective)
- Viewport parameters
- Render target (framebuffer, texture, etc.)
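
As a sketch, those could be bundled into a per-camera description (type and field names are illustrative, not from any specific engine):

    struct Color    { float r, g, b, a; };
    struct Viewport { int x, y, width, height; };
    enum class ProjectionType { Perspective, Orthographic };
    struct RenderTarget; // framebuffer object, texture, etc.

    struct CameraParams
    {
        int            renderOrder; // cameras render in ascending order
        bool           clearColor;  // clear the color buffer first?
        bool           clearDepth;  // clear the depth buffer first?
        Color          background;  // used when clearColor is true
        ProjectionType projection;  // orthographic or perspective
        Viewport       viewport;    // region of the target to draw into
        RenderTarget*  target;      // nullptr = default framebuffer
    };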

Another thing you have to consider is which objects will be included in the render passes for which cameras. For example, an 'in-game' camera might only render game entities, while the 'UI' camera might only render UI elements. You might also have objects that need to be rendered by multiple cameras. For example, a first-person 'player' camera and a security camera would probably render the same objects, just from different points of view. Or, imagine a little 'info' window that showed the player or some other entity, but nothing else; in that scenario, that entity would need to be associated with two cameras, while all other entities were associated with the main camera only.

One way to accomplish this is to give each camera an ID, and then assign one or more camera IDs to each renderable. Then, each renderable is rendered in the render pass for every camera with which it's associated.
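
A minimal sketch of that scheme, using a bitmask as the set of camera IDs (gatherForCamera is a hypothetical helper):

    #include <cstdint>
    #include <vector>

    struct Renderable
    {
        std::uint32_t cameraMask; // bit i set => rendered by camera with ID i
        // ... mesh, transform, etc.
    };

    // Collect the renderables that belong in camera `cameraId`'s pass.
    std::vector<const Renderable*> gatherForCamera(
        const std::vector<Renderable>& scene, int cameraId)
    {
        std::vector<const Renderable*> out;
        for (const Renderable& r : scene)
            if (r.cameraMask & (1u << cameraId))
                out.push_back(&r);
        return out;
    }

The 'info window' entity from the example above would carry both the main camera's bit and the info camera's bit; every other entity would carry only the main camera's bit.
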
That sounds like a good way to go about it.

But just to clarify: in the case of the security camera, you might have an entity, for example a "CameraStreamEntity", which associates itself with a "RenderToTextureCamera". Now, to determine whether the renderer even needs to do any rendering with said camera, it would use the "Main Camera" to determine whether the entity is in its frustum?

What confuses me is the "Main Camera" and what it gets associated with. I would assume it gets associated with the player? It would have to be managed as a special case compared to other cameras?
Quote:But just to clarify: in the case of the security camera, you might have an entity, for example a "CameraStreamEntity", which associates itself with a "RenderToTextureCamera". Now, to determine whether the renderer even needs to do any rendering with said camera, it would use the "Main Camera" to determine whether the entity is in its frustum?
I think I understand what you're getting at. Basically, you're talking about skipping the rendering for the security camera if the 'screen' on which the security camera view appears isn't visible from any other camera, right?

If so, I don't have an answer per se, not having tackled that particular problem myself. It seems like the problem could become somewhat involved though. Imagine, for example, multiple security cameras, some of which might be able to 'see' the screens associated with other cameras. If any one of those screens were visible to a 'normal' camera (such as the one associated with the player), then you would need to render the camera view for that screen. If *another* screen were visible on the first screen, you'd need to render the view for that screen as well. And so on. I don't have a solution to that problem available off the top of my head, but maybe someone else has done something like what you describe and can offer some specific suggestions.
Quote:What confuses me is the "Main Camera" and what it gets associated with. I would assume it gets associated with the player? It would have to be managed as a special case compared to other cameras?
I would not think of any camera as being a special case, but maybe that's just because I haven't encountered a situation in which it was necessary to do so. Still, I can't think of a reason right off why a 'main' camera would need to be treated differently than any other camera.
First of all, my camera set-up is more or less the same as what jyk describes (I also jumped on the entity/component-system bandwagon ;)).

Quote:Original post by jyk
If so, I don't have an answer per se, not having tackled that particular problem myself. It seems like the problem could become somewhat involved though. Imagine, for example, multiple security cameras, some of which might be able to 'see' the screens associated with other cameras. If any one of those screens were visible to a 'normal' camera (such as the one associated with the player), then you would need to render the camera view for that screen. If *another* screen were visible on the first screen, you'd need to render the view for that screen as well. And so on. I don't have a solution to that problem available off the top of my head, but maybe someone else has done something like what you describe and can offer some specific suggestions.

The solution I'm about to use: the renderables in the scene graph are filtered for those that are visible and collected in a queue of rendering jobs. The jobs are later sorted by the main renderer anyway. During this sort, all jobs that require a render target other than the main one are detected and handled specially, i.e. the rendering process is invoked recursively, with the altered camera set-up.
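
In very rough sketch form, continuing the sketches above, that recursion could look like this (sortJobs, bindTarget, setUpCamera, jobsFor and draw are assumed helpers standing in for the real renderer; a real implementation would also recurse only once per target, not once per job):

    struct RenderJob
    {
        const Renderable* renderable;
        RenderTarget*     target; // nullptr = main framebuffer
    };

    void renderQueue(std::vector<RenderJob> jobs, RenderTarget* target)
    {
        sortJobs(jobs);      // the renderer's usual sort (assumed helper)
        bindTarget(target);  // assumed helper
        setUpCamera(target); // assumed helper: applies this target's camera

        for (const RenderJob& job : jobs)
        {
            if (job.target != target)
            {
                // Detected during the sort in the description above:
                // render the other target's queue first, so its texture
                // is ready before anything on this target samples it.
                renderQueue(jobsFor(job.target), job.target);
                bindTarget(target);  // restore the outer target
                setUpCamera(target); // and its camera set-up
            }
            else
            {
                draw(job);
            }
        }
    }
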
Sorry for the late reply; I have no internet at home at the moment. I'd just like to thank you all for your ideas. I think I've worked out a solution that uses a mixture of what has been discussed. So far so good :)
