Multiple cameras rendering in OpenGL

Is there a good solution for rendering a scene from multiple cameras (300~700 of them)? I currently solve it the easy way, with a for loop over the cameras, but it's time-consuming. Any better ideas?
I don't know, but...
why would one want that many cameras?
Do I understand correctly that you want 300-700 different points of view?
If so, it's pretty obvious the computer has a lot of work to do...
You could use the geometry shader to duplicate your geometry, transform each duplicate with a different MVP matrix (skipping that step in the vertex shader), and assign each duplicate to a different gl_ViewportIndex.

That will render N different "cameras" at one time.
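
To illustrate, a minimal geometry shader sketch of that idea could look like the one below. This assumes GL 4.1 for viewport arrays, a hypothetical u_mvp uniform array, and a vertex shader that passes positions through untransformed; also note that GL_MAX_VIEWPORTS is typically only 16, so hundreds of cameras would still need several passes over groups of viewports.

#version 410 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 12) out; // 3 vertices x N cameras, N = 4 here

uniform mat4 u_mvp[4]; // one MVP matrix per camera (hypothetical name)

void main()
{
    for (int cam = 0; cam < 4; ++cam)
    {
        gl_ViewportIndex = cam; // route this copy of the triangle to viewport 'cam'
        for (int v = 0; v < 3; ++v)
        {
            // the vertex shader left gl_Position in object space
            gl_Position = u_mvp[cam] * gl_in[v].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}

The viewports themselves would be laid out on the C++ side with glViewportIndexedf before drawing.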

Of course, things that are normally kind of "simple", such as culling against your view frustum, are totally impossible then, because you cannot know a priori what cameras 3, 4, or 17 will see (well... you can... but it's not that trivial any more, and there's no big efficiency gain anyway, since you must render a lot of geometry that would be culled in one viewport but not another).

It's probably just as fast (and more straightforward) to just use many different FBOs and render once into each one in a for loop as you're doing already.
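
In case it helps, that loop could look roughly like the sketch below. It's only a sketch: Camera, drawScene, and the uniform location are placeholders for whatever your engine actually uses, and the GL types/functions come from your loader (glad, GLEW, ...).

#include <functional>
#include <vector>
#include <glm/glm.hpp>

struct Camera {
    GLuint fbo;          // FBO with colour/depth attachments already set up
    glm::mat4 viewProj;  // this camera's view-projection matrix
};

void renderAllCameras(const std::vector<Camera>& cameras,
                      GLuint program, GLint mvpLocation,
                      const std::function<void()>& drawScene)
{
    glUseProgram(program);
    for (const Camera& cam : cameras) {
        glBindFramebuffer(GL_FRAMEBUFFER, cam.fbo);  // render target for this camera
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, &cam.viewProj[0][0]);
        drawScene();  // issue the scene's draw calls once per camera
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}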

Note that when you have 300+ cameras in one scene, a perfectly acceptable optimization is to render each camera only every 5th or 10th frame or so. Nobody will notice anyway. That way you only have to render maybe 30 cameras per frame.
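
A round-robin version of the loop above is only a small change (again a sketch, using the same hypothetical types as before):

// With ~300 cameras and stride = 10, only ~30 cameras are refreshed per frame;
// each camera still gets updated once every 'stride' frames.
void renderCamerasRoundRobin(const std::vector<Camera>& cameras,
                             unsigned frameIndex, unsigned stride,
                             GLuint program, GLint mvpLocation,
                             const std::function<void()>& drawScene)
{
    glUseProgram(program);
    for (std::size_t i = frameIndex % stride; i < cameras.size(); i += stride) {
        glBindFramebuffer(GL_FRAMEBUFFER, cameras[i].fbo);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, &cameras[i].viewProj[0][0]);
        drawScene();
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}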
This is an odd question. Unless you can get all (or even a portion) of the cameras to render in parallel, you are going to have to render the scene for each camera sequentially. In that case, why not just use one camera and move it from position to position?
I appreciate your answers very much.

You could use the geometry shader to duplicate your geometry, transform each duplicate with a different MVP matrix (skipping that step in the vertex shader), and assign each duplicate to a different gl_ViewportIndex.

I assumed this would perform about the same as what I'm doing now, because duplicating the vertices takes almost as much time. Is that so?

Of course, things that are normally kind of "simple", such as culling against your view frustum, are totally impossible then, because you cannot know a priori what cameras 3, 4, or 17 will see (well... you can... but it's not that trivial any more, and there's no big efficiency gain anyway, since you must render a lot of geometry that would be culled in one viewport but not another).

It's probably just as fast (and more straightforward) to just use many different FBOs and render once into each one in a for loop as you're doing already.


I do not get it. Can you explain it further?

Note that when you have 300+ cameras in one scene, a perfectly acceptable optimization is to render each camera only every 5th or 10th frame or so. Nobody will notice anyway. That way you only have to render maybe 30 cameras per frame.

Thanks, I'm considering this seriously.

This is an odd question. Unless you can get all (or even a portion) of the cameras to render in parallel, you are going to have to render the scene for each camera sequentially. In that case, why not just use one camera and move it from position to position?

My "render in parallel" way is that: group the cameras. In each camera group, the viewing frustums of cameras do not intersect to guarantee proper vertex MVP projections in vertex shader.
It has drawbacks:
1. Many geometry need to be divided into many pieces manually . ex:the ceiling plane need to subdivide if a camera look at the center of the ceiling plane .
2. The grouping is not good. In average, 2 cameras in a group
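
Roughly, the grouping looks like this (a sketch only; frustumsIntersect() stands in for a pairwise frustum-overlap test, and Camera is as in the earlier sketches):

#include <vector>

// Greedy grouping: put each camera into the first group whose members'
// frustums it does not intersect; otherwise start a new group.
std::vector<std::vector<std::size_t>> groupCameras(const std::vector<Camera>& cams)
{
    std::vector<std::vector<std::size_t>> groups;
    for (std::size_t i = 0; i < cams.size(); ++i) {
        bool placed = false;
        for (auto& group : groups) {
            bool clash = false;
            for (std::size_t j : group)
                if (frustumsIntersect(cams[i], cams[j])) { clash = true; break; }
            if (!clash) { group.push_back(i); placed = true; break; }
        }
        if (!placed)
            groups.push_back({i});  // this camera starts a new group
    }
    return groups;
}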

Do I understand correctly that you want 300-700 different points of view?
If so, it's pretty obvious the computer has a lot of work to do...

I use them to render to different textures; all the cameras are in different places, looking in different directions.

