Large scenes and multiple view frusta

Started by
7 comments, last by jefferytitan 11 years, 8 months ago
I'm working on a game set in a rather large terrain-based environment. I can get quite large view distances for now, about 1-2 km, but it starts to eat up framerate, even though I use LOD and impostors. Another thing is that I'm starting to get some z-fighting artifacts. Does anyone have experience with the multiple-view-frusta method, where the frustum is split into parts and the scene is rendered multiple times?

It would surely improve z-fighting, but I'm hoping it would also yield a significant performance boost. The basic idea is that players would usually move on foot, so the camera moves quite slowly relative to the total scale of the environment. If the far-away scenery is not updated every frame, the errors might not be perceivable.

Maybe a single frustum split could already remedy the z-fighting issue, if I plan on having around a 5-10 km view distance. Do you suppose the following would work? Draw the, say, 1-10 km range to a sort of skybox with e.g. four sides (top and bottom are probably not needed). Then, on top of the skybox, the 0-1 km range would be drawn normally. There would need to be some overlap, and maybe dissolving, to seamlessly connect the two.

The problem is that drawing the skybox would take about four times as much processing time as rendering without it, so some approximations need to be made. Of course, there is no point in updating the box unless the camera moves a threshold distance or something important changes far away (an enemy soldier appears, if you can see him at that range). This should give the desired performance boost, but I would expect a noticeable sudden drop in framerate whenever the update happens. Maybe the updating could somehow be divided over multiple frames, but I'm not yet sure how to do that cleanly. I might also use a fairly low-res render target to get rid of pixel-processing bottlenecks, and perhaps update the box gradually by scissoring out slices, if the rendering is pixel-limited rather than vertex-limited. Has anyone done this kind of thing?
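A minimal sketch of the update trigger described above, deciding when the far-scene box needs re-rendering. All names and the threshold value are illustrative assumptions, not from any particular engine:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Hypothetical bookkeeping for the impostor box.
struct ImpostorState {
    Vec3 lastRenderPos;   // camera position when the box was last rendered
    bool farSceneDirty;   // e.g. an enemy appeared in the far range
};

// The box only needs a refresh when the camera has moved far enough for
// parallax error to become visible, or when something far away changed.
bool needsUpdate(const ImpostorState& s, const Vec3& cameraPos,
                 float moveThreshold /* e.g. 50 m for a 1 km near range */) {
    return s.farSceneDirty ||
           distance(cameraPos, s.lastRenderPos) > moveThreshold;
}
```

The acceptable threshold depends on how far the near range extends: the bigger the overlap between the near scene and the box, the longer the stale box remains plausible.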

I'm not sure, but maybe they did something like this in Operation Flashpoint: Red River. It had quite a large view distance and a sort of "waving air" effect on distant ground. My guess is that they at least rendered far-away scenery to an off-screen buffer and applied this filter, but I'm not sure whether they also used it for performance tricks.
Man, I was really looking forward to a response to this topic and I've been watching it every day since. :-/

I'll keep watching just in case.
So maybe this has not been widely experimented with. I started implementing it today, but it might take some time before good results are achieved. If they are.

For now, I can show one image to give some preview:

http://koti.mbnet.fi/~blender/public/sceneBox.jpg

In the picture, you can see only one face of the cube rendered. There is a black line at the intersection of the cube face and the actual geometry. I don't know what causes that.

I also applied a simple, crude blur effect. I was planning on having some sort of, possibly fake, depth of field for very far-away scenery.
For now, the sky is not being drawn for some reason. Also, the terrain colors and mixing seem slightly different in the render texture.

So, it is a start. I'll get back after I get it working better.
Now it's looking more like what I had imagined:

http://koti.mbnet.fi/~blender/public/sceneBox2.jpg
http://koti.mbnet.fi/~blender/public/sceneBox3.jpg

The box is now rendered completely, and the sky, which is dynamic, is not included in it. The box textures include an alpha channel so the box can be drawn without blocking the sky. The seam between the box and the near geometry is not too noticeable even without any blending (at least in this scene), but I think some sort of blending is needed in general.

I think that blurring the far scenery adds a lot to the sense of a large environment, but maybe some will find it disturbing if it gives the sensation of myopia. What do you think?

So now there is a drop in frame rate and a noticeable hitch when the box is updated. I hope the latter can be mitigated by storing two instances of the box textures and gently blending between them at the update event. The former can probably be smoothed by distributing the update process over multiple frames.
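The double-buffering idea could be sketched roughly like this (a hypothetical C++ sketch; the names and the fade duration are my assumptions; the shader would sample both texture sets and mix by the blend factor):

```cpp
#include <cassert>

// Keep two sets of box textures and crossfade from the old set to the
// freshly rendered one over a short interval, so the update never pops.
struct ImpostorBlend {
    int current = 0;      // index of the most recently rendered texture set
    float blend = 1.0f;   // 0 = old set fully visible, 1 = new set fully visible
    float fadeSeconds;    // how long the crossfade takes

    explicit ImpostorBlend(float fade) : fadeSeconds(fade) {}

    // Call when the new box textures finish rendering: swap sets, restart fade.
    void onBoxUpdated() {
        current = 1 - current;
        blend = 0.0f;
    }

    // Advance the crossfade each frame by the elapsed time dt.
    void tick(float dt) {
        blend += dt / fadeSeconds;
        if (blend > 1.0f) blend = 1.0f;
    }
};
```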

Overall, I'm very pleased to see my scene running at 60 fps, where it previously was around 30.
I think that looks terrific! I've been thinking about trying something like that for about 10 years now, but never got to it.

Is this more of a z-buffer issue for you, or more about reducing the frame-to-frame load of rendering distant vistas? What do you do when a plane or dragon (or whatever is situationally appropriate for your project) comes flying in from a distance?

Have you considered logarithmic z-buffers?
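For reference, the logarithmic depth mapping usually meant here remaps a positive view-space distance into [0, 1] as log(C·z + 1) / log(C·far + 1), which concentrates precision near the camera. A small sketch (the constant C and the function name are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Logarithmic depth: maps viewZ in (0, farPlane] to [0, 1], spending far
// more depth precision on nearby geometry than a linear z-buffer does.
float logarithmicDepth(float viewZ, float farPlane, float c = 1.0f) {
    return std::log(c * viewZ + 1.0f) / std::log(c * farPlane + 1.0f);
}
```

In practice this is typically written out in the vertex or fragment shader (overriding the depth output), since fixed-function hardware depth is not logarithmic.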

It looks pretty good as long as the player doesn't move too much. I guess that smoothly spreading the update over multiple frames is the key here; I'm not quite there yet. My primary motivation was to reduce per-frame work, but z-fighting was also a small issue. I know of logarithmic z-buffers but have not tried them.

I guess that two copies of the cube textures should be stored, along with z-buffers, to allow drawing dynamic far-away objects between updates.

I'm a bit worried that this technique will make things complicated, but the first impression was very good.
Hey, I don't know how I missed this post. I've experimented very loosely with this, so I find the topic interesting. I started by just rendering twice, once with a view frustum for distant objects and once for near objects. I chose ranges that slightly overlapped, because otherwise some polygons got missed, reason unknown. I did consider only using scaled-down low-LOD models for the distant objects, but didn't have a chance to try.

I'd love to do a logarithmic depth buffer, but it sounds difficult to implement without hardware support.

As far as skybox impostors for performance go, you could just update one wall of the skybox each frame to spread the load.
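The overlapping near/far ranges mentioned above could be computed along these lines (a sketch; the names and the 5% overlap fraction are assumptions, and each range would get its own projection matrix and render pass):

```cpp
#include <cassert>
#include <vector>

struct DepthRange { float nearPlane, farPlane; };

// Split [nearPlane, farPlane] into uniform depth slices whose near edges are
// pulled back slightly, so geometry straddling a boundary is rendered by at
// least one pass instead of being dropped by both.
std::vector<DepthRange> splitFrustum(float nearPlane, float farPlane,
                                     int splits,
                                     float overlapFraction = 0.05f) {
    std::vector<DepthRange> ranges;
    float step = (farPlane - nearPlane) / splits;
    for (int i = 0; i < splits; ++i) {
        float n = nearPlane + step * i;
        float f = n + step;
        // extend the near side of every split but the first to create overlap
        if (i > 0) n -= step * overlapFraction;
        ranges.push_back({n, f});
    }
    return ranges;
}
```

A logarithmic (rather than uniform) distribution of the split points would match depth precision better over long distances, but uniform slices keep the sketch simple.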

Updating one wall each frame would not give any performance benefit compared to having no box at all. It's probably not desirable that the framerate drops, e.g., from 60 to 30 at each update as the player moves. If possible, I would try to distribute even the update of a single wall over a few frames. Of course, this is limited by the overlap between the box and the near scene, and by the allowed distortion in the far-away scene. Making the box bigger should help, but at the expense of making the near scene heavier to render.
So your main concern is fill rate? Yeah, I suppose you could break each wall into an arbitrary number of tiles and render one tile per frame. You would have to wait until all were rendered before blending it in, or there may be weird artifacts. I did consider splitting the load by interlacing low-res renders and blending each in as it becomes available, but I suspect it would look odd; plus, interlacing probably isn't efficient in hardware terms. If you blur distant scenery, you could of course just use a lower resolution.
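That per-tile schedule could look something like this (an illustrative sketch; a real implementation would also set the scissor/viewport rectangle for the chosen tile before drawing into the wall's render target):

```cpp
#include <cassert>

// Round-robin over (wall, tile) pairs: render one tile per frame, and only
// signal completion once every tile of every wall has been refreshed, so the
// crossfade to the new box never shows a half-updated wall.
struct TileScheduler {
    int tilesPerWall;     // e.g. a 4x4 grid -> 16
    int walls;            // e.g. the 4 side faces of the box
    int nextTile = 0;     // index into the flattened (wall, tile) sequence

    TileScheduler(int tiles, int wallCount)
        : tilesPerWall(tiles), walls(wallCount) {}

    struct Step { int wall; int tile; bool boxComplete; };

    // Which wall/tile to render this frame, and whether the full box is
    // finished after this frame (so blending may begin).
    Step advance() {
        int wall = nextTile / tilesPerWall;
        int tile = nextTile % tilesPerWall;
        nextTile = (nextTile + 1) % (tilesPerWall * walls);
        return {wall, tile, nextTile == 0};
    }
};
```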

This topic is closed to new replies.
