magicstix

Skydome vs skyquad?



Hi all,
I've noticed in a lot of the literature on rendering atmospheric effects that the skydome technique is still used heavily. However, I've also seen situations where a screen-aligned, max-depth "skyquad" is used: a single quad is rendered behind the scene, and the sky is drawn on it.

Given that a skydome would theoretically require a lot more triangles, and that a skyquad written with a modern shader could represent the sky just as easily with only two triangles, what would be the tradeoffs of using one technique over the other?

I've no experience with the "skyquad" method, but it sounds as though it would involve a potentially messy re-evaluation of texture coordinates every frame. A skydome (or sphere, box, etc., depending on your requirements; a dome is no good if you ever need to look down at it!) has the advantage that you can forward the vertex position to the fragment shader for further use, which can come in very handy for some effects. Either way, vertex counts shouldn't really matter: for these objects they will be minuscule compared to the rest of the scene.

I'm curious how this sky quad would work. If you want to be able to look around, you'll somehow need to get a correct view of the sky onto the quad. To my mind you'd either be doing a perspective-based lookup into a cubemap (like this) or you'd prerender some dome/box onto the quad texture. The former would represent the biggest saving in geometry, at the cost of a more complex shader. Prerendering might be worthwhile if you have a fixed viewpoint, or when looking out through windows, but it doesn't seem worth the hassle for dynamic outdoor scenes where the quad texture would need frequent updates.


I implemented the sky/background as a cubemap texture and part of the deferred pipeline. To access the cubemap I use the same frustum-corner vectors that are used for directional lighting. The sky is then blended with the rest of the scene according to the pixel's z-value. This way I don't have to draw the sky separately or worry about z-buffering or wasted texture fill rate.

There is no messy mathematics in this, though of course you'll need to calculate the frustum corners every frame; that is part of deferred rendering anyway.

Of course, you may also draw a full-screen quad and access the cubemap without a deferred renderer.
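Here's a small sketch of the frustum-corner idea on the CPU side. The matrix conventions (OpenGL-style projection, camera at the origin) and all function names are my own assumptions for illustration, not the actual implementation:

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    # OpenGL-style right-handed perspective projection (assumed convention).
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def frustum_far_corners(inv_view_proj):
    # Unproject the four NDC corners on the far plane (z = 1),
    # doing the homogeneous divide to land on world-space points.
    corners = []
    for y in (-1.0, 1.0):
        for x in (-1.0, 1.0):
            h = inv_view_proj @ np.array([x, y, 1.0, 1.0])
            corners.append(h[:3] / h[3])
    return np.array(corners)  # order: [bl, br, tl, tr]

def sky_direction(corners, u, v, cam_pos):
    # Bilinearly interpolate the corner vectors (this is what the
    # rasterizer does across a full-screen quad), then normalize
    # per "pixel" to get the cubemap lookup direction.
    bottom = (1.0 - u) * corners[0] + u * corners[1]
    top    = (1.0 - u) * corners[2] + u * corners[3]
    p = (1.0 - v) * bottom + v * top
    d = p - cam_pos
    return d / np.linalg.norm(d)

# Camera at the origin looking down -z (identity view matrix).
proj = perspective(np.radians(60.0), 16.0 / 9.0, 0.1, 100.0)
corners = frustum_far_corners(np.linalg.inv(proj))
center_ray = sky_direction(corners, 0.5, 0.5, np.zeros(3))  # -> (0, 0, -1)
```

On the GPU, only `frustum_far_corners` runs per frame; the interpolation and normalization happen in the full-screen pass, with the blend against scene depth deciding where the sky shows through.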


Cheers!

The "sky-quad" or sky-triangle works by multiplying each vertex by the inverse of the (camera rotation * projection) matrix and using the resulting vectors as 3D texture coordinates for a cubemap lookup.
I don't think there are any tradeoffs to worry about in the simple case, and for more advanced effects you will probably end up with whichever method the particular effect requires, so the choice will be made for you.
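A minimal numerical sketch of that unprojection, assuming OpenGL-style matrices and a yaw-only camera (both assumptions of mine, not from the post):

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    # OpenGL-style perspective projection (an assumed convention;
    # the post doesn't name an API).
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def rotation_y(angle):
    # Rotation-only view matrix (yaw). No translation: only the camera's
    # orientation matters for an infinitely distant sky.
    c, s = np.cos(angle), np.sin(angle)
    return np.array([
        [c, 0.0, s, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [-s, 0.0, c, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])

proj = perspective(np.radians(60.0), 16.0 / 9.0, 0.1, 100.0)
view_rot = rotation_y(np.radians(30.0))
inv = np.linalg.inv(proj @ view_rot)

def cubemap_dir(ndc_x, ndc_y):
    # Push the quad vertex to the far plane (NDC z = 1), unproject through
    # the inverse matrix, and normalize; the result is the 3D cubemap
    # texture coordinate for that vertex.
    h = inv @ np.array([ndc_x, ndc_y, 1.0, 1.0])
    d = h[:3] / h[3]  # homogeneous divide
    return d / np.linalg.norm(d)

center = cubemap_dir(0.0, 0.0)  # the camera's forward axis, yawed 30 degrees
```

In a shader you'd compute this per vertex (four corners, or three for a sky-triangle) and let the hardware interpolate the unnormalized vector before the cubemap fetch.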


The "sky-quad" or sky-triangle works by multiplying each vertex by the inverse of the camerarotation * projection matrix, and using the resultant vectors as 3D texture-coords for a cube-map lookup.
I don't think there are any tradeoffs you have to worry about for the simple case, and if you do more advanced effects you will probably end up with a method that's needed for the particular effect you want to achieve, so the choice will be made for you.


The paper I'm looking at actually uses this inverse-matrix approach, but it treats the result as though it were a normalized vector (at least in the comments), serving as the v portion of the ray equation (x + tv).

To me this doesn't quite seem accurate, as I'd expect you to be able to pick up the v portion just from the vertex coordinates of the quad multiplied by the 3x3 camera rotation matrix. So why do they multiply by the inverse of the view matrix (which would presumably contain translation info as well), and does the result *really* represent the normalized direction of a ray projected through the pixel?
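As a numerical sanity check of that intuition: with a rotation-only view matrix, unprojecting through the full inverse(projection * view) gives the same direction as unprojecting with inverse(projection) alone and then applying the transposed 3x3 camera rotation. All matrix conventions below are my own OpenGL-style assumptions, not necessarily the paper's:

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    # OpenGL-style perspective projection (assumed convention).
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def rotation_y(angle):
    # Yaw-only camera rotation, standing in for an arbitrary rotation.
    c, s = np.cos(angle), np.sin(angle)
    return np.array([
        [c, 0.0, s, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [-s, 0.0, c, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])

proj = perspective(np.radians(60.0), 16.0 / 9.0, 0.1, 100.0)
R = rotation_y(np.radians(25.0))        # rotation-only view matrix

ndc = np.array([0.7, -0.4, 1.0, 1.0])   # a quad vertex pushed to the far plane

# Route 1: the paper-style full inverse of (projection * view).
h1 = np.linalg.inv(proj @ R) @ ndc
p1 = h1[:3] / h1[3]
d1 = p1 / np.linalg.norm(p1)

# Route 2: unproject with inverse(proj) alone to get the view-space ray,
# then rotate it into world space with the transposed 3x3 camera rotation.
h2 = np.linalg.inv(proj) @ ndc
p2 = h2[:3] / h2[3]
d2 = R[:3, :3].T @ (p2 / np.linalg.norm(p2))
```

If the view matrix did carry translation, route 1 would instead yield a world-space point on the far plane; the ray direction v would then be that point minus the camera position, normalized.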
