Sky Domes


How many ico sphere subdivision steps does one normally use for rendering sky domes? Depending on the resulting number of vertices, does one normally use the geometry shader for the tessellation, or just bind a mesh? Which non-uniform scaling does one typically apply to transform the ico sphere into an ellipsoid (flatter appearance, thus more natural)?

For rendering an ico sphere as a model, I typically use 5k triangles (2.5k vertices, 5 subdivision steps).


2 triangles? A sky dome is an infinitely distant surface. All you need is a view vector from your pixel position; use it to intersect a virtual geometry to sample your sky textures.


A sky dome is rendered around the camera, and is actually pretty small. It's just big enough to cover the entire view of the camera, and it moves with the camera, so it appears infinitely distant. So, really, anything that covers the entire view of the camera would work. Sky boxes (a cube) are pretty easy to implement, and won't require any texture scrolling.

If you want to use an ico sphere anyway, just divide it once or twice. Twice, going by how Blender does it, gives you a round enough sphere with only 42 vertices.
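For reference, the counts follow a closed form: the base icosahedron has 12 vertices and 20 triangles, and each subdivision step quadruples the triangle count. A quick sketch in Python (the function name is mine; note that Blender counts the base icosahedron as subdivision level 1, so Blender's level 2 corresponds to one refinement step here):

```python
def icosphere_counts(subdivisions):
    """Vertex/triangle counts of an icosphere after n refinement steps.

    The base icosahedron (n = 0) has 12 vertices and 20 triangles;
    each subdivision step splits every triangle into 4.
    """
    triangles = 20 * 4 ** subdivisions
    vertices = 10 * 4 ** subdivisions + 2  # via Euler's formula V - E + F = 2
    return vertices, triangles

print(icosphere_counts(1))  # (42, 80)      -> Blender's "2 subdivisions"
print(icosphere_counts(4))  # (2562, 5120)  -> roughly 2.5k verts / 5k tris
```

The second line suggests the "5 subdivision steps, 2.5k vertices, 5k triangles" figure from the opening post matches four refinement steps when the base icosahedron is counted as step one.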

12 hours ago, galop1n said:

2 triangles ?

Could I still achieve the following (by somehow stretching the transformation):

The sky above the camera appears flatter (less curved).


I was hoping galop1n would answer this, because I'm not sure if I fully understand the way he was talking about doing it. But it seems like you could use the angle from the view vector to the horizon to translate the geometry towards the camera when looking up enough.

1 hour ago, Yxjmir said:

I was hoping galop1n would answer this, because I'm not sure if I fully understand the way he was talking about doing it.

If you use a fullscreen primitive (quad, triangle, etc.), you only need the NDC xy coordinates: use NDC z = 1.0 (far plane), transform NDC -> view (inverse projection) -> world (inverse view), and sample the cubemap with the world-space coordinates. This should be equivalent to a sphere around the camera.
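A sketch of that unprojection in plain Python (function and parameter names are mine, and I assume a symmetric perspective frustum): the NDC xy is turned into a view-space ray using the vertical FOV and aspect ratio, then rotated into world space with the camera's rotation, which is the rotational part of the inverse view matrix.

```python
import math

def ndc_to_world_dir(ndc_x, ndc_y, fov_y, aspect, view_rotation):
    """Turn NDC xy on the far plane into a normalized world-space direction.

    view_rotation: 3x3 row-major camera-to-world rotation (rows = right,
    up, forward), standing in for the inverse view matrix's rotation part.
    """
    t = math.tan(0.5 * fov_y)
    # View-space ray through the pixel (camera looking down +z here).
    view_dir = (ndc_x * t * aspect, ndc_y * t, 1.0)
    # Rotate into world space: world = x*right + y*up + z*forward.
    world = tuple(
        sum(view_dir[i] * view_rotation[i][j] for i in range(3))
        for j in range(3)
    )
    length = math.sqrt(sum(c * c for c in world))
    return tuple(c / length for c in world)
```

With an identity rotation, the screen center (0, 0) maps to the forward axis, as expected for sampling a sky cubemap.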

Not sure how he intends to stretch (non-uniform scaling?) this sphere into an ellipsoid, however.


If I were you, unless you can figure it out or find a tutorial/example, I would just use a cube or sphere. You can rotate them, and change the color multiplied by the texture color to simulate a day/night cycle, or have it fade over time into the sunrise/noon/sunset/midnight textures. The Sun is its own billboard (it always faces directly towards the camera), and so is the Moon; stars/clouds should be on the skydome's textures.

You could also fake the depth of the sphere with a slightly smaller one, shorter along the up-axis, that has a cloud texture and rotates around the depth-axis. Doing this, you don't have to worry about the long side of the ellipsoid ending up overhead. It also makes the clouds move independently of the day/night cycle, adding to the realism.

Whatever you decide, start simple; then, after you get it working, add things like trying to simulate depth on the 2 triangles.


My skybox can be set to 4 to 16 planes, from box to circle, with different colors for the up and down vertices, and a selectable number of times the texture is mirrored.

The only problem I have is that the top and bottom are not there, so you can't look up in my games.

I need some professional skybox tutorial, or maybe a sky geosphere? I want fast low-poly rendering with the best results.

I also want moving clouds passing by; anyone have a good tutorial?

10 hours ago, the incredible smoker said:

The only problem I have is that the top and bottom are not there, so you can't look up in my games.

This sounds like it might be a face winding order problem or normals are pointing the wrong way.

10 hours ago, the incredible smoker said:

I want fast low-poly rendering with the best results

Using a cube is the simplest and if textured well enough you won't notice a difference between it and a sphere, no matter how many sides it has.

This looks very simple, but I haven't tested it. It uses OpenGL, but the steps are pretty much the same regardless. It looks like a series of tutorials, so maybe he explained other parts that you need to fully implement it.

I would like to point out that the skybox should ONLY show the sky and clouds. The cube map example image in the tutorial is a bad example for a skybox. If you need more help, you might need to start a new topic, since matt77hias asked about subdividing the geometry and making an icosphere appear flatter.

14 hours ago, Yxjmir said:

This sounds like it might be a face winding order problem or normals are pointing the wrong way.

I did not add triangles for the top and bottom, just the sides, so there is no problem.

14 hours ago, Yxjmir said:

Using a cube is the simplest and if textured well enough you won't notice a difference between it and a sphere, no matter how many sides it has.

If there were no difference in performance, then I'd want it completely round; I bet the PC would crash. Of course it matters, also in poly count.

56 minutes ago, the incredible smoker said:

If there were no difference in performance, then I'd want it completely round; I bet the PC would crash. Of course it matters, also in poly count.

There is no difference in quality if it is textured well enough. Skydomes/skyboxes are typically rendered without taking depth or lighting into consideration. You won't be able to tell how many faces it has because there should be no shading on the faces. You'll only notice if the texture doesn't line up properly for each face. This is what I was saying when I said:

15 hours ago, Yxjmir said:

Using a cube is the simplest and if textured well enough you won't notice a difference between it and a sphere, no matter how many sides it has.


With a box you can see the corners if you look closely, but when playing the game you won't notice.


I tried the full screen quad approach, but I found that with icosphere geometry you have the opportunity for more effects, like simulating weather effects depending on the height of a vertex, etc. I am using a simple array of vertices from inside the shader; see this file. It gets compiled into an "immediate constant buffer" in HLSL. You should just call Draw(240) with a triangle list topology, without the need for vertex or index buffers. The vertex id (SV_VertexID) semantic in the vertex shader directly indexes into the array and you have your position. Just don't transform it with the world matrix, and leave out the translation while transforming with the view-projection by setting the position's w component to zero.

You can see an example here.

The GPU probably doesn't care anyway if you draw 240 vertices or 6 for a full screen quad in the VS. Maybe the pixel shader does, but that would be heavier with the full screen quad for the same effect anyway.
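The w = 0 trick works because the translation row of an affine matrix only contributes when w is nonzero; a minimal illustration in plain Python (row-vector convention, as in HLSL's mul(v, M); the names and matrix values are mine):

```python
def transform(m, v):
    """Multiply a row vector (x, y, z, w) by a 4x4 row-major matrix,
    mirroring HLSL's mul(v, M) convention."""
    return tuple(sum(v[i] * m[i][j] for i in range(4)) for j in range(4))

# A hypothetical view matrix: rotation part is the identity, translation
# is (5, 0, 0). The translation sits in the last row under this convention.
view = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [5, 0, 0, 1],
]

direction = (0, 0, 1, 0)  # w = 0: the translation row is multiplied by 0
position  = (0, 0, 1, 1)  # w = 1: the translation row contributes

print(transform(view, direction))  # (0, 0, 1, 0) -> translation dropped
print(transform(view, position))   # (5, 0, 1, 1) -> translation applied
```

That is why setting w to zero keeps the sky dome centered on the camera regardless of the camera's position.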

1 hour ago, turanszkij said:

6 for full screen quad in the VS

Currently it is only one quad (a fullscreen triangle, to be exact).

But what about the stretching? Do you use an unmodified icosphere or an actual ellipsoid?

2 minutes ago, matt77hias said:

Currently it is only one quad (a fullscreen triangle, to be exact).

But what about the stretching? Do you use an unmodified icosphere or an actual ellipsoid?

Just a plain old icosphere, which I am also using for deferred lights and debug geometries. What do you mean by stretching?

15 minutes ago, turanszkij said:

Just a plain old icosphere, which I am also using for deferred lights and debug geometries. What do you mean by stretching?

See the image some posts above this one. The sphere is non-uniformly scaled to an ellipsoid, which kind of stretches the sky, resulting in a flatter appearance above the camera.

9 minutes ago, matt77hias said:

See the image some posts above this one. The sphere is non-uniformly scaled to an ellipsoid, which kind of stretches the sky, resulting in a flatter appearance above the camera.

Oh yes, I see. You could just scale the sphere in the vertex shader by multiplying the vertices' y coord by something less than 1 before projection. This way the same icosphere can be reused in multiple shaders. I'm not doing the scaling though.
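In plain Python, the flattening amounts to this (names are mine; the same multiply would go in the vertex shader before the view-projection transform): scaling y by a factor below 1 pulls the zenith down while leaving the horizon untouched, which is exactly the flatter-overhead look being discussed.

```python
def flatten(p, factor):
    """Non-uniformly scale a sky-dome vertex along the up (y) axis."""
    x, y, z = p
    return (x, factor * y, z)

zenith = (0.0, 1.0, 0.0)   # straight overhead
horizon = (1.0, 0.0, 0.0)  # on the horizon

print(flatten(zenith, 0.5))   # (0.0, 0.5, 0.0) -> dome looks flatter overhead
print(flatten(horizon, 0.5))  # (1.0, 0.0, 0.0) -> horizon stays put
```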

2 hours ago, turanszkij said:

You can see an example here.

I used the zero-initialization (e.g. VSOut Out = (VSOut)0;) initially as well, but I noticed some small performance gains without it.

I noticed your globals.hlsli which contains both a g_xFrame_MainCamera_VP and g_xCamera_VP. Your skyVS uses the latter and a g_xFrame_MainCamera_PrevVP. I presume that g_xFrame_MainCamera_VP = g_xCamera_VP in most cases, but what if you have a flat mirror and some sky visible through the mirror? Won't the reflection camera conflict with g_xFrame_MainCamera_PrevVP? I started to refactor and merge lots of constant buffers together, but the camera still puzzles me. I currently have a large number of "passes" which are all tweaked with minimal constant buffers, but this becomes less maintainable in the future.

I am also curious about g_xFrame_MainCamera_InvP (I posted a recent topic about it). Does it support both perspective and orthographic cameras?

7 minutes ago, turanszkij said:

Oh yes, I see. You could just scale the sphere in the vertex shader by multiplying the vertices' y coord by something less than 1 before projection.

Do you think this factor is fixed or scene/game dependent?

1 hour ago, matt77hias said:

I used the zero-initialization (e.g. VSOut Out = (VSOut)0;) initially as well, but I noticed some small performance gains without it.

I noticed your globals.hlsli which contains both a g_xFrame_MainCamera_VP and g_xCamera_VP. Your skyVS uses the latter and a g_xFrame_MainCamera_PrevVP. I presume that g_xFrame_MainCamera_VP = g_xCamera_VP in most cases, but what if you have a flat mirror and some sky visible through the mirror? Won't the reflection camera conflict with g_xFrame_MainCamera_PrevVP? I started to refactor and merge lots of constant buffers together, but the camera still puzzles me. I currently have a large number of "passes" which are all tweaked with minimal constant buffers, but this becomes less maintainable in the future.

I am also curious about g_xFrame_MainCamera_InvP (I posted a recent topic about it). Does it support both perspective and orthographic cameras?

Do you think this factor is fixed or scene/game dependent?

I haven't liked the zero init either for some time now. Not for perf reasons, but because this way the compiler can't warn you about uninitialized vars and you can easily miss them. It's just hard to get down to refactoring it.

The main camera stuff can be confusing. The small camera constant buffer is responsible for rendering the scene for multiple cameras, while the post processes use the main camera buffer. The Prev and inverse camera properties only matter for the post processes; the other passes, like the reflection and shadow passes, can't generate velocity buffers, for instance, for which the prev camera is required. I expect this has some bugs and oddities which I haven't found yet.

I haven't a clue about the scale factor; you should experiment with it a bit and share the results.

15 minutes ago, turanszkij said:

The Prev and inverse camera properties only matter for the post processes

So deferred shading is categorized under post-processing (since you need to convert from NDC to view space or world space, depending on your shading space)? Although I do not see material unpacking data for converting data from the GBuffer (assuming you do not use 32-bit channels for your materials). Similarly for sky domes: they are categorized under post-processing (since you need to convert back to world space, which isn't handled by the camera)? If so, I understand the separation and am thinking of using something similar for the camera (and moving this upstream, away from my passes). Originally, I thought you created FrameCB for custom (engine-independent) scripts as a kind of data API for the user.

On ‎23‎/‎09‎/‎2017 at 7:01 PM, matt77hias said:

So deferred shading is categorized under post-processing (since you need to convert from NDC to view space or world space, depending on your shading space)? Although I do not see material unpacking data for converting data from the GBuffer (assuming you do not use 32-bit channels for your materials). Similarly for sky domes: they are categorized under post-processing (since you need to convert back to world space, which isn't handled by the camera)? If so, I understand the separation and am thinking of using something similar for the camera (and moving this upstream, away from my passes). Originally, I thought you created FrameCB for custom (engine-independent) scripts as a kind of data API for the user.

You are right: the deferred lighting can only be done for the main camera in my engine, and while the sky rendering can be performed for other passes, they don't meaningfully use the previous camera props in their pixel shaders. I didn't understand the last sentence, so you are probably right.

3 hours ago, turanszkij said:

I didn't understand the last sentence

I originally thought you defined these persistent buffers for the game developer using your engine and wanting to write custom shaders (though of course you can use these buffers for the engine shaders as well).


I used the icosphere-on-the-GPU approach:

PSInputWorldPosition VS(uint vertex_id : SV_VertexID) {
    PSInputWorldPosition output;

    output.p_world = 100.0f * g_icosphere[vertex_id];
    output.p       = mul(float4(output.p_world, 1.0f), g_world_to_projection);

    return output;
}

I currently use some constant to cover the scene (though the camera is then not allowed to move 100.0f units away), but is it possible to remove the constant and still cover the scene?


You have to translate it to the same position as the camera. Also, here is a link to a YouTube video about making skyboxes. It's written in Java, but it's not difficult to follow for whatever language you're using; it's pretty much the same steps minus the Java-specific stuff.

3 minutes ago, Yxjmir said:

You have to translate it to the same position as the camera. Also, here is a link to a YouTube video about making skyboxes. It's written in Java, but it's not difficult to follow for whatever language you're using; it's pretty much the same steps minus the Java-specific stuff.

PSInputWorldPosition VS(uint vertex_id : SV_VertexID) {
    const float3 p_world = g_icosphere[vertex_id];
    const float3 p_view  = mul(p_world, (float3x3)g_world_to_view);
    const float4 p_proj  = mul(float4(p_view, 1.0f), g_view_to_projection);

    PSInputWorldPosition output;

    output.p_world = p_world;
    output.p       = float4(p_proj.xy, 0.0f, p_proj.w);

    return output;
}

I was a bit dizzy from debugging, so I couldn't think straight. I now keep working with the direction in world space instead of the position: I transform the direction from world to view space, use that direction as a position in view space, and transform it to projection space. So the key is basically to have a direction over the sphere in view space (the rotated direction over the sphere in world space).

