Sky Domes

27 comments, last by matt77hias 6 years, 5 months ago
56 minutes ago, the incredible smoker said:

If there was no difference in performance then I want it completely sound, I bet the PC crashes; of course it matters, also in poly count.

There is no difference in quality if it is textured well enough. Skydomes/skyboxes are typically rendered without taking depth or lighting into consideration. You won't be able to tell how many faces it has because there should be no shading on the faces. You'll only notice if the texture doesn't line up properly for each face. This is what I was saying when I said:

15 hours ago, Yxjmir said:

Using a cube is the simplest and if textured well enough you won't notice a difference between it and a sphere, no matter how many sides it has.

 


With a box you can see the corners if you look closely; when playing the game you won't notice.


I tried the full screen quad approach, but I found that with icosphere geometry you have the opportunity for more effects, like simulating weather effects depending on the height of a vertex, etc. I am using a simple array of vertices from inside the shader, see this file. It gets compiled into an "immediate constant buffer" in HLSL. You just call Draw(240) with a triangle list topology, without the need for a vertex or index buffer. The vertex id (SV_VertexID) semantic in the vertex shader directly indexes into the array and you have your position. Just don't transform it with the world matrix, and drop the translation while transforming with the view-projection by setting the position's w component to zero.
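A minimal sketch of that setup (shortened to a single triangle here; the real array in the linked file has 240 positions, and the cbuffer layout and names are illustrative assumptions, not the actual shader):

```hlsl
// Sky positions baked into the shader; the compiler turns this into an
// immediate constant buffer. The real array holds 240 unit-sphere positions.
static const float3 SKY_POSITIONS[3] =
{
    float3( 0.0f,  1.0f, 0.0f),
    float3( 1.0f, -1.0f, 0.0f),
    float3(-1.0f, -1.0f, 0.0f),
};

cbuffer CameraCB : register(b0)
{
    float4x4 g_ViewProjection; // illustrative name
};

// Issue Draw(240) with a triangle list topology; no vertex or index
// buffers need to be bound.
float4 SkyVS(uint vertexID : SV_VertexID) : SV_Position
{
    // SV_VertexID indexes directly into the array.
    const float3 p = SKY_POSITIONS[vertexID];
    // w = 0 drops the translation part of the view-projection transform,
    // keeping the dome centered on the camera.
    return mul(float4(p, 0.0f), g_ViewProjection);
}
```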

You can see an example here.

The GPU probably doesn't care anyway whether you draw 240 vertices or 6 for a full screen quad in the VS. Maybe the pixel shader does, but for the same effect the full screen quad would be heavier anyway.

1 hour ago, turanszkij said:

6 for full screen quad in the VS

Currently it is only one quad (a fullscreen triangle to be exact).

But what about the stretching? Do you use an unmodified icosphere or an actual ellipsoid?
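For reference, the usual way to emit such a fullscreen triangle without any buffers bound (a sketch of the well-known SV_VertexID trick, not the actual shader from this thread):

```hlsl
// Fullscreen triangle from SV_VertexID alone: call Draw(3) with a
// triangle list topology and no vertex or index buffer.
void FullscreenTriangleVS(uint vertexID            : SV_VertexID,
                          out float4 position      : SV_Position,
                          out float2 texcoord      : TEXCOORD0)
{
    // vertexID 0,1,2 -> uv (0,0), (2,0), (0,2): one oversized triangle
    // whose clipped area exactly covers the screen.
    texcoord = float2((vertexID << 1) & 2, vertexID & 2);
    position = float4(texcoord * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f),
                      0.0f, 1.0f);
}
```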

🧙

2 minutes ago, matt77hias said:

Currently it is only one quad (a fullscreen triangle to be exact).

But what about the stretching? Do you use an unmodified icosphere or an actual ellipsoid?

Just a plain old icosphere, which I am also using for deferred lights and debug geometries. What do you mean by stretching?

15 minutes ago, turanszkij said:

Just a plain old icosphere, which I am also using for deferred lights and debug geometries. What do you mean by stretching?

See the image some posts above this one. The sphere is non-uniformly scaled into an ellipsoid, which stretches the sky texture and results in a flatter dome above the camera.


9 minutes ago, matt77hias said:

See the image some posts above this one. The sphere is non-uniformly scaled into an ellipsoid, which stretches the sky texture and results in a flatter dome above the camera.

Oh yes, I see. You could just scale the sphere in the vertex shader by multiplying the vertices' y coord by something less than 1 before projection. This way the same icosphere can be reused in multiple shaders. I'm not doing the scaling though.
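That in-shader scaling could look roughly like this (the flatten factor and cbuffer name are guesses for illustration; the thread leaves the right factor open):

```hlsl
cbuffer CameraCB : register(b0)
{
    float4x4 g_ViewProjection; // illustrative name
};

// Flatten the shared icosphere into an ellipsoid in the vertex shader
// instead of baking a non-uniform scale into the mesh, so the same
// icosphere can still be reused for lights and debug geometry.
static const float SKY_FLATTEN = 0.6f; // < 1 flattens the dome; tune per scene

float4 SkyVS(float3 position : POSITION) : SV_Position
{
    position.y *= SKY_FLATTEN;          // scale before projection
    // w = 0 keeps the dome centered on the camera.
    return mul(float4(position, 0.0f), g_ViewProjection);
}
```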

2 hours ago, turanszkij said:

You can see an example here.

I used the zero-initialization (e.g. VSOut Out = (VSOut)0;) initially as well, but I noticed some small performance gains without it.
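For illustration, the two initialization styles being compared (the VSOut struct is hypothetical):

```hlsl
struct VSOut
{
    float4 position : SV_Position;
    float2 texcoord : TEXCOORD0;
};

VSOut VS_ZeroInit(float3 p : POSITION)
{
    VSOut Out = (VSOut)0;   // zero-init: every member gets a defined value,
                            // but forgotten members are silently zero
    Out.position = float4(p, 1.0f);
    return Out;
}

VSOut VS_Explicit(float3 p : POSITION)
{
    VSOut Out;              // no zero-init: the compiler can warn if a
                            // member is left unwritten
    Out.position = float4(p, 1.0f);
    Out.texcoord = float2(0.0f, 0.0f);
    return Out;
}
```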

I noticed your globals.hlsli, which contains both a g_xFrame_MainCamera_VP and a g_xCamera_VP. Your skyVS uses the latter and a g_xFrame_MainCamera_PrevVP. I presume that g_xFrame_MainCamera_VP = g_xCamera_VP in most cases, but what if you have a flat mirror with some sky visible through the mirror? Won't the reflection camera conflict with g_xFrame_MainCamera_PrevVP? I started to refactor and merge lots of constant buffers together, but the camera still puzzles me. I currently have a large number of "passes" which are all tweaked with minimal constant buffers, but this becomes less maintainable over time.

I am also curious about g_xFrame_MainCamera_InvP (I posted a recent topic about it). Does it support both perspective and orthographic cameras?

7 minutes ago, turanszkij said:

Oh yes, I see. You could just scale the sphere in the vertex shader by multiplying the vertices' y coord by something less than 1 before projection.

Do you think this factor is fixed or scene/game dependent?


1 hour ago, matt77hias said:

I used the zero-initialization (e.g. VSOut Out = (VSOut)0;) initially as well, but I noticed some small performance gains without it.

I noticed your globals.hlsli, which contains both a g_xFrame_MainCamera_VP and a g_xCamera_VP. Your skyVS uses the latter and a g_xFrame_MainCamera_PrevVP. I presume that g_xFrame_MainCamera_VP = g_xCamera_VP in most cases, but what if you have a flat mirror with some sky visible through the mirror? Won't the reflection camera conflict with g_xFrame_MainCamera_PrevVP? I started to refactor and merge lots of constant buffers together, but the camera still puzzles me. I currently have a large number of "passes" which are all tweaked with minimal constant buffers, but this becomes less maintainable over time.

I am also curious about g_xFrame_MainCamera_InvP (I posted a recent topic about it). Does it support both perspective and orthographic cameras?

Do you think this factor is fixed or scene/game dependent?

I haven't liked the zero init for some time now either. Not for perf reasons, but because this way the compiler can't warn you about uninitialized vars and you can easily miss them. Just hard to get around to refactoring it :)

The main camera stuff can be confusing. The small camera constant buffer is responsible for rendering the scene from multiple cameras, while the post processes use the main camera buffer. The Prev and inverse camera properties only matter for the post processes; the other passes, like the reflection and shadow passes, can't generate velocity buffers, for instance, for which the prev camera is required. I expect this may have some bugs and oddities I haven't found yet.
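A rough sketch of that split, reusing the buffer names mentioned in the thread (the layout and registers are my guess, not the actual globals.hlsli):

```hlsl
// Small per-camera buffer: rebound for every pass
// (main camera, reflection camera, shadow camera, ...).
cbuffer CameraCB : register(b0)
{
    float4x4 g_xCamera_VP;
};

// Per-frame buffer: bound once, carries main-camera-only data
// that only the post processes need.
cbuffer FrameCB : register(b1)
{
    float4x4 g_xFrame_MainCamera_VP;
    float4x4 g_xFrame_MainCamera_PrevVP; // e.g. velocity buffer generation
    float4x4 g_xFrame_MainCamera_InvP;   // e.g. depth reconstruction
};
```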

I haven't a clue about the scale factor, you should experiment with it a bit and share the results :)

15 minutes ago, turanszkij said:

The Prev and inverse camera properties only matter for the post processes

So deferred shading is categorized under post-processing (since you need to convert from NDC to view space or world space, depending on your shading space)? Although, I do not see material unpacking data for converting data from the GBuffer (assuming you do not use 32-bit channels for your materials). Similarly for sky domes: are they categorized under post-processing (since you need to convert back to world space, which isn't handled by the camera)? If so, I understand the separation and I am thinking of using something similar for the camera (and moving this upstream, away from my passes). Originally, I thought you created FrameCB for custom (engine-independent) scripts, as a kind of data API for the user.
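The NDC-to-view-space conversion in question can be sketched with an inverse projection matrix like this (a common pattern, not the engine's actual code; since the full inverse matrix is used, it works for both perspective and orthographic projections, where the divide by w becomes a divide by 1):

```hlsl
cbuffer FrameCB : register(b1)
{
    float4x4 g_xFrame_MainCamera_InvP; // inverse projection matrix
};

// uv in [0,1]; depth as sampled from the depth buffer.
float3 ReconstructViewPosition(float2 uv, float depth)
{
    // [0,1] uv -> [-1,1] NDC, flipping y for D3D conventions.
    const float2 ndc = float2(uv.x * 2.0f - 1.0f, 1.0f - uv.y * 2.0f);
    const float4 view = mul(float4(ndc, depth, 1.0f),
                            g_xFrame_MainCamera_InvP);
    // For an orthographic projection view.w == 1, so this is a no-op.
    return view.xyz / view.w;
}
```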

 


This topic is closed to new replies.
