Sky Domes

On 23/09/2017 at 7:01 PM, matt77hias said:

So deferred shading is categorized under post-processing (since you need to convert from NDC to view space or world space, depending on your shading space)? Although, I do not see material unpacking code for converting the data from the GBuffer (assuming you do not use 32-bit channels for your materials). And similarly for sky domes: they are categorized under post-processing (since you need to convert back to world space, which isn't handled by the camera)? If so, I understand the separation, and I am thinking of using something similar for the camera (and moving this upstream, away from my passes). Originally, I thought you created FrameCB for custom (engine-independent) scripts, as a kind of data API for the user.

 

You are right, the deferred lighting can only be done for the main camera in my engine, and while the sky rendering can be performed for other passes, they don't meaningfully use the previous camera props in their pixel shaders. I didn't understand the last sentence, so you are probably right. ;)
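
(For reference, the NDC-to-world conversion in question is just a multiply by the inverse view-projection matrix followed by a perspective divide. A minimal sketch; g_projection_to_world is an assumed name for that inverse matrix, not something from either engine:)


float3 NDCToWorld(float3 p_ndc) {
    // Homogeneous NDC -> world space. g_projection_to_world is the inverse
    // view-projection matrix (assumed cbuffer member, for illustration only).
    const float4 p_world = mul(float4(p_ndc, 1.0f), g_projection_to_world);
    // Undo the perspective divide.
    return p_world.xyz / p_world.w;
}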

3 hours ago, turanszkij said:

I didn't understand the last sentence

I originally thought you defined these persistent buffers for the game developer using your engine who wants to write custom shaders (and of course, you can use these buffers for the engine shaders as well).

🧙

I used the icosphere-on-the-GPU approach:


PSInputWorldPosition VS(uint vertex_id : SV_VertexID) {
    PSInputWorldPosition output;
    
    // Scale the unit icosphere by an arbitrary constant so it covers the scene.
    output.p_world = 100.0f * g_icosphere[vertex_id];
    output.p       = mul(float4(output.p_world, 1.0f), g_world_to_projection);

    return output;
}

I currently use some constant to cover the scene (though this breaks as soon as you move more than 100.0f units), but is it possible to remove the constant and still cover the scene?
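
(For completeness, a sketch of the declarations the snippet assumes; the cbuffer layout is a guess, and the icosphere vertex table is elided:)


struct PSInputWorldPosition {
    float4 p       : SV_Position; // projection-space position
    float3 p_world : P_WORLD;     // world-space position/direction
};

cbuffer PerFrame : register(b0) {
    float4x4 g_world_to_projection; // world -> projection
    float4x4 g_world_to_view;       // world -> view
    float4x4 g_view_to_projection;  // view  -> projection
};

// Unit icosphere vertices, indexed directly by SV_VertexID (table elided):
// static const float3 g_icosphere[...] = { ... };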

🧙

You have to translate it to the same position as the camera. Also, here is a link to a YouTube video about making skyboxes; it's written in Java, but it's not difficult to follow for whatever language you're using. It's pretty much the same steps minus the Java-specific stuff.
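
(A minimal sketch of that, applied to the snippet above; g_camera_position is an assumed cbuffer member, the name is illustrative:)


PSInputWorldPosition VS(uint vertex_id : SV_VertexID) {
    PSInputWorldPosition output;
    
    // Re-center the dome on the camera each frame so it can never be left behind.
    output.p_world = g_camera_position + 100.0f * g_icosphere[vertex_id];
    output.p       = mul(float4(output.p_world, 1.0f), g_world_to_projection);

    return output;
}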

" rel="external">
3 minutes ago, Yxjmir said:

You have to translate it to the same position as the camera. Also, here is a link to a YouTube video about making skyboxes; it's written in Java, but it's not difficult to follow for whatever language you're using. It's pretty much the same steps minus the Java-specific stuff.

" rel="external">

PSInputWorldPosition VS(uint vertex_id : SV_VertexID) {
    
    // Treat the icosphere vertex as a direction: rotate it to view space
    // (the 3x3 part drops the translation), then project it.
    const float3 p_world = g_icosphere[vertex_id];
    const float3 p_view  = mul(p_world, (float3x3)g_world_to_view);
    const float4 p_proj  = mul(float4(p_view, 1.0f), g_view_to_projection);

    PSInputWorldPosition output;
    
    output.p_world = p_world;
    // Pin the dome to the far plane (z = 0 for an inverted z-buffer).
    output.p       = float4(p_proj.xy, 0.0f, p_proj.w);

    return output;
}

I was a bit dizzy from debugging, so I couldn't think straight. I keep working with the direction in world space instead of the position: I transform the direction from world to view space, use that direction as a position in view space, and transform it to projection space. So the trick is basically to have a direction over the sphere in view space (the rotated direction over the sphere in world space).

🧙

Well that was fast. Looks like that will work.

5 minutes ago, Yxjmir said:

Well that was fast. Looks like that will work.

Indeed, I was already coding for some minutes. :P

The magic of non-uniform stretching (FYI @turanszkij):


PSInputWorldPosition VS(uint vertex_id : SV_VertexID) {
    
    const float3 p_world = g_icosphere[vertex_id];
    const float3 p_view  = mul(p_world, (float3x3)g_world_to_view);
    // Non-uniformly stretch the dome along the view axis.
    const float3 p_sview = float3(1.0f, 1.0f, 1.5f) * p_view;
    const float4 p_proj  = mul(float4(p_sview, 1.0f), g_view_to_projection);

    PSInputWorldPosition output;
    
    output.p_world = p_world;
    output.p       = float4(p_proj.xy, 0.0f, p_proj.w);

    return output;
}

With [1, 1, 1] (uniform):

[Screenshot: uniform scaling]

With [1, 1, 1.5] (non-uniform):

[Screenshot: non-uniform scaling]

The little center cloud looks much farther away with the non-uniform scaling: stretching the view-space z compresses the projected offsets near the center of the screen, so the same patch of sky covers less screen space and reads as more distant.

WARNING: for everyone wanting to use the above code snippets, note that I use an inverted z-buffer (far plane at z = 0). So instead of

output.p = float4(p_proj.xy, 0.0f, p_proj.w);

you need to use

output.p = p_proj.xyww;

for a normal z-buffer (z/w = 1 after the perspective divide, which pins the dome to the far plane).

🧙

Final version:


PSInputWorldPosition VS(uint vertex_id : SV_VertexID) {
    PSInputWorldPosition output;
    
    output.p_world = g_icosphere[vertex_id];
    
    // Rotate to view space and stretch along the view axis.
    const float3 p_view  = mul(output.p_world, (float3x3)g_world_to_view);
    const float4 p_proj  = mul(float4(p_view.xy, 1.5f * p_view.z, 1.0f), 
                               g_view_to_projection);

    // Pin to the far plane.
    // Non-inverted z-buffer: output.p = p_proj.xyww;
    output.p       = float4(p_proj.xy, 0.0f, p_proj.w);

    return output;
}
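
(A matching pixel shader is then just a cube-map lookup in the interpolated world-space direction; a sketch, with g_sky and g_sampler as assumed names:)


TextureCube  g_sky     : register(t0);
SamplerState g_sampler : register(s0);

float4 PS(PSInputWorldPosition input) : SV_Target {
    // p_world interpolates over the unit icosphere: a world-space direction.
    return g_sky.Sample(g_sampler, normalize(input.p_world));
}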

 

🧙

