
# Skybox, render order and clipping thoughts

## 11 posts in this topic

Hi all,

I've managed to implement a skybox shader in my D3D engine, which functionally is 'OK'.

Nevertheless, there are a few things I can't figure out yet.

Rendering order:

- I now render the skybox first thing each frame, with z-buffer and z-write disabled, re-enabling both after drawing the skybox

- after this I render the scene/meshes

Clipping:

- when objects reach the far clipping plane, they are clipped inside the skybox.

This is logical behaviour, but it doesn't look OK. Here are 2 screenshots:

My questions;

1. would there be an advantage in rendering the skybox after the meshes, i.e. as the last thing per frame?

I tried it out with the z-buffer enabled and z-write disabled. When scaling my skybox by a factor of 15 (or so) this works, but I don't see the advantage. Enlarging the skybox like that doesn't feel right.

2. what could I do about the ugly clipping? I could change the far plane so objects are either completely culled or not, instead of 'clipped'. Or I could push the far plane even farther out so you won't notice it as much (performance impact?).

Or maybe you have other ideas on this.

Here's the code of my draw-skybox function and pseudo code of my render-frame function:

```cpp
bool CD3dskybox::Render(D3DXVECTOR3 pPlayerPosition, CD3dcam *pCam, LPDIRECT3DDEVICE9 pD3ddev)
{
    // Convert the incoming player position first, then center the skybox on it
    // (previously the stale position from the last frame was used for the world matrix).
    D3DXVECTOR3ToFloat3(pPlayerPosition, mPlayerPosition);
    mSkyBoxMesh.mWorldPos = mPlayerPosition;
    mSkyBoxMesh.UpdateWorldMatrix();

    if(D3DERR_INVALIDCALL == mSkyBoxEffect->SetMatrix("ViewProj", &pCam->mMatViewProjection)) return false;
    if(D3DERR_INVALIDCALL == mSkyBoxEffect->SetFloatArray("CameraPosition", mPlayerPosition, 3)) return false;
    if(D3DERR_INVALIDCALL == mSkyBoxEffect->SetTexture("SkyBoxTexture", mSkyBoxTexture)) return false;
    if(D3DERR_INVALIDCALL == mSkyBoxEffect->SetMatrix("World", &mSkyBoxMesh.mMatWorld)) return false;
    if(D3D_OK != pD3ddev->SetStreamSource(0, mSkyBoxMesh.mVtxBuffer, 0, mSkyBoxMesh.mVtxSize)) return false;
    if(D3D_OK != pD3ddev->SetIndices(mSkyBoxMesh.mFaceBuffer)) return false;

    // Draw the skybox with depth testing and depth writes off, and the winding
    // flipped because we are looking at the inside of the box.
    if(D3D_OK != pD3ddev->SetRenderState(D3DRS_ZENABLE, D3DZB_FALSE)) return false;
    if(D3D_OK != pD3ddev->SetRenderState(D3DRS_ZWRITEENABLE, FALSE)) return false;
    if(D3D_OK != pD3ddev->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW)) return false;

    mSkyBoxEffect->Begin(&mNumPasses, 0); // D3DXFX_DONOTSAVESTATE);
    for(unsigned int i = 0; i < mNumPasses; ++i) // was hard-coded to a single pass
    {
        mSkyBoxEffect->BeginPass(i);
        mSkyBoxMesh.RenderAll(pD3ddev, LIST);
        mSkyBoxEffect->EndPass();
    }
    mSkyBoxEffect->End();

    // Restore the default render states for the rest of the frame.
    if(D3D_OK != pD3ddev->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE)) return false;
    if(D3D_OK != pD3ddev->SetRenderState(D3DRS_ZWRITEENABLE, TRUE)) return false;
    if(D3D_OK != pD3ddev->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW)) return false;

    return true;
}
```

```cpp
bool CD3d::RenderFrameV2(CD3dscene *pD3dscene, CD3dcam *pCam)
{
    if(!CheckDevice()) { mDeviceLost = true; return true; }
    mObjRendered = 0;
    mAttrRendered = 0;

    pCam->Update();

    mD3ddev->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    mD3ddev->BeginScene();

    if(pD3dscene->mSkyBoxInScene)
        if(!pD3dscene->mSkyBox.Render(pCam->mPosition, pCam, mD3ddev)) return false;

    for(unsigned int ec = 0; ec < pD3dscene->mNrEffects; ++ec) // ec was undeclared
    {
        pD3dscene->mEffect[ec]->Begin(&pD3dscene->mEffectNumPasses[ec], 0); // D3DXFX_DONOTSAVESTATE);

        // loop through all materials, meshes and subobjects, DrawPrimitive calls etc.

        pD3dscene->mEffect[ec]->End();
    }
    // FFP rendering
    PrintSceneInfo(pCam, mObjRendered, mAttrRendered);

    mD3ddev->EndScene();
    HRESULT hr = mD3ddev->Present(NULL, NULL, NULL, NULL); // hr should be checked for D3DERR_DEVICELOST

    return true;
}
```



##### Share on other sites

1. No real advantage, no. Perhaps some marginal benefit from reduced overdraw, but I doubt that it's enough to be reflected in your framerate. However, if you render the skybox first, then you can perhaps use it as your z-clear by enabling z-write and disabling z-test.

2. Your z-far looks unusually close to the camera by modern standards; I think most users would expect greater draw distances. So first up I'd increase it as much as possible. But then you're still going to have clipping somewhere; if it's still a problem, there are a few tricks to help reduce the impact:

- You could use fogging (although it can be tricky to blend convincingly to the skybox)

- You could add a bit more relief to your terrain, at the moment it's very flat which accentuates the draw distance cut-off.

- You could try the approach of artificially curving your terrain (your vertex shader would modify vertex positions based on distance from camera), so that your flat terrain gains a horizon. I've only seen this in very arcadey RPG games though, not sure if you can pull it off in your style.

- You could fade out the geometry into the skybox. It's tricky to alpha-fade the world geometry without requiring lots of code to handle draw order, so the trick is to render your skybox last and somehow use the depth of the scene at a point to determine the opacity. There are a few ways to do this, but writing a 'skybox opacity' value into the frame buffer's alpha is the most straightforward.


##### Share on other sites

1. I wouldn't render the sky box first; it's going to be a waste of processing as based on that scene most of the skybox simply isn't going to be visible after the rest of the scene is drawn. Depending on how complex your sky shader is that could soon add up.

(Also, don't try to use a z-write as a 'z-clear' replacement; the hardware uses z-clear to reinitialise internal structures etc. and to make sure early z-rejection is enabled. Writing a z-pass instead of a clear could be counter-productive in this case.)

2. Doing the sky last also solves your clipping-into-the-sky issue, as the sky is always drawn behind other objects regardless of their depth. More importantly, you can't just 'push the far plane out further', as this will affect the accuracy of the z-buffer and you'll start to get issues such as z-fighting.


##### Share on other sites

Thanks both.

@Phantom; does this mean you would go for:

- clear (zbuffer and target)

- render all meshes/ scene

- disable z-write only, keep z-testing active

- render skybox

- enable zwrite

Next frame, same story.

This would mean I need to scale everything better than I do now :)

And it means having a skybox with specific dimensions. If I understand correctly, the dimensions of the skybox will then also influence the 'clipping', against the skybox instead of the far plane.

Is this what you mean?

@Columbo; I will get into fogging soon to see what I can do with it regarding the 'clipping'.

As easy as what you said sounds, "but writing a 'skybox opacity' value into the frame buffer's alpha is the most straightforward", I have no idea what you mean by this :)


##### Share on other sites

To make your sky geometry always appear to be at infinite depth, you can edit its vertex shader so it outputs w as z.

e.g. right at the end, after computing the position: OUT.Position.z = OUT.Position.w;

@Columbo As Phantom said -- it's important to actually issue clear commands for the z-buffer on modern hardware. If you instead just clear it yourself by rendering distant objects, you'll basically be disabling the Hi-Z feature.


##### Share on other sites

Great, that works and prevents me from having to scale the skybox by the same factor as the far clipping plane.

I now do the following:

- clear the z-buffer and target surface

- render all my meshes using shaders

- render the skybox: disable z-write, cull mode clockwise, draw, re-enable z-write, cull mode CCW

- render a sort of HUD with scene statistics (LPD3DXFONT), using the FFP

- present

The scene is outdoor and my far plane is now 1000 (was 170). Doesn't seem to be that high, does it?

Not sure yet whether there's an advantage in setting default render states myself after rendering the skybox, in combination with D3DXFX_DONOTSAVESTATE for the effect/shader rendering. If the default state saving and restoring done by the D3DX effect functions covers, say, 30 render states and I only set about 5 to 10 manually, then this might gain me something, or am I overlooking something?

@Hodgman; although it's working now without rescaling the skybox, I don't know exactly why yet. Can you explain what the 'w' position exactly contains and how it's generated? (I don't use W for z-buffering, at least I think not, since I'm not setting it.)


##### Share on other sites

> The scene is outdoor and my far plane is now 1000 (was 170). Doesn't seem to be that high, does it?

This depends on your near clip value; for an open-world game I worked on we had two passes, where the first 'near' pass had a near clip of 0.03f and a far of 333.3f (the second went from 333.3f as its near plane, then off into the distance).


##### Share on other sites

Thanks phantom, interesting article. I now use 1 for the near plane and 1000 for the far plane; I'll probably change that as I learn more. Still curious though about the w value of the position, will try to find that out.

##### Share on other sites

@Phantom; I just played around a bit with the near plane; I got it up to 4.65 (float) without messing up the rendering of the scene.

If I understand correctly, with 4.65 near and 750 far, there's less z-buffer precision wasted between 0.0 and 4.65, and that precision is now used from 4.65 to 750 (far).

Doesn't feel like much though, 4.65 instead of 1.0.

Any other options to 'trick things' and push the near plane further out without affecting visibility?


##### Share on other sites
The short version of the 'w' trick is: the GPU computes each pixel's depth-buffer value as 'zbuf = pos.z / pos.w'. So if you set z = w, then you get 'zbuf = w/w', i.e. 'zbuf = 1'.

To explain what 'pos.w' is, is a bit harder. It's sometimes called the 'homogeneous coordinate' of the position. To do our 3D perspective projections, it's easier to work in 4D space. In the vertex shader, we convert our Euclidean position to a homogeneous position by giving it a 4th (w) coordinate of 1. Our matrices then operate on these 4D values, producing a 4D position output. The GPU hardware automatically divides the position by its 'w' value per pixel, which is how we convert from homogeneous coordinates back to Euclidean coordinates.

All your other vertex-shader outputs (e.g. texture coordinates) are also divided by 'pos.w' per pixel in order to achieve "perspective correct" interpolation. So it's an important detail in 3D projection (and linear algebra) even if you don't understand its meaning ;-)

##### Share on other sites

Hi Hodgman,

Thanks, I think I understand; I'm actually working my way through the Getting Started with DX9 book from Frank Luna :)

If I understand correctly:

- if W = 0, we talk about a vector

- if W = 1, we talk about a point; the 1 makes sure translations etc. will succeed

- mapping back from X, Y, Z, W to X, Y, Z is done by dividing X, Y and Z by W

Reflecting on this, doing a homogeneous transformation on pos would mean that pos.z / W = z / 1 = still z?

And W/W = 1.

Does this mean that setting pos.z = pos.w means that, for my skybox shader vertices, z is always 1?

(and it is therefore always rendered within the visible z range, assuming minimum z = 0.0 and maximum 1.0?)

