
## Artefacts using big model


15 replies to this topic

### #1 BlackJoker (Members)

Posted 11 March 2013 - 02:44 PM

Hello,

I am using DX 11 for rendering. With small objects everything is OK, but with big objects some strange behaviour appears: some triangles disappear and reappear chaotically during camera movement. You can see more detail in the attached screenshots.

The ship is 3 km long, but it has only 15,000 triangles.

Does anyone know what the problem is here? Has anyone faced something similar before?

### #2 Steve_Segreto (Members)

Posted 11 March 2013 - 02:55 PM

Looks like z-fighting. What are your projection matrix parameters for the near and far clip planes?

### #3 BlackJoker (Members)

Posted 11 March 2013 - 03:43 PM

I have the following parameters for the projection matrix:

```cpp
screenDepth = 100000.0f; // far plane
screenNear  = 0.01f;     // near plane
```

### #4 MJP (Moderators)

Posted 11 March 2013 - 03:49 PM

Yeah those are wayyyyy too far apart. Bring in the far plane or push up the near plane.

### #5 BlackJoker (Members)

Posted 11 March 2013 - 04:05 PM

Hm, looks like this was the reason. I set

```cpp
screenDepth = 60000.0f; // far plane
screenNear  = 1.0f;     // near plane
```

and everything seems OK, but now I cannot understand how to render big/huge maps in that case, for example hundreds of thousands of km of space. Because of the screen depth it cannot be rendered. What must I do in that case?

Edited by BlackJoker, 11 March 2013 - 04:06 PM.

### #6 TiagoCosta (Members)

Posted 11 March 2013 - 04:39 PM

There are some methods to "increase" the precision of depth buffers; this article discusses a few of them. Pay special attention to the part about logarithmic depth buffers.

Edited by TiagoCosta, 11 March 2013 - 04:42 PM.

### #7 swiftcoder (Senior Moderators)

Posted 11 March 2013 - 05:48 PM

TiagoCosta's article is the best you can get with direct hardware support. For bigger scenes (beyond the effective accuracy of a logarithmic/floating-point depth buffer), you need to partition the objects in your scene into depth ranges, and render each range with its own projection matrix, with the near/far planes adjusted.

Tristam MacDonald - Software Engineer @ Amazon - [swiftcoding] [GitHub]

### #8 Matias Goldberg (Members)

Posted 11 March 2013 - 06:09 PM

> Hm, looks like this was the reason. I set `screenDepth = 60000.0f; screenNear = 1.0f;` and everything seems OK, but I cannot understand how in that case to render big/huge maps, for example hundreds of thousands of km of space. Because of the screen depth it cannot be rendered. What must I do in that case?

Short story: we cheat.

Long story: oh no, I won't go into detail. But we use advanced rendering techniques to maximize precision (such as the ones on the Outerra blog). Here's another (advanced, not sure if it's good for a newbie) slide deck: Rendering vast worlds.

We also render what's far away using 2D billboards instead of actual 3D geometry (the billboard is called an impostor, because we render the 3D model to a texture and then display that texture as a billboard), or just use fog, etc., to hide artifacts.

Some people prefer to render the scene in two passes (two depth buffers: what's close in one pass, what's far in the other) and then composite both. It's troublesome, but it does the job.

### #9 BlackJoker (Members)

Posted 11 March 2013 - 11:49 PM

Thanks to all for the help. It is very interesting; I didn't know about such techniques. Please tell me: if I want to render a huge map, could I also use 3 or more projection matrices for that? I don't know how it would display on screen in such a case. Will I see all the objects I render at all distances, or would some of them not be visible?

### #10 swiftcoder (Senior Moderators)

Posted 12 March 2013 - 09:02 AM

> Thanks to all for the help. It is very interesting; I didn't know about such techniques. Please tell me: if I want to render a huge map, could I also use 3 or more projection matrices for that? I don't know how it would display on screen in such a case. Will I see all the objects I render at all distances, or would some of them not be visible?

You can render a terrain in multiple regions, sorted by their distance to the viewer. I've done this for a quick clipmap terrain prototype, but while it can be made to work, the depth interactions at region edges can get a little sticky...

Space scenes (which came up earlier in the thread?) are generally much simpler, because they are fairly sparsely populated. You can reasonably sort the entire list of objects from back-to-front, customise the near/far planes for each one, and then just draw them out in order.

Tristam MacDonald - Software Engineer @ Amazon - [swiftcoding] [GitHub]

### #11 BlackJoker (Members)

Posted 13 March 2013 - 11:46 AM

swiftcoder,

Thanks, I will try rendering a big model with 2 or more projection matrices. We'll see how it goes.

### #12 BlackJoker (Members)

Posted 14 March 2013 - 03:21 PM

Could someone please give a simple example with a logarithmic depth buffer for DirectX 11? From the article it is hard to understand how to implement it.

Edited by BlackJoker, 14 March 2013 - 03:21 PM.

### #13 mikiex (Members)

Posted 15 March 2013 - 04:41 PM

> Hm, looks like this was the reason. I set `screenDepth = 60000.0f; screenNear = 1.0f;` and everything seems OK, but I cannot understand how in that case to render big/huge maps, for example hundreds of thousands of km of space. Because of the screen depth it cannot be rendered. What must I do in that case?

It's worth pointing out that the depth buffer is normally non-linear: changing the near clip makes a huge difference compared to moving the far clip.

So you want to push the near plane out as far as possible, though how far depends on how close your camera will get to objects in the scene. The far plane does matter, but getting the near plane right is the most important part. Also remember: you say 3 km, but you really mean 3k units, since the scale is arbitrary.

### #14 BlackJoker (Members)

Posted 16 March 2013 - 08:28 AM

mikiex,

Thanks for the clarification, but I want the camera to be as close to the object as possible. Do you know anything about logarithmic depth buffers? I tried to find an example for DX11, but I didn't find any code for it.

### #15 mikiex (Members)

Posted 16 March 2013 - 05:27 PM

> Thanks for the clarification, but I want the camera to be as close to the object as possible. Do you know anything about logarithmic depth buffers? I tried to find an example for DX11, but I didn't find any code for it.

As close as possible would be your eyeball touching the surface, i.e. a near clip of 0. I don't know anything about log depth buffers beyond the fact that the concept exists, though I'm sure DX11 would make something like this possible. Consider, though, that not many games in the past have bothered to come up with a physically correct solution. Yes, it's been a matter of contention, but on every project I've worked on we have managed to work around this issue by faking it.

### #16 TiagoCosta (Members)

Posted 17 March 2013 - 06:14 AM

> Could someone please give a simple example with a logarithmic depth buffer for DirectX 11? From the article it is hard to understand how to implement it.

Add this line at the end of the vertex shader, after the multiplication by the projection matrix (the output struct is called `output` here, since `out` is a reserved keyword in HLSL):

```hlsl
output.Position.z = log(C * output.Position.z + 1) / log(C * Far + 1) * output.Position.w;
```

where `C` is a constant that controls the depth resolution near the camera (try `C = 1.0f`), and `Far` is the far plane distance used to create the projection matrix.
