Texturing huge levels?

So if I have a huge open level I need to render in-game (using XNA atm), what's the best method people can recommend for the texturing?
I've noticed that in most commercial games, environments are only rendered with high-res textures within a certain distance of the player; beyond that distance they switch to a lower-res texture (which tends to be missing normal maps too). You can usually spot this in commercial games if you look for it.

I guess it's obvious that this is necessary for high-end graphics, but I'm unsure how it's accomplished. If I handle the texturing with a shader, is it just a matter of creating two different texture samplers, one for the high-res texture and one for the low, and then choosing between them based on distance from the character? I assume that would work, but is it the most efficient way to go about it? I'm unclear about where the most resources are used. Wouldn't this just add to the strain on the GPU, having two samplers instead of one? Or would that matter less than the fact that there'd be less texture resolution on screen in total?

Thanks in advance and apologies for the clumsy description.
Most environments use modular textures, that is, they reuse textures en masse. The only exception that comes to mind is id Tech 5 (the game Rage), which uses virtual textures (called MegaTexture). The process of reducing a texture's quality with distance is called mipmapping. Normal maps are a special case, because high-frequency normal maps would cause lighting to flicker in the distance. Toning them down, either by clever use of mipmapping or by turning off expensive shaders at distance, is a valid option.
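For reference, here's a minimal D3D9-style HLSL/FX sketch of a trilinearly mipmapped sampler (the texture name is an assumption for illustration); with MipFilter = LINEAR the hardware blends between mip levels automatically:

texture DiffuseTexture;   // hypothetical texture, bound from the application

sampler2D DiffuseSampler = sampler_state
{
    Texture   = <DiffuseTexture>;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
    MipFilter = LINEAR;   // trilinear filtering: smooth blending between mip levels
};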

If mipmapping is not enough, you need two samplers and an interpolation value (e.g. time-dependent, or depth from a g-buffer) to blend between the textures. But this technique only makes sense when you are streaming in hi-detail textures; if the hi-detail texture is already resident, just use mipmapping.
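As a rough illustration of that blend, here's a minimal HLSL pixel shader sketch. The sampler names, the BlendStart/BlendEnd parameters and the per-pixel distance input are assumptions for illustration, not anything from a particular engine:

texture LowResTexture;    // always resident
texture HighResTexture;   // streamed in on demand

sampler2D LowResSampler  = sampler_state { Texture = <LowResTexture>;  MipFilter = LINEAR; };
sampler2D HighResSampler = sampler_state { Texture = <HighResTexture>; MipFilter = LINEAR; };

float BlendStart;   // distance where the high-res version starts fading out
float BlendEnd;     // distance beyond which only the low-res version is used

float4 BlendPS(float2 uv : TEXCOORD0, float dist : TEXCOORD1) : COLOR0
{
    float4 hi = tex2D(HighResSampler, uv);
    float4 lo = tex2D(LowResSampler,  uv);

    // 0 near the camera (all high-res) -> 1 in the distance (all low-res)
    float t = saturate((dist - BlendStart) / (BlendEnd - BlendStart));
    return lerp(hi, lo, t);
}

technique DistanceBlend
{
    pass P0
    {
        PixelShader = compile ps_2_0 BlendPS();
    }
}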
Thanks for the quick reply.
Yes, I'm aware of how mipmapping works, but I was under the impression that standard mipmapping would give more of a blend between foreground and background textures, whereas in quite a lot of commercial games I've noticed a clear line between the two (it's incredibly noticeable in the original Mass Effect). Does this not suggest that they're using two texture samplers? EDIT: I quickly read up on the details of mipmapping, and I assume the visual artifact was just down to them using point filtering?

Could you also elaborate on what your last paragraph meant? What do you mean by "only makes sense when streaming in hi-detail textures"? I've done the two-samplers-with-an-interpolation-value approach before, in one of Riemer's XNA tutorials if I remember correctly. I'm not entirely clear on the difference between these two methods. Surely using two texture samplers and interpolating between them is effectively just a manual way of achieving the same thing mipmapping does?

clear line between the two

Do you have a screenshot showing the effect you mean?


What do you mean by "only makes sense when streaming in hi-detail textures"?

Hi-res textures can consume huge amounts of memory, especially on memory-limited devices like consoles. In this case, often only low-res textures are loaded for potentially visible geometry. When you get close to the geometry, the hi-res textures are loaded in the background (streaming) and slowly blended in to replace the low-res versions. This is a continuous process in which hi-res textures that are no longer needed get overwritten by newly streamed ones.

A simple way to accomplish this is to use two samplers during the blending (exchanging low res <-> hi res). Once you have faded the hi-res texture in, you can use another shader, a dummy texture, branching, etc. to turn the second sampler off.
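A sketch of that fade-in, again in HLSL and with assumed names: the application would ramp FadeAmount up each frame once the streamed texture is resident, then switch to a cheaper single-sampler shader when it reaches 1:

texture LowResTexture;
texture HighResTexture;   // becomes valid once streaming completes

sampler2D LowResSampler  = sampler_state { Texture = <LowResTexture>;  MipFilter = LINEAR; };
sampler2D HighResSampler = sampler_state { Texture = <HighResTexture>; MipFilter = LINEAR; };

float FadeAmount;   // 0 = low-res only; driven towards 1 by the application as the hi-res texture streams in

float4 FadePS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 lo = tex2D(LowResSampler,  uv);
    float4 hi = tex2D(HighResSampler, uv);
    return lerp(lo, hi, FadeAmount);   // slowly exchange low res <-> hi res
}

technique StreamFade
{
    pass P0
    {
        PixelShader = compile ps_2_0 FadePS();
    }
}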
To understand this I'd recommend that you look at how it works in a sample program. I'm going to refer you to the D3D9 mipmapping sample on the CodeSampler website: http://www.codesampler.com/dx9src/dx9src_3.htm#dx9_texture_mipmapping

Don't worry about the source code for now, just run the application.

The option of interest here is D3DSAMP_MIPFILTER, which controls the type of filtering done between mip levels. You should switch back and forth between D3DTEXF_LINEAR and D3DTEXF_POINT for this option and see how the image changes. The linear mip filter will give you a smooth gradient between mip levels, whereas the point mip filter will give you the sudden changes with visible lines between different levels of detail that you have observed.
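If you'd rather flip the same switch from an effect file than from C++ (D3DSAMP_MIPFILTER corresponds to the MipFilter state in FX sampler blocks), here's a sketch with an assumed texture name:

texture WallTexture;

// D3DTEXF_LINEAR equivalent: smooth gradient between mip levels
sampler2D LinearMipSampler = sampler_state
{
    Texture   = <WallTexture>;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
    MipFilter = LINEAR;
};

// D3DTEXF_POINT equivalent: abrupt switches with visible lines between mip levels
sampler2D PointMipSampler = sampler_state
{
    Texture   = <WallTexture>;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
    MipFilter = POINT;
};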

Now, the texture they've chosen for this sample app is a fairly contrived one, designed to demonstrate the effect of different filters. So you can also try replacing it with something more representative of what you're likely to encounter in a real program. I used the brick wall texture available at http://en.wikipedia.org/wiki/File:Brickwall_texture.jpg as an example, pasting the new image into the textures provided with the app and reducing the size as appropriate. Here's what it looks like with a point mip filter: http://i47.tinypic.com/o5p16f.jpg

You need to look a little closer, but again you can see a sharp transition between different mip levels.


