Adding detail to heightmap terrain with noise?

When synthesizing extra detail, I don't understand how it can be done on the GPU. I mean, if you're using a noise function or sampling a noise texture, you're building that extra detail on existing heightmap data, right? But what if you're going down more levels, where the previous level is also synthesized?

 

For example:

 

- Heightmap -

Level 0 - Coarse resolution

Level 1 - Fine resolution

Level 2 - Finer resolution

 

Now you want to add extra LOD levels:

 

- Synthesis -

Level 3 - Interpolate Level 2 vertices + Add noise

Level 4 - ??? + Add noise

 

The question marks represent the Level 3 vertices that you want to interpolate, except you don't have their heights stored...

 

What am I missing?

 

Edit: Also, what about lighting? How do you calculate the new normals once the noise has been added?
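
Edit 2: The best I can come up with for the normals is to recompute them from the synthesized height texture with central differences, instead of storing them. Something like this sketch, where HeightSampler, OneOverSize and GridSpacing are stand-in names for the level's elevation texture, its texel size, and the world-space spacing between samples. Is that the usual approach?

sampler2D HeightSampler;   // this level's (synthesized) elevation texture
float     OneOverSize;     // 1 / texture size
float     GridSpacing;     // world-space distance between adjacent samples

float3 ComputeNormal(float2 uv)
{
    // Central differences on the height field.
    float hl = tex2D(HeightSampler, uv - float2(OneOverSize, 0)).x;
    float hr = tex2D(HeightSampler, uv + float2(OneOverSize, 0)).x;
    float hd = tex2D(HeightSampler, uv - float2(0, OneOverSize)).x;
    float hu = tex2D(HeightSampler, uv + float2(0, OneOverSize)).x;

    // Cross product of the two tangents, simplified for a y-up height field.
    return normalize(float3(hl - hr, 2.0f * GridSpacing, hd - hu));
}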


Noise can be used to add fidelity at material transitions on a terrain by introducing stochastic detail to the otherwise low-frequency blend masks. Alternatively, you could use blend maps on a per-material basis to achieve a similar effect (adding high-frequency detail to a low-frequency blend mask).
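
To make the first idea concrete, here is a minimal sketch. All the names (BlendMaskSampler, NoiseSampler, the two material samplers, NoiseStrength) are placeholders, not from any particular engine: the low-frequency blend weight is perturbed by a tiling high-frequency noise value before the two materials are blended.

sampler2D BlendMaskSampler;   // low-frequency material blend mask
sampler2D NoiseSampler;       // tiling high-frequency noise
sampler2D GrassSampler;
sampler2D RockSampler;
float     NoiseStrength;      // how far the transition may be pushed around

float4 BlendMaterialsPS(float2 uv : TEXCOORD0) : COLOR
{
    float blend = tex2D(BlendMaskSampler, uv).x;
    float noise = tex2D(NoiseSampler, uv * 32.0f).x;   // arbitrary tiling rate

    // Shift the transition locally by the signed noise, then clamp.
    blend = saturate(blend + (noise - 0.5f) * NoiseStrength);

    return lerp(tex2D(GrassSampler, uv * 64.0f),
                tex2D(RockSampler,  uv * 64.0f),
                blend);
}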

The references to "adding detail to heightmap terrain with noise" I have seen have been about using noise to break up the low-frequency (i.e. low-resolution) blend transitions between terrain materials. Read the Blend Maps paper I linked to for the concept of low- and high-frequency blending detail (it is an alternative to using noise).



Oh, I'm not talking about materials, I'm talking about geometry.

People don't tend to refine geometry that way all that often.

 

It's a useful technique in a handful of specific cases, but most of the time it is cheaper to leave the base geometry alone and use normal maps or other displacement-mapping techniques to add surface detail (although displacement mapping may include geometry refinement, such as via DX11 tessellation).
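
For the DX11 route, the displacement half looks roughly like this. A sketch only (the hull shader and the transform to clip space are omitted), with DetailHeightMap and DisplacementScale as placeholder names: the tessellator generates new vertices across each quad patch, and the domain shader pushes each one along its interpolated normal by a sampled height.

Texture2D    DetailHeightMap;   // placeholder detail height texture
SamplerState LinearSampler;
float        DisplacementScale; // placeholder displacement amplitude

struct PatchVertex
{
    float3 position : POSITION;
    float3 normal   : NORMAL;
    float2 uv       : TEXCOORD0;
};

struct PatchConstants
{
    float edges[4]  : SV_TessFactor;
    float inside[2] : SV_InsideTessFactor;
};

[domain("quad")]
PatchVertex TerrainDS(PatchConstants pc,
                      float2 d : SV_DomainLocation,
                      const OutputPatch<PatchVertex, 4> patch)
{
    PatchVertex v;

    // Bilinearly interpolate position, normal and uv across the patch.
    v.position = lerp(lerp(patch[0].position, patch[1].position, d.x),
                      lerp(patch[2].position, patch[3].position, d.x), d.y);
    v.normal   = normalize(lerp(lerp(patch[0].normal, patch[1].normal, d.x),
                                lerp(patch[2].normal, patch[3].normal, d.x), d.y));
    v.uv       = lerp(lerp(patch[0].uv, patch[1].uv, d.x),
                      lerp(patch[2].uv, patch[3].uv, d.x), d.y);

    // Displace along the normal; SampleLevel is required because the
    // domain shader has no derivatives for automatic mip selection.
    float h = DetailHeightMap.SampleLevel(LinearSampler, v.uv, 0).x;
    v.position += v.normal * h * DisplacementScale;

    return v;
}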

Oh OK, although I think GPU geometry clipmaps does do that...


From the GPU Gems website:

float4 UpsamplePS(float2 p_uv : TEXCOORD0) : COLOR
{
    // Residual stored for this level; only the x component is used.
    float residual = tex2D(ResidualSampler, p_uv*OneOverSize).x;

    p_uv = floor(p_uv);
    float2 p_uv_div2 = p_uv/2;
    float2 lookup_tij = p_uv_div2+1;
    // The lookup texture encodes which even/odd phase this fine-level
    // sample has relative to the coarse grid (four possible combinations).
    float4 maskType = tex2D(LookupSampler, lookup_tij);

    // Tensor-product masks of the four-point interpolatory subdivision
    // filter (-1/16, 9/16, 9/16, -1/16), one per phase combination.
    matrix maskMatrix[4];
    maskMatrix[0] = matrix(0, 0, 0, 0,
                           0, -1.0f/16.0f, 0, 0,
                           0, 0, 0, 0,
                           1.0f/256.0f, -9.0f/256.0f, -9.0f/256.0f, 1.0f/256.0f);

    maskMatrix[1] = matrix(0, 1, 0, 0,
                           0, 9.0f/16.0f, 0, 0,
                           -1.0f/16.0f, 9.0f/16.0f, 9.0f/16.0f, -1.0f/16.0f,
                           -9.0f/256.0f, 81.0f/256.0f, 81.0f/256.0f, -9.0f/256.0f);

    maskMatrix[2] = matrix(0, 0, 0, 0,
                           0, 9.0f/16.0f, 0, 0,
                           0, 0, 0, 0,
                           -9.0f/256.0f, 81.0f/256.0f, 81.0f/256.0f, -9.0f/256.0f);

    maskMatrix[3] = matrix(0, 0, 0, 0,
                           0, -1.0f/16.0f, 0, 0,
                           0, 0, 0, 0,
                           1.0f/256.0f, -9.0f/256.0f, -9.0f/256.0f, 1.0f/256.0f);

    float2 offset = float2(dot(maskType.bgra, float4(1, 1.5, 1, 1.5)),
                           dot(maskType.bgra, float4(1, 1, 1.5, 1.5)));

    // Predict the fine elevation from a 4x4 neighborhood of coarse samples.
    float z_predicted = 0;
    offset = (p_uv_div2-offset+0.5)*OneOverSize+TextureOffset;
    for (int i = 0; i < 4; i++) {
        float zrowv[4];
        for (int j = 0; j < 4; j++) {
            float2 vij = offset+float2(i,j)*OneOverSize;
            zrowv[j]   = tex2D(CoarseLevelElevationSampler, vij).x;
        }

        vector mask = mul(maskType.bgra, maskMatrix[i]);
        vector zrow = vector(zrowv[0], zrowv[1], zrowv[2], zrowv[3]);
        zrow = floor(zrow);                 // keep only the packed integer part
        z_predicted = z_predicted + dot(zrow, mask);
    }

    z_predicted = floor(z_predicted);

    // Add the residual to get the actual elevation.
    float zf = z_predicted + residual;

    // zf should always be an integer, since it gets packed
    // into the integer component of the floating-point texture.
    zf = floor(zf);

    float4 uvc = floor(float4((p_uv_div2+float2(0.5f,0)),
                              (p_uv_div2+float2(0,0.5f))))*OneOverSize+TextureOffset.xyxy;

    // Look up the two nearest samples in the coarser level.
    float zc0 = floor(tex2D(CoarseLevelElevationSampler, uvc.xy).x);
    float zc1 = floor(tex2D(CoarseLevelElevationSampler, uvc.zw).x);

    // Pack the fine elevation into the integer part and the coarse average
    // (used for smooth LOD transition blending) into the fractional part.
    float zf_zd = zf + ((zc0+zc1)/2-zf+256)/512;

    return float4(zf_zd, 0, 0, 0);
}
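
For context: that listing is the upsampling shader from GPU Gems 2, Chapter 2 ("Terrain Rendering Using GPU-Based Geometry Clipmaps"). It predicts each fine-level height from a 4x4 neighborhood of the coarser level using the even/odd subdivision masks, adds a stored residual, and packs a transition-blend value into the fractional part of the output. Note that the output is rendered into that level's own elevation texture, which is then bound as CoarseLevelElevationSampler when the next finer level is generated.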

Read the full GPU Gems chapter. It's a LOD technique for performance, not about displacement mapping or tessellation.


I know that, but they add detail at runtime using either synthesis or decompression. I'm talking about the synthesis.
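
If I understand the chapter correctly, the synthesis case is handled by the same upsampling pass: the only change is that the residual is read from a small precomputed Gaussian-noise texture instead of from stored residual data, with the amplitude scaled appropriately per level. And because every level, synthesized or not, gets rendered into its own elevation texture, the next finer level always has stored heights to interpolate; that resolves the question marks in my original post. A minimal sketch of the substitution (NoiseSampler and NoiseAmplitude are my names; OneOverSize is the listing's global):

sampler2D NoiseSampler;     // small precomputed Gaussian-noise texture, tiling
float     NoiseAmplitude;   // per-level scale for the synthesized residual

// Drop-in replacement for the residual lookup in UpsamplePS above:
// z_predicted is computed exactly as in the listing, and the residual
// is synthesized instead of fetched from stored data.
float SynthesizeResidual(float2 p_uv)
{
    float n = tex2D(NoiseSampler, p_uv * OneOverSize).x;   // wraps, so it tiles
    return (n - 0.5f) * NoiseAmplitude;
}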

They do not add detail. You start with your "hi-res" mesh (generated offline, through some modelling process or whatever) and then reduce its complexity with respect to the distance from the viewer, to cut the workload of the rendering process. That is not the same as "adding detail at runtime", which would imply some form of tessellation or surface-displacement technique.
