3D Limitation of terrain resolution

Sorry for my poor English :(

Hello, I'm currently developing a heightmap renderer as a study project.

The renderer runs mostly on the GPU side (geometry shader).

Currently, starting from an array of zero points (vec3(0,0,0)), the geometry shader expands the terrain (128x128 to 128x128x3x3) and computes height and normal every frame.

A 128x128x3x3 terrain is my current limit on a GTX 660, taking 23 ms per frame.

As far as I know, for game development it is more efficient to precompute this data on the CPU than to regenerate it on the GPU every frame.

So I'm trying to move terrain generation to CPU-side calculation.

That way, height and normal would be calculated only once, at initialization.

All the GPU would do is render the terrain data from a precomputed VBO or SSBO.

However, what I want to ask is: "What is the maximum terrain resolution that can be rendered without the frame rate collapsing?"

I know it depends on the other components of the renderer and on GPU performance, but please answer based on your experience.

Example 1) A 16384x16384 terrain was enough to render at 60 fps on a GTX 1060.

Example 2) Summoner's Rift in League of Legends uses a 4096x4096 terrain map.

Waiting for your answers, and thank you for reading!!

Edited by yhkim


The answer depends on how you render that terrain. The naive approach is to generate a grid, apply the height, and store normals as vertex attributes. This works fine, but once you start making bigger terrains you waste a lot of mesh density far from the camera and outside the frustum.

Most terrain systems use a level-of-detail (LOD) scheme where the density of the terrain decreases far from the camera. You can also run visibility tests to reject parts of the terrain that lie outside the view frustum.

I recently worked on a terrain system where the height was stored in a texture (sampled in the vertex shader) and the normals were also calculated from that texture. The terrain was divided into chunks so I could apply frustum culling. I experimented with tessellation for LOD but didn't have enough time to polish it (though I got acceptable results). With this, I was able to render quite large terrains at a nice frame rate.

I can't give you exact numbers; you may want to experiment with different techniques. For example, if you intend to build some kind of procedural, deformable terrain, you may want to implement it with marching cubes or similar.


