
OpenGL Implementing LOD based Terrain



Hey folks! I'm currently trying to implement a satisfying terrain renderer with LOD. My requirements are:
1. Tremendous height maps => streaming;
2. Dynamic lighting and shadowing;
3. More may appear as I progress, but right now it's just these two...

I would like some experienced people to guide me through this... I have already implemented conversion of a (2^n + 1) x (2^n + 1) height map into a simple quad tree of chunks, each of which holds its own vertex buffer, index buffer and AABB. The input to this conversion is:

1. Height Map size: (2^n + 1) x (2^n + 1);
2. Vertex number per quad: vertexNumberPerQuad = 2^i + 1, i = 0...n =>
=> Number of LODs (quad tree depth): numberOfLODs = (n + 1) - log2(vertexNumberPerQuad - 1), so 1 <= numberOfLODs <= (n + 1);
3. Root deviation;

I compute deviation for each chunk as:

Deviation(L + 1) = Deviation(L) / 2

At the rendering stage, all I do is traverse this tree top-down, checking a simple inequality:

p = K * (d / D), where K = viewport_width / [2 * tan(horizontal_fov / 2)];

p - projected screen-space vertex error of the chunk (compared against the maximum tolerated error);
d - geometric deviation of the current chunk;
D - distance from the camera to the center of the current chunk's AABB;

As you can see in the picture, I use the triangle-strip "snake" approach to render a chunk.


Chunked LOD


That's it for now (quite simple, right? :])... I can't go further due to lack of experience and poor knowledge of recent trends in this subject. It's also a little hard for me to understand the algorithms from papers alone, since the papers are very short and provide little detail...

So, first of all, tell me whether I am heading the right way by building a kind of "Chunked LOD"?

And secondly, if I am on the right way, it seems like the next step is crack elimination: which approach would you recommend? Personally, I don't like skirts, since they can look ugly when textured. But avoiding skirts leads to two different approaches, each with problems I don't know how to solve:

1. If adjacent chunks differ by 2 or more LOD levels, I have to omit more than one vertex at a T-junction;

2. If I stick strictly to the rule "adjacent chunks may differ by only 1 LOD level", I get another headache: I have to impose restrictions on the quad tree;

I have no idea how to handle either :[
I'd also appreciate any suggestions about mixing different algorithms!

Using OpenGL + Java, by the way. Thanks!

[Edited by - Haroogan on October 21, 2010 3:45:15 PM]

If you're going to stick with geometry-based terrain, you can use RobMaddison's terrain LOD stitching on the GPU. It's very easy to implement (4 lines of shader code). BTW Rob, if you read this, please check your PMs lol.

Quote:
Original post by spacerat
Why use triangles? It's easier to raycast the terrain - you don't need to worry about geometry in that case.
http://msdn.microsoft.com/en-us/library/ee416425%28VS.85%29.aspx


niiiiiiice!

First of all, I don't really understand what the essence of raycasting is and why it is good for terrain. Does it simplify terrain assembly? Does it simplify texturing and lighting? Does it bring decent FPS rates...? And is it useful for making planet-like terrain, the way chunked LOD is?

Well, about Rob Maddison... you know, his algorithm is described rather shallowly, isn't it? Again, a lack of direct and structured information :[

Quote:
Original post by Haroogan
First of all, I don't really understand what the essence of raycasting is and why it is good for terrain. Does it simplify terrain assembly? Does it simplify texturing and lighting? Does it bring decent FPS rates...? And is it useful for making planet-like terrain, the way chunked LOD is?

Well, about Rob Maddison... you know, his algorithm is described rather shallowly, isn't it? Again, a lack of direct and structured information :[


There's an executable of the raycasting sample in the DX SDK.

I was actually able to extract enough information from Rob Maddison's post to write a well-working implementation. After stitching with his algorithm (which is basically a modulo operation on the patch edges), the grid looks like this:



Before you start with raycasting on such a scale, you should consider a few things:
- Raycasting cannot make use of MSAA (unlike triangles)
- Cone step mapping, as proposed in this article, requires non-trivial preprocessing. We are talking several minutes to hours for a large texture. Editing in or close to real time is out of the question.
- Most other raycasting algorithms are either very slow or inaccurate (or both).
- Raycasting requires the pixel shader to output depth for correct results, which means you have no Z-cull

>> why is it good for terrain?

You can do terrain with displacement mapping in one pass.

>> Does it simplify terrain assembling?

Yes - you get unlimited geometry detail and no trouble with triangle counts.

>> Does it simplify texturing and lighting?

No - that's the same.

>> Does it bring decent FPS rates...?

Yes. My volume-terrain raycaster gets about 100 fps just for the terrain, without SSAO & SSDM (http://www.youtube.com/watch?v=f4bYYWnQbSU).

>> And is it useful in making planet-like terrain, since, for instance, chunked LOD is!

If you need a complex distance or angle function to compute the texture coordinate on a sphere, it might slow down...

So you've dropped your FPS down to 100 just by adding terrain, on a GTX 285? lawl... What about additional geometry and highly polygonal animated models then? And as I understand it, raycasting is only possible on very recent graphics cards, right?

Now about Maddison:

"During this stage, fill in a matrix of LOD levels - for a 4096x4096 terrain with a 32x32 high-LOD patch size, this involves filling a matrix of [128][128] - but only those elements that fall within the viewing frustum and only the outer edges of each of those elements need to be set."

What matrix? Does he mean just to create a 2-dimensional 128x128 array and fill it with pointers to the chunks that are going to be rendered, with null pointers at the positions of chunks that shouldn't be rendered? And he wants to do this every frame?

"and only the outer edges of each of those elements need to be set."

What are outer edges of those elements?

"I believe this is standard practise for pre-determining the LODs of neighbouring patches."

After filling this matrix, how does he learn the neighbours of each chunk?

"front-to-back-sorted"

WTF?

BTW, your screenshot looks like geometry clipmapping, I mean nested LOD layers?

[Edited by - Haroogan on October 23, 2010 5:10:54 AM]

Quote:
Original post by Haroogan
What matrix? Does he mean just to create a 2-dimensional 128x128 array and fill it with pointers to the chunks that are going to be rendered, with null pointers at the positions of chunks that shouldn't be rendered?


Yes, I think you got it. You first figure out which nodes are going to be rendered and write their LODs (or a pointer to the node) into the matrix. If a node isn't visible, you don't have to write it into the matrix, because invisible nodes don't cause cracks :)

This is blazingly fast, since you only touch the outer edges. You could bake the LOD matrix on the GPU, but it's not even worth it. Here's my code for this step:


void CeTerrain::buildRenderQueue(node& n)
{
    // Cull nodes outside the view frustum.
    if (!Ce::cam->InFrustumAABB(&n.aabb)) return;

    n.lod = max(n.depth / 2, computeLOD(n));

    // If this node's LOD is good enough, mark its outer edges in the
    // LOD matrix and queue it for rendering.
    if (n.lod <= n.depth)
    {
        UINT nodeSize = lodMatrixSize / n.depth;
        UINT coord    = n.pos.y * lodMatrixSize + n.pos.x;

        node** lodPtr = &LODMATRIX[coord];

        for (UINT i = 0; i < nodeSize; ++i)
        {
            *(lodPtr + i * lodMatrixSize)                  = &n; // left edge
            *(lodPtr + i * lodMatrixSize + (nodeSize - 1)) = &n; // right edge
            *(lodPtr + i)                                  = &n; // top edge
            *(lodPtr + i + lodMatrixSize * (nodeSize - 1)) = &n; // bottom edge
        }

        renderQueue.push(&n);

        return;
    }

    // Otherwise recurse into the four children.
    for (UINT i = 0; i < 4; ++i) buildRenderQueue(*n.child[i]);
}



Quote:

He wants to do it every frame?


Only when the camera moves, unless you have some kind of animated terrain.

Quote:

What are outer edges of those elements?


I can best explain it with an image. The colored borders are the edges of the nodes.



Quote:

After filling this matrix he learns about neighbours of each chunk? How?


You either look it up in the LOD matrix, or you store pointers in the nodes to their neighbours in the LOD matrix and just dereference them before you render the node.

Quote:

"front-to-back-sorted"


This just requires pushing the nodes into a priority queue with the camera distance as the comparison key. Or use a vector and sort it after the LOD step. It's not required, but it prevents some overdraw on the GPU for better performance.

Quote:

BTW, ur screenshot looks like GeoClipMapping, I mean nested LOD layers?


Hugues Hoppe's geometry clipmaps are a little different, as far as I can tell.

[Edited by - Daniel E on October 23, 2010 11:23:55 AM]

For completeness, here's the shader code for stitching patches on the GPU:

with conditions:

VS_output vs_terr(VS_input input)
{
    VS_output output;

    float4 pos = { input.pos.x, 0, input.pos.y, 1 };

    // Snap edge vertices onto the coarser neighbour's grid spacing
    // (nLODs holds the vertex step of each of the four neighbours).
    if (pos.x < 0.5f)                 pos.z -= pos.z % nLODs.x;
    else if (pos.x > NUMVERTS - 0.5f) pos.z -= pos.z % nLODs.z;

    if (pos.z < 0.5f)                 pos.x -= pos.x % nLODs.y;
    else if (pos.z > NUMVERTS - 0.5f) pos.x -= pos.x % nLODs.w;

    // ... height sampling, transform and output assignment omitted ...
}



no conditions:


VS_output vs_terr(VS_input input)
{
    VS_output output;

    float4 pos = { input.pos.x, 0, input.pos.y, 1 };

    // Same snapping as above, branch-free: isEdge flags (0 or 1 per side)
    // select which vertices get snapped.
    pos.z -= input.isEdge.x * (pos.z % nLODs.x);
    pos.z -= input.isEdge.z * (pos.z % nLODs.z);

    pos.x -= input.isEdge.y * (pos.x % nLODs.y);
    pos.x -= input.isEdge.w * (pos.x % nLODs.w);

    // ... height sampling, transform and output assignment omitted ...
}

I wonder if anybody here has a clue about how to triangulate, with a single strip, the 2D array of vertices (used / not used) produced by an RQT. Image follows...

Strip Triangulation

One remark: don't take the size in the image into consideration, because I suppose there must be an algorithm which can triangulate an arbitrary (2^n + 1) x (2^n + 1) 2D array (used / not used) into a single strip! :]

Hilbert curve?

But why, in the age of primitive restart, would I want to go through a lot of pain to draw everything in one strip? It is so much easier to draw row by row, taking the post-transform cache into account, i.e. not making rows too wide and prefetching the first row. Then add a primitive-restart index at the end of every row, and you're done.

Hilbert's curve won't fit here, I think... Look at the one below:

Hilbert's Curve

Its degree is 2 and it walks through the 16 points of a quad, i.e. a quad with 4 points per border. However, quads in an RQT have 2^n + 1 points per border; for instance, the smallest one would have 3 points per border and the next one 5... So as you can see, there is no way to fit a Hilbert curve here to iterate through all the points... I wish we could :] I might be wrong, but these are just my shallow thoughts on it.

Now about your idea, samoth: please provide an index buffer to clear it up. But first look at my first post, where I presented a screenshot. As you can see, each block there is assembled with a single strip in snake style. Looks like brute force... Now I want to RQT-triangulate these blocks to reduce the number of triangles. So let's assume that after the RQT procedure I get a 2D array of points like in the picture below:

Triangulation

So I'd like to see your idea of an index buffer for this configuration that implements the desired triangulation from the picture... (I gave you numbers to simplify your interpretation)

[Edited by - Haroogan on October 25, 2010 9:00:19 AM]

So quiet here ... :]

Alright, yesterday I implemented RQT triangulation. Now I have the following terrain tessellation sequence:

1. As I said in the very first post, I use chunked LOD, but I didn't like that each chunk has a "brute force" structure. For instance, if all chunks are 129x129, why the hell would I render all of those ~16k vertices just for a single chunk!?
2. That's why I decided to do an RQT triangulation within each chunk, to drop useless detail and consequently increase performance. Since each chunk is associated with the height-map block it represents, I do the RQT triangulation over this block.
3. The RQT triangulation can be adjusted via the max-deviation parameter (those who know will understand), following Pajarola's 1998 paper. I use the same approach.
4. One disadvantage: when I used "brute force" chunks, I could tessellate them with a single "snake" strip (which has a minimal number of degenerate triangles); with RQT triangulation I have no clue how to do that, so I just use a single triangle list per chunk.

Here are a couple of screenshots made over a 2049x2049 height map using this technique:





1. No frustum culling yet;
2. No stitching yet;
3. No code optimizations yet (it's written horribly and quite ugly, just the bare algorithm :] );
4. No geomorphing yet;

Considering that there are still several things to be implemented, I'm going to update this post as I progress :]

P.S. The question of how to single-strip the RQT output is still open!
