
OpenGL: Eliminating texture seams at terrain chunk edges


Hi, fellow game developers, nice to see you!

I've decided to make games for my Android phone. It has several limitations, such as the GLES maximum texture dimension. So in my terrain editor I implemented a new feature, which is basically chunked textures. I keep the complete texture data in memory, but each terrain chunk (32x32 tiles, i.e. 33x33 vertices) has its own OpenGL texture assigned to it, which dynamically updates its data from the global data. Each chunk texture stores its x,y position in the global data as well as its width and height.
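Roughly, the setup looks like this (simplified sketch, not my exact code; assumes the GL ES headers are included, and GL_UNPACK_ROW_LENGTH needs ES 3.0, so on ES 2.0 you'd stage the rows into a tightly packed buffer first):

struct ChunkTexture {
    GLuint tex;    // per-chunk OpenGL texture
    int x, y;      // this chunk's corner in the global alpha map
    int w, h;      // chunk texture size in texels
};

// Re-upload one chunk's rectangle from the global map kept in memory.
void UpdateChunk(const ChunkTexture& c, const unsigned char* globalMap, int globalW)
{
    glBindTexture(GL_TEXTURE_2D, c.tex);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, globalW);  // source rows span the whole map
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, c.w, c.h,
                    GL_ALPHA, GL_UNSIGNED_BYTE,
                    globalMap + c.y * globalW + c.x);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
}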

Whenever I "paint" the global texture, each affected chunk applies the change. It works well, but I noticed an artifact when rendering. I understand it's caused by the GL_LINEAR filter I set for the minification and magnification filters: each chunk texture samples only its own texels (its own "mipmap") without taking the neighboring chunks into account, so it's visually wrong at the borders. However, I still don't know how to minimize this effect. How would you solve this kind of problem? I'm trying to think of a simple way out, but seriously, coding this game editor has drained me inside out. I can't think clearly.

PS: these aren't diffuse textures; they're alpha maps for terrain texture splatting. For this simple test I generate the global texture on the fly using a simple noise calculation. It's tileable, but since there are seams at the edge of each chunk, it breaks the realism.

Here is how the terrain chunking looks with the GL_LINEAR filter:

[screenshot: visible seams at chunk edges with GL_LINEAR]

And here it is with GL_NEAREST instead. Unsurprisingly, no artifacts are seen here.

[screenshot: no seams with GL_NEAREST]

TL;DR: How would you eliminate the seams at the terrain chunks' edges?

Edit: the UV wrap mode is GL_CLAMP_TO_EDGE. (Edited by Bow_vernon)


If you want perfect tiling between the right side of one texture and the left side of another, separate texture, then you need to add borders of 2^N texels, where N is the number of mip levels not counting the full-sized level, and fill each border with the edge pixels of the texture it is supposed to tile into, so that all N mip levels blend correctly.

You probably don't need more than 4- or 8-pixel borders, as the error won't be very noticeable in the smaller mips.
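Since you already keep the full map in system memory, filling those borders can be as simple as expanding the rectangle you copy for each chunk and clamping at the map edges. A rough sketch of that (illustrative names, where texSize = usable size + 2*border):

// Copy one chunk's texels out of the global map, expanded by 'border'
// texels on each side, so the border holds the neighbouring chunks' edge
// pixels. Clamping makes chunks at the terrain edge repeat their own edge.
void CopyChunkWithBorder(unsigned char* dst, int texSize,
                         const unsigned char* globalMap, int globalW, int globalH,
                         int chunkX, int chunkY, int border)
{
    for (int y = 0; y < texSize; ++y) {
        for (int x = 0; x < texSize; ++x) {
            int gx = chunkX - border + x;
            int gy = chunkY - border + y;
            if (gx < 0) gx = 0; else if (gx >= globalW) gx = globalW - 1;
            if (gy < 0) gy = 0; else if (gy >= globalH) gy = globalH - 1;
            dst[y * texSize + x] = globalMap[gy * globalW + gx];
        }
    }
}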

 

If you want to improve performance or texture usage, I would recommend grouping the chunk textures into larger textures, so that the chunks sharing a group effectively form a smaller square terrain covered by a single texture, and only border those larger textures.
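As a sketch of the grouping (assuming 4x4 chunks per atlas, and with atlasFor() as a hypothetical lookup you'd write yourself):

const int kGroup = 4;   // chunks per atlas side (an assumption, tune to taste)

struct AtlasSlot { GLuint atlasTex; int offsetX, offsetY; };

// Which atlas a chunk lives in, and where its texels start inside it.
AtlasSlot SlotForChunk(int chunkX, int chunkY, int chunkTexSize)
{
    AtlasSlot s;
    s.atlasTex = atlasFor(chunkX / kGroup, chunkY / kGroup); // hypothetical lookup
    s.offsetX  = (chunkX % kGroup) * chunkTexSize;
    s.offsetY  = (chunkY % kGroup) * chunkTexSize;
    return s;
}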

Edited by Erik Rufelt


If you want perfect tiling between the right side of one texture and the left side of another, separate texture, then you need to add borders of 2^N texels, where N is the number of mip levels not counting the full-sized level, and fill each border with the edge pixels of the texture it is supposed to tile into, so that all N mip levels blend correctly.
You probably don't need more than 4- or 8-pixel borders, as the error won't be very noticeable in the smaller mips.


So are these border pixels part of the pixel data, or are they separate properties that can be set in OpenGL? For example, this texture chunk is 1024x1024. If adding a border increases the dimensions, say with a 4-pixel border, wouldn't that make it 1032x1032 (4 texels on each side), thus violating the power-of-two requirement? It's interesting, though; I'll look it up more later.

Do your textures have their wrap mode set to GL_CLAMP_TO_EDGE? I'm pretty sure that would eliminate the worst of the artefacts you are experiencing there.


Well, it's already using GL_CLAMP_TO_EDGE. Sorry I didn't mention it earlier.



So are these border pixels part of the pixel data, or are they separate properties that can be set in OpenGL? For example, this texture chunk is 1024x1024. If adding a border increases the dimensions, say with a 4-pixel border, wouldn't that make it 1032x1032 (4 texels on each side), thus violating the power-of-two requirement? It's interesting, though; I'll look it up more later.

 

It would, so you let the usable part of the texture be 1016x1016 to make room for 4-pixel borders at each edge. This usually isn't a problem; with bilinear filtering and scaling it won't look any different anyway. Just set the texture coordinates to [4/1024, 1020/1024] instead of [0, 1].
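In code, the remapping is just (sketch):

// Map a tile-local coordinate t in [0,1] into the usable centre of a
// bordered chunk texture, e.g. texSize = 1024, border = 4.
float ChunkUV(float t, int texSize, int border)
{
    float uv0 = (float)border / texSize;               // e.g. 4/1024
    float uv1 = (float)(texSize - border) / texSize;   // e.g. 1020/1024
    return uv0 + t * (uv1 - uv0);   // keeps the filter kernel inside the border
}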


Are you sure your texture is tileable? It certainly doesn't look like it. Try downloading a proper tiling texture and see if the problem still exists.

Edited by mark ds



Are you sure your texture is tileable? It certainly doesn't look like it.

This strikes me as well. Given the random nature of your texture contents, it's not clear you'd be able to tell whether it tiles or not; with sampling set to GL_NEAREST, it just looks like noise.

 

Try generating textures with larger-scale features, and see if the features line up. My guess would be that you have a small error in the texture coordinates at the edge of each tile.
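For example, filling the global map with a smooth low-frequency ramp instead of noise makes any seam or per-tile UV offset obvious (quick sketch):

// Fill the global map with a large-scale diagonal ramp; any chunk
// misalignment shows up as a visible break in the gradient.
void FillDebugPattern(unsigned char* map, int w, int h)
{
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            map[y * w + x] = (unsigned char)(((x + y) * 255) / (w + h - 2));
}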


I think you have misunderstood the OP, or else I have. I believe his intention is to have every tile use a completely separate texture that can be individually painted for texture splatting or blending between layers. (If that's the case, I assume it will often be quite magnified when viewed close up, at which point borders are usually necessary.)

Edited by Erik Rufelt


I believe his intention is to have every tile use a completely separate texture that can be individually painted for texture splatting or blending between layers

That is my understanding as well, but the errors he is currently seeing are not consistent with my experience of that scenario. Those seams are far more apparent than is reasonable for just filtering error.

I think the UV mapping is not continuous, i.e. if an edge is shared between two quads, the UV values of the shared vertices don't match.
(Assuming the two quads' UV maps don't overlap.)
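A quick way to check that within a chunk (sketch; uvA and uvB are hypothetical arrays of the two quads' edge vertices, u and v interleaved):

// If the mapping is continuous, the vertices shared by two adjacent quads
// must carry identical UVs (within float tolerance).
bool SharedEdgeUVsMatch(const float* uvA, const float* uvB, int vertexCount)
{
    const float eps = 1e-6f;
    for (int i = 0; i < vertexCount * 2; ++i) {  // 2 floats (u,v) per vertex
        float d = uvA[i] - uvB[i];
        if (d < 0.0f) d = -d;
        if (d > eps)
            return false;  // discontinuous mapping at this edge
    }
    return true;
}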


Are you sure your texture is tileable? It certainly doesn't look like it. Try downloading a proper tiling texture and see if the problem still exists.

When I use the complete texture (no chunking), it's continuous, as seen with the GL_NEAREST filter.

Are you sure your texture is tileable? It certainly doesn't look like it.

This strikes me as well. Given the random nature of your texture contents, it's not clear you'd be able to tell whether it tiles or not; with sampling set to GL_NEAREST, it just looks like noise.

 

Try generating textures with larger-scale features, and see if the features line up. My guess would be that you have a small error in the texture coordinates at the edge of each tile.

 

You, sir, are the real MVP :P. You are correct. The artifact only shows when the values in neighboring chunks vary greatly, which should never happen in the real game. And when I fill my global blend map with sane values, the artifact is gone! See the screenshots below for proof.

 

I think the UV mapping is not continuous, i.e. if an edge is shared between two quads, the UV values of the shared vertices don't match.
(Assuming the two quads' UV maps don't overlap.)

The UV mapping is continuous; I've checked it thoroughly in the quadtree creation code.

 

Update:

Heheh, the artifact is virtually "gone". It only becomes visible when neighboring chunks have very different pixel values, so in my noise-texture test it was always visible. But now I've implemented the terrain paint brush, and it looks perfect so far. The key is simply not to let the difference between neighboring chunks' texels get too great. And since it's an alpha map, it will be blurred anyway so the splatting doesn't look unnaturally sharp, which would be unnatural for terrain anyway. Thanks, guys, for your input!!
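For anyone curious, the smoothing is nothing fancy; a small separable box blur over the painted region is enough (simplified sketch of the idea, not my exact brush code; needs <vector>):

// One horizontal pass of a tiny box blur over one row of the alpha map;
// run it per row (and a matching vertical pass per column) after painting,
// so values never jump sharply across a chunk boundary.
void BlurRow(unsigned char* row, int w, int radius)
{
    std::vector<unsigned char> src(row, row + w);  // read unmodified values
    for (int x = 0; x < w; ++x) {
        int sum = 0, count = 2 * radius + 1;
        for (int dx = -radius; dx <= radius; ++dx) {
            int sx = x + dx;
            if (sx < 0) sx = 0; else if (sx >= w) sx = w - 1;
            sum += src[sx];
        }
        row[x] = (unsigned char)(sum / count);
    }
}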

 

[screenshot: painted blend map, no visible seams]

See? No visible artifact :D

[screenshot: textured terrain after painting, seam-free]
