
OpenGL Terrain texturing: multi-texturing + brush tool


Lodeman    1596

Hi all,

 

I'm at the point at which I'd like to implement a brush to paint terrain, so I'd like to bounce some ideas and thoughts here and hopefully come up with a proper implementation strategy. I'm using OpenGL for graphics.

 

My goals are:

- allow the user to paint up to, say, 16 different terrain textures, each with an alpha value of his/her choosing

- allow texture splatting for up to 4 overlapping textures with alpha values < 1; any further layers painted on top with an alpha < 1 would be ignored (though you could still paint over them with alpha = 1 - I'm open to other suggestions here)

 

 

I've done some research on the internet, and read over a bunch of tutorials. But the tutorials and information I found usually just covered multi-texturing using a pre-determined height algorithm, so I'm not entirely sure how to deal with terrain brushes.

 

For texture splatting, it seems I could just send in an extra vector of glm::vec4, each component corresponding to an RGBA weight for the splatting. However, I'm not sure how to accommodate information covering the 16 possible textures.
I would send in a texture array for these, but how would I know which four textures to use in the splatting? Would I need another vector of glm::vec4 that stores the indices of the textures to use? Or are there better, more efficient ways to go about this?

 

Are there any good resources on creating terrain brushes that I'm overlooking - ones that aren't limited to just a few textures but describe the general case of n textures?

 

Much thanks in advance!

 

mark ds    1786

I'm assuming you're using a heightmap and smaller 'cookie cutter' terrain patches..?

 

My version is considerably more complex than the following because I'm disassociating my texturing from the vertices (for both LOD reasons and the fact I'm using corner-based Wang tiles), but in essence this is what I'm doing (it's kind of like a poor-man's mega-texturing!)

 

In conjunction with my heightmap I have a texture map, which contains integers from 0-n, where n is the maximum number of textures in a texture array.

 

Every triangle that gets drawn has three sets of texture coordinates, with alpha values of:

    1                  0                  0
    |\                 |\                 |\
    |   \              |   \              |   \
    |      \           |      \           |      \
    |         \        |         \        |         \
    |______\ 0         |______\ 1         |______\ 0
    0                  0                  1

 

These alpha values are obviously used in the pixel shader for blending purposes. Each of the three texture coordinate sets is assigned its associated integer from the texture map, which is passed to the pixel shader for use as an array index for texture lookups. And of course the 3-way multitexturing can also use the same array index for all three texture coordinate sets to create a solid texture.

 

Although every triangle is now triple-textured, as it were, adjacent triangles will for the most part use the same texturing, which is bandwidth-efficient.

 

The advantage of this method over traditional RGBA-weighting methods (the method every single internet resource I've seen talks about) is twofold: you are pretty much unlimited in how many different textures are allowed (the spec says a minimum of 256 array layers must be supported), and terrain texturing has a pretty much fixed cost (rather than rendering geometry multiple times and blending). Obviously, if you have many different textures very close together, the bandwidth requirements will increase accordingly.
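To make the per-triangle blend concrete, here is a hedged CPU-side sketch (not Mark's actual shader) of what the pixel shader computes: three array indices plus three interpolated alpha weights per fragment. Each `Texel` stands in for a fetch from one texture-array slice; the struct and function names are illustrative, not from the post.

```cpp
#include <array>
#include <cassert>

// A stand-in for one sampled colour from a sampler2DArray slice.
struct Texel { float r, g, b; };

// Blend three array slices by the interpolated per-vertex alphas.
// In GLSL this would be three texture(texArray, vec3(uv, index)) fetches.
Texel blend3(const std::array<Texel, 4>& slices,  // stand-in texture array
             int i0, int i1, int i2,              // per-corner array indices
             float w0, float w1, float w2)        // alphas, summing to 1
{
    const Texel& a = slices[i0];
    const Texel& b = slices[i1];
    const Texel& c = slices[i2];
    return { a.r*w0 + b.r*w1 + c.r*w2,
             a.g*w0 + b.g*w1 + c.g*w2,
             a.b*w0 + b.b*w1 + c.b*w2 };
}
```

When all three indices are equal, the weights sum to 1 over the same slice, which gives the "solid texture" case Mark mentions.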

mark ds    1786

Just to add...

 

You don't actually need to store all three alpha values: one of them is 1 minus the sum of the other two.

 

Also, you can use a neat optimisation for lower end hardware. Say you use three similar grass textures (array indexes 0, 1 & 2) to break up the uniformity of a grassy area. You could allow the user to select 'simple texturing' and remove two of the textures and recalculate the array (and associated texture map). This would reduce the bandwidth requirements by two thirds.
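That fallback can be sketched as a simple rewrite of the index map. The function and parameter names below are mine, not from the post - the idea is just to fold the similar slices (e.g. grass variants at array indices 0, 1 and 2) down to one:

```cpp
#include <vector>
#include <cstdint>
#include <cassert>

// Collapse similar texture-array slices by remapping every stored index.
// 'remap[i]' gives the slice to use in place of slice i.
std::vector<uint8_t> simplifyIndexMap(const std::vector<uint8_t>& indexMap,
                                      const std::vector<uint8_t>& remap)
{
    std::vector<uint8_t> out(indexMap.size());
    for (std::size_t i = 0; i < indexMap.size(); ++i)
        out[i] = remap[indexMap[i]];  // e.g. remap = {0,0,0,3} folds 1,2 into 0
    return out;
}
```

After the remap, adjacent vertices that previously blended between grass variants share one index, so those blends collapse to solid texturing and the bandwidth drops accordingly.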

Lodeman    1596

Hi Mark and thank you for your interesting reply.

 

Just to make sure I'm understanding you correctly, I'm going to go over how I'd implement your method into mine.

 

Currently, my terrain consists of:

    vector<glm::vec3> m_vertices; //the vertices of the terrain

    vector<GLuint> m_indices;

    vector<glm::vec2> m_texCoords;

    vector<glm::vec3> m_normals;

Triangle distribution is indeed done with the "cookie-cutter" approach. In this implementation I only have one grass texture applied to the entire terrain.
Now, based on your explanation, I would add the following to my terrain:

    vector<glm::vec3> m_textureIndices; //same length as m_vertices; each component of the vec3 is ONE texture index from 0..n (n = number of terrain textures)

    vector<glm::vec2> m_blendingAlphas; //same length as m_vertices, x = alpha1, y = alpha2 (z not needed as it is = 1-x-y)

I would send these into my GLSL shader and determine the used textures via m_textureIndices, and determine the amount of blending needed via m_blendingAlphas.
This would allow the terrain to use n textures, of which 3 would be blended/splatted. I assume I can increase this to 4 by making m_textureIndices a vector<glm::vec4>, and m_blendingAlphas a vector<glm::vec3>.
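As a sanity check on that layout, here is a minimal sketch using plain structs instead of glm types so it stands alone (on the GL side these would be two vertex attributes, with the indices passed as floats for the GLSL array lookup). The names are mine:

```cpp
#include <cassert>

// Proposed per-vertex blend data for 4-way splatting.
struct BlendVertex {
    float texIndex[4];  // up to four texture-array indices
    float alpha[3];     // blend weights for the first three textures
};

// The fourth weight is implicit: all four must sum to 1,
// so it never needs to be stored or uploaded.
float fourthAlpha(const BlendVertex& v) {
    return 1.0f - v.alpha[0] - v.alpha[1] - v.alpha[2];
}
```

The same trick applies in the shader: reconstruct the last weight there and you save one float per vertex of bandwidth.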

 

Is this roughly what you mean Mark?

 

Thanks again!

Lodeman

mark ds    1786

Yeah, that's pretty much it.

 

My system works slightly differently because of the sheer size of the terrain - I have a 6145x5121 16-bit heightmap and an associated 6145x5121 8-bit array index texture. These are both stored as textures to use as lookups in the vertex shader. The trick (at least in my implementation) is that each heightmap point (and its associated vertex) has exactly one texture applied to it, which either fades to another texture at each adjacent vertex, or 'fades to the same texture' to repeat the texture over a larger area. As such, I have no need for four textures per tri - but obviously your requirements will differ from mine.

 

Conceptually, it's like drawing 3 textured triangles (as in the pseudo-art in my post above) and then blending them, but with the benefit of only actually specifying one set of vertices.

 

 

 

(PS - sorry if that's not totally clear, but I've had several beers!!!)

Edited by mark ds

Lodeman    1596

Hi Mark,

 

Hope you enjoyed your beers :-)

 

I've been making progress, but am running into an issue I can best describe visually:

[screenshot: uqf4.png]
 

As you can see, I'm having trouble getting the blending to work properly at the edges of my terrain brush's radius. I can't seem to figure out how to create a nice smooth, circular blend at the fragment/pixel level. I'm thinking this might be an inherent limitation of my current implementation, in which I store the blending alphas per vertex.

 

Could you shed some light on how to tackle this?

 

Cheers!

Lodeman    1596

I think a possible approach would be to have a large alpha-map texture for each possible terrain texture, spanning the entire terrain.
However, as my heightmap is already of considerable size (1024x1024), I'm concerned that each alpha map would have to be that size times a large factor in order to cover enough fragments in enough detail. So if I go with, for example, 12 terrain textures, I'd be sending in 12 large alpha maps of, say, 20480x20480.
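A quick back-of-envelope check on those numbers (assuming single-channel 8-bit alpha maps, which the post doesn't specify): twelve 20480x20480 maps come to roughly 4.7 GiB uncompressed, which suggests full-resolution maps at that scale would need streaming, tiling, or a much smaller scale factor.

```cpp
#include <cstdint>
#include <cassert>

// Uncompressed memory cost of 'count' w*h alpha maps at 1 byte per texel.
uint64_t alphaMapBytes(uint64_t w, uint64_t h, uint64_t count) {
    return w * h * count;  // bytes
}
```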

 

Is this a viable approach that would perform decently enough or are there better techniques I don't know of? :p

RobMaddison    1151

I do my terrain texturing in a similar way to this, although I don't store three sets of texture coordinates per triangle corner. I just have a terrain-sized 8-bit texture, each pixel of which designates the texture of the terrain-triangle corner it maps to (so you can have 255 different textures across the entire terrain). In order to blend between textures on each triangle, though, I have to do multiple passes with alpha blending:

 

My painting algorithm computes which texture is most abundant on a terrain patch (32 x 32 x 2 triangles) and draws that one first, without alpha blending. During this first pass, anywhere the texture does not exist on a triangle corner, black is used, blended towards the corners where the texture does exist using a 16x16 'blending texture' (a small texture with a different colour in each corner, each fading from 1 to 0 towards the opposite corner - the shader uses this to blend across the triangle).

 

For each subsequent pass (texture), the same terrain patch is drawn, but the alpha is 0 in corners where the texture does not exist and 1 where it does. When all the passes are drawn, the terrain looks as it does in the screenshot in Lodeman's post, but with nicely smoothed borders. With a good base (lower-resolution) texture mixed in, you can barely tell the terrain painting is triangle-based - I think Crysis does it this exact way.

 

So if you have a terrain patch with lots of textures, you end up doing multiple passes but it's still extremely quick.  I blend out the detail textures with a low-resolution 'base' texture at a specified distance so you only really end up doing multiple passes on close patches (which is statistically no more than 4).  It's pretty efficient and I get very fast frame rates for a 4096x4096 terrain with multiple textures in each patch.  You can also easily stream in the terrain detail textures (instead of having a terrain-sized one) and just pull them in when you get close to them so memory overhead is fairly minimal.
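The base-pass selection described above can be sketched as follows - count which texture index is most abundant among a patch's triangle corners, draw that one opaque first, then give each remaining index its own alpha-blended pass. Only the selection logic is shown; the names are illustrative, not Rob's actual code:

```cpp
#include <vector>
#include <algorithm>
#include <cstdint>
#include <cassert>

// Return the 8-bit texture index that appears most often on this patch.
uint8_t mostAbundantIndex(const std::vector<uint8_t>& cornerIndices) {
    int counts[256] = {0};
    for (uint8_t i : cornerIndices) ++counts[i];
    // Index of the largest count is the most abundant texture.
    return static_cast<uint8_t>(
        std::max_element(counts, counts + 256) - counts);
}
```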

 

My question is, how are you guys doing this without multiple passes?

 

Edit: looks like my implementation is identical to mark ds' - Mark, do you do multiple passes too?

Edited by RobMaddison

dpadam450    2357

Looks like you are treating each vertex as a certain alpha + material, i.e. you can't ever get a circle because your circle cuts a triangle in half. You always need to use a texture for terrain painting. That will also require writing some shaders.
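Painting into such a per-fragment texture is what gives the round edge. A hedged sketch, assuming a single-channel float alpha map and a smoothstep brush falloff (the specifics are mine, not from the post):

```cpp
#include <vector>
#include <cmath>
#include <algorithm>
#include <cassert>

// Paint a circular brush with smoothstep falloff into a w*h alpha map.
// Sampling this map per fragment gives a smooth round blend regardless
// of where the triangle edges fall.
void paintBrush(std::vector<float>& alphaMap, int w, int h,
                float cx, float cy, float radius, float strength)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float dx = x - cx, dy = y - cy;
            float d = std::sqrt(dx*dx + dy*dy);
            if (d >= radius) continue;              // outside the brush
            float t = 1.0f - d / radius;            // 1 at centre, 0 at rim
            float fall = t * t * (3.0f - 2.0f*t);   // smoothstep falloff
            float& a = alphaMap[y*w + x];
            a = std::min(1.0f, a + strength * fall);
        }
    }
}
```

In practice you would upload only the dirty region of the map with glTexSubImage2D after each stroke rather than re-uploading the whole texture.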

 

A couple of my terrains for reference, using 2048 maps (1 RGB material, 1 RGB alpha):
http://th00.deviantart.net/fs71/PRE/f/2013/339/f/0/big_by_dpadam450-d6wuvn9.jpg
http://fc03.deviantart.net/fs71/f/2011/328/b/b/basic_terrain_by_dpadam450-d4h4tvz.jpg

 

I've also found that you need two textures for each material, very similar to each other, to reduce tiling. For example, if you have snow and grass, make two grass textures that are roughly the same colour and scale, and do the same for snow. That way each material tiles much better. You could do this, for example, by taking two photos of grass with your camera a few feet apart - similar but different - and tiling those.
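One illustrative way to alternate between the two variants per material is to pick variant 0 or 1 per tile with a small integer hash, so the pairing breaks up repetition automatically. The hash constants below are an arbitrary choice of mine, not something from the posts:

```cpp
#include <cstdint>
#include <cassert>

// Deterministically pick texture variant 0 or 1 for a given terrain tile.
int pickVariant(int tileX, int tileY) {
    uint32_t h = static_cast<uint32_t>(tileX) * 73856093u
               ^ static_cast<uint32_t>(tileY) * 19349663u;
    h ^= h >> 13;          // mix the bits a little
    return static_cast<int>(h & 1u);
}
```

Because it is a pure function of the tile coordinates, the same tile always gets the same variant - no extra per-tile storage is needed.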

Edited by dpadam450

mark ds    1786

 

Hi Rob. Sorry for not responding earlier, but did you get anywhere with this in the end?

"Was that meant for me?"

No - it was meant for Lodeman! I copied from the wrong post. Sorry!

Lodeman    1596

I've had to temporarily knock it back down my todo list, as I was getting driver crashes with my current painting code once I started painting second layers.
While I couldn't pinpoint the exact reason for the crash (my PC would just freeze, and after a while I'd get a message that the graphics driver had recovered), I should note that I was pushing a ton of triangles through to the GPU for my terrain... I'm thinking perhaps it was just getting to be too much for it?

 

So now I'm first implementing a terrain LOD system (going for CDLOD), then I'll return to this!

Lodeman    1596

Got my LOD system done, and am returning to this now!

@mark ds: I was wondering if you could go into more depth on how you blend your textures. From your initial post I get the impression you're able to do the blending in one go:

"...and the fact that terrain texturing has a pretty much fixed cost (rather than rendering geometry multiple times and blending)."

But as Rob describes, I currently don't see how one can do this without multiple blending passes.
Thanks in advance for any clarification!

 

Cheers and beers,
Lodeman


