
Design question: How many polygons is ideal for a landscape surface?


7 replies to this topic

#1 eggmatters (Members)

Posted 24 August 2012 - 05:34 PM

Hi, I have a fairly rudimentary design question.
I've created an artificial landscape using the diamond-square algorithm. Overall, I'm happy with it. I decided to use triangles as the base polygon for rendering because they're the easiest to compute normals against.
I have a question about how to proceed concerning two factors:
1. Vertices at local maxima in the heightmap "spike", since the highest vertex becomes the apex of a triangle. I have these pointy bits in my landscape. It seems to me that the only real solution is to subdivide the polygons over any given set of vertices and apply noise to "weather" these spikes.
2. My vertices are separated by one whole integer value (although they're rendered using floats). When applying a texture to polys with a lot of surface area (due to vertical relief), the texture is stretched. Again, I expected this, but I'd like some feedback on how to proceed.

I've attached a screenshot. This is with the roughness factor turned way up. I've only applied one texture so you can safely ignore that except for the stretching effects on the mountain and cliff faces.

If I divide a polygon into several smaller ones, what would be the ideal "depth" to achieve this?
Is there some other, better, method to map textures to polygons with varying surface areas? (I haven't seen anything like this, nor would expect to.)

There's no real code to show, as this is basically a design question, but I'd be happy to provide a sample if you'd like.
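For illustration, the subdivide-and-weather idea from question 1 might look something like this (a rough sketch, not my actual code; the function names and jitter scheme are made up for the example). Each new midpoint gets the average of its neighbours plus a small random offset, so the averaging rounds off spikes while the noise keeps the result from looking artificially smooth:

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

using Grid = std::vector<std::vector<float>>;

// Small signed random offset in [-jitter, +jitter].
static float weather(float jitter) {
    return jitter * (2.0f * std::rand() / (float)RAND_MAX - 1.0f);
}

// One subdivision level of an n x n heightmap into (2n-1) x (2n-1).
// Original vertices are kept; each new midpoint is the average of its
// original neighbours plus noise.
Grid subdivide(const Grid& h, float jitter) {
    size_t n = h.size(), m = 2 * n - 1;
    Grid out(m, std::vector<float>(m, 0.0f));
    for (size_t x = 0; x < n; ++x)
        for (size_t z = 0; z < n; ++z)
            out[2 * x][2 * z] = h[x][z];            // keep original heights
    for (size_t x = 0; x < m; ++x)
        for (size_t z = 0; z < m; ++z) {
            if (x % 2 == 0 && z % 2 == 0) continue; // original vertex
            if (x % 2 == 1 && z % 2 == 0)           // row-edge midpoint
                out[x][z] = 0.5f * (out[x - 1][z] + out[x + 1][z]) + weather(jitter);
            else if (x % 2 == 0)                    // column-edge midpoint
                out[x][z] = 0.5f * (out[x][z - 1] + out[x][z + 1]) + weather(jitter);
            else                                    // cell centre: four corners
                out[x][z] = 0.25f * (out[x - 1][z - 1] + out[x + 1][z - 1] +
                                     out[x - 1][z + 1] + out[x + 1][z + 1]) + weather(jitter);
        }
    return out;
}
```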

Thanks!

Attached Thumbnails

Edited by eggmatters, 25 August 2012 - 11:08 AM.

#2 eggmatters (Members)


Posted 27 August 2012 - 12:43 PM

If anybody is interested, I was able to remove the spikes and subdivide the terrain. In the diamond-square algorithm, the square step was adjusted to accept a random number within the range of the average of the surrounding points, modified by two factors: overall height and roughness.
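In sketch form (the names and the exact way the two factors combine are illustrative guesses, not my actual code), the adjusted square step looks something like:

```cpp
#include <cassert>
#include <cstdlib>

// Adjusted square step: a random value centred on the average of the four
// surrounding points, with the excursion bounded by the overall height
// range and the roughness factor. How range and roughness combine here is
// an illustrative choice, not the exact formula.
float squareStep(float n, float e, float s, float w,
                 float range, float roughness) {
    float avg = (n + e + s + w) / 4.0f;
    float r = 2.0f * std::rand() / (float)RAND_MAX - 1.0f; // uniform in [-1, 1]
    return avg + r * range * roughness;
}
```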

The algorithm ran for all points rendered (including the subdivisions). Each x and y of the terrain heightmap was a whole integer value, so I added a "subdivision" factor so that when I rendered the vertices, each was placed on a point between 0 and 1.0 per subdivision.

So,

[source lang="cpp"]
Terraingen::Terraingen(int dimension, int polyDepth) {
    // polyDepth is the number of polygons to render between 0 and 1
    dimensions = dimension * polyDepth;
    srand((unsigned)time(0));
    range = 300;
    roughness = 10;
    // heightmap:
    terrain = std::vector< std::vector<float> >(dimensions + 1,
                                                std::vector<float>(dimensions + 1, 0.0f));
    // run diamond square . . .
}

// During rendering:
void GLWidget::testDiamondStepTerrain(int polyDepth) {
    int Width  = dsTerrain.size(); // dsTerrain is the heightmap
    int Height = dsTerrain.size();
    std::vector<TRIANGLE> triangles;
    float poffset = 1.0f / polyDepth;
    float scaleX, scaleZ; // floating-point x and z values between 0 and 1
    for (int x = 0; x < Width - 1; x++) {
        if (x > 0) {
            int tmpX = x / polyDepth;
            int modX = x % polyDepth;
            scaleX = (modX == 0) ? tmpX : tmpX + (poffset * modX);
        } else {
            scaleX = 0;
        }
        for (int z = 0; z < Height - 1; z++) {
            if (z > 0) {
                int tmpZ = z / polyDepth;
                int modZ = z % polyDepth;
                scaleZ = (modZ == 0) ? tmpZ : tmpZ + (poffset * modZ);
            }
            // . . . (snippet continues)
[/source]

So this works for polyDepth = 2, but not for 4 or higher (polyDepth must be a multiple of 2). I know why, but can you spot it?
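Side note: the tmpX/modX arithmetic in the snippet is just reconstructing x / polyDepth (the integer part plus modX steps of 1/polyDepth), so it can be collapsed into a single floating-point division. A quick sanity check, with hypothetical helper names:

```cpp
#include <cassert>

// The branchy integer/fractional split from the snippet above...
float scaleCoord(int x, int polyDepth) {
    int tmp = x / polyDepth;         // whole-unit part
    int mod = x % polyDepth;         // sub-unit steps
    float poffset = 1.0f / polyDepth;
    return (mod == 0) ? (float)tmp : tmp + poffset * mod;
}

// ...is equivalent to one division.
float scaleCoordSimple(int x, int polyDepth) {
    return (float)x / polyDepth;
}
```

(For power-of-two depths the two forms agree exactly; for other depths there can be last-bit rounding differences.)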

#3 larspensjo (Members)


Posted 28 August 2012 - 02:30 AM

I decided to use Triangles as the base polygon for rendering

There are no other polygons (with more edges) unless you are using the deprecated legacy OpenGL. But maybe that was a conscious choice you made? Just a warning that it may lead you into an irreversible dead end.
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#4 eggmatters (Members)


Posted 28 August 2012 - 09:56 AM

Not sure I follow you, Lars. But that's good news, as it was a conscious choice: in an earlier incarnation I was using quads, and when I tried to compute an upright vector (for POV) I wound up splitting the quad into triangles anyway. That said, all of the OpenGL literature I've seen (whose dates are dubious) allows for quads, triangle fans, quad strips, multi-sided polygons, etc.

But, what is the dead end that you're speaking of? Using something other than triangles or some other piece of my post? Thanks!

#5 larspensjo (Members)


Posted 29 August 2012 - 06:15 AM

See some explanations:

http://stackoverflow.com/questions/1218449/opengl-whats-the-deal-with-deprecation
http://www.opengl.org/wiki/Fixed_Function_Pipeline

In immediate mode, you do a glBegin(), define some vertices, possibly some information about each vertex, and then glEnd(). This is limited in functionality. There are enhancements, but from OpenGL 3 it was deprecated (though still available in compatibility contexts).

The modern way is to define a local buffer with all your vertex data. This data can be of any type or meaning; OpenGL doesn't care. You then transfer the buffer to OpenGL memory and tell OpenGL how to extract data from the buffer (still without depending on what is a coordinate, what is a color, a UV, etc.). Finally, you define the vertex shader and fragment shader. That is where the interpretation of the data is done, producing coordinates and colors and mapping textures. And much more. It is obviously more effort than immediate mode, but it is far more flexible. In any "real" project you are guaranteed to reach a point where you want to do something extra, and the best answer is to modify the shader to get it done.
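As a concrete sketch of "telling OpenGL how to extract data from the buffer": with an interleaved layout (invented here for the example), the stride and per-attribute offsets are plain sizeof/offsetof arithmetic. The GL calls are shown only as comments, since they need a live context:

```cpp
#include <cassert>
#include <cstddef>

// An interleaved per-vertex layout: position, normal and texture
// coordinates packed together in one buffer. sizeof(Vertex) is the stride,
// and offsetof() gives each attribute's byte offset; those are exactly the
// numbers glVertexAttribPointer() needs.
struct Vertex {
    float position[3];
    float normal[3];
    float uv[2];
};

// glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                       (void*)offsetof(Vertex, position));
// glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                       (void*)offsetof(Vertex, normal));
// glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                       (void*)offsetof(Vertex, uv));
```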

For good tutorials on how it is done, see Learning Modern 3D Graphics Programming or http://www.opengl-tutorial.org/.

#6 eggmatters (Members)


Posted 29 August 2012 - 10:23 AM

Thanks for that. I was just at that stage: I was aware of the need to buffer vertex data and had found a good tutorial, but I appreciate the links. What the code snippet didn't show is that I compile each terrain section into a display list, which helped somewhat. Can I compile the list using vertex arrays, or are the two implementations mutually exclusive? (Sorry to ask such a naive question; I'll probably find out on my own. -- Just found: "Remember that you cannot place any client state commands in the display list, therefore, glEnableClientState(), glVertexPointer() and glNormalPointer() should not be included in the display list.")
I was led to vertex arrays by:
1. Seeing them in the literature and knowing I would need to implement them once I started throwing a lot of polys around, and
2. A bug in my vertex rendering that inserted a neat little space between the edges of all my polygons (I forgot to redefine an offset), and voila! the rendering screamed. I looked around for ways to eliminate polygon edge repetition and was pointed towards some literature on the arrays. Thanks for pointing me to some more.

BTW, I'm running OpenGL 2.6 (hence the QUADS support). I installed it in May, I believe. I'm developing on a Debian system, and their package management system is notorious for out-of-date packages since they are so meticulous about what they accept into their repos. Waiting to compile the latest jars from Mesa.

#7 larspensjo (Members)


Posted 30 August 2012 - 01:05 AM

You say OpenGL 2.6, but I think it is 2.1 (which was the last of OpenGL 2+). Some older graphics cards don't support better than that, but they are quite old today and have low performance. Unless your target is to support really low-performance systems, you may as well go for OpenGL 3+. OpenGL 4+ adds some advanced features that can wait for now.

I find names in OpenGL highly confusing, and the documentation isn't very helpful.

VBO, Vertex Buffer Object: the OpenGL buffer where you can store data. Almost any kind of data; usually vertex lists, texture coordinates, etc.
VAO, Vertex Array Object: a kind of object that saves state and pointers. From OpenGL 3, you always need at least one. They are actually quite helpful, but at first it is a little confusing what they store. When one is set up correctly, all you have to do is bind it before the draw call (glBindVertexArray), and OpenGL will know where to find all vertex data. I couldn't learn how to do this from the documentation, but using examples helped a lot.

There are some really nice reference implementations at https://github.com/progschj/OpenGL-Examples. They are as small as they can get and easy to read, but still provide complete one-file applications. My experience with OpenGL is to never change too much at a time: I copy a lot from examples and make small changes. OpenGL is sensitive to mistakes, based as it is on many global states. The famous "black screen of death".

Btw, sorry for not answering your original questions in a straight way!

#8 eggmatters (Members)


Posted 30 August 2012 - 10:29 AM

Great stuff! Thanks. I'm re-aligning my code for VBOs now. I'm at < 1000 lines in and, luckily, am making small changes + VCS (local git) to keep things manageable. Appreciate the directions!
