3D complex polygons rendering on a sphere with GLU tessellation routines

10 comments, last by leonard2012 10 years, 2 months ago

In my app I need to render complex polygons representing, for example, continents and rainfall regions on the Earth's surface. The Earth is simulated with a bi-tangent sphere, as in Google Earth. In 2D orthographic projection I can render complex polygons with the GLU tessellation routines (gluNewTess, gluTessCallback, and so on). When I do the same in 3D perspective projection, most of each polygon, except its boundary, is occluded by the sphere's surface. The cause is that all the vertices of a tessellated polygon lie in a single plane, and that plane is behind the spherical surface of the Earth. I would really appreciate any hints on how to accomplish this task.


One solution would be to render your polygons onto a cubemap (render them without the planet), and then project that onto the planet sphere.

o3o
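The cubemap suggestion amounts to rendering the tessellated polygons into the six faces of a cube map and then, when drawing the globe, sampling that cube map by surface direction (which a fragment shader does in hardware with a cube-map texture lookup). To illustrate the mapping involved, here is a hedged CPU-side sketch (the function name is mine) that follows the OpenGL cube-map face and coordinate convention:

```c
#include <math.h>

/* Sketch, not GL API: given a direction from the sphere center, find
 * which cube face it hits (0..5 in the GL order +X,-X,+Y,-Y,+Z,-Z)
 * and the (s,t) coordinates on that face in [0,1].  This mirrors the
 * per-face formulas in the OpenGL specification's cube-map table. */
int cube_face_uv(const double dir[3], double *s, double *t)
{
    double ax = fabs(dir[0]), ay = fabs(dir[1]), az = fabs(dir[2]);
    double sc, tc, ma;
    int face;
    if (ax >= ay && ax >= az) {        /* +X or -X face */
        ma = ax;
        face = dir[0] > 0 ? 0 : 1;
        sc = dir[0] > 0 ? -dir[2] : dir[2];
        tc = -dir[1];
    } else if (ay >= az) {             /* +Y or -Y face */
        ma = ay;
        face = dir[1] > 0 ? 2 : 3;
        sc = dir[0];
        tc = dir[1] > 0 ? dir[2] : -dir[2];
    } else {                           /* +Z or -Z face */
        ma = az;
        face = dir[2] > 0 ? 4 : 5;
        sc = dir[2] > 0 ? dir[0] : -dir[0];
        tc = -dir[1];
    }
    *s = 0.5 * (sc / ma + 1.0);
    *t = 0.5 * (tc / ma + 1.0);
    return face;
}
```

The appeal of this approach is that the GLU tessellation only has to be done once, in 2D, per face; the per-frame cost is just a texture fetch.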

Thanks for the reply.

You mean I should use texture mapping? Could you recommend some relevant code samples? I am new to OpenGL and don't have a clear idea of how to do this.

http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/ is a simple tutorial covering render to texture.

I'm not sure the result is going to look right though. You're rendering concave results from GLU tessellation, so generating a texture from that to slap on a convex model doesn't sound right.

Can you explain how you're generating the data (what input data you have) and some of your thinking? It kind of sounds like you're just coming at this problem with the wrong tools, but I can't even tell if you're using a 3d model program, using procedural generation, or getting it from an external source for the "complex polygons" you have.

I am sorry for being so late to reply. For the past two weeks I was sent on-site to a client to fix bugs in our system.

What I want to do is just draw the continents in a different color (e.g. white) from the Earth surface color (e.g. light blue). The continent boundary data can be obtained from either authoritative agencies or open-source GIS projects. The data is encoded in the ESRI shapefile format and consists of thousands of complex polygons.

Why not just process the input data and separate the polygons into continents and non-continents before rendering? It would give you a lot more flexibility for future improvements, especially when you start wanting more classifications than just continent and non-continent. Then you can just draw the polygon data in a normal way, and apply any post-processing you might dream up in the future.

I just want to make sure you really do need to do render-time computations since it will be a lot harder and a lot less flexible. Continent boundaries don't change very often, so it just feels like this isn't something that benefits from information only known after you render.

Thank you for the further discussion. Actually, all the polygons in the input data represent continents. A small portion of them are continental boundaries and are therefore very long, whereas the rest (usually hundreds) are island and peninsula boundaries and are much shorter.

Take, for example, a large polygon representing the North American continent. Since all of a polygon's vertices are on its boundary, the tessellation of this polygon lies in a single plane. So a possible solution is to turn the polygon into a mesh by adding some vertices inside it.

As to why I chose render-time computation, there are two reasons. First, we need to draw other irregular polygons on the planet surface whose boundaries tend to change from time to time; typical examples are rainfall/drought/forest regions. Second, users may change the projection type at runtime, which causes recalculation of the boundary vertex coordinates.

Are you actually changing vertex data directly when you say you're recalculating boundary vertex coordinates? If so, you should not be doing that as it will compound rounding errors as your program runs for long periods of time. The standard approach is to use matrices that hold your modifications of the vertices. If a user changes projection at runtime, it should only change the projection matrix.

If you generate the mesh for a continent prior to rendering (either prior to run-time or during initialization, data loading, whatever), why would that prevent you from drawing your irregular polygons on the surface? I'd think it would make it easier. You'd actually know where the surface is.

The projection mentioned in my previous post refers to cartographic projections, not the projection matrix.

The problem with the mesh method is that the GLU tessellation routines work on polygons, not meshes. I wonder if OpenGL supports triangulation of a mesh.

The picture below shows what we have done so far: rendering the continent in white and the ocean in blue with the GLU tessellation routines.

[image: canada2d.png]

We need to do the same (shading continents in white) in 3D but have not figured it out yet.

[image: canada3d.png]
