I would like to create a program (VB6, C, Java, etc.) that renders static (non-interactive) synthetic 360° views, similar to
http://www.lalpinistavirtuale.it/Panorami/Ronce_sintetico.jpg
given only the geographical coordinates (latitude, longitude, and height above sea level) of the point of view.
The program will read SRTM elevation data from a file with 1" accuracy. By reading this file I am already able to calculate the height above sea level of any point, but I am missing the most important part: the display.
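(For reference, here is a minimal sketch of how a 1" .hgt tile is usually sampled, assuming the standard layout of 3601×3601 big-endian 16-bit samples with row 0 on the tile's north edge; the function names are placeholders, not a fixed API.)

```c
/* Sketch: sample one SRTM 1" tile (.hgt). Assumes the usual layout:
   3601x3601 big-endian 16-bit samples, row 0 at the tile's north edge. */
#include <stdio.h>
#include <stdint.h>

#define SRTM_DIM 3601  /* samples per row/column in a 1" tile */

/* nearest-sample row/col for a position inside the tile whose
   south-west corner is (lat0, lon0) in whole degrees */
static void latlon_to_rowcol(double lat, double lon, int lat0, int lon0,
                             int *row, int *col)
{
    *row = (int)((lat0 + 1.0 - lat) * (SRTM_DIM - 1) + 0.5);
    *col = (int)((lon - lon0) * (SRTM_DIM - 1) + 0.5);
}

/* raw elevation in metres at (row, col) of an open .hgt file */
static int16_t hgt_sample(FILE *f, int row, int col)
{
    unsigned char b[2];
    fseek(f, (long)(row * SRTM_DIM + col) * 2, SEEK_SET);
    fread(b, 1, 2, f);
    return (int16_t)((b[0] << 8) | b[1]);   /* big-endian to host */
}
```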
First of all, I would like to know whether OpenGL is capable of handling the enormous number of small triangles that would form the image.
Secondly, I am looking for pieces of code, algorithms, and other resources that would enable me to obtain this result.
Thanks in advance, and I apologize for my bad English.
360 degree landscape (panoramas)
Rendering the basic panorama can be done easily by rendering four images with 90° horizontal FOV cameras and simply gluing the images together. More interesting and efficient solutions may exist, but that is not going to be the core issue.
Considering the resolution and the areas involved you will need either out-of-core data loading or simplification - you cannot load all the data involved at the same time, OpenGL or no OpenGL. The system I work on uses (offline) simplification and level of detail to make datasets like that displayable in real time on relatively low-end systems. Decent simplification is not something you implement overnight, though, and requires (depending on the area involved) quite lengthy preprocessing.
Without being able to fall back on a significant codebase I would try loading manageable chunks of data and rendering those, then freeing them and loading the next chunk. It doesn't matter if a frame takes seconds or minutes or hours or days, after all. I would probably try some kind of point-cloud rendering with normal estimation; there does not seem to be much point in triangulating that data.
A completely open issue is calculating how far into the distance data has to be loaded. When approximating curvature with a simple sphere of earth radius, visibility is obviously limited, but a distant mountain remains visible well beyond the horizon, so you cannot simply stop because there is 'invisible' flat terrain in between. Unless you first build some kind of max-filtered, downsampled look-up map, you may always have to traverse out to the distance at which the tallest possible mountain you could encounter disappears below the curvature. The camera far plane would have to be adjusted accordingly, or you would have to investigate some kind of infinite-far-plane technique.
Some care also has to be taken to use a proper local coordinate system for rendering; precision becomes a non-trivial (but, with care, solvable) issue when dealing with areas of dozens or hundreds of square kilometers.
Overall it's certainly doable (although I'm not sure OpenGL is the right way to go, a software raytracer might be a better solution) but it is a difficult task without having a pre-existing codebase to deal with a lot of the things involved. It becomes even more difficult if you have no previous experience in working with and rendering such data.
Thanks for the recommendations.
That's the reason for my post: I am looking for algorithms and pieces of code already optimized for this purpose (or at least a similar one).
Any other ideas?