sprite_hound

spherical clipmaps and texture / world coords


I've been reading the Spherical Clipmaps for Terrain Rendering paper, and trying to implement it. However, I'm having trouble understanding section 3.3, which is really the key point. The basic idea is to create a static geometry hemisphere (parametrized by phi (longitude) and theta (latitude)) in view space, with the viewer located at the pole (theta == 0). The hemisphere is composed of several rings, the rendering of which depends on visibility and distance from the surface. A circular patch (the most detailed area) is used to cover the hole in the middle. This breaks down into something like this:
	for (int i = 1; i != maxLodLevels; ++i) {
		// each lod level halves theta, so ring i covers colatitudes (theta/2, theta]
		double theta = std::pow(2.0, -i) * cml::constantsd::pi();
		
		for (int j = 0; j != lodSubLevels; ++j) {
			// subdivide the ring into lodSubLevels bands of latitude
			double subtheta = theta * std::pow(2.0, -j / static_cast<double>(lodSubLevels));
			
			for (double phi = 0.0; phi < phiMax; phi += phiStep) {
				// unit sphere position, with the z axis through the viewer's pole
				vertices.push_back( cml::vector3f(
					std::cos(cml::rad(phi)) * std::sin(subtheta),
					std::sin(cml::rad(phi)) * std::sin(subtheta),
					std::cos(subtheta)) );
			}
		}
	}
	// set up vbo.
	// create center patch.
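
For completeness, the center patch that plugs the hole is just a triangle fan over the innermost cap, roughly like the sketch below (innerTheta is my own name here for the inner edge of the smallest ring; the actual code is slightly more involved):

	// rough sketch of the center patch: a fan from the pole out to the
	// inner edge of the smallest ring (innerTheta)
	std::vector<cml::vector3f> patch;
	patch.push_back(cml::vector3f(0.0f, 0.0f, 1.0f));	// the pole, i.e. the viewer point
	for (double phi = 0.0; phi <= phiMax; phi += phiStep) {
		patch.push_back( cml::vector3f(
			std::cos(cml::rad(phi)) * std::sin(innerTheta),
			std::sin(cml::rad(phi)) * std::sin(innerTheta),
			std::cos(innerTheta)) );
	}
	// drawn as a GL_TRIANGLE_FAN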



I'm passing these in as the vertices in the vbo, and so far so good. The next stage is the one that's causing me problems: finding the world space coordinates of the vertices, and then the texture coordinates for the vertex (and fragment) texture lookup. The paper gives the position of the vertex in world (or model, I suppose) space, and then the calculation of phi and theta like so (translated into glsl):
uniform float Thetav;

//...
	// rotate the view-space hemisphere by the viewer latitude, about the y axis
	modelPosition.x = cos(Thetav)*gl_Vertex.x - sin(Thetav)*gl_Vertex.z;
	modelPosition.y = gl_Vertex.y;
	modelPosition.z = -sin(Thetav)*gl_Vertex.x + cos(Thetav)*gl_Vertex.z;
	
	// spherical coordinates of the rotated position
	float phi = atan(modelPosition.y, modelPosition.x);
	float theta = acos(modelPosition.z) - Thetav;
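
For reference, what I'm expecting to end up with from phi and theta is an ordinary equirectangular lookup, along these lines (this assumes the heightmap covers the whole sphere, with u from longitude and v from colatitude):

	// expected mapping from spherical coords to heightmap texture coords,
	// assuming an equirectangular map covering the full sphere
	double u = phi / (2.0 * cml::constantsd::pi()) + 0.5;	// phi in [-pi, pi)
	double v = theta / cml::constantsd::pi();				// theta in [0, pi]
	// ... then sample the heightmap at (u, v) and displace the vertex along its normal.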



I'm calculating Thetav (the viewer world space latitude) like so:
	// cp is the world space camera position.
	// pos is the position of the center of the sphere.
	cml::vector3f vp = cp - pos;
	thetav = cml::constantsd::pi() - (cml::asin_safe(vp[1] / vp.length()) + 0.5 * cml::constantsd::pi());
	// thetav ranges from 0 at the north pole to pi at the south,
	// as seems to be indicated in the paper.
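
(Unless I'm mistaken, that expression is just acos in disguise, since pi - (asin(y/r) + pi/2) == pi/2 - asin(y/r) == acos(y/r), and presumably the viewer longitude would come from the other two components, something like:)

	// equivalent form of the above, I think:
	thetav = std::acos(vp[1] / vp.length());	// argument clamped to [-1, 1] in the real code
	// and presumably the viewer longitude, though I'm not using it anywhere:
	double phiv = std::atan2(vp[2], vp[0]);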



... and then passing it to the vertex shader. This, however, seems to cause my hemisphere to distort oddly when I move the camera position north or south of the equator. It also seems odd that I'm not using phiv (the viewer world space longitude) anywhere. Looking at the paper, I'm not even sure that p (modelPosition above) is actually meant to be the world space coordinate. I tried a completely different tactic, using billboarding to orient the hemisphere towards the viewer, but this caused problems when the viewer approached the poles. Can anyone explain this part of the paper, or suggest how to get from the local coordinates to world space and texture coords? Thanks! sprite.
