# Mapping a 360-degree camera to a viewport


## Recommended Posts

Traditionally, cameras work something like this. They face in a certain direction (labeled as P on this 0-to-360-degree linear scale) and have a certain viewing wedge (labeled with parentheses). This is then translated to screen space.
0                         360
[====(===P===)==============]   world space

|
|  camera transformation
v

left edge          right edge
(===========================)  screen space

I'd like to make a simple camera class that, instead of mapping a single spherical wedge, maps the entire viewing sphere to the screen-space rectangle, as approximately demonstrated in the diagram below. Colloquially, if you were standing at the center of a giant spherical room, you would be able to view the entire sphere at once. Unfortunately, I'm having a really hard time pounding out the math, and I suspect most of it is due to the weird discontinuities that occur at the wrap-around boundary. Can anyone point me to some resources that might help out with this?
0                         360
[=====P==========)(=========]   world space

|
|  wacky camera transformation
v

left edge          right edge
(===========================)  screen space

Thanks, -- k2

##### Share on other sites
Wouldn't this be exactly the same transformation used to represent a globe on a flat map? You'd have a choice of several different ways; this MathWorld page has the math behind Mercator, for example.

##### Share on other sites
You could look into PanQuake, a hack on the Quake 1 sources to achieve a 360 degree panorama view.
They have full sources available.

##### Share on other sites
Quote:
 Original post by Promit
Wouldn't this be exactly the same transformation used to represent a globe on a flat map? You'd have a choice of several different ways; this MathWorld page has the math behind Mercator, for example.

Not quite. For instance, the Mercator projection projects a sphere onto a cylinder of infinite height (which you can then "cut" and "unroll" to form a rectangle). Unfortunately, a screen does not have infinite space. ;)
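To make the infinite-height point concrete: the Mercator vertical coordinate is y = ln(tan(π/4 + φ/2)), which grows without bound as latitude φ approaches ±90°. A quick sketch (the function name is my own):

```python
import math

def mercator_y(lat_deg):
    """Mercator vertical coordinate for a latitude given in degrees."""
    phi = math.radians(lat_deg)
    return math.log(math.tan(math.pi / 4 + phi / 2))

# y grows without bound as latitude approaches the pole,
# so the poles can never fit on a finite screen.
for lat in (0, 60, 85, 89.9):
    print(lat, mercator_y(lat))
```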

Quote:
 Original post by Zao
You could look into PanQuake, a hack on the Quake 1 sources to achieve a 360 degree panorama view. They have full sources available.

Hmm, this looks more promising. It would have to be a completely panoramic view, however. Generally, when people say panorama, they mean the following: a plane P is passed through the center of the viewing sphere, and then two equidistant planes P_top and P_bottom are passed through the sphere as well. The part of the sphere between them is the panorama.

I'll see if I can get it to generalize to my particular case, which is where P_top and P_bottom are tangent to the viewing sphere itself (i.e., the intersected area is the whole sphere).
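For reference, the conventional cylindrical panorama described above can be sketched like this (names and axis conventions are my own): screen u sweeps the full 360° of yaw, and v interpolates linearly between the P_bottom and P_top planes.

```python
import math

def panorama_ray(u, v, half_height=1.0):
    """Map normalized screen coords (u, v in [0, 1]) to a unit view ray.

    u sweeps the full 360 degrees of yaw; v moves linearly between the
    bottom and top planes at y = -half_height and y = +half_height.
    """
    theta = u * 2.0 * math.pi
    y = (2.0 * v - 1.0) * half_height
    # Point on the cylinder; normalize to get a unit-length ray.
    d = (math.cos(theta), y, math.sin(theta))
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)
```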

I do know this much -- my projection would definitely have to have the following properties:

1.) If the camera is pointing in a given direction, the point at the opposite direction is on the edge of the viewing rectangle. In fact, every point on the edge of the viewing rectangle is the same point (namely, the point just described).

2.) The infinitesimal distortion as a fraction of the overall sphere is zero in the direction of the camera and tends to 1 as one approaches the edges.

3.) The only great circle of the viewing sphere that would appear as a line is the one that passes through the poles. All other circles are apparent curves (parabolas, I think).

4.) Not an affine transformation.
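For what it's worth, properties 1, 2, and 4 are satisfied by the azimuthal equidistant projection (it maps to a disc rather than a rectangle, but the whole boundary is the single antipodal point, and distortion is zero at the center). A minimal inverse mapping (screen point to view ray), with my own names and a camera-looks-down-minus-z convention assumed:

```python
import math

def disc_to_ray(px, py):
    """Azimuthal equidistant: map a point on the unit disc to a view ray.

    Radius r on the disc corresponds to an angle of r * pi away from the
    view direction, so r = 0 is the view direction and the entire rim
    r = 1 collapses to the antipodal point.
    """
    r = math.hypot(px, py)
    if r > 1.0:
        return None  # outside the image disc
    if r == 0.0:
        return (0.0, 0.0, -1.0)  # straight down the view axis
    angle = r * math.pi            # angle away from the view axis
    s = math.sin(angle) / r        # scale disc offset onto the sphere
    return (px * s, py * s, -math.cos(angle))
```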

##### Share on other sites
Well, first off, this won't be an affine projection, since collinearity cannot be preserved in a sphere->plane mapping. So you won't be able to use scanline rasterization directly for this.

But let's see. The conventional pinhole model is λx = Π₀X. Here, instead of x lying on the plane x_z = 1, it lies on the unit sphere √(x_x² + x_y² + x_z²) = 1. Then you have a mapping R² -> R³ from the 2D image plane to the unit sphere... a common one, of course, is spherical coordinates (tossing out r, since it's always 1). So in your raytracer, given a θ and a φ, you'd convert to the point on the unit sphere, from that you'd have your ray, and from there proceed exactly as for conventional raytracing.
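The spherical-coordinates step above can be sketched as follows (the normalized-screen-coordinate convention is my own assumption):

```python
import math

def screen_to_ray(u, v):
    """Map normalized screen coords (u, v in [0, 1]) to a unit-sphere ray.

    u spans the azimuth theta in [0, 2*pi); v spans the polar angle
    phi in [0, pi] -- i.e. an equirectangular parameterization of the
    whole viewing sphere.
    """
    theta = u * 2.0 * math.pi
    phi = v * math.pi
    # Standard spherical-to-Cartesian conversion with r = 1.
    return (math.sin(phi) * math.cos(theta),
            math.cos(phi),
            math.sin(phi) * math.sin(theta))
```

Feed the returned direction straight into the raytracer's ray-generation stage in place of the usual pinhole ray.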

You COULD get tricky with using scanline rendering and warping a cube-mapped set of views onto an unwrapped sphere. That'd introduce large aliasing effects, but it doesn't seem like it'd be too difficult.
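The cube-map warp amounts to: for each output pixel, convert (θ, φ) to a direction, pick the cube face with the largest axis component, and sample that face at the projected coordinates. A sketch of the face-selection step (face labels and u/v sign conventions are my own; real APIs such as OpenGL fix them differently per face):

```python
def dir_to_face_uv(x, y, z):
    """Pick the cube face a direction hits, plus face-local (u, v) in [-1, 1].

    The face is the axis with the largest absolute component; u and v are
    the other two components divided by it (a gnomonic projection onto
    that face).
    """
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:          # +x or -x face
        face = '+x' if x > 0 else '-x'
        u, v = z / ax, y / ax
    elif ay >= az:                     # +y or -y face
        face = '+y' if y > 0 else '-y'
        u, v = x / ay, z / ay
    else:                              # +z or -z face
        face = '+z' if z > 0 else '-z'
        u, v = x / az, y / az
    return face, u, v
```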