Cube map to cylindrical map

Dirge
Does anyone have any ideas or references on algorithms to convert a cube map to a cylindrically wrapped rectangular (nxn) map? I'm currently generating a cube map that I wrap around a sphere, but I'd like to squish it down to a single texture. Thanks very much.

moagstar
The texture coordinates u[0..1], v[0..1] map to theta[0..2pi], phi[-(pi/2)..(pi/2)].

For each pixel in the destination map, map the u, v coordinates to polar theta, phi coordinates:

theta = u * pi
phi = -(pi / 2) + (v * pi)

Convert the polar coordinates to cartesian coordinates:

x = cos(phi) * cos(theta)
y = sin(phi)
z = cos(phi) * sin(theta)

This defines a ray; computing the intersection of the ray with the cube map gives you the point to sample. The intersection is easy to compute: the component with the largest magnitude selects the face, and its sign selects the positive or negative face, e.g.

[ 0.7, 0.5, -0.1] selects +x
[-0.7, 0.5, -0.1] selects -x

[ 0.1, 0.4, 0.0] selects +y
[ 0.1, -0.4, 0.0] selects -y

[ 0.1, 0.1, 0.8] selects +z
[ 0.1, 0.1, -0.8] selects -z

The other two components are remapped from [-1..1] to [0..1] and used as the u and v texture coordinates, so in the positive x example above

[0.7, 0.5, -0.1] would sample the +x face at [(0.1 + 1) / 2, (0.5 + 1) / 2]

NOTE: the z component -0.1 becomes 0.1 because of the coordinate system (assuming positive z points out of the screen); the coordinate system you are using will affect the exact mapping of ray components to texture coordinates.
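
For what it's worth, here is a minimal sketch of the whole mapping in plain C++ (no Direct3D calls). The Face enum and function name are hypothetical, theta uses the full [0..2pi] range (see the correction later in the thread), and the division by the dominant component is the refinement the anonymous poster adds at the end; flip the face u/v to match your own cube map convention:

#include <math.h>

enum Face { PosX, NegX, PosY, NegY, PosZ, NegZ };

// Map a destination pixel's (u, v) in [0..1] to a direction on the unit sphere,
// pick the cube face that direction hits, and compute the face-local (u, v).
void LatLongToCubeFace( float u, float v, Face &face, float &faceU, float &faceV )
{
    const float pi = 3.14159265f;
    float theta = 2.0f * pi * u;            // longitude in [0..2pi]
    float phi   = -0.5f * pi + pi * v;      // latitude  in [-pi/2..pi/2]

    // Spherical to cartesian (right-handed, y up); negate z for a left-handed system.
    float x = cosf( phi ) * cosf( theta );
    float y = sinf( phi );
    float z = cosf( phi ) * sinf( theta );

    float ax = fabsf( x ), ay = fabsf( y ), az = fabsf( z );

    // The component with the largest magnitude selects the face; dividing the other
    // two components by that magnitude projects the ray onto the face plane.
    float s, t;
    if ( ax >= ay && ax >= az )      { face = ( x >= 0.0f ) ? PosX : NegX; s = z / ax; t = y / ax; }
    else if ( ay >= ax && ay >= az ) { face = ( y >= 0.0f ) ? PosY : NegY; s = x / ay; t = z / ay; }
    else                             { face = ( z >= 0.0f ) ? PosZ : NegZ; s = x / az; t = y / az; }

    // Remap from [-1..1] to [0..1]; depending on the cube map convention some of
    // these need to be flipped (see the per-face table further down the thread).
    faceU = ( s + 1.0f ) * 0.5f;
    faceV = ( t + 1.0f ) * 0.5f;
}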

Dirge
Hmm, I'm really close, but any idea why a cube map that looks like this:

[image: source cube map]

would produce this:

[image: resulting texture]


The loop that generates the texture pixels looks like this (excuse its sloppiness, I was in a hurry):


for ( i = 0; i < 256; i++ )
{
    for ( j = 0; j < 256; j++ )
    {
        int offset = ( i * 256 + j ) * 4;

        float u = i / 256.0f;
        float v = j / 256.0f;

        const float pi = 3.14f;
        float theta = u * pi;
        float phi = -( pi / 2.0f ) + ( v * pi );

        float x = cosf( phi ) * cosf( theta );
        float y = sinf( phi );
        float z = cosf( phi ) * sinf( theta );

        int iAxis = 0;
        bool bPositive = false;

        if ( fabs( x ) > fabs( y ) )
        {
            if ( fabs( x ) > fabs( z ) )
            {
                iAxis = 0;
                bPositive = x >= 0.0f;
                u = z;
                v = y;
            }
            else
            {
                iAxis = 2;
                bPositive = z >= 0.0f;
                u = x;
                v = y;
            }
        }
        else
        {
            if ( fabs( y ) > fabs( z ) )
            {
                iAxis = 1;
                bPositive = y >= 0.0f;
                u = x;
                v = z;
            }
            else
            {
                iAxis = 2;
                bPositive = z >= 0.0f;
                u = x;
                v = y;
            }
        }

        D3DCUBEMAP_FACES face = D3DCUBEMAP_FACE_POSITIVE_X;

        switch ( iAxis )
        {
        case 0:
            if ( bPositive )
            {
                face = D3DCUBEMAP_FACE_POSITIVE_X;
            }
            else
            {
                face = D3DCUBEMAP_FACE_NEGATIVE_X;
            }
            break;

        case 1:
            if ( bPositive )
            {
                face = D3DCUBEMAP_FACE_POSITIVE_Y;
            }
            else
            {
                face = D3DCUBEMAP_FACE_NEGATIVE_Y;
            }
            break;

        case 2:
            if ( bPositive )
            {
                face = D3DCUBEMAP_FACE_POSITIVE_Z;
            }
            else
            {
                face = D3DCUBEMAP_FACE_NEGATIVE_Z;
            }
            break;
        }

        // Scale/Bias from [-1, 1] into [0, 1].
        u = ( u + 1.0f ) * 0.5f * 256.0f;
        v = ( v + 1.0f ) * 0.5f * 256.0f;

        // Round up and truncate.
        u = int( u + 0.5f );
        v = int( v + 0.5f );

        pCubeTexture->LockRect( face, 0, &SrcLockedRect, NULL, 0 );
        pSrcPixels = static_cast< unsigned char * >( SrcLockedRect.pBits );

        int iCubeOffset = (int)u * SrcLockedRect.Pitch + (int)v * 4;

        // ARGB.
        pDestPixels[ offset + 3 ] = pSrcPixels[ iCubeOffset + 3 ];
        pDestPixels[ offset + 2 ] = pSrcPixels[ iCubeOffset + 2 ];
        pDestPixels[ offset + 1 ] = pSrcPixels[ iCubeOffset + 1 ];
        pDestPixels[ offset + 0 ] = pSrcPixels[ iCubeOffset + 0 ];

        pCubeTexture->UnlockRect( (D3DCUBEMAP_FACES)face, 0 );
    }
}




Thanks again for your help thus far. If I may ask, do you have a reference for this algorithm (or some place I could learn about it)?

Dirge
Ahh, looks like I just wasn't googling the right terms. "Cube map to Spherical" gave me this: http://astronomy.swin.edu.au/~pbourke/projection/spheretexture, which appears to be what you described, but even using his exact algorithm doesn't appear to give me proper results. Can you offer some advice on converting to a left-handed coordinate system for the cartesian conversion? I believe that may be it (and negating my z doesn't quite do it).

moagstar
Firstly, u should map to theta in [0..2pi] and v to phi in [-pi/2..pi/2]. It looks like your u and v coordinates are the wrong way around and your ranges are incorrect.

Also, it looks like you have your u and v sampling a bit messed up; I've had this problem before when blurring cube maps. Take a look at this picture, it should help clarify the relationship between texture and world coordinates:

[image: relationship between cube face texture coordinates and world coordinates]

Here is how you map {x, y, z} to {FACE, u, v} for each of the faces :

+X -> z[-1..1] becomes u[1..0] and y[-1..1] becomes v[1..0]
-X -> z[-1..1] becomes u[0..1] and y[-1..1] becomes v[1..0]

+Y -> x[-1..1] becomes u[0..1] and z[-1..1] becomes v[0..1]
-Y -> x[-1..1] becomes u[0..1] and z[-1..1] becomes v[1..0]

+Z -> x[-1..1] becomes u[0..1] and y[-1..1] becomes v[1..0]
-Z -> x[-1..1] becomes u[1..0] and y[-1..1] becomes v[1..0]
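
As a rough sketch, that table translates into code along these lines (the Face enum and helper names are hypothetical; the inputs are the two non-dominant components, already in [-1..1]):

enum Face { PosX, NegX, PosY, NegY, PosZ, NegZ };

// Remap [-1..1] to [0..1], with and without the flip to [1..0].
inline float Remap( float a )        { return ( a + 1.0f ) * 0.5f; }
inline float RemapFlipped( float a ) { return 1.0f - Remap( a ); }

// Apply the per-face mapping from the table above.
void FaceUV( Face face, float x, float y, float z, float &u, float &v )
{
    switch ( face )
    {
    case PosX: u = RemapFlipped( z ); v = RemapFlipped( y ); break; // z -> u[1..0], y -> v[1..0]
    case NegX: u = Remap( z );        v = RemapFlipped( y ); break; // z -> u[0..1], y -> v[1..0]
    case PosY: u = Remap( x );        v = Remap( z );        break; // x -> u[0..1], z -> v[0..1]
    case NegY: u = Remap( x );        v = RemapFlipped( z ); break; // x -> u[0..1], z -> v[1..0]
    case PosZ: u = Remap( x );        v = RemapFlipped( y ); break; // x -> u[0..1], y -> v[1..0]
    case NegZ: u = RemapFlipped( x ); v = RemapFlipped( y ); break; // x -> u[1..0], y -> v[1..0]
    }
}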

Sorry I didn't have time to look at your code in depth, but hopefully you can work it out from this. Hope that helps.

Dirge
No apologies needed, you've been more than helpful and I greatly appreciate it.

I think I almost have this working, but one last question: what's the algorithm to transform from spherical to cartesian for a left-handed coordinate system (I'm using Direct3D)?

Many thanks!

moagstar
The equations for a right-handed coordinate system (RHS) are:

x = cos(phi) * cos(theta)
y = sin(phi)
z = cos(phi) * sin(theta)

For a left-handed system (LHS), you simply need to negate z.

There is a slight error in the conversion from u and v to theta and phi that I described earlier (that method assumed u was in the range [-1..1], which is probably why it appears you are not sampling enough of the cube faces). So for u and v in the range [0..1] the conversion is:

theta = 2pi * u
phi = (-pi / 2) + (pi * v)
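
Putting the corrected ranges and the left-handed negation together, the per-pixel direction works out to something like this (a sketch; the function name is illustrative, and u, v are the destination pixel's coordinates in [0..1]):

#include <math.h>

// Direction for a destination pixel at (u, v), using the corrected ranges
// and the left-handed z negation.
void DirectionFromUV( float u, float v, float &x, float &y, float &z )
{
    const float pi = 3.14159265f;
    float theta = 2.0f * pi * u;            // theta in [0..2pi]
    float phi   = -0.5f * pi + pi * v;      // phi in [-pi/2..pi/2]

    x =  cosf( phi ) * cosf( theta );
    y =  sinf( phi );
    z = -cosf( phi ) * sinf( theta );       // negated for a left-handed system
}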

Have you made any progress on this? Do you have an image of how your destination map is looking now?

Dirge
Well, I fixed a number of issues, so yes, progress, but it's still not what I'm expecting. I'm basically stumped at this point...

Here's what I have to show though. The first image maps colors to each face:


r g b
+x [ 1 0 1 ] i.e. pink
-x [ 0 0 1 ] i.e. blue
+y [ 1 0 0 ] i.e. red
-y [ 0 1 0 ] i.e. green
+z [ 0 1 1 ] i.e. cyan
-z [ 1 1 0 ] i.e. yellow


[image]


Here's the color info:

[image]


The results are just plain wrong.

As far as I know the uv coordinates are valid for the destination texture.
The conversion to spherical signed numbers looks correct.
The conversion from spherical to cartesian (including negation of z for lhs) looks correct.
The "cube intersection" by finding the greatest axis, keeping the sign and matching with the cube face is correct.
The final src uv generation from the specified face looks right, and finally the conversion to actual pixels looks right.

I'm totally stumped. As far as I can tell it looks like an issue with the spherical conversion (although I do now take the proper axis and range into account, i.e. the *2 issue).

If you have any more ideas I'm definitely open to them.

moagstar
Sorry for the late reply. OK, it's not looking too bad; actually, it looks almost there. The main problems I can see are that a) your Y axis is wrong (+Y should be pointing up) and b) your mapping of u seems to be inverted. If you post your current code I can take a look and see if I can pinpoint the error.

Dirge
Here ya go.


for ( y = 0; y < iHeight; y++ )
{
    for ( x = 0; x < iWidth; x++ )
    {
        int offset = y * DestLockedRect.Pitch + x * 4;

        float u = x / (float)iWidth;
        float v = y / (float)iHeight;

        const float pi = D3DX_PI;
        float theta = 2.0f * u * pi;
        float phi = ( -pi / 2.0f ) + ( v * pi );

        vSamp.x = cosf( phi ) * cosf( theta );
        vSamp.y = sinf( phi );
        vSamp.z = cosf( phi ) * sinf( theta );

        int iAxis = 0;
        bool bPositive = false;

        // Find the prominent axis.
        if ( fabs( vSamp.x ) > fabs( vSamp.y ) )
        {
            if ( fabs( vSamp.x ) > fabs( vSamp.z ) )
            {
                iAxis = 0;
                bPositive = vSamp.x >= 0.0f;
            }
            else
            {
                iAxis = 2;
                bPositive = vSamp.z >= 0.0f;
            }
        }
        else
        {
            if ( fabs( vSamp.y ) > fabs( vSamp.z ) )
            {
                iAxis = 1;
                bPositive = vSamp.y >= 0.0f;
            }
            else
            {
                iAxis = 2;
                bPositive = vSamp.z >= 0.0f;
            }
        }

        D3DCUBEMAP_FACES face = D3DCUBEMAP_FACE_POSITIVE_X;

        char chColor[ 3 ] = { 255, 255, 255 };

        // Retrieve the cube map uv coordinate.
        switch ( iAxis )
        {
        case 0:
            if ( bPositive )
            {
                face = D3DCUBEMAP_FACE_POSITIVE_X;
                u = vSamp.z;
                v = vSamp.y;
                chColor[ 0 ] = 255; chColor[ 1 ] = 0; chColor[ 2 ] = 255;
            }
            else
            {
                face = D3DCUBEMAP_FACE_NEGATIVE_X;
                u = -vSamp.z;
                v = vSamp.y;
                chColor[ 0 ] = 0; chColor[ 1 ] = 0; chColor[ 2 ] = 255;
            }
            break;

        case 1:
            if ( bPositive )
            {
                face = D3DCUBEMAP_FACE_POSITIVE_Y;
                u = vSamp.x;
                v = vSamp.z;
                chColor[ 0 ] = 255; chColor[ 1 ] = 0; chColor[ 2 ] = 0;
            }
            else
            {
                face = D3DCUBEMAP_FACE_NEGATIVE_Y;
                u = vSamp.x;
                v = -vSamp.z;
                chColor[ 0 ] = 0; chColor[ 1 ] = 255; chColor[ 2 ] = 0;
            }
            break;

        case 2:
            if ( bPositive )
            {
                face = D3DCUBEMAP_FACE_POSITIVE_Z;
                u = -vSamp.x;
                v = vSamp.y;
                chColor[ 0 ] = 0; chColor[ 1 ] = 255; chColor[ 2 ] = 255;
            }
            else
            {
                face = D3DCUBEMAP_FACE_NEGATIVE_Z;
                u = vSamp.x;
                v = vSamp.y;
                chColor[ 0 ] = 255; chColor[ 1 ] = 255; chColor[ 2 ] = 0;
            }
            break;

        default: assert( 0 ); break;
        }

        // Scale/Bias from [-1, 1] to [0, 1].
        u = ( u + 1.0f ) / 2.0f;
        v = ( v + 1.0f ) / 2.0f;

        // Map to exact cube map face pixels, round up, then truncate.
        u = int( u * 256.0f + 0.5f );
        v = int( v * 256.0f + 0.5f );

        pCubeTexture->LockRect( face, 0, &SrcLockedRect, NULL, 0 );
        pSrcPixels = static_cast< unsigned char * >( SrcLockedRect.pBits );

        int iCubeOffset = (int)v * SrcLockedRect.Pitch + (int)u * 4;

        // ARGB.
#if 1
        pDestPixels[ offset + 3 ] = pSrcPixels[ iCubeOffset + 3 ];
        pDestPixels[ offset + 2 ] = pSrcPixels[ iCubeOffset + 2 ];
        pDestPixels[ offset + 1 ] = pSrcPixels[ iCubeOffset + 1 ];
        pDestPixels[ offset + 0 ] = pSrcPixels[ iCubeOffset + 0 ];
#else
        pDestPixels[ offset + 3 ] = 255;
        pDestPixels[ offset + 2 ] = chColor[ 0 ];
        pDestPixels[ offset + 1 ] = chColor[ 1 ];
        pDestPixels[ offset + 0 ] = chColor[ 2 ];
#endif

        pCubeTexture->UnlockRect( face, 0 );
    }
}




It's odd, but it also doesn't look like those central, planar xz directions are actually warping properly. If you spot anything let me know; I'm really hoping to put this sucker to bed.

Guest Anonymous Poster
This has been a very helpful thread. I was trying to do the same thing in reverse, i.e. project a cylindrical texture onto a cube. There is one addition to the process that may be of help to some.

moagstar wrote:

The other two components are remapped from [-1..1] to [0..1] and used as the u and v texture coordinates, so in the positive x example above

[0.7, 0.5, -0.1] would sample the +x face at [(0.1 + 1) / 2, (0.5 + 1) / 2]


The above algorithm will not sample the entire face of the cube. The y and z values that map to u and v in the example above need to be divided by x. So:

[0.7, 0.5, -0.1] would sample the +x face at [(0.1 / 0.7 + 1) / 2, (0.5 / 0.7 + 1) / 2]

Make sure you use the absolute value of the largest component when doing the divide.
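
In code, for the +x face that works out to something like this (a sketch; the function name is illustrative, and the same pattern applies to the other faces, dividing by the absolute value of whichever component is dominant):

#include <math.h>

// Sketch for the +x face: divide the other two components by |x| before the
// remap so the whole face gets sampled; flip or swap per your face convention.
void PosXFaceUV( float x, float y, float z, float &faceU, float &faceV )
{
    float ax = fabsf( x );                          // x is the dominant component
    faceU = ( ( z / ax ) + 1.0f ) * 0.5f;
    faceV = ( ( y / ax ) + 1.0f ) * 0.5f;
}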
