# Manual cubemap lookup/filtering in HLSL

## Recommended Posts

I'm working in an old application (DX9-based) where I don't have access to the C code, but I can write any (model 3.0) HLSL shaders I want.  I'm trying to mess with some cube mapping concepts.  I've gotten to the point where I'm rendering a cube map of the scene to a cross cube that I can plug directly into ATI cubemapgen for filtering, which is already easier than trying to make one in Blender, so I'm pretty happy so far.  But I would like to do my own filtering and lookups for two purposes: one, to effortlessly render directly to sphere map (which is the out-of-the-box environment mapping for the renderer I'm using), and two, to try out dynamic cube mapping so I can play with something approaching real-time reflections.  Also, eventually, I'd like to do realish-time angular Gaussian on the cube map so that I can get a good feel for how to map specular roughness values to Gaussian-blurred environment miplevels.  It's hard to get a feel for that when it requires processing through several independent, slow applications.

Unfortunately, the math to do lookups and filtering is challenging, and I can't find anybody else online doing the same thing.  It seems to me that I'm going to need a world-vector-to-cube-cross-UV function for the lookup, then a cube-cross-UV-to-world-vector function for the filtering (so I can point sample four or more adjacent texels, then interpolate on the basis of angular distance rather than UV distance.)
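The interpolation step described at the end of that paragraph could be sketched like this — a minimal illustration of weighting nearby texels by angular distance instead of UV distance. All names here are illustrative, not from any existing codebase, and the inverse-angle weighting is just one plausible choice of kernel:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// dir and texelDirs must be normalized. Each texel is weighted by the
// inverse of its angular distance to the lookup direction.
Vec3 AngularInterpolate(Vec3 dir, const Vec3 texelDirs[4], const Vec3 texelColors[4])
{
    float weights[4];
    float weightSum = 0.0f;
    for(int i = 0; i < 4; ++i)
    {
        // Angle between the lookup direction and this texel's direction,
        // with the dot product clamped against floating-point drift.
        float d = std::fmax(-1.0f, std::fmin(1.0f, Dot(dir, texelDirs[i])));
        float angle = std::acos(d);

        // Inverse-distance weighting; the epsilon guards an exact hit.
        weights[i] = 1.0f / (angle + 1e-5f);
        weightSum += weights[i];
    }

    Vec3 result = { 0.0f, 0.0f, 0.0f };
    for(int i = 0; i < 4; ++i)
    {
        float w = weights[i] / weightSum;
        result.x += w * texelColors[i].x;
        result.y += w * texelColors[i].y;
        result.z += w * texelColors[i].z;
    }
    return result;
}
```

When the lookup direction coincides with one of the texel directions, its weight dominates and the result collapses to that texel's color, which is the behavior you want at texel centers.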

First, I'm wondering if there's any kind of matrix that I can use here to transform vector to cube-cross map, rather than doing a bunch of conditionals on the basis of which cube face I want to read.  This seems like maybe it would be possible?  But I'm not really sure, it's kind of a weird transformation.  Right now, my cube cross is a 3:4 portrait, going top/front/bottom/back from top to bottom, because that's what cubemapgen wants to see.  I suppose I could make another texture from it with a different orientation, if that would mean I could skip a bunch of conditionals on every lookup.

Second, it seems like once I have the face, I could just use something like my rendering matrix for that face to transform a vector to UV space, but I'm not sure that I could use the inverse of that matrix to get a vector from an arbitrary cube texel for filtering, because it involves a projection matrix-- I know those are kind of special, but I'm still wrapping my head around a lot of these concepts.  I'm not even sure I could make the inverse very easily; I can grab an inverseProj from the engine, but I'm writing to projM._11_22 to set the FOV to 90, and I'm not sure how that would affect the inverse.

Really interested in any kind of discussion on techniques involved, as well as any free resources.  I'd like to solve the problem, but it's much more important to me to use the problem as a way to learn more.

##### Share on other sites

Getting the cubemap face and UV coordinates from a direction vector is fairly simple: the component with the largest absolute value determines the face, and the other two components become your UVs once you divide them by that max component and remap them from [-1, 1] to [0, 1]. Here's some example code for you from one of my open-source projects:

```cpp
template<typename T> static XMVECTOR SampleCubemap(Float3 direction, const TextureData<T>& texData)
{
    Assert_(texData.NumSlices == 6);

    float maxComponent = std::max(std::max(std::abs(direction.x), std::abs(direction.y)), std::abs(direction.z));
    uint32 faceIdx = 0;
    Float2 uv = Float2(direction.y, direction.z);
    if(direction.x == maxComponent)
    {
        faceIdx = 0;    // +X
        uv = Float2(-direction.z, -direction.y) / direction.x;
    }
    else if(-direction.x == maxComponent)
    {
        faceIdx = 1;    // -X
        uv = Float2(direction.z, -direction.y) / -direction.x;
    }
    else if(direction.y == maxComponent)
    {
        faceIdx = 2;    // +Y
        uv = Float2(direction.x, direction.z) / direction.y;
    }
    else if(-direction.y == maxComponent)
    {
        faceIdx = 3;    // -Y
        uv = Float2(direction.x, -direction.z) / -direction.y;
    }
    else if(direction.z == maxComponent)
    {
        faceIdx = 4;    // +Z
        uv = Float2(direction.x, -direction.y) / direction.z;
    }
    else if(-direction.z == maxComponent)
    {
        faceIdx = 5;    // -Z
        uv = Float2(-direction.x, -direction.y) / -direction.z;
    }

    // Remap from [-1, 1] to [0, 1]
    uv = uv * Float2(0.5f, 0.5f) + Float2(0.5f, 0.5f);
    return SampleTexture2D(uv, faceIdx, texData);
}
```

I don't think there's any simple matrix or transformation that will get you UV coordinates for a cubemap that's set up as a "cross". It would be easier if you had all of the faces laid out horizontally or vertically in cubemap face order (+X, -X, +Y, -Y, +Z, -Z), but if that's not an option, it only takes a bit of extra computation to go from face index to cross coordinates.
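That extra computation can be as simple as a per-face table of cell offsets. The sketch below (my own, not from the post's code) maps a face index plus an in-face UV into a single UV for a 3-wide by 4-tall vertical cross; the offsets in `kCellOffsets` are one plausible arrangement and would need to be adjusted to match whatever layout cubemapgen actually expects:

```cpp
#include <cassert>

struct Float2 { float x, y; };

// Hypothetical cell offsets (in cell units) for each face inside a
// 3-column by 4-row vertical cross, in face order +X, -X, +Y, -Y, +Z, -Z.
// Adjust these to match your tool's actual cross layout.
static const int kCellOffsets[6][2] = {
    {2, 1}, // +X: right arm
    {0, 1}, // -X: left arm
    {1, 0}, // +Y: top of the column
    {1, 2}, // -Y: third cell down
    {1, 1}, // +Z: center
    {1, 3}, // -Z: bottom of the column
};

// faceUV is in [0, 1] within the face; the result is a UV in the full
// cross texture (also [0, 1] on each axis).
Float2 FaceUVToCrossUV(int faceIdx, Float2 faceUV)
{
    assert(faceIdx >= 0 && faceIdx < 6);
    return Float2{ (kCellOffsets[faceIdx][0] + faceUV.x) / 3.0f,
                   (kCellOffsets[faceIdx][1] + faceUV.y) / 4.0f };
}
```

In a shader you'd store the offsets in a constant array and index it with the face index, so the cross-layout math itself needs no conditionals (the face selection still does).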

From there doing bilinear filtering isn't too hard by just treating the texture as 2D, but smoothly filtering across cubemap faces requires all kinds of special logic.
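For the angular-distance filtering mentioned in the original post, the missing piece is the inverse mapping: getting from a texel (face index plus in-face UV) back to a world-space direction. No inverse projection matrix is needed; you just undo the [0, 1] remap and put the major-axis component back. Here's a sketch of my own, using the same face orientations as the sampling code above:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// faceIdx follows the order +X, -X, +Y, -Y, +Z, -Z; u and v are in [0, 1].
// Returns the normalized direction through the center of that texel.
Vec3 TexelToDirection(int faceIdx, float u, float v)
{
    // Remap [0, 1] -> [-1, 1].
    float a = 2.0f * u - 1.0f;
    float b = 2.0f * v - 1.0f;

    Vec3 d;
    switch(faceIdx)
    {
        case 0:  d = {  1.0f,   -b,     -a   }; break; // +X
        case 1:  d = { -1.0f,   -b,      a   }; break; // -X
        case 2:  d = {  a,       1.0f,   b   }; break; // +Y
        case 3:  d = {  a,      -1.0f,  -b   }; break; // -Y
        case 4:  d = {  a,      -b,      1.0f}; break; // +Z
        default: d = { -a,      -b,     -1.0f}; break; // -Z
    }

    float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    return Vec3{ d.x/len, d.y/len, d.z/len };
}
```

With this, you can point sample the four neighboring texels, convert each to a direction, and weight by angular distance rather than UV distance, as the original post describes.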

##### Share on other sites

Thank you!

Was working on this since writing but wasn't getting anywhere.  I'd just given up when I read your message, figuring I'd wait until I'm smarter.  Replaced my ridiculous, non-functional code and it works.  Now I just have to figure out why the + and - signs and the .zy vs .yz swizzles are what they are, since I just trial-and-errored it.

I'm sure there's a reason cubemap filtering goes so slowly.  But at least I've already found things to read and try when it comes to that, so hopefully I won't get stuck.
