Projecting a Textured Mesh onto a Cube Texture


Hi,

I have a technique I'm trying out in order to reduce poly count on distant multi-textured objects. The approach is to generate a convex hull of the object and render that instead (when at a distance). I do this reasonably successfully for my shadows - although they are clearly not high-quality shadows, for buildings and trees etc. they suffice for me, and this means I can render only the hull on my shadow-casting pass. However, I want to extend the low-poly goodness of hull drawing by texturing my hull using a cube map.

In order to properly texture the 3D hull I want to use texCUBE to sample a cube map, prepared during conditioning, of the multi-textured mesh fully rendered.

Although I can create a "sky box" map of each of the faces of the original mesh (rotated orthogonally), it's not what is needed here, I don't think. I want to be able to reverse the process of texCUBE by projecting the mesh onto the texture render target "from within the mesh" - i.e. with the camera at the mesh centroid. I would hopefully generate a cube map of the mesh which I can later use with texCUBE to render my hull.

A couple of questions:

1) Is this approach common/reasonable?

2) How do I go about projecting the cube map from my multi-textured mesh? The projection is neither orthographic nor perspective but "exploded", and I've no real idea whether that is feasible.

Thanks

Phillip


It took me a few minutes to figure out what you were asking, but I think I understand now, and no, it's not really practical. This is how I understand it:

  • You want distant meshes to instead draw as cubes, each with an individual cube map representing the view of the mesh at each face (6 in total).
  • Using whichever faces the view matrix can see, you want the mesh to be "reconstructed" to look as if it is a fabrication of the mesh (like a billboard).

If I have understood correctly, this means there are some unfortunate flaws, as follows:

  • You will be storing a cube map per mesh subject to this technique, which will likely consume far more VRAM than a large number of vertices and indices. Here is a quick example (there's also a sanity-check snippet after this list):

    512 * 512 * 6 faces * 4 bytes = approximately 6 megabytes per cube map, where each face is 512 x 512 pixels at 32 bits per pixel.
    Versus 300,000 vertices * 20 bytes stride = 5.72 megabytes per 100,000 unique triangles, where each vertex is a VertexPositionTexture.

    Looking at this, I think most people would rather distribute 100,000 triangles between meshes than store 6 MB per single mesh - it's more efficient.
  • I have seen a technique talked about by Renaud Bédard (the guy who programmed Fez) where a Texture3D is linearly sampled in the shader. Technically this could be used to achieve what you asked, but it is complicated and very shader-heavy. It should be noted he used the technique to store many tiles in one texture, so in his case, although a 3D texture grows cubically with resolution (far larger than a cube map), it was a worthwhile trade-off versus texture swapping.
  • Even if you do elect to use this technique, it means that on start-up of the scene you will need to make the user wait while each face is rendered for each cube map. The wait may not be noticeable with a few cube maps in the scene, but I bet it becomes more and more undesirable the more cube maps you throw into the mix.
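
If you want to sanity-check the storage figures, here is a trivial snippet - just the arithmetic from the first bullet, nothing more:

// Quick check of the numbers in the first bullet.
long cubeMapBytes = 512L * 512L * 6 * 4;   // 6,291,456 bytes ~= 6.0 MB per cube map
long meshBytes    = 300000L * 20;          // 6,000,000 bytes ~= 5.72 MB for ~100,000 triangles
System.Console.WriteLine("cube map: {0:N0} B, mesh: {1:N0} B", cubeMapBytes, meshBytes);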

I think your idea is intuitive, but at the same time there is a big part of me crying out that this technique is like trying to open a door with your shoulder blades. There are time-tested techniques for dealing with distant objects, such as frustum checking, back-face culling, fog (oldie but goldie), BSP trees, and many many more. I bet you are familiar with many of them, but it's worth noting that these techniques are common because people find them reliable.

Anyway, down to brass tacks: when all else fails, remember that the most common technique is often the best.

Aimee


Oh, hang on, I think I misinterpreted what you were asking; let's take another stab at it. So you have one huge cube map that surrounds the scene, and you want to render distant meshes onto that one cube map so distant meshes don't have to be rendered individually.

In this case, for outdoor scenes where the ground is flat this may work, but for uneven ground and indoor scenes you'll likely encounter problems with how you deal with depth and perspective. To add a little irony, as you move through the scene you would likely have to reconstruct that one cube map often as things get closer or farther away, which means any performance gain from the technique will be close to nullified.

Although this may quash parts of what I said previously, a lot of the previous post still applies. Did you know, for example, that in Half-Life 2 they used a technique where all geometry that would permanently be far away from the player was very low-poly? Just another very helpful and common technique that could offer what you are after.

Aimee


AmzBee - thanks for your detailed responses. I fear I have not explained myself well - I already use all the techniques you mention; my journey and thinking are here: http://philliphamlyn.wordpress.com/2013/07/21/hulls-instead-of-simplification/

The texCUBE function allows me to sample a pre-rendered texture based on the mesh position vertex and its relationship to the cube centre. I want to understand how to make the texture map that is sampled by texCUBE. I have a simplistic hull of my 3D shape (therefore low-poly, being a hull) and want to texture it using the cube texture I'd normally provide to texCUBE. The part I don't understand is how to go about projecting my mesh onto the cube texture (being a standard render target) such that it can be sampled in this way.
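
For concreteness, the runtime half I have in mind is roughly this - just a sketch with placeholder names (CubeTexture, MeshCentre, and the hull counts are illustrative, and the hull's vertex/index buffers are assumed to be bound already):

// The pixel shader would do something like:
//     texCUBE(CubeSampler, normalize(worldPosition - MeshCentre))
// so the only per-mesh data the effect needs is the cube texture and the centre.
effect.Parameters["CubeTexture"].SetValue(myTextureCube);   // built at conditioning time
effect.Parameters["MeshCentre"].SetValue(hullCentroid);     // same centre used to render the map
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
        0, 0, hullVertexCount, 0, hullTriangleCount);
}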

I'm generally happy that my basic technique of hull rendering saves me enough time to make it worthwhile - given that I have thousands of identical tree meshes, they are not unique and do not have unique texture maps, so rendering just the hull is really quite a powerful technique, and it only has to be done in the conditioning phase, not at runtime.

Visually, what I am after is a hint at how to project as though my camera were positioned at the centre of a mesh. I need to get 6 individual cube faces. Would putting my camera at the mesh centre and just doing 6 perspective projections do the trick?

I can easily do 6 billboard orthographic projections, but if I tried to sample them with texCUBE I would not get the effect I want.

Phillip.

It's fairly straightforward to render a mesh to each face of a texture cube; the restriction is that you must perform 6 draw calls, because each face of the texture cube needs to be assigned as a render target in turn. In our engine we use RenderTargetCube in combination with CubeMapFace, and set the view matrix individually for each face; we do this to perform shadow mapping for point lights. It incurs 6 draw calls like I said, but our engine is optimized to only draw relevant geometry, so it's very fast. In the following, MyRenderTargetCube can be cast to a TextureCube:


// Bind one face of the cube render target; repeat for each of the six CubeMapFace values.
GraphicsDevice.SetRenderTarget(MyRenderTargetCube, CubeMapFace.PositiveY);
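
To flesh that out, here is a rough sketch of the whole per-face pass - meshCentroid, farPlane, and DrawSourceMesh are placeholders for your own scene code, and the look/up pairs may need flipping depending on which handedness convention you settle on:

// Render the source mesh once per cube face, with the camera at the mesh centroid.
Vector3 centre = meshCentroid;
Matrix projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver2, 1f, 0.1f, farPlane);   // 90 degree FOV, square aspect: one face each

var faces = new[]
{
    new { Face = CubeMapFace.PositiveX, Look = Vector3.Right,    Up = Vector3.Up },
    new { Face = CubeMapFace.NegativeX, Look = Vector3.Left,     Up = Vector3.Up },
    new { Face = CubeMapFace.PositiveY, Look = Vector3.Up,       Up = Vector3.Forward },
    new { Face = CubeMapFace.NegativeY, Look = Vector3.Down,     Up = Vector3.Backward },
    new { Face = CubeMapFace.PositiveZ, Look = Vector3.Backward, Up = Vector3.Up },
    new { Face = CubeMapFace.NegativeZ, Look = Vector3.Forward,  Up = Vector3.Up },
};

foreach (var f in faces)
{
    GraphicsDevice.SetRenderTarget(MyRenderTargetCube, f.Face);
    GraphicsDevice.Clear(Color.Transparent);
    Matrix view = Matrix.CreateLookAt(centre, centre + f.Look, f.Up);
    DrawSourceMesh(view, projection);   // your normal multi-textured draw
}
GraphicsDevice.SetRenderTarget(null);
// MyRenderTargetCube can now be sampled as a TextureCube with texCUBE.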

I still maintain that the technique you are using will likely drive you bonkers later with the restrictions it imposes and the extra overhead that may not be apparent yet. But it's not my place to tell you what to do, and to tell you the truth, I am intrigued to find out if you manage to get it working :)

Aimee


AmzBee,

Thanks again for your reply. I appreciate your responses, but it seems I am not at all describing my intent clearly; I don't want to render a cube at runtime - I want to render the cube map at compile/conditioning time. I will then use the cube map to render the texture onto my single-face 3D mesh (not a cube) at runtime.

I probably am explaining this badly, so breaking it down might be more useful for other potential respondents.

1) I have a low-poly, single-face, 3D hull mesh, but no texture to draw on it. I already draw hundreds of these in my scene using hardware instancing. This is an intermediate step between my 2D billboards and my fully textured, complex multi-faced mesh.

2) I want to use a cubemap to decorate my 3D hull mesh using texCUBE.

3) This hull mesh represents a simplification of an existing but complex multi-faced mesh which I've converted into a hull for speed of rendering at medium distances.

4) I want my cube map texture to be generated at compile/conditioning time using the fully complex textured mesh, but I lack the knowledge of how to do this step.

In theory, the image here http://philliphamlyn.files.wordpress.com/2013/07/image14.png illustrates what I expect my cube texture to look like, BUT this is generated through orthographic projection, so it's not useful as a cube map; for this to work, each point on my hull must map to a coloured pixel on the cube map (using texCUBE), so my cube texture MUST be fully coloured, not an external projection.

Imagine my camera at a point central to my 3D mesh. The magic of texCUBE is that it projects a ray from the centre onto the cube map, thereby preventing me from needing very complex texture coordinates. I want to reverse that process with my complex mesh so that the ray projected from within the centre of the mesh paints the object "from within". By doing this - essentially an explosive projection of my mesh - I will generate a cube texture which can be used to drape any hull mesh of the same rough shape as the original mesh. To repeat: I will be doing this at compile/conditioning time - all I will do at render time is draw my hull mesh using the cube texture in a single batched hardware-instanced call. This image might illustrate my thinking: http://philliphamlyn.files.wordpress.com/2013/07/image17.png

I am beginning to think what I need is to use ray-casting concepts here: get the intersection point from the mesh centre to the most distant coloured mesh surface and record that point manually on a cube texture. That would mean casting one ray for each pixel on my destination cube texture; I had hoped there would be some better technique. Perhaps the only way to do this is to walk every possible cube map pixel (not insurmountable) and cast a ray towards the centre of my object, using the first surface pixel sampled - it will be correct from a projection point of view - but it's so close to the normal process of perspective rendering that I think I must be missing something.
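
In case it helps anyone, the brute-force version I'm imagining is something like this - IntersectMeshFarthest is a hypothetical helper (ray/triangle tests against the source mesh, returning the colour of the farthest hit), faceSize/meshCentroid/myTextureCube are placeholders, and this only shows the +Z face; the other faces would permute the axes:

// Walk every texel of one face, cast a ray outward from the centre, and keep
// the farthest hit so the outside of the mesh is recorded.
Color[] texels = new Color[faceSize * faceSize];
for (int y = 0; y < faceSize; y++)
{
    for (int x = 0; x < faceSize; x++)
    {
        // Map the texel to [-1, 1] on the +Z face.
        float u = (x + 0.5f) / faceSize * 2f - 1f;
        float v = (y + 0.5f) / faceSize * 2f - 1f;
        Vector3 dir = Vector3.Normalize(new Vector3(u, -v, 1f));
        texels[y * faceSize + x] = IntersectMeshFarthest(meshCentroid, dir);
    }
}
myTextureCube.SetData(CubeMapFace.PositiveZ, texels);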

Phillip

Update on my progress with this, for anyone in the future who might be interested.

I haven't quite solved it yet, but I'm trying the following.

1) Place my camera at the centroid point of the hull mesh, looking Forward.

2) Reverse the depth buffer, clearing to zero and using a GreaterThan depth function (to sample the farthest pixel, not the nearest - I want a picture of the outside, not the inside).

3) Rotate the mesh to each of its 6 orthogonal faces, and Render each face to a RenderTarget.

4) Stitch the 6 render targets into a single Cube Texture.

5) At runtime, sample the cube texture from the hull mesh vertexes using texCUBE.

I'm having a little trouble getting steps 2 and 3 working at the moment, but I think the depth-buffer reversal might be the key technique I'm looking for.
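
For step 2, the XNA state I'm experimenting with looks like this (a sketch; the rest of the per-face pass is as described above):

// Reversed depth test: keep the FARTHEST fragment, so the camera at the
// centroid records the outside of the mesh rather than the inside.
DepthStencilState farthestWins = new DepthStencilState
{
    DepthBufferEnable = true,
    DepthBufferWriteEnable = true,
    DepthBufferFunction = CompareFunction.Greater,
};

GraphicsDevice.DepthStencilState = farthestWins;
// Clear depth to 0 instead of the usual 1, so any drawn pixel passes Greater.
GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer,
                     Color.Transparent, 0f, 0);
// ...then render the mesh for this face as normal.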

I'll post an update with some visuals once I've got progress.

