How to put different textures on different sides of a Cube in DirectX 11?

Started by
8 comments, last by BBeck 7 years, 6 months ago

I have recently learned how to texture a cube in DirectX with a single texture, but now I am trying to put multiple textures on a cube. If you're confused, the photo attached below should clarify.

If someone can explain how I can have different textures for different cube sides or any shape with code that would be great.

The MSDN documentation says that I need to use a Texture Cube/Array but it lacks information/code regarding implementation.

Just quickly: you will need to set your texture resource and draw each side individually. That's the most primitive way. There are more complicated ways, but simply drawing a side and then changing the texture is a good start.

Indie game developer - Game WIP

Strafe (Working Title) - Currently in need of another developer and modeler/graphic artist (professional & amateur's artists welcome)

Insane Software Facebook

I am aware of cube mapping, which seems like the right way to do this, but all the tutorials on the internet use a skybox, which is different from what I'm trying to do. Could you explain how I can do that for a simple cube? Is the way you described the better way, or is it what most engines like Unreal or Unity use?

How you would do this depends on how static the textures that you are applying are - do you want each side of the box to be different, but each box to be the same, aside from rotations? If that is the case, then usually what is done is to create a texture similar to a 2D pattern that could be folded up into the 3D box shape, and UV-map the faces of the box to match the texture.

For instance:

(image: crate-texture.jpg)

Eric Richards

SlimDX tutorials - http://www.richardssoftware.net/

Twitter - @EricRichards22

I was thinking of doing this, but I am confused about how to get the right UV coordinates for the cube map above. With a single texture (not a cube map), it's as easy as defining 0,1 or 1,0. With this method, how can I obtain the exact UV coordinates for each face? Is it just trial and error until it looks right?

You can see the laid-out texture as a grid of X pixels × Y pixels.
For each side you pick the starting X and Y position for the first UV, then add or subtract the number of pixels that one side of the cube needs (i.e., one of the 6 parts of the texture).

For future use this is the way to go; making 6 texture switches just to draw one cube is only interesting for testing/learning, because of the performance cost.

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me

As an alternative to cubemapping use a standard texture array with 6 slices. Your texture coordinates become 2 floats and an int rather than just 2 floats, with the 2 floats being the very same as what you would use for a standard 2D texture and the int specifying which slice to sample from. Use Texture2DArray::Sample rather than Texture2D::Sample in your HLSL, and beware of some tutorials (e.g. the popular RasterTek one seems to suffer from this) that try to teach you that a texture array is just an array of textures (it's not).

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

I think I'm starting to understand what you're asking. If I'm understanding you right, you're saying you've used a single texture and repeated it across all the faces of the cube using UV coordinates. What you want to do is as shown above, where you have a T-shaped (cross) set of images in a single picture and you want to texture with that.

You could map those coordinates manually, but the easiest way to do this is what they call UV wrapping. That's where you take a complex model and texture it. Typically, you would make your model, in this case a cube, in a modeling program like Blender and then UV unwrap it to produce a UV map. You would then paint the model on the UV map texture and then wrap it around the model. This video seems to basically explain the process.

Another option is to assign a completely different texture to each face. I "believe" you have to model all 6 faces of the cube as completely separate squares. Then you can assign a texture to each square just like texturing any other square.

I don't believe there is a way to assign more than one UV coordinate to a vertex without getting pretty fancy with a custom shader. Typically it's one set of UV coordinates per vertex, and that means the vertex has to map to the same spot in the picture for all 3 faces it touches. That's why the above solution with UV wrapping works, but it won't work to assign completely different textures to each face when they share a vertex. Nonetheless, you can model it so each face has its own vertices (6 quads, using a lot more vertices) and then assign each a different set of UV coordinates and a completely different picture.

UV wrapping is what you want for most things, most of the time. The problem is that you are limited to the resolution of a single texture. So for a building, a single texture might be stretched too much over the entire building. You could model each wall of the building as a separate model and parent them together as a single object. Being separate models, they could each have their own UV maps and textures. You might even be able to use the same model for 3 sides of the building and then a different one for the front of the building requiring only two models with two textures and just repeating 1 of them 3 times. They can be joined as a single model in the modeling program even if they are not physically joined. In Blender, creating new models in edit mode will make all the models one object. Creating a new object in object mode will create completely separate objects.

One thing you might be able to do that could help you understand the T textures above is to UV-wrap a cube and then open up the data files to see what the UV coordinates look like. The coordinates are still between 0 and 1, but a single square could run from 0.5 to 0.75 or something like that. You are fitting all those different squares into texture coordinates between 0 and 1, since 0 is one corner and 1 is another.

These T-shaped layouts are pretty standard and pretty simple, so for them you can, if you really want, assign the UV coordinates by hand to match the picture above. Presumably the 4 squares running vertically divide the texture into 4 equal parts, so their vertical UV coordinates go from 0 to 0.25 to 0.5 to 0.75 to 1.0, right off the top of my head. I don't think it's divided so equally horizontally, so you would probably have to find the horizontal coordinates by trial and error. Remember that the model is likely sharing a single vertex, and that vertex maps to a spot in the photo that works for at least 2 faces, and possibly 3.

Any time the model gets more complex than a cube, it's going to become impossible to do this by hand. Then you'll have to use a modeling program like Blender for UV wrapping and un-wrapping.

There are quite a few YouTube videos on UV wrapping, so just do a search on that and you'll likely find more information than you want.

Thanks for the responses, everyone. I now understand that I need to make a cube map like the texture ericrichards posted and then map it using UV coordinates. I find the UV coordinates by working out the size of one face in UV space and using simple math to find the UVs of the rest of the faces. This is fine for now, but I want to implement reflections in my engine in the future, so could somebody explain, with code, how I can use a TextureCube to achieve the same cube-map effect (using the same method as a skybox, but with simple geometry)?

Here's a video that seems to address the exact situation of the cube you were talking about. It's another UV wrapping video, but it seems to cut to the chase on cubes.

There are at least a couple of different ways to do reflections. For a reflective plane like water, you basically create another camera with a reflected vantage point to draw the "reflection" on the surface of the water, and you also introduce some transparency and waves.

What I think of as "cube mapping" is what you seem to be talking about, where you put the reflection of the area into the paint of a car, for example. I've never done that, so I barely even understand it in concept. But basically my understanding is that you figure the pixel normal for each pixel on the model and point it at a cube image that surrounds the car like a skybox. Then you can sample that spot on the cube map to get the reflected color for that one pixel. I would have to go study the subject to tell you much beyond that, but I know cube mapping is how they do 360-degree reflections like on cars.

Also, I know they "cheat" a lot on these. To do it exactly, you would programmatically build a cube map in real time from the 6 directions at the object's vantage point. This would give you a 100% accurate cube map for reflections in real time, although it's a fair amount of calculating. You're probably talking about rendering to 6 render targets with 6 cameras (view matrices) and building a cube map out of that. Then, once you have the cube map built, you could use it to determine reflections from the environment on the object.

A cheaper method would be to just use the skybox. But I've seen programmers go for an even cheaper method, which is to use a cube map texture that kind-of-sort-of-half-seems-like-it-might-almost-be-something-that-vaguely-resembles-something-one-might-interpret-as-roughly-similar-to-something-that-one-might-expect-the-surrounding-area-to-possibly-look-like-if-they-were-in-a-similar-environment. They have a generic cube map that is something like a blurry skybox that vaguely resembles the environment, and they use that as a "reflection" on the object even though it's actually so blurry you can't make much out and it barely resembles the actual environment it's supposedly reflecting.

Still, if it's good enough to convince people it's a reflection, then they can believe they're looking at a chrome object, for example, because it's "reflective". I've seen it done, and it actually works pretty well, believe it or not.

Regardless of how you build your cube map, whether it's 100% realistic or just a poor facsimile, using it for reflections should be the same. And for that, I barely know the concept well enough to guess how they're doing it from tangential exposure to the subject. But in theory, it would seem to be a matter of getting a pixel normal, which is just an interpolated normal between the 3 vertex normals of the triangle (that's what you're typically working with in a pixel shader), then figuring out which spot on the surrounding cube it points to and using that color for the reflection. Actually, now that I think about it, you would use the pixel normal to determine the plane of the pixel, then calculate a vector from the camera to the pixel and reflect it off that plane to point to a spot on the cube map, as if the pixel were a mirror. So it wouldn't be the color the pixel normal points to, but rather the color of the reflection off the pixel plane, which is at the opposite angle of the camera to the pixel plane. Just think of the pixel as a mirror facing in the direction of the pixel normal. Then, from the camera position, what spot on the cube surrounding the object would be reflected in that pixel mirror?

Here's a video. They use the cube twice: first they render it as a cube, or skybox, to create the environment; then they use the same cube in a totally different way to calculate the reflection off the object. The key thing to understand is that they could have made the reflection on the object show an entirely different environment than the skybox in the scene. Because it's the same cube image, it appears to be a reflection of the environment.

