Help with UV mapping procedural game

5 comments, last by TeaTreeTim 7 years, 10 months ago

Lately I've been working on redoing the mesh algorithms for my game. The game is set up as cubes, and I only render visible sides to save on performance. I also re-use vertices across multiple faces to stay under Unity's per-mesh vertex limit. If that's unclear, here's a picture:

cuberendering.png

In the example, the blue face would be the visible face. From my 8 possible vertices, triangles would be drawn across (1,4,5) and (1,0,4). As far as I know, this is working correctly. But now I need to do the same with UVs. I currently use a texture atlas, and I can't quite work out how to map the textures to the vertices, because each vertex can be part of multiple faces. Does anyone know a solution to this? I'll provide my code if needed.

I'm making an open source voxel space game called V0xel_Sp4ce! Help me out with the repository here: https://github.com/SpikeViper/V0xel_Sp4ce


Although it's highly dependent on art style, I've used an object-space triplanar mapping ("trimapping") technique to avoid UV mapping on my procedural terrain and CSG buildings. Each "material texture" is actually three textures, for the XZ, XY, and YZ planes. In the pixel shader, I use the object-space position and a (uniform) scaling parameter to sample each of the three textures, then use the object-space surface normal to blend between the three samples.

Con: no texture alignment; it's only suitable for tiled or detail textures on static objects.

Con: relatively expensive when applied to mostly planar-aligned geometry (three texture samples where one would do).

Pro: works on any geometry; with good texture selection, it looks like natural, carved, or poured material.

Pro: no mapping or stretching artefacts.
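The blend described above can be sketched in a few lines. This is Python standing in for pixel-shader code, with made-up names and an assumed sharpening exponent; the samples are plain colour tuples rather than texture lookups:

```python
# Triplanar blend sketch: weight the three plane-aligned texture
# samples by the absolute components of the surface normal.

def triplanar_weights(normal, sharpness=1.0):
    """Blend weights from an object-space normal; a higher sharpness
    exponent tightens the transition between planes."""
    wx, wy, wz = (abs(c) ** sharpness for c in normal)
    total = wx + wy + wz
    return (wx / total, wy / total, wz / total)

def triplanar_blend(sample_yz, sample_xz, sample_xy, normal, sharpness=1.0):
    """sample_yz/xz/xy are the three texture samples as (r, g, b) tuples."""
    wx, wy, wz = triplanar_weights(normal, sharpness)
    return tuple(wx * a + wy * b + wz * c
                 for a, b, c in zip(sample_yz, sample_xz, sample_xy))
```

A face whose normal points straight along one axis gets exactly that plane's sample; angled surfaces get a mix of all three.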

RIP GameDev.net: launched 2 unusably-broken forum engines in as many years, and now has ceased operating as a forum at all, happy to remain naught but an advertising platform with an attached social media presence, headed by a staff who by their own admission have no idea what their userbase wants or expects. Here's to the good times; shame they exist in the past.


Is that a little overkill for simple blocks, or no? Before, I built each face separately and used UVs to wrap a texture onto it, along with a specular and emission map. But that went a bit over Unity's vertex limit, so I decided to cut down and re-use vertices.


For exclusively plane-aligned faces, yeah it's overkill. I had domes and arbitrary 2D CSG polygons extruded upwards into buildings to cope with.


So, should I be making each face with different vertices, or is what I'm trying to do possible?


If you need different texture coordinates for different faces of your cube, then the faces can't share vertices. Simple as that.
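To make that trade-off concrete, here's a rough sketch (Python with hypothetical helper names, not Unity code) of emitting one visible cube face with its own four vertices and atlas UVs instead of sharing corners between faces:

```python
# Sketch: each visible face gets 4 dedicated vertices so it can carry
# its own atlas UVs. The 16x16 tile layout is an assumption.

def face_uvs(tile_index, tiles_per_row=16):
    """UV rectangle for one tile in a square texture atlas."""
    size = 1.0 / tiles_per_row
    u0 = (tile_index % tiles_per_row) * size
    v0 = (tile_index // tiles_per_row) * size
    return [(u0, v0), (u0 + size, v0), (u0 + size, v0 + size), (u0, v0 + size)]

def add_face(vertices, uvs, triangles, corners, tile_index):
    """Append a quad; corners are 4 positions in winding order."""
    base = len(vertices)
    vertices.extend(corners)
    uvs.extend(face_uvs(tile_index))
    # Two triangles per quad, indexed relative to this face only.
    triangles.extend([base, base + 1, base + 2, base, base + 2, base + 3])
```

The cost: a fully visible cube is 24 vertices instead of 8, but every face can point at a different atlas tile without UV conflicts.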

Assuming your cubes are axis aligned, you could use triplanar texturing, and only draw certain sides of the cubes in one draw call (so your cubes are only cubes conceptually - they wouldn't be organized like that in your index buffers). That way you only need one texture sample, instead of lerping between three in the pixel shader based on orientation (the thing Wyrframe is doing). And if you can determine the texture coordinates by x/y/z position, then you potentially wouldn't even need texture coordinates in your cube's vertices, which means you could re-use them between faces.
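One way to read the "only draw certain sides in one draw call" idea, sketched in Python with assumed data shapes (nothing Unity-specific): bucket faces by orientation, and give each orientation a fixed pair of position components to use as UVs, so vertices never need stored texture coordinates.

```python
# Sketch: group faces by orientation so each group can be drawn with a
# single fixed projection plane and one texture sample in the shader.

# Which two position components serve as UVs for each orientation.
UV_AXES = {
    "+x": (2, 1), "-x": (2, 1),   # project onto (z, y)
    "+y": (0, 2), "-y": (0, 2),   # project onto (x, z)
    "+z": (0, 1), "-z": (0, 1),   # project onto (x, y)
}

def bucket_faces(faces):
    """faces: iterable of (orientation_key, face_data) pairs.
    Returns one index-buffer-style bucket per orientation."""
    buckets = {}
    for orientation, face in faces:
        buckets.setdefault(orientation, []).append(face)
    return buckets

def uv_from_position(position, orientation):
    """Derive a UV from a vertex position and its face's orientation."""
    a, b = UV_AXES[orientation]
    return (position[a], position[b])
```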

Just use the normal in the pixel shader. If the normal is float3(0, 0, 1), then use world coordinates x and y for the texture coordinates. The point of triplanar is to merge plane-aligned samples based on which plane you are closest to; in your case, each face is aligned to exactly one plane.
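That per-face projection can be sketched like this (Python standing in for shader code; the function name is mine): pick which two world coordinates become UVs from the dominant axis of the normal.

```python
# Sketch of single-plane projection: the dominant normal axis decides
# which two world-space coordinates become the texture coordinates.

def uv_from_normal(position, normal):
    x, y, z = position
    ax, ay, az = (abs(c) for c in normal)
    if az >= ax and az >= ay:   # face points along Z: project onto XY
        return (x, y)
    if ay >= ax:                # face points along Y: project onto XZ
        return (x, z)
    return (z, y)               # face points along X: project onto ZY
```

For axis-aligned cube faces the normal has exactly one non-zero component, so there's no blending at all, just a cheap branch (or a swizzle chosen per draw call).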

This topic is closed to new replies.
