[DX9] Multi-textured meshes, sampler index trouble

Started by
7 comments, last by Super Llama 11 years, 6 months ago
I'm trying to make a level rendering system using a single mesh for the map geometry. I'd planned on using the vertices' W coordinate as a texture ID and then passing multiple textures to the shader using device->SetTexture. I got this far before I realized that you can't index a sampler array with a variable, which I'm very upset about, since it invalidates my entire plan unless I want to make a massive mess of else-ifs... is there some better way to do this? A 3D texture would work on the shader side of things, but it'd be harder to set up in my C++ backend. I just want a way to put multiple textures on one mesh, choosing which texture to draw using data from the vertices. Because they're tiled, I can't just bake them all into one texture either. I wish there were a way to convert an array of sampler2Ds into one sampler3D, but I'm sure there isn't... any other ideas?

I'm using C++ with DX9. No FX, just shaders and SetShaderConstantF and such.
It's a sofa! It's a camel! No! It's Super Llama!
I think that you are out of luck with the described technique since (AFAIK) you can't dynamically index the samplers in D3D9.

One way to minimize the texture changes is to use a texture atlas, which may be generated offline.
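To make the atlas idea concrete, here's a minimal C++ sketch of the UV remapping an atlas forces you to do by hand (the names `UV` and `atlasUV` are made up for illustration, not from any library): a tile-local, possibly tiling UV is wrapped into [0,1) manually and then offset into that tile's sub-rectangle of a square atlas.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: remap a tile-local UV (0..1, possibly tiling) into
// the sub-rectangle that tile occupies in a square atlas of
// tilesPerSide x tilesPerSide tiles.
struct UV { float u, v; };

UV atlasUV(UV local, int tileIndex, int tilesPerSide)
{
    float tileSize = 1.0f / tilesPerSide;   // size of one tile in atlas space
    int   tx = tileIndex % tilesPerSide;    // tile column
    int   ty = tileIndex / tilesPerSide;    // tile row

    // Wrap the local coordinate into [0,1) by hand, then offset into the tile.
    float wu = local.u - std::floor(local.u);
    float wv = local.v - std::floor(local.v);
    return { (tx + wu) * tileSize, (ty + wv) * tileSize };
}
```

The manual wrap is exactly the catch mentioned above: hardware wrap addressing no longer works per tile, and mip leakage across tile borders needs border padding on top of this.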

Of course, it's also good to ask yourself whether this is something you absolutely must do. Have you measured that you are bound by the number of different textures / texture changes?

Cheers!
I guess I could use a texture atlas with manual tiling, but I really would prefer to use separate textures if possible. I kinda think that if I'm willing to take the time to make a texture atlas generator, I might as well just make a 3D texture generator instead, since then I could use the W coordinate like I originally wanted. Though I did just test a massive switch case and technically that works, but only with up to 16 samplers, which probably won't be enough for an entire map.
It's a sofa! It's a camel! No! It's Super Llama!
I think the problem with a 3D texture is the mip mapping. I recall a case a few months back where someone tried to pack terrain textures into a 3D texture; the mip mapping was uncontrollable, and at a distance the textures blended with neighbouring slices.

I'm not sure why you want to make your map a single mesh. Doesn't it make frustum culling rather difficult, for example?
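For reference, per-chunk frustum culling can be as small as a bounding-sphere-versus-six-planes test. A hedged C++ sketch (the names and the plane convention are assumptions: inward-pointing normals, with `n·p + d >= 0` meaning inside):

```cpp
#include <cassert>

// Hypothetical sketch: cull a chunk by testing its bounding sphere against
// the six frustum planes. If the sphere is fully behind any plane, skip it.
struct Plane  { float nx, ny, nz, d; };
struct Sphere { float x, y, z, r; };

bool sphereInFrustum(const Sphere& s, const Plane planes[6])
{
    for (int i = 0; i < 6; ++i) {
        float dist = planes[i].nx * s.x + planes[i].ny * s.y
                   + planes[i].nz * s.z + planes[i].d;
        if (dist < -s.r)        // completely behind this plane
            return false;
    }
    return true;                // intersecting or inside all six planes
}
```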

Cheers!
That's true, I hadn't thought of frustum culling. The main reason I wanted it to be one mesh is because the collision response is easier to calculate that way and it's also easier to export from 3dsmax in one piece rather than several. Cutting it into one mesh per triangle would be ridiculous, and I'm not really sure what would be a good intermediate step. I mean, technically I could make all maps using an additive scheme like the interiors in the Elder Scrolls games, using nothing but single-texture single-mesh objects all put together into a world... but I'd planned on using Valve Hammer Editor, which I'm very used to, exporting to DXF, opening them in 3dsmax and exporting to my format, then texturing them in an in-engine tool with special W coordinate stuff. It sounded great until I hit this obstacle...

EDIT: to restate-- I'm not sure where to divide a mesh up if I were to do so-- dividing it based on texture wouldn't help with culling, but dividing it based on culling wouldn't help me texture. Technically I could divide it based on culling and then use a texture atlas, but that's actually quite a bit of work...
It's a sofa! It's a camel! No! It's Super Llama!
After reconsidering the volume texture, I guess it should be possible to clamp the highest mip level used so that there shouldn't be problems with nearby slices leaking. However, the bigger issue is that a volume texture limits you to one and the same texture format / size for the whole mesh. You'd probably want a different texture format for normal maps, for example.
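One detail if the volume-texture route is tried anyway: the per-vertex texture ID has to be converted to a normalized third texcoord that lands on the *center* of a slice, or filtering will blend in the neighbouring layers. A tiny sketch (the function name is made up):

```cpp
#include <cassert>

// Hypothetical sketch: map an integer texture ID (e.g. the vertex W) to the
// normalized depth coordinate of that slice's center in a volume texture
// with sliceCount slices stacked along the third axis.
float sliceCoord(int textureId, int sliceCount)
{
    return (textureId + 0.5f) / sliceCount;   // +0.5 hits the slice center
}
```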

Handling collision and rendering with the same geometry may not be a good idea. It may seem logical to tie the two together since both are geometry related, but that may be all they have in common. You should consider using a simpler, separate geometry for collision. A separate collision library can also help you structure your program so that the collision and rendering data stay separated.

Cheers!
My collision engine does allow for a mesh to have a separate triangle list for collision, but it's still 1 physobject per mesh. I pretty much code everything from scratch except DirectX and Windows, so I don't use third party libraries. It's just an Ellipsoid/Polygon intersection test, I don't have Poly-Poly collision yet and when I do it'll probably be convex polygons only. For now, though, the map file has one mesh that's already fairly low poly, and I've been using the same mesh for rendering and collision because of the fact that it's a hammer map exported into my format.
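For context, the classic trick behind ellipsoid/polygon tests (which may or may not match this particular implementation) is to scale space by the inverse ellipsoid radii so the ellipsoid becomes a unit sphere; plain sphere-versus-triangle maths then applies in that space. Only the space conversion is sketched here, with made-up names:

```cpp
#include <cassert>
#include <cmath>

// Hedged sketch of the unit-sphere-space trick for ellipsoid collision:
// divide positions by the ellipsoid radii (assumed non-zero), run sphere
// tests there, then scale results back.
struct Vec3 { float x, y, z; };

Vec3 toEllipsoidSpace(const Vec3& p, const Vec3& radii)
{
    return { p.x / radii.x, p.y / radii.y, p.z / radii.z };
}

float length(const Vec3& v)
{
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}
```

A point on the ellipsoid surface ends up at distance 1 from the origin in the converted space, which is what makes the unit-sphere tests valid.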

Technically, if I were to take your suggestion of using a single mesh for collision and several for rendering, I would just make the one-piece map mesh non-renderable, keep it as the physics mesh, and fill the scene with a lot of non-physics objects set up for culling. Still, I'll have to figure out what criteria to cull on -- I don't have any sort of area portal system and I'd rather not manually separate the pieces in 3dsmax... though technically that is an option. Another idea, though -- I did manage to get a W-based texture selector working using a massive switch case, but it only supports up to 16 textures because of the SM3 sampler limit. However, if I were to chunk the map into a bunch of culled pieces, it'd be easy to keep each chunk under 16 textures at least.

So to summarize, I'm thinking I could turn off rendering on my big one-piece non-textured map file, split it into a bunch of non-physical pieces, multitexture each of the pieces separately, then draw the pieces separately using frustum culling. Technically, if I make a few changes to the collision response, I don't even need the one-piece physics object and can make the chunks physical as well. Do you see any problems with the switch-based per-chunk multitexture idea? Is swapping out a bunch of samplers per pixel shader painfully slow or something? It sounds like it would work to me, but that massive switch case does make me feel a little nauseous :P
It's a sofa! It's a camel! No! It's Super Llama!
I'd avoid the switch-based texture selection. Of course, you may run some quick tests to see how it affects performance. The hit is probably big, or at least, as your levels get more complex, you may eventually find that a big part of your processing power is lost there. I wouldn't worry about the texture changes, especially since the number of textures seems to be rather low. You can still sort your draw calls so that there is the least amount of texture changes.
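The sorting suggestion can be sketched in a few lines of C++ (`DrawCmd` and the helper are hypothetical, not part of D3D): sort by texture ID, then a state change is needed only when the ID differs from the previous draw.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical sketch: sort the draw list by texture ID so consecutive
// draws share a texture; SetTexture would then be called only when the
// ID actually changes. Returns how many changes remain after sorting.
struct DrawCmd { int textureId; int meshId; };

int countTextureChanges(std::vector<DrawCmd>& cmds)
{
    std::sort(cmds.begin(), cmds.end(),
              [](const DrawCmd& a, const DrawCmd& b)
              { return a.textureId < b.textureId; });

    int changes = 0, current = -1;
    for (const DrawCmd& c : cmds) {
        if (c.textureId != current) {   // the real code would SetTexture here
            current = c.textureId;
            ++changes;
        }
    }
    return changes;
}
```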

Also, for physics, you can and should use several collision objects for an entire level. Of course it is perfectly possible to use a polygon-soup mesh for the level, but typically it is a lot simpler to use geometric primitives such as boxes, spheres, capsules etc. Keep in mind that most games don't use poly-poly tests, since that's rather expensive and unnecessary. I understand your desire to write everything yourself, but I'd check some available and easy-to-implement options such as Bullet.

You may use some sort of spatial partitioning tree to split your level at the triangle (or preferably mesh) level. An octree or kd-tree may do the job.

Best regards!
I did look into BSPs and such, but my main weakness is writing an editor for something rather than writing the system itself. My engine is very strangely coded, and even the "easy to implement" things like Bullet or Lua would require far more work than usual to make them play nice with my system. I won't go into detail, but I have a very polymorphic exe that loads different scenes (including editors) depending on the "map file" you load. And the map file is not just geometry and entities -- it's basically an entire program, and may or may not have GUIs, physics objects, shaders, textures, etc., all potentially embedded inside it. I also have my own compiled scripting language specifically for creating these map files.

I guess my wacky interface wouldn't really be too much of a roadblock for implementing a third party tool if I really needed to, but I really just prefer making it myself because I can understand fully how it works. If anything in my engine is slow, I want it to be my fault and my fault only. I don't take the beaten path with coding, and I never have... I really don't know why. However, this doesn't apply with editors, and I'd much rather figure out a way to texture an exported hammer map than make my own new map editor. I suppose an additive mesh system suits the engine better the way it's built anyway-- the main reason I want multitex is because I want to be able to use hammer's excellent brush toolset, and the easiest way I can find to render it is using multiple textures.

I guess it's just time to give up on Hammer and use static mesh pieces to build maps; that seems like the best alternative to Hammer plus multitexturing.
It's a sofa! It's a camel! No! It's Super Llama!

This topic is closed to new replies.
