Okay, I'll explain the whole thing.
In the position buffer I'll have a set of 16 vertices. The first 8 vertices form a cube with sides of length 3.0f; the second 8 are the same cube, but elevated by 1.0f. That way I can use an index buffer to render 15 distinct cube shapes (plus one quad shape for sprites) with different 'corners' of the cube raised by 1.0f, which lets me draw moderately complex geometry using a sort of grid.

All of the textures to be applied to the sides of the individual blocks would be stored in a single texture composed of 32x64 tile pairs (one tile for the top of the cube and one for the sides). The U/V vertex buffer will basically be a set of texture coordinates at the corner points of each tile, and the index buffer matched to this vertex buffer would cause a different tile to be drawn on the currently rendering brick based on the start index.

So at the start of the map scene, I can upload the texture, both vertex buffer parts, and both index buffers. They should be pretty small, so for the duration of the entire scene I can render any shape in any position using any tile from the texture by sending only:
*position buffer start index
*texture coord buffer start index
Basically, since the data is so small, I want to upload it to the device once and then use it for the whole scene, avoiding having to continually send redundant data, since both the positions and the coordinates can so easily be specified and ordered beforehand.
The docs on vertex declarations state that their intended use is to eliminate the need to send redundant data over the bus, which is one of the reasons I have a feeling there's some way to make this happen.
If it can't be done on the CPU side, do you think I could write a vertex shader for something like this? I've only messed with pixel shaders before.
Thank you for your response, also.
The reason I want to split the stream is...