Well yeah, in a way I'm happy with the dark cube -- it's textured, it's lit (though, yeah, I agree it's way too dark; I've got to play with it a bit -- probably the ambient settings like you said, or the vertex normals, but it is lit), it's scaled, and it's positioned in the world properly.
The cube is deliberately overly complex: rather than repeatedly tweaking the scale in Maya, I just made a denser one and use the scaling features in my engine. Scaling up a cube that has few vertices makes the spatial interpolation of lighting coarser. In other words, if the vertices (using the normals specified) are where D3D actually evaluates the lighting model internally, then the light can "run out" before it reaches the next vertex, because the lit result falls off linearly as it's interpolated from one vert to the next (in theory that shouldn't be a problem, but it seems to be; I've seen an article somewhere along the line that mentioned the same thing). The quick and dirty solution for testing was to just add more vertices ;)
That's strange; it shouldn't be. I actually do set the quad's texture to the render texture before calling DrawPrimitive (yes, I know, DrawIndexedPrimitive would be faster).
Well, in the screenshot where both are rendered, I have the model rendering twice: once into the texture, and once to the backbuffer, just so I can flip on wireframe and demonstrate that the image you're seeing (the one with the in-your-face texture) is not what I think it should be.
Currently the render texture is being set via the device's SetRenderTarget() call with the first parameter (DWORD RenderTargetIndex) being 0, and the texture stage for the quad being 1. (I had it at 0, but I just got an empty screen -- unless I rendered the model outside of the render-texture-building code block, that is.)
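One thing worth noting: the RenderTargetIndex in SetRenderTarget() and the Stage in SetTexture() are unrelated numbers -- the first picks a render-target slot (only slot 0 matters unless you're doing MRT), the second picks a texture stage, and stages above 0 default to D3DTOP_DISABLE, so a texture bound at stage 1 is ignored unless you set the stage states up. An empty screen at stage 0 usually means the quad was drawn while the texture was still the active render target. Here's a minimal sketch of the call order I'd expect, with placeholder names (pDevice, pRenderTex, quadVB, QUAD_FVF, QuadVertex are assumptions, not your code):

```cpp
// Render-to-texture, then draw the textured quad to the backbuffer.
IDirect3DSurface9 *pBackBuffer = NULL, *pRenderSurface = NULL;
pDevice->GetRenderTarget(0, &pBackBuffer);          // remember the backbuffer
pRenderTex->GetSurfaceLevel(0, &pRenderSurface);    // surface of the render texture

pDevice->SetRenderTarget(0, pRenderSurface);        // 0 = first target slot, not a stage
pDevice->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
// ... draw the model into the texture here ...

pDevice->SetRenderTarget(0, pBackBuffer);           // switch back BEFORE drawing the quad
pDevice->SetTexture(0, pRenderTex);                 // stage 0 works with the default stage states
pDevice->SetFVF(QUAD_FVF);
pDevice->SetStreamSource(0, quadVB, 0, sizeof(QuadVertex));
pDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);  // the quad

pRenderSurface->Release();
pBackBuffer->Release();
```

If that order matches what you already have and stage 0 still comes up empty, I'd check that the quad's texture coordinates exist in the FVF and that stage 0's color op hasn't been overridden elsewhere.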