To reduce the poly count for trees and other horizontally symmetrical meshes, I'm trying a technique where I calculate a billboard dynamically from the mesh and fade between the billboard and the 3D mesh at a given distance.
This works OK for most objects and in most circumstances (typically where the object's orientation is not important: rocks, trees, etc.), but I'm having trouble working out how to project the billboard image from the mesh. These might be obvious questions, but I'm really stuck visualising what I'm doing.
First question: when generating the Texture2D image for the billboard (I do this simply by rendering the mesh onto a transparent Texture2D render target), what is the relationship between the camera and the mesh's bounding box? I want to fill the billboard sprite as much as possible, so I would want the camera positioned so that the projected image fills the 256x256 Texture2D as completely as possible (to get as much detail into the sprite as possible), but since my object meshes are different real-world sizes, no single camera location gives me what I want for all objects. My problem seems to be eliminating the distance scaling: is there a common projection matrix, for instance, which would prevent scaling with distance, so all I'd need to worry about was providing a scale transform rather than the translate transform of the camera?
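For what it's worth, what I've been experimenting with is an orthographic projection fitted to the bounding box, since orthographic projections have no distance scaling at all. A minimal sketch of the idea in pseudocode-style Python follows (the function names, padding parameter, and OpenGL-style matrix convention are my own assumptions, and you'd translate this to your engine's matrix helpers, e.g. an off-center orthographic constructor):

```python
def ortho_matrix(left, right, bottom, top, near, far):
    """Row-major 4x4 orthographic projection: no scaling with distance."""
    return [
        [2.0 / (right - left), 0.0, 0.0, -(right + left) / (right - left)],
        [0.0, 2.0 / (top - bottom), 0.0, -(top + bottom) / (top - bottom)],
        [0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)],
        [0.0, 0.0, 0.0, 1.0],
    ]

def impostor_projection(bbox_min, bbox_max, padding=0.02):
    """Fit an orthographic frustum to a mesh's bounding box as seen down -Z,
    so the mesh fills the render target regardless of its world-space size.
    'padding' leaves a small transparent border so texture filtering does
    not clip the silhouette at the sprite's edge."""
    half_w = 0.5 * (bbox_max[0] - bbox_min[0]) * (1.0 + padding)
    half_h = 0.5 * (bbox_max[1] - bbox_min[1]) * (1.0 + padding)
    # Use the larger extent on both axes to keep the mesh centred
    # in a square target such as a 256x256 texture.
    half = max(half_w, half_h)
    depth = bbox_max[2] - bbox_min[2]
    return ortho_matrix(-half, half, -half, half, 0.1, 0.1 + 2.0 * depth)
```

With this, the camera's distance from the mesh stops mattering; only the bounding-box extents determine how the mesh maps onto the texture, which seems to be exactly the "scale instead of translate" behaviour described above.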
Second question: I will scale the billboard at runtime to coincide with the 3D object's bounding box, so that the billboard looks a similar size to the mesh when I segue between the two. Is this a common technique? If so, do you prepare multiple billboards for a series of orientations for non-horizontally-symmetric meshes (i.e. buildings, which have a fixed and important game orientation, where we want a billboard that is a reasonable approximation of the building facade when seen from a particular angle; not truly a billboard, I guess, but a similar technique)?
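To make the question concrete, here is a sketch of the runtime side as I currently picture it (the function names and the linear cross-fade are my own assumptions, not an established API):

```python
def billboard_size(bbox_min, bbox_max):
    """World-space width/height for the billboard quad so it covers the
    same on-screen footprint as the mesh it replaces."""
    width = bbox_max[0] - bbox_min[0]
    height = bbox_max[1] - bbox_min[1]
    return width, height

def crossfade_alpha(distance, fade_start, fade_end):
    """0.0 = render only the 3D mesh, 1.0 = render only the billboard.
    Inside the transition band, draw the mesh with alpha (1 - t) and the
    billboard with alpha t so the switch is not a visible pop."""
    if fade_end <= fade_start:
        return 1.0 if distance >= fade_end else 0.0
    t = (distance - fade_start) / (fade_end - fade_start)
    return min(1.0, max(0.0, t))
```

For the fixed-orientation case (buildings), I imagine you would bake N views around the object and pick the baked view whose angle is closest to the current camera-to-object direction, but I don't know if that is the standard approach.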