I'm finishing up a terrain geomipmapping system that creates all vertex data for every mesh LOD at load time. There are two concerns with this: the CPU time required to calculate normals, tangents, and bitangents when creating the meshes for each terrain tile, and the memory footprint of the result.
Because I carelessly put buffer creation inside too many loops, I ran into OutOfMemoryExceptions before the code was stable. Moving more work to the GPU would also help once I start on the terrain editor, since vertices would then update quickly with minimal lag.
So, to alleviate these two problems, I'd like to know whether there are significant benefits to calculating some of the vertex data on the GPU in real time. I have a custom vertex declaration of 56 bytes per vertex, and on 2048x2048 maps this can easily exceed 300 megabytes (lower-LOD meshes included). The structure contains Position, Texture Coordinate, Normal, Tangent, and Bitangent. It might be possible to reduce this to just Position and Texture Coordinate, which would be 20 bytes per vertex, and have the shaders derive the other three values.
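To sanity-check those numbers, here is a rough back-of-the-envelope sketch of the footprint across LODs. The function names and the LOD count of 5 (halving resolution each level) are my assumptions, not part of my actual code; it just sums vertex counts for the whole map at each level:

```python
# Rough sketch (hypothetical helper names): memory footprint of terrain
# vertex data summed over LODs, assuming a 2048x2048 heightmap
# (2049x2049 vertices at full resolution) and 5 LODs, halving per level.
def vertex_count(cells_per_side):
    verts_per_side = cells_per_side + 1
    return verts_per_side * verts_per_side

def total_bytes(map_size, bytes_per_vertex, num_lods):
    total = 0
    size = map_size
    for _ in range(num_lods):
        total += vertex_count(size) * bytes_per_vertex
        size //= 2
    return total

full = total_bytes(2048, 56, 5)  # Position + UV + Normal + Tangent + Bitangent
slim = total_bytes(2048, 20, 5)  # Position + UV only
print(full / 1e6, slim / 1e6)    # megabytes for each layout
```

With these assumptions the 56-byte layout lands a bit over 300 MB, matching what I'm seeing, while the 20-byte layout is around a third of that.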
I'm using XNA, so geometry shaders are not available. I know it's possible to compute normals in the pixel shader using screen-space partial derivatives, but those are constant per triangle, so the result looks faceted, and I don't know how to get a smooth, non-faceted look that way.
My first thought is to have one stream with a vertex buffer containing the XZ locations, plus a transformation matrix for each tile, so every terrain LOD can share the same buffer, and a second stream with the Y (height) values to push each vertex up. I'm just not sure whether, or how, it's possible to look up adjacent vertices in the shader to calculate per-vertex normal data.
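For what it's worth, the lookup itself is just the standard central-difference trick on the heightmap: sample the four axial neighbours of the vertex and build the normal from the height gradient. In XNA terms this would mean a vertex texture fetch (tex2Dlod under vs_3_0) against the height texture; here is a hedged CPU-side sketch of the math only, with hypothetical names (`heights` indexed as [z][x], `cell_size` as grid spacing):

```python
import math

# Sketch of smooth per-vertex normals from a height field, using central
# differences over the four axial neighbours (the same four samples a
# vertex-texture-fetch shader would take). Edge vertices clamp to the border.
def smooth_normal(heights, x, z, cell_size=1.0):
    max_z = len(heights) - 1
    max_x = len(heights[0]) - 1
    def h(ix, iz):
        return heights[max(0, min(iz, max_z))][max(0, min(ix, max_x))]
    # Height gradient along X and Z via central differences.
    dx = (h(x + 1, z) - h(x - 1, z)) / (2.0 * cell_size)
    dz = (h(x, z + 1) - h(x, z - 1)) / (2.0 * cell_size)
    # For a height field y = f(x, z) the (unnormalised) normal is
    # (-df/dx, 1, -df/dz); normalise it before returning.
    length = math.sqrt(dx * dx + 1.0 + dz * dz)
    return (-dx / length, 1.0 / length, -dz / length)
```

Tangent and bitangent could be derived the same way, since for a regular grid they follow the X and Z gradient directions; the open question for me is whether doing these four extra texture samples per vertex every frame beats precomputing and storing them.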