They work similarly to standard billboards, in that you want one specific axis of the billboard's orientation (say its y-axis) to always point towards the camera. The difference is that you also want the laser to keep pointing along its tangent line (say the z-axis). Because of this you only have the freedom to rotate the billboard around its z-axis. So whichever direction you reorient the y-axis towards, it needs to be orthogonal to the z-axis. You can achieve this by orthonormalizing the camera-to-vertex vector with respect to the tangent vector.
So the three vectors that form the billboard matrix's orthogonal basis are: the tangent (z-axis), the camera-to-vertex vector orthonormalized against it (y-axis), and their cross product (x-axis).
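A minimal sketch of building that basis, assuming a simple hand-rolled Vec3 type (all names here are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Minimal 3D vector helpers (illustrative, not from any specific engine).
struct Vec3 { float x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return v * (1.0f / l); }

// Build the billboard basis for one laser vertex:
//   z = segment tangent (fixed),
//   y = camera direction orthonormalized against z (Gram-Schmidt),
//   x = cross product completing the basis.
void laserBasis(Vec3 vertexPos, Vec3 cameraPos, Vec3 tangent,
                Vec3& xAxis, Vec3& yAxis, Vec3& zAxis) {
    zAxis = normalize(tangent);
    Vec3 toCamera = cameraPos - vertexPos;
    // Remove the component along the tangent, then normalize.
    yAxis = normalize(toCamera - zAxis * dot(toCamera, zAxis));
    xAxis = cross(yAxis, zAxis);
}
```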
I can't compile shaders online (because of hardware limitations), so I cannot generate a dummy shader for each input layout...
The most practical method is to validate the input layout against an actual shader that uses it.
When binding an input layout together with a vertex shader, the layout needs to contain a (validated) element for each of the VS's input elements (matching in position and order). It can contain additional elements used by other shaders, but it can never omit elements the shader requires.
For example, if you have an input layout that's validated against a shader that uses the input elements [POSITION, NORMAL, UV], the layout can be used to feed the following shader input structs: [POSITION, NORMAL, UV], [POSITION, NORMAL], [POSITION], or an empty one. (You'd validate such an input layout against the shader that has the largest span of input elements.)
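A sketch of that validation in D3D11, assuming `device`, `vsBytecode` and `vsSize` already hold your device and the compiled blob of the shader with the largest span of input elements (those names are hypothetical):

```cpp
// Sketch: creating the input layout validates it against the VS bytecode.
// vsBytecode/vsSize are assumed to be the compiled blob of the shader that
// uses all three elements; device is an ID3D11Device.
D3D11_INPUT_ELEMENT_DESC elements[] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0,
      D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0,
      D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

ID3D11InputLayout* layout = nullptr;
HRESULT hr = device->CreateInputLayout(
    elements, _countof(elements),
    vsBytecode, vsSize,   // bytecode of the shader reading all three elements
    &layout);
// On success, the same layout can also feed shaders that read only a
// subset of these elements, e.g. [POSITION, NORMAL] or [POSITION].
```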
Shadow volumes are extruded in the direction of the light vector in world space. Since this happens in the geometry shader, you'll have to do the view/projection transformation afterwards in the GS as well. The extruded volumes are then transformed the same way as any other object in the scene.
Note that clipping and the depth division take place after the GS stage. The w-component has no meaning when passing data from the VS to the GS. When a GS is active, SV_Position is a required output of the geometry shader, not of the vertex shader.
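The extrusion itself is plain vector math; a minimal sketch of what the GS does per silhouette vertex for a point light (names illustrative), before applying the view/projection transform to both the original and extruded positions:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the world-space extrusion a shadow-volume GS performs per
// silhouette vertex, for a point light. After this, the GS still has to
// transform both the original and the extruded position by view/projection
// before emitting them.
void extrude(const float pos[3], const float lightPos[3],
             float distance, float out[3]) {
    float d[3] = { pos[0] - lightPos[0], pos[1] - lightPos[1],
                   pos[2] - lightPos[2] };
    float len = std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
    for (int i = 0; i < 3; ++i)
        out[i] = pos[i] + (d[i] / len) * distance;  // push away from the light
}
```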
There's a technique called 'Dual Contouring' that builds upon the Marching Cubes algorithm. Aside from the density values at the grid corners it also takes the surface normals at the edge intersections (Hermite data) into account, which allows edges and sharp features to be reconstructed much better.
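To illustrate the core idea: per cell, Dual Contouring places a vertex that best fits the tangent planes given by the Hermite data (intersection point plus normal per crossed edge). A toy least-squares solve under the assumption that the planes constrain all three axes, as at a sharp corner (no regularization, illustrative only):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hermite sample: a point on the surface plus the surface normal there.
struct Plane { float p[3]; float n[3]; };

// Find the cell vertex x minimizing sum_i (n_i . (x - p_i))^2 by forming
// the 3x3 normal equations (A^T A) x = A^T b and applying Cramer's rule.
// Returns false when the planes don't constrain all three axes.
bool solveQEF(const std::vector<Plane>& planes, float x[3]) {
    float AtA[3][3] = {}, Atb[3] = {};
    for (const Plane& pl : planes) {
        float b = pl.n[0]*pl.p[0] + pl.n[1]*pl.p[1] + pl.n[2]*pl.p[2];
        for (int r = 0; r < 3; ++r) {
            for (int c = 0; c < 3; ++c) AtA[r][c] += pl.n[r] * pl.n[c];
            Atb[r] += pl.n[r] * b;
        }
    }
    auto det3 = [](const float m[3][3]) {
        return m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
             - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
             + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
    };
    float det = det3(AtA);
    if (std::fabs(det) < 1e-6f) return false;  // under-constrained cell
    for (int c = 0; c < 3; ++c) {
        float m[3][3];
        for (int r = 0; r < 3; ++r)
            for (int k = 0; k < 3; ++k) m[r][k] = (k == c) ? Atb[r] : AtA[r][k];
        x[c] = det3(m) / det;
    }
    return true;
}
```

Production implementations add regularization (e.g. a mass-point bias) so flat and under-constrained cells stay stable.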
Are you trying to capture the entire desktop, or only a single desktop window?
GetDisplaySurfaceData() only works in full-screen mode. In windowed mode DXGI can blit to a shared surface managed by the Desktop Window Manager, but it has no access to the entire screen buffer. DXGI 1.2 under Windows 8 adds access to the desktop via the Desktop Duplication API.
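A sketch of that duplication path, assuming you already have an `ID3D11Device` and an `IDXGIOutput1` for the monitor you want to capture (error handling omitted):

```cpp
// Sketch of the DXGI 1.2 Desktop Duplication path (Windows 8+). device is
// an ID3D11Device, output1 an IDXGIOutput1 for the target monitor.
IDXGIOutputDuplication* duplication = nullptr;
output1->DuplicateOutput(device, &duplication);

DXGI_OUTDUPL_FRAME_INFO frameInfo;
IDXGIResource* desktopResource = nullptr;
// Blocks for up to 500 ms waiting for the next desktop update.
HRESULT hr = duplication->AcquireNextFrame(500, &frameInfo, &desktopResource);
if (SUCCEEDED(hr)) {
    // QueryInterface to ID3D11Texture2D and copy it out, then release:
    desktopResource->Release();
    duplication->ReleaseFrame();
}
```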
It simply depends on whether you find the pre-computed normals usable or not. If they differ much from your own calculations or give poor lighting results, you might feel like throwing them out. Tangent vectors can always be orthonormalized with respect to a normal vector.
They didn't literally say that it's hopeless, did they?
It looks rushed, but that gives the characters a nice way of blending with the background. The facial expressions aren't too bad, but that's exactly where you need to add more detail. Also try to be more subtle with the burn and dodge tools; they're an easy way of adding depth/contrast, but that's better achieved through proper lighting.
It's likely a row-pitch error. Instead of manually writing the contents of a BMP you're probably better off using an existing image writer. For example, a WIC (Windows Imaging Component) object can be initialized directly using Map() data and write out to a large number of different file formats.
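If you do want to handle the pitch yourself, a sketch of copying mapped texture data row by row, since Map() data is usually padded per row (names illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Copy a mapped texture into a tightly packed buffer, honoring the row
// pitch. Map() typically returns rows padded for alignment, so copying
// width*height*bpp in a single memcpy produces the skewed image a wrong
// pitch causes.
void copyTightly(const uint8_t* mapped, size_t rowPitch,
                 size_t width, size_t height, size_t bytesPerPixel,
                 uint8_t* dst) {
    const size_t tightRow = width * bytesPerPixel;
    for (size_t y = 0; y < height; ++y)
        std::memcpy(dst + y * tightRow, mapped + y * rowPitch, tightRow);
}
```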