You need different tex-coords for the mask here, either provided through the vertices already or transformed/reconstructed (as Servant has shown). Maybe you wouldn't need those triangles at all: if you look at it as a post-process (it sounds like you're after some sort of vignetting), you just draw a screen quad/triangle and do all the magic in the fragment shader.
Here is an example I wrote for D3D11, hopefully providing some food for thought.
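To illustrate the post-process idea: the per-pixel math of a vignette could look roughly like this (plain C++ for illustration, the same arithmetic translates to a fragment shader; `vignetteFactor`, `radius` and `softness` are made-up names/knobs):

```cpp
#include <cassert>
#include <cmath>

// Sketch of the per-pixel work a vignette fragment shader would do.
// (u, v) is the screen-space texcoord in [0,1]; the returned factor is
// multiplied into the pixel color to darken it toward the corners.
float vignetteFactor(float u, float v,
                     float radius = 0.5f, float softness = 0.45f) {
    float dx = u - 0.5f;
    float dy = v - 0.5f;
    float dist = std::sqrt(dx * dx + dy * dy);   // distance from screen center
    float t = (dist - radius) / softness;        // 0 inside radius, grows outward
    t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
    return 1.0f - t * t * (3.0f - 2.0f * t);     // smoothstep falloff: 1 at center
}
```

A fullscreen triangle plus something like this in the pixel shader replaces the extra mask geometry entirely.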
In D3D11 an index of -1 has a special meaning (see here, last paragraph "Generating Multiple Strips"). IIRC in OpenGL you can define that value arbitrarily (glPrimitiveRestartIndex). I wonder if you even get anything in the geometry shader then, since it operates on primitives. If that is the case, try another number, e.g. -2.
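For reference, that special -1 is simply all bits set in the index format, as a quick C++ check confirms:

```cpp
#include <cassert>
#include <cstdint>

// -1 written into an index buffer becomes the all-bits-set value of the
// index format: 0xFFFFFFFF for 32-bit indices, 0xFFFF for 16-bit ones.
// That is the value D3D11 treats as the strip restart marker.
constexpr std::uint32_t kRestart32 = static_cast<std::uint32_t>(-1);
constexpr std::uint16_t kRestart16 = static_cast<std::uint16_t>(-1);

static_assert(kRestart32 == 0xFFFFFFFFu, "32-bit restart index");
static_assert(kRestart16 == 0xFFFFu, "16-bit restart index");
```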
You can grab the index in the vertex shader with SV_VertexID and pass it along. Maybe that helps.
Flags and other properties can only be set at creation of resources/textures. The same applies to views. That's how the API works: you decide what you need and create (usually) everything at app start.
When you say "usage staging" do you mean I set usage flags to = staging or something like that?
Yes, though in this case it's an enum, so you can't combine the values. This is why you need that copy operation: you can't read back a usage=default resource directly. Again, this is how the API/driver/hardware works; one has to get used to it.
Also, is there a more efficient way to just skip the target texture or backbuffer or swap chain or whatever it's using to put it on the screen and just tell it right off the bat to ignore the screen and use a Texture2D as the output instead? It seems like this would be quicker to process and a bit less messy.
Bacterius answered this already - correctly. You don't need a swap chain; that's only needed if you want to render to a window. You create a Texture2D and a RenderTargetView thereof and you're ready to go.
I get the impression you're quite confused about all of this. I recommend going through the Rastertek tutorials (there are even SharpDX transliterations around, IIRC) and/or buying F. Luna's book. Though the latter is C++, I consider it the best book for D3D11 beginners.
It's not either, it's both. These are flags, so certain combinations are valid, and RenderTarget combined with ShaderResource is quite a common one.
For readback you need an additional resource with usage = staging (and CPU read access); only those can actually be mapped and read back. After your rendering, use context.CopyResource from your render target texture to the staging texture. Then you can map it.
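In C++ the whole flow looks roughly like this (an untested sketch, error handling omitted; `device`, `context` and `renderTarget` are assumed to exist already):

```cpp
D3D11_TEXTURE2D_DESC desc;
renderTarget->GetDesc(&desc);                 // clone the render target's layout
desc.Usage          = D3D11_USAGE_STAGING;    // CPU-readable copy target
desc.BindFlags      = 0;                      // staging resources can't be bound
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags      = 0;

ID3D11Texture2D* staging = nullptr;
device->CreateTexture2D(&desc, nullptr, &staging);

context->CopyResource(staging, renderTarget); // GPU-side copy

D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
// mapped.pData now points at the pixels; rows are mapped.RowPitch bytes apart.
context->Unmap(staging, 0);
staging->Release();
```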
Check the D3D debug layer output and/or a graphics debugger to see whether your resources are actually bound correctly. Other bugs aside, it's usually a read/write hazard: one has to explicitly unbind a resource before binding it elsewhere, otherwise the API will nullify such an attempt (but report it in the debug layer).
PS: The debug layer is enabled with D3D11_CREATE_DEVICE_DEBUG at device creation.
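In C++ that is just an extra flag at device creation (a sketch; SharpDX has the equivalent `DeviceCreationFlags.Debug`):

```cpp
UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG;   // enable the debug layer in debug builds
#endif

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                  nullptr, 0,         // default feature levels
                  D3D11_SDK_VERSION, &device, nullptr, &context);
```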
That's the normal behavior of a visualized depth buffer. The values are distributed hyperbolically, so most of them end up near 1 (white). If you want a better visualization, transform them back to linear depth.
PS: Save yourself some typing: return float4(input.pos.z,input.pos.z,input.pos.z,input.pos.z) is equivalent to return input.pos.zzzz
Edit: Waaaaait, is that SV_Position? I don't even know what z means there, never checked myself and the docs are enigmatic. But the effect you're describing sounds like the depth value. Could be worse though: if it's e.g. view-space z, grey might be even rarer (z can go higher than 1 and will be clamped to 1).
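The linearization itself is a one-liner, assuming a standard D3D projection that writes depth into [0,1] (plain C++ here, but the same math drops straight into HLSL; `linearizeDepth` is a made-up name):

```cpp
#include <cassert>
#include <cmath>

// Convert a [0,1] hyperbolic depth-buffer value back to linear
// view-space distance, given the projection's near/far planes.
float linearizeDepth(float z, float nearPlane, float farPlane) {
    return (nearPlane * farPlane) / (farPlane - z * (farPlane - nearPlane));
}
```

Note how e.g. with near = 0.1 and far = 100 a stored value of 0.5 maps back to roughly 0.2 units: half the encodable values cover only the first sliver of the view range, which is exactly why the visualization looks almost uniformly white.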
I'd actually advise against a geometry shader here and use hardware instancing instead. Not only is it simpler and allows other geometry without changing the shaders, it will also likely be much faster (Edit: Wrong! See MJP's link below, instancing can turn out badly for low vertex counts). With a geometry shader one can also only output triangle strips, and there's no indexed drawing (at least not without jumping through other hoops like stream-out).
PS: I'm still curious how you pulled that off with 14 vertices only
Hmmm, that still compiles individual shaders, not effects. Compiling effects (fx_5_0 profile and technique11 entries in the shader file) will fail compilation and report mismatches. No, I was talking about the D3D debug layer (D3D11_CREATE_DEVICE_DEBUG at device creation). Then an error will be reported when issuing a draw call. I don't think you can check this earlier, unless you use shader reflection.
I'm surprised it drew anything at all. The debug layer should scream at an input-output mismatch (from the VS to the next stage, probably the pixel shader in your case). Order and type must be identical (there are exceptions: system-value semantics, and a follow-up stage can also strip semantics from the end). But seriously, look at the debug output.