You'd want different tex-coords for the mask, provided through the vertices already, either transformed or reconstructed (as Servant has shown). But maybe you don't need those triangles at all: if you look at it as a post-process (it sounds like you're after some sort of vignetting), you just draw a screen-filling quad/triangle and do all the magic in the fragment shader.
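A minimal sketch of that idea in HLSL (names and the vignette falloff are illustrative, not from your code): a full-screen triangle generated from `SV_VertexID` with no vertex buffer, and a pixel shader that darkens toward the edges.

```hlsl
struct VSOut
{
    float4 pos : SV_Position;
    float2 uv  : TEXCOORD0;
};

// Full-screen triangle: corners derived from the vertex index alone.
VSOut VSMain(uint id : SV_VertexID)
{
    VSOut o;
    o.uv  = float2((id << 1) & 2, id & 2);              // (0,0), (2,0), (0,2)
    o.pos = float4(o.uv * float2(2, -2) + float2(-1, 1), 0, 1);
    return o;
}

Texture2D    SceneTex    : register(t0);
SamplerState LinearClamp : register(s0);

float4 PSMain(VSOut i) : SV_Target
{
    float3 color = SceneTex.Sample(LinearClamp, i.uv).rgb;
    // Simple radial vignette: attenuate by squared distance from center.
    float2 d = i.uv - 0.5;
    float vignette = saturate(1.0 - dot(d, d) * 1.5);
    return float4(color * vignette, 1);
}
```

Draw it with `context->Draw(3, 0)` and no input layout bound; the whole effect lives in the pixel shader.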
Here is an example I wrote for D3D11, hopefully providing some food for thought.
In D3D11 an index of -1 has a special meaning: it cuts the strip (see here, last paragraph "Generating Multiple Strips"). IIRC in OpenGL you can define that restart value arbitrarily. I wonder if you even get anything in the geometry shader then, since it operates on whole primitives. If that is the case, try another number, e.g. -2.
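On the OpenGL side this would look roughly like the following (a sketch; D3D11 fixes the cut value at the maximum index, e.g. 0xFFFFFFFF for 32-bit indices, while GL lets you choose):

```cpp
// Pick your own restart value (here "-2" as an unsigned 32-bit index):
glEnable(GL_PRIMITIVE_RESTART);
glPrimitiveRestartIndex(0xFFFFFFFEu);

// Or, on GL 4.3+, use the fixed all-ones convention like D3D does:
glEnable(GL_PRIMITIVE_RESTART_FIXED_INDEX);
```

Whatever value you choose must not collide with a real vertex index in your buffer.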
You can grab the index in the vertex shader with SV_VertexID and pass it along. Maybe that helps.
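Sketched in HLSL (the semantic name `VERTEXID` for the pass-through is arbitrary, anything non-system works):

```hlsl
struct VSOut
{
    float4 pos : SV_Position;
    uint   vid : VERTEXID;   // user semantic carrying the index downstream
};

VSOut VSMain(float4 pos : POSITION, uint id : SV_VertexID)
{
    VSOut o;
    o.pos = pos;
    o.vid = id;   // forward the vertex index so the GS/PS can inspect it
    return o;
}
```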
Flags and other properties can only be set at creation of resources/textures. The same applies to views. That's how the API works: you decide up front what you need and create (usually) everything at app start.
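For instance (a sketch assuming an existing `device` pointer): everything about a texture is fixed in its description at creation time; you can't add bind flags or change usage later, only create a new resource.

```cpp
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = 1024;
desc.Height           = 1024;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
// Decided once, here: render into it AND sample from it later.
desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D*          tex = nullptr;
ID3D11RenderTargetView*   rtv = nullptr;
ID3D11ShaderResourceView* srv = nullptr;
device->CreateTexture2D(&tex_desc_check(desc), nullptr, &tex);
device->CreateRenderTargetView(tex, nullptr, &rtv);
device->CreateShaderResourceView(tex, nullptr, &srv);
```

(In SharpDX the same fields appear on `Texture2DDescription`.)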
When you say "usage staging" do you mean I set usage flags to = staging or something like that?
Yes, though in this case it's an enum, not flags, so you can't combine values. This is why you need the copy operation: you can't read back a usage = default resource directly. Again, this is how the API/driver/hardware works; one has to get used to it.
Also, is there a more efficient way to just skip the target texture or backbuffer or swap chain or whatever it's using to put it on the screen and just tell it right off the bat to ignore the screen and use a Texture2D as the output instead? It seems like this would be quicker to process and a bit less messy.
Bacterius answered this already - correctly. You don't need a swap chain; that's only needed if you want to render to a window. You create a Texture2D and a RenderTargetView for it and you're ready to go.
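Roughly like this (a sketch; `device`, `context`, `width` and `height` are assumed to exist):

```cpp
// Offscreen rendering: no swap chain involved, just a texture
// and a render target view onto it.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = width;
desc.Height           = height;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_RENDER_TARGET;

ID3D11Texture2D*        target = nullptr;
ID3D11RenderTargetView* rtv    = nullptr;
device->CreateTexture2D(&desc, nullptr, &target);
device->CreateRenderTargetView(target, nullptr, &rtv);

context->OMSetRenderTargets(1, &rtv, nullptr);
// ... draw as usual; the results land in 'target', never on screen.
```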
I get the impression you're quite confused about all of this. I recommend working through the Rastertek tutorials (there are even SharpDX transliterations around, IIRC) and/or buying F. Luna's book. Though the latter is C++, I consider it the best book for D3D11 beginners.
It's not either/or, it's both. These are flags, so certain combinations are valid, and RenderTarget combined with ShaderResource is quite a common one.
For readback you need an additional resource with usage = staging (and CPU read access); only these can actually be mapped and read back. After your rendering, use context.CopyResource to copy from your render target texture to the staging texture. Then you can map it.
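A readback sketch in C++ (`target` is the usage = default render target texture, `device`/`context` are assumed; names are illustrative):

```cpp
// Clone the target's description, but as a CPU-readable staging texture.
D3D11_TEXTURE2D_DESC desc;
target->GetDesc(&desc);
desc.Usage          = D3D11_USAGE_STAGING;
desc.BindFlags      = 0;                       // staging can't be bound
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags      = 0;

ID3D11Texture2D* staging = nullptr;
device->CreateTexture2D(&desc, nullptr, &staging);

context->CopyResource(staging, target);        // GPU -> staging copy

D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
{
    // mapped.pData points at the pixels; successive rows are
    // mapped.RowPitch bytes apart, which may be larger than
    // width * bytes-per-pixel, so copy row by row.
    context->Unmap(staging, 0);
}
```

Note the Map call stalls until the GPU has finished the copy; for continuous readback people usually cycle two or three staging textures.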
Check the D3D debug layer output and/or use a graphics debugger to verify your resources are actually bound correctly. Other bugs aside, it's usually a read/write hazard: one has to explicitly unbind a resource before binding it elsewhere, otherwise the API will nullify such an attempt (but report it in the debug layer).
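Concretely (a sketch; slot numbers and the `rtv`/`srv` variables are placeholders for your own): a texture can't be bound as an input (SRV) and an output (RTV) at the same time, so null out the old binding first.

```cpp
// Before rendering INTO the texture: remove it from the shader stage.
ID3D11ShaderResourceView* nullSRV = nullptr;
context->PSSetShaderResources(0, 1, &nullSRV);  // slot 0, adjust as needed
context->OMSetRenderTargets(1, &rtv, nullptr);
// ... draw ...

// Before sampling FROM it again: remove it as render target.
ID3D11RenderTargetView* nullRTV = nullptr;
context->OMSetRenderTargets(1, &nullRTV, nullptr);
context->PSSetShaderResources(0, 1, &srv);
```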
PS: The debug layer is enabled with D3D11_CREATE_DEVICE_DEBUG at device creation.
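I.e. something along these lines at startup (sketch; error handling omitted):

```cpp
UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG;   // turns on the debug layer output
#endif

ID3D11Device*        device  = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                  nullptr, 0, D3D11_SDK_VERSION,
                  &device, nullptr, &context);
```

In SharpDX it's the `DeviceCreationFlags.Debug` flag on the `Device` constructor. The messages then show up in your debugger's output window.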