What's the deal with setting multiple viewports on the rasterizer?

The rasterizer appears to let me set multiple viewports:

* Rasterizer.SetViewPorts(...) in C#/SharpDX
* ID3D11DeviceContext::RSSetViewports in C++

When I go to render, the first of those is used and the others are ignored. I'm not actually interested in multiple viewports right now, but I was curious what this is supposed to do. I mean, the rasterizer controls the way the pixels light up, so if we've set multiple viewports and we then draw something, obviously having multiple viewports is nonsensical, unless... I dunno, it's supposed to draw the same thing to both viewports? It doesn't do that in this case though, so why the option to set multiple viewports?
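For concreteness, binding multiple viewports looks something like this in C++ (a minimal sketch; the dimensions are made up, and `context` is assumed to be a valid ID3D11DeviceContext*):

    // Sketch: bind two side-by-side viewports to the rasterizer stage,
    // assuming a 1280x720 render target is already bound.
    D3D11_VIEWPORT viewports[2] = {};

    viewports[0].TopLeftX = 0.0f;
    viewports[0].TopLeftY = 0.0f;
    viewports[0].Width    = 640.0f;
    viewports[0].Height   = 720.0f;
    viewports[0].MinDepth = 0.0f;
    viewports[0].MaxDepth = 1.0f;

    viewports[1] = viewports[0];
    viewports[1].TopLeftX = 640.0f; // right half of the render target

    context->RSSetViewports(2, viewports);
    // A plain draw after this still rasterizes only into viewports[0].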

The SV_ViewportArrayIndex semantic lets you select which viewport from that array to use on the fly, instead of constantly switching viewports through the API. Note that the viewport bounds you choose must still find their way onto the graphics card for the rasterizer to use, so setting a new viewport is just like any other state change. Apparently it's a semantic applied to the geometry shader output, but I would imagine you can apply it to the vertex shader output if you have no geometry shader (I may be wrong on this).
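In HLSL that looks something like this sketch of a passthrough geometry shader (the struct names and the hard-coded slot are just for illustration):

    struct GSInput
    {
        float4 pos : SV_Position;
    };

    struct GSOutput
    {
        float4 pos      : SV_Position;
        uint   viewport : SV_ViewportArrayIndex;
    };

    [maxvertexcount(3)]
    void GS(triangle GSInput input[3], inout TriangleStream<GSOutput> stream)
    {
        GSOutput output;
        output.viewport = 1; // route this triangle to slot 1 of the bound viewport array
        for (int i = 0; i < 3; ++i)
        {
            output.pos = input[i].pos;
            stream.Append(output);
        }
    }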


Ah, I haven't done anything with the geometry shader yet. Seems kind of weird... any idea what kind of use case might dictate messing with viewport selection from within a shader?

I haven't tried it myself, nor am I sure how it would work, but perhaps it could be used for rendering stereo images: split-screen with slightly different views to simulate the distance between the eyes. One set of objects goes into the geometry shader, and two sets come out, one to each viewport.


The main example I've seen it used for is things like cubemap rendering, where the GS can send each triangle to whichever of the 6 viewports (one per cube face) it needs to land in.

Might also be useful for doing stereo/VR in a single pass, using a GS to duplicate all the triangles into the 2nd viewport.
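A rough sketch of that idea in HLSL, assuming two viewports (one per eye) are bound and a constant buffer supplies per-eye view-projection matrices (the names and layout here are made up, not anything standard):

    cbuffer StereoCB
    {
        float4x4 gViewProj[2]; // one view-projection matrix per eye
    };

    struct GSInput  { float4 posW : POSITION; };  // world-space position
    struct GSOutput { float4 pos : SV_Position; uint viewport : SV_ViewportArrayIndex; };

    [maxvertexcount(6)] // 2 eyes * 3 vertices
    void StereoGS(triangle GSInput input[3], inout TriangleStream<GSOutput> stream)
    {
        [unroll]
        for (uint eye = 0; eye < 2; ++eye)
        {
            GSOutput output;
            output.viewport = eye; // left eye -> viewport 0, right eye -> viewport 1
            for (int i = 0; i < 3; ++i)
            {
                output.pos = mul(input[i].posW, gViewProj[eye]);
                stream.Append(output);
            }
            stream.RestartStrip(); // finish this eye's copy before emitting the next
        }
    }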

I think it's also used for 3d/volumetric rendering. Render targets have to be two-dimensional, meaning you can't render directly to a 3d volume... but you can represent a 3d volume as a stack of 2d viewports. The GS can then send triangles to all of the slices/viewports that they cover.

I guess you could also do stuff like try to render shadowmaps for a large number of light sources to an atlas in one pass. The geo shader could decide which triangles are visible to which lights, and output them to the right views within the shadow atlas.

Interesting stuff... beyond my pay grade right now, but good to know!

“Apparently it's a semantic applied to the geometry shader output, but I would imagine you can apply it to the vertex shader output if you have no geometry shader (I may be wrong on this).”

Unfortunately, that's not the case. You can only use it as an output from a geometry shader.

Recent AMD hardware supports setting it from a vertex shader at the hardware level, but it's not exposed in D3D11. However, they did expose it as an OpenGL extension.
