
steven166

Member Since 22 Nov 2011
Offline Last Active Today, 01:02 AM

Posts I've Made

In Topic: How to enable supersampling in DirectX 11?

22 February 2016 - 06:27 AM

Thanks for helping me, guys. It works.


In Topic: How to enable supersampling in DirectX 11?

21 February 2016 - 04:08 PM

I just use ResolveSubresource to resolve the multisampled render target.
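
For reference, a minimal sketch of that resolve call, assuming the source and destination textures already exist (the names context, msaaTexture, and resolvedTexture are mine, not from the post):

         #include <d3d11.h>

         // Resolve a multisampled render target into a single-sampled texture
         // of the same size and format. The destination must not be multisampled.
         void ResolveMsaaTarget(ID3D11DeviceContext* context,
                                ID3D11Texture2D* msaaTexture,      // multisampled source
                                ID3D11Texture2D* resolvedTexture,  // single-sampled destination
                                DXGI_FORMAT format)                // e.g. DXGI_FORMAT_R8G8B8A8_UNORM
         {
              // Subresource 0 of both textures.
              context->ResolveSubresource(resolvedTexture, 0, msaaTexture, 0, format);
         }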


In Topic: How to enable supersampling in DirectX 11?

21 February 2016 - 09:45 AM

I have the same issue. I render a full-screen quad and check it with the following simple pixel shader:

// 4 samples per pixel
float4 PS(VS_OUTPUT input, uint sampleIndex : SV_SAMPLEINDEX) : SV_TARGET
{
         if (sampleIndex % 2 == 0)
              return float4(1.0f, 0.0f, 0.0f, 1.0f); // red for even sample indices

         return float4(1.0f, 1.0f, 0.0f, 1.0f);      // yellow for odd sample indices
}

We have 4 samples per pixel, so the 1st and 3rd samples (even sampleIndex) should be red and the 2nd and 4th samples (odd sampleIndex) should be yellow. But the result is all red.
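
One thing worth ruling out: that the render target being drawn into really was created with 4 samples per pixel; if SampleDesc.Count is 1, the shader can only ever produce one color per pixel. A minimal check, assuming the target is an ID3D11Texture2D named renderTarget (my name, not from the post):

         #include <d3d11.h>
         #include <cassert>

         // Confirm the render target is actually multisampled with 4 samples.
         void CheckSampleCount(ID3D11Texture2D* renderTarget)
         {
              D3D11_TEXTURE2D_DESC desc = {};
              renderTarget->GetDesc(&desc);

              // Expect 4 samples per pixel for the test described above.
              assert(desc.SampleDesc.Count == 4);
         }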


In Topic: How to enable supersampling in DirectX 11?

21 February 2016 - 08:22 AM

Thanks guys for your reply, but I think that rendering to a multisampled render target and then resolving it is an MSAA technique, not SSAA. I am using SV_SampleIndex to run the pixel shader per sample instead of per pixel, but I do not know how to check that the pixel shader is really executed at sample frequency. I have searched on the internet, and SSAA is described as a two-pass technique: first render to a larger render target (for example, 2 times larger in both dimensions), then downsample it to the original size in a second pass. If we use a multisampled render target in the first pass, it is a combination of SSAA and MSAA, isn't it?
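
A minimal sketch of the first pass described above, i.e. creating the 2x-larger render target for SSAA (all names are my own assumptions and error handling is omitted; the second pass would draw a full-screen quad that samples this texture with a linear filter into the backbuffer):

         #include <d3d11.h>

         // Create a single-sampled render target twice the backbuffer size in
         // both dimensions, to be rendered into during the first SSAA pass.
         ID3D11Texture2D* CreateSsaaTarget(ID3D11Device* device,
                                           UINT backbufferWidth,
                                           UINT backbufferHeight)
         {
              D3D11_TEXTURE2D_DESC desc = {};
              desc.Width            = backbufferWidth  * 2;   // 2x in both dimensions
              desc.Height           = backbufferHeight * 2;
              desc.MipLevels        = 1;
              desc.ArraySize        = 1;
              desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
              desc.SampleDesc.Count = 1;                      // single-sampled: pure SSAA
              desc.Usage            = D3D11_USAGE_DEFAULT;
              desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

              ID3D11Texture2D* texture = nullptr;
              device->CreateTexture2D(&desc, nullptr, &texture);
              return texture;
         }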


In Topic: A problem about implementing stochastic rasterization for rendering motion blur

04 February 2016 - 06:03 AM

I have also tried to understand the author's C++ code for computing an upper-left ray and a lower-right ray, but I am not sure whether those two rays are in world space or camera space. He computes a ray direction and then applies a transformation using a function named ToWorldSpace(), but this function uses the camera's rotation matrix, which confuses me.
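
This is not the author's code, but a sketch of why a ToWorldSpace() function would use the camera's rotation matrix: a direction computed in camera/view space (such as a per-pixel ray pointing "into the screen") is rotated by the camera's world-space rotation to get the corresponding world-space direction, which would suggest the rays start out in camera space. The names below are assumptions:

         #include <DirectXMath.h>
         using namespace DirectX;

         // Rotate a camera-space ray direction into world space using the
         // camera's rotation matrix. Directions ignore translation, so only
         // the rotational part is applied.
         XMVECTOR RayDirectionToWorldSpace(XMVECTOR cameraSpaceDirection,
                                           XMMATRIX cameraRotation)
         {
              return XMVector3Normalize(
                   XMVector3TransformNormal(cameraSpaceDirection, cameraRotation));
         }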

