
OpenGL A problem about implementing stochastic rasterization for rendering motion blur



I have read the paper "Real-Time Stochastic Rasterization on Conventional GPU Architectures". Now I am trying to implement this algorithm using DirectX 11 and HLSL instead of OpenGL and GLSL (the reference shaders can be downloaded at http://graphics.cs.williams.edu/papers/StochasticHPG10/stochastic-shaders-reference.zip), but there is something that I do not understand:


First, I generate random times and store them in a 128 x 128 2D texture. In the pixel shader, the author calculates the fragment (sample) position using:


ivec2 iFragCoordBase = ivec2(gl_FragCoord.xy) * ivec2(MSAA_SAMPLES_CONSUMED_X, MSAA_SAMPLES_CONSUMED_Y) + ivec2(samplePatternShift & 2, samplePatternShift >> 1) * 64;

for (int iy = 0; iy < MSAA_SAMPLES_CONSUMED_Y; ++iy) {
        for (int ix = 0; ix < MSAA_SAMPLES_CONSUMED_X; ++ix) {
            int index = ix + iy * MSAA_SAMPLES_CONSUMED_X;

        } // for ix
    } // for iy

I do not know what iFragCoordBase means, since the author has already specified sample positions. In DirectX and HLSL, if I use NvAPI to specify sample positions, do I still need to compute the iFragCoordBase variable?


Has anybody else run into this problem?



Thanks for the reply. iFragCoordBase is used to calculate a sample position; the whole function is:

    visibilityMask_last = 0;

    int2 iFragCoordBase = int2(input.posH.xy) * int2(MSAA_SAMPLES_CONSUMED_X, MSAA_SAMPLES_CONSUMED_Y) +
                          int2(samplePatternShift & 2, samplePatternShift >> 1) * 64;

    for (int iy = 0; iy < MSAA_SAMPLES_CONSUMED_Y; ++iy) // In the 4xMSAA case, MSAA_SAMPLES_CONSUMED_Y = 2
    {
        for (int ix = 0; ix < MSAA_SAMPLES_CONSUMED_X; ++ix) // MSAA_SAMPLES_CONSUMED_X = 2
        {
            int index = ix + iy * MSAA_SAMPLES_CONSUMED_X; // sample index

            int2 samplePos = (iFragCoordBase + int2(ix, iy)) & int2(127, 127);

            // Load a random time at the current sample from a 2D texture
            float t = g_randomTime.Load(int3(samplePos, 0));

            //---------------- Find the position of the moving triangle at time t ----------------
            float3 csPositionAT = lerp(csPositionA0, csPositionA1, t);
            float3 csPositionBT = lerp(csPositionB0, csPositionB1, t);
            float3 csPositionCT = lerp(csPositionC0, csPositionC1, t);

            //---------------- At the current sample, generate a ray from the camera -------------
            float3 rayDir;
            float2 rayOrigin;
            computeRays(input.posH.xy, samplePosXY[index], rayDir, rayOrigin);

            //---------------- Perform the ray-triangle intersection -----------------------------
            float3 weight;
            float distance = intersectTri(rayDir, rayOrigin, csPositionAT, csPositionBT, csPositionCT, weight);
            if (distance > 0.0f)
            {
                csPosition_last = rayDir * distance + float3(rayOrigin, 0.0);
                weight_last = weight;
                t_last = t;
                visibilityMask_last = visibilityMask_last | (1 << index);
                rayDir_last = rayDir;
                rayOrigin_last = rayOrigin;
            }
        } // for ix
    } // for iy

The times are generated randomly in C++ and stored in a 128 x 128 2D texture.

So they're using iFragCoordBase to look up a value in the random time texture. This essentially "tiles" the random texture over the screen, taking MSAA subsamples into account. With no MSAA the random texture is tiled over 128x128 squares of the screen, while in the 4xMSAA case it is tiled over 64x64 squares. This ensures that each of the 4 subsamples gets a different random time value inside the loop.


Thank you so much. There is one more thing, about generating a ray from a sample position. I have searched on the internet and found a solution that generates a camera-space ray using a picking algorithm, which computes a camera-space position from a screen-space position, with the camera at the origin. But the original source code does not use this method; the author uses interpolation instead. The shader code is:

    // Under MSAA, shift away from the pixel center
    // subpixelOffset is a hardcoded sample position
    vec2 myPix = gl_FragCoord.xy - viewportOrigin - vec2(0.5) + (subpixelOffset - vec2(0.5));

    // Compute the fraction of the way in screen space that this pixel
    // is between ulRayDir and lrRayDir.
    vec2 fraction = myPix * viewportSizeSub1Inv;

    // Note: z is the same on both rays, so no need to interpolate it
    direction = vec3(mix(ulRayDir.xy, lrRayDir.xy, fraction), ulRayDir.z);

    origin = vec2(0.0);

    direction = normalize(direction);

Is there any difference between these two approaches?


I have also tried to understand the author's C++ code for computing the upper-left and lower-right rays, but I am not sure whether those two rays are in world space or camera space: he computes a ray direction and then applies a transformation using a function named ToWorldSpace(), but this function uses the camera's rotation matrix, which confuses me.
