MJP

Member Since 29 Mar 2007

Posts I've Made

In Topic: Firing many rays at one pixel?

27 August 2015 - 03:27 PM

For path tracers, it's pretty common to use random or pseudo-random sampling patterns for this purpose. In contrast to regular sample patterns (AKA sample patterns that are the same for every pixel, like in Hodgman's example), they hide aliasing better but replace it with noise. I would strongly suggest reading through Physically Based Rendering if you haven't already, since it has a great overview of some of the more popular sampling patterns (stratified, Hammersley, Latin hypercube, etc.)
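To make that concrete, here's a minimal sketch of a stratified (jittered) sampling loop for a single pixel. Rand01, MakeCameraRay, and Trace are hypothetical stand-ins for whatever RNG, camera, and tracing entry point you have:

Float3 PathTracePixel(uint32 pixelX, uint32 pixelY)
{
    const uint32 GridSize = 4;          // 4x4 = 16 samples per pixel
    Float3 sum = Float3(0.0f);

    for(uint32 sy = 0; sy < GridSize; ++sy)
    {
        for(uint32 sx = 0; sx < GridSize; ++sx)
        {
            // Jitter each sample within its stratum: the grid keeps the
            // samples well-distributed, while the random offset trades
            // structured aliasing for noise
            float u = (sx + Rand01()) / GridSize;
            float v = (sy + Rand01()) / GridSize;
            sum += Trace(MakeCameraRay(pixelX + u, pixelY + v));
        }
    }

    return sum / float(GridSize * GridSize);
}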

In Topic: Simple Solar Radiance Calculation

25 August 2015 - 01:05 PM

When we were working on this a few months ago, we used a sample implementation of the Preetham solar radiance function, because we were having trouble getting the Hosek solar radiance function to work correctly. I revisited this a little while ago and was able to get the Hosek sample implementation working, so I would suggest using that instead. The one catch is that their sample code only has a spectral implementation, so you need to do the spectral-to-RGB conversion yourself. To do that, I used the Spectrum classes from pbrt-v3.

The way their sample code works is that you need to make a sky model state for each wavelength that you're sampling. The pbrt SampledSpectrum class uses 60 samples ranging from 400-700nm, so that's what I used. Then for each wavelength you can sample a point on the solar disc to get the corresponding radiance, which you do by passing the sky state to their solar radiance function. I just create and delete the sky states on the fly, but you can cache and reuse them if you want. You just need to regenerate the states if the sun elevation, turbidity, or ground albedo changes. Their function also returns non-uniform radiance across the solar disc, so you may want to take multiple samples around the disc to get the most accurate result. Otherwise you can just take one sample right in the center.

This is the code I'm using at the moment. I can't promise that it's bug-free, but it seems to be working.

const float SunSize = DegToRad(0.27f);  // Angular radius of the sun from Earth

float thetaS = std::acos(sunDirection.y);   // zenith angle of the sun direction (y-up)
float elevation = (Pi / 2.0f) - thetaS;

Float3 sunRadiance = Float3(0.0f);

SampledSpectrum groundAlbedoSpectrum = SampledSpectrum::FromRGB(GroundAlbedo);
SampledSpectrum solarRadiance;

const uint64 NumDiscSamples = 8;
for(uint64 x = 0; x < NumDiscSamples; ++x)
{
    for(uint64 y = 0; y < NumDiscSamples; ++y)
    {
        float u = (x + 0.5f) / NumDiscSamples;
        float v = (y + 0.5f) / NumDiscSamples;
        Float2 discSamplePos = SquareToConcentricDiskMapping(u, v);

        float theta = elevation + discSamplePos.y * SunSize;
        float gamma = discSamplePos.x * SunSize;

        for(int32 i = 0; i < nSpectralSamples; ++i)
        {
            ArHosekSkyModelState* skyState = arhosekskymodelstate_alloc_init(elevation, turbidity, groundAlbedoSpectrum[i]);
            float wavelength = Lerp(float(SampledLambdaStart), float(SampledLambdaEnd), i / float(nSpectralSamples));

            solarRadiance[i] = float(arhosekskymodel_solar_radiance(skyState, theta, gamma, wavelength));

            arhosekskymodelstate_free(skyState);
            skyState = nullptr;
        }

        Float3 sampleRadiance = solarRadiance.ToRGB();
        sunRadiance += sampleRadiance;
    }
}

// Account for coordinate system scaling, and sample averaging
sunRadiance *= 100.0f * (1.0f / NumDiscSamples) * (1.0f / NumDiscSamples);
This computes an average radiance across the entire solar disc. I'm doing it this way so that the code works with the rest of our framework, which currently works off the assumption that the solar disc has a uniform radiance. If you just want to compute the appropriate intensity to use for a directional light, then you can directly compute irradiance instead. To do this, you need to evaluate the integral of cos(theta) * radiance, which you can do with Monte Carlo: for each sample you compute, multiply it by N dot L (where 'N' is the direction towards the center of the sun, and 'L' is your current sample direction) and accumulate the sum, then multiply the sum by InversePDF / NumSamples. Otherwise, if you assume the radiance is uniform, then you can compute the irradiance integral analytically:

static float IlluminanceIntegral(float theta)
{
    float cosTheta = std::cos(theta);
    return Pi * (1.0f - (cosTheta * cosTheta));
}
where 'theta' is the angular radius of the sun. So the final irradiance would be IlluminanceIntegral(SunSize) * sunRadiance.
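Going back to the Monte Carlo version: here's a hedged sketch of that estimator, assuming you sample directions uniformly within the solar cone. SampleSolarRadiance (which would wrap the per-wavelength loop from the code above), Rand01, and RotateToFrame are hypothetical helpers:

Float3 ComputeSunIrradiance(const Float3& sunDir, uint64 numSamples)
{
    const float cosThetaMax = std::cos(SunSize);

    // PDF of a direction sampled uniformly within the solar cone
    const float pdf = 1.0f / (2.0f * Pi * (1.0f - cosThetaMax));

    Float3 sum = Float3(0.0f);
    for(uint64 i = 0; i < numSamples; ++i)
    {
        // Uniformly pick a direction L inside the cone around sunDir
        float u1 = Rand01();
        float u2 = Rand01();
        float cosTheta = Lerp(1.0f, cosThetaMax, u1);
        float sinTheta = std::sqrt(1.0f - cosTheta * cosTheta);
        float phi = 2.0f * Pi * u2;
        Float3 local = Float3(sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta);
        Float3 L = RotateToFrame(sunDir, local);    // rotate local +Z onto sunDir

        // Integrand is cos(theta) * radiance, with 'N' being the
        // direction towards the center of the sun
        sum += SampleSolarRadiance(L) * Dot(sunDir, L);
    }

    // Multiply by InversePDF / NumSamples
    return sum * (1.0f / pdf) * (1.0f / numSamples);
}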

Oh, and that 'SquareToConcentricDiskMapping' function is just an implementation of Peter Shirley's method for mapping from a unit square to a unit circle:

inline Float2 SquareToConcentricDiskMapping(float x, float y)
{
    float phi = 0.0f;
    float r = 0.0f;

    // -- (a,b) is now on [-1,1]^2
    float a = 2.0f * x - 1.0f;
    float b = 2.0f * y - 1.0f;

    if(a > -b)                      // region 1 or 2
    {
        if(a > b)                   // region 1, also |a| > |b|
        {
            r = a;
            phi = (Pi / 4.0f) * (b / a);
        }
        else                        // region 2, also |b| > |a|
        {
            r = b;
            phi = (Pi / 4.0f) * (2.0f - (a / b));
        }
    }
    else                            // region 3 or 4
    {
        if(a < b)                   // region 3, also |a| >= |b|, a != 0
        {
            r = -a;
            phi = (Pi / 4.0f) * (4.0f + (b / a));
        }
        else                        // region 4, |b| >= |a|, but a==0 and b==0 could occur.
        {
            r = -b;
            if(b != 0)
                phi = (Pi / 4.0f) * (6.0f - (a / b));
            else
                phi = 0;
        }
    }

    Float2 result;
    result.x = r * std::cos(phi);
    result.y = r * std::sin(phi);
    return result;
}
Hope this helps!

In Topic: The Order 1886: Spherical Gaussian Lightmaps

21 August 2015 - 03:43 PM

We had a custom GI baking system written on top of OptiX. Our tools were integrated into Maya (including our renderer), so the lighting artists would open the scene in Maya and initiate bakes. From there, we would package up the scene data and distribute it to multiple nodes on our bake farm, which were essentially Linux PCs mostly running GTX 780s.

We're still working on finishing up our course notes, but once they're available there will be a lot more detail about representing an NDF with an SG and warping it to the correct space. We're also working on a code sample that bakes SG lightmaps and renders the scene.

Also, regarding the golden spiral: if you do a Google search for "golden spiral on sphere", you can find some articles (like this one) that show you how to do it.
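In case it saves you a search, here's a minimal sketch of the usual construction (a Fibonacci lattice on the sphere); this is the standard formulation rather than code from any particular article:

std::vector<Float3> GoldenSpiralOnSphere(uint64 numPoints)
{
    const float GoldenAngle = Pi * (3.0f - std::sqrt(5.0f));    // ~2.39996 radians

    std::vector<Float3> points(numPoints);
    for(uint64 i = 0; i < numPoints; ++i)
    {
        // Step z evenly from near +1 down to near -1, while the
        // azimuth advances by the golden angle each step
        float z = 1.0f - (2.0f * i + 1.0f) / numPoints;
        float r = std::sqrt(std::max(0.0f, 1.0f - z * z));
        float phi = GoldenAngle * i;
        points[i] = Float3(r * std::cos(phi), r * std::sin(phi), z);
    }
    return points;
}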

In Topic: Eye rendering - parallax correction

18 August 2015 - 02:54 PM

First, look up the equations for refraction. These will tell you how to compute the refracted light direction based on the surface normal and IOR. If you have a mesh for the cornea that matches the actual dimensions of the human eye, then calculating the refraction is really easy in the pixel shader: your incoming light direction will be the eye->pixel vector, and the normal will be the interpolated surface normal of the mesh. Once you've calculated the refracted view direction, you just need to intersect it with the iris. A simple way to do this is to treat the iris as a flat plane that's 2.18mm from the apex of the cornea. You can then do a simple ray/plane intersection test to find the point on the surface of the iris that you're shading. To get the right UV coordinates to use, you just need a simple way of mapping your iris UVs to actual positions on the iris (I just used an artist-configurable scale value on the XY coordinates of the iris surface). I would recommend doing all of this in a coordinate space that's local to the eye, since it makes the calculations simpler. For instance, you could set it up such that the apex of the cornea is at X=Y=Z=0, and the iris lies in a plane parallel to the XY plane, located 2.18mm from the origin along Z.
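Here's a rough sketch of those two steps in eye-local space, assuming the eye looks down -Z with the cornea apex at the origin and positions measured in millimeters. ComputeIrisUV and its parameters are illustrative names, and the 1.376 cornea IOR is just a typical published value:

const float CorneaIOR = 1.376f;
const float IrisDepth = 2.18f;      // mm from the cornea apex

Float2 ComputeIrisUV(const Float3& corneaPos,   // shaded position, eye-local
                     const Float3& viewDir,     // normalized eye->pixel vector
                     const Float3& normal,      // interpolated cornea normal
                     float uvScale)             // artist-configurable scale
{
    // Snell's law refraction of the view ray as it enters the cornea
    float eta = 1.0f / CorneaIOR;
    float cosI = -Dot(normal, viewDir);
    float sinT2 = eta * eta * (1.0f - cosI * cosI);
    Float3 refracted = viewDir * eta + normal * (eta * cosI - std::sqrt(1.0f - sinT2));

    // Ray/plane intersection with the iris plane at z = -IrisDepth
    float t = (-IrisDepth - corneaPos.z) / refracted.z;
    Float3 irisPos = corneaPos + refracted * t;

    // Map the XY position on the iris plane to iris texture UVs
    return Float2(irisPos.x * uvScale + 0.5f, irisPos.y * uvScale + 0.5f);
}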

In Topic: UpdateSubresource on StructuredBuffer

15 August 2015 - 04:28 PM

The interface kinda lets you believe that a DrawCall is executed when called.


Indeed, it does make it appear like that is the case. That's actually one of the major changes for D3D12: with D3D12 you build up one or more command lists, and then you must explicitly submit them to the GPU. This makes it very clear that you're buffering up commands in advance, and also lets you make the choice as to how much latency you want between building command lists and having the GPU execute them. It also completely exposes the memory synchronization to the programmer. So instead of having something like D3D11_MAP_WRITE_DISCARD where the driver is responsible for doing things behind the scenes to avoid stalls, it's up to you to make sure that you don't accidentally write to memory that the GPU is currently using.
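As a rough sketch of what that submission model looks like in practice (error handling omitted, with the allocator, command list, queue, fence, and event assumed to be created elsewhere):

#include <d3d12.h>

void SubmitAndWait(ID3D12CommandAllocator* cmdAllocator,
                   ID3D12GraphicsCommandList* cmdList,
                   ID3D12CommandQueue* queue,
                   ID3D12Fence* fence, UINT64& fenceValue, HANDLE fenceEvent)
{
    // Build up a command list: nothing runs on the GPU yet
    cmdAllocator->Reset();
    cmdList->Reset(cmdAllocator, nullptr);
    // ... record draws, barriers, and copies here ...
    cmdList->Close();

    // Explicitly submit the recorded commands to the GPU
    ID3D12CommandList* lists[] = { cmdList };
    queue->ExecuteCommandLists(1, lists);

    // Signal a fence so the CPU knows when the GPU has finished. Until
    // that happens, it's on you not to touch memory the GPU is using.
    ++fenceValue;
    queue->Signal(fence, fenceValue);
    if(fence->GetCompletedValue() < fenceValue)
    {
        fence->SetEventOnCompletion(fenceValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);
    }
}

In a real renderer you'd typically keep two or three frames in flight instead of waiting immediately, which is exactly the latency choice mentioned above.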
