R&D Prefiltered LightProbe discontinuity



[Image: Seams.png]

I was reworking my LightProbe filter and wrote some code to generate the reference cubemap, but then I noticed discontinuities at the border of each face (top: CPU implementation, bottom: GPU implementation; the contrast has been adjusted on the right side).

At first I thought it might be caused by the interpolation, but then I tried the same algorithm in 2D (like a slice through the normal light probe prefiltering) for better visualization, and the result really confused me.

See the attachments: the top half is the prefiltered color value, displayed per channel. It's upside down because I used the color value directly as the y coordinate.

[Image: Roughness 0.5 plot]

[Image: Roughness 1 plot]

The bottom half is the finite difference of the color. There is very clearly a discontinuity, and it sits exactly where the face border should be. As the roughness goes higher, the plot gets stranger.

So I'm kind of stuck here. What is happening, and what can I do to remove this artifact? Does anybody have an idea?

And here is my code:

// Maps texel i of face FaceIndex (the 2D analogue of a cubemap face) to a direction
// on the unit circle; SolidAngle receives the texel's 1 / |p|^3 distance-falloff weight.
inline FVector2D Map(int32 FaceIndex, int32 i, int32 FaceSize, float& SolidAngle)
{
    float u = 2 * (i + 0.5) / (float)FaceSize - 1;

    FVector2D Return;
    switch (FaceIndex)
    {
    case 0: Return = FVector2D(-u, -1); break;
    case 1: Return = FVector2D(-1, u);  break;
    case 2: Return = FVector2D(u, 1); break;
    case 3: Return = FVector2D(1, -u); break;
    }

    SolidAngle = 1.0f / FMath::Pow(Return.SizeSquared(), 3.0f / 2.0f);
    return Return.SafeNormal();
}

void Test2D()
{
    const int32 Res = 256;
    const int32 MipLevel = 8;

    TArray<FLinearColor>    Source;
    TArray<FLinearColor>    Prefiltered;

    Source.AddZeroed(Res * 4);
    Prefiltered.AddZeroed(Res * 4);

    for (int32 i = 0; i < Res; ++i)
    {
        Source[i] = FLinearColor(1, 0, 0);
        Source[Res + i] = FLinearColor(0, 1, 0);
        Source[Res * 2 + i] = FLinearColor(0, 0, 1);
        Source[Res * 3 + i] = FLinearColor(0, 0, 0);
    }

    const float Roughness = MipLevel / 8.0f;
    const float a = Roughness * Roughness;
    const float a2 = a * a;

    // Brute force sampling with GGX kernel
    for (int32 FaceIndex = 0; FaceIndex < 4; ++FaceIndex)
    {
        for (int32 i = 0; i < Res; ++i)
        {
            float SolidAngle = 0;
            FVector2D N = Map(FaceIndex, i, Res, SolidAngle);

            double TotalColor[3] = {};
            double TotalWeight = 0;
            for (int32 SampleFace = 0; SampleFace < 4; ++SampleFace)
            {
                for (int32 j = 0; j < Res; ++j)
                {
                    float SampleJacobian = 0;
                    FVector2D L = Map(SampleFace, j, Res, SampleJacobian);
                    const float NoL = (L | N);
                    if (NoL <= 0)
                        continue;

                    const FVector2D H = (N + L).SafeNormal();
                    const float NoH = (N | H);

                    // GGX NDF (constant factors dropped) times NoL times the texel weight
                    float D = a2 * NoL * SampleJacobian / FMath::Pow(NoH*NoH * (a2 - 1) + 1, 2.0f);
                    TotalWeight += D;
                    FLinearColor Sample = Source[SampleFace * Res + j] * D;
                    TotalColor[0] += Sample.R;
                    TotalColor[1] += Sample.G;
                    TotalColor[2] += Sample.B;
                }
            }
            if (TotalWeight > 0)
            {
                Prefiltered[FaceIndex * Res + i] = FLinearColor(
                    TotalColor[0] / TotalWeight,
                    TotalColor[1] / TotalWeight,
                    TotalColor[2] / TotalWeight);
            }
        }
    }

    // Save to bmp
    const int32 Width = 4 * Res;
    const int32 Height = 768;

    TArray<FColor> Bitmap;
    Bitmap.SetNum(Width * Height);

    // Prefiltered Color curve per channel
    float MaxDelta = 0;
    for (int32 x = 0; x < Width; ++x)
    {
        FColor SourceColor = Source[x].ToFColor(false);

        Bitmap[x] = SourceColor;

        FColor Sample = Prefiltered[x].ToFColor(false);


        check(Sample.R < 256);
        check(Sample.G < 256);
        check(Sample.B < 256);
        Bitmap[Sample.R * Width + x] = FColor(255, 0, 0);
        Bitmap[Sample.G * Width + x] = FColor(0, 255, 0);
        Bitmap[Sample.B * Width + x] = FColor(0, 0, 255);

        if (x > 0)
        {
            const FLinearColor Delta = Prefiltered[x] - Prefiltered[x - 1];

            MaxDelta = FMath::Max(MaxDelta, FMath::Max3(FMath::Abs(Delta.R), FMath::Abs(Delta.G), FMath::Abs(Delta.B)));
        }
    }

    // Differential per channel
    const float Scale = 128 / MaxDelta;
    for (int32 x = 1; x < Width; ++x)
    {
        const FLinearColor Delta = Prefiltered[x] - Prefiltered[x - 1];

        Bitmap[int32(512 + Delta.R * Scale) * Width + x] = FColor(255, 0, 0);
        Bitmap[int32(512 + Delta.G * Scale) * Width + x] = FColor(0, 255, 0);
        Bitmap[int32(512 + Delta.B * Scale) * Width + x] = FColor(0, 0, 255);
    }

    FFileHelper::CreateBitmap(TEXT("Test"), Width, Height, Bitmap.GetData());
}
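
For reference, the quantity the nested loops above compute for each texel direction N is just the following (my notation; $c_L$ is the source color of texel $L$, $\Delta_L$ is its SolidAngle weight from Map, and $\alpha = \text{Roughness}^2$), i.e. a normalized GGX-weighted sum over every texel of every face:

$$\mathrm{Prefiltered}(N) \;=\; \frac{\displaystyle\sum_{L,\;N\cdot L>0} c_L\,\frac{\alpha^2\,(N\cdot L)\,\Delta_L}{\big((N\cdot H)^2(\alpha^2-1)+1\big)^2}}{\displaystyle\sum_{L,\;N\cdot L>0} \frac{\alpha^2\,(N\cdot L)\,\Delta_L}{\big((N\cdot H)^2(\alpha^2-1)+1\big)^2}}, \qquad H=\frac{N+L}{\lVert N+L\rVert}.$$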

 

Roughness 0.5.bmp

Roughness 1.bmp



Well, the diff is NOT continuous. Since we sample L and N by texture coordinate, if you write the function down you'll notice it is piecewise: it's different on each plane/line. So the discontinuity in the diff is natural.
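
To make that concrete, here's a tiny standalone sketch (plain C++, no UE types; Map2D() is only a hypothetical helper mirroring the Map() from the post above, restricted to faces 0 and 1). It walks the texels on either side of the border between face 0 and face 1 and prints the angular step per texel. The step size stays roughly continuous across the border, but the formula producing it switches there, so its trend reverses (steps shrink toward the corner on face 0, then grow again on face 1), and anything plotted against the texel index inherits that piecewise behaviour.

#include <cmath>
#include <cstdio>

// Same face layout as Map() above, restricted to faces 0 and 1, without the weight.
static void Map2D(int FaceIndex, int i, int FaceSize, float& OutX, float& OutY)
{
    const float u = 2.0f * (i + 0.5f) / FaceSize - 1.0f;
    if (FaceIndex == 0) { OutX = -u;    OutY = -1.0f; } // face 0: the y = -1 edge
    else                { OutX = -1.0f; OutY =  u;    } // face 1: the x = -1 edge
}

int main()
{
    const int Res = 256;
    float PrevAngle = 0.0f;

    // Last four texels of face 0, then the first four texels of face 1.
    for (int k = -4; k < 4; ++k)
    {
        const int Face = (k < 0) ? 0 : 1;
        const int i    = (k < 0) ? Res + k : k;

        float x, y;
        Map2D(Face, i, Res, x, y);
        const float Angle = std::atan2(y, x); // direction angle of texel i on this face

        if (k > -4)
            std::printf("face %d texel %3d  angular step %.6f rad\n",
                        Face, i, Angle - PrevAngle);
        PrevAngle = Angle;
    }
    return 0;
}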
