Krypt0n

Member
  • Content count: 1075
  • Joined
  • Last visited
  • Days Won: 2

Krypt0n last won the day on March 31

Krypt0n had the most liked content!

Community Reputation: 4752 Excellent

About Krypt0n
  • Rank: Contributor

Personal Information
  • Interests: Programming
  1. If you discard only 1/255 of the pixels (assuming you mean black == 0.f), then there won't be any performance benefit from discard nor from stenciling, as the GPU dispatches pixels in groups of at least 2x2 (yes, fragments that are already rejected will still be processed). Therefore the only speedup you'll get is in cases where there are 2x2 blocks of black pixels that also line up exactly with the 2x2 pixel grid, roughly a 1:4 billion chance per quad. Hence, if this is only about performance, go for discard; it at least has less setup overhead.
  2. CPU Raytracer

    I'd also be interested in how many samples/pixel you actually trace to get such a smooth result. Highres screenshot ++
  3. Merge rgb and alpha channels

    Just in case you need something special again: with the single-file header library for images, you can probably write that in a few minutes: https://github.com/nothings/stb
  4. Are you reading the texture via samplers or image loads?
  5. Temporal dithering

    Have you tweaked it, or are you using the pseudo code randomizer?
  6. Hard-code one specific ray that hits something, and step through the code in your debugger. On the CPU it's quite easy to figure out where the expected value diverges from the one generated by your source. fmin(d1,d2) sounds ok to me.
  7. How do you fill the image? Maybe you index it by x+y*height instead of x+y*width or something.
  8. Make a 32x32 image and step with your debugger pixel by pixel; check why every 2nd line is black (or not written). From a glance at it, your ray generation code looks fine. Maybe avoid doing Halton for now, just to keep the count of possible error sources low.
  9. It usually helps to split a big problem into smaller stages and get every step working individually. From the image, it looks as if something is wrong with the first hit, i.e. the camera rays, as there is some interlacing. (I would guess your "SetPixel" might be somehow wrong.) For debugging, you could try to just visualize the first hit and output the pure diffuse color. If that works, add some explicit tracing to a light source (e.g. a point light) with old-school lighting calculations.
  10. Temporal dithering

    It was pseudo code to make the trivial idea behind it easy to understand; this whole code can be summarized to

        color.rgb += frac(sin(pixelid*199.f + frametime*123.f)*123.f) * (1.f/range);

    resulting in exactly the same output.
  11. Temporal dithering

    You can adjust my pseudo code easily, e.g.

        float Noise(float2 uv, float t)
        {
            return frac(sin((uv.x + uv.y)*199.f + t)*123.f);
        }

        float Dither(float v, float colorCount, float t)
        {
            float c = v * colorCount;
            // this checks whether the color's last bit(s) is/are above a random
            // number and therefore rounds randomly
            c += Noise(PixelPosition, t) > frac(c) ? 1.f : 0.f;
            c -= frac(c);
            c /= colorCount;
            return c;
        }

        ...
        const float range = 128.f;  //8bit
        //const float range = 512.f; //10bit
        color.r = Dither(color.r, range, frameTime*123.f);
        color.g = Dither(color.g, range, frameTime*123.f);
        color.b = Dither(color.b, range, frameTime*123.f);
  12. It generates a vector on the sphere: not inside, not outside, but exactly on a unit sphere. u and v are two input seeds that you have to provide to get the corresponding output vector. These values depend on your sampling approach: they could be two fully random values, or based on stratified sampling, a regular grid, some sequence (e.g. Halton), or however you want to approach it.
  13. 3D Assimp Model Load (FBX) Help!

    That's ok for a small scope; not everything needs a sophisticated tool/build pipeline. Good luck with your DirectX12 project.
  14. The most trivial way I'd come up with:

        vec3 diffuse(float u, float v)
        {
            float radius = sqrtf(v);
            u = u * 2.f * FLT_PI;
            return vec3(cosf(u)*radius, sinf(u)*radius, sqrtf(1.f - v));
        }

    You can google information about it using "uniform hemisphere sampling", or in particular for diffuse, "cosine hemisphere sampling".
  15. Looks in sync to me; maybe the half-frame-rate animation updates give an impression of lag.