DirectX12 adds a Ray Tracing API

Started by
40 comments, last by NikiTo 6 years ago

Looks in sync to me; maybe the half-frame-rate animation updates give an impression of lag.


Yes, it could be the framerate. It caught my attention quickly; it is easy to notice.

And in the Star Wars demo, I noticed they did not use textures on the reflecting surfaces, only flat materials. I guess they saved a few fetches this way.

It's maybe not easy to do proper texture sampling in reflections; imho hardware derivatives won't be correct.

Just putting this here to troll a bit: this is Frostbite's talk about Battlefront 2's static GI.

 

[attached image: image (2).png]

5 hours ago, Lightness1024 said:

Just putting this here to troll a bit: this is Frostbite's talk about Battlefront 2's static GI.

Path tracing can be interactive (even real-time) to some extent. The biggest problem is noise, though (you will only be able to do a few samples per pixel).

In directly lit areas you're quite okay, but once you're in an indirectly lit part there is a lot of noise. Let me just show a random example here:
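The sample-count point can be illustrated with a minimal Monte Carlo sketch (a hypothetical noise model, just to show convergence, not a real renderer): the per-pixel error shrinks roughly with the square root of the sample count, which is why one or two samples per pixel look so noisy.

```python
import random

def estimate_radiance(true_value, n_samples, rng):
    # average n noisy path samples; the error shrinks roughly as 1/sqrt(n)
    total = 0.0
    for _ in range(n_samples):
        # stand-in for one path-traced sample: true radiance plus heavy noise
        total += true_value + rng.uniform(-1.0, 1.0)
    return total / n_samples

rng = random.Random(1)
few = estimate_radiance(0.5, 2, rng)      # 2 spp: can be badly off
many = estimate_radiance(0.5, 4096, rng)  # 4096 spp: close to 0.5
```

This is also why denoisers are so attractive for real-time ray tracing: they try to recover the converged image from the few-samples-per-pixel estimate.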

 

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

Programming challenge:

As you can see, even in the real world, light fades away after some number of bounces; it does not go on infinitely. I don't want to bring the Matrix theory into the conversation, but it is worth mentioning that this could be the reality-simulation hardware cheating.
I am currently working in computer vision, and I have found out two things: first, compared to computers, the brain has practically infinite computational power; and second, there is no need to program an app that surpasses the real world. Sometimes it is hard for me to distinguish faces when they are painted, for example, so I don't expect my app to recognize painted faces either. I am not trying to surpass the brain, I am only trying to get close to it (and failing currently).
For this computational challenge, I would make light fade out completely after very few iterations. It would still look nearly real, and it would save computation. I think that for a game even 3 levels of mirror recursion would be enough if they fade gradually to dark/no reflection.
A curious experiment would be to raise a child from birth with VR glasses that show him a reality like Wolfenstein 3D from 1992. And when his brain adapts to operating in lame graphics, we take the glasses off and watch the reaction. It is something that happened to all of us who are 25+ years old. For example, I always remembered the movie Robot Jox as an amazing, realistic movie, until I watched it again and it was sooo lameeee. My brain had added amazing CGI effects to my memories of that lame movie.
(I would not let my children play 3D games too often. 3D games are low-polygon and monocular, so the growing, adapting brain of my child would get used to a fake reality and would perceive the real one differently. Once the brain has already developed with real perspective/lighting/physics from playing with real toys in a real environment, it is okay to play games, but again, not too often.)
That's why I love advanced-reality games: it is like hacking perception with any kind of technique, giving the brain the best available (in terms of hardware) lie to make it believe. Most of the computational work is actually done by the brain.
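The idea of cutting reflections off after a few bounces is easy to quantify with a geometric series (a sketch; the 50% reflectance value is just an illustrative assumption): bounce k carries reflectance^k of the original light, so with moderately reflective surfaces a small recursion depth already captures most of the total.

```python
def reflected_energy(reflectance, max_depth):
    # sum the contribution of each mirror bounce up to max_depth;
    # bounce k carries reflectance**k of the original light
    return sum(reflectance ** k for k in range(1, max_depth + 1))

# with 50%-reflective surfaces, three bounces already carry 0.875 out of
# the full infinite-series total r/(1-r) = 1.0, so deeper recursion adds
# almost nothing visible
approx = reflected_energy(0.5, 3)   # 0.5 + 0.25 + 0.125 = 0.875
exact = 0.5 / (1 - 0.5)             # limit of the infinite series = 1.0
```

Very reflective surfaces (r close to 1) would need more bounces before the cutoff becomes invisible, which matches the intuition that facing mirrors are the hard case.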

This happens simply due to one thing: a perfect mirror does NOT exist. And if the material parameters in your unbiased renderer of choice are correct, you will see exactly the same phenomenon after a certain number of samples per pixel.
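The energy loss of imperfect mirrors compounds geometrically, which is why deep reflections fade to dark; a one-liner shows the effect (the 95% reflectance is just an illustrative number, not from any specific material):

```python
def remaining_energy(reflectance, bounces):
    # each bounce keeps only `reflectance` of the incoming light
    return reflectance ** bounces

# even a very good 95%-reflective mirror keeps only ~36% of the light
# after 20 bounces, so an "infinite mirror" visibly darkens with depth
deep = remaining_energy(0.95, 20)
```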

Note that in unbiased path tracing you will terminate your ray eventually (Russian roulette). An infinite (analytical) mirror isn't really possible, due to the infinite number of iterations that would be needed to resolve the path.
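Russian roulette termination can be sketched in a few lines (a simplified scalar version; real tracers typically drive the survival probability from the per-channel path throughput): paths are killed with some probability, and the survivors are reweighted so the estimator's expected value is unchanged, i.e. it stays unbiased.

```python
import random

def russian_roulette(throughput, rng, p_min=0.05):
    # survival probability tied to the remaining throughput, clamped
    # so even very dim paths occasionally survive
    p_survive = max(p_min, min(1.0, throughput))
    if rng.random() > p_survive:
        return 0.0                     # path terminated, contributes nothing
    return throughput / p_survive      # reweight survivors to stay unbiased

rng = random.Random(7)
# averaging many rouletted paths recovers the original throughput
mean = sum(russian_roulette(0.3, rng) for _ in range(100_000)) / 100_000
```

The variance of each sample goes up, but the average path length (and thus cost) goes down, which is the whole point of the trade.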


@Vilem Otte I would ask the 3D artist to provide me with a model carrying additional data: a unique index per polygon. This way I could apply a noise/blur post filter and remove the noise without damaging the important edges. I think it would look nice enough this way.
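This idea maps onto an ID-buffer-guided filter; here is a hypothetical 1-D sketch (real denoisers work in 2-D and usually also consult normals and depth): the blur only averages neighbours that share the same primitive index, so polygon edges stay sharp.

```python
def id_aware_blur(values, ids, radius=1):
    # box blur that only averages neighbours sharing the same primitive id,
    # so edges between different polygons are preserved
    out = []
    for i in range(len(values)):
        acc, n = 0.0, 0
        for j in range(max(0, i - radius), min(len(values), i + radius + 1)):
            if ids[j] == ids[i]:
                acc += values[j]
                n += 1
        out.append(acc / n)
    return out

noisy = [0.9, 1.1, 1.0, 0.1, 0.0]    # noisy shading across two polygons
prims = [7,   7,   7,   3,   3]      # per-pixel polygon index
smooth = id_aware_blur(noisy, prims) # noise averaged, edge kept intact
```

As the follow-up posts note, this blurs away real detail inside each polygon (textures, reflected images), which is exactly why it breaks down for mirror reflections.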

About the ray tracing, I am not sure if the tracer is simply iterating/bouncing around, or really solving the n-body-like problem of mutual reflections.

@NikiTo A real mirror test (I might have some parameters wrong in traversal, so no guarantees I don't lose energy somewhere! A hard-coded exposure value has been used, too!).

As for noise removal - you can't do it that way; you wouldn't be unbiased anymore. It would also damage texturing, details of indirect illumination, etc.

What OptiX does for noise removal is a lot more complex, and the noise is still visible and it introduces other problems. This is due to the nature of such noise and how the noise data is created.

EDIT: OptiX denoiser - http://research.nvidia.com/publication/interactive-reconstruction-monte-carlo-image-sequences-using-recurrent-denoising


@Vilem Otte You are right. For reflections, noise removal would blur the reflected image. For your previous (Quake 2) example, it should work.

This topic is closed to new replies.
