arjansingh00

What is the future of Real Time Rendering?


In the near future, what will bring real-time rendering engines closer to photorealism? The majority of engines share the same set of features:

 

Physically Based Rendering/Lighting

Dynamic Reflections

Shadow Mapping (also PCSS)

Ambient Occlusion (SSAO, HBAO)

Bloom & HDR

Depth of Field

Lens Flares

Motion Blur

Particle Effects (Fog, Fire)

But what will really make things look lifelike? Will most techniques become physically based to improve quality? Are there any new breakthrough techniques?
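For concreteness, "physically based" in lists like this usually means a microfacet specular model. Here is a minimal Cook-Torrance/GGX sketch, my own toy code rather than anything taken from a particular engine, of the kind of BRDF these engines share:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static const float kPi = 3.14159265f;

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// GGX / Trowbridge-Reitz normal distribution function.
static float distributionGGX(float NdotH, float roughness) {
    float a2 = roughness * roughness * roughness * roughness; // alpha = roughness^2
    float d = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
    return a2 / (kPi * d * d);
}

// Smith geometry (masking-shadowing) term, Schlick-GGX approximation.
static float geometrySmith(float NdotV, float NdotL, float roughness) {
    float k = (roughness + 1.0f) * (roughness + 1.0f) / 8.0f; // direct-light remap
    float gv = NdotV / (NdotV * (1.0f - k) + k);
    float gl = NdotL / (NdotL * (1.0f - k) + k);
    return gv * gl;
}

// Schlick's Fresnel approximation (scalar F0 for brevity).
static float fresnelSchlick(float VdotH, float f0) {
    return f0 + (1.0f - f0) * std::pow(1.0f - VdotH, 5.0f);
}

// Cook-Torrance specular term: D * G * F / (4 * NdotV * NdotL).
static float specularBRDF(Vec3 n, Vec3 v, Vec3 l, float roughness, float f0) {
    Vec3 h = normalize({ v.x + l.x, v.y + l.y, v.z + l.z }); // half vector
    float NdotV = std::max(dot(n, v), 1e-4f);
    float NdotL = std::max(dot(n, l), 1e-4f);
    float NdotH = std::max(dot(n, h), 0.0f);
    float VdotH = std::max(dot(v, h), 0.0f);
    return distributionGGX(NdotH, roughness)
         * geometrySmith(NdotV, NdotL, roughness)
         * fresnelSchlick(VdotH, f0)
         / (4.0f * NdotV * NdotL);
}

int main() {
    Vec3 n = { 0, 0, 1 };                     // surface normal
    Vec3 v = normalize({ 0.0f, 0.5f, 1.0f }); // view direction
    Vec3 l = normalize({ 0.3f, 0.3f, 1.0f }); // light direction
    std::printf("specular = %f\n", specularBRDF(n, v, l, 0.4f, 0.04f));
    return 0;
}
```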


That was an interesting article, but it basically says we should take a physically based approach to everything (materials, lights) and especially shadows. Beyond that, is there anything holding real-time engines back from photorealism? Are there any new ambient occlusion techniques, or ways to improve shadows (e.g. PCSS, HBAO)?


IMHO, all (or at least most) screen-space techniques should go.

For example, for AO you would want something like this, but it is too performance/memory hungry, so we fake it in screen space.
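To make "fake it in screen space" concrete, here is a sketch of the setup step most SSAO variants share: build a small kernel of hemisphere offsets that the shader later projects into the depth buffer and depth-tests. The names and constants are mine, purely illustrative:

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    std::vector<Vec3> kernel;
    const int kKernelSize = 16;

    for (int i = 0; i < kKernelSize; ++i) {
        // Random direction in the z-up hemisphere (tangent space).
        Vec3 s = { dist(rng) * 2.0f - 1.0f,
                   dist(rng) * 2.0f - 1.0f,
                   dist(rng) };              // z >= 0 keeps it above the surface
        float len = std::sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
        s = { s.x / len, s.y / len, s.z / len };

        // Bias samples toward the origin so nearby geometry dominates.
        float scale = float(i) / kKernelSize;
        scale = 0.1f + 0.9f * scale * scale;  // lerp(0.1, 1.0, scale^2)
        kernel.push_back({ s.x * scale, s.y * scale, s.z * scale });
    }

    for (const Vec3& k : kernel)
        std::printf("% .3f % .3f % .3f\n", k.x, k.y, k.z);
    return 0;
}
```

Everything the technique can "see" is whatever survived into the depth buffer, which is exactly why it falls apart at screen edges and behind foreground objects.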


History shows that real-time rendering is always a couple of steps behind offline rendering, but it eventually catches up. To illustrate: any decent current-gen game engine can produce more realistic results than you could achieve in 3DStudio R1.

The current state of offline rendering is all ray tracing, so you would expect graphics cards and engines to make that switch within some years as well. If you use the Cycles renderer with Blender you can already see a glimpse of that. Ray tracing will also drastically reduce the tedious work that goes into faking all sorts of things. Correct lighting, soft shadows, refractive materials, reflections, etc. are all 'free' when using a ray-tracing algorithm and need no special-case handling. This makes me think the current approach will quickly become obsolete.
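To illustrate the "free" soft shadows claim, here is a toy CPU sketch: instead of a shadow map, cast a handful of shadow rays at random points on an area light and average the visibility. The scene and numbers are invented for the example:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// True if the ray (origin o, unit direction d) hits the sphere (center c,
// radius r) at a positive distance closer than maxT.
static bool hitSphere(Vec3 o, Vec3 d, Vec3 c, float r, float maxT) {
    Vec3 oc = sub(o, c);
    float b = dot(oc, d);
    float disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0.0f) return false;
    float t = -b - std::sqrt(disc);
    return t > 1e-4f && t < maxT;
}

int main() {
    Vec3 shadePoint = { 0, 0, 0 };         // point being shaded
    Vec3 occluder = { 0.5f, 2.0f, 0 };     // sphere partly blocking the light
    float occluderRadius = 0.5f;
    Vec3 lightCenter = { 0, 5.0f, 0 };     // disk area light in the xz plane
    float lightRadius = 1.0f;

    std::mt19937 rng(7);
    std::uniform_real_distribution<float> dist(-1.0f, 1.0f);

    int visible = 0;
    const int kShadowRays = 256;
    for (int i = 0; i < kShadowRays; ++i) {
        // Rejection-sample a point on the disk light.
        float dx, dz;
        do { dx = dist(rng); dz = dist(rng); } while (dx * dx + dz * dz > 1.0f);
        Vec3 target = { lightCenter.x + dx * lightRadius, lightCenter.y,
                        lightCenter.z + dz * lightRadius };
        Vec3 dir = sub(target, shadePoint);
        float len = std::sqrt(dot(dir, dir));
        dir = { dir.x / len, dir.y / len, dir.z / len };
        if (!hitSphere(shadePoint, dir, occluder, occluderRadius, len))
            ++visible;
    }
    // A value strictly between 0 and 1 is the penumbra: a soft shadow with
    // no shadow map and no PCSS-style filtering.
    std::printf("visibility = %.3f\n", float(visible) / kShadowRays);
    return 0;
}
```

There is no special casing here at all; you just spend more rays where you want less noise.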

Since ray tracing lends itself perfectly to parallel computation, current GPUs are already on the right track. I think further improvements in GPU design will be made to accommodate ray tracing. Compute shaders are already a step in that direction.

My experience is that things may come sooner than you expect. Current technology is so much more advanced than I could have anticipated 20 years ago.


> IMHO, all (or at least most) screen-space techniques should go. For example, for AO you would want something like this, but it is too performance/memory hungry, so we fake it in screen space.

In the meantime, the state of the art involves refining screen-space approaches to use algorithms that are consistent with reality, and validating their outputs against those more robust approaches (voxels, ray tracing, etc.). See, for example, the "Ground Truth AO" SSAO technique: http://iryoku.com/downloads/Practical-Realtime-Strategies-for-Accurate-Indirect-Occlusion.pdf
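For reference, "ground truth" there means an AO term computed by actually casting rays over the hemisphere instead of sampling the depth buffer. A minimal Monte Carlo sketch of that kind of reference estimator (the scene and sample counts are mine, purely illustrative):

```cpp
#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static bool hitSphere(Vec3 o, Vec3 d, Vec3 c, float r) {
    Vec3 oc = sub(o, c);
    float b = dot(oc, d);
    float disc = b * b - (dot(oc, oc) - r * r);
    return disc >= 0.0f && (-b - std::sqrt(disc)) > 1e-4f;
}

int main() {
    const float kPi = 3.14159265f;
    Vec3 p = { 0, 0, 0 };               // shaded point on a ground plane
    Vec3 occluder = { 0.6f, 0.8f, 0 };  // sphere hovering nearby
    float radius = 0.5f;

    std::mt19937 rng(1);
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);

    int unoccluded = 0;
    const int kSamples = 4096;
    for (int i = 0; i < kSamples; ++i) {
        // Cosine-weighted direction over the y-up hemisphere: with this pdf,
        // simply counting unoccluded rays estimates the AO integral.
        float r1 = u01(rng), r2 = u01(rng);
        float phi = 2.0f * kPi * r1;
        float sinTheta = std::sqrt(r2);
        Vec3 d = { std::cos(phi) * sinTheta,
                   std::sqrt(1.0f - r2),
                   std::sin(phi) * sinTheta };
        if (!hitSphere(p, d, occluder, radius)) ++unoccluded;
    }
    std::printf("ambient occlusion (visibility) = %.3f\n",
                float(unoccluded) / kSamples);
    return 0;
}
```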


Current real-time rendering is still a grab-bag of hacks, approximations, crutches, etc., and these all need to go. For example, anything that is currently faked by a post-processing pass should be eliminated with extreme prejudice and moved directly into the main lighting model instead. Likewise, all of our common shadowing techniques are fundamentally broken.

 

The best rendering doesn't need bloom, doesn't need tonemapping, doesn't need SSAO, GTFO or RFLMAO: all of these effects come naturally from the lighting model and don't need to be simulated or faked afterwards.

 

The ideal future of rendering is one where we actually move away from checkbox lists like the one in the OP; a future where the renderer doesn't have these features because it doesn't need them. It doesn't need crutches because the end result of its standard rendering passes is simply correct.

 

The interesting thing is that GPUs have (and have had for some time) the ability to do all of this; what's lacking is the muscle.  Right now doing all of the hacks and approximations is still the most effective way to get a result with current GPU power.

 

Until real-time ray tracing becomes a thing, how will this become possible with rasterization?


> Until real-time ray tracing becomes a thing, how will this become possible with rasterization?


I don't have any particular insight into how it can happen, just that it needs to happen.


> For example, anything that is currently faked by a post-processing pass should be eliminated with extreme prejudice and moved directly into the main lighting model instead.

 

Only for things that are approximating real-world phenomena. There are effects that can't easily be done via lighting alone, which is why films make extensive use of colour grading.
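To make the grading point concrete: a grade is an artistic transform applied after the image is formed, however physically it was lit, so no lighting model can subsume it. A tiny lift/gamma/gain sketch, with parameter values invented for illustration:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct RGB { float r, g, b; };

// Classic lift/gamma/gain grade, applied per channel to a [0,1] value.
static float grade(float x, float lift, float gamma, float gain) {
    x = std::clamp(x, 0.0f, 1.0f);
    x = lift + (gain - lift) * x;                       // remap black/white points
    return std::pow(std::max(x, 0.0f), 1.0f / gamma);   // shift the midtones
}

int main() {
    RGB in = { 0.30f, 0.45f, 0.60f };
    // A "lifted blue shadows, slightly warm highlights" look, purely as an example.
    RGB out = { grade(in.r, 0.00f, 1.10f, 1.05f),
                grade(in.g, 0.02f, 1.00f, 1.00f),
                grade(in.b, 0.05f, 0.95f, 0.95f) };
    std::printf("(%.2f %.2f %.2f) -> (%.2f %.2f %.2f)\n",
                in.r, in.g, in.b, out.r, out.g, out.b);
    return 0;
}
```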
