Deconstructing the Post-Processing of The Order: 1886

8 comments, last by kalle_h 8 years, 7 months ago

Hello, I have just finished a game called The Order: 1886.

I was blown away by the post effects; they look amazing, but I'm not exactly sure which effects are taking place.

I've spent the morning reading through the available information on the Order's rendering, but it mostly focuses on the physically based rendering, something my engine already supports.

I want to try to implement some nice post effects in my game.

Firstly, can you help me track down which effects are in this image? My guesses: edge vignette, some kind of chromatic distortion, depth of field, some kind of scene texture/grain, something else?

[Screenshot: the-order-1886-uscreen-5.jpg]


Stay tuned! One of the actual developers of the game will shortly be with you :D

Until the developer is here I would recommend their slides on their rendering tech:

http://advances.realtimerendering.com/s2015/rad_siggraph_advances_2015.pptx

(See here: http://advances.realtimerendering.com/s2015/index.html and the other presentations here: http://www.readyatdawn.com/presentations/ )

The post-processing chain went like this:

* Motion blur - full resolution, based on Morgan McGuire's work with optimizations inspired by Jorge Jimenez's presentation on COD: AW

* Depth of field - half resolution, with 7x7 bokeh-shaped gather patterns similar to what Tiago Sousa proposed in his CryEngine 3 presentation from SIGGRAPH 2013.

* Bloom - half resolution, separable 21x21 Gaussian filter in a compute shader

* Lens flares - 1/4 resolution, with several custom screen-space filter kernels for different shapes (aperture, streaks, etc.). Originally implemented using FFT to convolve arbitrary artist-specified kernels (painted in textures) in the frequency domain, but at the very end of the project we switched to fixed kernels since we only ever used one fixed set of shape textures

* Tone Mapping - combined chromatic aberration, lens distortion, film grain, exposure, tone mapping, bloom + lens flare composite, and color correction
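As a sanity check on the bloom step above, the weights for a 21-tap separable Gaussian can be generated like this; a minimal Python sketch, where the sigma value is my own assumption (the post doesn't specify one):

```python
import math

def gaussian_weights(radius, sigma):
    # 1D kernel of size 2*radius + 1, normalized so the taps sum to 1
    w = [math.exp(-(x * x) / (2.0 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    total = sum(w)
    return [v / total for v in w]

# radius=10 gives the 21 taps of a separable 21x21 filter
weights = gaussian_weights(10, 3.0)
```

Because a Gaussian is separable, the 21x21 blur can run as two 21-tap passes (horizontal, then vertical) instead of 441 samples per pixel.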

Can you elaborate on the 21x21 Gaussian filter?


Thanks for the link! I spent some time today going over the various Ready At Dawn presentations; some really inspiring stuff!


Wow, thanks MJP! That's really interesting. I've already implemented some of the more common effects.

I think it's the final step in your post-processing chain, the tone mapping stage, where the magic happens...

Could you share any info on what kind of tone mapping/exposure The Order uses? Is it like the approach implemented in your FXAA sample project?

Can you elaborate on the 21x21 Gaussian filter?


It's a compute shader where every thread does a texture load and stores it in shared memory, and then loops over neighboring samples by pulling them from shared memory. I believe it's similar to the "SeparableFilter11" sample from the Radeon SDK.

Could you share any info on what kind of tone mapping/exposure The Order uses? Is it like the approach implemented in your FXAA sample project?


For tone mapping we used the curve that John Hable came up with for Uncharted 2, with our own settings that one of our lighting artists came up with. Exposure was either manually specified, or came from auto-exposure. The auto-exposure system was really simple: it computed the geometric mean of luminance over every pixel using a compute shader, and then came up with an exposure value by mapping the average luminance to a "middle grey" value. We also had controls that could clamp the auto-exposure to min and max exposure values, as well as settings for the temporal adaptation.

By the time the project shipped we weren't very happy with either: the tone mapping mapped middle grey to a weird point, which made it hard to work with, and the auto-exposure required a lot of hand-holding. Both are areas of active R&D for us.

For tone mapping I've been trying out the ACES RRT + sRGB monitor ODT, which is nice because it's standardized and well-behaved. I also still like the curve that HP Duiker came up with by scanning film stock, which is what I usually use in the sample code on my blog. For exposure, I think we need to come up with better weighting schemes that ensure that we expose for the "important" scene elements.
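For reference, Hable's published Uncharted 2 curve and a geometric-mean auto-exposure can be sketched like this. The curve constants are Hable's public defaults and the middle-grey key of 0.18 is my own assumption; the game used custom artist-tuned settings, per the post above:

```python
import math

# Hable's published Uncharted 2 filmic curve constants (not the game's
# artist-tuned settings)
A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30
W = 11.2  # linear white point

def hable(x):
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F

def tonemap(x):
    # Normalize so the white point maps to 1.0
    return hable(x) / hable(W)

def auto_exposure(luminances, key=0.18):
    # Geometric mean of luminance, then map the average to "middle grey"
    log_avg = sum(math.log(l + 1e-4) for l in luminances) / len(luminances)
    avg_lum = math.exp(log_avg)
    return key / avg_lum
```

The exposure scale would be applied to the HDR color before the curve, i.e. `tonemap(exposure * color)` per channel.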


Thanks for the info!

Did you guys use any SSAO?

Did you guys use any SSAO?


No. Characters and other dynamic objects had capsules that were skinned to joints, and a low-resolution pass would accumulate the diffuse + specular occlusion per-pixel for all capsules within the falloff range.


Capsule occlusion does look very nice in-game. By the way, did you ever test any SSAO variants? Would there have been any benefits?

