HDR inverse tone mapping MSAA resolve


Came up with a "solution" for getting good anti-aliasing with HDR rendering only to realize that it's already been done before... Anyway, here goes!

http://theagentd.blogspot.se/2013/01/hdr-inverse-tone-mapping-msaa-resolve.html
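
For anyone who doesn't want to click through, the gist is roughly this: tone map each MSAA sample, average in tone-mapped space, then apply the inverse tone map so the rest of the HDR pipeline still gets a linear value. A minimal numpy sketch, using a Reinhard curve as a stand-in for whatever operator you actually use:

```python
import numpy as np

def tonemap(x):
    # Simple Reinhard curve, standing in for whatever operator the engine uses.
    return x / (1.0 + x)

def inverse_tonemap(y):
    # Exact inverse of the Reinhard curve above (valid for y < 1).
    return y / (1.0 - y)

def resolve_tonemapped(samples):
    # Tone map each MSAA sample, average in tone-mapped space, then map the
    # result back to HDR so later passes (bloom, exposure) still see linear data.
    return inverse_tonemap(tonemap(samples).mean(axis=0))

def resolve_linear(samples):
    # Standard box resolve in linear HDR space, for comparison.
    return samples.mean(axis=0)

# One very bright sample next to three dim ones (4x MSAA, single channel):
samples = np.array([50.0, 0.05, 0.05, 0.05])
print(resolve_linear(samples))      # ~12.54 -> still nearly white after tone mapping
print(resolve_tonemapped(samples))  # ~0.39  -> the edge keeps a usable gradient
```

The point is that each sample is bounded before the average, so a single extremely bright sample can no longer drag the whole pixel to white.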


Yeah I remember I tried this a year or two ago. While it does work in terms of causing your resolve to produce perceptually-smooth gradients, I'm no longer convinced that performing the resolve in post-tonemap is the "right thing to do". I posted some thoughts and screenshots here and here.

I've read those two already. =S


These results are a little interesting since they illustrate the differences between the two approaches. In the first image the filtering is performed on HDR values, so you get effects similar to applying DOF or motion blur in HDR, where bright values can dominate their local neighborhood. The second image shows quite a different result, where the darker geometry actually ends up looking “thicker” against the bright blue sky.
I do not share this conclusion at all. The darker geometry isn't getting thicker; it's always been the bright geometry that bleeds over its neighbors! I do agree that a wider filter is needed to completely get rid of aliasing (especially temporal aliasing), but I don't think it removes the need for a post-tone-map resolve. To some extent, a wide filter is just plain blurring, meaning it only makes the problem less apparent. A high enough contrast will still reintroduce aliasing, and will also dilate bright objects even more. With per-sample tone mapping that cannot happen, no matter how high the contrast is.

What I think is most important is temporal aliasing. Sadly I don't have a DX11 card so I can't run the test program, but from what I can see in the images, the gradient is far from linear like it should be (or well, gamma corrected or something). I'd love to see how it behaves under sub-pixel movement, and I think that's where resolving before tone mapping will really shine.
Yeah I remember I tried this a year or two ago. While it does work in terms of causing your resolve to produce perceptually-smooth gradients, I'm no longer convinced that performing the resolve in post-tonemap is the "right thing to do". I posted some thoughts and screenshots here and here.

It's indeed not a correct way to do it, but anti-aliasing as most of us do it is just a trick anyway, producing an effect our brain/eyes are tricked into seeing. In reality there is no such thing as anti-aliasing.

The anti-aliasing trick we think we see needs first to be split into two effects: temporal and spatial aliasing.

Temporal aliasing is visible in the real world. Sometimes you can look through a fence and watch a pattern behind it, and in an unlucky moment you will see 'flickering' artifacts just like from undersampling. In fact it is undersampling: the fence doesn't fulfil the sampling theorem ("if samples are taken at a slower rate than twice the band limit, then there are some signals that will not be correctly reconstructed"). In games, when people see those fences or trees they always yell that the aliasing is hurting their eyes, but in fact they could see for themselves in a windy forest how noisy everything is (not through a camera, but with their own eyes).

Temporal aliasing is the reason why movie studios use such insane amounts of supersampling, sometimes 33x33 at 4K resolution. You would think you couldn't see the difference beyond 8x8 already, but the noise comes through once things start moving. It's especially visible on specular highlights, which is why some of the highest computation cost goes into ocean surfaces and hair rendering.

Spatial aliasing, on the other hand, is hard to find in the real world. If you film a sunset over the ocean with a really short exposure (without shaking your hands, and with a camera that doesn't filter too much), you will see individual pixels on the distant water surface that are way brighter than the surrounding pixels, and it will look like aliasing. The reason is that there is a reflective area that is actually smaller than a sensor pixel, mirroring the sun in that particular direction, like a laser beam rather than an emitting surface.

Usually the area in focus is really small; everything else, even if it looks sharp, is already blurred across a few pixels, and the brighter it is, the further it bleeds into the neighbouring pixels (like a Gaussian with a high amplitude, and there are orders of magnitude between environment radiosity and light sources), while the dimmer it is, the less you notice the aliasing. Even the focus area, which should actually be sharp, still isn't sharp! You get some blur from scattering in the atmosphere (especially near the light source, which makes it glow and produces a halo), your finest lenses still have some distortion, and even those insanely big telescope lenses, sometimes built over years and as well crafted as mankind can manage, have distortion; without compensation filters made especially for that lens, that temperature and that time of night, they would have 1/100th of their sharpness. Thought of as a pinhole camera, we would need an infinitely small hole to get a picture as sharp as games render, yet games usually render with exactly that camera.

In movies, to compensate for this, the exposure is made longer: the ocean water is captured over a longer time on less sensitive film and you see less 'aliasing'. When you render CG effects, you blur the footage afterwards and then sharpen it again. It might sound backwards, but that's how you keep the image noise-free and natural, beyond what games look like. That super sharp 8xMSAA rendering will never look real, and it will never be aliasing-free with HDR; it wouldn't be in the real world either, if we had such sharp sensors and lenses and no other artifacts.

If you look at a CG rendering of Wall-E

http://0.tqn.com/d/kidstvmovies/1/0/I/H/walle008.jpg

and the final movie on disc

http://hq55.com/disney/walle/walle-disneyscreencaps.com-521.jpg

you'll notice how much quality is lost, but that's exactly the paradoxical situation: the 'degraded' version is the one that looks natural.

You cannot really take a current game rendering, blur it and sharpen it to get that effect, because of the temporal aliasing. Single images might look very real after post processing, but not at 30 fps. You need to render at insane quality, accept a massive downsampling/blur, then upsample it again, and you'll get a decent image.

Even on really high-end GPUs, that post processing is nowhere near real time. A lot of the filters are applied iteratively; e.g. 'unsharpen' applied once with high intensity gives really mediocre quality, while applying it 100 times with really low intensity gives surprisingly good quality.
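
To make the iterative part concrete, here's a rough sketch; the 3x3 box blur and the strength/iteration numbers are just placeholders, not anyone's production filter:

```python
import numpy as np

def box_blur(img):
    # Cheap 3x3 box blur with edge clamping; a stand-in for a proper Gaussian.
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def unsharp(img, strength):
    # Classic unsharp mask: add back the difference between the image and its blur.
    return img + strength * (img - box_blur(img))

def iterative_unsharp(img, strength, iterations):
    # Many weak passes instead of one strong one.
    for _ in range(iterations):
        img = unsharp(img, strength)
    return img

# img = some float grayscale image in [0, 1]
# once = unsharp(img, 2.0)                  # one strong pass
# many = iterative_unsharp(img, 0.02, 100)  # 100 weak passes
```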

I've tried to do proper HDR tone-mapped rendering with no post-tone-mapping anti-aliasing (the whole processing is done in HDR and only the very last stage is tone mapping) for my car rendering (some images are not anti-aliased):

http://www.gamedev.net/page/community/iotd/index.html/_/realistic-realtime-car-rendering-r113

You can see that even the car paint with metallic flakes looks decent in most pictures; you won't get that with supersampling alone.

@Krypt0n:

I know MSAA and game rendering in general will always be sharper than movies. The "game camera" is way too sharp, and we can't afford insane amounts of supersampling. Also, gamers are used to the super sharp rendering, so when less sharp anti-aliasing filters come out, a lot of people complain about the loss of sharpness. So what exactly are you proposing?

Your car renders are nice, but there are two issues: they're saved as JPG, and I don't know exactly which ones are supposed to be anti-aliased or what resolution they were actually rendered at. I doubt the first image was actually rendered at 901x507, but the question is whether it was downsampled or simply cropped.

Just supersampling won't help; that's what I've been trying to say all along. It's not about the sample count: if you look up at a star at night, and the star is tiny yet bright, you will always render one big aliased pixel, and when you move the camera the stars will jitter, even with 33x33 AA.

The real world doesn't, simply because it's blurry. There is a lot of light scattering through the Earth's atmosphere, and you won't focus on that star exactly.

I'm proposing to stop thinking in resolutions and subsamples and to render in pure samples instead. Whether you output to a phone with 800x480 pixels or to your 4K TV doesn't matter; in both cases you need nearly the same amount of samples to make the image as free of temporal aliasing as possible and free of aliasing due to HDR. We need to apply filters that simulate the camera artifacts correctly instead of putting all our effort into AA. As those HDR artifacts show, AA alone won't solve the problem.

My pictures were rendered internally at 2560x1440 (FSAA ;) ), then post-processed and downsampled to whatever resolution you want. It's sadly not a high-quality filter; it was already hard to keep that real-time on my GTX 460 :(

What if, for each pixel, you maintain a list of the polygons that are covering it? When a polygon intersects the rectangular boundary of a pixel, it clips away any existing polys in the list that it overlaps. When 'resolving' the list, you can calculate the exact area of that pixel that is covered by each polygon. No discrete sampling patterns...

Do any film-quality renderers do this?
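
Something like this for the coverage computation itself, just to sketch what I mean (a Python sketch, ignoring the part where occluded polys get clipped away):

```python
def clip_to_pixel(poly, x0, y0, x1, y1):
    # Sutherland-Hodgman clip of a polygon (list of (x, y) tuples) against the
    # axis-aligned pixel rectangle [x0, x1] x [y0, y1].
    def clip(poly, inside, t_at):
        out = []
        for i, cur in enumerate(poly):
            prev = poly[i - 1]
            if inside(cur) != inside(prev):
                t = t_at(prev, cur)
                out.append((prev[0] + t * (cur[0] - prev[0]),
                            prev[1] + t * (cur[1] - prev[1])))
            if inside(cur):
                out.append(cur)
        return out

    poly = clip(poly, lambda p: p[0] >= x0, lambda a, b: (x0 - a[0]) / (b[0] - a[0]))
    poly = clip(poly, lambda p: p[0] <= x1, lambda a, b: (x1 - a[0]) / (b[0] - a[0]))
    poly = clip(poly, lambda p: p[1] >= y0, lambda a, b: (y0 - a[1]) / (b[1] - a[1]))
    poly = clip(poly, lambda p: p[1] <= y1, lambda a, b: (y1 - a[1]) / (b[1] - a[1]))
    return poly

def coverage(poly):
    # Shoelace formula; for a 1x1 pixel this is directly the covered fraction.
    area = 0.0
    for i, (x, y) in enumerate(poly):
        nx, ny = poly[(i + 1) % len(poly)]
        area += x * ny - nx * y
    return abs(area) * 0.5

# A triangle covering the lower-left half of the pixel (0,0)-(1,1):
print(coverage(clip_to_pixel([(0, 0), (1, 0), (0, 1)], 0, 0, 1, 1)))  # 0.5
```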

@MJP

I took a look at the shader source code. I might try to implement wide resolve filters in my test program to see how well they handle temporal aliasing.
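
For reference, by a wide resolve I mean weighting samples gathered from neighbouring pixels with a filter kernel instead of box-averaging only the pixel's own samples, roughly like this; the falloff and radius here are guesses, not what MJP's shader actually does:

```python
import numpy as np

def wide_resolve(colors, positions, pixel_center, radius=1.25):
    # colors: (N, 3) samples gathered from this pixel and its neighbours,
    # positions: (N, 2) sample positions in pixel units. The cubic falloff and
    # the 1.25 pixel radius are guesses, not a specific published kernel.
    d = np.linalg.norm(positions - np.asarray(pixel_center), axis=1) / radius
    w = np.clip(1.0 - d, 0.0, None) ** 3
    if w.sum() == 0.0:
        return colors.mean(axis=0)  # fall back to a plain box resolve
    return (colors * w[:, None]).sum(axis=0) / w.sum()
```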

@Krypt0n

What you're proposing isn't realistic. A phone GPU can't render at the same resolution / sample rate as a desktop GPU. I also don't see how this is any different from plain supersampling, which you yourself said isn't usable for performance reasons. I'm not trying to revolutionize graphics in games as we know them; I'm just trying to squeeze a little more visual quality out of the samples we already take. What you want to do goes way beyond that, so it's a pretty different topic.

@Hodgman

I've thought about that too, but it differs a lot from how rendering is done today and it's very difficult to implement in hardware. I actually have a program that does pretty much this, but it's extremely slow. The technique shares many points with deep rendering, which isn't really possible for performance reasons. There are two approaches I've heard a little about: A-buffers and UAV rendering.

A-buffers use a multisampled buffer with multisampling disabled and route fragments into the samples. This caps the number of polygons that can cover a single pixel at 8, and it doesn't handle overdraw well since there's no depth testing. You can do multiple passes to increase that number though, and it's very possible to do this in real time. It also supports transparency perfectly. The main problem is lighting, especially with deferred shading: memory usage and bandwidth are multiplied by 8, which is a huge hit with the large deferred shading buffers, and the shading cost increases a lot too. 8 samples are also often not enough; where 4 triangle corners meet in a mesh, a single layer of overdraw already uses 4 samples.
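
The routing idea boils down to bookkeeping like this (a CPU-side mock-up of the slot layout only, not the actual stencil-routing trick or shader code):

```python
import numpy as np

# Mock-up of the bookkeeping only: each pixel gets 8 fixed fragment slots (like
# a multisampled target with MSAA disabled) and incoming fragments are routed
# to the next free slot. No depth test, so every layer of overdraw eats a slot.
WIDTH, HEIGHT, SLOTS = 4, 4, 8
count = np.zeros((HEIGHT, WIDTH), dtype=np.int32)               # slots used per pixel
frags = np.zeros((HEIGHT, WIDTH, SLOTS, 4), dtype=np.float32)   # rgb + depth per slot

def route_fragment(x, y, rgb, depth):
    s = count[y, x]
    if s >= SLOTS:
        return False          # pixel is full: this fragment is simply lost
    frags[y, x, s] = (*rgb, depth)
    count[y, x] = s + 1
    return True
```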

The other method uses a single huge fragment buffer for all pixels. Whenever a fragment is drawn, it's simply appended to a linked list of fragments for that pixel and pushed into the buffer. That way you have a maximum number of fragments for the whole screen instead of per pixel, but this approach fits the GPU pretty badly. Since we're writing the fragments to a buffer, we have to bypass the ROPs, which reduces performance. Also, traversing the linked list per pixel leads to pretty much random access into this buffer, which is very slow. The fragment buffer also needs DX11 hardware.
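
The data layout is roughly this (again a CPU-side Python mock-up just to show the structure; in the real DX11 version the counter is bumped atomically and the nodes are written through a UAV):

```python
import numpy as np

WIDTH, HEIGHT, MAX_NODES = 4, 4, 64
head = np.full((HEIGHT, WIDTH), -1, dtype=np.int32)   # per-pixel head pointer, -1 = empty
nodes = []                                            # global buffer of (color, depth, next)

def store_fragment(x, y, color, depth):
    if len(nodes) >= MAX_NODES:
        return                                        # global buffer full: fragment dropped
    nodes.append((color, depth, int(head[y, x])))     # link to the previous head
    head[y, x] = len(nodes) - 1                       # this fragment becomes the new head

def gather_pixel(x, y):
    # Walk the pixel's list and return its fragments back-to-front for resolving.
    frags, i = [], int(head[y, x])
    while i != -1:
        color, depth, i = nodes[i]
        frags.append((depth, color))
    return sorted(frags, reverse=True)
```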

Another problem I encountered is that you need to do conservative rasterization. You want the GPU to fill every pixel whose area the triangle intersects, not just the pixels whose centers it covers. There's actually no really good way to do this except using a geometry shader to enlarge the triangle, which again reduces performance since you also have to recalculate texture coordinates, normals and all the other vertex attributes; if you don't, they get stretched out and carry the wrong values for the original triangle area once the triangle is made slightly larger to cover every pixel the original triangle intersects.
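
The enlargement itself is basically offsetting each edge outward and re-intersecting the edges, something like this in 2D screen space; the fixed half-pixel offset is a simplification of the proper semidiagonal-based offset:

```python
import numpy as np

def conservative_dilate(tri, half_pixel=0.5):
    # tri: (3, 2) screen-space vertices in counter-clockwise order. Push each
    # edge outward along its unit normal by half a pixel, then re-intersect
    # adjacent edges to get the enlarged triangle.
    tri = np.asarray(tri, dtype=float)
    planes = []
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        d = b - a
        n = np.array([d[1], -d[0]])                 # outward normal for CCW winding
        n /= np.linalg.norm(n)
        planes.append((n, np.dot(n, a) + half_pixel))
    out = []
    for i in range(3):
        (n0, c0), (n1, c1) = planes[i - 1], planes[i]
        out.append(np.linalg.solve(np.array([n0, n1]), np.array([c0, c1])))
    return np.array(out)

# print(conservative_dilate([(0, 0), (1, 0), (0, 1)]))
```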

Needless to say, I pretty quickly abandoned the idea due to performance reasons.

What if, for each pixel, you maintain a list of the polygons that are covering it? When a polygon intersects the rectangular boundary of a pixel, it clips away any existing polys in the list that it overlaps. When 'resolving' the list, you can calculate the exact area of that pixel that is covered by each polygon. No discrete sampling patterns...

Do any film-quality renderers do this?

It would be too suboptimal to do that per pixel, but there have been efforts to do similar things. One early Quake engine renderer was based on that idea: instead of a z-buffer, all polys were added in a perfect front-to-back order into a "beam tree". Your suggestion is kind of a beam tree per pixel instead of per screen, but without the perfect ordering it would require the tree to be 3D, which is basically a solid-BSP construction.

Movies don't do that; the closest algorithm is probably the REYES renderer, but it bins not per pixel but into tiles of the framebuffer. In the end it still rasterizes those tiles, but since they do the shading per vertex, they could actually take the analytic route instead of the sampling one. Though I'm afraid you might end up with quite some messy accuracy problems, special cases and epsilons. Rasterization of micro-triangles is so lovely because it's fully determined and works flawlessly without giving you brain damage while implementing it :D, and it's very fast, at least compared to an analytic approach.

The PowerVR hardware also does binning like that, keeping track of the polys, and anti-aliasing is also kind of free.

But none of that will solve the HDR anti-aliasing issues.

@theagentd

The difference is that you have to let go of the demand for pixel-perfect drawing. It needs to be a bit flawed to get closer to the real world.

In a perfect world, you wouldn't just apply those effects as a post process like I did. There are several alternatives:

-ray tracing, of course (unrealistic on mobile devices? ImgTec seems to have different plans :D http://arstechnica.com/gadgets/2013/01/shedding-some-realistic-light-on-imaginations-real-time-ray-tracing-card/ )

-stochastic rasterization

-point rendering/splatting

-voxel tracing

Those alternative algorithms often have problems rendering pixel-perfect anyway; rendering them slightly flawed can increase their speed by 10x, and you get other benefits in exchange. It's not unrealistic, we are just way too used to, and specialized in, rendering pixel-perfect triangles. And now we are stuck in a lot of areas: indirect lighting is barely getting better, shadows are problematic, HDR produces aliasing, combining materials is not simple, we are often limited simply by the number of draw calls we issue, efficient culling requires a lot of effort...

