
I think I might have done something clever...

I asked a question on DIRECTXDEV a couple of days ago about texture filtering with High Dynamic Range textures (that is, D3DFMT_A32B32G32R32F and D3DFMT_R32F in my case). Wasn't too surprised at the (very quick) reply: Current hardware can't do linear filtering of HDR textures (except half-precision formats on the later GeForce 6/7 series).

To maximise the bloom effect in my code, yet attempt to minimise the processing requirements, it's performed on an 8x8 downsampled image. The net result is that the 9 samples used by the bloom filter can cover 8x as much texture real-estate. The downside is that it can look a bit shit [grin]

[Image: the downsampling stages and final composite]

In the above diagram you can see (as the main image to the bottom right) the final composite image - with the red face of the cube producing a somewhat OTT blur [smile]. You can also see straight off that it's showing some nasty point-filtered artifacts [headshake].

So, I sat down during eXistenZ last night to implement my own linear filter. I thought I'd share it with you lot here on the chance it might be useful to someone [wink]

For the downsampling stages in the above diagram, the kernel is computed by adding a set of offsets to the incoming texture coordinate and sampling the incoming texture multiple times (in the case of a 3x3 downsample, 9 texture reads). These offsets are the reciprocal of the texture dimensions. This way you can get the addresses for the neighbouring pixels.
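As a CPU-side sketch of that idea (Python, with a made-up 256x256 source size; the real thing lives in the pixel shader), the nine offsets for a 3x3 downsample kernel fall out of the reciprocal dimensions like so:

```python
# CPU-side sketch (not the actual shader code) of how the 3x3 downsample
# kernel's texture-coordinate offsets are derived: each offset is a
# multiple of the reciprocal of the source texture's dimensions, so
# adding it to the incoming UV addresses a neighbouring texel.
def downsample_offsets(tex_width, tex_height):
    rcp_w = 1.0 / tex_width   # one-texel step horizontally
    rcp_h = 1.0 / tex_height  # one-texel step vertically
    return [(x * rcp_w, y * rcp_h)
            for y in (-1, 0, 1)
            for x in (-1, 0, 1)]

offsets = downsample_offsets(256, 256)  # 9 offsets -> 9 texture reads
```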

My filtering takes the same approach but, instead of just averaging the sampled pixels, weights them by their proximity to the desired sampling point. The idea is, as the name suggests, to provide a linear interpolation between the sampled pixels.

I adopted a 5-point sampling pattern:

The middle of that pattern is the pixel addressed by the incoming texture coordinate to the pixel shader - so there's no magic in getting that coordinate. The other four are read by offsetting it by +/- the reciprocal height or width (as appropriate).
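Those five fetch coordinates can be sketched like so (Python, CPU-side, hypothetical names; this assumes D3D-style coordinates where v grows downwards, so "above" is -1/height):

```python
# Sketch of the 5-point cross pattern: the centre texel plus its four
# axis-aligned neighbours, each one reciprocal-dimension step away.
# Assumes D3D-style coordinates where v increases downwards.
def cross_pattern(tc_u, tc_v, tex_width, tex_height):
    rcp_w = 1.0 / tex_width
    rcp_h = 1.0 / tex_height
    return {
        "centre": (tc_u,         tc_v),
        "left":   (tc_u - rcp_w, tc_v),
        "right":  (tc_u + rcp_w, tc_v),
        "above":  (tc_u,         tc_v - rcp_h),
        "below":  (tc_u,         tc_v + rcp_h),
    }

samples = cross_pattern(0.5, 0.5, 64, 64)
```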

The trick, having read the additional two pairs (horizontal and vertical), is in how to weight them.

For the incoming texture coordinate, considering that the texture being read will only be point filtered, we might end up with a situation like so:

(The whole box is a single pixel; the coordinate marked is relative to the pixel, not the whole texture!)

For the initial sample it doesn't matter what coordinate "inside" the pixel is received, as it'll still map to the same element. However, if we can somehow compute this "inside" ratio we can weight the neighbouring pixels appropriately. From the above diagram we'd want to bias the ABOVE sample by 0.2 (1.0 - 0.8) and the BELOW sample by 0.8. The LEFT sample should be weighted by 0.2 and the RIGHT sample by 0.8 (1.0 - 0.2)...

So I cooked up this HLSL helper function:
float GetBloomWeight( float tc, float rcpDimension )
{
    return ( tc - ( floor( tc / rcpDimension ) * rcpDimension ) ) / rcpDimension;
}
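A quick CPU-side check of that helper (a Python translation; the 64-texel axis is a made-up example): with rcpDimension = 1/dimension, the expression reduces to the fractional texel position, which is exactly the "inside" ratio described above.

```python
import math

# Python translation of the GetBloomWeight HLSL helper, for sanity
# checking on the CPU. tc is the texture coordinate in [0, 1);
# rcp_dimension is 1.0 / texture_dimension.
def get_bloom_weight(tc, rcp_dimension):
    return (tc - math.floor(tc / rcp_dimension) * rcp_dimension) / rcp_dimension

# A coordinate 0.8 of the way into texel 10 of a 64-texel axis
# should yield a weight of ~0.8:
w = get_bloom_weight((10 + 0.8) / 64.0, 1.0 / 64.0)
```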

The final result isn't perfect by any means - but have a look at this:

[Image: the composite with the manual linear filter applied]

The blocky effect is still there, but it's not quite as pronounced as the original, trivial, implementation was [smile]

On the downside, it has added an extra 30 instructions (4 texture, 26 arithmetic) to my pixel shader [oh]
Recommended Comments

Looks great!

Isn't that a wacked out movie? What's with the meat guns. [lol]

I wouldn't mind taking a look at that bloom filter sometime. [smile]

Hum, maybe I didn't understand it well... but what prevents you from implementing standard bilinear filtering yourself in your pixel shader? There's an instruction to get the fractional part of a number (FRC in GL's ARB_fragment_program), so I don't see why it'd take 26 arithmetic instructions to do filtering.

More than filtering, what I think is lacking most for a proper HDR implementation is blending. No current hardware (except the NV 6800+) supports it; it's all good when you're doing a demo that only shows HDR, but in an actual game you'll probably want to do multipass, or combine it with other effects like particles. Without blending... you're basically forced to do it yourself by ping-ponging between render buffers. Just not usable.
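For what it's worth, the equivalence this comment hints at is easy to check on the CPU (Python sketch; frac() in HLSL corresponds to FRC in ARB_fragment_program):

```python
import math

# The interpolation weight is just the fractional part of the
# texel-space coordinate: frac(tc * dimension). This matches the
# floor-based GetBloomWeight form from the post.
def frac_weight(tc, dimension):
    texel = tc * dimension
    return texel - math.floor(texel)

w = frac_weight((10 + 0.8) / 64.0, 64.0)  # ~0.8, same as GetBloomWeight
```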

Smart [grin].


Isn't that a wacked out movie?

Yeah, but it is Cronenberg - so what do you expect [wink]

Hum, maybe I didn't understand it well... but what prevents you from implementing standard bilinear filtering yourself in your pixel shader?

No good reason really [smile]

Well, the reason was that I sat down with a blank piece of paper and a rough idea of what I wanted to do... I didn't use any research papers and/or any tutorials - hence the implementation I offered is, well, my own [grin]. I don't doubt there's bigger and better available!


