It's the Little Things That Matter -- Tips and Tricks


I don't know about you guys, but I desperately need a break from the "How do I do XYZ" and "XYZ is broken plz help" threads. Not that they're a bad thing, but a change of pace is sorely needed. (Also, everyone posting threads in this forum uses the default post icon or the question mark. How creepy is that?) Anyway, the point of this thread is to share small tips and tricks that we can all add to our graphics programming toolboxes. We're not talking big things here; shadowing techniques and lighting models are big. This thread is for small, quick, compact things. I'll kick things off with a couple, and then hopefully lots of people will share more.

Linearized depth -- I'm sure most of you have run into that nasty problem where the values stored in the Z buffer are actually non-linear (proportional to 1/z), so you get Z-fighting artifacts if your far/near ratio is too big. Turns out there's a really nifty, easy hack that will make your depth values linear again. The cost? One VS scalar multiply and two extra CPU divides when you set up your projection matrix.

Two directional lights -- The linked project is absolutely beautiful thanks to several different tricks, but the most stunningly simple one is the application of not one but two directional lights for the outdoor lighting. Most of us know to toss up a directional light to represent our sun, but adding a second one with a different color and angle adds a staggering amount of quality to the scene when done right.

Premultiplied alpha -- Such a stupidly simple modification to standard alpha blending, yet you gain so much. I can't do the article justice; just read it and let Tom enlighten you.
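Since people will ask: here are minimal sketches of the first two tips as I understand them -- my own reconstructions, not code from the linked articles. For the depth hack, assuming a D3D-style projection matrix where only the _33 and _43 entries feed clip-space z:

// CPU side, the two extra divides (do this once when building the matrix):
//   proj._33 /= farPlane;
//   proj._43 /= farPlane;
float4x4 worldViewProj;   // projection with _33 and _43 pre-divided by far

float4 LinearDepthVS(float4 posOS : POSITION) : POSITION
{
    float4 pos = mul(posOS, worldViewProj);
    // The one extra VS multiply: after the hardware divide by w this yields
    // depth = (a * zEye + b) / far, which is linear (affine) in eye-space z.
    pos.z *= pos.w;
    return pos;
}

And the two-light idea is literally one extra dot product per vertex or pixel (sunDir, fillDir, and the colors are made-up parameter names):

float3 sunDir, fillDir;       // normalized directions the lights shine along
float3 sunColor, fillColor;   // e.g. a warm sun plus a cool sky-colored fill

float3 TwoDirectionalLights(float3 n)
{
    // One "sun" plus a second directional light at a different color/angle.
    return saturate(dot(n, -sunDir))  * sunColor
         + saturate(dot(n, -fillDir)) * fillColor;
}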

Guest Anonymous Poster
A word of caution: the "linearized depth" trick will only give you a linearized depth buffer if you only render screen-aligned polygons. The problem is that the division is done on interpolated values. lerp( n0 * d0, n1 * d1, f ) / lerp ( d0, d1, f ) != lerp( n0, n1, f ). This will introduce a new type of depth buffer artifact which may be more objectionable than the standard ones. It will be similar to the sort of artifacts you get when using linear interpolation while rendering dual paraboloid shadow maps. It could still be a precision victory, but the article should've mentioned this pitfall.

A similar problem should also show up with perspective-correct texture mapping (or lighting, if you go to that extreme). My guess is that the precision problems are not big enough to cause noticeable artifacts. And if you are doing perspective-correct texture mapping, you are already linearly interpolating 1/w, so the correction should cost one extra vertex multiply, or 3-4 scalar ops on a G80.

My current favorite "tip" is that time spent on proper rendering debug aids is invaluable. I recently added a little debug overlay that displays the contents of render-to-texture buffers, and it's been really helpful while developing some of the neat effects I've been messing with. Without it you end up doing quick debug hacks, rendering intermediate outputs straight to the framebuffer, which always gets messy and error-prone.
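On the shader side the overlay really is trivial -- a sketch, where rtSampler is a made-up name for whichever render target you're inspecting, drawn onto a small screen-space quad:

sampler rtSampler;   // the render-to-texture buffer being inspected

float4 DebugOverlayPS(float2 uv : TEXCOORD0) : COLOR0
{
    // Scale/bias here if the target holds signed, depth, or HDR data.
    return tex2D(rtSampler, uv);
}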

Tip #2: Make it possible to reload a shader while the app is running. It saves you from wasting lots of time waiting for the app to start up after every tweak, and you get better-quality results because iteration is so much cheaper.


Quote:
Original post by Anonymous Poster
A word of caution: the "linearized depth" trick will only give you a linearized depth buffer if you only render screen-aligned polygons. The problem is that the division is done on interpolated values. lerp( n0 * d0, n1 * d1, f ) / lerp ( d0, d1, f ) != lerp( n0, n1, f ). This will introduce a new type of depth buffer artifact which may be more objectionable than the standard ones. It will be similar to the sort of artifacts you get when using linear interpolation while rendering dual paraboloid shadow maps. It could still be a precision victory, but the article should've mentioned this pitfall.


A little aside to Promit and reltham:



Although, like you guys mentioned in IRC last night, a little more detail about this artifact would be nice. Where/how does it come up? How does it exhibit itself, etc?

Quote:
Original post by Cypher19
Although, like you guys mentioned in IRC last night, a little more detail about this artifact would be nice. Where/how does it come up? How does it exhibit itself, etc?


Looking at the dual paraboloid shadow map problem discussed in this paper: http://portal.acm.org/citation.cfm?id=1183316.1183331&coll=GUIDE&dl=GUIDE&CFID=15151515&CFTOKEN=6184618
I think that as long as everything uses the linearized Z approach, it will all distort in the same way and work as desired for Z-buffer usage. Since we aren't visualizing the Z, just using it for relative comparisons, I think it won't really matter.

Of course, I could be wrong; we'll need to do some detailed testing.

Wow, someone has actually seen/read our paper! =) Anyways, I'm curious to hear about any related results. The depth buffer is such a tricky beast. Although I'd have to go look at the linearized Z stuff in more detail to see how related it is to the DPSM stuff.

Premultiplied Alpha FTW! I typically use it for my particle systems, so that the same texture can make things darker (high alpha, low brightness), brighter (low alpha, high brightness), or blended (medium alpha, medium brightness).

Linear depth buffers are actually a bad idea. With the regular Z buffer, the proportions of a pixel's footprint stay consistent across the depth range of the scene. With a linear Z buffer (a.k.a. a W buffer), a pixel is very deep and skinny up front, and very wide and stubby out towards the far end.
Another way to think about it: if a pixel covers 1x1 centimeter at 2 meters from the viewer and spans a 0.1 cm depth range in the Z buffer, it should cover 2x2 centimeters and span a 0.2 cm depth range at 4 meters. With a linear buffer, the same pixel will likely cover 1x1x1 cm at 2 meters, and 2x2x1 cm at 4 meters.

Quote:
Original post by Promit
Premultiplied alpha -- Such a stupidly simple modification to standard alpha blending, yet you gain so much. I can't do the article justice; just read it and let Tom enlighten you.


Just something to note for people who want to implement this: make sure you also multiply the vertex color by the vertex alpha, otherwise you'll get all sorts of wacky effects when you try to fade out your particles. :)
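In shader terms it's one extra multiply -- a sketch (diffuseSampler is a made-up name; the texture itself is assumed to be premultiplied offline):

sampler diffuseSampler;   // texture already premultiplied at asset-build time

float4 ParticlePS(float4 vertColor : COLOR0, float2 uv : TEXCOORD0) : COLOR0
{
    float4 tex = tex2D(diffuseSampler, uv);
    float4 result;
    result.a   = tex.a * vertColor.a;
    result.rgb = tex.rgb * vertColor.rgb * vertColor.a;   // the easy-to-forget multiply
    return result;
}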

Quote:
Original post by hplus0603
Premultiplied Alpha FTW! I typically use it for my particle systems, so that the same texture can make things darker (high alpha, low brightness), brighter (low alpha, high brightness), or blended (medium alpha, medium brightness).

I really should look into pre-multiplied alpha more - other than this one trick for particle systems, I've always seen it as a more unintuitive and awkward way of handling blending.

The other problem is, how do you get an artist to create a suitable texture for such a particle effect? Drawing it directly is hideously non-intuitive and difficult. Do you do some kind of pre-processing to combine multiple source textures into one runtime texture?

IMHO the particle blend hack isn't that useful anyway. Take the original example of smoke (lerped) containing fiery sparks (additive): to get a smooth transition you need lots of animation frames, taking up lots of memory. Alternatively you could just use two particle systems, one lerped and one additive, do the fade/transition using vertex colours, and get a much smoother transition with only two textures rather than 60+.

Quote:
Original post by OrangyTang
The other problem is, how do you get an artist to create a suitable texture for such a particle effect? Drawing it directly is hideously non-intuitive and difficult. Do you do some kind of pre-processing to combine multiple source textures into one runtime texture?


Yeah, it's non-intuitive, but the premultiplication should be applied either during asset processing or at load time. It'd be a nightmare to try and do it directly in Photoshop.

Getting the more 'interesting' effects out of it (like a neon sign with a light halo, for example) would probably be a bit of a pain if you don't run your textures through any preprocessing steps, although if you work off some sort of material system I suppose it would be possible to smoosh several images together at load time.

As for making a particle go from standard alpha to 'additive' alpha, you only need to change the vertex colors to get the effect (see the sketch after the list):

Full alpha, any color = standard alpha
Zero alpha, white color = additive
Zero alpha, black color = transparent
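To see why those three lines fall out of it: premultiplied blending uses source factor ONE and destination factor INVSRCALPHA, so the hardware computes (a hypothetical helper, just to spell out the math):

// result = src.rgb + dest.rgb * (1 - src.a)
float3 PremultipliedBlend(float4 src, float3 dest)
{
    return src.rgb + dest * (1.0 - src.a);
}

// src = (color * a, a) -> color*a + dest*(1-a): the classic lerp
// src = (white, 0)     -> white + dest: pure additive
// src = (black, 0)     -> dest unchanged: fully transparent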

Quote:
Original post by PlayfulPuppy
Yeah, it's non-intuitive, but the premultiplication should be applied either during asset processing or at load time. It'd be a nightmare to try and do it directly in Photoshop.

Ok, a compile-time preprocess sounds much more sensible.

Quote:
As for making a particle go from standard alpha to 'additive' alpha, you only need to change the vertex colors to get the effect:

Full alpha, any color = standard alpha
Zero alpha, white color = additive
Zero alpha, black color = transparent

But (unless I'm missing something) this doesn't work for the smoke/sparks example, where both additive and lerped elements are stored in the same texture, and we want to fade one out while keeping the other.

Quote:
Original post by OrangyTang
But (unless I'm missing something) this doesn't work for the smoke/sparks example, where both additive and lerped elements are stored in the same texture, and we want to fade one out while keeping the other.


True, it wouldn't work if you wanted to fade between the two elements independently.

Although nothing's stopping you from doing 2 particle systems with both using premultiplied alpha, of course. ;)

Don't normalize a four-component vector in a shader and expect it to actually work in lighting computations! Instead, normalize just the .xyz portion of it. I can't even tell you how many times I have done this...
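A quick HLSL illustration (lightVec4 is a made-up name; imagine .w carries something unrelated, like 1/range):

float4 lightVec4;   // direction in .xyz, unrelated data in .w

float4 LightingPS(float3 normal : TEXCOORD0) : COLOR0
{
    // Wrong: the .w component pollutes the vector length, skewing N.L.
    float3 badDir  = normalize(lightVec4).xyz;
    // Right: normalize only the three direction components.
    float3 goodDir = normalize(lightVec4.xyz);
    return float4(saturate(dot(normal, goodDir)).xxx, 1.0);
}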

Quote:
Original post by PlayfulPuppy
True, it wouldn't work if you wanted to fade between the two elements independently.

Although nothing's stopping you from doing 2 particle systems with both using premultiplied alpha, of course. ;)

But I can do that with regular blending, so what's the point of using premultiplied alpha at all?

Quote:

Yeah, it's non-intuitive, but the premultiplication should be applied either during asset processing or at load time. It'd be a nightmare to try and do it directly in Photoshop.

A nightmare? Premultiplied alpha just means that you take each texel and multiply it by its alpha, right? So where's the problem? You can do that with three clicks in Photoshop.
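For reference, the whole preprocessing step per texel is just this (a trivial sketch):

float4 Premultiply(float4 texel)
{
    texel.rgb *= texel.a;   // bake the alpha into the color channels
    return texel;
}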

regards,
m4gnus

I don't see much point to premultiplied alpha either... in most cases normal alpha blending is the correct method. This author is some kind of nutty evangelist... You use alpha masks to control transparency normally, so why do we need to mess around with blending modes that will yield unexpected results?

The article says premultiplied alpha is cool because faces don't need to be sorted. That's a great advantage. Seamless transitions from one blending type to another via vertex color are another great advantage.
And maybe premultiplied alpha is even cheaper.

Premultiplied alpha also filters better. Normally, with linear filtering, color and alpha are filtered separately, so the "garbage" color of a nearly transparent texel can bleed into its more opaque neighbors even though that texel shouldn't contribute much. With premultiplied alpha this can't happen, since a zero-alpha texel's color is already zero. For example, filtering halfway between an opaque red texel (1,0,0, a=1) and a transparent white one (1,1,1, a=0): straight alpha gives pink (1, 0.5, 0.5) at alpha 0.5, while premultiplied gives (0.5, 0, 0), which still composites as pure red.

Quote:
Original post by Matt Aufderheide
I don't see much point to premultiplied alpha either... in most cases normal alpha blending is the correct method. This author is some kind of nutty evangelist... You use alpha masks to control transparency normally, so why do we need to mess around with blending modes that will yield unexpected results?


The four arguments (texture compression, composition, multipass lighting, additive and blending in a single operation) are well developed - and for three of them he shows the benefits of premultiplied alpha over normal alpha. Considering that, I won't call him a "nutty evangelist" - hey, that's TomF! [grin]

