Constant screen-space thickness toon black outline

I have a very simple vertex toon shader that creates a black toon outline on objects by inverting the object and scaling the verts out along their normals, like so:

Vertex shader HLSL code:


// Compute vertex pushed along its normal
float3 ToonVertexPos = VertexPos + ( VertexNormal * Thickness );

// Compute screen position
float4 LocalPos  = float4( ToonVertexPos, 1.0f );
float4 ScreenPos = mul( g_WorldViewProj, LocalPos );

// Send results to pixel shader
OUT.Pos = ScreenPos;


This works great, but the black outline is in 3D, so as you get closer to or further from objects it gets bigger or smaller in screen space, as expected. What I'd like to try instead is a black outline with a constant screen-space thickness, regardless of how close or far away you are from objects.

I've tried a bunch of things, but I'm not sure how to counteract the perspective divide when computing ToonVertexPos (ScreenPos.w changes if I update Thickness based on an initially computed ScreenPos.w). Has anyone solved this tricky math problem before?

thanks
-Steve
Steve Broumley
I guess you want to scale the thickness depending on distance from the camera, but put a max on it; otherwise at a distance it'll just be a black blob.
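Something like this, maybe (untested sketch; g_WorldView and MaxWorldThickness are just placeholder names for constants you'd supply):

// Scale the push distance with view-space depth so the outline stays
// roughly constant in screen space, but clamp it so distant objects
// don't turn into a black blob.
float ViewZ    = mul( g_WorldView, float4( VertexPos, 1.0f ) ).z;
float PushDist = min( Thickness * ViewZ, MaxWorldThickness );
float3 ToonVertexPos = VertexPos + ( VertexNormal * PushDist );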
Roger that, lets run like hell!
Thanks for the reply, Dave. I understand the high level of what I need to do, like you suggest; it's the implementation that I'm hung up on.

Ideally I'd like to do it per vertex, rather than have a per-object scale. So in the vertex shader, I just need to scale Thickness by ScreenPos.w to counteract the perspective divide. But that doesn't quite work: after adjusting Thickness, I get a new ToonVertexPos, which, after running through the matrices again, produces a different ScreenPos.w, so the scaling is inaccurate again and I'm back at square one.
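Here's roughly what I tried (trimmed down and untested):

// First projection: get the clip-space w of the original vertex.
float4 BaseClip = mul( g_WorldViewProj, float4( VertexPos, 1.0f ) );

// Scale the push distance by that w to cancel the upcoming perspective divide.
float3 ToonVertexPos = VertexPos + ( VertexNormal * Thickness * BaseClip.w );

// Second projection: the pushed vertex has a slightly different w than
// BaseClip.w, so the outline thickness is only approximately constant.
OUT.Pos = mul( g_WorldViewProj, float4( ToonVertexPos, 1.0f ) );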

It's more of a math problem than a graphics problem; maybe I should post in the math forums too?

thanks
-Steve.
Steve Broumley
It might be a bit of a hack, but you could just pass the camera coordinates in and scale based on the view vector?
Perhaps instead of extruding the vertices along the vertex normal, try calculating the component of the normal that is perpendicular to the view vector at that vertex. Then modify the amount that the outline is extruded along that perpendicular vector by some factor based on the post-projection w value. w is more or less just a measure of the distance from the viewer, so you could also use the view-space z value to modify the scaling amount as well.
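Roughly what I have in mind (untested sketch; g_CameraPosObject and g_WorldView are placeholder names for constants you'd have to supply):

// View vector from the camera to this vertex, in object space.
float3 ViewDir = normalize( VertexPos - g_CameraPosObject );

// Component of the vertex normal perpendicular to the view vector.
float3 PerpNormal = normalize( VertexNormal - ViewDir * dot( VertexNormal, ViewDir ) );

// Scale the extrusion by view-space depth (left-handed view space, with z
// positive in front of the camera) so the outline width stays roughly
// constant in screen space.
float ViewZ    = mul( g_WorldView, float4( VertexPos, 1.0f ) ).z;
float PushDist = Thickness * ViewZ;

float3 ToonVertexPos = VertexPos + ( PerpNormal * PushDist );
OUT.Pos = mul( g_WorldViewProj, float4( ToonVertexPos, 1.0f ) );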

I haven't actually tried this, but there shouldn't be any technical reason that the displacement can't be modified in this manner. I hope this helps!
