# Constant screen space thickness toon black outline


## Recommended Posts

I have a very simple vertex toon shader that creates a black toon outline on objects by inverting the object's faces and pushing the verts out along their normals, like so:

Vertex shader HLSL code:

// Compute vertex pushed along its normal
float3 ToonVertexPos = VertexPos + ( VertexNormal * Thickness );

// Compute screen position (g_WorldViewProj already contains the world
// transform, so it is applied to the local-space position directly)
float4 LocalPos  = float4( ToonVertexPos, 1.0f );
float4 ScreenPos = mul( g_WorldViewProj, LocalPos );

// Send results to pixel shader
OUT.Pos = ScreenPos;


This works great, but the black outline is in 3D, so as you get closer to or further from objects it gets bigger or smaller (in screen space), as expected. What I'd like to try is a constant screen-space thickness for the black outline, regardless of how close or far away you are from objects.

I've tried a bunch of things, but I'm not sure how to counteract the perspective when computing ToonVertexPos (ScreenPos.w changes if I update the Thickness based on an initially computed ScreenPos.w). Has anyone solved this tricky math problem before?

thanks
-Steve

##### Share on other sites
I guess you want to scale the thickness depending on distance from the camera, but put a max on it; otherwise at a distance it'll just be a black blob.

##### Share on other sites
Thanks for the reply Dave. I understand the high level of what I need to do like you suggest, it's the implementation that I'm hung up on.

Ideally I'd like to do it per vertex, rather than have a per-object scale. So in the vertex shader, I just need to scale Thickness by ScreenPos.w to counteract the perspective divide. But that doesn't quite work, because after adjusting Thickness, this creates a new ToonVertexPos, which after running through the matrices again produces a different ScreenPos.w, so the scaling is no longer accurate and I'm back at square one.
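For what it's worth, the feedback loop may not matter in practice: the outline offset is tiny compared to the distance to the camera, so the w of the extruded vertex is almost identical to the w of the original vertex, and you can just use the original one. A minimal sketch of that idea (g_ViewportHeight and g_ProjScaleY, the [1][1] element of the projection matrix, are hypothetical uniforms, not from the original shader):

```hlsl
// Sketch: scale the extrusion by the clip-space w of the *unextruded*
// vertex, so no second pass through the matrices is needed.
float4 BasePos = mul( g_WorldViewProj, float4( VertexPos, 1.0f ) );

// World-space offset that projects to roughly Thickness pixels:
// a view-space length s projects to NDC size s * g_ProjScaleY / w,
// and one pixel is 2 / g_ViewportHeight in NDC.
float Scale = Thickness * ( 2.0f / g_ViewportHeight )
            * BasePos.w / g_ProjScaleY;

float3 ToonVertexPos = VertexPos + VertexNormal * Scale;
OUT.Pos = mul( g_WorldViewProj, float4( ToonVertexPos, 1.0f ) );
```

This assumes the world matrix has no scaling (otherwise the world-space offset length changes), and the thickness is only approximately constant because the normal isn't generally perpendicular to the view direction.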

It's more of a math problem than a graphics problem, maybe I should post in the math forums too?

thanks
-Steve.

##### Share on other sites
It might be a bit of a hack but you could just pass the camera coordinates in and scale based on the vector?

##### Share on other sites
Perhaps instead of extruding the vertices along the vertex normal, try to calculate the component of the normal that is perpendicular to the view vector at that vertex. Then try to modify the amount that the outline is extended along the perpendicular vector by some factor based on the post projection w value. w is more or less just a measure of the distance from the viewer, so you could also use the view space z value to modify the scaling amount as well.

I haven't actually tried this, but there shouldn't be any technical reason that the displacement can't be modified in this manner. I hope this helps!
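Combining the two ideas above, one way to sketch it is to do the extrusion in clip space: project the normal, offset the clip-space xy along it, and multiply the offset by w so the later perspective divide cancels out, leaving a constant pixel width. This is only a sketch, assuming a g_ViewportSize uniform (width, height in pixels) that isn't in the original shader, and ignoring non-uniform object scale on the normal:

```hlsl
// Sketch: extrude in clip space so the offset survives the perspective
// divide unchanged in pixels.
float4 ClipPos    = mul( g_WorldViewProj, float4( VertexPos, 1.0f ) );
float3 ClipNormal = mul( (float3x3)g_WorldViewProj, VertexNormal );

// Direction of the normal on screen, scaled to Thickness pixels.
// NDC spans [-1,1], i.e. 2 units across the viewport, hence the 2.0f.
float2 Offset = normalize( ClipNormal.xy )
              * Thickness * 2.0f / g_ViewportSize;

ClipPos.xy += Offset * ClipPos.w;   // * w cancels the divide by w
OUT.Pos = ClipPos;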
