
Silhouette Rendering - looking for feedback (Now with integral based anti-aliasing!)


dopplex    164
EDIT: Apologies for the lack of source tags to date. I've just fixed it. I had been looking for a way to post source in the forum interface, and didn't run across the FAQ explaining the tags until just now!

My silhouette renderer implementation: Silhouette

Hi all,

I've been working on a cartoon renderer which needs very robust handling of object silhouettes. As an exercise, I tried to avoid using any specific implementation as a guide while putting it together, though I grabbed an idea here and there. Now that it's done (and basically functional) I'd like to get some feedback on how I *should* have done it ;-) - both in terms of output quality and efficiency. (I've only really been focusing on learning real-time graphics for about a month, so I'm sure I'm missing a number of things here.) I'll put my actual shader code at the end, but I'll start with the output, and then the basic approach.

First, screenshots:

Screenshot 1
Screenshot 2
Screenshot 3
Screenshot 4

(One of the major output quality issues I'm having should be visible in the first shot - it looks like crap from far away.)

For reference, I'm working in XNA/HLSL (not that it makes all that much difference here).

The basic approach I took is very similar to what I understand to be the approach for shadow volumes: preprocess the geometry and create an additional mesh full of degenerate quads (one quad per edge). I then process that additional mesh in the vertex shader, decide whether it should be drawn as an edge, and then - if so - extrude the quad out along the vertex normal of its "parent" vertex.

To do this, I'm using preprocessed geometry that has two vertex buffers for each portion of the model - one being the original vertex buffer for the model setup, and a second one specifically for the silhouette. Here's the structure of the added vertex buffer, with some explanation, because I think I'm doing some weird things here.

All of the quads in the added geometry have four points. Two of these are vertices from the original geometry. All I want to do with those is apply whatever transform is needed and push them out again - I don't need much data for them. The other two are added vertices for the quad. I've used some of my vertex channels to pass in additional data for calculating what to do with these vertices - mostly the surface normals of the two triangles neighboring the edge, information about the non-degenerate vertices, and a set of flags letting the vertex shader know that the vertex needs to be handled as an extruded vertex rather than as regular geometry.
struct VS_INPUT {
    float4 Position        : POSITION0;
    float4 Normal          : NORMAL0;
    float4 Tri1Normal      : NORMAL1;
    float4 Tri2Normal      : NORMAL2;
    float4 V1Normal        : NORMAL3;
    float4 V2Normal        : NORMAL4;
    float2 Tex             : TEXCOORD3;
    float4 V1Position      : TEXCOORD4;
    float4 V2Position      : TEXCOORD5;
    float4 Flags           : TEXCOORD6;
};

The actual silhouette check is easy enough: compute dot(Tri1Normal, ViewDir) * dot(Tri2Normal, ViewDir), and if it comes out negative, it's an edge. There's also an additional test where edges of sufficient sharpness are rendered. I'll likely also add an override into the vertex data to allow certain edges to be defined as always or never showing - though that's not in currently.

The pixel shader is *intended* to render the edges procedurally, and to multi-sample in order to anti-alias the edge. This is only sort-of working.

Here's what I like about my implementation:

- Edge thickness is automatically perspective adjusted. This looks quite good.
- I have control over edge thickness - and it would be easy to hand edge thickness control over to the modeler, who could embed some of that data in the vertex.
- It *should* be able to automatically anti-alias the edges it finds (even though I feel like it could be doing a better job than it is currently).
- The vast majority of the work is handled by the vertex shader - which leaves the CPU free for things like physics.
- The preprocessing I'm doing ought to dovetail nicely with a shadow volume implementation.
- I like the overall look/feel I'm getting quite a bit.

Here's what I don't like:

- Because of the perspective correction, thin edges render very inconsistently. I've been trying to adjust things so that the full line renders with less blend, but haven't been able to make that work yet.
- I feel like I'm missing a lot of optimization opportunities, due to lack of familiarity with the hardware.
- Duplicate vertex buffers seem like they'd hurt performance (but I can see other performance issues arising if I try to combine them).
- It definitely starts choking my poor integrated graphics chip with software vertex shading, even at a point without much geometry.
- Having the normals of the adjacent triangles baked into my vertex data is going to cause me problems down the road - especially with animation. The problem is that I just can't see how to fix this without somehow involving the CPU.

Again, I'd love any comments - including "You're doing this entirely wrong, this other approach meets all of your needs better" (provided that you actually point me towards that approach!).
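To make the check concrete, here's a minimal sketch of the facing test (the names match the vertex shader below; ViewVec points from the vertex toward the eye):

// A quad lies on the silhouette when its two neighboring triangles
// face opposite ways relative to the viewer.
float Tri1DotV = dot(Tri1Normal, ViewVec);
float Tri2DotV = dot(Tri2Normal, ViewVec);
bool isSilhouette = (Tri1DotV * Tri2DotV < 0.0f); // one front-facing, one back-facing

Shader code:

Parameters: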


float gLineThickness = 6.0f;
float4x4 World; 
float4x4 View; 
float4x4 Project; 
float2 ScreenSize = float2(800,600);
float AddScale = 1.0f;
float4 InkColor = float4(0.0f, 0.0f, 0.0f, 1.0f);
float4 EyePosition;
float4 DiffuseColor;
float3 OutlineParam = float3(.5f, 1.8f, 1.3f);

#define numIter  9.0f

 
struct VS_INPUT {
    float4 Position        : POSITION0;
    float4 Normal          : NORMAL0;
    float4 Tri1Normal      : NORMAL1;
    float4 Tri2Normal      : NORMAL2;
    float4 V1Normal        : NORMAL3;
    float4 V2Normal        : NORMAL4;
    float2 Tex             : TEXCOORD3;
    float4 V1Position      : TEXCOORD4;
    float4 V2Position      : TEXCOORD5;
    float4 Flags           : TEXCOORD6;
};
 
struct VS_OUTPUT {
    float4 Position        : POSITION0;
    float2 Tex             : TEXCOORD0;
    float4 Normal          : TEXCOORD1;
    float4 ViewDirection   : TEXCOORD2;
    float2 ScaleFactor     : TEXCOORD3;
    float4 Debug           : TEXCOORD4;
};

Vertex Shader:
VS_OUTPUT Transform2(VS_INPUT Input) {
    VS_OUTPUT Output;
    float4x4 WorldViewProject = mul(mul(World, View), Project);
    float4 ObjectPosition = mul(Input.Position, World);
    float4 ViewVec = EyePosition - ObjectPosition;
    float4 ViewVec2 = mul(mul(EyePosition - ObjectPosition, View), Project);

    float4 OutputPosition = mul(Input.Position, WorldViewProject);
    float2 ScaleFactor = float2(0.0f, 0.0f);
    float4 Debug = (float4)0.0f;

    Output.Normal = mul(Input.Normal, World);
    Output.ViewDirection = EyePosition - ObjectPosition;
    Output.Tex = Input.Tex;

    if (Input.Flags.w >= 0)
    {
        float4 Tri1Normal = normalize(mul(Input.Tri1Normal, World));
        float4 Tri2Normal = normalize(mul(Input.Tri2Normal, World));
        float Tri1DotV = dot(Tri1Normal, ViewVec);
        float Tri2DotV = dot(Tri2Normal, ViewVec);

        // Negative product means one neighboring face is front-facing
        // and the other back-facing: a silhouette edge.
        float Det = Tri1DotV * Tri2DotV;
        OutputPosition = mul(Input.V1Position, WorldViewProject);

        // Crease test: also draw edges where the faces meet at a sharp angle.
        float Det2 = dot(Tri1Normal, Tri2Normal);
        if (Det2 < 0.9f) Det = -1.0f;

        if (Det < 0)
        {
            float4 ExtrudeVec = float4(0.0f, 0.0f, 0.0f, 0.0f);
            float4 OriginPoint;
            float4 OriginNormal;
            if (Input.Flags.x == 0)
            {
                OriginNormal = normalize(Input.V1Normal);
                OriginPoint = Input.V1Position;
            }
            if (Input.Flags.x > 0)
            {
                OriginNormal = normalize(Input.V2Normal);
                OriginPoint = Input.V2Position;
            }

            // Measure how long the extrusion is in screen space, so the
            // pixel shader can recover pixel coverage from it later.
            float4 extrusionPointOS = OriginPoint + normalize(OriginNormal) * gLineThickness;
            float4 extrusionPointCS = mul(extrusionPointOS, WorldViewProject);
            OutputPosition = mul(OriginPoint, WorldViewProject);

            float2 extrusionVecSS = extrusionPointCS.xy / extrusionPointCS.w - OutputPosition.xy / OutputPosition.w;
            extrusionVecSS = (extrusionVecSS * 0.5 + 0.5) * ScreenSize;
            // ScaleFactor.x = sqrt(extrusionVecSS.x * extrusionVecSS.x + extrusionVecSS.y * extrusionVecSS.y);
            ScaleFactor.x = length(extrusionVecSS.xy);

            OutputPosition = OriginPoint + normalize(OriginNormal) * gLineThickness * AddScale;
            OutputPosition = mul(OutputPosition, WorldViewProject);
        }
    }

    Output.Debug = Debug;
    Output.ScaleFactor = ScaleFactor;
    Output.Position = OutputPosition;

    return Output;
}

Pixel Shader:
struct PS_INPUT {
    float4 Color           : COLOR0;
    float4 Position        : POSITION0;
    float4 Tex             : TEXCOORD0;
    float4 Normal          : TEXCOORD1;
    float4 ViewDirection   : TEXCOORD2;
    float2 ScaleFactor     : TEXCOORD3;
    float4 Debug           : TEXCOORD4;
};
 
float4 BasicShader(PS_INPUT Input) : COLOR0 {
    // Scale the tex coords as if the poly were smaller. Essentially,
    // unstretch the stretching that was done when we increased the length
    // of the extrusion vector in the vertex shader. In theory, pUV.y should
    // now = 1 at the point where the original poly would have ended.
    // (Something seems wrong with this at the moment.)
    float LinePos = Input.Tex.y * AddScale;

    // Scale an interpolated value back to the fin value. Since ScaleFactor
    // will have been interpolated between 0 and 1, we need to uninterpolate
    // to keep it constant across the fin.
    float rescale = 1.0f / Input.Tex.y;
    float ScaleFactor = 1.0f / (rescale * Input.ScaleFactor.x);

    // Tiny shift to improve inside-edge antialiasing problems occurring
    // from rasterization.
    LinePos += ScaleFactor;
    float HalfScale = ScaleFactor * 0.5f;
    float usedScale;

    // Multi-sample the procedural line function across the pixel footprint.
    float mappedUV;
    float Alpha = 0.0f;
    float nMin1 = numIter - 1.0f;
    for (int c = 0; c < numIter; c++)
    {
        usedScale = (c / nMin1) * ScaleFactor - HalfScale;
        mappedUV = abs(OutlineParam.x - LinePos + usedScale) * OutlineParam.y;
        Alpha += clamp(OutlineParam.z * (1.0f - pow(mappedUV, 2.0f)), 0.0f, 1.0f);
    }

    float4 Color = InkColor;
    Color.w = Alpha / numIter;
    return Color;
}
 
technique BasicShader {
    pass P0 {
        AlphaBlendEnable = True;
        SrcBlend = srcalpha;
        DestBlend = invsrcalpha;
        AlphaTestEnable = true;
        AlphaRef = 0x00000010;
        AlphaFunc = Greater;

        CullMode = None;
        VertexShader = compile vs_2_0 Transform2();
        PixelShader = compile ps_2_0 BasicShader();
    }
}





dopplex    164
Just to start off - ways I already see to optimize this:

My vertex structure is really redundant. I don't need to be checking a flag to figure out which of the edge vertices is the right parent - I can just do that in preprocessing, and then only stick vertex data for the proper parent in that channel.

I don't think I even need my NORMAL0 channel as input. I'm faithfully passing it to the pixel shader - but even if I eventually need it there, it's something I can generate within the vertex shader.

I could also probably get rid of the Flags channel. I can tell whether a vertex is an extruded one or not from its V texture coordinate (as sketched below). Since I'm consolidating my parent vertex info in preprocessing, I don't need a flag to choose between them anymore. And I can make the texture coordinate a float3 and embed line thickness in there, when I put that in.
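
Something like this, assuming the preprocessing writes V = 0 on the anchored vertices and V = 1 on the extruded pair (just a sketch, not tested):

// The V texcoord doubles as the "extruded vertex" flag,
// making the Flags channel unnecessary.
bool isExtruded = (Input.Tex.y > 0.5f);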

Updated potential vertex structure:


struct VS_INPUT {
    float4 Position        : POSITION0;
    float4 Normal          : NORMAL0;
    float4 Tri1Normal      : NORMAL1;
    float4 Tri2Normal      : NORMAL2;
    float4 ParentNormal    : NORMAL3;
    float2 Tex             : TEXCOORD3;
    float4 ParentPosition  : TEXCOORD4;
};



MJP    19790
One thing you *might* want to look into, as either an alternative or supplement to your technique (probably not a replacement), would be adding outlines by performing edge detection as a post-process, using depth and/or normal information for the entire screen. You can get pretty good results with a Sobel operator or something similar, which gives nice smooth lines. (A rough sketch of the idea follows below.)
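
Just to illustrate - a rough, untested ps_2_0-style sketch of Sobel detection over a depth texture. DepthSampler, TexelSize, and EdgeScale are made-up names, and you'd have to render the depth buffer in a prior pass, as discussed below:

sampler DepthSampler;       // linear depth rendered in a prior pass
float2 TexelSize;           // 1.0 / screen resolution
float EdgeScale = 100.0f;   // tune to taste

float4 DepthEdgePS(float2 uv : TEXCOORD0) : COLOR0
{
    // 3x3 neighborhood of depth samples
    float d00 = tex2D(DepthSampler, uv + TexelSize * float2(-1, -1)).r;
    float d10 = tex2D(DepthSampler, uv + TexelSize * float2( 0, -1)).r;
    float d20 = tex2D(DepthSampler, uv + TexelSize * float2( 1, -1)).r;
    float d01 = tex2D(DepthSampler, uv + TexelSize * float2(-1,  0)).r;
    float d21 = tex2D(DepthSampler, uv + TexelSize * float2( 1,  0)).r;
    float d02 = tex2D(DepthSampler, uv + TexelSize * float2(-1,  1)).r;
    float d12 = tex2D(DepthSampler, uv + TexelSize * float2( 0,  1)).r;
    float d22 = tex2D(DepthSampler, uv + TexelSize * float2( 1,  1)).r;

    // Sobel gradients in x and y
    float gx = (d20 + 2 * d21 + d22) - (d00 + 2 * d01 + d02);
    float gy = (d02 + 2 * d12 + d22) - (d00 + 2 * d10 + d20);

    // Gradient magnitude as edge strength, drawn as ink via alpha blending
    float edge = saturate(sqrt(gx * gx + gy * gy) * EdgeScale);
    return float4(0.0f, 0.0f, 0.0f, edge);
}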

Main Advantages:

-Less per-vertex work. This may or may not be an advantage depending on the scene, whether the device has hardware vertex acceleration, what the device's vertex/pixel shader ratio is, and the resolution
-Gives nice smooth lines
-Process is decoupled from the rendering of geometry, which can simplify things

Main Disadvantages:

-Requires access to a depth buffer, and optimally a normal buffer. Depth buffers can't be easily accessed directly due to API restrictions, so usually you're stuck rendering depth yourself using MRT. However, these days it's becoming very useful to have access to depth information anyway, whether it's for depth-of-field, motion blur, light beams, fog, atmosphere, whatever.
-Will probably be less consistent than extruding the edges
-Will be more expensive at higher resolutions
-Harder to control where it's being applied

Anyway just thought I'd mention it. I really don't know if it would be a huge win performance-wise over your current method, but it might be worth checking out as an option for lower-spec systems.

dopplex    164
As an option for lower-specced systems, though, wouldn't it be difficult to assume the availability of MRTs? I know that the laptop I'm currently working on (Intel integrated graphics!) is restricted to both software vertex processing and a single render target, which is a bit limiting. It manages the sample shots I posted - but I think that adding much more geometry would probably choke it pretty well.

For dealing with more capable hardware, though, it's likely that I'll have the depth/normal data available (I started out trying that approach on my (much more capable) desktop, before running into the lack of MRTs on the laptop), and the image-space approach will likely do a better job with the thinner edges and with more zoomed-out views. (These are almost always the edges detected by the angle between faces exceeding some threshold. The surface normals dividing a visible from a backfacing polygon by their nature tend to be close to perpendicular to the view vector. When those edges are thick enough to show up, it's actually a pretty nice effect.)

One idea may be to do some form of LOD on the silhouettes - the vertex approach looks much better close up, but image processing might be a better bet zoomed out. Getting them to transition gracefully might be an issue though.

Ultimately, though, the main reason for going with the vertex shader for this is style. I very much want to separate the silhouette detection process from the rendering process, so that once I find a silhouette I have an unlimited number of options for rendering it.

Right now, all I'm doing in the pixel shader is trying to anti-alias the line based on the V texture coordinate. Essentially, though, what I have is the full space of a quad on which to render whatever kind of edge effect I like. I should, for instance, be able to mimic many kinds of "brush" strokes along the edges - and it's definitely something I'm planning on exploring further (for example, I bet there's a huge number of things I could do by combining brush textures with something like Perlin noise).

The main limitation - and I'd love some suggestions on getting around this - is that right now I really have no way of knowing where in the "U" space of the quad I am.

This problem is mostly due to the fact that the vertices that are still attached to the geometry are shared between my edges. Because of that, I have no way of knowing whether I'm on an edge that has U going from 0 to 1, 1 to 0, or is just stuck at 0 or 1.

One possibility might be to make sure I have a U delta on the extruded points (since that pair is currently unique to each edge) and somehow "un-interpolate" across the rest of the quad. That is, I know that the top-left vertex is (1,1), the top-right one is (0,1), and that both bottom ones are (0,0) - and since I know the interpolated V value, I feel like there ought to be a way to derive a U value that is interpolated solely through the top two points... (rough attempt sketched below)
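
Maybe something along these lines - a completely untested sketch of that "un-interpolation" idea, assuming U is 0 on both anchored vertices and set per edge on the extruded pair:

// Because U = 0 on both anchored (V = 0) vertices, the interpolated U
// only picks up contributions from the top two vertices, scaled by V.
// Dividing the two interpolants projects U back onto the top edge.
float edgeU = Input.Tex.x / max(Input.Tex.y, 0.0001f); // undefined at V = 0

With that in hand, something like tex2D(BrushSampler, float2(edgeU, Input.Tex.y)) could look up a stroke across the full quad (BrushSampler being a hypothetical brush texture).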

dopplex    164
I've finally managed to fix the anti-aliasing, or so it seems.

I've actually done something pretty cool there...

Since it's a procedural texture - and since the procedure that generates it is actually of the quadratic equation variety....

I decided that rather than simply multi-sample, I'd just integrate the equation over the entire pixel. Infinite multi-sampling!

(Even more astoundingly... it actually seems to work! Even cooler, it's actually finding the f(x) = 0 and f(x) = 1 roots of the function within the pixel shader - which is just stupid in terms of efficiency, but kinda cool that it works. They'll get hard-coded by the material system eventually.)
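
In other words (my notation, matching the shader below): if f is the clamped parabolic line profile, v the pixel's position along the fin in V units, and h half a pixel in those same units, the alpha is the exact average

$$\alpha = \frac{1}{2h} \int_{v-h}^{v+h} f(t)\,dt$$

evaluated via the antiderivative between bounds clamped to where f crosses 0 and 1.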

Here's the code for the integral anti-aliasing pixel shader (pre-calculating for constant presets will remove around half the code, and make it take significantly fewer instructions than multisampling, too):


float4 BasicIntegralShader(PS_INPUT Input) : COLOR0 {
    float4 Color;

    // Scale the tex coords as if the poly were smaller - unstretch the
    // stretching done when we increased the length of the extrusion vector
    // in the vertex shader. In theory, pUV.y should now = 1 at the point
    // where the original poly would have ended.
    // (Something seems wrong with this at the moment.)
    float LinePos = Input.Tex.y * AddScale;

    // Scale an interpolated value back to the fin value. Since ScaleFactor
    // will have been interpolated between 0 and 1, we need to uninterpolate
    // to keep it constant.
    float rescale = 1.0f / Input.Tex.y;
    float ScaleFactor = 0.5f / (length(rescale * Input.ScaleFactor.xy));

    float Thresh = OutlineParam.y;   // used very often
    float Avg = OutlineParam.x;
    float Soft = OutlineParam.z;
    float Thresh2 = pow(OutlineParam.y, 2);
    float Avg2 = pow(OutlineParam.x, 2);
    float Soft2 = pow(OutlineParam.z, 2);

    // Coefficients of the quadratic line profile.
    float varA = -Soft * Thresh2;
    float varB = 2 * Avg * Soft * Thresh2;
    float varC = Soft * (1 - Avg2 * Thresh2);

    // Roots where the profile crosses f(x) = 0...
    float xmin1t = Avg + 1 / Thresh;
    float xmin2t = Avg - 1 / Thresh;

    // ...and where it crosses f(x) = 1 (quadratic formula).
    varC = varC - 1;
    float xmax1t = (-varB + sqrt(pow(varB, 2) - 4 * varA * varC)) / (2 * varA);
    float xmax2t = (-varB - sqrt(pow(varB, 2) - 4 * varA * varC)) / (2 * varA);

    float xmin1 = min(xmin1t, xmin2t);
    float xmin2 = max(xmin1t, xmin2t);

    float xmax1 = min(xmax1t, xmax2t);
    float xmax2 = max(xmax1t, xmax2t);

    // Coefficients of the antiderivative.
    float iA = Thresh2 * Soft / -3.0f;
    float iB = Avg * Thresh2 * Soft;
    float iC = Soft * (1 - Avg2 * Thresh2);

    // Clamp the integration bounds to the pixel footprint.
    xmin1 = clamp(LinePos - ScaleFactor, xmin1, xmin2);
    xmin2 = clamp(LinePos + ScaleFactor, xmin1, xmin2);

    xmax1 = clamp(LinePos - ScaleFactor, xmax1, xmax2);
    xmax2 = clamp(LinePos + ScaleFactor, xmax1, xmax2);

    // Evaluate the antiderivative at the bounds.
    float Ixmin1 = iA * pow(xmin1, 3) + iB * pow(xmin1, 2) + iC * xmin1;
    float Ixmin2 = iA * pow(xmin2, 3) + iB * pow(xmin2, 2) + iC * xmin2;
    iC = iC - 1;
    float Ixmax1 = iA * pow(xmax1, 3) + iB * pow(xmax1, 2) + iC * xmax1;
    float Ixmax2 = iA * pow(xmax2, 3) + iB * pow(xmax2, 2) + iC * xmax2;

    float Alpha = Ixmin2 - Ixmin1 - Ixmax2 + Ixmax1;

    Color = InkColor;
    Color.w = Alpha / (2 * ScaleFactor);
    return Color;
}



MJP    19790
That looks like some nice stuff! Got a picture? [smile]

By the way, I *think* there are ways to pass a non-interpolated value to the pixel shader when you have flat-shaded mode active... but I forget which register you need to use. I'll have a look around MSDN and see if I can find it.

EDIT: I think it's the COLOR register, but I'm still not sure.

dopplex    164
I just realized that I'd been making things *way* too complicated. I'd been integrating over a parabolic curve to try to get fall-off near the edges - but for anti-aliasing, all I really need to integrate is f(x) = 1, which as you might expect is FAR easier.
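
With a constant profile the antiderivative is just x, so the whole thing collapses to a difference of clamped bounds (same notation as before - v is the position along the fin, h half a pixel in V units):

$$\alpha = \frac{\operatorname{clamp}(v+h,\,0,\,1) - \operatorname{clamp}(v-h,\,0,\,1)}{2h}$$

which is exactly the (x2 - x1) / (2 * ScaleFactor) line in the shader below.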

The only problem I'm now hitting is that while outer edges appear to be anti-aliasing very, very well, the inner edges don't appear to be anti-aliasing at all. I'm not entirely clear on why that is - in theory, the function ought to be symmetrical. Any ideas why this may be occurring?

EDIT: Okay, I've taken a closer look. If I reduce the width of the stroke - i.e., take my integral over [.02, .98] instead of [0, 1] - the other edge anti-aliases well. However, this creates a small gap between the line and the model that is apparent close up. I think there may be an issue with the silhouette lines getting clipped where they intersect the object, causing the anti-aliased pixels not to rasterize... Is there a way to make this not happen, without causing lines that ought to be occluded to show?


Here's the second version of the shader (The first is still useful - it just gives a much fuzzier effect). This one is much, much shorter.

float4 BasicIntegralShader2(PS_INPUT Input) : COLOR0 {
    float4 Color;

    float LinePos = Input.Tex.y * AddScale;
    float rescale = 1.0f / Input.Tex.y; // scale an interpolated value back to the fin value
    float ScaleFactor = 0.5f / (length(rescale * Input.ScaleFactor.xy));
    // ScaleFactor should now give us the length of half a pixel
    // in the unit of our UV coordinates (only V, really,
    // since we're working in only one dimension).

    // Figure out the bounds to integrate over. V < 0 or V > 1 are outside
    // our line, so we only want to integrate over the portion of the
    // function that is within our line (which is defined as between 0 and 1
    // here - could easily make it narrower, though).
    float x1 = saturate(LinePos - ScaleFactor);
    float x2 = saturate(LinePos + ScaleFactor);

    // And since the antiderivative of f(x) = 1 is just F(x) = x,
    // the actual integration really is this easy (it's the (x2 - x1) part):
    float Alpha = (x2 - x1) / (2 * ScaleFactor);

    Color = InkColor;
    Color.w = Alpha;
    return Color;
}

And screenshots:
"Spaghetti" (Just outlines, no object. Have some z issues here, not sure how to get around them. They tend not to be an issue with shading though)

An example of the aliasing issue on the inner edge:

Fully cel-shaded and outlined:


dopplex    164
Also adding an updated version of the vertex shader (some fixes made in order to get the anti-aliasing to work properly).

Vertex Shader:


VS_OUTPUT Transform2(VS_INPUT Input) {
    VS_OUTPUT Output;
    float4x4 WorldViewProject = mul(mul(World, View), Project);
    float4 ObjectPosition = mul(Input.Position, World);
    float4 ViewVec = EyePosition - ObjectPosition;
    float4 ViewVec2 = mul(mul(EyePosition - ObjectPosition, View), Project);

    float4 OutputPosition = mul(Input.Position, WorldViewProject);
    float2 ScaleFactor = float2(0.0f, 0.0f);
    float4 Debug = (float4)0.0f;

    Output.Normal = mul(Input.Normal, World);
    Output.ViewDirection = EyePosition - ObjectPosition;
    Output.Tex = Input.Tex;

    if (Input.Flags.w >= 0)
    {
        float4 Tri1Normal = normalize(mul(Input.Tri1Normal, World));
        float4 Tri2Normal = normalize(mul(Input.Tri2Normal, World));
        float Tri1DotV = dot(Tri1Normal, ViewVec);
        float Tri2DotV = dot(Tri2Normal, ViewVec);

        float Det = Tri1DotV * Tri2DotV;
        OutputPosition = mul(Input.V1Position, WorldViewProject);

        float Det2 = dot(Tri1Normal, Tri2Normal);
        if (Det2 < -1.0f) Det = -1.0f;

        if (Det < 0)
        {
            float4 ExtrudeVec = float4(0.0f, 0.0f, 0.0f, 0.0f);
            float4 OriginPoint;
            float4 OriginNormal;
            if (Input.Flags.x == 0)
            {
                OriginNormal = normalize(Input.V1Normal);
                OriginPoint = Input.V1Position;
            }
            if (Input.Flags.x > 0)
            {
                OriginNormal = normalize(Input.V2Normal);
                OriginPoint = Input.V2Position;
            }

            float4 extrusionPointOS = OriginPoint + normalize(OriginNormal) * gLineThickness;
            float4 extrusionPointCS = mul(extrusionPointOS, WorldViewProject);
            OutputPosition = mul(OriginPoint, WorldViewProject);

            float2 extrusionVecSS = extrusionPointCS.xy / extrusionPointCS.w - OutputPosition.xy / OutputPosition.w;
            // extrusionVecSS = (extrusionVecSS * 0.5 + 0.5) * ScreenSize;
            extrusionVecSS = (extrusionVecSS * 0.5) * ScreenSize;
            // ScaleFactor.x = sqrt(extrusionVecSS.x * extrusionVecSS.x + extrusionVecSS.y * extrusionVecSS.y);
            // ScaleFactor.x = length(pow(extrusionVecSS.xy, 2));
            ScaleFactor.xy = extrusionVecSS.xy;

            OutputPosition = OriginPoint + normalize(OriginNormal) * gLineThickness * AddScale;
            OutputPosition = mul(OutputPosition, WorldViewProject);
        }
    }

    Output.Debug = Debug;
    Output.ScaleFactor = ScaleFactor;
    Output.Position = OutputPosition;

    return Output;
}

dopplex    164
Hi Jason,

Thanks for the link. I had actually run across it already (after finishing the initial implementation). I think that's actually where I figured out to use alpha-test to get rid of my fully transparent edges, rather than manually using clip() at the end of my pixel shader. Generally, it looks like I took a pretty similar approach to what you had laid out - up to and including the edge detection method (my vertex structure is less compact, though - something I need to fix).

I think I diverge after the edge is detected. Rather than extrude my edge along the normal of the forward-facing triangle face, I'm extruding along the parent vertex normals. (My reason for this was to make sure that adjoining edges fully join up.) I also don't do any differentiating between ridges and valleys - I have a line that will add internal lines based on the angle between faces, but I've actually turned it off in most of the screens just because I preferred the silhouette-only style.

Actually, now that I look closer, I think I'm preprocessing differently too - I'm only adding two vertices per edge instead of four (and turning off backface culling for the silhouette shader), although the two vertices of the original geometry that I'm keeping per edge may be the equivalent.

As a last question - when you talk about updating the vertex data for working with dynamic models, how would you go about updating it? I was under the impression that altering vertex buffers from the CPU would cause performance issues - is the proper approach to not use vertex buffers in this situation?

Edit: Also, I got my first look at "Okami" over the weekend (hadn't caught it in its PS2 form). The silhouette effect they use looks very similar to the one generated by this kind of method (i.e. thick outlines that extend beyond the bounds of the regular geometry), and I'm trying to figure out how exactly they managed it, given that I don't think either the PS2 or the Wii has shader support...

Jason Z    6436
For dynamic cases, say a human figure, you need to keep the normal and tangent vectors in the edge mesh updated according to how the figure is currently posed. This is really more important for the ridges and valleys than the actual silhouette, but it applies to the silhouette as well.

Now that D3D10 is around, the geometry shader would be much more appropriate to use for both pre-processing and rendering - selecting the edges to display, not to mention performing the extrusion. That might be something to check into if you want to advance the technique to run more on the GPU than the CPU. (A rough sketch of what that could look like follows below.)
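
To give a flavor of it - a minimal, untested D3D10/SM4 sketch of the idea (all names are illustrative; winding and robustness details are glossed over). The mesh is drawn with triangle-adjacency indices, so each primitive carries its three neighbors:

struct GSIn  { float4 PosW : POSITION; float3 NormalW : NORMAL; };
struct GSOut { float4 PosH : SV_Position; float2 Tex : TEXCOORD0; };

float4x4 ViewProject;
float3   EyePosition;
float    LineThickness;

float3 FaceNormal(float3 a, float3 b, float3 c)
{
    return normalize(cross(b - a, c - a));
}

void EmitFin(GSIn v0, GSIn v1, inout TriangleStream<GSOut> stream)
{
    // Quad: the two edge vertices plus two copies extruded along the
    // vertex normals, just like the preprocessed fins above.
    GSOut o;
    o.PosH = mul(float4(v0.PosW.xyz, 1), ViewProject);
    o.Tex = float2(0, 0); stream.Append(o);
    o.PosH = mul(float4(v0.PosW.xyz + v0.NormalW * LineThickness, 1), ViewProject);
    o.Tex = float2(0, 1); stream.Append(o);
    o.PosH = mul(float4(v1.PosW.xyz, 1), ViewProject);
    o.Tex = float2(1, 0); stream.Append(o);
    o.PosH = mul(float4(v1.PosW.xyz + v1.NormalW * LineThickness, 1), ViewProject);
    o.Tex = float2(1, 1); stream.Append(o);
    stream.RestartStrip();
}

[maxvertexcount(12)]
void SilhouetteGS(triangleadj GSIn tri[6], inout TriangleStream<GSOut> stream)
{
    // Even indices are the triangle itself; odd ones its adjacent vertices.
    float3 faceN = FaceNormal(tri[0].PosW.xyz, tri[2].PosW.xyz, tri[4].PosW.xyz);
    if (dot(faceN, EyePosition - tri[0].PosW.xyz) <= 0)
        return; // only front-facing triangles emit fins

    [unroll]
    for (int i = 0; i < 6; i += 2)
    {
        int n = (i + 2) % 6;
        // Neighbor across edge (i, n); if it faces away from the eye,
        // that edge is on the silhouette.
        float3 adjN = FaceNormal(tri[i].PosW.xyz, tri[i + 1].PosW.xyz, tri[n].PosW.xyz);
        if (dot(adjN, EyePosition - tri[i].PosW.xyz) < 0)
            EmitFin(tri[i], tri[n], stream);
    }
}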

Your work looks pretty good - keep it up!

dopplex    164
Quote:
Original post by Jason Z
For dynamic cases, say a human figure, you need to keep the normal and tangent vectors in the edge mesh updated according to how the figure is currently posed. This is really more important for the ridges and valleys than the actual silhouette, but it applies to the silhouette as well.

Now that D3D10 is around, the geometry shader would be much more appropriate to use for both pre-processing and rendering - selecting the edges to display, not to mention performing the extrusion. That might be something to check into if you want to advance the technique to run more on the GPU than the CPU.

Your work looks pretty good - keep it up!


Yeah, the geometry shader seems to be exactly what would be needed here. Unfortunately, I don't think I'll be experimenting with that for a while, since I want to avoid doing anything that would restrict me to Vista-only setups.

Animation is the next thing I'm planning to tackle - just haven't quite had the time to figure out how I want to work it yet!
