
# taby

Member Since 10 Feb 2005
Offline Last Active May 07 2013 08:23 AM

### In Topic: AMD 6310 GLSL/FBO texture copy issue

04 May 2013 - 04:14 PM

The problem doesn't occur on an Intel GPU. Much obliged for the help.

### In Topic: A C++ code to smooth (and fix cracks in) meshes generated by the standard, or...

29 April 2013 - 11:12 AM

Another simple smoothing algorithm to try is one where the scale is multiplied by a factor related to each vertex's angle deficit (curvature), i.e.:

`vertex[i] += displacement[i] * scale * ((2*pi - total_angle[i]) / (2*pi))`

This should smooth out spikes and pits, but leave ridges, valleys and flat regions relatively untouched -- which seems ideal for the task at hand.

Not sure what the algorithm is called, nor how much it would differ from curvature normal weighting, but I'll try it later today.

Update: implemented as suggested; it doesn't work entirely as expected, but I'll try other things related to it.
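A minimal, self-contained sketch of that idea (hypothetical names, not taken from the zip): compute the normalized angle deficit at a vertex and use it to attenuate the per-vertex displacement, so flat regions (total angle near 2*pi) barely move while spikes and pits move the most.

```cpp
#include <cassert>
#include <cmath>

struct vec3 { float x, y, z; };

// Normalized angle deficit: 0 for a flat vertex (total angle == 2*pi),
// approaching 1 for a needle-sharp spike (total angle near 0).
float angle_deficit_factor(float total_angle)
{
    const float two_pi = 6.28318530718f;
    return (two_pi - total_angle) / two_pi;
}

// Scale the Laplacian displacement by the angle deficit factor.
vec3 scaled_displacement(const vec3 &displacement, float scale, float total_angle)
{
    const float f = scale * angle_deficit_factor(total_angle);
    return vec3{displacement.x * f, displacement.y * f, displacement.z * f};
}
```

A flat vertex gets a factor of zero (no movement at all), which is what should leave flat regions untouched.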

### In Topic: A C++ code to smooth (and fix cracks in) meshes generated by the standard, or...

23 April 2013 - 10:51 AM

Nice work! This could surely come in handy at some point.

Thank you. I hope it comes in handy.

The paper 'Geometric Signal Processing on Polygonal Meshes' by G. Taubin mentions a few edge weight schemes:

- constant unit weight: w_ij = 1

- inverse edge length weight: w_ij = 1 / |e_ij|

- curvature normal-esque (cotan) weight: w_ij = 1 / tan(theta_ij0) + 1 / tan(theta_ij1)

The code in the zip file from the first post uses the constant unit weight scheme (see indexed_mesh::laplace_smooth()).
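For comparison, here is a minimal, self-contained sketch of a uniform-weight pass (hypothetical names; an assumed reconstruction, not the zip's exact laplace_smooth): every neighbour gets the same weight 1/valence, so each vertex moves toward the centroid of its neighbours by the given scale.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct vertex { float x, y, z; };

// Constant unit weight (w_ij = 1) Laplacian smoothing, normalized so the
// weights at each vertex sum to 1.
void laplace_smooth_uniform(std::vector<vertex> &verts,
                            const std::vector<std::vector<std::size_t>> &neighbours,
                            float scale)
{
    std::vector<vertex> disp(verts.size(), vertex{0, 0, 0});

    for(std::size_t i = 0; i < verts.size(); i++)
    {
        if(neighbours[i].empty())
            continue; // rogue vertex: no incident edges

        const float w = 1.0f / static_cast<float>(neighbours[i].size());

        // Displacement toward the centroid of the neighbours.
        for(std::size_t j : neighbours[i])
        {
            disp[i].x += (verts[j].x - verts[i].x) * w;
            disp[i].y += (verts[j].y - verts[i].y) * w;
            disp[i].z += (verts[j].z - verts[i].z) * w;
        }
    }

    for(std::size_t i = 0; i < verts.size(); i++)
    {
        verts[i].x += disp[i].x * scale;
        verts[i].y += disp[i].y * scale;
        verts[i].z += disp[i].z * scale;
    }
}
```
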

I didn't put the other schemes into the code zip file because I'm not sure if I'm implementing them correctly, but I'll include them here in case anyone wishes to play with them to see if/where I'm going wrong.

Here's the code to replace the laplace_smooth function with the inverse edge length weight scheme:

```cpp
void indexed_mesh::inverse_edge_length_smooth(const float scale)
{
    vector<vertex_3> displacements(vertices.size(), vertex_3(0, 0, 0));

    // Get per-vertex displacement.
    for(size_t i = 0; i < vertices.size(); i++)
    {
        // Skip rogue vertices (which were probably made rogue during a previous
        // attempt to fix mesh cracks).
        if(0 == vertex_to_vertex_indices[i].size())
            continue;

        vector<float> weights(vertex_to_vertex_indices[i].size(), 0.0f);

        // Calculate weights based on inverse edge lengths.
        for(size_t j = 0; j < vertex_to_vertex_indices[i].size(); j++)
        {
            size_t neighbour_j = vertex_to_vertex_indices[i][j];

            float edge_length = vertices[i].distance(vertices[neighbour_j]);

            if(0 == edge_length)
                edge_length = numeric_limits<float>::epsilon();

            weights[j] = 1.0f / edge_length;
        }

        // Normalize the weights so that they sum up to 1.
        float s = 0;

        for(size_t j = 0; j < weights.size(); j++)
            s += weights[j];

        if(0 == s)
            s = numeric_limits<float>::epsilon();

        for(size_t j = 0; j < weights.size(); j++)
            weights[j] /= s;

        // Sum the displacements.
        for(size_t j = 0; j < vertex_to_vertex_indices[i].size(); j++)
        {
            size_t neighbour_j = vertex_to_vertex_indices[i][j];
            displacements[i] += (vertices[neighbour_j] - vertices[i])*weights[j];
        }
    }

    // Apply per-vertex displacement.
    for(size_t i = 0; i < vertices.size(); i++)
        vertices[i] += displacements[i]*scale;
}
```

Here's the code to replace the laplace_smooth function with the curvature normal-esque weight scheme (it works most of the time, but no guarantees):

```cpp
void indexed_mesh::curvature_normal_smooth(const float scale)
{
    vector<vertex_3> displacements(vertices.size(), vertex_3(0, 0, 0));

    // Get per-vertex displacement.
    for(size_t i = 0; i < vertices.size(); i++)
    {
        if(0 == vertex_to_vertex_indices[i].size())
            continue;

        vector<float> weights(vertex_to_vertex_indices[i].size(), 0.0f);

        size_t angle_error = 0;

        // For each vertex pair (i.e. each edge), calculate the weight based
        // on the two opposing angles (i.e. the curvature normal scheme).
        for(size_t j = 0; j < vertex_to_vertex_indices[i].size(); j++)
        {
            size_t angle_count = 0;

            size_t neighbour_j = vertex_to_vertex_indices[i][j];

            // Find out which two triangles are shared by the edge.
            for(size_t k = 0; k < vertex_to_triangle_indices[i].size(); k++)
            {
                for(size_t l = 0; l < vertex_to_triangle_indices[neighbour_j].size(); l++)
                {
                    size_t tri0_index = vertex_to_triangle_indices[i][k];
                    size_t tri1_index = vertex_to_triangle_indices[neighbour_j][l];

                    // This will occur twice per edge.
                    if(tri0_index == tri1_index)
                    {
                        // Find the third vertex in this triangle (the vertex that doesn't belong to the edge).
                        for(size_t m = 0; m < 3; m++)
                        {
                            // This will occur once per triangle.
                            if(triangles[tri0_index].vertex_indices[m] != i && triangles[tri0_index].vertex_indices[m] != neighbour_j)
                            {
                                size_t opp_vert_index = triangles[tri0_index].vertex_indices[m];

                                // Get the angle opposite of the edge.
                                vertex_3 a = vertices[i] - vertices[opp_vert_index];
                                vertex_3 b = vertices[neighbour_j] - vertices[opp_vert_index];
                                a.normalize();
                                b.normalize();

                                // Clamp the dot product to [-1, 1] to guard
                                // against acosf domain errors.
                                float dot = a.dot(b);

                                if(-1 > dot)
                                    dot = -1;
                                else if(1 < dot)
                                    dot = 1;

                                float angle = acosf(dot);

                                // Curvature normal weighting.
                                float slope = tanf(angle);

                                if(0 == slope)
                                    slope = numeric_limits<float>::epsilon();

                                // Note: Some weights will be negative, due to obtuse triangles.
                                // You may wish to do weights[j] += fabsf(1.0f / slope); here.
                                weights[j] += 1.0f / slope;

                                angle_count++;

                                break;
                            }
                        }

                        // Since we found a triangle match, we can skip to the first vertex's next triangle.
                        break;
                    }
                }
            } // End of: Find out which two triangles are shared by the edge.

            if(angle_count != 2)
                angle_error++;

        } // End of: For each vertex pair (i.e. each edge).

        if(angle_error != 0)
        {
            cout << "Warning: Vertex " << i << " belongs to " << angle_error << " edges that do not belong to two triangles (" << vertex_to_vertex_indices[i].size() - angle_error << " edges were OK)." << endl;
            cout << "Your mesh probably has cracks or holes in it." << endl;
        }

        // Normalize the weights so that they sum up to 1.
        float s = 0;

        // Note: Some weights will be negative, due to obtuse triangles.
        // You may wish to do s += fabsf(weights[j]); here.
        for(size_t j = 0; j < weights.size(); j++)
            s += weights[j];

        if(0 == s)
            s = numeric_limits<float>::epsilon();

        for(size_t j = 0; j < weights.size(); j++)
            weights[j] /= s;

        // Sum the displacements.
        for(size_t j = 0; j < vertex_to_vertex_indices[i].size(); j++)
        {
            size_t neighbour_j = vertex_to_vertex_indices[i][j];

            displacements[i] += (vertices[neighbour_j] - vertices[i])*weights[j];
        }
    }

    // To do: Find out why there are cases where the displacement is much, much
    // larger than all edge lengths put together.

    // Apply per-vertex displacement.
    for(size_t i = 0; i < vertices.size(); i++)
        vertices[i] += displacements[i]*scale;
}
```

### In Topic: A C++ code to smooth (and fix cracks in) meshes generated by the standard, or...

23 April 2013 - 10:36 AM

Yes, I am using the plain vanilla, ancient, original MC that produces topological inconsistencies. Writing that makes it seem worse than it is, because the cracks generally seem to be just that: cracks, not gaping holes. Each boundary consists of an even number of edges, where the majority of the angle is distributed amongst all but two of the vertices. I once tried the version with topological guarantees / case-ambiguity resolution by Lewiner et al. back in the mid-2000s, but I went with the version from Bourke's site in the end because it does a relatively decent job with relatively minimal headache.

I updated the first post to indicate that I'm using the standard, original MC algorithm (see: P. Bourke's 'Polygonising a scalar field').

If anyone has a full, cost-free, public domain C++ implementation of an adaptive MC algorithm to share, I'd be more than happy to absorb it into my toolkit. Perhaps GD could consider adding a 'recipes' section to the site to gather all of these kinds of things into one nicely organized spot. As it turns out, I implemented one myself when I was experimenting back in the mid-2000s and forgot about it.

In any case, the major point of the post was mesh smoothing. If anyone has any kind of implementation of Taubin smoothing that uses 'Fujiwara' (scale-dependent) or curvature normal (cotan) weighting, but doesn't destroy the mesh, that would be nice too. I've had no luck with these; the literature is abundant, to the point where it's conflicting.
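For anyone wanting to experiment, here is a minimal sketch of the Taubin lambda|mu idea itself (an assumed textbook setup, demonstrated on a closed polyline of heights rather than a real mesh, with hypothetical names): alternate a shrink pass with positive lambda and an inflate pass with negative mu, where 0 < lambda < -mu, so the data is smoothed without the steady shrinkage of plain Laplacian smoothing.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// One uniform-weight Laplacian pass over a closed polyline of heights:
// each sample moves toward the average of its two neighbours by 'scale'.
// This stands in for one call to a laplace_smooth-style function.
void laplace_step(std::vector<float> &h, float scale)
{
    const std::size_t n = h.size();
    std::vector<float> d(n);

    for(std::size_t i = 0; i < n; i++)
    {
        const float avg = 0.5f * (h[(i + n - 1) % n] + h[(i + 1) % n]);
        d[i] = avg - h[i];
    }

    for(std::size_t i = 0; i < n; i++)
        h[i] += d[i] * scale;
}

// Taubin smoothing: alternate a shrink pass (+lambda) with an inflate
// pass (mu < 0, |mu| slightly larger than lambda, e.g. 0.33 and -0.34).
void taubin_smooth(std::vector<float> &h, float lambda, float mu, std::size_t iterations)
{
    for(std::size_t i = 0; i < iterations; i++)
    {
        laplace_step(h, lambda); // shrink
        laplace_step(h, mu);     // inflate (mu is negative)
    }
}
```

With uniform weights on a closed polyline, each pass preserves the mean exactly, so high-frequency spikes are attenuated while the overall shape stays put.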

### In Topic: gaussian curvature on a 3D mesh

04 November 2012 - 08:56 PM

http://en.wikipedia.org/wiki/Defect_(geometry)

http://en.wikipedia.org/wiki/Gauss-Bonnet_theorem

Those links should help if you feel like getting to the heart of it on your own. To figure out the curvature at a vertex you'll need the adjacency data for your mesh, which I'm pretty sure is not something that an OBJ file contains, so you'd have to build that yourself. Once you do that, and know how to calculate the curvature, you'll be pretty knowledgeable and you'll be able to figure out all kinds of other cool things. It's tough the first time around, but there are lots of people here who can help if you get stuck.

P.S. That first link is mangled, sorry. Just search for "defect geometry" on Wikipedia. Basically you need to get all of the triangles associated with a vertex, then loop through them and add up the total angle at that vertex. If the total angle is 2*pi, then there is no curvature at that vertex (the triangles are all coplanar); if not, then the surface is curved there. Repeat for all vertices.
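A minimal, self-contained sketch of that computation (hypothetical names): sum the corner angles of the incident triangles at a vertex, then subtract the total from 2*pi to get the angle defect (the discrete Gaussian curvature).

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

struct vec3 { float x, y, z; };

static vec3 sub(const vec3 &a, const vec3 &b) { return vec3{a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(const vec3 &a, const vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float len(const vec3 &a) { return std::sqrt(dot(a, a)); }

// Angle at 'apex' formed by the edges toward p and q.
float corner_angle(const vec3 &apex, const vec3 &p, const vec3 &q)
{
    vec3 u = sub(p, apex), v = sub(q, apex);
    float c = dot(u, v) / (len(u) * len(v));

    // Clamp to [-1, 1] to guard against acos domain errors.
    if(c < -1.0f) c = -1.0f; else if(c > 1.0f) c = 1.0f;

    return std::acos(c);
}

// 'fans' holds, for each triangle incident to v, its two other vertices
// (v itself is the apex of every corner).
float angle_defect(const vec3 &v, const std::vector<std::pair<vec3, vec3>> &fans)
{
    float total = 0.0f;

    for(const auto &t : fans)
        total += corner_angle(v, t.first, t.second);

    return 6.28318530718f - total; // 2*pi minus the total angle
}
```
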
