
# High Speed Quadric Mesh Simplification without problems (Resolved)


### #1 spacerat

Posted 10 May 2014 - 05:51 AM

MIT license, well documented, and working without errors.

The latest source is also included in the lowermost reply.

**Summary** I am currently working on a new implementation of a quadric-based mesh simplification algorithm. It's quite short (130 lines for the main part), but apparently there is a bug somewhere. I have searched for a while but couldn't find it yet. If one of you can spot it, any help is appreciated.

In turn, everyone gets a great mesh simplification method for free once it works.

It's able to reduce 650,000 triangles by 90% in about 1 second (single-threaded), using less memory than most other methods.

**Algorithm** Unlike the original method, it does not store the per-edge error in an edge list, but per triangle. This avoids the edge list completely, since the edge list is slow to create and update. On the other hand, every error is computed twice - but that's not so serious.

Methods

calculate_error - calculates the error between two vertices and outputs the vertex an edge may be collapsed to.

reduce_polygons - the main function to reduce the mesh.

**The Problem** Here is a screenshot of the bug: some faces are flipped and look displaced.

Here is the source code:

struct Triangle { int v[3]; double err[3]; bool deleted, dirty; }; // dirty added: the loops below use it
struct Vertex   { vec3f p, n; int dst, dirty; Matrix q; };
std::vector<Triangle> triangles;
std::vector<Vertex> vertices;

double vertex_error(Matrix q, double x, double y, double z);
double calculate_error(int id_v1, int id_v2, vec3f &p_result);

void reduce_polygons()
{
    // Init defaults
    loopi(0,vertices.size())
    {
        vertices[i].dst=-1;
        vertices[i].q=Matrix(0.0);
        vertices[i].dirty=false;
        vertices[i].n=vec3f(0,0,0);
    }
    loopi(0,triangles.size()) triangles[i].deleted=triangles[i].dirty=0;

    loopi(0,triangles.size())
    {
        Triangle &t=triangles[i]; vec3f n,p[3];

        loopj(0,3) p[j]=vertices[t.v[j]].p;
        n.cross(p[1]-p[0],p[2]-p[0]);
        n.normalize();

        loopj(0,3) vertices[t.v[j]].q =
            vertices[t.v[j]].q+Matrix(n.x,n.y,n.z,-n.dot(p[0]));
    }
    // Calc edge error
    loopi(0,triangles.size())
    {
        Triangle &t=triangles[i]; vec3f p;
        loopj(0,3) t.err[j]=calculate_error(t.v[j],t.v[(j+1)%3],p);
    }
    int deleted_triangles=0;

    loopl(0,25) // iteration
    {
        // remove vertices & mark deleted triangles
        loopi(0,triangles.size()) if(!triangles[i].deleted)
        {
            Triangle &t=triangles[i];
            if(vertices[t.v[0]].dirty) continue;
            if(vertices[t.v[1]].dirty) continue;
            if(vertices[t.v[2]].dirty) continue;

            loopj(0,3)
            {
                int i0=t.v[ j     ]; Vertex &v0 = vertices[i0];
                int i1=t.v[(j+1)%3]; Vertex &v1 = vertices[i1];

                bool test1=t.err[j] < 0.00000001*l*l*l*l*l;
                bool test2=(v0.p-v1.p).length()<0.3*l;

                // remove based on edge length and quadric error
                if(test1 && test2)
                {
                    calculate_error(i0,i1,v0.p);
                    //v0.p=(v1.p+v0.p)*0.5;
                    v0.q=v1.q+v0.q;
                    v1.dst=i0;
                    v1.dirty=true;
                    v0.dirty=true;
                    t.deleted=1;
                    deleted_triangles++;
                    break;
                }
            }
        }
        // update connectivity
        loopi(0,triangles.size()) if(!triangles[i].deleted)
        {
            Triangle &t=triangles[i];

            loopj(0,3)
                if(vertices[t.v[j]].dst>=0)
                {
                    t.v[j]=vertices[t.v[j]].dst;
                    if(t.v[j]==t.v[(j+1)%3] || t.v[j]==t.v[(j+2)%3] )
                    {
                        // two equal points -> delete triangle
                        t.deleted=1;
                        deleted_triangles++;
                        break;
                    }
                    t.dirty=1;
                }
            if(!t.dirty)continue;

            // only recompute errors where a vertex actually moved
            bool dirty=0;
            loopj(0,3) dirty|=vertices[t.v[j]].dirty;
            if(!dirty)continue;

            // update error
            vec3f p;
            loopj(0,3) t.err[j]=calculate_error(t.v[j],t.v[(j+1)%3],p);
        }
        // clear dirty flag
        loopi(0,vertices.size()) vertices[i].dirty=0;
    }
}

double vertex_error(Matrix q, double x, double y, double z)
{
    return   q[0]*x*x + 2*q[1]*x*y + 2*q[2]*x*z + 2*q[3]*x + q[5]*y*y
           + 2*q[6]*y*z + 2*q[7]*y + q[10]*z*z + 2*q[11]*z + q[15];
}

double calculate_error(int id_v1, int id_v2, vec3f &p_result)
{
    Matrix q_bar = vertices[id_v1].q + vertices[id_v2].q;
    Matrix q_delta ( q_bar[0], q_bar[1],  q_bar[2],  q_bar[3],
                     q_bar[4], q_bar[5],  q_bar[6],  q_bar[7],
                     q_bar[8], q_bar[9], q_bar[10], q_bar[11],
                     0,        0,         0,         1);
    if ( double det = q_delta.det(0, 1, 2, 4, 5, 6, 8, 9, 10) )
    {
        p_result.x = -1/det*(q_delta.det(1, 2, 3, 5, 6, 7, 9, 10, 11));
        p_result.y =  1/det*(q_delta.det(0, 2, 3, 4, 6, 7, 8, 10, 11));
        p_result.z = -1/det*(q_delta.det(0, 1, 3, 4, 5, 7, 8, 9, 11));
    }
    else
    {
        vec3f p1=vertices[id_v1].p;
        vec3f p2=vertices[id_v2].p; // was id_v1: the fallback always compared p1 with itself
        vec3f p3=(p1+p2)/2;
        double error1 = vertex_error(q_bar, p1.x,p1.y,p1.z);
        double error2 = vertex_error(q_bar, p2.x,p2.y,p2.z);
        double error3 = vertex_error(q_bar, p3.x,p3.y,p3.z);
        double min_error = min(error1, min(error2, error3));
        if (error1 == min_error) p_result=p1;
        if (error2 == min_error) p_result=p2;
        if (error3 == min_error) p_result=p3;
    }
    double min_error = vertex_error(q_bar, p_result.x, p_result.y, p_result.z);
    return min_error;
}
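For reference, `vertex_error` is just the quadric form written out for a symmetric matrix stored as a flat 4x4 (so `q[4]=q[1]`, `q[8]=q[2]`, `q[9]=q[6]`, which is why every off-diagonal term carries a factor of 2):

```latex
v^\top Q v \;=\; q_{0}\,x^2 + 2q_{1}\,xy + 2q_{2}\,xz + 2q_{3}\,x
          \;+\; q_{5}\,y^2 + 2q_{6}\,yz + 2q_{7}\,y
          \;+\; q_{10}\,z^2 + 2q_{11}\,z + q_{15},
\qquad v = (x,\, y,\, z,\, 1)^\top
```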


Edited by spacerat, 15 May 2014 - 09:43 PM.

### #2 eppo

Posted 10 May 2014 - 11:23 AM

Based on what criteria do you prioritize one contraction over the other? I don't see you (re-)sorting the list of triangles.

### #3 Aressera

Posted 10 May 2014 - 01:09 PM

If you use a proper data structure (custom heap) for the edge list, you can get similar performance to your implementation, but with much better quality. This keeps the edge list sorted automatically, and if you store the position in the (array-based) heap along with the edge, you can get O(log(N)) updates/removals with each iteration. The problem with not updating the list of edges on each iteration is that the algorithm no longer optimally picks the edge with the least error, so your results will not be as good as the original quadric-error edge collapse algorithm.
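As an aside, if a position-tracked heap sounds like too much machinery, a common simpler variant (not what the code later in this thread does) is lazy invalidation on top of `std::priority_queue`: when an edge's cost changes, push an updated copy with a bumped version number and skip stale entries when popping. A minimal sketch, with made-up `Collapse`/`drain` names:

```cpp
#include <queue>
#include <vector>

struct Collapse { double cost; int edge; int version; };

struct ByCost {
    bool operator()(const Collapse& a, const Collapse& b) const
    { return a.cost > b.cost; } // inverted so the cheapest collapse pops first
};

// Pop collapses cheapest-first, skipping entries whose version no longer
// matches the current version of their edge (i.e. superseded re-pushes).
std::vector<int> drain(std::priority_queue<Collapse, std::vector<Collapse>, ByCost> heap,
                       const std::vector<int>& version)
{
    std::vector<int> order;
    while (!heap.empty())
    {
        Collapse c = heap.top(); heap.pop();
        if (c.version != version[c.edge])
            continue; // stale entry: a newer copy was pushed after an update
        order.push_back(c.edge);
    }
    return order;
}
```

The trade-off versus a position-tracked heap is memory (stale entries accumulate until popped) against much simpler code, since `std::priority_queue` needs no `update()` operation.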

The problem you're probably seeing is that triangles can flip over during an edge collapse, causing overlap of mostly coplanar triangles. You can fix this by checking each triangle that shares the edge vertices to make sure that its normal does not change sign (dot product with old normal < 0).
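The test described here reduces to comparing a triangle's normal before and after moving the shared vertex; a minimal standalone version (with its own bare-bones `vec3`, not the thread's `vec3f`):

```cpp
// Bare-bones vector type for this sketch only.
struct vec3 {
    double x, y, z;
    vec3 operator-(const vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
};

vec3 cross(const vec3& a, const vec3& b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

double dot(const vec3& a, const vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Would moving vertex p0 of triangle (p0, p1, p2) to 'target' flip it?
// True when the (unnormalized) normal reverses direction; the magnitude
// cancels out of the sign test, so no normalization is needed.
bool would_flip(const vec3& p0, const vec3& p1, const vec3& p2, const vec3& target)
{
    vec3 n_old = cross(p1 - p0, p2 - p0);
    vec3 n_new = cross(p1 - target, p2 - target);
    return dot(n_old, n_new) < 0.0;
}
```

During a collapse this check would run for every triangle touching either endpoint of the edge (except the triangles that vanish), with `target` being the candidate collapse position.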

### #4 spacerat

Posted 12 May 2014 - 03:40 AM

> Based on what criteria do you prioritize one contraction over the other? I don't see you (re-)sorting the list of triangles.

It's threshold-based: one threshold for the edge length, and one for the quadric error. Once the triangle-error problem is solved, I will try a sorted list for better quality.

Currently, the decision whether a triangle is removed is made in line 60 of the code above (the test1 && test2 check).

> If you use a proper data structure (custom heap) for the edge list, you can get similar performance to your implementation, but with much better quality. This keeps the edge list sorted automatically, and if you store the position in the (array-based) heap along with the edge, you can get O(log(N)) updates/removals with each iteration. The problem with not updating the list of edges on each iteration is that the algorithm no longer optimally picks the edge with the least error, so your results will not be as good as the original quadric-error edge collapse algorithm.
>
> The problem you're probably seeing is that triangles can flip over during an edge collapse, causing overlap of mostly coplanar triangles. You can fix this by checking each triangle that shares the edge vertices to make sure that its normal does not change sign (dot product with old normal < 0).

The triangle flipping is a good point. I have spent a lot of time figuring out how to do this test efficiently without slowing the method down too much. The result is a modified method that stores, for each vertex, a list of the triangles referencing it. This makes it efficient to test for flipping when collapsing a vertex.

In the code below, this id list is named refs. It is referred to by tstart and tcount in Vertex, and contains the triangle ID (tid) and the vertex offset inside the triangle (tvertex). (In the current version it just grows - in the final version it needs to be re-initialized every couple of iterations to avoid using too much RAM.)
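The refs layout described above is essentially a three-pass counting-sort adjacency build; here is a standalone sketch with plain index arrays (the helper names are mine, not the ones in Simplify.h):

```cpp
#include <array>
#include <vector>

struct Ref { int tid; int tvertex; };

// Build, for each vertex, the list of (triangle, corner) pairs that use it.
// tstart[v] is the offset of vertex v's slice inside refs; tcount[v] its length.
void build_refs(const std::vector<std::array<int,3>>& tris, int num_vertices,
                std::vector<int>& tstart, std::vector<int>& tcount,
                std::vector<Ref>& refs)
{
    tstart.assign(num_vertices, 0);
    tcount.assign(num_vertices, 0);
    refs.resize(tris.size() * 3);

    for (const auto& t : tris)                   // pass 1: count references
        for (int j = 0; j < 3; j++)
            tcount[t[j]]++;

    int offset = 0;                              // pass 2: prefix sums -> offsets
    for (int v = 0; v < num_vertices; v++)
    {
        tstart[v] = offset;
        offset += tcount[v];
        tcount[v] = 0;                           // reused as a write cursor
    }
    for (int i = 0; i < (int)tris.size(); i++)   // pass 3: scatter the refs
        for (int j = 0; j < 3; j++)
        {
            int v = tris[i][j];
            refs[tstart[v] + tcount[v]] = { i, j };
            tcount[v]++;
        }
}
```

This matches the three loops at the top of the reduce_polygons version below: count per vertex, turn the counts into offsets, then fill the slices.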

The problem is that even though it prevents some errors, flipped triangles still occur somehow. I could not resolve the problem yet. Is there anything else that can cause the flipping error? Or can you spot an error in the flipping test below (function flipped)?

Edit: It seems I found a hint: the problem arises when triangles become almost line-shaped, with all 3 points along one line.

Speed-wise, the method can basically be even faster, as it doesn't require that many iterations.

I first need to resolve the error, however, before speeding it up and improving the quality by sorting.

As for the edge-based structure, I have tried one before, but since the edges referenced the triangles, updating them after a triangle was removed got quite messy. The speed might be equal, yes.

Here is the new version (Simplify.h):

struct Triangle { int v[3]; double err[3]; bool deleted, dirty; vec3f n; };
struct Vertex   { vec3f p; int tstart, tcount; Matrix q; };
struct Ref      { int tid, tvertex; };
std::vector<Triangle> triangles;
std::vector<Vertex> vertices;
std::vector<Ref> refs;

double vertex_error(Matrix q, double x, double y, double z);
double calculate_error(int id_v1, int id_v2, vec3f &p_result);

bool flipped(vec3f p,int i0,int i1,Vertex &v0,Vertex &v1,std::vector<vec3f> &tmp_n)
{
    // p     : new vertex position ( (v0.p+v1.p)/2 for the most simple case )
    // i0    : index of vertex 0
    // i1    : index of vertex 1
    // v0    : edge vertex 0
    // v1    : edge vertex 1
    // tmp_n : array to temporarily store normals to avoid recomputing them

    loopk(0,v0.tcount)
    {
        Triangle &t=triangles[refs[v0.tstart+k].tid];
        int s=refs[v0.tstart+k].tvertex;
        int id1=t.v[(s+1)%3];
        int id2=t.v[(s+2)%3];

        if(id1==i1 || id2==i1 ) // delete ?
        {
            tmp_n[k].x=-10;
            continue;
        }
        vec3f p1 = vertices[id1].p;
        vec3f p2 = vertices[id2].p;
        vec3f n;
        n.cross(p1-p,p2-p);
        n.normalize();
        tmp_n[k]=n;

        if(n.dot(t.n)<0.0) return true;
    }
    return false;
}

void update_triangles(int i0,Vertex &v,std::vector<vec3f> &tmp_n,int &deleted_triangles)
{
    vec3f p;
    loopk(0,v.tcount)
    {
        Ref &r=refs[v.tstart+k];
        Triangle &t=triangles[r.tid];
        if(t.deleted)continue;
        if(tmp_n[k].x<-5)
        {
            t.deleted=1;
            deleted_triangles++;
            continue;
        }
        t.v[r.tvertex]=i0;
        t.n=tmp_n[k];
        t.dirty=1;
        t.err[0]=calculate_error(t.v[0],t.v[1],p);
        t.err[1]=calculate_error(t.v[1],t.v[2],p);
        t.err[2]=calculate_error(t.v[2],t.v[0],p);
        refs.push_back(r);
    }
}

void reduce_polygons()
{
    printf("reduce_polygons - start\n");
    int timeStart=timeGetTime();

    // Init defaults
    loopi(0,vertices.size())
    {
        Vertex &v=vertices[i];
        v.q=Matrix(0.0);
        v.tstart=0;
        v.tcount=0;
    }
    loopi(0,triangles.size())
        triangles[i].deleted=
        triangles[i].dirty=0;

    // init reference ID list
    refs.resize(triangles.size()*3);

    loopi(0,triangles.size())
    {
        Triangle &t=triangles[i];
        loopj(0,3) vertices[t.v[j]].tcount++;
    }
    int tstart=0;
    loopi(0,vertices.size())
    {
        Vertex &v=vertices[i];
        v.tstart=tstart;
        tstart+=v.tcount;
        v.tcount=0;
    }
    loopi(0,triangles.size())
    {
        Triangle &t=triangles[i];
        loopj(0,3)
        {
            Vertex &v=vertices[t.v[j]];
            refs[v.tstart+v.tcount].tid=i;
            refs[v.tstart+v.tcount].tvertex=j;
            v.tcount++;
        }
    }
    loopi(0,triangles.size())
    {
        Triangle &t=triangles[i]; vec3f n,p[3];

        loopj(0,3) p[j]=vertices[t.v[j]].p;
        n.cross(p[1]-p[0],p[2]-p[0]);
        n.normalize();
        t.n=n;

        loopj(0,3) vertices[t.v[j]].q =
            vertices[t.v[j]].q+Matrix(n.x,n.y,n.z,-n.dot(p[0]));
    }
    // Calc edge error
    loopi(0,triangles.size())
    {
        Triangle &t=triangles[i]; vec3f p;
        loopj(0,3) t.err[j]=calculate_error(t.v[j],t.v[(j+1)%3],p);
    }
    int deleted_triangles=0;
    std::vector<vec3f> tmp_n0,tmp_n1;

    loopl(0,20) // iteration
    {
        // remove vertices & mark deleted triangles
        loopi(0,triangles.size()) if(!triangles[i].deleted) if(!triangles[i].dirty)
        {
            Triangle &t=triangles[i];

            loopj(0,3) // all edges of one triangle
            {
                int i0=t.v[ j     ]; Vertex &v0 = vertices[i0];
                int i1=t.v[(j+1)%3]; Vertex &v1 = vertices[i1];

                bool test1=t.err[j] < 0.00000001*l*l*l*l*l;
                bool test2=(v0.p-v1.p).length()<0.3*l;

                // remove based on edge length and quadric error
                if(test1 && test2) // remove edge ?
                {
                    vec3f p;
                    calculate_error(i0,i1,p); // quadric based new edge
                    //p=(v1.p+v0.p)*0.5;      // simple new edge for testing

                    tmp_n0.resize(v0.tcount); // normals temporarily
                    tmp_n1.resize(v1.tcount); // normals temporarily

                    // dont remove if flipped
                    if( flipped(p,i0,i1,v0,v1,tmp_n0) ) continue;
                    if( flipped(p,i1,i0,v1,v0,tmp_n1) ) continue;

                    // not flipped, so remove edge
                    v0.p=p;
                    v0.q=v1.q+v0.q;
                    int tstart=refs.size();
                    update_triangles(i0,v0,tmp_n0,deleted_triangles);
                    update_triangles(i0,v1,tmp_n1,deleted_triangles);
                    int tcount=refs.size()-tstart;
                    v0.tstart=tstart;
                    v0.tcount=tcount;
                    break;
                }
            }
        }
        // clear dirty flag
        loopi(0,triangles.size()) triangles[i].dirty=0;
    }
}


Edited by spacerat, 12 May 2014 - 10:29 AM.

### #5 spacerat

Posted 12 May 2014 - 10:47 PM

Ok, now the new method almost works. I found two reasons why it didn't work before:

• I recalculated the normal after collapsing - it's better to keep the original
• Flipping happened once a triangle became almost line-shaped and its normal was 0

The new version avoids this as follows:

• The new normal is always tested against the original normal
• The inner angle between two edges of the new triangle is tested so that it does not become too line-shaped (dot > 0.99)
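The second test can be sketched as comparing normalized edge directions of the candidate triangle. A standalone illustration (the helper names are mine), using the same 0.99 threshold as above:

```cpp
#include <cmath>

// Bare-bones vector type for this sketch only.
struct vec3 {
    double x, y, z;
    vec3 operator-(const vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
};

double dot(const vec3& a, const vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

vec3 normalized(const vec3& v)
{
    double len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Reject a collapse candidate whose resulting triangle (p, p1, p2) would be
// nearly line-shaped: the two edges leaving p are almost (anti-)parallel,
// which makes the normal numerically unstable.
bool nearly_degenerate(const vec3& p, const vec3& p1, const vec3& p2)
{
    vec3 d1 = normalized(p1 - p);
    vec3 d2 = normalized(p2 - p);
    return std::fabs(dot(d1, d2)) > 0.99; // same threshold the post uses
}
```

Rejecting these candidates avoids the zero-length normals that made the flip test above unreliable.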

Remaining problem:

• The edge collapse now stops before reaching the target number of triangles, as any further collapse would violate the two tests above.

Now, what is the best solution to that ?

Here is the new version: Simplify.h (including sorting by error and stopping at a certain number of triangles).

Edited by spacerat, 12 May 2014 - 10:48 PM.

### #6 Aressera

Posted 13 May 2014 - 12:01 AM

That's why you need to update the error for neighboring edges after each collapse, and re-sort the edge collapse list at each iteration after adding newly created edges. There are collapses you could still make, but you don't know about them because they're not at the top of the heap, or because they correspond to new edges that weren't in the original mesh and therefore don't even exist in the collapse list.

Here's my working edge-collapse code (it won't compile as-is), but it's well commented and should give you an idea of how to handle the re-sorting efficiently.


class FatVertex
{
public:

inline FatVertex( const Vector3& newPosition )
:	position( newPosition ),
finalIndex( 0 ),
collapsed( false ),
checked( false )
{
}

Vector3 position;

/// A list of the indices of the vertices that share an edge with vertex.
ShortArrayList<Index,4> vertexNeighbors;

/// A list of the indices of the triangles that use this vertex.
ShortArrayList<Index,6> triangleNeighbors;

/// The final index of this vertex in the output list.
Index finalIndex;

/// A boolean value indicating whether or not this vertex has been collapsed.
Bool collapsed;

/// A boolean value indicating whether or not this vertex has been checked.
Bool checked;

};

class FatTriangle
{
public:

inline FatTriangle( Index newV0, Index newV1, Index newV2, const Plane3& newPlane )
:	plane( newPlane ),
finalIndex( 0 ),
collapsed( false )
{
v[0] = newV0;
v[1] = newV1;
v[2] = newV2;
}

inline Bool hasVertex( const Index vertexIndex ) const
{
return v[0] == vertexIndex || v[1] == vertexIndex || v[2] == vertexIndex;
}

inline Bool getVertexIndex( const Index vertexIndex, Index& index ) const
{
for ( Index i = 0; i < 3; i++ )
{
if ( v[i] == vertexIndex )
{
index = i;
return true;
}
}
return false;
}

inline Bool replaceVertex( Index replaceIndex, Index newIndex )
{
if ( v[0] == replaceIndex )
v[0] = newIndex;
else if ( v[1] == replaceIndex )
v[1] = newIndex;
else if ( v[2] == replaceIndex )
v[2] = newIndex;
else
return false;

return true;
}

/// The indices of the vertices of this triangle.
Index v[3];

/// The plane equation for this triangle.
Plane3 plane;

/// The final index of this triangle in the output list.
Index finalIndex;

/// A boolean value indicating whether or not this triangle has been collapsed.
Bool collapsed;

};

class EdgeCollapse
{
public:

inline EdgeCollapse( Index newV1, Index newV2, const Vector3& newTarget, Real newCost )
:	v1( newV1 ),
v2( newV2 ),
target( newTarget ),
cost( newCost ),
queueIndex( math::max<Index>() )
{
}

/// Return whether or not this edge collapse is the same as another.
inline Bool operator == ( const EdgeCollapse& other ) const
{
return (v1 == other.v1 && v2 == other.v2) || (v1 == other.v2 && v2 == other.v1);
}

Index v1;
Index v2;

Vector3 target;
Real cost;

/// The index position of this edge collapse in the edge collapse queue.
Index queueIndex;

};

class EdgeCollapseReference
{
public:

inline EdgeCollapseReference( EdgeCollapse* newCollapse )
:	collapse( newCollapse )
{
}

/// Return whether or not this edge collapse is the same as another.
inline Bool operator == ( const EdgeCollapseReference& other ) const
{
return collapse == other.collapse;
}

/// Return whether or not this edge collapse has a lower cost than another.
inline Bool operator < ( const EdgeCollapseReference& other ) const
{
return collapse->cost > other.collapse->cost;
}

inline Real getCost() const
{
return collapse->cost;
}

EdgeCollapse* collapse;

};

class EdgeCollapseQueue
{
public:

inline EdgeCollapseQueue( Size newCapacity )
:	array( util::allocate<EdgeCollapseReference>( newCapacity ) ),
capacity( newCapacity ),
numCollapses( 0 )
{
}

inline ~EdgeCollapseQueue()
{
this->clear();
util::deallocate( array );
}

inline void add( const EdgeCollapseReference& newCollapse )
{
if ( numCollapses == capacity )
doubleCapacity();

Index i = numCollapses;
Index parent = getParentIndex(i);

new (array + numCollapses) EdgeCollapseReference( newCollapse );
array[i].collapse->queueIndex = i;
numCollapses++;

// reorder the queue's heap so that the new element is in the correct place.
while ( array[parent] < array[i] )
{
swap( parent, i );

i = parent;
parent = getParentIndex(i);
}
}

inline void remove()
{
// Decrement the number of elements in the queue.
numCollapses--;

// Copy the last element in the queue's array into the largest element's location.
array[0] = array[numCollapses];
array[0].collapse->queueIndex = 0;

// Call the destructor for the old last element.
array[numCollapses].~EdgeCollapseReference();

// Convert the new array into a heap, starting at the first element.
this->heapify( 0 );
}

/// Ensure that the heap is properly ordered after the specified element was changed.
inline void update( Index i )
{
if ( i > 0 )
{
Index parent = getParentIndex(i);

// Did the entry increase its value?
if ( array[parent] < array[i] )
{
do
{
swap( parent, i );

i = parent;
parent = getParentIndex(i);
}
while ( i > 0 && array[parent] < array[i] );

return;
}
}

this->heapify( i );
}

inline const EdgeCollapseReference& getFirst() const
{
return *array;
}

inline Size getSize() const
{
return numCollapses;
}

inline void clear()
{
if ( array != NULL )
callDestructors( array, numCollapses );

numCollapses = Size(0);
}

private:

/// Get the index of a child element's parent element given the child element's index.
/**
* If the child index is zero, the method returns 0 because this child element is
* at the top of the heap and has no parent by definition.
*/
inline static Index getParentIndex( Index child )
{
if ( child == Index(0) )
return Index(0);

return (child - Index(1))/Index(2);
}

/// Get the index of the left child element given the parent element's index.
inline static Index getChildIndex1( Index parent )
{
return (parent << 1) + Index(1);
}

/// Get the index of the right child element given the parent element's index.
inline static Index getChildIndex2( Index parent )
{
return (parent << 1) + Index(2);
}

/// Convert the specified sub-heap, with root at index i, into a heap.
inline void heapify( Index i )
{
while ( i < numCollapses )
{
Index childIndex1 = getChildIndex1(i);
Index childIndex2 = getChildIndex2(i);
Index max;

if ( childIndex1 < numCollapses && array[i] < array[childIndex1] )
max = childIndex1;
else
max = i;

if ( childIndex2 < numCollapses && array[max] < array[childIndex2] )
max = childIndex2;

if ( max == i )
break;

swap( max, i );
i = max;
}
}

// Swap two elements in the heap.
GSOUND_FORCE_INLINE void swap( Index index1, Index index2 )
{
EdgeCollapseReference temp = array[index1];
array[index1] = array[index2];
array[index2] = temp;

// Update the indices for the swapped elements.
array[index1].collapse->queueIndex = index1;
array[index2].collapse->queueIndex = index2;
}

inline void doubleCapacity()
{
// check to see if the array has not yet been allocated.
if ( capacity == 0 )
{
// allocate the array to hold elements.
capacity = 8;
array = util::allocate<EdgeCollapseReference>( capacity );
}
else
resize( capacity << 1 );
}

/// Double the size of the element array to increase the capacity of the priority queue.
/**
* This method deallocates the previously used array, and then allocates
* a new block of memory with a size equal to double the previous size.
* It then copies over all of the previous elements into the new array.
*/
void resize( Size newCapacity )
{
// Update the capacity and allocate a new array.
capacity = newCapacity;
EdgeCollapseReference* oldArray = array;
array = util::allocate<EdgeCollapseReference>( capacity );

// copy the elements from the old array to the new array.
moveObjects( array, oldArray, numCollapses );

// deallocate the old array.
util::deallocate( oldArray );
}

inline static void moveObjects( EdgeCollapseReference* destination,
const EdgeCollapseReference* source, Size number )
{
const EdgeCollapseReference* const sourceEnd = source + number;

while ( source != sourceEnd )
{
// copy the object from the source to destination
new (destination) EdgeCollapseReference(*source);

// call the destructors on the source
source->~EdgeCollapseReference();

destination++;
source++;
}
}

inline static void callDestructors( EdgeCollapseReference* array, Size number )
{
const EdgeCollapseReference* const arrayEnd = array + number;

while ( array != arrayEnd )
{
array->~EdgeCollapseReference();
array++;
}
}

EdgeCollapseReference* array;

Size capacity;

Size numCollapses;

};

class QEMVertex
{
public:

inline QEMVertex( const Matrix4& newQ )
:	Q( newQ )
{
}

/// The quadric error metric matrix Q for this vertex.
Matrix4 Q;

/// A list of the edge collapses that include this vertex.
ShortArrayList<EdgeCollapseReference,4> collapses;

};

Bool MeshPreprocessor:: collapseEdges( ArrayList<FatVertex>& vertices, ArrayList<FatTriangle>& triangles, Real maxCost )
{
//***************************************************************************
// Compute the error matrix for each vertex in the mesh.

const Size numVertices = vertices.getSize();
ArrayList<QEMVertex> qemVertices( numVertices );

for ( Index i = 0; i < numVertices; i++ )
{
FatVertex& vertex = vertices[i];
vertex.checked = false;
qemVertices.add( computeQ( vertex, triangles ) );
}

//***************************************************************************
// Compute the target vertices and initial costs for all edges in the mesh.

ArrayList<EdgeCollapse> edgeCollapses;

for ( Index i = 0; i < numVertices; i++ )
{
FatVertex& vertex = vertices[i];
QEMVertex& qemVertex = qemVertices[i];
const Size numNeighbors = vertex.vertexNeighbors.getSize();

// Skip previously collapsed vertices.
if ( vertex.collapsed )
continue;

for ( Index n = 0; n < numNeighbors; n++ )
{
const Index neighborIndex = vertex.vertexNeighbors[n];
FatVertex& vertex2 = vertices[neighborIndex];

// Skip neighbors that have already been checked.
if ( vertex2.checked || vertex2.collapsed )
continue;

const QEMVertex& qemVertex2 = qemVertices[neighborIndex];

// Compute the combined error matrix for these vertices.
Matrix4 Q12 = qemVertex.Q + qemVertex2.Q;

// Compute the optimal collapse vertex and the cost for this edge collapse.
Vector3 target = computeCollapseVertex( Q12, vertex.position, vertex2.position );

// Compute the error for this target vertex.
Real cost = computeQError( Q12, target );

// Add this edge collapse to the edge collapse list.
edgeCollapses.add( EdgeCollapse( i, neighborIndex, target, cost ) );
}

vertex.checked = true;
}

EdgeCollapseQueue edgeCollapseQueue( edgeCollapses.getSize() );

for ( Index i = 0; i < edgeCollapses.getSize(); i++ )
{
EdgeCollapse& collapse = edgeCollapses[i];
QEMVertex& qemV1 = qemVertices[collapse.v1];
QEMVertex& qemV2 = qemVertices[collapse.v2];

// Register the collapse with both endpoints and add it to the queue.
EdgeCollapseReference collapseReference( &collapse );
qemV1.collapses.add( collapseReference );
qemV2.collapses.add( collapseReference );
edgeCollapseQueue.add( collapseReference );
}

//***************************************************************************
// Collapse edges until the maximum cost is reached.

nextEdgeCollapse:

while ( edgeCollapseQueue.getSize() > 0 )
{
// Get the next edge collapse, then remove it from the queue.
EdgeCollapseReference collapseReference = edgeCollapseQueue.getFirst();
edgeCollapseQueue.remove();

// Check to see if all further collapses exceed the maximum cost parameter.
// If so, we are done collapsing edges.
if ( collapseReference.getCost() > maxCost )
break;

EdgeCollapse& collapse = *collapseReference.collapse;

// Skip degenerate edge collapses.
if ( collapse.v1 == collapse.v2 )
continue;

const Index fromIndex = collapse.v1;
FatVertex& fromVertex = vertices[fromIndex];

const Index toIndex = collapse.v2;
FatVertex& toVertex = vertices[toIndex];

// Skip 'from' or 'to' vertices that have already been collapsed.
if ( fromVertex.collapsed || toVertex.collapsed )
continue;

if ( vertexIsBorder( fromVertex, triangles ) || vertexIsBorder( toVertex, triangles ) )
continue;

//***************************************************************************
// Check to make sure this edge collapse won't invert any triangles.

for ( Index t = 0; t < fromVertex.triangleNeighbors.getSize(); t++ )
{
const Index triangleIndex = fromVertex.triangleNeighbors[t];
const FatTriangle& triangle = triangles[triangleIndex];

// Check only triangles that aren't removed.
if ( !triangle.hasVertex( toIndex ) )
{
// Recompute the triangle plane equation.
Plane3 newPlane( triangle.v[0] == fromIndex ? collapse.target : vertices[triangle.v[0]].position,
triangle.v[1] == fromIndex ? collapse.target : vertices[triangle.v[1]].position,
triangle.v[2] == fromIndex ? collapse.target : vertices[triangle.v[2]].position );

// If the normals don't point in the same direction, then the triangle
// would be reversed by this collapse and the mesh would fold over.
// Don't do this edge collapse because it makes the model degenerate.
if ( math::dot( triangle.plane.normal, newPlane.normal ) < Real(0.0) )
goto nextEdgeCollapse;
}
}

for ( Index t = 0; t < toVertex.triangleNeighbors.getSize(); t++ )
{
const Index triangleIndex = toVertex.triangleNeighbors[t];
const FatTriangle& triangle = triangles[triangleIndex];

// Check only triangles that aren't removed.
if ( !triangle.hasVertex( fromIndex ) )
{
// Recompute the triangle plane equation.
Plane3 newPlane( triangle.v[0] == toIndex ? collapse.target : vertices[triangle.v[0]].position,
triangle.v[1] == toIndex ? collapse.target : vertices[triangle.v[1]].position,
triangle.v[2] == toIndex ? collapse.target : vertices[triangle.v[2]].position );

// If the normals don't point in the same direction, then the triangle
// would be reversed by this collapse and the mesh would fold over.
// Don't do this edge collapse because it makes the model degenerate.
if ( math::dot( triangle.plane.normal, newPlane.normal ) < Real(0.0) )
goto nextEdgeCollapse;
}
}

//***************************************************************************

// Mark the 'from' vertex as collapsed.
fromVertex.collapsed = true;

// Update the 'to' vertex to have the optimal collapsed position.
toVertex.position = collapse.target;

// Update the triangles that shared the from vertex.
for ( Index t = 0; t < fromVertex.triangleNeighbors.getSize(); t++ )
{
const Index triangleIndex = fromVertex.triangleNeighbors[t];
FatTriangle& triangle = triangles[triangleIndex];

if ( triangle.hasVertex( toIndex ) )
{
// This triangle becomes degenerate after the edge collapse.
// Mark it as collapsed.
triangle.collapsed = true;

// Remove this triangle from the list of neighbors for its vertices.
if ( triangle.v[0] != fromIndex )
vertices[triangle.v[0]].triangleNeighbors.remove( triangleIndex );

if ( triangle.v[1] != fromIndex )
vertices[triangle.v[1]].triangleNeighbors.remove( triangleIndex );

if ( triangle.v[2] != fromIndex )
vertices[triangle.v[2]].triangleNeighbors.remove( triangleIndex );
}
else
{
// This triangle needs to be updated with the new vertex index.
triangle.replaceVertex( fromIndex, toIndex );

// Recompute the triangle plane equation.
triangle.plane = Plane3( vertices[triangle.v[0]].position,
vertices[triangle.v[1]].position,
vertices[triangle.v[2]].position );

// Make sure the 'to' vertex has this triangle as its neighbor.
if ( !toVertex.triangleNeighbors.contains( triangleIndex ) )
toVertex.triangleNeighbors.add( triangleIndex );
}
}

// Clear the triangle neighbor list for the 'from' vertex since it is no longer used.
fromVertex.triangleNeighbors.clear();

toVertex.vertexNeighbors.removeUnordered( fromIndex );

for ( Index i = 0; i < fromVertex.vertexNeighbors.getSize(); i++ )
{
const Index neighborIndex = fromVertex.vertexNeighbors[i];

if ( neighborIndex == toIndex )
continue;

// Every neighbor loses the 'from' vertex as a neighbor.
FatVertex& neighbor = vertices[neighborIndex];
neighbor.vertexNeighbors.removeUnordered( fromIndex );

// Add this neighbor of the 'from' vertex to the list of neighbors for the 'to' vertex.
if ( !toVertex.vertexNeighbors.contains( neighborIndex ) )
{
// Update the 'to' vertex.
toVertex.vertexNeighbors.add( neighborIndex );

// Update the neighbor vertex.
neighbor.vertexNeighbors.add( toIndex );
}
}

fromVertex.vertexNeighbors.clear();

QEMVertex& fromQEMVertex = qemVertices[fromIndex];
QEMVertex& toQEMVertex = qemVertices[toIndex];

// Compute the error matrix for the collapsed vertex.
toQEMVertex.Q += fromQEMVertex.Q;

// Remove the current collapse from the 'to' vertex.
toQEMVertex.collapses.remove( collapseReference );

// Merge the collapses associated with the 'from' vertex with those from the 'to' vertex.
for ( Index i = 0; i < fromQEMVertex.collapses.getSize(); i++ )
{
const EdgeCollapseReference& ref = fromQEMVertex.collapses[i];

// Replace the 'from' vertex in the collapse structures.
if ( ref.collapse->v1 == fromIndex )
ref.collapse->v1 = toIndex;
else if ( ref.collapse->v2 == fromIndex )
ref.collapse->v2 = toIndex;

if ( ref.collapse->v1 == ref.collapse->v2 )
continue;

// Add the collapse to the collapses for the 'to' vertex if it is not a duplicate.
if ( !toQEMVertex.collapses.contains( ref ) )
toQEMVertex.collapses.add( ref );
}

// Clear the collapses from the 'from' vertex.
fromQEMVertex.collapses.clear();

// Recompute the cost for all edge collapses that involve the 'to' vertex.
for ( Index i = 0; i < toQEMVertex.collapses.getSize(); )
{
const EdgeCollapseReference& ref = toQEMVertex.collapses[i];
EdgeCollapse& newCollapse = *ref.collapse;
const Index v1Index = newCollapse.v1;
const Index v2Index = newCollapse.v2;

// Compute the combined error matrix for these vertices.
Matrix4 Q12 = qemVertices[v1Index].Q + qemVertices[v2Index].Q;

// Compute the new optimal collapse vertex and the cost for this edge collapse.
newCollapse.target = computeCollapseVertex( Q12, vertices[v1Index].position, vertices[v2Index].position );

// Compute the error for this target vertex.
newCollapse.cost = computeQError( Q12, newCollapse.target );

// Update the recomputed edge collapse in the heap.
edgeCollapseQueue.update( newCollapse.queueIndex );

i++;
}
}

//***************************************************************************

return true;
}

Matrix4 MeshPreprocessor::computeQ( const FatVertex& vertex, const ArrayList<FatTriangle>& triangles )
{
    const Size numTriangleNeighbors = vertex.triangleNeighbors.getSize();
    Matrix4 Q;
    
    for ( Index i = 0; i < numTriangleNeighbors; i++ )
    {
        const Index triangleIndex = vertex.triangleNeighbors[i];
        const FatTriangle& triangle = triangles[triangleIndex];
        
        // Compute the error matrix for this triangle's plane.
        Vector4 p( triangle.plane.normal, triangle.plane.offset );
        Matrix4 Kp( p.x*p.x, p.y*p.x, p.z*p.x, p.w*p.x,
                    p.x*p.y, p.y*p.y, p.z*p.y, p.w*p.y,
                    p.x*p.z, p.y*p.z, p.z*p.z, p.w*p.z,
                    p.x*p.w, p.y*p.w, p.z*p.w, p.w*p.w );
        
        // Accumulate the error matrix.
        Q += Kp;
    }
    
    return Q;
}

Real MeshPreprocessor::computeQError( const Matrix4& Q, const Vector3& v )
{
    Vector4 v4( v, 1 );
    
    return math::abs( math::dot( v4, Q*v4 ) );
}

Vector3 MeshPreprocessor::computeCollapseVertex( const Matrix4& Q12, const Vector3& v1, const Vector3& v2 )
{
    /*
    // Compute the matrix to solve for the optimal collapse location.
    Matrix4 q = Q12;
    q.x.w = q.y.w = q.z.w = 0;
    q.w.w = 1;
    
    // Try inverting the matrix Q to compute the minimum-cost collapsed vertex.
    Matrix4 qInverse;
    
    Vector3 midpoint = math::midpoint( v1, v2 );
    Real midpointCost = computeQError( Q12, midpoint );
    
    if ( q.invert( qInverse, 0.1 ) && computeQError( Q12, qInverse.w.getXYZ() ) < midpointCost )
    {
        return qInverse.w.getXYZ();
    }
    else
    */
    {
        // The inversion path above is disabled, so pick the endpoint or midpoint with the lowest cost.
        Vector3 midpoint = math::midpoint( v1, v2 );
        Real midpointCost = computeQError( Q12, midpoint );
        Real v1Cost = computeQError( Q12, v1 );
        Real v2Cost = computeQError( Q12, v2 );
        
        // Return the vertex or midpoint with the least cost.
        if ( v1Cost < v2Cost )
        {
            if ( v1Cost < midpointCost )
                return v1;
            else
                return midpoint;
        }
        else
        {
            if ( v2Cost < midpointCost )
                return v2;
            else
                return midpoint;
        }
    }
}

Bool MeshPreprocessor::vertexIsBorder( const FatVertex& vertex, const ArrayList<FatTriangle>& triangles )
{
    const Size numVertexNeighbors = vertex.vertexNeighbors.getSize();
    const Size numTriangleNeighbors = vertex.triangleNeighbors.getSize();
    
    for ( Index v = 0; v < numVertexNeighbors; v++ )
    {
        const Index neighborIndex = vertex.vertexNeighbors[v];
        Size numNeighborTriangles = 0;
        
        // Count how many neighboring triangles contain the edge (vertex, neighbor).
        for ( Index t = 0; t < numTriangleNeighbors; t++ )
        {
            const FatTriangle& triangle = triangles[vertex.triangleNeighbors[t]];
            
            if ( triangle.hasVertex( neighborIndex ) )
                numNeighborTriangles++;
        }
        
        // An edge shared by exactly one triangle lies on the mesh border.
        if ( numNeighborTriangles == Size(1) )
            return true;
    }
    
    return false;
}


Edited by Aressera, 13 May 2014 - 12:04 AM.

### #7spacerat  Members

Posted 13 May 2014 - 08:04 AM

Thanks a lot for the hints and the code. Now it works well.

You are right - the error updating was the issue. I have resolved that now.

The main problem was that I needed to recompute the quadric per vertex from scratch every couple of iterations.

For the other iterations I simply add the quadric of the collapsed vertex to the target vertex and update all affected edges.
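
A minimal sketch of that update scheme (all types and names here are illustrative, not the code from this thread): on each collapse the removed vertex's quadric is folded into the survivor's, which is cheap but drifts from the exact error over time, so the quadrics are periodically rebuilt from the current triangle planes.

```cpp
#include <cstddef>
#include <vector>

// Illustrative quadric type; the real one is a symmetric 4x4 error matrix
// stored as its 10 unique coefficients.
struct Quadric
{
    double m[10] = {};

    Quadric& operator+=( const Quadric& o )
    {
        for ( std::size_t i = 0; i < 10; i++ )
            m[i] += o.m[i];
        return *this;
    }
};

// Cheap per-collapse update: the surviving vertex inherits the removed
// vertex's accumulated error. Because this drifts from the exact error,
// the per-vertex quadrics would be rebuilt from the surviving triangle
// planes every few iterations (rebuild helper omitted here).
void mergeQuadricOnCollapse( std::vector<Quadric>& quadrics, int removed, int target )
{
    quadrics[target] += quadrics[removed];
}
```

The key trade-off is that the incremental merge is O(1) per collapse, while the exact rebuild touches every triangle around a vertex, so doing the rebuild only occasionally keeps the total cost low.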

Left to do is a check for whether a vertex is a border vertex, and support for vertex attributes.

Compared to Meshlab, it's about 4x faster.

It is several times slower than vertex clustering (Rapid Simplification of Multi-Attribute Meshes, HPG paper),

but has much higher quality thanks to the quadric error metric.

Even though there is no sorting, just a steadily increasing threshold, the result is still good.
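
The threshold-based selection can be sketched like this (a hypothetical schedule; the constant and the exponent form are assumptions, not necessarily the exact values used):

```cpp
#include <cmath>

// Hypothetical collapse threshold that grows with each outer pass. Instead
// of maintaining a sorted queue of edge costs, each pass simply collapses
// every edge whose quadric error is below the current threshold. A lower
// "aggressiveness" grows the threshold more slowly: more passes are needed,
// but the result is closer to what strict cost ordering would produce.
double collapseThreshold( int iteration, double aggressiveness )
{
    return 1e-9 * std::pow( double(iteration + 3), aggressiveness );
}
```

Each simplification pass would then test every remaining edge's cached error against `collapseThreshold(pass, aggressiveness)` and collapse the ones below it, until the target triangle count is reached.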

On the left is the program output (5 seconds), on the right the Meshlab result (20 seconds), for reducing 2 million triangles down to 20k.

Edited by spacerat, 14 May 2014 - 08:39 PM.

### #8L. Spiro  Members

Posted 13 May 2014 - 03:32 PM

Nice work, Sven.

L. Spiro

### #9spacerat  Members

Posted 15 May 2014 - 07:31 PM

Thank you Spiro!

I just updated the code:

• Now everything is commented
• Borders are preserved well in the new version
• Added a new parameter (aggressiveness) to control the collapsing speed. If the parameter is lower (e.g. 5 instead of 7), more iterations are required and the result is closer to the optimum.
• The matrix is reduced from 16 to 10 elements since it is symmetric
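
A symmetric 4x4 quadric reduced to its 10 unique coefficients might look like this (an illustrative sketch, not the actual code from the download):

```cpp
// Illustrative symmetric quadric: only the upper triangle of the 4x4 error
// matrix is stored (10 doubles instead of 16). All names are hypothetical.
struct SymQuadric
{
    // Layout of the upper triangle:
    // [0]=q11 [1]=q12 [2]=q13 [3]=q14
    //         [4]=q22 [5]=q23 [6]=q24
    //                 [7]=q33 [8]=q34
    //                         [9]=q44
    double q[10];

    // Builds the fundamental quadric Kp = p * p^T for the plane
    // a*x + b*y + c*z + d = 0 (with the normal (a,b,c) unit length).
    static SymQuadric fromPlane( double a, double b, double c, double d )
    {
        SymQuadric s = { { a*a, a*b, a*c, a*d,
                                b*b, b*c, b*d,
                                     c*c, c*d,
                                          d*d } };
        return s;
    }

    // Evaluates v^T Q v for v = (x, y, z, 1); each off-diagonal term
    // appears twice because the matrix is symmetric.
    double error( double x, double y, double z ) const
    {
        return q[0]*x*x + 2*q[1]*x*y + 2*q[2]*x*z + 2*q[3]*x
             + q[4]*y*y + 2*q[5]*y*z + 2*q[6]*y
             + q[7]*z*z + 2*q[8]*z
             + q[9];
    }
};
```

For a quadric built from a single plane, `error()` returns the squared distance of a point to that plane, which is the quantity the simplifier minimizes when choosing a collapse target.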

Here is a comparison to other methods, and the updated code (it got a bit larger due to the border tests): DOWNLOAD.

Edited by spacerat, 17 May 2014 - 02:21 AM.

### #10tanglaoya  Members

Posted 03 September 2014 - 08:41 AM

Dear spacerat,

Thank you very much for providing the program.

I downloaded the file and gave it a quick test. I noticed that there are executable files, and I ran .\bin64\PolygonSimplification.exe by double-clicking it. It loads wall.obj automatically and simplifies it. However, it crashed later. I tested another file, but it also failed. Is there anything wrong?

In addition, can the program simplify surface meshes that form several sub-domains? Currently I put two reverse-ordered triangles in the surface at the common boundary of two neighboring sub-domains. Is that correct? If not, how should a model with multiple sub-domains be formed?

I have put my model to here:

http://forums.cgsociety.org/attachment.php?attachmentid=175821

Thanks,
Tang Laoya

Edited by tanglaoya, 03 September 2014 - 08:44 AM.

### #11Void12  Members

Posted 27 September 2014 - 07:34 AM

More user-friendly code :)

Author, thanks!

But what about texture coordinates and normals? How are they computed?

Edited by Void12, 27 September 2014 - 08:04 AM.
