How do I find all the valid pairs if I have all of the vertices in the mesh?

3 comments, last by klaymore142 16 years ago
Hey, I have a problem with my program. I have all of the vertex positions in the mesh to be optimized. The algorithm is based on the old SIGGRAPH paper "Surface Simplification Using Quadric Error Metrics". There is a threshold, and any two vertices whose distance is less than the threshold form a valid pair. I need to find all of the valid pairs, but there are so many vertices in the mesh that iterating over every pair costs O(n*n) time, which is not tolerable. Is there a better way to do it? By the way, I'm using DirectX. Any suggestion is welcome; I'd really appreciate it. (I can't find an implementation of the algorithm; if someone knows of one, I would really appreciate that too.)
Well, there might be a better solution than this, but...

You can optimize the O(n^2) by realizing that a visited pair doesn't need to be visited a second time. For example, the first vertex could be paired with any other vertex (not itself). The second one could then be paired with any vertex, but not the first or itself. Same with the third, and so on.

This brings your efficiency to O(n*n/2). Still not that great, but better.
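A minimal sketch of that triangular loop in C++ (the `Vec3` struct and `validPairs` name are illustrative, not from the original post; comparing squared distances avoids a sqrt per pair):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical minimal vertex type; substitute whatever your
// D3D vertex format actually stores.
struct Vec3 { float x, y, z; };

// Collect every index pair (i, j), i < j, closer than `threshold`.
// Visiting only j > i halves the work, but it is still O(n^2).
std::vector<std::pair<std::size_t, std::size_t>>
validPairs(const std::vector<Vec3>& verts, float threshold)
{
    std::vector<std::pair<std::size_t, std::size_t>> pairs;
    const float t2 = threshold * threshold;  // squared distances, no sqrt needed
    for (std::size_t i = 0; i < verts.size(); ++i)
        for (std::size_t j = i + 1; j < verts.size(); ++j) {
            const float dx = verts[i].x - verts[j].x;
            const float dy = verts[i].y - verts[j].y;
            const float dz = verts[i].z - verts[j].z;
            if (dx * dx + dy * dy + dz * dz < t2)
                pairs.emplace_back(i, j);
        }
    return pairs;
}
```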
-----------------------
Codito, ergo sum
http://www.klaymore.net
Spatial partitioning can do better than O(n^2) time. Theoretically, partitioning n vertices into m partitions gets you O(n^2 / m) on average.

When I wrote my vertex welder, I simply partitioned all the vertices into a big octree and then for each leaf node, performed the standard O((n^2+n)/2) algorithm (as mentioned by above poster) to check distances between vertices and weld if needed.

On average, I have about sqrt(n) vertices in each leaf node of the octree - meaning there are as many leaf nodes as there are vertices per leaf. I have no idea why I chose this number; it was completely arbitrary, but it seems to work. It would theoretically give O(n^1.5) running time.

After implementing the octree, I received a 700x speed boost when welding a 100,000 vertex mesh (and about 90-100x for a more realistic mesh of about 10,000 vertices).
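The same idea can be sketched with a uniform grid instead of an octree - not what the poster built, but simpler to show, and looking at the 26 adjacent cells also catches pairs that straddle partition boundaries (which a per-leaf-only check can miss). The `Vec3` type and `validPairsGrid` name are illustrative:

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };  // illustrative vertex type

// Bucket vertices into a uniform grid whose cell size equals the
// threshold; any pair within the threshold must then lie in the same
// cell or in one of the 26 adjacent cells.
std::vector<std::pair<std::size_t, std::size_t>>
validPairsGrid(const std::vector<Vec3>& verts, float threshold)
{
    const float inv = 1.0f / threshold;
    auto cellOf = [&](const Vec3& v) {
        return std::array<int, 3>{ int(std::floor(v.x * inv)),
                                   int(std::floor(v.y * inv)),
                                   int(std::floor(v.z * inv)) };
    };

    std::map<std::array<int, 3>, std::vector<std::size_t>> grid;
    for (std::size_t i = 0; i < verts.size(); ++i)
        grid[cellOf(verts[i])].push_back(i);

    std::vector<std::pair<std::size_t, std::size_t>> pairs;
    const float t2 = threshold * threshold;
    for (std::size_t i = 0; i < verts.size(); ++i) {
        const std::array<int, 3> c = cellOf(verts[i]);
        // Scan this cell and its 26 neighbours.
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dz = -1; dz <= 1; ++dz) {
                    auto it = grid.find({c[0] + dx, c[1] + dy, c[2] + dz});
                    if (it == grid.end()) continue;
                    for (std::size_t j : it->second) {
                        if (j <= i) continue;  // emit each pair once
                        const float ex = verts[i].x - verts[j].x;
                        const float ey = verts[i].y - verts[j].y;
                        const float ez = verts[i].z - verts[j].z;
                        if (ex * ex + ey * ey + ez * ez < t2)
                            pairs.emplace_back(i, j);
                    }
                }
    }
    return pairs;
}
```

With a well-chosen cell size each bucket stays small, so the per-bucket brute force is cheap and the overall behavior approaches linear-ish time for evenly distributed vertices.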
NextWar: The Quest for Earth available now for Windows Phone 7.
Quote:Original post by klaymore142
This brings your efficiency to O(n*n/2). Still not that great, but better.
O(n*n/2) IS O(n^2). They are the same. For that matter, O((n*n+n)/2) = O(n^2) as well.

Spatial partitioning is really the way to go. Here's a potential way of doing it:

For every vertex <x,y,z>, generate two points: <x-threshold, y-threshold, z-threshold> and <x+threshold, y+threshold, z+threshold>. Sort all three of these points [the original and the two generated ones] along each axis separately, flagging the '-threshold' point as a starting point, the '+threshold' point as an ending point, and the original as a regular point. You now have three sorted lists, where each entry records which vertex it came from (an int index), a single coordinate value (a float or double, or whatever the type of your vertices normally is), and a three-state flag. Iterate over each sorted list, and for each vertex build the set of points whose 'regular point' lies between that vertex's 'starting point' and 'ending point' on that axis. Take the intersection of the three per-axis sets for each vertex, and you get a set of candidate points that you can check with your brute-force O(n^2) algorithm. Depending on your point density, this runs at approximately the cost of sorting, O(n log n), asymptotically, with a worst case of nearly n^2 for high point density [instances in which your threshold is large compared to the average distance between points].
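A stripped-down, single-axis version of this idea is easy to sketch: sort once by x, then for each vertex only scan forward while the x-gap is under the threshold. It skips the per-axis set intersection described above but shows where the "cost of sorting" behavior comes from; all names here are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };  // illustrative vertex type

// Sort-based pruning on a single axis: after sorting indices by x, a
// vertex only needs to be tested against later entries whose x lies
// within `threshold`; the first entry beyond that ends the inner scan.
std::vector<std::pair<std::size_t, std::size_t>>
validPairsSweep(const std::vector<Vec3>& verts, float threshold)
{
    std::vector<std::size_t> order(verts.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](std::size_t a, std::size_t b) { return verts[a].x < verts[b].x; });

    std::vector<std::pair<std::size_t, std::size_t>> pairs;
    const float t2 = threshold * threshold;
    for (std::size_t a = 0; a < order.size(); ++a) {
        const Vec3& p = verts[order[a]];
        for (std::size_t b = a + 1; b < order.size(); ++b) {
            const Vec3& q = verts[order[b]];
            if (q.x - p.x >= threshold) break;  // later entries only get farther in x
            const float dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
            if (dx * dx + dy * dy + dz * dz < t2)
                pairs.emplace_back(std::min(order[a], order[b]),
                                   std::max(order[a], order[b]));
        }
    }
    return pairs;
}
```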

You could further reduce the need for checking with clever selection of the axis onto which you are projecting, but for such a simple test as distance, it's really not worth the effort or computational time to discover an effective axis set.

If you would like to see pseudo code or something a bit more explicitly descriptive, let me know.

Pretty much any spatial partitioning scheme you use is going to resolve down to the cost of sorting [at best]. The one mentioned above is just a really simple way of doing it, and is suitable for this sort of activity.

[Edited by - Drigovas on April 3, 2008 2:51:52 PM]
Quote:O(n*n/2) IS O(n^2). They are the same. For that matter, O((n*n+n)/2) = O(n^2) as well.

Using proper big-O notation, you are right. Still, the notation hides the constant factor: n*n/2 is half the work of n*n, even though both grow at the same asymptotic rate.

It would be more precise to say the algorithm performs about n*n/2 distance checks.

But it becomes irrelevant, since there's a better solution anyway.
-----------------------
Codito, ergo sum
http://www.klaymore.net

This topic is closed to new replies.
