GraphicsGems polygon-cube intersection (fails)


Find it here. In short, it works most of the time, but in some cases where the polygon cuts the cube near a corner, it fails. It seems to me this only happens when the cube contains a portion of a polygon edge, but only in some cases (and yes, I've checked the segment_intersects_cube() routine five times over :)*).

* edit: ack! I just noticed a case where it also fails when the polygon slices through the cube but none of the actual edges are involved.

I have been going over this code for almost three hours now: I'm fairly sure my input is correct and that I've made no mistakes adapting it. Just to be sure, however, here's the adapted version (comments removed):
#define MAXDIM2(v) ((v)[0] > (v)[1] ? 0 : 1)
#define MAXDIM3(v) ((v)[0] > (v)[2] ? MAXDIM2(v) : MAXDIM2((v)+1)+1)
#define SQR(x) ((x)*(x))
#define IN_CLOSED_INTERVAL(a,x,b) (((x)-(a)) * ((x)-(b)) <= 0)
#define SIGN_NONZERO(x) ((x) < 0 ? -1 : 1)

#define seg_contains_point(a,b,x) (((b) > (x)) - ((a) > (x)))


extern int polygon_contains_point_3d(IVertex * verts, TVector3D polynormal, TReal point[3])
{
    TReal abspolynormal[3];
    int zaxis, xaxis, yaxis, count;
    int xdirection;

    for(int i = 0; i < 3; i++)
        abspolynormal[i] = fabs(polynormal[i]);

    zaxis = MAXDIM3(abspolynormal);

    if(polynormal[zaxis] < 0)
        {
        xaxis = (zaxis + 2) % 3;
        yaxis = (zaxis + 1) % 3;
        }
    else
        {
        xaxis = (zaxis + 1) % 3;
        yaxis = (zaxis + 2) % 3;
        }

    count = 0;
    IVertex* vtx = verts;
    while(vtx)
        {
        IVertex* v = vtx;
        IVertex* w = vtx->next ? vtx->next : verts;
        if((xdirection = seg_contains_point(v->vPosition[xaxis], w->vPosition[xaxis], point[xaxis])) != 0)
            {
            if(seg_contains_point(v->vPosition[yaxis], w->vPosition[yaxis], point[yaxis]))
                {
                if(xdirection * (point[xaxis]-v->vPosition[xaxis]) * (w->vPosition[yaxis]-v->vPosition[yaxis]) <= 
                   xdirection * (point[yaxis]-v->vPosition[yaxis]) * (w->vPosition[xaxis]-v->vPosition[xaxis]))
                   count += xdirection;
                }
            else
                {
                if(v->vPosition[yaxis] <= point[yaxis])
                    count += xdirection;
                }
            }
        vtx = vtx->next;
        }

    return count;
}




extern int
segment_intersects_cube(TVector3D v0, TVector3D v1)
{
    int iplus1, iplus2, edgevec_signs[3];
    TVector3D edgevec;

    edgevec = v1 - v0;

    for(int i = 0; i < 3; i++)
        edgevec_signs[i] = SIGN_NONZERO(edgevec[i]);
    for(int i = 0; i < 3; i++)
        {
        if(v0[i] * edgevec_signs[i] > .5) return 0;
        if(v1[i] * edgevec_signs[i] < -.5) return 0;
        }

    for(int i = 0; i < 3; i++)
        {
        TReal rhomb_normal_dot_v0, rhomb_normal_dot_cubedge;

        iplus1 = (i + 1) % 3;
        iplus2 = (i + 2) % 3;

        rhomb_normal_dot_v0 = edgevec[iplus2] * v0[iplus1]
            - edgevec[iplus1] * v0[iplus2];

        rhomb_normal_dot_cubedge = .5 *
            (edgevec[iplus2] * edgevec_signs[iplus1] +
            edgevec[iplus1] * edgevec_signs[iplus2]);

        if(SQR(rhomb_normal_dot_v0) > SQR(rhomb_normal_dot_cubedge))
            return 0;
        }
    
    return 1;
}





extern int
polygon_intersects_cube(IVertex * verts,
			TVector3D polynormal)
{
    TVector3D best_diagonal;
    TVector3D p;
    TReal t;

    IVertex* v = verts;
    while(v)
        {
        if(segment_intersects_cube(v->vPosition, v->next ? v->next->vPosition : verts->vPosition))
            return 1;
        v = v->next;
        }

    for(int i = 0; i < 3; i++)
        best_diagonal[i] = SIGN_NONZERO(polynormal[i]);

    t = VecDotProduct(&polynormal, &verts->vPosition) / VecDotProduct(&polynormal, &best_diagonal);

    if(!IN_CLOSED_INTERVAL(-.5, t, .5))
        return 0;

    p = best_diagonal * t;

    return polygon_contains_point_3d(verts, polynormal, p);
}


I'm sure people have used this code before and I'm sure in its original form it works correctly, but I'm failing to see where I'm going wrong here. So, if anyone has an idea or can spot a mistake, I'd appreciate it a lot :)

Is there a better test you have in mind? I'm all ears :). This test is cheap because I can optimize my code to produce unit-size octree nodes at the leaves, which means I only have to translate the polygons by the node's origin to do the test (i.e. no scaling involved).

If there's a better way out there, I'd love to hear about it, though. I tried to tackle this using my own brain, but all the various cases and tests got pretty expensive pretty fast.

Oh yes, the SAT (I'm starting to sound like an evangelist).

The same idea applies here, but in 3D.

I'll give you the raw test, not considering optimisations (especially since you already transform to the unit cube), but here goes.


// the 'footprint' of the box when projected along an axis.
void boxInterval(const Vector& axis, const Box& box, float& min, float& max)
{
    const Vector& c = box.centre;   // box centre position.
    const Vector& h = box.halfsize; // half size of the box. if the box is (1, 1, 1) in size, the half size is (0.5, 0.5, 0.5)

    // extent of the box on the axis
    float hx = fabs(axis.x) * h.x;
    float hy = fabs(axis.y) * h.y;
    float hz = fabs(axis.z) * h.z;
    float r = hx + hy + hz;

    // position of the box on the axis
    float p = dotProduct(c, axis);

    // interval of the box on the axis.
    min = p - r;
    max = p + r;
}

// the 'footprint' of a polygon when projected along an axis.
void polygonInterval(const Vector& axis, const Polygon& poly, float& min, float& max)
{
    // project the first vertex
    min = max = dotProduct(axis, poly.vertex[0]);

    // expand to the projection of the remaining vertices
    for(int i = 1; i < poly.vertexcount; i++)
    {
        float d = dotProduct(axis, poly.vertex[i]);
        if(d < min) min = d;
        else if(d > max) max = d;
    }
}

bool intervalsIntersect(const Vector& axis, const Polygon& poly, const Box& box)
{
    // this may not be necessary,
    // but if the axis of projection is degenerate (length < 1.0E-4f), ignore the test.
#define CHECK_DEGENERATE_AXES (1)
#ifdef CHECK_DEGENERATE_AXES
    float a2 = dotProduct(axis, axis);
    if(a2 < 1.0E-8f) return true;
#endif

    float bmin, bmax;
    float pmin, pmax;

    // calculate the footprint of each object along the axis
    boxInterval(axis, box, bmin, bmax);
    polygonInterval(axis, poly, pmin, pmax);

    // check if the projected intervals overlap.
    return (pmin <= bmax) && (pmax >= bmin);
}

bool polygonBoxIntersect(const Polygon& poly, const Box& box)
{
    // test the projection along the polygon normal.
    if(!intervalsIntersect(poly.normal, poly, box)) return false;

    // the box major axes, always constant for AABBs
    static const Vector boxAxes[3] = { Vector(1, 0, 0), Vector(0, 1, 0), Vector(0, 0, 1) };

    for(int i = 0; i < 3; i++)
    {
        // test the projection along this box major axis.
        if(!intervalsIntersect(boxAxes[i], poly, box)) return false;

        // test cross products of the polygon edges and the box axis.
        for(int j = 0, k = (poly.vertexcount - 1); j < poly.vertexcount; k = j, j++)
        {
            // polygon edge.
            Vector e = poly.vertex[k] - poly.vertex[j];

            // cross product of the polygon edge and the box major axis.
            Vector cross_axis = crossProduct(e, boxAxes[i]);

            // test the projection along the cross axis.
            if(!intervalsIntersect(cross_axis, poly, box)) return false;
        }
    }
    return true;
}


In 3D, the SAT axes are the unique face normals, plus the cross products of every unique edge of object A against every unique edge of object B.

This is for convex objects. If your polygons are concave, you will need to break them down into convex polygons first (tessellation).

Note that this is the unoptimised version. If you do, for example, axis-aligned boxes and triangles, you can reduce the maths, branching, etc. to something much faster (although the raw test is plenty fast to start with).

The principle of the Möller triangle-box algorithm is the same, except optimised for speed.

If you have polygons and not triangles, I'd consider reducing your polygons to triangles and using the Möller test.

Dave Eberly has another version of the algorithm, reduced to its minimum components, although the implementation is more cryptic (since it's an optimised version of the optimised version!).

Hey - I adapted your code and it works perfectly! Thank you for ending this - I've been stuck on it for ages now :D!

About triangulation: I can't do that at this stage (well, it'd simply be stupid to), which is why I've been racking my brain over how to do full polygon tests. I need the polygonalized data, and it would be far less efficient to start triangulating the geometry at this stage, as much of it is temporary and there can be a lot of faces involved in the tests (I triangulate only as the last stage, for rendering).

In any case - thanks once more - this has been most helpful!

Just keep in mind that the algorithm works only for convex polygons [grin].

Thanks for the warning - this might have ended up being a bit of a surprise as there's no guarantee that I'll only be dealing with convex polygons.

Incidentally, will the code fail outright or simply produce unexpected output if I pass a concave polygon to it? The vast majority of my geometry is convex (I'd say 99.9%), but since I'll be running CSG passes over it repeatedly, there's a chance it might produce concave results at one point or another. I'd be willing to accept a very small error margin for the time being if there's a good chance the collision will be handled correctly.

Undefined results. However, if you can use the convex hull (the convex polygon around your concave polygon, which should be relatively easy to compute), that would work. Of course, tessellation or triangulation would work better.

OpenGL has some utility libraries to tessellate polygons, if that's any help. Tessellation is not easy in general; however, if your polygons are not self-intersecting, it shouldn't be difficult to do.

Is this real-time, or are you 'cooking' your geometry?

Furthermore, have you considered using a bounding volume of some sort around your polygons for space partitioning? For example, bounding boxes, or, as I said briefly, the convex hull around your polygon (which would be the polygon itself if it is already convex, or otherwise as simple as removing the vertices that create concave corners). From experience, false positives are not that much of a problem, except for performance, but I think that's a trade-off worth considering in your case.

Finally, what about other methods of space partitioning? I'm quite partial to loose kd-trees from the collision-detection standpoint.

Also, for the collision tests themselves, convex geometry simplifies things greatly (even though it's not exactly simple in itself). Concave polygons mean you cannot use the SAT or most of the algorithms discussed everywhere, so I suspect you will have to triangulate/tessellate your polygons anyway.

I'm basically procedurally modelling geometry, which means I have to work with polygons (i.e. not triangles) - or rather, it is MUCH faster to work with polygons - and I have to be ready to handle concave geometry because, as mentioned above, I'm carrying out CSG operations. So, in short, I'm 'cooking' it. I can't think of a way my code could generate self-intersecting polygons, so I'm not worried about those.

Thing is, so far I haven't been able to produce a concave polygon that would actually end up going into the SAT test. It's possible I can avoid that entirely by carefully organizing my splits; on the other hand, I don't want this to throw a nasty surprise at me after I've forgotten about it and unexpected results start showing up.

Wrapping a convex hull around the concave polygon is actually a great idea! I don't need pixel-perfect precision at that stage, and it's unlikely that kind of approximation would end up visually impairing the result.

As for spatial structuring, I'm using octrees with AABB/AABB tests for simplicity and as primary filters (from there on, the SAT test for partitioning stuff into the octree, and finally a polygon/polyhedron test for precision culling once I'm sure I need to do it). These are working wonders and are pretty simple to manage as well (the polyhedron tests are pretty nasty, actually). I'll probably end up building some kind of BSP implementation for the final export, as I hope to be looking at tens of thousands of triangles in a scene at some point. I've already written a triangulator that works on concave polygons, and I'm being very careful to avoid holes (I simply split a polygon in two before cutting a hole in it - avoids a world of mess).

I'll have a look at loose kd-trees - although, TBH, it's way too early for collision at this stage, and I'm thinking of adding a bit of portability by using some better-standardized implementation of BSP trees :). Although... true enough, I've no idea how standardized anything is when it comes to BSP trees.

As this is editor-stage stuff, I want to be able to run it on the CPU only, which is why I specifically avoided using any GPU API.

The biggest concern I have right now is actually the point-in-polyhedron problem: I haven't implemented true boolean operations yet (so far I'm working with planes only), but at some point I'll have to, to clean up the insides. After that, there's a good chance the SAT test won't suffice, because I won't know for sure which octree nodes are inside a polyhedron and which are not (simple non-folding polyhedra are a piece of cake to manage - just test whether the node is behind all polygon planes if it doesn't intersect any of the polygons). But TBH I don't even have a clue how to approach testing a point in complex polyhedra. One hack of a workaround would possibly be to ignore 'inside' geometry entirely and cull it away in the very last stage (after which it won't be necessary to exclude geometry based on location anymore). However, that has its cost as well, so yeah... :D

The GLU tessellation stuff is not GPU, but CPU. It's one of those useful utilities you can use to pre-process your annoying geometry.

For BSP trees: they are a bit nasty. For collision detection I would look at offset BSP trees, as explained by Stan Melax and used in Quake 3. It's still a bit of a pain to implement and use.
