# Walls and Shadows in 2D

By Scott Mayo | Published Nov 09 2009 10:44 PM in Graphics Programming and Theory


This article will explore a technique for efficiently computing visible and lit regions in a 2D scene.

The algorithms presented are intended for use with large maps, and where computation time of some appreciable fraction of a second is tolerable. There are faster algorithms, but they involve solving the intersection and union of arbitrary polygons; the reasons for rejecting this approach are discussed below.

Consider a region, such as a cave, dungeon or city, viewed from above, such that floors are shown by areas and walls by lines. Floors are always enclosed by some set of walls. Put another way, all maps are bounded by a continuous wall. We will ignore other map features; for our purposes, the only “important” features are walls and floors.

Lights exist at points, and have a given area of effect, defined by a radius. Everything inside this radius that faces the light is considered lit. Lights can have overlapping areas. Walls cast shadows, making the walls and floors behind them unlit (there is no provision for light attenuating – getting dimmer over distance – things are either lit or unlit).

The observer also exists at a point. The problem is to quickly discover what he can see, where the area seen is defined as any point he has line of sight to (that is, those walls and floors not occluded by other walls) that is also lit. For efficiency reasons, the observer’s vision also has a maximum radius. Anything outside that radius is not visible, even if it is lit and unobstructed. The algorithm can function without this limit; the limitation is purely an optimization.

In addition to knowing what areas are currently visible, it is desirable to know what areas have been previously visible, and to be able to display those areas differently.

An example is given in Figure 1. The observer is the red hollow circle, standing at the intersection of three passageways. White solid circles are lights (the observer, in this case, is carrying one).

Dark green is used for walls and floors that have not been seen yet. Blue walls and medium grey floors mark areas that have been seen previously. White walls and light grey floor denote what’s currently visible. Note that while the observer’s own light doesn’t reach all the way down that west passageway, there’s a light to the south that illuminates part of it, giving a disjoint area of visibility. To the southeast, there’s another light that helps light that southeast corridor, so the whole corridor is visible. Finally, to the northeast, there’s a small window in the wall, allowing the observer to illuminate, and see, a small amount of the room to the east.

Figure 1 - A simple example of a map

In this technique, the floor is broken into small fragments (triangles in this case, because they are easy to render), which serve as the algorithm’s basic unit of area. When determining what part of the floor is visible, I’m really asking what set of triangles is visible. If the centroid (average of the vertices) of a triangle is visible, the whole triangle is considered visible. This has the side effect of making the bounds of the visibility area slightly jagged, as you can see in the disjoint area down the west corridor. In my own application, this is acceptable, but blending of adjacent triangles could be done in order to get a smooth gradient between visible and invisible areas.
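For reference, the centroid test is tiny; here is a minimal sketch (the `Point` struct is a stand-in for the article's own class, which is not shown until the code listing):

```cpp
#include <cassert>

// Hypothetical minimal Point; the article's own Point class is not shown here.
struct Point { float x, y; };

// Centroid of a triangle: the average of its three vertices. If this single
// point is visible, the whole triangle is treated as visible.
inline Point centroid(const Point& a, const Point& b, const Point& c)
{
    return Point{ (a.x + b.x + c.x) / 3.0f, (a.y + b.y + c.y) / 3.0f };
}
```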

Because walls divide up floor area and walls can run at all sorts of angles, cutting the floor into small triangles often results in additional, smaller, triangles being generated. All told, a map can contain millions of floor triangles and hundreds of walls. For the curious, figure 2 shows the triangles generated for a small section of the map:

Figure 2 - You are in a maze of twisty triangles, all largely the same

For lighting (and seeing) the walls themselves, I do something similar – walls are broken into segments, and the center point of each segment is checked to see if it is visible. If it is visible, that whole segment is visible. For this reason, segments are kept short and walls can generate thousands of them in a large map.

Because of this, when it comes time to determine which parts of walls and floors are not visible it may be necessary to evaluate millions of points for the floor and thousands of points for wall segments. Conceptually, they all need to be evaluated against every wall to determine if line of sight exists from the observer, and that process has to be repeated for each light as well.

Clearly, a brute force approach will not work in reasonable time. The goal is to move the observer to a new point, or move a light to a new point (often both, since the observer often carries a light), and know as quickly as possible what areas of floor and segments of walls can be seen. Comparing possibly millions of points against hundreds or thousands of walls and doing a line of sight calculation – essentially calculating the intersection of two line segments, one for a line of vision and one for a wall – isn’t acceptably fast.

It turns out that lighting and vision can be handled by the same algorithm, since they are both occluded by walls in the same way. They can both be represented by casting rays out from a given point, and stopping the rays when they hit a wall. If there’s no wall along that ray, the ray is cut short by a distance limit instead (this illustrates another difficulty with using polygon intersections – polygons don’t have curved sides, and approximating them with short straight lines increases the cost of the intersection test).

Figure 3 - A "polygon" of light, with curved parts

Since there can be multiple lights, unions of polygons would be required:

Figure 4 - Union of two lit areas

Since we’ve defined visibility as areas that are both lit and within line of sight of the observer, the intersection of the polygons representing lit areas (itself a union) and the polygon representing the area of sight represents the visible areas. Figure 5 repeats the original example, with yellow lines roughly delineating the lit area and red lines bounding the area of sight. Areas within both are the visible areas.

Figure 5 - Vision and Light compute Visibility

The result of the intersection is a (potentially empty) set of polygons. To maintain a history of what’s been visible, a union of the previously visible areas and the currently visible area is also required. The combination can create polygons with holes, and for complex maps, a large number of sides. Figure 6 shows the result of a wall, a number of square columns, and a short walk along the north side of the wall by the observer, carrying a light. The union of previously visible areas is shown in darker grey [1].

Figure 6 - Columns, a Wall, and A Messy Polygon Union

Doing polygon union and intersection is complex. Naïve implementations of these algorithms run into problems with boundary conditions and, in complex cases, floating point accuracy. There are packages available that solve these problems, and deal elegantly with disjoint polygons and holes, but they are available under restricted license [2]. I wanted an unencumbered solution, and was willing to trade off some amount of runtime to get it.

But a brute force computation of every floor point vs. every obstructing wall is unacceptable. What is needed is an efficient way to evaluate the many floor and wall points for visibility.

The basic approach amounts to describing the shadows cast by walls. Since each point in the floor has to be tested against these shadows, the algorithms focus on making it as inexpensive as possible to determine if any given point is within a shadow, without loss of accuracy. Since “as inexpensive as possible” is still too expensive, given the sheer number of points to consider, the approach also includes determining which walls are already covered by other walls (in effect, we work to discard walls which don’t change anything we care about). Where walls cannot be discarded, the algorithm attempts to determine what parts of walls contribute to meaningful shadows, and which parts are irrelevant. Note that while I talk about shadows here, everything also applies to occluding the observer’s view, since, as noted, a wall stops a line of sight in exactly the same way that it cuts short a ray of light. Finally, I discuss the critical optimizations that make the approach fast enough to use on large and complex maps.

We start by making all points relative to the location of the light in question; in other words, everything is translated so that the light is at the origin. This translation drops terms out of many formulas and provides significant savings.

The first step is to identify when a wall casts a shadow over a point – efficiently. The simplest way to do this is to take the endpoints of a wall and arrange them so that the first point is clockwise from the second point, as seen from the origin. If they are counterclockwise, swap the points. If they are collinear with the light, throw out that wall, because it can’t cast a shadow. I will refer to the endpoints as the left and right endpoints, with the understanding that this is from the perspective of the origin.

A 2D cross product [3] (from the origin, to the start point, to the end point) reveals both the edge-on case and the clockwise or counterclockwise winding of the end points.

Here’s an example:

Figure 7 - Walls that do, and don't, matter

In figure 7, line C is edge on to the light and casts no shadow of its own, so it gets dropped. B isn’t edge on, so we keep it, with BA as its right endpoint and BC as the left (as seen from the light at the origin). D is also a wall that matters, and DC becomes the right endpoint, while DE becomes the left. In some applications, in which walls always form continuous loops, A and E can also be dropped because they represent hidden surfaces. The same rules that apply to 3D surface removal apply here – in a closed figure, walls that face away from the origin can be dropped without harm. However, this algorithm works even if walls don’t form closed figures.

Now, cast a ray out from the origin through the right endpoint of a given wall – use B as an example, and cast a ray through BA. Note that any point that is in shadow happens to be to the left of this line (so are many other points, but the point is that all the shadowed ones are). Calculating “to the left of” is cheap: it’s the 2D cross product from the origin, to the right endpoint of the wall, to the point in question; for example, point X in Figure 7. The cross product gives a value that’s negative on one side of the wall, positive on the other side, and zero if the point in question is on the line.

Repeat for the ray from the origin to the left end of the line segment, BC. All shadowed points are to the right of this line, which is determined by another 2D cross product. All that remains is to determine if the point is behind the wall or in front of the wall, with respect to the origin. This is yet another 2D cross product, this time from the wall’s right endpoint, to the left endpoint, to the point in question. Point X in Figure 7 would pass all three of these tests, so it is in B’s shadow.

All told, at most three cross products (six multiplies, three subtracts, and three comparisons with 0) tell whether a point is shadowed by a wall. In many cases, a single cross product will prove that a point is not shadowed by a wall. But that still leaves the problem of comparing many, many thousands of points against hundreds of walls.
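A sketch of the three-test point check described above might look like this (the names and the exact sign conventions are mine, not the author's; the article's own listing appears at the end). Each test can reject early, so often only one cross product is evaluated:

```cpp
#include <cassert>

struct Point { float x, y; }; // stand-in for the article's Point class

// 2D cross product of vectors a and b, both taken from the origin (the light).
inline float cross(const Point& a, const Point& b)
{
    return a.x * b.y - b.x * a.y;
}

// Is p in the shadow of the wall (right, left)? The light is at the origin and
// the endpoints are already ordered so that `right` is clockwise of `left`.
bool inShadow(const Point& right, const Point& left, const Point& p)
{
    if (cross(right, p) <= 0) return false; // not left of the right ray
    if (cross(left, p) >= 0) return false;  // not right of the left ray
    // Behind the wall line: cross product from the right endpoint, to the
    // left endpoint, to the point in question.
    Point rl{ left.x - right.x, left.y - right.y };
    Point rp{ p.x - right.x, p.y - right.y };
    return cross(rl, rp) < 0;
}
```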

Having established an algorithm to test a point against a wall, we now need to find ways to minimize how often we have to use it. Any wall we can cull results in hundreds of thousands fewer operations! So a first pass at culling is simply to remove any wall which is outside the radius of the light, by creating a bounding square around the origin with the “radius” of the light, and a bounding rectangle from each wall’s endpoints. If these don’t overlap, that wall can’t affect lighting, and is discarded.
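A minimal sketch of this rectangle-overlap cull (the `Rect` type and function names are assumptions, not the author's code):

```cpp
#include <algorithm>
#include <cassert>

struct Point { float x, y; };

struct Rect { float minX, minY, maxX, maxY; };

// Bounding square around the light at the origin, half-size = radius.
inline Rect lightBounds(float radius)
{
    return Rect{ -radius, -radius, radius, radius };
}

// Bounding rectangle of a wall from its two endpoints.
inline Rect wallBounds(const Point& a, const Point& b)
{
    return Rect{ std::min(a.x, b.x), std::min(a.y, b.y),
                 std::max(a.x, b.x), std::max(a.y, b.y) };
}

// If the rectangles don't overlap, the wall can't affect this light.
inline bool overlaps(const Rect& r, const Rect& s)
{
    return r.minX <= s.maxX && s.minX <= r.maxX &&
           r.minY <= s.maxY && s.minY <= r.maxY;
}
```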

Figure 8 - Using rectangles and overlap to discard walls

In Figure 8, F’s rectangle doesn’t overlap the light’s rectangle, so F gets discarded. H and G overlap and so are kept – H is a mistake because it’s not really in the circle of light that matters, but this is a very cheap test that discards most walls in a large map very quickly, and that’s what we want for now. In applications where walls form loops or tight groups, the entire set can be given a single bounding rectangle, allowing whole groups of walls to be culled by a single overlap test.

Whatever walls are left might cast shadows. For each, we calculate the squared distance between the origin and the nearest part of the wall. This is slightly messy, since “nearest” could be either endpoint, or a point somewhere between. Given these distances, sort the list of walls so that the closest rise to the top. In the case of ties, there is generally some advantage in letting the longer wall sort closer to the top. This will usually put the walls casting the largest shadows near the top of the list. This helps performance considerably, but nothing breaks if the list isn’t perfectly sorted. In figure 8, G would be judged closer, by virtue of the northernmost endpoint. H’s closest point, near the middle of H, is further off.
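The squared-distance computation can be sketched as follows, clamping the projection of the origin onto the wall so that "nearest" correctly falls on an endpoint when the perpendicular foot lies off the segment (names are mine; no square roots are needed, since we only sort by this value):

```cpp
#include <algorithm>
#include <cassert>

struct Point { float x, y; };

// Squared distance from the origin (the light) to the nearest point of the
// segment a-b. The nearest point is either an endpoint or the perpendicular
// foot, found by clamping the projection parameter t to [0,1].
float distSqToSegment(const Point& a, const Point& b)
{
    const float dx = b.x - a.x, dy = b.y - a.y;
    const float lenSq = dx * dx + dy * dy;
    float t = 0.0f;
    if (lenSq > 0.0f)
        t = std::max(0.0f, std::min(1.0f, -(a.x * dx + a.y * dy) / lenSq));
    const float px = a.x + t * dx, py = a.y + t * dy;
    return px * px + py * py;
}
```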

Once we’ve dropped all the obviously uninvolved walls and sorted the rest by distance, it’s time to walk through the list of walls, adding them to the (initially empty) set of walls that cast shadows. If a wall turns out to be occluded by other walls in this phase, we cull it. Usually, anyway - in the interest of speed, the algorithm settles for discarding most such walls, but can miss cases. In practice, it misses few cases, so I have not troubled to improve this phase’s culling method.

To explain how this culling is done, we must introduce some concepts. Each wall generates a pair of rays, both starting from the light (origin) and one through each endpoint. As noted before, points to the right of the “left” ray, and to the left of the “right” ray, bound the possible shadow. However, some of that area might be already shadowed by another wall – one wall can partially cover another. In fact, all of that area might be shadowed by other walls – the current wall might be totally occluded. If it’s only partially occluded, what we want to do is “narrow” its rays, pushing the left ray further right and/or the right ray further left, to account for the existing, known shadows in this area. The reason for this is that we don’t want to compare points against multiple walls if we don’t have to, and we have to compare a point against any wall if it lies between the wall’s left and right rays. The narrower that angle becomes, the fewer points have to be checked against that wall.

So when we take in a new wall, the first thing we do is look at the two line segments between the origin and the wall’s endpoints (each in turn). If that line segment intersects a wall we already accepted, then the intersected wall casts a shadow we probably care about in regard to the current wall. The interesting point here is that we don’t care where the intersection occurs. An example will show why:

Figure 9 - Intersecting origin-to-endpoint with other walls

Assume the current wall, W, is half-hidden by an already accepted, closer one, S. Assume that it’s the left half of W that’s covered by the nearer wall, as in the example above. The way we discover this is by checking the line segment from the origin to the left endpoint of W against the already-accepted walls. It can intersect several. Once we find an intersection, we know immediately that the intersected wall, S, is going to cover at least part of W, and on the left side (ignore the case where S’s left endpoint and S’s right endpoint are collinear with the origin – it doesn’t change anything). Notice that we don’t care where S gets intersected.

So what do we do? We replace W’s left endpoint ray with S’s right endpoint ray. In effect, we push the left endpoint ray of W to the right. Having done that, we check to see if we’ve pushed it so far to the right that it is now at or past W’s own right ray. If so, S completely occludes W and we discard W immediately. If not, we’ve made W “narrower”. In our example, W survived and its shadow got narrower, as marked in grey.

Figure 10 - Pushing W's left ray

We keep doing this, looking for other walls to play the part of S that intersect W’s left or right end rays. When they do, we update (“nudge”) W’s left (or right) ray by replacing it with S’s right (or left) ray. If the endpoint rays of W meet or cross during this “nudging” process, W is discarded.
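The nudge itself is cheap; here is a sketch, with `Wedge` as an assumed container for a wall's pair of rays (directions from the light through its endpoints):

```cpp
#include <cassert>

struct Point { float x, y; };

inline float cross(const Point& a, const Point& b)
{
    return a.x * b.y - b.x * a.y;
}

// A wall's shadow wedge: direction vectors (from the light at the origin)
// through its right and left endpoints.
struct Wedge { Point right, left; };

// Nudge W's left ray rightward, replacing it with S's right ray (used when
// the origin-to-left-endpoint segment of W crosses S). Returns false if the
// rays have met or crossed, i.e. W is now completely occluded and can be
// discarded.
bool nudgeLeftRay(Wedge& w, const Point& sRight)
{
    w.left = sRight;
    // The right ray must still be strictly clockwise of the left ray.
    return cross(w.right, w.left) > 0;
}
```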

Figure 11 - Losing W

In the example above, S has pushed W’s left ray, and J has pushed W’s right ray. A 2D cross product tells us that the left and right rays have gotten past each other in the process, so W is judged to be completely occluded, and gets dropped. Otherwise, it survives, with a potentially much narrowed shadow, and it is added to the set of kept walls. Note that if J had been longer to the left, it could have intersected both of W’s red lines, and it would have pushed both W’s left and right rays by itself, forcing them to cross. This makes sense; it would have completely occluded W all by itself and we’d expect it to cause W to drop out.

Note that this algorithm doesn’t notice the case where a short, close wall casts a shadow over the middle of a long wall a little further off. In this case, both walls end up passing this check, and the long wall doesn’t get its rays changed (the only way to do that would be to split the long wall into two pieces and narrow them individually). This isn’t much of a problem in practice, because when it comes time to check points, we will again check the walls in order of distance from the origin, so the smaller, closer wall is likely to be checked first. Points it shadows won’t have to be checked again, so for those points the longer wall never needs to be checked at all. There are unusual cases where points do end up in redundant checks, but they are unusual enough not to be much of a runtime problem.

As we work through the list of candidates, we are generally working outward from the origin, so it’s not uncommon for more and more walls to end up discarded because they are completely occluded. This helps keep this part of the algorithm quick.

It remains to find a good way to detect intersections of line segments. We don’t want round-off problems (this might report false intersections or miss real ones, causing annoying issues), and we don’t care where the intersection itself actually occurs. It turns out that a reasonable way to do this is to take W’s two points, and each potential S’s two points, and arrange them into a quadrilateral. We take the 4 points in this order: S’s left, W’s left, S’s right, W’s right. If the line segments cross, the four points in order form a convex polygon. If they don’t, it is concave. An example serves:

Figure 12 - Detecting intersections

R and S cross, so the (green) polygon formed by the four endpoints is a convex kite shape. R and L don’t cross, so the resulting (orange) polygon isn’t convex; it’s not even simple.

It turns out that there is a fair amount of misinformation about testing for convex polygons on the ‘Net. Talk about counting changes in sign sounds interesting (and cheap), but I’ve yet to see an implementation of this that works in all cases, including horizontal and vertical lines. What I’ve ended up with is more expensive but gets all cases, even if two points in the quad happen to be coincident. I calculate the four 2D cross products, going around the quad (in either order). If they are all negative OR all positive, it’s convex. Anything else is concave or worse. While not cheap (up to 8 multiplies and quite a few subtracts), we can stop as soon as we get a difference in sign. On hardware that can do floating subtracts in parallel, this is not too bad in cost.
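Here is a sketch of that all-same-sign convexity test applied to segment intersection (function names are mine; degenerate all-zero cases fall out as "not crossing"):

```cpp
#include <cassert>

struct Point { float x, y; };

// Signed area of the turn p1 -> p2 -> p3; positive is counterclockwise.
inline float turn(const Point& p1, const Point& p2, const Point& p3)
{
    return (p2.x - p1.x) * (p3.y - p1.y) - (p3.x - p1.x) * (p2.y - p1.y);
}

// Do segments a-b and c-d cross? Interleave the endpoints into the quad
// c, a, d, b and test it for convexity: all four turns must share a sign.
// We can stop as soon as two turns disagree.
bool segmentsCross(const Point& a, const Point& b,
                   const Point& c, const Point& d)
{
    const Point q[4] = { c, a, d, b };
    int pos = 0, neg = 0;
    for (int i = 0; i < 4; ++i) {
        const float t = turn(q[i], q[(i + 1) % 4], q[(i + 2) % 4]);
        if (t > 0) ++pos;
        else if (t < 0) ++neg;
        if (pos && neg) return false; // signs differ: concave, no crossing
    }
    return pos == 4 || neg == 4;      // all one sign: convex, segments cross
}
```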

By itself, that’s enough to discard unneeded walls in most cases, and minimize the scope of influence of the surviving walls. Just with what we have, it’s possible to determine if points are occluded by walls. But we’d still like it faster. We always want things faster, that’s why we buy computers.

# Making it faster

Make sure everything we just discussed makes sense to you, because we’re about to add some complications. There are four optimizations that can be applied to all this, each independent of the others.

1. In my maps, walls (except doors) always touch another wall at each endpoint. (They are really wall surfaces – just as in a 3D model, all the surface polygons are just that, surfaces, always touching neighbor surfaces along edges.) This leads to an optimization, though it is a little fussy to apply. An example serves best.

Imagine you’re a light at the origin, and over at x=5 there’s a wall stretching from (5,0) to (5,5), running north and south. It casts an obvious shadow to the east. We’ll call that wall A. But imagine that at (5,5) there’s another wall, running to (4,6), diagonally to the northwest. Call it B.

Figure 13 - Extending A's influence

A has the usual left and right rays: the right ray passes through (5,0), the left through (5,5). B has its own rays, with a right ray through (5,5) and a left ray through (4,6).

Between them, they cast a single, joined shadow, wider than the shadows they cast individually. The shape of the shadow is complicated, but it’s worth noticing that for any point behind the line of A (that is, with x > 5, noted in green), that point is in shadow if it is between A’s right ray and B’s left ray. This is because B extends A, and (important point) it extends it by turning somewhat towards the light, not away. (It would also work if A and B were collinear, but in my system, collinear walls that share an endpoint become a single wall). There are also points B shadows that have x < 5, but when we are just considering A, it’s fair to say that we can apply B’s left bound instead of A’s left bound when asking about points that are behind the line A makes. A’s ability to screen things, given in light grey, has effectively been extended by the dark grey area.

I don’t take advantage of this when it comes to considering which points are in shadow, because all it does is increase the number of points that are candidates for testing for any given wall, and that doesn’t help. However, I do take advantage of this when determining what walls occlude other walls.

I do this by keeping two sets of vector pairs for each wall. The ones I’ve been calling left and right are the “inner” vectors, named because they tend to move inward, and their goal is to get pushed closer together by other walls, ideally to cross. But there is also a pair of left and right vectors I call the outer pair. They start at the endpoints of the wall like the inner ones do, but they try to grow outward. They grow outward when 1) I find the wall that shares an endpoint and 2) this other wall (in my scheme there can only be one) does not bend away from the light. This is an easy check to make – it’s another 2D cross product, from A’s left endpoint, to A’s right, to B’s right. If that comes up counterclockwise, A’s outer right vector gets replaced by a copy of B’s outer right vector (as long as that’s an improvement – it’s important to check that you’re really pushing the outer right vector more to the right.)
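For the left-end configuration of Figure 13, the bend test can be sketched as a single cross product (`turn` and the parameter names are my own, not the author's):

```cpp
#include <cassert>

struct Point { float x, y; };

// Signed area of the turn p1 -> p2 -> p3; positive is counterclockwise.
inline float turn(const Point& p1, const Point& p2, const Point& p3)
{
    return (p2.x - p1.x) * (p3.y - p1.y) - (p3.x - p1.x) * (p2.y - p1.y);
}

// B shares A's left endpoint. If B does not bend away from the light
// (a counterclockwise turn from A's right endpoint, through the shared
// endpoint, to B's far endpoint), B can extend A's outer left vector.
bool extendsOuterLeft(const Point& aRight, const Point& aLeft, const Point& bFar)
{
    return turn(aRight, aLeft, bFar) > 0;
}
```

With the numbers from Figure 13 (A from (5,0) to (5,5), B running to (4,6)), the turn is counterclockwise and B extends A; a wall running to (6,6) instead would bend away and fail the test.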

And note the trick affects both A and B. If B extends A’s right outer vector, then A is a great candidate for extending B’s left outer vector.

Applied carefully, the extra reach this gives walls helps discard distant walls very quickly in cases where there are long runs of joined walls. I find that for most maps, the difference this makes is not large, and given the work I put into getting it right, I might not have done it if I’d realized how little it helps most maps.

2. When considering points, it’s important not to waste time testing any given point against a wall that can’t possibly shadow it. Each point, after all, has three tests it has to pass per wall: is it to the right of the left vector, to the left of the right vector, and behind the line of the wall? On average, half of all points are going to pass that first test, for most walls. That means that a fair amount of the time, the second test is going to be needed for points that are not remotely candidates. And given huge numbers of points, that’s unacceptable.

Figure 14 - The futility of any one test

Here, P is to the right of the left endpoint-origin line, so it’s a candidate for being shadowed by the wall. But so are Q and R, and they clearly aren’t going to be shadowed. It would be helpful, then, if we could test a point against only the walls that have a good shot at shadowing it.

Trigonometric solutions suggest themselves, but trig functions are much too expensive.

What I do is create a list of “aggregate wedges.” Each wall’s shadow, called a wedge, is compared with the wedges of the other walls. If they overlap, they are added to the same set, and I keep track of the leftmost and rightmost ray among everything in the same set.

Figure 15 - Creating sets of shadow wedges

Of course, if you’re in the square room without a door (never mind how you got in), you end up with all the walls in the same set, and the rays that bound the set end up “enclosing” every point on the map! So this trick is useless in these kinds of maps. But in maps of towns, with many freestanding buildings and hence many independent wedges, you can often get whole groups of walls into a number of disjoint sets, and since each set has an enclosing pair of rays that covers all the walls in the set, you can test any given point against the set’s rays: if it’s not between them, you don’t have to test any of the walls in that set.

This sounds pretty, but it can be maddening to get right. You can have two independent sets, and then find a wall that belongs in them both, effectively merging two sets into one big one.

Figure 16 - Combining sets

You end up doing a certain amount of set management. I have a sloppy and dirty way to do this which is reasonably fast, but it’s not pretty.

Another difficulty is knowing when a wall’s vectors overlap an existing set’s bounds. There are several cases. A wall’s vectors could be completely inside the set’s bounds, in which case the wall just becomes a member of the set and the set’s bounds don’t change. Or it can overlap to the left, joining the set and pushing the set’s left vector. Or it can overlap on the right. Or it can overlap on both sides, pushing both set vectors outward. Or it can be disjoint to that set. Keep in mind that a set can have rays at an obtuse angle, enclosing quite a bit of area. It’s surprisingly hard to diagnose all these cases properly. The algorithm is difficult to describe, but the source code is at the end of this document.

When all is said and done, this optimization is very worthwhile for some classes of maps that would otherwise hammer the point test algorithm with a lot of pointless tests. But it was not much fun to write.

3. This is my favorite optimization because it’s simple and it tends to help a great deal in certain, specific cases. When I’m gathering walls, I’m computing distance (actually, distance squared – no square roots were used in any of this) between the light and each wall. Along the way I keep track of the minimum such distance I see. If, for example, the closest distance to any wall is 50 units, then any point that is closer to the origin than 50 units can never be in shadow and doesn’t have to be checked against any wall. (Of course, if the light’s actual radius of effect is smaller than this distance, that value is used instead).
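The bookkeeping for this quick-accept threshold is only a few lines (a sketch; the epsilon value is an arbitrary stand-in for the small amount subtracted to avoid edge cases):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Compute the quick-accept threshold once per light: any point whose squared
// distance to the origin is below this can never be in shadow. The epsilon
// shaves the bound so points right at a wall are never accepted unduly.
float litThreshold(const std::vector<float>& wallDistSq, float lightRadiusSq)
{
    float minSq = lightRadiusSq; // never larger than the light's own reach
    for (float d : wallDistSq)
        minSq = std::min(minSq, d);
    return minSq - 0.001f;       // hypothetical epsilon
}

// A point passes the quick accept if its squared distance beats the threshold.
inline bool triviallyLit(float pointDistSq, float threshold)
{
    return pointDistSq < threshold;
}
```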

Figure 17 - Identifying points unaffected by walls

If the light is close to a wall, this optimization saves very little. If it’s not, this can put hundreds or even thousands of points into “lit” status very quickly indeed. In order to avoid edge cases, I subtract a small amount from the minimum wall distance after I compute it, so there’s no question of points right at a wall being considered lit unduly.

4. This is my second favorite optimization because it’s simple, dirty and very effective. When I generate floor triangles, I use an algorithm that more or less sweeps through in strips. Because triangles are small, and generated such that neighboring triangles tend to be adjacent in the list of triangles, odds are often very high that if a triangle is shadowed by a wall, the next triangle to consider is going to be shadowed by the same wall.

Figure 18 - Neighbors often suffer the same fate

So the best optimization of all is to remember which wall last shadowed a triangle, and test the next triangle against that one first, always. After all, if a triangle is shadowed by any wall, it doesn’t have to be tested against any other; we don’t care about finding the “best” shadow, we just want to quickly find one that works. If the light happens to be close to a wall (which ruins optimization 3), this one can be very powerful. W, here, is likely to shadow about half the map.
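A sketch of the last-wall cache (the shadow test is the three-cross-product check from earlier; all names here are mine):

```cpp
#include <cassert>
#include <vector>

struct Point { float x, y; };
struct Wall { Point right, left; }; // endpoints ordered as seen from the light

inline float cross(const Point& a, const Point& b)
{
    return a.x * b.y - b.x * a.y;
}

// The three-cross-product shadow test (light at the origin).
bool inShadow(const Wall& w, const Point& p)
{
    if (cross(w.right, p) <= 0) return false; // not left of the right ray
    if (cross(w.left, p) >= 0) return false;  // not right of the left ray
    Point rl{ w.left.x - w.right.x, w.left.y - w.right.y };
    Point rp{ p.x - w.right.x, p.y - w.right.y };
    return cross(rl, rp) < 0;                 // behind the wall line
}

// Test a point, always trying the wall that shadowed the previous point first.
// Neighboring triangles usually fall to the same wall, so this cache hits often;
// any shadowing wall suffices, so we stop at the first hit.
bool isShadowed(const Point& p, const std::vector<Wall>& walls, int& lastHit)
{
    if (lastHit >= 0 && inShadow(walls[lastHit], p))
        return true; // cache hit: no other wall needs checking
    for (int i = 0; i < (int)walls.size(); ++i) {
        if (i == lastHit) continue; // already tried above
        if (inShadow(walls[i], p)) { lastHit = i; return true; }
    }
    return false;
}
```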

One final trick – not exactly an optimization – has to do with the fact that this code runs on a dual core processor. I cut the triangle list in about half and give each half to a separate thread to run through. (Each thread has its own copy of the pointer used in optimization 4, so they don’t interfere with each other in any way.) This trick doesn’t always cut the runtime in half – it’s not uncommon for one thread to get saddled with most or all of the cases that require more checks – but it helps. Other speedups involve not using STL or Boost, and sticking to old-fashioned arrays of data structures – heresy in some C++ circles, but the speed gains are worth any purist’s complaints.
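The two-thread split can be sketched like this (a placeholder loop body stands in for the real per-triangle shadow test; everything here is illustrative, not the author's code):

```cpp
#include <cassert>
#include <thread>
#include <vector>

// Each thread walks its own half of the triangle list with its own last-wall
// cache, so the threads share nothing and need no locking.
void processRange(std::vector<int>& lit, int begin, int end)
{
    int lastShadowWall = -1; // per-thread copy of the optimization-4 cache
    (void)lastShadowWall;    // unused in this placeholder
    for (int i = begin; i < end; ++i)
        lit[i] = 1;          // stands in for "compute lit bits for triangle i"
}

void processAll(std::vector<int>& lit)
{
    const int mid = (int)lit.size() / 2;
    std::thread worker(processRange, std::ref(lit), 0, mid);
    processRange(lit, mid, (int)lit.size()); // main thread takes the other half
    worker.join();
}
```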

What’s left is trivial. Each floor triangle and short wall segment has a set of bits, one for each possible light, and one for the viewer. If the object is not in shadow, the appropriate light’s bit is set in that object. If any such bit is set, the object is lit. There is also a bit for the observer, which as noted uses the exact same algorithm. If the object is lit, that algorithm is then run for that point for the observer, and if it comes up un-occluded, it is marked visible (and also marked “was once visible”, because I need to keep a history of what has already been seen). Moving a light is a matter of clearing all bits that correspond to that light in all the objects, and then “recasting” the light from its new location. In my application, most lights don’t move frequently, so not much of that happens.

My favorite acid test at the moment is an open town map with 3 million floor triangles and almost 7000 walls. With ambient light turned on (which means everything is automatically considered lit, so everything has to be checked for “visible to observer”), and a vision limit of 400 units (so about 336,000 possible triangles in range), my worst case compute times are about 0.75 seconds on a dual core 2 GHz Intel processor. Typical times are a more acceptable 0.4 sec or so. Kinder maps (underground caves, tight packed cities, open fields with few obstructions) manage times of well under 0.1 sec.

Some examples of my implementation:

Figure 19 - Room in underground city

Figure 20 - Ambient light, buildings and rock outcrops

Figure 21 - Limited lights, windows and doors

Notice that lights in buildings shine out windows, and also enable peeks inside the building from outside, forming a number of disjoint areas of visibility.

# Code Listing

What follows gives the general sense of the algorithms’ implementation. Do note that the code is not compiler-ready: support classes like Point, Wall, WallSeg and SimpleTriangle are not provided, but their implementation is reasonably obvious.

This code is released freely and for any purpose, commercial or private – it’s free and I don’t care what happens, nor do I need to be told. It is also without any warranty or promise of fitness, obviously. It works in my application as far as I know, and with some adjustment, may work in yours. The comments will show some of the battles that occurred in getting it to work. The code may contain optimizations I didn’t discuss above.

enum Clockness {Straight, Clockwise, Counterclockwise};

enum Facing {Colinear, Inside, Outside};

static inline Clockness clocknessOrigin(const Point& p1, const Point& P2)
{
const float a = p1.x * P2.y - P2.x * p1.y;
if (a > 0)
return Counterclockwise; // aka left
if (a < 0)
return Clockwise;
return Straight;
}
static inline bool clocknessOriginIsCounterClockwise(const Point& p1, const Point& P2)
{
return p1.x * P2.y - P2.x * p1.y > 0;
}
static inline bool clocknessOriginIsClockwise(const Point& p1, const Point& P2)
{
return p1.x * P2.y - P2.x * p1.y < 0;
}

class LineSegment
{
public:
Point begin_;
Point vector_; //begin_+vector_ is the end point

inline LineSegment(const Point& begin, const Point& end)
: begin_(begin), vector_(end - begin) {}

inline const Point& begin() const {return begin_;}
inline Point end() const {return begin_ + vector_;}

inline LineSegment(){}

//We don't care *where* they intersect and we want to avoid divides and round-off surprises,
//so we don't attempt to solve the equations and check bounds.
//We form a quadrilateral with AB and CD, in ACBD order. This is a convex kite shape if the
// segments cross. Anything else isn't a convex shape. If endpoints touch, we get a triangle,
//which will be declared convex, which works for us.
//Life improves if a faster way is found to do this, but it has to be accurate.
bool doTheyIntersect(const LineSegment &m) const
{
Point p[4];
p[0] = begin();
p[1] = m.begin();
p[2] = end();
p[3] = m.end();
unsigned char flag = 0;

{
float z  = (p[1].x - p[0].x) * (p[2].y - p[1].y) -
(p[1].y - p[0].y) * (p[2].x - p[1].x);
if (z > 0)
flag = 2;
else if (z < 0)
flag = 1;
}
{
float z  = (p[2].x - p[1].x) * (p[3].y - p[2].y) -
(p[2].y - p[1].y) * (p[3].x - p[2].x);
if (z > 0)
flag |= 2;
else if (z < 0)
flag |= 1;
if (flag == 3)
return false;
}
{
float z  = (p[3].x - p[2].x) * (p[0].y - p[3].y) -
(p[3].y - p[2].y) * (p[0].x - p[3].x);
if (z > 0)
flag |= 2;
else if (z < 0)
flag |= 1;
if (flag == 3)
return false;
}
{
float z  = (p[0].x - p[3].x) * (p[1].y - p[0].y) -
(p[0].y - p[3].y) * (p[1].x - p[0].x);
if (z > 0)
flag |= 2;
else if (z < 0)
flag |= 1;
}
return flag != 3;
}

inline void set(const Point& begin, const Point& end)
{
begin_ = begin;
vector_ = end - begin;
}

inline void setOriginAndVector(const Point& begin, const Point& v)
{
begin_ = begin;
vector_ = v;
}

/*
Given this Line, starting from begin_ and moving towards end, then turning towards
P2, is the turn clockwise, counterclockwise, or straight?
Note: for a counterclockwise polygon of which this segment is a side,
Clockwise means P2 would "light the outer side" and
Counterclockwise means P2 would "light the inner side".
Straight means colinear.
*/
inline Clockness clockness(const Point& P2) const
{
const float a = vector_.x * (P2.y - begin_.y) - (P2.x - begin_.x) * vector_.y;
if (a > 0)
return Counterclockwise; // aka left
if (a < 0)
return Clockwise;
return Straight;
}

inline bool clocknessIsClockwise(const Point& P2) const
{
return vector_.x * (P2.y - begin_.y) - (P2.x - begin_.x) * vector_.y < 0;
}

//relative to origin
inline Clockness myClockness() const {return clocknessOrigin(begin(), end());}
inline bool clockOK() const {return myClockness() == Counterclockwise;}

//If clockOK() is true, this is true if p and center are on opposite sides of me.
//If p is on the line, this returns false.
inline bool outside(const Point p) const
{
return clockness(p) == Clockwise;
}
inline bool outsideOrColinear(const Point p) const
{
return clockness(p) != Counterclockwise;
}

void print() const
{
begin().print(); printf(" to "); end().print();
}
};
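To see the kite test in action outside the class, here is a standalone re-implementation with a bare point struct (the article's Point class is not included in the listing, so this version is self-contained):

```cpp
//Same convexity ("kite") test as LineSegment::doTheyIntersect above:
//walk the quadrilateral a, c, b, d; if every turn has the same sign
//(or is straight), the shape is convex and the segments cross or touch.
struct Pt { float x, y; };

static bool segmentsIntersect(Pt a, Pt b, Pt c, Pt d)
{
    Pt p[4] = {a, c, b, d}; //ACBD order, as in the article
    unsigned char flag = 0;
    for (int i = 0; i < 4; ++i)
    {
        const Pt& u = p[i];
        const Pt& v = p[(i + 1) % 4];
        const Pt& w = p[(i + 2) % 4];
        //z component of the cross product: the sign gives the turn direction
        float z = (v.x - u.x) * (w.y - v.y) - (v.y - u.y) * (w.x - v.x);
        if (z > 0) flag |= 2;
        else if (z < 0) flag |= 1;
        if (flag == 3) return false; //turns both ways: not convex, no crossing
    }
    return true;
}
```

Note that, as the article's comment says, touching endpoints degenerate to a triangle and are reported as intersecting, which is the behavior the shadow code wants.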

class Wall;
/*
A wedge is a line segment that denotes a wall, and two rays from the center, that
denote the relevant left and right bound that matter when looking at this wall.
Initially, the left and right bound are set from the wall's endpoints, as those
are the edges of the shadow. But walls in front of (eg, centerward) of this wall
might occlude the end points, and we detect that when we add this wedge. If it happens,
we use the occluding wall's endpoints to nudge our own shadow rays. The idea is to
minimise the shadow bounds of any given wall by cutting away areas that are already
occluded by closer walls. That way, a given point to test can often avoid being tested
against multiple, overlapping areas. More important, if we nudge the effective left and right
rays for this wall until they meet or pass each other, that means this wall is completely occluded,
and we can discard it entirely, which is the holy grail of this code. Fewer walls means
faster code.

For any point that's between the effective left and right rays of a given wall, the
next question is if it's behind the wall. If it is, it's definitively occluded and we
don't need to test it any more. Otherwise, on to the next wall.
*/
class AggregateWedge;

enum VectorComparison {ColinearWithVector, RightOfVector, OppositeVector, LeftOfVector};
static VectorComparison compareVectors(const Point& reference, const Point& point)
{
switch (clocknessOrigin(reference, point))
{
case Clockwise:
return RightOfVector;
case Counterclockwise:
return LeftOfVector;
}
if (reference.dot(point) > 0)
return ColinearWithVector;
return OppositeVector;
}

class LittleTree {
public:
enum WhichVec {Left2, Right1, Right2} whichVector; //(sort tie-breaker), right must be greater than left
const Point*       position;  //vector end
LittleTree*        greater; //that is, further around to the right
LittleTree*        lesser; //that is, less far around to the right

LittleTree() {greater = lesser = NULL;}

//recursive in-order visit; this signature was lost from the original listing and is reconstructed
void walk(WhichVec* at, int* ip) const
{
if (lesser)
lesser->walk(at, ip);
at[*ip] = whichVector;
++*ip;
if (greater)
greater->walk(at, ip);
}
//walk the tree in order, filling an array (wrapper reconstructed)
void fillArray(WhichVec* at) const
{
int i = 0;
walk(at, &i);
}
};

class VectorPair {
public:
Point    leftVector;
Point    rightVector;
bool     acute;

VectorPair() {}

VectorPair(const Point& left, const Point& right)
{
leftVector = left;
rightVector = right;
acute = true;
}

bool isAllEncompassing() const {return leftVector.x == 0 && leftVector.y == 0;}

void set(const Point& left, const Point& right)
{
leftVector = left;
rightVector = right;
acute = clocknessOrigin(leftVector, rightVector) == Clockwise;
}

void setKnownAcute(const Point& left, const Point& right)
{
leftVector = left;
rightVector = right;
acute = true;
}

void setAllEncompassing()
{
acute = false;
leftVector = rightVector = Point(0,0);
}

bool isIn(const Point p) const
{
if (acute)
return clocknessOrigin( leftVector, p) != Counterclockwise &&
clocknessOrigin(rightVector, p) != Clockwise;

//this accepts all points if leftVector == 0,0
return clocknessOrigin( leftVector, p) != Counterclockwise ||
clocknessOrigin(rightVector, p) != Clockwise;
}

//true if we adopted the pair into ourselves. False if disjoint.
bool update(const VectorPair& v)
{
/* I might completely enclose him - that means no change
I might be completely disjoint - that means no change, but work elsewhere
He might enclose all of me - I take on his bounds
We could overlap; I take on some of his bounds
--
We figure this by starting at L1 and moving clockwise, hitting (in some
order) R2, L2 and R1. Those 3 can appear in any order as we move
clockwise, and some can be colinear (in which case, we pretend a convenient order).
Where L1 and R1 are the bounds we want to update, we have 6 cases:
L1 L2 R1 R2 - new bounds are L1 R2 (ie, update our R)
L1 L2 R2 R1 - no change, L1 R1 already encloses L2 R2
L1 R1 L2 R2 - the pairs are disjoint, no change, but a new pair has to be managed
L1 R1 R2 L2 - new bounds are L2 R2; it swallowed us (update both)
L1 R2 L2 R1 - all encompassing; set bounds both to 0,0
L1 R2 R1 L2 - new bounds are L2 R1 (ie, update our L)

If any two rays are colinear, sort them so that left comes first, then right.
If 2 lefts or 2 rights, order doesn't really matter. The left/right case does because
we want L1 R1 L2 R2, where R1=L2, to be processed as L1 L2 R1 R2 (update R, not disjoint)
*/

//special cases - if we already have the whole circle, update doesn't do anything
if (isAllEncompassing())
return true; //v is part of this wedge (everything is)

//if we're being updated by a full circle...
if (v.isAllEncompassing())
{
setAllEncompassing(); //we become one
return true;
}

/*
Now we just need to identify which order the 3 other lines are in, relative to L1.
Not so easy since we don't want to resort to arctan or anything else that risks
any roundoff. But clockness from L1 puts them either Clockwise (sooner), or
Straight (use dot product to see if same as L1 or after Clockwise), or
CounterClockwise (later). Within that, we can use clockness between points to sort between them.
*/
//get the points R1, L2 and R2 listed so we can sort them by how far around to the right
// they are from L1
LittleTree list[3];

//order we add them in here doesn't matter
list[0].whichVector = LittleTree::Right1;
list[0].position = &this->rightVector;

list[1].whichVector = LittleTree::Left2;
list[1].position = &v.leftVector;

list[2].whichVector = LittleTree::Right2;
list[2].position = &v.rightVector;

//[0] will be top of tree; add in 1 & 2 under it somewhere
for (int i = 1; i < 3; ++i)
{
LittleTree* at = &list[0];
do {
bool IisGreater = list[i].whichVector > at->whichVector; //default if nothing else works

VectorComparison L1ToAt = compareVectors(leftVector, *at->position);
VectorComparison L1ToI = compareVectors(leftVector, *list[i].position);
if (L1ToI < L1ToAt)
IisGreater = false;
else if (L1ToI > L1ToAt)
IisGreater = true;
else
{
if (L1ToI != OppositeVector && L1ToI != ColinearWithVector)
{
//they are in the same general half circle, so this works
switch (clocknessOrigin(*at->position, *list[i].position))
{
case Clockwise: IisGreater = true; break;
case Counterclockwise: IisGreater = false; break;
}
}
}
//now we know where [i] goes (unless something else is there)
if (IisGreater)
{
if (at->greater == NULL)
{
at->greater = &list[i];
break; //done searching for [i]'s place
}
at = at->greater;
continue;
}
if (at->lesser == NULL)
{
at->lesser = &list[i];
break; //done searching for [i]'s place
}
at = at->lesser;
continue;
} while (true);
}
}
//we have a tree with proper order. Read out the vector ids
LittleTree::WhichVec sortedList[3];
list[0].fillArray(sortedList); //read-out call reconstructed; the original listing omitted it

unsigned int caseId = (sortedList[0] << 2) | sortedList[1]; //form ids into a key. Two are enough to be unique
switch (caseId)
{
case (LittleTree::Left2 << 2) | LittleTree::Right2: //L1 L2 R2 R1
return true; //no change, we just adopt it

case (LittleTree::Right1 << 2) | LittleTree::Left2: //L1 R1 L2 R2
return false; //disjoint!

case (LittleTree::Right1 << 2) | LittleTree::Right2: //L1 R1 R2 L2
*this = v;
return true; //we take on his bounds

case (LittleTree::Right2 << 2) | LittleTree::Left2: //L1 R2 L2 R1
setAllEncompassing();
return true; //now we have everything

case (LittleTree::Left2 << 2) | LittleTree::Right1: //L1 L2 R1 R2
rightVector = v.rightVector;
break;

default: //(LittleTree::Right2 << 2) | LittleTree::Right1: //L1 R2 R1 L2
leftVector = v.leftVector;
break;
}

//we need to fix acute
acute = clocknessOrigin(leftVector, rightVector) == Clockwise;
return true;
}
};

class Wedge {
public:
//all points relative to center
LineSegment wall; //begin is the clockwise, right hand direction

Point    leftSideVector;  //ray from center to this defines the left or "end" side
Wedge*   leftSidePoker; //if I'm updated, who did it

Point    rightSideVector;  //ray from center to this defines the right or "begin" side
Wedge*   rightSidePoker;  //if I'm updated, who did it

Wall*    source;        //original Wall of .wall

VectorPair  outVectors;

float       nearestDistance;  //how close wall gets to origin (squared)

AggregateWedge* myAggregate;  //what am I part of?

inline Wedge(): source(NULL), leftSidePoker(NULL), rightSidePoker(NULL), myAggregate(NULL) {}

void setInitialVectors()
{
leftSidePoker = rightSidePoker = NULL;

rightSideVector = wall.begin();
leftSideVector = wall.end();

outVectors.setKnownAcute(wall.end(), wall.begin());
}

inline bool testOccluded(const Point p, const float distSq) const //relative to center
{
if (distSq < nearestDistance)
return false; //it cannot
if (clocknessOriginIsCounterClockwise(leftSideVector, p))
return false; //not mine
if (clocknessOriginIsClockwise(rightSideVector, p))
return false; //not mine
return wall.outside(p); //on the outside
}

inline bool testOccludedOuter(const Point p, const float distSq) const //relative to center
{
if (distSq < nearestDistance) //this helps a surprising amount in at least Enya
return false; //it cannot
return wall.outside(p) && outVectors.isIn(p); //on the outside
}

inline bool nudgeLeftVector(Wedge* wedge)
{
/*
So. wedge occludes at least part of my wall, on the left side.
It might actually be the case of an adjacent wall to my left. If so,
my end() is his begin(). And if so, I can change HIS rightSideVectorOut
to my right (begin) point, assuming my begin point is forward of (or on)
his wall. That means he can help kill other walls better.
*/
if (wedge->wall.begin() == wall.end() && !wedge->wall.outside(wall.begin())) //is it legal?
{
outVectors.update(VectorPair(wedge->wall.end(), wedge->wall.begin()));
wedge->outVectors.update(VectorPair(wall.end(), wall.begin()));
}

//turning this on drives the final wedge count down, but not very much
bool okToDoOut = true;

bool improved = false;
do {
if (wall.outside(wedge->wall.begin()))
break; //illegal move, stop here

if (clocknessOrigin(leftSideVector, wedge->wall.begin()) == Clockwise)
{
leftSideVector = wedge->wall.begin();

leftSidePoker = wedge;
improved = true;
}

if (okToDoOut)
{
okToDoOut = !wall.outside(wedge->wall.end());
if (okToDoOut)
outVectors.update(VectorPair(wedge->wall.end(), wall.begin()));
}

wedge = wedge->rightSidePoker;
} while (wedge);
return improved;
}

inline bool nudgeRightVector(Wedge* wedge)
{
/*
So. wedge occludes at least part of my wall, on the right side.
It might actually be the case of an adjacent wall to my right. If so,
my begin() is his end(). And if so, I can change HIS leftSideVectorOut
to my left (end()) point, assuming my begin point is forward of (or on)
his wall. That means he can help kill other walls better.
*/
if (wedge->wall.end() == wall.begin() && !wedge->wall.outside(wall.end())) //is it legal?
{
outVectors.update(VectorPair(wedge->wall.end(), wedge->wall.begin()));
wedge->outVectors.update(VectorPair(wall.end(), wall.begin()));
}

//turning this on drives the final wedge count down, but not very much
bool okToDoOut = true;

bool improved = false;
do {
if (wall.outside(wedge->wall.end()))
return improved; //illegal move

if (clocknessOrigin(rightSideVector, wedge->wall.end()) == Counterclockwise)
{
rightSideVector = wedge->wall.end();

rightSidePoker = wedge;
improved = true;
}
if (okToDoOut)
{
okToDoOut = !wall.outside(wedge->wall.begin());
if (okToDoOut)
outVectors.update(VectorPair(wall.end(), wedge->wall.begin()));
}
wedge = wedge->leftSidePoker;
} while (wedge);
return improved;
}
};

class AggregateWedge {
public:
VectorPair  vectors;
AggregateWedge*   nowOwnedBy;

bool isIn(const Point& p) const
{
return vectors.isIn(p);
}

bool isAllEncompassing() const {return vectors.leftVector.x == 0 && vectors.leftVector.y == 0;}

void init(Wedge* w)
{
vectors.setKnownAcute(w->leftSideVector, w->rightSideVector);
w->myAggregate = this;
nowOwnedBy = NULL;
}

//true if it caused a merge
bool add(Wedge* w) //signature reconstructed; the original listing lost it
{
if (nowOwnedBy != NULL) //already merged into another aggregate (guard assumed)
return false; //then I don't do anything
if (!vectors.update(VectorPair(w->wall.end(), w->wall.begin())))
return false; //disjoint

AggregateWedge* previousAggregate = w->myAggregate;
w->myAggregate = this; //now I belong to this

if (previousAggregate != NULL) //then it's a merge
{
vectors.update(previousAggregate->vectors);

//That means we have to redirect that to this
assert(previousAggregate->nowOwnedBy == NULL);
previousAggregate->nowOwnedBy = this;
return true;
}
return false;
}
};

class AggregateWedgeSet {
public:
int               at;
int               firstValid;
AggregateWedge    agList[8192];

float             minDistanceSq;
float             maxDistanceSq;

AggregateWedgeSet() : minDistanceSq(0), maxDistanceSq(FLT_MAX)  {}

void build(Wedge* wedgeList, const int numberWedges) //signature reconstructed; lost from the original listing
{
at = 0;
for (int j = 0; j < numberWedges; ++j)
{
Wedge* w = wedgeList + j;
w->myAggregate = NULL; //none yet
bool mergesHappened = false;
for (int i = 0; i < at; ++i)
mergesHappened |= agList[i].add(w); //loop body reconstructed

if (mergesHappened)
{
//some number of aggregates got merged into w->myAggregate
//We need to do fixups on the wedges' pointers
for (int k = 0; k < j; ++k)
{
AggregateWedge* in = wedgeList[k].myAggregate;
if (in->nowOwnedBy) //do you need an update?
{
in = in->nowOwnedBy;
while (in->nowOwnedBy) //any more?
in = in->nowOwnedBy;

wedgeList[k].myAggregate = in;
}
}

for (int k = 0; k < at; ++k)
agList[k].nowOwnedBy = NULL;
}

if (w->myAggregate == NULL) //time to start a new one
{
agList[at++].init(w);
}
} // all wedges in

minDistanceSq = FLT_MAX;
for (int j = 0; j < numberWedges; ++j)
{
//get nearest approach
float ds = wedgeList[j].nearestDistance;
if (ds < minDistanceSq)
minDistanceSq = ds;
}
minDistanceSq -= 0.25f; //fear roundoff - pull this in a little

firstValid = 0;
for (int i = 0; i < at; ++i)
{
firstValid = i;

#if 0 // Not sure this is working? Maybe relates to using L to change bounds?
//if this is the only valid wedge and it is all-encompassing, then we can
//walk all the wedges and find the furthest away point (which will be some
//wall endpoint). Anything beyond that cannot be in bounds.
if (agList[i].isAllEncompassing())
{
maxDistanceSq = 0;
for (int j = 0; j < numberWedges; ++j)
{
float ds = wedgeList[j].wall.begin().dotSelf();
if (ds > maxDistanceSq)
maxDistanceSq = ds;
ds = wedgeList[j].wall.end().dotSelf();
if (ds > maxDistanceSq)
maxDistanceSq = ds;
}
}
#endif
break;
}
}

const AggregateWedge* whichAggregateWedge(const Point p) const
{
for (int i = firstValid; i < at; ++i)
{
if (agList[i].nowOwnedBy) //merged away; guard reconstructed, the original condition was lost
continue;
if (agList[i].isIn(p))
{
return agList + i;
}
}
return NULL;
}
};

//#define UsingOuter //this slows us down. Do not use.

#ifdef UsingOuter
#define TheTest testOccludedOuter
#else
#define TheTest testOccluded
#endif

class AreaOfView {
public:
Point    center;
float    radiusSq; //squared vision radius; member reconstructed, the original listing lost the radius handling
int      numberWedges;
BoundingRect  bounds;
Wedge    wedges[8192];

//VERY experimental
AggregateWedgeSet ags;

inline AreaOfView(const Point& center_, const float radius) :
center(center_),
radiusSq(radius * radius),
numberWedges(0)
{
}

void changeTo(const Point& center_, const float radius)
{
center = center_;
radiusSq = radius * radius;
numberWedges = 0;
}

void recompute() //rebuild the wedges, with existing center and radius
{
numberWedges = 0;
}

inline bool isIn(Point p) const
{
p -= center;
const float distSq = p.dotSelf();
if (distSq > radiusSq) //condition reconstructed; a bare "return false" in the original lost it
return false;
for (int i = 0; i < numberWedges; ++i)
{
if (wedges[i].TheTest(p, distSq))
return false;
}
return true;
}

/* On the theory that the wedge that rejected your last point has a higher than
average chance of rejecting your next one, let the calling thread provide
space to maintain the index of the last hit
*/
inline bool isInWithCheat(Point p, int* hack) const
{
p -= center;
const float distSq = p.dotSelf();
if (distSq > radiusSq) //condition reconstructed, as in isIn()
return false;
if (distSq < ags.minDistanceSq)
return true; //this range is always unencumbered by walls
if (distSq > ags.maxDistanceSq) //not working. Why?
return false;

if (numberWedges == 0)
return true; //no boundaries

//try whatever worked last time, first. It will tend to win again
if (wedges[*hack].TheTest(p, distSq))
{
return false;
}

#define UseAgg
#define UseAggP

#ifdef UseAgg
const AggregateWedge* whichHasMe = ags.whichAggregateWedge(p);
if (whichHasMe == NULL)
return true; //can't be occluded!
#endif

//try everything else
for (int i = 0; i < *hack; ++i)
{
#ifdef UseAggP
#ifdef UseAgg
if (wedges[i].myAggregate != whichHasMe)
continue;
#endif
#endif
if (wedges[i].TheTest(p, distSq))
{
*hack = i; //remember what worked for next time
return false;
}
}

for (int i = *hack + 1; i < numberWedges ; ++i)
{
#ifdef UseAggP
#ifdef UseAgg //does seem to help speed, but doesn't work yet
if (wedges[i].myAggregate != whichHasMe)
continue;
#endif
#endif
if (wedges[i].TheTest(p, distSq))
{
*hack = i; //remember what worked for next time
return false;
}
}

return true;
}

inline bool isInWithWallExclusion(Point p, const Wall* excludeWall) const
{
p -= center;
const float distSq = p.dotSelf();
if (distSq > radiusSq) //condition reconstructed, as in isIn()
return false;
for (int i = 0; i < numberWedges; ++i)
{
if (wedges[i].source == excludeWall) //this one doesn't count
continue;
if (wedges[i].TheTest(p, distSq ))
return false;
}
return true;
}

void addWall(Wall* w, const float nearestDistance);
};

class AreaRef {
public:
AreaOfView*    a;
AreaRef() {a = NULL;}
void set(const Point& p, float radius)
{
if (a == NULL)
a = new AreaOfView(p, radius); //body reconstructed; the original listing lost these calls
else
a->changeTo(p, radius);
}
~AreaRef() {delete a;}
void empty() {delete a; a = NULL;}
AreaOfView* operator->() const {return a;}
};

class WallSet {
public:
int   length;
int   at;
WallAndDist*   list;

WallSet()
{
at = 0;
length = 2038;
list = (WallAndDist*)malloc(length * sizeof(*list));
}

~WallSet() {free(list);}

void add(Wall* w, const float distSq)
{
if (at >= length)
{
length *= 2;
list = (WallAndDist*)realloc(list, length * sizeof(*list));
}
list[at].wall = w;
const LineSeg* s = w->getSeg();
list[at].lenSq = s->p[0].distanceSq(s->p[1]);
list[at++].distSq = distSq;
}

inline void sortByCloseness()
{
qsort(list, at, sizeof *list, cmpWallDist);
}
};

void AreaOfView::addWall(Wall* w, const float nearestDistance)
{
if (numberWedges >= NUMOF(wedges))
return; //we are screwed

const LineSeg* seg = w->getSeg();
Point w1 = seg->p[0] - center;
Point w2 = seg->p[1] - center;
LineSegment* wallSeg = &wedges[numberWedges].wall;

switch (clocknessOrigin(w1, w2))
{
case Clockwise:
wallSeg->set(w2, w1);
break;
case Counterclockwise:
wallSeg->set(w1, w2);
break;
default:
return; //uninteresting, edge on
}

wedges[numberWedges].setInitialVectors(); //set left and right vectors from wall

const LineSegment right(Point(0,0), wallSeg->begin());
const LineSegment left(Point(0,0), wallSeg->end());

//now we start trimming
for (int i = 0; i < numberWedges; ++i)
{
//if this occludes both begin and end, it occludes the wall
if (wedges[i].testOccludedOuter(wallSeg->begin(), wedges[numberWedges].nearestDistance) &&
wedges[i].testOccludedOuter(wallSeg->end(), wedges[numberWedges].nearestDistance))
return;

bool changed = false;

//test right side
if (wedges[i].wall.doTheyIntersect(right))
{
changed = wedges[numberWedges].nudgeRightVector(wedges + i);
}
//test left side
if (wedges[i].wall.doTheyIntersect(left))
{
changed |= wedges[numberWedges].nudgeLeftVector(wedges + i);
}

if (changed)
{

if (wedges[numberWedges].rightSidePoker &&
wedges[numberWedges].rightSidePoker == wedges[numberWedges].leftSidePoker)
return; //cheap test for some total occlusion cases

if ( //simplify
LineSegment(Point(0,0), wedges[numberWedges].rightSideVector).clockness(
wedges[numberWedges].leftSideVector) != Counterclockwise)
{
return; //occluded
}
}

}
//we have a keeper
wedges[numberWedges].nearestDistance = nearestDistance;
wedges[numberWedges].source = w;
++numberWedges;
}

void AreaOfView::rebuildWedges() //the enclosing signature was lost from the listing; an AreaOfView method is assumed
{
//get the set of walls that can occlude.
WallSet relevant;

int initialRun = run1IsMapEdge? 2 : 1;

for (int run = initialRun; run < wallRuns; ++run)
{
const WallRun* currentLoop = &wallLists[run];
//does this loop overlap our area?
if (!currentLoop->bounds.overlapNonzero(bounds))
continue; //not an interesting loop, nothing in it can occlude

//run 1 is the outer loop; we care about walls facing in.
//subsequent runs are inner loops, and we care about walls facing out
const Facing relevantFacing = run==1? Inside :  Outside;

//some walls in this loop may cast shadows. Here we go looking for them.
for (int wall = 0; wall < currentLoop->wallCount; ++wall)
{
Wall* currentWall = currentLoop->list[wall];
//We don't currently have walls that are transparent (those are actually doors), but we could someday
if (currentWall->isTransparent())
continue; //toss windows

const LineSeg* currentSeg = currentWall->getSeg();

//We need to reject walls that are colinear with our rectangle bounds.
//That's important because we don't want to deal with walls that *overlap*
//any polygon sides; that complicates the intersection code. Walls don't
// overlap other walls, and we will discard edge-on walls, so walls
// don't overlap shadow lines; but without this they could overlap the
// original bounding rectangle (the polygon-edge-of-last-resort).
// We do have to consider overlap when creating shadows.
//
//Since we're looking at vertical and horisontal lines, which are pretty common,
// we can also quickly discard those which are outside the rectangle, as well as
// colinear with it.
if (currentSeg->isVertical())
{
if (currentSeg->p[0].x <= bounds.mins.x ||
currentSeg->p[0].x >= bounds.maxs.x)
continue;
}
else if (currentSeg->isHorisontal() &&
(currentSeg->p[0].y <= bounds.mins.y ||
currentSeg->p[0].y >= bounds.maxs.y))
continue;

//If the wall faces away from the center point, or is edge on, it
// doesn't cast a shadow, so boot it. (Getting rid of edge-on stuff
// avoids walls that overlap shadow lines).
if (currentSeg->facing(center) != relevantFacing)
continue; //faces away (or edge-on)

//We still could be dealing with an angled wall that's entirely out of range -
// and anyway we want to know the distance from the center to the line segment's
// nearest point, so we can sort.
//Getting the distance to a segment requires work.
float distSq;
if (currentSeg->p[1].dot(currentSeg->p[0], center) < 0)
{
//center is out beyond seg.p[1], so the distance is between p[1] and center
distSq = center.distanceSq(currentSeg->p[1]);
}
else if (currentSeg->p[0].dot(currentSeg->p[1], center) < 0)
{
//center is out beyond seg.p[0], so the distance is between p[0] and center
distSq = center.distanceSq(currentSeg->p[0]);
}
else
{
//distance from center to a perpendicular, dropped on the line segment
//Our LineSeg class gives us a, b, c as Ax + By = C, so we adapt for -C
float numerator = currentSeg->a * center.x +
currentSeg->b * center.y +
-currentSeg->c;
float demonSq =  currentSeg->a * currentSeg->a +
currentSeg->b * currentSeg->b;
distSq = numerator * numerator / demonSq;
}

//if the nearest point is outside or at the radius of interest, this wall can't matter.
if (distSq >= radiusSq) //condition reconstructed; the original bare "continue" lost it
continue; //out of reach

//Need to keep this one
relevant.add(currentWall, distSq); //call reconstructed from WallSet::add
}
}

//add doors, too. They don't have loops or bounding rectangles, and it's important to
// get the right seg. Skip transparent ones.
const WallRun* currentLoop = &wallLists[0];
//some walls in this loop may cast shadows. Here we go looking for them.
for (int wall = 0; wall < currentLoop->wallCount; ++wall)
{
Wall* currentWall = currentLoop->list[wall];
if (currentWall->isTransparent())
continue; //toss windows

const LineSeg* currentSeg = currentWall->getSeg();

//Horisontal and vertical lines are common, and easy to test for out of bounds.
//That saves a more expensive distance check.
if (currentSeg->isVertical())
{
if (currentSeg->p[0].x <= bounds.mins.x ||
currentSeg->p[0].x >= bounds.maxs.x)
continue;
}
else if (currentSeg->isHorisontal() &&
(currentSeg->p[0].y <= bounds.mins.y ||
currentSeg->p[0].y >= bounds.maxs.y))
continue;

if (currentSeg->facing(center) == Straight)
continue; //kill edge on walls

float distSq;
if (currentSeg->p[1].dot(currentSeg->p[0], center) < 0)
{
//center is out beyond seg.p[1], so the distance is between p[1] and center
distSq = center.distanceSq(currentSeg->p[1]);
}
else if (currentSeg->p[0].dot(currentSeg->p[1], center) < 0)
{
//center is out beyond seg.p[0], so the distance is between p[0] and center
distSq = center.distanceSq(currentSeg->p[0]);
}
else
{
//distance from center to a perpendicular, dropped on the line segment
//Our LineSeg class gives us a, b, c as Ax + By = C, so we adapt for -C
float numerator = currentSeg->a * center.x +
currentSeg->b * center.y +
-currentSeg->c;
float demonSq =  currentSeg->a * currentSeg->a +
currentSeg->b * currentSeg->b;
distSq = numerator * numerator / demonSq;
}

//if the nearest point is outside the radius of interest, this wall can't matter.
if (distSq >= radiusSq) //condition reconstructed; the original bare "continue" lost it
continue; //out of reach

//Need to keep this one
relevant.add(currentWall, distSq); //call reconstructed from WallSet::add
}
//sort by nearness; nearer ones should be done first, as that might make more walls
//identifiably irrelevant.
relevant.sortByCloseness();

//relevant.print();

//now, "do" each wall
for (int i = 0; i < relevant.at; ++i)
{
addWall(relevant.list[i].wall, relevant.list[i].distSq); //loop body reconstructed from addWall's signature
}

//build the aggregate wedge list
ags.build(wedges, numberWedges); //call reconstructed; the original listing omitted it
#if 0
if (center == lastAtD)
{
char buf[256];
sprintf(buf, "%d wedges in set", numberWedges);
MessageBox(gHwnd, buf, "Wedge Count", MB_OK);
}
#endif
}

static AreaRef areaOfView;

static Number visionMaxDistance;
static BoundingRect inView;
static Point viewPoint;
static unsigned char* changedTriangleFlags;
static unsigned char changedLightFlags[NUMOF(lights)];
static bool inComputeVisible = false;

static inline void doVisibleWork(int i, const int lim)
{
//if the only lights on are at the viewpoint (and no superlight), and the
//lights all stay within the vision limit (they usually do), then we don't
// have to do anything but copy the "lit" flag to the "you're visible" state.
//That's a huge win; we don't have to consult the area of view
bool soloLightRulesVisibility = !superLight;
if (soloLightRulesVisibility)
{
for (int k = 0; k < NUMOF(lights); ++k)
if (lights[k].on)
{
if (lights[k].p != lastAtD || lights[k].range() > vmd)
{
soloLightRulesVisibility = false;
break;
}
}
}
if (soloLightRulesVisibility)
{
//what's visible here is simply what's lit. We just copy the lit flag, no math needed.
for (; i < lim; ++i)
{
if (simpleTriList[i]->setVisiblity(
simpleTriList[i]->lit()
))
{
unsigned char v = 0x4 | (simpleTriList[i]->wasVisible << 1) | (simpleTriList[i]->isVisible & 1);
changedTriangleFlags[i] = v;
}
}
return;
}

int lookupHack[3] = {0,0,0};
for (; i < lim; ++i)
{
//we get a huge win from not calculating line of sight to unlit things
if (simpleTriList[i]->setVisiblity(
simpleTriList[i]->lit() &&
areaOfView->isInWithCheat(simpleTriList[i]->center, lookupHack)
))
{
unsigned char v = 0x4 | (simpleTriList[i]->wasVisible << 1) | (simpleTriList[i]->isVisible & 1);
changedTriangleFlags[i] = v;
}
}
}
//End


[1] To give a sense of the algorithm’s performance, computing the currently visible area in figure 5, and combining it with the previous area, took 0.003 seconds on a dual-core 2 GHz processor. However, keep in mind that figure 5 represents a very small and simple map (containing fewer than 100,000 triangles).
[2] GPC, PolyBoolean and LEDA are among these packages. Some discussion of the problems of runtime, runspace, and accuracy can be found at http://www.complex-a5.ru/polyboolean/comp.html. Boost’s GTL is promising, but it forces the use of integers for coordinates, and the implementation is still evolving.
[3] Formally speaking, there isn’t a cross product defined in 2D; cross products only exist in 3 or 7 dimensions. What’s referred to here is the z value computed as part of a 3D cross product: u.x * v.y - v.x * u.y.
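The sign convention from footnote 3, which the clockness functions in the listing rely on throughout, can be sketched as:

```cpp
//The "2D cross product": just the z component of the 3D cross product.
//Its sign gives the turn direction (positive: counterclockwise/left,
//negative: clockwise/right, zero: colinear).
struct Vec2 { float x, y; };

static float cross2(Vec2 u, Vec2 v)
{
    return u.x * v.y - v.x * u.y;
}
```

This single expression is what clocknessOrigin and LineSegment::clockness compute, differing only in which points are subtracted first.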