Projecting lines on circles

Hi! I wonder if (and how) it might be possible, in 2D, to project a couple of line segments (given by their endpoints) onto a circular perimeter. A picture says more than a thousand words.

The circle would mostly not be centered at the origin, but may be placed anywhere. The radius I believe doesn't really matter, since speaking in angles the result would always be the same, so it is just for display purposes here. The most suitable output of such a procedure for me would be some structure from which I could sample the colors depending on an angle. I would also need some sort of a "z-buffer" that could tell me, for each angle, how far the corresponding line I "see" is from the center.

Can that a) be achieved, and b) be done extremely fast?

Thanks for any help, and merry Christmas!
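To make that concrete, something like the following is roughly the structure I picture ending up with. This is just a sketch; all the names are placeholders, and the fixed angular resolution is only one possible choice:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Hypothetical output of the projection: one entry per angular slice,
// holding the color of the nearest segment seen in that direction and
// its distance from the circle center (the "z-buffer" part).
struct SliceSample {
    float    depth = std::numeric_limits<float>::max();
    unsigned color = 0;
};

struct CircularProjection {
    std::vector<SliceSample> slices;              // fixed angular resolution
    explicit CircularProjection(int n) : slices(n) {}

    // Look up the sample for an arbitrary angle (in radians).
    const SliceSample& sample(float angle) const {
        const float twoPi = 6.2831853f;
        float t = std::fmod(angle, twoPi);
        if (t < 0.0f) t += twoPi;
        size_t i = static_cast<size_t>(t / twoPi * slices.size());
        return slices[std::min(i, slices.size() - 1)];
    }
};
```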
a) Yes, it can be achieved.

b) Depends on what you mean by "extremely fast".

About how many segments do you expect to have? How often would you need to query an angle?
I would have roughly 400 to 2000 segments. My goal is to query about half a million circles (or points, that is), each with about 20-50 angular queries. I would want to do so in a matter of a few seconds, say ~5 or less. (Implementing this via a shader is an option, but for now I'm only interested in the theoretical possibilities.)

I do have the scene divided into squares, each of which contains only the segments it covers, leaving some options for culling, although I couldn't picture an effective way of using this here yet.

Edit: Why I think there might be a way of doing this that fast is that a graphics card practically does the same thing, only in 3D. My "circle" would be the FOV, its center the camera, and my segments would be the polygons. Thinking of a scene filled with only some thousand polygons that just have a color, and nothing else going on (no lighting, texturing etc.), one can easily get to some thousand FPS. I pictured my problem to be similar, just in 2D, and with a "FOV" of 360°. That is where this idea was born.
I don't think there's anything you can do to make it that fast. Perhaps someone else has better ideas than I do, though.
To what precision do you need the projection on the circle, in terms of angular granularity? Because if it can be sufficiently coarse to make this reasonable, you can cast rays from the center of the circle and see what each ray hits. The naive implementation of that would be O(kn), where k is the number of angles and n is the number of line segments, but you can probably get it to more like O(k log n) by using an efficient data structure to store the line segments -- maybe a quadtree?
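A minimal sketch of that naive ray cast, just to illustrate (the types are placeholders, and the ray direction is assumed to be unit length so that t is an actual distance):

```cpp
#include <cmath>
#include <optional>
#include <utility>
#include <vector>

struct Vec2 { float x, y; };
struct Segment { Vec2 a, b; unsigned color; };

// Distance along the ray (origin + t*dir) to the segment, or no value if
// the ray misses it.
std::optional<float> rayVsSegment(Vec2 origin, Vec2 dir, const Segment& s)
{
    Vec2 e  = { s.b.x - s.a.x, s.b.y - s.a.y };         // segment direction
    Vec2 ao = { s.a.x - origin.x, s.a.y - origin.y };    // origin -> segment start
    float denom = dir.x * e.y - dir.y * e.x;              // 2D cross product
    if (std::fabs(denom) < 1e-12f) return std::nullopt;   // parallel, no hit
    float t = (ao.x * e.y  - ao.y * e.x)  / denom;         // distance along the ray
    float u = (ao.x * dir.y - ao.y * dir.x) / denom;        // position on the segment
    if (t >= 0.0f && u >= 0.0f && u <= 1.0f) return t;
    return std::nullopt;
}

// Naive O(n) query for one angle: test every segment, keep the nearest hit
// (segment index and distance from the center).
std::optional<std::pair<int, float>> castRay(Vec2 center, float angle,
                                             const std::vector<Segment>& segs)
{
    Vec2 dir = { std::cos(angle), std::sin(angle) };
    std::optional<std::pair<int, float>> best;
    for (int i = 0; i < (int)segs.size(); ++i)
        if (auto t = rayVsSegment(center, dir, segs[i]); t && (!best || *t < best->second))
            best.emplace(i, *t);
    return best;
}
```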

Otherwise, if you want the projection on the circle to be perfect, i.e. infinitesimal granularity, off the top of my head, I'd say you'd have to sort the lines from back to front and then do the projection relying on the painter's algorithm, which would be O(n^2).
Shooting rays is exactly what I currently do, and it does work so far, but I'm somehow not perfectly satisfied with it. Mostly because, if I'm too far away from any segments, the angular resolution would have to be much too high to avoid streaks in the picture (this is, like some earlier topics of mine, about filling a bitmap on the basis of lines on it). Adaptive angles are not an option, because a necessary optimization precomputes Bresenham-like square marchings for every angle to come.

This really is a longshot idea, because I realized that the task can be seen as somewhat similar to what GPUs do, where polygons are easily projected onto a view by some matrix operations. I hoped I could somehow build on this, but if the consensus turns out to be "not feasible", that's okay too. (Except that I would continue to try to come up with something day in and day out ;))
It would be easier to suggest optimizations if we knew what you were going to use it for.

That said, I guess it could be done in O(n log(n)). I haven't thought it through for a complete algorithm, but it should work something like the following. You might need to handle some special cases.

First calculate the angle to each point. For example a point straight to the right would be 0, and straight to the left 180, etc. This is O(n).
Then sort these points, which is dependent on the sorting algorithm, but can be O(n log(n)). Each point will also be marked as either a 'start point' or 'end point' for the line it belongs to, where the start point is the one with the lesser angle. You might have to handle the case where a line crosses the zero-degree edge.
Then go through the points in order, as a 'rotating sweeping line' algorithm, and handle every encountered point. Add each start point to a sorted tree. The comparison at each node is which side of the current node line the new point is on. Front is the same side as the circle center. So the first start point added will be the root, and the second point will be compared to the line the first point belongs to, and added to the correct side in the tree.
With the correct tree insert and delete is O(log(n)), and it's done n times, so you get O(n log(n)) there too.
Whenever you hit an end-point, you remove the corresponding node from the tree.

Whenever a line is at the front of the tree, it is the currently visible line. So whenever the front line changes (which can happen on both insertions and deletions), another line is now in front, and you add the angle interval from the last handled point to the current handled point to your circle's color intervals, with the color of the line that was in front until now.

The algorithm continues until each line has been added to the tree and subsequently removed.
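A rough, simplified sketch of that sweep, reusing the Vec2/Segment/rayVsSegment helpers from the ray-casting sketch above. It keeps a flat active set and re-probes the nearest segment at each event instead of maintaining the ordered tree (so O(n) per event rather than O(log n)), and it ignores segments that cross the zero-degree line:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Interval { float from, to; int segment; };  // visible angular interval
struct Event    { float angle; bool start; int segment; };

// Sweep a ray around 'center' and record which segment is nearest in each
// angular interval between two consecutive endpoint events.
std::vector<Interval> sweep(Vec2 center, const std::vector<Segment>& segs)
{
    std::vector<Event> events;
    for (int i = 0; i < (int)segs.size(); ++i) {
        float a0 = std::atan2(segs[i].a.y - center.y, segs[i].a.x - center.x);
        float a1 = std::atan2(segs[i].b.y - center.y, segs[i].b.x - center.x);
        if (a0 > a1) std::swap(a0, a1);            // lesser angle is the start
        events.push_back({a0, true,  i});
        events.push_back({a1, false, i});
    }
    std::sort(events.begin(), events.end(),
              [](const Event& x, const Event& y) { return x.angle < y.angle; });

    std::vector<int> active;
    std::vector<Interval> result;
    for (size_t e = 0; e + 1 < events.size(); ++e) {
        if (events[e].start) {
            active.push_back(events[e].segment);
        } else {
            auto it = std::find(active.begin(), active.end(), events[e].segment);
            if (it != active.end()) active.erase(it);
        }

        // Probe the middle of the interval up to the next event and keep
        // whichever active segment is closest to the center there.
        float mid = 0.5f * (events[e].angle + events[e + 1].angle);
        Vec2 dir = { std::cos(mid), std::sin(mid) };
        int best = -1; float bestT = 1e30f;
        for (int i : active)
            if (auto t = rayVsSegment(center, dir, segs[i]); t && *t < bestT) {
                bestT = *t; best = i;
            }
        if (best >= 0)
            result.push_back({events[e].angle, events[e + 1].angle, best});
    }
    return result;
}
```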
The problem is that, assuming a single-threaded 3 GHz processor, 5 seconds of processing time and 500,000 "circles", one has a budget of about 30,000 cycles (3 GHz × 5 s = 15 billion cycles, divided by 500,000) to do the whole "rendering" of up to 2000 segments and the 20-50 queries. That really isn't very much at all.

There are so few queries per circle that it might not even be worth doing any rendering; you are then better off just doing ray tracing (which seems to be what you are doing). Maybe you should just concentrate on doing the ray tracing fast enough.

You can have a grid where each cell knows which segments it contains (perhaps partially), and you can advance through the grid and test for intersection with only the segments in cells that the ray goes through. Depending on how your segments are distributed, this might be fast enough. Now your budget is only 600 cycles per query, though.
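Something like the classic grid-stepping traversal (Amanatides & Woo style) could drive those per-cell tests. A sketch, with cellSize and the mapping from cell coordinates to segments left as placeholders; the caller would test the segments stored in each returned cell and stop early at the first hit or when the ray leaves the grid:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Return the grid cells a ray from (ox, oy) in direction (dx, dy) passes
// through, in order, up to maxSteps cells.
std::vector<std::pair<int,int>> cellsAlongRay(float ox, float oy,
                                              float dx, float dy,
                                              float cellSize, int maxSteps)
{
    int cx = (int)std::floor(ox / cellSize);
    int cy = (int)std::floor(oy / cellSize);
    int stepX = dx > 0 ? 1 : -1;
    int stepY = dy > 0 ? 1 : -1;
    // Ray parameter at which the next vertical / horizontal cell boundary is crossed.
    float nextX = (cx + (stepX > 0 ? 1 : 0)) * cellSize;
    float nextY = (cy + (stepY > 0 ? 1 : 0)) * cellSize;
    float tMaxX   = dx != 0 ? (nextX - ox) / dx : 1e30f;
    float tMaxY   = dy != 0 ? (nextY - oy) / dy : 1e30f;
    float tDeltaX = dx != 0 ? cellSize / std::fabs(dx) : 1e30f;
    float tDeltaY = dy != 0 ? cellSize / std::fabs(dy) : 1e30f;

    std::vector<std::pair<int,int>> out;
    for (int i = 0; i < maxSteps; ++i) {
        out.push_back({cx, cy});          // test this cell's segments here
        if (tMaxX < tMaxY) { cx += stepX; tMaxX += tDeltaX; }
        else               { cy += stepY; tMaxY += tDeltaY; }
    }
    return out;
}
```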

Quote:
This really is a longshot idea, because I realized that the task can be seen as somewhat similar to what GPUs do, where polygons are easily projected onto a view by some matrix operations. I hoped I could somehow build on this, but if the consensus turns out to be "not feasible", that's okay too. (Except that I would continue to try to come up with something day in and day out ;))

Yeah, I see what you're saying... Maybe you could trick the GPU into solving your problem for you?

Actually, based on the GPU model, another way you could do this would be to set up something similar to a depth buffer. In other words, iterate through the line segments in an arbitrary order and project them onto the circle, but divide the circle into a finite number of slices and in each slice keep track of the depth of the current line segment portion you have projected onto that slice. But I guess that has the same problem as what you are already doing?
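A sketch of that slice/depth-buffer idea, again reusing the Vec2/Segment/rayVsSegment helpers from the ray-casting sketch above; wrap-around at the zero-degree line is ignored for brevity, and the slice counts and names are just placeholders:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// For every segment, walk the slices its angular extent covers and keep
// the nearest depth (and that segment's color) per slice.
void projectWithDepthBuffer(Vec2 center, const std::vector<Segment>& segs,
                            std::vector<float>& depth,      // one entry per slice
                            std::vector<unsigned>& color)   // one entry per slice
{
    const int n = (int)depth.size();
    const float pi = 3.14159265f, twoPi = 2.0f * pi;
    std::fill(depth.begin(), depth.end(), 1e30f);

    for (const Segment& s : segs) {
        float a0 = std::atan2(s.a.y - center.y, s.a.x - center.x);
        float a1 = std::atan2(s.b.y - center.y, s.b.x - center.x);
        if (a0 > a1) std::swap(a0, a1);
        int first = (int)std::floor((a0 + pi) / twoPi * n);
        int last  = (int)std::ceil ((a1 + pi) / twoPi * n);
        for (int i = first; i <= last && i < n; ++i) {
            float angle = (i + 0.5f) / n * twoPi - pi;   // slice center angle
            Vec2 dir = { std::cos(angle), std::sin(angle) };
            auto t = rayVsSegment(center, dir, s);
            if (t && *t < depth[i]) { depth[i] = *t; color[i] = s.color; }
        }
    }
}
```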

Oh, yeah, and the other way would be O(n log n) as Erik says above. I was having a brain fart and thought for some reason that just doing the actual projection once the lines were sorted would be n^2...
I'm sorry, I really should have given some more background, but I didn't want to make the posting too large. For completeness' sake, I may reference the thread where I was helped to get to where I stand right now. (Which is exactly what you suggest in your last paragraph, alvaro.)

Erik's idea sounds intriguing, but I wonder if consecutively building and deconstructing a binary tree for every pixel of a bitmap can do it in time. The idea really is great, though! I may well try that one out; at least I may use it for a "be really thorough, take your time" mode, because the output should be flawless.

To complete the picture of what I do:
My lines are actually rendered Catmull-Rom splines (rendered in the sense of "made into segments", not as an image) that define edges on a bitmap. My task is to reconstruct a bitmap largely resembling the original the edges came from. Working directly with the splines always turned out to be MUCH too slow, which is why I do this with sufficiently accurate line segments - and that is why there are so many of them. The number of pixels of the target bitmap is what defines my "half a million circles": each pixel needs this, and I have considered target sizes of roughly 800x600 pixels, though they may vary considerably as well.
