Medium9

Projecting lines on circles


Hi! I wonder if (and how) it might be possible to project a couple of 2D line segments (given by their endpoints) onto a circular perimeter. A picture says more than a thousand words: The circle mostly would not be centered at the origin, but may be placed anywhere. The radius, I believe, doesn't really matter, since speaking in angles the result would always be the same, so it is just for display purposes here. The most suitable output of such a procedure for me would be some structure from which I could sample the colors depending on an angle. I would also need some sort of "z-buffer" that could tell me, for each angle, how far the corresponding line I "see" is from the center. Can that a) be achieved, and b) be done extremely fast? Thanks for any help, and Merry Christmas!

I would have roughly 400 to 2000 segments. My goal is to query about half a million circles (or points, that is), each with about 20-50 angular queries. I would want to do so in a matter of a few seconds, say ~5 or less. (Implementing this via a shader is an option, but for now I'm only interested in the theoretical possibilities.)

I do have the scene divided into squares, each of which contains only the segments it covers, leaving some options for culling, although I couldn't picture an effective way of using this here yet.

Edit: The reason I think there might be a way of doing this that fast is that a graphics card practically does the same thing, only in 3D. My "circle" would be the FOV, its center the camera, and my segments would be the polygons. Thinking of a scene filled with only some thousand polygons that just have color, and nothing else going on (no lighting, texturing etc.), one can easily get to some thousand FPS. I pictured my problem to be similar, just in 2D, and with a "FOV" of 360°. That is where this idea was born.

To what precision do you need the projection on the circle, in terms of angular granularity? Because if it can be sufficiently coarse to make this reasonable, you can cast rays from the center of the circle and see what the ray hits. The naive implementation of that would be O(kn) where k is the number of angles and n is the number of line segments but you can probably get it to be more like O(k log n) by using an efficient data structure to store the line segments -- maybe a quadtree?
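The naive O(kn) ray cast mentioned here can be sketched as follows; the function and struct names are mine, not from the thread, and color bookkeeping is omitted:

```cpp
#include <cmath>
#include <limits>
#include <vector>

struct Vec2 { double x, y; };
struct Segment { Vec2 a, b; };  // a real version would also carry a color

static double cross(Vec2 u, Vec2 v) { return u.x * v.y - u.y * v.x; }

// Distance along the ray from `center` in direction `angle` to the nearest
// segment, or +infinity if nothing is hit. O(n) per ray, O(k*n) for k angles.
double nearestHit(Vec2 center, double angle, const std::vector<Segment>& segs) {
    Vec2 d{std::cos(angle), std::sin(angle)};
    double best = std::numeric_limits<double>::infinity();
    for (const Segment& s : segs) {
        Vec2 p{s.a.x - center.x, s.a.y - center.y};
        Vec2 e{s.b.x - s.a.x, s.b.y - s.a.y};
        double denom = cross(d, e);
        if (std::abs(denom) < 1e-12) continue;  // ray parallel to segment
        double t = cross(p, e) / denom;         // distance along the ray
        double u = cross(p, d) / denom;         // position on the segment
        if (t > 0.0 && u >= 0.0 && u <= 1.0 && t < best) best = t;
    }
    return best;
}
```

Keeping the smallest positive t per angle gives exactly the angular "z-buffer" asked for in the first post.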

Otherwise, if you want the projection on the circle to be perfect, i.e. infinitesimal granularity, off the top of my head, I'd say you'd have to sort the lines from back to front and then do the projection relying on the painter's algorithm, which would be O(n^2).

Shooting rays is exactly what I currently do, which does work so far, but I'm somehow not perfectly satisfied with it. Mostly because the angular resolution has to be much too high to avoid streaks in the picture when I'm far away from any segments (this is, like some earlier topics of mine, about filling a bitmap on the basis of lines on it). Adaptive angles are not an option, because a necessary optimization precomputes Bresenham-like square marchings for every angle to come.

This really is a long-shot idea, because I realized that the task can be seen as somewhat similar to what GPUs do, where polygons are easily projected onto a view by some matrix operations. I hoped I could somehow build on this, but if the consensus turns out to be "not feasible", that's okay too. (Except that I would continue to try to come up with something day in and day out ;))

It would be easier to suggest optimizations if we knew what you were going to use it for.

That said, I guess it could be done in O(n log(n)). I haven't thought it through for a complete algorithm, but it should work something like the following. You might need to handle some special cases.

First calculate the angle to each point. For example, a point straight to the right would be 0°, and straight to the left 180°, etc. This is O(n).
Then sort these points; the cost depends on the sorting algorithm, but can be O(n log(n)). Each point will also be marked as either a 'start point' or an 'end point' for the line it belongs to, where the start point is the one with the lesser angle. You might have to handle the case where a line crosses the zero-degree edge.
Then go through the points in order, as a 'rotating sweep line' algorithm, and handle every encountered point. Add each start point to a sorted tree. The comparison at each node is which side of the current node's line the new point is on. 'Front' is the same side as the circle center. So the first start point added will be the root, and the second point will be compared to the line the first point belongs to, and added to the correct side of the tree.
With the correct tree, insert and delete are O(log(n)), and they're done n times, so you get O(n log(n)) there too.
Whenever you hit an end point, you remove the corresponding node from the tree.

Whenever a line is at the front of the tree, it is the currently visible line. So whenever the front line changes (which can happen on both insertions and deletions), another line is now in front, and you add the angle interval from the last handled point to the current one to your circle's color intervals, with the color of the line that was in front until now.

The algorithm continues until each line has been added to the tree and subsequently removed.
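The first two steps of this sweep (per-endpoint angles plus the start/end marking) might look like the sketch below; the depth-ordered tree itself is omitted, and all names are mine:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };
struct Segment { Vec2 a, b; };

struct Event {
    double angle;  // angle of the endpoint as seen from the center
    int    seg;    // index of the segment this endpoint belongs to
    bool   start;  // true = lesser-angle endpoint, false = greater-angle one
};

// Build the sorted event list for the rotating sweep. Segments that cross
// the zero-degree edge would need to be split first; that case is skipped.
std::vector<Event> buildEvents(Vec2 c, const std::vector<Segment>& segs) {
    std::vector<Event> ev;
    for (int i = 0; i < (int)segs.size(); ++i) {
        double a0 = std::atan2(segs[i].a.y - c.y, segs[i].a.x - c.x);
        double a1 = std::atan2(segs[i].b.y - c.y, segs[i].b.x - c.x);
        if (a0 > a1) std::swap(a0, a1);  // lesser angle becomes the start
        ev.push_back({a0, i, true});
        ev.push_back({a1, i, false});
    }
    std::sort(ev.begin(), ev.end(),
              [](const Event& l, const Event& r) { return l.angle < r.angle; });
    return ev;
}
```

The sweep would then walk this list once, inserting on `start` events and erasing on the others.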

The problem is that, assuming a single thread, a 3 GHz processor, 5 seconds of processing time and 500,000 "circles", one has a budget of 30,000 cycles to do the whole "rendering" of up to 2000 segments plus the 20-50 queries. That really isn't very much at all.

There are so few queries per circle that it might not even be worth doing any rendering; you are then better off just doing ray tracing (which seems to be what you are doing). Maybe you should just concentrate on doing the ray tracing fast enough.

You can have a grid where each cell knows which segments it contains (perhaps partially), and you can advance through the grid and test for intersection with only the segments in cells that the ray goes through. Depending on how your segments are distributed, this might be fast enough. Now your budget is only 600 cycles per query, though.
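That "advance through the grid" step can be sketched with a standard voxel-traversal loop (the Amanatides/Woo style); the per-cell segment tests are left as a placeholder, and names and the fixed step count are mine:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// March a ray through a uniform grid of `cell`-sized squares, visiting each
// crossed cell once. The per-cell ray/segment tests would replace the
// push_back below; grid bounds and exit conditions are simplified here.
void marchGrid(double ox, double oy, double angle, double cell, int maxSteps,
               std::vector<std::pair<int, int>>& visited) {
    double dx = std::cos(angle), dy = std::sin(angle);
    int ix = (int)std::floor(ox / cell), iy = (int)std::floor(oy / cell);
    int stepX = dx > 0 ? 1 : -1, stepY = dy > 0 ? 1 : -1;
    // Parametric distance to the first vertical / horizontal cell boundary.
    double tMaxX = dx != 0 ? ((ix + (dx > 0)) * cell - ox) / dx : 1e30;
    double tMaxY = dy != 0 ? ((iy + (dy > 0)) * cell - oy) / dy : 1e30;
    double tDeltaX = dx != 0 ? cell / std::abs(dx) : 1e30;
    double tDeltaY = dy != 0 ? cell / std::abs(dy) : 1e30;
    for (int i = 0; i < maxSteps; ++i) {
        visited.push_back({ix, iy});  // test segments stored in cell (ix, iy) here
        if (tMaxX < tMaxY) { ix += stepX; tMaxX += tDeltaX; }
        else               { iy += stepY; tMaxY += tDeltaY; }
    }
}
```

The loop stops as soon as a cell yields a hit, so only cells the ray actually crosses are ever inspected.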

Quote:

This really is a long-shot idea, because I realized that the task can be seen as somewhat similar to what GPUs do, where polygons are easily projected onto a view by some matrix operations. I hoped I could somehow build on this, but if the consensus turns out to be "not feasible", that's okay too. (Except that I would continue to try to come up with something day in and day out ;))


Yeah, I see what you're saying, ... Maybe you could trick the GPU into solving your problem for you?

Actually, based on the GPU model ... another way you could do this would be to set up something similar to a depth buffer. In other words, iterate through the line segments in an arbitrary order and project them onto the circle, but divide the circle into a finite number of slices and in each slice keep track of the depth of the current line segment portion you have projected onto the slice. But I guess that has the same problem as what you are already doing?

Oh, yeah, and the other way would be O(n log n) as Erik says above. I was having a brain fart and thought for some reason that just doing the actual projection once the lines were sorted would be n^2...

I'm sorry, I really should have given some more background, but didn't want to make the posting too large. For completeness' sake, I may reference the thread where I was helped to get where I stand right now. (Which is exactly what you suggest in your last paragraph, alvaro.)

Erik's idea sounds intriguing, but I wonder if consecutively building and deconstructing a binary tree for every pixel of a bitmap can do it in time. The idea though really is great! I may really try that one out; at least I may use it for some "be really thorough, take your time" mode, because the output should be flawless.

To complete the picture of what I do:
My lines are actually rendered (in the sense of "made into segments", not as an image) Catmull-Rom splines that define edges on a bitmap. My task is to reconstruct a bitmap largely resembling the original the edges came from. Working directly with the splines always turned out to be MUCH too slow, which is why I do this with sufficiently accurate line segments - and that is why there are so many of them. The number of pixels of the target bitmap is what defines my "half a million circles": each pixel needs this, and I considered target sizes of roughly 800x600 pixels, though they may vary considerably as well.

If I understand correctly, you have a large number of lines that don't change, and a large number of points somewhere among all these lines. From each point, you want to find the first intersection, in 50 different directions?

That 50 isn't mandatory, but roughly that granularity has produced largely proper results. Other than that, your description is accurate.

However, what I hoped could spare me the need for any fixed angular granularity was if I could somehow gather just those angles where a new spline segment gets in front.

While writing this, I had another idea for approaching this, based on what you described earlier:
I build the mentioned tree only for the start and end points of the original splines (they never intersect each other!), giving me a direct indication of which "rendered" splines (read: the segments produced from them) need proper testing. This may really be something, because I only have to deal with ~50 to 300 splines in total, which is a whole order of magnitude fewer than segments. Still many, though. What do you think?

A grid should definitely be a very good idea in that case. The time to build the structure won't matter if it's only done once and used for a few million rays. A BSP could also work.

To do it exactly, you don't really have to sort every line perfectly. Many splines will probably be far enough apart that a single line between the end points of the spline will give the exact same result. You could perhaps build a structure that used that approach for splines far away from each other, and only subdivided the spline when two splines intersect each other's bounding boxes, or something similar.

You can calculate the angle by using the dot product, because the dot product of two unit vectors is the cosine of the angle between them.



In pseudocode:

vector2d vec1 = linepoint1 - circlepos, vec2 = linepoint2 - circlepos;

vec1 /= vec1.length();
vec2 /= vec2.length();

// dot product of unit vectors = cos(angle); take acos to get the angle itself
double angle = acos(vec1.x * vec2.x + vec1.y * vec2.y);
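Note that the snippet above concerns the angle *between* the two endpoints, while the angular sort in the sweep approach needs each endpoint's own absolute angle around the center. That could look like this sketch (the function name is mine):

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Absolute angle of one endpoint as seen from the circle center, remapped
// from atan2's (-pi, pi] range into [0, 2*pi) so points sort cleanly.
double pointAngle(Vec2 point, Vec2 center) {
    const double twoPi = 2.0 * std::acos(-1.0);
    double a = std::atan2(point.y - center.y, point.x - center.x);
    return a < 0 ? a + twoPi : a;
}
```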


Maybe I haven't really understood then. I thought I had to rebuild (and gradually deconstruct) such a tree for every pixel then.


Another thing just came to my mind... silly ideas are plenty in my brain:
I might actually use the normal procedure of rendering a scene with DirectX, once for each pixel.
For that, I would extrude all line segments into Y, making them a bit like lots of panels standing in a large room. I then place my camera at a position corresponding to a pixel, give it a 360° FOV, and render to a target of height = 1 pixel and width = whatever resolution I may like.
This would leave me with a strip of pixels and a corresponding depth buffer. If I then took the average of all pixels weighted by their depth, I would be able to produce what I need: one color for the one pixel.
Downside: I need to do it for every pixel (though the geometry is fixed, a good thing!), and I am not sure if pulling the render target's data from VRAM to system RAM that often might render the approach useless. It looks to me, though, that this may be complete overkill, since I expand into 3D for plain 2D uses, but at least I would (ab)use the hardware in a way it was meant to work. But I have too little experience with 3D APIs to evaluate if this really can be an option. Does all this sound reasonable in any way?

Quote:
Original post by Medium9
Maybe I haven't really understood then. I thought I had to rebuild (and gradually deconstruct) such a tree for every pixel then.


If the lines to intersect against don't change, then no, you most definitely only have to build it once.

I'll try to express how I understood your method in my own words. It would be great if you could correct me where I have my error!

- I pick one particular pixel I want to evaluate.
- Calculate the angle (relative to some axis) of the vector from that pixel's position to every line's start and end point.
- Sort points by their angle, while retaining the link to their partner (some pointer or such)
- Start at some angle (practically the first entry in that sorted pointlist)
- put that point into a tree as root
[loop]
- go to the next entry
- if it's an endpoint that belongs to a former point stored in the tree, determine whether it lies on the same side of the current "active" segment as the pixel I currently process
- if so, store it as the new root point, the new "active" segment
- if not, store it where it fits in between
- each time I come along an endpoint, remove its linked start point from the structure
- terminate when there is no more startpoint to remove
[endloop]
- Do all that for each and every pixel in the target bitmap (this is where my assumption of rebuilding and -sorting for every pixel comes from)

Since you spoke of a tree, and what I described looks more like a list, I am sure I really did misunderstand something. The approach however looks too tasty not to ask you for a correction. Especially because calculating the angles and sorting the list over and over again for each pixel will most definitely turn out to be really slow, I fear.

That one has to be done for each pixel. If you use a BSP or grid structure of the lines to intersect with, then it only has to be done once for the whole image. I don't know which one would be fastest. With complex data structures, memory access is more likely to become the bottleneck.

Your list seems good, but I'm not sure I understand exactly what you mean with the tree. The end points always remove the line from the tree, and the start points always add a line to the tree. Whenever there is a change to which point is at the front of the tree, and only then, you add the interval from angle(old front point) -> angle(new front) with color(old front) to the list of intervals on the circle.
If you calculate this whole thing for every pixel, I doubt very much that it will be faster than a tree that doesn't have to be recalculated, unless you have a lot more than 50 rays per pixel.

Mhhh, I really have some difficulty picturing what that tree would have to look like, how I would approach each pixel with it, or how it may save me from calculating all angles and sorting by them for each and every pixel anew. I must admit though, that I haven't used a BSP so far (yeah, I'm fairly new to graphics-related programming - or at least to more or less advanced techniques like these. I know what it is though. Also, vector math in general is no real issue).

Also, I have trouble seeing where dividing into squares may come in handy here. That notion caught my interest especially because I already have things split up in a grid, so I could just use that.

Quote:
Original post by Medium9
Mhhh, I really have some difficulty picturing what that tree would have to look like, how I would approach each pixel with it, or how it may save me from calculating all angles and sorting by them for each and every pixel anew.


It won't. That algorithm requires everything to be recalculated for each pixel.

Quote:
I must admit though, that I haven't used a BSP so far (yeah, I'm fairly new to graphics-related programming - or at least to more or less advanced techniques like these. I know what it is though. Also, vector math in general is no real issue).

Also, I have trouble seeing where dividing into squares may come in handy here. That notion caught my interest especially because I already have things split up in a grid, so I could just use that.


Grids and BSPs only require building the structure once.

I think there was a mention in your previous thread you linked to about just drawing a line, kinda like Bresenham, but including every touched pixel.
This is the easiest and possibly fastest way. Simply create an image, into which you draw each and every one of your lines, with the correct color. Then for each point, you draw 50 straight rays in that image, in the correct direction. Whenever that ray is drawn on a pixel with a different color than the background, you have hit an existing line from the previous step, and know the color to use. You do everything in pixels. If you need more precision, create a larger image.
This is easily done in a shader, and could be quite fast.

If precision isn't good enough, you could do intersection-tests for every line that intersects each of your "pixels" in the grid. If you increase resolution enough, the result will be the same. I don't know which way will be the fastest.
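Erik's ray step through the pre-drawn image could be sketched like this; a simple unit DDA step stands in for true Bresenham, and all names are mine:

```cpp
#include <cmath>
#include <vector>

// Step along a ray through a W x H pixel buffer (row-major, 0 = background)
// until a drawn line pixel is found. Returns that pixel's color, or 0 if
// the ray leaves the image without hitting anything.
int traceRay(const std::vector<int>& img, int W, int H,
             double x, double y, double angle) {
    double dx = std::cos(angle), dy = std::sin(angle);
    while (true) {
        x += dx; y += dy;
        int px = (int)std::floor(x), py = (int)std::floor(y);
        if (px < 0 || px >= W || py < 0 || py >= H) return 0;  // left the image
        int c = img[py * W + px];
        if (c != 0) return c;  // hit a drawn line
    }
}
```

Precision then scales directly with the resolution of the buffer the lines were drawn into, as the post says.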

Oh, okay. Then I might have understood something after all. Too bad, since I do know that sorting at that magnitude is in no way feasible. I needed that for a much earlier approach where I sorted by distance from a pixel. I guess I read more into what you actually said.

That Bresenham-like algorithm is used to traverse my grid. I currently precompute an array of blocksize^2 (in pixels) size, and for each pixel in that block I attach a list of offsets that tell me which blocks a ray will hit as it progresses from that pixel. I do this several times, once for every angle until I am fully around, so that in the end I just have to determine any pixel's coordinates in its block, and can then simply look up which further blocks to visit on my way to finding an intersecting line at that angle. The algorithm is not actually used for drawing.

Do you mean I should actually "draw" my rays in that last thing you mentioned? I understood it like: draw all rays in the color of the line they eventually will hit (and stop them there). Consider these rays as lines for further pixels to come. Meaning: if a later pixel's ray hits an earlier ray, that is the color to use for the new ray.
Wouldn't any "ray line" cut off a part of the actual lines for all pixels that lie on the other side of that ray, which could perfectly well hit an actual line behind that ray line if not stopped by it?

I will show images of what the situation is. Maybe that can clear up a bit what I really have to deal with.

The original image:


The extracted edges (it's the edges of the edges actually):

(Antialiasing is caused by resizing the picture here.)

The reconstruction as I want it to be:


Basically, every pixel's color depends on the lines it "sees" when making a full turn. That is, it depends on the colors AND the distances, as I want a smooth transition everywhere.
In that example, the issue why I want to find another solution is not visible. Due to the angular granularity, pixels that are far away from most lines tend to produce fields with clearly visible "streaks", because they alternately miss and hit some closer line when progressing in x or y.

Also, I want to apologize for my apparent difficulties in expressing and understanding today. Don't know where that comes from, I usually get things faster :(

Quote:
Original post by Medium9
Do you mean I should actually "draw" my rays in that last thing you mentioned?


No, I meant going through the same procedure as drawing the line, but not actually updating pixels when tracing the rays. The lines to be hit should be drawn to the buffer which is inspected by the ray tracing.

I see what you are trying to do now, and that it can require quite a few rays to approximate it properly. Does the actual distance to the seen lines influence the weight with which each line's color is applied, or only how many degrees on the circle are taken up by that line?

If you need the exact intervals, then an algorithm like the one sorting the points by angle is probably what you need.

Approximating the actual circle by segments could open up new techniques, and might not influence the quality too much. Then you could change the whole thing to projection, and draw the lines with the Z-buffer or similar. This could also be done very fast with a BSP tree and front-to-back drawing, stopping when all pixels are filled.

General occlusion culling techniques could be used to bring the number of lines down, as most lines can't possibly be visible from large parts of the map. This is probably one of the things you could gain the most from.

Actually this should be doable with very good precision, in a shader or in software, using projection onto the circle. This is probably what you talked about in your first posts.

Say you divide the circle into 360 parts, each of which will have one color. Consider this a 1D texture of 360 texels. Each line consists of two points, each of which has an angle attached to it, and it is therefore represented by a line between two pixels in the 1D texture. Calculating the angles is easy, and could be done in a vertex shader. The distance is also easily calculated, and can be used as the Z for depth culling.
For each pixel, draw all the lines to this 1D texture, and then read the average color from it. Should be quite fast, and could be done completely on the GPU. You might need to modify Z to give proper results with interpolation.
The number of pixels drawn shouldn't matter that much, so you could probably increase the texture size quite a lot for precision.

This is exactly what my initial intent was in this thread! Oh I have failed miserably in explaining this, I can see that now, sorry.

It is precisely the steps you describe where I am not quite clear on how to actually do it. Mostly the part of drawing the lines as if they were projected onto a circle, while performantly considering something z-buffer-like.

This z-buffer would have to be used to weight every texel's contribution, yes, and I additionally divide the entire result by the accumulation of all z values to get the final resulting color into the [0, 1] range, and to avoid fading into black in mostly "empty" parts of the image.
Theoretically so, that a color may distribute undiminished in every direction unless another color (line) gets closer and gains more importance (causing a smooth gradient). This is most important for pixels that can "see" across the image's borders, since these borders usually are just empty. This must not cause a "black influence" on these pixels.
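One plausible reading of this weighting, sketched as inverse-distance blending (normalizing by the total weight is what keeps empty directions from pulling the result toward black; the struct and function names are mine):

```cpp
#include <utility>
#include <vector>

struct RGB { double r, g, b; };

// Combine the colors seen around the full circle, each paired with its
// distance. Closer lines get proportionally more influence, and dividing
// by the summed weights keeps the result in the [0, 1] range.
RGB blendSamples(const std::vector<std::pair<RGB, double>>& hits) {
    RGB out{0, 0, 0};
    double wsum = 0;
    for (const auto& h : hits) {
        double w = 1.0 / h.second;  // inverse-distance weight
        out.r += h.first.r * w;
        out.g += h.first.g * w;
        out.b += h.first.b * w;
        wsum += w;
    }
    if (wsum > 0) { out.r /= wsum; out.g /= wsum; out.b /= wsum; }
    return out;
}
```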

Basically you would 'unfold' the circle, so you get a 1D texture (a line) which is 2 * pi * radius pixels in length. Each of these pixels corresponds to exactly the angle index / radius. The radius doesn't actually matter in this case; it is basically just the precision of the texture.

So to do projection on a circle, you would only calculate that angle, which can be done something like the following in a vertex shader:

float2 point;   // segment endpoint, in scene coordinates
float2 center;  // position of the circle (the pixel being evaluated)

point -= center;
float angle = atan2(point.y, point.x) + PI;  // now in [0, 2*PI)
// Out.Position expects clip space, so map [0, 2*PI) to [-1, 1).
angle = angle / PI - 1;

// Depth must land in [0, 1]; maxDistance is assumed to be supplied by the
// application (e.g. the scene's diagonal).
float Z = length(point) / maxDistance;

Out.Position = float4(angle, 0, Z, 1);
Out.Color = In.Color;



Again, you might have to think about the Z and perspective-correct interpolation. I haven't thought about that in detail, but it can sometimes require that you interpolate correctly.

Then in the pixel shader:

...
Out.Color = calculations(In.Color, Z);



Draw with additive blending, to a 32-bit floating-point target if you need precision. Then average the whole thing and put it in the final image. The averaging could be done, for example, by automatic mip-map generation, and copying the 1x1 level to the output image (perhaps by drawing a point to the output image render target).

