jakovn

Center of the line in image


I am trying to determine the center of a vertical line in an image, at a subpixel position. Is there an algorithm for that, perhaps one with a name, or do you have an idea of how it can best be done?

I am running through the pixels horizontally and, for example, get these values:

pixel #:     1    2    3     4     5    6
brightness:  5   14   60   111   100   37

Where is the brightness peak? Somewhere around 4.3.

First, there is the Hough transform. That one is suitable if you can filter the image.

Otherwise, there is a statistical approach which takes into consideration how the line was drawn (or sampled by a CCD or similar) in the first place.

Assume you want to render a line of color C at x=4.3. Let the pixel at x=4 be A, and the pixel at x=5 be B.

Then one possible way is (f is the fractional part, here 0.3):
A = C * sqrt(1-f)
B = C * sqrt(f)

If a line of color C was drawn onto pixels A and B, then f will be constant across all horizontal lines.

Then it's simply a matter of reversing the above formula. We don't know C or f, but we know A and B. So:
A = C * sqrt(1-f)  =>  C = A / sqrt(1-f)
B = C * sqrt(f)    =>  C = B / sqrt(f)

A / sqrt(1-f) = B / sqrt(f)
sqrt(1-f)/sqrt(f) = A/B
(1-f)/f = (A/B)^2
1/f - 1 = (A/B)^2
1/f = (A/B)^2 + 1
f = 1/( (A/B)^2 + 1 )


Applied to the example values given (each factor belongs to the pair of pixels above it):

pixels: 5      14     60     111    100    37
f:         0.89   0.94   0.77   0.44   0.12


Repeat this for each row. Then compute the standard deviation of each column. The column with the smallest deviation is the one in which the line lies. If the line is fairly distinguishable, it should be the column with the 0.44 value, so in our case x = 4.44.

Obviously, this approach requires modeling the original line sampling, and it ignores noise and any background. A more robust approach would apply this across all pixels, taking into consideration that when a line is drawn at x=4 and x=5, the pixel at x=4 also contains some data from x=3.

But the above is fairly trivial to implement, and can yield adequate results.

It's obviously possible to apply much more sophisticated methods.

Quote:
Original post by jakovn
I am trying to determine the center of a vertical line in the image, at a subpixel position. Is there an algorithm for that, perhaps one with a name, or do you have an idea of how it can best be done?

So I am running through pixels horizontally and for example get these values:
5 14 60 111 100 37
pixel #
1 2 3 4 5 6

Where is the brightness peak? Somewhere around 4.3
Forgive me if this is a dumb question, but in the information you provided, what would give any indication that the maximum is at 4.3? Why not 3.9? Why not at f(4.0) == 111?

I don't understand this part:
"The column with smallest deviation is the one in which the line is"
The value between 100 and 37 is smaller (if I understand what you mean).

When I draw a curve from these pixels

pixels: 5      14     60     80     70     37     10
f:         0.88   0.94   0.64   0.43   0.21   0.06

it seems the peak is around 4.17 when I put these values into Excel and choose 'Smooth lines'. (That is how I got the peak, smitty1276.)

The line is a laser line, so all these pixels are part of the line's cross-section, which I assume has a sinusoidal form.

Quote:
Original post by jakovn

it seems the peak is around 4.17 when I put these values in Excel and choose 'Smooth lines'


First, calculate the factors for each row.
Then, calculate the deviation over each column of these factors.

I downloaded Lenna, resized it to 256x256, converted it to grayscale, then wrote it out as 8-bit raw (resulting in a 256x256 byte file):
#include <cstring>
#include <fstream>
#include <iostream>
#include <cmath>

unsigned char image[256*256];
double factors[255][256]; // one factor per adjacent column pair, per row

int main()
{
    std::memset(factors, 0, sizeof(factors));

    std::ifstream f("c:\\lenna.raw", std::ios_base::binary);
    f.read((char*)image, 256*256);

    for (int y = 0; y < 256; y++) {
        for (int x = 0; x < 255; x++) {
            int ofs = y*256 + x;
            // cast to double to avoid integer division; this is the
            // linear B/(A+B) form of the factor
            double a = image[ofs], b = image[ofs+1];
            factors[x][y] = (a + b > 0) ? b / (a + b) : 0.5;
        }
    }

    int minx = 0;
    double mind = 99999;
    double mina = 0;
    for (int x = 0; x < 255; x++) {
        double avg = 0;
        double dev = 0;
        for (int y = 0; y < 256; y++) avg += factors[x][y];
        avg /= 256;
        for (int y = 0; y < 256; y++) dev += (factors[x][y] - avg)*(factors[x][y] - avg);
        dev = std::sqrt(dev/256);
        std::cout << x << " = " << dev << ", " << avg << std::endl;
        if (dev < mind) {
            mind = dev;
            minx = x;
            mina = avg;
        }
    }
    std::cout << "Best fit at x=" << minx << " + " << mina << std::endl;
}



The above produces:
Quote:
0 = 0.247477, 0.693359
1 = 0.252966, 0.728516
2 = 0.250853, 0.710938
3 = 0.251696, 0.716797
4 = 0.249511, 0.703125
5 = 0.240916, 0.669922
6 = 0.252966, 0.728516
7 = 0.240258, 0.667969
8 = 0.240916, 0.669922
9 = 0.238889, 0.664063
10 = 0.251696, 0.716797
11 = 0.247844, 0.847656
12 = 0.212658, 0.888672
13 = 0.166784, 0.941406
...
20 = 0.253508, 0.763672
21 = 0.246063, 0.6875
22 = 0.253508, 0.763672
23 = 0.253846, 0.746094
24 = 0.251696, 0.716797
25 = 0.171163, 0.5625
26 = 0.0441942, 0.5
27 = 0.0311889, 0.498047
28 = 0.0441942, 0.5
29 = 0.0927477, 0.513672
30 = 0.13526, 0.535156
31 = 0.135828, 0.534505
32 = 0.113017, 0.51888
33 = 0.229365, 0.866536
34 = 0.213135, 0.892578
35 = 0.218177, 0.888997
36 = 0.206662, 0.902344
37 = 0.236721, 0.846354
...
Best fit at x=27 + 0.498047


If you look at the second column, that is the standard deviation. Local minima are where the vertical lines are. There is another strong minimum at x=16, and a few other minor ones. The best result, however, is obtained for a line at x = 27.498047.


But, as said, the accuracy of the results depends on how well you can model the line. A simple horizontal convolution filter would probably do the job just as well; if the image is noisy, more robust techniques might be needed.

The meat of this approach is the realization that all rows share the same characteristic, since the vertical line appears at approximately the same place in each of them.

If I understand the algorithm, you are determining the position of the vertical line in the image?

http://img337.imageshack.us/img337/2442/center.png

I am trying to find the center of each horizontal cross-section of the laser line.
Below, I pulled out one row, enlarged it 10x, and marked the position the algorithm should be looking for on that particular row.

Quote:
Original post by jakovn
If I understand the algorithm, you are determining the position of the vertical line in the image?

Yes, it's what the original problem asked.

Quote:
I am trying to find the center of each horizontal cross-section of the laser line.
Below, I pulled out one row, enlarged it 10x, and marked the position the algorithm should be looking for on that particular row.


Fit a Gaussian curve a*exp(-(x-b)^2/(2c^2)) to each row. b will determine the "exact" position; a and c will determine the intensity. c can be used as a quality criterion - it should be similar across all rows, regardless of a. It's somewhat difficult to determine the true sub-pixel position if rows are evaluated in isolation: given the small sample size (likely just 2-5 pixels), noise could affect the results.

There are several methods that can be used: Levenberg-Marquardt, Gauss-Newton, or perhaps just iterative bisection over some range of (a, b, c).

Alternatively, examining the Gaussian in the frequency domain (via an FFT) could help avoid some of the noise bias.

For more accurate results, the actual curve being fitted would need to take into consideration the responses of the systems involved in image acquisition (such as the CCD sensor, or whatever the acquisition device is), and perhaps even the rounding due to 8-bit precision. For example, CCDs can exhibit some bleeding across cells, and they might not have a linear response across all wavelengths. Usually, some form of calibration can be used to increase the accuracy of the results.

But IMHO, examining c across all rows should serve as a fairly good criterion, though I don't have anything tangible to confirm it.

Another alternative is using a sinc filter to convert the samples to a continuous function and then pick the maximum of that function. In your example, this would give you a value of about 4.266139.

[Edited by - alvaro on October 17, 2009 11:06:50 AM]

These methods look promising! Thanks for the input!
Is any of this implemented in C++ or C# as part of some library?
