
# Center of the line in image


## Recommended Posts

I am trying to determine the center of a vertical line in an image, and that at a sub-pixel position. Is there an algorithm for that, something with a name, or maybe you have an idea how it can best be done?

So I am running through the pixels horizontally and, for example, get these values:

value:   5   14   60   111   100   37
pixel #: 1    2    3     4     5    6

Where is the brightness peak? Somewhere around 4.3.

##### Share on other sites
Try the following article, which gives a good introduction to such algorithms: http://www.gamedev.net/reference/articles/article2007.asp.

##### Share on other sites
First, there is Hough transform. That one is suitable if you can filter the image.

Otherwise, there is statistical approach which takes into consideration how the line is drawn (or sampled by CCD or similar) in the first place.

Assume you want to render a line of color C at x=4.3. Let pixel at x=4 be A, and at x=5 be B.

Then, one possible way is (f is fraction part, or 0.3):
A = C * sqrt(1-f)
B = C * sqrt(f)

If a line of color C was drawn onto pixels A and B, then f will be constant across all horizontal lines.

Then it's simply a matter of reversing the above formula. We don't know C or f, but we know A and B. So:
A = C * sqrt(1-f)  ->  C = A / sqrt(1-f)
B = C * sqrt(f)    ->  C = B / sqrt(f)

A / sqrt(1-f) = B / sqrt(f)
sqrt(1-f) / sqrt(f) = A / B
(1-f) / f = (A/B)^2
1/f - 1 = (A/B)^2
1/f = (A/B)^2 + 1
f = 1 / ( (A/B)^2 + 1 )

Applied to example values given:
value:  5     14    60     111    100    37
f:         0.89  0.94   0.77   0.44    0.12

(each f belongs to the pair of adjacent pixels above it)

Repeat this for each row. Then, compute the standard deviation of each column. The column with the smallest deviation is the one in which the line is. If the line is fairly distinguishable, it should be the column with the 0.44 value, so in our case x = 4.44.

Obviously, this approach requires modeling the original line sampling, and ignores the noise and potential background. A more robust approach would apply this across all pixels, taking into consideration that when a line is drawn at x=4 and 5, pixel at x=4 also contains some data from x=3.

But the above is fairly trivial to implement, and can yield adequate results.

It's obviously possible to apply much more sophisticated methods.

##### Share on other sites
Quote:
 Original post by jakovn: I am trying to determine a center of the vertical line in the image, and that in a subpixel position. Is there an algorithm for that, something with the name, or maybe you have idea how can it be best made? So I am running through pixels horizontally and for example get these values: 5 14 60 111 100 37 (pixel # 1 2 3 4 5 6). Where is the brightness peak? Somewhere around 4.3
Forgive me if this is a dumb question, but in the information that you provided what would give any indication that the maximum would be at 4.3? Why not 3.9? Why not at f(4.0) == 111?

##### Share on other sites
I don't understand the part:
"The column with smallest deviation is the one in which the line is"
Between 100 and 37 it is smaller (if I understand what you mean)

When I draw a curve from these pixels:

value:  5     14    60    80    70    37    10
f:         0.88  0.94   0.64  0.43  0.21   0.06

it seems the peak is around 4.17 when I put these values in Excel and choose 'Smooth lines'. (That is how I got the peak, smitty1276.)

The line is a laser line, so all these pixels are part of the line's cross-section, which I assume has a sinusoidal form.

##### Share on other sites
Quote:
 Original post by jakovn: it seems the peak is around 4.17 when I put these values in Excel and choose 'Smooth lines'

First, calculate the factors for each row.
Then, calculate the deviation over each column of these factors.

I downloaded Lenna, resized it to 256x256, converted it to grayscale, then wrote it out as 8-bit raw (resulting in a 256x256-byte file).
```cpp
#include <cstring>
#include <fstream>
#include <iostream>
#include <math.h>

unsigned char image[256*256];
double factors[255][256];

int main()
{
    memset(factors, 0, sizeof(factors));
    std::ifstream f("c:\\lenna.raw", std::ios_base::binary);
    f.read((char*)image, 256*256);

    // factor for each horizontal pixel pair (note: integer ratio of the two bytes)
    for (int y = 0; y < 255; y++) {
        for (int x = 0; x < 255; x++) {
            int ofs = y*256 + x;
            factors[x][y] = 1.0 / (1.0 + image[ofs]/image[ofs+1]);
        }
    }

    // the column with the smallest standard deviation is the best line candidate
    int minx = 0;
    double mind = 99999;
    double mina = 0;
    for (int x = 0; x < 255; x++) {
        double avg = 0;
        double dev = 0;
        for (int y = 0; y < 256; y++) avg += factors[x][y];
        avg /= 256;
        for (int y = 0; y < 256; y++) dev += (factors[x][y] - avg)*(factors[x][y] - avg);
        dev = sqrt(dev/256);
        std::cout << x << " = " << dev << ", " << avg << std::endl;
        if (dev < mind) {
            mind = dev;
            minx = x;
            mina = avg;
        }
    }
    std::cout << "Best fit at x=" << minx << " + " << mina << std::endl;
}
```

The above produces:
Quote:
 0 = 0.247477, 0.693359
 1 = 0.252966, 0.728516
 2 = 0.250853, 0.710938
 3 = 0.251696, 0.716797
 4 = 0.249511, 0.703125
 5 = 0.240916, 0.669922
 6 = 0.252966, 0.728516
 7 = 0.240258, 0.667969
 8 = 0.240916, 0.669922
 9 = 0.238889, 0.664063
 10 = 0.251696, 0.716797
 11 = 0.247844, 0.847656
 12 = 0.212658, 0.888672
 13 = 0.166784, 0.941406
 ...
 20 = 0.253508, 0.763672
 21 = 0.246063, 0.6875
 22 = 0.253508, 0.763672
 23 = 0.253846, 0.746094
 24 = 0.251696, 0.716797
 25 = 0.171163, 0.5625
 26 = 0.0441942, 0.5
 27 = 0.0311889, 0.498047
 28 = 0.0441942, 0.5
 29 = 0.0927477, 0.513672
 30 = 0.13526, 0.535156
 31 = 0.135828, 0.534505
 32 = 0.113017, 0.51888
 33 = 0.229365, 0.866536
 34 = 0.213135, 0.892578
 35 = 0.218177, 0.888997
 36 = 0.206662, 0.902344
 37 = 0.236721, 0.846354
 ...
 Best fit at x=27 + 0.498047

If you look at the second column, that is the standard deviation. Local minima are where the vertical lines are. There is another strong minimum at x=16, and a few other minor ones. The best result, however, is obtained for a line at 27.498047.

But as said, the accuracy of the results depends on how well you can model the line. A simple horizontal convolution filter would probably do the job just as well; or, if the image is noisy, more robust techniques might be needed.

The meat of this approach is in the realization that all rows will share the same characteristic, since the vertical line will appear at approximately the same place in each.

##### Share on other sites
If I understand the algorithm correctly, you are determining the position of the vertical line in the image?

http://img337.imageshack.us/img337/2442/center.png

I am trying to find the center of each horizontal cross-section of the laser line.
Below, I pulled out one row, enlarged it 10x, and marked the position the algorithm should be looking for on that particular row.

##### Share on other sites
Quote:
 Original post by jakovn: If I understand the algorithm, you are determining the position of the vertical line in the image?

Yes, it's what the original problem asked.

Quote:
 I am trying to find the center of each horizontal cross-section of the laser line. Below I pulled out one row, enlarged it 10x and marked the position the algorithm should be looking for on that particular row.

Curve-fit a Gaussian, f(x) = a * exp(-(x-b)^2 / (2*c^2)), to each row. b will determine the "exact" position; a and c will determine the intensity. c can be used as a quality criterion - it should be similar across all rows, regardless of a. It's somewhat difficult to determine the true exact sub-pixel position if lines are evaluated in isolation; considering the small sample size (likely just 2-5 pixels), the error from noise could affect the results.

There are several methods that can be used: Levenberg-Marquardt, Gauss-Newton, or perhaps just iterative bisection over some range of (a, b, c).

Alternatively, examining the Gaussian curve in the frequency domain (via FFT) could help avoid some of the noise bias.

For more accurate results, the actual curve being fitted would need to take into consideration responses of systems involved in image acquisition (such as CCD sensors or whatever the image acquisition device is, and model that into the actual curve), and perhaps even rounding due to 8-bit precision. For example, CCDs can exhibit some bleeding across cells, and they might not have linear response across all wavelengths. Usually, some form of calibration could be used to increase accuracy of results.

But IMHO, examining c across all rows should serve as a fairly good criterion, though I don't have any tangible reason to confirm it.

##### Share on other sites
Another alternative is using a sinc filter to convert the samples to a continuous function and then pick the maximum of that function. In your example, this would give you a value of about 4.266139.

[Edited by - alvaro on October 17, 2009 11:06:50 AM]