
# Line Algorithm for Ray Tracing

### #1
Members - Reputation: **585**

Posted 25 September 2012 - 08:14 PM

### #2
Members - Reputation: **271**

Posted 25 September 2012 - 09:49 PM

I remember implementing this a while ago... be careful that this type of stuff is extremely hard to implement well due to precision/logic errors and debugging difficulties:

Essentially, we want to walk the grid while keeping track of the distance to the next grid cell in each dimension. Say that in a grid of 1x1 cells, we're at (0, 0.5) (cell (0, 0)) with direction <1, 1>. The distance to the next cell boundary in the X direction (DX) is sqrt(2), while the distance to the next boundary in the Y direction (DY) is sqrt(1/2). Because DY is smaller, we go up one cell to cell (0, 1) (entering it at (0.5, 1)) and recalculate DX and DY. After we've gone up one cell, DX is now smaller, so we visit the next cell in the X direction. And so forth.
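The walk described above can be sketched in a few lines of Python. This is an illustrative sketch only (function name and signature are my own, not from the thread); it tracks the ray parameter to each upcoming X and Y boundary and steps across whichever is closer, which gives the same comparisons as the DX/DY distances in the example:

```python
import math

def grid_walk(origin, direction, steps):
    """Walk a unit grid from `origin` along `direction`, returning visited cells.
    Minimal sketch of the DX/DY comparison described above (no robustness work)."""
    x, y = math.floor(origin[0]), math.floor(origin[1])
    dx, dy = direction
    # Ray parameter consumed per whole cell of travel in each axis.
    t_delta_x = abs(1.0 / dx) if dx != 0 else math.inf
    t_delta_y = abs(1.0 / dy) if dy != 0 else math.inf
    step_x = 1 if dx > 0 else -1
    step_y = 1 if dy > 0 else -1
    # Ray parameter at the first X and Y cell boundaries.
    tx = ((x + (dx > 0)) - origin[0]) / dx if dx != 0 else math.inf
    ty = ((y + (dy > 0)) - origin[1]) / dy if dy != 0 else math.inf
    cells = [(x, y)]
    for _ in range(steps):
        if tx < ty:          # X boundary is closer: step in X.
            x += step_x
            tx += t_delta_x
        else:                # Y boundary is closer (or tie): step in Y.
            y += step_y
            ty += t_delta_y
        cells.append((x, y))
    return cells
```

Running it from the post's example, `grid_walk((0.0, 0.5), (1.0, 1.0), 3)`, steps up first and then right, exactly as described.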

You can optimize this by precomputing a whole set of values. I have implemented it here https://github.com/j...uniformgrid.cpp

- dda_vals precomputes the values
- dda_next walks to the next cell, given the current cell/info

Please note again that this kind of stuff is really frustrating/difficult to develop (my implementation is not correct in some cases), so use a preexisting intersection library if at all possible (Embree is great).

**Edited by jameszhao00, 25 September 2012 - 09:56 PM.**

### #3
Members - Reputation: **585**

Posted 28 September 2012 - 03:24 PM

To continue, you simply add x again and run through count again, adding the slope once you're done adding 1 count times. With the slope initially less than 1, I don't think you even need the count value, because I'm pretty sure you can't skip a square with a value less than 1. Also, if the slope is negative, simply add -1 and make the values that need to be negative negative instead of keeping them positive. Example: if you're in quadrant 1, x and y are positive; in quadrant 2, x is negative and y is positive; etc.

Would this be a valid algorithm? And a quick one too, since you're only adding integers (one float per iteration) for every grid cell you run through?

### #4
Members - Reputation: **975**

Posted 29 September 2012 - 06:08 PM

You have two points,

p1 = (x1, y1, z1), and

p2 = (x2, y2, z2) .

You compute a vector

v = p2 - p1 = (x2-x1, y2-y1, z2-z1) .

from the first point to the second.

You scale this vector so that its largest component (in absolute value) is one, to get a new vector,

s = [ 1 / max( abs(v) ) ] v

(I.e., you divide each element of v by the largest absolute value found in v). This is how much you will step in each dimension on each iteration. The reason for this choice of scaling is to avoid gaps (if you stepped by more than one pixel in any dimension, you would get gaps).

Finally, you execute the following loop (pseudocode):

```
p = p1
for i = 1 to max(abs(v)) {
    image(round(p)) = color
    p = p + s
}
// Now p = p2.
```

That's the basic idea.
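The floating-point loop above can be sketched in Python. This is an illustrative sketch, not the poster's code; a dict keyed by pixel tuples stands in for the image, which is my own assumption here:

```python
def raster_line(img, p1, p2, color):
    """Plot the line from p1 to p2 into `img` (a dict keyed by pixel tuples,
    purely for illustration) using the floating-point stepping loop above."""
    v = [b - a for a, b in zip(p1, p2)]
    V = max(abs(c) for c in v)             # largest component = step count
    n = int(round(V))
    if n == 0:                             # degenerate: both points in one pixel
        img[tuple(round(c) for c in p1)] = color
        return img
    s = [c / V for c in v]                 # step vector; largest component is +-1
    p = list(p1)
    for _ in range(n + 1):                 # n+1 samples covers both endpoints
        img[tuple(round(c) for c in p)] = color
        p = [a + b for a, b in zip(p, s)]
    return img
```

For example, `raster_line({}, (0, 0), (3, 3), 9)` fills the four diagonal pixels from (0, 0) to (3, 3).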

DDA does this with integer math. I wouldn't worry about it until I'd implemented the floating-point version, but, I'll write a little about it here and include some MATLAB code at the end in case you're curious.

DDA, conceptually, just adds the vector "s" to the vector "p" over and over, but it splits all the numbers into two parts, which you should think of as being like "digits:"

- DDA stores everything to the left of the decimal point in units of 1 pixel.

- It stores everything to the right of the decimal point in units of 1/V-ths of a pixel, where V = max(abs(v)).

- It implements addition between these "mixed base" numbers, using the standard grade-school algorithm: You add the low parts, carry any overflow, and add the high parts.
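The "digit" arithmetic in those three bullets can be demonstrated in a few lines of Python. The helper below is hypothetical (not from the post); it represents a coordinate as a pair (high, low) with value high + low/V, and performs the add-low, carry, add-high step for one dimension:

```python
def mixed_add(xh, xl, uh, ul, V):
    """Add u = uh + ul/V to x = xh + xl/V, keeping |low part| < V.
    This is the grade-school add-low, carry, add-high step from the text."""
    xl += ul                        # add low "digits"
    if abs(xl) >= V:                # overflow: carry one whole pixel
        xh += 1 if xl > 0 else -1   # do the carry
        xl -= V if xl > 0 else -V   # "mod" the low part
    return xh + uh, xl              # add high "digits"

# Stepping x by u = 3/4 of a pixel (uh=0, ul=3, V=4) four times
# advances x by exactly 3 whole pixels, with no leftover fraction.
xh, xl = 0, 0
for _ in range(4):
    xh, xl = mixed_add(xh, xl, 0, 3, 4)
```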

Here's some (MATLAB) code [EDIT: fixed code; added comments]:

```matlab
function img = rasterLineB(img, x1, x2, color)
% New, improved; conceptually cleaner, and without (obvious) bugs.
n = numel(x1);    % Number of dimensions we're working in.
v = x2 - x1;      % vector from x1 to x2
V = max(abs(v));  % Inf-norm of v
% In principle, we are computing the unit (in inf-norm) vector
%    u = (1/V)*v .
% We are splitting the integers in u into a "most significant part," or "high" part, uh,
% and a "least significant" or "low" part, ul, so that
%    u = uh + (1/V)*ul .
uh = idivide(v, V, 'fix');  % Round towards zero ('floor' rounds towards negative infinity).
ul = rem(v, V);             % Use this, not MATLAB's "mod!" This gives least significant part, preserving sign.
s = size(img);                          % Used in multidimensional array indexing
cp = int32(cumprod([1 s(1:end-1)]))';   % to compute a linear index
% Preallocate variables
idx = int32(0);
xh = x1;                    % High (most significant) part of position
xl = zeros(n, 1, 'int32');  % Low (least significant) part of position
for k = 1:V+1
    idx = 1 + sum((xh-1).*cp);  % Compute linear index into array.
    img(idx) = color;
    % In principle, we have a point x, and a vector u, and we are just
    % doing "x = x + u" over and over in a loop. Because we have split
    % "x" and "u" into "low" and "high" parts, we do this as a "ripple carry"
    % sum.
    xl = xl + ul;                          % Add low parts
    for j = 1:n                            % Carry any overflow
        if(abs(xl(j)) >= V)                % Overflow
            xh(j) = xh(j) + sign(xl(j));   % Do the carry
            xl(j) = xl(j) - sign(xl(j))*V; % "Mod" the low part
        end
        xh(j) = xh(j) + uh(j);             % Add the high parts
    end
end
end
```

**Edited by Emergent, 30 September 2012 - 10:04 AM.**

### #6
Members - Reputation: **975**

Posted 29 September 2012 - 09:31 PM

> I see, but I'm looking for an implementation that doesn't require a second point. Is there a way to use the formula without the need of a second point, but instead be given an angle?

That's what the vector "v" represents, actually. It represents a direction, together with a speed.

For our purposes, a vector is a list of numbers. The first tells you how far you move in the x direction per unit time; the second how far in the y direction, and so on. The idea works in any number of dimensions.

In general, this is the most useful way to think about rays -- in terms of vectors, not angles.

If the positive "z" direction points into the scene, and if your image is "W" pixels wide in the x direction and "H" pixels tall in the y direction, then, in camera space, a ray shooting through the (x, y) pixel moves along the vector,

v = (x - W/2, y - H/2, 1) .
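The formula above is a one-liner in code. This is a minimal Python sketch (the function name and the small 4x4 example are my own, not from the post):

```python
def pixel_ray(x, y, W, H):
    """Camera-space direction for the ray through pixel (x, y), per the
    formula above: v = (x - W/2, y - H/2, 1)."""
    return (x - W / 2, y - H / 2, 1.0)

# One ray per pixel of a (hypothetical) 4x4 image.
rays = [pixel_ray(x, y, 4, 4) for y in range(4) for x in range(4)]
```

The center pixel's ray points straight down the z axis; normalize the vector afterwards if a unit direction is needed.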

### #7
Members - Reputation: **975**

Posted 30 September 2012 - 12:09 PM

This function is nearly identical to the one I posted previously; the differences are:

1. "v = precision*[cos(theta); sin(theta)]" instead of "v = x2 - x1."

2. Instead of iterating for a fixed number of steps (the length of the line), we iterate until we go out of bounds.

```matlab
function img = rasterRay2dBresenham(img, x0, theta, color)
% New, improved; conceptually cleaner, and without (obvious) bugs.
n = numel(x0);  % Number of dimensions we're working in.
if(n ~= 2)
    error('This function is for 2d rays.\n');
end
precision = 256;  % An adjustable parameter.
v = int32(round(precision*[cos(theta); sin(theta)]));
V = max(abs(v));  % Inf-norm of v
% In principle, we are computing the unit (in inf-norm) vector
%    u = (1/V)*v .
% We are splitting the integers in u into a "most significant part," or "high" part, uh,
% and a "least significant" or "low" part, ul, so that
%    u = uh + (1/V)*ul .
uh = idivide(v, V, 'fix');  % Round towards zero ('floor' rounds towards negative infinity).
ul = rem(v, V);             % Use this, not MATLAB's "mod!" This gives least significant part, preserving sign.
s = size(img);                          % Used in multidimensional array indexing
cp = int32(cumprod([1 s(1:end-1)]))';   % to compute a linear index
% Preallocate variables
idx = int32(0);
xh = x0;                    % High (most significant) part of position
xl = zeros(n, 1, 'int32');  % Low (least significant) part of position
while(true)
    if(xh(1) < 1 || xh(1) > size(img,1) || ...
       xh(2) < 1 || xh(2) > size(img,2))
        break;
    end
    idx = 1 + sum((xh-1).*cp);  % Compute linear index into array.
    img(idx) = color;
    % In principle, we have a point x, and a vector u, and we are just
    % doing "x = x + u" over and over in a loop. Because we have split
    % "x" and "u" into "low" and "high" parts, we do this as a "ripple carry"
    % sum.
    xl = xl + ul;  % Add low parts
    for j = 1:n    % Carry any overflow
        if(2*abs(xl(j)) >= V)  % Overflow [Bresenham (rounds rather than floors/fixes)]
        %if(abs(xl(j)) >= V)   % Overflow [DDA (grade school addition)]
            % The reason we have some flexibility in when we choose to
            % carry (e.g., Bresenham vs. DDA) is that, either way, we
            % maintain the loop invariant
            %    x = xh + xl/V,
            % so we have a sensible representation of our current
            % position either way.
            xh(j) = xh(j) + sign(xl(j));   % Do the carry
            xl(j) = xl(j) - sign(xl(j))*V; % "Mod" the low part
        end
        xh(j) = xh(j) + uh(j);             % Add the high parts
    end
end
end
```

Finally, let me give you one important caveat: Bresenham (or DDA) lines do *not* hit all the cells that the ideal line passes through. So they're not appropriate for use as an intersection routine.
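A quick way to see this caveat concretely: compare the cells a sample-stepped (DDA-style) line touches against the cells the ideal line actually crosses. Both functions below are illustrative sketches of mine (the boundary-crossing walk assumes positive direction components for brevity):

```python
import math

def sampled_cells(p, s, steps):
    """Cells visited by DDA-style stepping: one sample per iteration."""
    cells = []
    x, y = p
    for _ in range(steps + 1):
        cells.append((math.floor(x), math.floor(y)))
        x += s[0]
        y += s[1]
    return cells

def crossed_cells(p, d, count):
    """Every cell the ideal ray actually passes through: a grid walk that
    steps across one cell boundary at a time. Assumes d > 0 componentwise."""
    x, y = math.floor(p[0]), math.floor(p[1])
    tx = ((x + 1) - p[0]) / d[0]   # ray parameter at next X boundary
    ty = ((y + 1) - p[1]) / d[1]   # ray parameter at next Y boundary
    cells = [(x, y)]
    while len(cells) < count:
        if tx < ty:
            x += 1
            tx += 1 / d[0]
        else:
            y += 1
            ty += 1 / d[1]
        cells.append((x, y))
    return cells
```

For a ray from (0, 0.4) along <1, 1>, stepping visits only the diagonal cells (0,0), (1,1), (2,2), (3,3), while the ideal line also crosses (0,1), (1,2), and (2,3); a stepped line silently skips those cells, which is exactly why it can miss intersections.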

**Edited by Emergent, 30 September 2012 - 12:15 PM.**

### #9
Members - Reputation: **585**

Posted 11 October 2012 - 02:53 PM

### #10
Crossbones+ - Reputation: **10055**

Posted 11 October 2012 - 09:18 PM

> What difference does it make if I use angles or not? For example, I know that I want a certain number of rays shot from point a to point b, and they can't go past an angle of 90 and 180, so I use angles to give direction to where they start. Btw, I know it's a late reply, but I'll test this out soon.

Angles have edge cases, wrap-around situations, expensive trig functions, etc. Vector math removes all of this and makes everything work in a very general and elegant fashion; atan2 is probably a testament to that, lol. Besides, if you do use angles, you'll eventually notice you often end up calculating, for instance, a cosine, just to take the arccosine of a related angle a few lines later...

Sure, it'll still work, but it'll be less stable, harder to code, and often also harder to understand.
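The usual compromise, sketched below in Python (function name and parameters are illustrative, not from the thread), is to spend the trig exactly once: convert the angular fan into direction vectors up front, then do everything downstream with pure vector math:

```python
import math

def fan_of_rays(theta0, theta1, count):
    """Convert an angular fan [theta0, theta1] into `count` unit direction
    vectors, evenly spaced. The only trig in the program lives here;
    everything after this point can work on the vectors directly."""
    dirs = []
    for i in range(count):
        t = theta0 + (theta1 - theta0) * i / (count - 1)
        dirs.append((math.cos(t), math.sin(t)))
    return dirs
```

For example, `fan_of_rays(math.pi/2, math.pi, 3)` yields directions from straight up to straight left, and no later code ever needs to touch an angle again.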
