
Line Algorithm for Ray Tracing


9 replies to this topic

#1 D.V.D   Members   -  Reputation: 433


Posted 25 September 2012 - 08:14 PM

So I have looked all across the internet for an explanation, and even code, of a line drawing algorithm that travels through only one grid cell at a time while not requiring an endpoint (only a start point and an angle). I am familiar with Bresenham's algorithm but I can't wrap my head around how it works. I know this site has a tutorial on it, but it only explains the mechanics of one of the 8 possible line octants. I also haven't found many 3D versions of the algorithm, and I imagine extending it to 3D would be very difficult, especially when looking at some of the code shown. Can anyone please explain to me how it works and how I can change it to not require an endpoint?


#2 jameszhao00   Members   -  Reputation: 271


Posted 25 September 2012 - 09:49 PM

What you're looking for is 3D DDA.

I remember implementing this a while ago... be careful that this type of stuff is extremely hard to implement well due to precision/logic errors and debugging difficulties:

Essentially we want to walk the grid while keeping track of the distance to the next grid cell in each dimension. Let's say in a grid of 1x1 cells, we're at (0, 0.5) (cell 0,0) with direction <1, 1>. The distance to the next cell in the X direction (DX) is sqrt(2), while the distance to the next cell in the Y direction (DY) is sqrt(1/2). Because DY is smaller, we go up 1 cell to cell (0, 1) (entering it at (0.5, 1)) and recalculate DX and DY. After we've gone up 1 cell, DX is now smaller, so we visit the next cell in the X direction. And so forth.

You can optimize this by precomputing a whole set of values. I have implemented it here https://github.com/j...uniformgrid.cpp

dda_vals precomputes values
dda_next walks to the next cell, given current cell/info
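The walk described here is the classic grid-traversal idea; below is a minimal Python sketch of it (my own code, not a translation of the linked file — names like `grid_walk_2d` are made up). It keeps the ray parameter t at which the ray crosses the next cell boundary on each axis and always steps the axis whose crossing comes first:

```python
import math

def grid_walk_2d(origin, direction, num_steps):
    """Walk a grid of 1x1 cells along a ray, collecting each cell entered.

    t_max_x / t_max_y hold the ray parameter t at which the ray crosses the
    next vertical / horizontal cell boundary; t_delta_* is the t advance per
    whole cell on that axis. Each step advances the axis whose boundary
    crossing comes first.
    """
    x, y = math.floor(origin[0]), math.floor(origin[1])
    dx, dy = direction
    step_x = 1 if dx > 0 else -1
    step_y = 1 if dy > 0 else -1
    t_max_x = ((x + (step_x > 0)) - origin[0]) / dx if dx else math.inf
    t_max_y = ((y + (step_y > 0)) - origin[1]) / dy if dy else math.inf
    t_delta_x = abs(1.0 / dx) if dx else math.inf
    t_delta_y = abs(1.0 / dy) if dy else math.inf

    cells = [(x, y)]
    for _ in range(num_steps):
        if t_max_x < t_max_y:        # vertical boundary comes first
            t_max_x += t_delta_x
            x += step_x
        else:                        # horizontal boundary comes first
            t_max_y += t_delta_y
            y += step_y
        cells.append((x, y))
    return cells
```

On the example above — start (0, 0.5), direction <1, 1> — the first boundary crossed is the horizontal one, so the walk goes (0,0) → (0,1) → (1,1), just as described. Comparing t instead of Euclidean distance gives the same ordering, since distance along the ray is proportional to t.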


Please again note that this kind of stuff is really frustrating/difficult to develop (my implementation is not correct in some cases), so use a preexisting intersection library if at all possible (Embree is great).

Edited by jameszhao00, 25 September 2012 - 09:56 PM.


#3 D.V.D   Members   -  Reputation: 433


Posted 28 September 2012 - 03:24 PM

Thanks, I kind of skimmed through it, and I'm not exactly sure if you do this or not, but I came up with a theory on how to make this algorithm in 2D. I haven't really studied lines in 3D much (still in high school), but a line that travels at any angle other than 0, 45, 90, 135, 180, 225, 270, 315 and 360 degrees increases by more than one unit in either x or y. So if the slope is greater than 1, you take the slope of the line, create a count variable, and in a loop subtract 1 from the slope until it is less than 1, storing the number of subtractions in count. In your line loop, where you actually run through the grid cells, you then add 1 to x once, and add 1 to y count times. You can round the position variables when you want to check what's in a grid cell: if your sum is, say, x=1, y=2.51, you round y to 3 in a separate variable, check that grid cell, and then add 1 again (if count isn't reached), so x=1, y=3.51, 4.51, etc. Once you're finished looping through count, you just add the remaining slope (which is less than 1) to the y value. If that slope is, say, .70 and you finished adding count at a value of 4.51, you get 5.21. Round it again to check the grid cell (x=1, y=5) and so on.

To continue, you simply add 1 to x again and run through count again, adding the fractional slope once you're done adding 1 count times. If the slope is less than 1 to begin with, I don't think you even need the count value, because I'm pretty sure you can't skip a square with a step of less than 1. Also, if the slope is negative, simply add -1 and switch the values that need to be negative instead of keeping them positive. For example: in quadrant 1, x and y are positive; in quadrant 2, x is negative and y is positive; etc.

Would this be a valid algorithm? And a quick one too, as you're only adding integers (one float per iteration) for every grid cell you run through?

#4 Emergent   Members   -  Reputation: 971


Posted 29 September 2012 - 06:08 PM

In any number of dimensions, drawing a line segment between two points works the same way. Algorithms like Bresenham and DDA are clever versions of what follows, implemented in ways that avoid floating-point arithmetic. For now, let's just use floats, and worry about the clever bits later.

You have two points,
p1 = (x1, y1, z1), and
p2 = (x2, y2, z2) .

You compute a vector
v = p2 - p1 = (x2-x1, y2-y1, z2-z1) .
from the first point to the second.

You scale this vector so that its largest (in absolute value) component is one, to get a new vector,
s = [ 1 / max( abs(v) ) ] v
(I.e., you divide each element of v by the largest absolute value found in v). This is how much you will step in each dimension on each iteration. The reason for this choice of scaling is to avoid gaps (if you stepped by more than one pixel in any dimension, you would get gaps).

Finally, you execute the following loop (pseudocode):

p = p1
for i = 1 to max(abs(v))
{
   image(round(p)) = color
   p = p + s
}
// Now p = p2.

That's the basic idea.
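The loop above, as a runnable Python sketch (my own code; `round_half_up` sidesteps Python 3's round-half-to-even so the result matches the usual pixel rounding):

```python
import math

def raster_line(p1, p2):
    """Float version of the loop above: step by s = v / max(|v|),
    rounding the running position to the nearest pixel each iteration."""
    round_half_up = lambda c: math.floor(c + 0.5)
    v = [b - a for a, b in zip(p1, p2)]
    n = max(abs(c) for c in v)     # number of steps (inf-norm of v)
    s = [c / n for c in v]         # step vector; largest component is +/-1
    p = list(p1)
    pixels = []
    for _ in range(int(n) + 1):    # n+1 samples include both endpoints
        pixels.append(tuple(round_half_up(c) for c in p))
        p = [a + b for a, b in zip(p, s)]
    return pixels
```

For example, `raster_line((0, 0), (4, 2))` steps by s = (1, 0.5), visiting the points (0,0), (1,0.5), (2,1), (3,1.5), (4,2) and rounding each to a pixel.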

DDA does this with integer math. I wouldn't worry about it until I'd implemented the floating-point version, but I'll write a little about it here and include some MATLAB code at the end in case you're curious.

DDA, conceptually, just adds the vector "s" to the vector "p" over and over, but it splits all the numbers into two parts, which you should think of as being like "digits:"
- DDA stores everything to the left of the decimal point in units of 1 pixel.
- It stores everything to the right of the decimal point in units of 1/V-ths of a pixel, where V = max(abs(v)).
- It implements addition between these "mixed base" numbers, using the standard grade-school algorithm: You add the low parts, carry any overflow, and add the high parts.

Here's some (MATLAB) code [EDIT: fixed code; added comments]:

function img = rasterLineB(img, x1, x2, color)
% New, improved; conceptually cleaner, and without (obvious) bugs.
	n = numel(x1);  % Number of dimensions we're working in.
  
	v = x2 - x1;  % vector from x1 to x2
	V = max(abs(v)); % Inf-norm of v
  
	% In principle, we are computing the unit (in inf-norm) vector
	%	u = (1/V)*v .
	% We are splitting the integers in u into a "most significant part," or "high" part, uh
	% and a "least significant" or "low" part, ul, so that
	%	u = uh + (1/V)*ul .
	uh = idivide(v, V, 'fix'); % Round towards zero ('floor' rounds towards negative infinity).
	ul = rem(v, V); % Use this, not MATLAB's "mod!"  This gives least significant part, preserving sign.
  
	s = size(img);				 % Used in multidimensional array indexing
	cp = int32(cumprod([1 s(1:end-1)]))'; %  to compute a linear index
  
	% Preallocate variables
	idx = int32(0);
	xh = x1;  % High (most significant) part of position
	xl = zeros(n,1, 'int32');  % Low (least significant) part of position
	for k=1:V+1
	  
		idx = 1+sum((xh-1).*cp); % Compute linear index into array.
		img(idx) = color;
	  
	   % In principle, we have a point x, and a vector u, and we are just
	   % doing "x = x + u" over and over in a loop.  Because we have split
	   % "x" and "u" into "low" and "high" parts, we do this as a "ripple carry"
	   % sum.
		xl = xl + ul;  % Add low parts
		for j=1:n	  % Carry any overflow
			if(abs(xl(j)) >= V)  % Overflow
				xh(j) = xh(j) + sign(xl(j));  % Do the carry
				xl(j) = xl(j) - sign(xl(j))*V;  % "Mod" the low part
			end
			xh(j) = xh(j) + uh(j);  % Add the high parts
		end
	  
	end
end
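For comparison, here's the same mixed-base idea as a Python sketch (my own translation, not the author's code, and simplified to return the pixel list instead of writing into an image). Note it floors rather than rounds the low part, which is the "grade-school" DDA carry rather than Bresenham rounding:

```python
def raster_line_int(p1, p2):
    """Integer-only line between integer points p1 and p2.

    Position and step are kept as (high, low) pairs: the high part is in
    whole pixels, the low part in 1/V-ths of a pixel, where V = max(|v|).
    Adding the step is then grade-school addition with a carry.
    """
    v = [b - a for a, b in zip(p1, p2)]
    V = max(abs(c) for c in v)
    uh = [int(c / V) for c in v]             # high part of step (trunc toward 0)
    ul = [c - q * V for c, q in zip(v, uh)]  # low part, sign preserved
    xh = list(p1)                            # high part of position
    xl = [0] * len(p1)                       # low part of position
    pixels = []
    for _ in range(V + 1):
        pixels.append(tuple(xh))
        for j in range(len(xh)):
            xl[j] += ul[j]                   # add low parts
            if abs(xl[j]) >= V:              # overflow into a whole pixel:
                carry = 1 if xl[j] > 0 else -1
                xh[j] += carry               # do the carry
                xl[j] -= carry * V           # "mod" the low part
            xh[j] += uh[j]                   # add high parts
    return pixels
```

It works in any number of dimensions, like the MATLAB version; the invariant is x = xh + xl/V throughout.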

Edited by Emergent, 30 September 2012 - 10:04 AM.


#5 D.V.D   Members   -  Reputation: 433


Posted 29 September 2012 - 08:29 PM

I see, but I'm looking for an implementation that doesn't require a second point. Is there a way to use the formula without the need for a second point, but instead be given an angle?

#6 Emergent   Members   -  Reputation: 971


Posted 29 September 2012 - 09:31 PM

I see, but I'm looking for an implementation that doesn't require a second point. Is there a way to use the formula without the need for a second point, but instead be given an angle?


That's what the vector "v" represents, actually. It represents a direction, together with a speed.

For our purposes, a vector is a list of numbers. The first tells you how far you move in the x direction per unit time; the second how far in the y direction, and so on. The idea works in any number of dimensions.

In general, this is the most useful way to think about rays -- in terms of vectors, not angles.

If the positive "z" direction points into the scene, and if your image is "W" pixels large in the x direction and "H" in the y direction, then, in camera space, a ray shooting through the (x,y) pixel moves along the vector,

v = (x - W/2, y - H/2, 1) .
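As a quick sketch of that formula (the function name is mine):

```python
def pixel_ray(x, y, W, H):
    """Camera-space direction of the ray through pixel (x, y), per the
    formula above: z = 1 points into the scene, and the image center
    maps to the straight-ahead direction (0, 0, 1)."""
    return (x - W / 2, y - H / 2, 1.0)
```

The center pixel of a 640x480 image gives (0, 0, 1), and corner pixels give increasingly slanted directions; no angle ever needs to be computed.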

#7 Emergent   Members   -  Reputation: 971


Posted 30 September 2012 - 12:09 PM

I just realized you're probably looking for a 2d ray shooting algorithm (whereas in my last post I'd assumed you were trying to shoot rays into a 3d grid). Here; let me just give you some MATLAB code; then, between reading about line drawing algorithms and producing your own C/C++ implementation, you'll figure out how it works.

This function is nearly identical to the one I posted previously; the differences are:
1. "v=precision*[sin(theta); cos(theta)]" instead of "v=x2-x1."
2. Instead of iterating for a fixed number of steps (the length of the line), we iterate until we go out of bounds.

function img = rasterRay2dBresenham(img, x0, theta, color)
% New, improved; conceptually cleaner, and without (obvious) bugs.
	n = numel(x0);  % Number of dimensions we're working in.
	if(n ~= 2)
		error('This function is for 2d rays.\n');
	end

	precision = 256; % An adjustable parameter.
	v = int32(round(precision*[cos(theta); sin(theta)]));
	V = max(abs(v)); % Inf-norm of v

	% In principle, we are computing the unit (in inf-norm) vector
	%	u = (1/V)*v .
	% We are splitting the integers in u into a "most significant part," or "high" part, uh
	% and a "least significant" or "low" part, ul, so that
	%	u = uh + (1/V)*ul .
	uh = idivide(v, V, 'fix'); % Round towards zero ('floor' rounds towards negative infinity).
	ul = rem(v, V); % Use this, not MATLAB's "mod!"  This gives least significant part, preserving sign.

	s = size(img);				 % Used in multidimensional array indexing
	cp = int32(cumprod([1 s(1:end-1)]))'; %  to compute a linear index

	% Preallocate variables
	idx = int32(0);
	xh = x0;  % High (most significant) part of position
	xl = zeros(n,1, 'int32');  % Low (least significant) part of position
	while(true)
	
		if(xh(1) < 1 || xh(1) > size(img,1) || ...
		   xh(2) < 1 || xh(2) > size(img,2))
			break;
		end
	  
		idx = 1+sum((xh-1).*cp); % Compute linear index into array.
		img(idx) = color;
	
	   % In principle, we have a point x, and a vector u, and we are just
	   % doing "x = x + u" over and over in a loop.  Because we have split
	   % "x" and "u" into "low" and "high" parts, we do this as a "ripple carry"
	   % sum.
		xl = xl + ul;  % Add low parts
		for j=1:n	  % Carry any overflow
			if(2*abs(xl(j)) >= V) % Overflow [Bresenham (rounds rather than floors/fixes)]
			%if(abs(xl(j)) >= V)  % Overflow [DDA (grade school addition)]
				% The reason we have some flexibility in when we choose to
				% carry (e.g., Bresenham vs. DDA) is that, either way, we
				% maintain the loop invariant
				%	x = xh + xl/V,
				% so we have a sensible representation of our current
				% position either way.
				xh(j) = xh(j) + sign(xl(j));  % Do the carry
				xl(j) = xl(j) - sign(xl(j))*V;  % "Mod" the low part
			end
			xh(j) = xh(j) + uh(j);  % Add the high parts
		end
	
	end
end

Finally, let me give you one important caveat: Bresenham (or DDA) lines do not hit all the cells that the ideal line passes through. So it's not appropriate for use as an intersection routine.
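A tiny example (my own Python sketch of a simple stepped line) illustrates the caveat:

```python
def raster_cells(p1, p2):
    """Cells painted by a simple stepped line: one sample per step along
    the major axis, each rounded to the nearest cell."""
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    n = max(abs(vx), abs(vy))
    return [(round(p1[0] + i * vx / n), round(p1[1] + i * vy / n))
            for i in range(n + 1)]

cells = raster_cells((0, 0), (3, 2))
# The ideal segment y = (2/3)x crosses x = 1 while y is still below 1,
# so it passes through cell (1, 0) -- but the stepped line jumps straight
# from (0, 0) to (1, 1) and never visits that cell.
```

An intersection routine needs a conservative traversal (3D DDA / grid walking, as in post #2) that visits every cell the ray touches.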

Edited by Emergent, 30 September 2012 - 12:15 PM.


#8 Waterlimon   Crossbones+   -  Reputation: 2460


Posted 30 September 2012 - 01:17 PM

You shouldn't really be using angles much when doing math like this; it's usually the more complicated way to do things...



#9 D.V.D   Members   -  Reputation: 433


Posted 11 October 2012 - 02:53 PM

What difference does it make if I use angles or not? For example, I know that I want a certain number of rays shot from point A to point B, and they can't go past an angle of 90 or 180, so I use angles to give a direction to where they start. Btw, I know it's a late reply but I'll test this out soon.

#10 Bacterius   Crossbones+   -  Reputation: 8524


Posted 11 October 2012 - 09:18 PM

What difference does it make if I use angles or not? For example, I know that I want a certain number of rays shot from point A to point B, and they can't go past an angle of 90 or 180, so I use angles to give a direction to where they start. Btw, I know it's a late reply but I'll test this out soon.

Angles have edge cases, wrap-around situations, expensive trig functions, etc... vector math removes all of this and makes everything work in a very general and elegant fashion. Atan2 is probably a testament to that, lol. Besides, if you do use angles, you'll eventually notice you often end up calculating for instance a cosine, just to take the arccosine of a related angle a few lines later...

Sure, it'll still work, but it'll be less stable, harder to code, and often also harder to understand.
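For instance, here's a small sketch of the cosine/arccosine round trip mentioned above:

```python
import math

# Angle route: recover an angle with atan2, only to turn it straight
# back into a vector with cos/sin a moment later.
dx, dy = 3.0, 4.0
theta = math.atan2(dy, dx)
v_from_angle = (math.cos(theta), math.sin(theta))

# Vector route: just normalize; no trig, no wrap-around edge cases.
length = math.hypot(dx, dy)
v_direct = (dx / length, dy / length)
```

Both give the same unit direction (0.6, 0.8); the vector route just gets there without ever leaving vector land.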





