I have a question about Donald J. Meagher's paper, "Efficient Synthetic Image Generation of Arbitrary 3D Objects." For those who don't know it, the paper describes a method for rendering octrees of voxels, written back when floating-point math was too costly on CPUs. The algorithm goes roughly like this:
1. Recurse through your octree in front-to-back order.
2. If the octree node is visible, project it onto a quadtree of the screen.
3. Find which quadtree nodes are intersected by the projected node.
4. If that spot of the screen is free, draw the node.
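For reference, here's how I currently pick the traversal order for step 1. This is a sketch of my own understanding, not taken from the paper, and it assumes an orthographic view, where the front-to-back order of the eight children depends only on the signs of the view direction and can be precomputed once:

```python
def front_to_back_order(view_dir):
    """Return the eight child indices of an octree node, nearest first,
    for an orthographic view direction. Child index bits are assumed to
    encode the (x, y, z) halves of the parent cube."""
    mask = 0
    for axis, d in enumerate(view_dir):
        if d < 0:
            mask |= 1 << axis  # camera looks toward -axis: high half is nearer
    # XOR-ing the mask into 0..7 yields the children nearest-first
    return [i ^ mask for i in range(8)]
```

Since the order only depends on the sign bits of the view direction, it stays fixed for the whole traversal of a frame.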
Steps 3 and 4 are where it gets confusing for me. He mentions some sort of overlay algorithm where you make four larger bounding boxes around the bound of the projected node and use those to test against the quadtree. He also, as far as I can tell, tests the remaining quadtree nodes against the bounding box of the projected node, as well as against lines formed from the silhouette edges of the projected node, for intersection.
To me, a lot of this seems redundant. For example, what's the purpose of the overlay algorithm if he already checks the bounding box of the projected node against the quadtree nodes? Also, how does he get the line formula without doing a division? Isn't a division needed for the slope?
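The closest thing I can think of for the division question is the implicit line form, which needs no slope at all: the sign of the cross product of the edge vector with the vector to the test point tells you which side of the line the point is on, using only integer multiplies and adds. Is this what he means? A sketch:

```python
def edge_side(p1, p2, q):
    """Sign of the 2D cross product (p2 - p1) x (q - p1): positive if q
    is to the left of the directed edge p1 -> p2, negative if to the
    right, zero if on the line. Integer-only: no slope, no division."""
    (x1, y1), (x2, y2) = p1, p2
    qx, qy = q
    return (x2 - x1) * (qy - y1) - (y2 - y1) * (qx - x1)
```

Stepping one pixel in x or y changes this value by a constant, so it can even be updated incrementally with a single add per step.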
The way I thought about implementing it was also to use the bounding box of the projected node, which would give you something like this:
http://i.imgur.com/rHFP7HB.png
At this point, you would recurse through the quadtree until you had a list of all the nodes intersecting the bounding box of your projected node:
http://i.imgur.com/Qi1tsZm.png
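The quadtree walk I have in mind is just the usual AABB-overlap recursion. A minimal sketch, assuming a quadtree node is a square `(x, y, size)` subdivided down to a fixed depth (names are my own, not from the paper):

```python
def overlaps(x, y, size, bx0, by0, bx1, by1):
    """Axis-aligned overlap test between the square [x, x+size) x
    [y, y+size) and the box [bx0, bx1) x [by0, by1)."""
    return x < bx1 and x + size > bx0 and y < by1 and y + size > by0

def collect(x, y, size, depth, box, out):
    """Append every depth-0 quadtree square overlapping `box` to `out`."""
    if not overlaps(x, y, size, *box):
        return
    if depth == 0:
        out.append((x, y, size))
        return
    half = size // 2
    for dx in (0, half):
        for dy in (0, half):
            collect(x + dx, y + dy, half, depth - 1, box, out)
```

For a power-of-two screen, all of this stays in integers.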
Here's where I'm sort of lost as to what I should do. Although I have a list of all the nodes intersecting the bounding box of the projected node, they don't always encapsulate the actual projected node, as shown in this picture:
http://i.imgur.com/POkr1cE.png
So do I do the same thing he does, making line formulas for each edge of the faces and then checking whether the node is on the other side of the line? Won't this be computationally heavy just to figure out where a shape sits in the screen quadtree? If I intersect 12 nodes, like in my drawn case, I would have to do 144 line checks (4 lines per face * 3 faces * 12 nodes), plus more for however many times I subdivide to get down to the pixel level. I could test all of the nodes against the 6 silhouette edges of the projected node, but then I wouldn't know which pixels correspond to which face of the octant, which is important if each octant face has a different color.
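If I do go the line-formula route, I imagine each square-vs-edge check could be a single corner test rather than four: a square lies entirely outside one edge of a convex outline exactly when its "most inside" corner is outside, and which corner that is follows from the signs of the edge vector. My own sketch (not from the paper):

```python
def square_outside_edge(p1, p2, x, y, size):
    """True if the whole square [x, x+size] x [y, y+size] lies strictly
    to the right of (outside) the directed edge p1 -> p2. Only the
    corner that maximizes the cross product needs to be tested."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    # Choose the square corner maximizing dx*(qy - p1y) - dy*(qx - p1x)
    qx = x if dy > 0 else x + size
    qy = y + size if dx > 0 else y
    return dx * (qy - p1[1]) - dy * (qx - p1[0]) < 0
```

With a convex silhouette (edges wound consistently), a square can then be conservatively rejected as soon as it is outside any one edge, which would cut the per-node cost well below four corner tests per line.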
I was also wondering whether there is a way of figuring out which faces are not on screen without projecting them all. I know it should be possible, since there are only a few possible cases (2 or 3 faces are actually projected). Or can this be done with some kind of distance test?
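My current guess for the face question, assuming an orthographic view: a face of an axis-aligned cube is front-facing exactly when the view direction points against its outward normal, so the sign of each view-direction component picks at most one visible face per axis, no projection needed. A sketch (names are mine):

```python
def visible_faces(view_dir):
    """Return the front-facing faces of an axis-aligned cube for an
    orthographic view direction. At most one face per axis is visible,
    so the result has at most three entries."""
    names = (('-x', '+x'), ('-y', '+y'), ('-z', '+z'))
    faces = []
    for axis, d in enumerate(view_dir):
        if d > 0:
            faces.append(names[axis][0])  # looking toward +axis: -axis face shows
        elif d < 0:
            faces.append(names[axis][1])
    return faces
```

Since every octant of the same octree shares the same orientation, this would also only need to be computed once per frame rather than per node.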
And if someone understands how the paper describes doing this, that would be of great help, especially the overlay algorithm he uses.