The Flexline

Up to this point, the rasterizer has taken the output of the vertex shader (vertices and primitive info), determined which pixels are covered by those primitives, and colored them with a specified color. To make the rasterizer work with a pixel shader, I need to modify it to output a pixel location along with the interpolated primitive attributes that come out of the vertex shader (the interpolant outputs in the VS still need to be written as well...)
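As a rough sketch of what one entry in that rasterizer output might look like (the names `Fragment`, `MAX_INTERPOLANTS`, and `lerp_attr` are my own illustrative assumptions, not part of the actual design):

```cpp
#include <cstddef>

// Assumed cap on per-vertex attributes carried through to the pixel shader.
constexpr std::size_t MAX_INTERPOLANTS = 8;

// One rasterizer output entry: a pixel location plus the attributes
// interpolated across the primitive at that pixel.
struct Fragment {
    int   x, y;                        // pixel location in screen space
    float interp[MAX_INTERPOLANTS];    // interpolated VS outputs
};

// Linear interpolation of a single attribute along a scanline span,
// with t in [0, 1] between the span endpoints.
float lerp_attr(float a, float b, float t) {
    return a + (b - a) * t;
}
```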
So a decision needs to be made about how to organize the output. This system simply allocates a user-selectable buffer size for the buffer between the rasterizer and the pixel shader. However, if you recall past descriptions of how GPUs actually shade pixels, shading is always issued in blocks of pixels. I think the GeForce 6800 used a block size of 64x64, and the block size got smaller with each iteration of hardware. The blocks are used to process a group of similar fragments: the neighbors of a given pixel are used to calculate the screen-space derivatives of the texture coordinates for mip-map selection and so on - there is a good reason that NV and ATI did things the way they did.
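To make the mip-map point concrete, here is a minimal sketch of how a 2x2 block makes derivative estimation trivial: difference a texture coordinate against its horizontal and vertical neighbors, then take the log2 of the largest gradient (the `Quad` layout and `mip_level` formula here are my simplified assumptions, not the exact hardware scheme):

```cpp
#include <algorithm>
#include <cmath>

// A 2x2 block of pixels, storing the u texture coordinate at each one:
// index [0]=top-left, [1]=top-right, [2]=bottom-left, [3]=bottom-right.
struct Quad {
    float u[4];
};

// Screen-space derivatives approximated by neighbor differences.
float ddx_u(const Quad& q) { return q.u[1] - q.u[0]; }
float ddy_u(const Quad& q) { return q.u[2] - q.u[0]; }

// Simplified mip selection (u only): log2 of the largest texel-space
// gradient, clamped so the result never goes below the base level.
float mip_level(const Quad& q, float tex_size) {
    float du = std::max(std::fabs(ddx_u(q)), std::fabs(ddy_u(q)));
    return std::max(0.0f, std::log2(du * tex_size));
}
```

This is exactly why an isolated fragment is awkward: without its block neighbors there is nothing to difference against, so a scanline-at-a-time rasterizer has to reconstruct this information some other way.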
My initial idea for the rasterizer was to simply process each scanline and output one pixel at a time to the output buffer, organized as a 1D array of pixels (fragments) to be fed into the pixel shader. However, given the history of GPU development, it may be wise to also consider an output format that groups pixels into blocks.
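The one-pixel-at-a-time version is about as simple as rasterizer output gets; a minimal sketch under my own assumed names (`Fragment`, `rasterize_span`) might be:

```cpp
#include <vector>

// A fragment is just a covered pixel location here; interpolated
// attributes would ride along in the real version.
struct Fragment {
    int x, y;
};

// Emit the covered pixels of one scanline span [x0, x1], one at a time,
// into a flat 1D array that the pixel shader stage consumes in order.
std::vector<Fragment> rasterize_span(int y, int x0, int x1) {
    std::vector<Fragment> out;
    for (int x = x0; x <= x1; ++x)
        out.push_back({x, y});
    return out;
}
```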
This is where the flexline design really shines. All I have to do is create two different rasterizer processors, and I can switch between them at runtime to see what difference it makes in performance - all I would really be doing is linking one processor or the other into the flexline. So now I just have to implement them!
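The swap-at-runtime idea could be sketched as a shared interface with two implementations; everything here (`IRasterizer`, the `run` signature, the 2x2 quad rounding in the block variant) is my own hypothetical illustration, not the flexline's actual API:

```cpp
#include <memory>
#include <vector>

struct Fragment {
    int x, y;
};

// Common interface so the flexline can link in either processor.
struct IRasterizer {
    virtual ~IRasterizer() = default;
    // Rasterize a horizontal span of width w starting at (x, y).
    virtual std::vector<Fragment> run(int x, int y, int w) = 0;
};

// Variant 1: emit pixels one at a time, left to right.
struct ScanlineRasterizer : IRasterizer {
    std::vector<Fragment> run(int x, int y, int w) override {
        std::vector<Fragment> out;
        for (int i = 0; i < w; ++i)
            out.push_back({x + i, y});
        return out;
    }
};

// Variant 2: round the span out to 2x2 quads so every fragment has the
// neighbors needed for derivative calculations.
struct BlockRasterizer : IRasterizer {
    std::vector<Fragment> run(int x, int y, int w) override {
        std::vector<Fragment> out;
        for (int i = 0; i < w; i += 2)
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx)
                    out.push_back({x + i + dx, y + dy});
        return out;
    }
};
```

Swapping implementations is then one assignment through the interface pointer, which is the whole appeal: the rest of the pipeline never knows which variant is plugged in.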