Nov 17 2012 09:39 AM
This is a demonstration of software rasterization as presented by Michael Abrash in "Rasterization on Larrabee".
First, an L-system generates our "scene", i.e. a list of branches (position and diameter at both ends). Then, for each frame, the branches are transformed into view-aligned quads (2 triangles per branch) and submitted to the rasterizer.
This rasterizer is an AVX implementation of a tile-based deferred renderer (cf http://www.drdobbs.c...rabee/217200602): triangles are not immediately rendered but first sorted into 16x16-pixel (64x64-sample) bins. Once all triangles have been set up, each tile is rendered separately. Tiles can be processed in parallel (I'm using 4 hyperthreaded cores) and only access their local framebuffer (which should fit in the L1 cache). As presented in Abrash's article, rasterization is done recursively (4x4 blocks of 4x4 pixels of 4x4 samples) using precomputed step grids. This architecture lets the rasterizer leverage 16-wide vector units (on Ivy Bridge's 8-wide units, every operation has to be duplicated). For each face, the rasterizer outputs pixel masks for each block (or sample masks for partially covered pixels). Then pixels are depth-tested, shaded and blended into the local framebuffer. Finally, after all passes have been rendered, the tiles are resolved and copied to the application window buffer.
My goal is to minimize sampling artifacts:
- The tiled renderer makes 16x MSAA affordable (since it is not bandwidth-limited).
- The branches (actually cones) are not tessellated. They are rendered as view-aligned quads and shaded accordingly.
- Shadows are raycast (not sampled from a shadow map).
For this simple scene, a shadow map would be a better choice, but I wanted to try to leverage the CPU in the shader.
The main advantage of raycasting is that it allows accurate contact-hardening soft shadows for scenes with arbitrary depth complexity. A shadow map only lets you sample the occluder nearest to the light, whereas raycasting correctly handles penumbrae affected by closer, hidden occluders. Thus, given enough samples, raycasting can simulate much wider lights.
The main issue with raycasting is performance: it is already poor, and 16 samples are still not enough to avoid noisy shadows. I suspect there is much room for optimization, but I originally wanted to study L-systems (I'm not through "The Algorithmic Beauty of Plants" yet).
I worked on this project to have a more flexible real-time renderer for CG experiments, but also because working on performance is addictive. As in a game, you get a score (the frame time) and you have to make the most of your abilities to improve it.