Intro: I have been working on this technique for a while, and since real-time raytracing keeps getting faster (with the Brigade raytracer, for example), I believe it can be an important contribution to this area: it might bring raytracing one step closer to being usable for video games.
Update: I made the demo publicly available here: SVO Image Warping Demo Download
Algorithm: The technique exploits temporal coherence between two consecutive rendered images to speed up ray-casting. The idea is to store the x-, y-, and z-coordinates of each pixel in the scene in a coordinate buffer and re-project it into the next frame using the differential view matrix. The resulting image will look as below.
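To make the reprojection step concrete, here is a minimal sketch of warping one cached pixel into the new frame. It assumes each pixel stores its world-space position and that we have the new frame's combined view-projection matrix; the row-major matrix layout and NDC-to-screen mapping are illustrative conventions, not taken from the actual implementation.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def reproject(world_pos, view_proj, width, height):
    """Warp a cached world-space point into the current frame.

    Returns (px, py, depth) or None if the point is behind the camera
    or lands outside the screen.
    """
    x, y, z, w = mat_vec(view_proj, [world_pos[0], world_pos[1], world_pos[2], 1.0])
    if w <= 0.0:
        return None                      # behind the camera
    ndc_x, ndc_y = x / w, y / w          # perspective divide
    px = int((ndc_x * 0.5 + 0.5) * width)
    py = int((ndc_y * 0.5 + 0.5) * height)
    if 0 <= px < width and 0 <= py < height:
        return px, py, z / w
    return None

# identity view-projection: a point at the NDC origin maps to screen center
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(reproject((0.0, 0.0, 0.5), identity, 640, 480))  # -> (320, 240, 0.5)
```

In the real renderer this runs per pixel on the GPU and writes into the new coordinate buffer; pixels that return None are exactly the holes the next pass has to fill.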
The method then gathers empty 2x2 pixel blocks on the screen and stores them in an index buffer for raycasting the holes; raycasting single pixels would be too inefficient. Small holes remaining after the hole-filling pass are closed by a simple image filter. To improve the overall quality, the method updates the screen in tiles (8x4) by raycasting an entire tile and overwriting the cache, so the entire cache is refreshed after 32 frames. Further, a triple-buffer system is used: two cache buffers that are copied to alternately, plus the main image buffer that is written to. This is done because a pixel that is overwritten in one frame often becomes visible again in the next. Therefore, before hole filling starts, both cache buffers are projected into the main image buffer.
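The hole-gathering pass described above can be sketched as follows: scan the warped frame in 2x2 blocks and collect every block that contains at least one empty pixel into an index buffer, so the raycaster fills holes block-wise rather than per pixel. The `EMPTY` marker and the index-buffer layout are illustrative assumptions, not the original implementation.

```python
EMPTY = None  # marker for a pixel the warp did not cover (assumption)

def gather_empty_blocks(frame, width, height):
    """Return (bx, by) coordinates of every 2x2 block containing a hole."""
    index_buffer = []
    for by in range(0, height, 2):
        for bx in range(0, width, 2):
            block = (frame[by][bx], frame[by][bx + 1],
                     frame[by + 1][bx], frame[by + 1][bx + 1])
            if any(p is EMPTY for p in block):
                index_buffer.append((bx, by))
    return index_buffer

# 4x4 frame with two holes, both inside the upper-left 2x2 block
frame = [[1, EMPTY, 1, 1],
         [EMPTY, 1, 1, 1],
         [1, 1, 1, 1],
         [1, 1, 1, 1]]
print(gather_empty_blocks(frame, 4, 4))  # -> [(0, 0)]
```

On the GPU this would be a compaction step; only the blocks in the index buffer are dispatched to the raycasting kernel, which is where the speedup comes from.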
Results: Most pixels can be re-used this way, as only a fraction of the original image needs to be raycast. The speedup is significant: up to 5x the original speed, depending on the scene. The implementation is applied to voxel-octree raycasting using OpenCL, but it could equally be used for conventional triangle-based raycasting.
Limitations: The method comes with limitations, of course. The speedup obviously depends on the amount of motion in the scene, and the method is only suitable for primary rays and for pixel properties that remain constant over multiple frames, such as static ambient lighting. Further, during fast motion the silhouettes of geometry close to the camera tend to lose precision, and geometry in the background does not move as smoothly as if the scene were fully raytraced every frame. Future work might include suitable image filters to avoid these effects.
How to overcome the Limitations:
Ideas to solve noisy silhouettes near the camera during fast motion:

1. Suppress unwanted pixels with a filter that analyzes the depth values in a small window around each pixel. In experiments it removed some artifacts, but not all, and it had quite an impact on performance.

2. (Not fully explored yet) Assign a speed value to each pixel and use it to filter out unwanted pixels.

3. Create a quad-tree-like triangle mesh in screen space from the raycasted result. The idea is to get smoother frame-to-frame coherence with less pixel noise for distant pixels and to let the z-buffer handle overlapping pixels. It is sufficient to convert one tile of the raycasted result to a mesh per frame. Problem with this method: the mesh tiles do not fit together properly, as they are raycasted at different time steps. Using raycasting to fill the resulting holes was not simple, which is why I stopped exploring this method further. See http://www.farpeek.com/papers/IIW/IIW-EG2012.pdf, fig. 5b.

** Untested Ideas **

4. Compute silhouettes based on depth discontinuities and remove pixels crossing them.

5. Somehow do a reverse trace in screen space between two frames and test for intersections.

6. Use splats to rasterize voxels close to the camera so that speckles are covered.
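A minimal sketch of idea 1, the depth-window filter: a warped pixel is suppressed when the depth values in a small window around it spread too far apart, which is taken as a sign of a noisy silhouette. The window radius and threshold are illustrative assumptions, not tuned values from the experiments.

```python
def depth_filter(depth, width, height, radius=1, threshold=0.5):
    """Return a mask: True where the warped pixel should be kept."""
    keep = [[True] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            window = [depth[ny][nx]
                      for ny in range(max(0, y - radius), min(height, y + radius + 1))
                      for nx in range(max(0, x - radius), min(width, x + radius + 1))]
            # a large depth spread in the window hints at a silhouette speckle
            if max(window) - min(window) > threshold:
                keep[y][x] = False
    return keep

# flat depth field with one outlier: the outlier region is suppressed
depth = [[1.0] * 4 for _ in range(4)]
depth[1][1] = 3.0
mask = depth_filter(depth, 4, 4)
print(mask[1][1])  # -> False (near the outlier, filtered)
print(mask[3][3])  # -> True  (flat region, kept)
```

This also shows why the approach costs performance: every pixel reads a full neighborhood, so the filter roughly multiplies the per-pixel memory traffic by the window size.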