** Intro **
I have been working on this technology for a while, and since real-time raytracing keeps getting faster (e.g. the Brigade raytracer), I believe this can be an important contribution to the area: it might bring raytracing one step closer to being usable in video games.
Update: I made the demo publicly available here: SVO Image Warping Demo Download
1. Suppress unwanted pixels with a filter that analyzes the depth values in a small window around each pixel. In experiments this removed some artifacts, but not all, and it also had quite an impact on performance.
2. (Not fully explored yet) Assign a speed value to each pixel and use it to filter out unwanted pixels.
3. Create a quad-tree-like triangle mesh in screen space from the raycasted result. The idea is to get smoother frame-to-frame coherence with less pixel noise for distant pixels, and to let the z-buffer handle overlapping pixels. It is sufficient to convert one tile of the raycasted result to a mesh per frame. The problem with this method: the mesh tiles don't fit together properly because they are raycasted at different time steps. Using raycasting to fill in the holes turned out not to be simple, which is why I stopped exploring this method further.
Reference: http://www.farpeek.com/papers/IIW/IIW-EG2012.pdf (fig. 5b)
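Idea 1 above can be sketched roughly as follows. This is only an illustrative CPU version, not the actual shader: the function name `filter_speckles`, the window radius, and the depth threshold are my own choices. It treats a pixel whose depth deviates strongly from the local median as a warping speckle and discards it.

```python
def filter_speckles(depth, width, height, radius=1, threshold=0.1):
    """depth: flat row-major list of depth values.
    Returns a keep-mask (True = keep pixel, False = suppress)."""
    keep = [True] * (width * height)
    for y in range(height):
        for x in range(width):
            # gather depth values in a (2*radius+1)^2 window
            window = []
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < width and 0 <= ny < height:
                        window.append(depth[ny * width + nx])
            window.sort()
            median = window[len(window) // 2]
            # a pixel far from the local median depth is likely a speckle
            if abs(depth[y * width + x] - median) > threshold:
                keep[y * width + x] = False
    return keep
```

The cost of sorting a window per pixel also hints at why this filter noticeably impacted performance in practice.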
** Untested Ideas **
4. Compute silhouettes based on depth discontinuities and remove pixels crossing them.
5. Somehow perform a reverse trace in screen space between two frames and test for intersection.
6. Use splats to rasterize voxels close to the camera, so that speckles are covered.
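The silhouette detection in idea 4 could start from something like the sketch below; the function name `silhouette_mask` and the jump threshold are illustrative assumptions. A pixel is marked as lying on a silhouette when its depth jumps to any 4-neighbour by more than a threshold.

```python
def silhouette_mask(depth, width, height, jump=0.5):
    """depth: flat row-major list of depth values.
    Returns a mask (True = pixel sits on a depth discontinuity)."""
    mask = [False] * (width * height)
    for y in range(height):
        for x in range(width):
            d = depth[y * width + x]
            # compare against the four direct neighbours
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    if abs(depth[ny * width + nx] - d) > jump:
                        mask[y * width + x] = True
    return mask
```

Pixels flagged by this mask would then be the candidates for removal before warping.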
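For idea 6, the key quantity is the screen-space splat radius of a voxel. A minimal sketch, assuming a pinhole camera with a vertical field of view (function and parameter names are mine, not from the demo): voxels close to the camera project to large splats, which is what would cover the speckle holes.

```python
import math

def splat_radius_px(voxel_size, depth, fov_y_deg, screen_height):
    """Projected radius in pixels of a voxel with edge length
    `voxel_size` at view-space distance `depth` (pinhole camera)."""
    # pixels per world unit at this depth
    pixels_per_unit = screen_height / (
        2.0 * depth * math.tan(math.radians(fov_y_deg) / 2.0))
    return voxel_size * pixels_per_unit
```

For example, with a 90° vertical FOV and a 1000-pixel-high screen, a voxel of size 0.1 at distance 1 projects to a 50-pixel radius, while the same voxel at distance 100 shrinks to half a pixel, so distant voxels would still be drawn as single pixels.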
Edited by spacerat, 24 May 2015 - 10:37 PM.