spacerat

400% Raytracing Speed-Up by Re-Projection (Image Warping)

12 posts in this topic

Interesting idea. I also use coherence to accelerate diffuse sound ray tracing, and get a 10x improvement by averaging the ray contributions over several frames.

 

Do you have any ideas for how to improve the visual quality?

Sound raytracing for realistic echoes? That's also interesting.

 

For the quality: the upper image uses only 256 colors, so it doesn't look that good.

I just used it for testing. The general method can of course also be applied to 32-bit color.

 

When in motion, the reprojected version doesn't look as smooth as the raycasted version, so more research could be done, e.g. re-projecting into a higher-resolution frame buffer that is downsampled for the final rendering. Edges under fast motion will also lose some accuracy; additional research, such as applying image filters, may improve the result there too.

There's a bit of research on this technique under the name "real-time reverse reprojection cache". It's even been used in Battlefield 3 (rasterized, not ray-traced, though!)
[edit] I should've read your blog first and seen that you'd mentioned the above name.

 

[edit2] Here's the BF3 presentation where they use it to improve the quality of their SSAO calculations: http://dice.se/wp-content/uploads/GDC12_Stable_SSAO_In_BF3_With_STF.pdf

Edited by Hodgman
Yes, I know that paper. From what I understood, they just re-use the shading. Also related is an EG paper from a while ago, which does iterative image warping ( http://www.farpeek.com/papers/IIW/IIW-EG2012.pdf ). However, they seem to need a stereo image pair to compute the following image.

 

The advantage of the raycasting method is that it can reuse both color and position, and further that raycasting allows selectively tracing only the missing pixels, which is impossible with rasterization.

Very neat, though I've found that ideas along these lines break down completely when it comes to the important secondary rays, i.e. incoherent bounce rays for GI, ambient occlusion, and reflections. Still, thanks for this; there are use cases for primary raycasting, e.g. virtualized geometry. Hope you can find some clever way to get high-quality motion.

Yes, this technique is in general for speeding up primary rays. For secondary rays it depends: for static light sources, you can store the shadow information along with the pixel, and then re-use it in the following frame as well.
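For illustration, caching static-light shadow information alongside the pixel could look like the sketch below. The field names and the bitmask layout are assumptions for this example, not the author's actual data structure:

```python
from dataclasses import dataclass

@dataclass
class CachedPixel:
    """Per-pixel cache entry: besides color and world position (which the
    reprojection needs anyway), a shadow bitmask for the static lights is
    stored so it can be reused when the pixel is reprojected next frame."""
    color: tuple
    position: tuple
    shadow_mask: int  # bit i set => pixel is shadowed from static light i

def lit_by(pixel, light_index):
    """True if the cached pixel is lit by static light `light_index`,
    answered from the cache without re-tracing a shadow ray."""
    return not (pixel.shadow_mask >> light_index) & 1
```

As long as neither the light nor the geometry moves, the mask stays valid for as many frames as the pixel survives in the cache.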

 

If it's a reflection or refraction, you could do a quick neighbor search in the previous frame to see if anything can be reused - but most probably this technique won't suit that case well.

From your brief description this sounds very much like the temporal antialiasing techniques that are commonly used with rasterization. For reprojecting camera movement you really only need depth per pixel, since that's enough to reconstruct position with high precision. However, it's better if you store per-pixel velocity so that you can handle object movement as well (although keep in mind you need to store multiple layers if you want to handle transparency).

Another major issue when doing this for antialiasing is that your reprojection will often fail for various reasons. The pixel you're looking for may have been "covered up" the last frame, or the camera may have cut to a completely different scene, or there might be something rendered that you didn't track in your position/depth/velocity buffer. Those cases require careful filtering that excludes non-relevant samples, which generally means taking a drop in quality for those pixels for at least that one frame. In your case I would imagine you have to do the same thing, since spiking to 5x the render time for even a single frame would be very bad.
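The depth-only camera reprojection described above can be sketched as follows. This is a minimal NumPy version under assumed conventions (depth stored as NDC z, pixel centers at half-integer coordinates), not anyone's actual renderer code:

```python
import numpy as np

def reproject(depth, prev_depth, inv_viewproj, prev_viewproj, eps=1e-2):
    """Reconstruct each pixel's world position from the current depth buffer
    and project it into the previous frame. The cached color there can only
    be reused where the depths agree; otherwise the pixel was occluded."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Pixel centers in normalized device coordinates, x/y in [-1, 1].
    ndc = np.stack([(xs + 0.5) / w * 2.0 - 1.0,
                    (ys + 0.5) / h * 2.0 - 1.0,
                    depth,
                    np.ones_like(depth)], axis=-1)
    world = ndc @ inv_viewproj.T          # unproject to world space
    world = world / world[..., 3:4]
    prev = world @ prev_viewproj.T        # project into the previous frame
    prev = prev / prev[..., 3:4]
    px = ((prev[..., 0] + 1.0) * 0.5 * w).astype(int)
    py = ((prev[..., 1] + 1.0) * 0.5 * h).astype(int)
    inside = (px >= 0) & (px < w) & (py >= 0) & (py < h)
    pxc = np.clip(px, 0, w - 1)
    pyc = np.clip(py, 0, h - 1)
    # Depth test against the previous frame rejects disoccluded pixels.
    valid = inside & (np.abs(prev_depth[pyc, pxc] - prev[..., 2]) < eps)
    return px, py, valid
```

The depth comparison at the end is exactly the "careful filtering" mentioned above: pixels that fail it fall back to a fresh ray (or to neighbor filtering).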

Yes, it's related to temporal antialiasing techniques. However, there the geometry is not reused and needs to be rendered again. For the coordinates I also tried different approaches, but it turned out that relative coordinates (such as the z-buffer) tend to accumulate error over multiple frames, and the image is then no longer consistent.

 

Transparency is not yet handled and might indeed need special treatment. It's currently only for opaque surfaces.

 

To still cache pixels efficiently even when many of them get covered up or overwritten, I am using a multi-buffer cache. The result of each frame is stored alternately in one of two caches, while both caches are projected to the screen at the beginning of a frame, before the holes are filled by raycasting. This keeps pixels around efficiently, as pixels that are covered up in one frame might already be visible again in the next. In general the caching works well. It's also a relaxed caching scheme, where not every empty pixel gets filled by a ray: the method uses an image filter to fill small holes, and only fills holes larger than 2x2 pixels (the threshold can be defined) by raycasting.
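The hole classification this implies could be sketched as below. The box dilation and the masks are assumptions about how such a scheme might be implemented, not the actual code: after both caches are projected, empty pixels with a covered pixel nearby go to the image filter, and only the rest are queued for raycasting.

```python
import numpy as np

def classify_holes(covered, radius=1):
    """covered: boolean mask of pixels already filled by reprojection.
    Returns (small, large): small holes have a covered pixel within
    `radius` and can be filled by an image filter; large holes (roughly
    bigger than 2x2 for radius=1) must be filled by casting new rays."""
    h, w = covered.shape
    near_covered = np.zeros_like(covered)
    # Box-dilate the covered mask by `radius` via shifted copies.
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.zeros_like(covered)
            ys_dst = slice(max(dy, 0), h + min(dy, 0))
            xs_dst = slice(max(dx, 0), w + min(dx, 0))
            ys_src = slice(max(-dy, 0), h + min(-dy, 0))
            xs_src = slice(max(-dx, 0), w + min(-dx, 0))
            shifted[ys_dst, xs_dst] = covered[ys_src, xs_src]
            near_covered |= shifted
    holes = ~covered
    small = holes & near_covered   # fill by filtering neighbors
    large = holes & ~near_covered  # fill by raycasting
    return small, large
```

Only the `large` mask costs rays, which is what makes the relaxed scheme cheaper than filling every empty pixel.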

 

If the camera changes to a different view, then obviously the cache cannot be used and the first frame will render slower.

Edited by spacerat
>>I had a feeling, by watching their demos, that they're doing this already. (The video quality is very bad, so it's hard to see, but the silhouette ghosting artifacts made me think that.)

 

Yes, a month ago they stated that. However, they don't say whether it's used for primary or secondary rays, or what the speedup was. I believe it's simpler with secondary rays, as you do a reverse projection and can thus avoid empty holes.

 

>>you don't need to store x,y,z, it's enough to have the depth

 

I tried to use the actual depth from the depth buffer, but that failed due to errors accumulating from frame to frame, as most pixels are reused over up to 30 frames.

 

>> Crysis 2 (called temporal AA), and in Killzone 4 it recently got famous

 

TXAA (Crysis 2) just reuses the shading, as I understand it, so it is not for reconstructing the geometry.

The Killzone method sounds more interesting: they predict pixels using the motion, and so can reduce the render resolution.

I wonder if they need to store the motion vectors. Perhaps using the previous view matrices plus depth is sufficient.

Sounds related to the approach used in MPEG compression.

 

>> in path tracing, the trick is to not only re-project the final pixels (those are anti-aliased etc. and give you the wrong result anyway); you have to save the originally traced image (with 10 spp it's 10x the size!) and re-project those samples; then you'll get pretty perfect coverage.

Updates are now done in an interleaved pattern, e.g. replacing 1 out of 10 samples of the re-projection source buffer per frame.
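That interleaved update could be sketched like this. It's a hypothetical round-robin scheme; `trace` stands in for the actual ray tracer:

```python
def refresh_index(frame, num_samples=10):
    """Which of the num_samples per-pixel sample slots to re-trace this
    frame: simple round-robin, so each sample is refreshed exactly once
    every num_samples frames."""
    return frame % num_samples

def refresh_buffer(samples, frame, trace):
    """Replace one sample slot per frame with a freshly traced value.
    `samples` is a list of per-pixel sample buffers; `trace` is a callable
    standing in for the ray tracer (an assumption for this sketch)."""
    i = refresh_index(frame, len(samples))
    samples[i] = trace(i)
    return samples
```

Spreading the cost this way keeps the per-frame ray budget flat instead of re-tracing all samples at once.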

 
>> this also works for secondary rays, but gets a bit more complex: you have to save not only the position but also the 2nd bounce, and recalculate the shading using the BRDF. I think V-Ray is doing that, calling it a 'light cache'.
 

10 samples per pixel sounds pretty memory-consuming, but it would be interesting to hear more details about that method.

 

>> the silhouette issue arises when you work on the final buffer; that's what you can see in the rasterizer versions like Crysis 2 and Killzone 4. Re-projecting the spp buffer (using the proper subpixel position) will end up with no ghosting on silhouettes (besides the previously mentioned reconstruction issue).

 

Apparently gamers also noticed the quality impact of this method, even though it's only applied to every second pixel. In my case the pixels are reused far longer, which makes it even more difficult to keep the image consistent. I have also tried increasing the tile size, so every pixel on the screen is raycasted every 4th frame; that reduced the silhouette issue significantly, but obviously also lowered performance. It would be nice to track the silhouettes over several frames somehow so they don't lose accuracy.
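The tile-based refresh mentioned here can be sketched as follows, assuming a hypothetical 2x2 interleaving so that every pixel is force-refreshed once per four frames:

```python
def needs_raycast(x, y, frame, tile=2):
    """2x2 interleaved refresh: each frame forces a fresh ray for one
    position inside every tile, so every pixel is re-traced once per
    tile*tile frames regardless of cache state."""
    phase = frame % (tile * tile)
    return (x % tile, y % tile) == (phase % tile, phase // tile)
```

A larger `tile` lowers the forced-ray cost per frame but lets cached pixels age longer, which is exactly the performance/consistency trade-off described above.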

Edited by spacerat