BlinksTale

Any options for affordable ray tracing?

31 posts in this topic

So, as someone who has dealt with real-time graphics instead of prerendered for his entire career, when I look at this...

 

http://upload.wikimedia.org/wikipedia/commons/e/ec/Glasses_800_edit.png

 

...I feel like it's still a distant dream. Everything I'm learning is about how mirrors are too hard and how Portal's whole system was simplified to work with multiple cameras/physics in the game. That just doesn't seem right to me. There must be some way, especially with all these new consoles rolling in, to handle bending light, right?

 

My ideal would be something like these:

 

http://tobifairley.com/blog/wp-content/uploads/2010/07/crystal-glassware.jpg

http://devrajniwas.files.wordpress.com/2013/12/crystal_wine_glasses_tif.gif

http://www.homewetbar.com/images/prod/w-Crystal-Glass-Set-133769.jpg

 

Reflections and curved/bent light, that's really all I'm looking for. Now, that might be asking for a whole new Google/Facebook (or some equally absurd and impossible amount of work), but I really think there's a workaround I just haven't heard of yet.

 

If all I want is these two things, do I have any options? Maybe even ones with a camera that moves?

 

I'd love to have a world the player can navigate where light bends in these beautiful ways. Are there any options at all for that right now?


Have you seen 

 

http://arauna2.nhtv.nl/

 

There are a few realtime raytracers out there: some CPU-based, some GPU-based demos from the chipmakers. Intel was pushing it a few years back for its now-defunct Larrabee project.


Hodgman: fantastic! What wonderful examples - I'm really keen on that Nvidia one.

 

ddn3: Windows only! I was unable to open the exe, but working with Unity sounds fantastic.

 

Seeing Arauna's list though: Do I need any special features to have the light curve in the glass? I have no experience with that, just a lot of curiosity.


> Hodgman: fantastic! What wonderful examples - I'm really keen on that Nvidia one.
> ddn3: Windows only! I was unable to open the exe, but working with Unity sounds fantastic.
> Seeing Arauna's list though: Do I need any special features to have the light curve in the glass? I have no experience with that, just a lot of curiosity.

Yep, Arauna is a full path tracer, which will reproduce the proper caustics of glass. That's the fun thing about raytracers: no tricks are required to reproduce these complex light-object interactions and reflections. There was an older version of Arauna; see if it's still out there.

 

Good Luck!


This Brigade 3 demo is something; I've never seen anything like it (the heat haze in the city).

One question though: there's a lot of noise. (I understand it's because only a performance-limited number of randomized environment light rays can be sent.) Couldn't it be averaged somehow? How much does hardware still need to grow in speed, 5x, 10x?

Is there any chance the hardware companies will turn to supporting raytracing in consumer electronics? IMO that's a good way to go; it's the future.


Proper real-time ray tracing (which, by the way, is just an umbrella term for more specific techniques like bidirectional path tracing) is still at least 15-20 years away from becoming a generalized solution fast enough to finally replace rasterization entirely, simply due to the lack of performance. But I'm completely sure that we'll see hybrid solutions, with rasterization-based engines using ray tracing for secondary effects this gen. At least on PC it's basically a certain thing in my opinion; on next-gen consoles I'm not so sure.

 

I think the first obvious thing we're going to have within the next couple of years is proper single-bounce specular reflections for dynamic geometry & light sources, without having to rely on pre-baked stuff with lots of artist fiddling involved, like localized cube maps. We're already seeing techniques based on this. For example, the voxel-based cone tracing demo that made the rounds a year or two ago is essentially a ray tracer that marches through a voxel volume; the farther a ray travels, the lower-resolution the mip level it samples its data from, which mimics the behavior of a "cone", hence cone tracing instead of ray tracing.
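A minimal sketch of that mip-selection idea (all names here are hypothetical, not from any particular implementation): the cone's footprint diameter grows linearly with distance, and you pick the mip whose voxel size matches it.

```python
import math

def cone_mip_level(distance, cone_half_angle, base_voxel_size):
    """Pick the mip level whose voxel size matches the cone's diameter
    at `distance`. Farther along the cone -> wider footprint -> coarser mip."""
    diameter = 2.0 * distance * math.tan(cone_half_angle)
    # mip 0 has voxels of base_voxel_size; each mip doubles the voxel size
    return max(0.0, math.log2(max(diameter, base_voxel_size) / base_voxel_size))
```

With a cone whose half-angle has tan = 0.5, the footprint diameter equals the distance, so every doubling of distance bumps the sample up one mip.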

 

Actually I think I read a paper on something like this a couple of weeks ago, but I'm not sure if I'm dreaming that up right now.

 

Either way, one thing's clear: 30 years from now nobody will be using rasterization anymore and graphics programming will be simpler in many aspects for it.


Maybe faster. I think it's a matter of adding hardware acceleration for it, and that's all (probably).

 


> Maybe faster. I think it's a matter of adding hardware acceleration for it, and that's all (probably).

If there were easy solutions like that, why aren't they already out there?


 

> Maybe faster. I think it's a matter of adding hardware acceleration for it, and that's all (probably).

> If there were easy solutions like that, why aren't they already out there?

I don't know. Maybe they're not "so easy" because heavy hardware work would have to be done, which isn't easy for a small company, and the big ones may not be interested; maybe they prefer to sell what they already have for as long as they can (I really don't know the details).


 

> ray-tracing [is] still at least 15-20 years away from becoming generalized solutions fast enough to finally replace rasterization entirely,
> Either way, one thing's clear: 30 years from now nobody will be using rasterization anymore and graphics programming will be simpler in many aspects for it.

I'm sure there were people saying the same thing 10, 15, 20 & 30 years ago. ;)
 
If real-time follows the same path as film VFX, we'll probably see micropolygon renderers / Reyes-style rendering in real-time sooner than full-blown ray-tracing.
 
The problem with ray-tracing, which keeps it from being "just a few more years away", is that it has terrible memory access patterns. Rasterization, on the other hand, can be very predictable with its memory access patterns. This is a big deal because memory is slow, and the main methods for speeding it up rely on coherent access patterns. Many of the tricks employed by real-time ray-tracers are focused on improving the coherency of rays, or other techniques to reduce random memory accesses.
 
This problem never gets better.
Say that your program is slow because you can't deliver data from the RAM to the processor fast enough -- the solution: build faster RAM!
Ok, you wait a few years until RAM is 2x faster; however, in the same period of time our processors have gotten another 10x faster!
Now, after waiting those few years, the problem is actually much worse than it was before. The amount of data that you can deliver to the CPU per operation -- bytes/FLOP -- has actually decreased.
 
Every year, this performance figure gets lower and lower (blue line divided by red line is bytes per op):
[Chart: compute throughput (red) and memory bandwidth (blue) over the years; the blue line divided by the red line keeps shrinking.]
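To make the arithmetic in that scenario concrete (the numbers below are invented purely to illustrate the trend, not measurements of any real chip):

```python
flops = 100e9      # a made-up 100 GFLOP/s processor
bandwidth = 50e9   # a made-up 50 GB/s memory system

bytes_per_flop_before = bandwidth / flops  # 0.5 bytes available per operation

# a few years later: RAM got 2x faster, but compute got 10x faster
bytes_per_flop_after = (bandwidth * 2) / (flops * 10)  # 0.1 bytes per operation

# the machine is now 5x more memory-starved than before, despite faster RAM
```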

 

For people to completely ditch rasterization and fully switch to ray-tracing methods, we need some huge breakthroughs in efficiency: scene traversal schemes with predictable memory access patterns for groups of rays.

 

Film guys can get away with this terrible efficiency because they aren't constrained by the size of their computers, the heat output, the cost to run, or the latency from input to image... If you need 100 computers using 100KW of power, inside a climate controlled data-center with 3 hours of latency, so be it!

On the other hand, the main focus of the GPU manufacturers these days seems to be in decreasing the number of joules required for any operation, especially operations involved in physically moving bytes of data from one part of the chip to another... because most consumers can't justify buying a 1KW desktop computer, and they also want 12Hr battery life from their mobile computers... so efficiency is kind of important.

 

 


> But I'm completely sure that we'll see hybrid solutions, with rasterization based engines using ray tracing for secondary effects this gen. At least on PC, it's basically a certain thing in my opinion, on next gen consoles I'm not so sure.

You're right about this though. The appearance of SSAO in prev-gen was the beginning of a huge "2.5D ray-tracing" explosion, which will continue on next-gen with screen-space reflections and the like.

Other games are using more advanced 3D/world-space kinds of ray-tracing for other AO techniques on prev-gen, however, they're only tracing against extremely simple representations of the scene... e.g. ignoring everything except one character, and representing the character as less than a dozen spheroids...

Fully 3D ray-marching has shown up in other places, such as good fog rendering (shadow and fog interaction), or pre-baked specular reflection (e.g. cube-map) masking by ray-marching against a simplified version of the environment.

 

The voxel cone tracing stuff might ship on a few next-gen console games that require fancy specular dynamic reflections, but even with their modern GPU's it's a pretty damn costly technique.

 

 

I'm not sure it works that way...

Isn't raytracing better suited to parallelization than rasterization?

You can trace every ray independently of the others, so there are probably no RAM write collisions (or am I wrong?); you only need shared RAM reads.

For rasterization, it seems to me, the situation is less nice.

Also, I don't know how things look today, but as someone said, you can already run a Quake-style game on a path tracer (and as far as I know a path tracer is much heavier than a simple raytracer, so with just a raytracer you could get much faster rendering).

(Also, look how far Carmack took the optimization of the Quake 2 rasterization engine: it had no framebuffer pixel overwriting at all, plus a terrifying number of other crazy optimizations. I'm not sure today's path-tracer people go that far, though I know you people are good anyway.)

If so, it's not too far from real use. Wouldn't hardware acceleration for ray traversal speed it up a few more times?


All right, but if so, those slowdowns probably shouldn't apply to scenes that don't have a terribly big RAM footprint and are just reasonable in size.

Today's caches are around 10 MB, right? What if a whole scene fit in that much memory? That would mean relatively simple scenes of ~200k triangles or so, but shouldn't those raytrace quickly then, or not?

Do raytraced scenes really have memory footprints of gigabytes? Is it the BVH structures that have such a large RAM footprint? (I haven't done this and I'm not sure how it works. Is it some kind of spatial 3D grid of boxes, with a ray/box intersection test routine and some Bresenham-style traversal through the grid? Is that the stuff that consumes the RAM?)
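For a rough sense of scale, here is a back-of-the-envelope sketch (the node and triangle sizes are assumptions based on common layouts, not figures from any specific renderer): a binary BVH with one triangle per leaf has at most 2N - 1 nodes.

```python
def bvh_footprint_mb(num_triangles, node_bytes=32, tri_bytes=48):
    """Upper-bound footprint of a binary BVH plus its geometry.
    A binary tree with one triangle per leaf has at most 2N - 1 nodes;
    a triangle stored as three float3 vertices takes ~48 bytes."""
    num_nodes = 2 * num_triangles - 1
    total_bytes = num_nodes * node_bytes + num_triangles * tri_bytes
    return total_bytes / (1024 * 1024)

scene_mb = bvh_footprint_mb(200_000)
# A 200k-triangle scene lands around 21 MB with geometry included, so it
# brushes against a ~10 MB cache rather than fitting comfortably inside it.
```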


Just tracing small scenes that fit into your cache, especially the L1 cache, will probably be fast; you could fit a Quake 1 level into the cache and it would work OK.

I remember someone traced Star Trek: Elite Force ages ago; there must be a vid on YouTube... there: http://www.youtube.com/watch?v=kUsaNfmZ3CU

But it's still memory limited...

You can do math with SSE on 4 elements and AVX on 8 elements, so in theory you could work on 8 rays at the same time. Yet once you come to memory fetching, it's one fetch at a time. It's even worse: if you process 4 or 8 elements at a time, there will always also be 4 or 8 reads in a row. So while your out-of-order CPU can hide some of the fetch latency by processing independent instructions, once you queue 4 or 8 fetches, there is just nothing for the other units to do but wait for your memory requests.

And while L1 requests are mostly hidden by the instruction decoding etc., if you start queuing up 4 or 8 reads and those go to the L2 cache, with ~12-cycle latency on a hit, you have ~50 or ~100 cycles until you can work on the results.

 

you can see some results about this straight from intel:

http://embree.github.io/data/embree-siggraph-2013-final.pdf

As you can see there, SSE is barely doing anything for the performance.

 

 

On the other side, rasterization is just a very specialized form of ray tracing. You transform the triangles into ray space, so the rays end up being 2D dots that you can evaluate before you do the actual intersection. The intersection is done by interpolating coherent data (usually UVs and Z) and projection. You also exploit the fact that you can limit the primitive-vs-ray test to a small bounding rect, skipping most of the pixels (or 2D dots). The coherent interpolation also exploits the fact that you can keep your triangle data in registers for a long time (so you don't need to fetch data from memory), and that you don't need to fetch the rays either, as you know they're in a regular grid order and calculating their positions is natural and fast. (I'm not talking about scanline rasterization, but halfspace or even more homogeneous rasterization.)

 

 

If you could order rays in some way to be grouped like in rasterization, you'd get close to being as fast with tracing as you are with rasterizing. Indeed, there is quite some research going on into how to cluster rays and defer their processing until enough of them try to touch similar regions. There is also research into speculative tracing: you do triangle tests not with just one ray but with 4 or 8 at a time, and although it's bogus to test random rays, it's also free with SSE/AVX, and if your rays are somewhat coherent (e.g. shadow rays or primary rays), it ends up somewhat faster.

 

 

As I said in my first post here, there is already a lot of research into faster tracing, but there is really a lot of room to improve what you do with those rays. You can assume you'll get about 100-200 MRays/s: that's what the Caustic RT hardware does, that's what you get with OptiX, that's what Intel can achieve with Embree. And even if you magically got 4x that -> 800 MRays/s, you'd just reduce Monte Carlo path-tracing noise by 50%. On the other side, you can add importance sampling, or improve your random number or ray generator with a few lines of code (and a lot of thinking), and you'll suddenly get down to 5% of the noise (it really comes in steps that big).
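The 1/sqrt(N) behaviour behind "4x the rays only halves the noise" is easy to check empirically. This toy estimator (the integral of x^2 over [0, 1], a stand-in for a path tracer's per-pixel estimate) is invented for illustration:

```python
import random
import statistics

def mc_estimate(num_samples, rng):
    # Monte Carlo estimate of the integral of x^2 over [0, 1]; exact value 1/3.
    return sum(rng.random() ** 2 for _ in range(num_samples)) / num_samples

rng = random.Random(1)
trials = 300
# spread ("noise") of the estimate with N samples vs 4N samples
noise_n = statistics.stdev(mc_estimate(100, rng) for _ in range(trials))
noise_4n = statistics.stdev(mc_estimate(400, rng) for _ in range(trials))
# 4x the samples (rays) only halves the noise: error scales as 1/sqrt(N)
```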

 

Further, ray tracing (and especially path tracing) is still quite an academic area. For rasterized games, we mostly ignore the proofs of correctness and some of the theories that we violate. Texture sampling, mipmapping etc. were done properly 20 years ago in offline renderers, yet even the latest and greatest hardware will produce blurry or noisy results at some angles, and while we know exactly how to solve it academically correctly, we'd rather switch on 64x AF, which also solves the problem to some degree.

That's how game devs should also approach tracing. E.g. being unbiased is nice in theory (a theory that only holds in theory, as floats are biased by definition), but you can get far better/faster results with biased algorithms like photon mapping, or by cancelling ray paths after a fixed number of bounces.
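A toy illustration of that depth-capping trade-off (a furnace-like scene where every bounce emits 1 unit of light and reflects half of what it gathers; all numbers are invented):

```python
def radiance(depth, max_depth=3, emitted=1.0, albedo=0.5):
    """Gather light recursively, but stop after max_depth bounces.
    Dropping the deeper bounces makes the estimator biased (it always
    under-reports energy), but the cost is strictly bounded."""
    if depth >= max_depth:
        return 0.0  # the bias: deep-bounce energy is simply discarded
    return emitted + albedo * radiance(depth + 1, max_depth, emitted, albedo)

capped = radiance(0)        # 1 + 0.5 + 0.25 = 1.75 with three bounces
exact = 1.0 / (1.0 - 0.5)   # infinite-bounce answer: 2.0
```

The error is just the tail of the geometric series, so a modest depth cap already lands very close to the exact answer at a fraction of the cost.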

 

all this talk makes me want to work tonight on my path tracer again, .... you guys !


Use less memory. Use compressed containers. Access per thread/core within a sub-tree, or coherently. Use fewer pointers.

 

My 10 000 USD ;-)

 

spinningcube

 

PS - oh do I miss the heydays of ompf...


One exception to the "ray-tracing requires too much memory bandwidth" problem is procedural scenes, as they are created using a lot of code (performance of which increases with CPU speed) instead of data stored in RAM.

Inigo Quilez has lots of great examples here: https://www.shadertoy.com/user/iq (be warned: requires WebGL and can take a long time to load).
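A minimal sphere-tracing sketch of that idea in Python (the same principle as iq's GLSL shaders; the "scene" here is a hypothetical unit sphere defined purely by a distance function, with no geometry stored in memory):

```python
import math

def scene_sdf(x, y, z):
    # Procedural scene: a unit sphere centered at (0, 0, 5). Nothing lives
    # in RAM; the entire scene is this signed-distance function.
    return math.sqrt(x * x + y * y + (z - 5.0) ** 2) - 1.0

def sphere_trace(ox, oy, oz, dx, dy, dz, max_steps=128, eps=1e-5):
    """March along the ray by the distance the SDF guarantees is safe."""
    t = 0.0
    for _ in range(max_steps):
        d = scene_sdf(ox + dx * t, oy + dy * t, oz + dz * t)
        if d < eps:
            return t  # hit: within eps of the surface
        t += d        # safe step: the SDF says nothing is closer than d
    return None       # miss
```

A ray from the origin straight down +z hits the sphere at t = 4 (center at z = 5 minus the radius of 1).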


> One exception to the "ray-tracing requires too much memory bandwidth" problem is procedural scenes, as they are created using a lot of code (performance of which increases with CPU speed) instead of data stored in RAM.
> Inigo Quilez has lots of great examples here: https://www.shadertoy.com/user/iq (be warned: requires WebGL and can take a long time to load).

Yep, that killed my browser quickly. They really need to implement image thumbnails. Running that many random shaders simultaneously is ridiculous.


> If real-time follows the same path as film VFX, we'll probably see micropolygon renderers / Reyes-style rendering in real-time sooner than full-blown ray-tracing.

I thought that up until recently too. I don't know how familiar you are with modern hardware, but from what I can figure, micropolys are out of the question. If anything, they're far more antithetical to the current design of acceleration hardware than raytracing. The issue with raytracing in offline applications is that the scenes being rendered are gigantic, which exacerbates the memory access pattern problem. It's worth noting that the first Pixar movie to use raytracing extensively was Cars. For games, on the other hand, a lot of our scenes are likely to be more manageable.

 

In any case, I feel that the raytracing advocates tend to focus a lot on scenes that look cool but have very little use in practical scenarios. Rendering nice glass is not high on the priority list for games. The things that are, in pure shading terms, are shadows (at many scales), subsurface scattering, more complex lighting models and especially the aliasing behaviors of those lighting models, image based lighting, and global illumination. I recommend reading the FXGuide "State of Rendering" posts, if you haven't already:

https://www.fxguide.com/featured/the-state-of-rendering/

http://www.fxguide.com/featured/the-state-of-rendering-part-2/

Personally I'd place GI and IBL at the top of the real time list, and solve those problems by any means necessary. We're starting to see some movement there in the next gen engines, particularly in work done by Epic and Crytek. But we're not close to done yet. Memory bandwidth is going to be our biggest enemy here for years to come.

 

On a sidenote, I also feel that the subsurface scattering technique(s) we're seeing in production games look like shit and should be discarded entirely. The Jimenez shader was a cute hack but it's holding us back now. It's time to rethink that into something that actually looks like skin and not a funny looking gaussian blur.


 

> Personally I'd place GI and IBL at the top of the real time list, and solve those problems by any means necessary. We're starting to see some movement there in the next gen engines, particularly in work done by Epic and Crytek. But we're not close to done yet. Memory bandwidth is going to be our biggest enemy here for years to come.
> On a sidenote, I also feel that the subsurface scattering technique(s) we're seeing in production games look like shit and should be discarded entirely. The Jimenez shader was a cute hack but it's holding us back now. It's time to rethink that into something that actually looks like skin and not a funny looking gaussian blur.

 

 

GI is by definition an NP-hard problem to solve: the cost grows exponentially with each bounce. It's not solvable in real time this generation.

 

Rasterization, on the other hand, can cover image-based lighting pretty well. That is, as far as I can tell, what Epic is doing: relightable environment maps, blended together smartly (box projection, other hacks) with the diffuse environment lookup hack. I'm just not sure how it works at a distance, if at all. All the artists I've noticed with their hands on it have been doing small scenes, and as you increase depth complexity you're going to need to access ever more cubemaps, eating into your bandwidth more and more.

 

Maybe it works in some way I've not thought of. Hopefully they'll talk about it at GDC or something, though I don't see a talk from them scheduled anywhere. Point is, I'm not sure raytracing is the solution for global illumination, at least not the solution that makes it work this generation.
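For reference, the "box projection" hack mentioned above is usually done by intersecting the reflection vector with a proxy AABB around the room and re-aiming the cubemap lookup at the hit point. A sketch (the box, probe position, and function name are all illustrative; the shading point is assumed to be inside the box):

```python
def parallax_corrected_dir(pos, refl, box_min, box_max, probe_pos):
    """Box projection: intersect the reflection ray with the proxy AABB,
    then look up the cubemap along (hit_point - probe_center) instead of
    the raw reflection vector."""
    # Distance along refl to the exit face of each slab (pos is inside the box)
    t = min(
        ((box_max[i] if refl[i] > 0 else box_min[i]) - pos[i]) / refl[i]
        for i in range(3) if refl[i] != 0
    )
    hit = [pos[i] + refl[i] * t for i in range(3)]
    return [hit[i] - probe_pos[i] for i in range(3)]
```

With the probe at the box center the correction is a no-op; move the probe off-center and the lookup direction tilts toward the actual wall hit, which is exactly the parallax fix.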


> I thought that up until recently too. Don't know how familiar you are with modern hardware but from what I can figure, micropolys are out of the question. If anything they're far more antithetical to the current design of acceleration hardware than raytracing.

Rasterizing and pixel-shading them on current hardware? We could do that now by tessellating things enough... but yeah, the pixel shaders will be extremely inefficient with those microscopic polygons. It also wouldn't be a proper micropolygon renderer.

We should be tessellating until we have sub-pixel-sized polys, then shading per-vertex. Then for each pixel, store a list of the micropolygons contained within it; after each pixel's list has been filled, determine the coverage of each micropolygon and blend the results.
That would give actual, analytic AA instead of the MSAA crap, and it doesn't involve pixel shaders, so it doesn't fall into the current hardware limitations.
The hardware isn't designed for it though, so you'd have to implement parts of it with a compute shader (you might still be able to use the hardware vertex, tessellation, and stream-out stages though!). It should be doable on modern GPUs without new hardware.

> GI is by definition an NP-hard problem to solve; the cost grows exponentially with each bounce. It's not solvable this generation in real time.

In most cases, any bounces after the third are imperceptible, though. So it's not unsolvable if you put a limit in place, such as "first N bounces only", where N is one or two. Or you can use an iterative algorithm that only does 1 bounce per frame but reuses last frame's results, so you get infinite bounces with n-1 frames of latency for the nth bounce.
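That one-bounce-per-frame scheme can be sketched with a scalar toy (a real engine would store this per-probe or per-texel; the emitted/albedo numbers are invented): feeding last frame's lighting back in adds one more bounce each frame, converging on the infinite-bounce sum.

```python
def accumulate_bounces(frames, emitted=1.0, albedo=0.5):
    """Each frame injects direct light plus one bounce of last frame's
    result, so frame n holds n bounces of light at constant per-frame cost."""
    radiance = 0.0
    history = []
    for _ in range(frames):
        radiance = emitted + albedo * radiance  # reuse last frame's lighting
        history.append(radiance)
    return history

history = accumulate_bounces(40)
# history[0] is direct light only; the tail approaches the infinite-bounce
# answer emitted / (1 - albedo), here 2.0, with n-1 frames of latency for
# the nth bounce.
```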

