MrOMGWTF

Cone Tracing and Path Tracing - differences.

72 posts in this topic

Hi.
I'm trying to implement this technique: [url="http://blog.icare3d.org/2011/08/interactive-indirect-illumination-using.html"]http://blog.icare3d....tion-using.html[/url]
The paper about it isn't very detailed and is missing some information.

I did some research on voxels and found a technique for fast voxelization, but I still don't know about cone tracing, and there are no dedicated papers on it. I know the basic difference between path tracing and cone tracing: you trace cones instead of rays, so instead of shooting hundreds of rays you only shoot a few cones. Still, I'm missing a lot of information needed to implement this technique. Can somebody explain cone tracing to me, or give me a link to a paper? I have been using Google and found nothing except an undetailed wiki page on cone tracing. Thanks in advance. Sorry for my poor English. Edited by MrOMGWTF
In cone tracing you sample prefiltered geometry (mipmaps).

As the radius of the cone grows you move to a coarser mipmap level, so a single sample gives you the sum of all the information stored in the smaller voxels further down the mipmap tree.
This is why you don't need to shoot nearly as many rays/cones.

Older papers give more information on their cone tracing:
http://www.icare3d.org/research-cat/publications/beyond-triangles-gigavoxels-effects-in-video-games.html
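To make the idea concrete, here is a tiny 1D sketch of cone tracing against a mip chain. The data layout, step size, and function names are all made up for illustration; the real technique marches cones through a 3D octree/mipmap on the GPU:

```python
import math

def build_mips(level0):
    """Average pairs of (color, alpha) samples to build the prefiltered chain."""
    mips = [level0]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        mips.append([((prev[i][0] + prev[i + 1][0]) / 2.0,
                      (prev[i][1] + prev[i + 1][1]) / 2.0)
                     for i in range(0, len(prev) - 1, 2)])
    return mips

def cone_trace(mips, voxel_size, origin, direction, half_angle, max_dist):
    """March one cone through a prefiltered 1D voxel field (didactic sketch).

    mips[0] is the finest level, a list of (color, alpha) voxels; each
    coarser level halves the resolution. As the cone widens with distance,
    coarser levels are sampled, so one lookup stands in for many rays.
    """
    color, alpha = 0.0, 0.0
    t = voxel_size                                  # start just past the apex
    while t < max_dist and alpha < 0.95:
        diameter = 2.0 * t * math.tan(half_angle)   # cone footprint at t
        level = min(len(mips) - 1,
                    int(math.log2(max(diameter / voxel_size, 1.0))))
        cell = voxel_size * (2 ** level)            # voxel size at this level
        idx = int((origin + t * direction) / cell)
        if 0 <= idx < len(mips[level]):
            c, a = mips[level][idx]
            color += (1.0 - alpha) * a * c          # front-to-back compositing
            alpha += (1.0 - alpha) * a
        t += cell * 0.5                             # step grows with footprint
    return color, alpha
```

The key point is the `level` computation: the wider the cone's footprint, the coarser the mip level sampled, which is exactly why a handful of cones can replace hundreds of rays.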
[quote name='Pottuvoi' timestamp='1343308089' post='4963285']
In cone tracing you sample prefiltered geometry (mipmaps).

As the radius of the cone grows you move to a coarser mipmap level, so a single sample gives you the sum of all the information stored in the smaller voxels further down the mipmap tree.
This is why you don't need to shoot nearly as many rays/cones.

Older papers give more information on their cone tracing:
[url="http://www.icare3d.org/research-cat/publications/beyond-triangles-gigavoxels-effects-in-video-games.html"]http://www.icare3d.o...ideo-games.html[/url]
[/quote]

Is it possible to do cone tracing on polygons? Edited by MrOMGWTF
[url="http://scholar.google.fi/scholar?q=cone+tracing+polygons&hl=fi&as_sdt=0&as_vis=1&oi=scholart&sa=X&ei=0GcRUKGFJYyN4gTItYCoAQ&ved=0CE8QgQMwAA"]Cone tracing of polygons is possible[/url]...but beam/cone tracing works quite differently with polygons.
Voxels simply make it easy because they are trivially prefilterable; polygons are not.

The easiest way to get it to work with polygons is to voxelize them and use voxel cone tracing (which is apparently what Unreal Engine 4 does).
Oh, I have another question. If a cone intersects a voxel, then where is the hit point? A ray has no thickness, so it hits at a single point, but a cone has volume. Edited by MrOMGWTF
[quote name='MrOMGWTF' timestamp='1343301831' post='4963254']
The paper about it isn't very detailed and is missing some information.
[/quote]Just FYI, the first part, covering the voxelization, was released as a free chapter from the OpenGL Insights book; you can find it [url="http://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-SparseVoxelization.pdf"]here[/url] (according to Crassin's Twitter page, the full source will be released soon, probably to the git repo [url="https://github.com/OpenGLInsights/OpenGLInsightsCode"]here[/url]).
[quote name='Necrolis' timestamp='1343831450' post='4965211']
[quote name='MrOMGWTF' timestamp='1343301831' post='4963254']
The paper about it isn't very detailed and is missing some information.
[/quote]Just FYI, the first part, covering the voxelization, was released as a free chapter from the OpenGL Insights book; you can find it [url="http://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-SparseVoxelization.pdf"]here[/url] (according to Crassin's Twitter page, the full source will be released soon, probably to the git repo [url="https://github.com/OpenGLInsights/OpenGLInsightsCode"]here[/url]).
[/quote]

I know how to do fast voxelization. I'm looking for info about cone tracing.
[quote name='MrOMGWTF' timestamp='1343827030' post='4965192']
Oh, I have another question. If a cone intersects a voxel, then where is the hit point? A ray has no thickness, so it hits at a single point, but a cone has volume.
[/quote]
I'm quite sure the cone's size is a byproduct of the mipmaps: at coarser levels the voxels get bigger, so the ray effectively gains 'volume' as well.
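In other words, the "hit" is a filtered lookup rather than a point. A rough sketch of the relationship (the function names and the exact level formula are illustrative, not taken from the paper):

```python
import math

def cone_footprint(distance, half_angle):
    """Diameter of the cone's cross-section at `distance` from the apex."""
    return 2.0 * distance * math.tan(half_angle)

def mip_level(distance, half_angle, leaf_voxel_size):
    """Mip level whose voxel size roughly matches the cone footprint.

    There is no single hit point: each sample covers one prefiltered voxel
    whose size tracks the cone's width, so the cone effectively intersects
    a volume rather than a point.
    """
    d = cone_footprint(distance, half_angle)
    return max(0.0, math.log2(max(d / leaf_voxel_size, 1.0)))
```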
The paper's at http://maverick.inria.fr/Publications/2011/CNSGE11b/index.php

It seems extremely tricky to implement well.
Well, I was trying hard, but my 13-year-old brain can't handle cone tracing.
Do you know any other easy-to-implement, real-time global illumination techniques?
[quote name='MrOMGWTF' timestamp='1344152867' post='4966291']
Well, I was trying hard, but my 13-year-old brain can't handle cone tracing.
Do you know any other easy-to-implement, real-time global illumination techniques?
[/quote]

The words "easy" and "global illumination" don't really get along all too well.

Indirect lighting is an advanced technique, so if you want to implement it you'll pretty much have to get your hands dirty.
When it comes to global illumination you have quite a few options, each with their own advantages and disadvantages.

There are precomputed methods like precomputed radiance transfer (PRT), photon mapping, lightmap baking, etc. These techniques are mostly static and won't have any effect on dynamic objects in your scene, but they are very cheap to run since all the expensive calculations have been done in a pre-processing step. As far as I know these only support diffuse indirect light bounces.

Looking at more dynamic approaches, there are VPL-based methods like instant radiosity, which allow for dynamic objects and a single low-frequency light bounce. You could also use the VPLs directly, but this will require a filtering algorithm if you want smooth results without flickering.

Another interesting dynamic approach is the light propagation volume technique used by Crytek, which uses reflective shadow maps to seed a 3D grid with indirect lighting values and then applies a propagation algorithm to fill the rest of the grid. This is fast, but again only allows for a single low-frequency diffuse bounce.

There's also screen-space indirect lighting, which is an extension of SSAO. Of course this technique can only use the information available on screen, so it may not give satisfying results.
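To give a feel for the VPL idea mentioned above, here is a generic instant-radiosity-style sketch. Everything in it is a simplified assumption (scalar flux, hand-rolled vector helpers, no visibility test), not any engine's actual API:

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def indirect_from_vpls(point, normal, vpls):
    """One-bounce indirect light at `point` as a sum over virtual point lights.

    Each VPL is (position, normal, flux): a surface point the light directly
    hits, re-emitting its received energy. The contribution uses the two
    cosine terms and inverse-square falloff of instant radiosity; no
    visibility (shadow) test is performed in this sketch.
    """
    total = 0.0
    for vpos, vnormal, flux in vpls:
        to_point = tuple(p - q for p, q in zip(point, vpos))
        dist2 = _dot(to_point, to_point)
        if dist2 == 0.0:
            continue
        w = _normalize(to_point)                  # VPL -> shaded point
        cos_vpl = max(0.0, _dot(vnormal, w))      # emission cosine at the VPL
        cos_recv = max(0.0, -_dot(normal, w))     # receiving cosine at the point
        total += flux * cos_vpl * cos_recv / (dist2 * math.pi)
    return total
```

In a real renderer the VPLs would be generated from a reflective shadow map, and the unclamped 1/r² term is the usual source of the flickering and singularities that make filtering necessary.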
[quote name='MrOMGWTF' timestamp='1344152867' post='4966291']
Well, I was trying hard, but my 13-year-old brain can't handle cone tracing.
Do you know any other easy-to-implement, real-time global illumination techniques?
[/quote]

I suggest you attempt to write a mirror/shadow-only ray tracer. Nothing beyond high school level (or college, depending on your POV) math is required. This isn't true GI, but it's a very good start. Edited by jameszhao00
[quote name='jameszhao00' timestamp='1344184433' post='4966391']
[quote name='MrOMGWTF' timestamp='1344152867' post='4966291']
Well, I was trying hard, but my 13-year-old brain can't handle cone tracing.
Do you know any other easy-to-implement, real-time global illumination techniques?
[/quote]

I suggest you attempt to write a mirror/shadow-only ray tracer. Nothing beyond high school level (or college, depending on your POV) math is required. This isn't true GI, but it's a very good start.
[/quote]

That's a good idea, I'll do it.
I just finished a working ray-sphere intersection test.
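For anyone following along, a minimal ray-sphere test in the standard quadratic form might look like this (a common textbook formulation, not MrOMGWTF's actual code):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the smallest t >= 0 where origin + t*direction hits the sphere,
    or None if the ray misses. `direction` should be normalized."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c            # 'a' term is 1 for a unit direction
    if disc < 0.0:
        return None
    sq = math.sqrt(disc)
    for t in ((-b - sq) / 2.0, (-b + sq) / 2.0):  # nearer root first
        if t >= 0.0:
            return t
    return None
```

Checking both roots matters: when the ray origin is inside the sphere the nearer root is negative, and only the far root is a valid hit.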
[quote name='Tsus' timestamp='1344326346' post='4966933']
Recently, [url="http://www.mpi-inf.mpg.de/~ritschel/Papers/GISTAR.pdf"]Ritschel et al.[/url] wrote a nice state-of-the-art report on interactive global illumination techniques. Most of them require a solid amount of work, but it’s good to get an overview of what’s out there.

Once you get your ray tracer going, you can quite easily extend it to a [url="http://cg.informatik.uni-freiburg.de/intern/seminar/86kajiyaRenderingEquation.pdf"]path tracer[/url] or – with a little more effort – to a [url="http://128.148.32.110/courses/csci2240/papers/photon_mapping.pdf"]photon mapper[/url]. Extending to a path tracer is easier, but with a photon mapper you could compute indirect lighting for real-time applications, see [url="http://www.cs.jhu.edu/~misha/ReadingSeminar/Papers/McGuire09.pdf"]McGuire et al.[/url], or you could look into progressive photon mapping ([url="http://cs.au.dk/~toshiya/ppm.pdf"]PPM[/url], [url="http://cs.au.dk/~toshiya/sppm.pdf"]SPPM[/url], [url="http://zurich.disneyresearch.com/~wjarosz/publications/jarosz11progressive.html"]Photon Beams[/url]) and all its extensions if you want photometrically correct lighting, which, however, takes a few more hours to compute.

Cheers!
[/quote]

Hey, thanks for the good information.
And thanks for getting me into photon mapping! This technique is awesome and easy to understand. I'll try to do some optimizations for it.
Voxelizing the geometry before mapping photons would be a big speedup, I think.
YES FINALLY I WROTE THE BASE FOR THE RAY TRACER!

[spoiler][img]http://i.imgur.com/cKWqN.png[/img][/spoiler]

Here are the normals of the sphere:

[spoiler][img]http://i.imgur.com/ZK625.png[/img][/spoiler]

It's just the base; it doesn't support lighting and much else yet. I'll work on lighting now. Edited by MrOMGWTF
I will say this paper is very complicated, and perhaps impossible to implement directly from the paper alone. Additionally, the technique was implemented using some of the latest GPU features, some of which are only available in OpenGL 4.3 or DirectX 11 and/or Nvidia-specific extensions, so unless you have the latest and greatest Nvidia hardware (I think only the Kepler architecture), you will not be able to fully implement it. For example, voxelizing the dynamic objects each frame requires access to a compute shader. In the absence of a compute shader, I guess you could do something with CUDA or OpenCL, but that would require OpenGL interop to write data into a buffer those libraries can use to voxelize and build the octree. You could always pre-build the octree, but then your scene would have to be static, with no dynamic geometry.

That being said, if you would like more information on these techniques, you should check out Cyril Crassin's webpage, as it contains other papers on techniques this method builds on (http://maverick.inria.fr/Members/Cyril.Crassin/). Also, the newest Unreal Engine 4 takes advantage of this technique (see the demos on YouTube).
[quote name='scyfris' timestamp='1344461344' post='4967538']
I will say this paper is very complicated, and perhaps impossible to implement directly from the paper alone. Additionally, the technique was implemented using some of the latest GPU features, some of which are only available in OpenGL 4.3 or DirectX 11 and/or Nvidia-specific extensions, so unless you have the latest and greatest Nvidia hardware (I think only the Kepler architecture), you will not be able to fully implement it. For example, voxelizing the dynamic objects each frame requires access to a compute shader. In the absence of a compute shader, I guess you could do something with CUDA or OpenCL, but that would require OpenGL interop to write data into a buffer those libraries can use to voxelize and build the octree. You could always pre-build the octree, but then your scene would have to be static, with no dynamic geometry.

That being said, if you would like more information on these techniques, you should check out Cyril Crassin's webpage, as it contains other papers on techniques this method builds on (http://maverick.inria.fr/Members/Cyril.Crassin/). Also, the newest Unreal Engine 4 takes advantage of this technique (see the demos on YouTube).
[/quote]

I can do voxelization in a vertex/fragment shader.
The technique is explained here: http://graphics.snu.ac.kr/class/graphics2011/materials/paper09_voxel_gi.pdf
You basically render the model's vertex coordinates into a texture, and for each pixel of that texture you add a voxel at the position stored in that pixel's value.

Maybe I explained it wrong; see the paper for the best explanation.
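The point-splatting idea described here can be mimicked on the CPU like this (the function name and parameters are made up for illustration; the paper does this with render-to-texture on the GPU):

```python
import math

def voxelize_points(points, voxel_size):
    """Quantize surface sample positions into a sparse set of voxel indices.

    This mirrors the 'render positions to a texture, then splat each texel
    as a voxel' idea: each position simply lands in the grid cell that
    contains it. Surface sampling must be dense enough to avoid holes.
    """
    voxels = set()
    for x, y, z in points:
        voxels.add((math.floor(x / voxel_size),
                    math.floor(y / voxel_size),
                    math.floor(z / voxel_size)))
    return voxels
```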
[quote name='MrOMGWTF' timestamp='1344495092' post='4967651']
I can do voxelization in a vertex/fragment shader.
The technique is explained here: http://graphics.snu....09_voxel_gi.pdf
You basically render the model's vertex coordinates into a texture, and for each pixel of that texture you add a voxel at the position stored in that pixel's value.

Maybe I explained it wrong; see the paper for the best explanation.
[/quote]

Well yeah, but that's not enough to implement the technique presented in Crassin's paper. For cone tracing to work you need to generate mipmap data for your voxels stored in an octree. To maintain any kind of performance, that octree structure should be held entirely in GPU memory in a linear layout and should be rebuilt by the GPU on each scene update, and you'll really need a compute shader solution to do this.
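The mipmapping step being discussed could be sketched like so, using a dense cubic grid rather than the paper's sparse octree, purely to show the 2x2x2 averaging (the real thing runs as compute shader passes over octree nodes):

```python
def downsample(grid):
    """Average 2x2x2 blocks of a dense cubic grid (side length must be even)."""
    half = len(grid) // 2
    out = [[[0.0] * half for _ in range(half)] for _ in range(half)]
    for x in range(half):
        for y in range(half):
            for z in range(half):
                s = 0.0
                for dx in range(2):
                    for dy in range(2):
                        for dz in range(2):
                            s += grid[2 * x + dx][2 * y + dy][2 * z + dz]
                out[x][y][z] = s / 8.0   # each parent is the mean of 8 children
    return out

def build_mip_chain(grid):
    """Full chain from the finest level down to a single voxel."""
    chain = [grid]
    while len(chain[-1]) > 1:
        chain.append(downsample(chain[-1]))
    return chain
```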
[quote]
Well yeah, but that's not enough to implement the technique presented in Crassin's paper. For cone tracing to work you need to generate mipmap data for your voxels stored in an octree. To maintain any kind of performance, that octree structure should be held entirely in GPU memory in a linear layout and should be rebuilt by the GPU on each scene update, and you'll really need a compute shader solution to do this.
[/quote]

This is exactly right, the key being it is an octree structure and that it is entirely generated/updated/accessed on the GPU.

You might be in luck. A book titled [b]OpenGL Insights[/b] just came out, and Cyril Crassin has a chapter in it entitled [i]Octree-Based Sparse Voxelization Using the GPU Hardware Rasterizer[/i] in which he explains the [b]Octree[/b] voxelization technique presented in the paper. It shows how to use the compute shader and all that. What's more, it's your lucky day because the website has a link to sample chapters you can download for free in PDF form, [b]and this chapter is one of them[/b].

See http://openglinsights.com/
That being said, I still wouldn't recommend implementing this paper if you are new to this field, although it would be interesting to implement just the octree part, as you'll learn a lot about how all these data structures fit together on the GPU. If you decide to do that, please let us know how it went :-)
Well yeah, actually I have just about the worst GPU ever.
This is my GPU: http://www.geforce.com/hardware/desktop-gpus/geforce-9500-gt/specifications
So I can't do anything in a compute shader.
I think it's also the way the paper is organized that can be confusing. This is how I've interpreted the cone-tracing section:

1. Capture direct illumination first (9.7.2) to store the incoming radiance in each leaf. This is done by placing a camera at the light's position and direction to create a light-view map (just like shadow mapping). I think each pixel location is transformed into a world-space position, and the index of the leaf corresponding to this position is derived. Two pieces of information, the direction distribution and the energy (which I think is color?) of that pixel, are then stored in the leaf.

2. Correct me if I've misinterpreted the paper, but I think the values are averaged at each level from the bottom leaves to the top of the octree (is the direction distribution also averaged?).

3. The actual cone tracing part is what I have the most trouble understanding. If the color values are already stored and averaged out across all nodes in the octree, wouldn't it just be a matter of projecting those colors onto the screen (or a screen texture) to obtain the indirect lighting?
[quote name='gboxentertainment' timestamp='1344947982' post='4969430']
3. The actual cone tracing part is what I have the most trouble understanding. If the color values are already stored and averaged out across all nodes in the octree, wouldn't it just be a matter of projecting those colors onto the screen (or a screen texture) to obtain the indirect lighting?
[/quote]
The octree only represents the scene with its direct illumination, along with different 3-dimensional mipmap levels of the scene. To obtain indirect lighting you still need to trace rays, or even better cones, through the scene as you would with path tracing. Cone tracing an SVO is just a feasible way to realise path tracing in a real-time application. Edited by CryZe
[quote name='CryZe' timestamp='1344948705' post='4969437']
you still need to trace rays, or even better cones, through the scene as you would with path tracing
[/quote]

So for each pixel on the screen, I would send out a cone with its apex starting from the pixel?
Or do the apexes of the cones start from a point on every surface?

Edit: Okay, I think I get it now. For every pixel on the screen, a number of cones are spawned from the surface corresponding to that pixel in world space. These are used to sample the pre-integrated information from the voxelized volumes intersecting the cones. The final gathering averages all the information collected by the cones, and the result is projected onto the pixel.

Am I correct? Edited by gboxentertainment
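That reading matches the final-gathering idea. Schematically, per shaded point it might look like this (the cone directions, cosine weighting, and the `trace_cone` callback are illustrative placeholders, not the paper's exact scheme):

```python
def gather_indirect(position, normal, cone_dirs, trace_cone):
    """Diffuse final gather: average a few wide cones over the hemisphere.

    `trace_cone(position, direction)` is assumed to return the radiance
    accumulated by one cone through the prefiltered voxel scene; results
    are cosine-weighted and normalized before being written to the pixel.
    """
    total, weight = 0.0, 0.0
    for d in cone_dirs:
        cos_theta = sum(n * c for n, c in zip(normal, d))
        if cos_theta <= 0.0:
            continue                  # cone points below the surface: skip
        total += cos_theta * trace_cone(position, d)
        weight += cos_theta
    return total / weight if weight > 0.0 else 0.0
```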
