[Theory] Unraveling the Unlimited Detail plausibility

It's a hacked-together piece of crud of an environment, and I don't see it getting much better. It just makes me want to use a truly unique world (like Atomontage), with the storage/scale problem, instead of this repetitive crap.

It's unlimited repetition, not unlimited detail.

[quote name='rouncer' timestamp='1313270323' post='4848746']
The reasons why this project is going to flop, guaranteed:

[1]* the models you see aren't unique, they are just duplications of the exact same objects

[2]* the models all have their OWN level of detail; the only way he gets 64 atoms a millimetre is by SCALING SOME OF THEM SMALLER, the rest have SHIT detail

[3]* he can't paint his world uniquely like what happens in MegaTexture

[4]* he can't perform CSG operations; all he can do is soup yet more and more disjointed models together

[5]* there's no way he could bake lighting at all, so the lighting all has to be dynamic and eat processing power

[6]* this has nothing to do with voxels; you could get a similar effect just by rastering together lots of displacement-mapped models!!!
[/quote]
1) Dell mentioned that they resorted to scanning objects in to get content for the video. Is this for saving memory via instancing? I personally can't tell. I mean, they could have loaded a Sponza model in to show things off.
2) Not sure what you mean. Some of the objects are polygon-modeled and some are scanned in, which utilize the full 64 atoms per cubic mm.
3) That's an assumption. Remember, most of this is just surface detail, meaning there is no data stored for the inside of the models. This brings us to your next complaint.
4) He mentioned that, which is why he said he would like to work together with Atomontage. However, that's not to say implementing CSG is impossible with their model format; they just said it's not their goal.
5) A lot of engines choose to do dynamic lighting via SSAO along with other techniques (like Crytek's radiosity). However, if they did bake the lighting into the models, people would flip this around on them and go "they can't do dynamic lighting, it's all baked in", so it's a catch-22 unless they can do both. (They didn't even say whether they could bake the lighting.)
6) Probably, but you need DX11 for that to run well. This technology is rendering the same detailed grooves a POM/QDM/Tessellation renderer would be, except it's running on the CPU. QDM would probably run at about the same performance on the CPU as these effects, but the others require some serious hardware support.

[quote name='Syranide' timestamp='1313274057' post='4848772']
[calculations]
[/quote]
lol, you did pretty much identical calculations to the ones I did a while back when I saw the video. Yeah, that's a pretty good approximation for the amount of data in a lossless format. Compression and streaming the data in are probably where their method will excel.
[quote name='rouncer' timestamp='1313281144' post='4848816']
It's a hacked-together piece of crud of an environment, and I don't see it getting much better. It just makes me want to use a truly unique world (like Atomontage), with the storage/scale problem, instead of this repetitive crap.

It's unlimited repetition, not unlimited detail.
[/quote]
You don't think their GPU implementation will be much better than a 15-20 fps CPU version? That's kind of pessimistic. I mean, the shading alone on the GPU will open up most every deferred/forward rendering post-processing effect; it's just a different way to populate the g-buffers. HDR alone would probably help, along with demoing specular objects. The reason for the repetition at the moment is mostly just speculation.

The problem I see with Atomontage is that his fallback for rendering voxels when he lacks the detail is to blur them. This ends up looking really bad even in his newer videos. The UD system, even when they went very close to objects, has a very nice interpolation.

[quote name='Sirisian' timestamp='1313284176' post='4848824']
[quote name='Syranide' timestamp='1313274057' post='4848772']
[calculations]
[/quote]
lol, you did pretty much identical calculations to the ones I did a while back when I saw the video. Yeah, that's a pretty good approximation for the amount of data in a lossless format. Compression and streaming the data in are probably where their method will excel.
[/quote]

The problem though is that they aren't showing any streaming, and streaming is probably a monstrous issue for UD. And they aren't showing any compression either, which is also a monstrous issue... GPU textures today are at 1:4 and 1:6 with terribly lossy compression. In fact, they aren't showing much at all, really, other than something rendering at ~20FPS... everything other than that is "been there, done that".

[quote name='Sirisian' timestamp='1313284176' post='4848824']
[quote name='rouncer' timestamp='1313281144' post='4848816']
It's a hacked-together piece of crud of an environment, and I don't see it getting much better. It just makes me want to use a truly unique world (like Atomontage), with the storage/scale problem, instead of this repetitive crap.

It's unlimited repetition, not unlimited detail.
[/quote]
You don't think their GPU implementation will be much better than a 15-20 fps CPU version? That's kind of pessimistic. I mean, the shading alone on the GPU will open up most every deferred/forward rendering post-processing effect; it's just a different way to populate the g-buffers. HDR alone would probably help, along with demoing specular objects. The reason for the repetition at the moment is mostly just speculation.

The problem I see with Atomontage is that his fallback for rendering voxels when he lacks the detail is to blur them. This ends up looking really bad even in his newer videos. The UD system, even when they went very close to objects, has a very nice interpolation.
[/quote]

Nvidia has their own GPU implementation; it runs at ~20FPS "sometimes" on modern hardware, with virtually no shading, because the "raytracing" consumes all the shader performance. And shading is the hugely expensive part in modern games.

[quote name='Sirisian' timestamp='1313284176' post='4848824']
5) A lot of engines choose to do dynamic lighting via SSAO along with other techniques (like Crytek's radiosity). However, if they did bake the lighting into the models, people would flip this around on them and go "they can't do dynamic lighting, it's all baked in", so it's a catch-22 unless they can do both. (They didn't even say whether they could bake the lighting.)
[/quote]
To quote Carl Sagan: "It pays to keep an open mind, but not so open your brains fall out."

First, SSAO is not lighting, it's just an approximation of ambient occlusion (the hint is in the name). AO itself is a means of approximating the local effects of global illumination (i.e. dark areas in creases). It just modifies the ambient term of the typical lighting equation. You still need diffuse, specular, and shadows (which Crytek's engine renders as well).
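To make the term-by-term split concrete, here is a minimal C++ sketch (all names hypothetical, not any particular engine's code) of a typical lighting equation: the AO factor scales only the ambient term, while a shadow factor masks the direct diffuse and specular terms.
[code]
#include <cstdio>

struct Color { float r, g, b; };

// Minimal sketch: AO attenuates ambient only; shadowing masks direct light.
// Diffuse and specular still have to be computed and shadowed separately.
Color shade(Color albedo, Color ambient, Color diffuse, Color specular,
            float ao,      // occlusion in [0,1], e.g. sampled from an SSAO buffer
            float shadow)  // visibility in [0,1], e.g. from a shadow map
{
    Color out;
    out.r = ambient.r * ao * albedo.r + shadow * (diffuse.r * albedo.r + specular.r);
    out.g = ambient.g * ao * albedo.g + shadow * (diffuse.g * albedo.g + specular.g);
    out.b = ambient.b * ao * albedo.b + shadow * (diffuse.b * albedo.b + specular.b);
    return out;
}

int main() {
    // A fully occluded crease (ao = 0) still receives direct light if unshadowed.
    Color c = shade({0.8f, 0.8f, 0.8f}, {0.2f, 0.2f, 0.2f},
                    {0.7f, 0.7f, 0.7f}, {0.1f, 0.1f, 0.1f}, 0.0f, 1.0f);
    std::printf("%.2f %.2f %.2f\n", c.r, c.g, c.b);
    return 0;
}
[/code]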

Second, baking in lighting does not preclude dynamic lighting - most games use a combination of techniques. In order for this technique to be competitive with modern rendering, it also needs to support both. But they only support baked-in lighting, and only a [i]severely limited[/i] form at that (because it prevents instances from being placed in arbitrary lighting conditions).

If they support good baked-in lighting, why didn't they use it? If they support dynamic lighting, why don't they show it? Extraordinary claims require extraordinary evidence, not promises.


[quote]
[quote name='rouncer' timestamp='1313281144' post='4848816']
It's a hacked-together piece of crud of an environment, and I don't see it getting much better. It just makes me want to use a truly unique world (like Atomontage), with the storage/scale problem, instead of this repetitive crap.

It's unlimited repetition, not unlimited detail.
[/quote]
You don't think their GPU implementation will be much better than a 15-20 fps CPU version? That's kind of pessimistic. I mean, the shading alone on the GPU will open up most every deferred/forward rendering post-processing effect; it's just a different way to populate the g-buffers. HDR alone would probably help, along with demoing specular objects. The reason for the repetition at the moment is mostly just speculation.
[/quote]
If you take their current rendering, but just apply post-effects to it, it will probably look better. But that will still leave it miles behind a modern game engine, because lighting is the most important tool available for simulating realism. But this falls back into the category of hollow promises. We can only evaluate the tech on what they've shown us, and they certainly haven't shown us any of that.


[quote]
The problem I see with Atomontage is that his fallback for rendering voxels when he lacks the detail is to blur them. This ends up looking really bad even in his newer videos. The UD system, even when they went very close to objects, has a very nice interpolation.
[/quote]
And that just demonstrates the true limitation of any rendering system: everything can be arbitrarily unique, or the world can be arbitrarily huge - you can't have both. Atomontage takes the former approach, which allows it to support nice baked-in lighting and modifiable terrain. UD is the latter, so it has vast amounts of repetitious content. There is of course an entire spectrum in-between, which is where modern games lie.

And it's a bit unfair to compare the two engines as they're doing very different things, but if you insist: Atomontage only looks good when viewed from a distance - UD doesn't look good at any distance. Atomontage supports completely dynamic worlds - UD supports completely static worlds. I personally don't have a problem picking a winner from those two.

Remember, the guy at the start of this thread initially asked "how do we compress all this unique voxel data?" with UD in mind. None of it is unique, that's how.
I'm sick of making such a big deal out of it though. If they wind up making a game with it, good for them (it will be pretty kooky), but I honestly would prefer to play an Atomontage game, and carve up some delicious-looking unique mountains. :)

[quote name='zoborg' timestamp='1313272380' post='4848754']Where do you think baked-in shadows come from? They have to be rendered sometime, and any offline shadow baking performed can be subject to similar quality issues.[/quote]Yeah, it's entirely possible that they're still baked into the static data using shadow-mapping during the baking process, which would be disappointing because the demonstrated shadow technique is cutting a lot of corners as you'd do in a bare-bones real-time version.

However, if they were storing voxel colours that are pre-multiplied with shadow values, then it would severely complicate the instancing. For example, the rock asset is sometimes instanced underneath a tree, and sometimes instanced in direct sunlight. Every ([i]or many[/i]) unique instance would need to store unique shadow values. If these shadow values were pre-baked into the colour data, then suddenly all of these instances have become unique assets... which if it's true, dispels a lot of the criticisms about the data-set actually being quite small due to instancing, right?
[quote]Well, when you're ray-casting you don't need to explicitly implement a clipping plane to get that effect. You'd get that effect if you projected each ray from the near plane instead of the eye.[/quote]Yeah, but you don't [i]need[/i] a near plane. So either they're using a technique that doesn't [i]need[/i] a near-plane, but decided to use one anyway, [b]or[/b] their "ray-casting" technique actually [i]requires[/i] a near plane for some reason.
[quote]In their demo, a single pixel could contain ground, thousands of clumps of grass, dozens of trees, and even a few spare elephants. How do you approximate a light value for that that's good enough?[/quote]That's assuming that every visible voxel inside the bounds of the pixel is retrieved and averaged, as is common in voxel renderers?
In some other videos, he actually says that their "search algorithm" only returns a single 'atom', implying they're not using this kind of anti-aliasing technique.
i.e. similar to non-anti-aliased rasterization, where each pixel can only end up holding data from a single triangle -- the chosen triangle can then return averaged surface data via mip-mapped textures.

I'm assuming their tech works a similar way, where only a single (hierarchical) voxel from a particular instance is selected, and only data down the hierarchy from the chosen point is averaged. It only has to be 'good enough' to not shimmer excessively under movement ([i]it actually does shimmer a bit[/i]) and to look ok after having been blurred in screen space ([i]which they seem to be doing in the distance[/i]).
[quote]We do approximations all the time in games, but we do that by throwing away perceptually unimportant details. The direction of a surface with respect to the light is something that can be approximated (e.g. normal-maps), but not if the surface is a chaotic mess. At best, your choice of normal would be arbitrary (say, up). But if they did that, you'd see noticeable lighting changes as the LoD reduces, whereas in the demo it's a continuous blend.[/quote]No, we do exactly this in games with a continuous blend. We bake highly chaotic normals into a normal map and use it on a low-poly model. Usually, as you go down the mip-chain and average the normals together, they'll converge towards 'up', but not always. Due to trilinear filtering it's a continuous blend through the mip levels.
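As a minimal sketch of that mip-chain behaviour (hypothetical code, not any particular baking tool): each mip level averages 2x2 normals from the level below and renormalizes, so chaotic normals gradually converge towards the mean direction, and trilinear filtering then blends continuously between levels at runtime.
[code]
#include <cmath>
#include <vector>

struct N { float x, y, z; };

static N normalize(N n) {
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return len > 0 ? N{n.x/len, n.y/len, n.z/len} : N{0, 0, 1};
}

// Build the next mip level of a size*size normal map (size is a power of two).
std::vector<N> downsample(const std::vector<N>& src, int size) {
    int half = size / 2;
    std::vector<N> dst(half * half);
    for (int y = 0; y < half; ++y)
        for (int x = 0; x < half; ++x) {
            const N& a = src[(2*y)   * size + 2*x];
            const N& b = src[(2*y)   * size + 2*x + 1];
            const N& c = src[(2*y+1) * size + 2*x];
            const N& d = src[(2*y+1) * size + 2*x + 1];
            // Average the four child normals, then renormalize.
            dst[y * half + x] = normalize(
                N{a.x+b.x+c.x+d.x, a.y+b.y+c.y+d.y, a.z+b.z+c.z+d.z});
        }
    return dst;
}
[/code]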

The bit that interests me most in this is the "search algorithm".

Can anyone explain to me how they think this works?

Given a pixel p, what goes on to derive its colour?
(And please bear in mind that I am not a very clever chap.)

If they are not ray tracing, how the fudge do they know/work out what objects (of the potential thousands - instanced or not) are "behind" that pixel, and of those, which is closest, and then, of that one, which "atom"??

Okay, obviously there must be some kind of hierarchy going on, and data structures to speed it up, but it still seems like a mammoth task. Take a stone from the floor... there must be a fudge-load of them! It still seems like a hell of a lot of work to do per pixel, and I think there is credit due there for how fast it is.

[quote name='Sirisian' timestamp='1313284176' post='4848824']
<clip>

You don't think their GPU implementation will be much better than a 15-20 fps CPU version? That's kind of pessimistic. I mean, the shading alone on the GPU will open up most every deferred/forward rendering post-processing effect; it's just a different way to populate the g-buffers. HDR alone would probably help, along with demoing specular objects. The reason for the repetition at the moment is mostly just speculation.

The problem I see with Atomontage is that his fallback for rendering voxels when he lacks the detail is to blur them. This ends up looking really bad even in his newer videos. The UD system, even when they went very close to objects, has a very nice interpolation.
[/quote]

Since many people pointed out the lack of lighting in the current videos, the things you mention will make the demo look nicer. And since the searching will be CPU-bound and the steps you mention will be done on the GPU, they might be 'free'. I don't think that will make the demo faster, just nicer.

[quote name='Chargh' timestamp='1313330535' post='4848961']
... I don't think that will make the demo faster, just nicer.
[/quote]
Mr Dell says in his video that they already have faster versions working that utilize the GPU, so it should be nicer and faster.

[quote name='bwhiting' timestamp='1313331719' post='4848972']
[quote name='Chargh' timestamp='1313330535' post='4848961']
... I don't think that will make the demo faster, just nicer.
[/quote]
Mr Dell says in his video that they already have faster versions working that utilize the GPU, so it should be nicer and faster.

[/quote]

And funnily enough, so does nVidia: by actual researchers, supported by actual public research, without bullshit claims. And they've scaled down the quality to be able to store the entire inside of the church in memory, and made a bunch of optimizations to improve the quality... What FPS do they get on modern hardware? ~25FPS in SOME scenes, and again, without any significant shading, which would ruin the FPS. They have published two articles too; you guys should read them.

[url="http://www.youtube.com/watch?v=Mi-mNGz0YMk"]http://www.youtube.c...h?v=Mi-mNGz0YMk[/url]
[url="http://research.nvidia.com/publication/efficient-sparse-voxel-octrees"]http://research.nvid...e-voxel-octrees[/url]
[url="http://www.nvidia.com/object/nvidia_research_pub_018.html"]http://www.nvidia.co...ch_pub_018.html[/url]

UD are in my opinion already certified bullshitters™, have claimed whatever they need to save their ass, and have shown absolutely nothing to prove what they claim is even realistic or at all possible. What UD has shown today is nothing really technically impressive; from what we can tell it's pretty much a straight-up implementation of SVOs, optimized to be faster than naive implementations, possibly with interpolation. That is all really. Or am I missing something?

And I still don't get why people are still defending this technology in its current state, when the biggest non-instanced scene shown to date is the inside of a small low-detail church, and it still consumes 2.7GB of memory.

PS. If they have a working demo for the GPU, then they should show it, or we can just pin this to the list of bullshit claims without proof. nVidia published their GPU implementation 1.5 years ago... meaning, even if UD have it running on the GPU and show it, unless they show something faster or better than nVidia, we can just assume they straight-up copied nVidia or are a bunch of amateurs.

To be blunt, they even claim their demo is only running on 1 core; why on earth would they do that? Scaling it up to a large number of cores is TRIVIAL and should have been easy to implement in a single day. So either they are hiding behind that statement to hide critical issues (like memory performance) or they really are a bunch of amateurs... ?

[quote name='bwhiting' timestamp='1313138141' post='4848136']
Here is my take on this thing, having also now watched the interview...

1. Forget the "unlimited" bit... nothing in the universe is, so just see it as "AWESOME AMOUNTS OF" instead, which is what he means, methinks. So don't waste your energy on that; we all know it's not actually unlimited. That is, if you are taking the word unlimited to mean infinite... but the two are different: unlimited could be the same as when another engine says it supports an unlimited number of lights, which is true... the engine supports it, your machine might just not be able to handle it (not a limit imposed by the engine but by the user's computer).
Either way, I wouldn't get hung up on it.

2. He is the guy who came up with the technology, and he was a hobby programmer; this could explain how he gets some terms wrong (level of distance??!) and why he may seem quite condescending... if he has no background in traditional graphics then that would make sense. His lack of knowledge of current methodologies is, I think, what led to him going about it the way he has.

3. I am more and more thinking that this will lead somewhere and may indeed be the future of graphics (the guy who interviewed him was blown away), and from the sounds of it it's only going to get better and faster.

4. It still "boggles my mind"!!!

5. - 10. not included as I should really be working

:)
[/quote]

That boggles my mind also, so I did some research over the internet about their algorithm. I didn't find much, but this post is quite interesting:

[url="http://www.somedude.net/gamemonkey/forum/viewtopic.php?f=12&t=419"]http://www.somedude.....php?f=12&t=419[/url]

To quote the post:

[quote][i]
I'd like to mention Unlimited Detail, a technology developed by Bruce Dell, which does in fact do exactly what he claims it does... render incredibly detailed 3D scenes at interactive frame rates... without 3D hardware acceleration. It accomplishes this using a novel traversal of an octree-style data structure. The effect is perfect occlusion without retracing tree nodes. The result is tremendous efficiency.

I have seen the system in action and I have seen the C++ source code of his inner loop. What is more impressive is that the core algorithm does not need complex math instructions like square root or trig; in fact it does not use floating point instructions or do multiplies and divides![/i][/quote]

So it seems they are relying on some "octree-like" data structure (as many supposed). What boggles me the most is the fact that their algorithm isn't using multiplies or divides or any other floating point instructions (as they say). Is there a way to traverse an octree (doing tree-node intersection tests) with only simple instructions? I don't see how. (I only know raycasting, and it seems difficult to me to do this without divides; I know that other ways to render an octree exist, but I do not know how they work.)

It probably doesn't use ANY arithmetic instructions. It's probably a brand new, revolutionary algorithm that uses Hope and Wish instructions.

[quote]it does not use floating point instructions or do multiplies and divides![/quote]

Integer arithmetic with shift operators?

[quote]it does not use floating point instructions or do multiplies and divides![/quote]

Power of two integer arithmetic with shift operators?

[quote name='GFalcon' timestamp='1313510691' post='4849895']
That boggles my mind also, so I did some research over the internet about their algorithm. I didn't find much, but this post is quite interesting:

[url="http://www.somedude.net/gamemonkey/forum/viewtopic.php?f=12&t=419"]http://www.somedude.....php?f=12&t=419[/url]

To quote the post:

[quote][i]
I'd like to mention Unlimited Detail, a technology developed by Bruce Dell, which does in fact do exactly what he claims it does... render incredibly detailed 3D scenes at interactive frame rates... without 3D hardware acceleration. It accomplishes this using a novel traversal of an octree-style data structure. The effect is perfect occlusion without retracing tree nodes. The result is tremendous efficiency.

I have seen the system in action and I have seen the C++ source code of his inner loop. What is more impressive is that the core algorithm does not need complex math instructions like square root or trig; in fact it does not use floating point instructions or do multiplies and divides![/i][/quote]
[/quote]
Interesting. There's an algorithm like that which is fairly common; I mentioned it before in this thread: the optimal frustum-culling algorithm for octrees. It uses a look-up table for each level of an octree for the accept/reject corner tests. The plus is that it relies on very simple addition and can exploit SSE insanely well. Few people implement it, I believe, since it can be difficult to understand at first. However, I'm not sure how that would help in this problem. I mean, I've always written off using it because it must be done per SVO object, and it's not a cheap operation even with its many optimizations (there are ways, as you traverse down, to keep a list of the still-valid frustum planes such that you only use one frustum plane for most of the traversal accept/reject tests). However, it still requires this stupid sorting step, which seems like a hurdle that can't be solved. Hmm, this is making me want to write a test algorithm to see if maybe I missed something, where maybe things optimize even further than I'd imagined. :unsure:
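For anyone unfamiliar with the accept/reject corner test, here's a minimal sketch of the common p-vertex/n-vertex variant (the LUT-per-level version described above builds on the same idea; all names here are hypothetical):
[code]
struct Plane { float nx, ny, nz, d; };  // nx*x + ny*y + nz*z + d >= 0 is "inside"
struct AABB  { float min[3], max[3]; };

// 0 = outside, 1 = intersecting, 2 = fully inside all planes.
int classify(const AABB& box, const Plane* planes, int planeCount)
{
    int result = 2;
    for (int i = 0; i < planeCount; ++i) {
        const Plane& p = planes[i];
        // Select the p-vertex: the corner maximizing the dot product with the normal.
        float px = (p.nx >= 0) ? box.max[0] : box.min[0];
        float py = (p.ny >= 0) ? box.max[1] : box.min[1];
        float pz = (p.nz >= 0) ? box.max[2] : box.min[2];
        if (p.nx*px + p.ny*py + p.nz*pz + p.d < 0)
            return 0;                    // p-vertex outside: reject the whole box
        // Select the n-vertex: the opposite corner.
        float qx = (p.nx >= 0) ? box.min[0] : box.max[0];
        float qy = (p.ny >= 0) ? box.min[1] : box.max[1];
        float qz = (p.nz >= 0) ? box.min[2] : box.max[2];
        if (p.nx*qx + p.ny*qy + p.nz*qz + p.d < 0)
            result = 1;                  // box straddles this plane
        // A box fully inside a plane lets its children skip that plane,
        // which is the "shrinking plane list" optimization mentioned above.
    }
    return result;
}
[/code]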

[quote name='GFalcon' timestamp='1313510691' post='4849895']
[quote][i]
I'd like to mention Unlimited Detail, a technology developed by Bruce Dell, which does in fact do exactly what he claims it does... render incredibly detailed 3D scenes at interactive frame rates... without 3D hardware acceleration. It accomplishes this using a novel traversal of an octree-style data structure. The effect is perfect occlusion without retracing tree nodes. The result is tremendous efficiency.

I have seen the system in action and I have seen the C++ source code of his inner loop. What is more impressive is that the core algorithm does not need complex math instructions like square root or trig; in fact it does not use floating point instructions or do multiplies and divides![/i][/quote]

So it seems they are relying on some "octree-like" data structure (as many supposed). What boggles me the most is the fact that their algorithm isn't using multiplies or divides or any other floating point instructions (as they say). Is there a way to traverse an octree (doing tree-node intersection tests) with only simple instructions? I don't see how. (I only know raycasting, and it seems difficult to me to do this without divides; I know that other ways to render an octree exist, but I do not know how they work.)
[/quote]

I'm not intensely familiar with SVOs, but really, whoever you quoted above has no grasp of reality, it seems. Not using multiplication and division does not make it impressive by itself. Also note: the core algorithm. Meaning, going along a ray and traversing an octree. Going along a ray can be done using a variation of [url="http://en.wikipedia.org/wiki/Bresenham"]http://en.wikipedia.org/wiki/Bresenham's_line_algorithm[/url] ... and holy shit! It doesn't use division and multiplication other than for precomputing some values! Bresenham's line algorithm sure is modern-day rocket science, it seems.

So, let's step back and look at the problem... we have a RAY and an OCTREE, and our intention is to find the first node in the octree the ray hits, to get the pixel color... so:

1. Step along the ray using some algorithm (Bresenham's line algorithm, perhaps?)
2. For each step, check the corresponding octree node; if empty go to 1, exit if solid
3. Recurse one level down into the octree, go to 1

A bit simplistic, yes, but unless I'm missing something, that is the "core algorithm"... and no, I don't see any unicorn in there.

Just to be clear though, this is one way of implementing it; there are likely a lot better ways, but it wouldn't surprise me if this is what they actually use, just a bit optimized.
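To show there's no unicorn, here's a rough integer-only sketch of steps 1-2 (my own guess, not UD's actual code), marching a flat voxel grid Bresenham-style; a real SVO version would recurse through tree levels instead, but the inner loop can stay additions and comparisons:
[code]
#include <cstdlib>

const int N = 64;            // grid resolution (power of two)
static bool grid[N][N][N];   // occupancy; filling it in is omitted here

// March from (x0,y0,z0) towards (x1,y1,z1); returns true and the hit cell.
bool march(int x0, int y0, int z0, int x1, int y1, int z1,
           int& hx, int& hy, int& hz)
{
    int dx = abs(x1 - x0), dy = abs(y1 - y0), dz = abs(z1 - z0);
    int sx = x1 > x0 ? 1 : -1, sy = y1 > y0 ? 1 : -1, sz = z1 > z0 ? 1 : -1;
    int dm = dx > dy ? (dx > dz ? dx : dz) : (dy > dz ? dy : dz);
    int ex = dm / 2, ey = dm / 2, ez = dm / 2;  // error terms (computed once)
    for (int i = 0; i <= dm; ++i) {
        if ((unsigned)x0 >= N || (unsigned)y0 >= N || (unsigned)z0 >= N)
            break;                               // left the grid: no hit
        if (grid[x0][y0][z0]) { hx = x0; hy = y0; hz = z0; return true; }
        // Only additions, subtractions and comparisons per step:
        ex -= dx; if (ex < 0) { ex += dm; x0 += sx; }
        ey -= dy; if (ey < 0) { ey += dm; y0 += sy; }
        ez -= dz; if (ez < 0) { ez += dm; z0 += sz; }
    }
    return false;
}
[/code]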

[quote name='Syranide' timestamp='1313679140' post='4850789']
1. Step along the ray using some algorithm (Bresenham's line algorithm, perhaps?)
[/quote]

I am not a C or C++ programmer and have little grasp on exactly how fast it is, other than knowing it ain't exactly slow.

How many "steps" do you think they could implement per pixel?
10? 100? 1000?

I have no idea. And what do you think is the maximum that could be achieved while still hitting something like 30fps?

:)

[quote name='bwhiting' timestamp='1313680191' post='4850793']
[quote name='Syranide' timestamp='1313679140' post='4850789']
1. Step along the ray using some algorithm (Bresenham's line algorithm, perhaps?)
[/quote]

I am not a C or C++ programmer and have little grasp on exactly how fast it is, other than knowing it ain't exactly slow.

How many "steps" do you think they could implement per pixel?
10? 100? 1000?

I have no idea. And what do you think is the maximum that could be achieved while still hitting something like 30fps?

:)
[/quote]

Bresenham's algorithm is a line-tracing algorithm that only uses integers, and it's really fast... there are a bunch of others too, for different purposes, that might be more suitable. But really, it can't be much more than a few instructions per step. And the interesting thing is that, with some optimizations, it seems as if you shouldn't even need to recompute the starting values when you go down the octree, but rather just bitshift some of the values (*2 and /2).
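Purely as an illustration of that *2 and /2 remark (hypothetical, not verified against any real traversal): if the stepping state is expressed in units of cells at the current octree level, descending one level just doubles the deltas and error terms, since each cell splits into cells half the size, and doubling is a shift:
[code]
// Hypothetical sketch: refine Bresenham-style state for the child level
// with shifts only, instead of recomputing it from the ray endpoints.
struct StepState {
    int dx, dy, dz;   // per-axis deltas
    int ex, ey, ez;   // error terms
};

StepState descendOneLevel(StepState s) {
    s.dx <<= 1; s.dy <<= 1; s.dz <<= 1;   // same as *2
    s.ex <<= 1; s.ey <<= 1; s.ez <<= 1;
    return s;
}
[/code]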

Perhaps it could be a variation of the good old Marching Cubes algorithm, combined with some kind of octree traversal.

[quote]
I'm not intensely familiar with SVOs, but really, whoever you quoted above has no grasp of reality, it seems. Not using multiplication and division does not make it impressive by itself. Also note: the core algorithm. Meaning, going along a ray and traversing an octree. Going along a ray can be done using a variation of [url="http://en.wikipedia.org/wiki/Bresenham"]http://en.wikipedia.org/wiki/Bresenham's_line_algorithm[/url] ... and holy shit! It doesn't use division and multiplication other than for precomputing some values! Bresenham's line algorithm sure is modern-day rocket science, it seems.

So, let's step back and look at the problem... we have a RAY and an OCTREE, and our intention is to find the first node in the octree the ray hits, to get the pixel color... so:

1. Step along the ray using some algorithm (Bresenham's line algorithm, perhaps?)
2. For each step, check the corresponding octree node; if empty go to 1, exit if solid
3. Recurse one level down into the octree, go to 1

A bit simplistic, yes, but unless I'm missing something, that is the "core algorithm"... and no, I don't see any unicorn in there.

Just to be clear though, this is one way of implementing it; there are likely a lot better ways, but it wouldn't surprise me if this is what they actually use, just a bit optimized.
[/quote]

I also thought about Bresenham's algorithm applied to this yesterday, but it might need a lot of (small) steps along the ray to check the octree nodes... but why not.
For sure, if their "core algorithm" is done this way there is no unicorn here; I agree on that :)
Well, even if they say they are not using ray casting, I begin to think that in fact they are. Maybe they just call it something different because they are not doing it the usual way.

There is another interesting reveal at 2:18 in the following link that I haven't seen quoted before:

[url="http://www.youtube.com/watch?v=_5hg9VfbyYg&feature=related"]http://www.youtube.c...feature=related[/url]

A short demo scene that contains a simple implementation of shadowing, hybrid rendering and arbitrary rotations on point cloud objects. This, I believe, hints at many of the features that people were concerned were missing from the technology demo.

[list]
[*]The shadowing is very simple, with the appearance of a low-resolution shadow map, but the basics are there.
[*]There is a mix of polygon objects and voxel objects in the scene. This hybrid rendering always seemed like the best solution for animation to me, just like Doom mixed sprites and polygons (1993, what a year that was!). Characters could be high-resolution, skeleton-animated poly models rendered on the graphics card and mixed in with the Z-buffer (see the sketch below).
[*]The tyre is apparently a point cloud object that is being rotated; assuming they are rendered in the same pass from the same camera angle, that would represent an arbitrary rotation.
[/list]
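As a minimal sketch of that Z-buffer mixing (hypothetical code, not what UD actually does): render the voxel layer and the polygon layer separately, then composite per pixel by depth, the same way Doom composited sprites against walls.
[code]
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: per-pixel depth compositing of a voxel-rendered
// layer and a polygon-rendered layer; whichever fragment is nearer wins.
struct Layer {
    std::vector<uint32_t> color;  // packed RGBA per pixel
    std::vector<float>    depth;  // view-space depth per pixel
};

void compositeByDepth(Layer& dst, const Layer& voxels, const Layer& polys)
{
    for (std::size_t i = 0; i < dst.color.size(); ++i) {
        bool voxelNearer = voxels.depth[i] < polys.depth[i];
        dst.color[i] = voxelNearer ? voxels.color[i] : polys.color[i];
        dst.depth[i] = voxelNearer ? voxels.depth[i] : polys.depth[i];
    }
}
[/code]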
There is apparently a second podcast with a further interview on memory use and animation, so I subscribed, as the only way to assess the feasibility of this is to carefully dissect every crumb of cookie that we get.

I didn't want to interrupt the more interesting discussion on the integer traversal of the octree data structure, but it seems to have petered out just when it was getting interesting. I'll do a spot more study before I post on that though.

[quote name='Frank Dodd' timestamp='1314484213' post='4854568']
There is another interesting reveal at 2:18 in the following link that I haven't seen quoted before:

[url="http://www.youtube.com/watch?v=_5hg9VfbyYg&feature=related"]http://www.youtube.c...feature=related[/url]

A short demo scene that contains a simple implementation of shadowing, hybrid rendering and arbitrary rotations on point cloud objects. This, I believe, hints at many of the features that people were concerned were missing from the technology demo.
[list]
[*]The shadowing is very simple, with the appearance of a low-resolution shadow map, but the basics are there.
[*]There is a mix of polygon objects and voxel objects in the scene. This hybrid rendering always seemed like the best solution for animation to me, just like Doom mixed sprites and polygons (1993, what a year that was!). Characters could be high-resolution, skeleton-animated poly models rendered on the graphics card and mixed in with the Z-buffer.
[*]The tyre is apparently a point cloud object that is being rotated; assuming they are rendered in the same pass from the same camera angle, that would represent an arbitrary rotation.
[/list]
There is apparently a second podcast with a further interview on memory use and animation, so I subscribed, as the only way to assess the feasibility of this is to carefully dissect every crumb of cookie that we get.

I didn't want to interrupt the more interesting discussion on the integer traversal of the octree data structure, but it seems to have petered out just when it was getting interesting. I'll do a spot more study before I post on that though.
[/quote]

1. Shadowing is not hard to do with SVOs; you can even have "perfect" shadows if you like... the problem is that it is very expensive.
2. Hybrid is also an obvious thing to do... but I'm not so sure that it is a good idea at all:
- The main draw of SVOs is that performance is primarily determined by the number of pixels, not geometry complexity, while polygon performance is primarily determined by geometry complexity... mixing both would mean you suffer the drawbacks of both to some extent, which isn't ideal. And you may end up with hugely unpredictable performance as their individual coverage of the screen varies.
- SVOs and polygons are likely to have their own unique look; mixing the two seamlessly can be a truly daunting issue.
3. Arbitrary rotation is not hard to do with SVOs, but instancing of arbitrarily rotated, scaled, morphed and positioned objects is likely to add significant cost... something which UD doesn't currently show.

Please note, SVOs can in theory do pretty much everything triangles can and more; nobody is really rejecting that as far as I know. A primary problem is performance: the demo they showed last time ran at 20FPS @ 1024x768 on a modern computer, without shading or any modern techniques at all. Now let's scale that to the common resolution of 1920x1080, which has roughly 2.6x the pixels (2,073,600 vs 786,432); that would mean you now have 8FPS at best, and we are still not seeing any shadows, shading, rotation, lighting, heavy instancing, animation, etc. And let's not forget the ever-present enormous memory issue.

Overall, I'd like to think that UD/SVO is highly overrated... I'm not going to diss the Atomontage engine, it seems nice... but I find both their "visions" all too familiar to my own developer fantasies: to find the perfect solution to every problem, and that somehow the best solution would be the most generic possible solution you could ever think of. It's really hard to explain it in practice... but to give you a picture, the answer to "how much does it hurt to get punched in the face?" is not to look up theories for subatomic particles, how they interact, their weight, how energy is transferred, what material it is, etc... no, it's simply "pretty damn much, but it depends on how hard he hits you". That is, don't break a problem into the smallest possible components; keep it high-level and approximate. And I feel confident that the same is true here: breaking down the problem into the smallest possible pieces (voxels) means you lose the ability to make optimizations, assumptions and clever tricks... you even, to some degree, lose the ability to have smooth surfaces. There are no longer triangles, nor surfaces, nor shapes, nor materials... it's all just individual voxels.

[quote name='Syranide' timestamp='1314550075' post='4854758']
- SVOs and polygons are likely to have their own unique look; mixing the two seamlessly can be a truly daunting issue.
3. Arbitrary rotation is not hard to do with SVOs, but instancing of arbitrarily rotated, scaled, morphed and positioned objects is likely to add significant cost... something which UD doesn't currently show.[/quote]
In my tests I just have a rotation matrix for the OBB of the object, to get things into an AABB perspective. Assuming naive 3D DDA, you just rasterize the box to the screen using the optimal rasterization algorithm, storing the screen-to-OBB-surface ray (that ray's magnitude is the depth from the screen to the surface of the OBB). Then transform the ray, along with the surface point, by the inverse rotation matrix. Then it's just a normal traversal of the SVO data, since in that space nothing is rotated.

Using the frustum rendering method it's even easier. For each OBB you just apply the inverse rotation matrix to the frustum planes around the object; you're now looking at the object in its AABB state and can perform the culling and rendering. Still costly.
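A minimal sketch of that inverse-rotation step (hypothetical names, simplified from what I described): bring the world-space ray into the object's local space, where the OBB becomes an AABB and the SVO traversal needs no rotation at all.
[code]
// Hypothetical sketch: un-rotate a world-space ray into OBB-local space.
struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };  // row-major rotation matrix

static Vec3 mul(const Mat3& a, const Vec3& v) {
    return { a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z,
             a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z,
             a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z };
}

// For a pure rotation the inverse is just the transpose.
static Mat3 transpose(const Mat3& a) {
    Mat3 t;
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            t.m[r][c] = a.m[c][r];
    return t;
}

// Transform the ray into the object's local (axis-aligned) space; after
// this, the traversal sees an ordinary AABB-aligned SVO.
void worldRayToLocal(const Mat3& objRotation, Vec3 objCenter,
                     Vec3 rayOrigin, Vec3 rayDir,
                     Vec3& localOrigin, Vec3& localDir)
{
    Mat3 inv = transpose(objRotation);
    Vec3 rel = { rayOrigin.x - objCenter.x,
                 rayOrigin.y - objCenter.y,
                 rayOrigin.z - objCenter.z };
    localOrigin = mul(inv, rel);     // translate, then un-rotate the origin
    localDir    = mul(inv, rayDir);  // directions only need the un-rotation
}
[/code]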

