

[Theory] Unraveling the Unlimited Detail plausibility



#141 Syranide   Members   -  Reputation: 375


Posted 18 August 2011 - 08:52 AM


I'd like to mention Unlimited Detail, a technology developed by Bruce Dell, which does in fact do exactly what he claims it does... render incredibly detailed 3D scenes at interactive frame rates... without 3D hardware acceleration. It accomplishes this using a novel traversal of an octree-style data structure. The effect is perfect occlusion without retracing tree nodes. The result is tremendous efficiency.

I have seen the system in action and I have seen the C++ source code of his inner loop. What is more impressive is that the core algorithm does not need complex math instructions like square root or trig; in fact, it does not use floating-point instructions or do multiplies and divides!


So it seems they are relying on some "octree-like" data structure (as many supposed). What boggles me the most is the claim that their algorithm isn't using multiplies or divides or any other floating-point instructions. Is there a way to traverse an octree (doing tree-node intersection tests) with only simple instructions? I don't see how (I only know raycasting, and it seems difficult to me to do that without divides; I know that other ways to render an octree exist, but I do not know how they work).


I'm not intensely familiar with SVOs, but really, whoever you quoted above seems to have no grasp of reality. Not using multiplication and division does not make it impressive by itself; also note, the core algorithm. Meaning, going along a ray and traversing an octree. Going along a ray can be done using a variation of Bresenham's line algorithm (http://en.wikipedia.org/wiki/Bresenham's_line_algorithm)... and holy shit! It doesn't use division or multiplication other than for precomputing some values! Bresenham's line algorithm sure is modern-day rocket science, it seems.

So, let's step back and look at the problem... we have a RAY and an OCTREE, and our intention is to find the first node in the octree the ray hits, to get the pixel color... so:

1. Step along the ray using some algorithm (Bresenham's line algorithm, perhaps?)
2. For each step, check the corresponding octree node; if empty go to 1, exit if solid
3. Recurse one level down into the octree, go to 1

A bit simplistic, yes, but unless I'm missing something, that is the "core algorithm"... and no, I don't see any unicorn in there.

Just to be clear though, this is one way of implementing it; there are likely a lot better ways, but it wouldn't surprise me if this is what they actually use, just a bit optimized.
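
To make it concrete, here's a rough sketch of such an integer-only walk in C++ (not Euclideon's code, obviously; isSolid and the fixed endpoints are stand-ins I made up for illustration):

#include <cstdlib>

// Hypothetical occupancy query against the octree at some fixed level.
bool isSolid(int x, int y, int z);

// Integer-only 3D Bresenham walk from (x0,y0,z0) toward (x1,y1,z1).
// The only division is the precomputed dm/2; the loop body itself is
// nothing but adds, subtracts and compares.
bool traceRay(int x0, int y0, int z0, int x1, int y1, int z1,
              int& hx, int& hy, int& hz)
{
    int dx = std::abs(x1 - x0), dy = std::abs(y1 - y0), dz = std::abs(z1 - z0);
    int sx = x1 > x0 ? 1 : -1, sy = y1 > y0 ? 1 : -1, sz = z1 > z0 ? 1 : -1;
    int dm = dx > dy ? (dx > dz ? dx : dz) : (dy > dz ? dy : dz); // dominant axis
    int ex = dm / 2, ey = dm / 2, ez = dm / 2;                    // error terms

    for (int i = 0; i <= dm; ++i) {
        if (isSolid(x0, y0, z0)) { hx = x0; hy = y0; hz = z0; return true; }
        ex -= dx; if (ex < 0) { ex += dm; x0 += sx; }
        ey -= dy; if (ey < 0) { ey += dm; y0 += sy; }
        ez -= dz; if (ez < 0) { ez += dm; z0 += sz; }
    }
    return false; // walked the whole segment without hitting anything
}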




#142 bwhiting   Members   -  Reputation: 813


Posted 18 August 2011 - 09:09 AM

1. Step along the ray using some algorithm (Bresenham's line algorithm, perhaps?)



I am not a C or C++ programmer and have little grasp of exactly how fast it is, other than I know it ain't exactly slow.


How many "steps" do you think they could implement per pixel?
10? 100? 1000?

I have no idea; and what do you think is the maximum that could be achieved while still hitting something like 30fps?

:)

#143 Syranide   Members   -  Reputation: 375


Posted 18 August 2011 - 09:54 AM


1. Step along the ray using some algorithm (Bresenham's line algorithm, perhaps?)



I am not a C or C++ programmer and have little grasp of exactly how fast it is, other than I know it ain't exactly slow.


How many "steps" do you think they could implement per pixel?
10? 100? 1000?

I have no idea; and what do you think is the maximum that could be achieved while still hitting something like 30fps?

:)


Bresenham's algorithm is a line-tracing algorithm that only uses integers, and it's really fast... there are a bunch of others too, for different purposes, that might be more suitable. But really, it can't cost much more than a few instructions per step. And the interesting thing is that, with some optimizations, it seems as if you shouldn't even need to recompute the starting values when you go down the octree, but rather just bitshift some of the values (*2 and /2).
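
Roughly what I mean, assuming the integer ray state of a Bresenham-style walk (this layout is my own guess, nothing Euclideon has shown):

struct RayState {
    int x, y, z;     // current cell on the current octree level
    int dx, dy, dz;  // per-axis deltas of the line, in cells
    int dm;          // dominant-axis length (number of steps)
    int ex, ey, ez;  // Bresenham error accumulators
};

// Descending one octree level doubles the number of cells per axis, so
// every quantity rescales with a single shift and nothing is recomputed;
// the scaled state is a valid state of the same line at double resolution.
void descendOneLevel(RayState& r)
{
    r.x  <<= 1; r.y  <<= 1; r.z  <<= 1;
    r.dx <<= 1; r.dy <<= 1; r.dz <<= 1;
    r.dm <<= 1;
    r.ex <<= 1; r.ey <<= 1; r.ez <<= 1;
}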



#144 Tachikoma   Members   -  Reputation: 552


Posted 18 August 2011 - 10:06 AM

Perhaps it could be a variation of the good old Marching Cubes algorithm, combined with some kind of octree traversal.
Latest project: Sideways Racing on the iPad

#145 GFalcon   Members   -  Reputation: 380


Posted 18 August 2011 - 10:20 AM

I'm not intensely familiar with SVOs, but really, whoever you quoted above seems to have no grasp of reality. Not using multiplication and division does not make it impressive by itself; also note, the core algorithm. Meaning, going along a ray and traversing an octree. Going along a ray can be done using a variation of Bresenham's line algorithm (http://en.wikipedia.org/wiki/Bresenham's_line_algorithm)... and holy shit! It doesn't use division or multiplication other than for precomputing some values! Bresenham's line algorithm sure is modern-day rocket science, it seems.

So, let's step back and look at the problem... we have a RAY and an OCTREE, and our intention is to find the first node in the octree the ray hits, to get the pixel color... so:

1. Step along the ray using some algorithm (Bresenham's line algorithm, perhaps?)
2. For each step, check the corresponding octree node; if empty go to 1, exit if solid
3. Recurse one level down into the octree, go to 1

A bit simplistic, yes, but unless I'm missing something, that is the "core algorithm"... and no, I don't see any unicorn in there.

Just to be clear though, this is one way of implementing it; there are likely a lot better ways, but it wouldn't surprise me if this is what they actually use, just a bit optimized.


I also thought about applying Bresenham's algorithm to it yesterday, but it might need a lot of (small) steps along the ray to check the octree nodes... but why not.
For sure, if their "core algorithm" is done this way, there's no unicorn here; I agree on that :)
Well, even if they say they are not using raycasting, I'm beginning to think that in fact they are. Maybe they just call it differently because they are not doing it the usual way.
--
GFalcon
0x5f3759df

#146 Frank Dodd   Members   -  Reputation: 122


Posted 27 August 2011 - 04:30 PM

There is another interesting reveal at 2:18 in the following link, which I haven't seen quoted before:

http://www.youtube.c...feature=related

A short demo scene that contains a simple implementation of shadowing, hybrid rendering and arbitrary rotations on point cloud objects. This, I believe, hints at many of the features people were concerned were missing from the technology demo.

  • The shadowing is very simple, with the appearance of a low-resolution shadow map, but the basics are there.
  • There is a mix of polygon objects and voxel objects in the scene. This hybrid rendering always seemed like the best solution for animation to me, just like Doom's mix of sprites and polygons (1993, what a year that was!). Characters could be high-resolution, skeleton-animated poly models rendered on the graphics card and mixed in with the Z-buffer.
  • The tyre is apparently a point cloud object that is being rotated; assuming they are rendered in the same pass from the same camera angle, that would represent an arbitrary rotation.

There is apparently a second podcast with a further interview on memory use and animation, so I subscribed, as the only way to assess the feasibility of this is to carefully dissect every crumb of cookie that we get.

I didn't want to interrupt the more interesting discussion on the integer traversal of the octree data structure, but it seems to have petered out just when it was getting interesting. I'll do a spot more study before I post on that, though.

#147 Syranide   Members   -  Reputation: 375


Posted 28 August 2011 - 10:47 AM


There is another interesting reveal at 2:18 in the following link, which I haven't seen quoted before:

http://www.youtube.c...feature=related

A short demo scene that contains a simple implementation of shadowing, hybrid rendering and arbitrary rotations on point cloud objects. This, I believe, hints at many of the features people were concerned were missing from the technology demo.

  • The shadowing is very simple, with the appearance of a low-resolution shadow map, but the basics are there.
  • There is a mix of polygon objects and voxel objects in the scene. This hybrid rendering always seemed like the best solution for animation to me, just like Doom's mix of sprites and polygons (1993, what a year that was!). Characters could be high-resolution, skeleton-animated poly models rendered on the graphics card and mixed in with the Z-buffer.
  • The tyre is apparently a point cloud object that is being rotated; assuming they are rendered in the same pass from the same camera angle, that would represent an arbitrary rotation.

There is apparently a second podcast with a further interview on memory use and animation, so I subscribed, as the only way to assess the feasibility of this is to carefully dissect every crumb of cookie that we get.

I didn't want to interrupt the more interesting discussion on the integer traversal of the octree data structure, but it seems to have petered out just when it was getting interesting. I'll do a spot more study before I post on that, though.


1. Shadowing is not hard to do with SVOs; you can even have "perfect" shadows if you like... the problem is that it is very expensive.
2. Hybrid is also an obvious thing to do... but I'm not so sure it is a good idea at all:
- The main draw of SVOs is that performance is primarily determined by the number of pixels, not geometry complexity... while polygon performance is primarily determined by geometry complexity... mixing both means you suffer the drawbacks of both to some extent, which isn't ideal. And you may end up with hugely unpredictable performance as their individual coverage of the screen varies.
- SVOs and polygons are likely to have their own unique look; mixing the two seamlessly can be a truly daunting issue.
3. Arbitrary rotation is not hard to do with SVOs, but instancing of arbitrarily rotated, scaled, morphed and positioned objects is likely to add significant cost... something which UD doesn't currently show.

Please note, SVOs can in theory do pretty much everything triangles can and more; nobody is really disputing that as far as I know. A primary problem is performance: the demo they showed last time ran at 20 FPS at 1024x768 on a modern computer, without shading or any modern techniques at all. Now scale that to the common resolution of 1920x1080, which has about 2.6x the pixels: that means roughly 8 FPS at best, and we are still not seeing any shadows, shading, rotation, lighting, heavy instancing, animation, etc. And let's not forget the ever-present enormous memory issue.

Overall, I'd like to think that UD/SVO is highly overrated... I'm not going to diss the Atomontage engine, it seems nice... but I find both their "visions" all too familiar from my own developer fantasies: to find the perfect solution to every problem, and to believe that somehow the best solution would be the most generic possible solution you could ever think of. It's really hard to explain in practice... but to give you a picture, the answer to "how much does it hurt to get punched in the face?" is not to look up theories for subatomic particles, how they interact, their weight, how energy is transferred, what material it is, etc... no, it's simply "pretty damn much, but it depends on how hard he hits you". That is, don't break a problem into the smallest possible components; keep it high-level and approximate. And I feel confident the same is true here: breaking the problem down into the smallest possible pieces (voxels) means you lose the ability to make optimizations, assumptions and clever tricks... you even, to some degree, lose the ability to have smooth surfaces. There are no longer triangles, nor surfaces, nor shapes, nor materials... it's all just individual voxels.



#148 Sirisian   Crossbones+   -  Reputation: 1791


Posted 28 August 2011 - 01:06 PM

- SVOs and polygons are likely to have their own unique look; mixing the two seamlessly can be a truly daunting issue.
3. Arbitrary rotation is not hard to do with SVOs, but instancing of arbitrarily rotated, scaled, morphed and positioned objects is likely to add significant cost... something which UD doesn't currently show.

In my tests I just have a rotation matrix for the object's OBB, to get things back into an AABB frame. Assuming naive 3D DDA, you rasterize the box to the screen using your rasterization algorithm of choice, storing the screen-to-OBB-surface ray (that ray's magnitude is the depth from the screen to the surface of the OBB). Then transform the ray, along with the surface point, by the inverse rotation matrix. From there it's just a normal traversal of the SVO data, since in object space nothing is rotated.

Using the frustum rendering method it's even easier: for each OBB you just apply the inverse rotation matrix to the frustum planes around the object, and you're now looking at the object in its AABB state and can perform the culling and rendering. Still costly.
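
A minimal sketch of that inverse-rotation step (the types and names here are just illustrative, not my actual test code):

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };

static Vec3 mul(const Mat3& a, const Vec3& v) {
    return { a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z,
             a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z,
             a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z };
}

// For a pure rotation matrix, the inverse is just the transpose.
static Mat3 transpose(const Mat3& a) {
    Mat3 t;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            t.m[i][j] = a.m[j][i];
    return t;
}

// Bring a world-space ray into the object's local frame, where the SVO
// is axis-aligned; a standard AABB traversal applies from there on.
void worldRayToObjectSpace(const Mat3& objRotation, const Vec3& objOrigin,
                           Vec3& rayOrigin, Vec3& rayDir)
{
    const Mat3 invRot = transpose(objRotation);
    rayOrigin = mul(invRot, Vec3{ rayOrigin.x - objOrigin.x,
                                  rayOrigin.y - objOrigin.y,
                                  rayOrigin.z - objOrigin.z });
    rayDir = mul(invRot, rayDir); // directions rotate but don't translate
}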

#149 Syranide   Members   -  Reputation: 375


Posted 28 August 2011 - 01:55 PM

In my tests I just have a rotation matrix for the object's OBB, to get things back into an AABB frame. Assuming naive 3D DDA, you rasterize the box to the screen using your rasterization algorithm of choice, storing the screen-to-OBB-surface ray (that ray's magnitude is the depth from the screen to the surface of the OBB). Then transform the ray, along with the surface point, by the inverse rotation matrix. From there it's just a normal traversal of the SVO data, since in object space nothing is rotated.

Using the frustum rendering method it's even easier: for each OBB you just apply the inverse rotation matrix to the frustum planes around the object, and you're now looking at the object in its AABB state and can perform the culling and rendering. Still costly.


Indeed. To clarify for others, this may sound simple and fast... and it is, relatively. But I'm certain that UD is as fast as it is today because there are no overlapping structures or special features, but rather only a single large octree being traversed. That is incredibly cheap: you simply traverse the octree and test against rays. Add even the simplest of "features" to that and you'll likely find the cost per pixel skyrockets... even if some of the compute cost may be hidden behind memory latency.

And this is the neat thing with triangles: you can do some pretty costly stuff, because it's done on objects as a whole, or on smaller but still large triangles. SVOs are per-pixel, and thus there are likely few computations that can be shared between adjacent pixels... and when you are tracing millions of pixels per frame (a 1920x1080 frame is over two million pixels), even the simplest computations become massively costly.



#150 Rhetorician   Members   -  Reputation: 119


Posted 23 April 2012 - 12:53 PM

Although a matrix of pixels (screen space) has Euclidean regularity, I don't like the idea of making everything else have Euclidean regularity as well (which has dead-obvious benefits, but also dead-obvious catches). That's probably why Euclideon is going for a hybrid rendering system now, right? I wonder if there's a good way to layer Bresenham-like algorithms (for instancing / tessellation composition) and access these plotting mechanisms fluidly from everywhere else within the engine.

For example: a NURBS surface to specify the curvature of a large terrain, and several layers (of what?) specifying the composition (grasses, dirt, rock, etc. / uniqueness modifiers). The problem is integrating this irregular complexity into the whole pipeline, for processes such as hierarchical Bresenham-based beam tracing.

I'm curious if there's a good way to contain essential geometry using regular definitions, combine and extend this description to form a variety of complexity and uniqueness, and then somehow efficiently traverse these structures *magically* and pick out exactly the information that is needed... uncertain magic.

By the way, I think the way Euclideon exhibited dirt particles in their demo is the worst part of all. Dirt is extremely small! Besides the moist globs and little debris lying around in it, dirt is just dust! They've got what look to be cobblestones. It would be cool if a game actually had extremely high-quality, realistic dirt, but Euclideon's dirt is much less realistic than even the games they criticized.

I really like self-shadowed terrain textures (look at the ground when it's close):



#151 Outthink The Room   Members   -  Reputation: 829


Posted 06 May 2012 - 09:55 AM

I apologize in advance, I'm not a programmer, but I have a question that I don't really understand and nobody really discusses.

I see everyone saying that file sizes would be enormous, that HDD space would never accommodate such high-volume datasets. My question is: why would I have to save the same atom a million times?

What I mean is this: in a game, a duplicated rock utilizes the same texture(s). You don't have to save the same texture a thousand times separately; you save it on the disc once and reuse it when needed.

So, if I were to make a blade of grass out of a million atoms, all using the same colored atom, why do I need to save each atom separately? If each atom were colored the exact same green, using the color code 0001 for that specific shade of green, why do I need to save it for each point? Why wouldn't the object file just hold the color information for each point and, while rendering, look up the color code and fill in the corresponding atom?

I don't understand why EACH COLORED ATOM must be stored with its own file size, especially when the color doesn't have to be assigned until rendering. It seems like a huge waste to load color information if it's not going to be used in the frame.

Also, in this interview, http://www.3d-test.com/interviews/unlimited_detail_technology_1.htm which was done in May of 2010, he clearly states that objects converted to their format are roughly 8% of the size of their polygon format.

So if you made a 1,000,000-polygon asset and converted it, the result would be roughly the size of an 80,000-polygon asset. That's a vastly smaller file. 80k-polygon objects are something like 500 KB; not megs or gigs or terabytes, just kilobytes. Even if you doubled the file size to accommodate all the color information instead of just X,Y,Z coordinates, it's still an extremely small file.

Again, if this is wrong, I do apologize for typing so much. Thing is, I've always viewed the Unlimited Engine as pointillism painting, with the search engine or look-up table as the palette. And to me, an artist would never wipe his brush or clean his palette every single time he painted a separate dot. He would just fill in as many dots as possible with that color.

#152 Hodgman   Moderators   -  Reputation: 31785


Posted 06 May 2012 - 11:22 AM

My question is: why would I have to save the same atom a million times?
What I mean is this: in a game, a duplicated rock utilizes the same texture(s). You don't have to save the same texture a thousand times separately; you save it on the disc once and reuse it when needed.

This is what they're doing, and it's why their demo is made up of the same dozen rocks/props repeated a billion times.
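
The color-palette idea from the question is also a real technique, and easy to sketch; a hypothetical layout (illustrative only, not anything Euclideon has published):

#include <cstddef>
#include <cstdint>
#include <vector>

// Palettized point cloud: each distinct color is stored once, and every
// atom carries only a small index into the palette.
struct PalettizedCloud {
    std::vector<uint32_t> palette;     // e.g. up to 256 distinct RGBA colors
    std::vector<uint8_t>  colorIndex;  // one byte per atom instead of four
};

// Resolving an atom's color at render time is just a table lookup.
inline uint32_t atomColor(const PalettizedCloud& c, std::size_t atom)
{
    return c.palette[c.colorIndex[atom]];
}

With a 256-entry palette, per-atom color storage drops from four bytes to one -- but the positions still have to be stored per atom and dominate the total, which is why this alone doesn't make the memory problem go away.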

Also, in this interview, http://www.3d-test.c...echnology_1.htm which was done in May of 2010, he clearly states that objects converted to their format are roughly 8% of the size of their polygon format.
So if you made a 1,000,000-polygon asset and converted it, the result would be roughly the size of an 80,000-polygon asset. That's a vastly smaller file. 80k-polygon objects are something like 500 KB; not megs or gigs or terabytes, just kilobytes. Even if you doubled the file size to accommodate all the color information instead of just X,Y,Z coordinates, it's still an extremely small file.

If you want to perform a real comparison there, you'd have to include the standard option, which is to convert the 1,000,000 polygon asset into an 80,000 polygon asset + a normal map.

#153 Outthink The Room   Members   -  Reputation: 829


Posted 06 May 2012 - 02:04 PM

If you want to perform a real comparison there, you'd have to include the standard option, which is to convert the 1,000,000 polygon asset into an 80,000 polygon asset + a normal map.


I don't really get the normal map example. I was referring to the conversion process as plain assets. He explained that if you used the "Unlimited Point Cloud Format", file sizes would be 8% of the polygonal size. The interview didn't specify the inclusion of maps, and I'm actually curious whether developers would continue down that route.

Maybe this isn't possible, but couldn't a paint program allow you to paint directly onto point cloud geometry? If you could bypass the entire process of having textures altogether and simply layer-paint your art asset, you'd replace textures with individually colored atoms. The approach could be something like PolyPainting in ZBrush, just without the initial UV part in the beginning.

Again, I'm not a programmer, so I don't know how all the behind-the-scenes things work on GPUs and such. But since the empty space between polygon points would now be filled with atoms, wouldn't the need for textures be replaced as well?

#154 Crowley99   Members   -  Reputation: 178


Posted 06 May 2012 - 11:54 PM

But since the empty space between polygon points would now be filled with atoms, wouldn't the need for textures be replaced as well?


Yes, the surface textures could be replaced by atoms - many, many atoms. :-) He was setting up a straw man: it may (arguably) be more efficient to represent certain surfaces that way, but then it becomes unclear how to apply standard techniques such as coloring, displacement mapping or normal mapping on top of this. You can of course represent the surface detail achieved by these techniques with "atoms" directly, but then the atom count explodes, and the representation likely becomes far less efficient, not more.

#155 Hodgman   Moderators   -  Reputation: 31785


Posted 07 May 2012 - 02:22 AM

I don't really get the normal map example. I was referring to the conversion process as plain assets.

Ok, the problem with that comparison is that you're ignoring the asset conversion processes that are used in games.
The two asset processes that you compared were:

Author highly detailed, film-quality model ---- Generate Atoms ---- Rendered by game

...but to be fair, you should actually use the asset conversion processes that are currently used by "polygonal games", which would look more like:

                                            / Generate LOD and bake maps \
Author highly detailed, film-quality model -                              - Rendered by game
                                            \ Generate Atoms             /

In this comparison, both data sets (the atoms or the LOD/maps) will be a small percentage of the original size.

It's also important to note that there's not that much difference between the above two asset pipelines; in reality, the "generate atoms" part is pretty much the same as the "bake maps" part, except it's baking a 3D map of some sort. N.B. if textures can be replaced by creating one atom per pixel of the texture, then the memory requirements of both approaches are going to be similar - you're storing the same data in the end.

Games already use 3D/volumetric rendering (e.g. point clouds) for certain objects when appropriate, and existing asset pipelines already use "atom->polygon" and "polygon->atom" conversions where appropriate.

Maybe this isn't possible, but couldn't a paint program allow you to paint directly onto point cloud geometry? If you could bypass the entire process of having textures altogether and simply layer-paint your art asset

Yes, this already exists for both polygonal and voxel art creation programs.

Take note, though, that the processes used when authoring game art and the processes used when rendering the game don't have to be the same.
These kinds of "unlimited detail", no-hassles (such as no textures) processes are already in use in a lot of art creation tools.
If a current engine then requires certain technical details, such as UV coordinates or texture maps (e.g. because the most efficient GPU implementation happens to work that way), then the game engine's asset import pipeline can automatically perform the required conversions.


If/when we switch over to mainly using point rendering in games, it's not going to be a huge overhaul of the art pipeline, nor will it wildly enable more freedom for artists. That switchover will be just another technical detail, like whether your vertices are in FP16 or FP32 format...
Artists already have this freedom in their tools, assuming their game engine supports their chosen art tools. So the game engine and/or the artists can independently choose whether to use polygons or atoms, whether to use texture maps or an automatic solution, etc., etc... The engine's asset import pipeline in the middle takes care of the details. This is already the way things are.

Edited by Hodgman, 07 May 2012 - 02:25 AM.


#156 Kyall   Members   -  Reputation: 287


Posted 07 May 2012 - 02:36 AM

It's perfectly plausible. That's the topic answered. It just has some drawbacks in terms of a few things, but some bonuses in terms of other stuff.

I've been thinking every now and again about how I would engineer some tech to match what Euclideon has, and my algorithm so far is:

1. Store the scene as a sparse octree with duplicates in zones removed.
1.a Any place that is empty is empty of data; it has no 'transparent' voxel representation.
1.b The scene is a 'box' subdivided into 8 boxes, and those boxes are subdivided until we get down to the voxel level.
1.c Each branch of the tree has a color entry that represents the averaged color of the leaves in that section of the tree.
1.d If the color of a branch is the same as the color of all leaves under it, then that branch is set to that color and all the leaves are removed. This also means that voxels inside one of these cubes, if not completely solid and the same color, are not counted as the same color as the branch. Empty slots in the tree that make up the shape of an object do count. Maybe just ignore this last bit, since it might cause problems.

2. Walk the tree and render out boxes that describe the branches. The size of each box (so, the depth it goes into the tree) is related to the distance of that part of the tree from the camera. Objects close up are described by more boxes, and objects further away by fewer.

3. Now that we have a general idea of the depth of the scene, we use that depth to limit the queries we use to find the actual voxels that make up the scene. The further away a node is from the camera, the less we care about actually getting the right value for that part of the screen.

No. 3 is what I'm having problems thinking up. Right now it would basically be a ray trace whose only saving grace is that the number of items it needs to test against has been greatly reduced by the pre-render depth step. Maybe that will be fast enough to work. Maybe it won't. I haven't had the time to try this and see if it would work.
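
Sketching 1.c/1.d in code, something like this (just how I'd lay it out, untested):

#include <array>
#include <cstdint>
#include <memory>

struct Node {
    uint32_t color = 0;                          // averaged RGBA of the subtree
    std::array<std::unique_ptr<Node>, 8> child;  // null slot = empty space
    bool isLeaf() const {
        for (const auto& c : child) if (c) return false;
        return true;
    }
};

// Averages child colors into the branch (1.c) and collapses a branch that
// is completely solid and uniformly colored into a single leaf (1.d).
// Returns true if the subtree under n is one uniform color.
bool collapse(Node& n)
{
    if (n.isLeaf()) return true;

    uint64_t sum[4] = {0, 0, 0, 0};
    int count = 0;
    bool uniform = true;   // all 8 slots filled, all the same color?
    uint32_t firstColor = 0;

    for (auto& c : n.child) {
        if (!c) { uniform = false; continue; }   // a hole breaks uniformity
        if (!collapse(*c)) uniform = false;
        if (count == 0) firstColor = c->color;
        else if (c->color != firstColor) uniform = false;
        for (int i = 0; i < 4; ++i)
            sum[i] += (c->color >> (8 * i)) & 0xFF;
        ++count;
    }

    if (count > 0) {
        n.color = 0;
        for (int i = 0; i < 4; ++i)
            n.color |= uint32_t(sum[i] / count) << (8 * i);
    }
    if (uniform)
        for (auto& c : n.child) c.reset();       // keep the color, drop the leaves
    return uniform;
}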

Edited by Kyall, 07 May 2012 - 02:38 AM.

I say Code! You say Build! Code! Build! Code! Build! Can I get a woop-woop? Woop! Woop!

#157 Outthink The Room   Members   -  Reputation: 829


Posted 08 May 2012 - 04:55 AM

it may (arguably) be more efficient to represent certain surfaces that way, but then it becomes unclear how to apply standard techniques such as coloring, displacement mapping or normal mapping on top of this. You can of course represent the surface detail achieved by these techniques with "atoms" directly, but then the atom count explodes, and the representation likely becomes far less efficient, not more.


N.B. if textures can be replaced by creating one atom per pixel of the texture, then the memory requirements of both approaches are going to be similar - you're storing the same data in the end.


In regards to both of these statements, wouldn't a conversion rate of 64 atoms per cubic millimeter take care of that? The atom count would be a controlled conversion, as would the rate at which color data is distributed.

Until you guys said these things, I never really fully understood what he was talking about in reference to those numbers. It never dawned on me that he *could* have meant textured surfaces, and that he might have figured out a way to represent color data from "converted textured assets".

Also, while doing more research about point clouds after you guys responded, I came across PhotoSynth. Microsoft made a product where you actually do the opposite: it converts high-res pictures into point cloud models. The results are actually quite staggering. I don't know the conversion rate for PhotoSynth, but it seems like a technique similar to that would be ideal for the Unlimited Engine.

#158 Hodgman   Moderators   -  Reputation: 31785


Posted 08 May 2012 - 05:09 AM

Also, while doing more research about point clouds after you guys responded, I came across PhotoSynth. Microsoft made a product where you actually do the opposite: it converts high-res pictures into point cloud models. The results are actually quite staggering. I don't know the conversion rate for PhotoSynth, but it seems like a technique similar to that would be ideal for the Unlimited Engine.

Yep; again, though, note that this is already done in practice. On one project I worked on, we wanted to base a level on a real location we had video of -- so we extracted all the frames from the video as separate "photographs", fed them through an app like PhotoSynth, and got a point cloud of that location. We then cleaned up the data set, built LODs from it and used it in the game.

#159 Punika   Members   -  Reputation: 229


Posted 08 May 2012 - 08:54 AM

A little bit off-topic, but every now and then someone says it has been done before... and I don't think so.

I mean, "some but not everyone"* thinks it is a sparse octree with raycasting... Then I'd like to see a demo which produces 30 FPS at 1024x768 with such a deep octree level. I have seen none on a CPU, which is what they claim to use...


* Fixed that :)

Edited by Punika, 09 May 2012 - 10:51 AM.

Punika

#160 Rhetorician   Members   -  Reputation: 119


Posted 08 May 2012 - 05:53 PM

I mean, everybody thinks it is a Sparse Octree with Raycasting

Wait. Have you even read the topic? Only some (but very popular, you know who) people have spread such an assumption.



