Euclideon Geoverse - Latest Video Calms More Critics



#21 TheComet   Crossbones+   -  Reputation: 2531

Posted 21 October 2013 - 12:29 PM

They state there are "64 Points Per Cubic MM".

They state the world is "a few cubic kilometers"

 

Even with the assumption that only 0.1% of world space is filled with voxels, that's 64'000'000'000'000'000 (64 quadrillion) voxels. Each needs at least a position (3 × 4 bytes) and a colour (3 bytes), which comes to 960'000'000'000'000'000 (960 quadrillion) bytes.

 

I'm sorry, but even with a compression ratio of 17%, you cannot fit 163 petabytes of information in RAM, let alone on the hard drive.
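For anyone who wants to check the figures, here is a quick back-of-envelope sketch. It uses only the assumptions in this post (a single cubic kilometre of world space, 0.1% occupancy, 15 bytes per raw point, the claimed 17% compression); nothing here is Euclideon's actual data layout:

```python
# Back-of-envelope storage estimate, using only the assumptions from this post:
# 64 points per cubic mm, 1 km^3 of world space, 0.1% of the volume occupied,
# 15 bytes per raw point (3 x 4-byte floats for position + 3 bytes of colour).

points_per_mm3 = 64
mm3_per_km3 = 1_000_000 ** 3          # 1 km = 10^6 mm, so 1 km^3 = 10^18 mm^3
occupancy = 0.001                     # 0.1% of the volume actually holds points
bytes_per_point = 12 + 3              # position + colour

points = points_per_mm3 * mm3_per_km3 * occupancy
raw_bytes = points * bytes_per_point
compressed_bytes = raw_bytes * 0.17   # the claimed 17% compression ratio

print(f"points:     {points:.2e}")                      # ~6.4e16 (64 quadrillion)
print(f"raw:        {raw_bytes / 1e15:.0f} PB")          # ~960 PB
print(f"compressed: {compressed_bytes / 1e15:.1f} PB")   # ~163 PB
```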


"Windows 10 doesn't only include spyware, it is designed as spyware" -- Gaius Publius, The Big Picture RT Interview

"[...] we will access, disclose and preserve personal data, including your content (such as the content of your emails, other private communications or files in private folders), when we have a good faith belief that doing so is necessary" -- Windows 10 Privacy Statement


#22 samoth   Crossbones+   -  Reputation: 8333

Posted 21 October 2013 - 01:51 PM

compression rate of 17%

I only watched it with one eye as it really isn't all that interesting, but from what I understood, they claim 17% on a lossless compression. Lossless compression that compresses in the same ballpark as lossy image compressors. That claim alone would be enough to fill a thread. Where be tha Nobel Prize for these guys? :)

 

But seriously, unlimited detail is not, and has never been, something special. Fractals are unlimited detail, and we have had them for 40 or so years. Not in realtime, admittedly, but that was on computers which had a billion times fewer FLOPS, too. Realtime fractals have been a reality for at least a decade now.

 

Unlimited detail as such isn't interesting, though. What's interesting is meaningful, dynamic, interactive unlimited detail. And they still fail to provide anything remotely close to that from what I can tell.

 

With that said, the patent office turned down my perpetuum mobile application this week, too. I wouldn't know why...


Edited by samoth, 21 October 2013 - 01:51 PM.


#23 TheComet   Crossbones+   -  Reputation: 2531

Posted 21 October 2013 - 02:53 PM

But seriously, unlimited detail is not, and has never been, something special. Fractals are unlimited detail, and we have had them for 40 or so years. Not in realtime, admittedly, but that was on computers which had a billion times fewer FLOPS, too. Realtime fractals have been a reality for at least a decade now.

You're missing the point a little, I feel. Fractals are "unlimited" because, quite contrary to what Euclideon is claiming, they can be procedurally generated and don't have high memory requirements.
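Purely to illustrate that point about procedural detail, here is a minimal sketch (plain Python, nothing to do with Euclideon's code): a fractal like the Mandelbrot set is defined by a formula, so you can sample it at any coordinate and any zoom level without storing any points at all.

```python
# Minimal illustration of "detail from a formula": the Mandelbrot set can be
# sampled at arbitrary coordinates and zoom levels with zero stored geometry.

def in_mandelbrot(cx: float, cy: float, max_iter: int = 256) -> bool:
    """True if the point (cx, cy) appears to belong to the Mandelbrot set."""
    zx = zy = 0.0
    for _ in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:   # escaped: definitely not in the set
            return False
    return True

# Zooming in a million times costs no extra memory; you just evaluate the
# formula at more finely spaced coordinates.
print(in_mandelbrot(-0.7436438, 0.1318259))
```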


Edited by TheComet, 21 October 2013 - 02:54 PM.

"Windows 10 doesn't only include spyware, it is designed as spyware" -- Gaius Publius, The Big Picture RT Interview

"[...] we will access, disclose and preserve personal data, including your content (such as the content of your emails, other private communications or files in private folders), when we have a good faith belief that doing so is necessary" -- Windows 10 Privacy Statement


#24 Hodgman   Moderators   -  Reputation: 48371

Posted 21 October 2013 - 05:18 PM

The demo with the 64p/mm^3 claim *was* procedural - it was the same handful of instanced meshes repeated in the same arrangements thousands of times. Counting the instanced points rather than the unique points when discussing compression is stupendously misleading.
In their new Geoverse demos, I'm sure the tech is still capable of using 64p/mm^3 data, but an aerial laser scan does not provide that resolution (not even military SAR scans do)... so those demos obviously are not storing that kind of hi-res data.
Their compressor also relies on the same trick of quantizing and palettizing the data, then extracting and instancing repeated patterns.
Where do they make the claim that their converter is lossless??
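As a generic illustration of the quantize-and-palettize idea (this is the textbook trick, not Euclideon's actual codec), a rough sketch: snap each 8-bit colour channel to a coarse grid, collect the unique colours that remain into a palette, and store a small index per point instead of three bytes of colour.

```python
import numpy as np

def quantize_and_palettize(colors: np.ndarray, levels: int = 32):
    """Quantize 8-bit RGB colours to `levels` steps per channel, then replace
    each point's colour with an index into a palette of the unique results.

    colors: (N, 3) uint8 array. Returns (palette, indices).
    Generic sketch of the quantize + palette trick, not any specific codec.
    """
    step = 256 // levels
    quantized = (colors // step) * step + step // 2   # snap to bucket centres
    palette, indices = np.unique(quantized, axis=0, return_inverse=True)
    return palette, indices.astype(np.uint16)         # 32^3 = 32768 possible colours, fits in 16 bits

# N points x 3 bytes of colour become N small indices plus one shared palette,
# and repeated runs of indices then compress very well with ordinary methods.
```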

#25 Outthink The Room   Members   -  Reputation: 1288

Posted 21 October 2013 - 11:14 PM

They state there are "64 Points Per Cubic MM".

They state the world is "a few cubic kilometers"

 

Even with the assumption that only 0.1% of world space is filled with voxels, that's 64'000'000'000'000'000 (64 quadrillion) voxels. Each needs at least a position (3 × 4 bytes) and a colour (3 bytes), which comes to 960'000'000'000'000'000 (960 quadrillion) bytes.

 

I'm sorry, but even with a compression ratio of 17%, you cannot fit 163 petabytes of information in RAM, let alone on the hard drive.

1. It isn't Voxels. It's Point Cloud.

 

2. The world they showed was "1 square kilometer". A cubic kilometer would mean the world is 1 kilometer deep. What is the point of converting Point Cloud 3,000 feet below the surface, which nobody would ever see, while completely wasting storage space at the same time?

 

3. Converting based on 64 Points per mm^3 does not mean the output asset will actually have 64 Points per mm^3. Polygon models are surface-based and are hollow inside. Point Cloud is also surface-based data. You wouldn't convert the empty space inside of an object. Euclideon essentially needs to break the object into a divisible structure.

 

In other words, if you converted a 1 cubic meter polygon box, that would "technically" equate to 64B Points. Thing is, based on my math, converting only the surface data of that box would equate to only 96M Points, which is roughly 1/667th the size of what you're saying would be converted (see the arithmetic sketch at the end of this post).

 

4. AeroMetrex is a geospatial company and, from what I understand, they are the first company to license Euclideon's technology.

 

In the video I linked to, they comment on the size of the data-set. They state specifically that the Polygon version of that entire data-set is 15GB, while the converted Point Cloud data-set is 3GB: 20% of the original file size, and that conversion was done almost a year ago. Compaction has increased, and continues to increase, according to multiple companies who have been licensing the technology.

 

Furthermore, that entire data-set is roughly a square kilometer. And not only is that a square kilometer, there is NO REPEATED GEOMETRY. That is an entirely unique Point Cloud Data-Set. No trees are duplicated, no cars are reused and every inch of asphalt is converted Point Cloud.

 

They also stated the entire data-set is made up of roughly 100 Million Polygons and, when converted, roughly 40 Billion Points. In this instance, the entire data-set is one large piece of geometry. And from what has been revealed, that actually helps Euclideon compact more and reduce file size for massive data-sets.

 

So if a company can get a square kilometer of unique Point Cloud in 3GBs, imagine if you built a forest with repeatable cubes of dirt. Or reusable trees. Grass and Leaves being reused. File sizes would drop dramatically and never reach unsustainable storage levels.

 

5. Euclideon's technology does not load Data-Sets into RAM. When you convert from Polygons to Point Cloud, it's converted into Euclideon's .UDS file type. It's a zip file that indexes the data in a very specific way. Euclideon's algorithm then searches the hard-drive (the zip files), finds as many points as are needed for the resolution and temporarily sends them to RAM to be computed. Some have hypothesized that it's essentially a cache for the data to be rendered. Using a zip file would allow the algorithm to view the content without having to extract the entire data-set (see the out-of-core sketch at the end of this post).

 

Companies have confirmed it doesn't load the entire data-set into RAM.
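As a quick check of the arithmetic in point 3: 64 points per cubic mm implies 4 points per mm along each axis, so a 1 m cube filled solid holds 64 billion points, while sampling only its six faces gives about 96 million. A small sketch, using only those assumptions:

```python
# Surface vs. volume point counts for a 1 m cube at 64 points per cubic mm
# (64 per mm^3 = a 4 x 4 x 4 lattice per mm, i.e. 4 points per mm per axis).

points_per_mm = 4
mm_per_m = 1_000

volume_points = (points_per_mm * mm_per_m) ** 3    # 6.4e10, i.e. 64 billion
points_per_face = (points_per_mm * mm_per_m) ** 2  # 1.6e7 per 1 m^2 face
surface_points = 6 * points_per_face               # 9.6e7, i.e. 96 million

print(volume_points, surface_points, volume_points / surface_points)
# -> 64000000000 96000000 666.67 (the "1/667th" figure quoted above)
```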
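And point 5 describes an out-of-core design: the full data-set stays on disk and only the points needed for the current view are paged into RAM. The .UDS internals are proprietary, so purely as an illustration of the general idea, here is a toy sketch of an on-demand point octree with a small LRU cache standing in for the RAM working set; every name in it is hypothetical.

```python
from collections import OrderedDict

class OnDiskPointTree:
    """Toy out-of-core point streamer (illustrative only, not the .UDS format).

    Nodes of a point octree live on disk. A node is read only when the viewer
    needs finer detail than its parent provides, and recently used nodes sit
    in a small LRU cache that stands in for the RAM working set.
    """

    def __init__(self, read_node, max_cached_nodes=10_000):
        self.read_node = read_node    # callable: node_id -> (points, child_ids, node_size)
        self.cache = OrderedDict()    # node_id -> node data, in LRU order
        self.max_cached_nodes = max_cached_nodes

    def _load(self, node_id):
        if node_id in self.cache:
            self.cache.move_to_end(node_id)        # mark as recently used
            return self.cache[node_id]
        node = self.read_node(node_id)             # the only place the disk is touched
        self.cache[node_id] = node
        if len(self.cache) > self.max_cached_nodes:
            self.cache.popitem(last=False)         # evict the least recently used node
        return node

    def visible_points(self, node_id, needed_resolution):
        """Gather points, descending only where finer detail is actually required.

        In a real viewer `needed_resolution` would grow with distance from the
        camera, so far-away regions never pull their fine-grained nodes off disk.
        """
        points, children, node_size = self._load(node_id)
        if node_size <= needed_resolution or not children:
            return list(points)
        out = []
        for child_id in children:
            out.extend(self.visible_points(child_id, needed_resolution))
        return out
```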


Edited by Outthink The Room, 21 October 2013 - 11:43 PM.


#26 n3Xus   Members   -  Reputation: 922

Posted 22 October 2013 - 09:40 AM

Companies have confirmed it doesn't load the entire data-set into RAM.

 

 

 

Obviously :D

 

A bit off-topic: how do these people even get those high-quality 3D scans of the world? Even the inside of that half-built building was visible.



#27 TheComet   Crossbones+   -  Reputation: 2531

Posted 22 October 2013 - 10:02 AM

@ Outthink The Room - It was premature on my part to post without first researching the subject a little more. I've changed my opinion on the plausibility of Euclideon's software, and think it is very possible to do what they claim, given the resources they're working with.


A bit off-topic: how do these people even get those high-quality 3D scans of the world? Even the inside of that half-built building was visible.

http://en.wikipedia.org/wiki/Lidar

 

They usually scan from various directions and combine the data to eliminate any blind spots. How they are able to determine where the points are in 3D space after moving around with the scanner is beyond me, so I'm just going to assume it's magic.


"Windows 10 doesn't only include spyware, it is designed as spyware" -- Gaius Publius, The Big Picture RT Interview

"[...] we will access, disclose and preserve personal data, including your content (such as the content of your emails, other private communications or files in private folders), when we have a good faith belief that doing so is necessary" -- Windows 10 Privacy Statement


#28 Servant of the Lord   Crossbones+   -  Reputation: 32510

Posted 22 October 2013 - 10:28 AM

They usually scan from various directions and combine the data to eliminate any blind spots. How they are able to determine where the points are in 3D space after moving around with the scanner is beyond me, so I'm just going to assume it's magic.

 

GPS on the scanner, maybe? We can get it down to within a few centimeters of accuracy now (with extra calculations; raw GPS accuracy is still measured in meters). Then get the relative position of the points by calculating the amount of time it takes for the rays to bounce back (or, as Wikipedia puts it: "measures distance by illuminating a target with a laser and analyzing the reflected light").
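The "time it takes for the rays to bounce back" part is just a time-of-flight calculation; a minimal sketch of that step (generic physics, not any particular scanner's firmware):

```python
# Lidar range from the pulse's round-trip time: range = c * t / 2.

C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(seconds: float) -> float:
    """Distance to the target given the laser pulse's round-trip time."""
    return C * seconds / 2.0

# A return arriving 1 microsecond after the pulse left puts the target ~150 m away.
print(range_from_round_trip(1e-6))  # ~149.9 m

# With the scanner's own position and the beam direction known (e.g. from GPS/IMU),
# the 3D point is then scanner_position + beam_direction * range.
```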

 

Or, have your own local "GPS", by triangulating the position of the scanner against two or more stationary positions (like a parked truck with computer equipment in it, or local cell phone towers). Just hypothesizing.


It's perfectly fine to abbreviate my username to 'Servant' or 'SotL' rather than copy+pasting it all the time.

All glory be to the Man at the right hand... On David's throne the King will reign, and the Government will rest upon His shoulders. All the earth will see the salvation of God.
Of Stranger Flames - [indie turn-based rpg set in a para-historical French colony] | Indie RPG development journal | Fly with me on Twitter


#29 szecs   Members   -  Reputation: 2713

Posted 22 October 2013 - 12:08 PM

There are algorithms that can convert even regular camera shots to 3D data (some algorithms are from the 60's or 70's). The key is that you don't even have to know the "depth maps" of the images.

 

If I recall correctly, it's possible to reconstruct the 3D scene even without knowing the exact positions of the cameras. The only constraint is that the scene has to be lit exactly the same when taking the pictures (so you have to use multiple cameras shooting at the same time), so that the algorithm can detect (with some magic, or maybe with human guidance?) the position of the same point in all images (where it isn't obscured, obviously).

 

This is no magic; human depth perception works similarly, with only image data (no depth) from two viewpoints.
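For the two-viewpoint case there is a textbook formula: with a rectified camera pair of focal length f (in pixels) and baseline B (in metres), a feature matched at horizontal disparity d (in pixels) lies at depth Z = f * B / d. A minimal sketch of just that step (the hard part, finding the matching points, is left out):

```python
# Textbook stereo depth: Z = f * B / d for a rectified camera pair.

def stereo_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a feature seen at horizontal disparity `disparity_px` pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero means the point is at infinity)")
    return f_px * baseline_m / disparity_px

# e.g. f = 1000 px, cameras 0.5 m apart, feature shifted 20 px between the images:
print(stereo_depth(1000.0, 0.5, 20.0))  # 25.0 m
```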


Edited by szecs, 22 October 2013 - 12:11 PM.


#30 samoth   Crossbones+   -  Reputation: 8333

Posted 22 October 2013 - 01:21 PM

 

They usually scan from various directions and combine the data to eliminate any blind spots. How they are able to determine where the points are in 3D space after moving around with the scanner is beyond me, so I'm just going to assume it's magic.

 

GPS on the scanner, maybe? We can get it down to within a few centimeters of accuracy now (with extra calculations. Raw GPS accuracy is still measured in meters). Then get the relative position of the points by calculating the amount of time it takes for the rays to bounce back (or, as Wikipedia puts it: "measures distance by illuminating a target with a laser and analyzing the reflected light.").

 

Or, have your own local "GPS", by triangulating the position of the scanner against two or more stationary positions (like a parked truck with computer equipment in it, or local cell phone towers). Just hypothesizing.

 

Eh, GPS is like 15 meters, no matter what (except if you're military, then it's slightly better).

 

Though EGNOS (more or less identical to WAAS) claims 7 meters, I've never seen anything better than 10 meters, and although differential GPS claims 10cm, I've never seen anything better than 2 meters (owning devices capable of both).

 

I'm inclined to believe this data is just a smoke-and-mirrors show. They probably got a reasonably high-res satellite scan from somewhere, covering just a large enough area that the demo looks stunning, and then spent 6 months and a team of 20 artists filling in the detail (in the selected areas where the camera zooms in close).



#31 Hodgman   Moderators   -  Reputation: 48371

Posted 22 October 2013 - 06:26 PM

If you've got multiple 2d images with shared features, you can stitch them into a panorama. Same with overlapping 3d scans -- no need for GPS, just line up the shared details (either automatically, or by hand).

And yes, collection of photos / video walk-through -> point cloud is a solved problem.
Once we were sent footage of an industrial area and told to recreate the footage as an in-game cutscene - we started by converting the footage to a point cloud of that area, before modeling the buildings... And that was with cheap software and unannotated footage.
If you know the motion of the camera and have a bigger budget, you can do much better. See http://en.m.wikipedia.org/wiki/Structure_from_motion
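A minimal sketch of the "line up the shared details" step, assuming you already have a handful of corresponding points picked out in two overlapping scans: the rigid rotation and translation that best aligns them is the standard Kabsch/Procrustes solution (generic technique, not anyone's product code):

```python
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t such that R @ src[i] + t ~ dst[i].

    src, dst: (N, 3) arrays of corresponding points from two overlapping scans.
    Standard Kabsch/Procrustes solution via SVD.
    """
    src_centroid, dst_centroid = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_centroid).T @ (dst - dst_centroid)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_centroid - R @ src_centroid
    return R, t

# Usage: pick (or auto-detect) the same features in scan A and scan B, solve for
# (R, t), then bring all of scan A into scan B's frame: aligned = scan_A @ R.T + t
```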

I'm inclined to believe this data is just a smoke-and-mirrors show. They probably got a reasonably high-res satellite scan from somewhere, covering just a large enough area that the demo looks stunning, and then spent 6 months and a team of 20 artists filling in the detail (in the selected areas where the camera zooms in close).

But their product is about data visualization, not data acquisition. They're not even showing off their own data, they're using clients' data (which, yes, could be acquired in many ways). To supplement an aerial scan it would be much cheaper for that client to drive a van around the city for a few days than to hire artists...

They're selling it to people who have already acquired data, or people in the acquisition business. There are a lot of companies in the GIS business who will charge you big bucks to deliver exactly this kind of data. They're not the ones on trial here ;-P
The company showing off the detailed 3D construction site is unrelated to Euclideon. They already offer this service, and that visualization is using triangulated meshes, not Unlimited Detail. There's no point in pretending they can do change detection with faking-via-artists if they can't really capture that data -- it's just a lawsuit waiting to happen when they can't deliver on contractual obligations. One thing's for sure though - these kinds of services aren't cheap!

LIDAR flyovers can do sub-metre resolution, but military imaging radar can produce point clouds with a resolution of about 2mm, which is enough to analyze specific tyre tracks in dirt, or do change detection on said dirt to tell which tank moved where by following the tracks (I worked on a simulator for them once). And they put them on high-altitude flights, satellites and the space shuttle for fun...

#32 Outthink The Room   Members   -  Reputation: 1288

Posted 31 October 2013 - 05:26 AM

Hodgman, AeroMetrex doesn't only use meshes for that tool they built. Also, it isn't LiDAR; they use photogrammetry techniques.

 

AeroMetrex built that mesh tool to correct any irregularities with the data-set, making sure that when it's converted to Euclideon's format, it's an entirely clean mesh. That's why AeroPro 3D gives them a ton of flexibility compared to most solutions. For those who are curious how data-sets like this are made, here are a couple of videos.

 

The first video shows a company scanning the Himalayas and creating one large dense point cloud using thousands of pictures.

 

 

The second video shows off a product called PhotoScan by the company Agisoft. It's a photography-to-3D data-set converter. I used this company as an example because their product ranges from landscape/surveying all the way down to smaller, individual assets, and also because of how it segues into the third example.

 

 

The third video is the Fox Engine demo from GDC. They are actually using Agisoft's PhotoScan to create scanned assets. If you jump to the 27:30 mark in the video, that's when they start discussing PhotoScan. They give a few examples and a small glimpse into how they're approaching this.

 

 

Kojima Studios is obviously having to retopologize the 3D model once they convert to polygons, but it gives a clear indication of what's possible if there isn't a polygon budget or a limit on texture resolution. They also show how quickly an asset can be re-textured from individual photos.



#33 Madhed   Crossbones+   -  Reputation: 4089

Posted 31 October 2013 - 06:46 AM

These new videos look pretty good, and they seem to have actually found an application for their technology. I say good for them.

But with the way they tried to pitch their product in the past with ridiculous claims (Unlimited detail! Polygons are dead!), they have lost all credibility for me.

To me it just looks like a company trying to find gullible investors to make cash as quickly as possible.



#34 laztrezort   Members   -  Reputation: 1058

Posted 31 October 2013 - 05:05 PM

 

 

They usually scan from various directions and combine the data to eliminate any blind spots. How they are able to determine where the points are in 3D space after moving around with the scanner is beyond me, so I'm just going to assume it's magic.

 

GPS on the scanner, maybe? We can get it down to within a few centimeters of accuracy now (with extra calculations. Raw GPS accuracy is still measured in meters). Then get the relative position of the points by calculating the amount of time it takes for the rays to bounce back (or, as Wikipedia puts it: "measures distance by illuminating a target with a laser and analyzing the reflected light.").

 

Or, have your own local "GPS", by triangulating the position of the scanner against two or more stationary positions (like a parked truck with computer equipment in it, or local cell phone towers). Just hypothesizing.

 

Eh, GPS is like 15 meters, no matter what (except if you're military, then it's slightly better).

 

Though EGNOS (more or less identical to WAAS) claims 7 meters, I've never seen anything better than 10 meters, and although differential GPS claims 10cm, I've never seen anything better than 2 meters (owning devices capable of both).

 

I'm inclined to believe this data is just a smoke-and-mirrors show. They probably got a reasonably high-res satellite scan from somewhere, covering just a large enough area that the demo looks stunning, and then spent 6 months and a team of 20 artists filling in the detail (in the selected areas where the camera zooms in close).

 

 

Off-topic, but for anyone interested:

 

There are "tricks" where you can get reliable and repeatable GPS accuracy to a tenth of a cm, basically in real-time, in 3D.  This usually involves receiving a correction signal through cellular or radio.  Keep in mind, though, that these are not the same as the GPS receivers in phones - they are far more expensive and bulky.

 

For scanners and the software, while I've never had a chance to use one (yet), I have seen some demonstrations and sales pitches. One of the selling points of scanning software seems to be its ability to automatically stitch data together (and weed out a huge amount of redundant data). A vast amount of the data is actually useless - e.g., LIDAR fly-overs shoot a massive number of points over foliage, and the software removes everything that hits the canopy, keeping only the shots that happen to make it through to the ground.
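A toy version of that canopy-removal step, assuming the crudest possible heuristic (keep only the lowest return in each grid cell); real packages use far more sophisticated ground classification, so treat this strictly as an illustration:

```python
import numpy as np

def lowest_return_per_cell(points: np.ndarray, cell_size: float = 1.0) -> np.ndarray:
    """Keep only the lowest (minimum-z) lidar return in each XY grid cell.

    points: (N, 3) array of (x, y, z) returns. Canopy hits are discarded
    wherever a lower return exists in the same cell.
    """
    cells = np.floor(points[:, :2] / cell_size).astype(np.int64)
    lowest = {}                                  # (cell_x, cell_y) -> index of lowest point so far
    for i, cell in enumerate(map(tuple, cells)):
        j = lowest.get(cell)
        if j is None or points[i, 2] < points[j, 2]:
            lowest[cell] = i
    return points[sorted(lowest.values())]
```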

 

Also, in the demonstration I saw, they took color photos of each scene and projected those images onto the produced mesh to create a fairly convincing colorized model of the scene.





