3Ddreamer

Euclideon Geoverse - Latest Video Calms More Critics

30 posts in this topic

Companies have confirmed it doesn't load the entire data-set into RAM.

 

 

 

Obviously :D

 

A bit off-topic: how do these people even get those high-quality 3D scans of the world? Even the inside of that half-built building was visible.


@ Outthink The Room - It was premature of me to post without first researching the subject a little more. I've changed my opinion on the plausibility of Euclideon's software, and I think it is very possible to do what they claim, given the resources they're working with.


A bit off-topic: how do these people even get those high-quality 3D scans of the world? Even the inside of that half-built building was visible.

http://en.wikipedia.org/wiki/Lidar

 

They usually scan from various directions and combine the data to eliminate any blind spots. How they are able to determine where the points are in 3D space after moving around with the scanner is beyond me, so I'm just going to assume it's magic.


They usually scan from various directions and combine the data to eliminate any blind spots. How they are able to determine where the points are in 3D space after moving around with the scanner is beyond me, so I'm just going to assume it's magic.

 

GPS on the scanner, maybe? We can get it down to within a few centimeters of accuracy now (with extra corrections; raw GPS accuracy is still measured in meters). Then get the relative positions of the points by timing how long the laser pulses take to bounce back (or, as Wikipedia puts it: "measures distance by illuminating a target with a laser and analyzing the reflected light").

 

Or, have your own local "GPS", by triangulating the position of the scanner against two or more stationary positions (like a parked truck with computer equipment in it, or local cell phone towers). Just hypothesizing.
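To make the time-of-flight idea above concrete, here's a toy sketch (not any vendor's actual pipeline; all function names and numbers are invented): the round-trip time of the pulse gives a range, and the range plus the scanner's position and beam angles give a world-space point.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_time_of_flight(round_trip_s):
    """The pulse travels out and back, so the distance is half the path."""
    return C * round_trip_s / 2.0

def world_point(scanner_xyz, azimuth_rad, elevation_rad, round_trip_s):
    """Convert one range/angle measurement into a world-space point."""
    r = range_from_time_of_flight(round_trip_s)
    sx, sy, sz = scanner_xyz
    # Spherical-to-Cartesian offset from the scanner's position.
    dx = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    dy = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    dz = r * math.sin(elevation_rad)
    return (sx + dx, sy + dy, sz + dz)

# A target 150 m straight ahead gives a round trip of about a microsecond:
p = world_point((0.0, 0.0, 2.0), 0.0, 0.0, 2 * 150.0 / C)
```

Positioning the scanner itself (via GPS, surveyed markers, or registration against other scans) is the hard part that this sketch glosses over.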


There are algorithms that can convert even regular camera shots to 3D data (some date back to the '60s or '70s). The key is that you don't even have to know the "depth maps" of the images.

 

If I recall correctly, it's possible to reconstruct the 3D scene even without knowing the exact positions of the cameras. The only constraint is that the scene has to be lit exactly the same in all the pictures (so you have to use multiple cameras shooting at the same time), so that the algorithm can detect (with some magic, or maybe with human guidance?) the position of the same point in all the images (where it isn't obscured, obviously).

 

This is no magic; human depth perception works similarly, with only image data (no depth) from two viewpoints.
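As a deliberately simplified illustration of that two-viewpoint idea, here is a 2D version (the function name is made up): two cameras at known positions each measure the bearing angle to the same feature, and intersecting the two rays recovers its position. Real structure-from-motion solves this jointly for thousands of features and for the unknown camera poses.

```python
import math

def triangulate_2d(cam_a_x, angle_a, cam_b_x, angle_b):
    """Cameras sit at (cam_x, 0); angles are measured from the x-axis.
    Ray i is the line y = tan(angle_i) * (x - cam_i_x); intersect the two."""
    ta, tb = math.tan(angle_a), math.tan(angle_b)
    x = (ta * cam_a_x - tb * cam_b_x) / (ta - tb)
    y = ta * (x - cam_a_x)
    return x, y

# A feature at (2, 4) seen from cameras at x=0 and x=1: each camera only
# measures a direction, yet the intersection recovers the full position.
angle_a = math.atan2(4.0, 2.0 - 0.0)
angle_b = math.atan2(4.0, 2.0 - 1.0)
x, y = triangulate_2d(0.0, angle_a, 1.0, angle_b)  # ≈ (2.0, 4.0)
```

The wider the camera baseline, the less sensitive the result is to small angle errors, which is why depth perception degrades for distant objects.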

Edited by szecs

 


GPS on the scanner, maybe? We can get it down to within a few centimeters of accuracy now (with extra calculations. Raw GPS accuracy is still measured in meters). Then get the relative position of the points by calculating the amount of time it takes for the rays to bounce back (or, as Wikipedia puts it: "measures distance by illuminating a target with a laser and analyzing the reflected light.").

 


Eh, GPS is like 15 meters, no matter what (except if you're military, then it's slightly better).

 

Though EGNOS (more or less identical to WAAS) claims 7 meters, I've never seen anything better than 10 meters; and although differential GPS claims 10 cm, I've never seen anything better than 2 meters (owning devices capable of both).

 

I'm inclined to believe this data is just a smoke-and-mirrors show. They probably got a reasonably high-res satellite scan from somewhere, covering just a large enough area that the demo looks stunning, and then they spent 6 months and a team of 20 artists filling in the detail (in the selected areas where the camera zooms in close).

If you've got multiple 2D images with shared features, you can stitch them into a panorama. Same with overlapping 3D scans -- no need for GPS, just line up the shared details (either automatically, or by hand).

And yes, collection of photos / video walk-through -> point cloud is a solved problem.
Once we were sent footage of an industrial area and told to recreate it as an in-game cutscene - we started by converting the footage to a point cloud of that area before modeling the buildings... And that was with cheap software and unannotated footage.
If you know the motion of the camera and have a bigger budget, you can do much better. See http://en.m.wikipedia.org/wiki/Structure_from_motion
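"Line up the shared details" can be sketched in its simplest form (a made-up example, translation only): given matched feature pairs between two scans, the average offset tells you how to shift one scan onto the other. Real registration algorithms such as ICP also estimate rotation and iterate on the matching.

```python
# Toy registration sketch: recover the rigid shift between two scans from
# matched features. Names and data are invented for illustration.
def estimate_translation(pairs):
    """pairs: list of ((ax, ay, az), (bx, by, bz)) matched features.
    Returns the average offset that maps scan A onto scan B."""
    n = len(pairs)
    return tuple(sum(b[i] - a[i] for a, b in pairs) / n for i in range(3))

scan_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 1.0)]
offset = (5.0, -1.0, 0.5)  # unknown in practice; recovered from the matches
scan_b = [tuple(p[i] + offset[i] for i in range(3)) for p in scan_a]
t = estimate_translation(list(zip(scan_a, scan_b)))  # → (5.0, -1.0, 0.5)
```

Averaging over many matches also suppresses per-feature measurement noise, which is why overlap between scans is so valuable.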

I'm inclined to believe this data is just a smoke-and-mirrors show. They probably got a reasonably high-res satellite scan from somewhere, covering just a large enough area that the demo looks stunning, and then they spent 6 months and a team of 20 artists filling in the detail (in the selected areas where the camera zooms in close).

But their product is about data visualization, not data acquisition. They're not even showing off their own data; they're using clients' data (which, yes, could be acquired in many ways). To supplement an aerial scan, it would be much cheaper for that client to drive a van around the city for a few days than to hire artists...

They're selling it to people who have already acquired data, or people in the acquisition business. There are a lot of companies in the GIS business who will charge you big bucks to deliver exactly this kind of data. They're not the ones on trial here ;-P
The company showing off the detailed 3D construction site is unrelated to Euclideon. They already offer this service, and that visualization is using triangulated meshes, not Unlimited Detail. There's no point in pretending they can do change detection by faking it with artists if they can't really capture that data -- it's just a lawsuit waiting to happen when they can't deliver on contractual obligations. One thing's for sure though - these kinds of services aren't cheap!

LIDAR flyovers can do sub-metre resolution, but military imaging radar can produce point clouds with a resolution of about 2mm, which is enough to analyze specific tyre tracks in dirt, or do change detection on said dirt to tell which tank moved where by following the tracks (I worked on a simulator for them once). And they put them on high-altitude flights, satellites and the space shuttle for fun...
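Change detection of that sort can be sketched in a deliberately crude, grid-based form (everything below is invented for illustration): rasterize each scan into a per-cell height map, then flag cells whose height changed by more than a threshold between the two passes.

```python
# Toy change-detection sketch over two point clouds of the same area.
def height_map(points, cell=1.0):
    """Highest z per horizontal grid cell."""
    hm = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        hm[key] = max(hm.get(key, float("-inf")), z)
    return hm

def changed_cells(before, after, cell=1.0, threshold=0.05):
    """Cells observed in both passes whose height moved past the threshold."""
    a, b = height_map(before, cell), height_map(after, cell)
    return [k for k in a.keys() & b.keys() if abs(a[k] - b[k]) > threshold]

# A 2 m tall object in cell (0, 0) disappears between the two passes:
before = [(0.5, 0.5, 2.0), (1.5, 0.5, 0.0)]
after = [(0.5, 0.5, 0.0), (1.5, 0.5, 0.0)]
```

The usable cell size and threshold are bounded by the scan resolution, which is why the millimetre-scale radar data mentioned above enables much finer analysis than a sub-metre flyover.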

Hodgman, AeroMetrex doesn't only use meshes for that tool they built. Also, it isn't LiDAR; they use photogrammetry techniques.

 

AeroMetrex built that mesh tool to correct any irregularities in the data-set, making sure that when it's converted to Euclideon's format, it's an entirely clean mesh. That's why AeroPro 3D gives them a ton of flexibility compared to most solutions. For those curious how data-sets like this are made, here are a couple of videos.

 

The first video shows a company scanning the Himalayas and creating one large dense point cloud using thousands of pictures.

 

http://www.youtube.com/watch?v=mEs3euTgj8s

 

The second video shows off a product called PhotoScan by the company Agisoft. It's a photography-to-3D data-set converter. I used this company as an example because their product range covers everything from landscape surveying all the way down to smaller individual assets, and because of how it segues into the third example.

 

http://www.youtube.com/watch?v=BXC0q40Vkww

 

The third video is the Fox Engine demo from GDC. They are actually using Agisoft's software to create their scanned assets. If you jump to the 27:30 mark in the video, that's when they start discussing PhotoScan. They give a few examples and offer a brief glimpse into how they're approaching this.

 

http://www.youtube.com/watch?v=FQMbxzTUuSg

 

Kojima Studios is obviously having to retopologize the 3D models once they convert them to polygons, but it gives a clear indication of what's possible when there's no polygon count or texture resolution limitation. They also show how quickly an asset can be re-textured from individual pictures.


These new videos look pretty good, and they seem to have actually found an application for their technology. I say good for them.

But with the way they tried to pitch their product in the past with ridiculous claims (Unlimited detail! Polygons are dead!), they have lost all credibility for me.

To me it just looks like a company trying to find gullible investors to make cash as quickly as possible.


 

 


Off-topic, but for anyone interested:

 

There are "tricks" where you can get reliable and repeatable GPS accuracy to a tenth of a cm, basically in real time, in 3D. This usually involves receiving a correction signal through cellular or radio. Keep in mind, though, that these are not the same as the GPS receivers in phones; they are far more expensive and bulky.

 

For scanners and the software, while I've never had a chance to use one (yet), I have seen some demonstrations and sales pitches. One of the selling points of scanning software seems to be its ability to automatically stitch data together (and weed out a huge amount of redundant data). A vast amount of the data is actually useless - e.g., LIDAR fly-overs shoot a massive number of points over foliage, and the software removes everything that hits the canopy, keeping only the shots that happen to make it through to the ground.
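That canopy-filtering step can be sketched as "keep the lowest return per grid cell" (a crude toy version; real point classifiers are far more sophisticated, and the data here is invented):

```python
# Toy ground filter: bin points into a horizontal grid and keep only the
# lowest return per cell, discarding the shots that stopped in the foliage.
def ground_filter(points, cell=1.0):
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in lowest or z < lowest[key][2]:
            lowest[key] = (x, y, z)
    return list(lowest.values())

points = [(0.2, 0.3, 12.0),   # canopy hit
          (0.4, 0.6, 0.1),    # ground hit in the same cell -> kept
          (1.5, 0.5, 11.5)]   # canopy-only cell -> best available return
filtered = ground_filter(points)
```

This only works because a flyover fires enough pulses that some make it through gaps in the canopy; cells with no ground hit at all keep their lowest (canopy) return.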

 

Also, in the demonstration I saw, they took color photos of each scene and projected those images onto the produced mesh to create a fairly convincing colorized model of the scene.

