The real question is whether it will be used much in games.
No. They've been trying to push it on us for over a decade already.
When we review it and explain why it's not suitable, they stick their fingers in their ears and actually make up conspiracy theories as to why they're being "suppressed"...
I still have doubts but admit that they seem to have staying power.
As above, they've been trying to market it to games companies for a decade, unsuccessfully, because it's not a good fit for the majority of games (real-time shading, dynamic geometry, etc.), and because they've, for whatever reason, refused to bring an SDK to the general market, instead choosing to sell it only through face-to-face negotiations at big, big prices.
However, they did manage to net themselves a $2M "commercialization" grant from the Australian government, which they've used to re-brand as "Euclideon" and hire a board of directors and a bunch more staff (including some actual game-dev veterans). Since then, they've (sensibly) shifted their focus from games to geospatial, where some point-cloud innovation is actually needed.
It seems to be well received by the geospatial industry, which is nice after they finally gave up trying to force it into games where it didn't fit.
If you look at the latest Atomontage video (can't find the link ATM), he's doing similar work with geospatial datasets, at similar quality (though with a completely different rendering technique).
BTW, this tech was shown off back in May (though with Captain Patronizing as the narrator).
I see nothing in the video that couldn't have been trivially faked.
They don't need to fake it, you can buy their product already, though probably at an incredibly high price :/
Mining companies with their budgets, and all...
They're showing off a real product with real testimonials -- but that's not why it's controversial. It's controversial because they continually make misleading or false statements about their own tech, and especially about competing tech.
Even though their tech is real, this practice has ironically reduced them to being snake oil salesmen, and a joke in the games industry.
In the past they've deliberately conflated 3rd-party toolchains (which work the same with triangles, points, or voxels) with their own tech, in a clumsy attempt to confuse the viewer. They've also taken detailed 3D scans (which are triangulated, and would need to be voxelized to work with their own tech) and compared them against low-poly triangulated equivalents to show that triangles are teh dumb.
If you take Bruce Dell's words literally, he's actually claimed in the past to have invented O(1) search on infinite data sets, or infinite compression... and he wonders why people don't take him seriously. If he just spoke plainly and honestly about his tech, or ever addressed other developers (instead of talking down to, and misleading, the tech-clueless public), he wouldn't have anywhere near as much animosity directed at him. None of his videos pitch the tech to other tech people; all of them show off to the general public and mislead them.
Even in this video, they claim to have "solved all problems relating to working with...". That's a bit of a ridiculous exaggeration.
What if I want to view the scan of that city at a different time of day? How can I do that when their rendering tech doesn't allow for any real-time shading, and only supports baked shading? What if I want a surface normal? What if I need many attributes per point, such as specular channels? What if I want to drop in a simulated vehicle and drive over that scanned terrain? What if I want to use an anti-aliasing technique other than super-sampling? They also deride the "low resolution block" technique, but then later slyly explain that they themselves use it when streaming bandwidth isn't sufficient.
Again, their exaggerations and patronising tone are what's isolating them, IMO.
Those questions above are why game devs haven't been interested in their tech, along with its inability to render anything other than static environments (a skinned character, for instance)...
I have seen large scientific datasets that are multiple terabytes of raw volumetric data. I don't care how good the rendering engine is; they're not going to handle that in real time on today's hardware unless they have a fairly expensive computing cluster.
Or unless you don't actually touch 99% of the data. They'll have a ton of redundancy in their data, having many copies of the scene at different LODs.
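To make that concrete, here's a minimal sketch of the "many copies at different LODs" idea: coarser copies of a point set built by snapping points to ever-larger grid cells and keeping one representative per cell. This is purely illustrative (the function name and grid scheme are my own, not Euclideon's actual format); the point is that a distant view only has to touch a tiny coarse level, leaving the bulk of the raw data untouched.

```python
import math

def build_lod_pyramid(points, base_cell=1.0, levels=4):
    """Build coarser copies of a point set (hypothetical LOD sketch).

    Each level doubles the grid cell size and keeps one representative
    point per occupied cell, so each level holds far fewer points.
    """
    pyramid = [list(points)]  # level 0: the full-resolution data
    for lvl in range(1, levels):
        cell = base_cell * (2 ** lvl)  # cell size doubles per level
        seen = {}
        for p in pyramid[0]:
            # Snap to the coarse grid; first point in a cell wins.
            key = tuple(math.floor(c / cell) for c in p)
            seen.setdefault(key, p)
        pyramid.append(list(seen.values()))
    return pyramid
```

For a distant camera you'd stream only a high (coarse) level; the storage redundancy buys you the ability to skip almost all of the raw data at render time.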
AFAIK, it's heavily palettized, and uses spatial hashing and simple line-drawing algorithms.
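For anyone unfamiliar with those terms, here's a toy sketch of palettization plus spatial hashing for voxel data: unique colors are stored once in a palette, and each voxel stores only a small palette index, keyed by a hash of its integer coordinates. This is my own illustration of the general technique, not their actual data format.

```python
def palettize_voxels(voxels):
    """Store voxels as a shared color palette plus per-voxel indices.

    `voxels` is an iterable of ((x, y, z), color) pairs. The dict keyed
    on integer coordinates acts as the spatial hash; identical colors
    are deduplicated into `palette` and referenced by index.
    """
    palette = []       # each unique color stored exactly once
    color_index = {}   # color -> its index in the palette
    grid = {}          # spatial hash: (x, y, z) -> palette index
    for (x, y, z), color in voxels:
        if color not in color_index:
            color_index[color] = len(palette)
            palette.append(color)
        grid[(x, y, z)] = color_index[color]
    return palette, grid
```

With scan data, where huge runs of points share near-identical colors, this kind of indexing is where the big compression wins come from.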