I did a little math earlier today to see what I could find out, and I hope you'll find the answer as interesting as I did (or I've made a fool of myself).
First off, the computer he runs the demo on has 8 GB of memory, the resolution is 64 voxels per mm^3 (4*4*4), I estimate the base of each block to be 1 m^2, and let's assume that color is stored as a single byte (either through compression or by being palettized, which could actually even be the case). Since octrees are used, we very loosely assume that memory consumption doubles because of octree node overhead, and that the shell of each block can be approximated by taking the 1 m^2 base, multiplying by 6 for each side of the new 3D block, and then multiplying by 2 because the sides obviously aren't flat but have a rugged surface. (Yes, some of these are estimates; some may be high, some may be low, and a factor may be missing, but assume for now that it balances out.)
8 GB = 8 * 1024 * 1024 * 1024 = 8589934592 bytes
sqrt(8589934592) ≈ 92681 (side length in voxels if all of memory were one flat square)
92681 / 4 / 1000 ≈ 23 m (4 from 4x4x4 = 64 voxels per mm^3, 1000 mm per meter)
23 * 23 = 529 blocks of 1 m^2
529 / 6 / 2 ≈ 44 blocks (converting from flat 2D to a 3D shell: 6 sides, x2 for ruggedness)
44 / 2 = 22 blocks (compensating for the octree overhead)
= 22 blocks (UD shows 24 blocks)
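For anyone who wants to check my arithmetic, here's the whole estimate as a few lines of Python, with the same assumptions stated above (8 GB, 4x4x4 voxels per mm^3, 1 byte per voxel, x6 faces, x2 ruggedness, x2 octree overhead):

```python
import math

total_bytes = 8 * 1024 ** 3                # 8 GB of memory
side_voxels = math.isqrt(total_bytes)      # memory laid out as one flat square of 1-byte voxels
side_meters = side_voxels / 4 / 1000       # 4 voxels per mm, 1000 mm per meter -> ~23 m
flat_blocks = side_meters ** 2             # ~529 one-square-meter plots
shell_blocks = flat_blocks / 6 / 2         # 6 faces per 3D block, x2 for rugged surface
final_blocks = shell_blocks / 2            # x2 octree node overhead
print(round(final_blocks))                 # -> 22 (UD shows 24)
```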
Now, there are a bunch of approximations and guesses here... but the fact that I even came within an order of magnitude of the 24 blocks UD actually shows in their demo says to me that they have indeed not made any significant progress, and even if I've made an error, it apparently balances out. They might not have made anything at all beyond, possibly, some optimizations to the SVO algorithm. Please correct me if I've made a serious mistake somewhere, but again, even if my calculation had come out to 2 or 200 (which would be bad for UD), it would still mean that they are flat-out lying: memory consumption is most definitely an issue they haven't solved, not even in the slightest.
I'm not saying you're incorrect, but it's possible to do quite a lot better than that once you take into account recursive instancing.
Say you're right about each block of land being 1 meter on a side. If you were to fully populate the tree at that granularity, you'd get those results (or similar, since it's an estimate). But now, imagine that instead of fully populating the tree, you create a group of 100 of those blocks, 10 meters on a side, and then instance that over the entire world. Your tree just references that one block of 100 ground plots rather than duplicating them, so you've reduced the storage requirement by a factor of approximately 100.
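Here's a toy sketch of that argument (the sizes are made up for illustration): a world grid that stores references to one shared tile instead of duplicating the voxel data per plot.

```python
# Hypothetical sizes: one 10 m x 10 m tile's voxel data as an opaque blob.
unique_tile = bytearray(100_000)

# A 100 x 100 grid of tiles, every cell referencing the SAME object.
world = [[unique_tile for _ in range(100)] for _ in range(100)]
assert all(cell is unique_tile for row in world for cell in row)

naive_cost = 100 * 100 * len(unique_tile)  # cost if every plot were duplicated
actual_cost = len(unique_tile)             # stored once, referenced 10,000 times
print(naive_cost // actual_cost)           # -> 10000x reduction in unique data
```

In a real SVO this is just letting multiple parent nodes point at the same child subtree; the tile count and blob size here are arbitrary.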
There's no limit to how far you can take this. The Sierpinski pyramid is an excellent example: you can describe that whole world, at arbitrary scale, with a simple recursive function. The only unique data storage required for that demo is the model of the pink monster thingy.
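To make "a simple recursive function" concrete, here's one possible sketch: a membership test that decides whether a voxel lies inside a Sierpinski pyramid of any size, with no stored geometry at all (the function name and octant layout are my own choices, not anything from UD):

```python
def in_sierpinski(x: int, y: int, z: int, depth: int) -> bool:
    """Does voxel (x, y, z) lie inside a Sierpinski pyramid of side 2**depth?
    The shape is defined entirely by the recursion; nothing is stored."""
    if depth == 0:
        return True
    half = 1 << (depth - 1)
    octant = (x >= half, y >= half, z >= half)
    # Only 4 of the 8 octants are filled: the base corner plus its 3 neighbors.
    if octant not in {(False, False, False), (True, False, False),
                      (False, True, False), (False, False, True)}:
        return False
    return in_sierpinski(x % half, y % half, z % half, depth - 1)

# At subdivision depth d the pyramid contains 4**d filled cells, yet the only
# "storage" is the function itself:
filled = sum(in_sierpinski(x, y, z, 3)
             for x in range(8) for y in range(8) for z in range(8))
print(filled)  # -> 64 = 4**3
```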
As someone mentioned earlier, the storage requirement is more appropriately measured by the entropy of the world (how much unique stuff there is, including relative placement). The repetitive nature of the demo suggests very little of that, and thus very little actual storage requirement.