Anybody doing real metaverse stuff?


frob said:
Many people have already got existing worlds going.

You can do a really nice-looking 3D world today. But you have to construct it carefully so as not to overload the server or client. That's typically done by level designers and artists using the Unreal or Unity tools, working within a triangle budget. Careful attention is paid to occlusion, so that you don't have long lines of sight into highly detailed content that doesn't impostor well. There's a huge amount of asset preparation and optimization during game development.

In a world with user-created content, you don't have that phase, where people look at the whole level and polish it. You can tighten down the limits so hard that nobody can overload the system; the result looks cartoonish, like Meta/Facebook Horizon. Or you can be less restrictive and accept low frame rates to get high visual quality; that's Second Life and Open Simulator. Or you can go with a voxel-based system, like Roblox and Dual Universe, but user-created objects look kind of jaggy, and you usually don't let the users create new primitive objects.

That's where we are now. What I want is to get past that, so that user-created worlds look like AAA titles and go fast. So far, nobody has both of those things going. That's the graphics problem for the metaverse.

We know roughly what to do. Automatic LOD generation, good asset download ordering, impostors, etc. All the usual stuff. But fully automated, and running server-side in the background as users add and change content, rather than in the desktop dev tools as at present.
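For a sense of what “fully automated, server-side” could look like: a minimal Rust sketch of a background worker that re-optimizes assets as users change them. Everything here is hypothetical (Asset, generate_lods), and the decimation is a stub; real edge-collapse simplification, impostor baking, and download-ordering would go where the placeholders are.

// Hypothetical sketch: a server-side worker that re-optimizes assets as
// users add and change them, instead of doing it in desktop dev tools.
use std::sync::mpsc;
use std::thread;

struct Asset {
    id: u64,
    triangles: Vec<[f32; 9]>, // three [x, y, z] vertices per triangle
}

// Stub: produce progressively coarser LOD levels for one asset.
fn generate_lods(asset: &Asset, levels: usize) -> Vec<Asset> {
    (1..=levels)
        .map(|l| Asset {
            id: asset.id,
            // Placeholder: real decimation (edge collapse etc.) goes here.
            triangles: asset.triangles.iter().step_by(1 << l).cloned().collect(),
        })
        .collect()
}

fn main() {
    let (tx, rx) = mpsc::channel::<Asset>();

    // Background worker: picks up edited assets and rebuilds their LOD chain.
    let worker = thread::spawn(move || {
        for asset in rx {
            let lods = generate_lods(&asset, 4);
            println!("asset {}: built {} LOD levels", asset.id, lods.len());
        }
    });

    // A user edit arrives; queue the asset for re-optimization.
    tx.send(Asset { id: 42, triangles: vec![[0.0; 9]; 1024] }).unwrap();
    drop(tx); // close the channel so the worker exits
    worker.join().unwrap();
}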


First, please let me apologize for my extremely negative views, guys. That's not motivating.
I realize I'm just one of those people afraid of an always-on virtual distraction.
And I also realize this is not about offering people a certain experience of entertainment or challenge, like games do.
It seems the primary goal is to create some virtual space people can share. What they do there is up to them, and it doesn't need to run Crysis to sell it, I guess.

What should I say… it remains a dystopia to me, or a tech Sodom and Gomorrah. We'll see if it does society a favor. I rather think the Tsar's tanks will roll over our houses, and we won't even notice, due to distraction :D
My mother used to turn my Atari off when I played for too long.
I guess you'll have to deal with such resistance and concerns at an increasing rate in the future.

Nagle said:
Careful attention is paid to occlusion, so that you don't have long lines of sight into highly detailed content that doesn't impostor well. There's a huge amount of asset preparation and optimization during game development. In a world with user-created content, you don't have that phase, where people look at the whole level and polish it.

I think you underestimate the win of a LOD solution. You saw those test scenes with thousands (millions?) of instances in UE5, and it still runs fine. Because the rendered pixel count remains constant no matter what, and you have hidden-surface removal too, even unoptimized scenes should work. If such a system works, creating an artificial case to bring it to its knees would be just as hard as creating optimized scenes for current engines without (or with just some minimal) LOD.

But I don't think we'll get Nanite export plug-ins for Blender (IIRC you made that assumption). And there won't be an open standard with open-source implementations anytime soon. So far nobody else has adopted the Nanite idea at all.
So you have to use UE, or work on both the preprocessing and the runtime.

I've already seen two guys implementing a Nanite-style renderer for fun. This one is by the graphics dev of Path of Exile, AFAIK:


Interesting: he mentions a paper from 2008, proving Nanite's key idea is already old hat.

Nagle said:
In a world with user-created content, you don't have that phase, where people look at the whole level and polish it. You can tighten down the limits so hard that nobody can overload the system. The result looks cartoonish … blocky … voxel … jaggy … no new primitive objects …

So here it looks like you're saying such systems already exist, but then you invalidate them all because of personal taste.

You can pooh-pooh it all you want, that's fine. No matter how you dismiss them, look at the commercial success stories of Roblox, Minecraft, VR Chat, Second Life, Forterra, There.com, or smaller transitory worlds like all the building done in Fortnite (some are amazing structures in shared worlds), Sims 4, No Man's Sky, Rust, Ark, Valheim, Raft, and so many others.

You can always move the goalposts of what “user-generated content” is, and make it unreachable. “That's not a real metaverse because of ${arbitrary_reasons}.” Or you can look around at the enormous and growing list of games and also non-games that embrace world building as a core mechanic and already exist today. Plenty of games and social systems already successfully navigate the challenges of shared, user-generated worlds. We can always push for more, but that doesn't invalidate what we already have today.

Look at a modern AAA title. That's what gamers expect today. Then look at any system where users create much of the content. See the difference in detail and frame rate? That's what I'm talking about.

Nagle said:
That's what gamers expect today.

We might not even mean the same thing when we say “metaverse.” The “metaverse,” as imagined by most business/mass-media-focused people, is a utility for normal humans, of whom only a small fraction are “gamers” who have a “gaming PC” or who are sophisticated enough to see the difference between PS5 title graphics and Switch title graphics.

Saying “a metaverse cannot succeed because it doesn't look like a tuned PS5 or PC-master-race title” misses the point that those kinds of gamers are a small niche, and, almost definitionally, a metaverse must be for a highly inclusive mass market.

Meanwhile, people who try to build a metaverse for daily living and all-day business/work miss the point that normal humans do not feel that that experience contributes anything compared to reality in most cases. Specific meetings/presentations/sessions, and maybe even specific entertainment experiences, might be helped, but that's more like “Zoom on steroids” than it is “a new modality of life,” and the market opportunity (and thus the suitable size of investment that will actually pay back) should be scaled accordingly.

enum Bool { True, False, FileNotFound };

hplus0603 said:

That's what gamers expect today.

We might not even mean the same thing when we say “metaverse.” The “metaverse,” as imagined by most business/mass-media-focused people, is a utility for normal humans, of whom only a small fraction are “gamers” who have a “gaming PC” or who are sophisticated enough to see the difference between PS5 title graphics and Switch title graphics.

Not sure. Here's a long analysis of the market potential of the Metaverse from Citibank. There's a lot in there about NFTs, and you can mostly skip that part. Several people comment there that 3D worlds are going to stay a gamer thing, or at least a recreational thing, through at least 2030. The technology is expected to come from the game dev community, even if it has other future uses.

The popular culture version of the Metaverse is probably Ready Player One. For people who read, Snow Crash.

So, can we get back to how to actually do this stuff?

Nagle said:
So, can we get back to how to actually do this stuff?

If you have better questions, ask them.


I hate to sum up four pages of discussion this way, but here's how I see the questions so far. I'm hoping the emphasized bits (the “no specific question” notes) will explain my frustration with the discussion:


Nagle: Anybody working on metaverse stuff?

Multiple replies: Yes we are, and also here is some technology.

Nagle: Non-question about voxels, non-question about Nanite.

Multiple responses: Yes, those are technologies we have right now. They can solve some issues.

Nagle: Procedural generation isn't good enough. World sizes are not good enough. They don't let me build the thing I want to do. Content is hard. Is anybody doing it?

Multiple replies: Yes, those are real things we're doing at work, and have done at work for decades. It exists today, and is constantly improving.

Nagle: Is anybody even trying to do anything better than Second Life and IMVU? All the NFT worlds are inferior.

Multiple people: Yes. That's what companies including our employers are doing.

Nagle: General open-world MMOs aren't good enough. You can't pre-optimize. Some of them stick with low-res, some run slowly, many have high equipment requirements. (No specific questions about the tech.) I want to know the how.

Me: Ask specific questions, many have answers.

Nagle: Epic has problems that they're throwing money at. (No specific question asked.)

Multiple replies: Epic and big companies are spending money because they can. There are alternatives for the problems that have been solved, and people are choosing to spend money on even harder problems every day.

Nagle: People expect AAA. See the pretty graphics? That's what I mean! (No specific question asked.)

Reply: I do not think those words mean what you think they mean. People are already doing it for their daily work, and more people are engaging in it for their daily work, scaling based on their markets.

Nagle: Investment banking is talking about 8-13 trillion dollar profits. So, can we get back to how to actually do this stuff?



You're going to need to be specific about your questions. Among the group of us, many of us are actively employed in building next generation experiences that fit “metaverse” definitions. We solve problems, and are on teams that are solving bigger problems. Collectively we can probably either directly answer or point you to answers. Some of us are already on AAA teams that are bringing in billions (and mostly spending it on salaries), and are already rolling toward that Citibank estimate.

So: What specific things do you mean by “how to actually do this stuff?” Specific questions about simulation processing? Specific questions about networking? Specific questions about the graphics, VR, or AR? Specific questions about math and physics? Specific questions about AI systems? Specific questions about architecture designs? Specific questions about blockchain technology generally or NFTs particularly? Specific questions about how the work is distributed? Those are more addressable questions.

@JoeJ wrote: Everything would be fine if the ‘metaverse’ buzzword had never come up. We could just continue on our dream to create a fake reality for entertainment and fun, without looking like scammers, even while failing to deliver on attractive marketing promises.

Right. There's good stuff going on in the game dev world, trying to get big worlds with huge amounts of content to work with finite resources. The people making “metaverse” noises aren't the ones pushing the technology.

The demo of a Nanite-type renderer from a non-Epic source is interesting. That's going to be a useful technology. The SIGGRAPH paper points out that current GPUs can't do as much of the work as they'd like, and Epic's code has to do a lot of the work on the CPUs. With multiple people developing similar approaches, hardware support for the data structures needed may appear in GPUs. That would be useful.

Nanite does depend on the content having lots of instancing. It involves storing a mesh as a directed acyclic graph, with instances merged. If every rock is different, it doesn't help as much. This is in fact Second Life's problem: since content is purchased from a huge number of creators, there's not much instancing. There are over 62,000 different chairs on Second Life Marketplace. If the NFT crowd ever really gets a metaverse of 3D stuff going, they'll hit that problem. This is a big difference between game content developed by an in-house team and user-created content.
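To make the structure concrete: a minimal Rust sketch of a mesh stored as a DAG of clusters, with identical clusters merged. The names here are mine, not Epic's; the point is just that the deduplication only pays off when content repeats.

use std::collections::HashMap;

struct Cluster {
    vertices: Vec<[f32; 3]>,
    // Children are indices into the shared node pool, so two parents can
    // point at the same child; that's what makes it a DAG, not a tree.
    children: Vec<usize>,
}

struct MeshDag {
    nodes: Vec<Cluster>,
    dedup: HashMap<u64, usize>, // content hash -> node index
}

impl MeshDag {
    fn add_cluster(&mut self, hash: u64, cluster: Cluster) -> usize {
        if let Some(&idx) = self.dedup.get(&hash) {
            return idx; // identical cluster already stored: instanced, not copied
        }
        self.nodes.push(cluster);
        let idx = self.nodes.len() - 1;
        self.dedup.insert(hash, idx);
        idx
    }
}

fn main() {
    let mut dag = MeshDag { nodes: Vec::new(), dedup: HashMap::new() };
    // Two identical rocks hash the same and share one node...
    let a = dag.add_cluster(0xBEEF, Cluster { vertices: vec![[0.0; 3]], children: vec![] });
    let b = dag.add_cluster(0xBEEF, Cluster { vertices: vec![[0.0; 3]], children: vec![] });
    assert_eq!(a, b);
    // ...but 62,000 unique chairs would get 62,000 nodes: no sharing at all.
    println!("stored {} cluster(s)", dag.nodes.len());
}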

I wonder if the Nanite concept can be extended to network download. You want to download a big mesh, with some parts at higher resolution than others. So the client asks the server “give me mesh N, detailed for viewpoint P”, and later “for mesh N, I'm now at P2, so let me have a differenced update to the mesh I already have.” There's apparently something like this in UE5, specialized for downloading from a local SSD. Anyone know more about that? Would it generalize to a network with more lag?
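Something like this hypothetical message set is what I have in mind. None of it is a real UE5 or Second Life API; it's just the shape of the idea, sketched in Rust:

#![allow(dead_code)]

type MeshId = u64;
type Vec3 = [f32; 3];

enum Request {
    // "Give me mesh N, detailed for viewpoint P."
    MeshForViewpoint { mesh: MeshId, viewpoint: Vec3 },
    // "For mesh N, I'm now at P2; send a differenced update to what I have."
    // `have` identifies the cut of the cluster hierarchy the client already
    // holds, so the server can diff against it.
    MeshDelta { mesh: MeshId, viewpoint: Vec3, have: u64 },
}

enum Response {
    // Full cluster set for the requested viewpoint.
    MeshClusters { mesh: MeshId, clusters: Vec<Vec<u8>>, cut: u64 },
    // Only clusters to add plus IDs to drop, to save bandwidth on updates.
    MeshDiff { mesh: MeshId, add: Vec<Vec<u8>>, drop: Vec<u64>, cut: u64 },
}

fn main() {
    // A client would serialize these over HTTPS or a UDP stream; with more
    // lag, batching several meshes per round trip probably matters.
    let _req = Request::MeshDelta { mesh: 7, viewpoint: [10.0, 2.0, 5.0], have: 3 };
}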

(I've been writing a Second Life / Open Simulator client in Rust, with many threads, and I'm hitting some of those problems.)

Nagle said:
The people making “metaverse” noises aren't the ones pushing the technology.

To me those people are mostly Zuckerberg and Sweeney, who do work on it. I haven't paid attention to those who just jump on the hype train.
But the topic is still confusing, and visions vary wildly.

Nagle said:
With multiple people developing similar approaches, hardware support for the data structures needed may appear in GPUs. That would be useful.

No. That's exactly not what's useful or needed. Make sure to understand why fixed-function hardware is the primary reason the LOD problem was ignored for much too long, and why Nanite comes late and is still a surprise.
Also notice how another fixed-function solution (raytracing) lacks compatibility with Nanite and similar ideas, and this RT might be what prevents other companies from following Epic's example. It seems those who want it just switch to UE5 to avoid this risk and conflict.

I think GPUs / APIs should receive the following updates to do better with LOD:

* Give the GPU the ability to do coarse control flow on its own. OpenCL 2 can do this in the form of device-side enqueue, for example. Epic has a related minor issue: implementing a multi-producer, multi-consumer pattern using persistent threads. It works, but it is not specified to work (see the CPU-side sketch after this list).

* Expose raytracing BVH data structures, so the dynamic geometry that LOD requires could be supported.

* Eventually, make the HW rasterizer work efficiently for tiny triangles. But it could also happen that ROPs just become deprecated and obsolete.

Points 1 and 2 are the most important, but they are entirely software problems, coming from a history of fixed-function solutions dictating a paradigm of sacrificing flexibility to help with brute-force solutions.
But we are past that now. Now we mostly need flexibility to proceed, not even more fixed-function restrictions, IMO.
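To illustrate what the persistent-threads trick emulates, here is the same multi-producer, multi-consumer work queue on the CPU, using only std primitives. A minimal sketch; the work items (cluster IDs to refine) and all names are made up. On the GPU you build this from atomics inside a persistent kernel, and that is exactly the part which is not specified to work:

use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

struct Queue {
    items: Mutex<(VecDeque<u32>, bool)>, // (work items, producers_done flag)
    ready: Condvar,
}

fn main() {
    let q = Arc::new(Queue {
        items: Mutex::new((VecDeque::new(), false)),
        ready: Condvar::new(),
    });

    // Consumers: like persistent GPU threads, they loop pulling work.
    let consumers: Vec<_> = (0..4).map(|id| {
        let q = Arc::clone(&q);
        thread::spawn(move || loop {
            let mut guard = q.items.lock().unwrap();
            while guard.0.is_empty() && !guard.1 {
                guard = q.ready.wait(guard).unwrap();
            }
            match guard.0.pop_front() {
                Some(item) => println!("consumer {id} refines cluster {item}"),
                None => return, // queue drained and producers finished
            }
        })
    }).collect();

    // Producers: e.g. refining one cluster enqueues its child clusters.
    let producers: Vec<_> = (0..2).map(|batch| {
        let q = Arc::clone(&q);
        thread::spawn(move || {
            for i in 0..8u32 {
                q.items.lock().unwrap().0.push_back(batch * 8 + i);
                q.ready.notify_one();
            }
        })
    }).collect();

    for p in producers { p.join().unwrap(); }
    q.items.lock().unwrap().1 = true; // no more work coming
    q.ready.notify_all();
    for c in consumers { c.join().unwrap(); }
}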

That said, if Linden Lab doesn't aim for raytracing, there's no need to wait for anything. They could start implementing it right now. The risk that some open standard comes up and makes the work redundant isn't that big currently, I guess.

Nagle said:
Nanite does depend on the content having lots of instancing. It involves storing a mesh as a directed acyclic graph, with instances merged. If every rock is different, it doesn't help as much. This is in fact Second Life's problem

Agreed, but we cannot solve this easily as long as storage is limited. We simply can't afford an open world made of all unique rocks.

Server-based games can, though. They could make games petabytes in size, with the clients only downloading what's currently needed. That's the biggest potential win of server games I see, which sadly things like Stadia have not yet utilized.

But even if we go there, creating infinite content is still a problem as well.
Thus, one interesting solution for a meta game would be to enforce sharing of content. Instead of making a user-made rock an NFT, you add the rock to a library any other creator can use as well. Less storage, less memory, less work, more community.
If you build up your marketing strategy from such positive ideas instead of the ‘play to earn’ scam, even I would be back to believing in the vision.
Second Life may have this, of course; I don't know. But it should be free for anybody to use. To make the virtual world a better place, not the same shithole of selfish greed the real world is. Non-profit for users, just for fun and reputation. I'd make reputation the currency: likes, not tokens. That's the true idea of the initial vision. That's why I would avoid the buzzword ‘metaverse’ completely, which came up long after we all formed this vision in our childhood.

Nagle said:
I wonder if the Nanite concept can be extended to network download. You want to download a big mesh, with some parts at higher resolution than others.

I think Nanite is not meant to handle one big mesh at all, e.g. a landscape. The idea is to compose a big thing from many small instances.
So if you wanted a real universe, with planets and space travel between them, you'd surely still need to be creative. That's not solved by Nanite, but it can still be used to create the detailed environment as you move close to a planet's surface.

Nagle said:
There's apparently something like this in UE5, specialized for downloading from a local SSD. Anyone know more about that? Would it generalize to a network with more lag?

AFAIK all that's new in this regard is a coarse grid-based world partition system they have now, which is the trivial idea to support flat open worlds, but independent of the problems Nanite solves.

The big upsides I see from your perspective are the same ones Epic mentions for content creation: automated LOD generation, less work on faking stuff with normal maps, and in general less need for the artist to understand and avoid technical restrictions.
The artist creates whatever he wants. The runtime client reduces it to whatever can be processed in real time. Some details may get lost, but the artist will accept this in return for the benefits.

That's the big advantage, and i'm convinced it lifts user content to a new level.

JoeJ said:
The big upsides I see from your perspective are the same ones Epic mentions for content creation: automated LOD generation, less work on faking stuff with normal maps, and in general less need for the artist to understand and avoid technical restrictions. The artist creates whatever he wants. The runtime client reduces it to whatever can be processed in real time. Some details may get lost, but the artist will accept this in return for the benefits.

Right. Big worlds full of user content need something server-side doing the kinds of optimizations UE does on the level dev's desktop. No single content creator has enough of the scene content, and by the time the client needs to render it, there's not enough time left for expensive optimizations. Environment maps and light maps have to be generated somewhere.

The alternative, which Decentraland uses, is that the unit of user creation is the land parcel. In Decentraland, you build everything in your parcel offline in Unity, then do one big upload to the server. Which is a bit embarrassing for an NFT-based world, because they can't have NFT-based furniture.

JoeJ said:
We simply can't afford an open world made of all unique rocks.

Server-based games can, though. They could make games petabytes in size, with the clients only downloading what's currently needed. That's the biggest potential win of server games I see, which sadly things like Stadia have not yet utilized.

That's Second Life's data problem. They really do have petabytes of assets on the asset servers. It's common for a client to be pulling 40 Mbps from the servers, and the clients are getting more concurrency, so that's going up. That's more than a 4K UHD TV stream. Load varies; hang out in one place and chat, and the client catches up with the content servers and traffic drops to near zero. Get on a motorcycle and go riding around, and you're pulling content at a huge rate. Anybody who creates a big world with large amounts of high-quality user-created content is going to hit this problem.

Second Life uses Akamai to front-end AWS for content distribution. The content all looks like web content, requested via ordinary HTTPS, and Akamai's caching servers handle it as web content. But the usage pattern doesn't look like web content. You teleport someplace, and suddenly the client makes a few thousand HTTP requests. (Then, I suspect, Akamai's anti-DDoS throttling kicks in, because it doesn't seem to be possible to sustain the max data rate for long before something limits it, even on gigabit fiber.)
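One obvious client-side mitigation is to cap the number of requests in flight after a teleport instead of firing thousands at once. A minimal sketch, assuming the tokio and reqwest crates; the URL scheme and the limit of 32 are made up:

use std::sync::Arc;
use tokio::sync::Semaphore;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();
    let limit = Arc::new(Semaphore::new(32)); // at most 32 concurrent fetches

    // Pretend these asset URLs came from the scene description on teleport.
    let urls: Vec<String> = (0..1000)
        .map(|i| format!("https://assets.example.com/mesh/{i}"))
        .collect();

    let mut tasks = Vec::new();
    for url in urls {
        // Wait for a free slot before spawning the next fetch.
        let permit = limit.clone().acquire_owned().await.unwrap();
        let client = client.clone();
        tasks.push(tokio::spawn(async move {
            let bytes = client.get(&url).send().await?.bytes().await?;
            drop(permit); // release the slot for the next asset
            Ok::<usize, reqwest::Error>(bytes.len())
        }));
    }

    let mut total = 0;
    for t in tasks {
        if let Ok(Ok(n)) = t.await { total += n; }
    }
    println!("downloaded {total} bytes");
    Ok(())
}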

Cloud gaming is an interesting idea, although cost has been a problem. Some clouds want to own the payment rails and take a big cut of revenue. I may try my own client on NVIDIA GeForce Now or Shadow, which don't. Those services only need TV-grade network connections to the client; if you can watch Netflix, you can use them.

How this will all interact with “5G” isn't clear. Users are going to need a really good data plan.

Those are some of the problems of a metaverse at scale. Any other projects hitting similar problems?

