I don't really get the Normal Map example. I was referring to the conversion process as plain assets.
Ok, the problem with that comparison is that you're ignoring the asset conversion processes that are used in games.
The two asset pipelines that you compared were:

Author highly detailed, film-quality model                  Rendered by game
                         \-------- Generate Atoms --------/
...but to be fair, you should actually use the asset conversion pipeline that is currently used by "polygonal games", which would look more like:

                         /-- Generate LOD and bake maps --\
Author highly detailed, film-quality model                  Rendered by game
                         \-------- Generate Atoms --------/
In this comparison, both data sets (the atoms or the LOD/maps) will be a small percentage of the original size.
It's also important to note that there's not much difference between the two asset pipelines above. In reality, the "generate atoms" step is pretty much the same as the "bake maps" step, except that it's baking a 3D map of some sort. N.B. if textures are replaced by creating one atom per texel, then the memory requirements of the two approaches will be similar, because you end up storing the same data either way.
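The memory claim is easy to sanity-check with back-of-envelope arithmetic. This is a minimal sketch, and the byte sizes are illustrative assumptions (RGB8 texels, and atoms carrying an RGB8 colour plus a quantised 16-bit-per-axis position), not any particular engine's format:

```python
def texture_bytes(width, height, bytes_per_texel=3):
    """Baked RGB8 texture: 3 bytes per texel."""
    return width * height * bytes_per_texel

def atom_cloud_bytes(width, height, bytes_per_atom=3 + 6):
    """One atom per texel: RGB8 colour (3 bytes) plus a quantised
    16-bit-per-axis position (6 bytes) = 9 bytes per atom."""
    return width * height * bytes_per_atom

tex = texture_bytes(1024, 1024)       # 3 MiB
atoms = atom_cloud_bytes(1024, 1024)  # 9 MiB
# Same order of magnitude: both store one sample per surface texel,
# the atoms just carry explicit positions alongside the colours.
print(tex, atoms, atoms / tex)
```

Under these assumptions the atom version is a small constant factor larger, not an order of magnitude, which is the point: it's the same surface data in a different container.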
Games already do use 3D/volumetric rendering (e.g. point clouds) for certain objects when appropriate, and existing asset pipelines do already use "atom->polygon" and "polygon->atom" conversion where appropriate.
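To make the "polygon->atom" direction concrete, here's a minimal sketch of one common approach: scattering points uniformly over a triangle via barycentric coordinates. The function name is mine, and a real bake step would also sample colour/normals, but the sampling logic itself is standard:

```python
import random

def triangle_to_atoms(v0, v1, v2, n_atoms):
    """Sketch of a 'polygon -> atom' bake step: scatter n_atoms
    uniformly over one triangle using barycentric coordinates."""
    atoms = []
    for _ in range(n_atoms):
        r1, r2 = random.random(), random.random()
        if r1 + r2 > 1.0:                 # reflect back into the triangle
            r1, r2 = 1.0 - r1, 1.0 - r2
        w0 = 1.0 - r1 - r2
        # Weighted sum of the three vertices, per axis.
        atoms.append(tuple(w0 * a + r1 * b + r2 * c
                           for a, b, c in zip(v0, v1, v2)))
    return atoms
```

Run over every triangle at a density matched to the target texel size, and you get the "one atom per texel" data set discussed above. The opposite direction (atom->polygon) is a surface-reconstruction problem (e.g. marching cubes), which is why it usually lives in the offline pipeline rather than at runtime.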
Maybe this isn't possible but couldn't a Paint Program allow you to paint directly onto Point Cloud geometry? If you could bypass the entire process of having textures altogether, simply layer paint your art asset
Yes, this already exists for both polygonal and voxel art creation programs.
Take note, though, that the processes used when authoring game art and the processes used when rendering the game don't have to be the same.
These kinds of "unlimited detail", hassle-free (e.g. no textures to manage) workflows are already in use in a lot of art creation tools.
If a current engine then requires certain technical details, such as UV coordinates or texture maps (e.g. because the most efficient GPU implementation happens to work that way), then the game engine's asset import pipeline can automatically perform the required conversions.
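That import pipeline amounts to a conversion table keyed on what the artist authored versus what the engine wants. A minimal sketch, with all names hypothetical and the converters reduced to placeholders (a real polygonize step would be something like marching cubes; a real scatter step would be the per-texel sampling discussed earlier):

```python
def polygonize(atoms):
    """Placeholder for an atom -> polygon surface-reconstruction step."""
    return {"kind": "polygons", "source": atoms}

def scatter_atoms(mesh):
    """Placeholder for a polygon -> atom bake step."""
    return {"kind": "atoms", "source": mesh}

def import_asset(source_format, engine_format, asset):
    """The import pipeline hides the representation choice: artists
    author in whatever format their tools produce, and the pipeline
    converts only when the engine needs the other representation."""
    converters = {
        ("atoms", "polygons"): polygonize,
        ("polygons", "atoms"): scatter_atoms,
    }
    if source_format == engine_format:
        return asset  # no conversion needed
    return converters[(source_format, engine_format)](asset)
```

The point of the sketch is that swapping the renderer's representation only changes which branch of the table fires; the authoring side doesn't have to notice.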
If/when we switch over to mainly using point rendering in games, it won't be a huge overhaul of the art pipeline, nor will it wildly enable more freedom for artists. That switchover will be just another technical detail, such as whether your vertices are in FP16 or FP32 format...
Artists already have this freedom in their tools, assuming their game engine supports their chosen art tools. So the game engine and/or the artists can independently choose whether to use polygons or atoms, whether to use texture-maps or an automatic solution, etc, etc... The engine's asset import pipeline in the middle takes care of the details. This is already the way things are.