Carmack's Virtualized Textures

In a recent interview, John Carmack mentioned that the "Megatexture" (read: fancy clipmapping) in Enemy Territory: Quake Wars worked fine for "things that are topologically a deformed plane," but that the technology was already old news to him and that he had developed a new method that allowed him to use unique texturing wherever he wanted. From the article:
Quote: For the better part of a year after that initial creation, I have been sort of struggling to find a way to have a similar technology that creates this unique mapping of everything, and use it in a more general sense so that we could have it on architectural models, and arbitrary characters, and things like that. Finally, I found a solution that lets us do everything that we want in a more general sense, which is what we’re using in our current title that’s under development... ...the core technology to do this is tiny. There’s one file of source code that manages the individual blocks, and then the fragment program stuff for this is like a page.
Does anyone have any thoughts/insights as to how this is achieved? I've never seen/heard anything about it before. Thanks! [Edited by - Toji on May 23, 2006 7:18:08 PM]
// The user formerly known as Tojiro67445, formerly known as Toji [smile]
He's basically working on virtual texture memory. He's talked about it before (the QuakeCon 2005 keynote speech, and in an old blog entry from around the time he was doing R&D for Doom 3). The method for QW is the same concept, except that MT generation 1 is made only for terrain, since there it's easy to decide which parts of the texture are needed in a frame.
This much I knew (though thank you for the link!). What I'm curious about is whether anyone has any ideas as to the actual implementation, since it seems to be hardware independent (i.e. not DX10 exclusive).
// The user formerly known as Tojiro67445, formerly known as Toji [smile]
It's like any noise texture: it's not resolution-dependent, it's domain-dependent. It's not a novel concept, only a novel application. This is why he was flabbergasted that it hadn't been done before.
There's already a topic that is 15 days old

Y.
Yes, indeed there is a topic, and yes it is 15 days old, but it deals primarily with the implementation of the "Megatexture" concept in Quake Wars, which is only really feasible for landscapes or similar objects. What interests me, if you fully read my original post, is that Carmack says he now has a much more general-purpose algorithm that allows such texturing on any type of object. The theory and possible implementation of THAT is what I am inquiring about.
// The user formerly known as Tojiro67445, formerly known as Toji [smile]
What I got from what he said in that article is that he has a unique texture for the models (be they landscapes or whatever) and that the engine streams everything as it's needed from the hard disk and uploads it to the graphics card in time. That would be the reason why this technology won't work on consoles that don't have a hard disk.

Quote:
And especially my newer paged virtual texturing which applies to everything instead of just terrain, allows you to have a uniform management of all texture resources there, as well as allowing infinitely sized texture dimensions. So this is actually working out very nicely on the Xbox 360.


I believe that when he says "infinitely sized texture", he's talking figuratively.
[size="2"]I like the Walrus best.
You can easily page textures in yourself. When you load a model, load a really small version of the texture on the model, just so that you know you can render it. In a streaming world, models will come in at the far end of the viewing frustum anyway.

For each model, calculate the size of the largest texel on the model (which is easily done as a pre-calculation step). Then, load the original texture as the deepest "tip" of the MIP map chain. When you get close enough that the texture would stretch the biggest texel across more than one pixel, load the next higher size MIP level and stick it into the next lower MIP level. The texture can still be considered MIP complete, because you'll simultaneously adjust the MIN_LOD and MAX_LOD parameters of the texture to match.

As an object moves further away from the viewer, unload the upper MIP levels (specify 0x0 size images) to kick them out of memory, and re-specify the LODs as appropriate.
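
A minimal sketch of that per-mip-level swapping, assuming OpenGL 1.2+ and a made-up LoadMipPixels() loader (this is only an illustration of the BASE_LEVEL/MAX_LEVEL and MIN_LOD/MAX_LOD idea, not anyone's actual engine code):

#include <GL/gl.h>
#include <vector>

// Hypothetical loader: reads the pixels of one mip level of a texture from disk.
std::vector<unsigned char> LoadMipPixels(const char* name, int level, int* w, int* h);

// Make mip levels [baseLevel .. maxLevel] resident and clamp sampling to that
// range, so the texture stays "mip complete" even though the higher-resolution
// levels are never uploaded.
void SetResidentMipRange(GLuint tex, const char* name, int baseLevel, int maxLevel)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    for (int level = baseLevel; level <= maxLevel; ++level)
    {
        int w = 0, h = 0;
        std::vector<unsigned char> pixels = LoadMipPixels(name, level, &w, &h);
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);
    }
    // Restrict both the usable level range and the LOD clamp to what is resident.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, baseLevel);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, maxLevel);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD, (float)baseLevel);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD, (float)maxLevel);
}

When the object gets closer you'd call this again with a smaller baseLevel; when it recedes, with a larger one (freeing the dropped levels as described above).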

Now, not all Xbox 360 machines have hard drives, so the disk seek time may kill you if you try to do this with a highly dynamic world. However, for static objects, you can probably pack texture images into a big pack file such that you can find all textures you need when you're in position X by just one or two seeks (possibly using redundancy).

The big draw-back of all this is of course that a game will easily fill a full-size DVD, or even a Blu-ray disc, if you have the art resources to produce it. That's great for offline content, but imagine trying to patch an online game -- "there's a new texture for class X, race Y; download size: 1 GB". I believe procedural texturing approaches (especially mixes of splatting and uber-shaders) will still have a big role to play in the future.
enum Bool { True, False, FileNotFound };
Quote:Then, load the original texture as the deepest "tip" of the MIP map chain


Your wording here is a bit odd. I assume you mean load the lowest level of the mip map and treat it as the full texture, rather than load the full texture and treat it as the lowest mip level?

Also, I would assume this technique would require a format with the mip levels built in (such as .dds), since loading the entire texture just to generate the mip levels would be counterproductive.

In any case, the texture LOD part of all this is very logical and, I feel, quite intuitive (only load what you'll be able to use), but I'm also interested in the technical implementation behind it. It would seem to me that constantly swapping textures in and out of memory like this would grind most cards to a halt. How would one work around that? Once again, this touches on the virtualized texture memory mentioned before, and is the real root of what I'm trying to learn more about.
// The user formerly known as Tojiro67445, formerly known as Toji [smile]
Quote:Original post by hplus0603
You can easily page textures in yourself. When you load a model, load a really small version of the texture on the model, just so that you know you can render it. In a streaming world, models will come in at the far end of the viewing frustum anyway.
...
The big draw-back of all this is of course that a game will easily fill a full-size DVD, or even a Blu-ray disc, if you have the art resources to produce it. That's great for offline content, but imagine trying to patch an online game -- "there's a new texture for class X, race Y; download size: 1 GB". I believe procedural texturing approaches (especially mixes of splatting and uber-shaders) will still have a big role to play in the future.


This is not how it's done. Each texture and mipmap is divided up into smaller tiles, and the shader just faults in the required pieces. When the object is far away, the shader faults in parts of its smallest mipmap. When it comes closer, the larger mipmap tiles get faulted in, and the smaller ones get swapped out (or just dropped if the system can reload them from the original source).
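
One way to picture the CPU side of that fault handling (just a sketch of the idea, not Carmack's code): keep an indirection table from virtual tile coordinates to slots in a physical tile cache, and queue a load for any tile that isn't resident yet.

#include <cstddef>
#include <queue>
#include <unordered_map>

// Virtual address of a tile: which mip level, and which tile within that level.
struct TileId {
    unsigned mip, x, y;
    bool operator==(const TileId& o) const { return mip == o.mip && x == o.x && y == o.y; }
};

struct TileIdHash {
    size_t operator()(const TileId& t) const {
        return (t.mip * 73856093u) ^ (t.x * 19349663u) ^ (t.y * 83492791u);
    }
};

class TileCache {
public:
    // Returns the physical cache slot for a tile, or -1 after queueing a load
    // (the "page fault"); the renderer falls back to a coarser mip meanwhile.
    int Lookup(const TileId& id) {
        auto it = resident_.find(id);
        if (it != resident_.end()) return it->second;
        pending_.push(id);
        return -1;
    }

    // Called by the streaming code once the tile has been loaded and uploaded.
    void Insert(const TileId& id, int physicalSlot) {
        resident_[id] = physicalSlot;   // eviction of stale tiles omitted here
    }

private:
    std::unordered_map<TileId, int, TileIdHash> resident_;
    std::queue<TileId> pending_;
};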

For compression, you can use JPEG or any other format, since the CPU can decompress the textures on load (JPEG uses 8x8 and 16x16 blocks, ideal base sizes for texture tiles). And when you have a texture MMU, you can use redundancy compression: with a tilemap layer between the tiles and the virtual texture address space, you can store repeating tiles only once. And because the whole texture space can be stored in one virtual texture, you can cut out the unused parts of each skin and merge the repeating patterns. This can be done by the preprocessor that converts the huge design-time texture into the compressed and tiled version that gets shipped.
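
The dedup step in such a preprocessor could be as simple as hashing each tile's raw pixels and mapping identical tiles to one stored copy. A sketch (the tile size and the surrounding pipeline are assumptions):

#include <string>
#include <unordered_map>
#include <vector>

typedef std::vector<unsigned char> Tile;   // e.g. 16x16 RGBA = 1024 bytes

// Build the shipped data: a list of unique tiles plus a tilemap of indices
// into that list, so repeating tiles are stored only once.
void DeduplicateTiles(const std::vector<Tile>& inputTiles,
                      std::vector<Tile>& uniqueTiles,
                      std::vector<unsigned>& tilemap)
{
    std::unordered_map<std::string, unsigned> seen;   // tile bytes -> unique index
    for (const Tile& t : inputTiles)
    {
        std::string key(t.begin(), t.end());
        auto it = seen.find(key);
        if (it == seen.end())
        {
            it = seen.emplace(key, (unsigned)uniqueTiles.size()).first;
            uniqueTiles.push_back(t);
        }
        tilemap.push_back(it->second);
    }
}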

For game updates, you can patch each texture at the tile level. If you want to add a new logo to one of the uniforms, then you only send the logo's tile, preferably to an unused tile slot, then add a new tile address mapping that copies the tiles from another uniform, and patch the new mapping with the logo's tile. The new uniform texture can be 20 MB, but the patch will be around 1 KB if you send the tile without image compression.
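
So a patch in that scheme boils down to a handful of new tile images plus some tilemap edits; a guess at what the records might look like (the field names are made up):

#include <vector>

// A patch ships only the new tile pixels and the tilemap entries to rewrite.
struct NewTile {
    unsigned slot;                        // unused physical tile slot to fill
    std::vector<unsigned char> pixels;    // e.g. one 16x16 RGBA tile, ~1 KB raw
};

struct TilemapEdit {
    unsigned virtualTileIndex;            // entry in the new uniform's tile mapping
    unsigned physicalSlot;                // points at a copied tile or the new logo tile
};

struct TexturePatch {
    std::vector<NewTile>     newTiles;    // e.g. just the logo tile
    std::vector<TilemapEdit> edits;       // "copy from other uniform" plus the logo override
};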

Another great side effect of this strategy is that you can use the video RAM as a level 1 cache, system memory as a level 2 cache, the disk as a level 3 cache, and the storage medium as the data source. Preloading and read-ahead strategies can be used too. For diskless systems, you only have the level 1 and level 2 caches.

Viktor

PS: This is the same technology that Diablo 1 used with its tiles, except now the system swaps in texture tiles instead of 2D tiles.
