Tojiro67445

OpenGL Carmack's Virtualized Textures (QuakeCon 2007)


Well, I started a topic about this a year or two back, but at that point there was next to no information to go on. John has since given us a bit more information about the technique, and since it's one that I'm very curious about, I'd like to resurrect the subject.

Most of you have probably heard about idTech5, id Software's new engine that will be powering their next game, Rage. The primary new feature they've been showcasing is "Virtualized Textures", which is basically a paging system for textures that determines which textures (and which mipmap levels) are required to render the current frame and loads only those into memory. In "non-geek" terms, this essentially means you can have infinite texture detail with no performance hit. The only previous algorithm I'm aware of that attempts something like this is clipmapping, but that is designed to work only on a perturbed plane (like a heightmap), whereas this method apparently works on any surface.

So how do you think it's being accomplished? During Carmack's QuakeCon '07 appearances (Keynote, Q&A) he mentioned a few things that should help narrow it down:

1) It's being done with DirectX 9 level hardware (OpenGL on the PC), so no DX10-only techniques are needed.

2) Carmack said that the engine wasn't going to be ported to the Wii because it wasn't designed for that hardware. He also said, however, that the memory/processing requirements were pretty modest. That would imply the approach is fairly shader dependent. (Fairly obvious, but made more so by the Wii being fixed-function only.)

3) One of the more interesting bits of the QA session was when he mentioned that theoretically every scene could be rendered with only 3 draw calls, and that the only reason it wasn't was for culling granularity. This was possible, he said, because (among other things) the virtualized texture system naturally created a texture atlas. So essentially it would seem that he is allocating one or two large textures and manually loading texture portions into different segments of it.

4) Combining the two items above, the obvious approach would be that while the mesh stores the "standard" texture coordinates, the shaders it is run through do a lookup into the texture atlas and modify the coordinates to point to the appropriate sub-map. Pretty logical.

Everything above makes sense to me, but there's one big missing piece: how do you determine which textures, and which mip levels of those textures, to load? My best guess is that you could do a pre-pass of the scene using a specially color-coded texture (which is sampled normally) where each mip level uses a different color. You could then use the color that's actually rendered to determine which mip level to read in. This approach wouldn't work directly unless every mesh used the same sized texture, though, which would defeat the purpose. Also, you would need to combine it with another pass that told you which textures are actually needed for the scene, and that seems like a lot of pre-pass work for what is supposed to be a low-impact technique.

So what are your thoughts? Anyone see something I don't?
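For what it's worth, here's roughly the kind of readback analysis my pre-pass idea would imply, sketched in C++. The per-pixel encoding and all the names are made up by me, just to show the bookkeeping: the pre-pass writes some page/mip ID per pixel, you read the buffer back, and you build the set of pages the streaming system has to make resident before the real frame is drawn.

```cpp
#include <cstdint>
#include <set>
#include <vector>

// Hypothetical encoding: the pre-pass writes, per pixel, which texture page
// and which mip level the final pass would sample. One 32-bit value per pixel:
// bits 0..7 = mip level, bits 8..31 = page/texture ID.
struct PageRequest {
    uint32_t pageId;
    uint32_t mipLevel;
    bool operator<(const PageRequest& o) const {
        return pageId != o.pageId ? pageId < o.pageId : mipLevel < o.mipLevel;
    }
};

// Scan the read-back feedback buffer and build the set of (page, mip) pairs
// that must be resident before the real frame is rendered.
std::set<PageRequest> analyzeFeedback(const std::vector<uint32_t>& feedback) {
    std::set<PageRequest> needed;
    for (uint32_t px : feedback) {
        if (px == 0xFFFFFFFFu) continue;                     // sky / unmapped pixel
        needed.insert(PageRequest{ px >> 8, px & 0xFFu });   // decode ID and mip
    }
    return needed;
}
```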

Hm... good question. From the sounds of it, it would seem like he only ever caches the appropriate mip levels for any given texture anyway. If that's the case, and you weren't letting the GPU do the mip generation, wouldn't that potentially eliminate most border artifacts? If not, I don't suppose it would be too much hassle to simply surround everything with a 1-pixel-wide border.
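By the 1-pixel border I mean something like the following sketch: when a tile is copied into the cache texture it gets padded with a one-texel gutter taken from its neighbours, so bilinear filtering never reads across a tile seam. This is my own guess at how it would be done (single-channel texels, made-up names), not anything id has described.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Copy a pageSize x pageSize tile out of a larger source image into a
// (pageSize + 2) x (pageSize + 2) buffer, duplicating the surrounding texels
// (clamped at the image edge) as a one-texel border/gutter.
std::vector<uint8_t> copyPageWithBorder(const std::vector<uint8_t>& src,
                                        int srcW, int srcH,
                                        int pageX, int pageY, int pageSize) {
    const int padded = pageSize + 2;
    std::vector<uint8_t> out(padded * padded);
    for (int y = 0; y < padded; ++y) {
        for (int x = 0; x < padded; ++x) {
            int sx = std::clamp(pageX + x - 1, 0, srcW - 1);
            int sy = std::clamp(pageY + y - 1, 0, srcH - 1);
            out[y * padded + x] = src[sy * srcW + sx];
        }
    }
    return out;
}
```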

Quote:
One of the more interesting bits of the QA session was when he mentioned that theoretically every scene could be rendered with only 3 draw calls, and that the only reason it wasn't was for culling granularity. This was possible, he said, because (among other things) the virtualized texture system naturally created a texture atlas. So essentially it would seem that he is allocating one or two large textures and manually loading texture portions into different segments of it.

I don't see how these mega-textures can result in every scene being rendered in only 3 draw calls, since texture switches are not the only batch-breaking state changes, and while texture atlases can result in fewer state switches, they cannot eliminate them entirely and bring the number of draw calls down to 3. Some of those state changes aren't even texture-related. What about different vertex layouts, shaders, shader parameters, or any other state change?

Quote:

One of the more interesting bits of the QA session was when he mentioned that theoretically every scene could be rendered with only 3 draw calls...

I'd say only 3 draw calls might be very theoretical, but possible if you use uber shaders (much like the mega texture, where many shaders are contained in one), not too many different parameters, hardware instancing, the same vertex layout, etc.

In practice, as you said, there would be more state changes and thus more draw calls.

Quote:
Original post by Ashkan
I don't see how these mega-textures can result in every scene getting rendered in only 3 draw calls...

I think he's only talking about the static environment here, i.e. the racing track in the demo. That might be possible to draw with 3 draw calls: it's a textured terrain, so one shader, one vertex layout, and one huge texture. Note that the MegaTexture has all kinds of effects baked in.
Then, for actors etc., you'd need some extra draw calls, since you need a different shader, vertex formats that support skinning, and so on.

Quote:
Original post by Lord_Evil
I'd say only 3 draw calls might be very theoretical, but possible if you use uber shaders (much like the mega texture, where many shaders are contained in one), not too many different parameters, hardware instancing, the same vertex layout, etc.

In practice, as you said, there would be more state changes and thus more draw calls.


Actually, from what he said in the keynotes, it sounds like you're pretty dead on.

I'll attempt to transcribe a bit:

Quote:
John Carmack (Quakecon '07) - 24:20 in the Q&A vid linked above
"It turns out that the entire world, just about, could be drawn with three draw calls in this: One for land surfaces, one for non-land opaque surfaces, and one for non-land translucent surfaces. Almost everything end up being done like that.

One of the interesting things is that in addition to virtualizing the textures to let you have kind of unlimited resources, it's also the ultimate infinite atlasing system where, since everything is using the same virtual texture page pool, they can all use the same material. And you can wind up taking, you know, in this room you would have the podiums, the floors, the stands, the lights. All of these things wind up, while they're separate models and you can move them around and position them separately, but when you're ready to "go fast" they all get built into the same model. So you've got one draw call because they all use the same virtual materials on there.

The only reason you end up breaking it up is for culling granularity. You really literally could have three calls that draw just about everything except for a few glowing things and the sky and the few things that are not conventional surfaces in there. But you still want to break it up so you can do culling on visibility and frustum culling and things like that."


I'm guessing that when he says "the lights" as part of the models, he's referring to the physical light casings, not lights in the illuminating, shadow casting sense :)

I still don't get one thing: does he need to create a new texture (atlas) every frame and then send it to the GPU? That would consume some time.

Also, wouldn't it be better to, for example, divide the level using some grid, have a unique texture and mesh for each grid unit, and then simply load/free these resources as the player moves? Hmm, I'm not so sure it would work well, because I think Oblivion used such an approach, and landscape that was far away used low-res textures and looked blurred (http://oblivion.bonusweb.cz/obrazek.html?obrazek=oblivion20030604.jpg). It consumed a lot of memory too. What do you guys think?

Quote:
Original post by MassacrerAL
I still don't get one thing: does he need to create a new texture (atlas) every frame and then send it to the GPU? That would consume some time.

I think you're overestimating how much data would have to be sent and underestimating how fast AGP 8x/PCIe is. Even on a PCI board, if you had a 1024x768 screen and only diffuse textures, you'd only need about 16 MB of memory, and there's enough bandwidth that you could do an entire refresh of the atlas in less than a sixtieth of a second.
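Rough numbers behind that figure (my own back-of-the-envelope arithmetic, not anything from the keynote):

```cpp
#include <cstdio>

int main() {
    // If every visible pixel needs roughly one unique diffuse texel,
    // a 1024x768 frame touches about 0.8 million texels.
    const double texels   = 1024.0 * 768.0;       // ~786k texels
    const double diffuse  = texels * 4.0;         // ~3 MB at 4 bytes/texel
    // Adding the lower mips (~1.33x) plus a generous margin for pages that
    // are only partially visible or about to scroll into view pushes that
    // into the mid-teens of megabytes -- tiny next to AGP 8x / PCIe bandwidth.
    const double withMips = diffuse * 4.0 / 3.0;
    std::printf("unique texels: %.0f, diffuse only: %.1f MB, with mips: %.1f MB\n",
                texels, diffuse / (1 << 20), withMips / (1 << 20));
}
```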
Quote:
Also, wouldn't it be better to, for example, divide the level using some grid, have a unique texture and mesh for each grid unit, and then simply load/free these resources as the player moves? Hmm, I'm not so sure it would work well, because I think Oblivion used such an approach, and landscape that was far away used low-res textures and looked blurred (http://oblivion.bonusweb.cz/obrazek.html?obrazek=oblivion20030604.jpg). It consumed a lot of memory too. What do you guys think?

The virtual texturing of idTech5, if it is what I think it is, is much more interesting than that, since it's able to take into account partial/entire occlusion of geometry, and is able to do much more precise determination of which mips of which parts of the texture are needed. With a proper virtual texturing setup like what I think idTech5 has, you could have nigh-infinite resolution textures with multiple channels (i.e. diffuse, normal, specular coef, spec exponent, fresnel coef/exponent, and hell, probably even scattering coef/exponent) on a 64 MB video card while still having space left over to do some shadow mapping, store a low-res (e.g. 640x480 or 800x600?) backbuffer with some AA, and other miscellaneous stuff. Like he mentions in the keynote, it really is appalling that GPU makers are even thinking about going beyond 512 MB when maybe half of that is really required for 1080p resolutions.

Quote:
Everything above makes sense to me, but there's one big missing piece: how do you determine which textures, and which mip levels of those textures, to load? My best guess is that you could do a pre-pass of the scene using a specially color-coded texture (which is sampled normally) where each mip level uses a different color. You could then use the color that's actually rendered to determine which mip level to read in. This approach wouldn't work directly unless every mesh used the same sized texture, though, which would defeat the purpose. Also, you would need to combine it with another pass that told you which textures are actually needed for the scene, and that seems like a lot of pre-pass work for what is supposed to be a low-impact technique.

I have a pretty good idea of what the answer to that question is, but I haven't had time to make a demo that uses it to fully convince myself that it does work (FWIW, the idea would probably only need DX7-level tech). The idea you've suggested probably wouldn't work as-is, though, since you'd still need to read back and analyze those results somehow.

Quote:
Original post by Cypher19

I have a pretty good idea of what the answer to that question is, but I haven't had time to make a demo that uses it to fully convince myself that it does work (FWIW, the idea would probably only need DX7-level tech).


Really? DX7? I'd be very interested in hearing more about that idea, whether or not it actually works :)

And yes, I know that you would have to read the results of my idea somehow. I meant to imply that it could be rendered to a texture then read back out, which isn't terribly uncommon, but which also isn't terribly fast. :P I really don't think that's it.

As long as the pixel shaders don't change the scene (such as depth info), you could easily make a 'software renderer' that could quickly evaluate what texture levels and resolutions are needed for each visible part of a model. It wouldn't even need to actually render anything - it just needs to do some of the calculations to determine where the polygons lie in screenspace and which mipmap should be applied. You also don't need to use a high-poly version of the model - the lowest poly version you have will probably suffice.

This avoids pretty much all the problems with things like GPU RAM being slow for the CPU to read from, etc.

On the other hand, if you were going to create a texture atlas manually each frame, you'd need either a perfectly accurate 'software renderer' or a _MUCH_ larger texture than the screen. Realistically, somewhere in middle (rather accurate statistics from the software pass, somewhat larger texture) is probably what you'd need to use.
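To make that concrete, here's roughly the kind of per-triangle mip estimate I have in mind; just a sketch, where the parameters are whatever the software pass would have handy after projection, and all the names are made up:

```cpp
#include <algorithm>
#include <cmath>

// Given a triangle's screen-space extent (in pixels) and the extent of its
// UV mapping (in texels), estimate the mip level the GPU would pick for it.
// This mirrors the standard LOD rule: mip = log2(texels stepped per pixel),
// clamped to the available mip chain. It's a per-triangle approximation,
// not the per-pixel answer the hardware computes.
float estimateMipLevel(float screenWidthPx, float screenHeightPx,
                       float texelsU, float texelsV, int mipCount) {
    // Texel-to-pixel ratio along each axis of the projected triangle.
    float du  = texelsU / std::max(screenWidthPx, 1.0f);
    float dv  = texelsV / std::max(screenHeightPx, 1.0f);
    float rho = std::max(du, dv);                  // worst-case footprint
    float lod = std::log2(std::max(rho, 1.0f));    // 1 texel/pixel => mip 0
    return std::min(lod, float(mipCount - 1));
}
```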

The very first topic in the Q&A should answer your question. Essentially, the engine has to determine what texture you need before it starts rendering the frame.

Quote:
Original post by Extrarius
As long as the pixel shaders don't change the scene (such as depth info), you could easily make a 'software renderer' that could quickly evaluate what texture levels and resolutions are needed for each visible part of a model. It wouldn't even need to actually render anything - it just needs to do some of the calculations to determine where the polygons lie in screenspace and which mipmap should be applied. You also don't need to use a high-poly version of the model - the lowest poly version you have will probably suffice.

That's actually an idea I was entertaining for quite a while to determine visibility of each texture 'page', but I abandoned it for two reasons:
1) It'd probably just be too damn slow, because you'd have to render the scene at full resolution. If your final screen is 1920x1200, you can't have a 640x480 soft-rast going on, because if you missed a small bit of a page you'd probably end up with some popping as a polygon is revealed (e.g. if you have a simple 6-wall room with a pillar in the middle, and the camera is circle-strafing around the pillar, as the parts of the room behind the pillar are revealed you'll see accesses into parts of the virtual texture that you haven't allocated/uploaded yet, because the soft-rast thought that area was occluded by the pillar).
2) Your results probably wouldn't match what the hardware/driver is going to render, so you might get some over/underestimation of what's needed. Overestimation isn't that bad; so what, you load in a higher-res mip a bit early. Underestimation IS, though, because once the soft-rast finally decides that a higher-res mip than what's loaded is necessary, and the video card is already magnifying the lower-res mip in the texture pool, you'll consistently get pop-in as your camera moves around the world or as triangles face the camera more and more.

By the way, one thing I should note, and which I find to be a more interesting problem to solve than determining page visibility, is page _arrangement_. One thing I tried, since it is the most obvious solution, is to just say "okay, I've got this 32x32 page of a texture, I'll dump it somewhere in the virtual texture pool, and my meshes will use an indirection texture to translate the coordinates to the page I want to look up". However, that only works with point filtering. Once you start including linear filtering, you get some colour bleeding. Upgrade that to anisotropic filtering and things just go to hell. Because the hardware expects texture coordinates to be continuous (so that it knows what the aniso lookup pattern will be), the rendering results are completely wrong when your texture coordinates are non-continuous. The hardware will instead stretch the aniso filtering pattern all the way across the virtual texture pool, between the page you're trying to look up and the page that (in terms of the final render) is adjacent to the one you're looking up. This thread from a while ago was part of my attempt to figure out what was going on: http://www.gamedev.net/community/forums/topic.asp?topic_id=393047 . These images demonstrate what was going on:

The hand-made unarranged texture I was using to test with:
The artifact:
http://img.photobucket.com/albums/v627/Cypher19/artifactpage.jpg


The desired image (this was taken with point filtering):
http://img.photobucket.com/albums/v627/Cypher19/bestpointpage.jpg
Visualization of how the GPU is filtering those things:
http://img.photobucket.com/albums/v627/Cypher19/relthamexplan.jpg
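For reference, the naive indirection lookup I was describing works out to something like this (written CPU-side in C++ for clarity; in practice the pixel shader does it, and the names are mine). Without a border/gutter around each page, linear or aniso filtering near a page edge fetches texels from whatever unrelated page happens to sit next door in the pool, which is exactly the artifact above.

```cpp
#include <algorithm>

struct PageEntry { float physU, physV; };   // top-left of the page in the pool

// Translate a mesh's original ("virtual") UV into the physical page pool.
// 'indirection' has pagesAcross * pagesAcross entries, one per virtual page;
// 'pageSizeInPool' is the extent of one page in the pool's UV space.
void virtualToPhysical(float virtU, float virtV,
                       const PageEntry* indirection,
                       int pagesAcross,
                       float pageSizeInPool,
                       float& physU, float& physV) {
    float pu = virtU * pagesAcross;
    float pv = virtV * pagesAcross;
    int   px = std::min(int(pu), pagesAcross - 1);
    int   py = std::min(int(pv), pagesAcross - 1);
    const PageEntry& e = indirection[py * pagesAcross + px];
    // Fractional position inside the page, rescaled into the pool.
    physU = e.physU + (pu - px) * pageSizeInPool;
    physV = e.physV + (pv - py) * pageSizeInPool;
}
```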

[Edited by - Cypher19 on August 15, 2007 3:59:10 PM]

To be honest, I just don't see how it's possible to render all the static/opaque geometry (culling issues excepted) in one draw call.

I understand the theory: parsing the scene before render time, extracting which textures are in use, their priorities, their mipmap levels, etc. But from that information, where do you go?

To be able to render with only one draw call, you must allocate all your textures in a big virtual one. Whatever you do you are still limited by the hardware/driver restrictions. Let's take an example: your max texture resolution is 2048x2048, and you have 16 TMUs, so you've got enough "virtual" texture space for 2048x2048x16 pixels, before being forced to switch the textures (hence a new draw call).

Since Carmack specifically says that you could render the scene in one draw call, unless I'm missing something (like being able to switch a texture in the middle of a draw call), you're limited to 16 2048^2 textures. That's around 64 MB of video memory.

What if I use 300 MB of data in the frame? How does it get allocated into 16 2048^2 textures?

How do you handle compressed textures?

How do you handle texture tiling? Or has it become obsolete with MegaTextures, in the sense that all textures are "unique", even if you don't want them to be? Or do you leave it to the shaders to do their own tiling? If so, that means the engine can't run without pixel shaders, so I don't see how it could be implemented on DX7-level hardware or hardware without PS 2.0.

How do you offset the texture coordinates from the original texture to the virtual texture? You'd have to store a matrix for each object, meaning a matrix switch (-> new draw call), or you'd have to use an ID per vertex, upload matrices as shader constants and offset the tex coords in the shader. Then you need shader hardware, plus you're limited by the number of constants available.

Maybe a part of the answer is allocating the textures in a virtual 3D texture instead, but you still need to update the texture coordinates of each mesh to sample the correct layer in the volumetric texture.

Food for thought..

Y.

Quote:
Original post by Ysaneya

What if I use 300 MB of data in the frame? How does it get allocated into 16 2048^2 textures?

You only send down the fractions of each mip level that you need, basically.

Quote:
How do you handle compressed textures?

I'm guessing you mean DXTn stuff. If it's possible to send DXTn data down to the GPU in chunks, then I don't see how that would be an issue. For other compressed stuff, just take a high-quality JPEG or a PNG, decompress it on the CPU, and then load that uncompressed data to the GPU as necessary. (This reminds me: when I mentioned earlier that "a 64 MB card would be able to handle blah blah blah", that was in terms of uncompressed data.)

Quote:
How do you handle texture tiling? Or has it become obsolete with MegaTextures, in the sense that all textures are "unique", even if you don't want them to be? Or do you leave it to the shaders to do their own tiling? If so, that means the engine can't run without pixel shaders, so I don't see how it could be implemented on DX7-level hardware or hardware without PS 2.0.

It's become obsolete. Infinite texture memory literally means that; go ahead and tile the texture in Photoshop or using the content creation tools or whatever.

Quote:
Maybe a part of the answer is allocating the textures in a virtual 3D texture instead, but you still need to update the texture coordinates of each mesh to sample the correct layer in the volumetric texture.

Even if you could, how would you handle filtering between page edges?

The way I understood it, the megatexture is an all-encompassing texture atlas which contains a unique texture for every surface element in the world map. This way you can have scene painters working in the world like Carmack described. He also hinted that the current id technology doesn't enable the artists to sculpt the map the same way they're able to paint the environment as they run around the scene. I took this as an indication that during painting world geometry is static and thus the texture atlas arrangement may remain static while the artists paint texels.

If, during painting, the scene geometry needs to be modified (likely introducing new triangles) you'd have to re-arrange the texture atlas. I mean, in a system like this, you'd probably want to be able to give the system a texture budget, say 100K x 100K texels, along with the world geometry and then have the system make the best possible use of the available megatexture space (maximum texels per primitive). After that you'd give the result to the artists and let them paint.
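Just to illustrate what I mean by "maximum texels per primitive": the budgeting pass could be as simple as handing out texels in proportion to world-space area, so texel density is roughly uniform across the map. This is my guess at one possible scheme, not anything id has confirmed.

```cpp
#include <vector>

struct Primitive { double worldArea; double allocatedTexels; };

// Hand out the atlas budget (e.g. a 100K x 100K budget is 1e10 texels)
// in proportion to each primitive's world-space surface area.
void allocateTexelBudget(std::vector<Primitive>& prims, double budgetTexels) {
    double totalArea = 0.0;
    for (const Primitive& p : prims) totalArea += p.worldArea;
    for (Primitive& p : prims)
        p.allocatedTexels = budgetTexels * (p.worldArea / totalArea);
}
```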

It's clear that the whole megatexture won't fit into video memory, so, he must be maintaining a "working set" in video memory and I suppose this could be a local texture atlas which contains a superset of the textures for the visible primitives. I suppose you could maintain this working set by exploiting temporal coherence between frames and only update it as new textures (primitives) become visible.

Carmack also mentioned something about a test scene with a huge 100K x 100K megatexture. That pretty much means he's paging texture data in from disk, which would make a three-level paging scheme: disk <-> main memory <-> video memory.
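The video-memory end of such a scheme could be little more than an LRU cache over a fixed pool of page slots. A sketch of the bookkeeping (the names are mine, and the actual texel uploads are left to the caller):

```cpp
#include <cstdint>
#include <list>
#include <unordered_map>
#include <vector>

// Fixed number of physical slots in the on-GPU page pool, recycled in
// least-recently-used order as new pages become visible.
class PageCache {
public:
    explicit PageCache(int slotCount) {
        for (int s = 0; s < slotCount; ++s) freeSlots_.push_back(s);
    }

    // Returns the pool slot holding 'pageId', evicting the LRU page if full.
    int touch(uint64_t pageId) {
        auto it = resident_.find(pageId);
        if (it != resident_.end()) {                    // already resident
            lru_.splice(lru_.begin(), lru_, it->second.lruIt);
            return it->second.slot;
        }
        int slot;
        if (!freeSlots_.empty()) {
            slot = freeSlots_.back(); freeSlots_.pop_back();
        } else {                                        // evict the oldest page
            uint64_t victim = lru_.back(); lru_.pop_back();
            slot = resident_[victim].slot;
            resident_.erase(victim);
        }
        lru_.push_front(pageId);
        resident_[pageId] = { slot, lru_.begin() };     // caller uploads texels
        return slot;
    }

private:
    struct Entry { int slot; std::list<uint64_t>::iterator lruIt; };
    std::list<uint64_t> lru_;                           // front = most recent
    std::unordered_map<uint64_t, Entry> resident_;
    std::vector<int> freeSlots_;
};
```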

I wouldn't know about the specifics of an actual implementation as far as GPU programming goes, though.

Anyway, interesting stuff indeed.

-- Jani

Quote:
Original post by Ysaneya
To be honest, I just don't see how it's possible to render all the static/opaque geometry (culling issues excepted) in one draw call.

I understand the theory: parsing the scene before render time, extracting which textures are in use, their priorities, their mipmap levels, etc. But from that information, where do you go?

To be able to render with only one draw call, you must allocate all your textures in a big virtual one. Whatever you do you are still limited by the hardware/driver restrictions. Let's take an example: your max texture resolution is 2048x2048, and you have 16 TMUs, so you've got enough "virtual" texture space for 2048x2048x16 pixels, before being forced to switch the textures (hence a new draw call).

Since Carmack specifically says that you could render the scene in one draw call, unless I'm missing something (like being able to switch a texture in the middle of a draw call), you're limited to 16 2048^2 textures. That's around 64 MB of video memory.

The whole scene/level (the static part) has one huge texture applied. I believe the demo level he has shown uses a 128,000x128,000 texture. I'm not sure how he unwraps all the data, but it is all spatially coherent, so the terrain and whatever is on top of it at some spot is stored very close together in the MegaTexture.

That huge texture obviously doesn't fit in RAM, so he pages it in when needed. He could be using a 2048x2048 texture for the highest detail, 0 to 10 m from the viewer, then another 2048x2048 for 10-50 m, and maybe a third and fourth for 50+ m. The fragment shader then uses the 3 or 4 diffuse textures (he would also need the normal map and maybe some other textures), morphs the coords as needed and blends between them. Check out clipmaps; it's very similar.
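To illustrate the distance-banded idea (again, my interpretation of what he might be doing, not id's code), ring selection could be as simple as:

```cpp
#include <cstddef>

// Each "ring" is a fixed-size texture covering a larger and larger radius
// around the viewer, so nearby surfaces get the most texels per metre.
struct Ring { float maxDistance; };   // e.g. 10 m, 50 m, 250 m, ...

// Pick which ring (and therefore which cached texture) a fragment at
// 'distance' metres from the camera should sample; the shader would blend
// between ring i and i+1 near the boundary to hide the transition.
std::size_t selectRing(const Ring* rings, std::size_t ringCount, float distance) {
    for (std::size_t i = 0; i + 1 < ringCount; ++i)
        if (distance <= rings[i].maxDistance) return i;
    return ringCount - 1;   // farthest ring catches everything beyond
}
```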

MegaTexture also stores the normal maps and other data, I believe some physics parameters...

Quote:
How do you handle compressed textures?

The MegaTexture is heavily compressed. It's all one texture in the end; it doesn't matter how the individual textures are handled before the level is compiled.

Quote:
How do you handle texture tiling? Or has it become obsolete with MegaTextures, in the sense that all textures are "unique", even if you don't want them to be? Or do you leave it to the shaders to do their own tiling?

Tiling gets compiled into the one huge MegaTexture.

I don't know if this is covered in the keynote (I haven't watched it yet), but check out the idTech5 walkthrough videos, especially the Tools one.

Part 1
Part 2
Tools

From everything I've read (all I could find on MegaTextures) and watched (Carmack's speeches), the ground is made up of a massive texture, 128,000x128,000 pixels (of course the artists decide what size is needed; it can be smaller, and I'm not sure if that's the max). It's one continuous texture on the hard drive that is paged in as needed. Also, each object in the world gets a MegaTexture of its own, including the characters in the game, but I'm quite positive those are much smaller textures (a guess here), say 2048x2048.

It also seems, from what he has said, that the ultra-high-resolution texture is only used, say, right under the player, while lower quality is used for things further away; quality seems to be degraded on the fly as you get further and further out (automatic mip-mapping). Since this works on the PS3, which has less memory available than the 360, the high-resolution maps will likely be lower resolution there than on, say, the 360, which in turn might be lower than on the PC. It seems like they're calculated in real time.

I believe the working texture is created from all the textures determined to be needed. These are put into a large block of memory and uploaded. It is updated in real time for the sub-parts that change from frame to frame, and the changes are uploaded to the GPU (likely not the whole texture, just the sub-sections). This working texture holds texture data for both static geometry and dynamic objects (say, a person in the game). Data is uploaded to the shader telling it where its texture(s) are within the larger texture.
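In OpenGL the partial updates could be plain glTexSubImage2D calls on the changed regions; a sketch of the idea (not idTech5 code, and the parameters are my own):

```cpp
#include <GL/gl.h>

// Re-upload only the atlas region whose contents changed, so the per-frame
// cost scales with what moved into view rather than with the size of the
// whole cache texture.
void uploadChangedPage(GLuint atlasTexture,
                       int xOffset, int yOffset,      // texel position in atlas
                       int width, int height,         // page dimensions
                       const unsigned char* rgbaTexels) {
    glBindTexture(GL_TEXTURE_2D, atlasTexture);
    glTexSubImage2D(GL_TEXTURE_2D, 0,                 // mip level 0 of the atlas
                    xOffset, yOffset, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgbaTexels);
}
```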

It's a really interesting concept. The tools video from QuakeCon showed a lot: for example, the texture can be tiled, but every pixel is saved, so if you wanted to stamp out the ground with set tiles you could then custom-paint between them, etc. They also had brushes like in MS Paint, which you could resize and so on.

I'm actually working on something similar, and have been for a few months, but we're in the process of putting our house on the market, so I have so little spare time it's not even funny. I had a demo with an 8 GB "generated" texture I was streaming from, but the HD it was on died a very painful death and, like an idiot, I did not have that project backed up. I was just texturing static terrain, and it looked pretty darn good and worked very well. It took me about a week to implement, after I spent a week reading everything I could find where Carmack discussed it.

I do want to know what type of compression is used; going from 80 GB to 2 DVDs is pretty impressive. I was just using raw data myself.

Edit:

@nts -- That rocks, same basic information typed around the same basic time. When I hit reply I didn't see your posting. Awesome links!

[Edited by - Mike2343 on August 15, 2007 7:17:24 PM]


I find this all very confusing. I share Ysaneya's questions and I'm not even sure I understand the basic concept, much less how the implementation should work. Going from this part of the transcription, I got a dim idea of how it works in my mind. Could any of you verify if I'm on the right track?

Quote:
And you can wind up taking, you know, in this room you would have the podiums, the floors, the stands, the lights. All of these things wind up, while they're separate models and you can move them around and position them separately, but when you're ready to "go fast" they all get built into the same model. So you've got one draw call because they all use the same virtual materials on there.


It sounds like it's mainly meant to work for relatively static objects. At some point during build-time, or sparsely at runtime when some scene has been loaded or altered, all static objects are accumulated into one big batch buffer. To make this work for multiple objects/textures, in bare essence a UVW map is generated for this entire buffer, unwrapping the per-object textures into the scene-wide 'megatexture'. Is that the basic idea of this technique?

Remigius: Yeah, based on the quote, it does seem like the modellers first sculpt the static geometry part of the world, then some kind of processing step is run over the model which constructs this huge texture atlas (the mega texture) containing a texture for each triangle. As long as the geometry remains unaltered, so does the organization of the texture atlas. After this processing, the artists can start painting the world, in other words, updating the texels in the atlas.

The texture atlas can be organized in such a way that the textures of triangles spatially close to each other are close to each other in the texture atlas as well. I suppose you could take advantage of this during rendering when trying to figure out which (sub)textures of the atlas to maintain in the current working set.

-- Jani

I feel like the quote I posted earlier was slightly misleading: I view it as Carmack describing a nice side effect of the texturing system, not a core component of it.

It's worth noting that it's been mentioned that this texturing technique applies to both static and animated meshes. In one of the tech demo videos they point out that the store owner guy with the big hat uses a 2k*2k texture just for his face. Carmack mentions that this may be a little wasteful, but also says that it really doesn't matter because it doesn't affect the game performance at all. We're also told that the artists can create those meshes however they want, using pretty much any tools they want (I would imagine they have something like a Collada importer), which tends to imply that the textures for most models are not done in a pre-built atlas. (The artists probably wouldn't have been too happy with that!)

Now that's just for individual meshes, the landscape may be a special case, but it does highlight the fact that they must be building the atlas on the fly and, subsequently, determining texture visibility and detail level on the fly. Those two elements are what makes the system so impressive in my view.

@born49: Thank you for posting that paper! Very interesting indeed, and quite probably related to what the idTech5 is doing.

Quote:
It sounds like it's mainly meant to work for relatively static objects. At some point during build-time, or sparsely at runtime when some scene has been loaded or altered, all static objects are accumulated into one big batch buffer. To make this work for multiple objects/textures, in bare essence a UVW map is generated for this entire buffer, unwrapping the per-object textures into the scene-wide 'megatexture'. Is that the basic idea of this technique?


No. The basic idea is that it's one massive texture. Think of one massive JPEG or PNG file (with layers like PSP files, but they get baked in, with a history it seems; that's more file format than technique, though, so we won't get into it). This applies to ANY object in the world. It can be any size (I do believe he mentioned 128,000 x 128,000 being the maximum, but I'm sure that's just a "set" limit). Like he said, the shop vendor's face is around 2048x2048. It sounds like the engine/tools modify this size in real time to whatever the hardware can handle. It's to give the artists total freedom: not having to worry about anything but making the map/level/area look amazing. You can modify the geometry at any point and the texture just gets remapped, from the sounds of it.

He said that, depending on the resolution, it could take up to 5 minutes to save an artist's changes for the area of the texture they've "checked out". Since he said several artists can work on the map at a time, it's likely they check out sections and can only modify those. That would make sense, and it's what I was doing before I got distracted.

Since all four systems (Win32, 360, PS3 and MacOS) share the same data set (from the same server), I believe the engine changes the level of detail in real time to suit each system's specifications. So since the PS3 technically has less memory available to each game than, say, the 360 (the PS3 OS uses more memory), it likely has a lower level of texture detail than the 360/PC/Mac.

Mr. Carmack has said several times that the system was very easy to implement and not all that complex. The hardest part, he said, was writing the shader, and even that didn't take long.

How I did it was to fill the texture units with the largest texture atlases I could and update them as rarely as possible. Full refreshes were even rarer; I mostly updated sub-sections as needed. I did try to keep static and nearby objects in the same atlas(es): say the terrain took up 2 texture units and the second one was only half used, I'd put static objects' textures in there as well.

But as someone else said, and to make sure people understand: ALL objects can have a MegaTexture applied to them. There is no limit to how many textures are used or how large they are. The engine manages all of those details, leaving the artists to do what they do best.

Hope that helps.

Here's a fun-to-read Carmack .plan from 2000:

http://www.bluesnews.com/cgi-bin/finger.pl?id=1&time=20000308010919

He's been talking about this stuff forever. I specifically remember him talking to me about it in 1995 or so!

Quote:
Original post by Mike2343
...

Hope that helps.


Sorry, it doesn't. I'm probably just dumb, but most of Carmack's talks, and the write-ups on them I've read, haven't been too clear to me. Judging by the confusion in this thread, though, I'm not the only one who isn't instantly enlightened by his talks [smile]


Quote:

No. The basic idea is that it's one massive texture. Think of one massive JPEG or PNG file (with layers like PSP files, but they get baked in, with a history it seems; that's more file format than technique, though, so we won't get into it). This applies to ANY object in the world.


Ok... then how does this work toward drawing the scene in 3 draw calls? From my limited understanding, this sounds more like the original megatextures idea, which seems to be 'just' a part of the virtualized textures scheme. As far as I get it, virtualized textures is a fancy paging system to actually put all this data to good use.

I wish Carmack would provide sample implementations instead of making us all guess at what he meant [wink]

