I'm bored right now, so I'm going to take this point by point
quote:
I've heard most engines use a thing called pre-caching. Isn't this where textures ARE loaded into memory once at startup to save on the load later? Or am I off base?
That's half-correct. Textures are loaded once by the API. The API (OpenGL or D3D) creates a local copy in system RAM. Then, when a texture gets accessed ('bound') by the GPU, this cached copy is uploaded to the 3D card, and normally it stays there. The exception is when you render tons of other textures afterwards: if the RAM on your video card is smaller than the combined memory requirements of all textures displayed in a single frame, resident textures will get overwritten by new ones. On modern 3D cards (128MB RAM), this is very unlikely to happen. There is currently no game on the market that can cause texture thrashing on a 128MB card. You need very high resolution textures (1024² or 2048²) or very poorly written games (no texture compression) to cause this kind of cache thrashing.
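For scale, here is a quick back-of-the-envelope sketch (my own numbers, not from the original post) of how much VRAM a single texture costs, and why compression keeps you far away from the thrashing case:

```python
# Rough VRAM cost of a square texture. Illustrative assumptions:
# 4 bytes/texel for uncompressed RGBA, 0.5 bytes/texel for DXT1-style
# compression, and a mip chain adding roughly one third on top.

def texture_bytes(side, bytes_per_texel=4, mipmaps=True):
    """Approximate memory for a side x side texture."""
    base = side * side * bytes_per_texel
    return base * 4 // 3 if mipmaps else base

# A 2048x2048 RGBA texture: 16 MiB base, ~21 MiB with mipmaps.
big = texture_bytes(2048)
# Compressed at 0.5 bytes/texel it shrinks by 8x.
big_compressed = texture_bytes(2048, bytes_per_texel=0.5)

print(big // 2**20)              # ~21 MiB uncompressed
print(round(big_compressed / 2**20, 1))  # ~2.7 MiB compressed
```

A handful of uncompressed 2048² textures already eats a large chunk of a 128MB card, which is exactly the "poorly written, no compression" scenario described above.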
quote:
Secondly, it's not a matter of Game Engines in the first place. Sure, the Q3 engine is optimized to handle higher detail...but it is only software! It's the video card and monitor that have to handle that information; THAT is where the effects of frame rates come in.
Is there any truth to this? I was under the impression it was video card + CPU + software that had the biggest effect, and the monitor had a very minimal effect on framerate?
Also half-true. The video card surely is a vital part of the system, but the CPU is almost as important, and so is bus transfer speed (AGP). One of the most important factors, though, is the software: a well written 3D engine can render amazing, highly detailed scenes fluidly on a GF2. On the other hand, a poorly written engine will crawl through the same scene on a GF4.
I don't see what the monitor has to do with it. Perhaps he is referring to the refresh rate (vsync)?
quote:
Take a look at the file size of an object file sometime. I'll give you an example. The high poly tree that you can see on my Naboo Swamp thread is 103 kb. I have two textures on it. The bark is a 32x32 mat that is 3 kb, and the branches/leaves are 128x128 mats that are 33 kb each. Now let's do some math here, folks. 3 kb times every surface that is textured as bark (300) and we get 900 kb. Add to that 33 kb times every surface that the branch covers (92), and we get 900 kb (bark) plus 3036 kb (branches/leaves). This comes to 3936 kb of information being thrown across your screen on a model that only takes up 103 kb. That's 31.2 times as much information in textures as in the single 3do!
This guy is very confused. Again, this is half-true. When talking about 3D information being thrown across a bus system, we have to keep in mind that there are two distinct bus systems: the AGP bus (over which your 3D card communicates with the CPU) and the onboard memory/GPU bus (the 'Hypertransport' bus on nVidia hardware). He seems to imply that those 4MB of information get pushed from the CPU to the 3D card. That is wrong: once uploaded, the textures stay resident on the card.
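A toy model (my own illustration, not anything from the post) of that distinction: the AGP upload happens once, and binding an already-resident texture pushes nothing over that bus again.

```python
# Toy model of texture residency. The 'upload' over the AGP bus happens
# on first bind only; later binds hit the VRAM-resident copy for free.
# Purely illustrative; no eviction is modeled here.

class Card:
    def __init__(self):
        self.vram = set()       # ids of textures resident in video RAM
        self.agp_bytes = 0      # total traffic pushed from CPU to card

    def bind(self, tex_id, size_bytes):
        if tex_id not in self.vram:   # first use: upload over AGP
            self.agp_bytes += size_bytes
            self.vram.add(tex_id)
        # already resident: no AGP transfer at all

card = Card()
for frame in range(100):              # 100 frames, same 4 MB texture
    card.bind("tree_bark", 4 * 2**20)

print(card.agp_bytes // 2**20)  # 4 -- uploaded once, not 400 MB
```

This is why the 3936 kb figure above tells you nothing about AGP traffic per frame.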
But it is true that a significantly higher amount of information gets pushed over the internal GPU bus. That's what is commonly referred to as 'fillrate'. If your application is fillrate-bound, then it saturates this onboard bus. But the effects are not as easy to calculate as he did: you have to take onboard caching, mipmap striding, texture filtering, multiple framebuffer accesses, etc. into account. Also, modern 3D card VRAM is dual-ported, which makes the calculations even more complex.
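To show why the onboard numbers dwarf his file-size math, here is a rough sketch (my own simplification; real hardware layers caching, compression and dual-ported VRAM on top of this) of the bandwidth a fillrate-bound scene actually demands:

```python
# Rough sketch of onboard ('fillrate') bandwidth demand per frame.
# All parameters are illustrative assumptions, not measurements.

def frame_bandwidth_bytes(width, height, overdraw, texel_fetches,
                          bytes_per_texel=4, framebuffer_bytes=4):
    """Bytes moved over the GPU-local bus to draw one frame.

    overdraw:      average number of times each screen pixel is shaded
    texel_fetches: texture samples per shaded pixel (filtering multiplies this)
    """
    pixels = width * height * overdraw
    texture_traffic = pixels * texel_fetches * bytes_per_texel
    # Each shaded pixel typically reads and writes the framebuffer/z-buffer.
    framebuffer_traffic = pixels * 2 * framebuffer_bytes
    return texture_traffic + framebuffer_traffic

# 1024x768, 3x overdraw, bilinear filtering (4 texel fetches), 60 fps:
per_frame = frame_bandwidth_bytes(1024, 768, 3, 4)
per_second = per_frame * 60
print(per_second / 2**30)  # 3.1640625 -- roughly 3.2 GiB/s
```

Even with these modest assumptions the bus moves gigabytes per second, so the per-frame traffic is measured in tens of megabytes, not the 3936 kb of the file sizes involved.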
Conclusion: the guy is confused
He has heard some true bits of information, but he is interpreting them wrong.
[edit: my grammar still sucks]
/ Yann
[edited by - Yann L on August 15, 2002 8:37:10 PM]