Archived

This topic is now archived and is closed to further replies.

Jaiph

Is this guy way off base?

Recommended Posts

I am an amateur level designer for the games Jedi Knight and Jedi Outcast. One of the sites I hang out at most is Massassi.com (a great editing resource), but I have to say I'm a self-admitted non-techie: I know next to nothing about the mechanics of 3D gaming engines or the hardware involved. On Massassi's forums, a debate started -> http://forums.massassi.net/html/Forum5/HTML/008402.html <- about the difference between the two game engines for Jedi Knight (Sith engine) and Jedi Outcast (Q3 engine). Now this guy, Friend14, has made a lot of claims in this thread and I've been trying to work out if there is any truth to them. Here are a couple:
quote:
Textures are NOT, I repeat, ARE NOT loaded into memory only once! I don't know how big you memory on you card is, but I only have 128MB on my GF4ti. Not even allof that is dedicated to textures. Sure you can alot part of your system memory to handle it, but I heed warning against uping that number to high. It is in no way possible that ALL the textures are only loaded once.
I've heard most engines use a thing called pre-caching. Isn't this where textures ARE loaded into memory once at startup to save on the load later? Or am I off base?
quote:
Secondly, it's not a matter of Game Enigines in the first place. Sure the Q3 engine is optomized to handle higher detail...but it is only software! It's the video card and monitor that have to handle that information THAT is where the effects of frame rates come in.
Is there any truth to this? I was under the impression it was videocard + cpu + software that had the biggest effect and the monitor had a very minimal effect on framerate?
quote:
Take a look at the file size of an object file sometime. I'll give you an example. The high poly tree that you can see on my Naboo Swamp thread, is 103 kb. I have two textures on it. The Bark is a 32x32 mat that is 3 kb. And the Branches/leaves are 128x128 mats that are 33 kb each. Now let's do some math here folks. 3 kb times every surface this is textured as bark (300)and we gett 900 kb. Add to that, 33 kb times every surface that the branch covers (92) and we get 900 kb (bark) plus 3036 kb (branches/leaves). This comes to 3936 kb of information being thrown across your screen on a model that only takes up 103 kb. That's 31.2 times as much information in textures as it is in the single 3do!
This quote relates to his comment above about textures NOT being loaded only once. This isn't really how it works, is it? Textures aren't rendered onto 3D objects *that* inefficiently, are they? There are a lot of other posts in that discussion, but these are the things I have the most queries about. Any information would be appreciated, as I'm not sure what to make of a lot of this information that is flying around. [edited by - Jaiph on August 15, 2002 7:37:36 PM] [edited by - Jaiph on August 15, 2002 7:40:44 PM]

Guest Anonymous Poster
It's all BS. That guy is blowing smoke.

-James

I'm not experienced with any of the more modern 3D APIs, but let me be the first to say I HOPE that last remark isn't true. With my experience in software 3D engines, I see no reason that any texture would be stored in memory more than once, as I think he's implying. The texture is stored in an array, and mapped onto polys wherever it's needed. That means on any polys in an object, and on any object. I can't imagine any reason to store more than 36kb worth of textures in his example (3 for bark, 33 for leaves).
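To make that concrete, here's a rough sketch in Python of what I mean (all names invented, just pseudocode-style illustration, not any real engine's API): the mesh stores each texture once, and every polygon just holds an index into that shared list.

```python
# Hypothetical sketch: textures live once in a shared list;
# polygons reference them by index, so 392 polys don't mean 392 copies.

class Mesh:
    def __init__(self):
        self.textures = []        # each texture's bytes stored exactly once
        self.polygons = []        # (vertex_data, texture_index) pairs

    def add_texture(self, tex_bytes):
        self.textures.append(tex_bytes)
        return len(self.textures) - 1

    def add_polygon(self, verts, tex_index):
        self.polygons.append((verts, tex_index))

    def texture_memory(self):
        # memory cost is the sum of unique textures, not one copy per poly
        return sum(len(t) for t in self.textures)

# Friend14's tree: 300 bark surfaces + 92 branch surfaces, but 2 textures
tree = Mesh()
bark = tree.add_texture(b"\x00" * 3 * 1024)    # 3 KB bark texture
leaf = tree.add_texture(b"\x00" * 33 * 1024)   # 33 KB branch/leaf texture
for _ in range(300):
    tree.add_polygon("quad", bark)
for _ in range(92):
    tree.add_polygon("quad", leaf)

print(tree.texture_memory())   # → 36864 bytes (36 KB), not ~3.9 MB
```

392 polygons, 36 KB of texture memory total.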

-Arek the Absolute

I don't want to respond to each individual quote, but you can be assured "Friend14" knows little to nothing of the mechanics inherent in any game engine.

quote:
Original post by Anonymous Poster
It's all BS. That guy is blowing smoke.

-James

Agreed.

quote:

Secondly, it's not a matter of Game Enigines in the first place. Sure the Q3 engine is optomized to handle higher detail...but it is only software! It's the video card and monitor that have to handle that information THAT is where the effects of frame rates come in.


[sarcasm] Yeah, right [/sarcasm]. If it's hardware that is the limit, then why are we continually pushing older hardware way past what it was designed to do? Some time ago there was an Image Of The Day where some guys had created a very simple 3D engine running on an Amiga (I think). If the hardware were the limit, how would that be possible? Not only that, but monitors don't really handle information, other than the resolution, color depth, and refresh rate. If I keep the same resolution, color depth, and refresh rate, it won't matter if I run it on a 14" monitor or a 21" monitor.

If I don't optimize my engine, then it will be hardware limited. According to NVidia, "All games are CPU limited - That's over 80% of games that I see" (quoted straight from an NVidia document on Direct3D8 performance). There will continually be tweaks and optimizations for the software.

quote:

Textures are NOT, I repeat, ARE NOT loaded into memory only once! I don't know how big you memory on you card is, but I only have 128MB on my GF4ti. Not even allof that is dedicated to textures. Sure you can alot part of your system memory to handle it, but I heed warning against uping that number to high. It is in no way possible that ALL the textures are only loaded once.


If 128mb of video memory isn't enough, then YOU are doing something wrong, or using a whole buttload of textures. The only reason a texture would be loaded more than once into vid memory is if it were changing color formats. I recall a previous GDNet thread about one of the D3DX functions that allocated/deleted/reallocated memory because it was changing the color format of the texture to whatever was best. You should be able to fit a crapload of textures (including mip maps), vertex buffers, index buffers, and whatever else you need into 128mb. Look at the PS2 and Xbox. The PS2, or so I have heard, has 4mb of vid memory and 48mb of system memory (or something like that), and the Xbox has 64mb of unified memory (as fast as vid memory, but accessible by the CPU, AFAIK). Are you telling me that what I am seeing isn't possible?
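A quick back-of-envelope calculation backs this up (my numbers are illustrative assumptions: uncompressed 32-bit RGBA, a full mip chain adding roughly a third on top of the base level):

```python
# How many 128x128 uncompressed RGBA textures fit in a 128 MB card?
# Assumed sizes only, for illustration; compression would fit far more.

def texture_bytes(width, height, bytes_per_texel=4, mipmaps=True):
    base = width * height * bytes_per_texel
    # a full mip chain adds roughly 1/3 on top of the base level
    return base * 4 // 3 if mipmaps else base

card = 128 * 1024 * 1024
per_tex = texture_bytes(128, 128)   # 87381 bytes with mips
print(card // per_tex)              # → 1536 textures fit
```

Over 1500 mip-mapped 128x128 textures, before you even turn on texture compression. That's a lot of trees.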

quote:

Take a look at the file size of an object file sometime. I'll give you an example. The high poly tree that you can see on my Naboo Swamp thread, is 103 kb. I have two textures on it. The Bark is a 32x32 mat that is 3 kb. And the Branches/leaves are 128x128 mats that are 33 kb each. Now let's do some math here folks. 3 kb times every surface this is textured as bark (300)and we gett 900 kb. Add to that, 33 kb times every surface that the branch covers (92) and we get 900 kb (bark) plus 3036 kb (branches/leaves). This comes to 3936 kb of information being thrown across your screen on a model that only takes up 103 kb. That's 31.2 times as much information in textures as it is in the single 3do!


As far as I know, the texture is only loaded once, and applied many times. That's what texture coordinates are for. You throw the texture and the texture coordinates at the vid card, not a texture for each set of coordinates. The vid card processes the information and applies the texture in the correct manner. You aren't throwing 300 separate bark textures and 92 branch textures along with the right number of tex. coords, you are only throwing two or three textures.



Moe's site

I'm bored right now, so I'm going to take this point by point.

quote:

I've heard most engines use a thing called pre-caching. Isn't this where textures ARE loaded into memory once at startup to save on the load later? Or am I off base?


That's half-correct. Textures are loaded once by the API. The API (OpenGL or D3D) creates a local memory copy (in your system RAM). Then, when a texture gets accessed ('bound') by the GPU, this cached memory is copied to the 3D card. And normally, it stays there. The exception is if you render tons of other textures afterwards: if the RAM on your video card is smaller than the memory requirements of all textures displayed in a single frame, a texture will eventually get overwritten by a new one. On modern 3D cards (128MB RAM), this is very unlikely to happen. There is currently no game on the market that can cause texture thrashing on a 128MB card. You need very high resolution textures (1024² or 2048²) or very poorly written games (no texture compression) to cause this kind of cache thrashing.
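If it helps, here's a toy model of that residency behaviour (all sizes and the eviction policy are my own simplification, not what any particular driver does): a texture is uploaded once on first bind and stays resident until space runs out, at which point the least-recently-used one gets evicted.

```python
# Toy model of video-memory texture residency with LRU eviction.
# Capacity and texture sizes are arbitrary illustration values.

from collections import OrderedDict

class VideoMemory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = OrderedDict()   # texture id -> size, in LRU order
        self.uploads = 0                # bus transfers actually performed

    def bind(self, tex_id, size):
        if tex_id in self.resident:
            self.resident.move_to_end(tex_id)   # cache hit: no upload
            return
        while sum(self.resident.values()) + size > self.capacity:
            self.resident.popitem(last=False)   # evict least recently used
        self.resident[tex_id] = size
        self.uploads += 1

vram = VideoMemory(capacity=100)
for frame in range(60):                 # 60 frames reusing the same textures
    for tex in ("bark", "leaves", "sky"):
        vram.bind(tex, 30)
print(vram.uploads)                     # → 3: each texture uploaded once
```

Sixty frames, three uploads total. Thrashing only starts if the working set per frame exceeds the capacity, which is exactly why it doesn't happen on a 128MB card with today's games.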

quote:

Secondly, it's not a matter of Game Enigines in the first place. Sure the Q3 engine is optomized to handle higher detail...but it is only software! It's the video card and monitor that have to handle that information THAT is where the effects of frame rates come in.

Is there any truth to this? I was under the impression it was videocard + cpu + software that had the biggest effect and the monitor had a very minimal effect on framerate?


Also half-true. The video card surely is a vital part of the system. But the CPU is almost as important, and so is bus transfer speed (AGP). One of the most important factors, though, is the software: a well written 3D engine can get you amazing, highly detailed scenes running fluidly on a GF2. On the other hand, a poorly written engine will crawl through the same scene on a GF4.

I don't see what the monitor has to do with it. Perhaps he is referring to refresh rate (vsync)?

quote:

Take a look at the file size of an object file sometime. I'll give you an example. The high poly tree that you can see on my Naboo Swamp thread, is 103 kb. I have two textures on it. The Bark is a 32x32 mat that is 3 kb. And the Branches/leaves are 128x128 mats that are 33 kb each. Now let's do some math here folks. 3 kb times every surface this is textured as bark (300)and we gett 900 kb. Add to that, 33 kb times every surface that the branch covers (92) and we get 900 kb (bark) plus 3036 kb (branches/leaves). This comes to 3936 kb of information being thrown across your screen on a model that only takes up 103 kb. That's 31.2 times as much information in textures as it is in the single 3do!


This guy is very confused. Again, this is half-true. When talking about 3D information being thrown across a bus system, we have to keep in mind that there are two distinct bus systems: the AGP bus (where your 3D card communicates with the CPU) and the onboard memory/GPU bus system (the 'HyperTransport' bus on nVidia hardware). He seems to imply that those 4MB of information get pushed from the CPU to the 3D card. That is wrong.

But it is true that significantly higher amounts of information get pushed over the internal GPU bus. That's what's commonly referred to as 'fillrate'. If your application is fillrate-bound, then it saturates this onboard bus. But the effects are not as easy to calculate as he makes them: you have to take onboard caching, mipmap striding, texture filtering, multiple framebuffer accesses, etc. into account. Also, modern 3D card VRAM is dual-ported, which makes the calculations even more complex.
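For a feel of the magnitudes (the numbers here are made up for illustration; real cost also depends on the caching and filtering effects mentioned above), a first-order fillrate estimate looks like this:

```python
# Rough fillrate arithmetic: internal GPU bandwidth scales with
# resolution, average overdraw, and bytes touched per pixel write,
# not with the on-disk size of any model. Numbers are illustrative.

def frame_bandwidth_bytes(width, height, overdraw, bytes_per_pixel):
    # every screen pixel is written 'overdraw' times on average;
    # each write touches color, depth, and texture data
    return width * height * overdraw * bytes_per_pixel

# 1024x768, average overdraw of 3, ~12 bytes touched per pixel write
per_frame = frame_bandwidth_bytes(1024, 768, 3, 12)
print(per_frame)          # → 28311552 bytes, about 27 MB per frame
print(per_frame * 60)     # ≈ 1.6 GB/s at 60 fps, all on the internal bus
```

Note that none of this crosses the AGP bus per frame; it's all internal to the card, which is why his "thrown across your screen" math doesn't mean what he thinks it means.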

Conclusion: the guy is confused. He has heard some true bits of information, but he is interpreting them wrong.

[edit: my grammar still sucks]

/ Yann

[edited by - Yann L on August 15, 2002 8:37:10 PM]

This guy is seriously deranged. Only the first comment makes sense.

quote:

Textures are NOT, I repeat, ARE NOT loaded into memory only once! I don't know how big you memory on you card is, but I only have 128MB on my GF4ti. Not even allof that is dedicated to textures. Sure you can alot part of your system memory to handle it, but I heed warning against uping that number to high. It is in no way possible that ALL the textures are only loaded once.


I'd also add that most widely used textures are in memory and STAY in memory. However, not all textures are kept in VIDEO memory. Some rarely used (or not yet used) textures are kept on the hard drive until needed. That explains some latency when entering new rooms in some games.
quote:

Secondly, it's not a matter of Game Enigines in the first place. Sure the Q3 engine is optomized to handle higher detail...but it is only software! It's the video card and monitor that have to handle that information THAT is where the effects of frame rates come in.


This guy seems to say that the software itself has NOTHING to do with speed. That's nonsense. Sure, the hardware is important, but if you send too much useless data to the hardware, it will behave like a jerk. Quake3 is very good at optimizing these things (especially in enclosed and claustrophobic spaces) -- it won't send a poly that isn't visible.

The screen has NOTHING to do with the speed of a game (except for VSync). The screen is analog; that is, it receives approximated RGB values for the whole screen, usually 75 times per second, and draws them as it receives them. The screen does not receive any texture or poly information. It works exactly like a TV set: the TV channel doesn't send a person and a car to the TV, it sends an image to display. Period.
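In other words (a trivial sketch, with made-up display values), the signal the monitor receives depends only on resolution, color depth, and refresh rate; scene complexity never even appears in the formula:

```python
# The monitor's input bandwidth is fixed by the video mode alone.
# Illustrative values: 1024x768, 24-bit color, 75 Hz refresh.

def signal_bytes_per_second(width, height, bytes_per_pixel, refresh_hz):
    return width * height * bytes_per_pixel * refresh_hz

# The "scene" is not a parameter: an empty room and a packed battle
# produce exactly the same signal rate at the same video mode.
simple_scene = signal_bytes_per_second(1024, 768, 3, 75)
complex_scene = signal_bytes_per_second(1024, 768, 3, 75)
print(simple_scene == complex_scene)   # → True
```

So the monitor cannot be the thing limiting frame rate; it just draws whatever arrives.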
quote:

Take a look at the file size of an object file sometime. I'll give you an example. The high poly tree that you can see on my Naboo Swamp thread, is 103 kb. I have two textures on it. The Bark is a 32x32 mat that is 3 kb. And the Branches/leaves are 128x128 mats that are 33 kb each. Now let's do some math here folks. 3 kb times every surface this is textured as bark (300)and we gett 900 kb. Add to that, 33 kb times every surface that the branch covers (92) and we get 900 kb (bark) plus 3036 kb (branches/leaves). This comes to 3936 kb of information being thrown across your screen on a model that only takes up 103 kb. That's 31.2 times as much information in textures as it is in the single 3do!


When you draw stuff with textures, you don't send the texture along with each face you draw. No, you send the texture (unless it's cached in video memory), then tell the hardware to draw a list of faces using this texture. In the worst case, the transfer for drawing a static model is almost equal to the size of the model plus the textures. I say almost because some programs will compute normals and other components after loading the model.
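As a sketch (the command names are invented, this is not any real API), a renderer groups faces by texture, binds each texture once, and then submits the whole face list:

```python
# Sketch of draw batching: one texture bind per texture, not per face.

def draw_model(faces):
    """faces: list of (texture_name, face_data). Returns a command stream."""
    by_texture = {}
    for tex, face in faces:
        by_texture.setdefault(tex, []).append(face)
    commands = []
    for tex, face_list in by_texture.items():
        commands.append(("BIND_TEXTURE", tex))       # texture referenced once
        commands.append(("DRAW_FACES", len(face_list)))
    return commands

# Friend14's tree: 300 bark faces + 92 leaf faces, only two binds
tree = [("bark", i) for i in range(300)] + [("leaves", i) for i in range(92)]
print(draw_model(tree))
# → [('BIND_TEXTURE', 'bark'), ('DRAW_FACES', 300),
#    ('BIND_TEXTURE', 'leaves'), ('DRAW_FACES', 92)]
```

Two texture binds for 392 faces, which is exactly why his "3936 kb thrown across your screen" number measures nothing real.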

quote:
Original post by Coincoin
I'd also add that most widely used textures are in memory and STAY in memory. However, not all textures are kept in VIDEO memory. Some rarely used (or not used yet) textures are kept on the harddrive until needed. That explains some latency when entering new rooms in some games.


Uhh, most of those statements depend on the mechanics of the game engine. I'm really not certain whether the APIs have any built-in texture prioritizing features; I haven't heard of any. The way it works most of the time, though, as mentioned before, is that textures are loaded into RAM once and then sent on demand to the gfx card. Many professional games do have incremental loading, where textures are streamed from the HDD, but that is not a function of the API, so your observations are circumstantial at best.

Later,
ZE.



The guy seems confused. From what he claimed, it sounds like he thinks there is a separate, distinct texture for every petal and piece of bark (or surface)... which would mean that (exaggerating a 'lil bit) even a 4GB video RAM card could hardly fit games like Black&White, SeriousSam2, etc... IF every game did that!

[edited by - DerekSaw on August 15, 2002 9:09:21 PM]

quote:
Original post by ZealousElixir
Uhh, most of those statements depend on the mechanics of the game engine. I'm really not certain if APIs have any built-in texture prioritizing features, but I haven't heard of any.


Just to add to that: actually, both big APIs (D3D and OGL) have such features integrated into the driver. The exact implementation is manufacturer-dependent and might range from a simple LRU scheme to some 'very smart heuristic' in nVidia drivers (quoting Matt Craighead).

Besides that, you're definitely right: it fully depends on how the engine handles it.

/ Yann

Ah, ok, thanks for all the info folks. People tried to talk some sense into this guy, but he seems to be stubbornly sticking to his ideas in a new thread here. I just don't understand why people can't admit when they are wrong... I do it all the time!

quote:

So, then, when you enter JO, say you decide to Load a random level. This is where the Game engine actually takes over. The Engine sends a screenshot to your GPU to be processed and sent to your monitor along with the loading image, which is updated on progress. The Engine then begins to load everything within the FOV of you player, based on the entry point of the level, and begins loading everything from textures, entities, sounds, or whatever else your Video card will need to process in the first few seconds upon entering the level. (And textures, sounds, entities, ect. not in your FOV is later loaded to your memory shortly before it is to be displayed on your monitor). As previously loaded material in your memory is no longer needed, it is cleaned out to allow space for the new information. This process is what dramtically increases performance.

Ever notice how most of the JO levels start you in a confined area, or open area that has a sharp bend left or right in the path not far from where you start? This is a trick they use to help increase load time. The less information displayed in-view upon entering the level, the less information that needs to be pre-loaded into the video cards memory.

OMG! Is this Friend14 guy a loser!! All of it, complete rubbish!

CEO Platoon Studios

From the 2nd thread it sounds like he is getting his information from that horrid GameSpy "How stuph werks" series. Anyone care to dig up the thread we had about all the technical inaccuracies, bad generalizations, and downright incorrect information in that piece of crap?

quote:

OMG! is this Friend14 guy a loser!! All of it, complete rubbish!


Why? I haven't read the thread, but the part you quoted is more or less correct for engines using progressive loading systems. Most engines do not progressively swap in geometry (although it is definitely possible), but a lot do it with textures, sfx and objects.

He is a bit confused about some details (the engine won't load exactly what is in your FOV, but will load what is in the nearby BSP/octree cells intersected by your FOV). But on the other hand, I don't think he has ever implemented an engine himself either, so such details are excusable.
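The streaming idea reduces to something like this toy version (I'm using a flat grid instead of a real BSP/octree, and the radius stands in for "cells near the camera"; all of it is simplified for illustration):

```python
# Toy resource streaming: load the cells around the camera,
# not the exact FOV. Grid cells stand in for BSP/octree leaves.

def cells_to_load(camera_cell, radius, world_cells):
    cx, cy = camera_cell
    return {
        (x, y) for (x, y) in world_cells
        if abs(x - cx) <= radius and abs(y - cy) <= radius
    }

world = {(x, y) for x in range(10) for y in range(10)}
loaded = cells_to_load((0, 0), 1, world)   # player spawns in a corner
print(sorted(loaded))
# → [(0, 0), (0, 1), (1, 0), (1, 1)]
```

A corner spawn needs only 4 of the 100 cells preloaded, versus 9 for a mid-level position, which is exactly why level designers like to start you in a confined area or behind a sharp bend.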

[Michalson: you mean this one ?]

/ Yann

[edited by - Yann L on August 16, 2002 9:14:22 AM]

quote:
Original post by MatrixCubed
Someone should point Friend14 to this forum, and watch his ego deflate to flaccidity...



MatrixCubed
http://MatrixCubed.cjb.net
You can say that again. He is basing all of his information on what he perceives with a stopwatch while playing the game. He doesn't even program. He just creates levels for Jedi Knight: Outcast.

---
Make it work.
Make it fast.

"I’m happy to share what I can, because I’m in it for the love of programming. The Ferraris are just gravy, honest!" --John Carmack: Forward to Graphics Programming Black Book
