The evolution of computer graphics in 3D games
I've always wondered: if computers get so powerful that they can render almost-real scenes, will that make us developers lazy? In the sense that we stop doing scene sorting and so on, and just render everything and let the hardware sort it out.
> Personally, I think LOTR has better graphics than the
> fairy woman demo, so it'll probably be about 20 years
> before that level of detail becomes mainstream.
Typical ILM stuff takes 80 or so layers of texturing, each taking 2K or 4K pictures depending on size on screen (for your stats, Stephen Fangmeyer reported he used 156 layers for the boat in 'Speed 2: Cruise Control'). Rendering a frame takes **terabytes** of data, and it's why RenderMan is used instead of a raytracer: it renders a model at a time in bucket files.
Maybe a reachable goal in the shorter term would be "Final Fantasy: The Spirits Within" by SquareUSA. This one uses far less data to render (Square used a raytracer), but you still hear that sound when your lower jaw hits the floor.
-cb
Just because movie fx teams use vast numbers of textures and polys to achieve their effects doesn't mean we'll have to use the same stuff. We game programmers are masters of faking stuff like this - finding fast hacks that look right in the majority of cases. While I don't think we'll be able to do LOTR-quality scenes in the near future, I do think we'll be able to do scenes that most people can't distinguish from LOTR quality in the next 5 years or so.
quote:Original post by Tac-Tics
Besides that, though, much more important than the graphics in a game is the gameplay. It is quite unfortunate that, in the computer gaming industry, the QUALITY of the games doesn't double every 18 months....
Hehe, I wouldn't want the quality of gameplay to double every 18 months. It would be the doom of civilization as we know it, as everybody would play games instead of working, eating and sleeping.
Sorry I just could not resist!
I think one of the biggest problems with real-time computer graphics at the moment is that you still see jaggies on all but the fastest and most advanced cards, which can run FSAA in real time. When FSAA is common and you can't see that environments and models are made from polygons, it will be easier for users to forget that what they're watching is actually a 3D simulation of something happening, and not the real thing...
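Just to illustrate what antialiasing is actually doing to those jaggies, here's a rough Python sketch of brute-force supersampling (the simplest relative of FSAA): render at a higher resolution, then box-filter blocks of samples down to display resolution. Everything here - the grid, the edge, the filter - is purely illustrative, not how any particular card does it.

```python
# Sketch of supersampling AA: render big, then average blocks of
# samples down. A hard edge ends up with intermediate gray values
# instead of a staircase of jaggies.

def downsample(hires, factor):
    """Box-filter a 2D grid of grayscale samples down by 'factor'."""
    h = len(hires) // factor
    w = len(hires[0]) // factor
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for sy in range(factor):
                for sx in range(factor):
                    total += hires[y * factor + sy][x * factor + sx]
            out[y][x] = total / (factor * factor)  # average the subsamples
    return out

# A diagonal black/white edge rendered at 4x the display resolution:
hires = [[1.0 if x < y else 0.0 for x in range(8)] for y in range(8)]
lores = downsample(hires, 4)
print(lores)  # [[0.375, 0.0], [1.0, 0.375]]
```

The pixels the edge passes through come out partially covered (0.375) rather than snapping to pure black or white - that's all "removing jaggies" means.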
Windows 95 - 32 bit extensions and a graphical shell for a 16 bit patch
to an 8 bit operating system originally coded for a 4 bit microprocessor,
written by a 2 bit company that can't stand 1 bit of competition.
Just compare Doom III and Half-Life II to Wolfenstein and Doom 1.
Looking for a serious game project?
www.xgameproject.com
quote:Original post by gommo
I've always wondered: if computers get so powerful that they can render almost-real scenes, will that make us developers lazy? In the sense that we stop doing scene sorting and so on, and just render everything and let the hardware sort it out.
Maybe it'll make graphics developers lazy, and maybe it won't be so lucrative to be a graphics guru, but it'll probably be nice for game development as a whole.
Don't think that game engine equals graphics engine, or that game development equals graphics programming. There's plenty of other stuff to do, like AI, gameplay programming, and level design - problems that require more thoughtful solutions than more horsepower. Considering the crazy pace of graphics hardware in recent years, developers have been forced to divert resources to flailing against the tide of graphics hardware advancement. How many games have you played that looked very pretty but didn't have enough gameplay to keep you busy for 10 minutes?
I interned as a game programmer for a little over a year, working full time about half the time and part time the rest. We were working with a graphics engine that had been used on a previous title, so all of the programmers were working on something other than graphics, except for basic maintenance or the odd enhancement to the graphics engine. Believe me, we had PLENTY TO KEEP US BUSY without having to concurrently develop the graphics tech.
If the effort that currently goes into graphics technology could be redirected into actually improving gameplay, that sounds good to me. Not that more manpower necessarily equals better results, especially in software, but it would at least free things up and give more teams the opportunity to focus on gameplay as the primary goal rather than chasing graphics tech. Lots of titles are technology-driven these days, and IMO those titles tend to suck; that is not the best approach for making a fun GAME.
[Edited by - The_Incubator on November 28, 2006 11:30:51 PM]
cbenoi1,
You bring up some good points about the vast differences between what games currently do and what the movie folks do. However, a few of your facts are a little off. So, just to set the record straight:
1. Square started out using Maya's renderer for Final Fantasy: The Spirits Within. However, they switched to RenderMan and used that for most of the production.
2. Rendering a frame for movie shots does NOT take terabytes per frame. The complete total of everything stored for Toy Story was somewhere in the low terabytes, and subsequent films have obviously gone above that. This is still a lot of data, but it's not that much *per frame*.
3. RenderMan doesn't render "a model at a time in bucket files". RenderMan's bucket rendering works by loading up the models that are used in each bucket, but only those models. A bucket is a small quad of the rendered frame - it's kind of like a quadtree in screen space. It does a lot of other things too, but that's the basic spiel.
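For anyone curious, here's a toy Python sketch of that bucket idea: split the frame into small screen-space tiles and, per tile, only touch the models whose screen bounds overlap it. The names, sizes, and data structures are all invented for illustration - this is nothing like RenderMan's actual internals, just the memory-saving shape of the scheme.

```python
# Toy bucket-rendering planner: the frame is split into fixed-size
# tiles ("buckets"), and each bucket only loads the models whose
# screen-space bounding boxes overlap that tile.

BUCKET = 16  # bucket size in pixels (illustrative)

def buckets_for(width, height):
    """Yield (x0, y0, x1, y1) tiles covering the frame."""
    for by in range(0, height, BUCKET):
        for bx in range(0, width, BUCKET):
            yield (bx, by, min(bx + BUCKET, width), min(by + BUCKET, height))

def overlaps(a, b):
    """Axis-aligned rectangle overlap test."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def render_plan(width, height, models):
    """models: list of (name, screen_bbox). Returns, per bucket, which
    models would be loaded; everything else stays on disk until its
    bucket comes up -- that's the whole memory win."""
    plan = {}
    for bucket in buckets_for(width, height):
        plan[bucket] = [name for name, bbox in models if overlaps(bucket, bbox)]
    return plan

plan = render_plan(32, 32, [("teapot", (0, 0, 10, 10)),
                            ("boat", (20, 20, 32, 32))])
print(plan[(0, 0, 16, 16)])    # ['teapot']
print(plan[(16, 16, 32, 32)])  # ['boat']
```

Note that the bucket containing only the teapot never has to load the boat's geometry at all, which is why scene size can vastly exceed memory.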
Anyway, those technicalities aside, your point still stands. Currently the resources used to render shots for movies are still quite a ways off from what is being used for games today. Models typically boil down to poly counts that are dramatically higher than game models. Furthermore, these models are typically represented with higher-order surfaces that are diced down to polys at render time (NURBS used to be the standard, but subdivision surfaces are becoming the new norm).
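To make "diced down to polys at render time" concrete, here's a rough Python sketch: uniformly evaluate a bicubic Bezier patch on a grid and stitch the grid points into triangles. Real renderers choose the dicing rate adaptively from the patch's size on screen; this fixed-rate version, with its made-up names, is just the idea.

```python
# Dicing a higher-order surface: evaluate a bicubic Bezier patch on an
# (n+1)x(n+1) grid and connect the samples into triangles.

def bernstein3(t):
    """The four cubic Bernstein basis weights at parameter t."""
    s = 1.0 - t
    return (s * s * s, 3 * s * s * t, 3 * s * t * t, t * t * t)

def eval_patch(ctrl, u, v):
    """ctrl: 4x4 grid of (x, y, z) control points."""
    bu, bv = bernstein3(u), bernstein3(v)
    x = y = z = 0.0
    for i in range(4):
        for j in range(4):
            w = bu[i] * bv[j]
            px, py, pz = ctrl[i][j]
            x += w * px; y += w * py; z += w * pz
    return (x, y, z)

def dice(ctrl, n):
    """Dice the patch into an n-by-n grid of quads, two triangles each."""
    verts = [[eval_patch(ctrl, i / n, j / n) for j in range(n + 1)]
             for i in range(n + 1)]
    tris = []
    for i in range(n):
        for j in range(n):
            a, b = verts[i][j], verts[i + 1][j]
            c, d = verts[i][j + 1], verts[i + 1][j + 1]
            tris.append((a, b, d))  # split each quad along a diagonal
            tris.append((a, d, c))
    return tris

# A flat patch just for demonstration:
flat = [[(i / 3.0, j / 3.0, 0.0) for j in range(4)] for i in range(4)]
tris = dice(flat, 8)
print(len(tris))  # 128 triangles from a single patch
```

Crank n up per patch and one smooth surface turns into thousands of tiny polys, which is roughly why film poly counts dwarf game ones.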
There are a lot of other differences as well between the two mediums. There's no doubt that it will take some time before they really converge for typical scenes. However, if graphics hardware continues at the rate it has been, that time may only be about 10 years off.
-John
quote:Original post by mrbastard
Just because movie fx teams use vast numbers of textures and polys to achieve their effects doesn't mean we'll have to use the same stuff. We game programmers are masters of faking stuff like this - finding fast hacks that look right in the majority of cases. While I don't think we'll be able to do LOTR-quality scenes in the near future, I do think we'll be able to do scenes that most people can't distinguish from LOTR quality in the next 5 years or so.
LOL! You're right in one sense. Game programmers/artists are good at faking stuff. Then again, visual fx people are just as good, if not better, at faking the same things. I say "if not better" because the very nature of pre-rendered work lends itself to an entire host of tricks that are typically impossible in most games.
You see, in pre-rendered work you know *exactly* what will be in the frame. You know how close or far off different objects are. You can render out several passes and tweak them separately when you composite them together. And so on and so forth.
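That multi-pass trick is easy to sketch: render the shot as separate layers (diffuse, specular, whatever), then dial each one up or down at composite time without re-rendering anything. The pass names and gain values below are invented purely to show the shape of it.

```python
# Toy additive compositing of render passes: each pass is a grid of
# grayscale values, and the compositor applies a per-pass gain when
# summing them into the final image.

def composite(passes, gains):
    """passes: dict name -> 2D grid. gains: dict name -> scalar tweak
    applied at comp time (default 1.0, i.e. leave the pass alone)."""
    first = next(iter(passes.values()))
    h, w = len(first), len(first[0])
    out = [[0.0] * w for _ in range(h)]
    for name, layer in passes.items():
        g = gains.get(name, 1.0)
        for y in range(h):
            for x in range(w):
                out[y][x] += g * layer[y][x]
    return out

passes = {
    "diffuse":  [[0.5, 0.5], [0.5, 0.5]],
    "specular": [[0.0, 0.5], [0.0, 0.0]],
}
# The highlight looks too hot, so halve the specular pass -- no re-render:
img = composite(passes, {"specular": 0.5})
print(img)  # [[0.5, 0.75], [0.5, 0.5]]
```

That kind of after-the-fact tweak is exactly the luxury a game renderer, producing final pixels in one shot at 60fps, doesn't get.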
I'm not sure why there's this notion among many game people that the people working in visual effects have a seemingly unlimited amount of resources and time, but they don't. They simply have a different bag of tricks for their different needs.
-John