Where's 3D graphics going?
For as long as the computer graphics industry has existed, most simulations of natural phenomena have been implemented as cheap hacks that made things look good. (I'm referring to movies/games, not scientific/military simulations.) Even the monster non-realtime systems use hacks all the time (just read "Texturing and Modeling: A Procedural Approach" for examples).
Right now, processing power has reached levels where physically accurate simulations can be performed in non-realtime systems, and soon in games. But accurate systems are usually very hard to develop and implement, and many times they actually look worse than hacks. In most cases I've looked into, they require hacks themselves to look good. It's like one of those high-school/undergrad labs that is supposed to illustrate something but never goes right, and the professor tells you, "Well, it didn't go right, but that's how it should be in theory, so why don't you write a report on why you think it went wrong."
For now, we're still at a level where most papers on simulations are understandable precisely because of those hacks. But soon computers will be able to simulate very complex phenomena almost completely, and cheap old hacks won't cut it anymore. That means programming games and graphics will get progressively more complex. You'll need a degree in physics to understand the interaction of light with surfaces, a degree in statistics to understand how to model surfaces correctly, a degree in meteorology to simulate weather. The list goes on and on.
That leads me to the conclusion that most people will need two advanced degrees (one in computer graphics and one in an additional area of knowledge) and will only model things that relate to their expertise. That means the industry will change yet again and become more and more like the film industry.
Anyway, it's just a rant. I don't know if this is the correct forum, so if it's not, Yann, please move the thread.
Your question is interesting, but your argument is flawed because you assume that in order to model something, you need to understand everything about it. I imagine most people around here *could* write a software renderer if they cared to (and a good many of the more experienced folk have, but we'll ignore them for now), but most of us choose to use OpenGL or DirectX because they give us the same functionality much more easily and significantly faster. Thus, I don't necessarily have to understand the fine points of triangle rasterization to use OpenGL, nor do I imagine that ten years down the line someone will necessarily need a complete grasp of collision detection and response in order to have a full-fledged physics engine. This is all conjecture, of course, but as what computers can accomplish gets more complex, I think fewer and fewer people are going to want to start from scratch. This means that middleware is going to become more popular, or an industry standard of sorts is going to be established (OGL/DX style).
Then again, maybe you're right and game programming is going to become a discipline grounded much more in theory than in visual hacks, but I think that will shrink the ranks of "viable" employees enough that most studios won't have a complete set of experts at their disposal anyhow.
I would tend to think that as machines get progressively more powerful and effects get progressively more complicated, more and more will be done by the API and underlying hardware to let programmers concentrate their efforts elsewhere. This has already been the case with z-buffers, transforms, basic lighting, triangle rasterization, etc. No doubt it will also become standard for things like progressive mesh tessellation, per-pixel lighting, procedural textures, etc. in the future. Things that are common to most programs/problems can always be handed off to the hardware or API as a common solution (e.g., the D3DX math library). You will always have things to work on, but perhaps at a different level of abstraction. Otherwise, I think you are correct in pointing out that the detail will start to overwhelm the programmer, and you will need something the size of a defense contractor just to make a simple game.
Aren't APIs and drivers written by programmers?
It's just that now we specialize, IMO.
-* So many things to do, so little time to spend. *-
quote:Original post by invective
I would tend to think that as machines get progressively more powerful and effects get progressively more complicated, more and more will be done by the API and underlying hardware to let programmers concentrate their efforts elsewhere. This has already been the case with z-buffers, transforms, basic lighting, triangle rasterization, etc. No doubt it will also become standard for things like progressive mesh tessellation, per-pixel lighting, procedural textures, etc. in the future. Things that are common to most programs/problems can always be handed off to the hardware or API as a common solution (e.g., the D3DX math library). You will always have things to work on, but perhaps at a different level of abstraction. Otherwise, I think you are correct in pointing out that the detail will start to overwhelm the programmer, and you will need something the size of a defense contractor just to make a simple game.
So you think that vertex and pixel shaders are a step backwards?
Death of one is a tragedy, death of a million is just a statistic.
quote:Original post by python_regious
So you think that vertex and pixel shaders are a step backwards?
Why would I think that? If all they did was offer the same functionality as the fixed-function pipeline but require more low-level coding, that would be the case, but that is obviously not what they do. You can still use the old fixed-function pipeline, or even canned shaders, if you want the same level of functionality. Additionally, there are a number of shaders available that you can just plug into your code and use, as well as utilities like nvlink that generate the code for you. You also have the option of coding specific effects to enable functionality that was not possible in real time before the advent of shaders.
However, if you look at where shaders are going in the next version of DirectX and in OpenGL 2.0, the move is away from a low-level, hardware-specific, asm-like language (with different versions for different vendors) toward a higher-level, more abstract, C-like shader language common to all vendors.
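To make that asm-vs-C-like contrast concrete, here is a sketch of the same trivial effect (modulating a texture sample by the interpolated vertex colour) written both ways. This is an illustrative example, not code from any particular engine; the first version uses the ARB_fragment_program assembly extension, the second the GLSL that OpenGL 2.0 introduces:

```glsl
// ARB assembly style: explicit temporaries, one instruction per line
// !!ARBfp1.0
// TEMP texel;
// TEX texel, fragment.texcoord[0], texture[0], 2D;
// MUL result.color, texel, fragment.color;
// END

// Equivalent GLSL fragment shader: same effect, written like C
uniform sampler2D tex;          // texture unit 0, bound by the application

void main()
{
    // sample the texture and modulate by the interpolated vertex colour
    gl_FragColor = texture2D(tex, gl_TexCoord[0].st) * gl_Color;
}
```

The GLSL version compiles on any vendor's driver, while the assembly version ties you to a particular extension and instruction set, which is exactly the portability argument being made above.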
I feel it is a mistake to make the leap from some people to most people. Just look at user-created mods for commercial games: that is mainly designers, not programmers, doing the work. Just because it takes a high level of expertise for a few people to create something doesn't mean it takes a high level of expertise for most people to use what the few created.