The future of graphics?


What does the future hold for graphics? Pixel and vertex shaders seem to be all the rage for the next generation of 3D engines. I see per-pixel lighting and shadows becoming a pretty big thing. What will be the next big breakthrough for graphics, other than higher poly counts? I see Phong shading supported in hardware. I see some sort of effect library that shows off all the effects that can be done with pixel and vertex shaders (on any video card). I see that we need to come up with a next-generation, flexible file format for 3D objects. What do you see?
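To make the per-pixel lighting idea concrete, here is a minimal C++ sketch of the Phong model evaluated for a single pixel; this is the same math a pixel shader runs per fragment. The vector type and material constants are invented for illustration, not taken from any particular engine.

#include <algorithm>
#include <cmath>

// Just enough vector math for the lighting equation.
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& v) const { return {x - v.x, y - v.y, z - v.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 normalize(const Vec3& v) { return v * (1.0f / std::sqrt(dot(v, v))); }

// Phong lighting at one pixel: ambient + diffuse + specular.
// n = surface normal, l = direction to the light, e = direction to the eye,
// all assumed normalized. Returns an intensity for one color channel.
float phongPixel(const Vec3& n, const Vec3& l, const Vec3& e, float shininess) {
    float ambient = 0.1f;                              // constant fill term
    float diffuse = std::max(0.0f, dot(n, l));         // Lambert term
    Vec3 r = normalize(n * (2.0f * dot(n, l)) - l);    // light reflected about n
    float specular = std::pow(std::max(0.0f, dot(r, e)), shininess);
    return std::min(1.0f, ambient + diffuse + specular);
}

Per-vertex (Gouraud) shading evaluates this once per vertex and interpolates the resulting colors; per-pixel shading interpolates n, l, and e instead and runs the whole equation at every fragment, which is why specular highlights stop smearing across triangles.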

I see pixel shaders being a big thing for quite a while. I see higher polygon counts being a BIG thing. I mean, that's really what the holdup is, isn't it? Imagine if you were making, let's say, the game version of the Final Fantasy movie. Wouldn't it be awesome if you could use the same 3D models they used for the film?

Pixel shaders are awesome, but I'd take more polys any day...


Demetrios Georgeadis
Director/Programmer
Oniera Software Artists

www.oniera.com


Actually, I think it'll go the other way: fewer polygons, but with more per-pixel effects on those polygons. Effects like bump mapping, environment mapping, lighting, shadows, etc. Then we'll probably see parametric surfaces, that is, Bezier patches and displacement maps. Combine those with parametric texturing and other effects (parametric bump mapping, reflection mapping, displacement mapping, etc.) and we'll be looking at some pretty realistic-looking objects with very small memory footprints (since most of the object will be calculated on the fly).
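As a rough sketch of what a parametric surface buys you, here is bicubic Bezier patch evaluation in C++ using the Bernstein basis; the 4x4 control net is whatever the modeler supplies. Sixteen control points describe the whole surface, and the renderer can tessellate it as finely or as coarsely as the viewing distance warrants, which is where the small memory footprint comes from.

struct Vec3 { float x, y, z; };

// Cubic Bernstein basis functions B0..B3 at parameter t in [0, 1].
float bernstein(int i, float t) {
    float s = 1.0f - t;
    switch (i) {
        case 0:  return s * s * s;
        case 1:  return 3.0f * t * s * s;
        case 2:  return 3.0f * t * t * s;
        default: return t * t * t;
    }
}

// Evaluate a bicubic Bezier patch at (u, v) from its 4x4 control net:
// p(u,v) = sum over i,j of Bi(u) * Bj(v) * ctrl[i][j].
Vec3 evalPatch(const Vec3 ctrl[4][4], float u, float v) {
    Vec3 p = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float w = bernstein(i, u) * bernstein(j, v);
            p.x += w * ctrl[i][j].x;
            p.y += w * ctrl[i][j].y;
            p.z += w * ctrl[i][j].z;
        }
    return p;
}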

Eventually we'll ditch these primitives in favor of more abstract ones, like voxels or other space-modifying constructs. Maybe eventually we'll get to the point where we can model individual molecules, and lighting and texturing will be based on the molecules' interactions with actual light rays. Once we reach that point, there's really nowhere left to go. All you could then add is better and better physics, so that water, cloth, and gas interactions become more realistic.


codeka.com - Just click it.

Guest Anonymous Poster
quote:
Original post by Ronin_54
Calculating real light might prove to be a problem... You know, a ray of light travels faster than the information inside your PC...


Only if it's traveling a very short distance.
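(For scale: light covers about 30 cm per nanosecond, and a 1 GHz clock ticks once per nanosecond, so within a single cycle light only crosses roughly the width of the case; electrical signals in copper are slower still, at around half to two-thirds of that.)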

Guest Anonymous Poster
By the time we get this far, light may very well be what's used for transferring the data inside the computer...


However, I think we will get household VR goggles as the norm, and then holograms.

Guest Anonymous Poster
Think about it. You'll have a super-fast computer (a quantum computer?).

Well, you are NOT going to use any non-upgradable hardware to do the poly-filling. Software rendering is the thing! You can write any rendering method you like: ray tracing, voxels, you name it.

quote:

By the time we get this far, light may very well be what's used for transferring the data inside the computer...



Computers are advancing at such a pace. Will graphics follow at a similar speed? We are trying to simulate reality; graphics will eventually become real. It's almost as if we are trying to create our own alternate reality. With (I believe it was Motorola's) 70 GHz chip, the on-the-fly mathematical calculations will be astounding. We may be underestimating the advancement of graphics.

The future is going to be so cool.

Guy



I suppose I see something else in the future: more deformable environments and objects. I really liked the multiplayer in the Red Faction demo, and how you could basically make your own level by modifying the existing one. The only part I didn't like was that some objects (such as the ground, or steel beams) couldn't be destroyed. The other problem was that you could only destroy so much of the level before it stopped letting you (probably a memory or polygon limit).

Guest Anonymous Poster
I see name changes: GHz becomes THz, and so on.
3D displays, and maybe hologram displays (the stuff in Star Trek, not the crap with the glasses).

One beautiful day we won't even know about the real world anymore, for we'll all be trapped in an artificial reality called The Matrix...


rk

quote:
Original post by rk
One beautiful day we won't even know about the real world anymore, for we'll all be trapped in an artificial reality called The Matrix...
rk


How do you know you're not already...

The MATRIX has you



The future of graphics?

I predict real-time ray tracers. Polygon counts will be irrelevant: I want a sphere here and a cylinder there, and that's all it takes. Trace the ray through the scene and poof! Instant photoreal image! I'm actually working on a real-time ray tracer that will use hardware acceleration and really fast processors to generate real-time output. I don't know if I'll be able to get it to work as fast as I want, but I'm trying.
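In that spirit, here is the heart of the "a sphere here and that's all it takes" idea: ray-sphere intersection in a few lines of C++. The scene in main() is made up for illustration; a real-time tracer's problem is running this, plus shading, millions of times per frame.

#include <cmath>
#include <cstdio>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& v) const { return {x - v.x, y - v.y, z - v.z}; }
};
float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Ray: origin o, normalized direction d. Sphere: center c, radius r.
// Solves |o + t*d - c|^2 = r^2 for the nearest t > 0; returns true on a hit.
bool intersectSphere(Vec3 o, Vec3 d, Vec3 c, float r, float& t) {
    Vec3  oc   = o - c;
    float b    = dot(oc, d);                   // half the linear coefficient
    float q    = dot(oc, oc) - r * r;
    float disc = b * b - q;                    // quadratic discriminant
    if (disc < 0.0f) return false;             // ray misses the sphere
    t = -b - std::sqrt(disc);                  // nearest intersection
    if (t < 0.0f) t = -b + std::sqrt(disc);    // origin inside the sphere
    return t > 0.0f;
}

int main() {
    float t;
    // A ray down the -z axis against a unit sphere 5 units away.
    if (intersectSphere({0, 0, 0}, {0, 0, -1}, {0, 0, -5}, 1.0f, t))
        std::printf("hit at t = %.2f\n", t);   // prints 4.00
}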

The only ray tracing that can lay claim to being a complete, general solution to the rendering problem is forward ray tracing... and the only system that can do that in real time will remain nature, for quite a while.

For some areas of rendering, scan conversion plus good occlusion culling is hard to beat. (Occlusion culling is one of the big issues that has to be solved in the near future, IMO; the best occlusion culling, the kind that can efficiently deal with dynamic geometry, will always be screen-based... and with present hardware, screen-based occlusion culling is not an option.) Companies like PDI and Rhythm & Hues sticking with their own scan-conversion rendering engines, and the overwhelming use of PRMan across the rest of the industry, are a good enough indication of that.

Ray tracing makes sense, but it's not an optimal solution for everything... if the calculations for primary intersections and simple shadowing are only a small fraction of all the calculations (because of the work dedicated to multiple reflections, refraction, and of course global illumination methods), then you can think about ditching scan conversion altogether. If companies that do non-realtime animation rendering for a living haven't even done it, what hope do we have in the near future?

BTW, why would ray tracing have no problem with triangle counts? Geometry aliases just as easily as textures do... stochastic sampling helps a little, but in general, if you stuff more polygons into a single pixel, you will have to increase the number of samples per pixel.
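To make the stochastic-sampling point concrete, here is a minimal jittered supersampler in C++; traceRay is a hypothetical stand-in for whatever per-sample work the renderer does. The catch is exactly what the post says: the finer the sub-pixel geometry, the larger n has to be before the noise settles.

#include <cstdlib>

// Hypothetical placeholder for the renderer's per-sample work.
float traceRay(float px, float py);

// Average n*n jittered samples inside pixel (x, y). Each sample sits at a
// random offset within its own sub-cell of the pixel, so regular geometry
// inside the pixel turns into noise rather than Moire/staircase aliasing.
float shadePixel(int x, int y, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            float jx = (i + std::rand() / (float)RAND_MAX) / n;
            float jy = (j + std::rand() / (float)RAND_MAX) / n;
            sum += traceRay(x + jx, y + jy);
        }
    return sum / (n * n);
}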

Just to pretend I wanted to answer the original question... things I expect to see in the near future:
Support for efficient screen-based occlusion culling
Support for displacement mapping

Things I'd like to see in the near future:
Support for automatic view-based geometry LOD (not only for displacement mapping)
Support for non-uniform sampling of shadow buffers

I hope occlusion culling support means we will see a scene-graph API surface soonish, but I'm relatively sure it will mean we get faster feedback paths for visibility queries (which means your software has to work in lockstep with the graphics hardware, and deferred rendering will no longer be an option for hardware developers).
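As a sketch of what such a visibility feedback path looks like, here is screen-based occlusion culling using the OpenGL 1.5-style occlusion query interface (the vendor extension NV_occlusion_query at the time of this thread); drawBoundingBox and drawObject are hypothetical application helpers, and the query object is assumed to come from glGenQueries.

#include <GL/gl.h>

void drawBoundingBox(int obj);   // hypothetical: rasterize the object's bounds
void drawObject(int obj);        // hypothetical: draw the real geometry

void drawIfVisible(int obj, GLuint query) {
    // Pass 1: rasterize only the bounding box, counting fragments that
    // survive the depth test, without touching the color or depth buffers.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glBeginQuery(GL_SAMPLES_PASSED, query);
    drawBoundingBox(obj);
    glEndQuery(GL_SAMPLES_PASSED);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    // Pass 2: fetch the count and draw the real object only if any fragment
    // passed. Reading GL_QUERY_RESULT immediately stalls until the GPU
    // catches up, which is exactly the lockstep problem described above;
    // real engines issue many queries and collect the results later.
    GLuint samples = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
    if (samples > 0) drawObject(obj);
}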


Back to quantum computers: those could be fun.

Think of it... when you need more CPU cycles than are available to hit your 60 frames per second, you could just use the ones you are *going* to generate the coming night, while you are sleeping.

quote:
Original post by Ronin_54
Come to think of it... 3D bitmaps will also be nice


Well, we currently have voxels and 3D textures... which do you choose?

I don't want HoloScreens, I want HoloDecks, where you can interact with the game like you do in everyday life!


Yesterday we still stood on the verge of the abyss;
today we're a step onward!

Guest Anonymous Poster
I think in the next couple of years we'll be seeing a big movement in light and shadow technology: real-time diffuse lighting, real-time ray tracing, volumetric light and shadow... all good things, and they add a HUGE amount of realism to games.

I'm sure NVIDIA will keep giving us more cycles than we will ever know what to do with, so polygon counts will continue to skyrocket. I would like to see more focus on organic rendering: real-time deformable skin, musculature, cloth and hair animation, etc. Getting those working requires more than just polygon pushers, though. The algorithms behind these systems (same with light and shadows) are now the big stumbling block; calculating the motion of hair takes far longer than rendering it ever will.
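To illustrate why the motion costs more than the rendering: one common scheme simulates each strand as a chain of particles with distance constraints, integrated with Verlet every frame. A rough C++ sketch, with all constants invented:

#include <cmath>
#include <vector>

// One particle of a hair strand: current and previous positions (Verlet).
struct Particle { float x, y, z, px, py, pz; };

// Advance a single strand one timestep: Verlet integration under gravity,
// then a few relaxation passes pulling each segment back to its rest
// length segLen. The root particle p[0] is pinned to the scalp. Multiply
// this by tens of thousands of strands per frame and the simulation easily
// outweighs the cost of drawing the resulting line strips.
void stepStrand(std::vector<Particle>& p, float dt, float segLen) {
    const float g = -9.8f;                        // gravity along the y axis
    for (std::size_t i = 1; i < p.size(); ++i) {
        float nx = 2.0f * p[i].x - p[i].px;
        float ny = 2.0f * p[i].y - p[i].py + g * dt * dt;
        float nz = 2.0f * p[i].z - p[i].pz;
        p[i].px = p[i].x; p[i].py = p[i].y; p[i].pz = p[i].z;
        p[i].x = nx; p[i].y = ny; p[i].z = nz;
    }
    for (int pass = 0; pass < 4; ++pass) {        // constraint relaxation
        for (std::size_t i = 1; i < p.size(); ++i) {
            float dx = p[i].x - p[i-1].x;
            float dy = p[i].y - p[i-1].y;
            float dz = p[i].z - p[i-1].z;
            float len = std::sqrt(dx*dx + dy*dy + dz*dz);
            float k = (len - segLen) / len;       // assumes len > 0
            if (i == 1) {                         // root pinned: move free end only
                p[i].x -= k * dx; p[i].y -= k * dy; p[i].z -= k * dz;
            } else {                              // otherwise split the correction
                p[i-1].x += 0.5f*k*dx; p[i-1].y += 0.5f*k*dy; p[i-1].z += 0.5f*k*dz;
                p[i].x   -= 0.5f*k*dx; p[i].y   -= 0.5f*k*dy; p[i].z   -= 0.5f*k*dz;
            }
        }
    }
}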

True volume rendering should be explored too, but in games especially there has been a big move away from it, with games like Comanche ditching their voxel engines for polys...

<(o)>
