Archived

This topic is now archived and is closed to further replies.

bob_the_third

Breaking the reality barrier

The holy grail of rendering (or at least one of them) is to achieve photorealistic rendering in real-time. [INSERT DEBATE ON DEFINITION OF REALTIME HERE] Okay, let's just say that realtime is fast enough that the user never notices a slowdown. I understand that radiosity is a method employed in the movie industry for super-realistic CGI. There is a concerted effort going on to achieve a form of this in a realtime PC environment (i.e. games, etc.). In addition, the "tricks" have been increasing in sophistication: light mapping, pixel shaders, shadow mapping, and (from what I've heard around these parts lately, but know nothing about, so please correct me gently if I'm wrong in so labeling this last one) spherical harmonics, to name a few. So here are the questions I pose:

1) Can radiosity or some variant be made to run in realtime without sacrificing the accuracy of the solution significantly?

2) If it can't at this time, why not? What are the technical obstacles, and how would you get around them (if you have any suggestions or feel like bending your mind around this problem)?

3) Is there any point to solving GA in realtime? Are the "tricks" good enough?

Guest Anonymous Poster
Solving GI in realtime would be awesome
The main obstacle is still processing power. The amount of computations needed with most current GI methods is insane.

quote:
The main obstacle is still processing power. The amount of computations needed with most current GI methods is insane.

Why? (Yes, I know I'm asking for flames. But bear with me; I have a point.) Clearly the processing power required depends entirely on the algorithm used to achieve GI.

Open-ended question ahead. Answer as much or as little as you please:

Which algorithms require too much power, and why? What is it about current approaches that overtaxes modern PCs? For example, the most naive approach to radiosity, as I understand it, conveys energy from each poly to every other poly and runs in O(n^2) time, thus scaling horribly for large poly counts. However, not every algorithm will necessarily share this run-time characteristic.
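The O(n^2) cost described above can be seen directly in a toy gathering-style radiosity solver. This is only an illustrative sketch with made-up form factors and patch values, not any particular engine's code:

```python
def radiosity_gather(emission, reflectance, form_factors, n_iters=50):
    """Naive radiosity by repeated gathering: every patch collects energy
    from every other patch on each sweep, so one sweep costs O(n^2)."""
    n = len(emission)
    B = list(emission)                       # radiosity starts at emitted light
    for _ in range(n_iters):
        # The O(n^2) part: for each patch i, sum F[i][j] * B[j] over all j.
        gathered = [sum(form_factors[i][j] * B[j] for j in range(n))
                    for i in range(n)]
        B = [emission[i] + reflectance[i] * gathered[i] for i in range(n)]
    return B

# Toy 3-patch scene with hypothetical form factors.
F = [[0.0, 0.3, 0.2],
     [0.3, 0.0, 0.2],
     [0.2, 0.2, 0.0]]
E = [1.0, 0.0, 0.0]        # patch 0 emits light
rho = [0.5, 0.8, 0.8]      # diffuse reflectance of each patch
B = radiosity_gather(E, rho, F)
```

Each Jacobi-style sweep touches every (i, j) patch pair, which is exactly the quadratic cost that blows up for large patch counts; hierarchical radiosity attacks this by clustering patches so distant groups exchange energy as single units.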

What I'm trying to start is a round-table discussion that opens up the question of feasibility and tries to evaluate it honestly. If you say it's too slow, let's look at why and see if there's a better way.

quote:
Original post by bob_the_third
The holy grail of rendering (or at least one of them) is to achieve photorealistic rendering in real-time.


There is none. Approaching photorealism depends on numerous little details that must all match. GI is surely one of the most important, but far from the only one. Realistic material properties, expressed as BRDFs, are another important point. We still don't have optimal hardware support for those on consumer-level hardware (it would need 4D textures, only supported on high-end workstations).

quote:

1) Can radiosity or some variant be made to run in realtime without sacrificing the accuracy of the solution significantly?


Of course; that's no problem at all. You can go very far with this and achieve almost 100% photorealistic scenes in full realtime. But keep in mind that a precomputed GI solution is not interactive. You'll get the "museum effect": walk around, look, but don't touch or move anything. Recently, more and more interactive GI approaches have appeared, but they are all limited in several respects. They allow limited dynamic environments, but are still not 100% interactive (as with, e.g., per-pixel lights and shadow maps). A combination of traditional methods and precomputed GI often gives very nice effects, though.

quote:

3) Is there any point to solving GA in realtime? Are the "tricks" good enough?


If you mean GI (=global illumination), then there is definitely a point. Full realtime GI is an important step towards full photorealism. The tricks we have today are pretty good, and can be quite convincing. But they are not yet the real thing.


[edited by - Yann L on July 26, 2003 4:11:24 PM]

Current GI solutions are too much to do in realtime for one reason: computational expense. Real illumination of a single point in space is the integral of all incoming light from all directions. Some of this light is reflected off of the point (if the point corresponds to a surface or a particle suspended in a volume, such as dust or silt underwater). Some light is absorbed. Some is transmitted through the point.

Global illumination is not a simple equation to solve. All GI approximations amount to summing incoming light at a large number of points in the scene, and then visualizing the estimates. To obtain anything remotely resembling a realistic image, one must consider a large number of light contributions for each point. The larger the scene, the more lights involved, etc., the more computations it takes to make an approximation.

Remember, real illumination is an integral. The only way to achieve a true solution is to perform an infinite sum. However, we can use what basically amounts to Monte Carlo integration (virtually all GI methods make use of some kind of disguised MCI). This is a way of approximating an integral by adding up random slices and weighting them. It is possible to approximate certain sums using only a few hundred samples without losing too much accuracy. Applied MCI is a great way of approximating GI -- but it still is essentially doing a whole lot of math and fudging the numbers to obtain something vaguely realistic.

There are good and bad ways of obtaining such GI approximations, but all of them involve a huge number of computations. I personally have been doing research in realtime GI over the past several months, and even using some ridiculous hacks it is hard to break the seconds-per-frame barrier for nontrivial scenes. Until someone discovers a brilliant way of cutting down on the calculations needed to approximate GI, or of escaping the need for performing MCI at what amounts to millions of discrete points, we simply don't have the processing power to solve GI at realtime rates.


As Yann mentioned, it is certainly possible to do museum-style GI with precomputed solutions, but this just doesn't cut it. I look forward to the day when real GI approximations are feasible in realtime, and I can finally illuminate a room by setting a piece of furniture on fire.

quote:
Original post by JohnBolton
Spherical harmonics is a good real-time approximation to radiosity.

No, they are not. They are actually a pretty bad way to store a GI solution, from a quality point of view (compared to lightmaps). SH in their current form are absolutely unusable for point lights.

For skylight, however, you are right, they are a pretty good approximation.

Oh and BTW: SH do not approximate radiosity, they store an approximation of a lighting solution (global or not).


[edited by - Yann L on July 26, 2003 5:50:15 PM]

It is a common misconception that SH is a lighting method; to build on what Yann said and hopefully dispel the mistaken notion: SH can represent lighting data from any GI method. It simply stores the lighting data in a compact and relatively efficient form (point lights excepted).
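As a concrete illustration of "SH is storage, not a lighting method", the sketch below projects an arbitrary directional lighting function onto the first four real spherical harmonic basis functions and then reconstructs it. The light function here is a hypothetical soft skylight, chosen because smooth signals are exactly what low-order SH stores well; the reconstruction smooths the signal (and even rings slightly negative below the horizon), which hints at why sharp point lights come out so badly.

```python
import math
import random

# The first four real spherical harmonic basis functions (bands 0 and 1).
SH_BASIS = [
    lambda d: 0.282095,            # Y_0^0
    lambda d: 0.488603 * d[1],     # Y_1^-1
    lambda d: 0.488603 * d[2],     # Y_1^0
    lambda d: 0.488603 * d[0],     # Y_1^1
]

def sh_project(light_fn, n_samples=50_000, seed=2):
    """Project a directional lighting function onto 4 SH coefficients via
    Monte Carlo integration over the sphere (pdf = 1/(4*pi)). The result
    is a low-frequency *encoding* of the lighting, whatever produced it."""
    rng = random.Random(seed)
    coeffs = [0.0] * len(SH_BASIS)
    for _ in range(n_samples):
        # Uniform random direction on the full sphere.
        z = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        s = math.sqrt(max(0.0, 1.0 - z * z))
        d = (s * math.cos(phi), s * math.sin(phi), z)
        L = light_fn(d)
        for i, y in enumerate(SH_BASIS):
            coeffs[i] += L * y(d) * 4.0 * math.pi
    return [c / n_samples for c in coeffs]

def sh_eval(coeffs, d):
    """Reconstruct the stored lighting in direction d."""
    return sum(c * y(d) for c, y in zip(coeffs, SH_BASIS))

# Hypothetical soft skylight: bright straight up, fading to dark below.
sky = lambda d: max(0.0, d[2])
c = sh_project(sky)
up = sh_eval(c, (0.0, 0.0, 1.0))     # true value 1.0, SH smooths it down
down = sh_eval(c, (0.0, 0.0, -1.0))  # true value 0.0, SH rings negative
```

Note that nothing in `sh_project` cares where `light_fn` came from: it could be a radiosity solution, a photon map lookup, or a sky model. SH only changes how the result is stored.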

I still lean heavily towards a solution based on photon mapping; that road seems to have the best balance of robustness and efficiency.

There are some realtime raytracers and photon mappers around on the net... or at least they claim to be realtime. Not sure what scene complexity they can handle, though.

Michael K.,
Co-designer and Graphics Programmer of "The Keepers"



We come in peace... surrender or die!

I'd be interested to see such an animal... got any links?


[edit] And before anyone asks, yes, I've googled realtime photon mapping. Like I said, I've spent several months researching it, and all that I know of is a few proposition-type white papers.

[edited by - ApochPiQ on July 26, 2003 7:43:24 PM]

The Avalon Project - a hardware raytracer development project
Some demos
The OpenRT Interactive RayTracing Project
RaVi - can handle small scenes in realtime, or so they claim
I-Ray
RealStorm - this one seems to be the most promising...
Real-Time Global Illumination - an article; haven't read it, but it looks interesting
Also, a Google search for realtime raytracing gives quite a few results; not sure how relevant.

Michael K.,
Co-designer and Graphics Programmer of "The Keepers"



We come in peace... surrender or die!

Ah, I've seen all of those before. I guess I was a bit vague, but I'm more interested in realtime photon mapping; realtime raytracing isn't all that impressive these days, and I've seen a few papers on realtime GI using other techniques.

Thanks for the effort, though.

[rant]
What is it with everybody and photorealism anyway? Why does anybody really care? Games are games, and you play them for the fun of it, so realism really should be secondary to the actual goal of the game.

I for one wouldn't mind very much if Moore's law simply stopped working, leaving us all with technology around that of a Radeon 9800. Games would get better, because people would stop worrying about making things look better than the rest, and start worrying about making them more fun than the rest.

Seriously... was UT2003 that much better than the original simply because it looked better? I, along with many people I know, enjoyed the first one more, and actually still play it instead of UT2003. Although other people probably won't agree with me.
[/rant]

Anyway, now that I've said that, I agree with Yann L. If photorealism is to be achieved, it isn't going to be through one technique. Not only GI, but dynamics simulation, BRDFs and sub-surface scattering, volumetric interaction of photons with gas and liquid volumes, and many, many other techniques need to be drastically improved before photorealism is achieved. And then those techniques need to be brought into real-time.

A lot of natural phenomena simply aren't well suited to being put into nice, simple algorithms. There aren't too many ways of simulating the light scattering through skin and muscle to give skin that natural look without actually doing the scattering, for example.

In the end, it's almost always a much better idea to try and make something look nice instead of worrying about making it look perfect. "Good enough" is a good thing to keep in mind.

j

quote:
What is it with everybody and photorealism anyway? Why does anybody really care? Games are games, and you play them for the fun of it, so realism really should be secondary to the actual goal of the game.


It depends on the game.
In some kinds of games, player immersion is really important; without it there isn't "fun", or at least there's less fun...
Of course, some games don't have to worry about graphics realism... but some do...

quote:
"In the end, it's almost always a much better idea to try and make something look nice instead of worrying about making it look perfect.


Making it look like it's perfect...

quote:
"Good enough" is a good thing to keep in mind.


IMO, "always better" is... mmh... better

If you don't care about realism, very well; lots of games aren't meant to be realistic-looking. But if you do, you can't just make something average-looking, stand there, say "oh, that'll be enough", and do nothing else. If your game is to be sold, lots of people won't buy a game that looks like crap compared to other games on the market, even if it has really good gameplay. It will have some success if it has good gameplay, but much less than if it also had stunning graphics...
And graphics are usually what you see first in a game; then comes the gameplay.
And as I already said (and I don't think I'm the only one thinking that), the gameplay, fun, and pleasure of playing a game are often tightly linked with graphics...

And if you think focusing on both gameplay and graphics is too much... mmh, that's what a team is made for. A graphics programmer in a team usually won't design the game itself, and a game designer won't code graphics. It's like saying "stop focusing on physics and work on better sound systems"... those are two different things, usually not done by the same person (unless it's a small team)...

Actually, I've never been on a professional team, and I might very well be wrong, but that's how I see things...

[edited by - sBibi on July 26, 2003 12:12:25 AM]

Guest Anonymous Poster
quote:
Original post by technobot
I-Ray (http://www.cfxweb.net/%7Elyc/lray.html)



it's *l*ray (for "lycium") :D

about the topic: i personally don't think that photon mapping is the way to realtime global illumination... monte carlo methods provide a pretty good solution given the requisite processing power (at least that's what ingo wald and i think). yeah, it'll be a while yet...

thomas ludwig / lycium
http://lycium.cfxweb.net

quote:
Original post by ApochPiQ
Ah, I've seen all of those before. I guess I was a bit vague, but I'm more interested in realtime photon mapping;

I once saw some screenshots of a scene that was rendered with photon mapping, supposedly in realtime. But I can't find the site anymore...
quote:
Original post by Anonymous Poster
about the topic: i personally don't think that photon mapping is the way to realtime global illumination... monte carlo methods provide a pretty good solution given the requisite processing power

AFAIK, at least some photon mapping implementations use Monte Carlo raytracing, and it definitely looks great (when done properly).

Michael K.,
Co-designer and Graphics Programmer of "The Keepers"



We come in peace... surrender or die!

Guest Anonymous Poster
In "Tricks of the 3D Game Programming Gurus," I think Lamothe says that there will be photorealistic or some type of super-realism...something, within like 5 years. The mathematical prerequisites would most definitely include calculus he says, for he is already employing the need for it in this book. FYI, you MUST get this book. It combines all of those 500 page graphics books into a well-written, well-stylized, massive (so big that I can''t rest it on my chest when reading) quintessential guide. And its not written in alien. For instance, I tought myself everything about quaternions and complex numbers in a few pages. It negetes the use of a lot of GameDev math articles into junk. The author also expresses a friendly personality. OMFG--he should get a nobel prize for computer science Another teaser: I''ve never done assembly in my life, but theres a whole section (and dat''s a BIG section) devoted to programming the FPU (Floating-Point Unit) in assembly, which is fantastic if you''re one of those pop-culture optimization freaks!

>They (SH) are actually a pretty bad way to store a
>GI solution, from a quality point of view (compared
>to lightmaps). SH in their current form are absolutely
>unusable for point lights.

Correct. For an interesting alternative to SH, see http://graphics.stanford.edu/papers/allfreq/. It uses wavelets instead of spherical harmonics, which, according to the paper, are much better for storing the GI solution.

>AFAIK, at least some photonmapping implementations use
>Monte Carlo raytracing, and it definitely looks great
>(when done properly).

The original photon mapping papers by Jensen used Monte Carlo raytracing too. The first pass of the rendering equation is evaluated with accurate Monte Carlo integration, and the second pass is evaluated from the photon map (with some adaptivity, of course). Not using Monte Carlo integration (i.e. visualizing the photon map directly) gives a weird "blurred noise" look (see Jensen's paper from 1996 for a comparison of a museum scene rendered with Monte Carlo raytracing versus direct visualization of the photon map).
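The "visualize the photon map directly" step being discussed boils down to Jensen's k-nearest-neighbour density estimate. Below is an illustrative sketch of that estimate with a fabricated photon map; a real implementation stores photons in a kd-tree and weights each photon by the surface BRDF, neither of which is shown here.

```python
import math

def dist2(a, b):
    """Squared distance between two 3D points."""
    return sum((u - v) ** 2 for u, v in zip(a, b))

def radiance_estimate(photons, x, k=8):
    """Jensen-style density estimate: total power of the k nearest photons
    around x, divided by the disc area pi * r^2 spanned by the k-th
    photon. A linear scan stands in for a real photon map's kd-tree."""
    nearest = sorted(photons, key=lambda p: dist2(p["pos"], x))[:k]
    r2 = dist2(nearest[-1]["pos"], x)
    total_power = sum(p["power"] for p in nearest)
    return total_power / (math.pi * r2)

# Fabricated photon map: a uniform grid on a plane, 0.01 power per photon
# at 0.1 spacing, i.e. roughly 1 unit of power per unit area.
photons = [{"pos": (0.1 * i, 0.1 * j, 0.0), "power": 0.01}
           for i in range(-10, 11) for j in range(-10, 11)]
e = radiance_estimate(photons, (0.0, 0.0, 0.0))
```

Because the estimate averages over the disc covered by the k nearest photons, it is inherently low-pass: rendered directly it produces exactly the "blurred noise" look mentioned above, which is why the second pass integrates this estimate over one extra bounce (final gathering) instead of displaying it raw.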

- Mikko Kauppila

It's over a year old, but this document has some screenshots of realtime photon mapping:
http://www.cs.mtu.edu/~shene/NSF-2/Tim-Jozwowski.pdf

I am surprised that with all the research into photon mapping and radiosity there aren't more interactive demos, even if they were very low quality.

One problem with photon mapping is scalability.

It is great for GI, or better said, for indirect illumination; it is actually not really great for GI!

Why?

Think of it: Global Illumination, just as the name says, means illumination all over our globe (so actually we want Full Space Illumination so other planets get lit, too :D).

Now, come to think of it: where are you now? In a room. What lights that room? A window, with some light coming in. Some light sources, possibly shining through under the door (a well-known indirect lighting situation).

That's not much.

But to "solve" that, photon mapping would have to throw photons from all light sources all over the world, and would have to trace all sun photons onto the whole world. And the actual photon map would have to span the whole world.

Just to illuminate your room...

Not _THAT_ efficient.

Radiosity is no better there (both can be, for example, fully precalculated, so photon maps are still actually better/more feature-rich... and cheaper, still... I think :D).



I think that's why all scalable raytracing engines that want to implement GI don't use photon mapping.

Because with raytracing, even when you just stochastically estimate diffusely reflected rays, you only trace around in the visible/important part of the scene. That means, if you're in your room, no photons ever have to be shot into the rest of the world; instead, only the photons coming in from outside the window get caught.

I now call photon mapping and radiosity indirect illumination solutions; really useful global illumination solutions they aren't... except possibly for precalculation.

Always take a fully dynamic Earth as "global" and try to get your algorithm to run fast there. Then Monte Carlo path tracing sounds quite great.

Of course, you should have some spatial subdivision so you don't need to trace every ray against the whole world :D Then again, the world can have dynamic subdivision so it is only detailed near you, just like for rasterizer algorithms.

"take a look around" - limp bizkit
www.google.com

Global illumination has nothing to do with illuminating the entire planet. It is easily feasible to light just a single room or building, rather than simulating all light everywhere. No solution can handle illuminating the entire planet, let alone contributions from stars, deep space background radiation, etc.

The term "global" is used in contrast to old-style "local illumination" methods which calculate only the direct contributions of each light source. Global illumination is simply local illumination + indirect illumination (diffuse interreflections, caustics, etc).


There is a technique called importance mapping which determines which photons are visible from the camera, and thus controls where photon hits are stored. It is fairly effective but doesn't always cut down the photon count enough to make a huge difference.
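The storage side of that idea can be sketched in a few lines. The interface here (the `emit` and `importance` callbacks, the threshold value) is hypothetical and purely for illustration, not taken from any published implementation: photons are traced as usual, but a hit is only stored where the camera-derived importance is high enough.

```python
import random

def shoot_photons(n, emit, importance, threshold=0.1, seed=3):
    """Trace n photons but store only the hits that land where the
    precomputed importance (a rough measure of visibility from the
    camera) is at least `threshold`, shrinking the photon map."""
    rng = random.Random(seed)
    stored = []
    for _ in range(n):
        p = emit(rng)                         # one traced photon hit
        if importance(p["pos"]) >= threshold:
            stored.append(p)
    return stored

# Toy scene: hits land uniformly in x in [-1, 1], but the camera only
# sees the half-space x > 0, so about half the photons are discarded.
emit = lambda rng: {"pos": (rng.uniform(-1.0, 1.0), 0.0, 0.0), "power": 1.0}
importance = lambda pos: 1.0 if pos[0] > 0.0 else 0.0
kept = shoot_photons(10_000, emit, importance)
```

The tracing cost is unchanged (all n photons are still followed), which matches the caveat above: importance mapping shrinks the map and speeds up lookups, but it doesn't by itself make photon shooting cheaper.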

Photon mapping itself is not likely to become realtime any time soon; I think it will be some kind of derivative or hybrid that gets there first, and that will most likely be the method used for illumination for many years to come.

For small scenes, it's quite doable to generate photon maps in realtime. The problem is that you should raytrace to get a good visual image in the end with the photon map, and that's expensive; but tracing the photon map alone could be done in realtime.

(I just read that PDF noted above.)

The problem is that photon mapping does not scale well to, for example, GI over the whole world. Algorithms like the ones used for GI on www.openrt.de, which sound much more "brute force", are much more efficient in that situation, as they never touch the whole world, but only backtrace what is actually visible and important.

Still, you need a good raytracing acceleration structure to trace a whole world.

But it's important, IMHO, that even GI does not touch the whole world by itself: only the visible parts get lit, and only by the parts that can see them, which got lit by only the parts that can see them, and so on :D

It's the only algorithm that works at the same speed no matter how big the world is.

Of course, "same speed" is not really the same speed... but with a good tree structure, you could have the whole galaxy filled with sphere light sources, and while you stand in a room you never touch them... except the stars that illuminate your window :D

"take a look around" - limp bizkit
www.google.com

just my 2-cents...

I think this GI stuff is all very interesting, but not all that useful yet...

I suggest this (feel free to comment!):

Take some traditional shadow-volumed, per-pixel lighting effects (I guess something along the lines of the Doom 3 / UT stuff). But apply it to an extremely complex scene...

We've shown that we can make an individual character (or two) on screen look absolutely fantastic, likewise with some enclosed spaces...

But how about a slightly more open space, with a huge amount of detail and a large number of people?

Say a street with a few shops and 20-30 people - all at a medium-high level of detail.

I suppose what I'm saying is: spread a medium level of detail over a high number of objects, rather than a huge amount of detail over a small number of objects.

To be a little science-y (merely my own speculation): we are used to seeing very complex scenes, with lots and lots of objects (even down to the dust, small bits of litter, etc.), and I'd guess that we can tell a CGI scene is "fake" by the sparsity of detail... the large, relatively plain walls and floors?

Jack

davepermen: you're not making sense. Of course photon mapping is not good for illuminating an entire world; neither is any other rendering method that currently exists. As I said, there is no need to illuminate a complete planet; illumination in a single room or building is fine. I'm really not sure where you're getting this "entire world" thing, but IMHO it isn't really germane to the problem of realtime GI, as nobody is trying to illuminate an entire planet in realtime (yet).


jollyjeffers: you've discovered the eternal dilemma of graphics. The tradeoff is always speed for detail, and it will always be so. However, as algorithms continue to improve, it will someday be quite possible to do the scene you describe: huge, crammed full of detail, and realistic. There has already been work done on "simplified models" which are used for shadows/illumination while the high-detail models are directly rendered. The effects are barely noticeable but yield a significant speed increase, provided that the overall detail level is reduced properly in the simplified models. For example, the technique would take a large wall and do realtime lighting as if the wall were a single quad, but then render it with lumps and deformities, bump maps, etc.

ApochPiQ: all I'm saying is that photon mapping, as well as radiosity (though radiosity is affected even more), does not scale well with world size.

If your world is only some rooms, yes, photon mapping is fast.

If your world is big, you end up generating billions of useless photons.

The only approach that never starts to shade anything that is not important in the final image is Monte Carlo based raytracing, metropolis light transport at best.

I've studied it quite a lot and like photon mapping very much, but I just had to realize that for general purposes, even say a Q3 renderer, it's not really useful. If you want it fully dynamic, of course.

That's why I think we don't really need to support photon mapping evolution, just as we don't need to support radiosity evolution.

A good global illumination algorithm scales well to a quite big and complex global scene. The only one that really does scale well is Monte Carlo based raytracing.

I, for myself, am disappointed by this...

Still, photon mapping can be a quite neat intermediate step for indoor raytracing, as there you can know quite well what affects the final image => you only work with photons in that range.

And of course, you only update some photons per frame, and store for each photon the ID of the light it comes from. That way, you can animate light color/brightness in realtime (flickering and the like), and slowly moving light sources, too (they have some drift, as you don't update all photons per frame), as suggested in the paper on page 1 of this thread.

This could be a quite good intermediate step.

But as a fully general-purpose, well-scaling GI solution, one that COULD one day render the whole world (with tessellation, of course), only Monte Carlo looks like it's capable of doing this.

Why render the whole world? First: why not? If your algorithm does not scale well for this, it's a non-scalable algorithm. With a good ray acceleration structure, the size and size differences virtually don't affect plain raytracing, so they don't really affect MC much either. Of course, scene complexity does affect the scaling.

You can, for now, say the Earth you simulate is really just a sphere, and you have one house on it.

Try photon mapping to get the sun shining in through the window... virtually impossible.

THAT'S what I mean when I say photon mapping doesn't scale well.

"take a look around" - limp bizkit
www.google.com
