Virtual Visible Spectrum

Started by anachronistic89
9 comments, last by anachronistic89 15 years, 9 months ago
Hello all. New member here. I've just gotten into 3D programming. I am intermediate in C++, and I know a bit of OpenGL. What I would like to do is create an environment which functions like our own.

I notice that a lot of 3D applications use polygons with image textures, and then apply shadows and highlights based on the light source. Then again, that's just what I have noticed from watching 3D movies and playing 3D video games, so perhaps I am wrong. I find this method flawed and limited, and I think I have a better way to simulate it.

Let's say we create a light source, and let's imagine that the light source is the sun. I would assign an array of variables to it, each assigned to a wavelength on the electromagnetic spectrum. I would not limit the number of wavelengths to the visible spectrum, because some wavelengths outside it, like ultraviolet, are visible to some people and some creatures, and also because I would need those wavelengths for a project I want to use this graphics theory for. There would be a few other variables which measure the distance between objects and the light source, the intensity, and so forth. I would create a formula which calculates the value of the color at any given location.

Objects would be anisotropic polygons with colorless textures. An object would have different polygons depending on the distance of the viewpoint which is viewing it. This is something like the engine of The Elder Scrolls IV: Oblivion, where graphics become more detailed as the distance between the player and the object becomes smaller. This would be to reduce the amount of computing power required to render the entire picture.

Anyway, I would use the aforementioned formula and assign variables which hold the values it calculates to areas on the object. Then, when the object appears under the light source, its local value will be the color it gives off when the wavelengths from the light source reach it. Then, highlights and shadows would be assigned to the object (I haven't really thought about this part yet, but based on what I have learned in art school, I would use analogous colors and complements for a more accurate portrayal). The atmosphere would actually be the color of the gases which make it up. Clouds would actually take form, and would not just be some background atmospheric image. Copyright 2008, Nathan Jones, by the way.

What I would like to know is: would this be possible within the limitations of today's computing technology? Is it possible to program such an advanced system using OpenGL and C++? I know I could do the C++ part, but does OpenGL have any prohibiting boundaries? Does something like this already exist? I really doubt it, but I never completely throw away the possibility.
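For what it's worth, here is a minimal C++ sketch of the kind of wavelength-sampled light source I have in mind (the names, the sample range and count, and the simple inverse-square falloff are all just illustrative placeholders):

```cpp
#include <array>

// A light source described by samples of its spectral power distribution (SPD),
// covering 300-700 nm so that near-ultraviolet is included, not just the visible band.
struct SpectralLight {
    static constexpr int   kSamples   = 41;     // one sample every 10 nm
    static constexpr float kLambdaMin = 300.0f; // nm (near UV)
    static constexpr float kLambdaMax = 700.0f; // nm (red end)

    std::array<float, kSamples> power{}; // radiant power per wavelength sample
    float x = 0, y = 0, z = 0;           // position of the light in the scene

    // Spectral irradiance arriving at a point, assuming a simple
    // inverse-square falloff from a point light (no occlusion, no scattering).
    std::array<float, kSamples> irradianceAt(float px, float py, float pz) const {
        const float dx = px - x, dy = py - y, dz = pz - z;
        const float distSq = dx * dx + dy * dy + dz * dz;
        std::array<float, kSamples> result{};
        for (int i = 0; i < kSamples; ++i)
            result[i] = power[i] / (4.0f * 3.14159265f * distSq);
        return result;
    }
};
```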
Quote:Original post by anachronistic89
I would create a formula, which calculates the value of the color at any given location.

Well, this equation already exists. It's called the general global illumination equation. In its complete form (which is not yet 100% complete, because many parts of the equation are not yet fully understood and/or are not computable with today's technology), its complexity is beyond belief.
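For reference, the usual simplified core of it (Kajiya's rendering equation, evaluated per wavelength in a spectral renderer) is:

$$L_o(x,\omega_o) = L_e(x,\omega_o) + \int_{\Omega} f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i$$

where $L_o$ is the radiance leaving point $x$ in direction $\omega_o$, $L_e$ is emitted radiance, $f_r$ is the surface's BRDF, and the integral gathers incoming radiance $L_i$ over the hemisphere $\Omega$ above the surface normal $n$.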

After applying several very aggressive optimizations, many approximate equation sets were developed over the last 30 or so years: radiosity, photon mapping, Metropolis light transport, and several others. They are still way beyond what current hardware can do in realtime at acceptable quality. Yet, you can use them to precalculate some data for your 3D scene that will later be reused while rendering (see lightmapping, precomputed radiance transfer, etc). These precalculations can take from a few minutes to several weeks depending on the complexity of your scenes. Most current 3D games do that to one extent or another.

Here is some more info to start.

Of course, this method is very popular in high-end offline rendering, such as for still images, architecture, movies, etc. Most current 3D software includes it or can use it through plugins (Mental Ray, V-Ray, Maxwell, etc).

Quote:
Copyright 2008, Nathan Jones, by the way.

You legally can't copyright an idea, BTW [wink]
Quote:Original post by Yann L
Well, this equation already exists. It's called the general global illumination equation. [...]
You legally can't copyright an idea, BTW [wink]


I know that one cannot copyright an idea :P That's just to prevent lazy people from copying and pasting my post.


Thank you for your reply. I know this will certainly help me along in the process.

Maybe I should study some graphics engines on the code-level.

Are the APIs for the big projects designed specifically for themselves, or do they use OpenGL and so forth?
Quote:Original post by anachronistic89
Are the APIs for the big projects designed specifically for themselves, or do they use OpenGL and so forth?

If you're talking about realtime, it's either OpenGL or Direct3D.

For the global illumination (GI) calculations, that's either done on the CPU (or better on multiple CPUs, GI is particularly well suited for parallel computation models) or on one (or multiple) GPUs through CUDA or the upcoming OpenCL.
As Yann L mentioned, the FULL GI equation isn't known yet, but the most important parts are known, and several methods that are accurate enough for simulation are based on them - namely photon mapping, radiosity, etc.
And as was said, there isn't enough computational power in a PC to solve, for example, photon mapping in real time (well, with ray tracing it might be possible on fast enough PCs - I'm talking about 8-core, or rather 16-core, CPUs in those machines - even in real time).
But there are several paths by which GI (not 100% correct) can be achieved in real-time graphics using OpenGL or Direct3D. One is real-time dynamic radiosity, which can be seen in several demos made by me (and not just in them). It's pretty limited in the scale at which it's used, but in the engine those demos run on (also made by me) you're able to set the radiosity quality, and if you set it high enough it would even simulate multiple light bounces and correct light accessibility (but that probably wouldn't be real time, or close to real time, on today's hardware). Link to the engine's web page (not so good, I need to rebuild it... don't kill me for that): Web
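To give an idea of what one radiosity bounce boils down to, here's a minimal single-channel sketch (patch subdivision and the form-factor computation, which are the expensive parts, are assumed to happen elsewhere, and all the names are just placeholders):

```cpp
#include <vector>

// One diffuse patch in a patch-based radiosity solver.
struct Patch {
    float emission;     // self-emitted energy (light sources have > 0)
    float reflectance;  // diffuse reflectance in [0, 1]
    float radiosity;    // current estimate, updated each bounce
};

// One gathering iteration: every patch collects light from every other patch.
// formFactor[i][j] is assumed to be precomputed from the scene geometry
// (visibility and orientation between patches i and j).
void radiosityBounce(std::vector<Patch>& patches,
                     const std::vector<std::vector<float>>& formFactor)
{
    std::vector<float> next(patches.size());
    for (size_t i = 0; i < patches.size(); ++i) {
        float gathered = 0.0f;
        for (size_t j = 0; j < patches.size(); ++j)
            if (i != j)
                gathered += formFactor[i][j] * patches[j].radiosity;
        next[i] = patches[i].emission + patches[i].reflectance * gathered;
    }
    for (size_t i = 0; i < patches.size(); ++i)
        patches[i].radiosity = next[i];
    // Calling this repeatedly adds one light bounce per call; the "quality"
    // setting mentioned above roughly corresponds to patch count and bounce count.
}
```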
If you have any question about radiosity solution in those demonstrations, ask me on my e-mail or PM me.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

Yes, radiosity has already proved itself to me as a method worth considering. I will be checking out your demos. I'll be sure to contact you if I have questions. Thanks!


Hmm, has the fact that human vision is trichromatic been used to help simulate color?

Part of my project focuses on the idea of color vision, detail, and all of that. Could it be possible to render based on whether the character has pentachromatic/trichromatic/dichromatic color vision?
Quote:Original post by anachronistic89
Hmm, has the fact that human vision is trichromatic been used to help simulate color?

Well, look at your screen (which is trichromatic) :)

But yes, of course the human eye is taken into account. Most higher-end GI systems use SPDs (spectral power distributions) to calculate the lighting. Generally, two types of engines exist: those that calculate everything in linear wavelength space (radiometric engines) and those that use the non-linear weighted visible spectrum (photometric engines). The former must convert the radiometric energies to photometric energies when they have completed the calculations. The latter operate directly on photometric units. In order to take the (non-linear) response of the human eye to colours into account (i.e. convert from radiometric to photometric data), you need to use the CIE spectral weighting curves. See CIE colour space (or here).
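As a rough illustration of that radiometric-to-photometric step, a sketch of the weighting looks like this (the colour matching function table is assumed to be filled from the published CIE 1931 data, which is not included here):

```cpp
#include <vector>

// One spectral sample: wavelength in nm, plus the CIE 1931 colour matching
// function values x̄(λ), ȳ(λ), z̄(λ) at that wavelength (from published tables).
struct CmfSample { float lambda, xBar, yBar, zBar; };

// Convert a sampled SPD (same wavelengths as the CMF table) to CIE XYZ.
// This is just a Riemann sum of SPD(λ) * CMF(λ) * Δλ over the visible range.
void spdToXYZ(const std::vector<float>& spd,
              const std::vector<CmfSample>& cmf,
              float deltaLambda,
              float& X, float& Y, float& Z)
{
    X = Y = Z = 0.0f;
    for (size_t i = 0; i < spd.size() && i < cmf.size(); ++i) {
        X += spd[i] * cmf[i].xBar * deltaLambda;
        Y += spd[i] * cmf[i].yBar * deltaLambda;
        Z += spd[i] * cmf[i].zBar * deltaLambda;
    }
    // XYZ can then be mapped to a display RGB space (e.g. sRGB) with a 3x3 matrix.
}
```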
Most of the things you said are well-known in the graphics literature. I suggest you google "rendering equation", "spectral rendering", "atmospheric scattering" and "level of detail".
"Math is hard" -Barbie
Quote:Original post by Yann L
But yes, of course the human eye is taken into account. [...] In order to take the (non-linear) response of the human eye to colours into account (i.e. convert from radiometric to photometric data), you need to use the CIE spectral weighting curves. See CIE colour space (or here).


I've just gotten a chance to read all that stuff, and I'm finally on the CIE color link that you supplied.

From what I read, this is going to make my project even more fun than I thought it would be :)

It's going to be fun depicting ultraviolet light within the limitations of RGB. I could certainly work with the available colors to help do this, because I am sure that bright magenta colors and purple do not appear in the natural world.

I estimate that it will take a lot more computing to render what I originally wanted.

It will especially take a lot of computing to convert trichromatic into dichromatic, and trichromatic into monochromatic. This is the fun of programming, no? :)

By the way, if there are any projects on here that use dichromatic, monochromatic, or simulated tetrachromatic viewpoints, please link them here. Also motion-based vision, blurry vision, and anything like that. In the meantime I will be searching for such things, and working on the project.
For a real tetrachromat, for example, you would notice that a combination of red and blue is not equal to a pure purple, because the two wouldn't excite your fourth sensor the same way (even if you could get the first three sensors right). So the real object and a photograph of that object (made with three kinds of color pigments: magenta, blue and yellow, or red, green and blue on your screen) would never match for you, while your neighbor would tell you that they do match.

Simulating that is kind of hard, because you would need a point of reference to notice the mismatch (what if the object is REALLY painted red and blue?). What you could do, in the game world, is have the character you play see an object and take a polaroid (or digital photograph) of it, so you have the ability to compare. Then have the rendering of the photograph mismatch the regular rendering, and have non-player characters in the game tell you that they match (in a perverse way ^_^).

Simulating a dichromat would be a bit simpler (somehow): you could, for example, make the modified RGB value a linear combination of the original RGB values through a transform matrix, with one of the vectors of the matrix being a combination of one or both of the others. One of the simplest examples would be to make the modified red channel equal the originally computed green channel. Of course, different types of mutation could make the effect more complex. To be complete, you could also go the way of fully simulating the transmission of light frequencies and their effect on the cones and rods, and try to approximate that with your red, green and blue emitters. Having a real dichromat on hand to validate your model would help, of course.
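As a minimal sketch of that matrix idea (the coefficients are purely illustrative; a proper protanopia/deuteranopia simulation would derive them from cone response data):

```cpp
// Simulate a simple dichromat view by collapsing one channel onto another:
// here the "red" output is just a copy of the original green channel,
// so the red row of the matrix duplicates the green row.
struct Rgb { float r, g, b; };

Rgb simulateDichromat(const Rgb& in)
{
    // Row-major 3x3 transform: out = M * in
    const float M[3][3] = {
        { 0.0f, 1.0f, 0.0f },  // out.r = in.g  (red information is lost)
        { 0.0f, 1.0f, 0.0f },  // out.g = in.g
        { 0.0f, 0.0f, 1.0f },  // out.b = in.b
    };
    return {
        M[0][0] * in.r + M[0][1] * in.g + M[0][2] * in.b,
        M[1][0] * in.r + M[1][1] * in.g + M[1][2] * in.b,
        M[2][0] * in.r + M[2][1] * in.g + M[2][2] * in.b,
    };
}
```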

As a last example: I don't think I'm a tetrachromat, but I have a particularity, which is that I wear corrective glasses. The material they're made of is not totally achromatic, so, at the edge of the lens in particular, each frequency of light follows a slightly different path. This brings us to the internal representation of our colors: if a spot is shining red and blue, I see two distinct spots when observing it through the edges of my glasses; if it were a pure purple wavelength, I would see one spot. This distinction is not something you can make with a traditional RGB representation.

LeGreg

This topic is closed to new replies.
