# OpenGL Global Illumination

## Recommended Posts

Hi, what games use some kind of global illumination algorithm (ray tracing or anything else) in their rendering engine? I know there is a real-time ray tracing algorithm and a free, OpenGL-based rendering engine for it. But does any game actually use something like that? And how about radiosity or photon mapping?

##### Share on other sites
If you are talking about the OpenRT project: in fact, it is not real-time and, at the moment, not really useful for game development. I know the developers of OpenRT love to say that it IS real-time, but to get 15-20 frames per second you need a cluster... and even if 15 fps were possible on a single machine, it still would not be enough for games.

##### Share on other sites
But do any games use global illumination? Not necessarily via OpenRT...

##### Share on other sites
Halo, Max Payne 2, Half-Life 2, and probably others used radiosity for their static lighting.

I'd imagine spherical harmonics have been used in at least a couple of games, but I can't be sure.

Now, did you mean lighting calculated at runtime? That is an entirely different story, and the answer is almost certainly no [at least, not for rendering and such].

Radiosity and photon mapping aren't fast enough [for complex scenes] to run inside a game engine.

[Also, it would have to be done entirely on the graphics card, as the CPU will be busy with other things.]

##### Share on other sites
Thanks...

Your answer was what I was looking for. I imagined that global illumination is computed offline, as a batch process...

But how is the result applied? Do they bake textures for the scene, or use some kind of color buffer? How is this accomplished?

##### Share on other sites
They all use lightmaps as textures, but there are many ways to create them. Lightmaps are mostly generated in 3ds Max or other 3D packages. In Max Payne, for example, the lightmap is rendered by their proprietary radiosity tool called 'GI server' inside the map editor... but the lighting isn't that exact. Actually, the best way to create lightmaps is with mental ray or finalRender, I think...

##### Share on other sites
Of course games don't use real-time radiosity or ray tracing. Have you played Unreal 2 lately? It is heavy on lightmap usage and yet it crawls on a lower-end configuration (under 2 GHz, 256 MB RAM, slower than a GeForce 4). And that's with all lightmaps already precalculated! Now imagine it had to calculate those lightmaps in such a detailed environment. It's not going to happen any time soon. You'd need at least 1 GB of RAM just for patches and form factors, plus another 512 MB for the rest of the game.

True, there are algorithms that can make radiosity a real-time solution on the newest crop of video cards. But then again, they run in environments nowhere near as detailed as current games (which, again, just use precalculated lightmaps, yet still struggle on slower machines). Yes, the next two generations of video cards may double shader performance, but you know what? Gaming environments will also double in polygon count and complexity, so it's a never-ending game of "catch me if you can!". True, you could do real-time radiosity in an older game like Quake 3, which is low in polygon count (although it has pretty large walls, which would mean lots of patches to calculate). But would it really look that much different? It's definitely not going to be a difference like the one Tenebrae made for Quake.

Besides, I have yet to see a clever use of real-time radiosity that would actually benefit the player and the environment. What would be the point of real-time radiosity other than coolness? I can't see any use beyond dynamic diffuse lighting, which can also be achieved by other means.
