# GPU based lightmapper


## Recommended Posts

I would be interested in learning how a GPU-based lightmapper would be written, not for real-time purposes, but as an offline tool. Currently I am only interested in computing N.L diffuse light on the GPU. It seems like the challenge is in the visibility determination, which can probably be done faster on the CPU if raytracing is used. I am aware that Gelato and the PRT simulator use GPU acceleration, but it is unclear to me how the algorithm would work. How is visibility determined in this case? Shadow maps? How do you handle general area lights?
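For reference, the N.L diffuse term asked about is just a clamped dot product per lumel; a minimal CPU-side sketch of what the GPU pass would compute (function and parameter names are hypothetical):

```python
def lambert(normal, to_light, light_color, attenuation=1.0):
    """Clamped N.L diffuse term for one lumel.

    normal and to_light are unit 3-vectors; light_color is an RGB triple.
    """
    # Clamp the dot product so surfaces facing away receive no light.
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
    return tuple(c * n_dot_l * attenuation for c in light_color)
```

On the GPU this is one multiply-add per lumel; the hard part, as the post says, is deciding whether the light is visible at all.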

##### Share on other sites
What's your reasoning behind going for a GPU-based lightmapper? Actually, with 8 CPU cores already available (and many more to come in the future), it's better to just spread the load among all cores instead of writing a specialized GPU lightmapper where you must carefully prepare your textures (e.g. normal and light textures for the N dot L operation).

It's much easier to cull the patches on the CPU, since you know best how data and visibility are organized in your engine.

If you want area lights (who wouldn't? They sure look nice compared to regular N dot L - Screen1, Screen3), just go for radiosity, where you can turn the shadows on and off as you like - thus either getting extremely fast results without shadows or slow results with shadows. Of course, you could combine radiosity with several known shadowing methods that would get you an order of magnitude speedup while calculating shadows (just spend a few hours googling that and reading all the relevant PDFs).
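A minimal sketch of the patch-to-patch transfer a radiosity pass computes, with the shadow test as a toggle exactly as described above (all names and the point-to-patch approximation are assumptions, not taken from the post):

```python
import math

def form_factor(p_i, n_i, p_j, n_j, area_j):
    """Unoccluded point-to-patch form factor approximation.

    p_*: patch centres (x, y, z); n_*: unit normals; area_j: sender area.
    """
    dx = [p_j[k] - p_i[k] for k in range(3)]
    r2 = sum(d * d for d in dx)
    if r2 == 0.0:
        return 0.0
    r = math.sqrt(r2)
    d = [c / r for c in dx]
    cos_i = max(0.0, sum(n_i[k] * d[k] for k in range(3)))
    cos_j = max(0.0, -sum(n_j[k] * d[k] for k in range(3)))
    return cos_i * cos_j * area_j / (math.pi * r2)

def gather(patch, others, shadows=False, visible=lambda a, b: 1.0):
    """Gather light at one patch; the visibility term is applied only
    when shadows are enabled, matching the fast/slow trade-off above."""
    total = 0.0
    for o in others:
        ff = form_factor(patch["p"], patch["n"], o["p"], o["n"], o["area"])
        if shadows:
            ff *= visible(patch, o)  # e.g. a shadow ray or depth test
        total += o["emission"] * ff
    return total
```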

Plus, with area lights, you have a common range outside of which the effect of the light is invisible. So you could easily go through the array of all lights in your level, prepare 8 groups of patches (one per light), and let each core process its own set of data. Should you write optimized SSE3-based routines, I bet even a GF8800 couldn't hold the pace, since the gfx card will always be bandwidth-limited here, no matter how fast the calculations are made.
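The cull-then-parallelize scheme described above can be sketched like this (a sketch only, assuming a simple spherical light range and using threads as stand-ins for the per-core workers; all names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def cull_patches(light, patches):
    """Keep only the patches inside this light's range (sphere test)."""
    r2 = light["range"] ** 2
    return [p for p in patches
            if sum((p["pos"][i] - light["pos"][i]) ** 2 for i in range(3)) <= r2]

def bake(lights, patches, workers=8):
    """Cull once per light, then let each worker light its own group."""
    groups = [(light, cull_patches(light, patches)) for light in lights]

    def light_group(item):
        light, group = item
        # Stand-in for the real per-patch N.L / radiosity work.
        return light["name"], len(group)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(light_group, groups))
```

Because each group touches disjoint light data, the workers need no locking, which is what makes the multi-core argument attractive.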

##### Share on other sites
Thanks Vlad, interesting take on the topic.

What I am looking at is speed. I am aware of the advancements in the field of real-time ray tracing, and aware of the different techniques used (kd-trees, packet tracing, SSE optimizations, etc.), but what I am seeing is global illumination renderers moving towards a mixed/hybrid mode in which the GPU is used in cases where it makes sense and where there is a speedup (Gelato falls in this category, the PRT simulator uses GPU acceleration I believe, and I think mental ray can be customized to use the GPU to accelerate the computation of certain shadows).

The fastest kd-tree in the world would not be able to keep up with a simple depth test after a shadow projection.
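That depth test is a single comparison per sample. A minimal sketch of it, assuming the projection into light space is supplied by the caller (the names and the toy bias value are assumptions):

```python
def in_shadow(depth_map, project, point, bias=0.005):
    """Shadow-map test: project a world-space point into light space and
    compare its depth against the closest occluder the light saw there.

    project maps a point to (u, v, depth) in the light's view;
    depth_map[v][u] holds the depth rendered from the light.
    """
    u, v, depth = project(point)
    # The bias avoids self-shadowing ("shadow acne") from depth quantization.
    return depth - bias > depth_map[v][u]
```

Compared with a kd-tree traversal per shadow ray, this is one texture fetch and one compare, which is the speed argument being made.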

So I think a good final solution would be a combination of both worlds, but it is unclear to me what the state-of-the-art techniques in the field are.

Gelato hides most of the details of being a hybrid renderer, but it can run considerably faster than a CPU-based ray tracer (or so they claim).

##### Share on other sites
If you're going for GPU you need to try to do as much as possible on the GPU to avoid becoming bus limited. The GPU is a lot faster than the fastest multi-core CPUs and has much more bandwidth to video memory to boot.

For visibility testing, it depends on what you're doing with your visibility calcs. You can use things like hardware occlusion queries, or simply use the alpha channel of a texture to indicate visibility (then use alpha test).

If you just want to determine the visibility of a given light (or set of lights) from a given point, then you'd be best off rendering the scene from the point of view of the target point. Have objects render in black and lights in white (or the light colour). You can alpha blend transparent objects for fancy "stained glass" effects. Then, to calculate the colour at the point, you simply sum the colours in the scene (use aniso decimation or similar) and scale appropriately.
For simple non-shadowed stuff it'll be barely faster than the CPU, if at all, but for many lights and many shadow casters the speedup can be large. Also, area lights are trivially handled.
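The "sum the colours and scale" step can be sketched on the CPU like this (a sketch only; the per-texel solid-angle weights are an assumption, and on the GPU the sum would typically be done by repeated downsampling rather than a loop):

```python
def integrate_view(pixels, weights):
    """Sum one rendered visibility image into a single RGB value.

    pixels: per-texel RGB as seen from the receiving point (lights
    bright, occluders black); weights: the solid angle each texel
    subtends, so texels near the edge of the view count for less.
    """
    total = [0.0, 0.0, 0.0]
    for rgb, w in zip(pixels, weights):
        for c in range(3):
            total[c] += rgb[c] * w
    return tuple(total)
```

Because occluders rendered black contribute nothing to the sum, shadowing and area lights fall out of the same pass for free, which is the point made above.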

##### Share on other sites
I agree with Jerax, and I even remember an article (maybe even here on Gdnet) based on this stuff.

And it's very easy to upgrade from a simple lightmap to global illumination, because you just have to do another step with the already lightmapped object.

Actually, with proper shaders, it can be pretty fast. Faster than raytracing, especially if you want really nice shadows, because those need 50*50 or more rays.

EDIT: I tried searching for this article but couldn't find it. It was about rendering the whole scene from the viewpoint of every texel of the lightmap into a cubemap, and then summing up the pixels of the cubemap.

[Edited by - Gagyi on July 19, 2007 1:37:47 PM]
