Sending lots of data to the GPU?

3 comments, last by Krohm 11 years, 5 months ago
I've got my lighting shader code, and let's say I want to have 10, 20, or 50 lights in my scene. The lighting code itself isn't very complex, so it should easily be able to handle a lot of lights without too much trouble. However, just by declaring my array in the shader to hold 150 vec3s, I get a black screen, presumably because my graphics card can't handle that much data?
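
Roughly what my fragment shader looks like, as a simplified sketch (the uniform names and attenuation formula are just made up for illustration, not my exact code):

#version 330 core
#define NUM_LIGHTS 50

// 3 * 50 = 150 vec3 uniforms; if this exceeds the card's uniform budget
// (GL_MAX_FRAGMENT_UNIFORM_COMPONENTS), the shader can fail to link or run.
uniform vec3 u_lightPos[NUM_LIGHTS];
uniform vec3 u_lightColor[NUM_LIGHTS];
uniform vec3 u_lightAtten[NUM_LIGHTS];   // constant/linear/quadratic terms

in vec3 v_worldPos;
in vec3 v_normal;
out vec4 o_color;

void main()
{
    vec3 N = normalize(v_normal);
    vec3 total = vec3(0.0);
    for (int i = 0; i < NUM_LIGHTS; ++i)
    {
        vec3 L = u_lightPos[i] - v_worldPos;
        float d = length(L);
        L /= d;
        float atten = 1.0 / dot(u_lightAtten[i], vec3(1.0, d, d * d));
        total += u_lightColor[i] * max(dot(N, L), 0.0) * atten;
    }
    o_color = vec4(total, 1.0);
}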

I was thinking a way around this would be to create an image at run time where each pixel represents the XYZ and intensity of the light as RGBA and send that instead to the shader. That way I could theoretically have a thousand lights with no problems. I haven't implemented this yet but I wanted to ask how other people handle stuff like this, or whether I should even be trying to send that much data to my shader. And how you normally optimize such things.
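
Something like this is what I had in mind for the shader side (an untested sketch; I'm assuming one light per texel, RGB = position, A = intensity, and a GLSL version that has texelFetch):

#version 330 core
uniform sampler2D u_lightTex;   // hypothetical float texture, one light per texel
uniform int u_lightCount;

in vec3 v_worldPos;
in vec3 v_normal;
out vec4 o_color;

void main()
{
    vec3 N = normalize(v_normal);
    vec3 total = vec3(0.0);
    for (int i = 0; i < u_lightCount; ++i)
    {
        // RGB = light position, A = intensity (white lights in this sketch)
        vec4 texel = texelFetch(u_lightTex, ivec2(i, 0), 0);
        vec3 L = texel.rgb - v_worldPos;
        float d = length(L);
        total += vec3(texel.a) * max(dot(N, L / d), 0.0) / (d * d);
    }
    o_color = vec4(total, 1.0);
}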

Any tips would be appreciated!

Thanks!
If you are using DirectX 11, use Forward+ rendering.
Otherwise use Deferred rendering.

If you insist on forward rendering, you have to render in multiple passes, with each pass handling as many lights as will fit in a single pass.
This means resubmitting the geometry multiple times and additively blending each pass's result onto the previous ones.
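
Very roughly, one such pass could look like this (just a sketch; the per-pass light count and uniform names are placeholders, and the application draws the scene once per batch of lights with additive blending, e.g. glBlendFunc(GL_ONE, GL_ONE), enabled after the first pass):

#version 330 core
// One forward pass handles up to LIGHTS_PER_PASS lights; the passes
// accumulate in the framebuffer via additive blending.
#define LIGHTS_PER_PASS 8

uniform vec3 u_lightPos[LIGHTS_PER_PASS];
uniform vec3 u_lightColor[LIGHTS_PER_PASS];
uniform int  u_lightCount;   // how many slots are actually used this pass

in vec3 v_worldPos;
in vec3 v_normal;
out vec4 o_color;

void main()
{
    vec3 N = normalize(v_normal);
    vec3 total = vec3(0.0);
    for (int i = 0; i < u_lightCount; ++i)
    {
        vec3 L = u_lightPos[i] - v_worldPos;
        float d = length(L);
        total += u_lightColor[i] * max(dot(N, L / d), 0.0) / (d * d);
    }
    o_color = vec4(total, 1.0);   // summed across passes by the blend state
}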


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid


I was thinking a way around this would be to create an image at run time where each pixel represents the XYZ and intensity of the light as RGBA and send that instead to the shader. That way I could theoretically have a thousand lights with no problems.
I had implemented this back in... 2006 I think, on an entry-level GeForce6.
The main problem with that kind of hardware is that it cannot loop arbitrarily: it can loop 255 times at best, and even when the shader fits the hardware's resources it might eventually time out and abort. Hopefully this is gone now. If memory serves, it was possible to process about... perhaps 50 lights per pass?
Performance was interactive, but far from viable.

The real problem (which holds even now) is that float textures take quite a lot of bandwidth, and a float texture lookup is often slower than an integer lookup.

Previously "Krohm"

Hmm, deferred rendering looks interesting. It seems like it'd make applying the lighting to objects much faster, but I still don't understand how it allows me to use more lights.

I'm reading the article here: http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/deferred-rendering-demystified-r2746

Afterwards, the lights in the scene are rendered as geometry (sphere for point light, cone for spotlight and full screen quad for directional light), and they use the G-buffer to calculate the colour contribution of that light to that pixel.

I would still be sending light data somehow right? And that would be limited by how big I can make my array, or am I not understanding it right?

I would still be sending light data somehow right?
Sure. The data must be there somehow.
And that would be limited by how big I can make my array, or am I not understanding it right?
Yes and no.
The starting point is: everything we need for lighting calculation is in the G-buffer.
Then we have a "light" shader which ideally does 1 light, for each light in the scene we draw a full-screen quad.
Since each fragment has position and normal information (among anything else we need), the full-screen quad will "light" the pixels.
So, to do more light, we add more batches.
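
A bare-bones version of that per-light shader might look like this (just a sketch; the G-buffer layout, texture names, and falloff are made up for illustration):

#version 330 core
// Deferred light pass: drawn once per light as a full-screen quad,
// results accumulated with additive blending.
uniform sampler2D u_gbufPosition;   // world-space position
uniform sampler2D u_gbufNormal;     // world-space normal
uniform sampler2D u_gbufAlbedo;     // diffuse colour

uniform vec3  u_lightPos;
uniform vec3  u_lightColor;
uniform float u_lightRadius;

in vec2 v_texCoord;
out vec4 o_color;

void main()
{
    vec3 P      = texture(u_gbufPosition, v_texCoord).xyz;
    vec3 N      = normalize(texture(u_gbufNormal, v_texCoord).xyz);
    vec3 albedo = texture(u_gbufAlbedo, v_texCoord).rgb;

    vec3 L = u_lightPos - P;
    float d = length(L);
    L /= d;

    float atten = max(1.0 - d / u_lightRadius, 0.0);   // simple linear falloff
    o_color = vec4(albedo * u_lightColor * max(dot(N, L), 0.0) * atten, 1.0);
}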

Of course this is a very straightforward method. A better way would be to process multiple lights per quad, and in that case yes, we get to store them in an array. But it's an optimization on top of the basic concept.
There are also more involved methods dealing with Compute Shaders and tiling. I'm keeping it as easy as possible here.

Previously "Krohm"

