
Sending lots of data to the GPU?


4 replies to this topic

#1 OmarShehata   Members   -  Reputation: 205


Posted 01 November 2012 - 04:00 PM

I've got my lighting shader working, and let's say I want to have 10, 20, or 50 lights in my scene. The lighting code itself isn't very complex, so it should be able to handle a lot of lights without much trouble. However, just by declaring a uniform array to hold 150 vec3s, I get a black screen, presumably because my graphics card can't handle that much data?

I was thinking a way around this would be to create an image at run time where each pixel represents the XYZ position and intensity of a light as RGBA, and send that to the shader instead. That way I could theoretically have a thousand lights with no problems. I haven't implemented this yet, but I wanted to ask how other people handle things like this, whether I should even be trying to send that much data to my shader, and how you normally optimize such things.
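The packing scheme described here can be sketched on the CPU side. A minimal NumPy version follows; the light list and the helper name are made up for illustration, and GL_RGBA32F is one common format you could upload such data with:

```python
import numpy as np

# Hypothetical light list: (x, y, z) position plus an intensity per light.
lights = [
    ((1.0, 2.0, 0.5), 0.8),
    ((-3.0, 0.0, 4.0), 1.0),
    ((0.0, 5.0, -2.0), 0.25),
]

def pack_lights_to_texture(lights):
    """Pack each light into one RGBA texel: RGB = XYZ position, A = intensity.

    The resulting (1, N, 4) float32 array is what you would upload as a
    1-pixel-tall floating-point texture (e.g. glTexImage2D with GL_RGBA32F);
    the shader then fetches light i from texel (i, 0).
    """
    texels = np.zeros((1, len(lights), 4), dtype=np.float32)
    for i, (pos, intensity) in enumerate(lights):
        texels[0, i, :3] = pos   # position in RGB
        texels[0, i, 3] = intensity  # intensity in A
    return texels

texture = pack_lights_to_texture(lights)
```

The shader side would read this texture with ordinary texel fetches inside its light loop, so the light count is limited by texture size rather than by the uniform array limit.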

Any tips would be appreciated!

Thanks!


#2 L. Spiro   Crossbones+   -  Reputation: 12675


Posted 01 November 2012 - 05:33 PM

If you are using DirectX 11, use Forward+ rendering.
Otherwise use Deferred rendering.

If you insist on forward rendering, you have to render in multiple passes, each pass handling as many lights as a single pass can fit.
This means resubmitting the geometry multiple times and additively blending the results onto the previous passes.


L. Spiro
It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums

#3 Krohm   Crossbones+   -  Reputation: 3011


Posted 02 November 2012 - 01:25 AM

I was thinking a way around this would be to create an image at run time where each pixel represents the XYZ and intensity of the light as RGBA and send that instead to the shader. That way I could theoretically have a thousand lights with no problems.

I had implemented this back in... 2006 I think, on an entry-level GeForce6.
The main problem with that kind of hardware is that it cannot loop arbitrarily: 255 iterations at best, and even within the hardware limits the shader might eventually time out and abort. Hopefully that is gone now. If memory serves, it was possible to process perhaps 50 lights per pass.
Performance was interactive, but far from viable.

The real problem (which holds even now) is that float textures take quite a lot of bandwidth, and float texture lookups are often slower than integer lookups.

#4 OmarShehata   Members   -  Reputation: 205


Posted 02 November 2012 - 01:50 PM

Hmm, deferred rendering looks interesting. It seems like it would make the process of applying lighting to objects much faster, but I still don't understand how it lets me use more lights.

I'm reading the article here: http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/deferred-rendering-demystified-r2746

Afterwards, the lights in the scene are rendered as geometry (sphere for point light, cone for spotlight and full screen quad for directional light), and they use the G-buffer to calculate the colour contribution of that light to that pixel.

I would still be sending light data somehow, right? And that would be limited by how big I can make my array, or am I not understanding it right?

#5 Krohm   Crossbones+   -  Reputation: 3011


Posted 04 November 2012 - 04:08 AM

I would still be sending light data somehow right?

Sure. The data must be there somehow.

And that would be limited by how big I can make my array, or am I not understanding it right?

Yes and no.
The starting point is: everything we need for the lighting calculation is in the G-buffer.
Then we have a "light" shader which ideally does one light; for each light in the scene we draw a full-screen quad.
Since each fragment has position and normal information (among whatever else we need), the full-screen quad "lights" the pixels.
So, to do more lights, we add more batches.
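A CPU-side toy of that idea, with a tiny made-up G-buffer and Lambert-only shading (all values and names are illustrative): each light becomes one additive pass over every pixel, so the light count only affects the number of batches, not any array size in the shader.

```python
import numpy as np

# Tiny 2x2 "G-buffer": world-space position and normal per pixel.
positions = np.array([[[0, 0, 0], [1, 0, 0]],
                      [[0, 1, 0], [1, 1, 0]]], dtype=np.float32)
normals = np.zeros_like(positions)
normals[..., 2] = 1.0  # every pixel faces +Z

lights = [  # (position, intensity), illustrative values
    (np.array([0.0, 0.0, 2.0], dtype=np.float32), 1.0),
    (np.array([1.0, 1.0, 3.0], dtype=np.float32), 0.5),
]

def light_pass(accum, light_pos, intensity):
    """One 'full-screen quad' for one light: Lambert N.L at every pixel,
    blended additively into the accumulation buffer."""
    to_light = light_pos - positions
    dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
    L = to_light / dist
    ndotl = np.clip((normals * L).sum(axis=-1), 0.0, None)
    return accum + intensity * ndotl

accum = np.zeros((2, 2), dtype=np.float32)
for pos, intensity in lights:  # one batch (one draw) per light
    accum = light_pass(accum, pos, intensity)
```

Note that the per-light shader only ever sees a single light's parameters; the "array of lights" lives on the CPU as the list of batches to draw.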

Of course, this is a very straightforward method. A better way is to process multiple lights per pass, and in that case, yes, we do store them in an array. But that is an optimization on top of the basic concept.
There are also more involved methods dealing with Compute Shaders and tiling. I'm keeping it as easy as possible here.



