Let's say it's DirectX. In my game I first check whether the light is blocked or not, and then send the light info to the shaders? So I manually (just for the sake of my question, I'm not using an engine or anything) check every light and decide whether or not to set it on the shaders before I render?
When you're doing forward shading (i.e. you render and light each object one by one, one after the other) you would look for the lights which affect your object, and send each of those lights' data to an appropriate shader. For point and spot lights you check whether the renderable object falls within their light volumes, a sphere for point lights and a cone for spot lights; plain directional lights always affect every object. You'll have to make sure you have shader permutations which can handle these setups, though.
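A minimal sketch of that volume test for point lights, assuming each renderable carries a bounding sphere (all the types and names here are illustrative, not from any particular engine):

```cpp
// Illustrative math and scene types; your own representations will differ.
struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

struct PointLight     { Vec3 position; float radius; };  // radius = light volume
struct BoundingSphere { Vec3 center;   float radius; };

// True if the light's sphere volume overlaps the object's bounding sphere,
// i.e. the light can affect the object and its data should be uploaded.
bool LightAffectsObject(const PointLight& light, const BoundingSphere& object) {
    Vec3  d       = Sub(light.position, object.center);
    float maxDist = light.radius + object.radius;
    return Dot(d, d) <= maxDist * maxDist;  // compare squared distances, avoids a sqrt
}
```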
If you're in a situation where you have a lot of lights you might want to set a limit on how many lights can affect an object at once. You'll then have to apply some optimization tricks, like only uploading the nearest n lights (with n being your light limit per object) or merging some of the more distant lights together until you get an acceptable light count.
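Picking the nearest n lights could look something like this, reusing the Vec3/Dot/Sub and light types from the sketch above (again just an illustration):

```cpp
#include <algorithm>
#include <vector>

// Keep only the nearest maxLightsPerObject lights out of those that already
// passed the volume test; these are what you'd upload to the shader's light
// constant buffer for this object.
std::vector<PointLight> SelectNearestLights(std::vector<PointLight> affecting,
                                            const BoundingSphere& object,
                                            size_t maxLightsPerObject) {
    if (affecting.size() > maxLightsPerObject) {
        std::partial_sort(affecting.begin(),
                          affecting.begin() + maxLightsPerObject,
                          affecting.end(),
                          [&](const PointLight& a, const PointLight& b) {
                              Vec3 da = Sub(a.position, object.center);
                              Vec3 db = Sub(b.position, object.center);
                              return Dot(da, da) < Dot(db, db);  // nearer first
                          });
        affecting.resize(maxLightsPerObject);
    }
    return affecting;
}
```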
Another solution would be to use deferred rendering/shading, which effectively decouples geometry from lighting, allowing for a very large number of lights at the cost of memory usage and material complexity.
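In rough strokes a deferred frame is structured like this; every helper name below is hypothetical, it's just meant to show where the decoupling happens:

```cpp
// Hypothetical helpers, not a real API; declarations only, to show the shape.
struct Scene;
void BindGBufferAsRenderTargets();                     // albedo, normals, depth...
void BindGBufferAsShaderInputs();
void DrawObjectToGBuffer(const Scene&, int objectIndex);
void DrawLightVolume(const Scene&, int lightIndex);    // accumulates additively

// Geometry is rendered exactly once; lighting cost then scales with
// lights * covered pixels instead of lights * objects.
void RenderFrameDeferred(const Scene& scene, int objectCount, int lightCount) {
    BindGBufferAsRenderTargets();
    for (int i = 0; i < objectCount; ++i)
        DrawObjectToGBuffer(scene, i);   // write surface attributes, no lighting yet

    BindGBufferAsShaderInputs();
    for (int i = 0; i < lightCount; ++i)
        DrawLightVolume(scene, i);       // shade using the G-buffer contents
}
```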
Do you have any tutorials you could recommend on this? I'm not sure I understand the concept or what you're trying to explain.
I'm not up to date on the latest tutorials on these subjects, so I'm afraid I can't directly recommend you anything. I'm sure a Google search for "shadow mapping" will get you some valid results though.
I'll pass on this one for now ;) and try to understand the basic concept first. It was a question to understand how shaders can communicate with each other or impact your game in some way.
Remember that shaders are perfectly able to read and write data, and that data can be (re-)used by other shaders as well. For pixel/fragment shaders this writing will mostly take the form of textures (not considering advancements in newer graphics APIs which also allow some shaders to write to random-access buffers, e.g. unordered access views in DirectX 11), and textures can be used as buffers to store calculation results.
So to establish 'communication' between shaders, one shader could do some sort of calculation and write its results to a texture (which is bound as a render target), while another shader can afterwards use that texture as input for its own calculations.
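Since we were talking DirectX: in D3D11 the key part is creating the texture with both render-target and shader-resource bind flags. A minimal sketch, assuming you already have a working device, swap chain, and shaders set up (error handling omitted):

```cpp
#include <d3d11.h>

// Create a texture that pass 1 can render into and pass 2 can sample from.
void CreateIntermediateTexture(ID3D11Device* device, UINT width, UINT height,
                               ID3D11Texture2D** tex,
                               ID3D11RenderTargetView** rtv,
                               ID3D11ShaderResourceView** srv) {
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R16G16B16A16_FLOAT;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    // Both bind flags: render target for the writing pass,
    // shader resource for the reading pass.
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    device->CreateTexture2D(&desc, nullptr, tex);
    device->CreateRenderTargetView(*tex, nullptr, rtv);
    device->CreateShaderResourceView(*tex, nullptr, srv);
}

// Per frame:
//   Pass 1: context->OMSetRenderTargets(1, &rtv, depthView); then draw; the
//           pixel shader's output now lives in the texture.
//   Pass 2: unbind the render target, bind the texture as input with
//           context->PSSetShaderResources(0, 1, &srv); then draw; the second
//           shader samples the first shader's results.
```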