
OpenGL Shader Management for Lighting?



Hi,

 

I am currently learning how to apply lighting to my application, but I got confused about how I should scale it. I'm using WebGL and I'm learning from learningwebgl.com (which is said to follow the NeHe OpenGL tutorials), and it only shows simple shader programs: every sample has one program with the lighting embedded in it.

 

Say I have a setup with multiple lights, like some point lights/spot lights, and I have multiple meshes with different materials. What should I do? Make individual shader programs that apply colors/textures to the meshes and then switch to a lighting program? Or always have the lighting code in every shader's source by default and simply pass uniforms to enable the lights?

 

Also I am focusing on per-fragment lighting.

 

Thanks for the help!


In DirectX, I pasted some generic light sampling functions at the top of each shader before compiling them. An array from the CPU stored up to 64 light sources that passed the culling test; having more lights than that would be too slow to render anyway. The same thing should be possible in OpenGL and works without dynamic shader linking.
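As a rough illustration (this is not the actual code from that engine), the pasted-in part could be a GLSL header along the lines of the sketch below. The 64-light limit comes from the description above; the uniform names and the simple point-light falloff are made up for the example.

// Hypothetical light header that gets prepended to every material shader.
#define MAX_LIGHTS 64

uniform int uLightCount;                  // number of lights that passed culling
uniform vec3 uLightPosition[MAX_LIGHTS];  // world-space positions
uniform vec3 uLightColor[MAX_LIGHTS];     // color multiplied by intensity
uniform float uLightRadius[MAX_LIGHTS];   // falloff radius

// Accumulate the diffuse contribution of every active light at a surface point.
vec3 sampleLights(vec3 worldPos, vec3 normal)
{
    vec3 result = vec3(0.0);
    for (int i = 0; i < MAX_LIGHTS; ++i) {
        if (i >= uLightCount) break;      // only the culled-in lights are filled
        vec3 toLight = uLightPosition[i] - worldPos;
        float dist = length(toLight);
        float attenuation = max(1.0 - dist / uLightRadius[i], 0.0);
        float diffuse = max(dot(normal, toLight / dist), 0.0);
        result += uLightColor[i] * (diffuse * attenuation);
    }
    return result;
}

Each material shader then calls sampleLights() wherever it needs the incoming light, the CPU concatenates this header with the material's shader source before compiling, and the uniform arrays are refilled with whatever lights survived culling.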

 

For depth-based shadow mapping, I made a 2D allocation algorithm that is 100% free from fragmentation because the only way to free an allocation is to free every light source. Start with the whole square in bucket 0 and let the remaining buckets be empty. Each bucket can have at most 4 unallocated slots, and that is only right before one of them is used or divided again. When there is no unused square of the right resolution, divide a larger square until you have something. The only case where you can't get more memory is when the total number of pixels is not enough, because no matter how many of the smaller sizes use 3/4 of a square, their sum is always smaller than one larger square (at most 3/4 + 3/16 + 3/64 + ..., which never reaches the area of the full square).

https://code.google.com/p/david-piuvas-graphics-engine/source/browse/trunk/Engine/QuadAllocator.cpp

Edited by Dawoodoz


In DirectX, I pasted some generic light sampling functions at the top of each shader before compiling them.

 

Pasted as in you paste it into every shader file, or do you have some sort of function that keeps the lighting code in strings and combines it with every shader file to be compiled and linked during the first load?

 
I haven't gotten into shadow mapping yet, but thank you for the extra information; I'm pretty sure I'll get there after this.


 


Pasted as in you paste it into every shader file, or do you have some sort of function that keeps the lighting code in strings and combines it with every shader file to be compiled and linked during the first load?

 

 

 

Pasted as text into every shader that is used for materials. I made it fast by saving pre-compiled shaders with checksums that tell whether the shader has changed since the last compilation. The hard thing was maintaining backward compatibility without bloating the inserted code, so try to have a powerful interface from the start that covers ambient, diffuse and specular light for solid, thin and fog materials. Fog will sample the light without caring much about the direction of the light. Thin materials like fabric will sample the light on both sides of the surface. I did not implement anisotropic lighting, but think about how you could implement it later on top of the old interface, just in case.
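To make that concrete, here is a rough GLSL sketch of what such an interface could look like; the struct, the function names and the single hard-coded directional light are made up for illustration and are not taken from the engine linked above.

// Hypothetical per-light result holding the three terms mentioned above.
struct LightSample {
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
};

const vec3 LIGHT_DIR   = vec3(0.282, 0.941, 0.188);  // pre-normalized direction
const vec3 LIGHT_COLOR = vec3(1.0);
const vec3 AMBIENT     = vec3(0.05);

// Solid surfaces: one-sided Lambert diffuse plus Blinn-Phong specular.
LightSample sampleSolid(vec3 normal, vec3 viewDir)
{
    LightSample s;
    s.ambient  = AMBIENT;
    s.diffuse  = LIGHT_COLOR * max(dot(normal, LIGHT_DIR), 0.0);
    vec3 halfVec = normalize(LIGHT_DIR + viewDir);
    s.specular = LIGHT_COLOR * pow(max(dot(normal, halfVec), 0.0), 32.0);
    return s;
}

// Thin materials like fabric: gather light on both sides of the surface.
LightSample sampleThin(vec3 normal, vec3 viewDir)
{
    LightSample front = sampleSolid(normal, viewDir);
    LightSample back  = sampleSolid(-normal, viewDir);
    LightSample s;
    s.ambient  = front.ambient  + back.ambient;
    s.diffuse  = front.diffuse  + back.diffuse;
    s.specular = front.specular + back.specular;
    return s;
}

// Fog: sample the light without caring much about its direction.
LightSample sampleFog()
{
    LightSample s;
    s.ambient  = AMBIENT;
    s.diffuse  = LIGHT_COLOR;
    s.specular = vec3(0.0);
    return s;
}

Keeping every material behind the same three-term interface is what lets new light types, or anisotropic lighting later on, be added without rewriting every material shader.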

 

For sampling the depth maps, I sampled with both bilinear interpolation and the percentage-closer method, and took the maximum intensity of the two to remove the acne from the percentage-closer method. The depth bias was calculated dynamically, based on the depth map resolution and the dimensions of the light source. For a spot light, the bias must be multiplied by the depth (not the distance) from the light source.
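I can't vouch that this matches the exact implementation, but a minimal GLSL sketch of that combination might look like the following; the uniform names and the bias formula are placeholders, and the depth is assumed to be stored in the red channel of a linearly filtered texture.

// Hypothetical shadow test: compare against a bilinearly interpolated depth
// and against a 2x2 percentage-closer average, then keep the brighter result.
uniform sampler2D uShadowMap;   // depth stored in the red channel
uniform float uShadowMapSize;   // resolution in texels, e.g. 1024.0
uniform float uLightSize;       // dimensions of the light source

// Returns 1.0 for fully lit and 0.0 for fully shadowed.
float shadowIntensity(vec2 shadowUV, float depthFromLight)
{
    // Bias grows with lower resolution and larger lights; for a spot light
    // it would additionally be multiplied by the depth from the light.
    float bias = uLightSize / uShadowMapSize;

    // 1) One comparison against the bilinearly filtered depth.
    float filteredDepth = texture2D(uShadowMap, shadowUV).r;
    float bilinearLit = (depthFromLight - bias) <= filteredDepth ? 1.0 : 0.0;

    // 2) 2x2 percentage-closer filtering: compare first, then average.
    float texel = 1.0 / uShadowMapSize;
    float pcfLit = 0.0;
    for (int x = 0; x < 2; ++x) {
        for (int y = 0; y < 2; ++y) {
            vec2 offset = (vec2(float(x), float(y)) - 0.5) * texel;
            float d = texture2D(uShadowMap, shadowUV + offset).r;
            pcfLit += (depthFromLight - bias) <= d ? 1.0 : 0.0;
        }
    }
    pcfLit *= 0.25;

    // Taking the maximum of the two results hides the acne from the
    // percentage-closer term.
    return max(bilinearLit, pcfLit);
}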

Edited by Dawoodoz
