
OpenGL: How to split (or optimize) these shaders into multiple shaders?


Hi,

I'm working on a new asset importer (https://github.com/recp/assetkit) based on the COLLADA specs; the question is not about COLLADA directly.
I'm also working on a new renderer (https://github.com/recp/libgk) to render the imported document.
I'll spend more time on this renderer in the future, of course; for now, rendering the imported (implemented) parts is enough for me.

assetkit imports COLLADA documents (it will support glTF too);
importing scenes, geometries, effects/materials, and 2D textures, and rendering them, seems to be working.

My actual confusion is about shaders. COLLADA has a COMMON profile and a GLSL profile (among others).
The GLSL profile provides the shaders for its effects, so I don't need to worry about those; I just compile, link, and group them before rendering.

The problem occurs in the COMMON profile, because there I need to write the shaders myself.
I have written them for basic materials, plus another version for 2D textures.

I would like to create multiple programs, but I'm not sure how to split this shader into smaller ones.

Basic material version (colors only):
https://github.com/recp/libgk/blob/master/src/default/shader/gk_default.frag

Texture version:
https://gist.github.com/recp/b0368c74c35d9d6912f524624bfbf5a3

I used subroutines to bind materials, and I actually liked that approach.
In the scene graph every node can have a different program, and the renderer switches between them when parentNode->program != node->program.
(I'll do scene graph optimizations, e.g. view frustum culling, grouping by shader... later.)
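For reference, this is roughly the subroutine pattern I mean for one ColorOrTexture slot (a simplified sketch, not the exact code from the linked shaders; the uniform and function names are placeholders):

#version 410 core

// each material slot resolves either to a constant color or to a texture fetch;
// the C side picks one implementation per material with glUniformSubroutinesuiv
subroutine vec4 ColorOrTextureFn(vec2 uv);

uniform vec4      uDiffuseColor;
uniform sampler2D uDiffuseTex;

subroutine(ColorOrTextureFn)
vec4 diffuseAsColor(vec2 uv) {
  return uDiffuseColor;
}

subroutine(ColorOrTextureFn)
vec4 diffuseAsTexture(vec2 uv) {
  return texture(uDiffuseTex, uv);
}

subroutine uniform ColorOrTextureFn uDiffuse;

in  vec2 vTexCoord;
out vec4 fragColor;

void main() {
  fragColor = uDiffuse(vTexCoord);
}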

I'm going to implement transparency, but I'm considering creating separate shaders for it,
because otherwise the default shader is going to become a branching hell.

I can't generate a shader for every node, because I don't know how many nodes there may be; there is no limit.
And I don't know how to write a good uber-shader for all the different cases.

Here is the material struct:

struct Material {
  ColorOrTexture  emission;
  ColorOrTexture  ambient;
  ColorOrTexture  specular;
  ColorOrTexture  reflective;
  ColorOrTexture  transparent;
  ColorOrTexture  diffuse;
  float           shininess;
  float           reflectivity;
  float           transparency;
  float           indexOfRefraction;
};

A ColorOrTexture can be either a color or a 2D texture. If there were only a single ColorOrTexture member, I could split this into two programs.
I'm also going to implement transparency, and I'm not sure how many programs I will end up needing.

I'm considering maintaining a few default shaders for the COMMON profile:
1) no texture, 2) one of the ColorOrTexture members contains a texture, 3) ...

Any advice, in general or about how to optimize/split (if I need to) the shaders I linked above?
What do you think of the shaders I wrote? I would like to write them without branching if possible.
I hope I don't need to write 50+ or 100+ shaders, and 100+ default programs.
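For the "few default shaders" idea, one option I'm considering is keeping a single fragment source and compiling the variants from it with preprocessor defines at launch, instead of writing each one by hand (just a sketch; the define and uniform names are made up):

#version 410 core
// the C side inserts lines like "#define DIFFUSE_TEX" right after #version
// before compiling, so one source yields the no-texture and textured variants

uniform vec4 uDiffuseColor;
#ifdef DIFFUSE_TEX
uniform sampler2D uDiffuseTex;
#endif

in  vec2 vTexCoord;
out vec4 fragColor;

vec4 diffuse() {
#ifdef DIFFUSE_TEX
  return texture(uDiffuseTex, vTexCoord);
#else
  return uDiffuseColor;
#endif
}

void main() {
  fragColor = diffuse();
}

That way the number of compiled programs stays at the handful of combinations that actually occur in a document, and there is no branching at runtime.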

PS: These default shaders should render any document; they are not specific, they are general purpose.
       I compile and link the default shaders when the app launches.

Thanks


To split the bigger shader into smaller pieces, I'm trying to create small shaders for specific purposes, e.g. phong, phong with texture, blinn, lambert...
But I'm not sure how to separate the lights from the phong shader: I don't want to add a lights[MAX_LIGHTS] array to every small shader, yet phong needs the light vector for its equation. :/ It would be easier to render all the lights together in a separate pass (before or after the material pass); that would save me from adding the light array to every shader.
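Just to make the problem concrete, this is the kind of array + loop I would otherwise have to repeat in every small material shader (a rough diffuse-only sketch, placeholder names):

#version 410 core

#define MAX_LIGHTS 8   // arbitrary cap, which is exactly what I want to avoid

struct Light {
  vec3 position;       // view-space position
  vec3 color;
};

uniform Light lights[MAX_LIGHTS];
uniform int   lightCount;
uniform vec4  uDiffuse;

in  vec3 vPosition;    // view-space position from the vertex shader
in  vec3 vNormal;
out vec4 fragColor;

void main() {
  vec3 N     = normalize(vNormal);
  vec3 color = vec3(0.0);

  // every material shader ends up owning this loop
  for (int i = 0; i < lightCount; i++) {
    vec3  L    = normalize(lights[i].position - vPosition);
    float diff = max(dot(N, L), 0.0);
    color     += lights[i].color * uDiffuse.rgb * diff;
  }

  fragColor = vec4(color, 1.0);
}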

I would like to do something like this:

for each object
  prepare object to render (transform, culling...)

for each object
  render with the object's material (e.g. apply phong, texture)
  render with all lights

Any better ideas or suggestions?


Instead of separating out all the lights, using one light per render pass will simplify the render logic,
because a light is required inside some effects, e.g. phong, blinn... So I'm going to implement this:

for each object
  prepare object to render (transform, culling...)

for each object
  for each light
     render object

The light count will affect performance, but I think this will make things clearer and easier to maintain.
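A rough sketch of what the per-light pass could look like on the shader side (placeholder names; I'm assuming the passes after the first are accumulated with additive blending, e.g. glBlendFunc(GL_ONE, GL_ONE), on the C side):

#version 410 core

// exactly one light per pass, so no light array and no MAX_LIGHTS cap
struct Light {
  vec3 position;   // view-space position
  vec3 color;
};

uniform Light uLight;
uniform vec4  uDiffuse;      // resolved ColorOrTexture for the diffuse slot
uniform vec4  uSpecular;
uniform float uShininess;

in  vec3 vPosition;          // view-space position from the vertex shader
in  vec3 vNormal;
out vec4 fragColor;

void main() {
  vec3 N = normalize(vNormal);
  vec3 L = normalize(uLight.position - vPosition);
  vec3 V = normalize(-vPosition);
  vec3 R = reflect(-L, N);

  float diff = max(dot(N, L), 0.0);
  float spec = (diff > 0.0) ? pow(max(dot(R, V), 0.0), uShininess) : 0.0;

  // emission/ambient terms would only be added in the first pass
  vec3 shaded = uLight.color * (uDiffuse.rgb * diff + uSpecular.rgb * spec);
  fragColor   = vec4(shaded, 1.0);
}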

I would still like to hear about other ways to handle lights (with the count unknown up front).
 



