vinnyvicious

OpenGL Best way to abstract shaders in a small engine?


I'm currently designing a small engine to learn more about OpenGL, and I'm in the process of finding the best way to abstract shaders. Right now I have a Shader class that lets me bind uniforms and abstracts all the linking. Next, I have the following classes:

  • Material
    • Contains N instances of Texture2D
    • One instance of Shader
  • World
    • One instance of Shader

My Graphics::render method finds the World object and applies its shader. Then it finds all objects that have a Material instance and renders each with its material's shader. This is called on every update. The world shader handles things like lighting, AA, etc., while each material shader has its own configuration.
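In code, the render step currently looks roughly like this (simplified; the method names are just my placeholders):

void Graphics::render(World& world) {
    world.getShader().bind();               // global pass: lighting, AA, ...
    for (Object* obj : world.getObjects()) {
        Material& mat = obj->getMaterial();
        mat.getShader().bind();             // per-material shader
        mat.bindTextures();                 // diffuse, normal, ...
        obj->draw();
    }
}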

 

The workflow is something like this (pseudo-code):

material = new Material
material.setDiffuse("tex/blah_d.png")
material.setNormal("tex/blah_n.png")
material.setShader(new Shader("shaders/bumped.frag"))

obj = new Cube
obj.setMaterial(material)

world = new World
world.add(obj)
world.setShader(new Shader("shaders/forwardRendering.vert"))

while (true) {
    graphics.render(world)
}

Is this a good design? How would you guys do this?

Edited by vinnyvicious


Hey vinnyvicious,

 

A material, as it has become standard in game engines over the past 10 years, is less a collection of textures and more a configuration class used to specify the properties of your shader. So you need to rethink your design.

 

Normally a shader serves one single purpose (e.g. diffuse, specular, or decal rendering) and modifies the rendered vertices to that end. A diffuse material takes light into account, whereas a self-illuminating material, for example, does not, and so on. Materials also consist of other properties like color, transparency, or boolean flags that may be used, for example, to turn shininess on/off in a material that may get wet in the game.
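Seen that way, a material is mostly data; a minimal sketch (the field names are invented):

struct DiffuseMaterial {
    Shader*  shader;        // single-purpose, e.g. diffuse lighting
    Texture* baseMap;       // textures referenced by the shader
    Texture* normalMap;
    Color    color;
    float    transparency;
    bool     shiny;         // e.g. toggled off when the object gets wet
};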

 

In your asset pipeline you should take the shader into account first: load it and store it somewhere materials have access to it, then load the material classes for every shader. Load textures as well as shaders into some globally managed asset storage, because your material configuration class will hold references to both.

 

Pseudo-code:

if(!AssetManager.HasLoaded("./models/teapot.mat"))
{
  DiffuseMaterial mtl;

  // load the shader on demand, then reference it from the material
  if(!AssetManager.HasLoaded("./diffuse.frag"))
    AssetManager.Load("./diffuse.frag");
  mtl.AddShader(AssetManager.GetShader("./diffuse.frag"));

  // same for each texture the material uses
  if(!AssetManager.HasLoaded("./models/teapot.png"))
    AssetManager.Load("./models/teapot.png");
  mtl.AddBase(AssetManager.GetTexture("./models/teapot.png"));

  if(!AssetManager.HasLoaded("./models/teapot.normal.png"))
    AssetManager.Load("./models/teapot.normal.png");
  mtl.AddNormal(AssetManager.GetTexture("./models/teapot.normal.png"));

  AssetManager.AddMaterial(mtl);
}

Mesh teapot;
teapot.AddMaterial(AssetManager.GetMaterial("./models/teapot.mat"));

In this approach you first ask your asset manager for a specific material instance and create it if it isn't created yet. You ask the manager to load that material's shader and each texture the material includes. Settings like color (which aren't set in the example) are made in the material too. Then you load your model/world and add the material queried from the asset manager to it. That's how any modern game engine manages its materials, in a basic way.
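If you prefer, the HasLoaded/Load/Get dance can be folded into one call; a sketch of such a helper (LoadOrGetTexture and LoadTextureFromDisk are made-up names):

Texture* AssetManager::LoadOrGetTexture(const std::string& path)
{
    auto it = m_Textures.find(path);    // cache lookup, keyed by path
    if (it == m_Textures.end())         // miss: load from disk and cache it
        it = m_Textures.emplace(path, LoadTextureFromDisk(path)).first;
    return it->second;
}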

 

Taking the renderer into account: sort your scene graph first by shader instance and second by the texture(s) bound to the material instances, or vice versa, your choice here. But never ever render every model on its own in chaotic order; that causes heavy shader/texture swaps between models in every frame.
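Grouping draws that way can be as simple as a two-level sort (a sketch; Item is a hypothetical draw record):

struct Item { Shader* shader; Texture* texture; Mesh* mesh; };

std::sort(items.begin(), items.end(),
          [](const Item& a, const Item& b) {
              if (a.shader != b.shader)     // primary key: shader program
                  return a.shader < b.shader;
              return a.texture < b.texture; // secondary key: texture
          });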

 

I don't know what you mean by a shader for the world; normally anything in the world (terrain, skybox) is treated as if it were a model with its own material instance too. If you mean post-processing, that is more a global than a local shader thing, applied to the render target of a framebuffer.

Edited by Shaarigan


What about having a Material that's made of effects (for multi-pass rendering; think of water, which needs reflection & refraction, for example)?

 

Material
    array<Effect*> m_Effects;

Effect
    EPass m_Pass;                          // enum RenderingPass { Pre_Process, Reflection, Refraction, Depth_Fill, Opaque, Translucent, Post_Processing, Final }
    Pipeline m_Pipeline;                   // the whole pipeline setup, similar to an OpenGL Program Object (which was very well thought out)
    array<TextureBinding> m_TextureBindings;

    void fillCBData( const Mesh& mesh );   // fill the CB
    void fillGeomData( const Mesh& mesh ); // fill IB & VB (only if you want more flexibility, like having fallbacks on shaders using only a subset of disk data; I don't do this anymore)
    void setData( const ConstantBuffer& cb, const IndexBuffer& ib, const VertexBuffer& vb, const Camera& camera ); // set CB, IB, VB, TB, textures (using the previous array)...
    void render();                         // either standalone, or integrated into setData, whose name you should then change to reflect it

TextureBinding
    uint32_t m_ID;                         // could be an enum TextureBindings { Albedo, Normal, Specular, Gloss, ... }
    uint32_t m_Slot;                       // where to bind it on the GPU

 

 

Rendering Process:

Go through your spatial graph and gather the visible meshes; for each of them, access its Material and then its Effects, and store the Mesh & Effect in an array for the corresponding pass.

(Something conceptually like: array<array<Thingy>, RenderingPass::COUNT> with Thingy { Effect* m_Effect; Mesh* m_Mesh; })

Generate a key and sort by that key (the sorting differs depending on the RenderingPass you are in; for example you'll sort by material in the Opaque pass, roughly front to back in the Depth_Fill pass, and back to front in the Translucent pass...), then render.
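Fleshed out a little, using the Effect/Mesh classes from above (a sketch; comparatorFor is a made-up helper returning the pass-specific sort predicate):

std::array<std::vector<Thingy>, RenderingPass::COUNT> buckets;

// Gather: one entry per (mesh, effect) pair, bucketed by pass.
for (Mesh* mesh : visibleMeshes)
    for (Effect* fx : mesh->GetMaterial()->m_Effects)
        buckets[size_t(fx->m_Pass)].push_back({ fx, mesh });

// Sort each bucket with its own key, then draw.
for (size_t pass = 0; pass < buckets.size(); ++pass) {
    std::sort(buckets[pass].begin(), buckets[pass].end(), comparatorFor(pass));
    for (Thingy& t : buckets[pass])
        t.m_Effect->render();   // setData(...) would be called here too
}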

 

You'll soon discover that you can change a few things to make it better/faster/more to your liking; this is just a broad presentation of the idea.

I'd strongly advise using the Pipeline (= Program Object) abstraction, as it's how the hardware really works and it's the basis of the low-level APIs (Mantle/Vulkan, D3D12).
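Such a Pipeline bundles everything the GPU must be configured with for a draw; a rough sketch (the fields are indicative, not exhaustive):

struct Pipeline {
    ShaderStage  vertex;    // compiled vertex stage
    ShaderStage  fragment;  // compiled fragment stage
    BlendState   blend;     // e.g. opaque vs. alpha-blended
    DepthState   depth;     // depth test/write configuration
    RasterState  raster;    // cull mode, fill mode, ...
    VertexLayout layout;    // how the vertex buffer is interpreted
};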

Edited by Ingenu


Hello Shaarigan, and thanks for replying! I have a few questions about your architecture:

  • Why do you call HasLoaded? Shouldn't all the textures be described in the .mat file (which I'm assuming is some kind of JSON or XML file) and loaded as soon as it's parsed? Or should texture loading be an async operation?
  • What do you mean by a shader having one single purpose? I've been following things like learnopengl.com and a few books, and they always have shaders that take care of whole surfaces, like BumpedDiffuseSpecular, etc.
  • By World shaders, I mean anything in the scene: lighting, shadows, AA, etc.

Ingenu, do you recommend any book or tutorial which outlines the concept of passes? I've never heard of this before and it sounds interesting. :)

I don't think so; maybe you can find something on the net. However, we used to say "multi-pass" to mean rendering the same mesh more than once: we were limited in the number of shader instructions, so we couldn't light a mesh properly by rendering it once. That limitation vanished a while ago, but you'll likely still find a lot of references to it.
 
The way I present a pass is different: think of it more as a list of logically ordered rendering steps/groups. (I'll use "step" instead, to differentiate from the old definition.)
If you have shadows, do GPU animation, or generate a procedural sky on the GPU using Perlin noise, you'll need to run that before the data is needed. Since those operations are expensive and their results are reused by several "items" in the world, you want to compute them first so you know they are ready past that point, hence a Pre_Process step.
After that, there are logical groups you'll want to render at different times. For correctness you need to render opaque before translucent, so you need two different steps. You also want to be optimal when rendering, so you want to render your opaques either front to back (if you don't do a depth-fill step) or in effect order (to use instanced rendering, if you do have a depth-fill step), while the translucent step needs all its meshes rendered back to front to be correct.
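Those per-step sort rules can live in one small policy function (a sketch; SortMode is a made-up enum, following the RenderingPass enum from earlier):

SortMode sortModeFor(RenderingPass step)
{
    switch (step) {
    case RenderingPass::Depth_Fill:  return SortMode::FrontToBack; // minimize overdraw
    case RenderingPass::Opaque:      return SortMode::ByEffect;    // batch/instance friendly
    case RenderingPass::Translucent: return SortMode::BackToFront; // required for correctness
    default:                         return SortMode::ByEffect;
    }
}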
 
I urge you to limit the number of GPU programs to the minimum; to that effect you should have a data-driven GPU program. Have a look at Disney's BRDF to see a single program that can do a lot with only a few parameters.
It's easier to get one program right, it means you can batch/instance a lot more, and you can even do better with indirect rendering (since indirect APIs usually don't allow you to change the GPU program).
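Data-driven here means the per-material variation lives in a parameter block instead of in shader permutations; in the spirit of the Disney BRDF, something like this (a subset of the Disney parameters, for illustration):

// One uber-program, many materials: only this block changes per material.
struct SurfaceParams {
    float baseColor[3];
    float metallic;    // 0 = dielectric, 1 = metal
    float roughness;
    float specular;
    float clearcoat;
};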
 
The Effect I talked about before is also the glue between your C++ code and the GPU program language: the part that sets data in the right place (slots, in D3D11/OGL parlance) before the draw call.
 
---
There are so many things intertwined when making an engine that it's difficult to give a meso-level description; it's either macro or micro ^^
Anyway, when it comes to meshes, I shared geometry (VB/IB) data between instances, but each instance could have unique textures, so I had a mesh description containing enum/key + path entries such as "albedo /gfx/monster/joe.tex". When that mesh is created, the associated textures are loaded (well, unless the TextureManager already has them in memory, in which case they are only reference-counted).
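Parsing one of those entries could look like this (a sketch; slotFromKey and TextureManager::Acquire are made-up names, with Acquire returning a cached, reference-counted texture that only hits the disk on a miss):

// One line of a mesh description: "albedo /gfx/monster/joe.tex"
struct TextureRef {
    TextureSlot              slot; // parsed from the key (albedo, normal, ...)
    std::shared_ptr<Texture> tex;  // shared_ptr -> reference counting for free
};

TextureRef loadRef(const std::string& key, const std::string& path)
{
    return { slotFromKey(key), gTextureManager.Acquire(path) };
}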
 
 
---
To go back to the shadow effect: it's linked to a light, so you don't necessarily need a Mesh object but rather a Drawable/Renderable for the effect.
 
---
I also subdivided my Effects into DrawEffect, ComputeEffect, and a third I can't remember atm ^^
Edited by Ingenu


To some extent, I wouldn't have a "CommandQueue" described in XML, because maintaining it would turn bad rather quickly (adding features, never removing deprecated stuff)...

I would rather have the Effect plug-in system I described above do that in one of the callable functions instead, because it's way more versatile, much easier to read, understand, and modify, and, if it's a plugin/DLL, pretty much as flexible.

You'll need some glue between your engine code and the GPU setup/program.

 

I would, however, extend the program to contain meta-information regarding texture bindings and sampler descriptions, as they do.
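That meta-information can be a small table shipped alongside the program (a sketch; the field names are indicative):

struct TextureSlotDesc {
    const char* name;    // the name as spelled in the shader source
    uint32_t    slot;    // bind point on the GPU
    SamplerDesc sampler; // filtering/addressing the program expects
};

struct ProgramMeta {
    array<TextureSlotDesc> textures; // one entry per texture the program samples
};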

Edited by Ingenu


Yes, that's closer.

 

Basically, as you write your GPU program code, you decide where you put your data and what data you need. If you write, say, "cbuffer Object : register(b0) { float4x4 WorldMatrix; };" for your fragment subprogram, you have just explicitly decided to put that constant buffer in constant buffer slot 0. You must therefore write the corresponding glue code engine-side, which would be something like "gfx.PSSetConstantBuffers( 0, 1, pCB );" in your setParameters(...) procedure, and you must also write the code that fills the data in that CB (mapping it, casting it, and writing the data, such as "pCBMatrix = instance->GetWorldMatrix();").
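Put together, the glue for that one constant buffer looks roughly like this (a D3D11-style sketch; m_pCB, Instance, and GetWorldMatrix are illustrative names):

// Engine-side mirror of: cbuffer Object : register(b0) { float4x4 WorldMatrix; };
struct ObjectCB { DirectX::XMFLOAT4X4 WorldMatrix; };

void Effect::setParameters(ID3D11DeviceContext* ctx, const Instance& instance)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    ctx->Map(m_pCB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);          // map the CB
    static_cast<ObjectCB*>(mapped.pData)->WorldMatrix = instance.GetWorldMatrix();
    ctx->Unmap(m_pCB, 0);
    ctx->PSSetConstantBuffers(0, 1, &m_pCB);                          // slot 0 = register(b0)
}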
