OpenGL Best way to abstract shaders in a small engine?

This topic is 710 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

I'm currently designing a small engine to learn more about OpenGL, and I'm in the process of finding the best way to abstract shaders. Right now, I have a Shader class that lets me bind uniforms and abstracts all the linking. Next, I have the following classes:

  • Material
    • Contains N instances of Texture2D
    • One instance of Shader
  • World
    • One instance of Shader

My Graphics::render method finds the World object and applies its shader. Then it finds every object with a Material instance and renders it with that material's shader. This runs on each update. The world shader contains things like lighting, AA, etc., while each Material shader has its own configuration.

 

The workflow is something like this (pseudo-code):

material = new Material
material.setDiffuse("tex/blah_d.png")
material.setNormal("tex/blah_n.png")
material.setShader(new Shader("shaders/bumped.frag"))

obj = new Cube
obj.setMaterial(material)

world = new World
world.add(obj)
world.setShader(new Shader("shaders/forwardRendering.vert"))

while (true) {
    graphics.render(world)
}

Is this a good design? How would you guys do this?

Edited by vinnyvicious


Hey vinnyvicious

 

the material that has become standard in game engines over the past 10 years is less a collection of textures and more a configuration class used to specify the properties of your shader. So you need to rethink your design.

 

Normally a shader serves one single purpose (e.g. diffuse, specular, or decal rendering) and modifies the rendered vertices accordingly. A diffuse material takes light into account, whereas a self-illuminating material, for example, does not, and so on. Materials also contain other properties like color, transparency, or boolean flags that might, for example, turn shininess on or off for a material that can get wet in the game.
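As a sketch of that idea, a material as a "configuration class" might look like the following (hypothetical names, not from any particular engine):

```cpp
#include <cassert>
#include <string>

// Simple RGBA color property.
struct Color { float r, g, b, a; };

// Hypothetical material configuration: references one single-purpose
// shader plus the per-surface parameters that shader exposes.
struct DiffuseMaterial {
    std::string shaderName = "diffuse";   // which single-purpose shader to use
    Color tint = {1.f, 1.f, 1.f, 1.f};    // plain color property
    float transparency = 0.f;             // 0 = fully opaque
    bool  shiny = false;                  // flag toggled e.g. when the material gets wet

    bool isOpaque() const { return transparency == 0.f; }
};
```

The point is that the textures and the shader are referenced by the material, while the material itself is just data describing how that shader should behave.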

 

In your asset pipeline you should handle the shader first: load it, store it somewhere materials have access to it, and then load the material classes for each shader. Load textures as well as shaders into some globally managed asset storage, because your material configuration class will reference both.

 

pseudo-code

if(!AssetManager.HasLoaded("./models/teapot.mat"))
{
  DiffuseMaterial mtl;
  if(!AssetManager.HasLoaded("./diffuse.frag"))
    AssetManager.Load("./diffuse.frag");

  mtl.AddShader(AssetManager.GetShader("./diffuse.frag"));

  if(!AssetManager.HasLoaded("./models/teapot.png"))
    AssetManager.Load("./models/teapot.png");

  mtl.AddBase(AssetManager.GetTexture("./models/teapot.png"));

  if(!AssetManager.HasLoaded("./models/teapot.normal.png"))
    AssetManager.Load("./models/teapot.normal.png");

  mtl.AddNormal(AssetManager.GetTexture("./models/teapot.normal.png"));
  AssetManager.AddMaterial(mtl);
}

Mesh teapot;
teapot.AddMaterial(AssetManager.GetMaterial("./models/teapot.mat"));

In this approach you first ask your asset manager for a specific material instance and create it if it isn't loaded yet. You ask the manager to load that material's shader and each texture the material includes. Settings like color (not set in the example) are made in the material too. Then you load your model/world and add the material queried from the asset manager to it. That is, in a basic way, how any modern game engine manages its materials.

 

Taking the renderer into account: sort your scene graph first by shader instance and secondarily by the texture(s) bound to the material instances (or vice versa, your choice), but never render every model on its own in chaotic order; that causes heavy shader/texture swaps between models in every frame.
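A minimal sketch of that sort, assuming each draw item carries the IDs of its shader and texture (illustrative types, not from the posts above):

```cpp
#include <algorithm>
#include <vector>

// One renderable item, identified by the GPU resources it binds.
struct DrawItem {
    unsigned shaderId;
    unsigned textureId;
};

// Primary sort by shader, secondary by texture, so state changes
// only happen at group boundaries instead of once per model.
void sortForSubmission(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  if (a.shaderId != b.shaderId) return a.shaderId < b.shaderId;
                  return a.textureId < b.textureId;
              });
}
```

After sorting, the renderer binds a shader once per group, binds each texture once per sub-group, and submits all draws in between.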

 

I don't know what you mean by a shader for the world; normally anything in the world (terrain, skybox) is treated as a model with its own material instance too. If you mean post-processing, that is more a global than a local shader, applied to the render target of a framebuffer.

Edited by Shaarigan


What about having a Material that is made of effects (for multi-pass rendering; think of water, which needs reflection and refraction, for example)?

 

Material
    array<Effect*> m_Effects;

Effect
    EPass m_Pass;                            // enum RenderingPass { Pre_Process, Reflection, Refraction, Depth_Fill, Opaque, Translucent, Post_Processing, Final }
    Pipeline m_Pipeline;                     // the whole pipeline setup, similar to an OpenGL Program Object (which was very well thought out)
    array<TextureBinding> m_TextureBindings;

    void fillCBData(const Mesh& mesh);       // fill CB
    void fillGeomData(const Mesh& mesh);     // fill IB & VB (only if you want more flexibility, like having a fallback on shaders using only a subset of the disk data; I don't do this anymore)
    void setData(const ConstantBuffer& cb, const IndexBuffer& ib, const VertexBuffer& vb, const Camera& camera); // set CB, IB, VB, TB, textures (using the previous array)...
    void render();                           // either standalone, or integrated in setData, whose name you should then change to reflect it

TextureBinding
    uint32_t m_ID;   // could be an enum TextureBindings { Albedo, Normal, Specular, Gloss, ... }
    uint32_t m_Slot; // where to bind it on the GPU

Rendering process:

Go through your spatial graph and gather the visible meshes; for each of them access its Material and then its Effects, and store the Mesh and Effect in an array for the pass.

(Something conceptually like: array<array<Thingy>, RenderingPass::COUNT> with Thingy { Effect* m_Effect; Mesh* m_Mesh; })

Generate a key and sort by it (sorting differs by RenderingPass: for example you'll sort by material in the Opaque pass, roughly front to back in the Depth_Fill pass, and back to front in the Translucent pass...), then render.
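One common way to do per-pass key generation is to pack the fields into a single integer whose bit layout changes with the pass, so an ordinary numeric sort yields the right order. A sketch of that idea (the layout and field widths here are hypothetical):

```cpp
#include <cstdint>

// Hypothetical 64-bit sort key whose layout depends on the pass.
// Opaque: material id in the high bits, so draws batch by material.
// Translucent: inverted depth in the high bits, so farther objects
// sort first (back-to-front order for correct blending).
uint64_t makeSortKey(bool translucent, uint32_t materialId, uint32_t depth) {
    if (translucent)
        return (uint64_t(~depth) << 32) | materialId; // far objects first
    return (uint64_t(materialId) << 32) | depth;      // group by material
}
```

Sorting one flat array of keys is typically much cheaper than comparing structs, and the same draw list code works for every pass; only the key layout differs.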

 

You'll soon discover that you can change a few things to make it better, faster, or more to your liking; this is just a broad presentation of the idea.

I'd strongly advise using the Pipeline (= Program Object) abstraction, as it's how the hardware really works and the basis of the low-level APIs (Mantle/Vulkan, D3D12).

Edited by Ingenu


Hello Shaarigan, and thanks for replying! I have a few questions about your architecture:

  • Why do you call HasLoaded? Shouldn't all the textures be described in the .mat file (which I'm assuming is some kind of JSON or XML file) as soon as it's parsed? Or should texture loading be an async operation?
  • What do you mean by a shader having a single purpose? I've been following things like learnopengl.com and a few books, and they always have shaders that take care of surfaces, like BumpedDiffuseSpecular, etc.
  • By world shaders, I mean anything in the scene: lighting, shadows, AA, etc.

Ingenu, do you recommend any book or tutorial which outlines the concept of passes? I've never heard of this before and it sounds interesting. :)

I don't think so; maybe you can find something on the net. However, we used to say "multi-pass" to mean rendering the same mesh more than once: we were limited in the number of shader instructions, so we couldn't light a mesh properly rendering it only once. That limitation vanished a while ago, but you'll likely still find a lot of references to it.
 
The way I present a pass is different; think of it more as a list of logically ordered rendering steps/groups. (I'll use "step" instead, to differentiate from the old definition.)
If you have shadows, do GPU animation, or generate a procedural sky on the GPU using Perlin noise, you'll need to run that before you need the data. Since those operations are expensive and reused by several items in the world, you want to compute them first so you know they are ready past that point, hence a Pre_Process step.
After that, there are logical groups you'll want to render at different times. For correctness you need to render opaque before translucent, so you need two different steps. You also want to be optimal when rendering, so you render your opaque items either front to back (if you don't do a depth-fill step) or in effect order (to use instanced rendering, if you do have a depth-fill step), while the translucent step needs all its meshes rendered back to front to be correct.
 
I urge you to limit the number of GPU programs to a minimum; to that end you should have a data-driven GPU program. Have a look at Disney's BRDF to see a single program that can do a lot with only a few parameters.
It's easier to get one program right, it means you can batch/instance a lot more, and you can even do better with indirect rendering (since indirect APIs usually don't allow you to change the GPU program).
 
The Effect I talked about before is also the glue between your C++ code and the GPU program language: the part that sets data in the right place (slots, in D3D11/OpenGL parlance) before the draw call.
 
---
There are so many things intertwined when making an engine that it's difficult to give a meso-level description; it's either macro or micro ^^
Anyway, when it comes to meshes, I shared geometry (VB/IB) data between instances, but each instance could have unique textures. So I had a mesh description containing an enum/key plus a path, such as "albedo /gfx/monster/joe.tex", and when that mesh is created the associated textures are loaded (unless the TextureManager already has them in memory, in which case they are only reference counted).
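A reference-counted texture cache along those lines could be sketched like this (hypothetical interface; std::shared_ptr supplies the reference counting):

```cpp
#include <map>
#include <memory>
#include <string>

// Stand-in for a GPU texture; a real manager would upload pixel data here.
struct Texture { std::string path; };

// Caches textures by path. A weak_ptr in the cache means the manager
// doesn't keep dead textures alive; a second request for the same path
// while it is still in memory reuses the first load.
class TextureManager {
    std::map<std::string, std::weak_ptr<Texture>> cache;
public:
    std::shared_ptr<Texture> acquire(const std::string& path) {
        if (auto existing = cache[path].lock())
            return existing;                        // already in memory
        auto tex = std::make_shared<Texture>(Texture{path});
        cache[path] = tex;
        return tex;
    }
};
```

When the last mesh holding a texture is destroyed, the shared_ptr count hits zero and the texture is freed automatically.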
 
 
---
To go back to the shadow effect: it's linked to a light, so you don't necessarily need a Mesh object but rather a Drawable/Renderable for the effect.
 
---
I also subdivided my Effects into DrawEffect, ComputeEffect and a third I can't remember atm ^^
Edited by Ingenu


To some extent, I wouldn't have a "CommandQueue" described in XML, because maintaining it would turn bad rather quickly (adding features, never removing deprecated stuff)...

I would rather have the Effect plug-in system I described above do that in one of its callable functions instead: it's far more versatile, much easier to read, understand, and modify, and if it's a plugin/DLL, pretty much as flexible.

You'll need some glue between your engine code and the GPU setup/program.

 

I would, however, extend the program to contain meta information regarding texture bindings and sampler descriptions, as they do.

Edited by Ingenu


Yes that's closer.

 

Basically, as you write your GPU program code, you decide where you put your data and what data you need. If you write, say, "cbuffer Object : register(cb0) { float4x4 WorldMatrix; };" for your fragment subprogram, you have explicitly decided to put that constant buffer in constant buffer slot 0. You must therefore write the corresponding glue code engine-side, which would be something like "gfx.PSSetConstantBuffers( 0, 1, pCB );" in your setParameters(...) procedure, and you must also write the code that fills the data in that CB (mapping it, casting it, and writing the data, such as "pCBMatrix = instance->GetWorldMatrix();").
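The fill step can be sketched in portable C++ as copying a CPU-side struct into the memory the API's Map call returns; the struct just mirrors the cbuffer layout byte for byte (names here are illustrative, and the actual mapping call is API-specific):

```cpp
#include <cstring>

// CPU-side mirror of "cbuffer Object { float4x4 WorldMatrix; }".
// A float4x4 is 16 floats (64 bytes); layouts must match exactly.
struct ObjectCB { float worldMatrix[16]; };

// Writes the instance's world matrix into mapped buffer memory.
// `mapped` stands in for the pointer a Map()-style call would return.
void fillObjectCB(void* mapped, const float (&world)[16]) {
    auto* cb = static_cast<ObjectCB*>(mapped);
    std::memcpy(cb->worldMatrix, world, sizeof cb->worldMatrix);
}
```

The Effect's fillCBData would call something like this for each object, then the setData step binds the buffer to the slot declared in the shader.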
