Irlan

OpenGL Need design advice on a Mesh Class


Irlan    4067

The OpenGL context I'm coding is instance-based, so it owns the OpenGL objects (shaders, programs, textures).

I think the code is very redundant, since the context is what creates the mesh's buffer.

In that case, why can't I store the Mesh in a struct and just call Mesh.buffer = GLContext.CreateBuffer?

 

My Mesh class follows:

 

class Mesh {
public:
	std::vector<Vertex> vertex;
	std::vector<unsigned int> index;
	GraphicsBuffer* buffer; // a wrapper for GL vertex/index buffers

	void CreateBuffer(GLContext& context) // looks like the DX methods for creating vertex/index buffers
	{
		buffer = context.CreateBuffer(*this);
	}
};

 

I know this violates OOP principles, but the alternative is very, very redundant.

Irlan    4067

If Context::CreateBuffer were to call back into Mesh::CreateBuffer, you would have infinite recursion. In my opinion, that is the main reason to avoid doing this.

The GLContext returns a MeshBuffer*, which is stored in a std::vector<MeshBuffer*> inside the context.

L. Spiro    25620

I’ve already covered this in detail.

 

A mesh does not manage an OpenGL object of any kind directly.

A vertex buffer class manages an OpenGL VBO.

An index buffer class manages an OpenGL IBO.

 

A mesh brings these 2 things together without knowing what OpenGL is.

 

 

Your suggestion is severely flawed in that the mesh knows what a context is and the context knows what a mesh is.

The context should have absolutely no idea what a mesh is.

Vertex buffers and index buffers live inside the graphics module, but meshes and models live above that.

A mesh knows what graphics are, not the other way around.

Let the mesh make its own vertices and indices via the CVertexBuffer and CIndexBuffer classes that wrap around GLuint VBO/IBO IDs.
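[Editor's note: to illustrate the layering described above, here is a minimal sketch. The CVertexBuffer/CIndexBuffer member functions and the Vertex type are hypothetical, and the actual GL calls are elided; the point is only that Mesh composes the wrappers and never touches a context.]

```cpp
#include <cstddef>
#include <vector>

struct Vertex { float x, y, z; };

class CVertexBuffer {                 // graphics module: would own the GL VBO
public:
    void Create(const Vertex* /*data*/, std::size_t count) {
        count_ = count;               // glGenBuffers/glBufferData would go here
    }
    std::size_t Count() const { return count_; }
private:
    std::size_t count_ = 0;           // a GLuint vbo_ would live here too
};

class CIndexBuffer {                  // graphics module: would own the GL IBO
public:
    void Create(const unsigned* /*data*/, std::size_t count) { count_ = count; }
    std::size_t Count() const { return count_; }
private:
    std::size_t count_ = 0;
};

class Mesh {                          // higher layer: knows buffers, not OpenGL
public:
    std::vector<Vertex> vertices;
    std::vector<unsigned> indices;

    void CreateBuffers() {            // no context parameter needed
        vbuffer_.Create(vertices.data(), vertices.size());
        ibuffer_.Create(indices.data(), indices.size());
    }
    std::size_t IndexCount() const { return ibuffer_.Count(); }
private:
    CVertexBuffer vbuffer_;
    CIndexBuffer ibuffer_;
};
```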

 

 

L. Spiro

Irlan    4067

L. Spiro wrote:
“A mesh does not manage an OpenGL object of any kind directly. [...] A mesh brings these 2 things together without knowing what OpenGL is. [...] Let the mesh make its own vertices and indices via the CVertexBuffer and CIndexBuffer classes.”

 

Always helpful, L. Spiro. It now becomes:

struct VertexBuffer;
struct IndexBuffer;

// Mesh.h
class Mesh { // the context manages the buffers
public:
	Mesh() : vbuffer(0), ibuffer(0)
	{
	}

	void GenBuffers(GLContext& context) // defined in the .cpp file
	{
		context.GenVertexBuffer(vertex, vbuffer);
		context.GenIndexBuffer(index, ibuffer);
	}

	void DrawBuffers(GLContext& context) // defined in the .cpp file
	{
		context.DrawBuffersIndexed(vbuffer, ibuffer, index.size()); // index count
	}

private:
	std::vector<Vertex> vertex;
	std::vector<unsigned int> index;
	VertexBuffer* vbuffer;
	IndexBuffer* ibuffer;
};

struct VertexBuffer {
	GLuint vbo;
};

struct IndexBuffer {
	GLuint ibo;
};
Irlan    4067

L. Spiro wrote:
“A mesh does not manage an OpenGL object of any kind directly. [...] A mesh brings these 2 things together without knowing what OpenGL is. [...] Let the mesh make its own vertices and indices via the CVertexBuffer and CIndexBuffer classes.”

 
I'm confused about one thing. In the post you made, you said that the VertexBuffer has the vertex data and a VBO.
If a physics system needs to access vertex data to compute AABBs (e.g.), will it do so via the vertex buffer and not via the mesh itself?
I understand that the vertex data type is something very specific to the graphics context, but in that case the physics system won't have access to the vertex data until the buffer is created.
 
One more thing. In the old topic you said that the only thing a VB has is vertex data and that it shouldn't be hardcoded. That makes sense, but my point of view is:
 
If a VertexBuffer is supposed to have positions, normals, binormals, and texture coordinates, the class should be the following:
 
class VertexBuffer {
	(...) // vao, vbo, etc.

	std::vector<Vector3> position;
	std::vector<Vector3> uv;
	std::vector<Vector3> normal;
	std::vector<Vector3> binormal;
};

or better:

class BaseVertex {
public:
	Vector3 position;
	Vector3 uv;
	Vector3 normal;
};

class OtherVertex : public BaseVertex {
public:
	Vector3 binormal;
	Vector3 tangent;
};
L. Spiro    25620
Currently I don’t have time to respond in full to all points that need to be addressed, but the 2 major issues can be addressed briefly.
 
#1:

If a physics system needs to access vertex data to compute AABBs (e.g.), will it do so via the vertex buffer and not via the mesh itself?

The AABB will have already been computed offline during the creation of your custom model format. It has nothing to do with vertex buffers. Again, vertex buffers are related to graphics and graphics only, no other sub-system. If any other sub-system is involved, a local CPU copy of the vertices is used, not a vertex buffer.
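[Editor's note: the "local CPU copy" point can be sketched as follows. The Vec3/AABB types are illustrative stand-ins; no buffer object or graphics call is involved anywhere.]

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

// Compute an axis-aligned bounding box from a CPU-side copy of the
// vertex positions. This is the kind of work a physics system or an
// offline model-build step does; it never touches a vertex buffer.
AABB ComputeAABB(const std::vector<Vec3>& verts) {
    AABB box{ verts.front(), verts.front() };
    for (const Vec3& v : verts) {
        box.min.x = std::min(box.min.x, v.x); box.max.x = std::max(box.max.x, v.x);
        box.min.y = std::min(box.min.y, v.y); box.max.y = std::max(box.max.y, v.y);
        box.min.z = std::min(box.min.z, v.z); box.max.z = std::max(box.max.z, v.z);
    }
    return box;
}
```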

#2:

If a VertexBuffer is supposed to have positions, normals, binormals, and texture coordinates, the class should be the following:

And what about all the 2D HUD elements that have only a position and UV?
Any combination of any set of attributes can be used to create a vertex buffer. That is why we never use hardcoded structures to define vertex buffer elements. Your wrapper should work the way graphics APIs work: let the user decide what is inside the vertex buffer and at what offsets, and give them a structure to fill out that you can translate into something your graphics API can use.
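[Editor's note: a minimal sketch of such a user-filled layout structure, with hypothetical names (VertexElement, VertexLayout). The wrapper would later translate each element into the graphics API's attribute setup, e.g. glVertexAttribPointer calls.]

```cpp
#include <cstddef>
#include <vector>

enum class AttribSemantic { Position, Normal, TexCoord, Binormal };

struct VertexElement {
    AttribSemantic semantic;
    unsigned       components;   // e.g. 3 for a vec3
    std::size_t    offset;       // byte offset within one vertex
};

struct VertexLayout {
    std::vector<VertexElement> elements;
    std::size_t stride = 0;      // total size of one vertex in bytes

    // Append an attribute; its offset is wherever the vertex currently ends.
    // Assumes float attributes for simplicity.
    void Add(AttribSemantic s, unsigned comps) {
        elements.push_back({ s, comps, stride });
        stride += comps * sizeof(float);
    }
};
```

A 2D HUD element would only Add a position and a UV, while a full mesh also adds normals and binormals; the buffer wrapper itself hardcodes neither.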


L. Spiro

Irlan    4067

L. Spiro wrote:
“The AABB will have already been computed offline during the creation of your custom model format. [...] Let the user decide what is inside the vertex buffer and at what offsets.”

 

#1: Understood. I'll use a copy of the vertex data.
#2: This approach looks similar to a vertex layout (as used in DX11), so I'm essentially emulating DirectX 11 in OpenGL. I'll create a vertex layout structure and pass it to the vertex buffer creation.

Irlan    4067

L. Spiro wrote:
“The AABB will have already been computed offline during the creation of your custom model format. [...] Let the user decide what is inside the vertex buffer and at what offsets.”

 

 
Here is my approach:
 
struct VertexInfo {
	GLuint index;
	GLint size;
	(...) // pointer to data, etc.
};

void Mesh::CreateVertexBuffer(ContextGL& context)
{
	VertexInfo vinfo;
	(...) // fill the info
	context.CreateVertexBuffer(vbuffer, vinfo);
}

L. Spiro    25620
You are getting closer, but one of the things I wanted to say that I didn’t have time to say before is in regards to passing an OpenGL context around all the time.

Are you going to call ::wglMakeCurrent() (or similar) on every single OpenGL call? That is the only reason the vertex buffer would need you to pass a context into it; it is the only relevant function that can be called related to the context. If that is how you want to emulate object-oriented programming, you would be calling ::wglMakeCurrent() for every single call, including ::glEnable() etc.

Why do you need more than one context anyway? Even if you have multiple “renderer” instances a single context is enough. You don’t need more than one context to do object-oriented OpenGL programming, and even if you did the object you would be passing around would be some kind of more-generalized class such as “Renderer” or something above and beyond “ContextGL”.


L. Spiro

Irlan    4067

L. Spiro wrote:
“Why do you need more than one context anyway? [...] the object you would be passing around would be some kind of more-generalized class such as ‘Renderer’.”

All my wrappers for OpenGL objects (shaders, textures, VBs, IBs) live in the context. These objects manage their own state, but it is the ContextGL that deletes them. When the context creates a VBO, it puts it in a std::vector<VertexBuffer*> so it can delete them all when the context is destroyed.

 

One thing I didn't mention is that the ContextGL was also doing the rendering work (but that's just temporary, since the Single Responsibility Principle is a must).

 

Okay... What I've understood from your perspective is that (since my context was doing the rendering job) the context is really a Renderer, and the Renderer is where the wrapped objects live.
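[Editor's note: the lifetime scheme described above, raw pointers collected in a std::vector and deleted by the context, can be sketched more safely with owning smart pointers. Names are hypothetical and the GL calls are elided; the point is that the container's destructor replaces the manual delete loop.]

```cpp
#include <cstddef>
#include <memory>
#include <vector>

struct VertexBuffer { unsigned vbo = 0; };  // stand-in for the GL wrapper

class RendererGL {
public:
    VertexBuffer* CreateVertexBuffer() {
        // glGenBuffers would go here; the renderer keeps ownership and
        // hands the caller a non-owning handle.
        buffers_.push_back(std::make_unique<VertexBuffer>());
        return buffers_.back().get();
    }
    std::size_t BufferCount() const { return buffers_.size(); }
private:
    // unique_ptr frees every buffer automatically in ~RendererGL,
    // replacing the manual delete loop over std::vector<VertexBuffer*>.
    std::vector<std::unique_ptr<VertexBuffer>> buffers_;
};
```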

Irlan    4067

Here is my code with the correction: 

class RendererGL {
public:
	RendererGL();
	~RendererGL();

	void SetViewport(int x, int y, int w, int h);

	void CreateTexture(Texture2D** texture);
	void CreateVertexBuffer(const ArrayInfo& ainfo, const std::vector<VertexInfo>& vinfo, VertexBuffer** buffer);
	void CreateIndexBuffer(const std::vector<unsigned int>& index, IndexBuffer** buffer);
	void CreateShader(Shader** shader);
	void CreateProgram(const std::vector<Shader>& shader, ShaderProgram** program);

	void RenderBuffers(VertexBuffer* vbuffer, IndexBuffer* ibuffer);
//private: //tmp
	ShaderProgram* program;
  
	std::map<std::string, Texture2D> texture;	
	std::map<std::string, Shader> shader;
        //more objects
};
L. Spiro    25620
If there is a need for textures to be managed, they should be managed by a class designed to manage textures.
The same goes for shaders. I don't know what you are doing here, but your maps look like primitive managers.

Even if managers are associated with the renderer instance, they should still be separate classes that are members of the renderer. Keep their implementations separate and modular.
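[Editor's note: a minimal sketch of that separation, with hypothetical names. The manager owns its map, and the renderer merely owns the manager as a member.]

```cpp
#include <cstddef>
#include <map>
#include <string>

struct Texture2D { unsigned id = 0; };  // stand-in for the GL texture wrapper

class TextureManager {                  // separate, modular class
public:
    Texture2D& GetOrCreate(const std::string& name) {
        // Creating the actual GL texture on first use would go here;
        // std::map default-constructs the entry on first access.
        return textures_[name];
    }
    std::size_t Count() const { return textures_.size(); }
private:
    std::map<std::string, Texture2D> textures_;
};

class RendererGL {
public:
    TextureManager& Textures() { return textures_; }
private:
    TextureManager textures_;           // a member, not inlined maps
    // a ShaderManager member would follow the same pattern
};
```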


L. Spiro


