# OpenGL Mesh Class

## 7 posts in this topic

I'm trying to make my own mesh class without copying the code from a tutorial, and I ran into some trouble. As of now I'm not actually importing any data; I have all the vertices stored in a C++ vector and a render function to render the model. The problem is that once I call the render function, my GLuint VBO gets a value of 0, and apparently the size of my vertices drops from 5 (or whatever it was before) to 0. I'm not sure why. I figured it might be something to do with how OpenGL handles buffers, since I can't modify these values outside of my class (they're private). Does anyone have an idea why these values get modified and become 0? I call CreateMesh just before glutMainLoop is called, which is where my tutorial creates its model. The // comments after the cout calls show the values those couts display.

Mesh.h

```cpp
#include <vector>
#include <assert.h>
#include "Vector.h"
#include "glew.h"
#include <iostream>

using namespace std;

struct Vertex {
    Vector3f pos;

    Vertex () {}

    Vertex (const Vector3f &_pos) {
        pos = _pos;
    }
};

struct Mesh {
public:
    Mesh ();
    Mesh (vector<Vertex>);

    void Render ();

private:
    GLuint VBO; // Vertex Buffer Object
    vector<Vertex> vertices;
};
```



Mesh.cpp

```cpp
#include "Mesh.h"

Mesh::Mesh () {}

Mesh::Mesh (vector<Vertex> _vertices) {
    vertices.resize(_vertices.size());
    vertices = _vertices;

    //cout << vertices.size() << endl; // 5
    //cout << VBO << endl; // 3435973836

    glGenBuffers(1, &VBO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * vertices.size(), &vertices[0], GL_STATIC_DRAW);

    //cout << vertices.size() << endl; // 5
    //cout << VBO << endl; // 1
}

void Mesh::Render () {
    //cout << vertices.size() << endl; // 0
    //cout << VBO << endl; // 0
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_POINTS, 0, vertices.size());
    glDisableVertexAttribArray(0);
}
```


Creation and render functions:

```cpp
Mesh CreateMesh () {
    vector<Vertex> Vertices (5, Vector3f());
    Vertices[0].pos = Vector3f(0.0f, 0.0f, 0.0f);
    Vertices[1].pos = Vector3f(0.0f, 0.0f, 0.0f);

    return Mesh(Vertices);
}

static void RenderScene () {
    // VBO = 0 here too
    glClear(GL_COLOR_BUFFER_BIT);
    Model.Render();
    glutSwapBuffers();
}
```

---

Are you sure you're really using the same instance of the class? Maybe you haven't assigned the result of CreateMesh to Model, so it's just using the default constructor? Try printing out the pointer inside the constructor and the render method: `printf("%p\n", this);`

Btw, your glBufferData takes a pointer to std::vector<Vertex>, I'd change it to &vertices[0].pos.x

---

Well, I encountered a similar problem before when I was making a physics simulator for collisions in 2D space, and the way I solved it was by putting the class that didn't work into the same header that was using it. In this case I'd put the Mesh class into the main file, but that makes the code clunky and unorganized.

I don't think it's actually the default constructor, but I'll give it a test. The reason is that the CreateMesh function gets called and creates the mesh after the default constructor ran, so it should override it. The value does get assigned, because the model variable does get the proper values: VBO equals 1, and vertices equals the actual number of vertices, not 0.

I could post the whole program, but I did a lot of changing to the GLUT interface. It all now runs in an Application class, which also has a list of all the models in my application. I then add a new model, and all models get rendered in a for loop in the RenderScene function. The variables all still get called the way they did previously, so the mesh code still runs the same, just under a more dynamic interface.

EDIT: After CreateMesh sets the values, the mesh being rendered does indeed get the values generated by CreateMesh, which are the ones I want. But once the render function gets called, they all get set to 0, and I'm really lost as to why, because the only thing happening is that another function is being called in another header, and that function doesn't actually change any of those values; they just become modified O.o

Btw, I did change the glBufferData call to &vertices[0].pos.x, but what difference does it make whether it has the .pos.x or not? Won't it work properly either way?

---

Btw, I did change the glBufferData call to &vertices[0].pos.x, but what difference does it make whether it has the .pos.x or not? Won't it work properly either way?

That line of code is fine. Sponji was just having a funny five minutes.....
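To spell out why either spelling works: std::vector storage is guaranteed contiguous, so &vertices[0], &vertices[0].pos and &vertices[0].pos.x all name the same address, assuming Vector3f is just three tightly packed floats. A quick sketch (the Vector3f here is a hypothetical stand-in, since the real Vector.h isn't shown):

```cpp
#include <vector>

// Hypothetical stand-in for the Vector3f in "Vector.h": three packed floats.
struct Vector3f { float x, y, z; };
struct Vertex  { Vector3f pos; };

// glBufferData only sees a void*, so all three expressions below hand it
// the same address: the first float of the first vertex.
bool sameAddress() {
    std::vector<Vertex> vertices(5);
    const void* a = &vertices[0];
    const void* b = &vertices[0].pos;
    const void* c = &vertices[0].pos.x;
    return a == b && b == c;
}
```

The .pos.x form is arguably self-documenting (it says "here come raw floats"), but the driver receives identical bytes either way.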

Right then. Where to begin with this. I'd probably recommend changing your ctor to take a const ref rather than duplicating the array, i.e.

```cpp
Mesh::Mesh (const std::vector<Vertex>& _vertices)
```

If you ever find yourself putting `using namespace std;` in a header file, you're doing it wrong. `using namespace` should only be used in a .cpp file, after all the #includes. This does mean you need to use the std:: prefix in all headers, but that's a good habit to get into. You're just heading for another set of hideous problems if you keep doing that.

they just become modified O.o

Yeah, hate to break it to you, but stuff doesn't just randomly modify itself. It means you have a bug, and you're modifying the data somewhere.

Right then. Modify your Mesh class to be this:

```cpp
struct Mesh {
public:
    Mesh ();
    Mesh (vector<Vertex>);

    void Render ();

private:
    // these will not be called.
    inline Mesh (const Mesh&);
    inline const Mesh& operator = (const Mesh&);

    GLuint VBO; // Vertex Buffer Object
    std::vector<Vertex> vertices;
};
```


Hit compile. Has it spat out a bunch of compile errors? Good. Those are the locations of your bug(s).

If you ever find yourself writing a function declaration like this:

```cpp
Mesh CreateMesh()
{
}
```

Take yourself outside, slap yourself around a bit, and go and read the chapter on C++ pointers, followed by the chapter on C++ references. Learn how to use them, and your problems will be greatly reduced.

That Mesh you're returning. Does it have a destructor? Does that destructor delete the vertex buffer? Is it therefore a good idea to be creating a temporary Mesh, which you're then assigning to another Mesh object elsewhere, and which is then deleted when it falls out of scope? (And since you haven't defined a copy ctor or assignment op, the VBOs are being killed off.) Have you considered maybe reading that chapter on pointers again? Now is probably a good time to do so!

```cpp
Mesh* CreateMesh()
{
    return new Mesh( vertices );
}

// elsewhere....
Mesh* g_mesh = CreateMesh();

// and when you are absolutely finished with it....
delete g_mesh;
```

Problem solved?
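If your compiler supports C++11, there's also a middle road: delete the copy operations and add move operations, so returning a Mesh by value transfers ownership of the VBO instead of duplicating the handle. The sketch below swaps the real glGenBuffers/glDeleteBuffers for a fake handle counter so the ownership mechanics are visible without a GL context; it's an illustration, not the thread's actual code.

```cpp
#include <utility>

// Fake buffer bookkeeping standing in for glGenBuffers/glDeleteBuffers,
// so this compiles and runs without a GL context (illustration only).
using GLuint = unsigned int;
static int g_liveBuffers = 0;
static GLuint fakeGenBuffer()          { ++g_liveBuffers; static GLuint next = 1; return next++; }
static void   fakeDeleteBuffer(GLuint) { --g_liveBuffers; }

struct Mesh {
    Mesh() : VBO(fakeGenBuffer()) {}            // real code: glGenBuffers + glBufferData
    ~Mesh() { if (VBO) fakeDeleteBuffer(VBO); } // real code: glDeleteBuffers

    // Copying is banned: a copy would leave two objects deleting one VBO.
    Mesh(const Mesh&) = delete;
    Mesh& operator=(const Mesh&) = delete;

    // Moving transfers ownership, so returning by value is safe.
    Mesh(Mesh&& other) noexcept : VBO(other.VBO) { other.VBO = 0; }
    Mesh& operator=(Mesh&& other) noexcept {
        if (this != &other) {
            if (VBO) fakeDeleteBuffer(VBO);
            VBO = other.VBO;
            other.VBO = 0;
        }
        return *this;
    }

    GLuint VBO;
};

Mesh CreateMesh() { return Mesh(); } // moved (or elided), never copied
```

With this in place, `Mesh Model = CreateMesh();` no longer kills the VBO: the temporary's handle is stolen rather than copied, and the temporary's destructor sees VBO == 0 and does nothing.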

It now all runs in a Application class which also has a list of all the models in my application. I then add a new model and all models get rendered in a for loop in the RenderScene function.

Let me guess....

```cpp
struct Model
{
public:

private:
    std::vector<Mesh> m_meshes;
};

struct App
{
public:

private:
    std::vector<Model> m_models;
};
```

*facepalm*

---

Render Function:

```cpp
static void RenderScene () {
    // VBO = 0 here too
    glClear(GL_COLOR_BUFFER_BIT);
    Model.Render();
    glutSwapBuffers();
}
```


Just a side note: make sure to clear the depth buffer as well:

```cpp
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
```

---

Alright, so I'm revisiting my C++ book on pointers and references, but I have just a few questions.

Why exactly is it wrong to create a constructor for Mesh the way I did? If I add a destructor (which I did) that calls glDeleteBuffers on the VBO, would it just make returning a pointer more efficient? I did end up adding a destructor, and now I don't get the VBO drawn on the screen.

Also, in your example of what I should put for the Mesh class to surface the errors, why exactly is there an operator=? Wouldn't C++ copy all the variables for me by default if I didn't make an operator=?

Also, what's with the facepalm for using std::vector<Mesh> as a list? Is it supposed to be a pointer to the meshes rather than them being on the stack?

Yeah, I added the clear-depth-buffer part. It wasn't there in the first place because the OpenGL tutorial I was following didn't clear the depth buffer until shadow mapping was introduced, I think, or somewhere further down the line.

---

If I add a destructor (which I did) that calls glDeleteBuffers on the VBO, would it just make returning a pointer more efficient?

Sure, returning an 8 (or 4) byte value is slightly more efficient than:

- allocating a new Mesh on the stack
- allocating an array of vertices
- copying in all of the vertices from the mesh being copied
- allocating storage on the GPU for your VBO data

Except... you're only doing half of the above (which is required if you're going to pass these things around by reference).

I did end up adding a destructor, and now I don't get the VBO drawn on the screen.

Good. Your Mesh class is now releasing its GPU resources when it is destroyed (which is a very good thing).

Also, in your example of what I should put for the Mesh class to surface the errors, why exactly is there an operator=? Wouldn't C++ copy all the variables for me by default if I didn't make an operator=?

It will indeed copy the variables blindly, without any understanding of what those variables mean. That's all well and good when you have a simple Vec3 or Matrix, but it's not so good when your class holds a handle to resources allocated on the GPU (i.e. your VBO). If you really think it's a good idea to allow copying of mesh data, you should implement the copy ctor and assignment operator. I'd argue that allowing copying of Mesh data is a bit pointless: it would be far easier to simply draw the same Mesh twice than to draw two meshes that are exactly the same.

Also, what's with the facepalm for using std::vector<Mesh> as a list? Is it supposed to be a pointer to the meshes rather than them being on the stack?

They aren't on the stack, they're on the heap, and std::vector is not a list. If it were a list you could get away with it, but not with a std::vector, where push_back may cause the entire array to be reallocated, which would reallocate every Mesh, including all of their vertex arrays and all of their GPU resources.
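To make that concrete: if a container of meshes is really needed, one common approach (assuming C++11 is available) is to store pointers, so vector growth shuffles pointers instead of copying and destroying Mesh objects. Below is a sketch with a stripped-down stand-in Mesh that just counts destructor calls, i.e. the moments a real Mesh would be calling glDeleteBuffers:

```cpp
#include <memory>
#include <vector>

// Stand-in Mesh: counts destructions, which is when a real Mesh
// would free its VBO with glDeleteBuffers (illustration only).
static int g_meshesDestroyed = 0;
struct Mesh {
    ~Mesh() { ++g_meshesDestroyed; }
};

struct Model {
    // Reallocation moves the unique_ptrs; no Mesh is copied or destroyed.
    std::vector<std::unique_ptr<Mesh>> m_meshes;

    void AddMesh() {
        m_meshes.push_back(std::unique_ptr<Mesh>(new Mesh()));
    }
};
```

With std::vector<Mesh> instead, every reallocation would copy the elements and destroy the originals, so each growth step would silently run ~Mesh (and glDeleteBuffers) on meshes you're still using.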

---

Sorry for the late reply, but thanks, I think I understand this a lot more. I decided to remove the GL functions from the constructor and destructor, and instead I added functions to bind the buffer and release it, so that not all meshes created get loaded onto the GPU. I realized from these posts that the error came from the meshes not being passed properly via references/pointers. I fixed that part, but now the triangle doesn't appear unless it's GL_POINTS, and it will always be in the centre of the screen. I assume it's because I need shaders. By default, on OpenGL 3.0 or higher, do I need shaders to properly display triangles, or will everything always go to (0,0)?
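On the shader question: with a core profile context there is no fixed-function fallback, so nothing meaningful draws without at least a vertex and a fragment shader (a compatibility context will still run old-style, which may be why GL_POINTS shows up). A minimal pass-through pair would look something like the sketch below; it's held in C++ string literals ready for glShaderSource, and the names kVertexSrc/kFragmentSrc are mine, not from the thread:

```cpp
#include <cstring>

// Minimal GLSL 3.30 pass-through shaders as C++ string literals.
// kVertexSrc / kFragmentSrc are illustrative names (not from the thread).
static const char* kVertexSrc =
    "#version 330 core\n"
    "layout(location = 0) in vec3 pos;\n" // matches glVertexAttribPointer(0, ...)
    "void main() {\n"
    "    gl_Position = vec4(pos, 1.0);\n" // pass the position straight through
    "}\n";

static const char* kFragmentSrc =
    "#version 330 core\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    fragColor = vec4(1.0);\n"        // solid white
    "}\n";

// Tiny sanity helper for checking the source strings.
static bool contains(const char* s, const char* needle) {
    return std::strstr(s, needle) != nullptr;
}
```

Compile each with glCreateShader/glShaderSource/glCompileShader, link them into a program with glAttachShader/glLinkProgram, and call glUseProgram before glDrawArrays; without that, a core context will draw nothing at all.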
