FantasyVII

OpenGL Black screen when trying to implement ibo


Hello,

 

I'm trying to learn OpenGL 4.5 and so far it's going great. I drew a cube with a simple vertex buffer and it worked. Now I'm trying to implement an index buffer object, but it's not working: I see nothing on the screen, and for the life of me I can't figure out why.

 

Here is my code.

#include "Renderer.h"

namespace blk
{

	Renderer::Renderer() : shader()
	{
	}

	Renderer::~Renderer()
	{
	}

	void Renderer::Initialize()
	{
		glClearColor(0, 0, 0, 0);

		glEnable(GL_DEPTH_TEST);
		glDepthFunc(GL_LESS);

		const GLfloat VertexBufferData[] =
		{
			-1.000000f, -1.000000f,  1.000000f,
			-1.000000f,  1.000000f,  1.000000f,
			-1.000000f, -1.000000f, -1.000000f,
			-1.000000f,  1.000000f, -1.000000f,
			 1.000000f, -1.000000f,  1.000000f,
			 1.000000f,  1.000000f,  1.000000f,
			 1.000000f, -1.000000f, -1.000000f,
			 1.000000f,  1.000000f, -1.000000f
		};

		const GLfloat IndexBufferData[] =
		{
			3, 2, 0,
			7, 6, 2,
			5, 4, 6,
			1, 0, 4,
			2, 6, 4,
			7, 3, 1,
			1, 3, 0,
			3, 7, 2,
			7, 5, 6,
			5, 1, 4,
			0, 2, 4,
			5, 7, 1
		};

		glCreateBuffers(1, &VertexBuffer);
		glBindBuffer(GL_ARRAY_BUFFER, VertexBuffer);
		glBufferData(GL_ARRAY_BUFFER, sizeof(VertexBufferData), VertexBufferData, GL_STATIC_DRAW);

		glCreateBuffers(1, &IndexBuffer);
		glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexBuffer);
		glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(IndexBufferData), IndexBufferData, GL_STATIC_DRAW);

		ModelMatrix = blk::Math::Matrix4::Translation(blk::Math::Vector3(0.f, 0, -2.0f));
		ProjectionMatrix = blk::Math::Matrix4::Perspective(90.f, 800.f / 600.f, 1.f, 100.f);
	}

	void Renderer::Load()
	{
		ProgramID = shader.CreatePorgram("Content/Shaders/Vertex.shader", "Content/Shaders/Fragment.shader");
	}

	void Renderer::Update()
	{
		angle = time.GetTotalTime() * 32;
	}

	void Renderer::Draw()
	{
		glUseProgram(ProgramID);

		shader.SetUniformMatrix4(ProgramID, "ModelMatrix", blk::Math::Matrix4::Rotation(angle, blk::Math::Vector3(0, 1, 0)) * ModelMatrix);
		shader.SetUniformMatrix4(ProgramID, "ProjectionMatrix", ProjectionMatrix);

		glEnableVertexAttribArray(0);

		glBindBuffer(GL_ARRAY_BUFFER, VertexBuffer);
		glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

		glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexBuffer);
		glDrawElements(GL_TRIANGLES, 12 * 3, GL_UNSIGNED_INT, (void*)0);

		glDisableVertexAttribArray(0);
	}

	void Renderer::Clear()
	{
		glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
	}
}

Vertex shader:

#version 450 core
layout(location = 0) in vec3 inPosition;
//layout(location = 1) in vec3 vertexColor;

out vec4 fragmentColor;

uniform mat4 ModelMatrix;
uniform mat4 ProjectionMatrix;

void main()
{
	gl_Position = ProjectionMatrix * ModelMatrix * vec4(inPosition, 1);
	fragmentColor = vec4(1.0, 0, 0, 1.0);
}

Fragment shader:

#version 450 core

in vec4 fragmentColor;
out vec4 OutputColor;

void main()
{
	OutputColor = fragmentColor;
}

Your index buffer is declared as GLfloat IndexBufferData when it should be uint32_t IndexBufferData. For best efficiency it should be uint16_t IndexBufferData, with glDrawElements called with GL_UNSIGNED_SHORT instead of GL_UNSIGNED_INT.
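A minimal sketch of the corrected declaration, keeping the rest of Initialize() unchanged; uint32_t matches the GL_UNSIGNED_INT already passed to glDrawElements (GLuint would work equally well):

	const uint32_t IndexBufferData[] =
	{
		3, 2, 0,
		7, 6, 2,
		5, 4, 6,
		1, 0, 4,
		2, 6, 4,
		7, 3, 1,
		1, 3, 0,
		3, 7, 2,
		7, 5, 6,
		5, 1, 4,
		0, 2, 4,
		5, 7, 1
	};

	// The existing glBufferData call now uploads real integer indices,
	// so glDrawElements(..., GL_UNSIGNED_INT, ...) reads valid values.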


Your index buffer is declared as GLfloat IndexBufferData when it should be uint32_t IndexBufferData. For best efficiency it should be uint16_t IndexBufferData, with glDrawElements called with GL_UNSIGNED_SHORT instead of GL_UNSIGNED_INT.

 

Thank you. I was pulling my hair out. Can you please explain where uint16_t is defined? Is it defined by C++ or by OpenGL? Also, I still don't understand why IndexBufferData can't work with GLfloat; I don't exactly understand what the problem is.

 

Also, isn't uint16_t the same size as short? Can't I just use short? And what is the difference between uint8_t and char, uint16_t and short, uint32_t and int, or uint64_t and double?

 

Can you please explain more? Sorry, I'm mainly a C# dev and I'm new to C++ and OpenGL.

 

Thank you!


"uint16_t is the same size as short?"

 

Usually, yes.

 

" Also I still don't understand why can't the indexBufferData work with GLfloat? I don't exactly understand what is the problem."

 

The index buffer is a set of array indices into the vertex data. Passing "7, 47, 998" means "draw vertexdata[7], vertexdata[47], vertexdata[998]", so floats don't make any sense. In C++ (and most other languages) you can use a float as an array index, but only because it is silently converted to an integer while you're not looking; OpenGL is simply stricter about what types it will accept.

This is largely to simplify the work of the driver and the hardware by making you get things right to start with. For example, OpenGL ES on mobile phones only accepts byte or short indices; anything else is an error. That keeps the code inside the driver simpler (and hence faster). The philosophy is that it's not OpenGL's job to sort your types out for you, and most game developers would complain if it spent time doing that work when they just want triangles drawn as fast as possible.
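To make the failure concrete: when a GLfloat buffer is read back as GL_UNSIGNED_INT, the driver interprets the raw float bit patterns as integers, which gives huge out-of-range indices instead of 0-7. A small standalone sketch (purely illustrative, not part of the original code) showing what actually ends up in the buffer:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
	float index = 3.0f;             // what the original code stores
	std::uint32_t reinterpreted;
	std::memcpy(&reinterpreted, &index, sizeof reinterpreted);  // what GL_UNSIGNED_INT reads

	// Prints 1077936128 (0x40400000) -- far outside an 8-vertex cube,
	// so no sensible triangles can be drawn.
	std::printf("%u\n", (unsigned)reinterpreted);
	return 0;
}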


Also, isn't uint16_t the same size as short? Can't I just use short? And what is the difference between uint8_t and char, uint16_t and short, uint32_t and int, or uint64_t and double?

The C++ standard does not define exactly how many bits the built-in integer types (short, int, long, etc.) use. For example, a long on Windows has a size of 32 bits, but on Linux it is usually 64 bits. So, to be sure you get an (unsigned) integer with exactly X bits on all platforms and configurations, you can use the fixed-width types intX_t / uintX_t.

 

 


Can you please explain where uint16_t is defined?

They are defined in the <cstdint> header, within the namespace std.

 

See also: http://en.cppreference.com/w/cpp/header/cstdint
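A quick sketch, just to illustrate: the fixed-width types come from <cstdint> and keep the same size on every platform, unlike short/int/long (most implementations also expose them in the global namespace, so the std:: qualification is usually optional):

#include <cstdint>

// Guaranteed widths, independent of compiler and OS.
static_assert(sizeof(std::uint16_t) == 2, "uint16_t is exactly 16 bits");
static_assert(sizeof(std::uint32_t) == 4, "uint32_t is exactly 32 bits");

// The cube's index buffer could therefore be declared as:
const std::uint32_t IndexBufferData[] = { 3, 2, 0, 7, 6, 2 /* ...and so on... */ };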
