OpenGL, Blank Screen

6 comments, last by Trienco 10 years, 5 months ago

I am attempting to learn some OpenGL, and I have become a bit stuck. I have been playing with the tutorials at open.gl, and while working on the drawing polygons section, I got stumped while making a few changes. I got everything in the section to work when compiling under C via MinGW, but after moving over to Eclipse (still using MinGW) and rewriting some of the code in C++, I cannot get anything to display on the screen. Before, I had a multi-colored triangle, as the tutorial intended, but now I just get a black window. I have been compiling with -std=c++0x, so I thought I might have done something unforgivable in C++11, but when I set up another Eclipse project with the same settings and pasted in the code from the C version, it worked fine (it did throw some compiler warnings). I have been over the code several times, comparing it to the strictly C code that I used at first, and I cannot find where I went wrong. I did rewrite a few things based on tutorials and posts from other places, such as the code for loading the shaders.

The current version of the code is as follows:

[source lang="cpp"]

/*
 * main.cpp
 *
 * Created on: Oct 23, 2013
 */

#define GLEW_STATIC
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>


/* Shader Load
 */
bool getShader(std::string filename, GLchar ** contents) {
    std::ifstream file;
    file.open(filename.c_str(), std::ifstream::in);

    if (!file) {
        return (false);
    }

    std::stringstream stream;
    stream << file.rdbuf();

    file.close();
    std::string tStr;
    tStr = stream.str();
    *contents = (GLchar*)tStr.c_str();
    return (true);
}

int main(void) {
    /* Initialization */
    if (!glfwInit()) {
        std::cout << "GLFW initialization error.\n";
        // TODO Add error message output.
    }
    else {
        std::cout << "GLFW Initialized. Using version:" << glfwGetVersionString() << "\n";
    }
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

    /* Create Window */
    GLFWwindow * window = glfwCreateWindow(800, 600, "Basic OpenGL", nullptr, nullptr);
    glfwMakeContextCurrent(window);

    glewExperimental = GL_TRUE;
    GLenum err = glewInit();
    if (GLEW_OK != err) {
        std::cout << "GLEW initialization failed. Error: " << glewGetErrorString(err) << "\n";
    }
    else {
        std::cout << "Using GLEW " << glewGetString(GLEW_VERSION) << "\n";
    }

    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    std::cout << "Vertex Array: " << vao << "\n";
    // REVIEW Find out exactly what I am outputting here.

    GLfloat verticies[] = {
         0.0f,  0.5f, 1.0f, 0.0f, 0.0f,
         0.5f, -0.5f, 0.0f, 1.0f, 0.0f,
        -0.5f, -0.5f, 0.0f, 0.0f, 1.0f
    };

    GLuint vbo;
    glGenBuffers(1, &vbo);
    std::cout << "Vertex Buffer: " << vbo << "\n";
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verticies), verticies, GL_STATIC_DRAW);

    GLuint elements[] = {
        0, 1, 2
    };
    GLuint ebo;
    glGenBuffers(1, &ebo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(elements), elements, GL_STATIC_DRAW);
    std::cout << "Element Buffer: " << ebo << "\n";

    /* Load shader files */
    GLchar * vC;
    GLchar * fC;
    if (!getShader("vShad.vert", &vC)) {
        std::cout << "Vertex shader load source failed.\n";
    }
    else {
        std::cout << "Vertex shader source loaded.\n";
    }
    if (!getShader("fShad.frag", &fC)) {
        std::cout << "Fragment shader source load failed.\n";
    }
    else {
        std::cout << "Fragment shader source loaded.\n";
    }

    const GLchar * vert = vC;
    const GLchar * frag = fC;

    /* Compile Shaders */
    GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vertexShader, 1, &vert, nullptr);
    glCompileShader(vertexShader);

    GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fragmentShader, 1, &frag, nullptr);
    glCompileShader(fragmentShader);

    //std::cout << "Vertex Shader Source:\n" << vert;
    //std::cout << "Fragment Shader Source:\n" << frag;

    /* Shader compilation status */
    GLint status;
    char errBuffer[512];
    glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &status);
    if (status == GL_FALSE) {
        std::cout << "Vertex shader compilation failed.\n";
        glGetShaderInfoLog(vertexShader, 512, nullptr, errBuffer);
        std::cout << errBuffer << "\n";
    }
    else {
        std::cout << "Vertex shader compiled.\n";
    }

    glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, &status);
    if (status == GL_FALSE) {
        std::cout << "Fragment shader compilation failed.\n";
        glGetShaderInfoLog(fragmentShader, 512, nullptr, errBuffer);
        std::cout << errBuffer << "\n";
    }
    else {
        std::cout << "Fragment shader compiled.\n";
    }

    GLuint shaderProgram = glCreateProgram();
    glAttachShader(shaderProgram, vertexShader);
    glAttachShader(shaderProgram, fragmentShader);
    glBindFragDataLocation(shaderProgram, 0, "outColor");

    glLinkProgram(shaderProgram);
    glUseProgram(shaderProgram);

    GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
    glEnableVertexAttribArray(posAttrib);
    glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), 0);

    GLint colAttrib = glGetAttribLocation(shaderProgram, "color");
    glEnableVertexAttribArray(colAttrib);
    glVertexAttribPointer(colAttrib, 3, GL_FLOAT, GL_FALSE,
            5 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClearDepth(1.0f);

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
        glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);
        glfwSwapBuffers(window);
        glfwPollEvents();
        if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS) {
            glfwSetWindowShouldClose(window, GL_TRUE);
        }
    }
    glDeleteProgram(shaderProgram);
    glDeleteShader(fragmentShader);
    glDeleteShader(vertexShader);

    glDeleteBuffers(1, &ebo);
    glDeleteBuffers(1, &vbo);
    glDeleteVertexArrays(1, &vao);

    glfwDestroyWindow(window);
    glfwTerminate();

    return (0);
}

[/source]

The shader files that I am using are:

Vertex Shader:

[source]

#version 150

in vec2 position;
in vec3 color;

out vec3 Color;

void main() {
    Color = color;
    gl_Position = vec4(position, 0.0f, 1.0f);
}

[/source]

Fragment Shader:

[source]

#version 150

in vec3 Color;

out vec4 outColor;


void main() {
    outColor = vec4(Color, 1.0f);
}

[/source]

In case it is relevant, I am linking:

-lmingw32 -lglew32s -lglfw3 -lopengl32 -lgdi32

and compiler options are:

-fmessage-length=0 -std=c++0x (I am not really sure about message length, it was the default, and I have left it alone so far).

Compiler is MinGW 4.7.2

The program builds in Eclipse, and runs with a black screen. It may be worth noting that I have a warning:

Warning: .drectve `/DEFAULTLIB:"LIBCMT" /DEFAULTLIB:"OLDNAMES" ' unrecognized

I have not tracked down the cause of that yet; I have been trying on and off to determine it, since I would prefer no warnings at all, even if the program runs with them. One post I read (I cannot recall where) mentioned that those warnings are related to the Microsoft compilers, but I don't yet know much beyond libcmt being part of their multithreaded C runtime library (please correct me if I am wrong).

I was worried about whether the shaders were loading correctly, since a friend had issues with the program where I did not (I still have not been able to figure that one out; it was with the C code I used originally), but after adding code to print the loaded shader source to the console, I was able to determine that they are loading properly.

The terminal output when running the program is:

GLFW Initialized. Using version:3.0.3 Win32 WGL MinGW LoadLibrary(winmm)
Using GLEW 1.10.0
Vertex Array: 1
Vertex Buffer: 1
Element Buffer: 2
Vertex shader source loaded.
Fragment shader source loaded.
Vertex shader compiled.
Fragment shader compiled.

I am still getting a blank screen though. I cannot figure out what I am overlooking.

Additionally, I am not what you would call well-versed in programming. I have been playing with various languages for years, but never very seriously. That being said, I am always happy to hear about any...unconventional practices...that may cause me problems in the future as well.

Thanks


Hey Mercury Filter,

The first thing I see is:

you are using sizeof(verticies) and sizeof(elements); this returns the size of a pointer.

You should be using something like sizeof(float) * 15 (15 floats for verticies) and sizeof(float) * 3 (3 floats for elements).

You can also use a vector for this (I like doing that :D), which would result in something like this:

glBufferData(GL_ARRAY_BUFFER, sizeof(vertice_vector[0])*vertice_vector.size(), &vertice_vector[0], GL_STATIC_DRAW);
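The size arithmetic in that call can be checked in isolation, without any GL calls. A minimal sketch (byteSize is a hypothetical helper name, not part of the code above):

```cpp
#include <cstddef>
#include <vector>

// Sketch of the size computation used in the glBufferData call above,
// kept separate from the GL call: byte size = element size * element count.
std::size_t byteSize(const std::vector<float>& v) {
    return sizeof(v[0]) * v.size();
}
```

For the 15-float vertex array in the original post this yields 60 bytes on a platform with 4-byte floats, the same value sizeof(verticies) produces for the stack array.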

I attempted both of the changes, but I am still getting the black screen. The new code reads:

[source lang="cpp"]

GLfloat verts[] = {
     0.0f,  0.5f, 1.0f, 0.0f, 0.0f,
     0.5f, -0.5f, 0.0f, 1.0f, 0.0f,
    -0.5f, -0.5f, 0.0f, 0.0f, 1.0f
};

std::vector<GLfloat> vertex_vect;
vertex_vect.resize(15);
for (int i = 0; i < 15; i++) {
    vertex_vect.at(i) = verts[i];
}

GLuint vbo;
glGenBuffers(1, &vbo);
std::cout << "Vertex Buffer: " << vbo << "\n";
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex_vect.at(0)) * vertex_vect.size(), &vertex_vect.at(0), GL_STATIC_DRAW);

GLuint elements[] = {
    0, 1, 2
};
GLuint ebo;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint) * 3, elements, GL_STATIC_DRAW);

[/source]

I was curious about what kind of problem the original was causing, so I threw together this:

[source lang="cpp"]

#include <iostream>
#include <vector>


int main(void) {
    /* Define a set of values. */
    float myvals[] = {
        0.1f, 0.3f, 0.0f,
        1.5f, 0.2f, 0.0f
    };

    /* Printing info */
    std::cout << "sizeof(myvals) : " << sizeof(myvals) << "\n";
    std::cout << "sizeof(myvals[0]) : " << sizeof(myvals[0]) << "\n";

    /* Try a vector */
    std::vector<float> vect_vals;
    vect_vals.resize(6);
    for (int i = 0; i < 6; i++) {
        vect_vals.at(i) = myvals[i];
    }

    /* Print vector info */
    std::cout << "sizeof(vect_vals) : " << sizeof(vect_vals) << "\n";
    std::cout << "sizeof(vect_vals.at(0)) : " << sizeof(vect_vals.at(0)) << "\n";
    std::cout << "sizeof(vect_vals.at(0)) * vect_vals.size() : " <<
            sizeof(vect_vals.at(0)) * vect_vals.size() << "\n";

    return 0;
}

[/source]

but the results seem to indicate that the value returned by sizeof(vertices) (I corrected the spelling) is the same as the value from sizeof(vertex_vect.at(0)) * vertex_vect.size().

My results:

C:\Dev\Projects\Utility>g++ -std=c++0x sizeTest.cpp -o st.exe

C:\Dev\Projects\Utility>st.exe
sizeof(myvals) : 24
sizeof(myvals[0]) : 4
sizeof(vect_vals) : 12
sizeof(vect_vals.at(0)) : 4
sizeof(vect_vals.at(0)) * vect_vals.size() : 24

I don't think I misunderstood what I was doing here, so I am wondering if it is actually something else that was the issue, rather than the size values that I was passing in.

sizeof(vertices) will return the size of the actual array, since it was defined in the same scope and never decayed to a pointer. In fact, this is a perfectly common way to determine the size of an array without having to manually count the elements:

int data[] = { ... };
const size_t numElements = sizeof(data)/sizeof(*data);
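Written out with concrete values (a minimal sketch):

```cpp
#include <cstddef>

// The array-length idiom above: sizeof(data) is the whole array's byte
// size here, because data is a real array in scope, not a pointer.
int data[] = { 10, 20, 30, 40 };
const std::size_t numElements = sizeof(data) / sizeof(*data);   // 4 elements
```

Pass data to a function, though, and it decays to an int*, at which point sizeof really would give the pointer size.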

One thing that is always confusing me: why do I often see people accessing vectors using "at", but don't have an actual try/catch around it? The only benefit in return for being slow is that you're sure to have an unhandled exception.

I might have scrolled over it, but are you setting your viewport anywhere?


One thing that is always confusing me: why do I often see people accessing vectors using "at", but don't have an actual try/catch around it? The only benefit in return for being slow is that you're sure to have an unhandled exception.

I am afraid that is my incompetence showing, I will be reading more in that area for sure. Thank you for pointing it out.

In regards to the viewport, I have not set it. I believe the tutorial covers the matter later on, but the initial stuff seems to all be sans viewport setting. This is a link to code associated with the tutorial, though not identical, as it uses SFML for the window and context. Additionally, I used some elements of code from a small amount further in the tutorial.

I had a version of this working in C. I can compile the C code using g++, and it will run fine and create the triangle as intended. In the course of trying to hunt down my problem, I pasted the C code into a project with the same settings as the one the initially posted code is from, to confirm that I was not fouling up the settings in Eclipse.

Pretty much the only thing that I changed in the version above is using std::cout for printing, and adding some more output, with the major change being rewriting the function that loads the shader code (I was working rather late on that one, and cannot honestly recall exactly why I changed it at the time; I may have been dealing with warnings or errors, or it may have been more whimsical, i.e. "let's see if we can write this in C++ now"). If it would help, I can post the C code that I adapted the above code from. Pretty much all the GLFW and GL code is identical (or was, until I spent a bunch of time tinkering with it), with the exception of loading the shaders and the console printing.

** I should state that I am now not 100% sure that Eclipse was actually compiling the C code with g++ and not gcc. Regardless, I can use g++ to compile the C code via the command line and it works as intended; I just figured that I may actually be wrong about my test in Eclipse. I can see there are some details of Eclipse I will need to read more about.

Okay, apparently it actually is the code that I have been using to bring in the shaders. I need to play around a bit more, but replacing the function that loads the shader source with const GLchar* definitions of the shader source has resolved the issue. I guess it was too soon to assume that my program being able to print the shader source correctly meant the shaders were being loaded right.
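For anyone hitting the same thing: the likely culprit in getShader above is that tStr is a local std::string, so the pointer stored via *contents = (GLchar*)tStr.c_str(); dangles as soon as the function returns. A sketch of a loader that sidesteps this by letting the caller own the string (getShaderSource is a renamed variant for illustration, not the original function):

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Reads a whole file into 'contents'. The std::string owns its buffer,
// so the data stays valid for as long as the caller keeps the string alive.
bool getShaderSource(const std::string& filename, std::string& contents) {
    std::ifstream file(filename.c_str());
    if (!file) {
        return false;
    }
    std::stringstream stream;
    stream << file.rdbuf();
    contents = stream.str();
    return true;
}
```

At the call site, keep the string around and only take .c_str() when handing it to GL, e.g. const GLchar* vert = vertSrc.c_str(); glShaderSource(vertexShader, 1, &vert, nullptr);.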

I am rather new here, and just as a point for future reference, can anyone tell me if this thread was correctly placed? I would consider myself a beginner, but after looking at the content of my post, I decided I was not sure if it was appropriate for the beginner forum.

I am not sure of the social protocols here (I am often a bit slow in regards to learning social protocols), but if I want to ask more questions about this matter, specifically what was wrong with my loading of shaders, would it be acceptable to do so in this thread, or would it be better suited to a different one? I am not sure if it is an issue with the C++ specifically, or with my understanding of something OpenGL related. I guess I feel that I am not really sure where the boundary between a beginner question, and something that should be directed towards the more focused areas is.

Finally, I apologize ahead of time if the placement of this was incorrect.

EDIT: Additionally, if anyone would be kind enough to critique the methods I used (at least the ones that are visible here) in the process of trying to root out the issue, or if there are suggestions that would allow me to expedite the process of resolving unknown issues, I am always looking to expand my proverbial toolbox.

One thing that is always confusing me: why do I often see people accessing vectors using "at", but don't have an actual try/catch around it? The only benefit in return for being slow is that you're sure to have an unhandled exception.


As long as the place where you use at is not run during every game loop (and uploading vertices to the graphics card really shouldn't be), then it does offer additional safety without causing any serious performance issues.

There is no reason to catch it at this point either, and if you have your smart pointers set up correctly you can allow the exception to propagate and be caught somewhere more generic or useful, such as "failed to load model", where it can then be properly handled.

That said, in a for loop I don't see the point of using at, because there are only a few rare cases where it will go out of bounds, and using an iterator is probably faster anyway.
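The trade-off being debated fits in a few lines (a minimal sketch; atThrowsOnBadIndex is just an illustrative name):

```cpp
#include <stdexcept>
#include <vector>

// at() performs a bounds check and throws std::out_of_range on a bad index;
// operator[] does no check, and an out-of-bounds access through it is
// undefined behavior.
bool atThrowsOnBadIndex() {
    std::vector<int> v = {1, 2, 3};
    try {
        (void)v.at(5);   // index past the end: at() throws
    } catch (const std::out_of_range&) {
        return true;     // a "failed to load model"-style handler would go here
    }
    return false;
}
```

Letting that exception propagate up to a catch with real context is the pattern described above; swallowing it at the loop would be pointless.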

Using it in situations where there is actual exception handling and a meaningful way to handle it wouldn't be the use I'd object to. But I have seen code that didn't catch exceptions anywhere and was using .at() everywhere. So I was wondering where this is coming from and if there was some strange C++ authority telling everyone to "always use at(), because it's safe", without actually explaining how and why.

The only argument for uncaught at() I could think of is "you always get an exception instead of obscure bugs", but for a release build, I doubt the user will care whether he is getting obscure bugs, crashes from access violations, or an application killing itself because of an unhandled exception. Debug builds, on the other hand, often have asserts to check the index anyway, and at least VC++ has some nice options to add tons of security checks for things like buffer overruns.


