Phil15

OpenGL ES on Windows 8 using EGL


Hi,

I am trying to make a cross-platform graphics engine: I want to write a library, test it on my Windows 8 desktop, and then use the same library on Android/iPhone (including the OpenGL ES libs, of course). As far as I am aware, Android uses EGL to render images within a window (correct me if I am wrong). A lot of sources keep saying that EGL can be used on Windows, others say you need WGL, and still others say Windows 8 cannot support OpenGL ES at all. I also have an NVIDIA card, and the only OpenGL ES libraries I found are from AMD. Does anyone know the best way to get through this problem?


Well, it's not as hard as you think. First of all, decide which ES version you target. Then you need some kind of wrapper that creates an OpenGL window on every platform; you can do that by writing different code for each platform, or by using a library that provides this.

Then you just use only the OpenGL functions that are available in your ES version, nothing more, nothing less.

OpenGL will work the same way on all platforms if you use the same functions.

So for Windows you don't link an ES version; you can link the normal OpenGL lib provided with Windows...
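
For example, a minimal sketch of such a wrapper header (the macro layout and ClearFrame function here are assumed for illustration, not from any particular library): the engine includes one header per platform and then sticks to the ES 2.0 subset everywhere.

#if defined(_WIN32)
    #include <Windows.h>         // must come before gl.h on Windows
    #include <GL/gl.h>           // desktop GL, linked via opengl32.lib
#elif defined(__ANDROID__)
    #include <GLES2/gl2.h>       // OpenGL ES 2.0
#elif defined(__APPLE__)
    #include <OpenGLES/ES2/gl.h> // OpenGL ES 2.0 on iOS
#endif

// Safe everywhere: these calls exist in both desktop GL and ES 2.0.
void ClearFrame()
{
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}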

Edited by WiredCat



Then you just use only the OpenGL functions that are available in your ES version, nothing more, nothing less. OpenGL will work the same way on all platforms if you use the same functions.

Well, almost... except for vendor-specific bugs and slight differences between versions...

There are not many, but there are a few #ifdefs in our engine when it runs on "normal" GL instead of GLES, mostly because of different enum names, plus the odd function which for some reason has an f for float on the end in GLES but not in GL.

There are also #ifdefs for running GLES on Android versus GLES on iOS... (again, different enum names)

The shader code also had to be slightly modified before the desktop OpenGL driver liked it.
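
To illustrate the enum-name differences, a minimal sketch (the USE_GLES and ENGINE_BGRA macro names are made up): the BGRA texture format, for instance, carries an _EXT suffix on GLES because it comes from an extension, while desktop GL has it in core.

#if defined(USE_GLES)
    #include <GLES2/gl2ext.h>
    #define ENGINE_BGRA GL_BGRA_EXT   // from GL_EXT_texture_format_BGRA8888
#else
    #define ENGINE_BGRA GL_BGRA       // core in desktop GL
#endif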


Well, I know glClearDepthf is the ES one and glClearDepth the normal one (I don't think there's a glClearColorf at all; glClearColor already takes floats in both), but you can #define one so it points to the other.

Actually, ES shader code tends to be more strict, so desktop GL will usually eat it.
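
A sketch of that aliasing trick (the engineClearDepth name is made up): the engine calls one name and the preprocessor maps it to whichever variant the platform provides.

#if defined(USE_GLES)
    #define engineClearDepth(d) glClearDepthf((GLclampf)(d))  // GLES 2.0 float variant
#else
    #define engineClearDepth(d) glClearDepth((GLclampd)(d))   // desktop GL double variant
#endif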


Thank you everyone for your time and replies.

Found this link a couple of days ago when a similar question was asked: http://www.g-truc.net/post-0457.html

This very link I also found whilst researching; it said I cannot run EGL or WGL on Windows 8...
 

 

Well, it's not as hard as you think. First of all, decide which ES version you target... Then you just use only the OpenGL functions that are available in your ES version, nothing more, nothing less... So for Windows you don't link an ES version; you can link the normal OpenGL lib provided with Windows...

I am planning on using OpenGL ES 2.0. How would I know which functions are supported in ES 2.0 and which are not if I am using OpenGL? I mainly used GLUT and GLEW before to run a small graphics engine on a PC, but I can no longer use these on Android, since Android uses EGL; the plan is also to make it compatible with iOS in the future. What is the most efficient approach you can suggest from your experience? What kind of libraries should I use?


List of functions supported by OpenGL ES 2.0:

https://www.khronos.org/opengles/sdk/docs/man/

Additionally, the OpenGL ES 2.0 GLSL shading language reference:

https://www.khronos.org/opengles/sdk/docs/reference_cards/OpenGL-ES-2_0-Reference-card.pdf

Please note that even though an ES 3.0 device is supposed to be backward compatible, that's not always true, and you may have to rewrite shaders so ES 3.0 can read them. A good way to avoid such things is not to name variables "texture" (because it's a reserved word in ES 3.0 GLSL) and to actually use the mediump/highp/lowp precision qualifiers, as in the sketch below. :)
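
A minimal sketch of an ES 2.0 fragment shader following that advice (the uDiffuseTex and vTexCoord names are made up): an explicit default precision, and no variable called "texture".

const char * kFragmentShaderSrc =
    "precision mediump float;\n"        // explicit default float precision
    "uniform sampler2D uDiffuseTex;\n"  // NOT named 'texture'
    "varying vec2 vTexCoord;\n"
    "void main()\n"
    "{\n"
    "    gl_FragColor = texture2D(uDiffuseTex, vTexCoord);\n"
    "}\n";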

Edited by WiredCat



it said I cannot run EGL or WGL on Windows 8 ....

I think it says Windows RT, which only applies if you buy one of those Surface tablets with an ARM processor, or a Windows Phone. If you have a desktop then it most definitely runs OpenGL as well as anything, as long as you have the NVIDIA graphics card driver installed.


You might want to consider using this: http://community.imgtec.com/developers/powervr/tools/pvrvframe/

It's a library from PowerVR, who design the GPUs used by iOS devices and many of those in Android devices. It lets you use GLES on the desktop, and apparently it interfaces pretty well with some of their performance analysis tools, so you can even do some optimisation work on the desktop (although you should always measure on a target device).

I should say that I haven't used it yet, but I'm planning to try it out. I'm not sure how good an idea it would be to use it if you're planning to release on Windows, though; at the very least you'd need to read the license agreement carefully. I'm hoping to use it purely to speed up development of mobile-only projects.


 


it said I cannot run EGL or WGL on Windows 8 ....

Think it says Windows RT, which only applies if you buy one of those Surface tablets with an ARM processor, or a Windows Phone.

I apologize, I did not read that carefully; yes, you are correct.

I was trying to go back to basics and at least use OpenGL in a window (Win32 API). I managed to create a blank window and, supposedly, an OpenGL context. But I cannot understand how you actually use the context; I could not even change the clear color...

#include <Windows.h>
#include <gl\GL.h>
#include <iostream>

#pragma comment(lib , "opengl32.lib")

bool InitMainWindow(HINSTANCE, int);

LRESULT CALLBACK MsgProc(HWND, UINT, WPARAM, LPARAM);

const int WIDTH = 800;
const int HEIGHT= 600;

HWND hwnd = NULL;

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
	if (!InitMainWindow(hInstance, nCmdShow))
	return 1;

	MSG msg = { 0 };
	HDC hdc = GetDC(hwnd);

	while (WM_QUIT != msg.message)
	{
		if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
		{
			TranslateMessage(&msg);
			DispatchMessage(&msg);
		}
	}
	return 0;

}


bool InitMainWindow(HINSTANCE hInstance, int nCmdShow)
{
	WNDCLASSEX wcex;

	wcex.cbSize = sizeof(wcex);
	wcex.style = CS_HREDRAW | CS_VREDRAW;
	wcex.cbClsExtra = 0;
	wcex.cbWndExtra = 0;


	wcex.lpfnWndProc = MsgProc;
	wcex.hInstance = hInstance;
	wcex.hIcon = LoadIcon(NULL, IDI_APPLICATION);

	wcex.hCursor = LoadCursor(NULL, IDC_ARROW);
	wcex.hbrBackground = (HBRUSH)GetStockObject(NULL_BRUSH);
	wcex.lpszClassName = "WIndow";
	wcex.lpszMenuName = NULL;
	wcex.hIconSm = LoadIcon(NULL, IDI_WINLOGO);

	if (!RegisterClassEx(&wcex))
	{
		return false;
	}

	hwnd = CreateWindow("WIndow", "Win32", WS_OVERLAPPEDWINDOW | WS_SYSMENU | WS_CAPTION
		, GetSystemMetrics(SM_CXSCREEN) / 2 - WIDTH / 2,
		GetSystemMetrics(SM_CYSCREEN) / 2 - HEIGHT / 2,
		WIDTH,
		HEIGHT,
		(HWND)NULL,
		(HMENU)NULL,
		hInstance,
		(LPVOID*)NULL
		);

	if (!hwnd)
		return false;


	ShowWindow(hwnd, nCmdShow);
	return true;

}

LRESULT CALLBACK MsgProc(HWND hwnd, UINT msg, WPARAM wparam, LPARAM lparam)
{
	switch (msg)
	{
	case WM_CREATE:
	{
 		PIXELFORMATDESCRIPTOR pfd =
		{
			sizeof(PIXELFORMATDESCRIPTOR),
			1,
			PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,    //Flags
			PFD_TYPE_RGBA,            //The kind of framebuffer. RGBA or palette.
			32,                        //Colordepth of the framebuffer.
			0, 0, 0, 0, 0, 0,
			0,
			0,
			0,
			0, 0, 0, 0,
			24,                        //Number of bits for the depthbuffer
			8,                        //Number of bits for the stencilbuffer
			0,                        //Number of Aux buffers in the framebuffer.
			PFD_MAIN_PLANE,
			0,
			0, 0, 0
		};

		HDC ourWindowHandleToDeviceContext = GetDC(hwnd);

		int  letWindowsChooseThisPixelFormat;
		letWindowsChooseThisPixelFormat = ChoosePixelFormat(ourWindowHandleToDeviceContext, &pfd);
		SetPixelFormat(ourWindowHandleToDeviceContext, letWindowsChooseThisPixelFormat, &pfd);

		HGLRC ourOpenGLRenderingContext = wglCreateContext(ourWindowHandleToDeviceContext);
		bool value = wglMakeCurrent(ourWindowHandleToDeviceContext, ourOpenGLRenderingContext);

		
		while (1)
		{
			glClearColor(0.0f, 1.0f, 0.0f, 0.0f);
		}
		//MessageBoxA(0, (char*)glGetString(GL_VERSION), "OPENGL VERSION", 0);

		wglDeleteContext(ourOpenGLRenderingContext);
		PostQuitMessage(0);
	}
	break;

	case WM_DESTROY: // received destroy msg
		PostQuitMessage(0);
		return 0;
	}

	return DefWindowProc(hwnd, msg, wparam, lparam);


}




 


You need a SwapBuffers(HDC) call to present the image to the screen.

Then just do SwapBuffers over and over, and call wglDeleteContext on exit.

Also, glClearColor only specifies the color used for clearing; you need to call glClear after it to actually clear.

 

Like:

if (PeekMessage(...)) {
  TranslateMessage(...);
  DispatchMessage(...);
}
else {
  glClear(...);

  // draw stuff

  SwapBuffers(hdc);
}
Edited by Erik Rufelt


OK, here is the simplest OpenGL ES with EGL context creation (with memory leaks and probably bugs) for people who might be struggling (like me) with getting started:

 

#include <Windows.h>
#include <iostream>

#include <GLES2\gl2.h>
#include <GLES2\gl2ext.h>
#include <GLES2\gl2platform.h>

#include <EGL\egl.h>
#include <EGL\eglext.h>
#include <EGL\eglplatform.h>

#include <string>
#include <cstdlib> // malloc/free used by chooseConfig

bool InitMainWindow(HINSTANCE, int);

LRESULT CALLBACK MsgProc(HWND, UINT, WPARAM, LPARAM);

const int WIDTH = 800;
const int HEIGHT = 600;

HWND hwnd = NULL;

// Kept globally so the render loop can present with eglSwapBuffers.
EGLDisplay g_display = EGL_NO_DISPLAY;
EGLSurface g_surface = EGL_NO_SURFACE;

void InitEGL(const HWND &WindowHandle);
EGLConfig *  chooseConfig(EGLDisplay display);

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
	if (!InitMainWindow(hInstance, nCmdShow)) // initialize the window
		return 1;

	MSG msg = { 0 };

	while (WM_QUIT != msg.message)
	{
		if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
		{
			TranslateMessage(&msg);
			DispatchMessage(&msg);
		}
		else
		{

			glClearColor(0.0f, 1.0f, 0.0f, 0.0f);
			glClear(GL_COLOR_BUFFER_BIT);
			// Continue your OpenGL ES drawing here

			eglSwapBuffers(g_display, g_surface); // present via EGL, not GDI's SwapBuffers

		}
	}

	// TODO: unbind with eglMakeCurrent, destroy the surface/context, and eglTerminate here to release resources
	return 0;

}


bool InitMainWindow(HINSTANCE hInstance, int nCmdShow)
{
	WNDCLASSEX wcex; // window class wcex 

	wcex.cbSize = sizeof(wcex);
	wcex.style = CS_HREDRAW | CS_VREDRAW;
	wcex.cbClsExtra = 0;
	wcex.cbWndExtra = 0;

	wcex.lpfnWndProc = MsgProc; // pointer to the window procedure
	wcex.hInstance = hInstance;
	wcex.hIcon = LoadIcon(NULL, IDI_APPLICATION);

	wcex.hCursor = LoadCursor(NULL, IDC_ARROW);
	wcex.hbrBackground = (HBRUSH)GetStockObject(NULL_BRUSH);
	wcex.lpszClassName = "WIndow";
	wcex.lpszMenuName = NULL;
	wcex.hIconSm = LoadIcon(NULL, IDI_WINLOGO);

	if (!RegisterClassEx(&wcex))
	{
		return false;
	}

	hwnd = CreateWindow("WIndow", "Win32", WS_OVERLAPPEDWINDOW | WS_SYSMENU | WS_CAPTION
		, GetSystemMetrics(SM_CXSCREEN) / 2 - WIDTH / 2,
		GetSystemMetrics(SM_CYSCREEN) / 2 - HEIGHT / 2,
		WIDTH,
		HEIGHT,
		(HWND)NULL,
		(HMENU)NULL,
		hInstance,
		(LPVOID*)NULL
		);

	if (!hwnd)
		return false;

	ShowWindow(hwnd, nCmdShow);
	return true;

}

LRESULT CALLBACK MsgProc(HWND hwnd, UINT msg, WPARAM wparam, LPARAM lparam)
{
	switch (msg)
	{
	case WM_CREATE:
	{
		// No PIXELFORMATDESCRIPTOR / ChoosePixelFormat needed here:
		// EGL selects a framebuffer config itself in chooseConfig().
		InitEGL(hwnd);
		break;
	}
	case WM_DESTROY: // received destroy msg
		PostQuitMessage(0);
		return 0;
	}

	return DefWindowProc(hwnd, msg, wparam, lparam);
}



void InitEGL(const HWND & Windowhandle)
{
	/* get an EGL display connection and initialize it */
	EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);

	if (eglInitialize(display, NULL, NULL) == EGL_FALSE)
	{
		/* An error */
	}

	EGLConfig * matches = chooseConfig(display);

	/* request an OpenGL ES 2.0 context */
	EGLint attrib_list[] = {
		EGL_CONTEXT_CLIENT_VERSION, 2,
		EGL_NONE
	};

	/* create an EGL rendering context */
	EGLContext context = eglCreateContext(display, matches[0], EGL_NO_CONTEXT, attrib_list);

	/* create a window surface from the native (Win32) window handle */
	EGLSurface surface = eglCreateWindowSurface(display, matches[0], Windowhandle, NULL);
	//EGLint errNum = eglGetError();

	eglMakeCurrent(display, surface, surface, context);

	/* the EGLConfig handles were copied out above, so the array can be freed */
	free(matches);

	/* stash these so the render loop can call eglSwapBuffers */
	g_display = display;
	g_surface = surface;
}

EGLConfig *  chooseConfig(EGLDisplay display)
{

	EGLint attributes[] = {	EGL_RED_SIZE, 5,
							EGL_GREEN_SIZE, 6,
							EGL_BLUE_SIZE, 5,
							EGL_SAMPLES, 4,
							EGL_NONE };

	EGLint numberConfigs;
	EGLConfig* matchingConfigs;

	if (EGL_FALSE == eglChooseConfig(display, attributes, NULL, 0, &numberConfigs))
	{
		/* An error */
	}
	if (numberConfigs == 0) {
		/* An error */
	}

	matchingConfigs = (EGLConfig*)malloc(numberConfigs * sizeof(EGLConfig));
	/* ...and this time actually get the list (notice the 3rd argument is
	* now the buffer to write matching EGLConfig's to)
	*/
	if (EGL_FALSE == eglChooseConfig(display, attributes, matchingConfigs, numberConfigs, &numberConfigs))
	{
		/* An error */
	}


	return matchingConfigs;

}
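
For anyone adapting this: the commented-out eglGetError check in InitEGL can become a tiny helper, something like this sketch (the LogEGLError name is made up; it only uses the <iostream> already included above):

static void LogEGLError(const char * where)
{
	EGLint err = eglGetError();
	if (err != EGL_SUCCESS)
		std::cerr << "EGL error 0x" << std::hex << err << " in " << where << "\n";
}

// usage, e.g. right after creating the surface:
//   LogEGLError("eglCreateWindowSurface");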

Thanks everyone for your help! :)


