
OpenGL What makes OpenGL right handed?

63 posts in this topic

Quote:
Original post by swiftcoder
An identity projection matrix is just an unscaled orthographic projection.
You're right, it is. But, if I'm not mistaken, it's a left-handed orthographic projection, so I'm not sure that szecs's theory holds up.

Share on other sites
It's not the right- or left-handedness that is important; the important thing is that it is defined. If you take the coordinates of a point in a right-handed coordinate system and interpret those same coordinates in a left-handed one, the point will be mirrored about one of the planes. So you can define the handedness.

BTW, the identity projection is same-handed as the identity modelview, AFAIK. (I played around with shadow mapping matrices, and it didn't matter whether everything was applied in the modelview (with the projection left as identity) or the other way around, but I may be remembering wrong.)


Share on other sites
Quote:
Original post by jyk
Nope, I'm thinking about handedness.

fair enough.

Quote:
Original post by jyk
Also, there's no such thing as 'row-major notation', at least as far as I'm aware. Are you talking about row- vs. column-vector notation?

I think there is; the way I understand it, it has to do with whether the i and j in the matrix notation Aij represent a column or a row. That's probably the same thing you mean by vector notation, but I could be wrong.

Share on other sites
Quote:
Original post by jyk
Quote:
Original post by swiftcoder
An identity projection matrix is just an unscaled orthographic projection.
You're right, it is. But, if I'm not mistaken, it's a left-handed orthographic projection, so I'm not sure that szecs's theory holds up.

I think you are mistaken. What part of a matrix, exactly, tells you which handedness it belongs to? The (1, 0, 0) vector could point to the right just as well as it could to the left (or up, down, forward, or backward, for that matter).

Share on other sites
Quote:
 I think there is; the way I understand it, it has to do with whether the i and j in the matrix notation Aij represent a column or a row. That's probably the same thing you mean by vector notation, but I could be wrong.
Matrix majorness and vector notation are two different things. 'Majorness' refers to the i and j thing you mentioned, while vector notation convention deals with whether vectors are represented as column matrices (Nx1) or row matrices (1xN).

I've never heard the phrase 'row-major notation' used, and to me at least, matrix majorness doesn't really seem like a notational issue (rather, it's a programming detail having to do with how things are represented in memory). But, I suppose it's just semantics.

In any case, there are (at least) three different issues at play here - coordinate system handedness, matrix majorness, and vector notation. They're often conflated with one another, but despite the fact that they're interrelated in some ways, they're essentially orthogonal.
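To make the majorness-vs.-notation distinction concrete, here's a small sketch (the function and names are mine, purely for illustration): one and the same mathematical matrix, a translation written in the column-vector convention, stored in both row-major and column-major order. Only the memory layout differs.

```cpp
#include <cassert>

// A 4x4 translation matrix with (tx, ty, tz) in the fourth COLUMN
// (the column-vector convention: v' = M * v). The same matrix can be
// stored two ways in memory:
//   row-major:    element A(i,j) lives at a[i * 4 + j]
//   column-major: element A(i,j) lives at a[i + j * 4]
void storeTranslation(float tx, float ty, float tz,
                      float rowMajor[16], float colMajor[16])
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float value = (i == j) ? 1.0f : 0.0f;       // identity part
            if (j == 3 && i < 3)                        // fourth column
                value = (i == 0) ? tx : (i == 1) ? ty : tz;
            rowMajor[i * 4 + j] = value;
            colMajor[i + j * 4] = value;
        }
}
```

Whether A(0,3) lands at a[3] or a[12] is purely a storage question; it says nothing about whether you multiply with row or column vectors, which is the separate notational issue.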

Share on other sites
Quote:
 I think you are mistaken. What part of a matrix, exactly, tells you which handedness it belongs to? The (1, 0, 0) vector could point to the right just as well as it could to the left (or up, down, forward, or backward, for that matter).
I'm assuming the convention adopted by both OpenGL and Direct3D that +x is to the right and +y is up in view space.

You're right of course that a matrix with no spatial context has no (spatial) handedness, per se. However, the transforms that are applied in both OpenGL and Direct3D do have a context (the graphics pipeline), and that is the context in which we're considering the handedness of the projection transform.

Just as a point of reference, consider the DX math library functions D3DXMatrixOrthoLH() and D3DXMatrixOrthoRH(). Given a 'unit' orthographic projection, the former produces:
1  0  0  0
0  1  0  0
0  0  .5 0
0  0  .5 1
While the latter produces:
1  0  0  0
0  1  0  0
0  0 -.5 0
0  0  .5 1
There are some differences due to how the canonical view volume differs between the two APIs, but the important thing to note is that the sign of element 33 differs depending on the handedness. If we were using the OpenGL convention for the view volume, the above matrices would instead be identity, and identity with element 33 negated, respectively. This is what I'm referring to when I say that the identity matrix represents a left-handed orthographic transform. (Again, in the context of the standard graphics pipeline.)
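For reference, those two matrices can be reproduced from the formulas documented for D3DXMatrixOrthoLH() and D3DXMatrixOrthoRH(), which makes the sign flip easy to verify without D3DX itself. A sketch (the Mat4 struct and function names are my own, not part of D3DX):

```cpp
#include <cassert>

// Row-major m[row][col], matching the row-vector convention in the D3D
// docs; "element 33" in the post above is m[2][2] here.
struct Mat4 { float m[4][4]; };

// What D3DXMatrixOrthoLH is documented to compute.
Mat4 orthoLH(float w, float h, float zn, float zf)
{
    Mat4 r = {};
    r.m[0][0] = 2 / w;
    r.m[1][1] = 2 / h;
    r.m[2][2] = 1 / (zf - zn);
    r.m[3][2] = -zn / (zf - zn);
    r.m[3][3] = 1;
    return r;
}

// What D3DXMatrixOrthoRH is documented to compute: only the z scale
// (and the matching offset) change.
Mat4 orthoRH(float w, float h, float zn, float zf)
{
    Mat4 r = orthoLH(w, h, zn, zf);
    r.m[2][2] = 1 / (zn - zf);
    r.m[3][2] = zn / (zn - zf);
    return r;
}
```

Calling orthoLH(2, 2, -1, 1) and orthoRH(2, 2, -1, 1) reproduces the two matrices above; the only sign difference is in the '33' element.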

Share on other sites
Quote:
Original post by jyk
I'm assuming the convention adopted by both OpenGL and Direct3D that +x is to the right and +y is up in view space.

Yes, and OpenGL's convention is that -z is forward in view space, which makes it right-handed. Direct3D, I assume, has +z as forward; therein lies the difference.

Quote:
Original post by jyk
There are some differences due to how the canonical view volume differs between the two APIs, but the important thing to note is that the sign of element 33 differs depending on the handedness. If we were using the OpenGL convention for the view volume, the above matrices would instead be identity, and identity with element 33 negated, respectively. This is what I'm referring to when I say that the identity matrix represents a left-handed orthographic transform. (Again, in the context of the standard graphics pipeline.)

Something doesn't sound right to me in your explanation. It seems like D3D, being left-handed, would properly generate a negative M33 element to switch from its native left-handed system to a right-handed one, but I would expect OpenGL to give the opposite of the results you mentioned: identity for right-handed, and identity with negative one (-1) in M33 for left-handed.

The whole point of this is that while OpenGL doesn't force you into right-handed coordinates, it takes more effort on your part to use a left-handed system.

Share on other sites
Quote:
 Yes, and OpenGL's convention is that -z is forward in view space, which makes it right handed. Direct3D, I assume has +z as forward, therein lies the difference.
Right, but the question being asked in this thread is, where does that convention come into play? Is it just in the various transform convenience functions that the API provides?

Another way to ask the question might be (using Direct3D as an example), in what way is Direct3D 'more' left-handed than it is right-handed? Can you point out exactly where in the pipeline its 'inherent left-handedness' comes into play?
Quote:
 Something doesn't sound right to me in your explanation. It seems like D3D, being left-handed, would properly generate a negative M33 element to switch from its native left-handed system to a right-handed one, but I would expect OpenGL to give the opposite of the results you mentioned: identity for right-handed, negative M33 for left-handed.
Nope, try out the following code:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1, 1, -1, 1, -1, 1);

float m[16];
glGetFloatv(GL_PROJECTION_MATRIX, m);
for (size_t i = 0; i < 4; ++i) {
    for (size_t j = 0; j < 4; ++j) {
        std::cout << m[i + j * 4] << " ";
    }
    std::cout << std::endl;
}

The output will be (give or take a negative 0):
1 0  0 0
0 1  0 0
0 0 -1 0
0 0  0 1
In other words, aside from the z range issue, the results are the same in both APIs: +33 for left handed, -33 for right handed.

I think it might be useful to forget about the gl and glu convenience functions and about the DX math library, and just consider the most up-to-date version of each API. In the programmable pipeline, there is no model matrix, no view matrix, and no projection matrix; there is only what you do in the vertex, geometry, and fragment programs, which is completely up to you. In this context (that of the programmable pipeline with no help from utility functions or from the fixed-function pipeline), is Direct3D still 'left handed'? Is OpenGL still 'right handed'?

My own suspicion is that the answer is no; that at the very least, with the modern programmable pipeline, the idea of each API having its own 'inherent handedness' is essentially meaningless.

However, I don't claim 100% certainty, and at this point, it looks like if I want to try to prove my theory, I'm going to have to dig into the respective pipelines a bit and provide some more concrete examples :)

Share on other sites
You know what? I don't know anymore [smile]. The man page for glOrtho seems to hint that you should pass near and far as negatives, or, in your example, near as 1 and far as -1, which would result in the identity matrix. I tested changing the values around in my GUI and saw no difference. Makes sense, since -z is forward in OpenGL.

Anyway, there is a way to settle this: implement what szecs suggested. That is, set the projection to identity, make no mirror transformations or negative scales in modelview, draw the axes at the origin with 0.1 lengths, and see in which direction z points when y is up and x is to the right. That's it.

Share on other sites
Quote:
 The man page for glOrtho seems to hint that you should pass near and far as negatives, or, in your example, near as 1 and far as -1, which would result in the identity matrix
Hm, I don't read it that way. It says, 'These values are negative if the plane is to be behind the viewer', which I take to mean that negative values (e.g. -1) correspond to planes that are behind the viewer. Therefore, a near value of -1 and a far value of 1 would make sense.

Also, gluOrtho2D() sets 'near' to -1 and 'far' to 1, so I think it's pretty safe to say that negative distances are intended to be behind the viewer and positive distances in front.
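For what it's worth, the matrix glOrtho is specified to construct can be written out directly from the man-page formula, which makes it easy to check both parameter orders without a GL context. A sketch (the function name and layout are mine; elements are stored in OpenGL's column-major order, so element (i,j) sits at m[i + j*4]):

```cpp
#include <cassert>

// The matrix the glOrtho man page specifies:
//   diag(2/(r-l), 2/(t-b), -2/(f-n), 1)
// with translation (-(r+l)/(r-l), -(t+b)/(t-b), -(f+n)/(f-n)).
void orthoGL(double l, double r, double b, double t,
             double n, double f, double m[16])
{
    for (int k = 0; k < 16; ++k) m[k] = 0;
    m[0]  = 2 / (r - l);
    m[5]  = 2 / (t - b);
    m[10] = -2 / (f - n);          // the sign-carrying z scale
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] = 1;
}
```

Plugging in (near, far) = (-1, 1) gives diag(1, 1, -1, 1), the matrix printed by the test code above, while swapping them to (1, -1) gives the identity, which is the case discussed just before this.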
Quote:
 Anyway, there is a way to settle this: implement what szecs suggested. That is, set the projection to identity, make no mirror transformations or negative scales in modelview, draw the axes at the origin with 0.1 lengths, and see in which direction z points when y is up and x is to the right. That's it.
This isn't quite the same thing, but I put together the following little test program:

#include "SDL.h"
#include "SDL_opengl.h"

int main(int, char*[])
{
    SDL_Init(SDL_INIT_EVERYTHING);
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);
    SDL_SetVideoMode(400, 400, 0, SDL_OPENGL);

    glEnable(GL_DEPTH_TEST);

    bool quit = false;
    while (!quit) {
        SDL_Event event;
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT) {
                quit = true;
            }
        }

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glColor3f(1, 0, 0);
        glBegin(GL_TRIANGLES);
        glVertex3f(-.75, .25, .5);   glVertex3f(.75, 0, .5);  glVertex3f(-.75, -.25, .5);
        glColor3f(0, 1, 0);
        glVertex3f(-.25, -.75, .75); glVertex3f(0, .75, .75); glVertex3f(.25, -.75, .75);
        glEnd();

        SDL_GL_SwapBuffers();
    }

    SDL_Quit();
    return 0;
}

All transforms are left at identity. A red triangle is drawn at z = .5, and a green triangle is drawn at z = .75. Here's the output:

Note that the red triangle is in front of the green one, which would seem to suggest that z is increasing away from the viewer. In other words, the coordinate system in this example is left-handed.

So that's the empirical side (assuming the example isn't flawed in some way); now let's go ahead and look at the theoretical.

In OpenGL, normalized device coordinates are left-handed. Obviously just because it says so on the internet somewhere doesn't make it true, but for what it's worth, it's stated here that the NDCS is left-handed.

Now, if all transforms are left at identity, after the projection transform, all vertices will have a w value of 1; therefore, the division by w can be disregarded.

What we are left with is that all vertices are passed unchanged (except for clipping) to NDC space, which is left-handed. So if anything, it seems to me that if OpenGL has an 'inherent handedness', it would be left-handed, not right-handed.
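The last two paragraphs can be traced numerically in a few lines (a sketch with made-up names, not real GL): with every transform left at identity, w stays 1, a vertex's window depth is just (z + 1) / 2 under the default depth range, and the default GL_LESS test means smaller window depth wins. Plugging in the z values from the SDL example shows the red triangle (z = .5) beating the green one (z = .75).

```cpp
#include <cassert>

// Trace a single vertex through the fixed steps described above, with
// all transforms left at identity.
double windowDepth(double objZ)
{
    double clipZ = objZ;            // identity "projection"; w stays 1
    double ndcZ  = clipZ / 1.0;     // perspective divide is a no-op
    return (ndcZ + 1.0) / 2.0;      // default glDepthRange(0, 1)
}

// Default depth test is GL_LESS: the fragment with the smaller window
// depth wins, i.e. smaller z is nearer the viewer.
bool firstWinsDepthTest(double zA, double zB)
{
    return windowDepth(zA) < windowDepth(zB);
}
```

Here firstWinsDepthTest(0.5, 0.75) is true: z increases away from the viewer, i.e. the identity setup behaves left-handed.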

So why is OpenGL said to be right-handed? The only reasons I can think of right now are that front faces are CCW by default in OpenGL (it has to be one or the other, after all), and that the convenience functions for which handedness matters (gluLookAt, glFrustum, etc.) build right-handed transforms.

As for Direct3D, the only reason I can think of for it being considered left-handed is that front faces are CW by default. Again though, if there's going to be a default, it has to be one or the other.

So for now at least, I'm going to stick with my assertion that aside from minor details such as default winding order (which are more or less incidental), neither API is any more right- or left-handed than the other. In fact, if one were forced to assign a 'default' handedness to each, it seems that they should both be considered left-handed, since if all transforms are left at identity, the geometry ends up in a left-handed coordinate system.

That said, I'm ready to be proven wrong (maybe one of our resident graphics gurus will step in and put this whole thing to rest).

Share on other sites
Yes, you're right. I checked this myself too; I guess I got too hung up on the assertion that -z is forward in view space, which is even in the red book. Anyway, this knowledge will come in handy for me.

gluPerspective definitely sets up a right-handed space; I wonder if that is true of glFrustum as well.

Share on other sites
Okay, I am a bit confused.
What CS do Blender or 3ds Max use? I think right-handed. So what happens if you load a small model into the test program?
Will the model look mirrored or not?
I could try it out myself, but I don't have time for it now unfortunately.

Share on other sites
Quote:
Original post by szecs
What CS do Blender or 3ds Max use? I think right-handed. So what happens if you load a small model into the test program? Will the model look mirrored or not? I could try it out myself, but I don't have time for it now, unfortunately.
Blender uses LHS, 3DS Max uses RHS (AFAIK).

Every application in fact has its own set-up of projection (for the view-local z-to-depth mapping), vertex winding rule, and depth test. Your models need to match this set-up, or else some hiccup will happen.

Viewing a mirrored mesh is one possible outcome, yes. Assuming that the model is imported at the center of the world, the camera looks at the center from a distance, x and y are used for sideward and up (respectively) in both the viewer and the content-creation package, and the scaling is suitable, then you'll see a mirrored model when the two applications use different handedness.

All the discussion now is about whether the API itself introduces a handedness. But since all aspects that play a role can be re-programmed, or, with OpenGL's newest specifications, have to be programmed, the consensus seems to me to be that there is no real handedness in OpenGL. Instead, the handedness comes from the way an application uses the API. So saying OpenGL uses RHS is substantiated in the history: the utility functions like glFrustum and glOrtho, as well as the default winding rules and depth test, were widely used by applications, and those functions and defaults set up an RHS.

Share on other sites
Quote:
 All the discussion now is about whether the API itself introduces a handedness. But since all aspects that play a role can be re-programmed, or, with OpenGL's newest specifications, have to be programmed, the consensus seems to me to be that there is no real handedness in OpenGL. Instead, the handedness comes from the way an application uses the API. So saying OpenGL uses RHS is substantiated in the history: the utility functions like glFrustum and glOrtho, as well as the default winding rules and depth test, were widely used by applications, and those functions and defaults set up an RHS.
Haha, nicely said. You just summed up all of my previous rambling, disorganized posts in one nicely worded paragraph :)

Share on other sites
Quote:
 All the discussion now is about whether the API itself introduces a handedness. But since all aspects that play a role can be re-programmed, or, with OpenGL's newest specifications, have to be programmed, the consensus seems to me to be that there is no real handedness in OpenGL. Instead, the handedness comes from the way an application uses the API. So saying OpenGL uses RHS is substantiated in the history: the utility functions like glFrustum and glOrtho, as well as the default winding rules and depth test, were widely used by applications, and those functions and defaults set up an RHS.

Wow. There's been some really good discussion on this topic here.

I think that jyk's mention of face culling gives us a clue. I believe that clip space (i.e., post-projection space, before the homogeneous divide that transforms us to NDC space) is hard-coded in each API to be a specific handedness. Direct3D uses a left-handed clip space and OpenGL a right-handed clip space.

It's in this space that face culling and frustum clipping occur. Frustum clipping is something we have no control over, so possibly at this point OpenGL expects a right-handed system (i.e., looking down -z) in order to clip properly. Is this right?

Share on other sites
Quote:
 Frustum clipping is something we have no control over, so possibly at this point OpenGL expects a right handed system (i.e. looking down -z) to properly clip. Is this right?
No, I don't believe that is right.
Quote:
 Direct3D uses a left handed clip space and OpenGL a right handed clip space.
What leads you to believe that clip space is right-handed in OpenGL?

Share on other sites
Even in the most recent OpenGL versions there is a default setup: the identity matrices and the depth range/test, for example. Doesn't that make the handedness defined?
(Okay, I am perfectly aware that this is pure philosophy, not CS, but anyway.)


Share on other sites
Quote:
Original post by jyk
What leads you to believe that clip space is right-handed in OpenGL?

The internet? :)

For DirectX (using a left handed system), if I look at the vertex shader output in PIX, all visible vertices have a positive z value.

Share on other sites
Quote:
 Even in the most recent OpenGL versions there is a default setup: the identity matrices and the depth range/test, for example. Doesn't that make the handedness defined?
I would say so, and I would say that this 'default' setup would be left-handed, not right-handed. (This is because if no non-identity transforms are applied, vertices will be passed unchanged to normalized device space, which is left-handed.)

Share on other sites
Quote:
Original post by GaryNas
Quote:
Original post by jyk
What leads you to believe that clip space is right-handed in OpenGL?

The internet? :)
Can you provide a link to a reference?

Share on other sites
Quote:
 This is the closest I found to 'official' docs on the matter: http://www.opengl.org/resources/faq/technical/transformations.htm#tran0150.
I realize that comes from the OpenGL website, but that FAQ entry seems a bit suspect to me. Even with the fixed-function pipeline, you shouldn't have to apply an 'extra' scaling in order to get a left-handed system; rather, it should be as simple as creating your own (left-handed) view and projection transforms and uploading them via glLoadMatrix() and/or glMultMatrix().

Share on other sites
Quote:
Original post by swiftcoder
This is the closest I found to 'official' docs on the matter: http://www.opengl.org/resources/faq/technical/transformations.htm#tran0150.
Ahhahahaha (sorry). This won't end soon...

Share on other sites
Quote:
Original post by jyk
Quote:
 This is the closest I found to 'official' docs on the matter: http://www.opengl.org/resources/faq/technical/transformations.htm#tran0150.
I realize that comes from the OpenGL website, but that FAQ entry seems a bit suspect to me.
Thus the quote marks on 'official'. I have no idea who wrote or maintained those FAQ pages, but they haven't been updated since the dawn of time, and at least one answer was wrong to start with.

Share on other sites
Quote:
Original post by jyk
Can you provide a link to a reference?

This is the best I can do:

http://www.gamerendering.com/2008/10/05/clip-space/

:(
