# What makes OpenGL right-handed?


## Recommended Posts

Quote:
 Original post by swiftcoder: An identity projection matrix is just an unscaled orthographic projection.
You're right, it is. But, if I'm not mistaken, it's a left-handed orthographic projection, so I'm not sure that szecs's theory holds up.

##### Share on other sites
It's not the right- or left-handedness that is important; the important thing is that it is defined. If you take the same coordinate triple and interpret it first in a right-handed coordinate system and then in a left-handed one, the two points it represents are mirror images of each other across one of the coordinate planes. So you can define the handedness.

BTW, the identity projection is same-handed as the identity modelview, AFAIK. (I played around with shadow mapping matrices, and it didn't matter whether everything was applied in the modelview with the projection left at identity, or the opposite, but I may be remembering wrong.)

##### Share on other sites
Quote:
 Original post by jyk: Nope, I'm thinking about handedness.

fair enough.

Quote:
 Original post by jyk: Also, there's no such thing as 'row-major notation', at least as far as I'm aware. Are you talking about row- vs. column-vector notation?

I think there is; the way I understand it, it has to do with whether the i and j in the matrix notation Aij index a row or a column. It's probably the same thing you mean by vector notation, but I could be wrong.

##### Share on other sites
Quote:
Original post by jyk
Quote:
 Original post by swiftcoder: An identity projection matrix is just an unscaled orthographic projection.
You're right, it is. But, if I'm not mistaken, it's a left-handed orthographic projection, so I'm not sure that szecs's theory holds up.

I think you are mistaken. What part of a matrix, exactly, tells you which handedness it belongs to? The (1, 0, 0) vector could point to the right just as well as it could point to the left (or up, down, forward, or backward, for that matter).

##### Share on other sites
Quote:
 I think there is; the way I understand it, it has to do with whether the i and j in the matrix notation Aij index a row or a column. It's probably the same thing you mean by vector notation, but I could be wrong.
Matrix majorness and vector notation are two different things. 'Majorness' refers to the i and j thing you mentioned, while vector notation convention deals with whether vectors are represented as column matrices (Nx1) or row matrices (1xN).

I've never heard the phrase 'row-major notation' used, and to me at least, matrix majorness doesn't really seem like a notational issue (rather, it's a programming detail having to do with how things are represented in memory). But, I suppose it's just semantics.

In any case, there are (at least) three different issues at play here - coordinate system handedness, matrix majorness, and vector notation. They're often conflated with one another, but despite the fact that they're interrelated in some ways, they're essentially orthogonal.

##### Share on other sites
Quote:
 I think you are mistaken. What part of a matrix, exactly, tells you which handedness it belongs to? The (1, 0, 0) vector could point to the right just as well as it could point to the left (or up, down, forward, or backward, for that matter).
I'm assuming the convention adopted by both OpenGL and Direct3D that +x is to the right and +y is up in view space.

You're right of course that a matrix with no spatial context has no (spatial) handedness, per se. However, the transforms that are applied in both OpenGL and Direct3D do have a context (the graphics pipeline), and that is the context in which we're considering the handedness of the projection transform.

Just as a point of reference, consider the DX math library functions D3DXMatrixOrthoLH() and D3DXMatrixOrthoRH(). Given a 'unit' orthographic projection, the former produces:
```
1  0  0  0
0  1  0  0
0  0 .5  0
0  0 .5  1
```
While the latter produces:
```
1  0   0  0
0  1   0  0
0  0 -.5  0
0  0  .5  1
```
There are some differences due to how the canonical view volume differs between the two APIs, but the important thing to note is that the sign of element 33 differs depending on the handedness. If we were using the OpenGL convention for the view volume, the above matrices would instead be identity, and identity with element 33 negated, respectively. This is what I'm referring to when I say that the identity matrix represents a left-handed orthographic transform. (Again, in the context of the standard graphics pipeline.)
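
To make the sign flip concrete, here is a small sketch (not the DX library itself; it just follows the documented D3DXMatrixOrthoLH/RH formulas) that builds and prints both variants side by side:

```cpp
#include <cstdio>

// Row-major ortho projections following the documented D3DXMatrixOrthoLH/RH
// formulas: w/h are the view volume width/height, zn/zf the near/far planes.
static void ortho(float m[4][4], float w, float h, float zn, float zf, bool rightHanded)
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            m[i][j] = (i == j) ? 1.0f : 0.0f;
    m[0][0] = 2.0f / w;
    m[1][1] = 2.0f / h;
    // Only the sign of element 33 differs between the two variants:
    m[2][2] = rightHanded ? 1.0f / (zn - zf) : 1.0f / (zf - zn);
    m[3][2] = zn / (zn - zf); // the z translation is the same in both
}

int main()
{
    float lh[4][4], rh[4][4];
    ortho(lh, 2, 2, -1, 1, false); // 'unit' volume -> element 33 is  .5
    ortho(rh, 2, 2, -1, 1, true);  // 'unit' volume -> element 33 is -.5
    for (int i = 0; i < 4; ++i)
        std::printf("%5.2f %5.2f %5.2f %5.2f    %5.2f %5.2f %5.2f %5.2f\n",
                    lh[i][0], lh[i][1], lh[i][2], lh[i][3],
                    rh[i][0], rh[i][1], rh[i][2], rh[i][3]);
    return 0;
}
```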

##### Share on other sites
Quote:
 Original post by jyk: I'm assuming the convention adopted by both OpenGL and Direct3D that +x is to the right and +y is up in view space.

Yes, and OpenGL's convention is that -z is forward in view space, which makes it right-handed. Direct3D, I assume, has +z as forward; therein lies the difference.

Quote:
 Original post by jyk: There are some differences due to how the canonical view volume differs between the two APIs, but the important thing to note is that the sign of element 33 differs depending on the handedness. If we were using the OpenGL convention for the view volume, the above matrices would instead be identity, and identity with element 33 negated, respectively. This is what I'm referring to when I say that the identity matrix represents a left-handed orthographic transform. (Again, in the context of the standard graphics pipeline.)

Something doesn't sound right to me in your explanation. It seems like D3D, being left-handed, would properly generate a negative M33 element to switch from its native left-handed system to a right-handed one, but I would expect OpenGL to give the opposite of the results you mentioned: identity for right-handed, and identity with negative one (-1) in M33 for left-handed.

The whole point of this is that while OpenGL doesn't force you into right-handed coordinates, it takes more effort on your part to use a left-handed system.

##### Share on other sites
Quote:
 Yes, and OpenGL's convention is that -z is forward in view space, which makes it right-handed. Direct3D, I assume, has +z as forward; therein lies the difference.
Right, but the question being asked in this thread is, where does that convention come into play? Is it just in the various transform convenience functions that the API provides?

Another way to ask the question might be (using Direct3D as an example), in what way is Direct3D 'more' left-handed than it is right-handed? Can you point out exactly where in the pipeline its 'inherent left-handedness' comes into play?
Quote:
 Something doesn't sound right to me in your explanation. It seems like D3D, being left-handed, would properly generate a negative M33 element to switch from its native left-handed system to a right-handed one, but I would expect OpenGL to give the opposite of the results you mentioned: identity for right-handed, negative M33 for left-handed.
Nope, try out the following code:

```cpp
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1, 1, -1, 1, -1, 1);

float m[16];
glGetFloatv(GL_PROJECTION_MATRIX, m);
for (size_t i = 0; i < 4; ++i) {
    for (size_t j = 0; j < 4; ++j) {
        std::cout << m[i + j * 4] << " ";
    }
    std::cout << std::endl;
}
```

The output will be (give or take a negative 0):
```
1 0  0 0
0 1  0 0
0 0 -1 0
0 0  0 1
```
In other words, aside from the z-range issue, the results are the same in both APIs: a positive element 33 for left-handed, a negative one for right-handed.
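
Incidentally, the glOrtho man page shows where that sign comes from; the z row of the matrix it builds depends only on near and far:

```
m33 = -2 / (far - near)
tz  = -(far + near) / (far - near)

near = -1, far =  1  =>  m33 = -1, tz = 0   (the right-handed matrix above)
near =  1, far = -1  =>  m33 = +1, tz = 0   (identity, i.e. left-handed)
```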

I think it might be useful to forget about the gl and glu convenience functions and about the DX math library, and just consider the most up-to-date version of each API. In the programmable pipeline, there is no model matrix, no view matrix, and no projection matrix; there is only what you do in the vertex, geometry, and fragment programs, which is completely up to you. In this context (that of the programmable pipeline with no help from utility functions or from the fixed-function pipeline), is Direct3D still 'left handed'? Is OpenGL still 'right handed'?

My own suspicion is that the answer is no; that at the very least, with the modern programmable pipeline, the idea of each API having its own 'inherent handedness' is essentially meaningless.

However, I don't claim 100% certainty, and at this point, it looks like if I want to try to prove my theory, I'm going to have to dig into the respective pipelines a bit and provide some more concrete examples :)

##### Share on other sites
You know what? I don't know anymore [smile]. The man page for glOrtho seems to hint that you should pass near and far negated, or in your example near as 1 and far as -1, which would result in the identity matrix. I tested changing the values around in my GUI and saw no difference. Makes sense, since -z is forward in OpenGL.

Anyway, there is a way to settle this: implement what szecs suggested. That is, set the projection to identity, make no mirroring transformations or negative scales in the modelview, draw the axes at the origin with length 0.1, and see in which direction z points when y is up and x is right. That's it.

##### Share on other sites
Quote:
 the man page for glOrtho seems to hint that you should pass near and far negated, or in your example near as 1 and far as -1, which would result in the identity matrix
Hm, I don't read it that way. It says, 'These values are negative if the plane is to be behind the viewer', which I take to mean that negative values (e.g. -1) correspond to planes that are behind the viewer. Therefore, a near value of -1 and a far value of 1 would make sense.

Also, gluOrtho2D() sets 'near' to -1 and 'far' to 1, so I think it's pretty safe to say that negative distances are intended to be behind the viewer and positive distances in front.
Quote:
 Anyway, there is a way to settle this: implement what szecs suggested. That is, set the projection to identity, make no mirroring transformations or negative scales in the modelview, draw the axes at the origin with length 0.1, and see in which direction z points when y is up and x is right. That's it.
This isn't quite the same thing, but I put together the following little test program:

#include "SDL.h"#include "SDL_opengl.h"int main(int, char*[]){    SDL_Init(SDL_INIT_EVERYTHING);    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);    SDL_SetVideoMode(400, 400, 0, SDL_OPENGL);    glEnable(GL_DEPTH_TEST);    bool quit = false;    while (!quit) {        SDL_Event event;        while (SDL_PollEvent(&event)) {            if (event.type == SDL_QUIT) {                quit = true;            }        }        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);        glColor3f(1, 0, 0);        glBegin(GL_TRIANGLES);        glVertex3f(-.75, .25, .5);   glVertex3f(.75, 0, .5);  glVertex3f(-.75, -.25, .5);        glColor3f(0, 1, 0);        glVertex3f(-.25, -.75, .75); glVertex3f(0, .75, .75); glVertex3f(.25, -.75, .75);        glEnd();        SDL_GL_SwapBuffers();    }    SDL_Quit();    return 0;}

All transforms are left at identity. A red triangle is drawn at z = .5, and a green triangle is drawn at z = .75. Here's the output:

Note that the red triangle is in front of the green one, which would seem to suggest that z is increasing away from the viewer. In other words, the coordinate system in this example is left-handed.

So that's the empirical side (assuming the example isn't flawed in some way); now let's look at the theoretical.

In OpenGL, normalized device coordinates are left-handed. Obviously just because it says so on the internet somewhere doesn't make it true, but for what it's worth, it's stated here that the NDCS is left-handed.

Now, if all transforms are left at identity, after the projection transform, all vertices will have a w value of 1; therefore, the division by w can be disregarded.

What we are left with is that all vertices are passed unchanged (except for clipping) to NDC space, which is left-handed. So if anything, it seems to me that if OpenGL has an 'inherent handedness', it would be left-handed, not right-handed.

So why is OpenGL said to be right-handed? The only reasons I can think of right now are that front faces are CCW by default in OpenGL (it has to be one or the other, after all), and that the convenience functions for which handedness matters (gluLookAt, glFrustum, etc.) build right-handed transforms.

As for Direct3D, the only reason I can think of for it being considered left-handed is that front faces are CW by default. Again though, if there's going to be a default, it has to be one or the other.

So for now at least, I'm going to stick with my assertion that aside from minor details such as default winding order (which are more or less incidental), neither API is any more right- or left-handed than the other. In fact, if one were forced to assign a 'default' handedness to each, it seems that they should both be considered left-handed, since if all transforms are left at identity, the geometry ends up in a left-handed coordinate system.

That said, I'm ready to be proven wrong (maybe one of our resident graphics gurus will step in and put this whole thing to rest).

##### Share on other sites
Yes, you're right. I checked this myself too; I guess I got too hung up on the assertion that -z is forward in view space, which is even in the red book. Anyway, this knowledge will come in handy for me.

gluPerspective definitely sets up a right-handed space; I wonder if that is true of glFrustum as well.
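
A quick way to check would be something along the lines of the earlier snippet (a sketch, untested):

```cpp
// Print the matrix glFrustum builds
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-1, 1, -1, 1, 1, 10);

float m[16];
glGetFloatv(GL_PROJECTION_MATRIX, m);
for (size_t i = 0; i < 4; ++i) {
    for (size_t j = 0; j < 4; ++j) {
        std::cout << m[i + j * 4] << " ";
    }
    std::cout << std::endl;
}
```

Per the man page, the bottom row should come out as (0, 0, -1, 0), i.e. clip w = -z in eye space, so only points with negative eye-space z end up in front of the viewer. That would make glFrustum right-handed as well.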

##### Share on other sites
Okay, I am a bit confused.
What CS do Blender or 3ds Max have? I think right handed. So what happens if you load a small model into the test program?
Will the model look mirrored or not?
I could try it out myself, but I don't have time for it now unfortunately.

##### Share on other sites
Quote:
 Original post by szecs: What CS do Blender or 3ds Max have? I think right handed. So what happens if you load a small model into the test program? Will the model look mirrored or not? I could try it out myself, but I don't have time for it now unfortunately.
Blender uses LHS, 3DS Max uses RHS (AFAIK).

Every application in fact has its own set-up of the projection (for the mapping of view-local z to depth), the vertex winding rule, and the depth test. Your models need to match this set-up, or else some hiccup will happen.

Viewing a mirrored mesh is one possible outcome, yes. Assuming that the model is imported at the center of the world, the camera looks at the center from a distance, both the viewer and the content creation package use x and y for sideways and up respectively, and the scaling is suitable, then you'll see a mirrored model when the two applications use different handedness.

All the discussion now is about whether the API itself introduces a handedness. But since all aspects that play a role can be re-programmed or, with OpenGL's newest specifications, have to be programmed, the consensus seems to me to be that there is no real handedness in OpenGL. Instead, the handedness comes from the way an application uses the API. Saying that OpenGL uses a RHS is rooted in history: utility functions like glFrustum and glOrtho, as well as the default winding rule and depth test, were widely used by applications, and those functions and defaults set things up for a RHS.

##### Share on other sites
Quote:
 All the discussion now is about whether the API itself introduces a handedness. But since all aspects that play a role can be re-programmed or, with OpenGL's newest specifications, have to be programmed, the consensus seems to me to be that there is no real handedness in OpenGL. Instead, the handedness comes from the way an application uses the API. Saying that OpenGL uses a RHS is rooted in history: utility functions like glFrustum and glOrtho, as well as the default winding rule and depth test, were widely used by applications, and those functions and defaults set things up for a RHS.
Haha, nicely said. You just summed up all of my previous rambling, disorganized posts in one nicely worded paragraph :)

##### Share on other sites
Quote:
 All the discussion now is about whether the API itself introduces a handedness. But since all aspects that play a role can be re-programmed or, with OpenGL's newest specifications, have to be programmed, the consensus seems to me to be that there is no real handedness in OpenGL. Instead, the handedness comes from the way an application uses the API. Saying that OpenGL uses a RHS is rooted in history: utility functions like glFrustum and glOrtho, as well as the default winding rule and depth test, were widely used by applications, and those functions and defaults set things up for a RHS.

Wow. There's been some really good discussion on this topic here.

I think that jyk's mention of face culling gives us a clue. I believe that clip space (i.e. post-projection space, before the homogeneous divide that takes us to NDC space) is hard-coded in each API to be a specific handedness. Direct3D uses a left-handed clip space and OpenGL a right-handed clip space.

It's in this space that face culling and frustum clipping occur. Frustum clipping is something we have no control over, so possibly at this point OpenGL expects a right-handed system (i.e. looking down -z) to clip properly. Is this right?

##### Share on other sites
Quote:
 Frustum clipping is something we have no control over, so possibly at this point OpenGL expects a right-handed system (i.e. looking down -z) to clip properly. Is this right?
No, I don't believe that is right.
Quote:
 Direct3D uses a left handed clip space and OpenGL a right handed clip space.
What leads you to believe that clip space is right-handed in OpenGL?

##### Share on other sites
Even in the most recent OpenGL versions there is a default setup: the identity matrices and the depth range/test, for example. Doesn't that make the handedness defined?
(Okay, I am perfectly aware that this is pure philosophy, not CS, but anyway.)

##### Share on other sites
Quote:
 Original post by jyk: What leads you to believe that clip space is right-handed in OpenGL?

The internet? :)

For DirectX (using a left handed system), if I look at the vertex shader output in PIX, all visible vertices have a positive z value.

##### Share on other sites
Quote:
 Even in the most recent OpenGL versions there is a default setup: the identity matrices and the depth range/test, for example. Doesn't that make the handedness defined?
I would say so, and I would say that this 'default' setup would be left-handed, not right-handed. (This is because if no non-identity transforms are applied, vertices will be passed unchanged to normalized device space, which is left-handed.)

##### Share on other sites
Quote:
Original post by GaryNas
Quote:
 Original post by jyk: What leads you to believe that clip space is right-handed in OpenGL?

The internet? :)
Can you provide a link to a reference?

##### Share on other sites
Quote:
 This is the closest I found to 'official' docs on the matter: http://www.opengl.org/resources/faq/technical/transformations.htm#tran0150.
I realize that comes from the OpenGL website, but that FAQ entry seems a bit suspect to me. Even with the fixed-function pipeline, you shouldn't have to apply an 'extra' scaling in order to get a left-handed system; rather, it should be as simple as creating your own (left-handed) view and projection transforms and uploading them via glLoadMatrix() and/or glMultMatrix().
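
For instance, something along these lines (a sketch, untested) should give a left-handed orthographic setup with no extra scaling involved:

```cpp
// Column-major, left-handed ortho: x and y pass through unchanged, and z in
// [near, far] maps to [-1, 1] with +z pointing away from the viewer.
const float n = -1.0f, f = 1.0f;
const float proj[16] = {
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 2.0f / (f - n), 0,
    0, 0, -(f + n) / (f - n), 1,
};
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(proj);
```

(With near = -1 and far = 1 this is just the identity, which lines up with the earlier observation that the identity projection is left-handed.)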

##### Share on other sites
Quote:
 Original post by swiftcoder: This is the closest I found to 'official' docs on the matter: http://www.opengl.org/resources/faq/technical/transformations.htm#tran0150.
Ahhahahaha (sorry). This won't end soon...

##### Share on other sites
Quote:
Original post by jyk
Quote:
 This is the closest I found to 'official' docs on the matter: http://www.opengl.org/resources/faq/technical/transformations.htm#tran0150.
I realize that comes from the OpenGL website, but that FAQ entry seems a bit suspect to me.
Thus the quote marks on 'official'. I have no idea who wrote or maintained those FAQ pages, but they haven't been updated since the dawn of time, and at least one answer was wrong to start with.

##### Share on other sites
Quote:
 Original post by jyk: Can you provide a link to a reference?

This is the best I can do:

http://www.gamerendering.com/2008/10/05/clip-space/

:(

### Similar Content

• By EddieK
Hello. I'm trying to make an Android game and I have come across a problem. I want to draw different map layers at different Z depths, so that some of the tiles are drawn above the player while others are drawn under him. But there's an issue: the transparent (alpha) pixels of the tiles drawn above the player show up as black. This is the code I'm using:
```java
int setup() {
    GLES20.glEnable(GLES20.GL_DEPTH_TEST);
    GLES20.glEnable(GL10.GL_ALPHA_TEST);
    GLES20.glEnable(GLES20.GL_TEXTURE_2D);
}

int render() {
    GLES20.glClearColor(0, 0, 0, 0);
    GLES20.glClear(GLES20.GL_ALPHA_BITS);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT);
    GLES20.glBlendFunc(GLES20.GL_ONE, GL10.GL_ONE_MINUS_SRC_ALPHA);
    // do the binding of textures and drawing vertices
}
```
My vertex shader:
```glsl
uniform mat4 MVPMatrix; // model-view-projection matrix
uniform mat4 projectionMatrix;

attribute vec4 position;
attribute vec2 textureCoords;
attribute vec4 color;
attribute vec3 normal;

varying vec4 outColor;
varying vec2 outTexCoords;
varying vec3 outNormal;

void main()
{
    outNormal = normal;
    outTexCoords = textureCoords;
    outColor = color;
    gl_Position = MVPMatrix * position;
}
```
My fragment shader:
```glsl
precision highp float;

uniform sampler2D texture;

varying vec4 outColor;
varying vec2 outTexCoords;
varying vec3 outNormal;

void main()
{
    vec4 color = texture2D(texture, outTexCoords) * outColor;
    gl_FragColor = vec4(color.r, color.g, color.b, color.a);
}
```
I have attached a picture of how it looks. You can see the black squares near the tree. These squares should be transparent, as they are in the PNG image:

It's strange that in this picture, instead of alpha or just black, it displays the grass texture beneath the player and the tree:

Any ideas on how to fix this?

• This article uses material originally posted on Diligent Graphics web site.
Introduction
Graphics APIs have come a long way from a small set of basic commands allowing limited control of the configurable stages of early 3D accelerators to very low-level programming interfaces exposing almost every aspect of the underlying graphics hardware. The next-generation APIs, Direct3D12 by Microsoft and Vulkan by Khronos, are relatively new and have only started getting widespread adoption and support from hardware vendors, while Direct3D11 and OpenGL are still considered the industry standard. The new APIs can provide substantial performance and functional improvements, but may not be supported by older hardware. An application targeting a wide range of platforms needs to support Direct3D11 and OpenGL. The new APIs will not give any advantage when used with old paradigms. It is entirely possible to add Direct3D12 support to an existing renderer by implementing the Direct3D11 interface through Direct3D12, but this will give zero benefit. Instead, new approaches and rendering architectures that leverage the flexibility provided by the next-generation APIs need to be developed.
There are at least four APIs (Direct3D11, Direct3D12, OpenGL/GLES, Vulkan, plus Apple's Metal for iOS and macOS platforms) that a cross-platform 3D application may need to support. Writing separate code paths for all APIs is clearly not an option for any real-world application, so the need for a cross-platform graphics abstraction layer is evident. The following is the list of requirements that I believe such a layer needs to satisfy:

- Lightweight abstractions: the API should be as close to the underlying native APIs as possible to allow an application to leverage all available low-level functionality. In many cases this requirement is difficult to achieve because specific features exposed by different APIs may vary considerably.
- Low performance overhead: the abstraction layer needs to be efficient from a performance point of view. If it introduces a considerable amount of overhead, there is no point in using it.
- Convenience: the API needs to be convenient to use. It needs to assist developers in achieving their goals, not limit their control of the graphics hardware.
- Multithreading: the ability to efficiently parallelize work is at the core of Direct3D12 and Vulkan and one of the main selling points of the new APIs. Support for multithreading in a cross-platform layer is a must.
- Extensibility: no matter how well the API is designed, it still introduces some level of abstraction. In some cases the most efficient way to implement certain functionality is to use the native API directly. The abstraction layer needs to provide seamless interoperability with the underlying native APIs so that the app can add features that may be missing.

Diligent Engine is designed to solve these problems. Its main goal is to take advantage of the next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL, and OpenGLES. Diligent Engine exposes a common C++ front-end for all supported platforms and provides interoperability with the underlying native APIs. It also supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin, or any other 3D application. Full source code is available for download at GitHub and is free to use.
Overview
Diligent Engine API takes some features from Direct3D11 and Direct3D12 as well as introduces new concepts to hide certain platform-specific details and make the system easy to use. It contains the following main components:
Render device (IRenderDevice  interface) is responsible for creating all other objects (textures, buffers, shaders, pipeline states, etc.).
Device context (IDeviceContext interface) is the main interface for recording rendering commands. Similar to Direct3D11, there are an immediate context and deferred contexts (which in the Direct3D11 implementation map directly to the corresponding context types). The immediate context combines command queue and command list recording functionality. It records commands and submits the command list for execution when it contains a sufficient number of commands. Deferred contexts are designed to only record command lists, which can then be submitted for execution through the immediate context.
An alternative way to design the API would be to expose command queue and command lists directly. This approach however does not map well to Direct3D11 and OpenGL. Besides, some functionality (such as dynamic descriptor allocation) can be much more efficiently implemented when it is known that a command list is recorded by a certain deferred context from some thread.
The approach taken in the engine does not limit scalability as the application is expected to create one deferred context per thread, and internally every deferred context records a command list in lock-free fashion. At the same time this approach maps well to older APIs.
In the current implementation, only one immediate context, which uses the default graphics command queue, is created. To support multiple GPUs or multiple command queue types (compute, copy, etc.), it is natural to have one immediate context per queue. Cross-context synchronization utilities will then be necessary.
Swap Chain (ISwapChain interface). Swap chain interface represents a chain of back buffers and is responsible for showing the final rendered image on the screen.
Render device, device contexts and swap chain are created during the engine initialization.
Resources (ITexture and IBuffer interfaces). There are two types of resources: textures and buffers. There are many different texture types (2D textures, 3D textures, texture arrays, cube maps, etc.) that can all be represented by the ITexture interface.
Resources Views (ITextureView and IBufferView interfaces). While textures and buffers are mere data containers, texture views and buffer views describe how the data should be interpreted. For instance, a 2D texture can be used as a render target for rendering commands or as a shader resource.
Pipeline State (IPipelineState interface). The GPU pipeline contains many configurable stages (depth-stencil, rasterizer and blend states, different shader stages, etc.). Direct3D11 uses coarse-grain objects to set all stage parameters at once (for instance, a rasterizer object encompasses all rasterizer attributes), while OpenGL contains myriad functions to fine-grain control every individual attribute of every stage. Neither method maps very well to modern graphics hardware, which combines all states into one monolithic state under the hood. Direct3D12 directly exposes the pipeline state object in the API, and Diligent Engine uses the same approach.
Shader Resource Binding (IShaderResourceBinding interface). Shaders are programs that run on the GPU. Shaders may access various resources (textures and buffers), and setting the correspondence between shader variables and actual resources is called resource binding. Resource binding implementation varies considerably between different APIs. Diligent Engine introduces a new object called the shader resource binding that encompasses all resources needed by all shaders in a certain pipeline state.
API Basics
Creating Resources
Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. Graphics APIs usually have a native object that represents a linear buffer. Diligent Engine uses the IBuffer interface as an abstraction for a native buffer. To create a buffer, one needs to populate the BufferDesc structure and call the IRenderDevice::CreateBuffer() method, as in the following example:
```cpp
BufferDesc BuffDesc;
BuffDesc.Name = "Uniform buffer";
BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
BuffDesc.Usage = USAGE_DYNAMIC;
BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
m_pDevice->CreateBuffer( BuffDesc, BufferData(), &m_pConstantBuffer );
```
While there is usually just one buffer object, different APIs use very different approaches to represent textures. For instance, in Direct3D11 there are ID3D11Texture1D, ID3D11Texture2D, and ID3D11Texture3D objects. In OpenGL, there is an individual object for every texture dimension (1D, 2D, 3D, Cube), which may be a texture array and may also be multisampled (i.e. GL_TEXTURE_2D_MULTISAMPLE_ARRAY). As a result, there are nine different GL texture types that Diligent Engine may create under the hood. In Direct3D12, there is only one resource interface. Diligent Engine hides all these details behind the ITexture interface. There is only one IRenderDevice::CreateTexture() method, which is capable of creating all texture types. Dimension, format, array size, and all other parameters are specified by the members of the TextureDesc structure:
```cpp
TextureDesc TexDesc;
TexDesc.Name = "Sample 2D Texture";
TexDesc.Type = TEXTURE_TYPE_2D;
TexDesc.Width = 1024;
TexDesc.Height = 1024;
TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
TexDesc.Usage = USAGE_DEFAULT;
TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
m_pRenderDevice->CreateTexture( TexDesc, TextureData(), &m_pTestTex );
```
If the native API supports multithreaded resource creation, textures and buffers can be created by multiple threads simultaneously.
Interoperability with the native API provides access to the native buffer/texture objects and also allows creating Diligent Engine objects from native handles, letting applications seamlessly integrate native API-specific code with Diligent Engine.
Next-generation APIs allow fine-level control over how resources are allocated. Diligent Engine does not currently expose this functionality, but it can be added by implementing an IResourceAllocator interface that encapsulates the specifics of resource allocation and providing this interface to the CreateBuffer() or CreateTexture() methods. If null is provided, the default allocator is used.
Initializing the Pipeline State
As mentioned earlier, Diligent Engine follows the next-gen APIs to configure the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.). This approach maps directly to Direct3D12/Vulkan, but is also beneficial for older APIs, as it eliminates pipeline misconfiguration errors. With many individual calls tweaking various GPU pipeline settings, it is very easy to forget to set one of the states, or to assume a stage is already properly configured when in fact it is not. Using a pipeline state object helps avoid these problems, as all stages are configured at once.
While in earlier APIs shaders were bound separately, in the next-generation APIs, as well as in Diligent Engine, shaders are part of the pipeline state object. The biggest challenge when authoring shaders is that Direct3D and OpenGL/Vulkan use different shader languages (while Apple uses yet another language in their Metal API). Maintaining two versions of every shader is not an option for real applications, so Diligent Engine implements a shader source code converter that allows shaders authored in HLSL to be translated to GLSL. To create a shader, one needs to populate the ShaderCreationAttribs structure; the SourceLanguage member of this structure tells the system which language the shader is authored in.
When sampling a texture in a shader, the texture sampler was traditionally specified as a separate object that was bound to the pipeline at run time, or set as part of the texture object itself. However, in most cases it is known beforehand what kind of sampler will be used in the shader. Next-generation APIs expose a new type of sampler, the static sampler, that can be initialized directly in the pipeline state. Diligent Engine exposes this functionality: when creating a shader, textures can be assigned static samplers. If a static sampler is assigned, it will always be used instead of the one initialized in the texture's shader resource view. To initialize static samplers, prepare an array of StaticSamplerDesc structures and initialize the StaticSamplers and NumStaticSamplers members. Static samplers are more efficient, and it is highly recommended to use them whenever possible. On older APIs, static samplers are emulated via generic sampler objects.
The following is an example of shader initialization:
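For instance, a pixel shader authored in HLSL might be set up along these lines (a sketch; members and methods not named in this article, such as EntryPoint, FilePath, SHADER_SOURCE_LANGUAGE_HLSL, and CreateShader, are assumptions):

```cpp
ShaderCreationAttribs Attrs;
Attrs.Desc.Name = "MyPixelShader";
Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL; // HLSL source is converted to GLSL for the GL backends
Attrs.EntryPoint = "main";        // assumed member name
Attrs.FilePath = "MyShader.psh";  // assumed member name

// Classify variables by how often they are expected to change (see below)
ShaderVariableDesc Vars[] =
{
    { "g_Constants",  SHADER_VARIABLE_TYPE_STATIC  },
    { "tex2DDiffuse", SHADER_VARIABLE_TYPE_MUTABLE }
};
Attrs.Desc.VariableDesc = Vars;
Attrs.Desc.NumVariables = _countof(Vars);

// Optionally assign a static sampler to the diffuse texture
StaticSamplerDesc StaticSamplers[1] = {}; // describe the sampler for "tex2DDiffuse" here
Attrs.Desc.StaticSamplers = StaticSamplers;
Attrs.Desc.NumStaticSamplers = _countof(StaticSamplers);

m_pDevice->CreateShader(Attrs, &m_pPixelShader);
```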
Creating the Pipeline State Object
After all required shaders are created, the rest of the fields of the PipelineStateDesc structure provide the depth-stencil, rasterizer, and blend state descriptions, the number and format of render targets, the input layout format, etc. For instance, rasterizer state can be described as follows:
```cpp
PipelineStateDesc PSODesc;
RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
RasterizerDesc.FillMode = FILL_MODE_SOLID;
RasterizerDesc.CullMode = CULL_MODE_NONE;
RasterizerDesc.FrontCounterClockwise = True;
RasterizerDesc.ScissorEnable = True;
RasterizerDesc.AntialiasedLineEnable = False;
```
Depth-stencil and blend states are defined in a similar fashion.
Another important thing that the pipeline state object encompasses is the input layout description, which defines how inputs to the vertex shader (the very first shader stage) should be read from memory. The input layout may define several vertex streams that contain values of different formats and sizes:
```cpp
// Define input layout
InputLayoutDesc &Layout = PSODesc.GraphicsPipeline.InputLayout;
LayoutElement TextLayoutElems[] =
{
    LayoutElement( 0, 0, 3, VT_FLOAT32, False ),
    LayoutElement( 1, 0, 4, VT_UINT8,   True ),
    LayoutElement( 2, 0, 2, VT_FLOAT32, False ),
};
Layout.LayoutElements = TextLayoutElems;
Layout.NumElements = _countof( TextLayoutElems );
```
Finally, the pipeline state defines the primitive topology type. When all required members are initialized, a pipeline state object can be created by the IRenderDevice::CreatePipelineState() method:
```cpp
// Define shader and primitive topology
PSODesc.GraphicsPipeline.PrimitiveTopologyType = PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
PSODesc.GraphicsPipeline.pVS = pVertexShader;
PSODesc.GraphicsPipeline.pPS = pPixelShader;
PSODesc.Name = "My pipeline state";
m_pDev->CreatePipelineState(PSODesc, &m_pPSO);
```
When a PSO object is bound to the pipeline, the engine invokes all API-specific commands to set all the states specified by the object. In the case of Direct3D12, this maps directly to setting the D3D12 PSO object. In the case of Direct3D11, this involves setting individual state objects (such as rasterizer and blend states), shaders, input layout, etc. In the case of OpenGL, this requires a number of fine-grain state-tweaking calls. Diligent Engine keeps track of the currently bound states and only calls functions to update the states that have actually changed.
Direct3D11 and OpenGL use fine-grain resource binding models, where an application binds individual buffers and textures to certain shader or program resource binding slots. Direct3D12 uses a very different approach, where resource descriptors are grouped into tables and an application can bind all resources in a table at once by setting the table in the command list. The resource binding model in Diligent Engine is designed to leverage this new method. It introduces a new object called the shader resource binding, which encapsulates all resource bindings required for all shaders in a certain pipeline state. It also introduces a classification of shader variables based on their expected frequency of change, which helps the engine group them into tables under the hood:
- Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants, such as constant buffers with camera attributes or global light attributes.
- Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change at per-material frequency. Examples may include diffuse textures, normal maps, etc.
- Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

The shader variable type must be specified during shader creation by populating an array of ShaderVariableDesc structures and initializing the ShaderCreationAttribs::Desc::VariableDesc and ShaderCreationAttribs::Desc::NumVariables members (see the example of shader creation above).
Static variables cannot be changed once a resource is bound to the variable; they are bound directly to the shader object. For instance, a shadow map texture is not expected to change after it is created, so it can be bound directly to the shader.
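Roughly like this (a sketch; the shader-side GetShaderVariable() call and the names used are assumptions):

```cpp
// Bind the shadow map SRV directly to the shader's static variable
m_pPixelShader->GetShaderVariable("g_tex2DShadowMap")->Set(pShadowMapSRV);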
```
Mutable and dynamic resources are instead bound through another new object, the shader resource binding, which is created from the pipeline state:
```cpp
m_pPSO->CreateShaderResourceBinding(&m_pSRB);
```
Note that an SRB is only compatible with the pipeline state it was created from. The SRB object inherits all static bindings from the shaders in the pipeline, but is not allowed to change them.
Mutable resources can only be set once for each instance of a shader resource binding. Such resources are intended to define specific material properties. For instance, a diffuse texture for a specific material is not expected to change once the material is defined, and can be set right after the SRB object has been created:
```cpp
m_pSRB->GetVariable(SHADER_TYPE_PIXEL, "tex2DDiffuse")->Set(pDiffuseTexSRV);
```
In some cases it is necessary to bind a new resource to a variable every time a draw command is invoked. Such variables should be labeled as dynamic, which will allow setting them multiple times through the same SRB object:
```cpp
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);
```
Under the hood, the engine pre-allocates descriptor tables for static and mutable resources when an SRB object is created. Space for dynamic resources is allocated at run time. Static and mutable resources are thus more efficient and should be used whenever possible.
As you can see, Diligent Engine does not expose the low-level details of how resources are bound to shader variables. One reason for this is that these details differ widely between APIs. The other reason is that using low-level binding methods is extremely error-prone: it is very easy to forget to bind some resource, or to bind an incorrect resource (such as binding a buffer to a variable that is in fact a texture), especially during shader development when everything changes fast. Diligent Engine instead relies on a shader reflection system to automatically query the list of all shader variables. Grouping variables into the three types mentioned above allows the engine to create an optimized layout and do the heavy lifting of matching resources to the API-specific resource location, register, or descriptor in a table.
This post gives more details about the resource binding model in Diligent Engine.
Setting the Pipeline State and Committing Shader Resources
Before any draw or compute command can be invoked, the pipeline state needs to be bound to the context:
```cpp
m_pContext->SetPipelineState(m_pPSO);
```
Under the hood, the engine sets the internal PSO object in the command list or calls all the required native API functions to properly configure all pipeline stages.
The next step is to bind all required shader resources to the GPU pipeline, which is accomplished by IDeviceContext::CommitShaderResources() method:
```cpp
m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);
```
The method takes a pointer to the shader resource binding object and makes all resources the object holds available for the shaders. In the case of D3D12, this only requires setting the appropriate descriptor tables in the command list. For older APIs, this typically requires setting all resources individually.
Next-generation APIs require the application to track the state of every resource and explicitly inform the system about all state transitions. For instance, if a texture was used as a render target and the next draw command is going to use it as a shader resource, a transition barrier needs to be executed. Diligent Engine does the heavy lifting of state tracking. When the CommitShaderResources() method is called with the COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES flag, the engine commits and transitions resources to the correct states at the same time. Note that transitioning resources does introduce some overhead; the engine tracks the state of every resource and will not issue a barrier if the state is already correct, but checking the resource state is itself overhead that can sometimes be avoided. The engine provides the IDeviceContext::TransitionShaderResources() method, which only transitions resources:
```cpp
m_pContext->TransitionShaderResources(m_pPSO, m_pSRB);
```
In some scenarios it is more efficient to transition resources once and then only commit them.
Invoking Draw Command
The final step is to set the states that are not part of the PSO, such as render targets and vertex and index buffers. Diligent Engine uses a Direct3D11-style API that is translated to the other native API calls under the hood:
```cpp
ITextureView *pRTVs[] = {m_pRTV};
m_pContext->SetRenderTargets(_countof( pRTVs ), pRTVs, m_pDSV);

// Clear render target and depth buffer
const float zero[4] = {0, 0, 0, 0};
m_pContext->ClearRenderTarget(nullptr, zero);
m_pContext->ClearDepthStencil(nullptr, CLEAR_DEPTH_FLAG, 1.f);

// Set vertex and index buffers
IBuffer *buffer[] = {m_pVertexBuffer};
Uint32 offsets[] = {0};
Uint32 strides[] = {sizeof(MyVertex)};
m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
```
Different native APIs use various sets of functions to execute draw commands, depending on the command details (whether the command is indexed, instanced, or both; what offsets into the source buffers are used; etc.). For instance, there are 5 draw commands in Direct3D11 and more than 9 in OpenGL, with something like glDrawElementsInstancedBaseVertexBaseInstance not uncommon. Diligent Engine hides all these details behind a single IDeviceContext::Draw() method that takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced, or indirect, etc.). For example:
```cpp
DrawAttribs attrs;
attrs.IsIndexed = true;
attrs.IndexType = VT_UINT16;
attrs.NumIndices = 36;
attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
pContext->Draw(attrs);
```
For compute commands, there is the IDeviceContext::DispatchCompute() method, which takes a DispatchComputeAttribs structure that defines the compute grid dimensions.
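A compute dispatch then looks symmetric to the draw call above (a sketch; the member names of DispatchComputeAttribs are assumptions):

```cpp
DispatchComputeAttribs DispatchAttrs;
DispatchAttrs.ThreadGroupCountX = 16; // compute grid dimensions
DispatchAttrs.ThreadGroupCountY = 16;
DispatchAttrs.ThreadGroupCountZ = 1;
m_pContext->DispatchCompute(DispatchAttrs);
```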
Source Code
The full engine source code is available on GitHub and is free to use. The repository contains two samples, an asteroids performance benchmark, and an example Unity project that uses Diligent Engine in a native plugin.
The AntTweakBar sample is Diligent Engine's "Hello World" example.

The atmospheric scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to multiple render targets, using compute shaders and unordered access views, etc.

The asteroids performance benchmark is based on this demo developed by Intel. It renders 50,000 unique textured asteroids and allows comparing the performance of the Direct3D11 and Direct3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

Finally, there is an example project that shows how Diligent Engine can be integrated with Unity.

Future Work
The engine is under active development. It currently supports Windows desktop, Universal Windows, and Android platforms. The Direct3D11, Direct3D12, and OpenGL/GLES backends are now feature complete. A Vulkan backend is coming next, and support for more platforms is planned.
• By reenigne
For those that don't know me: I am the individual whose two videos are listed here under setup for https://wiki.libsdl.org/Tutorials
I also run grhmedia.com, where I host the projects and code for the tutorials I have online.
Recently, I received a notice from YouTube that they will be implementing their new policy on protecting video content, under which I won't be monetized until I meet their required number of viewers and views each month.

Frankly, I'm pretty sick of YouTube. I put up a video, someone else learns from it and puts up another video, and because of the way YouTube does their placement they end up with more views.
Even guys that clearly post false information, such as one individual who said GLEW 2.0 was broken because he didn't know how to compile it. In short, he didn't know how to modify the script he used because he didn't understand makefiles and how the changed requirements of the compiler and library needed some different flags.

At the end of the month, when they implement this, I will take down the content and host it purely on my own server, and it will be a paid system and/or Patreon.

I get that my videos may be a bit dry; I generally figure people are there to learn how to do something, and I'd rather not waste their time.
I used to also help people for free, even those coming from the other videos. That won't be the case any more. I used to just take anyone's emails and work with them; my email is posted on the site.

I don't expect to get the required number of subscribers or increased views in that time. Even if I did, it wouldn't take care of each recurring month.
I figure this is simpler, and I don't plan on putting up some sort of exorbitant fee for a monthly subscription or the like.
I was thinking along the lines of a few dollars (1, 2, and 3), with the larger subscription getting you assistance with the content in the tutorials, if needed, that month.
Maybe another fee if it is related to but not directly in the content.
The fees would serve to cut down on the number of people who ask for help, and maybe encourage some people to actually pay attention to what is said rather than do their own thing. That actually turns out to be 90% of the issues. I spent 6 hours helping one individual last week; I must have asked him 20 times whether he did exactly what I said in the video, and even pointed directly to the section. When he finally sent me a copy of what he had entered, I knew then and there he had not. I circled it and pointed out that wasn't what I said to do in the video. I didn't tell him what was wrong or how I knew, so that he would go back and actually follow what it said to do. He then reported it worked. Yeah, no kidding; following directions works. But hey, he isn't alone, and it's part of the learning process.

So the point of this isn't to be a gripe session. I'm just looking for a bit of feedback. Do you think the fees are unreasonable?
Should I keep the YouTube channel and just do the fees with Patreon, or do you think locking the content to my site and requiring a subscription is a better idea?

I'm just looking at the fact that it is unrealistic to think YouTube/Google will actually get stuff right, or that YouTube viewers will actually bother to start looking for more accurate videos.

I got error 1282 in my code.