OpenGL 3.0... I mean 2.2


Well, someone has linked to the following on OpenGL.org: http://opengl.org/registry/doc/glspec30.20080811.pdf And, well, thanks but no thanks. All those glorious changes? Gone. The rewritten API? Gone. What we are left with is OpenGL 2.2. To quote Eddy Luten from the OpenGL.org forum:
Quote:
For those who don't feel like digging through the spec, OpenGL 3.0 Equals:
  • API support for the new texture lookup, texture format, and integer and unsigned integer capabilities of the OpenGL Shading Language 1.30 specification (GL EXT gpu shader4).
  • Conditional rendering (GL NV conditional render).
  • Fine control over mapping buffer subranges into client space and flushing modified data.
  • Floating-point color and depth internal formats for textures and renderbuffers (GL ARB color buffer float, GL NV depth buffer float, GL ARB texture float, GL EXT packed float, and GL EXT texture shared exponent).
  • Framebuffer objects (GL EXT framebuffer object).
  • Half-float (16-bit) vertex array and pixel data formats (GL NV half float and GL ARB half float pixel).
  • Multisample stretch blit functionality (GL EXT framebuffer multisample and GL EXT framebuffer blit).
  • Non-normalized integer color internal formats for textures and renderbuffers (GL EXT texture integer).
  • One- and two-dimensional layered texture targets (GL EXT texture array).
  • Packed depth/stencil internal formats for combined depth+stencil textures and renderbuffers (GL EXT packed depth stencil).
  • Per-color-attachment blend enables and color writemasks (GL EXT draw buffers2).
  • RGTC specific internal compressed formats (GL EXT texture compression rgtc).
  • Single- and double-channel (R and RG) internal formats for textures and renderbuffers.
  • Transform feedback (GL EXT transform feedback).
  • Vertex array objects (GL APPLE vertex array object).
  • sRGB framebuffer mode (GL EXT framebuffer sRGB).
Plus deprecation of older features.
As he said, where the hell are the objects? Frankly, this is crap. I said it was a sink or swim moment for the ARB, and it's just sunk without a trace. It appears the reason is that they don't want to break the API because of all the CAD apps out there (J. Carmack, QuakeCon 2008), and in doing so they have finally put the nail in the coffin games-wise. I'd like to congratulate MS for winning the 3D API 'war' on Windows; it turns out they didn't need to sink the good ship OpenGL, the captains ran it into an iceberg for them.

Uhm... I was looking forward to the new API. Anyway, I'm just happy with D3D. It seems like Microsoft actually knows how to do these things.

My God. What would have been the problem with saying, "This is the way it's going to be, let's REFINE it"? Makes me lose a LOT of respect for the ARB.

Of course, being on Linux, I sort of have to go with OpenGL (mesa, whatever) instead of DirectX. After this little blow though, I'm looking forward to somebody cracking DirectX.

FlyingIsFun1217

So let me get this straight... In order to allow what, a small handful of old CAD apps to compile against 3.0, they're willing to practically kill off all *new* applications developed against the API?

That makes sense. One customer a year ago is better than ten next week... [grin]

Or maybe they've just realized that a) they've lost everything on Windows to DirectX so it doesn't really matter what they do there, and b) since they don't have a single competing API on other platforms, they don't actually need to make an effort there either.

End result, they can screw over developers as much as they like, and it won't actually hurt them. Windows developers wouldn't have used OGL in the first place, and everyone else will keep using it because there are no alternatives.

Quote:
Original post by Spoonbender
That makes sense. One customer a year ago is better than ten next week... [grin]


Yes, apparently the ARB has a passing familiarity with 'sense'; I'm starting to suspect that as a rule of thumb they find out what makes sense and then go in the other direction...

So it took them a year of no updates and no news to roll back their existing changes and go back to an unassuming, useless API?

I thought Khronos was on the clue train here. People bank their livelihoods on this stuff.

Maybe with the new deprecation model we can have objects by 2014. I suspect I'd better get cracking on my Core Animation/DirectX interop layer before Apple leaves the ARB.

I couldn't help but laugh a little at the irony: the ARB gets nailed for NOT dropping some support for older apps, but when Microsoft does the right thing and pushes forward with DX10, they take a lot of flak themselves. Screwed if you do, screwed if you don't. That's life, I guess.

Sad, sad. I always loved OpenGL, having started with it, and I like it for being open and all. But they really lost focus completely. What's wrong with drawing a line and putting out a completely new API for the next version? DX does it (too often, possibly). It's not like it hurts anyone to have opengl1.dll, opengl2.dll, and opengl3.dll on their system.

Bad bad bad direction.

we should create a new one :)

Quote:
Original post by Mike.Popoloski
So I just checked, and it turns out we have plenty of room in DirectX and XNA for you guys. You can all come, nobody needs to get left out. Jack is going to make a batch of cookies for everyone!

I'm totally taking you up on that. I just started C# and XNA (I've been doing C++ and OpenGL in the past) and I'm really enjoying it. Will they be chocolate chip cookies? Those are my favorite.

Quote:
Original post by Spoonbender
So let me get this straight... In order to allow what, a small handful of old CAD apps to compile against 3.0, they're willing to practically kill off all *new* applications developed against the API?

It's not the number of apps that matters; it's the size of the market these apps cater to. What would you estimate is the annual revenue from this "small handful of old CAD apps" (like AutoCAD 2009, released way back in the dark days of March 2008)? This was a US$1 billion market in 1979; in 1997, PDM - just one facet of the PLM approach generally employed by modern CAD solutions - was a $1.1 billion market by itself.

I understand the game developer's frustration - I was just about to start learning OpenGL for the Mac, and I still will - but let's not get ridiculous. CAD is a huge industry: every architecture firm, every electrical firm, every large-scale manufacturer, the automotive industry, product design, industrial design... They are a major client of OpenGL, and their perspective is an important one.

Nor can you argue that they could just continue working against 2.1 while the rest of the world moved on to 3.0. CAD applications develop and compete aggressively, as aggressively as games albeit with a different visual emphasis, and they need to take advantage of technology advances just like everyone else.


No question, this is a disappointment, but it doesn't appear to be so much a case of deliberately "screwing developers over" as it is a case of incompetence and lack of strong vision to plot a future. I mean, I'm only a casual OpenGL observer, but that seems to have been the case ever since the Khronos Group became responsible.

The usual will happen. GL drivers are going to get more and more complicated to write, so ATI will lag behind (don't expect GL 3.0 drivers anytime soon), Intel won't release a driver at all, and neither will SiS or whoever else is making chipsets.

I can see they have "The Deprecation Model" on page 403, but so what??? They are going to make a clean break some day?

Quote:
Original post by Oluseyi
No question, this is a disappointment, but it doesn't appear to be so much a case of deliberately "screwing developers over" as it is a case of incompetence and lack of strong vision to plot a future.
But it's more than just that. Forget the justification for why things finally ended up the way they did. My question is, why was everyone in the graphics world strung along for so many years? Why did the ARB, and then Khronos, promise pie-in-the-sky goals and then vanish for a year, promising big news, just to give us this? If that was how any of us behaved at our jobs, we'd be fired.

Regardless of where OpenGL goes from here (which is apparently nowhere), there's no reason to ever trust the people behind it again. This is the second time they've misled us thanks to their incompetent bickering.

So... They took so much time (and delayed the launch) just for THIS???

I have a great idea!!! The next OpenGL 4.0 will be an object-oriented API that actually wraps Direct3D [lol][lol]

(oh... and if that doesn't work they'll just stick with OGL 2.3 and name it 4.0)

Seriously, if things keep going this way, we should put together a team to take charge of a new cross-platform API. Not easy to do, as it would need support from driver developers (aka NVIDIA, ATI, Intel); plus we'd need experienced people and a lot of time just to design it (not to mention code it). And even then, it would still have to win adoption.
But at least we could try.

Well.... [sigh]

Dark Sylinc

Quote:
Original post by Mike.Popoloski
So I just checked, and it turns out we have plenty of room in DirectX and XNA for you guys. You can all come, nobody needs to get left out. Jack is going to make a batch of cookies for everyone!

Yep, I'm also considering coming back to DirectX. Shame on Khronos for making us rely on M$ [crying]

TBH I take this as a message from the hardware vendors saying "Nobody except hobbyists is interested in cutting-edge cross-platform graphics - OpenGL ES is fine for everyone else. Use the well-supported Windows API, you fools!"

Which, to be fair, people have been telling us for years.

It's not that bad though, nothing has been lost. It's just a shame the ARB managed expectations of their work so poorly.

I don't want to be the odd one in the bunch, and I'm all for bashing OpenGL, but what exactly is the problem? Has it lost hardware features? Does it run slower? Is it less compatible? I'm a noob with OpenGL - I just started like 2 weeks ago - and I can't understand why all of you are complaining.

OpenGL hasn't been cutting edge since before DX8 came out. Now it's just cross-platform and playing catch-up. It had one foot in the grave when D3D8 came out, and then MS left the ARB and released D3D9, and it was officially dead in the water.

OpenGL became redundant for a few reasons.

- They keep adding messy extensions onto an old, outdated base API. D3D is like a sports car, and OpenGL is like a scooter that keeps getting sidecars attached to it.

- The company with the most installed graphics chips in home machines has the crappiest drivers. It really sucks to write perfectly legal code and have the horrible Intel implementation completely botch your whole program. Even worse when one of their driver versions reported supporting OpenGL 1.2 but didn't actually support most 1.2 extensions.

Have you ever hung out on the user forums for 3D software written in OpenGL? Every time the program gets patched, there is a flood of Intel users reporting that the program is broken or crashes with strange errors, and the devs have to write workarounds for everything they added.

What good is a standard when no one follows it? Especially when the company with the biggest share of the GPU market doesn't have a working implementation on most of their GPU models and has no intention of fixing it. People think it's all about ATI and NVIDIA, when those two are really only competing for second and third place.

When I wanted to write a polished game and publish it, I gave up on OpenGL because of all those problems. All of my target audience was going to have those GPUs. I can't be dealing with all that crap; I want to write something that just works. Instead of being creative, I had to look over my code and keep blindly recompiling to try to stop my friend's Intel GPU from rendering my game with a wireframe overlay and random faces being culled, when I had no calls for wireframe drawing mode in my whole app and culling was disabled. [lol]

The ARB just takes years to agree on a header file, since they have to make everyone happy and cater to every possible interest at the same time. And then they just give us that header and leave it at that. Everyone else has to do all the work, to the point where a member of this forum had to write a library just to make the extensions easy to access. Why couldn't the ARB get off their asses and make their own GLee-like header for everyone to use?

---

I think it's time to let this horse die and let another company step forward and create a cross-platform 3D games API. Why hasn't Apple been spending money to develop their own DirectX-like technology? Maybe even license Direct3D?


3.0 could be looked at as the last fully upward-compatible revision of OpenGL, since it introduces a deprecation model allowing for the phased elimination of obsolete APIs (a set of which are already defined in the spec). The intent is to provide for an orderly simplification of the specification and drivers in upcoming releases, the next of which is scheduled for less than 12 months from now.

http://www.marketwatch.com/news/story/khronos-releases-opengl-30-specifications/story.aspx?guid=%7BC2A3B5D7-CB9A-4898-BAF9-178DD8CFD695%7D&dist=hppr

BTW, we have set up a mail reflector specifically for questions and suggestions relating to game development using OpenGL 3.0 - if there is some piece of hardware functionality not addressed by the current 3.0 spec, now is exactly the right time to let us hear about it.

gamedev@khronos.org

Quote:
Original post by EmptyVoid
I don't want to be the odd one in the bunch, and I'm all for bashing OpenGL, but what exactly is the problem? Has it lost hardware features? Does it run slower? Is it less compatible? I'm a noob with OpenGL - I just started like 2 weeks ago - and I can't understand why all of you are complaining.


Because this ISN'T what was talked about a year ago when they were 'close' to having a spec.

The point of OpenGL 3.0, as originally talked about, was to:
- make the 'fast path' easy to find
- make the life of driver developers easier
- change the API to better reflect the hardware

The OpenGL API is... well, probably over 15 years old by now, if not a bit more, and while it matched the hardware for a while, it is now drifting away from it (see D3D10 for a better idea of how to talk to the hardware), and the point of the breaking refresh was to better match that hardware.

However, by simply bolting things onto the OpenGL 2.1 spec they have:
- failed to make the fast path easy to find
- failed to make the driver developers lives easier
- failed to change the API to better reflect the hardware

Same old, same old, really... on reflection it was dumb of us to give them another chance to 'fix' the problem. As they say: fool me once, shame on you; fool me twice, shame on me.

I won't be fooled again.

Quote:
Original post by Oluseyi
I mean, I'm only a casual OpenGL observer, but that seems to have been the case ever since the Khronos Group became responsible.


It's been the case since long before then; we had hoped that being part of Khronos would help... apparently not.

Quote:
Original post by phantom

- failed to make the fast path easy to find
- failed to make the driver developers lives easier
- failed to change the API to better reflect the hardware



Sorry for asking again... but how did you come to the conclusion that OpenGL failed on these three points above? I am a little confused...

Thanks again!


  • Similar Content

    • By alex1997
      I'm looking to render multiple objects (rectangles) with different shaders. So far I've managed to render one rectangle made out of 2 triangles and apply a shader to it, but when it comes to rendering another, I get stuck. I've searched for documentation or anything else that could help me, but everything shows how to render only 1 object. Any tips or help are highly appreciated, thanks!
      Here's my code for rendering one object with a shader:
       
      #define GLEW_STATIC
      #include <stdio.h>
      #include <iostream> // needed for std::cerr
      #include <GL/glew.h>
      #include <GLFW/glfw3.h>
      #include "window.h"

      #define GLSL(src) "#version 330 core\n" #src
      // #define ASSERT(expression, msg) if(expression) {fprintf(stderr, "Error on line %d: %s\n", __LINE__, msg);return -1;}

      int main() {
          // Init GLFW
          if (glfwInit() != GL_TRUE) {
              std::cerr << "Failed to initialize GLFW" << std::endl;
              exit(EXIT_FAILURE);
          }

          // Create a rendering window with an OpenGL 3.2 core context
          glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
          glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
          glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
          glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
          glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

          // assign window pointer
          GLFWwindow *window = glfwCreateWindow(800, 600, "OpenGL", NULL, NULL);
          glfwMakeContextCurrent(window);

          // Init GLEW
          glewExperimental = GL_TRUE;
          if (glewInit() != GLEW_OK) {
              std::cerr << "Failed to initialize GLEW" << std::endl;
              exit(EXIT_FAILURE);
          }

          // ----------------------------- RESOURCES ----------------------------- //
          const GLfloat positions[8] = {
              -0.5f, -0.5f,
               0.5f, -0.5f,
               0.5f,  0.5f,
              -0.5f,  0.5f,
          };
          const GLuint elements[6] = { 0, 1, 2, 2, 3, 0 };

          // Create Vertex Array Object
          GLuint vao;
          glGenVertexArrays(1, &vao);
          glBindVertexArray(vao);

          // Create a Vertex Buffer Object and copy the vertex data to it
          GLuint vbo;
          glGenBuffers(1, &vbo);
          glBindBuffer(GL_ARRAY_BUFFER, vbo);
          glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);

          // Specify the layout of the vertex data
          glEnableVertexAttribArray(0); // layout(location = 0)
          glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);

          // Create an Element Buffer Object and copy the index data to it
          GLuint ebo;
          glGenBuffers(1, &ebo);
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
          glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(elements), elements, GL_STATIC_DRAW);

          // Create and compile the vertex shader
          const GLchar *vertexSource = GLSL(
              layout(location = 0) in vec2 position;
              void main() {
                  gl_Position = vec4(position, 0.0, 1.0);
              }
          );
          GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
          glShaderSource(vertexShader, 1, &vertexSource, NULL);
          glCompileShader(vertexShader);

          // Create and compile the fragment shader
          // (gl_FragColor may not be redeclared in a core profile, so use a custom output)
          const char *fragmentSource = GLSL(
              out vec4 fragColor;
              uniform vec2 u_resolution;
              void main() {
                  vec2 pos = gl_FragCoord.xy / u_resolution;
                  fragColor = vec4(1.0);
              }
          );
          GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
          glShaderSource(fragmentShader, 1, &fragmentSource, NULL);
          glCompileShader(fragmentShader);

          // Link the vertex and fragment shader into a shader program
          GLuint shaderProgram = glCreateProgram();
          glAttachShader(shaderProgram, vertexShader);
          glAttachShader(shaderProgram, fragmentShader);
          glLinkProgram(shaderProgram);
          glUseProgram(shaderProgram);

          // Get the uniform's location by name and set its value
          // (the name must match the shader source exactly: "u_resolution")
          GLint uRes = glGetUniformLocation(shaderProgram, "u_resolution");
          glUniform2f(uRes, 800.0f, 600.0f);

          // ---------------------------- RENDERING ------------------------------ //
          glClearColor(0.0f, 0.5f, 1.0f, 1.0f);
          while (!glfwWindowShouldClose(window)) {
              // Clear the screen
              glClear(GL_COLOR_BUFFER_BIT);

              // Draw a rectangle made of 2 triangles -> 6 indices
              glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, NULL);

              // Swap buffers and poll window events
              glfwSwapBuffers(window);
              glfwPollEvents();
          }

          // ---------------------------- CLEANUP -------------------------------- //
          glDeleteProgram(shaderProgram);
          glDeleteShader(fragmentShader);
          glDeleteShader(vertexShader);
          glDeleteBuffers(1, &ebo);
          glDeleteBuffers(1, &vbo);
          glDeleteVertexArrays(1, &vao);
          return 0;
      }
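      For the actual question - rendering a second rectangle with its own shader - the usual pattern is to build a second VAO/VBO/EBO and a second program exactly the same way as above, then bind each program/VAO pair before its draw call. A minimal sketch, where shaderProgram2 and vao2 are hypothetical objects created just like the ones above:

      // Inside the render loop: bind each program + VAO pair, then draw.
      // Each VAO remembers its own vertex attributes and element buffer.
      glUseProgram(shaderProgram);
      glBindVertexArray(vao);
      glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, NULL);

      glUseProgram(shaderProgram2); // hypothetical second program
      glBindVertexArray(vao2);      // hypothetical second VAO
      glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, NULL);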
    • By Vortez
      Hi guys, I'm having a little problem fixing a bug in my program since I multi-threaded it. The app is a little video converter I wrote for fun. To help you understand the problem, I'll first explain how the program is made. I'm using Delphi for the GUI/Windows part of the code, and I'm loading a C++ DLL for the video conversion. The problem is not related to the video conversion, but to OpenGL only. The code works like this:

       
      DWORD WINAPI JobThread(void *params) {
          for each files {
              ...
              _ConvertVideo(input_name, output_name);
          }
      }

      void EXP_FUNC _ConvertVideo(char *input_fname, char *output_fname) {
          // Note that I'm re-initializing and cleaning up OpenGL each time this function is called...
          CGLEngine GLEngine;
          ...
          // Initialize OpenGL
          GLEngine.Initialize(render_wnd);
          GLEngine.CreateTexture(dst_width, dst_height, 4);

          // decode the video and render the frames...
          for each frames {
              ...
              GLEngine.UpdateTexture(pY, pU, pV);
              GLEngine.Render();
          }

      cleanup:
          GLEngine.DeleteTexture();
          GLEngine.Shutdown();

          // video cleanup code...
      }
      With a single thread, everything works fine. The problem arises when I start the thread a second time: nothing gets rendered, but the encoding works fine. For example, if I start the thread with 3 files to process, all of them render fine, but if I start the thread again (with the same batch of files or not...), OpenGL fails to render anything.
      I'm pretty sure it has something to do with the rendering context (or maybe the window DC?). Here's a snippet of my OpenGL class:
      bool CGLEngine::Initialize(HWND hWnd) {
          hDC = GetDC(hWnd);
          if (!SetupPixelFormatDescriptor(hDC)) {
              ReleaseDC(hWnd, hDC);
              return false;
          }
          hRC = wglCreateContext(hDC);
          wglMakeCurrent(hDC, hRC);
          // more code ...
          return true;
      }

      void CGLEngine::Shutdown() {
          // some code...
          if (hRC) { wglDeleteContext(hRC); }
          if (hDC) { ReleaseDC(hWnd, hDC); }
          hDC = hRC = NULL;
      }
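      One guess, based only on the snippet above rather than the full source: the context is never released from the worker thread before it is deleted, and deleting a context that is still current (or leaving a stale current context behind) can make the next Initialize() on that thread fail silently. A sketch of the defensive version:

      void CGLEngine::Shutdown() {
          // some code...
          // Release the context from the calling thread before deleting it;
          // wglDeleteContext on a still-current context can leave the thread
          // with a dangling current context for the next conversion run.
          wglMakeCurrent(NULL, NULL);
          if (hRC) { wglDeleteContext(hRC); }
          if (hDC) { ReleaseDC(hWnd, hDC); }
          hDC = hRC = NULL;
      }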
      The full source code is available here. The most relevant files are:
      -OpenGL class (header / source)
      -Main code (header / source)
       
      Thx in advance if anyone can help me.
    • By DiligentDev
      This article uses material originally posted on Diligent Graphics web site.
      Introduction
      Graphics APIs have come a long way from a small set of basic commands allowing limited control of the configurable stages of early 3D accelerators to very low-level programming interfaces exposing almost every aspect of the underlying graphics hardware. The next-generation APIs, Direct3D12 by Microsoft and Vulkan by Khronos, are relatively new and have only started getting widespread adoption and support from hardware vendors, while Direct3D11 and OpenGL are still considered the industry standard. The new APIs can provide substantial performance and functional improvements, but may not be supported by older hardware. An application targeting a wide range of platforms needs to support Direct3D11 and OpenGL. New APIs will not give any advantage when used with old paradigms. It is totally possible to add Direct3D12 support to an existing renderer by implementing the Direct3D11 interface through Direct3D12, but this will give zero benefits. Instead, new approaches and rendering architectures that leverage the flexibility provided by the next-generation APIs are expected to be developed.
      There are at least four APIs (Direct3D11, Direct3D12, OpenGL/GLES, Vulkan, plus Apple's Metal for iOS and macOS platforms) that a cross-platform 3D application may need to support. Writing separate code paths for all APIs is clearly not an option for any real-world application, and the need for a cross-platform graphics abstraction layer is evident. The following is the list of requirements that I believe such a layer needs to satisfy:
      - Lightweight abstractions: the API should be as close to the underlying native APIs as possible to allow an application to leverage all available low-level functionality. In many cases this requirement is difficult to achieve because the specific features exposed by different APIs may vary considerably.
      - Low performance overhead: the abstraction layer needs to be efficient from a performance point of view. If it introduces a considerable amount of overhead, there is no point in using it.
      - Convenience: the API needs to be convenient to use. It needs to assist developers in achieving their goals, not limit their control of the graphics hardware.
      - Multithreading: the ability to efficiently parallelize work is at the core of Direct3D12 and Vulkan and one of the main selling points of the new APIs. Support for multithreading in a cross-platform layer is a must.
      - Extensibility: no matter how well the API is designed, it still introduces some level of abstraction. In some cases the most efficient way to implement certain functionality is to directly use the native API. The abstraction layer needs to provide seamless interoperability with the underlying native APIs to give the app a way to add features that may be missing.
      Diligent Engine is designed to solve these problems. Its main goal is to take advantage of the next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common C++ front-end for all supported platforms and provides interoperability with the underlying native APIs. It also supports integration with Unity and is designed to be used as the graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. Full source code is available for download at GitHub and is free to use.
      Overview
      The Diligent Engine API takes some features from Direct3D11 and Direct3D12, and introduces new concepts to hide certain platform-specific details and make the system easy to use. It contains the following main components:
      Render device (IRenderDevice  interface) is responsible for creating all other objects (textures, buffers, shaders, pipeline states, etc.).
      Device context (IDeviceContext interface) is the main interface for recording rendering commands. Similar to Direct3D11, there are an immediate context and deferred contexts (which in the Direct3D11 implementation map directly to the corresponding context types). The immediate context combines command-queue and command-list recording functionality. It records commands and submits the command list for execution when it contains a sufficient number of commands. Deferred contexts are designed to only record command lists that can be submitted for execution through the immediate context.
      An alternative way to design the API would be to expose command queue and command lists directly. This approach however does not map well to Direct3D11 and OpenGL. Besides, some functionality (such as dynamic descriptor allocation) can be much more efficiently implemented when it is known that a command list is recorded by a certain deferred context from some thread.
      The approach taken in the engine does not limit scalability as the application is expected to create one deferred context per thread, and internally every deferred context records a command list in lock-free fashion. At the same time this approach maps well to older APIs.
      In the current implementation, only one immediate context, which uses the default graphics command queue, is created. To support multiple GPUs or multiple command queue types (compute, copy, etc.), it is natural to have one immediate context per queue. Cross-context synchronization utilities will be necessary.
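      As a sketch of the pattern described above (not verbatim engine code: ICommandList and the FinishCommandList()/ExecuteCommandList() hand-off are my assumptions about the public interface), each worker thread records into its own deferred context and the main thread submits the recorded lists through the immediate context:

      // Worker thread: record commands into this thread's deferred context.
      void RecordChunk(IDeviceContext *pDeferredCtx, IPipelineState *pPSO,
                       IShaderResourceBinding *pSRB, ICommandList **ppCmdList)
      {
          pDeferredCtx->SetPipelineState(pPSO);
          pDeferredCtx->CommitShaderResources(pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

          DrawAttribs attrs;
          attrs.IsIndexed  = true;
          attrs.IndexType  = VT_UINT16;
          attrs.NumIndices = 36;
          attrs.Topology   = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
          pDeferredCtx->Draw(attrs);

          pDeferredCtx->FinishCommandList(ppCmdList); // close this thread's list
      }

      // Main thread, after all workers are done recording:
      for (ICommandList *pList : recordedLists)
          m_pImmediateContext->ExecuteCommandList(pList);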
      Swap Chain (ISwapChain interface). The swap chain interface represents a chain of back buffers and is responsible for showing the final rendered image on the screen.
      Render device, device contexts and swap chain are created during the engine initialization.
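      For completeness, presenting a frame is then a single call on the swap chain (assuming a no-argument Present(), as the description above implies):

      // End of frame: show the back buffer on screen.
      m_pSwapChain->Present();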
      Resources (ITexture and IBuffer interfaces). There are two types of resources: textures and buffers. There are many different texture types (2D textures, 3D textures, texture arrays, cubemaps, etc.) that can all be represented by the ITexture interface.
      Resources Views (ITextureView and IBufferView interfaces). While textures and buffers are mere data containers, texture views and buffer views describe how the data should be interpreted. For instance, a 2D texture can be used as a render target for rendering commands or as a shader resource.
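      For example, a 2D texture created with both BIND_RENDER_TARGET and BIND_SHADER_RESOURCE can hand out a view for each usage. A sketch, assuming a GetDefaultView() accessor on the texture (my assumption, not quoted from the article):

      // One view to render into the texture, one to sample it in a shader.
      ITextureView *pRTV = m_pTestTex->GetDefaultView(TEXTURE_VIEW_RENDER_TARGET);
      ITextureView *pSRV = m_pTestTex->GetDefaultView(TEXTURE_VIEW_SHADER_RESOURCE);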
      Pipeline State (IPipelineState interface). The GPU pipeline contains many configurable stages (depth-stencil, rasterizer and blend states, different shader stages, etc.). Direct3D11 uses coarse-grain objects to set all stage parameters at once (for instance, a rasterizer object encompasses all rasterizer attributes), while OpenGL contains a myriad of functions to fine-grain control every individual attribute of every stage. Neither method maps very well to modern graphics hardware, which combines all states into one monolithic state under the hood. Direct3D12 directly exposes the pipeline state object in the API, and Diligent Engine uses the same approach.
      Shader Resource Binding (IShaderResourceBinding interface). Shaders are programs that run on the GPU. Shaders may access various resources (textures and buffers), and setting the correspondence between shader variables and actual resources is called resource binding. The resource binding implementation varies considerably between different APIs. Diligent Engine introduces a new object called shader resource binding that encompasses all resources needed by all shaders in a certain pipeline state.
      API Basics
      Creating Resources
      Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. Graphics APIs usually have a native object that represents a linear buffer. Diligent Engine uses the IBuffer interface as an abstraction for a native buffer. To create a buffer, one needs to populate a BufferDesc structure and call the IRenderDevice::CreateBuffer() method, as in the following example:
      BufferDesc BuffDesc;
      BuffDesc.Name = "Uniform buffer";
      BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
      BuffDesc.Usage = USAGE_DYNAMIC;
      BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
      BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
      m_pDevice->CreateBuffer( BuffDesc, BufferData(), &m_pConstantBuffer );

      While there is usually just one buffer object, different APIs use very different approaches to represent textures. For instance, in Direct3D11 there are ID3D11Texture1D, ID3D11Texture2D, and ID3D11Texture3D objects. In OpenGL, there is an individual object for every texture dimension (1D, 2D, 3D, Cube), each of which may be a texture array and may also be multisampled (i.e. GL_TEXTURE_2D_MULTISAMPLE_ARRAY). As a result, there are nine different GL texture types that Diligent Engine may create under the hood. In Direct3D12, there is only one resource interface. Diligent Engine hides all these details in the ITexture interface. There is only one IRenderDevice::CreateTexture() method that is capable of creating all texture types. Dimension, format, array size and all other parameters are specified by the members of the TextureDesc structure:

      TextureDesc TexDesc;
      TexDesc.Name = "My texture 2D";
      TexDesc.Type = TEXTURE_TYPE_2D;
      TexDesc.Width = 1024;
      TexDesc.Height = 1024;
      TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
      TexDesc.Usage = USAGE_DEFAULT;
      TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
      m_pRenderDevice->CreateTexture( TexDesc, TextureData(), &m_pTestTex );

      If the native API supports multithreaded resource creation, textures and buffers can be created by multiple threads simultaneously.
      Interoperability with the native API provides access to the native buffer/texture objects and also allows creating Diligent Engine objects from native handles. It allows applications to seamlessly integrate native API-specific code with Diligent Engine.
      Next-generation APIs allow fine-level control over how resources are allocated. Diligent Engine does not currently expose this functionality, but it can be added by implementing an IResourceAllocator interface that encapsulates the specifics of resource allocation and providing this interface to the CreateBuffer() or CreateTexture() methods. If null is provided, the default allocator should be used.
      Initializing the Pipeline State
      As mentioned earlier, Diligent Engine follows the next-gen APIs to configure the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.). This approach maps directly to Direct3D12/Vulkan, but is also beneficial for older APIs, as it eliminates pipeline misconfiguration errors. With many individual calls tweaking various GPU pipeline settings it is very easy to forget to set one of the states, or to assume the stage is already properly configured when in fact it is not. Using a pipeline state object helps avoid these problems, as all stages are configured at once.
      Creating Shaders
      While in earlier APIs shaders were bound separately, in the next-generation APIs as well as in Diligent Engine shaders are part of the pipeline state object. The biggest challenge when authoring shaders is that Direct3D and OpenGL/Vulkan use different shader languages (while Apple uses yet another language in their Metal API). Maintaining two versions of every shader is not an option for real applications and Diligent Engine implements shader source code converter that allows shaders authored in HLSL to be translated to GLSL. To create a shader, one needs to populate ShaderCreationAttribs structure. SourceLanguage member of this structure tells the system which language the shader is authored in:
      SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source language matches the underlying graphics API: HLSL for Direct3D11/Direct3D12 mode, and GLSL for OpenGL and OpenGLES modes.
      SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL.
      SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL to HLSL converter, so this value should only be used for OpenGL and OpenGLES modes.

      There are two ways to provide the shader source code. The first way is to use the Source member. The second way is to provide a file path in the FilePath member. Since the engine is entirely decoupled from the platform and the host file system is platform-dependent, the structure exposes the pShaderSourceStreamFactory member that is intended to give the engine access to the file system. If FilePath is provided, the shader source factory must also be provided. If the shader source contains any #include directives, the source stream factory will also be used to load these files. The engine provides a default implementation for every supported platform that should be sufficient in most cases. A custom implementation can be provided when needed.
      When sampling a texture in a shader, the texture sampler was traditionally specified as separate object that was bound to the pipeline at run time or set as part of the texture object itself. However, in most cases it is known beforehand what kind of sampler will be used in the shader. Next-generation APIs expose new type of sampler called static sampler that can be initialized directly in the pipeline state. Diligent Engine exposes this functionality: when creating a shader, textures can be assigned static samplers. If static sampler is assigned, it will always be used instead of the one initialized in the texture shader resource view. To initialize static samplers, prepare an array of StaticSamplerDesc structures and initialize StaticSamplers and NumStaticSamplers members. Static samplers are more efficient and it is highly recommended to use them whenever possible. On older APIs, static samplers are emulated via generic sampler objects.
      The following is an example of shader initialization:
      ShaderCreationAttribs Attrs;
      Attrs.Desc.Name = "MyPixelShader";
      Attrs.FilePath = "MyShaderFile.fx";
      Attrs.SearchDirectories = "shaders;shaders\\inc;";
      Attrs.EntryPoint = "MyPixelShader";
      Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
      Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;

      BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
      Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

      ShaderVariableDesc ShaderVars[] =
      {
          {"g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC},
          {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
          {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
      };
      Attrs.Desc.VariableDesc = ShaderVars;
      Attrs.Desc.NumVariables = _countof(ShaderVars);
      Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

      StaticSamplerDesc StaticSampler;
      StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
      StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
      StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
      StaticSampler.TextureName = "g_MutableTexture";
      Attrs.Desc.NumStaticSamplers = 1;
      Attrs.Desc.StaticSamplers = &StaticSampler;

      ShaderMacroHelper Macros;
      Macros.AddShaderMacro("USE_SHADOWS", 1);
      Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
      Macros.Finalize();
      Attrs.Macros = Macros;

      RefCntAutoPtr<IShader> pShader;
      m_pDevice->CreateShader( Attrs, &pShader );
      Creating the Pipeline State Object
      After all required shaders are created, the rest of the fields of the PipelineStateDesc structure provide depth-stencil, rasterizer, and blend state descriptions, the number and format of render targets, input layout format, etc. For instance, rasterizer state can be described as follows:
      PipelineStateDesc PSODesc;
      RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
      RasterizerDesc.FillMode = FILL_MODE_SOLID;
      RasterizerDesc.CullMode = CULL_MODE_NONE;
      RasterizerDesc.FrontCounterClockwise = True;
      RasterizerDesc.ScissorEnable = True;
      RasterizerDesc.AntialiasedLineEnable = False;

      Depth-stencil and blend states are defined in a similar fashion.
      Another important thing that the pipeline state object encompasses is the input layout description, which defines how inputs to the vertex shader (the very first shader stage) should be read from memory. The input layout may define several vertex streams that contain values of different formats and sizes:
      // Define input layout
      InputLayoutDesc &Layout = PSODesc.GraphicsPipeline.InputLayout;
      LayoutElement TextLayoutElems[] =
      {
          LayoutElement( 0, 0, 3, VT_FLOAT32, False ),
          LayoutElement( 1, 0, 4, VT_UINT8,   True ),
          LayoutElement( 2, 0, 2, VT_FLOAT32, False ),
      };
      Layout.LayoutElements = TextLayoutElems;
      Layout.NumElements = _countof( TextLayoutElems );

      Finally, the pipeline state defines the primitive topology type. When all required members are initialized, a pipeline state object can be created by the IRenderDevice::CreatePipelineState() method:
      // Define shader and primitive topology
      PSODesc.GraphicsPipeline.PrimitiveTopologyType = PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
      PSODesc.GraphicsPipeline.pVS = pVertexShader;
      PSODesc.GraphicsPipeline.pPS = pPixelShader;
      PSODesc.Name = "My pipeline state";
      m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

      When the PSO object is bound to the pipeline, the engine invokes all API-specific commands to set all states specified by the object. In the case of Direct3D12 this maps directly to setting the D3D12 PSO object. In the case of Direct3D11, this involves setting individual state objects (such as rasterizer and blend states), shaders, input layout, etc. In the case of OpenGL, this requires a number of fine-grain state tweaking calls. Diligent Engine keeps track of currently bound states and only calls functions to update the states that have actually changed.
      Binding Shader Resources
      Direct3D11 and OpenGL utilize fine-grain resource binding models, where an application binds individual buffers and textures to certain shader or program resource binding slots. Direct3D12 uses a very different approach, where resource descriptors are grouped into tables, and an application can bind all resources in the table at once by setting the table in the command list. Resource binding model in Diligent Engine is designed to leverage this new method. It introduces a new object called shader resource binding that encapsulates all resource bindings required for all shaders in a certain pipeline state. It also introduces the classification of shader variables based on the frequency of expected change that helps the engine group them into tables under the hood:
      Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers.
      Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change on a per-material frequency. Examples may include diffuse textures, normal maps, etc.
      Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

      The shader variable type must be specified during shader creation by populating an array of ShaderVariableDesc structures and initializing the ShaderCreationAttribs::Desc::VariableDesc and ShaderCreationAttribs::Desc::NumVariables members (see the example of shader creation above).
      Static variables cannot be changed once a resource is bound to the variable. They are bound directly to the shader object. For instance, a shadow map texture is not expected to change after it is created, so it can be bound directly to the shader:
      PixelShader->GetShaderVariable( "g_tex2DShadowMap" )->Set( pShadowMapSRV );

      Mutable and dynamic variables are bound via a new Shader Resource Binding object (SRB) that is created by the pipeline state (IPipelineState::CreateShaderResourceBinding()):
      m_pPSO->CreateShaderResourceBinding(&m_pSRB);

      Note that an SRB is only compatible with the pipeline state it was created from. The SRB object inherits all static bindings from the shaders in the pipeline, but is not allowed to change them.
      Mutable resources can only be set once for every instance of a shader resource binding. Such resources are intended to define specific material properties. For instance, a diffuse texture for a specific material is not expected to change once the material is defined and can be set right after the SRB object has been created:
      m_pSRB->GetVariable(SHADER_TYPE_PIXEL, "tex2DDiffuse")->Set(pDiffuseTexSRV);

      In some cases it is necessary to bind a new resource to a variable every time a draw command is invoked. Such variables should be labeled as dynamic, which will allow setting them multiple times through the same SRB object:
      m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

      Under the hood, the engine pre-allocates descriptor tables for static and mutable resources when an SRB object is created. Space for dynamic resources is dynamically allocated at run time. Static and mutable resources are thus more efficient and should be used whenever possible.
      As you can see, Diligent Engine does not expose low-level details of how resources are bound to shader variables. One reason for this is that these details are very different for the various APIs. The other reason is that using low-level binding methods is extremely error-prone: it is very easy to forget to bind some resource, or to bind an incorrect resource, such as binding a buffer to a variable that is in fact a texture, especially during shader development when everything changes fast. Diligent Engine instead relies on a shader reflection system to automatically query the list of all shader variables. Grouping variables based on the three types mentioned above allows the engine to create an optimized layout and do the heavy lifting of matching resources to the API-specific resource location, register or descriptor in the table.
      This post gives more details about the resource binding model in Diligent Engine.
      Setting the Pipeline State and Committing Shader Resources
      Before any draw or compute command can be invoked, the pipeline state needs to be bound to the context:
      m_pContext->SetPipelineState(m_pPSO);

      Under the hood, the engine sets the internal PSO object in the command list or calls all the required native API functions to properly configure all pipeline stages.
      The next step is to bind all required shader resources to the GPU pipeline, which is accomplished by the IDeviceContext::CommitShaderResources() method:
      m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

      The method takes a pointer to the shader resource binding object and makes all resources the object holds available to the shaders. In the case of D3D12, this only requires setting the appropriate descriptor tables in the command list. For older APIs, this typically requires setting all resources individually.
      Next-generation APIs require the application to track the state of every resource and explicitly inform the system about all state transitions. For instance, if a texture was previously used as a render target and the next draw command is going to use it as a shader resource, a transition barrier needs to be executed. Diligent Engine does the heavy lifting of state tracking. When the CommitShaderResources() method is called with the COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES flag, the engine commits and transitions resources to the correct states at the same time. Note that transitioning resources does introduce some overhead. The engine tracks the state of every resource and will not issue a barrier if the state is already correct, but checking resource state is an overhead that can sometimes be avoided. The engine provides the IDeviceContext::TransitionShaderResources() method that only transitions resources:
      m_pContext->TransitionShaderResources(m_pPSO, m_pSRB);

      In some scenarios it is more efficient to transition resources once and then only commit them.
      Invoking Draw Command
      The final step is to set the states that are not part of the PSO, such as render targets and vertex and index buffers. Diligent Engine uses a Direct3D11-style API that is translated to other native API calls under the hood:
      ITextureView *pRTVs[] = {m_pRTV};
      m_pContext->SetRenderTargets(_countof( pRTVs ), pRTVs, m_pDSV);

      // Clear render target and depth buffer
      const float zero[4] = {0, 0, 0, 0};
      m_pContext->ClearRenderTarget(nullptr, zero);
      m_pContext->ClearDepthStencil(nullptr, CLEAR_DEPTH_FLAG, 1.f);

      // Set vertex and index buffers
      IBuffer *buffer[] = {m_pVertexBuffer};
      Uint32 offsets[] = {0};
      Uint32 strides[] = {sizeof(MyVertex)};
      m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
      m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);

      Different native APIs use various sets of functions to execute draw commands depending on the command details (whether the command is indexed, instanced or both, what offsets in the source buffers are used, etc.). For instance, there are 5 draw commands in Direct3D11 and more than 9 commands in OpenGL, with something like glDrawElementsInstancedBaseVertexBaseInstance not uncommon. Diligent Engine hides all the details behind a single IDeviceContext::Draw() method that takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced or indirect, etc.). For example:
      DrawAttribs attrs;
      attrs.IsIndexed = true;
      attrs.IndexType = VT_UINT16;
      attrs.NumIndices = 36;
      attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
      pContext->Draw(attrs);

      For compute commands, there is the IDeviceContext::DispatchCompute() method, which takes a DispatchComputeAttribs structure that defines the compute grid dimensions.
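      A hedged sketch of a dispatch (the ThreadGroupCount member names are my assumption; the structure is only described in prose above):

      // Dispatch enough 8x8 thread groups to cover a 1024x1024 image.
      DispatchComputeAttribs DispatchAttrs;
      DispatchAttrs.ThreadGroupCountX = 1024 / 8;
      DispatchAttrs.ThreadGroupCountY = 1024 / 8;
      DispatchAttrs.ThreadGroupCountZ = 1;
      m_pContext->DispatchCompute(DispatchAttrs);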
      Source Code
      Full engine source code is available on GitHub and is free to use. The repository contains two samples, an asteroids performance benchmark, and an example Unity project that uses Diligent Engine in a native plugin.
      AntTweakBar sample is Diligent Engine’s “Hello World” example.

       
      Atmospheric scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to multiple render targets, using compute shaders and unordered access views, etc.

      Asteroids performance benchmark is based on this demo developed by Intel. It renders 50,000 unique textured asteroids and allows comparing performance of Direct3D11 and Direct3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

      Finally, there is an example project that shows how Diligent Engine can be integrated with Unity.

      Future Work
      The engine is under active development. It currently supports Windows desktop, Universal Windows and Android platforms. The Direct3D11, Direct3D12 and OpenGL/GLES backends are now feature complete. A Vulkan backend is coming next, and support for more platforms is planned.
    • By michaeldodis
      I've started building a small library that can render pie menu GUIs in legacy OpenGL, and I'm planning to add some traditional elements, of course.
      Its interface is similar to something you'd see in IMGUI. It's written in C.
      Early version of the library
      I'd really love to hear anyone's thoughts on this. Any suggestions on what features you'd want to see in a library like this?
      Thanks in advance!
    • By Michael Aganier
      I have this 2D game which currently eats up to 200k draw calls per frame. The performance is acceptable, but I want a lot more than that. I need to batch my sprite drawing, but I'm not sure of the best way to do it in OpenGL 3.3 (to keep compatibility with older machines).
      Each individual sprite moves independently almost every frame, and there is a variety of textures and animations. What's the fastest way to render a lot of dynamic sprites? Should I map all my data to the GPU and update it all the time? Should I set up my data in RAM and send it to the GPU all at once? Should I use one draw call per sprite and let the matrices apply the transformations, or should I compute the transformations into a world VBO on the CPU so that everything can be rendered by a single draw call?
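      One common answer, sketched here as an illustration rather than a benchmark-backed recommendation: rebuild all sprite vertices on the CPU each frame, upload them in one call with buffer orphaning, and draw the whole batch with one draw call per texture atlas. All names below (batchVBO, batchVAO, Vertex, verts) are hypothetical:

      // Per-frame batching for GL 3.3 core: 6 vertices per sprite in 'verts'.
      glBindBuffer(GL_ARRAY_BUFFER, batchVBO);
      // Orphan the old storage so the driver doesn't stall on a buffer still in use.
      glBufferData(GL_ARRAY_BUFFER, maxSprites * 6 * sizeof(Vertex), NULL, GL_STREAM_DRAW);
      glBufferSubData(GL_ARRAY_BUFFER, 0, spriteCount * 6 * sizeof(Vertex), verts);
      glBindVertexArray(batchVAO);
      glDrawArrays(GL_TRIANGLES, 0, spriteCount * 6);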