OpenGL Getting Max Tri/sec rate


Recommended Posts

Hi, I've been experimenting with different ways of drawing triangles, and no matter what I try I seem to be getting very low tris/sec rates. What's the optimal way to achieve as high a geometry rate as possible? I've tried both OpenGL and Direct3D 9.0, with about the same results.

I have an ATI X800 XT 256MB graphics card, and get the best results drawing with vertex buffers in D3D and vertex buffer objects in OpenGL. D3D maxes out at ~90 MTris/sec, and OpenGL at ~70 MTris/sec. I've tried larger and smaller vertex buffers and found that this is as high as it gets. I draw without textures or anything, and it's the same if I cull away all tris.

ATI's website says the card should be able to draw something like 500 MTris/sec. I realize this is impossible to reach =) but I'm not even getting 1/5 of it. I've found sites where other people have posted much higher rates, but I have not managed to reach them myself. An example app that came with the DX9 SDK reported about 160 MTris/sec while using optimized meshes.

I've tried all the different settings I could think of, and the data is stored on the card, not in system memory. If I keep it in system memory I get like 10 MTris/sec =) Are there better ways to draw triangles, or is something wrong with my computer?

Thx,
/Erik
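PS: For reference, the OpenGL path I'm benchmarking looks roughly like this (a minimal sketch, not my exact code; the function names and vertex/index counts are placeholders):

// Minimal static-geometry VBO path (sketch). Assumes GL 1.5-level buffer
// objects are available (e.g. via GLEW or manual extension loading).
#include <GL/glew.h>

GLuint g_vbo = 0, g_ibo = 0;

void CreateBuffers(const float* verts, int vertCount,
                   const unsigned short* indices, int indexCount)
{
    // GL_STATIC_DRAW hints that the driver should keep the data in video memory.
    glGenBuffers(1, &g_vbo);
    glBindBuffer(GL_ARRAY_BUFFER, g_vbo);
    glBufferData(GL_ARRAY_BUFFER, vertCount * 3 * sizeof(float),
                 verts, GL_STATIC_DRAW);

    // The indices go in a buffer object too, so nothing crosses the bus per frame.
    glGenBuffers(1, &g_ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, g_ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(unsigned short),
                 indices, GL_STATIC_DRAW);
}

void DrawFrame(int indexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, g_vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);   // tightly packed XYZ, offset 0
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, g_ibo);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
}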

Hello..

There are a lot of factors to consider.
nVidia has a nice PDF with a step-by-step guide to getting the highest tris/sec out of your card. ATI should have something similar.

Some pointers:
* Use as few batches as you can (and still try to use shorts for indices ;]).
* Remember to store the indices on the graphics card as well...
* Some vertex formats are more card-friendly than others. For example, one 32-bit int for the color is better than 4 floats...

And then there's the obvious stuff like vertex and fragment programs.. textures..
Try to just use vertices and nothing else (less data == more speed.. usually).
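As a sketch of what I mean by a card-friendly format (a hypothetical struct, not from any particular SDK):

// 16 bytes per vertex instead of 28: position as 3 floats, plus the color
// packed into a single 32-bit int instead of 4 floats.
struct Vertex
{
    float x, y, z;
    unsigned int color;   // e.g. 0xAARRGGBB
};

// 16-bit indices halve the index data (enough for meshes < 65536 vertices),
// and they should live in a buffer on the card too, not in system memory.
unsigned short indices[3] = { 0, 1, 2 };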

Good luck!

Quote:
Original post by Erik Rufelt

ATI's website says the card should be able to draw like 500MTris/sec. I realize this is impossible to reach =) but i'm not even getting 1/5 of this. I've found sites where other people have posted much higher rates, but I have not managed to get them myself.
An example app that came with the DX9 SDK reported about 160MTris/sec while using optimized meshes.


Congratulations, you've been bitten by the PC hardware marketing BS bug. Your card would have no problem drawing 500M triangles/second... if your machine had enough bandwidth to send triangles to the GPU that fast. The PC hardware scene is full of meaningless specs that can never be achieved because there is a limiting bottleneck somewhere else in the system. You could probably get a little higher than the 90M triangles/second, as evidenced by the program that comes with the DX9 SDK you mentioned, but give up any hope of ever reaching 500M. It's just not gonna happen unless you have a PCI Express card.

Hi,

Thanks for the reply
I use just one batch, and I've tried drawing it once per frame as well as 10+ times per frame, with the same result.
I have also tried more batches.
I don't have any colors or anything, just the vertices, D3DFVF_XYZ.. adding D3DFVF_DIFFUSE and a color (32-bit int) made the rate drop by 40%.. which seems to imply that the problem is memory..
Everything is stored on the card..
I've tried using 16-bit indices as well, but it didn't result in any change..
I get 125 MTris/sec now with:
device: D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DCREATE_HARDWARE_VERTEXPROCESSING
vbuf: D3DFVF_XYZ, D3DUSAGE_WRITEONLY, D3DPOOL_DEFAULT
ibuf: D3DFMT_INDEX32, D3DUSAGE_WRITEONLY, D3DPOOL_DEFAULT
SetFVF(D3DFVF_XYZ)
SetIndices
SetStreamSource(0, buf, 0, sizeof(D3DXVECTOR3))
and DrawIndexedPrimitive(D3DPT_TRIANGLELIST, ...)
Is it possible to tell D3D how to store the vertices on the card?
I was thinking perhaps it somehow defaults to double precision or something..
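For clarity, the creation calls are roughly equivalent to this (a sketch, not my exact code; the helper name and the vertex/index counts are placeholders):

// Sketch of the buffer setup described above.
#include <d3d9.h>
#include <d3dx9.h>   // for D3DXVECTOR3

IDirect3DVertexBuffer9* g_vb = 0;
IDirect3DIndexBuffer9*  g_ib = 0;

HRESULT CreateGeometry(IDirect3DDevice9* dev, UINT numVerts, UINT numIndices)
{
    // WRITEONLY + POOL_DEFAULT lets the driver keep the data in video memory.
    HRESULT hr = dev->CreateVertexBuffer(numVerts * sizeof(D3DXVECTOR3),
                                         D3DUSAGE_WRITEONLY, D3DFVF_XYZ,
                                         D3DPOOL_DEFAULT, &g_vb, NULL);
    if (FAILED(hr))
        return hr;
    return dev->CreateIndexBuffer(numIndices * sizeof(DWORD),
                                  D3DUSAGE_WRITEONLY, D3DFMT_INDEX32,
                                  D3DPOOL_DEFAULT, &g_ib, NULL);
}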

Maybe someone could try my program and see what tri rate you get? So I'm not trying to fix something in my program when the problem is really my computer =)
http://gys.mine.nu:180/~rufelt/D3DGeometryRateTest.exe

cwhite: that shouldn't be a problem.. everything is stored on the card, so hardly anything is sent to it per frame..


thx,
/Erik

Guest Anonymous Poster
It's not been mentioned, but I think you'll also discover that the triangles they claim to render in huge volumes are *single pixel*, i.e. not a true triangle, just one pixel (technically a polygon, just a very small one!). Try that and I bet your throughput is higher still.

Cunning buggers aren't they?

Guest Anonymous Poster
Have you tried optimizing your vertex buffer?
NVIDIA has a tool called NvTriStrip, which reorganizes the data into an efficient strip, with the vertex order optimized for memory paging and data access optimized to minimize cache misses.
Also, one primitive type can be faster than another, depending on the type of mesh.
A very smooth mesh is faster to draw than a sharp-edged mesh, because more vertices are shared between triangles. When you draw a triangle, there is a better chance that its vertices have already been transformed and are sitting in the cache. So the perfect case is a very smooth mesh (like a sphere) reorganized by NvTriStrip.
The only way to reach the maximum number of triangles is to have perfect geometry in a perfect strip, but in practice it never happens because it's too restrictive for the artists.
I don't think PCI Express will increase the polycount if the data is already in video memory.
I once saw more than 13M triangles per second on a GeForce 1, where the theoretical maximum was 14M. That was a perfect-strip case...
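If you want to estimate how cache-friendly an index order is before sending it to the card, you can simulate the post-transform cache offline. A rough sketch (the helper name, the FIFO model and the 16-entry size are assumptions; real cache sizes vary by chip):

// Estimate post-transform cache efficiency of an index stream.
// ACMR = vertices actually transformed per triangle; ~3.0 is the worst case
// (no reuse at all), values approaching ~0.5 mean near-perfect reuse.
#include <algorithm>
#include <deque>
#include <vector>

double Acmr(const std::vector<unsigned>& indices, size_t cacheSize = 16)
{
    std::deque<unsigned> cache;   // simple FIFO model of the vertex cache
    size_t misses = 0;
    for (size_t i = 0; i < indices.size(); ++i)
    {
        unsigned v = indices[i];
        if (std::find(cache.begin(), cache.end(), v) == cache.end())
        {
            ++misses;             // this vertex must be (re)transformed
            cache.push_back(v);
            if (cache.size() > cacheSize)
                cache.pop_front();
        }
    }
    return indices.empty() ? 0.0 : misses / (indices.size() / 3.0);
}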

Ronan

Guest Anonymous Poster
Yes, I suspect that the so-called "triangle transformation rate" is actually the vertex transform rate. If you use a triangle strip/fan then the two are almost exactly equivalent, because each new vertex yields one new triangle (and if you don't change render state, a vertex is guaranteed to transform identically). Of course, "triangle" sounds more impressive than "vertex".

I also have no idea how clipping works in hardware, so it's possible that you'll only get full speed when the clipping cases are trivial (i.e. completely on screen or completely behind a single clipping plane). I don't think that would make much of a dent though, because in any normal scene almost all triangles will fall into this category.

Hi,

Thanks for all the replies!
I will try to optimize the mesh more and use tri strips, I guess that's the only way.
I found this though:
http://www.delphi3d.net/forums/viewtopic.php?t=364&view=previous
Someone there posted that he gets 200 MTris/sec with an X800 Pro using the demo in that thread.. but when I download it I only get 80.. guess it must be my system :(

/Erik

I get 118 MTris/sec with the Delphi3D.net app, and 30 MTris/sec with your app. This is a pretty big difference [looksaround].

Additionally, my card underperforms badly, so take my stats with a pinch of salt :)
Example framerates:
Counter-Strike 1.6 in 640x480x32 on de_aztec: 30fps
DesertCombat in 1280x1024x32, all details on high: 70fps


Hardware:
Radeon 9600 Pro 256MB VRAM (Catalyst 3.5 Drivers)
MSI NEO-FIS2R Mainboard
Intel P4 (non-HT) 2.40B GHz
512MB DDR300 RAM

Quote:
ATI's website says the card should be able to draw like 500MTris/sec. I realize this is impossible to reach =)

You should be able to get close. Last time I tested, I got exactly what the card can do (GF2 MX: 20 MTris/sec actually rendered). I haven't tested it fully with my GFFX 5900 (because once it went over 100 MTris/sec it sort of became pointless), but I'm 99% sure that if the specs say it can do 345 MTris/sec, it is possible to do 345 MTris/sec (benchmarked) on this card.
For OpenGL, use glCullFace(GL_FRONT_AND_BACK), though you should be able to get a similar framerate when actually drawing them.
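Something like this (note that GL_CULL_FACE has to be enabled for the cull mode to take effect):

// Cull every triangle after transform: vertices are still processed,
// but nothing is rasterized, so this measures pure transform throughput.
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT_AND_BACK);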
There are quite a few PDFs on the web on how to achieve better performance, e.g. here's a D3D-centered one:
http://www.ati.com/developer/gdc/D3DTutorial3_Pipeline_Performance.pdf
Have a read, and check out other benchmarking apps to see what they achieve.

Hi,

I reinstalled WinXP and suddenly the demo from DelphiGL got 230 mts, while my D3D9 app went from 120 to 80 mts, and my own GL test app from 75 to 60 mts. This was all very strange.
AGP didn't work because I hadn't installed the AGP motherboard drivers. After I installed them, my D3D9 app went up to 110 mts, my GL app back up to 75, and the DelphiGL app dropped down to 80, as it was on my old system.
Removing the AGP drivers and reinstalling the ATI drivers got it back to what it was when I had just installed WinXP.
Has anyone experienced anything similar to this?
Or have any suggestions on how to fix it?
That my own apps got lower mts without AGP seems to suggest that perhaps the indices are not stored on the card (even though I create buffers on the card for them), but I really can't understand how the demo app could possibly drop from 230 to 80 mts.. just from enabling AGP..

very messed up :(

[Edited by - Erik Rufelt on February 10, 2005 10:00:49 AM]

The problem is AGP:
with AGP disabled the Delphi3D demo gets 230 MTris/sec
AGP 8x = 80
AGP 4x = 40

It seems like it stops storing the indices on the graphics card, or something like that, when AGP is enabled..

The max transform speed given by video card manufacturers is just a plain lie, the same way the PS2 can not really render 25 MTris/sec.
There seems to be marketing inflation at work. I suppose the guys from the marketing division come to see the devs and ask: under ideal conditions, with a ridiculously small frame buffer, using an index buffer that indexes the same few small triangles over and over again, using the largest batch size possible, without shaders, without even displaying the triangles, how fast can we go?
Our competitor said 100M, can we do more?
BTW, is there any chance of getting that throughput test app (D3DGeometryRateTest) in full screen, since vsync is usually forced on in windowed mode?
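(I know D3D9 lets an app request vsync off through the present parameters, but in windowed mode the driver control panel often forces it on anyway; a sketch:)

// Ask for no vsync; the driver control panel may still force it on.
D3DPRESENT_PARAMETERS pp;
ZeroMemory(&pp, sizeof(pp));
pp.Windowed = TRUE;
pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
pp.BackBufferFormat = D3DFMT_UNKNOWN;               // use the desktop format
pp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;
// pass &pp to IDirect3D9::CreateDevice(...)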

I've tested it in fullscreen, with the same results.
I can't upload a new version since I accidentally deleted the code hahaha =)
Anyway, the problem is not getting the 500 MTris/sec. I don't really care how many tris/sec the card can handle, I just want it to handle all the tris it can, something it seems very reluctant to do when AGP is enabled.
The same app, whose code I have searched for any kind of system-related checks (there is nothing like that), runs at 230 mts with AGP OFF, and 80 mts with AGP 8x (40 mts with AGP 4x).
It seems like for some reason the card uses system memory when AGP is enabled.. has anyone else ever experienced something like this??

I e-mailed ATI and got an auto-response with some BIOS settings one could try, for problems matching my keywords (?), so I think I'll try that and hope..

I think you are mistaken. The difference can probably be explained like this:
- when AGP is OFF, the geometry data is put in video memory directly (no bus transfer needed, the fastest memory pool available).
- when AGP is ON, the geometry data is put in AGP memory, which is slower than video memory (but much faster than system memory), since each time you render this data it has to be transferred from AGP to video memory temporarily.

Although in theory it's better to put data in video memory, don't forget that the amount of video memory is much lower than the amount of AGP memory, and that textures and color buffers have to reside in video memory. In a real application, AGP is probably the only good solution.
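In D3D9 terms, the usage flags are what steer this placement, though the driver always has the final say. A rough sketch (the helper name, buffer size and FVF are placeholders):

#include <d3d9.h>

// Roughly how usage flags steer buffer placement in D3D9.
void CreateBuffers(IDirect3DDevice9* dev, UINT sizeInBytes)
{
    IDirect3DVertexBuffer9* staticVB  = 0;
    IDirect3DVertexBuffer9* dynamicVB = 0;

    // Static data: WRITEONLY + DEFAULT pool usually lands in video memory.
    dev->CreateVertexBuffer(sizeInBytes, D3DUSAGE_WRITEONLY, D3DFVF_XYZ,
                            D3DPOOL_DEFAULT, &staticVB, NULL);

    // Dynamic data: typically placed in AGP memory and read across the bus.
    dev->CreateVertexBuffer(sizeInBytes, D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
                            D3DFVF_XYZ, D3DPOOL_DEFAULT, &dynamicVB, NULL);
}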

Y.

The geometry is not put in system memory, since that yields a geometry rate of only 10 MTris/sec; no more than that can be sent over 8x AGP, it's that much slower.
However, the indices might be put in system memory..
They definitely should not be.
A friend of mine has run the same apps as me, and he doesn't have this problem. He has a much older card and gets a higher geometry rate with AGP enabled than me, but when I disable AGP mine is higher, with the geometry stored on the card (as it should be).
Somehow my system does something that makes the whole process slower (like move things to system memory) when I enable AGP.
I have tried changing BIOS settings for AGP memory and some other things, with no result.
I have 256 MB memory on the card, and I draw maybe 1 million tris + colors.. + 1 million indices.. = 20 bytes per vertex = 20 MB, a tenth of what's available.
No textures or anything.
I've checked the free memory with some system scan apps; it says about 220 MB free.

Done some more testing. It seems like the problem only appears in OpenGL using VBOs.. still very strange, and it doesn't happen on other people's systems.. at least no one I have asked..
thx for all the replies

Guest Anonymous Poster
Marketing numbers made simple!

Step 1: Create a back-facing triangle (3 vertices).
Step 2: Turn on back-face culling.
Step 3: Create the biggest index array that you can (65000 indices?) using the smallest index size (i.e. shorts, maybe even bytes, although you'll have to play around to see what is fastest on your hardware).
Step 4: Populate the index array with (0, 1, 2) repeated until the index array is full.
Step 5: Draw your entire index array (which comes out to thousands of triangles).

Doing it this way, you get the fastest possible triangle rate the card is capable of: all vertices will always hit in the post-transform cache.
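A sketch of that recipe in OpenGL (a hypothetical helper; 65000 is rounded down to a multiple of 3, and in a real test the vertex and index data would live in buffer objects on the card):

#include <GL/glew.h>
#include <vector>

// Synthetic "marketing number" benchmark: one triangle referenced over and
// over, wound clockwise so it is back-facing under GL's default CCW front
// face, with back-face culling on. Every vertex hits the post-transform
// cache and nothing is rasterized.
void DrawMarketingTriangles()
{
    static const float verts[9] = {
        0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f,
        1.0f, 0.0f, 0.0f
    };

    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);   // discard the (back-facing) triangles

    std::vector<unsigned short> indices(65000 - 65000 % 3);
    for (size_t i = 0; i < indices.size(); ++i)
        indices[i] = (unsigned short)(i % 3);   // 0, 1, 2 repeated

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(),
                   GL_UNSIGNED_SHORT, &indices[0]);
}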

Is this a bogus test? Yes and no. The difference between a 10% post-transform cache hit rate and a 50% post-transform cache hit rate can be *huge* for performance, especially if the vertex program is long (these days, most fixed function is just a vertex program inside the driver).

Will you get this in practice? No way....

This is just one portion of your back-of-the-envelope performance figure (transfer rate and cache hit rate being the others, at which point it may not matter if you are fill limited).


