

OpenGL 2.0???


Recommended Posts

I was just wondering if anyone knows what's happening with OpenGL 2.0. The last thing I heard about it was when 3Dlabs released their plans for it, and it seemed great. I haven't heard how it's progressing since, though. I never want to switch back to DirectX; I'll use extensions if I have to! Does anyone know what's up? It may already be out for all I know, since I've only recently been able to start programming again. Thanks for any info.

According to what I heard months ago, sometime soon (how soon I don't know) an official standard should be released (what you saw from 3Dlabs was more or less a draft). A couple of months after that, OpenGL 2.0 compliant drivers should start showing up. I have no idea if Microsoft will implement an MCD for OpenGL 2.0 or upgrade their current OpenGL MCD to be OpenGL 2.0 compliant, so Windows users/developers may be out of luck in that regard and have to fall back on extensions to expose everything, like they do now (of course, other implementations will probably show OpenGL 2.0 the same support they've shown to other OpenGL versions).
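
For context, the extension fallback described above boils down to runtime string checks against what the driver reports, plus wglGetProcAddress on Windows. A minimal C sketch (the extension queried here is just an example):

    #include <string.h>
    #include <GL/gl.h>

    /* Returns non-zero if the named extension is advertised by the driver.
       Requires a current rendering context. */
    int HasExtension(const char *name)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        return ext != NULL && strstr(ext, name) != NULL;   /* crude but common check */
    }

    /* Usage: Microsoft's opengl32.dll only reports GL 1.1, so anything newer is
       reached this way, e.g.
           if (HasExtension("GL_ARB_multitexture")) { fetch entry points via wglGetProcAddress }  */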

Even if M$ don't do an MCD, I don't see anything to stop someone else coming up with one and implementing it.

Hopefully either M$ will do us one or someone else will step into the fray and do it... I don't think M$ will do one personally, as they are probably hoping that a non-MCD version of OGL 2.0 will make devs switch to D3D so they don't have to deal with extensions to get at the new stuff, thus silently killing OGL on Windows and making D3D the completely dominant 3D API for the market.

I would say we will almost certainly see the same level of driver support we have enjoyed thus far from the chip makers, however. I can't speak for ATI-based cards, but I've been very impressed by Nvidia's OGL support over the years.

Isn't the ATI 9700 meant to be OGL 2.0 hardware compatible, btw?

Personally, I've been looking forward to OGL 2.0 ever since it was announced, and now that we are getting closer to a hopeful release date (I'd be surprised if OGL 2.0 drivers didn't appear around the same time as the GeForce FX card) I'm looking forward to it even more. I think this next year could be a very interesting time for the graphics and computer hardware market in general.

so... opengl 2.0 doesn't have any extensions? interesting.

opengl 2.0 is looking more like dx, way to go.

quote:
Original post by niyaw
opengl 2.0 is looking more like dx, way to go.

Funny... DX is looking more like OGL with each new release as well.

-----------------------
"When I have a problem on an Nvidia, I assume that it is my fault. With anyone else''s drivers, I assume it is their fault" - John Carmack

I doubt OGL 2.0 will do away with extensions entirely (although I have to say that's a part of the spec I've not looked at/for), but an MCD would make it easier to access the core OGL 2.0 functionality rather than just adding all the OGL 2.0 stuff as extensions on top of OGL 1.4.

You know, I remember worrying about Microsoft. As I understand it, not only do the video card makers have to supply OpenGL drivers, but Microsoft also has some implementation on the OS side, and they haven't updated it past 1.1, I believe.
I was concerned they wouldn't update it, but with Linux and Mac systems competing, maybe they will have to.
Or not, since they still control over 90% of the market....

Guest Anonymous Poster
I think they will need to include support, because many professional programs and games need it to run; if those can't use it, then tons of games that require it will be without a home unless they're released on other OSes. Any way around it, if Microsoft don't include it, games and professional tools that need it will either die or move to other systems, and loyal users are likely to switch with them. Not all of them, of course, but it's a large number that Microsoft would lose out on. The number of games for other OSes will definitely increase, and more and more programmers who like OpenGL will switch if they want OGL2. I believe Microsoft won't take the chance of losing out on that.

I just wonder what Microsoft wants to accomplish by pushing DX so much. It's not like you pay to use it; it's free!!!

Guest Anonymous Poster
quote:
Original post by niyaw
so... opengl 2.0 doesn't have any extensions? interesting.

It'll have extensions eventually, but most things that are extensions now will be core functionality.

quote:
Original post by niyaw
so... opengl 2.0 doesn't have any extensions?

Of course it will have the ability to have extensions; it's basically OpenGL 1.4 with added functionality that exposes some internal redesign to allow for maximum pipeline reprogramming, among other things.
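
To give a flavour of that pipeline reprogramming, here is a minimal sketch of setting up a high-level shader pair with the shader-object interface that OpenGL 2.0 ended up standardising. The exact entry-point names were still in flux when this thread was written, and on Windows they would have to be fetched through the extension mechanism anyway, so treat this as an illustration only:

    #include <GL/gl.h>

    /* Sketch only: GL_VERTEX_SHADER, glCreateShader etc. are OpenGL 2.0
       entry points and tokens, not present in the stock 1.1 headers. */
    GLuint BuildProgram(const char *vs_src, const char *fs_src)
    {
        GLuint vs   = glCreateShader(GL_VERTEX_SHADER);
        GLuint fs   = glCreateShader(GL_FRAGMENT_SHADER);
        GLuint prog = glCreateProgram();

        glShaderSource(vs, 1, &vs_src, NULL);   /* high-level source, not assembly */
        glCompileShader(vs);
        glShaderSource(fs, 1, &fs_src, NULL);
        glCompileShader(fs);

        glAttachShader(prog, vs);
        glAttachShader(prog, fs);
        glLinkProgram(prog);                    /* activate with glUseProgram(prog) */
        return prog;
    }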

In reference to the person asking why MS wants developers to use DX (and the person below him):

DX will always remain free; it is not a sales tool for Microsoft but a marketing tool for their OS and for market leverage.

If people develop in DirectX, they must use a Microsoft OS in order to run the game/application, thereby making sure that gamers need Microsoft =]

By the way, isn't the ARB going to have a new meeting sometime this month? Hope there's some good news on OpenGL 2's development!

- Lurking

I'm pretty sure M$ will support OpenGL. Aren't a couple of the people on the board that decides what happens with OpenGL from M$?? Also, OpenGL is used for more than just games, and Microsoft, in my opinion, needs to support it in order to take over the world. When I read the specs a long time ago I was excited about the shader language and what they were doing with vertex shaders, but of course now there are three parties coming out with shader and graphics languages, and $10 says they all look the same, just like C code. I don't even understand why Nvidia is bothering with Cg.

quote:
Original post by dead_roses
I don't even understand why Nvidia is bothering with Cg.

because while opengl 2 only exists on paper and the dx9 sdk is only available to beta testers, cg is already here and has already been used for over half a year by many programmers.

naturally, nvidia doesn't provide an opengl ati profile for cg, which is another incentive to get an nvidia card.

quote:
Original post by niyaw
quote:
Original post by dead_roses
I don't even understand why Nvidia is bothering with Cg.

because while opengl 2 only exists on paper and the dx9 sdk is only available to beta testers, cg is already here and has already been used for over half a year by many programmers.

naturally, nvidia doesn't provide an opengl ati profile for cg, which is another incentive to get an nvidia card.


Actually, nvidia simply does not provide a _STANDARD_ opengl profile; that's why it doesn't work on ati cards. They have implemented ARB_vertex_program now, so at least that works too, and ARB_fragment_program has been out for quite a while. If nvidia _WANTED_ to make cg a real standard, they would have released the implementation for it.

cg is just another way for nvidia (the other one being fully incompatible, hw-specific extensions) to try to get people to develop for nvidia only.

The ati cards don't have many extensions which are not ARB. They try to follow the standards, while nvidia always tries to build up their own. They are worse than microsoft in this regard.
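
For reference, the ARB programs mentioned are already straightforward to use once detected; a minimal sketch of loading an ARB vertex program on Windows (entry points fetched at startup here for brevity, error handling omitted):

    #include <string.h>
    #include <windows.h>
    #include <GL/gl.h>
    #include "glext.h"   /* ARB_vertex_program tokens and function typedefs */

    static PFNGLGENPROGRAMSARBPROC   glGenProgramsARB;
    static PFNGLBINDPROGRAMARBPROC   glBindProgramARB;
    static PFNGLPROGRAMSTRINGARBPROC glProgramStringARB;

    GLuint LoadVertexProgram(const char *src)   /* src holds !!ARBvp1.0 text */
    {
        GLuint prog;

        glGenProgramsARB   = (PFNGLGENPROGRAMSARBPROC)wglGetProcAddress("glGenProgramsARB");
        glBindProgramARB   = (PFNGLBINDPROGRAMARBPROC)wglGetProcAddress("glBindProgramARB");
        glProgramStringARB = (PFNGLPROGRAMSTRINGARBPROC)wglGetProcAddress("glProgramStringARB");

        glGenProgramsARB(1, &prog);
        glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
        glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                           (GLsizei)strlen(src), src);
        /* enable with glEnable(GL_VERTEX_PROGRAM_ARB) before drawing */
        return prog;
    }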

"take a look around" - limp bizkit
www.google.com

I also think MS will stick with the frozen 1.1. One thing D3D doesn't have is double-precision floating point; that's why MS thinks GL is only used for CAD and "accurate" apps. The reason I think there isn't GL2 on Windows is that no one is yelling at MS to implement it. Companies don't care about the well-being of others; they use whatever they think will help them the most, and according to MS that is DX, hence no GL support beyond 1.1.

Also, people are slowly convincing nvidia/ati to dump their own extensions and support the ARB vertex/fragment programs. For demo coders, extended OpenGL is a dream; the rest of the world, I think, wants a simpler common API layer. Like people said, games are about content and less about technology, unless you're a technology company, but many more are game-oriented companies.

Perhaps if enough people wrote to MS and explained that they want a newer GL, MS would listen. When people complained about C++ standards compliance in VC++, MS noticed and will have a more compliant compiler, up to 98% I think I read. The hard part is trying to convince D3D people to dump it and go GL instead, which is kind of hard when GL is extended so much that it gets confusing to program with, well at least in my case. It might be too late on Windows, while GL on Linux is a no-brainer.

Off topic:
Do people REALLY need all those fancy extensions, anyway? I am working on a multiplayer game, and I am using only the CORE of GL 1.1, no extensions whatsoever, and the game looks good so far.

From memory there are NO extensions used in KEA (link below).
It requires OpenGL 1.3+ (though I will limit it to only what the hardware supports).

http://uk.geocities.com/sloppyturds/kea/kea.html
http://uk.geocities.com/sloppyturds/gotterdammerung.html

The fancy extensions (non-standard ones) should come later in the dev cycle, once you've got all the core stuff done and want to tweak things for one card or the other.

The other extensions, the ones which are ARB standard or core 1.4 but have to be accessed via the extension system on Windows, I would say are perfectly justified... yes, it's nice if you can make your app with only 1.1, but why restrict yourself? If you have a need for the 1.2, 1.3 or 1.4 core functions then you might as well use 'em.

As for M$ supporting OGL 2.0, maybe the ARB can talk 'em into it; if not, I still think there is nothing stopping anyone else from releasing the libs/DLLs. As long as they don't stamp all over what M$ have done it would be hard for M$ to complain, as all the MCD does (as far as I know) is create a link between the application layer and the driver layer so we don't have to worry about getting at the extensions and such like from the driver. (Granted, I could be wrong, but it seems sane to me.)
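
As an illustration of what "accessed via the extension system" means in practice, here is a sketch of fetching a post-1.1 core entry point on Windows (glDrawRangeElements is core since 1.2, but Microsoft's opengl32.dll still only exports 1.1):

    #include <windows.h>
    #include <GL/gl.h>
    #include "glext.h"   /* function pointer typedefs for post-1.1 entry points */

    static PFNGLDRAWRANGEELEMENTSPROC glDrawRangeElements;   /* core since GL 1.2 */

    /* Call once after the rendering context is current; returns non-zero on success. */
    int InitCoreEntryPoints(void)
    {
        glDrawRangeElements =
            (PFNGLDRAWRANGEELEMENTSPROC)wglGetProcAddress("glDrawRangeElements");
        return glDrawRangeElements != NULL;
    }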

You need extensions to make the game go fast, e.g. vertex arrays and the other DMA stuff. The vendor extensions also have more features than the ARB extensions. You need register combiners or fragment programs to do per-pixel lights unless your engine uses lightmaps, there are depth extensions that help with stencil shadow volumes, etc. So it depends: if you want an old-tech game then GL 1.1 is probably enough; if you want Doom 3 tech then extensions are a must.

I don't have anything against ARB extensions, which will eventually make it into core GL, however vendor extensions that are never promoted to ARB bother me; that way you need to learn each vendor's system if your game is to run on all hardware. The single biggest reason people hail OpenGL 2 and want it so badly is that it is supposed to be unextended while encapsulating all the new technology in a hardware-independent way, kind of like DX9 which is around the corner. That's why it will be hard to make people migrate to OpenGL from DX9, which is very close to GL2 but has more features that make development easier. That's in part GL's fault, since it has to be vendor independent (I'm talking about GL not having access to OS resources while DX9 does, e.g. the old glaux lib which is no longer documented or worked on).
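
To make the "fast path" concrete, here is the usual vertex-array submission, with the EXT_compiled_vertex_array lock applied only when the driver advertises it. This is a sketch; the buffer names and the externally fetched function pointers are placeholders:

    #include <GL/gl.h>
    #include "glext.h"

    /* Mesh data filled in elsewhere; names here are placeholders. */
    extern float          g_positions[];   /* 3 floats per vertex */
    extern float          g_texcoords[];   /* 2 floats per vertex */
    extern unsigned short g_indices[];
    extern int            g_numVertices, g_numIndices;

    /* NULL if EXT_compiled_vertex_array is not supported. */
    extern PFNGLLOCKARRAYSEXTPROC   glLockArraysEXT;
    extern PFNGLUNLOCKARRAYSEXTPROC glUnlockArraysEXT;

    void DrawMesh(void)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, g_positions);
        glTexCoordPointer(2, GL_FLOAT, 0, g_texcoords);

        if (glLockArraysEXT)                      /* hint: vertex data won't change */
            glLockArraysEXT(0, g_numVertices);

        glDrawElements(GL_TRIANGLES, g_numIndices, GL_UNSIGNED_SHORT, g_indices);

        if (glUnlockArraysEXT)
            glUnlockArraysEXT();

        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    }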

Guest Anonymous Poster
Well, even if Microsoft doesn't implement a version of OpenGL 2.0, maybe someone will take it to court and use the recent Sun ruling as a basis to force Microsoft to implement (a good) one: http://www.washingtonpost.com/wp-dyn/articles/A14210-2002Dec5.html

Maybe.

One question: is OpenGL 2.0 going to include an object-oriented interface? I want that really badly. It just feels kind of strange growing up object-oriented and then using a procedural API. It would be nice, because I'm tired of procedural programming and that COM crap. For God's sake, Microsoft, if you want to make object-oriented APIs, use regular structures, not some homebrewn crap.

I refuse to be hitched to Direct3D 9 until OpenGL 2.0 comes out. I think the ARB should hold meetings more aggressively and roll all of the extensions into the core on every release. Maybe, instead of a yearly ARB meeting, they should have twice-yearly ARB meetings to keep up with Direct3D. The fault of OpenGL lies with the ARB. Microsoft probably never has to have meetings, because there's one opinion and problems are settled rather quickly. The ARB has to do something to keep up with Microsoft. It will be a sad day to see all CAD programs implemented in Direct3D, and Carmack using Direct3D. Speaking of which, I bet Carmack is one of the main reasons Microsoft would implement OpenGL 2.0. Sure, he IS the most influential person in the industry, but he also carries with him an offspring of developers using his engines...

RTFM   STFW   EMFP
---------------------
[  c o d e  m a t r i x  ]

quote:
Original post by elendil67
One question? Is OpenGL 2.0 going to include an object-oriented interface?

No. It's going to be basically the same as OpenGL 1.4.
quote:
Original post by elendil67
I want that really badly.

Look into SGL. It does contain more than stock OpenGL, though.

Fast games need good algorithms, not extensions. Per-pixel lighting is part of core OpenGL now; you could match Doom 3 visually without using any extensions, just OpenGL 1.4 (if not exactly, you could make a bloody close approximation).

This is a trap I've seen lots of people fall into (usually beginners): "how come my game looks crap but Doom 3 looks great, I must need whizz-bang feature X+Y+Z (which is only available on the newest graphics card)". IIRC the most advanced thing in Quake 3 is multitexture! (available since the Voodoo2), yet it still looks good (better than most games of today).
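
For reference, the multitexturing being credited here really is only a handful of calls; a sketch of a Quake-style base-texture-times-lightmap triangle (the texture handles and coordinate arrays are placeholders, and glActiveTextureARB/glMultiTexCoord2fARB are fetched with wglGetProcAddress as usual):

    #include <GL/gl.h>
    #include "glext.h"

    extern PFNGLACTIVETEXTUREARBPROC   glActiveTextureARB;     /* fetched at startup */
    extern PFNGLMULTITEXCOORD2FARBPROC glMultiTexCoord2fARB;

    /* One triangle, base texture modulated by a lightmap in a single pass. */
    void DrawLitTriangle(GLuint baseTex, GLuint lightmapTex,
                         const float v[3][3], const float uv[3][2], const float luv[3][2])
    {
        int i;

        glActiveTextureARB(GL_TEXTURE0_ARB);        /* unit 0: base texture */
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, baseTex);

        glActiveTextureARB(GL_TEXTURE1_ARB);        /* unit 1: lightmap, modulated */
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, lightmapTex);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

        glBegin(GL_TRIANGLES);
        for (i = 0; i < 3; ++i) {
            glMultiTexCoord2fARB(GL_TEXTURE0_ARB, uv[i][0], uv[i][1]);
            glMultiTexCoord2fARB(GL_TEXTURE1_ARB, luv[i][0], luv[i][1]);
            glVertex3f(v[i][0], v[i][1], v[i][2]);
        }
        glEnd();

        glActiveTextureARB(GL_TEXTURE1_ARB);        /* tidy up: disable unit 1 again */
        glDisable(GL_TEXTURE_2D);
        glActiveTextureARB(GL_TEXTURE0_ARB);
    }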


http://uk.geocities.com/sloppyturds/kea/kea.html
http://uk.geocities.com/sloppyturds/gotterdammerung.html

[edited by - zedzeek on December 9, 2002 11:39:59 PM]

