
OpenGL design


In the OpenGL wiki it says: "Although it is possible for the API to be implemented entirely in software, it is designed to be implemented mostly or entirely in hardware." Can anyone expand on that? How would it be implemented entirely in software, and how is that different from a hardware implementation?

 

The wiki article:

http://en.wikipedia.org/wiki/OpenGL



How would it be implemented entirely in software, and how is that different from a hardware implementation?

The front-end is still OpenGL's API, but the back-end uses another API, usually a 2D API natively supported on the target platform. In that case, the in-between steps that normally run on the GPU have to be emulated on the CPU, e.g. the transform, projection, and rasterization (to name just the basic ones). Blending may already be supported by the 2D API, but in general one has to implement every step needed to produce final pixel data.
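To make "emulating the in-between steps" a bit more concrete, here is a minimal sketch of the per-vertex work such an implementation has to do on the CPU. This is illustrative C++ only, with made-up names, not code from any real implementation:

    #include <array>

    // A 4x4 column-major matrix and a homogeneous vertex, as OpenGL defines them.
    using Mat4 = std::array<float, 16>;
    struct Vec4 { float x, y, z, w; };

    // Apply the modelview-projection matrix -- the step a vertex shader or
    // fixed-function hardware would normally perform.
    Vec4 Transform(const Mat4& m, const Vec4& v) {
        return { m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w,
                 m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w,
                 m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w,
                 m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w };
    }

    // Perspective divide and viewport mapping: clip space -> window coordinates.
    // A software implementation runs this for every vertex of every frame.
    Vec4 ToWindow(const Vec4& clip, int width, int height) {
        const float invW = 1.0f / clip.w;
        return { (clip.x * invW * 0.5f + 0.5f) * width,
                 (clip.y * invW * 0.5f + 0.5f) * height,
                 clip.z * invW,   // normalized depth for the z-buffer
                 invW };          // kept for perspective-correct interpolation
    }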

 

Perhaps the best known software renderer is in Mesa 3D. You may look into its source code if you're really interested.


With a hardware implementation your program issues OpenGL commands.  Your driver takes these commands and converts them to something your graphics card can understand.  Your graphics card does all the work (drawing/etc).

 

With a software implementation all of the work is done in software instead.  So vertex setup, transformation, clipping, rasterization, fragment shading and blending are all performed in software to build a final image which is then written to your display.
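To make that concrete, here is a heavily simplified sketch of the rasterization/depth-test/shading inner loop that runs on the CPU in that case. The names are hypothetical, and real software rasterizers (Mesa's, for example) are enormously more elaborate and optimized:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Framebuffer {
        int width, height;
        std::vector<uint32_t> color;  // packed RGBA pixels
        std::vector<float>    depth;  // assumed cleared to a large value
    };

    // Signed-area "edge function": the sign tells which side of edge a->b
    // the point p lies on.
    static float Edge(float ax, float ay, float bx, float by, float px, float py) {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }

    // Rasterize one flat-colored triangle with a depth test -- work a GPU
    // performs in dedicated hardware for millions of triangles per frame.
    void DrawTriangle(Framebuffer& fb,
                      float x0, float y0, float z0,
                      float x1, float y1, float z1,
                      float x2, float y2, float z2, uint32_t rgba) {
        const int minX = std::max(0, (int)std::min({x0, x1, x2}));
        const int maxX = std::min(fb.width  - 1, (int)std::max({x0, x1, x2}));
        const int minY = std::max(0, (int)std::min({y0, y1, y2}));
        const int maxY = std::min(fb.height - 1, (int)std::max({y0, y1, y2}));
        const float area = Edge(x0, y0, x1, y1, x2, y2);
        if (area == 0.0f) return;  // degenerate triangle

        for (int y = minY; y <= maxY; ++y) {
            for (int x = minX; x <= maxX; ++x) {
                const float px = x + 0.5f, py = y + 0.5f;
                // Barycentric weights from the three edge functions.
                const float w0 = Edge(x1, y1, x2, y2, px, py) / area;
                const float w1 = Edge(x2, y2, x0, y0, px, py) / area;
                const float w2 = Edge(x0, y0, x1, y1, px, py) / area;
                if (w0 < 0 || w1 < 0 || w2 < 0) continue;  // outside triangle
                const float z = w0 * z0 + w1 * z1 + w2 * z2;
                const int idx = y * fb.width + x;
                if (z >= fb.depth[idx]) continue;          // depth test
                fb.depth[idx] = z;
                fb.color[idx] = rgba;  // fragment shading/blending would go here
            }
        }
    }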

 

This is a bit simplified.  Many times OpenGL will perform some work in hardware and some in software, with the intention of balancing the work between the two processors (CPU and GPU), selecting whichever is best for each task (and depending on what your graphics card is able to do).

 

If you're interested in exploring a software implementation be aware that they are slow.  I don't mean they're half the speed, or quarter, or even one-tenth.  They're play-Quake-at-less-than-one-frame-per-second slow.  This is OK if all you ever write is trivial tech demos.  If you want to do anything serious, forget about software right now.


Mesa3D is a fully compliant OpenGL implementation that includes a full software renderer.

 

 

 

If you're interested in exploring a software implementation be aware that they are slow.  I don't mean they're half the speed, or quarter, or even one-tenth.  They're play-Quake-at-less-than-one-frame-per-second slow.  This is OK if all you ever write is trivial tech demos.  If you want to do anything serious, forget about software right now.

 

 

I strongly disagree.  I would call 3D rendering for film quite serious. I would also call 3D rendering for printed material quite serious.

 

OpenGL is not just about games. OpenGL is about rendering generally. That might mean rendering for a 320x240 cell phone. That might mean rendering for a 1080p television display. That might mean rendering for a massive scientific dataset at 43200x28800 resolution and even much more.

 

If your 3D experience is limited only to games with fast frame rates and soft-realtime requirements, then it might be reasonable to only think about hardware implementations.  But if your 3D viewpoint includes offline processing, such as rendering that takes place in movies, print, other physical media, or scientific rendering, software rendering is a pretty good thing.

 

Think about the resolution we get out of modern graphics cards.

 

Monitors with DVI can get up to about 1920x1200 resolution. That's about 2 megapixels.  Most 4k screens get up to 8 megapixels. Compare it with photographers who complain about not being able to blow up their 24 megapixel images. In the physical film world, both 70mm and wide 110 are still popular when you are blowing things up to wall-size, either in print or in movies. The first is about 58 megapixel equivalent, the second about 72 megapixel equivalent. 

 

When you see an IMAX 3D movie, I can guarantee you they were not worried about how quickly their little video cards could max out on fill rate. They use an offline process that generates large high quality images very slowly.

 

 

OpenGL is first and foremost a rendering API. It does not specify output media, nor does it specify mandatory resolutions.  Games might be a common use, but they are not the only use.

 

The rendering API allows you to use any resolution you want, and allows an implementation to output the image to whatever media it wants, including saving them to disk. All that matters is that rendering happens.

 

Let's say you are working with scientific computing rather than games. And let's say your scientific image needs to be frequently referenced, so you decide to print it in high resolution and mount it on a wall.  You are in America where they still use inches, so you select a relatively common professional poster size, a 72 inch x 48 inch print, specifying a 600 dpi resolution for a high quality image.  A bit of math (72 in x 600 dpi = 43,200 px; 48 in x 600 dpi = 28,800 px) means you need to render a 43200 x 28800 pixel image for your scientific data set.
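Since no consumer GPU will give you a 43200 x 28800 viewport, images like this are usually rendered in tiles: narrow the frustum to one tile of the image plane, render, read the pixels back, repeat. A rough sketch of the idea, using legacy GL for brevity; RenderScene() and WriteTileToImage() are hypothetical helpers, and the window or FBO is assumed to be at least one tile in size:

    #include <GL/gl.h>
    #include <algorithm>
    #include <vector>

    // Hypothetical scene and output helpers -- not part of OpenGL.
    void RenderScene();
    void WriteTileToImage(int x, int y, int w, int h, const unsigned char* rgba);

    // Render a 43200 x 28800 image as 2048 x 2048 tiles.  Each pass narrows
    // the projection frustum to one tile of the full image plane, renders,
    // and reads the pixels back; no viewport ever exceeds hardware limits.
    void RenderPoster() {
        const int fullW = 72 * 600;   // 43200 px
        const int fullH = 48 * 600;   // 28800 px
        const int tile  = 2048;
        // Frustum of the whole image at the near plane (example values).
        const double l = -1.5, r = 1.5, b = -1.0, t = 1.0, n = 1.0, f = 100.0;
        for (int ty = 0; ty < fullH; ty += tile) {           // from bottom row up
            for (int tx = 0; tx < fullW; tx += tile) {
                const int w = std::min(tile, fullW - tx);
                const int h = std::min(tile, fullH - ty);
                glMatrixMode(GL_PROJECTION);
                glLoadIdentity();
                // Sub-frustum covering just this tile of the image plane.
                glFrustum(l + (r - l) *  tx      / fullW,
                          l + (r - l) * (tx + w) / fullW,
                          b + (t - b) *  ty      / fullH,
                          b + (t - b) * (ty + h) / fullH, n, f);
                glViewport(0, 0, w, h);
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                RenderScene();
                std::vector<unsigned char> px(size_t(w) * h * 4);
                glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, px.data());
                WriteTileToImage(tx, ty, w, h, px.data());
            }
        }
    }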

 

When you configure your somewhat unconventional display size, you will not be concerned about fill rate or frames per second. Also, you will want to set your rendering hints to favor quality, not performance.

 

 

The OpenGL specification is based around operations and results.  It specifies operations, not nanoseconds.


Mesa3D is a fully compliant OpenGL implementation that includes a full software renderer.

 

 

 

If you're interested in exploring a software implementation be aware that they are slow.  I don't mean they're half the speed, or quarter, or even one-tenth.  They're play-Quake-at-less-than-one-frame-per-second slow.  This is OK if all you ever write is trivial tech demos.  If you want to do anything serious, forget about software right now.

 

 

I strongly disagree.  I would call 3D rendering for film quite serious. I would also call 3D rendering for printed material quite serious.

 

OpenGL is not just about games.

 

True, but the site is gamedev.net, so I'm assuming a more limited scope: i.e. a comparison of hardware-accelerated OpenGL via typical consumer-level 3D cards vs. the typically encountered software implementations (Mesa's and Microsoft's).


You can definitely play games like Quake 1-3 on a software-only renderer. With things like AVX2, DDR4 and CPUs with 8+ cores, the performance disparity between software rendering and "hardware" rendering decreases significantly.


Maybe I should have said "play a game with 16 year old graphics at less than 1 fps" instead then, eh?

 

The point is: "Quake" wasn't meant to be taken literally, and it's a shame that it was, because doing so totally detracts from the point being made here, which is that the common software implementations are slower than is practical for use.


I was expecting someone to point out that both "hardware rendering" and "software rendering" are run on hardware. It's just that first one runs on the GPU and second one on the CPU. That's all there is to it.

If your 3D experience is limited only to games with fast frame rates and soft-realtime requirements, then it might be reasonable to only think about hardware implementations.  But if your 3D viewpoint includes offline processing, such as rendering that takes place in movies, print, other physical media, or scientific rendering, software rendering is a pretty good thing.

 

 

Sorry, I am new to this: what are soft-realtime requirements? And I think hardware implementations are processes that go through the GPU? I'm not quite sure what software rendering is; processes that are implemented on the CPU?



I'm not quite sure what software rendering is

 

A software renderer is any renderer which is implemented in software instead of by specialized hardware, such as a GPU.



I'm not quite sure what software rendering is; processes that are implemented on the CPU?

 

Software rendering runs on the CPU, hardware rendering runs on the GPU.


 

If your 3D experience is limited only to games with fast frame rates and soft-realtime requirements, then it might be reasonable to only think about hardware implementations.  But if your 3D viewpoint includes offline processing, such as rendering that takes place in movies, print, other physical media, or scientific rendering, software rendering is a pretty good thing.

 

 

Sorry, I am new to this: what are soft-realtime requirements? And I think hardware implementations are processes that go through the GPU? I'm not quite sure what software rendering is; processes that are implemented on the CPU?

 

 

A realtime requirement is that the software must complete the task within a certain amount of time.

 

There are soft and hard requirements.

 

Some examples are probably in order.

 

Imagine the machine that puts the caps on glass beverage bottles. The machine is part of an assembly line and the bottles flow through rapidly. If the machine stamps the cap at the wrong time the results are an error. It might form a bad seal or even break the bottle.  There is a very specific time window for the task. If there is a problem the result is catastrophic -- the glass bottle is unusable. This is called a hard realtime requirement.

 

Next, video games. Let's say the game is running on a commercial game console attached to a television. The game's screen is running at 60Hz. If the game takes too long to display a screen the results are not smooth and are considered an error. Each frame must be completed within the time window. Unlike the glass bottles, if the time constraint is not met the result is an annoyance but not catastrophic. This is called a soft realtime requirement.
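To make "soft realtime" concrete in code: a 60 Hz game loop is just a repeated ~16.7 ms budget. A minimal C++ sketch, where Update() and Render() are hypothetical stand-ins for the real per-frame work:

    #include <chrono>
    #include <cstdio>

    // Hypothetical per-frame work.
    void Update() { /* game logic */ }
    void Render() { /* draw calls  */ }

    int main() {
        using Clock = std::chrono::steady_clock;
        const auto budget = std::chrono::microseconds(16667);  // one 60 Hz frame
        for (;;) {
            const auto start = Clock::now();
            Update();
            Render();
            const auto elapsed = Clock::now() - start;
            if (elapsed > budget) {
                // Missing the deadline drops a frame -- an annoyance, not a
                // catastrophe.  That is what makes the requirement "soft".
                const auto us = std::chrono::duration_cast<std::chrono::microseconds>(elapsed);
                std::printf("frame overran its budget: %lld us\n", (long long)us.count());
            }
        }
    }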

 

 

 

 

As for the difference between software rendering and hardware rendering, it comes down to where the work takes place. Simply put, in software rendering the work happens on the CPU instead of on dedicated GPU hardware.

 

Here comes the history lesson.

 

Because it is relevant for the timeline, note that the hardware developments leading to 3D graphics are fairly recent. Integer division was frequently done in software for most of computing history. In the early '80s it was common to have a co-processor for division since many chips didn't support it. The x86 family included a dedicated integer divider, which contributed to its popularity in business machines. In the mid-1980s programs started relying on floating point math, which was slow but gave better results than fixed point math for many business uses. The result was the x87 co-processor for floating point math, which many businesses paid a premium for.

 

It wasn't until 1990 that dedicated floating point hardware penetrated the home computer market, and even then it was pretty rare. Few major games could rely on hardware floating point being present, and it wasn't until around 1993 or so that mainstream games started to require 486DX processors, which had dedicated floating point.

 

Even the Nintendo DS, which launched in 2004, did not have floating point hardware, and it also relied on a dedicated co-processor for integer division.

 

Before 3D graphics cards became common around the 2000-2002 era, 3D programs would do all the math in software, occasionally taking advantage of dedicated math co-processors. They would compute the results as a large 2D image, and then display the image.

 

Before 1995 or so, everything was done in software. The results were usually computed with relatively slow software floating point and relatively slow main memory, which leads to today's common belief that software rendering is too slow to be useful. While many people remember them as slow, note that they were doing software-based floating point on sub-25MHz machines (rather than multi-core multi-GHz machines) and memory speed was several hundred nanoseconds (rather than the 3.75ns in today's newer machines). 

 

Since many systems had 2D graphics acceleration in the mid-1990s, there were many games that would do all the math on the fancy new dedicated floating point processors to transform the polygons, and then use line-drawing or polygon-drawing functions to render everything. It wasn't as pretty, but wireframe 3D graphics still provided great games in the early '90s.

 

The first few consumer-level 3D cards appeared in mid-1995. (There were very expensive cards before that, used for scientific simulations and specialized CAD software.) They provided dedicated hardware for matrix math. Instead of doing all the math on the main processor, specialized hardware could perform the matrix math using specialized floating point processors in just a few cycles. These devices usually also provided high-speed memory used for rendering. Many of these devices also let you do all the rendering in place, so you didn't need to copy the rendered image over to video memory; they either replaced or supplemented existing graphics cards.

 

The next few rounds introduced hardware texturing and lighting. You could store textures on the card instead of main memory. When you needed to draw a triangle it would automatically copy, scale, and shear the texture as necessary for the triangle. The hardware lighting meant you could apply light levels to the triangle corners and it would lighten or darken the texture as needed.

 

To give you a feel for the timeline, hardware-accelerated transform and lighting first appeared in DirectX 7. Before that, Direct3D and OpenGL drivers could take advantage of the matrix math co-processors and specialized memory, but they still did quite a lot of the heavy work in software.

 

Today the vast majority of rendering work takes place on dedicated hardware. We can upload the textures, upload the meshes, upload the transformations, and upload compute-intensive scripts to the card. With all of them in place, we issue instructions to the card and it does all the heavy work on its own processors rather than the main CPU.

 

History lesson over. Time to wake up.

 

Software rendering just means the CPU does the work of turning point clouds and textures into a beautiful picture, rather than relying on dedicated hardware to do the job instead.


It should be added here:

 

 

In the OpenGL wiki it says: "Although it is possible for the API to be implemented entirely in software, it is designed to be implemented mostly or entirely in hardware." Can anyone expand on that? How would it be implemented entirely in software, and how is that different from a hardware implementation?

 

We're not just talking about "software rendering", we're talking about software implementations of OpenGL.  The two things are not the same: it's possible to have a software renderer that looks and acts absolutely nothing like OpenGL.  For the OP's benefit, all of the old software renderers from older (1990s and earlier) games were of this class.  They were custom renderers written specifically for a game engine and highly tuned to run well on then-current CPUs.  Much of the above discussion relates to this kind of software renderer, not to a software implementation of OpenGL.

How many FLOPS are you guys getting? Are you doing floating point ops in software and in hardware?


 If you want to do anything serious, forget about software right now.

This is very true, and I'm sorry I was not able to take part in this discussion earlier, since I think it went in the wrong direction.

 

Rendering time does matter! It matters a lot, so I have to disagree with most "facts" frob used to illustrate his opinion.

 

But if your 3D viewpoint includes offline processing, such as rendering that takes place in movies, print, other physical media, or scientific rendering, software rendering is a pretty good thing.

 

It is maybe good, but HW-accelerated is better. With legacy OpenGL it was really necessary to implement algorithms on the CPU side in order to have ray tracing and similar stuff. But now it is not. And if we can get several orders of magnitude of acceleration through GPU usage, I simply don't understand why anybody would defend slower solutions.

 

There are some cases where the CPU can beat the GPU at rendering: when cache coherence is very weak, or when different technologies compete for resources and communicate through a high number of small buffers that have to be synchronized. In most cases, beating a GPU like the GK110 (with 2880 cores, six 64-bit memory controllers, and GDDR5 memory) at graphics work (where parallelization can be massive) is almost impossible. And we are talking about orders of magnitude!

 

 

Think about the resolution we get out of modern graphics cards.

 

Monitors with DVI can get up to about 1920x1200 resolution. That's about 2 megapixels.  Most 4k screens get up to 8 megapixels. Compare it with photographers who complain about not being able to blow up their 24 megapixel images. In the physical film world, both 70mm and wide 110 are still popular when you are blowing things up to wall-size, either in print or in movies. The first is about 58 megapixel equivalent, the second about 72 megapixel equivalent. 

 

When you see an IMAX 3D movie, I can guarantee you they were not worried about how quickly their little video cards could max out on fill rate. They use an offline process that generates large high quality images very slowly.

What does the resolution matter? This is a very inappropriate example.

If a GPU can render a 2-megapixel scene in 16 ms, a 72-megapixel scene can be rendered in 576 ms (36x the pixels, so 36 x 16 ms). That's only 0.6 s.

Using a CPU implementation (what we call "software"), it would take almost a minute.

Of course, it depends on the underlying hardware.

 

 

In the film industry, I bet it is not irrelevant whether some post-production step lasts several days or several months.

There are a lot of GPU-accelerated renderers for professional 3D applications, although they use CUDA (probably because it was easier to port to CUDA than to OpenGL, and because OpenGL lacks precision control and its tessellation and compute support is relatively new).

 

There are companies today that sell even faster software rendering middleware for running modern games with real-time speeds for systems with under-featured graphics cards that cannot run modern shaders.

Can you give a useful link? How can a CPU be even near the speed of a GPU and also handle other tasks (AI, game logic, resource handling, etc.)? This is science fiction, or far slower than it needs to be to be useful. Be honest: who would buy 3D video cards if games could be played smoothly on the CPU alone?

 

You can definitely play games like Quake 1-3 on a software-only renderer. With things like AVX2, DDR4 and CPUs with 8+ cores, the performance disparity between software rendering and "hardware" rendering decreases significantly.

I really doubt it. Any useful link to support this claim?



I really doubt it. Any useful link to support this claim?
Here is an overview of the Quake 2 software renderer:

 

http://fabiensanglard.net/quake2/quake2_software_renderer.php

 

IIRC, it was the main renderer people used when the game came out; there weren't many people with hardware-accelerated cards (we're talking about people running Quake 2 on the first Pentium CPUs, not 8-core number-crunching monsters). All previous id games also used software renderers (Quake 1, Doom, Wolfenstein 3D). I'm not sure if Quake 3 had one.


 


I really doubt it. Any useful link to support this claim?
Here is an overview of the Quake 2 software renderer:

 

http://fabiensanglard.net/quake2/quake2_software_renderer.php

 

IIRC, it was the main renderer people used when the game came out; there weren't many people with hardware-accelerated cards (we're talking about people running Quake 2 on the first Pentium CPUs, not 8-core number-crunching monsters). All previous id games also used software renderers (Quake 1, Doom, Wolfenstein 3D). I'm not sure if Quake 3 had one.

 

 

The Quake 2 software renderer was not a software implementation of OpenGL, which was what the OP was asking about.

 

At this stage I really really regret even mentioning the word "Quake" here as my doing so seems to have steered this thread down a completely irrelevant path.  What I meant was a Quake-like level of scene complexity, as in low-polycount, low-resolution textures, low screen resolution (maxing out at perhaps 640x480 or 800x600), no complex effects, etc.

 

And of course offline rendering still uses software, but again this is completely irrelevant.  We're gamedev.net so we're talking about realtime rendering in a game engine using consumer-level hardware, unless explicitly stated otherwise.

 

So taking these two together, that's the kind of scenario where a software OpenGL implementation will get you below 1 fps even with the low level of scene complexity I mention.  Yes, even on a modern CPU.  Anybody want a citation?  OK, see for example this thread where the OP got 0.5 fps owing to using an OpenGL feature that was supported by the driver but not in hardware.
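Incidentally, you can detect this situation at startup by checking which implementation your context actually got. GL_VENDOR and GL_RENDERER are standard queries, and the substrings below match the well-known software implementations (Microsoft's "GDI Generic", Mesa's llvmpipe/softpipe); on Mesa, setting the LIBGL_ALWAYS_SOFTWARE=1 environment variable forces the software path if you want to measure it yourself. A small sketch:

    #include <GL/gl.h>
    #include <cstdio>
    #include <cstring>

    // Call with an OpenGL context already current.  GL_VENDOR and GL_RENDERER
    // are standard queries; the substrings below match well-known software
    // implementations (Microsoft's "GDI Generic", Mesa's llvmpipe/softpipe).
    bool IsProbablySoftwareRenderer() {
        const char* vendor   = (const char*)glGetString(GL_VENDOR);
        const char* renderer = (const char*)glGetString(GL_RENDERER);
        if (!vendor || !renderer) return true;  // no context -- assume the worst
        std::printf("GL_VENDOR:   %s\nGL_RENDERER: %s\n", vendor, renderer);
        return std::strstr(renderer, "GDI Generic") != nullptr ||
               std::strstr(renderer, "llvmpipe")    != nullptr ||
               std::strstr(renderer, "softpipe")    != nullptr;
    }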


Except a good software renderer like WARP or LLVMpipe can run that kind of scene at more than playable FPS.



At this stage I really really regret even mentioning the word "Quake" here as my doing so seems to have steered this thread down a completely irrelevant path.  What I meant was a Quake-like level of scene complexity, as in low-polycount, low-resolution textures, low screen resolution (maxing out at perhaps 640x480 or 800x600), no complex effects, etc.
 ...  
So taking these two together, that's the kind of scenario where a software OpenGL implementation will get you below 1 fps even with the low level of scene complexity I mention.  Yes, even on a modern CPU.  Anybody want a citation?  OK, see for example this thread where the OP got 0.5 fps owing to using an OpenGL feature that was supported by the driver but not in hardware.

It all depends on what features you want.

 

I know you regret bringing up Quake, but it is actually a good example. They released two versions, one using their custom rasterizer and another using the OpenGL rasterizer. IIRC they released the source for both. Just a hunch, but I'm fairly confident that if you ran GLQuake with the Mesa software drivers you'd probably still see a highly performant game on today's hardware.

 

The specific post you mentioned involved a fairly high polygon count and a blending function. Neither of those fits Quake-style graphics, which meant low polygon counts and simple texturing.

 

Naturally if you are trying to run something complex in a software renderer, such as your collection of modern shaders or even a moderately complex blending function, then yes you are quickly going to get bogged down.

 

 

The first question was "what does this mean?" which I think was fully answered.

 

The second question was "is this possible?" which is a little more complex. It is certainly possible to make a fast 3D OpenGL game with a software renderer if you limit yourself to a subset of the features. It is also possible to make a 3D OpenGL game with a software renderer that takes hours per frame.

 

The OP was about OpenGL design. OpenGL is about rendering generally, and the design reflects that. Games represent a subset of what OpenGL does.


Except a good software renderer like WARP or LLVMpipe can run that kind of scene at more than playable FPS.

Thanks for the examples. I was not aware that WARP even existed. It is really great, BUT it is a substitute for the cases when there is no GPU in the system, or the drivers don't support 3D acceleration on the GPU. As a substitute it enables rendering, but it is at least an order of magnitude slower than a GPU (which is an excellent result). Usually it is almost two orders of magnitude slower. Anybody can easily experience that through 3DMark tests.

 

The first question was "what does this mean?" which I think was fully answered.

 

The second question was "is this possible?" which is a little more complex. It is certainly possible to make a fast 3D OpenGL game with a software renderer if you limit yourself to a subset of the features. It is also possible to make a 3D OpenGL game with a software renderer that takes hours per frame.

 

The OP was about OpenGL design. OpenGL is about rendering generally, and the design reflects that. Games represent a subset of what OpenGL does.

 

I completely agree that it is possible to do everything on the CPU side, but it is not a wise decision. The OP asked a very naive question, and I hope he has better insight now. I'm glad that there are some efficient CPU-based implementations, but they should be used only when any other alternative is impossible. Probably every low-end GPU can easily outperform the fastest commercial CPU, leaving it free to do other tasks.


I completely agree that it is possible to do everything on the CPU side, but it is not a wise decision. The OP asked a very naive question, and I hope he has better insight now. I'm glad that there are some efficient CPU-based implementations, but they should be used only when any other alternative is impossible. Probably every low-end GPU can easily outperform the fastest commercial CPU, leaving it free to do other tasks.

 

Well there are cases, outside of the game development space, where software makes sense.  For example, the first line of the "invariance" section in your local friendly OpenGL specification warns you that OpenGL is not a pixel-exact specification.  So if you need pixel-exact results, you must look elsewhere.  Likewise the supported floating point precision limits may not meet some rendering requirements.

 

But these are outside of the game development space.  The general case is that neither is a perfect replacement for the other, so like any other tool sets, you pick the tools that match the job you want to do.

       
      Atmospheric scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to multiple render targets, using compute shaders and unordered access views, etc.

      Asteroids performance benchmark is based on this demo developed by Intel. It renders 50,000 unique textured asteroids and allows comparing performance of Direct3D11 and Direct3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

      Finally, there is an example project that shows how Diligent Engine can be integrated with Unity.

      Future Work
      The engine is under active development. It currently supports Windows desktop, Universal Windows and Android platforms. Direct3D11, Direct3D12, OpenGL/GLES backends are now feature complete. Vulkan backend is coming next, and support for more platforms is planned.
    • By reenigne
      For those that don't know me. I am the individual who's two videos are listed here under setup for https://wiki.libsdl.org/Tutorials
      I also run grhmedia.com where I host the projects and code for the tutorials I have online.
      Recently, I received a notice from youtube they will be implementing their new policy in protecting video content as of which I won't be monetized till I meat there required number of viewers and views each month.

      Frankly, I'm pretty sick of youtube. I put up a video and someone else learns from it and puts up another video and because of the way youtube does their placement they end up with more views.
      Even guys that clearly post false information such as one individual who said GLEW 2.0 was broken because he didn't know how to compile it. He in short didn't know how to modify the script he used because he didn't understand make files and how the requirements of the compiler and library changes needed some different flags.

      At the end of the month when they implement this I will take down the content and host on my own server purely and it will be a paid system and or patreon. 

      I get my videos may be a bit dry, I generally figure people are there to learn how to do something and I rather not waste their time. 
      I used to also help people for free even those coming from the other videos. That won't be the case any more. I used to just take anyone emails and work with them my email is posted on the site.

      I don't expect to get the required number of subscribers in that time or increased views. Even if I did well it wouldn't take care of each reoccurring month.
      I figure this is simpler and I don't plan on putting some sort of exorbitant fee for a monthly subscription or the like.
      I was thinking on the lines of a few dollars 1,2, and 3 and the larger subscription gets you assistance with the content in the tutorials if needed that month.
      Maybe another fee if it is related but not directly in the content. 
      The fees would serve to cut down on the number of people who ask for help and maybe encourage some of the people to actually pay attention to what is said rather than do their own thing. That actually turns out to be 90% of the issues. I spent 6 hours helping one individual last week I must have asked him 20 times did you do exactly like I said in the video even pointed directly to the section. When he finally sent me a copy of the what he entered I knew then and there he had not. I circled it and I pointed out that wasn't what I said to do in the video. I didn't tell him what was wrong and how I knew that way he would go back and actually follow what it said to do. He then reported it worked. Yea, no kidding following directions works. But hey isn't alone and well its part of the learning process.

So the point of this isn't to be a gripe session; I'm just looking for a bit of feedback. Do you think the fees are unreasonable?
Should I keep the YouTube channel and just charge the fees through Patreon, or do you think locking the content to my site and requiring a subscription is the better idea?

I'm just looking at the fact that it is unrealistic to think YouTube/Google will actually get this right, or that YouTube viewers will actually bother to start looking for more accurate videos.
    • By Balma Alparisi
I got error 1282 in my code.

#include <GL/glew.h>
#include <SFML/Window.hpp>
#include <SFML/Graphics/Image.hpp>
#include <iostream>

// Helper defined elsewhere in the project: compiles and links both shaders.
GLuint createShaderProgram(const char* vsPath, const char* fsPath);

int main()
{
    sf::ContextSettings settings;
    settings.majorVersion = 4;
    settings.minorVersion = 5;
    settings.attributeFlags = sf::ContextSettings::Core;

    sf::Window window;
    window.create(sf::VideoMode(1600, 900), "Texture Unit Rectangle", sf::Style::Close, settings);
    window.setActive(true);
    window.setVerticalSyncEnabled(true);

    glewInit();

    GLuint shaderProgram = createShaderProgram("FX/Rectangle.vss", "FX/Rectangle.fss");

    // Interleaved position (xyz) and texture coordinate (uv) data.
    float vertex[] = {
        -0.5f,  0.5f, 0.0f,   0.0f, 0.0f,
        -0.5f, -0.5f, 0.0f,   0.0f, 1.0f,
         0.5f,  0.5f, 0.0f,   1.0f, 0.0f,
         0.5f, -0.5f, 0.0f,   1.0f, 1.0f,
    };
    GLuint indices[] = {
        0, 1, 2,
        1, 2, 3,
    };

    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertex), vertex, GL_STATIC_DRAW);

    GLuint ebo;
    glGenBuffers(1, &ebo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 5, (void*)0);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 5, (void*)(sizeof(float) * 3));
    glEnableVertexAttribArray(1);

    GLuint texture[2];
    glGenTextures(2, texture);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    sf::Image imageOne;  // sf::Image always stores pixels as 8-bit RGBA
    bool isImageOneLoaded = imageOne.loadFromFile("Texture/container.jpg");
    if (isImageOneLoaded)
    {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageOne.getSize().x, imageOne.getSize().y,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, imageOne.getPixelsPtr());
        glGenerateMipmap(GL_TEXTURE_2D);
    }

    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, texture[1]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    sf::Image imageTwo;
    bool isImageTwoLoaded = imageTwo.loadFromFile("Texture/awesomeface.png");
    if (isImageTwoLoaded)
    {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageTwo.getSize().x, imageTwo.getSize().y,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, imageTwo.getPixelsPtr());
        glGenerateMipmap(GL_TEXTURE_2D);
    }

    // FIX: glUniform* operates on the currently active program; calling it
    // with no program bound generates GL_INVALID_OPERATION (error 1282).
    glUseProgram(shaderProgram);
    glUniform1i(glGetUniformLocation(shaderProgram, "inTextureOne"), 0);
    glUniform1i(glGetUniformLocation(shaderProgram, "inTextureTwo"), 1);

    GLenum error = glGetError();
    std::cout << error << std::endl;

    sf::Event event;
    bool isRunning = true;
    while (isRunning)
    {
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                isRunning = false;
        }

        glClear(GL_COLOR_BUFFER_BIT);

        if (isImageOneLoaded && isImageTwoLoaded)
        {
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, texture[0]);
            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, texture[1]);
            glUseProgram(shaderProgram);
        }

        glBindVertexArray(vao);
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
        glBindVertexArray(0);

        window.display();
    }

    glDeleteVertexArrays(1, &vao);
    glDeleteBuffers(1, &vbo);
    glDeleteBuffers(1, &ebo);
    glDeleteProgram(shaderProgram);
    glDeleteTextures(2, texture);
    return 0;
}

and this is the vertex shader
#version 450 core

layout(location = 0) in vec3 inPos;
layout(location = 1) in vec2 inTexCoord;

out vec2 TexCoord;

void main()
{
    gl_Position = vec4(inPos, 1.0);
    TexCoord = inTexCoord;
}

and the fragment shader
#version 450 core

in vec2 TexCoord;

uniform sampler2D inTextureOne;
uniform sampler2D inTextureTwo;

out vec4 FragmentColor;

void main()
{
    FragmentColor = mix(texture(inTextureOne, TexCoord), texture(inTextureTwo, TexCoord), 0.2);
}

I was expecting awesomeface.png on top of container.jpg.

    • By khawk
We've just released all of the source code for the NeHe OpenGL lessons on our GitHub page at https://github.com/gamedev-net/nehe-opengl. In total, 43 platforms, configurations, and languages are included.
Now operated by GameDev.net, NeHe is located at http://nehe.gamedev.net, where it has been a valuable resource for developers wanting to learn OpenGL and graphics programming.

    • By TheChubu
      The Khronos™ Group, an open consortium of leading hardware and software companies, announces from the SIGGRAPH 2017 Conference the immediate public availability of the OpenGL® 4.6 specification. OpenGL 4.6 integrates the functionality of numerous ARB and EXT extensions created by Khronos members AMD, Intel, and NVIDIA into core, including the capability to ingest SPIR-V™ shaders.
      SPIR-V is a Khronos-defined standard intermediate language for parallel compute and graphics, which enables content creators to simplify their shader authoring and management pipelines while providing significant source shading language flexibility. OpenGL 4.6 adds support for ingesting SPIR-V shaders to the core specification, guaranteeing that SPIR-V shaders will be widely supported by OpenGL implementations.
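As a rough illustration (not taken from the press release), loading a precompiled SPIR-V module on a 4.6 context looks roughly like the sketch below; the loadSpirvShader name is hypothetical, error handling is omitted, and the "main" entry point is assumed, as produced by glslang:

#include <GL/glew.h>  // a loader exposing GL 4.6 entry points is assumed
#include <vector>

GLuint loadSpirvShader(const std::vector<char>& spirv, GLenum stage)
{
    GLuint shader = glCreateShader(stage);

    // Hand the SPIR-V binary to the GL, then specialize its entry point;
    // specialization replaces the usual glShaderSource/glCompileShader pair.
    glShaderBinary(1, &shader, GL_SHADER_BINARY_FORMAT_SPIR_V,
                   spirv.data(), static_cast<GLsizei>(spirv.size()));
    glSpecializeShader(shader, "main", 0, nullptr, nullptr);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    // ... check ok, then attach to a program and link as usual ...
    return shader;
}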
      OpenGL 4.6 adds the functionality of these ARB extensions to OpenGL’s core specification:
- GL_ARB_gl_spirv and GL_ARB_spirv_extensions to standardize SPIR-V support for OpenGL
- GL_ARB_indirect_parameters and GL_ARB_shader_draw_parameters for reducing the CPU overhead associated with rendering batches of geometry
- GL_ARB_pipeline_statistics_query and GL_ARB_transform_feedback_overflow_query standardize OpenGL support for features available in Direct3D
- GL_ARB_texture_filter_anisotropic (based on GL_EXT_texture_filter_anisotropic) brings previously IP-encumbered functionality into OpenGL to improve the visual quality of textured scenes
- GL_ARB_polygon_offset_clamp (based on GL_EXT_polygon_offset_clamp) suppresses a common visual artifact known as a "light leak" associated with rendering shadows
- GL_ARB_shader_atomic_counter_ops and GL_ARB_shader_group_vote add shader intrinsics supported by all desktop vendors to improve functionality and performance
- GL_KHR_no_error reduces driver overhead by allowing the application to indicate that it expects error-free operation, so errors need not be generated

In addition to the above features being added to OpenGL 4.6, the following are being released as extensions:
- GL_KHR_parallel_shader_compile allows applications to launch multiple shader compile threads to improve shader compile throughput
- WGL_ARB_create_context_no_error and GLX_ARB_create_context_no_error allow no-error contexts to be created with WGL or GLX that support the GL_KHR_no_error extension

"I'm proud to announce OpenGL 4.6 as the most feature-rich version of OpenGL yet. We've brought together the most popular, widely-supported extensions into a new core specification to give OpenGL developers and end users an improved baseline feature set. This includes resolving previous intellectual property roadblocks to bringing anisotropic texture filtering and polygon offset clamping into the core specification to enable widespread implementation and usage," said Piers Daniell, chair of the OpenGL Working Group at Khronos. "The OpenGL working group will continue to respond to market needs and work with GPU vendors to ensure OpenGL remains a viable and evolving graphics API for all its customers and users across many vital industries."
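For the two newly promoted features Daniell mentions, usage on a 4.6 context is a few calls; the sketch below is a minimal illustration (the surrounding texture and shadow-pass setup is omitted, and the numeric values are arbitrary placeholders):

#include <GL/glew.h>

void applyGl46CoreFeatures()
{
    // Anisotropic filtering, now core: the non-EXT tokens can be used
    // without checking for GL_EXT_texture_filter_anisotropic.
    GLfloat maxAniso = 1.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY, &maxAniso);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY, maxAniso);

    // Polygon offset clamp, now core: the third parameter caps the total
    // depth offset, which helps suppress "light leak" artifacts when
    // rendering shadows.
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffsetClamp(1.1f, 4.0f, 0.05f);
}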
      The OpenGL 4.6 specification can be found at https://khronos.org/registry/OpenGL/index_gl.php. The GLSL to SPIR-V compiler glslang has been updated with GLSL 4.60 support, and can be found at https://github.com/KhronosGroup/glslang.
      Sophisticated graphics applications will also benefit from a set of newly released extensions for both OpenGL and OpenGL ES to enable interoperability with Vulkan and Direct3D. These extensions are named:
- GL_EXT_memory_object
- GL_EXT_memory_object_fd
- GL_EXT_memory_object_win32
- GL_EXT_semaphore
- GL_EXT_semaphore_fd
- GL_EXT_semaphore_win32
- GL_EXT_win32_keyed_mutex

They can be found at: https://khronos.org/registry/OpenGL/index_gl.php
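To give a sense of the interop path these extensions enable, here is a hedged sketch of importing a Vulkan allocation into OpenGL on Linux. It assumes `fd` is an opaque POSIX file descriptor exported from Vulkan device memory (e.g. via VK_KHR_external_memory_fd) and `size` is that allocation's size; the function name is hypothetical:

#include <GL/glew.h>

GLuint importVulkanImage(int fd, GLuint64 size, GLsizei width, GLsizei height)
{
    // GL_EXT_memory_object: wrap the external allocation in a GL memory
    // object. GL_EXT_memory_object_fd's import call takes ownership of fd.
    GLuint memory = 0;
    glCreateMemoryObjectsEXT(1, &memory);
    glImportMemoryFdEXT(memory, size, GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd);

    // Bind a texture's storage directly to the imported memory at offset 0,
    // so GL and Vulkan read the same underlying image data.
    GLuint texture = 0;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexStorageMem2DEXT(GL_TEXTURE_2D, 1, GL_RGBA8, width, height, memory, 0);
    return texture;
}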
      Industry Support for OpenGL 4.6
      “With OpenGL 4.6 our customers have an improved set of core features available on our full range of OpenGL 4.x capable GPUs. These features provide improved rendering quality, performance and functionality. As the graphics industry’s most popular API, we fully support OpenGL and will continue to work closely with the Khronos Group on the development of new OpenGL specifications and extensions for our customers. NVIDIA has released beta OpenGL 4.6 drivers today at https://developer.nvidia.com/opengl-driver so developers can use these new features right away,” said Bob Pette, vice president, Professional Graphics at NVIDIA.
      "OpenGL 4.6 will be the first OpenGL release where conformant open source implementations based on the Mesa project will be deliverable in a reasonable timeframe after release. The open sourcing of the OpenGL conformance test suite and ongoing work between Khronos and X.org will also allow for non-vendor led open source implementations to achieve conformance in the near future," said David Airlie, senior principal engineer at Red Hat, and developer on Mesa/X.org projects.
