OpenGL Is Clustered Forward Shading worth implementing?


I'm referring to this paper: http://www.cse.chalmers.se/~uffe/clustered_shading_preprint.pdf (there is also a video available on YouTube).

The performance of this technique seems to scale very well for huge numbers of lights, but at lower counts it performs a little worse than the less advanced tiled culling method. The thing is, has there ever been a case where you need 30 thousand lights in a scene? Plus, won't it get bottlenecked by generating shadow maps for all those lights? (In the video, the lights just pass through the bridge and under it.) Unfortunately I couldn't test its performance, because for some reason the provided demo won't start up (even though I support OpenGL 3 and higher), and I've never done GLSL, so it might take time to get it working.
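For reference, the core of the technique is quantizing each fragment into a 3D cluster from its screen-space tile and a logarithmic slice of its view-space depth, then shading with only the lights assigned to that cluster. Below is a minimal GLSL sketch of the cluster-key computation, assuming 64x64-pixel tiles and exponential depth slicing as in the paper; the uniform names are hypothetical and not taken from the demo code.

uniform float u_sliceScale;   // numSlices / log(far / near), precomputed on the CPU
uniform float u_sliceBias;    // -numSlices * log(near) / log(far / near)
uniform ivec3 u_clusterDims;  // cluster counts in x, y, z

int clusterIndex(vec2 fragCoord, float viewZ) // viewZ = positive view-space depth
{
    // x/y come from the screen tile the fragment falls in (64x64 pixels here)
    ivec2 tile = ivec2(fragCoord) / 64;
    // z comes from a logarithmic slicing of view-space depth
    int slice = int(log(viewZ) * u_sliceScale + u_sliceBias);
    return (slice * u_clusterDims.y + tile.y) * u_clusterDims.x + tile.x;
}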


One thing I like about clustered is that it becomes "cheaper" to handle transparent objects. In the case of tiled deferred you have to build two light lists, one that uses the depth buffer for the light culling and one without, so you can have massive overhead on the transparent pass. With clustered, only one culling pass is necessary.

 

But again, that depends on the light count (also, clustered is heavier in terms of memory size, if I'm correct) and on the scene.

 

Also, at SIGGRAPH Asia there was a presentation about a 2.5D culling technique that you can find here: https://sites.google.com/site/takahiroharada/


The Z-prepass worries me. Does that mean I have to do the tessellation twice as well? (Tessellation already hits my FPS big time.)

Edited by mrheisenberg

You can also try to sort front to back instead; if you are vertex bound, that might give you better results. Another approach is to use occluder objects: you can get 90% of the culling you'd get with a Z-prepass, yet without the cost.

But tessellated geometry has another problem: with AA enabled you cover a lot of pixels only partially, which increases pixel shader cost a lot. Something like POM (parallax occlusion mapping) might scale much better.

Deferred shading is really awkward when it comes to anti-aliasing, and lighting transparent objects is not solved in that approach.

Forward shading is the way to go; I expect the next generation of consoles to go back to it. I use a similar approach in my phone engines: I have a view-space-aligned 3D grid (texture) that stores a 'count' and 'offset' value per voxel, which I use to index into a texture containing the light sources that affect that voxel. The grid creation is done every frame on the CPU. I don't have 30k lights, but I run with anti-aliasing, and I use the same shader for solid and transparent objects, which is very convenient. I can even sample this texture in the vertex shader to light particles cheaply.
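For illustration, a fragment-shader sketch of that count/offset lookup, assuming the grid is a 3D texture with the list offset and light count in two channels, and the light indices and light data stored in buffer textures; the names and exact storage layout are invented, not taken from the engine described above.

uniform sampler3D u_clusterGrid;     // .x = offset into the light list, .y = count
uniform isamplerBuffer u_lightList;  // flat array of light indices
uniform samplerBuffer u_lightData;   // two texels per light: (pos, radius), (color, pad)

// gridCoord: the fragment's normalized coordinate in the view-space grid
vec3 shadeClustered(vec3 gridCoord, vec3 P, vec3 N)
{
    vec2 cell   = texture(u_clusterGrid, gridCoord).xy;
    int  offset = int(cell.x);
    int  count  = int(cell.y);
    vec3 result = vec3(0.0);
    for (int i = 0; i < count; ++i)
    {
        int  light     = texelFetch(u_lightList, offset + i).x;
        vec4 posRadius = texelFetch(u_lightData, light * 2);
        vec4 color     = texelFetch(u_lightData, light * 2 + 1);
        vec3  L   = posRadius.xyz - P;
        float d   = length(L);
        float att = max(0.0, 1.0 - d / posRadius.w);
        result += color.rgb * att * max(0.0, dot(N, L / d));
    }
    return result;
}

The same function works for opaque and transparent surfaces alike, which is exactly the convenience mentioned above.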

 

One problem you still have is applying shadows/projectors. It's solvable by having an atlas and storing more data per light source (projection matrix, offsets, extents, etc.), but it adds quite a lot of overhead.

 

Many have solved transparency with deferred, Epic and Avalanche among them. Anti-aliasing is also doable. Multiple BRDFs are handled straightforwardly in deferred. You also have direct access to all those buffers should you need anything, and you don't have to worry about processing pixels you can't see. And most modern hardware, including the 4th-gen iPad and Tegra 4 from what I've heard, has enough bandwidth and memory to get some sort of deferred done, though if you're doing thousands and thousands of lights, mobile probably isn't your target platform anyway.

 

I'd rather make sure there isn't any unnecessary shading going on. Of course you can't do 8x MSAA with deferred, at least not cheaply, but you can do something like SMAA, which looks just as good and is cheaper in any case. I suppose it all depends on what you'd like to be doing. If you've got the time for it, and are on the right platform (new consoles, high-end PC), then I don't see any reason not to go deferred. If you don't have the time to solve all those problems, or the things I'm probably not even thinking of, then forward might be your solution. But calling out all the old problems with deferred isn't relevant, as they've been solved for the most part.

Many have solved transparency with deferred, Epic and Avalanche among them. Anti-aliasing is also doable. Multiple BRDFs are handled straightforwardly in deferred. You also have direct access to all those buffers should you need anything, and you don't have to worry about processing pixels you can't see. And most modern hardware, including the 4th-gen iPad and Tegra 4 from what I've heard, has enough bandwidth and memory to get some sort of deferred done, though if you're doing thousands and thousands of lights, mobile probably isn't your target platform anyway.

I don't remember Avalanche using deferred shading in its titles. Which titles use it?

 

Handling transparency... nice way of saying "solved". Switching to forward is not a "solution", and neither are accumulative lighting approaches. They're workarounds. Anti-aliasing is doable, but at a gigantic cost. I'm talking about MSAA and CSAA (SSAA is always expensive), not about FXAA & Co., which are cheap tricks.

As for multiple BRDFs, they're not straightforward in deferred. You pay extra MRT cost to store a material ID, and you either use branching in your code and pray for high branch coherency (a low-frequency image) to get the best BRDFs (Cook-Torrance, Oren-Nayar, Phong, Blinn-Phong, Strauss, etc.) at decent speed, or resort to texture-array approaches (which produce very interesting/creative results that I love, but aren't optimal for those seeking photorealism).

 

So, no, I wouldn't call the old deferred problems "solved".

I don't see any reason not to go deferred

Forward vs Deferred arguments are silly and useless out of context, because different games are better suited to different pipelines. There is no one-pipeline-to-rule-them-all, and as a side-rant: any engine that lists "deferred shading" on its feature list is missing the point (an engine should give you the tools to build different pipelines, and a deferred rendering pipe should be in the engine samples/examples, not the core).

 

There are still many games shipping today that use "traditional forward" rendering, and almost every game is a hybrid, where some calculations are deferred and others aren't.
Choosing where to put calculations in your graphics pipeline is an optimization problem, which means it's unsolvable except in the context of your particular data.

 

E.g. on my last game, we calculated shadow data in screen space for some objects (deferred shadow maps) and also used deferred decals, then forward rendered everything, then calculated shadow data in screen space for some other objects, then applied those second shadow results to the forward-rendered lighting data to get the final lighting buffer.

That's not traditional forward or deferred rendering. Vanilla doesn't work for most games.

 

Note that Forward+ (aka Clustered Forward, Light Indexed Deferred) is a very new topic and there's a lot of research coming up this year.

The original version (light-indexed deferred) has actually been around for 5 years or so, and is even very easy to implement on DX9! However, DX11 has made these kinds of forward renderers easier and more efficient to implement, with fewer restrictions too, so the idea is making a big comeback.

Edited by Hodgman


The reason a lot of games went deferred is that it's not really possible on current consoles to go forward: dynamic branching etc. would just kill you, and you don't really get the benefits of forward, as most games are not rendering at insane AA resolutions. That might change next gen; the new consoles will probably be very similar to PCs, where you don't worry about branching but do want to support high AA resolutions without paying the cost of shading every subsample.

 

So the question of whether you go deferred or forward also depends very much on what your hardware has to offer (besides the question of what you're trying to achieve).


The reason a lot of games went deferred is that it's not really possible on current consoles to go forward.

Many current-gen console games are forward, and forward has stuck around because it's very hard to go deferred on current-gen consoles... The amount of bandwidth required kills you. Even 16-bit-per-channel HDR (64bpp) is a huge burden on these consoles.

The more advanced games are, the more likely they are to go deferred. The reason is that it's not possible to get that amount of light-surface interaction with forward rendering in a fast way. As you said, it would seem deferred is more demanding, yet it's the only way to go if you want flexibility.

Not really; deferred might have solved some problems with regard to lights, but it brought with it a whole host of others: memory bandwidth, AA issues, problems integrating different BRDFs, transparency, and other issues which required various hoops to be jumped through.

Going forward, hybrid solutions are likely to become the norm, such as AMD's Leo demo, which mixes deferred aspects with a forward rendering pass for the real geometry rendering. That can get around pretty much all of those problems (but brings its own compromises).

The point is: all rendering has trade-offs, and you'll find plenty of "advanced" engines using various rendering methods. Hell, the last game I worked on was all forward lit, using baked lighting and SH light probes, because that was the only way we were going to hit 60fps on the consoles.

Edit: also, a good and advanced engine WON'T force you down one rendering path; it will let the game code decide (the engine powering the aforementioned game supports deferred as well as forward, at least...). Edited by phantom

The more advanced games are, the more likely they are to go deferred. The reason is that it's not possible to get that amount of light-surface interaction with forward rendering in a fast way. As you said, it would seem deferred is more demanding, yet it's the only way to go if you want flexibility.

 
What does 'advanced' mean? Huge numbers of dynamic lights? You can do just as many lights with forward as long as you've got a decent way of solving the classic issue of determining which objects are affected by which lights. Actually, the whole point of tiled-deferred was that it was trying to reduce lighting bandwidth back down to what we had with forward rendering, while keeping the "which light for which object" calculations in screen space on the GPU.
 
If your environment is static, then you can bake all the lighting (and probes) and it'll be a ton faster than any other approach!
Most console games are still using static, baked lighting for most of the scene, which reduces the need for huge dynamic light counts.
 
Another issue with deferred is that it's very hard to do at full 720p on the 360. The 360 only has 10MiB of EDRAM, where your frame-buffers have to live. Let's say you optimize your G-buffer layout so you've got hardware depth/stencil and two 8888 targets -- that's 3 targets × 4 bytes per pixel × 1280×720, or ~10.5MiB -- which is over the limit and won't fit.

n.b. these numbers are the same as depth/stencil + FP16_16_16_16, which also makes forward rendering or deferred light accumulation difficult in HDR...

Sure, Crysis, Battlefield 3 and Killzone are deferred, but there are probably many more games that use forward rendering, even "AAA" games, like Gears of War (and most other Unreal games), L4D2 (and other Source games), God of War, etc... Then there are the games that have gone deferred-lighting (LPP) as a half-way choice, such as GTA4 (and many other Rockstar games), Space Marine, etc...
 
Regarding materials, forward is unarguably more flexible -- each object can have unique BRDFs, unique lighting models, and any number of lights. It's just inefficient if you've got lots of small objects (due to shader swapping overhead and bad quad efficiency), or lots of big objects (due to the "which light for which object" calculations being done per-object).
Actually, you mentioned dynamic branches before, but forward rendering doesn't need any; all branches should be able to be determined at compile time. On the other hand, implementing multiple BRDFs in a deferred renderer requires some form of branching (or look-up-tables, which are just as bad).
 
Also, tiled-deferred and tiled-forward are implementable on current-gen hardware (even DX9 PC if you're careful), so there's no reason we won't see them soon.

As usual, there's no single objectively better pipeline; different games have different requirements, which are more efficiently met with one pipeline or another...

Edited by Hodgman

A little off topic but still on topic: does anyone have any links to good tutorials on deferred vs forward rendering? I've read a fair bit about the details of deferred, but would rather get a good grounding in it before looking into it further - I couldn't find any decent sites covering 'why deferred' other than 'you can have more lights'.

Apologies for borrowing this thread quickly...

Not really; deferred might have solved some problems with regard to lights, but it brought with it a whole host of others: memory bandwidth, AA issues, problems integrating different BRDFs, transparency, and other issues which required various hoops to be jumped through.

Exactly. One would think: no MSAA (for shading), no solution for alpha blending, problems getting different BRDFs running, high memory and bandwidth cost -- why on earth would anyone do that?

Simply because current-gen console hardware does not offer another way to create the worlds that players, designers and artists expect, where you have tons of dynamic lights and even particles light the nearby geometry.

The more advanced games are, the more likely they are to go deferred. The reason is that it's not possible to get that amount of light-surface interaction with forward rendering in a fast way. As you said, it would seem deferred is more demanding, yet it's the only way to go if you want flexibility.

 
What does 'advanced' mean? Huge numbers of dynamic lights? You can do just as many lights with forward as long as you've got a decent way of solving the classic issue of determining which objects are affected by which lights. Actually, the whole point of tiled-deferred was that it was trying to reduce lighting bandwidth back down to what we had with forward rendering, while keeping the "which light for which object" calculations in screen space on the GPU.

'Advanced' means there are no tech-imposed limits on light-surface interactions. Deferred shading has a lot of selling points, not just this one.

- You had to reduce shader combination counts. You can imagine: even if your forward solution were fast enough, you could have 0 to 100 lights affecting a surface, which means you need 100 times the permutations of a shader library that isn't small already (a sketch of what one such permutation looks like follows this list). (And no, sadly, dynamic branching is not a solution on current-gen HW, and no, even static branching is not a solution, as your shader size will increase by some percentage and your register usage will increase as well, and we graphics coders don't want to pay milliseconds that we could spend elsewhere. Yes, it's a performance reason.)

- Complexity of light resources: there are simple lights, area lights, projector lights, shadow-mapping lights, a sun, light streaks (e.g. particles, laser beams). If you wanted to go forward, you'd need to index into all the needed resources, like textures and constants, and current-gen HW doesn't really support that. Creating atlases is also not very feasible; you'd spend a lot of time moving memory around to re-arrange data per drawn object (and you'd still face tight limits on current gen).
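To make the permutation point concrete, a hypothetical GLSL sketch: the engine prepends something like "#define NUM_LIGHTS 4" before compiling, so every distinct light count becomes a separately compiled shader.

#ifndef NUM_LIGHTS
#define NUM_LIGHTS 4   // injected per variant by the engine at compile time
#endif

uniform vec3 u_lightDir[NUM_LIGHTS];
uniform vec3 u_lightColor[NUM_LIGHTS];

vec3 accumulateLights(vec3 N)
{
    vec3 c = vec3(0.0);
    // Fixed trip count: fully unrollable, no dynamic branching needed --
    // but 0 to 100 lights means up to 101 variants of every shader.
    for (int i = 0; i < NUM_LIGHTS; ++i)
        c += u_lightColor[i] * max(0.0, dot(N, u_lightDir[i]));
    return c;
}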

 

You can find some more reasons why people went deferred in:

http://www.crytek.com/download/A_bit_more_deferred_-_CryEngine3.ppt

If your environment is static, then you can bake all the lighting (and probes) and it'll be a ton faster than any other approach!
Most console games are still using static, baked lighting for most of the scene, which reduces the need for huge dynamic light counts.

And even those engines that decimate a vast number of lights this way, like UE3 with Lightmass, have problems applying those lights to dynamic objects; UE3 uses spherical harmonics to combine them, just like KZ2 does for baked lights. Lightmaps are really orthogonal to forward/deferred.

http://www.unrealengine.com/files/downloads/GDC09_Smedberg_RenderingTechniques.pdf

AFAIK the realtime shadows in UE3 are claimed to be deferred; that's the only reason why UE3 doesn't cope well with MSAA.

Another issue with deferred is that it's very hard to do at full 720p on the 360. The 360 only has 10MiB of EDRAM, where your frame-buffers have to live. Let's say you optimize your G-buffer layout so you've got hardware depth/stencil and two 8888 targets -- that's 3 targets × 4 bytes per pixel × 1280×720, or ~10.5MiB -- which is over the limit and won't fit.

n.b. these numbers are the same as depth/stencil + FP16_16_16_16, which also makes forward rendering or deferred light accumulation difficult in HDR...

Exactly, yet another reason why it is a very unfavorable idea to go deferred on the 360. Why would anyone do that? Because the alternative just does not work (for the reasons given above). Sure, if you make a racing game like Gran Turismo, with just one light source and maybe some spherical harmonics evaluation in the VS for nicer ambient/radiosity, there's no reason to go deferred. Even an outdoor shooter like Just Cause can live with forward, I guess. But as soon as you want more advanced lighting, like Gears of War, GTA, Crysis, Stalker, ..., you can't go forward on current gen. Next gen, I imagine something like what AMD did in Leo is very doable.

Sure, Crysis, Battlefield 3 and Killzone are deferred, but there are probably many more games that use forward rendering, even "AAA" games, like Gears of War (and most other Unreal games), L4D2 (and other Source games), God of War, etc... Then there are the games that have gone deferred-lighting (LPP) as a half-way choice, such as GTA4 (and many other Rockstar games), Space Marine, etc...

Crysis is forward shaded with up to 16 lights per object (check the insane amount of shader space they use ;) ). Crysis 2 is deferred lit, like GTA. UE3 games are neither what we would call deferred nor forward; they're spherical-harmonic based, like KZ2. Battlefield 3 goes for the (deferred) light indexing/tiling approach; as it's not doable on the RSX, it seems they'd rather spend their SPUs on it, yet it's the first step towards light indexing, IMO.

Regarding materials, forward is unarguably more flexible -- each object can have unique BRDFs, unique lighting models, and any number of lights. It's just inefficient if you've got lots of small objects (due to shader swapping overhead and bad quad efficiency), or lots of big objects (due to the "which light for which object" calculations being done per-object).

That's the vanilla version, and then clustered/tiled forward shading comes in ;)

 

Actually, you mentioned dynamic branches before, but forward rendering doesn't need any; all branches should be able to be determined at compile time. On the other hand, implementing multiple BRDFs in a deferred renderer requires some form of branching (or look-up-tables, which are just as bad).

That would explain why most deferred games on consoles have just one lighting term; even the nanosuit in Crysis 2 looks like it's missing the anisotropic metal shading of Crysis 1.

Dynamic branching is needed in the first place to skip unneeded light calculations: if the surface is backfacing, or in shadow, or out of range -> next light. Even on my mobile phones this gives a boost if I use a fixed set of lights per drawn object. On DX9 hardware it was skipping pixels, but the general overhead of the branching compensated for it (it was about 10 cycles more per shader: 6 due to branching and some more because the loop had to store/restore registers; validated with FX Composer back then).
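Roughly the pattern being described, as a hedged GLSL sketch (uniform names invented; the cycle counts above came from the FX Composer measurements, not from this code):

uniform int  u_lightCount;
uniform vec4 u_lightPosRadius[16];  // xyz = position, w = radius
uniform vec3 u_lightColor[16];

vec3 shadeWithEarlyOut(vec3 P, vec3 N)
{
    vec3 c = vec3(0.0);
    for (int i = 0; i < u_lightCount; ++i)
    {
        vec3  L = u_lightPosRadius[i].xyz - P;
        float d = length(L);
        if (d > u_lightPosRadius[i].w) continue;   // out of range -> next light
        float ndotl = dot(N, L / d);
        if (ndotl <= 0.0) continue;                // backfacing -> next light
        c += u_lightColor[i] * ndotl * (1.0 - d / u_lightPosRadius[i].w);
    }
    return c;
}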

Also, tiled-deferred and tiled-forward are implementable on current-gen hardware (even DX9 PC if you're careful), so there's no reason we won't see them soon.

As usual, there's no single objectively better pipeline; different games have different requirements, which are more efficiently met with one pipeline or another...

I'm just saying that going for top-notch lighting/shading (i.e. not just radiosity baked into lightmaps, and not just one light source in the world plus cubemaps/spherical harmonics for dynamic objects) made all engines go deferred on this generation of consoles. I can't think of any with lighting competitive with Dead Space, Crysis or GTA that would be forward, besides maybe God of War, and there you could clearly see artifacts of lights merged per vertex once you exceeded some count (I'd guess 3 dynamic lights).

A little off topic but still on topic: does anyone have any links to good tutorials on deferred vs forward rendering? I've read a fair bit about the details of deferred, but would rather get a good grounding in it before looking into it further - I couldn't find any decent sites covering 'why deferred' other than 'you can have more lights'.

Apologies for borrowing this thread quickly...

I think that's a good start:

http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter09.html

A little off topic but still on topic: does anyone have any links to good tutorials on deferred vs forward rendering? I've read a fair bit about the details of deferred, but would rather get a good grounding in it before looking into it further - I couldn't find any decent sites covering 'why deferred' other than 'you can have more lights'.

Apologies for borrowing this thread quickly...

I think that's a good start:

http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter09.html

 

That link just reinforces his belief that 'why deferred' is just 'you can have more lights'.

Effectively, that's the main reason it appeared, and that's the main reason it's still strong.

 

There are other good side effects:

  1. The G-buffer data can be very useful for screen-space effects (e.g. normals can be used for AO, refraction mapping, and local reflections; depth can be used for god rays, fog, and DOF). Even if you do forward rendering, you'll probably end up spitting out a sort of G-buffer for those FX (see the sketch after this list). Of course, then you don't have to do magic to compress into the MRT a lot of parameters that you won't be needing in the post-processing passes (like the specular colour term).
  2. Shading complexity becomes screen-dependent. This benefit/disadvantage (depending on the application) is shared with Forward+. Assuming just one directional light, every pixel is shaded once. In a forward renderer, if you render everything back to front, every pixel covered by a triangle will be shaded multiple times. Hence a deferred shader's time is fixed and depends on screen resolution (so a lower screen resolution is an instant win for low-end users). A deferred shader/Forward+ cannot shade more than (num_lights * width * height) pixels even for an infinite number of triangles, whereas a forward renderer may shade the same pixel an infinite number of times for an infinite number of triangles, overwriting its previous value each time. Of course, if you're very good at sorting your triangles (chances are the game can't be that good), a forward renderer may perform faster; but with a deferred shader you're on more stable ground.
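As a small example of point 1, here is the standard way to reuse the depth buffer in a screen-space pass, reconstructing view-space position (usable for fog, DOF, god rays); a hedged sketch with invented names, assuming OpenGL's [-1,1] NDC conventions:

uniform sampler2D u_depthTex;   // the hardware depth buffer from the G-buffer pass
uniform mat4 u_invProjection;   // inverse of the camera projection matrix

vec3 viewPositionFromDepth(vec2 uv)
{
    float depth = texture(u_depthTex, uv).r;
    // Back from [0,1] window depth to NDC, then un-project to view space
    vec4 ndc  = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 view = u_invProjection * ndc;
    return view.xyz / view.w;
}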

Edit: As for the "more lights" argument, keep in mind that a deferred shader can easily take 5000 lights (as long as they're small), while a forward renderer maxes out at around 8-16 lights per object.

Edited by Matias Goldberg

Very insightful, guys, thanks. My renderer is nicely abstracted, so I might give it a go. My game only requires one directional light at the moment, but I still see the plus for effects like AO, etc.

Does anyone know which method the Call of Duty engines use?
