[DirectX 11] Sudden Saturday Shadow Sadness Syndrome


I just got out of rehab from my previous shadow problem.
I was able to change the type so that I could view it in PIX, and I could see that the values had too much range.
The fix follows.
Old Code:
[CODE]float shadow2dDepth( Texture2D _tTexture, float2 _vCoord ){ return _tTexture.Sample( lsg_SamplerShadow, _vCoord ).x; }[/CODE]
New Code:
[CODE]float shadow2dDepth( Texture2D _tTexture, float2 _vCoord ){ return _tTexture.Sample( lsg_SamplerShadow, _vCoord ).x * 0.5 + 0.5; }[/CODE][list]
[*]This works, but obviously I would prefer a more efficient shader. In DirectX 9 there is no way to read from a depth surface, so I have to create a color surface and output depth to it manually, and that is where I perform this conversion. In OpenGL and OpenGL ES 2, however, the depth surface can be read directly, and everything works without my performing this conversion anywhere: I don’t set up a special viewport depth range or do the conversion in the shader. Isn’t this X * 0.5 + 0.5 conversion supposed to be done by the rasterizer in Direct3D 11? What do I need to do to make it happen there instead of in my shader?
[/list]


The second issue is that my PCSSM shader fails to compile with the following:
[quote name='My Direct3D 11 Compiler']e:\Blah\x64\DirectX11 Debug\Shader@0x00000000036A1CC0(105,16): error X4014: cannot have gradient operations inside loops with divergent flow control[/quote]
Here are the relevant parts of the shader:
[CODE]float PCSSShadowMap( in vec4 _vShadowCoord ) {
	float fSum = (LSE_PCF_STEPS * 2.0 + 1.0);
	float fTotal = fSum * fSum;
	if ( _vShadowCoord.w > 0.0 && _vShadowCoord.x >= 0.0 && _vShadowCoord.x <= 1.0 && _vShadowCoord.y >= 0.0 && _vShadowCoord.y <= 1.0 ) {
		float fAvgDepth = 0.0;
		float fTotalBlockers = 0.0;
		vec4 vShadowCoordWDivide = _vShadowCoord / _vShadowCoord.w;
		//vShadowCoordWDivide.z -= 0.000625 * 0.125;
		FindBlockers( vShadowCoordWDivide.xy, vShadowCoordWDivide.z, g_vShadowMapUvDepth.xy * 1.25,
			fTotalBlockers, fAvgDepth );
		if ( fTotalBlockers != 0.0 ) {
			fTotal = 0.0;
			vec2 vSize = g_vShadowMapUvDepth.xy * g_fShadowMapCasterSize * fAvgDepth * g_vShadowMapUvDepth.z;
			// Get the distance within the shadow we are.
			vec2 vThis;
			vec2 fStepUv = vSize / LSE_PCF_STEPS;
			for ( float y = -LSE_PCF_STEPS; y <= LSE_PCF_STEPS; y++ ) {
				for ( float x = -LSE_PCF_STEPS; x <= LSE_PCF_STEPS; x++ ) {
					vec2 vOffset = vec2( x, y ) * fStepUv; // **************** LINE 105 **************** //
					float fDepth = shadow2dDepth( g_sShadowTex, vShadowCoordWDivide.xy + vOffset );
					fTotal += (fDepth == 1.0 || fDepth > vShadowCoordWDivide.z) ? 1.0 : 0.0;
				}
			}
		}
	}
	return fTotal / (fSum * fSum);
}[/CODE][list]
[*]Why is it barking at that line and how can I rewrite it to work?
[/list]
If you need shader code that actually compiles, the actual shader that is sent to Direct3D 11 follows. Yes, it is ugly. If you have heart conditions or are pregnant, viewer discretion is advised.
[spoiler]float mix( in float _fX, in float _fY, in float _fA ) { return _fX * (1.0 - _fA) + _fY * _fA; }
float2 mix( in float2 _fX, in float2 _fY, in float _fA ) { return _fX * (1.0 - _fA) + _fY * _fA; }
float3 mix( in float3 _fX, in float3 _fY, in float _fA ) { return _fX * (1.0 - _fA) + _fY * _fA; }
float4 mix( in float4 _fX, in float4 _fY, in float _fA ) { return _fX * (1.0 - _fA) + _fY * _fA; }
float2 mix( in float2 _fX, in float2 _fY, in float2 _fA ) { return _fX * (1.0 - _fA) + _fY * _fA; }
float3 mix( in float3 _fX, in float3 _fY, in float3 _fA ) { return _fX * (1.0 - _fA) + _fY * _fA; }
float4 mix( in float4 _fX, in float4 _fY, in float4 _fA ) { return _fX * (1.0 - _fA) + _fY * _fA; }
SamplerState lsg_SamplerBiLinearRepeat:register(s0){Filter=MIN_MAG_LINEAR_MIP_POINT;AddressU=WRAP;AddressV=WRAP;};
SamplerState lsg_SamplerBiLinearClamp:register(s1){Filter=MIN_MAG_LINEAR_MIP_POINT;AddressU=CLAMP;AddressV=CLAMP;};
SamplerState lsg_SamplerShadow:register(s15){Filter=MIN_MAG_LINEAR_MIP_POINT;AddressU=CLAMP;AddressV=CLAMP;};
float shadow2dDepth( Texture2D _tTexture, float2 _vCoord ){ return _tTexture.Sample( lsg_SamplerShadow, _vCoord ).x * 0.5 + 0.5; }
Texture2D g_sShadowTex:register(t15);
cbuffer cb0:register(b0){
vector<float,4>g_vDiffuseMaterial:packoffset(c0.x);
vector<float,4>g_vSpecularMaterial:packoffset(c1.x);
vector<float,2>g_vAnistropy:packoffset(c2.x);
};
cbuffer cb1:register(b1){
vector<float,4>g_vLightVectors[8]:packoffset(c0.x);
vector<float,4>g_vLightDiffuses[8]:packoffset(c8.x);
vector<float,4>g_vLightSpeculars[8]:packoffset(c16.x);
};
cbuffer cb2:register(b2){
};
cbuffer cb3:register(b3){
int g_iTotalDirLights:packoffset(c0.x);
matrix<float,4,4>g_mShadowMapMatrix:packoffset(c1.x);
vector<float,3>g_vShadowMapUvDepth:packoffset(c5.x);
float g_fShadowMapCasterSize:packoffset(c5.w);
};
#line 1 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 2 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 3 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 4 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 5 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 6 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 8 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 9 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 10 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 11 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 12 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 13 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 14 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 15 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 17 // e:/Data/LSDDefaultForwardPixelShader.lssl
;
#line 1 // e:/Data/LSDLighting.lssl
;
#line 2 // e:/Data/LSDLighting.lssl
;
#line 3 // e:/Data/LSDLighting.lssl
;
#line 4 // e:/Data/LSDLighting.lssl
;
#line 5 // e:/Data/LSDLighting.lssl
;
#line 6 // e:/Data/LSDLighting.lssl
;
#line 7 // e:/Data/LSDLighting.lssl
;
#line 8 // e:/Data/LSDLighting.lssl
;
#line 9 // e:/Data/LSDLighting.lssl
;
#line 10 // e:/Data/LSDLighting.lssl
;
#line 11 // e:/Data/LSDLighting.lssl
;
#line 12 // e:/Data/LSDLighting.lssl
;
#line 13 // e:/Data/LSDLighting.lssl
;
#line 14 // e:/Data/LSDLighting.lssl
;
#line 15 // e:/Data/LSDLighting.lssl
;
#line 25 // e:/Data/LSDLighting.lssl
struct LSE_COLOR_PAIR{
vector<float,4>cDiffuse;
vector<float,4>cSpecular;
};
#line 52 // e:/Data/LSDLighting.lssl
#line 102 // e:/Data/LSDLighting.lssl
#line 126 // e:/Data/LSDLighting.lssl
#line 139 // e:/Data/LSDLighting.lssl
#line 181 // e:/Data/LSDLighting.lssl
#line 223 // e:/Data/LSDLighting.lssl
#line 289 // e:/Data/LSDLighting.lssl
#line 360 // e:/Data/LSDLighting.lssl
LSE_COLOR_PAIR GetDirLightColorAshikhminShirley(in vector<float,3>_vNormalInViewSpace,in vector<float,4>_vViewVector,in int _iIndex){
{
#line 300 // e:/Data/LSDLighting.lssl
LSE_COLOR_PAIR cpRet;
#line 301 // e:/Data/LSDLighting.lssl
vector<float,3>vLightDir=g_vLightVectors[_iIndex].xyz;
#line 304 // e:/Data/LSDLighting.lssl
vector<float,3>vHalfVec=normalize((vLightDir+_vViewVector.xyz));
#line 305 // e:/Data/LSDLighting.lssl
vector<float,3>fEpsilon=vector<float,3>(1.0,0.0,0.0);
#line 306 // e:/Data/LSDLighting.lssl
vector<float,3>fTangent=normalize(cross(_vNormalInViewSpace,fEpsilon));
#line 307 // e:/Data/LSDLighting.lssl
vector<float,3>fBiTangent=normalize(cross(_vNormalInViewSpace,fTangent));
#line 310 // e:/Data/LSDLighting.lssl
float fNormalDotHalf=max(dot(_vNormalInViewSpace,vHalfVec),0.0);
#line 311 // e:/Data/LSDLighting.lssl
float fNormalDotView=dot(_vNormalInViewSpace,_vViewVector.xyz);
#line 312 // e:/Data/LSDLighting.lssl
float fNormalDotLight=dot(_vNormalInViewSpace,vLightDir);
#line 313 // e:/Data/LSDLighting.lssl
float fLightDotHalf=dot(vLightDir,vHalfVec);
#line 314 // e:/Data/LSDLighting.lssl
float fTangentDotHalf=dot(fTangent,vHalfVec);
#line 315 // e:/Data/LSDLighting.lssl
float fBiTangentDotHalf=dot(fBiTangent,vHalfVec);
#line 322 // e:/Data/LSDLighting.lssl
const float fRs=0.29999999999999999;
#line 326 // e:/Data/LSDLighting.lssl
vector<float,3>vDiffuse=((28.0*(g_vLightDiffuses[_iIndex].xyz*6.2831853071795862))*(1.0/(23.0*3.1415899999999999)));
#line 328 // e:/Data/LSDLighting.lssl
vDiffuse*=(1.0-fRs);
#line 329 // e:/Data/LSDLighting.lssl
float fTemp=(1.0-(fNormalDotLight*0.5));
#line 330 // e:/Data/LSDLighting.lssl
float fTemp2=(fTemp*fTemp);
#line 331 // e:/Data/LSDLighting.lssl
vDiffuse*=(1.0-((fTemp2*fTemp2)*fTemp));
#line 332 // e:/Data/LSDLighting.lssl
fTemp=(1.0-(fNormalDotView*0.5));
#line 333 // e:/Data/LSDLighting.lssl
fTemp2=(fTemp*fTemp);
#line 334 // e:/Data/LSDLighting.lssl
cpRet.cDiffuse.xyz=(vDiffuse*(1.0-((fTemp2*fTemp2)*fTemp)));
#line 335 // e:/Data/LSDLighting.lssl
cpRet.cDiffuse.w=1.0;
#line 340 // e:/Data/LSDLighting.lssl
float fNumExp=(((g_vAnistropy.x*fTangentDotHalf)*fTangentDotHalf)+((g_vAnistropy.y*fBiTangentDotHalf)*fBiTangentDotHalf));
#line 341 // e:/Data/LSDLighting.lssl
fNumExp/=(1.0-(fNormalDotHalf*fNormalDotHalf));
#line 342 // e:/Data/LSDLighting.lssl
float fNum=sqrt(((g_vAnistropy.x+1.0)*(g_vAnistropy.y+1.0)));
#line 343 // e:/Data/LSDLighting.lssl
fNum*=pow(fNormalDotHalf,fNumExp);
#line 345 // e:/Data/LSDLighting.lssl
float fDen=((8.0*3.1415899999999999)*fNormalDotHalf);
#line 346 // e:/Data/LSDLighting.lssl
fDen*=max(fNormalDotLight,fNormalDotView);
#line 348 // e:/Data/LSDLighting.lssl
fTemp=(fRs*(fNum/fDen));
#line 349 // e:/Data/LSDLighting.lssl
cpRet.cSpecular=vector<float,4>(fTemp,fTemp,fTemp,1.0);
#line 350 // e:/Data/LSDLighting.lssl
fTemp=(1.0-fNormalDotHalf);
#line 351 // e:/Data/LSDLighting.lssl
fTemp2=(fTemp*fTemp);
#line 353 // e:/Data/LSDLighting.lssl
cpRet.cSpecular.xyz*=(fRs+((1.0-fRs)*((fTemp2*fTemp2)*fTemp)));
#line 354 // e:/Data/LSDLighting.lssl
cpRet.cSpecular.w=1.0;
#line 355 // e:/Data/LSDLighting.lssl
cpRet.cSpecular*=g_vLightSpeculars[_iIndex];
return cpRet;}
}
#line 1 // e:/Data/LSDShadowing.lssl
;
#line 2 // e:/Data/LSDShadowing.lssl
;
#line 3 // e:/Data/LSDShadowing.lssl
;
#line 4 // e:/Data/LSDShadowing.lssl
;
#line 31 // e:/Data/LSDShadowing.lssl
#line 46 // e:/Data/LSDShadowing.lssl
#line 73 // e:/Data/LSDShadowing.lssl
void FindBlockers(in vector<float,2>_vPos,in float _zViewDepth,in vector<float,2>_vRadius,out float _fBlockers,out float _fAvgDepth){
{
#line 58 // e:/Data/LSDShadowing.lssl
vector<float,2>fStepUv=(_vRadius/1.0);
#line 59 // e:/Data/LSDShadowing.lssl
_fBlockers=0.0;
#line 60 // e:/Data/LSDShadowing.lssl
_fAvgDepth=0.0;
#line 71 // e:/Data/LSDShadowing.lssl
for(float y=-1.0;
#line 61 // e:/Data/LSDShadowing.lssl
(y<=1.0);
y++){
{
#line 70 // e:/Data/LSDShadowing.lssl
for(float x=-1.0;
#line 62 // e:/Data/LSDShadowing.lssl
(x<=1.0);
x++){
{
#line 63 // e:/Data/LSDShadowing.lssl
vector<float,2>vOffset=(vector<float,2>(x,y)*fStepUv);
#line 64 // e:/Data/LSDShadowing.lssl
float fDepth=shadow2dDepth(g_sShadowTex,(_vPos+vOffset));
if(((fDepth!=1.0)&&(fDepth<_zViewDepth))){
{
#line 67 // e:/Data/LSDShadowing.lssl
_fAvgDepth+=(_zViewDepth-fDepth);
#line 68 // e:/Data/LSDShadowing.lssl
++_fBlockers;
}
}
}
}
}
}
#line 72 // e:/Data/LSDShadowing.lssl
_fAvgDepth/=_fBlockers;
}
}
#line 113 // e:/Data/LSDShadowing.lssl
float PCSSShadowMap(in vector<float,4>_vShadowCoord){
{
#line 82 // e:/Data/LSDShadowing.lssl
float fSum=((2.0*2.0)+1.0);
#line 83 // e:/Data/LSDShadowing.lssl
float fTotal=(fSum*fSum);
if((((((_vShadowCoord.w>0.0)&&(_vShadowCoord.x>=0.0))&&(_vShadowCoord.x<=1.0))&&(_vShadowCoord.y>=0.0))&&(_vShadowCoord.y<=1.0))){
{
#line 85 // e:/Data/LSDShadowing.lssl
float fAvgDepth=0.0;
#line 86 // e:/Data/LSDShadowing.lssl
float fTotalBlockers=0.0;
#line 88 // e:/Data/LSDShadowing.lssl
vector<float,4>vShadowCoordWDivide=(_vShadowCoord/_vShadowCoord.w);
#line 91 // e:/Data/LSDShadowing.lssl
FindBlockers(vShadowCoordWDivide.xy,vShadowCoordWDivide.z,(g_vShadowMapUvDepth.xy*1.25),fTotalBlockers,fAvgDepth);
if((fTotalBlockers!=0.0)){
{
#line 93 // e:/Data/LSDShadowing.lssl
fTotal=0.0;
#line 94 // e:/Data/LSDShadowing.lssl
vector<float,2>vSize=(((g_vShadowMapUvDepth.xy*g_fShadowMapCasterSize)*fAvgDepth)*g_vShadowMapUvDepth.z);
#line 98 // e:/Data/LSDShadowing.lssl
vector<float,2>vThis;
#line 101 // e:/Data/LSDShadowing.lssl
vector<float,2>fStepUv=(vSize/2.0);
#line 109 // e:/Data/LSDShadowing.lssl
for(float y=-2.0;
#line 103 // e:/Data/LSDShadowing.lssl
(y<=2.0);
y++){
{
#line 108 // e:/Data/LSDShadowing.lssl
for(float x=-2.0;
#line 104 // e:/Data/LSDShadowing.lssl
(x<=2.0);
x++){
{
#line 105 // e:/Data/LSDShadowing.lssl
vector<float,2>vOffset=(vector<float,2>(x,y)*fStepUv);
#line 106 // e:/Data/LSDShadowing.lssl
float fDepth=shadow2dDepth(g_sShadowTex,(vShadowCoordWDivide.xy+vOffset));
#line 107 // e:/Data/LSDShadowing.lssl
fTotal+=((fDepth==1.0)||(fDepth>vShadowCoordWDivide.z))?1.0:0.0;
}
}
}
}
}
}
}
}
return (fTotal/(fSum*fSum));}
}
#line 125 // e:/Data/LSDShadowing.lssl
#line 201 // e:/Data/LSDDefaultForwardPixelShader.lssl
void Main(in vector<float,3>_vInNormal:NORMAL0,in vector<float,2>_vIn2dTex0:TEXCOORD2,in vector<float,4>_vInPos:SV_POSITION0,in vector<float,4>_vInEyePos:TEXCOORD1,out vector<float,4>_vOutColor:SV_Target0){
{
#line 71 // e:/Data/LSDDefaultForwardPixelShader.lssl
vector<float,4>vShadowCoord=mul(g_mShadowMapMatrix,_vInEyePos);
#line 75 // e:/Data/LSDDefaultForwardPixelShader.lssl
float fShadow=PCSSShadowMap(vShadowCoord);
#line 86 // e:/Data/LSDDefaultForwardPixelShader.lssl
vector<float,4>vColorTemp;
#line 93 // e:/Data/LSDDefaultForwardPixelShader.lssl
vector<float,3>vNormalizedNormal=normalize(_vInNormal);
#line 101 // e:/Data/LSDDefaultForwardPixelShader.lssl
vector<float,4>vViewPosToEye=-normalize(_vInEyePos);
#line 108 // e:/Data/LSDDefaultForwardPixelShader.lssl
LSE_COLOR_PAIR cpLightColors={vector<float,4>(0.0,0.0,0.0,0.0),vector<float,4>(0.0,0.0,0.0,0.0)};
#line 122 // e:/Data/LSDDefaultForwardPixelShader.lssl
for(int I=0;
#line 110 // e:/Data/LSDDefaultForwardPixelShader.lssl
(I<g_iTotalDirLights);
I++){
{
#line 115 // e:/Data/LSDDefaultForwardPixelShader.lssl
LSE_COLOR_PAIR cpThis=GetDirLightColorAshikhminShirley(vNormalizedNormal,vViewPosToEye,I);
#line 120 // e:/Data/LSDDefaultForwardPixelShader.lssl
cpLightColors.cDiffuse+=cpThis.cDiffuse;
#line 121 // e:/Data/LSDDefaultForwardPixelShader.lssl
cpLightColors.cSpecular+=cpThis.cSpecular;
}
}
#line 145 // e:/Data/LSDDefaultForwardPixelShader.lssl
_vOutColor=g_vDiffuseMaterial;
#line 178 // e:/Data/LSDDefaultForwardPixelShader.lssl
_vOutColor.xyz=((_vOutColor.xyz*cpLightColors.cDiffuse.xyz)+(g_vSpecularMaterial*cpLightColors.cSpecular).xyz);
#line 184 // e:/Data/LSDDefaultForwardPixelShader.lssl
_vOutColor.xyz*=fShadow;
#line 200 // e:/Data/LSDDefaultForwardPixelShader.lssl
_vOutColor=max(_vOutColor,vector<float,4>(0.0,0.0,0.0,0.0));
}
}[/spoiler]


L. Spiro

[quote name='L. Spiro' timestamp='1343405279' post='4963682']
In DirectX 9 there is no way to read from a depth surface
[/quote]

I know this is not what you were asking about, but you can sample from a depth texture in D3D9 through vendor-specific extensions.
INTZ works on pretty much all non-ancient ATI and NV hardware (http://aras-p.info/texts/D3D9GPUHacks.html).

"Gradient operations" refer to anything that computes partial derivatives in the pixel shader, and in this particular case it's referring to the "Sample" function. You can't compute derivatives inside of dynamic flow control, since they're undefined if one of the pixels in the quad doesn't take the same path. So you need to either...

A. Use a sampling function that doesn't compute gradients, such as SampleLevel or SampleCmpLevelZero (see the sketch below)

or

B. Flatten all branches and unroll all loops in which you need to compute gradients.
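For example, option A applied to the shadow2dDepth helper from the first post might look like the sketch below. This is only a sketch; the sampler and texture names are taken from the posted shader, and whether the *0.5 + 0.5 bias stays depends on how the first issue gets resolved.

[CODE]// Sketch: SampleLevel() reads an explicit mip level, so no screen-space
// derivatives are needed and the call is legal inside divergent loops.
float shadow2dDepth( Texture2D _tTexture, float2 _vCoord ) {
	// A shadow map has a single mip, so level 0 is always the right one here.
	return _tTexture.SampleLevel( lsg_SamplerShadow, _vCoord, 0.0 ).x * 0.5 + 0.5;
}[/CODE]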

SampleLevel worked; thank you. My DirectX 11 side is now fully caught up to my DirectX 9, OpenGL 3.2, and OpenGL ES 2 sides. Now I can get serious about new graphics features.

What about the first issue?


L. Spiro

Hi!

[quote name='L. Spiro' timestamp='1343414616' post='4963727']
What about the first issue?
[/quote]
As you know, one of the many (meticulously hidden) differences between GL and D3D is that the range of the z-coordinate in clip space differs: after the perspective divide, z goes from [-1…1] in GL and from [0…1] in D3D. GL and GL ES give you direct access to the [-1…1] coordinate, which happens to be exactly what you want, since the coordinate you compare against is also in [-1…1]. Very convenient. In D3D (9 and 11) it is – surprise, surprise – the same story, except that the z-coordinate is in [0…1]. If you store the coordinate unaltered, you can just read it back and use it directly without conversions, e.g. render to a depth texture, fetch the depth later, and compare it to the depth in clip space from the light’s point of view.

But now, you confuse me a little. How can it be that the values had “too much range” in D3D (i.e. ended up in [-1…1])? What puzzles me even more is that you convert it to [0…1] to compare it to a depth value in [0…1]. How can it be that one coordinate ended up in [-1…1] needing a conversion and the other one is in [0…1]? It appears to me that there is an inconsistency at some point.

I have the feeling that you currently store the depth in [-1…1]. That means you convert it when writing to the render target and again when reading from it. (Note that the operation at reading just inverts the one you did at writing, so you could avoid both entirely.) The depth value you compare against comes out of a projection matrix and is therefore still in [0…1], right?

I’m quite sure you don’t, but: do you use the exact same projection matrix in D3D as in GL? (That would cause the D3D depth to end up in [-1…1].) That would be a problem, since a D3D projection matrix should produce depth in [0…1] while a GL one produces [-1…1]. The D3D rasterizer would happily clip away half of your frustum, since in D3D-country things don’t go negative. So… using a projection matrix that returns values in the correct range for each API would be the easiest fix, as it would remove the need for any conversion.
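A minimal sketch of baking that range fix into the matrix itself, assuming a column-vector convention (positions are transformed with mul(mProj, vPos), as in the posted Main() shader); the function name is just for illustration:

[CODE]// Convert a GL-style projection (z/w in [-1…1]) into a D3D-style one
// (z/w in [0…1]) by folding z' = 0.5 * z + 0.5 * w into the z row.
float4x4 GlProjToD3dProj( float4x4 _mGlProj ) {
	float4x4 mRet = _mGlProj;
	mRet[2] = (_mGlProj[2] + _mGlProj[3]) * 0.5;	// New z row = (old z row + w row) / 2.
	return mRet;
}[/CODE]

That is the same *0.5 + 0.5 bias your shadow2dDepth tweak applies per sample, just done once on the matrix instead.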

Also, storing the depth consistently in [-1…1] across all platforms is rather impossible to achieve (isn't it?), since the D3D depth buffer just happens to store values in [0…1]. You may change the coordinate in D3D9 when writing to the render target (which then needs a conversion at reading), but that won't help you much in D3D10+ if you use the real depth buffer.
Or can you persuade D3D to work in [-1...1] too by messing with the viewport? Hm...

Is there a reason you explicitly need the depth values in [-1…1] on all platforms? I see that you would like consistency, but wouldn't it be just fine if the range is the "correct" one for the respective platform? If the projection matrix leads you to the correct space (GL: [-1…1], D3D: [0…1]), there shouldn't be much to worry about, right?

Best regards!

