About Volgut

  1. Hello, I am working for one client on a DirectX 9 based visualization software. The client has a set of computers with very strict rules which make it impossible to install the latest DirectX redistributable files. Is there a way to find out which version of the redistributable D3DX files he has installed? I asked him to look for d3dx9_xx.dll in system32, but I got a reply that he couldn't find any files with that name. He is running Windows Vista with SP1. I also found out that the DirectX version he reports is 6.00.6001.18000. Regards, Tomasz
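     One way to answer this without remote access is to send the client a tiny tool that lists whatever D3DX runtimes are present. A minimal C++17 sketch (the helper name and scan logic are mine, not from any SDK; on 64-bit Vista both system32 and SysWOW64 would need scanning):

     ```cpp
     #include <algorithm>
     #include <cctype>
     #include <filesystem>
     #include <string>

     namespace fs = std::filesystem;

     // Returns the highest NN found among files named d3dx9_NN.dll in `dir`,
     // or -1 if none exist. NN maps to a D3DX SDK release (roughly 24..43).
     int HighestD3DXVersion(const fs::path& dir)
     {
         int best = -1;
         const std::string prefix = "d3dx9_";
         const std::string suffix = ".dll";
         for (const auto& entry : fs::directory_iterator(dir))
         {
             std::string name = entry.path().filename().string();
             for (char& c : name) c = (char)std::tolower((unsigned char)c);
             if (name.size() <= prefix.size() + suffix.size()) continue;
             if (name.compare(0, prefix.size(), prefix) != 0) continue;
             if (name.compare(name.size() - suffix.size(), suffix.size(), suffix) != 0) continue;
             std::string num = name.substr(prefix.size(), name.size() - prefix.size() - suffix.size());
             if (num.empty() || num.find_first_not_of("0123456789") != std::string::npos) continue;
             best = std::max(best, std::stoi(num));
         }
         return best;
     }
     ```

     Calling it with `C:\Windows\system32` on the client's machine would report which D3DX runtime (if any) is installed; -1 would confirm that no redistributable has ever been run.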
  2. Hello, I am trying to implement an atmospheric scattering effect based on O'Neil's implementation from GPU Gems 2. Almost everything seems fine, but the sky colour is too dark. I have already spent several hours trying to understand the problem and I am running out of ideas.

     To render the effect, I create a sphere with a radius of 6370997.0f * 1.025f (the earth radius plus the atmosphere) and then move it down by 6370997.0f (the earth radius) in the shader, which means my camera is located on the North Pole.

     static const float EARTH_RADIUS            = 6370997.0f;
     static const float EARTH_ATMOSPHERE_RADIUS = EARTH_RADIUS * 1.025f;

     From experience I know that this atmospheric scattering algorithm is very sensitive to its input parameters, which need to be in exactly the right range or the sky will look completely wrong. Here are my parameters:

     float fRayleighScatteringConstant = 0.0025f;
     float fMieScatteringConstant      = 0.0015f;
     float fSunBrightness              = 18.0f;
     float fRayleighScaleDepth         = 0.25f;
     float fMiePhaseAsymmetryFactor    = -0.98f;
     Vector3 vWaveLength = Vector3( 0.650f, 0.570f, 0.475f );

     This is how I pass all parameters to the shader:

     const float fKr = cAtmosphericScattering.fRayleighScatteringConstant; // Rayleigh scattering constant
     const float fKr4PI = fKr * 4.0f * SkMath::Pi();
     const float fKm = cAtmosphericScattering.fMieScatteringConstant; // Mie scattering constant
     const float fKm4PI = fKm * 4.0f * SkMath::Pi();
     const float fScale = 1.0f / (EARTH_ATMOSPHERE_RADIUS - EARTH_RADIUS);
     Vector3 vInvWavelength;
     vInvWavelength.x = 1.0f / SkMath::Pow( cAtmosphericScattering.vWaveLength.x, 4.0f ); // 650 nm for red
     vInvWavelength.y = 1.0f / SkMath::Pow( cAtmosphericScattering.vWaveLength.y, 4.0f ); // 570 nm for green
     vInvWavelength.z = 1.0f / SkMath::Pow( cAtmosphericScattering.vWaveLength.z, 4.0f ); // 475 nm for blue
     pEffect->SetVector( "v3InvWavelength", vInvWavelength );
     pEffect->SetFloat( "fInnerRadius", EARTH_RADIUS );
     pEffect->SetFloat( "fKrESun", fKr * cAtmosphericScattering.fSunBrightness );
     pEffect->SetFloat( "fKmESun", fKm * cAtmosphericScattering.fSunBrightness );
     pEffect->SetFloat( "fKr4PI", fKr4PI );
     pEffect->SetFloat( "fKm4PI", fKm4PI );
     pEffect->SetFloat( "fScale", fScale );
     pEffect->SetFloat( "fScaleDepth", cAtmosphericScattering.fRayleighScaleDepth );
     pEffect->SetFloat( "fScaleOverScaleDepth", fScale / cAtmosphericScattering.fRayleighScaleDepth );
     pEffect->SetFloat( "g", cAtmosphericScattering.fMiePhaseAsymmetryFactor );
     pEffect->SetFloat( "g2", cAtmosphericScattering.fMiePhaseAsymmetryFactor * cAtmosphericScattering.fMiePhaseAsymmetryFactor );
     float fCameraHeight = EARTH_RADIUS + vCameraPosition.y; // vCameraPosition is my camera position in meters
     pEffect->SetVector( "v3CameraPos", SkVector3( 0.0f, fCameraHeight, 0.0f ) );
     pEffect->SetVector( "v3LightPos", vSunDirection ); // vSunDirection is my light vector
     pEffect->SetFloat( "fCameraHeight", fCameraHeight );

     Functions used by the shader:

     float ExpScale( float fCos )
     {
         float x = 1.0 - fCos;
         return fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
     }

     float GetRayleighPhase( float fCos2 )
     {
         return 0.75 * (1.0 + fCos2);
     }

     float GetMiePhase( float fCos, float fCos2 )
     {
         return 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos2) / pow(1.0 + g2 - 2.0*g*fCos, 1.5);
     }

     Vertex shader:

     #define INTEGRAL_EQUATION_NUM_SAMPLES 2 // Number of sample rays to use in the integral equation

     SVSOutput VS_Sky( float4 i_Position : POSITION )
     {
         SVSOutput sOutput;

         // This is the vertex position relative to the earth center at (0,0,0)
         float3 vPositionEarthCenter = i_Position.xyz;

         // Transform the vertex to screen space (here I move the mesh down)
         i_Position.y -= fInnerRadius;
         sOutput.vPosition = mul( i_Position, g_WorldViewProjection );

         // Get the ray from the camera to the vertex, and its length
         // (which is the far point of the ray passing through the atmosphere)
         float3 v3Ray = vPositionEarthCenter - v3CameraPos;
         float fFar = length(v3Ray);
         v3Ray /= fFar;

         // Calculate the ray's starting position, then calculate its scattering offset
         float3 v3Start = v3CameraPos;
         float fHeight = length(v3Start);
         float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fCameraHeight));
         float fStartAngle = dot(v3Ray, v3Start) / fHeight;
         float fStartOffset = fDepth * ExpScale(fStartAngle);

         // Initialize the scattering loop variables
         float fSampleLength = fFar / float( INTEGRAL_EQUATION_NUM_SAMPLES );
         float fScaledLength = fSampleLength * fScale;
         float3 v3SampleRay = v3Ray * fSampleLength;
         float3 v3SamplePoint = v3Start + v3SampleRay * 0.5;

         // Now loop through the sample rays
         float3 v3FrontColor = float3(0.0, 0.0, 0.0);
         for (int i = 0; i < INTEGRAL_EQUATION_NUM_SAMPLES; i++)
         {
             float fHeight = length(v3SamplePoint);
             float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight));
             float fLightAngle = dot(v3LightPos, v3SamplePoint) / fHeight;
             float fCameraAngle = dot(v3Ray, v3SamplePoint) / fHeight;
             float fScatter = fStartOffset + fDepth*(ExpScale(fLightAngle) - ExpScale(fCameraAngle));
             float3 v3Attenuate = exp(-fScatter * (v3InvWavelength * fKr4PI + fKm4PI));
             v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
             v3SamplePoint += v3SampleRay;
         }

         // Finally, scale the Mie and Rayleigh colours and set up the varying variables for the pixel shader
         sOutput.vMieColor = v3FrontColor * fKmESun;
         sOutput.vRayleighColor = v3FrontColor * (v3InvWavelength * fKrESun);
         sOutput.vCameraToVertexDir = v3CameraPos - vPositionEarthCenter;
         return sOutput;
     }

     Pixel shader:

     SPSOutput PS_Sky( SVSOutput sInput )
     {
         SPSOutput sOutput;
         float fCos = dot(v3LightPos, sInput.vCameraToVertexDir) / length(sInput.vCameraToVertexDir);
         float fCos2 = fCos * fCos;
         float fRayleighPhase = GetRayleighPhase( fCos2 );
         float fMiePhase = GetMiePhase( fCos, fCos2 );
         sOutput.vColor = float4( fRayleighPhase * sInput.vRayleighColor + fMiePhase * sInput.vMieColor, 1.0f );
         return sOutput;
     }

     And a picture showing my sky (120deg FOV to show more of the sky). I should also mention that I render my sky to an HDR texture and use the following equation to convert colours to LDR:

     vColor = 1.0 - exp( -2.0 * vColor )

     Has anyone who implemented atmospheric scattering based on O'Neil's algorithm had the same problem? Do you have any ideas how to fix my sky? The easiest way to make the sky brighter is to increase the sun brightness parameter (fSunBrightness in my code), but for values above 20.0f it creates a very strong "halo" effect around the horizon.
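     When a sky like this comes out too dark, it can help to sanity-check the derived constants, the phase functions, and the tone mapping on the CPU before touching the shader. A small standalone sketch mirroring the values in the post (plain C++, not engine code):

     ```cpp
     #include <cmath>

     static const float EARTH_RADIUS            = 6370997.0f;
     static const float EARTH_ATMOSPHERE_RADIUS = EARTH_RADIUS * 1.025f;

     const float g  = -0.98f;  // Mie phase asymmetry factor from the post
     const float g2 = g * g;

     // fScale as uploaded to the shader: 1 / atmosphere thickness in meters.
     const float fScale = 1.0f / (EARTH_ATMOSPHERE_RADIUS - EARTH_RADIUS);

     // 1 / wavelength^4, as computed for each colour channel.
     float InvWavelength4(float w) { return 1.0f / std::pow(w, 4.0f); }

     // Phase functions exactly as in the shader.
     float GetRayleighPhase(float fCos2) { return 0.75f * (1.0f + fCos2); }
     float GetMiePhase(float fCos, float fCos2)
     {
         return 1.5f * ((1.0f - g2) / (2.0f + g2)) * (1.0f + fCos2)
              / std::pow(1.0f + g2 - 2.0f * g * fCos, 1.5f);
     }

     // The HDR->LDR mapping from the end of the post.
     float ToneMap(float c) { return 1.0f - std::exp(-2.0f * c); }
     ```

     With the tone map 1 - exp(-2c), any in-scattered value much below 0.5 stays visibly dark (e.g. c = 0.1 maps to about 0.18), so logging the CPU-side numbers along a few rays quickly shows whether the integral or the tone mapping is at fault.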
  3. Yes, you are right. The vector should only be scaled by the variable called "filterStrength".
  4. Hello Styves, Thank you for helping me :) Your code was very useful. I only modified it a bit and implemented the normal scale/offset based on the pixel's distance from the camera. With a fixed scale/offset I was getting some artefacts in the distance (bleeding). Here you can see two screenshots (scaled 200%) with and without anti-aliasing. Your anti-aliasing technique works very well. As you wrote, it doesn't solve all problems, but it can definitely reduce the aliasing effect. Now I will try to look into Crytek's temporal anti-aliasing. Thanks again. Regards, Tomasz
  5. Hello, I was looking for a good anti-aliasing technique for my deferred renderer for a long time. Your idea sounds very interesting and I would like to test it in my engine, but I am not sure I fully understood how it works. How exactly do you calculate the normals from colour? In my engine I used luminance, but my results are not as good as yours. Here is my shader code:

     float2 vPixelViewport = float2( 1.0f / VIEWPORT_WIDTH, 1.0f / VIEWPORT_HEIGHT );

     // Normal
     float2 upOffset    = float2( 0, vPixelViewport.y );
     float2 rightOffset = float2( vPixelViewport.x, 0 );
     float topHeight         = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + upOffset ).rgb );
     float bottomHeight      = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - upOffset ).rgb );
     float rightHeight       = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset ).rgb );
     float leftHeight        = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset ).rgb );
     float leftTopHeight     = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset + upOffset ).rgb );
     float leftBottomHeight  = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset - upOffset ).rgb );
     float rightTopHeight    = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset + upOffset ).rgb );
     float rightBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset - upOffset ).rgb );
     float xDifference = (2*rightHeight + rightTopHeight + rightBottomHeight)/4.0f - (2*leftHeight + leftTopHeight + leftBottomHeight)/4.0f;
     float yDifference = (2*topHeight + leftTopHeight + rightTopHeight)/4.0f - (2*bottomHeight + rightBottomHeight + leftBottomHeight)/4.0f;
     float3 vec1 = float3( 1, 0, xDifference );
     float3 vec2 = float3( 0, 1, yDifference );
     float3 Normal = normalize( cross( vec1, vec2 ) );

     // Color
     Normal.xy *= vPixelViewport * 2; // Increase pixel size to get more blur
     float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
     float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
     float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
     float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2(Normal.x, -Normal.y) );
     float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2(Normal.x, -Normal.y) );

     // Final color
     o_Color = (Scene0 + Scene1 + Scene2 + Scene3 + Scene4) * 0.2;

     And this is how I calculate luminance:

     float GetColorLuminance( float3 i_vColor )
     {
         return dot( i_vColor, float3( 0.2126f, 0.7152f, 0.0722f ) );
     }

     And at the end I would like to show you screenshots with and without AA (images scaled by 200%).

     [Edited by - Volgut on September 2, 2010 4:25:22 AM]
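     The Sobel-style gradient construction above is easy to verify on the CPU with a hand-made luminance neighbourhood. A small plain-C++ sketch mirroring the shader maths (helper names are mine):

     ```cpp
     // Rec. 709 luma weights, matching GetColorLuminance in the shader.
     float Luminance(float r, float gch, float b)
     {
         return 0.2126f * r + 0.7152f * gch + 0.0722f * b;
     }

     // Horizontal gradient over a 3x3 luminance neighbourhood L[row][col]
     // (col 0 = left, col 2 = right), matching the xDifference expression.
     float XDifference(const float L[3][3])
     {
         float right = (2.0f * L[1][2] + L[0][2] + L[2][2]) / 4.0f;
         float left  = (2.0f * L[1][0] + L[0][0] + L[2][0]) / 4.0f;
         return right - left;
     }
     ```

     A pure vertical edge (left column 0, right column 1) should produce an xDifference of exactly 1.0, while a flat region produces 0; that makes it easy to check that the blur offsets only kick in where there really is an edge.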
  6. Shader cache

     The only reason I was thinking about a shader cache is to avoid run-time stalls. And no, I can't compile all shaders at start-up of my app because, as Hodgman said, that would take too much time. I have around 5 "super shaders" and each of them uses around 10-30 #ifdefs.

     [Edited by - Volgut on July 13, 2010 9:22:24 AM]
  7. Shader cache

     I am a bit surprised that it's not possible to load a shader in binary format, because as far as I know that's not a problem under DirectX. Anyway, thanks for the answer.
  8. Shader cache

     My application uses many variations of several shaders and I need to access them at any time. For example, if I enable a point light, several shaders need to be re-compiled to include the point light computations. Currently this stops the whole application for a short while, because the OpenGL driver needs to re-compile these shaders. Instead of re-compiling shaders, I would like to use them directly from a cache in a binary (compiled) format.
  9. Hello, I would like to implement a shader cache which will store all compiled shaders in a file. A shader will be re-compiled if its source file has changed; otherwise the compiled shader will be used directly from the cache file. I am not very familiar with OpenGL, and I am looking for a function which returns a compiled shader, and another function which I could use to set a compiled shader that was loaded from the cache. Best regards, Tomasz
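     For reference, the binary save/load itself would go through glGetProgramBinary / glProgramBinary from the ARB_get_program_binary extension (brand new at the time of this thread, and only available on drivers that expose it). The invalidation side is driver-independent; a minimal sketch of one way to do it, keeping an FNV-1a hash of the source next to the cached blob (the names and policy are mine, not a library API):

     ```cpp
     #include <cstdint>
     #include <string>

     // FNV-1a hash of the shader source; stored alongside the cached
     // binary so a stale cache entry is detected when the source changes.
     std::uint64_t SourceHash(const std::string& src)
     {
         std::uint64_t h = 1469598103934665603ull;  // FNV offset basis
         for (unsigned char c : src)
         {
             h ^= c;
             h *= 1099511628211ull;                 // FNV prime
         }
         return h;
     }

     // Cache policy: recompile only when the stored hash no longer
     // matches the current source text.
     bool NeedsRecompile(std::uint64_t cachedHash, const std::string& src)
     {
         return cachedHash != SourceHash(src);
     }
     ```

     On a cache hit the blob retrieved earlier with glGetProgramBinary would be handed straight to glProgramBinary; on a miss (or if glProgramBinary rejects the blob, which drivers may do after an update) the program is compiled from source and the cache entry rewritten.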
  10. Thanks! This is exactly what I was looking for :-) Regards, Tomasz
  11. Hello, I would like to add a special dialog box to my assert function which will show different information depending on whether the application is running under Visual Studio or not. Does anyone know if there is a function in VS which I can use to detect the debugger? I was also thinking about detecting the 'devenv.exe' process. My application is written in C++ and I am using Visual Studio 2008. Regards, Tomasz

      [Edited by - Volgut on March 13, 2010 2:44:36 AM]
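      The Win32 API has a documented call for exactly this: IsDebuggerPresent(), which reports whether any debugger (Visual Studio included) is attached to the process, so there is no need to hunt for devenv.exe. A hedged sketch; the non-Windows branch (parsing TracerPid from /proc/self/status) is only there to keep the example portable:

      ```cpp
      #include <string>
      #ifdef _WIN32
      #include <windows.h>
      #else
      #include <fstream>
      #endif

      // Returns true when a debugger is attached to this process.
      bool DebuggerAttached()
      {
      #ifdef _WIN32
          // Documented Win32 call; true for Visual Studio or any other debugger.
          return IsDebuggerPresent() != 0;
      #else
          // Linux fallback: a nonzero TracerPid means something is ptracing us.
          std::ifstream status("/proc/self/status");
          std::string line;
          while (std::getline(status, line))
          {
              if (line.rfind("TracerPid:", 0) == 0)
                  return std::stoi(line.substr(10)) != 0;
          }
          return false;
      #endif
      }
      ```

      Note this detects an attached debugger rather than Visual Studio specifically; for the assert dialog that is usually what you want (e.g. call DebugBreak() when attached, show the dialog otherwise).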
  12. MRT support

      Thank you, I will test that. I am using a GeForce 9800GX2, but I also tested my engine on an 8800GTX. Regards, Tomasz
  13. MRT support

      Hello! I am working on my 3D engine and some time ago I implemented deferred rendering. I am using 4 surfaces with different formats:

      - D3DFMT_A16B16G16R16F (HDR color) - 8 bytes/pixel
      - D3DFMT_A8R8G8B8 (normal, specular) - 4 bytes/pixel
      - D3DFMT_R32F (depth) - 4 bytes/pixel

      My question is: how do I detect whether a device supports multiple render targets with different bit depths? Regards, Tomasz
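      In D3D9 the answer lives in the device caps: D3DCAPS9::PrimitiveMiscCaps has a D3DPMISCCAPS_MRTINDEPENDENTBITDEPTHS bit, and NumSimultaneousRTs gives the render-target count, both obtained via IDirect3DDevice9::GetDeviceCaps. A sketch of the check using a stand-in caps struct so it compiles outside Windows; the flag value is copied from memory of the SDK header and should be double-checked against d3d9caps.h:

      ```cpp
      #include <cstdint>

      // Assumed value of D3DPMISCCAPS_MRTINDEPENDENTBITDEPTHS; in engine
      // code include d3d9.h and use the real constant instead.
      const std::uint32_t MRT_INDEPENDENT_BIT_DEPTHS = 0x00040000;

      // Minimal stand-in for the two D3DCAPS9 fields that matter here.
      struct CapsSubset
      {
          std::uint32_t PrimitiveMiscCaps;
          std::uint32_t NumSimultaneousRTs;
      };

      // A device can take a mixed-bit-depth G-buffer when it offers enough
      // simultaneous render targets AND allows independent bit depths.
      bool SupportsMixedDepthMRT(const CapsSubset& caps, std::uint32_t targetsNeeded)
      {
          return caps.NumSimultaneousRTs >= targetsNeeded
              && (caps.PrimitiveMiscCaps & MRT_INDEPENDENT_BIT_DEPTHS) != 0;
      }
      ```

      If the bit is absent, the usual fallback is to pick one common bit depth for all targets (e.g. pack depth into an A8R8G8B8 target) rather than assume mixed formats will work.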
  14. MRT and multisampling

      Thank you very much. For now I will use supersampling, and probably in the near future I will start porting my 3D engine to DirectX 10. Regards, Tomasz