About Paksas

  1. Backing up a bit - it turns out the problem's still there. It just became less visible, because the specular power is now calculated correctly, but that doesn't change the fact that when a light is perpendicular to the object's surface, the highlight flickers. Any ideas?
  2. Ok - found it. There were two problems hidden in the directional light shader: 1.) I don't know if I mentioned that I'm running all calculations in camera ( view ) space. Anyway - the light direction I passed to calculate the reflection was flipped. 2.) You guys were barking up the right tree with the whole min/max thing - right tree, wrong branch ;) This is the original code: [CODE] const float matShininess = matSpecularCol.a * 2 - 1; /// ... float specularFactor = pow( max( specularValue, 1.0 ), matShininess ); [/CODE] and here's the corrected code: [CODE] const float matShininess = matSpecularCol.a * 255.0; /// ... float specularFactor = pow( max( specularValue, 0.0 ), max( matShininess, 1.0 ) ); [/CODE] I was decoding the shininess the wrong way: I divided the value by 255 in the engine code ( which I didn't paste here, so you guys couldn't possibly have known ) so that I could output it in the alpha channel of my normals texture, but then I forgot to invert that operation before calculating the factor - so in fact I was raising to a power in the range ( 0..1 ). Also, the shininess value in my engine can be equal to 0 if the material's not supposed to be shiny at all - I forgot to filter that case out. Anyway - thank you all for the help. Cheers
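A minimal Python sketch of the round trip described above (the function names are illustrative, not the engine's actual code): the engine divides the shininess by 255 so it fits a normalized 8-bit alpha channel, and the shader has to multiply by 255 to undo it - the buggy version instead remapped the 0..1 alpha to roughly -1..1.

```python
# Sketch of the shininess round trip (names are illustrative, not engine code).

def encode_shininess(shininess):
    """Engine side: pack a 0..255 specular power into a 0..1 alpha value."""
    return shininess / 255.0

def decode_shininess(alpha):
    """Shader side, corrected: multiply back by 255 and clamp to at least 1
    so pow() never receives an exponent of 0."""
    return max(alpha * 255.0, 1.0)

def decode_shininess_buggy(alpha):
    """The original, buggy decoding: remaps 0..1 to -1..1 instead of undoing
    the division, leaving the exponent in roughly the 0..1 range."""
    return alpha * 2.0 - 1.0
```

The clamp in `decode_shininess` also covers the "shininess can be 0" case mentioned above.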
  3. @Ashaman73 - I tried, but that doesn't change anything. As a matter of fact, if you look at the code you pasted ( the original one ), you can see that I'm already maxin' it to 1.0 before I pass it to the 'pow' function: [CODE] float specularFactor = pow( max( specularValue, 1.0 ), matShininess ); [/CODE]
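For what it's worth, the min/max point can be checked numerically - a plain Python sketch (illustrative only): a reflection dot product never exceeds 1, so clamping it up to 1.0 hands pow() a constant base, whereas clamping down to 0 is the usual guard before pow().

```python
# A reflection dot product lies in [-1, 1], so max(value, 1.0) always yields
# exactly 1.0, and pow(1.0, shininess) is 1.0 regardless of the shininess.
# Clamping DOWN to 0 (max(value, 0.0)) lets the term vary with the angle.

def specular_clamped_up(value, shininess):
    return max(value, 1.0) ** shininess    # constant 1.0 for any value <= 1

def specular_clamped_down(value, shininess):
    return max(value, 0.0) ** shininess    # varies with the angle
```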
  4. No division by zero there. Here are the shaders:

[b]Object shader:[/b]
[CODE]
//--------------------------------------------------------------------------------------
// Global variables
//--------------------------------------------------------------------------------------
float4 g_MaterialAmbientColor;   // Material's ambient color
float4 g_MaterialDiffuseColor;   // Material's diffuse color
bool   g_DiffuseTexSet;
bool   g_NormalTexSet;
bool   g_SpecularTexSet;
float  g_farZ = 1024;
int    g_materialIdx;

//--------------------------------------------------------------------------------------
// Texture samplers
//--------------------------------------------------------------------------------------
sampler g_DiffuseTexture;
sampler g_NormalTexture;
sampler g_SpecularTexture;

//--------------------------------------------------------------------------------------
// This shader outputs the pixel's color by modulating the texture's
// color with the diffuse material color
//--------------------------------------------------------------------------------------
// <psMain>
DEFERRED_PS_OUTPUT main( VS_OUTPUT In )
{
    DEFERRED_PS_OUTPUT output = (DEFERRED_PS_OUTPUT)0;

    // calculate the color component
    output.Color = g_MaterialDiffuseColor;
    if ( g_DiffuseTexSet )
    {
        float4 texColor = tex2D( g_DiffuseTexture, In.TextureUV );
        output.Color = texColor + g_MaterialDiffuseColor * ( 1 - texColor.a );
    }

    // calculate the normal
    float3 surfaceNormalViewSpace = 0;
    if ( g_NormalTexSet )
    {
        float3 normalVS = normalize( In.Normal );
        float3 tangentVS = normalize( In.Tangent );
        float3 binormalVS = cross( normalVS, tangentVS );
        float3x3 mtx = float3x3( tangentVS, binormalVS, normalVS );

        float3 surfaceNormalTanSpace = tex2D( g_NormalTexture, In.TextureUV ) * 2.0 - 1.0;

        // I noticed the other day that normal maps exported from Blender have the Y coordinate
        // flipped. So I'm unflipping it here - but keep that fact in mind and pay attention to
        // other normal maps - perhaps I got something wrong in the code
        surfaceNormalTanSpace.y *= -1.0;

        surfaceNormalViewSpace = mul( surfaceNormalTanSpace, mtx );
        surfaceNormalViewSpace = normalize( surfaceNormalViewSpace );
    }
    else
    {
        surfaceNormalViewSpace = normalize( In.Normal );
    }

    // Flip the Z direction, so that Z values decrease as they get further away from the camera.
    // It's required in order for our normals compression algorithm to work correctly
    output.NormalSpec.rgb = surfaceNormalViewSpace * float3( 0.5, 0.5, -0.5 ) + 0.5;
    output.DepthNormal = encodeDepthAndNormal( surfaceNormalViewSpace, In.PositionCS.z, g_farZ );

    // calculate the specular color ( store info about the specularity in the albedo's alpha channel )
    if ( g_SpecularTexSet )
    {
        output.NormalSpec.a = tex2D( g_SpecularTexture, In.TextureUV ).r;
    }
    else
    {
        // full specularity
        output.NormalSpec.a = 1;
    }

    // store the material index
    output.MatIdx = storeMaterialIndex( g_materialIdx );

    return output;
}
[/CODE]

[b]Directional light shader:[/b]
[CODE]
// -------------------------------------------------------------------------------
// constants
// -------------------------------------------------------------------------------
float4 g_halfPixel;
float4 g_lightDirVS;        // light direction in view space
float4 g_lightColor;        // light color
float  g_strength;          // light strength
bool   g_drawShadows;       // should we take the shadow map into account while rendering the light?
int    g_materialsTexSize;

//--------------------------------------------------------------------------------------
// Texture samplers
//--------------------------------------------------------------------------------------
sampler g_Normals;
sampler g_Specular;
sampler g_SceneColor;
sampler g_ShadowMap;
sampler g_MaterialIndices;
sampler g_MaterialsDescr;

//--------------------------------------------------------------------------------------
// Helper structure that describes light parameters at the sampled point of the screen
//--------------------------------------------------------------------------------------
struct LightValues
{
    float3 m_direction;
    float  m_reflectionFactor;
    float3 m_reflectionVector;
    float4 m_color;
};

//--------------------------------------------------------------------------------------
// Calculates the color of the pixel lit by the light described by the specified values
//--------------------------------------------------------------------------------------
float4 calcLightContribution( float2 UV, LightValues lightValues, sampler sceneColorTex, sampler materialIndices, sampler materialsDescr, int materialsTexSize )
{
    // get the material's colors
    float4 encodedMaterialIdx = tex2D( materialIndices, UV );
    int materialIdx = readMaterialIndex( encodedMaterialIdx );
    float4 matDiffuseCol = getMaterialComponent( materialsDescr, materialIdx, DIFFUSE_COLOR, materialsTexSize );
    float4 matSpecularCol = getMaterialComponent( materialsDescr, materialIdx, SPECULAR_COLOR, materialsTexSize );

    // I don't remember encoding the value of specular power, but for some reason
    // it's encoded in the same way as the normals - investigate!
    const float matShininess = matSpecularCol.a * 2 - 1;
    matSpecularCol.a = 1;

    float4 baseColor = tex2D( sceneColorTex, UV );
    float4 outputColor = float4( 0, 0, 0, 1 );

    if ( lightValues.m_reflectionFactor > 0.0 )
    {
        // calculate the specular factor
        float specularValue = dot( lightValues.m_reflectionVector, float3( 0, 0, 1 ) );
        float specularFactor = pow( max( specularValue, 1.0 ), matShininess );

        // shininess occurs only where we allow it to be, and in our case that is regulated
        // by the contents of the albedo texture's alpha channel
        specularFactor *= baseColor.a;

        // calculate the component albedo contributions for the specular and diffuse terms
        float3 specularColor = matSpecularCol * specularFactor;
        float3 diffuseColor = matDiffuseCol * baseColor.rgb * lightValues.m_reflectionFactor;

        // calculate the final color by adding the two albedo colors and modulating them using the light's color
        outputColor.rgb += ( diffuseColor + specularColor ) * lightValues.m_color.rgb;
    }

    return saturate( outputColor );
}

//--------------------------------------------------------------------------------------
// This method calculates the light parameter values
//--------------------------------------------------------------------------------------
LightValues calculateDirectionalLightValue( float2 UV )
{
    LightValues Out = (LightValues)0;

    // set the light ray direction
    Out.m_direction = g_lightDirVS.xyz * float3( -1, -1, 1 );

    // calculate the reflection factor of the light ray hitting the lit surface
    float3 pixelNormalVS = normalize( tex2D( g_Normals, UV ) * 2.0 - 1.0 );
    Out.m_reflectionFactor = dot( pixelNormalVS, Out.m_direction );
    Out.m_reflectionVector = normalize( -reflect( Out.m_direction, pixelNormalVS ) );

    // set the color of the light ( no attenuation required, since directional lights have the same
    // strength at every point in space )
    Out.m_color = g_lightColor * g_strength;

    return Out;
}

// -------------------------------------------------------------------------------
// main function
// -------------------------------------------------------------------------------
LIGHT_PS_OUT main( float2 UV : TEXCOORD0 )
{
    LIGHT_PS_OUT Out = (LIGHT_PS_OUT)0;

    // align the texture coordinates
    UV -= g_halfPixel.xy;

    float lightPercentage = 1.0f;
    if ( g_drawShadows )
    {
        // use the shadow map to determine if we should render there
        lightPercentage = tex2D( g_ShadowMap, UV ).r;
    }

    // the light's color and direction are constant across the entire space
    if ( lightPercentage > 0.0f )
    {
        LightValues lightValue = calculateDirectionalLightValue( UV );

        // calculate the light's contribution to the scene color
        Out.vColor = calcLightContribution( UV, lightValue, g_SceneColor, g_MaterialIndices, g_MaterialsDescr, g_materialsTexSize );

        // modulate the output color by the light value
        Out.vColor *= lightPercentage;
    }
    else
    {
        // save precious cycles - if the light doesn't reach the spot, don't calculate its contribution
        Out.vColor = 0.0f;
    }

    return Out;
}
[/CODE]
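As a side note, the specular term in calcLightContribution above is plain Phong: reflect the light direction about the surface normal and dot the result with the view direction, which in view space is just ( 0, 0, 1 ). A self-contained Python sketch of that math (illustrative; assumes unit vectors, with light_dir pointing from the surface toward the light as in the shader):

```python
# Sketch of the Phong specular term: reflect the light direction about the
# normal and dot it with the view direction, which in view space is (0, 0, 1).

def dot3(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def reflect(i, n):
    # Same semantics as HLSL reflect(): i - 2 * dot(i, n) * n
    d = 2.0 * dot3(i, n)
    return (i[0] - d * n[0], i[1] - d * n[1], i[2] - d * n[2])

def phong_specular(light_dir, normal, shininess):
    r = reflect(light_dir, normal)
    view = (0.0, 0.0, 1.0)  # view direction in view space
    # the shader negates the reflection vector before the dot product
    return max(dot3((-r[0], -r[1], -r[2]), view), 0.0) ** shininess
```

With the light along the normal the term peaks at 1; with the light grazing the surface it drops to 0, which is exactly the regime where the flicker shows up.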
  5. Hi everyone. I'm writing a deferred renderer using DX9, and I ran into a problem with specular reflections produced by directional lights. When a ray of light is parallel to the lit surface, the specular reflection produced by the light starts flickering as I move the camera. [attachment=11458:flicker_1.png] [attachment=11459:flicker_2.png] I made sure that all normals were normalized after they came through the rasterizer. I'm even renormalizing them once they've been sampled from the normals texture. I think it's got something to do with precision loss as the normals go through the rasterizer, but the weird thing is that if I decrease the light angle even by a fraction, the artifact disappears altogether and the reflection becomes stable ( which would suggest that it's not a precision issue after all ). I'm storing my normals in a DXT5 texture, but I tried different formats as well, including a 32-bit color texture. I'm sampling it using a POINT filter ( I tried linear, but nothing changes either ). I'd be grateful for any suggestions. Cheers, Paksas
  6. the best ssao ive seen

    [quote name='GeneralQuery' timestamp='1321461665' post='4884609'] @Paksas: I had this exact same problem with the noise texture! I didn't find a solution, though [/quote] I established that when I increase the sampling radius, the weird "cross" in the middle of the screen disappears, and the "overlayed scattered pattern" along with it. But another problem arises - the edges start bleeding white further and further. The radius I'm talking about here equals 100.0f - so it's pretty big... Any clues, anyone?
  7. the best ssao ive seen

    Hi, Thanks for posting the shader - your results look really amazing. I've been trying to run it all day though and I keep having problems, so I thought I'd ask. I'm getting strange artifacts. I use the original HLSL code you posted. Here's what I'm getting: [attachment=6112:ssao.png] Here's the contents of the depth buffer and the normals for it: [attachment=6114:depthBuffer.png] [attachment=6115:normals.png] Both are in view space ( WorldMtx * ViewMtx ). Any hints what I may be doing wrong? Oh - here's the code I'm using to pack/unpack the depth & normal values:
[code]
// ----------------------------------------------------------------------------
// Normals
// ----------------------------------------------------------------------------
float2 EncodeNormal( float3 normal )
{
    float2 color = normal.xy;
    return color;
}

float3 DecodeNormal( float2 input )
{
    float3 normal;
    normal.xy = 2.0f * input.rg - 1.0f;
    normal.z = sqrt( max( 0, 1 - dot( normal.xy, normal.xy ) ) );
    return normal;
}

// ----------------------------------------------------------------------------
// Depth
// ----------------------------------------------------------------------------
float2 EncodeDepth( float depth, float maxZ )
{
    float remappedDepth = depth / maxZ;

    const float2 bitSh = float2( 256.0, 1.0 );
    const float2 bitMsk = float2( 0.0, 1.0 / 256.0 );

    float2 res = frac( remappedDepth * bitSh );
    res -= res.xx * bitMsk;
    return res;
}

float DecodeDepth( float2 encodedDepth, float maxZ )
{
    const float2 bitSh = float2( 1.0 / 256.0, 1.0 );
    float val = dot( encodedDepth, bitSh );
    return val * maxZ;
}
[/code]
One last thing - the image I posted was generated without the random noise you used. When I add it, here's the result I get: [attachment=6117:ssaoNoise.png] It's as if the texture was simply blended over the scene. Cheers, Paksas
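A quick Python port of the depth pair above, just to sanity-check that EncodeDepth/DecodeDepth round-trip (illustrative only; it ignores the 8-bit quantization the real texture introduces, which is where actual precision is lost):

```python
import math

def frac(x):
    # Same semantics as HLSL frac(): the fractional part of x
    return x - math.floor(x)

def encode_depth(depth, max_z):
    # Port of the EncodeDepth HLSL above: split a normalized depth into
    # two channels (high bits in x, low bits in y).
    remapped = depth / max_z
    res_x = frac(remapped * 256.0)
    res_y = frac(remapped * 1.0)
    res_y -= res_x * (1.0 / 256.0)  # remove the high bits already stored in x
    return (res_x, res_y)

def decode_depth(encoded, max_z):
    # Port of DecodeDepth: recombine the two channels.
    val = encoded[0] * (1.0 / 256.0) + encoded[1]
    return val * max_z
```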
  8. [quote name='TiagoCosta' timestamp='1300625996' post='4788210'] I guess when your blur shader you dont really downsamle the original image, you simply crop it... Post your blurring shader... And any code you find to be important [/quote] I found the problem - I was rendering to a fullscreen quad the size of which always matched that of the screen, and not the render target size. You have to excuse a noob Cheers, Paksas
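The fix described above can be sketched out: in D3D9, a "fullscreen" quad for a post-processing pass has to cover the render target currently bound, not the backbuffer, and pretransformed coordinates are shifted by half a pixel so that texel centers line up with pixel centers. A minimal Python sketch (illustrative, not the poster's actual code):

```python
# Corner positions for a quad covering an rt_width x rt_height render target,
# with the D3D9 half-pixel offset applied (pretransformed XYZRHW convention).
# Sizing this quad to the screen instead of the render target crops the image,
# which is the bug described above.

def fullscreen_quad(rt_width, rt_height):
    x0, y0 = -0.5, -0.5
    x1, y1 = rt_width - 0.5, rt_height - 0.5
    return [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
```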
  9. Hi, I'm trying to render a scene to a render target, blur it and then output it to the back buffer. The thing is that only the top left half of the scene gets rendered to the back buffer. Original: [attachment=1679:original.jpg] Post processed: [attachment=1680:blured.jpg] I narrowed it down to an issue with the render target texture size - the scene gets rendered to a texture matching the viewport size ( let's say 800x600 ). The blur filter renders its results to a texture half that size ( 400x300 ), and then that texture gets rendered back to the viewport's backbuffer ( 800x600 ). Somewhere along the way the image is cropped and only its top left part gets rendered. I removed everything from the blur pixel shader, leaving only simple pass-through code there, which also yielded these results. Here's the code:
[code]
float4 PassThrough( in float2 t : TEXCOORD0 ) : COLOR
{
    float4 col = tex2D( inputTex, t );
    return col;
}
[/code]
Could you please tell me what I'm doing wrong? Thanks, Paksas
  10. converting a quaternion to euler angles

    >> I'll go ahead and ask my usual question here: what do you need the Euler angles for? I want to make an editor where I can display the value of a quaternion in the form of Euler angles. >> Are you trying to get the output to equal (within numerical limits) 124, 264, 351? Yes - I want to have a constructor in my EulerAngles class that accepts a quaternion as a param. As a simple test, I wanted to create a quaternion using Euler angles and roll that operation back by instancing my class. Any ideas what I'm doing wrong here with the equations? Cheers, Paksas
  11. Hello, I know that this subject's already been on the board, and I've tried using the equations posted in other threads as well as those that can be found on the Euclidean Space site. As you can probably guess, I couldn't get them to work for me. My code looks like this:
[code]
D3DXQUATERNION q;
D3DXQuaternionRotationYawPitchRoll( &q, DEG2RAD(124), DEG2RAD(269), DEG2RAD(351) );

float sqx = q.x * q.x;
float sqy = q.y * q.y;
float sqz = q.z * q.z;

float yaw = atan2( 2.f * ( q.y * q.w - q.x * q.z ), 1.f - 2.f * ( sqy + sqz ) );
float pitch = asin( 2.f * ( q.x * q.y + q.z * q.w ) );
float roll = atan2( 2.f * ( q.x * q.w - q.y * q.z ), 1.f - 2.f * ( sqx + sqz ) );

float gimbalLockTest = q.x * q.y + q.z * q.w;
if ( gimbalLockTest > 0.499f )
{
    yaw = 2.f * atan2( q.x, q.w );
    roll = 0;
}
else if ( gimbalLockTest < -0.499f )
{
    yaw = -2.f * atan2( q.x, q.w );
    roll = 0;
}

yaw = RAD2DEG( yaw );
pitch = RAD2DEG( pitch );
roll = RAD2DEG( roll );

CPPUNIT_ASSERT_DOUBLES_EQUAL( 124.f, yaw, 1e-3 );
CPPUNIT_ASSERT_DOUBLES_EQUAL( 264.f, pitch, 1e-3 );
CPPUNIT_ASSERT_DOUBLES_EQUAL( 351.f, roll, 1e-3 );
[/code]
The results I get are: [yaw = 115.00134, pitch = 0.15642539, roll = -90.987679]. If I conjugate the quaternion before calculating the angles, I get the following results: [yaw = 178.03929, pitch = -64.983093, roll = -92.336105]. Could you please help me spot the error? Thanks :) Paksas
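For reference, here is a self-consistent round trip in Python using the standard Tait-Bryan yaw(Z)-pitch(Y)-roll(X) formulas. This is not necessarily the axis convention D3DXQuaternionRotationYawPitchRoll uses (D3DX applies yaw about Y, pitch about X, roll about Z), so it is a sketch of the principle rather than a drop-in fix: the extraction formulas must match the construction convention, and angles only round-trip inside the principal ranges (yaw and roll in (-180, 180], pitch in (-90, 90)) - which is also why inputs like pitch = 269 or roll = 351 can never come back unchanged.

```python
import math

# Standard Tait-Bryan ZYX (yaw-pitch-roll) quaternion round trip.
# NOTE: convention is an assumption here - D3DX uses yaw about Y,
# pitch about X, roll about Z, which is a different axis order.

def quat_from_ypr(yaw, pitch, roll):
    """Build (w, x, y, z) from yaw/pitch/roll given in degrees."""
    cy, sy = math.cos(math.radians(yaw) / 2), math.sin(math.radians(yaw) / 2)
    cp, sp = math.cos(math.radians(pitch) / 2), math.sin(math.radians(pitch) / 2)
    cr, sr = math.cos(math.radians(roll) / 2), math.sin(math.radians(roll) / 2)
    return (cr * cp * cy + sr * sp * sy,
            sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy)

def ypr_from_quat(q):
    """Recover (yaw, pitch, roll) in degrees from (w, x, y, z)."""
    w, x, y, z = q
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    # clamp guards against tiny floating-point overshoot outside [-1, 1]
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return (math.degrees(yaw), math.degrees(pitch), math.degrees(roll))
```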
  12. The triangle doesn't get drawn at all! However, when I disabled the effect and used the fixed pipeline by setting the matrices with the SetTransform methods, everything worked correctly. The vertex structure:
[code]
struct Vertex
{
    D3DXVECTOR3 pos;
    Vertex( const D3DXVECTOR3& _pos ) : pos( _pos ) {}
};
[/code]
...I could just use D3DXVECTOR3 instead, but I was planning to extend it a bit afterwards... As for the pixel shader - well - the lines that get drawn are all drawn in black, and the pixel shader should paint everything red, so it doesn't work either - but I guess that's sort of a given, since the vertex shader doesn't work in the first place. Any clues? I'm clueless and I really appreciate your help :)
  13. Thanks for the advice - I tried checking them all, but it didn't help. The lines are drawn as if I wasn't using an effect at all - which would point to me using pretransformed vertices, as you mentioned, but I'm not. I've tried specifying the FVF flag D3DFVF_XYZ - nothing. You mentioned a vertex buffer being incorrectly filled in. What did you have in mind? I didn't get any errors when I created, locked and unlocked the buffer. Could you elaborate on that one?
  14. I tried using this: output.pos = mul(float4(inPos.x, inPos.y, inPos.z, 1), g_mWorldViewProjection); Didn't change anything :(
  15. With that code, I get the following effect compilation error: "error X3014: incorrect number of arguments to numeric-type constructor"