Keba

Member

  • Content Count: 153
  • Joined
  • Last visited

Community Reputation: 170 Neutral

About Keba
  • Rank: Member
  1. Are these parameters the same in RenderMonkey?

     texture Noise_Tex;
     sampler Noise = sampler_state
     {
         Texture   = (Noise_Tex);
         ADDRESSU  = WRAP;
         ADDRESSV  = WRAP;
         ADDRESSW  = WRAP;
         MAGFILTER = LINEAR;
         MINFILTER = LINEAR;
         MIPFILTER = LINEAR;
     };

     In your engine picture it looks like wrap, but in RenderMonkey it doesn't. Also: how are you rendering the cube in your engine? Using an index buffer with only 8 vertices? Then you could get that "stretch" result, because the "walls" of the cube share UV coordinates with the "ceiling" and don't get unique UV coordinates.
  2. Here is something nice. It's quite old and you won't find any code in it (not much, anyway), but you will get the basic idea. It's worth reading even if you only understand 50% of it ;)
  3. Slinky730 is right; however, there are other applications that have done this. This is a good start, though it's done in WTL and kind of advanced. One of the samples has a similar docking-window style to the one seen in the picture above.
  4. Quote: "You first iterate through the bones and generate their inverse matrix to the original pose." Is this the T-pose, or bind pose, you mean? And the matrix we are talking about, is it the transformation matrix for that pose? If I remember correctly, that inverted matrix will transform the vertices in the T-pose/bind pose to local space?

     Quote: "Every frame, you calculate the transform matrix for each bone (from the origin you just moved them all to), transform your vertices, and draw." Here I get confused about the "offset" matrix, the matrix which is responsible for transforming my vertices to the origin. That's the one I want.

     Quote: "In your example, basically do the last step you're confused about first. Imagine your lower arm. Transform all your lower-arm vertices to the origin first; then, when you want to position it, all you have to do is calculate the forward transformation and boom, you're there." This may sound stupid, but what is the "forward transformation"? So let's see if I get this straight: I should move my vertices, say for the lower arm, to the origin, perform the rotation, then apply an offset (the offset to the parent) and then multiply with the parent transformation matrix?

     Quote: "Right now I'm working on ditching keyframes altogether and converting my skeleton to follow procedurally generated target points." Sounds interesting :D ... first things first, though: gotta catch this "bug" ;) ... gotta catch them all. EDIT: ahha! After reading RSN I now understand what you mean; I'll have to test it later. [Edited by - Keba on November 8, 2006 2:02:39 AM]
  5. Hello, I have this problem with keyframed skeletal animation. First I compute the (in Maya) local transformation matrix for each bone: L = R * T, where R is the rotation (from a quaternion) and T is the translation of the bone (relative to its parent bone). Here I could also add the scaling, but for now I set it to {1,1,1}, so I leave it out. This is done for all bones in all keyframes of the exported animations. The application then retrieves the two keyframes that bracket the current time and interpolates between them. The next step is to update the hierarchy: C = L * P, where C is the final combined matrix of the bone in the hierarchy, L is the local transformation, and P is the parent transformation. So far so good: the hierarchy is updated and animates when rendering. Now it's time for skinning. When skinning the model onto the hierarchy we need an offset matrix, and here the questions arise. How do I compute this offset matrix? Is it:
     * the inverted transformation matrix of the bone in T-pose?
     * Advanced Animation with DirectX (Jim Adams): "you can compute it yourself by first updating the frame hierarchy and then inverting each frame's combined transformation". This doesn't make sense to me, since A * A^-1 = I. The book suggests that one should calculate the combined matrix (A), invert it (A^-1), and then multiply the two.
  6. Quote:

     ///////////////////////////////////////
     g_pd3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET, 0xffffffff, 1.0f, 0 );
     g_pd3dDevice->SetRenderTarget( 0, pBackBuffer );
     g_pd3dDevice->SetTransform(D3DTS_PROJECTION, &proj);

     At the second rendering you clear the render target BEFORE switching it, which means the first render target (the one you rendered to the first time) is cleared to white, and only then is the second render target activated. Try this instead:

     g_pd3dDevice->SetRenderTarget( 0, pBackBuffer );
     g_pd3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET, 0xffffffff, 1.0f, 0 );
  7. So, I implemented a variance shadow mapping algorithm but am not quite satisfied, as you can see in this picture. In the left corner one can see the shadow map, which at the moment is not blurred/filtered. In the picture, the box nearest the plane should have a "stronger" shadow, and the box above it a weaker one. The light is placed at the origin of the plane, pointing straight down. Here is my HLSL shader (it's just a rewrite of the "famous" one from the paper):

[Source]
float4x4 g_fWorldViewProjection : MATRIX_WORLDVIEWPROJECTION;
float4x4 g_fWorldView           : MATRIX_WORLDVIEW;
float4x4 g_fWorld               : MATRIX_WORLD;
float4x4 g_fLightView           : MATRIX_LIGHTVIEW;
float4x4 g_fLightViewInverse    : MATRIX_LIGHTVIEWINVERSE;
float4x4 g_fLightProjection     : MATRIX_LIGHTPROJECTION;
float3   g_fLightPosition       : LIGHT_POSITION;
float4   g_fLightDirection      : LIGHT_DIRECTION;
float    g_fLightAngleAttenBegin : LIGHT_ANGLE_ATTENBEGIN;
float    g_fLightAngleAttenEnd   : LIGHT_ANGLE_ATTENEND;
float    g_fLightAttenBegin      : LIGHT_ATTEN_BEGIN;
float    g_fLightAttenEnd        : LIGHT_ATTEN_END;
float    g_fLightVSMEpsilon      : LIGHT_VSMEPSILON;  // = 0.0001;
float    g_fLightShadowBias      : LIGHT_SHADOWBIAS;  // = 0.001f;
float4   g_fLightColor   = {1.0f, 1.0f, 1.0f, 1.0f};
float4   g_fDiffuseColor = {1.0f, 1.0f, 1.0f, 1.0f};

static float2 g_fLightAngleAtten    = float2(g_fLightAngleAttenBegin, g_fLightAngleAttenEnd) * 0.5;
static float2 g_fCosLightAngleAtten = cos(g_fLightAngleAtten);

///////////////////////////////////////////////////////////////////////////////
// Textures/samplers and Render Targets
///////////////////////////////////////////////////////////////////////////////
// texture ShadowRenderTarget
sampler2D Sampler0 : s0;

//////////////////////////////////////////////////////////////////////////////
// Depth Pass Shaders
//////////////////////////////////////////////////////////////////////////////
struct VsInput_VSMDepth
{
    float4 fPosition : POSITION;
};

struct VsOutput_VSMDepth
{
    float4 fScreenPosition : POSITION;
    float3 fLightVector    : TEXCOORD0;
};

VsOutput_VSMDepth Vs_VSMDepth(VsInput_VSMDepth InData,
                              uniform float4x4 ShadowView,
                              uniform float4x4 ShadowViewProjection)
{
    VsOutput_VSMDepth Out = (VsOutput_VSMDepth)0;
    float4 WorldPos     = mul(InData.fPosition, g_fWorld);
    Out.fScreenPosition = mul(WorldPos, ShadowViewProjection);
    Out.fLightVector    = mul(WorldPos, ShadowView).xyz;
    return Out;
}

float4 Ps_VSMDepth(VsOutput_VSMDepth InData) : COLOR
{
    // Work out the depth of this fragment from the light, normalized to 0->1
    float2 fDepth;
    fDepth.x = length(InData.fLightVector) / g_fLightAttenEnd;
    fDepth.y = fDepth.x * fDepth.x;
    return fDepth.xyxy;
}

//////////////////////////////////////////////////////////////////////////////
// Lighting Pass Shaders
//////////////////////////////////////////////////////////////////////////////
struct VsInput_VSMLight
{
    float4 fPosition : POSITION;
    float3 fNormal   : NORMAL;
};

struct VsOutput_VSMLight
{
    float4 fScreenPosition : POSITION;
    float3 fWorldPosition  : TEXCOORD0;
    float3 fWorldNormal    : TEXCOORD1;
};

VsOutput_VSMLight Vs_VSMLight(VsInput_VSMLight InData, uniform float4x4 Shadow)
{
    VsOutput_VSMLight Out = (VsOutput_VSMLight)0;
    Out.fScreenPosition = mul(InData.fPosition, g_fWorldViewProjection);
    Out.fWorldPosition  = mul(InData.fPosition, g_fWorld).xyz;
    Out.fWorldNormal    = mul(float4(InData.fNormal, 0), g_fWorld).xyz;
    return Out;
}

float4 Ps_VSMLight(VsOutput_VSMLight InData, uniform float4x4 LightViewProjection) : COLOR
{
    // Sum the contributions from all lights
    float3 LitColor = float3(0, 0, 0);

    // Light Shader:
    float3 fLightContribution = {0, 0, 0};
    float3 fDirToLight = {0, 0, 0};
    float  fDistToLight = 0;
    float  NdotL = 0;
    {
        // Unnormalized light vector
        fDirToLight  = g_fLightPosition - InData.fWorldPosition;
        fDistToLight = length(fDirToLight);
        float fAttenAmount = clamp((fDistToLight - g_fLightAttenBegin) /
                                   (g_fLightAttenEnd - g_fLightAttenBegin), 0.0, 1.0);

        // Radial attenuation
        fDirToLight = normalize(fDirToLight);
        float2 fCosAngleAtten = g_fCosLightAngleAtten;
        float CosAngle = dot(-fDirToLight, g_fLightDirection);
        float fAngleAttenAmount = clamp((CosAngle - fCosAngleAtten.x) /
                                        (fCosAngleAtten.y - fCosAngleAtten.x), 0.0, 1.0);

        // Compose the light shader outputs
        fLightContribution = (1.0 - fAttenAmount) * (1.0 - fAngleAttenAmount) * g_fLightColor;
        NdotL = dot(InData.fWorldNormal, fDirToLight);
    }

    // Variance Shadow Mapping:
    {
        // Transform the surface into light space and project
        // NB: Could be done in the vertex shader, but doing it here keeps the "light
        // shader" abstraction and doesn't limit # of shadowed lights.
        float4 fSurfTex = mul(float4(InData.fWorldPosition, 1.0), LightViewProjection);
        fSurfTex = fSurfTex / fSurfTex.w;

        // Rescale viewport to be [0,1] (texture coordinate space)
        float2 fShadowTex = fSurfTex.xy * float2(0.5, -0.5) + 0.5;
        float4 fMoments = tex2D(Sampler0, fShadowTex);

        // Rescale light distance and check if we're in shadow
        float fRescaledDistToLight = fDistToLight / g_fLightAttenEnd;
        fRescaledDistToLight = fRescaledDistToLight - g_fLightShadowBias;
        float fLitFactor = (fRescaledDistToLight <= fMoments.x);

        // Variance shadow mapping
        float E_x2 = fMoments.y;
        float Ex_2 = fMoments.x * fMoments.x;
        float fVariance = min(max(E_x2 - Ex_2, 0.0) + g_fLightVSMEpsilon, 1.0);
        float m_d = (fMoments.x - fRescaledDistToLight);
        float p = fVariance / (fVariance + m_d * m_d);

        // Adjust the light color based on the shadow attenuation
        fLightContribution *= max(fLitFactor, p);
    }

    // Evaluate basic diffuse lighting
    float fSelfshadow = clamp(NdotL * 2.0, 0.0, 1.0);
    float3 DirectContribution = g_fDiffuseColor * NdotL;
    LitColor += fLightContribution * fSelfshadow * DirectContribution;

    return float4(LitColor, 1.0);
}

//////////////////////////////////////////////////////////////////////////////
// Techniques
//////////////////////////////////////////////////////////////////////////////
technique ShadowMapRender
{
    pass p0
    {
        CullMode     = None;
        ZEnable      = true;
        ZWriteEnable = true;
        ZFunc        = LessEqual;
        VertexShader = compile vs_3_0 Vs_VSMDepth(g_fLightView, mul(g_fLightView, g_fLightProjection));
        PixelShader  = compile ps_3_0 Ps_VSMDepth();
    }
}

technique SceneRender
{
    pass p0
    {
        ZEnable      = true;
        ZWriteEnable = true;
        ZFunc        = LessEqual;
        CullMode     = None;
        AddressU[0]  = CLAMP;
        AddressV[0]  = CLAMP;
        VertexShader = compile vs_3_0 Vs_VSMLight(g_fLightViewInverse);
        PixelShader  = compile ps_3_0 Ps_VSMLight(mul(g_fLightView, g_fLightProjection));
    }
}
[/Source]

It seems like the 'p' variable is getting the "wrong" value; it looks the same as the shadow map, which IMO is wrong. And another thing:

[Source]
float E_x2 = fMoments.y;
float Ex_2 = fMoments.x * fMoments.x;
[/Source]

fMoments is sampled from the shadow map; in that shadow map the x value is the distance from the light to a fragment in world space (scaled by some nice value), and the y value is the squared x value. Wouldn't the above code be the same as:

[Source]
float E_x2 = fMoments.y;
float Ex_2 = fMoments.y;
[/Source]

E_x2 and Ex_2 are nearly the same when debugging the shader, which doesn't really make sense to me. Could someone explain this?

Anyway, the creation of the shadow map (the depth-pass shaders above) seems fine, as one can see in the picture: the cube closest to the source gets a stronger purple color, since the distance is smaller, and the cube at a longer distance gets a smaller purple value, and so on. The projection of the currently rendered pixel into the shadow map is also correct (I think), since the shadows appear where they should; if the projection were wrong, the shadows would show up in random places. The spotlight code also works, as one can see in the picture: the plane is lit up with a round "spotlightish" light. So I have concentrated on this part of the code:

[Source]
float4 fMoments = tex2D(Sampler0, fShadowTex);

// Rescale light distance and check if we're in shadow
float fRescaledDistToLight = fDistToLight / g_fLightAttenEnd;
fRescaledDistToLight = fRescaledDistToLight - g_fLightShadowBias;
float fLitFactor = (fRescaledDistToLight <= fMoments.x);

// Variance shadow mapping
float E_x2 = fMoments.y;
float Ex_2 = fMoments.x * fMoments.x;
float fVariance = min(max(E_x2 - Ex_2, 0.0) + g_fLightVSMEpsilon, 1.0);
float m_d = (fMoments.x - fRescaledDistToLight);
float p = fVariance / (fVariance + m_d * m_d);

// Adjust the light color based on the shadow attenuation
fLightContribution *= max(fLitFactor, p);
[/Source]

When the distance of the currently rendered pixel in world space is larger than the moments (the distance and squared distance) read from the shadow map, the current pixel is in shadow (fLitFactor), and the value of 'p' is calculated as the "shadow value". Something goes wrong in this calculation, I think; the strength of the shadow is somehow inverted. It looks exactly like the shadow map, as if the shadow calculated in 'p' were the same as the distance values in the shadow map: the geometry closest to the source gets the strongest shadow on the plane, and the geometry farthest from the source gets the weakest shadow. This looks incorrect. What should I do to fix this, and what am I doing wrong?
  8. Keba

    Shadow map problem

    No one can answer this? Come on... it can't be that hard?
  9. So, I implemented a variance shadow mapping algorithm but am not quite satisfied, as you can see in this picture. In the left corner one can see the shadow map, which at the moment is not blurred/filtered. In the picture, the box nearest the plane should have a "stronger" shadow, and the box above it a weaker one. The light is placed at the origin of the plane, pointing straight down.
  10. OK, I'm not sure how to render to a texture. I want to render my scene to a texture and then display the texture on a quad. I have seen examples of this with effects that do it in two passes, but how should I handle the render-target switch? Should I use multiple render targets to create this effect? Or should I create an additional render target and switch between the main render target and the new one? You can't change render target between two passes... right? Something like this:

      // Scene render to texture
      Clear(D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER)
      BeginScene()
      GetRenderTarget(main render target)
      SetRenderTarget(new render target)
      RenderScene();
      SetRenderTarget(main render target);
      EndScene()

      // quad render
      Clear(D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER)
      BeginScene()
      Render quad with texture;
      EndScene()
      Present();

      The above is not possible to do in an effect with two passes. Maybe there is a better solution; my question is: how do I perform render-to-texture the "correct" way?
  11. Keba

    branching in SM3.0 ??

    "bool flag = false; ... if (bool == false) { }" — that won't compile in C or C++, since `bool` is a type name, not a variable (and note that `bool()` value-initializes to false, so a branch on it would never be taken); not sure about shaders, though. Are you sure you didn't mean "if (flag == false) { ... }"?
  12. Hello, I was wondering whether combining Newton Game Dynamics (e.g. ragdoll) with a Microsoft .x file would be possible? And if so, would it be hard to combine them? I haven't used Newton Game Dynamics yet, but how is the ragdoll information stored? In matrices which could be combined with the matrices in my .x-file mesh? Or is it a bit more complicated than that? (Probably.) Maybe this question should be asked in the DX forum, not sure... thanks / Keba
  13. Keba

    Obscure Error in VS

    This is a "DLL exporting problem". Basically it means that you have to export a class of type class 'std::set<_Kty>'. Not sure if the problem is in your application or in the OGRE engine, but here is more information that explains it better: Click. Also search Google for "needs to have dll-interface to be used by clients of" for more info.
  14. Hello, I've got a memory problem in a DLL. I have created my own DLL and linked it into the exe. The situation is like this. In the DLL:

      [SOURCE]
      bool CSomeExportedClass::DoSomething(std::string AStringFromExe)
      {
          m_aStringInExportedClass = AStringFromExe;
          return true;
      }
      [/SOURCE]

      Now, in the exe, the above function is called with an argument:

      [SOURCE]
      CSomeExportedClass* pClass = new CSomeExportedClass;
      pClass->DoSomething("A Test!");
      [/SOURCE]

      This causes a memory problem. I know that a DLL should deallocate only memory it allocates, but what happens with the string I use as an argument; who is allocating/deallocating it (DLL or EXE)? It works fine if I pass the string to CSomeExportedClass as a pointer. I guess that's OK because the memory is not allocated/deallocated then, just passed as a pointer. Hmm, not sure about this; could someone help me? thanks / Keba
  15. Hmm, I'm not sure I understand this question, but if you just want to render some geometry with a texture loaded from disk, then you should use D3DPOOL_MANAGED with SetTexture (not sure if D3DPOOL_MANAGED is the optimal pool; see the SDK docs for that). The SetTexture function sets up a texture for rendering, while the UpdateTexture function is for accessing the texture data. About dynamic textures and locking: you should only lock a texture if you need to change the texture data (change the color of the pixels), and locking a texture should be avoided because it's expensive. If you have a texture on disk which you want to render with, just load it, use SetTexture, write out dynamic vertex and index buffers, and draw it in a batch. I hope that helps.