About mister345


  1. While I am running my DirectX 11 engine in windowed mode, I can use the Snipping Tool to get a screen cap of it. However, if I run it in full screen, hit the Print Screen button, and try to paste into Paint or Photoshop, there's nothing there - Print Screen isn't capturing the image for some reason, and I can't use the Snipping Tool to get the capture while it's in full screen either. Is there some setting or something that's not allowing Windows to detect the game while it's in full screen? It's very important to get these captures for my portfolio, and if I capture in windowed mode, it looks really bad and low res. Here's a link to the D3D class in my engine for reference. Any help would be much appreciated, thanks!
  2. When I switched to front face culling, it introduced new gaps in the shadows, so I had to turn off culling completely - that seemed to improve things, but with a 10 frame drop! For bias, I found these values online; they seem to be based on Frank Luna:

RasterizerState Depth
{
    DepthBias = 10000;
    DepthBiasClamp = 0.0f;
    SlopeScaledDepthBias = 1.0f;
};

I set the bias to the above numbers (they were all 0 by default), but honestly I can't tell if it makes any difference.
  3. Thanks, great tips Turanzkij. Quick questions: 1) I see SlopeScaledDepthBias is currently set to 0 in my engine - what value do you suggest I set? 2) Will changing it cause problems with the rendering of the other objects in the scene? 3) Can I change SlopeScaledDepthBias dynamically (set it to one value before I draw the shadows and back to 0 after I draw them)? 4) For reversed culling, I should turn off culling right before I draw the shadows to the depth map, and then turn culling on again afterwards - is that correct?
  4. Hi, so I imported some new models into my engine, and some of them show up with ugly seams or dark patches, while others look perfect (see pictures). I'm using the same shader for all of them, and all of these models have had custom UV-mapped textures created for them, which should wrap fully around them instead of using tiled textures. I have no idea why the custom UV-mapped textures are mapping correctly on some but not others. Possible causes: 1. Am I using the wrong SamplerState to sample the textures? (I'm using SampleTypeClamp.) 2. The original models had quads and were UV mapped by an artist in that state, then I reimported them into 3ds Max and re-exported them as all triangles (my engine's object loader only accepts triangles). 3. Could the original model UVs just be wrong? Please let me know if somebody can help identify this problem; I'm completely baffled. Thanks. For reference, here's a link to the shader being used to draw the problematic models, and the shader code below.

/////////////
// DEFINES //
/////////////
#define NUM_LIGHTS 3

/////////////
// GLOBALS //
/////////////
// texture resource that will be used for rendering the texture on the model
Texture2D shaderTextures[7]; // NOTE - we only use one render target for drawing all the shadows here!
// allows modifying how pixels are written to the polygon face, for example choosing which to draw.
SamplerState SampleType;

///////////////////
// SAMPLE STATES //
///////////////////
SamplerState SampleTypeClamp : register(s0);
SamplerState SampleTypeWrap  : register(s1);

///////////////////
// TYPEDEFS //
///////////////////
// This structure is used to describe the lights properties
struct LightTemplate_PS
{
    int type;
    float3 padding;
    float4 diffuseColor;
    float3 lightDirection; //(lookat?) //@TODO pass from VS BUFFER?
    float specularPower;
    float4 specularColor;
};

//////////////////////
// CONSTANT BUFFERS //
//////////////////////
cbuffer SceneLightBuffer : register(b0)
{
    float4 cb_ambientColor;
    LightTemplate_PS cb_lights[NUM_LIGHTS];
}

// value set here will be between 0 and 1.
cbuffer TranslationBuffer : register(b1)
{
    float textureTranslation; //@NOTE - hlsl automatically pads floats for you
};

// for alpha blending textures
cbuffer TransparentBuffer : register(b2)
{
    float blendAmount;
};

struct PixelInputType
{
    float4 vertex_ModelSpace : SV_POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float3 binormal : BINORMAL;
    float3 viewDirection : TEXCOORD1;
    float3 lightPos_LS[NUM_LIGHTS] : TEXCOORD2;
    float4 vertex_ScrnSpace : TEXCOORD5;
};

float4 main(PixelInputType input) : SV_TARGET
{
    bool bInsideSpotlight = true;
    float2 projectTexCoord;
    float depthValue;
    float lightDepthValue;
    float4 textureColor;
    float gamma = 7.f;

    /////////////////// NORMAL MAPPING //////////////////
    float4 bumpMap = shaderTextures[4].Sample(SampleType, input.tex);

    // Sample the shadow value from the shadow texture using the sampler at the projected texture coordinate location.
    projectTexCoord.x =  input.vertex_ScrnSpace.x / input.vertex_ScrnSpace.w / 2.0f + 0.5f;
    projectTexCoord.y = -input.vertex_ScrnSpace.y / input.vertex_ScrnSpace.w / 2.0f + 0.5f;
    float shadowValue = shaderTextures[6].Sample(SampleTypeClamp, projectTexCoord).r;

    // Expand the range of the normal value from (0, +1) to (-1, +1).
    bumpMap = (bumpMap * 2.0f) - 1.0f;

    // Change the COORDINATE BASIS of the normal into the space represented by basis vectors tangent, binormal, and normal!
    float3 bumpNormal = normalize((bumpMap.x * input.tangent) + (bumpMap.y * input.binormal) + (bumpMap.z * input.normal));

    //////////////// AMBIENT BASE COLOR ////////////////
    // Set the default output color to the ambient light value for all pixels.
    float4 lightColor = cb_ambientColor * saturate(dot(bumpNormal, input.normal) + .2);

    // Calculate the amount of light on this pixel.
    for (int i = 0; i < NUM_LIGHTS; ++i)
    {
        float lightIntensity = saturate(dot(bumpNormal, normalize(input.lightPos_LS[i])));
        if (lightIntensity > 0.0f)
        {
            lightColor += (cb_lights[i].diffuseColor * lightIntensity) * 0.3;
        }
    }

    // Saturate the final light color.
    lightColor = saturate(lightColor);

    // TEXTURE ANIMATION - Sample pixel color from texture at this texture coordinate location.
    input.tex.x += textureTranslation;

    // BLENDING
    float4 color1 = shaderTextures[0].Sample(SampleTypeWrap, input.tex);
    float4 color2 = shaderTextures[1].Sample(SampleTypeWrap, input.tex);
    float4 alphaValue = shaderTextures[3].Sample(SampleTypeWrap, input.tex);
    //textureColor = saturate((alphaValue * color1) + ((1.0f - alphaValue) * color2));
    textureColor = color1;

    // Combine the light and texture color.
    float4 finalColor = lightColor * textureColor * shadowValue * gamma;

    //if (lightColor.x == 0)
    //{
    //    finalColor = cb_ambientColor * saturate(dot(bumpNormal, input.normal) + .2) * textureColor;
    //}

    return finalColor;
}
  5. My shadows (drawn using a depth buffer which I later sample from in the shadow texture) seem to be detaching slightly from their objects. I looked this up and I think it's "peter panning"; people were saying you have to change the depth offset, but I'm not sure how to do that. Is there a fast way I can tweak the code to fix this? Should I change the "bias" perhaps, or something else? Thanks. Here is the code for the shadows, for reference:

// ENTRY POINT
float4 main(PixelInputType input) : SV_TARGET
{
    float2 projectTexCoord;
    float depthValue;
    float lightDepthValue;
    //float4 lightColor = float4(0,0,0,0);
    float4 lightColor = float4(0.05, 0.05, 0.05, 1);

    // Set the bias value for fixing the floating point precision issues.
    float bias = 0.001f;

    //////////////// SHADOWING LOOP ////////////////
    for (int i = 0; i < NUM_LIGHTS; ++i)
    {
        // Calculate the projected texture coordinates.
        projectTexCoord.x =  input.vertex_ProjLightSpace[i].x / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;
        projectTexCoord.y = -input.vertex_ProjLightSpace[i].y / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;

        if ((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
        {
            // Sample the shadow map depth value from the depth texture using the sampler at the projected texture coordinate location.
            depthValue = depthTextures[i].Sample(SampleTypeClamp, projectTexCoord).r;

            // Calculate the depth of the light.
            lightDepthValue = input.vertex_ProjLightSpace[i].z / input.vertex_ProjLightSpace[i].w;

            // Subtract the bias from the lightDepthValue.
            lightDepthValue = lightDepthValue - bias;

            // Compare the depth of the shadow map value and the depth of the light to determine whether
            // to shadow or to light this pixel. If the light is in front of the object then light the pixel,
            // if not then shadow this pixel since an object (occluder) is casting a shadow on it.
            if (lightDepthValue < depthValue)
            {
                // Calculate the amount of light on this pixel.
                float lightIntensity = saturate(dot(input.normal, normalize(input.lightPos_LS[i])));
                if (lightIntensity > 0.0f)
                {
                    float spotlightIntensity = CalculateSpotLightIntensity(input.lightPos_LS[i], cb_lights[i].lightDirection, input.normal);
                    //lightColor += (float4(1.0f, 1.0f, 1.0f, 1.0f) * lightIntensity) * .3f; // spotlight
                    lightColor += float4(1.0f, 1.0f, 1.0f, 1.0f) /** lightIntensity*/ * spotlightIntensity * .3f; // spotlight
                }
            }
        }
    }

    return saturate(lightColor);
}
  6. So the var with the SV_POSITION semantic gets OVERWRITTEN by the GPU, but any other tagged variable will be whatever I assign it - is that correct? And other than the PixelInput struct declaration being exactly the same in the vertex and pixel shaders, there is no requirement for the members of the struct to be in a particular order, right? Also, do these user semantics like TEXCOORD5 need to be defined anywhere else, the way the vertex input semantics are in the shader class? Or can they just be named whatever I want? Thank you so much.
  7. I've been trying for hours now to find the cause of this problem. My vertex shader is passing the wrong values to the pixel shader, and I think it might be my input/output semantics. *This shader takes in a prerendered texture with shadows in it, based on Rastertek Tutorial 42, so the light/dark values of the shadows are already encoded in the blurred shadow texture, sampled from Texture2D shaderTextures[7] at index 6 in the pixel shader.

struct VertexInputType
{
    float4 vertex_ModelSpace : POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float3 binormal : BINORMAL;
};

struct PixelInputType
{
    float4 vertex_ModelSpace : SV_POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float3 binormal : BINORMAL;
    float3 viewDirection : TEXCOORD1;
    float3 lightPos_LS[NUM_LIGHTS] : TEXCOORD2;
    float4 vertex_ScrnSpace : TEXCOORD5;
};

Specifically, PixelInputType is causing a ton of trouble - if I swap the semantics "SV_POSITION" on the first variable and "TEXCOORD5" on the last one, it gives completely different values to the pixel shader even though all the calculations are exactly the same. The main issue is that I have a spotlight effect in the pixel shader that takes the dot product of the light-to-surface vector with the light direction to give it a falloff, which was previously working, but in this upgraded version of the shader it seems to be giving completely wrong values. (See the full vertex shader code below.) Is there some weird thing about pixel shader semantics that I'm missing? Does the order of the variables in the struct matter? I've also attached the full shader files for reference. Any insight would be much appreciated, thanks.
PixelInputType main(VertexInputType input)
{
    // The final output for the vertex shader
    PixelInputType output;

    // Pass through tex coordinates untouched
    output.tex = input.tex;

    // Pre-calculate vertex position in world space
    input.vertex_ModelSpace.w = 1.0f;

    // Calculate the position of the vertex against the world, view, and projection matrices.
    output.vertex_ModelSpace = mul(input.vertex_ModelSpace, cb_worldMatrix);
    output.vertex_ModelSpace = mul(output.vertex_ModelSpace, cb_viewMatrix);
    output.vertex_ModelSpace = mul(output.vertex_ModelSpace, cb_projectionMatrix);

    // Store the position of the vertex as viewed by the camera in a separate variable.
    output.vertex_ScrnSpace = output.vertex_ModelSpace;

    // Bring normal, tangent, and binormal into world space
    output.normal   = normalize(mul(input.normal,   (float3x3)cb_worldMatrix));
    output.tangent  = normalize(mul(input.tangent,  (float3x3)cb_worldMatrix));
    output.binormal = normalize(mul(input.binormal, (float3x3)cb_worldMatrix));

    // Store worldspace view direction for specular calculations
    float4 vertex_WS = mul(input.vertex_ModelSpace, cb_worldMatrix);
    output.viewDirection = normalize(cb_camPosition_WS.xyz - vertex_WS.xyz);

    for (int i = 0; i < NUM_LIGHTS; ++i)
    {
        // Calculate light position relative to the vertex in WORLD SPACE
        output.lightPos_LS[i] = cb_lights[i].lightPosition_WS - vertex_WS.xyz;
    }

    return output;
}

Repo link: Light_SoftShadows_ps.hlsl Light_SoftShadows_vs.hlsl
  8. Of course not - combining any two of his tutorials is so unimaginably complex that copy-pasting would never work. Even if I copy-paste, I'm forced to then go back through it line by line and digest it just to get it working. I wish I could copy-paste them! 1. Re: variance, if I blur each shadow in shadow-map space, wouldn't it be even slower? The blur effect takes 4 render targets and there are 3 shadows, so wouldn't it be better to blur them all to one texture after the shadowing is done, as I am doing now? 2. Regarding PCF filtering, I tried doing that but it did absolutely nothing - here is the topic regarding that. Maybe I didn't sample the shadows right? Could you please explain how you "take multiple taps of the shadow map from neighboring texels"? How do you know which texels are near a given texel? Do you just apply an offset to the texCoord value inside the pixel shader?
  9. Hi, I'm on Rastertek series 42, soft shadows, which uses a blur shader and runs extremely slow. He obnoxiously states that there are many ways to optimize his blur shader, but gives you no idea how to do it. The way he does it is:
1. Project the objects in the scene to a render target using the depth shader.
2. Draw black and white shadows on another render target using those depth textures.
3. Blur the black/white shadow texture produced in step 2 by a) rendering it to a smaller texture, b) vertically/horizontally blurring that texture, c) rendering it back to a bigger texture again.
4. Send the blurred shadow texture into the final shader, which samples its black/white values to determine light intensity.
So this uses a ton of render textures, and I just added more than one light, which multiplies the render textures required. Is there any easy way I can optimize the super-expensive blur shader that wouldn't require a whole new complicated system - like combining any of these render textures into one, for example? If you know of any easy way not requiring too many changes, please let me know, as I already had a really hard time understanding the way this works, so a super complicated change would be beyond my capacity. Thanks. *For reference, here is my repo, in which I have simplified his tutorial and added an additional light.
  10. Oh I see. I've never used VS that way, so I don't know how. I always just make one project per solution. Is it still possible to put them in a separate project in the same solution now that they are already dumped into the main project? How would I do so, if you don't mind my asking? Also, if the physics files are in a separate project but my main project files include/reference them, will it still compile and run?
  11. Hi guys, so I have about 200 files isolated in their own folder (physics code) in my Visual Studio project that I never touch. They might as well be a separate library; I just keep them as source files in case I need to look at them or step through them, but I will never actually edit them, so there's no need to ever build them. However, when I need to rebuild the entire solution because I changed the other files, all 200 of these files get rebuilt too, and it takes a really long time. If I set their properties -> Exclude From Build and then rebuild, it's no good, because all the previously built objects get deleted automatically, so the build fails. So how do I make the built versions of the 200+ files in the physics directory stay where they are so I never have to rebuild them, but do a normal rebuild for everything else? Any easy answers to this? The simpler the better, as I am a noob at Visual Studio settings. Thanks.
  12. Oh, so is it just doing view direction - eye position? The variables are really confusingly named!
  13. The lights were drawn properly in the version that didn't throw the assertion (used for shadowing render textures, Rastertek Tutorial 42). If that's the case, what valid light matrix could they have produced with an eye direction of (0,0,0)?
  14. Hi, can someone please explain why this is giving an EyePosition != 0 assertion failure?

_lightBufferVS->viewMatrix = DirectX::XMMatrixLookAtLH(XMLoadFloat3(&_lightBufferVS->position), XMLoadFloat3(&_lookAt), XMLoadFloat3(&up));

It looks like DirectX doesn't want the 2nd parameter to be a zero vector, per the assertion, but I passed a zero vector with this exact same code in another program and it ran just fine. Here is the version of the code that worked - note that the XMLoadFloat3(&m_lookAt) parameter value is (0,0,0) at runtime (I debugged it), yet it throws no exceptions:

m_viewMatrix = DirectX::XMMatrixLookAtLH(XMLoadFloat3(&m_position), XMLoadFloat3(&m_lookAt), XMLoadFloat3(&up));

Here is the repo for the broken code (see LightClass), and here is the repo with the alternative version of the code that is working with a value of (0,0,0) for the second parameter.