fire67

Member
  • Content Count

    50
  • Joined

  • Last visited

Community Reputation

497 Neutral

1 Follower

About fire67

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Art
    Business
    Programming

  1. Hi there, I am working on a specific blur effect that involves several behaviours. But first I would like to know what you think of these blurring algorithms in terms of performance and quality: Kawase blur, box blur, and two-pass Gaussian blur. Next are the effects I am currently working on, and I would be glad to have your thoughts on the proper way to achieve them. Here is a schematic view followed by my questions. I would like to blur the content of those 4 spheres using the same offset no matter their position. If I apply the same blur to all objects, the content of distant objects will appear blurrier than that of objects in the foreground, and I want to avoid that. I think the depth map could help, but any details would be welcome. If I blur the whole image and apply the result to the sphere, the white background will bleed onto the sphere shape, and I want to avoid that. I also don't want the blue (3) and yellow (4) spheres to merge with the red (1) and green (2) ones, but I would like the green and red ones to merge. Again, this could be done using depth, and more details on how to do it would be interesting; a sketch of what I have in mind is just below. Any ideas about these questions would be helpful. Thanks a lot.
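To make the depth idea concrete, here is a rough, untested sketch (the names _MainTex, _CameraDepthTexture and _DepthTolerance are placeholders, not from any real project): samples whose depth differs too much from the centre pixel are rejected, which should keep the white background and the far spheres from bleeding into the near ones.

sampler2D _MainTex;              // scene colour
sampler2D _CameraDepthTexture;   // linear depth
float _DepthTolerance;           // max depth gap before a sample is rejected

float4 DepthAwareBlur(float2 uv, float2 texelSize)
{
    float centerDepth = tex2D(_CameraDepthTexture, uv).r;
    float4 sum = 0;
    float weight = 0;
    for (int x = -2; x <= 2; ++x)
    {
        for (int y = -2; y <= 2; ++y)
        {
            float2 offset = float2(x, y) * texelSize;
            // Skip samples that belong to another depth layer so the
            // background does not bleed onto the sphere.
            float sampleDepth = tex2D(_CameraDepthTexture, uv + offset).r;
            if (abs(sampleDepth - centerDepth) < _DepthTolerance)
            {
                sum += tex2D(_MainTex, uv + offset);
                weight += 1.0;
            }
        }
    }
    return sum / max(weight, 1.0);
}

To keep the apparent blur width identical for the near and far spheres, texelSize could additionally be scaled by the centre depth, but I have not tried that yet.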
  2. Hi there, I am currently working on an FX trail effect. Here's the situation: you have a fast-moving object that leaves a trail on another object. Here's how it works for the moment. I have a kind of stamp texture (the object) that is printed onto another texture on the receiver object. When the object is not moving too fast, you can see the trail effect. But the problem appears when the object is moving too fast; then you don't see the trail, only the stamp texture printed at a few points. The issue is quite logical and linked to when the update function is called (I am using Unity). The function is not called often enough, so some positions are missed and the trail breaks up into isolated stamps. A sketch of the fix I am considering is just below.
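Here is the direction I am experimenting with (a rough sketch; _PrevPos, _CurrPos and _Radius are placeholder names): instead of stamping a single point per Update, the stamp shader paints the whole segment between the previous and the current contact position in the receiver's UV space, so no gap can appear however fast the object moves.

// Segment stamp sketch: covers every texel within _Radius of the
// segment [_PrevPos, _CurrPos], both given in the receiver's UV space.
float2 _PrevPos;   // contact position at the previous Update
float2 _CurrPos;   // contact position at the current Update
float  _Radius;    // stamp radius in UV units

float SegmentStamp(float2 uv)
{
    float2 ab = _CurrPos - _PrevPos;
    // Parameter of the closest point on the segment, clamped to [0, 1].
    float t = saturate(dot(uv - _PrevPos, ab) / max(dot(ab, ab), 1e-6));
    float2 closest = _PrevPos + t * ab;
    // 1 at the segment, fading to 0 at the capsule edge.
    return saturate(1.0 - distance(uv, closest) / _Radius);
}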
  3. Thanks for your answer, this is very clever, but I can't use a solution that involves geometry, as I need to be able to change the texture offset along a ribbon mesh while keeping clean stretching.
  4. Hi there! I am trying to achieve a special texture-stretching effect in my shader. Here is how I define my UVs before sampling my texture, nothing really special.

// uv_ST contains the tiling and offset values
uv.xy = uv.xy * uv_ST.xy + uv_ST.zw;

This gives the standard tiling/stretching and offset behaviour when you tile/stretch a clamped texture, as you can see in the image below. First is normal, second is offset, and last is stretched. But I want to avoid the deformation when stretching: I want to keep margins when stretching my texture, or simply cut it in the middle and stretch it. Here is an illustration below, followed by the direction I am exploring. How could I do that inside my shader when defining my UVs before sampling the texture? Thanks a lot!
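In case it helps the discussion, here is the remap I am exploring (a rough sketch; _Margin is a placeholder for the normalized size of the preserved border): the first and last _Margin of the texture always map to a fixed-size band at each end of the quad, and only the middle region stretches, like a one-dimensional 9-slice.

// 1D "9-slice"-style remap sketch. Assumes the margins are small
// enough not to overlap (margin / scale < 0.5).
// x      : coordinate on the quad, in [0, 1]
// scale  : stretch factor (quad width / texture width), >= 1
// margin : preserved border size in texture UV units, e.g. 0.25
float SliceUV(float x, float scale, float margin)
{
    float m = margin / scale;          // border size in quad units
    if (x < m)                         // left border -> [0, margin]
        return x * scale;
    if (x > 1.0 - m)                   // right border -> [1 - margin, 1]
        return 1.0 - (1.0 - x) * scale;
    // middle band: stretch the remaining texture across the rest
    return lerp(margin, 1.0 - margin, (x - m) / (1.0 - 2.0 * m));
}

// Used in place of the plain tiling/offset line, e.g.:
// uv.x = SliceUV(uv.x, uv_ST.x, _Margin);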
  5. Thanks for the answer @Hodgman! :) There is a lot of talk about blue noise these days; this noise type can be used to jitter sample locations, right? Are there other general and efficient jittering techniques (e.g. Poisson sampling), or does this depend on the situation? Are there some very efficient dithering patterns?
  6. Hi there, I've heard a lot about jittering and dithering, but I would like to know more about these techniques. Here are some general questions that might also be useful to others. :) What are the differences between the two? When is each used the most? What's the best way to implement them, any algorithm example? Can we combine them? Thanks a lot!
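To give the "algorithm example" question a starting point, here is a minimal ordered-dithering sketch (my own illustration, not from any particular engine): a tiled 4x4 Bayer matrix supplies a per-pixel threshold, so quantization error turns into a fixed spatial pattern instead of banding. Jittering, by contrast, would perturb the sample position itself before sampling.

// Ordered (Bayer 4x4) dithering sketch: quantizes a value to 0 or 1
// using a repeating threshold pattern instead of plain rounding.
static const float bayer4x4[16] =
{
     0.0 / 16.0,  8.0 / 16.0,  2.0 / 16.0, 10.0 / 16.0,
    12.0 / 16.0,  4.0 / 16.0, 14.0 / 16.0,  6.0 / 16.0,
     3.0 / 16.0, 11.0 / 16.0,  1.0 / 16.0,  9.0 / 16.0,
    15.0 / 16.0,  7.0 / 16.0, 13.0 / 16.0,  5.0 / 16.0
};

float Dither(float value, float2 pixelPos)
{
    // Pick this pixel's threshold from the tiled 4x4 pattern.
    int2 p = int2(pixelPos) % 4;
    float threshold = bayer4x4[p.y * 4 + p.x];
    return value > threshold ? 1.0 : 0.0;
}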
  7. Thank you so much for your answer! Here is a small gif showing the result, with some changes in the code (Poisson, etc.). I'll post it in the coming days.   [attachment=34264:unitypccs.gif]
  8. Does somebody have an idea on how to improve this? I've tried some things without success...
  9. Alright, here is the current result. I think the technique is 100% hacky/unoptimized/not physically based, but it almost works. Maybe somebody could help me improve it? [attachment=34211:result.jpg]

Here is how I do it. I use the distance between the occluder and the receiver to modulate the size of the search width.

for (int i = 0; i < BLOCKER_SEARCH_NUM_SAMPLES; ++i)
{
    float shadowMapDepthInverted = _ShadowMap.SampleLevel(sampler_ShadowMapSampler, uv + poissonDisk[i] * searchWidth, 0);
    // The next two lines work out to zReceiver minus the sampled value,
    // i.e. an occluder-receiver distance estimate, then shaped by A and B.
    shadowMapDepthInverted += 1.0 - zReceiver;
    shadowMapDepthInverted = 1 - shadowMapDepthInverted;
    shadowMapDepthInverted = pow(shadowMapDepthInverted, A) * B;
    shadowMapDepthInverted = min(1.0, shadowMapDepthInverted);
    // Resample with the search width scaled by that distance estimate.
    float shadowMapDepth = _ShadowMap.SampleLevel(sampler_ShadowMapSampler, uv + poissonDisk[i] * searchWidth * shadowMapDepthInverted, 0);
    blockerSum += shadowMapDepth;
    numBlockers++;
}
avgBlockerDepth = blockerSum / numBlockers;
return avgBlockerDepth;
  10. I think I got something by using the receiver depth and the blocker depth. :)
  11. Thank you again for your answer, but I'm having some difficulty seeing how to implement it in the current code. :( For a directional light, the search width should be related to the distance between the occluder and the receiver, no?
  12. Thanks for the answer. I've rebuilt my system, I understand the whole thing better, and I also think I know where the issue comes from. Let's focus on the near plane value, the shadow map, and the zReceiver, which is the projected z coordinate. The shadow map and the zReceiver are calculated in light view space, as you can see in the following screenshot.

[attachment=34199:depth.jpg]

Now let's see how the searchWidth is calculated, with some screenshots at different NEAR_PLANE values.

float searchWidth = LIGHT_SIZE_UV * (zReceiver - NEAR_PLANE) / zReceiver;

In these examples only the NEAR_PLANE value changes; the camera near plane stays at 0.

[attachment=34200:near.jpg]

As you can see, with a NEAR_PLANE value of zero the searchWidth doesn't vary, unlike with the other values. But I think the other values are wrong and don't behave correctly, since I am using a directional light. To try to fix this I used only the zReceiver, without taking the NEAR_PLANE value into account, and the results are not good.

float searchWidth = LIGHT_SIZE_UV * zReceiver;

As you can see in the following screenshots, in the red circles, the width gets bigger as you move the object farther in the shadow map, and it shouldn't.

[attachment=34201:receiver.jpg]

The correct behaviour is that the searchWidth shouldn't be affected by the object's position in the shadow map, only by its height. Hope it makes sense. I would like to know how I could calculate the searchWidth, as it seems my problem comes from its calculation. Maybe the issue comes from the shadow map or the zReceiver, but I have doubts about that. A sketch of what I plan to try next is just below.
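For reference, here is what I plan to try (my own reasoning, not from the NVIDIA sample code): a directional light uses an orthographic projection, so the light rays are parallel and the blocker-search area should be constant in shadow-map UV space; the perspective term (zReceiver - NEAR_PLANE) / zReceiver can simply be dropped. This reuses the LIGHT_SIZE_UV, zReceiver and avgBlockerDepth names from my code above.

// Orthographic (directional) light: no perspective scaling needed,
// the search radius is constant in shadow-map UV space.
float searchWidth = LIGHT_SIZE_UV;

// The penumbra estimate then depends only on the depth gap between
// the receiver and the average blocker, not on where the object
// sits in the shadow map.
float penumbraWidth = (zReceiver - avgBlockerDepth) * LIGHT_SIZE_UV;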
  13. I am trying to implement Percentage-Closer Soft Shadows (PCSS) from NVIDIA inside Unity, but I am facing some issues, I don't know where they come from, and therefore I do not know how to solve them... Here is my current setup. I am using an orthographic camera to calculate my shadow map; here are the different steps and some pseudo-code.

// Set up the camera
_shadowCamera.clearFlags = CameraClearFlags.Depth;
_shadowCamera.orthographic = true;

// Set up a render texture to receive the shadow map from the camera
RenderTexture _shadowTexture = new RenderTexture((int)_shadowMapSize, (int)_shadowMapSize, 16, RenderTextureFormat.Shadowmap, RenderTextureReadWrite.Linear);

// Render the scene using a replacement shader. This is only used to output the depth.
_shadowCamera.SetReplacementShader(_shadowMapShader, "RenderType");
_shadowCamera.Render();

// Set the camera position and matrices
_radius = _bounds;
Vector3 targetPos = _target.transform.position;
Vector3 lightDir = _light.transform.forward;
Quaternion lightRot = _light.transform.rotation;
_shadowCamera.transform.position = targetPos - lightDir * _radius;
_shadowCamera.transform.rotation = lightRot;
_shadowCamera.orthographicSize = _radius;
_shadowCamera.farClipPlane = _radius * 2.0f;
Matrix4x4 shadowViewMatrix = _shadowCamera.worldToCameraMatrix;
Matrix4x4 shadowProjectionMatrix = GL.GetGPUProjectionMatrix(_shadowCamera.projectionMatrix, false);
Matrix4x4 shadowBiasMatrix = Matrix4x4.identity;
shadowBiasMatrix.SetRow(0, new Vector4(0.5f, 0.0f, 0.0f, 0.5f));
shadowBiasMatrix.SetRow(1, new Vector4(0.0f, 0.5f, 0.0f, 0.5f));
shadowBiasMatrix.SetRow(2, new Vector4(0.0f, 0.0f, 1.0f, 0.0f));
shadowBiasMatrix.SetRow(3, new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
_shadowMatrix = shadowBiasMatrix * shadowProjectionMatrix * shadowViewMatrix;

// Transfer data to the shader
_material.SetMatrix("_ShadowMatrix", _shadowMatrix);
_material.SetTexture("_ShadowTexture", _shadowTexture);
_material.SetTexture("u_PointSampler", _pointSampler);
_material.SetFloat("u_NearPlane", _shadowCamera.nearClipPlane);
_material.SetFloat("u_LightWorldSize", _lightWorldSize);
_material.SetFloat("u_LightFrustrumWidth", _lightFrustrumWidth);

In my shader, I am simply doing the blocker-search part; here is some pseudo-code as well. Nothing really different from the NVIDIA code.

#define BLOCKER_SEARCH_NUM_SAMPLES 16
#define NEAR_PLANE u_NearPlane
#define LIGHT_WORLD_SIZE u_LightWorldSize
#define LIGHT_FRUSTUM_WIDTH u_LightFrustrumWidth
#define LIGHT_SIZE_UV (LIGHT_WORLD_SIZE / LIGHT_FRUSTUM_WIDTH)

uniform Texture2D _ShadowTexture;
uniform SamplerComparisonState sampler_ShadowTexture;
uniform Texture2D u_PointSampler;
uniform SamplerState sampleru_PointSampler;

half4 coords = mul(_ShadowMatrix, float4(worldPos.xyz, 1.f));
float2 uv = coords.xy;
float zReceiver = coords.z;
float searchWidth = LIGHT_SIZE_UV * (zReceiver - NEAR_PLANE) / zReceiver;
float blockerSum = u_PointSampler.Sample(sampleru_PointSampler, float2(0, 0)).a;
float numBlockers = 0;
for (int i = 0; i < BLOCKER_SEARCH_NUM_SAMPLES; ++i)
{
    float shadowMapDepth = _ShadowTexture.Sample(sampleru_PointSampler, uv.xy + poissonDisk[i] * searchWidth).r;
    if (shadowMapDepth < zReceiver)
    {
        blockerSum += shadowMapDepth;
        numBlockers++;
    }
}
float avgBlockerDepth = blockerSum / numBlockers;
return avgBlockerDepth;

Here is an example of my issue. As you can see, the shadowing on the right seems correct, but if you move the cylinder, as shown on the left, the penumbra is not computed correctly. As I said, I don't know what I am doing wrong; I suppose it comes from the matrix, or maybe the depth, but there might be other problems. Any help is welcome. Thanks!
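One thing worth flagging while re-reading the blocker search above (an observation of mine; if I remember right the NVIDIA sample handles this case with an early-out): when no sample passes the shadowMapDepth < zReceiver test, numBlockers stays at 0 and the division produces NaN. A guard at the end avoids that:

// Guard against the no-blocker case before averaging: numBlockers
// can be 0 here, and dividing by it yields NaN.
if (numBlockers < 1)
    return -1.0; // sentinel: no blockers, caller can skip the penumbra estimate
float avgBlockerDepth = blockerSum / numBlockers;
return avgBlockerDepth;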
  14. I've added the refraction vector calculation, which gives interesting results.

float cosine = dot(viewDir, worldNormal);
float sine = sqrt(1 - cosine * cosine);
float sine2 = (_IOR * sine);
float cosine2 = sqrt(1 - sine2 * sine2);
float3 x = -worldNormal;
float3 y = normalize(cross(cross(viewDir, worldNormal), worldNormal));
float3 refractedW = x * cosine2 + y * sine2;

But I am having some issues at grazing angles and I don't know how to get rid of them. Here is an image showing the issue with a high parallax scale. Is there any way to get rid of that? Thanks a lot!
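Partly answering myself with an untested guess: at grazing angles sine approaches 1, so with _IOR > 1 the term 1 - sine2 * sine2 goes negative and sqrt returns NaN, which would explain the artefacts. Clamping that term, or treating the range as total internal reflection, might be the fix (sketch reusing the variables from my snippet above):

// Grazing-angle guard sketch: when sine2 > 1 there is no real
// refracted direction (total internal reflection) and
// sqrt(1 - sine2 * sine2) would be NaN.
float k = 1.0 - sine2 * sine2;
float3 refractedW;
if (k < 0.0)
{
    // No refraction possible: fall back to the reflection direction.
    refractedW = reflect(-viewDir, worldNormal);
}
else
{
    refractedW = x * sqrt(k) + y * sine2;
}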
  15. Any news about this?