
dietrich

Member
  • Content Count

    31
  • Joined

  • Last visited

Community Reputation

780 Good

About dietrich

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Art
    Design
    Programming


  1. dietrich

    Environment map seam

    @turanszkij thanks for the advice! I took a look at the texture in the VS graphics debugger, and the mip maps are indeed there. My hardware doesn't support mipmap generation for DXGI_FORMAT_R32G32B32_FLOAT (3 components), but R32G32B32A32_FLOAT seems to work fine. Anyway, it looks like I was worrying too much about the texture being an HDR image, when in fact that had nothing to do with the problem, and didn't pay enough attention to the math behind the mapping.

    Apparently the mapping itself was causing the seam, namely the function atan2(y, x), which has a discontinuity where x is negative and y changes sign. If I output the emx coordinate directly, the discontinuity is clearly visible. I'm guessing that this discontinuity caused the mip level selection to freak out (the texture coordinate derivative becomes very large across the seam) and sample the highest mip, which is a single bluish gray pixel, resulting in a visible seam. That also explains why there was no seam when selecting the mip level manually: SampleLevel bypasses the derivative-based level selection entirely.

    Now the question is what I should do about it, and the first answer that comes to mind is - nothing😄 I believe it is common to use an environment map's mip levels to store prefiltered versions of the map for glossy reflections; besides, I eventually intend to use cubemaps, eliminating the need for atan2. But learning of its limitations was still worth it, I guess😊
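
    In case anyone finds this later and does need to keep the equirectangular lookup, here's a minimal sketch of one common workaround (my own sketch, not code from this thread): compute the gradients from a seam-shifted copy of the coordinate as well, and feed whichever pair is smaller to SampleGrad, so the hardware never sees the atan2 jump. textureEnv and samplerDiffuse are the names from my code below.

        float2 uv = float2(phi / (2.0f * PI), theta / PI);
        // frac() moves the seam by half a texture, so at any given pixel at
        // least one of the two coordinate sets is continuous.
        float2 uvShifted = float2(frac(uv.x + 0.5f), uv.y);
        float2 dx  = ddx(uv),        dy  = ddy(uv);
        float2 dxS = ddx(uvShifted), dyS = ddy(uvShifted);
        // Pick whichever gradient pair is smaller - the one not polluted by the seam.
        if (dot(dxS, dxS) + dot(dyS, dyS) < dot(dx, dx) + dot(dy, dy))
        {
            dx = dxS;
            dy = dyS;
        }
        float3 emsample = textureEnv.SampleGrad(samplerDiffuse, uv, dx, dy).rgb;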
  2. Hi everyone! I'm implementing image-based lighting in a demo program and I ran into a problem with sampling an environment map. For the map I use a free HDR sample from CGSkies, loaded with stb_image. I create the texture with DXGI_FORMAT_R32G32B32A32_FLOAT and call GenerateMips on it (I do intend to implement proper environment map filtering later). Here is the fragment shader code that samples the map:

        float3 reflectionDir = reflect(-dirToViewer, normal);
        float theta = acos(reflectionDir.z);                 // lat, 0..PI
        float phi = atan2(reflectionDir.y, reflectionDir.x); // lon, -PI..PI
        float emx = phi / (PI*2);
        float emy = theta / PI;
        float3 emsample = textureEnv.Sample(samplerDiffuse, float2(emx, emy)).rgb;
        return float4(emsample, 1.0f);

    The reflection on the sphere has a seam that passes through the sun (envmap01). Notice, however, that the sun is in the middle of the texture, nowhere near any border. Also, if I add an offset to the x coordinate, the reflection moves, but not the seam (envmap02). If I move the camera, the seam moves with the reflection.

        float emx = phi / (PI*2) + 0.2f;

    Finally, the seam disappears if I do any of the following (envmap03): 1) set the sampler's MaxLOD to 0, 2) create the texture with only a single mip level, 3) sample the texture with SampleLevel, even with a fractional LOD level. And that's where I'm at right now. It looks like it has something to do with mip mapping, since the first two methods effectively disable it, but I don't see how SampleLevel is different from regular Sample with a hardware-calculated level. Will greatly appreciate any pointers!
  3. dietrich

    Specular lighting working but slight bug

    @DividedByZero, no problem, glad it helped:)
  4. output.Pos = mul(float4(vin.position, 1.0f), cbVS.wvp);
        output.VertexPos = float4(output.Pos.xyzw);

    Won't this test always be true for every pixel on the screen, though? Or am I missing something? I believe that if you need to test every pixel against a certain fixed value, you will have to compute it outside of the shader code and then pass it in either a constant buffer or an additional vertex parameter that is the same for all vertices. As @Irusan, son of Arusan said, those will also be interpolated.

    Again, correct me if I'm wrong, but there's no such thing as a vertex's pixels. A vertex has zero size and can be projected to any location on the screen, not necessarily onto a pixel's center. This means that while input.Pos will have "round" values like 123.5 or 7.5, a projected vertex position can have arbitrary coordinates, so testing input.Pos and the target vertex position for equality won't work in most cases. I guess you could compute the distance between input.Pos and the screen-space vertex position, then shade the fragment white if the distance is less than one or some other threshold, as in the sketch below.
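
    A minimal sketch of that distance test, not code from this thread; targetPixel is a hypothetical constant holding the vertex position already projected to pixel coordinates on the CPU:

        cbuffer TestParams : register(b1)
        {
            float2 targetPixel; // projected vertex position, in pixels
        };

        float4 PSMain(float4 pos : SV_Position) : SV_Target
        {
            // SV_Position arrives at pixel centers (123.5, 7.5, ...), so test
            // with a distance threshold instead of exact equality.
            if (distance(pos.xy, targetPixel) < 1.0f)
                return float4(1.0f, 1.0f, 1.0f, 1.0f); // shade this fragment white
            return float4(0.0f, 0.0f, 0.0f, 1.0f);
        }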
  5. dietrich

    Specular lighting working but slight bug

    Hi, @DividedByZero! Can you point to the technique you're using to calculate the specular component? It's not very clear from the code. It looks like you're missing a view vector completely, which may be the source of the problem, since specular reflections are view-dependent. This line computes a reflected light vector:

        float3 reflection = normalize(2 * diffuseIntensity * input.normal - input.lightPos);

    And here you compute the dot product between the light vector and its reflection:

        specularIntensity = pow(saturate(dot(reflection, input.lightPos)), specularPower);

    If I'm correct, this doesn't make much sense. In the picture below, dot(reflection, input.lightPos) corresponds to the angle α between l and l', which basically tells you how far the light is from the surface's normal. Instead you want to compute the angle θ between the view vector v and the perfect reflection direction l'. To do so, substitute the view vector for input.lightPos in either one of the lines above.

    Also, to take it a step further, take a look at the Blinn-Phong shading model (Wiki). It uses the half-vector between the light and view directions to compute the specular term, which better describes the actual physics of the process and produces more plausible results under certain conditions. Make sure to multiply by the texture color before you add the specular term: for most (all?) dielectric materials (plastic, paper, fabrics, etc.), the diffuse color does not affect the specular reflection.
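
    A quick sketch of the Blinn-Phong version, using names loosely based on your code (input.normal, input.lightPos, specularPower); viewDir, lightColor and textureColor are assumptions for illustration:

        float3 n = normalize(input.normal);
        float3 l = normalize(input.lightPos);   // direction toward the light
        float3 v = normalize(viewDir);          // direction toward the viewer
        float3 h = normalize(l + v);            // half-vector between l and v
        float diffuseIntensity  = saturate(dot(n, l));
        float specularIntensity = pow(saturate(dot(n, h)), specularPower);
        // Diffuse is tinted by the texture, specular is not (dielectrics).
        float3 color = textureColor * diffuseIntensity * lightColor
                     + specularIntensity * lightColor;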
  6. dietrich

    Depth shadow mapping question

    @pcmaster, haha, getting there took me way longer than I like to admit:)
  7. dietrich

    Depth shadow mapping question

    Hi, DividedByZero, I took a look at your project and was able to get the shadows working. The main issue seems to be in vs_shadow.hlsl, lines 43-45:

        output.lightViewPosition = mul(input.position, matWorld);
        output.lightViewPosition = mul(output.position, matLightView);
        output.lightViewPosition = mul(output.position, matLightProjection);

    Instead of output.position it should really be output.lightViewPosition; otherwise you're not getting a correct light-space position in the corresponding fragment shader. Looks like a copy-paste error. However, I also had to move the camera

        float camX = 10.0f;
        float camY = 5.0f;
        float camZ = -3.0f;

    and increase the width and height of the light's ortho projection to get this result.

    EDIT: was looking back through the thread and saw that blicili already pointed this out a while ago:)
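
    For clarity, the corrected lines should read:

        output.lightViewPosition = mul(input.position, matWorld);
        output.lightViewPosition = mul(output.lightViewPosition, matLightView);
        output.lightViewPosition = mul(output.lightViewPosition, matLightProjection);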
  8. dietrich

    shadow mapping using directional light

    You can definitely cull parts of your scene for directional shadow mapping, and as TomKQT mentions, the problem is to find the correct view frustum (which in this case will just be a box, due to the orthographic projection) to cull against. One option is indeed to render the entire scene, in which case the view volume will be a light-space oriented bounding box of the scene. While not very optimal, I've found this to be a good starting point to check that everything is working as expected.

    Another is to actually compute a tight bounding box around the objects required for shadow rendering. Basically, the view volume should enclose only the part of the scene that is currently visible (i.e. whatever is inside the main camera's view frustum), plus whatever objects can cast a shadow into it. With this approach you compute a light-space bounding box around the main camera's view frustum, but shift its near plane all the way back to the scene bound, repeating the process every frame to account for main view changes (see the sketch after this list).

    A few more pointers:

      • Cascaded shadow mapping mentioned above uses this approach, but you don't actually need to implement the cascades - or you can view it as single-cascade CSM. Either way, it should be easy enough to extend to cascades from that point, if the need arises.
      • The volume produced by this technique is conservative - there may be optimizations to further reduce the rendered object count that I don't know of.
      • The fact that the light view volume now shifts whenever the main view moves or rotates, and that there's no 1:1 correlation between screen pixels and shadowmap texels, causes artifacts with shadow stability. You overcome those by making sure that your shadow map is translation and rotation invariant. It's a fairly big topic, again covered by many CSM descriptions.
      • The same artifacts are produced whenever the shadow-casting light moves (think dynamic day-night cycle). I believe this one is more complicated (if possible at all) to solve. I've seen several games move the sun/moon in short bursts to limit the time when the effect is visible, but I don't recall any other ways to improve this, apart from increasing shadow resolution.
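
    Here's a rough sketch of fitting the light-space ortho box around the main camera's frustum (my own sketch under the approach above, not from any particular implementation); Vec3/Mat4 are minimal stand-ins for whatever math types your project uses:

        #include <algorithm>
        #include <cfloat>

        struct Vec3 { float x, y, z; };
        struct Mat4 { float m[4][4]; }; // row-vector convention, as in the HLSL above

        // Transform a point by a matrix (w assumed 1, no projection).
        Vec3 TransformPoint(const Mat4& mat, const Vec3& p)
        {
            return {
                p.x * mat.m[0][0] + p.y * mat.m[1][0] + p.z * mat.m[2][0] + mat.m[3][0],
                p.x * mat.m[0][1] + p.y * mat.m[1][1] + p.z * mat.m[2][1] + mat.m[3][1],
                p.x * mat.m[0][2] + p.y * mat.m[1][2] + p.z * mat.m[2][2] + mat.m[3][2],
            };
        }

        // Fit a light-space AABB around the 8 world-space corners of the main
        // camera's frustum, then pull the near plane back to the scene bound so
        // off-screen casters can still throw shadows into the visible volume.
        void ComputeLightOrthoBounds(const Vec3 frustumCornersWS[8], const Mat4& lightView,
                                     float sceneMinZ, Vec3& outMin, Vec3& outMax)
        {
            outMin = {  FLT_MAX,  FLT_MAX,  FLT_MAX };
            outMax = { -FLT_MAX, -FLT_MAX, -FLT_MAX };
            for (int i = 0; i < 8; ++i)
            {
                Vec3 p = TransformPoint(lightView, frustumCornersWS[i]);
                outMin.x = std::min(outMin.x, p.x); outMax.x = std::max(outMax.x, p.x);
                outMin.y = std::min(outMin.y, p.y); outMax.y = std::max(outMax.y, p.y);
                outMin.z = std::min(outMin.z, p.z); outMax.z = std::max(outMax.z, p.z);
            }
            outMin.z = std::min(outMin.z, sceneMinZ); // extend toward the light
        }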
  9. Hi, Hashbrown! The math actually looks fine to me; are you sure there's a problem? I've always thought that this kind of distortion is to be expected from a perspective projection, something to do with the fact that we're projecting a 3D scene onto a 2D plane. In the second image the spheres are closer to the edge of the screen, which results in more apparent distortion, but they aren't perfectly circular in the first image either. Here's an illustration: https://en.wikipedia.org/wiki/File:Fig_3._After_pivoting_object_to_remain_facing_the_viewer,_the_image_is_distorted,_but_to_the_viewer_the_image_is_unchanged..png

    Shaarigan, could you please explain what the a and b coefficients in your implementation do? I'm using basically the same matrix as Hashbrown, so I'm guessing yours is a more general form? I mean, if left == -right and top == -bottom, then both a and b are 0, yielding the same matrix as Hashbrown has. Does setting those to different values result in some kind of skewed/asymmetrical frustum?
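
    If it follows the standard off-center (glFrustum-style) projection - an assumption on my part, not something stated in this thread - those coefficients would be

        a = (right + left) / (right - left)
        b = (top + bottom) / (top - bottom)

    which both vanish for a symmetric frustum (left == -right, top == -bottom) and otherwise shift the projection window off the view axis, giving exactly that kind of asymmetric frustum.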
  10. That's strange; we do the same thing in our project (assign a UStaticMeshComponent to RootComponent), and it works fine. UStaticMeshComponent inherits from USceneComponent, so such an assignment is valid. Does it work if you do an explicit cast?

        RootComponent = (USceneComponent*)PickupMesh;

    Don't know if it has anything to do with the problem, but it looks like you're using an older version of Visual Studio (2013?), while they recommend VS2017 for UE4.18 (although we use VS2015 without any issues).

    UPDATE: I guess it doesn't immediately follow that the assignment of UStaticMeshComponent* to USceneComponent* is valid, only that a cast from UStaticMeshComponent* to USceneComponent* is. And unfortunately I'm not the right person to explain whether such a cast will happen implicitly, and what kind of constructor you need in order for it to happen.
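
    For what it's worth, in standard C++ a pointer to a derived class converts to a pointer to a publicly inherited base implicitly, with no cast or special constructor needed. A minimal standalone illustration with toy stand-ins (not the actual UE4 classes):

        struct SceneComponent { };
        struct StaticMeshComponent : SceneComponent { }; // public inheritance

        int main()
        {
            StaticMeshComponent* mesh = new StaticMeshComponent();
            SceneComponent* root = mesh; // implicit derived-to-base conversion
            delete mesh;
            return 0;
        }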
  11. dietrich

    Problems with directional lighting shader

    All you need is a unit-length normal - you can just add v_Normal = normalize(v_Normal); to your fragment shader:)
  12. dietrich

    Problems with directional lighting shader

    Ah, well, that happens too, glad to hear it's fixed:) One more thing: you're currently using v_Normal as is, but you really want to normalize it again in the fragment shader before doing any computations with it (the tutorial seems to be missing this step). Each fragment receives an interpolated normal, and a linear interpolation of unit vectors is not necessarily a unit vector itself; there's a nice illustration of this online (image via google).
  13. dietrich

    Problems with directional lighting shader

    Then I'd suggest simplifying things a bit and moving the lighting computations into world space, to see if it changes anything. v_Position will then become

        v_Position = vec3(worldMatrix * object_space_pos);

    and the light position will remain unchanged:

        v_LightPos = u_LightPos;
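
    Putting it together, a minimal sketch of the world-space vertex shader; worldMatrix and u_LightPos follow the thread, while a_Position, a_Normal, viewMatrix and projMatrix are assumed names:

        attribute vec3 a_Position;
        attribute vec3 a_Normal;

        uniform mat4 worldMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projMatrix;
        uniform vec3 u_LightPos; // already in world space

        varying vec3 v_Position;
        varying vec3 v_LightPos;
        varying vec3 v_Normal;

        void main()
        {
            vec4 object_space_pos = vec4(a_Position, 1.0);
            v_Position = vec3(worldMatrix * object_space_pos);       // world space
            v_LightPos = u_LightPos;                                 // world space too
            v_Normal   = vec3(worldMatrix * vec4(a_Normal, 0.0));    // fine for uniform scale
            gl_Position = projMatrix * viewMatrix * worldMatrix * object_space_pos;
        }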
  14. dietrich

    Problems with directional lighting shader

    No problem, @lonewolff. That looks interesting:) worldMatrix is at the moment an identity matrix, correct?
  15. dietrich

    Problems with directional lighting shader

    Hi, lonewolff! My initial guess would be that your v_Position and v_LightPos are in different coordinate spaces, making the distance and light direction computations meaningless. v_Position is in view space:

        v_Position = vec3(WV * object_space_pos);

    Can you make sure that you also transform v_LightPos into view space somewhere in your app? Does it help if you comment out the attenuation?

        diffuse_light = diffuse_light * (1.0 / (1.0 + (0.25 * dist * dist)));

    Also, probably just a formality, but directional lighting doesn't use a light position or attenuation; it's an approximation for the case when the light source is very far away relative to the scale of the scene, e.g. the sun illuminating a building. In such a case we assume that the light rays all travel in the same direction and that the light's intensity falloff with distance is negligible. Point lighting would be a better name for what you have here.
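
    A two-line sketch of the fix: bring the light into the same (view) space as v_Position by transforming it with the same view matrix. WV follows your code; viewMatrix and a world-space u_LightPos are assumed names:

        v_Position = vec3(WV * object_space_pos);              // view space
        v_LightPos = vec3(viewMatrix * vec4(u_LightPos, 1.0)); // view space too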