
semler

Member

  • Content Count: 42
  • Joined
  • Last visited
  • Community Reputation: 548 Good

About semler

  • Rank: Member


  1. semler

    R&D Trails

    For our game I just take the cross product of the two edge vectors going from the center vertex to the other two vertices to figure out the facing of the triangle. Then look at the sign of the component that you use as up: if it's negative, I copy vertex A into B.

    Henning
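A minimal sketch of the cross-product facing test described above, in plain C++ (not the poster's actual code; `Vec3` and the choice of y as the up axis are assumptions):

```cpp
struct Vec3 { float x, y, z; };

// Cross product of two vectors.
static Vec3 Cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Facing test for a triangle given its center vertex and the two outer
// vertices A and B: cross the two edge vectors and look at the sign of
// the component used as up (y here, by assumption).
bool IsFrontFacing(const Vec3& center, const Vec3& a, const Vec3& b) {
    Vec3 e0 = { a.x - center.x, a.y - center.y, a.z - center.z };
    Vec3 e1 = { b.x - center.x, b.y - center.y, b.z - center.z };
    return Cross(e0, e1).y >= 0.0f;  // negative: swap A and B
}
```

If the test returns false, swapping A and B flips the winding, which is exactly the "copy vertex A into B" fix from the post.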
  2. semler

    3D Projective Texture Mapping

    When you perform the lighting calculation, you sample from your projected texture at the same time and multiply it with the resulting light. The result can also be masked by the usual shadow map, so the projected texture only appears where the light can actually reach.

    Henning
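In scalar form the combine described above is just a product; a one-line sketch (illustrative names, not engine code):

```cpp
// The projected texture modulates the light term, and the shadow factor
// masks out regions the light cannot reach. Single-channel for brevity;
// in a shader this would be per-color-component.
float CombineProjectedLight(float lightTerm, float projectedSample,
                            float shadowFactor) {
    return lightTerm * projectedSample * shadowFactor;
}
```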
  3. So there is a difference between "Geometry Clipmaps" and "Texture Clipmaps".

    Geometry Clipmaps are described here: http://hhoppe.com/geomclipmap.pdf The algorithm is a continuous LOD for the mesh.

    Texture Clipmaps are described here: http://developer.download.nvidia.com/SDK/10/direct3d/Source/Clipmaps/doc/Clipmaps.pdf The algorithm handles the visualization of very large textures by using a stack of LOD textures as opposed to the usual pyramid.

    I used Geometry Clipmaps in the early days (before switching over to a quad-tree based algorithm), and I used a toroidal addressing method for the texture (see the Texture Clipmaps paper for an explanation).

    To answer your question: you can scale and offset your parts in the vertex shader using data from a constant buffer, e.g.

    pos_base = pos_input_vs + meshOffset;
    tex_coord = pos_base.xy;
    pos_world = pos_base * LODscale + LODoffset;

    where meshOffset places the L-shaped part at the right location, and LODscale and LODoffset get it into the right size and position in the world.

    Henning
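The vertex-shader pseudocode in the post above can be sketched on the CPU like this (the `ClipmapLevel` layout is an assumption; the names follow the post's pseudocode):

```cpp
struct Vec2 { float x, y; };

// Per-level constants, as they might sit in a constant buffer.
struct ClipmapLevel {
    Vec2  meshOffset;  // places the L-shaped part within the ring
    float lodScale;    // grows per coarser level
    Vec2  lodOffset;   // world-space placement of the level
};

// CPU-side version of the vertex-shader math quoted above: offset the
// shared part mesh, use the offset position as a texture coordinate,
// then scale/offset into world space.
void TransformClipmapVertex(const Vec2& posInput, const ClipmapLevel& lod,
                            Vec2* texCoord, Vec2* posWorld) {
    Vec2 posBase = { posInput.x + lod.meshOffset.x,
                     posInput.y + lod.meshOffset.y };
    *texCoord = posBase;  // toroidal addressing wraps this later
    *posWorld = { posBase.x * lod.lodScale + lod.lodOffset.x,
                  posBase.y * lod.lodScale + lod.lodOffset.y };
}
```

Because every level reuses the same small set of part meshes, only the constant-buffer data differs per level and per part.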
  4. semler

    Shadow factor

    Yes, you are correct: both approaches are viable for both renderer types. But normally you would like to batch the lights together in the forward renderer to avoid having to render the geometry multiple times (i.e. loop over lights per object). This is not nearly as important for the deferred renderer, where you render the lights (the area on screen they affect) instead of the objects.
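The cost difference described above can be made concrete by counting draw submissions (numbers only, no rendering; the function names are illustrative):

```cpp
#include <cstddef>

// Forward, lights batched: the pixel shader loops over the light list,
// so each object is submitted once.
std::size_t ForwardBatchedDrawCalls(std::size_t objects, std::size_t /*lights*/) {
    return objects;
}

// Forward, naive multipass: one geometry pass per (object, light) pair.
std::size_t ForwardNaiveDrawCalls(std::size_t objects, std::size_t lights) {
    return objects * lights;
}

// Deferred: one G-buffer pass over the objects, then one screen-space
// volume per light.
std::size_t DeferredDrawCalls(std::size_t objects, std::size_t lights) {
    return objects + lights;
}
```

This is why batching matters so much in the forward case: the naive cost is multiplicative, while the deferred cost is only additive.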
  5. semler

    Shadow factor

    Only the shadow map from the light casting that shadow should be used when drawing the light; it's used for removing the light contribution.

    It depends on how you do your drawing when dealing with shadow maps. For a forward renderer, you would loop over the lights affecting a pixel/region, so it would require you to gather all the shadow maps in some way (atlas, array, etc.). For a deferred renderer, the lighting result is accumulated into a render target, so normally you would only draw one light source at a time.

    Henning
  6. I use explicit slots for DX11 and DX12 now. I used the D3D11 Effect framework before to control it, but found it easier to do explicit binding for DX12.

    For materials I have two tables: one for programmer-controlled entries (like noise, reflection, refraction, etc.) and one for material assets. That way I only have to update the last table when changing materials.

    Henning
  7. Here is my code that is used in production (so I know it works ;-) )

    V4 vMin = vCorners[0];
    V4 vMax = vCorners[0];
    for (int j = 1; j < 8; j++)
    {
        vMin = VMin(vMin, vCorners[j]);
        vMax = VMax(vMax, vCorners[j]);
    }
    V4 vSize = vMax - vMin;
    float fRadius = VLen3(vSize).x() * 0.5f;
    mShadowProj = MOrthoRH(-fRadius, fRadius, -fRadius, fRadius, -fMaxZ, -fMinZ);

    // Snap center to shadow texel.
    // This is done by transforming the center of the CSM frustum into light
    // post-projection (texel space) and performing the snapping in this space.
    M44 mCameraToLight = MInverseAffine(mLightToCamera);
    V4 vCenterCamera = VLerp(vMin, vMax, VSplat(0.5f));
    V4 vCenterLight = VTransform43(mCameraToLight, vCenterCamera);
    V4 vCenterTexel = VTransform44(mShadowProj, vCenterLight);
    vCenterTexel = VProject(vCenterTexel);
    uint32 nCascadeSize = descShadowCSM.nWidth >> 1;
    float fShadowSize = nCascadeSize * 0.25f;
    V4 vShadowSize = VSplat(fShadowSize);
    V4 vShadowSizeInv = VSplat(1.0f / fShadowSize);
    V4 vSnap = VMul(vCenterTexel, vShadowSize);
    vSnap = VFloor(vSnap);
    vSnap = VMul(vSnap, vShadowSizeInv);
    V4 vSnapOffset = VLoadZero() - vSnap;
    M44 mSnapTrans = MLoadIdentity();
    mSnapTrans.vTrans = VXY01(vSnapOffset);
    mShadowProj = mSnapTrans * mShadowProj;

    Hope it makes sense.

    BTW: Looking at your code, I think the issue is the way you apply the offset directly to the matrix instead of doing a multiplication with a translation matrix (like I do in the last line).

    Henning

    Oh, and I now use the quantization trick instead of the code above to get more precision (http://dev.theomader.com/stable-csm/)
  8. I can see now that my question could be misunderstood; sorry for that. What I meant to write was: while creating the materials, the viewport in Substance Painter can show the final result (with IBL applied) using either sRGB as the default or a custom 3D LUT. I was not referring to the output format of the channels.

    So again: does your team use a custom LUT like ACES, or just the standard sRGB, while authoring the materials?

    Henning
  9. For the sake of clarity and maintenance, I would keep all shader outputs in the same space (either view space or world space), no matter if it's first person or not. But you will probably want to keep camera-space items (like first-person weapons) in camera-local space (view space) up until you want to draw them, for precision reasons. Otherwise you might end up with shaky hands and weapons due to floating point precision.

    Henning
  10. Hi, I'm curious as to what others do with regard to tonemapping when it comes to authoring materials for PBR in programs like Substance Painter. To my understanding the default tonemapping in SP is simple sRGB, but it supports using a 3D LUT. Many modern engines use a different tonemapping to get a more filmic(?) look, like ACES in Unreal. So does your team use the default sRGB when authoring, or does it use a custom LUT?

    Henning
  11. Just a hunch: the way I make reflections in our game is by using a skewed front clip plane, which alters the depth values written to the depth buffer. Can you verify that the depth values you are reading are the right ones?

    Henning
  12. semler

    Shader matrix mul - dp4 or mad?

    I remember back in the old days that we were encouraged (by the hardware vendors) to choose the dp4 version, as the order of the instructions doesn't change the outcome. Back then the compiler(s) had a tendency to shuffle the instructions around, and the order of the madd instructions could produce numerically different results, which in turn could cause z-fighting when using a multipass approach. I don't know how good (or bad) the compiler is nowadays at maintaining the order, as I'm still using the dp4 approach ;-)
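The two formulations discussed above look like this in scalar C++ (a sketch, not shader code; the row-major `Mat4` layout is an assumption). With exact inputs both produce identical results; the point is that the mad form exposes a chain of accumulations the compiler could reorder:

```cpp
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // m[row][col], row-major (an assumption)

// dp4 style: each output component is a single dot product, so the
// summation order is fixed by the expression itself.
Vec4 TransformDp4(const Mat4& M, const Vec4& v) {
    Vec4 r;
    r.x = M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w;
    r.y = M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w;
    r.z = M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w;
    r.w = M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w;
    return r;
}

// mad style: accumulate one column per step; reordering these
// multiply-adds can change the rounding, which is the multipass
// z-fighting hazard mentioned in the post.
Vec4 TransformMad(const Mat4& M, const Vec4& v) {
    Vec4 r = { M.m[0][0]*v.x, M.m[1][0]*v.x, M.m[2][0]*v.x, M.m[3][0]*v.x };
    r.x += M.m[0][1]*v.y; r.y += M.m[1][1]*v.y; r.z += M.m[2][1]*v.y; r.w += M.m[3][1]*v.y;
    r.x += M.m[0][2]*v.z; r.y += M.m[1][2]*v.z; r.z += M.m[2][2]*v.z; r.w += M.m[3][2]*v.z;
    r.x += M.m[0][3]*v.w; r.y += M.m[1][3]*v.w; r.z += M.m[2][3]*v.w; r.w += M.m[3][3]*v.w;
    return r;
}
```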
  13. semler

    OpenGL Shadow problem

    My best guess is that the bias value in the fragment shader is too large. Try changing the bias value to 0 to see if that helps. You might get shadow acne instead, but it's just to figure out where the problem comes from.

    Henning
  14. You can just dot the vector towards the sun with the camera's forward vector; if that is negative it means the sun is behind you. To avoid the harsh transition you can use the dot value of the normalized vectors as a fade value, e.g.

    float fade = saturate(dot3(normalize3(vectorToSun), normalize3(cameraForward)) * 16);
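The one-liner above translated into plain C++ (the `Vec3` helpers are assumptions; the factor 16 comes from the post and controls how narrow the fade band is):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

static Vec3 Normalize(const Vec3& v) {
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// A negative dot means the sun is behind the camera; scaling by 16
// before clamping turns the hard sign flip into a quick but smooth fade.
float SunFade(const Vec3& toSun, const Vec3& cameraForward) {
    float d = Dot(Normalize(toSun), Normalize(cameraForward));
    return std::clamp(d * 16.0f, 0.0f, 1.0f);  // saturate()
}
```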
  15. The idea is to split edges, not triangles, i.e. have a criterion for when an edge should be split (length in screen space or something). When you do this, adjacent triangle edges will split equally.

    I have some code lying around from when I implemented this in our engine at some point, before hw tessellation was available. Let me know if I should post it.

    Henning
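A minimal sketch of a screen-space edge criterion like the one described above (not the poster's code; the bare pinhole projection and threshold are assumptions):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Bare pinhole projection of a camera-space point (looking down +z).
static Vec2 ProjectToPixels(const Vec3& p, float focal, float halfWidthPx) {
    return { p.x / p.z * focal * halfWidthPx,
             p.y / p.z * focal * halfWidthPx };
}

// Split criterion on an edge: measure the projected edge length in pixels
// and split when it exceeds a threshold. Because the test depends only on
// the edge's two endpoints, both triangles sharing the edge reach the
// same decision, so the mesh stays crack-free.
bool ShouldSplitEdge(const Vec3& a, const Vec3& b,
                     float focal, float halfWidthPx, float maxPixels) {
    Vec2 sa = ProjectToPixels(a, focal, halfWidthPx);
    Vec2 sb = ProjectToPixels(b, focal, halfWidthPx);
    float dx = sb.x - sa.x, dy = sb.y - sa.y;
    return std::sqrt(dx*dx + dy*dy) > maxPixels;
}
```

A nearby edge projects long and gets split; the same edge far from the camera projects short and is left alone, which is the LOD behavior the post is after.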