About d07RiV

Personal Information

  • Role
    3D Animator

  1. Keeping track of angular momentum instead of velocity is technically more correct, but since you're doing numeric integration, both are going to give "wrong" results, and updating angular velocity instead is much easier and more stable. You won't be able to reproduce some peculiar motions like these without a more robust integration method anyway.
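A minimal sketch of the approach being recommended: integrating angular velocity directly with explicit Euler, using Euler's equations in the body frame. The function names and the diagonal inertia tensor are my own illustration, not from the thread.

```javascript
// Sketch: updating angular velocity directly via Euler's equations in
// the body frame: dω/dt = I⁻¹(τ − ω × (Iω)).
// `inertia` is a diagonal inertia tensor [Ix, Iy, Iz]; names are illustrative.
function cross(a, b) {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}

function stepAngularVelocity(omega, inertia, torque, dt) {
  const L = omega.map((w, i) => w * inertia[i]); // current angular momentum Iω
  const gyro = cross(omega, L);                  // gyroscopic term ω × (Iω)
  return omega.map((w, i) => w + dt * (torque[i] - gyro[i]) / inertia[i]);
}
```

With zero torque and a spin along a principal axis, ω is preserved exactly; an off-axis spin drifts over time, which is the "wrong results" of naive numeric integration mentioned above.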
  2. d07RiV

    Raycast From Camera To Mouse Pointer

    You multiply your vector by the inverse projection matrix, but then discard the Z/W values and put in -1,0. Z=-1 is only correct if your zNear is 1, so you should keep the Z so it works for every zNear. Setting W=0 makes it ignore the translation component of the inverse view matrix, which is what you need if you just want the ray direction. The origin of the ray is, indeed, your camera position; it is the same as the translation column (elements 12-14) of the inverse view matrix. The rest seems fine, and you seem to have figured out the problem by now.
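A sketch of the corrected unprojection, assuming column-major WebGL-style matrices with `invProj` and `invView` computed elsewhere; all names are illustrative.

```javascript
// Sketch: picking ray from mouse NDC coordinates. The Z from the inverse
// projection is kept (so any zNear works); W is set to 0 before the inverse
// view so the camera translation is ignored for the direction.
function mulMat4Vec4(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    out[row] = m[row] * v[0] + m[4 + row] * v[1] +
               m[8 + row] * v[2] + m[12 + row] * v[3];
  }
  return out;
}

function mouseRay(ndcX, ndcY, invProj, invView) {
  // point on the near plane, taken back into view space
  const view = mulMat4Vec4(invProj, [ndcX, ndcY, -1, 1]);
  // keep the view-space Z, zero out W so only rotation applies
  const dir = mulMat4Vec4(invView, [view[0], view[1], view[2], 0]);
  // ray origin = camera position = translation column of the inverse view
  const origin = [invView[12], invView[13], invView[14]];
  const len = Math.hypot(dir[0], dir[1], dir[2]);
  return { origin, dir: [dir[0] / len, dir[1] / len, dir[2] / len] };
}
```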
  3. If it's a fullscreen quad, then don't worry about it. The cost of processing a screen-full of pixels is much higher. But you should still try to squeeze multiple postprocessing steps into one where possible.
  4. Hm, thanks, I'll try to play around with sorting, since quicksort in JS isn't all that fast anyway (it runs a callback for every comparison). I've got a couple more questions, if you don't mind. 1. How do you deal with non-discrete data like model matrices? You can't encode them in a 128-bit draw call, unless you put them in a big list or something. 2. Are draw calls supposed to be "compiled" on every frame, or are they cached inside objects?
  5. However, I'm not sure whether the individual samples generated for MSAA count as separate fragments, since the fragment shader is only run once.
  6. I've also been thinking about whether there are better alternatives for picking the rendering order than a simple radix sort, which can have abysmal results in some cases (e.g. 0111111 -> 1000000 -> 1111111 -> 2000000, etc.). It is essentially a traveling salesman problem, which has plenty of decent approximate solutions; the question is how much time we are prepared to dedicate to sorting. I'm guessing the most reasonable way would be to always pick the draw item closest to the current state, using some LSH or tree-based structure to preprocess the draw list. It also raises the question of whether we need "don't care" values, because they can significantly reduce the cost of switching states.
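The greedy "closest state" idea could be sketched like this, with the cost crudely measured as differing bits in a packed state key (a real renderer would weight shader switches more heavily than texture switches). An O(n^2) brute-force search stands in for the LSH/tree structure mentioned; all names are illustrative.

```javascript
// Sketch: greedily order draw items so each one's state key is closest
// (in bit distance) to the previously drawn item's key.
function bitDistance(a, b) {
  let x = a ^ b, n = 0;
  while (x) { n += x & 1; x >>>= 1; }
  return n;
}

function greedyOrder(keys) {
  const used = new Array(keys.length).fill(false);
  const order = [];
  let current = 0; // assume the initial GPU state corresponds to key 0
  for (let step = 0; step < keys.length; step++) {
    let best = -1, bestCost = Infinity;
    for (let i = 0; i < keys.length; i++) {
      if (used[i]) continue;
      const cost = bitDistance(current, keys[i]);
      if (cost < bestCost) { bestCost = cost; best = i; }
    }
    used[best] = true;
    order.push(best);
    current = keys[best];
  }
  return order; // indices into keys, in draw order
}
```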
  7. d07RiV

    Contour of the shadow region

    That's what cascaded shadow maps are for - you get high shadowmap density close to the camera, and lower as you get further away from it. The sun is the most important light source, you can't just get rid of it.
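For illustration, a common way to pick the cascade split distances is to blend uniform and logarithmic splits; this is a generic sketch, not code from the thread, and the names are made up.

```javascript
// Sketch: cascade split distances for cascaded shadow maps, blending
// uniform and logarithmic schemes. lambda = 0 gives uniform splits,
// lambda = 1 gives logarithmic (denser shadowmap coverage near the camera).
function cascadeSplits(near, far, count, lambda) {
  const splits = [];
  for (let i = 1; i <= count; i++) {
    const t = i / count;
    const uniform = near + (far - near) * t;
    const log = near * Math.pow(far / near, t);
    splits.push(uniform * (1 - lambda) + log * lambda);
  }
  return splits; // far edge of each cascade
}
```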
  8. d07RiV

    Contour of the shadow region

    Is the light not supposed to shine outside the box at all? Then the near Z border might solve it, I suppose (if you need blurry light edges).
  9. d07RiV

    Contour of the shadow region

    Didn't you say you use PCF? It samples a block of pixels around the target, which leads to incorrect results near the shadowmap's edge. You should always make shadowmaps slightly larger than the area they affect, so you never have to sample outside them.
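A sketch of the padding idea, assuming an orthographic light frustum and a PCF kernel radius given in texels; the names are made up for illustration.

```javascript
// Sketch: expand the shadowmap's orthographic bounds by the PCF kernel
// radius (converted from texels to world units) so the filter never
// samples outside the map. After padding, the texel size grows slightly;
// over-padding by a texel or iterating once covers that.
function padShadowBounds(bounds, mapSize, kernelRadius) {
  const texelX = (bounds.maxX - bounds.minX) / mapSize;
  const texelY = (bounds.maxY - bounds.minY) / mapSize;
  return {
    minX: bounds.minX - texelX * kernelRadius,
    maxX: bounds.maxX + texelX * kernelRadius,
    minY: bounds.minY - texelY * kernelRadius,
    maxY: bounds.maxY + texelY * kernelRadius,
  };
}
```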
  10. Thanks, I'm still not sure how much abstraction I need, since the API is always going to be the same. Another thing: when you put all passes in the same shader file, do you run a lexer on them, or do you just feed everything to the shader compiler and let it figure out what to optimize away? The former option would allow us to know which options affect which passes, so we don't have to make redundant copies (instead of having to manually specify them for every pass). edit: I guess this is partially answered by the bonus slides.
  11. Do you have any suggestions on how to handle texture bindings? Should I even bother to minimize the number of swaps? For example, in deferred rendering I could keep the gbuffer textures bound to units 0-3 throughout the rest of the rendering phase, but that requires making sure all shaders reference them correctly and that shaders that don't need them use the remaining units. Or I could assume that every time I switch shaders all textures have to be re-bound, which simplifies things greatly. I'm targeting WebGL, by the way, so no resource lists, and many of these bitwise optimizations are hard to apply there.
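One common answer is a small binding cache that skips redundant `gl.activeTexture`/`gl.bindTexture` calls; a sketch under the assumption that all bindings go through it (the `createTextureCache` name and structure are my own, not a real API).

```javascript
// Sketch: cache of what is bound to each WebGL texture unit, so redundant
// re-binds become cheap no-ops. `gl` is assumed to be a WebGL context.
function createTextureCache(gl, unitCount) {
  const bound = new Array(unitCount).fill(null);
  let active = -1;
  return {
    bind(unit, texture) {
      if (bound[unit] === texture) return false; // already bound, skip
      if (active !== unit) {
        gl.activeTexture(gl.TEXTURE0 + unit);
        active = unit;
      }
      gl.bindTexture(gl.TEXTURE_2D, texture);
      bound[unit] = texture;
      return true;
    },
    // call when you'd rather assume nothing survives a shader switch
    invalidate() { bound.fill(null); active = -1; },
  };
}
```

This gives both strategies from the question in one place: route every bind through the cache to minimize swaps, or call `invalidate()` on shader switches to get the simple re-bind-everything behavior.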
  12. Umm, apparently it does work, but takes ages to process? And the one in the WYSIWYG editor doesn't work either.
  13. First, the Sigma button deletes half the paragraph you were typing, and inserts your equation surrounded by \( and \) (not visible in the preview). And it stays that way in the post, with no math symbols or anything. Has it always been this way, or did they just break it and never bother to fix it?
  14. Yes, a zero there should do the trick, I think. Or just convert your matrix to `mat3`, same thing. (Something is buggy with the equation editor, so this whole paragraph got deleted... oh well.) But the really bad part is that you're applying *translation* (which is part of the view matrix) to your normals, which is definitely wrong.
  15. You don't transform normals with the same matrices you use for positions. Skipping the math, you need to apply the inverse-transpose of the transform instead. For rotation-only matrices it is the same thing (because the inverse of a rotation is its transpose), but that's not true for scaling or translation. Luckily, assuming your view matrix is just rotation+translation, you can simply use W=0 for your normal to skip the translation; no need to invert any matrices.
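For the general case (non-uniform scale), the inverse-transpose of the upper 3x3 can be computed directly via cofactors; a sketch with column-major layout, purely for illustration.

```javascript
// Sketch: "normal matrix" = inverse-transpose of a column-major 3x3,
// computed as the cofactor matrix divided by the determinant.
// For a pure rotation this returns the input matrix unchanged.
function normalMatrix(m) {
  const a = m[0], b = m[3], c = m[6],
        d = m[1], e = m[4], f = m[7],
        g = m[2], h = m[5], i = m[8];
  const c00 = e * i - f * h, c01 = f * g - d * i, c02 = d * h - e * g;
  const c10 = c * h - b * i, c11 = a * i - c * g, c12 = b * g - a * h;
  const c20 = b * f - c * e, c21 = c * d - a * f, c22 = a * e - b * d;
  const det = a * c00 + b * c01 + c * c02;
  // column-major cofactor matrix / det = inverse-transpose
  return [c00, c10, c20, c01, c11, c21, c02, c12, c22].map((x) => x / det);
}
```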