BRabbit27

1. I'm still not comfortable with my understanding of this. I was looking at the code given in a Coursera course on WebGL, specifically the lookAt function in http://www.cs.unm.edu/~angel/COURSERA/CODE/Common/MV.js together with figure 4.13 on page 205 of this book: https://robot.bolink.org/ebooks/Interactive%20Computer%20Graphics%20-%20A%20Top-Down%20Approach%206e%20By%20Edward%20Angel%20and%20Dave%20Shreiner%20(Pearson,%202012)%20BBS.pdf

The forward vector of the camera is computed as at - eye, which represents the vector going into the lens of the camera, i.e. Zc in the figure. Good. The right vector is computed as cross(forward, up), which points opposite to Xc in the figure. Why? To me it should be cross(up, forward). The up vector is then computed from the two previously computed vectors.

Then the code goes and negates the forward vector. Why?

Finally, in the construction of the camera matrix there are terms like dot(forward, eye), dot(right, eye), and dot(up, eye) that I don't understand; where do they come from? Can someone explain this? I have done some computations on paper but do not get any dot products, so I must be missing something.
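For reference, here is a minimal sketch of how those dot products can arise. This is my reading of the usual construction, not the actual MV.js code; the Vec3 type and helpers are my own. The world-to-camera matrix is the inverse of the camera's rigid transform, and the translation part of that inverse is -R*eye, whose components are exactly dot products of the basis vectors with eye:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Fills m with the 4x4 world-to-camera matrix, row-major, for column vectors.
void lookAt(Vec3 eye, Vec3 at, Vec3 up, float m[16]) {
    Vec3 f = normalize(sub(at, eye));   // into the lens, Zc in the figure
    Vec3 r = normalize(cross(f, up));   // camera right
    Vec3 u = cross(r, f);               // recomputed camera up
    // The camera looks down -z in eye space (OpenGL convention), hence -f.
    // The last column is -R*eye: this is where dot(right, eye),
    // dot(up, eye), and dot(forward, eye) come from.
    float rows[4][4] = {
        {  r.x,  r.y,  r.z, -dot(r, eye) },
        {  u.x,  u.y,  u.z, -dot(u, eye) },
        { -f.x, -f.y, -f.z,  dot(f, eye) },
        { 0.0f, 0.0f, 0.0f,  1.0f        },
    };
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            m[i*4 + j] = rows[i][j];
}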
2. Sorry about that, I'm kinda new to this forum. I have some posts, but this is the first one that has received so many helpful answers.
3. When you say ... do you mean forward in the direction the camera is looking, or forward in the direction of the positive z-axis? I think my doubt is really about the right vector: is it right relative to the positive z-axis, or right relative to the direction the camera is looking?

Maybe the source of my confusion is that some sources build the world2camera matrix, whereas what I want is to build a camera2world matrix whose inverse will be the world2camera matrix.

So, I want a right-handed coordinate system and I am following row-major order. I set the position of my camera to P = (0, 0, 2), I want my camera to look at the origin, i.e. L = (0, 0, 0) where L is for lookAt, and I also have the up vector U = (0, 1, 0), all of these given in world coordinates.

To build my cameraToWorld matrix I can build translation and rotation matrices as follows:

T = [ 1    0    0    0
      0    1    0    0
      0    0    1    0
      P.x  P.y  P.z  1 ]

Forward vector (assuming forward means the direction of the positive z-axis): F = normalize(P - L)
Right vector: R = normalize(cross(F, U))
Up vector: U = normalize(cross(R, F))

R = [ R.x  R.y  R.z  0
      U.x  U.y  U.z  0
      F.x  F.y  F.z  0
      0    0    0    1 ]

Finally, camera2world = T * R, therefore world2camera = camera2world.inverse().

Is this correct? Perhaps, if you have some spare time, you could reproduce the image in Figure 8 from the link above and sketch where the forward, up, and right vectors point? That would help a lot.
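As a check on that construction, here is a minimal sketch (my own code, assuming a row-major Mat4 used with row vectors, v' = v * M) that inverts a rotation-plus-translation matrix analytically. The translation row of the inverse works out to negated dot products of the position with the basis vectors, which is also where lookAt-style matrices get their dot(axis, eye) terms:

// Sketch only: a rigid transform stores rotation rows plus a position row.
struct Mat4 { float m[4][4]; };

// Inverse of a rigid transform: transpose the 3x3 rotation block and
// replace the translation row with -t * R^T (negated dot products).
Mat4 invertRigid(const Mat4& a) {
    Mat4 r = {};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[i][j] = a.m[j][i];                 // transpose rotation block
    for (int j = 0; j < 3; ++j)                    // translation row: -t * R^T
        r.m[3][j] = -(a.m[3][0] * r.m[0][j]
                    + a.m[3][1] * r.m[1][j]
                    + a.m[3][2] * r.m[2][j]);
    r.m[3][3] = 1.0f;
    return r;
}

For the matrices above, component j of the inverse's translation row comes out as -dot(P, axis_j), so inverting camera2world produces exactly the dot products asked about in post 1.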
4. I thought I was using everything with a right-handed coordinate system. I understand right-handed is the same as holding my right hand with the thumb pointing right, the index finger pointing up, and the middle finger pointing out of my computer screen, am I right?

Now I think I don't fully understand the relation between handedness and the camera setup. Could you explain a little bit more? What confuses me is figure 8 in this tutorial: http://www.scratchapixel.com/lessons/3d-basic-rendering/computing-pixel-coordinates-of-3d-point/mathematics-computing-2d-coordinates-of-3d-points. There you can see the camera local coordinate system aligned with the world coordinate system, both right-handed. So, in that same picture, what would be the "forward" vector? Isn't it the blue vector? Or is it the vector in the opposite direction of the blue one? Perhaps understanding precisely what the image is trying to explain would shed some light on my learning.
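One quick sanity check for handedness in code (a tiny sketch of my own, not from the tutorial): cross the x and y basis vectors and see whether the result is +z. In a right-handed system where the camera looks down -z, as OpenGL and, if I read it correctly, that tutorial assume, "forward" is then the direction opposite the blue +z axis:

#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

int main() {
    Vec3 x = {1, 0, 0}, y = {0, 1, 0};
    Vec3 z = cross(x, y);   // (0, 0, 1): cross(x, y) = +z means right-handed
    std::printf("%g %g %g\n", z.x, z.y, z.z);
    return 0;
}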
5. I have been trying to set up a simple perspective camera to visualize a triangle with coords (-0.5, -0.5), (0.5, -0.5), (0.0, 0.5). My code looks as follows:

int main()
{
    // Init GLFW
    SimplePerspectiveCamera camera(Vector(0.5f, 0.0f, 2.0f),
                                   Vector(0.0f, 0.0f, 0.0f),
                                   Vector(0.0f, 1.0f, 0.0f),
                                   67.0f, aspectRatio, 0.1f, 100.0f);
    // Render geometry
    return 0;
}

The constructor of the camera:

SimplePerspectiveCamera::SimplePerspectiveCamera(Vector position, Vector lookAt, Vector up,
                                                 float hfov, float aspectRatio,
                                                 float nearz, float farz)
{
    m_position = position;
    m_lookAt = lookAt;
    m_up = up;
    m_hfov = hfov;
    m_aspectRatio = aspectRatio;
    m_nearz = nearz;
    m_farz = farz;
    updateCameraMatrix();
    updatePerspectiveMatrix();
}

and the updateCameraMatrix and updatePerspectiveMatrix:

void SimplePerspectiveCamera::updateCameraMatrix()
{
    Vector forward = normalize(m_lookAt - m_position);
    Vector right = normalize(cross(forward, m_up));
    m_up = cross(right, forward);
    Matrix44 T( 1.0f, 0.0f, 0.0f, 0.0f,
                0.0f, 1.0f, 0.0f, 0.0f,
                0.0f, 0.0f, 1.0f, 0.0f,
               -m_position.x, -m_position.y, -m_position.z, 1.0f);
    Matrix44 R( right.x,    right.y,    right.z,    0.0f,
                m_up.x,     m_up.y,     m_up.z,     0.0f,
               -forward.x, -forward.y, -forward.z,  0.0f,
                0.0f,       0.0f,       0.0f,       1.0f);
    cameraMatrix = T * R;
}

void SimplePerspectiveCamera::updatePerspectiveMatrix()
{
    float fovRadians = Converter::degrees2radians(m_hfov);
    float range = tan(fovRadians * 0.5f) * m_nearz;
    float Sx = m_nearz / (range * m_aspectRatio);
    float Sy = m_nearz / range;
    float Sz = -(m_farz + m_nearz) / (m_farz - m_nearz);
    float Pz = -(2.0f * m_farz * m_nearz) / (m_farz - m_nearz);
    perspectiveMatrix = Matrix44(Sx,   0.0f, 0.0f,  0.0f,
                                 0.0f, Sy,   0.0f,  0.0f,
                                 0.0f, 0.0f, Sz,   -1.0f,
                                 0.0f, 0.0f, Pz,    0.0f);
}

Now, when creating the camera, if I set its position to (0.5, 0.0, 2.0), the result I get is what the MoveCamRight image shows, but I would expect the result to be what MoveCamLeft shows. I hope someone can explain what I am missing here.
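A hedged observation rather than a definitive answer: with the row-vector convention implied by cameraMatrix = T * R (translate first, then rotate), the rotation part of a world-to-camera matrix needs the camera basis vectors in its columns; with them in rows, as above, it applies the camera-to-world rotation instead, which mirrors the apparent camera motion. A sketch of the transposed layout, using the same Matrix44 constructor and locals as updateCameraMatrix:

// Sketch: world-to-camera rotation for row vectors (v' = v * M).
// Basis vectors in columns, so v * R = (dot(v, right), dot(v, m_up), dot(v, -forward)).
Matrix44 R( right.x, m_up.x, -forward.x, 0.0f,
            right.y, m_up.y, -forward.y, 0.0f,
            right.z, m_up.z, -forward.z, 0.0f,
            0.0f,    0.0f,    0.0f,      1.0f);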
6. The parameters are the rest density, the mass of the particles, the number of particles, and the support of the kernel. If I want to simulate water, how should I set these up? The kernels are the common ones, I guess: the poly6 kernel 4/(π*h^8)*(h^2-r^2)^3 for density estimation, and the spiky kernel gradient -30/(π*h^5)*(h-r)^2*r.normalized() for pressure.

In my simulation I have an area = 4 m^2, where I want to simulate a fluid with density = 1000 kg/m^2. These parameters give a total mass M = 4000 kg which, with 50 particles, gives a particle mass m = 80 kg.

I tried to place my particles in a square shape to simulate the dam break, but at the beginning of the simulation the particles fly apart; they do not fall together until they hit the ground. I set the initial velocity to 0. Gravity pulls down, and the interaction force between particles is where I believe the problem is.

If I reduce the mass of the particles, particle clustering appears, which is not desirable; I want the particles to be tightly packed but with a good structure. If I increase the support of the kernel, the particles explode even more. So far the least bad results are achieved with kernel support h = 0.2 and m = 80. Where am I missing something?
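For concreteness, here is a minimal sketch of those two 2D kernels as quoted above (my own helper names, not code from the simulation). Mixing up the 2D and 3D normalization factors is a classic source of exploding particles, so it is worth pinning them down in one place:

#include <cmath>

const float PI = 3.14159265358979f;

// 2D poly6 kernel for density estimation: 4/(pi*h^8) * (h^2 - r^2)^3.
float poly6_2d(float r, float h) {
    if (r >= h) return 0.0f;
    float d = h*h - r*r;
    return 4.0f / (PI * std::pow(h, 8)) * d * d * d;
}

// Magnitude of the 2D spiky kernel gradient for pressure:
// -30/(pi*h^5) * (h - r)^2, applied along the normalized direction r_hat.
float spikyGradMag_2d(float r, float h) {
    if (r >= h || r <= 0.0f) return 0.0f;
    float d = h - r;
    return -30.0f / (PI * std::pow(h, 5)) * d * d;
}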
7. After some debugging, head scratching, and thinking, I managed to get a very basic fluid simulation with incompressibility working fine. Now, there are a lot of parameters in an SPH simulation that I don't know how to tune; for now I would like to know how I can make my particles sit closer to each other.

I tried increasing the density, but the simulation looks exactly the same, i.e. particles well separated; it did, however, smooth the interaction between particles, which I'm glad to have found.

So, what parameters affect how close/tight the particles should be?
  8. BRabbit27

    Particle fluid simulation

After some more reading and thinking, I guess I found something that could help. Basically the poly6 kernel has the form

W(|r|, h) = k * (h^2 - r^2)^3

where I have to determine the value of k in order to have my kernel normalized for the desired value of the support h. Am I right on this? So if I want a smoothing kernel with support in the range [-0.1, 0.1], I just need to integrate this, right? Now, in many of the papers proposing smoothing kernels, the normalization factor k always depends on π and h. Can someone explain to me how to get a normalization factor like that? Because I can actually compute the integral of W as it is and get the value of the factor.
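As a worked check (my own derivation, assuming the 2D case so it matches the 4/(π h^8) factor quoted earlier): requiring the kernel to integrate to 1 over its circular support produces exactly a π- and h-dependent k, because the angular integration contributes the 2π:

\int_{\lVert \mathbf{r} \rVert \le h} W \, dA
  = \int_0^{2\pi} \int_0^h k \,(h^2 - r^2)^3 \, r \, dr \, d\theta
  = 2\pi k \cdot \frac{h^8}{8}
  = \frac{\pi k h^8}{4} = 1
  \quad\Longrightarrow\quad
  k = \frac{4}{\pi h^8}

In 3D the volume element is r^2 sin(θ) dr dθ dφ instead of r dr dθ, which changes both the power of h and the constant; that is why 2D and 3D papers quote different normalization factors for the same kernel shape.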
  9. BRabbit27

    Position Based Dynamics collision

After some more thinking and reading your answer: so basically, for a single particle I will do something like

while iter < solverIterations do
    take constraint C1, compute DeltaP
    update the particle's tmpPosition
    take constraint C2, compute DeltaP
    update the particle's tmpPosition
end while

and, as you said, after a certain number of iterations the system will have the best approximation of tmpPosition that satisfies both constraints, right? BTW, thanks for the video, I'll watch it right away!
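A minimal sketch of that loop in code (a hypothetical Constraint interface of my own, not from the paper), showing the Gauss-Seidel flavor: each constraint immediately sees the correction applied by the previous one:

#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical interface: a constraint returns the position correction
// DeltaP it wants for the current temporary position.
struct Constraint {
    virtual Vec3 deltaP(const Vec3& tmpPosition) const = 0;
    virtual ~Constraint() = default;
};

// Project all constraints repeatedly; later constraints work with the
// corrections already applied by earlier ones (Gauss-Seidel style).
void solve(Vec3& tmpPosition,
           const std::vector<const Constraint*>& constraints,
           int solverIterations) {
    for (int iter = 0; iter < solverIterations; ++iter) {
        for (const Constraint* c : constraints) {
            Vec3 dp = c->deltaP(tmpPosition);
            tmpPosition.x += dp.x;
            tmpPosition.y += dp.y;
            tmpPosition.z += dp.z;
        }
    }
}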
  10. BRabbit27

    Position Based Dynamics collision

After some reading and thinking I figured out the idea behind those lines (I hope). Since we have a particle at position xi, and after updating the velocity we predicted the new position pi, we want to know whether there is a collision and, if so, move the particle back to the collision point qc and then use position-based dynamics to compute a new valid position for the particle.

To check whether the particle collides when moving from xi to pi, we check if there is an intersection between the ray r = xi + t*(pi - xi) and the plane (p - p0) dot n = 0, where t tells us whether there is a collision (0 <= t <= 1) or not.

Once we get the collision point qc we can create an inequality constraint C(p) = (p - qc) dot nc, where nc is the same as the normal of the plane. With this constraint I can get the position correction DeltaP. Now the only question remaining is:

- If I have two different constraints, each will give a DeltaP; how do I mix them in order to get the final corrected position?
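A minimal sketch of that ray-plane test (my own helper names): substituting r(t) = xi + t*(pi - xi) into the plane equation and solving for t, then taking qc when the hit falls within this step's motion:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Intersect the segment xi -> pi with the plane (p - p0).n = 0.
// Substituting r(t) = xi + t*(pi - xi) gives t = (p0 - xi).n / (pi - xi).n.
// Returns true and fills qc when the hit lies within the motion (0 <= t <= 1).
bool segmentPlane(Vec3 xi, Vec3 pi, Vec3 p0, Vec3 n, Vec3& qc) {
    Vec3 d = sub(pi, xi);
    float denom = dot(d, n);
    if (std::fabs(denom) < 1e-8f) return false;   // moving parallel to the plane
    float t = dot(sub(p0, xi), n) / denom;
    if (t < 0.0f || t > 1.0f) return false;
    qc = add(xi, scale(d, t));
    return true;
}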
11. I have read about position-based dynamics in this paper, but I got a little confused about how I can use it to implement collision detection and response. I could follow the math for the case of a distance constraint, but it got confusing when trying to apply the same principle to collision.

The paper mentions two types of collision detection: continuous and static. The former is when at time t the particle is in the valid zone and at time t+1 it is not. The latter is when the position is invalid at both times, so continuous collision detection failed.

To check whether a collision occurred I can compute the distance from the particle's position pi to the plane, with the equation (po - pi) . n = d, where po is a point in the plane and n is the normal of the plane. So far so good.

If I got it correctly, I can add a unilateral constraint as follows: C(pi) = (po - pi) . n >= 0. However, the paper mentions two conditions:

- If particle pi goes from a valid to an invalid position, compute qc and add a unilateral constraint C(p) = (p - qc) . nc >= 0.
- If particle pi has been in an invalid position, compute qs (the closest surface point to pi) and add a unilateral constraint C(p) = (p - qs) . ns >= 0.

So, my questions are:

1) In the previous constraints, what is p? Is it the point in the plane or the particle's position? Since I know I need a point in the plane to compute the distance from the particle to the plane, I would say it is a point in the plane; nevertheless, the constraints are applied to the positions of the particles, so my previous statement makes no sense.

2) If I have to compute qc (or qs), isn't the constraint already solved? Why would I need to create a constraint when I already have the valid point?

3) For any constraint I want my system to be subject to, is the method described like a recipe? I mean, do I just take the constraint, compute the gradient, compute lambda, and then solve iteratively?
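Regarding question 3, the projection step really is recipe-like and can be written as a worked equation (my own summary of the usual single-particle PBD step, not a quote from the paper). For a constraint C(p) with gradient ∇C:

\Delta p = -\frac{C(p)}{\lVert \nabla_p C \rVert^2} \, \nabla_p C
\qquad\text{and for } C(p) = (p - q_c) \cdot n_c \text{ with } \lVert n_c \rVert = 1:\qquad
\nabla_p C = n_c, \quad \Delta p = -\big((p - q_c) \cdot n_c\big)\, n_c

applied only while C(p) < 0, since the constraint is unilateral. That also bears on question 2: qc alone is not enough, because other constraints keep moving p during the solver iterations, and this constraint has to keep pushing p back to the valid side on each pass.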
  12. BRabbit27

    Particle fluid simulation

@h4tt3n yes... I guess what you say actually makes a lot of sense. The thing is that I was reading through some SPH papers, and in those, one of the properties of the smoothing kernels is that they must be normalized. So, I have two different kernels (which you can see in the paper referenced above) with different normalization factors; therefore, I was computing the support h for each of them. What I can actually do is set the same kernel support for both and then compute the correct normalization factor for each, am I right on this?
  13. BRabbit27

    Particle fluid simulation

Thank you very much for the links, I will take a look at them! :)
  14. BRabbit27

    Particle fluid simulation

Thanks for the advice! So when you say you set "the grid equal to the distance supported by the kernel", in my case would I set the neighborhood radius of a particle equal to the support of the kernel? However, I have one kernel for the density estimation and one kernel for the pressure... in this case, how do I set the neighborhood radius of a particle?
  15. BRabbit27

    Particle fluid simulation

And in that case, how do you set the search radius for the nearest particles in your system? Right now I am using nanoflann (a KD-tree) to look for the nearest neighbors, where the search radius is the same as the support of the kernel I use to compute the density.

(I know KD-trees are not the best way; soon I will change to some hashing algorithm, though I need to do some research on that.)
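For when you get to the hashing step, here is a minimal sketch of a uniform spatial hash (my own illustrative code, not nanoflann): with the cell size equal to the kernel support h, every neighbor within h of a particle lives either in its own cell or in one of the eight adjacent cells:

#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Sketch of a 2D spatial hash with cell size = kernel support h.
struct SpatialHash {
    float cellSize;                                      // set this to h
    std::unordered_map<std::uint64_t, std::vector<int>> cells;

    std::uint64_t key(int cx, int cy) const {
        return (std::uint64_t(std::uint32_t(cx)) << 32) | std::uint32_t(cy);
    }
    void insert(int particle, float x, float y) {
        int cx = int(std::floor(x / cellSize));
        int cy = int(std::floor(y / cellSize));
        cells[key(cx, cy)].push_back(particle);
    }
    // Candidate neighbors within cellSize: the 3x3 block of cells around (x, y).
    std::vector<int> query(float x, float y) const {
        std::vector<int> result;
        int cx = int(std::floor(x / cellSize));
        int cy = int(std::floor(y / cellSize));
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy) {
                auto it = cells.find(key(cx + dx, cy + dy));
                if (it != cells.end())
                    result.insert(result.end(), it->second.begin(), it->second.end());
            }
        return result;
    }
};

The query over-approximates the radius (it returns everything in the 3x3 block), so callers still do an exact distance check against h on the candidates.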