3D: Need help with implementing rotation tool user experience


Hi,

I am trying to implement the correct user experience for the rotation tool in my game engine. I want it to visually behave like the equivalent tool in Maya or Unity.

When I rotate an object, the rotation tool should rotate with it, BUT its axes should always stay on the near side of the camera plane and never cross over to the far side of the tool, as shown in the first 2 attached screenshots from Unity (Rotation_Unity_1.png and Rotation_Unity_2.png). You can see there that the X axis (red) wraps around to the upper part of the tool instead of continuing to the back. The same applies to the Y and Z axes.
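To make the behaviour I am after more concrete, here is a minimal sketch of the per-axis flip test I believe Unity applies. It uses System.Numerics types instead of my engine's Vector3F, so take the names as placeholders rather than my actual code:

// Sketch only: negate an axis handle when it points away from the camera,
// so the handle always stays on the near side of the tool.
using System.Numerics;

static class GizmoAxisFlip
{
    public static Vector3 FlipTowardCamera(Vector3 objectAxis, Vector3 gizmoPosition, Vector3 cameraPosition)
    {
        Vector3 toCamera = Vector3.Normalize(cameraPosition - gizmoPosition);
        // Keep the axis if it already faces the camera, otherwise draw its opposite end.
        return Vector3.Dot(objectAxis, toCamera) >= 0 ? objectAxis : -objectAxis;
    }
}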

Currently I have implemented something similar, but my code has a huge limitation: it gives me the correct quaternion for the rotation, BUT to keep the axes aligned correctly I have to overwrite my existing tool rotation every frame, so I cannot accumulate rotation and I cannot implement the correct visual experience for the tool (see the next 2 screenshots, Static_rotation_1.png and Static_rotation_2.png). As you can see, there is no difference in the visual representation of the tool even though I rotate the object itself.

Here is the code I am using currently:

/// <summary>
/// Calculate the quaternion which rotates an object at one position to face another position.
/// </summary>
/// <param name="objectPosition">objectPosition is your object's position</param>
/// <param name="targetPosition">targetPosition is the position of the point to face</param>
/// <param name="upVector">upVector is the nominal "up" vector (typically Vector3.Y)</param>
/// <remarks>Note: this does not work when objectPosition is straight below or straight above targetPosition</remarks>
/// <returns>The orientation that makes the object face the target.</returns>
public static QuaternionF RotateToFace(ref Vector3F objectPosition, ref Vector3F targetPosition, ref Vector3F upVector)
{
   // Build an orthonormal basis whose "backward" axis points from the target to the object.
   Vector3F D = (objectPosition - targetPosition);
   Vector3F right = Vector3F.Normalize(Vector3F.Cross(upVector, D));
   Vector3F backward = Vector3F.Normalize(Vector3F.Cross(right, upVector));
   Vector3F up = Vector3F.Cross(backward, right);
   Matrix4x4F rotationMatrix = new Matrix4x4F(
      right.X, right.Y, right.Z, 0,
      up.X, up.Y, up.Z, 0,
      backward.X, backward.Y, backward.Z, 0,
      0, 0, 0, 1);
   QuaternionF orientation;
   QuaternionF.RotationMatrix(ref rotationMatrix, out orientation);
   return orientation;
}
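As a side note, I believe this is essentially the same basis that Matrix4x4.CreateWorld builds in System.Numerics (with the caveat that CreateWorld keeps the forward direction exact and adjusts up, while my version keeps upVector exact), so a standalone approximation for experimenting outside the engine might look like this (stock .NET types, not my engine's API):

// Standalone sketch using System.Numerics; only an approximation of RotateToFace above.
using System.Numerics;

static class FaceRotationSketch
{
    public static Quaternion RotateToFace(Vector3 objectPosition, Vector3 targetPosition, Vector3 upVector)
    {
        // Forward points from the object towards the point it should face.
        Vector3 forward = Vector3.Normalize(targetPosition - objectPosition);
        // CreateWorld produces a right/up/backward basis similar to the hand-rolled matrix above.
        Matrix4x4 world = Matrix4x4.CreateWorld(Vector3.Zero, forward, upVector);
        return Quaternion.CreateFromRotationMatrix(world);
    }
}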

And I am using the following hack to rotate all the axes correctly and keep them at 90 degrees to each other:

private void TransformRotationTool(Entity current, Camera camera)
{
    var m = current.Transform.GetRotationMatrix();

    if (current.Name == "RightAxisManipulator")
    {
        // Build a camera-facing orientation constrained by the object's Right axis,
        // then zero the other two vector components and renormalize so only the
        // twist about X remains.
        var rot = QuaternionF.RotateToFace(current.GetRelativePosition(camera), Vector3F.Zero, m.Right);
        rot.Z = rot.Y = 0;
        rot.Normalize();
        current.Transform.SetRotation(rot);
    }

    if (current.Name == "UpAxisManipulator")
    {
        // Same trick constrained by the object's Up axis: keep only the twist about Y.
        var rot = QuaternionF.RotateToFace(current.GetRelativePosition(camera), Vector3F.Zero, m.Up);
        rot.X = rot.Z = 0;
        rot.Normalize();
        current.Transform.SetRotation(rot);
    }

    if (current.Name == "ForwardAxisManipulator")
    {
        // And constrained by the Forward axis: keep only the twist about Z.
        var rot = QuaternionF.RotateToFace(current.GetRelativePosition(camera), Vector3F.Zero, m.Forward);
        rot.X = rot.Y = 0;
        rot.Normalize();
        current.Transform.SetRotation(rot);
    }

    if (current.Name == "CurrentViewManipulator" || current.Name == "CurrentViewCircle")
    {
        // The screen-space circle is simply billboarded towards the camera.
        var billboardMatrix = Matrix4x4F.BillboardLH(
            current.GetRelativePosition(camera),
            Vector3F.Zero,
            camera.Up,
            camera.Forward);
        var rot = MathHelper.GetRotationFromMatrix(billboardMatrix);
        current.Transform.SetRotation(rot);
    }
}

As you can see, I am zeroing 2 of the 3 vector components of the quaternion and renormalizing it to keep the axes perpendicular to each other.
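If I understand my own hack correctly, zeroing two of the vector components and renormalizing is a crude form of swing-twist decomposition: it keeps only the twist of the rotation about one coordinate axis. Written out explicitly (again with System.Numerics types as placeholders, not my engine's API), the same idea looks like this:

// Sketch of extracting the "twist" part of a rotation about a given (normalized) axis.
using System.Numerics;

static class SwingTwist
{
    public static Quaternion ExtractTwist(Quaternion q, Vector3 axis)
    {
        // Project the quaternion's vector part onto the axis, keep w, then renormalize.
        Vector3 qv = new Vector3(q.X, q.Y, q.Z);
        Vector3 projected = Vector3.Dot(qv, axis) * axis;
        var twist = new Quaternion(projected.X, projected.Y, projected.Z, q.W);
        // Degenerate case: the rotation is ~180 degrees about an axis perpendicular to 'axis'.
        if (twist.LengthSquared() < 1e-9f)
            return Quaternion.Identity;
        return Quaternion.Normalize(twist);
    }
}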

And when I try to accumulate rotation on top of this, I get a completely incorrect result.
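To be clear about what I mean by accumulating rotation: composing the per-frame delta onto the stored orientation by quaternion multiplication and renormalizing, roughly like the sketch below (System.Numerics again, purely illustrative):

// Illustrative only: compose a per-frame delta onto the stored orientation.
using System.Numerics;

static class RotationAccumulation
{
    public static Quaternion Accumulate(Quaternion current, Quaternion delta)
    {
        // Multiplication order matters (it decides whether the delta is applied in
        // local or world space); if the result looks wrong, swapping the operands
        // is the first thing to test.
        Quaternion result = current * delta;
        // Renormalize so floating-point drift does not creep in over many frames.
        return Quaternion.Normalize(result);
    }
}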

On the last image (Dynamic_rotation_1.png) you can see what happens to my tool when I try to apply the rotate-to-face orientation combined with some basic rotation.

 

This issue is driving me crazy. 

Could anyone help me implement the correct behaviour?

 

 

Rotation_Unity_1.png

Rotation_Unity_2.png

Static_rotation_1.png

Static_rotation_2.png

Dynamic_rotation_1.png
