caymanbruce

Algorithm: What is the mathematical explanation behind this implementation of simulating eyeballs rolling in eyes?

Recommended Posts

caymanbruce

I want to simulate eyeballs rolling in eyes, and I have found and forked this implementation on codepen.io. It is exactly what I need.

 

It's a smart approach, but I don't understand why it needs to work this way. Why does it use `ratioX` and `ratioY`, which are calculated by dividing `mouseX` and `mouseY` by their sum? Is there a simpler or even cleverer way to do a similar simulation?
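For reference, my reading of the core idea boils down to the sketch below (with the eye centred at the origin so that mouseX/mouseY double as the offset to the cursor; the names and details are my own, not the pen's actual code):

    // Sketch of the ratio idea (not the pen's actual code): split the mouse
    // offset into x and y shares of the total, and move the pupil by each share.
    const maxTravel = 30; // hypothetical limit on how far the pupil may move

    function movePupil(mouseX: number, mouseY: number): { x: number; y: number } {
      // The pen reportedly divides by the plain sum; absolute values keep this
      // sketch working for offsets in any direction.
      const sum = Math.abs(mouseX) + Math.abs(mouseY) || 1; // avoid division by zero
      const ratioX = mouseX / sum; // share of the total offset along x
      const ratioY = mouseY / sum; // share along y; |ratioX| + |ratioY| === 1
      // Both ratios are bounded, so the pupil can never leave the eye.
      return { x: ratioX * maxTravel, y: ratioY * maxTravel };
    }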


missionctrl

I think they are using the x/y ratio as a cheap way to fake trigonometry. If the mouse is far away from the eyes, they should rotate more slowly; if the mouse is close to the eyes, they rotate faster. That would happen naturally with sin/cos, so I think what they have is already pretty simple and clever.
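For comparison, the "real" trigonometry version would be something along these lines (just a sketch, reusing the same made-up maxTravel limit as the sketch in the question):

    // Sketch of the sin/cos version: aim the pupil exactly at the cursor.
    const maxTravel = 30; // same hypothetical pupil travel as above

    function movePupil(mouseX: number, mouseY: number): { x: number; y: number } {
      const angle = Math.atan2(mouseY, mouseX); // true angle from the eye to the cursor
      return { x: Math.cos(angle) * maxTravel, y: Math.sin(angle) * maxTravel };
    }

In this version the pupil offset always has the same length, so it traces a circle, while the ratio sketch traces a diamond; for small cartoon eyes the difference is hard to notice, which is why the cheaper version holds up.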




  • Similar Content

    • By Outliner
      In ordinary marching squares, we're trying to find isolines on a height map for some particular height. It's a delightfully simple algorithm because we can use a look-up table to determine the structure of edges and vertices within each grid cell based on whether each corner is above or below the desired height. (A small sketch of this look-up idea appears at the end of this list.)
      In multi-material marching squares, each point on the grid has some proportion of several materials and we're trying to draw the boundaries between the areas where each material is dominant. This is less simple, since there are more than two options for each corner of each cell; at worst each corner could have a distinct dominant material. Even so, it's not too hard to approach this problem with a look-up table based on the corners of each cell.
      Finally, we have constrained multi-material marching squares, which is much like other constrained triangulation problems. In addition to the multi-material grid, we now have pre-defined boundary edges in some of the grid cells, and the multi-material marching squares must respect those pre-defined edges as if they accurately represent the boundary between two materials. I'm finding it hard to wrap my head around this problem. It seems that a look-up table will be of no use because the pre-defined edges create too many possibilities, even if those edges are restricted to the kinds of edges that marching squares would naturally produce, but doing this without a look-up table also seems daunting.
      Motivation: In principle the goal seems quite simple. Take a 2D grid and use it to define terrain as a height map and as a material map that will form the foundation for a procedurally constructed mesh. Aside from the usual hills and valleys of a plain height map, the multi-material aspect of the grid allows us to define swamp, forest, and desert regions on the map and apply particular procedural meshing to each. In addition to that, we want vertical cliffs that get their own special meshing and define the region boundaries. The cliffs are the constraints of constrained multi-material marching squares because when there is a cliff running through a grid cell, the cliff should always act as the boundary if the material at the top of the cliff is different from the material at the bottom, even if marching squares would have naturally put the boundary somewhere else.
    • By Luigi Lescarini
      Hi,
      I'm trying to build an effective AI for the Buraco card game (2 and 4 players).
      I want to avoid the heuristic approach: I'm not an expert at the game, and the last games I developed that way gave mediocre results.
      I know the Monte Carlo tree search algorithm; I've used it for a checkers game with decent results, but I'm really confused by the recent success of other machine learning options.
      For example, I found this answer on Stack Overflow that really puzzles me; it says:
      "So again: build a bot which can play against itself. One common basis is a function Q(S,a) which assigns to any game state and possible action of the player a value -- this is called Q-learning. And this function is often implemented as a neural network ... although I would think it does not need to be that sophisticated here.”
      I'm very new to machine learning (this should be reinforcement learning, right?) and I only know a little Q-learning, but it sounds like a great idea: I take my bot, make it play against itself, and then it learns from its results... The problem is that I have no idea how to start! (Nor whether this approach is a good fit at all.)
      Could you help me find the right direction?
      Is the Q-learning strategy a good one for my domain?
      Is Monte Carlo tree search still the best option for me?
      Would it work well in a 4-player game like Buraco (2 opponents and 1 teammate)?
      Is there any other method that I'm overlooking?
      PS: My goal is to develop an enjoyable AI for a casual application; I could even consider letting the AI cheat, for example by looking at the players' hands or the deck. Even with this, ehm, permission, I don't think I would be able to build a good heuristic.
      Thank you guys for your help!
    • By ramirofages
      Hi, I came across this UDK article:
      https://docs.unrealengine.com/udk/Three/VolumetricLightbeamTutorial.html
      that somewhat teaches you how to make a volumetric light beam using a cone. I'm not using Unreal Engine, so I just wanted to understand how the technique works.
      What I'm having problems with is how they calculate the X position of the UV coordinate. They mention the use of a "reflection vector" that, according to the documentation (https://docs.unrealengine.com/latest/INT/Engine/Rendering/Materials/ExpressionReference/Vector/#reflectionvectorws ), just reflects the camera direction across the surface normal in world space (I assume from the WS initials).
      So in my pixel shader I tried doing something like this:
      float3 reflected_view = reflect(view_dir, vertex_normal);
      tex2D(falloff_texture, float2(reflected_view.x * 0.5 + 0.5, uv.y));
      Here view_dir is the direction that points from the camera to the point in world space, and vertex_normal is also in world space. Unfortunately it's not working as expected, probably because the calculations are being made in world space. I moved them to view space, but there is a problem when you move the camera horizontally that makes the coordinates "move" as well. The problem can be seen below:

      Notice the white part in the second image, coming from the left side.
      Surprisingly, I couldn't find as much information about this technique on the internet as I would have liked, so I decided to come here for help!
    • By Outliner
      Consider how one makes terrain using marching cubes. By having a grid of floats we can represent a continuous field that marching cubes will interpolate and turn into a nice smooth isosurface for the player to walk around on. This is easy and excellent for creating mountains and valleys and so on, but what if we want more variety in our game? A game is not normally made of just grass and sky. Maybe some places should be sand, or water, or road. How could that be worked into the mesh that we're getting from marching cubes?
      The obvious approach seems to be to have multiple fields, so each point on the grid has a certain level of sand, soil, rock, water, and so on. Then we modify the marching cubes algorithm to look for transitions between materials, so it puts a surface between areas of mostly one material and areas that are mostly other materials. We'd also want to keep track of when these surfaces touch the air, because that's the only time when we'd actually want to triangulate and render the surfaces.
      Suddenly the delightfully simple marching cubes algorithm is looking a lot less obvious. Has anything like this ever been done? Does anyone have any tips? Is this the right approach?
      Edit: Embarrassing mistake! I didn't think of phrasing the problem as "multiple materials" until I went to post this question, but now that I have I see there are plentiful Google results for marching cubes with multiple materials. I'm still interested in any tips and advice, but now I have other resources to help with this problem.
      From the Google results, this paper looks especially interesting: Automatic 3D Mesh Generation for A Domain with Multiple Materials
    • By 51mon
      Hey
      I want to try shading particles by computing a "small" number of samples, e.g. 10, in the VS. I only need to compute the intensity of the light, so essentially it's a single piece of data in 2 dimensions.
      Now I want to compress this data, pass it on to the PS, and decompress it there (the particle is a single quad and the data is passed through interpolators). I will accept a certain amount of error as long as there are no hard edges, i.e. the result stays blurred.
      The compressed data has to be small, and compression/decompression has to be fast. Does anyone know of a good way to do this?
      Maybe I could do something Fourier-based, but I'm not sure what basis functions to use.
       
      Thanks
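Regarding the look-up table mentioned in Outliner's first post above, the ordinary single-material case boils down to something like this sketch (the corner and edge numbering are arbitrary choices of mine; real implementations differ in the details):

    // Ordinary marching squares: classify one cell by which corners are above
    // the iso value, then look up which cell edges the isoline crosses.
    // Corner bits: 0 = bottom-left, 1 = bottom-right, 2 = top-right, 3 = top-left.
    // Edge ids:    0 = bottom,      1 = right,        2 = top,       3 = left.
    const EDGE_TABLE: number[][] = [
      [],           // 0000: all corners below, no isoline
      [3, 0],       // 0001: only bottom-left above
      [0, 1],       // 0010: only bottom-right above
      [3, 1],       // 0011: both bottom corners above
      [1, 2],       // 0100: only top-right above
      [3, 0, 1, 2], // 0101: saddle case, needs disambiguation
      [0, 2],       // 0110: both right corners above
      [3, 2],       // 0111: all but top-left above
      [2, 3],       // 1000: only top-left above
      [2, 0],       // 1001: both left corners above
      [0, 1, 2, 3], // 1010: the other saddle case
      [2, 1],       // 1011: all but top-right above
      [1, 3],       // 1100: both top corners above
      [1, 0],       // 1101: all but bottom-right above
      [0, 3],       // 1110: all but bottom-left above
      [],           // 1111: all corners above, no isoline
    ];

    // corners = heights at [bottom-left, bottom-right, top-right, top-left]
    function cellEdges(corners: number[], iso: number): number[] {
      let caseIndex = 0;
      for (let i = 0; i < 4; i++) {
        if (corners[i] >= iso) caseIndex |= 1 << i;
      }
      return EDGE_TABLE[caseIndex]; // edges that the isoline crosses in this cell
    }

The four corners form a 4-bit index, so the table only has 16 entries; the two saddle cases (0101 and 1010) are the ones that need an extra rule, typically by checking the value at the cell centre.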