Showing results for tags 'Linear Algebra'.

Found 9 results

  1. Hey there, I recently came across this: https://github.com/ssloy/tinyraytracer It's a raytracer implemented in a very minimal way. This person uses a method to find intersections between a ray and a sphere that I can't understand. His implementation is as follows:

     bool ray_intersect(const Vec3f &orig, const Vec3f &dir, float &t0) const {
         Vec3f L = center - orig;
         float tca = L*dir;
         float d2 = L*L - tca*tca;
         if (d2 > radius*radius) return false;
         float thc = sqrtf(radius*radius - d2);
         t0 = tca - thc;
         float t1 = tca + thc;
         if (t0 < 0) t0 = t1;
         if (t0 < 0) return false;
         return true;
     }

     This code confuses me, because I've always used the quadratic formula to get the discriminant of the ray-sphere equation. It looks like he is doing the same thing, but perhaps starting from a different formula? If someone has the time, I'd appreciate an explanation or a breakdown of this method.
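     For what it's worth, here is one way to see the connection between the geometric quantities in that snippet and the usual quadratic; this is a sketch of the standard derivation, not something taken from the tinyraytracer author. With ray origin \(O\), unit direction \(\vec{d}\), sphere center \(C\) and radius \(r\): \(\vec{L} = C - O\), \(t_{ca} = \vec{L} \cdot \vec{d}\) is the distance along the ray to the point closest to the center, and \(d^2 = \vec{L} \cdot \vec{L} - t_{ca}^2\) is the squared distance from the center to the ray. If \(d^2 > r^2\) the ray misses; otherwise \(t_{hc} = \sqrt{r^2 - d^2}\) is half the chord length and the hits are at \(t = t_{ca} \mp t_{hc}\). Substituting \(P(t) = O + t\vec{d}\) into \(\lVert P(t) - C \rVert^2 = r^2\) gives the quadratic \(t^2 - 2\,t_{ca}\,t + (\vec{L} \cdot \vec{L} - r^2) = 0\), whose discriminant divided by four is exactly \(r^2 - d^2\), so the geometric form and the quadratic-formula form compute the same roots.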
  2. 8Observer8

    GameDev Books

    Updated 3/27/2019: I created a new section, Math and Physics, and added a link to a nice book: Beginning Math and Physics for Game Programmers. I study how to write my own game engines using modern OpenGL/C# and WebGL/TypeScript. I recommend the book C# Game Programming: For Serious Game Creation. It shows how to write your own game engine with maintainable code using TDD. It is a great book, and not only for game development: it shows how to develop large projects in general. I know that many of you like to write games using engines such as Unity. From this book you will learn the basics of linear algebra, shader math, game physics, and so on. Shader math matters for Unity too, because you need to write shaders for Unity, and HLSL and GLSL are very similar. It really is a great book.

    Behaviour-Driven Development:
      • 2014-10 - BDD in Action: Behavior-Driven Development for the Whole Software Lifecycle - John Ferguson Smart. Source code: https://www.manning.com/books/bdd-in-action

    Test-Driven Development:
      • 2013-12 - The Art of Unit Testing: with Examples in C#, 2nd Edition - Roy Osherove. Source code: https://github.com/royosherove/aout2

    Writing Games:
      • 2010-06 - C# Game Programming: For Serious Game Creation. Source code: 9781435455566.zip (121 MB)
      • 2015-09 - Build Your Own 2D Game Engine and Create Great Web Games Using HTML5, JavaScript, and WebGL - Kelvin Sung, Jebediah Pavleas, Fernando Arnez, Jason Pace. Source code: https://github.com/apress/build-your-own-2d-game-engine
      • 2017-10 - Pro HTML5 Games, 2nd Edition - A.R. Shankar. Source code: https://github.com/apress/pro-html5-games-17
      • 2018-04 - Unity in Action, 2nd Edition - J. Hocking. Source code: https://www.manning.com/books/unity-in-action-second-edition

    Computer Graphics:
      • 2013-07 - WebGL Programming Guide - K. Matsuda, R. Lea. Source code: https://sites.google.com/site/webglbook/
      • 2013-06 - Computer Graphics: Principles and Practice, 3rd Edition - John F. Hughes, Andries van Dam, Morgan McGuire, David F. Sklar, James D. Foley, Steven K. Feiner, Kurt Akeley. Source code: http://cgpp.net/about.xml

    Math and Physics:
      • 2004-04 - Beginning Math and Physics for Game Programmers - Wendy Stahler
      • 2011-06 - Mathematics for 3D Game Programming and Computer Graphics, 3rd Edition - Eric Lengyel
      • 2011-11 - 3D Math Primer for Graphics and Game Development - F. Dunn, I. Parberry
      • 2013-04 - Physics for Game Developers, 2nd Edition - David M. Bourg, Bryan Bywalec
      • 2014-05 - Physics for JavaScript Games, Animation, and Simulations - Adrian Dobre, Dev Ramtal

    Multiplayer:
      • 2015-05 - Multiplayer Game Development with HTML5 - Rodrigo Silveira. Source code: https://www.packtpub.com/code_download/21527
      • 2015-10 - Multiplayer Game Programming - Josh Glazer, Sanjay Madhav. Source code: https://github.com/MultiplayerBook/MultiplayerBook
  3. Original post: Limitless Curiosity

     Out of the various phases of a physics engine, constraint resolution was personally the hardest for me to understand. I needed to read a lot of different papers and articles to fully understand how it works, so I decided to write this article to help me understand it more easily in the future if, for example, I forget how this works. This article tackles the problem by giving an example and then deriving a general formula from it.

     So let us delve into a pretty common scenario, where two of our rigid bodies collide and penetrate each other as depicted below. From that scenario we can formulate: we don't want our rigid bodies to intersect each other, so we construct a constraint requiring the penetration depth to be greater than or equal to zero: \(C: d \geq 0\). This is an inequality constraint; we can transform it into a simpler equality constraint by only solving it when the two bodies are penetrating each other. If two rigid bodies don't collide with each other, we don't need any constraint resolution. So: if d >= 0, do nothing; else if d < 0, solve C: d = 0.

     We could solve this equation by calculating the \(\Delta \vec{p1}, \Delta \vec{p2}, \Delta \vec{r1}\), and \(\Delta \vec{r2}\) that satisfy the constraint above. This is called the position-based method. It satisfies the constraint immediately in the current frame, but it can cause a jittery effect. A more modern and preferable method, used in Box2d, Chipmunk, Bullet and my physics engine, is the impulse-based method. In this method, we derive a velocity constraint equation from the position constraint equation above. We are working in 2D, so angular velocities and the cross products of two vectors are scalars.

     Next, we need to find the \(\Delta V\), or impulse, that satisfies the velocity constraint. This \(\Delta V\) is caused by a force we call the 'constraint force'. The constraint force only pushes along the direction of illegal movement, in our case the penetration normal; we don't want this force to do any work on, contribute to, or restrict any motion in a legal direction. \(\lambda\) is a scalar called the Lagrangian multiplier. To understand why the constraint force acts along the \(J^{T}\) direction (remember that for two 2D bodies J is a 1 x 6 row vector, so \(J^{T}\) is a 6-dimensional column vector), recall the equation of a three-dimensional plane. We can draw a similarity between equation (1) and equation (2), where \(\vec{n}^{T}\) plays the role of J and \(\vec{v}\) the role of V. So we can interpret equation (1) as a plane in this 6-dimensional velocity space and conclude that \(J^{T}\) is the normal of that plane. If a point lies off a plane, the shortest path from the point to the plane is along the normal direction. After we calculate the Lagrangian multiplier, we can get the impulse back from equation (3) and apply it to each rigid body.
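     As a rough sketch of the standard impulse-based contact formulation that equations (1)-(3) refer to (the notation here may differ from the original article's images): for two 2D bodies with linear velocities \(\vec{v}_1, \vec{v}_2\), angular velocities \(w_1, w_2\), contact normal \(\vec{n}\) and contact arms \(\vec{r}_1, \vec{r}_2\),

     \( V = \begin{bmatrix} \vec{v}_1 \\ w_1 \\ \vec{v}_2 \\ w_2 \end{bmatrix}, \qquad J = \begin{bmatrix} -\vec{n}^{T} & -(\vec{r}_1 \times \vec{n}) & \vec{n}^{T} & (\vec{r}_2 \times \vec{n}) \end{bmatrix}, \qquad \dot{C}: JV + b = 0 \)

     \( \lambda = \frac{-(JV + b)}{J M^{-1} J^{T}}, \qquad \Delta V = M^{-1} J^{T} \lambda \)

     where \(M\) is the block-diagonal mass matrix holding the masses and moments of inertia, and the plane analogy in the text is \(\vec{n}^{T}\vec{v} + d = 0\). These expressions correspond to the \(J M^{-1} J^{T}\) ("kNormal"), "normalMass" and lambda computations in the code further down.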
     Baumgarte Stabilization

     Note that solving the velocity constraint does not mean we satisfy the position constraint; by the time we solve the velocity constraint, there is already a violation of the position constraint. We call this violation position drift. What we achieve is stopping the two bodies from penetrating any deeper (the penetration depth stops growing). That may be fine for a slow-moving object, where the drift is not noticeable, but it becomes a problem as objects move faster. The animation below demonstrates what happens when we solve only the velocity constraint.

     So instead of purely solving the velocity constraint, we add a bias term to fix any violation of the position constraint. What should the value of the bias be? As mentioned before, we need this bias to fix positional drift, so we want it to be proportional to the penetration depth. This method is called Baumgarte stabilization, and \(\beta\) is the Baumgarte term. The right value for this term can differ between scenarios; we need to tweak it between 0 and 1 to find the value that keeps our simulation stable.

     Sequential Impulse

     If our world consisted of only two rigid bodies and one contact constraint, the method above would work decently. But in most games there are more than two rigid bodies, and one body can collide and penetrate with two or more other bodies, so we need to satisfy all the contact constraints simultaneously. For a real-time application, solving all of these constraints simultaneously is not feasible. Erin Catto proposed a practical solution called sequential impulse. The idea is similar to projected Gauss-Seidel: we calculate \(\lambda\) and \(\Delta V\) for each constraint one by one, from constraint one to constraint n (n = number of constraints). After we finish iterating through the constraints and calculating \(\Delta V\), we repeat the process from constraint one to constraint n until the specified number of iterations is reached. This algorithm converges to the actual solution: the more we repeat the process, the more accurate the result. In Box2d, Erin Catto set ten as the default number of iterations.

     Another thing to notice is that while satisfying one constraint we might unintentionally satisfy another. Say, for example, that we have two different contact constraints on the same rigid body. When we solve \(\dot{C1}\), we might incidentally make \(\dot{d2} \geq 0\). Remember that equation (5) is a formula for \(\dot{C}: \dot{d} = 0\), not \(\dot{C}: \dot{d} \geq 0\), so we don't need to apply it to \(\dot{C2}\) anymore. We can detect this by looking at the sign of \(\lambda\): if \(\lambda\) is negative, the constraint is already satisfied, and using this negative lambda as an impulse would pull the bodies closer together instead of pushing them apart. It is fine for an individual \(\lambda\) to be negative, but we need to make sure the accumulation of \(\lambda\) is not negative. In each iteration, we add the current lambda to normalImpulseSum, then clamp normalImpulseSum between 0 and positive infinity. The actual Lagrangian multiplier that we use to calculate the impulse is the difference between the new normalImpulseSum and the previous normalImpulseSum.

     Restitution

     Okay, now we have successfully resolved contact penetration in our physics engine. But what about simulating objects that bounce when a collision happens? The property of bouncing on collision is called restitution. The coefficient of restitution, denoted \(C_{r}\), is the ratio of the parting speed after the collision to the closing speed before the collision. The coefficient of restitution only affects the velocity along the normal direction, so we need to take the dot product with the normal vector. Notice that in this specific case \(V_{initial}\) is similar to JV.
     If we look back at our constraint above, we set \(\dot{d}\) to zero because we assumed the object does not bounce back (\(C_{r} = 0\)). So if \(C_{r} \neq 0\), we can modify our constraint so that the desired velocity is \(V_{final}\) instead of 0. We can merge our old bias term with the restitution term to get a new bias value.

     // init constraint
     // Calculate J(M^-1)(J^T). This term is constant so we can calculate this first
     for (int i = 0; i < constraint->numContactPoint; i++) {
         ftContactPointConstraint *pointConstraint = &constraint->pointConstraint[i];

         pointConstraint->r1 = manifold->contactPoints[i].r1 - (bodyA->transform.center + bodyA->centerOfMass);
         pointConstraint->r2 = manifold->contactPoints[i].r2 - (bodyB->transform.center + bodyB->centerOfMass);

         real kNormal = bodyA->inverseMass + bodyB->inverseMass;

         // Calculate r X normal
         real rnA = pointConstraint->r1.cross(constraint->normal);
         real rnB = pointConstraint->r2.cross(constraint->normal);

         // Calculate J(M^-1)(J^T).
         kNormal += (bodyA->inverseMoment * rnA * rnA + bodyB->inverseMoment * rnB * rnB);

         // Save inverse of J(M^-1)(J^T).
         pointConstraint->normalMass = 1 / kNormal;

         pointConstraint->positionBias = m_option.baumgarteCoef * manifold->penetrationDepth;

         ftVector2 vA = bodyA->velocity;
         ftVector2 vB = bodyB->velocity;
         real wA = bodyA->angularVelocity;
         real wB = bodyB->angularVelocity;

         ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) - vA - pointConstraint->r1.invCross(wA));

         // Calculate JV
         real jnV = dv.dot(constraint->normal);
         pointConstraint->restitutionBias = -restitution * (jnV + m_option.restitutionSlop);
     }

     // solve constraint
     while (numIteration > 0) {
         for (int i = 0; i < m_constraintGroup.nConstraint; ++i) {
             ftContactConstraint *constraint = &(m_constraintGroup.constraints[i]);

             int32 bodyIDA = constraint->bodyIDA;
             int32 bodyIDB = constraint->bodyIDB;
             ftVector2 normal = constraint->normal;
             ftVector2 tangent = normal.tangent();

             for (int j = 0; j < constraint->numContactPoint; ++j) {
                 ftContactPointConstraint *pointConstraint = &(constraint->pointConstraint[j]);

                 ftVector2 vA = m_constraintGroup.velocities[bodyIDA];
                 ftVector2 vB = m_constraintGroup.velocities[bodyIDB];
                 real wA = m_constraintGroup.angularVelocities[bodyIDA];
                 real wB = m_constraintGroup.angularVelocities[bodyIDB];
                 // Calculate JV (jnV = JV; dv is the relative velocity at the contact, and JV = d_dot = dv dot normal)
                 ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) - vA - pointConstraint->r1.invCross(wA));
                 real jnV = dv.dot(normal);

                 // Calculate lambda
                 real nLambda = (-jnV + pointConstraint->positionBias / dt + pointConstraint->restitutionBias) * pointConstraint->normalMass;

                 // Add lambda to normalImpulse and clamp
                 real oldAccumI = pointConstraint->nIAcc;
                 pointConstraint->nIAcc += nLambda;
                 if (pointConstraint->nIAcc < 0) {
                     pointConstraint->nIAcc = 0;
                 }

                 // Find real lambda
                 real I = pointConstraint->nIAcc - oldAccumI;

                 // Calculate linear impulse
                 ftVector2 nLinearI = normal * I;

                 // Calculate angular impulse
                 real rnA = pointConstraint->r1.cross(normal);
                 real rnB = pointConstraint->r2.cross(normal);
                 real nAngularIA = rnA * I;
                 real nAngularIB = rnB * I;

                 // Apply linear impulse
                 m_constraintGroup.velocities[bodyIDA] -= constraint->invMassA * nLinearI;
                 m_constraintGroup.velocities[bodyIDB] += constraint->invMassB * nLinearI;

                 // Apply angular impulse
                 m_constraintGroup.angularVelocities[bodyIDA] -= constraint->invMomentA * nAngularIA;
                 m_constraintGroup.angularVelocities[bodyIDB] += constraint->invMomentB * nAngularIB;
             }
         }
         --numIteration;
     }

     General Steps to Solve a Constraint

     In this article we have learned how to handle contact penetration by defining it as a constraint and solving it. But this framework is not only for contact penetration; we can do many more things with constraints, such as implementing hinge joints, pulleys, springs, etc. So this is the step-by-step recipe for constraint resolution:

     1. Define the constraint in the form \(\dot{C}: JV + b = 0\). V is always \(\begin{bmatrix} \vec{v1} \\ w1 \\ \vec{v2} \\ w2\end{bmatrix}\) for every constraint, so we need to find J, the Jacobian matrix, for that specific constraint.
     2. Decide the number of iterations for the sequential impulse.
     3. Find the Lagrangian multiplier by inserting the velocities, masses, and Jacobian matrix into the \(\lambda\) equation.
     4. Do step 3 for each constraint, and repeat the whole pass for the chosen number of iterations.
     5. Clamp the Lagrangian multiplier if needed.

     This marks the end of this article. Feel free to ask if something is still unclear, and please let me know if there are inaccuracies. Thank you for reading.

     NB: Box2d uses sequential impulse, but no longer uses Baumgarte stabilization; it uses full NGS to resolve position drift. Chipmunk still uses Baumgarte stabilization.

     References
       • Allen Chou's post on Constraint Resolution
       • A Unified Framework for Rigid Body Dynamics
       • An Introduction to Physically Based Modeling: Constrained Dynamics
       • Erin Catto's Box2d and presentation on constraint resolution
  4. Context

     I recently started reading the book Artificial Intelligence for Humans, Volume 1: Fundamental Algorithms by Jeff Heaton. After finishing the chapter Normalizing Data, I decided to practice what I had just learnt, to hopefully finally understand 100% of a particular algorithm: equilateral encoding.

     Equilateral Encoding

     The purpose of this algorithm is to normalize N categories by creating an equilateral triangle (more generally, a regular simplex) of N - 1 dimensions. Each vertex of that triangle is a category, and because of the properties of the equilateral triangle it is easy to construct this structure with medians of 1 unit. To select a category, all you need is a location vector of N - 1 dimensions that is ideally inside the equilateral triangle. For example, consider the vertex C and point c: the vertex C is 100% of a specified category, whilst the point c is 0% of that category. The vertices A and B are also 0% of C's category. In other words, the further away a location vector is from a vertex along its median axis, the less it belongs to that vertex's category. Also, if that axis distance is greater than the length of a median (all medians have the same length in an equilateral triangle, which is the range of values in this context), then the location vector definitely does not belong to the specified category. The reason we don't just select a category directly is that this algorithm was originally designed for machine learning: it quantifies qualitative data, shares the blame better, and uses less memory than simpler algorithms, since it uses N - 1 dimensions.

     The Application

     First off, here's the link to my application: https://github.com/thecheeselover/image-recognition-by-equilateral-encoding

     How It Works

     To know how similar two images are, a comparison must be done. Using the equilateral encoding algorithm, a pixel-versus-pixel comparison is the only way to do so, and thus shapes do not count towards the similarity between the two images. What categories should we use to compare images? Colors, of course! Here are the colors I specified as categories:
       • Red
       • Green
       • Blue
       • White
       • Black
     The algorithm selects the closest category for each pixel according to the differences between the two colors and saves its location vector in an array. Then, when both images are loaded, each image's pixels are compared against each other by computing the median distance, as mentioned before, between the two: simply project the first pixel's vector onto the other one and compute the Euclidean distance to get the median distance, then sum all the computed median distances. This way, it would also work if there were location vectors not exactly on a vertex, even though that doesn't happen with the comparison algorithm just described.

     Preliminaries

     Of course, this can be really slow for huge images, so downscale the images to a maximum width and height, 32 pixels in my case. Also, remove the transparency, as it is not a color component in itself. If the two images do not have the same aspect ratio, and therefore not the same downscaled dimensions, then consider using extremely different location vectors, and thus simply different categories, for "empty" pixels compared against non-empty pixels.

     The Code

     The equilateral encoding table: package org.benoitdubreuil.iree.model; import org.benoitdubreuil.iree.utils.MathUtils; /** * The image data for the image recognition by equilateral encoding.
* <p> * See the author Jeff Heaton of this algorithm and his book that contains everything about it : * <a href="https://www.heatonresearch.com/book/aifh-vol1-fundamental.html">Artificial Intelligence for Humans, Vol 1: Fundamental Algorithms</a> * <p> * The base source code for the equilateral encoding algorithm : * <a href="https://github.com/jeffheaton/aifh/blob/master/vol1/java-examples/src/main/java/com/heatonresearch/aifh/normalize/Equilateral.java">Equilateral.java</a> */ public class EquilateralEncodingTable { // The algorithm with only one category does not make sense private static final int MIN_CATEGORIES_FOR_ENCODING = 2; private int m_categoryCount; private int m_dimensionCount; private double m_lowestValue; private double m_highestValue; private double m_valueRange; private double[][] m_categoriesCoordinates; public EquilateralEncodingTable(int categoryCount, double lowestValue, double highestValue) { if (categoryCount < MIN_CATEGORIES_FOR_ENCODING) { throw new ArrayIndexOutOfBoundsException("Not enough categories for equilateral encoding"); } this.m_categoryCount = categoryCount; this.m_dimensionCount = computeDimensionCount(categoryCount); this.m_lowestValue = lowestValue; this.m_highestValue = highestValue; this.m_valueRange = highestValue - lowestValue; this.m_categoriesCoordinates = computeEquilateralEncodingTable(); } /** * Encodes a supplied index and gets the equilaterally encoded coordinates at that index. * * @param equilateralCoordsIndex The index at which the equilaterally encoded coordinates are. * * @return The equilaterally encoded coordinates. */ public double[] encode(int equilateralCoordsIndex) { return m_categoriesCoordinates[equilateralCoordsIndex]; } /** * Decodes the supplied coordinates by finding its closest equilaterally encoded coordinates. * * @param coordinates The coordinates that need to be decoded. It should not be equilateral it doesn't matter, as the goal is simply to find the closest equilaterally encoded * coordinates. * * @return The index at which the closest equilaterally encoded coordinates are. */ public int decode(double[] coordinates) { double closestDistance = Double.POSITIVE_INFINITY; int closestEquilateralCoordsIndex = -1; for (int i = 0; i < m_categoriesCoordinates.length; ++i) { double dist = computeDistance(coordinates, i); if (dist < closestDistance) { closestDistance = dist; closestEquilateralCoordsIndex = i; } } return closestEquilateralCoordsIndex; } /** * Computes the Euclidean distance between the supplied coordinates and the equilaterally encoded coordinates at the supplied index. * * @param coordinates Coordinates of the first n-dimensional vector. * @param equilateralCoordsIndex Index for the equilaterally encoded coordinates. * * @return The Euclidean distance between the two vectors. */ public double computeDistance(double[] coordinates, int equilateralCoordsIndex) { return MathUtils.computeDistance(coordinates, m_categoriesCoordinates[equilateralCoordsIndex]); } /** * Computes the equilateral encoding table, which is used as a look up for table for the data. * * @return The equilateral encoding table. 
*/ private double[][] computeEquilateralEncodingTable() { double negativeReciprocalOfN; double scalingFactor; final double[][] matrix = new double[m_categoryCount][m_dimensionCount]; matrix[0][0] = -1; matrix[1][0] = 1.0; if (m_categoryCount > 2) { for (int dimension = 2; dimension < m_categoryCount; ++dimension) { // scale the matrix so far scalingFactor = dimension; negativeReciprocalOfN = Math.sqrt(scalingFactor * scalingFactor - 1.0) / scalingFactor; for (int coordinate = 0; coordinate < dimension; ++coordinate) { for (int oldDimension = 0; oldDimension < dimension - 1; ++oldDimension) { matrix[coordinate][oldDimension] *= negativeReciprocalOfN; } } scalingFactor = -1.0 / scalingFactor; for (int coordinate = 0; coordinate < dimension; ++coordinate) { matrix[coordinate][dimension - 1] = scalingFactor; } for (int coordinate = 0; coordinate < dimension - 1; ++coordinate) { matrix[dimension][coordinate] = 0.0; } matrix[dimension][dimension - 1] = 1.0; // Scale the matrix for (int row = 0; row < matrix.length; ++row) { for (int col = 0; col < matrix[0].length; ++col) { double min = -1; double max = 1; matrix[row][col] = ((matrix[row][col] - min) / (max - min)) * m_valueRange + m_lowestValue; } } } } return matrix; } public static int computeDimensionCount(int categoryCount) { return categoryCount - 1; } public int getCategoryCount() { return m_categoryCount; } public int getDimensionCount() { return m_dimensionCount; } public double getLowestValue() { return m_lowestValue; } public double getHighestValue() { return m_highestValue; } public double getValueRange() { return m_valueRange; } public double[][] getCategoriesCoordinates() { return m_categoriesCoordinates; } } The mathematics utilities : package org.benoitdubreuil.iree.utils; public final class MathUtils { /** * Computes the Euclidean distance between the supplied vectors. * * @param lhsCoordinates Coordinates of the first n-dimensional vector. * @param rhsCoordinates Coordinates of the second n-dimensional vector. * * @return The Euclidean distance between the two vectors. */ public static double computeDistance(double[] lhsCoordinates, double[] rhsCoordinates) { double result = 0; for (int i = 0; i < rhsCoordinates.length; ++i) { result += Math.pow(lhsCoordinates[i] - rhsCoordinates[i], 2); } return Math.sqrt(result); } /** * Normalizes the supplied vector. * * @param vector The vector to normalize. * * @return The same array, but normalized. */ public static double[] normalizeVector(double[] vector) { double squaredLength = 0; for (int dimension = 0; dimension < vector.length; ++dimension) { squaredLength += vector[dimension] * vector[dimension]; } if (squaredLength != 1.0 && squaredLength != 0.0) { double reciprocalLength = 1.0 / Math.sqrt(squaredLength); for (int dimension = 0; dimension < vector.length; ++dimension) { vector[dimension] *= reciprocalLength; } } return vector; } /** * Negates the vector. * * @param vector The vector to negate. * * @return The same array, but each coordinate negated. */ public static double[] negateVector(double[] vector) { for (int dimension = 0; dimension < vector.length; ++dimension) { vector[dimension] = -vector[dimension]; } return vector; } /** * Multiplies the vector by a scalar. * * @param vector The vector to multiply. * @param scalar The scalar by which to multiply the vector. * * @return The same array, but each coordinate multiplied by the scalar. 
*/ public static double[] multVector(double[] vector, double scalar) { for (int dimension = 0; dimension < vector.length; ++dimension) { vector[dimension] *= scalar; } return vector; } /** * Computes the dot product of the two supplied points. * * @param lhsVector The first point. * @param rhsVector The second point. * * @return The dot product of the two points. */ public static double dotProductVector(double[] lhsVector, double[] rhsVector) { double dotResult = 0; for (int dimension = 0; dimension < lhsVector.length; ++dimension) { dotResult += lhsVector[dimension] * rhsVector[dimension]; } return dotResult; } private MathUtils() { } } The image data that shows how to encode pixels : package org.benoitdubreuil.iree.model; import org.benoitdubreuil.iree.controller.ControllerIREE; import org.benoitdubreuil.iree.gui.ImageGUIData; import org.benoitdubreuil.iree.pattern.observer.IObserver; import org.benoitdubreuil.iree.pattern.observer.Observable; import java.awt.*; import java.awt.image.BufferedImage; public class ImageData extends Observable<ImageData> implements IObserver<ImageGUIData> { private double[][] m_encodedPixelData; /** * Encodes the pixel data of the supplied image data. * * @param newValue The new value of the observed. */ @Override public void observableChanged(ImageGUIData newValue) { if (newValue.getDownScaled() != null) { encodePixelData(newValue); modelChanged(this); } } private void encodePixelData(ImageGUIData imageGUIData) { EquilateralEncodingTable encodingTable = ControllerIREE.getInstance().getEncodingTable(); EquilateralEncodingCategory[] categories = EquilateralEncodingCategory.values(); BufferedImage image = imageGUIData.getDownScaled(); int width = image.getWidth(); int height = image.getHeight(); m_encodedPixelData = new double[width * height][]; for (int x = 0; x < width; ++x) { for (int y = 0; y < height; ++y) { int orignalPixel = image.getRGB(x, height - 1 - y); int r = (orignalPixel >> 16) & 0xff; int g = (orignalPixel >> 8) & 0xff; int b = orignalPixel & 0xff; int minColorDistance = 255 * 3; int minColorDistanceCategory = 0; for (int category = 0; category < encodingTable.getCategoryCount(); ++category) { Color categoryColor = categories[category].getColor(); int colorDistance = Math.abs(r - categoryColor.getRed()) + Math.abs(g - categoryColor.getGreen()) + Math.abs(b - categoryColor.getBlue()); if (colorDistance < minColorDistance) { minColorDistance = colorDistance; minColorDistanceCategory = category; } } m_encodedPixelData[x * height + y] = encodingTable.encode(minColorDistanceCategory).clone(); } } } public int getPixelCount() { return m_encodedPixelData.length; } double[][] getEncodedPixelData() { return m_encodedPixelData; } } The actual image data recognition algorithm : package org.benoitdubreuil.iree.model; import org.benoitdubreuil.iree.controller.ControllerIREE; import org.benoitdubreuil.iree.utils.MathUtils; public final class ImageDataRecognition { /** * Compares two images and return how similar they are. * * @param lhs The first image to compare. * @param rhs The second image to compare. * * @return Inclusively from 0, totally different, to 1, the same. 
*/ public static double compareImages(ImageData lhs, ImageData rhs) { double meanDistance = 0; double[][] lhsEncodedPixelData = lhs.getEncodedPixelData(); double[][] rhsEncodedPixelData = rhs.getEncodedPixelData(); if (lhsEncodedPixelData != null && rhsEncodedPixelData != null) { EquilateralEncodingTable table = ControllerIREE.getInstance().getEncodingTable(); int maxPixelCount = Math.max(lhs.getPixelCount(), rhs.getPixelCount()); double emptyPixelMeanValue = 1.0; for (int pixel = 0; pixel < maxPixelCount; ++pixel) { double[] lhsVector; double[] rhsVector; if (pixel < lhs.getPixelCount()) { lhsVector = lhsEncodedPixelData[pixel]; } else { meanDistance += emptyPixelMeanValue; continue; } if (pixel < rhs.getPixelCount()) { rhsVector = rhsEncodedPixelData[pixel]; } else { meanDistance += emptyPixelMeanValue; continue; } double rhsDotLhs = MathUtils.dotProductVector(rhsVector, lhsVector); double[] rhsProjectedOntoLhs = MathUtils.multVector(lhsVector.clone(), rhsDotLhs); meanDistance += MathUtils.computeDistance(lhsVector, rhsProjectedOntoLhs) / table.getValueRange() / maxPixelCount; } } return meanDistance; } private ImageDataRecognition() { } }
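     As a quick, hand-worked illustration of the table this code builds (assuming the value range is left at [-1, 1]): for N = 3 categories the algorithm produces N - 1 = 2 coordinates per category, and the three category vertices come out as approximately (-0.866, -0.5), (0.866, -0.5) and (0, 1) - an equilateral triangle inscribed in the unit circle. Each additional category adds one dimension in the same fashion, keeping all vertices equidistant from one another.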
  5. Introduction

     The impetus for this research topic is a data structure that efficiently stores and queries scattered point data. The points may originally be neatly arranged on a grid or randomly scattered across the surface of a sphere (e.g. the earth), but the structure makes no assumption in that regard. It could have member functions such as add(point) or add(points, amount). The points themselves contain three scalar values: the longitude, the latitude, and the data value itself. This data value could be geographical data such as land height or weather data such as air temperature, but again, the data structure is agnostic about the data itself other than that it is a scalar value.

     Data Structure

     The purpose of the data structure is ultimately to efficiently and accurately query for the point(s) near another point, in order to compute the average value at that point. Suppose a requirement is to create an image representing the air temperature (or rainfall, cloud cover, whatever the scalar value of each point represents): this data structure could be queried for each point (i.e. pixel) in the image. Once nearby points are found, the algorithm used to average their scalar values is outside the scope of this post, but a good candidate is inverse distance weighting.

     Within the data structure, the data points are stored not in spherical coordinates but as n-vectors. The reason is that the sampling algorithm - finding the N closest data points - is more efficient and accurate using vector algebra. While a full treatment is outside the scope of this post, to summarize: it is easier to determine how close two points on a sphere are when the points are represented by unit vectors than by latitude and longitude. For example, two points separated by the same difference in longitude are farther apart at the equator than they are closer to the poles. Even if we only care about relative distances and skip the square root in the distance formula, latitude/longitude is still less accurate for this reason. The international date line, where longitude "resets", also presents a problem that would have to be dealt with, as does either pole. But you get the idea - vectors are more stable, accurate, and efficient.

     I've always been a fan of binary space partitions - each branch node of a BSP has two areas, one below and one above (or one left and one right, or one in and one out, however you like to think of it). To go along with storing points as vectors, branch nodes within the structure also partition their space using a vector, which in this case forms the normal of a plane that bisects the unit sphere. All data points in a node (or in children of the node) are either above or below this plane (points exactly on the plane can be allocated to either side, as appropriate). Points below the plane are placed in the first child of the node, and points above it in the second child. We call this plane normal the split axis of the node.

     Querying the structure for the closest N points then becomes trivial. For any branch node, compute the dot product of the point in question and the split axis of the node. If it is negative (the point is below the split plane), recursively query the first child node; if positive (the point is above the split plane), query the second child. For a leaf node, compute the dot product of the point in question with each data point contained in the node, keeping a sorted list of the closest N points.
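     To make the query procedure concrete, here is a minimal C++ sketch (the type and member names are hypothetical, not taken from the author's code); it finds only the single closest stored point and omits both the N-closest bookkeeping and the "revisit the other child" caveat discussed next:

     #include <vector>

     struct Vec3 { float x, y, z; };

     inline float dot(const Vec3 &a, const Vec3 &b) {
         return a.x * b.x + a.y * b.y + a.z * b.z;
     }

     struct Node {
         Vec3 splitAxis{};            // plane normal bisecting the unit sphere (branch nodes)
         Node *below = nullptr;       // child holding points on the negative side of the plane
         Node *above = nullptr;       // child holding points on the positive side of the plane
         std::vector<Vec3> points;    // data points stored as unit n-vectors (leaf nodes)

         bool isLeaf() const { return below == nullptr && above == nullptr; }
     };

     // Returns the stored point with the largest dot product against 'query';
     // for unit vectors, the largest dot product means the smallest angular distance.
     Vec3 findClosest(const Node *node, const Vec3 &query) {
         if (node->isLeaf()) {
             Vec3 best = node->points.front();
             float bestDot = dot(best, query);
             for (const Vec3 &p : node->points) {
                 float d = dot(p, query);
                 if (d > bestDot) { bestDot = d; best = p; }
             }
             return best;
         }
         // Negative side of the split plane -> first child, positive side -> second child.
         const Node *child = (dot(query, node->splitAxis) < 0.0f) ? node->below : node->above;
         return findClosest(child, query);
     }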
     The one caveat is that in branch nodes, after recursing into one child node, it may be necessary to also recurse into the other child if the farthest point found so far is farther away than the other child node, since there may be closer points in that child. But this check is trivial, as we are just comparing dot products. No expensive computations are necessary to find the N closest points - just a bunch of dot products. Since dot products of unit vectors range from -1 to 1 (-1 being farthest apart and 1 being coincident), two points are closer when their dot product is higher. Once the search is complete and the list of N points is found, the actual distances can be calculated if necessary, which in this case is done by computing the angles from the already-calculated dot products. An angle can then be multiplied by the radius of the earth to get an exact distance, again only if necessary; for averaging, these extra calculations are not needed.

     As a final note, the data structure lends itself well to graphics hardware. In my particular case, rendering an image using such a data structure on the CPU may take several minutes, but on the GPU it takes a fraction of a second.

     Problem

     The problem - common to any space partitioning tree - is how to initially create the data structure. Again, the points are not assumed to be arranged in any specific way; as far as we are concerned they are a "point soup". They can be specified one at a time - addPoint(point) - or all at once - addPoints(points) - or both. Specifically, how can the determination of the split axis of any branch be efficient and provide an even split (the same or similar number of points on either side)? The problem is unique here because the points are not arranged on a two-dimensional surface or in three-dimensional space, but lie on a unit sphere.

     My current solution to the problem is not terribly elegant. For each pair of data points, compute the axis that splits them evenly. This is done by computing the point between the two (subtract one from the other) and normalizing it, then crossing this with the cross product of the two points. The result forms the normal of the plane that evenly splits the two points. This normal is then tested against every other point in the node to get 1) the number of points on either side of the plane and 2) the distances of the points to the plane. A good split plane is one that 1) splits the points evenly, 2) has the same or similar distances on either side, and 3) has large distances on either side. As this test is performed for every pair of data points, a quick big-O estimate (one candidate plane per pair, each tested against all other points, roughly O(n^3)) shows that determining the split plane becomes prohibitive for nodes containing a large number of points. However, I have not been able to find a better solution, and the advantages of such a data structure still outweigh this expense. In my particular case, the time spent initially creating the tree is worth the time saved during queries. I should mention that if the data points are known ahead of time, it is faster to specify them all at once, so the tree can rebuild itself once rather than one point at a time, which may cause the tree to rebuild itself multiple times.
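     For reference, the dot-product-to-distance conversion mentioned above is just the great-circle formula: for unit vectors \(\hat{a}\) and \(\hat{b}\) and sphere radius \(R\), the angle is \(\theta = \arccos(\hat{a} \cdot \hat{b})\) and the surface distance is \(d = R\,\theta\).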
  6. My question is pretty straight to the point. I'm reading Real-Time Rendering right now, and while at the start I had no trouble understanding all the topics, I'm having a hard time now that I've reached rotations and related material. For example, in chapter 4.2.2, where the function to extract parameters from an Euler transform is presented, I felt like the author just pulled a rabbit out of a hat. I know that most mathematicians could derive those equations by themselves, but I absolutely can't do it right now. So the question is: should I keep reading and trust that I will eventually understand those things, or should I stop and "start easier"? I'm a little worried that I won't be able to follow along and will waste my time, but then again this could just be because I hate not grasping the full concept right away and having only 'semi-knowledge' of something. Maybe I don't need to understand everything the first time? I should mention that I'm a programmer and I know some decent linear algebra. I'm also planning to go hard on math in the next semesters.
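     For what it's worth, here is the flavor of derivation such an extraction function is based on, worked for one common convention (not necessarily the exact convention Real-Time Rendering uses): writing \(R = R_z(\psi)\,R_y(\theta)\,R_x(\phi)\) and multiplying out the three matrices gives \(r_{31} = -\sin\theta\), \(r_{21} = \sin\psi\cos\theta\), \(r_{11} = \cos\psi\cos\theta\), \(r_{32} = \cos\theta\sin\phi\), and \(r_{33} = \cos\theta\cos\phi\). The angles can then simply be read back from the matrix entries: \(\theta = -\arcsin(r_{31})\), \(\psi = \operatorname{atan2}(r_{21}, r_{11})\), \(\phi = \operatorname{atan2}(r_{32}, r_{33})\), with a special case when \(\cos\theta = 0\) (gimbal lock). No rabbit, just pattern-matching against the product of the three rotation matrices.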
  7. Hi, I've tried for a long time to get a skinning example code to work, without much success. Does anyone know a website/tutorial where it is described step by step what happens to the vertices during key-frame animations? I want to point out that there isn't a more confusing way than wrapping essential pieces of code into class methods, as is done here: http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html - so that one is just not comprehensible, at least not to me... I'd like to reiterate what I already know: vertices are defined in "mesh space". To transform that mesh to its proper position in the world, several chained transformations have to be applied to those vertices ("world space"). First, a model is (or can be) a hierarchically organized scene; for example, to put the hand of a human in its correct place, the elbow transform and shoulder transform have to be applied after the model's own transform (that is, where the model is located in the world): absolute_hand_transform = model_transform x shoulder x elbow x hand ... each defined relative to its parent, of course. This is all static, no skinning there... that's what I've already implemented. If you load a model using ASSIMP (for example), bones contain an "offset matrix" that transforms the vertices of a mesh to the "local space of the bone"... Then an animation produces, for each bone, a relative transformation (relative to its parent, I guess) that I can use (draw) after all the previous or "parent" transforms are applied. IS IT THEN CORRECT (?) that to draw a skinned mesh, the transformations I have to calculate and send to the vertex shader are calculated as follows: MeshSpace-To-WorldSpace = ModelToWorld-Transform x MeshSpace-To-BoneSpace_0 x AnimatedBone_0 x MeshSpace-To-BoneSpace_1 x AnimatedBone_1 x MeshSpace-To-BoneSpace_2 x AnimatedBone_2 x MeshSpace-To-BoneSpace_3 x AnimatedBone_3 where all those "MeshSpace-To-BoneSpace" transforms are the "bone offset matrices" AND each of those "AnimatedBone" transforms is computed directly by interpolating the animation keys... correct??? (I assume it's not, because I often read about having to use some "inverse bone offset transform" in some way I don't get... 😐) Thanks in advance for any answers/suggestions/clarifications!!
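     For comparison - hedged, since different loaders and tutorials use different conventions - the composition typically used with ASSIMP-style data works per bone rather than as one long chain over all bones: each bone b gets its own skinning matrix \(S_b = G_{inv}\,A_b\,O_b\), where \(O_b\) is that bone's offset matrix (mesh space to the bone's bind-pose space), \(A_b\) is the bone's animated global transform built by walking the node hierarchy (\(A_b = A_{parent(b)}\,L_b\), with \(L_b\) the interpolated local keyframe transform), and \(G_{inv}\) is the inverse of the scene root's transform. The vertex shader then blends \(v' = \sum_b w_b\,S_b\,v\) using the per-vertex bone weights, and the model-to-world matrix is applied afterwards.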
  8. I am a beginner in computer graphics, but I have a solid understanding of linear algebra. I am trying to implement my own forward kinematics for a skeleton. I defined global and local translation vectors in child-parent relationships, but I am struggling with the propagation of rotations from node n to n+1, n+2, ... I figured out that MotionBuilder uses XYZ rotation order. I created a matrix to transform the child vector and "rotate it". But if I have 3 joints, where the 1st is parent to the 2nd and the 2nd is parent to the 3rd: if I rotate the 1st joint by, say, (10, 20, 0) locally, I get accurate global positions for the 2nd and 3rd joints, but if I add a local rotation of the 2nd joint of, say, (20, 0, 0) on top of the (10, 20, 0) for the 1st, my 3rd joint's position is wrong. I noticed there should be some stacking, so I should probably do (10+20, 20+0, 0+0) for the second rotation, but I am not sure why that is the case. If I could get some links or a book that addresses this topic, I would appreciate it. Thanks!
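     A general note that may help, independent of MotionBuilder's conventions: chained joints compose as matrix products, \(G_i = G_{i-1}\,L_i\), where \(L_i\) is joint i's local rotation (and translation) relative to its parent and \(G_i\) is its global transform. Summing Euler angles, as in (10+20, 20+0, 0+0), is not equivalent to this matrix product in general - it only coincides in special cases, such as successive rotations about one and the same axis - so rotations are usually propagated down the chain by multiplying matrices (or quaternions) rather than by adding angles.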
  9. Hi, say I have a square with the following vertices in world position: topLeft = 100,100 and bottomRight = 200,200, so the center is at 150,150. So my local-to-world matrix should be a translation T to 150,150 (if my four vertices were offset around the origin by 50 in local space). Now, this is what I tried in order to build my rotation matrix: compute Tinv as the inverse of T to come back to local space, and compute R as the rotation matrix around the 0,0,1 axis with pi/2 as the angle. My plan for building this matrix was to first come back to local space, then rotate, then go back to world space: F = Tinv * R * T. And that doesn't work. It does work when I apply those matrices independently, though, like so: rotatedTopLeft = topLeft * TInv; rotatedTopLeft = rotatedTopLeft * R; rotatedTopLeft = rotatedTopLeft * T. But what I want is to combine TInv, R and T into a single matrix so I can just do rotatedTopLeft = topLeft * F. So how exactly do I compute F? Thanks 🙂
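     As a sanity check of the three-step pipeline, independent of how the matrices are combined: topLeft = (100, 100), translating by (-150, -150) gives (-50, -50), rotating by pi/2 counter-clockwise gives (50, -50), and translating back by (150, 150) gives (200, 100) - whatever combined matrix F is used should send (100, 100) to that same point. One general fact worth keeping in mind (stated as background, not a diagnosis of this particular code): whether the combined matrix is written Tinv * R * T or T * R * Tinv depends on whether the math library multiplies row vectors on the left (v' = v * F) or column vectors on the right (v' = F * v), so the order of the three factors has to match the convention already used when applying them one at a time.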