l0k0
Member Since: 17 Aug 2009 · Offline · Last Active: Jan 26 2014, 12:29 PM
#5106536 Suggestions for memory management classes?
Posted by l0k0 on 02 November 2013 - 02:22 PM
Some popular ones that haven't been mentioned are page allocators, slab allocators, and paged pool allocators. Another really simple one that isn't a heap replacement, but is handy for stack memory, is alloca. However, it should never be used in a loop, in a recursive function, or with an allocation request that could blow up the stack. Some temporary memory allocators will use a statically sized array first, then alloca, and then go to the heap if needed for "scratch" memory. One way of going about this is described here.
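That stack-first "scratch" pattern can be sketched roughly like this (the class and method names are invented for illustration, not taken from any of the allocators mentioned above):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Minimal "scratch" arena: hands out memory from a fixed buffer and falls
// back to the general-purpose heap once the buffer is exhausted. Everything
// is released at once when the arena goes away.
class ScratchArena {
public:
    ScratchArena(void* buffer, size_t capacity)
        : m_buffer(static_cast<unsigned char*>(buffer)),
          m_capacity(capacity), m_used(0), m_heapBlocks(0) {}

    void* Alloc(size_t bytes) {
        // round up to 16 bytes to keep subsequent allocations aligned
        bytes = (bytes + 15) & ~size_t(15);
        if (m_used + bytes <= m_capacity) {
            void* p = m_buffer + m_used; // bump-pointer allocation, no bookkeeping
            m_used += bytes;
            return p;
        }
        ++m_heapBlocks;                  // fallback path: hit the heap
        return std::malloc(bytes);
    }

    size_t BytesUsed() const { return m_used; }
    size_t HeapFallbacks() const { return m_heapBlocks; }

private:
    unsigned char* m_buffer;
    size_t m_capacity;
    size_t m_used;
    size_t m_heapBlocks;
};
```

The fixed buffer would typically be a statically sized local array (or an alloca block), so the common case never touches the heap at all.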
If the end goal is memory performance (and it should be if you're writing allocators), I cannot agree enough with what Hodgman is saying here. malloc is a general-purpose allocator that has to fulfill allocations of all different sizes and alignments. It can also end up making system calls into the kernel, and those are expensive. However, it's still primarily about cache misses at the end of the day.
The goal should be to make fewer allocations, with tightly packed data that is accessed contiguously. Find out what a struct of arrays is, what prefetching does and how to use it responsibly, why casting a float to an int is expensive, what the restrict keyword is, and how all of this also applies to instructions. Custom memory allocators are a small slice of a much bigger problem.
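A tiny illustration of the struct-of-arrays point (hypothetical particle data, not from the post): the hot loop only streams through the fields it actually needs.

```cpp
#include <cstddef>
#include <vector>

// Array of structs: positions, velocities, and cold data share cache lines,
// so every update drags debugName into cache along with the hot fields.
struct ParticleAoS {
    float x, y;
    float vx, vy;
    char  debugName[32];
};

// Struct of arrays: each field is contiguous, so an update that only touches
// positions and velocities walks tightly packed, prefetch-friendly memory.
struct ParticlesSoA {
    std::vector<float> x, y;
    std::vector<float> vx, vy;

    void Integrate(float dt) {
        for (size_t i = 0; i < x.size(); ++i) {
            x[i] += vx[i] * dt;
            y[i] += vy[i] * dt;
        }
    }
};
```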
#5097732 Circle vs Line Segment collision
Posted by l0k0 on 29 September 2013 - 10:53 PM
Treat it like a circle against an infinite line first. Validate that the t value of the intersection between the circle and the infinite line is in the [0, 1] range. Then take the scalar projection of this intersection point onto p0p1 and validate that the scalar projection is also in [0, 1]. Give this a try:
bool TestSweptCircleLine(const Vector2& c, const float r, const Vector2& d,
                         const Vector2& p0, const Vector2& p1, float& outT)
{
    Vector2 p0p1 = p1 - p0;
    // We have this annoying case first, where the circle is *already*
    // intersecting the line segment, so check for that. In this case t is
    // 0.0f. Technically you have 2 intersection points, but your "solver"
    // might just want to use c.
    Vector2 closestPointOnLineSegment = p0 + p0p1 * Saturate(Dot(c - p0, p0p1) / Dot(p0p1, p0p1));
    if (DistSquared(closestPointOnLineSegment, c) <= r * r)
    {
        outT = 0.0f;
        return true;
    }
    Vector2 p0p1Perp = RightPerp(p0p1); // can avoid normalisation here, but let's keep this simpler for now
    Vector2 pn = Normalise(p0p1Perp);
    float pd = Dot(pn, p0);
    // The system of equations is:
    //   dot(pn, X) = pd + r  (X is any point that is +r or -r distance away from the infinite line)
    //   c + d * t = X        (t is in the 0 to 1 range inclusive; X here is the center of the circle
    //                         when it intersects the infinite line, not necessarily the segment)
    // Using substitution and solving for t:
    float distanceLineToCenterOfCircle = Dot(pn, c) - pd; // just the signed plane distance
    float signSideOfLine = Sign(distanceLineToCenterOfCircle);
    // if our sign is positive, then our system of equations uses +r
    // if not, our system of equations uses -r
    float modR = signSideOfLine * r;
    outT = ((pd + modR) - Dot(pn, c)) / Dot(pn, d);
    // now validate t and s
    if (outT >= 0.0f && outT <= 1.0f)
    {
        // Alright, our swept circle is intersecting the infinite line.
        // Now validate that this intersection is in the line segment range too.
        Vector2 ptCenterOfCircleWhenIntersecting = c + d * outT;
        Vector2 ptOnLineWhenIntersecting = ptCenterOfCircleWhenIntersecting - pn * modR; // step back onto the line
        // Find the scalar projection of ptOnLineWhenIntersecting.
        // If it is between 0 and 1 inclusive we have an intersection.
        float s = Dot(ptOnLineWhenIntersecting - p0, p0p1) / Dot(p0p1, p0p1);
        if (s >= 0.0f && s <= 1.0f)
        {
            return true;
        }
    }
    return false;
}
#5077780 Implementing an event or message system for decoupling
Posted by l0k0 on 14 July 2013 - 11:11 PM
I have a blog post about a message-passing system in a component-entity world that I wrote here: http://afloatingpoint.blogspot.com/2012/09/gameplay-architecture-part-2-message.html. I'm not sure if I would call it simple (warning: custom memory allocation), but it definitely gets the job done when it comes to sending custom messages now and queuing them for later. The short version is that I used a sorted heap, a map of message types to listeners, and a pool allocator.
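A rough sketch of that shape: a time-sorted queue of pending messages plus a map from message type to listeners. All the names here are invented for illustration, and std containers stand in for the custom pool allocator the post describes.

```cpp
#include <functional>
#include <map>
#include <queue>
#include <string>
#include <vector>

struct Message {
    double deliverAt;  // game time at which the message should fire
    std::string type;
};

// comparator so the priority_queue pops the *earliest* message first
struct Later {
    bool operator()(const Message& a, const Message& b) const {
        return a.deliverAt > b.deliverAt;
    }
};

class MessageBus {
public:
    void Subscribe(const std::string& type, std::function<void(const Message&)> fn) {
        m_listeners[type].push_back(fn);
    }
    void Queue(const Message& msg) { m_pending.push(msg); }

    // deliver every message whose time has come, in time order
    void Dispatch(double now) {
        while (!m_pending.empty() && m_pending.top().deliverAt <= now) {
            const Message msg = m_pending.top();
            m_pending.pop();
            for (auto& fn : m_listeners[msg.type]) fn(msg);
        }
    }

private:
    std::priority_queue<Message, std::vector<Message>, Later> m_pending;
    std::map<std::string, std::vector<std::function<void(const Message&)>>> m_listeners;
};
```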
#5053303 What's the Deal with them Indices?
Posted by l0k0 on 14 April 2013 - 07:25 PM
It might be easier to think of it this way:
Vector2 vertices[] =
{
    Vector2(-1.0f, -1.0f), // Index 0: bottom left
    Vector2( 1.0f, -1.0f), // Index 1: bottom right
    Vector2(-1.0f,  1.0f), // Index 2: top left
    Vector2( 1.0f,  1.0f), // Index 3: top right
};
Triangle indices[] =
{
Triangle(0, 3, 1), // Bottom left to top right to bottom right triangle
Triangle(0, 2, 3), // Bottom left to top left to top right triangle
};
Each index references a point on the quad. You are ultimately drawing 2 triangles to achieve this. For quads, index buffers don't help all that much, but for something like a cube the amount of data sent to the GPU can get a lot smaller (especially when the vertices encode more than just positions) when you use index buffers of 16-bit offsets instead of repeating 3 whole floats each time.
OpenGL takes things as flat arrays of floats and integral types. As such, you have to think of the indices and vertices as groups inside these arrays. Does that make more sense?
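To put rough numbers on the cube case (assuming position-only vertices of three 32-bit floats and 16-bit indices):

```cpp
#include <cstddef>
#include <cstdint>

// A cube is 12 triangles. Without indexing you submit 36 full vertices;
// with indexing you submit 8 unique vertices plus 36 16-bit indices.
size_t CubeBytesUnindexed() { return 36 * 3 * sizeof(float); }
size_t CubeBytesIndexed()   { return 8 * 3 * sizeof(float) + 36 * sizeof(uint16_t); }
```

The gap only widens once each vertex also carries normals, UVs, colors, and so on, since the indexed version repeats the small index instead of the whole vertex.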
#5052714 Matrix LookAt problem
Posted by l0k0 on 13 April 2013 - 02:13 AM
The view matrix transforms from world space to view (camera) space. It is the inverse of the camera's world matrix.
We know that the inverse of a rotation matrix is its transpose. Since what you are calculating is the view matrix, you need to transpose the upper 3x3 portion of the matrix to get the desired result. Notice that if you do that with your results, you get the same thing as the library's matrix (don't just negate; that's silly). You can either transpose the matrix after setting the 3 columns or just set it directly.
What you are doing now is right for the translation, but it gives the world-space rotation (when really we want its inverse). Use your left hand given your input, and have it point from the "eye" to the "at". Hopefully it then becomes clear how the properly calculated view matrix undoes that transformation.
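A sketch of the idea (this assumes column-major matrices with column vectors and a right-handed convention; your library's conventions may differ, so treat the element layout as illustrative):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  Sub(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float Dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Cross(Vec3 a, Vec3 b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(Dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Column-major 4x4, indexed m[col][row].
struct Mat4 { float m[4][4]; };

// Right-handed look-at: (right, up, -forward) is the camera's world rotation.
// The view matrix uses its transpose (= inverse) for the rotation part, and
// -basis . eye for the translation, i.e. the full inverse of the camera matrix.
Mat4 LookAtRH(Vec3 eye, Vec3 at, Vec3 worldUp)
{
    Vec3 f = Normalize(Sub(at, eye));       // forward: eye -> at
    Vec3 r = Normalize(Cross(f, worldUp));  // right
    Vec3 u = Cross(r, f);                   // true up
    Mat4 v = {};
    // rows of the rotation part are the basis vectors (the transpose)
    v.m[0][0] = r.x;  v.m[1][0] = r.y;  v.m[2][0] = r.z;
    v.m[0][1] = u.x;  v.m[1][1] = u.y;  v.m[2][1] = u.z;
    v.m[0][2] = -f.x; v.m[1][2] = -f.y; v.m[2][2] = -f.z;
    v.m[3][0] = -Dot(r, eye);
    v.m[3][1] = -Dot(u, eye);
    v.m[3][2] = Dot(f, eye);
    v.m[3][3] = 1.0f;
    return v;
}
```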
#5033247 Trouble implementing SAT collision
Posted by l0k0 on 16 February 2013 - 10:23 PM
I'd do it like this:
void GetProjectionOnAxis(const Vector2* pVertices, const int N, const Vector2& axisN,
                         float& projMin, float& projMax)
{
    projMin = projMax = Dot(pVertices[0], axisN); // axisN assumed to be normalized
    for (int i = 1; i < N; ++i)
    {
        const float proj = Dot(pVertices[i], axisN);
        projMin = Min(projMin, proj);
        projMax = Max(projMax, proj);
    }
}

float GetOverlapOnAxis(const float projMinA, const float projMaxA,
                       const float projMinB, const float projMaxB)
{
    const float minP = Max(projMinA, projMinB);
    const float maxP = Min(projMaxA, projMaxB);
    return maxP - minP; // negative if no intersection
}

struct SATResult
{
    Vector2 m_normal;
    float m_depth;
    bool m_intersect;
};

bool TestPolygonPolygonSAT(const Vector2* pVerts0, const int n0,
                           const Vector2* pVerts1, const int n1, SATResult& satOut)
{
    Vector2 edge, edgePerp, edgePerpN;
    float min0, max0, min1, max1;
    float overlap, minOverlap = F32_MAX;
    Vector2 minNormal;
    satOut.m_intersect = false;
    int j, i;
    for (j = n0 - 1, i = 0; i < n0; j = i, ++i)
    {
        // compute our orthogonal axis for each edge
        edge = pVerts0[i] - pVerts0[j];
        edgePerp = RightPerp(edge); // (-y, x)
        edgePerpN = Normalize(edgePerp);
        // get min/max projection on the axis
        GetProjectionOnAxis(pVerts0, n0, edgePerpN, min0, max0);
        GetProjectionOnAxis(pVerts1, n1, edgePerpN, min1, max1);
        // if no overlap, no intersection
        overlap = GetOverlapOnAxis(min0, max0, min1, max1);
        // otherwise store the minimum overlap value and the normal
        if (overlap < 0.0f)
        {
            return false; // have to intersect on every tested axis for SAT to return true
        }
        else if (overlap < minOverlap)
        {
            minOverlap = overlap;
            minNormal = edgePerpN;
        }
    }
    // now do the same thing on the axes of pVerts1
    for (j = n1 - 1, i = 0; i < n1; j = i, ++i)
    {
        edge = pVerts1[i] - pVerts1[j];
        edgePerp = RightPerp(edge); // (-y, x)
        edgePerpN = Normalize(edgePerp);
        GetProjectionOnAxis(pVerts0, n0, edgePerpN, min0, max0);
        GetProjectionOnAxis(pVerts1, n1, edgePerpN, min1, max1);
        overlap = GetOverlapOnAxis(min0, max0, min1, max1);
        if (overlap < 0.0f)
        {
            return false;
        }
        else if (overlap < minOverlap)
        {
            minOverlap = overlap;
            minNormal = edgePerpN;
        }
    }
    // if we get here they are intersecting
    satOut.m_normal = minNormal;
    satOut.m_depth = minOverlap;
    satOut.m_intersect = true;
    return true;
}
You would probably want to break TestPolygonPolygonSAT into a helper function you call twice. Also, I was lazy and just wrote this on the spot; pretty sure it is right, though.
#5024217 Good habits for game development
Posted by l0k0 on 22 January 2013 - 01:36 AM
Some general things, most of them independent of coding style:
- Have realistic and well-defined goals that are clearly communicated to the members of your team
- Scope it down: your game idea is probably too damn big. Polish takes time, things get cut, and features don't exist in a bubble (unless the project fails, that is).
- Iteration time is everything: fail early and fail often, prototype mechanics, and be prepared to cut something you invested a considerable amount of time in
- Knowledge is power: profile your code, run tools that check for leaks, crank up compiler warnings, and use static analysis tools if possible
- Interfaces are king: a well-designed one is the root of a clear, maintainable, and productive workflow for others. A poor one will cut into iteration time.
- Understand that hard problems will inevitably contain immensely complex code: elegant systems rarely survive contact with real users
- Embrace time estimates, and embrace making time estimates that might be completely wrong. Do this over and over and over again.
- "Self-documenting code" is probably only documented to you
- It's (usually) not done until it's data-driven
#5019381 Rectangle Line collision problem
Posted by l0k0 on 09 January 2013 - 12:52 AM
It would be quite the feat if you could do this without vectors, so I'm not sure what you mean by "not vector based".
To keep this simple, albeit not the most efficient, I'd first set up a system of equations between the line segment and each edge of the rectangle expressed as a line in normal form. Ensure the t value is within 0 to 1 for the segment. Then find the scalar projection onto the edge of the rectangle. It is also possible that the segment resides completely inside the rectangle, which means we need a point-inside-rectangle test: if either endpoint is inside the rectangle, it is a positive intersection. It would look something like this:
const Vector2 RightPerp(const Vector2& v)
{
    return Vector2(-v.y, v.x);
}

bool32 TestLineSegmentLineSegment(const Vector2& a0, const Vector2& a1,
                                  const Vector2& b0, const Vector2& b1)
{
    const Vector2 a = a1 - a0;
    const Vector2 b = b1 - b0;
    const Vector2 bPerp = RightPerp(b);
    const Vector2 bPerpN = Normalize(bPerp);
    const float bPerpNDist = Dot(bPerpN, b0);
    // dot(pn, x) = pd
    // pt = a0 + a * t
    // dot(pn, a0 + a * t) = pd
    // t = (pd - dot(pn, a0)) / dot(pn, a)
    // intersects a if t is in the zero to one range
    const float t = (bPerpNDist - Dot(bPerpN, a0)) / Dot(bPerpN, a);
    if (0.0f <= t && t <= 1.0f)
    {
        // segment a is intersected between a0 and a1; this is the point on a
        const Vector2 ptOnA = a0 + a * t;
        // find the scalar projection of the point onto b; if it is in range they intersect
        const float bt = Dot(ptOnA - b0, b) / Dot(b, b);
        if (0.0f <= bt && bt <= 1.0f)
        {
            return 1;
        }
    }
    return 0;
}

bool32 TestRectanglePoint(const Vector2& min, const Vector2& max, const Vector2& pt)
{
    return (pt.x >= min.x && pt.y >= min.y && pt.x <= max.x && pt.y <= max.y);
}

bool32 TestRectangleLineSegment(const Vector2& min, const Vector2& max,
                                const Vector2& a, const Vector2& b)
{
    if (TestRectanglePoint(min, max, a) || TestRectanglePoint(min, max, b))
        return 1;
    const Vector2 maxXMinY = Vector2(max.x, min.y);
    const Vector2 minXMaxY = Vector2(min.x, max.y);
    return (TestLineSegmentLineSegment(min, maxXMinY, a, b) ||
            TestLineSegmentLineSegment(maxXMinY, max, a, b) ||
            TestLineSegmentLineSegment(max, minXMaxY, a, b) ||
            TestLineSegmentLineSegment(minXMaxY, min, a, b));
}
Give this a try and let me know if you have any issues.
#5018397 is class templates made in gamedevelopment?
Posted by l0k0 on 06 January 2013 - 08:11 PM
We use templates for a lot of different things. I am going to (mostly) focus on how templates can prevent you from making errors in your code. I think this is frequently overlooked when people consider templates.
1) Data structures. Just because the STL provides some doesn't mean it has all of the data structures you need. In addition, very few commercial game companies use the STL (and you should at least know why). One simple structure you could write is a range-checked array. If the operator[] function is inlined, a good compiler will optimize it out completely. However, in debug builds you can throw in an assert that will catch out-of-range errors and save you a lot of debugging time. To get you started:
template <typename TYPE, int ARRAY_SIZE>
class RangeArray
{
public:
    STATIC_ASSERT(ARRAY_SIZE > 0);
    TYPE& operator[](int idx)
    {
        Assert(idx >= 0 && idx < ARRAY_SIZE);
        return m_array[idx];
    }
private:
    TYPE m_array[ARRAY_SIZE];
};
RangeArray<int, 5> myArray;
printf("%d\n", myArray[0]);
int someIndex = 2 * 2 * 2;
int i = myArray[someIndex]; // this will compile, but it will not fail silently at runtime
2) Static assertions are very useful for catching bad code at compile time. You can limit a template to certain types by providing all of the valid overloads and throwing a static assertion into the main templated function, or by not defining the main template at all, just declaring it. If you can catch an error at compile time, that is a lot better than at runtime, especially since assertions often aren't of the fatal variety on a large project. Here's a somewhat practical example with a Vector3. The range and size of the structure is known and constant, so we can use non-type template parameters.
class Vector3
{
public:
    // If the index is not 0, 1, or 2 it is known to be invalid!
    template <int idx> float getElem() const { STATIC_ASSERT_FAIL_MSG("Invalid element index!"); }
    // note: explicit member specialization inside the class like this is an
    // MSVC extension; standard C++ wants these at namespace scope
    template <> float getElem<0>() const { return x; }
    template <> float getElem<1>() const { return y; }
    template <> float getElem<2>() const { return z; }
private:
    float x, y, z;
};
Vector3 someVec;
float atElem0 = someVec.getElem<0>(); // OK
float atElem4 = someVec.getElem<5>(); // ERROR: will not compile
3) Compilers can inline templates, and the optimizations made possible by inlining can be quite remarkable. It is what makes them superior to function pointers (and is largely why std::sort trumps qsort). It's also extremely useful in the form of delegates and functors. Even if you use the STL, you will need to define your own functors all the time to process your custom data.
4) Strings. It is a pain to support code that works with 8-bit, 16-bit, and 32-bit characters. If you ever embark on localization, or on code that otherwise needs to support several of these at runtime, the only sane way to do it (in my opinion anyways) is with templates. Certainly better than using the preprocessor all over the place to route to the equivalent call.
In addition, I frequently don't know how large statically sized arrays should be when I do string concatenations on C strings. A frequently unknown benefit of templates is that they can take the size of an array as a parameter. Alloca won't work here because we can't get the string sizes if the strings aren't constructed yet.
template <typename CHAR_TYPE, int ARRAY_SIZE>
INLINE CHAR_TYPE* string_format(CHAR_TYPE (&dest)[ARRAY_SIZE], const CHAR_TYPE* fmt, ...)
{
    va_list argp;
    va_start(argp, fmt);
    ASSERT_ONLY(int numWritten =) vsprintf(dest, fmt, argp); // yes, I know this will only work with char and not wchar_t...
    SLOW_ASSERT_MSG(numWritten < ARRAY_SIZE && numWritten > 0, "Array not big enough!");
    va_end(argp);
    return dest;
}
const int kStrArraySize = 256;
char strArray[kStrArraySize];
string_format(strArray, "Entity Position: %f, %f, %f\n", entPos.x, entPos.y, entPos.z);
DebugDrawString(entPos, strArray);
// If we do a bunch of string formatting on this array and make debug drawing
// calls and it overflows, we can go back and change it in ONE place. We don't
// have to find all of the places we passed kStrArraySize and change those too.
This will catch an array overrun without silently thrashing memory, or at least without you having to check the return value of vsprintf every time yourself.
5) Catching errors when casting pointer types. This can be a real lifesaver when you have a message/event system that passes base-pointer data around. dynamic_cast is almost always a sign of bad design, but don't underestimate your ability to make a mistake when casting. NOTE: dynamic_cast requires that the source type have a vtable!
// this code assumes LPTR_TYPE and RPTR_TYPE are pointer types, although there is a template for that too :)
template <typename LPTR_TYPE, typename RPTR_TYPE>
FORCE_INLINE LPTR_TYPE smart_cast(RPTR_TYPE right)
{
#if _CPPRTTI
    Assert(!right || dynamic_cast<LPTR_TYPE>(right));
#endif
    return static_cast<LPTR_TYPE>(right);
}
This was extremely useful for pool allocated data seen in message passing and event systems.
6) Also great when making assignments between numerical types with different numbers of bits. For example, let's say we have a U32 but need to pass it as a U16.
template <typename LTYPE, typename RTYPE>
LTYPE& Assign(LTYPE& assignedTo, const RTYPE assigned)
{
    assignedTo = assigned;
    Assert(assignedTo == assigned); // this will fail if there weren't enough bits in LTYPE to store RTYPE's value
    return assignedTo;
}
// example code that isn't really practical
U32 src = U16_MAX + 5;
U16 dest = 0;
Assign(dest, src); // this will not fail silently anymore
7) Other things I have found templates to be useful for:
- Component entity systems
- Your own RTTI system (this will usually require the preprocessor, a virtual function, and a hash code too)
- Permutations/shuffling in vector libraries
- Serialization code: some of my coworkers love templates for it. That's not really something I can say I'm an expert on, but I'm leaving it here anyways.
- Static assertions themselves
- All that metaprogramming stuff
- Enum range checking at compile time
- Avoiding virtual functions
- A bunch of other things I haven't mentioned...
But don't underestimate how much time templates will save you through the errors they can catch in your code. Most of this should compile out cleanly in release builds!
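For example, the enum range check from the list above might look like this with C++11 static_assert (the TextureSlot enum and BindTexture function are invented for illustration):

```cpp
enum class TextureSlot { Albedo, Normal, Roughness, Count };

constexpr bool IsValidSlot(int s)
{
    return s >= 0 && s < static_cast<int>(TextureSlot::Count);
}

// Compile-time range check: instantiating BindTexture with a bad slot is a
// build error instead of a silent runtime bug.
template <TextureSlot SLOT>
void BindTexture(unsigned textureHandle)
{
    static_assert(IsValidSlot(static_cast<int>(SLOT)), "Invalid texture slot!");
    (void)textureHandle; // the platform-specific bind call would go here
}
```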
#4903891 Component Entity Model Without RTTI
Posted by l0k0 on 17 January 2012 - 11:13 PM
It also has the problem of enforcing the "there can only be one" philosophy onto the user: each entity can only have one transform and one health component. That may seem like a trivial burden at the time of writing, but at some point your users will find a reason for there to be two of something, and will be forced to construct ungraceful workarounds involving several entities cooperating to function as one "entity".
My current system avoids that restriction like so:
template <class T>
void getComponents(T* components[], int32& n) const
{
    n = 0;
    for (uint32 i = 0; i < componentList.size(); ++i)
    {
        if (componentList[i]->asType<T>())
        {
            components[n++] = static_cast<T*>(componentList[i]);
        }
    }
}
I think you make a great point about the hidden dependencies, though. Accessing the pointers directly is also faster than a linear search or hash-table lookup when it matters.
#4903134 Component Entity Model Message Passing in C++
Posted by l0k0 on 15 January 2012 - 11:37 PM
Here are my concerns with this approach:
1) My current idea is to pass a void** array as the arguments so the function prototypes are consistent. I don't think the template jiu-jitsu would be worth it here, if it's even doable for "all" or "most" function prototypes.
2) How might you handle return values from sendMessage-compliant functions? Put them at the end of the array? This doesn't seem very intuitive, but is there another option?
3) Right now, if two components register a message handler under the same string name, only one of them will be called when sendMessage is invoked. This is a bit concerning. Is there any way to get around this besides using descriptive names? Consider this example:
class Armor : public Component
{
    DECLARE_COMPONENT(Armor, ARMOR_ID, Component)
public:
    void setHealth(void** args)
    {
        // set the armor health
    }
    Armor() : Component()
    {
        REGISTER_FUNC(Armor, "setHealth", &setHealth);
    }
};

class Health : public Component
{
    DECLARE_COMPONENT(Health, HEALTH_ID, Component)
public:
    void setHealth(void** args)
    {
        // set the health
    }
    Health() : Component()
    {
        REGISTER_FUNC(Health, "setHealth", &setHealth);
    }
};

GameObject.sendMessage() will only call one. broadcastMessage() will call both, despite this being undesirable. Is the only way to avoid this to use more descriptive message names, e.g. "Health_SetHealth" and "Armor_SetArmor"? I realize that in some cases this is preferred. For example, one component can be aware of a "health changed" event without any dependencies on the actual health component. For things like A.I. scripts this can keep things decoupled and encapsulated.
Also, I'm aware that member function pointers are a PITA where inheritance is concerned. My DECLARE macro already handles this somewhat messy process by generating a unique typedef and table name from the class name and calling the base class's implementation of handleMessage if nothing is found. I'm more interested in alternatives/solutions to the ambiguous name issue and the rather clunky practice of passing void pointer arrays for everything.
Thanks in advance!
#4897356 Component Entity Model Without RTTI
Posted by l0k0 on 25 December 2011 - 05:48 PM
#4838184 XNA Simple NPC movement is wonky
Posted by l0k0 on 20 July 2011 - 05:28 PM
Floating point error will inevitably accumulate, and if the value is off by just one bit, this section of code will never be hit:

if (Carriep == idlePoint)
{
    idle = false;
    idle2 = true;
    Carriep.X = 22;
    Carriep.Y = 200;
}

Try using the distance formula to see if the point is within an appropriate tolerance. Using squared values is better for performance, but for now, try using:
if (Vector2.Distance(Carriep, idlePoint) <= 2.0f)

You could of course replace 2.0f with a more appropriate tolerance value.
Secondly, your code will seek even if it is at the correct coordinates on a given axis because you are using less than/greater than OR EQUAL comparisons.
Think about this logically for a second. If target.X is at 400 and you are positioned at 400 on the X axis, do you want to move by 2 units on that axis? Change your <= to < comparisons, and your >= to > comparisons:

if (Carriep == idlePoint)
{
    idle = false;
    idle2 = true;
    Carriep.X = 22;
    Carriep.Y = 200;
}
// consider putting an else here
{
    if (Carriep.Y >= idlePoint.Y)
    {
        Carriep.Y -= CarrieSpeed;
    }
    else if (Carriep.Y <= idlePoint.Y)
    {
        Carriep.Y += CarrieSpeed;
    }
    if (Carriep.X >= idlePoint.X)
    {
        Carriep.X -= CarrieSpeed;
    }
    else if (Carriep.X <= idlePoint.X)
    {
        Carriep.X += CarrieSpeed;
    }
} // end else here
My guess is that the main culprit is the first tolerance issue I discussed above. Since you are using absolute comparisons, I think your code never registers as reaching idlePoint and continuously seeks it. It's easy to test for yourself, though. Inside the comparison, put a WriteLine statement like so:
if (Carriep == idlePoint)
{
    idle = false;
    idle2 = true;
    Carriep.X = 22;
    Carriep.Y = 200;
    Console.WriteLine("Idle Point Reached!");
}

I suspect this will never be printed in your console window. Note: if you don't have a console at all, you'll need to change your project settings from a Windows Application to a Console Application. If you're looking for something more challenging, you could try Googling steering behaviors, specifically Seek, Arrival, and Path Following.
#4828109 Farthest point in a direction?
Posted by l0k0 on 26 June 2011 - 10:47 PM
In this instance, the dot product between the point and the direction gives the scalar projection along the search direction. Recall how vector projection onto a normalized vector n works: we can find the projected vector with dot(pt, n) * n. Therefore, the dot product alone is the magnitude of the projected vector. Since we are looking for the farthest point in the given direction, the maximum scalar projection is sufficient for finding it. If you think about it, this is what you are doing in your code too (for positive projections anyways), only with more tests that are computationally expensive, especially for an iterative algorithm like GJK.
It's easiest to find the maximum scalar projection along the given direction. You can do this with a simple dot product repeated in a loop for each vertex. Your implementation requires a square root for each iteration, which will utterly destroy the efficiency of the algorithm.
Note: direction does not need to be normalized. In addition, as you'll find in other places reading up on the algorithm, the best starting direction is usually the difference between the two objects' centroids.
Code Zealot has some excellent posts on this, and much more GJK-related material.

vector2 FarthestPointInDirection(polygon p, vector2 direction)
{
    assert(p.VertexCount > 0);
    int farthestIndex = 0;
    float farthestDistance = dot(p.v[0], direction);
    float tmp;
    for (int i = 0; i < p.VertexCount; ++i)
    {
        tmp = dot(p.v[i], direction);
        if (tmp > farthestDistance)
        {
            farthestDistance = tmp;
            farthestIndex = i;
        }
    }
    return p.v[farthestIndex];
}
I'm a little confused why this would result in the point farthest along in a direction. Doesn't the dot product represent the angle between 2 points? So isn't the algorithm you posted simply finding the point with the farthest angle from the direction vector?
Look at how you compute the projected vector. While the math is correct (at least for the projection), it is completely unnecessary, because we aren't interested in the actual projected point, just the largest scalar projection. Since the direction is the same for each tested point, there is no need to divide each component by the squared length of the search direction if we merely want to make a comparison. Sqrt function calls are also very expensive in low-level code, and your version will produce incorrect results because your magnitude function discards the sign and always returns a positive value. Instead of merely finding the scalar with a simple dot product, you needlessly project the point and find its magnitude irrespective of the search direction.
Here's a simple diagram taken from the Wikipedia article I linked to. Let's assume a is our search direction and b is one of the points in the convex set. Do you now see how the dot product returns the distance along the given direction? Do you understand that if b were negated it would return a negative result, while your code, which uses a magnitude function, would return the exact same value? Do you see why this matters when determining the farthest point in a direction?