
Dirk Gregorius

Member Since 24 Apr 2002
Offline Last Active Oct 23 2016 05:53 PM

#5314014 Your Opinion of Software Dev Resume assertion

Posted by on 05 October 2016 - 10:45 AM

I also agree. I read quite a few resumes every week, and all I do is look for achievements and what problems you solved. If there is a follow-up phone screen, we usually pick someone with expertise in that field, and they will ask about problems you discovered and how you solved them.


"Develop CAD software solutions for Civil Engineering companies"


What does this even mean? Did you write FEM solvers for structural analysis? Did you work on the UI? If you want to get attention, you need to say which specific problems you solved.

#5313492 Local scope benefits

Posted by on 01 October 2016 - 11:24 AM

Note that in C++ the code will not do what you expect. In both versions you make a deep copy of your object and then update it. If the doWork() function changes the state of the object, the change will not be reflected in the object list. I think in C# it might work. You cannot look at the problem from a general, language-agnostic perspective.


Personally I think declaring a variable somewhere other than where it is used should require a very good reason. Write code for readability first and optimize second, once you have proof it matters for performance. You write code for human beings first, not the compiler. The correct C++ version of your loop would use a reference:


for (int i = 0; i < 5; ++i) {
    CoolObject& obj = coolObjectList[i];
    obj.doWork(); // state changes are reflected in the list
}


If in your language this resolves to a reference/pointer anyway, it doesn't matter which form you use for performance. 
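
For contrast, here is a minimal, self-contained sketch of the copy version and why the modification is lost. CoolObject, its doWork() member and the numbers are my own illustration, not from the original thread:

#include <vector>

struct CoolObject
{
    int state = 0;
    void doWork() { ++state; }
};

int main()
{
    std::vector<CoolObject> coolObjectList( 5 );

    for ( int i = 0; i < 5; ++i )
    {
        CoolObject obj = coolObjectList[i]; // deep copy
        obj.doWork();                       // only the copy is modified
    }
    // Every element of coolObjectList still has state == 0.
    // With CoolObject& obj = coolObjectList[i]; the changes would stick.
}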

#5313487 Are games with large worlds compiled with fp:/fast?

Posted by on 01 October 2016 - 10:37 AM

Yeah if I was doing a large world, I'd use a floating origin for local coordinates, perhaps combined with 64bit integer / fixed point global coordinates. Using double precision anywhere is often a code smell honestly  :wink: 


I used to think like this as well, but I recently changed my opinion. There are places where doubles really make sense. In particular, for large worlds and numerically sensitive operations, double is the right choice in my opinion. Why 64-bit integer / fixed point and not just double? What does this buy you? Memory? On modern machines double performance is nearly as good as float in my experience. I would argue that if doubles are an option for your problem on your target machine, use them.


I also spent quite some time implementing robust versions of Quickhull and mesh simplification (QEM), which needed to compile our entire asset database (15 years of models!), and I did it in 32-bit floating point. What a waste of time! With doubles it all just works, while with float32 I spent lots of time debugging and tweaking epsilons. Sure, I gained some great insight into the algorithms and numerical analysis from this, but in retrospect I could have spent this time on much more valuable things.
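
As a small illustration of the large-world point (my own numbers, not from the asset work mentioned above): at 100 km from the origin a 32-bit float can no longer represent millimeter offsets, while a double still can.

#include <cstdio>

int main()
{
    // At x = 100 km a float has a spacing of roughly 8 mm, so a 1 mm offset is rounded away.
    float  xf = 100000.0f;
    double xd = 100000.0;

    printf( "float : %.6f\n", xf + 0.001f ); // prints 100000.000000
    printf( "double: %.6f\n", xd + 0.001 );  // prints 100000.001000
}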

#5313392 problem with ray/box intersection in GLSL

Posted by on 30 September 2016 - 10:12 AM

The division by zero is indeed a problem with this approach. E.g. the Ember raytracer just uses FLT_MIN if one of the direction elements is zero (at least in one of their older versions). Another option is to use the Separating Axis Test (SAT) for ray/box instead. This will not clip the ray though; it only provides a boolean answer as to whether the ray intersects the box. I use this in most of my tree traversals as I found it faster and more robust.
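
Here is a rough C++ sketch of the slab test with the FLT_MIN workaround described above (my own paraphrase of the idea, not Ember's actual code):

#include <algorithm>
#include <cfloat>
#include <utility>

// Ray vs. AABB slab test. Replacing a zero direction component with a tiny value
// avoids the 0/0 case; the resulting huge t values still produce correct intervals.
bool RayAabb( const float origin[3], const float dir[3],
              const float boxMin[3], const float boxMax[3] )
{
    float tMin = 0.0f;
    float tMax = FLT_MAX;

    for ( int i = 0; i < 3; ++i )
    {
        float d = ( dir[i] != 0.0f ) ? dir[i] : FLT_MIN;
        float invD = 1.0f / d;
        float t1 = ( boxMin[i] - origin[i] ) * invD;
        float t2 = ( boxMax[i] - origin[i] ) * invD;
        if ( t1 > t2 )
            std::swap( t1, t2 );

        tMin = std::max( tMin, t1 );
        tMax = std::min( tMax, t2 );
    }

    return tMin <= tMax;
}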

#5313385 When should I use int32_t rather than int

Posted by on 30 September 2016 - 09:17 AM

You are correct about the warning. You can cast the size to an integer, but this is ugly as well :). We don't use the STL at work, I also don't use it personally, and our size type is int. I think Mike Acton put it perfectly at a C++ conference a while ago: "The STL doesn't solve our problems".


I totally agree with you on the strong typing! Personally I believe that code should be written for *readability*. This helps with maintaining large code bases over decades. It also helps with managing complexity. It helps with productivity since you enable more team members to work on a larger number of problems in the code base. auto is orthogonal to all of this in my opinion. E.g.


int x = 5;


This is clear. I read over this and I now know that x is a signed integer. 


auto x = 5;


What type is x now? Can I write x = -5, or will this overflow? The int specifier is somewhat redundant from the perspective of the compiler, but for a programmer it is useful information. Sure, we know that 5 is int, but not every programmer on a team is a language expert. I am not saying auto is not useful. Personally I use it for lambdas and, if I use the STL, for some of the iterator types, but currently auto is truly overused in the suggestions and explanations I am reading. And ferrous's example above shows perfectly how it can easily lead to misunderstandings.
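
A minimal sketch of the kind of surprise meant here (my own example, not the one from the thread):

#include <vector>

int main()
{
    std::vector<int> v( 10 );

    auto n = v.size(); // n is size_t (unsigned), not int
    auto a = n - 20;   // wraps around to a huge positive value, no warning

    int  m = static_cast<int>( v.size() ); // explicit and readable
    int  b = m - 20;                       // -10 as expected

    (void)a; (void)b;
}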

#5313272 When should I use int32_t rather than int

Posted by on 29 September 2016 - 11:14 AM

for ( size_t i = 0; i < someVec.size(); ++i )
for ( int i = 0; i < someVec.size(); ++i )


I disagree with this totally. For me this is just over-thinking a very simple problem. Personally I prefer a signed size type, which makes backward iteration over the range much more natural (see the sketch at the end of this post). I don't think we need an unsigned 64-bit size type. I cannot imagine needing more than 2B objects in a container, in particular not for games.


for (std::vector<int>::size_type i=0; i<someVec.size(); ++i)


This is beyond ugly for such a simple problem. It is also not more correct than what L. Spiro wrote, since the size type is guaranteed to be size_t for the standard containers.
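
A small sketch of the backward-iteration point with a signed index (my own illustration):

#include <vector>

void ProcessBackwards( const std::vector<float>& values )
{
    // With an unsigned index, i >= 0 is always true and the loop never terminates.
    // With a signed index, backward iteration reads naturally:
    for ( int i = static_cast<int>( values.size() ) - 1; i >= 0; --i )
    {
        // use values[i]
    }
}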

#5313122 When should I use int32_t rather than int

Posted by on 28 September 2016 - 05:07 PM

int doesn't give you any guarantee on the number of bytes it uses. So when you need this guarantee, you need to use one of the sized integers, e.g. for serialization. For performance there is actually int_fast32_t. E.g., IIRC on the PS3 the fast integer was actually 64-bit, and they didn't want to make the regular int 64-bit. We used the fast int in for loops. Personally I would *not* worry about this.
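
A short sketch of the distinction (hypothetical names, just to illustrate the two use cases):

#include <cstdint>

// Sized type where the exact byte layout must be guaranteed, e.g. serialization:
struct FileHeader
{
    int32_t version;    // always exactly 4 bytes on every platform
    int32_t assetCount;
};

// Fast type where only speed matters, not the exact width:
int64_t Sum( const int32_t* values, int32_t count )
{
    int64_t total = 0;
    for ( int_fast32_t i = 0; i < count; ++i ) // at least 32 bits, whatever is fastest
        total += values[i];
    return total;
}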

#5313058 Turning a Physics degree into Game Development material

Posted by on 28 September 2016 - 10:52 AM

I actually took a very similar route. I studied structural engineering in college and discovered my joy in programming when I was about halfway through my classes. Changing my major wasn't an option, so I specialized in Computational Engineering and taught myself all about programming in my spare time. My first job in the industry was writing a physics engine for the PS3, so my specialization actually helped me a lot. So there are quite a few possibilities for you, like Havok/MS or PhysX/NVIDIA, to get a foot into the industry with your background. There are also quite a few game studios looking for people with a background in physics.


For me personally, I decided I wanted to become a physics/collision/animation specialist as this fit my background well. I also got interested in tools programming recently. It seems popular to write 'an engine' as a student, but from my experience this is the least interesting problem to solve. It is better to take a more specialized (kind of unsolved) problem, add that to Unity or Unreal, and show this in your portfolio. On the other hand, you say you want to make a game. So if you are more interested in the design process, this would be more orthogonal to your education, but I don't believe this would be a problem. I recommend figuring out which part of the development process you are interested in and then becoming good at it. Your education will not be the reason if you don't succeed, in my opinion.

#5312167 Question about Jacobian in Joint Constraint formula

Posted by on 23 September 2016 - 02:34 PM

It looks nearly correct. I think you just have the signs and dimensions a bit mixed up. Here is how to derive the constraint.


// Position constraint

C = pb - pa


// Velocity constraint (the time derivative of the position constraint)

dC/dt = dpb/dt - dpa/dt

      = vb + wb x rb - va - wa x ra

      = vb - rb x wb - va + ra x wa

      = vb - skew( rb ) * wb - va + skew( ra ) * wa



The velocity constraint has a special form:


dC/dt = del_C / del_x * dx /dt + del_C / del_t = J * v + del_C / del_t


If the velocity constraint doesn't depend explicitly on time (del_C / del_t = 0) - which it doesn't in our case - this simply becomes:


dC/dt =  J * v


So finally we can find the Jacobian by inspection:


J = [ -1 skew( ra ) 1  -skew( rb ) ] 


There are a couple of things here. v is the linear velocity at the center of mass and w (read omega) is the angular velocity. The r are the offset vectors (lever arms) from the center of mass to the constrained points, and skew( r ) is the cross-product matrix such that r x w = skew( r ) * w. Also note that '1' is not a vector, but the identity matrix. The Jacobian is a 3 x 12 matrix. If you build the effective mass matrix Me^-1 = J * W * JT, the result will be a 3x3 matrix. You can solve the constraint as a 3D block using e.g. sequential impulses. Also, for clarification, del_ is the *partial* derivative. If you are interested in the math check here: https://en.wikipedia.org/wiki/Total_derivative
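
A minimal sketch of the skew matrix and the resulting 3x3 effective mass (Vec3/Mat33 are assumed helper types, not from the original post):

struct Vec3  { float x, y, z; };
struct Mat33 { float m[3][3]; };

// Cross-product (skew-symmetric) matrix such that Skew( r ) * w == r x w
Mat33 Skew( const Vec3& r )
{
    Mat33 s = { { {  0.0f, -r.z,   r.y  },
                  {  r.z,   0.0f, -r.x  },
                  { -r.y,   r.x,   0.0f } } };
    return s;
}

// With J = [ -1  skew( ra )  1  -skew( rb ) ] and W = diag( 1/ma, Ia^-1, 1/mb, Ib^-1 ):
//   J * W * JT = ( 1/ma + 1/mb ) * Identity
//              + Skew( ra ) * Ia^-1 * Skew( ra )^T
//              + Skew( rb ) * Ib^-1 * Skew( rb )^T
// Inverting this 3x3 matrix gives the effective mass used to solve the 3D block.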


There were plenty of great talks about this at the GDC Physics Tutorial over the past years. In particular I would check out the talks by Erin Catto and Richard Tonge. You can find all presentations here: http://box2d.org/downloads/

#5312005 Question about Jacobian in Joint Constraint formula

Posted by on 22 September 2016 - 06:26 PM

Your observation is absolutely correct. Choosing a constraint function such that the Jacobian is not defined when the constraint error is zero (C = 0) is a very poor choice, for exactly the reasons you describe. Usually a distance constraint is formulated as C = |pb - pa| - L. If L is reasonably large, the Jacobian will be fine. Also note that the correct constraint to hold two points coincident is C = pb - pa. The latter is a 2D/3D constraint (hence the bold notation).


The 1D distance constraint is a poor choice not only because it is undefined when C = 0. Let's assume we actually have a small displacement and get a valid Jacobian. The constraint can only handle forces along the offset vector between the two connected points. If you apply a force orthogonal to that direction, the constraint will not handle it as expected, and the objects will drift apart until the constraint picks it up in the next frame.
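
A tiny self-contained sketch of the degenerate case (my own helper, not from the thread):

#include <cmath>

struct Vec3 { float x, y, z; };

// The 1D distance constraint C = |pb - pa| - L builds its Jacobian from the unit
// direction n = ( pb - pa ) / |pb - pa|, which does not exist for coincident points.
bool ConstraintDirection( const Vec3& pa, const Vec3& pb, Vec3& n )
{
    Vec3 d = { pb.x - pa.x, pb.y - pa.y, pb.z - pa.z };
    float len = std::sqrt( d.x * d.x + d.y * d.y + d.z * d.z );
    if ( len < 1e-6f )
        return false; // C ~ 0 with L = 0: no valid direction and no valid Jacobian

    n = { d.x / len, d.y / len, d.z / len };
    return true;
}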

#5311102 3D Manipulators/Gizmos in DirectX

Posted by on 16 September 2016 - 12:12 PM

What you are looking for is called 'Direct Manipulation'. I liked this resource: http://www.codeproject.com/Articles/35139/Interactive-Techniques-in-Three-dimensional-Scenes


There was also a SIGGRAPH course in 2002: https://www.siggraph.org/s2002/conference/courses/crs20.html

It was downloadable, but I cannot find it anymore. Maybe your Google-Fu is better than mine. Otherwise PM me and I'll send you the PDF.

#5311099 Continous collision for plane of triangles vs AABB

Posted by on 16 September 2016 - 12:04 PM

There are a couple of options to compute the TOI:

1) Conservative Advancement (B.Mirtich)
2) Bracketed Advancement (E. Catto)
3) Convex Cast (G. v. d. Bergen)
4) Special case version using continuous SAT

They all use basically the same idea. The first two require a stable version of GJK to compute the distance inside the loop. If you have only linear movement (no rotation), CA is great as the root finding is trivial. For CCD with rotation, CA has problems: it might iterate forever and might not even find a solution. This is solved in bracketed advancement. A pathological example would be an ice hockey puck spinning in place.
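
A rough, self-contained sketch of conservative advancement for the purely linear case, with two spheres standing in for the GJK distance query (my own simplification):

#include <cmath>

struct Vec3 { float x, y, z; };

float Length( const Vec3& v ) { return std::sqrt( v.x * v.x + v.y * v.y + v.z * v.z ); }

// Advance time while the closest distance is above a tolerance. Because the motion
// is linear, stepping by distance / relative speed can never tunnel past the contact.
float SphereToi( const Vec3& ca, float ra, const Vec3& va,
                 const Vec3& cb, float rb, const Vec3& vb, float maxTime )
{
    const float tolerance = 1e-4f;
    Vec3 relVel = { vb.x - va.x, vb.y - va.y, vb.z - va.z };
    float speed = Length( relVel );
    if ( speed == 0.0f )
        return maxTime; // no relative motion, no impact

    float t = 0.0f;
    while ( t < maxTime )
    {
        Vec3 d = { ( cb.x + vb.x * t ) - ( ca.x + va.x * t ),
                   ( cb.y + vb.y * t ) - ( ca.y + va.y * t ),
                   ( cb.z + vb.z * t ) - ( ca.z + va.z * t ) };
        float dist = Length( d ) - ra - rb; // closest distance between the spheres
        if ( dist < tolerance )
            return t;                       // time of impact
        t += dist / speed;                  // conservative step
    }
    return maxTime;                         // no impact within the timestep
}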

If you are not deep into this topic, what I am saying might not make sense to you. This is a pretty large topic, and here are some suggestions for reading:

1) B. Mirtich's PhD explaining CA: http://www.kuffner.org/james/software/dynamics/mirtich/index.html
2) E. Catto's GDC presentations: http://box2d.org/files/GDC2013/ErinCatto_GDC2013.zip
3) Gino has a couple of great presentation here: http://dtecta.com/publications

In particular:
Ray Casting against General Convex Objects with Application to Continuous Collision Detection: http://dtecta.com/papers/unpublished04raycast.pdf
Continuous Collision Detection of General Convex Objects under Translation: http://dtecta.com/papers/gdc2005_Gino_vandenBergen.pps

4) There might be a special-case version for swept AABB vs. triangle. I would check Christer Ericson's book 'Real-Time Collision Detection'.


#5311038 Continous collision for plane of triangles vs AABB

Posted by on 16 September 2016 - 12:26 AM

You build the swept AABB around the original AABB at its start and end positions (e.g. SweptAabb = Aabb(t1) + Aabb(t2)). You find/query all triangles within that swept AABB, usually utilizing some kind of broadphase. This is the set of potentially colliding triangles, and you compute the TOI for each triangle in this set while keeping track of the minimum TOI.

#5309581 Quaternion multiplication

Posted by on 05 September 2016 - 07:41 PM

Let's define a quaternion like this:

struct Quaternion
{
    Vector3 v;
    float s;
};

Then the quaternion product is defined as:

Quaternion operator*( const Quaternion& q1, const Quaternion& q2 )
{
    Quaternion result;
    result.v = cross( q1.v, q2.v ) + q1.s * q2.v + q2.s * q1.v;
    result.s = q1.s * q2.s - dot( q1.v, q2.v );

    return result;
}

Both your functions are equivalent. The vector-ish form is just less common in the literature in my experience. If you write out the definition above component-wise, you will get the same result!
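
Writing the product above out component-wise (assuming Vector3 has x, y, z members) gives the familiar scalar form, shown here as a sketch for comparison:

Quaternion Mul( const Quaternion& a, const Quaternion& b )
{
    Quaternion r;
    r.v.x = a.s * b.v.x + a.v.x * b.s   + a.v.y * b.v.z - a.v.z * b.v.y;
    r.v.y = a.s * b.v.y + a.v.y * b.s   + a.v.z * b.v.x - a.v.x * b.v.z;
    r.v.z = a.s * b.v.z + a.v.z * b.s   + a.v.x * b.v.y - a.v.y * b.v.x;
    r.s   = a.s * b.s   - a.v.x * b.v.x - a.v.y * b.v.y - a.v.z * b.v.z;
    return r;
}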

#5308938 Resolving and Remembering Collisions

Posted by on 31 August 2016 - 07:30 PM

Here is how continuous physics works in a nutshell. We start with the general discrete solver step:


1) You integrate external forces (e.g. gravity) 

2) You solve the joint and contact constraints

3) You integrate the velocities to find the new positions


Of course we might have tunneled by now. So in a continuous solver the new positions are tentative, and we now enter the continuous part of the solver:


1) For each shape you build the swept AABB and add it into the broadphase. 

2) For each contact you compute and cache the TOI

3) You advance the bodies to the minimum TOI, build the contacts and solve them

4) You invalidate the cached TOI for the involved bodies and recompute them

5) Repeat from 3) until you have reached the end of the timestep


I left out a lot of detail, but this is the general idea. Note that you still need to run the normal contact phase before entering the continuous phase. 


So to answer your question: yes, of course you need to compute the contacts at each TOI.
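
As a tiny concrete illustration of why the positions from the discrete step are only tentative and why the TOI matters, here is a 1D tunneling example with my own made-up numbers:

#include <cstdio>

int main()
{
    // A body at x = 0 moving at 100 m/s toward a wall at x = 1, stepped at 60 Hz.
    float x = 0.0f, v = 100.0f, wall = 1.0f, dt = 1.0f / 60.0f;

    float xNew = x + v * dt; // discrete step: x ~ 1.67, already past the wall
    if ( x < wall && xNew > wall )
    {
        float toi = ( wall - x ) / ( v * dt ); // normalized time of impact within the step
        printf( "tunneled, TOI = %f of the step\n", toi );
        xNew = x + v * dt * toi;               // advance only to the time of impact
    }
    printf( "x = %f\n", xNew );
}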