There is no possible way that code speeds up the calculation of sin and cos values for vectors, and it introduces a reentrancy problem (it's not thread safe).
When optimizing this sort of code there are three things you must do to achieve state-of-the-art performance.
1) Ensure you are using the greatest known mathematical reduction of the algorithm
2) Eliminate all branches (even if it means more calculations)
3) Use vectorized operations, e.g. SIMD, NEON, AVX, etc.
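The idea behind step 2 can be sketched with a scalar example (the helper name is mine, just for illustration): instead of an `if`, build a mask from the comparison and blend the two results with bitwise operations. SIMD units apply the same compare-and-blend pattern lane by lane.

```cpp
#include <cstdint>

// Branchless min for 32-bit ints. The comparison produces 0 or 1; negating
// it yields an all-ones or all-zeros mask, which selects a or b without a
// branch the CPU could mispredict.
static inline int32_t branchless_min(int32_t a, int32_t b) {
    int32_t mask = -static_cast<int32_t>(a < b);  // 0xFFFFFFFF if a < b, else 0
    return (a & mask) | (b & ~mask);
}
```

This does slightly more arithmetic than the branching version, which is exactly the trade-off point 2 describes.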
Optimizing code for vectorized operations can be very annoying.
Algorithms tend to favor separate arrays for each element/dimension (structure-of-arrays), as opposed to interleaved arrays, which are more convenient to deal with.
This cuts down on the loading and packing time of the SIMD registers, and that can be critical to utilizing all available computation units.
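To make the layout difference concrete, here is a minimal sketch (type names are mine). With the separate-arrays layout, four consecutive x values sit contiguously in memory, so a 4-wide register fills with one load and the compiler can auto-vectorize the loop; with the interleaved layout, the same load would need a gather or shuffle to pack the lanes.

```cpp
#include <cstddef>

// Interleaved (array-of-structures): x, y, z for one point are adjacent,
// so collecting four x values means strided loads plus shuffling.
struct PointAoS { float x, y, z; };

// Separate arrays (structure-of-arrays): consecutive x values are
// contiguous and load straight into SIMD lanes.
struct PointsSoA {
    float* x;
    float* y;
    float* z;
    std::size_t count;
};

// A streaming loop over one component: trivially auto-vectorizable,
// no packing or unpacking of registers required.
void scale_x(PointsSoA& p, float s) {
    for (std::size_t i = 0; i < p.count; ++i)
        p.x[i] *= s;
}
```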
Doing the above and eliminating any IEEE-754 or C-standard overhead (e.g. if the rounding rules of the unit differ from the standard's, it has to perform a conversion when storing) is how you make it fast.
The old fsincos instruction got it done in about 137 clock cycles; SSE2 and newer should have faster or more vectorized options.
If you can sacrifice accuracy, you can use an estimation of the sin and cos values; those algorithms are generally just multiplies and accumulates, and you can get it done in well under 100 clock cycles.
Thanks for the swift response, guys. I've done some research and it actually seems like this approach would make much more sense than my current one. They really didn't touch on this at all when I did my degree, so it's refreshing to learn about. I do have to ask, though, what are the negatives? This approach seems better in almost every respect. Is it more a case of inheritance still having its uses in niche situations?
The negative is that the result is completely systematic, as in a game like Dungeon Siege.
They effectively created another set of unbreakable laws of physics for that game, which makes the entire thing feel monotonous.
Nothing unexpected ever happened in the game, because they never broke the laws of their own universe.
The other, much-maligned approach means every new object introduces new game mechanics (or at least has the potential to), as in a game like Minecraft.
I think those two games demonstrate the emergent effect of taking either approach to an extreme.
The (de)serialization (second) approach can have a significant performance impact for even moderate data sets.
In a tool I wrote long ago, we had to stop doing it that way and use the command pattern with undo/redo stacks; otherwise, every time the end user made a trivial modification there was a pause while the data was serialized.
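A minimal sketch of that command-pattern setup (the class names are illustrative, not from the original tool): each edit records just enough state to reverse itself, so undo/redo is a cheap push/pop and nothing gets serialized on every modification.

```cpp
#include <memory>
#include <stack>
#include <string>
#include <vector>

struct Document { std::vector<std::string> lines; };

// A command knows how to apply itself and how to revert itself.
struct Command {
    virtual ~Command() = default;
    virtual void apply(Document&) = 0;
    virtual void revert(Document&) = 0;
};

struct AppendLine : Command {
    std::string text;
    explicit AppendLine(std::string t) : text(std::move(t)) {}
    void apply(Document& d) override { d.lines.push_back(text); }
    void revert(Document& d) override { d.lines.pop_back(); }
};

class Editor {
    Document doc_;
    std::stack<std::unique_ptr<Command>> undo_, redo_;
public:
    Document& doc() { return doc_; }
    void execute(std::unique_ptr<Command> c) {
        c->apply(doc_);
        undo_.push(std::move(c));
        redo_ = {};  // a fresh edit invalidates the redo history
    }
    void undo() {
        if (undo_.empty()) return;
        undo_.top()->revert(doc_);
        redo_.push(std::move(undo_.top()));
        undo_.pop();
    }
    void redo() {
        if (redo_.empty()) return;
        redo_.top()->apply(doc_);
        undo_.push(std::move(redo_.top()));
        redo_.pop();
    }
};
```

The cost of an edit is now proportional to that one edit, not to the size of the whole document, which was the pause we were fighting.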