A plane (a,b,c,d) is really a four-dimensional trivector, not an ordinary vector. When you take the wedge product of three points P, Q, and R, it naturally produces a plane in which d = -dot(N, P) = -dot(N, Q) = -dot(N, R), where N = (a,b,c). When you take the dot product between a homogeneous point (x,y,z,w) and a plane (a,b,c,d), you get a*x + b*y + c*z + d*w, which is the signed distance between the point and the plane, multiplied by w and by the magnitude of N. A positive sign in front of the d is the correct choice.
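To make that concrete, here's a minimal sketch (the Vec3/Plane types and function names are mine, not from any particular library) of building a plane from three points and taking the point-plane dot product:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

Vec3 Cross(const Vec3& u, const Vec3& v)
{
    return { u.y * v.z - u.z * v.y, u.z * v.x - u.x * v.z, u.x * v.y - u.y * v.x };
}

float Dot(const Vec3& u, const Vec3& v)
{
    return u.x * v.x + u.y * v.y + u.z * v.z;
}

struct Plane { float a, b, c, d; };

// Equivalent to the wedge product P ^ Q ^ R: the normal N comes from two
// edge vectors, and d = -Dot(N, P) so every point on the plane dots to zero.
Plane PlaneFromPoints(const Vec3& p, const Vec3& q, const Vec3& r)
{
    Vec3 n = Cross({ q.x - p.x, q.y - p.y, q.z - p.z },
                   { r.x - p.x, r.y - p.y, r.z - p.z });
    return { n.x, n.y, n.z, -Dot(n, p) };
}

// Dot product between homogeneous point (x,y,z,w) and plane (a,b,c,d).
// The result is the signed distance scaled by w and by the magnitude of N.
float DotPointPlane(const Vec3& p, float w, const Plane& f)
{
    return f.a * p.x + f.b * p.y + f.c * p.z + f.d * w;
}
```

With a unit normal and w = 1, the result is exactly the signed distance.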
This kind of stuff is discussed very thoroughly in my new book that comes out in August:
Dirk, when a homogeneous point is treated as a single-column matrix that is transformed by multiplying on the left by a 4x4 matrix M, a 4D plane must be treated as a single-row matrix that is transformed by multiplying on the right by the adjugate of the matrix M. If the translation portion of the matrix M is not zero, then a plane will not be transformed correctly if you treat it the same as a point.
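Here's a minimal sketch of that rule, using a pure translation so the inverse is trivial. (For an invertible M, the adjugate equals det(M) times the inverse of M, and because planes are homogeneous that determinant scale doesn't matter, so multiplying on the right by the inverse of M gives the same plane.)

```cpp
#include <cassert>

// Matrices are row-major float[4][4]; points are columns, planes are rows.

// Point transform: column vector multiplied on the left by M.
void TransformPoint(const float m[4][4], const float p[4], float out[4])
{
    for (int i = 0; i < 4; i++)
        out[i] = m[i][0]*p[0] + m[i][1]*p[1] + m[i][2]*p[2] + m[i][3]*p[3];
}

// Plane transform: row vector multiplied on the right by the inverse of M
// (same plane as using the adjugate, up to an irrelevant homogeneous scale).
void TransformPlane(const float plane[4], const float mInv[4][4], float out[4])
{
    for (int j = 0; j < 4; j++)
        out[j] = plane[0]*mInv[0][j] + plane[1]*mInv[1][j]
               + plane[2]*mInv[2][j] + plane[3]*mInv[3][j];
}
```

If you instead transformed the plane like a point, the translation would never touch it (its w component gets multiplied by the normal, which carries no offset), and points that were on the plane would no longer dot to zero.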
The wedge product mentioned in that pseudocode is between two 2D vectors. Calling them U and V, this wedge product gives the following numerical quantity (which is technically a pseudoscalar or antiscalar, but it doesn't matter):
U ∧ V = U.x * V.y - U.y * V.x
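As a tiny concrete version of that formula (the function name is mine), note that the sign tells you the winding: positive when V lies counterclockwise from U, negative when clockwise, and zero when they're parallel.

```cpp
#include <cassert>

// 2D wedge product U ^ V = U.x * V.y - U.y * V.x.
// Technically a pseudoscalar/antiscalar; numerically just a float.
float Wedge2D(float ux, float uy, float vx, float vy)
{
    return ux * vy - uy * vx;
}
```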
I have a Grassmann algebra talk that provides an introduction to the wedge product:
What's been bothering me more about Lengyel's approach is that he weights tangents by the equivalent of inverse triangle area (area = cross(p2-p1, p3-p1)/2). Wouldn't it make more sense (and avoid an FP division) to scale by (the equivalent of) triangle area?
Are you talking about the value of r, which is 1.0F / (s1 * t2 - s2 * t1)? This divides out the "area" of the triangle in texture space so that the scale of the texture map doesn't matter, but does not have anything to do with the geometric position of the vertices. The tangents are still weighted based on the geometric area of the triangles because the variables x1, y1, z1, x2, y2, and z2 are not normalized in any way.
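Here's a condensed sketch of the computation being discussed (the names are hypothetical, and this is not the exact code from the book): r divides out the texture-space area, while the geometric edge vectors are left unnormalized, so the resulting tangent is implicitly weighted by triangle size.

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Unnormalized tangent for one triangle with vertices p0, p1, p2 and
// texture-coordinate deltas (s1,t1) = uv1 - uv0 and (s2,t2) = uv2 - uv0.
Vec3 TriangleTangent(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                     float s1, float t1, float s2, float t2)
{
    // Geometric edge vectors, deliberately NOT normalized: accumulating these
    // per vertex weights each triangle's contribution by its geometric size.
    float x1 = p1.x - p0.x, y1 = p1.y - p0.y, z1 = p1.z - p0.z;
    float x2 = p2.x - p0.x, y2 = p2.y - p0.y, z2 = p2.z - p0.z;

    // r divides out the triangle's signed area in texture space so the
    // texture map's scale doesn't affect the tangent direction.
    float r = 1.0F / (s1 * t2 - s2 * t1);
    return { (t2 * x1 - t1 * x2) * r,
             (t2 * y1 - t1 * y2) * r,
             (t2 * z1 - t1 * z2) * r };
}
```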
I'm also concerned that the format will undergo changes over time that could break previous functionality. Not sure how valid that concern is, though.
You don't need to worry about that. I absolutely, positively won't be breaking backward compatibility with any future additions to the OpenGEX format. The current spec is extremely solid and stable.
2) The exporters seem pretty solid and are maintained by Eric, but if I recall, there are some slight differences in support between the 3D modeling packages. I remember the Blender version not supporting LODs for meshes, which is probably more of a Blender thing than an OpenGEX thing. Not really necessary for me, but there might be other features the Blender exporter lacks, and Blender is probably what we'll use.
The Blender exporter supports everything that the Max and Maya exporters support. None of them actually have LOD support because it's not something that's generally available (or done well) in the modeling software itself.
I've been developing a game engine for most of my professional career, so I can tell you about my experiences. First, let me mention that my book Mathematics for 3D Game Programming & Computer Graphics has a lot of useful engine development information in it. Don't let the first few chapters put you off. They are pretty dry and purely mathematical, but after that, the book gets into a lot of stuff that's specifically needed to make a decent game engine, like visibility determination, collision detection, basic physics, shading, curves, and fluid/cloth simulation. There is also the Game Engine Gems series, but those are collections of shorter chapters that discuss intermediate to advanced techniques. Those books would probably be interesting after you've got an engine up and running and you're looking for some cool effect to implement, but there are a few chapters that would be generally useful to anyone (like a chapter in GEG2 about bit hacks for games).
Satharis made a couple of comments above that I wholeheartedly agree with. Developing a game engine does take a long time, especially by yourself. I have been working on the C4 Engine for over 15 years now (although not continuously for the first five years), and there is still a lot of new stuff that can be added. It's important to start small and work your way up. Don't lose touch with reality and think that you're going to build a complete engine of high quality in a few months' time, or even a few years. Choose very specific features to work on and implement them one at a time as best you can. After finishing some components of your engine, you should find yourself looking back and thinking that you now know a much better way of implementing the same thing. This ought to happen a lot in the first several years, and it's part of the learning process. Iteration is key to really discovering better ways of developing software and becoming a good engineer. Don't be afraid to completely throw away some system you spent a lot of time working on and start over. If you're not under any kind of time pressure, it will be worth it in the long run.
Way back in early 1999, the C4 Engine could barely render a handful of primitive geometry types (plane, cylinder, sphere, etc.), some particle systems, and some basic shading effects. After many years of hard work and meticulous refinement, it can do all of this today. It has been very rewarding, but also very difficult at times.
Btw, if you're going to write some kind of import component, I recommend using the Open Game Engine Exchange format (OpenGEX). The OBJ format doesn't support enough features (not even close), and FBX will have you tearing your hair out. OpenGEX supports Maya, and there's a free C++ import template that you can start with when building your own importer.
SmkViper, I agree with most of your post, but I'd like to point out that exceptions do not actually have a zero run-time cost like compiler vendors would like you to believe. In my tests, simply enabling EH and RTTI in the compiler has about a 1% performance impact without a single usage in the code (no try, catch, throw, typeid, or dynamic_cast anywhere). This is very small, yes, but it's measurable, and I would not argue against someone claiming that it's worth it to get the extra functionality of EH if that's what somebody really wanted. The overhead comes from the fact that a larger amount of code and data is generated by the compiler, and this has an effect on cache usage while the program is executing. Fragments of code that used to be side by side in the same cache line are now separated by a little extra code that the compiler inserted to handle an exception being thrown. And the zero overhead claim only applies when no exception is actually thrown. Once you start sprinkling try...catch blocks in the code, it does have a nonzero cost in terms of actual instructions executed, including new branches that wouldn't exist otherwise.
But as you discussed, the real downside to enabling EH is the fact that the programmers have to worry about what will happen for every function they call if an exception is thrown inside that function, and they have to jump through extra hoops to make sure everything is cleaned up properly in the case that the called function doesn't return and the remaining code in the calling function doesn't get executed. I'm normally a big proponent of writing clean, bulletproof code, but the burden of wrapping up all your pointers and implementing strong RAII for absolutely everything goes a little too far for me. There most definitely is the matter of programmer productivity to consider here, and it is more expensive in terms of time and money to write exception-safe code.
BitMaster, the more I think about it, the more I'm OK with using something like polymorphic_downcast with RTTI enabled in debug mode, so I think I may have overreacted to it. It's just that using RTTI or Boost are both things that would make virtually all of my peers give me a dirty look, and seeing them suggested together at once made me go all hulk-smash. Sorry if I came across as abrasive, but nothing I said justifies you resorting to name calling and derision of my qualifications. (And if you're going to do more, please grow a pair and use your real name.)
Then why did you feel the need to downvote me? The huge advantage of boost::polymorphic_downcast is that it verifies the type in debug builds (always a good idea to assert things which should be true there) and is a static_cast in non-debug builds.
Again, the pros do not use RTTI and exception handling, even if it's only in debug builds. If you want to ignore the collective wisdom of all the top developers in the industry, you do so at your own peril.
However if I was doing that I would instead add helper non-member functions like "TextWidget* GetTextWidget(int ID)" that return exactly what I want, pre-casted.
That would work fine, but it's ugly and unnecessary. You're just wrapping up a perfectly good language feature in an extra set of functions that increase the cost of maintaining the code. (When someone adds a new widget type, will they remember to add the separate Get function, too?)
Internally I'd still use dynamic_cast instead of static_cast because of type-safety.
Experienced game developers do not build with RTTI or exception handling enabled. There is no dynamic casting, and there is no throwing. In this example, each widget should know its own dynamic type (stored in the base class). You could check that type in a debug build and simply assert if you got one different from what you're casting to.
The need to cast to a subclass type arises all the time in well-designed architectures. Please stop it with the scaremongering about downcasting being a symptom of bad design. It's not. If you want to eliminate it everywhere, then your code is going to turn to shit because you'll be jumping through all kinds of unnecessary hoops like adding lots of virtual functions to do simple things.
As a basic example, consider a GUI system in which there is a class hierarchy of widgets. Suppose there is a resource format of some kind that stores all the widgets for a dialog box, and suppose that when it's loaded, the widgets of the appropriate subclasses (TextWidget, ButtonWidget, MenuWidget, etc.) are created and stored in a tree representing the layout of the dialog. Now some code is going to load that resource and want to access particular widgets as their specific subclass types, e.g., to change the content of a TextWidget to the user's name or enable/disable a ButtonWidget depending on some condition. However, the function that locates those particular widgets (perhaps by name or some kind of ID number) after the dialog is loaded will always return a pointer to the Widget base class. Knowing what the widget's actual subclass type must be, your program will then use static_cast to change a Widget * into a TextWidget *, ButtonWidget *, etc., so it can call the functions specific to those subclasses. There's nothing wrong with this.
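Here's a minimal sketch of that pattern (all names are hypothetical): each widget stores its dynamic type in the base class, and a checked cast asserts on the tag in debug builds while reducing to a plain static_cast in release. No RTTI is involved.

```cpp
#include <cassert>

enum class WidgetType { Text, Button, Menu };

class Widget
{
public:
    explicit Widget(WidgetType type) : widgetType(type) {}
    virtual ~Widget() = default;
    WidgetType GetWidgetType() const { return widgetType; }
private:
    WidgetType widgetType;   // dynamic type stored in the base class
};

class TextWidget : public Widget
{
public:
    static constexpr WidgetType kType = WidgetType::Text;
    TextWidget() : Widget(kType) {}
    void SetText(const char* s) { text = s; }
    const char* GetText() const { return text; }
private:
    const char* text = "";
};

// Verifies the type tag with assert (a no-op in release builds),
// then performs an ordinary static_cast.
template <typename T>
T* CheckedCast(Widget* widget)
{
    assert(widget->GetWidgetType() == T::kType);
    return static_cast<T*>(widget);
}
```

Code that looks up a widget by ID would get a Widget * back and write something like `CheckedCast<TextWidget>(widget)->SetText(userName)`.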
To be clear, I doubt VAOs perform that badly. While you've got Valve and a couple of scattered devs saying otherwise, on the other end you have every single OpenGL driver developer out there saying they perform better. And to go against those kinds of people, you'd need a few more big names than just Valve saying it: someone who either is known for being a graphics powerhouse (Crytek, DICE), so they're bound to know what the fuck they're doing, or someone who has been working with OpenGL for a long-ass time (say, Carmack).
So far I haven't seen such complaints from other developers that have been porting new games to OpenGL lately (Firaxis, Aspyr, 4A Games, etc).
Maybe you should do some research on my qualifications before implying that I don't know what I'm talking about.
Matias Goldberg already posted the following link, but I'll put it here again anyway. It clearly demonstrates that any assertion that multiple VAOs are faster is dead wrong. It's important to note that Valve came to the same conclusions that I did.
I don't give a shit what that synthetic test in your link says. It does not represent real-world usage cases, and that makes the data meaningless. The engine makers know better about this stuff than the driver writers seem to, and I have talked to IHVs at length about this issue. However, no one seems to be interested in doing anything about it. It's very frustrating that something that should be faster (multiple VAOs), by its very design, is instead significantly slower.