Eric Lengyel

Member Since 25 Nov 2003

#5192598 Casting Pointer to Derived Type

Posted by Eric Lengyel on 13 November 2014 - 01:21 AM

SmkViper, I agree with most of your post, but I'd like to point out that exceptions do not actually have a zero run-time cost, as compiler vendors would like you to believe. In my tests, simply enabling EH and RTTI in the compiler has about a 1% performance impact without a single usage in the code (no try, catch, throw, typeid, or dynamic_cast anywhere). This is very small, yes, but it's measurable, and I would not argue against someone claiming that it's worth it to get the extra functionality of EH if that's what somebody really wanted. The overhead comes from the fact that a larger amount of code and data is generated by the compiler, and this has an effect on cache usage while the program is executing. Fragments of code that used to be side by side in the same cache line are now separated by a little extra code that the compiler inserted to handle an exception being thrown. And the zero-overhead claim really only applies to throw statements that are never executed. Once you start sprinkling try...catch blocks into the code, there is a nonzero cost in terms of actual instructions executed, including new branches that wouldn't exist otherwise.

 

But as you discussed, the real downside to enabling EH is that programmers have to worry about what will happen for every function they call if an exception is thrown inside that function, and they have to jump through extra hoops to make sure everything is cleaned up properly when the called function doesn't return normally and the remaining code in the calling function never executes. I'm normally a big proponent of writing clean, bulletproof code, but the burden of wrapping up all your pointers and implementing strong RAII for absolutely everything goes a little too far for me. There most definitely is a matter of programmer productivity to consider here, and it is more expensive in terms of time and money to write exception-safe code.
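A minimal sketch of the kind of wrapping that entails, using a hypothetical Process() function purely for illustration (std::unique_ptr is just the standard RAII tool for the job):

#include <memory>

void Process(char *buffer);   // hypothetical work function that might throw

// Without exceptions, straight-line new/delete is perfectly safe.
void WithoutExceptions()
{
    char *buffer = new char[1024];
    Process(buffer);
    delete[] buffer;
}

// With EH enabled, a throw inside Process() would skip the delete[] above
// and leak, so every owning pointer gets wrapped in an RAII object instead.
void ExceptionSafe()
{
    std::unique_ptr<char[]> buffer(new char[1024]);
    Process(buffer.get());
}   // freed here whether Process() returns normally or throws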

 

BitMaster, the more I think about it, the more I'm OK with using something like polymorphic_downcast with RTTI enabled in debug mode, so I think I may have overreacted to it. It's just that using RTTI or Boost are both things that would make virtually all of my peers give me a dirty look, and seeing them suggested together at once made me go all hulk-smash. Sorry if I came across as abrasive, but nothing I said justifies you resorting to name calling and derision of my qualifications. (And if you're going to do more, please grow a pair and use your real name.)




#5192309 Casting Pointer to Derived Type

Posted by Eric Lengyel on 11 November 2014 - 04:05 PM

Then why did you feel the need to downvote me? The huge advantage of boost::polymorphic_downcast is that it verifies the type in debug builds (always a good idea to assert things which should be true there) and is a static_cast in non-debug builds.

 

Again, the pros do not use RTTI and exception handling, even if it's only in debug builds. If you want to ignore the collective wisdom of all the top developers in the industry, you do so at your own peril.




#5192305 Casting Pointer to Derived Type

Posted by Eric Lengyel on 11 November 2014 - 03:52 PM

However if I was doing that I would instead add helper non-member functions like "TextWidget* GetTextWidget(int ID)" that return exactly what I want, pre-casted.

 

That would work fine, but it's ugly and unnecessary. You're just wrapping up a perfectly good language feature in an extra set of functions that increase the cost of maintaining the code. (When someone adds a new widget type, will they remember to add the separate Get function, too?)

 

Internally I'd still use dynamic_cast instead of static_cast because of type-safety.

 

Experienced game developers do not build with RTTI or exception handling enabled. There is no dynamic casting, and there is no throwing. In this example, each widget should know its own dynamic type (stored in the base class). You could check that type in a debug build and simply assert if you got one different from what you're casting to.
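
A minimal sketch of that approach (the Widget hierarchy, the type enum, and the CheckedCast() helper are hypothetical names, not from any particular engine):

#include <cassert>

enum WidgetType { kWidgetText, kWidgetButton, kWidgetMenu };

class Widget
{
    WidgetType widgetType;   // each subclass records its own dynamic type

public:
    explicit Widget(WidgetType type) : widgetType(type) {}
    virtual ~Widget() = default;

    WidgetType GetWidgetType() const { return widgetType; }
};

class TextWidget : public Widget
{
public:
    TextWidget() : Widget(kWidgetText) {}
};

// Debug-checked downcast: asserts on the stored type in debug builds and
// compiles down to a plain static_cast in release builds, with no RTTI.
template <typename T>
T *CheckedCast(Widget *widget, WidgetType expectedType)
{
    assert(widget->GetWidgetType() == expectedType);
    return static_cast<T *>(widget);
}

A call such as CheckedCast<TextWidget>(widget, kWidgetText) then traps in a debug build if the stored type doesn't match, and costs nothing extra in release.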




#5192223 Casting Pointer to Derived Type

Posted by Eric Lengyel on 11 November 2014 - 01:02 AM

The need to cast to a subclass type arises all the time in well-designed architectures. Please stop it with the scaremongering about downcasting being a symptom of bad design. It's not. If you want to eliminate it everywhere, then your code is going to turn to shit because you'll be jumping through all kinds of unnecessary hoops like adding lots of virtual functions to do simple things.

 

As a basic example, consider a GUI system in which there is a class hierarchy of widgets. Suppose there is a resource format of some kind that stores all the widgets for a dialog box, and suppose that when it's loaded, the widgets of the appropriate subclasses (TextWidget, ButtonWidget, MenuWidget, etc.) are created and stored in a tree representing the layout of the dialog. Now some code is going to load that resource and want to access particular widgets as their specific subclass types, e.g., to change the content of a TextWidget to the user's name or enable/disable a ButtonWidget depending on some condition. However, the function that locates those particular widgets (perhaps by name or some kind of ID number) after the dialog is loaded will always return a pointer to the Widget base class. Knowing what the widget's actual subclass type must be, your program will then use static_cast to change a Widget * into a TextWidget *, ButtonWidget *, etc., so it can call the functions specific to those subclasses. There's nothing wrong with this.
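
In code, the pattern described above looks roughly like this (Dialog, FindWidget(), and the widget member functions are hypothetical names standing in for whatever the GUI system actually provides):

#include <string>

class Widget { public: virtual ~Widget() = default; };

class TextWidget : public Widget
{
public:
    void SetText(const std::string &text);
};

class ButtonWidget : public Widget
{
public:
    void SetEnabled(bool enabled);
};

class Dialog
{
public:
    // Locates a widget by name in the loaded layout; it can only return the
    // base class type, because that's all the widget tree stores.
    Widget *FindWidget(const char *name);
};

void PopulateDialog(Dialog *dialog, const std::string &userName, bool canSave)
{
    // The caller knows from the dialog's design what the concrete types are,
    // so a static_cast is all that's needed to reach the subclass interface.
    static_cast<TextWidget *>(dialog->FindWidget("playerName"))->SetText(userName);
    static_cast<ButtonWidget *>(dialog->FindWidget("saveButton"))->SetEnabled(canSave);
}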




#5191848 Mutiple VAOs and VBOs

Posted by Eric Lengyel on 08 November 2014 - 03:43 PM

To be clear, I doubt VAOs perform that bad. While you got Valve and a couple of scattered devs saying otherwise, on the other end you have every single OpenGL driver developer out there saying they perform better. And to go against those kind of people, you'd need a bit more big names than just Valve to say it, someone that either is known for being a graphics powerhouse (Crytek, DICE) so they're bound to know what the fuck they're doing, or someone who has been working with OpenGL for a long ass time (say, Carmack).

 

So far I haven't seen such complaints from other developers that have been porting new games to OpenGL lately (Firaxis, Aspyr, 4A Games, etc).

 

Maybe you should do some research on my qualifications before implying that I don't know what I'm talking about.




#5191736 Mutiple VAOs and VBOs

Posted by Eric Lengyel on 07 November 2014 - 04:49 PM

 

If you had reproducible data demonstrating that correct use of VAO resulted in decreased performance, I wouldn't have a problem with your statement.

 

But since your data indicates it to be a wash in your case, and existing benchmarks note measurable performance gains, I'd prefer that we continue to teach best practices.

 

Matias Goldberg already posted the following link, but I'll put it here again anyway. It clearly demonstrates that any assertion that multiple VAOs are faster is dead wrong. It's important to note that Valve came to the same conclusions that I did.

 

http://the31stgame.com/blog/?p=39

 

I don't give a shit what that synthetic test in your link says. It does not represent real-world use cases, and that makes the data meaningless. The engine makers know better about this stuff than the driver writers seem to, and I have talked to IHVs at length about this issue. However, no one seems to be interested in doing anything about it. It's very frustrating that something that should be faster (multiple VAOs), by its very design, is instead significantly slower.




#5181774 How do u build game engines ?

Posted by Eric Lengyel on 20 September 2014 - 03:11 PM

This diagram should give you an idea about what makes up a complete game engine. As mentioned above, some of the pieces could be replaced by other middleware.

 

http://www.terathon.com/architecture.php




#5177149 COLLADA vs FBX... Are They Worthwhile for Generic Model Formats?

Posted by Eric Lengyel on 30 August 2014 - 10:34 PM

These are just some of the reasons we created the Open Game Engine Exchange format (OpenGEX):

 

http://opengex.org/




#5159967 Shadow Volumes - Edge finding

Posted by Eric Lengyel on 12 June 2014 - 01:19 AM

The description of the edgeArray parameter in the book is out of date. Sorry about that. An updated version of the code can be found here:

 

http://www.terathon.com/code/edges.html

 

This version explains that the edgeArray parameter should point to a preallocated buffer large enough to hold the maximum number of edges possible, which is three times the number of triangles in the mesh.
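
As a usage sketch, assuming the function and structure names from the book (the parameter list is paraphrased from memory, so defer to the page above):

// Preallocate the edge array for the worst case of three edges per triangle.
Edge *edgeArray = new Edge[triangleCount * 3];

// BuildEdges() fills the array and returns the number of unique edges it
// found, which can never exceed triangleCount * 3.
long edgeCount = BuildEdges(vertexCount, triangleCount, triangleArray, edgeArray);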




#5148261 Writing model 3D Model data to a new file as a new format

Posted by Eric Lengyel on 19 April 2014 - 07:01 PM


What I really miss from all these exchange formats, especially when using them to export content for a game engine, is support for custom attributes. You can add custom attributes in modeling tools, but they never get exported to Collada/FBX. Is that possible with OpenGEX?

 

It depends on exactly what kind of custom attributes you're talking about. OpenGEX supports custom per-vertex data and custom material attributes, but intentionally does not allow for general custom data all over the place. The reason for this is to avoid the creation of software-specific standards like you see in the Collada format through the use of the <technique> elements, making it necessary for importers to understand information whose format is specified over multiple poorly-maintained documents. Now if you're just talking about custom user-defined key-value properties like those supported in 3DS Max, then I can tell you that support for these is being considered for the next version of OpenGEX, and it's something that can easily be added to the existing exporters.




#5148021 Writing model 3D Model data to a new file as a new format

Posted by Eric Lengyel on 18 April 2014 - 06:18 PM

There is also the new OpenGEX format:

 

http://www.opengex.org/




#5127125 About 3dMax export Skeleton

Posted by Eric Lengyel on 28 January 2014 - 10:34 PM

I use IGame to export the skeleton, but I have some questions. I want to calculate the bind pose of the skeleton. The function

 

IGameSkin->GetInitBoneTM(IGameNode * boneNode, GMatrix &intMat)

 

returns the bone TM from when the skin was added, but it returns false when the parameter is the root bone. There is another function

 

GetInitSkinTM(GMatrix & intMat)

 

in IGameSkin, but it always returns an identity matrix. So how can I get the bind position of the root bone?

 

If GetInitBoneTM() returns false, then it suggests that the bone is not actually used by the skinned mesh.

 

The GetInitSkinTM() function returns the transform that was applied to the skin geometry at the time that it was bound to the skeleton, and not the transform of any particular bone.
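
For reference, a minimal sketch of how those two calls fit together, using the IGameSkin method signatures quoted above (the bone-iteration helpers GetTotalBoneCount() and GetIGameBone() are written from memory and may differ in your SDK version):

// Transform of the skin geometry itself at bind time (not any bone).
GMatrix skinBindTM;
gameSkin->GetInitSkinTM(skinBindTM);

int boneCount = gameSkin->GetTotalBoneCount();      // assumed helper
for (int i = 0; i < boneCount; i++)
{
    IGameNode *bone = gameSkin->GetIGameBone(i);    // assumed helper
    GMatrix boneBindTM;
    if (!gameSkin->GetInitBoneTM(bone, boneBindTM))
    {
        // No bind transform was recorded for this bone, which indicates the
        // skin modifier doesn't actually use it; skip it.
        continue;
    }

    // boneBindTM is this bone's transform at the time the skin was bound.
}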

 

If you'd like to see an example of skin/skeleton exporting from Max (but without using the IGame interface), take a look at the source for the OpenGEX exporter:

 

http://opengex.org/

 

The skin and bone bind transforms are exported at the top of the OpenGexExport::ExportSkin() function.




#5091113 Collision of plane and sphere

Posted by Eric Lengyel on 02 September 2013 - 03:18 PM

Not sure what L*P is supposed to mean.

 

It means the dot product between the four-dimensional plane L = (Nx, Ny, Nz, D) and the homogeneous point P = (Px, Py, Pz, 1). The plus sign in the book is correct. If you're familiar with my more recent talks on Grassmann Algebra, then this is more accurately stated as the wedge product between the antivector (Nx, Ny, Nz, D) and the vector (Px, Py, Pz, 1).
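
Written out component-wise (this expansion and its signed-distance reading are added here for clarity; they aren't quoted from the book):

L \cdot P = N_x P_x + N_y P_y + N_z P_z + D

When (Nx, Ny, Nz) has unit length, this value is the signed distance from P to the plane, so a sphere with center P and radius r touches or crosses the plane exactly when |L \cdot P| \le r.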




#5059353 normals in tangent space question

Posted by Eric Lengyel on 04 May 2013 - 11:32 PM

The tangent and bitangent are derived using a calculation like this:

 

http://www.terathon.com/code/tangent.html
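
The core of that per-triangle calculation looks roughly like the following (variable names are illustrative; the page above also covers accumulating the results per vertex, orthogonalizing against the normal, and choosing the handedness sign):

// p0, p1, p2 are the triangle's positions, and (u0, v0), (u1, v1), (u2, v2)
// are its texture coordinates. Vector3 is any ordinary 3D vector type.
Vector3 e1 = p1 - p0;
Vector3 e2 = p2 - p0;
float x1 = u1 - u0, x2 = u2 - u0;
float y1 = v1 - v0, y2 = v2 - v0;

// Invert the 2x2 matrix of texture-coordinate deltas.
float r = 1.0F / (x1 * y2 - x2 * y1);

// The tangent points in the direction of increasing u, and the bitangent in
// the direction of increasing v, both expressed in object space.
Vector3 tangent = (e1 * y2 - e2 * y1) * r;
Vector3 bitangent = (e2 * x1 - e1 * x2) * r;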




#5055934 A C++ code to smooth (and fix cracks in) meshes generated by the standard, or...

Posted by Eric Lengyel on 22 April 2013 - 11:51 PM

Another alternative is the Transvoxel algorithm by E. Lengyel, though I am not sure if it's patented (it is, IIRC).

 

The Transvoxel algorithm is not patented. More information here:

 

http://www.terathon.com/voxels/

 

In general, a correctly implemented Marching Cubes algorithm generating a mesh with a single LOD will only produce cracks if it doesn't have a consistent way of choosing polarity for the so-called "ambiguous cases". This can be solved by using a fixed polarity based on corner states or some face-level choice function as used in the MC33 algorithm. See Section 3.1.2 of my dissertation at the above link for some discussion of these. Using fixed polarity is easy, and it never generates any holes in the resulting mesh.

 

A good MC implementation will generate a smooth mesh to begin with if the data at each voxel location has a range of values instead of just a binary in-or-out state. The ugly stair-stepping only shows up if you're forced to put each vertex right in the middle of each isosurface-crossing edge because you don't have enough information to do anything more intelligent.
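
A minimal sketch of what that extra information buys you (d0 and d1 are the sampled values at the edge's two corner positions p0 and p1, with opposite signs; the names are illustrative):

// Find where the value crosses zero along the edge and place the vertex
// there, instead of at the fixed midpoint.
float t = d0 / (d0 - d1);
Vector3 vertex = p0 + (p1 - p0) * t;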





