Bad code or useful ...

23 comments, last by Pink Horror 9 years, 6 months ago

And that's why video games don't have bugs or crashes any more.

What are you talking about!?! I haven't played a video game since Baldur's Gate that didn't have a bug or crash in some way! Don't even get me started on Skyrim.

Sarcasm maybe?

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator


You did notice the important bits already discussed in this thread? Mainly:

1) The behavior is implementation-defined. It's not undefined (relying on that would be bad); it's implementation-defined (preferably avoided, but real life does not always give us that pleasure).
2) Using a static_assert (either mine or Hodgman's) reliably detects incompatible implementations at compile time, as in the sketch below. There is no potential to run into sudden, inexplicable, hard-to-find bugs; the compiler will simply refuse to compile it. If your compiler does not yet support C++11, there is a Boost implementation of static assert which has been working for ages.
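
Neither assert is quoted in this excerpt, so here is a minimal sketch of the idea, assuming a Vec3 with members x, y, z (hypothetical code, not the exact assert from either post): if an implementation does not lay the members out contiguously, compilation fails instead of shipping a subtle bug.

#include <cstddef>

struct Vec3 { float x, y, z; };

// Verifies at compile time that x, y, z are contiguous, so indexed access
// via (&x)[i] is at least layout-compatible on this implementation.
static_assert( offsetof(Vec3, y) == offsetof(Vec3, x) + sizeof(float) &&
               offsetof(Vec3, z) == offsetof(Vec3, y) + sizeof(float),
               "Vec3 members are not contiguous; indexed access is unsafe here" );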

You do not have much experience in working with actual, real-life, non-trivial C++ codebases, do you?

Thanks for the recap. I kind of did, but I refuse to accept that this is good code unless you perform bounds checking in release code, and then you get the performance loss. (The quote below is a good reply as to why this is acceptable in game libraries.)

I actually have over 10 years' experience as a freelancer in C, C++ and, sadly, C#; work still comes to me and I don't have to look for it, so I probably don't suck too badly at it. Define "non-trivial"? My work is related to old custom DBs and networking tools that are not trivial, but they're mostly small Unix-like tools that could be considered a trivial code base, so you are right to some degree.

However, in a lot of real situations you cannot afford to do some checks at runtime, and invalid input parameters indicate a coder made a mistake, in which case an assert is the "best" solution. (Obviously you want to always check/sanitize external data as you load it, but once it's loaded it shouldn't be invalid.)
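
As a minimal illustration of that pattern (the function and names here are hypothetical, not from the thread): validate external data once at the boundary, then enforce internal contracts with asserts that compile out of release builds.

#include <cassert>

// Internal contract: callers must pass a valid index. The assert documents
// and enforces the contract in debug builds; release builds pay no cost.
float GetHeight(const float* heights, int count, int i)
{
    assert(heights != nullptr && i >= 0 && i < count);
    return heights[i];
}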

In coding you're frequently going to be trading off speed for safety or vice versa. The trick is to be as safe as possible, but as fast as possible where needed. People aren't going to play a game that runs at 10 fps, but they will play a game that sometimes crashes (see, well, every game ever - I don't know of one that doesn't crash at least sometimes).

The big benefit of C and C++ is that you can actually make that choice. In other languages like Java or C# you cannot choose to turn off the safety in certain cases to get a few more fps out of the hardware, even if you could verifiably prove that the checking was unnecessary.

Thanks, I kind of overlooked those things. I still have mixed feelings, but it's clear I'm not going to convince people here.

Tom Robbins: "There's a tendency today to absolve individuals of moral responsibility and treat them as victims of social circumstance." - Don't be that victim, take responsibility and be what you want to be.

Thanks for the recap. I kind of did, but I refuse to accept that this is good code unless you perform bounds checking in release code, and then you get the performance loss. (The quote below is a good reply as to why this is acceptable in game libraries.)

The thing is, what do you do in the release build if you do detect an out-of-bounds error? The course of action on detecting this error is to fix the code... At that point, you know the code is wrong... but you've shipped the code and the user is running it!
E.g. if this were C#, a nice safe language, then the game would still just crash with an unhandled out-of-bounds exception.

All you can really do is generate a nice crash dump and ask the user to send it to you so you can develop a patch.
If you choose not to crash, then your game is then running in an invalid state from that point onward. You can try to stop the OS from generating access violations by returning zero (instead of reading/writing out of bounds)... but that may or may not completely break your game code. Maybe you end up getting the player stuck somewhere in a level, and a hard crash would've been preferable. Whatever you do at this point, you're screwed!
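
For what that "nice crash dump" looks like in practice, here is a minimal sketch using the Windows dbghelp API (the file name and setup are illustrative; the thread doesn't show a specific implementation):

#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")

// Unhandled-exception filter that writes a minidump before the process dies.
LONG WINAPI WriteCrashDump(EXCEPTION_POINTERS* info)
{
    HANDLE file = CreateFileA("crash.dmp", GENERIC_WRITE, 0, nullptr,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    MINIDUMP_EXCEPTION_INFORMATION mei = { GetCurrentThreadId(), info, FALSE };
    MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                      MiniDumpNormal, &mei, nullptr, nullptr);
    CloseHandle(file);
    return EXCEPTION_EXECUTE_HANDLER; // let the process terminate
}

// Installed once at startup:
//   SetUnhandledExceptionFilter(WriteCrashDump);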

In this situation (where you are getting out-of-bounds errors), the broken code isn't inside the vector; it's in the user of the vector that's broken its contract. The vector's contract states that the index must be 0/1/2, so it's up to the client code to ensure this is true.
If you want to abuse the type system to push this responsibility up to the client, you could make your operator[] take an enum rather than an int. If the client is doing integer-based access, they're forced to cast their int to the enum type beforehand. In a code review, this int-to-enum cast highlights the fact that you're going from a huge range of billions of possible integers down to a finite set of enum values -- so if there's a lack of checking during this step, the client code should fail its code review. All the way up the chain to the source of this integer data, you need to be able to prove that you'll only generate integer values that are within the enum set.


namespace Axis3d { enum Type { X=0, Y, Z, Count, First=X, Last=Z }; }
...
// Inside the vector class: only enum values are accepted, and the assert
// still catches a bogus cast that slipped past code review.
T m_data[Axis3d::Count];
T& operator[]( Axis3d::Type t ) { assert(t >= Axis3d::X && t < Axis3d::Count); return m_data[t]; }

E.g. this code is provably valid:

static_assert( Axis3d::X == 0 && Axis3d::Y == 1 && Axis3d::Z == 2, "axis enum values are not 0/1/2" );
for( int i=0; i!=3; ++i )
  total += vec[(Axis3d::Type)i];
//or, with no magic number (note <= so the Last axis is included; != Last would skip Z):
for( int i=Axis3d::First; i<=Axis3d::Last; ++i )
  total += vec[(Axis3d::Type)i];


FWIW, one of the standard ways that we QA a game before release is to modify the memory allocator so that every allocation is rounded up to the size of a page; in different runs, the actual allocation is placed either right at the start or right at the end of the page. You then also allocate two extra guard pages, one before and one after the allocation, and set these pages to have no permissions. If the game then reads/writes past the boundaries of an allocation, the OS immediately generates an access violation.
This greatly increases the amount of address space required by your game, so it pretty much requires a 64-bit build... but it's an amazing last line of defense against memory access bugs.
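
As a rough sketch of that scheme, assuming POSIX mmap/mprotect (real QA allocators are platform-specific and more elaborate; names and error handling here are illustrative):

#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>

void* GuardAlloc(std::size_t size)
{
    const std::size_t page = (std::size_t)sysconf(_SC_PAGESIZE);
    const std::size_t dataPages = (size + page - 1) / page;

    // Reserve the data pages plus one guard page on each side.
    char* base = (char*)mmap(nullptr, (dataPages + 2) * page,
                             PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    // Revoke all permissions on the guard pages; touching them faults.
    mprotect(base, page, PROT_NONE);
    mprotect(base + (dataPages + 1) * page, page, PROT_NONE);

    // "End of page" placement: the block ends flush against the upper guard
    // page, so any overrun faults immediately. (The "start of page" variant
    // catches underruns instead; this placement sacrifices alignment.)
    return base + (dataPages + 1) * page - size;
}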

There's often more, such as Retail with logging+assertions enabled for QA, or Dev minus logging+assertions for profiling, etc...
The point is that every frame there are thousands of error checks done to ensure the code is valid. Every assumption is tested, thoroughly and repeatedly, by every dev working on the game and every QA person trying to break it.

And that's why video games don't have bugs or crashes any more.

When was the last time you crashed a console game? If Microsoft or Sony can reproduce a crash in your game, then your disks don't get printed and you can't ship! It does still happen, but it's pretty rare.
Often games ship knowing that there are a lot of gameplay bugs, because these won't necessarily cause MS/Sony to veto your production run, and you simply don't have the time/money to fix them before the publisher's release date. A lot of that comes down to business decisions, not engineering ones.
Most publishers would rather ship on the day they decided a year earlier and release a downloadable patch later, rather than delay by a month.

On the other side of the coin, most developers get paid a flat fee no matter how well the game performs, or whether it's going to take longer to create or not. You might negotiate to get $100k per month for 12 months, which is barely enough to cover your development expenses. If it comes to the 13th month of development and you've still got 6 weeks worth of bugs to fix, you're not getting paid for that work... You'll just be bleeding money, trying to get the publisher off your back as soon as you can to avoid bankruptcy.

Don't even get me started on Skyrim.

TES games have a terrible number of bugs because (edit: it seems that way from an arrogant, ignorant outsider perspective) they hire non-programmers to write code, so they obviously don't give a shit about decent Software Engineering practices.

(Sorry to everyone at Bethesda. I love your games, but there are so many "script" bugs on every release. I hope practices are improving with every iteration!)

Plus their games are ridiculously massive, meaning lots of work in a short time-frame, plus it's hard to test everything, plus an AAA publisher with release dates set in stone...

Exactly. Why fix when you can prevent the bugs?

Your bounds checking suggestion doesn't prevent any bugs. It only detects bugs.


If you want some more examples of terribly unsafe game engine code, check out this blob/types.h file.
This file is part of a system for storing complex data structures on disc, including pointers -- implemented as byte offsets instead of memory addresses. This allows the data structures to be loaded from disc and into the game without any need for a deserialization step.
e.g.


//Offset<T> ~= T*
//List<T> ~= std::array<T>
//StringOffset ~= std::string*
struct Person { StringOffset name; int age; };
struct Class { int code; StringOffset name; List<Offset<Person>> students; };
struct School { List<Class> classes; };
 
u8* data = LoadFile("school.blob");
School& school = *(School*)data;
for( int i=0, iend=school.classes.Count(); i!=iend; ++i )
{
  Class& c = school.classes[i];
  printf( "Class %d: %s\n", c.code, &c.name->chars );
  for( Offset<Person>* j=c.students.Begin(), *jend=c.students.End(); j!=jend; ++j )
  { 
    Person& p = **j;
    printf( "\t%s - %d\n", &p.name->chars, p.age );
  }
}

If the content in that data file is invalid, all bets are off. There's no validation code at runtime; it's just assumed that the data files are valid.
If the data file compilers are provably valid, then there's no need for extra validation at runtime.
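
The file linked above has the real definitions, but as a minimal sketch of the underlying idea (a self-relative offset is one common way to implement this; not necessarily exactly what blob/types.h does):

#include <cstdint>

// The stored value is a byte distance from the Offset object itself, so the
// structure remains valid wherever the file is loaded in memory.
template<typename T>
struct Offset
{
    int32_t offset; // byte distance from 'this' to the target

    T* Ptr()        { return reinterpret_cast<T*>(reinterpret_cast<char*>(this) + offset); }
    T& operator*()  { return *Ptr(); }
    T* operator->() { return Ptr(); }
};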

You did notice the important bits already discussed in this thread? Mainly:
1) The behavior is implementation-defined. It's not undefined (relying on that would be bad); it's implementation-defined (preferably avoided, but real life does not always give us that pleasure).
2) Using a static_assert (either mine or Hodgman's) reliably detects incompatible implementations at compile time. There is no potential to run into sudden, inexplicable, hard-to-find bugs; the compiler will simply refuse to compile it. If your compiler does not yet support C++11, there is a Boost implementation of static assert which has been working for ages.

You do not have much experience in working with actual, real-life, non-trivial C++ codebases, do you?

Thanks for the recap. I kind of did, but I refuse to accept that this is good code unless you perform bounds checking in release code, and then you get the performance loss. (The quote below is a good reply as to why this is acceptable in game libraries.)

Let me preface this by saying Hodgman makes a lot of good points in #23. I would just like to expand on a few things while I wait for my coffee to kick in.
The core expectation of C++ is that you have reasonably competent developers as well as reasonable use of the test tools given (that includes debug builds and asserts).
You can of course add release checks for everything like that, but that defeats the point of using C++ in the first place. You have to manually implement things other languages (like Java) would give you right out of the box. In Java you cannot access an array out of bounds. You cannot do an invalid type cast. The compiler promises and enforces that without needing you, and it is not possible for you to circumvent it. Other languages (like C#) can do that too.
In C++ you have no chance of that. No check or safety you add is going to help against a stupid and/or malicious user. Once they break out the Siege Weapons of Doom (a.k.a. reinterpret_cast and const_cast), they can do whatever they want.
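
A tiny illustration of that point (hypothetical class, not from the thread): any invariant a class tries to enforce can be bypassed by a caller willing to cast.

#include <cassert>

class Bounded
{
    int m_data[3] = {0, 1, 2};
public:
    int& At(int i) { assert(i >= 0 && i < 3); return m_data[i]; }
};

void Abuse(Bounded& b)
{
    // reinterpret_cast walks straight past the access control and the assert;
    // nothing inside Bounded can defend against this out-of-bounds write.
    int* raw = reinterpret_cast<int*>(&b);
    raw[7] = 42; // undefined behavior, and no check could have stopped it
}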

I actually have over 10 years' experience as a freelancer in C, C++ and, sadly, C#; work still comes to me and I don't have to look for it, so I probably don't suck too badly at it. Define "non-trivial"? My work is related to old custom DBs and networking tools that are not trivial, but they're mostly small Unix-like tools that could be considered a trivial code base, so you are right to some degree.

The reason I was asking is that I'm getting the feeling you are trying to sell a vegan diet to people drowning in a shipwreck. It certainly has its benefits (although it certainly won't work for everyone), but it also has absolutely no relevance to the actual problems currently faced by the people you are interacting with.

My job is working with a large C++ codebase with a lot of ancient (and frequently bad) baggage. It's not in the game business, but it probably counts as game-ish enough because it requires a lot of rendering and user reactivity (and parts of it need to work on Windows, Linux, Mac, Android and iOS). The problem you are arguing about so vehemently does not exist in my experience.
A properly placed function with both static asserts and runtime asserts like that? The only change I would make is maybe adding an appreciative comment. The things that frequently cause me lengthy debugging sessions are a completely different beast, and everything that flags potential problems with an assert like that is a benefit. All you need to do is reproduce the crashing behavior in an assert-enabled build, and you immediately see the problem and fix the place where some developer broke the contract.


When was the last time you crashed a console game? If Microsoft or Sony can reproduce a crash in your game, then your disks don't get printed and you can't ship! It does still happen, but it's pretty rare.

I've crashed console games plenty of times this year - I've crashed Dark Souls 2, and I crashed Super Mario 3D World, even though Nintendo games have generally been fairly reliable for me. Though I was really thinking more about PC games - especially all the Steam reviews I've been reading in that new discovery queue thing - and how so many games on there are plagued with bugs and crashes. There's no reason why your previous quote wouldn't apply to PC games - it wasn't about the bugs caught by cert; it was about developers and QA catching all the bugs.

I've also worked on a few relatively popular console games and have seen the crash reports for the released versions of those games. I've also worked on fixing multi-player stability issues to patch a live game. It's not something to be proud of, but I believe that every commercially released game I've worked on shipped with some crash in it. And the games I've worked on did have the build system you described, with hundreds of developers using it, but crashes got through, and there's the pile of gameplay bugs that you mentioned. As for Microsoft and Sony, it seems to me they want a particular big title to ship at a certain time almost as much as the publisher does, so the two of them work something out, and eventually a producer marks some crash or disconnect you wanted to chase down as a shippable bug.

For this particular discussion, I don't mind looking up vector members by index, though basing it off the address of x won't work on one of the platforms I've used (I wish I could remember which one), and so it had to be done with a union or the other method presented here. I'm just saying, don't expect to just naturally verify all of your assumptions through regular development and QA. Even a vector class that everything uses could have some hidden problem in it - I've had to fix one of those, where the problem was only noticeable in facial animations, but it was actually in the vector class itself.
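
For reference, a minimal sketch of that union-based alternative (hypothetical code, not from any of the games mentioned; note that anonymous structs are a widely supported extension rather than standard C++, and reading through a union this way is itself implementation-defined):

#include <cassert>

template<typename T>
struct Vec3U
{
    union
    {
        struct { T x, y, z; }; // named access: v.x, v.y, v.z
        T data[3];             // indexed access without arithmetic on &x
    };
    T& operator[](int i) { assert(i >= 0 && i < 3); return data[i]; }
};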

