
Bad code or useful ...


If you know other examples like this related to game libraries I would love to hear about them.

 

So I’m looking over maths libraries and most of those I have seen implement something like the struct/class below:

struct Vec3
{
  float x, y, z;

  float& operator[](unsigned int i)
  { return (&x)[i]; }
};

Note: this has many variations (private members or an array, reinterpret_cast, matrix representations with the same structure, and so on) but they all carry the risk of writing or reading memory that does not belong to the class.

 

My question is: why would anyone think this is good and acceptable code?

 

- It is not faster than direct access or, in the worst case, an inline function call to access a private member.

- On top of that, possible bugs resulting from the operator[] are hard to identify and/or track down unless you implement bounds checking, but that would slow things down even more. Then we argue bounds checking is debug-only; again, why risk the bugs in production code in the first place?

 

Did I miss a memo somewhere?

Is 'operator[]' really that useful?

 

Just a semi-rant/question, but I would love some other opinions on this.

Edited by Nemo Persona


One example I can think of is when implementing kd-trees or a few other space-partitioning trees, you usually want to rotate through each coordinate for the split axis at each level of the tree (i.e. level 1 = split on x-axis, level 2 = split on y-axis, etc..). With the [] operator it amounts to working on vec[depth % 3], without it you are stuck with branching on the depth or unrolling three tree levels at a time (awful). In short, it gives you constant-time access to any coordinate based on an index, and is the kind of thing where you are essentially screwed if it's not provided.
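The kd-tree case described above can be sketched in a few lines. The Vec3 here is a hypothetical array-backed version (an assumption for the example, not any particular library's implementation), used only to show the split-axis rotation:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical array-backed Vec3; names are illustrative only.
struct Vec3 {
    float v[3];
    float&       operator[](std::size_t i)       { assert(i < 3); return v[i]; }
    const float& operator[](std::size_t i) const { assert(i < 3); return v[i]; }
};

// kd-tree split-axis selection: the axis cycles x -> y -> z with tree depth,
// so indexed access replaces a branch on the depth.
float splitCoordinate(const Vec3& point, unsigned depth) {
    return point[depth % 3];
}
```

With only getX()/getY()/getZ() available, the same function would need a switch on depth % 3 or three unrolled cases per tree level.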

 

That said another, simpler answer to your question is, why not? It does not cause any performance penalty compared to direct access, can potentially help awkward situations like the one I described above, and can be made strictly safe (wrt memory layout and stuff) and standards-conforming with little effort - in fact, zero effort for a properly implemented vector class. If it is never used, it won't even be included in the binary, and it will almost certainly be inlined in all use cases being a trivial function, so all it does is take up five lines of code maximum in your vector class for only potential gains. One could then argue that features need to be proven worthwhile before being added to a library, and I won't deny that (I would like to see more examples of the use of vec[] as well) but, honestly, how many other features can you think of providing with your vector class? I can think of more questionable ones than [].

One example I can think of is when implementing kd-trees or a few other space-partitioning trees, you usually want to rotate through each coordinate for the split axis at each level of the tree (i.e. level 1 = split on x-axis, level 2 = split on y-axis, etc..). With the [] operator it amounts to working on vec[depth % 3], without it you are stuck with branching on the depth or unrolling three tree levels at a time (awful).

Ok, branching in this case is bad, but for specific cases like this it is still possible to cast locally and avoid the branching. By adding the operator to the class itself you create a potential problem that can be hidden anywhere in the code base (where a programmer feels like typing [i + 1] instead of getY()).

 

 

That said another, simpler answer to your question is, why not?
It does not cause any performance penalty compared to direct access, can potentially help awkward situations like the one I described above, and can be made strictly safe (wrt memory layout and stuff) and standards-conforming with little effort - in fact, zero effort for a properly implemented vector class.
 
If it is never used, it won't even be included in the binary, and it will almost certainly be inlined in all use cases being a trivial function, so all it does is take up five lines of code maximum in your vector class for only potential gains.

It does not cause a performance hit, but there is no performance gain either; there is just the potential for bugs, or slower and/or more complex code in case you do decide to make it a secure, conforming function.

 

Just relying on the fact that no one is supposed to be using the operator "unless in specific cases" sounds bad; code stays around for a long time but programmers come and go. Just like comments or docs saying "// Do not use unless really needed" make no sense at all: "needed" is undefined and/or defined differently by different people.

 

And yes, there are some other questionable functions in a vector class, but this one has the highest risk factor of all those I can come up with.

Edited by Nemo Persona


Ok, branching in this case is bad, but for specific cases like this it is still possible to cast locally and avoid the branching. By adding the operator to the class itself you create a potential problem that can be hidden anywhere in the code base (where a programmer feels like typing [i + 1] instead of getY()).

 

.. I'm not sure I understand what you mean. You are suggesting breaking into the internals of the vector by taking a pointer to x and incrementing it in this specific case, instead of using an operator[] specially designed for the task? How does that avoid safety issues at all, especially since like I said, proper implementation of this function is strongly coupled to how the vector struct is implemented with respect to its layout and so on? Seems to me it just muddles up the codebase with random casts. The operator[] offers a specific and well-defined functionality, there is nothing vague about it. Where should the line be drawn between "this operation is used twice in the entire codebase" and "the dot product is used everywhere so we made this dot function"?


By adding the operator to the class itself you create a potential problem that can be hidden anywhere in the code base.

Adding to what Bacterius already said, I'd like to add disagreement to this bit particularly. Having it cleanly in a well-defined function is good. Should there ever be a problem with the code it's enough to fix that one function. Hunting down each hand-rolled occurrence is a horrible debugging nightmare (and I'm saying this as someone who had to do it a few times in his life).
 

(where a programmer feels like typing [i + 1] instead of getY())

You can simply add an 'assert(i < 3);' to the function. No impact on non-debug builds and a clean error signaling in debug builds.
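As a sketch, that suggestion might look like the following; the array storage is an assumption chosen to keep the example free of layout tricks:

```cpp
#include <cassert>

struct Vec3 {
    float v[3];  // array storage, so indexing never leaves the object

    float& operator[](unsigned i) {
        assert(i < 3);  // fires in debug builds; compiled out when NDEBUG is defined
        return v[i];
    }
};
```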

.. I'm not sure I understand what you mean. You are suggesting breaking into the internals of the vector by taking a pointer to x and incrementing it in this specific case, instead of using an operator[] specially designed for the task? How does that avoid safety issues at all, especially since like I said, proper implementation of this function is strongly coupled to how the vector struct is implemented with respect to its layout and so on? Seems to me it just muddles up the codebase with random casts. The operator[] offers a specific and well-defined functionality, there is nothing vague about it. Where should the line be drawn between "this operation is used twice in the entire codebase" and "the dot product is used everywhere so we made this dot function"?

That is exactly what I suggest.

 

It localizes the safety issues and minimizes overall risk. If some spatial trees are the only reason to use the [] operator then you would not really litter the entire code base with random casts; you could even implement a special private struct to make it cleaner.

 

I'm still not convinced the benefits outweigh the risk, or that it is possible to make the operation secure without losing performance.

 

 

(where a programmer feels like typing [i + 1] instead of getY())
You can simply add an 'assert(i < 3);' to the function. No impact on non-debug builds and a clean error signaling in debug builds.

What about non-debug builds? Are you sure your debug process is going to catch every edge case where the operation is used? I'm sorry, but to me this sounds like raw pointers all over again.

Edited by Nemo Persona


It localizes the safety issues and minimizes overall risk. If some spatial trees are the only reason to use the [] operator then you would not really litter the entire code base with random casts; you could even implement a special private struct to make it cleaner.

I have seen thought processes like that in code bases I had to debug. As I said, it ended with me hunting down similar, equally broken fragments all over the place. Not an experience I wish to repeat when I have a chance to avoid it. If there is even a chance it's used in two places, it gets its own centralized function.

Also, I can easily find more reasons why an [] operator could be convenient, for example any template code which works on vectors of { 2, 3, 4} or even arbitrary dimension and needs to access individual components.
 

(where a programmer feels like typing [i + 1] instead of getY())

You can simply add an 'assert(i < 3);' to the function. No impact on non-debug builds and a clean error signaling in debug builds.

 

What about non-debug builds? Are you sure your debug process is going to catch every edge case where the operation is used? I'm sorry, but to me this sounds like raw pointers all over again.

If there is any crash in release build it should be trivial to follow the same steps in debug builds to hit the assertions and quickly identify the problem. That said, a developer should spend sufficient time in debug builds so that it happening in non-debug builds is never a realistic issue.

Edit: To reinforce my point: Adding Álvaro's standard-compliant code requires me to change a single implementation. Without the single implementation I need to find the two places where it is used that I know of, then debug for half an hour to find the one spot I didn't know about and fix it thrice more over the course of several months when stumbling over the issue again after reactivating old code. Those are pretty much real-life numbers from experience, although not for this particular issue but it was again something that should have been in a single implementation. Edited by BitMaster


The top version of the code (using (&x)[i] ), I would never ever use.

 

It's just too dangerous. It will work fine 100% of the time when you first write it, on OS a with compiler b on platform c; then you change one of them and the game crashes.

 

I've used compilers that use really strange packing systems for variables, especially when you move into 64 bits. It's entirely possible (though unlikely) to end up with the Z parameter being miles away from Y. I actually had a bug once where someone had done something similar, and looking at the memory mapping I saw:

 

|  x   32 bits    | y 32 bits    |

| other var   64 bits           |

| other var   64 bits           |

| other 32 bits| z 32 bits    |

 

Why the hell did the compiler do that? Don't ask me!

 

The second solution looks a lot more bulletproof, but even then I get twitchy when I see pointers to member variables.

Personally I would probably do it for my own projects because I know what platforms it compiles on. I would however add a static assert to be immediately notified about the need for intervention, probably something like this: static_assert(sizeof(Vec3) == 3 * sizeof(float), "unexpected packing");
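Put together with the OP's (&x)[i] implementation, that check might look like the following sketch; the static_assert refuses to compile on any implementation whose padding breaks the layout assumption:

```cpp
struct Vec3 {
    float x, y, z;

    // Implementation-defined layout assumption: y and z follow x contiguously.
    float& operator[](unsigned i) { return (&x)[i]; }
};

// Fails at compile time if the compiler inserts padding, so the assumption
// above can never silently break on a new platform.
static_assert(sizeof(Vec3) == 3 * sizeof(float), "unexpected packing");
```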

On the dozen-ish console games that I've worked on, they've all gone beyond the simple Debug/Release dichotomy.
The most common extension is:
• Debug -- all validation options, no optimization. Probably does not run at target framerate. Extra developer features such as a TCP/IP file-system, memory allocation tracking, intrusive profiler, reloading of assets/code/etc.
• Release/Dev -- as above, but optimized. Almost runs at target framerate.
• Shipping/Retail -- all error handling for logic errors is stripped, all developer features stripped, fully optimized (LTCG/etc). Runs at target framerate or above.

There's often more, such as Retail with logging+assertions enabled for QA, or Dev minus logging+assertions for profiling, etc...
Point is that every frame there are thousands of error checks done to ensure the code is valid. Every assumption is tested, thoroughly and repeatedly, by every dev working on the game and every QA person trying to break it.

When it comes to the gold master build, you KNOW that all of those assertions are no longer necessary, so it's fine to strip them. Furthermore, they're just error-detection, not error-handling, so there's no real use in shipping them anyway 8-)


I learned this trick from Washu ... That is a bit verbose and tricky to read, but other than that I think it has all the nice features one would want, including being correct according to the C++ standard.

That's really cute, but I'd hate to see the code that's produced in the hypothetical situation where the original implementation-defined approach isn't valid and/or the optimizer doesn't realize the equivalence...

I would however add a static assert to be immediately notified about the need for intervention, probably something like that: static_assert(sizeof(Vec3) == 3 * sizeof(float), "unexpected packing");

Definitely. When you're making assumptions, they need to be documented with assertions. The OP's snippet is implementation-defined, so you're making assumptions about your compiler. You need to statically assert that &y == &x+1 && &z == &x+2 (or the sizeof one above is probably good enough), and at runtime assert that index >= 0 && index < 3.
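Both halves of that contract can be written out directly. &y == &x + 1 cannot appear in a static_assert, but offsetof expresses the same layout assumption at compile time (a sketch, assuming a standard-layout Vec3):

```cpp
#include <cassert>
#include <cstddef>

struct Vec3 {
    float x, y, z;

    float& operator[](int i) {
        assert(i >= 0 && i < 3);  // runtime half of the contract, stripped under NDEBUG
        return (&x)[i];
    }
};

// Compile-time half: y and z must sit directly after x with no padding,
// otherwise (&x)[i] would read outside the members.
static_assert(offsetof(Vec3, y) == 1 * sizeof(float), "padding between x and y");
static_assert(offsetof(Vec3, z) == 2 * sizeof(float), "padding between y and z");
```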



There's often more, such as Retail with logging+assertions enabled for QA, or Dev minus logging+assertions for profiling, etc...
Point is that every frame there are thousands of error checks done to ensure the code is valid. Every assumption is tested, thoroughly and repeatedly, by every dev working on the game and every QA person trying to break it.

 

And that's why video games don't have bugs or crashes any more.


The top version of the code (using (&x)[i] ), I would never ever use.

It's just too dangerous. It will work fine 100% of the time when you first write it, on OS a with compiler b on platform c; then you change one of them and the game crashes.

I've used compilers that use really strange packing systems for variables, especially when you move into 64 bits. It's entirely possible (though unlikely) to end up with the Z parameter being miles away from Y. I actually had a bug once where someone had done something similar, and looking at the memory mapping I saw:
 
|  x   32 bits    | y 32 bits    |
| other var   64 bits           |
| other var   64 bits           |
| other 32 bits| z 32 bits    |
 
Why the hell did the compiler do that? Don't ask me!
 
The second solution looks a lot more bulletproof, but even then I get twitchy when I see pointers to member variables.


That packing is actually invalid C++. The standard requires that variables are in memory in the same order they are in the class/struct - as long as they're using the same protection level. (I.e. don't mix private, protected, and public - but you shouldn't have non-private variables in the first place) The standard doesn't mandate that they are all adjacent - so the code the original poster showed will only work on some compilers and some platforms (and will cause merry debugging hell when it doesn't). Washu/Alvaro's solution is much better as it won't break unless your compiler has a bug.


There's often more, such as Retail with logging+assertions enabled for QA, or Dev minus logging+assertions for profiling, etc...
Point is that every frame there are thousands of error checks done to ensure the code is valid. Every assumption is tested, thouroughy, and repeatedly by every dev working on the game and every QA person trying to break it.


And that's why video games don't have bugs or crashes any more.
What are you talking about!?! I haven't played a video game since Baldur's Gate that didn't have a bug or crash in some way! Don't even get me started on Skyrim.


 

 

There's often more, such as Retail with logging+assertions enabled for QA, or Dev minus logging+assertions for profiling, etc...
Point is that every frame there are thousands of error checks done to ensure the code is valid. Every assumption is tested, thouroughy, and repeatedly by every dev working on the game and every QA person trying to break it.


And that's why video games don't have bugs or crashes any more.
What are you talking about!?! I haven't played a video game since Baldur's Gate that didn't have a bug or crash in some way! Don't even get me started on Skyrim.

 

Exactly, why fix when you can prevent the bugs... I don't have a lot more to add; it seems people are ok with the idea of writing code that can write to or read from invalid memory, and they trust Debug/QA to find all possible edge cases. I guess they're ok with smoking as well: god will protect them, and doctors will fix the cancer when god fails. (sad panda; I was hoping for more examples of bad code in game libs, but we end up with belief in god and promises of an easy cure)

Edited by Nemo Persona



Exactly, why fix when you can prevent the bugs... I don't have a lot more to add; it seems people are ok with the idea of writing code that can write to or read from invalid memory, or they trust Debug/QA to find all possible edge cases. I guess they're ok with smoking as well: god will protect them, and doctors will fix the cancer when god fails. (sad panda; I was hoping for more examples of bad code in game libs, but we end up with belief in god and promises of an easy cure)

 

If you are not OK with the idea of writing code that can write to or read from invalid memory, you shouldn't use C++. Given that the language and the standard library are written with the assumption that the programmer knows what he is doing, I don't see a problem with having a vector class that gives you access to coordinates by index: It's perfectly idiomatic and unlikely to be a problem in practice. People have already pointed out situations where you might want to use access by index. If you don't need it, don't provide an operator[]. But please don't make absurd analogies.


 


Exactly, why fix when you can prevent the bugs... I don't have a lot more to add; it seems people are ok with the idea of writing code that can write to or read from invalid memory, or they trust Debug/QA to find all possible edge cases. I guess they're ok with smoking as well: god will protect them, and doctors will fix the cancer when god fails. (sad panda; I was hoping for more examples of bad code in game libs, but we end up with belief in god and promises of an easy cure)

 

If you are not OK with the idea of writing code that can write to or read from invalid memory, you shouldn't use C++. Given that the language and the standard library are written with the assumption that the programmer knows what he is doing, I don't see a problem with having a vector class that gives you access to coordinates by index: It's perfectly idiomatic and unlikely to be a problem in practice. People have already pointed out situations where you might want to use access by index. If you don't need it, don't provide an operator[]. But please don't make absurd analogies.

 

It is not absurd, it is the conclusion I draw after reading most of the replies. As for the analogy, sorry if that offended people, but I stand by it. If you don't agree, that is fine by me, but just because something is written and designed a certain way does not mean it is the correct way, and telling people to go away or to use something else because they don't share your viewpoint is counterproductive.

 

 

I learned this trick from Washu:
 

struct Vector3 {
  float x, y, z;
  
  float &operator[](int index) {
    assert(index >= 0 && index < 3 && "Index out of bounds");
    return this->*members[index];
  }
  
  float operator[](int index) const {
    assert(index >= 0 && index < 3 && "Index out of bounds");
    return this->*members[index];
  }
  
  static float Vector3::* const members[3];
};
  
float Vector3::* const Vector3::members[3] = { &Vector3::x, &Vector3::y, &Vector3::z };
That is a bit verbose and tricky to read, but other than that I think it has all the nice features one would want, including being correct according to the C++ standard.
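For completeness, the quoted pointer-to-member technique can be exercised as a self-contained sketch (same names as the snippet above; only the non-const overload is shown):

```cpp
#include <cassert>

struct Vector3 {
    float x, y, z;

    float& operator[](int index) {
        assert(index >= 0 && index < 3 && "Index out of bounds");
        // Pointer-to-member lookup: no assumption about where x, y, z live
        // in memory, so the access is valid on any conforming compiler.
        return this->*members[index];
    }

    static float Vector3::* const members[3];
};

float Vector3::* const Vector3::members[3] = { &Vector3::x, &Vector3::y, &Vector3::z };
```

Here Vector3 v{1, 2, 3}; v[1] returns a reference to v.y regardless of any padding between the members.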

 

The example is nice but it does not solve the problem mentioned; it still leaves room for serious bugs in release builds, and that is simply unacceptable to me. Prevention is still better than fixing, in my mind. (but that's just my belief)


It is not absurd, it is the conclusion I draw after reading most of the replies. As for the analogy, sorry if that offended people, but I stand by it. If you don't agree, that is fine by me, but just because something is written and designed a certain way does not mean it is the correct way, and telling people to go away or to use something else because they don't share your viewpoint is counterproductive.


You did notice the important bits already discussed in this thread? Mainly:
1) the behavior is implementation defined. It's not undefined (relying on that would be bad), it's implementation defined (preferably avoided but real life does not always give us that pleasure).
2) using a static_assert (either mine or Hodgman's) reliably detects incompatible implementations at compile-time. There is no potential to run into sudden, inexplicable, hard-to-find bugs; the compiler will simply refuse to compile it. If your compiler does not yet support C++11, there is a Boost implementation of static assert which has been working for ages.

The example is nice but it does not solve the problem mentioned; it still leaves room for serious bugs in release builds, and that is simply unacceptable to me. Prevention is still better than fixing, in my mind. (but that's just my belief)

You do not have much experience in working with actual, real-life, non-trivial C++ codebases, do you?

If you are that concerned about reading/writing invalid memory (and hey, that's a valid concern) nothing is preventing you from writing operator[] such that out-of-range access is impossible, even in release builds:
 
  float &operator[](int index) {
    if ((index < 0) || (index >= 3))
    {
      assert(false && "Index out of bounds");
      index = 0;
    }
    return this->*members[index];
  }
  
  float operator[](int index) const {
    if ((index < 0) || (index >= 3))
    {
      assert(false && "Index out of bounds");
      index = 0;
    }
    return this->*members[index];
  }
There you go - completely safe, you've just defined "out of range" as writing/reading X.

Of course, now you have a performance cost in your release build to do the checking, but your code may not care (that's where profiling comes in).

However in a lot of real situations you cannot afford to do some checks at runtime, and invalid input parameters indicate a coder made a mistake, in which case an assert is the "best" solution. (Obviously you want to always check/sanitize external data as you load it, but once it's loaded it shouldn't be invalid)

In coding you're going to be frequently trading off speed for safety or vice versa. The trick is to be as safe as possible, but as fast as possible where needed. People aren't going to play a game that runs at 10 fps, but they will play a game that sometimes crashes (see, well, every game ever - I don't know of one that doesn't crash at least sometimes)

The big benefit of C and C++ is that you can actually make that choice. In other languages like Java or C# you cannot choose to turn off the safety in certain cases to get a few more fps out of the hardware, even if you could verifiably prove that the checking was unnecessary. Edited by SmkViper


 

 And that's why video games don't have bugs or crashes any more.

What are you talking about!?! I haven't played a video game since Baldurs Gate that didn't have a bug or crashed in some way! Don't even get me started on Skyrim.

Sarcasm maybe?


You did notice the important bits already discussed in this thread? Mainly:

1) the behavior is implementation defined. It's not undefined (relying on that would be bad), it's implementation defined (preferably avoided but real life does not always give us that pleasure).
2) using a static_assert (either mine or Hodgman's) reliably detects incompatible implementations at compile-time. There is no potential to run into sudden, inexplicable, hard-to-find bugs; the compiler will simply refuse to compile it. If your compiler does not yet support C++11, there is a Boost implementation of static assert which has been working for ages.

You do not have much experience in working with actual, real-life, non-trivial C++ codebases, do you?

 

Thx for the recap, I kinda did, but I refuse to accept that this is good code unless you perform bounds checking in release code, and then you get the performance loss. (the quote below is a good reply as to why this is acceptable in game libs)

 

I actually have over 10 years of experience as a freelancer in C, C++ and sadly C#; work still comes to me and I don't have to look for it, so I probably don't suck too badly at it. Define "non-trivial"? The work is related to old custom DBs and networking tools that are not trivial, mostly small Unix-like tools that could be considered a trivial code base, so you are right to some degree.

 

However in a lot of real situations you cannot afford to do some checks at runtime, and invalid input parameters indicate a coder made a mistake, in which case an assert is the "best" solution. (Obviously you want to always check/sanitize external data as you load it, but once it's loaded it shouldn't be invalid)

In coding you're going to be frequently trading off speed for safety or vice versa. The trick is to be as safe as possible, but as fast as possible where needed. People aren't going to play a game that runs at 10 fps, but they will play a game that sometimes crashes (see, well, every game ever - I don't know of one that doesn't crash at least sometimes)

The big benefit of C and C++ is that you can actually make that choice. In other languages like Java or C# you cannot choose to turn off the safety in certain cases to get a few more fps out of the hardware, even if you could verifiably prove that the checking was unnecessary.

 

Thx, I kinda overlooked those things; I still have mixed feelings, but it is clear I'm not going to convince people here.


You did notice the important bits already discussed in this thread? Mainly:
1) the behavior is implementation defined. It's not undefined (relying on that would be bad), it's implementation defined (preferably avoided but real life does not always give us that pleasure).
2) using a static_assert (either mine or Hodgman's) reliably detects incompatible implementations at compile-time. There is no potential to run into sudden, inexplicable, hard-to-find bugs; the compiler will simply refuse to compile it. If your compiler does not yet support C++11, there is a Boost implementation of static assert which has been working for ages.

You do not have much experience in working with actual, real-life, non-trivial C++ codebases, do you?

Thx for the recap, I kinda did, but I refuse to accept that this is good code unless you perform bounds checking in release code, and then you get the performance loss. (the quote below is a good reply as to why this is acceptable in game libs)

Let me preface this by saying Hodgman makes a lot of good points in #23. I would just like to expand on a few things while I wait for my coffee to kick in.
The core expectation of C++ is that you have reasonably competent developers as well as reasonable use of the test tools given (that includes debug builds and asserts).
You can of course add release checks for everything like that, but that defeats the point of using C++ in the first place. You have to manually implement things other languages (like Java) would give you right out of the box. In Java you cannot access an array out of bounds. You cannot do an invalid type cast. The compiler promises and enforces that without needing you for that and it is not possible for you to circumvent it. Other languages (like C#) can do that too.
In C++ you have no chance of that. No check or safety you add is going to help against a stupid and/or malicious user. Once they bring out the Siege Weapons of Doom (aka reinterpret_cast and const_cast) they can do whatever they want.
 

I actually have over 10 years of experience as a freelancer in C, C++ and sadly C#; work still comes to me and I don't have to look for it, so I probably don't suck too badly at it. Define "non-trivial"? The work is related to old custom DBs and networking tools that are not trivial, mostly small Unix-like tools that could be considered a trivial code base, so you are right to some degree.

The reason I was asking that was because I'm getting the feeling you are trying to sell a vegan diet to people drowning in a shipwreck. It certainly has its benefits (although it certainly won't work for everyone) but it also has absolutely no relevance to the actual problems currently faced by the people you are interacting with.

My job is working with a large C++ codebase with a lot of ancient (and frequently bad) baggage. It's not in the game business but it probably counts as game-ish enough because it requires a lot of rendering and user reactivity (and parts of it need to work on Windows, Linux, Mac, Android and iOS). The problem you are arguing against so vehemently does not exist in my experience.
A properly placed function with both static asserts and runtime asserts like that? The only change I would do is maybe add an appreciative comment. The things that frequently cause me lengthy debugging sessions are a completely different beast and everything that flags potential problems with an assert like that is a benefit. All you need is reproduce the crashing behavior in an assert-enabled build and you immediately see the problem and fix where some developer broke the contract.



When was the last time you crashed a console game? If Microsoft or Sony can reproduce a crash in your game, then your disks don't get printed and you can't ship! It does still happen, but it's pretty rare.

 

I've crashed console games plenty of times this year - I've crashed Dark Souls 2, and I crashed Super Mario 3D World, and Nintendo games have been fairly reliable for me. Though I was really thinking more about PC games - especially all the Steam reviews I've been reading in that new discovery queue thing - and how so many games on there are plagued with bugs and crashes. There's no reason why your previous quote wouldn't apply to PC games - it wasn't about the bugs caught by cert, it was about developers and QA catching all the bugs.

 

I've also worked on a few relatively popular console games and have seen the crash reports for the released versions of those games. I've also worked on fixing multi-player stability issues to patch a live game. It's not something to be proud of, but I believe that every commercially-released game I've worked on shipped with some crash in it. And the games I've worked on did have the build system you described, and the hundreds of developers using them, but crashes get through, and there's the pile of gameplay bugs that you mentioned. For Microsoft or Sony, to me it seems they want a particular big title to ship at a certain time almost as much as the publisher does, and the two of them work something out, and eventually a producer marks some crash or disconnect you wanted to chase down as a shippable bug.

 

For this particular discussion, I don't mind looking up vector members by index, though basing it off the address of x won't work on one of the platforms I've used (I wish I could remember which one), and so it had to be done with a union or the other method presented here. I'm just saying, don't expect to just naturally verify all of your assumptions through regular development and QA. Even a vector class that everything uses could have some hidden problem in it - I've had to fix one of those, where the problem was only noticeable in facial animations, but it was actually in the vector class itself.
