
What gets a game to pass certification by a publisher?


14 replies to this topic

#1 warnexus   Prime Members   -  Reputation: 1409


Posted 06 March 2014 - 08:48 PM

It seems weird, or more appropriately baffling, that games with bugs, low frame rates, and poor gameplay pass certification and are then distributed to consumers. Am I missing something about certification by the publisher?


#2 Hodgman   Moderators   -  Reputation: 29366


Posted 06 March 2014 - 09:32 PM

Certification is a process controlled by the platform holders (e.g. Nintendo, Microsoft, Sony), not the publishers. It mostly ensures that your game isn't going to crash, and that the user experience is going to be consistent with all other games on that platform (time spent in loading screens, names/icons of buttons, sign-in screens, animated "game is saving" icons, online profanity filtering, etc.). Actual bad gameplay problems are only a minor concern here.



#3 Promit   Moderators   -  Reputation: 6610


Posted 06 March 2014 - 10:17 PM

Also to be quite frank, the console makers are often willing to look the other way when a big company asks for some ... flexibility in hitting the requirements.


Edited by Promit, 06 March 2014 - 10:17 PM.


#4 frob   Moderators   -  Reputation: 20169


Posted 07 March 2014 - 01:13 AM

Reiterating: They don't care about gameplay.

 

Certification is not about how fun the game is. It can be crazy fun, or it can be the stupidest game ever.

 

Certification is about how you handle a specific list of bad situations. A Google search for the words "TRC TCR Lotcheck" gives some descriptive results. For example:

  • Does the game crash? (Hint: it had better not crash.)
  • Does the game follow all the system's requirements? (Hint: a PlayStation controller button showing up in an Xbox certification is a really bad thing.)
  • Is it using debug libraries, or old libraries that have been replaced due to security concerns?
  • Can the testers continuously play the game for 100 hours, swapping testers out on the machines over the course of a few days?
  • Can they leave the game sitting at a start screen or in some other steady state for a few days without it crashing?
  • Does the game handle disconnected controllers, ejected discs, unplugged network cables, and bad internet connections? (See the sketch after this list.)
  • Do all the features basically work in a manner the testers can figure out?
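To make the controller case concrete, here is a minimal sketch. It is not any console's actual SDK; the function names and the whole flow are invented for illustration:

    #include <cstdio>

    // Stub standing in for a platform SDK query; every console exposes
    // something like this, but the real name and signature differ.
    static bool IsControllerConnected(int /*port*/) { return false; } // simulate an unplug

    enum class GameState { Playing, PausedControllerLost };

    // Typical cert-style rule: the instant the pad disappears, pause the
    // game and show a reconnect prompt; resume only when it comes back.
    static GameState UpdateControllerState(GameState state, int port) {
        if (!IsControllerConnected(port)) {
            if (state == GameState::Playing)
                std::printf("Controller disconnected. Please reconnect.\n");
            return GameState::PausedControllerLost;
        }
        return GameState::Playing;
    }

    int main() {
        GameState state = GameState::Playing;
        state = UpdateControllerState(state, 0); // prints the prompt once
        state = UpdateControllerState(state, 0); // stays paused, no re-print
    }

Testers will yank the pad on every screen in the game and expect that kind of response every single time.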

 

If all of those pass, it ships.

 

Often the certification teams will find something to complain about. Sometimes they are big complaints. E.g. "We found a crash..."  Other times they are minor complaints that you can easily fight back against. Them: "The saving screen was up for about 5 seconds". Us: "On our retail kits it takes about 1.2 seconds, what did you use to time it?" Them: "Oh, looks like you are right. Sorry about that."

 

I've worked on a few games that passed cleanly on the first pass through all of the big three, so I know it happens occasionally. When we had our first 1-submit Nintendo title the studio execs took the whole team out to lunch and gave everyone the rest of the week off. Having good internal QA makes life so much better for everyone. :-)


Check out my personal indie blog at bryanwagstaff.com.

#5 warnexus   Prime Members   -  Reputation: 1409


Posted 07 March 2014 - 03:35 PM

Also to be quite frank, the console makers are often willing to look the other way when a big company asks for some ... flexibility in hitting the requirements.

 

Flexibility as in being lenient with the rules? Is that so the game can hit its deadline? I'm guessing that since the publisher is publishing the game, they obviously want it shipped so they get a return on their investment?

 

Could you give me an example of a worst case that happened to a game that still shipped? If specific information cannot be disclosed, I understand.


Edited by warnexus, 07 March 2014 - 03:35 PM.


#6 Pink Horror   Members   -  Reputation: 1138


Posted 07 March 2014 - 05:59 PM

Flexibility as in being lenient with the rules? Is that so the game can hit its deadline? I'm guessing that since the publisher is publishing the game, they obviously want it shipped so they get a return on their investment?


I believe that flexibility is earned in a similar way to how the Mafia earns flexibility from the police.

#7 frob   Moderators   -  Reputation: 20169


Posted 07 March 2014 - 06:20 PM

Flexibility as in being lenient with the rules? Is that so the game can hit its deadline? I'm guessing that since the publisher is publishing the game, they obviously want it shipped so they get a return on their investment?

Could you give me an example of a worst case that happened to a game that still shipped? If specific information cannot be disclosed, I understand.

They are generally called "waivers" from the requirements. The company checklists (TRC, TCR, and Lotcheck requirements) specify that you must do certain things in response to certain behavior, or must not do certain things in response to certain events.

For a specific example, let's say the requirement is that the game must not have degraded network play under specific artificial lab conditions. In two cases I have seen, the lab came back saying that when they artificially simulated long-term 40% packet loss (the first case) or frequent multi-second latency (the second case), the game did not perform adequately or had some problems.
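The lab side of that packet-loss test is easy to sketch. This is a toy stand-in rather than anyone's real harness; the class name and the printf are invented, and the probabilistic drop is the only point:

    #include <cstdio>
    #include <random>

    // Test-harness shim that mimics the lab's artificial long-term
    // 40% packet loss by dropping packets at random before "sending".
    class LossyLink {
    public:
        explicit LossyLink(double lossRate)
            : loss_(lossRate), rng_(std::random_device{}()) {}

        // Returns true if the packet "made it". The game's netcode has to
        // stay playable even when 40% of these calls return false.
        bool Send(const char* packet) {
            if (dist_(rng_) < loss_) return false;   // dropped on the floor
            std::printf("delivered: %s\n", packet);  // stand-in for a socket send
            return true;
        }

    private:
        double loss_;
        std::mt19937 rng_;
        std::uniform_real_distribution<double> dist_{0.0, 1.0};
    };

    int main() {
        LossyLink link(0.40);           // the 40% case described above
        for (int i = 0; i < 10; ++i)
            link.Send("state-update");  // on average four of these vanish
    }

The latency case is the same idea with a delay queue instead of a drop.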

If you are a little studio with no clout, you will have a difficult time fighting back. You adjust your network code to handle the obscene case that bored testers dreamt up while probing weird conditions.

If you are with a major publisher, they can fight back with this kind of implied message. It doesn't go exactly like this, but it might be interpreted this way: "We know it has that minor issue in that very rare case, and we know our audience does not have that bad a connection. From one multi-billion dollar corporation to another, we're asking for a waiver. If customers complain, you can pull out this email saying we knew it was a bug and you told us to fix it; we'll take the blame."

Most of the time the requirements are reasonable. As professional game developers we want to create amazing games that work well for everybody. When certification comes back with concerns, teams don't like the defects but generally are willing to fix them. It is only the really hard ones that require massive change after the game is essentially complete that studios want to fight back.

Some things are easier to get waivers on than others. An unanimated loading screen that approaches the time limit on one specific level would be pretty easy to push back against. Submitting with an old library one or two days past the replacement cutoff date might be a little harder. It is common to challenge a cert requirement by reviewing the change, working out its risks and costs, explaining that fixing the issue would create a different issue, and asking which of the two they would prefer... but be prepared for them to require the change and accept the different issue.

Very rarely it reaches the point of studio leadership making passionate pleas that the fix would require massive rewrites, causing the dates to slip or possibly requiring cancellation of the project. The more costly and risky the fix is for the developer, the more likely they are to grant a waiver. But the more visible and potentially damaging the error, the less likely a waiver will be allowed. People talk about it, negotiate, establish a paper trail, and make decisions.

Even a minor change risks destabilizing the game, so changes after final submission are heavily reviewed and require significant QA work for both halo-testing and yet another pass through the entire storyline. One typical cert requirement is that someone has played the game from beginning to end without cheats. When you make that last-minute change you ask the magical testers who can race through everything in 11 hours to do their magic and pay them overtime, meals, and a gift. The QA effort itself can even be useful when pushing back and asking for waivers.
Check out my personal indie blog at bryanwagstaff.com.

#8 Stainless   Members   -  Reputation: 870


Posted 08 March 2014 - 04:32 AM

In the network case you mentioned, Call of Duty should fail. Yet it gets published.

 

Can you imagine Microsoft refusing to certify COD? :) Don't get me wrong, as far as I am concerned they should.

 

As far as I am concerned, certification is a good thing. It can be annoying, but no more annoying than bug reports you get from internal QA.

 

I had one many years ago...

 

BUG           : Game crashes
Actions       : Press these 5 keys with your left hand, these 5 keys with your right hand, and press the spacebar with your nose
Repeatability : 100%

 

The bug fix was "Don't fecking do it"

 

There are times when certification has to be "massaged". In my experience it is always when the failing test case was badly designed.

 

For example, we had massive problems getting a JVM certified by Sun. The test case in question was the garbage collector. The Java garbage collector is crap; it has a known bug that means it will eventually fail. The test case exercised the garbage collector and had to run for 10 hours.

 

This was fine for a normal JVM, but ours ran the test 147 times faster than the original Sun JVM. This meant we had to run for the equivalent of 1470 hours, or about 2 months, of garbage-collector work. After between 9 hours 47 minutes and 9 hours 49 minutes, our JVM crashed.
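To spell out the arithmetic: the required 10 hours of wall-clock time at 147x speed is 10 × 147 = 1470 equivalent hours, roughly 61 days, hence the 2 months. And the crash window of roughly 9 hours 48 minutes works out to about 9.8 × 147 ≈ 1440 equivalent hours of garbage collection, presumably around where the collector's known failure bites.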

 

We eventually managed to get Sun to accept that it was the test case that was at fault and we got our certification.



#9 warnexus   Prime Members   -  Reputation: 1409


Posted 09 March 2014 - 09:56 AM


The bug fix was "Don't fecking do it"

 

Oh boy...

 

Still, it is important to fix the bugs; they matter a great deal to the customers. I can understand the game needing to meet its deadline, and that fixing the bugs takes time away from other things that still need to be worked on, but they should still be fixed.



#10 Pink Horror   Members   -  Reputation: 1138


Posted 09 March 2014 - 03:30 PM

Still, it is important to fix the bugs; they matter a great deal to the customers. I can understand the game needing to meet its deadline, and that fixing the bugs takes time away from other things that still need to be worked on, but they should still be fixed.


I know the feeling. I've worked on some input bugs that were most easily reproduced by slamming a bunch of buttons at a certain time, and it's easy to say the users deserve to break the game if that's what they're trying to do, but those have been tip-of-the-iceberg style bugs that have revealed issues such as race conditions in the underlying input system. It's easy to complain about QA doing stupid things, but they're only bugs because programmers did stupid things.
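To make that concrete, here is a toy sketch of the kind of race involved; every name is invented, and the point is only that an unguarded queue shared between the input pump and the game loop fails exactly when events arrive fast, which is what button-mashing produces. With the locks in place it is safe; delete them and the concurrent push/pop can corrupt the queue:

    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>

    // Input events shared between a polling thread and the game thread.
    std::queue<int> gEvents;
    std::mutex gEventsLock;

    void PollThread() {                        // pretend OS input pump
        for (int button = 0; button < 10000; ++button) {
            std::lock_guard<std::mutex> lock(gEventsLock);
            gEvents.push(button);
        }
    }

    void GameThread() {                        // consumes events each "frame"
        for (int handled = 0; handled < 10000; ) {
            std::lock_guard<std::mutex> lock(gEventsLock);
            if (!gEvents.empty()) { gEvents.pop(); ++handled; }
        }
    }

    int main() {
        std::thread poll(PollThread), game(GameThread);
        poll.join();
        game.join();
        std::printf("all 10000 events handled safely\n");
    }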

I know too many engineers who get something up to about 95% working properly when they're supposed to be finishing tasks and then get tons of praise from management (a) for "completing" work and then (b) fixing piles of bugs later. Of course, not all of them get fixed, and we ship games with these sorts of crashes, while they laugh at stupid QA.

#11 TheChubu   Crossbones+   -  Reputation: 4069


Posted 09 March 2014 - 04:22 PM

We eventually managed to get Sun to accept that it was the test case that was at fault and we got our certification.

You had more luck than Google then :D


"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator


#12 Stainless   Members   -  Reputation: 870


Posted 10 March 2014 - 03:06 PM

 


The bug fix was "Don't fecking do it"

 

Oh boy...

 

Still, it is important to fix the bugs; they matter a great deal to the customers. I can understand the game needing to meet its deadline, and that fixing the bugs takes time away from other things that still need to be worked on, but they should still be fixed.

 

 

It was a hardware bug; it turned out that the same technique crashed ALL games.

 

He had found a way of dropping the voltage at the 6510 in the keyboard to the point where the chip crashed. This had a chain effect of sending an NMI back to the 68000 and triggering a hard reboot.

 

Nothing I could have done about it.



#13 Stainless   Members   -  Reputation: 870


Posted 10 March 2014 - 03:12 PM


I know too many engineers who get something up to about 95% working properly when they're supposed to be finishing tasks and then get tons of praise from management (a) for "completing" work and then (b) fixing piles of bugs later. Of course, not all of them get fixed, and we ship games with these sorts of crashes, while they laugh at stupid QA.

 

Yes I've seen that too, but I have also seen QA totally melt down.

 

One guy didn't like the game, so he just didn't test it. He spent an hour a day fiddling with it, then just went on to doing something else. He was supposed to test against all BIOS versions; he tested against one. The game went out with a huge bug caused by a BIOS change that had not been applied to my development machine.

 

The same guy nearly caused me to cover-mount a virus-infected demo on 1 million magazines.

 

He used my machine at night to play pirated games. I didn't take it very well when I found out. I swear I didn't know that wall was only plasterboard.



#14 frob   Moderators   -  Reputation: 20169


Posted 10 March 2014 - 04:09 PM

The bug fix was "Don't fecking do it"


Oh boy...

Still, it is important to fix the bugs; they matter a great deal to the customers. I can understand the game needing to meet its deadline, and that fixing the bugs takes time away from other things that still need to be worked on, but they should still be fixed.
There are two classes of bugs in that category.

One is "Don't Do That', or DDT bugs. My personal favorite DDT bug was to pause the game, alt-tab out, run the uninstaller, skip the prompts about the game still running, then alt-tab back in the game. It crashes. DDT. Another of my most favorite was when the tester pressed down on the PS2 until the disc ground to a stop during reading, repeating five or ten times until a read error appeared. Thanks for that. DDT. (We joked for weeks asking for permission to grind discs in our very expensive devkits...)

The other is "Known Shippable", or KS. These are always a little troubling, but cannot be helped. They include one-off bugs: the game crashed once doing this, we don't know why, and we cannot reproduce it. They also include little annoyances that we can live with: when a player is wearing this clothing combination and is running and turning left, some of their clothing polygons clip through each other.

As the deadline approaches, the number of KS bugs rapidly increases. In the days right before submitting to certification almost every new bug goes straight to KS status; if they haven't reported it in three months of testing and it isn't a crash bug, we can probably live with it.
Check out my personal indie blog at bryanwagstaff.com.

#15 Stainless   Members   -  Reputation: 870


Posted 10 March 2014 - 06:35 PM

I hate crash-once bugs. You spend ages trying to recreate them, add loads of debug code to try to track them down, and you never see them again.

 

There has to be a reason.

 

When I worked at Panasonic we had one of those, so we created a special build with loads of trace information in it. Then everyone in the company took a handful of phones and went somewhere. Like the true coder I am, I went to the pub. :) Every 3 minutes we dialled the speaking clock on all handsets. We made so many calls that Vodafone cut us off.

 

That shut down all the company's phones.

 

Eventually someone got a crash, at his home of all places. A team with loads of test kit jumped in a van and camped in his front room until the problem was found.

 

That's the sort of massive effort required to find bugs sometimes.





