Filling the holes in formal methods

Published June 02, 2006
Agh, the HTTP 500 errors drive me INSANE. I've been trying to get to the page to write this for like 20 minutes. I thought this was over?

Anyway, I think this is an opportune time to write about some discussions I've had with people at my workplace who have been in the industry a while concerning formal methods. If you're an avid Slashdot reader, you probably saw the article on the Scientific American website, "Dependable Software by Design." If you haven't seen it, it's a very good article about some emerging tools for rigorously verifying software designs. While I'm touting interesting articles, I'd also suggest reading some of the articles from the September 2005 issue of IEEE Spectrum, which are available for free online. There is in-depth discussion of why there are so many failures on large- and small-scale software projects, including an entire article on the $170 million FBI Virtual Case File debacle mentioned in the Scientific American piece.

Anyway, here's a cool story of debatable origin, but relevant nonetheless:

Two airliners are zeroing in on one another, each flying in excess of 500 mph. Without correction they will smack into each other nose to nose. Fortunately there's no crew on board, just ballsy test pilots at the helm. Rather than being the setup for a Discovery Channel special, this is an early test of TCAS, an emergency warning system that alerts planes when they come too close to one another.

The basic idea is that a set number of seconds before collision, TCAS alerts the pilots of their impending doom and gives coordinated directions to each pilot to avoid it. The planes get close to one another: 20 seconds from collision, 15 seconds from collision - at this point the primary software architects are about to shit themselves - and soon after, TCAS kicks in and directs the pilots safely past one another.

After a change of pants, the software director adjusts the requirements to make the software provide a slightly earlier warning of an impending collision.

Moral of the story? There is a big difference between verifying and validating software.

The former is where testing has reigned supreme for a number of years, and recently formal methods for proving software correct have begun to come into the limelight. To be clear, I'm not bashing those kinds of tools; in fact, I hope to study the methods they use in depth as I progress through university, because I find them fascinating.

How is it being done now? Where I work, verification works kind of like this:
  • Developers use static analysis tools to enforce coding standard compliance and catch errors early
  • Some or all code is formally peer-reviewed
  • Requirements are coherent: all high-level requirements trace down to low-level requirements, and all low-level requirements trace back up to high-level requirements (except in special cases)
  • Code maps to requirements - this is essentially where testing is key. Tests are written to verify the requirements, and the testing tool generates a summary of how well the code was covered. In aviation, depending on the safety consequences of your device's failure, this could mean anything from verifying that every line executes at least once to extensive verification that all control paths are exercised (a small illustration follows this list).
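
To make that last distinction concrete, here's a minimal sketch of my own (the function and the values are hypothetical, not taken from any real TCAS or avionics code): a single test can execute every line of a routine while still leaving a whole control path untested.

    #include <stdio.h>

    /* Hypothetical advisory routine: +1 means "climb", -1 means "descend". */
    static int advisory(int own_altitude, int intruder_altitude)
    {
        int cmd = 1;                        /* default: climb                */
        if (own_altitude < intruder_altitude)
            cmd = -1;                       /* intruder is above us: descend */
        return cmd;
    }

    int main(void)
    {
        /* This one test executes every line of advisory() (the 'if' body runs
         * because 10000 < 12000), so statement coverage is 100%. The path
         * where the condition is false is never taken, so path coverage is
         * not - that's the gap between the two criteria described above.
         */
        printf("advisory: %d\n", advisory(10000, 12000));
        return 0;
    }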

This is where tools like the one detailed in the Scientific American article would be extremely useful. The current processes do generate safe software, but it is extremely expensive to develop compared to non-mission-critical applications. A tool that makes design verification a seamless part of requirements generation and code development would be GREAT. In fact, there is room for these sorts of tools in the upcoming revision of the FAA's software guidance, DO-178C.
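
To give a flavor of what I mean by verifying a design rather than sampling it with tests, here's a rough sketch of my own (the rule, the names, and the bounds are all made up, and this is plain brute force rather than anything as clever as the tool in the article): exhaustively check a toy advisory rule over a bounded set of cases and confirm it never tells both aircraft to maneuver in the same vertical direction.

    #include <stdio.h>

    /* Toy advisory rule: the higher aircraft climbs, the lower one descends.
     * Altitudes are in hundreds of feet; everything here is hypothetical.
     */
    static int advise(int own_alt, int other_alt)
    {
        return (own_alt >= other_alt) ? +1 : -1;   /* +1 climb, -1 descend */
    }

    int main(void)
    {
        /* Enumerate every altitude pair in a bounded domain and check the
         * coordination property for all of them, the way a bounded model
         * finder exhausts a state space instead of spot-checking it.
         */
        for (int a = 0; a <= 400; a++) {
            for (int b = 0; b <= 400; b++) {
                if (a == b)
                    continue;              /* tie-breaking is a separate rule */
                if (advise(a, b) == advise(b, a)) {
                    printf("coordination violated at a=%d, b=%d\n", a, b);
                    return 1;
                }
            }
        }
        printf("coordination property holds over the bounded domain\n");
        return 0;
    }

A real tool works at the level of the design model rather than hand-rolled loops, of course, but the mindset is the point: show the property holds for every case in scope instead of hoping your tests happen to hit the bad one.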

What's validation, then? It's exactly what happened in the story I related earlier. You can do all the proving you want to show that your requirements were implemented perfectly, but you still need to show that the software actually solves the problem that sparked its creation. Usually this involves testing the software in the real world.

In a comfortable conference room, I'm sure 15 seconds seemed like plenty of time. But when you're actually watching two planes fly at each other on a radar screen, you might find out that you need to make some important adjustments.