Quote:Original post by JasonBlochowiak
I'd say that if static analysis demonstrates that the assertion always holds, then stay silent. If static analysis doesn't prove the assertion, then issue a warning, with the ability for the programmer to silence the warning by making an explicit statement that the assertion will hold, while adding a dynamic code check to debug code.
You raise an interesting point which I hadn't considered at first, and that is the interaction between "what could happen to today's code" and "what could happen to tomorrow's code". Static analysis only proves today's code correct, while the user might want to prove the code always correct (for instance, he intends to distribute a module on the web, so he wants to make sure that no bugs will appear and that all constraints are expressed in the function contracts).
I don't really know how to make this differentiation. I could say, for example, that descriptive assertions must always be proven true before being distributed, and that internal (imperative) assertions must be proven true using only the contract of the function (if it's a public function), or through static analysis (if it's a private function). Does this make sense? And also, would this fit with the open-closed principle?
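To make the public/private distinction concrete, here is a minimal Python sketch (Python standing in for the language under discussion; the `requires` decorator, `mean`, and the predicate are all hypothetical names). The point is that the internal imperative assertion inside a public function should follow from the function's contract alone, with no cross-module analysis needed:

```python
def requires(predicate, message):
    """Hypothetical stand-in for a function contract (precondition)."""
    def decorate(fn):
        def wrapper(*args):
            # In a checked build, violating the contract fails loudly
            # at the call boundary, before the body runs.
            if not predicate(*args):
                raise ValueError(message)
            return fn(*args)
        return wrapper
    return decorate

@requires(lambda total, n: n > 0, "n must be positive")
def mean(total, n):
    # Internal (imperative) assertion: n != 0. For a *public* function,
    # this must be provable from the contract alone -- and it is, since
    # the contract guarantees n > 0.
    assert n != 0
    return total / n
```

For a private function, the checker could instead inspect every call site inside the module and discharge the same assertion by whole-module static analysis, with no declared contract at all.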
Quote:Original post by Zahlman
Unsafe.
Fast.
Weak.
Interesting levels, although the static checker will always run by default: it's more or less required to ensure type safety and drive code generation, but the language's structure keeps it quite fast.
Quote:
Normal. Implicit assertions are generated conservatively but simply (i.e. without any intra-function analysis), and thereafter treated equally to explicit ones. All of these are statically checked; if it will never succeed, an error is emitted, and if it merely isn't provable, then a warning is emitted and an exception-throw generated.
Interesting. This sounds to me like a good "development-in-progress" mode, with warnings and safety scaffolds and thorough verification.
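A sketch of what the "Normal" level might produce for code the checker cannot prove safe (in Python rather than the language under discussion; all names and the exact shape of the emitted guard are hypothetical):

```python
import warnings

def sqrt_unproven(x):
    # Suppose the checker could not show `x >= 0` for every caller.
    # Under the "Normal" level it (1) warned at compile time and
    # (2) emitted a runtime guard like this one in its place:
    if not (x >= 0):
        raise AssertionError("unproven assertion failed: x >= 0")
    return x ** 0.5

# Simulating the compile-time half of the bargain:
warnings.warn("cannot prove `x >= 0` at all call sites of sqrt_unproven")
```

That is exactly the "warnings and safety scaffolds" behaviour: the program still builds, but every unproven assumption is both flagged and guarded.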
Quote:Strict. Implicit assertions are generated wherever possible, and thereafter treated equally to explicit ones. The static checker attempts to remove all assertions which are redundant (and does "chain" through logic e.g. checking whether Foo(1) can be called if that would lead to a divide by zero: effectively, an implicit assertion 'i not in x such that Bar(x) == 0' is added to Foo), and then those which can be guaranteed for all input. If anything is left, emit errors: "I can't prove XXX for all input".
I think my answer to JasonBlochowiak applies here as well: what about making sure that a module, in isolation, is correct? What rules do I apply to decide when static checking is applicable (XXX is never violated in this program) and when I'm looking for more strict verification (XXX can never be violated by any program)?
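The "chaining" case from the quote, sketched in Python with hypothetical definitions of `Foo` and `Bar` (the real checker would do this symbolically; here the inferred precondition is checked by hand to show what it amounts to):

```python
def Bar(x):
    return x - 1  # hypothetical: Bar(x) == 0 exactly when x == 1

def Foo(i):
    # The division hazard below "chains" outward: the checker infers
    # the implicit precondition `Bar(i) != 0` on Foo itself, which for
    # this Bar means `i != 1`.
    return 10 / Bar(i)

def Foo_guarded(i):
    # Under the Strict level, a call site that cannot rule out i == 1
    # would be rejected at compile time; this guard makes the inferred
    # precondition explicit instead.
    if Bar(i) == 0:
        raise ValueError("inferred precondition violated: Bar(i) != 0")
    return Foo(i)
```

Whether that rejection happens per-program ("no caller in this program passes 1") or per-module ("no possible caller may pass 1 without a declared contract") is exactly the open question above.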
Quote:Unit test generation, if offered, should be done by a separate tool.
Good point. How do you suggest, though, that the compiler use the results from the unit tests to "prove" assertions?
Quote:Also, I consider the two types of explicit assertion to be equivalent at all levels.
I'm actually wondering whether this is reasonable. Under static checking, an error is raised only if an assertion contradicts the actual usage of the object. That makes sense for imperative assertions, but I'm not certain it does for descriptive ones. If I write a couple of functions and then assert properties over these functions, I would like to be warned about a violation right now, and not when an object is finally created and used in a way that contradicts these assertions.
Basically, when I state for (i:int) assert (bar(i) != 0); I'd rather have this statement be checked and cleared when I'm wrapping up the module that defines bar, and not when I'm in the middle of writing a module that uses bar correctly (but bar itself happens to be incorrect). What do you think?
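The definition-site timing can be sketched in Python (hypothetical `bar`; a real checker would prove the property symbolically, while this sketch merely spot-checks a sample domain when the defining module loads):

```python
def bar(i):
    # Hypothetical implementation; i*i + 1 is nonzero for every int.
    return i * i + 1

# Descriptive assertion `for (i:int) assert (bar(i) != 0)`, discharged
# when the module that defines bar is wrapped up. A broken bar fails
# right here, at definition time -- not later, inside some unrelated
# module that merely calls bar correctly.
for i in range(-1000, 1001):
    assert bar(i) != 0, f"descriptive assertion fails at i={i}"
```

In other words, descriptive assertions would act like compile-time obligations on the defining module, while imperative ones stay obligations on each use site.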