The beauty of Hoare Logic is that it is compositional: the structure of proofs exactly follows the structure of programs.
This suggests that we can record the essential ideas of a
proof (informally, and leaving out some low-level calculational
details) by decorating
a program with appropriate assertions on
each of its commands.
Such a decorated program carries within it an argument for its own correctness.
For example, consider the program:
  X ::= m;;
  Z ::= p;;
  WHILE ~(X = 0) DO
    Z ::= Z - 1;;
    X ::= X - 1
  END
(Note the parameters m and p, which stand for fixed-but-arbitrary numbers. Formally, they are simply Coq variables of type nat.)
Here is one possible specification for this program:
    True
  X ::= m;;
  Z ::= p;;
  WHILE ~(X = 0) DO
    Z ::= Z - 1;;
    X ::= X - 1
  END
    Z = p - m
Here is a decorated version of the program, embodying a proof of this specification:
    True ->>
    m = m
  X ::= m;;
    X = m ->>
    X = m /\ p = p
  Z ::= p;;
    X = m /\ Z = p ->>
    Z - X = p - m
  WHILE ~(X = 0) DO
      Z - X = p - m /\ X <> 0 ->>
      (Z - 1) - (X - 1) = p - m
    Z ::= Z - 1;;
      Z - (X - 1) = p - m
    X ::= X - 1
      Z - X = p - m
  END
    Z - X = p - m /\ ~ (X <> 0) ->>
    Z = p - m
Concretely, a decorated program consists of the program text interleaved with assertions (either a single assertion or possibly two assertions separated by an implication).
To check that a decorated program represents a valid proof, we check that each individual command is locally consistent with its nearby assertions in the following sense:
For example, an assignment is locally consistent if its precondition is obtained by substituting the assigned expression into its postcondition:

      P [X |-> a]
    X ::= a
      P

A conditional is locally consistent (with respect to assertions P and Q) if the assertions at the top of its then and else branches are exactly P /\ b and P /\ ~b, its then branch is locally consistent (with respect to P /\ b and Q), and its else branch is locally consistent (with respect to P /\ ~b and Q). Finally, a pair of assertions separated by ->> is locally consistent if the first implies the second.
This corresponds to the application of hoare_consequence, and it is the only place in a decorated program where checking whether decorations are correct is not fully mechanical and syntactic, but rather may involve logical and/or arithmetic reasoning.
These local consistency conditions essentially describe a procedure for verifying the correctness of a given proof. This verification involves checking that every single command is locally consistent with the accompanying assertions.
If we are instead interested in finding a proof for a given specification, we need to discover the right assertions. This can be done in an almost mechanical way, with the exception of finding loop invariants, which is the subject of the next section. In the remainder of this section we explain in detail how to construct decorations for several simple programs that don't involve non-trivial loop invariants.
Here is a program that swaps the values of two variables using addition and subtraction (instead of by assigning to a temporary variable).
  X ::= X + Y;;
  Y ::= X - Y;;
  X ::= X - Y
We can prove (informally) using decorations that this program is correct -- i.e., it always swaps the values of variables X and Y.
    (1)  X = m /\ Y = n ->>
    (2)  (X + Y) - ((X + Y) - Y) = n /\ (X + Y) - Y = m
       X ::= X + Y;;
    (3)  X - (X - Y) = n /\ X - Y = m
       Y ::= X - Y;;
    (4)  X - Y = n /\ Y = m
       X ::= X - Y
    (5)  X = n /\ Y = m
These decorations can be constructed by working backward from the postcondition (5): pushing it through each of the three assignments in turn (using the assignment rule) yields (4), (3), and finally (2). The only remaining check is that the precondition (1) implies (2). Substituting X = m and Y = n into (2) and simplifying, we get

    (m + n) - ((m + n) - n) = n /\ (m + n) - n = m
    (m + n) - m = n /\ m = m
    n = n /\ m = m

which is clearly true.
Note that, since we are working with natural numbers rather than fixed-width machine integers, we don't need to worry about the possibility of arithmetic overflow anywhere in this argument. This makes life quite a bit simpler!
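As a sanity check, the arithmetic fact behind the implication from (1) to (2) -- assertion (2) specialized to X = m and Y = n -- can be proved directly in Coq with the lia tactic (the lemma name is just for illustration):

  Require Import Lia.

  (* Assertion (2) of the decorated swap program, under the
     precondition X = m /\ Y = n. *)
  Lemma swap_pre_ok : forall m n : nat,
    (m + n) - ((m + n) - n) = n /\ (m + n) - n = m.
  Proof. intros. split; lia. Qed.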
Here is a simple decorated program using conditionals:
    (1)  True
       TEST X <= Y THEN
    (2)    True /\ X <= Y ->>
    (3)    (Y - X) + X = Y \/ (Y - X) + Y = X
         Z ::= Y - X
    (4)    Z + X = Y \/ Z + Y = X
       ELSE
    (5)    True /\ ~(X <= Y) ->>
    (6)    (X - Y) + X = Y \/ (X - Y) + Y = X
         Z ::= X - Y
    (7)    Z + X = Y \/ Z + Y = X
       FI
    (8)  Z + X = Y \/ Z + Y = X
These decorations were constructed in the same backward fashion: the outer postcondition (8) is copied to the end of each branch, giving (4) and (7); the outer precondition (1) is conjoined with the guard and its negation to get (2) and (5); pushing (4) and (7) backward through the assignments yields (3) and (6); and the remaining checks are that (2) implies (3) and that (5) implies (6).
Fill in valid decorations for the following program:
    True
  TEST X <= Y THEN
      ->>
    Z ::= Y - X
  ELSE
      ->>
    Y ::= X + Z
  FI
    Y = X + Z
Here is a WHILE loop that is so simple it needs no invariant (i.e., the invariant True will do the job).
    (1)  True
       WHILE ~(X = 0) DO
    (2)    True /\ X <> 0 ->>
    (3)    True
         X ::= X - 1
    (4)    True
       END
    (5)  True /\ X = 0 ->>
    (6)  X = 0
The decorations can be constructed as follows: the invariant True supplies (1) and (4); conjoining it with the guard and with the guard's negation gives (2) and (5); pushing (4) backward through the assignment gives (3); and the remaining checks are the two implications (2) ->> (3) and (5) ->> (6), both of which clearly hold.
From an informal proof in the form of a decorated program, it is easy to read off a formal proof using the Coq versions of the Hoare rules. Note that we do not unfold the definition of hoare_triple anywhere in this proof -- the idea is to use the Hoare rules as a self-contained logic for reasoning about programs.
The following Imp program calculates the integer quotient and remainder of two numbers m and n that are arbitrary constants in the program.
  X ::= m;;
  Y ::= 0;;
  WHILE n <= X DO
    X ::= X - n;;
    Y ::= Y + 1
  END
If we replace m and n by concrete numbers and execute the program, it will terminate with the variable X set to the remainder when m is divided by n and with Y set to the quotient.
In order to give a specification to this program we need to remember that dividing m by n produces a remainder X and a quotient Y such that n * Y + X = m /\ X < n.
It turns out that we get lucky with this program and don't have to think very hard about the loop invariant: the invariant is just the first conjunct n * Y + X = m, and we can use this to decorate the program.
    (1)  True ->>
    (2)  n * 0 + m = m
       X ::= m;;
    (3)  n * 0 + X = m
       Y ::= 0;;
    (4)  n * Y + X = m
       WHILE n <= X DO
    (5)    n * Y + X = m /\ n <= X ->>
    (6)    n * (Y + 1) + (X - n) = m
         X ::= X - n;;
    (7)    n * (Y + 1) + X = m
         Y ::= Y + 1
    (8)    n * Y + X = m
       END
    (9)  n * Y + X = m /\ X < n
Assertions (4), (5), (8), and (9) are derived mechanically from the invariant and the loop's guard. Assertions (8), (7), and (6) are derived using the assignment rule going backwards from (8) to (6). Assertions (4), (3), and (2) are again backwards applications of the assignment rule.
Now that we've decorated the program it only remains to check that the two uses of the consequence rule are correct -- i.e., that (1) implies (2) and that (5) implies (6). This is indeed the case, so we have a valid decorated program.
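The more interesting of these implications, (5) implies (6), amounts to an arithmetic fact that can be checked directly in Coq (the lemma name is illustrative; lia treats the product n * y as an opaque term):

  Require Import Arith Lia.

  (* If the invariant n * Y + X = m holds and the guard n <= X is true,
     the invariant still holds after the loop body updates X and Y. *)
  Lemma div_body_preserves : forall n y x m : nat,
    n * y + x = m -> n <= x ->
    n * (y + 1) + (x - n) = m.
  Proof.
    intros n y x m Hinv Hle.
    rewrite Nat.mul_add_distr_l, Nat.mul_1_r.
    lia.
  Qed.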
Once the outermost precondition and postcondition are chosen, the only creative part in verifying programs using Hoare Logic is finding the right loop invariants. The reason this is difficult is the same as the reason that inductive mathematical proofs are: strengthening the loop invariant (or the induction hypothesis) means that you have a stronger assumption to work with when trying to establish the postcondition of the loop body (or complete the induction step of the proof), but it also means that the loop body's postcondition (or the statement being proved inductively) is stronger and thus harder to prove!
This section explains how to approach the challenge of finding loop invariants through a series of examples and exercises.
The following program subtracts the value of X from the value of Y by repeatedly decrementing both X and Y. We want to verify its correctness with respect to the pre- and postconditions shown:
    X = m /\ Y = n
  WHILE ~(X = 0) DO
    Y ::= Y - 1;;
    X ::= X - 1
  END
    Y = n - m
To verify this program, we need to find an invariant Inv for the loop. As a first step we can leave Inv as an unknown and build a skeleton for the proof by applying the rules for local consistency (working from the end of the program to the beginning, as usual, and without any thinking at all yet).
This leads to the following skeleton:
    (1)  X = m /\ Y = n ->>                    (a)
    (2)  Inv
       WHILE ~(X = 0) DO
    (3)    Inv /\ X <> 0 ->>                   (c)
    (4)    Inv [X |-> X-1] [Y |-> Y-1]
         Y ::= Y - 1;;
    (5)    Inv [X |-> X-1]
         X ::= X - 1
    (6)    Inv
       END
    (7)  Inv /\ ~ (X <> 0) ->>                 (b)
    (8)  Y = n - m
By examining this skeleton, we can see that any valid Inv will have to respect three conditions: (a) it must be weak enough to be implied by the loop's precondition, i.e., (1) must imply (2); (b) it must be strong enough to imply the program's postcondition, i.e., (7) must imply (8); and (c) it must be preserved by a single iteration of the loop, i.e., (3) must imply (4).
These conditions are actually independent of the particular
program and specification we are considering. Indeed, every loop
invariant has to satisfy them. One way to find an invariant that
simultaneously satisfies these three conditions is by using an
iterative process: start with a candidate
invariant (e.g., a
guess or a heuristic choice) and check the three conditions above;
if any of the checks fails, try to use the information that we get
from the failure to produce another -- hopefully better -- candidate
invariant, and repeat.
For instance, in the reduce-to-zero example above, we saw that, for a very simple loop, choosing True as an invariant did the job. So let's try instantiating Inv with True in the skeleton above and see what we get...
    (1)  X = m /\ Y = n ->>                    (a - OK)
    (2)  True
       WHILE ~(X = 0) DO
    (3)    True /\ X <> 0 ->>                  (c - OK)
    (4)    True
         Y ::= Y - 1;;
    (5)    True
         X ::= X - 1
    (6)    True
       END
    (7)  True /\ X = 0 ->>                     (b - WRONG!)
    (8)  Y = n - m
While conditions (a) and (c) are trivially satisfied, condition (b) is wrong, i.e., it is not the case that True /\ X = 0 (7) implies Y = n - m (8). In fact, the two assertions are completely unrelated, so it is very easy to find a counterexample to the implication (say, Y = X = m = 0 and n = 1).
If we want (b) to hold, we need to strengthen the invariant so that it implies the postcondition (8). One simple way to do this is to let the invariant be the postcondition. So let's return to our skeleton, instantiate Inv with Y = n - m, and check conditions (a) to (c) again.
    (1)  X = m /\ Y = n ->>                    (a - WRONG!)
    (2)  Y = n - m
       WHILE ~(X = 0) DO
    (3)    Y = n - m /\ X <> 0 ->>             (c - WRONG!)
    (4)    Y - 1 = n - m
         Y ::= Y - 1;;
    (5)    Y = n - m
         X ::= X - 1
    (6)    Y = n - m
       END
    (7)  Y = n - m /\ X = 0 ->>                (b - OK)
    (8)  Y = n - m
This time, condition (b) holds trivially, but (a) and (c) are broken. Condition (a) requires that (1) X = m /\ Y = n implies (2) Y = n - m. If we substitute Y by n we have to show that n = n - m for arbitrary m and n, which is not the case (for instance, when m = n = 1). Condition (c) requires that n - m - 1 = n - m, which fails, for instance, for n = 1 and m = 0. So, although Y = n - m holds at the end of the loop, it does not hold from the start, and it doesn't hold on each iteration; it is not a correct invariant.
This failure is not very surprising: the variable Y changes during the loop, while m and n are constant, so the assertion we chose didn't have much chance of being an invariant!
To do better, we need to generalize (8) to some statement that is
equivalent to (8) when X is 0, since this will be the case
when the loop terminates, and that fills the gap
in some
appropriate way when X is nonzero. Looking at how the loop
works, we can observe that X and Y are decremented together
until X reaches 0. So, if X = 2 and Y = 5 initially,
after one iteration of the loop we obtain X = 1 and Y = 4;
after two iterations X = 0 and Y = 3; and then the loop stops.
Notice that the difference between Y and X stays constant
between iterations: initially, Y = n and X = m, and the
difference is always n - m. So let's try instantiating Inv in
the skeleton above with Y - X = n - m.
    (1)  X = m /\ Y = n ->>                    (a - OK)
    (2)  Y - X = n - m
       WHILE ~(X = 0) DO
    (3)    Y - X = n - m /\ X <> 0 ->>         (c - OK)
    (4)    (Y - 1) - (X - 1) = n - m
         Y ::= Y - 1;;
    (5)    Y - (X - 1) = n - m
         X ::= X - 1
    (6)    Y - X = n - m
       END
    (7)  Y - X = n - m /\ X = 0 ->>            (b - OK)
    (8)  Y = n - m
Success! Conditions (a), (b) and (c) all hold now. (To verify (c), we need to check that, under the assumption that X <> 0, we have Y - X = (Y - 1) - (X - 1); this holds for all natural numbers X and Y.)
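This last arithmetic fact is exactly the kind of side condition that Coq's lia tactic dispatches automatically (the lemma name is illustrative):

  Require Import Lia.

  (* Condition (c): under the guard X <> 0, decrementing both X and Y
     preserves the difference Y - X, even with truncated subtraction. *)
  Lemma sub_slowly_body_ok : forall x y : nat,
    x <> 0 -> (y - 1) - (x - 1) = y - x.
  Proof. intros. lia. Qed.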
A roundabout way of assigning a number currently stored in X to the variable Y is to start Y at 0, then decrement X until it hits 0, incrementing Y at each step. Here is a program that implements this idea:
    X = m
  Y ::= 0;;
  WHILE ~(X = 0) DO
    X ::= X - 1;;
    Y ::= Y + 1
  END
    Y = m
Write an informal decorated program showing that this procedure is correct.
The following program adds the variable X into the variable Z by repeatedly decrementing X and incrementing Z.
  WHILE ~(X = 0) DO
    Z ::= Z + 1;;
    X ::= X - 1
  END
Following the pattern of the subtract_slowly example above, pick a precondition and postcondition that give an appropriate specification of add_slowly; then (informally) decorate the program accordingly.
Here is a cute little program for computing the parity of the value initially stored in X (due to Daniel Cristofani).
    X = m
  WHILE 2 <= X DO
    X ::= X - 2
  END
    X = parity m
The mathematical parity function used in the specification is defined in Coq as follows:
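Here is one such definition, a structurally recursive function that returns 0 for even numbers and 1 for odd ones (this is all we need for the two facts about parity used below):

  Fixpoint parity x :=
    match x with
    | 0 => 0
    | 1 => 1
    | S (S x') => parity x'
    end.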
The postcondition does not hold at the beginning of the loop, since m = parity m does not hold for an arbitrary m, so we cannot use that as an invariant. To find an invariant that works, let's think a bit about what this loop does. On each iteration it decrements X by 2, which preserves the parity of X. So the parity of X does not change, i.e., it is invariant. The initial value of X is m, so the parity of X is always equal to the parity of m. Using parity X = parity m as an invariant we obtain the following decorated program:
    X = m ->>                                  (a - OK)
    parity X = parity m
  WHILE 2 <= X DO
      parity X = parity m /\ 2 <= X ->>        (c - OK)
      parity (X-2) = parity m
    X ::= X - 2
      parity X = parity m
  END
    parity X = parity m /\ X < 2 ->>           (b - OK)
    X = parity m
With this invariant, conditions (a), (b), and (c) are all satisfied. For verifying (b), we observe that, when X < 2, we have parity X = X (we can easily see this in the definition of parity). For verifying (c), we observe that, when 2 <= X, we have parity X = parity (X-2).
Translate this proof to Coq. Refer to the reduce_to_zero example for ideas. You may find the following two lemmas useful:
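Stated in terms of the parity function sketched above, these two facts can be phrased as the following lemmas (the names are illustrative; the proofs need only case analysis and the lia tactic):

  Require Import Lia.

  Lemma parity_ge_2 : forall x,
    2 <= x -> parity (x - 2) = parity x.
  Proof.
    intros x Hx. destruct x as [| [| x']].
    - exfalso; lia.            (* 2 <= 0 is impossible *)
    - exfalso; lia.            (* 2 <= 1 is impossible *)
    - simpl. f_equal; lia.     (* the goal reduces to x' - 0 = x' *)
  Qed.

  Lemma parity_lt_2 : forall x,
    ~ 2 <= x -> parity x = x.
  Proof.
    intros x Hx. destruct x as [| [| x']].
    - reflexivity.
    - reflexivity.
    - exfalso. apply Hx. lia.  (* 2 <= S (S x') always holds *)
  Qed.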
The following program computes the (integer) square root of X by naive iteration:
    X = m
  Z ::= 0;;
  WHILE (Z+1)*(Z+1) <= X DO
    Z ::= Z + 1
  END
    Z*Z <= m /\ m < (Z+1)*(Z+1)
As above, we can try to use the postcondition as a candidate invariant, obtaining the following decorated program:
    (1)  X = m ->>                  (a - second conjunct of (2) WRONG!)
    (2)  0*0 <= m /\ m < 1*1
       Z ::= 0;;
    (3)  Z*Z <= m /\ m < (Z+1)*(Z+1)
       WHILE (Z+1)*(Z+1) <= X DO
    (4)    Z*Z <= m /\ (Z+1)*(Z+1) <= X ->>                (c - WRONG!)
    (5)    (Z+1)*(Z+1) <= m /\ m < (Z+2)*(Z+2)
         Z ::= Z + 1
    (6)    Z*Z <= m /\ m < (Z+1)*(Z+1)
       END
    (7)  Z*Z <= m /\ m < (Z+1)*(Z+1) /\ ~((Z+1)*(Z+1) <= X) ->>  (b - OK)
    (8)  Z*Z <= m /\ m < (Z+1)*(Z+1)
This didn't work very well: conditions (a) and (c) both failed. Looking at condition (c), we see that the second conjunct of (4) is almost the same as the first conjunct of (5), except that (4) mentions X while (5) mentions m. But note that X is never assigned in this program, so we should always have X=m; we didn't propagate this information from (1) into the loop invariant, but we could!
Also, we don't need the second conjunct of (8), since we can obtain it from the negation of the guard -- the third conjunct in (7) -- again under the assumption that X=m. This allows us to simplify a bit.
So we now try X=m /\ Z*Z <= m as the loop invariant:
    X = m ->>                                      (a - OK)
    X = m /\ 0*0 <= m
  Z ::= 0;;
    X = m /\ Z*Z <= m
  WHILE (Z+1)*(Z+1) <= X DO
      X = m /\ Z*Z <= m /\ (Z+1)*(Z+1) <= X ->>    (c - OK)
      X = m /\ (Z+1)*(Z+1) <= m
    Z ::= Z + 1
      X = m /\ Z*Z <= m
  END
    X = m /\ Z*Z <= m /\ X < (Z+1)*(Z+1) ->>       (b - OK)
    Z*Z <= m /\ m < (Z+1)*(Z+1)
This works, since conditions (a), (b), and (c) are now all trivially satisfied.
Very often, if a variable is used in a loop in a read-only fashion (i.e., it is referred to by the program or by the specification and it is not changed by the loop), it is necessary to add the fact that it doesn't change to the loop invariant.
Here is a program that squares X by repeated addition:
    X = m
  Y ::= 0;;
  Z ::= 0;;
  WHILE ~(Y = X) DO
    Z ::= Z + X;;
    Y ::= Y + 1
  END
    Z = m*m
The first thing to note is that the loop reads X but doesn't change its value. As we saw in the previous example, it is a good idea in such cases to add X = m to the invariant. The other thing that we know is often useful in the invariant is the postcondition, so let's add that too, leading to the invariant candidate Z = m * m /\ X = m.
    X = m ->>                                  (a - WRONG)
    0 = m*m /\ X = m
  Y ::= 0;;
    0 = m*m /\ X = m
  Z ::= 0;;
    Z = m*m /\ X = m
  WHILE ~(Y = X) DO
      Z = m*m /\ X = m /\ Y <> X ->>           (c - WRONG)
      Z + X = m*m /\ X = m
    Z ::= Z + X;;
      Z = m*m /\ X = m
    Y ::= Y + 1
      Z = m*m /\ X = m
  END
    Z = m*m /\ X = m /\ ~(Y <> X) ->>          (b - OK)
    Z = m*m
Conditions (a) and (c) fail because of the Z = m*m part. While Z starts at 0 and works itself up to m*m, we can't expect Z to be m*m from the start. If we look at how Z progresses in the loop, after the 1st iteration Z = m, after the 2nd iteration Z = 2*m, and at the end Z = m*m. Since the variable Y tracks how many times we go through the loop, this leads us to derive a new invariant candidate: Z = Y*m /\ X = m.
    X = m ->>                                  (a - OK)
    0 = 0*m /\ X = m
  Y ::= 0;;
    0 = Y*m /\ X = m
  Z ::= 0;;
    Z = Y*m /\ X = m
  WHILE ~(Y = X) DO
      Z = Y*m /\ X = m /\ Y <> X ->>           (c - OK)
      Z + X = (Y+1)*m /\ X = m
    Z ::= Z + X;;
      Z = (Y+1)*m /\ X = m
    Y ::= Y + 1
      Z = Y*m /\ X = m
  END
    Z = Y*m /\ X = m /\ ~(Y <> X) ->>          (b - OK)
    Z = m*m
This new invariant makes the proof go through: all three conditions are easy to check.
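The only condition that involves any real arithmetic is (c); it boils down to the following fact, which can be checked directly in Coq (the lemma name is illustrative):

  Require Import Arith.

  (* Condition (c): if Z = Y*m before the body, then Z + X = (Y+1)*m
     afterwards, given that X holds the constant m. *)
  Lemma square_body_ok : forall x y z m : nat,
    z = y * m -> x = m -> z + x = (y + 1) * m.
  Proof.
    intros. subst.
    rewrite Nat.mul_add_distr_r, Nat.mul_1_l.
    reflexivity.
  Qed.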
It is worth comparing the postcondition Z = m*m and the Z = Y*m conjunct of the invariant. It is often the case that one has to replace parameters with variables -- or with expressions involving both variables and parameters, like m - Y -- when going from postconditions to invariants.
Recall that n! denotes the factorial of n (i.e., n! = 1*2*...*n). Here is an Imp program that calculates the factorial of the number initially stored in the variable X and puts it in the variable Y:
    X = m
  Y ::= 1;;
  WHILE ~(X = 0) DO
    Y ::= Y * X;;
    X ::= X - 1
  END
    Y = m!
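For reference, the factorial function itself can be written in Coq along the following lines (the name fact and the exact form are just for this sketch; the chapter's own definition of m! may differ in detail):

  Fixpoint fact (n : nat) : nat :=
    match n with
    | 0 => 1
    | S n' => S n' * fact n'   (* n! = n * (n-1)! *)
    end.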
Fill in the blanks in the following decorated program. For full credit, make sure all the arithmetic operations used in the assertions are well-defined on natural numbers.
    X = m ->>
  Y ::= 1;;
  WHILE ~(X = 0) DO
      ->>
    Y ::= Y * X;;
    X ::= X - 1
  END
    ->>
    Y = m!
Fill in valid decorations for the following program. For the ->> steps in your annotations, you may rely (silently) on the following facts about min:
  Lemma lemma1 : forall x y,
    (x = 0 \/ y = 0) -> min x y = 0.
  Lemma lemma2 : forall x y,
    min (x - 1) (y - 1) = (min x y) - 1.
plus standard high-school algebra, as always.
    True ->>
  X ::= a;;
  Y ::= b;;
  Z ::= 0;;
  WHILE ~(X = 0) && ~(Y = 0) DO
      ->>
    X ::= X - 1;;
    Y ::= Y - 1;;
    Z ::= Z + 1
  END
    ->>
    Z = min a b
Here is a very inefficient way of adding 3 numbers:
  X ::= 0;;
  Y ::= 0;;
  Z ::= c;;
  WHILE ~(X = a) DO
    X ::= X + 1;;
    Z ::= Z + 1
  END;;
  WHILE ~(Y = b) DO
    Y ::= Y + 1;;
    Z ::= Z + 1
  END
Show that it does what it should by filling in the blanks in the following decorated program.
    True ->>
  X ::= 0;;
  Y ::= 0;;
  Z ::= c;;
  WHILE ~(X = a) DO
      ->>
    X ::= X + 1;;
    Z ::= Z + 1
  END;;
    ->>
  WHILE ~(Y = b) DO
      ->>
    Y ::= Y + 1;;
    Z ::= Z + 1
  END
    ->>
    Z = a + b + c
Here is a program that computes the series: 1 + 2 + 2^2 + ... + 2^m = 2^(m+1) - 1
  X ::= 0;;
  Y ::= 1;;
  Z ::= 1;;
  WHILE ~(X = m) DO
    Z ::= 2 * Z;;
    Y ::= Y + Z;;
    X ::= X + 1
  END
Write a decorated program for this.
Some Hoare triples are more interesting than others. For example,

    False
  X ::= Y + 1
    X <= 5

is not very interesting: although it is perfectly valid, it tells us nothing useful. Since the precondition isn't satisfied by any state, it doesn't describe any situations where we can use the command X ::= Y + 1 to achieve the postcondition X <= 5.
By contrast,
    Y <= 4 /\ Z = 0
  X ::= Y + 1
    X <= 5
is useful: it tells us that, if we can somehow create a situation in which we know that Y <= 4 /\ Z = 0, then running this command will produce a state satisfying the postcondition. However, this triple is still not as useful as it could be, because the Z = 0 clause in the precondition actually has nothing to do with the postcondition X <= 5. The most useful triple (for this command and postcondition) is this one:

    Y <= 4
  X ::= Y + 1
    X <= 5
In other words, Y <= 4 is the weakest valid precondition of the command X ::= Y + 1 for the postcondition X <= 5.
In general, we say that P is the weakest precondition of
command c for postcondition Q
if {{P}} c {{Q}} and if,
whenever P' is an assertion such that {{P'}} c {{Q}}, it is
the case that P' st implies P st for all states st.
That is, P is the weakest precondition of c for Q if (a) P is a precondition for Q and c, and (b) P is the weakest (easiest to satisfy) assertion that guarantees that Q will hold after executing c.
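Transcribed into Coq, and assuming the chapter's hoare_triple notation {{P}} c {{Q}} and the ->> notation for assertion implication are in scope, the definition looks roughly like this (the name is_wp is chosen for this sketch):

  Definition is_wp (P : Assertion) (c : com) (Q : Assertion) : Prop :=
    {{P}} c {{Q}} /\
    forall P', {{P'}} c {{Q}} -> (P' ->> P).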
What are the weakest preconditions of the following commands for the following postconditions?
Prove formally, using the definition of hoare_triple, that Y <= 4 is indeed the weakest precondition of X ::= Y + 1 with respect to postcondition X <= 5.
Show that the precondition in the rule hoare_asgn is in fact the weakest precondition.
Show that your havoc_pre rule from the himp_hoare exercise in the Hoare chapter returns the weakest precondition.
Our informal conventions for decorated programs amount to a way of displaying Hoare triples, in which commands are annotated with enough embedded assertions that checking the validity of a triple is reduced to simple logical and algebraic calculations showing that some assertions imply others. In this section, we show that this informal presentation style can actually be made completely formal and indeed that checking the validity of decorated programs can mostly be automated.
The first thing we need to do is to formalize a variant of the syntax of commands with embedded assertions. We call the new commands decorated commands, or dcoms.
We don't want both preconditions and postconditions on each command, because a sequence of two commands would contain redundant decorations--the postcondition of the first likely being the same as the precondition of the second. Instead, decorations are added corresponding to postconditions only. A separate type, decorated, is used to add just one precondition for the entire program.
To avoid clashing with the existing Notation definitions for ordinary commands, we introduce these notations in a special scope called dcom_scope, and we Open this scope for the remainder of the file.
Careful readers will note that we've defined two notations
for specifying a precondition explicitly, one with and one
without a ->>. The without
version is intended to be
used to supply the initial precondition at the very top
of the program.
It is easy to go from a dcom to a com by erasing all annotations.
The choice of exactly where to put assertions in the definition of dcom is a bit subtle. The simplest thing to do would be to annotate every dcom with a precondition and postcondition. But this would result in very verbose programs with a lot of repeated annotations: for example, a program like SKIP;;SKIP would have to be annotated as

    {{P}} ({{P}} SKIP {{P}}) ;; ({{P}} SKIP {{P}}) {{P}}

with pre- and post-conditions on each SKIP, plus identical pre- and post-conditions on the semicolon!
Instead, the rule we've followed is this: the postcondition expected by each dcom d is embedded in d itself, while the precondition is supplied by the surrounding context.
In other words, the invariant of the representation is that a dcom d together with a precondition P determines a Hoare triple {{P}} (extract d) {{post d}}, where post is defined as follows:
It is straightforward to extract the precondition and postcondition from a decorated program.
We can express what it means for a decorated program to be correct as follows:
To check whether this Hoare triple is valid, we need a way to
extract the proof obligations
from a decorated program. These
obligations are often called verification conditions, because
they are the facts that must be verified to see that the
decorations are logically consistent and thus add up to a complete
proof of correctness.
The function verification_conditions takes a dcom d together with a precondition P and returns a proposition that, if it can be proved, implies that the triple {{P}} (extract d) {{post d}} is valid.
It does this by walking over d and generating a big
conjunction including all the local checks
that we listed when
we described the informal rules for decorated programs. (Strictly
speaking, we need to massage the informal rules a little bit to
add some uses of the rule of consequence, but the correspondence
should be clear.)
And now the key theorem, stating that verification_conditions does its job correctly. Not surprisingly, we need to use each of the Hoare Logic rules at some point in the proof.
Now that all the pieces are in place, we can verify an entire program.
The propositions generated by verification_conditions are fairly big, and they contain many conjuncts that are essentially trivial.
  ===>
    (((fun _ : state => True) ->> (fun _ : state => True)) /\
     ((fun st : state => True /\ bassn (~(X = 0)) st) ->>
      (fun st : state => True /\ st X <> 0)) /\
     ((fun st : state => True /\ ~ bassn (~(X = 0)) st) ->>
      (fun st : state => True /\ st X = 0)) /\
     (fun st : state => True /\ st X <> 0) ->>
      (fun _ : state => True) [X |-> X - 1]) /\
    (fun st : state => True /\ st X = 0) ->>
    (fun st : state => st X = 0)
In principle, we could work with such propositions using just the tactics we have so far, but we can make things much smoother with a bit of automation. We first define a custom verify tactic that uses split repeatedly to turn all the conjunctions into separate subgoals and then uses omega and eauto (described in chapter Auto in Logical Foundations) to deal with as many of them as possible.
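A minimal sketch of such a tactic, assuming only the standard Omega library, might look like this (the chapter's actual verify tactic also unfolds the relevant definitions and destructs hypotheses before invoking the solvers):

  Require Import Omega.

  Ltac verify_sketch :=
    (* split the generated conjunction into one subgoal per condition, *)
    repeat match goal with
           | |- _ /\ _ => split
           end;
    (* then try omega on each subgoal and finish the rest with eauto *)
    try omega; eauto.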
What's left after verify does its thing is just the interesting
parts
of checking that the decorations are correct. For very
simple examples, verify sometimes even immediately solves the
goal (provided that the annotations are correct!).
Another example (formalizing a decorated program we've seen before):
In this section, we use the automation developed above to verify formal decorated programs corresponding to most of the informal ones we have seen.
There are actually several ways to phrase the loop invariant for this program. Here is one natural one, which leads to a rather long proof:
Here is a more intuitive way of writing the invariant:
Here is the simplest invariant we've found for this program:
Again, there are several ways of annotating the squaring program. The simplest variant we've found, square_simpler_dec, is given last.
In the slow_assignment exercise above, we saw a roundabout way of assigning a number currently stored in X to the variable Y: start Y at 0, then decrement X until it hits 0, incrementing Y at each step. Write a formal version of this decorated program and prove it correct.
Remember the factorial function we worked with before:
Following the pattern of subtract_slowly_dec, write a decorated program factorial_dec that implements the factorial function and prove it correct as factorial_dec_correct.
The Fibonacci function is usually written like this:
  Fixpoint fib n :=
    match n with
    | 0 => 1
    | 1 => 1
    | _ => fib (pred n) + fib (pred (pred n))
    end.
This doesn't pass Coq's termination checker, but here is a slightly clunkier definition that does:
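One way to write such a definition, nesting the match so that both recursive calls are on structurally smaller arguments, is sketched below (the chapter's own definition may differ in minor details):

  Fixpoint fib (n : nat) : nat :=
    match n with
    | 0 => 1
    | S n' =>
        match n' with
        | 0 => 1                     (* fib 1 = 1 *)
        | S n'' => fib n' + fib n''  (* fib (n+2) = fib (n+1) + fib n *)
        end
    end.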
Prove that fib satisfies the following equation:
The following Imp program leaves the value of fib n in the variable Y when it terminates:
  X ::= 1;;
  Y ::= 1;;
  Z ::= 1;;
  WHILE ~(X = n + 1) DO
    T ::= Z;;
    Z ::= Z + Y;;
    Y ::= T;;
    X ::= X + 1
  END
Fill in the following definition of dfib and prove that it satisfies this specification:
The formal decorated programs defined in this section are intended to look as similar as possible to the informal ones defined earlier in the chapter. If we drop this requirement, we can eliminate almost all annotations, just requiring final postconditions and loop invariants to be provided explicitly. Do this -- i.e., define a new version of dcom with as few annotations as possible and adapt the rest of the formal development leading up to the verification_correct theorem.
Adapt the parser for Imp presented in chapter ImpParser of Logical Foundations to parse decorated commands (either ours or, even better, the ones you defined in the previous exercise).