In the Logic chapter, we looked at several ways of writing propositions, including conjunction, disjunction, and existential quantification. In this chapter, we bring yet another new tool into the mix: inductive definitions.
In past chapters, we have seen two ways of stating that a number n is even: We can say
(1) evenb n = true, or
(2) exists k, n = double k.
Yet another possibility is to say that n is even if we can establish its evenness from the following rules:
- Rule ev_0: The number 0 is even.
- Rule ev_SS: If n is even, then S (S n) is even.
To illustrate how this new definition of evenness works, let's imagine using it to show that 4 is even. By rule ev_SS, it suffices to show that 2 is even. This, in turn, is again guaranteed by rule ev_SS, as long as we can show that 0 is even. But this last fact follows directly from the ev_0 rule.
We will see many definitions like this one during the rest of the course. For purposes of informal discussions, it is helpful to have a lightweight notation that makes them easy to read and write. Inference rules are one such notation:
                      --------------  (ev_0)
                          even 0

                          even n
                     ----------------  (ev_SS)
                      even (S (S n))
Each of the textual rules above is reformatted here as an inference rule; the intended reading is that, if the premises above the line all hold, then the conclusion below the line follows. For example, the rule ev_SS says that, if n satisfies even, then S (S n) also does. If a rule has no premises above the line, then its conclusion holds unconditionally.
We can represent a proof using these rules by combining rule applications into a proof tree. Here's how we might transcribe the above proof that 4 is even:

                 --------  (ev_0)
                  even 0
                 --------  (ev_SS)
                  even 2
                 --------  (ev_SS)
                  even 4

(Why call this a "tree" rather than a "stack", for example? Because, in general, inference rules can have multiple premises. We will see examples of this shortly.)
Putting all of this together, we can translate the definition of evenness into a formal Coq definition using an Inductive declaration, where each constructor corresponds to an inference rule:
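Concretely, the declaration might look like this (a sketch; the book's file may differ in minor details such as how the constructor arguments are written):

    Inductive even : nat -> Prop :=
    | ev_0 : even 0
    | ev_SS (n : nat) (H : even n) : even (S (S n)).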
This definition is different in one crucial respect from previous uses of Inductive: the thing we are defining is not a Type, but rather a function from nat to Prop -- that is, a property of numbers. We've already seen other inductive definitions that result in functions -- for example, list, whose type is Type -> Type. What is really new here is that, because the nat argument of even appears to the right of the colon, it is allowed to take different values in the types of different constructors: 0 in the type of ev_0 and S (S n) in the type of ev_SS.
In contrast, the definition of list names the X parameter globally, to the left of the colon, forcing the result of nil and cons to be the same (list X). Had we tried to bring nat to the left in defining even, we would have seen an error:
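For instance, an attempt along the following lines is rejected (the Fail vernacular and the comment are ours, but the underlying problem is exactly the parameter/index distinction discussed next):

    Fail Inductive wrong_ev (n : nat) : Prop :=
    | wrong_ev_0 : wrong_ev 0
    | wrong_ev_SS (H : wrong_ev n) : wrong_ev (S (S n)).
    (* Rejected: a parameter such as n must appear unchanged in the
       result type of every constructor, but wrong_ev_0 ends in
       wrong_ev 0. *)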
In an Inductive definition, an argument to the type constructor on the left of the colon is called a "parameter", whereas an argument on the right is called an "index".
For example, in Inductive list (X : Type) := ..., X is a parameter; in Inductive even : nat -> Prop := ..., the unnamed nat argument is an index.
We can think of the definition of even as defining a Coq property even : nat -> Prop, together with primitive theorems ev_0 : even 0 and ev_SS : forall n, even n -> even (S (S n)).
That definition can also be written as follows...
    Inductive even : nat -> Prop :=
    | ev_0 : even 0
    | ev_SS : forall n, even n -> even (S (S n)).
... making explicit the type of the rule ev_SS.
Such constructor theorems
have the same status as proven
theorems. In particular, we can use Coq's apply tactic with the
rule names to prove even for particular numbers...
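For instance (a sketch; the theorem name is ours):

    Theorem ev_4 : even 4.
    Proof. apply ev_SS. apply ev_SS. apply ev_0. Qed.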
... or we can use function application syntax:
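For example, applying the constructors directly to their arguments (again, the name is ours):

    Theorem ev_4' : even 4.
    Proof. apply (ev_SS 2 (ev_SS 0 ev_0)). Qed.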
We can also prove theorems that have hypotheses involving even.
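A small example (a sketch, with a name of our choosing):

    Theorem ev_plus4 : forall n, even n -> even (4 + n).
    Proof.
      intros n Hn. simpl.
      apply ev_SS. apply ev_SS. apply Hn.
    Qed.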
Besides constructing evidence that numbers are even, we can also reason about such evidence.
Introducing even with an Inductive declaration tells Coq not only that the constructors ev_0 and ev_SS are valid ways to build evidence that some number is even, but also that these two constructors are the only ways to build evidence that numbers are even (in the sense of even).
In other words, if someone gives us evidence E for the assertion even n, then we know that E must have one of two shapes:
- E is ev_0 (and n is 0), or
- E is ev_SS n' E' (and n is S (S n'), where E' is evidence for even n').
This suggests that it should be possible to analyze a hypothesis of the form even n much as we do inductively defined data structures; in particular, it should be possible to argue by induction and case analysis on such evidence. Let's look at a few examples to see what this means in practice.
Suppose we are proving some fact involving a number n, and we are given even n as a hypothesis. We already know how to perform case analysis on n using destruct or induction, generating separate subgoals for the case where n = O and the case where n = S n' for some n'. But for some proofs we may instead want to analyze the evidence that even n directly. As a tool, we can prove our characterization of evidence for even n, using destruct.
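Such a characterization lemma might look like this (a sketch; the name ev_inversion is ours, and the statement just spells out the two shapes listed above):

    Theorem ev_inversion :
      forall (n : nat), even n ->
        (n = 0) \/ (exists n', n = S (S n') /\ even n').
    Proof.
      intros n E.
      destruct E as [ | n' E'].
      - (* E = ev_0 : even 0 *)
        left. reflexivity.
      - (* E = ev_SS n' E' : even (S (S n')) *)
        right. exists n'. split. reflexivity. apply E'.
    Qed.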
The following theorem can easily be proved using destruct on evidence.
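For instance (a sketch):

    Theorem ev_minus2 : forall n, even n -> even (pred (pred n)).
    Proof.
      intros n E.
      destruct E as [ | n' E'].
      - (* E = ev_0 *) simpl. apply ev_0.
      - (* E = ev_SS n' E' *) simpl. apply E'.
    Qed.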
However, this variation cannot easily be handled with destruct.
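Here is the statement together with a failed attempt, so that the problem described next is visible (a sketch):

    Theorem evSS_ev : forall n, even (S (S n)) -> even n.
    Proof.
      intros n E.
      destruct E as [ | n' E'].
      (* Two subgoals are generated, and in both of them the goal
         even n is left unchanged, so neither can be completed. *)
    Abort.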
Intuitively, we know that evidence for the hypothesis cannot consist just of the ev_0 constructor, since O and S are different constructors of the type nat; hence, ev_SS is the only case that applies. Unfortunately, destruct is not smart enough to realize this, and it still generates two subgoals. Even worse, in doing so, it keeps the final goal unchanged, failing to provide any useful information for completing the proof.
What happened, exactly? Calling destruct has the effect of replacing all occurrences of the property argument by the values that correspond to each constructor. This is enough in the case of ev_minus2 because that argument n is mentioned directly in the final goal. However, it doesn't help in the case of evSS_ev since the term that gets replaced (S (S n)) is not mentioned anywhere.
We could patch this proof by replacing the goal even n, which does not mention the replaced term S (S n), by the equivalent goal even (pred (pred (S (S n)))), which does mention this term, after which destruct can make progress. But it is more straightforward to use our inversion lemma.
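With the characterization lemma in hand, the proof might go like this (a sketch, reusing the hypothetical name ev_inversion from above):

    Theorem evSS_ev : forall n, even (S (S n)) -> even n.
    Proof.
      intros n H. apply ev_inversion in H.
      destruct H as [H0 | H1].
      - discriminate H0.
      - destruct H1 as [n' [Heq E']]. injection Heq as Heq.
        rewrite Heq. apply E'.
    Qed.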
Coq provides a tactic called inversion, which does the work of our inversion lemma and more besides.
The inversion tactic can detect (1) that the first case
(n = 0) does not apply and (2) that the n' that appears in the
ev_SS case must be the same as n. It has an as
variant
similar to destruct, allowing us to assign names rather than
have Coq choose them.
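For example, the troublesome theorem from above goes through directly (a sketch; the primed name is ours):

    Theorem evSS_ev' : forall n, even (S (S n)) -> even n.
    Proof.
      intros n E.
      inversion E as [| n' E'].
      (* The ev_0 case is ruled out automatically; in the ev_SS case,
         E' is evidence that n is even. *)
      apply E'.
    Qed.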
The inversion tactic can apply the principle of explosion to
obviously contradictory
hypotheses involving inductive
properties, something that takes a bit more work using our
inversion lemma. For example:
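For instance (a sketch; the name is ours):

    Theorem one_not_even : ~ even 1.
    Proof. intros H. inversion H. Qed.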
Prove the following result using inversion. For extra practice, prove it using the inversion lemma.
Prove the following result using inversion.
The inversion tactic does quite a bit of work. When applied to equalities, as a special case, it does the work of both discriminate and injection. In addition, it carries out the intros and rewrites that are typically necessary in the case of injection. It can also be applied, more generally, to analyze evidence for inductively defined propositions. As examples, we'll use it to reprove some theorems from Tactics.v.
Here's how inversion works in general. Suppose the name H refers to an assumption P in the current context, where P has been defined by an Inductive declaration. Then, for each of the constructors of P, inversion H generates a subgoal in which H has been replaced by the exact, specific conditions under which this constructor could have been used to prove P. Some of these subgoals will be self-contradictory; inversion throws these away. The ones that are left represent the cases that must be proved to establish the original goal. For those, inversion adds all equations into the proof context that must hold of the arguments given to P (e.g., S (S n') = n in the proof of evSS_ev).
The ev_double exercise above shows that our new notion of evenness is implied by the two earlier ones (since, by even_bool_prop in chapter Logic, we already know that those are equivalent to each other). To show that all three coincide, we just need the following lemma.
We could try to proceed by case analysis or induction on n. But since even is mentioned in a premise, this strategy would probably lead to a dead end, as in the previous section. Thus, it seems better to first try inversion on the evidence for even. Indeed, the first case can be solved trivially.
Unfortunately, the second case is harder. We need to show exists k, S (S n') = double k, but the only available assumption is E', which states that even n' holds. Since this isn't directly useful, it seems that we are stuck and that performing case analysis on E was a waste of time.
If we look more closely at our second goal, however, we can see that something interesting happened: By performing case analysis on E, we were able to reduce the original result to a similar one that involves a different piece of evidence for even: namely E'. More formally, we can finish our proof by showing that
exists k', n' = double k',
which is the same as the original statement, but with n' instead of n. Indeed, it is not difficult to convince Coq that this intermediate result suffices.
If this looks familiar, it is no coincidence: We've encountered similar problems in the Induction chapter, when trying to use case analysis to prove results that required induction. And once again the solution is... induction!
The behavior of induction on evidence is the same as its behavior on data: It causes Coq to generate one subgoal for each constructor that could have been used to build that evidence, while providing an induction hypothesis for each recursive occurrence of the property in question.

To prove that a property of n holds for all numbers for which even n holds, we can use induction on even n. This requires us to prove two things, corresponding to the two ways in which even n could have been constructed. If it was constructed by ev_0, then n = 0, and the property must hold of 0. If it was constructed by ev_SS, then the evidence of even n is of the form ev_SS n' E', where n = S (S n') and E' is evidence for even n'. In this case, the inductive hypothesis says that the property we are trying to prove holds for n'.
Let's try our current lemma again:
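This time the proof goes through by induction on the evidence (a sketch; the lemma name is ours, and double is the function from earlier chapters):

    Lemma ev_even : forall n, even n -> exists k, n = double k.
    Proof.
      intros n E.
      induction E as [ | n' E' IH].
      - (* E = ev_0 *)
        exists 0. reflexivity.
      - (* E = ev_SS n' E' *)
        destruct IH as [k' Hk'].
        rewrite Hk'. exists (S k'). reflexivity.
    Qed.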
Here, we can see that Coq produced an IH that corresponds to E', the single recursive occurrence of even in its own definition. Since E' mentions n', the induction hypothesis talks about n', as opposed to n or some other number.
The equivalence between the second and third definitions of evenness now follows.
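Putting the two directions together (a sketch; ev_double is the exercise mentioned above, and the combined name is ours):

    Theorem ev_even_iff : forall n, even n <-> exists k, n = double k.
    Proof.
      intros n. split.
      - (* -> *) apply ev_even.
      - (* <- *) intros [k Hk]. rewrite Hk. apply ev_double.
    Qed.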
As we will see in later chapters, induction on evidence is a recurring technique across many areas, and in particular when formalizing the semantics of programming languages, where many properties of interest are defined inductively.
The following exercises provide simple examples of this technique, to help you familiarize yourself with it.
In general, there may be multiple ways of defining a property inductively. For example, here's a (slightly contrived) alternative definition for even:
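The alternative definition might read (a sketch):

    Inductive even' : nat -> Prop :=
    | even'_0 : even' 0
    | even'_2 : even' 2
    | even'_sum (n m : nat) (Hn : even' n) (Hm : even' m) : even' (n + m).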
Prove that this definition is logically equivalent to the old one. (You may want to look at the previous theorem when you get to the induction step.)
Finding the appropriate thing to do induction on is a bit tricky here:
This exercise just requires applying existing lemmas. No induction or even case analysis is needed, though some of the rewriting may be tedious.
A proposition parameterized by a number (such as even) can be thought of as a property -- i.e., it defines a subset of nat, namely those numbers for which the proposition is provable. In the same way, a two-argument proposition can be thought of as a relation -- i.e., it defines a set of pairs for which the proposition is provable.
One useful example is the "less than or equal to" relation on numbers.
The following definition should be fairly intuitive. It says that there are two ways to give evidence that one number is less than or equal to another: either observe that they are the same number, or give evidence that the first is less than or equal to the predecessor of the second.
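In Coq, the definition might look like this (a sketch; note that it shadows the standard library's <= if compiled on its own):

    Inductive le : nat -> nat -> Prop :=
    | le_n (n : nat) : le n n
    | le_S (n m : nat) (H : le n m) : le n (S m).

    Notation "m <= n" := (le m n).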
Proofs of facts about <= using the constructors le_n and le_S follow the same patterns as proofs about properties, like even above. We can apply the constructors to prove <= goals (e.g., to show that 3<=3 or 3<=6), and we can use tactics like inversion to extract information from <= hypotheses in the context (e.g., to prove that (2 <= 1) -> 2+2=5.)
Here are some sanity checks on the definition. (Notice that,
although these are the same kind of simple unit tests
as we gave
for the testing functions we wrote in the first few lectures, we
must construct their proofs explicitly -- simpl and
reflexivity don't do the job, because the proofs aren't just a
matter of simplifying computations.)
The "strictly less than" relation n < m can now be defined in terms of le.
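For instance (a sketch):

    Definition lt (n m : nat) := le (S n) m.

    Notation "m < n" := (lt m n).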
Here are a few more simple relations on numbers:
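For example, a relation pairing each number with its square, and one pairing each number with its successor, might be written as follows (a sketch; the names are ours):

    Inductive square_of : nat -> nat -> Prop :=
    | sq (n : nat) : square_of n (n * n).

    Inductive next_nat : nat -> nat -> Prop :=
    | nn (n : nat) : next_nat n (S n).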
Define an inductive binary relation total_relation that holds between every pair of natural numbers.
Define an inductive binary relation empty_relation (on numbers) that never holds.
From the definition of le, we can sketch the behaviors of destruct, inversion, and induction on a hypothesis H providing evidence of the form le e1 e2. Doing destruct H will generate two cases. In the first case, e1 = e2, and it will replace instances of e2 with e1 in the goal and context. In the second case, e2 = S n' for some n' for which le e1 n' holds, and it will replace instances of e2 with S n'. Doing inversion H will remove impossible cases and add generated equalities to the context for further use. Doing induction H will, in the second case, add the induction hypothesis that the goal holds when e2 is replaced with n'.
Here are a number of facts about the <= and < relations that we are going to need later in the course. The proofs make good practice exercises.
Hint: The next one may be easiest to prove by induction on m.
Hint: This one can easily be proved without using induction.
We can define three-place relations, four-place relations, etc., in just the same way as binary relations. For example, consider the following three-place relation on numbers:
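The relation in question might be the following (a sketch of the kind of definition meant; the constructor names are ours):

    Inductive R : nat -> nat -> nat -> Prop :=
    | c1 : R 0 0 0
    | c2 (m n o : nat) (H : R m n o) : R (S m) n (S o)
    | c3 (m n o : nat) (H : R m n o) : R m (S n) (S o)
    | c4 (m n o : nat) (H : R (S m) (S n) (S (S o))) : R m n o
    | c5 (m n o : nat) (H : R m n o) : R n m o.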
The relation R above actually encodes a familiar function. Figure out which function; then state and prove this equivalence in Coq.
A list is a subsequence of another list if all of the elements in the first list occur in the same order in the second list, possibly with some extra elements in between. For example,
[1;2;3]
is a subsequence of each of the lists
[1;2;3], [1;1;1;2;2;3], [1;2;7;3], [5;6;1;9;9;2;7;3;8]
but it is not a subsequence of any of the lists
[1;2], [1;3], [5;6;2;1;7;3;8].
Suppose we give Coq the following definition:
    Inductive R : nat -> list nat -> Prop :=
    | c1 : R 0 []
    | c2 : forall n l, R n l -> R (S n) (n :: l)
    | c3 : forall n l, R (S n) l -> R n l.
Which of the following propositions are provable?
The even property provides a simple example for illustrating inductive definitions and the basic techniques for reasoning about them, but it is not terribly exciting -- after all, it is equivalent to the two non-inductive definitions of evenness that we had already seen, and does not seem to offer any concrete benefit over them.
To give a better sense of the power of inductive definitions, we now show how to use them to model a classic concept in computer science: regular expressions.
Regular expressions are a simple language for describing sets of strings. Their syntax is defined as follows:
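The declaration might look like this (a sketch; the Arguments directives just make the type argument implicit in the constructors):

    Inductive reg_exp (T : Type) : Type :=
    | EmptySet
    | EmptyStr
    | Char (t : T)
    | App (r1 r2 : reg_exp T)
    | Union (r1 r2 : reg_exp T)
    | Star (r : reg_exp T).

    Arguments EmptySet {T}.
    Arguments EmptyStr {T}.
    Arguments Char {T} _.
    Arguments App {T} _ _.
    Arguments Union {T} _ _.
    Arguments Star {T} _.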
Note that this definition is polymorphic: Regular expressions in reg_exp T describe strings with characters drawn from T -- that is, lists of elements of T.
(We depart slightly from standard practice in that we do not require the type T to be finite. This results in a somewhat different theory of regular expressions, but the difference is not significant for our purposes.)
We connect regular expressions and strings via the following rules, which define when a regular expression matches some string:
- The expression EmptySet does not match any string.
- The expression EmptyStr matches the empty string [].
- The expression Char x matches the one-character string [x].
- If re1 matches s1, and re2 matches s2, then App re1 re2 matches s1 ++ s2.
- If at least one of re1 and re2 matches s, then Union re1 re2 matches s.
- Finally, if we can write some string s as the concatenation of a sequence of strings s = s_1 ++ ... ++ s_k, and the expression re matches each of the strings s_i, then Star re matches s.
As a special case, the sequence of strings may be empty, so Star re always matches the empty string [] no matter what re is.
We can easily translate this informal definition into an Inductive one as follows:
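A sketch of the declaration (using the list notations from earlier chapters; the exact layout in the file may differ):

    Inductive exp_match {T} : list T -> reg_exp T -> Prop :=
    | MEmpty : exp_match [] EmptyStr
    | MChar (x : T) : exp_match [x] (Char x)
    | MApp (s1 : list T) (re1 : reg_exp T) (s2 : list T) (re2 : reg_exp T) :
        exp_match s1 re1 -> exp_match s2 re2 ->
        exp_match (s1 ++ s2) (App re1 re2)
    | MUnionL (s1 : list T) (re1 re2 : reg_exp T) :
        exp_match s1 re1 -> exp_match s1 (Union re1 re2)
    | MUnionR (re1 : reg_exp T) (s2 : list T) (re2 : reg_exp T) :
        exp_match s2 re2 -> exp_match s2 (Union re1 re2)
    | MStar0 (re : reg_exp T) : exp_match [] (Star re)
    | MStarApp (s1 s2 : list T) (re : reg_exp T) :
        exp_match s1 re -> exp_match s2 (Star re) ->
        exp_match (s1 ++ s2) (Star re).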
Again, for readability, we can also display this definition using inference-rule notation. At the same time, let's introduce a more readable infix notation.
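The notation might be declared like this (a sketch):

    Notation "s =~ re" := (exp_match s re) (at level 80).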
                   ----------------  (MEmpty)
                    [] =~ EmptyStr

                    ---------------  (MChar)
                     [x] =~ Char x

                s1 =~ re1    s2 =~ re2
              ---------------------------  (MApp)
               s1 ++ s2 =~ App re1 re2

                      s1 =~ re1
               -----------------------  (MUnionL)
                s1 =~ Union re1 re2

                      s2 =~ re2
               -----------------------  (MUnionR)
                s2 =~ Union re1 re2

                    ---------------  (MStar0)
                     [] =~ Star re

               s1 =~ re    s2 =~ Star re
              ---------------------------  (MStarApp)
                 s1 ++ s2 =~ Star re
Notice that these rules are not quite the same as the
informal ones that we gave at the beginning of the section.
First, we don't need to include a rule explicitly stating that no
string matches EmptySet; we just don't happen to include any
rule that would have the effect of some string matching
EmptySet. (Indeed, the syntax of inductive definitions doesn't
even allow us to give such a negative rule.
)
Second, the informal rules for Union and Star correspond to two constructors each: MUnionL / MUnionR, and MStar0 / MStarApp. The result is logically equivalent to the original rules but more convenient to use in Coq, since the recursive occurrences of exp_match are given as direct arguments to the constructors, making it easier to perform induction on evidence. (The exp_match_ex1 and exp_match_ex2 exercises below ask you to prove that the constructors given in the inductive declaration and the ones that would arise from a more literal transcription of the informal rules are indeed equivalent.)
Let's illustrate these rules with a few examples.
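For instance (a sketch; the example names are ours):

    Example reg_exp_ex1 : [1] =~ Char 1.
    Proof. apply MChar. Qed.

    Example reg_exp_ex2 : [1; 2] =~ App (Char 1) (Char 2).
    Proof.
      apply (MApp [1] _ [2]).
      - apply MChar.
      - apply MChar.
    Qed.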
(Notice how the last example applies MApp to the strings [1] and [2] directly. Since the goal mentions [1; 2] instead of [1] ++ [2], Coq wouldn't be able to figure out how to split the string on its own.)
Using inversion, we can also show that certain strings do not match a regular expression:
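For instance (a sketch):

    Example reg_exp_ex3 : ~ ([1; 2] =~ Char 1).
    Proof. intros H. inversion H. Qed.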
We can define helper functions for writing down regular expressions. The reg_exp_of_list function constructs a regular expression that matches exactly the list that it receives as an argument:
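A sketch of the definition:

    Fixpoint reg_exp_of_list {T} (l : list T) : reg_exp T :=
      match l with
      | [] => EmptyStr
      | x :: l' => App (Char x) (reg_exp_of_list l')
      end.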
We can also prove general facts about exp_match. For instance, the following lemma shows that every string s that matches re also matches Star re.
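A sketch of this lemma (the name is ours; app_nil_r is the list lemma from the Poly chapter):

    Lemma MStar1 : forall T (s : list T) (re : reg_exp T),
        s =~ re -> s =~ Star re.
    Proof.
      intros T s re H.
      rewrite <- (app_nil_r _ s).
      apply MStarApp.
      - apply H.
      - apply MStar0.
    Qed.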
(Note the use of app_nil_r to change the goal of the theorem to exactly the same shape expected by MStarApp.)
The following lemmas show that the informal matching rules given at the beginning of the chapter can be obtained from the formal inductive definition.
The next lemma is stated in terms of the fold function from the Poly chapter: If ss : list (list T) represents a sequence of strings s1, ..., sn, then fold app ss [] is the result of concatenating them all together.
Prove that reg_exp_of_list satisfies the following specification:
Since the definition of exp_match has a recursive structure, we might expect that proofs involving regular expressions will often require induction on evidence.
For example, suppose that we wanted to prove the following intuitive result: If a regular expression re matches some string s, then all elements of s must occur as character literals somewhere in re.
To state this theorem, we first define a function re_chars that lists all characters that occur in a regular expression:
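A sketch of the function:

    Fixpoint re_chars {T} (re : reg_exp T) : list T :=
      match re with
      | EmptySet => []
      | EmptyStr => []
      | Char x => [x]
      | App re1 re2 => re_chars re1 ++ re_chars re2
      | Union re1 re2 => re_chars re1 ++ re_chars re2
      | Star re => re_chars re
      end.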
We can then phrase our theorem as follows:
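That is (a sketch of the statement; the proof, by induction on the evidence for s =~ re, is discussed next):

    forall T (s : list T) (re : reg_exp T) (x : T),
      s =~ re -> In x s -> In x (re_chars re)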
Something interesting happens in the MStarApp case. We obtain two induction hypotheses: One that applies when x occurs in s1 (which matches re), and a second one that applies when x occurs in s2 (which matches Star re). This is a good illustration of why we need induction on evidence for exp_match, rather than induction on the regular expression re: The latter would only provide an induction hypothesis for strings that match re, which would not allow us to reason about the case In x s2.
Write a recursive function re_not_empty that tests whether a regular expression matches some string. Prove that your function is correct.
One potentially confusing feature of the induction tactic is that it will let you try to perform an induction over a term that isn't sufficiently general. The effect of this is to lose information (much as destruct without an eqn: clause can do), and leave you unable to complete the proof. Here's an example:
Just doing an inversion on H1 won't get us very far in the recursive cases. (Try it!) So we need induction (on evidence!). Here is a naive first attempt:
But now, although we get seven cases (as we would expect from the definition of exp_match), we have lost a very important bit of information from H1: the fact that s1 matched something of the form Star re. This means that we have to give proofs for all seven constructors of this definition, even though all but two of them (MStar0 and MStarApp) are contradictory. We can still get the proof to go through for a few constructors, such as MEmpty...
... but most cases get stuck. For MChar, for instance, we must show that
s2 =~ Char x' -> x' :: s2 =~ Char x',
which is clearly impossible.
The problem is that induction over a Prop hypothesis only works properly with hypotheses that are completely general, i.e., ones in which all the arguments are variables, as opposed to more complex expressions, such as Star re.
(In this respect, induction on evidence behaves more like destruct-without-eqn: than like inversion.)
An awkward way to solve this problem is to manually generalize over the problematic expressions by adding explicit equality hypotheses to the lemma:
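For the Star-composition fact under discussion, the awkward version might read (a sketch):

    forall T (s1 s2 : list T) (re re' : reg_exp T),
      re' = Star re ->
      s1 =~ re' ->
      s2 =~ Star re ->
      s1 ++ s2 =~ Star re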
We can now proceed by performing induction over evidence directly, because the argument to the first hypothesis is sufficiently general, which means that we can discharge most cases by inverting the re' = Star re equality in the context.
This idiom is so common that Coq provides a tactic to automatically generate such equations for us, thus avoiding the need to change the statements of our theorems.
The tactic remember e as x causes Coq to (1) replace all occurrences of the expression e by the variable x, and (2) add an equation x = e to the context. Here's how we can use it to show the above result:
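A sketch of the setup (only the first steps are shown; the lemma name is ours, and the remaining cases are discussed just below):

    Lemma star_app : forall T (s1 s2 : list T) (re : reg_exp T),
        s1 =~ Star re ->
        s2 =~ Star re ->
        s1 ++ s2 =~ Star re.
    Proof.
      intros T s1 s2 re H1.
      remember (Star re) as re'.
      (* The context now contains Heqre' : re' = Star re, and H1 mentions
         only the variable re', so it is general enough to induct on. *)
      generalize dependent s2.
      induction H1.
      (* Seven subgoals are generated; in all but the MStar0 and MStarApp
         cases the specialized Heqre' is contradictory. *)
    Abort.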
We now have Heqre' : re' = Star re.
The Heqre' equation is contradictory in most cases, allowing us to conclude immediately. The interesting cases are those that correspond to Star. Note that the induction hypothesis IH2 in the MStarApp case mentions an additional premise Star re'' = Star re', which results from the equality generated by remember.
The MStar'' lemma below (combined with its converse, the MStar' exercise above), shows that our definition of exp_match for Star is equivalent to the informal one given previously.
One of the first really interesting theorems in the theory of
regular expressions is the so-called pumping lemma, which
states, informally, that any sufficiently long string s matching
a regular expression re can be pumped
by repeating some middle
section of s an arbitrary number of times to produce a new
string also matching re.
To begin, we need to define sufficiently long.
Since we are
working in a constructive logic, we actually need to be able to
calculate, for each regular expression re, the minimum length
for strings s to guarantee pumpability.
Next, it is useful to define an auxiliary function that repeats a string (appends it to itself) some number of times.
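A sketch of such a function:

    Fixpoint napp {T} (n : nat) (l : list T) : list T :=
      match n with
      | 0 => []
      | S n' => l ++ napp n' l
      end.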
Now, the pumping lemma itself says that, if s =~ re and if the length of s is at least the pumping constant of re, then s can be split into three substrings s1 ++ s2 ++ s3 in such a way that s2 can be repeated any number of times and the result, when combined with s1 and s3, will still match re. Since s2 is also guaranteed not to be the empty string, this gives us a (constructive!) way to generate strings matching re that are as long as we like.
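A sketch of the statement (writing pumping_constant for the minimum-length function described above and napp for the repetition helper):

    forall T (re : reg_exp T) s,
      s =~ re ->
      pumping_constant re <= length s ->
      exists s1 s2 s3,
        s = s1 ++ s2 ++ s3 /\
        s2 <> [] /\
        forall m, s1 ++ napp m s2 ++ s3 =~ re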
To streamline the proof (which you are to fill in), the omega tactic, which is enabled by the following Require, is helpful in several places for automatically completing tedious low-level arguments involving equalities or inequalities over natural numbers. We'll return to omega in a later chapter, but feel free to experiment with it now if you like. The first case of the induction gives an example of how it is used.
We've seen in the Logic chapter that we often need to relate boolean computations to statements in Prop. But performing this conversion as we did it there can result in tedious proof scripts. Consider the proof of the following theorem:
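The theorem and a proof in this style might look as follows (a sketch; eqb_eq is the lemma from the Logic chapter):

    Theorem filter_not_empty_In : forall n l,
        filter (fun x => n =? x) l <> [] ->
        In n l.
    Proof.
      intros n l. induction l as [| m l' IHl'].
      - (* l = [] *)
        simpl. intros H. apply H. reflexivity.
      - (* l = m :: l' *)
        simpl. destruct (n =? m) eqn:H.
        + (* n =? m = true *)
          intros _. apply eqb_eq in H. rewrite H.
          left. reflexivity.
        + (* n =? m = false *)
          intros H'. right. apply IHl'. apply H'.
    Qed.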
In the first branch after destruct, we explicitly apply the eqb_eq lemma to the equation generated by destructing n =? m, to convert the assumption n =? m = true into the assumption n = m; we then have to rewrite using this assumption to complete the case.
We can streamline this by defining an inductive proposition that yields a better case-analysis principle for n =? m. Instead of generating an equation such as (n =? m) = true, which is generally not directly useful, this principle gives us right away the assumption we really need: n = m.
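The definition might look like this (a sketch):

    Inductive reflect (P : Prop) : bool -> Prop :=
    | ReflectT (H : P) : reflect P true
    | ReflectF (H : ~ P) : reflect P false.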
The reflect property takes two arguments: a proposition P and a boolean b. Intuitively, it states that the property P is reflected in (i.e., equivalent to) the boolean b: that is, P holds if and only if b = true. To see this, notice that, by definition, the only way we can produce evidence for reflect P true is by showing P and then using the ReflectT constructor. If we invert this statement, this means that it should be possible to extract evidence for P from a proof of reflect P true. Similarly, the only way to show reflect P false is by combining evidence for ~ P with the ReflectF constructor.
It is easy to formalize this intuition and show that the statements P <-> b = true and reflect P b are indeed equivalent. First, the left-to-right implication:
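One way the proof might go (a sketch; the lemma name is ours):

    Theorem iff_reflect : forall P b, (P <-> b = true) -> reflect P b.
    Proof.
      intros P b H. destruct b.
      - apply ReflectT. apply H. reflexivity.
      - apply ReflectF. intros HP. apply H in HP. discriminate HP.
    Qed.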
Now you prove the right-to-left implication:
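That is, show (a sketch of the statement):

    forall P b, reflect P b -> (P <-> b = true)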
The advantage of reflect over the normal if and only if
connective is that, by destructing a hypothesis or lemma of the
form reflect P b, we can perform case analysis on b while at
the same time generating appropriate hypotheses in the two
branches (P in the first subgoal and ~ P in the second).
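For equality of naturals, the connection might be established as follows (a sketch; eqbP is the name referred to below, and iff_reflect is the left-to-right lemma sketched above):

    Lemma eqbP : forall n m, reflect (n = m) (n =? m).
    Proof.
      intros n m. apply iff_reflect. split.
      - intros H. apply eqb_eq. apply H.
      - intros H. apply eqb_eq. apply H.
    Qed.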
A smoother proof of filter_not_empty_In now goes as follows. Notice how the calls to destruct and apply are combined into a single call to destruct.
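A sketch of the streamlined proof (primed here only to keep it apart from the earlier version):

    Theorem filter_not_empty_In' : forall n l,
        filter (fun x => n =? x) l <> [] ->
        In n l.
    Proof.
      intros n l. induction l as [| m l' IHl'].
      - (* l = [] *)
        simpl. intros H. apply H. reflexivity.
      - (* l = m :: l' *)
        simpl. destruct (eqbP n m) as [H | H].
        + (* n = m *)
          intros _. rewrite H. left. reflexivity.
        + (* n <> m *)
          intros H'. right. apply IHl'. apply H'.
    Qed.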
(To see this clearly, look at the two proofs of filter_not_empty_In with Coq and observe the differences in proof state at the beginning of the first case of the destruct.)
Use eqbP as above to prove the following:
This small example shows how reflection gives us a small gain in convenience; in larger developments, using reflect consistently can often lead to noticeably shorter and clearer proof scripts. We'll see many more examples in later chapters and in Programming Language Foundations.
The use of the reflect property has been popularized by SSReflect, a Coq library that has been used to formalize important results in mathematics, including the 4-color theorem and the Feit-Thompson theorem. The name SSReflect stands for small-scale reflection, i.e., the pervasive use of reflection to simplify small proof steps with boolean computations.
Formulating inductive definitions of properties is an important skill you'll need in this course. Try to solve this exercise without any help at all.
We say that a list stutters
if it repeats the same element
consecutively. (This is different from not containing duplicates:
the sequence [1;4;1] repeats the element 1 but does not
stutter.) The property nostutter mylist
means that mylist
does not stutter. Formulate an inductive definition for
nostutter.
Make sure each of these tests succeeds, but feel free to change the suggested proof (in comments) if the given one doesn't work for you. Your definition might be different from ours and still be correct, in which case the examples might need a different proof. (You'll notice that the suggested proofs use a number of tactics we haven't talked about, to make them more robust to different possible ways of defining nostutter. You can probably just uncomment and use them as-is, but you can also prove each example with more basic tactics.)
Let's prove that our definition of filter from the Poly chapter matches an abstract specification. Here is the specification, written out informally in English:
A list l is an in-order merge
of l1 and l2 if it contains
all the same elements as l1 and l2, in the same order as l1
and l2, but possibly interleaved. For example,
[1;4;6;2;3]
is an in-order merge of
[1;6;2]
and
[4;3].
Now, suppose we have a set X, a function test: X->bool, and a list l of type list X. Suppose further that l is an in-order merge of two lists, l1 and l2, such that every item in l1 satisfies test and no item in l2 satisfies test. Then filter test l = l1.
Translate this specification into a Coq theorem and prove it. (You'll need to begin by defining what it means for one list to be a merge of two others. Do this with an inductive relation, not a Fixpoint.)
A different way to characterize the behavior of filter goes like this: Among all subsequences of l with the property that test evaluates to true on all their members, filter test l is the longest. Formalize this claim and prove it.
A palindrome is a sequence that reads the same backwards as forwards. Define an inductive proposition pal that captures what it means for a list to be a palindrome. (Hint: You'll need three cases. Your definition should be based on the structure of the list; just having a single constructor like
    c : forall l, l = rev l -> pal l
may seem obvious, but will not work very well.) Then prove that
    forall l, pal (l ++ rev l)
and that
    forall l, pal l -> l = rev l.
Again, the converse direction is significantly more difficult, due to the lack of evidence. Using your definition of pal from the previous exercise, prove that
forall l, l = rev l -> pal l.
Recall the definition of the In property from the Logic chapter, which asserts that a value x appears at least once in a list l:
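For reference, that definition was (roughly):

    Fixpoint In {A : Type} (x : A) (l : list A) : Prop :=
      match l with
      | [] => False
      | x' :: l' => x' = x \/ In x l'
      end.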
Your first task is to use In to define a proposition disjoint X l1 l2, which should be provable exactly when l1 and l2 are lists (with elements of type X) that have no elements in common.
Next, use In to define an inductive proposition NoDup X l, which should be provable exactly when l is a list (with elements of type X) where every member is different from every other. For example, NoDup nat [1;2;3;4] and NoDup bool [] should be provable, while NoDup nat [1;2;1] and NoDup bool [true;true] should not be.
Finally, state and prove one or more interesting theorems relating disjoint, NoDup and ++ (list append).
The pigeonhole principle states a basic fact about counting: if we distribute more than n items into n pigeonholes, some pigeonhole must contain at least two items. As often happens, this apparently trivial fact about numbers requires non-trivial machinery to prove, but we now have enough...
First prove an easy useful lemma.
Now define a property repeats such that repeats X l asserts that l contains at least one repeated element (of type X).
Now, here's a way to formalize the pigeonhole principle. Suppose list l2 represents a list of pigeonhole labels, and list l1 represents the labels assigned to a list of items. If there are more items than labels, at least two items must have the same label -- i.e., list l1 must contain repeats.
This proof is much easier if you use the excluded_middle hypothesis to show that In is decidable, i.e., forall x l, (In x l) \/ ~ (In x l). However, it is also possible to make the proof go through without assuming that In is decidable; if you manage to do this, you will not need the excluded_middle hypothesis.
We have now defined a match relation over regular expressions and polymorphic lists. We can use such a definition to manually prove that a given regex matches a given string, but it does not give us a program that we can run to determine a match automatically.
It would be reasonable to hope that we can translate the definitions of the inductive rules for constructing evidence of the match relation into cases of a recursive function that reflects the relation by recursing on a given regex. However, it does not seem straightforward to define such a function in which the given regex is a recursion variable recognized by Coq. As a result, Coq will not accept that the function always terminates.
Heavily-optimized regex matchers match a regex by translating a given regex into a state machine and determining if the state machine accepts a given string. However, regex matching can also be implemented using an algorithm that operates purely on strings and regexes without defining and maintaining additional datatypes, such as state machines. We'll implement such an algorithm, and verify that its value reflects the match relation.
We will implement a regex matcher that matches strings represented as lists of ASCII characters:
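That is (a sketch):

    Require Import Coq.Strings.Ascii.

    Definition string := list ascii.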
The Coq standard library contains a distinct inductive definition of strings of ASCII characters. However, we will use the above definition of strings as lists of ASCII characters in order to apply the existing definition of the match relation.
We could also define a regex matcher over polymorphic lists, not lists of ASCII characters specifically. The matching algorithm that we will implement needs to be able to test equality of elements in a given list, and thus needs to be given an equality-testing function. Generalizing the definitions, theorems, and proofs that we define for such a setting is a bit tedious, but workable.
The proof of correctness of the regex matcher will combine properties of the regex-matching function with properties of the match relation that do not depend on the matching function. We'll go ahead and prove the latter class of properties now. Most of them have straightforward proofs, which have been given to you, although there are a few key lemmas that are left for you to prove.
Each provable Prop is equivalent to True.
Each Prop whose negation is provable is equivalent to False.
EmptySet matches no string.
EmptyStr only matches the empty string.
EmptyStr matches no non-empty string.
Char a matches no string that starts with a non-a character.
If Char a matches a non-empty string, then the string's tail is empty.
App re0 re1 matches string s iff s = s0 ++ s1, where s0 matches re0 and s1 matches re1.
App re0 re1 matches a::s iff re0 matches the empty string and a::s matches re1 or s=s0++s1, where a::s0 matches re0 and s1 matches re1.
Even though this is a property of purely the match relation, it is a critical observation behind the design of our regex matcher. So (1) take time to understand it, (2) prove it, and (3) look for how you'll use it later.
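Spelled out as a formula, the observation reads (a sketch):

    forall (a : ascii) s re0 re1,
      a :: s =~ App re0 re1 <->
      ([] =~ re0 /\ a :: s =~ re1) \/
      (exists s0 s1, s = s0 ++ s1 /\ a :: s0 =~ re0 /\ s1 =~ re1)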
s matches Union re0 re1 iff s matches re0 or s matches re1.
a::s matches Star re iff s = s0 ++ s1, where a::s0 matches re and s1 matches Star re. Like app_ne, this observation is critical, so understand it, prove it, and keep it in mind.
Hint: you'll need to perform induction. There are quite a few reasonable candidates for Prop's to prove by induction. The only one that will work is splitting the iff into two implications and proving one by induction on the evidence for a :: s =~ Star re. The other implication can be proved without induction.
In order to prove the right property by induction, you'll need to rephrase a :: s =~ Star re to be a Prop over general variables, using the remember tactic.
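For reference, the property described above, spelled out as a formula (a sketch):

    forall (a : ascii) s re,
      a :: s =~ Star re <->
      (exists s0 s1, s = s0 ++ s1 /\ a :: s0 =~ re /\ s1 =~ Star re)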
The definition of our regex matcher will include two fixpoint functions. The first function, given regex re, will evaluate to a value that reflects whether re matches the empty string. The function will satisfy the following property:
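Stated with reflect, the property might read (a sketch):

    forall re : reg_exp ascii, reflect ([] =~ re) (match_eps re)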
Complete the definition of match_eps so that it tests if a given regex matches the empty string:
Now, prove that match_eps indeed tests if a given regex matches the empty string. (Hint: You'll want to use the reflection lemmas ReflectT and ReflectF.)
We'll define other functions that use match_eps. However, the only property of match_eps that you'll need to use in all proofs over these functions is match_eps_refl.
The key operation that will be performed by our regex matcher will be to iteratively construct a sequence of regex derivatives. For each character a and regex re, the derivative of re on a is a regex that matches all suffixes of strings matched by re that start with a. I.e., re' is a derivative of re on a if they satisfy the following relation:
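As a formula (a sketch of the relation just described):

    forall s, a :: s =~ re <-> s =~ re'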
A function d derives strings if, given character a and regex re, it evaluates to the derivative of re on a. I.e., d satisfies the following property:
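That is, writing d for the function (a sketch):

    forall (a : ascii) (re : reg_exp ascii) s,
      a :: s =~ re <-> s =~ d a re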
Define derive so that it derives strings. One natural implementation uses match_eps in some cases to determine if key regex's match the empty string.
The derive function should pass the following tests. Each test establishes an equality between an expression that will be evaluated by our regex matcher and the final value that must be returned by the regex matcher. Each test is annotated with the match fact that it reflects.
"c" =~ EmptySet:
"c" =~ Char c:
"c" =~ Char d:
"c" =~ App (Char c) EmptyStr:
"c" =~ App EmptyStr (Char c):
"c" =~ Star c:
"cd" =~ App (Char c) (Char d):
"cd" =~ App (Char d) (Char c):
Prove that derive in fact always derives strings.
Hint: one proof performs induction on re, although you'll need to carefully choose the property that you prove by induction by generalizing the appropriate terms.
Hint: if your definition of derive applies match_eps to a particular regex re, then a natural proof will apply match_eps_refl to re and destruct the result to generate cases with assumptions that the re does or does not match the empty string.
Hint: You can save quite a bit of work by using lemmas proved above. In particular, to prove many cases of the induction, you can rewrite a Prop over a complicated regex (e.g., s =~ Union re0 re1) to a Boolean combination of Prop's over simple regex's (e.g., s =~ re0 \/ s =~ re1) using lemmas given above that are logical equivalences. You can then reason about these Prop's naturally using intro and destruct.
We'll define the regex matcher using derive. However, the only property of derive that you'll need to use in all proofs of properties of the matcher is derive_corr.
A function m matches regexes if, given string s and regex re, it evaluates to a value that reflects whether s is matched by re. I.e., m satisfies the following property:
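That is (a sketch):

    forall (s : string) (re : reg_exp ascii),
      reflect (s =~ re) (m s re)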
Complete the definition of regex_match so that it matches regexes.
Finally, prove that regex_match in fact matches regexes.
Hint: if your definition of regex_match applies match_eps to regex re, then a natural proof applies match_eps_refl to re and destructs the result to generate cases in which you may assume that re does or does not match the empty string.
Hint: if your definition of regex_match applies derive to character x and regex re, then a natural proof applies derive_corr to x and re to prove that x :: s =~ re given s =~ derive x re, and vice versa.