Smallstep: Small-step Operational Semantics

The evaluators we have seen so far (for aexps, bexps, commands, ...) have been formulated in a big-step style: they specify how a given expression can be evaluated to its final value (or a command plus a store to a final store) all in one big step.

This style is simple and natural for many purposes -- indeed, Gilles Kahn, who popularized it, called it natural semantics. But there are some things it does not do well. In particular, it does not give us a natural way of talking about concurrent programming languages, where the semantics of a program -- i.e., the essence of how it behaves -- is not just which input states get mapped to which output states, but also includes the intermediate states that it passes through along the way, since these states can also be observed by concurrently executing code.

Another shortcoming of the big-step style is more technical, but critical in many situations. Suppose we want to define a variant of Imp where variables could hold either numbers or lists of numbers. In the syntax of this extended language, it will be possible to write strange expressions like 2 + nil, and our semantics for arithmetic expressions will then need to say something about how such expressions behave. One possibility is to maintain the convention that every arithmetic expression evaluates to some number by choosing some way of viewing a list as a number -- e.g., by specifying that a list should be interpreted as 0 when it occurs in a context expecting a number. But this is really a bit of a hack.

A much more natural approach is simply to say that the behavior of an expression like 2+nil is undefined -- i.e., it doesn't evaluate to any result at all. And we can easily do this: we just have to formulate aeval and beval as Inductive propositions rather than Fixpoints, so that we can make them partial functions instead of total ones.

Now, however, we encounter a serious deficiency. In this language, a command might fail to map a given starting state to any ending state for two quite different reasons: either because the execution gets into an infinite loop or because, at some point, the program tries to do an operation that makes no sense, such as adding a number to a list, so that none of the evaluation rules can be applied.

These two outcomes -- nontermination vs. getting stuck in an erroneous configuration -- should not be confused. In particular, we want to allow the first (permitting the possibility of infinite loops is the price we pay for the convenience of programming with general looping constructs like while) but prevent the second (which is just wrong), for example by adding some form of typechecking to the language. Indeed, this will be a major topic for the rest of the course. As a first step, we need a way of presenting the semantics that allows us to distinguish nontermination from erroneous stuck states.

So, for lots of reasons, we'd often like to have a finer-grained way of defining and reasoning about program behaviors. This is the topic of the present chapter. Our goal is to replace the big-step eval relation with a small-step relation that specifies, for a given program, how the atomic steps of computation are performed.

A Toy Language

To save space in the discussion, let's go back to an incredibly simple language containing just constants and addition. (We use single letters -- C and P (for Constant and Plus) -- as constructor names, for brevity.) At the end of the chapter, we'll see how to apply the same techniques to the full Imp language.

Here is a standard evaluator for this language, written in the big-step style that we've been using up to this point.
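Here is a sketch, in Coq, of how the syntax and this big-step evaluator might look; it follows the constructor names C and P from the text and uses the name evalF, which reappears later in this chapter:

Inductive tm : Type :=
  | C : nat -> tm          (* Constant *)
  | P : tm -> tm -> tm.    (* Plus *)

Fixpoint evalF (t : tm) : nat :=
  match t with
  | C n => n
  | P t1 t2 => evalF t1 + evalF t2
  end.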

Here is the same evaluator, written in exactly the same style, but formulated as an inductively defined relation. We use the notation t ==> n for t evaluates to n.


                         ---------                        (E_Const)
                         C n ==> n


                   t1 ==> n1    t2 ==> n2
                  ------------------------                 (E_Plus)
                   P t1 t2 ==> n1 + n2
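
As a Coq inductive relation, this might be written roughly as follows (a sketch consistent with the rules above; the notation level is illustrative):

Reserved Notation " t '==>' n " (at level 50, left associativity).

Inductive eval : tm -> nat -> Prop :=
  | E_Const : forall n,
      C n ==> n
  | E_Plus : forall t1 t2 n1 n2,
      t1 ==> n1 ->
      t2 ==> n2 ->
      P t1 t2 ==> (n1 + n2)

  where " t '==>' n " := (eval t n).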

Now, here is the corresponding small-step evaluation relation.


                 -------------------------------          (ST_PlusConstConst)
                 P (C n1) (C n2) --> C (n1 + n2)


                          t1 --> t1'
                     --------------------                  (ST_Plus1)
                     P t1 t2 --> P t1' t2


                          t2 --> t2'
                -----------------------------              (ST_Plus2)
                P (C n1) t2 --> P (C n1) t2'
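
A sketch of the corresponding Coq definition, following the three rules above:

Reserved Notation " t '-->' t' " (at level 40).

Inductive step : tm -> tm -> Prop :=
  | ST_PlusConstConst : forall n1 n2,
      P (C n1) (C n2) --> C (n1 + n2)
  | ST_Plus1 : forall t1 t1' t2,
      t1 --> t1' ->
      P t1 t2 --> P t1' t2
  | ST_Plus2 : forall n1 t2 t2',
      t2 --> t2' ->
      P (C n1) t2 --> P (C n1) t2'

  where " t '-->' t' " := (step t t').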

Things to notice:

  • We are defining just a single reduction step, in which one P node is replaced by its value.

  • Each step finds the leftmost P node that is ready to go (both of its operands are constants) and rewrites it in place. The first rule tells how to rewrite this P node itself; the other two rules tell how to find it.

  • A term that is just a constant cannot take a step.

Let's pause and check a couple of examples of reasoning with the step relation...

If t1 can take a step to t1', then P t1 t2 steps to P t1' t2:
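For instance, a proof of one concrete instance of this fact might look like the following (the particular constants are chosen just for illustration):

Example test_step_1 :
      P
        (P (C 1) (C 3))
        (P (C 2) (C 4))
  -->
      P
        (C (1 + 3))
        (P (C 2) (C 4)).
Proof.
  apply ST_Plus1. apply ST_PlusConstConst.
Qed.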

Exercise: 1 star, standard (test_step_2)

Right-hand sides of sums can take a step only when the left-hand side is finished: if t2 can take a step to t2', then P (C n) t2 steps to P (C n) t2':

Relations

We will be working with several different single-step relations, so it is helpful to generalize a bit and state a few definitions and theorems about relations in general. (The optional chapter Rel.v develops some of these ideas in a bit more detail; it may be useful if the treatment here is too dense.)

A binary relation on a set X is a family of propositions parameterized by two elements of X -- i.e., a proposition about pairs of elements of X.
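In Coq, this can be captured by a one-line definition; the name relation is the one used by the generic definitions below:

Definition relation (X : Type) := X -> X -> Prop.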

Our main examples of such relations in this chapter will be the single-step reduction relation, -->, and its multi-step variant, -->* (defined below), but there are many other examples -- e.g., the equals, less than, less than or equal to, and is the square of relations on numbers, and the prefix of relation on lists and strings.

One simple property of the --> relation is that, like the big-step evaluation relation for Imp, it is deterministic.

Theorem: For each t, there is at most one t' such that t steps to t' (t --> t' is provable).

Proof sketch: We show that if x steps to both y1 and y2, then y1 and y2 are equal, by induction on a derivation of step x y1. There are several cases to consider, depending on the last rule used in this derivation and the last rule in the given derivation of step x y2.

  • If both are ST_PlusConstConst, the result is immediate.

  • The cases when both derivations end with ST_Plus1 or ST_Plus2 follow by the induction hypothesis.

  • It cannot happen that one is ST_PlusConstConst and the other is ST_Plus1 or ST_Plus2, since this would imply that x has the form P t1 t2 where both t1 and t2 are constants (by ST_PlusConstConst) and one of t1 or t2 has the form P _.

  • Similarly, it cannot happen that one is ST_Plus1 and the other is ST_Plus2, since this would imply that x has the form P t1 t2 where t1 has both the form P t11 t12 and the form C n.

Formally:
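Here is a sketch of the formal statement and proof under the definitions above; deterministic is the generic notion from the Rel chapter, and the proof follows the informal argument, using inversion to dismiss the impossible combinations of rules:

Definition deterministic {X : Type} (R : relation X) :=
  forall x y1 y2 : X, R x y1 -> R x y2 -> y1 = y2.

Theorem step_deterministic :
  deterministic step.
Proof.
  unfold deterministic. intros x y1 y2 Hy1 Hy2.
  generalize dependent y2.
  induction Hy1 as [n1 n2 | t1 t1' t2 Hs IH | n1 t2 t2' Hs IH];
    intros y2 Hy2; inversion Hy2; subst;
    (* subcases claiming that a constant takes a step are impossible *)
    try (match goal with Hc : C _ --> _ |- _ => inversion Hc end).
  - (* both derivations end with ST_PlusConstConst *) reflexivity.
  - (* both end with ST_Plus1 *) f_equal. apply IH. assumption.
  - (* both end with ST_Plus2 *) f_equal. apply IH. assumption.
Qed.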

There is some annoying repetition in this proof. Each use of inversion Hy2 results in three subcases, only one of which is relevant (the one that matches the current case in the induction on Hy1). The other two subcases need to be dismissed by finding the contradiction among the hypotheses and doing inversion on it.

The following custom tactic, called solve_by_inverts, can be helpful in such cases. It will solve the goal if it can be solved by inverting some hypothesis; otherwise, it fails.

The details of how this works are not important for now, but it illustrates the power of Coq's Ltac language for programmatically defining special-purpose tactics. It looks through the current proof state for a hypothesis H (the first match) of type Prop (the second match) such that performing inversion on H (followed by a recursive invocation of the same tactic, if its argument n is greater than one) completely solves the current goal. If no such hypothesis exists, it fails.

We will usually want to call solve_by_inverts with argument 1 (especially as larger arguments can lead to very slow proof checking), so we define solve_by_invert as a shorthand for this case.
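Here is a sketch of how such tactics might be written, following the description above (the exact definition in the text may differ in small details):

Ltac solve_by_inverts n :=
  match goal with
  | H : ?T |- _ =>
      match type of T with
      | Prop =>
          solve [ inversion H;
                  match n with
                  | S (S ?n') => subst; solve_by_inverts (S n')
                  end ]
      end
  end.

Ltac solve_by_invert := solve_by_inverts 1.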

Let's see how a proof of the previous theorem can be simplified using this tactic...
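For example, the determinism proof might be compressed along the following lines (a sketch; the repeated contradictory subcases are now dismissed uniformly by solve_by_invert):

Theorem step_deterministic_alt : deterministic step.
Proof.
  intros x y1 y2 Hy1 Hy2.
  generalize dependent y2.
  induction Hy1; intros y2 Hy2;
    inversion Hy2; subst; try solve_by_invert.
  - (* ST_PlusConstConst *) reflexivity.
  - (* ST_Plus1 *) f_equal. apply IHHy1. assumption.
  - (* ST_Plus2 *) f_equal. apply IHHy1. assumption.
Qed.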

Values

Next, it will be useful to slightly reformulate the definition of single-step reduction by stating it in terms of values.

It can be useful to think of the --> relation as defining an abstract machine:

  • At any moment, the state of the machine is a term.

  • A step of the machine is an atomic unit of computation -- here, a single add operation.

  • The halting states of the machine are ones where there is no more computation to be done.

We can then execute a term t as follows:

  • Take t as the starting state of the machine.

  • Repeatedly use the --> relation to find a sequence of machine states, starting with t, where each state steps to the next.

  • When no more reduction is possible, read out the final state of the machine as the result of execution.

Intuitively, it is clear that the final states of the machine are always terms of the form C n for some n. We call such terms values.
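Formally, a sketch of the definition:

Inductive value : tm -> Prop :=
  | v_const : forall n, value (C n).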

Having introduced the idea of values, we can use it in the definition of the --> relation to write the ST_Plus2 rule in a slightly more elegant way:


                 -------------------------------          (ST_PlusConstConst)
                 P (C n1) (C n2) --> C (n1 + n2)


                          t1 --> t1'
                     --------------------                  (ST_Plus1)
                     P t1 t2 --> P t1' t2


                    value v1     t2 --> t2'
                   -------------------------               (ST_Plus2)
                     P v1 t2 --> P v1 t2'

Again, the variable names here carry important information: by convention, v1 ranges only over values, while t1 and t2 range over arbitrary terms. (Given this convention, the explicit value hypothesis is arguably redundant. We'll keep it for now, to maintain a close correspondence between the informal and Coq versions of the rules, but later on we'll drop it in informal rules for brevity.)

Here are the formal rules:
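A sketch of the Coq version (note that this redefines step and the --> notation; in a single file the earlier version would have to live in its own module or be renamed):

Inductive step : tm -> tm -> Prop :=
  | ST_PlusConstConst : forall n1 n2,
      P (C n1) (C n2) --> C (n1 + n2)
  | ST_Plus1 : forall t1 t1' t2,
      t1 --> t1' ->
      P t1 t2 --> P t1' t2
  | ST_Plus2 : forall v1 t2 t2',
      value v1 ->
      t2 --> t2' ->
      P v1 t2 --> P v1 t2'

  where " t '-->' t' " := (step t t').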

Exercise: 3 stars, standard, recommended (redo_determinism)

As a sanity check on this change, let's re-verify determinism. Here's an informal proof:

Proof sketch: We must show that if x steps to both y1 and y2, then y1 and y2 are equal. Consider the final rules used in the derivations of step x y1 and step x y2.

  • If both are ST_PlusConstConst, the result is immediate.

  • It cannot happen that one is ST_PlusConstConst and the other is ST_Plus1 or ST_Plus2, since this would imply that x has the form P t1 t2 where both t1 and t2 are constants (by ST_PlusConstConst) and one of t1 or t2 has the form P _.

  • Similarly, it cannot happen that one is ST_Plus1 and the other is ST_Plus2, since this would imply that x has the form P t1 t2 where t1 both has the form P t11 t12 and is a value (hence has the form C n).

  • The cases when both derivations end with ST_Plus1 or ST_Plus2 follow by the induction hypothesis.

Most of this proof is the same as the one above. But to get maximum benefit from the exercise you should try to write your formal version from scratch and just use the earlier one if you get stuck.

Strong Progress and Normal Forms

The definition of single-step reduction for our toy language is fairly simple, but for a larger language it would be easy to forget one of the rules and accidentally create a situation where some term cannot take a step even though it has not been completely reduced to a value. The following theorem shows that we did not, in fact, make such a mistake here.

Theorem (Strong Progress): If t is a term, then either t is a value or else there exists a term t' such that t --> t'.

Proof: By induction on t.

  • Suppose t = C n. Then t is a value.

  • Suppose t = P t1 t2, where (by the IH) t1 either is a value or can step to some t1', and where t2 is either a value or can step to some t2'. We must show P t1 t2 is either a value or steps to some t'.

    • If t1 and t2 are both values, then t can take a step, by ST_PlusConstConst.

    • If t1 is a value and t2 can take a step, then so can t, by ST_Plus2.

    • If t1 can take a step, then so can t, by ST_Plus1.

Or, formally:
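A sketch of the formal statement and proof, following the informal argument:

Theorem strong_progress : forall t,
  value t \/ (exists t', t --> t').
Proof.
  induction t as [n | t1 IH1 t2 IH2].
  - (* t = C n: a value *)
    left. apply v_const.
  - (* t = P t1 t2: always able to step *)
    right. destruct IH1 as [V1 | [t1' H1]].
    + destruct IH2 as [V2 | [t2' H2]].
      * (* both operands are values, hence constants *)
        destruct V1 as [n1]. destruct V2 as [n2].
        exists (C (n1 + n2)). apply ST_PlusConstConst.
      * (* the right operand can take a step *)
        exists (P t1 t2'). apply ST_Plus2; assumption.
    + (* the left operand can take a step *)
      exists (P t1' t2). apply ST_Plus1. assumption.
Qed.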

This important property is called strong progress, because every term either is a value or can make progress by stepping to some other term. (The qualifier strong distinguishes it from a more refined version that we'll see in later chapters, called just progress.)

The idea of making progress can be extended to tell us something interesting about values: in this language, values are exactly the terms that cannot make progress in this sense.

To state this observation formally, let's begin by giving a name to terms that cannot make progress. We'll call them normal forms.
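A sketch of the generic definition (it uses the relation type introduced above):

Definition normal_form {X : Type}
              (R : relation X) (t : X) : Prop :=
  ~ exists t', R t t'.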

Note that this definition specifies what it is to be a normal form for an arbitrary relation R over an arbitrary set X, not just for the particular single-step reduction relation over terms that we are interested in at the moment. We'll re-use the same terminology for talking about other relations later in the course.

We can use this terminology to generalize the observation we made in the strong progress theorem: in this language, normal forms and values are actually the same thing.
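A sketch of how this might be stated and proved; the names value_is_nf, nf_is_value, and nf_same_as_value are the ones referred to later in the chapter:

Lemma value_is_nf : forall v,
  value v -> normal_form step v.
Proof.
  unfold normal_form. intros v H. destruct H.
  intros contra. destruct contra as [t' ST]. inversion ST.
Qed.

Lemma nf_is_value : forall t,
  normal_form step t -> value t.
Proof.
  unfold normal_form. intros t H.
  assert (G : value t \/ exists t', t --> t').
  { apply strong_progress. }
  destruct G as [G | G].
  - assumption.
  - contradiction.
Qed.

Corollary nf_same_as_value : forall t,
  normal_form step t <-> value t.
Proof.
  split.
  - apply nf_is_value.
  - apply value_is_nf.
Qed.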

Why is this interesting?

Because value is a syntactic concept -- it is defined by looking at the form of a term -- while normal_form is a semantic one -- it is defined by looking at how the term steps.

It is not obvious that these concepts should coincide!

Indeed, we could easily have written the definitions incorrectly so that they would not coincide.

Exercise: 3 stars, standard, optional (value_not_same_as_normal_form1)

We might, for example, wrongly define value so that it includes some terms that are not finished reducing.

(Even if you don't work this exercise and the following ones in Coq, make sure you can think of an example of such a term.)

Exercise: 2 stars, standard, optional (value_not_same_as_normal_form2)

Alternatively, we might wrongly define step so that it permits something designated as a value to reduce further.

Exercise: 3 stars, standard, optional (value_not_same_as_normal_form3)

Finally, we might define value and step so that there is some term that is not a value but that cannot take a step in the step relation. Such terms are said to be stuck. In this case this is caused by a mistake in the semantics, but we will also see situations where, even in a correct language definition, it makes sense to allow some terms to be stuck.

(Note that ST_Plus2 is missing.)

Additional Exercises

Here is another very simple language whose terms, instead of being just addition expressions and numbers, are just the booleans true and false and a conditional expression...
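A sketch of such a language; since it reuses the names tm, value, and step, it is wrapped here in a module (the module name is just for illustration):

Module TempBool.

Inductive tm : Type :=
  | tru : tm
  | fls : tm
  | test : tm -> tm -> tm -> tm.

Inductive value : tm -> Prop :=
  | v_tru : value tru
  | v_fls : value fls.

Inductive step : tm -> tm -> Prop :=
  | ST_IfTrue : forall t1 t2,
      test tru t1 t2 --> t1
  | ST_IfFalse : forall t1 t2,
      test fls t1 t2 --> t2
  | ST_If : forall t1 t1' t2 t3,
      t1 --> t1' ->
      test t1 t2 t3 --> test t1' t2 t3

  where " t '-->' t' " := (step t t').

End TempBool.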

Exercise: 1 star, standard (smallstep_bools)

Which of the following propositions are provable? (This is just a thought exercise, but for an extra challenge feel free to prove your answers in Coq.)

Exercise: 3 stars, standard, optional (progress_bool)

Just as we proved a progress theorem for plus expressions, we can do so for boolean expressions, as well.

Exercise: 2 stars, standard, optional (step_deterministic)

Exercise: 2 stars, standard (smallstep_bool_shortcut)

Suppose we want to add a short circuit to the step relation for boolean expressions, so that it can recognize when the then and else branches of a conditional are the same value (either tru or fls) and reduce the whole conditional to this value in a single step, even if the guard has not yet been reduced to a value. For example, we would like this proposition to be provable:

test (test tru tru tru) fls fls --> fls.

Write an extra clause for the step relation that achieves this effect and prove bool_step_prop4.

Exercise: 3 stars, standard, optional (properties_of_altered_step)

It can be shown that the determinism and strong progress theorems for the step relation in the lecture notes also hold for the definition of step given above. After we add the clause ST_ShortCircuit...

  • Is the step relation still deterministic? Write yes or no and briefly (1 sentence) explain your answer.

    Optional: prove your answer correct in Coq.

Multi-Step Reduction

We've been working so far with the single-step reduction relation -->, which formalizes the individual steps of an abstract machine for executing programs.

We can use the same machine to reduce programs to completion -- to find out what final result they yield. This can be formalized as follows:

  • First, we define a multi-step reduction relation -->*, which relates terms t and t' if t can reach t' by any number (including zero) of single reduction steps.

  • Then we define a result of a term t as a normal form that t can reach by multi-step reduction.

    Since we'll want to reuse the idea of multi-step reduction many times, let's take a little extra trouble and define it generically.

    Given a relation R (which will be --> for present purposes), we define a relation multi R, called the multi-step closure of R as follows.
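    In Coq, the generic definition might be written as follows (a sketch, using the relation type introduced earlier):

Inductive multi {X : Type} (R : relation X) : relation X :=
  | multi_refl : forall (x : X), multi R x x
  | multi_step : forall (x y z : X),
      R x y ->
      multi R y z ->
      multi R x z.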

    (In the Rel chapter of Logical Foundations and the Coq standard library, this relation is called clos_refl_trans_1n. We give it a shorter name here for the sake of readability.)

    The effect of this definition is that multi R relates two elements x and y if

    • x = y, or
    • R x y, or
    • there is some nonempty sequence z1, z2, ..., zn such that

      R x z1,  R z1 z2,  ...,  R zn y.

    Thus, if R describes a single step of computation, then z1 ... zn is the sequence of intermediate steps of computation between x and y.

    We write -->* for the multi step relation on terms.
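    A sketch of the notation declaration (the precedence level is illustrative):

Notation " t '-->*' t' " := (multi step t t') (at level 40).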

    The relation multi R has several crucial properties.

    First, it is obviously reflexive (that is, forall x, multi R x x). In the case of the -->* (i.e., multi step) relation, the intuition is that a term can execute to itself by taking zero steps of execution.

    Second, it contains R -- that is, single-step executions are a particular case of multi-step executions. (It is this fact that justifies the word closure in the term multi-step closure of R.)

    Third, multi R is transitive.

    In particular, for the multi step relation on terms, if t1 -->* t2 and t2 -->* t3, then t1 -->* t3.
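    Sketches of the corresponding statements and proofs:

Theorem multi_R : forall (X : Type) (R : relation X) (x y : X),
    R x y -> (multi R) x y.
Proof.
  intros X R x y H.
  apply multi_step with y.
  - apply H.
  - apply multi_refl.
Qed.

Theorem multi_trans :
  forall (X : Type) (R : relation X) (x y z : X),
      multi R x y ->
      multi R y z ->
      multi R x z.
Proof.
  intros X R x y z G.
  induction G as [a | a b c Hab Hbc IH]; intro H.
  - (* zero steps from a: the result is just H *)
    assumption.
  - (* one step R a b, then b reaches c: use the IH on the tail *)
    apply multi_step with b.
    + assumption.
    + apply IH. assumption.
Qed.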

    Examples

    Here's a specific instance of the multi step relation:
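    For example, a concrete derivation might be proved step by step like this (the constants are just for illustration):

Lemma test_multistep_1 :
      P (P (C 0) (C 3)) (P (C 2) (C 4))
  -->*
      C ((0 + 3) + (2 + 4)).
Proof.
  apply multi_step with (P (C (0 + 3)) (P (C 2) (C 4))).
  { apply ST_Plus1. apply ST_PlusConstConst. }
  apply multi_step with (P (C (0 + 3)) (C (2 + 4))).
  { apply ST_Plus2. apply v_const. apply ST_PlusConstConst. }
  apply multi_R.
  apply ST_PlusConstConst.
Qed.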

    Here's an alternate proof of the same fact that uses eapply to avoid explicitly constructing all the intermediate terms.
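    A sketch of such a proof: each eapply multi_step leaves the intermediate term as an existential variable, which the step subproof then determines:

Lemma test_multistep_1' :
      P (P (C 0) (C 3)) (P (C 2) (C 4))
  -->*
      C ((0 + 3) + (2 + 4)).
Proof.
  eapply multi_step. { apply ST_Plus1. apply ST_PlusConstConst. }
  eapply multi_step. { apply ST_Plus2. apply v_const.
                       apply ST_PlusConstConst. }
  eapply multi_step. { apply ST_PlusConstConst. }
  apply multi_refl.
Qed.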

    Exercise: 1 star, standard, optional (test_multistep_2)

    Exercise: 1 star, standard, optional (test_multistep_3)

    Exercise: 2 stars, standard (test_multistep_4)

    Normal Forms Again

    If t reduces to t' in zero or more steps and t' is a normal form, we say that t' is a normal form of t.
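    A sketch of the formal definitions (normal_form_of is the name used in the exercises below):

Definition step_normal_form := normal_form step.

Definition normal_form_of (t t' : tm) :=
  (t -->* t' /\ step_normal_form t').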

    We have already seen that, for our language, single-step reduction is deterministic -- i.e., a given term can take a single step in at most one way. It follows from this that, if t can reach a normal form, then this normal form is unique. In other words, we can actually pronounce normal_form_of t t' as t' is the normal form of t.

    Exercise: 3 stars, standard, optional (normal_forms_unique)

    Indeed, something stronger is true for this language (though not for all languages): the reduction of any term t will eventually reach a normal form -- i.e., normal_form_of is a total function. Formally, we say the step relation is normalizing.
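    A sketch of the generic definition:

Definition normalizing {X : Type} (R : relation X) :=
  forall t, exists t',
    (multi R) t t' /\ normal_form R t'.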

    To prove that step is normalizing, we need a couple of lemmas.

    First, we observe that, if t reduces to t' in many steps, then the same sequence of reduction steps within t is also possible when t appears as the left-hand child of a P node, and similarly when t appears as the right-hand child of a P node whose left-hand child is a value.
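    A sketch of the first congruence lemma; the second one is the exercise just below:

Lemma multistep_congr_1 : forall t1 t1' t2,
     t1 -->* t1' ->
     P t1 t2 -->* P t1' t2.
Proof.
  intros t1 t1' t2 H.
  induction H as [u | u v w Hstep Hrest IH].
  - (* zero steps: nothing to do *)
    apply multi_refl.
  - (* u steps to v, then v -->* w *)
    apply multi_step with (P v t2).
    + apply ST_Plus1. apply Hstep.
    + apply IH.
Qed.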

    Exercise: 2 stars, standard (multistep_congr_2)

    With these lemmas in hand, the main proof is a straightforward induction.

    Theorem: The step relation is normalizing -- i.e., for every t there exists some t' such that t reduces (in zero or more steps) to t' and t' is a normal form.

    Proof sketch: By induction on terms. There are two cases to consider:

    • t = C n for some n. Here t doesn't take a step, and we have t' = t. We can derive the left-hand side by reflexivity and the right-hand side by observing (a) that values are normal forms (by nf_same_as_value) and (b) that t is a value (by v_const).

    • t = P t1 t2 for some t1 and t2. By the IH, t1 and t2 have normal forms t1' and t2'. Recall that normal forms are values (by nf_same_as_value), so we know that t1' = C n1 and t2' = C n2, for some n1 and n2. We can combine the -->* derivations for t1 and t2 using multistep_congr_1 and multistep_congr_2 to prove that P t1 t2 reduces in many steps to C (n1 + n2).

      It is clear that our choice of t' = C (n1 + n2) is a value, which is in turn a normal form.

    Equivalence of Big-Step and Small-Step

    Having defined the operational semantics of our tiny programming language in two different ways (big-step and small-step), it makes sense to ask whether these definitions actually define the same thing! They do, though it takes a little work to show it. The details are left as an exercise.

    Exercise: 3 stars, standard (eval__multistep)
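    The statement to be proved is, roughly:

Theorem eval__multistep : forall t n,
  t ==> n -> t -->* C n.
Proof.
  (* FILL IN HERE *) Admitted.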

    The key ideas in the proof can be seen in the following picture:

        P t1 t2 -->            (by ST_Plus1)
        P t1' t2 -->           (by ST_Plus1)
        P t1'' t2 -->          (by ST_Plus1)
        ...
        P (C n1) t2 -->        (by ST_Plus2)
        P (C n1) t2' -->       (by ST_Plus2)
        P (C n1) t2'' -->      (by ST_Plus2)
        ...
        P (C n1) (C n2) -->    (by ST_PlusConstConst)
        C (n1 + n2)

    That is, the multistep reduction of a term of the form P t1 t2 proceeds in three phases:

  • First, we use ST_Plus1 some number of times to reduce t1 to a normal form, which must (by nf_same_as_value) be a term of the form C n1 for some n1.
  • Next, we use ST_Plus2 some number of times to reduce t2 to a normal form, which must again be a term of the form C n2 for some n2.
  • Finally, we use ST_PlusConstConst one time to reduce P (C n1) (C n2) to C (n1 + n2).

    To formalize this intuition, you'll need to use the congruence lemmas from above (you might want to review them now, so that you'll be able to recognize when they are useful), plus some basic properties of -->*: that it is reflexive, transitive, and includes -->.

    Exercise: 3 stars, advanced (eval__multistep_inf)

    Write a detailed informal version of the proof of eval__multistep.

    (* FILL IN HERE *)

    For the other direction, we need one lemma, which establishes a relation between single-step reduction and big-step evaluation.

    Exercise: 3 stars, standard (step__eval)
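    The lemma's statement is, roughly:

Lemma step__eval : forall t t' n,
     t --> t' ->
     t' ==> n ->
     t ==> n.
Proof.
  (* FILL IN HERE *) Admitted.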

    The fact that small-step reduction implies big-step evaluation is now straightforward to prove, once it is stated correctly.
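    The statement is, roughly:

Theorem multistep__eval : forall t t',
  normal_form_of t t' -> exists n, t' = C n /\ t ==> n.
Proof.
  (* FILL IN HERE *) Admitted.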

    The proof proceeds by induction on the multi-step reduction sequence that is buried in the hypothesis normal_form_of t t'.

    Make sure you understand the statement before you start to work on the proof.

    Exercise: 3 stars, standard (multistep__eval)

    Additional Exercises

    Exercise: 3 stars, standard, optional (interp_tm)

    Remember that we also defined big-step evaluation of terms as a function evalF. Prove that it is equivalent to the existing semantics. (Hint: we just proved that eval and multistep are equivalent, so logically it doesn't matter which you choose. One will be easier than the other, though!)

    Exercise: 4 stars, standard (combined_properties)

    We've considered arithmetic and conditional expressions separately. This exercise explores how the two interact.

    Earlier, we separately proved for both plus- and if-expressions...

    • that the step relation was deterministic, and

    • a strong progress lemma, stating that every term is either a value or can take a step.

    Formally prove or disprove these two properties for the combined language. (That is, state a theorem saying that the property holds or does not hold, and prove your theorem.)

    Small-Step Imp

    Now for a more serious example: a small-step version of the Imp operational semantics.

    The small-step reduction relations for arithmetic and boolean expressions are straightforward extensions of the tiny language we've been working with up to now. To make them easier to read, we introduce the symbolic notations -->a and -->b for the arithmetic and boolean step relations.

    We are not actually going to bother to define boolean values, since they aren't needed in the definition of -->b below (why?), though they might be if our language were a bit larger (why?).

    The semantics of commands is the interesting part. We need two small tricks to make it work:

    • We use SKIP as a command value -- i.e., a command that has reached a normal form.

      • An assignment command reduces to SKIP (and an updated state).

      • The sequencing command waits until its left-hand subcommand has reduced to SKIP, then throws it away so that reduction can continue with the right-hand subcommand.

    • We reduce a WHILE command by transforming it into a conditional followed by the same WHILE.

    (There are other ways of achieving the effect of the latter trick, but they all share the feature that the original WHILE command needs to be saved somewhere while a single copy of the loop body is being reduced.)

    Concurrent Imp

    Finally, to show the power of this definitional style, let's enrich Imp with a new form of command that runs two subcommands in parallel and terminates when both have terminated. To reflect the unpredictability of scheduling, the actions of the subcommands may be interleaved in any order, but they share the same memory and can communicate by reading and writing the same variables.

    Among the (many) interesting properties of this language is the fact that the following program can terminate with the variable X set to any value.

    In particular, it can terminate with X set to 0:

    It can also terminate with X set to 2:

    More generally...

    Exercise: 3 stars, standard, optional (par_body_n__Sn)

    Exercise: 3 stars, standard, optional (par_body_n)

    ... the above loop can exit with X having any value whatsoever.

    A Small-Step Stack Machine

    Our last example is a small-step semantics for the stack machine example from the Imp chapter of Logical Foundations.

    Exercise: 3 stars, advanced (compiler_is_correct)

    Remember the definition of s_compile for aexps given in the Imp chapter of Logical Foundations. We now want to prove s_compile correct with respect to the stack machine.

    State what it means for the compiler to be correct according to the stack machine small step semantics and then prove it.

    Aside: A normalize Tactic

    When experimenting with definitions of programming languages in Coq, we often want to see what a particular concrete term steps to -- i.e., we want to find proofs for goals of the form t -->* t', where t is a completely concrete term and t' is unknown. These proofs are quite tedious to do by hand. Consider, for example, reducing an arithmetic expression using the small-step relation astep.

    The proof repeatedly applies multi_step until the term reaches a normal form. Fortunately, the sub-proofs for the intermediate steps are simple enough that auto, with appropriate hints, can solve them.

    The following custom Tactic Notation definition captures this pattern. In addition, before each step, we print out the current goal, so that we can follow how the term is being reduced.
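    A sketch of such tactics, in the spirit of the description above (the definitions in the text may differ in details):

Tactic Notation "print_goal" :=
  match goal with |- ?x => idtac x end.

Tactic Notation "normalize" :=
  repeat (print_goal; eapply multi_step;
            [ (eauto 10; fail) | simpl ]);
  apply multi_refl.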

    The normalize tactic also provides a simple way to calculate the normal form of a term, by starting with a goal with an existentially bound variable.
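    For instance, here is a hypothetical usage, adapted to the toy language of this chapter; it assumes the constructors of step and value have been registered as hints so that eauto can justify each individual step:

Hint Constructors step value : core.

Example normalize_example : exists t',
  P (C 3) (P (C 3) (C 4)) -->* t'.
Proof.
  eexists. normalize.
Qed.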

    Exercise: 1 star, standard (normalize_ex)

    Exercise: 1 star, standard, optional (normalize_ex')

    For comparison, prove it using apply instead of eapply.