\section{The proof}
In this section, we present the correctness proof for the algorithm in more
detail. The main correctness statement is shown, slightly simplified, in~Figure~\ref{statement}.
\label{sigmapolspec}
\begin{figure}[t]
\begin{alignat*}{6}
\mathtt{sigma}&\omit\rlap{$\mathtt{\_policy\_specification} \equiv
\lambda program.\lambda sigma.\lambda policy.$} \notag\\
& \omit\rlap{$sigma\ 0 = 0\ \wedge$} \notag\\
& \mathbf{let}\ & & \omit\rlap{$instr\_list \equiv code\ program\ \mathbf{in}$} \notag\\
&&& \omit\rlap{$\forall ppc.ppc < |instr\_list| \rightarrow$} \notag\\
&&& \mathbf{let}\ && pc \equiv sigma\ ppc\ \mathbf{in} \notag\\
&&& \mathbf{let}\ && instruction \equiv \mathtt{fetch\_pseudo\_instruction}\ instr\_list\ ppc\ \mathbf{in} \notag\\
&&& \mathbf{let}\ && next\_pc \equiv sigma\ (ppc+1)\ \mathbf{in}\notag\\
&&&&& next\_pc = pc + \mathtt{instruction\_size}\ sigma\ policy\ ppc\ instruction\ \wedge\notag\\
&&&&& (pc + \mathtt{instruction\_size}\ sigma\ policy\ ppc\ instruction < 2^{16}\ \vee\notag\\
&&&&& ((\forall ppc'.ppc' < |instr\_list| \rightarrow ppc < ppc' \rightarrow \notag\\
&&&&& \mathbf{let}\ instruction' \equiv \mathtt{fetch\_pseudo\_instruction}\ instr\_list\ ppc'\ \mathbf{in} \notag\\
&&&&&\ \mathtt{instruction\_size}\ sigma\ policy\ ppc'\ instruction' = 0)\ \wedge \notag\\
&&&&& pc + \mathtt{instruction\_size}\ sigma\ policy\ ppc\ instruction = 2^{16}))
\end{alignat*}
\caption{Main correctness statement\label{statement}}
\end{figure}
Informally, this means that when fetching a pseudo-instruction at $ppc$, the
translation by $\sigma$ of $ppc+1$ is the same as $\sigma(ppc)$ plus the size
of the instruction at $ppc$. That is, an instruction is placed consecutively
after the previous one, and there are no overlaps.
Instructions are also stored in
order: the memory address of the instruction at $ppc$ should be smaller than
the memory address of the instruction at $ppc+1$. There is one exception to
this rule: the instruction at the very end of the program, whose successor
address can be zero (this is the case where the program size is exactly equal
to the amount of memory).
Finally, we enforce that the program starts at address 0, i.e. $\sigma(0) = 0$.
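To make the statement concrete, the following is a minimal executable sketch (in Python, with hypothetical names, not the Matita formalisation) of what the statement in Figure~\ref{statement} checks: $\sigma$ starts at 0, places instructions consecutively, and may only reach address $2^{16}$ at the very end of the program.

```python
# Hypothetical sketch of the correctness statement; sizes[ppc] stands in
# for instruction_size sigma policy ppc instruction.
def sigma_spec_holds(sizes, sigma):
    if sigma(0) != 0:                   # the program starts at address 0
        return False
    n = len(sizes)
    for ppc in range(n):
        pc, next_pc = sigma(ppc), sigma(ppc + 1)
        end = pc + sizes[ppc]
        if next_pc != end:              # consecutive placement, no overlaps
            return False
        if end >= 2 ** 16:
            # Allowed only when the program ends exactly at 2^16 and
            # every later instruction has size 0.
            if end != 2 ** 16:
                return False
            if any(sizes[q] != 0 for q in range(ppc + 1, n)):
                return False
    return True
```

For instance, the cumulative-sum map `lambda ppc: sum(sizes[:ppc])` satisfies the check, since it places each instruction directly after its predecessor.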
Since our computation is a least fixed point computation, we must prove
termination in order to prove correctness: if the algorithm is halted after
a number of steps without reaching a fixed point, the solution is not
guaranteed to be correct. More specifically, branch instructions might be
encoded which do not coincide with the span between their location and their
destination.
Proof of termination rests on the fact that the encoding of branch
instructions can only grow larger, which means that we must reach a fixed point
after at most $2n$ iterations, with $n$ the number of branch instructions in
the program. This worst case is reached if at every iteration, we change the
encoding of exactly one branch instruction; since the encoding of any branch
instruction can change at most twice, first from short to absolute and then
from absolute to long, there can be at most $2n$ changes.
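The counting argument above can be modelled executably. The following is an illustrative Python model (not the formalised proof): encodings only move up the chain short $<$ absolute $<$ long, so a fixed point is reached within $2n$ iterations for $n$ branch instructions.

```python
# Illustrative encoding of the three jump lengths; jmpleq is <= here.
SHORT, ABSOLUTE, LONG = 0, 1, 2

def iterations_to_fixpoint(n, grow):
    """grow(i, e) returns the new (never smaller) encoding of jump i."""
    encodings = [SHORT] * n
    for step in range(2 * n + 1):
        new = [grow(i, e) for i, e in enumerate(encodings)]
        assert all(a <= b for a, b in zip(encodings, new))  # monotonicity
        if new == encodings:
            return step                  # fixed point reached
        encodings = new
    raise AssertionError("no fixed point within 2n iterations")
```

With `grow = lambda i, e: min(e + 1, LONG)` every encoding grows each round and the fixed point is found after two changing iterations; the $2n$ bound is attained only when exactly one encoding grows per iteration.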
The proof has been carried out using the ``Russell'' style from~\cite{Sozeau2006}.
We have proven some invariants of the {\sc f} function from the previous
section; these invariants are then used to prove properties that hold for every
iteration of the fixed point computation; and finally, we can prove some
properties of the fixed point.
\subsection{Fold invariants}
In this section, we present the invariants that hold during the fold of {\sc f}
over the program. These will be used later on to prove the properties of the
iteration.
Note that during the fixed point computation, the $\sigma$ function is
implemented as a trie for ease of access; computing $\sigma(x)$ is achieved by
looking up the value of $x$ in the trie. Actually, during the fold, the value
we pass along is a pair $\mathbb{N} \times \mathtt{ppc\_pc\_map}$. The first
component is the number of bytes added to the program so far with respect to
the previous iteration, and the second component, {\tt ppc\_pc\_map}, is the
actual $\sigma$ trie.
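As a toy Python stand-in (a dict for the trie, a tuple for the pair; all names are illustrative), the accumulator threaded through the fold can be pictured as follows.

```python
def lookup_opt(ppc, trie):
    """Partial lookup: None for pseudo-addresses not yet examined."""
    return trie.get(ppc)

def lookup(ppc, trie):
    """Total lookup: unexamined addresses default to pc 0, short_jump."""
    return trie.get(ppc, (0, "short_jump"))

# (bytes added this iteration, sigma trie); sigma(0) = 0.
acc = (0, {0: (0, "short_jump")})
```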
\begin{alignat*}{2}
\mathtt{out} & \mathtt{\_of\_program\_none} \equiv \lambda prefix.\lambda sigma. \notag\\
& \forall i.i < 2^{16} \rightarrow (i > |prefix| \leftrightarrow
\mathtt{lookup\_opt}\ i\ (\mathtt{snd}\ sigma) = \mathtt{None})
\end{alignat*}
The first invariant states that any pseudo-address not yet examined is not
present in the lookup trie.
\begin{alignat*}{2}
\mathtt{not} & \mathtt{\_jump\_default} \equiv \lambda prefix.\lambda sigma.\notag\\
& \forall i.i < |prefix| \rightarrow\notag\\
& \neg\mathtt{is\_jump}\ (\mathtt{nth}\ i\ prefix) \rightarrow\notag\\
& \mathtt{lookup}\ i\ (\mathtt{snd}\ sigma) = \mathtt{short\_jump}
\end{alignat*}
This invariant states that when we try to look up the jump length of a
pseudo-address where there is no branch instruction, we will get the default
value, a short jump.
\begin{alignat*}{4}
\mathtt{jump} & \omit\rlap{$\mathtt{\_increase} \equiv \lambda prefix.\lambda op.\lambda p.$} \notag\\
& \omit\rlap{$\forall i.i < |prefix| \rightarrow$} \notag\\
& \mathbf{let}\ && oj \equiv \mathtt{lookup}\ i\ (\mathtt{snd}\ op)\ \mathbf{in} \notag\\
& \mathbf{let}\ && j \equiv \mathtt{lookup}\ i\ (\mathtt{snd}\ p)\ \mathbf{in} \notag\\
&&& \mathtt{jmpleq}\ oj\ j
\end{alignat*}
This invariant states that between iterations (with $op$ being the previous
iteration, and $p$ the current one), jump lengths either remain equal or
increase. It is needed for proving termination.
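One plausible reading of {\tt jmpleq} is the total order short $\leq$ absolute $\leq$ long on jump lengths; the numeric encoding below is purely illustrative.

```python
# Illustrative order on jump lengths used by jmpleq.
_ORDER = {"short_jump": 0, "absolute_jump": 1, "long_jump": 2}

def jmpleq(oj, j):
    """True when the old jump length oj is no longer than the new one j."""
    return _ORDER[oj] <= _ORDER[j]
```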
\begin{figure}[ht]
\begin{alignat*}{6}
\mathtt{sigma} & \omit\rlap{$\mathtt{\_compact\_unsafe} \equiv \lambda prefix.\lambda sigma.$}\notag\\
& \omit\rlap{$\forall n.n < |prefix| \rightarrow$}\notag\\
& \mathbf{match}\ && \omit\rlap{$\mathtt{lookup\_opt}\ n\ (\mathtt{snd}\ sigma)\ \mathbf{with}$}\notag\\
&&& \omit\rlap{$\mathtt{None} \Rightarrow \mathrm{False}$} \notag\\
&&& \omit\rlap{$\mathtt{Some}\ \langle pc, j \rangle \Rightarrow$} \notag\\
&&& \mathbf{match}\ && \mathtt{lookup\_opt}\ (n+1)\ (\mathtt{snd}\ sigma)\ \mathbf{with}\notag\\
&&&&& \mathtt{None} \Rightarrow \mathrm{False} \notag\\
&&&&& \mathtt{Some}\ \langle pc_1, j_1 \rangle \Rightarrow
pc_1 = pc + \notag\\
&&&&& \ \ \mathtt{instruction\_size\_jmplen}\ j\ (\mathtt{nth}\ n\ prefix)
\end{alignat*}
\caption{Temporary safety property}
\label{sigmacompactunsafe}
\end{figure}
We can now proceed with the lemmas that are needed for algorithm safety.
The lemma in Figure~\ref{sigmacompactunsafe} is a temporary formulation of
the main property\\ ({\tt sigma\_policy\_specification}). Its main difference
from the final version is that it uses {\tt instruction\_size\_jmplen} to
compute the instruction size. This function uses $j$ to compute the span
of branch instructions (i.e. it uses the $\sigma$ function under construction),
instead of looking at the distance between source and destination. This is
because $\sigma$ is still under construction; later on we will prove that after
the final iteration, {\tt sigma\_compact\_unsafe} is equivalent to the main
property.
\begin{figure}[ht]
\begin{alignat*}{6}
\mathtt{sigma} & \omit\rlap{$\mathtt{\_safe} \equiv \lambda prefix.\lambda labels.\lambda old\_sigma.\lambda sigma.$}\notag\\
& \omit\rlap{$\forall i.i < |prefix| \rightarrow$} \notag\\
& \omit\rlap{$\forall dest\_label.\mathtt{is\_jump\_to}\ (\mathtt{nth}\ i\ prefix)\ dest\_label \rightarrow$} \notag\\
& \mathbf{let} && \omit\rlap{$\ paddr \equiv \mathtt{lookup}\ labels\ dest\_label\ \mathbf{in}$} \notag\\
& \mathbf{let} && \omit\rlap{$\ \langle j, src, dest \rangle \equiv$}\notag\\
&&& \omit\rlap{$\mathbf{if} \ paddr\ \leq\ i\ \mathbf{then}$}\notag\\
&&&&& \mathbf{let}\ \langle \_, j \rangle \equiv \mathtt{lookup}\ i\ (\mathtt{snd}\ sigma)\ \mathbf{in} \notag\\
&&&&& \mathbf{let}\ \langle pc\_plus\_jl, \_ \rangle \equiv \mathtt{lookup}\ (i+1)\ (\mathtt{snd}\ sigma)\ \mathbf{in}\notag\\
&&&&& \mathbf{let}\ \langle addr, \_ \rangle \equiv \mathtt{lookup}\ paddr\ (\mathtt{snd}\ sigma)\ \mathbf{in}\notag\\
&&&&& \langle j, pc\_plus\_jl, addr \rangle\notag\\
&&&\mathbf{else} \notag\\
&&&&&\mathbf{let}\ \langle \_, j \rangle \equiv \mathtt{lookup}\ i\ (\mathtt{snd}\ sigma)\ \mathbf{in} \notag\\
&&&&&\mathbf{let}\ \langle pc\_plus\_jl, \_ \rangle \equiv \mathtt{lookup}\ (i+1)\ (\mathtt{snd}\ old\_sigma)\ \mathbf{in}\notag\\
&&&&&\mathbf{let}\ \langle addr, \_ \rangle \equiv \mathtt{lookup}\ paddr\ (\mathtt{snd}\ old\_sigma)\ \mathbf{in}\notag\\
&&&&&\langle j, pc\_plus\_jl, addr \rangle\notag\\
&&&\omit\rlap{$\mathbf{in}$}\notag\\
&&&\mathbf{match} && \ j\ \mathbf{with} \notag\\
&&&&&\mathrm{short\_jump} \Rightarrow \mathtt{short\_jump\_valid}\ src\ dest\notag\\
&&&&&\mathrm{absolute\_jump} \Rightarrow \mathtt{absolute\_jump\_valid}\ src\ dest\notag\\
&&&&&\mathrm{long\_jump} \Rightarrow \mathrm{True}
\end{alignat*}
\caption{Safety property}
\label{sigmasafe}
\end{figure}
The lemma in Figure~\ref{sigmasafe} is a more direct safety property. It states
that branch instructions are encoded properly, and that no wrong branch
instructions are chosen.
Note that we compute the distance using the memory address of the instruction
plus its size. This follows the behaviour of the MCS-51 microprocessor, which
increases the program counter directly after fetching, and only then executes
the branch instruction (by changing the program counter again).
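A hedged sketch of the validity check for short jumps illustrates this: the displacement is measured from the address *after* the branch (so $src$ is the program counter plus the instruction's size), and the MCS-51's SJMP takes a signed 8-bit offset.

```python
# Sketch of short_jump_valid under the assumption that src is the pc
# after fetching the branch and SJMP carries a signed 8-bit displacement.
def short_jump_valid(src, dest):
    return -128 <= dest - src <= 127
```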
We now encode some final, simple, properties to make sure that our policy
remains consistent, and to keep track of whether the fixed point has been
reached.
\begin{align*}
& \mathtt{lookup}\ 0\ (\mathtt{snd}\ policy) = 0 \notag\\
& \mathtt{lookup}\ |prefix|\ (\mathtt{snd}\ policy) = \mathtt{fst}\ policy
\end{align*}
These two properties give the values of $\sigma$ for the start and end of the
program; $\sigma(0) = 0$ and $\sigma(n)$, where $n$ is the number of
instructions up until now, is equal to the maximum memory address so far.
\begin{align*}
& added = 0\ \rightarrow\ \mathtt{policy\_pc\_equal}\ prefix\ old\_sigma\ policy \notag\\
& \mathtt{policy\_jump\_equal}\ prefix\ old\_sigma\ policy\ \rightarrow\ added = 0
\end{align*}
And finally, two properties that deal with what happens when the previous
iteration does not change with respect to the current one. $added$ is a
variable that keeps track of the number of bytes we have added to the program
size by changing the encoding of branch instructions. If $added$ is 0, the
program size has not changed, and vice versa.
We need to use two different formulations, because the fact that $added$ is 0
does not guarantee that no branch instructions have changed. For instance,
it is possible that we have replaced a short jump with an absolute jump, which
does not change the size of the branch instruction.
Therefore {\tt policy\_pc\_equal} states that $old\_sigma_1(x) = sigma_1(x)$
(the addresses are unchanged), whereas {\tt policy\_jump\_equal} states that
$old\_sigma_2(x) = sigma_2(x)$ (the jump lengths are unchanged).
This formulation is sufficient to prove termination and compactness.
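An executable reading of the two predicates (with illustrative names, modelling trie entries as $\langle pc, \mathrm{jump\ length} \rangle$ pairs) makes the distinction concrete: one compares only the address components, the other only the jump-length components.

```python
# Hypothetical sketch: old_sigma and sigma map a ppc to a (pc, jump) pair.
def policy_pc_equal(prefix, old_sigma, sigma):
    return all(old_sigma[i][0] == sigma[i][0] for i in range(len(prefix)))

def policy_jump_equal(prefix, old_sigma, sigma):
    return all(old_sigma[i][1] == sigma[i][1] for i in range(len(prefix)))
```

Replacing a short jump by an absolute one changes only the jump-length components, so `policy_pc_equal` can hold while `policy_jump_equal` fails.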
Proving these invariants is simple, usually by induction on the prefix length.
\subsection{Iteration invariants}
These are invariants that hold after the completion of an iteration. The main
difference between these invariants and the fold invariants is that after the
completion of the fold, we check whether the program size does not exceed
64 Kb, the maximum memory size the MCS-51 can address.
The type of an iteration therefore becomes an option type: {\tt None} in case
the program becomes larger than 64 Kb, or $\mathtt{Some}\ \sigma$
otherwise. We also no longer pass along the number of bytes added to the
program size, but a boolean that indicates whether we have changed something
during the iteration or not.
If an iteration returns {\tt None}, we have the following invariant:
\begin{alignat*}{2}
\mathtt{nec} & \mathtt{\_plus\_ultra} \equiv \lambda program.\lambda p. \notag\\
&\neg(\forall i.i < |program|\ \rightarrow \notag\\
& \mathtt{is\_jump}\ (\mathtt{nth}\ i\ program)\ \rightarrow \notag\\
& \mathtt{lookup}\ i\ (\mathtt{snd}\ p) = \mathrm{long\_jump}).
\end{alignat*}
This invariant is applied to $old\_sigma$; if our program becomes too large
for memory, the previous iteration cannot have every branch instruction encoded
as a long jump. This is needed later in the proof of termination.
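A hedged executable reading of {\tt nec\_plus\_ultra} (with {\tt is\_jump} modelled by a simple string test, and a dict for the jump-length trie) is:

```python
# Illustrative sketch: not every branch instruction in the program is
# already encoded as a long jump.
def nec_plus_ultra(program, jump_lengths):
    return not all(jump_lengths[i] == "long_jump"
                   for i, instr in enumerate(program)
                   if instr == "jump")
```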
If the iteration returns $\mathtt{Some}\ \sigma$, the invariants
{\tt out\_of\_program\_none},\\
{\tt not\_jump\_default}, {\tt jump\_increase},
and the two invariants that deal with $\sigma(0)$ and $\sigma(n)$ are
retained without change.
Instead of using {\tt sigma\_compact\_unsafe}, we can now use the proper
invariant:
\begin{alignat*}{6}
\mathtt{sigma} & \omit\rlap{$\mathtt{\_compact} \equiv \lambda program.\lambda sigma.$} \notag\\
& \omit\rlap{$\forall n.n < |program|\ \rightarrow$} \notag\\
& \mathbf{match}\ && \omit\rlap{$\mathtt{lookup\_opt}\ n\ (\mathtt{snd}\ sigma)\ \mathbf{with}$}\notag\\
&&& \omit\rlap{$\mathtt{None} \Rightarrow \mathrm{False}$}\notag\\
&&& \omit\rlap{$\mathtt{Some}\ \langle pc, j \rangle \Rightarrow$}\notag\\
&&& \mathbf{match}\ && \mathtt{lookup\_opt}\ (n+1)\ (\mathtt{snd}\ sigma)\ \mathbf{with}\notag\\
&&&&& \mathtt{None} \Rightarrow \mathrm{False}\notag\\
&&&&& \mathtt{Some}\ \langle pc_1, j_1 \rangle \Rightarrow\notag\\
&&&&& \ \ pc_1 = pc + \mathtt{instruction\_size}\ sigma\ n\ (\mathtt{nth}\ n\ program)
\end{alignat*}
This is almost the same invariant as {\tt sigma\_compact\_unsafe}, but differs in that it
computes the sizes of branch instructions by looking at the distance between
position and destination using $\sigma$.
In actual use, the invariant is qualified: $\sigma$ is compact if there have
been no changes (i.e. the boolean passed along is {\tt true}). This is to
reflect the fact that we are doing a least fixed point computation: the result
is only correct when we have reached the fixed point.
There is another, trivial, invariant in case the iteration returns
$\mathtt{Some}\ \sigma$: it must hold that $\mathtt{fst}\ sigma < 2^{16}$.
We need this invariant to make sure that addresses do not overflow.
The invariants that are taken directly from the fold invariants are trivial to
prove.
The proof of {\tt nec\_plus\_ultra} goes as follows: if we return {\tt None},
then the program size must be greater than 64 Kb. However, since the
previous iteration did not return {\tt None} (because otherwise we would
terminate immediately), the program size in the previous iteration must have
been smaller than 64 Kb.
Suppose that all the branch instructions in the previous iteration are
encoded as long jumps. This means that all branch instructions in this
iteration are long jumps as well, and therefore that both iterations are equal
in the encoding of their branch instructions. Per the invariant, this means that
$added = 0$, and therefore that all addresses in both iterations are equal.
But if all addresses are equal, the program sizes must be equal too, which
means that the program size in the current iteration must be smaller than
64 Kb. This contradicts the earlier hypothesis, hence not all branch
instructions in the previous iteration are encoded as long jumps.
The proof of {\tt sigma\_compact} follows from {\tt sigma\_compact\_unsafe} and
the fact that we have reached a fixed point, i.e. the previous iteration and
the current iteration are the same. This means that the results of
{\tt instruction\_size\_jmplen} and {\tt instruction\_size} are the same.
\subsection{Final properties}
These are the invariants that hold after $2n$ iterations, where $n$ is the
program size (we use the program size for convenience; we could also use the
number of branch instructions, but this is more complex). Here, we only
need {\tt out\_of\_program\_none}, {\tt sigma\_compact} and the fact that
$\sigma(0) = 0$.
Termination can now be proved using the fact that there is a $k \leq 2n$, with
$n$ the length of the program, such that iteration $k$ is equal to iteration
$k+1$. There are two possibilities: either there is a $k < 2n$ such that this
property holds, or every iteration up to $2n$ is different. In the latter case,
since the only changes between the iterations can be from shorter jumps to
longer jumps, in iteration $2n$ every branch instruction must be encoded as
a long jump. In this case, iteration $2n$ is equal to iteration $2n+1$ and the
fixed point is reached.