\documentclass{llncs}
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[english]{babel}
\usepackage{color}
\usepackage{fancybox}
\usepackage{graphicx}
\usepackage[colorlinks]{hyperref}
\usepackage{hyphenat}
\usepackage[utf8x]{inputenc}
\usepackage{listings}
\usepackage{mdwlist}
\usepackage{microtype}
\usepackage{stmaryrd}
\usepackage{url}
\renewcommand{\verb}{\lstinline}
\def\lstlanguagefiles{lst-grafite.tex}
\lstset{language=Grafite}
\newlength{\mylength}
\newenvironment{frametxt}%
{\setlength{\fboxsep}{5pt}
\setlength{\mylength}{\linewidth}%
\addtolength{\mylength}{-2\fboxsep}%
\addtolength{\mylength}{-2\fboxrule}%
\Sbox
\minipage{\mylength}%
\setlength{\abovedisplayskip}{0pt}%
\setlength{\belowdisplayskip}{0pt}%
}%
{\endminipage\endSbox
\[\fbox{\TheSbox}\]}
\title{On the correctness of an optimising assembler for the Intel MCS-51 microprocessor\thanks{The project CerCo acknowledges the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET-Open grant number: 243881.}}
\author{Dominic P. Mulligan \and Claudio Sacerdoti Coen}
\institute{Dipartimento di Scienze dell'Informazione,\\ Universit\`a degli Studi di Bologna}
\bibliographystyle{splncs03}
\begin{document}
\maketitle
\begin{abstract}
We present a proof of correctness in Matita for an optimising assembler for the MCS-51 microcontroller.
The efficient expansion of pseudoinstructions, namely jumps, into machine instructions is complex.
We isolate, as far as possible, the decision making over how jumps should be expanded from the expansion process itself by means of `policies', making the proof of correctness for the assembler more straightforward.
%We observe that it is impossible for an optimising assembler to preserve the semantics of every assembly program.
%Assembly language programs can manipulate concrete addresses in arbitrary ways.
As it is impossible for an assembler to preserve the semantics of every assembly program, our proof strategy includes a tracking facility for `good addresses': only programs that use addresses in good ways have their semantics preserved under assembly.
Our strategy offers increased flexibility over the traditional approach to proving the correctness of assemblers, wherein addresses in assembly are kept opaque and immutable.
In particular, we may experiment with allowing the benign manipulation of addresses.
\keywords{Verified software, CerCo (Certified Complexity), MCS-51 microcontroller, Matita proof assistant}
\end{abstract}
% ---------------------------------------------------------------------------- %
% SECTION %
% ---------------------------------------------------------------------------- %
\section{Introduction}
\label{sect.introduction}
We consider the formalisation of an assembler for the Intel MCS-51 8-bit microprocessor in the Matita proof assistant~\cite{asperti:user:2007}.
This formalisation forms a major component of the EU-funded CerCo (`Certified Complexity') project~\cite{cerco:2011}, concerning the construction and formalisation of a concrete complexity preserving compiler for a large subset of the C programming language.
The MCS-51 dates from the early 1980s and is commonly called the 8051/8052.
Derivatives are still widely manufactured by a number of semiconductor foundries, with the processor being used especially in embedded systems.
The MCS-51 has a relative paucity of features compared to its more modern brethren, with the lack of any caching or pipelining features meaning that timing of execution is predictable, making the MCS-51 very attractive for CerCo's ends.
However, the MCS-51's paucity of features---though an advantage in many respects---also quickly becomes a hindrance, as the MCS-51 features a relatively minuscule series of memory spaces by modern standards.
As a result our C compiler, to be able to successfully compile realistic programs for embedded devices, ought to produce `tight' machine code.
To do this, we must solve the `branch displacement' problem---deciding how best to expand pseudojumps to labels in assembly language to machine code jumps.
The branch displacement problem arises when pseudojumps can be expanded
in different ways to real machine instructions, but the different expansions
are not equivalent (e.g. differ in size or speed) and not always
correct (e.g. correctness is only up to global constraints over the compiled
code). For instance, some jump instructions (short jumps) are very small
and fast, but they can only reach destinations within a
certain distance from the current instruction. When the destinations are
too far away, larger and slower long jumps must be used. The use of a long jump may
augment the distance between another pseudojump and its target, forcing
another long jump to be used, in a cascade. The job of the optimising
assembler is to expand every pseudoinstruction individually in such a way
that all global constraints are satisfied and that the assembled program
is as small and as fast as possible.
This problem is known to be computationally hard for most CISC architectures (see~\cite{hyde:branch:2006}).
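To see how a cascade can arise, consider a purely illustrative calculation (the instruction sizes are those of the MCS-51 encoding: a short jump occupies 2 bytes and carries a signed 8-bit displacement, while a long jump occupies 3 bytes).
Suppose a pseudojump targets a label lying exactly 127 bytes beyond the instruction that follows it: a short jump suffices.
If any pseudojump situated between the two is later expanded from a short jump into a long jump, the distance grows past 127 bytes, the first jump must itself become long, and the extra byte it now occupies may in turn push yet other jumps out of short range.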
To simplify the CerCo C compiler we have chosen to implement an optimising assembler whose input language the compiler will target.
Labels, conditional jumps to labels, a program preamble containing global data and a \texttt{MOV} instruction for moving this global data into the MCS-51's single 16-bit register all feature in our assembly language.
We further simplify by ignoring linking, assuming that all our assembly programs are pre-linked.
Another complication we have addressed is that of the cost model.
CerCo imposes a cost model on C programs or, more specifically, on simple blocks of instructions.
This cost model is induced by the compilation process itself, and its non-compositional nature allows us to assign different costs to identical C statements depending on how they are compiled.
In short, we aim to obtain a very precise costing for a program by embracing the compilation process, not ignoring it.
At the assembler level, this is reflected by our need to induce a cost
model on the assembly code as a function of the assembly program and the
strategy used to solve the branch displacement problem. In particular, the
optimising assembler should also return a map that assigns a cost (in clock
cycles) to every instruction in the source program. We expect the induced cost
to be preserved by the compiler: we will prove that the compiled code
tightly simulates the source code by taking exactly the predicted amount of
time.
Note that the temporal tightness of the simulation is a fundamental prerequisite
for its correctness, because some functions of the MCS-51---timers and I/O---depend on the microprocessor's clock.
If the pseudo and concrete clocks differ, the result of an I/O operation may not be preserved.
Branch displacement algorithms must have a deep knowledge of the way
the rest of the assembler works in order to build globally correct solutions.
Proving their correctness is quite a complex task (see, for instance,
the companion paper~\cite{boender:correctness:2012}).
Nevertheless, the correctness of the whole assembler only depends on the
correctness of the branch displacement algorithm.
Therefore, in the rest of the paper, we presuppose the
existence of a correct policy, to be computed by a branch displacement
algorithm if it exists. A policy is the decision over how
any particular jump should be expanded; it is correct when the global
constraints are satisfied.
The assembler fails to assemble an assembly program if and only if a correct policy does not exist.
This is stated in an elegant way in the dependent type of the assembler: the assembly function is total over a program, a policy and the proof that the policy is correct for that program.
A final complication in the proof is due to the kind of semantics associated to pseudo-assembly programs.
Should assembly programs be allowed to freely manipulate addresses?
The traditional answer is `no': values stored in memory or registers are either
concrete data or symbolic addresses. The latter can only be manipulated
in very restricted ways, and programs that manipulate them in any other way are not assigned a semantics and cannot be reasoned about.
All programs that are assigned a semantics have it preserved by the assembler.
We take an alternative approach, allowing programs to freely
manipulate addresses non-symbolically but only granting a preservation of semantics
to those programs that act in `well-behaved' ways.
In principle, this should allow some reasoning on the actual semantics of malign programs.
In practice, we note how our approach facilitates more code reuse between the semantics of assembly code and object code.
The rest of this paper is a detailed description of our proof, which is still, marginally, a work in progress.
\paragraph{Matita}
Matita is a proof assistant based on a variant of the Calculus of (Co)inductive Constructions~\cite{asperti:user:2007}.
It features dependent types that we exploit in the formalisation.
The (simplified) syntax of the statements and definitions in the paper should be self-explanatory.
Pairs are denoted with angular brackets, $\langle-, -\rangle$.
Matita features a liberal system of coercions.
It is possible to define a uniform coercion $\lambda x.\langle x,?\rangle$ from every type $T$ to the dependent product $\Sigma x:T.P~x$.
The coercion opens a proof obligation that asks the user to prove that $P$ holds for $x$.
When a coercion must be applied to a complex term (a $\lambda$-abstraction, a local definition, or a case analysis), the system automatically propagates the coercion to the sub-terms.
For instance, to apply a coercion to force $\lambda x.M : A \to B$ to have type $\forall x:A.\Sigma y:B.P~x~y$, the system looks for a coercion from $M: B$ to $\Sigma y:B.P~x~y$ in a context augmented with $x:A$.
This is significant when the coercion opens a proof obligation, as the user will be presented with multiple, but simpler proof obligations in the correct context.
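For instance (in our notation, summarising the mechanism just described), coercing $\lambda x.M : A \rightarrow B$ to the richer type produces
\begin{displaymath}
\lambda x.M \;:\; A \rightarrow B
\qquad\rightsquigarrow\qquad
\lambda x.\langle M, \mathord{?}\rangle \;:\; \forall x:A.\Sigma y:B.P~x~y
\end{displaymath}
where the placeholder $?$ is a proof obligation of type $P~x~M$, opened in the context extended with $x:A$.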
In this way, Matita supports the `Russell' proof methodology developed by Sozeau in~\cite{sozeau:subset:2006}, with an implementation that is lighter and more tightly integrated with the system than that of Coq.
% ---------------------------------------------------------------------------- %
% SECTION %
% ---------------------------------------------------------------------------- %
\section{The proof}
\label{sect.the.proof}
The aim of this section is to explain the main ideas and steps of the certified
proof of correctness for an optimising assembler for the MCS-51.
In Section~\ref{subsect.machine.code.semantics} we sketch an operational semantics (a realistic and efficient emulator) for the MCS-51.
We also introduce a syntax for decoded instructions that will be reused for the assembly language.
In Section~\ref{subsect.assembly.code.semantics} we describe the assembly language and its operational semantics.
The latter is parametric in the cost model that will be induced by the assembler, reusing the semantics of the machine code on all `real' instructions.
Branch displacement policies are introduced in Section~\ref{subsect.the.assembler} where we also describe the assembler as a function over policies as previously described.
To prove our assembler correct we show that the object code given in output, together with a cost model for the source program, simulates the source program executed using that cost model.
The proof can be divided into two main lemmas.
The first is correctness with respect to fetching, described in Section~\ref{subsect.total.correctness.of.the.assembler}.
Roughly, it states that a step of fetching at the assembly level, returning the decoded instruction $I$, is simulated by $n$ steps of fetching at the object level that return instructions $J_1,\ldots,J_n$, where $J_1,\ldots,J_n$ is, amongst the possible expansions of $I$, the one picked by the policy.
The second lemma states that $J_1,\ldots,J_n$ simulates $I$ but only if $I$ is well-behaved, i.e. manipulates addresses in `good' ways.
To keep track of well-behaved address manipulations we record where addresses are currently stored (in memory or an accumulator).
We introduce a dynamic checking function that inspects this map to determine if the operation is well-behaved, with an affirmative answer being the pre-condition of the lemma.
The second lemma is detailed in Section~\ref{subsect.total.correctness.for.well.behaved.assembly.programs} where we also establish correctness of our assembler as a composition of the two lemmas: programs that are well-behaved when executed under the cost model induced by the compiler are correctly simulated by the compiled code.
% ---------------------------------------------------------------------------- %
% SECTION %
% ---------------------------------------------------------------------------- %
\subsection{Machine code and its semantics}
\label{subsect.machine.code.semantics}
We implemented a realistic and efficient emulator for the MCS-51 microprocessor.
An MCS-51 program is just a sequence of bytes stored in the read-only code
memory of the processor, represented as a compact trie of bytes addressed
by the program counter.
The \texttt{Status} of the emulator is described as
a record that contains the microprocessor's program counter, registers, stack
pointer, clock, special function registers, code memory, and so on.
The value of the code memory is a parameter of the record since it is not
changed during execution.
The \texttt{Status} record is itself an instance of a more general
datatype \texttt{PreStatus} that abstracts over the implementation of code
memory in order to reuse the same datatype for the semantics of the assembly
language in the next section.
The execution of a single instruction is performed by the \texttt{execute\_1}
function, parametric over the content \texttt{cm} of the code memory:
\begin{lstlisting}
definition execute_1: $\forall$cm. Status cm $\rightarrow$ Status cm
\end{lstlisting}
The function \texttt{execute\_1} closely matches the fetch-decode-execute
cycle of the MCS-51 hardware, as described in the manufacturer's data sheet from Siemens~\cite{siemens:2011}.
Fetching and decoding are performed simultaneously:
using the program counter, we first fetch from code memory the first byte of the instruction to be executed, then decode the resulting opcode, fetching further bytes as necessary to decode the arguments.
Decoded instructions are represented by the \texttt{instruction} data type
which extends a data type of \texttt{preinstruction}s that will be reused
for the assembly language.
\begin{lstlisting}
inductive preinstruction (A: Type[0]): Type[0] :=
| ADD: $\llbracket$acc_a$\rrbracket$ $\rightarrow$ $\llbracket$registr; direct; indirect; data$\rrbracket$ $\rightarrow$ preinstruction A
| DEC: $\llbracket$acc_a; registr; direct; indirect$\rrbracket$ $\rightarrow$ preinstruction A
| JB: $\llbracket$bit_addr$\rrbracket$ $\rightarrow$ A $\rightarrow$ preinstruction A
| ...
inductive instruction: Type[0] :=
| LCALL: $\llbracket$addr16$\rrbracket$ $\rightarrow$ instruction
| AJMP: $\llbracket$addr11$\rrbracket$ $\rightarrow$ instruction
| RealInstruction: preinstruction $\llbracket$relative$\rrbracket$ $\rightarrow$ instruction
| ...
\end{lstlisting}
The MCS-51 has many operand modes, but an unorthogonal instruction set: every
opcode is only enabled for a finite subset of the possible operand modes.
Here we exploit dependent types and an implicit coercion to synthesise
the type of arguments of opcodes from a vector of names of operand modes.
For example, \texttt{ADD} has two operands, the first one constrained to be
the \texttt{A} accumulator, and the second one to be a disjoint union of
register, direct, indirect and data operand modes.
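Informally (this is our reading of the notation, not Matita syntax; in the formalisation the type is synthesised from the vector of operand-mode names via dependent types and a coercion), the bracketed vector denotes the type of operands drawn from exactly those modes:
\begin{displaymath}
\llbracket \mathtt{registr};\; \mathtt{direct};\; \mathtt{indirect};\; \mathtt{data} \rrbracket
\;\approx\;
\mathtt{registr} + \mathtt{direct} + \mathtt{indirect} + \mathtt{data}
\end{displaymath}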
The parameterised type $A$ of \texttt{preinstruction} represents the addressing mode allowed for conditional jumps; in the \texttt{RealInstruction} constructor
we constrain it to be a relative offset. A different instantiation will be
used in the next Section for assembly programs.
Once decoded, execution proceeds by a case analysis on the decoded instruction, following the operation of the hardware.
For example, the \texttt{DEC} preinstruction (`decrement') is executed as follows:
\begin{lstlisting}
| DEC addr $\Rightarrow$
let s := add_ticks1 s in
let $\langle$result, flags$\rangle$ := sub_8_with_carry (get_arg_8 s true addr)
(bitvector_of_nat 8 1) false in
set_arg_8 s addr result
\end{lstlisting}
Here, \texttt{add\_ticks1} models the incrementing of the internal clock of the microprocessor; it is a parameter of the semantics of \texttt{preinstruction}s
that is fixed in the semantics of \texttt{instruction}s according to the
manufacturer's data sheet.
% ---------------------------------------------------------------------------- %
% SECTION %
% ---------------------------------------------------------------------------- %
\subsection{Assembly code and its semantics}
\label{subsect.assembly.code.semantics}
An assembly program is a list of potentially labelled pseudoinstructions, bundled with a preamble consisting of a list of symbolic names for locations in data memory (i.e. global variables).
All preinstructions are pseudoinstructions, but conditional jumps are now
only allowed to use \texttt{Identifiers} (labels) as their target.
\begin{lstlisting}
inductive pseudo_instruction: Type[0] :=
| Instruction: preinstruction Identifier $\rightarrow$ pseudo_instruction
...
| Jmp: Identifier $\rightarrow$ pseudo_instruction
| Call: Identifier $\rightarrow$ pseudo_instruction
| Mov: $\llbracket$dptr$\rrbracket$ $\rightarrow$ Identifier $\rightarrow$ pseudo_instruction.
\end{lstlisting}
The pseudoinstructions \texttt{Jmp}, \texttt{Call} and \texttt{Mov} are generalisations of machine code unconditional jumps, calls and move instructions respectively, all of which act on labels, as opposed to concrete memory addresses.
Object code calls and jumps that act on concrete memory addresses are ruled
out of assembly programs by not being included in the preinstructions (see the previous
Section).
Execution of pseudoinstructions is an endofunction on \texttt{PseudoStatus}.
A \texttt{PseudoStatus} is an instance of \texttt{PreStatus} that differs
from a \texttt{Status} only in the datatype used for code memory: a list
of optionally labelled pseudoinstructions versus a trie of bytes.
The \texttt{PreStatus} type is crucial for sharing the majority of the
semantics of the two languages.
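The following sketch conveys the idea; the parameter and field names below are ours and the record is heavily simplified with respect to the formalisation:
\begin{lstlisting}
record PreStatus (M: Type[0]) (cm: M): Type[0] :=
{ program_counter: Word;
  clock: nat;
  ... (* registers, stack pointer, special function registers, internal and external RAM *)
}.
definition Status := PreStatus (BitVectorTrie Byte 16).
definition PseudoStatus := PreStatus pseudo_assembly_program.
\end{lstlisting}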
Emulation for pseudoinstructions is handled by \texttt{execute\_1\_pseudo\_instruction}:
\begin{lstlisting}
definition execute_1_pseudo_instruction:
$\forall$cm. $\forall$costing:($\forall$ppc: Word. ppc < $\mid$snd cm$\mid$ $\rightarrow$ nat $\times$ nat).
$\forall$s:PseudoStatus cm. program_counter s < $\mid$snd cm$\mid$ $\rightarrow$ PseudoStatus cm
\end{lstlisting}
The type of \texttt{execute\_1\_pseudo\_instruction} is more involved than
that of \texttt{execute\_1}. The first difference is that execution is only
defined when the program counter points to a valid instruction, i.e.
it is smaller than the length $\mid$\texttt{snd cm}$\mid$ of the program.
The second difference is the abstraction over the cost model, abbreviated
here as \emph{costing}.
The costing is a function that maps valid program counters to pairs of natural numbers representing the number of clock ticks used by the pseudoinstructions stored at those program counters. For conditional jumps the two numbers differ
to represent different costs for the `true branch' and the `false branch'.
In the next Section we will see how the optimising
assembler induces the only costing that is preserved by compilation.
Obviously the induced costing is determined by the branch displacement policy
that decides how to expand every pseudojump to a label into concrete
instructions.
Execution proceeds by first fetching from pseudo-code memory using the program counter---treated as an index into the pseudoinstruction list.
This index is always guaranteed to be within the bounds of the pseudoinstruction list due to the dependent type placed on the function.
No decoding is required.
We then proceed by case analysis over the pseudoinstruction, reusing the object code semantics for all instructions present in the MCS-51's instruction set.
For all newly introduced pseudoinstructions, we simply translate labels to concrete addresses before behaving as a `real' instruction.
We do not perform any kind of symbolic execution, wherein data is the disjoint union of bytes and addresses, with addresses kept opaque and immutable.
Labels are immediately translated to concrete addresses, and registers and memory locations only ever contain bytes, never labels.
As a consequence, we allow the programmer to mangle, change and generally adjust addresses as they want, under the proviso that the translation process may not be able to preserve the semantics of programs that do this.
The only limitation introduced by this approach is that the size of
assembly programs is bounded by $2^{16}$.
This will be further discussed in Subsection~\ref{subsect.total.correctness.for.well.behaved.assembly.programs}.
% ---------------------------------------------------------------------------- %
% SECTION %
% ---------------------------------------------------------------------------- %
\subsection{The assembler}
\label{subsect.the.assembler}
Conceptually the assembler works in two passes.
The first pass expands every pseudoinstruction into a list of machine code instructions using the function \texttt{expand\_pseudo\_instruction}.
The second pass encodes the expanded instruction list as a list of bytes by mapping the function \texttt{assembly1} across the list and then flattening.
The resulting list of bytes is ready to be loaded into code memory
for execution.
\begin{displaymath}
\hspace{-0.5cm}
\mbox{\fontsize{7}{9}\selectfont$[\mathtt{P_1}, \ldots \mathtt{P_n}]$} \underset{\mbox{\fontsize{7}{9}\selectfont$\mathtt{assembly}$}}{\xrightarrow{\left(P_i \underset{\mbox{\fontsize{7}{9}\selectfont$\mathtt{assembly\_1\_pseudo\_instruction}$}}{\xrightarrow{\mathtt{P_i} \xrightarrow{\mbox{\fontsize{7}{9}\selectfont$\mathtt{expand\_pseudo\_instruction}$}} \mathtt{[I^1_i, \ldots I^q_i]} \xrightarrow{\mbox{\fontsize{7}{9}\selectfont$\mathtt{~~~~~~~~assembly1^{*}~~~~~~~~}$}} \mathtt{[0110]}}} \mathtt{[0110]}\right)^{*}}} \mbox{\fontsize{7}{9}\selectfont$\mathtt{[\ldots0110\ldots]}$}
\end{displaymath}
The most complex of the two passes is the first, which expands pseudoinstructions and must perform the task of branch displacement~\cite{hyde:branch:2006}.
The function \texttt{assembly\_1\_pseudoinstruction} used in the body of the paper is essentially the composition of the two passes.
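A sketch of this composition follows (our simplification: the label-lookup arguments are elided, and the names merely follow the lemmas given later in this section):
\begin{lstlisting}
(* Illustrative sketch: expand one pseudoinstruction, encode each resulting
   instruction with assembly1, flatten, and return the length of the encoding
   together with the bytes. *)
definition assembly_1_pseudoinstruction :=
  $\lambda$policy, ppc, pi.
   let instructions := expand_pseudo_instruction ... policy ppc ... pi in
   let bytes := flatten (map assembly1 instructions) in
   $\langle$$\mid$bytes$\mid$, bytes$\rangle$.
\end{lstlisting}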
The branch displacement problem refers to the task of expanding pseudojumps into their concrete counterparts, preferably as efficiently as possible.
For instance, the MCS-51 features three unconditional jump instructions: \texttt{LJMP} and \texttt{SJMP}---`long jump' and `short jump' respectively---and an 11-bit oddity of the MCS-51, \texttt{AJMP}.
Each of these three instructions expects arguments in different sizes and behaves in markedly different ways: \texttt{SJMP} may only perform a `local jump'; \texttt{LJMP} may jump to any address in the MCS-51's memory space and \texttt{AJMP} may jump to any address in the current memory page.
Consequently, the size of each opcode is different, and to squeeze as much code as possible into the MCS-51's limited code memory, the smallest possible opcode that will suffice should be selected.
Similarly, a conditional pseudojump may need to be translated into a configuration of several machine code instructions, depending on the distance to the jump's target.
For example, to translate a jump to a label, a single conditional jump pseudoinstruction may be translated into a block of three real instructions as follows (here, \texttt{JZ} is `jump if accumulator is zero'):
{\small{
\begin{displaymath}
\begin{array}{r@{\quad}l@{\;\;}l@{\qquad}c@{\qquad}l@{\;\;}l}
& \mathtt{JZ} & \mathtt{label} & & \mathtt{JZ} & \text{size of \texttt{SJMP} instruction} \\
& \ldots & & \text{translates to} & \mathtt{SJMP} & \text{size of \texttt{LJMP} instruction} \\
\mathtt{label:} & \mathtt{MOV} & \mathtt{A}\;\;\mathtt{B} & \Longrightarrow & \mathtt{LJMP} & \text{address of \textit{label}} \\
& & & & \ldots & \\
& & & & \mathtt{MOV} & \mathtt{A}\;\;\mathtt{B}
\end{array}
\end{displaymath}}}
Here, if \texttt{JZ} fails, we fall through to the \texttt{SJMP} which jumps over the \texttt{LJMP}.
Naturally, if \texttt{label} is `close enough', a conditional jump pseudoinstruction is mapped directly to a conditional jump machine instruction; the above translation only applies if \texttt{label} is not sufficiently local.
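This example also shows the shape of the cost pairs mentioned in Section~\ref{subsect.assembly.code.semantics}.
Under the expansion above, when the accumulator is zero the \texttt{JZ} skips the \texttt{SJMP} and the \texttt{LJMP} is executed, whereas when it is non-zero the \texttt{JZ} falls through to the \texttt{SJMP}, which skips the \texttt{LJMP}; the induced costs of the two branches are therefore, roughly,
\begin{displaymath}
\mathit{ticks}(\mathtt{JZ}) + \mathit{ticks}(\mathtt{LJMP})
\qquad\text{and}\qquad
\mathit{ticks}(\mathtt{JZ}) + \mathit{ticks}(\mathtt{SJMP})
\end{displaymath}
while a locally expanded conditional jump is simply assigned the costs of the corresponding machine instruction.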
When implementing branch displacement it is impossible to make the \texttt{expand\_pseudo\_instruction} function completely independent of the encoding function.
This is because branch displacement requires the distance in bytes to the target of the jump.
Moreover the standard solutions for solving the branch displacement problem find their solutions iteratively, by either starting from a solution where all jumps are long, and shrinking them when possible, or starting from a state where all jumps are short and increasing their length as needed.
Proving the correctness of such algorithms is already quite involved and the correctness of the assembler as a whole does not depend on the `quality' of the solution found to a branch displacement problem.
For this reason, we try to isolate the computation of a solution to the branch displacement problem from the proof of correctness for the assembler by parameterising our \texttt{expand\_pseudo\_instruction} by a `policy'.
\begin{lstlisting}
definition expand_pseudo_instruction:
$\forall$lookup_labels: Identifier $\rightarrow$ Word.
$\forall$policy.
$\forall$ppc: Word.
$\forall$lookup_datalabels: Identifier $\rightarrow$ Word.
$\forall$pi: pseudo_instruction.
list instruction := ...
\end{lstlisting}
Here, the functions \texttt{lookup\_labels} and \texttt{lookup\_datalabels} are the functions that map labels and datalabels to program counters respectively, both of them used in the semantics of assembly.
The input \texttt{pi} is the pseudoinstruction to be expanded and is found at address \texttt{ppc} in the assembly program.
The function takes \texttt{policy} as an input.
In reality, this is a pair of functions, but for the purposes of this paper we simplify.
The \texttt{policy} maps pseudo-program counters to program counters: the encoding of the expansion of the pseudoinstruction found at address \texttt{a} in the assembly code should be placed into code memory at address \texttt{policy(a)}.
Of course this is possible only if the policy is correct, which means that the encoding of consecutive assembly instructions must be consecutive in code memory:
\begin{displaymath}
\texttt{policy}(\texttt{ppc} + 1) = \texttt{pc} + \texttt{current\_instruction\_size}
\end{displaymath}
Here, \texttt{current\_instruction\_size} is the size in bytes of the encoding of the expanded pseudoinstruction found at \texttt{ppc}.
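For instance (the sizes are those of the MCS-51 encoding: \texttt{SJMP} occupies two bytes and \texttt{LJMP} three), if the pseudoinstruction at \texttt{ppc} is a pseudojump expanded to a single \texttt{SJMP} then
\begin{displaymath}
\texttt{policy}(\texttt{ppc} + 1) = \texttt{policy}(\texttt{ppc}) + 2
\end{displaymath}
whereas an expansion to a single \texttt{LJMP} gives $\texttt{policy}(\texttt{ppc} + 1) = \texttt{policy}(\texttt{ppc}) + 3$.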
Note that the entanglement we hinted at is only partially solved in this way: the assembler code can ignore the implementation details of the algorithm that finds a policy;
however, the algorithm that finds a policy must know the exact behaviour of the assembly program because it needs to predict the way the assembly will expand and encode pseudoinstructions, once fed with a policy.
A companion submission to this one~\cite{boender:correctness:2012} certifies an algorithm that finds branch displacement policies for the assembler described in this paper.
The \texttt{expand\_pseudo\_instruction} function uses the \texttt{policy} map to determine the size of jump required when expanding pseudojumps, computing the jump size by examining the size of the differences between program counters.
For instance, suppose that at address \texttt{ppc} in the assembly program we find \texttt{Jmp l} with \texttt{lookup\_labels l = a}. If the offset \texttt{d = policy(a) - policy(ppc + 1)} is such that \texttt{d} $< 128$ then \texttt{Jmp l} is normally translated to the best local solution, the short jump \texttt{SJMP d}.
A global best solution to the branch displacement problem, however, is not always made of locally best solutions.
Therefore, in some circumstances, it is necessary to force the assembler to expand jumps into larger ones.
This is achieved by a second boolean-valued function: if applying it to \texttt{ppc} returns true, then a \texttt{Jmp l} at address \texttt{ppc} is always translated to a long jump.
An essentially identical mechanism exists for call instructions.
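Putting the pieces together, the expansion of an unconditional pseudojump can be summarised as follows (this is our notation: \texttt{force\_long} names the boolean-valued function just described, and we omit the intermediate \texttt{AJMP} case):
\begin{displaymath}
\mathtt{Jmp}\;\mathtt{l} \;\longmapsto\;
\begin{cases}
[\mathtt{SJMP}\;\mathtt{d}] & \text{if \texttt{force\_long ppc} is false and \texttt{d} fits in a signed byte} \\
[\mathtt{LJMP}\;(\texttt{policy a})] & \text{otherwise}
\end{cases}
\end{displaymath}
where $\mathtt{a} = \texttt{lookup\_labels l}$ and $\mathtt{d} = \texttt{policy a} - \texttt{policy}(\mathtt{ppc} + 1)$.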
% ---------------------------------------------------------------------------- %
% SECTION %
% ---------------------------------------------------------------------------- %
\subsection{Correctness of the assembler with respect to fetching}
\label{subsect.total.correctness.of.the.assembler}
Using our policies, we now work toward proving the correctness of the assembler.
Correctness means that the assembly process never fails when provided with a correct policy and that the process does not change the semantics of a certain class of well-behaved assembly programs.
The aim of this section is to prove the following informal statement: when we fetch an assembly pseudoinstruction \texttt{I} at address \texttt{ppc}, then the sequence of machine instructions \texttt{[J1, \ldots, Jn]} that the policy expands \texttt{I} into can be fetched, starting from address \texttt{policy ppc}, from the code memory obtained by loading the assembled object code.
This constitutes the first major step in the proof of correctness of the assembler, the next one being the simulation of \texttt{I} by \texttt{[J1, \ldots, Jn]} (see Subsection~\ref{subsect.total.correctness.for.well.behaved.assembly.programs}).
The \texttt{assembly} function is given a Russell type (slightly simplified here):
\begin{lstlisting}
definition assembly:
$\forall$program: pseudo_assembly_program.
$\forall$policy.
$\Sigma$assembled: list Byte $\times$ (BitVectorTrie costlabel 16).
policy is correct for program $\rightarrow$
$\mid$program$\mid$ < $2^{16}$ $\rightarrow$ $\mid$fst assembled$\mid$ < $2^{16}$ $\wedge$
(policy ($\mid$program$\mid$) = $\mid$fst assembled$\mid$ $\vee$
(policy ($\mid$program$\mid$) = 0 $\wedge$ $\mid$fst assembled$\mid$ = $2^{16}$)) $\wedge$
$\forall$ppc: pseudo_program_counter. ppc < $2^{16}$ $\rightarrow$
let pseudo_instr := fetch from program at ppc in
let assembled_i := assemble pseudo_instr in
$\mid$assembled_i$\mid$ $\leq$ $2^{16}$ $\wedge$
$\forall$n: nat. n < $\mid$assembled_i$\mid$ $\rightarrow$ $\exists$k: nat.
nth assembled_i n = nth assembled (policy ppc + k).
\end{lstlisting}
In plain words, the type of \texttt{assembly} states the following.
Suppose we are given a policy that is correct for the program we are assembling.
Then we return a list of assembled bytes, complete with a map from program counters to cost labels, such that the following properties hold for the list of bytes.
Under the condition that the policy is `correct' for the program and the program is fully addressable by a 16-bit word, the assembled list is also fully addressable by a 16-bit word, and the policy maps the pseudo-program counter just past the end of the program either to the address just past the end of the assembled list, or to zero in the case that the assembled list exactly fills the code memory. Moreover, fetching from any valid pseudo-program counter \texttt{ppc} yields a pseudoinstruction \texttt{pseudo\_instr} and an updated pseudo-program counter.
Further, assembling the pseudoinstruction \texttt{pseudo\_instr} results in a list of bytes, \texttt{assembled\_i}.
Then, indexing into this list with any natural number \texttt{n} less than the length of \texttt{assembled\_i} gives the same result as indexing into \texttt{assembled} with \texttt{policy ppc} (the program counter pointing to the start of the expansion in \texttt{assembled}) plus \texttt{k}.
Essentially the lemma above states that the \texttt{assembly} function correctly expands pseudoinstructions, and that the expanded instructions reside consecutively in memory.
This result is lifted from lists of bytes into a result on tries of bytes (i.e. code memories), using an additional lemma: \texttt{assembly\_ok}.
Lemma \texttt{fetch\_assembly} establishes that the \texttt{fetch} and \texttt{assembly1} functions interact correctly.
The \texttt{fetch} function, as its name implies, fetches the instruction indexed by the program counter in the code memory, while \texttt{assembly1} maps a single instruction to its byte encoding:
\begin{lstlisting}
lemma fetch_assembly:
$\forall$pc: Word.
$\forall$i: instruction.
$\forall$code_memory: BitVectorTrie Byte 16.
$\forall$assembled: list Byte.
assembled = assemble i $\rightarrow$
let len := $\mid$assembled$\mid$ in
let pc_plus_len := pc + len in
encoding_check pc pc_plus_len assembled $\rightarrow$
let $\langle$instr, pc', ticks$\rangle$ := fetch pc in
instr = i $\wedge$ ticks = (ticks_of_instruction instr) $\wedge$ pc' = pc_plus_len.
\end{lstlisting}
We read \texttt{fetch\_assembly} as follows.
Given an instruction, \texttt{i}, we first assemble the instruction to obtain \texttt{assembled}, checking that the assembled instruction was stored in code memory correctly.
Fetching from code memory, we obtain a tuple consisting of the instruction, new program counter, and the number of ticks this instruction will take to execute.
We finally check that the fetched instruction is the same instruction that we began with, and the number of ticks this instruction will take to execute is the same as the result returned by a lookup function, \texttt{ticks\_of\_instruction}, devoted to tracking this information.
Or, in plainer words, assembling and then immediately fetching again gets you back to where you started.
Lemma \texttt{fetch\_assembly\_pseudo} is obtained by composition of \texttt{expand\_pseudo\_instruction} and \texttt{assembly\_1\_pseudoinstruction}:
\begin{lstlisting}
lemma fetch_assembly_pseudo:
$\forall$program: pseudo_assembly_program.
$\forall$policy.
$\forall$ppc.
$\forall$code_memory.
let $\langle$preamble, instr_list$\rangle$ := program in
let pi := $\pi_1$ (fetch_pseudo_instruction instr_list ppc) in
let pc := policy ppc in
let instrs := expand_pseudo_instruction policy ppc pi in
let $\langle$l, a$\rangle$ := assembly_1_pseudoinstruction policy ppc pi in
let pc_plus_len := pc + l in
encoding_check code_memory pc pc_plus_len a $\rightarrow$
fetch_many code_memory pc_plus_len pc instrs.
\end{lstlisting}
Here, \texttt{l} is the length in bytes of the encoding \texttt{a} of the expanded pseudoinstruction.
We assemble a single pseudoinstruction with \texttt{assembly\_1\_pseudoinstruction}, which internally calls \texttt{expand\_pseudo\_instruction}.
The function \texttt{fetch\_many} fetches multiple machine code instructions from code memory and performs some routine checks.
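A simplified sketch of such a predicate (ours, not the formalisation's exact definition) is the following:
\begin{lstlisting}
(* Sketch: fetching the expected list of instructions starting at pc must end
   exactly at pc_final, with each fetch returning the corresponding expected
   instruction. *)
let rec fetch_many (cm: BitVectorTrie Byte 16) (pc_final: Word) (pc: Word)
                   (expected: list instruction) on expected: Prop :=
  match expected with
  [ nil $\Rightarrow$ pc = pc_final
  | cons i tl $\Rightarrow$
      let $\langle$instr, pc', ticks$\rangle$ := fetch cm pc in
      instr = i $\wedge$ fetch_many cm pc_final pc' tl
  ].
\end{lstlisting}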
Intuitively, Lemma \texttt{fetch\_assembly\_pseudo} can be read as follows.
Suppose we expand the pseudoinstruction at \texttt{ppc} with the policy, obtaining the list of machine code instructions \texttt{instrs}.
Suppose we also assemble the pseudoinstruction at \texttt{ppc} to obtain \texttt{a}, a list of bytes.
Then, given that the bytes \texttt{a} are stored in code memory starting at \texttt{policy ppc}, \texttt{fetch\_many} guarantees that fetching from that address returns exactly the instructions \texttt{instrs} produced by \texttt{expand\_pseudo\_instruction}.
The final lemma in this series is \texttt{fetch\_assembly\_pseudo2} that combines the Lemma \texttt{fetch\_assembly\_pseudo} with the correctness of the functions that load object code into the processor's memory:
\begin{lstlisting}
lemma fetch_assembly_pseudo2:
$\forall$program.
$\mid$snd program$\mid$ $\leq$ $2^{16}$ $\rightarrow$
$\forall$policy.
policy is correct for program $\rightarrow$
$\forall$ppc. ppc < $\mid$snd program$\mid$ $\rightarrow$
let $\langle$labels, costs$\rangle$ := create_label_cost_map program in
let $\langle$assembled, costs'$\rangle$ := $\pi_1$ (assembly program policy) in
let cmem := load_code_memory assembled in
let $\langle$pi, newppc$\rangle$ := fetch_pseudo_instruction program ppc in
let instructions := expand_pseudo_instruction policy ppc pi in
fetch_many cmem (policy newppc) (policy ppc) instructions.
\end{lstlisting}
Here we use $\pi_1$ to project the existential witness from the Russell-typed function \texttt{assembly}.
We read \texttt{fetch\_assembly\_pseudo2} as follows.
Suppose we are given an assembly program which can be addressed by a 16-bit word and a policy that is correct for this program.
Suppose we are able to successfully assemble an assembly program using \texttt{assembly} and produce a code memory, \texttt{cmem}.
Then, fetching a pseudoinstruction from the pseudo-code memory at address \texttt{ppc} corresponds to fetching a sequence of instructions from the real code memory using \texttt{policy} to expand pseudoinstructions.
The fetched sequence corresponds to the expansion, according to the policy, of the pseudoinstruction.
At first, the lemma appears to immediately imply the correctness of the assembler, but this property is \emph{not} strong enough to establish that the semantics of an assembly program has been preserved by the assembly process since it does not establish the correspondence between the semantics of a pseudoinstruction and that of its expansion.
In particular, the two semantics differ on instructions that \emph{could} directly manipulate program addresses.
% ---------------------------------------------------------------------------- %
% SECTION %
% ---------------------------------------------------------------------------- %
\subsection{Correctness for `well-behaved' assembly programs}
\label{subsect.total.correctness.for.well.behaved.assembly.programs}
The traditional approach to verifying the correctness of an assembler is to treat memory addresses as opaque structures that cannot be modified.
Memory is represented as a map from opaque addresses to the disjoint union of data and opaque addresses---addresses are kept opaque to prevent their possible `semantics breaking' manipulation by assembly programs:
\begin{displaymath}
\mathtt{Mem} : \mathtt{Addr} \rightarrow \mathtt{Bytes} + \mathtt{Addr} \qquad \llbracket - \rrbracket : \mathtt{Instr} \rightarrow \mathtt{Mem} \rightarrow \mathtt{option\ Mem}
\end{displaymath}
The semantics of a pseudoinstruction, $\llbracket - \rrbracket$, is given as a possibly failing function from pseudoinstructions and memory spaces to new memory spaces.
The semantic function proceeds by case analysis over the operands of a given instruction, failing if either operand is an opaque address, or otherwise succeeding, updating memory.
\begin{gather*}
\llbracket \mathtt{ADD\ @A1\ @A2} \rrbracket^\mathtt{M} = \begin{cases}
\mathtt{Byte\ b1},\ \mathtt{Byte\ b2} & \rightarrow \mathtt{Some}(\mathtt{M}\ \text{with}\ \mathtt{b1} + \mathtt{b2}) \\
-,\ \mathtt{Addr\ a} & \rightarrow \mathtt{None} \\
\mathtt{Addr\ a},\ - & \rightarrow \mathtt{None}
\end{cases}
\end{gather*}
In this paper we take a different approach, tracing memory locations (and accumulators) that contain memory addresses.
We prove that only those assembly programs that use addresses in `safe' ways have their semantics preserved by the assembly process---a sort of dynamic type system sitting atop memory.
In principle this approach allows us to introduce some permitted \emph{benign} manipulations of addresses that the traditional approach, using opaque addresses, cannot handle, therefore expanding the set of input programs that can be assembled correctly.
This approach seems similar to one taken by Tuch \emph{et al}~\cite{tuch:types:2007} for reasoning about low-level C code.
Our analogue of the semantic function above is merely a wrapper around the function that implements the semantics of machine code, paired with a function that keeps track of addresses.
The semantics of pseudo- and machine code are then essentially shared.
The only thing that changes at the assembly level is the presence of the new tracking function.
However, with this approach we must detect (at run time) programs that manipulate addresses in well-behaved ways, according to some approximation of well-behavedness.
We use an \texttt{internal\_pseudo\_address\_map} to trace code memory addresses stored in internal RAM:
\begin{lstlisting}
definition address_entry := upper_lower $\times$ Byte.
definition internal_pseudo_address_map :=
(BitVectorTrie address_entry 7) $\times$ (BitVectorTrie address_entry 7)
$\times$ (option address_entry).
\end{lstlisting}
Here, \texttt{upper\_lower} is a type isomorphic to the booleans denoting whether a byte value is the upper or lower byte of some 16-bit address.
The implementation of \texttt{internal\_pseudo\_address\_map} is complicated by some peculiarities of the MCS-51's instruction set.
Note here that all addresses are 16-bit words, but are stored (and manipulated) as 8-bit bytes.
All \texttt{MOV} instructions in the MCS-51 must use the accumulator \texttt{A} as an intermediary, moving a byte at a time.
The third component of \texttt{internal\_pseudo\_address\_map} therefore states whether the accumulator currently holds a piece of an address, and if so, whether it is the upper or lower byte of the address (using the \texttt{upper\_lower} flag) complete with the corresponding source address in full.
The first and second components, on the other hand, perform a similar task for the lower and upper portions of the internal RAM.
Again, we use our \texttt{upper\_lower} flag to describe whether a byte is the upper or lower component of a 16-bit address.
The \texttt{low\_internal\_ram\_of\_pseudo\_low\_internal\_ram} function converts the lower internal RAM of a \texttt{PseudoStatus} into the lower internal RAM of a \texttt{Status}.
A similar function exists for high internal RAM.
Note that both RAM segments are indexed using addresses 7-bits long:
\begin{lstlisting}
definition low_internal_ram_of_pseudo_low_internal_ram:
internal_pseudo_address_map $\rightarrow$ policy $\rightarrow$ BitVectorTrie Byte 7
$\rightarrow$ BitVectorTrie Byte 7.
\end{lstlisting}
Next, we are able to translate \texttt{PseudoStatus} records into \texttt{Status} records using \texttt{status\_of\_pseudo\_status}.
Translating a \texttt{PseudoStatus}'s code memory requires that we expand pseudoinstructions and then assemble them to obtain a trie of bytes.
This never fails, provided that our policy is correct:
\begin{lstlisting}
definition status_of_pseudo_status:
internal_pseudo_address_map $\rightarrow$ $\forall$pap. $\forall$ps: PseudoStatus pap.
$\forall$policy. Status (code_memory_of_pseudo_assembly_program pap policy)
\end{lstlisting}
The \texttt{next\_internal\_pseudo\_address\_map} function is responsible for run time monitoring of the behaviour of assembly programs, in order to detect well-behaved ones.
It returns a map that traces memory addresses in internal RAM after execution of the next pseudoinstruction, failing when the instruction tampers with memory addresses in unanticipated (but potentially correct) ways.
It thus decides the membership of a strict subset of the set of well-behaved programs.
\begin{lstlisting}
definition next_internal_pseudo_address_map: internal_pseudo_address_map $\rightarrow$
$\forall$cm. (Identifier $\rightarrow$ PseudoStatus cm $\rightarrow$ Word) $\rightarrow$ $\forall$s: PseudoStatus cm.
program_counter s < $2^{16}$ $\rightarrow$ option internal_pseudo_address_map
\end{lstlisting}
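For instance (an informal illustration, not drawn verbatim from the formalisation): a program that stores the two bytes of a return address in internal RAM---for example through the stack manipulation performed by a \texttt{Call}---and later uses them unchanged is tracked and accepted, whereas a program that performs arithmetic on those bytes before using them as a jump target causes the function to return \texttt{None}, even if, for a particular policy, the manipulation would be harmless.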
If we wished to allow `benign manipulations' of addresses, it is this function that would need to be changed.
Note that we once again use dependent types to ensure that program counters are properly within bounds.
The third argument is a function that resolves the concrete address of a label.
The function \texttt{ticks\_of0} computes how long---in clock cycles---a pseudoinstruction will take to execute when expanded in accordance with a given policy.
The function returns a pair of natural numbers, needed for recording the execution times of each branch of a conditional jump.
\begin{lstlisting}
definition ticks_of0:
pseudo_assembly_program $\rightarrow$ (Identifier $\rightarrow$ Word) $\rightarrow$ $\forall$policy. Word $\rightarrow$
pseudo_instruction $\rightarrow$ nat $\times$ nat
\end{lstlisting}
An additional function, \texttt{ticks\_of}, is merely a wrapper around this function.
Finally, we are able to state and prove our main theorem, relating the execution of a single assembly instruction and the execution of (possibly) many machine code instructions, as long as we are able to track memory addresses properly:
\begin{lstlisting}
theorem main_thm:
$\forall$M, M': internal_pseudo_address_map.
$\forall$program: pseudo_assembly_program.
$\forall$program_in_bounds: $\mid$program$\mid$ $\leq$ $2^{16}$.
let maps := create_label_cost_map program in
let addr_of := ... in
program is well labelled $\rightarrow$
$\forall$policy. policy is correct for program.
$\forall$ps: PseudoStatus program. ps < $\mid$program$\mid$.
next_internal_pseudo_address_map M program ... = Some M' $\rightarrow$
$\exists$n. execute n (status_of_pseudo_status M ps policy) =
status_of_pseudo_status M'
(execute_1_pseudo_instruction program
(ticks_of program ($\lambda$id. addr_of id ps) policy) ps) policy.
\end{lstlisting}
The statement is standard for forward simulation, but restricted to \texttt{PseudoStatuses} \texttt{ps} whose next instruction to be executed is well-behaved with respect to the \texttt{internal\_pseudo\_address\_map} \texttt{M}.
We explicitly require proof that the policy is correct, the program is well-labelled (i.e. no repeated labels, etc.) and the pseudo-program counter is in the program's bounds.
Theorem \texttt{main\_thm} establishes the correctness of the assembly process and can be lifted to the forward simulation of an arbitrary number of well-behaved steps on the assembly program.
% ---------------------------------------------------------------------------- %
% SECTION %
% ---------------------------------------------------------------------------- %
\section{Conclusions}
\label{sect.conclusions}
We are proving the correctness of an assembler for MCS-51 assembly language.
Our assembly language features labels, arbitrary conditional and unconditional jumps to labels, global data and instructions for moving this data into the MCS-51's single 16-bit register.
Expanding these pseudoinstructions into machine code instructions is not trivial, and the proof that the assembly process is `correct', in that the semantics of a subset of assembly programs are not changed, is complex.
The formalisation is a component of CerCo which aims to produce a verified concrete complexity preserving compiler for a large subset of the C language.
The verified assembler, complete with the underlying formalisation of the semantics of MCS-51 machine code, will form the bedrock layer upon which the rest of CerCo will build its verified compiler platform.
It is interesting to compare our work to an `industrial grade' assembler for the MCS-51: SDCC~\cite{sdcc:2011}.
SDCC is the only open source C compiler that targets the MCS-51 instruction set.
It appears that all pseudojumps in SDCC assembly are expanded to \texttt{LJMP} instructions, the worst possible jump expansion policy from an efficiency point of view.
Note that this policy is, \emph{in theory}, the only one that can preserve the semantics of every assembly program during the assembly process.
However, this comes at the expense of assembler completeness: the generated program may be too large to fit into code memory.
In this respect, there is a trade-off between the completeness of the assembler and the efficiency of the assembled program.
The definition and proof of a terminating, correct jump expansion policy is described in a companion publication to this one~\cite{boender:correctness:2012}.
Aside from their application in verified compiler projects such as CerCo, CompCert~\cite{leroy:formally:2009} and CompCertTSO~\cite{sevcik:relaxed-memory:2011}, verified assemblers such as ours could also be applied to the verification of operating system kernels.
Of particular note is the verified seL4 kernel~\cite{klein:sel4:2009}.
This verification explicitly assumes the existence of, amongst other things, a trustworthy assembler and compiler.
CompCert, CompCertTSO and the seL4 formalisation assume the existence of `trustworthy' assemblers.
For instance, the CompCert C compiler's default backend stops at the PowerPC assembly language.
The observation that an optimising assembler cannot preserve the semantics of every assembly program may have consequences for these projects (though in the case of CompCertTSO, targeting a multiprocessor, what exactly constitutes the subset of `good programs' may not be entirely clear).
If CompCert chooses to assume the existence of an optimising assembler, then care should be taken to ensure that any assembly program produced by the CompCert compiler falls into the subset of programs that can have their semantics preserved by an optimising assembler.
Our formalisation exploits dependent types in different ways and for multiple purposes.
The first purpose is to reduce potential errors in the formalisation of the microprocessor.
Dependent types are used to constrain the size of bitvectors and tries that represent memory quantities and memory areas respectively.
They are also used to simulate polymorphic variants in Matita, in order to provide precise typings to various functions expecting only a subset of all possible addressing modes that the MCS-51 offers.
Polymorphic variants nicely capture the absolutely unorthogonal instruction set of the MCS-51 where every opcode must accept its own subset of the 11 addressing modes of the processor.
The second purpose is to single out sources of incompleteness.
By abstracting our functions over the dependent type of correct policies, we were able to manifest the fact that the compiler never refuses to compile a program where a correct policy exists.
This also allowed us to simplify the initial proof by dropping lemmas establishing that one function fails if and only if some previous function does so.
Finally, dependent types, together with Matita's liberal system of coercions, allow us to simulate almost entirely in user space the proof methodology `Russell' of Sozeau~\cite{sozeau:subset:2006}.
Not every proof has been carried out in this way: we only used this style to prove that a function satisfies a specification that only involves that function in a significant way.
It would not be natural to see the proof that fetch and assembly commute as the specification of one of the two functions.
%\paragraph{Related work}
% piton
We are not the first to consider the correctness of an assembler for a non-trivial assembly language.
The most impressive piece of work in this domain is Piton~\cite{moore:piton:1996}, a stack of verified components, written and verified in ACL2, ranging from a proprietary FM9001 microprocessor verified at the gate level, to assemblers and compilers for two high-level languages---Lisp and $\mu$Gypsy~\cite{moore:grand:2005}.
% jinja
Klein and Nipkow consider a Java-like programming language, Jinja~\cite{klein:machine:2006}.
They provide a compiler, virtual machine and operational semantics for the programming language and virtual machine, and prove that their compiler is semantics and type preserving.
Though other verified assemblers exist in the literature, what sets our work apart from that above is our attempt to optimise the generated machine code.
This complicates any formalisation effort as an attempt at the best possible selection of machine instructions must be made---especially important on devices with limited code memory.
Care must be taken to ensure that the time properties of an assembly program are not modified by assembly lest we affect the semantics of any program employing the MCS-51's I/O facilities.
This is only possible by inducing a cost model on the source code from the optimisation strategy and input program.
%\paragraph{Resources}
Our source files are available at~\url{http://cerco.cs.unibo.it}.
We assumed several properties of `library functions', e.g. modular arithmetic and datastructure manipulation.
We axiomatised various small functions needed to complete the main theorems, as well as some `routine' proof obligations of the theorems themselves, in order to focus on the main meat of the theorems.
We believe that the proof strategy is sound and that all axioms can be closed, up to minor bugs that should have local fixes that do not affect the global proof strategy.
The development, including the definition of the executable semantics of the MCS-51, is spread across 29 files, with around 18,500 lines of Matita source.
The bulk of the proof is contained in a series of files, \texttt{AssemblyProof.ma}, \texttt{AssemblyProofSplit.ma} and \texttt{AssemblyProofSplitSplit.ma}, consisting of approximately 4500 lines of Matita source.
Numerous other lines of proofs are spread all over the development because of dependent types and the Russell proof style, which does not allow one to separate the code from the proofs.
The low ratio of proof lines to code lines is unusual, but justified by the fact that the assembly language and the machine language share most constructs, and large swathes of their semantics are shared.
Many lines of code are required to describe the complex semantics of the processor, but for the shared cases the proof of preservation of the semantics is essentially trivial.
\bibliography{cpp-2012-asm.bib}
\end{document}