{\setlength{\fboxsep}{5pt}
        \setlength{\mylength}{\linewidth}%
        \addtolength{\mylength}{-2\fboxsep}%
        \addtolength{\mylength}{-2\fboxrule}%
        \Sbox
        \minipage{\mylength}%
                \setlength{\abovedisplayskip}{0pt}%
                \setlength{\belowdisplayskip}{0pt}%
        }%
        {\endminipage\endSbox
                \[\fbox{\TheSbox}\]}
\title{On the correctness of an optimising assembler for the Intel MCS-51 microprocessor\thanks{The project CerCo acknowledges the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET-Open grant number: 243881.}}
\author{Dominic P. Mulligan \and Claudio Sacerdoti Coen}
\institute{Dipartimento di Scienze dell'Informazione,\\ Universit\`a degli Studi di Bologna}
We present a proof of correctness in Matita for an optimising assembler for the MCS-51 microcontroller.
The efficient expansion of pseudoinstructions, namely jumps, into machine instructions is complex.
We isolate the decision making over how jumps should be expanded from the expansion process itself as much as possible using `policies', making the proof of correctness for the assembler more straightforward.

Since it is impossible for an assembler to preserve the semantics of every assembly program, our proof strategy tracks `good addresses': only programs that use addresses in `good' ways have their semantics preserved under assembly.
Our strategy offers increased flexibility over the traditional approach to proving the correctness of assemblers, wherein addresses in assembly are kept opaque and immutable.
In particular, we may experiment with allowing the benign manipulation of addresses.
\keywords{Verified software, CerCo (Certified Complexity), MCS-51 microcontroller, Matita proof assistant}

% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %
\section{Introduction}

We consider the formalisation of an assembler for the Intel MCS-51 8-bit microprocessor in the Matita proof assistant~\cite{asperti:user:2007}.
This formalisation forms a major component of the EU-funded CerCo (`Certified Complexity') project~\cite{cerco:2011}, concerned with the construction and formalisation of a concrete complexity-preserving compiler for a large subset of the C programming language.

The MCS-51 dates from the early 1980s and is commonly called the 8051/8052.
Derivatives are still widely manufactured by a number of semiconductor foundries, and the processor is used especially in embedded systems.

The MCS-51 has a relative paucity of features compared to its more modern brethren: the lack of any caching or pipelining means that the timing of execution is predictable, making the MCS-51 very attractive for CerCo's ends.
However, this paucity of features---though an advantage in many respects---also quickly becomes a hindrance, as the MCS-51 offers a relatively minuscule set of memory spaces by modern standards.
As a result our C compiler, to be able to successfully compile realistic programs for embedded devices, ought to produce `tight' machine code.

To do this, we must solve the `branch displacement' problem---deciding how best to expand pseudojumps to labels in assembly language into machine code jumps.
The branch displacement problem arises because pseudojumps can be expanded into real machine instructions in multiple ways, but the different expansions are not equivalent (e.g. they differ in size or speed) and are not always correct (e.g. correctness holds only up to global constraints over the compiled code).
For instance, some jump instructions (short jumps) are very small and fast, but can only reach destinations within a certain distance of the current instruction.
When a destination is too far away, a larger and slower long jump must be used.
The use of a long jump may increase the distance between another pseudojump and its target, forcing another long jump to be used, in a cascade.
The job of the optimising assembler is to expand every pseudoinstruction in such a way that all global constraints are satisfied and the compiled program is minimal in size and as fast as possible.
This problem is known to be computationally hard for most CISC architectures (see~\cite{hyde:branch:2006}).
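
To illustrate the cascade, the following OCaml pseudocode sketches the classic iterative approach to the problem: start with every jump short, then repeatedly lengthen any jump whose target is out of range until a fixed point is reached. The sketch is ours and purely illustrative; CerCo's actual algorithm and its proof of correctness are the subject of the companion paper~\cite{boender:correctness:2012}.

type instr =
  | Plain of int   (* a non-jump instruction occupying a fixed number of bytes *)
  | Jump of int    (* an unconditional jump to the instruction at this index *)

let short_size = 2 (* e.g. SJMP: opcode plus an 8-bit relative offset *)
let long_size = 3  (* e.g. LJMP: opcode plus a 16-bit absolute address *)

(* For each instruction, decide whether it must be expanded as a long jump. *)
let solve (prog : instr array) : bool array =
  let n = Array.length prog in
  let long = Array.make n false in
  let changed = ref true in
  while !changed do
    changed := false;
    (* Compute the address of every instruction under the current choices. *)
    let addr = Array.make (n + 1) 0 in
    for i = 0 to n - 1 do
      let size = match prog.(i) with
        | Plain s -> s
        | Jump _ -> if long.(i) then long_size else short_size in
      addr.(i + 1) <- addr.(i) + size
    done;
    (* Lengthen every short jump whose offset does not fit in a signed byte.
       Lengthening one jump can push others out of range: hence the loop. *)
    Array.iteri (fun i ins -> match ins with
      | Jump target when not long.(i) ->
          let delta = addr.(target) - addr.(i + 1) in
          if delta < -128 || delta > 127 then begin
            long.(i) <- true;
            changed := true
          end
      | _ -> ()) prog
  done;
  long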

To simplify the CerCo C compiler we have chosen to implement an optimising assembler whose input language the compiler will target.
Our assembly language features labels, conditional jumps to labels, a program preamble containing global data, and a \texttt{MOV} instruction for moving this global data into the MCS-51's single 16-bit register.
We further simplify by ignoring linking, assuming that all our assembly programs are pre-linked.

Another complication we have addressed is that of the cost model.
CerCo imposes a cost model on C programs or, more specifically, on simple blocks of instructions.
This cost model is induced by the compilation process itself, and its non-compositional nature allows us to assign different costs to identical C statements depending on how they are compiled.
In short, we aim to obtain a very precise costing for a program by embracing the compilation process, not ignoring it.
At the assembler level, this is reflected by our need to induce a cost model on the assembly code as a function of the assembly program and the strategy used to solve the branch displacement problem.
In particular, the optimising assembler should also return a map that assigns a cost (in clock cycles) to every instruction in the source program.
We expect the induced cost to be preserved by the assembler: we will prove that the assembled code tightly simulates the source code by taking exactly the predicted amount of time.

Note that the temporal tightness of the simulation is a fundamental prerequisite for the correctness of the simulation, because some functions of the MCS-51---timers and I/O---depend on the microprocessor's clock.
If the pseudo- and concrete clocks differ, the result of an I/O operation may not be preserved.

Branch displacement algorithms must have a deep knowledge of the way the rest of the assembler works in order to build globally correct solutions.
Proving their correctness is quite a complex task (see, for instance, the companion paper~\cite{boender:correctness:2012}).
Nevertheless, the correctness of the whole assembler depends only on the correctness of the branch displacement algorithm.
Therefore, in the rest of the paper, we presuppose the existence of a correct policy, to be computed by a branch displacement algorithm if one exists.
A policy is the decision over how any particular jump should be expanded; it is correct when the global constraints are satisfied.
The assembler fails to assemble an assembly program if and only if a correct policy does not exist.
This is stated in an elegant way in the dependent type of the assembler: the assembly function is total over a program, a policy and a proof that the policy is correct for that program.

A final complication in the proof is due to the kind of semantics associated to pseudo-assembly programs.
Should assembly programs be allowed to freely manipulate addresses?
The traditional answer is `no': values stored in memory or registers are either concrete data or symbolic addresses.
The latter can only be manipulated in very restricted ways, and programs that violate these restrictions are not assigned a semantics and cannot be reasoned about.
All programs that have a semantics have it preserved by the assembler.
We take an alternative approach, allowing programs to freely manipulate addresses non-symbolically, but only granting preservation of semantics to those programs that act in `well-behaved' ways.
In principle, this should allow some reasoning on the actual semantics of malign programs.
In practice, we note how our approach facilitates more code reuse between the semantics of assembly code and object code.

The rest of this paper is a detailed description of our proof, which is marginally still a work in progress.

% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %
\section{Matita}

Matita is a proof assistant based on a variant of the Calculus of (Co)inductive Constructions~\cite{asperti:user:2007}.
It features dependent types that we exploit in the formalisation.
The (simplified) syntax of the statements and definitions in the paper should be self-explanatory.
Pairs are denoted with angular brackets, $\langle-, -\rangle$.

Matita features a liberal system of coercions.
It is possible to define a uniform coercion $\lambda x.\langle x,?\rangle$ from every type $T$ to the dependent product $\Sigma x:T.P~x$.
The coercion opens a proof obligation that asks the user to prove that $P$ holds for $x$.
When a coercion must be applied to a complex term (a $\lambda$-abstraction, a local definition, or a case analysis), the system automatically propagates the coercion to the sub-terms.
For instance, to apply a coercion to force $\lambda x.M : A \to B$ to have type $\forall x:A.\Sigma y:B.P~x~y$, the system looks for a coercion from $M: B$ to $\Sigma y:B.P~x~y$ in a context augmented with $x:A$.
This is significant when the coercion opens a proof obligation, as the user will be presented with multiple, but simpler, proof obligations in the correct context.
In this way, Matita supports the `Russell' proof methodology developed by Sozeau in~\cite{sozeau:subset:2006}, with an implementation that is lighter and more tightly integrated with the system than that of Coq.

% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %
\section{The proof}

Our aim here is to explain the main ideas and steps of the certified proof of correctness for an optimising assembler for the MCS-51.

In Subsect.~\ref{subsect.machine.code.semantics} we sketch an operational semantics (a realistic and efficient emulator) for the MCS-51.
We also introduce a syntax for decoded instructions that will be reused for the assembly language.

In Subsect.~\ref{subsect.assembly.code.semantics} we describe the assembly language and its operational semantics.
The latter is parametric in the cost model that will be induced by the assembler, reusing the semantics of the machine code on all `real' instructions.

Branch displacement policies are introduced in Subsect.~\ref{subsect.the.assembler}, where we also describe the assembler as a function over policies, as previously described.

To prove our assembler correct we show that the object code produced as output, together with a cost model for the source program, simulates the source program executed using that cost model.
The proof can be divided into two main lemmas.
The first is correctness with respect to fetching, described in Subsect.~\ref{}.
Roughly it states that a step of fetching at the assembly level, returning the decoded instruction $I$, is simulated by $n$ steps of fetching at the object level that return instructions $J_1,\ldots,J_n$, where $J_1,\ldots,J_n$ is, amongst the possible expansions of $I$, the one picked by the policy.
The second lemma states that $J_1,\ldots,J_n$ simulates $I$, but only if $I$ is well-behaved, i.e. manipulates addresses in `good' ways.
To keep track of well-behaved address manipulations we record where addresses are currently stored (in memory or the accumulator).
We introduce a dynamic checking function that inspects this map to determine if an operation is well-behaved, with an affirmative answer being the pre-condition of the lemma.
The second lemma is detailed in Subsect.~\ref{}, where we also establish the correctness of our assembler as a composition of the two lemmas: programs that are well-behaved when executed under the cost model induced by the compiler are correctly simulated by the compiled code.

% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %

\subsection{Machine code and its semantics}
\label{subsect.machine.code.semantics}

We implemented a realistic and efficient emulator for the MCS-51 microprocessor.
An MCS-51 program is just a sequence of bytes stored in the read-only code memory of the processor, represented as a compact trie of bytes addressed by the program counter.
The \texttt{Status} of the emulator is described as a record that contains the microprocessor's program counter, registers, stack pointer, clock, special function registers, code memory, and so on.
The value of the code memory is a parameter of the record since it is not changed during execution.

The \texttt{Status} record is itself an instance of a more general datatype \texttt{PreStatus} that abstracts over the implementation of code memory, in order to reuse the same datatype for the semantics of the assembly language in the next section.

The execution of a single instruction is performed by the \texttt{execute\_1} function, parametric over the content \texttt{cm} of the code memory:

definition execute_1: $\forall$cm. Status cm $\rightarrow$ Status cm

The function \texttt{execute\_1} closely matches the fetch-decode-execute cycle of the MCS-51 hardware, as described in a Siemens data sheet~\cite{siemens:2011}.
Fetching and decoding are performed simultaneously: we first fetch, using the program counter, the first byte of the instruction to be executed from code memory, decode the resulting opcode, and fetch further bytes as necessary to decode the arguments.
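
To give the flavour of this phase, the following OCaml pseudocode (ours, not the Matita formalisation) sketches simultaneous fetching and decoding on three sample opcodes; the opcode values are the MCS-51's, while the instruction representation and cycle counts are simplifications:

type fetched = { instr : string; next_pc : int; ticks : int }

(* Fetch at pc: read the opcode byte, then as many further bytes as the
   decoded opcode requires for its arguments. *)
let fetch (code : int array) (pc : int) : fetched =
  match code.(pc) with
  | 0x04 -> { instr = "INC A"; next_pc = pc + 1; ticks = 1 }
  | 0x80 -> (* SJMP: one further byte, a relative offset *)
      { instr = Printf.sprintf "SJMP %d" code.(pc + 1); next_pc = pc + 2; ticks = 2 }
  | 0x02 -> (* LJMP: two further bytes, a 16-bit absolute address *)
      let target = (code.(pc + 1) lsl 8) lor code.(pc + 2) in
      { instr = Printf.sprintf "LJMP %d" target; next_pc = pc + 3; ticks = 2 }
  | b -> failwith (Printf.sprintf "opcode 0x%02x not covered by this sketch" b)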

Decoded instructions are represented by the \texttt{instruction} data type, which extends a data type of \texttt{preinstruction}s that will be reused for the assembly language.

inductive preinstruction (A: Type[0]): Type[0] :=
 | ADD: $\llbracket$acc_a$\rrbracket$ → $\llbracket$registr; direct; indirect; data$\rrbracket$ $\rightarrow$ preinstruction A
 | DEC: $\llbracket$acc_a; registr; direct; indirect$\rrbracket$ $\rightarrow$ preinstruction A
 | JB: $\llbracket$bit_addr$\rrbracket$ $\rightarrow$ A $\rightarrow$ preinstruction A
 | ...

inductive instruction: Type[0] :=
 | LCALL: $\llbracket$addr16$\rrbracket$ $\rightarrow$ instruction
 | AJMP: $\llbracket$addr11$\rrbracket$ $\rightarrow$ instruction
 | RealInstruction: preinstruction $\llbracket$relative$\rrbracket$ $\rightarrow$ instruction
 | ...

The MCS-51 has many operand modes, but a non-orthogonal instruction set: each opcode is only enabled for a finite subset of the possible operand modes.
Here we exploit dependent types and an implicit coercion to synthesise the type of the arguments of opcodes from a vector of names of operand modes.
For example, \texttt{ADD} has two operands, the first constrained to be the \texttt{A} accumulator, and the second to be a disjoint union of register, direct, indirect and data operand modes.

The parameterised type $A$ of \texttt{preinstruction} represents the addressing mode allowed for conditional jumps; in the \texttt{RealInstruction} constructor we constrain it to be a relative offset.
A different instantiation will be used in the next section for assembly programs.

Once decoded, execution proceeds by a case analysis on the decoded instruction, following the operation of the hardware.
For example, the \texttt{DEC} preinstruction (`decrement') is executed as follows:

 | DEC addr $\Rightarrow$
  let s := add_ticks1 s in
  let $\langle$result, flags$\rangle$ := sub_8_with_carry (get_arg_8 s true addr)
   (bitvector_of_nat 8 1) false in
     set_arg_8 s addr result

Here, \texttt{add\_ticks1} models the incrementing of the internal clock of the microprocessor; it is a parameter of the semantics of \texttt{preinstruction}s that is fixed in the semantics of \texttt{instruction}s according to the manufacturer's datasheet.

% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %

\subsection{Assembly code and its semantics}
\label{subsect.assembly.code.semantics}

An assembly program is a list of potentially labelled pseudoinstructions, bundled with a preamble consisting of a list of symbolic names for locations in data memory (i.e. global variables).
All preinstructions are pseudoinstructions, but conditional jumps are now only allowed to use \texttt{Identifier}s (labels) as their target.

inductive pseudo_instruction: Type[0] :=
  | Instruction: preinstruction Identifier $\rightarrow$ pseudo_instruction
    ...
  | Jmp: Identifier $\rightarrow$ pseudo_instruction
  | Call: Identifier $\rightarrow$ pseudo_instruction
  | Mov: $\llbracket$dptr$\rrbracket$ $\rightarrow$ Identifier $\rightarrow$ pseudo_instruction.

The pseudoinstructions \texttt{Jmp}, \texttt{Call} and \texttt{Mov} are generalisations of machine code unconditional jumps, calls and move instructions respectively, all of which act on labels, as opposed to concrete memory addresses.
Object code calls and jumps that act on concrete memory addresses are ruled out of assembly programs by not being included in the preinstructions (see the previous section).
Execution of pseudoinstructions is an endofunction on \texttt{PseudoStatus}.
A \texttt{PseudoStatus} is an instance of \texttt{PreStatus} that differs from a \texttt{Status} only in the datatype used for code memory: a list of optionally labelled pseudoinstructions versus a trie of bytes.
The \texttt{PreStatus} type is crucial for sharing the majority of the semantics of the two languages.

Emulation of pseudoinstructions is handled by \texttt{execute\_1\_pseudo\_instruction}:

definition execute_1_pseudo_instruction:
 $\forall$cm. $\forall$costing:($\forall$ppc: Word. ppc < $\mid$snd cm$\mid$ $\rightarrow$ nat $\times$ nat).
  $\forall$s:PseudoStatus cm. program_counter s < $\mid$snd cm$\mid$ $\rightarrow$ PseudoStatus cm

The type of \texttt{execute\_1\_pseudo\_instruction} is more involved than that of \texttt{execute\_1}.
The first difference is that execution is only defined when the program counter points to a valid instruction, i.e. it is smaller than the length $\mid$\texttt{snd cm}$\mid$ of the program.
The second difference is the abstraction over the cost model, abbreviated here as \emph{costing}.
The costing is a function that maps valid program counters to pairs of natural numbers representing the number of clock ticks used by the pseudoinstruction stored at that program counter.
For conditional jumps the two numbers differ, representing different costs for the `true branch' and the `false branch'.
In the next section we will see how the optimising assembler induces the only costing---determined by the branch displacement policy's decisions on how to expand pseudojumps---that is preserved by compilation.

Execution proceeds by first fetching from pseudo-code memory using the program counter---treated as an index into the pseudoinstruction list.
This index is always guaranteed to be within the bounds of the pseudoinstruction list due to the dependent type placed on the function.
No decoding is required.
We then proceed by case analysis over the pseudoinstruction, reusing the object code semantics for all instructions present in the MCS-51's instruction set.
For each newly introduced pseudoinstruction, we simply translate its labels to concrete addresses before behaving as the corresponding `real' instruction.

We do not perform any kind of symbolic execution, wherein data is the disjoint union of bytes and addresses, with addresses kept opaque and immutable.
Labels are immediately translated to concrete addresses, and registers and memory locations only ever contain bytes, never labels.
As a consequence, we allow the programmer to mangle, change and generally adjust addresses as they wish, under the proviso that the translation process may not be able to preserve the semantics of programs that do this.
The only limitation introduced by this approach is that the size of assembly programs is bounded by $2^{16}$.
This will be further discussed in Subsect.~\ref{}.

% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %

\subsection{The assembler}
\label{subsect.the.assembler}

The assembler takes as input an assembly program to expand and a branch displacement policy for it.
It returns both the assembled program (a list of bytes to be loaded into code memory for execution) and the costing of the source program.

Conceptually the assembler works in two passes.
The first pass expands every pseudoinstruction into a list of machine code instructions using the function \texttt{expand\_pseudo\_instruction}.
The policy determines which expansion among the alternatives will be chosen for pseudojumps and pseudocalls.
Once the expansion is performed, the cost of the pseudoinstruction is defined as the cost of the expansion.
The second pass encodes the expanded instruction list as a list of bytes, by mapping the function \texttt{assembly1} across the list and then flattening:

\[
\mbox{\fontsize{7}{9}\selectfont$[\mathtt{P_1}, \ldots \mathtt{P_n}]$} \underset{\mbox{\fontsize{7}{9}\selectfont$\mathtt{assembly}$}}{\xrightarrow{\left(P_i \underset{\mbox{\fontsize{7}{9}\selectfont$\mathtt{assembly\_1\_pseudo\_instruction}$}}{\xrightarrow{\mathtt{P_i} \xrightarrow{\mbox{\fontsize{7}{9}\selectfont$\mathtt{expand\_pseudo\_instruction}$}} \mathtt{[I^1_i, \ldots I^q_i]} \xrightarrow{\mbox{\fontsize{7}{9}\selectfont$\mathtt{~~~~~~~~assembly1^{*}~~~~~~~~}$}} \mathtt{[0110]}}} \mathtt{[0110]}\right)^{*}}} \mbox{\fontsize{7}{9}\selectfont$\mathtt{[\ldots0110\ldots]}$}
\]
In order to understand the type of the policy, we briefly hint at the branch displacement problem for the MCS-51; a detailed description is found in the companion paper~\cite{boender:correctness:2012}.
The MCS-51 features three unconditional jump instructions: \texttt{LJMP} and \texttt{SJMP}---`long jump' and `short jump' respectively---and an 11-bit oddity of the MCS-51, \texttt{AJMP}.
Each of these three instructions expects arguments of different sizes and behaves in markedly different ways: \texttt{SJMP} may only perform a `local jump' to an address closer than $2^{7}$ bytes; \texttt{LJMP} may jump to any address in the MCS-51's memory space; and \texttt{AJMP} may jump to any address in the current memory page, where memory pages partition the code memory into $2^{5}$ disjoint areas of $2^{11}$ bytes each.
The size of each opcode is different, with long jumps being larger than the other two.
Because of the presence of \texttt{AJMP}, an optimal global solution may be locally suboptimal, employing a long jump where a shorter one would suffice in order to force later jumps to stay inside a single memory page.

Similarly, a conditional pseudojump may need to be translated into a cluster of machine code instructions, depending on the distance to the jump's target.
For example, a single conditional jump pseudoinstruction to a label may be translated into a block of three real instructions as follows (here, \texttt{JZ} is `jump if accumulator is zero'):

\[
\begin{array}{llllll}
       & \mathtt{JZ}  & \mathtt{label}                      &                 & \mathtt{JZ}   & \text{size of \texttt{SJMP} instruction} \\
       & \ldots       &                            & \text{translates to}   & \mathtt{SJMP} & \text{size of \texttt{LJMP} instruction} \\
\mathtt{label:} & \mathtt{MOV} & \mathtt{A}\;\;\mathtt{B}   & \Longrightarrow & \mathtt{LJMP} & \text{address of \textit{label}} \\
       &              &                            &                 & \ldots        & \\
       &              &                            &                 & \mathtt{MOV}  & \mathtt{A}\;\;\mathtt{B}
\end{array}
\]

Naturally, if \texttt{label} is `close enough', a conditional jump pseudoinstruction is mapped directly to a conditional jump machine instruction; the above translation only applies if \texttt{label} is not sufficiently local.
Note that in the translated block the `true' branch (accumulator zero) executes the \texttt{JZ} followed by the \texttt{LJMP}, while the `false' branch executes the \texttt{JZ} followed by the \texttt{SJMP}; this is one reason why the induced cost model assigns a pair of costs to every conditional jump.

The \texttt{expand\_pseudo\_instruction} function is driven by a policy in its choice of expansion for each pseudoinstruction.
The simplest idea is to define a policy as a function that maps jumps to their size.
This simple idea, however, is impractical because short jumps require the offset of the target.
For instance, suppose that at address \texttt{ppc} in the assembly program we find \texttt{Jmp l} such that $l$ is associated to the pseudo-address \texttt{a}, and the policy wants the \texttt{Jmp} to become an \texttt{SJMP $\delta$}.
To compute $\delta$, we need to know what the addresses \texttt{ppc+1} and \texttt{a} will become in the assembled program, in order to compute their difference.
The address that \texttt{a} will be mapped to is a function of the expansion of all the pseudoinstructions between \texttt{ppc} and \texttt{a}, which has yet to be performed when expanding the instruction at \texttt{ppc}.

To solve the issue, we define the policy \texttt{policy} as a map from a valid pseudo-address to the corresponding address in the assembled program.
Therefore, $\delta$ in the example above can be computed simply as \texttt{policy(a) - policy(ppc + 1)}.
Moreover, \texttt{expand\_pseudo\_instruction} emits an \texttt{SJMP} only after verifying for each \texttt{Jmp} that $\delta < 128$.
When this is not the case, the function emits an \texttt{AJMP} if possible, or an \texttt{LJMP} otherwise, therefore always picking the locally best solution.
In order to accommodate those optimal solutions that require locally sub-optimal choices, the policy may also return a boolean used to force the translation of a \texttt{Jmp} into an \texttt{LJMP} even if $\delta < 128$.
An essentially identical mechanism exists for call instructions and conditional jumps.
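
The following OCaml pseudocode summarises the choice just described; the names are hypothetical and the real definitions live in Matita. We assume \texttt{policy} maps pseudo-addresses to assembled addresses and \texttt{force\_long} is the boolean just mentioned:

type expansion = SJMP of int | AJMP of int | LJMP of int

(* AJMP can only reach a target lying in the same 2^11-byte page as the
   address of the instruction that follows it. *)
let same_page a b = a land 0xF800 = b land 0xF800

let expand_jump (policy : int -> int) (force_long : bool)
    (ppc : int) (a : int) : expansion =
  let dst = policy a in            (* assembled address of the target *)
  let next = policy (ppc + 1) in   (* assembled address after the jump *)
  let delta = dst - next in
  if not force_long && -128 <= delta && delta < 128 then SJMP delta
  else if not force_long && same_page next dst then AJMP dst
  else LJMP dst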

In order for the translation of a jump to be correct, the address associated to \texttt{a} by the policy and by the assembler must coincide.
The latter is the sum of the sizes of all the expansions of the pseudoinstructions that precede the one at address \texttt{a}: the assembler simply concatenates all expansions sequentially.
To guarantee this property, we impose a correctness criterion over policies.
A policy is correct when for all valid pseudo-addresses \texttt{ppc}
$${\texttt{policy(ppc+1) = policy(ppc) + instruction\_size(ppc)}}$$
Here \texttt{instruction\_size(ppc)} is the size in bytes of the expansion of the pseudoinstruction found at \texttt{ppc}, i.e. the length in bytes of its expansion.
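
Rendered as executable OCaml pseudocode (again with illustrative names), the criterion amounts to the following check:

(* A policy is correct when consecutive assembled addresses differ by
   exactly the size of the expansion chosen at each pseudo-address. *)
let policy_is_correct (policy : int -> int)
    (instruction_size : int -> int) (program_length : int) : bool =
  let rec check ppc =
    ppc >= program_length
    || (policy (ppc + 1) = policy ppc + instruction_size ppc
        && check (ppc + 1))
  in
  check 0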

% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %
\subsection{Correctness of the assembler with respect to fetching}

Using our policies, we now work towards proving the correctness of the assembler.
Correctness means that the assembly process never fails when provided with a correct policy, and that the process does not change the semantics of a certain class of well-behaved assembly programs.

The aim of this section is to prove the following informal statement: when we fetch an assembly pseudoinstruction \texttt{I} at address \texttt{ppc}, we can fetch its expansion \texttt{[J1, \ldots, Jn]} from address \texttt{policy ppc} in the code memory obtained by loading the assembled object code.
This constitutes the first major step in the proof of correctness of the assembler, the next one being the simulation of \texttt{I} by \texttt{[J1, \ldots, Jn]} (see Subsect.~\ref{}).

The \texttt{assembly} function is given a Russell type (slightly simplified here):

definition assembly:
  $\forall$program: pseudo_assembly_program.
  $\forall$policy.
    $\Sigma$assembled: list Byte $\times$ (BitVectorTrie costlabel 16).
      policy is correct for program $\rightarrow$
      $\mid$program$\mid$ < $2^{16}$ $\rightarrow$ $\mid$fst assembled$\mid$ < $2^{16}$ $\wedge$
      (policy ($\mid$program$\mid$) = $\mid$fst assembled$\mid$ $\vee$
      (policy ($\mid$program$\mid$) = 0 $\wedge$ $\mid$fst assembled$\mid$ = $2^{16}$)) $\wedge$
      $\forall$ppc: pseudo_program_counter. ppc < $2^{16}$ $\rightarrow$
        let pseudo_instr := fetch from program at ppc in
        let assembled_i := assemble pseudo_instr in
          $\mid$assembled_i$\mid$ $\leq$ $2^{16}$ $\wedge$
            $\forall$n: nat. n < $\mid$assembled_i$\mid$ $\rightarrow$ $\exists$k: nat.
              nth assembled_i n = nth assembled (policy ppc + k).

In plain words, the type of \texttt{assembly} states the following.
Suppose we are given a policy that is correct for the program we are assembling.
Then we return a list of assembled bytes, complete with a map from program counters to cost labels, such that the following properties hold for the list of bytes.
Under the condition that the policy is correct for the program and the program is fully addressable by a 16-bit word, the assembled list is also fully addressable by a 16-bit word, and the policy maps the pseudo-address just past the end of the program either to the length of the assembled program or, when the assembled program exactly fills the code memory, to 0 by overflow.
Further, fetching from the pseudo-program counter \texttt{ppc} yields a pseudoinstruction \texttt{pseudo\_instr}, and assembling this pseudoinstruction results in a list of bytes, \texttt{assembled\_i}.
Then, indexing into this list with any natural number \texttt{n} less than the length of \texttt{assembled\_i} gives the same result as indexing into \texttt{assembled} with \texttt{policy ppc} (the address of the start of the expansion in \texttt{assembled}) plus some offset \texttt{k}.

Essentially the type above states that the \texttt{assembly} function correctly expands pseudoinstructions, and that the expanded instructions reside consecutively in memory.
This result is lifted from lists of bytes to tries of bytes (i.e. code memories) using an additional lemma, \texttt{assembly\_ok}.

Lemma \texttt{fetch\_assembly} establishes that the \texttt{fetch} and \texttt{assembly1} functions interact correctly.
The \texttt{fetch} function, as its name implies, fetches the instruction indexed by the program counter in the code memory, while \texttt{assembly1} maps a single instruction to its byte encoding:

lemma fetch_assembly:
 $\forall$pc: Word.
 $\forall$i: instruction.
 $\forall$code_memory: BitVectorTrie Byte 16.
 $\forall$assembled: list Byte.
  assembled = assemble i $\rightarrow$
  let len := $\mid$assembled$\mid$ in
  let pc_plus_len := pc + len in
   encoding_check pc pc_plus_len assembled $\rightarrow$
   let $\langle$instr, pc', ticks$\rangle$ := fetch pc in
    instr = i $\wedge$ ticks = (ticks_of_instruction instr) $\wedge$ pc' = pc_plus_len.

We read \texttt{fetch\_assembly} as follows.
Given an instruction \texttt{i}, we first assemble it to obtain \texttt{assembled}, checking that the assembled bytes are stored in code memory correctly.
Fetching from code memory, we obtain a tuple consisting of the instruction, the new program counter, and the number of ticks this instruction will take to execute.
We finally check that the fetched instruction is the same instruction that we began with, and that the number of ticks this instruction will take to execute is the same as the result returned by a lookup function, \texttt{ticks\_of\_instruction}, devoted to tracking this information.
Or, in plainer words, assembling and then immediately fetching gets you back to where you started.

Lemma \texttt{fetch\_assembly\_pseudo} is obtained by composition of \texttt{expand\_pseudo\_instruction} and \texttt{assembly\_1\_pseudoinstruction}:

lemma fetch_assembly_pseudo:
 $\forall$program: pseudo_assembly_program.
 $\forall$policy.
 $\forall$ppc.
 $\forall$code_memory.
 let $\langle$preamble, instr_list$\rangle$ := program in
 let pi := $\pi_1$ (fetch_pseudo_instruction instr_list ppc) in
 let pc := policy ppc in
 let instructions := expand_pseudo_instruction policy ppc pi in
 let $\langle$l, a$\rangle$ := assembly_1_pseudoinstruction policy ppc pi in
 let pc_plus_len := pc + l in
  encoding_check code_memory pc pc_plus_len a $\rightarrow$
   fetch_many code_memory pc_plus_len pc instructions.

Here, \texttt{l} is the length in bytes of the expansion of the pseudoinstruction at hand, while \texttt{a} is the list of bytes itself.
We assemble a single pseudoinstruction with \texttt{assembly\_1\_pseudoinstruction}, which internally calls \texttt{expand\_pseudo\_instruction}.
The function \texttt{fetch\_many} fetches multiple machine code instructions from code memory and performs some routine checks.

Intuitively, Lemma \texttt{fetch\_assembly\_pseudo} can be read as follows.
Suppose we expand the pseudoinstruction at \texttt{ppc} with the policy, obtaining the list of machine code instructions \texttt{instructions}.
Suppose we also assemble the pseudoinstruction at \texttt{ppc} to obtain \texttt{a}, a list of bytes.
Then, provided \texttt{a} is stored correctly in code memory, \texttt{fetch\_many} checks that fetching from the real code memory starting at \texttt{policy ppc} yields exactly the instructions that \texttt{expand\_pseudo\_instruction} produced.

The final lemma in this series is \texttt{fetch\_assembly\_pseudo2}, which combines Lemma \texttt{fetch\_assembly\_pseudo} with the correctness of the functions that load object code into the processor's memory:

lemma fetch_assembly_pseudo2:
 $\forall$program.
 $\mid$snd program$\mid$ $\leq$ $2^{16}$ $\rightarrow$
 $\forall$policy.
 policy is correct for program $\rightarrow$
 $\forall$ppc. ppc < $\mid$snd program$\mid$ $\rightarrow$
  let $\langle$labels, costs$\rangle$ := create_label_cost_map program in
  let $\langle$assembled, costs'$\rangle$ := $\pi_1$ (assembly program policy) in
  let cmem := load_code_memory assembled in
  let $\langle$pi, newppc$\rangle$ := fetch_pseudo_instruction program ppc in
  let instructions := expand_pseudo_instruction policy ppc pi in
    fetch_many cmem (policy newppc) (policy ppc) instructions.

Here we use $\pi_1$ to project the existential witness from the Russell-typed function \texttt{assembly}.
We read \texttt{fetch\_assembly\_pseudo2} as follows.
Suppose we are given an assembly program that can be addressed by a 16-bit word and a policy that is correct for this program.
Suppose we are able to successfully assemble the program using \texttt{assembly} and produce a code memory, \texttt{cmem}.
Then, fetching a pseudoinstruction from the pseudo-code memory at address \texttt{ppc} corresponds to fetching a sequence of instructions from the real code memory, using \texttt{policy} to expand pseudoinstructions.
The fetched sequence corresponds to the expansion, according to the policy, of the pseudoinstruction.

At first, the lemma appears to immediately imply the correctness of the assembler, but this property is \emph{not} strong enough to establish that the semantics of an assembly program is preserved by the assembly process: it does not establish the correspondence between the semantics of a pseudoinstruction and that of its expansion.
In particular, the two semantics differ on instructions that \emph{could} directly manipulate program addresses.

% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %
\subsection{Correctness for `well-behaved' assembly programs}

The traditional approach to verifying the correctness of an assembler is to treat memory addresses as opaque structures that cannot be modified.
Memory is represented as a map from opaque addresses to the disjoint union of data and opaque addresses---addresses are kept opaque to prevent their possible `semantics breaking' manipulation by assembly programs:
\[
\mathtt{Mem} : \mathtt{Addr} \rightarrow \mathtt{Bytes} + \mathtt{Addr} \qquad \llbracket - \rrbracket : \mathtt{Instr} \rightarrow \mathtt{Mem} \rightarrow \mathtt{option\ Mem}
\]
The semantics of a pseudoinstruction, $\llbracket - \rrbracket$, is given as a possibly failing function from pseudoinstructions and memory spaces to new memory spaces.
The semantic function proceeds by case analysis over the operands of a given instruction, failing if either operand is an opaque address, or otherwise succeeding, updating memory:
\[
\llbracket \mathtt{ADD\ @A1\ @A2} \rrbracket^\mathtt{M} = \begin{cases}
                                                              \mathtt{Byte\ b1},\ \mathtt{Byte\ b2} & \rightarrow \mathtt{Some}(\mathtt{M}\ \text{with}\ \mathtt{b1} + \mathtt{b2}) \\
                                                              -,\ \mathtt{Addr\ a} & \rightarrow \mathtt{None} \\
                                                              \mathtt{Addr\ a},\ - & \rightarrow \mathtt{None}
                                                            \end{cases}
\]
In this paper we take a different approach, tracing the memory locations (and accumulators) that contain memory addresses.
We prove that only those assembly programs that use addresses in `safe' ways have their semantics preserved by the assembly process---a sort of dynamic type system sitting atop memory.
In principle this approach allows us to introduce some permitted \emph{benign} manipulations of addresses that the traditional approach cannot handle, thereby expanding the set of input programs that can be assembled correctly.
This approach seems similar to that taken by Tuch \emph{et al.}~\cite{tuch:types:2007} for reasoning about low-level C code.

Our analogue of the semantic function above is merely a wrapper around the function that implements the semantics of machine code, paired with a function that keeps track of addresses.
The semantics of pseudo- and machine code are then essentially shared.
The only thing that changes at the assembly level is the presence of the new tracking function.

However, with this approach we must detect (at run time) whether programs manipulate addresses in well-behaved ways, according to some approximation of well-behavedness.
We use an \texttt{internal\_pseudo\_address\_map} to trace code memory addresses stored in internal RAM:

definition address_entry := upper_lower $\times$ Byte.

definition internal_pseudo_address_map :=
  (BitVectorTrie address_entry 7) $\times$ (BitVectorTrie address_entry 7)
    $\times$ (option address_entry).

Here, \texttt{upper\_lower} is a type isomorphic to the booleans.
The implementation of \texttt{internal\_pseudo\_address\_map} is complicated by some peculiarities of the MCS-51's instruction set.
Note here that all addresses are 16-bit words, but are stored (and manipulated) as 8-bit bytes.
All \texttt{MOV} instructions in the MCS-51 must use the accumulator \texttt{A} as an intermediary, moving one byte at a time.
The third component of \texttt{internal\_pseudo\_address\_map} therefore states whether the accumulator currently holds a piece of an address, and if so, whether it is the upper or lower byte of the address (using the \texttt{upper\_lower} flag), complete with the corresponding source address in full.
The first and second components perform a similar task for the lower and higher internal RAM.
Again, we use the \texttt{upper\_lower} flag to describe whether a byte is the upper or lower component of a 16-bit address.
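
As a small illustration of why the flag is needed: a 16-bit code address only ever transits through the 8-bit accumulator one half at a time, as in this OCaml fragment of ours:

(* Split a 16-bit address into the two bytes that are moved separately;
   the tracking map annotates each with Upper or Lower plus the source. *)
let split_address (addr : int) : int * int =
  ((addr lsr 8) land 0xFF, addr land 0xFF)

(* e.g. split_address 0x1234 = (0x12, 0x34) *)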

The \texttt{low\_internal\_ram\_of\_pseudo\_low\_internal\_ram} function converts the low internal RAM of a \texttt{PseudoStatus} into the low internal RAM of a \texttt{Status}.
A similar function exists for high internal RAM.
Note that both RAM segments are indexed using 7-bit addresses:

definition low_internal_ram_of_pseudo_low_internal_ram:
 internal_pseudo_address_map $\rightarrow$ policy $\rightarrow$ BitVectorTrie Byte 7
  $\rightarrow$ BitVectorTrie Byte 7.

Next, we are able to translate \texttt{PseudoStatus} records into \texttt{Status} records using \texttt{status\_of\_pseudo\_status}.
Translating a \texttt{PseudoStatus}'s code memory requires that we expand pseudoinstructions and then assemble to obtain a trie of bytes.
This never fails, provided that our policy is correct:

definition status_of_pseudo_status:
 internal_pseudo_address_map $\rightarrow$ $\forall$pap. $\forall$ps: PseudoStatus pap.
 $\forall$policy. Status (code_memory_of_pseudo_assembly_program pap policy)

The \texttt{next\_internal\_pseudo\_address\_map} function is responsible for the run-time monitoring of the behaviour of assembly programs, in order to detect well-behaved ones.
It returns a map that traces memory addresses in internal RAM after execution of the next pseudoinstruction, failing when the instruction tampers with memory addresses in unanticipated (but potentially correct) ways.
It thus decides membership of a strict subset of the set of well-behaved programs.

definition next_internal_pseudo_address_map: internal_pseudo_address_map $\rightarrow$
 $\forall$cm. (Identifier $\rightarrow$ PseudoStatus cm $\rightarrow$ Word) $\rightarrow$ $\forall$s: PseudoStatus cm.
   program_counter s < $2^{16}$ $\rightarrow$ option internal_pseudo_address_map

If we wished to allow `benign manipulations' of addresses, it is this function that would need to be changed.
Note we once again use dependent types to ensure that program counters are properly within bounds.
The third argument is a function that resolves the concrete address of a label.
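
To give a flavour of the dynamic check, the following OCaml pseudocode (hypothetical names, and a simplified association-list map) shows two representative cases: copying the accumulator into an internal RAM cell copies any annotation and is always well-behaved, whereas arithmetic involving an annotated byte is rejected:

type upper_lower = Upper | Lower
type entry = upper_lower * int  (* which half, plus the full source address *)
type tracking = { ram : (int * entry) list; acc : entry option }

(* MOV dst, A: the destination cell inherits A's annotation, if any. *)
let mov_a_to_ram (m : tracking) (dst : int) : tracking option =
  let ram = List.remove_assoc dst m.ram in
  match m.acc with
  | None -> Some { m with ram }                      (* a plain data byte *)
  | Some e -> Some { m with ram = (dst, e) :: ram }  (* annotation copied *)

(* ADD A, src: summing bytes of addresses is not known to be well-behaved. *)
let add_ram_to_a (m : tracking) (src : int) : tracking option =
  match m.acc, List.assoc_opt src m.ram with
  | None, None -> Some m  (* arithmetic on plain data is fine *)
  | _ -> None             (* reject: an address byte is being mangled *)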

The function \texttt{ticks\_of0} computes how long---in clock cycles---a pseudoinstruction will take to execute when expanded in accordance with a given policy.
The function returns a pair of natural numbers, needed to record the execution times of each branch of a conditional jump:

definition ticks_of0:
 pseudo_assembly_program $\rightarrow$ (Identifier $\rightarrow$ Word) $\rightarrow$ $\forall$policy. Word $\rightarrow$
   pseudo_instruction $\rightarrow$ nat $\times$ nat

An additional function, \texttt{ticks\_of}, is merely a wrapper around this function.

Finally, we are able to state and prove our main theorem, relating the execution of a single assembly instruction to the execution of (possibly) many machine code instructions, as long as we are able to track memory addresses properly:

theorem main_thm:
 $\forall$M, M': internal_pseudo_address_map.
 $\forall$program: pseudo_assembly_program.
 $\forall$program_in_bounds: $\mid$program$\mid$ $\leq$ $2^{16}$.
 let maps := create_label_cost_map program in
 let addr_of := ... in
 program is well labelled $\rightarrow$
 $\forall$policy. policy is correct for program.
 $\forall$ps: PseudoStatus program. ps < $\mid$program$\mid$.
  next_internal_pseudo_address_map M program ... = Some M' $\rightarrow$
   $\exists$n. execute n (status_of_pseudo_status M ps policy) =
    status_of_pseudo_status M'
      (execute_1_pseudo_instruction program
       (ticks_of program ($\lambda$id. addr_of id ps) policy) ps) policy.

The statement is standard for forward simulation, but restricted to a \texttt{PseudoStatus} \texttt{ps} whose next instruction to be executed is well-behaved with respect to the \texttt{internal\_pseudo\_address\_map} \texttt{M}.
We explicitly require proof that the policy is correct, that the program is well-labelled (i.e. no repeated labels, etc.) and that the pseudo-program counter lies within the program's bounds.
Theorem \texttt{main\_thm} establishes the correctness of the assembly process and can be lifted to a forward simulation of an arbitrary number of well-behaved steps on the assembly program.

% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %
\section{Conclusions}

We are proving the correctness of an assembler for MCS-51 assembly language.
Our assembly language features labels, arbitrary conditional and unconditional jumps to labels, global data and instructions for moving this data into the MCS-51's single 16-bit register.
Expanding these pseudoinstructions into machine code instructions is not trivial, and the proof that the assembly process is `correct', in that the semantics of a subset of assembly programs is not changed, is complex.

The formalisation is a component of CerCo, which aims to produce a verified concrete complexity-preserving compiler for a large subset of the C language.
The verified assembler, complete with the underlying formalisation of the semantics of MCS-51 machine code, will form the bedrock layer upon which the rest of CerCo will build its verified compiler platform.

We may compare our work to an `industrial grade' assembler for the MCS-51: SDCC~\cite{sdcc:2011}, the only open source C compiler that targets the MCS-51 instruction set.
It appears that all pseudojumps in SDCC assembly are expanded to \texttt{LJMP} instructions, the worst possible jump expansion policy from an efficiency point of view.
Note that this is the only policy that can, \emph{in theory}, preserve the semantics of every assembly program during the assembly process.
It comes at the expense of assembler completeness, however, as the generated program may be too large for code memory: there is a trade-off between the completeness of the assembler and the efficiency of the assembled program.
The definition and proof of a terminating, correct jump expansion policy is described elsewhere~\cite{boender:correctness:2012}.

Verified assemblers could also be applied to the verification of operating system kernels and other formalised compilers.
For instance, the verified seL4 kernel~\cite{klein:sel4:2009}, CompCert~\cite{leroy:formally:2009} and CompCertTSO~\cite{sevcik:relaxed-memory:2011} all explicitly assume the existence of trustworthy assemblers.
The fact that an optimising assembler cannot preserve the semantics of all assembly programs may have consequences for these projects.

Our formalisation exploits dependent types in different ways and for multiple purposes.
The first purpose is to reduce potential errors in the formalisation of the microprocessor.
Dependent types are used to constrain the size of bitvectors and tries that represent memory quantities and memory areas respectively.
They are also used to simulate polymorphic variants in Matita, in order to provide precise typings to various functions expecting only a subset of all the addressing modes that the MCS-51 offers.
Polymorphic variants nicely capture the absolutely non-orthogonal instruction set of the MCS-51, where every opcode must accept its own subset of the 11 addressing modes of the processor.

The second purpose is to single out sources of incompleteness.
By abstracting our functions over the dependent type of correct policies, we were able to manifest the fact that the compiler never refuses to compile a program for which a correct policy exists.
This also allowed us to simplify the initial proof by dropping lemmas establishing that one function fails if and only if some previous function does so.

Finally, dependent types, together with Matita's liberal system of coercions, allow us to simulate almost entirely in user space the `Russell' proof methodology of Sozeau~\cite{sozeau:subset:2006}.
Not every proof has been carried out in this way: we only used this style to prove that a function satisfies a specification that only involves that function in a significant way.
It would not be natural, for instance, to see the proof that fetch and assembly commute as the specification of one of the two functions.
\paragraph{Related work}
% piton
We are not the first to consider the correctness of an assembler for a non-trivial assembly language.
The most impressive piece of work in this domain is Piton~\cite{moore:piton:1996}, a stack of verified components, written and verified in ACL2, ranging from a proprietary FM9001 microprocessor verified at the gate level, to assemblers and compilers for two high-level languages---Lisp and $\mu$Gypsy~\cite{moore:grand:2005}.
% jinja
Klein and Nipkow also provide a compiler, virtual machine and operational semantics for the Jinja~\cite{klein:machine:2006} language and prove that their compiler is semantics and type preserving.

Though other verified assemblers exist, what sets our work apart from those above is our attempt to optimise the generated machine code.
This complicates a formalisation, as an attempt at the best possible selection of machine instructions must be made---especially important on devices with limited code memory.
Care must be taken to ensure that the time properties of an assembly program are not modified by assembly, lest we affect the semantics of any program employing the MCS-51's I/O facilities.
This is only possible by inducing a cost model on the source code, determined by the optimisation strategy and the input program.

Our source files are available at~\url{}.
We assumed several properties of `library functions', e.g. modular arithmetic and datastructure manipulation.
We axiomatised various small functions needed to complete the main theorems, as well as some `routine' proof obligations of the theorems themselves, so as to focus on the meat of the theorems.
We believe that the proof strategy is sound and that all axioms can be closed, up to minor bugs that should have local fixes that do not affect the global proof strategy.

The complete development is spread across 29 files with around 20,000 lines of Matita source.
The relevant files are \texttt{}, \texttt{} and \texttt{}, consisting of approximately 4,500 lines of Matita source.
Numerous other lines of proofs are spread all over the development because of dependent types and the Russell proof style, which does not allow one to separate the code from the proofs.
The low ratio between source lines and the number of lines of proof is unusual, but justified by the fact that the pseudo-assembly and the assembly language share most constructs, and large swathes of the semantics are shared.