# Changeset 952 for src/ASM/CPP2011/cpp-2011.tex

Timestamp:
Jun 15, 2011, 10:15:00 AM
Message:

work from yesterday

File:
1 edited

\documentclass{llncs}
\usepackage{amsmath}
\usepackage[english]{babel}
\usepackage[colorlinks]{hyperref}
\title{On the correctness of an assembler for the Intel MCS-51 microprocessor}
\author{Jaap Boender \and Dominic P. Mulligan \and Claudio Sacerdoti Coen}
\institute{Dipartimento di Scienze dell'Informazione, Universit\`a di Bologna}
\begin{abstract}
We consider the formalisation of an assembler for the Intel MCS-51 8-bit microprocessor in the Matita proof assistant. This formalisation forms a major component of the EU-funded CerCo project, concerning the construction and formalisation of a concrete complexity preserving compiler for a large subset of the C programming language. ...
\end{abstract}
\section{Introduction}
\label{sect.introduction}
\begin{enumerate}
\item Assemblers are considered simple pieces of code, but this is not the case: they can be quite hard to formalise.
\item We are interested in an assembler for the legacy MCS family.
\item What does it do:
\begin{enumerate}
\item translates from human readable to machine readable code
\item expands labels
\item expands pseudoinstructions, optimising the expansion
\end{enumerate}
\item Problems due to the expansion/optimisation:
\begin{itemize}
\item operations that fetch from code memory do not make sense at the pseudo level
\item operations that combine the PC with constant shifts do not make sense at the pseudo level
\item more generally, memory addresses, wherever they are (memory, registers), can only be copied and compared with other memory addresses $\Rightarrow$ the need to trace memory addresses is an UNDECIDABLE PROBLEM
\item consequence: full preservation of the semantics becomes IMPOSSIBLE
\end{itemize}
\item We are also interested in intensional properties
\begin{itemize}
\item the semantics is sensitive to timing (e.g. I/O, interrupts)
\item to show that the semantics is preserved, one needs to assign a precise cost model to the pseudoinstructions
\item the cost model is induced by the compilation itself $\Rightarrow$ ``recursive'' statement
\end{itemize}
\item Finally, the optimising expansion itself: certified compilers are usually proved to be correct, but we want more: completeness and optimality of the expansion.
\begin{itemize}
\item the optimisation starts with a non-solution and incrementally refines it to a solution. At each step it uses functions that implement the expansion and that need to be correct. The solution can fail to exist. The proof becomes a mess.
\item idea: split the policy from the implementation; prove the implementation correct w.r.t. any correct policy; provide a correct policy (when it exists) and show that it is also complete and optimal. Show that the assembler fails iff a correct policy does not exist (completeness).
\end{itemize}
\item Additional issues:
We consider the formalisation of an assembler for the Intel MCS-51 8-bit microprocessor in the Matita proof assistant. This formalisation forms a major component of the EU-funded CerCo project, concerning the construction and formalisation of a concrete complexity preserving compiler for a large subset of the C programming language. The MCS-51 dates from the early 1980s and is commonly called the 8051/8052. Despite the microprocessor's age, derivatives are still widely manufactured by a number of semiconductor foundries and the processor is widely used, especially in embedded systems development, where well-tested, cheap, predictable microprocessors find a niche. The MCS-51 has a relative paucity of features compared to its more modern brethren. In particular, the MCS-51 does not possess a cache or any instruction pipelining that would make predicting the concrete cost of executing a single instruction an involved process.
Instead, each semiconductor foundry that produces an MCS-51 derivative is able to provide accurate timing information in clock cycles for each instruction in their derivative's instruction set. It is important to stress that this timing information, unlike in more sophisticated processors, is not an estimate: it is a definition. With the MCS-51, if a manufacturer states that a particular opcode takes three clock cycles to execute, then that opcode \emph{always} takes three clock cycles to execute. This predictability of timing information is especially attractive to the CerCo consortium. We are in the process of constructing a cost-preserving certified compiler for a realistic processor, not of building and formalising the worst-case execution time (WCET) tools that would be necessary to achieve the same result with, for example, a modern ARM or PowerPC microprocessor. However, the MCS-51's paucity of features is a double-edged sword. In particular, the MCS-51 features relatively minuscule memory spaces (including read-only code memory, stack and internal/external RAM) by modern standards. As a result, our compiler, to have any sort of hope of successfully compiling realistic C programs, must produce ``tight'' machine code. This is not simple. To begin to understand the problems we faced, we here focus on a single issue in the MCS-51's instruction set: unconditional jumps. The MCS-51 features three unconditional jump instructions: \texttt{LJMP} and \texttt{SJMP}---``long jump'' and ``short jump'' respectively---and \texttt{AJMP}, an 11-bit oddity of the MCS-51 that we choose to ignore for simplicity's sake.\footnote{Ignoring \texttt{AJMP} and its analogue \texttt{ACALL} is not idiosyncratic. The Small Device C Compiler (SDCC), the leading open source C compiler for the MCS-51, also seemingly does not produce \texttt{AJMP} and \texttt{ACALL} instructions.
Their utility in a modern context remains unclear.} Each of these three instructions expects arguments of different sizes and behaves in different ways. For instance, \texttt{SJMP} expects an 8-bit offset which is added to the current program counter to produce a relative, local jump. In contrast, \texttt{LJMP} expects a 16-bit address and can jump to any address in the MCS-51's memory space. As a result, the size of each opcode is different, and to squeeze as much code as possible into the MCS-51's limited code memory, the smallest instruction that produces the required effect should be picked. Having the compiler attempt to select the smallest possible jump instruction was deemed too high a burden, unnecessarily complicating the compilation chain. Instead, we decided to have the compiler target an assembly language, complete with pseudoinstructions. These pseudoinstructions included generic \texttt{Jmp} and \texttt{Call} instructions. We also implemented labels, conditional jumps to labels, a program preamble containing global data and a \texttt{MOV} instruction for moving this global data into the MCS-51's single 16-bit register. This latter feature will ease any later consideration of separate compilation in the CerCo compiler. Further, our conditional jumps to labels behave differently from their machine code counterparts. At the machine code level, conditional jumps may only jump to a relative offset of the current program counter, limiting their scope. At the assembly level, however, conditional jumps may jump to a label that appears anywhere in the program, significantly liberalising the use of conditional jumps. In line with CerCo's goal to produce a verified compilation chain, this assembly language to machine language translation must also be proved correct. Assemblers are not as simple as they first appear, and are in fact quite hard to formalise.
In particular, the CerCo assembler needs to expand labels and pseudoinstructions into a correct representation at the machine level. Trying to na\"ively relate assembly programs with their machine code counterparts simply does not work. Machine code programs that fetch from code memory and programs that combine the program counter with constant shifts do not make sense at the assembly level. More generally, memory addresses can only be compared with other memory addresses. However, checking that memory addresses are only compared against each other at the assembly level is in fact undecidable. In short, full preservation of the semantics of the two languages is impossible. A further set of complications is added by the peculiarities of the CerCo project itself. As mentioned, the CerCo consortium is in the business of constructing a verified compiler for the C programming language. However, unlike CompCert---currently representing the state of the art for ``industrial grade'' verified compilers---and similar projects, CerCo considers not just the \emph{extensional correctness} of the compiler, but also its \emph{intensional correctness}. That is, CompCert focusses solely on the preservation of the \emph{meaning} of a program during the compilation process, guaranteeing that the program's meaning does not change as it is gradually transformed into assembly code. However, in any realistic compiler (even the CompCert compiler!) there is no guarantee that the program's timing properties are preserved during the compilation process; a compiler's ``optimisations'' could, in theory, even conspire to degrade the concrete complexity of certain classes of programs. CerCo aims to expand the current state of the art by producing a compiler where this temporal degradation is guaranteed not to happen. To achieve this, CerCo imposes a cost model on programs, or more specifically, on simple blocks of instructions.
This cost model is induced by the compilation process itself, and its non-compositional nature allows us to assign different costs to identical blocks of instructions depending on how they are compiled, obtaining a very precise costing for a program by embracing the compilation process, not ignoring it. However, this complicates the proof of correctness for the compiler proper: for every translation pass from intermediate language to intermediate language, we must prove that not only has the meaning of a program been preserved, but also its complexity characteristics. This also applies to the translation from assembly language to machine code. How do we assign a cost to a pseudoinstruction? There is one snag: how to expand jumps. As mentioned, conditional jumps at the assembly level can jump to a label appearing anywhere in the program. However, at the machine code level, conditional jumps are limited to jumping ``locally'', using an 8-bit relative offset of the program counter. To translate a jump to a label, a single conditional jump pseudoinstruction is potentially translated into a block of three real instructions, as follows (here, \texttt{JZ} is ``jump if accumulator is zero''):
\begin{displaymath}
\begin{array}{r@{\quad}l@{\;\;}l@{\qquad}c@{\qquad}l@{\;\;}l}
       & \mathtt{JZ}  & label                    &                      & \mathtt{JZ}   & \text{size of \texttt{SJMP} instruction} \\
       & \ldots       &                          & \text{translates to} & \mathtt{SJMP} & \text{size of \texttt{LJMP} instruction} \\
label: & \mathtt{MOV} & \mathtt{A}\;\;\mathtt{B} & \Longrightarrow      & \mathtt{LJMP} & \text{address of \textit{label}} \\
       &              &                          &                      & \ldots        & \\
       &              &                          &                      & \mathtt{MOV}  & \mathtt{A}\;\;\mathtt{B}
\end{array}
\end{displaymath}
In the translation, if \texttt{JZ} fails, we fall through to the \texttt{SJMP} which jumps over the \texttt{LJMP}.
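The three-instruction block can be rendered as a small executable sketch. The following Python fragment is an illustrative model only, not the CerCo assembler's actual Matita code; the instruction sizes and the (mnemonic, argument) tuple encoding are assumptions made for the sketch:

```python
# Illustrative encoding (not the CerCo assembler's real code) of the
# three-instruction block above. Instruction sizes are assumptions.
SJMP_SIZE = 2  # bytes: opcode + 8-bit relative offset
LJMP_SIZE = 3  # bytes: opcode + 16-bit address

def expand_jz_to_block(label_addr):
    """Expand `JZ label` when the label is too far for a relative jump."""
    # If the accumulator is zero, JZ hops over the SJMP onto the LJMP,
    # which reaches the distant label; otherwise we fall through to the
    # SJMP, which hops over the LJMP to the code following the block.
    return [('JZ',   SJMP_SIZE),   # offset = size of the SJMP below
            ('SJMP', LJMP_SIZE),   # offset = size of the LJMP below
            ('LJMP', label_addr)]
```

Note that the single-instruction and three-instruction expansions occupy different numbers of bytes and, depending on the manufacturer's timing tables, may consume different numbers of clock cycles, which is why a cost can only be assigned to a pseudoinstruction once its expansion is fixed.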
Naturally, if \textit{label} is close enough, a conditional jump pseudoinstruction is mapped directly to a conditional jump instruction; the above translation only applies if \textit{label} is not sufficiently local. Similarly, we must also work out whether to expand an unconditional jump pseudoinstruction into an \texttt{SJMP} or an \texttt{LJMP}. This leaves the problem, addressed below, of calculating whether a label is indeed ``close enough'' for the simpler translation to be used. Crucially, the above translation demonstrates the difficulty in predicting how many clock cycles a pseudoinstruction will take to execute. A conditional jump may be mapped to a single machine instruction or a block of three. Perhaps more insidiously, the number of cycles needed to execute the instructions in the two branches of a translated conditional jump may differ. Depending on the semiconductor manufacturer, an \texttt{SJMP} could in theory take a different number of clock cycles to execute than an \texttt{LJMP}. These issues must also be dealt with in order to prove that the translation pass preserves the concrete complexity of the code. The question remains: how do we decide whether to expand a jump into an \texttt{SJMP} or an \texttt{LJMP}? This problem is far from trivial. To understand why, consider the following snippet of assembly code:
\begin{displaymath}
\text{dpm: finish me}
\end{displaymath}
As our example shows, given an occurrence $l$ of an \texttt{LJMP} instruction, it may be possible to shrink $l$ to an occurrence of an \texttt{SJMP} provided we can shrink any \texttt{LJMP}s that exist between $l$ and its target location. However, shrinking these \texttt{LJMP}s may in turn depend on shrinking $l$ to an \texttt{SJMP}, as it is perfectly possible to jump backwards. In short, unless we can somehow break this loop of circularity, we are stuck with a suboptimal solution to the expanding jumps problem.
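One standard way to break this circularity is a monotone fixed-point iteration: optimistically start with every jump short, then repeatedly lengthen any jump whose target falls outside the 8-bit relative range under the addresses induced by the current choices, until nothing changes. The sketch below is a hedged illustration under assumed instruction sizes and a simplified program representation; it is not the formalised Matita development:

```python
# Hypothetical sketch of computing a jump-expansion policy by fixed point.
# A program is modelled as a list of ('jmp', target_index) pseudo-jumps
# and ('instr', size_in_bytes) ordinary instructions; the real assembler's
# representation differs. Sizes are illustrative.
SJMP_SIZE, LJMP_SIZE = 2, 3

def jump_policy(program):
    """Return the chosen size in bytes of every instruction."""
    # Start with every jump short; since jumps only ever grow and there
    # are finitely many of them, the iteration terminates.
    sizes = [SJMP_SIZE if op == 'jmp' else arg for (op, arg) in program]
    changed = True
    while changed:
        changed = False
        # instruction addresses induced by the current policy
        addr = [0]
        for s in sizes:
            addr.append(addr[-1] + s)
        for i, (op, target) in enumerate(program):
            if op == 'jmp' and sizes[i] == SJMP_SIZE:
                # SJMP offsets are relative to the following instruction
                offset = addr[target] - (addr[i] + SJMP_SIZE)
                if not -128 <= offset <= 127:
                    sizes[i] = LJMP_SIZE  # lengthen; never shrink again
                    changed = True
    return sizes
```

Lengthening one jump can push another jump's target out of range, which is why a single pass does not suffice; because jumps are only ever lengthened, the loop nevertheless reaches a fixed point.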
How we go about resolving this problem affected the shape of our proof of correctness for the whole assembler in a rather profound way. We first attempted to synthesise a solution bottom up. That is, starting with no solution, we gradually refined a solution using the same functions that implement the jump expansion. Using this technique, solutions can fail to exist, and the proof quickly descends into a diabolical quagmire. Abandoning this attempt, we instead split the ``policy'', i.e. the decision over how any particular jump should be expanded, from the implementation. Assuming the existence of a correct policy, we proved the implementation of the assembler correct. Further, we proved that the assembler fails to assemble a file if and only if a correct policy does not exist. Policies fail to exist in only a limited number of circumstances: namely, if a pseudoinstruction attempts to jump to a label that does not exist, or if the program is too large to fit in code memory. The first case would constitute a serious compiler error, and hopefully certifying the rest of the compiler would rule this possibility out; the second case is unavoidable---certified compiler or not, trying to load a huge program into a small code memory will break \emph{something}.
% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %
\subsection{Overview of the paper}
\label{subsect.overview.of.the.paper}
In Section~\ref{sect.matita} we provide a brief overview of the Matita proof assistant for the unfamiliar reader. In Section~\ref{sect.the.proof} we discuss the design and implementation of the proof proper. In Section~\ref{sect.conclusions} we conclude.
% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %
\section{Matita}
\label{sect.matita}
% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %
\section{The proof}
\label{sect.the.proof}
\begin{itemize}
\item use of dependent types to throw away wrong programs
\end{itemize}
\end{itemize}
\end{enumerate}
This paper discusses the proof of correctness of an assembler for the Intel MCS-51 8-bit family of microprocessors. The work presented herein is a component of the EU-funded CerCo (``Certified Complexity'') project. CerCo aims to produce a verified compiler from a large subset of C to the machine language of a microprocessor commonly used in embedded systems. In this respect, CerCo aims to go beyond the state of the art in verified compiler technology. Arguably, the CompCert C compiler currently represents the state of the art in the field of verified compilers. However, the verified backend of the CompCert C compiler stops at the PowerPC and ARM assembly \emph{languages}. Assembly of these languages into machine code is left untrusted. We aim to go further. The MCS-51 is CerCo's target processor.
% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %
\subsection{CerCo}
\label{subsect.cerco}
% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %
\subsection{Matita}
\label{subsect.matita}
% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %
\section{Conclusions}
\label{sect.conclusions}
\subsection{Use of dependent types}
\label{subsect.use.of.dependent.types}
As it stands, our use of complex dependent types in the formalisation is limited. Where it made sense, for example in data structures like tries and vectors, we have employed them.
\subsection{Related work}
\label{subsect.related.work}
\bibliography{cpp-2011}