Changeset 1017 for src
Timestamp: Jun 21, 2011, 11:15:31 AM
Message: complete, just under 16 pages
File: 1 edited
This formalisation forms a major component of the EU-funded CerCo project~\cite{cerco:2011}, concerning the construction and formalisation of a concrete complexity preserving compiler for a large subset of the C programming language. The MCS-51 dates from the early 1980s and is commonly called the 8051/8052.\footnote{Being strict, the 8051 and 8052 are two different microprocessors, though the features that the 8052 added over the 8051 are minor, and largely irrelevant for our formalisation project.} Despite the microprocessor's age, derivatives are still widely manufactured by a number of semiconductor foundries. As a result the processor is widely used, especially in embedded systems development, where well-tested, cheap, predictable microprocessors find their niche. In particular, the MCS-51 does not possess a cache or any instruction pipelining that would make predicting the concrete cost of executing a single instruction an involved process. Instead, each semiconductor foundry that produces an MCS-51 derivative is able to provide accurate timing information in clock cycles for each instruction in their derivative's instruction set. It is important to stress that this timing information, unlike in more sophisticated processors, is not an estimate: it is a definition. For the MCS-51, if a manufacturer states that a particular opcode takes three clock cycles to execute, then that opcode \emph{always} takes three clock cycles to execute. This predictability of timing information is especially attractive to the CerCo consortium.
We are in the process of constructing a certified, concrete complexity compiler for a realistic processor, and not for building and formalising the worst case execution time tools (WCET---see~\cite{yan:wcet:2008} and~\cite{bate:wcet:2011}, amongst many others, for an application of WCET technology to microprocessors with more complex designs) that would be necessary to achieve the same result with, for example, a modern ARM or PowerPC microprocessor. As in most things, what one hand giveth, the other taketh away: the MCS-51's paucity of features, though an advantage in many respects, also quickly becomes a hindrance, and successfully compiling high-level code for this architecture is a cumbrous and involved process. In particular, the MCS-51 features a relatively minuscule series of memory spaces (including read-only code memory, stack and internal/external random access memory) by modern standards. As a result our C compiler, to have any sort of hope of successfully compiling realistic programs for embedded devices, ought to produce `tight' machine code. This is not simple and requires the use of optimisations.
For example, the MCS-51 features three unconditional jump instructions: \texttt{LJMP} and \texttt{SJMP}---`long jump' and `short jump' respectively---and an 11-bit oddity of the MCS-51, \texttt{AJMP}. Each of these three instructions expects arguments of different sizes and behaves in markedly different ways: \texttt{SJMP} may only perform a `local jump'; \texttt{LJMP} may jump to any address in the MCS-51's memory space and \texttt{AJMP} may jump to any address in the current memory page. Consequently, the size of each opcode is different, and to squeeze as much code as possible into the MCS-51's limited code memory, the smallest possible opcode that will suffice should be selected. The prototype CerCo C compiler does not attempt to select the smallest jump opcode in this manner, as this was thought to unnecessarily complicate the compilation chain, making the eventual translation and formalisation of the compiler into Matita much harder. Instead, the compiler targets a bespoke assembly language, similar to `real world' assembly languages, complete with pseudoinstructions including \texttt{Jmp} and \texttt{Call} instructions.
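The choice between the three jumps can be made concrete. The following Python sketch is ours, not part of the CerCo development; it uses the standard MCS-51 instruction sizes and ranges (\texttt{SJMP}: 2 bytes, signed 8-bit offset; \texttt{AJMP}: 2 bytes, within the current 2~KiB page; \texttt{LJMP}: 3 bytes, any 16-bit address) to pick the smallest opcode that can reach a target:

```python
# Sketch (not CerCo code): choosing the smallest MCS-51 unconditional jump.
# SJMP is 2 bytes with a signed 8-bit offset, AJMP is 2 bytes and stays
# within the current 2 KiB page, LJMP is 3 bytes and reaches any address.

def smallest_jump(pc: int, target: int) -> str:
    """Return the smallest opcode able to jump from `pc` to `target`.

    `pc` is the address of the jump instruction itself; both SJMP and
    AJMP are resolved relative to the address of the *next* instruction.
    """
    next_pc = pc + 2                      # SJMP and AJMP are both 2 bytes
    if -128 <= target - next_pc <= 127:
        return "SJMP"                     # relative jump, 2 bytes
    if (next_pc & 0xF800) == (target & 0xF800):
        return "AJMP"                     # same 2 KiB page, 2 bytes
    return "LJMP"                         # absolute 16-bit jump, 3 bytes

print(smallest_jump(0x0100, 0x0120))      # nearby target: SJMP
print(smallest_jump(0x0100, 0x0700))      # same page, too far for SJMP: AJMP
print(smallest_jump(0x0100, 0x0900))      # different page: LJMP
```

A real assembler must of course also account for the fact that choosing a size changes the addresses of every subsequent instruction, which is exactly the complication discussed below.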
Labels, conditional jumps to labels, a program preamble containing global data and a \texttt{MOV} instruction for moving this global data into the MCS-51's one 16-bit register also feature. This latter feature will ease any later consideration of separate compilation in the CerCo compiler. More generally, memory addresses can only be compared with other memory addresses. However, checking that memory addresses are only compared against each other at the assembly level is in fact undecidable. In short, we come to the shocking\footnote{For us, anyway.} realisation that, with optimisations, the full preservation of the semantics of all assembly programs is impossible. We believe that this revelation is significant for large formalisation projects that assume the existence of a correct assembler. Projects in this class include both the recent CompCert~\cite{compcert:2011,leroy:formal:2009} and seL4 formalisations~\cite{klein:sel4:2009,klein:sel4:2010}. Yet, the situation is even more complex than having to expand pseudoinstructions correctly. In particular, when formalising the assembler, we must make sure that the assembly process does not change the timing characteristics of an assembly program, for two reasons. First, the semantics of some functions of the MCS-51, notably I/O, depend on the microprocessor's clock. Changing how long a particular program takes to execute can affect the semantics of a program. However, in any realistic compiler (even the CompCert compiler!)
there is no guarantee that the program's time properties are preserved during the compilation process; a compiler's `optimisations' could, in theory, even conspire to degrade the concrete complexity of certain classes of programs. CerCo aims to expand the current state of the art by producing a compiler where this temporal degradation is guaranteed not to happen. Moreover, CerCo's approach lifts a program's timing information to the source (C language) level. This has the advantage of allowing a programmer to reason about a program's intensional properties directly on the source code that they write, not on the code that the compiler produces. In order to achieve this, CerCo imposes a cost model on programs or, more specifically, on simple blocks of instructions. This cost model is induced by the compilation process itself, and its non-compositional nature allows us to assign different costs to identical blocks of instructions depending on how they are compiled. In short, we aim to obtain a very precise costing for a program by embracing the compilation process, not ignoring it. This, however, complicates the proof of correctness for the compiler proper: for every translation pass from intermediate language to intermediate language, we must prove that not only has the meaning of a program been preserved, but also its concrete complexity characteristics.
This also applies for the translation from assembly language to machine code. Naturally, this raises a question: how do we assign an \emph{accurate} cost to a pseudoinstruction? As mentioned, conditional jumps at the assembly level can jump to a label appearing anywhere in the program. However, at the machine code level, conditional jumps are limited to jumping `locally', using a measly byte offset. To translate a jump to a label, a single conditional jump pseudoinstruction may be translated into a block of three real instructions as follows (here, \texttt{JZ} is `jump if accumulator is zero'):
\begin{displaymath}
\begin{array}{r@{\quad}l@{\;\;}l@{\qquad}c@{\qquad}l@{\;\;}l}
                & \mathtt{JZ}  & \mathtt{label}           &                      & \mathtt{JZ}   & \text{size of \texttt{SJMP} instruction} \\
                & \ldots       &                          & \text{translates to} & \mathtt{SJMP} & \text{size of \texttt{LJMP} instruction} \\
\mathtt{label:} & \mathtt{MOV} & \mathtt{A}\;\;\mathtt{B} & \Longrightarrow      & \mathtt{LJMP} & \text{address of \textit{label}} \\
                &              &                          &                      & \ldots        & \\
                &              &                          &                      & \mathtt{MOV}  & \mathtt{A}\;\;\mathtt{B}
\end{array}
\end{displaymath}
In the translation, if \texttt{JZ} fails, we fall through to the \texttt{SJMP} which jumps over the \texttt{LJMP}.
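The expansion strategy above can be sketched in a few lines. The following Python illustration is ours (not the Matita formalisation); it uses the layout from the displayed translation, where \texttt{JZ} and \texttt{SJMP} are 2 bytes and \texttt{LJMP} is 3:

```python
# Sketch (not the Matita formalisation): expanding a `JZ label'
# pseudoinstruction placed at address `pc` into real MCS-51 instructions.

def expand_jz(pc: int, target: int) -> list:
    """Expand `JZ target` into one instruction, or a block of three."""
    if -128 <= target - (pc + 2) <= 127:
        return [("JZ", target)]      # close enough: a single 2-byte JZ
    # Out of range: invert the control flow.  If the accumulator is zero,
    # JZ hops over the SJMP and lands on the LJMP, which reaches `target`;
    # otherwise we fall through to the SJMP, which hops over the LJMP.
    return [
        ("JZ",   pc + 4),            # skip the 2-byte SJMP, land on LJMP
        ("SJMP", pc + 7),            # skip the 3-byte LJMP
        ("LJMP", target),            # absolute jump to the label
    ]
```

Note that the single-instruction and three-instruction forms occupy 2 and 7 bytes respectively, which is precisely why the cost of a pseudoinstruction depends on how it is expanded.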
Naturally, if \texttt{label} is close enough, a conditional jump pseudoinstruction is mapped directly to a conditional jump machine instruction; the above translation only applies if \texttt{label} is not sufficiently local. This leaves the problem, addressed below, of calculating whether a label is indeed `close enough' for the simpler translation to be used. Crucially, the above translation demonstrates the difficulty in predicting how many clock cycles a pseudoinstruction will take to execute. A conditional jump may be mapped to a single machine instruction or a block of three. Perhaps more insidious is the realisation that the number of cycles needed to execute the instructions in the two branches of a translated conditional jump may be different. Depending on the particular MCS-51 derivative at hand, an \texttt{SJMP} could in theory take a different number of clock cycles to execute than an \texttt{LJMP}. These issues must also be dealt with in order to prove that the translation pass preserves the concrete complexity of assembly code, and that the semantics of a program using the MCS-51's I/O facilities does not change. We address this problem by parameterising the semantics over a cost model. We prove the preservation of concrete complexity only for the program-dependent cost model induced by the optimisation.
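To see how the two branches can diverge, consider the following Python sketch (ours; the cycle counts are placeholders standing in for a derivative's datasheet, not figures from any real datasheet):

```python
# Sketch: the cost of an expanded conditional jump depends on which branch
# is taken.  Cycle counts vary between MCS-51 derivatives; the values below
# are placeholders, not taken from any datasheet.

COST = {"JZ": 2, "SJMP": 2, "LJMP": 2}   # machine cycles (placeholder)

def branch_costs(expansion):
    """Cycle cost of the (taken, not-taken) paths through an expanded JZ."""
    if len(expansion) == 1:
        # Unexpanded JZ: both outcomes execute the same single instruction.
        return COST["JZ"], COST["JZ"]
    # Expanded form [JZ, SJMP, LJMP]:
    taken     = COST["JZ"] + COST["LJMP"]   # JZ fires, lands on the LJMP
    not_taken = COST["JZ"] + COST["SJMP"]   # JZ falls through to the SJMP
    return taken, not_taken
```

With the placeholder model the two paths happen to cost the same, but on a derivative where \texttt{SJMP} and \texttt{LJMP} differ in cycle count the two branches of the very same source-level jump would not, which is exactly why the semantics must be parameterised over the cost model rather than assuming a fixed one.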
Yet one more question remains: how do we decide whether to expand a jump into an \texttt{SJMP} or an \texttt{LJMP}? To understand, again, why this problem is not trivial, consider the following snippet of assembly code:
\begin{displaymath}
\begin{array}{r@{\qquad}r@{\quad}l@{\;\;}l@{\qquad}l}
\ldots
\end{array}
\end{displaymath}
Further, consider what happens if, instead of appearing at memory address \texttt{0x100}, the instruction at line 5 instead appeared \emph{just} beyond the size of code memory, and all other memory addresses were shifted accordingly. Now, in order to be able to successfully fit our program into the MCS-51's limited code memory, we are \emph{obliged} to shrink the \texttt{LJMP} occurring at line 5. That is, the shrinking process is not just related to the optimisation of generated machine code but also the completeness of the assembler itself. How we went about resolving this problem affected the shape of our proof of correctness for the whole assembler in a rather profound way. We first attempted to synthesise a solution bottom up: starting with no solution, we gradually refine a solution using the same functions that implement the jump expansion process.
Using this technique, solutions can fail to exist, and the proof of correctness for the assembler quickly descends into a diabolical quagmire. Abandoning this attempt, we instead split the `policy'---the decision over how any particular jump should be expanded---from the implementation that actually expands assembly programs into machine code. Assuming the existence of a correct policy, we proved the implementation of the assembler correct. Further, we proved that the assembler fails to assemble an assembly program if and only if a correct policy does not exist. This is achieved by means of dependent types: the assembly function is total over a program, a policy and the proof that the policy is correct for that program. Policies do not exist in only a limited number of circumstances: namely, if a pseudoinstruction attempts to jump to a label that does not exist, or the program is too large to fit in code memory, even after shrinking jumps according to the best policy. The first circumstance is an example of a serious compiler error, as an ill-formed assembly program was generated, and does not (and should not) count as a mark against the completeness of the assembler.
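A policy of the kind just described can be computed by the classic fixed-point approach to the `branch displacement' problem. The Python sketch below is an illustration of that general technique, not CerCo's actual algorithm: every jump starts short, and any jump that cannot reach its target is grown, repeatedly, until the assignment stabilises.

```python
# Sketch of a fixed-point jump-expansion policy (the classic approach to
# the branch displacement problem; not CerCo's implementation).  A program
# is a list of ("JMP", label) or ("OP",) entries, where a label is an index
# into the program.  Growing one jump shifts later addresses and can push
# other jumps out of range, so we iterate until nothing changes.

SIZES = {"SJMP": 2, "LJMP": 3, "OP": 1}

def fix_policy(program):
    """Map each jump's index to "SJMP" or "LJMP"; jumps only ever grow."""
    policy = {i: "SJMP" for i, ins in enumerate(program) if ins[0] == "JMP"}
    changed = True
    while changed:
        changed = False
        # Recompute every instruction's address under the current policy.
        addr, pc = [], 0
        for i, ins in enumerate(program):
            addr.append(pc)
            pc += SIZES[policy[i]] if ins[0] == "JMP" else SIZES["OP"]
        # Grow any short jump that cannot reach its target.
        for i, size in policy.items():
            if size == "SJMP":
                offset = addr[program[i][1]] - (addr[i] + SIZES["SJMP"])
                if not -128 <= offset <= 127:
                    policy[i] = "LJMP"
                    changed = True
    return policy
```

Because jumps are only ever enlarged and never shrunk again, the iteration is monotone and must terminate; starting from all-long jumps and shrinking, by contrast, mirrors the bottom-up refinement the text describes, where solutions can fail to exist.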
We plan to employ dependent types in CerCo in order to restrict the domain of the compiler to those programs that are `semantically correct' and should be compiled. In particular, in CerCo we are also interested in the completeness of the compilation process, whereas previous formalisations only focused on correctness. The rest of this paper is a detailed description of our proof.
% ---------------------------------------------------------------------------- %
\begin{lstlisting}
lemma assembly_ok: $\forall$p,pol,assembled.
  let $\langle$labels, costs$\rangle$ := build_maps p pol in
  $\langle$assembled,costs$\rangle$ = assembly p pol $\rightarrow$
  let cmem := load_code_memory assembled in
  let preamble := $\pi_1$ p in
  let dlbls := construct_datalabels preamble in
  let addr := address_of_word_labels_code_mem ($\pi_2$ p) in
  let lk_lbls := λx. sigma p pol (addr x) in
  let lk_dlbls := λx. lookup $\ldots$ x datalabels (zero ?) in
  $\forall$ppc, pi, newppc.
  $\forall$prf: $\langle$pi, newppc$\rangle$ = fetch_pseudo_instruction ($\pi_2$ p) ppc.
  $\forall$len, assm.
    let spol := sigma p pol ppc in
    let spol_len := spol + len in
    encoding_check cmem spol spol_len assm $\wedge$
    sigma p pol newppc = spol_len.
\end{lstlisting}
Suppose also we assemble our program \texttt{p} in accordance with a policy \texttt{pol} to obtain \texttt{assembled}.
Here, we perform a `sanity check' to ensure that the two cost label maps generated are identical, before loading the assembled program into code memory \texttt{cmem}. Then, for every pseudoinstruction \texttt{pi}, pseudo program counter \texttt{ppc} and new pseudo program counter \texttt{newppc}, such that we obtain \texttt{pi} and \texttt{newppc} from fetching a pseudoinstruction at \texttt{ppc}, we check that assembling this pseudoinstruction produces the correct number of machine code instructions, and that the new pseudo program counter \texttt{newppc} has the value expected of it. Theorem \texttt{fetch\_assembly} establishes that the \texttt{fetch} and \texttt{assembly1} functions interact correctly. It is interesting to compare our work to an `industrial grade' assembler for the MCS-51: SDCC~\cite{sdcc:2011}. SDCC is the only open source C compiler that targets the MCS-51 instruction set. It appears that all pseudojumps in SDCC assembly are expanded to \texttt{LJMP} instructions, the worst possible jump expansion policy from an efficiency point of view. Note that this policy is the only possible policy \emph{in theory} that can preserve the semantics of an assembly program during the assembly process. However, this comes at the expense of assembler completeness: the generated program may be too large to fit into code memory. In this respect, there is a trade-off between the completeness of the assembler and the efficiency of the assembled program. The definition and proof of a complete, optimal (in the sense that jump pseudoinstructions are expanded to the smallest possible opcode) and correct jump expansion policy is ongoing work.
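The cost of SDCC's all-\texttt{LJMP} policy is easy to quantify in code-memory terms. The arithmetic below is a sketch of ours using the standard MCS-51 instruction sizes; it simply compares the space consumed by the jumps of a program under a uniform policy.

```python
# Sketch: code-memory cost of a uniform jump expansion policy, using the
# standard MCS-51 sizes (SJMP: 2 bytes, LJMP: 3 bytes).  Illustrative
# arithmetic only; a real program mixes both, as chosen by the policy.

SIZES = {"SJMP": 2, "LJMP": 3}

def jump_bytes(n_jumps: int, policy: str) -> int:
    """Total bytes consumed by `n_jumps` jumps under one uniform policy."""
    return n_jumps * SIZES[policy]

# A program with 1000 locally-reachable jumps wastes a kilobyte of the
# MCS-51's scarce code memory if every jump is expanded to LJMP:
waste = jump_bytes(1000, "LJMP") - jump_bytes(1000, "SJMP")
print(waste)
```

This is the completeness/efficiency trade-off in miniature: the uniform long-jump policy always preserves jump reachability, but the extra bytes may be precisely what pushes a program past the end of code memory.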