# Changeset 565

Timestamp:
Feb 17, 2011, 6:30:14 PM
Message:

down to 16 pages after a bit of rewriting

File:
1 edited

Specifications are therefore also given at a high level and correctness can be proved by reasoning automatically or interactively on the program's source code. The code that is actually run, however, is not the high level source code that we reason on, but the object code that is generated by the compiler. A few questions now arise: \begin{itemize*} \item \end{itemize*} These questions, and others like them, motivate a current `hot topic' in computer science research: \emph{compiler verification} (for instance~\cite{leroy:formal:2009,chlipala:verified:2010}, and many others). So far, the field has only been focused on the first and last questions. Much attention has been placed on verifying compiler correctness with respect to extensional properties of programs, which are easily preserved during compilation; it is sufficient to completely preserve the denotational semantics of the input program.
If we consider intensional properties of programs---space, time, and so forth---the situation is more complex. To express these properties, and reason about them, we must adopt a cost model that assigns a cost to single instructions, or to blocks of instructions. A compositional cost model---assigning the same cost to all occurrences of one instruction---would be ideal. However, compiler optimisations are inherently non-compositional: each occurrence of a high level instruction may be compiled in a different way depending on its context. Therefore both the cost model and intensional specifications are affected by the compilation process. In the CerCo project (`Certified Complexity')~\cite{cerco:2011} we approach the problem of reasoning about intensional properties of programs as follows. We are currently developing a compiler that induces a cost model on high level source code. Costs are assigned to each block of high level instructions by considering the costs of the corresponding blocks of compiled code. The cost model is therefore inherently non-compositional, but has the potential to be extremely \emph{precise}, capturing a program's \emph{realistic} cost. That is, the compilation process is taken into account, not ignored. A prototype compiler, where no approximation of the cost is provided, has been developed.
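The flavour of such an induced cost model can be conveyed with a small O'Caml sketch (purely illustrative: the labels, costs, and function names below are our own invention, not CerCo's actual interface). Each block of source instructions is mapped to the cycle count of the object code generated for it, and the cost of an execution is summed over the blocks traversed:

```ocaml
(* Illustrative only: a non-compositional cost model mapping each
   source-level block label to the clock cycles of the object code
   generated for it.  Labels and costs are invented for the example. *)
type block_label = string

let cost_of_block : (block_label * int) list =
  [ ("loop_header", 4); ("loop_body", 12); ("exit", 2) ]

(* The cost of an execution trace is the sum of the costs of the
   blocks traversed. *)
let cost_of_trace (trace : block_label list) : int =
  List.fold_left (fun acc l -> acc + List.assoc l cost_of_block) 0 trace
```

Note that the same source construct may receive a different cost in each block it occurs in, reflecting the non-compositionality discussed above.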
(The technical details of the cost model are explained in~\cite{amadio:certifying:2010}.) We believe that our approach is applicable to certifying real time programs. A user can certify that `deadlines' are met whilst wringing as many clock cycles from the processor---using a cost model that does not over-estimate---as possible. We also see our approach as being relevant to compiler verification (and construction) itself. An optimisation specified only extensionally is only half specified; though the optimisation may preserve the denotational semantics of a program, there is no guarantee that any intensional properties of the program will be improved. Another potential application is toward completeness and correctness of the compilation process in the presence of space constraints. A compiler could potentially reject a source program targeting an embedded system when the size of the compiled code exceeds the available ROM size.
Preservation of a program's semantics may only be required for those programs that do not exhaust the stack or heap. Hence the statement of completeness of the compiler must take into account a realistic cost model. With the CerCo methodology, we assume we can assign to the object code exact and realistic costs for sequential blocks of instructions. This is possible with modern processors (see~\cite{bate:wcet:2011,yan:wcet:2008} for instance) but difficult, as the execution of a program has an influence on the speed of processing. Caching, memory effects, and other advanced features such as branch prediction all have a profound effect on execution speeds. For this reason CerCo decided to focus on 8-bit microprocessors. These are still widely used in embedded systems, with the advantage of an easily predictable cost model due to their relative paucity of features.
We have fully formalised an executable formal semantics of a family of 8-bit Freescale microprocessors~\cite{oliboni:matita:2008}, and provided a similar executable formal semantics for the MCS-51 microprocessor. The latter is what we describe in this paper. The focus of the formalisation has been on capturing the intensional behaviour of the processor. However, the design of the MCS-51 itself has caused problems in our formalisation. For example, the MCS-51 has a highly unorthogonal instruction set. To cope with this unorthogonality, and to produce an executable specification, we have exploited the dependent type system of Matita, an interactive proof assistant.

\subsection{The 8051/8052}

The 8051 has interrupts disabled by default. The programmer is free to handle serial input and output manually, by poking serial flags in the SFRs. `Exceptional circumstances' that would otherwise trigger an interrupt on more modern processors (e.g.\ division by zero) are also signalled by setting flags.
%\begin{figure}[t]
In Section~\ref{sect.validation} we discuss how we validated the design and implementation of our emulator to ensure that what we formalised was an accurate model of an MCS-51 series microprocessor. In Section~\ref{sect.related.work} we describe previous work, with an eye toward describing its relation with the work described herein. In Section~\ref{sect.conclusions} we conclude.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Our implementation progressed in two stages. We began with an emulator written in O'Caml.
We used this to `iron out' any bugs in our design and implementation within O'Caml's more permissive type system. O'Caml's ability to perform file I/O also eased debugging and validation. Once we were happy with the performance and design of the O'Caml emulator, we moved to the Matita formalisation. Matita's syntax is lexically similar to O'Caml's. This eased the translation, as code was merely copied with minor modifications. However, several major issues had to be addressed when moving from O'Caml to Matita. These are now discussed.

\label{subsect.representing.memory}
The MCS-51 has numerous disjoint memory spaces addressed by pointers of different sizes. In our prototype implementation, we use a map data structure (from O'Caml's standard library) for each space.
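As a rough sketch of this prototype representation (field and function names here are ours, not the emulator's actual definitions), each memory space can be an O'Caml map from addresses to bytes, with uninitialized locations reading as a default value:

```ocaml
(* Rough sketch: one standard library map per disjoint memory space,
   keyed by address.  Identifiers are invented for illustration. *)
module AddrMap = Map.Make (Int)

type memory = {
  low_internal_ram  : int AddrMap.t;  (* 7-bit pointers  *)
  high_internal_ram : int AddrMap.t;  (* 8-bit pointers  *)
  external_ram      : int AddrMap.t;  (* 16-bit pointers *)
}

let empty = {
  low_internal_ram  = AddrMap.empty;
  high_internal_ram = AddrMap.empty;
  external_ram      = AddrMap.empty;
}

(* Uninitialized locations read as zero, mirroring the emulator's
   default-value convention for memory lookups. *)
let read_external mem addr =
  match AddrMap.find_opt addr mem.external_ram with
  | Some v -> v
  | None -> 0

let write_external mem addr v =
  { mem with external_ram = AddrMap.add addr v mem.external_ram }
```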
Matita's standard library is small, and does not contain a generic map data structure. We had the opportunity of crafting a dependently typed special-purpose data structure for the job, enforcing the correspondence between the size of a pointer and the size of the corresponding memory space. We assumed that large swathes of memory would often be uninitialized, and tried to represent them concisely using stubs. We picked a modified form of trie of fixed height $h$. Paths are represented by bitvectors (already used in our implementation for addresses and registers) of length $h$:
\begin{lstlisting}
inductive BitVectorTrie (A: Type[0]): nat $\rightarrow$ Type[0] ≝
| Stub: ∀n. BitVectorTrie A n.
\end{lstlisting}
\texttt{Stub} is a constructor that can appear at any point in a trie. It represents `uninitialized data'. Performing a lookup in memory is now straightforward. We traverse a path, and if we encounter a \texttt{Stub}, we return a default value\footnote{All manufacturer data sheets that we consulted were silent on the subject of what should be returned if we attempt to access uninitialized memory.  We defaulted to simply returning zero, though our \texttt{lookup} function is parametric in this choice.
We do not believe that this is an outrageous decision, as SDCC for instance generates code which first `zeroes out' all memory in a preamble before executing the program proper.  This is in line with the C standard, which guarantees that all global variables will be zero initialized piecewise.}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Introducing pseudoinstructions had the effect of simplifying a C compiler---another component of the CerCo project---that was being implemented in parallel with our implementation. To see why, consider the fact that the MCS-51's instruction set has numerous instructions for unconditional and conditional jumps to memory locations. For instance, the instructions \texttt{AJMP}, \texttt{JMP} and \texttt{LJMP} all perform unconditional jumps. However, these instructions differ in the maximum size of the offset of the jump that can be performed. Further, all jump instructions require a concrete memory address---to jump to---to be specified. Compilers that support separate compilation cannot directly compute these offsets and select the appropriate jump instructions.
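To make the size trade-off concrete, the following sketch (ours, and deliberately simplified: real MCS-51 encodings compute relative offsets from the address of the instruction that follows the jump) picks the smallest jump that can reach a known target:

```ocaml
(* Deliberately simplified jump selection.  This sketch only captures
   the offset-size trade-off between the jump instructions. *)
type jump =
  | SJMP of int   (* 8-bit signed relative offset        *)
  | AJMP of int   (* 11-bit address within the same page *)
  | LJMP of int   (* full 16-bit absolute address        *)

let select_jump ~pc ~target =
  let offset = target - pc in
  if offset >= -128 && offset <= 127 then SJMP offset
  else if pc land 0xF800 = target land 0xF800 then AJMP (target land 0x07FF)
  else LJMP target
```

A compiler that emits jumps to as-yet-unknown addresses cannot make this choice, which is why the decision is deferred to the assembler.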
These operations are also burdensome for compilers that do not do separate compilation, and are thus handled by the assembler, as we decided to do. While introducing pseudoinstructions, we also introduced labels for locations to jump to, and for global data. To specify global data via labels, we introduced a preamble before the program where each label is associated with the size of the space reserved for its data. A pseudoinstruction \texttt{Mov} moves (16-bit) data stored at a label into the (16-bit) register \texttt{DPTR}. Our pseudoinstructions and labels induce an assembly language similar to that of SDCC. All pseudoinstructions and labels are `assembled away' prior to program execution. Jumps are computed in two stages. A map associating memory addresses to labels is built, before pseudojumps are replaced with concrete jumps to the correct address. The algorithm currently implemented does not try to minimize the object code size by always picking the shortest possible jump instruction. A better algorithm is left for future work.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
This record neatly encapsulates the current memory contents, the program counter, the state of the current SFRs, and so on. Here the MCS-51 memory model is implemented using four disjoint memory spaces plus the SFRs.
From the programmer's point of view, what matters are addressing modes, which are in a many-to-many relation with the memory spaces. \texttt{DIRECT} addressing can be used to address either low internal RAM (if the first bit is 0) or the SFRs (if the first bit is 1), for instance. This is why \texttt{DIRECT} uses 8-bit addresses but pointers to the low internal RAM only use 7 bits. The complexity of the memory model is captured in the \texttt{get\_arg\_XX} and \texttt{set\_arg\_XX} functions that get and set data of size \texttt{XX} from memory, considering all addressing modes.
%Overlapping, and checking which addressing modes can be used to address particular memory spaces, is handled through numerous \texttt{get\_arg\_XX} and \texttt{set\_arg\_XX} (for 1, 8 and 16 bits) functions.
\begin{lstlisting}
definition assembly1: instruction $\rightarrow$ list Byte
\end{lstlisting}
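The two-stage computation of jumps described earlier can be sketched as follows (a simplification with invented types, and with fixed pseudoinstruction sizes, whereas the real assembler computes the size of each instruction):

```ocaml
(* Simplified two-pass assembly with invented types. *)
type pseudo = Label of string | PseudoJmp of string | Op of int
type instr  = Jmp of int | Opc of int

let size = function Label _ -> 0 | PseudoJmp _ -> 3 | Op _ -> 1

(* Pass one: build the map associating labels to memory addresses. *)
let label_map prog =
  fst (List.fold_left
         (fun (map, addr) i ->
            match i with
            | Label l -> ((l, addr) :: map, addr)
            | i       -> (map, addr + size i))
         ([], 0) prog)

(* Pass two: replace each pseudojump by a concrete jump to the
   address its label resolves to, dropping the labels. *)
let assemble prog =
  let map = label_map prog in
  List.filter_map
    (function
      | Label _     -> None
      | PseudoJmp l -> Some (Jmp (List.assoc l map))
      | Op b        -> Some (Opc b))
    prog
```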
An assembly program---comprising a preamble containing global data and a list of pseudoinstructions---is assembled using \texttt{assembly}. Pseudoinstructions and labels are eliminated in favour of instructions from the MCS-51 instruction set. A map associating memory locations and cost labels (see Subsection~\ref{subsect.computation.cost.traces}) is also produced.
\begin{lstlisting}
definition assembly:
let rec execute (n: nat) (s: Status) on n: Status := ...
\end{lstlisting}
This differs from the O'Caml emulator, which executed a program indefinitely. A callback function was also accepted as an argument, which could `witness' the execution as it happened, providing a print-out of the processor state and other debugging information. Due to Matita's requirement that all functions be strongly normalising, \texttt{execute} cannot execute a program indefinitely. An alternative would be to produce an infinite stream of statuses representing an execution trace. Matita supports infinite streams through co-inductive types.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Have lemmas proving that if an element is a member of a sub, then it is a member of a superlist, and so on
The final, missing component is a pair of type coercions, from \texttt{addressing\_mode} to \texttt{subaddressing\_mode} and from \texttt{subaddressing\_mode} to \texttt{Type$\lbrack0\rbrack$}, respectively.
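The bounded \texttt{execute} function above is an instance of the familiar `fuel' pattern, sketched here in O'Caml (with a toy status record; \texttt{execute\_1} stands in for the emulator's single fetch-decode-execute step):

```ocaml
(* The `fuel' pattern: run at most n steps, so the function is
   structurally recursive on n and trivially terminating.  The status
   record and step function below are toy stand-ins. *)
type status = { pc : int; clock : int }

(* Stand-in for one fetch-decode-execute step. *)
let execute_1 (s : status) : status =
  { pc = s.pc + 1; clock = s.clock + 2 }

let rec execute (n : int) (s : status) : status =
  if n = 0 then s else execute (n - 1) (execute_1 s)
```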
The first is a forgetful coercion, while the second opens a proof obligation wherein we must prove that the provided value is in the admissible set. These coercions were first introduced in PVS to implement subset types~\cite{pvs?}, and later in Coq as an additional mechanism~\cite{russell}. In Matita all coercions can open proof obligations. Proof obligations impel us to state and prove a few auxiliary lemmas related
\begin{lstlisting}
| _ $\Rightarrow$ $\lambda$_: False. $\bot$ ] $~$(subaddressing_modein $\ldots$ a).
\end{lstlisting}
We give a proof (the expression \texttt{(subaddressing\_modein} $\ldots$ \texttt{a)}) that the argument $a$ is in the set $\llbracket$ \texttt{dptr} $\rrbracket$ to the match expression. In every case but \texttt{DPTR}, the proof is a proof of \texttt{False}, and the system opens a proof obligation $\bot$ that can be discarded using \emph{ex falso}. Attempting to match against a disallowed addressing mode (replacing \texttt{False} with \texttt{True} in the branch) produces a type-error. Other dependently and non-dependently typed solutions we tried were clumsy in practice. As we need a large number of different combinations of addressing modes to describe the whole instruction set, it is unfeasible to declare a data type for each one of these combinations. The current solution is the one that best matches the corresponding O'Caml code, to the point that the translation from O'Caml to Matita is almost syntactical.
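For comparison, the O'Caml prototype can restrict the addressing modes an instruction accepts using polymorphic variants, as in the following sketch (constructor names are ours): supplying a disallowed mode is rejected statically, playing the role of the \texttt{False} branches in the Matita encoding.

```ocaml
(* Constructor names are invented.  The variant type of each
   instruction argument lists exactly the addressing modes that
   instruction accepts; anything else is a static type error. *)
type reg = R0 | R1

type instruction =
  | MOVX of [ `INDIRECT of reg | `EXT_INDIRECT ]
  | INC  of [ `ACC | `DIRECT of int | `INDIRECT of reg ]

(* MOVX (`DIRECT 0x30) would be rejected by the type checker. *)
let examples : instruction list =
  [ MOVX (`INDIRECT R0); INC (`DIRECT 0x30); INC `ACC ]
```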
We would like to investigate the possibility of changing the code extraction procedure of Matita to recognise this programming pattern and output O'Caml code using polymorphic variants.
% Talk about extraction to O'Caml code, which hopefully will allow us to extract back to using polymorphic variants, or when extracting vectors we could extract using phantom types
The O'Caml emulator has code for handling timers, asynchronous I/O and interrupts (these are not yet ported to the Matita emulator). All three of these features interact with each other in subtle ways. For instance, interrupts can `fire' when an input is detected on the processor's UART port, and, in certain modes, timers reset when a high signal is detected on one of the MCS-51's communication pins.
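The interaction between the processor clock and delayed asynchronous inputs can be approximated with the following sketch (our own simplification of the informal description: the environment offers an input $i$ at time $\tau$, and the processor sees it only after a propagation delay $\epsilon$):

```ocaml
(* Our simplification of a continuation-style environment: an event
   (tau, i, eps, k) means the environment sends input i at time tau;
   the processor sees it only from time tau + eps onwards, and k is
   the rest of the environment. *)
type input = Line_high | Line_low

type env = Env of (int * input * int * env) option

type status = { clock : int; pending : (int * input) list }

(* Consume an environment event once the clock has reached its send
   time, recording the time at which the input becomes visible. *)
let step_env (s : status) (Env e : env) : status * env =
  match e with
  | Some (tau, i, eps, k) when tau <= s.clock ->
      ({ s with pending = (tau + eps, i) :: s.pending }, k)
  | _ -> (s, Env e)

(* Inputs visible to the processor at the current clock value. *)
let visible s = List.filter (fun (t, _) -> t <= s.clock) s.pending
```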
To accurately model timers and I/O, we add an unbounded integral field \texttt{clock} to the central \texttt{status} record. Further, if $\pi_1(k) = \mathtt{Some}~\langle \tau',i,\epsilon,k'\rangle$, then at time $\tau'$ the environment will send the asynchronous input $i$ to the processor and the status will be updated with the continuation $k'$. This input is visible to the processor only at time $\tau' + \epsilon$. The time required to perform an I/O operation is partially specified in the data sheets of the UART module. However, this computation is complex, so we prefer to abstract over it and leave the computation of the delay time to the environment. We use only the P1 and P3 lines, despite the MCS-51 having four output lines, P0--P3.

\label{sect.validation}
We spent considerable effort attempting to ensure that what we have formalised is an accurate model of the MCS-51 microprocessor. First, we made use of multiple data sheets, each from a different semiconductor manufacturer. This helped us spot errors in the specification of the processor's instruction set, and its behaviour, for instance in a data sheet from Philips. Our use of dependent types will also help to maintain invariants when we prove the correctness of the CerCo prototype compiler.
Finally, Sarkar et al.~\cite{sarkar:semantics:2009} provide an executable semantics for x86-CC multiprocessor machine code. This machine code exhibits a high degree of non-uniformity similar to the MCS-51. However, only a small subset of the instruction set is considered, and they over-approximate the possibilities of unorthogonality of the instruction set, largely dodging the problems we had to face. Further, it seems that the definition of the decode function is potentially error prone.

\label{sect.conclusions}
The CerCo project is interested in the certification of a compiler for C that induces a precise cost model on the source code. Our cost model assigns costs to blocks of instructions by tracing the way that blocks are compiled, and by computing exact costs on generated assembly code. To perform this accurately, we have provided an executable semantics for the MCS-51 family of processors, better known as 8051/8052.
The formalisation was done twice, first in O'Caml and then in Matita, and captures the exact timings of the processor. Moreover, the O'Caml formalisation also considers timers and I/O. Adding support for I/O and timers in Matita is on-going work that should not present any major problems; it was delayed only because the addition is not immediately useful for the formalisation of the CerCo compiler. The formalisation is done at machine level and not at assembly level; we also formalise fetching and decoding. We separately provide an assembly language, enhanced with labels and pseudoinstructions, and an assembler from this language to machine code. We introduce cost labels in the machine language to relate the data flow of the assembly program to that of the C source language, in order to associate costs to the C program. For the O'Caml version, we provide a parser and pretty printer from code memory to Intel HEX format. Hence we can perform testing on programs compiled using any free or commercial compiler. Our main difficulty in formalising the MCS-51 was the unorthogonality of its memory model and instruction set. These problems are easily handled in O'Caml by using advanced language features like polymorphic variants and phantom types, simulating Generalised Algebraic Data Types. In Matita, we use dependent types to recover the same flexibility, to reduce spurious partiality, and to grant invariants that will be useful in the formalisation of the CerCo compiler. The formalisation has been partially verified by computing execution traces on selected programs and comparing them with an existing emulator. All instructions have been tested at least once, but we have not yet pushed testing further, for example with random testing or by using development boards. I/O in particular has not been tested yet, and it is currently unclear how to provide exhaustive testing in the presence of I/O.
Finally, we are aware of having over-specified the processor in several places, by fixing a behaviour hopefully consistent with the real machine where manufacturer data sheets are ambiguous or under-specified.

\bibliography{itp-2011.bib}