{\setlength{\fboxsep}{5pt}
	\setlength{\mylength}{\linewidth}%
	\addtolength{\mylength}{-2\fboxsep}%
	\addtolength{\mylength}{-2\fboxrule}%
	\Sbox
	\minipage{\mylength}%
		\setlength{\abovedisplayskip}{0pt}%
		\setlength{\belowdisplayskip}{0pt}%
	}%
	{\endminipage\endSbox
		\[\fbox{\TheSbox}\]}
{keywords={definition,coercion,lemma,theorem,remark,inductive,record,qed,let,in,rec,match,return,with,Type,try,on,to},
 morekeywords={[2]whd,normalize,elim,cases,destruct},
 morekeywords={[3]type,of,val,assert,let,function},
 mathescape=true,
}
	keywordstyle=\color{red}\bfseries,
	keywordstyle=[2]\color{blue},
	keywordstyle=[3]\color{blue}\bfseries,
	commentstyle=\color{green},
	stringstyle=\color{blue},
	showspaces=false,showstringspaces=false}
\author{Dominic P. Mulligan\thanks{The project CerCo acknowledges the financial support of the Future and
Emerging Technologies (FET) programme within the Seventh Framework
Programme for Research of the European Commission, under FET-Open grant
number: 243881} \and Claudio Sacerdoti Coen$^\star$}
\authorrunning{D. P. Mulligan and C. Sacerdoti Coen}
\title{An executable formalisation of the MCS-51 microprocessor in Matita}
\titlerunning{An executable formalisation of the MCS-51}
\institute{Dipartimento di Scienze dell'Informazione, Universit\`a di Bologna}
We summarise our formalisation of an emulator for the MCS-51 microprocessor in the Matita proof assistant.
The MCS-51 is a widely used 8-bit microprocessor, especially popular in embedded devices.
We proceeded in two stages, first implementing in O'Caml a prototype emulator, where bugs could be `ironed out' quickly.
We then ported our O'Caml emulator to Matita's internal language.
Though mostly straightforward, this porting presented multiple problems.
Of particular interest is how we handle the extreme non-orthogonality of the MCS-51's instruction set.
In O'Caml, this was handled through heavy use of polymorphic variants.
In Matita, we achieve the same effect through a non-standard use of dependent types.
Both the O'Caml and Matita emulators are `executable'.
Assembly programs may be animated within Matita, producing a trace of instructions executed.
Our formalisation is a major component of the ongoing EU-funded CerCo project.
% SECTION                                                                      %
Formal methods are designed to increase our confidence in the design and implementation of software (and hardware).
Ideally, we would like all software to come equipped with a formal specification, along with a proof of correctness that the software meets this specification.
Today, the majority of programs are written in high-level languages and then compiled into low-level ones.
Specifications are therefore also given at a high level, and correctness can be proved by reasoning automatically or interactively on the program's source code.
The code that is actually run, however, is not the high-level source code that we reason on, but the object code that is generated by the compiler.
A few simple questions now arise:
What properties are preserved during compilation?
What properties are affected by the compilation strategy?
To what extent can you trust your compiler to preserve those properties?
These questions, and others like them, motivate a current `hot topic' in computer science research: \emph{compiler verification} (see, for instance,~\cite{leroy:formal:2009, chlipala:verified:2010}, and many others).
So far, the field has focused on the first and last questions only.
In particular, much attention has been placed on verifying compiler correctness with respect to extensional properties of programs, which are easily preserved during compilation; it is sufficient to completely preserve the denotational semantics of the input program.
However, if we consider intensional properties of programs---such as the space, time or energy spent in computation and transmission of data---the situation is more complex.
To even be able to express these properties, and to be able to reason about them, we are forced to adopt a cost model that assigns a cost to individual instructions, or to blocks of instructions.
Ideally, we would like to have a compositional cost model that assigns the same cost to all occurrences of one instruction.
However, compiler optimisations are inherently non-compositional: each occurrence of a high-level instruction is usually compiled in a different way according to the context it finds itself in.
Therefore both the cost model and intensional specifications are affected by the compilation process.
In the current EU project CerCo (`Certified Complexity')~\cite{cerco:2011} we approach the problem of reasoning about intensional properties of programs as follows.
We are currently developing a compiler that induces a cost model on the high-level source code.
Costs are assigned to each block of high-level instructions by considering the costs of the corresponding blocks of compiled object code.
The cost model is therefore inherently non-compositional.
However, the model has the potential to be extremely \emph{precise}, capturing a program's \emph{realistic} cost, by taking into account, rather than ignoring, the compilation process.
A prototype compiler, where no approximation of the cost is provided, has been developed.
(The full technical details of the CerCo cost model are explained in~\cite{amadio:certifying:2010}.)
We believe that our approach is especially applicable to certifying real-time programs.
Here, a user can certify that all `deadlines' are met whilst wringing as many clock cycles from the processor---using a cost model that does not over-estimate---as possible.
Further, we see our approach as being relevant to the field of compiler verification (and construction) itself.
For instance, an optimisation specified only extensionally is only half specified; though the optimisation may preserve the denotational semantics of a program, there is no guarantee that any intensional properties of the program, such as space or time usage, will be improved.
Another potential application is toward completeness and correctness of the compilation process in the presence of space constraints.
Here, a compiler could potentially reject a source program targeting an embedded system when the size of the compiled code exceeds the available ROM size.
Moreover, preservation of a program's semantics may only be required for those programs that do not exhaust the stack or heap.
Hence the statement of completeness of the compiler must take into account a realistic cost model.
In the methodology proposed in CerCo we assume we are able to compute, on the object code, exact and realistic costs for sequential blocks of instructions.
With modern processors, though possible~\cite{??,??,??}, it is difficult to compute exact costs or to reasonably approximate them.
This is because the execution of a program itself has an influence on the speed of processing.
For instance, caching, memory effects and other advanced features such as branch prediction all have a profound effect on execution speeds.
For this reason CerCo decided to focus on 8-bit microprocessors.
These are still widely used in embedded systems, and have the advantage of an easily predictable cost model due to the relative sparsity of features that they possess.
In particular, we have fully formalised an executable formal semantics of a family of 8-bit Freescale microprocessors~\cite{oliboni}, and provided a similar executable formal semantics for the MCS-51 microprocessor.
The latter work is what we describe in this paper.
The main focus of the formalisation has been on capturing the intensional behaviour of the processor.
However, the design of the MCS-51 itself has caused problems in our formalisation.
For example, the MCS-51 has a highly unorthogonal instruction set.
To cope with this unorthogonality, and to produce an executable specification, we have exploited the dependent type system of Matita, an interactive proof assistant.
\subsection{The 8051/8052}
The MCS-51 is an 8-bit microprocessor introduced by Intel in the late 1970s.
Commonly called the 8051, in the three decades since its introduction the processor has become a highly popular target for embedded systems engineers.
Further, the processor, its immediate successor the 8052, and many derivatives are still manufactured \emph{en masse} by a host of semiconductor suppliers.
The 8051 is a well-documented processor, and has the additional support of numerous open source and commercial tools, such as compilers for high-level languages and emulators.
For instance, the open source Small Device C Compiler (SDCC) recognises a dialect of C~\cite{sdcc:2010}, and other compilers targeting the 8051 for BASIC, Forth and Modula-2 are also extant.
An open source emulator for the processor, MCU-8051 IDE, is also available~\cite{mcu8051ide:2010}.
Both MCU-8051 IDE and SDCC were used profitably in the implementation of our formalisation.
\caption{High level overview of the 8051 memory layout}
The 8051 has a relatively straightforward architecture, unencumbered by advanced features of modern processors, making it an ideal target for formalisation.
A high-level overview of the processor's memory layout is provided in Figure~\ref{fig.memory.layout}.
Processor RAM is divided into numerous segments, with the most prominent division being between internal and (optional) external memory.
Internal memory, commonly provided on the die itself with fast access, is composed of 256 bytes; however, in direct addressing mode, half of these addresses are overloaded with 128 bytes of memory-mapped Special Function Registers (SFRs) which control the operation of the processor.
Internal RAM (IRAM) is further divided into eight general purpose registers (R0--R7).
These sit in the first eight bytes of IRAM, though can be programmatically `shifted up' as needed.
Bit memory, followed by a small amount of stack space, resides in the memory space immediately after the register banks.
What remains of the IRAM may be treated as general purpose memory.
A schematic view of the IRAM layout is provided in Figure~\ref{fig.iram.layout}.
External RAM (XRAM), limited to a maximum size of 64 kilobytes, is optional, and may be provided on or off chip, depending on the manufacturer.
XRAM is accessed using a dedicated instruction, and requires sixteen bits to address fully.
External code memory (XCODE) is often stored in the form of an EPROM, and limited to 64 kilobytes in size.
However, depending on the particular manufacturer and processor model, a dedicated on-die read-only memory area for program code (ICODE) may also be supplied.
Memory may be addressed in numerous ways: immediate, direct, indirect, external direct and code indirect.
As the latter two addressing modes hint, there are some restrictions enforced by the 8051 and its derivatives on which addressing modes may be used with specific types of memory.
For instance, the 128 bytes of extra internal RAM that the 8052 features cannot be addressed using indirect addressing; rather, external (in)direct addressing must be used.
Moreover, some memory segments are addressed using 8-bit pointers while others require 16-bit pointers.
The 8051 series possesses an 8-bit Arithmetic and Logic Unit (ALU), with a wide variety of instructions for performing arithmetic and logical operations on bits and integers.
Further, the processor possesses two 8-bit general purpose accumulators, A and B.
Communication with the device is facilitated by an onboard UART serial port, and associated serial controller, which can operate in numerous modes.
The serial baud rate is determined by one of two sixteen-bit timers included with the 8051, which can be set to multiple modes of operation.
(The 8052 provides an additional sixteen-bit timer.)
As an additional method of communication, the 8051 also provides a four-byte bit-addressable input-output port.
The programmer may take advantage of the interrupt mechanism that the processor provides.
This is especially useful when dealing with input or output involving the serial device, as an interrupt can be set when a whole character is sent or received via the serial port.
Interrupts immediately halt the flow of execution of the processor, and cause the program counter to jump to a fixed address, where the requisite interrupt handler is stored.
However, interrupts may be set to one of two priorities: low and high.
The interrupt handler of an interrupt with high priority is executed ahead of the interrupt handler of an interrupt of lower priority, interrupting a currently executing handler of lower priority, if necessary.
The 8051 has interrupts disabled by default.
The programmer is free to handle serial input and output manually, by poking serial flags in the SFRs.
Similarly, `exceptional circumstances' that would otherwise trigger an interrupt on more modern processors, for example division by zero, are also signalled by setting flags.
\caption{Schematic view of 8051 IRAM layout}
% SECTION                                                                      %
\subsection{Overview of paper}
In Section~\ref{} we discuss design issues in the development of the formalisation.
In Section~\ref{sect.validation} we discuss how we validated the design and implementation of our emulator to ensure that what we formalised was an accurate model of an MCS-51 series microprocessor.
In Section~\ref{} we describe previous work, with an eye toward describing its relation with the work described herein.
In Section~\ref{sect.conclusions} we conclude the paper.
In Appendices~\ref{sect.listing.main.ocaml.functions} and~\ref{sect.listing.main.matita.functions} we provide a brief overview of the main functions in our implementation, and describe at a high level what they do.
% SECTION                                                                      %
\section{Design issues in the formalisation}
From here onwards, we typeset O'Caml source in \texttt{\color{blue}{blue}} and Matita source in \texttt{\color{red}{red}} to distinguish the two syntaxes.
Matita's syntax is largely straightforward to those familiar with Coq or O'Caml.
The only subtlety is the use of `\texttt{?}' in an argument position, denoting an argument that should be inferred automatically.
\subsection{Development strategy}
Our implementation progressed in two stages.
We began with an emulator written in O'Caml.
We used this to `iron out' any bugs in our design and implementation within O'Caml's more permissive type system.
O'Caml's ability to perform file input-output also eased debugging and validation.
Once we were happy with the performance and design of the O'Caml emulator, we moved to the Matita formalisation.
Matita's syntax is lexically similar to O'Caml's.
This eased the translation, as large swathes of code were merely copy-pasted with minor modifications.
However, several major issues had to be addressed when moving from O'Caml to Matita.
These are now discussed.
% SECTION                                                                      %
\subsection{Representation of bytes, words, etc.}
type 'a vect = bit list
type word = [`Sixteen] vect
type byte = [`Eight] vect
$\color{blue}{\mathtt{let}}$ split_word w = split_nth 8 w
$\color{blue}{\mathtt{let}}$ split_byte b = split_nth 4 b
type 'a vect
type word = [`Sixteen] vect
type byte = [`Eight] vect
val split_word: word -> byte * byte
val split_byte: byte -> nibble * nibble
\caption{Sample of O'Caml implementation and interface for bitvectors module}
The formalisation of the MCS-51 must deal with bytes (8 bits) and words (16 bits), but also with more esoteric quantities (7 bits, 3 bits, 9 bits). To avoid size-mismatch bugs that are difficult to spot, we represent all of these quantities using bitvectors, i.e. fixed-length vectors of booleans.
In our O'Caml emulator, we `faked' bitvectors using phantom types implemented with polymorphic variants~\cite{phantom_types_ocaml}, as in Figure~\ref{fig.ocaml.implementation.bitvectors}.
From within the bitvector module (left column) bitvectors are just lists of bits, and no guarantee is provided on sizes. However, the module's interface (right column) enforces the size invariants in the rest of the code.
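As a minimal, self-contained sketch of this phantom-type technique (the module and function names here are illustrative, not those of our actual bitvector module):

```ocaml
(* Sketch of phantom-typed bitvectors via polymorphic variants.
   The phantom parameter 'a records the length in the type, while
   the representation is an ordinary bool list. *)
module Bitvector : sig
  type 'a vect
  type byte = [`Eight] vect
  type nibble = [`Four] vect
  val byte_of_bits : bool list -> byte        (* checks length = 8 *)
  val split_byte : byte -> nibble * nibble
  val to_bits : 'a vect -> bool list
end = struct
  type 'a vect = bool list                    (* 'a is phantom *)
  type byte = [`Eight] vect
  type nibble = [`Four] vect
  let byte_of_bits bs =
    if List.length bs = 8 then bs else invalid_arg "byte_of_bits"
  let split_byte b =
    (* split the 8-bit list into two 4-bit halves *)
    let rec split n l =
      if n = 0 then ([], l)
      else match l with
        | x :: xs -> let h, t = split (n - 1) xs in (x :: h, t)
        | [] -> invalid_arg "split"
    in
    split 4 b
  let to_bits v = v
end
```

Outside the module, a \texttt{nibble} can never be passed where a \texttt{byte} is expected, since \texttt{[`Four] vect} and \texttt{[`Eight] vect} are distinct types, even though both are plain lists internally.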
In Matita, we are able to use the full power of dependent types to always work with vectors of a known size:
inductive Vector (A: Type[0]): nat → Type[0] ≝
  VEmpty: Vector A O
| VCons: ∀n: nat. A → Vector A n → Vector A (S n).
We define \texttt{BitVector} as a specialisation of \texttt{Vector} to \texttt{bool}.
We may use Matita's type system to provide precise typings for functions that are
polymorphic in the size, without having to duplicate the code as we did in O'Caml:
let rec split (A: Type[0]) (m,n: nat) on m:
   Vector A (plus m n) $\rightarrow$ (Vector A m) $\times$ (Vector A n) := ...
% SECTION                                                                      %
\subsection{Representing memory}
The MCS-51 has numerous different types of memory.
In our prototype implementation, we simply used a map datastructure from the O'Caml standard library.
Matita's standard library is relatively small, and does not contain a generic map datastructure.
We therefore had the opportunity to craft a special-purpose datastructure for the job.
We worked under the assumption that large swathes of memory would often be uninitialized.
Na\"ively using a complete binary tree, for instance, would therefore be extremely memory inefficient.
Instead, we chose to use a modified form of trie, where paths are represented by bitvectors.
As bitvectors were already widely used in our implementation for representing integers, this worked well:
inductive BitVectorTrie (A: Type[0]): nat $\rightarrow$ Type[0] ≝
  Leaf: A $\rightarrow$ BitVectorTrie A 0
| Node: ∀n. BitVectorTrie A n $\rightarrow$ BitVectorTrie A n $\rightarrow$ BitVectorTrie A (S n)
| Stub: ∀n. BitVectorTrie A n.
Here, \texttt{Stub} is a constructor that can appear at any point in our tries.
It internalises the notion of `uninitialized data'.
Performing a lookup in memory is now straightforward.
We merely traverse a path, and if at any point we encounter a \texttt{Stub}, we return a default value\footnote{All manufacturer data sheets that we consulted were silent on the subject of what should be returned if we attempt to access uninitialized memory.  We defaulted to simply returning zero, though our \texttt{lookup} function is parametric in this choice.  We do not believe that this is an outrageous decision, as SDCC for instance generates code which first `zeroes out' all memory in a preamble before executing the program proper.  This is in line with the C standard, which guarantees that all global variables will be zero initialized piecewise.}.
As we are using bitvectors, we may make full use of dependent types and ensure that our bitvector paths are of the same length as the height of the tree.
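The lookup (and insertion) just described can be approximated in plain O'Caml as follows; note that without dependent types the height index is lost, so a mismatch between path length and trie height can only be caught at runtime (the names here are illustrative):

```ocaml
(* Sketch of the BitVectorTrie in plain O'Caml.  The dependent height
   index of the Matita version is dropped, so ill-formed paths are
   only detected at runtime.  Names are illustrative. *)
type 'a bv_trie =
  | Leaf of 'a
  | Node of 'a bv_trie * 'a bv_trie
  | Stub

(* Follow the bit path; on Stub (uninitialised memory) return a default. *)
let rec lookup default trie path =
  match trie, path with
  | Leaf v, [] -> v
  | Node (l, _), false :: rest -> lookup default l rest
  | Node (_, r), true :: rest -> lookup default r rest
  | Stub, _ -> default
  | _ -> invalid_arg "lookup: path length does not match trie height"

(* Insertion creates Nodes along the path, leaving siblings as Stubs. *)
let rec insert trie path v =
  match trie, path with
  | _, [] -> Leaf v
  | Node (l, r), false :: rest -> Node (insert l rest v, r)
  | Node (l, r), true :: rest -> Node (l, insert r rest v)
  | _, false :: rest -> Node (insert Stub rest v, Stub)
  | _, true :: rest -> Node (Stub, insert Stub rest v)
```

Since uninitialised siblings remain single \texttt{Stub} constructors, the memory consumed is proportional to the number of initialised cells rather than to the address space.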
% SECTION                                                                      %
\subsection{Labels and pseudoinstructions}
Aside from implementing the core MCS-51 instruction set, we also provided \emph{pseudoinstructions}, \emph{labels} and \emph{cost labels}.
The purpose of \emph{cost labels} will be explained in Subsection~\ref{subsect.computation.cost.traces}.
Introducing pseudoinstructions had the effect of simplifying a C compiler---another component of the CerCo project---that was being implemented in parallel with our implementation.
To understand why this is so, consider the fact that the MCS-51's instruction set has numerous instructions for unconditional and conditional jumps to memory locations.
For instance, the instructions \texttt{AJMP}, \texttt{JMP} and \texttt{LJMP} all perform unconditional jumps.
However, these instructions differ in the maximum size of the offset of the jump to be performed.
Further, all jump instructions require a concrete memory address---to jump to---to be specified.
Requiring the compiler to compute these offsets, and select appropriate jump instructions, was seen as needlessly burdensome.
Introducing labels also had a simplifying effect on the design of the compiler.
Instead of jumping to a concrete address, the compiler could `just' jump to a label.
In this vein, we introduced pseudoinstructions for both unconditional and conditional jumps to a label.
Further, we also introduced labels for storing global data in a preamble before the program.
A pseudoinstruction \texttt{Mov} moves (16-bit) data stored at a label into the (16-bit) register \texttt{DPTR}.
We believe this facility, of storing global data in a preamble referenced by a label, will also make any future extension to separate compilation much simpler.
Our pseudoinstructions and labels induce an assembly language similar to that of SDCC.
All pseudoinstructions and labels are `assembled away', prior to program execution, using a preprocessing stage.
Jumps are computed in two stages.
The first stage builds a map associating memory addresses to labels, with the second stage replacing pseudojumps with concrete jumps to the correct address.
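The two stages above can be sketched in O'Caml over a drastically simplified pseudoinstruction type (the types and the one-byte-per-instruction size assumption are hypothetical; the real assembler handles the full MCS-51 instruction set and variable instruction sizes):

```ocaml
(* Sketch of two-stage label resolution.  Every concrete instruction
   is assumed to occupy one byte; labels emit no code. *)
type pseudo =
  | Label of string          (* marks an address *)
  | PseudoJmp of string      (* jump to a label *)
  | Instr of int             (* any other, already-sized, instruction *)

type concrete =
  | Jmp of int               (* jump to a concrete address *)
  | CInstr of int

(* Stage one: map each label to the address of the next instruction. *)
let build_label_map program =
  let _, map =
    List.fold_left
      (fun (addr, map) i ->
         match i with
         | Label l -> (addr, (l, addr) :: map)
         | _ -> (addr + 1, map))
      (0, []) program
  in
  map

(* Stage two: drop labels and replace pseudojumps with concrete jumps. *)
let assemble program =
  let map = build_label_map program in
  List.filter_map
    (function
      | Label _ -> None
      | PseudoJmp l -> Some (Jmp (List.assoc l map))
      | Instr op -> Some (CInstr op))
    program
```
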
% SECTION                                                                      %
\subsection{Anatomy of the (Matita) emulator}
The internal state of our Matita emulator is represented as a record:
record Status: Type[0] ≝
  code_memory: BitVectorTrie Byte 16;
  low_internal_ram: BitVectorTrie Byte 7;
  high_internal_ram: BitVectorTrie Byte 7;
  external_ram: BitVectorTrie Byte 16;
  program_counter: Word;
  special_function_registers_8051: Vector Byte 19;
  special_function_registers_8052: Vector Byte 5;
  ...
This record neatly encapsulates the current memory contents, the program counter, the state of the current SFRs, and so on.
One peculiarity is the packing of the 24 combined SFRs into fixed-length vectors.
This was due to a (since fixed) bug in Matita, present while we were constructing the emulator, whereby the time needed to typecheck a record grew exponentially with the number of fields.
From the record above, it appears that the MCS-51's memory spaces are completely disjoint.
This is not so; many of them overlap with each other, and there is a many-to-many relationship between addressing modes and memory spaces.
For instance, \texttt{DIRECT} addressing can be used to address low internal RAM and the SFRs, but not high internal RAM.
For simplicity, we merely treat memory spaces as if they are completely disjoint in the \texttt{Status} record.
Overlapping, and checking which addressing modes can be used to address particular memory spaces, is handled through numerous \texttt{get\_arg\_XX} and \texttt{set\_arg\_XX} (for 1, 8 and 16 bits) functions.
Both the Matita and O'Caml emulators follow the classic `fetch-decode-execute' model of processor operation.
The next instruction to be processed, indexed by the program counter, is fetched from code memory with \texttt{fetch}.
An updated program counter, along with the concrete cost, in processor cycles, of executing this instruction, is also returned.
These costs are taken from a Siemens data sheet for the MCS-51, and will likely vary across manufacturers and particular derivatives of the processor.
definition fetch:
  BitVectorTrie Byte 16 $\rightarrow$ Word $\rightarrow$ instruction $\times$ Word $\times$ nat := ...
A single instruction is assembled into its corresponding bit encoding with \texttt{assembly1}:
definition assembly1: instruction $\rightarrow$ list Byte := ...
An assembly program, consisting of a preamble containing global data and a list of (pseudo)instructions, is assembled using \texttt{assembly}.
Pseudoinstructions and labels are eliminated in favour of concrete instructions from the MCS-51 instruction set.
A map associating memory locations and cost labels (see Subsection~\ref{subsect.computation.cost.traces}) is also produced.
definition assembly:
  assembly_program $\rightarrow$ option (list Byte $\times$ (BitVectorTrie String 16)) := ...
A single execution step of the processor is evaluated using \texttt{execute\_1}, mapping a \texttt{Status} to a \texttt{Status}:
definition execute_1: Status $\rightarrow$ Status := ...
Multiple steps of processor execution are implemented in \texttt{execute}, which wraps \texttt{execute\_1}:
let rec execute (n: nat) (s: Status) on n: Status := ...
This differs slightly from the design of the O'Caml emulator, which executed a program indefinitely, and also accepted a callback function as an argument that could `witness' the execution as it happened, providing a print-out of the processor state and other debugging information.
Due to Matita's requirement that all functions be strongly normalizing, \texttt{execute} cannot execute a program indefinitely, and must execute a fixed number of steps.
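The shape of this bounded execution loop can be illustrated in O'Caml; the \texttt{status} record and \texttt{execute\_1} body below are hypothetical stand-ins for the real emulator state and step function:

```ocaml
(* Sketch of the bounded execution loop.  Since Matita requires all
   functions to be total, `execute' consumes an explicit step count
   rather than looping indefinitely as the O'Caml emulator does. *)
type status = { pc : int; acc : int }   (* hypothetical stand-in *)

(* Stand-in for execute_1: one fetch-decode-execute step. *)
let execute_1 (s : status) : status =
  { pc = s.pc + 1; acc = s.acc + s.pc }

(* Structural recursion on the step count guarantees termination. *)
let rec execute (n : int) (s : status) : status =
  if n = 0 then s else execute (n - 1) (execute_1 s)
```
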
% SECTION                                                                      %
\subsection{Instruction set unorthogonality}
A peculiarity of the MCS-51 is the non-orthogonality of its instruction set.
For instance, the \texttt{MOV} instruction can be invoked using one of sixteen combinations of addressing modes.
% Show example of pattern matching with polymorphic variants
Such non-orthogonality in the instruction set was handled with the use of polymorphic variants in the O'Caml emulator.
For instance, we introduced types corresponding to each addressing mode:
type direct = [ `DIRECT of byte ]
type indirect = [ `INDIRECT of bit ]
These were then used in our datatype for assembly instructions, as follows:
type 'addr preinstruction =
 [ `ADD of acc * [ reg | direct | indirect | data ]
 | `MOV of
    (acc * [ reg | direct | indirect | data ],
     [ reg | indirect ] * [ acc | direct | data ],
     direct * [ acc | reg | direct | indirect | data ],
     dptr * data16,
     carry * bit,
     bit * carry
     ) union6
Here, \texttt{union6} is a disjoint union type, defined as follows:
type ('a,'b,'c,'d,'e,'f) union6 = [ `U1 of 'a | ... | `U6 of 'f ]
For our purposes, the types \texttt{union2}, \texttt{union3} and \texttt{union6} sufficed.
This polymorphic variant machinery worked well: it introduced a certain level of type safety (for instance, the type of our \texttt{MOV} instruction above guarantees it cannot be invoked with arguments in the \texttt{carry} and \texttt{data16} addressing modes, respectively), and also allowed us to pattern match against instructions, when necessary.
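A small self-contained example of this kind of pattern matching over polymorphic variant unions (the types and the \texttt{describe} function are illustrative, not taken from the emulator):

```ocaml
(* Sketch of safe pattern matching over a union of addressing modes
   expressed with polymorphic variants.  Illustrative names only. *)
type direct = [ `DIRECT of int ]
type indirect = [ `INDIRECT of int ]
type reg = [ `REG of int ]

(* The type annotation records exactly which modes are accepted:
   passing, say, `DATA16 here would be a compile-time type error. *)
let describe (a : [ reg | direct | indirect ]) =
  match a with
  | `REG r -> Printf.sprintf "register %d" r
  | `DIRECT b -> Printf.sprintf "direct %d" b
  | `INDIRECT b -> Printf.sprintf "indirect %d" b
```
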
However, this polymorphic variant machinery is \emph{not} present in Matita.
We needed some way of producing the same effect using the facilities that Matita provides.
For this task, we used dependent types.
We first provided an inductive data type representing all possible addressing modes, a type that functions will pattern match against:
inductive addressing_mode: Type[0] ≝
  DIRECT: Byte $\rightarrow$ addressing_mode
| INDIRECT: Bit $\rightarrow$ addressing_mode
We also wished to express in the type of functions the \emph{impossibility} of pattern matching against certain constructors.
In order to do this, we introduced an inductive type of addressing mode `tags'.
The constructors of \texttt{addressing\_mode\_tag} are in one-to-one correspondence with the constructors of \texttt{addressing\_mode}:
inductive addressing_mode_tag : Type[0] ≝
  direct: addressing_mode_tag
| indirect: addressing_mode_tag
A function that checks whether an \texttt{addressing\_mode} is `morally' an \texttt{addressing\_mode\_tag} is provided, as follows:
let rec is_a (d: addressing_mode_tag) (A: addressing_mode) on d :=
  match d with
   [ direct $\Rightarrow$ match A with [ DIRECT _ $\Rightarrow$ true | _ $\Rightarrow$ false ]
   | indirect $\Rightarrow$ match A with [ INDIRECT _ $\Rightarrow$ true | _ $\Rightarrow$ false ]
We also extend this check to vectors of \texttt{addressing\_mode\_tag}'s in the obvious manner:
let rec is_in (n: nat) (l: Vector addressing_mode_tag n) (A: addressing_mode) on l :=
 match l return $\lambda$m.$\lambda$_: Vector addressing_mode_tag m. bool with
  [ VEmpty $\Rightarrow$ false
  | VCons m he (tl: Vector addressing_mode_tag m) $\Rightarrow$
     is_a he A $\vee$ is_in ? tl A ].
Here $\mathtt{\vee}$ is inclusive disjunction on the \texttt{bool} datatype.
record subaddressing_mode (n: nat) (l: Vector addressing_mode_tag (S n)): Type[0] :=
  subaddressing_modeel :> addressing_mode;
  subaddressing_modein: bool_to_Prop (is_in ? l subaddressing_modeel)
We can now provide an inductive type of preinstructions with precise typings:
inductive preinstruction (A: Type[0]): Type[0] ≝
   ADD: $\llbracket$ acc_a $\rrbracket$ $\rightarrow$ $\llbracket$ register; direct; indirect; data $\rrbracket$ $\rightarrow$ preinstruction A
 | ADDC: $\llbracket$ acc_a $\rrbracket$ $\rightarrow$ $\llbracket$ register; direct; indirect; data $\rrbracket$ $\rightarrow$ preinstruction A
Here $\llbracket - \rrbracket$ is syntax denoting a vector.
We see that the constructor \texttt{ADD} expects two parameters, the first being the accumulator A (\texttt{acc\_a}), and the second being one of a register, direct, indirect or data addressing mode.
[520]510% One of these coercions opens up a proof obligation which needs discussing
511% Have lemmas proving that if an element is a member of a sub, then it is a member of a superlist, and so on
[495]512The final, missing component is a pair of type coercions from \texttt{addressing\_mode} to \texttt{subaddressing\_mode} and from \texttt{subaddressing\_mode} to \texttt{Type$\lbrack0\rbrack$}, respectively.
[539]513The latter coercion is largely straightforward, however the former is not:
515coercion mk_subaddressing_mode:
516  $\forall$n.  $\forall$l: Vector addressing_mode_tag (S n).
517  $\forall$a: addressing_mode.
518  $\forall$p: bool_to_Prop (is_in ? l a). subaddressing_mode n l :=
519    mk_subaddressing_mode on a: addressing_mode to subaddressing_mode ? ?.
Using this coercion opens a proof obligation wherein we must prove that the \texttt{addressing\_mode\_tag} in correspondence with the \texttt{addressing\_mode} is a member of the \texttt{Vector} of permissible \texttt{addressing\_mode\_tag}s.
This impels us to state and prove a number of auxiliary lemmas.
For instance, we prove that if an \texttt{addressing\_mode\_tag} is a member of a \texttt{Vector}, and we possess another vector with additional elements, then the same \texttt{addressing\_mode\_tag} is a member of this larger vector.
Using these lemmas, and Matita's automation, all proof obligations are solved easily.
(Type checking the main \texttt{execute\_1} function, for instance, opens up over 200 proof obligations.)
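The flavour of these auxiliary membership lemmas can be conveyed by a simplified, executable O'Caml analogue, using plain lists of tags instead of dependent vectors (the names below are ours, purely for illustration, not the formalisation's):

```ocaml
(* A list-based analogue of the membership obligations: if a tag is a
   member of a list of permissible tags, it remains a member of any
   extension of that list.  Tags are modelled as plain strings. *)
let is_in (l : string list) (x : string) : bool = List.mem x l

(* The property the auxiliary lemma guarantees: membership in l1
   implies membership in l1 @ l2. *)
let member_append_left (l1 : string list) (l2 : string list) (x : string) =
  (not (is_in l1 x)) || is_in (l1 @ l2) x
```

In the formalisation the same statement is over dependent vectors and is consumed by Matita's automation when discharging the obligations.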
The machinery just described allows us to state, in the type of a function, which addressing modes that function expects.
For instance, consider \texttt{set\_arg\_16}, which expects only a \texttt{DPTR}:
definition set_arg_16: Status $\rightarrow$ Word $\rightarrow$ $\llbracket$ dptr $\rrbracket$ $\rightarrow$ Status ≝
  $\lambda$s, v, a.
   match a return $\lambda$x. bool_to_Prop (is_in ? $\llbracket$ dptr $\rrbracket$ x) $\rightarrow$ ? with
     [ DPTR $\Rightarrow$ $\lambda$_: True.
       let 〈 bu, bl 〉 := split $\ldots$ eight eight v in
       let status := set_8051_sfr s SFR_DPH bu in
       let status := set_8051_sfr status SFR_DPL bl in
         status
     | _ $\Rightarrow$ $\lambda$K: False.
       match K in False with
       [
       ]
     ] (subaddressing_modein $\ldots$ a).
All other cases are discharged by the catch-all at the bottom of the match expression.
Attempting to match against an addressing mode not indicated in the type (for example, \texttt{REGISTER}) will produce a type error.
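For illustration, the essence of \texttt{set\_arg\_16} in the \texttt{DPTR} case, stripped of the dependent-type machinery, can be sketched in O'Caml (the \texttt{status} record here is a two-field stand-in, not the project's actual type):

```ocaml
(* Simplified sketch: splitting a 16-bit word into the high and low
   bytes destined for the DPH and DPL special function registers,
   mirroring the `split ... eight eight v' step above. *)
type status = { dph : int; dpl : int }

let set_dptr (_s : status) (v : int) : status =
  { dph = (v lsr 8) land 0xff;  (* high byte, bound for SFR_DPH *)
    dpl = v land 0xff }         (* low byte, bound for SFR_DPL *)
```

The dependent version differs only in that the addressing-mode argument is statically constrained to \texttt{DPTR}.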
% Talk about extraction to O'Caml code, which hopefully will allow us to extract back to using polymorphic variants, or when extracting vectors we could extract using phantom types
% Discuss alternative approaches, i.e. Sigma types to piece together smaller types into larger ones, as opposed to using a predicate to `cut out' pieces of a larger type, which is what we did
% SECTION                                                                      %
\subsection{I/O and timers}
% `Real clock' for I/O and timers
The O'Caml emulator has code for handling timers, asynchronous I/O and interrupts (these are not yet ported to the Matita emulator).
All three of these features interact with each other in subtle ways.
For instance, interrupts can `fire' when an input is detected on the processor's UART port, and, in certain modes, timers reset when a high signal is detected on one of the MCS-51's communication pins.
To accurately model timers and I/O, we add an unbounded integral field \texttt{clock} to the central \texttt{status} record.
This field is only logical, since it does not represent any quantity stored in the actual processor, and is used to keep track of the current processor time.
Before every execution step, \texttt{clock} is incremented by the number of processor cycles that the instruction just fetched will take to execute.
The processor then executes the instruction, followed by the code implementing the timers and I/O\footnote{The manufacturer's data sheets do not fully specify whether I/O is handled at the beginning or at the end of each cycle.}.
In order to model I/O, we also store in the status a \emph{continuation}, which is a description of the behaviour of the environment:
type line =
  [ `P1 of byte | `P3 of byte
  | `SerialBuff of [ `Eight of byte | `Nine of BitVectors.bit * byte ]]
type continuation =
  [`In of time * line * epsilon * continuation] option *
  [`Out of (time -> line -> time * continuation)]
At each moment, the second projection of the continuation $k$ describes how the environment will react to an output event performed in the future by the processor.
If the processor at time $\tau$ starts an asynchronous output $o$, either on the P1 or P3 output lines or on the UART, then the environment will receive the output at time $\tau'$.
Moreover, the status is immediately updated with the continuation $k'$, where $\pi_2(k)(\tau,o) = \langle \tau',k' \rangle$.
Further, if $\pi_1(k) = \mathtt{Some}~\langle \tau',i,\epsilon,k'\rangle$, then at time $\tau'$ the environment will send the asynchronous input $i$ to the processor and the status will be updated with the continuation $k'$.
This input will become visible to the processor only at time $\tau' + \epsilon$.
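To illustrate the protocol, here is a small executable sketch with times, lines and inputs simplified to integers, and the environment fixed to echo each output after a constant delay (the record representation and the concrete environment are ours, purely for illustration):

```ocaml
(* Simplified continuation: the first field models pi_1 (a pending
   asynchronous input), the second models pi_2 (the environment's
   reaction to an output event). *)
type continuation =
  { input  : (int * int * int * continuation) option; (* tau', i, epsilon, k' *)
    output : int -> int -> int * continuation }       (* tau, o -> tau', k' *)

(* An environment that receives every output two time units later and
   never sends input. *)
let rec echo : continuation =
  { input = None; output = (fun tau _o -> (tau + 2, echo)) }

(* Performing an output o at time tau: the environment receives it at
   tau', and the status would be updated with the returned k'. *)
let perform_output (k : continuation) (tau : int) (o : int) =
  k.output tau o
```

In the emulator the same mechanism drives the P1, P3 and UART lines, with the continuation stored inside the status record.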
The time required to perform an I/O operation is partially specified in the data sheets of the UART module.
However, this computation is complex, so we prefer to abstract over it.
We therefore leave the computation of the delay time to the environment.
We use only the P1 and P3 lines despite the MCS-51 having four output lines, P0--P3.
This is because P0 and P2 become inoperable if the processor is equipped with XRAM (which we assume it is).
The UART port can work in several modes, depending on how the SFRs are set.
In an asynchronous mode, the UART transmits eight bits at a time, using a ninth line for synchronisation.
In a synchronous mode, the ninth line is used to transmit an additional bit.
% SECTION                                                                      %
\subsection{Computation of cost traces}
As mentioned in Subsection~\ref{subsect.labels.pseudoinstructions}, we introduced a notion of \emph{cost label}.
Cost labels are inserted by the prototype C compiler at specific locations in the object code.
Roughly, for those familiar with control flow graphs, they are inserted at the start of every basic block.
Cost labels are used to calculate a precise costing for a program by marking the location of basic blocks.
During the assembly phase, where labels and pseudoinstructions are eliminated, a map is generated associating cost labels with memory locations.
This map is later used in a separate analysis which computes the cost of a program by traversing the program, fetching one instruction at a time, and computing the cost of blocks.
These block costings are stored in another map, to be passed back to the prototype compiler.
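The traversal can be pictured with a small O'Caml sketch, where a program is a list of (address, cycle cost) pairs and cost labels are identified with the addresses they map to (the representation and names are ours, not the compiler's):

```ocaml
(* For each cost label, sum the cycle costs of the instructions from
   that label's address up to (but excluding) the next labelled
   address: the cost of the basic block starting at the label. *)
let block_costs (program : (int * int) list) (labels : int list)
    : (int * int) list =
  let in_block addr lbl =
    addr >= lbl
    && not (List.exists (fun l -> l > lbl && l <= addr) labels) in
  List.map
    (fun lbl ->
      let cost =
        List.fold_left
          (fun acc (addr, c) -> if in_block addr lbl then acc + c else acc)
          0 program in
      (lbl, cost))
    labels
```

The actual analysis works over fetched instructions and the label-to-address map produced during assembly, but the accounting is the same.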
% SECTION                                                                      %
\section{Validation}
We spent considerable effort attempting to ensure that our formalisation is correct, that is, that what we have formalised really is an accurate model of the MCS-51 microprocessor.
First, we made use of multiple data sheets, each from a different semiconductor manufacturer.
This helped us spot errors in the specification of the processor's instruction set, and of its behaviour.
The O'Caml prototype was especially useful for validation purposes, because for it we wrote a module for parsing and loading the Intel HEX file format.
HEX is a standard format produced by all compilers targeting the MCS-51 and similar processors.
It is essentially a snapshot of the processor's code memory in compressed form.
Using this, we were able to compile C programs with SDCC, an open source compiler, and load the resulting program directly into our emulator's code memory, ready for execution.
Further, we are able to produce a HEX file from our emulator's code memory, for loading into third-party tools.
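As a flavour of the format, the checksum convention of an Intel HEX record can be checked in a few lines of O'Caml (a self-contained sketch, independent of the project's actual parsing module):

```ocaml
(* An Intel HEX record is ':' followed by hex-encoded bytes: a byte
   count, a 16-bit address, a record type, the data, and a checksum
   chosen so that all the bytes sum to zero modulo 256. *)
let byte_at (r : string) (i : int) : int =
  int_of_string ("0x" ^ String.sub r i 2)

let valid_record (r : string) : bool =
  let bytes = (String.length r - 1) / 2 in
  let sum = ref 0 in
  for i = 0 to bytes - 1 do
    sum := !sum + byte_at r (1 + 2 * i)
  done;
  !sum land 0xff = 0
```

For example, \texttt{:0300300002337A1E} is a valid three-byte data record, while corrupting its final checksum byte makes validation fail.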
After each step of execution, we can print out both the instruction just executed, along with its arguments, and a snapshot of the processor's state, including all flags and register contents.
For example:
08: mov 81 #07
 Processor status:
   ACC : 0 (00000000) B   : 0 (00000000) PSW : 0 (00000000)
    with flags set as:
     CY  : false   AC  : false FO  : false
     RS1 : false   RS0 : false OV  : false
     UD  : false   P   : false
   SP  : 7 (00000111) IP  : 0 (00000000)
   PC  : 8 (0000000000001000)
   DPL : 0 (00000000) DPH : 0 (00000000) SCON: 0 (00000000)
   SBUF: 0 (00000000) TMOD: 0 (00000000) TCON: 0 (00000000)
   Registers:
    R0 : 0 (00000000) R1 : 0 (00000000) R2 : 0 (00000000)
    R3 : 0 (00000000) R4 : 0 (00000000) R5 : 0 (00000000)
    R6 : 0 (00000000) R7 : 0 (00000000)
Here, the trace indicates that the instruction \texttt{mov 81 \#07} has just been executed by the processor, which is now in the state indicated.
These traces were useful in spotting anything that was `obviously' wrong with the execution of the program.
Further, we used MCU 8051 IDE as a reference.
Using our execution traces, we were able to step through a compiled program, one instruction at a time, in MCU 8051 IDE, and compare the resulting execution trace with the trace produced by our emulator.
Our Matita formalisation was largely copied from the O'Caml source code, apart from changes related to addressing modes already mentioned.
However, as the Matita emulator is executable, we could perform further validation by comparing the trace of a program's execution in the Matita emulator with the trace of the same program in the O'Caml emulator.
% SECTION                                                                      %
\section{Related work}
There exists a large body of literature on the formalisation of microprocessors.
The majority of it aims to prove the correctness of the implementation of the microprocessor at the microcode or gate level.
However, we are interested in providing a precise specification of the behaviour of the microprocessor in order to prove the correctness of a compiler which will target the processor.
In particular, we are interested in intensional properties of the processor, namely the precise timing of instruction execution in clock cycles.
Moreover, in addition to formalising the interface of an MCS-51 processor, we have also built a complete MCS-51 ecosystem: the UART, the I/O lines and hardware timers, along with an assembler.
Similar work to ours can be found in~\cite{fox:trustworthy:2010}.
Here, the authors describe the formalisation, in HOL4, of the ARMv7 instruction set architecture, and provide a good list of references to related work in the literature.
This formalisation also considers the machine code level, as opposed to only considering an abstract assembly language.
In particular, instruction decoding is explicitly modelled inside HOL4's logic.
However, we go further in also providing an assembly language, complete with assembler, to translate instructions and pseudoinstructions to machine code.
Further, in~\cite{fox:trustworthy:2010} the authors validated their formalisation using development boards and random testing.
In contrast, we currently rely on non-exhaustive testing against a third-party emulator.
We leave similar exhaustive testing for future work.
Executability is another key difference between our work and~\cite{fox:trustworthy:2010}.
There, the authors provide an automation layer to derive single-step theorems: if the processor is in a particular state that satisfies some preconditions, then after execution of an instruction it will reside in another state satisfying some postconditions.
We do not need single-step theorems of this form, because Matita is based on a logic that internalises conversion.
As a result, our formalisation is executable: applying the emulation function to an input state eventually reduces to an output state that already satisfies the appropriate conditions.
Our main difficulties resided in the non-uniformity of an old 8-bit architecture, in terms of the instruction set, addressing modes and memory models.
In contrast, the ARM instruction set and memory model is relatively uniform, which simplifies any formalisation considerably.
Perhaps the closest project to CerCo is CompCert~\cite{leroy:formal:2009,leroy:formally:2009,blazy:formal:2006}.
CompCert concerns the certification of an ARM compiler and includes a formalisation in Coq of a subset of ARM.
Coq and Matita essentially share the same logic.
Despite this similarity, the two formalisations do not have much in common.
First, CompCert provides a formalisation at the assembly level (no instruction decoding), and this impels them to trust an unformalised assembler and linker, whereas we provide our own.
I/O is also not considered at all in CompCert.
Moreover, an idealised, abstract and uniform memory model is assumed, while we take into account the complicated overlapping memory model of the MCS-51 architecture.
Finally, around 90 of the 200+ instructions offered by the processor are formalised in CompCert, and the assembly language is augmented with macro instructions that are turned into `real' instructions only during communication with the external assembler.
Even at the technical level the two formalisations differ: while we tried to exploit dependent types as often as possible, CompCert largely sticks to the non-dependent fragment of Coq.
In~\cite{atkey:coqjvm:2007} Atkey presents an executable specification of the Java virtual machine which uses dependent types.
As in our work, dependent types are used to remove spurious partiality from the model, and to reduce the need for over-specifying the behaviour of the processor in impossible cases.
Our use of dependent types will also help to maintain invariants when we prove the correctness of the CerCo prototype compiler.
Finally, in~\cite{sarkar:semantics:2009} Sarkar et al.\ provide an executable semantics for x86-CC multiprocessor machine code.
This machine code exhibits a high degree of non-uniformity similar to that of the MCS-51.
However, only a very small subset of the instruction set is considered, and they over-approximate the possible non-orthogonality of the instruction set, largely dodging the problems we had to face.
Further, it seems that their definition of the decode function is potentially error prone.
A small domain-specific language of patterns is formalised in HOL4.
This language is similar to the specification language of the x86 instruction set found in manufacturers' data sheets.
A decode function is implemented by copying lines from data sheets into the proof script.
We are currently considering implementing a similar domain-specific language in Matita.
However, we would prefer to certify in Matita a compiler for this language.
Data sheets could then be compiled down to the efficient code that we currently provide, instead of inefficiently interpreting the data sheets every time an instruction is executed.
% SECTION                                                                      %
\CSC{Tell what is NOT formalized/formalizable: the HEX parser/pretty printer
 and/or the I/O procedure}
\CSC{Decode: two implementations}
\CSC{Discuss over-specification}
  How to test it? Specify it?
\section{Listing of main O'Caml functions}
\subsubsection{From \texttt{}}
Name & Description \\
\texttt{assembly} & Assembles an abstract syntax tree representing an 8051 assembly program into a list of bytes, its compiled form. \\
\texttt{initialize} & Initializes the emulator status. \\
\texttt{load} & Loads an assembled program into the emulator's code memory. \\
\texttt{fetch} & Fetches the next instruction, and automatically increments the program counter. \\
\texttt{execute} & Emulates the processor.  Accepts as input a function that pretty prints the emulator status after every emulation loop. \\
\subsubsection{From \texttt{}}
Name & Description \\
\texttt{compute} & Computes a map associating costings to basic blocks in the program.
\subsubsection{From \texttt{}}
Name & Description \\
\texttt{intel\_hex\_of\_file} & Reads in a file and parses it if in Intel IHX format, otherwise raises an exception. \\
\texttt{process\_intel\_hex} & Accepts a parsed Intel IHX file and populates a hashmap (of the same type as code memory) with the contents.
\subsubsection{From \texttt{}}
Name & Description \\
\texttt{subb8\_with\_c} & Performs an eight bit subtraction on bitvectors.  The function also returns the most important PSW flags for the 8051: carry, auxiliary carry and overflow. \\
\texttt{add8\_with\_c} & Performs an eight bit addition on bitvectors.  The function also returns the most important PSW flags for the 8051: carry, auxiliary carry and overflow. \\
\texttt{dec} & Decrements an eight bit bitvector with underflow, if necessary. \\
\texttt{inc} & Increments an eight bit bitvector with overflow, if necessary.
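As an illustration of the flag computations these utility functions perform, here is a sketch of eight-bit addition over plain integers standing in for bitvectors (the flag definitions follow standard 8051 PSW semantics; the function below is ours, not the project's code):

```ocaml
(* Eight-bit addition with carry-in, returning the result together
   with the carry, auxiliary carry and overflow flags of the PSW. *)
let add8_with_c (a : int) (b : int) (c : bool) =
  let c0 = if c then 1 else 0 in
  let sum = a + b + c0 in
  let result = sum land 0xff in
  let carry = sum > 0xff in                                  (* CY *)
  let aux_carry = (a land 0xf) + (b land 0xf) + c0 > 0xf in  (* AC *)
  let overflow =                                             (* OV *)
    (a lxor b) land 0x80 = 0 && (a lxor result) land 0x80 <> 0 in
  (result, carry, aux_carry, overflow)
```

For instance, adding \texttt{0xFF} and \texttt{0x01} sets carry and auxiliary carry but not overflow, while \texttt{0x7F + 0x01} sets overflow (a signed wrap-around) but not carry.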
\section{Listing of main Matita functions}
\subsubsection{From \texttt{}}
Title & Description \\
\texttt{add\_n\_with\_carry} & Performs an $n$ bit addition on bitvectors.  The function also returns the most important PSW flags for the 8051: carry, auxiliary carry and overflow. \\
\texttt{sub\_8\_with\_carry} & Performs an eight bit subtraction on bitvectors. The function also returns the most important PSW flags for the 8051: carry, auxiliary carry and overflow. \\
\texttt{half\_add} & Performs a standard half addition on bitvectors, returning the result and carry bit. \\
\texttt{full\_add} & Performs a standard full addition on bitvectors and a carry bit, returning the result and a carry bit.
\subsubsection{From \texttt{}}
Title & Description \\
\texttt{assemble1} & Assembles a single 8051 assembly instruction into its memory representation. \\
\texttt{assemble} & Assembles an 8051 assembly program into its memory representation.\\
\texttt{assemble\_unlabelled\_program} &\\& Assembles a list of (unlabelled) 8051 assembly instructions into its memory representation.
\subsubsection{From \texttt{}}
Title & Description \\
\texttt{lookup} & Returns the data stored at the end of a particular path (a bitvector) from the trie.  If no data exists, returns a default value. \\
\texttt{insert} & Inserts data into a tree at the end of the path (a bitvector) indicated.  Automatically expands the tree (by filling in stubs) if necessary.
\subsubsection{From \texttt{}}
Title & Description \\
\texttt{execute\_trace} & Executes an assembly program for a fixed number of steps, recording in a trace which instructions were executed.
\subsubsection{From \texttt{}}
Title & Description \\
\texttt{fetch} & Decodes and returns the instruction currently pointed to by the program counter and automatically increments the program counter the required amount to point to the next instruction. \\
\subsubsection{From \texttt{}}
Title & Description \\
\texttt{execute\_1} & Executes a single step of an 8051 assembly program. \\
\texttt{execute} & Executes a fixed number of steps of an 8051 assembly program.
\subsubsection{From \texttt{}}
Title & Description \\
\texttt{load} & Loads an assembled 8051 assembly program into code memory.