Changeset 2083 for src/ASM/CPP2012asm/cpp2012asm.tex
 Timestamp:
 Jun 14, 2012, 5:32:04 PM
Files:
 1 edited
Legend:
 Unmodified
 Added (+)
 Removed (-)

src/ASM/CPP2012asm/cpp2012asm.tex
r2069 → r2083

  \[\fbox{\TheSbox}\]}

- \title{On the correctness of an assembler for the Intel MCS51 microprocessor\thanks{The project CerCo acknowledges the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET-Open grant number: 243881}}
+ \title{On the correctness of an optimising assembler for the Intel MCS51 microprocessor\thanks{The project CerCo acknowledges the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET-Open grant number: 243881}}
  \author{Dominic P. Mulligan \and Claudio Sacerdoti Coen}
  \institute{Dipartimento di Scienze dell'Informazione, Universit\'a di Bologna}
…
  \begin{abstract}
- We present a proof of correctness, in the Matita proof assistant, for an optimising assembler for the MCS51 8-bit microcontroller.
- This assembler constitutes a major component of the EU's CerCo (`Certified Complexity') project.
+ We present a proof of correctness, in Matita, for an optimising assembler for the MCS51 microcontroller.
+ This assembler constitutes a major component of the EU's CerCo project.

  The efficient expansion of pseudoinstructions---particularly jumps---into MCS51 machine instructions is complex.
…
  We observe that it is impossible for an optimising assembler to preserve the semantics of every assembly program.
  Assembly language programs can manipulate concrete addresses in arbitrary ways.
- Our proof strategy contains a notion of `good addresses' and only assembly programs that use good addresses have their semantics preserved under assembly.
+ Our proof strategy contains a tracking facility for `good addresses' and only programs that use good addresses have their semantics preserved under assembly.
  Our strategy offers increased flexibility over the traditional approach of keeping addresses in assembly opaque.
  In particular, we may experiment with allowing the benign manipulation of addresses.
…
  The MCS51 dates from the early 1980s and is commonly called the 8051/8052.
- Despite the microprocessor's age, derivatives are still widely manufactured by a number of semiconductor foundries.
- As a result the processor is widely used, especially in embedded systems development, where well-tested, cheap, predictable microprocessors find their niche.
+ Despite the microprocessor's age, derivatives are still widely manufactured by a number of semiconductor foundries, with the processor being used especially in embedded systems development, where well-tested, cheap, predictable microprocessors find their niche.

- The MCS51 has a relative paucity of features compared to its more modern brethren, with the lack of any caching or pipelining features means that timing of execution is predictable, making the MCS51 very attractive for CerCo's ends.
- Yet, as in most things, what one hand giveth the other taketh away, and the MCS51's paucity of features---though an advantage in many respects---also quickly becomes a hindrance.
+ The MCS51 has a relative paucity of features compared to its more modern brethren, with the lack of any caching or pipelining features meaning that timing of execution is predictable, making the MCS51 very attractive for CerCo's ends.
+ Yet---as with many things---what one hand giveth, the other taketh away, and the MCS51's paucity of features---though an advantage in many respects---also quickly becomes a hindrance.
  In particular, the MCS51 features a relatively minuscule series of memory spaces by modern standards.
  As a result our C compiler, to have any sort of hope of successfully compiling realistic programs for embedded devices, ought to produce `tight' machine code.
+ In order to do this, we must solve the `branch displacement' problem---deciding how best to expand jumps to labels in assembly language to machine code jumps.
+ Clearly a correct strategy would be to expand all unconditional jumps to the MCS51's \texttt{LJMP} instruction, and all conditional jumps to a set configuration of jumps using \texttt{LJMP} instructions; this, however, is inefficient.
+ Finding an efficient solution with this expansion process is not trivial, and is a well-known problem for those writing assemblers targeting RISC architectures.

  Branch displacement is not a simple problem to solve and requires the implementation of an optimising assembler.
  Labels, conditional jumps to labels, a program preamble containing global data and a \texttt{MOV} instruction for moving this global data into the MCS51's one 16-bit register all feature in our assembly language.
- We simplify the process by assuming that all assembly programs are pre-linked (i.e. we do not formalise a linker).
- The assembler expands pseudoinstructions into MCS51 machine code, but this assembly process is not trivial, for numerous reasons.
- For example, our conditional jumps to labels behave differently from their machine code counterparts.
- At the machine code level, all conditional jumps are `short', limiting their range.
- However, at the assembly level, conditional jumps may jump to a label that appears anywhere in the program, significantly liberalising the use of conditional jumps.
-
- Yet, the situation is even more complex than having to expand pseudoinstructions correctly.
- In particular, when formalising the assembler, we must make sure that the assembly process does not change the timing characteristics of an assembly program for two reasons.
+ We simplify the proof by assuming that all our assembly programs are pre-linked (i.e. we do not formalise a linker).
+ Further, we must make sure that the assembly process does not change the timing characteristics of an assembly program for two reasons.

  First, the semantics of some functions of the MCS51, notably I/O, depend on the microprocessor's clock.
…
  This cost model is induced by the compilation process itself, and its non-compositional nature allows us to assign different costs to identical blocks of instructions depending on how they are compiled.
  In short, we aim to obtain a very precise costing for a program by embracing the compilation process, not ignoring it.
- This, however, complicates the proof of correctness for the compiler proper.
- In each translation pass from intermediate language to intermediate language, we must prove that both the meaning and concrete complexity characteristics of the program are preserved.
- This also applies for the translation from assembly language to machine code.
-
- Yet one more question remains: how do we decide whether to expand a jump into an \texttt{SJMP} or an \texttt{LJMP}?
- To understand, again, why this problem is not trivial, consider the following snippet of assembly code:
- {\small{
- \begin{displaymath}
- \begin{array}{r@{\qquad}r@{\quad}l@{\;\;}l@{\qquad}l}
- \text{1:} & \mathtt{(0x000)} & \texttt{LJMP} & \texttt{0x100} & \text{\texttt{;; Jump forward 256.}} \\
- \text{2:} & \mathtt{...} & \mathtt{...} & & \\
- \text{3:} & \mathtt{(0x0FA)} & \texttt{LJMP} & \texttt{0x1FA} & \text{\texttt{;; Jump forward 256.}} \\
- \text{4:} & \mathtt{...} & \mathtt{...} & & \\
- \text{5:} & \mathtt{(0x100)} & \texttt{LJMP} & \texttt{0x000} & \text{\texttt{;; Jump backward 256.}} \\
- \end{array}
- \end{displaymath}}}
- We observe that $100_{16} = 256_{10}$, and lies \emph{just} outside the range expressible in an 8-bit byte (0--255).
-
- As our example shows, given an occurrence $l$ of an \texttt{LJMP} instruction, it may be possible to shrink $l$ to an occurrence of an \texttt{SJMP}---consuming fewer bytes of code memory---provided we can shrink any \texttt{LJMP}s that exist between $l$ and its target location.
- In particular, if we wish to shrink the \texttt{LJMP} occurring at line 1, then we must shrink the \texttt{LJMP} occurring at line 3.
- However, to shrink the \texttt{LJMP} occurring at line 3 we must also shrink the \texttt{LJMP} occurring at line 5, and \emph{vice versa}.
-
- Further, consider what happens if, instead of appearing at memory address \texttt{0x100}, the instruction at line 5 appeared \emph{just} beyond the size of code memory, and all other memory addresses were shifted accordingly.
- Now, in order to be able to successfully fit our program into the MCS51's limited code memory, we are \emph{obliged} to shrink the \texttt{LJMP} occurring at line 5.
- That is, the shrinking process is not just related to the optimisation of generated machine code but also the completeness of the assembler itself.
+ This, however, complicates the proof of correctness for the compiler proper, and we must prove that both the meaning and concrete complexity characteristics of the program are preserved for every translation pass in the compiler, including the assembler.

  How we went about resolving this problem affected the shape of our proof of correctness for the whole assembler in a rather profound way.
…
  The function \texttt{execute\_1} closely matches the operation of the MCS51 hardware, as described by a Siemens manufacturer's data sheet.
  We first fetch, using the program counter, from code memory the first byte of the instruction to be executed, decoding the resulting opcode, fetching more bytes as is necessary.
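The fetch-decode step just described can be pictured with a minimal Python sketch, assuming a dictionary from addresses to bytes as a stand-in for code memory. The opcode lengths for NOP, LJMP and SJMP are real MCS51 encodings; everything else (names, memory contents) is illustrative, not the formalisation's actual `fetch`:

```python
# Toy model of fetch: read the opcode byte at the program counter,
# then fetch as many further bytes as that opcode requires.
SIZE = {
    0x00: 1,  # NOP
    0x02: 3,  # LJMP addr16
    0x80: 2,  # SJMP rel8
}

def fetch(code_memory, pc):
    """Return (instruction bytes, next program counter)."""
    opcode = code_memory[pc]
    n = SIZE[opcode]
    return [code_memory[pc + i] for i in range(n)], pc + n

# LJMP 0x0100 followed by SJMP -2 (a tight loop):
mem = {0: 0x02, 1: 0x01, 2: 0x00, 3: 0x80, 4: 0xFE}
instr, next_pc = fetch(mem, 0)
assert instr == [0x02, 0x01, 0x00] and next_pc == 3
```

The real `execute_1` additionally decodes the fetched bytes into the `instruction` type and accounts for clock ticks.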
- Decoded instructions are represented as an inductive type:
+ Decoded instructions are represented as an inductive type, where $\llbracket - \rrbracket$ denotes a vector:
 \begin{lstlisting}
 inductive preinstruction (A: Type[0]): Type[0] :=
-  | ADD: [[acc_a]] → [[registr; direct; indirect; data]] → preinstruction A
-  | INC: [[acc_a; registr; direct; indirect; dptr]] → preinstruction A
-  | JB: [[bit_addr]] → A → preinstruction A
+  | ADD: $\llbracket$acc_a$\rrbracket$ $\rightarrow$ $\llbracket$registr; direct; indirect; data$\rrbracket$ $\rightarrow$ preinstruction A
+  | INC: $\llbracket$acc_a; registr; direct; indirect; dptr$\rrbracket$ $\rightarrow$ preinstruction A
+  | JB: $\llbracket$bit_addr$\rrbracket$ $\rightarrow$ A $\rightarrow$ preinstruction A
  | ...

- inductive instruction: Type[0] ≝
-  | LCALL: [[addr16]] → instruction
-  | AJMP: [[addr11]] → instruction
-  | RealInstruction: preinstruction [[relative]] → instruction.
+ inductive instruction: Type[0] :=
+  | LCALL: $\llbracket$addr16$\rrbracket$ $\rightarrow$ instruction
+  | AJMP: $\llbracket$addr11$\rrbracket$ $\rightarrow$ instruction
+  | RealInstruction: preinstruction $\llbracket$relative$\rrbracket$ $\rightarrow$ instruction.
+  | ...
 \end{lstlisting}
  Here, we use dependent types to provide a precise typing for instructions, specifying in their type the permitted addressing modes that their arguments can belong to.
…
 \begin{lstlisting}
+  | DEC addr $\Rightarrow$
+     let s := add_ticks1 s in
+     let $\langle$result, flags$\rangle$ := sub_8_with_carry (get_arg_8 $\ldots$ s true addr)
+       (bitvector_of_nat 8 1) false in
+     set_arg_8 $\ldots$ s addr result
 \end{lstlisting}
…
  Pseudoinstructions are implemented as an inductive type:
 \begin{lstlisting}
- inductive pseudo_instruction: Type[0] ≝
-  | Instruction: preinstruction Identifier → pseudo_instruction
+ inductive pseudo_instruction: Type[0] :=
+  | Instruction: preinstruction Identifier $\rightarrow$ pseudo_instruction
  ...
-  | Jmp: Identifier → pseudo_instruction
-  | Call: Identifier → pseudo_instruction
-  | Mov: [[dptr]] → Identifier → pseudo_instruction.
+  | Jmp: Identifier $\rightarrow$ pseudo_instruction
+  | Call: Identifier $\rightarrow$ pseudo_instruction
+  | Mov: $\llbracket$dptr$\rrbracket$ $\rightarrow$ Identifier $\rightarrow$ pseudo_instruction.
 \end{lstlisting}
  The pseudoinstructions \texttt{Jmp}, \texttt{Call} and \texttt{Mov} are generalisations of machine code unconditional jumps, calls and move instructions respectively, all of which act on labels, as opposed to concrete memory addresses.
…
 \end{displaymath}
  The more complex of the two passes is the first, which expands pseudoinstructions and must perform the task of branch displacement~\cite{holmes:branch:2006}.
+ The function \texttt{assembly\_1\_pseudo\_instruction} used in the body of the paper is essentially the composition of the two passes.

  The branch displacement problem consists of expanding pseudojumps into their concrete counterparts, preferably as efficiently as possible.
…
 \end{displaymath}
  Here, \texttt{current\_instruction\_size} is the size in bytes of the encoding of the expanded pseudoinstruction found at \texttt{ppc}.
- Note that the entanglement we hinted at is only partially solved in this
- way: the assembler code can ignore the implementative details of the
- algorithm that finds a policy; however, the algorithm that finds a policy
- must know the exact behaviour of the assembly because it needs to predict
- the way the assembly will expand and encode pseudoinstructions, once feeded
- with a policy. The companion paper~\cite{jaap} certifies an algorithm that
- finds branch displacement policies for the assembler described in this paper.
+ Note that the entanglement we hinted at is only partially solved in this way: the assembler code can ignore the implementation details of the algorithm that finds a policy;
+ however, the algorithm that finds a policy must know the exact behaviour of the assembly program because it needs to predict the way the assembly will expand and encode pseudoinstructions, once fed with a policy.
+ A companion paper to this one~\cite{boender:correctness:2012} certifies an algorithm that finds branch displacement policies for the assembler described in this paper.

  The \texttt{expand\_pseudo\_instruction} function uses the \texttt{sigma} map to determine the size of jump required when expanding pseudojumps, computing the jump size by examining the size of the differences between program counters.
- For instance, if at address $ppc$ in the assembly program we found
- $Jmp~l$ such that \texttt{lookup\_labels~l} $= a$, if the
- offset $d = \sigma(a) - \sigma(ppc+1)$ is such that $|d| < 128$ then $Jmp~l$ is
- normally translated to the best local solution, the short jump $SJMP~d$.
- A global best solution to the branch displacement problem, however, is not
- always made of locally best solutions. Therefore, in some circumstances,
- it is necessary to force the assembler to expand jumps into larger ones.
- This is achieved by the \texttt{policy} function: if \texttt{policy~ppc}$= true$
- then a $Jmp~l$ at address $ppc$ is always translated to a long jump.
- The mechanism is essentially identical for calls.
+ For instance, if at address \texttt{ppc} in the assembly program we found \texttt{Jmp l} such that \texttt{lookup\_labels l = a}, if the offset \texttt{d = sigma(a) - sigma(ppc + 1)} is such that \texttt{|d|} $< 128$ then \texttt{Jmp l} is normally translated to the best local solution, the short jump \texttt{SJMP d}.
+ A global best solution to the branch displacement problem, however, is not always made of locally best solutions.
+ Therefore, in some circumstances, it is necessary to force the assembler to expand jumps into larger ones.
+ This is achieved by the \texttt{policy} function: if \texttt{policy ppc = true} then a \texttt{Jmp l} at address \texttt{ppc} is always translated to a long jump.
+ An essentially identical mechanism exists for call instructions.

…
  By `total correctness', we mean that the assembly process never fails when provided with a correct policy and that the process does not change the semantics of a certain class of well-behaved assembly programs.
- The aim of this section is to prove the following informal statement:
- when we fetch an assembly pseudoinstruction $I$ at address $ppc$, then
- we can fetch the expanded pseudoinstruction(s) $[J^1;\ldots;J^n] = fetch\_pseudo\_instruction \ldots I~ppc$ from
- $\sigma(ppc)$ in the code memory obtained loading the assembled object code.
-
- This constitutes the first major step in the proof of correctness of the
- assembler, the next one being the simulation of $I$ by $[J^1;\ldots;J^n]$
- (see Section~\ref{???}).
-
- We can express the following lemma, expressing the correctness of the assembly function (slightly simplified):
- \begin{lstlisting}
- lemma assembly_ok:
-  $\forall$program: pseudo_assembly_program.
+ The aim of this section is to prove the following informal statement: when we fetch an assembly pseudoinstruction \texttt{I} at address \texttt{ppc}, then we can fetch the expanded pseudoinstruction(s) \texttt{[J1, \ldots, Jn] = fetch\_pseudo\_instruction \ldots\ I\ ppc} from \texttt{sigma(ppc)} in the code memory obtained loading the assembled object code.
+
+ This constitutes the first major step in the proof of correctness of the assembler, the next one being the simulation of \texttt{I} by \texttt{[J1, \ldots, Jn]} (see Subsection~\ref{subsect.total.correctness.for.well.behaved.assembly.programs}).
+
+ The \texttt{assembly} function is given a Russell type (slightly simplified here):
+ \begin{lstlisting}
+ definition assembly:
+  $\forall$p: pseudo_assembly_program.
   $\forall$sigma: Word $\rightarrow$ Word.
   $\forall$policy: Word $\rightarrow$ bool.
-  $\forall$sigma_policy_witness.
-  $\forall$assembled.
-  $\forall$costs': BitVectorTrie costlabel 16.
-  let $\langle$preamble, instr_list$\rangle$ := program in
-  let $\langle$labels, costs$\rangle$ := create_label_cost_map instr_list in
-  $\langle$assembled,costs'$\rangle$ = assembly program sigma policy $\rightarrow$
-  let cmem := load_code_memory assembled in
-  $\forall$ppc.
-  let $\langle$pi, newppc$\rangle$ := fetch_pseudo_instruction instr_list ppc in
-  let $\langle$len,assembled$\rangle$ := assembly_1_pseudoinstruction $\ldots$ ppc $\ldots$ pi in
-  let pc := sigma ppc in
-  let pc_plus_len := add $\ldots$ pc (bitvector_of_nat $\ldots$ len) in
-  encoding_check cmem pc pc_plus_len assembled $\wedge$
-  sigma newppc = add $\ldots$ pc (bitvector_of_nat $\ldots$ len).
- \end{lstlisting}
- Here, \texttt{encoding\_check} is a recursive function that checks that assembled machine code is correctly stored in code memory.
- Suppose also we assemble our program \texttt{p} in accordance with a policy \texttt{sigma} to obtain \texttt{assembled}, loading the assembled program into code memory \texttt{cmem}.
- Then, for every pseudoinstruction \texttt{pi}, pseudo program counter \texttt{ppc} and new pseudo program counter \texttt{newppc}, such that we obtain \texttt{pi} and \texttt{newppc} from fetching a pseudoinstruction at \texttt{ppc}, we check that assembling this pseudoinstruction produces the correct number of machine code instructions, and that the new pseudo program counter \texttt{ppc} has the value expected of it.
+  $\Sigma$res: list Byte $\times$ (BitVectorTrie costlabel 16).
+   sigma_meets_specification p sigma policy $\rightarrow$
+   let $\langle$preamble, instr_list$\rangle$ := p in
+   |instr_list| < 2^16 $\rightarrow$
+   let $\langle$assembled,costs$\rangle$ := res in
+   $\forall$ppc. $\forall$ppc_ok: nat_of_bitvector $\ldots$ ppc < |instr_list|.
+   let $\langle$pi,newppc$\rangle$ := fetch_pseudo_instruction instr_list ppc ppc_ok in
+   let $\langle$l,a$\rangle$ := assembly_1_pseudoinstruction $\ldots$ sigma policy ppc $\ldots$ pi in
+   $\forall$j: nat. j < |a| $\rightarrow$
+    nth j a = nth (add $\ldots$ (sigma ppc) (bitvector_of_nat ? j)) assembled
+ \end{lstlisting}
+ In plain words, the type of assembly states the following.
+ Suppose we are given a policy that is correct for the program we are assembling, and suppose the program to be assembled is fully addressable by a 16-bit word.
+ Then if we fetch from the pseudo program counter \texttt{ppc} we get a pseudoinstruction \texttt{pi} and a new pseudo program counter \texttt{newppc}.
+ Further, assembling the pseudoinstruction \texttt{pi} results in a list of bytes, \texttt{a}.
+ Then, indexing into this list with any natural number \texttt{j} less than the length of \texttt{a} gives the same result as indexing into \texttt{assembled} with \texttt{sigma(ppc)} (the program counter pointing to the start of the expansion in \texttt{assembled}) plus \texttt{j}.
+
+ Essentially the lemma above states that the \texttt{assembly} function correctly expands pseudoinstructions.
+ This result is lifted from lists of bytes into a result on tries of bytes (i.e. code memories), using an additional lemma: \texttt{assembly\_ok}.

  Lemma \texttt{fetch\_assembly} establishes that the \texttt{fetch} and \texttt{assembly1} functions interact correctly.
…
   encoding_check code_memory pc pc_plus_len assembled $\rightarrow$
   let $\langle$instr, pc', ticks$\rangle$ := fetch code_memory pc in
-  (eq_instruction instr i $\wedge$
-   eqb ticks (ticks_of_instruction instr) $\wedge$
-   eq_bv $\ldots$ pc' pc_plus_len) = true.
+  instr = i $\wedge$ ticks = (ticks_of_instruction instr) $\wedge$ pc' = pc_plus_len.
 \end{lstlisting}
  In particular, we read \texttt{fetch\_assembly} as follows.
…
  $\forall$sigma.
  $\forall$policy.
- $\forall$sigma_policy_specification_witness.
+ $\forall$sigma_meets_specification.
  $\forall$ppc.
  let $\langle$preamble, instr_list$\rangle$ := program in
…
  We read \texttt{fetch\_assembly\_pseudo2} as follows.
  Suppose we are able to successfully assemble an assembly program using \texttt{assembly} and produce a code memory, \texttt{cmem}.
- Then, fetching a pseudoinstruction from the pseudo
+ Then, fetching a pseudoinstruction from the pseudo-code memory at address \texttt{ppc} corresponds to fetching a sequence of instructions from the real code memory using $\sigma$ to expand pseudoinstructions.
  The fetched sequence corresponds to the expansion, according to \texttt{sigma}, of the pseudoinstruction.
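The core of these statements, that expanded bytes appear verbatim starting at \texttt{sigma(ppc)}, amounts to the following check, sketched here over a Python dictionary standing in for the code-memory trie. The bytes 0x80 0x04 really do encode \texttt{SJMP +4} on the MCS51, but the scenario and function are invented for illustration:

```python
def encoding_check(code_memory, pc, expected):
    """True when expected[0], expected[1], ... are stored at
    pc, pc + 1, ... in code_memory (a dict from address to byte),
    in the spirit of the recursive encoding_check predicate."""
    return all(code_memory.get(pc + j) == byte
               for j, byte in enumerate(expected))

# Pretend sigma mapped some pseudo address to real address 0x10,
# where the expansion of a short jump (SJMP +4) was emitted.
code_memory = {0x10: 0x80, 0x11: 0x04}
assert encoding_check(code_memory, 0x10, [0x80, 0x04])
assert not encoding_check(code_memory, 0x0F, [0x80, 0x04])
```

The lemmas then show that `fetch`, applied at that real address, recovers precisely the expanded instruction sequence.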
  At first, the lemmas appear to immediately imply the correctness of the assembler.
- However, this property is \emph{not} strong enough to establish that the semantics of an assembly program has been preserved by the assembly process since it does not establish the correspondence between the semantics of a pseudo instruction and that of its expansion.
+ However, this property is \emph{not} strong enough to establish that the semantics of an assembly program has been preserved by the assembly process since it does not establish the correspondence between the semantics of a pseudoinstruction and that of its expansion.
  In particular, the two semantics differ on instructions that \emph{could} directly manipulate program addresses.
…
  The traditional approach to verifying the correctness of an assembler is to treat memory addresses as opaque structures that cannot be modified.
- This prevents assembly programs from performing any `dangerous', semantics-breaking manipulations of memory addresses by making these programs simply unrepresentable in the source language.
-
- Here, we take a different approach to this problem: we trace memory locations (and, potentially, registers) that contain memory addresses.
- We then prove that only those assembly programs that use addresses in `safe' ways have their semantics preserved by the assembly process.
+ Memory is represented as a map from opaque addresses to the disjoint union of data and opaque addresses---addresses are kept opaque to prevent their possible `semantics breaking' manipulation by assembly programs:
+ \begin{displaymath}
+ \mathtt{Mem} : \mathtt{Addr} \rightarrow \mathtt{Bytes} + \mathtt{Addr} \qquad \llbracket - \rrbracket : \mathtt{Instr} \rightarrow \mathtt{Mem} \rightarrow \mathtt{option\ Mem}
+ \end{displaymath}
+ The semantics of a pseudoinstruction, $\llbracket - \rrbracket$, is given as a possibly failing function from pseudoinstructions and memory spaces to new memory spaces.
+ The semantic function proceeds by case analysis over the operands of a given instruction, failing if either operand is an opaque address, or otherwise succeeding, updating memory.
+ \begin{gather*}
+ \llbracket \mathtt{ADD\ @A1\ @A2} \rrbracket^\mathtt{M} = \begin{cases}
+ \mathtt{Byte\ b1},\ \mathtt{Byte\ b2} & \rightarrow \mathtt{Some}(\mathtt{M}\ \text{with}\ \mathtt{b1} + \mathtt{b2}) \\
+ \mathtt{-},\ \mathtt{Addr\ a} & \rightarrow \mathtt{None} \\
+ \mathtt{Addr\ a},\ \mathtt{-} & \rightarrow \mathtt{None}
+ \end{cases}
+ \end{gather*}
+ In contrast, in this paper we take a different approach.
+ We trace memory locations (and, potentially, registers) that contain memory addresses.
+ We then prove that only those assembly programs that use addresses in `safe' ways have their semantics preserved by the assembly process---a sort of type system sitting atop memory.

  We believe that this approach is more flexible when compared to the traditional approach, as in principle it allows us to introduce some permitted \emph{benign} manipulations of addresses that the traditional approach, using opaque addresses, cannot handle, therefore expanding the set of input programs that can be assembled correctly.
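The case analysis above can be mimicked with tagged cells; a hypothetical Python rendering (not CerCo code) in which a memory cell is either ('byte', n) or ('addr', a):

```python
def add_semantics(mem, a1, a2):
    """Sketch of the opaque-address semantics of ADD: fail (None) if
    either operand cell holds an address, otherwise update the first
    operand with the 8-bit sum. Purely illustrative."""
    tag1, v1 = mem[a1]
    tag2, v2 = mem[a2]
    if tag1 == "addr" or tag2 == "addr":
        return None                      # undefined on opaque addresses
    out = dict(mem)
    out[a1] = ("byte", (v1 + v2) % 256)
    return out

mem = {0: ("byte", 250), 1: ("byte", 10), 2: ("addr", 0x0100)}
assert add_semantics(mem, 0, 1)[0] == ("byte", 4)   # 250 + 10 wraps to 4
assert add_semantics(mem, 0, 2) is None             # address operand: fail
```

Under the traditional approach such failing programs are simply unrepresentable; under the approach of this paper real addresses are kept and the tagging becomes a run-time tracking discipline.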
+ This approach, of using real addresses coupled with a weak, dynamic typing system sitting atop memory, is similar to one taken by Tuch \emph{et al.}~\cite{tuch:types:2007} for reasoning about low-level C code.
+
+ Our analogue of the semantic function above is then merely a wrapper around the function that implements the semantics of machine code, paired with a function that keeps track of addresses.
+ This permits a large amount of code reuse, as the semantics of pseudo and machine code is essentially shared.
+ The only thing that changes at the assembly level is the presence of the new tracking function.

  However, with this approach we must detect (at run time) programs that manipulate addresses in well-behaved ways, according to some approximation of well-behavedness.
- Second, we must compute statuses that correspond to pseudostatuses.
- The contents of the program counter must be translated, as well as the contents of all traced locations, by applying the \texttt{sigma} map.
- Remaining memory cells are copied \emph{verbatim}.
-
- For instance, after a function call, the two bytes that form the return pseudo address are pushed on top of the stack, i.e. in internal RAM.
- This pseudo internal RAM corresponds to an internal RAM where the stack holds the real addresses after optimisation, and all the other values remain untouched.
-
  We use an \texttt{internal\_pseudo\_address\_map} to trace code memory addresses in internal RAM:
 \begin{lstlisting}
…
  internal_pseudo_address_map $\rightarrow$ BitVectorTrie Byte 7 $\rightarrow$ BitVectorTrie Byte 7.
 \end{lstlisting}
+ Another pair of axioms precisely describes the supposed behaviour of \texttt{low\_internal\_ram\_of\_pseudo\_low\_internal\_ram} and its high internal RAM counterpart.

  Next, we are able to translate \texttt{PseudoStatus} records into \texttt{Status} records using \texttt{status\_of\_pseudo\_status}.
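The intended behaviour of this translation can be pictured as follows: traced cells (for example, the bytes of a return address pushed on the stack by a call) are rewritten through \texttt{sigma}, while every other cell is copied verbatim. This is a hypothetical sketch, not the axiomatised Matita functions:

```python
def ram_of_pseudo_ram(pseudo_ram, traced, sigma):
    """Build real internal RAM from pseudo internal RAM: cells in
    `traced` hold (parts of) code addresses and are mapped through
    sigma; everything else is copied unchanged. Illustrative only."""
    return {cell: (sigma[value] if cell in traced else value)
            for cell, value in pseudo_ram.items()}

sigma = {0x20: 0x35}                   # pseudo code address -> real address
pseudo_ram = {0x3E: 0x20, 0x3F: 0x07}  # cell 0x3E holds a pseudo address
real_ram = ram_of_pseudo_ram(pseudo_ram, {0x3E}, sigma)
assert real_ram == {0x3E: 0x35, 0x3F: 0x07}
```

A program is well behaved precisely when this rewriting is meaningful, i.e. when traced cells really do contain addresses produced in `safe' ways.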
…
  Note, if we wished to allow `benign manipulations' of addresses, it would be this function that needs to be changed.

- The function \texttt{ticks\_of} computes how long---in clock cycles---a pseudoinstruction will take to execute when expanded in accordance with a given policy.
+ The function \texttt{ticks\_of0} computes how long---in clock cycles---a pseudoinstruction will take to execute when expanded in accordance with a given policy.
  The function returns a pair of natural numbers, needed for recording the execution times of each branch of a conditional jump.
 \begin{lstlisting}
- axiom ticks_of:
-  $\forall$p: pseudo_assembly_program. policy p $\rightarrow$ Word $\rightarrow$ nat $\times$ nat := $\ldots$
- \end{lstlisting}
+ definition ticks_of0:
+  pseudo_assembly_program $\rightarrow$ (Word $\rightarrow$ Word) $\rightarrow$ (Word $\rightarrow$ bool) $\rightarrow$ Word $\rightarrow$
+  pseudo_instruction $\rightarrow$ nat $\times$ nat := $\ldots$
+ \end{lstlisting}
+ An additional function, \texttt{ticks\_of}, is merely a wrapper around this function.

  Finally, we are able to state and prove our main theorem.
…
  $\forall$sigma: Word $\rightarrow$ Word.
  $\forall$policy: Word $\rightarrow$ bool.
- $\forall$sigma_policy_specification_witness.
+ $\forall$sigma_meets_specification.
  $\forall$ps: PseudoStatus program.
  $\forall$program_counter_in_bounds.
…
  We believe some other verified assemblers exist in the literature.
  However, what sets our work apart from that above is our attempt to optimise the machine code generated by our assembler.
- This complicates any formalisation effort, as an attempt at the best possible selection of machine instructions must be made, especially important on a device such as the MCS51 with a miniscule code memory.
+ This complicates any formalisation effort, as an attempt at the best possible selection of machine instructions must be made, especially important on a device such as the MCS51 with a minuscule code memory.
  Further, care must be taken to ensure that the time properties of an assembly program are not modified by the assembly process lest we affect the semantics of any program employing the MCS51's I/O facilities.
  This is only possible by inducing a cost model on the source code from the optimisation strategy and input program.
…
  All files relating to our formalisation effort can be found online at~\url{http://cerco.cs.unibo.it}.
- The code of the compiler has been completed, and the proof of correctness described here is still in progress.
  In particular, we have assumed several properties of ``library functions'' related in particular to modular arithmetic and data-structure manipulation.
  Moreover, we have axiomatised various ancillary functions needed to complete the main theorems, as well as some `routine' proof obligations of the theorems themselves, preferring instead to focus on the main meat of the theorems.