 Timestamp:
 Jun 17, 2013, 3:16:25 PM
 Location:
 src/ASM/CPP2013policy
 Files:

 3 edited

src/ASM/CPP2013policy/conclusion.tex
r3363 r3364

from instruction labels to addresses in the assembly code. In the
second pass it iterates over the code, translating every pseudo jump
at address $src$ to a label $l$ associated to the assembly instruction at
address $dst$ to a jump of the size dictated by $(\sigma\ src)$ to
$(\sigma\ dst)$. In case of conditional jumps, the translated jump may be
implemented with a series of instructions.

…

The proof of correctness abstracts over the algorithm used and only relies on
{\tt sigma\_policy\_specification} (page~\ref{sigmapolspec}). It is a variation
of a standard 1-to-many forward simulation proof~\cite{Leroy2009}. The
relation $R$ between states just maps every code address $ppc$ stored in
registers or memory to $(\sigma\ ppc)$. To identify the code addresses,
an additional data structure is always kept together with the source
state and is updated by the semantics. The semantics is preserved
only for those programs whose source code operations
$(f\ ppc_1\ \ldots\ ppc_n)$ applied to code addresses $ppc_1 \ldots ppc_n$ are
such that $(f\ (\sigma\ ppc_1)\ \ldots\ (\sigma\ ppc_n)) = (f\ ppc_1\ \ldots\ ppc_n)$. For
example, an injective $\sigma$ preserves a binary equality test $f$ for code
addresses, but not pointer subtraction.

The main lemma (fetching simulation), which relies on\\
…
see~\cite{lastyear}.
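The two-pass translation described above can be sketched as follows. This is a hypothetical illustration, not the Matita development: the instruction representation, the `jump_length` helper, and the displacement range are simplified stand-ins.

```python
# Hypothetical sketch of the two-pass assembler translation.
# A program is a list of (address, instruction) pairs; instructions are
# tuples tagged "LABEL", "JMP", or anything else (left untouched).

def first_pass(program):
    """First pass: map every instruction label to its code address."""
    labels = {}
    for addr, instr in program:
        if instr[0] == "LABEL":
            labels[instr[1]] = addr
    return labels

def jump_length(src, dst):
    """Stand-in size policy: a jump fits a short encoding when the
    displacement between the translated addresses fits in a byte."""
    return "short" if -128 <= dst - src < 128 else "long"

def second_pass(program, labels, sigma, jump_length):
    """Second pass: translate each pseudo jump at address src, targeting
    the label at address dst, into a jump whose size is dictated by
    sigma(src) and sigma(dst)."""
    out = []
    for src, instr in program:
        if instr[0] == "JMP":
            dst = labels[instr[1]]
            out.append(("JMP", jump_length(sigma(src), sigma(dst))))
        elif instr[0] != "LABEL":
            out.append(instr)
    return out
```

With an identity $\sigma$, a backward jump over one instruction is assembled with the short encoding.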
Instead of verifying the algorithm directly, another solution to the problem
would be to run an optimisation algorithm, and then verify the safety of the
result using a verified validator. Such a validator would be easier to verify
than the algorithm itself, but it would still leave the question of
termination open. In the case of a least fixed point algorithm, at least, the
solution is not necessarily safe until the algorithm terminates, so this
effort would still have to be expended.

\subsection{Related work}

…
displacement optimisation algorithm is not needed.

An offshoot of the CompCert project is the CompCertTSO project, which adds
thread concurrency and synchronisation to the CompCert compiler~\cite{TSO2011}.
This compiler also generates assembly code and therefore does not include a
branch displacement algorithm.

Finally, there is also the Piton stack~\cite{Moore1996}, which not only includes the
formal verification of a compiler, but also of the machine architecture
targeted by that compiler, a microprocessor called the FM9001.
However, this architecture does not have different
jump sizes (branching is simulated by assigning values to the program counter),
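Returning to the verified-validator alternative discussed above: the safety check itself is simple, which is why a validator would be easier to verify than the optimiser. A minimal sketch, assuming illustrative displacement ranges (the names `RANGE` and `validate` are hypothetical, not from the development):

```python
# Hypothetical validator sketch: an untrusted optimiser proposes a size
# for each branch; the validator only checks that every chosen size
# accommodates the actual displacement under the proposed sigma.
# The ranges below are illustrative, not the target architecture's.
RANGE = {"short": 128, "absolute": 2048, "long": 2**15}

def validate(branches, sizes, sigma):
    """branches: list of (src, dst) pseudo-jump address pairs.
    sizes: the optimiser's chosen encoding for each branch.
    sigma: map from pseudo addresses to assembled addresses."""
    return all(
        abs(sigma(dst) - sigma(src)) < RANGE[sizes[i]]
        for i, (src, dst) in enumerate(branches)
    )
```

Note that this only checks safety of a finished result; as the text points out, it says nothing about whether the optimiser terminates.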
src/ASM/CPP2013policy/problem.tex
r3363 r3364

that is the proofs of termination and correctness using the Matita proof
assistant~\cite{Asperti2007}.

Formulating the final statement of correctness and finding the loop invariants
have been non-trivial tasks and are, indeed, the main contribution of this
paper. It has required considerable care and fine-tuning to formulate not
only the minimal statement required for the subsequent proof of correctness of
the assembler, but also the minimal set of invariants needed for the proof
of correctness of the algorithm.

The research presented in this paper has been executed within the CerCo project,
which aims at formally verifying a C compiler with cost annotations. The
src/ASM/CPP2013policy/proof.tex
r3363 r3364

In this section, we present the correctness proof for the algorithm in more
detail. The main correctness statement is shown, slightly simplified,
in~Figure~\ref{statement}.

\label{sigmapolspec}
…
the program. This worst case is reached if at every iteration, we change the
encoding of exactly one branch instruction; since the encoding of any branch
instruction can change first from short to absolute and then to long, there
can be at most $2n$ changes.

The proof has been carried out using the ``Russell'' style from~\cite{Sozeau2006}.
…
\end{figure}

We now proceed with the safety lemmas. The lemma in
Figure~\ref{sigmacompactunsafe} is a temporary formulation of the main
property {\tt sigma\_policy\_specification}. Its main difference from the
final version is that it uses {\tt instruction\_size\_jmplen} to compute the
instruction size. This function uses $j$ to compute the span of branch
instructions (i.e.\ it uses the $\sigma$ under construction), instead
of looking at the distance between source and destination.
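The $2n$ termination bound above rests on monotonicity: an encoding only ever grows along short, absolute, long, so each of the $n$ branches changes at most twice. A hypothetical sketch of such a least-fixed-point iteration (`needed` stands in for the real size computation):

```python
# Illustrative sketch of the termination argument: encodings grow
# monotonically along short -> absolute -> long, so each of the n
# branches changes at most twice and the iteration makes at most 2n
# individual changes before reaching a fixed point.
ORDER = ["short", "absolute", "long"]

def fixed_point(n, needed):
    """Iterate until no encoding changes. `needed(i, enc)` is a stand-in
    for computing the size branch i requires given current sizes."""
    enc = ["short"] * n
    changes = 0
    while True:
        changed = False
        for i in range(n):
            want = needed(i, enc)
            # never shrink an encoding: monotone growth is what
            # bounds the total number of changes by 2n
            if ORDER.index(want) > ORDER.index(enc[i]):
                enc[i] = want
                changes += 1
                changed = True
        if not changed:
            return enc, changes
```

Because each branch can take at most two upward steps, `changes` never exceeds `2 * n`, regardless of how `needed` behaves.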
This is because
$\sigma$ is still under construction; we will prove below that after the
final iteration, {\tt sigma\_compact\_unsafe} is equivalent to the main
property.

…
\end{figure}

The lemma in Figure~\ref{sigmasafe} is a safety property. It states
that branch instructions are encoded properly, and that no wrong branch
instructions are chosen.

…
We need this invariant to make sure that addresses do not overflow.

The invariants taken directly from the fold invariants are trivial to prove.

The proof of {\tt nec\_plus\_ultra} goes as follows: if we return {\tt None},