\documentclass[11pt,a4paper]{article}
\usepackage{../../style/cerco}
\usepackage{a4wide}
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[english]{babel}
\usepackage{color}
\usepackage{diagrams}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{microtype}
\usepackage{skull}
\usepackage{stmaryrd}
\usepackage{array}
\newcolumntype{b}{@{}>{{}}}
\newcolumntype{B}{@{}>{{}}c<{{}}@{}}
\newcolumntype{h}[1]{@{\hspace{#1}}}
\newcolumntype{L}{>{$}l<{$}}
\newcolumntype{C}{>{$}c<{$}}
\newcolumntype{R}{>{$}r<{$}}
\newcolumntype{S}{>{$(}r<{)$}}
\newcolumntype{n}{@{}}
\usepackage{wasysym}

\hypersetup{bookmarksopenlevel=2}

\title{
INFORMATION AND COMMUNICATION TECHNOLOGIES\\
(ICT)\\
PROGRAMME\\
\vspace*{1cm}Project FP7-ICT-2009-C-243881 {\cerco}}

\date{ }
\author{}

\begin{document}
\thispagestyle{empty}

\vspace*{-1cm}
\begin{center}
\includegraphics[width=0.6\textwidth]{../../style/cerco_logo.png}
\end{center}

\begin{minipage}{\textwidth}
\maketitle
\end{minipage}


\vspace*{0.5cm}
\begin{center}
\begin{LARGE}
\bf
Proof outline for the correctness of the CerCo compiler
\end{LARGE}
\end{center}

\vspace*{2cm}
\begin{center}
\begin{large}
Version 1.0
\end{large}
\end{center}

\vspace*{0.5cm}
\begin{center}
\begin{large}
Main Authors:\\
B. Campbell, D. Mulligan, P. Tranquilli, C. Sacerdoti Coen
\end{large}
\end{center}

\vspace*{\fill}
\noindent
Project Acronym: {\cerco}\\
Project full title: Certified Complexity\\
Proposal/Contract no.: FP7-ICT-2009-C-243881 {\cerco}\\

\clearpage \pagestyle{myheadings} \markright{{\cerco}, FP7-ICT-2009-C-243881}

\tableofcontents

\section{Introduction}
\label{sect.introduction}

In the last project review of the CerCo project, the project reviewers
recommended that we quickly outline a paper-and-pencil correctness proof
for each of the stages of the CerCo compiler, in order to allow an
estimation of the complexity and time required to complete the formalization
of the proof. This has been possible since month 18, when we completed the
formalization in Matita of the data structures and code of the compiler.

In this document we provide a very high-level, pen-and-paper
sketch of what we view as the best path to completing the correctness proof
for the compiler. In particular, for every translation between two
intermediate languages, in both the front- and back-ends, we identify the
key translation steps, and we identify some invariants that we view as being
important for the correctness proof. We sketch the overall correctness
results, and also briefly describe the parts of the proof that had already
been completed at the end of the First Period.

In the last section we present an estimate of the effort required
for the certification in Matita of the compiler, and we draw conclusions.

\section{Front-end: Clight to RTLabs}

The front-end of the CerCo compiler consists of several stages:

\begin{center}
\begin{minipage}{.8\linewidth}
\begin{tabbing}
\quad \= $\downarrow$ \quad \= \kill
\textsf{Clight}\\
\> $\downarrow$ \> cast removal\\
\> $\downarrow$ \> add runtime functions\footnote{Following the last project
meeting we intend to move this transformation to the back-end.}\\
\> $\downarrow$ \> cost labelling\\
\> $\downarrow$ \> loop optimizations\footnote{\label{lab:opt2}To be ported from the untrusted compiler and certified only in case of early completion of the certification of the other passes.} (an endo-transformation)\\
\> $\downarrow$ \> partial redundancy elimination$^{\mbox{\scriptsize \ref{lab:opt2}}}$ (an endo-transformation)\\
\> $\downarrow$ \> stack variable allocation and control structure
simplification\\
\textsf{Cminor}\\
\> $\downarrow$ \> generate global variable initialisation code\\
\> $\downarrow$ \> transform to RTL graph\\
\textsf{RTLabs}\\
\> $\downarrow$ \> \\
\>\,\vdots
\end{tabbing}
\end{minipage}
\end{center}

Here, by `endo-transformation', we mean a mapping from a language back to
itself: for example, the loop optimization step maps Clight programs to
Clight programs.

%Our overall statements of correctness with respect to costs will
%require a correctly labelled program
There are three layers in most of the proofs proposed:
\begin{enumerate}
\item invariants closely tied to the syntax and transformations using
  dependent types (such as the presence of variable names in environments),
\item a forward simulation proof relating each small-step of the
  source to zero or more steps of the target, and
\item proofs about syntactic properties of the cost labelling.
\end{enumerate}
The first will support both functional correctness and allow us to
show the totality of some of the compiler stages (that is, that those
stages of the compiler cannot fail). The second provides the main
functional correctness result, including the preservation of cost
labels in the traces, and the last will be crucial for applying
correctness results about the costings from the back-end, by showing
that cost labels appear in enough places that we can assign all of the
execution costs to them.

We will also prove that a suitably labelled RTLabs trace can be turned
into a \emph{structured trace} which splits the execution trace into
cost-label to cost-label chunks with nested function calls. This
structure was identified during work on the correctness of the
back-end cost analysis as retaining important information about the
structure of the execution that is difficult to reconstruct later in
the compiler.

\subsection{Clight cast removal}

This transformation removes some casts inserted by the parser to make
arithmetic promotion explicit, but which are superfluous (such as
\lstinline[language=C]'c = (short)((int)a + (int)b);' where
\lstinline'a' and \lstinline'b' are \lstinline[language=C]'short').
This is necessary for producing good code for our target architecture.

It only affects Clight expressions, recursively detecting casts that
can be safely eliminated. The semantics provides a big-step
definition for expression evaluation, so we should be able to show a
lock-step forward simulation between otherwise identical states using a lemma
showing that cast elimination does not change the evaluation of
expressions. This lemma will follow from a structural induction on
the source expression. We have already proved a few of the underlying
arithmetic results necessary to validate the approach.
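
As an illustration only, the following OCaml sketch shows the shape of such
a recursive simplification on a hypothetical, much simplified expression
type; the real pass works on the full Clight AST and checks sizes carefully
before removing a cast.

\begin{lstlisting}
(* A toy expression type, NOT the actual Clight AST: expressions carry
   a size, and casts are explicit nodes. *)
type expr =
  | Var  of string * int          (* variable with its declared size *)
  | Cast of int * expr            (* cast to the given size          *)
  | Add  of expr * expr

let rec size_of = function
  | Var (_, sz)  -> sz
  | Cast (sz, _) -> sz
  | Add  (e, _)  -> size_of e

(* Remove widening casts whose result is immediately truncated back:
   in modular arithmetic the low bits of a sum depend only on the low
   bits of the operands, so (short)((int)a + (int)b) equals
   (short)(a + b) when a and b are shorts. *)
let rec simplify = function
  | Cast (sz, Add (Cast (sz1, a), Cast (sz2, b)))
    when sz1 >= sz && sz2 >= sz
      && size_of a <= sz && size_of b <= sz ->
      Cast (sz, Add (simplify a, simplify b))
  | Cast (sz, e) -> Cast (sz, simplify e)
  | Add (a, b)   -> Add (simplify a, simplify b)
  | Var _ as e   -> e
\end{lstlisting}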

\subsection{Clight cost labelling}

This adds cost labels before and after selected statements and
expressions, and the execution traces ought to be equivalent modulo
the new cost labels. Hence it requires a simple forward simulation
with a limited amount of stuttering wherever a new cost label is
introduced. A bound can be given for the amount of stuttering allowed,
based on the statement or continuation to be evaluated next.

We also intend to show three syntactic properties about the cost
labelling:
\begin{enumerate}
\item every function starts with a cost label,
\item every branching instruction is followed by a cost label (note that
  exiting a loop is treated as a branch), and
\item the head of every loop (and any \lstinline'goto' destination) is
  a cost label.
\end{enumerate}
These can be shown by structural induction on the source term.
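
To give a flavour of the transformation, here is a hedged OCaml sketch over
a toy statement type (not the real Clight AST): it places a cost label at
the start of each branch of a conditional and at the head of each loop body,
in line with properties 2 and 3 above, and one at the start of every
function body for property 1. The label required after a loop exit is
omitted for brevity, and \texttt{fresh\_label} is an assumed label supply.

\begin{lstlisting}
type stmt =
  | Skip
  | Assign of string * string
  | Seq    of stmt * stmt
  | If     of string * stmt * stmt
  | While  of string * stmt
  | Cost   of int * stmt             (* cost label guarding a statement *)

let fresh_label =                    (* assumed label supply *)
  let n = ref 0 in
  fun () -> incr n; !n

let rec label_stmt = function
  | If (c, s1, s2) ->                (* both branches get a label *)
      If (c, Cost (fresh_label (), label_stmt s1),
             Cost (fresh_label (), label_stmt s2))
  | While (c, s) ->                  (* the loop head gets a label *)
      While (c, Cost (fresh_label (), label_stmt s))
  | Seq (s1, s2) -> Seq (label_stmt s1, label_stmt s2)
  | s -> s

let label_function body =            (* functions start with a label *)
  Cost (fresh_label (), label_stmt body)
\end{lstlisting}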

\subsection{Clight to Cminor translation}

This translation is the first to introduce some invariants, with the
proofs closely tied to the implementation by dependent typing. These
are largely complete and show that the generated code enjoys:
\begin{itemize}
\item some minimal type safety, shown by explicit checks on the
  Cminor types during the transformation (a little more work remains
  to be done here, but it follows the same form);
\item that variables named in the parameter and local variable
  environments are distinct from one another, again by an explicit
  check;
\item that variables used in the generated code are present in the
  resulting environment (either by checking their presence in the
  source environment, or because they come from a list of freshly
  generated temporary variables); and
\item that all \lstinline[language=C]'goto' labels are present (by
  checking them against a list of source labels and proving that all
  source labels are preserved).
\end{itemize}

The simulation will be similar to the relevant stages of CompCert
(Clight to Csharpminor and Csharpminor to Cminor --- in the event that
the direct proof is unwieldy we could introduce an intermediate
language corresponding to Csharpminor). During early experimentation
with porting CompCert definitions to the Matita proof assistant we
found little difficulty reproving the results for the memory model, so
we plan to port the memory injection properties and use them to relate
Clight in-memory variables with either the value of the local variable or a
stack slot, depending on how each variable was classified.

This should be sufficient to show the equivalence of (big-step)
expression evaluation. The simulation can then be shown by relating
corresponding blocks of statements and continuations with their Cminor
counterparts and proving that a few steps reach the next matching
state.

The syntactic properties required for cost labels remain similar and a
structural induction on the function bodies should be sufficient to
show that they are preserved.

\subsection{Cminor global initialisation code}

This short phase replaces the global variable initialisation data with
code that executes when the program starts. Each piece of
initialisation data in the source is matched by a new statement
storing that data. As each global variable is allocated a distinct
memory block, the program state after the initialisation statements
will be the same as the original program's state at the start of
execution, and execution will proceed in the same manner afterwards.
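
A minimal OCaml sketch of this idea, with hypothetical types (the real pass
produces Cminor store statements rather than the descriptive strings used
here):

\begin{lstlisting}
type init_datum = Int8 of int | Int16 of int | Space of int

let size_of = function Int8 _ -> 1 | Int16 _ -> 2 | Space n -> n

(* One store per initialisation datum of global [x], at increasing
   offsets within the variable's own memory block. *)
let store_stmts (x : string) (data : init_datum list) : string list =
  let _, stmts =
    List.fold_left
      (fun (off, acc) d ->
        let acc = match d with
          | Space _ -> acc     (* padding: leave memory undefined *)
          | Int8  n -> Printf.sprintf "int8[%s + %d] := %d"  x off n :: acc
          | Int16 n -> Printf.sprintf "int16[%s + %d] := %d" x off n :: acc
        in
        (off + size_of d, acc))
      (0, []) data
  in
  List.rev stmts
\end{lstlisting}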

% Actually, the above is wrong...
% ... this ought to be in a fresh main function with a fresh cost label

\subsection{Cminor to RTLabs translation}

In this part of the compiler we transform the program's functions into
control flow graphs. It is closely related to CompCert's Cminorsel to
RTL transformation, albeit with target-independent operations.

We already enforce several invariants with dependent types: some type
safety, mostly shown using the type information from Cminor; and
that the graph is closed (by showing that each successor was recently
added, or corresponds to a \lstinline[language=C]'goto' label, all of
which are added before the end). Note that this relies on a
monotonicity property; CompCert maintains a similar property in a
similar way while building RTL graphs. We will also add a result
showing that all of the pseudo-register names are distinct, for use by
later stages, using the same method as for Cminor.

The simulation will relate Cminor states to RTLabs states which are about to
execute the code corresponding to the Cminor statement or continuation.
Each Cminor statement becomes zero or more RTLabs statements, with a
decreasing measure based on the statement and continuations similar to
CompCert's. We may also follow CompCert in using a relational
specification of this stage so as to abstract away from the functional
(and highly dependently typed) definition.

The first two labelling properties remain as before; we will show that
cost labels are preserved, so the function entry point will be a cost
label, and successors to any statement that are cost labels still
map to cost labels, preserving the condition on branches. We replace
the property for loops with the notion that we will always reach a
cost label or the end of the function after following a bounded number of
successors. This can be easily seen in Cminor using the requirement
for cost labels at the head of loops and after gotos. It remains to
show that this is preserved by the translation to RTLabs. % how?

\subsection{RTLabs structured trace generation}

This proof-only step incorporates the function call structure and cost
labelling properties into the execution trace. As the function calls
are nested within the trace, we need to distinguish between
terminating and non-terminating function calls. Thus we use the
excluded middle (specialised to a function termination property) to do
this.

Structured traces for terminating functions are built by following the
flat trace, breaking it into chunks between cost labels and
recursively processing function calls. The main difficulties here are
the non-structurally recursive nature of the function (instead we use
the size of the termination proof as a measure) and using the RTLabs
cost labelling properties to show that the constraints of the
structured traces are observed. We also show that the lower stack
frames are preserved during function calls in order to prove that
after returning from a function call we resume execution of the
correct code. This part of the work has already been constructed, but
still requires a simple proof to show that flattening the structured
trace recreates the original flat trace.

The non-terminating case follows the trace like the terminating
version to build up chunks of trace from cost-label to cost-label
(which, by the finite distance to a cost label property shown before,
can be represented by an inductive type). These chunks are chained
together in a coinductive data structure that can represent
non-terminating traces. The excluded middle is used to decide whether
function calls terminate, in which case the function described above
constructs an inductive terminating structured trace which is nested
in the caller's trace. Otherwise, another coinductive constructor is
used to embed the non-terminating trace of the callee, generated by
corecursion. This part of the trace transformation is currently under
construction, and will also need a flattening result to show that it
is correct.


\section{Backend: RTLabs to machine code}
\label{sect.backend.rtlabs.machine.code}

The compiler backend consists of the following intermediate languages, and stages of translation:

\begin{center}
\begin{minipage}{.8\linewidth}
\begin{tabbing}
\quad \=\,\vdots\= \\
\> $\downarrow$ \>\\
\> $\downarrow$ \quad \= \kill
\textsf{RTLabs}\\
\> $\downarrow$ \> copy propagation\footnote{\label{lab:opt}To be ported from the untrusted compiler and certified only in case of early completion of the certification of the other passes.} (an endo-transformation) \\
\> $\downarrow$ \> instruction selection\\
\> $\downarrow$ \> change of memory models in compiler\\
\textsf{RTL}\\
\> $\downarrow$ \> constant propagation$^{\mbox{\scriptsize \ref{lab:opt}}}$ (an endo-transformation) \\
\> $\downarrow$ \> calling convention made explicit \\
\> $\downarrow$ \> layout of activation records \\
\textsf{ERTL}\\
\> $\downarrow$ \> register allocation and spilling\\
\> $\downarrow$ \> dead code elimination\\
\textsf{LTL}\\
\> $\downarrow$ \> function linearisation\\
\> $\downarrow$ \> branch compression (an endo-transformation) \\
\textsf{LIN}\\
\> $\downarrow$ \> relabeling\\
\textsf{ASM}\\
\> $\downarrow$ \> pseudoinstruction expansion\\
\textsf{MCS-51 machine code}\\
\end{tabbing}
\end{minipage}
\end{center}

\subsection{Graph translations}
RTLabs and most intermediate languages in the back-end have a graph
representation:
the code of each function is represented by a graph of instructions.
The graph maps a set of labels (the names of the nodes) to the instruction
stored at that label (the nodes of the graph).
Instructions reference zero or more additional labels that are the immediate
successors of the instruction: zero for return from functions; more than one
for conditional jumps and calls; one in all other cases. The references
from one instruction to its immediate successors are the arcs of the graph.
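
As a concrete picture of this representation, here is a hedged OCaml sketch
(the Matita development uses its own, dependently typed maps): a function's
code is a finite map from labels to instructions, with successor labels
stored in the instructions themselves.

\begin{lstlisting}
module LabelMap = Map.Make (String)

type label = string

(* A generic instruction paired with its immediate successors:
   [] for a return, [l] for sequential instructions, [l1; l2] for a
   conditional jump, and so on. *)
type 'instr node = { instr : 'instr; succs : label list }

type 'instr graph = 'instr node LabelMap.t

(* The arcs of the graph: one per (label, successor) pair. *)
let arcs (g : 'instr graph) : (label * label) list =
  LabelMap.fold
    (fun l n acc -> List.map (fun l' -> (l, l')) n.succs @ acc)
    g []
\end{lstlisting}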

The states of graph languages always include a program counter that holds a
representation of a reference to the current instruction.

A translation between two consecutive graph languages maps each instruction
stored at location $l$ in the first graph and with immediate successors
$\{l_1,\ldots,l_n\}$ to a subgraph of the output graph that has a single
entry point at location $l$ and exit arcs to $\{l_1,\ldots,l_n\}$. Moreover,
the labels of all non-entry nodes in the subgraph are distinct from all the
labels in the source graph.

In order to simplify the translations and the related proofs of forward
simulation, after the release of D4.2 and D4.3 we have provided:
\begin{itemize}
\item a new data type (called \texttt{blist}) that represents a
  sequence of instructions to be added to the output graph.
  The ``b'' in the name stands for binder, since a \texttt{blist} is
  either empty, an extension of a \texttt{blist} with an instruction
  at the front, or the generation of a fresh quantity followed by a
  \texttt{blist}. The latter feature is used, for instance, to generate
  fresh register names. The instructions in the list are unlabelled and
  all of them but the last one are also sequential, like in a linear
  program (see the sketch after this list).
\item a new iterator (called \texttt{b\_graph\_translate}) of type
  \begin{displaymath}
  \mathtt{b\_graph\_translate}: (\mathtt{label} \rightarrow \mathtt{blist})
  \rightarrow \mathtt{graph} \rightarrow \mathtt{graph}
  \end{displaymath}
  The iterator transforms the input graph into the output graph by replacing
  each node with the subgraph that corresponds to the linear \texttt{blist}
  obtained by applying the input function to the node label.
\end{itemize}
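
The following OCaml sketch conveys the shape of \texttt{blist}; it is only an
approximation of the Matita definition, with \texttt{register} standing for
whatever kind of fresh quantity is being generated.

\begin{lstlisting}
type register = int

type 'instr blist =
  | Nil                                  (* the empty sequence          *)
  | Cons  of 'instr * 'instr blist       (* an instruction at the front *)
  | Fresh of (register -> 'instr blist)  (* bind a fresh register       *)

(* Example: generate one fresh register r and emit two (hypothetical)
   instructions built from it. *)
let example op jump =
  Fresh (fun r -> Cons (op r, Cons (jump r, Nil)))
\end{lstlisting}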

Using the iterator above, the code can be written in such a way that
the programmer does not see any distinction between writing a transformation
on linear or graph languages.

In order to prove simulations for translations obtained using the iterator,
we will prove the following theorem:

\begin{align*}
\mathtt{theorem} &\ \mathtt{b\_graph\_translate\_ok}: \\
& \forall f.\forall G_{i}.\mathtt{let}\ G_{\sigma} := \mathtt{b\_graph\_translate}\ f\ G_{i}\ \mathtt{in} \\
& \forall l \in G_{i}.\mathtt{subgraph}\ (f\ l)\ l\ (\mathit{next}\ l\ G_i)\ G_{\sigma}
\end{align*}

Here \texttt{subgraph} is a computational predicate that, given a \texttt{blist}
$[i_1, \ldots, i_n]$, an entry label $l$, an exit label $l'$ and a graph $G$,
expands to the fact that fetching from $G$ at address $l$ one retrieves a node
$i_1$ with a successor $l_1$ that, when fetched, yields a node $i_2$ with a
successor $l_2$, and so on. The successor of $i_n$ is $l'$.

Proving a forward simulation diagram of the following kind using the theorem
above is now as simple as doing the same using standard small-step operational
semantics over linear languages.

\begin{align*}
\mathtt{lemma} &\ \mathtt{execute\_1\_step\_ok}: \\
& \forall s. \mathtt{let}\ s' := s\ \sigma\ \mathtt{in} \\
& \mathtt{let}\ l := pc\ s\ \mathtt{in} \\
& s \stackrel{1}{\rightarrow} s^{*} \Rightarrow \exists n. s' \stackrel{n}{\rightarrow} s'^{*} \wedge s'^{*} = s^{*}\ \sigma
\end{align*}

Because the graph translation preserves the entry and exit labels of
translated statements, the state translation function $\sigma$ will simply
preserve the value of the program counter. The program code, which is
part of the state, is translated using the iterator.

The proof is then roughly the following. Let $l$ be the program counter of the
input state $s$. We proceed by cases on the current instruction of $s$.
Let $[i_1, \ldots, i_n]$ be the \texttt{blist} associated to $l$ and $s$
by the translation function. The witness required for the existential
statement is simply $n$. By applying the theorem above we know that the
next $n$ instructions that will be fetched from $s\ \sigma$ will be
$[i_1, \ldots, i_n]$, and it is now sufficient to prove that they simulate
the original instruction.

\subsection{The RTLabs to RTL translation}
\label{subsect.rtlabs.rtl.translation}

The RTLabs to RTL translation pass marks the frontier between the two memory models used in the CerCo project.
As a result, we require some method of translating between the values that the two memory models permit.
Suppose we have such a translation, $\sigma$.
Then the translation between values of the two memory models may be pictured with:

\begin{displaymath}
\mathtt{Value} ::= \bot \mid \mathtt{int(size)} \mid \mathtt{float} \mid \mathtt{null} \mid \mathtt{ptr} \quad\stackrel{\sigma}{\longrightarrow}\quad \mathtt{BEValue} ::= \bot \mid \mathtt{byte} \mid \mathtt{null}_i \mid \mathtt{ptr}_i
\end{displaymath}

In the front-end, we have both integer and float values, where integer
values are `sized', along with null values and pointers. Some front-end
values are representable in a single byte, but others require more bits.

In the back-end model all values are meant to be represented in a single byte.
Values can therefore be undefined, be one-byte integers, or be indexed
fragments of a pointer, null or not. Float values are no longer present, as floating point arithmetic is not supported by the CerCo compiler.

The $\sigma$ map implements a one-to-many relation: a single front-end value
is mapped to a sequence of back-end values when its size is more than one byte.
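
The following OCaml sketch (an approximation; the constructor names are ours,
not the formalisation's) shows the two kinds of values and how $\sigma$
splits a multi-byte front-end value into its back-end chunks, assuming
two-byte pointers.

\begin{lstlisting}
(* Front-end values, as pictured above. *)
type value =
  | Undef
  | Int of int * int          (* size in bytes, numeric value *)
  | Null
  | Ptr of string             (* an abstract pointer          *)

(* Back-end values: everything fits in one byte; multi-byte values
   become indexed fragments. *)
type bevalue =
  | BUndef
  | Byte  of int              (* 0..255                        *)
  | BNull of int              (* i-th fragment of a null value *)
  | BPtr  of string * int     (* i-th fragment of a pointer    *)

(* One front-end value maps to a list of back-end values, least
   significant byte first. *)
let sigma : value -> bevalue list = function
  | Undef       -> [BUndef]
  | Null        -> [BNull 0; BNull 1]
  | Ptr p       -> [BPtr (p, 0); BPtr (p, 1)]
  | Int (sz, n) -> List.init sz (fun i -> Byte ((n lsr (8 * i)) land 0xff))
\end{lstlisting}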

We further require a map, $\sigma$, which relates the front-end \texttt{Memory} to the back-end's notion of \texttt{BEMemory}. Both kinds of memory can be
thought of as instances of a generic \texttt{Mem} data type, parameterized over
the kind of values stored in memory:

\begin{displaymath}
\mathtt{Mem}\ \alpha = \mathtt{Block} \rightarrow (\mathbb{Z} \rightarrow \alpha)
\end{displaymath}

Here, \texttt{Block} consists of a \texttt{Region} paired with an identifier.

\begin{displaymath}
\mathtt{Block} ::= \mathtt{Region} \times \mathtt{ID}
\end{displaymath}

We now have what we need for defining what is meant by the `memory' in the back-end memory model.
Namely, we instantiate the previously defined \texttt{Mem} type with the type of back-end memory values.

\begin{displaymath}
\mathtt{BEMem} = \mathtt{Mem}~\mathtt{BEValue}
\end{displaymath}

Memory addresses consist of a pair of back-end memory values:

\begin{displaymath}
\mathtt{Address} = \mathtt{BEValue} \times \mathtt{BEValue}
\end{displaymath}

The back- and front-end memory models differ in how they represent sized integer values in memory.
In particular, the front-end stores integer values as a header, with size information, followed by a string of `continuation' blocks, marking out the full representation of the value in memory.
In contrast, the layout of sized integer values in the back-end memory model consists of a series of byte-sized `chunks':

\begin{center}
\begin{picture}(0, 25)
\put(-125,0){\framebox(25,25)[c]{\texttt{v,4}}}
\put(-100,0){\framebox(25,25)[c]{\texttt{cont}}}
\put(-75,0){\framebox(25,25)[c]{\texttt{cont}}}
\put(-50,0){\framebox(25,25)[c]{\texttt{cont}}}
\put(-15,10){\vector(1, 0){30}}
\put(25,0){\framebox(25,25)[c]{\texttt{v$_1$}}}
\put(50,0){\framebox(25,25)[c]{\texttt{v$_2$}}}
\put(75,0){\framebox(25,25)[c]{\texttt{v$_3$}}}
\put(100,0){\framebox(25,25)[c]{\texttt{v$_4$}}}
\end{picture}
\end{center}

Chunks for pointers are pairs made of the original pointer and the index of the chunk.
Therefore, when assembling the chunks together, we can always recognise whether all chunks refer to the same value or whether the operation is meaningless.

The differing memory representations of values in the two memory models imply the need for a series of lemmas on the actions of \texttt{load} and \texttt{store} to ensure correctness.
The first lemma required has the following statement:
\begin{displaymath}
\mathtt{load}\ s\ a\ M = \mathtt{Some}\ v \rightarrow \forall i \leq s.\ \mathtt{load}\ s\ (a + i)\ \sigma(M) = \mathtt{Some}\ v_i
\end{displaymath}
That is, if we are successful in reading a value of size $s$ from memory at address $a$ in front-end memory, then we should successfully be able to read each of its chunks from the back-end memory at the appropriate address (at address $a + i$, for each $i \leq s$).

Next, we must show that \texttt{store} properly commutes with the $\sigma$-map between memory spaces:
\begin{displaymath}
\sigma(\mathtt{store}\ a\ v\ M) = \mathtt{store}\ \sigma(v)\ \sigma(a)\ \sigma(M)
\end{displaymath}
That is, if we store a value \texttt{v} in the front-end memory \texttt{M} at address \texttt{a} and transform the resulting memory with $\sigma$, then this is equivalent to storing a transformed value $\mathtt{\sigma(v)}$ at address $\mathtt{\sigma(a)}$ into the back-end memory $\mathtt{\sigma(M)}$.

Finally, the commutation properties between \texttt{load} and \texttt{store} are weakened in the $\sigma$-image of the memory.
Writing \texttt{load}$^*$ for the multiple consecutive iterations of \texttt{load} used to fetch all chunks of a value, we must prove that, when $a \neq a'$:
\begin{displaymath}
\mathtt{load}^*\ \sigma(a)\ (\mathtt{store}\ \sigma(a')\ \sigma(v)\ \sigma(M)) = \mathtt{load}^*\ \sigma(a)\ \sigma(M)
\end{displaymath}
That is, suppose we store a transformed value $\mathtt{\sigma(v)}$ into a back-end memory $\mathtt{\sigma(M)}$ at address $\mathtt{\sigma(a')}$, using \texttt{store}, and then load from the address $\sigma(a)$. Even though $a$ and $a'$ are
distinct by hypothesis, there is a priori no guarantee that the consecutive
bytes for the value stored at $\sigma(a)$ are disjoint from those for the
values stored at $\sigma(a')$. The fact that this holds is a non-trivial
property of $\sigma$ that must be proved.

RTLabs states come in three flavours:
\begin{displaymath}
\begin{array}{rll}
\mathtt{State} & ::= & (\mathtt{State} : \mathtt{Frame}^* \times \mathtt{Frame} \\
               & \mid & \mathtt{Call} : \mathtt{Frame}^* \times \mathtt{Args} \times \mathtt{Return} \times \mathtt{Fun} \\
               & \mid & \mathtt{Return} : \mathtt{Frame}^* \times \mathtt{Value} \times \mathtt{Return}) \times \mathtt{Mem}
\end{array}
\end{displaymath}
\texttt{State} is the default state, in which RTLabs programs spend most of their time.
The \texttt{Call} state is only entered when a call instruction is being executed, and then we immediately return to being in \texttt{State}.
Similarly, \texttt{Return} is only entered when a return instruction is being executed, before returning immediately to \texttt{State}.
All RTLabs states are accompanied by a memory, \texttt{Mem}, with \texttt{Call} and \texttt{Return} keeping track of arguments, return addresses and the results of functions.
\texttt{State} keeps track of a list of stack frames.

RTL states differ from their RTLabs counterparts in including a program counter \texttt{PC}, a stack pointer \texttt{SP}, an internal stack pointer \texttt{ISP}, a carry flag \texttt{CARRY} and a set of registers \texttt{REGS}:
\begin{displaymath}
\mathtt{State} ::= \mathtt{Frame}^* \times \mathtt{PC} \times \mathtt{SP} \times \mathtt{ISP} \times \mathtt{CARRY} \times \mathtt{REGS}
\end{displaymath}
The internal stack pointer \texttt{ISP}, and its relationship with the stack pointer \texttt{SP}, needs some comment.
Due to the design of the MCS-51, and its minuscule stack, it was decided that the compiler would implement an emulated stack in external memory.
As a result, we have two stack pointers in our state: \texttt{ISP}, which points into the real, hardware stack, and \texttt{SP}, which is the stack pointer of the emulated stack in memory.
The emulated stack is used for pushing and popping stack frames when calling or returning from function calls; however, this is done using the hardware stack, indexed by \texttt{ISP}, as an intermediary.
Instructions like \texttt{LCALL} and \texttt{ACALL} are hardwired by the processor's design to push the return address on to the hardware stack. Therefore, after a call has been made, and before a call returns, the compiler emits code to move the return address back and forth between the two stacks. Parameters, return values
and local variables are only present in the external stack.
As a result, for most of the execution of the processor, the hardware stack is empty, or contains a single item ready to be moved into external memory.

Once more, we require a relation $\sigma$ between RTLabs states and RTL states.
Because $\sigma$ is one-to-many, and thus morally a multi-function,
in the following we use functional notation for $\sigma$, writing $\star$
in the output of $\sigma$ to mean that any value is accepted.
\begin{displaymath}
\mathtt{State} \stackrel{\sigma}{\longrightarrow} \mathtt{State}
\end{displaymath}

Translating an RTLabs state to an RTL state proceeds by cases on the particular type of state we are trying to translate, either a \texttt{State}, \texttt{Call} or a \texttt{Return}.
For \texttt{State} we perform a further case analysis of the top stack frame, which decomposes into a tuple holding the current program counter value, the current stack pointer and the value of the registers:
\begin{displaymath}
\sigma(\mathtt{State} (\mathtt{Frame}^* \times \mathtt{\langle PC, REGS, SP \rangle})) \longrightarrow ((\sigma(\mathtt{Frame}^*), \sigma(\mathtt{PC}), \sigma(\mathtt{SP}), \star, \star, \sigma(\mathtt{REGS})), \sigma(\mathtt{Mem}))
\end{displaymath}
Translation then proceeds by translating the remaining stack frames, as well as the contents of the top stack frame. Any value for the internal stack pointer
and the carry bit is admitted.

Translating \texttt{Call} and \texttt{Return} states is more involved, as a commutation between a single step of execution and the translation process must hold:
\begin{displaymath}
\sigma(\mathtt{Return}(-)) \longrightarrow \sigma \circ \text{return one step}
\end{displaymath}

\begin{displaymath}
\sigma(\mathtt{Call}(-)) \longrightarrow \sigma \circ \text{call one step}
\end{displaymath}

Here \emph{return one step} and \emph{call one step} refer to a pair of commuting diagrams relating the one-step execution of a call or return state and the translation of both.
We provide the one-step commuting diagrams in Figure~\ref{fig.commuting.diagrams}. The fact that one execution step in the source language is not performed
in the target language is not problematic for the preservation of divergence,
because it is easy to show that every step from a \texttt{Call} or
\texttt{Return} state is always preceded/followed by one step that is always
simulated.

\begin{figure}
\begin{displaymath}
\begin{diagram}
s & \rTo^{\text{one step of execution}} & s' \\
  & \rdTo & \dTo \\
  & & \llbracket s'' \rrbracket
\end{diagram}
\end{displaymath}

\begin{displaymath}
\begin{diagram}
s & \rTo^{\text{one step of execution}} & s' \\
  & \rdTo & \dTo \\
  & & \llbracket s'' \rrbracket
\end{diagram}
\end{displaymath}
\caption{The one-step commuting diagrams for \texttt{Call} and \texttt{Return} state translations}
\label{fig.commuting.diagrams}
\end{figure}

The forward simulation proofs for all steps that do not involve function calls are lengthy, but routine.
They consist of simulating a front-end operation on front-end pseudo-registers and the front-end memory with sequences of back-end operations on the back-end pseudo-registers and back-end memory.
The properties of $\sigma$ presented before, relating values and memories, will need to be heavily exploited.

The simulation of invocation of functions and returns from functions is less obvious.
We sketch here what happens on the source code and on its translation.

\begin{displaymath}
\begin{array}{rcl}
\mathtt{Call(id,\ args,\ dst,\ pc),\ State(Frame^*, Frame)} & \longrightarrow & \mathtt{Call(M(args), dst)}, \\
& & \mathtt{PUSH(Frame[PC := after\_return])}
\end{array}
\end{displaymath}
Suppose we are given a \texttt{State} with a list of stack frames, with the top frame being \texttt{Frame}.
Suppose also that the program counter in \texttt{Frame} points to a \texttt{Call} instruction, complete with arguments and destination address.
Then this is executed by entering into a \texttt{Call} state where the arguments are loaded from memory, and the address pointing to the instruction immediately following the \texttt{Call} instruction is filled in, with the current stack frame being pushed on top of the stack with the return address substituted for the program counter.

Now, what happens next depends on whether we are executing an internal or an external function.
In the case where the call is to an external function, we have:
\begin{displaymath}
\begin{array}{rcl}
\mathtt{Call(M(args), dst)}, & \stackrel{\mathtt{ret\_val = f(M(args))}}{\longrightarrow} & \mathtt{Return(ret\_val,\ dst,\ PUSH(...))} \\
\mathtt{PUSH(current\_frame[PC := after\_return])} & &
\end{array}
\end{displaymath}
That is, the call to the external function enters a return state after first computing the return value by executing the external function on the arguments.
Then the return state restores the program counter by popping the stack, and execution proceeds in a new \texttt{State}:
\begin{displaymath}
\begin{array}{rcl}
\mathtt{Return(ret\_val,\ dst,\ PUSH(...))} & \longrightarrow & \mathtt{pc = POP\_STACK(regs[dst := M(ret\_val)],\ pc)} \\
& & \mathtt{State(regs[dst := M(ret\_val)],\ pc)}
\end{array}
\end{displaymath}

Suppose instead that we are executing an internal function:
\begin{displaymath}
\begin{array}{rcl}
\mathtt{Call(M(args), dst)} & \longrightarrow & \mathtt{SP = alloc,\ regs = \emptyset[- := params]} \\
\mathtt{PUSH(current\_frame[PC := after\_return])} & & \mathtt{State(regs,\ sp,\ pc_\emptyset,\ dst)}
\end{array}
\end{displaymath}
Here, execution of the \texttt{Call} state first pushes the current frame with the program counter set to the address following the function call.
The stack pointer allocates more space, the register map is initialised to the empty map, assigning an undefined value to all registers, before the values of the parameters are inserted into the map, into the argument registers, and a new \texttt{State} follows.
Once the function body has executed, the stack space is freed and a \texttt{Return} state is entered:
\begin{displaymath}
\begin{array}{rcl}
\mathtt{sp = alloc,\ regs = \emptyset[- := PARAMS]} & \longrightarrow & \mathtt{free(sp)} \\
\mathtt{State(regs,\ sp,\ pc_\emptyset,\ dst)} & & \mathtt{Return(M(ret\_val), dst, Frames)}
\end{array}
\end{displaymath}
Then the return state restores the program counter by popping the stack, and execution proceeds in a new \texttt{State}, as in the case of external functions:
\begin{displaymath}
\begin{array}{rcl}
\mathtt{free(sp)} & \longrightarrow & \mathtt{pc = POP\_STACK(regs[dst := M(ret\_val)],\ pc)} \\
\mathtt{Return(M(ret\_val), dst, frames)} & & \mathtt{State(regs[dst := M(ret\_val)],\ pc)}
\end{array}
\end{displaymath}

Translation from RTLabs to RTL states proceeds as follows.
Return states are translated as is:
\begin{displaymath}
\mathtt{Return} \longrightarrow \mathtt{Return}
\end{displaymath}

\texttt{Call} states are translated to \texttt{Call\_ID} states:
\begin{displaymath}
\mathtt{Call(id,\ args,\ dst,\ pc)} \longrightarrow \mathtt{Call\_ID(id,\ \sigma'(args),\ \sigma(dst),\ pc)}
\end{displaymath}
Here, $\sigma$ and $\sigma'$ are two maps, to be defined, between pseudo-registers and lists of pseudo-registers, of the types:

\begin{displaymath}
\sigma: \mathtt{register} \rightarrow \mathtt{list\ register}
\end{displaymath}

and:

\begin{displaymath}
\sigma': \mathtt{list\ register} \rightarrow \mathtt{list\ register}
\end{displaymath}

where $\sigma'$ is implemented as:

\begin{displaymath}
\sigma' = \mathtt{flatten} \circ (\mathtt{map}\ \sigma)
\end{displaymath}

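In OCaml terms this is simply the following (a sketch: $\sigma$ itself, which
determines how each front-end pseudo-register is split into back-end
pseudo-registers, is left abstract):

\begin{lstlisting}
type register = int   (* placeholder for the pseudo-register type *)

(* Map each pseudo-register through sigma and flatten the results. *)
let sigma' (sigma : register -> register list)
           (rs : register list) : register list =
  List.concat (List.map sigma rs)
\end{lstlisting}
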
In the case of RTL, execution proceeds as follows.
Suppose we are executing a \texttt{CALL\_ID} instruction.
Then a case split occurs depending on whether we are executing an internal or an external function, as in the RTLabs case:
\begin{displaymath}
\begin{diagram}
& & \llbracket \mathtt{CALL\_ID}(\mathtt{id}, \mathtt{args}, \mathtt{dst}, \mathtt{pc})\rrbracket & & \\
& \ldTo^{\text{external}} & & \rdTo^{\text{internal}} & \\
\skull & & & & \mathtt{regs} = [\mathtt{params}/-] \\
& & & & \mathtt{sp} = \mathtt{ALLOC} \\
& & & & \mathtt{PUSH}(\mathtt{carry}, \mathtt{regs}, \mathtt{dst}, \mathtt{return\_addr}), \mathtt{pc}_{0}, \mathtt{regs}, \mathtt{sp} \\
\end{diagram}
\end{displaymath}
Here, however, we differ from RTLabs when we attempt to execute an external function, in that we use a daemon (i.e.\ an axiom that can close any goal) to artificially close the case, as we have not yet implemented external functions in the back-end.
The reason for this lack of implementation is as follows.
Though we have implemented an optimising assembler as the target of the compiler's back-end, we have not yet implemented a linker for that assembler, so external functions cannot yet be called.
Whilst external functions are carried forth throughout the entirety of the compiler's front-end, we choose not to do the same for the back-end, instead eliminating them in RTL.
However, it is plausible that we could have carried external functions forth, in order to eliminate them at a later stage (i.e.\ when translating from LIN to assembly).

In the case of an internal function being executed, we proceed as follows.
The register map is initialised to the empty map, where all registers are assigned the undefined value, and then the registers corresponding to the function parameters are assigned the values of the parameters.
Further, the stack pointer is reallocated to make room for an extra stack frame, then a frame is pushed onto the stack with the correct address to jump back to in place of the program counter.

Note, in particular, that this final act of pushing a frame on the stack leaves us in a state identical to the RTLabs case, where the instruction
\begin{displaymath}
\mathtt{PUSH(current\_frame[PC := after\_return])}
\end{displaymath}
was executed.

The execution of \texttt{Return} in RTL is similarly straightforward, with the return address, stack pointer, and so on, being computed by popping off the top of the stack, and the return value computed by the function being retrieved from memory:
\begin{align*}
\mathtt{return\_addr} & := \mathtt{top}(\mathtt{stack}) \\
v^* & := M(\mathtt{rv\_regs}) \\
\mathtt{dst}, \mathtt{sp}, \mathtt{carry}, \mathtt{regs} & := \mathtt{pop} \\
\mathtt{regs}[v^* / \mathtt{dst}] &
\end{align*}

Translation and execution must satisfy a pair of commutation properties for the \texttt{Return} and \texttt{Call} cases.
Starting from any \texttt{Return} or \texttt{Call} state, translating and then executing a single step must be the same as executing exactly two steps and then translating, with the intermediate state obtained by executing once also being translatable to the final state.
This is exemplified by the following diagram:
\begin{displaymath}
\begin{diagram}
s & \rTo^1 & s' & \rTo^1 & s'' \\
\dTo & & & \rdTo & \dTo \\
\llbracket s \rrbracket & \rTo(1,3)^1 & & & \llbracket s'' \rrbracket \\
\end{diagram}
\end{displaymath}

\subsection{The RTL to ERTL translation}
\label{subsect.rtl.ertl.translation}

We map RTL statuses to ERTL statuses as follows:
\begin{align*}
\mathtt{sp} & = \mathtt{RegisterSPH} / \mathtt{RegisterSPL} \\
\mathtt{graph} & = \mathtt{graph} + \mathtt{prologue}(s) + \mathtt{epilogue}(s) \\
& \mathrm{where}\ s = \mathrm{callee\ saved} + \nu \mathrm{RA} \\
\end{align*}
The 16-bit RTL stack pointer \texttt{SP} is mapped to a pair of 8-bit hardware registers \texttt{RegisterSPH} and \texttt{RegisterSPL}.
The internal function graphs of RTL are augmented with a prologue and an epilogue, indexed by a set of registers consisting of a fresh pair of registers \texttt{RA} and the set of registers that must be saved by the callee of a function.

The prologue and epilogue that are added to the function graph do the following:
\begin{align*}
\mathtt{prologue}(s) = & \mathtt{create\_new\_frame}; \\
& \mathtt{pop\ ra}; \\
& \mathtt{save\ callee\_saved}; \\
& \mathtt{get\_params} \\
& \ \ \mathtt{reg\_params}: \mathtt{move} \\
& \ \ \mathtt{stack\_params}: \mathtt{push}/\mathtt{pop}/\mathtt{move} \\
\end{align*}
That is, the prologue first creates a new stack frame, pops the return address from the stack, saves all the callee-saved registers (i.e.\ the set \texttt{s}), fetches the parameters that are passed via registers and the stack, and moves them into the correct registers.
In other words, the prologue of a function correctly sets up the calling convention used in the compiler when calling a function.
On the other hand, the epilogue undoes the action of the prologue:
\begin{align*}
\mathtt{epilogue}(s) = & \mathtt{save\ return\ to\ tmp\ real\ regs}; \\
& \mathtt{restore\_registers}; \\
& \mathtt{push\ ra}; \\
& \mathtt{delete\_frame}; \\
& \mathtt{save\ return} \\
\end{align*}
That is, the epilogue first saves the return value to a temporary register, restores all the registers, pushes the return address on to the stack, deletes the stack frame that the prologue created, and saves the return value.

The \texttt{CALL} instruction is translated as follows:
\begin{displaymath}
\mathtt{CALL}\ id \mapsto \mathtt{set\_params};\ \mathtt{CALL}\ id;\ \mathtt{fetch\_result}
\end{displaymath}
Here, \texttt{set\_params} and \texttt{fetch\_result} are functions that implement what the caller of the function needs to do when calling a function, as opposed to the epilogue and prologue, which implement what the callee must do.

The translation from RTL to ERTL and the execution functions must satisfy the following properties for \texttt{CALL} and \texttt{RETURN} instructions respectively:
\begin{displaymath}
\begin{diagram}
\mathtt{CALL} & \rTo^1 & \mathtt{inside\ function} \\
\dTo & & \dTo \\
\underbrace{\ldots}_{\llbracket \mathtt{CALL} \rrbracket} & \rTo &
\underbrace{\ldots}_{\mathtt{prologue}} \\
\end{diagram}
\end{displaymath}
That is, if we start at an RTL \texttt{CALL} instruction, and translate this to an ERTL \texttt{CALL} instruction, then executing the RTL \texttt{CALL} instruction for one step and translating should land us in the prologue of the translated function.
A similar property should also hold for \texttt{RETURN}, with the epilogue of the function being translated substituted for the prologue:
\begin{displaymath}
\begin{diagram}
\mathtt{RETURN} & \rTo^1 & \mathtt{.} \\
\dTo & & \dTo \\
\underbrace{\ldots}_{\mathtt{epilogue}} & \rTo &
\underbrace{\ldots} \\
\end{diagram}
\end{displaymath}

\subsection{The ERTL to LTL translation}
\label{subsect.ertl.ltl.translation}
\newcommand{\declsf}[1]{\expandafter\newcommand\expandafter{\csname #1\endcsname}{\mathop{\mathsf{#1}}\nolimits}}
\declsf{Livebefore}
\declsf{Liveafter}
\declsf{Defined}
\declsf{Used}
\declsf{Eliminable}
\declsf{StatementSem}
For the liveness analysis, we aim at a map
$\ell \in \mathtt{label} \mapsto $ live registers at $\ell$.
We define the following operators on ERTL statements.
$$
\begin{array}{lL>{(ex. $}L<{)$}}
\Defined(s) & registers defined at $s$ & r_1\leftarrow r_2+r_3 \mapsto \{r_1,C\}, \mathtt{CALL}~id\mapsto \text{caller-save}
\\
\Used(s) & registers used at $s$ & r_1\leftarrow r_2+r_3 \mapsto \{r_2,r_3\}, \mathtt{CALL}~id\mapsto \text{parameters}
\end{array}
$$
Given $LA:\mathtt{label}\to\mathtt{lattice}$ (where $\mathtt{lattice}$
is the type of sets of registers\footnote{More precisely, it is the lattice
of pairs of sets of pseudo-registers and sets of hardware registers,
with pointwise operations.}), we also have the following
predicates:
$$
\begin{array}{lL}
\Eliminable_{LA}(\ell) & iff $s(\ell)$ has side-effects only on $r\notin LA(\ell)$
\\&
(ex.\ $\ell : r_1\leftarrow r_2+r_3 \mapsto \{r_1,C\}\cap LA(\ell)=\emptyset,
\mathtt{CALL}~id\mapsto \text{never}$)
\\
\Livebefore_{LA}(\ell) &$:=
\begin{cases}
LA(\ell) &\text{if $\Eliminable_{LA}(\ell)$,}\\
(LA(\ell)\setminus \Defined(s(\ell)))\cup \Used(s(\ell)) &\text{otherwise}.
\end{cases}$
\end{array}
$$
In particular, $\Livebefore$ has type $(\mathtt{label}\to\mathtt{lattice})\to
\mathtt{label}\to\mathtt{lattice}$.

The equation on which we build the fixpoint is then
$$\Liveafter(\ell) \doteq \bigcup_{\ell' >_1 \ell} \Livebefore_{\Liveafter}(\ell')$$
where $\ell' >_1 \ell$ denotes that $\ell'$ is an immediate successor of $\ell$
in the graph. We do not require the fixpoint to be the least one, so the hypothesis
on $\Liveafter$ that we require is
$$\Liveafter(\ell) \supseteq \bigcup_{\ell' >_1 \ell} \Livebefore(\ell')$$
(for shortness we drop the subscript from $\Livebefore$).
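
A hedged OCaml sketch of the corresponding fixpoint computation follows: a
plain Kleene iteration over an assumed graph of per-statement information, on
simple register sets rather than the pairs of sets described in the footnote
above.

\begin{lstlisting}
module RegSet   = Set.Make (Int)
module LabelMap = Map.Make (String)

type label = string

(* Assumed per-statement information, extracted from the ERTL graph. *)
type stmt_info = {
  defined : RegSet.t;       (* Defined(s)           *)
  used    : RegSet.t;       (* Used(s)              *)
  succs   : label list;     (* immediate successors *)
}

(* Livebefore, always taking the "otherwise" branch: eliminability is
   ignored in this sketch. *)
let live_before info live_after =
  RegSet.union info.used (RegSet.diff live_after info.defined)

(* Liveafter(l) = union over the successors l' of l of Livebefore(l').
   Iterate until stable; the sets only grow and the register universe
   is finite, so the iteration terminates. *)
let liveness (g : stmt_info LabelMap.t) : RegSet.t LabelMap.t =
  let step la =
    LabelMap.mapi
      (fun _l info ->
        List.fold_left
          (fun acc l' ->
            RegSet.union acc
              (live_before (LabelMap.find l' g) (LabelMap.find l' la)))
          RegSet.empty info.succs)
      la
  in
  let rec fix la =
    let la' = step la in
    if LabelMap.equal RegSet.equal la la' then la else fix la'
  in
  fix (LabelMap.map (fun _ -> RegSet.empty) g)
\end{lstlisting}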
\subsection{The LTL to LIN translation}
\label{subsect.ltl.lin.translation}
As detailed elsewhere in the reports, due to the parameterized representation of
the back-end languages, the pass described here is actually much more generic
than the translation from LTL to LIN. It consists of a linearisation pass
that maps any graph-based back-end language to its corresponding linear form,
preserving its semantics. In the rest of the section, however, we will keep
the names LTL and LIN for the two partial instantiations of the parameterized
language.

We require a map, $\sigma$, from LTL statuses, where program counters are represented as labels in a graph data structure, to LIN statuses, where program counters are natural numbers:
\begin{displaymath}
\mathtt{pc : label} \stackrel{\sigma}{\longrightarrow} \mathbb{N}
\end{displaymath}

The LTL to LIN translation pass also linearises the graph data structure into a list of instructions.
Pseudocode for the linearisation process is as follows:

\begin{lstlisting}
let rec linearise graph visited required generated todo :=
  match todo with
  | l::todo ->
    if l $\in$ visited then
      let generated := generated $\cup\ \{$ Goto l $\}$ in
      let required := required $\cup\ \{$ l $\}$ in
        linearise graph visited required generated todo
    else
      -- Mark the label `l' as visited
      let visited := visited $\cup\ \{$ l $\}$ in
      -- Get the instruction at label `l' in the graph
      let lookup := graph(l) in
      let generated := generated $\cup\ \{$ lookup $\}$ in
      -- Find the successor of the instruction at label `l' in the graph
      let successor := succ(l, graph) in
      let todo := successor::todo in
        linearise graph visited required generated todo
  | [] -> (required, generated)
\end{lstlisting}
---|

It is easy to see that this linearisation process eventually terminates.
In particular, the size of the visited label set is monotonically increasing, and is bounded above by the size of the graph that we are linearising.

The initial call to \texttt{linearise} sees the \texttt{visited}, \texttt{required} and \texttt{generated} sets initialised to the empty set, and \texttt{todo} initialised with the singleton list consisting of the entry point of the graph.
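For illustration, in the notation of the pseudocode above, and writing \texttt{entry} for the entry label of the graph (the name is ours), the initial call is:
\begin{lstlisting}
let (required, generated) := linearise graph $\emptyset$ $\emptyset$ $\emptyset$ [entry]
\end{lstlisting}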
We envisage needing to prove the following invariants on the linearisation function above:

\begin{enumerate}
\item
$\mathtt{visited} \approx \mathtt{generated}$, where $\approx$ is \emph{multiset} equality, as \texttt{generated} is a collection of instructions in which the same label may be mentioned multiple times, while \texttt{visited} is a set of labels,
\item
$\forall \mathtt{l} \in \mathtt{generated}.\ \mathtt{succ(l,\ graph)} \subseteq \mathtt{required} \cup \mathtt{todo}$,
\item
$\mathtt{required} \subseteq \mathtt{visited}$,
\item
$\mathtt{visited} \cap \mathtt{todo} = \emptyset$.
\end{enumerate}

The invariants collectively imply the following properties, crucial to correctness, about the linearisation process:

\begin{enumerate}
\item
Every graph node is visited at most once,
\item
Every instruction that is generated is generated due to some graph node being visited,
\item
The successor instruction of every instruction that has been visited already will eventually be visited too.
\end{enumerate}

Note that, because the LTL to LIN transformation is the first point at which the code of
a function is linearised in the back-end, we must discover a notion of `well-formed function code' suitable for linearised forms.
In particular, we see the notion of well-formedness (yet to be formally defined) resting on the following conditions:

\begin{enumerate}
\item
For every jump to a label in a linearised function code, the target label exists at some point in the function code,
\item
Each label is unique, appearing only once in the function code,
\item
The final instruction of a function code must be a return or an unconditional
jump.
\end{enumerate}

We assume that these properties will be easy consequences of the invariants on the linearisation function defined above.

The final condition above is potentially a little opaque, so we explain further.
The only instructions that can reasonably appear in the final position of a function code are returns or backward jumps: any other instruction would cause execution to `fall out' of the end of the program.
For example, when a function invoked with \texttt{CALL} returns, it returns to the instruction immediately following the \texttt{CALL} that invoked it.
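Although the formal definition of well-formedness is yet to be fixed, the three conditions above are clearly decidable. The following OCaml sketch, over a hypothetical, much simplified instruction type (not the one used in the formalisation), illustrates the intended check:

\begin{lstlisting}
(* A sketch of a decidable well-formedness check on linearised code.
   The instruction type is an illustrative simplification of LIN. *)
type instr =
  | Label of string
  | Goto of string
  | Branch of string          (* conditional jump *)
  | Return
  | Op of string              (* any other instruction *)

let targets = List.filter_map (function
  | Goto l | Branch l -> Some l
  | _ -> None)

let labels = List.filter_map (function
  | Label l -> Some l
  | _ -> None)

let rec distinct = function
  | [] -> true
  | l :: ls -> not (List.mem l ls) && distinct ls

let well_formed (code : instr list) : bool =
  let ls = labels code in
  (* 1. every jump targets an existing label *)
  List.for_all (fun t -> List.mem t ls) (targets code)
  (* 2. labels are unique *)
  && distinct ls
  (* 3. the code ends with a return or an unconditional jump *)
  && (match List.rev code with
      | (Return | Goto _) :: _ -> true
      | _ -> false)
\end{lstlisting}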

\subsection{The LIN to ASM and ASM to MCS-51 machine code translations}
\label{subsect.lin.asm.translation}

The LIN to ASM translation step is trivial, being almost the identity function.
The only non-trivial feature of the LIN to ASM translation is that all labels are `named apart', so that freshly generated labels from different namespaces cannot clash with each other.

The ASM to MCS-51 machine code translation step, and the required statements of correctness, are found in an unpublished manuscript attached to this document.
This is the most complex translation because of the huge number of cases
to be addressed and because of the complexity of the two semantics.
Moreover, in the assembly code we have conditional and unconditional jumps
to arbitrary locations in the code, which are not supported by the MCS-51
instruction set. The latter has several kinds of jumps characterized by
different instruction sizes and execution times, but limited in range. For
instance, conditional jumps to locations whose destination is more than
$2^7$ bytes away from the jump instruction location are not supported at
all and need to be emulated with a code transformation. The problem, which
is known in the literature as branch displacement and which applies also
to modern architectures, is known to be hard, and finding an optimal
solution is NP-complete. As far as we
know, we will provide the first formally verified proof of correctness for
an assembler that implements branch displacement. We are also providing
the first verified proof of correctness of a mildly optimizing branch
displacement algorithm; this proof is, at the moment, almost finished, but not
described in the companion paper. The proof by itself took about 6 man
months.
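To convey only the algorithmic core (the formalised assembler distinguishes
the MCS-51's three jump kinds, \texttt{SJMP}, \texttt{AJMP} and \texttt{LJMP},
and works on actual instruction encodings; the two-size instruction type and
the range below are illustrative assumptions), here is an OCaml sketch of the
usual fixpoint approach to branch displacement: start with all jumps short,
and lengthen any jump whose displacement does not fit, until the size
assignment stabilises.

\begin{lstlisting}
type item =
  | Jump of int   (* index of the target instruction *)
  | Instr of int  (* any other instruction, with its size in bytes *)

let short_size, long_size = 2, 3

(* Byte offset of every instruction under the current size
   assignment; off.(n) is the total size of the program. *)
let offsets (prog : item array) (long : bool array) : int array =
  let n = Array.length prog in
  let off = Array.make (n + 1) 0 in
  for i = 0 to n - 1 do
    let s = match prog.(i) with
      | Jump _ -> if long.(i) then long_size else short_size
      | Instr s -> s
    in
    off.(i + 1) <- off.(i) + s
  done;
  off

(* A short jump at i reaches targets within [-128,127] bytes of the
   following instruction (an illustrative range). *)
let fits off i target =
  let d = off.(target) - off.(i + 1) in
  -128 <= d && d <= 127

(* Lengthening is monotone (a long jump never becomes short again),
   so the loop terminates after at most as many rounds as there are
   jumps in the program. *)
let branch_displacement (prog : item array) : bool array =
  let n = Array.length prog in
  let long = Array.make n false in
  let changed = ref true in
  while !changed do
    changed := false;
    let off = offsets prog long in
    Array.iteri
      (fun i it -> match it with
        | Jump t when not long.(i) && not (fits off i t) ->
            long.(i) <- true;
            changed := true
        | _ -> ())
      prog
  done;
  long
\end{lstlisting}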

\section{Correctness of cost prediction}
Roughly speaking,
the proof of correctness of cost prediction shows that the cost of executing
a labelled object code program is the same as the sum, over all labels in the
program execution trace, of the costs statically associated to the labels and
computed on the object code itself.

In the presence of object level function calls, however, the previous statement is
incorrect. The reason is twofold. First of all, a function call may diverge.
To the last label that comes before the call, however, we also associate
the cost of the instructions that follow the call. Therefore, in the
sum over all labels, when we meet a label we pre-pay for the instructions
after function calls, assuming all calls to be terminating. This choice is
driven by considerations on the source code. Functions can also be called
inside expressions, and it would be too disruptive to put labels inside
expressions to capture the cost of instructions that follow a call. Moreover,
adding a label after each call would produce a much higher number of proof
obligations in the certification of source programs using Frama-C. The
proof obligations, moreover, would be guarded by termination of all functions
involved, which also generates lots of additional complex proof obligations
that have little to do with execution costs. With our approach, instead, we
put less burden on the user, at the price of proving a weaker statement:
the estimated and actual costs will be the same if and only if the high level
program is converging. For prefixes of diverging programs we can provide
a similar result where the equality is replaced by an inequality (loss of
precision).

Assuming totality of functions is however not yet sufficient at the object
level. Even if a function returns, there is no guarantee that it will transfer
control back to the calling point. For instance, the function could have
manipulated the return address in its stack frame. Moreover, an object level
program can forge any address and transfer control to it, with no guarantee
on the execution behaviour and labelling properties of the called program.

To solve the problem, we introduced the notion of \emph{structured trace},
which comes in two flavours: structured traces for total programs (an inductive
type) and structured traces for diverging programs (a co-inductive type based
on the previous one). Roughly speaking, a structured trace represents the
execution of a well behaved program that is subject to several constraints,
such as:
\begin{enumerate}
\item All function calls return control just after the calling point
\item The execution of every function body starts with a label and ends with
a RET (even for bodies reached by invoking a function pointer)
\item All instructions are covered by a label (required by correctness of
the labelling approach)
\item The targets of all conditional jumps must be labelled (a sufficient
but not necessary condition for precision of the labelling approach)
\item \label{prop5} Two structured traces with the same structure yield the same
cost traces.
\end{enumerate}
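To give an idea of the shape of these objects, the following OCaml sketch
mimics the inductive (converging) flavour of structured traces. In the Matita
formalisation the corresponding types are indexed over machine states and
carry the constraints listed above as additional arguments; both are omitted
here, and all names are illustrative:

\begin{lstlisting}
(* A heavily simplified sketch of the finite flavour of structured
   traces; states and cost labels are abstract, and all the side
   conditions listed above are omitted. *)
type state                  (* abstract machine states *)
type cost_label = string

type trace_label_return =   (* from a label to the RET ending the body *)
  | Tlr_base of trace_label_label
  | Tlr_step of trace_label_label * trace_label_return

and trace_label_label =     (* a cost label, then unlabelled steps *)
  | Tll of cost_label * trace_any_label

and trace_any_label =
  | Tal_base_not_return of state * state   (* one final ordinary step *)
  | Tal_base_return of state * state       (* the final RET step *)
  | Tal_step of state * trace_any_label    (* one silent step *)
  | Tal_call of state * trace_label_return * trace_any_label
      (* a call: the callee's structured trace (ending in a RET),
         then the continuation just after the calling point *)
\end{lstlisting}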

Correctness of cost predictions is proved only for structured execution traces,
i.e. for well behaved programs. The forward simulation proof for all back-end
passes will actually be a proof of preservation of the structure of
the structured traces which, because of property \ref{prop5}, will imply
correctness of the cost prediction for the back-end. The Clight to RTLabs
pass will also include a proof that associates to each converging execution its
converging structured trace and to each diverging execution its diverging
structured trace.

There are also two other issues that invalidate the naive statement of
correctness of cost prediction given above. The algorithm that statically
computes the cost of blocks is correct only when the object code is \emph{well
formed} and the program counter is \emph{reachable}.
A well formed object code is such that
the program counter will never overflow after an execution step of
the processor. An overflow that occurs during fetching but is overwritten
during execution is, however, correct and necessary to accept correct
programs that are as large as the program memory. Temporary overflows add
complications to the proof. A reachable address is an address that can be
obtained by fetching (not executing!) a finite number of times from the
beginning of the code memory without ever overflowing. The complication is that
the static prediction traverses the code memory assuming that the memory will
be read sequentially from the beginning and that all jumps jump only to
reachable addresses. When this property is violated, the way the code memory
is interpreted is incorrect and the cost computed is totally meaningless.
The reachability relation is closed under fetching for well formed programs.
The property that calls to function pointers only target reachable and
well labelled locations, however, is not statically predictable and it is
enforced in the structured trace.

The proof of correctness of cost predictions has been quite complex. Setting
up the right invariants (structured traces, well formed programs, reachability)
and completing the proof has required more than 3 man months, while the initially
estimated effort was much lower. In the paper-and-pencil proof for IMP, the
corresponding proof was obvious and only took two lines.

The proof itself is quite involved. We
basically need to show as an important lemma that the sum of the execution
costs over a structured trace, where the costs are summed in execution order,
is equivalent to the sum of the execution costs in the order of pre-payment.
The two orders are quite different and the proof is by mutual recursion over
the definition of the converging structured traces, which is a family of three
mutually inductive types. The fact that this property only holds for converging
function calls is hidden in the definition of the structured traces.
Then we need to show that the order of pre-payment
corresponds to the order induced by the cost traces extracted from the
structured trace. Finally, we need to show that the statically computed cost
for one block corresponds to the cost dynamically computed in pre-payment
order.
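In informal notation (ours, introduced only for this outline), the three steps
combine into the following statement: for every converging structured trace
$\tau$,
$$\sum_{s \in \mathrm{exec}(\tau)} k(s) \;=\; \sum_{\ell \in \mathrm{labels}(\tau)} K(\ell)$$
where $\mathrm{exec}(\tau)$ enumerates the execution steps of $\tau$ in
execution order, $k(s)$ is the cost of step $s$, $\mathrm{labels}(\tau)$
enumerates the cost labels met along $\tau$ in the order of the extracted cost
trace, and $K(\ell)$ is the statically computed cost of the block starting at
label $\ell$.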

\section{Overall results}

Functional correctness of the compiled code can be shown by composing
the simulations to show that the target behaviour matches the
behaviour of the source program, if the source program does not `go
wrong'. More precisely, we show that there is a forward simulation
between the source trace and a (flattened structured) trace of the
output, and conclude equivalence because the target's semantics are
in the form of an executable function, and hence
deterministic.

Combining this with the correctness of the assignment of costs to cost
labels at the ASM level for a structured trace, we can show that the
cost of executing any compiled function (including the main function)
is equal to the sum of all the values for cost labels encountered in
the \emph{source code's} trace of the function.
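In the same informal notation used at the end of the previous section, for
each converging function $f$ of the source program this combined result reads
$$\mathrm{clock}_{\mathtt{ASM}}(f) \;=\; \sum_{\ell \in \mathrm{labels}_{\mathrm{src}}(f)} K(\ell)$$
where $\mathrm{clock}_{\mathtt{ASM}}(f)$ is the number of clock cycles taken
by the compiled code of $f$ and $\mathrm{labels}_{\mathrm{src}}(f)$ enumerates
the cost labels encountered in the source trace of $f$.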

\section{Estimated effort}
Based on the rough analysis performed so far, we can estimate the total
effort for the certification of the compiler. We obtain this estimate by
combining, for each pass: 1) the number of lines of code to be certified;
2) the ratio of number of lines of proof to number of lines of code from
the CompCert project~\cite{compcert} for the CompCert pass that is closest to
ours; 3) an estimation of the complexity of the pass according to the
analysis above. The result is shown in Table~\ref{table}.

\begin{table}[ht]
\begin{tabular}{lrlrr}
Pass origin & Code lines & CompCert ratio & Estimated effort & Estimated effort \\
            &            &                & (based on CompCert) & (final) \\
\hline
Common   & 4864 & 4.25 \permil & 20.67 & 17.0 \\
Cminor   & 1057 & 5.23 \permil &  5.53 &  6.0 \\
Clight   & 1856 & 5.23 \permil &  9.71 & 10.0 \\
RTLabs   & 1252 & 1.17 \permil &  1.48 &  5.0 \\
RTL      &  469 & 4.17 \permil &  1.95 &  2.0 \\
ERTL     &  789 & 3.01 \permil &  2.38 &  2.5 \\
LTL      &   92 & 5.94 \permil &  0.55 &  0.5 \\
LIN      &  354 & 6.54 \permil &  2.31 &  1.0 \\
ASM      &  984 & 4.80 \permil &  4.72 & 10.0 \\
\hline
Total common    &  4864 & 4.25 \permil & 20.67 & 17.0 \\
Total front-end &  2913 & 5.23 \permil & 15.24 & 16.0 \\
Total back-end  &  6853 & 4.17 \permil & 13.39 & 21.0 \\
\hline
Total           & 14630 & 3.75 \permil & 49.30 & 54.0 \\
\end{tabular}
\caption{\label{table}Estimated effort (in man months) for the certification of each compiler pass}
\end{table}

We now provide some additional information on the methodology used in the
computation. The passes in the CerCo and CompCert front-ends closely match each
other. However, there is no clear correspondence between the two back-ends.
For instance, we enforce the calling convention immediately after instruction
selection, whereas in CompCert this is performed in a later phase, and we
linearise the code at the very end, whereas CompCert performs linearisation
as soon as possible. Therefore, the first part of the exercise consisted
of shuffling and partitioning the CompCert code in order to assign to each
CerCo pass the CompCert code that performs the same transformation.

After this preliminary step, using the data given in~\cite{compcert} (which
refer to an early version of CompCert) we computed the ratio between
man months and lines of code in CompCert for each CerCo pass. This is shown
in the third column of Table~\ref{table}. For those CerCo passes that
have no correspondence in CompCert (like the optimizing assembler) or where
we have insufficient data, we have used the average of the ratios computed
above.

The second column of the table shows the number of lines of code for each
pass in CerCo. The fourth column is obtained by multiplying the code lines by the
CompCert ratio; it provides an estimate of the effort required (in man months)
if the complexity of the proofs for CerCo and CompCert were the same.
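For instance, for the common libraries this gives $4864 \times 0.00425 \approx 20.7$ man months, the figure reported in the fourth column of the first row of the table.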

The two proof styles, however, are deliberately completely different. Where
CompCert uses non-executable semantics, describing the various semantics with
inductive types, we have preferred executable semantics. Therefore, CompCert's
proofs by induction and inversion become proofs by functional inversion,
performed using the Russell methodology (now called Program in Coq, but whose
behaviour differs from Matita's). Moreover, CompCert's code is written using
only types that belong to the Hindley-Milner fragment, whereas we have
heavily exploited dependent types all over the code. The dependent type
discipline offers many advantages from the point of view of clarity of the
invariants involved and early detection of errors, and it naturally combines
well with the Russell approach, which is based on dependent types. However, it
is also well known to introduce technical problems all over the code, like
the need to explicitly prove type equalities to be able to manipulate
expressions in certain ways. In many situations, the difficulties encountered
in manipulating dependent types are better addressed by improving the Matita
system, according to our formalisation-driven approach to system development. For this
reason, and assuming a pessimistic point of view on our performance, the
final column presents our final estimation of the effort required, which also
takes into account the complexity of the proof suggested by the informal proofs
sketched in the previous section.

\subsection{Contingency plan}
On the basis of the proof strategy sketched in this document and the
estimated effort, we can refine our contingency plan. Should we finish
the certification of the basic compiler ahead of schedule, we will have the choice
of either proving loop optimizations and/or partial redundancy elimination
correct (both tasks that seem difficult to achieve in a short time) or
considering the MCS-51 specific extensions introduced during the first period
and under-used in the formalized prototype. Yet another possibility would be
to further study retargeting of the code and the commutation properties between
different compiler passes. The latter study is easily enabled by our
approach where all back-end languages are instances of the same parameterized
language.

In the case of a significant delay in the certification of some
components, we will address first the passes that are more likely to have
undetected bugs, and we will follow a top-down approach, axiomatizing
the properties of the data structures used in the compiler to focus more
on the algorithms. The rationale is that data structures are easier than
algorithms to test using well known methodologies.
The effort table clearly shows that common definitions
and data structures are a quarter of the size of the code and require slightly
less than a third of the total effort. At least half of this effort really goes
into simple data structures (vectors, bounded and unbounded integers, tries
and maps) whose certification is not interesting and whose code could be
taken, without re-proving it, from the library of any other theorem prover.

\section{Conclusions}
The overall exercise, whose details have been only minimally reported here,
has been very useful. It has allowed us to spot, at an early stage, some critical
points of the proof that have required major changes to the proof plan. It has also
shown that the last passes of the compilation (e.g. assembly) and cost
prediction on the object code are much more involved than the higher level
passes.

The final estimation of the effort is surely affected by a low degree of
confidence. It is however sufficient to conclude that the effort required
is in line with the man power that was scheduled for the second half of the
second period and for the third period. Compared to the number of man months
declared in Annex I of the contract, we will need more man months. However,
both at UNIBO and UEDIN there have been major differences in hiring with
respect to the Annex. Therefore both sites now have a higher number of man
months available, with the trade-off of a lower level of maturity of the
people employed.

The reviewers suggested that we use this estimation to compare two possible
scenarios: a) proceed as planned, porting all the CompCert proofs to Matita,
or b) port D3.1 and D4.1 to Coq and re-use the CompCert proofs.
We remark here again that the back-ends of the two compilers, from the
memory model on, are substantially different: we are not re-proving correctness
of the same piece of code. Moreover, the proof techniques are different for
the front-end too. Switching to the CompCert formalization would imply
abandoning the untrusted compiler, abandoning the experiment with
a different proof technique, abandoning the quest for an open source
proof, and abandoning the co-development of the formalization and the
Matita proof assistant. In the Commitment Letter~\cite{letter} delivered
to the Officer in May we clarified our personal perspective on the project
goals and objectives. We do not re-describe here the point of view presented
in the letter, which can be condensed into ``we value diversity''.

Clearly, if the exercise had suggested the infeasibility, in terms of
effort, of concluding the formalization or getting close to that, we would have
abandoned our path and embraced the reviewers' suggestion. However, we
have been reassured by the analysis we performed in autumn, and further progress made
during the winter does not yet show any major delay with respect to the
proof schedule. We are thus planning to continue the certification according
to the more detailed proof plan that came out of the exercise reported in
this manuscript.

\begin{thebibliography}{2}
\bibitem{compcert} X. Leroy, ``A Formally Verified Compiler Back-end'',
Journal of Automated Reasoning 43(4):363--446, 2009.

\bibitem{letter} The CerCo team, ``Commitment to the Consideration of Reviewers' Recommendation'', 16/05/2011.
\end{thebibliography}

\end{document}