# source:Deliverables/D1.2/CompilerProofOutline/outline.tex@1732

Last change on this file since 1732 was 1732, checked in by campbell, 8 years ago

More on front-end.

\documentclass[a4paper, 10pt]{article}

\usepackage{a4wide}
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[english]{babel}
\usepackage{color}
\usepackage{diagrams}
\usepackage{graphicx}
\usepackage[utf8x]{inputenc}
\usepackage{listings}
\usepackage{microtype}
\usepackage{skull}
\usepackage{stmaryrd}

\lstdefinelanguage{matita-ocaml} {
  mathescape=true
}
\lstset{
  language=matita-ocaml,basicstyle=\tt,columns=flexible,breaklines=false,
  showspaces=false, showstringspaces=false, extendedchars=false,
  inputencoding=utf8x, tabsize=2
}

\title{Proof outline for the correctness of the CerCo compiler}
\date{\today}
\author{The CerCo team}

\begin{document}

\maketitle

\section{Introduction}
\label{sect.introduction}

At the last project review of the CerCo project, the reviewers suggested that we write down some of the statements of the correctness theorems that we intend to prove about the complexity preserving compiler.
This document provides a high-level, pen-and-paper sketch of what we view as the best path to completing the correctness proof for the compiler.
In particular, for every translation between two intermediate languages, in both the front- and back-ends, we identify the key translation steps and the invariants that we view as important for the correctness proof.

\section{Front-end: Clight to RTLabs}

The front-end of the CerCo compiler consists of several stages:

\begin{center}
\begin{minipage}{.8\linewidth}
\begin{tabbing}
\quad \= $\downarrow$ \quad \= \kill
\textsf{Clight}\\
\> $\downarrow$ \> cast removal\\
\> $\downarrow$ \> add runtime functions\footnote{Following the last project
meeting we intend to move this transformation to the back-end}\\
\> $\downarrow$ \> cost labelling\\
\> $\downarrow$ \> stack variable allocation and control structure
 simplification\\
\textsf{Cminor}\\
\> $\downarrow$ \> generate global variable initialisation code\\
\> $\downarrow$ \> transform to RTL graph\\
\textsf{RTLabs}\\
\> $\downarrow$ \> start of target specific back-end\\
\end{tabbing}
\end{minipage}
\end{center}

%Our overall statements of correctness with respect to costs will
%require a correctly labelled program

There are three layers in most of the proofs proposed:
\begin{enumerate}
\item invariants closely tied to the syntax and transformations using
  dependent types,
\item a forward simulation proof, and
\item syntactic proofs about the cost labelling.
\end{enumerate}
The first will both support functional correctness and allow us to show
the totality of some of the compiler stages (that is, that these stages
of the compiler cannot fail).  The second provides the main functional
correctness result, and the last will be crucial for applying
correctness results about the costings from the back-end.

We will also prove that a suitably labelled RTLabs trace can be turned
into a \emph{structured trace} which splits the execution trace into
cost-label-to-cost-label chunks with nested function calls.  This
structure was identified during work on the correctness of the
back-end cost analysis as retaining important information about the
structure of the execution that is difficult to reconstruct later in
the compiler.

\subsection{Clight cast removal}

This pass removes superfluous casts inserted by the parser to make
arithmetic promotion explicit (such as
\lstinline[language=C]'c = (short)((int)a + (int)b);').

The transformation only affects Clight expressions, recursively
detecting casts that can be safely eliminated.  The semantics provides
a big-step definition for expressions, so we should be able to show a
lock-step forward simulation using a lemma showing that cast
elimination does not change the evaluation of expressions.  This lemma
will follow from a structural induction on the source expression.  We
have already proved a few of the underlying arithmetic results
necessary to validate the approach.

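The recursive structure of the transformation can be sketched on a toy expression type; the following OCaml fragment is purely illustrative (the type and constructor names do not reflect the actual Clight syntax of the formalisation), and only handles the \lstinline[language=C]'(short)((int)a + (int)b)' pattern from the example above.

```ocaml
(* A toy sketch of cast elimination, assuming a two-type world:
   widening two short operands to int, adding, and truncating back to
   short computes the same value as adding the shorts directly. *)
type ty = Short | Int

type expr =
  | Var of string * ty
  | Cast of ty * expr
  | Add of expr * expr

(* The static type of an expression; arithmetic promotes to int. *)
let type_of = function
  | Var (_, t) -> t
  | Cast (t, _) -> t
  | Add (_, _) -> Int

(* Recursively drop casts that cannot change the computed value. *)
let rec simplify e =
  match e with
  | Cast (Short, Add (Cast (Int, a), Cast (Int, b)))
    when type_of a = Short && type_of b = Short ->
      Add (simplify a, simplify b)
  | Cast (t, e') -> Cast (t, simplify e')
  | Add (a, b) -> Add (simplify a, simplify b)
  | Var _ as v -> v
```

The correctness lemma for this sketch would state that \texttt{simplify} preserves the (big-step) value of the expression, by structural induction as described above.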
\subsection{Clight cost labelling}

This adds cost labels before and after selected statements and
expressions, and the execution traces ought to be equivalent modulo
cost labels.  Hence it requires a simple forward simulation with a
limited amount of stuttering wherever a new cost label is introduced.
A bound on the amount of stuttering allowed can be given based on the
statement or continuation to be evaluated next.

We also intend to show three syntactic properties about the cost
labelling:
\begin{itemize}
\item every function starts with a cost label,
\item every branching instruction is followed by a label (note that
  exiting a loop is treated as a branch), and
\item the head of every loop (and any \lstinline'goto' destination) is
  a cost label.
\end{itemize}
These can be shown by structural induction on the source term.

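The flavour of these syntactic checks can be shown on a toy statement type; the OCaml sketch below (illustrative names, not the formalised syntax) checks only the first property, that a function body begins with a cost label.

```ocaml
(* A minimal sketch of the first syntactic property.  A body starts
   with a cost label if its first executed statement is a cost label,
   looking through sequencing. *)
type stmt =
  | CostLabel of string * stmt   (* labelled statement *)
  | Assign of string * int
  | Seq of stmt * stmt

let rec starts_with_label = function
  | CostLabel _ -> true
  | Seq (s, _) -> starts_with_label s
  | Assign _ -> false
```

The other two properties (labels after branches and at loop heads) would be similar recursive predicates, each provable preserved by structural induction as stated above.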
\subsection{Clight to Cminor translation}

This translation is the first to introduce some invariants, with the
proofs closely tied to the implementation by dependent typing.  These
are largely complete and show that the generated code enjoys:
\begin{itemize}
\item some minimal type safety, shown by explicit checks on the
  Cminor types during the transformation (a little more work remains
  to be done here, but follows the same form);
\item that variables named in the parameter and local variable
  environments are distinct from one another, again by an explicit
  check;
\item that variables used in the generated code are present in the
  resulting environment (either by checking their presence in the
  source environment, or because they come from a list of fresh
  temporary variables); and
\item that all \lstinline[language=C]'goto' labels are present (by
  checking them against a list of source labels and proving that all
  source labels are preserved).
\end{itemize}

The simulation will be similar to the relevant stages of CompCert
(Clight to Csharpminor and Csharpminor to Cminor --- in the event that
the direct proof is unwieldy we could introduce a corresponding
intermediate language).  During early experimentation with porting
CompCert definitions to the Matita proof assistant we found little
difficulty reproving the results for the memory model, so we plan to
port the memory injection properties and use them to relate Clight
in-memory variables with either a local variable valuation or a stack
slot, depending on how each variable was classified.

This should be sufficient to show the equivalence of (big-step)
expression evaluation.  The simulation can then be shown by relating
corresponding blocks of statements and continuations with their Cminor
counterparts and proving that a few steps reach the next matching
state.

The syntactic properties required for cost labels remain similar and a
structural induction on the function bodies should be sufficient to
show that they are preserved.

\subsection{Cminor global initialisation code}

This short phase replaces the global variable initialisation data with
code that executes when the program starts.  Each piece of
initialisation data in the source is matched by a new statement
storing that data.  As each global variable is allocated a distinct
memory block, the program state after the initialisation statements
will be the same as the original program's state at the start of
execution, and will proceed in the same manner afterwards.

% Actually, the above is wrong...
% ... this ought to be in a fresh main function with a fresh cost label

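The data-to-statements step can be sketched as follows; the OCaml fragment is a simplification with illustrative names and datum sizes, not the Cminor definitions.

```ocaml
(* Sketch of the phase: each piece of initialisation data for a global
   becomes a store statement at an increasing offset within that
   global's memory block, executed at program start. *)
type init_datum = Int8 of int | Int16 of int
type stmt = Store of string * int * init_datum  (* global, offset, datum *)

let size_of = function Int8 _ -> 1 | Int16 _ -> 2

(* Translate one global's data list into a sequence of stores. *)
let init_code (var, data) =
  let _, stmts =
    List.fold_left
      (fun (off, acc) d -> (off + size_of d, Store (var, off, d) :: acc))
      (0, []) data
  in
  List.rev stmts
```

The correctness argument in the text then reduces to showing that executing these stores produces exactly the memory contents that the initialisation data would have described.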
\subsection{Cminor to RTLabs translation}

\subsection{RTLabs structured trace generation}

\section{The RTLabs to RTL translation}
\label{sect.rtlabs.rtl.translation}

% dpm: type system and casting load (???)
We require a map, $\sigma$, from \texttt{Values} of the front-end memory model to lists of \texttt{BEValues} of the back-end memory model:

\begin{displaymath}
\mathtt{Value} ::= \bot \mid \mathtt{int(size)} \mid \mathtt{float} \mid \mathtt{null} \mid \mathtt{ptr} \quad\stackrel{\sigma}{\longrightarrow}\quad \mathtt{BEValue} ::= \bot \mid \mathtt{byte} \mid \mathtt{int}_i \mid \mathtt{ptr}_i
\end{displaymath}

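The shape of this map can be sketched in OCaml for the integer and undefined cases only (floats, null and pointers are elided); the constructor names are illustrative and the byte decomposition below assumes little-endian order.

```ocaml
(* Sketch of sigma : Value -> BEValue list.  A front-end integer of a
   given byte size becomes that many byte-sized back-end values. *)
type value = VUndef | VInt of int * int   (* size in bytes, payload *)
type bevalue = BEUndef | BEByte of int

(* Split n into `size` bytes, least significant first. *)
let rec bytes_of n size =
  if size = 0 then []
  else BEByte (n land 0xff) :: bytes_of (n asr 8) (size - 1)

let sigma = function
  | VUndef -> [BEUndef]
  | VInt (size, n) -> bytes_of n size
```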
We further require a map, also written $\sigma$, which maps the front-end \texttt{Memory} to the back-end's notion of \texttt{BEMemory}.

\begin{displaymath}
\mathtt{Mem}\ \alpha = \mathtt{Block} \rightarrow (\mathbb{Z} \rightarrow \alpha)
\end{displaymath}

where

\begin{displaymath}
\mathtt{Block} ::= \mathtt{Region} \cup \mathtt{ID}
\end{displaymath}

\begin{displaymath}
\mathtt{BEMem} = \mathtt{Mem}\ \mathtt{BEValue}
\end{displaymath}

\begin{displaymath}
\mathtt{Address} = \mathtt{BEValue} \times \mathtt{BEValue}
\end{displaymath}

\begin{displaymath}
\mathtt{Mem} = \mathtt{Block} \rightarrow (\mathbb{Z} \rightarrow \mathtt{Cont} \mid \mathtt{Value} \times \mathtt{Size} \mid \mathtt{null})
\end{displaymath}

\begin{center}
\begin{picture}(2, 2)
% picture of sigma mapping memory to memory
TODO
\end{picture}
\end{center}

\begin{displaymath}
\mathtt{load}\ s\ a\ M = \mathtt{Some}\ v \rightarrow \forall i \leq s.\ \mathtt{load}\ s\ (a + i)\ \sigma(M) = \mathtt{Some}\ v_i
\end{displaymath}

\begin{displaymath}
\sigma(\mathtt{store}\ v\ M) = \mathtt{store}\ \sigma(v)\ \sigma(M)
\end{displaymath}

\begin{displaymath}
\begin{array}{rll}
\mathtt{State} & ::=  & (\mathtt{State} : \mathtt{Frame}^* \times \mathtt{Frame} \\
               & \mid & \mathtt{Call} : \mathtt{Frame}^* \times \mathtt{Args} \times \mathtt{Return} \times \mathtt{Fun} \\
               & \mid & \mathtt{Return} : \mathtt{Frame}^* \times \mathtt{Value} \times \mathtt{Return}) \times \mathtt{Mem}
\end{array}
\end{displaymath}

\begin{displaymath}
\mathtt{State} ::= \mathtt{Frame}^* \times \mathtt{PC} \times \mathtt{SP} \times \mathtt{ISP} \times \mathtt{CARRY} \times \mathtt{REGS}
\end{displaymath}

\begin{displaymath}
\mathtt{State} \stackrel{\sigma}{\longrightarrow} \mathtt{State}
\end{displaymath}

\begin{displaymath}
\sigma(\mathtt{State} (\mathtt{Frame}^* \times \mathtt{Frame})) \longrightarrow ((\sigma(\mathtt{Frame}^*), \sigma(\mathtt{PC}), \sigma(\mathtt{SP}), 0, 0, \sigma(\mathtt{REGS})), \sigma(\mathtt{Mem}))
\end{displaymath}

\begin{displaymath}
\sigma(\mathtt{Return}(-)) \longrightarrow \sigma \circ \text{return one step}
\end{displaymath}

\begin{displaymath}
\sigma(\mathtt{Call}(-)) \longrightarrow \sigma \circ \text{call one step}
\end{displaymath}

Return one step commuting diagram:

\begin{displaymath}
\begin{diagram}
s & \rTo^{\text{one step of execution}} & s'   \\
  & \rdTo                             & \dTo \\
  &                                   & \llbracket s'' \rrbracket
\end{diagram}
\end{displaymath}

Call one step commuting diagram:

\begin{displaymath}
\begin{diagram}
s & \rTo^{\text{one step of execution}} & s'   \\
  & \rdTo                             & \dTo \\
  &                                   & \llbracket s'' \rrbracket
\end{diagram}
\end{displaymath}

\begin{displaymath}
\begin{diagram}
\mathtt{CALL}(\mathtt{id},\ \mathtt{args},\ \mathtt{dst},\ \mathtt{pc}),\ \mathtt{State}(\mathtt{Frame},\ \mathtt{Frames}) & \rTo & \mathtt{Call}(FINISH ME)
\end{diagram}
\end{displaymath}

\begin{displaymath}
\begin{array}{rcl}
\mathtt{CALL}(\mathtt{id}, \mathtt{args}, \mathtt{dst}, \mathtt{pc}) & \longrightarrow & \mathtt{CALL\_ID}(\mathtt{id}, \sigma'(\mathtt{args}), \sigma(\mathtt{dst}), \mathtt{pc}) \\
\mathtt{RETURN}                                                      & \longrightarrow & \mathtt{RETURN} \\
\end{array}
\end{displaymath}

\begin{displaymath}
\begin{array}{rcl}
\sigma & : & \mathtt{register} \rightarrow \mathtt{list\ register} \\
\sigma' & : & \mathtt{list\ register} \rightarrow \mathtt{list\ register}
\end{array}
\end{displaymath}

\section{The RTL to ERTL translation}
\label{sect.rtl.ertl.translation}

\begin{displaymath}
\begin{diagram}
& & \llbracket \mathtt{CALL\_ID}(\mathtt{id}, \mathtt{args}, \mathtt{dst}, \mathtt{pc})\rrbracket & & \\
& \ldTo^{\text{external}} & & \rdTo^{\text{internal}} & \\
\skull & & & & \mathtt{regs} = [\mathtt{params}/-] \\
& & & & \mathtt{sp} = \mathtt{ALLOC} \\
& & & & \mathtt{PUSH}(\mathtt{carry}, \mathtt{regs}, \mathtt{dst}, \mathtt{return\_addr}), \mathtt{pc}_{0}, \mathtt{regs}, \mathtt{sp} \\
\end{diagram}
\end{displaymath}

\begin{align*}
\llbracket \mathtt{RETURN} \rrbracket \\
\mathtt{return\_addr} & := \mathtt{top}(\mathtt{stack}) \\
v*                    & := m(\mathtt{rv\_regs}) \\
\mathtt{dst}, \mathtt{sp}, \mathtt{carry}, \mathtt{regs} & := \mathtt{pop} \\
\mathtt{regs}[v* / \mathtt{dst}] \\
\end{align*}

\begin{displaymath}
\begin{diagram}
s    & \rTo^1 & s' & \rTo^1 & s'' \\
\dTo &        &    & \rdTo  & \dTo \\
\llbracket s \rrbracket & \rTo(1,3)^1 & & & \llbracket s'' \rrbracket \\
\mathtt{CALL} \\
\end{diagram}
\end{displaymath}

\begin{displaymath}
\begin{diagram}
s    & \rTo^1 & s' & \rTo^1 & s'' \\
\dTo &        &    & \rdTo  & \dTo \\
\llbracket s \rrbracket & \rTo(1,3)^1 & & & \llbracket s'' \rrbracket \\
\mathtt{RETURN} \\
\end{diagram}
\end{displaymath}

\begin{displaymath}
\mathtt{b\_graph\_translate}: (\mathtt{label} \rightarrow \mathtt{blist'})
\rightarrow \mathtt{graph} \rightarrow \mathtt{graph}
\end{displaymath}

\begin{align*}
\mathtt{theorem} &\ \mathtt{b\_graph\_translate\_ok}: \\
& \forall  f.\forall G_{i}.\mathtt{let}\ G_{\sigma} := \mathtt{b\_graph\_translate}\ f\ G_{i}\ \mathtt{in} \\
&       \forall l \in G_{i}.\mathtt{subgraph}\ (f\ l)\ l\ (\mathtt{next}\ l\ G_{i})\ G_{\sigma}
\end{align*}

\begin{align*}
\mathtt{lemma} &\ \mathtt{execute\_1\_step\_ok}: \\
&       \forall s.  \mathtt{let}\ s' := s\ \sigma\ \mathtt{in} \\
&       \mathtt{let}\ l := pc\ s\ \mathtt{in} \\
&       s \stackrel{1}{\rightarrow} s^{*} \Rightarrow \exists n. s' \stackrel{n}{\rightarrow} s'^{*} \wedge s'^{*} = s^{*}\ \sigma
\end{align*}

\begin{align*}
\mathrm{RTL\ status} & \ \ \mathrm{ERTL\ status} \\
\mathtt{sp} & = \mathtt{spl} / \mathtt{sph} \\
\mathtt{graph} & \mapsto \mathtt{graph} + \mathtt{prologue}(s) + \mathtt{epilogue}(s) \\
& \mathrm{where}\ s = \mathrm{callee\ saved} + \nu \mathrm{RA} \\
\end{align*}

\begin{displaymath}
\begin{diagram}
\mathtt{CALL} & \rTo^1 & \mathtt{inside\ function} \\
\dTo & & \dTo \\
\underbrace{\ldots}_{\llbracket \mathtt{CALL} \rrbracket} & \rTo &
\underbrace{\ldots}_{\mathtt{prologue}} \\
\end{diagram}
\end{displaymath}

\begin{displaymath}
\begin{diagram}
\mathtt{RETURN} & \rTo^1 & \mathtt{.} \\
\dTo & & \dTo \\
\underbrace{\ldots}_{\mathtt{epilogue}} & \rTo &
\underbrace{\ldots} \\
\end{diagram}
\end{displaymath}

\begin{align*}
\mathtt{prologue}(s) = & \mathtt{create\_new\_frame}; \\
                       & \mathtt{pop\ ra}; \\
                       & \mathtt{save\ callee\_saved}; \\
                       & \mathtt{get\_params} \\
                       & \ \ \mathtt{reg\_params}: \mathtt{move} \\
                       & \ \ \mathtt{stack\_params}: \mathtt{push}/\mathtt{pop}/\mathtt{move} \\
\end{align*}

\begin{align*}
\mathtt{epilogue}(s) = & \mathtt{save\ return\ to\ tmp\ real\ regs}; \\
                       & \mathtt{restore\_registers}; \\
                       & \mathtt{push\ ra}; \\
                       & \mathtt{delete\_frame}; \\
                       & \mathtt{save\ return} \\
\end{align*}

\begin{displaymath}
\mathtt{CALL}\ id \mapsto \mathtt{set\_params}; \mathtt{CALL}\ id; \mathtt{fetch\_result}
\end{displaymath}

\section{The ERTL to LTL translation}
\label{sect.ertl.ltl.translation}

\section{The LTL to LIN translation}
\label{sect.ltl.lin.translation}

We require a map, $\sigma$, from LTL statuses, where program counters are represented as labels in a graph data structure, to LIN statuses, where program counters are natural numbers:
\begin{displaymath}
\mathtt{pc : label} \stackrel{\sigma}{\longrightarrow} \mathbb{N}
\end{displaymath}

The LTL to LIN translation pass also linearises the graph data structure into a list of instructions.
Pseudocode for the linearisation process is as follows:

\begin{lstlisting}
let rec linearise graph visited required generated todo :=
  match todo with
  | l::todo ->
    if l $\in$ visited then
      let generated := generated $\cup\ \{$ Goto l $\}$ in
      let required := required $\cup\ \{$ l $\}$ in
        linearise graph visited required generated todo
    else
      -- Get the instruction at label l in the graph
      let lookup := graph(l) in
      let generated := generated $\cup\ \{$ lookup $\}$ in
      -- Mark l as visited, so a second occurrence emits a Goto
      let visited := visited $\cup\ \{$ l $\}$ in
      -- Find the successor of the instruction at label l in the graph
      let successor := succ(l, graph) in
      let todo := successor::todo in
        linearise graph visited required generated todo
  | []      -> (required, generated)
\end{lstlisting}

It is easy to see that this linearisation process eventually terminates.
In particular, the size of the visited label set is monotonically increasing, and is bounded above by the size of the graph that we are linearising.

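The linearisation process can be rendered as executable OCaml on a toy instruction type (all names are illustrative, not the LTL definitions); each label is marked visited when its instruction is emitted, and revisiting a label emits a \texttt{Goto} and records the label as required.

```ocaml
(* A toy graph: each label maps to an instruction carrying a payload
   and an optional successor label.  Accumulators are lists, read in
   reverse at the end. *)
type instr = Op of string * int option
type lin = Emit of string | Goto of int

let rec linearise graph visited required generated todo =
  match todo with
  | [] -> (List.rev required, List.rev generated)
  | l :: todo ->
    if List.mem l visited then
      (* already emitted: close the path with a Goto *)
      linearise graph visited (l :: required) (Goto l :: generated) todo
    else
      let (Op (name, succ)) = List.assoc l graph in
      let generated = Emit name :: generated in
      let todo = match succ with Some l' -> l' :: todo | None -> todo in
      linearise graph (l :: visited) required generated todo
```

On a two-node cycle, for instance, the loop is closed by a single \texttt{Goto} back to the entry label, which also becomes a required label.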
The initial call to \texttt{linearise} has the \texttt{visited}, \texttt{required} and \texttt{generated} sets initialised to the empty set, and \texttt{todo} initialised with the singleton list consisting of the entry point of the graph.
We envisage needing to prove the following invariants on the linearisation function above:

\begin{enumerate}
\item
$\mathtt{visited} \approx \mathtt{generated}$, where $\approx$ is \emph{multiset} equality, as \texttt{generated} is a set of instructions where instructions may mention labels multiple times, and \texttt{visited} is a set of labels,
\item
$\forall \mathtt{l} \in \mathtt{generated}.\ \mathtt{succ(l,\ graph)} \subseteq \mathtt{required} \cup \mathtt{todo}$,
\item
$\mathtt{required} \subseteq \mathtt{visited}$,
\item
$\mathtt{visited} \cap \mathtt{todo} = \emptyset$.
\end{enumerate}

The invariants collectively imply the following properties, crucial to correctness, about the linearisation process:

\begin{enumerate}
\item
Every graph node is visited at most once,
\item
Every instruction that is generated is generated due to some graph node being visited,
\item
The successor instruction of every instruction that has been visited already will eventually be visited too.
\end{enumerate}

Note, because the LTL to LIN transformation is the first time the program is linearised, we must discover a notion of `well-formed program' suitable for linearised forms.
In particular, we see the notion of well-formedness (yet to be formally defined) resting on the following conditions:

\begin{enumerate}
\item
For every jump to a label in a linearised program, the target label exists at some point in the program,
\item
Each label is unique, appearing only once in the program,
\item
The final instruction of a program must be a return.
\end{enumerate}

We assume that these properties will be easy consequences of the invariants on the linearisation function defined above.

The final condition above is potentially a little opaque, so we explain further.
First, the only instructions that can reasonably appear in final position at the end of a program are returns or backward jumps, as any other instruction would cause execution to `fall out' of the end of the program (for example, when a function invoked with \texttt{CALL} returns, it returns to the next instruction past the \texttt{CALL} that invoked it).
However, in LIN, though each function's graph has been linearised, the entire program is not yet fully linearised into a list of instructions, but rather is a list of `functions', each consisting of a linearised body along with other data.
Each well-formed function must end with a \texttt{RET} instruction, and therefore the only correct instruction that can terminate a LIN program is a \texttt{RET} instruction.

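The three well-formedness conditions can be sketched as a checker over a toy linearised body (illustrative instruction type, not the LIN definitions):

```ocaml
(* Sketch of the well-formedness conditions on a linearised body:
   1. every jump targets a label occurring in the body,
   2. labels are unique,
   3. the final instruction is a return. *)
type instr = Label of string | Jump of string | Ret | Op of string

let labels body =
  List.filter_map (function Label l -> Some l | _ -> None) body

let rec distinct = function
  | [] -> true
  | l :: rest -> not (List.mem l rest) && distinct rest

let well_formed body =
  let ls = labels body in
  List.for_all (function Jump l -> List.mem l ls | _ -> true) body
  && distinct ls
  && (match List.rev body with Ret :: _ -> true | _ -> false)
```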
\section{The LIN to ASM and ASM to MCS-51 machine code translations}
\label{sect.lin.asm.translation}

The LIN to ASM translation step is trivial, being almost the identity function.
The only non-trivial feature of the LIN to ASM translation is that all labels are `named apart' so that there is no chance of freshly generated labels from one namespace clashing with labels from another namespace.

The ASM to MCS-51 machine code translation step, and the required statements of correctness, are found in an unpublished manuscript attached to this document.

\end{document}