% source: Papers/itp-2013/ccexec.tex @ 2614 (checked in by sacerdot)
% NB: might be worth removing this if changing class in favour of
% a built-in definition.
  {keywords={Definition,Lemma,Theorem,Remark,Qed,Save,Inductive,Record},
   morekeywords={[2]if,then,else,forall,Prop,match,with,end,let},
  }
morekeywords={int8_t}
        keywordstyle=\color{red}\bfseries,
        keywordstyle=[2]\color{blue},
        commentstyle=\color{green},
        stringstyle=\color{blue},
        showspaces=false,showstringspaces=false,
        xleftmargin=1em}
\title{Certification of the Preservation of Structure by a Compiler's Back-end Pass\thanks{The project CerCo acknowledges the financial support of the Future and
Emerging Technologies (FET) programme within the Seventh Framework
Programme for Research of the European Commission, under FET-Open grant
number: 243881}}
\author{Paolo Tranquilli \and Claudio Sacerdoti Coen}
\institute{Department of Computer Science and Engineering, University of Bologna,\\\email{}, \email{}}
The labelling approach is a technique to lift cost models for non-functional
properties of programs from the object code to the source code. It is based
on the preservation of the structure of the high level program in every
intermediate language used by the compiler. Such structure is captured by
observables that are added to the semantics and that need to be preserved
by the forward simulation proof of correctness of the compiler. Additional
special observables are required for function calls. In this paper we
present a generic forward simulation proof that preserves all these observables.
The proof statement is based on a new mechanized semantics that traces the
structure of execution when the language is unstructured. The generic semantics
and simulation proof have been mechanized in the interactive theorem prover
Matita.
The \emph{labelling approach} has been introduced in~\cite{easylabelling} as
a technique to \emph{lift} cost models for non-functional properties of programs
from the object code to the source code. Examples of non-functional properties
are execution time, amount of stack/heap space consumed and energy required for
communication. The basic idea of the approach is that it is impossible to
provide a \emph{uniform} cost model for a high level language that is preserved
\emph{precisely} by a compiler. For instance, two instances of an assignment
$x = y$ in the source code can be compiled very differently according to the
place (registers vs stack) where $x$ and $y$ are stored at the moment of
execution. Therefore a precise cost model must assign a different cost
to every occurrence, and the exact cost can only be known after compilation.
According to the labelling approach, the compiler is free to compile and optimize
the source code without any major restriction, but it must keep track
of what happens to basic blocks during the compilation. The cost model is
then computed on the object code: it assigns a cost to every basic block.
Finally, the compiler propagates the cost model back to the source level,
assigning a cost to each basic block of the source code.
Implementing the labelling approach in a certified compiler
allows one to reason formally on the high level source code of a program to prove
non-functional properties that are guaranteed to be preserved by the compiler
itself. The trusted code base is then reduced to 1) the interactive theorem
prover (or its kernel) used in the certification of the compiler and
2) the software used to certify the property on the source language, which
can itself be certified, further reducing the trusted code base.
In~\cite{easylabelling} the authors provide an example of a simple
certified compiler that implements the labelling approach for the
imperative \texttt{While} language~\cite{while}, which has neither
pointers nor function calls.
The labelling approach has been shown to scale to more interesting scenarios.
In particular, in~\cite{functionallabelling} it has been applied to a functional
language, and in~\cite{loopoptimizations} it has been shown that the approach
can be slightly complicated to handle loop optimizations and, more generally,
program optimizations that do not preserve the structure of basic blocks.
On-going work also shows that the labelling approach is compatible with
the complex analyses required to obtain a cost model for object code
on processors that implement advanced features like pipelining, superscalar
architectures and caches.
In the European Project CerCo (Certified Complexity~\footnote{\url{}})~\cite{cerco} we are certifying a labelling approach based compiler for a large subset of C to
8051 object code. The compiler is
moderately optimizing and implements a compilation chain that is largely
inspired by that of CompCert~\cite{compcert1,compcert2}. Compared to the work done in~\cite{easylabelling}, the main novelty and source of difficulties is the presence
of function calls. Surprisingly, the addition of function calls requires a
revisitation of the proof technique given in~\cite{easylabelling}. In
particular, at the core of the labelling approach there is a forward
simulation proof that, in the case of \texttt{While}, is only minimally
more complex than the proof required for the preservation of the
functional properties alone. In the case of a programming language with
function calls, instead, it turns out that the forward simulation proof for
the back-end languages must guarantee a whole new set of invariants.
In this paper we present a formalization, in the Matita interactive theorem
prover, of a generic version of the simulation proof required for unstructured
languages. All back-end languages of the CerCo compiler are unstructured
languages, so the proof covers half of the correctness of the compiler.
The statement of the generic proof is based on a new semantics
for imperative unstructured languages that is based on \emph{structured
traces} and that restores the preservation of structure in the observables of
the semantics. The generic proof allows us to almost completely separate the
part of the simulation that deals with functional properties only from the
part that deals with the preservation of structure.
The plan of this paper is the following. In Section~\ref{labelling} we
sketch the labelling method and the problems arising from its application
to languages with function calls. In Section~\ref{semantics} we introduce
a generic description of an unstructured imperative language and the
corresponding structured traces (the novel semantics). In
Section~\ref{simulation} we describe the forward simulation proof.
Conclusions and future work are in Section~\ref{conclusions}.
\section{The labelling approach}
We briefly sketch here a simplified version of the labelling approach as
introduced in~\cite{easylabelling}. The simplification strengthens the
sufficient conditions given in~\cite{easylabelling} to allow a simpler
explanation. The simplified conditions given here are also used in the
CerCo compiler to simplify the proof.
Let $\mathcal{P}$ be a programming language whose semantics is given in
terms of observables: a run of a program yields a finite or infinite
stream of observables. We also assume for the time being that function
calls are not available in $\mathcal{P}$. We want to associate a cost
model to a program $P$ written in $\mathcal{P}$. The first step is to
extend the syntax of $\mathcal{P}$ with a new construct $\texttt{emit L}$
where $L$ is a label distinct from all observables of $\mathcal{P}$.
The semantics of $\texttt{emit L}$ is the emission of the observable
\texttt{L}, which is meant to signal the beginning of a basic block.
There exists an automatic procedure that injects into the program $P$ an
$\texttt{emit L}$ at the beginning of each basic block, using a fresh
\texttt{L} for each block. In particular, the bodies of loops, both branches
of \texttt{if-then-else}s and the targets of \texttt{goto}s must all start
with an emission statement.
Let now $C$ be a compiler from $\mathcal{P}$ to the object code $\mathcal{M}$,
organized in passes. Let $\mathcal{Q}_i$ be the $i$-th intermediate
language used by the compiler. We can easily extend every
intermediate language (and its semantics) with an $\texttt{emit L}$ statement
as we did for $\mathcal{P}$. The same is possible for $\mathcal{M}$ too, with
the additional difficulty that the syntax of object code is given as a
sequence of bytes. The injection of an emission statement in the object code
can be done using a map that associates the statement to a pair of
consecutive code addresses. The intended semantics is that, if $(pc_1,pc_2) \mapsto \texttt{emit L}$, then the observable \texttt{L} is emitted after the execution of the
instruction stored at $pc_1$ and before the execution of the instruction
stored at $pc_2$. The two program counters are necessary because the
instruction stored at $pc_1$ can have multiple possible successors (e.g.
in case of a conditional branch or an indirect call). Dually, the instruction
stored at $pc_2$ can have multiple possible predecessors (e.g. if it is the
target of a jump).
The compiler, to be functionally correct, must preserve observational
equivalence, i.e. executing the program after each compiler pass should
yield the same stream of observables. After the injection of emission
statements, observables now capture both functional and non-functional
properties of the program.
This correctness property is called in the literature a forward simulation
and is sufficient for correctness when the target language is deterministic.
We also require a stronger, non-functional preservation property: after each
pass all basic blocks must start with an emission statement, and all labels
\texttt{L} must be unique.
Now let $M$ be the object code obtained for the program $P$. Let us suppose
that we can statically inspect the code $M$ and associate to each basic block
a cost (e.g. the number of clock cycles required to execute all instructions
in the basic block, or an upper bound on that time). Every basic block is
labelled with a unique label \texttt{L}, thus we can actually associate the
cost to \texttt{L}. Let us call it $k(\texttt{L})$.
The function $k$ is defined as the cost model for the object code control
blocks. It can equally well be used as the cost model for the source
control blocks. Indeed, if the semantics of $P$ is the stream
$L_1 L_2 \ldots$, then, because of forward simulation, the semantics of $M$ is
also $L_1 L_2 \ldots$ and its actual execution cost is $\Sigma_i k(L_i)$, because
every instruction belongs to a control block and every control block is
labelled. Thus it is correct to say that the execution cost of $P$ is also
$\Sigma_i k(L_i)$. In other words, we have obtained a cost model $k$ for
the blocks of the high level program $P$ that is preserved by compilation.
How can the user profit from the high level cost model? Suppose, for instance,
that he wants to prove that the WCET of his program is bounded by $c$. It
is sufficient for him to prove that $\Sigma_i k(L_i) \leq c$, which is now
a purely functional property of the code. He can therefore use any technique
available to certify functional properties of the source code.
What is suggested in~\cite{easylabelling} is to actually instrument the
source code $P$ by replacing every label emission statement
$\texttt{emit L}$ with the instruction $\texttt{cost += k(L)}$, which increments
a global fresh variable \texttt{cost}. The bound is now proved by establishing
the program invariant $\texttt{cost} \leq c$, which can be done for example
using the Frama-C~\cite{framaC} suite if the source code is some variant of
C.
\subsection{Labelling function calls}
We now want to extend the labelling approach to support function calls.
On the high level, \emph{structured} programming language $\mathcal{P}$ there
is not much to change.
When a function is invoked, the current basic block is temporarily exited
and the basic block the function starts with takes control. When the function
returns, the execution of the original basic block is resumed. Thus the only
significant change is that basic blocks can now be nested. Let \texttt{E}
be the label of the external block and \texttt{I} the label of a nested one.
Since the external block starts before the internal one, the semantics observed will be
\texttt{E I} and the cost associated to it on the source language will be
$k(\texttt{E}) + k(\texttt{I})$, i.e. the cost of executing all instructions
in the block \texttt{E} first plus the cost of executing all the instructions in
the block \texttt{I}. However, we know that some instructions in \texttt{E} are
executed after the last instruction in \texttt{I}. This is actually irrelevant
because we are here assuming that costs are additive, so that we can freely
permute them\footnote{The additivity assumption fails on modern processors that have stateful subsystems, like caches and pipelines. The extension of the labelling approach to those systems is therefore non-trivial and under development in the CerCo project.}. Note that, in the present discussion, we are assuming that
the function call terminates and yields control back to the basic block
\texttt{E}. If the call diverges, the instrumentation
$\texttt{cost += k(E)}$ executed at the beginning of \texttt{E} is still valid,
but only as an upper bound on the real execution cost: only precision is lost.
Let us now consider what happens when we move down the compilation chain to an
unstructured intermediate or final language. Here unstructured means that
the only control operators are conditional and unconditional jumps, function
calls and returns. Unlike a structured language, though, there is no guarantee
that a function will return control just after the function call point.
The semantics of the return statement, indeed, consists in fetching the
return address from some internal structure (typically the control stack) and
jumping directly to it. The code can freely manipulate the control stack to
make the procedure return to whatever position. Indeed, it is even possible
to break the well nesting of function calls/returns.
Is it the case that the code produced by a correct compiler must respect the
additional property that every function returns just after its function call
point? The answer is negative and the property is not implied by forward
simulation proofs. For instance, imagine modifying a correct compiler pass
by systematically adding one to the return address on the stack and by
putting a \texttt{NOP} (or any other instruction that takes one byte) after
every function call. The code obtained will be functionally indistinguishable,
and the added instructions will all be dead code.
This lack of structure in the semantics badly interferes with the labelling
approach. The reason is the following: when a basic block labelled with
\texttt{E} contains a function call, it no longer makes any sense to associate
to the label \texttt{E} the sum of the costs of all the instructions in the block.
Indeed, there is no guarantee that the function will return into the block and
that the instructions that will be executed after the return will be the ones
we are paying for in the cost model.
How can we make the labelling approach work in this scenario? We see only two
possible ways. The first one consists in injecting an emission statement after
every function call: basic blocks no longer contain function calls, but are now
terminated by them. This completely solves the problem and allows the compiler
to break the structure of function calls/returns at will. However, the
technique has several drawbacks. First of all, it greatly increases the number
of cost labels that are injected in the source code and that become
instrumentation statements. Thus, when reasoning on the source code to prove
non-functional properties, the user (or the automation tool) will have to handle
larger expressions. Second, the more labels are emitted, the more difficult it
becomes to implement powerful optimizations respecting the code structure.
Indeed, function calls are usually implemented in such a way that most registers
are preserved by the call, so that the static analysis of the block is not
interrupted by the call and an optimization can involve both the code before
and after the function call. Third, instrumenting the source code may require
unpleasant modifications of it. Take, for example, the code
\texttt{f(g(x));}. We need to inject an emission statement/instrumentation
instruction just after the execution of \texttt{g}. The only way to do that
is to rewrite the code as \texttt{y = g(x); emit L$_1$; f(y);} for some fresh
variable \texttt{y}. It is pretty clear how in certain situations the code
obtained would be more obfuscated and thus more difficult to reason on manually.
For the previous reasons, in this paper and in the CerCo project we adopt a
different approach. We do not inject emission statements after
function calls. However, we want to propagate a strong additional invariant in
the forward simulation proof. The invariant is the propagation of the structure
of the original high level code, even if the target language is unstructured.
The structure we want to propagate, which will become clearer in the next
section, comprises 1) the property that every function should return just after
the function call point, which in turn implies well nesting of function calls;
2) the property that every basic block starts with a code emission statement.
In the original labelling approach of~\cite{easylabelling}, the second property
was guaranteed statically (syntactically) as a property of the generated code.
In our revised approach, instead, we will guarantee the property dynamically:
it will be possible to generate code that does not respect it, as long as
all execution traces do respect it. For instance, dead code will no longer
be required to have all basic blocks correctly labelled. The switch is suggested
by the fact that the first of the two properties --- the one related to
function calls/returns --- can only be defined as a dynamic property, i.e.
as a property of execution traces, not of the static code. The switch is
beneficial to the proof because the original proof was made of two parts:
the forward simulation proof and the proof that the static property holds.
In our revised approach the latter disappears and only the forward simulation
is kept.
In order to capture the structure of the semantics so that it is preserved
by a forward simulation argument, we need to make the structure observable
in the semantics. This is the topic of the next section.
\section{Structured traces}
The program semantics adopted in the traditional labelling approach is based
on labelled deductive systems. Given a set of observables $\mathcal{O}$ and
a set of states $\mathcal{S}$, the semantics of one deterministic execution
step is
defined as a function $S \to S \times O^*$ where $O^*$ is a (finite) stream of
observables. The semantics is then lifted compositionally to multiple (finite
or infinite) execution steps.
Finally, the semantics of a whole program execution is obtained by forgetting
about the final state (if any), yielding a function $S \to O^*$ that given an
initial state returns the finite or infinite stream of observables in output.
We present here a new definition of semantics where the structure of execution,
as defined in the previous section, is now observable. The idea is to replace
the stream of observables with a structured data type that makes
function calls and returns explicit and that guarantees some additional invariants by
construction. The data structure, called \emph{structured traces}, is
defined inductively for terminating programs and coinductively for diverging
ones. In the paper we focus only on the inductive structure, i.e. we assume
that all programs that are given a semantics are total. The Matita formalization
also shows the coinductive definitions. The semantics of a program is then
defined as a function that maps an initial state into a structured trace.
In order to have a definition that works on multiple intermediate languages,
we abstract the type of structured traces over an abstract data type of
abstract statuses:
record abstract_status := {
   S: Type[0]
;  as_execute: S -> S -> Prop
;  as_classify: S -> classification
;  as_costed: S -> Prop
;  as_label: \forall s. as_costed S s -> label
;  as_call_ident: \forall s. as_classify S s = cl_call -> label
;  as_after_return:
    (Σs:as_status. as_classify s = Some ? cl_call) → as_status → Prop
The predicate $\texttt{as\_execute}~s_1~s_2$ holds if $s_1$ evolves into
$s_2$ in one step; $\texttt{as\_classify}~s~c$ holds if the next instruction
to be executed in $s$ is classified according to $c \in
\{\texttt{return,jump,call,other}\}$ (we omit tailcalls for simplicity);
the predicate $\texttt{as\_costed}~s$ holds if the next instruction to be
executed in $s$ is a cost emission statement (also classified
as \texttt{other}); finally $(\texttt{as\_after\_return}~s_1~s_2)$ holds
if the next instruction to be executed in $s_2$ follows the function call
to be executed in (the witness of the $\Sigma$-type) $s_1$.
The inductive type for structured traces is actually made of three mutually
inductive types with the following semantics:
 \item $(\texttt{trace\_label\_return}~s_1~s_2)$ is a trace that begins in
   the state $s_1$ (included) and ends just before the state $s_2$ (excluded)
   such that the instruction to be executed in $s_1$ is a label emission
   statement and the one to be executed in the state before $s_2$ is a return
   statement. Thus $s_2$ is the state after the return. The trace
   may contain other label emission statements. It captures the structure of
   the execution of function bodies: they must start with a cost emission
   statement and must end with a return; they are obtained by concatenating
   one or more basic blocks, all starting with a label emission
   (e.g. in case of loops).
 \item $(\texttt{trace\_any\_label}~b~s_1~s_2)$ is a trace that begins in
   the state $s_1$ (included) and ends just before the state $s_2$ (excluded)
   such that the instruction to be executed in $s_2$/in the state before
   $s_2$ is either a label emission statement
   or a return, according to the boolean $b$. It must not contain
   any label emission statement. It captures the notion of a suffix of a
   basic block.
 \item $(\texttt{trace\_label\_label}~b~s_1~s_2)$ is the special case of
   $(\texttt{trace\_any\_label}~b~s_1~s_2)$ such that the instruction to be
   executed in $s_1$ is a label emission statement. It captures the notion of
   a basic block.
 {\texttt{trace\_label\_label}~true~s_1~s_2}
 {\texttt{trace\_label\_return}~s_1~s_2}

 {\texttt{trace\_label\_label}~false~s_1~s_2 \andalso
  \texttt{trace\_label\_return}~s_2~s_3
 }
 {\texttt{trace\_label\_return}~s_1~s_3}

 {\texttt{trace\_any\_label}~b~s_1~s_2 \andalso
  \texttt{as\_costed}~s_1
 }
 {\texttt{trace\_label\_label}~b~s_1~s_2}

 {\texttt{as\_execute}~s_1~s_2 \andalso
  \texttt{as\_classify}~s_1 \in \{\texttt{cl\_jump,cl\_other}\} \andalso
  \texttt{as\_costed}~s_2
 }
 {\texttt{trace\_any\_label}~false~s_1~s_2}

 {\texttt{as\_execute}~s_1~s_2 \andalso
  \texttt{as\_classify}~s_1 = \texttt{cl\_return}
 }
 {\texttt{trace\_any\_label}~true~s_1~s_2}

 {\texttt{as\_execute}~s_1~s_2 \andalso
  \texttt{as\_classify}~s_1 = \texttt{cl\_call} \\
  \texttt{as\_after\_return}~s_1~s_3 \andalso
  \texttt{trace\_label\_return}~s_2~s_3 \andalso
  \texttt{as\_costed}~s_3
 }
 {\texttt{trace\_any\_label}~false~s_1~s_3}

 {\texttt{as\_execute}~s_1~s_2 \andalso
  \texttt{as\_classify}~s_1 = \texttt{cl\_call} \\
  \texttt{as\_after\_return}~s_1~s_3 \andalso
  \texttt{trace\_label\_return}~s_2~s_3 \\
  \lnot \texttt{as\_costed}~s_3 \andalso
  \texttt{trace\_any\_label}~b~s_3~s_4
 }
 {\texttt{trace\_any\_label}~b~s_1~s_4}

 {\texttt{as\_execute}~s_1~s_2 \andalso
  \lnot \texttt{as\_costed}~s_2 \\
  \texttt{trace\_any\_label}~b~s_2~s_3 \andalso
  \texttt{as\_classify}~s_1 = \texttt{cl\_other}
 }
 {\texttt{trace\_any\_label}~b~s_1~s_3}
inductive trace_label_return (S:abstract_status) : S → S → Type[0] ≝
  | tlr_base:
      ∀status_before: S.
      ∀status_after: S.
        trace_label_label S ends_with_ret status_before status_after →
        trace_label_return S status_before status_after
  | tlr_step:
      ∀status_initial: S.
      ∀status_labelled: S.
      ∀status_final: S.
        trace_label_label S doesnt_end_with_ret status_initial status_labelled →
        trace_label_return S status_labelled status_final →
          trace_label_return S status_initial status_final
with trace_label_label: trace_ends_with_ret → S → S → Type[0] ≝
  | tll_base:
      ∀ends_flag: trace_ends_with_ret.
      ∀start_status: S.
      ∀end_status: S.
        trace_any_label S ends_flag start_status end_status →
        as_costed S start_status →
          trace_label_label S ends_flag start_status end_status
with trace_any_label: trace_ends_with_ret → S → S → Type[0] ≝
  (* Single steps within a function which reach a label.
     Note that this is the only case applicable for a jump. *)
  | tal_base_not_return:
      ∀start_status: S.
      ∀final_status: S.
        as_execute S start_status final_status →
        (as_classifier S start_status cl_jump ∨
         as_classifier S start_status cl_other) →
        as_costed S final_status →
          trace_any_label S doesnt_end_with_ret start_status final_status
  | tal_base_return:
      ∀start_status: S.
      ∀final_status: S.
        as_execute S start_status final_status →
        as_classifier S start_status cl_return →
          trace_any_label S ends_with_ret start_status final_status
  (* A call followed by a label on return. *)
  | tal_base_call:
      ∀status_pre_fun_call: S.
      ∀status_start_fun_call: S.
      ∀status_final: S.
        as_execute S status_pre_fun_call status_start_fun_call →
        ∀H:as_classifier S status_pre_fun_call cl_call.
          as_after_return S «status_pre_fun_call, H» status_final →
          trace_label_return S status_start_fun_call status_final →
          as_costed S status_final →
            trace_any_label S doesnt_end_with_ret status_pre_fun_call status_final
  (* A call followed by a non-empty trace. *)
  | tal_step_call:
      ∀end_flag: trace_ends_with_ret.
      ∀status_pre_fun_call: S.
      ∀status_start_fun_call: S.
      ∀status_after_fun_call: S.
      ∀status_final: S.
        as_execute S status_pre_fun_call status_start_fun_call →
        ∀H:as_classifier S status_pre_fun_call cl_call.
          as_after_return S «status_pre_fun_call, H» status_after_fun_call →
          trace_label_return S status_start_fun_call status_after_fun_call →
          ¬ as_costed S status_after_fun_call →
          trace_any_label S end_flag status_after_fun_call status_final →
            trace_any_label S end_flag status_pre_fun_call status_final
  | tal_step_default:
      ∀end_flag: trace_ends_with_ret.
      ∀status_pre: S.
      ∀status_init: S.
      ∀status_end: S.
        as_execute S status_pre status_init →
        trace_any_label S end_flag status_init status_end →
        as_classifier S status_pre cl_other →
        ¬ (as_costed S status_init) →
          trace_any_label S end_flag status_pre status_end.
A \texttt{trace\_label\_return} is isomorphic to a non-empty list of
\texttt{trace\_label\_label}s: all but the last end just before a cost
emission, and the last ends with a return.
The interesting cases are those of $(\texttt{trace\_any\_label}~b~s_1~s_2)$.
A \texttt{trace\_any\_label} is a sequence of steps built by a syntax directed
definition on the classification of $s_1$. The constructors of the datatype
impose several invariants that are meant to impose a structure on the
otherwise unstructured execution. In particular, the following invariants are
guaranteed:
 \item the trace is never empty; it ends with a return iff $b$ is
       true (\texttt{ends\_with\_ret});
 \item a jump must always be the last instruction of the trace, and it must
       be followed by a cost emission statement; i.e. the target of a jump
       is always the beginning of a new basic block and as such it must start
       with a cost emission statement;
 \item a cost emission statement can never occur inside the trace, only in
       the state immediately after it;
 \item the trace for a function call step is made from a subtrace of type
       $(\texttt{trace\_label\_return}~s_1~s_2)$, possibly followed by the
       rest of the trace for this basic block. The subtrace represents the
       function execution. Being an inductive datum, it guarantees totality of
       the function call. The state $s_2$ is the one that follows the return
       statement. The next instruction of $s_2$ must follow the function call
       instruction. As a consequence, function calls are also well nested.
The three mutually structurally recursive functions \texttt{flatten\_trace\_label\_return}, \texttt{flatten\_trace\_label\_label} and \texttt{flatten\_trace\_any\_label}
allow one to extract from a structured trace the list of states whose next
instruction is a cost emission statement. We only show here the type of one
of them:
  (S: abstract_status)
    (start_status: S) (final_status: S)
      (the_trace: trace_label_return S start_status final_status)
        on the_trace: list (as_cost_label S)
\paragraph{Cost prediction on structured traces}
565The first main theorem of CerCo about structured traces
566(theorem \texttt{compute\_max\_trace\_label\_return\_cost\_ok\_with\_trace})
567holds for the
569of the structured traces to the concrete status of object code programs.
570Simplifying a bit, it states that
571$\forall s_1,s_2. \forall \tau: \texttt{trace\_label\_return}~s_1~s_2.
572  clock~s_2 = clock~s_1 + \Sigma_{s \in \texttt{flatten\_trace\_label\_return}~\tau} k(L(s))$ where $L$ maps a labelled state to its emitted label, and the
573cost model $k$ is statically computed from the object code
574by associating to each label \texttt{L} the sum of the cost of the instructions
575in the basic block that starts at \texttt{L} and ends before the next labelled
576instruction. The theorem is proved by structural induction over the structured
577trace, and is based on the invariant that
578iff the function that computes the cost model has analyzed the instruction
579to be executed at $s_2$ after the one to be executed at $s_1$, and if
580the structured trace starts with $s_1$, then eventually it will contain also
581$s_2$. When $s_1$ is not a function call, the result holds trivially because
582of the $(\texttt{as\_execute}~s_1~s_2)$ condition obtained by inversion on
583the trace. The only non
584trivial case is the one of function calls: the cost model computation function
585does recursion on the first instruction that follows that function call; the
586\texttt{as\_after\_return} condition of the \texttt{tal\_base\_call} and
587\texttt{tal\_step\_call} grants exactly that the execution will eventually reach
588this state.
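As an illustration of the cost-prediction statement (a simplified sketch, not part of the Matita formalization), the following Python fragment models straight-line object code as a list of hypothetical (label, cost) instructions, computes the static cost model $k$ by summing the costs of each basic block, and checks that the clock of a run equals the sum of $k$ over the emitted labels:

```python
# Hypothetical instruction list: (label emitted or None, concrete cost).
# We assume the program starts with a labelled instruction, as required by
# the structural invariants (every basic block starts with a cost emission).
program = [("L1", 2), (None, 3), (None, 1), ("L2", 4), (None, 2)]

def static_cost_model(prog):
    """Associate to each label the summed cost of the basic block that
    starts at it and ends before the next labelled instruction."""
    k, current = {}, None
    for label, cost in prog:
        if label is not None:
            current = label
            k[current] = 0
        k[current] += cost
    return k

def execute(prog):
    """Return (final clock, list of emitted labels) of a straight-line run."""
    clock, emitted = 0, []
    for label, cost in prog:
        if label is not None:
            emitted.append(label)
        clock += cost
    return clock, emitted

k = static_cost_model(program)
clock, emitted = execute(program)
# The prediction of the theorem: the clock difference equals the sum of the
# statically computed costs of the labels emitted along the run.
assert clock == sum(k[label] for label in emitted)
```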
\paragraph{Structured traces similarity and cost prediction invariance}

A compiler pass maps source to object code and initial states to initial
states. The source code and initial state uniquely determine the structured
trace of a program, if it exists. The structured trace fails to exist iff
the structural conditions are violated by the program execution (e.g. a function
body does not start with a cost emission statement). Let us assume that the
target structured trace exists.
What is the relation between the source and target structured traces?
In general, the two traces can be arbitrarily different. However, we are
interested only in those compiler passes that map a trace $\tau_1$ to a trace
$\tau_2$ such that
\begin{displaymath}(\texttt{flatten\_trace\_label\_return}~\tau_1 = \texttt{flatten\_trace\_label\_return}~\tau_2)\label{condition1}\end{displaymath}
The reason is that when this happens we obviously have the
following corollary to the previous theorem: for all
$\tau_1: \texttt{trace\_label\_return}~s_1~s_1'$ and
$\tau_2: \texttt{trace\_label\_return}~s_2~s_2'$ satisfying the condition above,
$$clock~s_1' - clock~s_1 = \Sigma_{s \in \texttt{flatten\_trace\_label\_return}~\tau_1} k(L(s)) = \Sigma_{s \in \texttt{flatten\_trace\_label\_return}~\tau_2} k(L(s)) = clock~s_2' - clock~s_2$$
The corollary states that the actual execution time of the program can be computed equally well on the source or target language. Thus it becomes possible to
transfer the cost model from the target to the source code and reason on the
source code only.
We are therefore interested in conditions stronger than~\ref{condition1}.
To this end we introduce a similarity relation between traces that captures
the fact that two traces have the same structure and that
implies~\ref{condition1} (theorem \texttt{tlr\_rel\_to\_traces\_same\_flatten}). Two traces are similar when one can be obtained from
the other by erasing or inserting steps classified as \texttt{other} into the
\texttt{trace\_any\_label} blocks of the trace. In particular,
the relation maps function calls to function calls to the same function,
label emission statements to emissions of the same label, concatenation of
subtraces to concatenation of subtraces of the same length and starting with
the same emission statement, etc.
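The intuition can be rendered concretely in a much simplified, hypothetical setting where a trace is a plain list of events (the actual relation is the inductive definition given below): similarity permits erasing or inserting silent \texttt{other} steps, and similar traces flatten to the same label sequence:

```python
# Hypothetical traces as lists of events; "other" steps are silent.
trace1 = ["emit L1", "call f", "emit L2", "other", "ret", "emit L3"]
trace2 = ["other", "emit L1", "call f", "emit L2", "ret", "other", "emit L3"]

def flatten(trace):
    """Keep only the cost-emission events, as flatten_trace_label_return
    keeps only the states about to emit a cost label."""
    return [e for e in trace if e.startswith("emit")]

def similar(t1, t2):
    """Naive rendering of the intuition: the traces coincide once the
    steps classified as 'other' are erased."""
    strip = lambda t: [e for e in t if e != "other"]
    return strip(t1) == strip(t2)

assert similar(trace1, trace2)
# Similarity implies equality of the flattened traces (condition 1).
assert flatten(trace1) == flatten(trace2)
```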
The similarity relation definition is simple, but it involves a large
number of cases. In the formalization, we define
$(\texttt{tlr\_rel}~tlr_1~tlr_2)$ by structural recursion on $tlr_1$ and pattern
matching over $tlr_2$. Here we turn
the definition into inference rules for the sake of readability. We also
omit from trace constructors all arguments but those that are traces or that
are used in the premises of the rules.
 {\texttt{tll\_rel}~tll_1~tll_2
 }
 {\texttt{tlr\_rel}~(\texttt{tlr\_base}~tll_1)~(\texttt{tlr\_base}~tll_2)}
 {\texttt{tll\_rel}~tll_1~tll_2 \andalso
  \texttt{tlr\_rel}~tlr_1~tlr_2
 }
 {\texttt{tlr\_rel}~(\texttt{tlr\_step}~tll_1~tlr_1)~(\texttt{tlr\_step}~tll_2~tlr_2)}
 {\texttt{as\_label}~H_1 = \texttt{as\_label}~H_2 \andalso
  \texttt{tal\_rel}~tal_1~tal_2
 }
 {\texttt{tll\_rel}~(\texttt{tll\_base}~tal_1~H_1)~(\texttt{tll\_base}~tal_2~H_2)}
 {}
 {\texttt{tal\_rel}~\texttt{tal\_base\_not\_return}~(taa @ \texttt{tal\_base\_not\_return})}
 {}
 {\texttt{tal\_rel}~\texttt{tal\_base\_return}~(taa @ \texttt{tal\_base\_return})}
 {\texttt{tlr\_rel}~tlr_1~tlr_2 \andalso
  \texttt{as\_call\_ident}~H_1 = \texttt{as\_call\_ident}~H_2
 }
 {\texttt{tal\_rel}~(\texttt{tal\_base\_call}~H_1~tlr_1)~(taa @ \texttt{tal\_base\_call}~H_2~tlr_2)}
 {\texttt{tlr\_rel}~tlr_1~tlr_2 \andalso
  \texttt{as\_call\_ident}~H_1 = \texttt{as\_call\_ident}~H_2 \andalso
  \texttt{tal\_collapsable}~tal_2
 }
 {\texttt{tal\_rel}~(\texttt{tal\_base\_call}~H_1~tlr_1)~(taa @ \texttt{tal\_step\_call}~H_2~tlr_2~tal_2)}
 {\texttt{tlr\_rel}~tlr_1~tlr_2 \andalso
  \texttt{as\_call\_ident}~H_1 = \texttt{as\_call\_ident}~H_2 \andalso
  \texttt{tal\_collapsable}~tal_1
 }
 {\texttt{tal\_rel}~(\texttt{tal\_step\_call}~H_1~tlr_1~tal_1)~(taa @ \texttt{tal\_base\_call}~H_2~tlr_2)}
 {\texttt{tlr\_rel}~tlr_1~tlr_2 \andalso
  \texttt{tal\_rel}~tal_1~tal_2 \andalso
  \texttt{as\_call\_ident}~H_1 = \texttt{as\_call\_ident}~H_2
 }
 {\texttt{tal\_rel}~(\texttt{tal\_step\_call}~H_1~tlr_1~tal_1)~(taa @ \texttt{tal\_step\_call}~H_2~tlr_2~tal_2)}
 {\texttt{tal\_rel}~tal_1~tal_2
 }
 {\texttt{tal\_rel}~(\texttt{tal\_step\_default}~tal_1)~tal_2}
let rec tlr_rel S1 st1 st1' S2 st2 st2'
  (tlr1 : trace_label_return S1 st1 st1')
  (tlr2 : trace_label_return S2 st2 st2') on tlr1 : Prop ≝
match tlr1 with
  [ tlr_base st1 st1' tll1 ⇒
    match tlr2 with
    [ tlr_base st2 st2' tll2 ⇒ tll_rel … tll1 tll2
    | _ ⇒ False
    ]
  | tlr_step st1 st1' st1'' tll1 tl1 ⇒
    match tlr2 with
    [ tlr_step st2 st2' st2'' tll2 tl2 ⇒
      tll_rel … tll1 tll2 ∧ tlr_rel … tl1 tl2
    | _ ⇒ False
    ]
  ]
and tll_rel S1 fl1 st1 st1' S2 fl2 st2 st2'
 (tll1 : trace_label_label S1 fl1 st1 st1')
 (tll2 : trace_label_label S2 fl2 st2 st2') on tll1 : Prop ≝
  match tll1 with
  [ tll_base fl1 st1 st1' tal1 H ⇒
    match tll2 with
    [ tll_base fl2 st2 st2' tal2 G ⇒
      as_label_safe … («?, H») = as_label_safe … («?, G») ∧
      tal_rel … tal1 tal2
    ]
  ]
and tal_rel S1 fl1 st1 st1' S2 fl2 st2 st2'
 (tal1 : trace_any_label S1 fl1 st1 st1')
 (tal2 : trace_any_label S2 fl2 st2 st2')
   on tal1 : Prop ≝
  match tal1 with
  [ tal_base_not_return st1 st1' _ _ _ ⇒
    fl2 = doesnt_end_with_ret ∧
    ∃st2mid,taa,H,G,K.
    tal2 ≃ taa_append_tal ? st2 ??? taa
      (tal_base_not_return ? st2mid st2' H G K)
  | tal_base_return st1 st1' _ _ ⇒
    fl2 = ends_with_ret ∧
    ∃st2mid,taa,H,G.
    tal2 ≃ taa_append_tal ? st2 ? st2mid st2' taa
      (tal_base_return ? st2mid st2' H G)
  | tal_base_call st1 st1' st1'' _ prf _ tlr1 _ ⇒
    fl2 = doesnt_end_with_ret ∧
    ∃st2mid,G.as_call_ident S2 («st2mid, G») = as_call_ident ? «st1, prf» ∧
    ∃taa : trace_any_any ? st2 st2mid.∃st2mid',H.
    (* we must allow a tal_base_call to be similar to a call followed
      by a collapsable trace (trace_any_any followed by a base_not_return;
      we cannot use trace_any_any as it disallows labels in the end as soon
      as it is non-empty) *)
    (∃K.∃tlr2 : trace_label_return ? st2mid' st2'.∃L.
      tal2 ≃ taa @ (tal_base_call … H G K tlr2 L) ∧ tlr_rel … tlr1 tlr2) ∨
    ∃st2mid'',K.∃tlr2 : trace_label_return ? st2mid' st2mid''.∃L.
    ∃tl2 : trace_any_label … doesnt_end_with_ret st2mid'' st2'.
      tal2 ≃ taa @ (tal_step_call … H G K tlr2 L tl2) ∧
      tlr_rel … tlr1 tlr2 ∧ tal_collapsable … tl2
  | tal_step_call fl1 st1 st1' st1'' st1''' _ prf _ tlr1 _ tl1 ⇒
    ∃st2mid,G.as_call_ident S2 («st2mid, G») = as_call_ident ? «st1, prf» ∧
    ∃taa : trace_any_any ? st2 st2mid.∃st2mid',H.
    (fl2 = doesnt_end_with_ret ∧ ∃K.∃tlr2 : trace_label_return ? st2mid' st2'.∃L.
      tal2 ≃ taa @ tal_base_call … H G K tlr2 L ∧
      tal_collapsable … tl1 ∧ tlr_rel … tlr1 tlr2) ∨
    ∃st2mid'',K.∃tlr2 : trace_label_return ? st2mid' st2mid''.∃L.
    ∃tl2 : trace_any_label ? fl2 st2mid'' st2'.
      tal2 ≃ taa @ (tal_step_call … H G K tlr2 L tl2) ∧
      tal_rel … tl1 tl2 ∧ tlr_rel … tlr1 tlr2
  | tal_step_default fl1 st1 st1' st1'' _ tl1 _ _ ⇒
    tal_rel … tl1 tal2 (* <- this makes it many to many *)
  ].
In the preceding rules, a $taa$ is an inhabitant of the
$(\texttt{trace\_any\_any}~s_1~s_2)$ inductive data type, whose definition
we do not give in the paper for lack of space. It is the type of valid
prefixes (even empty ones) of \texttt{trace\_any\_label}s that do not contain
any function call. Therefore it
is possible to concatenate (using ``$@$'') a \texttt{trace\_any\_any} to the
left of a \texttt{trace\_any\_label}. The intuition is that two traces are
similar if one of them can perform more silent (non-observable) moves,
whose sequences are here represented by \texttt{trace\_any\_any}.

The \texttt{tal\_collapsable} unary predicate over \texttt{trace\_any\_label}s
holds when the argument does not contain any function call and ends
with a label emission (not a return). The intuition is that after a function call we
can still perform a sequence of silent actions while remaining similar.
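In a simplified setting where a trace is a plain list of events, this predicate can be sketched as follows (an illustrative rendering of the intuition, not the Matita definition):

```python
def collapsable(trace):
    """A trace is 'collapsable' when it contains no function call and ends
    with a label emission rather than a return: it consists of silent steps
    and label emissions only, so it may be absorbed after a call."""
    no_calls = all(not e.startswith("call") for e in trace)
    ends_with_label = bool(trace) and trace[-1].startswith("emit")
    return no_calls and ends_with_label

assert collapsable(["other", "emit L1"])       # silent steps then a label
assert not collapsable(["other", "ret"])       # ends with a return
assert not collapsable(["call f", "emit L1"])  # contains a function call
```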
\section{Forward simulation}

We summarize here the results of the previous sections. Each intermediate
unstructured language can be given a semantics based on structured traces,
which singles out those runs that respect a certain number of invariants.
A cost model can be computed on the object code and it can be used to predict
the execution costs of runs that produce structured traces. The cost model
can be lifted from the target to the source code of a pass if the pass maps
structured traces to similar structured traces. The latter property is called
a \emph{forward simulation}.
As for labelled transition systems, in order to establish the forward
simulation we are interested in (preservation of observables), we are
forced to prove a stronger notion of forward simulation that introduces
an explicit relation between states. The classical notion of a 1-to-0-or-many
forward simulation is the existence of a relation $R$ over states such that
if $s_1 R s_2$ and $s_1 \to^1 s_1'$ then there exists an $s_2'$ such that
$s_2 \to^* s_2'$ and $s_1' R s_2'$. In our context, we need to replace the
one and multi step transition relations $\to^n$ with the existence of
a structured trace between the two states, and we need to add the requirement that
the two structured traces are similar. Thus what we would like to state is
something like:\\
for all $s_1,s_2,s_1'$ such that there is a $\tau_1$ from
$s_1$ to $s_1'$ and $s_1 R s_2$, there exist an $s_2'$ such that
$s_1' R s_2'$ and a $\tau_2$ from $s_2$ to $s_2'$ such that
$\tau_1 \approx \tau_2$.\\
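For comparison, the classical 1-to-0-or-many condition can be checked mechanically on finite transition systems. The following sketch, with hypothetical toy systems and a candidate relation, only illustrates the shape of the classical condition before the structured-trace refinement:

```python
# Toy source and target transition systems: each maps a state to its successors.
step1 = {"a": ["b"], "b": []}               # source: a -> b
step2 = {"x": ["y"], "y": ["z"], "z": []}   # target: x -> y -> z
R = {("a", "x"), ("b", "z")}                # candidate simulation relation

def reachable(step, s):
    """States reachable from s in zero or more steps."""
    seen, todo = set(), [s]
    while todo:
        u = todo.pop()
        if u not in seen:
            seen.add(u)
            todo.extend(step[u])
    return seen

def is_forward_simulation(step1, step2, R):
    """s1 R s2 and s1 ->1 s1' must imply some s2' with s2 ->* s2' and s1' R s2'."""
    return all(
        any((s1p, s2p) in R for s2p in reachable(step2, s2))
        for (s1, s2) in R
        for s1p in step1[s1]
    )

assert is_forward_simulation(step1, step2, R)
```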
The statement just introduced, however, is too simplistic and not provable
in the general case. To understand why, consider the case of a function call
and the pass that fixes the parameter passing conventions. A function
call in the source code takes as input an arbitrary number of pseudo-registers (the actual parameters to pass) and returns an arbitrary number of pseudo-registers (where the result is stored). A function call in the target code has no
input nor output parameters. The pass must add explicit code before and after
the function call to move the pseudo-registers content from/to the hardware
registers or the stack in order to implement the parameter passing strategy.
Similarly, each function body must be augmented with a preamble and a postamble
to complete/initiate the parameter passing strategy for the call/return phase.
Therefore what used to be a call followed by the next instruction to execute
after the function return now becomes a sequence of instructions, followed by
a call, followed by another sequence. The two states at the beginning of the
first sequence and at the end of the second sequence are in relation with
the status before/after the call in the source code, as in a usual forward
simulation. However, how can we prove the additional condition for function calls,
which asks that when the function returns the instruction immediately after the
function call is executed? To ensure this invariant, there must be another relation
between the addresses of the function call in the source and in the target code.
This additional relation is to be used in particular to relate the two stacks.
Another example is given by preservation of code emission statements. A single
code emission instruction can be simulated by a sequence of steps, followed
by a code emission, followed by another sequence. Clearly the initial and final
statuses of the sequence are to be in relation with the status before/after the
code emission in the source code. In order to preserve the structured traces
invariants, however, we must consider a second relation between states that
traces the preservation of the code emission statement.
Therefore we now introduce an abstract notion of relation set between abstract
statuses and an abstract notion of 1-to-0-or-many forward simulation. These
two definitions enjoy the following remarkable properties:
 \item they are generic enough to accommodate all passes of the CerCo compiler
 \item they allow us to lift the 1-to-0-or-many forward simulation to the
   structured traces preserving forward simulation we are interested in
The latter lifting is proved by the following theorem, which has the
1-to-0-or-many forward simulation statement as a hypothesis. Combined with
the results in the previous section, it has as a corollary that the cost
model can be computed on the object code and lifted to the source code to
reason on non-functional properties iff a simpler 1-to-0-or-many forward
simulation is proved. The statement of the latter is more complex than a
traditional 1-to-0-or-many forward simulation statement, but proofs now have
about the same complexity. Indeed, it should be possible to largely reuse an
existing traditional forward simulation proof. The lifting theorem below,
proved once and for all in this paper, is the one that does all the work to
put the labelling approach in motion.
\begin{condition}[Cases \texttt{cl\_other} and \texttt{cl\_jump}]
 For all $s_1,s_1',s_2$ such that $s_1 \mathcal{S} s_2$, and
 $\texttt{as\_execute}~s_1~s_1'$, and $\texttt{as\_classify}~s_1 = \texttt{cl\_other}$ or $\texttt{as\_classify}~s_1 = \texttt{cl\_jump}$ and
$(\texttt{as\_costed}~s_1')$, there exist an $s_2'$ and a $taaf$ of type $(\texttt{trace\_any\_any\_free}~s_2~s_2')$ such that $s_1' (\mathcal{S} \cap \mathcal{L}) s_2'$ and either
$taaf$ is non-empty, or at least one among $s_1$ and $s_1'$ is not \texttt{as\_costed}.
In the above condition, a $(\texttt{trace\_any\_any\_free}~s_1~s_2)$ is an
inductive type of structured traces that do not contain function calls or
cost emission statements. Differently from a \texttt{trace\_any\_any}, the
instruction to be executed in the lookahead state $s_2$ may be a cost emission
statement.

The intuition of the condition is that one step can be replaced with zero or more steps if it
preserves the relation between the data and if the two final statuses are
labelled in the same way. Moreover, we must take special care of the empty case
to avoid collapsing two consecutive states that emit the same label into just
one state, missing one of the two emissions.
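This side condition can be sketched as a small boolean check (a hypothetical helper illustrating the intuition, not the Matita statement): an empty simulating trace is acceptable only when at most one of the two source states is costed, so that two consecutive label emissions can never be merged:

```python
def empty_case_ok(taaf_length, s1_costed, s1_next_costed):
    """Side condition of the cl_other/cl_jump cases: a step may be simulated
    by an empty trace only if s1 and s1' are not both costed, i.e. we never
    collapse two consecutive states that both emit a label."""
    return taaf_length > 0 or not (s1_costed and s1_next_costed)

assert empty_case_ok(0, True, False)     # at most one emission: empty trace ok
assert not empty_case_ok(0, True, True)  # two emissions would be collapsed
assert empty_case_ok(2, True, True)      # non-empty trace: always acceptable
```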
definition status_simulation ≝
  λS1 : abstract_status.
  λS2 : abstract_status.
  λsim_status_rel : status_rel S1 S2.
    ∀st1,st1',st2.as_execute S1 st1 st1' →
    sim_status_rel st1 st2 →
    match as_classify … st1 with
    [ None ⇒ True
    | Some cl ⇒
      match cl with
      [ cl_call ⇒ ∀prf.
        (*
             st1' ------------S----------\
              ↑ \                         \
             st1 \--L--\                   \
              | \       \                   \
              S  \-C-\  st2_after_call →taa→ st2'
              |       \     ↑
             st2 →taa→ st2_pre_call
        *)
        ∃st2_pre_call.
        as_call_ident ? st2_pre_call = as_call_ident ? («st1, prf») ∧
        call_rel ?? sim_status_rel «st1, prf» st2_pre_call ∧
        ∃st2_after_call,st2'.
        ∃taa2 : trace_any_any … st2 st2_pre_call.
        ∃taa2' : trace_any_any … st2_after_call st2'.
        as_execute … st2_pre_call st2_after_call ∧
        sim_status_rel st1' st2' ∧
        label_rel … st1' st2_after_call
      | cl_return ⇒
        (*
             st1
            / ↓
           | st1'----------S,L------------\
           S   \                           \
            \   \-----R-------\            |
             \                 |           |
             st2 →taa→ st2_ret |           |
                          ↓   /            |
                     st2_after_ret →taaf→ st2'

           we also ask that st2_after_ret be not labelled if the taaf tail is
           not empty
        *)
        ∃st2_ret,st2_after_ret,st2'.
        ∃taa2 : trace_any_any … st2 st2_ret.
        ∃taa2' : trace_any_any_free … st2_after_ret st2'.
        (if taaf_non_empty … taa2' then ¬as_costed … st2_after_ret else True) ∧
        as_classifier … st2_ret cl_return ∧
        as_execute … st2_ret st2_after_ret ∧ sim_status_rel st1' st2' ∧
        ret_rel … sim_status_rel st1' st2_after_ret ∧
        label_rel … st1' st2'
      | cl_other ⇒
          (*
          st1 → st1'
            |      \
            S      S,L
            |        \
           st2 →taaf→ st2'

           the taaf can be empty (e.g. tunneling) but we ask it must not be the
           case when both st1 and st1' are labelled (we would be able to collapse
           labels otherwise)
         *)
        ∃st2'.
        ∃taa2 : trace_any_any_free … st2 st2'.
        (if taaf_non_empty … taa2 then True else (¬as_costed … st1 ∨ ¬as_costed … st1')) ∧
        sim_status_rel st1' st2' ∧
        label_rel … st1' st2'
      | cl_jump ⇒
        (* just like cl_other, but with one more hypothesis *)
        as_costed … st1' →
        ∃st2'.
        ∃taa2 : trace_any_any_free … st2 st2'.
        (if taaf_non_empty … taa2 then True else (¬as_costed … st1 ∨ ¬as_costed … st1')) ∧
        sim_status_rel st1' st2' ∧
        label_rel … st1' st2'
      ]
    ].
status_simulation_produce_tlr S1 S2 R
(* we start from this situation
     st1 →→→→tlr→→→→ st1'
      | \
      L  \---S--\
      |          \
   st2_lab →taa→ st2   (the taa preamble is in general either empty or given
                        by the preceding call)

   and we produce
     st1 →→→→tlr→→→→ st1'
             \\      /  \
             //     R    \-L,S-\
             \\     |           \
   st2_lab →tlr→ st2_mid →taaf→ st2'
*)
  st1 st1' st2_lab st2
  (tlr1 : trace_label_return S1 st1 st1')
  (taa2_pre : trace_any_any S2 st2_lab st2)
  (sim_execute : status_simulation S1 S2 R)
  on tlr1 : R st1 st2 → label_rel … st1 st2_lab →
  ∃st2_mid.∃st2'.
  ∃tlr2 : trace_label_return S2 st2_lab st2_mid.
  ∃taa2 : trace_any_any_free … st2_mid st2'.
  (if taaf_non_empty … taa2 then ¬as_costed … st2_mid else True) ∧
  R st1' st2' ∧ ret_rel … R st1' st2_mid ∧ label_rel … st1' st2' ∧
  tlr_rel … tlr1 tlr2
\section{Conclusions and future work}

The labelling approach is a technique to implement compilers that induce on
the source code a non-uniform cost model determined from the object code
produced. The cost model assigns a cost to each basic block of the program.
The main theorem of the labelling approach says that there is an exact
correspondence between the sequences of basic blocks started in the source
and object code, and that no instruction in the source or object code is
executed outside a basic block. Thus the overall cost of object code execution
can be computed precisely on the source code.
In this paper we scale the labelling approach to cover a programming language
with function calls. This introduces new difficulties only when the language
is unstructured, i.e. it allows function calls to return anywhere in the code,
destroying the hope of a static prediction of the cost of basic blocks.
We restore static predictability by introducing a new semantics for unstructured
programs that singles out well-structured executions. The latter are represented
by structured traces, a generalization of streams of observables that captures
several structural invariants of the execution, like the well nesting of function
calls or the fact that every basic block must start with a code emission statement.
We show that structured traces are sufficiently structured to statically compute
a precise cost model on the object code.
We introduce a similarity relation on structured traces that must hold between
source and target traces. When the relation holds for every program, we prove
that the cost model can be lifted from the object to the source code.
In order to prove that similarity holds, we present a generic proof of forward
simulation that is aimed at pulling apart as much as possible the part of the
simulation related to non-functional properties (preservation of structure)
from that related to functional properties. In particular, we reduce the
problem of preservation of structure to that of showing a 1-to-0-or-many
forward simulation that only adds a few additional proof obligations to those
of a traditional proof concerned only with functional properties.
All the results presented in the paper are part of a larger certification of a
C compiler which is based on the labelling approach. The certification, done
in Matita, is the main deliverable of the European Project CerCo.
Short-term future work consists in completing the certification of
the CerCo compiler. In particular, the results presented in this paper will
be used to certify all the passes of the back-end of the compiler.
\paragraph{Related work}
CerCo is the first certification project which explicitly tries to induce a
precise cost model on the source code in order to establish non-functional
properties of programs in a high-level language. Traditional certifications
of compilers, like~\cite{compcert2,piton}, only explicitly prove preservation
of the functional properties via forward simulations.
Usually forward simulations take the following form: for each transition
from $s_1$ to $s_2$ in the source code, there exists an equivalent sequence of
transitions in the target code of length $n$. The number $n$ of transition steps
in the target code can just be the witness of the existential statement.
An equivalent alternative, when the proof of simulation is constructive, consists
in providing an explicit function, called \emph{clock function} in the
literature~\cite{clockfunctions}, that computes $n$ from $s_1$. Every clock
function then constitutes a cost model for the source code, in the spirit of
what we are doing in CerCo. However, we believe our solution to be superior
in the following respects: 1) the machinery of the labelling approach is
insensitive to the resource being measured. Indeed, any cost model computed on
the object code can be lifted to the source code (e.g. stack space used,
energy consumed, etc.). On the contrary, clock functions only talk about
the number of transition steps. In order to extend the approach with clock functions
to other resources, additional functions must be introduced. Moreover, the
additional functions would be handled differently in the proof.
2) the cost models induced by the labelling approach have a simple presentation.
In particular, they associate a number to each basic block. More complex
models can be induced when the approach is scaled to cover, for instance,
loop optimizations~\cite{loopoptimizations}, but the costs are still meant to
be easy to understand and manipulate in an interactive theorem prover.
On the contrary, a clock function is a complex function of the state $s_1$
which, as a function, is an opaque object that is difficult to reify as
source code in order to reason about it.