% source: Papers/itp-2013/ccexec.tex @ 2612 (checked in by sacerdot)

% NB: might be worth removing this if changing class in favour of
% a built-in definition.
  {keywords={Definition,Lemma,Theorem,Remark,Qed,Save,Inductive,Record},
   morekeywords={[2]if,then,else,forall,Prop,match,with,end,let},
  }
morekeywords={int8_t}
      keywordstyle=\color{red}\bfseries,
      keywordstyle=[2]\color{blue},
      commentstyle=\color{green},
      stringstyle=\color{blue},
      showspaces=false,showstringspaces=false,
      xleftmargin=1em}

\title{Certification of the Preservation of Structure by a Compiler's Back-end Pass\thanks{The project CerCo acknowledges the financial support of the Future and
Emerging Technologies (FET) programme within the Seventh Framework
Programme for Research of the European Commission, under FET-Open grant
number: 243881}}
\author{Paolo Tranquilli \and Claudio Sacerdoti Coen}
\institute{Department of Computer Science and Engineering, University of Bologna,\\\email{}, \email{}}

The labelling approach is a technique to lift cost models for non-functional
properties of programs from the object code to the source code. It is based
on the preservation of the structure of the high-level program in every
intermediate language used by the compiler. Such structure is captured by
observables that are added to the semantics and that need to be preserved
by the forward simulation proof of correctness of the compiler. Additional
special observables are required for function calls. In this paper we
present a generic forward simulation proof that preserves all these observables.
The proof statement is based on a new mechanized semantics that traces the
structure of execution when the language is unstructured. The generic semantics
and simulation proof have been mechanized in the interactive theorem prover
Matita.

The \emph{labelling approach} has been introduced in~\cite{easylabelling} as
a technique to \emph{lift} cost models for non-functional properties of programs
from the object code to the source code. Examples of non-functional properties
are execution time, amount of stack/heap space consumed and energy required for
communication. The basic idea of the approach is that it is impossible to
provide a \emph{uniform} cost model for a high-level language that is preserved
\emph{precisely} by a compiler. For instance, two instances of an assignment
$x = y$ in the source code can be compiled very differently according to the
place (registers vs stack) where $x$ and $y$ are stored at the moment of
execution. Therefore a precise cost model must assign a different cost
to every occurrence, and the exact cost can only be known after compilation.

According to the labelling approach, the compiler is free to compile and optimize
the source code without any major restriction, but it must keep track
of what happens to basic blocks during the compilation. The cost model is
then computed on the object code: it assigns a cost to every basic block.
Finally, the compiler propagates the cost model back to the source level,
assigning a cost to each basic block of the source code.

Implementing the labelling approach in a certified compiler
allows one to reason formally on the high-level source code of a program to prove
non-functional properties that are guaranteed to be preserved by the compiler
itself. The trusted code base is then reduced to 1) the interactive theorem
prover (or its kernel) used in the certification of the compiler and
2) the software used to certify the property on the source language, which
can itself be certified, further reducing the trusted code base.
In~\cite{easylabelling} the authors provide an example of a simple
certified compiler that implements the labelling approach for the
imperative \texttt{While} language~\cite{while}, which has neither
pointers nor function calls.

The labelling approach has been shown to scale to more interesting scenarios.
In particular, in~\cite{functionallabelling} it has been applied to a functional
language, and in~\cite{loopoptimizations} it has been shown that the approach
can be extended, at the price of some complication, to handle loop optimizations
and, more generally, program optimizations that do not preserve the structure
of basic blocks. On-going work also shows that the labelling approach is
compatible with the complex analyses required to obtain a cost model for object
code on processors that implement advanced features like pipelining, superscalar
architectures and caches.

In the European Project CerCo (Certified Complexity~\footnote{\url{}})~\cite{cerco} we are certifying a labelling approach based compiler for a large subset of C to
8051 object code. The compiler is
moderately optimizing and implements a compilation chain that is largely
inspired by that of CompCert~\cite{compcert1,compcert2}. Compared to the work done in~\cite{easylabelling}, the main novelty and source of difficulties is the presence
of function calls. Surprisingly, the addition of function calls requires a
revisitation of the proof technique given in~\cite{easylabelling}. In
particular, at the core of the labelling approach there is a forward
simulation proof that, in the case of \texttt{While}, is only minimally
more complex than the proof required for the preservation of the
functional properties alone. In the case of a programming language with
function calls, instead, it turns out that the forward simulation proof for
the back-end languages must guarantee a whole new set of invariants.

In this paper we present a formalization in the Matita interactive theorem
prover of a generic version of the simulation proof required for unstructured
languages. All back-end languages of the CerCo compiler are unstructured
languages, so the proof covers half of the correctness of the compiler.
The statement of the generic proof is based on a new semantics
for imperative unstructured languages that is based on \emph{structured
traces} and that restores the preservation of structure in the observables of
the semantics. The generic proof allows us to almost completely separate the
part of the simulation that deals with functional properties only from the
part that deals with the preservation of structure.

The plan of this paper is the following. In Section~\ref{labelling} we
sketch the labelling method and the problems that arise when it is applied
to languages with function calls. In Section~\ref{semantics} we introduce
a generic description of an unstructured imperative language and the
corresponding structured traces (the novel semantics). In
Section~\ref{simulation} we describe the forward simulation proof.
Conclusions and future work are in Section~\ref{conclusions}.

\section{The labelling approach}

We briefly sketch here a simplified version of the labelling approach as
introduced in~\cite{easylabelling}. The simplification strengthens the
sufficient conditions given in~\cite{easylabelling} to allow a simpler
explanation. The simplified conditions given here are also used in the
CerCo compiler to simplify the proof.

Let $\mathcal{P}$ be a programming language whose semantics is given in
terms of observables: a run of a program yields a finite or infinite
stream of observables. We also assume for the time being that function
calls are not available in $\mathcal{P}$. We want to associate a cost
model to a program $P$ written in $\mathcal{P}$. The first step is to
extend the syntax of $\mathcal{P}$ with a new construct $\texttt{emit L}$
where $L$ is a label distinct from all observables of $\mathcal{P}$.
The semantics of $\texttt{emit L}$ is the emission of the observable
\texttt{L} that is meant to signal the beginning of a basic block.

There exists an automatic procedure that injects into the program $P$ an
$\texttt{emit L}$ at the beginning of each basic block, using a fresh
\texttt{L} for each block. In particular, the bodies of loops, both branches
of \texttt{if-then-else}s and the targets of \texttt{goto}s must all start
with an emission statement.

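The injection procedure can be pictured on a toy structured language. The following Python sketch (a hypothetical mini-AST, not the CerCo implementation) inserts a fresh label emission at the start of every loop body and of both branches of every conditional:

```python
# Hypothetical sketch: inject fresh cost labels at the start of each basic
# block of a tiny structured language represented as nested tuples.
from itertools import count

_fresh = count(1)

def emit():
    """Produce a fresh `emit L` statement."""
    return ("emit", f"L{next(_fresh)}")

def inject(stmt):
    """Return stmt with an emission at the beginning of every basic block:
    loop bodies and both branches of a conditional."""
    kind = stmt[0]
    if kind == "seq":
        return ("seq", [inject(s) for s in stmt[1]])
    if kind == "if":
        _, cond, then_b, else_b = stmt
        return ("if", cond, ("seq", [emit(), inject(then_b)]),
                            ("seq", [emit(), inject(else_b)]))
    if kind == "while":
        _, cond, body = stmt
        return ("while", cond, ("seq", [emit(), inject(body)]))
    return stmt  # assignments etc. are left untouched

prog = ("seq", [("assign", "x", 0),
                ("while", "x<10", ("assign", "x", "x+1"))])
# The whole program is itself a basic block, so it gets a leading emission.
labelled = ("seq", [emit(), inject(prog)])
```

After the transformation every loop iteration re-executes the emission at the head of the loop body, so each pass through the block is observable.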
Let now $C$ be a compiler from $\mathcal{P}$ to the object code $\mathcal{M}$,
organized in passes. Let $\mathcal{Q}_i$ be the $i$-th intermediate
language used by the compiler. We can easily extend every
intermediate language (and its semantics) with an $\texttt{emit L}$ statement
as we did for $\mathcal{P}$. The same is possible for $\mathcal{M}$ too, with
the additional difficulty that the syntax of object code is given as a
sequence of bytes. The injection of an emission statement into the object code
can be done using a map that associates the statement to a pair of consecutive
code addresses. The intended semantics is that, if $(pc_1,pc_2) \mapsto \texttt{emit L}$, then the observable \texttt{L} is emitted after the execution of the
instruction stored at $pc_1$ and before the execution of the instruction
stored at $pc_2$. The two program counters are necessary because the
instruction stored at $pc_1$ can have multiple possible successors (e.g.
in case of a conditional branch or an indirect call). Dually, the instruction
stored at $pc_2$ can have multiple possible predecessors (e.g. if it is the
target of a jump).

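The pair-of-program-counters map can be sketched on a toy machine (all opcodes and addresses are invented for illustration):

```python
# Sketch: label emission for object code via a map from pairs of
# consecutive program counters to labels.  With a conditional branch at
# some address pc, the pairs (pc, taken_target) and (pc, pc+1) could
# carry different labels, which is why a single pc does not suffice.
emission = {(0, 1): "L1", (3, 1): "L1"}

def step(pc, code, observables):
    """Execute the instruction at pc; emit the label (if any) attached
    to the pair (pc, next pc)."""
    op, arg = code[pc]
    next_pc = arg if op == "jmp" else pc + 1
    if (pc, next_pc) in emission:
        observables.append(emission[(pc, next_pc)])
    return next_pc

code = {0: ("nop", None), 1: ("nop", None), 2: ("nop", None), 3: ("jmp", 1)}
obs, pc = [], 0
for _ in range(5):
    pc = step(pc, code, obs)
# Each entry into the block at address 1 (fall-through or jump) emits L1.
```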
The compiler, to be functionally correct, must preserve observational
equivalence, i.e. executing the program after each compiler pass should
yield the same stream of observables. After the injection of emission
statements, observables now capture both functional and non-functional
behaviour.

This correctness property is called in the literature a forward simulation
and is sufficient for correctness when the target language is
deterministic.

We also require a stronger, non-functional preservation property: after each
pass all basic blocks must start with an emission statement, and all labels
\texttt{L} must be unique.

Now let $M$ be the object code obtained for the program $P$. Let us suppose
that we can statically inspect the code $M$ and associate to each basic block
a cost (e.g. the number of clock cycles required to execute all instructions
in the basic block, or an upper bound on that time). Every basic block is
labelled with a unique label \texttt{L}, thus we can actually associate the
cost to \texttt{L}. Let us call it $k(\texttt{L})$.

The function $k$ is defined as the cost model for the object code control
blocks. It can equally well be used as the cost model for the source
control blocks. Indeed, if the semantics of $P$ is the stream
$L_1 L_2 \ldots$, then, because of forward simulation, the semantics of $M$ is
also $L_1 L_2 \ldots$ and its actual execution cost is $\Sigma_i k(L_i)$, because
every instruction belongs to a control block and every control block is
labelled. Thus it is correct to say that the execution cost of $P$ is also
$\Sigma_i k(L_i)$. In other words, we have obtained a cost model $k$ for
the blocks of the high-level program $P$ that is preserved by compilation.

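As a toy check (with invented block costs), the transfer of the cost model rests on nothing more than the equality of the two label streams:

```python
# Toy illustration: forward simulation guarantees the source and object
# programs emit the same label stream, so the object-code cost model k
# transfers to the source unchanged.  Numbers are invented.
k = {"L1": 4, "L2": 7}           # cycles per basic block, measured on object code

source_obs = ["L1", "L2", "L2"]  # labels emitted by the source semantics
object_obs = ["L1", "L2", "L2"]  # same stream, by forward simulation

def cost(observables):
    """Sigma_i k(L_i) over an emitted label stream."""
    return sum(k[label] for label in observables)

assert cost(source_obs) == cost(object_obs) == 18
```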
How can the user profit from the high-level cost model? Suppose, for instance,
that he wants to prove that the WCET of his program is bounded by $c$. It
is sufficient for him to prove that $\Sigma_i k(L_i) \leq c$, which is now
a purely functional property of the code. He can therefore use any technique
available to certify functional properties of the source code.
What is suggested in~\cite{easylabelling} is to actually instrument the
source code $P$ by replacing every label emission statement
$\texttt{emit L}$ with the instruction $\texttt{cost += k(L)}$ that increments
a global fresh variable \texttt{cost}. The bound is now proved by establishing
the program invariant $\texttt{cost} \leq c$, which can be done for example
using the Frama-C~\cite{framaC} suite if the source code is some variant of
C.

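The instrumentation step can be sketched as follows (the cost values and the example loop are invented; the point is only that the bound becomes an ordinary program invariant):

```python
# Sketch of the instrumentation: every `emit L` becomes `cost += k[L]`,
# turning the WCET claim into a purely functional property of the code.
k = {"L1": 4, "L2": 7}           # invented cost model

def instrumented(n):
    """A hypothetical labelled program (a loop running n times) with the
    emission statements replaced by cost increments."""
    cost = 0
    cost += k["L1"]              # was: emit L1  (function entry block)
    i = 0
    while i < n:
        cost += k["L2"]          # was: emit L2  (loop body block)
        i += 1
    return cost

# The WCET bound Sigma_i k(L_i) <= c is now a program invariant on `cost`:
c = k["L1"] + 10 * k["L2"]
assert instrumented(10) <= c
```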
\subsection{Labelling function calls}
We now want to extend the labelling approach to support function calls.
In the high-level, \emph{structured} programming language $\mathcal{P}$ there
is not much to change.

When a function is invoked, the current basic block is temporarily exited
and the basic block the function starts with takes control. When the function
returns, the execution of the original basic block is resumed. Thus the only
significant change is that basic blocks can now be nested. Let \texttt{E}
be the label of the external block and \texttt{I} the label of a nested one.
Since the external block starts before the internal one, the semantics observed
will be \texttt{E I} and the cost associated to it on the source language will be
$k(\texttt{E}) + k(\texttt{I})$, i.e. the cost of executing all instructions
in the block \texttt{E} first plus the cost of executing all the instructions in
the block \texttt{I}. However, we know that some instructions in \texttt{E} are
executed after the last instruction in \texttt{I}. This is actually irrelevant
because we are here assuming that costs are additive, so that we can freely
permute them\footnote{The additivity assumption fails on modern processors that have stateful subsystems, like caches and pipelines. The extension of the labelling approach to those systems is therefore non-trivial and under development in the CerCo project.}. Note that, in the present discussion, we are assuming that
the function call terminates and yields control back to the basic block
\texttt{E}. If the call diverges, the instrumentation
$\texttt{cost += k(E)}$ executed at the beginning of \texttt{E} is still valid,
but only as an upper bound on the real execution cost: only precision is lost.

Let us now consider what happens when we move down the compilation chain to an
unstructured intermediate or final language. Here unstructured means that
the only control operators are conditional and unconditional jumps, function
calls and returns. Unlike a structured language, though, there is no guarantee
that a function will return control just after the function call point.
The semantics of the return statement, indeed, consists in fetching the
return address from some internal structure (typically the control stack) and
jumping directly to it. The code can freely manipulate the control stack to
make a procedure return to an arbitrary position. Indeed, it is also possible
to break the well-nesting of function calls/returns.

Is it the case that the code produced by a correct compiler must respect the
additional property that every function returns just after its function call
point? The answer is negative, and the property is not implied by forward
simulation proofs. For instance, imagine modifying a correct compiler pass
by systematically adding one to the return address on the stack and by
putting a \texttt{NOP} (or any other instruction that takes one byte) after
every function call. The code obtained will be functionally indistinguishable,
and the added instructions will all be dead code.

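This counter-example can be simulated on a toy stack machine (opcodes and programs invented): the padded code bumps every pushed return address by one and puts a dead \texttt{NOP} after the call, yet the observables are identical.

```python
# Toy machine showing that forward simulation does not force returns to
# land just after the call site: bumping the return address and padding
# calls with a NOP preserves all observables.
def run(code, bump):
    pc, stack, obs = 0, [], []
    while pc < len(code):
        op, arg = code[pc]
        if op == "call":
            stack.append(pc + 1 + bump)   # bump=1 skips the padding NOP
            pc = arg
        elif op == "ret":
            pc = stack.pop()
        elif op == "out":                 # an observable action
            obs.append(arg)
            pc += 1
        elif op == "halt":
            break
        else:                             # nop: dead code
            pc += 1
    return obs

plain  = [("call", 3), ("out", "b"), ("halt", None),
          ("out", "a"), ("ret", None)]
padded = [("call", 4), ("nop", None), ("out", "b"), ("halt", None),
          ("out", "a"), ("ret", None)]
assert run(plain, 0) == run(padded, 1) == ["a", "b"]
```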
This lack of structure in the semantics badly interferes with the labelling
approach. The reason is the following: when a basic block labelled with
\texttt{E} contains a function call, it no longer makes any sense to associate
to the label \texttt{E} the sum of the costs of all the instructions in the block.
Indeed, there is no guarantee that the function will return into the block and
that the instructions executed after the return will be the ones
we are paying for in the cost model.

How can we make the labelling approach work in this scenario? We only see two
possible ways. The first one consists in injecting an emission statement after
every function call: basic blocks no longer contain function calls, but are now
terminated by them. This completely solves the problem and allows the compiler
to break the structure of function calls/returns at will. However, the
technique has several drawbacks. First of all, it greatly increases the number
of cost labels that are injected in the source code and that become
instrumentation statements. Thus, when reasoning on the source code to prove
non-functional properties, the user (or the automation tool) will have to handle
larger expressions. Second, the more labels are emitted, the more difficult it
becomes to implement powerful optimizations that respect the code structure.
Indeed, function calls are usually implemented in such a way that most registers
are preserved by the call, so that the static analysis of the block is not
interrupted by the call and an optimization can involve both the code before
and after the function call. Third, instrumenting the source code may require
unpleasant modifications of it. Take, for example, the code
\texttt{f(g(x));}. We need to inject an emission statement/instrumentation
instruction just after the execution of \texttt{g}. The only way to do that
is to rewrite the code as \texttt{y = g(x); emit L$_1$; f(y);} for some fresh
variable \texttt{y}. It is pretty clear how in certain situations the code
obtained would be more obfuscated and thus more difficult to reason on manually.

For the previous reasons, in this paper and in the CerCo project we adopt a
different approach. We do not inject emission statements after
function calls. However, we want to propagate a strong additional invariant in
the forward simulation proof. The invariant is the propagation of the structure
of the original high-level code, even if the target language is unstructured.
The structure we want to propagate, which will become clearer in the next
section, comprises 1) the property that every function should return just after
the function call point, which in turn implies the well-nesting of function calls;
2) the property that every basic block starts with a code emission statement.

In the original labelling approach of~\cite{easylabelling}, the second property
was guaranteed statically (syntactically) as a property of the generated code.
In our revised approach, instead, we will guarantee the property dynamically:
it will be possible to generate code that does not respect it, as long as
all execution traces do respect it. For instance, dead code will no longer
be required to have all basic blocks correctly labelled. The switch is suggested
by the fact that the first of the two properties --- the one related to
function calls/returns --- can only be defined as a dynamic property, i.e.
as a property of execution traces, not of the static code. The switch is
beneficial to the proof because the original proof was made of two parts:
the forward simulation proof and the proof that the static property held.
In our revised approach the latter disappears and only the forward simulation
is kept.

In order for the structure to be preserved
by a forward simulation argument, we need to make the structure observable
in the semantics. This is the topic of the next Section.

\section{Structured traces}

The program semantics adopted in the traditional labelling approach is based
on labelled deductive systems. Given a set of observables $\mathcal{O}$ and
a set of states $\mathcal{S}$, the semantics of one deterministic execution
step is
defined as a function $S \to S \times O^*$ where $O^*$ is a (finite) stream of
observables. The semantics is then lifted compositionally to multiple (finite
or infinite) execution steps.
Finally, the semantics of a whole program execution is obtained by forgetting
about the final state (if any), yielding a function $S \to O^*$ that, given an
initial state, returns the finite or infinite stream of observables in output.

We present here a new definition of semantics where the structure of execution,
as defined in the previous Section, is now observable. The idea is to replace
the stream of observables with a structured data type that makes explicit
function calls and returns and that guarantees some additional invariants by
construction. The data structure, called \emph{structured traces}, is
defined inductively for terminating programs and coinductively for diverging
ones. In the paper we focus only on the inductive structure, i.e. we assume
that all programs that are given a semantics are total. The Matita formalization
also shows the coinductive definitions. The semantics of a program is then
defined as a function that maps an initial state into a structured trace.

In order to have a definition that works on multiple intermediate languages,
we abstract the type of structured traces over an abstract data type of
abstract statuses:

record abstract_status := {
   S: Type[0]
;  as_execute: S -> S -> Prop
;  as_classify: S -> classification
;  as_costed: S -> Prop
;  as_label: \forall s. as_costed S s -> label
;  as_call_ident: \forall s. as_classify S s = cl_call -> label
;  as_after_return:
    (Σs:as_status. as_classify s = Some ? cl_call) → as_status → Prop
}

The predicate $\texttt{as\_execute}~s_1~s_2$ holds if $s_1$ evolves into
$s_2$ in one step; $\texttt{as\_classify}~s~c$ holds if the next instruction
to be executed in $s$ is classified according to $c \in
\{\texttt{return,jump,call,other}\}$ (we omit tailcalls for simplicity);
the predicate $\texttt{as\_costed}~s$ holds if the next instruction to be
executed in $s$ is a cost emission statement (also classified
as \texttt{other}); finally $(\texttt{as\_after\_return}~s_1~s_2)$ holds
if the next instruction to be executed in $s_2$ follows the function call
to be executed in (the witness of the $\Sigma$-type) $s_1$.

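A concrete instance of this interface can be sketched in Python for a three-instruction toy program (the class, its fields and the program are all invented; only the field names mirror the record):

```python
# Toy stand-in for an abstract_status instance: states are program
# counters of a small looping program, and the record fields become
# methods/predicates.  Names mirror the paper's record, not real code.
CL_OTHER, CL_JUMP, CL_CALL, CL_RETURN = "other", "jump", "call", "return"

class ToyStatus:
    # program: pc -> (classification, is_cost_emission, successors)
    program = {
        0: (CL_OTHER, True,  [1]),   # emit L1 (costed, classified as other)
        1: (CL_OTHER, False, [2]),
        2: (CL_JUMP,  False, [0]),   # jump back to the labelled block
    }
    def as_execute(self, s1, s2):    # s1 evolves into s2 in one step
        return s2 in self.program[s1][2]
    def as_classify(self, s):        # classification of the next instruction
        return self.program[s][0]
    def as_costed(self, s):          # next instruction is a cost emission
        return self.program[s][1]

st = ToyStatus()
assert st.as_execute(2, 0) and st.as_classify(2) == CL_JUMP and st.as_costed(0)
```

Note how the jump at state 2 targets a costed state, as the structured-trace invariants below require.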
The inductive type for structured traces is actually made of three mutually
inductive types with the following semantics:

 \item $(\texttt{trace\_label\_return}~s_1~s_2)$ is a trace that begins in
   the state $s_1$ (included) and ends just before the state $s_2$ (excluded)
   such that the instruction to be executed in $s_1$ is a label emission
   statement and the one to be executed in the state before $s_2$ is a return
   statement. Thus $s_2$ is the state after the return. The trace
   may contain other label emission statements. It captures the structure of
   the execution of function bodies: they must start with a cost emission
   statement and must end with a return; they are obtained by concatenating
   one or more basic blocks, all starting with a label emission
   (e.g. in case of loops).
 \item $(\texttt{trace\_any\_label}~b~s_1~s_2)$ is a trace that begins in
   the state $s_1$ (included) and ends just before the state $s_2$ (excluded)
   such that the instruction to be executed in $s_2$/in the state before
   $s_2$ is either a label emission statement
   or a return, according to the boolean $b$. It must not contain
   any label emission statement. It captures the notion of a suffix of a
   basic block.
 \item $(\texttt{trace\_label\_label}~b~s_1~s_2)$ is the special case of
   $(\texttt{trace\_any\_label}~b~s_1~s_2)$ such that the instruction to be
   executed in $s_1$ is a label emission statement. It captures the notion of
   a basic block.

 {\texttt{trace\_label\_label}~true~s_1~s_2}
 {\texttt{trace\_label\_return}~s_1~s_2}

 {\texttt{trace\_label\_label}~false~s_1~s_2 \andalso
  \texttt{trace\_label\_return}~s_2~s_3
 }
 {\texttt{trace\_label\_return}~s_1~s_3}

 {\texttt{trace\_any\_label}~b~s_1~s_2 \andalso
  \texttt{as\_costed}~s_1
 }
 {\texttt{trace\_label\_label}~b~s_1~s_2}

 {\texttt{as\_execute}~s_1~s_2 \andalso
  \texttt{as\_classify}~s_1 \in \{\texttt{cl\_jump,cl\_other}\} \andalso
  \texttt{as\_costed}~s_2
 }
 {\texttt{trace\_any\_label}~false~s_1~s_2}

 {\texttt{as\_execute}~s_1~s_2 \andalso
  \texttt{as\_classify}~s_1 = \texttt{cl\_return} \\
 }
 {\texttt{trace\_any\_label}~true~s_1~s_2}

 {\texttt{as\_execute}~s_1~s_2 \andalso
  \texttt{as\_classify}~s_1 = \texttt{cl\_call} \\
  \texttt{as\_after\_return}~s_1~s_3 \andalso
  \texttt{trace\_label\_return}~s_2~s_3 \andalso
  \texttt{as\_costed}~s_3
 }
 {\texttt{trace\_any\_label}~false~s_1~s_3}

 {\texttt{as\_execute}~s_1~s_2 \andalso
  \texttt{as\_classify}~s_1 = \texttt{cl\_call} \\
  \texttt{as\_after\_return}~s_1~s_3 \andalso
  \texttt{trace\_label\_return}~s_2~s_3 \\
  \lnot \texttt{as\_costed}~s_3 \andalso
  \texttt{trace\_any\_label}~b~s_3~s_4
 }
 {\texttt{trace\_any\_label}~b~s_1~s_4}

 {\texttt{as\_execute}~s_1~s_2 \andalso
  \lnot \texttt{as\_costed}~s_2 \\
  \texttt{trace\_any\_label}~b~s_2~s_3 \andalso
  \texttt{as\_classify}~s_1 = \texttt{cl\_other}
 }
 {\texttt{trace\_any\_label}~b~s_1~s_3}

inductive trace_label_return (S:abstract_status) : S → S → Type[0] ≝
  | tlr_base:
      ∀status_before: S.
      ∀status_after: S.
        trace_label_label S ends_with_ret status_before status_after →
        trace_label_return S status_before status_after
  | tlr_step:
      ∀status_initial: S.
      ∀status_labelled: S.
      ∀status_final: S.
        trace_label_label S doesnt_end_with_ret status_initial status_labelled →
        trace_label_return S status_labelled status_final →
          trace_label_return S status_initial status_final
with trace_label_label: trace_ends_with_ret → S → S → Type[0] ≝
  | tll_base:
      ∀ends_flag: trace_ends_with_ret.
      ∀start_status: S.
      ∀end_status: S.
        trace_any_label S ends_flag start_status end_status →
        as_costed S start_status →
          trace_label_label S ends_flag start_status end_status
with trace_any_label: trace_ends_with_ret → S → S → Type[0] ≝
  (* Single steps within a function which reach a label.
     Note that this is the only case applicable for a jump. *)
  | tal_base_not_return:
      ∀start_status: S.
      ∀final_status: S.
        as_execute S start_status final_status →
        (as_classifier S start_status cl_jump ∨
         as_classifier S start_status cl_other) →
        as_costed S final_status →
          trace_any_label S doesnt_end_with_ret start_status final_status
  | tal_base_return:
      ∀start_status: S.
      ∀final_status: S.
        as_execute S start_status final_status →
        as_classifier S start_status cl_return →
          trace_any_label S ends_with_ret start_status final_status
  (* A call followed by a label on return. *)
  | tal_base_call:
      ∀status_pre_fun_call: S.
      ∀status_start_fun_call: S.
      ∀status_final: S.
        as_execute S status_pre_fun_call status_start_fun_call →
        ∀H:as_classifier S status_pre_fun_call cl_call.
          as_after_return S «status_pre_fun_call, H» status_final →
          trace_label_return S status_start_fun_call status_final →
          as_costed S status_final →
            trace_any_label S doesnt_end_with_ret status_pre_fun_call status_final
  (* A call followed by a non-empty trace. *)
  | tal_step_call:
      ∀end_flag: trace_ends_with_ret.
      ∀status_pre_fun_call: S.
      ∀status_start_fun_call: S.
      ∀status_after_fun_call: S.
      ∀status_final: S.
        as_execute S status_pre_fun_call status_start_fun_call →
        ∀H:as_classifier S status_pre_fun_call cl_call.
          as_after_return S «status_pre_fun_call, H» status_after_fun_call →
          trace_label_return S status_start_fun_call status_after_fun_call →
          ¬ as_costed S status_after_fun_call →
          trace_any_label S end_flag status_after_fun_call status_final →
            trace_any_label S end_flag status_pre_fun_call status_final
  | tal_step_default:
      ∀end_flag: trace_ends_with_ret.
      ∀status_pre: S.
      ∀status_init: S.
      ∀status_end: S.
        as_execute S status_pre status_init →
        trace_any_label S end_flag status_init status_end →
        as_classifier S status_pre cl_other →
        ¬ (as_costed S status_init) →
          trace_any_label S end_flag status_pre status_end.

A \texttt{trace\_label\_return} is isomorphic to a (possibly empty) list of
\texttt{trace\_label\_label}s that end with a cost emission, followed by a
\texttt{trace\_label\_label} that ends with a return.

The interesting cases are those of $(\texttt{trace\_any\_label}~b~s_1~s_2)$.
A \texttt{trace\_any\_label} is a sequence of steps built by a syntax-directed
definition on the classification of $s_1$. The constructors of the datatype
impose several invariants that are meant to give a structure to the
otherwise unstructured execution. In particular, the following invariants
are enforced:
 \item the trace is never empty; it ends with a return iff $b$ is
       true (\texttt{ends\_with\_ret})
 \item a jump must always be the last instruction of the trace, and it must
       be followed by a cost emission statement; i.e. the target of a jump
       is always the beginning of a new basic block and, as such, it must start
       with a cost emission statement
 \item a cost emission statement can never occur inside the trace, only in
       the status immediately after
 \item the trace for a function call step is made from a subtrace of type
       $(\texttt{trace\_label\_return}~s_1~s_2)$, possibly followed by the
       rest of the trace for this basic block. The subtrace represents the
       function execution. Being an inductive datum, it guarantees that the
       function call terminates. The status $s_2$ is the one that follows
       the return statement. The next instruction of $s_2$ must follow the
       function call instruction. As a consequence, function calls are also
       well nested.

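Some of these invariants can be phrased as a runtime check over a flat list of classified steps; the following Python sketch (a hypothetical shadow of the inductive definition, not the Matita one) makes the first three explicit:

```python
# Hypothetical checker for some trace_any_label invariants over a flat
# list of steps.  Each step is a pair (classification, next_is_costed):
# the class of the executed instruction and whether the reached state is
# a cost emission point.
def check_any_label(steps, ends_with_ret):
    assert steps, "the trace is never empty"
    for i, (cls, next_costed) in enumerate(steps):
        last = (i == len(steps) - 1)
        if cls == "jump":
            # a jump ends the trace; its target starts a new basic block
            assert last and next_costed
        elif cls == "return":
            # only a b=true trace may (and must) end with a return
            assert last and ends_with_ret
        if not last:
            # no cost emission occurs inside the trace
            assert not next_costed
    return True

assert check_any_label([("other", False), ("jump", True)], ends_with_ret=False)
```

A trace with a jump in the middle, for instance, is rejected, mirroring the fact that no constructor of \texttt{trace\_any\_label} can build it.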
The three mutual structural recursive functions \texttt{flatten\_trace\_label\_return}, \texttt{flatten\_trace\_label\_label} and \texttt{flatten\_trace\_any\_label}
allow one to extract from a structured trace the list of states whose next
instruction is a cost emission statement. We only show here the type of one
of them:

  (S: abstract_status)
    (start_status: S) (final_status: S)
      (the_trace: trace_label_return S start_status final_status)
        on the_trace: list (as_cost_label S)

562\paragraph{Cost prediction on structured traces}
564The first main theorem of CerCo about structured traces
565(theorem \texttt{compute\_max\_trace\_label\_return\_cost\_ok\_with\_trace})
566holds for the
568of the structured traces to the concrete status of object code programs.
569Simplifying a bit, it states that
570$\forall s_1,s_2. \forall \tau: \texttt{trace\_label\_return}~s_1~s_2.
571  clock~s_2 = clock~s_1 + \Sigma_{s \in \texttt{flatten\_trace\_label\_return}~\tau} k(L(s))$ where $L$ maps a labelled state to its emitted label, and the
572cost model $k$ is statically computed from the object code
573by associating to each label \texttt{L} the sum of the cost of the instructions
574in the basic block that starts at \texttt{L} and ends before the next labelled
575instruction. The theorem is proved by structural induction over the structured
576trace, and is based on the invariant that
577iff the function that computes the cost model has analyzed the instruction
578to be executed at $s_2$ after the one to be executed at $s_1$, and if
579the structured trace starts with $s_1$, then eventually it will contain also
580$s_2$. When $s_1$ is not a function call, the result holds trivially because
581of the $(\texttt{as\_execute}~s_1~s_2)$ condition obtained by inversion on
582the trace. The only non
trivial case is that of function calls: the cost model computation function
recurs on the first instruction that follows the function call; the
585\texttt{as\_after\_return} condition of the \texttt{tal\_base\_call} and
\texttt{tal\_step\_call} constructors guarantees exactly that the execution will eventually reach
587this state.
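To make the static computation of $k$ concrete, here is a small Python sketch (our own toy encoding, not part of the Matita development; it assumes a straight-line program whose basic blocks end only at the next label). It charges each instruction to the basic block opened by the most recent label, and evaluates the right-hand side of the theorem on a flattened trace:

```python
def compute_cost_model(program):
    """Statically compute the cost model k from a toy straight-line
    object program, given as a list of (label-or-None, cost) pairs.
    Each label is mapped to the sum of the costs of the instructions
    in the basic block that starts at that label and ends before the
    next labelled instruction."""
    k = {}
    current = None
    for label, cost in program:
        if label is not None:
            current = label
            k[current] = 0
        if current is not None:
            k[current] += cost
    return k

def predicted_cost(emitted_labels, k):
    """Right-hand side of the cost theorem: the sum of k(L(s)) over
    the labelled states of the flattened structured trace."""
    return sum(k[label] for label in emitted_labels)
```

For instance, on the program \texttt{[("L1", 1), (None, 2), ("L2", 4), (None, 1)]} the computed model is \texttt{\{"L1": 3, "L2": 5\}}, and a run emitting \texttt{L1, L2, L1} is predicted to cost 11 cycles.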
589\paragraph{Structured traces similarity and cost prediction invariance}
591A compiler pass maps source to object code and initial states to initial
592states. The source code and initial state uniquely determine the structured
trace of a program, if it exists. The structured trace fails to exist iff
594the structural conditions are violated by the program execution (e.g. a function
595body does not start with a cost emission statement). Let us assume that the
596target structured trace exists.
598What is the relation between the source and target structured traces?
599In general, the two traces can be arbitrarily different. However, we are
interested only in those compiler passes that map a trace $\tau_1$ to a trace
601$\tau_2$ such that
602\begin{displaymath}(\texttt{flatten\_trace\_label\_return}~\tau_1 = \texttt{flatten\_trace\_label\_return}~\tau_2)\label{condition1}\end{displaymath}
603The reason is that when this happens we obviously have the
604following corollary to the previous theorem:
$$\forall s_1,s_2. \forall \tau_2: \texttt{trace\_label\_return}~s_1~s_2.
  clock~s_2 - clock~s_1 = \Sigma_{s \in \texttt{flatten\_trace\_label\_return}~\tau_2} k(L(s)) = \Sigma_{s \in \texttt{flatten\_trace\_label\_return}~\tau_1} k(L(s))$$
607The corollary states that the actual execution time of the program can be computed equally well on the source or target language. Thus it becomes possible to
608transfer the cost model from the target to the source code and reason on the
609source code only.
We are therefore interested in conditions stronger than~\ref{condition1}.
Hence we introduce here a similarity relation between traces that captures
613the fact that two traces have the same structure and that
614implies~\ref{condition1} (theorem \texttt{tlr\_rel\_to\_traces\_same\_flatten}). Two traces are similar when one can be obtained from
615the other by erasing or inserting steps classified as \texttt{other} into the
616\texttt{trace\_any\_label} blocks of the trace. In particular,
617the relation maps function calls to function calls to the same function,
618label emission statements to emissions of the same label, concatenation of
subtraces to concatenation of subtraces of the same length and starting with
the same emission statement, etc.
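The intuition behind the relation can be illustrated on a deliberately flat encoding of traces (our own toy encoding, not the Matita one): a trace is a list of events, \texttt{"tau"} marks a step classified as \texttt{other}, and all other strings are observable events. The real relation additionally enforces the nesting constraints just listed; this sketch only captures the erasure of silent steps:

```python
def observables(events):
    """Erase silent steps ("tau", i.e. steps classified as other)."""
    return [e for e in events if e != "tau"]

def similar(t1, t2):
    """Toy similarity: two flat traces are related when one can be
    obtained from the other by erasing or inserting silent steps,
    i.e. when their observable projections coincide."""
    return observables(t1) == observables(t2)
```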
The similarity relation definition is simple, but it involves a large
623number of cases. In the formalization, we define
624$(\texttt{tlr\_rel}~tlr_1~tlr_2)$ by structural recursion on $tlr_1$ and pattern
625matching over $tlr_2$. Here we turn
626the definition into inference rules for the sake of readability. We also
omit from trace constructors all arguments but those that are traces or that
628are used in the premises of the rules.
631 {\texttt{tll\_rel}~tll_1~tll_2
632 }
633 {\texttt{tlr\_rel}~(\texttt{tlr\_base}~tll_1)~(\texttt{tlr\_base}~tll_2)}
636 {\texttt{tll\_rel}~tll_1~tll_2 \andalso
637  \texttt{tlr\_rel}~tlr_1~tlr_2
638 }
639 {\texttt{tlr\_rel}~(\texttt{tlr\_step}~tll_1~tlr_1)~(\texttt{tlr\_step}~tll_2~tlr_2)}
642 {\texttt{as\_label}~H_1 = \texttt{as\_label}~H_2 \andalso
643  \texttt{tal\_rel}~tal_1~tal_2
644 }
645 {\texttt{tll\_rel}~(\texttt{tll\_base}~tal_1~H_1)~(\texttt{tll\_base}~tal_2~H_2)}
648 {}
 {\texttt{tal\_rel}~\texttt{tal\_base\_not\_return}~(taa @ \texttt{tal\_base\_not\_return})}
652 {}
 {\texttt{tal\_rel}~\texttt{tal\_base\_return}~(taa @ \texttt{tal\_base\_return})}
656 {\texttt{tlr\_rel}~tlr_1~tlr_2 \andalso
657  \texttt{as\_call\_ident}~H_1 = \texttt{as\_call\_ident}~H_2
658 }
659 {\texttt{tal\_rel}~(\texttt{tal\_base\_call}~H_1~tlr_1)~(taa @ \texttt{tal\_base\_call}~H_2~tlr_2)}
662 {\texttt{tlr\_rel}~tlr_1~tlr_2 \andalso
663  \texttt{as\_call\_ident}~H_1 = \texttt{as\_call\_ident}~H_2 \andalso
664  \texttt{tal\_collapsable}~tal_2
665 }
 {\texttt{tal\_rel}~(\texttt{tal\_base\_call}~H_1~tlr_1)~(taa @ \texttt{tal\_step\_call}~H_2~tlr_2~tal_2)}
669 {\texttt{tlr\_rel}~tlr_1~tlr_2 \andalso
670  \texttt{as\_call\_ident}~H_1 = \texttt{as\_call\_ident}~H_2 \andalso
671  \texttt{tal\_collapsable}~tal_1
672 }
 {\texttt{tal\_rel}~(\texttt{tal\_step\_call}~H_1~tlr_1~tal_1)~(taa @ \texttt{tal\_base\_call}~H_2~tlr_2)}
676 {\texttt{tlr\_rel}~tlr_1~tlr_2 \andalso
677  \texttt{tal\_rel}~tal_1~tal_2 \andalso
678  \texttt{as\_call\_ident}~H_1 = \texttt{as\_call\_ident}~H_2
679 }
 {\texttt{tal\_rel}~(\texttt{tal\_step\_call}~H_1~tlr_1~tal_1)~(taa @ \texttt{tal\_step\_call}~H_2~tlr_2~tal_2)}
683 {\texttt{tal\_rel}~tal_1~tal_2
684 }
685 {\texttt{tal\_rel}~(\texttt{tal\_step\_default}~tal_1)~tal_2}
688let rec tlr_rel S1 st1 st1' S2 st2 st2'
689  (tlr1 : trace_label_return S1 st1 st1')
690  (tlr2 : trace_label_return S2 st2 st2') on tlr1 : Prop ≝
691match tlr1 with
692  [ tlr_base st1 st1' tll1 ⇒
693    match tlr2 with
694    [ tlr_base st2 st2' tll2 ⇒ tll_rel … tll1 tll2
695    | _ ⇒ False
696    ]
697  | tlr_step st1 st1' st1'' tll1 tl1 ⇒
698    match tlr2 with
699    [ tlr_step st2 st2' st2'' tll2 tl2 ⇒
700      tll_rel … tll1 tll2 ∧ tlr_rel … tl1 tl2
701    | _ ⇒ False
702    ]
703  ]
704and tll_rel S1 fl1 st1 st1' S2 fl2 st2 st2'
705 (tll1 : trace_label_label S1 fl1 st1 st1')
706 (tll2 : trace_label_label S2 fl2 st2 st2') on tll1 : Prop ≝
707  match tll1 with
708  [ tll_base fl1 st1 st1' tal1 H ⇒
709    match tll2 with
    [ tll_base fl2 st2 st2' tal2 G ⇒
711      as_label_safe … («?, H») = as_label_safe … («?, G») ∧
712      tal_rel … tal1 tal2
713    ]
714  ]
715and tal_rel S1 fl1 st1 st1' S2 fl2 st2 st2'
716 (tal1 : trace_any_label S1 fl1 st1 st1')
717 (tal2 : trace_any_label S2 fl2 st2 st2')
718   on tal1 : Prop ≝
719  match tal1 with
720  [ tal_base_not_return st1 st1' _ _ _ ⇒
721    fl2 = doesnt_end_with_ret ∧
722    ∃st2mid,taa,H,G,K.
723    tal2 ≃ taa_append_tal ? st2 ??? taa
724      (tal_base_not_return ? st2mid st2' H G K)
725  | tal_base_return st1 st1' _ _ ⇒
726    fl2 = ends_with_ret ∧
727    ∃st2mid,taa,H,G.
728    tal2 ≃ taa_append_tal ? st2 ? st2mid st2' taa
729      (tal_base_return ? st2mid st2' H G)
730  | tal_base_call st1 st1' st1'' _ prf _ tlr1 _ ⇒
731    fl2 = doesnt_end_with_ret ∧
732    ∃st2mid,G.as_call_ident S2 («st2mid, G») = as_call_ident ? «st1, prf» ∧
733    ∃taa : trace_any_any ? st2 st2mid.∃st2mid',H.
734    (* we must allow a tal_base_call to be similar to a call followed
735      by a collapsable trace (trace_any_any followed by a base_not_return;
736      we cannot use trace_any_any as it disallows labels in the end as soon
737      as it is non-empty) *)
738    (∃K.∃tlr2 : trace_label_return ? st2mid' st2'.∃L.
739      tal2 ≃ taa @ (tal_base_call … H G K tlr2 L) ∧ tlr_rel … tlr1 tlr2) ∨
740    ∃st2mid'',K.∃tlr2 : trace_label_return ? st2mid' st2mid''.∃L.
741    ∃tl2 : trace_any_label … doesnt_end_with_ret st2mid'' st2'.
742      tal2 ≃ taa @ (tal_step_call … H G K tlr2 L tl2) ∧
743      tlr_rel … tlr1 tlr2 ∧ tal_collapsable … tl2
744  | tal_step_call fl1 st1 st1' st1'' st1''' _ prf _ tlr1 _ tl1 ⇒
745    ∃st2mid,G.as_call_ident S2 («st2mid, G») = as_call_ident ? «st1, prf» ∧
746    ∃taa : trace_any_any ? st2 st2mid.∃st2mid',H.
747    (fl2 = doesnt_end_with_ret ∧ ∃K.∃tlr2 : trace_label_return ? st2mid' st2'.∃L.
748      tal2 ≃ taa @ tal_base_call … H G K tlr2 L ∧
749      tal_collapsable … tl1 ∧ tlr_rel … tlr1 tlr2) ∨
750    ∃st2mid'',K.∃tlr2 : trace_label_return ? st2mid' st2mid''.∃L.
751    ∃tl2 : trace_any_label ? fl2 st2mid'' st2'.
752      tal2 ≃ taa @ (tal_step_call … H G K tlr2 L tl2) ∧
753      tal_rel … tl1 tl2 ∧ tlr_rel … tlr1 tlr2
754  | tal_step_default fl1 st1 st1' st1'' _ tl1 _ _ ⇒
755    tal_rel … tl1 tal2 (* <- this makes it many to many *)
756  ].
760In the preceding rules, a $taa$ is an inhabitant of the
761$(\texttt{trace\_any\_any}~s_1~s_2)$ inductive data type whose definition
762we do not give in the paper for lack of space. It is the type of valid
763prefixes (even empty ones) of \texttt{trace\_any\_label}s that do not contain
764any function call. Therefore it
765is possible to concatenate (using ``$@$'') a \texttt{trace\_any\_any} to the
766left of a \texttt{trace\_any\_label}. The intuition is that two traces are
similar if one of them can perform more silent (non-observable) moves,
768whose sequences are here represented by \texttt{trace\_any\_any}.
770The \texttt{tal\_collapsable} unary predicate over \texttt{trace\_any\_label}s
holds when the argument does not contain any function call and ends
772with a label (not a return). The intuition is that after a function call we
773can still perform a sequence of silent actions while remaining similar.
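As a hypothetical illustration, consider traces flattened to lists of events (our own encoding, not the formalization's: events beginning with \texttt{L} are label emissions, events beginning with \texttt{call} are function calls, \texttt{ret} is a return, \texttt{tau} is a silent step). The predicate can then be sketched as:

```python
def collapsable(events):
    """Toy analogue of tal_collapsable: the fragment performs no
    function call and its last event is a label emission (not a
    return), so it can be absorbed as extra silent activity after
    a call without breaking similarity."""
    no_calls = all(not e.startswith("call") for e in events)
    ends_with_label = bool(events) and events[-1].startswith("L")
    return no_calls and ends_with_label
```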
775\section{Forward simulation}
778We summarize here the results of the previous sections. Each intermediate
unstructured language can be given a semantics based on structured traces,
which singles out those runs that respect a certain number of invariants.
781A cost model can be computed on the object code and it can be used to predict
782the execution costs of runs that produce structured traces. The cost model
783can be lifted from the target to the source code of a pass if the pass maps
784structured traces to similar structured traces. The latter property is called
785a \emph{forward simulation}.
787As for labelled transition systems, in order to establish the forward
788simulation we are interested in (preservation of observables), we are
789forced to prove a stronger notion of forward simulation that introduces
790an explicit relation between states. The classical notion of a 1-to-0-or-many
791forward simulation is the existence of a relation $R$ over states such that
792if $s_1 R s_2$ and $s_1 \to^1 s_1'$ then there exists an $s_2'$ such that
$s_2 \to^* s_2'$ and $s_1' R s_2'$. In our context, we need to replace the
794one and multi step transition relations $\to^n$ with the existence of
795a structured trace between the two states, and we need to add the request that
796the two structured traces are similar. Thus what we would like to state is
797something like:\\
798for all $s_1,s_2,s_1'$ such that there is a $\tau_1$ from
799$s_1$ to $s_1'$ and $s_1 R s_2$ there exists an $s_2'$ such that
800$s_1' R s_2'$ and a $\tau_2$ from $s_2$ to $s_2'$ such that
801$\tau_1 \approx \tau_2$.\\
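The classical relational condition can be phrased executably on toy deterministic transition systems (a sketch with our own encoding: a system is a dictionary mapping each state to its successor, absent states being stuck; the paper's notion further replaces $\to^*$ by the existence of a structured trace and adds trace similarity):

```python
def reachable(step, s, bound=10):
    """States reachable from s in 0..bound steps of the deterministic
    transition function encoded by the dict `step`."""
    out, cur = [], s
    for _ in range(bound + 1):
        out.append(cur)
        cur = step.get(cur)
        if cur is None:
            break
    return out

def is_forward_simulation(step1, step2, R):
    """1-to-0-or-many condition: whenever s1 R s2 and s1 -> s1',
    some s2' with s2 ->* s2' must satisfy s1' R s2'."""
    for s1, s2 in R:
        s1_next = step1.get(s1)
        if s1_next is None:
            continue  # s1 is stuck: nothing to simulate
        if not any((s1_next, s2p) in R for s2p in reachable(step2, s2)):
            return False
    return True
```

For instance, with \texttt{step1 = \{"a": "b"\}} and \texttt{step2 = \{"x": "m", "m": "y"\}}, the relation \texttt{\{("a","x"), ("b","y")\}} is a simulation even though the single source step is matched by two target steps.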
The statement just introduced, however, is too simplistic and not provable
804in the general case. To understand why, consider the case of a function call
805and the pass that fixes the parameter passing conventions. A function
call in the source code takes in input an arbitrary number of pseudo-registers (the actual parameters to pass) and returns an arbitrary number of pseudo-registers (where the result is stored). A function call in the target code has no
807input nor output parameters. The pass must add explicit code before and after
808the function call to move the pseudo-registers content from/to the hardware
809registers or the stack in order to implement the parameter passing strategy.
810Similarly, each function body must be augmented with a preamble and a postamble
811to complete/initiate the parameter passing strategy for the call/return phase.
812Therefore what used to be a call followed by the next instruction to execute
813after the function return, now becomes a sequence of instructions, followed by
814a call, followed by another sequence. The two states at the beginning of the
815first sequence and at the end of the second sequence are in relation with
the status before/after the call in the source code, as in a usual forward
simulation. How can we prove, however, the additional condition for function
calls, which requires that when the function returns, execution resumes at the
instruction immediately after the function call? To guarantee this invariant, there must be another relation
820between the address of the function call in the source and in the target code.
821This additional relation is to be used in particular to relate the two stacks.
823Another example is given by preservation of code emission statements. A single
824code emission instruction can be simulated by a sequence of steps, followed
825by a code emission, followed by another sequence. Clearly the initial and final
826statuses of the sequence are to be in relation with the status before/after the
827code emission in the source code. In order to preserve the structured traces
828invariants, however, we must consider a second relation between states that
829traces the preservation of the code emission statement.
831Therefore we now introduce an abstract notion of relation set between abstract
832statuses and an abstract notion of 1-to-0-or-many forward simulation. These
833two definitions enjoy the following remarkable properties:
 \item they are generic enough to accommodate all passes of the CerCo compiler
 \item they allow us to lift the 1-to-0-or-many forward simulation to the
837   structured traces preserving forward simulation we are interested in
840The latter lifting is proved by the following theorem which has the
1-to-0-or-many forward simulation statement as a hypothesis. Combined with
842the results in the previous section, it has as a corollary that the cost
843model can be computed on the object code and lifted to the source code to
844reason on non-functional properties iff a simpler 1-to-0-or-many forward
simulation is proved. The statement of the latter is far more complex than a
846traditional 1-to-0-or-many forward simulation statement, but proofs have now
847about the same complexity. Indeed, it should be possible to largely reuse an
existing traditional forward simulation proof. The lifting theorem below,
proved once and for all in this paper, is the one that does all the work needed to
850put the labelling approach in motion.
853definition status_simulation ≝
854  λS1 : abstract_status.
855  λS2 : abstract_status.
856  λsim_status_rel : status_rel S1 S2.
857    ∀st1,st1',st2.as_execute S1 st1 st1' →
858    sim_status_rel st1 st2 →
859    match as_classify … st1 with
860    [ None ⇒ True
861    | Some cl ⇒
862      match cl with
863      [ cl_call ⇒ ∀prf.
864        (*
865             st1' ------------S----------\
866              ↑ \                         \
867             st1 \--L--\                   \
868              | \       \                   \
869              S  \-C-\  st2_after_call →taa→ st2'
870              |       \     ↑
871             st2 →taa→ st2_pre_call
872        *)
873        ∃st2_pre_call.
874        as_call_ident ? st2_pre_call = as_call_ident ? («st1, prf») ∧
875        call_rel ?? sim_status_rel «st1, prf» st2_pre_call ∧
876        ∃st2_after_call,st2'.
877        ∃taa2 : trace_any_any … st2 st2_pre_call.
878        ∃taa2' : trace_any_any … st2_after_call st2'.
879        as_execute … st2_pre_call st2_after_call ∧
880        sim_status_rel st1' st2' ∧
881        label_rel … st1' st2_after_call
882      | cl_tailcall ⇒ ∀prf.
883        (*
884             st1' ------------S----------\
885              ↑ \                         \
886             st1 \--L--\                   \
887              |         \                   \
888              S         st2_after_call →taa→ st2'
889              |             ↑
890             st2 →taa→ st2_pre_call
891        *)
892        ∃st2_pre_call.
893        as_tailcall_ident ? st2_pre_call = as_tailcall_ident ? («st1, prf») ∧
894        ∃st2_after_call,st2'.
895        ∃taa2 : trace_any_any … st2 st2_pre_call.
896        ∃taa2' : trace_any_any … st2_after_call st2'.
897        as_execute … st2_pre_call st2_after_call ∧
898        sim_status_rel st1' st2' ∧
899        label_rel … st1' st2_after_call
900      | cl_return ⇒
901        (*
902             st1
903            / ↓
904           | st1'----------S,L------------\
905           S   \                           \
906            \   \-----R-------\            |     
907             \                 |           |
908             st2 →taa→ st2_ret |           |
909                          ↓   /            |
910                     st2_after_ret →taaf→ st2'
912           we also ask that st2_after_ret be not labelled if the taaf tail is
913           not empty
914        *)
915        ∃st2_ret,st2_after_ret,st2'.
916        ∃taa2 : trace_any_any … st2 st2_ret.
917        ∃taa2' : trace_any_any_free … st2_after_ret st2'.
918        (if taaf_non_empty … taa2' then ¬as_costed … st2_after_ret else True) ∧
919        as_classifier … st2_ret cl_return ∧
920        as_execute … st2_ret st2_after_ret ∧ sim_status_rel st1' st2' ∧
921        ret_rel … sim_status_rel st1' st2_after_ret ∧
922        label_rel … st1' st2'
923      | cl_other ⇒
924          (*         
925          st1 → st1'
926            |      \
927            S      S,L
928            |        \
929           st2 →taaf→ st2'
931           the taaf can be empty (e.g. tunneling) but we ask it must not be the
932           case when both st1 and st1' are labelled (we would be able to collapse
933           labels otherwise)
934         *)
935        ∃st2'.
936        ∃taa2 : trace_any_any_free … st2 st2'.
937        (if taaf_non_empty … taa2 then True else (¬as_costed … st1 ∨ ¬as_costed … st1')) ∧
938        sim_status_rel st1' st2' ∧
939        label_rel … st1' st2'
940      | cl_jump ⇒
941        (* just like cl_other, but with a hypothesis more *)
942        as_costed … st1' →
943        ∃st2'.
944        ∃taa2 : trace_any_any_free … st2 st2'.
945        (if taaf_non_empty … taa2 then True else (¬as_costed … st1 ∨ ¬as_costed … st1')) ∧
946        sim_status_rel st1' st2' ∧
947        label_rel … st1' st2'
948      ]
949    ].
953status_simulation_produce_tlr S1 S2 R
954(* we start from this situation
955     st1 →→→→tlr→→→→ st1'
956      | \
957      L  \---S--\
958      |          \
959   st2_lab →taa→ st2   (the taa preamble is in general either empty or given
960                        by the preceding call)
962   and we produce
963     st1 →→→→tlr→→→→ st1'
964             \\      /  \
965             //     R    \-L,S-\
966             \\     |           \
967   st2_lab →tlr→ st2_mid →taaf→ st2'
969  st1 st1' st2_lab st2
970  (tlr1 : trace_label_return S1 st1 st1')
971  (taa2_pre : trace_any_any S2 st2_lab st2)
972  (sim_execute : status_simulation S1 S2 R)
973  on tlr1 : R st1 st2 → label_rel … st1 st2_lab →
974  ∃st2_mid.∃st2'.
975  ∃tlr2 : trace_label_return S2 st2_lab st2_mid.
976  ∃taa2 : trace_any_any_free … st2_mid st2'.
977  (if taaf_non_empty … taa2 then ¬as_costed … st2_mid else True) ∧
978  R st1' st2' ∧ ret_rel … R st1' st2_mid ∧ label_rel … st1' st2' ∧
979  tlr_rel … tlr1 tlr2
982\section{Conclusions and future works}
984The labelling approach is a technique to implement compilers that induce on
the source code a non-uniform cost model determined from the object code
986produced. The cost model assigns a cost to each basic block of the program.
987The main theorem of the labelling approach says that there is an exact
988correspondence between the sequence of basic blocks started in the source
989and object code, and that no instruction in the source or object code is
990executed outside a basic block. Thus the overall cost of object code execution
991can be computed precisely on the source code.
993In this paper we scale the labelling approach to cover a programming language
994with function calls. This introduces new difficulties only when the language
995is unstructured, i.e. it allows function calls to return anywhere in the code,
996destroying the hope of a static prediction of the cost of basic blocks.
997We restore static predictability by introducing a new semantics for unstructured
programs that singles out well-structured executions. The latter are represented
999by structured traces, a generalization of streams of observables that capture
1000several structural invariants of the execution, like well nesting of functions
1001or the fact that every basic block must start with a code emission statement.
1002We show that structured traces are sufficiently structured to statically compute
1003a precise cost model on the object code.
1005We introduce a similarity relation on structured traces that must hold between
1006source and target traces. When the relation holds for every program, we prove
1007that the cost model can be lifted from the object to the source code.
1009In order to prove that similarity holds, we present a generic proof of forward
1010simulation that is aimed at pulling apart as much as possible the part of the
1011simulation related to non-functional properties (preservation of structure)
1012from that related to functional properties. In particular, we reduce the
1013problem of preservation of structure to that of showing a 1-to-0-or-many
1014forward simulation that only adds a few additional proof obligations to those
of a traditional proof concerned with functional properties only.
1017All the results presented in the paper are part of a larger certification of a
1018C compiler which is based on the labelling approach. The certification, done
1019in Matita, is the main deliverable of the European Project CerCo.
1021The short term future work consists in the completion of the certification of
1022the CerCo compiler. In particular, the results presented in this paper will
1023be used to certify all the passes of the back-end of the compiler.
\paragraph{Related work}
1026CerCo is the first certification project which explicitly tries to induce a
1027precise cost model on the source code in order to establish non-functional
properties of programs on a high-level language. Traditional certifications
1029of compilers, like~\cite{compcert2,piton}, only explicitly prove preservation
1030of the functional properties via forward simulations.
1032Usually forward simulations take the following form: for each transition
1033from $s_1$ to $s_2$ in the source code, there exists an equivalent sequence of
1034transitions in the target code of length $n$. The number $n$ of transition steps
1035in the target code can just be the witness of the existential statement.
1036An equivalent alternative when the proof of simulation is constructive consists
1037in providing an explicit function, called \emph{clock function} in the
literature~\cite{clockfunctions}, that computes $n$ from $s_1$. Every clock
function then constitutes a cost model for the source code, in the spirit of
1040what we are doing in CerCo. However, we believe our solution to be superior
1041in the following points: 1) the machinery of the labelling approach is
insensitive to the resource being measured. Indeed, any cost model computed on
1043the object code can be lifted to the source code (e.g. stack space used,
1044energy consumed, etc.). On the contrary, clock functions only talk about
1045number of transition steps. In order to extend the approach with clock functions
1046to other resources, additional functions must be introduced. Moreover, the
1047additional functions would be handled differently in the proof.
10482) the cost models induced by the labelling approach have a simple presentation.
1049In particular, they associate a number to each basic block. More complex
1050models can be induced when the approach is scaled to cover, for instance,
1051loop optimizations~\cite{loopoptimizations}, but the costs are still meant to
be easy to understand and manipulate in an interactive theorem prover.
1053On the contrary, a clock function is a complex function of the state $s_1$
1054which, as a function, is an opaque object that is difficult to reify as
1055source code in order to reason on it.