% source: Papers/itp-2013/ccexec2.tex@3345

\documentclass{llncs}
\usepackage{hyperref}
\usepackage{graphicx}
\usepackage{color}
\usepackage{listings}
\usepackage{bcprules}%\bcprulessavespace
\usepackage{verbatim}
\usepackage{alltt}
\usepackage{subcaption}
\usepackage{amssymb}
% \usepackage{amsmath}
\usepackage{multicol}

\providecommand{\eqref}[1]{(\ref{#1})}

% NB: might be worth removing this if changing class in favour of
% a built-in definition.
%\newtheorem{theorem}{Theorem}
\newtheorem{condition}{Condition}

\lstdefinelanguage{coq}
  {keywords={Definition,Lemma,Theorem,Remark,Qed,Save,Inductive,Record},
   morekeywords={[2]if,then,else,forall,Prop,match,with,end,let},
  }

\lstdefinelanguage[mine]{C}[ANSI]{C}{
  morekeywords={int8_t},
  mathescape
}

\lstset{language=[mine]C,basicstyle=\footnotesize\tt,columns=flexible,breaklines=false,
        keywordstyle=\color{red}\bfseries,
        keywordstyle=[2]\color{blue},
        stringstyle=\color{blue},
        showspaces=false,showstringspaces=false,
        xleftmargin=1em}

\usepackage{tikz}
\usetikzlibrary{positioning,calc,patterns,chains,shapes.geometric,scopes}
\makeatletter
\pgfutil@ifundefined{pgf@arrow@code@implies}{% supply for lack of double arrow special arrow tip if it is not there%
  \pgfarrowsdeclare{implies}{implies}%
  {%
  \pgfarrowsleftextend{2.2pt}%
  \pgfarrowsrightextend{2.2pt}%
  }%
  {%
    \pgfsetdash{}{0pt} % do not dash%
    \pgfsetlinewidth{.33pt}%
    \pgfsetroundjoin   % fix join%
    \pgfsetroundcap    % fix cap%
    \pgfpathmoveto{\pgfpoint{-1.5pt}{2.5pt}}%
    \pgfpathcurveto{\pgfpoint{-.75pt}{1.5pt}}{\pgfpoint{0.5pt}{.5pt}}{\pgfpoint{2pt}{0pt}}%
    \pgfpathcurveto{\pgfpoint{0.5pt}{-.5pt}}{\pgfpoint{-.75pt}{-1.5pt}}{\pgfpoint{-1.5pt}{-2.5pt}}%
    \pgfusepathqstroke%
  }%
}{}%
\makeatother

\tikzset{state/.style={inner sep = 0, outer sep = 2pt, draw, fill},
         every node/.style={inner sep=2pt},
         every on chain/.style = {inner sep = 0, outer sep = 2pt},
         join all/.style = {every on chain/.append style={join}},
         on/.style={on chain={#1}, state},
         m/.style={execute at begin node=$, execute at end node=$},
         node distance=3mm,
         is other/.style={circle, minimum size = 3pt, state},
         other/.style={on, is other},
         is jump/.style={diamond, minimum size = 6pt, state},
         jump/.style={on, is jump},
         is call/.style={regular polygon, regular polygon sides=3, minimum size=5pt, state},
         call/.style={on=going below, is call, node distance=6mm, label=above left:$#1$},
         is ret/.style={regular polygon, regular polygon sides=3, minimum size=5pt, shape border rotate=180, state},
         ret/.style={on=going above, is ret, node distance=6mm},
         chain/.style={start chain=#1 going left},
         rev ar/.style={stealth-, thick},
         ar/.style={-stealth, thick},
         every join/.style={rev ar},
         labelled/.style={fill=white, label=above:$#1$},
         vcenter/.style={baseline={([yshift=-.5ex]current bounding box)}},
         every picture/.style={thick},
         double equal sign distance/.prefix style={double distance=1.5pt}, %% if already defined (newest version of pgf) it should be ignored%}
         implies/.style={double, -implies, thin, double equal sign distance, shorten <=5pt, shorten >=5pt},
         new/.style={densely dashed},
         rel/.style={font=\scriptsize, fill=white, inner sep=2pt},
         diag/.style={row sep={11mm,between origins},
                      column sep={11mm,between origins},
                      every node/.style={draw, is other}},
         small vgap/.style={row sep={7mm,between origins}},
         % for article, maybe temporary
         is jump/.style=is other,
         is call/.style=is other,
         is ret/.style=is other,
}

\def\L{\mathrel{\mathcal L}}
\def\S{\mathrel{\mathcal S}}
\def\R{\mathrel{\mathcal R}}
\def\C{\mathrel{\mathcal C}}

\newsavebox{\execbox}
\savebox{\execbox}{\tikz[baseline=-.5ex]\draw [-stealth] (0,0) -- ++(1em, 0);}
\newcommand{\exec}{\ensuremath{\mathrel{\usebox{\execbox}}}}
\let\ar\rightsquigarrow
\renewcommand{\verb}{\lstinline[mathescape]}

\let\class\triangleright

\newcommand{\append}{\mathbin{@}}
\newcommand{\iffdef}{\mathrel{:\Leftrightarrow}}

\begin{document}
\pagestyle{plain}

\title{Certification of the Preservation of Structure by a Compiler's Back-end Pass\thanks{The project CerCo acknowledges the financial support of the Future and
Emerging Technologies (FET) programme within the Seventh Framework
Programme for Research of the European Commission, under FET-Open grant
number: 243881}}
\author{Paolo Tranquilli \and Claudio Sacerdoti Coen}
\institute{Department of Computer Science and Engineering, University of Bologna,\\\email{Paolo.Tranquilli@unibo.it}, \email{Claudio.SacerdotiCoen@unibo.it}}
\maketitle
\begin{abstract}
The labelling approach is a technique to lift cost models for non-functional
properties of programs from the object code to the source code. It is based
on the preservation of the structure of the high-level program in every
intermediate language used by the compiler. Such structure is captured by
observables that are added to the semantics and that need to be preserved
by the forward simulation proof of correctness of the compiler. Additional
special observables are required for function calls. In this paper we
present a generic forward simulation proof that preserves all these observables.
The proof statement is based on a new mechanised semantics that traces the
structure of execution when the language is unstructured. The generic semantics
and simulation proof have been mechanised in the interactive theorem prover
Matita.
\end{abstract}

\section{Introduction}
The \emph{labelling approach} has been introduced in~\cite{easylabelling} as
a technique to \emph{lift} cost models for non-functional properties of programs
from the object code to the source code. Examples of non-functional properties
are execution time, amount of stack/heap space consumed and energy required for
communication. The basic idea of the approach is that it is impossible to
provide a \emph{uniform} cost model for a high-level language that is preserved
\emph{precisely} by a compiler. For instance, two instances of an assignment
$x = y$ in the source code can be compiled very differently according to the
place (registers vs.\ stack) where $x$ and $y$ are stored at the moment of
execution. Therefore a precise cost model must assign a different cost
to every occurrence, and the exact cost can only be known after compilation.

According to the labelling approach, the compiler is free to compile and optimise
the source code without any major restriction, but it must keep track
of what happens to basic blocks during compilation. The cost model is
then computed on the object code: it assigns a cost to every basic block.
Finally, the compiler propagates the cost model back to the source level,
assigning a cost to each basic block of the source code.

Implementing the labelling approach in a certified compiler
allows one to reason formally on the high-level source code of a program to prove
non-functional properties that are guaranteed to be preserved by the compiler
itself. The trusted code base is then reduced to 1) the interactive theorem
prover (or its kernel) used in the certification of the compiler and
2) the software used to certify the property on the source language, which
can itself be certified, further reducing the trusted code base.
In~\cite{easylabelling} the authors provide an example of a simple
certified compiler that implements the labelling approach for the
imperative \texttt{While} language~\cite{while}, which has neither
pointers nor function calls.

The labelling approach has been shown to scale to more interesting scenarios.
In particular, in~\cite{functionallabelling} it has been applied to a functional
language, and in~\cite{loopoptimizations} it has been shown that a slight
complication of the approach can handle loop optimisations and, more generally,
program optimisations that do not preserve the structure of basic blocks.
On-going work also shows that the labelling approach is compatible with
the complex analyses required to obtain a cost model for object code
on processors that implement advanced features like pipelining, superscalar
architectures and caches.

In the European Project CerCo (Certified Complexity\footnote{\url{http://cerco.cs.unibo.it}})~\cite{cerco} we are certifying a labelling approach based compiler for a large subset of C to
8051 object code. The compiler is
moderately optimising and implements a compilation chain that is largely
inspired by that of CompCert~\cite{compcert1,compcert2}. Compared to the work done in~\cite{easylabelling}, the main novelty and source of difficulties is the presence
of function calls. Surprisingly, the addition of function calls requires a
revision of the proof technique given in~\cite{easylabelling}. In
particular, at the core of the labelling approach there is a forward
simulation proof that, in the case of \texttt{While}, is only minimally
more complex than the proof required for the preservation of the
functional properties alone. In the case of a programming language with
function calls, instead, it turns out that the forward simulation proof for
the back-end languages must guarantee a whole new set of invariants.

In this paper we present a formalisation in the Matita interactive theorem
prover~\cite{matita1,matita2} of a generic version of the simulation proof required for unstructured
languages. All back-end languages of the CerCo compiler are unstructured
languages, so the proof covers half of the correctness of the compiler.
The statement of the generic proof is based on a new semantics
for imperative unstructured languages that is based on \emph{structured
traces} and that restores the preservation of structure in the observables of
the semantics. The generic proof allows us to almost completely separate the
part of the simulation that deals with functional properties only from the
part that deals with the preservation of structure.

The plan of this paper is the following. In Section~\ref{labelling} we
sketch the labelling method and the problems deriving from its application
to languages with function calls. In Section~\ref{semantics} we introduce
a generic description of an unstructured imperative language and the
corresponding structured traces (the novel semantics). In
Section~\ref{simulation} we describe the forward simulation proof.
Conclusions and future work are in Section~\ref{conclusions}.

\section{The labelling approach}
\label{labelling}

\subsection{A brief introduction to the labelling approach}

\begin{figure}
\begin{verbatim}
EMIT L_1;                         EMIT L_1         cost += k_1;
I_1;                              I_3              I_1;
for (i=0; i<2; i++) {        l_1: COND l_2         for (i=0; i<2; i++) {
  EMIT L_2;                       EMIT L_2           cost += k_2;
  I_2;                            I_4                I_2;
 }                                GOTO l_1          }
EMIT L_3;                    l_2: EMIT L_3         cost += k_3;
\end{verbatim}
\caption{The labelling approach applied to a simple program. The $I_i$ are sequences of instructions not containing jumps or loops.\label{examplewhile}}
\end{figure}
We briefly explain the labelling approach on the example in Figure~\ref{examplewhile}. The user wants to analyse the execution time of the program formed by
the black lines in the r.h.s.\ of the figure. He compiles the program using
a special compiler that first inserts in the code three label emission
statements (\texttt{EMIT L$_i$}) to mark the beginning of basic blocks;
then the compiler compiles the code to machine code (in the
middle of the figure), guaranteeing that the execution of the source and object
code emits the same sequence of labels ($L_1; L_2; L_2; L_3$ in the example).
This is achieved by keeping track of basic blocks during compilation, avoiding
all optimizations that alter the control flow. The latter can be recovered with
a more refined version of the labelling approach~\cite{tranquill}, but in the
present paper we stick to this simple variant for simplicity. Once the object
code is produced, the compiler runs a static code analyzer to associate to
each label $L_1, \ldots, L_3$ the cost (in clock cycles) of the instructions
that belong to the corresponding basic block. For example, the cost $k_1$
associated to $L_1$ is the number of cycles required to execute the block
$I_3$ and \texttt{COND $l_2$}, while the cost $k_2$ associated to $L_2$ counts the
cycles required by the block $I_4$ and \texttt{GOTO $l_1$}. The compiler also guarantees
that every executed instruction is in the scope of some code emission label,
that each scope does not contain loops (to associate a finite cost), and that
both branches of a conditional statement are followed by a code emission
statement. Under these assumptions the total execution cost
of the program $\Delta t$ is equal to the sum over the sequence of emitted
labels of the cost associated to every label:
$\Delta t = k(L_1; L_2; L_2; L_3) = k_1 + k_2 + k_2 + k_3$.
The compiler then emits an instrumented version of the source code
(in the r.h.s.\ of the figure) where label emission statements are replaced
by increments of a global variable \texttt{cost} that, before every increment, holds the
exact number of clock cycles spent by the microprocessor so far:
the difference $\Delta \texttt{cost}$ between the final and initial value of \texttt{cost} is $\Delta \texttt{cost} = k_1 + k_2 + k_2 + k_3 = \Delta t$. Finally, the
user can employ any available method (e.g.\ Hoare logic, invariant generators,
abstract interpretation and automated provers) to certify that $\Delta \texttt{cost}$
never exceeds a certain bound~\cite{cerco}, which is now a functional property
of the code.

\subsection{The labelling approach in the presence of function calls}

Let us now consider a simple program written in C that contains a function
pointer call inside the scope of the cost label $L_1$.
\begin{verbatim}
main: EMIT L_1       g: EMIT L_3   EMIT L_1     g: EMIT L_3
      I_1;              I_3;       I_4;            I_6;
      (*f)();           return;    CALL            RETURN
      I_2;                         I_5;
      EMIT L_2                     EMIT L_2
\end{verbatim}
The labelling method works exactly as before, inserting
code emission statements/\texttt{cost} variable increments at the beginning
of every basic block and at the beginning of every function. The compiler
still guarantees that the sequences of labels observed on the two programs are
the same. A new difficulty appears when the compiler needs to statically
analyze the object code to assign a cost to every label. What should the scope
of the $L_1$ label be? After executing the $I_4$ block, the \texttt{CALL}
statement passes control to a function that cannot be determined statically.
Therefore the cost of executing the body must be paid by some other label
(hence the requirement that every function starts with a code emission
statement). What label should pay for the cost of the block $I_5$? The only
reasonable answer is $L_1$, i.e.\ \emph{the scope of labels should extend to the
next label emission statement, stepping over function calls}.

The latter definition of scope is adequate at the source level because
C is a structured language that guarantees that every function call, if it
returns, passes control to the first instruction that follows the call. However,
this is not guaranteed for object code, for the back-end languages of a compiler
and, more generally, for unstructured
languages that use a writable control stack to store the return addresses of
calls. For example, $I_6$ could increment the return address on the
stack by $1$ so that the next \texttt{RETURN} would start at the second instruction
of $I_5$. The compiler would still be perfectly correct if a random, dead
code instruction was also added just after each \texttt{CALL}. More generally,
\emph{there is no guarantee that a correct compiler that respects the functional
behaviour of a program also respects the calling structure of the source code}.
Without such an assumption, however, it may not be true that the execution cost
of the program is the sum of the costs associated to the labels emitted. In our
example, the cost of $I_5$ is paid by $L_1$, but in place of $I_5$ the processor could execute any other code after $g$ returns.

Obviously, any reasonably written compiler produces object code that behaves
as if the language were structured (i.e.\ by properly nesting function
calls/returns and without tampering with the return addresses on the control
stack). This property, however, is a property of the runs of object code
programs, and not a property of the object code that can easily be statically
verified (like the ones we required for the basic labelling method).
Therefore, we now need to single out those runs whose cost behaviour can be
statically predicted, and we need to prove that every run of a program generated
by our compiler is of that type. We call such runs \emph{structured} since their
main property is to respect properties that hold for free on the source code
because the source language is structured. Moreover, in order to avoid proving
too many preservation properties of our compiler, we drop the original
requirements on the object code (all instructions must be in the scope of some label, no loops inside a scope, etc.) in favour of the corresponding requirements
for structured runs (a structured run must start with a label emission, no instruction can be executed twice between two emissions, etc.).

We will therefore proceed as follows. In the following section
1) we formally introduce the notion of
structured trace, which captures structured runs in the style of labelled
transition systems; 2) we show that on the object code we can correctly
compute the execution time of a structured run from the sequence of labels
observed; 3) we give unstructured languages a semantics in terms of structured
traces; 4) we show that on the source code we can correctly compute the
execution time of a program if the compiler produces object code whose
runs are weakly similar to the source code runs.

The notion of weak bisimulation for structured traces is a global property
which is hard to prove formally and much more demanding than the simple forward
simulation required for proofs of preservation of functional properties.
Therefore in Section~\ref{XXX} we will present a set of local simulation
conditions that refine the corresponding conditions for forward simulation and
that are sufficient to guarantee the production of weakly similar traces.

All the definitions and theorems presented in the paper have been formalized
in the interactive theorem prover Matita and are being used to certify
the complexity preserving compiler developed in the CerCo project~\cite{cerco}.
The formalization can be
found at~\ref{YYY} and it heavily relies on algebraic and dependent types for
both structured traces and the definition of weak similarity. In the paper
we did not try to stay close to the formalization. On the contrary,
the definitions given in the paper are the result of a significant
simplification effort for
the sake of presentation and to make the re-implementation of the
concepts easier in a proof assistant which is not based on the Calculus of Inductive
Constructions. However, the formalization is heavily commented to allow the
reader to understand its technical details.


%====================================================

We briefly sketch here a simplified version of the labelling approach as
introduced in~\cite{easylabelling}. The simplification strengthens the
sufficient conditions given in~\cite{easylabelling} to allow a simpler
explanation. The simplified conditions given here are also used in the
CerCo compiler to simplify the proof.

Let $\mathcal{P}$ be a programming language whose semantics is given in
terms of observables: a run of a program yields a finite or infinite
stream of observables. We also assume for the time being that function
calls are not available in $\mathcal{P}$. We want to associate a cost
model to a program $P$ written in $\mathcal{P}$. The first step is to
extend the syntax of $\mathcal{P}$ with a new construct $\texttt{emit L}$
where $L$ is a label distinct from all observables of $\mathcal{P}$.
The semantics of $\texttt{emit L}$ is the emission of the observable
\texttt{L} that is meant to signal the beginning of a basic block.

There exists an automatic procedure that injects into the program $P$ an
$\texttt{emit L}$ at the beginning of each basic block, using a fresh
\texttt{L} for each block. In particular, the bodies of loops, both branches
of \texttt{if-then-else}s and the targets of \texttt{goto}s must all start
with an emission statement.

Let now $C$ be a compiler from $\mathcal{P}$ to the object code $\mathcal{M}$,
organised in passes. Let $\mathcal{Q}_i$ be the $i$-th intermediate
language used by the compiler. We can easily extend every
intermediate language (and its semantics) with an $\texttt{emit L}$ statement
as we did for $\mathcal{P}$. The same is possible for $\mathcal{M}$ too, with
the additional difficulty that the syntax of object code is given as a
sequence of bytes. The injection of an emission statement into the object code
can be done using a map that associates pairs of consecutive code addresses with the
statement. The intended semantics is that, if $(pc_1,pc_2) \mapsto \texttt{emit L}$, then the observable \texttt{L} is emitted after the execution of the
instruction stored at $pc_1$ and before the execution of the instruction
stored at $pc_2$. The two program counters are necessary because the
instruction stored at $pc_1$ can have multiple possible successors (e.g.\
in case of a conditional branch or an indirect call). Dually, the instruction
stored at $pc_2$ can have multiple possible predecessors (e.g.\ if it is the
target of a jump).

To be functionally correct, the compiler must preserve observational
equivalence, i.e.\ executing the program after each compiler pass should
yield the same stream of observables. After the injection of emission
statements, observables now capture both functional and non-functional
behaviours.
This correctness property is called a forward simulation in the literature,
and it is sufficient for correctness when the target language is
deterministic~\cite{compcert3}.
We also require a stronger, non-functional preservation property: after each
pass all basic blocks must start with an emission statement, and all labels
\texttt{L} must be unique.

Now let $M$ be the object code obtained for the program $P$. Let us suppose
that we can statically inspect the code $M$ and associate to each basic block
a cost (e.g.\ the number of clock cycles required to execute all instructions
in the basic block, or an upper bound to that time). Every basic block is
labelled with a unique label \texttt{L}, thus we can actually associate the
cost to \texttt{L}. Let us call it $k(\texttt{L})$.

The function $k$ is the cost model for the object code control
blocks. It can equally well be used as the cost model for the source
control blocks. Indeed, if the semantics of $P$ is the stream
$L_1 L_2 \ldots$, then, because of forward simulation, the semantics of $M$ is
also $L_1 L_2 \ldots$ and its actual execution cost is $\Sigma_i k(L_i)$ because
every instruction belongs to a control block and every control block is
labelled. Thus it is correct to say that the execution cost of $P$ is also
$\Sigma_i k(L_i)$. In other words, we have obtained a cost model $k$ for
the blocks of the high-level program $P$ that is preserved by compilation.

How can the user profit from the high-level cost model? Suppose, for instance,
that he wants to prove that the WCET of his program is bounded by $c$. It
is sufficient for him to prove that $\Sigma_i k(L_i) \leq c$, which is now
a purely functional property of the code. He can therefore use any technique
available to certify functional properties of the source code.
What is suggested in~\cite{easylabelling} is to actually instrument the
source code $P$ by replacing every label emission statement
$\texttt{emit L}$ with the instruction $\texttt{cost += k(L)}$ that increments
a fresh global variable \texttt{cost}. The bound is now proved by establishing
the program invariant $\texttt{cost} \leq c$, which can be done for example
using the Frama-C~\cite{framaC} suite if the source code is some variant of
C.

In order to extend the labelling approach to function calls we make
\verb+CALL f+ emit the observable \verb+f+ and \verb+RET+ emit a distinguished observable
\verb+ret+.

For example, the following execution history of the program in \autoref{fig:esempio}
$$I_1; \verb+CALL f+; \verb+COND l+; \verb+EMIT $\ell_3$+; I_3; \verb+RET+; I_2; \verb+RET+$$
emits the trace
$$\verb+main+, \verb+f+$$
\begin{figure}
\hfil
\begin{minipage}{.2\linewidth}
\begin{lstlisting}
main: $\!I_1$
      CALL f
      $I_2$
      RET
\end{lstlisting}
\end{minipage}
\begin{minipage}{.1\linewidth}
\begin{lstlisting}
main
main
main
main
\end{lstlisting}
\end{minipage}
\hfil
\begin{minipage}{.2\linewidth}
\begin{lstlisting}
f: $\!$COND l
   EMIT $\ell_2$
   RET
l: $\!$EMIT $\ell_3$
   $I_3$
   RET
\end{lstlisting}
\end{minipage}
\begin{minipage}{.1\linewidth}
\begin{lstlisting}
f

$\ell_2$

$\ell_3$
$\ell_3$
\end{lstlisting}
\end{minipage}
\hfil{}
\caption{A program with a function call (left listings) and, next to each listing, the cost label in whose scope each instruction lies.}
\label{fig:esempio}
\end{figure}


\subsection{Labelling function calls}
We now want to extend the labelling approach to support function calls.
In the high-level, \emph{structured} programming language $\mathcal{P}$ there
is not much to change.
When a function is invoked, the current basic block is temporarily exited
and the basic block the function starts with takes control. When the function
returns, the execution of the original basic block is resumed. Thus the only
significant change is that basic blocks can now be nested. Let \texttt{E}
be the label of the external block and \texttt{I} the label of a nested one.
Since the external block starts before the internal one, the semantics observed will be
\texttt{E I} and the cost associated to it on the source language will be
$k(\texttt{E}) + k(\texttt{I})$, i.e.\ the cost of executing all instructions
in the block \texttt{E} plus the cost of executing all the instructions in
the block \texttt{I}. However, we know that some instructions in \texttt{E} are
executed after the last instruction in \texttt{I}. This is actually irrelevant
because we are here assuming that costs are additive, so that we can freely
permute them\footnote{The additivity assumption fails on modern processors that have stateful subsystems, like caches and pipelines. The extension of the labelling approach to those systems is therefore non-trivial and under development in the CerCo project.}. Note that, in the present discussion, we are assuming that
the function call terminates and yields control back to the basic block
\texttt{E}. If the call diverges, the instrumentation
$\texttt{cost += k(E)}$ executed at the beginning of \texttt{E} is still valid,
but just as an upper bound to the real execution cost: only precision is lost.
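
The permutation argument can be spelled out as a one-line derivation. Split the instructions of \texttt{E} into the part $E_1$ executed before the call and the part $E_2$ executed after the return, and write $t(\cdot)$ for the time taken by a sequence of instructions, so that $k(\texttt{E}) = t(E_1) + t(E_2)$ and $k(\texttt{I}) = t(I)$ where $I$ is the body of the callee. By additivity
$$t(E_1) + t(I) + t(E_2) \;=\; \bigl(t(E_1) + t(E_2)\bigr) + t(I) \;=\; k(\texttt{E}) + k(\texttt{I})$$
i.e.\ the actual execution time coincides with the cost predicted from the observed trace \texttt{E I}.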

Let us now consider what happens when we move down the compilation chain to an
unstructured intermediate or final language. Here unstructured means that
the only control operators are conditional and unconditional jumps, function
calls and returns. Unlike a structured language, though, there is no guarantee
that a function will return control just after the function call point.
The semantics of the return statement, indeed, consists in fetching the
return address from some internal structure (typically the control stack) and
jumping directly to it. The code can freely manipulate the control stack to
make the procedure return to an arbitrary position. Indeed, it is even possible
to break the well-nesting of function calls/returns.

Is it the case that the code produced by a correct compiler must respect the
additional property that every function returns just after its function call
point? The answer is negative and the property is not implied by forward
simulation proofs. For instance, imagine modifying a correct compiler pass
by systematically adding one to the return address on the stack and by
putting a \texttt{NOP} (or any other instruction that takes one byte) after
every function call. The obtained code will be functionally indistinguishable,
but every function will return one instruction past its call point, skipping
the inserted \texttt{NOP}.

This lack of structure in the semantics badly interferes with the labelling
approach. The reason is the following: when a basic block labelled with
\texttt{E} contains a function call, it no longer makes any sense to associate
to the label \texttt{E} the sum of the costs of all the instructions in the block.
Indeed, there is no guarantee that the function will return into the block and
that the instructions that will be executed after the return will be the ones
we are paying for in the cost model.

How can we make the labelling approach work in this scenario? We only see two
possible ways. The first one consists in injecting an emission statement after
every function call: basic blocks no longer contain function calls, but are now
terminated by them. This completely solves the problem and allows the compiler
to break the structure of function calls/returns at will. However, the
technique has several drawbacks. First of all, it greatly increases the number
of cost labels that are injected in the source code and that become
instrumentation statements. Thus, when reasoning on the source code to prove
non-functional properties, the user (or the automation tool) will have to handle
larger expressions. Second, the more labels are emitted, the more difficult it
becomes to implement powerful optimisations respecting the code structure.
Indeed, function calls are usually implemented in such a way that most registers
are preserved by the call, so that the static analysis of the block is not
interrupted by the call and an optimisation can involve both the code before
and after the function call. Third, instrumenting the source code may require
unpleasant modifications of it. Take, for example, the code
\texttt{f(g(x));}. We need to inject an emission statement/instrumentation
instruction just after the execution of \texttt{g}. The only way to do that
is to rewrite the code as \texttt{y = g(x); emit L; f(y);} for some fresh
variable \texttt{y}. It is pretty clear how in certain situations the obtained
code would be more obfuscated and thus more difficult to reason on manually.
559
For the previous reasons, in this paper and in the CerCo project we adopt a
different approach. We do not inject emission statements after every
function call. However, we want to propagate a strong additional invariant in
the forward simulation proof. The invariant is the propagation of the structure
of the original high level code, even if the target language is unstructured.
The structure we want to propagate, which will be made clearer in the next
section, comprises 1) the property that every function should return just after
the function call point, which in turn implies well nesting of function calls;
2) the property that every basic block starts with a code emission statement.

In the original labelling approach of~\cite{easylabelling}, the second property
was guaranteed syntactically, as a property of the generated code.
In our revised approach, instead, we will impose the property on the runs:
it will be possible to generate code that does not respect the syntactic
property, as long as all possible runs respect it. For instance, dead code will no longer
be required to have all basic blocks correctly labelled. The switch is suggested
by the fact that the first of the two properties --- the one related to
function calls/returns --- can only be defined as a property of runs,
not of the static code. The switch is
beneficial to the proof because the original proof was made of two parts:
the forward simulation proof and the proof that the static property holds.
In our revised approach the latter disappears and only the forward simulation
is kept.

In order for the structure to be captured and preserved
by a forward simulation argument, we need to make it observable
in the semantics. This is the topic of the next section.

\section{Structured traces}
\label{semantics}

Let us consider a generic unstructured language already equipped with a
small step structured operational semantics (SOS). We introduce a
deterministic labelled transition system~\cite{LTS} $(S,s_i,\Lambda,\to)$
that refines the
SOS by observing function calls and the beginning of basic blocks.
$S$ is the set of states of the program, $s_i$ the initial state and
$\Lambda = \{ \tau, RET \} \cup \mathcal{L} \cup \mathcal{F}$
where $\mathcal{F}$ is the set of names of functions that can occur in the
program, $\mathcal{L}$ is a set of labels disjoint from $\mathcal{F}$
and $\tau$ and $RET$ do not belong to $\mathcal{F} \cup \mathcal{L}$.
The transition function is defined as $s_1 \stackrel{o}{\to} s_2$ if
$s_1$ moves to $s_2$ according to the SOS; moreover $o = f \in \mathcal{F}$ if
the function $f$ is called, $o = RET$ if a \texttt{RETURN} is executed,
$o = L \in \mathcal{L}$ if an \texttt{EMIT L} is executed to signal the
beginning of a basic block, and $o = \tau$ in all other cases.
Because we assume the language to be deterministic, the label emitted can
actually be computed by simply observing $s_1$.

Among all possible finite execution fragments
$s_0 \stackrel{o_0}{\to} s_1 \ldots \stackrel{o_{n-1}}{\to} s_n$ we want to
identify the ones that satisfy the requirements we sketched in the previous
section. We say that an execution fragment is \emph{structured} iff
\begin{enumerate}
 \item for every $i$, if $s_i \stackrel{f}{\to} s_{i+1}$ then there is a
   label $L$ such that $s_{i+1} \stackrel{L}{\to} s_{i+2}$.
   Equivalently, $s_{i+1}$ must start execution with an \texttt{EMIT L}.
   This captures the requirement that the body of every function starts
   with a label emission statement.
 \item for every $i$, if $s_i \stackrel{f}{\to} s_{i+1}$ then
   there is a structured $s_{i+1} \stackrel{o_{i+1}}{\to} \ldots
   \stackrel{o_n}{\to} s_{n+1}$ such that $s_{n+1} \stackrel{RET}{\to} s_{n+2}$
   and the instruction to be executed in $s_{n+2}$ follows the call that was
   executed in $s_i$. This captures the double requirement that every function
   call must converge and that it must yield back control just after the call.
 \item for every $i$, if the instruction to be executed in $s_i$ is a
   conditional branch, then there is an $L$ such that $s_{i+1} \stackrel{L}{\to} s_{i+2}$ or, equivalently, $s_{i+1}$ must start execution with an
   \texttt{EMIT L}. This captures the requirement that every branch that is
   taken must reach the beginning of a new basic block.
\end{enumerate}
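
The three conditions above can be prototyped on a simplified model where a
run is recorded as the list of its observables. The following Python sketch
is our own illustration (names such as \texttt{check\_structured} are not part
of the Matita development); returning just after the call is approximated by
well bracketing of calls and returns.

```python
# Illustrative prototype of the three structuredness conditions (our own
# simplification, not part of the Matita development).  A run is a list of
# events: ("emit", L), ("call", f), ("ret",), ("branch",) or ("tau",).
def check_structured(run):
    depth = 0
    for i, ev in enumerate(run):
        nxt = run[i + 1] if i + 1 < len(run) else None
        if ev[0] == "call":
            # condition 1: the function body must start with a label emission
            if nxt is None or nxt[0] != "emit":
                return False
            depth += 1
        elif ev[0] == "ret":
            # condition 2 (one half): a return must match a pending call
            if depth == 0:
                return False
            depth -= 1
        elif ev[0] == "branch":
            # condition 3: a taken branch must reach a labelled block
            if nxt is None or nxt[0] != "emit":
                return False
    # condition 2 (other half): every call eventually returns
    return depth == 0

ok  = [("emit", "L1"), ("call", "f"), ("emit", "L2"), ("ret",), ("tau",)]
bad = [("call", "f"), ("tau",), ("ret",)]   # function body not labelled
assert check_structured(ok) and not check_structured(bad)
```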

The semantics of a program is traditionally given
on labelled deductive systems. Given a set of observables $\mathcal{O}$ and
a set of states $S$, the semantics of one deterministic execution
step is
defined as a function $S \to S \times \mathcal{O}^*$ where $\mathcal{O}^*$ is a (finite) stream of
observables. The semantics is then lifted compositionally to multiple (finite
or infinite) execution steps.
Finally, the semantics of a whole program execution is obtained by forgetting
about the final state (if any), yielding a function $S \to \mathcal{O}^*$ that given an
initial state returns the finite or infinite stream of observables in output.

We present here a new definition of semantics where the structure of execution,
as defined in the previous section, is now observable. The idea is to replace
the stream of observables with a structured data type that makes
function calls and returns explicit and that guarantees some additional invariants by
construction. The data structure, called \emph{structured traces}, is
defined inductively for terminating programs and coinductively for diverging
ones. In the paper we focus only on the inductive structure, i.e. we assume
that all programs that are given a semantics are total. The Matita formalisation
also shows the coinductive definitions. The semantics of a program is then
defined as a function that maps an initial state into a structured trace.

In order to have a definition that works on multiple intermediate languages,
we abstract the type of structured traces over an abstract data type of
abstract statuses, which we aptly call $\texttt{abstract\_status}$. The fields
of this record are the following.
\begin{itemize}
 \item \verb+S : Type[0]+, the type of states.
 \item \verb+as_execute : S $\to$ S $\to$ Prop+, a binary predicate stating
 an execution step. We write $s_1\exec s_2$ for $\verb+as_execute+~s_1~s_2$.
 \item \verb+as_classifier : S $\to$ classification+, a function tagging all
 states with a class in
 $\{\texttt{cl\_return,cl\_jump,cl\_call,cl\_other}\}$, depending on the instruction
 that is about to be executed (we omit tail-calls for simplicity). We will
 use $s \class c$ as a shorthand for both $\texttt{as\_classifier}~s=c$
 (if $c$ is a classification) and $\texttt{as\_classifier}~s\in c$
 (if $c$ is a set of classifications).
 \item \verb+as_label : S $\to$ option label+, telling whether the
 next instruction to be executed in $s$ is a cost emission statement,
 and, if so, returning the associated cost label. Our shorthand for this function
 will be $\ell$, and we will also abuse the notation by using $\ell~s$ as a
 predicate stating that $s$ is labelled.
 \item \verb+as_call_ident : ($\Sigma$s:S. s $\class$ cl_call) $\to$ label+,
 returning the identifier of the function which is being called in a
 \verb+cl_call+ state. We will use the shorthand $s\uparrow f$ for
 $\verb+as_call_ident+~s = f$.
 \item \verb+as_after_return : ($\Sigma$s:S. s $\class$ cl_call) $\to$ S $\to$ Prop+,
 which holds on the \verb+cl_call+ state $s_1$ and a state $s_2$ when the
 instruction to be executed in $s_2$ follows the function call to be
 executed in (the witness of the $\Sigma$-type) $s_1$. We will use the notation
 $s_1\ar s_2$ for this relation.
\end{itemize}
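
As an illustration only, the record can be rendered in Python as a bundle of
functions over an opaque state type; the field names mirror the Matita ones,
but the $\Sigma$-types degenerate into informal preconditions (e.g.
\texttt{call\_ident} may only be applied to \texttt{cl\_call} states).

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AbstractStatus:
    """Python sketch of the abstract_status record (illustrative only)."""
    execute: Callable[[object, object], bool]       # as_execute s1 s2
    classifier: Callable[[object], str]             # as_classifier: one of cl_*
    label: Callable[[object], Optional[str]]        # as_label: cost label, if any
    call_ident: Callable[[object], str]             # as_call_ident (cl_call only)
    after_return: Callable[[object, object], bool]  # as_after_return (cl_call only)

# A toy instantiation over a three-instruction straight-line program.
prog = {0: ("emit", "L1"), 1: ("call", "f"), 2: ("other", None)}
st = AbstractStatus(
    execute=lambda s1, s2: s2 == s1 + 1,
    classifier=lambda s: {"emit": "cl_other", "call": "cl_call",
                          "other": "cl_other"}[prog[s][0]],
    label=lambda s: prog[s][1] if prog[s][0] == "emit" else None,
    call_ident=lambda s: prog[s][1],
    after_return=lambda s1, s2: s2 == s1 + 1,
)
assert st.classifier(1) == "cl_call" and st.call_ident(1) == "f"
assert st.label(0) == "L1" and st.label(2) is None
```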

% \begin{alltt}
% record abstract_status := \{ S: Type[0];
%  as_execute: S $$\to$$ S $$\to$$ Prop;   as_classifier: S $$\to$$ classification;
%  as_label: S $$\to$$ option label;    as_called: ($$\Sigma$$s:S. c s = cl_call) $$\to$$ label;
%  as_after_return: ($$\Sigma$$s:S. c s = cl_call) $$\to$$ S $$\to$$ Prop \}
% \end{alltt}

The inductive type for structured traces is actually made of three mutually
inductive types with the following semantics:
\begin{enumerate}
 \item $(\texttt{trace\_label\_return}~s_1~s_2)$ (shorthand $\verb+TLR+~s_1~s_2$)
   is a trace that begins in
   the state $s_1$ (included) and ends just before the state $s_2$ (excluded)
   such that the instruction to be executed in $s_1$ is a label emission
   statement and the one to be executed in the state before $s_2$ is a return
   statement. Thus $s_2$ is the state after the return. The trace
   may contain other label emission statements. It captures the structure of
   the execution of function bodies: they must start with a cost emission
   statement and must end with a return; they are obtained by concatenating
   one or more basic blocks, all starting with a label emission
   (e.g. in case of loops).
 \item $(\texttt{trace\_any\_label}~b~s_1~s_2)$ (shorthand $\verb+TAL+~b~s_1~s_2$)
   is a trace that begins in
   the state $s_1$ (included) and ends just before the state $s_2$ (excluded)
   such that, according to the boolean $b$, either the instruction to be
   executed in $s_2$ is a label emission statement (when $b$ is false), or
   the one to be executed in the state before $s_2$ is a return (when $b$ is
   true). It must not contain
   any label emission statement. It captures the notion of a suffix of a
   basic block.
 \item $(\texttt{trace\_label\_label}~b~s_1~s_2)$ (shorthand $\verb+TLL+~b~s_1~s_2$) is the special case of
   $\verb+TAL+~b~s_1~s_2$ such that the instruction to be
   executed in $s_1$ is a label emission statement. It captures the notion of
   a basic block.
\end{enumerate}

723
724\begin{multicols}{3}
725\infrule[\verb+tlr_base+]
726 {\texttt{TLL}~true~s_1~s_2}
727 {\texttt{TLR}~s_1~s_2}
728
729\infrule[\verb+tlr_step+]
730 {\texttt{TLL}~false~s_1~s_2 \andalso
731  \texttt{TLR}~s_2~s_3
732 }
733 {\texttt{TLR}~s_1~s_3}
734
735\infrule[\verb+tll_base+]
736 {\texttt{TAL}~b~s_1~s_2 \andalso
737  \ell~s_1
738 }
739 {\texttt{TLL}~b~s_1~s_2}
740\end{multicols}
741
\infrule[\verb+tal_base_not_return+]
 {s_1\exec s_2 \andalso
  s_1\class\{\verb+cl_jump+, \verb+cl_other+\}\andalso
  \ell~s_2
 }
 {\texttt{TAL}~false~s_1~s_2}

\infrule[\verb+tal_base_return+]
 {s_1\exec s_2 \andalso
  s_1 \class \texttt{cl\_return}
 }
 {\texttt{TAL}~true~s_1~s_2}

\infrule[\verb+tal_base_call+]
 {s_1\exec s_2 \andalso
  s_1 \class \texttt{cl\_call} \andalso
  s_1\ar s_3 \andalso
  \texttt{TLR}~s_2~s_3 \andalso
  \ell~s_3
 }
 {\texttt{TAL}~false~s_1~s_3}

\infrule[\verb+tal_step_call+]
 {s_1\exec s_2 \andalso
  s_1 \class \texttt{cl\_call} \andalso
  s_1\ar s_3 \andalso
  \texttt{TLR}~s_2~s_3 \andalso
  \lnot \ell~s_3 \andalso
  \texttt{TAL}~b~s_3~s_4
 }
 {\texttt{TAL}~b~s_1~s_4}

\infrule[\verb+tal_step_default+]
 {s_1\exec s_2 \andalso
  \lnot \ell~s_2 \andalso
  \texttt{TAL}~b~s_2~s_3\andalso
  s_1 \class \texttt{cl\_other}
 }
 {\texttt{TAL}~b~s_1~s_3}
\begin{comment}
\begin{verbatim}
inductive trace_label_return (S:abstract_status) : S → S → Type[0] ≝
  | tlr_base:
      ∀status_before: S.
      ∀status_after: S.
        trace_label_label S ends_with_ret status_before status_after →
        trace_label_return S status_before status_after
  | tlr_step:
      ∀status_initial: S.
      ∀status_labelled: S.
      ∀status_final: S.
        trace_label_label S doesnt_end_with_ret status_initial status_labelled →
        trace_label_return S status_labelled status_final →
          trace_label_return S status_initial status_final
with trace_label_label: trace_ends_with_ret → S → S → Type[0] ≝
  | tll_base:
      ∀ends_flag: trace_ends_with_ret.
      ∀start_status: S.
      ∀end_status: S.
        trace_any_label S ends_flag start_status end_status →
        as_costed S start_status →
          trace_label_label S ends_flag start_status end_status
with trace_any_label: trace_ends_with_ret → S → S → Type[0] ≝
  (* Single steps within a function which reach a label.
     Note that this is the only case applicable for a jump. *)
  | tal_base_not_return:
      ∀start_status: S.
      ∀final_status: S.
        as_execute S start_status final_status →
        (as_classifier S start_status cl_jump ∨
         as_classifier S start_status cl_other) →
        as_costed S final_status →
          trace_any_label S doesnt_end_with_ret start_status final_status
  | tal_base_return:
      ∀start_status: S.
      ∀final_status: S.
        as_execute S start_status final_status →
        as_classifier S start_status cl_return →
          trace_any_label S ends_with_ret start_status final_status
  (* A call followed by a label on return. *)
  | tal_base_call:
      ∀status_pre_fun_call: S.
      ∀status_start_fun_call: S.
      ∀status_final: S.
        as_execute S status_pre_fun_call status_start_fun_call →
        ∀H:as_classifier S status_pre_fun_call cl_call.
          as_after_return S «status_pre_fun_call, H» status_final →
          trace_label_return S status_start_fun_call status_final →
          as_costed S status_final →
            trace_any_label S doesnt_end_with_ret status_pre_fun_call status_final
  (* A call followed by a non-empty trace. *)
  | tal_step_call:
      ∀end_flag: trace_ends_with_ret.
      ∀status_pre_fun_call: S.
      ∀status_start_fun_call: S.
      ∀status_after_fun_call: S.
      ∀status_final: S.
        as_execute S status_pre_fun_call status_start_fun_call →
        ∀H:as_classifier S status_pre_fun_call cl_call.
          as_after_return S «status_pre_fun_call, H» status_after_fun_call →
          trace_label_return S status_start_fun_call status_after_fun_call →
          ¬ as_costed S status_after_fun_call →
          trace_any_label S end_flag status_after_fun_call status_final →
            trace_any_label S end_flag status_pre_fun_call status_final
  | tal_step_default:
      ∀end_flag: trace_ends_with_ret.
      ∀status_pre: S.
      ∀status_init: S.
      ∀status_end: S.
        as_execute S status_pre status_init →
        trace_any_label S end_flag status_init status_end →
        as_classifier S status_pre cl_other →
        ¬ (as_costed S status_init) →
          trace_any_label S end_flag status_pre status_end.
\end{verbatim}
\end{comment}
A \texttt{trace\_label\_return} is isomorphic to a list of
non-return-terminated \texttt{trace\_label\_label}s followed by a
return-terminated \texttt{trace\_label\_label}.
The interesting cases are those of $\texttt{trace\_any\_label}~b~s_1~s_2$.
A \texttt{trace\_any\_label} is a sequence of steps built by a syntax-directed
definition on the classification of $s_1$. The constructors of the datatype
impose several invariants that are meant to impose structure on the
otherwise unstructured execution. In particular, the following invariants are
imposed:
\begin{enumerate}
 \item the trace is never empty; it ends with a return iff $b$ is
       true;
 \item a jump must always be the last instruction of the trace, and it must
       be followed by a cost emission statement; i.e. the target of a jump
       is always the beginning of a new basic block and as such it must start
       with a cost emission statement;
 \item a cost emission statement can never occur inside the trace, only in
       the status immediately after it;
 \item the trace for a function call step is made of a subtrace for the
       function body of type
       $\texttt{trace\_label\_return}~s_1~s_2$, possibly followed by the
       rest of the trace for this basic block. The subtrace represents the
       function execution. Being an inductive datum, it guarantees termination
       of the function call. The status $s_2$ is the one that follows the return
       statement. The next instruction of $s_2$ must follow the function call
       instruction. As a consequence, function calls are also well nested.
\end{enumerate}

There are three mutually recursive functions defined by structural recursion,
one for each of
\verb+TLR+, \verb+TLL+ and \verb+TAL+, for which we use the same notation
$|\,.\,|$: the \emph{flattening} of the traces. These functions
allow one to extract from a structured trace the list of emitted cost labels.
%  We only show here the type of one
% of them:
% \begin{alltt}
% flatten_trace_label_return:
%  $$\forall$$S: abstract_status. $$\forall$$$$s_1,s_2$$.
%   trace_label_return $$s_1$$ $$s_2$$ $$\to$$ list (as_cost_label S)
% \end{alltt}

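Over a simplified nested representation of structured traces (our own
encoding, not the Matita one: a TLR is a non-empty list of TLLs, a TLL is a
label paired with a list of TAL steps, and a call step carries the TLR of the
callee's body; states and proofs are omitted), flattening is the obvious
recursion that collects labels in order, descending into call subtraces:

```python
# Simplified nested traces (illustrative encoding, proofs/states omitted).
def flatten_tlr(tlr):
    """|tlr|: the list of cost labels emitted along a TLR."""
    return [lab for tll in tlr for lab in flatten_tll(tll)]

def flatten_tll(tll):
    label, tal = tll          # a TLL starts with exactly one label emission
    return [label] + flatten_tal(tal)

def flatten_tal(tal):
    out = []
    for step in tal:
        if step[0] == "call":            # ("call", f, body_tlr)
            out += flatten_tlr(step[2])  # labels emitted by the callee's body
        # "other", "jump" and "ret" steps emit no label
    return out

body = [("L2", [("ret",)])]                       # callee: one block, then return
main = [("L1", [("call", "f", body), ("ret",)])]  # caller emits L1, calls f
assert flatten_tlr(main) == ["L1", "L2"]
```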
\paragraph{Cost prediction on structured traces.}

The first main theorem of CerCo about traces
(theorem \texttt{compute\_max\_trace\_label\_return\_cost\_ok\_with\_trace})
holds for the
instantiation
of the structured traces to the concrete status of object code programs.
Simplifying a bit, it states that
\begin{equation}\label{th1}
\begin{array}{l}\forall s_1,s_2. \forall \tau: \texttt{TLR}~s_1~s_2.~
  \texttt{clock}~s_2 = \texttt{clock}~s_1 +
  \Sigma_{\alpha \in |\tau|}\;k(\alpha)
\end{array}
\end{equation}
where the cost model $k$ is statically computed from the object code
by associating to each label $\alpha$ the sum of the costs of the instructions
in the basic block that starts at $\alpha$ and ends before the next labelled
instruction. The theorem is proved by structural induction over the structured
trace, and is based on the invariant that
if the function that computes the cost model has analysed the instruction
to be executed at $s_2$ after the one to be executed at $s_1$, and if
the structured trace starts with $s_1$, then eventually it will contain also
$s_2$. When $s_1$ is not a function call, the result holds trivially because
of the $s_1\exec s_2$ condition obtained by inversion on
the trace. The only non-trivial
case is the one of function calls: the cost model computation function
does recursion on the first instruction that follows the function call; the
\texttt{as\_after\_return} condition of the \texttt{tal\_base\_call} and
\texttt{tal\_step\_call} rules grants exactly that the execution will eventually
reach this state.
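
Theorem~\eqref{th1} can be illustrated on a toy straight-line run with
hypothetical cycle counts (a sketch of ours, not CerCo code): the clock
accumulated by executing every instruction equals the sum of the statically
computed block costs $k$ over the emitted labels.

```python
# Each instruction: (label opening a new basic block or None, cost in cycles).
# Costs and labels are hypothetical.
run = [("L1", 2), (None, 3), ("L2", 1), (None, 4), ("L1", 2), (None, 3)]

# Static cost model: k(alpha) = total cost of the block starting at alpha.
k = {"L1": 5, "L2": 5}

clock  = sum(cost for _, cost in run)                 # dynamic execution time
labels = [lab for lab, _ in run if lab is not None]   # the flattened trace
assert clock == sum(k[a] for a in labels)             # theorem (1), instantiated
```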
927
\paragraph{Structured traces similarity and cost prediction invariance.}

A compiler pass maps source to object code and initial states to initial
states. The source code and initial state uniquely determine the structured
trace of a program, if it exists. The structured trace fails to exist iff
the structural conditions are violated by the program execution (e.g. a function
body does not start with a cost emission statement). Let us assume that the
target structured trace exists.

What is the relation between the source and target structured traces?
In general, the two traces can be arbitrarily different. However, we are
interested only in those compiler passes that map a trace $\tau_1$ to a trace
$\tau_2$ such that
\begin{equation}|\tau_1| = |\tau_2|.\label{th2}\end{equation}
The reason is that the combination of~\eqref{th1} with~\eqref{th2} yields the
corollary
\begin{equation}\label{th3}
\forall s_1,s_2. \forall \tau_1: \texttt{TLR}~s_1~s_2.~
  \texttt{clock}~s_2 - \texttt{clock}~s_1 =
  \Sigma_{\alpha \in |\tau_1|}\;k(\alpha) =
  \Sigma_{\alpha \in |\tau_2|}\;k(\alpha)
\end{equation}
where $\tau_2$ is the target trace corresponding to $\tau_1$.
This corollary states that the actual execution time of the program can be computed equally well on the source or target language. Thus it becomes possible to
transfer the cost model from the target to the source code and reason on the
source code only.

We are therefore interested in conditions stronger than~\eqref{th2}.
Hence we introduce here a similarity relation between traces with
the same structure. Theorem~\texttt{tlr\_rel\_to\_traces\_same\_flatten}
in the Matita formalisation shows that~\eqref{th2} holds for every pair
$(\tau_1,\tau_2)$ of similar traces.

Intuitively, two traces are similar when one can be obtained from
the other by erasing or inserting silent steps, i.e. states that are
not \texttt{as\_costed} and that are classified as \texttt{cl\_other}.
Silent steps do not alter the structure of the traces.
In particular,
the relation maps function calls to calls of the same function,
label emission statements to emissions of the same label, concatenation of
subtraces to concatenation of subtraces of the same length starting with
the same emission statement, etc.
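
The erasure intuition can be sketched as follows: on a flat list of
observables (a deliberate simplification of ours that ignores the nesting of
subtraces), two runs are candidates for similarity exactly when they coincide
after dropping silent steps.

```python
def erase_silent(events):
    """Keep only the observable steps: label emissions, calls and returns.
    ("tau",) models a silent step: cl_other and not as_costed."""
    return [e for e in events if e[0] != "tau"]

t1 = [("emit", "L"), ("tau",), ("call", "f"), ("tau",), ("ret",)]
t2 = [("emit", "L"), ("call", "f"), ("ret",), ("tau",)]
assert erase_silent(t1) == erase_silent(t2)   # similar up to silent steps
```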

In the formalisation the three similarity relations --- one for each trace
kind --- are defined by structural recursion on the first trace and pattern
matching over the second. Here we turn
the definition into the inference rules shown in \autoref{fig:txx_rel}
for the sake of readability. We also omit from the trace constructors all
arguments but those that are traces or that
are used in the premises of the rules. By abuse of notation we denote all three
relations by the infix symbol $\approx$.

\begin{figure}
\begin{multicols}{2}
\infrule
 {tll_1\approx tll_2
 }
 {\texttt{tlr\_base}~tll_1 \approx \texttt{tlr\_base}~tll_2}

\infrule
 {tll_1 \approx tll_2 \andalso
  tlr_1 \approx tlr_2
 }
 {\texttt{tlr\_step}~tll_1~tlr_1 \approx \texttt{tlr\_step}~tll_2~tlr_2}
\end{multicols}
\vspace{3ex}
\begin{multicols}{2}
\infrule
 {\ell~s_1 = \ell~s_2 \andalso
  tal_1\approx tal_2
 }
 {\texttt{tll\_base}~s_1~tal_1 \approx \texttt{tll\_base}~s_2~tal_2}

\infrule
 {tal_1\approx tal_2
 }
 {\texttt{tal\_step\_default}~tal_1 \approx tal_2}
\end{multicols}
\vspace{3ex}
\infrule
 {}
 {\texttt{tal\_base\_not\_return}\approx taa \append \texttt{tal\_base\_not\_return}}
\vspace{1ex}
\infrule
 {}
 {\texttt{tal\_base\_return}\approx taa \append \texttt{tal\_base\_return}}
\vspace{1ex}
\infrule
 {tlr_1\approx tlr_2 \andalso
  s_1 \uparrow f \andalso s_2\uparrow f
 }
 {\texttt{tal\_base\_call}~s_1~tlr_1\approx taa \append \texttt{tal\_base\_call}~s_2~tlr_2}
\vspace{1ex}
\infrule
 {tlr_1\approx tlr_2 \andalso
  s_1 \uparrow f \andalso s_2\uparrow f \andalso
  \texttt{tal\_collapsable}~tal_2
 }
 {\texttt{tal\_base\_call}~s_1~tlr_1 \approx taa \append \texttt{tal\_step\_call}~s_2~tlr_2~tal_2}
\vspace{1ex}
\infrule
 {tlr_1\approx tlr_2 \andalso
  s_1 \uparrow f \andalso s_2\uparrow f \andalso
  \texttt{tal\_collapsable}~tal_1
 }
 {\texttt{tal\_step\_call}~s_1~tlr_1~tal_1 \approx taa \append \texttt{tal\_base\_call}~s_2~tlr_2}
\vspace{1ex}
\infrule
 {tlr_1 \approx tlr_2 \andalso
  s_1 \uparrow f \andalso s_2\uparrow f\andalso
  tal_1 \approx tal_2
 }
 {\texttt{tal\_step\_call}~s_1~tlr_1~tal_1 \approx taa \append \texttt{tal\_step\_call}~s_2~tlr_2~tal_2}
\caption{The inference rules for the relation $\approx$.}
\label{fig:txx_rel}
\end{figure}
%
\begin{comment}
\begin{verbatim}
let rec tlr_rel S1 st1 st1' S2 st2 st2'
  (tlr1 : trace_label_return S1 st1 st1')
  (tlr2 : trace_label_return S2 st2 st2') on tlr1 : Prop ≝
match tlr1 with
  [ tlr_base st1 st1' tll1 ⇒
    match tlr2 with
    [ tlr_base st2 st2' tll2 ⇒ tll_rel … tll1 tll2
    | _ ⇒ False
    ]
  | tlr_step st1 st1' st1'' tll1 tl1 ⇒
    match tlr2 with
    [ tlr_step st2 st2' st2'' tll2 tl2 ⇒
      tll_rel … tll1 tll2 ∧ tlr_rel … tl1 tl2
    | _ ⇒ False
    ]
  ]
and tll_rel S1 fl1 st1 st1' S2 fl2 st2 st2'
 (tll1 : trace_label_label S1 fl1 st1 st1')
 (tll2 : trace_label_label S2 fl2 st2 st2') on tll1 : Prop ≝
  match tll1 with
  [ tll_base fl1 st1 st1' tal1 H ⇒
    match tll2 with
    [ tll_base fl2 st2 st2 tal2 G ⇒
      as_label_safe … («?, H») = as_label_safe … («?, G») ∧
      tal_rel … tal1 tal2
    ]
  ]
and tal_rel S1 fl1 st1 st1' S2 fl2 st2 st2'
 (tal1 : trace_any_label S1 fl1 st1 st1')
 (tal2 : trace_any_label S2 fl2 st2 st2')
   on tal1 : Prop ≝
  match tal1 with
  [ tal_base_not_return st1 st1' _ _ _ ⇒
    fl2 = doesnt_end_with_ret ∧
    ∃st2mid,taa,H,G,K.
    tal2 ≃ taa_append_tal ? st2 ??? taa
      (tal_base_not_return ? st2mid st2' H G K)
  | tal_base_return st1 st1' _ _ ⇒
    fl2 = ends_with_ret ∧
    ∃st2mid,taa,H,G.
    tal2 ≃ taa_append_tal ? st2 ? st2mid st2' taa
      (tal_base_return ? st2mid st2' H G)
  | tal_base_call st1 st1' st1'' _ prf _ tlr1 _ ⇒
    fl2 = doesnt_end_with_ret ∧
    ∃st2mid,G.as_call_ident S2 («st2mid, G») = as_call_ident ? «st1, prf» ∧
    ∃taa : trace_any_any ? st2 st2mid.∃st2mid',H.
    (* we must allow a tal_base_call to be similar to a call followed
      by a collapsable trace (trace_any_any followed by a base_not_return;
      we cannot use trace_any_any as it disallows labels in the end as soon
      as it is non-empty) *)
    (∃K.∃tlr2 : trace_label_return ? st2mid' st2'.∃L.
      tal2 ≃ taa @ (tal_base_call … H G K tlr2 L) ∧ tlr_rel … tlr1 tlr2) ∨
    ∃st2mid'',K.∃tlr2 : trace_label_return ? st2mid' st2mid''.∃L.
    ∃tl2 : trace_any_label … doesnt_end_with_ret st2mid'' st2'.
      tal2 ≃ taa @ (tal_step_call … H G K tlr2 L tl2) ∧
      tlr_rel … tlr1 tlr2 ∧ tal_collapsable … tl2
  | tal_step_call fl1 st1 st1' st1'' st1''' _ prf _ tlr1 _ tl1 ⇒
    ∃st2mid,G.as_call_ident S2 («st2mid, G») = as_call_ident ? «st1, prf» ∧
    ∃taa : trace_any_any ? st2 st2mid.∃st2mid',H.
    (fl2 = doesnt_end_with_ret ∧ ∃K.∃tlr2 : trace_label_return ? st2mid' st2'.∃L.
      tal2 ≃ taa @ tal_base_call … H G K tlr2 L ∧
      tal_collapsable … tl1 ∧ tlr_rel … tlr1 tlr2) ∨
    ∃st2mid'',K.∃tlr2 : trace_label_return ? st2mid' st2mid''.∃L.
    ∃tl2 : trace_any_label ? fl2 st2mid'' st2'.
      tal2 ≃ taa @ (tal_step_call … H G K tlr2 L tl2) ∧
      tal_rel … tl1 tl2 ∧ tlr_rel … tlr1 tlr2
  | tal_step_default fl1 st1 st1' st1'' _ tl1 _ _ ⇒
    tal_rel … tl1 tal2 (* <- this makes it many to many *)
  ].
\end{verbatim}
\end{comment}
%
In the preceding rules, a $taa$ is an inhabitant of
$\texttt{trace\_any\_any}~s_1~s_2$ (shorthand $\texttt{TAA}~s_1~s_2$),
an inductive data type whose definition
is omitted from the paper for lack of space. It is the type of valid
prefixes (even empty ones) of \texttt{TAL}'s that do not contain
any function call. Therefore it
is possible to concatenate (using $\append$) a \texttt{TAA} to the
left of a \texttt{TAL}. A \texttt{TAA} captures
a sequence of silent moves.
The \texttt{tal\_collapsable} unary predicate over \texttt{TAL}'s
holds when the argument does not contain any function call and it ends
with a label (not a return). The intuition is that after a function call we
can still perform a sequence of silent actions while remaining similar.

As should be expected, even though the rules are asymmetric, $\approx$ is in
fact an equivalence relation.
\section{Forward simulation}
\label{simulation}

We summarise here the results of the previous sections. Each intermediate
unstructured language can be given a semantics based on structured traces,
which single out those runs that respect a certain number of invariants.
A cost model can be computed on the object code and it can be used to predict
the execution costs of runs that produce structured traces. The cost model
can be lifted from the target to the source code of a pass if the pass maps
structured traces to similar structured traces. The latter property is called
a \emph{forward simulation}.

As for labelled transition systems, in order to establish the forward
simulation we are interested in (i.e. preservation of observables), we are
forced to prove a stronger notion of forward simulation that introduces
an explicit relation between states. The classical notion of a 1-to-many
forward simulation is the existence of a relation $\mathcal{S}$ over states such that
if $s_1 \mathcal{S} s_2$ and $s_1 \to^1 s_1'$ then there exists an $s_2'$ such that
$s_2 \to^* s_2'$ and $s_1' \mathcal{S} s_2'$. In our context, we need to replace the
one and multi step transition relations $\to^n$ with the existence of
a structured trace between the two states, and we need to add the requirement
that the two structured traces are similar. Thus what we would like to state is
something like:\\
for all $s_1,s_2,s_1'$ such that there is a $\tau_1$ from
$s_1$ to $s_1'$ and $s_1 \mathcal{S} s_2$, there exist an $s_2'$ such that
$s_1' \mathcal{S} s_2'$ and a $\tau_2$ from $s_2$ to $s_2'$ such that
$\tau_1$ is similar to $\tau_2$. We call this particular form of forward
simulation \emph{trace reconstruction}.
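
Spelled out with the notation of the previous section, the desired statement
reads (a paraphrase of the prose above, not the literal Matita statement):
\[
\forall s_1,s_2,s_1'.~\forall \tau_1 : \texttt{TLR}~s_1~s_1'.~
 s_1 \mathcal{S} s_2 \Rightarrow
 \exists s_2'.~\exists \tau_2 : \texttt{TLR}~s_2~s_2'.~
 s_1' \mathcal{S} s_2' \wedge \tau_1 \approx \tau_2
\]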

The statement just introduced, however, is too simplistic and not provable
in the general case. To understand why, consider the case of a function call
and the pass that fixes the parameter passing conventions. A function
call in the source code takes as input an arbitrary number of pseudo-registers (the actual parameters to pass) and returns an arbitrary number of pseudo-registers (where the result is stored). A function call in the target language has no
input nor output parameters. The pass must add explicit code before and after
the function call to move the pseudo-registers content from/to the hardware
registers or the stack in order to implement the parameter passing strategy.
Similarly, each function body must be augmented with a preamble and a postamble
to complete/initiate the parameter passing strategy for the call/return phase.
Therefore what used to be a call followed by the next instruction to execute
after the function return now becomes a sequence of instructions, followed by
a call, followed by another sequence. The two states at the beginning of the
first sequence and at the end of the second sequence are in relation with
the states before/after the call in the source code, as in a usual forward
simulation. However, how can we prove the additional condition for function
calls that asks that when the function returns the instruction immediately
after the function call is executed? To grant this invariant, there must be
another relation between the addresses of the function call in the source and
in the target code.
This additional relation is to be used in particular to relate the two stacks.

Another example is given by preservation of code emission statements. A single
code emission instruction can be simulated by a sequence of steps, followed
by a code emission, followed by another sequence. Clearly the initial and final
states of the sequence are to be in relation with the states before/after the
code emission in the source code. In order to preserve the structured traces
invariants, however, we must consider a second relation between states that
tracks the preservation of the code emission statement.

Therefore we now introduce an abstract notion of relation set between abstract
statuses and an abstract notion of 1-to-many forward simulation conditions.
These two definitions enjoy the following remarkable properties:
\begin{enumerate}
 \item they are generic enough to accommodate all passes of the CerCo compiler;
 \item the conjunction of the 1-to-many forward simulation conditions is
       just slightly stricter than the statement of a 1-to-many forward
       simulation in the classical case. In particular, they only require
       the construction of very simple forms of structured traces made of
       silent states only;
 \item they allow us to prove the main result of the paper: the 1-to-many
       forward simulation conditions are sufficient to prove the trace
       reconstruction theorem.
\end{enumerate}

Point~3 is the important one. First of all it means that we have reduced
the complex problem of trace reconstruction to a much simpler one that,
moreover, can be solved with slight adaptations of the forward simulation proof
that is performed for a compiler that only cares about functional properties.
Therefore we have successfully separated as much as possible the proof of
preservation of functional properties from that of non-functional ones.
Secondly, combined with the results in the previous section, it implies
that the cost model can be computed on the object code and lifted to the
source code to reason on non-functional properties, assuming that
the 1-to-many forward simulation conditions are fulfilled for every
compiler pass.

\paragraph{Relation sets.}

We now introduce the four relations $\mathcal{S,C,L,R}$ between abstract
statuses that are used to correlate the corresponding statuses before and
after a compiler pass. The first two are abstract and must be instantiated
by every pass. The remaining two are derived relations.

The $\S$ relation between states is the classical relation used
in forward simulation proofs. It correlates the data of the status
(e.g. registers, memory, etc.).

The $\C$ relation correlates call states. It allows us to track the
position in the target code of every call in the source code.

The $\L$ relation simply says that the two states are both label
emitting states that emit the same label, \emph{i.e.}\ $s_1\L s_2\iffdef \ell~s_1=\ell~s_2$.
It allows us to track the position in
the target code of every cost emitting statement in the source code.

Finally, the $\R$ relation is the most complex one. Two states
$s_1$ and $s_2$ are $\R$-correlated if, every time $s_1$ is the
successor of a call state $s_1'$ that is $\C$-related to a call state
$s_2'$ in the target code, then $s_2$ is the successor of $s_2'$. Formally:
$$s_1\R s_2 \iffdef \forall s_1',s_2'.s_1'\C s_2' \to s_1'\ar s_1 \to s_2' \ar s_2.$$
We will require all pairs of states that follow a related call to be
$\R$-related. This is the fundamental requirement granting
that the target trace is well structured, \emph{i.e.}\ that calls are well
nested and return where they are supposed to.

% \begin{alltt}
% record status_rel (S1,S2 : abstract_status) : Type[1] := \{
%   $$\S$$: S1 $$\to$$ S2 $$\to$$ Prop;
%   $$\C$$: ($$\Sigma$$s.as_classifier S1 s cl_call) $$\to$$
%      ($$\Sigma$$s.as_classifier S2 s cl_call) $$\to$$ Prop \}.
%
% definition $$\L$$ S1 S2 st1 st2 := as_label S1 st1 = as_label S2 st2.
%
% definition $$\R$$ S1 S2 (R: status_rel S1 S2) s1_ret s2_ret ≝
%  $$\forall$$s1_pre,s2_pre.
%   as_after_return s1_pre s1_ret $$\to$$ s1_pre $$\R$$ s2_pre $$\to$$
%    as_after_return s2_pre s2_ret.
% \end{alltt}

\begin{figure}
\centering
\begin{tabular}{@{}c@{}c@{}c@{}}
% \begin{subfigure}{.475\linewidth}
% \centering
% \begin{tikzpicture}[every join/.style={ar}, join all, thick,
%                             every label/.style=overlay, node distance=10mm]
%     \matrix [diag] (m) {%
%          \node (s1) [is jump] {}; & \node [fill=white] (t1) {};\\
%          \node (s2) {}; & \node (t2) {}; \\
%     };
%     \node [above=0 of t1, overlay] {$\alpha$};
%     {[-stealth]
%     \draw (s1) -- (t1);
%     \draw [new] (s2) -- node [above] {$*$} (t2);
%     }
%     \draw (s1) to node [rel] {$\S$} (s2);
%     \draw [new] (t1) to node [rel] {$\S,\L$} (t2);
% \end{tikzpicture}
% \caption{The \texttt{cl\_jump} case.}
% \label{subfig:cl_jump}
% \end{subfigure}
% &
\begin{subfigure}{.25\linewidth}
\centering
\begin{tikzpicture}[every join/.style={ar}, join all, thick,
                            every label/.style=overlay, node distance=10mm]
    \matrix [diag] (m) {%
         \node (s1) {}; & \node (t1) {};\\
         \node (s2) {}; & \node (t2) {}; \\
    };
    {[-stealth]
    \draw (s1) -- (t1);
    \draw [new] (s2) -- node [above] {$*$} (t2);
    }
    \draw (s1) to node [rel] {$\S$} (s2);
    \draw [new] (t1) to node [rel] {$\S,\L$} (t2);
\end{tikzpicture}
\caption{The \texttt{cl\_other} and \texttt{cl\_jump} cases.}
\label{subfig:cl_other_jump}
\end{subfigure}
&
\begin{subfigure}{.375\linewidth}
\centering
\begin{tikzpicture}[every join/.style={ar}, join all, thick,
                            every label/.style=overlay, node distance=10mm]
    \matrix [diag, small vgap] (m) {%
        \node (t1) {}; \\
         \node (s1) [is call] {}; \\
         & \node (l) {}; & \node (t2) {};\\
         \node (s2) {}; & \node (c) [is call] {};\\
    };
    {[-stealth]
    \draw (s1) -- node [left] {$f$} (t1);
    \draw [new] (s2) -- node [above] {$*$} (c);
    \draw [new] (c) -- node [right] {$f$} (l);
    \draw [new] (l) -- node [above] {$*$} (t2);
    }
    \draw (s1) to node [rel] {$\S$} (s2);
    \draw [new] (t1) to [bend left] node [rel] {$\S$} (t2);
    \draw [new] (t1) to [bend left] node [rel] {$\L$} (l);
    \draw [new] (t1) to node [rel] {$\C$} (c);
    \end{tikzpicture}
\caption{The \texttt{cl\_call} case.}
\label{subfig:cl_call}
\end{subfigure}
&
\begin{subfigure}{.375\linewidth}
\centering
\begin{tikzpicture}[every join/.style={ar}, join all, thick,
                            every label/.style=overlay, node distance=10mm]
    \matrix [diag, small vgap] (m) {%
        \node (s1) [is ret] {}; \\
        \node (t1) {}; \\
        \node (s2) {}; & \node (c) [is ret] {};\\
        & \node (r) {}; & \node (t2) {}; \\
    };
    {[-stealth]
    \draw (s1) -- (t1);
    \draw [new] (s2) -- node [above] {$*$} (c);
    \draw [new] (c) -- (r);
    \draw [new] (r) -- node [above] {$*$} (t2);
    }
    \draw (s1) to [bend right=45] node [rel] {$\S$} (s2);
    \draw [new, overlay] (t1) to [bend left=90, looseness=1] node [rel] {$\S,\L$} (t2);
    \draw [new, overlay] (t1) to [bend left=90, looseness=1.2] node [rel] {$\R$} (r);
\end{tikzpicture}
\caption{The \texttt{cl\_return} case.}
\label{subfig:cl_return}
\end{subfigure}
\end{tabular}
\caption{Mnemonic diagrams depicting the hypotheses for the preservation of structured traces.
         Dashed lines
         and arrows indicate how the diagrams must be closed when solid relations
         are present.}
\label{fig:forwardsim}
\end{figure}

\paragraph{1-to-many forward simulation conditions.}
\begin{condition}[Cases \texttt{cl\_other} and \texttt{cl\_jump}]
 For all $s_1,s_1',s_2$ such that $s_1 \S s_2$, and
 $s_1\exec s_1'$, and either $s_1 \class \texttt{cl\_other}$ or
 both $s_1\class\texttt{cl\_jump}$ and $\ell~s_1'$,
 there exists an $s_2'$ and a $\texttt{trace\_any\_any\_free}~s_2~s_2'$ called $taaf$
 such that $s_1' \mathrel{{\S} \cap {\L}} s_2'$ and either
$taaf$ is non empty, or one among $s_1$ and $s_1'$ is not \texttt{as\_costed}.
\end{condition}

In the above condition, depicted in \autoref{subfig:cl_other_jump},
a $\texttt{trace\_any\_any\_free}~s_1~s_2$ (which from now on
will be abbreviated as \verb+TAAF+) is an
inductive type of structured traces that do not contain function calls or
cost emission statements. Unlike a \verb+TAA+, the
instruction to be executed in the lookahead state $s_2$ may be a cost emission
statement.

The intuition behind the condition is that one step can be replaced with zero or more steps if it
preserves the relation between the data and if the two final statuses are
labelled in the same way. Moreover, we must take special care of the empty case
to avoid collapsing two consecutive states that emit a label, which would miss one of the two emissions.

\begin{condition}[Case \texttt{cl\_call}]
 For all $s_1,s_1',s_2$ s.t. $s_1 \S s_2$ and
 $s_1\exec s_1'$ and $s_1 \class \texttt{cl\_call}$, there exist $s_a, s_b, s_2'$, a
$\verb+TAA+~s_2~s_a$, and a
$\verb+TAAF+~s_b~s_2'$ such that:
$s_a\class\texttt{cl\_call}$, the \texttt{as\_call\_ident}'s of
the two call states are the same, $s_1 \C s_a$,
$s_a\exec s_b$, $s_1' \L s_b$ and
$s_1' \S s_2'$.
\end{condition}

The condition, depicted in \autoref{subfig:cl_call}, says that, to simulate a function call, we can perform a
sequence of silent actions before and after the function call itself.
The old and new call states must be $\C$-related, the old and new
states at the beginning of the function execution must be $\L$-related
and, finally, the two initial and final states must be $\S$-related
as usual.

\begin{condition}[Case \texttt{cl\_return}]
 For all $s_1,s_1',s_2$ s.t. $s_1 \S s_2$,
 $s_1\exec s_1'$ and $s_1 \class \texttt{cl\_return}$, there exist $s_a, s_b, s_2'$, a
$\verb+TAA+~s_2~s_a$, and a
$\verb+TAAF+~s_b~s_2'$ called $taaf$ such that:
$s_a\class\texttt{cl\_return}$,
$s_a\exec s_b$,
$s_1' \R s_b$ and
$s_1' \mathrel{{\S} \cap {\L}} s_2'$ and, moreover, if
$taaf$ is non empty then $\lnot \ell~s_b$.
\end{condition}

Similarly to the call condition, to simulate a return we can perform a
sequence of silent actions before and after the return statement itself,
as depicted in \autoref{subfig:cl_return}.
The old and the new states after the return must be $\R$-related,
to guarantee that they return to corresponding calls.
The two initial and final states must be $\S$-related
as usual and, moreover, they must exhibit the same labels. Finally, when
the suffix is non empty we must take care not to insert a new
unmatched cost emission statement just after the return statement.

\begin{comment}
\begin{verbatim}
definition status_simulation ≝
  λS1 : abstract_status.
  λS2 : abstract_status.
  λsim_status_rel : status_rel S1 S2.
    ∀st1,st1',st2.as_execute S1 st1 st1' →
    sim_status_rel st1 st2 →
    match as_classify … st1 with
    [ None ⇒ True
    | Some cl ⇒
      match cl with
      [ cl_call ⇒ ∀prf.
        (*
             st1' ------------S----------\
              ↑ \                         \
             st1 \--L--\                   \
              | \       \                   \
              S  \-C-\  st2_after_call →taa→ st2'
              |       \     ↑
             st2 →taa→ st2_pre_call
        *)
        ∃st2_pre_call.
        as_call_ident ? st2_pre_call = as_call_ident ? («st1, prf») ∧
        call_rel ?? sim_status_rel «st1, prf» st2_pre_call ∧
        ∃st2_after_call,st2'.
        ∃taa2 : trace_any_any … st2 st2_pre_call.
        ∃taa2' : trace_any_any … st2_after_call st2'.
        as_execute … st2_pre_call st2_after_call ∧
        sim_status_rel st1' st2' ∧
        label_rel … st1' st2_after_call
      | cl_return ⇒
        (*
             st1
            / ↓
           | st1'----------S,L------------\
           S   \                           \
            \   \-----R-------\            |
             \                 |           |
             st2 →taa→ st2_ret |           |
                          ↓   /            |
                     st2_after_ret →taaf→ st2'

           we also ask that st2_after_ret be not labelled if the taaf tail is
           not empty
        *)
        ∃st2_ret,st2_after_ret,st2'.
        ∃taa2 : trace_any_any … st2 st2_ret.
        ∃taa2' : trace_any_any_free … st2_after_ret st2'.
        (if taaf_non_empty … taa2' then ¬as_costed … st2_after_ret else True) ∧
        as_classifier … st2_ret cl_return ∧
        as_execute … st2_ret st2_after_ret ∧ sim_status_rel st1' st2' ∧
        ret_rel … sim_status_rel st1' st2_after_ret ∧
        label_rel … st1' st2'
      | cl_other ⇒
          (*
          st1 → st1'
            |      \
            S      S,L
            |        \
           st2 →taaf→ st2'

           the taaf can be empty (e.g. tunneling) but we ask it must not be the
           case when both st1 and st1' are labelled (we would be able to collapse
           labels otherwise)
         *)
        ∃st2'.
        ∃taa2 : trace_any_any_free … st2 st2'.
        (if taaf_non_empty … taa2 then True else (¬as_costed … st1 ∨ ¬as_costed … st1')) ∧
        sim_status_rel st1' st2' ∧
        label_rel … st1' st2'
      | cl_jump ⇒
        (* just like cl_other, but with a hypothesis more *)
        as_costed … st1' →
        ∃st2'.
        ∃taa2 : trace_any_any_free … st2 st2'.
        (if taaf_non_empty … taa2 then True else (¬as_costed … st1 ∨ ¬as_costed … st1')) ∧
        sim_status_rel st1' st2' ∧
        label_rel … st1' st2'
      ]
    ].
\end{verbatim}
\end{comment}

\paragraph{Main result: the 1-to-many forward simulation conditions
are sufficient for trace reconstruction.}

Let us assume that a relation set is given such that the 1-to-many
forward simulation conditions are satisfied. Under this assumption we
can prove the following three trace reconstruction theorems by mutual
structural induction over the traces given in input between the
$s_1$ and $s_1'$ states.

In particular, the \texttt{status\_simulation\_produce\_tlr} theorem,
applied to the \texttt{main} function of the program with the
$s_{2_b}$ and $s_2$ states taken to be equal, shows that, for every initial state in the
source code that induces a structured trace in the source code,
the compiled code produces a similar structured trace.

\begin{theorem}[\texttt{status\_simulation\_produce\_tlr}]
For every $s_1,s_1',s_{2_b},s_2$ s.t.
there is a $\texttt{TLR}~s_1~s_1'$ called $tlr_1$ and a
$\verb+TAA+~s_{2_b}~s_2$ and $s_1 \L s_{2_b}$ and
$s_1 \S s_2$, there exist $s_{2_m},s_2'$ s.t.
there is a $\texttt{TLR}~s_{2_b}~s_{2_m}$ called $tlr_2$ and
there is a $\verb+TAAF+~s_{2_m}~s_2'$ called $taaf$
s.t. if $taaf$ is non empty then $\lnot (\ell~s_{2_m})$,
and $tlr_1\approx tlr_2$
and $s_1' \mathrel{{\S} \cap {\L}} s_2'$ and
$s_1' \R s_{2_m}$.
\end{theorem}

The theorem states that a \texttt{trace\_label\_return} in the source code
together with a precomputed preamble of silent states
(the \verb+TAA+) in the target code induces a
similar \texttt{trace\_label\_return} in the target code which can be
followed by a sequence of silent states. Note that the statement does not
require the final sequence of silent states to become the next
precomputed preamble, even if this is likely to be the case in concrete
implementations. The preamble in input is necessary for compositionality, e.g.
because the 1-to-many forward simulation conditions allow in the
case of function calls to execute a preamble of silent instructions just after
the function call.

Clearly, similar results are also available for the other two types of structured
traces (in fact, they are all proved simultaneously by mutual induction).
% \begin{theorem}[\texttt{status\_simulation\_produce\_tll}]
% For every $s_1,s_1',s_{2_b},s_2$ s.t.
% there is a $\texttt{TLL}~b~s_1~s_1'$ called $tll_1$ and a
% $\verb+TAA+~s_{2_b}~s_2$ and $s_1 \L s_{2_b}$ and
% $s_1 \S s_2$, there exists $s_{2_m},s_2'$ s.t.
% \begin{itemize}
%  \item if $b$ (the trace ends with a return) then there exists $s_{2_m},s_2'$
%        and a trace $\texttt{TLL}~b~s_{2_b}~s_{2_m}$ called $tll_2$
%        and a $\texttt{TAAF}~s_{2_m}~s_2'$ called $taa_2$ s.t.
%        $s_1' \mathrel{{\S} \cap {\L}} s_2'$ and
%        $s_1' \R s_{2_m}$ and
%        $tll_1\approx tll_2$ and
%        if $taa_2$ is non empty then $\lnot \ell~s_{2_m}$;
%  \item else there exists $s_2'$ and a
%        $\texttt{TLL}~b~s_{2_b}~s_2'$ called $tll_2$ such that
%        $s_1' \mathrel{{\S} \cap {\L}} s_2'$ and
%        $tll_1\approx tll_2$.
% \end{itemize}
% \end{theorem}
%
% The statement is similar to the previous one: a source
% \texttt{trace\_label\_label} and a given target preamble of silent states
% in the target code induce a similar \texttt{trace\_label\_label} in the
% target code, possibly followed by a sequence of silent moves that become the
% preamble for the next \texttt{trace\_label\_label} translation.
%
% \begin{theorem}[\texttt{status\_simulation\_produce\_tal}]
% For every $s_1,s_1',s_2$ s.t.
% there is a $\texttt{TAL}~b~s_1~s_1'$ called $tal_1$ and
% $s_1 \S s_2$
% \begin{itemize}
%  \item if $b$ (the trace ends with a return) then there exists $s_{2_m},s_2'$
%    and a trace $\texttt{TAL}~b~s_2~s_{2_m}$ called $tal_2$ and a
%    $\texttt{TAAF}~s_{2_m}~s_2'$ called $taa_2$ s.t.
%    $s_1' \mathrel{{\S} \cap {\L}} s_2'$ and
%    $s_1' \R s_{2_m}$ and
%    $tal_1 \approx tal_2$ and
%    if $taa_2$ is non empty then $\lnot \ell~s_{2_m}$;
%  \item else there exists $s_2'$ and a
%    $\texttt{TAL}~b~s_2~s_2'$ called $tal_2$ such that
%    either $s_1' \mathrel{{\S} \cap {\L}} s_2'$ and
%        $tal_1\approx tal_2$
%    or $s_1' \mathrel{{\S} \cap {\L}} s_2$ and
%    $\texttt{tal\_collapsable}~tal_1$ and $\lnot \ell~s_1$.
% \end{itemize}
% \end{theorem}
%
% The statement is also similar to the previous ones, but for the lack of
% the target code preamble.

\begin{comment}
\begin{corollary}
For every $s_1,s_1',s_2$ s.t.
there is a $\texttt{trace\_label\_return}~s_1~s_1'$ called $tlr_1$ and
$s_1 (\L \cap \S) s_2$
there exists $s_{2_m},s_2'$ s.t.
there is a $\texttt{trace\_label\_return}~s_2~s_{2_m}$ called $tlr_2$ and
there is a $\texttt{trace\_any\_any\_free}~s_{2_m}~s_2'$ called $taaf$
s.t. if $taaf$ is non empty then $\lnot (\texttt{as\_costed}~s_{2_m})$,
and $\texttt{tlr\_rel}~tlr_1~tlr_2$
and $s_1' (\S \cap \L) s_2'$ and
$s_1' \R s_{2_m}$.
\end{corollary}
\end{comment}

\begin{comment}
\begin{verbatim}
status_simulation_produce_tlr S1 S2 R
(* we start from this situation
     st1 →→→→tlr→→→→ st1'
      | \
      L  \---S--\
      |          \
   st2_lab →taa→ st2   (the taa preamble is in general either empty or given
                        by the preceding call)

   and we produce
     st1 →→→→tlr→→→→ st1'
             \\      /  \
             //     R    \-L,S-\
             \\     |           \
   st2_lab →tlr→ st2_mid →taaf→ st2'
*)
  st1 st1' st2_lab st2
  (tlr1 : trace_label_return S1 st1 st1')
  (taa2_pre : trace_any_any S2 st2_lab st2)
  (sim_execute : status_simulation S1 S2 R)
  on tlr1 : R st1 st2 → label_rel … st1 st2_lab →
  ∃st2_mid.∃st2'.
  ∃tlr2 : trace_label_return S2 st2_lab st2_mid.
  ∃taa2 : trace_any_any_free … st2_mid st2'.
  (if taaf_non_empty … taa2 then ¬as_costed … st2_mid else True) ∧
  R st1' st2' ∧ ret_rel … R st1' st2_mid ∧ label_rel … st1' st2' ∧
  tlr_rel … tlr1 tlr2
\end{verbatim}
\end{comment}

\section{Conclusions and future work}
\label{conclusions}
The labelling approach is a technique to implement compilers that induce on
the source code a non-uniform cost model determined from the object code
produced. The cost model assigns a cost to each basic block of the program.
The main theorem of the approach says that there is an exact
correspondence between the sequences of basic blocks started in the source
and object code, and that no instruction in the source or object code is
executed outside a basic block. Thus the cost of object code execution
can be computed precisely on the source.

In this paper we scale the labelling approach to cover a programming language
with function calls. This introduces new difficulties only when the language
is unstructured, i.e. it allows function calls to return anywhere in the code,
destroying the hope of a static prediction of the cost of basic blocks.
We restore static predictability by introducing a new semantics for unstructured
programs that singles out well structured executions. The latter are represented
by structured traces, a generalisation of streams of observables that capture
several structural invariants of the execution, such as the well nesting of function
calls and the fact that every basic block must start with a code emission statement.
We show that structured traces are sufficiently structured to statically compute
a precise cost model on the object code.

We introduce a similarity relation on structured traces that must hold between
source and target traces. When the relation holds for every program, we prove
that the cost model can be lifted from the object to the source code.

In order to prove that similarity holds, we present a generic proof of forward
simulation that is aimed at pulling apart as much as possible the part of the
simulation related to non-functional properties (preservation of structure)
from that related to functional properties. In particular, we reduce the
problem of preservation of structure to that of showing a 1-to-many
forward simulation that only adds a few additional proof obligations to those
of a traditional proof that deals with functional properties only.

All results presented in the paper are part of a larger certification of a
C compiler which is based on the labelling approach. The certification, done
in Matita, is the main deliverable of the FET-Open project Certified Complexity (CerCo).

The short-term future work consists in completing the certification of
the CerCo compiler by exploiting the main theorem of this paper.

\paragraph{Related works.}
CerCo is the first project that explicitly tries to induce a
precise cost model on the source code in order to establish non-functional
properties of programs in a high-level language. Traditional certifications
of compilers, like~\cite{compcert2,piton}, only explicitly prove preservation
of the functional properties.

Usually forward simulations take the following form: for each transition
from $s_1$ to $s_2$ in the source code, there exists an equivalent sequence of
transitions in the target code of length $n$. The number $n$ of transition steps
in the target code can just be the witness of the existential statement.
An equivalent alternative, when the proof of simulation is constructive, consists
in providing an explicit function, called \emph{clock function} in the
literature~\cite{clockfunctions}, that computes $n$ from $s_1$. Every clock
function then constitutes a cost model for the source code, in the spirit of
what we are doing in CerCo. However, we believe our solution to be superior
in the following respects: 1) the machinery of the labelling approach is
insensitive to the resource being measured. Indeed, any cost model computed on
the object code can be lifted to the source code (e.g. stack space used,
energy consumed, etc.). On the contrary, clock functions only talk about
the number of transition steps. In order to extend the approach with clock functions
to other resources, additional functions must be introduced. Moreover, the
additional functions would be handled differently in the proof.
2) The cost models induced by the labelling approach have a simple presentation.
In particular, they associate a number to each basic block. More complex
models can be induced when the approach is scaled to cover, for instance,
loop optimisations~\cite{loopoptimizations}, but the costs are still meant to
be easy to understand and manipulate in an interactive theorem prover or
in Frama-C.
On the contrary, a clock function is a complex function of the state $s_1$
which, as a function, is an opaque object that is difficult to reify as
source code in order to reason on it.

\bibliographystyle{splncs03}
\bibliography{ccexec}

% \appendix
% \section{Notes for the reviewers}
%
% The results described in the paper are part of a larger formalization
% (the certification of the CerCo compiler). At the moment of the submission
% we need to single out from the CerCo formalization the results presented here.
% Before the 16-th of February we will submit an attachment that contains the
% minimal subset of the CerCo formalization that allows to prove those results.
% At that time it will also be possible to measure exactly the size of the
% formalization described here. At the moment a rough approximation suggests
% about 2700 lines of Matita code.
%
% We will also attach the development version of the interactive theorem
% prover Matita that compiles the submitted formalization. Another possibility
% is to backport the development to the last released version of the system
% to avoid having to re-compile Matita from scratch.
%
% The programming and certification style used in the formalization heavily
% exploits dependent types. Dependent types are used: 1) to impose invariants
% by construction on the data types and operations (e.g. a trace from a state
% $s_1$ to a state $s_2$ can be concatenated to a trace from a state
% $s_2'$ to a state $s_3$ only if $s_2$ is convertible with $s_2'$); 2)
% to state and prove the theorems by using the Russell methodology of
% Matthieu Sozeau\footnote{Subset Coercions in Coq in TYPES'06. Matthieu Sozeau. Thorsten Altenkirch and Conor McBride (Eds). Volume 4502 of Lecture Notes in Computer Science. Springer, 2007, pp.237-252.
% }, better known in the Coq world as ``\texttt{Program}'' and reimplemented in a simpler way in Matita using coercion propagations\footnote{Andrea Asperti, Wilmer Ricciotti, Claudio Sacerdoti Coen, Enrico Tassi: A Bi-Directional Refinement Algorithm for the Calculus of (Co)Inductive Constructions. Logical Methods in Computer Science 8(1) (2012)}. However, no result presented depends
% mandatorily on dependent types: it should be easy to adapt the technique
% and results presented in the paper to HOL.
%
% Finally, Matita and Coq are based on minor variations of the Calculus of
% (Co)Inductive Constructions. These variations do not affect the CerCo
% formalization. Therefore a porting of the proofs and ideas to Coq would be
% rather straightforward.

\end{document}