Timestamp: Jun 17, 2013, 6:48:16 PM
File: 1 edited
Legend: Unmodified / Added / Removed

Papers/itp2013/ccexec2.tex
--- r3360
+++ r3366
-\documentclass[bookmarksdepth=2,bookmarksopen]{llncs}
+\documentclass[bookmarksdepth=2,bookmarksopen,envcountsame]{llncs}
 \usepackage{hyperref}
 \usepackage{graphicx}
…
 % a builtin definition.
 %\newtheorem{theorem}{Theorem}
-\newtheorem{condition}{Condition}
+% \usepackage{aliascnt} %% this will make autoref give correct names
+% \def\mynewtheorem#1[#2]#3{%
+%   \newaliascnt{#1}{#2}%
+%   \newtheorem{#1}[#1]{#3}%
+%   \aliascntresetthe{#1}%
+%   \expandafter\def\csname #1autorefname\endcsname{#3}%
+% }
+% \let\proof\relax
+% \let\endproof\relax
+% \usepackage{amsthm}
+% theorem environments
+% \theoremstyle{definition}
+\spnewtheorem{condition}[theorem]{Condition}{\bfseries}{}
 
 \lstdefinelanguage{coq}
…
 \def\R{\mathrel{\mathcal R}}
 \def\C{\mathrel{\mathcal C}}
+\def\LS{\mathrel{\mathcal{L_S}}}
 
 \def\Labels{\mathbb L}
…
 \def\to{\@ifnextchar[{\new@to}{\oldto}}
 \def\new@to[#1]{\xrightarrow{#1}}
-\def\st@ck#1[#2]{\stackrel{#2}{#1}}
-\def\defst@ck#1#2{\def#1{\@ifnextchar[{\st@ck#2}{#2}}}
-\defst@ck\To\implies
-\defst@ck\MeasTo\rightsquigarrow
+% \def\st@ck#1[#2]{\stackrel{#2}{#1}}
+% \def\defst@ck#1#2{\def#1{\@ifnextchar[{\st@ck#2}{#2}}}
+\let\To\implies
+% \let\MeasTo\rightsquigarrow
 \makeatother
…
 part that deals with the preservation of structure.
 
-The plan of this paper is the following. In Section~\ref{labelling} we
+The plan of this paper is the following. In Section~\ref{sec:labelling} we
 sketch the labelling method and the problems derived from the application
-to languages with function calls. In Section~\ref{semantics} we introduce
+to languages with function calls. In Section~\ref{sec:semantics} we introduce
 a generic description of an unstructured imperative language and the
 corresponding structured traces (the novel semantics). In
-Section~\ref{simulation} we describe the forward simulation proof.
-Conclusions and future works are in Section~\ref{conclusions}.
+Section~\ref{sec:simulation} we describe the forward simulation proof.
+Conclusions and future works are in Section~\ref{sec:conclusions}.
 
 \section{The labelling approach}
-\label{labelling}
+\label{sec:labelling}
 % \subsection{A brief introduction to the labelling approach}
 We briefly explain the labelling approach as introduced in~\cite{easylabelling}
…
 stack). This property, however, is a property of the runs of object code
 programs, and not a property of the object code that can be easily statically
-verified (as the ones required in \autoref{labelling} in absence of function calls).
+verified (as the ones required in \autoref{sec:labelling} in absence of function calls).
 Therefore, we now need to single out those runs whose cost behaviour can be
 statically predicted, and we need to prove that every run of programs generated
…
 The proof of correctness of such a compiler is harder than a traditional
 proof of preservation of functional properties, and a standard forward
-simulation argument does not work. In \autoref{simulation} we present
+simulation argument does not work. In \autoref{sec:simulation} we present
 a refinement of forward simulation that grants all required correctness
 properties.
…
 
 \subsection{Structured traces}
-\label{semantics}
+\label{sec:semantics}
 
 Let's consider a generic unstructured language already equipped with a
…
 $s_i\to[f]s_{i+1}$. This is a technical condition needed to ensure that a
 label associated with a function is only used at the beginning of its
-body. Its use will become clear in~\autoref{simulation}.
+body. Its use will become clear in~\autoref{sec:simulation}.
 \item For every $i$, if the instruction to be executed in $s_i$ is a
 conditional branch, then there is an $L$ such that $s_{i+1} \to[L] s_{i+2}$ or, equivalently, that $s_{i+1}$ must start execution with an
…
 
 \begin{theorem}
-\label{static}
+\label{thm:static}
 for any measurable fragment $T = s_0 \to^{*} s_n$,\\
 $$\Delta_t := \verb+clock+_{s_n} - \verb+clock+_{s_0} = \Sigma_{o \in T} k(o)$$
…
 holds for each compiler pass.
 
-Having proved in~\autoref{static} that the statically computed cost model is
+Having proved in~\autoref{thm:static} that the statically computed cost model is
 accurate for the object code, we get as a corollary that it is also accurate
 for the source code if the compiler preserves weak traces and
…
 on the source code only.
 
-\begin{theorem}\label{preservation}
+\begin{theorem}\label{thm:preservation}
 Given a compiler that preserves weak traces and propagates measurability,
 for any measurable execution fragment $T_1 = s_1 \to^{*} s_1'$ of the source
…
 
 \section{Proving the compiler correctness}
-\label{simulation}
-Because of \autoref{preservation}, to certify a compiler for the labelling
+\label{sec:simulation}
+Because of \autoref{thm:preservation}, to certify a compiler for the labelling
 approach we need to both prove that it respects the functional semantics of the
 program, and that it preserves weak traces and propagates measurability.
…
 % As should be expected, even though the rules are asymmetric $\approx$ is in fact
 % an equivalence relation.
-
-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
-
-Therefore we now introduce an abstract notion of relation set between abstract
-statuses and an abstract notion of 1-to-many forward simulation conditions.
-These two definitions enjoy the following remarkable properties:
+%
+This argument enjoys the following remarkable properties:
 \begin{enumerate}
-\item they are generic enough to accommodate all passes of the CerCo compiler
-\item the conjunction of the 1-to-many forward simulation conditions are
+\item it is generic enough to accommodate all passes of the CerCo compiler;
+\item the requested conditions are
 just slightly stricter than the statement of a 1-to-many forward
-simulation in the classical case. In particular, they only require
-the construction of very simple forms of structured traces made of
-silent states only.
-\item they allow to prove our main result of the paper: the 1-to-many
-forward simulation conditions are sufficient to prove the trace
-reconstruction theorem
+simulation in the classical case;
+% . In particular, they only require
+% the construction of very simple forms of structured traces made of
+% silent states only.
+\item they allow us to prove our main result of the paper: the conditions we will
+present are sufficient to prove the intensional preservation needed to prove
+the compiler correct (\autoref{thm:main}).
 \end{enumerate}
-
-Point 3. is the important one. First of all it means that we have reduced
-the complex problem of trace reconstruction to a much simpler one that,
+The last point is the important one. First of all it means that we have reduced
+the complex problem of preserving the structure of fragments to a much simpler one that,
 moreover, can be solved with slight adaptations of the forward simulation proof
 that is performed for a compiler that only cares about functional properties.
-Therefore we have successfully splitted as much as possible the proof of
+Therefore we have successfully split as much as possible the proof of
 preservation of functional properties from that of non-functional ones.
 
-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
+% @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 
 \paragraph{Relation sets.}
 Let $S_1$ and $S_2$ be two deterministic labelled transition systems as described
-in \autoref{semantics}. We introduce now the four relations $\mathcal{S,C,R,L}\subseteq S_1\times S_2$
+in \autoref{sec:semantics}. We introduce now the four relations $\mathcal{S,C,R,L}\subseteq S_1\times S_2$
 between states of the two systems. The first two are abstract and must be instantiated
 by every pass. The remaining two are derived.
…
 % the target code of every cost emitting statement in the source code.
 
+\begin{definition}[$\R$ and $\LS$]
+\label{def:R_LS}
 Two states
 $s_1$ and $s_2$ are $\R$-related if every time $s_1$ is the
 successor of a call state that is $\C$-related to a call state
 $s_2'$ in the target code, then $s_2$ is the successor of $s_2'$. Formally:
 $$s_1\R s_2 \iffdef \forall s_1',s_2'.s_1'\C s_2' \to s_1'\ar s_1 \to s_2' \ar s_2.$$
-We will require all pairs of states that return from related calls to be
-$\R$-related. This, in combination with a dual requirement on function calls,
-will grant that calls return exactly where they are supposed to be.
 
 We say states $s_1\in S_1$ and $s_2\in S_2$ are label-related
-(marked $s_1\L s_2$) if
+(marked $s_1\LS s_2$) if
 \begin{itemize}
 \item they both emit the same label $L$;
 \item if $L\in\ell(\Functions)$ then there is a state $s_2'$ such that
 $s_1\S s_2'$, otherwise if $L\notin\ell(\Functions)$ then $s_1\S s_2$.
 \end{itemize}
-This relation captures the fact that synchronisation on labels can be decoupled from
-``semantic synchronisation'' (in particular at the start of functions). $s_1\L s_2$
+\end{definition}
+We will require all pairs of states that return from related calls to be
+$\R$-related. This, in combination with a dual requirement on function calls,
+will grant that calls return exactly where they are supposed to be.
+On the other hand the $\LS$ relation captures the fact that synchronisation on labels can be decoupled from
+``semantic synchronisation'' (in particular at the start of functions). $s_1\LS s_2$
 tells that the two states are synchronised as to labels, and $s_2$ will eventually
 synchronise semantically using only silent execution steps (apart from the very first
…
 Given the relations $\S$ and $\C$, \autoref{fig:forwardsim} defines a set of
 local simulation conditions that are sufficient to grant the correctness of
-the compiler.
-
+the compiler, as stated by the results that follow.
 \begin{figure}
 \centering
…
 \draw [new, overlay] (t1) to [bend left ] node [rel] {$\R$} (r);
 \end{tikzpicture}
-\caption{Local simulation conditions. Each one states that the existence of
-states in~XXX such that the dashed relations holds is implied by the assumptions
-drawn as solid lines.}
+\caption{Local simulation conditions. Each one states that the assumptions
+drawn solid imply the existence of the white states and the dashed relations.}
 \label{fig:forwardsim}
 \end{figure}
-
-\begin{lemma}
+%
+\begin{lemma}[Preservation of structured fragments]
 If $S_1$, $S_2$, $\S$ and $\C$ satisfy the diagrams in \autoref{fig:forwardsim},
 $T_1=s_1\To s_1'$ is a structured fragment not starting with an $\ell(f)$ emission,
 and $s_1\S s_2$, then there is $T_2=s_2\To s_2'$ with $T_1\approx T_2$ and $s_1'\S s_2'$.
 \end{lemma}
-\begin{theorem}
+\begin{theorem}[Preservation of measurable fragments]
+\label{thm:main}
 If $S_1$, $S_2$, $\S$ and $\C$ satisfy the diagrams in \autoref{fig:forwardsim},
-$M_1:s_1\MeasTo s_1'$ is a measurable fragment of $S_1$ and $s_2$ is such that
-$s_1\L s_2$, then there is $M_2:s_2\MeasTo s_2'$ with $M_1\approx M_2$. Moreover,
+$M_1 = s_1\to^* s_1'$ is a measurable fragment of $S_1$ and $s_2$ is such that
+$s_1\LS s_2$, then there is a measurable $M_2 = s_2\to^* s_2'$ with $M_1=M_2$. Moreover,
 there is a state $s_2''$ with $s_2'\to[\tau *]s_2''$ and $s_1'\S s_2''$.
 \end{theorem}
-@@@@@@@@@@@@@@@@@@@@@@@
+In particular, the above theorem
+applied to the \verb+main+ function of the program together with an assumption
+stating that initial states are $\LS$-related shows that
+for each measurable execution of the source code,
+the compiled code produces a measurable execution with the same observables.
+Combined with \autoref{thm:static} and by proving the conditions in
+\autoref{fig:forwardsim} for every pass, the compiler is proved correct.
+
 
 % \begin{figure}
…
 \end{verbatim}
 \end{comment}
-
-\section{Conclusions and future works}
-ADD A DISCUSSION OF THE ALTERNATIVE APPROACH: A LABEL AFTER EVERY CALL;
-STILL WITH STRUCTURED TRACES, BUT DIFFERENT
-\label{conclusions}
-[… the body of this section, which the changeset moves to the end of the
-paper; the moved text, with its changes, appears as an addition below …]
-
 \section{The formalisation}
 As we already explained, for the sake of presentation we explained the formal
…
 \item Rather than having a generic notion of fragment and a predicate of
 structuredness and measurability, we use inductive definitions internalising
-the conditions. Among other things this turns the proof of \autoref{preservation} into a matter of
+the conditions. Among other things this turns the proof of \autoref{thm:preservation} into a matter of
 structural induction, when it would require more complex induction schemes
 otherwise.
…
 with the tradition of labelled transition systems and named sequences of states
 and steps \emph{execution fragments}. In the development we named our structures
-\emph{traces}. In the remainder of this section we will use both names.
+\emph{traces} (a name that usually refers to the sequence of emitted labels only).
+In the remainder of this section we will use both names.
 \end{itemize}
…
 
 \paragraph{The forward simulation result.}
+In the formalisation, the equivalent conditions of those
+depicted in \autoref{fig:forwardsim} can be seen in \autoref{fig:forwardsim'}.
+Again we must produce for each pass the relations $\S$ and $\C$. Another derived
+relation is $\L$, which holds for states $s_1$ and $s_2$ when $L~s_1=L~s_2$.
+%
+Some details are skipped in the figure regarding the nature of the repeated steps depicted with
+an asterisk.
+There are three kinds of iterated steps without
+calls nor returns involved in the conditions:
+\begin{itemize}
+\item sequences where all states strictly after the first one are unlabelled;
+these are what can be safely prepended to \verb+TAL+'s, and are modelled
+by the inductive definition \verb+trace_any_any+ (\verb+TAA+) in the formalisation;
+\item sequences where all ``internal'' states strictly after the first and before the last
+are unlabelled; these are \verb+trace_any_any_free+ in the formalisation (\verb+TAAF+, which
+is just a \verb+TAA+ followed by a step);
+\item sequences where all states strictly before the last are unlabelled, which we
+will call here \verb+trace_any_any_right+ (\verb+TAAR+); in the formalisation
+these are in fact \verb+TAAF+'s with an additional condition.
+\end{itemize}
+%
 \begin{figure}
 \centering
…
 % \end{subfigure}
 % &
-\begin{subfigure}{.25\linewidth}
+\begin{subfigure}[b]{.25\linewidth}
 \centering
 \begin{tikzpicture}[every join/.style={ar}, join all, thick,
…
 \end{subfigure}
 &
-\begin{subfigure}{.375\linewidth}
+\begin{subfigure}[b]{.375\linewidth}
 \centering
 \begin{tikzpicture}[every join/.style={ar}, join all, thick,
…
 \draw [new] (s1) to [bend left] node [rel] {$\C$} (c);
 \end{tikzpicture}
-\caption{The \verb+cl_call+ case.}
+\caption{The \verb+cl_call+ case.\\\vspace{3.1ex}}
 \label{subfig:cl_call}
 \end{subfigure}
 &
-\begin{subfigure}{.375\linewidth}
+\begin{subfigure}[b]{.375\linewidth}
 \centering
 \begin{tikzpicture}[every join/.style={ar}, join all, thick,
…
 \draw [new, overlay] (t1) to [bend left ] node [rel] {$\R$} (r);
 \end{tikzpicture}
-\caption{The \verb+cl_return+ case.}
+\caption{The \verb+cl_return+ case.\\\vspace{3.1ex}}
 \label{subfig:cl_return}
 \end{subfigure}
 \end{tabular}
-\caption{Mnemonic diagrams depicting the hypotheses for the preservation of structured traces.
-Dashed lines
-and arrows indicate how the diagrams must be closed when solid relations
-are present.
-TO BE CHANGED: $\L$ conflicts with the other definition}
-\label{fig:forwardsim}
+\caption{Mnemonic diagrams depicting the hypotheses for the preservation of structured traces.}
+\label{fig:forwardsim'}
 \end{figure}
+This proliferation of different types of fragments is a downside of shifting
+labelling from steps to states.
+The actual conditions are as follows.
+%
+\begin{condition}[\verb+cl_other+ and \verb+cl_jump+, \autoref{subfig:cl_other_jump}]
+\label{cond:other}
+For all $s_1,s_1',s_2$ such that $s_1 \S s_2$, and
+$s_1\exec s_1'$, and either $s_1 \class \verb+cl_other+$ or
+both $s_1\class\verb+cl_jump+$ and $\ell~s_1'$,
+there exists an $s_2'$ and a $taaf:\verb+trace_any_any_free+~s_2~s_2'$
+such that $s_1' \mathrel{{\S} \cap {\L}} s_2'$ and either
+$taaf$ is non-empty, or one among $s_1$ and $s_1'$ is not labelled.
+The last condition is needed to prevent the collapsing of two
+consecutive label-emitting states.
+\end{condition}
+
+% In the above condition depicted in ,
+% a $\verb+trace_any_any_free+~s_1~s_2$ (which from now on
+% will be shorthanded as \verb+TAAF+) is an
+% inductive type of structured traces that do not contain function calls or
+% cost emission statements. Differently from a \verb+TAA+, the
+% instruction to be executed in the lookahead state $s_2$ may be a cost emission
+% statement.
+%
+% The intuition of the condition is that one step can be replaced with zero or more steps if it
+% preserves the relation between the data and if the two final statuses are
+% labelled in the same way. Moreover, we must take special care of the empty case
+% to avoid collapsing two consecutive states that emit a label, missing one of the two emissions.
+%
+\begin{condition}[\verb+cl_call+, \autoref{subfig:cl_call}]
+\label{cond:call}
+For all $s_1,s_1',s_2$ s.t. $s_1 \S s_2$ and
+$s_1\exec s_1'$ and $s_1 \class \verb+cl_call+$, there exist $s_a, s_b, s_2'$, a
+$\verb+TAA+~s_2~s_a$, and a
+$\verb+TAAF+~s_b~s_2'$ such that:
+$s_a\class\verb+cl_call+$, the \verb+as_call_ident+'s of
+the two call states are the same, $s_1 \C s_a$,
+$s_a\exec s_b$, $s_1' \L s_b$ and
+$s_1' \S s_2'$.
+\end{condition}
+
+% The condition, depicted in \autoref{subfig:cl_call}, says that, to simulate a function call, we can perform a
+% sequence of silent actions before and after the function call itself.
+% The old and new call states must be $\C$-related, the old and new
+% states at the beginning of the function execution must be $\L$-related
+% and, finally, the two initial and final states must be $\S$-related
+% as usual.
+
+\begin{condition}[\verb+cl_return+, \autoref{subfig:cl_return}]
+\label{cond:return}
+For all $s_1,s_1',s_2$ s.t. $s_1 \S s_2$,
+$s_1\exec s_1'$ and $s_1 \class \verb+cl_return+$, there exist $s_a, s_b, s_2'$, a
+$\verb+TAA+~s_2~s_a$ and a
+$\verb+TAAR+~s_b~s_2'$ such that:
+$s_a\class\verb+cl_return+$,
+$s_a\exec s_b$,
+$s_1' \R s_b$ and
+$s_1' \mathrel{{\S} \cap {\L}} s_2'$.
+\end{condition}
+
+\paragraph{Main result.}
+Let us assume that $\S$ and $\C$ are given such that
+Conditions~\ref{cond:other}, \ref{cond:call} and~\ref{cond:return}
+are satisfied. If we reword the
+relation $\LS$ of \autoref{def:R_LS} by defining
+$$s_1\LS s_2\iffdef s_1\L s_2\wedge \exists s_2', taa:\verb+TAA+~s_2~s_2'. s_1\S s_2',$$
+we can prove the trace reconstruction theorem,
+which is the analogue of \autoref{thm:main}.
+%
+\begin{theorem}[\verb+status_simulation_produce_tlr+]
+\label{thm:main'}
+For every $s_1,s_1',s_2$ s.t.
+$s_1\LS s_2$ and there is a $tlr_1:\verb+TLR+~s_1~s_1'$,
+there exist $s_2'$ and $tlr_2:\verb+TLR+~s_2~s_2'$
+with $tlr_1=tlr_2$. Moreover there are $s_2''$ and
+a $\verb+TAAR+~s_2'~s_2''$ with $s_1'\mathrel{{\S}\cap{\L}} s_2''$.
+\end{theorem}
+% In particular, the \verb+status_simulation_produce_tlr+ theorem
+% applied to the \verb+main+ function of the program and equal
+% $s_{2_b}$ and $s_2$ states shows that, for every initial state in the
+% source code that induces a structured trace in the source code,
+% the compiled code produces a similar structured trace.
+The theorem states that a \verb+trace_label_return+ in the source code
+together with a precomputed preamble of silent states
+(hidden in the definition of $\LS$) in the target code induces a
+similar \verb+trace_label_return+ in the target code.
+% Note that the statement does not
+% require the produced \verb+trace_label_return+ to start with the
+% precomputed preamble, even if this is likely to be the case in concrete
+% implementations.
+The preamble in input and the postamble in output are necessary for compositionality,
+and follow directly from the fact that points semantically related
+may not correspond to points where the same observable events are fired, in particular
+cost labels and $RET$'s that mark the borders of measurability.
+
+The proof proceeds by mutual structural induction on the traces involved.
+In the actual formalisation, in place of $tlr_1=tlr_2$ we use a
+recursively defined simulation relation between traces that implies the required
+equality.
+
+\section{Conclusions and future works}
+\label{sec:conclusions}
+The labelling approach is a technique to implement compilers that induce on
+the source code a non-uniform cost model determined from the object code
+produced. The cost model assigns a cost to each basic block of the program.
+The main theorem of the approach says that there is an exact
+correspondence between the sequence of basic blocks started in the source
+and object code, and that no instruction in the source or object code is
+executed outside a basic block. Thus the cost of object code execution
+can be computed precisely on the source.
+
+In this paper we scaled the labelling approach to cover a programming language
+with function calls. This introduces new difficulties when the language
+is unstructured, i.e. it allows function calls to return anywhere in the code,
+destroying the hope of a static prediction of the cost of basic blocks.
+We restored static predictability by introducing a new semantics for unstructured
+programs that singles out well-structured executions. The latter are represented
+by structured traces, a generalisation of streams of observables that capture
+several structural invariants of the execution, like well-nesting of functions
+or the fact that every basic block must start with a code emission statement.
+We showed that structured traces are sufficiently well behaved to statically
+compute a precise cost model on the object code.
+
+We also proved that the cost model computed on the object code is also valid
+on the source code if the compiler respects two requirements: the weak execution
+traces of the source and target code must be the same and the object
+code execution fragments are structured.
+
+To prove that a compiler respects the requirement we extended the notion
+of forward simulation proof for a labelled transition system to grant
+preservation of structured fragments. If the source language of the compiler
+is structured, all its execution fragments are, allowing us to deduce from
+preservation of structure that object code fragments are structured too.
+
+Finally, we identified measurable execution fragments that are those whose
+execution time (once compiled) can be exactly computed looking at the object
+code weak execution traces only. A final instrumentation pass on the source
+code can then be used to turn the non-functional property of having a certain
+cost into the functional property of granting that a certain global variable,
+incremented at the beginning of every block according to the induced cost model,
+has a certain value.
+
+All results presented in the paper are part of a larger certification of a
+C compiler which is based on the labelling approach. The certification, done
+in Matita, is the main deliverable of the FET-Open project Certified
+Complexity (CerCo).
+
+The short term objective consists in the completion of the certification of
+the CerCo compiler exploiting the main theorem of this paper. An alternative approach
+to the same problem that we would like to investigate consists in labelling
+every instruction that follows a call. This would still require a form of structured
+execution fragments, stating that not only calls but also returns are always followed
+by an emission. The main downside is the pollution of the instrumented code
+with many cost annotations.
+
+\paragraph{Related works.}
+CerCo is the first project that explicitly tries to induce a
+precise cost model on the source code in order to establish non-functional
+properties of programs in a high-level language. Traditional certifications
+of compilers, like~\cite{compcert2,piton}, only explicitly prove preservation
+of the functional properties.
+
+Usually forward simulations take the following form: for each transition
+from $s_1$ to $s_2$ in the source code, there exists an equivalent sequence of
+transitions in the target code of length $n$. The number $n$ of transition steps
+in the target code can just be the witness of the existential statement.
+An equivalent alternative when the proof of simulation is constructive consists
+in providing an explicit function, called \emph{clock function} in the
+literature~\cite{clockfunctions}, that computes $n$ from $s_1$. Every clock
+function constitutes then a cost model for the source code, in the spirit of
+what we are doing in CerCo. However, we believe our solution to be superior
+in the following respects: 1) the machinery of the labelling approach is
+insensitive to the resource being measured. Indeed, any cost model computed on
+the object code can be lifted to the source code (e.g. stack space used,
+energy consumed, etc.) simply by reproving an analogue of~\autoref{thm:static}.
+For example, in CerCo we transported to the source level not only the execution
+time cost model, but also the amount of stack used by function calls.
+On the contrary, clock functions only talk about
+the number of transition steps. In order to extend the approach with clock functions
+to other resources, additional functions must be introduced. Moreover, the
+additional functions would be handled differently in the proof.
+2) the cost models induced by the labelling approach have a simple presentation.
+In particular, they associate a number to each basic block. More complex
+models can be induced when the approach is scaled to cover, for instance,
+loop optimisations~\cite{loopoptimizations}, but the costs are still meant to
+be easy to understand and manipulate in an interactive theorem prover or
+in Frama-C.
+On the contrary, a clock function is a complex function of the state $s_1$
+which, as a function, is an opaque object that is difficult to reify as
+source code in order to reason on it.
+
+% e.g.
+% because the 1-to-many forward simulation conditions allow in the
+% case of function calls to execute a preamble of silent instructions just after
+% the function call.
+
+% Similarly to the call condition, to simulate a return we can perform a
+% sequence of silent actions before and after the return statement itself,
+% as depicted in \autoref{subfig:cl_return}.
+% The old and the new statements after the return must be $\R$-related,
+% to grant that they returned to corresponding calls.
+% The two initial and final states must be $\S$-related
+% as usual and, moreover, they must exhibit the same labels. Finally, when
+% the suffix is non empty we must take care of not inserting a new
+% unmatched cost emission statement just after the return statement.
 
 
 % There are three mutual structural recursive functions, one for each of
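The theorem-environment hunks at the top of this changeset (the `envcountsame` class option plus `\spnewtheorem` numbering `Condition` on the `theorem` counter) are the part most likely to be useful outside this paper. A minimal standalone sketch of the same fix is below; the document body and the `thm:ex`/`cond:ex` label names are illustrative, not taken from the paper, and `\spnewtheorem` is the llncs-specific replacement for `\newtheorem`:

```latex
% Sketch of the r3366 theorem-numbering fix, assuming the llncs class is installed.
\documentclass[envcountsame]{llncs} % one shared counter for theorem-like envs
\usepackage{hyperref}

% As in the changeset: "Condition" shares the counter of "theorem",
% so Theorem 1 is followed by Condition 2, not by Condition 1.
\spnewtheorem{condition}[theorem]{Condition}{\bfseries}{}

\begin{document}
\begin{theorem}\label{thm:ex}
A first result.
\end{theorem}
\begin{condition}\label{cond:ex}
A side condition on the result.
\end{condition}
Condition~\ref{cond:ex} and Theorem~\ref{thm:ex} share one numbering sequence.
\end{document}
```

The commented-out `aliascnt` block in the changeset addresses a separate issue, namely making `\autoref` print the right environment name ("Condition" rather than "Theorem") for alias counters; the paper's authors left it disabled and used plain `\ref`-style references instead.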