 Timestamp:
Jun 14, 2013, 4:21:50 PM

Papers/itp2013/ccexec2.tex
r3355 → r3356

\label{labelling}
% \subsection{A brief introduction to the labelling approach}
We briefly explain the labelling approach as introduced in~\cite{easylabelling}
on the example in \autoref{examplewhile}.
The user wants to analyse the execution time of the program (the black lines in
\autoref{subfig:example_input}). He compiles the program using
a special compiler that first inserts in the code three label emission
statements (\verb+EMIT L_i+) to mark the beginning of basic blocks
(\autoref{subfig:example_input});
then the compiler compiles the code to machine code (\autoref{subfig:example_oc}),
granting that the execution of the source and object
…
This is achieved by keeping track of basic blocks during compilation, avoiding
all optimizations that alter the control flow. The latter can be recovered with
a more refined version of the labelling approach~\cite{loopoptimizations}, but in the
present paper we stick to this simple variant. Once the object
code is produced, the compiler runs a static code analyzer to associate to
…
never exceeds a certain bound~\cite{cerco}, which is now a functional property
of the code.
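For intuition, the arithmetic behind the approach can be mimicked in a few lines of Python. This is a toy model of ours, not CerCo's implementation: the label names and per-block costs below are made up, standing in for the output of the static analyser.

```python
# Toy model of the labelling approach (illustrative only; labels and costs
# are hypothetical, not produced by the CerCo analyser).

# Static cost model computed on the object code: each label maps to the
# cost of the basic block it marks.
cost_model = {"L_1": 4, "L_2": 7, "L_3": 3}

def predicted_cost(label_trace, costs):
    """Cost of a run, computed from the sequence of labels it emits."""
    return sum(costs[lbl] for lbl in label_trace)

def instrumented_run(label_trace, costs):
    """Source-level instrumentation: every 'EMIT L_i' becomes a cost update."""
    _cost = 0
    for lbl in label_trace:
        _cost += costs[lbl]  # this update replaces 'EMIT lbl' in the source
    return _cost

trace = ["L_1", "L_2", "L_2", "L_3"]  # labels observed during one execution
assert instrumented_run(trace, cost_model) == predicted_cost(trace, cost_model)
print(predicted_cost(trace, cost_model))  # prints 21
```

The assertion is the point: the global-variable instrumentation and the static prediction compute the same number, which is what turns the non-functional time bound into a functional property of the instrumented source.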
\vspace{0.5cm}
\begin{figure}[!h]
\begin{subfigure}[t]{.32\linewidth}
\begin{lstlisting}[moredelim={[is][\color{red}]{}{}}]
…
\end{figure}

\section{Extending the labelling approach to function calls}
%
Let's now consider a simple program written in C that contains a function
…
\begin{lstlisting}[xleftmargin=20pt]
void main() {
  EMIT $L_1$;
  $I_1$;
  (*f)();
  $I_2$;
}

void g() {
  EMIT $L_2$;
  $I_3$;
}
…
\begin{lstlisting}[xleftmargin=20pt]
main:
  EMIT $L_1$
  $I_4$
  CALL
  $I_5$
  RETURN

g:
  EMIT $L_2$
  $I_6$
  RETURN
…
stack so that the next \verb+RETURN+ would start at the second instruction
of $I_5$. The compiler would still be perfectly correct if a random, dead
code instruction was added after the \verb+CALL+ as the first instruction of $I_5$. More generally,
\emph{there is no guarantee that a correct compiler that respects the functional
behaviour of a program also respects the calling structure of the source code}.
…
stack). This property, however, is a property of the runs of object code
programs, and not a property of the object code that can be easily statically
verified (as the ones required in \autoref{labelling} in the absence of function calls).
Therefore, we now need to single out those runs whose cost behaviour can be
statically predicted, and we need to prove that every run of programs generated
…
transition systems; 2) we show that on the object code we can correctly
compute the execution time of a structured run from the sequence of labels
observed; 3) we show that on the source code we can correctly compute the
execution time of a program if the compiler produces object code whose
runs are weakly similar to the source code runs and structured.

The proof of correctness of such a compiler is harder than a traditional
proof of preservation of functional properties, and a standard forward
simulation argument does not work. In \autoref{simulation} we present
a refinement of forward simulation that grants all required correctness
properties.

All the definitions and theorems presented in the paper have been formalized
…
% in the semantics. This is the topic of the next section.
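The structural invariant that singles out well-behaved runs can be illustrated with a small executable check. This is our simplification for intuition, not the paper's formal definition: a run is accepted only if every return transfers control to the program point recorded at the matching call.

```python
# Simplified check of run structure (illustrative; the paper's structured
# traces carry more invariants): each CALL pushes the expected return point,
# and each RET must land exactly there.

def is_structured(run):
    """run: list of ('call', return_point), ('ret', target_point), ('tau',)."""
    stack = []
    for ev in run:
        if ev[0] == "call":
            stack.append(ev[1])                # where the RET should land
        elif ev[0] == "ret":
            if not stack or stack.pop() != ev[1]:
                return False                   # returned to the wrong point
    return not stack                           # every call has returned

good = [("call", 5), ("tau",), ("ret", 5)]
bad = [("call", 5), ("ret", 7)]               # RET lands somewhere unexpected
assert is_structured(good) and not is_structured(bad)
```

The second run is exactly the kind of behaviour a functionally correct but structure-violating compiler could produce by tampering with the return address on the stack.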
\subsection{Structured traces}
\label{semantics}

…
deterministic labelled transition system~\cite{LTS} $(S,\Lambda,\to)$
that refines the
SOS by observing function calls/returns and the beginning of basic blocks.
$S$ is the set of states of the program and
$\Lambda = \{ \tau, RET \} \cup \Labels \cup \Functions$
…
cases when the fragment starts at the beginning of a function body
(e.g. the one of \texttt{main}) because in that case nobody would have emitted
the observable $\ell(f)$. We plan to compare the two approaches in the future.

\paragraph{Measurable execution fragments and their cost prediction.}
The first main theorem of CerCo deals with the object code.
It states that the execution cost of
certain execution fragments, that we call \emph{measurable fragments}, can be
…
$s_0 \To s_n$ and $s_n$ must be a label emission statement.

%\textbf{CSC: PROVA}
% The theorem is proved by structural induction over the structured
% trace, and is based on the invariant that
…
accurate for the object code, we get as a corollary that it is also accurate
for the source code if the compiler preserves weak traces and
propagates measurability. Thus, as prescribed by CerCo's methodology~\cite{fopara}, it becomes possible to compute cost models
on the object code, transfer them to the source code and then reason comfortably
on the source code only.
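The two computations involved in the theorem can be mimicked in Python for intuition. This is our toy encoding of the observables, not the formalisation: the weak trace of a run erases the silent $\tau$ steps, and the cost of a fragment is read off the labels occurring in its weak trace, using a hypothetical static cost map.

```python
# Toy encoding of runs over Lambda = {tau, RET} + labels + function names
# (the run, the labels and the costs are ours, for illustration).

TAU, RET = "tau", "RET"

def weak_trace(run):
    """The observables of a run: everything except silent tau steps."""
    return [a for a in run if a != TAU]

def fragment_cost(run, costs):
    """Cost of a fragment, summed over the labels in its weak trace."""
    return sum(costs[a] for a in weak_trace(run) if a in costs)

costs = {"L_1": 4, "L_2": 7}                   # hypothetical static costs
run = [TAU, "L_1", TAU, "f", "L_2", TAU, RET]  # one execution fragment
assert weak_trace(run) == ["L_1", "f", "L_2", RET]
assert fragment_cost(run, costs) == 11
```

Because the cost only depends on the weak trace, any compiler that preserves weak traces (and measurability) automatically makes the object-code cost model valid on the source.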
…
\end{theorem}

\section{Proving compiler correctness}
\label{simulation}
Because of \autoref{preservation}, to certify a compiler for the labelling
approach we need to both prove that it respects the functional semantics of the
program, and that it preserves weak traces and propagates measurability.
We achieve this by independently proving the three properties for each compiler
pass.
The first property is standard and can be proved by means of a forward simulation argument (see for example~\cite{compcert}) that runs like this.
First a relation between the corresponding
source and target states is defined. Then a lemma establishes
a local simulation condition: given two states in relation, if the source
one performs one step then the target one performs zero or more steps and
the two resulting states are synchronized again according to the relation.
No requirements are imposed on the intermediate target states.
Finally, the lemma is iterated over the execution trace to establish the
final result.
…
kind of local or global analysis of the code followed by a compositional, order-preserving
translation of every instruction. In order to produce structured
traces, however, code emission cannot be fully compositional any longer,
and requirements need to be enforced on intermediate target states.

For example, consider~requirement \ref{req3} that asks every function body
…
the state just after the return in the source code is matched by the state $s_2$
after these steps in the object code.
However, the aforementioned requirement
does not involve $s_2$, but the intermediate state reached after the return in the object
code. Therefore this requirement too
cannot be enforced with the standard forward simulation argument.
…
% an equivalence relation.

We summarise here the results of the previous sections. Each intermediate
unstructured language can be given a semantics based on structured traces,
that single out those runs that respect a certain number of invariants.
A cost model can be computed on the object code and it can be used to predict
the execution costs of runs that produce structured traces. The cost model
can be lifted from the target to the source code of a pass if the pass maps
structured traces to similar structured traces. The latter property is called
a \emph{forward simulation}.

As for labelled transition systems, in order to establish the forward
simulation we are interested in (preservation of observables), we are
forced to prove a stronger notion of forward simulation that introduces
an explicit relation between states. The classical notion of a 1-to-many
forward simulation is the existence of a relation $\S$ over states such that
if $s_1 \S s_2$ and $s_1 \to^1 s_1'$ then there exists an $s_2'$ such that
$s_2 \to^* s_2'$ and $s_1' \S s_2'$. In our context, we need to replace the
one and multi step transition relations $\to^n$ with the existence of
a structured trace between the two states, and we need to add the request that
the two structured traces are similar.
Thus what we would like to state is
something like:\\
for all $s_1,s_2,s_1'$ such that there is a $\tau_1$ from
$s_1$ to $s_1'$ and $s_1 \S s_2$, there exists an $s_2'$ such that
$s_1' \S s_2'$ and a $\tau_2$ from $s_2$ to $s_2'$ such that
$\tau_1$ is similar to $\tau_2$. We call this particular form of forward
simulation \emph{trace reconstruction}.

The statement just introduced, however, is too simplistic and not provable
in the general case. To understand why, consider the case of a function call
and the pass that fixes the parameter passing conventions. A function
call in the source code takes in input an arbitrary number of pseudo-registers (the actual parameters to pass) and returns an arbitrary number of pseudo-registers (where the result is stored). A function call in the target language has no
input nor output parameters. The pass must add explicit code before and after
the function call to move the pseudo-registers' content from/to the hardware
registers or the stack in order to implement the parameter passing strategy.
Similarly, each function body must be augmented with a preamble and a postamble
to complete/initiate the parameter passing strategy for the call/return phase.
Therefore what used to be a call followed by the next instruction to execute
after the function return now becomes a sequence of instructions, followed by
a call, followed by another sequence. The two states at the beginning of the
first sequence and at the end of the second sequence are in relation with
the status before/after the call in the source code, as in a usual forward
simulation. How can we prove, however, the additional condition for function calls
that asks that when the function returns the instruction immediately after the
function call is executed?
To grant this invariant, there must be another relation
between the address of the function call in the source and in the target code.
This additional relation is to be used in particular to relate the two stacks.

Another example is given by preservation of code emission statements. A single
code emission instruction can be simulated by a sequence of steps, followed
by a code emission, followed by another sequence. Clearly the initial and final
statuses of the sequence are to be in relation with the status before/after the
code emission in the source code. In order to preserve the structured traces
invariants, however, we must consider a second relation between states that
traces the preservation of the code emission statement.

Therefore we now introduce an abstract notion of relation set between abstract
…
Therefore we have successfully split as much as possible the proof of
preservation of functional properties from that of non-functional ones.
Secondly, combined with the results in the previous section, it implies
that the cost model can be computed on the object code and lifted to the
source code to reason on non-functional properties, assuming that
the 1-to-many forward simulation conditions are fulfilled for every
compiler pass.

\paragraph{Relation sets.}
Let $S_1$ and $S_2$ be two deterministic labelled transition systems as described
in \autoref{semantics}.
We introduce now the four relations $\mathcal{S,C,R,L}\subseteq S_1\times S_2$
…
Two states
$s_1$ and $s_2$ are $\R$-related if every time $s_1$ is the
successor of a call state $s_1'$ that is $\C$-related to a call state
$s_2'$ in the target code, then $s_2$ is the successor of $s_2'$. Formally:
$$s_1\R s_2 \iffdef \forall s_1',s_2'.s_1'\C s_2' \to s_1'\ar s_1 \to s_2'\ar s_2.$$
We will require all pairs of states that return from related calls to be
$\R$-related. This, in combination with a dual requirement on function calls,
will grant that calls return exactly where they are supposed to.

We say that states $s_1\in S_1$ and $s_2\in S_2$ are label-related
…
\end{itemize}

Given the relations $\S$ and $\C$, \autoref{fig:forwardsim} defines a set of
local simulation conditions that are sufficient to grant the correctness of
the compiler.

\begin{figure}
\centering
…
\draw [new, overlay] (t1) to [bend left ] node [rel] {$\R$} (r);
\end{tikzpicture}
\caption{Local simulation conditions.
Each one states that the existence of
states in~XXX such that the dashed relations hold is implied by the assumptions
drawn as solid lines.}
\label{fig:forwardsim}
\end{figure}

\begin{lemma}
If $S_1,S_2,\S,\C$ satisfy the diagrams in \autoref{fig:forwardsim},
$T_1=s_1\To s_1'$ is a structured fragment not starting with an $\ell(f)$ emission,
and $s_1\S s_2$, then there is $T_2=s_2\To s_2'$ with $T_1\approx T_2$ and $s_1'\S s_2'$.
\end{lemma}
\begin{theorem}
If $S_1,S_2,\S,\C$ satisfy the diagrams in \autoref{fig:forwardsim},
$M_1:s_1\to^{*} s_1'$ is a measurable fragment of $S_1$ and $s_2$ is such that
$s_1\L s_2$, then there is $M_2:s_2\to^{*} s_2'$ with $M_1\approx M_2$.
\end{theorem}
…
can be computed precisely on the source.

In this paper we scaled the labelling approach to cover a programming language
with function calls. This introduces new difficulties when the language
is unstructured, i.e. it allows function calls to return anywhere in the code,
destroying the hope of a static prediction of the cost of basic blocks.
2012 We restore static predictability by introducing a new semantics for unstructured1964 We restored static predictability by introducing a new semantics for unstructured 2013 1965 programs that single outs well structured executions. The latter are represented 2014 1966 by structured traces, a generalisation of streams of observables that capture 2015 1967 several structural invariants of the execution, like well nesting of functions 2016 1968 or the fact that every basic block must start with a code emission statement. 2017 We show that structured traces are sufficiently structured to statically compute 2018 a precise cost model on the object code. 2019 2020 We introduce a similarity relation on structured traces that must hold between 2021 source and target traces. When the relation holds for every program, we prove 2022 that the cost model can be lifted from the object to the source code. 2023 2024 In order to prove that similarity holds, we present a generic proof of forward 2025 simulation that is aimed at pulling apart as much as possible the part of the 2026 simulation related to nonfunctional properties (preservation of structure) 2027 from that related to functional properties. In particular, we reduce the 2028 problem of preservation of structure to that of showing a 1tomany 2029 forward simulation that only adds a few additional proof obligations to those 2030 of a traditional, function properties only, proof. 1969 We showed that structured traces are sufficiently well behaved to statically compute a precise cost model on the object code. 1970 1971 We also proved that the cost model computed on the object code is also valid 1972 on the source code if the compiler respects two requirements: the weak execution 1973 traces of the source and target code must be the same and the object 1974 code execution fragments are structured. 
To prove that a compiler respects the requirements we extended the notion
of forward simulation proof for a labelled transition system to grant
preservation of structured fragments. If the source language of the compiler
is structured, all its execution fragments are, allowing us to deduce from
preservation of structure that object code fragments are structured too.

Finally, we identified measurable execution fragments, those whose
execution time (once compiled) can be exactly computed by looking at the object
code weak execution traces only. A final instrumentation pass on the source
code can then be used to turn the non-functional property of having a certain
cost into the functional property of granting that a certain global variable,
incremented at the beginning of every block according to the induced cost model,
has a certain value.

All results presented in the paper are part of a larger certification of a
…
insensible to the resource being measured. Indeed, any cost model computed on
the object code can be lifted to the source code (e.g. stack space used,
energy consumed, etc.) by simply reproving an analogue of~\autoref{static}.
For example, in CerCo we transported to the source level not only the execution
time cost model, but also the amount of stack used by function calls.
On the contrary, clock functions only talk about
the number of transition steps. In order to extend the approach with clock functions
to other resources, additional functions must be introduced. Moreover, the