Changeset 3344


Timestamp:
Jun 11, 2013, 10:31:46 PM (4 years ago)
Author:
sacerdot
Message:

...

File:
1 edited

  • Papers/itp-2013/ccexec2.tex

    r3343 r3344  
\section{The labelling approach}
\label{labelling}

\section{A brief introduction to the labelling approach}

\begin{figure}
\begin{verbatim}
EMIT L_1;                         EMIT L_1         cost += k_1;
I_1;                              I_3              I_1;
for (i=0; i<2; i++) {        l_1: COND l_2         for (i=0; i<2; i++) {
  EMIT L_2;                       EMIT L_2           cost += k_2;
  I_2;                            I_4                I_2;
 }                                GOTO l_1          }
EMIT L_3;                    l_2: EMIT L_3         cost += k_3;
\end{verbatim}
\caption{The labelling approach applied to a simple program.\label{examplewhile} The $I_i$ are sequences of instructions not containing jumps or loops.}
\end{figure}
We briefly explain the labelling approach on the example in Figure~\ref{examplewhile}. The user wants to analyse the execution time of the program formed by the black lines in the r.h.s. of the figure. The program is compiled with a special compiler that first inserts in the code three label emission
statements (\texttt{EMIT L$_i$}, in red) to mark the beginning of basic blocks;
then the compiler compiles the code to machine code (in the
middle of the figure), guaranteeing that the execution of the source and object
code emits the same sequence of labels ($L_1; L_2; L_2; L_3$ in the example).
This is achieved by keeping track of basic blocks during compilation and avoiding
all optimizations that alter the control flow. The latter can be recovered with
a more refined version of the labelling approach~\cite{tranquill}, but in the
present paper we stick to this simple variant. Once the object
code is produced, the compiler runs a static code analyzer to associate to
each label $L_1, \ldots, L_3$ the cost (in clock cycles) of the instructions
that belong to the corresponding basic block. For example, the cost $k_1$
associated to $L_1$ is the number of cycles required to execute the block
$I_3$ and \texttt{COND l\_2}, while the cost $k_2$ associated to $L_2$ counts the
cycles required by the block $I_4$ and \texttt{GOTO l\_1}. The compiler also guarantees
that every executed instruction is in the scope of some code emission label,
that each scope does not contain loops (so that a finite cost can be associated to it), and that
both branches of a conditional statement are followed by a code emission
statement. Under these assumptions the total execution cost
of the program $\Delta t$ is equal to the sum, over the sequence of emitted
labels, of the costs associated to the labels:
$\Delta t = k(L_1; L_2; L_2; L_3) = k_1 + k_2 + k_2 + k_3$.
Finally, the compiler emits an instrumented version of the source code
(in the r.h.s. of the figure) where label emission statements are replaced
by increments of a global variable \texttt{cost} that, before every increment, holds the
exact number of clock cycles spent by the microprocessor so far:
the difference $\Delta\texttt{cost}$ between the final and initial value of the variable is $\Delta\texttt{cost} = k_1 + k_2 + k_2 + k_3 = \Delta t$. Finally, the
user can employ any available method (e.g.\ Hoare logic, invariant generators,
abstract interpretation, automated provers) to certify that $\Delta\texttt{cost}$
never exceeds a certain bound~\cite{cerco}, which is now a functional property
of the code.
     264
\section{The labelling approach in the presence of function calls}

Let us now consider a simple program written in C that contains a function
pointer call inside the scope of the cost label $L_1$.
\begin{verbatim}
main: EMIT L_1       g: EMIT L_3   EMIT L_1     g: EMIT L_3
      I_1;              I_3;       I_4;            I_6;
      (*f)();           return;    CALL            RETURN
      I_2;                         I_5;
      EMIT L_2                     EMIT L_2
\end{verbatim}
The labelling method works exactly as before, inserting
code emission statements/\texttt{cost} variable increments at the beginning
of every basic block and at the beginning of every function. The compiler
still guarantees that the sequences of labels observed on the two programs are
the same. A new difficulty appears when the compiler needs to statically
analyze the object code to assign a cost to every label. What should the scope
of the label $L_1$ be? After executing the $I_4$ block, the \texttt{CALL}
statement passes control to a function that cannot be determined statically.
Therefore the cost of executing that function's body must be paid by some other label
(hence the requirement that every function starts with a code emission
statement). But what label should pay the cost of the block $I_5$? The only
reasonable answer is $L_1$, i.e.\ \emph{the scope of a label should extend to the
next label emission statement, stepping over function calls}.
     289
This definition of scope is adequate at the source level because
C is a structured language that guarantees that every function call, if it
returns, passes control to the first instruction that follows the call. However,
this is not guaranteed for object code, for the backend languages of a compiler
and, more generally, for unstructured
languages that use a writable control stack to store the return addresses of
calls. For example, $I_6$ could increment by $1$ the return address on the
stack, so that the next \texttt{RETURN} would resume at the second instruction
of $I_5$. The compiler would still be perfectly correct if a random dead-code
instruction were also added just after each \texttt{CALL}. More generally,
\emph{there is no guarantee that a correct compiler that respects the functional
behaviour of a program also respects the calling structure of the source code}.
Without such an assumption, however, it may not be true that the execution cost
of the program is the sum of the costs associated to the emitted labels. In our
example, the cost of $I_5$ is paid by $L_1$, but in place of $I_5$ the processor could execute any other code after $g$ returns.
     305
Obviously, any reasonably written compiler produces object code that behaves
as if the language were structured (i.e.\ function calls and returns are
properly nested and the return addresses on the control stack are never
tampered with). This property, however, is a property of the \emph{runs} of object code
programs, not a property of the object code itself that can easily be verified
statically (like the ones we required for the basic labelling method).
Therefore, we now need to single out those runs whose cost behaviour can be
statically predicted, and we need to prove that every run of a program generated
by our compiler is of that type. We call such runs \emph{structured}, since their
main property is to respect properties that hold for free on the source code
because the source language is structured. Moreover, in order to avoid proving
too many preservation properties of our compiler, we drop the original
requirements on the object code (every instruction must be in the scope of some label, no loops inside a scope, etc.) in favour of the corresponding requirements
for structured runs (a structured run must start with a label emission, no instruction can be executed twice between two emissions, etc.).
     320
We will therefore proceed as follows. In the following section
1) we formally introduce the notion of
structured trace, which captures structured runs in the style of labelled
transition systems; 2) we show that on the object code we can correctly
compute the execution time of a structured run from the sequence of labels
observed; 3) we give unstructured languages a semantics in terms of structured
traces; 4) we show that on the source code we can correctly compute the
execution time of a program if the compiler produces object code whose
runs are weakly similar to the source code runs.
     330
The notion of weak bisimulation for structured traces is a global property,
which is hard to prove formally and much more demanding than the simple forward
simulation required for proofs of preservation of functional properties.
Therefore, in Section~\ref{XXX} we present a set of local simulation
conditions that refine the corresponding conditions for forward simulation and
that are sufficient to guarantee the production of weakly similar traces.
     337
All the definitions and theorems presented in the paper have been formalized
in the interactive theorem prover Matita and are being used to certify
the complexity-preserving compiler developed in the CerCo project~\cite{cerco}.
The formalization can be
found at~\ref{YYY} and heavily relies on algebraic and dependent types for
both structured traces and the definition of weak similarity. In the paper
we did not try to stay close to the formalization. On the contrary,
the definitions given in the paper are the result of a significant
simplification effort, for
the sake of presentation and to ease the re-implementation of the
concepts in a proof assistant not based on the Calculus of Inductive
Constructions. The formalization, however, is heavily commented to allow the
reader to understand its technical details.
     351
     352
====================================================

We briefly sketch here a simplified version of the labelling approach as
introduced in~\cite{easylabelling}. The simplification strengthens the