# Changeset 3604

Timestamp:
Mar 6, 2017, 12:43:28 PM
Message:

fopara intro as a base

File:
1 edited

\usepackage{tikz}
\usetikzlibrary{positioning,calc,patterns,chains,shapes.geometric,scopes}
\lstset{language=C,basicstyle=\tt,basewidth=.5em,lineskip=-1.5pt}
\newlength{\mylength}
\newenvironment{frametxt}%
 {\setlength{\fboxsep}{5pt}
  \setlength{\mylength}{\linewidth}%
  \addtolength{\mylength}{-2\fboxsep}%
  \addtolength{\mylength}{-2\fboxrule}%
  \Sbox
  \minipage{\mylength}%
  \setlength{\abovedisplayskip}{0pt}%
  \setlength{\belowdisplayskip}{0pt}%
 }%
 {\endminipage\endSbox
  $\fbox{\TheSbox}$}
\smartqed

\label{sect.introduction}

% ---------------------------------------------------------------------------- %
% SECTION                                                                       %
% ---------------------------------------------------------------------------- %
\section{Introduction}

%\paragraph{Problem statement.}
Programs can be specified with both functional constraints (what the program
must do) and non-functional constraints (what time, space or other resources
the program may use). In the current state of the art, functional properties
are verified by combining user annotations---preconditions, invariants, and so
on---with a multitude of automated analyses---invariant generators, type
systems, abstract interpretation, theorem proving, and so on---on the
program's high-level source code. By contrast, many non-functional properties
are verified using analyses on low-level object code,
%\footnote{A notable
%  exception is the explicit allocation of heap space in languages like C, which
%  can be handled at the source level.}
but these analyses may then need information about the high-level functional
behaviour of the program that must be reconstructed. Such analysis of
low-level object code has several problems:
\begin{itemize*}
\item It can be hard to deduce the high-level structure of the program after
compiler optimisations. The object code produced by an optimising compiler may
have radically different control flow from the original source program.
\item Techniques that operate on object code are not useful early in the
development process of a program, yet problems with a program's design or
implementation are cheaper to resolve earlier in the process, rather than
later.
\item Parametric cost analysis is very hard: how can we relate a cost that
depends on the execution state, for example the value of a register or a
carry bit, to a cost that the user can understand by looking at the source
code?
\item Performing functional analyses on object code makes it hard for the
programmer to provide information about the program and its expected
execution, leading to a loss of precision in the resulting analyses.
\end{itemize*}

\paragraph{Vision and approach.}
We want to reconcile functional and non-functional analyses: to share
information and perform both at the same time on high-level source code.
%
What has previously prevented this approach is the lack of a uniform and
precise cost model for high-level code: each statement occurrence is compiled
differently, optimisations may change control flow, and the cost of an object
code instruction may depend on the runtime state of hardware components like
pipelines and caches, none of which is visible in the source code. We envision
a new generation of compilers that track program structure through compilation
and optimisation and exploit this information to define a precise, non-uniform
cost model for source code that accounts for runtime state.
With such a cost model we can reduce non-functional verification to the
functional case and exploit the state of the art in automated high-level
verification~\cite{survey}. The techniques currently used by the Worst Case
Execution Time (WCET) community, who perform analyses on object code, are
still available but can be coupled with additional source-level analyses.
Where our approach produces overly complex cost models, safe approximations
can be used to trade precision for simplicity. Finally, source code analysis
can be used early in the development process, when components have been
specified but not implemented, as modularity means that it is enough to
specify the non-functional behaviour of missing components.

\paragraph{Contributions.}
We have developed \emph{the labelling approach}~\cite{labelling}, a technique
to implement compilers that induce cost models on source programs by very
lightweight tracking of code changes through compilation. We have studied how
to formally prove the correctness of compilers implementing this technique,
and have implemented such a compiler from C to object binaries for the 8051
microcontroller for predicting execution time and stack space usage, verifying
it in an interactive theorem prover. As we are targeting an embedded
microcontroller we do not consider dynamic memory allocation.

To demonstrate source-level verification of costs we have implemented a
Frama-C plugin~\cite{framac} that invokes the compiler on a source program and
uses it to generate invariants on the high-level source that correctly model
low-level costs. The plugin certifies that the program respects these costs by
calling automated theorem provers, a new and innovative technique in the field
of cost analysis. Finally, we have conducted several case studies, including
showing that the plugin can automatically compute and certify the exact
reaction time of Lustre~\cite{lustre} data flow programs compiled into C.

\section{Project context and approach}

Formal methods for verifying functional properties of programs have now
reached a level of maturity and automation such that their adoption is slowly
increasing in production environments. For safety critical code, it is
becoming commonplace to combine rigorous software engineering methodologies
and testing with static analyses, taking the strengths of each and mitigating
their weaknesses. Of particular interest are open frameworks for the
combination of different formal methods, where the programs can be
progressively specified and enriched with new safety guarantees: every method
contributes knowledge (e.g. new invariants) that becomes an assumption for
later analysis.

The outlook for verifying non-functional properties of programs (time spent,
memory used, energy consumed) is bleaker.
% and the future seems to be getting even worse.
Most industries verify that real time systems meet their deadlines by simply
performing many runs of the system and timing their execution, computing the
maximum time and adding an empirical safety margin, and claiming the result to
be a bound for the WCET of the program. Formal methods and software to
statically analyse the WCET of programs exist, but they often produce bounds
that are too pessimistic to be useful. Recent advancements in hardware
architecture have focused on improving average case performance, not the
predictability of the worst case.
Execution time is becoming increasingly dependent on execution history and the
internal state of hardware components like pipelines and caches. Multi-core
processors and non-uniform memory models are drastically reducing the
possibility of performing static analysis in isolation, because programs are
less and less time composable. Clock-precise hardware models are necessary for
static analysis, and obtaining them is becoming harder due to the increased
sophistication of hardware design.

Despite these problems, the need for reliable real time systems and programs
is increasing, and there is pressure from the research community for the
introduction of hardware with more predictable behaviour, which would be more
suitable for static analysis. One example, being investigated by the Proartis
project~\cite{proartis}, is to decouple execution time from execution history
by introducing randomisation.

In CerCo~\cite{cerco} we do not address this problem, optimistically assuming
that improvements in low-level timing analysis or architecture will make
verification feasible in the longer term. Instead, the main objective of our
work is to bring together the static analysis of functional and non-functional
properties, which in the current state of the art are independent activities
with limited exchange of information: while the functional properties are
verified on the source code, the analysis of non-functional properties is
performed on object code to exploit clock-precise hardware models.

\subsection{Current object-code methods}

Analysis currently takes place on object code for two main reasons. First,
there cannot be a uniform, precise cost model for source code instructions (or
even basic blocks). During compilation, high level instructions are broken up
and reassembled in context-specific ways so that identifying a fragment of
object code with a single high level instruction is infeasible. Even the
control flow of the object and source code can be very different as a result
of optimisations, for example aggressive loop optimisations may completely
transform source level loops.

Despite the lack of a uniform, compilation- and program-independent cost model
on the source language, the literature on the analysis of non-asymptotic
execution time on high level languages assuming such a model is growing and
gaining momentum. However, unless we provide a replacement for such cost
models, this literature's future practical impact looks to be minimal. Some
hope has been provided by the EmBounded project~\cite{embounded}, which
compositionally compiles high-level code to a byte code that is executed by an
interpreter with guarantees on the maximal execution time spent for each byte
code instruction. This provides a uniform model at the expense of the model's
precision (each cost is a pessimistic upper bound) and the performance of the
executed code (because the byte code is interpreted compositionally rather
than compiled in a fully non-compositional way).

The second reason to perform the analysis on the object code is that bounding
the worst case execution time of small code fragments in isolation (e.g. loop
bodies) and then adding up the bounds yields very poor estimates, as no
knowledge of the hardware state prior to executing the fragment can be
assumed. By analysing longer runs the bound obtained becomes more precise
because the lack of information about the initial state has a relatively small
impact.
To calculate the cost of an execution, value and control flow analyses are
required to bound the number of times each basic block is executed. Currently,
state of the art WCET analysis tools, such as AbsInt's aiT
toolset~\cite{absint}, perform these analyses on object code, where the logic
of the program is harder to reconstruct and most information available at the
source code level has been lost; see~\cite{stateart} for a survey. Imprecision
in the analysis can lead to useless bounds. To improve precision, the tools
ask the user to provide constraints on the object code control flow, usually
in the form of bounds on the number of iterations of loops or linear
inequalities on them. This requires the user to manually link the source and
object code, translating their assumptions on the source code (which may be
wrong) into object code constraints. The task is error prone and hard,
especially in the presence of complex compiler optimisations.

Traditional techniques for WCET that work on object code are also affected by
another problem: they cannot be applied before the generation of the object
code. Functional properties can be analysed in early development stages, while
analysis of non-functional properties may come too late to avoid expensive
changes to the program architecture.

\subsection{CerCo's approach}

In CerCo we propose a radically new approach to the problem: we reject the
idea of a uniform cost model and we propose that the compiler, which knows how
the code is translated, must return the cost model for basic blocks of high
level instructions. It must do so by keeping track of control flow
modifications so that they can be reversed, and by interfacing with processor
timing analysis.

By embracing compilation, instead of avoiding it like EmBounded did, a CerCo
compiler can both produce efficient code and return costs that are as precise
as the processor timing analysis can be. Moreover, our costs can be
parametric: the cost of a block can depend on actual program data, on a
summary of the execution history, or on an approximated representation of the
hardware state. For example, loop optimisations may assign a cost to a loop
body that is a function of the number of iterations performed. As another
example, the cost of a block may be a function of the vector of stalled
pipeline states, which can be exposed in the source code and updated at each
basic block exit. It is parametricity that allows one to analyse small code
fragments without losing precision. In the analysis of the code fragment we do
not have to ignore the initial hardware state; rather, we may assume that we
know exactly which state (or mode, as the WCET literature calls it) we are in.

The CerCo approach has the potential to dramatically improve the state of the
art. Performing control and data flow analyses on the source code completely
avoids the error prone translation of invariants. Instead, this work is done
at the source level using tools of the user's choice. Any available technique
for the verification of functional properties can be immediately reused and
multiple techniques can collaborate to infer and certify cost invariants for
the program. There are no limitations on the types of loops or data structures
involved. Parametric cost analysis becomes the default, with non-parametric
bounds used as a last resort when the user decides to trade precision for a
simpler analysis.
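To make the idea of a parametric cost concrete, the following sketch shows the
shape that source-level cost instrumentation takes for a simple loop. The
function \lstinline'sum' and the cycle figures in it are invented for
illustration only, while the \lstinline'__cost' counter and the
\lstinline'__cost_incr' helper mirror the instrumentation produced by the
CerCo compiler in \autoref{sec:workflow}. Because a fixed cost is charged per
iteration, any sound bound on the counter is necessarily a function of
\lstinline'n' rather than a single constant.
\begin{lstlisting}
/* Illustrative sketch only: all cycle figures are invented. */
int __cost = 0;                                /* elapsed clock cycles */
void __cost_incr(int incr) { __cost += incr; }

int sum(int *a, int n) {
  int i, s = 0;
  __cost_incr(25);                   /* entry block (invented cost) */
  for (i = 0; i < n; i++) {
    __cost_incr(40);                 /* one loop iteration (invented cost) */
    s += a[i];
  }
  __cost_incr(10);                   /* exit block (invented cost) */
  return s;                          /* hence __cost grows by 35 + 40*n */
}
\end{lstlisting}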
\emph{A priori}, no technique previously used in traditional WCET is lost:
processor timing analyses can be used by the compiler on the object code, and
the rest can be applied at the source code level. Our approach can also work
in the early stages of development by axiomatically attaching costs to
unimplemented components.

Software used to verify properties of programs must be as bug free as
possible. The trusted code base for verification consists of the code that
needs to be trusted to believe that the property holds. The trusted code base
of state-of-the-art WCET tools is very large: one needs to trust the control
flow analyser, the linear programming libraries used, and the formal models of
the hardware under analysis, for example. In CerCo we are moving the control
flow analysis to the source code and we are also introducing a non-standard
compiler. To reduce the trusted code base, we implemented a prototype and a
static analyser in an interactive theorem prover, which was used to certify
that the costs added to the source code are indeed those incurred by the
hardware. Formal models of the hardware and of the high level source languages
were also implemented in the interactive theorem prover. Control flow analysis
on the source code has been obtained using invariant generators, tools to
produce proof obligations from generated invariants and automatic theorem
provers to verify the obligations. If these tools are able to generate proof
traces that can be independently checked, the only remaining component that
enters the trusted code base is an off-the-shelf invariant generator which, in
turn, can be proved correct using an interactive theorem prover. Therefore we
achieve the double objective of allowing the use of more off-the-shelf
components (e.g. provers and invariant generators) whilst reducing the trusted
code base.

%\paragraph{Summary of the CerCo objectives.} To summarize, the goal of CerCo is
% to reconcile functional with non-functional analysis by performing them together
% on the source code, sharing common knowledge about execution invariants. We want
% to achieve the goal implementing a new generation of compilers that induce a
% parametric, precise cost model for basic blocks on the source code. The compiler
% should be certified using an interactive theorem prover to minimize the trusted
% code base of the analysis. Once the cost model is induced, off-the-shelf tools
% and techniques can be combined together to infer and prove parametric cost
% bounds.
%The long term benefits of the CerCo vision are expected to be:
%1. the possibility to perform static analysis during early development stages
%2.  parametric bounds made easier
%3.  the application of off-the-shelf techniques currently unused for the
% analysis of non-functional properties, like automated proving and type systems
%4. simpler and safer interaction with the user, that is still asked for
% knowledge, but on the source code, with the additional possibility of actually
% verifying the provided knowledge
%5. a reduced trusted code base
%6. the increased accuracy of the bounds themselves.
%
%The long term success of the project is hindered by the increased complexity of
% the static prediction of the non-functional behaviour of modern hardware. In the
% time frame of the European contribution we have focused on the general
% methodology and on the difficulties related to the development and certification
% of a cost model inducing compiler.
\section{The typical CerCo workflow}
\label{sec:workflow}

\begin{figure}[!t]
\begin{tabular}{l@{\hspace{0.2cm}}|@{\hspace{0.2cm}}l}
\begin{lstlisting}
char a[] = {3, 2, 7, 14};
char threshold = 4;

int count(char *p, int len) {
  char j;
  int found = 0;
  for (j=0; j < len; j++) {
    if (*p <= threshold)
      found++;
    p++;
  }
  return found;
}

int main() {
  return count(a,4);
}
\end{lstlisting}
&
%  $\vcenter{\includegraphics[width=7.5cm]{interaction_diagram.pdf}}$
\begin{tikzpicture}[
    baseline={([yshift=-.5ex]current bounding box)},
    element/.style={draw, text width=1.6cm, on chain, text badly centered},
    >=stealth
  ]
  {[start chain=going below, node distance=.5cm]
    \node [element] (cerco) {CerCo\\compiler};
    \node [element] (cost)  {CerCo\\cost plugin};
    {[densely dashed]
      \node [element] (ded)   {Deductive\\platform};
      \node [element] (check) {Proof\\checker};
    }
  }
  \coordinate [left=3.5cm of cerco] (left);
  {[every node/.style={above, text width=3.5cm, text badly centered, font=\scriptsize}]
    \draw [<-] ([yshift=-1ex]cerco.north west) coordinate (t) --
      node {C source} (t-|left);
    \draw [->] (cerco) -- (cost);
    \draw [->] ([yshift=1ex]cerco.south west) coordinate (t) --
      node {C source+\color{red}{cost annotations}} (t-|left) coordinate (cerco out);
    \draw [->] ([yshift=1ex]cost.south west) coordinate (t) --
      node {C source+\color{red}{cost annotations}\\+\color{blue}{synthesized assertions}}
      (t-|left) coordinate (out);
    {[densely dashed]
      \draw [<-] ([yshift=-1ex]ded.north west) coordinate (t) --
        node {C source+\color{red}{cost annotations}\\+\color{blue}{complexity assertions}}
        (t-|left) coordinate (ded in) .. controls +(-.5, 0) and +(-.5, 0) .. (out);
      \draw [->] ([yshift=1ex]ded.south west) coordinate (t) --
        node {complexity obligations} (t-|left) coordinate (out);
      \draw [<-] ([yshift=-1ex]check.north west) coordinate (t) --
        node {complexity proof} (t-|left) .. controls +(-.5, 0) and +(-.5, 0) .. (out);
      \draw [dash phase=2.5pt] (cerco out) .. controls +(-1, 0) and +(-1, 0) .. (ded in);
    }}
  %% user:
  % head
  \draw (current bounding box.west) ++(-.2,.5) circle (.2) ++(0,-.2) -- ++(0,-.1) coordinate (t);
  % arms
  \draw (t) -- +(-.3,-.2);
  \draw (t) -- +(.3,-.2);
  % body
  \draw (t) -- ++(0,-.4) coordinate (t);
  % legs
  \draw (t) -- +(-.2,-.6);
  \draw (t) -- +(.2,-.6);
\end{tikzpicture}
\end{tabular}
\caption{On the left: C code to count the number of elements in an array that
are less than or equal to a given threshold. On the right: CerCo's interaction
diagram. Components provided by CerCo are drawn with a solid border.}
\label{test1}
\end{figure}

We illustrate the workflow we envisage (on the right of~\autoref{test1}) on an
example program (on the left of~\autoref{test1}). The user writes the program
and feeds it to the CerCo compiler, which outputs an instrumented version of
the same program that updates global variables recording the elapsed execution
time and the stack space usage. The red lines in \autoref{itest1}, which
introduce variables, functions and function calls starting with
\lstinline'__cost' and \lstinline'__stack', are the instrumentation added by
the compiler. For example, the two calls at the start of \lstinline'count' say
that 4 bytes of stack are required, and that it takes 111 cycles to reach the
next cost annotation (in the loop body). The compiler measures these on the
labelled object code that it generates. The annotated program can then be
enriched with complexity assertions in the style of Hoare logic, which are
passed to a deductive platform (in our case Frama-C).
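The bookkeeping behind these annotations is deliberately minimal. The fragment
below, reproduced from the instrumented program in \autoref{itest1}, is all
the machinery the compiler emits to maintain the counters: one global per
resource, plus a helper that every emitted annotation calls.
\begin{lstlisting}
/* Globals emitted by the compiler: elapsed cycles, current and maximum
   stack usage; the initial values account for initialising global data. */
int __cost = 33, __stack = 5, __stack_max = 5;

/* Every time annotation is a call to this helper. */
void __cost_incr(int incr) { __cost += incr; }

/* Every stack annotation updates both the current and the maximum usage. */
void __stack_incr(int incr) {
  __stack += incr;
  __stack_max = __stack_max < __stack ? __stack : __stack_max;
}
\end{lstlisting}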
We provide, as a Frama-C cost plugin, a simple automatic synthesiser for
complexity assertions, which can be overridden by the user to increase or
decrease accuracy. These are the blue comments starting with \lstinline'/*@'
in \autoref{itest1}, written in Frama-C's specification language, ACSL. From
the assertions, a general purpose deductive platform produces proof
obligations which in turn can be closed by automatic or interactive provers,
ending in a proof certificate.
% NB: if you try this example with the live CD you should increase the timeout
Twelve proof obligations are generated from~\autoref{itest1} (to prove that
the loop invariant holds after one execution if it holds before, to prove that
the whole program execution takes at most 1358 cycles, and so on). Note that
the synthesised time bound for \lstinline'count',
$178+214*(1+\text{\lstinline'len'})$ cycles, is parametric in the length of
the array. For the 4-element array of \autoref{test1} this amounts to
$178+214\times 5=1248$ cycles which, together with the 110 cycles that
\lstinline'main' itself consumes, accounts for the 1358 cycle bound above. The
CVC3 prover closes all obligations within half a minute on routine commodity
hardware. A simpler non-parametric version can be solved in a few seconds.
%
% \fvset{commandchars=\\\{\}}
\lstset{morecomment=[s][\color{blue}]{/*@}{*/},
  moredelim=[is][\color{blue}]{$}{$},
  moredelim=[is][\color{red}]{|}{|},
  lineskip=-2pt}
\begin{figure}[!p]
\begin{lstlisting}
|int __cost = 33, __stack = 5, __stack_max = 5;|
|void __cost_incr(int incr) { __cost += incr; }|
|void __stack_incr(int incr) { __stack += incr; __stack_max = __stack_max < __stack ? __stack : __stack_max; }|

char a[4] = {3, 2, 7, 14};
char threshold = 4;

/*@ behavior stack_cost:
      ensures __stack_max <= __max(\old(__stack_max), 4+\old(__stack));
      ensures __stack == \old(__stack);
    behavior time_cost:
      ensures __cost <= \old(__cost)+(178+214*__max(1+\at(len,Pre), 0));
*/
int count(char *p, int len) {
  char j;  int found = 0;
  |__stack_incr(4);  __cost_incr(111);|
  $__l: /* internal */$
  /*@ for time_cost: loop invariant
        __cost <= \at(__cost,__l)+
          214*(__max(\at((len-j)+1,__l), 0)-__max(1+(len-j), 0));
      for stack_cost: loop invariant
        __stack_max == \at(__stack_max,__l);
      for stack_cost: loop invariant
        __stack == \at(__stack,__l);
      loop variant len-j;
  */
  for (j = 0; j < len; j++) {
    |__cost_incr(78);|
    if (*p <= threshold) { |__cost_incr(136);| found ++; }
    else { |__cost_incr(114);| }
    p ++;
  }
  |__cost_incr(67);  __stack_incr(-4);|
  return found;
}

/*@ behavior stack_cost:
      ensures __stack_max <= __max(\old(__stack_max), 6+\old(__stack));
      ensures __stack == \old(__stack);
    behavior time_cost:
      ensures __cost <= \old(__cost)+1358;
*/
int main(void) {
  int t;
  |__stack_incr(2);  __cost_incr(110);|
  t = count(a,4);
  |__stack_incr(-2);|
  return t;
}
\end{lstlisting}
\caption{The instrumented version of the program in \autoref{test1}, with
instrumentation added by the CerCo compiler in red and cost invariants added
by the CerCo Frama-C plugin in blue. The \lstinline'__cost',
\lstinline'__stack' and \lstinline'__stack_max' variables hold the elapsed
time in clock cycles and the current and maximum stack usage. Their initial
values account for the clock cycles spent initialising the global data before
calling \lstinline'main' and the space required by global data (and thus
unavailable for the stack).}
\label{itest1}
\end{figure}

\begin{acknowledgements}
\end{acknowledgements}