\documentclass{llncs}

\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[english]{babel}
\usepackage{color}
\usepackage{fancybox}
\usepackage{graphicx}
\usepackage[colorlinks]{hyperref}
\usepackage{hyphenat}
\usepackage[utf8x]{inputenc}
\usepackage{listings}
\usepackage{mdwlist}
\usepackage{microtype}
\usepackage{stmaryrd}
\usepackage{url}

%\renewcommand{\verb}{\lstinline}
%\def\lstlanguagefiles{lst-grafite.tex}
%\lstset{language=Grafite}

\newlength{\mylength}
\newenvironment{frametxt}%
        {\setlength{\fboxsep}{5pt}
                \setlength{\mylength}{\linewidth}%
                \addtolength{\mylength}{-2\fboxsep}%
                \addtolength{\mylength}{-2\fboxrule}%
                \Sbox
                \minipage{\mylength}%
                        \setlength{\abovedisplayskip}{0pt}%
                        \setlength{\belowdisplayskip}{0pt}%
                }%
                {\endminipage\endSbox
                        \[\fbox{\TheSbox}\]}

\title{Certified Complexity (CerCo)\thanks{The project CerCo acknowledges the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET-Open grant number: 243881}}
\author{Roberto M. Amadio$^{3,4}$ \and Nicolas Ayache$^4$ \and François Bobot$^4$ \and Jaap Boender$^1$ \and Brian Campbell$^2$ \and Ilias Garnier$^2$ \and
Antoine Madet$^4$ \and James McKinna$^2$ \and
Dominic P. Mulligan$^1$ \and Mauro Piccolo$^1$ \and Yann R\'egis-Gianas$^4$ \and
Claudio Sacerdoti Coen$^1$ \and Ian Stark$^2$ \and Paolo Tranquilli$^1$}
\institute{Dipartimento di Informatica - Scienza e Ingegneria, Universit\`a di Bologna \and
LFCS, University of Edinburgh
\and INRIA (Team $\pi$r2)
\and
Universit\'e Paris Diderot
}

\bibliographystyle{splncs03}

\begin{document}

\maketitle

\begin{abstract}
This paper provides an overview of the just completed FET-Open project CerCo
(Certified Complexity), 2010--2013. The main achievement of the project has been
the introduction of a technique to perform static analysis of non-functional
properties of programs (time, space) at the source level, without losing
accuracy and with a small trusted code base. The main software component
developed is a formally certified, complexity-certifying compiler. The compiler
translates the source code into the object code and into an instrumented copy of the
source code. The latter exposes at
the source level, and tracks precisely, the actual (non-asymptotic)
computational cost. Untrusted invariant generators and trusted theorem provers
are then used to compute and certify the parametric execution time of the
code.
\end{abstract}

% ---------------------------------------------------------------------------- %
% SECTION                                                                      %
% ---------------------------------------------------------------------------- %
\section{Introduction}

\paragraph{Problem statement:} Computer programs can be specified with both functional constraints (what a program must do) and non-functional constraints (e.g. what resources, such as time and space, the program may use). In the current state of the art, functional properties are verified for high-level source code by combining user annotations (e.g. preconditions and invariants) with a multitude of automated analyses (invariant generators, type systems, abstract interpretation, theorem proving, etc.). By contrast, non-functional properties are generally checked on low-level object code, yet they also demand information about high-level functional behavior that must somehow be recreated.

This situation presents several problems: 1) it can be hard to infer this high-level structure in the presence of compiler optimizations; 2) techniques working on object code are not useful in early development, yet problems detected later are more expensive to tackle; 3) parametric cost analysis is very hard: how can we translate a cost that depends on the execution state (e.g. the value of a register or a carry bit) into a cost that the user can understand by looking at the source code?; 4) functional analysis performed only on object code leaves out any contribution from the programmer, giving results less precise than those from source code and reducing the precision of the cost estimates computed.

\paragraph{CerCo vision and approach:} We propose a reconciliation of functional and non-functional analysis: to share information and perform both at the same time on source code.

What has previously prevented this approach is the lack of a uniform and precise cost model for high-level code: 1) each statement occurrence is compiled differently and optimizations may change control flow; 2) the cost of an object code instruction may depend on the runtime state of hardware components like pipelines and caches, which is not visible in the source code.

To solve the issue, we envision a new generation of compilers able to keep track of program structure through compilation and optimisation, and able to exploit that information to define a cost model for source code that is precise, non-uniform, and accounts for runtime state. With such a source-level cost model we can reduce non-functional verification to the functional case and exploit the state of the art in automated high-level verification. The techniques previously used by WCET analysers on object code remain available, but can now be coupled with additional source-level analysis. Where the approach produces precise cost models that are too complex to reason about, safe approximations can be used to trade precision for simplicity. Finally, analysis on source code can be performed even during early development stages, when components have been specified but not yet implemented: source code modularity means that it is enough to specify the non-functional behavior of unimplemented components.

\paragraph{Contributions:} We have developed a technique, the labelling approach, to implement compilers that induce cost models on source programs by very lightweight tracking of code changes through compilation. We have studied how to formally prove the correctness of compilers implementing the technique. We have implemented such a compiler from C to object binaries for the 8051 microcontroller, and verified it in an interactive theorem prover. We have implemented a Frama-C plug-in that invokes the compiler on a source program and uses this to generate invariants on the high-level source that correctly model low-level costs. Finally, the plug-in certifies that the program respects these costs by calling automated theorem provers, a new and innovative technique in the field of cost analysis. As a case study, we show how the plug-in can automatically compute and certify the exact reaction time of Lustre dataflow programs compiled into C.

\section{Project context and objectives}
Formal methods for the verification of functional properties of programs have reached a level of maturity and automation that is allowing a slow but steadily increasing adoption in production environments. For safety-critical code, it is becoming common to combine rigorous software engineering methodologies and testing with static analysis, in order to benefit from the strong points of each approach and mitigate the weaknesses. Particularly interesting are open frameworks for the combination of different formal methods, where programs can be progressively specified and are continuously enriched with new safety guarantees: every method contributes knowledge (e.g. new invariants) that becomes an assumption for later analysis.

The scenario for the verification of non-functional properties (time spent, memory used, energy consumed) is bleaker, and the future seems to be getting even worse. Most industries verify that real-time systems meet their deadlines simply by measuring the time spent in many runs of the system, computing the maximum and adding an empirical safety margin, and claiming the result to be a bound on the Worst Case Execution Time (WCET) of the program. Formal methods and software to statically analyse the WCET of programs exist, but they often produce bounds that are too pessimistic to be useful. Recent advances in hardware architecture are all focused on improving average-case performance, not the predictability of the worst case. Execution time is becoming more and more dependent on the execution history, which determines the internal state of hardware components like pipelines and caches. Multi-core processors and non-uniform memory models are drastically reducing the possibility of performing static analysis in isolation, because programs are less and less time-composable. Clock-precise hardware models are necessary for static analysis, and obtaining them is becoming harder as a consequence of the increased hardware complexity.

Despite this scenario, the need for reliable real-time systems and programs is increasing, and there is increasing pressure from the research community towards the differentiation of hardware. The aim is the introduction of alternative hardware whose behavior would be more predictable and more amenable to static analysis, for example by decoupling execution time from the execution history through randomization.

In the CerCo project we do not try to address this problem, optimistically assuming that static analysis of the non-functional properties of programs will again become feasible in the long term. The main objective of our work is instead to bring together static analysis of functional and non-functional properties, which, according to the current state of the art, are completely independent activities with limited exchange of information: while functional properties are verified on the source code of programs written in high-level languages, the analysis of non-functional properties is entirely performed on the object code in order to exploit clock-precise hardware models.

There are two main reasons why the analysis is currently performed on the object code. The first is the obvious lack of a uniform, precise cost model for source code instructions (or even basic blocks). During compilation, high-level instructions are torn apart and reassembled in context-specific ways, so that there is no way to identify a fragment of object code with a single high-level instruction. Even the control flow of the object and source code can be very different as a result of optimizations. For instance, loop optimizations may reduce the number of iterations of loops or change their order, and may assign different object code, and thus different costs, to different iterations. Despite the lack of a uniform, compilation- and program-independent cost model on the source language, the literature on the analysis of non-asymptotic execution time on high-level languages that assumes such a model is growing and gaining momentum. Its practical usefulness is doomed to be minimal unless we can provide a replacement for such cost models. Some hope has been provided by the EmBounded project (FP6 FET-Open STReP, IST-510255), which compositionally compiles high-level code to a byte code that is executed by an emulator with guarantees on the maximal execution time spent for each byte code instruction. The approach indeed provides a uniform model, at the price of losing precision in the model (each cost is a pessimistic upper bound) and performance of the executed code (because the byte code is emulated compositionally instead of performing a fully non-compositional compilation).

The second reason to perform the analysis on the object code is that bounding the worst case execution time of small code fragments in isolation (e.g. loop bodies) and then adding up the bounds yields very poor estimates, because no knowledge of the hardware state can be assumed when executing the fragment. By analysing longer runs (e.g. by fully unrolling loops) the bound obtained becomes more precise, because the lack of knowledge of the initial state has less effect on longer computations.
In CerCo we propose a radically new approach to the problem: we reject the idea of a uniform cost model and we propose that the compiler, which knows how the code is translated, must return the cost model for basic blocks of high-level instructions. It must do so by keeping track of the control flow modifications in order to reverse them, and by interfacing with static analysers.

By embracing compilation, instead of avoiding it as EmBounded did, a CerCo compiler can at the same time produce efficient code and return costs that are as precise as the static analysis can be. Moreover, we allow our costs to be parametric: the cost of a block can depend on actual program data, on a summary of the execution history, or on an approximated representation of the hardware state. For example, loop optimizations assign to a loop body a cost that is a function of the number of iterations performed. As another example, the cost of a loop body may be a function of the vector of stalled pipeline states, which can be exposed in the source code and updated at each basic block exit. It is parametricity that allows us to analyse small code fragments without losing precision: in the analysis of the code fragment we do not have to be ignorant of the initial hardware state. On the contrary, we can assume that we know exactly which state (or mode, as the WCET literature calls it) we are in.

The cost of an execution is always the sum of the costs of the basic blocks multiplied by the number of times they are executed, which is a functional property of the program. Therefore, in order to perform (parametric) time analysis of programs, it is necessary to combine a cost model with control and data flow analysis. Current state-of-the-art WCET technology performs the analysis on the object code, where the logic of the program is harder to reconstruct and most information available in the source code (e.g. types) has been lost. Imprecision in the analysis leads to useless bounds. To improve precision, the tools ask the user to provide constraints on the object code control flow, usually in the form of bounds on the number of iterations of loops or linear inequalities over them. This requires the user to manually link the source and object code, translating their often incorrect assumptions about the source code into object code constraints. The task is error-prone and, in the presence of complex optimizations, may be very hard if not impossible.
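
Schematically, writing $c(b)$ for the cost of a basic block $b$ and $n_b$ for the number of times $b$ is executed in a given run (notation introduced here only for illustration), the cost of the run is
\[
  \mathrm{cost}(\mathit{run}) \;=\; \sum_{b} c(b) \cdot n_b ,
\]
where the execution counts $n_b$ depend only on the functional behavior of the program, while the per-block costs $c(b)$ (possibly parametric) come from the compiler-induced cost model.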

The CerCo approach has the potential to dramatically improve the state of the art. By performing control and data flow analysis on the source code, the error-prone translation of invariants is completely avoided. It is in fact performed by the compiler itself when it induces the cost model on the source language. Moreover, any available technique for the verification of functional properties can be immediately reused, and multiple techniques can collaborate to infer and certify cost invariants for the program. Parametric cost analysis becomes the default, with non-parametric bounds used as a last resort when trading the complexity of the analysis against its precision. A priori, no technique previously used in traditional WCET is lost (e.g. full unrolling for non-parametric costs): they can simply be applied to the source code.

Traditional techniques for WCET that work on object code are also affected by another problem: they cannot be applied before the generation of the object code. Therefore, while the analysis of functional properties of programs can already start in early development stages, by the time the analysis of non-functional properties becomes possible the cost of changes to the program architecture can already be very high. Our approach already works in early development stages, by axiomatically attaching costs to components that are not implemented yet.

All software used to verify properties of programs must be as bug-free as possible. The trusted code base for verification consists of the code that needs to be trusted in order to believe that the property holds. The trusted code base of state-of-the-art WCET tools is very large: one needs to trust the control flow analyser, the linear programming libraries it uses, and also the formal models of the hardware. In CerCo we move the control flow analysis to the source code and we also introduce a non-standard compiler. To reduce the trusted code base, we implemented a prototype compiler and a static analyser in an interactive theorem prover, which was used to certify that the cost computed on the source code is indeed the one actually spent by the hardware. Formal models of the hardware and of the high-level source languages were also implemented in the interactive theorem prover. Control flow analysis on the source code has been obtained using invariant generators, tools to produce proof obligations from generated invariants, and automatic theorem provers to verify the obligations. If the automatic provers are able to generate proof traces that can be independently checked, the only remaining component that enters the trusted code base is an off-the-shelf invariant generator which, in turn, can be proved correct using an interactive theorem prover. Therefore we achieve the double objective of using more off-the-shelf components (e.g. provers and invariant generators) while at the same time reducing the trusted code base.

\paragraph{Summary of the CerCo objectives.} To summarize, the goal of CerCo is to reconcile functional with non-functional analysis by performing them together on the source code, sharing common knowledge about execution invariants. We want to achieve this goal by implementing a new generation of compilers that induce a parametric, precise cost model for basic blocks on the source code. The compiler should be certified using an interactive theorem prover to minimize the trusted code base of the analysis. Once the cost model is induced, off-the-shelf tools and techniques can be combined to infer and prove parametric cost bounds.
The long term benefits of the CerCo vision are expected to be:
\begin{enumerate}
\item the possibility to perform static analysis during early development stages;
\item parametric bounds made easier to obtain;
\item the application of off-the-shelf techniques currently unused for the analysis of non-functional properties, like automated proving and type systems;
\item simpler and safer interaction with the user, who is still asked for knowledge, but about the source code, with the additional possibility of actually verifying the provided knowledge;
\item a reduced trusted code base;
\item increased accuracy of the bounds themselves.
\end{enumerate}

The long term success of the project is hindered by the increased complexity of statically predicting the non-functional behavior of modern hardware. In the time frame of the European contribution we have focused on the general methodology and on the difficulties related to the development and certification of a cost-model-inducing compiler.

\section{Main S\&T results}
We now review the main S\&T results achieved in the CerCo project. We address them in the following order:
\begin{enumerate}
\item \emph{The (basic) labelling approach.} This is the main technique that underlies all the developments in CerCo. It allows us to track the evolution of basic blocks during compilation, in order to propagate the cost model from the object code to the source code without losing precision in the process.
\item \emph{Dependent labelling.} The basic labelling approach assumes a bijective mapping between object code and source code O(1) blocks (called basic blocks). This assumption is violated by many program optimizations (e.g. loop peeling and loop unrolling). It also assumes the cost model computed on the object code to be non-parametric: every block must be assigned a cost that does not depend on the state. This assumption is violated by stateful hardware like pipelines, caches and branch prediction units. The dependent labelling approach is an extension of the basic labelling approach that allows parametric cost models to be handled. We showed how the method allows us to deal with loop optimizations and pipelines, and we speculated about its application to caches.
\item \emph{Techniques to exploit the induced cost model.} Every technique used for the analysis of functional properties of programs can be adapted to analyse the non-functional properties of the source code instrumented by compilers that implement the labelling approach. In order to gain confidence in this claim, we showed how to implement a cost invariant generator combining abstract interpretation with separation logic ideas. We integrated everything in the Frama-C modular architecture, in order to automatically compute proof obligations from the functional and cost invariants and to use automatic theorem provers to prove them. This is an example of a new technique that is not currently exploited in WCET analysis. It also shows how precise functional invariants benefit the non-functional analysis too. Finally, we show how to fully automatically analyse the reaction time of Lustre nodes that are first compiled to C using a standard Lustre compiler and then processed by a C compiler that implements the labelling approach.
\item \emph{The CerCo compiler.} This is a compiler from a large subset of the C language to 8051/8052 object code. The compiler implements the labelling approach and integrates a static analyser for 8051 executables. The latter can be implemented easily and does not require dependent costs, because the 8051 microprocessor is a very simple processor whose instructions generally have a constant cost. It was picked to separate the issue of exact propagation of the cost model from the target to the source language from the orthogonal problem of the static analysis of object code that requires approximations or dependent costs. The compiler comes in several versions: some are prototypes implemented directly in OCaml, and they implement both the basic and dependent labelling approaches; the final version is extracted from a Matita certification and at the moment implements only the basic approach.
\item \emph{A formal cost certification of the CerCo compiler.} We implemented the CerCo compiler in the interactive theorem prover Matita and have formally certified that the cost model induced on the source code correctly and precisely predicts the object code behavior. We actually induce two cost models, one for time and one for stack space used. We show the correctness of the prediction only for those programs that do not exhaust the available machine stack space, a property that, thanks to the stack cost model, we can statically analyse on the source code using our Frama-C tool. The preservation of functional properties we take as an assumption, not itself formally proved in CerCo. Other projects have already certified the preservation of functional semantics in similar compilers, and we have not attempted to directly repeat that work. In order to complete the proof for non-functional properties, we have introduced a new semantics for programming languages based on a new kind of structured observables, with the related notions of forward similarity and the study of the intensional consequences of forward similarity. We have also introduced a unified representation for back-end intermediate languages that was exploited to provide a uniform proof of forward similarity.
\end{enumerate}

\subsection{The (basic) labelling approach.}
\paragraph{Problem statement:} given a source program $P$, we want to obtain an instrumented source program $P'$, written in the same programming language, and the object code $O$, such that: 1) $P'$ is obtained by inserting into $P$ some additional instructions to update global cost information, like the amount of time spent during execution or the maximal stack space required; 2) $P$ and $P'$ must have the same functional behavior, i.e., they must produce the same output and intermediate observables; 3) $P$ and $O$ must have the same functional behavior; 4) after execution, and at interesting points during execution, the cost information computed by $P'$ must be an upper bound of the one spent by $O$ to perform the corresponding operations (soundness property); 5) the difference between the costs computed by $P'$ and the execution costs of $O$ must be bounded by a program-dependent constant (precision property).
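
As a purely illustrative summary (the notation $c_{P'}(r)$ and $c_O(r)$, for the cost recorded by $P'$ and the cost actually spent by $O$ on corresponding runs $r$, is introduced here only for exposition), the soundness and precision properties together amount to requiring
\[
  c_O(r) \;\le\; c_{P'}(r) \;\le\; c_O(r) + k
\]
for some program-dependent constant $k$.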

\paragraph{The labelling software components:} we solve the problem in four stages, implemented by four software components that are used in sequence; a small illustrative example follows the list.
\begin{enumerate}
\item The first component labels the source program $P$ by injecting label emission statements in appropriate positions. The set of labels together with their positions is called the labelling. The syntax and semantics of the source programming language are augmented with label emission statements. The statement ``EMIT $\ell$'' behaves like a NOP instruction that does not affect the program state or control flow, but it changes the semantics by making the label $\ell$ observable. Therefore the observables of a run of a program become a stream of label emissions $\ell_1 \ldots \ell_n$, called the program trace. We clarify later the conditions that the labelling must respect.
\item The second component is a labelling-preserving compiler. It can be obtained from an existing compiler by adding label emission statements to every intermediate language and by propagating label emission statements during compilation. The compiler is correct if it preserves both the functional behavior of the program and the generated traces. We may also ask that the function that erases the cost emission statements commutes with compilation. This optional property guarantees that the labelling does not interfere with the original compiler behavior. A further set of requirements will be added later.
\item The third component is a static object code analyser. It takes as input the object code augmented with label emission statements and computes, for every such statement, its scope. The scope of a label emission statement is the tree of instructions that may be executed after the statement and before a new label emission statement is found. It is a tree and not a sequence because the scope may contain a branching statement. In order to guarantee that such a finite tree exists, the object code must not contain any loop that is not broken by a label emission statement. This is the first requirement of a sound labelling. The analyser fails if the labelling is unsound. For each scope, the analyser computes an upper bound of the execution time required by the scope, using the maximum of the costs of the two branches in case of a conditional statement. Finally, the analyser computes the cost of a label by taking the maximum of the costs of the scopes of every statement that emits that label.
\item The fourth and last component takes as input the cost model computed at step 3 and the labelled code computed at step 1. It outputs a source program obtained by replacing each label emission statement with a statement that increments the global cost variable by the cost associated with the label in the cost model. The obtained source code is the instrumented source code.
\end{enumerate}
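
To make the stages concrete, the following C fragment is a purely illustrative sketch: the names (\texttt{EMIT}, \texttt{\_\_cost}, the labels \texttt{\_l1}, \texttt{\_l2}, \texttt{\_l3}) and the cost constants are invented here for exposition and do not reproduce the exact output of the CerCo tools.
\begin{lstlisting}[language=C]
/* Label emission: in the augmented language it is a NOP that only
   makes the label observable; here it is modelled by an empty macro. */
#define EMIT(l)

/* Labelled source program. */
int f_labelled (int x) {
  EMIT(_l1);
  if (x > 0) { EMIT(_l2); x = x + 1; }
  else       { EMIT(_l3); x = x - 1; }
  return x;
}

/* Instrumented source program: each emission is replaced by an
   increment of the global cost variable by the (made-up) constant
   that the static analyser assigned to the corresponding label. */
int __cost = 0;

int f_instrumented (int x) {
  __cost += 12;                            /* cost of _l1 */
  if (x > 0) { __cost += 7; x = x + 1; }   /* cost of _l2 */
  else       { __cost += 9; x = x - 1; }   /* cost of _l3 */
  return x;
}
\end{lstlisting}
In the instrumented version, bounding the increase of \texttt{\_\_cost} is an ordinary functional verification problem on the source code.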
\paragraph{Correctness:} Requirements 1, 2 and 3 of the problem statement obviously hold, with 2 and 3 being a consequence of the definition of a correct labelling-preserving compiler. It is also obvious that the value of the global cost variable of an instrumented source code is at any time equal to the sum of the costs of the labels emitted by the corresponding labelled code. Moreover, because the compiler preserves all traces, the sums of the costs of the labels emitted in the source and target labelled code are the same. Therefore, to satisfy the fourth requirement, we need to guarantee that the time taken to execute the object code is equal to the sum of the costs of the labels emitted by the object code. We collect all the necessary conditions for this to happen in the definition of a sound labelling: a) all loops must be broken by a cost emission statement; b) all program instructions must be in the scope of some cost emission statement. To satisfy also the fifth requirement, additional requirements must be imposed on the object code labelling to avoid all uses of the maximum in the cost computation: the labelling is precise if every label is emitted at most once and both branches of a conditional jump start with a label emission statement.

The correctness and precision of the labelling approach rely only on the correctness and precision of the object code labelling. The simplest, though not necessary, way to achieve them is to impose correctness and precision requirements also on the initial labelling produced by the first software component, and to ask the labelling-preserving compiler to preserve these properties too. The latter requirement imposes serious limitations on the compilation strategy and optimizations: the compiler may not duplicate any code that contains label emission statements, such as loop bodies. Therefore several loop optimizations like peeling or unrolling are prevented. Moreover, precision of the object code labelling is not sufficient per se to obtain global precision: we have also implicitly assumed that the static analysis is able to associate a precise constant cost to every instruction. This is not possible in the presence of stateful hardware whose state influences the cost of operations, like pipelines and caches. In the next section we will see an extension of the basic labelling approach that covers this situation.

The labelling approach described in this section can be applied equally well and with minor modifications to imperative and functional languages. In the CerCo project, we have tested it on a simple imperative language without functions (a While language), on a subset of C, and on two compilation chains for a purely functional higher-order language. The two main changes required to deal with functional languages are: 1) because global variables and updates are not available, the instrumentation phase produces monadic code to ``update'' the global costs; 2) the requirements for a sound and precise labelling of the source code must be changed when the compilation is based on CPS translations. In particular, we need to introduce both labels emitted before a statement is executed and labels emitted after a statement is executed. The latter capture code that is inserted by the CPS translation and that would otherwise escape all label scopes.

Phases 1, 2 and 3 can be applied as well to logic languages (e.g. Prolog). However, the instrumentation phase cannot: in standard Prolog there is no notion of a (global) variable whose state is not retracted during backtracking. Therefore, the cost of executing computations that are later backtracked would not be counted correctly. Any extension of logic languages with non-backtrackable state should support the labelling approach.

\subsection{Dependent labelling.}
The core idea of the basic labelling approach is to establish a tight connection between basic blocks executed in the source and target languages. Once the connection is established, any cost model computed on the object code can be transferred to the source code, without affecting the code of the compiler or its proof. In particular, it is immediate that we can also transport cost models that associate with each label a function from hardware states to natural numbers. However, a problem arises during the instrumentation phase that replaces cost emission statements with increments of global cost variables: the global cost variable must be incremented with the result of applying the function associated with the label to the hardware state at the time of execution of the block.
The hardware state comprises both the ``functional'' state that affects the computation (the values of the registers and memory) and the ``non-functional'' state that does not (for example, the content of the pipeline and caches). The former is in correspondence with the source code state, but reconstructing the correspondence may be hard, and lifting the cost model to work on the source code state is likely to produce cost expressions that are too hard to reason about. Fortunately, in all modern architectures the cost of executing single instructions is either independent of the functional state, or the jitter --- the difference between the worst and best case execution times --- is small enough to be bounded without losing too much precision. Therefore we can concentrate on dependencies over the ``non-functional'' parts of the state only.

The non-functional state has no correspondence in the high-level state and does not influence the functional properties. What can be done is to expose the non-functional state in the source code. We present here only the basic intuition in a simplified form: the technical details that allow us to handle the general case are more complex and can be found in the CerCo deliverables. We add to the source code an additional global variable that represents the non-functional state, and another one that remembers the last labels emitted. The state variable must be updated at every label emission statement, using an update function which is also computed during static analysis. The update function associates with each label a function from the recently emitted labels and the old state to the new state. It is computed by composing the semantics of every instruction in a basic block and restricting it to the non-functional part of the state.

Not all the details of the non-functional state need to be exposed, and the technique works better when the part of the state that is required can be summarized in a simple data structure. For example, to handle simple but realistic pipelines it is sufficient to remember a short integer that encodes the position of bubbles (stuck instructions) in the pipeline. In any case, the user does not need to understand the meaning of the state to reason about the properties of the program. Moreover, at any moment the user, or the invariant generator tools that analyse the instrumented source code produced by the compiler, can decide to trade precision of the analysis for simplicity by approximating the parametric cost with safe non-parametric bounds. Interestingly, the functional analysis of the code can determine which blocks are executed more frequently, in order to approximate more aggressively the ones that are executed less.
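
As a minimal sketch of dependent instrumentation (the identifiers \texttt{\_\_state}, \texttt{\_\_update\_l1} and \texttt{\_\_cost\_l1}, as well as the constants, are invented for exposition and are not the output of the actual tools), the instrumentation of a block labelled $\ell_1$ could look as follows: the exposed non-functional state is updated first, and the increment of the cost variable then depends on that state.
\begin{lstlisting}[language=C]
int __cost  = 0;   /* global cost variable                      */
int __state = 0;   /* summary of the non-functional state, e.g. */
                   /* the position of bubbles in the pipeline   */

/* Update and cost functions produced by the static analysis;
   their bodies here are placeholders. */
int __update_l1 (int state) { return (state + 1) % 4; }
int __cost_l1   (int state) { return state == 0 ? 10 : 13; }

void block_l1 (int *x) {
  __state  = __update_l1(__state);   /* new non-functional state   */
  __cost  += __cost_l1(__state);     /* state-dependent increment  */
  *x = *x + 1;                       /* original body of the block */
}
\end{lstlisting}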

The idea of dependent labelling can also be applied to allow the compiler to duplicate blocks that contain labels (e.g. to allow loop optimizations). The effect of duplication is to assign a different cost to the different occurrences of a duplicated label. For example, loop peeling turns a loop into the concatenation of a copy of the loop body (which executes the first iteration) with the conditional execution of the loop (for the successive iterations). Because of further optimizations, the two copies of the loop will be compiled differently, with the first body usually taking more time.

By introducing a variable that keeps track of the iteration number, we can associate with the label a cost that is a function of the iteration number. The same technique works for loop unrolling without modifications: the function will assign one cost to the even iterations and another cost to the odd ones. The actual work to be done consists in introducing into the source code, for each loop, a variable that counts the number of iterations. The loop optimization code that duplicates the loop bodies must also modify the code to correctly propagate the update of the iteration numbers. The technical details are more complex and can be found in the CerCo reports and publications. The implementation, however, is quite simple, and the changes required to a loop-optimizing compiler are minimal.
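
For instance, a hypothetical instrumented loop after peeling could look as follows, where the cost function \texttt{\_\_cost\_body} and its constants are invented for illustration: the peeled first iteration is compiled differently from the remaining ones, so its cost differs.
\begin{lstlisting}[language=C]
int __cost = 0;

/* Iteration-dependent cost of the loop body (made-up constants). */
int __cost_body (int iteration) {
  return (iteration == 0) ? 25 : 18;
}

int sum (int *a, int n) {
  int s = 0;
  for (int __it = 0; __it < n; __it++) {
    __cost += __cost_body(__it);   /* cost depends on the iteration */
    s += a[__it];
  }
  return s;
}
\end{lstlisting}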

\subsection{Techniques to exploit the induced cost model.}
We review the cost synthesis techniques developed in the project.
The starting hypothesis is that we have a certified methodology to annotate blocks in the source code with constants which provide a sound and possibly precise upper bound on the cost of executing the blocks after compilation to object code.

The principle that we have followed in designing the cost synthesis tools is that the synthetic bounds should be expressed and proved within a general-purpose tool built to reason on the source code. In particular, we rely on the Frama-C tool to reason on C code and on the Coq proof assistant to reason on higher-order functional programs.

This principle entails that: 1) the inferred synthetic bounds are indeed correct as long as the general-purpose tool is; 2) there is no limitation on the class of programs that can be handled, as long as the user is willing to carry out an interactive proof.

Of course, automation is desirable whenever possible. Within this framework, automation means writing programs that give hints to the general-purpose tool. These hints may take the form, say, of loop invariants/variants, of predicates describing the structure of the heap, or of types in a light logic. If these hints are correct and sufficiently precise, the general-purpose tool will produce a proof automatically; otherwise, user interaction is required.

\paragraph{The Cost plug-in and its application to the Lustre compiler}
Frama-C is a set of analysers for C programs with a specification language called ACSL. New analyses can be dynamically added through a plug-in system. For instance, the Jessie plug-in allows deductive verification of C programs with respect to their specification in ACSL, with various provers as back-end tools.
We developed the CerCo Cost plug-in for the Frama-C platform as a proof of concept of an automatic environment exploiting the cost annotations produced by the CerCo compiler. It consists of an OCaml program which, to a first approximation, takes the following actions: (1) it receives as input a C program, (2) it applies the CerCo compiler to produce a related C program with cost annotations, (3) it applies some heuristics to produce a tentative bound on the cost of executing the C functions of the program as a function of the values of their parameters, (4) the user can then call the Jessie tool to discharge the related proof obligations.
In the following we elaborate on the soundness of the framework and on the experiments we performed with the Cost tool on the C programs produced by a Lustre compiler.

\paragraph{Soundness} The soundness of the whole framework depends on the cost annotations added by the CerCo compiler, the synthetic costs produced by the Cost plug-in, the verification conditions (VCs) generated by Jessie, and the external provers discharging the VCs. Since the synthetic costs are in ACSL format, Jessie can be used to verify them. Thus, even if the added synthetic costs are incorrect (relative to the cost annotations), the process as a whole is still correct: indeed, Jessie will not validate incorrect costs and no conclusion can be made about the WCET of the program in this case. In other words, the soundness does not really depend on the action of the Cost plug-in, which can in principle produce any synthetic cost. However, in order to be able to actually prove a WCET of a C function, we need to add correct annotations in such a way that Jessie and the subsequent automatic provers have enough information to deduce their validity. In practice this is not straightforward, even for very simple programs composed of branching and assignments (no loops and no recursion), because a fine-grained analysis of the VCs associated with branching may lead to a complexity blow-up.
\paragraph{Experience with Lustre} Lustre is a data-flow language for programming synchronous systems, and the language comes with a compiler to C. We designed a wrapper for supporting Lustre files.
The C function produced by the compiler implements the step function of the synchronous system, and computing the WCET of that function amounts to obtaining a bound on the reaction time of the system. We tested the Cost plug-in and the Lustre wrapper on the C programs generated by the Lustre compiler. For programs consisting of a few hundred lines of code, the Cost plug-in computes a WCET and Alt-Ergo is able to discharge all VCs automatically.

\paragraph{Handling C programs with simple loops}
The cost annotations added by the CerCo compiler take the form of C instructions that update a fresh global variable, called the cost variable, by a constant. Synthesizing a WCET of a C function thus consists in statically resolving an upper bound on the difference between the value of the cost variable before and after the execution of the function, i.e. finding the instructions in the function that update the cost variable and establishing the number of times they are passed through during the flow of execution. In order to perform the analysis, the plug-in makes the following assumptions on the programs:
\begin{enumerate}
\item there are no recursive functions;
\item every loop is annotated with a variant; the variants of \texttt{for} loops are automatically inferred.
\end{enumerate}

The plug-in proceeds as follows.
\begin{enumerate}
\item First the call graph of the program is computed. If the function $f$ calls the function $g$, then the function $g$ is processed before the function $f$.
\item The computation of the cost of a function is performed by traversing its control flow graph. The cost at a node is the maximum of the costs of its successors.
\item In the case of a loop with a body that has a constant cost for every step of the loop, the cost is the product of the cost of the body and of the variant taken at the start of the loop.
\item In the case of a loop with a body whose cost depends on the values of some free variables, a fresh logic function $f$ is introduced to represent the cost of the loop in the logic assertions. This logic function takes the variant as its first parameter. The other parameters of $f$ are the free variables of the body of the loop. An axiom is added to account for the fact that the cost is accumulated at each step of the loop.
\item The cost of the function is directly added as a post-condition of the function.
\end{enumerate}
The user can influence the annotation in different ways: by using more precise variants, or by annotating functions with cost specifications; the plug-in will then use the specified cost for a function instead of computing it. A hand-written sketch of the kind of annotated C code involved is given below.
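
The following fragment is purely illustrative: the identifier \texttt{\_\_cost}, the constants and the ACSL contract are invented here for exposition and do not reproduce the exact output of the CerCo compiler or of the Cost plug-in.
\begin{lstlisting}[language=C]
int __cost = 0;   /* global cost variable */

/* Contract bounding the increase of __cost by an affine function
   of the parameter n (the constants 5, 20 and 8 are made up). */
/*@ requires n >= 0;
  @ ensures __cost <= \old(__cost) + 20 * n + 8;
  @*/
void count (int n) {
  int i;
  __cost += 5;                 /* annotation added by the compiler */
  /*@ loop invariant 0 <= i <= n;
    @ loop invariant __cost <= \at(__cost, Pre) + 5 + 20 * i;
    @ loop variant n - i;
    @*/
  for (i = 0; i < n; i++) {
    __cost += 20;              /* constant cost of the loop body */
  }
  __cost += 3;                 /* cost of the final block */
}
\end{lstlisting}
Jessie turns such annotations into verification conditions that automatic provers like Alt-Ergo can then attempt to discharge.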
\paragraph{C programs with pointers}
When it comes to verifying programs involving pointer-based data structures, such as linked lists, trees, or graphs, the use of traditional first-order logic to specify, and of SMT solvers to verify, shows some limitations. Separation logic is an elegant alternative. Designed at the turn of the century, it is a program logic with a new notion of conjunction to express spatial heap separation. Separation logic has been implemented in dedicated theorem provers such as Smallfoot or VeriFast. One drawback of such provers, however, is that they either limit the expressiveness of formulas (e.g. to the so-called symbolic heaps) or require some user guidance (e.g. open/close commands in VeriFast).
In an attempt to conciliate both approaches, we introduced the notion of separation predicates. The approach consists in reformulating some ideas from separation logic into a traditional verification framework where the specification language, the verification condition generator, and the theorem provers were not designed with separation logic in mind. Separation predicates are automatically derived from user-defined inductive predicates, on demand. They can then be used in program annotations, exactly as other predicates, i.e., without any constraint. Simply speaking, where one would write $P \ast Q$ in separation logic, one here asks for the generation of a separation predicate $\mathit{sep}$ and then uses it as $P \wedge Q \wedge \mathit{sep}(P, Q)$. We have implemented separation predicates within the Jessie plug-in and tested them on non-trivial case studies (e.g. the composite pattern from the VACID-0 benchmark). In these cases, we achieve a fully automatic proof using three existing SMT solvers.
We have also used the separation predicates to reason about the cost of executing simple heap-manipulating programs such as an in-place list reversal.

\subsection{The CerCo Compiler}
In CerCo we have developed a number of cost-preserving compilers based on the labelling approach. Excluding an initial certified compiler for a While language, all remaining compilers target realistic source languages --- a pure higher-order functional language and a large subset of C with pointers, gotos and all data structures --- and real-world target processors --- MIPS and the Intel 8051/8052 processor family. Moreover, they achieve a level of optimization that ranges from moderate (comparable to gcc level 1) to intermediate (including loop peeling and unrolling, hoisting and late constant propagation). The so-called Trusted CerCo Compiler is the only one that was implemented in the interactive theorem prover Matita and whose costs were certified. The code distributed is obtained by extracting OCaml code from the Matita implementation. In the rest of this section we focus only on the Trusted CerCo Compiler, which targets only the C language and the 8051/8052 family, and which does not yet implement the advanced optimizations. Its user interface, however, is the same as that of the other versions, so that one can trade safety for additional performance. In particular, the Frama-C CerCo plug-in works with all the compilers without recompilation.

The (trusted) CerCo compiler implements the following optimizations: cast simplification, constant propagation in expressions, liveness-analysis-driven spilling of registers, dead code elimination, branch displacement, and tunneling. The last two optimizations are performed by the optimizing assembler which is part of the compiler. The back-end of the compiler works on three-address instructions, preferred to static single assignment code for the simplicity of the formal certification.

The CerCo compiler is loosely based on the CompCert compiler, a recently developed certified compiler from C to the PowerPC, ARM and x86 microprocessors. In contrast to CompCert, both the CerCo code and its certification are open source. Some data structures and language definitions for the front-end are directly taken from CompCert, while the back-end is a redesign and reimplementation of a didactic compiler from Pascal to MIPS used by Fran\c{c}ois Pottier for a course at the \'Ecole Polytechnique.

Following the CompCert tradition, the compiler is organised in an unusually large number of intermediate passes, each responsible for just one change in the semantics of the source and target languages. Introducing multiple passes has minor performance implications on modern hardware and allows us to simplify the simulation proofs. The first three intermediate languages are used in the front-end. They are syntactically and semantically quite different from each other. For example, in the first language we find the traditional looping structures of C, in the second all loops are infinite loops (built with GOTOs) interrupted using BREAKs, and in the third the code is organised as a graph of statements where loops become loops in the graph. The four back-end languages, instead, have a more similar syntax.

Departing from CompCert, we do not provide a stand-alone syntax and semantics for every back-end language. Instead, we developed a generic representation of back-end languages as a parametric data type that can be instantiated to the desired language. The generic representation allows us to multiply the number of passes without increasing the code size too much. For example, we also provide a single generic semantics for the generic representation, parameterized over pass-specific details.

Other departures from CompCert are:
\begin{enumerate}
\item all of our intermediate languages include label-emitting instructions to implement the labelling approach, and the compiler preserves execution traces;
\item the target language of CompCert is an assembly language with additional macro-instructions to be expanded before assembly; for CerCo we need to go all the way down to object code in order to perform the static analysis. Therefore we also developed an optimizing assembler and a static analyser, all integrated in the compiler;
\item to avoid implementing a linker and a loader, we do not support separate compilation and external calls. Adding a linker and a loader is a process transparent to the labelling approach and should pose no unexpected problems;
\item we target an 8-bit processor. Targeting an 8-bit processor requires several changes and increases the code size, but it is not fundamentally more complex. The proof of correctness, however, becomes much harder;
\item we target a microprocessor with a non-uniform memory model, which is still often the case for microprocessors used in embedded systems and is becoming common again in multi-core processors. Therefore the compiler has to keep track of the position of data and must move data between memory regions in the proper way. Also the size of pointers to different regions is not uniform. In our case, an additional difficulty was that the space available for the stack in the internal memory of the 8051 is tiny, allowing only a small number of nested calls. To support full recursion, in order to test the CerCo tools also on recursive programs, the compiler manually manages a stack in external memory;
\item while there is a rough correspondence between CompCert and CerCo back-end passes, the order of the passes is permuted. In the future we want to explore how to exploit our generic back-end language representation to freely compose and permute passes.
\end{enumerate}

\section{A formal certification of the CerCo compiler}
The Trusted CerCo Compiler has been implemented and certified using the interactive theorem prover Matita. In this section we briefly hint at the exact correctness statement and at the main ingredients of the proof. Details on the proof techniques employed and further information can be found in the CerCo deliverables and papers.

\subsection{The statement}
The most natural statement of correctness for our complexity-preserving compiler is that the time spent during execution by a terminating object code program should be the time predicted on the source code by adding up the costs of every label emission statement. This statement, however, is too naïve to be useful for real-world real-time programs like those used in embedded systems.
Real-time programs are often reactive programs that loop forever, responding to events (inputs) by performing some computation followed by some action (output) and a return to the initial state. For looping programs the overall execution time does not make sense. The same happens for reactive programs that spend an unpredictable amount of time in I/O. What is interesting is the reaction time, which measures the time spent between I/O events. Moreover, we are interested in predicting and ruling out programs that crash by running out of space on a certain input.
Therefore we need to look for a more complex statement that talks about sub-runs of a program. The most natural statement is that the time predicted on the source code and the time spent on the object code by two corresponding sub-runs are the same. The problem to be solved in order to make this statement formal is how to identify the corresponding sub-runs and how to single out those that are meaningful.
The solution we found is based on the notion of measurability. We say that a run has a measurable sub-run when both the prefix of the sub-run and the sub-run itself do not exhaust the stack space, the number of function calls and returns in the sub-run is the same, the sub-run does not perform any I/O, and the sub-run starts with a label emission statement and ends with a return or another label emission statement. The stack usage is estimated using the stack usage model that is computed by the compiler.

The statement that we want to formally prove is that for each C run with a measurable sub-run there exists an object code run with a sub-run such that the observables of the pairs of prefixes and sub-runs are the same, and the time spent by the object code in the sub-run is the same as the one predicted on the source code using the time cost model generated by the compiler.
We briefly discuss the constraints for measurability. Not exhausting the stack space is a clear requirement of meaningfulness of a run, because source programs do not crash for lack of space while object code ones do. The balancing of function calls and returns is a requirement for precision: the labelling approach allows the scope of label emission statements to extend after function calls, in order to minimize the number of labels. Therefore a label pays for all the instructions in a block, excluding those executed in nested function calls. If the number of calls/returns is unbalanced, it means that there is a call we have not returned to that could be followed by additional instructions whose cost has already been taken into account. To make the statement true (but less precise) in this situation, we could only say that the cost predicted on the source code is a safe bound, not that it is exact. The last condition, on the entry/exit points of a run, is used to identify sub-runs whose code corresponds to a sequence of blocks that we can measure precisely. Any other choice would start or end the run in the middle of a block, and we would again be forced to weaken the statement, taking as a bound the cost obtained by counting in all the instructions that precede the starting one in its block or follow the final one in its block.
I/O operations can be performed in the prefix of the run, but not in the measurable sub-run. Therefore we prove that we can predict reaction times, but not I/O times, as it should be.

\bibliography{fopara13.bib}

\end{document}