\newcommand{\All}[1]{\forall{#1}.\;}
\newcommand{\lam}[1]{\lambda{#1}.\;}

% Introduction
%   Problem being solved, background, etc.
%   Current state of the art (and problem with it)
%   The `CerCo approach' (tm)
%   Brief Matita overview
%   Map of paper

\section{Introduction}
\label{sect.introduction}

Programs are specified using both functional and non-functional constraints.
Functional constraints dictate what tasks a program must do; non-functional constraints limit the resources the program may consume whilst completing those tasks.

Depending on the application domain, non-functional constraints are as important as functional constraints when specifying a program.
Real-time systems, with hard limits on application response times; implementations of cryptographic primitives, which must be hardened against timing side-channel attacks; and a heart pacemaker's embedded controller, which must fit inside a limited code memory space, are all examples that fit this pattern.
A cryptography library susceptible to a timing side-channel attack is not a mere annoyance---it is an implementation error that undermines the entire purpose of the library.

A program's non-functional constraints may be given \emph{concretely} or \emph{asymptotically}.
Asymptotic complexity, as every Computer Science undergraduate knows, is important---but so too is concrete complexity for many applications, including the three examples highlighted above.
A real-time system's response time is measured in seconds, milliseconds, or some other fundamental unit of time; a cryptographic library must have all execution paths execute in the same number of processor cycles, independent of any input passed by the client; and the size of an embedded controller for a pacemaker is measured in bits, bytes, or some other unit of memory capacity.
In all cases, resource consumption is measured in some concrete, base unit of measure.

Currently, a program's functional properties can be established by combining user annotations---preconditions, invariants, and so on---with various automated and semi-automated analyses---invariant generators, type systems, abstract interpretations, applications of theorem proving, and so on---on the high-level source code of the program.
Functional properties of a program are therefore established by reasoning about the source code that the application programmer actually sees.
Further, the results of any analysis can be communicated to the programmer in terms of abstractions, control flow, and an overall system design that they are familiar with.

By contrast, a program's non-functional properties are established by reasoning on low-level object code produced not by a programmer, but by a compiler.
Whilst analyses operating at this level can and do produce very accurate results---Worst Case Execution Time (WCET) analysis can be extraordinarily accurate, for example---analysis at such a low level of abstraction invariably has disadvantages:
\begin{itemize}
\item
It can be hard to deduce the high-level structure of the program after compiler optimisations.
The object code produced by an optimising compiler may have a radically different control flow from that of the original source program.
\item
Object code analysis is unstable.
Modern compilers and linkers are highly non-compositional: they implement a variety of sophisticated optimisation passes, and use an abundance of intra-procedural, inter-procedural, module-level, and link-time analyses to direct these optimisations.
As a result, small changes in high-level source code can produce radically different object code outputs from a compiler, affecting any analysis tied to that object code.
\item
It is well understood by software engineers that problems with the design or implementation of a program are cheaper to resolve early in the development process. %cite
Despite this, techniques that operate on object code are not useful early in a program's development process, when the program may be incomplete, with missing functionality or missing modules.
\item
Parametric cost analysis is very hard: how can we translate a cost that depends on the execution state, for example the value of a register or a carry bit, to a cost that the user can understand whilst looking at the source code?
\item
Performing the analysis on object code makes it hard for the programmer to provide information about the program and its expected execution, leading to a loss of precision in the resulting analysis.
It is hard for the programmer to understand the results of the analysis, or to direct its execution, as the high-level abstractions, control flow constructs, and so on, introduced by the programmer are `translated away'.
\end{itemize}

Ideally, one would combine high-level reasoning with the accuracy expected of WCET analysis, or of other static analyses of non-functional properties.
This paper presents such a combination.

What has previously prevented high-level reasoning about non-functional properties is the lack of a uniform and precise cost model for programs written in programming languages like C.
As discussed above, modern compilers may compile each occurrence of an expression or statement in radically different ways, optimisations may change control flow, and the cost of an object code instruction may depend on the runtime state of hardware components like pipelines and caches, none of which are visible in the source code.
It has therefore been unclear how such a cost model could even be defined.

\paragraph{Vision.}
We want to reconcile functional and non-functional analysis: to share
information between them and perform both at the same time on high-level source
code.

We envision a new generation of compilers that track program structure through
compilation and optimisation and exploit this information to define a precise,
non-uniform cost model for source code that accounts for runtime state. With
such a cost model we can perform non-functional verification in a similar way
to functional verification and exploit the state of the art in automated
high-level verification~\cite{survey}.

The techniques currently used by the WCET community, which perform analysis on
object code, remain applicable, but can now be coupled with additional
source-level analysis. In cases where our approach produces overly complex cost
models, safe approximations can be used, trading some precision for a simpler
analysis.

Finally, source code analysis can be used early in the development process,
when components have been specified but not implemented, as modularity means
that it is enough to specify the non-functional behaviour of missing components.

\paragraph{Contributions.}
We have developed \emph{the labelling approach}~\cite{labelling}, a technique
for implementing compilers that induce cost models on source programs by very
lightweight tracking of code changes through the compilation pipeline.

We have implemented a compiler from C to object binaries for the 8051
microcontroller that uses this technique. The compiler predicts execution time
and stack space usage. We have also verified the compiler using an interactive
theorem prover. As we are targeting an embedded microcontroller, we have not
considered dynamic memory allocation.

To demonstrate source-level verification of costs we have implemented a Frama-C
plugin~\cite{framac} that invokes the compiler on a source program and uses it
to generate invariants on the high-level source that correctly model low-level
costs. The plugin certifies that the program respects these costs by calling
automated theorem provers, an innovative technique in the field of cost
analysis.
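As a purely schematic illustration (the counter name and the constants below are
hypothetical, not actual plugin output), for a loop executing at most $n$
iterations the generated annotations might bound a global cost counter by an
expression of the shape
$\mathtt{\_\_cost} \leq \mathtt{\_\_cost}_0 + k_{\mathrm{init}} + n \cdot k_{\mathrm{body}}$,
where the constants $k_{\mathrm{init}}$ and $k_{\mathrm{body}}$ are supplied by
the compiler from its knowledge of the generated object code.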

Finally, we have conducted several case studies, including showing that the
plugin can automatically compute and certify the exact reaction time of
Lustre~\cite{lustre} data flow programs compiled into C.

\subsection{Project context and approach}

Formal methods for verifying functional properties of programs have now reached
such a level of maturity and automation that their adoption is slowly
increasing in production environments.

For safety critical code, it is becoming commonplace to combine rigorous
software engineering methodologies and testing with static analyses, taking the
strengths of each and mitigating their weaknesses. Of particular interest are
open frameworks for the combination of different formal methods, where the
programs can be progressively specified and enriched with new safety
guarantees: every method contributes knowledge (e.g. new invariants) that
can be built upon by other analysis methods.

The outlook for verification of non-functional properties of programs (time
spent, memory used, energy consumed) is bleaker. In most cases, verifying that
real-time systems meet their deadlines is done by simply performing many runs
of the system and timing their execution, computing the maximum time and adding
an empirical safety margin, claiming the result to be a bound for the WCET of
the program.

Formal methods and software to statically analyse the WCET of programs exist,
but they often produce bounds that are too pessimistic to be useful. Recent
advances in hardware architecture have focused on improving average case
performance, not predictability of the worst case.

Execution time is becoming increasingly dependent on execution history and the
internal state of hardware components like pipelines and caches. Multi-core
processors and non-uniform memory models are drastically reducing the
possibility of performing static analysis in isolation, because programs are
less and less time-composable. Clock-precise hardware models are necessary for
static analysis, and obtaining them is becoming harder due to the increased
sophistication of hardware design.

The need for reliable real-time systems and programs is increasing, and there
is pressure from the research community for the introduction of hardware with
more predictable behaviour, which would be more suitable for static analysis.
One example, being investigated by the Proartis project~\cite{proartis}, is to
decouple execution time from execution history by introducing randomisation.

In CerCo~\cite{cerco} we do not address this problem, optimistically assuming
that improvements in low-level timing analysis or architecture will make
verification feasible in the longer term.

Instead, the main objective of our work is to bring together static analysis of
functional and non-functional properties, which in the current state of the art
are independent activities with limited exchange of information: while the
functional properties are verified on the source code, the analysis of
non-functional properties is performed on object code to exploit clock-precise
hardware models.

\subsection{Current object code methods}

Analysis currently takes place on object code for two main reasons.

Firstly, there cannot be a uniform, precise cost model for source code
instructions (or even basic blocks). During compilation, high-level
instructions are broken up and reassembled in context-specific ways, so that
matching a fragment of object code with a single high-level instruction is
infeasible.

Additionally, the control flow of the object and source code can be very
different as a result of optimisations. For example, aggressive loop
optimisations can completely transform source-level loops.

Despite the lack of a uniform, compilation- and program-independent cost model
at the source level, research on the analysis of non-asymptotic execution time
of high-level languages, assuming such a model exists, is growing and gaining
momentum.

Unless such cost models are developed, the future practical impact of this
research looks to be minimal. One existing approach is the EmBounded
project~\cite{embounded}, which compositionally compiles high-level code to a
byte code that is executed by an interpreter with guarantees on the maximum
execution time spent on each byte code instruction. This provides a model
that is uniform, though at the expense of precision (each cost is a pessimistic
upper bound) and of the performance of the executed code (the byte code is
interpreted).

The second reason to perform analysis on the object code is that bounding
the worst case execution time of small code fragments in isolation (e.g. loop
bodies) and then adding up the bounds yields very poor estimates as no
knowledge of the hardware state prior to executing the fragment can be assumed.

By analysing longer runs the bound obtained becomes more precise because the
lack of information about the initial state has a relatively small impact.

To calculate the cost of an execution, value and control flow analyses are
required to bound the number of times each basic block is executed. Currently,
state-of-the-art WCET analysis tools, such as AbsInt's aiT
toolset~\cite{absint}, perform these analyses on object code, where the logic
of the program is harder to reconstruct and most information available at the
source code level has been lost; see~\cite{stateart} for a survey.

Imprecision in the analysis can lead to useless bounds. To improve precision,
current tools ask the user to provide constraints on the object code control
flow, usually in the form of bounds on the number of iterations of loops or
linear inequalities on them. This requires the user to manually link the source
and object code, translating their assumptions about the source code (which may
be wrong) into object code constraints. This task is hard and error-prone,
especially in the presence of complex compiler optimisations.

Traditional techniques for WCET that work on object code are also affected by
another problem: they cannot be applied before the generation of the object
code. Functional properties can be analysed in early development stages, while
analysis of non-functional properties may come too late to avoid expensive
changes to the program architecture.

\subsection{The CerCo approach}

In CerCo we propose a radically new approach to the problem: we reject the idea
of a uniform cost model and we propose that the compiler, which knows how the
code is translated, must return the cost model for basic blocks of high-level
instructions. It must do so by keeping track of control flow modifications, so
that they can be reversed, and by interfacing with processor timing analysis.

By embracing compilation, instead of avoiding it like EmBounded did, a CerCo
compiler can both produce efficient code and return costs that are as precise
as the processor timing analysis can be. Moreover, our costs can be parametric:
the cost of a block can depend on actual program data, on a summary of the
execution history, or on an approximated representation of the hardware state.

For example, loop optimisations may assign a cost to a loop body that is a
function of the number of iterations performed. As another example, the cost of
a block may be a function of the vector of stalled pipeline states, which can
be exposed in the source code and updated at each basic block exit.
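As a purely illustrative sketch (the constants below are hypothetical, not the
output of our tools), a loop whose body has been unrolled by a factor of two
might receive a parametric cost of the form
\[
\mathrm{cost}(n) = k_{\mathrm{pre}} + \lfloor n/2 \rfloor \cdot k_{\mathrm{unrolled}} + (n \bmod 2) \cdot k_{\mathrm{tail}},
\]
where $n$ is the number of source-level iterations and each constant $k$ is
extracted from the generated object code by the timing analysis.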

It is parametricity that allows one to analyse small code fragments without
losing precision. In the analysis of the code fragment we do not have to ignore
the initial hardware state; rather, we can assume that we know exactly which
state (or mode, as the WCET literature calls it) we are in.

The CerCo approach has the potential to dramatically improve the state of the
art. By performing control and data flow analyses on the source code, the
error-prone translation of invariants is avoided entirely. Instead, this work
is done at the source code level using tools of the user's choice.

Any available technique for the verification of functional properties can be
easily reused and multiple techniques can collaborate to infer and certify cost
invariants for the program. There are no limitations on the types of loops or
data structures involved.

Parametric cost analysis becomes the default, with non-parametric bounds used
only as a last resort when the user decides to trade the precision of the
analysis for simplicity.

\emph{A priori}, no technique previously used in traditional WCET analysis is
made obsolete: processor timing analysis can still be used by the compiler on
the object code, and other techniques can be applied at the source code level.

Our approach also works in the early stages of development by allowing the user
to axiomatically attach costs to unimplemented components.

Software used to verify properties of programs must be as bug-free as possible.
The trusted code base for verification consists of the code that needs to be
trusted to believe that the property holds.

The trusted code base of state-of-the-art WCET tools is very large: one needs
to trust the control flow analyser, the linear programming libraries used, and
also the formal models of the hardware under analysis.

In CerCo we move the control flow analysis to the source code level, and we
introduce a non-standard compiler. To reduce the size of the trusted code base,
we have implemented a prototype compiler and static analyser in an interactive
theorem prover, which was used to certify that the costs added to the source
code are indeed those incurred by the hardware. We have also implemented formal
models of the hardware and of the high level source language in the interactive
theorem prover.

Control flow analysis on the source code has been obtained using invariant
generators, tools to produce proof obligations from generated invariants, and
automatic theorem provers to verify the obligations. If these tools are able to
generate proof traces that can be independently checked, the only remaining
component that enters the trusted code base is an off-the-shelf invariant
generator which, in turn, can be proved correct using an interactive theorem
prover.

With these methods, we achieve the objective of allowing the use of more
off-the-shelf components (e.g. provers and invariant generators) whilst
simultaneously reducing the trusted code base.

\subsection{Introduction to Matita}

Matita is a theorem prover based on a variant of the Calculus of (Co)inductive Constructions~\cite{asperti:user:2007}.
The system features a full spectrum of dependent types and (co)inductive families, a system of coercions, a tactic-driven proof construction engine~\cite{sacerdoti-coen:tinycals:2007}, and paramodulation-based automation~\cite{asperti:higher-order:2007}, all of which we exploit in the formalisation described herein.

Matita's syntax is similar to that of mainstream functional programming languages such as OCaml or Standard ML.
The type theory that Matita implements is broadly akin to that of Coq~\cite{coq:2004} and Agda~\cite{bove:brief:2009}.
Nevertheless, we provide a brief explanation of the main syntactic and type-theoretic features of Matita that will be needed to follow the body of the paper:
\begin{itemize}
\item
Non-recursive functions and definitions are introduced via the \texttt{definition} keyword.
Recursive functions are introduced with \texttt{let rec}.
Mutually recursive functions are separated via the \texttt{and} keyword.
Matita's termination checker ensures that all recursive functions are terminating before they are admitted, to maintain the soundness of the system's logic.
(A small illustrative example of these declarations appears immediately after this list.)
\item
Matita has an infinite hierarchy of type universes.
A single impredicative universe of types, \texttt{Prop}, exists at the base of this hierarchy.
An infinite series of predicative universes, \texttt{Type[0]} : \texttt{Type[1]} : \texttt{Type[2]}, and so on, sits atop \texttt{Prop}.
Matita, unlike Coq or Agda, implements no form of typical ambiguity or universe polymorphism, with explicit concrete universe levels being preferred instead.
\item
Matita's type theory plays host to a rich and expressive higher-order logic.
Constants \texttt{True} and \texttt{False} represent truth and falsity in \texttt{Prop} respectively.
Two inductive families in \texttt{Prop} encode conjunction and disjunction---$\mathtt{P \wedge Q}$ and $\mathtt{P \vee Q}$ respectively.

As is usual, implication and universal quantification are identified with the dependent function space ($\Pi$-types), whereas (constructive) existential quantification is encoded as a dependent sum (a $\Sigma$-type).
We write $\All{x : \phi}\psi$ for the dependent function space, and abbreviate this as $\phi \rightarrow \psi$ when $x \not\in fv(\psi)$ as usual.
We use $\langle M,N \rangle$ for the pairing of $M$ and $N$.
\item
Inductive and coinductive families are introduced via the \texttt{inductive} and \texttt{coinductive} keywords respectively, with named constructor declarations separated by a bar.
Mutually inductive data family declarations are separated via \texttt{with}.
In the following declaration:
\begin{lstlisting}[language=Grafite]
inductive I ($P_1$ : $\tau_1$) $\ldots$ ($P_n$ : $\tau_n$) : $\phi_1 \rightarrow \ldots \rightarrow \phi_m \rightarrow \phi$ := $\ldots$
\end{lstlisting}
we call $P_i$ for $1 \leq i \leq n$ the \textbf{parameters} of \texttt{I} and $\phi_j$ for $1 \leq j \leq m$ the \textbf{indices} of \texttt{I}.
Matita's positivity checker ensures that constructors have strictly-positive types before admitting an inductive family, to maintain the soundness of the system's logic.
\item
Records are introduced with the \texttt{record} keyword.
A Matita record
\begin{lstlisting}[language=Grafite]
record R : Type[0] := { F1 : nat }.
\end{lstlisting}
may be thought of as syntactic sugar for a single-constructor inductive data type of the same name:
\begin{lstlisting}[language=Grafite]
inductive R : Type[0] :=
  | mk_R : nat -> R.
\end{lstlisting}
A record field's type may depend on fields declared earlier in the record.

Records may be decomposed with projections.
Projections, one for each field of a record, are registered in the global context.
In the example record above, \texttt{F1} of type $R \rightarrow nat$ is registered as a field projection and \texttt{mk\_R} of type $nat \rightarrow R$ is registered as a constructor.

Record fields may also be marked as coercions.
In the following example
\begin{lstlisting}[language=Grafite]
record S : Type[1] :=
{
  Carrier :> Type[0];
  op : Carrier -> Carrier -> Carrier
}
\end{lstlisting}
the field \texttt{Carrier} is declared to be a coercion with `\texttt{:>}', with the operational effect that the field projection \texttt{Carrier} may be omitted wherever it can be successfully inferred by Matita.
Field coercions facilitate the informal but common mathematical practice of intentionally confusing a structure with its underlying carrier set.
\item
Terms may be freely omitted, allowing the user to write down partial types and terms.
A question mark, \texttt{?}, denotes a single term that has been omitted by the user.
Some omitted terms can be deduced by Matita's refinement system.
Other, more complex goals arising from omitted terms may require user input to solve, in which case a proof obligation is opened for each term that cannot be deduced automatically.
Three consecutive dots, \texttt{$\ldots$}, denote multiple terms or types that have been omitted.
\item
Data may be decomposed by pattern matching with a \texttt{match} expression.
We may fully annotate a \texttt{match} expression with its return type.
This is especially useful when working with indexed families of types or with invariants, expressed as types, on functions.
In the following
\begin{lstlisting}[language=Grafite]
match t return $\lam{x}x = 0 \rightarrow bool$ with
[ 0    $\Rightarrow$ $\lam{prf_1}P_1$
| S m $\Rightarrow$ $\lam{prf_2}P_2$
] (refl $\ldots$ t)
\end{lstlisting}
the \texttt{0} branch of the \texttt{match} expression returns a function from $0 = 0$ to \texttt{bool}, whereas the \texttt{S m} branch of the \texttt{match} expression returns a function from \texttt{S m = 0} to \texttt{bool}.
In both cases the annotated return type $\lam{x}x = 0 \rightarrow bool$ has been specialised given new information about \texttt{t} revealed by the act of pattern matching.
The entire term, with \texttt{match} expression applied to \texttt{refl $\ldots$ t}, has type \texttt{bool}.
\item
Matita features a liberal system of coercions (distinct from the previously mentioned record field coercions).
It is possible to define a uniform coercion $\lam{x}\langle x, ?\rangle$ from every type $T$ to the dependent sum $\Sigma{x : T}. P x$.
The coercion opens a proof obligation that asks the user to prove that $P$ holds for $x$.
When a coercion is to be applied to a complex term (for example, a $\lambda$-abstraction, a local definition, or a case analysis), the system automatically propagates the coercion to the sub-terms.
For instance, to apply a coercion to force $\lam{x}M : A \rightarrow B$ to
have type $\All{x : A}\Sigma{y : B}. P x y$, the system looks for a coercion from $M : B$ to $\Sigma{y : B}. P x y$ in a context augmented with $x : A$.
This is significant when the coercion opens a proof obligation, as the user will be presented with multiple, but simpler, proof obligations in the correct context.
In this way, Matita supports the `Russell' proof methodology developed by Sozeau in~\cite{sozeau:subset:2007}, in a lightweight but tightly-integrated manner.
\end{itemize}
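For concreteness, here is the small example promised above.
It is purely illustrative and is not part of the CerCo formalisation: a non-recursive \texttt{definition}, followed by a recursive function introduced with \texttt{let rec} and defined by pattern matching over the natural numbers.
\begin{lstlisting}[language=Grafite]
definition double : nat $\rightarrow$ nat := $\lam{n}$ n + n.

let rec sum_upto (n : nat) on n : nat :=
  match n with
  [ 0   $\Rightarrow$ 0
  | S m $\Rightarrow$ S m + sum_upto m
  ].
\end{lstlisting}
The \texttt{on n} annotation indicates to Matita's termination checker which argument is structurally decreasing across recursive calls.
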
Throughout, for reasons of clarity, conciseness, and readability, we may choose to simplify or omit parts of Matita code.
We will always ensure that these omissions do not mislead the reader.

\subsection{Map of the paper}

The rest of the paper is structured as follows.

In section~\ref{sect.compiler.architecture}, we describe the architecture of the
CerCo compiler, as well as the intermediate languages that it uses. We also
describe the target hardware and its formal model.

In section~\ref{sect.compiler.proof}, we describe the proof of correctness of
the compiler in more detail. We explain our use of structured traces and the
labelling approach, and discuss the assembler.

In section~\ref{sect.formal.development}, we present data on the formal
development.

In section~\ref{sect.framac.plugin}, we discuss the Frama-C plugin, as well as
some of the case studies we have performed to validate it.

Finally, in section~\ref{sect.conclusions} we present conclusions, as well as
related and future work.