% Conclusions
%   Summary
%   Related work
%   Future work
Intensional properties of programs---time and space usage, for example---are an important factor in the specification of a program, and therefore in overall program correctness.
Here, `intensional' properties can be analysed \emph{asymptotically} or \emph{concretely}, with the latter computing resource bounds for a program execution in terms of clock cycles, (milli)seconds, bits transmitted, or other basic units of resource consumption.
For many application domains, it is concrete complexity that is the more fundamental of the two analyses.
`Soft real-time' applications and libraries exporting cryptographic primitives are two important classes of programs fitting this pattern.
Worst Case Execution Time (WCET) tools currently represent the state of the art in static analyses that determine accurate concrete resource bounds for programs.
These tools suffer from a number of disadvantages: their analyses are at once sophisticated and opaque, so that trust in the analysis devolves to trust in the complex WCET tool implementation, and the analysis must be performed on the machine code produced by a compiler, where all high-level program structure has been `compiled away'.
A more attractive alternative would be a mechanism that `lifts' a cost model from the machine code generated by a compiler back to the source-code level, where analyses could be performed in terms of the source code, abstractions, and control-flow constructs written and understood by the programmer.
The major challenge here is obtaining a source-level resource analysis that can compete with the precision offered by traditional WCET tools.
The central insight of the CerCo project is that a compiler for a substantial fragment of the C programming language can be written in a way that facilitates high-level reasoning about concrete resource usage.
CerCo's resource analysis proceeds by fully embracing the compilation process and the implicit knowledge of program translation that the compiler possesses, rather than ignoring it.
In particular, the CerCo compiler tracks the translation of phrases of high-level source code into machine code, where a cost measure is applied, and then reflects this measure back through the compiler to the source-code level as parametric cost assertions tailored to the individual program.
These assertions can be used by external reasoning tools to establish hard bounds on a program's resource consumption.
Further, the CerCo verified compiler and the associated toolchain presented in this paper demonstrate that it is indeed possible to perform static concrete resource analysis at the source level with the precision one would expect of traditional WCET tools.
Indeed, for our target hardware, our parametric cost assertions produce \emph{exact} bounds on resource consumption.
Moreover, as the CerCo compiler's lifting of the cost model is fully verified in Matita, the trusted code base is minimised: one need no longer rely on the correctness of a WCET tool to obtain a trustworthy analysis.
Lastly, a note on the generality of the techniques presented herein.
The CerCo compiler is a lightly-optimising compiler for a large subset of the C programming language and, whilst not as aggressive in its optimisation passes as commercial and open-source C compilers such as \texttt{gcc}, \texttt{clang}, or \texttt{icc}, is a substantial piece of software.
As a result, the application of the techniques described in this paper to the full CerCo compiler is, in our view, already evidence of their scalability, and we see no reason why they could not be applied to a more aggressively optimising compiler, given that none of our techniques are tied to any particular choice of intermediate language or set of optimisations.
Further, the techniques that we have presented are not tied to the C programming language, and are in fact applicable to arbitrary imperative and functional programming languages.
This last claim is bolstered by a pen-and-paper proof of the correctness of the labelling approach applied to a `toy' compiler for an applicative language. %cite
\subsection{Related work}
Many different approaches to resource analysis---both asymptotic and concrete---are present in the literature.
As mentioned above, the current state of the art in concrete resource analysis is represented by WCET tools such as AbsInt's aiT static analyser, which uses abstract interpretation, coupled with faithful, detailed models of the hardware, to derive tight bounds on expected program execution time.
However, the analyses performed by mainstream WCET tools are unverified: the WCET tool itself, typically a large and complex piece of software, must become part of one's trusted code base.
\subsection{Further work}
To prove that compilers can keep track of optimisations and induce a precise cost model on the source code, we targeted a simple architecture that admits a cost model independent of execution history.
The most important future work is dealing with hardware architectures characterised by history-dependent, stateful components, such as caches and pipelines.
The main issue is to assign a parametric, dependent cost to basic blocks that can later be transferred by the labelling approach to the source code and represented in a meaningful way to the user.
The dependent labelling approach that we have studied seems a promising tool for achieving this goal, but more work is required to provide good source-level approximations of the relevant processor state.
Other avenues for future work are to improve the cost invariant generator algorithms and the coverage of compiler optimisations, to combine the labelling approach with the type and effect discipline of~\cite{typeffects} in order to handle languages with implicit memory management, and to experiment with our tools in the early phases of development.
Larger case studies are also necessary to evaluate the CerCo prototype on realistic, industrial-scale programs.