\section{Conclusions}
\label{sect.conclusions}

% Conclusions
%   Summary
%   Related work
%   Future work

Intensional properties of programs---time and space usage, for example---are an important factor in the specification of a program, and therefore in overall program correctness.
Such `intensional' properties can be analysed either \emph{asymptotically} or \emph{concretely}, with the latter computing resource bounds for a program execution in terms of clock cycles, (milli)seconds, bits transmitted, or other basic units of resource consumption.
For many application domains, concrete complexity is the more fundamental of the two analyses.
`Soft real-time' applications and libraries exporting cryptographic primitives are two important classes of programs fitting this pattern.

Worst Case Execution Time (WCET) tools currently represent the state of the art in providing static analyses that determine accurate concrete resource bounds for programs.
These tools have a number of disadvantages: their analyses are at once sophisticated and opaque, so that trust in the analysis devolves to trust in the complex WCET tool implementation, and they require that the analysis be performed on the machine code produced by a compiler, where all high-level program structure has been `compiled away'.

More attractive would be a mechanism to `lift' a cost model from the machine code generated by a compiler back to the source-code level, where analyses could be performed in terms of the source code, abstractions, and control-flow constructs written and understood by the programmer.
A major challenge here is obtaining a source-level resource analysis that can compete with the precision offered by traditional WCET tools.

The central insight of the CerCo project is that a compiler for a substantial fragment of the C programming language can be written in such a way as to facilitate high-level reasoning about concrete resource usage.
CerCo's resource analysis proceeds by fully embracing the compilation process and the implicit knowledge of program translation that the compiler possesses, rather than ignoring it.
In particular, the CerCo compiler tracks the translation of phrases of high-level source code into machine code, where a cost measure is applied, before reflecting that measure back through the compiler to the source-code level as parametric cost assertions tailored to the individual program.
These assertions can be used by external reasoning tools to provide hard bounds on a program's resource consumption.

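As a purely schematic illustration of the shape such source-level output can take, consider the following instrumented fragment; the global counter \texttt{\_\_cost}, the cycle counts, and the assertion syntax are illustrative placeholders rather than the exact output of our toolchain.

\begin{verbatim}
/* Schematic instrumented source: a global counter accumulates the
   clock cycles of the machine code the compiler actually emitted.
   The increments and the bound below are illustrative values.      */
int __cost = 0;

/*@ ensures __cost <= \old(__cost) + 12 + 23 * n;   (parametric in n) */
int sum(int *a, int n) {
  int i, s = 0;
  __cost += 12;                 /* prologue and loop set-up          */
  for (i = 0; i < n; i++) {
    __cost += 23;               /* one loop iteration, as compiled   */
    s += a[i];
  }
  return s;
}
\end{verbatim}

An external reasoning tool can then discharge a bound of this form by ordinary source-level reasoning about the loop structure.
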
Further, the CerCo verified compiler and the associated toolchain presented in this paper demonstrate that it is indeed possible to perform static concrete resource analysis at the source level with the precision one would expect of traditional WCET tools.
Indeed, for our target hardware, our parametric cost assertions are capable of producing \emph{exact} bounds on resource consumption.
Moreover, as the CerCo compiler's lifting of the cost model is fully verified in Matita, the trusted code base is minimised: one no longer needs to rely on the correctness of a WCET tool to obtain a trustworthy analysis.

Lastly, the techniques introduced in the compiler proof appear to scale, and are amenable to both imperative and functional programming languages.
Moreover, all techniques presented in this paper are compatible with every compiler optimisation we have considered so far.

\subsection{Related work}
\label{subsect.related.work}

\subsection{Further work}
\label{subsect.further.work}

To prove that compilers can keep track of optimisations and induce a precise cost model on the source code, we targeted a simple architecture that admits a cost model independent of execution history.
The most important future work is dealing with hardware architectures characterised by history-dependent stateful components, like caches and pipelines.
The main issue is to assign a parametric, dependent cost to basic blocks that can later be transferred by the labelling approach to the source code and represented in a meaningful way to the user.
The dependent labelling approach that we have studied seems a promising tool with which to achieve this goal, but more work is required to provide good source-level approximations of the relevant processor state.

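As a purely illustrative sketch of what such a dependent cost might look like at the source level---the names, the latencies, and the single-flag abstraction of the cache are hypothetical, not features of the current compiler---consider:

\begin{verbatim}
/* Hypothetical dependent cost for a basic block on a machine with a
   data cache: the increment depends on an abstract view of the cache
   state, which a source-level analysis must then approximate.        */
extern int   __cost;
extern _Bool __cache_hit;        /* abstraction of the processor state */

void touch(int *p) {
  __cost += __cache_hit ? 3 : 17;   /* hit versus miss latency */
  *p += 1;
}
\end{verbatim}

The open problem is precisely how to compute, at the source level, a useful approximation of predicates such as \texttt{\_\_cache\_hit}.
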
Other examples of future work are to improve the cost-invariant generation algorithms and the coverage of compiler optimisations, to combine the labelling approach with the type and effect discipline of~\cite{typeffects} in order to handle languages with implicit memory management, and to experiment with our tools in the early phases of development.
Larger case studies are also necessary to evaluate the CerCo prototype on realistic, industrial-scale programs.