# Changeset 3617

Timestamp:
Mar 6, 2017, 3:50:23 PM
Message:

first draft of summary section of conclusion

File:
1 edited

\section{Conclusions}
\label{sect.conclusions}

Intensional properties of programs---time and space usage, for example---are an important factor in the specification of a program, and therefore in overall program correctness. Here, `intensional' properties can be analysed \emph{asymptotically} or \emph{concretely}, with the latter computing resource bounds in terms of clock cycles, (milli)seconds, bits transmitted, or other basic units of resource consumption, for a program execution. For many application domains it is concrete complexity that is the more fundamental of the two analyses. `Soft real time' applications and libraries exporting cryptographic primitives are two important classes of programs fitting this pattern, for example.

Worst Case Execution Time (WCET) tools currently represent the state of the art in providing static analyses that determine accurate concrete resource bounds for programs.
These tools possess a number of disadvantages: for instance, their analyses are at once sophisticated and opaque, with trust in the analysis devolving to trust in the complex WCET tool implementation, and they require that the analysis be performed on machine code produced by a compiler, where all high-level program structure has been `compiled away', rather than permitting an analysis at the source-code level.

More ideal would be a mechanism to `lift' a cost model from the machine code generated by a compiler back to the source-code level, where analyses could be performed in terms of the source code, abstractions, and control-flow constructs written and understood by the programmer. A major challenge here is obtaining a source-level resource analysis that can compete with the precision offered by traditional WCET tools.

The central insight of the CerCo project is that a compiler for a substantial fragment of the C programming language can be written in such a way as to facilitate high-level reasoning about concrete resource usage. CerCo's resource analysis proceeds by fully embracing the compilation process and the implicit knowledge of program translation that the compiler possesses, rather than ignoring it. In particular, the CerCo compiler tracks the translation of phrases of high-level source code into machine code, where a cost measure is applied, before reflecting this measure back through the compiler to the source-code level as parametric cost assertions tailored to an individual program.
These assertions can be used by external reasoning tools to provide hard bounds on a program's resource consumption. The CerCo verified compiler and associated toolchain presented in this paper demonstrate that it is indeed possible to perform static concrete resource analysis at the source level with the precision one would expect of traditional WCET tools. Indeed, for our target hardware, our parametric cost assertions are capable of producing \emph{exact} bounds on resource consumption. Further, as the CerCo compiler's lifting of the cost model is fully verified in Matita, the trusted code base is minimised---one need no longer rely on the correctness of a WCET tool to obtain a trustworthy analysis. Lastly, the techniques introduced in the compiler proof appear to be scalable, and are amenable to both imperative and functional programming languages. All techniques presented in this paper are compatible with every compiler optimisation we have considered so far.

\subsection{Related work}
\label{subsect.related.work}

\subsection{Further work}
\label{subsect.further.work}

To prove that compilers can keep track of optimisations