Changeset 3617


Timestamp:
Mar 6, 2017, 3:50:23 PM (8 weeks ago)
Author:
mulligan
Message:

first draft of summary section of conclusion

File:
1 edited

  • Papers/jar-cerco-2017/conclusions.tex

    r3615 r3617  
     1\section{Conclusions}
     2\label{sect.conclusions}
     3
    14% Conclusions
    25%   Summary
     
    47%   Future work
    58
    6 \section{Conclusions}
    7 \label{sect.conclusions}
    8 
    9 In many application domains the intensional properties of programs---time and space usage, for example---are an important factor in the specification of a program, and therefore overall program correctness.
    10 Here, `intensional' properties can be analysed \emph{asymptotically}, or \emph{concretely}, with the latter computing resource bounds in terms of clock cycles, seconds, bits transmitted, or other concrete resource measures, for a program execution.
    11 `Soft real time' applications and cryptography libraries are two important classes of programs fitting this pattern.
     9Intensional properties of programs---time and space usage, for example---are an important factor in the specification of a program, and therefore overall program correctness.
     10Here, `intensional' properties can be analysed \emph{asymptotically}, or \emph{concretely}, with the latter computing resource bounds in terms of clock cycles, (milli)seconds, bits transmitted, or other basic units of resource consumption, for a program execution.
     11For many application domains, it is concrete complexity that is the more fundamental of the two analyses.
     12`Soft real-time' applications and libraries exporting cryptographic primitives are two important classes of programs fitting this pattern, for example.
    1213
    1314Worst Case Execution Time (WCET) tools currently represent the state of the art in providing static analyses that determine accurate concrete resource bounds for programs.
    14 These tools however possess a number of disadvantages: for instance, they require that the analysis be performed on machine code produced by a compiler, where all high-level program structure has been `compiled away', rather than permitting an analysis at the source-code level.
     15These tools possess a number of disadvantages: their analyses are at once sophisticated and opaque, so that trust in the analysis devolves to trust in the complex WCET tool implementation; moreover, they require that the analysis be performed on machine code produced by a compiler, after all high-level program structure has been `compiled away'.
    1516
    16 The CerCo verified compiler and associated toolchain has demonstrated that it is possible to perform static time and space analysis at the source level without losing accuracy, reducing the trusted code base and reconciling the study of functional and non-functional properties of programs.
    17 The techniques introduced in the compiler proof seem to be scalable, and are amenable to both imperative and functional programming languages.
    18 Further, all techniques presented are compatible with every compiler optimisation considered by us to date.
     17A more ideal arrangement would be a mechanism to `lift' a cost model from the machine code generated by a compiler back to the source-code level, where analyses could be performed in terms of the source code, abstractions, and control-flow constructs that the programmer wrote and understands.
     18A major challenge here is obtaining a source-level resource analysis that can compete with the precision offered by traditional WCET tools.
     19
     20The central insight of the CerCo project is that a compiler for a substantial fragment of the C programming language can be written in such a way as to facilitate high-level reasoning about concrete resource usage.
     21CerCo's resource analysis proceeds by fully embracing the compilation process and the implicit knowledge of program translation that the compiler possesses, rather than ignoring it.
     22In particular, the CerCo compiler tracks the translation of phrases of high-level source code into machine code, where a cost measure is applied; these costs are then reflected back through the compiler to the source-code level as parametric cost assertions tailored to an individual program.
     23These assertions can be used by external reasoning tools to provide hard bounds on a program's resource consumption.
     24
     25Further, the CerCo verified compiler and associated toolchain presented in this paper have demonstrated that it is indeed possible to perform static concrete resource analysis at the source level with the precision one would expect of traditional WCET tools.
     26Indeed, for our target hardware, our parametric cost assertions are capable of producing \emph{exact} bounds on resource consumption.
     27Further, as the CerCo compiler's lifting of the cost model is fully verified in Matita, the trusted code base is minimised---one need no longer rely on the correctness of a WCET tool to obtain a trustworthy analysis.
     28
     29Lastly, the techniques introduced in the compiler proof seem to be scalable and are amenable to both imperative and functional programming languages.
     30Moreover, all techniques presented in this paper are compatible with every compiler optimisation we have considered so far.
     31
     32\subsection{Related work}
     33\label{subsect.related.work}
     34
     35\subsection{Further work}
     36\label{subsect.further.work}
    1937
    2038To prove that compilers can keep track of optimisations