Changeset 3624


Timestamp:
Mar 6, 2017, 5:15:29 PM
Author:
mulligan
Message:

more work on related work section

File:
1 edited

Legend:

Unmodified (old new)
Added (+ new)
Removed (- old)
  • Papers/jar-cerco-2017/conclusions.tex

    r3621 r3624  
8 8
9 9  Intensional properties of programs---time and space usage, for example---are an important factor in the specification of a program, and therefore in overall program correctness.
- 10  Here, `intensional' properties can be analysed \emph{asymptotically}, or \emph{concretely}, with the latter computing resource bounds in terms of clock cycles, (milli)seconds, bits transmitted, or other basal units of resource consumption, for a program execution.
+ 10  Here, `intensional' properties can be analysed \emph{asymptotically} or \emph{concretely}, with the latter analyses computing resource bounds in terms of clock cycles, (milli)seconds, bits transmitted, or other basal units of resource consumption for a program execution.
11 11  For many application domains, it is concrete complexity that is the more fundamental of the two analyses.
- 12  `Soft real time' applications and libraries exporting cryptographic primitives are two important classes of programs fitting this pattern, for example.
+ 12  `Soft real time' programs, with bounds on application response time, and libraries exporting cryptographic primitives, designed to be hardened against timing side-channel attacks, are two important classes of application fitting this pattern, for example.
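To make the asymptotic/concrete distinction tangible, a minimal sketch follows; the cycle figures are hypothetical, tied to no particular processor, and serve purely as illustration:

\begin{verbatim}
/* Asymptotically, sum() runs in O(n) time.  A concrete analysis
   instead reports a bound in basal units: if each loop iteration
   costs, say, 7 clock cycles and function entry/exit together
   cost 12, the bound is 7*n + 12 cycles.                        */
unsigned int sum(const unsigned int *a, unsigned int n)
{
    unsigned int s = 0;
    for (unsigned int i = 0; i < n; i++)
        s += a[i];   /* hypothetical: 7 cycles per iteration */
    return s;        /* hypothetical: 12 cycles fixed cost   */
}
\end{verbatim}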
13 13
14 14  Worst Case Execution Time (WCET) tools currently represent the state of the art in providing static analyses that determine accurate concrete resource bounds for programs.
- 15  These tools possess a number of disadvantages: for instance, their analyses are at once sophisticated and opaque, with trust in the analysis devolving to trust in the complex WCET tool implementation, and they require that the analysis be performed on machine code produced by a compiler, where all high-level program structure has been `compiled away'.
+ 15  These tools possess a number of disadvantages.
+ 16  For instance, their analyses are at once sophisticated and opaque, with trust in the analysis devolving to trust in the complex WCET tool implementation.
+ 17  Further, they require that the analysis be performed on machine code produced by a compiler, where all high-level program structure has been `compiled away', making analyses potentially hard to understand for programmers accustomed to reasoning at the source-code level.
16 18
17 19  More ideal would be a mechanism to `lift' a cost model from the machine code generated by a compiler back to the source-code level, where analyses could be performed in terms of the source code, abstractions, and control-flow constructs written and understood by the programmer.
18 20  A major challenge here is obtaining a source-level resource analysis that can compete with the precision offered by traditional WCET tools.
19 21
- 20  The central insight of the CerCo project is that a compiler for a substantial fragment of the C programming language can be written in such a way that facilitates high-level reasoning about concrete resource usage.
+ 22  The central insight of the CerCo project is that a compiler for a substantial fragment of the C programming language can be written in a style that facilitates high-level reasoning about concrete resource usage.
21 23  CerCo's resource analysis proceeds by fully embracing the compilation process and the implicit knowledge of program translation that the compiler possesses, rather than ignoring it.
- 22  In particular, the CerCo compiler tracks the translation of phrases of high-level source code into machine code, wherein a cost measure is applied, before being reflected back through the compiler, to the source-code level, as parametric cost assertions tailored to an individual program.
- 23  These assertions can be used by external reasoning tools to provide hard bounds on a program's resource consumption.
+ 24  In particular, the CerCo compiler tracks the translation of phrases of high-level source code into machine code, wherein a cost measure is applied, before the measure is reflected back through the compiler to the source-code level.
+ 25  Thereafter, this cost model is expressed as parametric cost assertions, embellishing the program's source code, which can be used by external reasoning tools to provide hard bounds on a program's resource consumption.
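Schematically, the instrumented output produced by this process can be pictured along the following lines; this is a sketch only, and the counter name \texttt{\_\_cost}, the assertion syntax, and the cycle constants are illustrative assumptions rather than the compiler's exact output:

\begin{verbatim}
int __cost = 0;              /* global cycle counter (illustrative) */

/* Parametric cost assertion, linear in the loop bound n:
     __cost <= \old(__cost) + 94 + 31*n
   (constants hypothetical)                                         */
void clear(unsigned char *p, unsigned char n)
{
    __cost += 94;            /* cost of the straight-line entry code */
    for (unsigned char i = 0; i < n; i++) {
        __cost += 31;        /* cost of one loop iteration           */
        p[i] = 0;
    }
}
\end{verbatim}

External reasoning tools can then treat \texttt{\_\_cost} as an ordinary program variable when establishing resource bounds.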
24 26
25 27  Further, the CerCo verified compiler and associated toolchain presented in this paper have demonstrated that it is indeed possible to perform static concrete resource analysis at the source level with the precision one would expect of traditional WCET tools.
26 28
27 29  Moreover, as the CerCo compiler's lifting of the cost model is fully verified in Matita, the trusted code base is minimised---one no longer need rely on the correctness of a WCET tool to obtain a trustworthy analysis.
28 30
- 29  Lastly, a note on the generality of the techniques presented herein.
- 30  The CerCo compiler itself is a lightly-optimising compiler for a large subset of the C programming language, and whilst not as aggressive in its optimisation passes as commercial and open-source C compilers, such as \texttt{gcc}, \texttt{clang}, or \texttt{icc}, is a substantial piece of software.
+ 31  In CerCo, we believe that we have developed a general approach to structuring compilers so that cost models may be lifted to the source level.
+ 32  In particular, the CerCo compiler itself is a lightly-optimising compiler for a large subset of the C programming language and, whilst not as aggressive in its optimisation passes as commercial and open-source C compilers such as \texttt{gcc}, \texttt{clang}, or \texttt{icc}, is a substantial piece of software.
31 33  As a result, the application of the techniques described in this paper to the full CerCo compiler is, in our view, already evidence of their scalability, and we see no reason why these techniques could not be applied to a more aggressively optimising compiler, given that none of our techniques are tied to any particular choice of intermediate language or set of optimisations.
32 34  Further, the techniques that we have presented are not tied to the C programming language, and are in fact applicable to arbitrary imperative and functional programming languages.
…
38 40  Many different approaches to resource analysis---both asymptotic and concrete---are present within the literature.
39 41
- 40  As mentioned above, the current state of the art in concrete resource analysis is represented by WCET tools such as AbsInt's aiT static analyser, which uses abstract interpretation, coupled with faithful, detailed models of hardware, to derive tight time bounds on expected program execution time.
- 41  Further, the analyses performed by mainstream WCET tools are unverified: the WCET tool itself, typically a large, complex piece of software, must become part of one's trusted code base.
- 42  Recent
+ 42  As mentioned above, the current state of the art in concrete resource analysis is represented by WCET tools such as AbsInt's aiT static analyser, which use abstract interpretation, control-flow analyses, and detailed models of hardware to derive tight bounds on program execution time.
+ 43  Unlike the approach presented in this paper, the analyses performed by mainstream WCET tools are unverified: the WCET tool itself, typically a large, complex piece of software, must become part of one's trusted code base.
+ 44  However, recent work has investigated the verification of loop-bound analysis, a subcomponent of wider WCET analyses, for an intermediate language of the CompCert verified C compiler.
+ 45  Note that this work is related, but orthogonal, to the work presented in this paper: our cost annotations at the C source level are \emph{parametric} in the number of iterations of the loops appearing in the source code.
+ 46  Indeed, to obtain a final resource analysis using our toolchain, the user must manually bound loop iterations.\footnote{This is typical for embedded and safety- or mission-critical applications, where unrestricted looping and recursion are not permitted.  See, for example, rule 16.2 of MISRA C 2004, which prohibits recursive function calls entirely in safety-critical code, and NASA JPL's programming guidelines, which state that all loops must have a bounded number of iterations.} %cite
+ 47  As a result, one could imagine pairing such a loop-bound analysis with our parametric cost model to reduce the amount of user interaction required when using our toolchain.
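As a hypothetical worked instance, reusing the illustrative constants of the earlier sketch: if the user bounds a loop at $n \leq 255$ iterations, a parametric assertion of the form $\mathit{cost} \leq 94 + 31n$ instantiates to the closed-form bound
\[
94 + 31 \times 255 = 7999 \ \text{cycles},
\]
and it is exactly this instantiation step that a verified loop-bound analysis could discharge automatically.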
+ 48
+ 49  Perhaps the most similar piece of work to our own is the recent work by Carbonneaux \emph{et al}, who extended the CompCert verified C compiler in order to lift a concrete cost model for stack space usage back to the C source level.
+ 50  Like CerCo, this work was verified within a theorem prover---in this case Coq, rather than Matita.
+ 51  Though Carbonneaux \emph{et al} were working entirely independently of the project described herein,\footnote{Personal communication with Carbonneaux and Hoffmann.} the two pieces of work share some remarkable similarities, with both projects developing an analogue of the `structured trace' data structure described earlier in this paper in order to facilitate the verification of the lifting.
+ 52  However, differences between the two projects exist.
+ 53  Carbonneaux \emph{et al} developed a `quantitative' Hoare logic for manually reasoning about the stack space usage of compiled programs; we developed a Frama-C plugin for automatically establishing resource bounds, modulo a degree of user input.
+ 54  Carbo
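Schematically---in our own rendering, not Carbonneaux \emph{et al}'s exact notation---a quantitative triple $\{n\}\;C\;\{n'\}$ can be read as asserting that the command $C$ executes safely given $n$ bytes of free stack and terminates with at least $n'$ bytes still free, so that, for instance, sequential composition threads the stack budget through:
\[
\frac{\{n\}\;C_1\;\{m\} \qquad \{m\}\;C_2\;\{n'\}}{\{n\}\;C_1;\,C_2\;\{n'\}}
\]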
43 55
44 56  \subsection{Further work}