Changeset 3646


Timestamp: Mar 13, 2017, 10:57:23 AM
Author: mulligan
Message: started rewriting conclusions...
File: 1 edited

  • Papers/jar-cerco-2017/conclusions.tex

--- Papers/jar-cerco-2017/conclusions.tex (r3639)
+++ Papers/jar-cerco-2017/conclusions.tex (r3646)

@@ -7,6 +7,5 @@
 %   Future work
 
-Intensional properties of programs---time and space usage, for example---are an important factor in the specification of a program, and therefore overall program correctness.
-Here, `intensional' properties can be analysed \emph{asymptotically}, or \emph{concretely}, with the latter analyses computing resource bounds in terms of clock cycles, (milli)seconds, bits transmitted, or other basal units of resource consumption, for a program execution.
+The concrete, non-functional properties of programs---time and space usage, for example---are an important factor in the specification of a program, and therefore overall program correctness.
 For many application domains, it is concrete complexity that is the more critical of the two analyses.
 `Soft real time' programs, with bounds on application response time, and libraries exporting cryptographic primitives, designed to be impervious to timing side-channel attacks, are two important classes of application fitting this pattern, for example.
@@ -20,20 +19,19 @@
 A major challenge here is obtaining a source-level resource analysis that can compete with the precision offered by traditional WCET tools, and how to perform this high-level analysis reliably.
 
-The central insight of the CerCo project is that a compiler for a substantial fragment of the C programming language can be written in a style that facilitates high-level reasoning about concrete resource usage.
-CerCo's resource analysis proceeds by fully embracing the compilation process and the implicit knowledge of program translation that the compiler possesses, not ignoring it.
-In particular, the CerCo compiler tracks the translation of phrases of high-level source code into machine code, wherein a cost measure is applied, before being reflected back through the compiler to the source-code level.
+In this paper we have presented an approach to engineering a realistic, mildly-optimising compiler for a large fragment of the C programming language that combines the ergonomics of reasoning at the source-level, with the accuracy and precision one would expect of traditional static analyses for non-functional properties of programs.
+The central insight of our approach is that effective, high-level reasoning about resource usage can be facilitated by embracing the implicit knowledge of program translation that the compiler possesses, not ignoring it.
+In particular, the CerCo verified C compiler tracks the translation of phrases of high-level source code into machine code, wherein a cost measure is applied, before being reflected back through the compiler to the source-code level.
+This tracking is facilitated by a novel technique---the \emph{labelling approach}.
 Thereafter, this cost model is reflected as parametric cost assertions, embellishing the program's source code, which can be used by external reasoning tools---trusted or untrusted---to provide hard bounds on a program's resource consumption.
 
-Further, the CerCo verified compiler and associated toolchain presented in this paper have demonstrated that it is indeed possible to perform static concrete resource analysis at the source level with the precision one would expect of traditional WCET tools.
+The CerCo verified C compiler and associated toolchain presented in this paper have demonstrated that it is indeed possible to perform static concrete resource analysis at the source level with the precision one would expect of traditional WCET tools.
 Indeed, for the MCS-51, our compiler's target architecture, our parametric cost assertions are capable of producing \emph{exact} bounds on resource consumption.
 Further, as the CerCo compiler's lifting of the cost model is fully verified in Matita, the trusted code base is minimised---one no longer need rely on the correctness of a WCET tool to obtain a trustworthy analysis.
 
-In CerCo, we believe that we have developed a general approach to structuring compilers for lifting cost models to the source-level.
-In particular, the CerCo compiler itself is a lightly-optimising compiler for a large subset of the C programming language, and whilst not as aggressive in its optimisation passes as commercial and open-source C compilers, such as \texttt{gcc}, \texttt{clang}, or \texttt{icc}, the compiler is still a relatively substantial piece of software.
+We believe that the work presented in this paper represents a general approach to structuring compilers for lifting cost models to the source-level.
+In particular, the CerCo verified C compiler itself is a lightly-optimising compiler for a large subset of the C programming language, and whilst not as aggressive in its optimisation passes as commercial and open-source C compilers, such as \texttt{gcc}, \texttt{clang}, or \texttt{icc}, the compiler is still a relatively substantial piece of software.
 As a result, the application of the techniques described herein, first described with pen-and-paper proofs, to the full CerCo compiler is, in our view, already evidence of their scalability, and we see no reason why these techniques could not be applied to a more aggressively optimising compiler, given that none of our techniques are tied to any particular choice of intermediate language or set of optimisations.
 The techniques that we have presented are not tied to any particular programming language, and are in fact amenable to arbitrary imperative and functional programming languages.
 This last claim is bolstered by a pen-and-paper proof of the correctness of the labelling approach applied to a `toy' compiler for an applicative language. %cite
-
-\paragraph{Vision}
 
 We envision a new generation of compilers that track program structure through compilation and optimisation and exploit this information to define a precise, non-uniform cost model for source code that accounts for runtime state.
@@ -41,6 +39,5 @@
 
 The techniques currently used by the Worst Case Execution Time (WCET) community, which perform analysis on object code, are still applicable but can be coupled with additional source-level analysis.
-In cases where our approach produces overly complex cost models, safe approximations can be used to trade complexity for precision.
-
+In cases where our approach produces overly complex cost models, safe approximations can be used to trade complexity for precision.
 Finally, source code analysis can be used early in the development process, when components have been specified but not implemented, as modularity means that it is enough to specify the non-functional behaviour of missing components.
 