# Changeset 3651

Timestamp:
Mar 14, 2017, 11:14:29 AM
Message:

more work on conclusions and related work, bibliography growing to gigantic proportions

Location:
Papers/jar-cerco-2017
Files:
2 edited

 r3650 Abstract Interpretation may also be used to derive upper bounds for stack usage (e.g.~\cite{Regehr:2005:ESO:1113830.1113833}), and Abstract Interpretation tools for predicting stack space usage, for example the StackAnalyzer tool~\cite{stackanalyzer}, are also commercially available. \subsubsection{High-level approaches} \paragraph{Compiler-based cost-model lifting} Perhaps the most closely related piece of work to our own is the recent work by Carbonneaux \emph{et al}~\cite{DBLP:conf/pldi/Carbonneaux0RS14}, who developed a `quantitative Hoare logic' for reasoning about the stack space usage of CompCert C programs, and extended the CompCert verified C compiler in order to lift a concrete cost model for stack space usage back to the C source-level. Like CerCo, this work was verified within a theorem prover---in this case Coq, rather than Matita. Though Carbonneaux \emph{et al} were working entirely independently of the project described herein\footnote{Personal communication with Carbonneaux and Hoffmann}, the two pieces of work share some remarkable similarities, with both projects developing an analogue of the `structured trace' data structure described earlier in this paper in order to facilitate the verification of the lifting. Carbonneaux \emph{et al} lift their cost model from the assembly generated by the CompCert compiler; we must go further, to the level of machine code, in order to lift our timing cost model, necessitating the development of a verified assembler. Nevertheless, despite these differences, the two projects are clearly closely related.
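To give the flavour of such a quantitative logic (the notation here is a simplified sketch of our own devising, not that of the cited paper), a quantitative Hoare triple may be written
\[
\vdash \{P \mid n\}\; C\; \{Q \mid n'\}
\]
and read: if the precondition $P$ holds and at least $n$ units of stack space are available, then $C$ executes without exhausting the stack, terminating in a state satisfying the postcondition $Q$ with at least $n'$ units still available. A rule for sequential composition then chains these bounds, so that verified per-command bounds compose into a bound for the whole program.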
Another piece of work closely related to our own is COSTA~\cite{Albert2012}, a static analysis system deriving heap resource bounds for the Java programming language, building on previous work investigating cost models for Java bytecode programs~\cite{ALBERT200767}. COSTA is able to produce parametric heap resource bounds for a given Java program, which are made manifest as Java Modelling Language (JML)~\cite{Chalin2006} assertions inserted into the Java source file. Coupled with the KeY automated reasoning system~\cite{KeYBook2016}, these assertions may be automatically verified, obtaining trustworthy upper bounds on a Java program's dynamic heap space usage at runtime. Other work has explored the use of Computer Algebra Software, such as the Mathematica system, for deriving closed-form upper bounds on a program's resource usage~\cite{Albert2011}. Previous work has also investigated various static analyses of the time- and space-usage of programs written in Safe, a simple functional programming language with regions~\cite{Montenegro2010,Montenegro2010a}. One such analysis was verified in Isabelle/HOL~\cite{deDios2011}. The use of types to bound, and of program logics to reason about, a program's resource usage has also been extensively investigated in previous work. Resource Aware ML (RAML) is a first-order, statically typed functional programming language~\cite{Hoffmann11,HAH12Toplas,HoffHof10,HAH12,DBLP:journals/jfp/0002S15}. The RAML type system is able to derive polynomial resource bounds from first-order functional programs, using a variant of the `potential method'. This approach was first pioneered by Hofmann and Jost for amortized, linear bounds on resource consumption~\cite{DBLP:conf/esop/HofmannJ06}, who focussed on predicting heap-space usage.
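The idea behind the potential method can be sketched as follows (this is the standard amortised-analysis formulation, not the precise RAML typing rules): each intermediate data structure $D_i$ is assigned a nonnegative potential $\Phi(D_i)$, and the amortized cost $a_i$ of the $i$-th operation is defined as its actual cost $c_i$ plus the change in potential:
\[
a_i \;=\; c_i \;+\; \Phi(D_i) \;-\; \Phi(D_{i-1}).
\]
Summing over a sequence of operations, the total actual cost is bounded by $\sum_i a_i + \Phi(D_0)$, since the potentials telescope and $\Phi$ is nonnegative. In RAML-style systems the potential is attached to types, and inferring suitable annotations reduces to linear programming.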
Campbell also used a variant of this approach to predict stack space usage~\cite{bac-thesis08,bac-giveback08,bac-esop09}, whilst Atkey extended the approach to imperative languages~\cite{Atkey2010}, using ideas from Separation Logic~\cite{Reynolds:2002:SLL:645683.664578}. More recent work has extended this approach to cover more features of the core programming language~\cite{Hoffmann2015}, improved the underlying resource analysis~\cite{Hoffmann:2012:MAR:2362389.2362393}, and applied extensions of the technique to case studies, for example demonstrating that cryptographic code is free from side-channel attacks~\cite{chan-ngo-verifying-2017}. In related work, Salvucci and Chailloux~\cite{Salvucci201627} used a static type and effect system to bound the runtime heap usage of an impure functional programming language similar to ML. Domain-specific type systems and calculi of mobile processes, for bounding agents in a distributed setting, have also been investigated~\cite{Barbanera2003,Teller2002}. Brady and Hammond also investigated a dependent type system extended with features for bounding a program's dynamic resource usage~\cite{Brady2006}; similar ideas had previously been explored by Crary and Weirich~\cite{Crary:2000:RBC:325694.325716}. Libraries for reasoning about the complexity of algorithms have also been written within dependently-typed languages such as Coq~\cite{McCarthy2016} and Agda~\cite{Danielsson:2008:LST:1328897.1328457}. One-off machine-checked verifications of the algorithmic complexity of various data structure implementations, and of operations over them, have also been carried out (e.g.~\cite{Chargueraud2015,Nipkow2015}). Implicit Computational Complexity represents another related approach to bounding resource usage via types~\cite{DalLago2012}.
Here, a fragment of Linear Logic, or some other substructural logic, is taken as the type language of some calculus, with a provable guarantee that typeable terms of the calculus normalise to a value in some bounded (e.g. polynomial in the size of the term) number of reduction steps. Note that, compared to the concrete complexity measures dealt with in this work, bounding reduction steps is rather coarse-grained: one reduction step in the calculus may decompose into some non-constant number of primitive machine operations when executed on physical hardware. A single $\beta$-reduction, for instance, may duplicate an arbitrarily large subterm, and therefore cannot in general be implemented in constant time. Indeed, the focus of Implicit Computational Complexity is not to capture the \emph{concrete} resource usage of programs, but to provide a type-theoretic interpretation of complexity classes. A cost semantics for System T, built using ideas from Implicit Computational Complexity, has been formalised in Coq~\cite{Danner:2013:SCA:2428116.2428123}. Several program logics for reasoning about a program's resource consumption have been investigated in the literature. Aspinall et al. developed a program logic for a subset of the Java Virtual Machine's instruction set, capable of establishing resource bounds on JVM programs~\cite{ASPINALL2007411}, similar in spirit to the `quantitative Hoare logic' mentioned above. The soundness of the logic was formally verified in Isabelle/HOL. Another program logic for reasoning about the resource usage of Java programs, based on ideas taken from Separation Logic, was also developed by Aspinall et al.~\cite{Aspinall2010}. \paragraph{Verified compilation}