# Changeset 3631

Timestamp:
Mar 7, 2017, 10:50:57 AM
Message:

Rewrote abstract

File:
1 edited

## Papers/jar-cerco-2017/cerco.tex

the Seventh Framework Programme for Research of the European Commission, under FET-Open grant number: 243881}}
\subtitle{Certified resource analysis for a large fragment of C}
\journalname{Journal of Automated Reasoning}
\titlerunning{Certified Complexity}
\begin{abstract}
Concrete non-functional properties of programs---for example, time and space usage as measured in basal units of measure such as milliseconds and bytes allocated---are important components of the specification of a program, and therefore overall program correctness. Indeed, for many application domains, concrete complexity analysis is arguably more important than any asymptotic complexity analysis. Libraries exporting cryptographic primitives that must be impervious to timing side-channel attacks, or real-time applications with hard timing limits on responsiveness, are examples.

Worst Case Execution Time tools, based on abstract interpretation, currently represent the state-of-the-art in determining concrete time bounds for a program execution. These tools suffer from a number of disadvantages, not least the fact that all analysis is performed on machine code, rather than high-level source code, making results hard for programmers to interpret. Further, these tools are often complex pieces of software, whose analysis is hard to trust. More ideal would be a mechanism to `lift' a cost model from the machine code generated by a compiler, back to the source code level, where analyses could be performed in terms understood by the programmer. How one could incorporate the precision of traditional static analyses into such a high-level approach---and how this could be done reliably---is not \emph{a priori} clear.

In this paper, we describe the scientific contributions of the European Union's FET-Open Project CerCo (`Certified Complexity'). CerCo's main achievement is the development of a technique for analysing non-functional properties of programs at the source level, with little or no loss of accuracy, and a small trusted code base. The core component of the project is a compiler for a large fragment of the C programming language, verified in the Matita theorem prover, that produces an instrumented copy of the source code in addition to generating object code. This instrumentation exposes, and tracks precisely, the concrete (non-asymptotic) computational cost of the input program at the source level. Untrusted invariant generators and trusted theorem provers may then be used to compute and certify the parametric execution time of the code. We describe the architecture of our C compiler, its proof of correctness, and the associated toolchain developed around the compiler. Using our toolchain, we describe a case study in applying our technique to the verification of concrete timing bounds for cryptographic code.
\keywords{Verified compilation \and Complexity analysis \and Worst Case Execution Time analysis \and CerCo (`Certified Complexity') \and Matita}
\end{abstract}