# Changeset 3437

Timestamp:
Feb 14, 2014, 7:56:01 PM
Message:

Deal more directly with some reviewer's comments.

File:
1 edited

[...] by combining user annotations (e.g. preconditions and invariants) with a multitude of automated analyses (invariant generators, type systems, abstract interpretation, theorem proving, and so on). By contrast, many non-functional properties are generally checked on low-level object code,
%\footnote{A notable
%  exception is the explicit allocation of heap space in languages like C, which
%  can be handled at the source level.}
but also demand information about high-level functional behaviour that must somehow be reconstructed.

[...] the estimates.

\paragraph{Vision and approach.}
We want to reconcile functional and non-functional analyses: to share information and perform both at the same time on source code.

[...] components.

\paragraph{Contributions.}
We have developed what we refer to as \emph{the labelling approach} \cite{labelling}, a technique to implement compilers that induce cost models on source programs by very lightweight tracking of code changes through compilation. We have studied how to formally prove the correctness of compilers implementing this technique. We have implemented such a compiler from C to object binaries for the 8051 micro-controller for predicting execution time and stack space usage, and verified it in an interactive theorem prover. As we are targeting an embedded micro-controller we do not consider dynamic memory allocation.

To demonstrate source-level verification of costs we have implemented [...]

[...] changes to the program architecture.
\subsection{CerCo's approach}

In CerCo we propose a radically new approach to the problem: we reject the idea [...]

The CerCo approach has the potential to dramatically improve the state of the art. By performing control and data flow analyses on the source code, the error-prone translation of invariants is completely avoided. Instead, this work is done at the source level using tools of the user's choice. Any available technique for the verification of functional properties can be immediately reused, and multiple techniques can collaborate together to infer and certify cost invariants for the program. There are no limitations on the types of loops or data structures involved. Parametric cost analysis becomes the default one, with non-parametric bounds used as a last resort when trading the complexity of the analysis with its precision. \emph{A priori}, no [...]

In addition to the loop-free Lustre code, this method was successfully applied to a small range of cryptographic code. See~\cite{labelling} for more details. The example in Section~\ref{sec:workflow} was also produced using the plug-in. The variant was calculated automatically by noticing that \lstinline'j' is a loop counter with maximum value \lstinline'len'. The most expensive path through the loop body ($78+136 = 214$) is then multiplied by the number of iterations to give the cost of the loop.

\paragraph{C programs with pointers.}