Changeset 3642
Timestamp: Mar 8, 2017, 4:24:40 PM
Location: Papers/jarcerco2017
Files: 2 edited
Papers/jarcerco2017/cerco.bib
r3632 → r3642

    isbn = {9781841501765},
  }
+
+ @article{DBLP:journals/corr/abs-1202-4905,
+   author = {Andrea Asperti and
+             Wilmer Ricciotti and
+             Claudio Sacerdoti Coen and
+             Enrico Tassi},
+   title = {A Bi-Directional Refinement Algorithm for the Calculus of (Co)Inductive
+            Constructions},
+   journal = {Logical Methods in Computer Science},
+   volume = {8},
+   number = {1},
+   year = {2012},
+   url = {http://dx.doi.org/10.2168/LMCS-8(1:18)2012},
+   doi = {10.2168/LMCS-8(1:18)2012},
+   timestamp = {Mon, 15 Oct 2012 13:17:50 +0200},
+   biburl = {http://dblp.uni-trier.de/rec/bib/journals/corr/abs-1202-4905},
+   bibsource = {dblp computer science bibliography, http://dblp.org}
+ }

  @inproceedings{matita,
Papers/jarcerco2017/introduction.tex
r3641 → r3642

  In this work, we have instantiated the CerCo approach with a cost model for the MCS51 hardware that our prototype compiler targets.
  As this hardware is relatively simple, it admits a precise cost model, and this is reflected in our final theorem, which demonstrates that our high-level cost annotations are both accurate and precise.
- On more modern, exhibiting features such as caches, pipelines, and speculative execution, an exact cost model for time or space will be hard-to-impossible to define.
+ On more modern hardware, exhibiting features such as caches, pipelines, and speculative execution, an exact cost model for time or space will be hard-to-impossible to define.
  Instead, existing WCET analyses could be used, and then lifted to high-level annotations using our non-standard compiler architecture.

- We take it as an article of faith that software used to verify properties of programs must be as bug-free as possible.
+ We take it as an article of faith that software used to verify other software must itself be as bug-free as possible, preferably verified itself.
  One way to achieve this is to reduce the trusted code base of any tools used.
  Here, `trusted code base' refers to the corpus of code that must be trusted to believe that the analysis being carried out is accurate.
- The trusted code base of state-of-the-art WCET tools is significant: one needs to trust the control flow analyser, the linear programming libraries used, and also the formal models of the hardware under analysis.
+ The trusted code base of state-of-the-art static analysis tools, such as WCET analysers, is significant.
+ These are sophisticated, complex pieces of software that are built around a control flow analyser, linear programming libraries, and also detailed formal models of the hardware under analysis.
+ To trust a WCET tool, each of these subcomponents, along with their eventual combination, must all be trusted.
+
  In CerCo we move the control flow analysis to the source code level, and we introduce a non-standard compiler.
- As this compiler architecture is completely novel, we wish to ensure that our implementation (and by extension, our entire approach) is correct.
- Therefore, to reduce the size of our trusted code base, we have implemented a prototype compiler and static analyser in an interactive theorem prover, the Matita interactive theorem prover, an implementation of the Calculus of Coinductive Constructions, which was used to certify that the costs added to the source code are indeed those incurred by the hardware.
+ As this compiler architecture is completely novel, we wish to ensure that our implementation (and by extension, our entire approach to lifting cost models) is correct.
+ Therefore, to reduce the size of our trusted code base, we have implemented a prototype compiler and static analyser in an interactive theorem prover, Matita~\cite{matita,asperti:user:2007}, an implementation of the Calculus of Coinductive Constructions, which was used to certify that the cost annotations supplementing the program's source code are indeed an accurate reflection of those incurred by the hardware.
  We have also implemented formal models of the hardware and of the high-level source-language in the interactive theorem prover.
…
  We now briefly introduce Matita, the interactive theorem prover that was used in CerCo to certify all proofs.

- Matita is an interactive theorem proving system based on a variant of the Calculus of Coinductive Constructions~\cite{asperti:user:2007}.
- The system features a full spectrum of dependent types and (co)inductive families, a system of coercions, a tactic-driven proof construction engine~\cite{sacerdoticoen:tinycals:2007}, and paramodulation based automation~\cite{asperti:higherorder:2007}, all of which we exploit in the formalisation described herein.
+ Matita is an interactive theorem proving system based on a variant of the Calculus of Coinductive Constructions~\cite{matita,asperti:user:2007}.
+ The system features (co)inductive families of types, a system of coercions, a tactic-driven proof construction engine~\cite{sacerdoticoen:tinycals:2007}, and paramodulation based automation~\cite{asperti:higherorder:2007}, all of which we exploit in the formalisation described herein.

  Matita's syntax is similar to the syntaxes of mainstream functional programming languages such as OCaml or Standard ML.
…
  Recursive functions are introduced with \texttt{let rec}.
  Mutually recursive functions are separated via the \texttt{and} keyword.
- Matita's termination checker ensures that all recursive functions are terminating before being admitted to maintain soundness of the system's logic.
+ Matita's termination checker ensures that all recursive functions are terminating before being admitted into the global environment, to maintain soundness.
  \item
  Matita has an infinite hierarchy of type universes.
  A single impredicative universe of types, \texttt{Prop}, exists at the base of this hierarchy.
- An infinite series of predicative universes, \texttt{Type[0]} : \texttt{Type[1]} : \texttt{Type[2]}, and so on and so forth, sits atop \texttt{Prop}.
+ An unbounded series of predicative universes, \texttt{Type[0]}, \texttt{Type[1]}, \texttt{Type[2]}, and so on and so forth, sits atop \texttt{Prop}.
+ One peculiarity of Matita is a user-customisable hierarchy of universes.
+ In our development, we work in a context where \texttt{Prop} is contained in \texttt{Type[0]}, \texttt{Type[0]} is contained in \texttt{Type[1]}, and so on.
  Matita, unlike Coq or Agda, implements no form of typical ambiguity or universe polymorphism, with explicit concrete universe levels being preferred instead.
  \item
…
  Constants \texttt{True} and \texttt{False} represent truth and falsity in \texttt{Prop} respectively.
  Two inductive families in \texttt{Prop} encode conjunction and disjunction: $\mathtt{P \wedge Q}$ and $\mathtt{P \vee Q}$ respectively.
-
  As is usual, implication and universal quantification are identified with the dependent function space ($\Pi$ types), whereas (constructive) existential quantification is encoded as a dependent sum (a $\Sigma$-type).
  We write $\All{x : \phi}\psi$ for the dependent function space, and abbreviate this as $\phi \rightarrow \psi$ when $x \not\in fv(\psi)$ as usual.
…
  \end{lstlisting}
  We call $P_i$ for $0 \leq i \leq n$ the \textbf{parameters} of \texttt{I} and $\phi_j$ for $0 \leq j \leq m$ the \textbf{indices} of \texttt{I}.
- Matita's positivity checker ensures that constructors have strictly-positive types before admitting an inductive family to maintain soundness of the system's logic.
+ Matita's positivity checker ensures that constructors have strictly-positive types before admitting a (co)inductive family of types into the global environment, to maintain soundness.
  \item
  Records are introduced with the \texttt{record} keyword.
…
  record R : Type[0] := { F1 : nat }.
  \end{lstlisting}
- may be thought of as syntactic sugar for a single-constructor inductive datatype of the same name:
+ may be thought of as syntactic sugar for a single-constructor inductive type of the same name:
  \begin{lstlisting}[language=Grafite]
  inductive R : Type[0] :=
  | mk_R : nat -> R.
  \end{lstlisting}
- A record field's type may depend on fields declared earlier in the record.
-
+ Types of later fields may depend on fields declared earlier in the record.
  Records may be decomposed with projections.
  Projections, one for each field of a record, are registered in the global context.
  In the example record above, \texttt{F1} of type $R \rightarrow nat$ is registered as a field projection and $mk\_R$ of type $nat \rightarrow R$ is registered as a constructor.
-
  Record fields may also be marked as coercions.
  In the following example
…
  Terms may be freely omitted, allowing the user to write down partial types and terms.
  A question mark, \texttt{?}, denotes a single term that has been omitted by the user.
- Some omitted terms can be deduced by Matita's refinement system.
+ Some omitted terms can be deduced by Matita's refinement system~\cite{DBLP:journals/corr/abs-1202-4905}.
  Other, more complex goals arising from omitted terms may require user input to solve, in which case a proof obligation is opened for each term that cannot be deduced automatically.
- Three consecutive dots, \texttt{$\ldots$}, denote multiple terms or types that have been omitted.
- \item
- Data may be decomposed by pattern matching with a \texttt{match} expression.
+ Three consecutive dots, \texttt{$\ldots$}, denote a vector of terms or types that have been omitted, and need to be inferred.
+ \item
+ (Co)data may be decomposed by pattern matching with a \texttt{match} expression.
  We may fully annotate a \texttt{match} expression with its return type.
  This is especially useful when working with indexed families of types or with invariants, expressed as types, on functions.
…
  \subsection{Map of the paper}

- The rest of the paper is structured as follows.
-
- In section~\ref{sect.compiler.architecture}, we describe the architecture of the
- CerCo compiler, as well as the intermediate languages that it uses. We also
- describe the target hardware and its formal model.
-
- In section~\ref{sect.compiler.proof}, we describe the proof of correctness of
- the compiler in more detail. We explain our use of structured traces, the
- labelling approach, and discuss the assembler.
-
- In section~\ref{sect.formal.development}, we present data on the formal
- development.
-
- In section~\ref{sect.framac.plugin}, we discuss the Frama-C plugin, as well as
- some of the case studies we have performed to validate it.
-
- Finally, in section~\ref{sect.conclusions} we present conclusions, as well as
- related and future work.
+ The rest of the paper is structured as follows:
+
+ In Section~\ref{sect.compiler.architecture}, we describe the architecture of the CerCo compiler.
+ We describe the compiler's intermediate languages, from CerCo C down to MCS51 machine code, and their formal semantics.
+ We also describe the optimisations that the CerCo verified C compiler implements.
+
+ In Section~\ref{sect.compiler.proof}, we describe the proof of correctness of the compiler in more detail.
+ We explain \emph{structured traces}, a technical device developed to complete the proof, and our labelling approach.
+ We further discuss the verification of our translation and optimisation passes for the intermediate languages, and discuss our verified assembler.
+
+ In Section~\ref{sect.formal.development}, we present statistical data on the formal development, including line counts, and so on, and discuss remaining axioms in our proof of the compiler's correctness.
+ We also briefly discuss `lessons learnt' in the formal development of the CerCo verified C compiler.
+
+ In Section~\ref{sect.framac.plugin}, we discuss the Frama-C plugin for certifying the concrete complexity of programs written in CerCo C.
+ We also present a case study, demonstrating the plugin in action.
+
+ Finally, in Section~\ref{sect.conclusions} we conclude, discussing related and future work.