Changeset 3328


Timestamp: Jun 6, 2013, 5:05:50 PM (4 years ago)
Author: tranquil
Message: cut some text, reordered bibliography

File: 1 edited

  • Papers/fopara2013/fopara13.tex

    r3327 r3328  
    99\usepackage{fancyvrb}
    1010\usepackage{graphicx}
    11 \usepackage[colorlinks]{hyperref}
     11\usepackage[colorlinks,
     12            bookmarks,bookmarksopen,bookmarksdepth=2]{hyperref}
    1213\usepackage{hyphenat}
    1314\usepackage[utf8x]{inputenc}
     
    152153components.
    153154
    154 \paragraph{Contributions.} We have developed the labelling approach \cite{?}, a
     155\paragraph{Contributions.} We have developed the labeling approach \cite{labeling}, a
    155156technique to implement compilers that induce cost models on source programs by
    156157very lightweight tracking of code changes through compilation. We have studied
     
    160161implemented a Frama-C plugin \cite{framac} that invokes the compiler on a source
    161162program and uses this to generate invariants on the high-level source that
    162 correctly model low-level costs. Finally, the plug-in certifies that the program
     163correctly model low-level costs. Finally, the plugin certifies that the program
    163164respects these costs by calling automated theorem provers, a new and innovative
    164165technique in the field of cost analysis. As a case study, we show how the
    165 plug-in can automatically compute and certify the exact reaction time of Lustre
     166plugin can automatically compute and certify the exact reaction time of Lustre
    166167\cite{lustre} dataflow programs compiled into C.
    167168
     
    428429compiler. The annotated program can then be enriched with complexity assertions
    429430in the style of Hoare logic, which are passed to a deductive platform (in our
    430 case Frama-C). We provide as a Frama-c cost plug-in a simple automatic
     431case Frama-C). We provide as a Frama-C Cost plugin a simple automatic
    431432synthesiser for complexity assertions (the blue lines in \autoref{itest1}),
    432433which can be overridden by the user to increase or decrease accuracy. From the
     
    507508\end{figure}
    508509\section{Main S\&T results of the CerCo project}
    509 % We will now review the main S\&T results achieved in the CerCo project.
    510 
    511 \emph{The (basic) labelling approach~\cite{?}.} It is the main technique that
     510We will now review the main S\&T results achieved in the CerCo project.
     511
     512% \emph{Dependent labeling~\cite{?}.} The basic labeling approach assumes a
     513% bijective mapping between object code and source code O(1) blocks (called basic
     514% blocks). This assumption is violated by many program optimizations (e.g. loop
     515% peeling and loop unrolling). It also assumes the cost model computed on the
     516% object code to be non parametric: every block must be assigned a cost that does
     517% not depend on the state. This assumption is violated by stateful hardware like
     518% pipelines, caches, branch prediction units. The dependent labeling approach is
     519% an extension of the basic labeling approach that allows to handle parametric
     520% cost models. We showed how the method allows to deal with loop optimizations and
     521% pipelines, and we speculated about its applications to caches.
     522%
     523% \emph{Techniques to exploit the induced cost model.} Every technique used for
     524% the analysis of functional properties of programs can be adapted to analyse the
     525% non-functional properties of the source code instrumented by compilers that
     526% implement the labeling approach. In order to gain confidence in this claim, we
     527% showed how to implement a cost invariant generator combining abstract
     528% interpretation with separation logic ideas \cite{separation}. We integrated
     529% everything in the Frama-C modular architecture, in order to compute proof
     530% obligations from functional and cost invariants and to use automatic theorem
     531% provers on them. This is an example of a new technique that is not currently
     532% exploited in WCET analysis. It also shows how precise functional invariants
     533% benefits the non-functional analysis too. Finally, we show how to fully
     534% automatically analyse the reaction time of Lustre nodes that are first compiled
     535% to C using a standard Lustre compiler and then processed by a C compiler that
     536% implements the labeling approach.
     537
     538% \emph{The CerCo compiler.} This is a compiler from a large subset of the C
     539% program to 8051/8052 object code,
     540% integrating the labeling approach and a static analyser for 8051 executables.
     541% The latter can be implemented easily and does not require dependent costs
     542% because the 8051 microprocessor is a very simple one, with constant-cost
     543% instruction. It was chosen to separate the issue of exact propagation of the
     544% cost model from the orthogonal problem of the static analysis of object code
     545% that may require approximations or dependent costs. The compiler comes in
     546% several versions: some are prototypes implemented directly in OCaml, and they
     547% implement both the basic and dependent labeling approaches; the final version
     548% is extracted from a Matita certification and at the moment implements only the
     549% basic approach.
     550
     551\subsection{The (basic) labeling approach}
     552The labeling approach is the main technique that
    512553underlies all the developments in CerCo. It allows us to track the evolution of
    513554basic blocks during compilation in order to propagate the cost model from the
    514555object code to the source code without losing precision in the process.
    515 
    516 \emph{Dependent labelling~\cite{?}.} The basic labelling approach assumes a
    517 bijective mapping between object code and source code O(1) blocks (called basic
    518 blocks). This assumption is violated by many program optimizations (e.g. loop
    519 peeling and loop unrolling). It also assumes the cost model computed on the
    520 object code to be non parametric: every block must be assigned a cost that does
    521 not depend on the state. This assumption is violated by stateful hardware like
    522 pipelines, caches, branch prediction units. The dependent labelling approach is
    523 an extension of the basic labelling approach that allows to handle parametric
    524 cost models. We showed how the method allows to deal with loop optimizations and
    525 pipelines, and we speculated about its applications to caches.
    526 
    527 \emph{Techniques to exploit the induced cost model.} Every technique used for
    528 the analysis of functional properties of programs can be adapted to analyse the
    529 non-functional properties of the source code instrumented by compilers that
    530 implement the labelling approach. In order to gain confidence in this claim, we
    531 showed how to implement a cost invariant generator combining abstract
    532 interpretation with separation logic ideas \cite{separation}. We integrated
    533 everything in the Frama-C modular architecture, in order to compute proof
    534 obligations from functional and cost invariants and to use automatic theorem
    535 provers on them. This is an example of a new technique that is not currently
    536 exploited in WCET analysis. It also shows how precise functional invariants
    537 benefits the non-functional analysis too. Finally, we show how to fully
    538 automatically analyse the reaction time of Lustre nodes that are first compiled
    539 to C using a standard Lustre compiler and then processed by a C compiler that
    540 implements the labelling approach.
    541 
    542 \emph{The CerCo compiler.} This is a compiler from a large subset of the C
    543 program to 8051/8052 object code,
    544 integrating the labelling approach and a static analyser for 8051 executables.
    545 The latter can be implemented easily and does not require dependent costs
    546 because the 8051 microprocessor is a very simple one, with constant-cost
     556\paragraph{Problem statement.} Given a source program $P$, we want to obtain an
     557instrumented source program $P'$, written in the same programming language, and
     558the object code $O$ such that: 1) $P'$ is obtained by inserting into $P$ some
     559additional instructions to update global cost information like the amount of
     560time spent during execution or the maximal stack space required; 2) $P$ and $P'$
     561must have the same functional behavior, i.e.\ they must produce the same output
     562and intermediate observables; 3) $P$ and $O$ must have the same functional
     563behavior; 4) after execution and in interesting points during execution, the
     564cost information computed by $P'$ must be an upper bound on the one spent by $O$
     565to perform the corresponding operations (soundness property); 5) the difference
     566between the costs computed by $P'$ and the execution costs of $O$ must be
     567bounded by a program-dependent constant (precision property).
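
For illustration, a minimal sketch of the relation between $P$ and $P'$ is given
below; the variable name \texttt{\_\_cost} and all constants are illustrative
and do not come from an actual compiler run.

\begin{verbatim}
/* Instrumented program P' for a two-line source program P computing the
   maximum of two integers.  __cost accumulates an upper bound, in clock
   cycles, on the time spent by the corresponding blocks of the object
   code O (names and constants are illustrative). */
int __cost = 0;

int max(int a, int b) {
  __cost += 23;                      /* function prologue and test */
  if (a > b) { __cost += 11; return a; }
  else       { __cost += 12; return b; }
}
\end{verbatim}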
     568
     569\paragraph{The labeling software components.} We solve the problem in four
     570stages \cite{labeling}, implemented by four software components that are used
     571in sequence.
     572
     573The first component labels the source program $P$ by injecting label emission
     574statements in appropriate positions to mark the beginning of basic blocks.
     575% The
     576% set of labels with their positions is called labeling.
     577The syntax and semantics
     578of the source programming language are augmented with label emission statements.
     579The statement ``EMIT $\ell$'' behaves like a NOP instruction that does not affect the
     580program state or control flow, but its execution is observable.
     581% Therefore the observables of a run of a program becomes a stream
     582% of label emissions: $\ell_1,\ldots,\ell_n$, called the program trace. We clarify the conditions
     583% that the labeling must respect later.
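
As an illustration, a labeled version of a small C function could look as
follows; \texttt{EMIT} is written as if it were a statement of the augmented
language and the label names are arbitrary.

\begin{verbatim}
/* Labeled source (sketch): each EMIT marks the beginning of a basic
   block; executing it leaves the state untouched but is observable. */
int fact(int n) {
  int r = 1;
  EMIT(L_entry);
  while (n > 1) {
    EMIT(L_body);                    /* breaks the loop */
    r = r * n;
    n = n - 1;
  }
  EMIT(L_exit);
  return r;
}
\end{verbatim}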
     584
     585The second component is a labeling preserving compiler. It can be obtained from
     586an existing compiler by adding label emission statements to every intermediate
     587language and by propagating them during compilation. The
     588compiler is correct if it preserves both the functional behavior of the program
     589and the traces of observables.
     590% We may also ask that the function that erases the cost
     591% emission statements commute with compilation. This optional property grants that
     592% the labeling does not interfere with the original compiler behavior. A further
     593% set of requirements will be added later.
     594
     595The third component is a static object code analyser. It takes as input a labeled
     596object code and it computes the scope of each of its label emission statements,
     597i.e.\ the tree of instructions that may be executed after the statement and before
     598a new label emission is encountered.
     599It is a tree and not a sequence because the scope
     600may contain a branching statement. To guarantee that such a finite tree
     601exists, the object code must not contain any loop that is not broken by a label
     602emission statement. This is the first requirement of a sound labeling. The
     603analyser fails if the labeling is unsound. For each scope, the analyser
     604computes an upper bound of the execution time required by the scope, using the
     605maximum of the costs of the two branches in case of a conditional statement.
     606Finally, the analyser computes the cost of a label by taking the maximum of the
     607costs of the scopes of every statement that emits that label.
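
The computation performed on each scope can be sketched as follows; the
instruction representation and the cycle counts are assumptions made for the
example and are not the data structures of the actual analyser.

\begin{verbatim}
/* Worst-case cost of a scope (sketch): costs are summed along
   straight-line code and the maximum of the two branches is taken at a
   conditional; NULL marks the next label emission statement. */
typedef struct instr {
  unsigned cycles;             /* constant cost of this instruction */
  struct instr *next;          /* fall-through successor            */
  struct instr *branch;        /* taken branch, or NULL             */
} instr;

unsigned scope_cost(const instr *i) {
  if (i == NULL) return 0;               /* new label emission reached */
  unsigned rest = scope_cost(i->next);
  if (i->branch != NULL) {
    unsigned alt = scope_cost(i->branch);
    if (alt > rest) rest = alt;          /* worst case of the branches */
  }
  return i->cycles + rest;
}
\end{verbatim}
The recursion terminates precisely because the labeling is sound: every loop is
broken by an emission statement, so every scope is a finite tree.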
     608
     609The fourth and last component takes as input the cost model computed at step 3
     610and the labelled code computed at step 1. It outputs a source program obtained
     611by replacing each label emission statement with a statement that increments the
     612global cost variable with the cost that the cost model associates with the label.
     613The obtained source code is the instrumented source code.
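
Continuing the sketch of the labeled function above, once the analysis has
assigned (illustrative) costs to the labels, the instrumented source looks as
follows.

\begin{verbatim}
/* Instrumented source (sketch): every label emission is replaced by an
   increment of the global cost variable with the cost that the static
   analysis associated with that label (constants are illustrative). */
int __cost = 0;

int fact(int n) {
  int r = 1;
  __cost += 12;                      /* was: EMIT(L_entry) */
  while (n > 1) {
    __cost += 34;                    /* was: EMIT(L_body)  */
    r = r * n;
    n = n - 1;
  }
  __cost += 7;                       /* was: EMIT(L_exit)  */
  return r;
}
\end{verbatim}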
     614
     615\paragraph{Correctness.} Requirements 1, 2 and 3 of the problem statement
     616obviously hold, with 2 and 3 being a consequence of the definition of a correct
     617labeling preserving compiler. It is also obvious that the value of the global
     618cost variable of an instrumented source code is at any time equal to the sum of
     619the costs of the labels emitted by the corresponding labelled code. Moreover,
     620because the compiler preserves all traces, the sum of the costs of the labels
     621emitted in the source and target labelled code are the same. Therefore, to
     622satisfy the fourth requirement, we need to guarantee that the time taken to execute
     623the object code is equal to the sum of the costs of the labels emitted by the
     624object code. We collect all the necessary conditions for this to happen in the
     625definition of a sound labeling: a) all loops must be broken by a cost emission
     626statement;  b) all program instructions must be in the scope of some cost
     627emission statement. To satisfy also the fifth requirement, additional
     628requirements must be imposed on the object code labeling to avoid all uses of
     629the maximum in the cost computation: the labeling is precise if every label is
     630emitted at most once and both branches of a conditional jump start with a label
     631emission statement.
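
The two conditions of a sound labeling can be visualised on a small sketch;
labels and their placement are illustrative.

\begin{verbatim}
/* Sound labeling (sketch): the emission of L1 breaks the loop, so every
   scope is a finite tree (condition a); every instruction follows some
   emission statement, so it belongs to some scope (condition b). */
int countdown(int n) {
  EMIT(L0);
  while (n > 0) {
    EMIT(L1);
    n = n - 1;
  }
  EMIT(L2);
  return n;
}
\end{verbatim}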
     632
     633The correctness and precision of the labeling approach only rely on the
     634correctness and precision of the object code labeling. The simplest
     635way to achieve them is to impose correctness and precision
     636requirements also on the initial labeling produced by the first software
     637component, and to ask the compiler to preserve these
     638properties too. The latter requirement imposes serious limitations on the
     639compilation strategy and optimizations: the compiler may not duplicate any code
     640that contains label emission statements, like loop bodies. Therefore several
     641loop optimizations like peeling or unrolling are prevented. Moreover, precision
     642of the object code labeling is not sufficient per se to obtain global
     643precision: we also implicitly assumed the static analysis to be able to
     644associate a precise constant cost to every instruction. This is not possible in the
     645presence of stateful hardware whose state influences the cost of operations,
     646like pipelines and caches. In the next subsection we will see an extension of the
     647basic labeling approach to cover this situation.
     648
     649The labeling approach described in this section can be applied equally well and
     650with minor modifications to imperative and functional languages
     651\cite{labeling2}. We have tested it on a simple imperative language without
     652functions (a While language), on a subset of C and on two compilation chains for
     653a purely functional higher order language. The two main changes required to deal
     654with functional languages are: 1) because global variables and updates are not
     655available, the instrumentation phase produces monadic code to “update” the
     656global costs; 2) the requirements for a sound and precise labeling of the
     657source code must be changed when the compilation is based on CPS translations.
     658In particular, we need to introduce both labels emitted before a statement is
     659executed and labels emitted after a statement is executed. The latter capture
     660code that is inserted by the CPS translation and that would escape all label
     661scopes.
     662
     663Phases 1, 2 and 3 can be applied as well to logic languages (e.g. Prolog).
     664However, the instrumentation phase cannot: in standard Prolog there is no notion
     665of (global) variable whose state is not retracted during backtracking.
     666Therefore, the cost of executing computations that are later backtracked would
     667not be counted correctly. Any extension of logic languages with
     668non-backtrackable state should support the labeling approach.
     669
     670\subsection{Dependent labeling}
     671The core idea of the basic labeling approach is to establish a tight connection
     672between basic blocks executed in the source and target languages. Once the
     673connection is established, any cost model computed on the object code can be
     674transferred to the source code, without affecting the code of the compiler or
     675its proof. In particular, it is immediate that we can also transport cost models
     676that associate with each label a function from the hardware state to natural numbers.
     677However, a problem arises during the instrumentation phase that replaces cost
     678emission statements with increments of global cost variables. The global cost
     679variable must be incremented with the result of applying the function associated
     680with the label to the hardware state at the time of execution of the block.
     681The hardware state comprises both the functional state that affects the
     682computation (the value of the registers and memory) and the non-functional
     683state that does not (the pipeline and cache contents for example). The former is
     684in correspondence with the source code state, but reconstructing the
     685correspondence may be hard and lifting the cost model to work on the source code
     686state is likely to produce cost expressions that are too hard to reason on.
     687Luckily enough, in all modern architectures the cost of executing single
     688instructions is either independent of the functional state or the jitter---the
     689difference between the worst and best case execution times---is small enough
     690to be bounded without losing too much precision. Therefore we can concentrate
     691on dependencies over the “non-functional” parts of the state only.
     692
     693The non-functional state has no correspondence in the high level state and does
     694not influence the functional properties. What can be done is to expose the
     695non-functional state in the source code. We just present here the basic
     696intuition in a simplified form: the technical details needed to handle the
     697general case are more complex and can be found in the CerCo deliverables. We add
     698to the source code an additional global variable that represents the
     699non-functional state and another one that remembers the last labels emitted. The
     700state variable must be updated at every label emission statement, using an
     701update function which is computed during static analysis too. The update
     702function associates to each label a function from the recently emitted labels
     703and old state to the new state. It is computed by composing the semantics of every
     704instruction in a basic block and restricting it to the non-functional part of
     705the state.
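
A simplified sketch of the instrumented code in the dependent case follows; the
representation of the state, the cost and update functions and the constants
are all illustrative.

\begin{verbatim}
/* Dependent labeling (sketch): the non-functional state is exposed as a
   global variable and updated at every emission point; the cost of a
   label is now a function of that state. */
int __cost  = 0;
int __state = 0;                  /* e.g. position of pipeline bubbles */

static int cost_L1  (int s) { return s == 0 ? 10 : 13; }
static int update_L1(int s) { (void)s; return 0; }  /* block drains pipe */

void block_L1(void) {
  /* was: EMIT(L1) */
  __cost  += cost_L1(__state);
  __state  = update_L1(__state);
  /* ... code of the basic block labeled L1 ... */
}
\end{verbatim}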
     706
     707Not all the details of the non-functional state need to be exposed, and the
     708technique works better when the part of state that is required can be summarized
     709in a simple data structure. For example, to handle simple but realistic
     710pipelines it is sufficient to remember a short integer that encodes the position
     711of bubbles (stuck instructions) in the pipeline. In any case, the user does not
     712need to understand the meaning of the state to reason over the properties of the
     713program. Moreover, at any moment the user or the invariant generator tools that
     714analyse the instrumented source code produced by the compiler can decide to
     715trade the precision of the analysis for simplicity by approximating the parametric
     716cost with safe non parametric bounds. Interestingly, the functional analysis of
     717the code can determine which blocks are executed more frequently in order to
     718approximate more aggressively the ones that are executed less.
     719
     720Dependent labeling can also be applied to allow the compiler to duplicate
     721blocks that contain labels (e.g. in loop optimizations) \cite{paolo}. The effect is to assign
     722a different cost to the different occurrences of a duplicated label. For
     723example, loop peeling turns a loop into the concatenation of a copy of the loop
     724body (that executes the first iteration) with the conditional execution of the
     725loop (for the successive iterations). Because of further optimizations, the two
     726copies of the loop will be compiled differently, with the first body usually
     727taking more time.
     728
     729By introducing a variable that keeps track of the iteration number, we can
     730associate to the label a cost that is a function of the iteration number. The
     731same technique works for loop unrolling without modifications: the function will
     732assign a cost to the even iterations and another cost to the odd ones. The
     733actual work to be done consists in introducing in the source code and for each
     734loop a variable that counts the number of iterations. The loop optimization code
     735that duplicates the loop bodies must also modify the code to correctly propagate
     736the update of the iteration numbers. The technical details are more complex and
     737can be found in the CerCo reports and publications. The implementation, however,
     738is quite simple and the changes to a loop optimizing compiler are minimal.
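
The idea can be sketched on a peeled loop as follows; the counter
\texttt{\_\_it}, the cost function and the constants are illustrative.

\begin{verbatim}
/* Cost of a duplicated label as a function of the iteration number
   (sketch): the compiler-introduced counter __it lets the first, peeled
   iteration be charged differently from the remaining ones. */
extern int __cost;

int sum(int *a, int n) {
  int s = 0;
  for (int i = 0, __it = 0; i < n; i++, __it++) {
    __cost += (__it == 0) ? 42 : 30;   /* peeled copy vs. loop body */
    s += a[i];
  }
  return s;
}
\end{verbatim}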
     739
     740\subsection{Techniques to exploit the induced cost model}
     741We review the cost synthesis techniques developed in the project.
     742The starting hypothesis is that we have a certified methodology to annotate
     743blocks in the source code with constants which provide a sound and possibly
     744precise upper bound on the cost of executing the blocks after compilation to
     745object code.
     746
     747The principle that we have followed in designing the cost synthesis tools is
     748that the synthetic bounds should be expressed and proved within a general
     749purpose tool built to reason on the source code. In particular, we rely on the
     750Frama-C tool to reason on C code and on the Coq proof-assistant to reason on
     751higher-order functional programs.
     752
     753This principle entails that: 1)
     754The inferred synthetic bounds are indeed correct as long as the general purpose
     755tool is. 2) There is no limitation on the class of programs that can be handled
     756as long as the user is willing to carry out an interactive proof.
     757
     758Of course, automation is desirable whenever possible. Within this framework,
     759automation means writing programs that give hints to the general purpose tool.
     760These hints may take the form, say, of loop invariants/variants, of predicates
     761describing the structure of the heap, or of types in a light logic. If these
     762hints are correct and sufficiently precise the general purpose tool will produce
     763a proof automatically; otherwise, user interaction is required.
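
A loop variant together with a cost invariant is a typical hint of this kind;
the snippet below is a hand-written sketch in ACSL (the constant 25 is
illustrative), not output of the tools.

\begin{verbatim}
/* Hints for the general purpose tool (sketch, ACSL syntax). */
extern int __cost;

/*@ requires n >= 0; */
void busy_wait(int n) {
  int c0 = __cost;        /* snapshot used only to state the invariant */
  int i = 0;
  /*@ loop invariant 0 <= i <= n;
      loop invariant __cost == c0 + 25 * i;
      loop variant n - i; */
  while (i < n) {
    __cost += 25;
    i = i + 1;
  }
}
\end{verbatim}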
     764
     765\paragraph{The Cost plugin and its application to the Lustre compiler.}
     766Frama-C \cite{framac} is a set of analysers for C programs with a
     767specification language called ACSL. New analyses can be dynamically added
     768through a plugin system. For instance, the Jessie plugin allows deductive
     769verification of C programs with respect to their specification in ACSL, with
     770various provers as back-end tools.
     771We developed the CerCo Cost plugin for the Frama-C platform as a proof of
     772concept of an automatic environment exploiting the cost annotations produced by
     773the CerCo compiler. It consists of an OCaml program which, to a first approximation,
     774takes the following actions: 1) it receives as input a C program, 2) it
     775applies the CerCo compiler to produce a related C program with cost annotations,
     7763) it applies some heuristics to produce a tentative bound on the cost of
     777executing the C functions of the program as a function of the value of their
     778parameters, 4) the user can then call the Jessie plugin to discharge the
     779related proof obligations.
     780In the following we elaborate on the soundness of the framework and the
     781experiments we performed with the Cost tool on the C programs produced by a
     782Lustre compiler.
     783
     784\paragraph{Soundness.} The soundness of the whole framework depends on the cost
     785annotations added by the CerCo compiler, the synthetic costs produced by the
     786Cost plugin, the verification conditions (VCs) generated by Jessie, and the
     787external provers discharging the VCs. The synthetic costs being in ACSL format,
     788Jessie can be used to verify them. Thus, even if the added synthetic costs are
     789incorrect (relative to the cost annotations), the process as a whole is still
     790correct: indeed, Jessie will not validate incorrect costs and no conclusion can
     791be made about the WCET of the program in this case. In other words, the
     792soundness does not really depend on the action of the Cost plugin, which can in
     793principle produce any synthetic cost. However, in order to be able to actually
     794prove a WCET of a C function, we need to add correct annotations in a way that
     795Jessie and subsequent automatic provers have enough information to deduce their
     796validity. In practice this is not straightforward even for very simple programs
     797composed of branching and assignments (no loops and no recursion) because a fine
     798analysis of the VCs associated with branching may lead to a complexity blow up.
     799\paragraph{Experience with Lustre.} Lustre is a data-flow language to program
     800synchronous systems and the language comes with a compiler to C. We designed a
     801wrapper for supporting Lustre files.
     802The C function produced by the compiler implements the step function of the
     803synchronous system and computing the WCET of the function amounts to obtaining a
     804bound on the reaction time of the system. We tested the Cost plugin and the
     805Lustre wrapper on the C programs generated by the Lustre compiler. For programs
     806consisting of a few hundred lines of code, the Cost plugin computes a WCET and
     807Alt-Ergo is able to discharge all VCs automatically.
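
To give an idea of the shape of the analysed code, a Lustre integrator node
computing \texttt{y = 0 -> pre y + x} is compiled to a C step function roughly
like the one below; the names and the memory layout are illustrative and depend
on the Lustre compiler.

\begin{verbatim}
/* Sketch of the C code generated for a Lustre node: one call to
   count_step performs one reaction, so a WCET for count_step bounds the
   reaction time of the node (init must be set to 1 before the first
   call). */
struct count_mem { int init; int pre_y; };

int count_step(int x, struct count_mem *m) {
  int y = m->init ? 0 : m->pre_y + x;
  m->init  = 0;
  m->pre_y = y;
  return y;
}
\end{verbatim}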
     808
     809\paragraph{Handling C programs with simple loops.}
     810The cost annotations added by the CerCo compiler take the form of C instructions
     811that increment a fresh global variable, called the cost variable, by a constant.
     812Synthesizing a WCET of a C function thus consists in statically resolving an
     813upper bound of the difference between the value of the cost variable before and
     814after the execution of the function, i.e.\ finding in the function the instructions
     815that update the cost variable and establishing the number of times they are passed
     816through during the flow of execution. To carry out the analysis, the plugin
     817makes the following assumptions about the programs:
     8181. No recursive functions.
     8192. Every loop must be annotated with a variant. The variants of ‘for’ loops are
     820automatically inferred.
     821
     822The plugin proceeds as follows.
     823First the call graph of the program is computed.
     824Then the computation of the cost of the function is performed by traversing its
     825control flow graph. If the function f calls the function g then the function g
     826is processed before the function f. The cost at a node is the maximum of the
     827costs of the successors.
     828In the case of a loop with a body that has a constant cost for every step of the
     829loop, the cost is the product of the cost of the body and of the variant taken
     830at the start of the loop.
     831In the case of a loop with a body whose cost depends on the values of some free
     832variables, a fresh logic function f is introduced to represent the cost of the
     833loop in the logic assertions. This logic function takes the variant as a first
     834parameter. The other parameters of f are the free variables of the body of the
     835loop. An axiom is added to account for the fact that the cost is accumulated at each
     836step of the loop.
     837The cost of the function is directly added as a post-condition of the function.
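
The overall shape of the annotations involved is sketched below: the
\texttt{\_\_cost} increments stand for those inserted by the compiler, while
the \texttt{ensures} clause is the kind of bound the plugin synthesizes and
Jessie verifies; all constants are illustrative.

\begin{verbatim}
/* Cost-annotated function with a synthesized bound (sketch, ACSL). */
extern int __cost;

/*@ requires n >= 0;
    requires \valid(dst + (0 .. n-1));
    ensures  __cost <= \old(__cost) + 101 + 230 * \old(n); */
void fill(char *dst, int n) {
  int i;
  __cost += 101;                     /* inserted by the compiler */
  /*@ loop variant n - i; */
  for (i = 0; i < n; i++) {
    __cost += 230;                   /* inserted by the compiler */
    dst[i] = (char)i;
  }
}
\end{verbatim}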
     838
     839The user can influence the annotation by different means:
     840by using more precise variants or
     841by annotating a function with a cost specification. The plugin will then use this cost
     842for the function instead of computing it.
     843\paragraph{C programs with pointers.}
     844When it comes to verifying programs involving pointer-based data structures,
     845such as linked lists, trees, or graphs, the use of traditional first-order logic
     846to specify, and of SMT solvers to verify, shows some limitations. Separation
     847logic \cite{separation} is then an elegant alternative. It is a program logic
     848with a new notion of conjunction to express spatial heap separation. Bobot has
     849recently introduced in the Jessie plugin automatically generated separation
     850predicates to simulate separation logic reasoning in a traditional verification
     851framework where the specification language, the verification condition
     852generator, and the theorem provers were not designed with separation logic in
     853mind. CerCo's plugin can exploit the separation predicates to automatically
     854reason on the cost of execution of simple heap manipulation programs such as an
     855in-place list reversal.
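
As an illustration, the following in-place list reversal is the kind of program
whose cost can be bounded automatically once the separation predicates let the
provers reason on the list; the per-iteration constant is illustrative.

\begin{verbatim}
/* In-place list reversal with a compiler-style cost annotation
   (sketch): the cost is linear in the length of the list. */
extern int __cost;

struct cell { int hd; struct cell *tl; };

struct cell *reverse(struct cell *l) {
  struct cell *r = 0;
  while (l != 0) {
    __cost += 47;                    /* one iteration of the loop */
    struct cell *next = l->tl;
    l->tl = r;
    r = l;
    l = next;
  }
  return r;
}
\end{verbatim}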
     856
     857\subsection{The CerCo Compiler}
     858In CerCo we have developed a number of cost-preserving compilers based
     859on the labeling approach. Excluding an initial certified compiler for a While
     860language, all remaining compilers target realistic source languages---a pure
     861higher order functional language and a large subset of C with pointers, gotos
     862and all data structures---and real world target processors---MIPS and the
     863Intel 8051/8052 processor family. Moreover, they achieve a level of optimization
     864that ranges from moderate (comparable to gcc level 1) to intermediate (including
     865loop peeling and unrolling, hoisting and late constant propagation). The
     866so-called Trusted CerCo Compiler is the only one that was implemented in the
     867interactive theorem prover Matita~\cite{matita} and its costs certified. The code distributed
     868is obtained by extracting OCaml code from the Matita implementation. In the rest of
     869this section we will only focus on the Trusted CerCo Compiler, which only targets
     870the C language and the 8051/8052 family, and that does not implement the
     871advanced optimizations yet. Its user interface, however, is the same as the one
     872of the other versions, so that the user can trade safety for additional performance. In
     873particular, the Frama-C CerCo plugin can work without recompilation with all
     874compilers.
     875
     876The 8051/8052 microprocessor is a very simple one, with constant-cost
    547877instructions. It was chosen to separate the issue of exact propagation of the
    548878cost model from the orthogonal problem of the static analysis of object code
    549 that may require approximations or dependent costs. The compiler comes in
    550 several versions: some are prototypes implemented directly in OCaml, and they
    551 implement both the basic and dependent labelling approaches; the final version
    552 is extracted from a Matita certification and at the moment implements only the
    553 basic approach.
    554 
    555 \emph{A formal cost certification of the CerCo compiler.} We implemented the
     879that may require approximations or dependent costs.
     880
     881The (trusted) CerCo compiler implements the following optimizations: cast
     882simplification, constant propagation in expressions, liveness analysis driven
     883spilling of registers, dead code elimination, branch displacement, tunneling.
     884The last two optimizations are performed by our optimizing assembler
     885\cite{correctness}. The back-end of the compiler works on three-address
     886instructions, preferred to static single assignment code for the simplicity of
     887the formal certification.
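
For readers unfamiliar with the representation, a three-address instruction can
be sketched as follows; the actual datatypes are defined in Matita and this C
rendering is only illustrative.

\begin{verbatim}
/* Three-address instruction (sketch), e.g. r3 := r1 + r2. */
typedef enum { OP_ADD, OP_SUB, OP_MUL, OP_LOAD, OP_STORE } opcode;

typedef struct {
  opcode op;          /* operation                   */
  int    dst;         /* destination pseudo-register */
  int    src1, src2;  /* source pseudo-registers     */
} tac_instr;
\end{verbatim}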
     888
     889The CerCo compiler is loosely based on the CompCert compiler \cite{compcert}, a
     890recently developed certified compiler from C to the PowerPC, ARM and x86
     891microprocessors. Unlike CompCert, both the CerCo code and its
     892certification are open source. Some data structures and language definitions for
     893the front-end are directly taken from CompCert, while the back-end is a redesign
     894of a compiler from Pascal to MIPS used by Francois Pottier for a course at the
     895Ecole Polytechnique.
     896
     897The main peculiarities of the CerCo compiler are:
     898\begin{enumerate}
     899\item all of our intermediate languages include label emitting instructions to
     900implement the labeling approach, and the compiler preserves execution traces.
     901\item traditionally compilers target an assembly language with additional
     902macro-instructions to be expanded before assembly; for CerCo we need to go all
     903the way down to object code in order to perform the static analysis. Therefore
     904we also integrated an optimizing assembler and a static analyser.
     905\item to avoid implementing a linker and a loader, we do not support separate
     906compilation and external calls. Adding a linker and a loader is transparent
     907to the labeling approach and should pose no unexpected problems.
     908\item we target an 8-bit processor. Targeting an 8-bit processor requires
     909several changes and increases code size, but it is not fundamentally more
     910complex. The proof of correctness, however, becomes much harder.
     911\item we target a microprocessor that has a non-uniform memory model, which is
     912still often the case for microprocessors used in embedded systems and that is
     913becoming common again in multi-core processors. Therefore the compiler has to
     914keep track of data and move it between memory regions in the proper
     915way. Also the size of pointers to different regions is not uniform. An
     916additional difficulty was that the space available for the stack in internal
     917memory in the 8051 is tiny, allowing only a small number of nested calls. To
     918support full recursion in order to test the CerCo tools also on recursive
     919programs, the compiler manually manages a stack in external memory.
     920\end{enumerate}
     921
     922\subsection{A formal certification of the CerCo compiler}
     923We implemented the
    556924CerCo compiler in the interactive theorem prover Matita and have formally
    557925certified that the cost model induced on the source code correctly and precisely
     
    571939a uniform proof of forward similarity.
    572940
    573 \subsection{The (basic) labelling approach.}
    574 \paragraph{Problem statement.} given a source program $P$, we want to obtain an
    575 instrumented source program $P'$,  written in the same programming language, and
    576 the object code $O$ such that: 1) $P'$ is obtained by inserting into $P$ some
    577 additional instructions to update global cost information like the amount of
    578 time spent during execution or the maximal stack space required; 2) $P$ and $P'$
    579 must have the same functional behavior, i.e., they must produce that same output
    580 and intermediate observables; 3) $P$ and $O$ must have the same functional
    581 behavior; 4) after execution and in interesting points during execution, the
    582 cost information computed by $P'$ must be an upper bound of the one spent by $O$
    583  to perform the corresponding operations (soundness property); 5) the difference
    584 between the costs computed by $P'$ and the execution costs of $O$ must be
    585 bounded by a program dependent constant (precision property).
    586 
    587 \paragraph{The labeling software components.} we solve the problem in four
    588 stages \cite{labelling}, implemented by four software components that are used
    589 in sequence.
    590 
    591 The first component labels the source program $P$ by injecting label emission
    592 statements in appropriate positions to mark the beginning of basic blocks. The
    593 set of labels with their positions is called labelling. The syntax and semantics
    594 of the source programming language is augmented with label emission statements.
    595 The statement “EMIT l” behaves like a NOP instruction that does not affect the
    596 program state or control flow, but it changes the semantics by making the label
    597 l observable. Therefore the observables of a run of a program becomes a stream
    598 of label emissions: l1 … ln, called the program trace. We clarify the conditions
    599 that the labelling must respect later.
    600 
    601 The second component is a labelling preserving compiler. It can be obtained from
    602 an existing compiler by adding label emission statements to every intermediate
    603 language and by propagating label emission statements during compilation. The
    604 compiler is correct if it preserves both the functional behavior of the program
    605 and the generated traces. We may also ask that the function that erases the cost
    606 emission statements commute with compilation. This optional property grants that
    607 the labelling does not interfere with the original compiler behavior. A further
    608 set of requirements will be added later.
    609 
    610 The third component is a static object code analyser. It takes in input the
    611 object code augmented with label emission statements and it computes, for every
    612 such statement, its scope. The scope of a label emission statement is the tree
    613 of instructions that may be executed after the statement and before a new label
    614 emission statement is found. It is a tree and not a sequence because the scope
    615 may contain a branching statement. In order to grant that such a finite tree
    616 exists, the object code must not contain any loop that is not broken by a label
    617 emission statement. This is the first requirement of a sound labelling. The
    618 analyser fails if the labelling is unsound. For each scope, the analyser
    619 computes an upper bound of the execution time required by the scope, using the
    620 maximum of the costs of the two branches in case of a conditional statement.
    621 Finally, the analyser computes the cost of a label by taking the maximum of the
    622 costs of the scopes of every statement that emits that label.
    623 
    624 The fourth and last component takes in input the cost model computed at step 3
    625 and the labelled code computed at step 1. It outputs a source program obtained
    626 by replacing each label emission statement with a statement that increments the
    627 global cost variable with the cost associated to the label by the cost model. 
    628 The obtained source code is the instrumented source code.
    629 
    630 \paragraph{Correctness.} Requirements 1, 2 and 3 of the program statement
    631 obviously hold, with 2 and 3 being a consequence of the definition of a correct
    632 labelling preserving compiler. It is also obvious that the value of the global
    633 cost variable of an instrumented source code is at any time equal to the sum of
    634 the costs of the labels emitted by the corresponding labelled code. Moreover,
    635 because the compiler preserves all traces, the sum of the costs of the labels
    636 emitted in the source and target labelled code are the same. Therefore, to
    637 satisfy the fourth requirement, we need to grant that the time taken to execute
    638 the object code is equal to the sum of the costs of the labels emitted by the
    639 object code. We collect all the necessary conditions for this to happen in the
    640 definition of a sound labelling: a) all loops must be broken by a cost emission
    641 statement;  b) all program instructions must be in the scope of some cost
    642 emission statement. To satisfy also the fifth requirement, additional
    643 requirements must be imposed on the object code labelling to avoid all uses of
    644 the maximum in the cost computation: the labelling is precise if every label is
    645 emitted at most once and both branches of a conditional jump start with a label
    646 emission statement.
    647 
    648 The correctness and precision of the labelling approach only rely on the
    649 correctness and precision of the object code labelling. The simplest, but not
    650 necessary, way to achieve them is to impose correctness and precision
    651 requirements also on the initial labelling produced by the first software
    652 component, and to ask the labelling preserving compiler to preserve these
    653 properties too. The latter requirement imposes serious limitations on the
    654 compilation strategy and optimizations: the compiler may not duplicate any code
    655 that contains label emission statements, like loop bodies. Therefore several
    656 loop optimizations like peeling or unrolling are prevented. Moreover, precision
    657 of the object code labelling is not sufficient per se to obtain global
    658 precision: we also implicitly assumed the static analysis to be able to
    659 associate a precise constant cost to every instruction. This is not possible in
    660 presence of stateful hardware whose state influences the cost of operations,
    661 like pipelines and caches. In the next section we will see an extension of the
    662 basic labelling approach to cover this situation.
    663 
    664 The labelling approach described in this section can be applied equally well and
    665 with minor modifications to imperative and functional languages
    666 \cite{labelling2}. We have tested it on a simple imperative language without
    667 functions (a While language), to a subset of C and to two compilation chains for
    668 a purely functional higher order language. The two main changes required to deal
    669 with functional languages are: 1) because global variables and updates are not
    670 available, the instrumentation phase produces monadic code to “update” the
    671 global costs; 2) the requirements for a sound and precise labelling of the
    672 source code must be changed when the compilation is based on CPS translations.
    673 In particular, we need to introduce both labels emitted before a statement is
    674 executed and labels emitted after a statement is executed. The latter capture
    675 code that is inserted by the CPS translation and that would escape all label
    676 scopes.
    677 
    678 Phases 1, 2 and 3 can be applied as well to logic languages (e.g. Prolog).
    679 However, the instrumentation phase cannot: in standard Prolog there is no notion
    680 of (global) variable whose state is not retracted during backtracking.
    681 Therefore, the cost of executing computations that are later backtracked would
    682 not be correctly counted in. Any extension of logic languages with
    683 non-backtrackable state should support the labelling approach.
    684 
    685 \subsection{Dependent labelling.}
    686 The core idea of the basic labelling approach is to establish a tight connection
    687 between basic blocks executed in the source and target languages. Once the
    688 connection is established, any cost model computed on the object code can be
    689 transferred to the source code, without affecting the code of the compiler or
    690 its proof. In particular, it is immediate that we can also transport cost models
    691 that associate to each label functions from hardware state to natural numbers.
    692 However, a problem arises during the instrumentation phase that replaces cost
    693 emission statements with increments of global cost variables. The global cost
    694 variable must be incremented with the result of applying the function associated
    695 to the label to the hardware state at the time of execution of the block.
    696 The hardware state comprises both the “functional” state that affects the
    697 computation (the value of the registers and memory) and the “non-functional”
    698 state that does not (the pipeline and caches content for example). The former is
    699 in correspondence with the source code state, but reconstructing the
    700 correspondence may be hard and lifting the cost model to work on the source code
    701 state is likely to produce cost expressions that are too hard to reason on.
    702 Luckily enough, in all modern architectures the cost of executing single
    703 instructions is either independent of the functional state or the jitter --- the
    704 difference between the worst and best case execution times --- is small enough
    705 to be bounded without loosing too much precision. Therefore we can concentrate
    706 on dependencies over the “non-functional” parts of the state only.
    707 
    708 The non-functional state has no correspondence in the high level state and does
    709 not influence the functional properties. What can be done is to expose the
    710 non-functional state in the source code. We just present here the basic
    711 intuition in a simplified form: the technical details that allow to handle the
    712 general case are more complex and can be found in the CerCo deliverables. We add
    713 to the source code an additional global variable that represents the
    714 non-functional state and another one that remembers the last labels emitted. The
    715 state variable must be updated at every label emission statement, using an
    716 update function which is computed during static analysis too. The update
    717 function associates to each label a function from the recently emitted labels
    718 and old state to the new state. It is computed composing the semantics of every
    719 instruction in a basic block and restricting it to the non-functional part of
    720 the state.
    721 
    722 Not all the details of the non-functional state needs to be exposed, and the
    723 technique works better when the part of state that is required can be summarized
    724 in a simple data structure. For example, to handle simple but realistic
    725 pipelines it is sufficient to remember a short integer that encodes the position
    726 of bubbles (stuck instructions) in the pipeline. In any case, the user does not
    727 need to understand the meaning of the state to reason over the properties of the
    728 program. Moreover, at any moment the user or the invariant generator tools that
    729 analyse the instrumented source code produced by the compiler can decide to
    730 trade precision of the analysis with simplicity by approximating the parametric
    731 cost with safe non parametric bounds. Interestingly, the functional analysis of
    732 the code can determine which blocks are executed more frequently in order to
    733 approximate more aggressively the ones that are executed less.
    734 
    735 Dependent labelling can also be applied to allow the compiler to duplicate
    736 blocks that contain labels (e.g. in loop optimizations). The effect is to assign
    737 a different cost to the different occurrences of a duplicated label. For
    738 example, loop peeling turns a loop into the concatenation of a copy of the loop
    739 body (that executes the first iteration) with the conditional execution of the
    740 loop (for the successive iterations). Because of further optimizations, the two
    741 copies of the loop will be compiled differently, with the first body usually
    742 taking more time.
    743 
    744 By introducing a variable that keep tracks of the iteration number, we can
    745 associate to the label a cost that is a function of the iteration number. The
    746 same technique works for loop unrolling without modifications: the function will
    747 assign a cost to the even iterations and another cost to the odd ones. The
    748 actual work to be done consists in introducing in the source code and for each
    749 loop a variable that counts the number of iterations. The loop optimization code
    750 that duplicate the loop bodies must also modify the code to propagate correctly
    751 the update of the iteration numbers. The technical details are more complex and
    752 can be found in the CerCo reports and publications. The implementation, however,
    753 is quite simple and the changes to a loop optimizing compiler are minimal
    754 \cite{paolo}.
    755 
    756 \subsection{Techniques to exploit the induced cost model.}
    757 We review the cost synthesis techniques developed in the project.
    758 The starting hypothesis is that we have a certified methodology to annotate
    759 blocks in the source code with constants which provide a sound and possibly
    760 precise upper bound on the cost of executing the blocks after compilation to
    761 object code.
    762 
    763 The principle that we have followed in designing the cost synthesis tools is
    764 that the synthetic bounds should be expressed and proved within a general
    765 purpose tool built to reason on the source code. In particular, we rely on the
    766 Frama − C tool to reason on C code and on the Coq proof-assistant to reason on
    767 higher-order functional programs.
    768 
    769 This principle entails that: 1)
    770 The inferred synthetic bounds are indeed correct as long as the general purpose
    771 tool is. 2) There is no limitation on the class of programs that can be handled
    772 as long as the user is willing to carry on an interactive proof.
    773 
    774 Of course, automation is desirable whenever possible. Within this framework,
    775 automation means writing programs that give hints to the general purpose tool.
    776 These hints may take the form, say, of loop invariants/variants, of predicates
    777 describing the structure of the heap, or of types in a light logic. If these
    778 hints are correct and sufficiently precise the general purpose tool will produce
    779 a proof automatically, otherwise, user interaction is required.
    780 
    781 \paragraph{The Cost plug-in and its application to the Lustre compiler}
    782 Frama − C \cite{framac} is a set of analysers for C programs with a
    783 specification language called ACSL. New analyses can be dynamically added
    784 through a plug-in system. For instance, the Jessie plug-in allows deductive
    785 verification of C programs with respect to their specification in ACSL, with
    786 various provers as back-end tools.
    787 We developed the CerCo Cost plug-in for the Frama − C platform as a proof of
    788 concept of an automatic environment exploiting the cost annotations produced by
    789 the CerCo compiler. It consists of an OCaml program which in first approximation
    790 takes the following actions: (1) it receives as input a C program, (2) it
    791 applies the CerCo compiler to produce a related C program with cost annotations,
    792 (3) it applies some heuristics to produce a tentative bound on the cost of
    793 executing the C functions of the program as a function of the value of their
    794 parameters, (4) the user can then call the Jessie plug-in to discharge the
    795 related proof obligations.
    796 In the following we elaborate on the soundness of the framework and the
    797 experiments we performed with the Cost tool on the C programs produced by a
    798 Lustre compiler.
    799 
    800 \paragraph{Soundness} The soundness of the whole framework depends on the cost
    801 annotations added by the CerCo compiler, the synthetic costs produced by the
    802 Cost plug-in, the verification conditions (VCs) generated by Jessie, and the
    803 external provers discharging the VCs. The synthetic costs being in ACSL format,
    804 Jessie can be used to verify them. Thus, even if the added synthetic costs are
    805 incorrect (relatively to the cost annotations), the process as a whole is still
    806 correct: indeed, Jessie will not validate incorrect costs and no conclusion can
    807 be made about the WCET of the program in this case. In other terms, the
    808 soundness does not really depend on the action of the Cost plug-in, which can in
    809 principle produce any synthetic cost. However, in order to be able to actually
    810 prove a WCET of a C function, we need to add correct annotations in a way that
    811 Jessie and subsequent automatic provers have enough information to deduce their
    812 validity. In practice this is not straightforward even for very simple programs
    813 composed of branching and assignments (no loops and no recursion) because a fine
    814 analysis of the VCs associated with branching may lead to a complexity blow up.
    815 \paragraph{Experience with Lustre} Lustre is a data-flow language to program
    816 synchronous systems and the language comes with a compiler to C. We designed a
    817 wrapper for supporting Lustre files.
    818 The C function produced by the compiler implements the step function of the
    819 synchronous system and computing the WCET of the function amounts to obtain a
    820 bound on the reaction time of the system. We tested the Cost plug-in and the
    821 Lustre wrapper on the C programs generated by the Lustre compiler. For programs
    822 consisting of a few hundreds loc, the Cost plug-in computes a WCET and Alt −
    823 Ergo is able to discharge all VCs automatically.
    824 
    825 \paragraph{Handling C programs with simple loops}
    826 The cost annotations added by the CerCo compiler take the form of C instructions
    827 that update by a constant a fresh global variable called the cost variable.
    828 Synthesizing a WCET of a C function thus consists in statically resolving an
    829 upper bound of the difference between the value of the cost variable before and
    830 after the execution of the function, i.e. find in the function the instructions
    831 that update the cost variable and establish the number of times they are passed
    832 through during the flow of execution. In order to do the analysis the plugin
    833 makes the following assumptions on the programs:
    834 1. No recursive functions.
    835 2. Every loop must be annotated with a variant. The variants of ‘for’ loops are
    836 automatically inferred.
    837 
    838 The plugin proceeds as follows.
    839 First the call graph of the program is computed.
    840 Then the computation of the cost of the function is performed by traversing its
    841 control flow graph. If the function f calls the function g then the function g
    842 is processed before the function f. The cost at a node is the maximum of the
    843 costs of the successors.
    844 In the case of a loop with a body that has a constant cost for every step of the
    845 loop, the cost is the product of the cost of the body and of the variant taken
    846 at the start of the loop.
    847 In the case of a loop with a body whose cost depends on the values of some free
    848 variables, a fresh logic function f is introduced to represent the cost of the
    849 loop in the logic assertions. This logic function takes the variant as a first
    850 parameter. The other parameters of f are the free variables of the body of the
    851 loop. An axiom is added to account the fact that the cost is accumulated at each
    852 step of the loop.
    853 The cost of the function is directly added as post-condition of the function
    854 
The user can influence the annotations in two ways: by providing more precise
loop variants, or by annotating a function with a cost specification (as
sketched below), in which case the plugin uses this cost for the function
instead of computing it.
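A sketch of the second possibility (the concrete syntax accepted by the plugin
may differ; the function name, the \verb|__cost| variable and the constant are
invented) is a contract that states the cost of a function directly:
\begin{verbatim}
/*@ ensures __cost <= \old(__cost) + 5000; */
int external_filter(int x);
/* the plugin takes the user-supplied 5000 bound for granted  */
/* instead of computing a cost for external_filter            */
\end{verbatim}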
\paragraph{C programs with pointers.}
When it comes to verifying programs involving pointer-based data structures,
such as linked lists, trees, or graphs, the use of traditional first-order
logic to specify, and of SMT solvers to verify, shows some limitations.
Separation logic \cite{separation} is then an elegant alternative. It is a
program logic with a new notion of conjunction to express spatial heap
separation. Bobot has recently introduced in the Jessie plugin automatically
generated separation predicates that make it possible to simulate
separation-logic reasoning in a traditional verification framework, where the
specification language, the verification condition generator, and the theorem
provers were not designed with separation logic in mind. CerCo's plugin can
exploit these separation predicates to automatically reason on the cost of
execution of simple heap-manipulating programs such as the in-place list
reversal sketched below.
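The following plain C sketch of in-place list reversal gives the flavour of
such programs: with the separation predicates the provers can bound the number
of loop iterations by the length of the list and hence bound the final value of
the cost variable (the cost instrumentation and the annotations are omitted
here).
\begin{verbatim}
#include <stddef.h>

struct node { int val; struct node *next; };

struct node *reverse(struct node *l) {
  struct node *prev = NULL;       /* already reversed prefix          */
  while (l != NULL) {             /* one constant-cost iteration per  */
    struct node *next = l->next;  /* node of the (separated) list     */
    l->next = prev;
    prev = l;
    l = next;
  }
  return prev;
}
\end{verbatim}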
    872 
\subsection{The CerCo Compiler}
In CerCo we have developed a number of cost-preserving compilers based on the
labeling approach. Excluding an initial certified compiler for a While
language, all remaining compilers target realistic source languages --- a pure
higher-order functional language and a large subset of C with pointers, gotos
and all data structures --- and real-world target processors --- MIPS and the
Intel 8051/8052 processor family. Moreover, they achieve a level of
optimization that ranges from moderate (comparable to gcc level 1) to
intermediate (including loop peeling and unrolling, hoisting and late constant
propagation). The so-called Trusted CerCo Compiler is the only one that has
been implemented in the interactive theorem prover Matita and whose cost
annotations have been certified. The distributed code is obtained by extracting
OCaml code from the Matita implementation. In the rest of this section we focus
only on the Trusted CerCo Compiler, which targets only the C language and the
8051/8052 family and does not yet implement the advanced optimizations. Its
user interface, however, is the same as that of the other versions, so that the
user can trade safety for additional performance. In particular, the Frama-C
CerCo plugin works with all the compilers without recompilation.
    891 
The (trusted) CerCo compiler implements the following optimizations: cast
simplification, constant propagation in expressions, liveness-analysis-driven
spilling of registers, dead code elimination, branch displacement, and
tunneling. The last two optimizations are performed by our optimizing assembler
\cite{correctness}. The back-end of the compiler works on three-address
instructions, which we preferred to static single assignment code for the
simplicity of the formal certification (a schematic example is given below).
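Schematically (rendered here in C syntax rather than in the compiler's actual
intermediate syntax, and with an invented temporary name), an expression such
as \verb|a + b * c| is decomposed into instructions that have a single operator
and explicitly named operands and result:
\begin{verbatim}
int f(int a, int b, int c) {
  int t1, x;
  t1 = b * c;    /* one operator per instruction           */
  x  = a + t1;   /* operands and result named explicitly   */
  return x;
}
\end{verbatim}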
    899 
The CerCo compiler is loosely based on the CompCert compiler \cite{compcert}, a
recently developed certified compiler from C to the PowerPC, ARM and x86
microprocessors. Unlike CompCert, both the CerCo code and its certification are
open source. Some data structures and language definitions for the front-end
are taken directly from CompCert, while the back-end is a redesign of a
compiler from Pascal to MIPS used by François Pottier for a course at the
École Polytechnique.
    907 
    908 The main peculiarities of the CerCo compiler are:
    909 \begin{enumerate}
\item all of our intermediate languages include label-emitting instructions to
implement the labeling approach, and the compiler preserves execution traces.
\item compilers traditionally target an assembly language with additional
macro-instructions that are expanded before assembly; for CerCo we need to go
all the way down to object code in order to perform the static analysis.
Therefore we also integrated an optimizing assembler and a static analyser.
\item to avoid implementing a linker and a loader, we do not support separate
compilation and external calls. Adding a linker and a loader is transparent to
the labeling approach and should create no unexpected problems.
\item we target an 8-bit processor, which requires several changes and
increases the code size, but is not fundamentally more complex. The proof of
correctness, however, becomes much harder.
\item we target a microprocessor with a non-uniform memory model, which is
still often the case for microprocessors used in embedded systems and is
becoming common again in multi-core processors. Therefore the compiler has to
keep track of where data reside and move data between memory regions in the
proper way. Moreover, the size of pointers to different regions is not uniform.
An additional difficulty is that the space available for the stack in the
internal memory of the 8051 is tiny, allowing only a small number of nested
calls. To support full recursion, so that the CerCo tools can also be tested on
recursive programs, the compiler manages a separate stack in external memory.
    931 \end{enumerate}
    932 
    933 \section{A formal certification of the CerCo compiler}
    934 The Trusted CerCo Compiler has been implemented and certified using the
    935 interactive theorem prover Matita. The details on the proof techniques employed
The details on the proof techniques employed and
the proof sketch can be found in the CerCo deliverables and papers.
     
    939945out to be more complex than what we expected at the beginning.
    940946
    941 \paragraph{The statement}
     947\paragraph{The statement.}
Real-time programs are often reactive programs that loop forever, responding to
events (inputs) by performing some computation followed by some action (output)
     
space is a clear requirement for the meaningfulness of a run, because source programs
    970976do not crash for lack of space while object code ones do. The balancing of
    971 function calls/returns is a requirement for precision: the labelling approach
     977function calls/returns is a requirement for precision: the labeling approach
    972978allows the scope of label emission statements to extend after function calls to
    973979minimize the number of labels. Therefore a label pays for all the instructions
     
characterized by history-dependent stateful components, like caches and
    10071013pipelines. The main issue consists in assigning a parametric, dependent cost
    1008 to basic blocks that can be later transferred by the labelling approach to
     1014to basic blocks that can be later transferred by the labeling approach to
    10091015the source code and represented in a meaningful way to the user. The dependent
    1010 labelling approach that we have studied seems a promising tools to achieve
labeling approach that we have studied seems a promising tool to achieve
    10111017this goal, but the cost model generated for a realistic processor could be too
    10121018large and complex to be exposed in the source code. Further study is required
     
Examples of further work consist in improving the cost invariant
    10171023generator algorithms and the coverage of compiler optimizations, in combining
    1018 the labelling approach with type \& effect discipline \cite{typeffects}
     1024the labeling approach with type \& effect discipline \cite{typeffects}
    10191025to handle languages with implicit memory management, and in experimenting with
our tools in early development phases. A larger use case is also necessary
to evaluate the CerCo prototype on industrial-size programs.
    10221028
    1023 \begin{thebibliography}{}
    1024 
    1025 \bibitem{cerco} \textbf{Certified Complexity}, R. Amadio, A. Asperti, N. Ayache,
     1029\bibliographystyle{llncs}
     1030\begin{thebibliography}{19}
     1031
\bibitem{survey} \textbf{A Survey of Static Program Analysis Techniques}.
Wolfgang W\"ogerer. Technical report, Technische Universit\"at Wien, 2005.
     1034
     1035\bibitem{cerco} \textbf{Certified Complexity}. R. Amadio, A. Asperti, N. Ayache,
    10261036B. Campbell, D. Mulligan, R. Pollack, Y. Regis-Gianas, C. Sacerdoti Coen, I.
Stark, in Procedia Computer Science, Volume 7, 2011, Proceedings of the 2nd
    10281038European Future Technologies Conference and Exhibition 2011 (FET 11), 175-177.
    10291039
    1030 \bibitem{labelling} \textbf{Certifying and Reasoning on Cost Annotations in C
     1040\bibitem{labeling} \textbf{Certifying and Reasoning on Cost Annotations in C
Programs}, N. Ayache, R.M. Amadio, Y. Régis-Gianas, in Proc. FMICS, Springer
    10321042LNCS
    103310437437: 32-46, 2012, DOI:10.1007/978-3-642-32469-7\_3.
    10341044
    1035 \bibitem{paolo} \textbf{Indexed Labels for Loop Iteration Dependent Costs}, P.
\bibitem{labeling2} \textbf{Certifying and reasoning on cost annotations of
functional programs}. R.M. Amadio, Y. Régis-Gianas. In Proceedings of the
Second International Conference on Foundational and Practical Aspects of
Resource Analysis (FOPARA 2011), Springer LNCS, 2011.
     1050
     1051\bibitem{compcert} \textbf{Formal verification of a realistic compiler}. Leroy,
     1052X. In Commun. ACM 52(7), 107–115 (2009)
     1053
     1054\bibitem{framac} \textbf{Frama-C user manual}. Correnson, L., Cuoq, P., Kirchner,
     1055F., Prevosto, V., Puccetti, A., Signoles, J.,
Yakobowski, B. CEA-LIST, Software Safety Laboratory, Saclay, F-91191,
     1057\url{http://frama-c.com/}.
     1058
     1059\bibitem{paolo} \textbf{Indexed Labels for Loop Iteration Dependent Costs}. P.
    10361060Tranquilli, in Proceedings of the 11th International Workshop on Quantitative
    10371061Aspects of Programming Languages and Systems (QAPL 2013), Rome, 23rd-24th March
    103810622013, Electronic Proceedings in Theoretical Computer Science, to appear in 2013.
    10391063
    1040 \bibitem{embounded} \textbf{The EmBounded project (project paper)}, K. Hammond,
    1041 R. Dyckhoff, C. Ferdinand, R. Heckmann, M. Hofmann, H. Loidl, G. Michaelson, J.
    1042 Serot, A. Wallace, in Trends in Functional Programming, Volume 6, Intellect
    1043 Press, 2006.
    1044 
    1045 \bibitem{proartis} \textbf{PROARTIS: Probabilistically Analysable Real-Time
    1046 Systems}, F.J. Cazorla, E. Qui˜nones, T. Vardanega, L. Cucu, B. Triquet, G.
    1047 Bernat, E. Berger, J. Abella, F. Wartel, M. Houston, et al., in ACM Transactions
    1048 on Embedded Computing Systems, 2012.
    1049 
    1050 \bibitem{stateart} \textbf{The worst-case execution-time problem overview of
    1051 methods
    1052 and survey of tools.} Wilhelm R. et al., in  ACM Transactions on Embedded
    1053 Computing Systems, 7:1–53, May 2008.
    1054 
    1055 %\bibitem{proartis2} \textbf{A Cache Design for Probabilistic Real-Time
    1056 % Systems}, L. Kosmidis, J. Abella, E. Quinones, and F. Cazorla, in Design,
    1057 % Automation, and Test in Europe (DATE), Grenoble, France, 03/2013.
    1058 
    1059 \bibitem{framac} \textbf{Frama-C user manual} Correnson, L., Cuoq, P., Kirchner,
    1060 F., Prevosto, V., Puccetti, A., Signoles, J.,
    1061 Yakobowski, B. in CEA-LIST, Software Safety Laboratory, Saclay, F-91191,
    1062 \url{http://frama-c.com/}.
    1063 
    1064 \bibitem{compcert} \textbf{Formal verification of a realistic compiler} Leroy,
    1065 X. In Commun. ACM 52(7), 107–115 (2009)
     1064\bibitem{separation} \textbf{Intuitionistic reasoning about shared mutable data
structure}. John C. Reynolds. In Millennial Perspectives in Computer Science,
     1066pages 303–321, Houndsmill,
     1067Hampshire, 2000. Palgrave.
    10661068
    10671069\bibitem{lustre} \textbf{LUSTRE: a declarative language for real-time
     
    107110731987.
    10721074
    1073 \bibitem{separation} \textbf{Intuitionistic reasoning about shared mutable data
    1074 structure} John C. Reynolds. In Millennial Perspectives in Computer Science,
    1075 pages 303–321, Houndsmill,
    1076 Hampshire, 2000. Palgrave.
    1077 
    1078 \bibitem{typeffects} \textbf{The Type and Effect Discipline} Jean-Pierre Talpin
     1075\bibitem{correctness} \textbf{On the correctness of an optimising assembler for
the Intel MCS-51 microprocessor}.
Dominic P. Mulligan and Claudio Sacerdoti Coen. In Proceedings of the Second
     1078international conference on Certified Programs and Proofs, Springer-Verlag 2012.
     1079
     1080\bibitem{proartis} \textbf{PROARTIS: Probabilistically Analysable Real-Time
Systems}, F.J. Cazorla, E. Quiñones, T. Vardanega, L. Cucu, B. Triquet, G.
     1082Bernat, E. Berger, J. Abella, F. Wartel, M. Houston, et al., in ACM Transactions
     1083on Embedded Computing Systems, 2012.
     1084
     1085\bibitem{embounded} \textbf{The EmBounded project (project paper)}. K. Hammond,
     1086R. Dyckhoff, C. Ferdinand, R. Heckmann, M. Hofmann, H. Loidl, G. Michaelson, J.
     1087Serot, A. Wallace, in Trends in Functional Programming, Volume 6, Intellect
     1088Press, 2006.
     1089
     1090\bibitem{matita}
     1091\textbf{The Matita Interactive Theorem Prover}.
     1092Andrea Asperti, Claudio Sacerdoti Coen, Wilmer Ricciotti and Enrico Tassi.
     109323rd International Conference on Automated Deduction, CADE 2011.
     1094
     1095\bibitem{typeffects} \textbf{The Type and Effect Discipline}. Jean-Pierre Talpin
    10791096and Pierre Jouvelot.
    10801097  In Proceedings of the Seventh Annual Symposium on Logic in Computer Science
     
    10821099IEEE Computer Society 1992.
    10831100
    1084 \bibitem{labelling2} \textbf{Certifying and reasoning on cost annotations of
    1085 functional programs}.
    1086 R.M. Amadio, Y. Régis-Gianas. Proceedings of the Second international conference
    1087 on Foundational and Practical Aspects of Resource Analysis FOPARA 2011 Springer
    1088 LNCS 2011
    1089 
    1090 \bibitem{correctness} \textbf{On the correctness of an optimising assembler for
    1091 the intel MCS-51 microprocessor}
    1092   Dominic P. Mulligan  and Claudio Sacerdoti Coen. In Proceedings of the Second
    1093 international conference on Certified Programs and Proofs, Springer-Verlag 2012.
    1094 
    1095 \bibitem{survey} \textbf{A Survey of Static Program Analysis Techniques}
    1096 Wolfgang W\"ogerer. Technical report. Technische Universit\"at Wien 2005
     1101\bibitem{stateart} \textbf{The worst-case execution-time problem overview of
     1102methods
     1103and survey of tools.} Wilhelm R. et al., in  ACM Transactions on Embedded
     1104Computing Systems, 7:1–53, May 2008.
     1105
     1106%\bibitem{proartis2} \textbf{A Cache Design for Probabilistic Real-Time
     1107% Systems}, L. Kosmidis, J. Abella, E. Quinones, and F. Cazorla, in Design,
     1108% Automation, and Test in Europe (DATE), Grenoble, France, 03/2013.
    10971109
    10981110\end{thebibliography}