Changeset 3425


Timestamp:
Feb 11, 2014, 5:02:28 PM
Author:
campbell
Message:

Rearrange section 4

File:
1 edited

  • Papers/fopara2013/fopara13.tex

    r3424 r3425  
    532532\end{figure}
    533533\section{Main scientific and technical results}
    534 We will now review the main results that the CerCo project has achieved.
     534First we describe the basic labeling approach and our compiler
     535implementations that use it.  This is suitable for basic architectures
     536with simple cost models.  Then we will discuss the dependent labeling
     537extension which is suitable for more advanced processor architectures
     538and compiler optimisations.  At the end of this section we will
     539demonstrate automated high level reasoning about the source level
     540costs provided by the compilers.
    535541
    536542% \emph{Dependent labeling~\cite{?}.} The basic labeling approach assumes a
     
    668674to every instruction. This is not possible in
    669675the presence of stateful hardware whose state influences the cost of operations,
    670 like pipelines and caches. In the next subsection we will see an extension of the
     676like pipelines and caches. In Section~\ref{lab:deplabel} we will see an extension of the
    671677basic labeling approach to cover this situation.
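For concreteness, the instrumented source produced by basic labeling can be
pictured as the following purely illustrative sketch, in which the counter name
\texttt{\_\_cost} and the per-block constants are invented rather than produced
by the actual compiler:
\begin{verbatim}
/* Hypothetical sketch of basic-labeling instrumentation.  The name
   __cost and the per-block constants are illustrative only; the real
   constants are computed from the generated object code. */
unsigned long __cost = 0;          /* global clock-cycle counter */

int max(int a, int b)
{
  __cost += 23;                    /* prologue and comparison block */
  if (a > b) {
    __cost += 11;                  /* `then' block */
    return a;
  } else {
    __cost += 12;                  /* `else' block */
    return b;
  }
}
\end{verbatim}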
    672678
    673 The labeling approach described in this section can be applied equally well and
    674 with minor modifications to imperative and functional languages
    675 \cite{labeling2}. We have tested it on a simple imperative language without
    676 functions (a `while' language), on a subset of C and on two compilation chains for
    677 a purely functional higher-order language. The two main changes required to deal
    678 with functional languages are: 1) because global variables and updates are not
    679 available, the instrumentation phase produces monadic code to `update' the
    680 global costs; 2) the requirements for a sound and precise labeling of the
    681 source code must be changed when the compilation is based on CPS translations.
    682 In particular, we need to introduce labels emitted before a statement is
    683 executed and also labels emitted after a statement is executed. The latter capture
    684 code that is inserted by the CPS translation and that would escape all label
    685 scopes.
     679In CerCo we have developed several cost preserving compilers based on
     680the labeling approach. Excluding an initial certified compiler for a
     681`while' language, all remaining compilers target realistic source
     682languages---a pure higher order functional language and a large subset
     683of C with pointers, gotos and all data structures---and real world
     684target processors---MIPS and the Intel 8051/8052 processor
     685family. Moreover, they achieve a level of optimisation that ranges
     686from moderate (comparable to GCC level 1) to intermediate (including
     687loop peeling and unrolling, hoisting and late constant propagation).
     688We describe the C compilers in detail in the following section.
     689
     690Two compilation chains were implemented for a purely functional
     691higher-order language~\cite{labeling2}. The two main changes required
     692to deal with functional languages are: 1) because global variables and
     693updates are not available, the instrumentation phase produces monadic
     694code to `update' the global costs; 2) the requirements for a sound and
     695precise labeling of the source code must be changed when the
     696compilation is based on CPS translations.  In particular, we need to
     697introduce labels emitted before a statement is executed and also
     698labels emitted after a statement is executed. The latter capture code
     699that is inserted by the CPS translation and that would escape all
     700label scopes.
    686701
    687702% Brian: one of the reviewers pointed out that standard Prolog implementations
     
    697712% non-backtrackable state could support our labeling approach.
    698713
    699 \subsection{Dependent labeling}
    700 The core idea of the basic labeling approach is to establish a tight connection
    701 between basic blocks executed in the source and target languages. Once the
    702 connection is established, any cost model computed on the object code can be
    703 transferred to the source code, without affecting the code of the compiler or
    704 its proof. In particular, it is immediate that we can also transport cost models
    705 that associate to each label functions from hardware state to natural numbers.
    706 However, a problem arises during the instrumentation phase that replaces cost
    707 emission statements with increments of global cost variables. The global cost
    708 variable must be incremented with the result of applying the function associated
    709 to the label to the hardware state at the time of execution of the block.
    710 The hardware state comprises both the functional state that affects the
    711 computation (the value of the registers and memory) and the non-functional
    712 state that does not (the pipeline and cache contents, for example). The former is
    713 in correspondence with the source code state, but reconstructing the
    714 correspondence may be hard and lifting the cost model to work on the source code
    715 state is likely to produce cost expressions that are too complex to understand and reason about.
    716 Fortunately, in all modern architectures the cost of executing single
    717 instructions is either independent of the functional state or the jitter---the
    718 difference between the worst and best case execution times---is small enough
    719 to be bounded without losing too much precision. Therefore we only consider
    720 dependencies on the `non-functional' parts of the state.
    721 
    722 The non-functional state has no correspondence in the high level state and does
    723 not influence the functional properties. What can be done is to expose the
    724 non-functional state in the source code. We present here the basic
    725 intuition in a simplified form: the technical details that allow us to handle the
    726 general case are more complex and can be found in~\cite{paolo}. We add
    727 to the source code an additional global variable that represents the
    728 non-functional state and another one that remembers the last labels emitted. The
    729 state variable must be updated at every label emission statement, using an
    730 update function which is computed during static analysis too. The update
    731 function associates to each label a function from the recently emitted labels
    732 and old state to the new state. It is computed by composing the semantics of every
    733 instruction in a basic block and restricting it to the non-functional part of
    734 the state.
    735 
    736 Not all the details of the non-functional state need to be exposed, and the
    737 technique works better when the part of state that is required can be summarized
    738 in a simple data structure. For example, to handle simple but realistic
    739 pipelines it is sufficient to remember a short integer that encodes the position
    740 of bubbles (stuck instructions) in the pipeline. In any case, it is not necessary
    741 for the user to understand the meaning of the state to reason over the properties
    742 of the
    743 program. Moreover, at any moment the user, or the invariant generator tools that
    744 analyse the instrumented source code produced by the compiler, can decide to
    745 trade precision of the analysis for simplicity by approximating the parametric
    746 cost with safe non-parametric bounds. Interestingly, the functional analysis of
    747 the code could determine which blocks are executed more frequently in order to
    748 use more aggressive approximations for the ones that are executed least.
    749 
    750 Dependent labeling can also be applied to allow the compiler to duplicate
    751 blocks that contain labels (e.g. in loop optimisations)~\cite{paolo}. The effect is to assign
    752 a different cost to the different occurrences of a duplicated label. For
    753 example, loop peeling turns a loop into the concatenation of a copy of the loop
    754 body for the first iteration and the conditional execution of the
    755 loop for successive iterations. Further optimisations will compile the two
    756 copies of the loop differently, with the first body usually
    757 taking more time.
    758 
    759 By introducing a variable that keeps track of the iteration number, we can
    760 associate to the label a cost that is a function of the iteration number. The
    761 same technique works for loop unrolling without modifications: the function will
    762 assign a cost to the even iterations and another cost to the odd ones. The
    763 actual work to be done consists of introducing within the source code, and for each
    764 loop, a variable that counts the number of iterations. The loop optimisation code
    765 that duplicates the loop bodies must also modify the code to correctly propagate
    766 the update of the iteration numbers. The technical details are more complex and
    767 can be found in the CerCo reports and publications. The implementation, however,
    768 is quite simple and the changes to a loop optimising compiler are minimal.
    769 
    770 \subsection{Techniques to exploit the induced cost model}
    771 We review the cost synthesis techniques developed in the project.
    772 The starting hypothesis is that we have a certified methodology to annotate
    773 blocks in the source code with constants which provide a sound and sufficiently
    774 precise upper bound on the cost of executing the blocks after compilation to
    775 object code.
    776 
    777 The principle that we have followed in designing the cost synthesis tools is
    778 that the synthetic bounds should be expressed and proved within a general
    779 purpose tool built to reason on the source code. In particular, we rely on the
    780 Frama-C tool to reason on C code and on the Coq proof-assistant to reason on
    781 higher-order functional programs.
    782 
    783 This principle entails that: 1)
    784 The inferred synthetic bounds are indeed correct as long as the general purpose
    785 tool is; 2) there is no limitation on the class of programs that can be handled
    786 as long as the user is willing to carry out an interactive proof.
    787 
    788 Of course, automation is desirable whenever possible. Within this framework,
    789 automation means writing programs that give hints to the general purpose tool.
    790 These hints may take the form, say, of loop invariants/variants, of predicates
    791 describing the structure of the heap, or of types in a light logic. If these
    792 hints are correct and sufficiently precise the general purpose tool will produce
    793 a proof automatically; otherwise, user interaction is required.
    794 
    795 \paragraph{The Cost plugin and its application to the Lustre compiler.}
    796 Frama-C \cite{framac} is a set of analysers for C programs with a
    797 specification language called ACSL. New analyses can be dynamically added
    798 via a plugin system. For instance, the Jessie plugin allows deductive
    799 verification of C programs with respect to their specification in ACSL, with
    800 various provers as back-end tools.
    801 We developed the CerCo Cost plugin for the Frama-C platform as a proof of
    802 concept of an automatic environment exploiting the cost annotations produced by
    803 the CerCo compiler. It consists of an OCaml program which essentially
    804 takes the following actions: 1) it receives as input a C program, 2) it
    805 applies the CerCo compiler to produce a related C program with cost annotations,
    806 3) it applies some heuristics to produce a tentative bound on the cost of
    807 executing the C functions of the program as a function of the value of their
    808 parameters, 4) the user can then call the Jessie plugin to discharge the
    809 related proof obligations.
    810 In the following we elaborate on the soundness of the framework and the
    811 experiments we performed with the Cost tool on the C programs produced by a
    812 Lustre compiler.
    813 
    814 \paragraph{Soundness.} The soundness of the whole framework depends on the cost
    815 annotations added by the CerCo compiler, the synthetic costs produced by the
    816 cost plugin, the verification conditions (VCs) generated by Jessie, and the
    817 external provers discharging the VCs. Since the synthetic costs are in ACSL format,
    818 Jessie can be used to verify them. Thus, even if the added synthetic costs are
    819 incorrect (relative to the cost annotations), the process as a whole is still
    820 correct: indeed, Jessie will not validate incorrect costs and no conclusion can
    821 be made about the WCET of the program in this case. In other terms, the
    822 soundness does not really depend on the action of the cost plugin, which can in
    823 principle produce any synthetic cost. However, in order to be able to actually
    824 prove a WCET of a C function, we need to add correct annotations in a way that
    825 Jessie and subsequent automatic provers have enough information to deduce their
    826 validity. In practice this is not straightforward even for very simple programs
    827 composed of branching and assignments (no loops and no recursion) because a fine
    828 analysis of the VCs associated with branching may lead to a complexity blow up.
    829 \paragraph{Experience with Lustre.} Lustre is a data-flow language for programming
    830 synchronous systems, with a compiler which targets C. We designed a
    831 wrapper for supporting Lustre files.
    832 The C function produced by the compiler implements the step function of the
    833 synchronous system, and computing the WCET of the function amounts to obtaining a
    834 bound on the reaction time of the system. We tested the Cost plugin and the
    835 Lustre wrapper on the C programs generated by the Lustre compiler. For programs
    836 consisting of a few hundred lines of code, the cost plugin computes a
    837 WCET and Alt-Ergo is able to discharge all VCs automatically.
    838 
    839 \paragraph{Handling C programs with simple loops.}
    840 The cost annotations added by the CerCo compiler take the form of C instructions
    841 that update a fresh global variable called the cost variable by a constant.
    842 Synthesizing a WCET of a C function thus consists of statically resolving an
    843 upper bound on the difference between the value of the cost variable before and
    844 after the execution of the function, i.e. finding the instructions
    845 that update the cost variable and establishing the number of times they are passed
    846 through during the flow of execution. To perform the analysis the plugin
    847 makes the following assumptions on the programs:
    848 1) there are no recursive functions;
    849 2) every loop must be annotated with a variant. The variants of `for' loops are
    850 automatically inferred.
    851 
    852 The plugin proceeds as follows.
    853 First the call graph of the program is computed.
    854 Then the computation of the cost of the function is performed by traversing its
    855 control flow graph. If the function $f$ calls the function $g$
    856 then the function $g$
    857 is processed before the function $f$. The cost at a node is the maximum of the
    858 costs of the successors.
    859 In the case of a loop with a body that has a constant cost for every step of the
    860 loop, the cost is the product of the cost of the body and of the variant taken
    861 at the start of the loop.
    862 In the case of a loop with a body whose cost depends on the values of some free
    863 variables, a fresh logic function $f$ is introduced to represent the cost of the
    864 loop in the logic assertions. This logic function takes the variant as a first
    865 parameter. The other parameters of $f$ are the free variables of the body of the
    866 loop. An axiom is added to account for the fact that the cost is accumulated at each
    867 step of the loop.
    868 The cost of the function is directly added as a post-condition of the function.
    869 
    870 The user can influence the annotation by two different means:
    871 1) by using more precise variants;
    872 2) by annotating functions with cost specifications. The plugin will use this cost
    873 for the function instead of computing it.
    874 \paragraph{C programs with pointers.}
    875 When it comes to verifying programs involving pointer-based data structures,
    876 such as linked lists, trees, or graphs, the use of traditional first-order logic
    877 to specify, and of SMT solvers to verify, shows some limitations. Separation
    878 logic~\cite{separation} is an elegant alternative. It is a program logic
    879 with a new notion of conjunction to express spatial heap separation. Bobot has
    880 recently introduced automatically generated separation
    881 predicates to simulate separation logic reasoning in the Jessie plugin, where the specification language, the verification condition
    882 generator, and the theorem provers were not designed with separation logic in
    883 mind. CerCo's plugin can exploit the separation predicates to automatically
    884 reason on the cost of execution of simple heap manipulation programs such as an
    885 in-place list reversal.
    886 
    887 \subsection{The CerCo compiler}
    888 In CerCo we have developed several cost preserving compilers based
    889 on the labeling approach. Excluding an initial certified compiler for a `while'
    890 language, all remaining compilers target realistic source languages---a pure
    891 higher order functional language and a large subset of C with pointers, gotos
    892 and all data structures---and real world target processors---MIPS and the
    893 Intel 8051/8052 processor family. Moreover, they achieve a level of optimisation
    894 that ranges from moderate (comparable to GCC level 1) to intermediate (including
    895 loop peeling and unrolling, hoisting and late constant propagation). The so
    896 called \emph{Trusted CerCo Compiler} is the only one that was implemented in the
    897 interactive theorem prover Matita~\cite{matita} and its costs certified. The code distributed
     714\subsection{The CerCo C compilers}
     715We implemented two C compilers, one written directly in OCaml and
     716the other developed in the interactive theorem prover
     717Matita~\cite{matita}.  The first acted as a prototype for the second,
     718but also supported MIPS and acted as a testbed for more advanced
     719features such as the dependent labeling approach
     720in Section~\ref{lab:deplabel}.
     721
     722The second C compiler is the \emph{Trusted CerCo Compiler}, whose cost
     723predictions are formally verified. The code distributed
    898724is OCaml code extracted from the Matita implementation. In the rest of
    899725this section we will focus only on the Trusted CerCo Compiler, which targets
    900726the C language and the 8051/8052 family and does not yet implement any
    901727advanced optimisations. Its user interface, however, is the same as the one
    902 of the other versions, so that safety can be traded for additional performance. In
     728of the other version, so that safety can be traded for additional performance. In
    903729particular, the Frama-C CerCo plugin can work without recompilation
    904 with all of our C compilers.
     730with both of our C compilers.
    905731
    906732The 8051/8052 microprocessor is a very simple one, with constant-cost
     
    952778
    953779\subsection{Formal certification of the CerCo compiler}
    954 We implemented the
    955 CerCo compiler in the interactive theorem prover Matita and have formally
    956 certified that the cost models induced on the source code correctly and precisely
    957 predict the object code behaviour. There are two cost models, one for execution
     780We have formally
     781certified in the Matita interactive proof assistant that the cost models induced on the source code by the
     782Trusted CerCo Compiler correctly and precisely
     783predict the object code behaviour. There are two cost models, one for execution
    958784time and one for stack space consumption. We show the correctness of the prediction
    959785only for those programs that do not exhaust the available machine stack space, a
     
    1024850not I/O times, as desired.
    1025851
     852\subsection{Dependent labeling}
     853\label{lab:deplabel}
     854The core idea of the basic labeling approach is to establish a tight connection
     855between basic blocks executed in the source and target languages. Once the
     856connection is established, any cost model computed on the object code can be
     857transferred to the source code, without affecting the code of the compiler or
     858its proof. In particular, it is immediate that we can also transport cost models
     859that associate to each label functions from hardware state to natural numbers.
     860However, a problem arises during the instrumentation phase that replaces cost
     861emission statements with increments of global cost variables. The global cost
     862variable must be incremented with the result of applying the function associated
     863to the label to the hardware state at the time of execution of the block.
     864The hardware state comprises both the functional state that affects the
     865computation (the value of the registers and memory) and the non-functional
     866state that does not (the pipeline and cache contents, for example). The former is
     867in correspondence with the source code state, but reconstructing the
     868correspondence may be hard and lifting the cost model to work on the source code
     869state is likely to produce cost expressions that are too complex to understand and reason about.
     870Fortunately, in all modern architectures the cost of executing single
     871instructions is either independent of the functional state or the jitter---the
     872difference between the worst and best case execution times---is small enough
     873to be bounded without losing too much precision. Therefore we only consider
     874dependencies on the `non-functional' parts of the state.
     875
     876The non-functional state has no correspondence in the high level state and does
     877not influence the functional properties. What can be done is to expose the
     878non-functional state in the source code. We present here the basic
     879intuition in a simplified form: the technical details that allow us to handle the
     880general case are more complex and can be found in~\cite{paolo}. We add
     881to the source code an additional global variable that represents the
     882non-functional state and another one that remembers the last labels emitted. The
     883state variable must be updated at every label emission statement, using an
     884update function which is computed during static analysis too. The update
     885function associates to each label a function from the recently emitted labels
     886and old state to the new state. It is computed by composing the semantics of every
     887instruction in a basic block and restricting it to the non-functional part of
     888the state.
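A minimal sketch of this construction, under a deliberately naive pipeline
model, is the following; the state encoding, the update function and the cost
function are invented for illustration and do not come from the actual CerCo
implementation:
\begin{verbatim}
/* Hypothetical sketch of dependent labeling.  __state abstracts the
   non-functional hardware state (here: number of pipeline bubbles);
   the update and cost functions for label L1 are invented. */
unsigned long __cost  = 0;   /* accumulated cycles */
unsigned int  __state = 0;   /* abstract pipeline state */

static unsigned int  update_L1(unsigned int s) { return s > 0 ? s - 1 : 2; }
static unsigned long cost_L1  (unsigned int s) { return 15 + s; }

void emit_L1(void)
{
  __cost  += cost_L1(__state);    /* charge a state-dependent cost   */
  __state  = update_L1(__state);  /* move to the next abstract state */
}
\end{verbatim}
In the real construction the update function also takes the recently emitted
labels as an argument, as described above.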
     889
     890Not all the details of the non-functional state need to be exposed, and the
     891technique works better when the part of state that is required can be summarized
     892in a simple data structure. For example, to handle simple but realistic
     893pipelines it is sufficient to remember a short integer that encodes the position
     894of bubbles (stuck instructions) in the pipeline. In any case, it is not necessary
     895for the user to understand the meaning of the state to reason over the properties
     896of the
     897program. Moreover, at any moment the user, or the invariant generator tools that
     898analyse the instrumented source code produced by the compiler, can decide to
     899trade precision of the analysis for simplicity by approximating the parametric
     900cost with safe non-parametric bounds. Interestingly, the functional analysis of
     901the code could determine which blocks are executed more frequently in order to
     902use more aggressive approximations for the ones that are executed least.
     903
     904Dependent labeling can also be applied to allow the compiler to duplicate
     905blocks that contain labels (e.g. in loop optimisations)~\cite{paolo}. The effect is to assign
     906a different cost to the different occurrences of a duplicated label. For
     907example, loop peeling turns a loop into the concatenation of a copy of the loop
     908body for the first iteration and the conditional execution of the
     909loop for successive iterations. Further optimisations will compile the two
     910copies of the loop differently, with the first body usually
     911taking more time.
     912
     913By introducing a variable that keeps track of the iteration number, we can
     914associate to the label a cost that is a function of the iteration number. The
     915same technique works for loop unrolling without modifications: the function will
     916assign a cost to the even iterations and another cost to the odd ones. The
     917actual work to be done consists of introducing within the source code, and for each
     918loop, a variable that counts the number of iterations. The loop optimisation code
     919that duplicates the loop bodies must also modify the code to correctly propagate
     920the update of the iteration numbers. The technical details are more complex and
     921can be found in the CerCo reports and publications. The implementation, however,
     922is quite simple (and forms part of our OCaml version of the compiler)
     923and the changes to a loop optimising compiler are minimal.
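For example, in the following hypothetical fragment (with invented constants)
the source loop is left untouched while the annotation charges even and odd
iterations differently, matching the two object-code copies of the body
produced by unrolling by a factor of two:
\begin{verbatim}
/* Hypothetical dependent cost after unrolling by two: the source
   loop is unchanged, but the charge depends on the parity of the
   iteration counter. */
unsigned long __cost = 0;

long sum(int *a, int n)
{
  long s = 0;
  int  i, __iter = 0;                   /* iteration counter */
  for (i = 0; i < n; i++, __iter++) {
    __cost += (__iter % 2 == 0) ? 20    /* even: first copy of the body  */
                                : 14;   /* odd:  second copy of the body */
    s += a[i];
  }
  return s;
}
\end{verbatim}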
     924
     925\subsection{Techniques to exploit the induced cost model}
     926We now turn our attention to synthesising high-level costs, such as
     927the execution time of a whole program.  Our starting point is the source-level costs
     928provided by basic labeling, that is, annotations
     929on the source code which are constants providing a sound and sufficiently
     930precise upper bound on the cost of executing the blocks after compilation to
     931object code.
     932
     933The principle that we have followed in designing the cost synthesis tools is
     934that the synthetic bounds should be expressed and proved within a general
     935purpose tool built to reason on the source code. In particular, we rely on the
     936Frama-C tool to reason on C code and on the Coq proof-assistant to reason on
     937higher-order functional programs.
     938
     939This principle entails that: 1)
     940The inferred synthetic bounds are indeed correct as long as the general purpose
     941tool is; 2) there is no limitation on the class of programs that can be handled
     942as long as the user is willing to carry out an interactive proof.
     943
     944Of course, automation is desirable whenever possible. Within this framework,
     945automation means writing programs that give hints to the general purpose tool.
     946These hints may take the form, say, of loop invariants/variants, of predicates
     947describing the structure of the heap, or of types in a light logic. If these
     948hints are correct and sufficiently precise the general purpose tool will produce
     949a proof automatically; otherwise, user interaction is required.
     950
     951\paragraph{The Cost plugin and its application to the Lustre compiler.}
     952Frama-C \cite{framac} is a set of analysers for C programs with a
     953specification language called ACSL. New analyses can be dynamically added
     954via a plugin system. For instance, the Jessie plugin allows deductive
     955verification of C programs with respect to their specification in ACSL, with
     956various provers as back-end tools.
     957We developed the CerCo Cost plugin for the Frama-C platform as a proof of
     958concept of an automatic environment exploiting the cost annotations produced by
     959the CerCo compiler. It consists of an OCaml program which essentially
     960takes the following actions: 1) it receives as input a C program, 2) it
     961applies the CerCo compiler to produce a related C program with cost annotations,
     9623) it applies some heuristics to produce a tentative bound on the cost of
     963executing the C functions of the program as a function of the value of their
     964parameters, 4) the user can then call the Jessie plugin to discharge the
     965related proof obligations.
     966In the following we elaborate on the soundness of the framework and the
     967experiments we performed with the Cost tool on the C programs produced by a
     968Lustre compiler.
     969
     970\paragraph{Soundness.} The soundness of the whole framework depends on the cost
     971annotations added by the CerCo compiler, the synthetic costs produced by the
     972cost plugin, the verification conditions (VCs) generated by Jessie, and the
     973external provers discharging the VCs. Since the synthetic costs are in ACSL format,
     974Jessie can be used to verify them. Thus, even if the added synthetic costs are
     975incorrect (relative to the cost annotations), the process as a whole is still
     976correct: indeed, Jessie will not validate incorrect costs and no conclusion can
     977be made about the WCET of the program in this case. In other terms, the
     978soundness does not really depend on the action of the cost plugin, which can in
     979principle produce any synthetic cost. However, in order to be able to actually
     980prove a WCET of a C function, we need to add correct annotations in a way that
     981Jessie and subsequent automatic provers have enough information to deduce their
     982validity. In practice this is not straightforward even for very simple programs
     983composed of branching and assignments (no loops and no recursion) because a fine
     984analysis of the VCs associated with branching may lead to a complexity blow up.
     985\paragraph{Experience with Lustre.} Lustre is a data-flow language for programming
     986synchronous systems, with a compiler which targets C. We designed a
     987wrapper for supporting Lustre files.
     988The C function produced by the compiler implements the step function of the
     989synchronous system, and computing the WCET of the function amounts to obtaining a
     990bound on the reaction time of the system. We tested the Cost plugin and the
     991Lustre wrapper on the C programs generated by the Lustre compiler. For programs
     992consisting of a few hundred lines of code, the cost plugin computes a
     993WCET and Alt-Ergo is able to discharge all VCs automatically.
     994
     995\paragraph{Handling C programs with simple loops.}
     996The cost annotations added by the CerCo compiler take the form of C instructions
     997that update a fresh global variable called the cost variable by a constant.
     998Synthesizing a WCET of a C function thus consists of statically resolving an
     999upper bound on the difference between the value of the cost variable before and
     1000after the execution of the function, i.e. finding the instructions
     1001that update the cost variable and establishing the number of times they are passed
     1002through during the flow of execution. To perform the analysis the plugin
     1003makes the following assumptions on the programs:
     10041) there are no recursive functions;
     10052) every loop must be annotated with a variant. The variants of `for' loops are
     1006automatically inferred.
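Under these assumptions a typical input is the following hypothetical fragment,
where \texttt{\_\_cost} stands for the cost variable inserted by the compiler,
the constant is invented, and the loop carries the required variant (for a
`for' loop such as this one it could also be inferred automatically):
\begin{verbatim}
/* Hypothetical annotated input; the name __cost and the constant 18
   are illustrative only. */
extern unsigned long __cost;

void fill(char *dst, int n)
{
  int i;
  /*@ loop variant n - i; */     /* loop annotation required by the plugin */
  for (i = 0; i < n; i++) {
    __cost += 18;                /* constant cost of one loop body */
    dst[i] = 0;
  }
}
\end{verbatim}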
     1007
     1008The plugin proceeds as follows.
     1009First the call graph of the program is computed.
     1010Then the computation of the cost of the function is performed by traversing its
     1011control flow graph. If the function $f$ calls the function $g$
     1012then the function $g$
     1013is processed before the function $f$. The cost at a node is the maximum of the
     1014costs of the successors.
     1015In the case of a loop with a body that has a constant cost for every step of the
     1016loop, the cost is the product of the cost of the body and of the variant taken
     1017at the start of the loop.
     1018In the case of a loop with a body whose cost depends on the values of some free
     1019variables, a fresh logic function $f$ is introduced to represent the cost of the
     1020loop in the logic assertions. This logic function takes the variant as a first
     1021parameter. The other parameters of $f$ are the free variables of the body of the
     1022loop. An axiom is added to account for the fact that the cost is accumulated at each
     1023step of the loop.
     1024The cost of the function is directly added as a post-condition of the function.
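Continuing the hypothetical fragment above, a loop body of constant cost 18
with a variant starting at \texttt{n} yields a bound of the shape $18 \cdot n$
plus a constant overhead, attached to the function as an ACSL post-condition
roughly as follows (the overhead 5 is again invented):
\begin{verbatim}
/* Hypothetical shape of the bound synthesized by the plugin. */
/*@ requires n >= 0;
    ensures  __cost <= \old(__cost) + 18 * n + 5; */
void fill(char *dst, int n);
\end{verbatim}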
     1025
     1026The user can influence the annotation by two different means:
     10271) by using more precise variants;
     10282) by annotating functions with cost specifications. The plugin will use this cost
     1029for the function instead of computing it.
     1030\paragraph{C programs with pointers.}
     1031When it comes to verifying programs involving pointer-based data structures,
     1032such as linked lists, trees, or graphs, the use of traditional first-order logic
     1033to specify, and of SMT solvers to verify, shows some limitations. Separation
     1034logic~\cite{separation} is an elegant alternative. It is a program logic
     1035with a new notion of conjunction to express spatial heap separation. Bobot has
     1036recently introduced automatically generated separation
     1037predicates to simulate separation logic reasoning in the Jessie plugin, where the specification language, the verification condition
     1038generator, and the theorem provers were not designed with separation logic in
     1039mind. CerCo's plugin can exploit the separation predicates to automatically
     1040reason on the cost of execution of simple heap manipulation programs such as an
     1041in-place list reversal.
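As a flavour of such programs, the following hypothetical rendering of in-place
list reversal (with an invented per-iteration constant) touches each node
exactly once, so its cost is linear in the length of the list; the separation
predicates are what allow the provers to establish that every node is visited
exactly once and that the traversal terminates:
\begin{verbatim}
/* Hypothetical cost-annotated in-place list reversal. */
extern unsigned long __cost;

struct node { int value; struct node *next; };

struct node *reverse(struct node *l)
{
  struct node *prev = 0;
  while (l != 0) {
    struct node *next = l->next;
    __cost += 25;                   /* constant cost of one iteration */
    l->next = prev;                 /* reverse the link in place */
    prev    = l;
    l       = next;
  }
  return prev;
}
\end{verbatim}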
     1042
    10261043\section{Conclusions and future work}
    10271044