Changeset 3425
- Timestamp: Feb 11, 2014, 5:02:28 PM
- File: Papers/fopara2013/fopara13.tex (1 edited)
Papers/fopara2013/fopara13.tex (r3424 → r3425)
\end{figure}
\section{Main scientific and technical results}
First we describe the basic labeling approach and our compiler
implementations that use it. This is suitable for basic architectures
with simple cost models. Then we will discuss the dependent labeling
extension, which is suitable for more advanced processor architectures
and compiler optimisations. At the end of this section we will
demonstrate automated high level reasoning about the source level
costs provided by the compilers.

% \emph{Dependent labeling~\cite{?}.} The basic labeling approach assumes a

[…]

to every instruction. This is not possible in the presence of stateful
hardware whose state influences the cost of operations, like pipelines
and caches. In Section~\ref{lab:deplabel} we will see an extension of
the basic labeling approach to cover this situation.

In CerCo we have developed several cost preserving compilers based on
the labeling approach. Excluding an initial certified compiler for a
`while' language, all remaining compilers target realistic source
languages---a pure higher order functional language and a large subset
of C with pointers, gotos and all data structures---and real world
target processors---MIPS and the Intel 8051/8052 processor family.
Moreover, they achieve a level of optimisation that ranges from
moderate (comparable to GCC level 1) to intermediate (including loop
peeling and unrolling, hoisting and late constant propagation). We
describe the C compilers in detail in the following section.

Two compilation chains were implemented for a purely functional
higher-order language~\cite{labeling2}. The two main changes required
to deal with functional languages are: 1) because global variables and
updates are not available, the instrumentation phase produces monadic
code to `update' the global costs; 2) the requirements for a sound and
precise labeling of the source code must be changed when the
compilation is based on CPS translations. In particular, we need to
introduce labels emitted before a statement is executed and also
labels emitted after a statement is executed. The latter capture code
that is inserted by the CPS translation and that would escape all
label scopes.
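
To make the labeling approach concrete for the C compilers just
mentioned, the following sketch shows the shape of the instrumentation
it produces on a small C fragment. It is only an illustration: the
cost variable name \texttt{\_\_cost} and the constants are invented
here, not the exact output of the CerCo compiler. Each block of the
source is covered by a label, and once the object code has been
analysed the corresponding cost emission statement is replaced by a
constant increment of the global cost variable.

\begin{verbatim}
/* Illustrative sketch: the name __cost and the constants are invented,
   not the exact output of the CerCo compiler.                          */
unsigned long __cost = 0;      /* global cost variable, in clock cycles */

int clamp(int x) {
  __cost += 23;                /* cost of the block up to the test      */
  if (x < 0) {
    __cost += 11;              /* cost of the `then' block              */
    return 0;
  } else {
    __cost += 14;              /* cost of the `else' block              */
    return x;
  }
}
\end{verbatim}

A bound on the execution time of \texttt{clamp} is then simply a bound
on the increase of \texttt{\_\_cost}, which is the kind of reasoning
carried out by the synthesis tools described later in this section.
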
% Brian: one of the reviewers pointed out that standard Prolog implementations

[…]

% non-backtrackable state could support our labeling approach.

\subsection{The CerCo C compilers}
We implemented two C compilers, one written directly in OCaml and the
other in the interactive theorem prover Matita~\cite{matita}. The
first acted as a prototype for the second, but also supported MIPS and
acted as a testbed for more advanced features such as the dependent
labeling approach of Section~\ref{lab:deplabel}.

The second C compiler is the \emph{Trusted CerCo Compiler}, whose cost
predictions are formally verified. The code distributed is OCaml code
extracted from the Matita implementation. In the rest of this section
we will focus only on the Trusted CerCo Compiler, which targets only
the C language and the 8051/8052 family, and which does not yet
implement any advanced optimisations. Its user interface, however, is
the same as that of the other version, so that users can trade safety
for additional performance. In particular, the Frama-C CerCo plugin
can work without recompilation with both of our C compilers.

The 8051/8052 microprocessor is a very simple one, with constant-cost

[…]

\subsection{Formal certification of the CerCo compiler}
We have formally certified in the Matita interactive proof assistant
that the cost models induced on the source code by the Trusted CerCo
Compiler correctly and precisely predict the object code behaviour.
There are two cost models, one for execution time and one for stack
space consumption. We show the correctness of the prediction only for
those programs that do not exhaust the available machine stack space, a

[…]

not I/O times, as desired.

\subsection{Dependent labeling}
\label{lab:deplabel}
The core idea of the basic labeling approach is to establish a tight
connection between basic blocks executed in the source and target
languages. Once the connection is established, any cost model computed
on the object code can be transferred to the source code, without
affecting the code of the compiler or its proof. In particular, it is
immediate that we can also transport cost models that associate to
each label a function from hardware states to natural numbers.
However, a problem arises during the instrumentation phase that
replaces cost emission statements with increments of global cost
variables. The global cost variable must be incremented with the
result of applying the function associated to the label to the
hardware state at the time of execution of the block. The hardware
state comprises both the functional state that affects the computation
(the value of the registers and memory) and the non-functional state
that does not (the pipeline and cache contents, for example). The
former is in correspondence with the source code state, but
reconstructing the correspondence may be hard, and lifting the cost
model to work on the source code state is likely to produce cost
expressions that are too complex to understand and reason about.
Fortunately, in all modern architectures the cost of executing single
instructions is either independent of the functional state or the
jitter---the difference between the worst and best case execution
times---is small enough to be bounded without losing too much
precision. Therefore we only consider dependencies on the
`non-functional' parts of the state.
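
As a purely illustrative sketch of the instrumentation this leads to
(the construction is made precise in the next paragraphs), consider
the fragment below. The names \texttt{\_\_pipe}, \texttt{\_\_next\_L1}
and \texttt{\_\_cost\_L1} are invented: they stand for the exposed
non-functional state and for the per-label update and cost functions
produced by the static analysis, here simplified so that the update
depends on the old state only.

\begin{verbatim}
/* Illustrative sketch with invented names: __pipe summarises the
   non-functional (pipeline) state, while __next_L1 and __cost_L1 stand
   for the update and cost functions that the static analysis would
   associate to the label L1.                                           */
unsigned long __cost = 0;
unsigned char __pipe = 0;                /* exposed non-functional state */

unsigned char __next_L1(unsigned char s) { return (unsigned char)(s >> 1); }
unsigned long __cost_L1(unsigned char s) { return 12u + (s & 1u); }

void block_L1(int a[], int i) {
  __pipe  = __next_L1(__pipe);           /* state update at the label     */
  __cost += __cost_L1(__pipe);           /* cost now depends on the state */
  a[i] = a[i] + 1;                       /* the original block            */
}
\end{verbatim}

The increment of \texttt{\_\_cost} is now parametric in the exposed
state, instead of being a fixed constant as in the basic approach.
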
The non-functional state has no correspondence in the high level state
and does not influence the functional properties. What can be done is
to expose the non-functional state in the source code. We present here
the basic intuition in a simplified form: the technical details that
allow us to handle the general case are more complex and can be found
in~\cite{paolo}. We add to the source code an additional global
variable that represents the non-functional state and another one that
remembers the last labels emitted. The state variable must be updated
at every label emission statement, using an update function which is
also computed during static analysis. The update function associates
to each label a function from the recently emitted labels and the old
state to the new state. It is computed by composing the semantics of
every instruction in a basic block and restricting it to the
non-functional part of the state.

Not all the details of the non-functional state need to be exposed,
and the technique works better when the part of the state that is
required can be summarised in a simple data structure. For example, to
handle simple but realistic pipelines it is sufficient to remember a
short integer that encodes the position of bubbles (stuck
instructions) in the pipeline. In any case, it is not necessary for
the user to understand the meaning of the state in order to reason
about the properties of the program. Moreover, at any moment the user,
or the invariant generator tools that analyse the instrumented source
code produced by the compiler, can decide to trade precision of the
analysis for simplicity by approximating the parametric cost with safe
non-parametric bounds. Interestingly, the functional analysis of the
code could determine which blocks are executed more frequently in
order to use more aggressive approximations for the ones that are
executed least.

Dependent labeling can also be applied to allow the compiler to
duplicate blocks that contain labels (e.g. in loop
optimisations)~\cite{paolo}. The effect is to assign a different cost
to the different occurrences of a duplicated label. For example, loop
peeling turns a loop into the concatenation of a copy of the loop body
for the first iteration and the conditional execution of the loop for
successive iterations. Further optimisations will compile the two
copies of the loop differently, with the first body usually taking
more time.

By introducing a variable that keeps track of the iteration number, we
can associate to the label a cost that is a function of the iteration
number. The same technique works for loop unrolling without
modifications: the function will assign one cost to the even
iterations and another cost to the odd ones. The actual work to be
done consists of introducing within the source code, and for each
loop, a variable that counts the number of iterations. The loop
optimisation code that duplicates the loop bodies must also modify the
code to correctly propagate the update of the iteration numbers. The
technical details are more complex and can be found in the CerCo
reports and publications. The implementation, however, is quite simple
(and forms part of our OCaml version of the compiler) and the changes
to a loop optimising compiler are minimal.
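
The following hedged sketch shows the source-level effect on a loop
that the compiler unrolls by a factor of two in the object code; the
counter \texttt{\_\_it} and the two constants are invented for
illustration. Even iterations are executed by the first copy of the
unrolled body and odd iterations by the second, so the single source
label receives a cost that depends on the parity of \texttt{\_\_it}.

\begin{verbatim}
/* Illustrative sketch: __it and the two constants are invented. The
   object code executes even iterations in the first copy of the
   unrolled body and odd iterations in the second.                    */
unsigned long __cost = 0;

void zero(int a[], int n) {
  int i, __it = 0;
  for (i = 0; i < n; i++) {
    __cost += (__it % 2 == 0) ? 10 : 8;  /* iteration-dependent cost */
    a[i] = 0;
    __it++;
  }
}
\end{verbatim}

Loop peeling would be handled similarly, with the cost depending on
whether \texttt{\_\_it} is zero or not.
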
\subsection{Techniques to exploit the induced cost model}
We now turn our attention to synthesising high-level costs, such as
the execution time of a whole program. Our starting point is the
source level costs provided by basic labeling, that is, constant
annotations on the source code that provide a sound and sufficiently
precise upper bound on the cost of executing each block after
compilation to object code.

The principle that we have followed in designing the cost synthesis
tools is that the synthetic bounds should be expressed and proved
within a general purpose tool built to reason on the source code. In
particular, we rely on the Frama-C tool to reason on C code and on the
Coq proof-assistant to reason on higher-order functional programs.
This principle entails that: 1) the inferred synthetic bounds are
indeed correct as long as the general purpose tool is; 2) there is no
limitation on the class of programs that can be handled, as long as
the user is willing to carry out an interactive proof.

Of course, automation is desirable whenever possible. Within this
framework, automation means writing programs that give hints to the
general purpose tool. These hints may take the form, say, of loop
invariants/variants, of predicates describing the structure of the
heap, or of types in a light logic. If these hints are correct and
sufficiently precise, the general purpose tool will produce a proof
automatically; otherwise, user interaction is required.

\paragraph{The Cost plugin and its application to the Lustre compiler.}
Frama-C~\cite{framac} is a set of analysers for C programs with a
specification language called ACSL. New analyses can be dynamically
added via a plugin system. For instance, the Jessie plugin allows
deductive verification of C programs with respect to their
specification in ACSL, with various provers as back-end tools. We
developed the CerCo Cost plugin for the Frama-C platform as a proof of
concept of an automatic environment exploiting the cost annotations
produced by the CerCo compiler. It consists of an OCaml program which
essentially takes the following actions: 1) it receives as input a C
program; 2) it applies the CerCo compiler to produce a related C
program with cost annotations; 3) it applies some heuristics to
produce a tentative bound on the cost of executing the C functions of
the program as a function of the values of their parameters; 4) the
user can then call the Jessie plugin to discharge the related proof
obligations. In the following we elaborate on the soundness of the
framework and the experiments we performed with the Cost tool on the C
programs produced by a Lustre compiler.
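
For a loop-free function, the tentative bound produced in step 3)
takes the form of an ACSL post-condition on the cost variable, roughly
as sketched below; the variable name \texttt{\_\_cost} and the
constants are invented, and the real output of the compiler and plugin
is more verbose.

\begin{verbatim}
/* Indicative sketch: names and constants are invented.                */
unsigned long __cost = 0;

/*@ ensures __cost <= \old(__cost) + 48; */
int max(int x, int y) {
  __cost += 23;               /* cost annotation added by the compiler */
  if (x < y) {
    __cost += 25;
    return y;
  } else {
    __cost += 21;
    return x;
  }
}
\end{verbatim}

Jessie turns such a specification into verification conditions that an
external prover such as Alt-Ergo can then discharge.
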
\paragraph{Soundness.} The soundness of the whole framework depends on
the cost annotations added by the CerCo compiler, the synthetic costs
produced by the Cost plugin, the verification conditions (VCs)
generated by Jessie, and the external provers discharging the VCs.
Since the synthetic costs are expressed in ACSL, Jessie can be used to
verify them. Thus, even if the added synthetic costs are incorrect
(relative to the cost annotations), the process as a whole is still
correct: indeed, Jessie will not validate incorrect costs and no
conclusion can be drawn about the WCET of the program in this case. In
other terms, the soundness does not really depend on the action of the
Cost plugin, which can in principle produce any synthetic cost.
However, in order to actually prove a WCET of a C function, we need to
add correct annotations in such a way that Jessie and the subsequent
automatic provers have enough information to deduce their validity. In
practice this is not straightforward even for very simple programs
composed of branching and assignments (no loops and no recursion),
because a fine analysis of the VCs associated with branching may lead
to a complexity blow up.

\paragraph{Experience with Lustre.} Lustre is a data-flow language for
programming synchronous systems, with a compiler which targets C. We
designed a wrapper for supporting Lustre files. The C function
produced by the compiler implements the step function of the
synchronous system, and computing the WCET of the function amounts to
obtaining a bound on the reaction time of the system. We tested the
Cost plugin and the Lustre wrapper on the C programs generated by the
Lustre compiler. For programs consisting of a few hundred lines of
code, the Cost plugin computes a WCET and Alt-Ergo is able to
discharge all VCs automatically.

\paragraph{Handling C programs with simple loops.}
The cost annotations added by the CerCo compiler take the form of C
instructions that update a fresh global variable, called the cost
variable, by a constant. Synthesising a WCET of a C function thus
consists of statically resolving an upper bound on the difference
between the value of the cost variable before and after the execution
of the function, i.e. finding the instructions that update the cost
variable and establishing the number of times they are passed through
during the flow of execution. To perform the analysis the plugin makes
the following assumptions on the programs: 1) there are no recursive
functions; 2) every loop must be annotated with a variant. The
variants of `for' loops are automatically inferred.

The plugin proceeds as follows. First the call graph of the program is
computed. Then the cost of each function is computed by traversing its
control flow graph. If the function $f$ calls the function $g$, then
the function $g$ is processed before the function $f$. The cost at a
node is the maximum of the costs of the successors. In the case of a
loop with a body that has a constant cost for every step of the loop,
the cost is the product of the cost of the body and of the variant
taken at the start of the loop. In the case of a loop with a body
whose cost depends on the values of some free variables, a fresh logic
function $f$ is introduced to represent the cost of the loop in the
logic assertions. This logic function takes the variant as its first
parameter. The other parameters of $f$ are the free variables of the
body of the loop. An axiom is added to account for the fact that the
cost is accumulated at each step of the loop. The cost of the function
is directly added as a post-condition of the function.
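
For the constant-cost-body case, the generated annotations have
roughly the following shape; the names, constants and the exact form
of the invariants are invented for illustration, the plugin's actual
output being more detailed.

\begin{verbatim}
/* Indicative sketch: names, constants and the exact shape of the
   generated annotations are invented.                                */
unsigned long __cost = 0;

/*@ requires n >= 0;
  @ ensures __cost <= \old(__cost) + 5 + 17 * n;
  @*/
void zero(int a[], int n) {
  int i;
  __cost += 5;                     /* cost of entering the function  */
  /*@ loop invariant 0 <= i <= n;
    @ loop invariant __cost <= \at(__cost, Pre) + 5 + 17 * i;
    @ loop variant n - i;
    @*/
  for (i = 0; i < n; i++) {
    __cost += 17;                  /* constant cost of the loop body */
    a[i] = 0;
  }
}
\end{verbatim}

The \texttt{loop variant} bounds the number of remaining iterations,
so the post-condition on \texttt{\_\_cost} follows by the accumulation
argument described above.
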
The user can influence the annotation by two different means: 1) by
using more precise variants; 2) by annotating functions with cost
specifications, in which case the plugin uses the given cost for the
function instead of computing it.

\paragraph{C programs with pointers.}
When it comes to verifying programs involving pointer-based data
structures, such as linked lists, trees, or graphs, the use of
traditional first-order logic to specify, and of SMT solvers to
verify, shows some limitations. Separation logic~\cite{separation} is
an elegant alternative. It is a program logic with a new notion of
conjunction to express spatial heap separation. Bobot has recently
introduced automatically generated separation predicates that simulate
separation logic reasoning in the Jessie plugin, where the
specification language, the verification condition generator, and the
theorem provers were not designed with separation logic in mind.
CerCo's Cost plugin can exploit the separation predicates to
automatically reason on the cost of execution of simple heap
manipulation programs such as an in-place list reversal.

\section{Conclusions and future work}