Changeset 3328 for Papers/fopara2013
- Timestamp: Jun 6, 2013, 5:05:50 PM
- File: Papers/fopara2013/fopara13.tex (1 edited)
Papers/fopara2013/fopara13.tex
r3327 → r3328:

@@ preamble
 \usepackage{fancyvrb}
 \usepackage{graphicx}
-\usepackage[colorlinks]{hyperref}
+\usepackage[colorlinks,
+            bookmarks,bookmarksopen,bookmarksdepth=2]{hyperref}
 \usepackage{hyphenat}
 \usepackage[utf8x]{inputenc}
…
@@ Contributions
 components.

-\paragraph{Contributions.} We have developed the labelling approach \cite{?}, a
+\paragraph{Contributions.} We have developed the labeling approach \cite{labeling}, a
 technique to implement compilers that induce cost models on source programs by
 very lightweight tracking of code changes through compilation. We have studied
…
 implemented a Frama-C plugin \cite{framac} that invokes the compiler on a source
 program and uses this to generate invariants on the high-level source that
-correctly model low-level costs. Finally, the plug-in certifies that the program
+correctly model low-level costs. Finally, the plugin certifies that the program
 respects these costs by calling automated theorem provers, a new and innovative
 technique in the field of cost analysis. As a case study, we show how the
-plug-in can automatically compute and certify the exact reaction time of Lustre
+plugin can automatically compute and certify the exact reaction time of Lustre
 \cite{lustre} dataflow programs compiled into C.
…
@@ complexity assertions
 compiler. The annotated program can then be enriched with complexity assertions
 in the style of Hoare logic, that are passed to a deductive platform (in our
-case Frama-C). We provide as a Frama-c cost plug-in a simple automatic
+case Frama-C). We provide as a Frama-c cost plugin a simple automatic
 synthesiser for complexity assertions (the blue lines in \autoref{itest1}),
 which can be overridden by the user to increase or decrease accuracy.
…
\end{figure}

\section{Main S\&T results of the CerCo project}
We will now review the main S\&T results achieved in the CerCo project.

% \emph{Dependent labeling~\cite{?}.} The basic labeling approach assumes a
% bijective mapping between object code and source code O(1) blocks (called
% basic blocks). This assumption is violated by many program optimizations
% (e.g. loop peeling and loop unrolling). It also assumes the cost model
% computed on the object code to be non parametric: every block must be
% assigned a cost that does not depend on the state. This assumption is
% violated by stateful hardware like pipelines, caches and branch prediction
% units. The dependent labeling approach is an extension of the basic labeling
% approach that allows parametric cost models to be handled. We showed how the
% method deals with loop optimizations and pipelines, and we speculated about
% its applications to caches.
%
% \emph{Techniques to exploit the induced cost model.} Every technique used
% for the analysis of functional properties of programs can be adapted to
% analyse the non-functional properties of the source code instrumented by
% compilers that implement the labeling approach. In order to gain confidence
% in this claim, we showed how to implement a cost invariant generator
% combining abstract interpretation with separation logic ideas
% \cite{separation}. We integrated everything in the Frama-C modular
% architecture, in order to compute proof obligations from functional and cost
% invariants and to use automatic theorem provers on them. This is an example
% of a new technique that is not currently exploited in WCET analysis.
% It also shows how precise functional invariants benefit the non-functional
% analysis too. Finally, we show how to fully automatically analyse the
% reaction time of Lustre nodes that are first compiled to C using a standard
% Lustre compiler and then processed by a C compiler that implements the
% labeling approach.
%
% \emph{The CerCo compiler.} This is a compiler from a large subset of C to
% 8051/8052 object code, integrating the labeling approach and a static
% analyser for 8051 executables. The latter can be implemented easily and does
% not require dependent costs because the 8051 microprocessor is a very simple
% one, with constant-cost instructions. It was chosen to separate the issue of
% exact propagation of the cost model from the orthogonal problem of the
% static analysis of object code that may require approximations or dependent
% costs. The compiler comes in several versions: some are prototypes
% implemented directly in OCaml, and they implement both the basic and
% dependent labeling approaches; the final version is extracted from a Matita
% certification and at the moment implements only the basic approach.

\subsection{The (basic) labeling approach}
The labeling approach is the main technique that underlies all the
developments in CerCo. It allows us to track the evolution of basic blocks
during compilation in order to propagate the cost model from the object code
to the source code without losing precision in the process.
\paragraph{Problem statement.} Given a source program $P$, we want to obtain
an instrumented source program $P'$, written in the same programming language,
and the object code $O$ such that: 1) $P'$ is obtained by inserting into $P$
some additional instructions to update global cost information like the amount
of time spent during execution or the maximal stack space required; 2) $P$ and
$P'$ must have the same functional behavior, i.e.\ they must produce the same
output and intermediate observables; 3) $P$ and $O$ must have the same
functional behavior; 4) after execution, and at interesting points during
execution, the cost information computed by $P'$ must be an upper bound on the
cost spent by $O$ to perform the corresponding operations (soundness
property); 5) the difference between the costs computed by $P'$ and the
execution costs of $O$ must be bounded by a program-dependent constant
(precision property).

\paragraph{The labeling software components.} We solve the problem in four
stages \cite{labeling}, implemented by four software components that are used
in sequence.

The first component labels the source program $P$ by injecting label emission
statements in appropriate positions to mark the beginning of basic blocks.
% The set of labels with their positions is called a labeling.
The syntax and semantics of the source programming language are augmented with
label emission statements. The statement ``EMIT $\ell$'' behaves like a NOP
instruction that does not affect the program state or control flow, but its
execution is observable.
% Therefore the observables of a run of a program become a stream of label
% emissions $\ell_1,\ldots,\ell_n$, called the program trace.
% We clarify the conditions that the labeling must respect later.

The second component is a labeling-preserving compiler. It can be obtained
from an existing compiler by adding label emission statements to every
intermediate language and by propagating label emission statements during
compilation. The compiler is correct if it preserves both the functional
behavior of the program and the traces of observables.
% We may also ask that the function that erases the cost emission statements
% commute with compilation. This optional property grants that the labeling
% does not interfere with the original compiler behavior. A further set of
% requirements will be added later.

The third component is a static object code analyser. It takes as input a
labeled object code and computes the scope of each of its label emission
statements, i.e.\ the tree of instructions that may be executed after the
statement and before a new label emission is encountered. It is a tree and not
a sequence because the scope may contain a branching statement. In order to
guarantee that such a finite tree exists, the object code must not contain any
loop that is not broken by a label emission statement. This is the first
requirement of a sound labeling. The analyser fails if the labeling is
unsound. For each scope, the analyser computes an upper bound on the execution
time required by the scope, using the maximum of the costs of the two branches
in case of a conditional statement. Finally, the analyser computes the cost of
a label by taking the maximum of the costs of the scopes of every statement
that emits that label.

The fourth and last component takes as input the cost model computed at step 3
and the labeled code computed at step 1.
It outputs a source program obtained by replacing each label emission
statement with a statement that increments the global cost variable by the
cost associated to the label by the cost model. The obtained source code is
the instrumented source code.

\paragraph{Correctness.} Requirements 1, 2 and 3 of the problem statement
obviously hold, with 2 and 3 being a consequence of the definition of a
correct labeling-preserving compiler. It is also obvious that the value of the
global cost variable of an instrumented source code is at any time equal to
the sum of the costs of the labels emitted by the corresponding labeled code.
Moreover, because the compiler preserves all traces, the sums of the costs of
the labels emitted in the source and target labeled code are the same.
Therefore, to satisfy the fourth requirement, we need to guarantee that the
time taken to execute the object code is equal to the sum of the costs of the
labels emitted by the object code. We collect all the necessary conditions for
this to happen in the definition of a sound labeling: a) all loops must be
broken by a cost emission statement; b) all program instructions must be in
the scope of some cost emission statement. To satisfy also the fifth
requirement, additional requirements must be imposed on the object code
labeling to avoid all uses of the maximum in the cost computation: the
labeling is precise if every label is emitted at most once and both branches
of a conditional jump start with a label emission statement.

The correctness and precision of the labeling approach rely only on the
correctness and precision of the object code labeling. The simplest way to
achieve them is to impose the correctness and precision requirements also on
the initial labeling produced by the first software component, and to ask the
compiler to preserve these properties too.
The latter requirement imposes serious limitations on the compilation strategy
and optimizations: the compiler may not duplicate any code that contains label
emission statements, like loop bodies. Therefore several loop optimizations
like peeling or unrolling are prevented. Moreover, precision of the object
code labeling is not sufficient per se to obtain global precision: we also
implicitly assumed that the static analysis is able to associate a precise
constant cost to every instruction. This is not possible in the presence of
stateful hardware whose state influences the cost of operations, like
pipelines and caches. In the next subsection we will see an extension of the
basic labeling approach that covers this situation.

The labeling approach described in this section can be applied equally well,
and with minor modifications, to imperative and functional languages
\cite{labeling2}. We have tested it on a simple imperative language without
functions (a While language), on a subset of C and on two compilation chains
for a purely functional higher-order language. The two main changes required
to deal with functional languages are: 1) because global variables and updates
are not available, the instrumentation phase produces monadic code to
``update'' the global costs; 2) the requirements for a sound and precise
labeling of the source code must be changed when the compilation is based on
CPS translations. In particular, we need to introduce both labels emitted
before a statement is executed and labels emitted after a statement is
executed. The latter capture code that is inserted by the CPS translation and
that would escape all label scopes.

Phases 1, 2 and 3 can be applied as well to logic languages (e.g. Prolog).
However, the instrumentation phase cannot: in standard Prolog there is no
notion of a (global) variable whose state is not retracted during
backtracking.
Therefore, the cost of executing computations that are later backtracked would
not be correctly counted. Any extension of logic languages with
non-backtrackable state should support the labeling approach.

\subsection{Dependent labeling}
The core idea of the basic labeling approach is to establish a tight
connection between basic blocks executed in the source and target languages.
Once the connection is established, any cost model computed on the object code
can be transferred to the source code, without affecting the code of the
compiler or its proof. In particular, it is immediate that we can also
transport cost models that associate to each label a function from hardware
states to natural numbers. However, a problem arises during the
instrumentation phase that replaces cost emission statements with increments
of global cost variables. The global cost variable must be incremented with
the result of applying the function associated to the label to the hardware
state at the time of execution of the block. The hardware state comprises both
the functional state that affects the computation (the values of the registers
and memory) and the non-functional state that does not (the pipeline and cache
contents, for example). The former is in correspondence with the source code
state, but reconstructing the correspondence may be hard, and lifting the cost
model to work on the source code state is likely to produce cost expressions
that are too hard to reason about. Luckily enough, in all modern architectures
the cost of executing single instructions is either independent of the
functional state or the jitter---the difference between the worst and best
case execution times---is small enough to be bounded without losing too much
precision. Therefore we can concentrate on dependencies over the
``non-functional'' parts of the state only.
The non-functional state has no correspondence in the high-level state and
does not influence the functional properties. What can be done is to expose
the non-functional state in the source code. We just present here the basic
intuition in a simplified form: the technical details that allow us to handle
the general case are more complex and can be found in the CerCo deliverables.
We add to the source code an additional global variable that represents the
non-functional state and another one that remembers the last labels emitted.
The state variable must be updated at every label emission statement, using an
update function which is also computed during static analysis. The update
function associates to each label a function from the recently emitted labels
and the old state to the new state. It is computed by composing the semantics
of every instruction in a basic block and restricting it to the non-functional
part of the state.

Not all the details of the non-functional state need to be exposed, and the
technique works better when the part of the state that is required can be
summarized in a simple data structure. For example, to handle simple but
realistic pipelines it is sufficient to remember a short integer that encodes
the position of bubbles (stuck instructions) in the pipeline. In any case, the
user does not need to understand the meaning of the state to reason about the
properties of the program. Moreover, at any moment the user, or the invariant
generator tools that analyse the instrumented source code produced by the
compiler, can decide to trade precision of the analysis for simplicity by
approximating the parametric cost with safe non-parametric bounds.
Interestingly, the functional analysis of the code can determine which blocks
are executed more frequently in order to approximate more aggressively the
ones that are executed less.
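The interplay just described between a parametric cost model and its safe
non-parametric approximation can be mimicked in a few lines. The sketch below
is purely illustrative: the one-bit ``bubble'' state, the per-label cost
functions and all the numbers are invented, not taken from the CerCo cost
models.

```python
# Toy sketch of dependent labeling.  The non-functional state is a single
# flag: does the simulated pipeline contain a bubble?  Each label carries a
# cost *function* of that state plus an update function (here, simply the
# state left behind by the block); a safe non-parametric bound is recovered
# by taking the maximum of each label's cost over all states.

# label -> (cost as a function of the state, state after the block)
COST = {
    "L1": (lambda bubble: 5 if bubble else 4, False),  # L1 drains the bubble
    "L2": (lambda bubble: 8 if bubble else 6, True),   # L2 leaves a bubble
}

def parametric_cost(trace, bubble=False):
    """Exact accumulated cost of a label trace, threading the state."""
    total = 0
    for label in trace:
        cost_fn, next_state = COST[label]
        total += cost_fn(bubble)
        bubble = next_state
    return total

def approximate_cost(trace):
    """Safe non-parametric bound: worst case of each label over all states."""
    return sum(max(COST[l][0](s) for s in (False, True)) for l in trace)

trace = ["L1", "L2", "L1", "L2"]
assert approximate_cost(trace) >= parametric_cost(trace)  # safe, less precise
```

The parametric cost here is exact but state-dependent; the approximation
forgets the state and is therefore simpler to reason about, at the price of
precision, exactly the trade-off described above.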
Dependent labeling can also be applied to allow the compiler to duplicate
blocks that contain labels (e.g. in loop optimizations) \cite{paolo}. The
effect is to assign a different cost to the different occurrences of a
duplicated label. For example, loop peeling turns a loop into the
concatenation of a copy of the loop body (that executes the first iteration)
with the conditional execution of the loop (for the successive iterations).
Because of further optimizations, the two copies of the loop will be compiled
differently, with the first body usually taking more time.

By introducing a variable that keeps track of the iteration number, we can
associate to the label a cost that is a function of the iteration number. The
same technique works for loop unrolling without modifications: the function
will assign one cost to the even iterations and another cost to the odd ones.
The actual work to be done consists in introducing into the source code, for
each loop, a variable that counts the number of iterations. The loop
optimization code that duplicates the loop bodies must also modify the code to
correctly propagate the update of the iteration numbers. The technical details
are more complex and can be found in the CerCo reports and publications. The
implementation, however, is quite simple and the changes to a loop optimizing
compiler are minimal.

\subsection{Techniques to exploit the induced cost model}
We review the cost synthesis techniques developed in the project. The starting
hypothesis is that we have a certified methodology to annotate blocks in the
source code with constants which provide a sound and possibly precise upper
bound on the cost of executing the blocks after compilation to object code.
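This hypothesis can be made concrete with a toy simulation of the basic
labeling pipeline. Everything below is hypothetical (the label names, the
per-block cycle counts, the `__cost`-style counter); it only illustrates the
soundness and precision properties stated in the problem statement, not the
actual CerCo tools.

```python
# Toy simulation of the labeling approach.  A static analysis of the "object
# code" assigns each label the constant cost of its basic block; the
# instrumented source then accumulates a global cost counter that must
# soundly (and here exactly) bound the object-code execution time.

# Per-label cost model, as the static analyser would compute it.
COST_MODEL = {"L1": 3, "L2": 7, "L3": 5}

def run_object_code(n):
    """Simulated object-code run: returns (result, cycles actually spent)."""
    cycles = total = 0
    cycles += 3                 # block L1: initialisation
    for i in range(n):
        cycles += 7             # block L2: loop body
        total += i
    cycles += 5                 # block L3: epilogue
    return total, cycles

def run_instrumented_source(n):
    """Instrumented source: EMIT statements became cost increments."""
    cost = total = 0
    cost += COST_MODEL["L1"]
    for i in range(n):
        cost += COST_MODEL["L2"]
        total += i
    cost += COST_MODEL["L3"]
    return total, cost

for n in (0, 1, 10):
    res_o, cycles = run_object_code(n)
    res_s, bound = run_instrumented_source(n)
    assert res_o == res_s       # same functional behaviour
    assert bound >= cycles      # soundness: the bound covers the actual cost
    assert bound == cycles      # precision: here the cost model is exact
```

Because every block is labeled and the model assigns each label its exact
cycle count, the instrumented counter coincides with the simulated execution
time; with a coarser model the counter would still be a sound upper bound.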
The principle that we have followed in designing the cost synthesis tools is
that the synthetic bounds should be expressed and proved within a general
purpose tool built to reason on the source code. In particular, we rely on the
Frama-C tool to reason on C code and on the Coq proof assistant to reason on
higher-order functional programs.

This principle entails that: 1) the inferred synthetic bounds are indeed
correct as long as the general purpose tool is; 2) there is no limitation on
the class of programs that can be handled, as long as the user is willing to
carry out an interactive proof.

Of course, automation is desirable whenever possible. Within this framework,
automation means writing programs that give hints to the general purpose
tool. These hints may take the form, say, of loop invariants/variants, of
predicates describing the structure of the heap, or of types in a light logic.
If these hints are correct and sufficiently precise, the general purpose tool
will produce a proof automatically; otherwise, user interaction is required.

\paragraph{The Cost plugin and its application to the Lustre compiler.}
Frama-C \cite{framac} is a set of analysers for C programs with a
specification language called ACSL. New analyses can be dynamically added
through a plugin system. For instance, the Jessie plugin allows deductive
verification of C programs with respect to their specification in ACSL, with
various provers as back-end tools. We developed the CerCo Cost plugin for the
Frama-C platform as a proof of concept of an automatic environment exploiting
the cost annotations produced by the CerCo compiler.
It consists of an OCaml program which, to a first approximation, takes the
following actions: 1) it receives as input a C program; 2) it applies the
CerCo compiler to produce a related C program with cost annotations; 3) it
applies some heuristics to produce a tentative bound on the cost of executing
the C functions of the program as a function of the values of their
parameters; 4) the user can then call the Jessie plugin to discharge the
related proof obligations. In the following we elaborate on the soundness of
the framework and the experiments we performed with the Cost tool on the C
programs produced by a Lustre compiler.

\paragraph{Soundness.} The soundness of the whole framework depends on the
cost annotations added by the CerCo compiler, the synthetic costs produced by
the Cost plugin, the verification conditions (VCs) generated by Jessie, and
the external provers discharging the VCs. The synthetic costs being in ACSL
format, Jessie can be used to verify them. Thus, even if the added synthetic
costs are incorrect (relative to the cost annotations), the process as a whole
is still correct: indeed, Jessie will not validate incorrect costs and no
conclusion can be drawn about the WCET of the program in this case. In other
terms, the soundness does not really depend on the action of the Cost plugin,
which can in principle produce any synthetic cost. However, in order to be
able to actually prove a WCET of a C function, we need to add correct
annotations in such a way that Jessie and the subsequent automatic provers
have enough information to deduce their validity. In practice this is not
straightforward even for very simple programs composed of branching and
assignments (no loops and no recursion), because a fine analysis of the VCs
associated with branching may lead to a complexity blow-up.
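The soundness argument above, namely that a buggy synthesiser cannot
compromise the framework because every proposed bound must still be verified,
can be mimicked with a toy checker standing in for Jessie and the provers. All
names and cost figures are invented for the sketch.

```python
# The Cost plugin only *proposes* a synthetic bound; a verifier must validate
# it against the compiler-generated cost annotations before any WCET claim.
# Here the "annotations" are the increments of a simulated cost variable and
# the "verification condition" is a single inequality.

def step_costs(n):
    """Cost increments emitted by the (simulated) annotated program."""
    return [3] + [7] * n + [5]

def check_bound(n, claimed_bound):
    """Toy verification condition: does the claim cover the real cost?"""
    return sum(step_costs(n)) <= claimed_bound

# A correct synthetic bound is validated...
assert check_bound(10, 3 + 7 * 10 + 5)
# ...while an incorrect one is simply rejected: no WCET is concluded, so a
# buggy synthesiser cannot make the framework unsound.
assert not check_bound(10, 50)
```

The design choice this illustrates: the untrusted component (the synthesiser)
sits outside the trusted base, which consists only of the annotations and the
verifier.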
\paragraph{Experience with Lustre.} Lustre is a data-flow language for
programming synchronous systems, and the language comes with a compiler to C.
We designed a wrapper for supporting Lustre files. The C function produced by
the compiler implements the step function of the synchronous system, and
computing the WCET of that function amounts to obtaining a bound on the
reaction time of the system. We tested the Cost plugin and the Lustre wrapper
on the C programs generated by the Lustre compiler. For programs consisting of
a few hundred lines of code, the Cost plugin computes a WCET and Alt-Ergo is
able to discharge all VCs automatically.

\paragraph{Handling C programs with simple loops.}
The cost annotations added by the CerCo compiler take the form of C
instructions that update a fresh global variable, called the cost variable, by
a constant. Synthesizing a WCET of a C function thus consists in statically
resolving an upper bound on the difference between the value of the cost
variable before and after the execution of the function, i.e.\ finding in the
function the instructions that update the cost variable and establishing the
number of times they are passed through during the flow of execution. In order
to perform the analysis, the plugin makes the following assumptions on the
programs:
1. No recursive functions.
2. Every loop must be annotated with a variant. The variants of `for' loops
are automatically inferred.

The plugin proceeds as follows. First the call graph of the program is
computed. Then the cost of each function is computed by traversing its control
flow graph. If the function $f$ calls the function $g$, then the function $g$
is processed before the function $f$. The cost at a node is the maximum of the
costs of the successors.
In the case of a loop with a body that has a constant cost for every step of
the loop, the cost is the product of the cost of the body and of the variant
taken at the start of the loop. In the case of a loop with a body whose cost
depends on the values of some free variables, a fresh logic function $f$ is
introduced to represent the cost of the loop in the logic assertions. This
logic function takes the variant as its first parameter. The other parameters
of $f$ are the free variables of the body of the loop. An axiom is added to
account for the fact that the cost is accumulated at each step of the loop.
The cost of the function is directly added as a post-condition of the
function.

The user can influence the annotation by different means: by using more
precise variants, or by annotating functions with a cost specification. The
plugin will then use this cost for the function instead of computing it.

\paragraph{C programs with pointers.}
When it comes to verifying programs involving pointer-based data structures,
such as linked lists, trees, or graphs, the use of traditional first-order
logic to specify, and of SMT solvers to verify, shows some limitations.
Separation logic \cite{separation} is then an elegant alternative. It is a
program logic with a new notion of conjunction to express spatial heap
separation. Bobot has recently introduced into the Jessie plugin automatically
generated separation predicates that simulate separation logic reasoning in a
traditional verification framework where the specification language, the
verification condition generator, and the theorem provers were not designed
with separation logic in mind. CerCo's plugin can exploit the separation
predicates to automatically reason about the cost of execution of simple heap
manipulation programs such as an in-place list reversal.
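The two synthesis rules described above for loops and branches, multiplying a
constant body cost by the loop variant and taking the maximum over the
branches of a conditional, can be sketched as follows. The program, the
variant and all constants are illustrative only.

```python
# Toy version of the synthesis rules used for simple loops: for a loop whose
# body has constant cost, the bound is body cost * variant at loop entry;
# for a conditional, a sound bound takes the worse of the two branches.

def loop_bound(body_cost, variant):
    """Constant-body loop: bound = cost of the body * variant at entry."""
    return body_cost * variant

def branch_bound(then_cost, else_cost):
    """Conditional: the maximum of the branch costs is a sound bound."""
    return max(then_cost, else_cost)

def run(n, flag):
    """The program being bounded: a loop around a conditional; returns the
    accumulated value of the simulated cost variable."""
    cost = 0
    for _ in range(n):
        cost += 4 if flag else 2    # two branches with different costs
    return cost

n = 8
bound = loop_bound(branch_bound(4, 2), n)   # 4 is the worse branch
assert run(n, True) <= bound and run(n, False) <= bound
```

Note how the `max` over branches is exactly the point where precision is lost
when the cheaper branch is taken, which is why a precise labeling forbids
relying on it.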
\subsection{The CerCo Compiler}
In CerCo we have developed a number of cost-preserving compilers based on the
labeling approach. Excluding an initial certified compiler for a While
language, all the remaining compilers target realistic source languages---a
pure higher-order functional language and a large subset of C with pointers,
gotos and all data structures---and real-world target processors---MIPS and
the Intel 8051/8052 processor family. Moreover, they achieve a level of
optimization that ranges from moderate (comparable to gcc level 1) to
intermediate (including loop peeling and unrolling, hoisting and late constant
propagation). The so-called Trusted CerCo Compiler is the only one that was
implemented in the interactive theorem prover Matita~\cite{matita} and whose
costs are certified. The code distributed is obtained by extracting OCaml code
from the Matita implementation. In the rest of this section we will focus only
on the Trusted CerCo Compiler, which targets only the C language and the
8051/8052 family, and which does not yet implement the advanced optimizations.
Its user interface, however, is the same as that of the other versions, in
order to trade safety for additional performance. In particular, the Frama-C
CerCo plugin can work without recompilation with all compilers.

The 8051/8052 microprocessor is a very simple one, with constant-cost
instructions. It was chosen to separate the issue of exact propagation of the
cost model from the orthogonal problem of the static analysis of object code
that may require approximations or dependent costs.

The (trusted) CerCo compiler implements the following optimizations: cast
simplification, constant propagation in expressions, liveness-analysis-driven
spilling of registers, dead code elimination, branch displacement, and
tunneling. The latter two optimizations are performed by our optimizing
assembler \cite{correctness}. The back-end of the compiler works on
three-address instructions, preferred to static single assignment code for the
simplicity of the formal certification.

The CerCo compiler is loosely based on the CompCert compiler \cite{compcert},
a recently developed certified compiler from C to the PowerPC, ARM and x86
microprocessors. Contrary to CompCert, both the CerCo code and its
certification are open source. Some data structures and language definitions
for the front-end are directly taken from CompCert, while the back-end is a
redesign of a compiler from Pascal to MIPS used by Francois Pottier for a
course at the Ecole Polytechnique.

The main peculiarities of the CerCo compiler are:
\begin{enumerate}
\item all of our intermediate languages include label-emitting instructions to
implement the labeling approach, and the compiler preserves execution traces;
\item compilers traditionally target an assembly language with additional
macro-instructions to be expanded before assembly; for CerCo we need to go all
the way down to object code in order to perform the static analysis. Therefore
we also integrated an optimizing assembler and a static analyser;
\item to avoid implementing a linker and a loader, we do not support separate
compilation and external calls. Adding a linker and a loader is a process
transparent to the labeling approach and should create no unknown problems;
\item we target an 8-bit processor. Targeting an 8-bit processor requires
several changes and increases code size, but it is not fundamentally more
complex. The proof of correctness, however, becomes much harder.
\item we target a microprocessor that has a non-uniform memory model, which is
still often the case for microprocessors used in embedded systems and is
becoming common again in multi-core processors. Therefore the compiler has to
keep track of data and must move data between memory regions appropriately.
Moreover, the size of pointers to different regions is not uniform. An
additional difficulty was that the space available for the stack in the internal
memory of the 8051 is tiny, allowing only a small number of nested calls. To
support full recursion, in order to test the CerCo tools also on recursive
programs, the compiler manually manages a stack in external memory.
\end{enumerate}
\subsection{The (basic) labeling approach.}
\paragraph{Problem statement.} Given a source program $P$, we want to obtain an
instrumented source program $P'$, written in the same programming language, and
the object code $O$ such that: 1) $P'$ is obtained by inserting into $P$ some
additional instructions to update global cost information like the amount of
time spent during execution or the maximal stack space required; 2) $P$ and $P'$
must have the same functional behavior, i.e., they must produce the same output
and intermediate observables; 3) $P$ and $O$ must have the same functional
behavior; 4) after execution, and at interesting points during execution, the
cost information computed by $P'$ must be an upper bound of that spent by $O$
to perform the corresponding operations (soundness property); 5) the difference
between the costs computed by $P'$ and the execution costs of $O$ must be
bounded by a program-dependent constant (precision property).

\paragraph{The labeling software components.} We solve the problem in four
stages \cite{labeling}, implemented by four software components that are used
in sequence.

The first component labels the source program $P$ by injecting label emission
statements in appropriate positions to mark the beginning of basic blocks. The
set of labels with their positions is called a labeling. The syntax and
semantics of the source programming language are augmented with label emission
statements. The statement ``EMIT $\ell$'' behaves like a NOP instruction that
does not affect the program state or control flow, but it changes the semantics
by making the label $\ell$ observable. Therefore the observables of a run of a
program become a stream of label emissions $\ell_1, \ldots, \ell_n$, called the
program trace. We clarify the conditions that the labeling must respect later.

The second component is a labeling preserving compiler.
It can be obtained from
an existing compiler by adding label emission statements to every intermediate
language and by propagating label emission statements during compilation. The
compiler is correct if it preserves both the functional behavior of the program
and the generated traces. We may also ask that the function that erases the cost
emission statements commute with compilation. This optional property grants that
the labeling does not interfere with the original compiler behavior. A further
set of requirements will be added later.

The third component is a static object code analyser. It takes as input the
object code augmented with label emission statements and computes, for every
such statement, its scope. The scope of a label emission statement is the tree
of instructions that may be executed after the statement and before a new label
emission statement is found. It is a tree and not a sequence because the scope
may contain a branching statement. In order to grant that such a finite tree
exists, the object code must not contain any loop that is not broken by a label
emission statement. This is the first requirement of a sound labeling. The
analyser fails if the labeling is unsound. For each scope, the analyser
computes an upper bound of the execution time required by the scope, using the
maximum of the costs of the two branches in case of a conditional statement.
Finally, the analyser computes the cost of a label by taking the maximum of the
costs of the scopes of every statement that emits that label.

The fourth and last component takes as input the cost model computed at step 3
and the labeled code computed at step 1. It outputs a source program obtained
by replacing each label emission statement with a statement that increments the
global cost variable with the cost associated to the label by the cost model.
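As an illustration of phases 1 and 4, consider the following sketch. The variable name \texttt{\_\_cost} and the cycle constants are invented for the example; the real constants are those computed by the object code analyser of phase 3.

```c
/* Illustrative instrumentation (phases 1 and 4): the name __cost and
   the cycle constants are invented; real constants come from the
   object code analysis of phase 3. */
unsigned long __cost = 0;

/* Source after labeling (phase 1):
     int fact(int n) {
       EMIT(L1); int r = 1;
       while (n > 1) { EMIT(L2); r = r * n--; }
       EMIT(L3); return r;
     }
   Instrumented source (phase 4): each EMIT becomes an increment of
   __cost by the cost assigned to that label.  Note that label L2
   breaks the loop, as a sound labeling requires. */
int fact(int n) {
    __cost += 4;               /* cost of block L1 (invented)              */
    int r = 1;
    while (n > 1) {
        __cost += 11;          /* cost of block L2, paid at each iteration */
        r = r * n--;
    }
    __cost += 3;               /* cost of block L3 (invented)              */
    return r;
}
```

After \texttt{fact(5)}, the cost variable has increased by $4 + 4\cdot 11 + 3$: each of the four loop iterations pays for block L2 once.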
The obtained source code is the instrumented source code.

\paragraph{Correctness.} Requirements 1, 2 and 3 of the problem statement
obviously hold, with 2 and 3 being a consequence of the definition of a correct
labeling preserving compiler. It is also obvious that the value of the global
cost variable of an instrumented source code is at any time equal to the sum of
the costs of the labels emitted by the corresponding labeled code. Moreover,
because the compiler preserves all traces, the sums of the costs of the labels
emitted in the source and target labeled code are the same. Therefore, to
satisfy the fourth requirement, we need to grant that the time taken to execute
the object code is equal to the sum of the costs of the labels emitted by the
object code. We collect all the necessary conditions for this to happen in the
definition of a sound labeling: a) all loops must be broken by a cost emission
statement; b) all program instructions must be in the scope of some cost
emission statement. To satisfy also the fifth requirement, additional
requirements must be imposed on the object code labeling to avoid all uses of
the maximum in the cost computation: the labeling is precise if every label is
emitted at most once and both branches of a conditional jump start with a label
emission statement.

The correctness and precision of the labeling approach rely only on the
correctness and precision of the object code labeling. The simplest, but not
necessary, way to achieve them is to impose correctness and precision
requirements also on the initial labeling produced by the first software
component, and to ask the labeling preserving compiler to preserve these
properties too.
The latter requirement imposes serious limitations on the
compilation strategy and optimizations: the compiler may not duplicate any code
that contains label emission statements, such as loop bodies. Therefore several
loop optimizations like peeling or unrolling are prevented. Moreover, precision
of the object code labeling is not sufficient per se to obtain global
precision: we also implicitly assumed the static analysis to be able to
associate a precise constant cost to every instruction. This is not possible in
the presence of stateful hardware whose state influences the cost of operations,
like pipelines and caches. In the next section we will see an extension of the
basic labeling approach that covers this situation.

The labeling approach described in this section can be applied equally well and
with minor modifications to imperative and functional languages
\cite{labeling2}. We have tested it on a simple imperative language without
functions (a While language), on a subset of C and on two compilation chains for
a purely functional higher order language. The two main changes required to deal
with functional languages are: 1) because global variables and updates are not
available, the instrumentation phase produces monadic code to ``update'' the
global costs; 2) the requirements for a sound and precise labeling of the
source code must be changed when the compilation is based on CPS translations.
In particular, we need to introduce both labels emitted before a statement is
executed and labels emitted after a statement is executed. The latter capture
code that is inserted by the CPS translation and that would escape all label
scopes.

Phases 1, 2 and 3 can be applied as well to logic languages (e.g. Prolog).
However, the instrumentation phase cannot: in standard Prolog there is no notion
of (global) variable whose state is not retracted during backtracking.
Therefore, the cost of executing computations that are later backtracked would
not be counted correctly. Any extension of logic languages with
non-backtrackable state should support the labeling approach.

\subsection{Dependent labeling.}
The core idea of the basic labeling approach is to establish a tight connection
between basic blocks executed in the source and target languages. Once the
connection is established, any cost model computed on the object code can be
transferred to the source code, without affecting the code of the compiler or
its proof. In particular, it is immediate that we can also transport cost models
that associate to each label a function from hardware states to natural numbers.
However, a problem arises during the instrumentation phase that replaces cost
emission statements with increments of global cost variables. The global cost
variable must be incremented with the result of applying the function associated
to the label to the hardware state at the time of execution of the block.
The hardware state comprises both the ``functional'' state that affects the
computation (the values of the registers and memory) and the ``non-functional''
state that does not (the pipeline and cache contents, for example). The former is
in correspondence with the source code state, but reconstructing the
correspondence may be hard, and lifting the cost model to work on the source
code state is likely to produce cost expressions that are too hard to reason
about. Luckily enough, in all modern architectures the cost of executing single
instructions is either independent of the functional state or the jitter --- the
difference between the worst and best case execution times --- is small enough
to be bounded without losing too much precision. Therefore we can concentrate
on dependencies over the ``non-functional'' parts of the state only.
The non-functional state has no correspondence in the high level state and does
not influence the functional properties. What can be done is to expose the
non-functional state in the source code. We present here only the basic
intuition in a simplified form: the technical details that allow handling the
general case are more complex and can be found in the CerCo deliverables. We add
to the source code an additional global variable that represents the
non-functional state and another one that remembers the last labels emitted. The
state variable must be updated at every label emission statement, using an
update function which is computed during static analysis too. The update
function associates to each label a function from the recently emitted labels
and the old state to the new state. It is computed by composing the semantics of
every instruction in a basic block and restricting it to the non-functional part
of the state.

Not all the details of the non-functional state need to be exposed, and the
technique works better when the part of the state that is required can be
summarized in a simple data structure. For example, to handle simple but
realistic pipelines it is sufficient to remember a short integer that encodes
the position of bubbles (stuck instructions) in the pipeline. In any case, the
user does not need to understand the meaning of the state to reason about the
properties of the program. Moreover, at any moment the user, or the invariant
generator tools that analyse the instrumented source code produced by the
compiler, can decide to trade precision of the analysis for simplicity by
approximating the parametric cost with safe non-parametric bounds.
Interestingly, the functional analysis of the code can determine which blocks
are executed more frequently in order to approximate more aggressively the ones
that are executed less.
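For instance, a dependent cost model for a hypothetical pipeline summarized by a single integer could be exposed in the instrumented source as follows. The state encoding and the update and cost functions are entirely invented for illustration:

```c
/* Invented sketch of dependent labeling: the non-functional state is
   summarized by one integer (number of pipeline bubbles), exposed as
   a global variable and updated at each label emission point by a
   function produced by the static analysis. */
unsigned long __cost = 0;
int __pipe = 0;                      /* abstract pipeline state (invented)  */

/* Hypothetical update/cost functions attached to label L1. */
static int update_L1(int pipe) { return pipe > 0 ? pipe - 1 : 2; }
static int cost_L1(int pipe)   { return 10 + 3 * pipe; }

void block_L1(void) {
    __cost += cost_L1(__pipe);       /* the cost now depends on the state   */
    __pipe  = update_L1(__pipe);     /* state update computed statically    */
    /* ... functional code of the block ... */
}
```

A user or an invariant generator may later replace \texttt{cost\_L1(\_\_pipe)} by its worst case over all reachable states, trading precision for simplicity as described above.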
Dependent labeling can also be applied to allow the compiler to duplicate
blocks that contain labels (e.g. in loop optimizations). The effect is to assign
a different cost to the different occurrences of a duplicated label. For
example, loop peeling turns a loop into the concatenation of a copy of the loop
body (that executes the first iteration) with the conditional execution of the
loop (for the successive iterations). Because of further optimizations, the two
copies of the loop will be compiled differently, with the first body usually
taking more time.

By introducing a variable that keeps track of the iteration number, we can
associate to the label a cost that is a function of the iteration number. The
same technique works for loop unrolling without modifications: the function will
assign one cost to the even iterations and another cost to the odd ones. The
actual work to be done consists in introducing into the source code, for each
loop, a variable that counts the number of iterations. The loop optimization
code that duplicates the loop bodies must also modify the code to correctly
propagate the update of the iteration numbers. The technical details are more
complex and can be found in the CerCo reports and publications. The
implementation, however, is quite simple and the changes to a loop optimizing
compiler are minimal \cite{paolo}.

\subsection{Techniques to exploit the induced cost model.}
We review the cost synthesis techniques developed in the project.
The starting hypothesis is that we have a certified methodology to annotate
blocks in the source code with constants which provide a sound and possibly
precise upper bound on the cost of executing the blocks after compilation to
object code.
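The iteration-number technique described above for unrolled loops can be sketched as follows. The counter \texttt{\_\_i} and the per-parity constants are invented for the example:

```c
/* Invented sketch of an iteration-dependent cost after loop
   unrolling: a counter tracks the iteration number and the cost of
   the duplicated label is a function of its parity. */
unsigned long __cost = 0;

static unsigned cost_L(unsigned i) { return (i % 2 == 0) ? 7 : 9; }

unsigned sum_upto(unsigned n) {
    unsigned s = 0;
    for (unsigned __i = 0; __i < n; __i++) {
        __cost += cost_L(__i);   /* even iterations pay 7, odd ones 9 */
        s += __i;
    }
    return s;
}
```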
The principle that we have followed in designing the cost synthesis tools is
that the synthetic bounds should be expressed and proved within a general
purpose tool built to reason on the source code. In particular, we rely on the
Frama-C tool to reason on C code and on the Coq proof-assistant to reason on
higher-order functional programs.

This principle entails that: 1)
the inferred synthetic bounds are indeed correct as long as the general purpose
tool is; 2) there is no limitation on the class of programs that can be handled,
as long as the user is willing to carry out an interactive proof.

Of course, automation is desirable whenever possible. Within this framework,
automation means writing programs that give hints to the general purpose tool.
These hints may take the form, say, of loop invariants/variants, of predicates
describing the structure of the heap, or of types in a light logic. If these
hints are correct and sufficiently precise, the general purpose tool will
produce a proof automatically; otherwise, user interaction is required.

\paragraph{The Cost plugin and its application to the Lustre compiler.}
Frama-C \cite{framac} is a set of analysers for C programs with a
specification language called ACSL. New analyses can be dynamically added
through a plugin system. For instance, the Jessie plugin allows deductive
verification of C programs with respect to their specification in ACSL, with
various provers as back-end tools.
We developed the CerCo Cost plugin for the Frama-C platform as a proof of
concept of an automatic environment exploiting the cost annotations produced by
the CerCo compiler.
It consists of an OCaml program which, in first approximation,
takes the following actions: (1) it receives as input a C program; (2) it
applies the CerCo compiler to produce a related C program with cost annotations;
(3) it applies some heuristics to produce a tentative bound on the cost of
executing the C functions of the program as a function of the values of their
parameters; (4) the user can then call the Jessie plugin to discharge the
related proof obligations.
In the following we elaborate on the soundness of the framework and on the
experiments we performed with the Cost tool on the C programs produced by a
Lustre compiler.

\paragraph{Soundness.} The soundness of the whole framework depends on the cost
annotations added by the CerCo compiler, the synthetic costs produced by the
Cost plugin, the verification conditions (VCs) generated by Jessie, and the
external provers discharging the VCs. The synthetic costs being in ACSL format,
Jessie can be used to verify them. Thus, even if the added synthetic costs are
incorrect (relative to the cost annotations), the process as a whole is still
correct: indeed, Jessie will not validate incorrect costs and no conclusion can
be drawn about the WCET of the program in this case. In other terms, the
soundness does not really depend on the action of the Cost plugin, which can in
principle produce any synthetic cost. However, in order to be able to actually
prove a WCET of a C function, we need to add correct annotations in such a way
that Jessie and the subsequent automatic provers have enough information to
deduce their validity. In practice this is not straightforward, even for very
simple programs composed of branching and assignments (no loops and no
recursion), because a fine analysis of the VCs associated with branching may
lead to a complexity blow-up.
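For a loop-free function, the synthetic cost takes the shape of an ACSL post-condition bounding the increase of the cost variable. In this invented example the bound sums the entry block with the worst branch of each conditional; the constants are illustrative, not actual compiler output:

```c
/* Invented example of a cost-annotated, branching-only function: the
   compiler adds the __cost increments, and the synthesized ACSL bound
   takes the worst case of each conditional (12 + 11 + 11 = 34). */
unsigned long __cost = 0;

/*@ ensures __cost <= \old(__cost) + 34; */
int max3(int a, int b, int c) {
    __cost += 12;                 /* entry block (invented constant) */
    int m = a;
    if (b > m) { __cost += 11; m = b; }
    if (c > m) { __cost += 11; m = c; }
    return m;
}
```

Jessie turns such a post-condition into VCs that the external provers discharge; the analysis of the branch combinations is what may blow up on larger programs.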
\paragraph{Experience with Lustre.} Lustre is a data-flow language for
programming synchronous systems and the language comes with a compiler to C. We
designed a wrapper for supporting Lustre files.
The C function produced by the compiler implements the step function of the
synchronous system, and computing the WCET of the function amounts to obtaining
a bound on the reaction time of the system. We tested the Cost plugin and the
Lustre wrapper on the C programs generated by the Lustre compiler. For programs
consisting of a few hundred lines of code, the Cost plugin computes a WCET and
Alt-Ergo is able to discharge all VCs automatically.

\paragraph{Handling C programs with simple loops.}
The cost annotations added by the CerCo compiler take the form of C instructions
that update a fresh global variable, called the cost variable, by a constant.
Synthesizing a WCET of a C function thus consists in statically resolving an
upper bound on the difference between the values of the cost variable before and
after the execution of the function, i.e. finding, in the function, the
instructions that update the cost variable and establishing the number of times
they are passed through during the flow of execution. In order to perform the
analysis the plugin makes the following assumptions on the programs:
\begin{enumerate}
\item there are no recursive functions;
\item every loop is annotated with a variant. The variants of \texttt{for}
loops are automatically inferred.
\end{enumerate}

The plugin proceeds as follows.
First the call graph of the program is computed.
Then the computation of the cost of a function is performed by traversing its
control flow graph. If the function $f$ calls the function $g$, then the
function $g$ is processed before the function $f$. The cost at a node is the
maximum of the costs of the successors.
In the case of a loop with a body that has a constant cost for every step of the
loop, the cost is the product of the cost of the body and of the variant taken
at the start of the loop.
In the case of a loop with a body whose cost depends on the values of some free
variables, a fresh logic function $f$ is introduced to represent the cost of the
loop in the logic assertions. This logic function takes the variant as its first
parameter. The other parameters of $f$ are the free variables of the body of the
loop. An axiom is added to account for the fact that the cost is accumulated at
each step of the loop.
The cost of the function is directly added as a post-condition of the function.

The user can influence the annotation in different ways:
by using more precise variants, or
by annotating functions with a cost specification. The plugin will then use
this cost for the function instead of computing it.
\paragraph{C programs with pointers.}
When it comes to verifying programs involving pointer-based data structures,
such as linked lists, trees, or graphs, the use of traditional first-order logic
to specify, and of SMT solvers to verify, shows some limitations. Separation
logic \cite{separation} is then an elegant alternative. It is a program logic
with a new notion of conjunction to express spatial heap separation. Bobot has
recently introduced into the Jessie plugin automatically generated separation
predicates that simulate separation logic reasoning in a traditional
verification framework where the specification language, the verification
condition generator, and the theorem provers were not designed with separation
logic in mind. CerCo's plugin can exploit the separation predicates to
automatically reason about the cost of execution of simple heap manipulation
programs such as an in-place list reversal.
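Returning to the constant-cost-body case above, the synthesized bound is the product of the body cost and the variant taken at loop entry; schematically (all constants invented for the example):

```c
/* Invented sketch of the loop rule: constant-cost body with variant
   n - i, so the synthesized bound is entry cost + body cost * variant
   taken at loop entry (5 + 13 * n). */
unsigned long __cost = 0;

/*@ ensures __cost <= \old(__cost) + 5 + 13 * n; */
void burn(unsigned n) {
    __cost += 5;                  /* entry block (invented)           */
    /*@ loop variant n - i; */
    for (unsigned i = 0; i < n; i++)
        __cost += 13;             /* constant cost of the loop body   */
}
```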
\subsection{A formal certification of the CerCo compiler}
The Trusted CerCo Compiler has been implemented and certified using the
interactive theorem prover Matita. The details of the proof techniques employed
and the proof sketch can be collected from the CerCo deliverables and papers.
Here we only discuss the correctness statement, which turned
out to be more complex than what we expected at the beginning.

\paragraph{The statement.}
Real time programs are often reactive programs that loop forever responding to
events (inputs) by performing some computation followed by some action (output)
[\ldots]
space is a clear requirement for the meaningfulness of a run, because source
programs do not crash for lack of space while object code ones do. The balancing
of function calls and returns is a requirement for precision: the labeling
approach allows the scope of label emission statements to extend after function
calls, to minimize the number of labels. Therefore a label pays for all the
instructions
[\ldots]
characterized by history dependent stateful components, like caches and
pipelines. The main issue consists in assigning a parametric, dependent cost
to basic blocks that can later be transferred by the labeling approach to
the source code and represented in a meaningful way to the user.
The dependent
labeling approach that we have studied seems a promising tool to achieve
this goal, but the cost model generated for a realistic processor could be too
large and complex to be exposed in the source code. Further study is required
[\ldots]
Examples of further future work consist in improving the cost invariant
generator algorithms and the coverage of compiler optimizations, in combining
the labeling approach with the type \& effect discipline \cite{typeffects}
to handle languages with implicit memory management, and in experimenting with
our tools in early development phases. Some larger use case is also necessary
to evaluate CerCo's prototype on industrial size programs.

\bibliographystyle{llncs}
\begin{thebibliography}{19}

\bibitem{survey} \textbf{A Survey of Static Program Analysis Techniques}.
Wolfgang W\"ogerer. Technical report, Technische Universit\"at Wien, 2005.

\bibitem{cerco} \textbf{Certified Complexity}. R. Amadio, A. Asperti, N. Ayache,
B. Campbell, D. Mulligan, R. Pollack, Y. R\'egis-Gianas, C. Sacerdoti Coen, I.
Stark. In Procedia Computer Science, Volume 7, 2011, Proceedings of the 2nd
European Future Technologies Conference and Exhibition 2011 (FET 11), 175--177.

\bibitem{labeling} \textbf{Certifying and Reasoning on Cost Annotations in C
Programs}. N. Ayache, R.M. Amadio, Y. R\'egis-Gianas. In Proc. FMICS, Springer
LNCS 7437: 32--46, 2012. DOI:10.1007/978-3-642-32469-7\_3.
\bibitem{labeling2} \textbf{Certifying and reasoning on cost annotations of
functional programs}. R.M. Amadio, Y. R\'egis-Gianas. In Proceedings of the
Second International Conference on Foundational and Practical Aspects of
Resource Analysis (FOPARA 2011), Springer LNCS, 2011.

\bibitem{compcert} \textbf{Formal verification of a realistic compiler}. X.
Leroy. In Commun. ACM 52(7), 107--115, 2009.

\bibitem{framac} \textbf{Frama-C user manual}. L. Correnson, P. Cuoq, F.
Kirchner, V. Prevosto, A. Puccetti, J. Signoles, B. Yakobowski. CEA-LIST,
Software Safety Laboratory, Saclay, F-91191, \url{http://frama-c.com/}.

\bibitem{paolo} \textbf{Indexed Labels for Loop Iteration Dependent Costs}. P.
Tranquilli. In Proceedings of the 11th International Workshop on Quantitative
Aspects of Programming Languages and Systems (QAPL 2013), Rome, 23rd--24th March
2013, Electronic Proceedings in Theoretical Computer Science, to appear in 2013.
\bibitem{separation} \textbf{Intuitionistic reasoning about shared mutable data
structure}. John C. Reynolds. In Millennial Perspectives in Computer Science,
pages 303--321, Houndsmill, Hampshire, 2000. Palgrave.

\bibitem{lustre} \textbf{LUSTRE: a declarative language for real-time
programming}. [\ldots] 1987.

\bibitem{correctness} \textbf{On the correctness of an optimising assembler for
the Intel MCS-51 microprocessor}. Dominic P. Mulligan and Claudio Sacerdoti
Coen. In Proceedings of the Second International Conference on Certified
Programs and Proofs, Springer-Verlag, 2012.

\bibitem{proartis} \textbf{PROARTIS: Probabilistically Analysable Real-Time
Systems}. F.J. Cazorla, E. Qui\~nones, T. Vardanega, L. Cucu, B. Triquet, G.
Bernat, E. Berger, J. Abella, F. Wartel, M. Houston, et al. In ACM Transactions
on Embedded Computing Systems, 2012.

\bibitem{embounded} \textbf{The EmBounded project (project paper)}. K. Hammond,
R. Dyckhoff, C.
Ferdinand, R. Heckmann, M. Hofmann, H. Loidl, G. Michaelson, J.
Serot, A. Wallace. In Trends in Functional Programming, Volume 6, Intellect
Press, 2006.

\bibitem{matita} \textbf{The Matita Interactive Theorem Prover}.
Andrea Asperti, Claudio Sacerdoti Coen, Wilmer Ricciotti and Enrico Tassi.
In 23rd International Conference on Automated Deduction, CADE 2011.

\bibitem{typeffects} \textbf{The Type and Effect Discipline}. Jean-Pierre Talpin
and Pierre Jouvelot. In Proceedings of the Seventh Annual Symposium on Logic in
Computer Science, IEEE Computer Society, 1992.

\bibitem{stateart} \textbf{The worst-case execution-time problem: overview of
methods and survey of tools}. R. Wilhelm et al. In ACM Transactions on Embedded
Computing Systems, 7:1--53, May 2008.

%\bibitem{proartis2} \textbf{A Cache Design for Probabilistic Real-Time
% Systems}, L. Kosmidis, J. Abella, E. Quinones, and F. Cazorla, in Design,
% Automation, and Test in Europe (DATE), Grenoble, France, 03/2013.

\end{thebibliography}