
Progress beyond state of the art

Automatic verification of properties of software components has reached a level of maturity allowing complete correctness proofs of entire compilers, that is, of the semantic equivalence between the generated assembly code and its source program. For instance, in the framework of the Verifix project (http://www.info.uni-karlsruhe.de/publications.php/id=213) a compiler from a subset of Common Lisp to Transputer code was formally checked in PVS (see [Dold and Vialard]). [Strecker] and [Klein and Nipkow] certified bytecode compilers from a subset of Java to a subset of the Java Virtual Machine in Isabelle. In the same system, [Leinenbach et al.] formally verified a compiler from a subset of C to DLX assembly code. [Chlipala] recently wrote in Coq a certified compiler from the simply-typed lambda calculus to assembly language, making extensive use of dependent types. Perhaps the most advanced project is Compcert (http://pauillac.inria.fr/~xleroy/compcert/), headed by Leroy and based on the use of Coq both for programming the compiler and for proving its correctness. In particular, both the back-end [Leroy] and the front-end [Leroy and Tristan] of an optimising compiler from a subset of C to PowerPC assembly have been certified in this way.

However, very little is known about the preservation of intensional properties of programs, and in particular of their (concrete) complexity. The theoretical study of the complexity impact of program transformations between different computational models has so far been confined to very foundational devices; here we propose to address a concrete case of compilation from a typical high-level language to assembly. It is worth remarking that constant-time transformations between foundational models are unlikely to exist: for instance, encoding a multi-tape Turing machine into a single-tape one can introduce a polynomial slow-down. Complexity is thus architecture dependent, and the claim that one may pass from one language to another while preserving the performance of one's algorithms must be taken with due caution. In particular, and as surprising as it may be, very little is known about the complexity behaviour of compiled code with respect to its source; as a matter of fact, most industries producing robots or devices with strong temporal constraints (such as photoelectric safety barriers) still program such devices directly in assembly.

The tacit assumption that the complexity of algorithms is preserved by compilation, while plausible under suitable assumptions, is not supported by any practical or theoretical study. For instance, a single register is usually used to point to activation records, implicitly limiting their number; more registers may be devoted to this purpose, but unless their number is fixed a priori (hence fixing the size of the stack), one cannot expect to access data in activation records in constant time. Moreover, the memory model adopted by Leroy assumes an infinite memory, where allocation requests always succeed; this clearly conflicts with the reality of embedded software, where one has to work within precise (and often relatively small) memory bounds. If, on the one hand, working in restricted space allows us to properly weight memory access as a unit cost, on the other hand it introduces a subtle interplay between space complexity, time complexity and correctness that will be one of the crucial issues of the project.

Even admitting (as we hope to prove) that in such a confined universe we may actually preserve complexity, the main interest of the project lies in producing a (certified) computational cost (in terms of clock cycles) for all instruction slices of the source program with O(1) complexity, thus providing precise values for all constants appearing in the cost function of the source. This opens the possibility of computing time constraints for executable code by reasoning directly on the high-level input language. In particular, we are not aiming to help analyse the complexity (or termination) of programs, which has to be guessed by the user, just as invariants are guessed in axiomatic semantics; rather, we shall build the necessary infrastructure to reflect a high-level, abstract complexity analysis of the source on a concrete instantiation of its target code.

Such an instantiation depends on the target architecture: for instance, some microcontrollers lack a multiplication instruction as a primitive operation, preventing us from counting such an operation at a fixed cost (see the sketch below). Moreover, if we are interested in a really tight complexity measure, we cannot expect a uniform cost for source instructions since, due to register allocation and optimisations, their actual cost depends on their surrounding context. In other words, we have to face the non-compositional nature (in terms of the source structure) of most compiler techniques and optimisations.
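As a purely illustrative example (not taken from the project's code base), the shift-and-add routine below shows how a compiler might expand the C operator * on a microcontroller with no hardware multiplier: the number of loop iterations, and hence the number of clock cycles, depends on the value of the operands, so no single constant can account for the cost of a source-level multiplication.

{{{
#include <stdint.h>

/* Shift-and-add multiplication, as a compiler might expand  a * b  on a
   target lacking a hardware multiply instruction.  The loop runs once per
   bit position of b, and the addition is only executed when the bit is
   set: the cost of the source expression  a * b  is not a fixed constant. */
static uint16_t soft_mul(uint16_t a, uint16_t b)
{
    uint16_t result = 0;
    while (b != 0) {
        if (b & 1)        /* add the multiplicand at the current weight */
            result += a;
        a <<= 1;          /* next bit weight */
        b >>= 1;
    }
    return result;
}
}}}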

The Compcert project represents the current baseline for any future work on compiler certification, including the one we plan to carry out. We will improve on Compcert in two directions: by assuming a formal model where resources (memory) are constrained, and by preserving the complexity of O(1) operations, also tracing the way they are mapped to assembly so as to reflect actual computational costs back on the source code. Both improvements greatly increase the exploitation potential, in particular in the domain of embedded systems and real-time computation.


The CerCo approach

The complexity of a program only depends on its control-flow structure, and in particular on its cycles (procedure calls are just a special case of cycles). Proving that a compiler preserves complexity amounts to proving that it preserves (up to local modifications, like loop unrolling, inline expansion, etc.) the control-flow structure of the source^1^ and, less trivially, that all other instructions are compiled into assembly code whose execution takes a bounded number of clock cycles (i.e. with O(1) complexity). The interest of the project lies in the possibility of computing these costs directly on the target code and then referring them back to the source program, making it possible to state precise and trusted temporal assertions about execution by reasoning on the source code.

As already mentioned, the main problem in the backward translation of costs from target code to source code is that, apart from the overall control-flow structure, all remaining structure of the input is usually irremediably lost during compilation: optimisations can move instructions around, change the order of jumps, and in general perform operations that are far from compositional w.r.t. the high-level syntactic structure of the input program. So there is no hope of computing costs on an instruction-by-instruction basis for the source language, since the actual cost of the executable is not compositional in these figures. We have to find another, possibly coarser, level of granularity at which the source can be sensibly annotated with target costs.

We regard a C program as a collection of mutually defined procedures. The flow inside each procedure is determined by branching instructions like if-then-else; while loops can be regarded as a special kind of tail-recursive procedure (see the sketch below). The resulting flow can thus be represented as a directed acyclic graph (DAG). We call a path of this DAG an execution path.
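For instance, the loop below (a minimal illustration, not project code) can be read as a tail-recursive procedure: every iteration of the while corresponds to one tail call, so loops and procedure calls contribute to the control flow in exactly the same way, and the branch-free loop body is an O(1) execution path.

{{{
/* A while loop summing an array ... */
int sum_while(const int *a, int n)
{
    int s = 0, i = 0;
    while (i < n) {          /* the loop is the only cycle of the procedure */
        s += a[i];
        i++;
    }
    return s;
}

/* ... regarded as a tail-recursive procedure: one tail call of sum_rec
   plays the role of one iteration of the loop above. */
static int sum_rec(const int *a, int n, int i, int s)
{
    if (i >= n)
        return s;
    return sum_rec(a, n, i + 1, s + a[i]);   /* tail call = next iteration */
}
}}}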

Figure 2: Quicksort
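The image of Fig. 2 is not included in this page export; the listing below reproduces the quicksort source shown in the leftmost column of Fig. 5, with parameter types spelled out and a standard swap helper added for completeness.

{{{
/* In-place quicksort of the segment t[l..r]. */
void swap(int t[], int i, int j)
{
    int tmp = t[i];
    t[i] = t[j];
    t[j] = tmp;
}

void quicksort(int t[], int l, int r)
{
    if (l < r) {
        int i = l + 1;
        int m = l;
        int v = t[l];              /* the pivot */
        while (i <= r) {
            if (t[i] < v) {
                m++;
                swap(t, i, m);
            }
            i++;
        }
        swap(t, l, m);             /* place the pivot in its final position m */
        quicksort(t, l, m - 1);
        quicksort(t, m + 1, r);
    }
}
}}}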

As a simple example, consider the quicksort program of Fig. 2. The algorithm performs in-place sorting of the input array t, whose bounds are l and r; initially l is expected to be zero and r to be the length of the array minus one. The outermost conditional terminates the recursion when the bounds of the array are illegal (sorting an empty array ends the recursive behaviour of the algorithm). The variable v is the so-called pivot: a selected element of the array that is compared with all other elements. Bigger elements are moved (by the swap function) towards the end of the array (the upper part), while smaller elements are placed at the beginning of the array (the lower part). Then the pivot is placed between the lower and the upper part, in position m, its position in the resulting sorted array: all elements before the pivot are smaller and all elements following it are bigger. The algorithm completes the sorting with recursive calls on the lower and on the upper part of the array.

In the body of the quicksort procedure there are only two execution paths, corresponding to the two cases l < r and l >= r. The latter is a trivial path, immediately leading to termination. The former leads to the while loop (which is regarded as a procedure call), to the call to swap, and to the two recursive calls. Similarly, the body of the while loop is composed of two paths, corresponding to the two conditions t[i] < v and t[i] >= v.

All the operations performed along any one of these paths take some constant time c. The order of magnitude of the program's complexity depends only on the loops and recursive calls met along its execution paths, not on the associated constants. On the other hand, if we want to give tight bounds on the execution time, we have to compute the real constants on the executable.

The compiler must be able to return a set of pairs (p, c), where each p is an execution path and c is its actual cost^2^. It is then up to the user to guess (and to prove, assisted by interactive tools) the complexity of the program: the compiler only provides the infrastructure required to map a complexity analysis on the source onto a faithful analogue on the target. This approach looks compatible with most local optimisations. Moreover, since we work on a cycle-by-cycle (procedure-by-procedure) basis, the approach should scale up well.


User interaction flow

The left part of Fig. 3 shows the interaction diagram for the final user of the system (the right part is a planned case study for a possible automation of the process and will be discussed later).

Figure 3: Interaction and automation diagrams

The interaction is done in several steps. We illustrate them using the quicksort program above.

  1. The user writes her code in C (see Fig. 2) and compiles it with our CerCo (Certified Complexity) compiler.
  2. The CerCo compiler outputs both the object code and an annotated copy of the C source (Fig. 4); a sketch of such an annotated copy is given below.

Figure 4: Cost annotated C code (generated by the compiler)
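The annotated source of Fig. 4 is not reproduced in this page export. The listing below is only a plausible sketch of what the compiler could emit, using plain textual comments as discussed below: the 6 and 21 cycle figures are those computed for the two execution paths of the procedure body in Fig. 5, while c1 and c2 are hypothetical constants standing for the (not shown) per-iteration costs of the while loop.

{{{
/* Cost-annotated copy of the source (illustrative sketch only). */
void quicksort(int t[], int l, int r)
{
    /* cost of one pass through this body:
         6 cycles                          if l >= r
         21 cycles + cost of the callees   if l <  r   */
    if (l < r) {
        int i = l + 1;
        int m = l;
        int v = t[l];
        while (i <= r) {
            /* cost of one iteration, excluding the call to swap:
                 c1 cycles if t[i] < v,  c2 cycles otherwise */
            if (t[i] < v) {
                m++;
                swap(t, i, m);
            }
            i++;
        }
        swap(t, l, m);
        quicksort(t, l, m - 1);
        quicksort(t, m + 1, r);
    }
}
}}}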
Figure 5: Automatic inference of cost annotations from assembly code
C source:

{{{
void quicksort(t, l, r) {
  if (l < r) {
    int i = l + 1;
    int m = l;
    int v = t[l];
    while (i <= r) {
      if (t[i] < v) {
        m++;
        swap(t, i, m);
      }
      i++;
    }
    swap(t, l, m);
    quicksort(t, l, m - 1);
    quicksort(t, m + 1, r);
  }
}
}}}

Pseudo-assembly code:

{{{
24: r <- r3
29: l <- r2
34: cmp l r
36: t <- r1
3a: jump c4 if l >= r
40: i <- l + 1
44: r8 <- t
48: r7 <- l
4b: m <- l

    ... (while loop body not shown) ...

97: r1 <- t
9b: r3 <- m
9e: r2 <- l
a1: call swap
a6: r1 <- t
aa: r3 <- m - 0x1
af: r2 <- l
b2: call quicksort
bc: l <- r6
bf: call quicksort
c4: ret
}}}

Execution paths:

{{{
l >= r:
  24: r <- r3
  29: l <- r2
  34: cmp l r
  36: t <- r1
  3a: jump c4 if l >= r
  c4: ret

  total: 6 clock cycles

l < r:
  24: r <- r3
  29: l <- r2
  34: cmp l r
  36: t <- r1
  3a: jump c4 if l >= r
  40: i <- l + 1
  44: r8 <- t
  48: r7 <- l
  4b: m <- l
      (while loop)
  97: r1 <- t
  9b: r3 <- m
  9e: r2 <- l
  a1: call swap
      (swap)
  a6: r1 <- t
  aa: r3 <- m - 0x1
  af: r2 <- l
  b2: call quicksort
      (quicksort)
  bc: l <- r6
  bf: call quicksort
      (quicksort)
  c4: ret

  total: 21 clock cycles + function calls
}}}

Each loop and function body is annotated with the cost of one iteration, along all its possible execution paths. The cost is expressed as a function of the state of the program, which comprises the value of every variable. (In the example we use a textual annotation for simplicity, but we expect to produce a more structured output.)

The leftmost column of Fig. 5 shows the original source code. Colours are used to relate source code statements with their respective (human readable) assembly instructions, reported in the central column. That assembly was produced by gcc^3^ with a moderate level of optimisations for an Intel 386 family microprocessor. We use l, r, t, v and m to name the locations (registers and/or stack-frame temporaries) in which the corresponding C variables are placed, while r1, ..., r8 are other registers or temporaries that have no direct mapping to C variables. The calling convention puts the first three parameters in r1, r2 and r3, and it is up to the callee to eventually store them in local temporaries. Assignment is denoted with <-, addition and multiplication with + and *; the jump instruction is followed by the target address and, when the jump is conditional, by a C-like expression (whose evaluation is actually performed early by the cmp instruction, which sets a CPU flag recording the result of the comparison). The only tricky expression is *(r8 + r7 * 4), which exploits an advanced addressing mechanism corresponding to array indexing (4 is the size of an array cell in bytes, r7 holds the index and r8 the address at which the array starts); it amounts to the C expression t[l] that computes the pivot.

The rightmost column shows two possible execution paths, with a precise estimation of their cost (here 6 and 21 CPU cycles, plus the cost of function calls) and the algebraic conditions characterising these paths. More precisely:

  • The CerCo compiler avoids the intra-procedural and loop optimisations that may change the number of iterations performed in a non-trivial way.
  • Some intra-procedural or loop optimisations (like the while-to-repeat pre-hoisting optimisation applied by gcc in Fig. 5) can be allowed, provided that the compiler records them precisely.
  • Once the assembly code is produced, the assembly-level control-flow graph is analysed in order to compute the cost of the execution paths. Fig. 5 shows two of them in the rightmost column; the analysis of the while loop has been omitted, but is similar.
  3. The user computes (by hand or semi-automatically) the complexity invariants of each cycle and of each (recursive) function, and adds them to the C code as special comments^4^ (Fig. 6; a schematic sketch is given below).

Figure 6: Invariants annotated C code. The invariants are user provided.
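The image of Fig. 6 is not included in this page export. The fragment below is only a schematic sketch of such user-provided invariants (the concrete annotation syntax may differ); writing n for max(r - l, 0), the names a, b, c, d and e stand for hypothetical constants derived from the compiler-generated costs of Fig. 4 and Fig. 5, and the function body is the one of Fig. 2.

{{{
/* complexity invariant for quicksort (user provided):
     executing quicksort(t, l, r) takes at most
         a*n*n + b*n + c   clock cycles,   where n = max(r - l, 0)  */

/* complexity invariant for the inner while loop (user provided):
     the iterations still to be performed when the index is i take at most
         d*max(r - i + 1, 0) + e   clock cycles                      */

void quicksort(int t[], int l, int r);   /* body as in Fig. 2 */
}}}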
Figure 7: Complexity obligations (automatically generated). The user should prove every complexity obligation.
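The image of Fig. 7 is not included in this page export. Schematically, and in terms of the hypothetical constants of the sketch given for Fig. 6, the obligation generated for the path taken when l >= r requires the measured cost of that path to be below the user-provided bound, while the obligation for the path taken when l < r unfolds the invariants of the callees and of the while loop; proving them amounts to showing that a system of inequations holds, as noted in step 5 below.

{{{
path l >= r   (here n = max(r - l, 0) = 0):
    6  <=  a*n*n + b*n + c                 i.e.   6 <= c

path l < r    (with n = max(r - l, 0) = r - l):
    21 + cost(while loop) + cost(swap)
       + cost(quicksort(t, l, m - 1)) + cost(quicksort(t, m + 1, r))
       <=  a*n*n + b*n + c
    where the costs of the callees are bounded using their own invariants.
}}}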

The quicksort complexity invariant states the maximum number of clock cycles required by its execution on an array delimited by l and r. Since the procedure can be called with wrong indices (l > r), the formula has to take into account that the difference r - l could be negative, using the max function to raise that difference to zero when needed. The literature suggests that this quicksort implementation (where the pivot v is chosen deterministically) has quadratic complexity in the worst case; cleaning the formula up from multiplicative and additive constants one obtains the expected asymptotic complexity O((r - l)^2). The coefficients are those returned by the cost-annotating compiler. Similarly, the user has to give a complexity invariant for the inner while loop.

  4. The user- and compiler-annotated C code is fed into an already existing tool (in this example Caduceus [Filliâtre and Marché]) that produces one complexity obligation for each execution path (Fig. 7).
  5. The user should prove all the complexity obligations. The proofs are the certificate that the user-provided complexity invariants are correct. In many cases the obligations can be proved automatically, using a general-purpose automatic theorem prover or an ad-hoc procedure. For instance, to prove the complexity obligations of Fig. 7 we must show that a system of inequations holds, which may be done automatically. When an automatic proof is not possible, the user can resort to an interactive proof.

The right part of Fig. 3 describes a planned case study for the automation of the complexity proof. We start with a synchronous program which is compiled to C code. The CerCo compiler then produces suitable cost annotations, which are used by an invariant synthesizer to build complexity assertions on the C code. The synthesizer can take advantage of the high-level control-flow information contained in the source synchronous program. The deductive platform (Caduceus) generates complexity obligations, which are passed to an ad-hoc proof generator to produce a machine-checked proof from which we can extract a certified bound on the reaction time of the original synchronous program. The proof generator can also take advantage of the high-level information coming from the original source program, and user interaction can be used to drive the generator in critical cases.


Certification: tools and techniques

In order to trust the process described in the previous section, we need to trust the CerCo compiler; that is, we need to fulfil the following requirements:

  1. the compiled assembly program respects the semantics of the C program;
  2. the number of iterations performed by the C program and by the assembly program is the same, i.e. the compiler preserves the complexity;
  3. the cost annotations generated by the compiler correspond to the number of clock cycles actually spent by the hardware.

For this reason, we plan to^5^:

  1. develop an untrusted CerCo compiler prototype in a high-level programming language;
  2. provide an executable formal specification of the target microprocessor;
  3. provide an executable formal specification of the C language;
  4. develop an executable version of the CerCo compiler in a language suitable for formal correctness proofs;
  5. give a machine-checkable proof that the latter implementation satisfies all the requirements mentioned above.

The untrusted compiler will be written in the OCaml programming language (http://caml.inria.fr/), developed and distributed by INRIA, France's national research institute for computer science. OCaml is a general-purpose programming language, especially suited to the symbolic manipulation of tree-like data structures of the kind typically used during compilation. It is a simple and efficient language, designed for program safety and reliability and particularly suited to rapid prototyping.

For the certification of the compiler we plan to use the Matita interactive theorem prover (http://matita.cs.unibo.it), developed at the Computer Science Department of the University of Bologna. Matita is based on the Calculus of Inductive Constructions, the same foundational paradigm as INRIA's Coq system, and it is partially compatible with it. It adopts a tactic-based editing mode, and proof objects (XML-encoded) are produced for storage and exchange. Its graphical interface, inspired by CtCoq and Proof General, supports high-quality bidimensional rendering of proofs and formulae, transformed on the fly into MathML markup. In spite of its young age it has already been used for complex formalisations, including non-trivial results in number theory and problems from the POPLmark challenge [POPLmark]. An executable specification of all models of Freescale 8-bit ALUs (families HC05/HC08/RS08/HCS08) and memories (RAM, ROM, Flash) has also already been formalised in Matita.

For the management of cost annotations and the synthesis of proof obligations we plan to interface with the Caduceus verification tool for C programs (http://caduceus.lri.fr/), developed by the Computer Science Laboratory of the University of Paris-Sud. Caduceus is built on top of Why, a general-purpose verification condition generator exploiting Dijkstra's weakest-precondition calculus. The Why tool allows the declaration of logical models (types, functions, predicates and axioms) that can be used in programs and annotations; moreover, it can be interfaced with a wide set of existing provers for the verification of the resulting conditions. In particular, we will rely on Alt-Ergo (http://alt-ergo.lri.fr), which includes a decision procedure for linear arithmetic.

We plan to support almost every ANSI C construct (functions, pointers, arrays and structures) and data type (integers of various sizes), except function pointers, explicit jumps (goto) and pointer aliasing or casting. These features do not seem to pose major additional challenges to the technology we plan to develop, but could be time-consuming to implement, formalise and prove correct. Moreover, they are not currently supported by Caduceus, posing additional problems for the development of the proof-of-concept prototype of the whole application. We could also support floating-point numbers, but the kinds of microcontrollers we are targeting (8- and 16-bit processors) do not usually provide instructions to process them efficiently. Moreover, floating-point numbers are seldom used in embedded software for that very reason, making them a feature of ANSI C of limited interest in our scenario.

We stress that the proposed approach handles cost annotations for C programs as a special case of standard annotations for imperative programs, whose management we plan to automate with tools such as Caduceus. As explained above, the choice of the tool does have an impact on the fragment of ANSI C constructs we will handle, and future advances in this domain could enlarge the fragment of ANSI C under consideration. On the other hand, tools like Caduceus pose no limitation on the invariants, which can be freely described in a computational and very expressive logic. Hence, every technique to automatically infer invariants and cost annotations can be exploited (and often automated) in Caduceus.
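To make this concrete, a cost annotation can be phrased exactly like any other behavioural annotation handled by such tools. The fragment below sketches one possible encoding (it is not a commitment to Caduceus' concrete annotation syntax): a global ghost counter is incremented, at the program points singled out by the cost analysis, by the constants computed by the compiler, and the user-provided invariants simply bound that counter, so that the verification condition generator can treat them like any ordinary assertion. The increments 6 and 15 below are derived from the per-path costs of Fig. 5 (6 cycles on the l >= r path and 21 = 6 + 15 cycles on the l < r path).

{{{
/* One possible encoding of cost annotations as ordinary annotations for a
   C verification tool (illustrative sketch only). */

unsigned long cost = 0;          /* global clock-cycle counter */

void quicksort(int t[], int l, int r)
{
    cost += 6;                   /* compiler-computed cost of the entry path */
    if (l < r) {
        cost += 15;              /* extra cost of the l < r path (21 - 6)    */
        /* ... body as in Fig. 2, with further increments of cost inside
           the while loop; the user-provided invariant bounds the total
           increase of cost by a*n*n + b*n + c with n = max(r - l, 0). */
    }
}
}}}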

