# Changeset 3424

Timestamp:
Feb 11, 2014, 12:35:22 PM
Message:

Minor edits to section 4.

File:
1 edited

The hardware state comprises both the functional state that affects the computation (the value of the registers and memory) and the non-functional state that does not (the pipeline and cache contents, for example). The former is in correspondence with the source code state, but reconstructing the correspondence may be hard, and lifting the cost model to work on the source code state is likely to produce cost expressions that are too complex to understand and reason about. Fortunately, in all modern architectures the cost of executing single instructions is either independent of the functional state or the jitter---the difference between the worst and best case execution times---is small enough to be bounded without losing too much precision. Therefore we only consider dependencies on the `non-functional' parts of the state. The non-functional state has no correspondence in the high level state and does […] update function which is computed during static analysis too. The update function associates to each label a function from the recently emitted labels and old state to the new state. It is computed by composing the semantics of every instruction in a basic block and restricting it to the non-functional part of the state. […] trade precision of the analysis for simplicity by approximating the parametric cost with safe non-parametric bounds. Interestingly, the functional analysis of the code could determine which blocks are executed more frequently in order to use more aggressive approximations for the ones that are executed least.
Dependent labeling can also be applied to allow the compiler to duplicate a […] different cost to the different occurrences of a duplicated label. For example, loop peeling turns a loop into the concatenation of a copy of the loop body for the first iteration and the conditional execution of the loop for successive iterations. Further optimisations will compile the two copies of the loop differently, with the first body usually taking more time. […] actual work to be done consists of introducing within the source code, and for each loop, a variable that counts the number of iterations. The loop optimisation code that duplicates the loop bodies must also modify the code to correctly propagate the update of the iteration numbers. The technical details are more complex and can be found in the CerCo reports and publications. The implementation, however, […] bound on the reaction time of the system. We tested the Cost plugin and the Lustre wrapper on the C programs generated by the Lustre compiler. For programs consisting of a few hundred lines of code, the cost plugin computes a WCET and Alt-Ergo is able to discharge all VCs automatically.

\paragraph{Handling C programs with simple loops.} The cost annotations added by the CerCo compiler take the form of C instructions that update a fresh global variable called the cost variable by a constant.
Synthesizing a WCET of a C function thus consists of statically resolving an upper bound on the difference between the value of the cost variable before and after the execution of the function, i.e. finding the instructions that update the cost variable and establishing the number of times they are passed through during the flow of execution. To perform the analysis the plugin makes the following assumptions on the programs: 1) there are no recursive functions; […]

\subsection{The CerCo compiler}
In CerCo we have developed several cost preserving compilers based on the labeling approach. Excluding an initial certified compiler for a `while' language, all remaining compilers target realistic source languages---a pure […] advanced optimisations yet. Its user interface, however, is the same as that of the other versions, in order to trade safety for additional performance. In particular, the Frama-C CerCo plugin can work without recompilation with all of our C compilers. The 8051/8052 microprocessor is a very simple one, with constant-cost […] The CerCo compiler is loosely based on the CompCert compiler \cite{compcert}, a recently developed certified compiler from C to the PowerPC, ARM and x86 microprocessors. In contrast to CompCert, both the CerCo code and its certification are fully open source.
Some data structures and language definitions for the front-end are directly taken from CompCert, while the back-end is a redesign of a compiler from Pascal to MIPS used by Fran\c{c}ois Pottier for a course at the Ecole Polytechnique. The main differences in the CerCo compiler are:
\begin{enumerate}
\item All the intermediate languages include label emitting instructions to […] the way down to object code in order to perform the static analysis. We therefore also integrated an optimising assembler and a static analyser.
\item In order to avoid the additional work of implementing a linker and a loader, we do not support separate compilation and external calls. Adding them is orthogonal to the labeling approach and should not introduce extra problems.
\item We target an 8-bit processor, in contrast to CompCert's 32-bit targets. This requires many changes and more compiler code, but it is not fundamentally more complex. The proof of correctness, however, becomes much harder.
\item We target a microprocessor that has a non-uniform memory model, which is […]
We implemented the CerCo compiler in the interactive theorem prover Matita and have formally certified that the cost models induced on the source code correctly and precisely predict the object code behaviour. There are two cost models, one for execution time and one for stack space consumption.
We show the correctness of the prediction only for those programs that do not exhaust the available machine stack space, a […] proof for non-functional properties, we have introduced a new semantics for programming languages based on a new kind of structured observables with the related notions of forward similarity and the study of the intensional consequences of forward similarity. We have also introduced a unified representation for back-end intermediate languages that was exploited to provide […] I/O operations can be performed in the prefix of the run, but not in the measurable sub-run. Therefore we prove that we can predict reaction times, but not I/O times, as desired.

\section{Conclusions and future work}