Changeset 3325


Timestamp:
Jun 6, 2013, 2:59:52 PM
Author:
piccolo
Message:
 
File:
1 edited

  • Papers/fopara2013/fopara13.tex

    r3324 r3325  
    111111What has previously prevented this approach is the lack of a uniform and precise cost model for high-level code: 1) each statement occurrence is compiled differently and optimisations may change control flow; 2) the cost of an object code instruction may depend on the runtime state of hardware components like pipelines and caches, which is not visible in the source code.
    112112
    113 To solve the issue, we envision a new generation of compilers able to keep track of program structure through compilation and optimisation, and able to exploit that information to define a cost model for source code that is precise, non-uniform, and accounts for runtime state. With such a source-level cost model we can reduce non-functional verification to the functional case and exploit the state of the art in automated high-level verification \cite{survey}. The techniques previously used by Worst Case Execution Time (WCET) analysers on the object code are still available, but can now be coupled with additional source-level analysis. Where the approach produces precise cost models too complex to reason about, safe approximations can be used to trade complexity with precision. Finally, analysis on source code can be performed even during early development stages, when components have been specified but not yet implemented: source code modularity means that it is enough to specify the non-functional behavior of unimplemented components.
     113To solve the issue, we envision a new generation of compilers able to keep track of program structure through compilation and optimisation, and to exploit that information to define a cost model for source code that is precise and non-uniform and that accounts for runtime state. With such a source-level cost model we can reduce non-functional verification to the functional case and exploit the state of the art in automated high-level verification \cite{survey}. The techniques previously used by Worst Case Execution Time (WCET) analysers on the object code remain available, but they can now be coupled with additional source-level analysis. Where the approach produces precise cost models that are too complex to reason about, safe approximations can be used to trade precision for simplicity. Finally, analysis on source code can be performed even during early development stages, when components have been specified but not yet implemented: source code modularity means that it is enough to specify the non-functional behaviour of unimplemented components.
    114114
    115115\paragraph{Contributions:} We have developed a technique, the labelling approach, to implement compilers that induce cost models on source programs by very lightweight tracking of code changes through compilation. We have studied how to formally prove the correctness of compilers implementing the technique. We have implemented such a compiler from C to object binaries for the 8051 microcontroller, and verified it in an interactive theorem prover. We have implemented a Frama-C plug-in \cite{framac} that invokes the compiler on a source program and uses its output to generate invariants on the high-level source that correctly model low-level costs. Finally, the plug-in certifies that the program respects these costs by calling automated theorem provers, an innovative technique in the field of cost analysis. As a case study, we show how the plug-in can automatically compute and certify the exact reaction time of Lustre \cite{lustre} dataflow programs compiled into C.
     
    118118Formal methods for the verification of functional properties of programs have reached a level of maturity and automation that is allowing a slow but steady adoption in production environments. For safety-critical code, it is becoming usual to combine rigorous software engineering methodologies and testing with static analysis, in order to benefit from the strengths of each approach and mitigate its weaknesses. Particularly interesting are open frameworks for the combination of different formal methods, where programs can be progressively specified and continuously enriched with new safety guarantees: every method contributes knowledge (e.g. new invariants) that becomes an assumption for later analyses.
    119119
    120 The scenario for the verification of non functional properties (time spent, memory used, energy consumed) is more bleak and the future seems to be getting even worse. Most industries verify that real time systems meets their deadlines simply measuring the time spent in many runs of the systems,  computing the maximum time and adding an empirical safety margin, claiming the result to be a bound for the Worst Case Execution Time of the program. Formal methods and software to statically analyse the WCET of programs exist, but they often produce bounds that are too pessimistic to be useful. Recent advancements in hardware architectures is all focused on the improvement of the average case performance, not the predictability of the worst case. Execution time is getting more and more dependent from the execution history, that determines the internal state of hardware components like pipelines and caches. Multi-core processors and non uniform memory models are drastically reducing the possibility of performing static analysis in isolation, because programs are less and less time composable. Clock precise hardware models are necessary to static analysis, and getting them is becoming harder as a consequence of the increased hardware complexity.
     120The scenario for the verification of non-functional properties (time spent, memory used, energy consumed) is bleaker, and the future looks even worse. Most industries verify that real-time systems meet their deadlines simply by measuring the time spent in many runs of the system, computing the maximum and adding an empirical safety margin, then claiming the result to be a bound on the WCET of the program. Formal methods and tools to statically analyse the WCET of programs exist, but they often produce bounds that are too pessimistic to be useful. Recent advances in hardware architecture are all focused on improving average-case performance, not on the predictability of the worst case. Execution time depends more and more on the execution history, which determines the internal state of hardware components like pipelines and caches. Multi-core processors and non-uniform memory models are drastically reducing the possibility of performing static analysis in isolation, because programs are less and less time-composable. Cycle-accurate hardware models are necessary for static analysis, and obtaining them is becoming harder as a consequence of increased hardware complexity.
    121121
    122122Despite this bleak scenario, the need for reliable real-time systems and programs keeps increasing, and there is growing pressure from the research community towards the differentiation of hardware. The aim is to introduce alternative hardware whose behaviour would be more predictable and more amenable to static analysis, for example by decoupling execution time from execution history through randomisation \cite{proartis}.
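
To make the source-level cost model envisioned above (the paragraph added at line 113) concrete, here is a minimal sketch of the kind of instrumented C source the labelling approach could produce. It is an illustration only: the counter name __cost and the cycle counts 4, 12 and 3 are invented for this example and are not taken from the actual compiler output; in practice each increment would be the exact number of clock cycles of the 8051 object code emitted for the corresponding source-level block.

    /* Hypothetical instrumented copy of a user function: every
       source-level block is preceded by an increment of a global
       cycle counter, whose constant is the cost of the object code
       generated for that block. */
    unsigned long __cost = 0;          /* clock cycles consumed so far */

    int sum(int *a, int n) {
        int i, s = 0;
        __cost += 4;                   /* prologue and initialisations */
        for (i = 0; i < n; i++) {
            __cost += 12;              /* one loop iteration */
            s += a[i];
        }
        __cost += 3;                   /* loop exit and return */
        return s;
    }

Because the increments live in the source, bounding the final value of __cost is an ordinary functional-verification problem, which is exactly the reduction of non-functional to functional verification claimed in that paragraph.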
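
The contributions paragraph (line 115) states that the Frama-C plug-in generates invariants on the high-level source that model low-level costs and has them certified by automated theorem provers. The ACSL contract below is a hedged sketch of what such an invariant could look like for the instrumented function above; the bound 12*n + 7 and the exact shape of the annotations are assumptions made for this example, not the plug-in's actual output.

    /*@ requires n >= 0;
        requires \valid_read(a + (0 .. n-1));
        assigns  __cost;
        ensures  __cost <= \old(__cost) + 12*n + 7;               */
    int sum(int *a, int n) {
        int i, s = 0;
        __cost += 4;
        /*@ loop invariant 0 <= i <= n;
            loop invariant __cost <= \at(__cost, Pre) + 4 + 12*i;
            loop assigns i, s, __cost;
            loop variant n - i;                                    */
        for (i = 0; i < n; i++) {
            __cost += 12;              /* cost of one iteration */
            s += a[i];
        }
        __cost += 3;                   /* loop exit and return */
        return s;
    }

A contract of this shape is the kind of proof obligation that automated provers invoked through Frama-C can attempt to discharge, which is the sense in which a cost bound becomes a machine-checked guarantee rather than a measured estimate.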