Changeset 3225


Timestamp:
Apr 30, 2013, 12:43:33 PM
Author:
mckinna
Message:

More spelling, more grammar, more phrasing

File:
1 edited

  • Deliverables/D6.4-6.5/workshops.ltx

    r3224 r3225  
    2222and was attended by 7 participants.
    2323
    24 Prof. Kevin Hammond (St. Andrews University, UK) gave an invited talk on his work on methods for WCET analysis, in particular of Hume programs at the source-level, and their application in the autonomous vehicle guidance and aerospace domains.
     24Prof. Kevin Hammond (St. Andrews University, UK) gave an invited talk on his work on methods for WCET analysis developed in the EmBounded project (EU FP6 IST-510255), in particular for Hume programs at the source level, and on their application in the autonomous vehicle guidance and aerospace domains.
    2525
    2626\input{hipeac.ltx}
     
    4848We give here a brief overview of the most interesting ones.
    4949
    50 The existence of two different approaches to source-level cost reasoning emerged at the HiPEAC event. The first one, embraced by the EMbounded project, does not try to maximize performance of the code, but is interested only in full predictability and simplicity of the analysis. The second approach, embraced by CerCo, tries to avoid any performance reduction, at the price of complicating the analysis. It is therefore closer to traditional WCET. Technology transfer between the two approaches seem possible. Kevin Hammond's group use amortized analysis techniques to connect local costs about embedded programs in the Hume language to global costs, technology which it may be possible to transfer to the \cerco{} setting.  A key difference in our approaches is that their Hume implementation uses the high predictability of their virtual machine implementation to obtain local cost information, whereas \cerco{} produces such information for a complex native-code compiler. Replacing their virtual machine with out compiler seems also possible.
     50Two different approaches to source-level cost reasoning emerged at the HiPEAC event. The first, embraced by the EmBounded project, does not try to maximize the performance of the code, but aims only at full predictability and simplicity of the analysis. The second, embraced by \cerco{}, tries to avoid any performance reduction, at the price of complicating the analysis; it is therefore closer to traditional WCET. Technology transfer between the two approaches seems possible. Kevin Hammond's group uses amortized analysis techniques to connect local costs of embedded programs in the Hume language to global costs, technology which it may be possible to transfer to the \cerco{} setting. A key difference between our approaches is that their Hume implementation relies on the high predictability of their virtual machine to obtain local cost information, whereas \cerco{} produces such information for a complex native-code compiler. Replacing their virtual machine with our compiler also seems possible.
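
As a rough illustration of the local-to-global step involved (this is just the textbook potential-method identity, not a description of the Hume analysis itself): amortized analysis assigns each operation an amortized cost $\hat{c}_i = c_i + \Phi(d_i) - \Phi(d_{i-1})$ with respect to a potential $\Phi$ on program states, and then
\[
  \sum_{i=1}^{n} c_i \;=\; \sum_{i=1}^{n} \hat{c}_i \;+\; \Phi(d_0) - \Phi(d_n)
  \;\le\; \sum_{i=1}^{n} \hat{c}_i
  \qquad\text{whenever } \Phi(d_n) \ge \Phi(d_0),
\]
so local bounds on the amortized cost of each operation yield a global bound on the true total cost.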
    5151
    5252At the ETAPS workshop Bj{\"o}rn Lisper drew attention to the many points of common interest and related techniques between the work on \cerco{} and his own on Parametric WCET analysis. The most interesting difference between what we do and what is done in the WCET community is that we use (automated) theorem proving to deal with the control flow (i.e. to put an upper bound on the number of executions). The standard technique in parametric WCET is to use polyhedral analysis to bound the number of loop iterations; that analysis produces constraints which are solved with the aid of off-the-shelf linear programming tools. He was interested in comparing the effectiveness of theorem proving with that of polyhedral analysis in computing precise costs.
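
To make the comparison concrete, here is a minimal sketch (invented for this overview; it uses no actual tool syntax, and the cost names are placeholders): a loop whose body has a fixed cost and whose iteration count is bounded by a parameter. A labelling-style analysis asks a prover to establish an invariant bounding the number of body executions, while a polyhedral analysis derives the same bound as a linear constraint handed to an LP solver; either way the resulting bound is parametric in n.

    /* Sketch only: the analysis steps in the comments are illustrative,
     * not the output of CerCo or of any other WCET tool. */
    int sum(const int *a, int n) {     /* precondition: n >= 0              */
        int s = 0;
        for (int i = 0; i < n; i++) {  /* body runs at most n times:        */
            s += a[i];                 /*  prover: invariant 0 <= i <= n     */
        }                              /*  polyhedra: constraint 0 <= i < n  */
        return s;                      /* parametric bound:                  */
    }                                  /*  WCET(n) <= c_init + n * c_body    */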
     
    5454In addition to his own technical talk, he took the opportunity to advertise, and solicit interest in, the recently formed COST Action IC1202 Timing Analysis and Cost-Level Estimation (TACLe), of which he is Chair. Members of \cerco{} are going to join the COST Action. This offers very promising potential for future collaborations and the wider communication of results from \cerco{}.
    5555
    56 In particular, during the round-table it has clearly emerged an immediate and significant application of the CerCo technology that we missed during the project. WCET analysis has traditionally been used in the verification phase of a system, after all components have been built. Indeed, the state-of-the-art WCET techniques all work on the object code, which is available only after compilation and linking. Since redesigning a software system is very costly, designers usually choose to over-specify the hardware initially and then just verify that it is indeed sufficiently powerful. However, as systems' complexity rises, these initial safety margins can prove to be very expensive. Undertaking lightweight (but less precise) analysis in the early stages of the design process has the potential to drastically reduce total hardware costs. To perform this analysis, a new generation of early-stage timing analysis tools that do not need the object code are required. The CerCo Trusted and Untrusted Prototypes already fill this niche by working on the source code and giving the user the possibility to axiomatize the cost of external calls or that of computing a WCET that is parametric in the cost of unimplemented modules. A greater level of predictability, robustness and automation of the analysis is required before industrial exploitation becomes possible.
     56In particular, during the round-table an immediate and significant application of the \cerco{} technology, which we had missed during the project, clearly emerged. WCET analysis has traditionally been used in the verification phase of a system, after all components have been built. Indeed, state-of-the-art WCET techniques all work on the \emph{object} code, which is available only after compilation and linking. Since redesigning a software system is very costly, designers usually choose to over-specify the hardware initially and then just verify that it is indeed sufficiently powerful. However, as systems' complexity rises, these initial safety margins can prove to be very expensive. Undertaking lightweight (but less precise) analysis in the early stages of the design process has the potential to drastically reduce total hardware costs. To perform this analysis, a new generation of early-stage timing analysis tools that do not need the object code is required. The \cerco{} Trusted and Untrusted Prototypes already fill this niche by working on the \emph{source} code and giving the user the possibility of axiomatizing the cost of external calls, or of computing a WCET that is parametric in the cost of unimplemented modules. Before industrial exploitation becomes possible, however, we would need to establish greater levels of predictability, robustness and automation of the analysis.
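
A minimal sketch of what such an early-stage, source-level analysis could look like (the annotation style and the names below are invented for illustration; they are not the interface of the \cerco{} prototypes): the cost of an unimplemented routine is taken as a symbolic constant, and the bound reported for its caller is parametric in that constant.

    /* Hypothetical example: read_sensor() is not implemented yet, so its
     * cost is axiomatized as a symbolic constant C_READ; the analysis can
     * then report, e.g., WCET(poll) <= c0 + N_CHANNELS * (C_READ + c_body). */
    #define N_CHANNELS 8

    extern int read_sensor(int channel);   /* assumed cost: C_READ cycles */

    int poll(void) {
        int worst = 0;
        for (int ch = 0; ch < N_CHANNELS; ch++) {
            int v = read_sensor(ch);       /* cost axiom applied at this call */
            if (v > worst)
                worst = v;
        }
        return worst;
    }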
    5757
    58 A common theme emerged from the shared sessions with QAPL, and in particular the invited talk there from prof. Alessandra di Pierro on \emph{probabilistic} timing analysis: the parameterisation of a given timing analysis with respect to different cost \emph{algebras}. In the case of probabilistic analyses, costs are taken with respect to given probability distributions, with \emph{expected} costs being computed. A quick analysis of the labelling approach of CerCo reveals that the method assumes very weak conditions on the cost model that it is able to transfer from the object code to the source code. In particular, probabilistic cost models like the one used by di Pierro satisfy the invariants. Prof. Vardanega's talk emphasised a radical approach to probabilistic analyses, by turning the processor/cache architecture into a probabilistic one to yield an essentially predictable analysis. For using the CerCo methodology in case of caches, embracing the probabilistic analysis may be the key ingredient. This idea already emerged in the discussions at ETAPS and was investigated during the last period of CerCo.
     58A common theme emerged from the shared sessions with QAPL, and in particular from the invited talk there by Prof. Alessandra di Pierro on \emph{probabilistic} timing analysis: the parameterisation of a given timing analysis with respect to different cost \emph{algebras}. In the case of probabilistic analyses, costs are taken with respect to given probability distributions, with \emph{expected} costs being computed. A quick analysis of the labelling approach of \cerco{} reveals that the method assumes very weak conditions on the cost model that it is able to transfer from the object code to the source code. In particular, probabilistic cost models like the one used by di Pierro satisfy the invariants. Prof. Vardanega's talk emphasised a radical approach to probabilistic analyses: turning the processor/cache architecture into a probabilistic one to yield an essentially predictable analysis. To exploit the \cerco{} methodology in the presence of caches, embracing probabilistic analysis may be the key ingredient. This idea already emerged in the discussions at ETAPS and was investigated during the last period of \cerco{}.
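
A sketch of why such models fit (our informal reading, not a statement of the precise invariants required by the \cerco{} proofs): replacing the constant cost of each label by its expected cost over a distribution of hardware states still assigns a single number to every label, and expected costs still add up along an execution trace by linearity of expectation:
\[
  \mathbb{E}\bigl[\kappa(\ell)\bigr] \;=\; \sum_{h} p(h)\,\kappa(\ell,h),
  \qquad
  \mathbb{E}\Bigl[\sum_{i=1}^{n} \kappa(\ell_i)\Bigr] \;=\; \sum_{i=1}^{n} \mathbb{E}\bigl[\kappa(\ell_i)\bigr],
\]
where $\ell_1 \cdots \ell_n$ is the sequence of labels observed along the trace and $h$ ranges over hardware states occurring with probability $p(h)$.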
    5959
    6060In the deterministic case studied in \cerco{}, we have taken a given, fixed cost algebra of natural numbers (obtained from Siemens data-sheet clock timings) under addition, but already Tranquili's work on \emph{dependent labelling} suggests a move to computing costs in algebras of \emph{functions} (in the case of his analysis of loop unrolling, of cost expressions parameterised with respect to valuations of the loop index variables). The wider implications of such a move are yet to be explored, but probabilistic analysis fits this model, as does the computation of costs that are parametric in the hardware state. At both events we presented preliminary results on the timing analysis of systems with pipelines, obtained by exposing the hardware state in the source code. Some members of the audience were skeptical, fearing that this exposes a level of complexity that is difficult to tame. However, we believe it is difficult to draw conclusions before we have a working implementation and have tested the behaviour of invariant generators on the resulting source code.
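
Schematically (an illustration of the idea rather than the actual formulation used in the dependent labelling work), a dependent label carries not a constant but a function of the valuation of the relevant variables; for a loop unrolled by a factor of two one might have
\[
  \kappa(\ell)(i) \;=\;
  \begin{cases}
    c_{\mathrm{even}} & \text{if } i \bmod 2 = 0,\\
    c_{\mathrm{odd}}  & \text{otherwise},
  \end{cases}
  \qquad
  \text{total loop cost} \;=\; \sum_{i=0}^{n-1} \kappa(\ell)(i),
\]
and the same functional shape accommodates costs that depend on (an abstraction of) the hardware state, such as the pipeline state mentioned above.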
     
    6262The feedback obtained from discussions with industrial representatives and with representatives of the parMERASA and T-CREST projects was less significant, and it suggests a bleak future for static timing analysis. Microprocessor and embedded systems developers are in a race to provide the largest amount of computing power on a single chip, with systems-on-chip at the centre of the scene during the industrial showcase. The major issue in the design of both safety-critical and non-safety-critical systems is now how to exploit this additional power, either to optimize the average case or simply because the additional power is present anyway and it would be a pity to waste it. The timing behaviour of programs running on a computing unit of such multi-core processors or systems-on-chip is potentially greatly affected by the other units. Buses and caches are also shared, often in non-uniform ways, and different computations also interfere through the states of these shared components.
    6363
    64 Statically analyzing a program for WCET in isolation yields totally useless bounds, because ignorance about the behaviour of the other computing nodes forces to always assume the worst possible behaviour, hence hopelessly imprecise bounds. The mentioned EU projects, among others, and a large part of the scientific community is working on the design of alternative hardware that could make the worst case statically predictable again, but at the moment attempts to influence microprocessor manufacturers have been totally unsuccessful. The CerCo technology, which implements a form of static analysis, suffers from the same such problems and does not contribute to its solution. On the other hand, it is likely that, if a solution to the problem emerges, it could be exploited in CerCo too.
     64Statically analysing a program for WCET in isolation yields totally useless results, because ignorance about the behaviour of the other computing nodes forces one to assume the worst possible behaviour at every step, hence hopelessly imprecise bounds. The EU projects mentioned above, among others, and a large part of the scientific community are working on the design of alternative hardware that could make the worst case statically predictable again, but so far attempts to influence microprocessor manufacturers have been totally unsuccessful. The \cerco{} technology, which implements a form of static analysis, suffers from the same problems and does not contribute to their solution. On the other hand, it is likely that, if a solution to the problem emerges, it could be exploited in \cerco{} too.