Changeset 3192


Timestamp:
Apr 27, 2013, 5:48:15 PM (4 years ago)
Author:
sacerdot
Message:

...

Location:
Deliverables/D6.4-6.5
Files:
2 edited

  • Deliverables/D6.4-6.5/etaps.ltx

    r3157 r3192  
    1 \begin{table}[h]
     1\begin{table}[!h]
    22  \centering
    33  \begin{tabular}{|c|l|l|}
  • Deliverables/D6.4-6.5/workshops.ltx

    r3157 r3192  
    2323Prof. Kevin Hammond (St. Andrews University, UK) gave an invited talk on his work on methods for WCET analysis, in particular of Hume programs at the source-level, and their application in the autonomous vehicle guidance and aerospace domains.
    2424
     25\input{hipeac.ltx}
    2526
    2627\paragraph{D6.5} We identified for month 39 (Mar. 2013) a workshop at ETAPS, the pre-eminent European federated conference on programming languages, systems and tools.
     
    3031%%%\input{etaps.ltx}
    3132
    32 It ran in parallel with a two-day workshop on Quantitative Aspects of Programming Languages (QAPL 11), sharing three sessions with that meeting. The \cerco{} workshop also included presentations from Tullio Vartanegra, representing the PROARTIS Consortium (FP7-ICT-2009.3.4), and an invited talk from Prof. Bj{\"o}rn Lisper (M{\"a}lardalen University, SE) on Parametric WCET analysis.
     33It ran in parallel with a two-day workshop on Quantitative Aspects of Programming Languages (QAPL 11), sharing three sessions with that meeting. The \cerco{} workshop also included presentations from Tullio Vardanega, representing the PROARTIS Consortium (FP7-ICT-2009.3.4), and an invited talk from Prof. Bj{\"o}rn Lisper (M{\"a}lardalen University, SE) on Parametric WCET analysis.
     34
     35\input{etaps.ltx}
     36
     37\paragraph{Organization details and attendance}
     38
     39The ETAPS event was one of 20 workshops organised over the 4 days either side of the main conference. The event also had three sessions shared with the QAPL workshop (Quantitative Aspects of Programming Languages and Systems), and was the better attended, and scientifically more successful, of the two project workshops. The main introduction to the project had more than 50 attendees and some QAPL participants expressed interest in future collaborations.
     40
     41The HiPEAC workshop was one of 24 such meetings in a programme organised in parallel with the 3 days of the main conference. Attendance was limited, which can partially be explained by the workshop being held in parallel with the main conference. There was practically no industrial attendance at any of the workshop talks. Nevertheless, the main conference also hosted an Industry Session and an Industrial Showcase, and the days that preceded our workshop offered several good opportunities to get in touch with representatives of the high-performance hardware industry and of European projects involved in advanced real-time architectures (the parMERASA, T-CREST and PROARTIS projects).
    3342
    3443\paragraph{Scientific Outcomes}
    3544
    36 The HiPEAC workshop was one of 24 such meetings in a programme organised in parallel with the 3 days of the main conference, while the ETAPS event was one of 20 workshops organised over the 4 days either side of the main conference, and was thus the better attended, and scientifically more successful, meeting.
     45Several fruitful discussions and suggestions emerged at both meetings.
     46We give here a brief overview of the most interesting ones.
    3747
    38 Nonetheless, a fruitful discussion emerged at HiPEAC concerning source-level cost reasoning.  Kevin Hammond's group use amortized analysis techniques to connect local costs about embedded programs in the Hume language to global costs, technology which it may be possible to transfer to the \cerco{} setting.  A key difference in our approaches is that their Hume implementation uses the high predictability of their virtual machine implementation to obtain local cost information, whereas \cerco{} produces such information for a complex native-code compiler.
     48Two different approaches to source-level cost reasoning emerged at the HiPEAC event. The first one, embraced by the EMbounded project, does not try to maximize the performance of the code, but is interested only in full predictability and simplicity of the analysis. The second approach, embraced by CerCo, tries to avoid any performance reduction, at the price of complicating the analysis; it is therefore closer to traditional WCET analysis. Technology transfer between the two approaches seems possible. Kevin Hammond's group uses amortized analysis techniques to connect local costs of embedded programs in the Hume language to global costs, technology which it may be possible to transfer to the \cerco{} setting.  A key difference in our approaches is that their Hume implementation uses the high predictability of their virtual machine implementation to obtain local cost information, whereas \cerco{} produces such information for a complex native-code compiler. Replacing their virtual machine with our compiler also seems possible.
    3949
    40 At the ETAPS workshop Bj{\"o}rn Lisper drew attention to the many points of common interest and related techniques between the work on \cerco{} and his own on Parametric WCET analysis. The main difference between what we do and what is done in the WCET community is that we use (automated) theorem proving to deal with the control-flow (i.e. to put an upper bound to the executions). The standard technique in WCET consists in using polyhedral analysis to bound the number of loop
    41 iterations.  That analysis produces constraints which are solved with the aid of off-the-shelf linear programming tools. So the effectiveness of theorem proving in computing precise costs in \cerco{} interested him.
     50At the ETAPS workshop Bj{\"o}rn Lisper drew attention to the many points of common interest and related techniques between the work on \cerco{} and his own on Parametric WCET analysis. The most interesting difference between what we do and what is done in the WCET community is that we use (automated) theorem proving to deal with the control flow (i.e. to put an upper bound on the number of executions). The standard technique in parametric WCET consists of using polyhedral analysis to bound the number of loop iterations.  That analysis produces constraints which are solved with the aid of off-the-shelf linear programming tools. He was interested in comparing the effectiveness of theorem proving with that of polyhedral analysis in computing precise costs.
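To make the comparison concrete, the following minimal sketch (the constants, names and invariant are invented for illustration and are not taken from the actual \cerco{} prototype) shows the kind of cost-instrumented source code produced by the labelling approach, and the bound that either technique would have to establish.

    unsigned long __cost = 0;            /* global clock-cycle counter   */

    int sum(const int *a, int n) {
        int s = 0;
        __cost += 23;                    /* prologue and initialisation  */
        for (int i = 0; i < n; i++) {
            __cost += 47;                /* cost of one loop iteration   */
            s += a[i];
        }
        __cost += 15;                    /* epilogue and return          */
        return s;
    }

    /* Bound to be established on the source code:
     *   __cost <= 23 + 47*n + 15        (for n >= 0)
     * CerCo discharges it by (automated) theorem proving on the
     * instrumented source; a parametric WCET tool would instead derive
     * the polyhedral constraint 0 <= i < n on the loop counter and feed
     * the resulting iteration bound to a linear programming solver.     */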
    4251
    43 In addition to his own technical talk, he took the opportunity to advertise, and solicit interest in, the recently formed COST Action IC1202 Timing Analysis and Cost-Level Estimation (TACLe), of which he is Chair. This offers very promising potential for future collaborations and the wider communication of results from \cerco{}.
     52In addition to his own technical talk, he took the opportunity to advertise, and solicit interest in, the recently formed COST Action IC1202 on Timing Analysis on Code-Level (TACLe), of which he is Chair. Members of \cerco{} are going to join the COST Action. This offers very promising potential for future collaborations and the wider communication of results from \cerco{}.
    4453
    45 A common theme emerged from the shared sessions with QAPL, and in particular the invited talk there from prof. Alessandra di Pierro on \emph{probabilistic} timing analysis: the parametrisation of a given timing analysis with respect to different cost \emph{algebras}. In the case of probabilistic analyses, costs are taken with respect to given probability distributions, with \emph{expected} costs being computed. prof. Vartanegra's talk emphasised a radical approach to such analyses, by making assumptions about the processor/cache architecture to yield an essentially predictable analysis.
     54In particular, during the round-table an immediate and significant application of the CerCo technology that we had missed during the project clearly emerged. WCET analysis has traditionally been used in the verification phase of a system, after all components have been built. Indeed, the state-of-the-art WCET techniques all work on the object code, which is available only after compilation and linking. Since redesigning a software system is very costly, designers usually choose to over-specify the hardware initially and then just verify that it is indeed sufficiently powerful. However, as system complexity rises, these initial safety margins can prove to be very expensive. Undertaking a lightweight (but less precise) analysis in the early stages of the design process has the potential to drastically reduce total hardware costs. To perform this analysis, a new generation of early-stage timing analysis tools that do not need the object code is required. The CerCo Trusted and Untrusted Prototypes already fill this niche by working on the source code and giving the user the possibility to axiomatize the cost of external calls, or to compute a WCET that is parametric in the cost of unimplemented modules. A greater level of predictability, robustness and automation of the analysis is required before industrial exploitation becomes possible.
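As a hypothetical illustration (all names and constants here are invented, not taken from the prototypes), an early-stage analysis of a partially implemented system could axiomatize the cost of a missing module as a symbolic parameter and derive a bound that is parametric in it:

    extern void sensor_read(void);                /* unimplemented module  */

    unsigned long __cost = 0;                     /* global cost counter   */
    extern const unsigned long COST_SENSOR_READ;  /* axiomatised call cost */

    void control_step(void) {
        __cost += 31;                  /* local computation before the call      */
        sensor_read();
        __cost += COST_SENSOR_READ;    /* cost of the call, charged symbolically */
        __cost += 12;                  /* local computation after the call       */
    }

    /* The derived bound, __cost <= 43 + COST_SENSOR_READ, is parametric in
     * the cost of the missing module and can inform hardware sizing before
     * that module is implemented.                                          */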
    4655
    47 In the deterministic case studied in \cerco{}, we have taken a given, fixed, cost algebra of natural numbers (obtained from Siemens data-sheet clock timings) under addition, but already Tranquili's work on \emph{dependent labelling} suggests a move to computing costs in algebras of \emph{functions} (in the case of his analysis of loop unrolling, of cost expressions parametrised with respect to valuations of the loop index variables). The wider implications of such a move are yet to be explored.
     56A common theme emerged from the shared sessions with QAPL, and in particular from the invited talk there by Prof. Alessandra di Pierro on \emph{probabilistic} timing analysis: the parametrisation of a given timing analysis with respect to different cost \emph{algebras}. In the case of probabilistic analyses, costs are taken with respect to given probability distributions, with \emph{expected} costs being computed. A quick analysis of the labelling approach of CerCo reveals that the method assumes very weak conditions on the cost model that it is able to transfer from the object code to the source code. In particular, probabilistic cost models like the one used by di Pierro satisfy these conditions. Prof. Vardanega's talk emphasised a radical approach to probabilistic analyses, by turning the processor/cache architecture into a probabilistic one to yield an essentially predictable analysis. Embracing probabilistic analysis may be the key ingredient for applying the CerCo methodology to architectures with caches. This idea already emerged in the discussions at ETAPS and was investigated during the last period of CerCo.
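As a purely illustrative formulation (the notation is ours, not di Pierro's), transferring a probabilistic cost model through the labelling approach would amount to attaching to each label a cost distribution instead of a constant, the expected cost of a run then being
\[
  \mathrm{E}[\mathit{cost}] \;=\; \sum_{\ell \in \mathit{Labels}} n_\ell \cdot \mathrm{E}[c_\ell],
\]
where $n_\ell$ is the number of times the block labelled $\ell$ is executed and $c_\ell$ is the (random) cost transferred to that label from the object code.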
    4857
    49 \input{hipeac.ltx}
     58In the deterministic case studied in \cerco{}, we have taken a given, fixed, cost algebra of natural numbers (obtained from Siemens data-sheet clock timings) under addition, but already Tranquili's work on \emph{dependent labelling} suggests a move to computing costs in algebras of \emph{functions} (in the case of his analysis of loop unrolling, of cost expressions parametrised with respect to valuations of the loop index variables). The wider implications of such a move are yet to be explored, but probabilistic analysis fits the model, as does the computation of costs that are parametric in the hardware state. At both events we presented preliminary results on the timing analysis of systems with pipelines, obtained by exposing the hardware state in the source code. Some members of the audience were skeptical, fearing that this exposes a level of complexity difficult to tame. However, we honestly believe that it is difficult to come to conclusions until we have a working implementation and have tested the behaviour of invariant generators on the resulting source code.
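As a purely illustrative sketch (our notation, not Tranquili's), dependent labelling for a loop unrolled by a factor of four would attach to the loop label a cost expression that depends on the valuation of the loop index, e.g.
\[
  c(i) \;=\;
  \begin{cases}
    c_{\mathrm{branch}}   & \text{if } i \bmod 4 = 3,\\
    c_{\mathrm{straight}} & \text{otherwise,}
  \end{cases}
  \qquad
  \mathit{cost}(n) \;=\; \sum_{i=0}^{n-1} c(i),
\]
so that the overall bound is a function of the loop index valuation rather than a single natural number.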
    5059
    51 \input{etaps.ltx}
    52 
    53 
     60The feedback obtained from discussions with industrial representatives and with representatives from the parMERASA and T-CREST projects was less significant and describes a bleak future for static timing analysis. Microprocessor and embedded-systems developers are in a race to provide the largest amount of computing power on a single chip, with systems-on-chip at the centre of the scene during the industrial showcase. The major issue in the design of both safety-critical and non-critical systems is now how to exploit this additional power, either to optimize the average case or simply because the additional power is present anyway and it would be a pity to waste it. The timing behaviour of programs running on a computing unit of these multi-cores or systems-on-chip is potentially greatly affected by the other units. Buses and caches are also shared, often in non-uniform ways, and different computations also interfere through the states of these shared components. Statically analysing the WCET of a program in isolation then yields useless bounds, because ignorance about the behaviour of the other computing units forces the analysis to always assume the worst possible interference. The mentioned EU projects, among others, and a large part of the scientific community are working on the design of alternative hardware that could make the worst case statically predictable again, but so far the pressure on microprocessor manufacturers has been unsuccessful. The CerCo technology, implementing a form of static analysis, suffers from this problem as well and does not contribute to its solution. On the other hand, it is likely that, if a solution to the problem emerges, it could be exploited in CerCo too.