# Changeset 3224 for Deliverables

Timestamp:
Apr 30, 2013, 12:17:29 PM

Message:

revisions...

File:
1 edited
\texttt{http://www.hipeac.net/conference/berlin/workshop/CerCo}
%%%\input{hipeac.ltx}
and was attended by 7 participants. Prof. Kevin Hammond (St. Andrews University, UK) gave an invited talk on his work on methods for WCET analysis, in particular of Hume programs at the source level, and their application in the autonomous vehicle guidance and aerospace domains. ETAPS 2013 was held in Rome, with the full-day \cerco{} event a Technical Day on ``Innovative Techniques on Timing Analysis'' (see Table~\ref{etaps:timetable}): \\ \texttt{http://cerco.cs.unibo.it/innovative\_techniques\_on\_timing\_analysis\_technical\_day}\\ and was attended by 12 participants. It ran in parallel with a two-day workshop on Quantitative Aspects of Programming Languages (QAPL'11), sharing three sessions with that meeting. The \cerco{} workshop also included presentations from Tullio Vardanega, representing the PROARTIS Consortium (FP7-ICT-2009.3.4), and an invited talk from Prof. Bj{\"o}rn Lisper (M{\"a}lardalen University, SE) on Parametric WCET analysis. \input{etaps.ltx} \paragraph{Organization details and attendance} The ETAPS event was one of 20 workshops organised over the 4 days either side of the main conference. It shared three sessions with the QAPL workshop (Quantitative Aspects of Programming Languages and Systems), and was the better attended, and scientifically more successful, of the two meetings.
The main introduction to the project had an audience of more than 50, and some QAPL participants have expressed interest in future collaborations. The HiPEAC workshop was one of 24 such meetings in a programme organised in parallel with the 3 days of the main conference. Attendance was limited, which can partly be explained by the workshop being held in parallel with the main conference. There was practically no industrial attendance at any of the workshop talks. Nevertheless, the main conference also hosted an Industry Session and an Industrial Showcase, and the days that preceded our workshop offered several good occasions to get in touch with representatives of the high-performance hardware industry and of European projects involved in advanced real-time architectures (the parMERASA, T-CREST and PROARTIS projects). In the deterministic case studied in \cerco{}, we have taken a given, fixed cost algebra of natural numbers (obtained from Siemens data-sheet clock timings) under addition, but Tranquili's work on \emph{dependent labelling} already suggests a move to computing costs in algebras of \emph{functions} (in the case of his analysis of loop unrolling, of cost expressions parameterised by valuations of the loop index variables). The wider implications of such a move are yet to be explored, but probabilistic analysis fits the model, as does the computation of costs that are parametric in the hardware state.
At both events we presented preliminary results on the timing analysis of systems with pipelines, obtained by exposing the hardware state in the source code. Some members of the audience were sceptical, fearing that this exposes a level of complexity that is difficult to tame. However, we believe it is difficult to draw conclusions until we have a working implementation and have tested the behaviour of invariant generators on the resulting source code.
The feedback obtained from discussions with industrial representatives and with representatives of the parMERASA and T-CREST projects was less significant, and describes a bleak future for static timing analysis. Microprocessor and embedded-systems developers are in a race to provide the largest amount of computing power on a single chip, with systems-on-chip at the centre of the scene during the industrial showcase. The major issue in the design of both safety-critical and non-safety-critical systems is now how to exploit this additional power, either to optimise the average case or simply because the additional power is present anyway and it would be a pity to waste it. The timing behaviour of a program running on one computing unit of such a multi-core processor or system-on-chip is potentially greatly affected by the other units. Buses and caches are also shared, often in non-uniform ways, and different computations also interfere through the states of these shared components. Statically analysing a program for WCET in isolation then yields useless bounds, because ignorance about the behaviour of the other computing nodes forces the analysis to always assume the worst possible behaviour, hence hopelessly imprecise results. The EU projects mentioned above, among others, and a large part of the scientific community are working on the design of alternative hardware that could make the worst case statically predictable again, but at the moment attempts to influence microprocessor manufacturers have been totally unsuccessful. The CerCo technology, which implements a form of static analysis, suffers from the same problems and does not contribute to their solution.
On the other hand, it is likely that, if a solution to the problem emerges, it could be exploited in CerCo too.