# Changeset 3199

Timestamp:
Apr 29, 2013, 5:31:47 AM
Message:

D6.4/D6.5 executive summary

Location:
Deliverables/D6.4-6.5
Files:
2 edited

• ## Deliverables/D6.4-6.5/report.tex

 r3137 \vspace*{0.5cm} \begin{center} \begin{LARGE} \textbf{Deliverable 6.4\\[3\jot] Organization of an Event Targeted to Potential Industrial Stakeholders} \end{LARGE} \end{center} \vspace*{0.2cm} \begin{center} \begin{LARGE} \textbf{Deliverable 6.5\\[3\jot] Organization of an Event Targeted to the Scientific Community} \end{LARGE} \end{center} \newpage \vspace*{7cm} \paragraph{Abstract} This report documents the two dissemination events planned in WP6 as Deliverables D6.4 and D6.5. \section*{Executive Summary} \addcontentsline{toc}{section}{Executive Summary} CerCo Work Package~6 on \emph{Dissemination and Exploitation} includes two deliverables that take the form of event organization. \begin{description} \item[D6.4: Organization of an Event Targeted to Potential Industrial Stakeholders] ~ Realised as: \emph{CerCo: Certifying Costs in a Certified Compiler}\\ Workshop at HiPEAC 2013: 8th International Conference on High-Performance and Embedded Architectures and Compilers\\ Wednesday 23 January 2013, Berlin, Germany. \item[D6.5: Organization of an Event Targeted to the Scientific Community] ~ Realised as: \emph{CerCo/PROARTIS Technical Day on Innovative Techniques on Timing Analysis}\\ Workshop at ETAPS 2013: European Joint Conferences on Theory and Practice of Software\\ Sunday 23 March 2013, Rome, Italy. \end{description} This report describes the completion of these deliverables, their outcome and their impact on future dissemination. We note in particular the following. \begin{itemize} \item Interaction with other projects and research groups: both events had invited talks from notable researchers, and the workshop targeted at the scientific community at ETAPS was presented jointly with the PROARTIS project.
\item Good fit with the other activities at both the industrially targeted and the scientific events: the CerCo event at HiPEAC was part of a workshop track on compilers that ran throughout the conference, and the CerCo day at ETAPS collaborated with the parallel workshop QAPL (Quantitative Aspects of Programming Languages and Systems) through three joint talks. \item Additional scientific impact and further directions, including: \begin{itemize} \item the potential of CerCo technology to carry out early-stage timing analysis not reachable with current object-code WCET tools (identified by invited speaker Bj\"orn Lisper); \item the parameterisation of CerCo analyses with respect to different cost algebras (identified in interaction with QAPL speakers); \item the application of probability distributions over costs to tame cache unpredictability (which arose from the presentation by Vardanega of PROARTIS). \end{itemize} All of these are discussed in detail in the report. \item New links to industrial researchers and other European projects: PROARTIS, the COST action TACLe, parMERASA, and T-CREST. \end{itemize} \newpage \label{sect.task} CerCo Work Package~6 specifies the following two deliverables under \emph{Dissemination and Exploitation}. \begin{quotation} \noindent\textbf{D6.4) Organization of an Event Targeted to Potential Industrial Stakeholders}: We will organize a public event open to industries and other potential stakeholders, and we will invite a few potentially interested industries, to be identified in D6.2 and during the project development. The event could be affiliated to an international conference relevant to the project and could involve a tutorial on the use of the software developed in CerCo. The deliverable date is only indicative, since we need to identify a suitable conference for affiliation. The event could be co-located and partially overlap with D6.5. [month 34] \end{quotation} \begin{quotation} \noindent\textbf{D6.5) Organization of an Event Targeted to the Scientific Community}: We will organize a public event aimed at presenting the CerCo compiler to the scientific community. The event could be affiliated to an international conference relevant to the project, and it could involve a tutorial on the use of the software developed in CerCo. Alternatively, it could consist in a course given in an international summer school on the use and implementation of the CerCo compiler. The deliverable date is only indicative, since we need to identify a suitable conference or summer school for affiliation. The event could be co-located and partially overlap with D6.4. [month 34] \end{quotation}
• ## Deliverables/D6.4-6.5/workshops.ltx

 r3192 The Consortium identified two potentially fruitful destinations at which to hold such events, taking into account suitable candidates and the opportunity to hold workshops during, or shortly after, the project lifetime, given our requested extension to end in month 39 (March 2013). Beyond each conference's call for participation, invitations to the two meetings were sent out to the following researchers and research groups, all world leaders in cost/timing analysis for programming languages, compilers and embedded systems: Prof. Kevin Hammond (St Andrews University, UK), Dr. Björn Franke (Edinburgh University, UK), Dr. Christian Ferdinand (CEO, AbsInt GmbH, DE), Prof. Germ{\'a}n Puebla, COSTA team (TU Madrid, E), and Prof. Bj\"orn Lisper (Mälardalen University, SE). The ETAPS event was one of 20 workshops organised over the 4 days either side of the main conference. The event also had three sessions shared with the QAPL workshop (Quantitative Aspects of Programming Languages and Systems), and was the better attended, and scientifically more successful, meeting. The main introduction to the project had more than 50 attendees, and some QAPL participants have expressed interest in future collaborations. The HiPEAC workshop was one of 24 such meetings in a programme organised in parallel with the 3 days of the main conference. Attendance was limited, which can partially be explained by the workshop being held in parallel with the main conference. There was practically no industrial attendance at any of the workshop's talks.
Nevertheless, the main conference also hosted an Industry Session and an Industrial Showcase, and the days preceding our workshop offered several good occasions to get in touch with representatives of the high-performance hardware industry and of European projects involved in advanced real-time architectures (parMERASA, T-CREST and PROARTIS). \paragraph{Scientific Outcomes} We give here a brief overview of the most interesting ones. The existence of two different approaches to source-level cost reasoning emerged at the HiPEAC event. The first, embraced by the EmBounded project, does not try to maximize the performance of the code, but is interested only in full predictability and simplicity of the analysis. The second approach, embraced by CerCo, tries to avoid any performance reduction, at the price of complicating the analysis; it is therefore closer to traditional WCET. Technology transfer between the two approaches seems possible. Kevin Hammond's group uses amortized analysis techniques to connect local costs of embedded programs in the Hume language to global costs, technology which it may be possible to transfer to the \cerco{} setting. A key difference between our approaches is that their Hume implementation uses the high predictability of their virtual machine to obtain local cost information, whereas \cerco{} produces such information for a complex native-code compiler. Replacing their virtual machine with our compiler also seems possible. At the ETAPS workshop Bj{\"o}rn Lisper drew attention to the many points of common interest and related techniques between the work on \cerco{} and his own on parametric WCET analysis. The most interesting difference between what we do and what is done in the WCET community is that we use (automated) theorem proving to deal with the control flow (i.e. to put an upper bound on the executions). The standard technique in parametric WCET consists in using polyhedral analysis to bound the number of loop iterations. That analysis produces constraints which are solved with the aid of off-the-shelf linear programming tools.
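The parametric-WCET idea described above can be made concrete with a toy sketch (all names and cycle counts here are hypothetical illustrations, not the CerCo prototypes or any actual WCET tool): once an analysis has bounded each loop symbolically, the WCET is a cost expression linear in those bounds, which can be instantiated once concrete input sizes are known.

```python
# Toy sketch (hypothetical): after loop-bound analysis, parametric WCET
# reduces to a cost expression that is linear in the symbolic loop bounds.

from dataclasses import dataclass

@dataclass(frozen=True)
class Loop:
    body_cost: int  # cycles per iteration, e.g. from object-code timings
    bound: str      # symbolic iteration bound, e.g. "n"

def parametric_wcet(straight_line_cost: int, loops: list[Loop]) -> str:
    """Build a WCET expression parametric in the loop bounds."""
    terms = [str(straight_line_cost)]
    terms += [f"{l.body_cost}*{l.bound}" for l in loops]
    return " + ".join(terms)

def instantiate(expr: str, **bounds: int) -> int:
    """Evaluate the expression for concrete values of the bounds."""
    return eval(expr, {}, bounds)

wcet = parametric_wcet(40, [Loop(body_cost=12, bound="n"),
                            Loop(body_cost=7, bound="m")])
print(wcet)                           # 40 + 12*n + 7*m
print(instantiate(wcet, n=100, m=8))  # 1296
```

In a real analysis the bounds would come from polyhedral constraints solved by a linear-programming tool; the point of the sketch is only that the result stays symbolic until deployment time.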
Comparing the effectiveness of theorem proving with that of polyhedral analysis in computing precise costs interested him. In addition to his own technical talk, he took the opportunity to advertise, and solicit interest in, the recently formed COST Action IC1202, Timing Analysis and Cost-Level Estimation (TACLe), of which he is Chair. Members of \cerco{} are going to join the COST Action. This offers very promising potential for future collaborations and the wider communication of results from \cerco{}. In particular, during the round table an immediate and significant application of the CerCo technology, which we had missed during the project, clearly emerged. WCET analysis has traditionally been used in the verification phase of a system, after all components have been built. Indeed, state-of-the-art WCET techniques all work on the object code, which is available only after compilation and linking. Since redesigning a software system is very costly, designers usually choose to over-specify the hardware initially and then simply verify that it is indeed sufficiently powerful. However, as systems' complexity rises, these initial safety margins can prove very expensive. Undertaking a lightweight (but less precise) analysis in the early stages of the design process has the potential to drastically reduce total hardware costs. To perform this analysis, a new generation of early-stage timing analysis tools that do not need the object code is required. The CerCo Trusted and Untrusted Prototypes already fill this niche: they work on the source code and give the user the possibility of axiomatizing the cost of external calls, or of computing a WCET that is parametric in the cost of unimplemented modules. A greater level of predictability, robustness and automation of the analysis is required before industrial exploitation becomes possible. A common theme emerged from the shared sessions with QAPL, and in particular from the invited talk there by Prof. Alessandra di Pierro on \emph{probabilistic} timing analysis: the parameterisation of a given timing analysis with respect to different cost \emph{algebras}. In the case of probabilistic analyses, costs are taken with respect to given probability distributions, with \emph{expected} costs being computed. A quick analysis of the labelling approach of CerCo reveals that the method assumes very weak conditions on the cost model that it is able to transfer from the object code to the source code. In particular, probabilistic cost models like the one used by di Pierro satisfy the invariants. Prof. Vardanega's talk emphasised a radical approach to probabilistic analyses: turning the processor/cache architecture into a probabilistic one, to yield an essentially predictable analysis. For applying the CerCo methodology in the presence of caches, embracing the probabilistic analysis may be the key ingredient. This idea already emerged in the discussions at ETAPS and was investigated during the last period of CerCo.
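The observation that the labelling method places only weak conditions on the cost model can be illustrated with a minimal sketch (hypothetical names, not CerCo's implementation): provided costs combine associatively with a zero element, the same fold over per-label costs works for plain cycle counts, for expected costs under a probability distribution, and, anticipating the function-valued algebras discussed below, for costs that depend on program parameters.

```python
# Minimal sketch (hypothetical): the labelling approach attaches a cost
# to each label and combines costs along the control flow.  If the
# combination only assumes a monoid, the same aggregation works for
# plain cycle counts, expected costs, and function-valued costs.

from functools import reduce

def total_cost(label_costs, combine, zero):
    """Fold per-label costs along an execution path."""
    return reduce(combine, label_costs, zero)

# 1. Deterministic algebra: natural numbers under addition (CerCo's case).
cycles = total_cost([4, 10, 3], lambda a, b: a + b, 0)

# 2. Probabilistic algebra: by linearity of expectation, expected costs
#    also add, so each distribution is summarised by its mean.
def expected(dist):  # dist: list of (cost, probability) pairs
    return sum(c * p for c, p in dist)

exp_cycles = total_cost([expected([(4, 0.9), (40, 0.1)]), 10.0, 3.0],
                        lambda a, b: a + b, 0.0)

# 3. Function-valued algebra (dependent labelling): costs are functions
#    of a loop-index valuation; combination is pointwise addition.
f = total_cost([lambda n: 12 * n, lambda n: 7],
               lambda g, h: (lambda n: g(n) + h(n)), lambda n: 0)

print(cycles)      # 17
print(exp_cycles)  # roughly 20.6
print(f(5))        # 67
```

The "weak conditions" of the labelling approach correspond here to the monoid laws assumed by the fold; nothing in the aggregation depends on costs being natural numbers.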
In the deterministic case studied in \cerco{}, we have taken a given, fixed cost algebra: natural numbers (obtained from Siemens data-sheet clock timings) under addition. However, Tranquilli's work on \emph{dependent labelling} already suggests a move to computing costs in algebras of \emph{functions} (in the case of his analysis of loop unrolling, cost expressions parameterised with respect to valuations of the loop index variables). The wider implications of such a move are yet to be explored, but probabilistic analysis fits the model, as do computations of costs that are parametric in the hardware state. At both events we presented preliminary results on the timing analysis of systems with pipelines, obtained by exposing the hardware state in the source code. Some members of the audience were skeptical, fearing that this exposes a level of complexity difficult to tame. However, until we have a working implementation and have tested the behaviour of invariant generators on the obtained source code, we honestly believe it is difficult to come to conclusions. The feedback obtained from discussions with industrial representatives and with representatives of the parMERASA and T-CREST projects was less significant, and it describes a bleak future for static timing analysis. Microprocessor and embedded systems developers are in a race to provide the largest amount of computing power on a single chip, with systems-on-chip at the centre of the scene during the Industrial Showcase. The major issue in the design of safety and non-safety systems is now how to exploit this additional power, whether to optimize the average case or simply because the additional power is present anyway and it would be a pity to waste it. The timing behaviour of a program running on one computing unit of these multi-cores or systems-on-chip is potentially greatly affected by the other units. Buses and caches are also shared, often in non-uniform ways, and different computations also interfere through the states of these shared components. Statically analysing a program in isolation for WCET yields totally useless bounds, because ignorance about the behaviour of the other computing nodes forces one always to assume the worst possible behaviour. The EU projects mentioned above, among others, and a large part of the scientific community are working on the design of alternative hardware that could make the worst case statically predictable again, but the pressure on the microprocessor manufacturers has so far been totally unsuccessful. The CerCo technology, implementing a form of static analysis, suffers from this problem as well and does not contribute to its solution. On the other hand, it is likely that, if a solution to the problem emerges, it could be exploited in CerCo too.