# Changeset 3120

Timestamp:
Apr 11, 2013, 2:04:33 PM
Message:

...

File:
1 edited

## Deliverables/D6.3/report.tex

Among the possible views, the ones that we will easily be able to work with in the labelling approach are the \emph{execution history dependent views}. A view $(\mathcal{V},|.|)$ is execution history dependent when there exists a transition function $\hookrightarrow : PC \times \mathcal{V} \to \mathcal{V}$ such that for all $(\sigma,\delta)$ and $pc$ such that $pc$ is the program counter of $\sigma$, $(\sigma,\delta) \Longrightarrow \delta'$ iff $(pc,|\delta|) \hookrightarrow |\delta'|$. A view $(\mathcal{V},|.|)$ is execution history dependent with a lookahead of length $n$ when there exists a transition function $\hookrightarrow : PC^n \times \mathcal{V} \to \mathcal{V}$ such that for all $(\sigma,\delta)$ and $pc_1,\ldots,pc_n$ such that every $pc_i$ is the program counter of $\sigma_i$ defined by $\sigma \longrightarrow^i \sigma_i$, we have $(\sigma,\delta) \Longrightarrow \delta'$ iff $((pc_1,\ldots,pc_n),|\delta|) \hookrightarrow |\delta'|$. Less formally, a view is dependent on the execution history if the evolution of the views is fully determined by the evolution of the program counters. To better understand the definition, consider the case where the next instruction to be executed is a conditional jump. Without knowing the values of the registers, it is impossible to determine whether the true or the false branch will be taken. Therefore it is likely to be impossible to determine the value of the view that follows the current one. On the other hand, knowing the program counter that will be reached by executing the conditional branch, we also know which branch will be taken, and this may be sufficient to compute the new view. Lookaheads longer than 1 will be used in the case of pipelines: when executing one instruction in a system with a pipeline of length $n$, the internal state of the pipeline already holds information on the next $n$ instructions to be executed.
The reference to execution histories in the names is due to the fact that the transition function can be lifted to the type $EP \times \mathcal{V} \to \mathcal{V}$ by folding the definition over the path trace: $((pc_0,\ldots,pc_n),v_0) \hookrightarrow v_{n+1}$ iff for all $i \leq n$, $(pc_i,v_i) \hookrightarrow v_{i+1}$. In presence of a lookahead of length $n$, the lifting becomes $((pc_0,\ldots,pc_m),v_0) \hookrightarrow v_{m-n+1}$ iff for all $i \leq m - n$, $((pc_i,\ldots,pc_{i+n}),v_i) \hookrightarrow v_{i+1}$. Moreover, the folding is clearly associative: $(\tau_1 \mathrel{@} \tau_2, v) \hookrightarrow v''$ iff there exists a $v'$ such that $(\tau_1, v) \hookrightarrow v'$ and $(\tau_2, v') \hookrightarrow v''$. The assumptions on the lack of influence of operand values on stalls and execution times ensure the existence of the transition function $\hookrightarrow : PC \times \{0,1\}^n \to \{0,1\}^n$ and of the data independent cost function $K: PC \times \{0,1\}^n \to \mathbb{N}$. The transition function for a pipeline with $n$ stages may require $n$ lookaheads: $\hookrightarrow : PC^n \times \{0,1\}^n \to \{0,1\}^n$. While the model is a bit simplistic, it can nevertheless be used to describe the behaviour of simple pipelines. The cost $K(v_0,L)$ associated to the block labelled with $L$ and the initial view $v_0$ is $K(pc_0,v_0) + K(pc_1,v_1) + \ldots + K(pc_n,v_n)$ where for every $i < n$, $((pc_i,\ldots,pc_{i+l}),v_i) \hookrightarrow v_{i+1}$ and $l$ is the lookahead required. When the lookahead requires program counters outside the block under analysis, we are free to use dummy ones because, by assumption, the parts of the view that deal with future behaviour have no impact on the cost of the previous operations. The static analysis can be performed in linear time in the size of the program because the cardinality of the set of labels (i.e. the number of basic blocks) is bounded by the size of the program.
The reason is that, if $pc_i$ is a function call executed in the view (state) $v_i$, it is not true that, after the return, the state will be $v_{i+1}$ defined by $(pc_i,pc_{i+1},v_i) \hookrightarrow v_{i+1}$ (assuming a lookahead of 1, which is already problematic). Indeed $pc_{i+1}$ is the program counter of the instruction that follows the call, whereas the next program counter to be reached is the one of the body of the call. Moreover, even if the computation made sense, $v_{i+1}$ would be the state at the beginning of the execution of the body of the call, while we need to know the state after the function returns. The latter cannot be statically predicted. That is why we had to impose labels after calls. Associativity, on the other hand, trivially holds. So far, we have computed the dependent cost $K: \mathcal{V} \times \mathcal{L} \to \mathbb{N}$ that associates a cost to basic blocks and views. The second step consists in statically computing the transition function $\hookrightarrow : \mathcal{L} \times \mathcal{L} \times \mathcal{V} \to \mathcal{V}$ that associates to each pair of consecutively executed basic blocks and input view the view obtained at the end of the execution of the first block.
The definition is the following: $(L,L',v) \hookrightarrow v'$ iff $((pc_0,\ldots,pc_n,pc'_0,\ldots,pc'_m),v) \hookrightarrow v'$ where $(pc_0,\ldots,pc_n)$ are the program counters of the block labelled by $L$ and $(pc'_0,\ldots,pc'_m)$ are those of the block labelled with $L'$. We assume here that $m$ is always greater than or equal to the lookahead required by the transition function $\hookrightarrow$ over execution paths. When this is not the case, we could make the new transition function take in input a longer lookahead of labels. Alternatively, we may introduce enough \texttt{NOP}s at the beginning of the block $L'$ to enforce the property. In the rest of the paragraph we assume to have followed the second approach to simplify the presentation. The extended transition function over labels is not present in the basic labelling approach. Actually, the basic labelling approach can be understood as the generalized approach where the view $\mathcal{V} = \mathbbm{1}$. The computation of the extended $\hookrightarrow$ transition function is again linear in the size of the program. Both versions of $K$ and $\hookrightarrow$ defined over labels can be lifted to work over traces by folding them over the list of labels in the trace: for $K$ we have $K((L_1,\ldots,L_n),v) = K(L_1,v) + K((L_2,\ldots,L_n),v')$ where $(L_1,L_2,v) \hookrightarrow v'$; for $\hookrightarrow$ we have $((L_1,\ldots,L_n),v) \hookrightarrow v''$ iff $(L_1,L_2,v) \hookrightarrow v'$ and $((L_2,\ldots,L_n),v') \hookrightarrow v''$. The two definitions are also clearly associative.
The instrumented code keeps track of the values of the views during execution: \begin{itemize} \item We define three global variables \texttt{\_\_cost}, initialized at 0, \texttt{\_\_label}, initialized with \texttt{NULL}, and \texttt{\_\_view}, uninitialized. \item At the beginning of every basic block labelled by $L$ we add the following code fragment: \begin{tabular}{l} \texttt{\_\_view = \_\_next(\_\_label,}$L$\texttt{,\_\_view);}\\ \texttt{\_\_cost += }$K(\mbox{\texttt{\_\_view}},L)$\texttt{;}\\ \texttt{\_\_label = }$L$\texttt{;} \end{tabular} where \texttt{\_\_next(}$L_1$\texttt{,}$L_2$\texttt{,}$v$\texttt{)} $= v'$ iff $(L_1,L_2,v) \hookrightarrow v'$, unless $L_1$ is \texttt{NULL}. In that case \texttt{\_\_next(NULL,}$L$\texttt{,}$v$\texttt{)} $= v_0$ where $v_0 = |\delta_0|$ and $\delta_0$ is the initial value of the internal state at the beginning of program execution. The first line of the code fragment computes the view at the beginning of the execution of the block from the view at the end of the previous block. Then we update the cost variable with the cost of the block. Finally we remember the current block to use it for the computation of the next view at the beginning of the next block. \end{itemize} \paragraph{An example of instrumentation in presence of a pipeline} In Figure~\ref{factprogi} we show how the instrumentation of a program that computes the factorial of 10 would look in presence of a pipeline. The instrumentation has been manually produced.
The \texttt{\_\_next} function says that the body of the internal loop of the \texttt{fact} function can be executed in two different internal states, summarized by the views 2 and 3. The view 2 holds at the beginning of even iterations, while the view 3 holds at the beginning of odd ones. Therefore even and odd iterations are assigned a different cost. Also the code after the loop can be executed in two different states, depending on the parity of the last loop iteration.
\begin{figure}[t]
\begin{center}
\begin{verbatim}
int fact (int n) {
  int i, res = 1;
  for (i = 1 ; i <= n ; i++)
    res *= i;
  return res;
}

int main () {
  return (fact(10));
}
\end{verbatim}
\end{center}
\caption{A simple program that computes the factorial of 10.\label{factprog}}
\end{figure}
The definitions of \texttt{\_\_next} and \texttt{\_\_K} are just examples. For instance, it is possible as well that each one of the 10 iterations is executed in a different internal state.
\begin{figure}[t]
\begin{tiny}
\begin{verbatim}
int __cost = 8;
int __label = 0;
int __view;

void __cost_incr(int incr) {
  __cost = __cost + incr;
}

int __next(int label1, int label2, int view) {
       if (label1 == 0 && label2 == 0)              return 0;
  else if (label1 == 0 && label2 == 1)              return 1;
  else if (label1 == 1 && label2 == 2)              return 2;
  else if (label1 == 2 && label2 == 2 && view == 2) return 3;
  else if (label1 == 2 && label2 == 2 && view == 3) return 2;
  else if (label1 == 2 && label2 == 3 && view == 2) return 1;
  else if (label1 == 2 && label2 == 3 && view == 3) return 0;
  else if (label1 == 3 && label2 == 4 && view == 0) return 0;
  else if (label1 == 3 && label2 == 4 && view == 1) return 0;
}

int __K(int view, int label) {
       if (view == 0 && label == 0) return 3;
  else if (view == 1 && label == 1) return 14;
  else if (view == 2 && label == 2) return 35;
  else if (view == 3 && label == 2) return 26;
  else if (view == 0 && label == 3) return 6;
  else if (view == 1 && label == 3) return 8;
  else if (view == 0 && label == 4) return 6;
}

int fact(int n) {
  int i;
  int res;
  int k;
  __view = __next(__label,1,__view);
  k = __K(__view,1);
  __cost_incr(k);
  __label = 1;
  res = 1;
  for (i = 1; i <= n; i = i + 1) {
    __view = __next(__label,2,__view);
    __cost_incr(__K(__view,2));
    __label = 2;
    res = res * i;
  }
  __view = __next(__label,3,__view);
  k = __K(__view,3);
  __cost_incr(k);
  __label = 3;
  return res;
}

int main(void) {
  int t;
  int k;
  __view = __next(__label,0,__view);
  k = __K(__view,0);
  __cost_incr(k);
  __label = 0;
  t = fact(10);
  __view = __next(__label,4,__view);
  k = __K(__view,4);
  __cost_incr(k);
  __label = 4;
  return t;
}
\end{verbatim}
\end{tiny}
\caption{The instrumented version of the program in Figure~\ref{factprog}.\label{factprogi}}
\end{figure}
\end{document}