Changeset 3116
Timestamp: Apr 10, 2013, 5:07:26 PM
File (1 edited):
Deliverables/D6.3/report.tex
r3114 → r3116:

We start presenting an idealized model of the execution of a generic
microprocessor (with caches) that has all interrupts disabled and no I/O
instructions. We then classify the models according to some properties of
their cost model. Then we show how to extend the labelling approach of CerCo
to cover models that are classified in a certain way.

\paragraph{The microprocessor model}

Let $\sigma,\sigma_1,\ldots$ range over $\Sigma$, the set of the fragments of
…

Let $\delta,\delta_1,\ldots$ range over $\Delta$, the set of the
fragments of the microprocessor state that holds the \emph{internal state} of
the microprocessor (e.g. the content of the pipeline and caches, the status
of the branch prediction unit, etc.).

…

Let $I,I_1,\ldots$ range over $\mathcal{I}$, the
set of \emph{instructions} of the processor and let
$\gamma,\gamma_1,\ldots$ range over $\Gamma$, the set of \emph{operands}
of instructions after the fetching and decoding passes.
Thus a pair $(I,\gamma)$ represents a \emph{decoded instruction} and already
contains the data required for execution. Execution needs to access the data
state only to write the result.

…

functions:
\begin{itemize}
\item The \emph{functional transition} function $\longrightarrow : \Sigma \to \Sigma$
over data states. This is the only part of the semantics that is relevant
to functional analysis.
\item The \emph{internal state transition} function
$\Longrightarrow : \Sigma \times \Delta \to \Delta$ that updates the internal
state.
\item The \emph{cost function} $K: \mathcal{I} \times \Gamma \times \Delta \to \mathbb{N}$
that assigns a cost to transitions. Since decoded instructions hold the data
they act on, the cost of an instruction may depend both on the data being
manipulated and on the internal state.
\end{itemize}

Given a processor state $(\sigma,\delta)$, the processor evolves into the
new state $(\sigma',\delta')$ in $n$ cost units if
$\sigma \longrightarrow \sigma'$ and $(\sigma,\delta) \Longrightarrow \delta'$
and $fetch(\sigma) = (I,\gamma)$ and $K(I,\gamma,\delta) = n$.

An \emph{execution history} is a stream of states and transitions
$\sigma_0 \longrightarrow \sigma_1 \longrightarrow \sigma_2 \ldots$
that can be either finite or infinite. Given an execution history, the
corresponding \emph{execution path} is the stream of program counters
obtained from the execution history by forgetting all the remaining
information. The execution path of the history
$\sigma_0 \longrightarrow \sigma_1 \longrightarrow \sigma_2 \ldots$ is
$pc_0,pc_1,\ldots$ where each $pc_i$ is the program counter of $\sigma_i$.

We claim this simple model to be generic enough to cover real world
architectures.
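The model above can be instantiated concretely. The following sketch is an invented toy machine, not part of the CerCo development: the data state $\sigma$ is a program counter, the internal state $\delta$ is the set of cached memory lines, and `fetch`, the two transition functions and the cost function `K` play the roles of their homonyms in the text. All instruction names and cost figures are illustrative assumptions.

```python
# Toy instance of the idealized processor model: data state sigma,
# internal state delta (a cache, as a set of cached addresses), and the
# three semantic functions. Everything here is a made-up illustration.

PROGRAM = {            # pc -> decoded instruction (I, gamma)
    0: ("LOAD", 100),  # first access to address 100: a cache miss
    1: ("LOAD", 100),  # second access: now a cache hit
    2: ("HALT", None),
}

def fetch(sigma):
    """fetch : Sigma -> I x Gamma, the decoded instruction at the pc."""
    return PROGRAM[sigma["pc"]]

def functional_step(sigma):
    """Functional transition  sigma --> sigma'  over data states."""
    new = dict(sigma)
    new["pc"] += 1
    return new

def internal_step(sigma, delta):
    """Internal state transition  (sigma, delta) ==> delta'."""
    instr, operand = fetch(sigma)
    if instr == "LOAD":
        return delta | {operand}   # the accessed line enters the cache
    return delta

def K(instr, operand, delta):
    """Cost function K : I x Gamma x Delta -> N (cycles, invented)."""
    if instr == "LOAD":
        return 1 if operand in delta else 10   # hit vs. miss
    return 1

def run(sigma, delta):
    """Execute until HALT; return the execution path and total cost."""
    path, total = [], 0
    while fetch(sigma)[0] != "HALT":
        instr, operand = fetch(sigma)
        path.append(sigma["pc"])
        total += K(instr, operand, delta)
        # both transitions read the *current* (sigma, delta)
        sigma, delta = functional_step(sigma), internal_step(sigma, delta)
    return path, total
```

Running `run({"pc": 0}, set())` yields the execution path `[0, 1]` with total cost 11: the same instruction costs 10 cycles on the first execution and 1 on the second, showing how the cost depends on the internal state.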
\paragraph{Classification of cost models}
A cost function is \emph{exact} if it assigns to transitions the real cost
incurred. It is \emph{approximated} if it returns an upper bound of the real
cost.

A cost function is \emph{operand insensitive} if it does not depend on the
operands of the instructions to be executed. Formally, $K$ is operand
insensitive if there exists a $K': \mathcal{I} \times \Delta \to \mathbb{N}$
such that $K(I,\gamma,\delta) = K'(I,\delta)$. In this case, with an abuse of
terminology, we will identify $K$ with $K'$.

The cost functions of simple hardware architectures, in particular RISC ones,
are naturally operand insensitive. In the other cases an exact operand
sensitive cost function can always be turned into an approximated operand
insensitive one by taking
$K'(I,\delta) = \max\{K(I,\gamma,\delta) \mid \gamma \in \Gamma\}$.
The question when one performs this approximation is how severe it is. A
measure is given by the \emph{jitter}, which is defined as the difference
between the best and worst cases. In our case, the jitter of the
approximation $K'$ would be
$\max\{K(I,\gamma,\delta) \mid \gamma \in \Gamma\} - \min\{K(I,\gamma,\delta) \mid \gamma \in \Gamma\}$.
According to experts of WCET analysis, the jitters due to operand sensitivity
in modern architectures are small enough to make WCET estimations still
useful. Therefore, in the sequel we will focus on operand insensitive cost
models only.

Note that cost models that are operand insensitive may still depend
significantly on the data manipulated by the instructions, because of the
dependency on internal states. For example, an instruction that reads data
from memory may change the state of the cache and thus greatly affect the
execution time of successive instructions.
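The max-over-operands approximation and its jitter can be sketched in a few lines. The instruction set, the operand space and the cycle counts below are invented for illustration; only the construction $K'(I,\delta) = \max_\gamma K(I,\gamma,\delta)$ and the jitter $\max - \min$ come from the text.

```python
# Sketch of the operand-insensitivity approximation: from an exact,
# operand sensitive cost function K(I, gamma, delta), derive an
# approximated operand insensitive K'(I, delta) by taking the maximum
# over all operands, and measure the jitter (max - min) it introduces.
# Instructions, operands and cycle counts are invented.

GAMMA = ["r0", "r1", "mem"]   # the operand space Gamma (toy)

def K(instr, gamma, delta):
    """Exact cost: memory operands cost two extra cycles (invented)."""
    base = {"ADD": 1, "MUL": 3}[instr]
    return base + (2 if gamma == "mem" else 0)

def K_insensitive(instr, delta):
    """Operand insensitive upper bound K'(I, delta) = max over gamma."""
    return max(K(instr, g, delta) for g in GAMMA)

def jitter(instr, delta):
    """Severity of the approximation: best vs. worst case over gamma."""
    costs = [K(instr, g, delta) for g in GAMMA]
    return max(costs) - min(costs)
```

For instance, `K_insensitive("MUL", None)` is 5 (the memory-operand worst case) and `jitter("ADD", None)` is 2: the approximation is sound but over-charges register operands by the jitter.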
Nevertheless, operand insensitivity is an important property for the
labelling approach. In~\cite{tranquilli} we introduced \emph{dependent
labels} and \emph{dependent costs}, i.e. the possibility of assigning to
basic blocks of instructions costs that also depend on the state of the high
level program at the beginning of the block. The idea we will now try to
pursue is to exploit dependent costs to capture cost models that are
sensitive to the internal states of the microprocessor. Operand sensitivity,
however, is a major issue in presence of dependent labels: to lift a data
sensitive cost model from the object code to the source language, we need a
function that maps high level states to the operands of the instructions to
be executed, and we need these functions to be simple enough to allow
reasoning over them. Complex optimizations performed by the compiler,
however, make the mappings extremely cumbersome and history dependent.
Moreover, keeping track of status transformations during compilation would be
a significant departure from classical compilation models which we are not
willing to undertake. By assuming away operand sensitivity, we get rid of
part of the problem: we only need to make our costs dependent on the internal
state of the microprocessor. The latter, however, is not at all visible in
the high level code. Our next step is to make it visible.

In general, the value of the internal state at a certain point in the program
history is affected by all the preceding history. For instance, the pipeline
stages either hold operands of instructions in execution or
bubbles\footnote{A bubble is formed in the pipeline stage $n$ when an
instruction is stuck in the pipeline stage $n-1$, waiting for some data which
is not available yet.}.
The execution history contains data states that in turn contain the object
code data, which we do not know how to relate simply to the source code data.
We therefore introduce a new classification.

A \emph{view} over internal states is a pair $(\mathcal{V},|.|)$ where
$\mathcal{V}$ is a non empty set and $|.| : \Delta \to \mathcal{V}$ is a
forgetful function over internal states.

The operand insensitive cost function $K$ is \emph{dependent on the view
$\mathcal{V}$} if there exists a
$K': \mathcal{I} \times \mathcal{V} \to \mathbb{N}$ such that
$K(I,\delta) = K'(I,|\delta|)$. In this case, with an abuse of terminology,
we will identify $K$ with $K'$.

Among the possible views, the ones that we will easily be able to work with
in the labelling approach are the \emph{execution path dependent views}. A
view $(\mathcal{V},|.|)$ is execution path dependent when there exists a
transition function $\hookrightarrow : PC \times \mathcal{V} \to \mathcal{V}$
such that for all $(\sigma,\delta)$ and $pc$ such that $pc$ is the program
counter of $\sigma$, $(\sigma,\delta) \Longrightarrow \delta'$ iff
$(pc,|\delta|) \hookrightarrow |\delta'|$.

As a final definition, we say that a cost function $K$ is
\emph{data independent} if it is dependent on a view that is execution path
dependent. In the next paragraph we will show how we can extend the
labelling approach to deal with data independent cost models.
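An execution path dependent view can be sketched concretely. The toy assumption below, not taken from the text, is that the internal state is a pair (instruction-cache lines, data-cache lines): forgetting the data cache yields a view whose evolution is determined by the program counter alone, so it admits the transition $(pc,|\delta|) \hookrightarrow |\delta'|$, whereas the full state does not, because the data cache evolves in data dependent ways.

```python
# Sketch of an execution path dependent view. The internal state delta
# is assumed (for illustration only) to be a pair of an instruction
# cache and a data cache, each a set of cached line numbers.

LINE = 4  # instructions per cache line (toy figure)

def view(delta):
    """|.| : Delta -> V, forgetting the data-dependent data cache."""
    icache, _dcache = delta
    return frozenset(icache)

def internal_step(sigma, delta):
    """(sigma, delta) ==> delta': fetching at the pc fills the i-cache;
    the data cache would change in data dependent ways (elided)."""
    icache, dcache = delta
    return (icache | {sigma["pc"] // LINE}, dcache)

def hook(pc, v):
    """(pc, |delta|) 'hookrightarrow' |delta'|: the view evolves
    from the program counter alone, with no access to data."""
    return v | {pc // LINE}
```

The defining property is that the two routes commute: `view(internal_step(sigma, delta))` equals `hook(sigma["pc"], view(delta))` for every state, which is what makes the view usable along an execution path without knowing the data.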