Changeset 3116

Apr 10, 2013, 5:07:26 PM


1 edited


  • Deliverables/D6.3/report.tex

    r3114 r3116  
We start presenting an idealized model of the execution of a generic
microprocessor (with caches) that has all interrupts disabled and no I/O
instructions. We then classify the models according to some properties of
their cost model. Then we show how to extend the labelling approach of CerCo
to cover models that are classified in a certain way.
\paragraph{The microprocessor model}
Let $\sigma,\sigma_1,\ldots$ range over $\Sigma$, the set of the fragments of [\ldots]
Let $\delta,\delta_1,\ldots$ range over $\Delta$, the set of the
fragments of the microprocessor state that hold the \emph{internal state} of the
microprocessor (e.g.\ the content of the pipeline and caches, the status of
the branch prediction unit, etc.).
Let $I,I_1,\ldots$ range over $\mathcal{I}$, the
set of \emph{instructions} of the processor and let
$\gamma,\gamma_1,\ldots$ range over $\Gamma$, the set of \emph{operands}
of instructions after the fetching and decoding passes.
Thus a pair $(I,\gamma)$ represents a \emph{decoded instruction} and already contains
the data required for execution. Execution needs to access the data state only
to write the result.
 \item The \emph{functional transition} function $\longrightarrow : \Sigma \to \Sigma$
   over data states. This is the only part of the semantics that is relevant
   to functional analysis.
 \item The \emph{internal state transition} function
   $\Longrightarrow : \Sigma \times \Delta \to \Delta$ that updates the internal
   state.
 \item The \emph{cost function} $K: \mathcal{I} \times \Gamma \times \Delta \to \mathbb{N}$
   that assigns a cost to transitions. Since decoded instructions hold the data
   they act on, the cost of an instruction may depend both on the data being
   manipulated and on the internal state.
Given a processor state $(\sigma,\delta)$, the processor evolves into the
new state $(\sigma',\delta')$ in $n$ cost units if
$\sigma \longrightarrow \sigma'$ and $(\sigma,\delta) \Longrightarrow \delta'$
and $fetch(\sigma) = (I,\gamma)$ and $K(I,\gamma,\delta) = n$.
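The evolution step just defined can be sketched concretely. The following Python toy model is purely illustrative: every name (`fetch`, `functional`, `internal`, `K`, `step`), the encoding of states as dictionaries, and the concrete costs are our own assumptions, not part of the report.

```python
# Toy instance of the model (all encodings and costs are invented).
# A data state sigma is a dict holding the code, the program counter and
# the registers; an internal state delta is the last executed opcode,
# playing the role of a one-entry "cache".

def fetch(sigma):
    """Decode the instruction at the current program counter: returns (I, gamma)."""
    return sigma["code"][sigma["pc"]]

def functional(sigma):
    """The functional transition sigma --> sigma' over data states."""
    I, gamma = fetch(sigma)
    regs = dict(sigma["regs"])
    if I == "addi":                    # gamma = (destination, source, immediate)
        dst, src, imm = gamma
        regs[dst] = regs[src] + imm
    return {"code": sigma["code"], "pc": sigma["pc"] + 1, "regs": regs}

def internal(sigma, delta):
    """The internal state transition (sigma, delta) ==> delta'."""
    I, _ = fetch(sigma)
    return (I,)                        # remember the opcode just executed

def K(I, gamma, delta):
    """Cost function: cheaper when the opcode is already in the internal state."""
    return 1 if delta == (I,) else 3

def step(sigma, delta):
    """One evolution step: returns (sigma', delta', n) with n the cost in cost units."""
    I, gamma = fetch(sigma)
    return functional(sigma), internal(sigma, delta), K(I, gamma, delta)
```

Running two consecutive `addi` instructions shows the cost depending on the internal state: the first one pays the "cold" cost, the second one the reduced cost.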
An \emph{execution history} is a stream of states and transitions
$\sigma_0 \longrightarrow \sigma_1 \longrightarrow \sigma_2 \ldots$
that can be either finite or infinite. Given an execution history, the
corresponding \emph{execution path} is the stream of program counters
obtained from the execution history by forgetting all the remaining information.
The execution path of the history
$\sigma_0 \longrightarrow \sigma_1 \longrightarrow \sigma_2 \ldots$
is $pc_0,pc_1,\ldots$ where each $pc_i$ is the program counter of $\sigma_i$.
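The forgetful projection from histories to paths is a one-liner. The encoding below (states as dictionaries with a `"pc"` field) is our own illustrative assumption, not the report's.

```python
# Illustrative sketch: a finite execution history as a list of data
# states, and the execution path obtained by forgetting everything
# except the program counters.

def execution_path(history):
    """Project an execution history onto its stream of program counters."""
    return [sigma["pc"] for sigma in history]
```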
We claim this simple model to be generic enough to cover real world [\ldots]
\paragraph{Classification of cost models}
A cost function is \emph{exact} if it assigns to transitions the real cost
incurred. It is \emph{approximated} if it returns an upper bound of the real [\ldots]
A cost function is \emph{operand insensitive} if it does not depend on the
operands of the instructions to be executed. Formally, $K$ is operand insensitive
if there exists a $K': \mathcal{I} \times \Delta \to \mathbb{N}$ such that
$K(I,\gamma,\delta) = K'(I,\delta)$. In this case, with an abuse of terminology,
we will identify $K$ with $K'$.
The cost functions of simple hardware architectures, in particular RISC ones,
are naturally operand insensitive. In the other cases an exact operand sensitive
cost function can always be turned into an approximated operand insensitive one
by taking $K'(I,\delta) = \max\{K(I,\gamma,\delta)~|~\gamma \in \Gamma\}$.
The question when one performs this approximation is how severe the
approximation is. A measure is given by the \emph{jitter}, which is defined
as the difference between the best and worst cases. In our case, the jitter
of the approximation $K'$ would be
$\max\{K(I,\gamma,\delta)~|~\gamma \in \Gamma\} - \min\{K(I,\gamma,\delta)~|~\gamma \in \Gamma\}$.
According to experts of WCET analysis, the jitters relative to operand sensitivity
in modern architectures are small enough to make WCET estimations still useful.
Therefore, in the sequel we will focus on operand insensitive cost models only.
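The max-approximation and the jitter can be computed directly once the cost table is known. In this sketch the operand domain and the concrete costs are invented for illustration; only the constructions ($K'$ as a maximum over $\Gamma$, jitter as worst case minus best case) come from the text above.

```python
# Turning an exact operand sensitive cost function K into an approximated
# operand insensitive K' by maximising over the operands, and measuring
# the severity of the approximation via the jitter.

GAMMA = ["small", "large"]             # a toy operand domain

def K(I, gamma, delta):
    """Exact cost (invented): a multiplication on large operands is slower."""
    if I == "mul":
        return 8 if gamma == "large" else 4
    return 1

def K_prime(I, delta):
    """Operand insensitive approximation: worst case over all operands."""
    return max(K(I, gamma, delta) for gamma in GAMMA)

def jitter(I, delta):
    """Worst case minus best case over the operands."""
    costs = [K(I, gamma, delta) for gamma in GAMMA]
    return max(costs) - min(costs)
```

An instruction whose cost does not vary with the operands has jitter zero, so nothing is lost by the approximation on such instructions.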
Note that cost models that are operand insensitive may still have significant
dependencies on the data manipulated by the instructions, because of the
dependency on internal states. For example, an instruction that reads data
from the memory may change the state of the cache and thus greatly affect the
execution time of successive instructions.
Nevertheless, operand insensitivity is an important property for the labelling
approach. In~\cite{tranquilli} we introduced \emph{dependent labels} and
\emph{dependent costs}, which allow the assignment to basic blocks of
instructions of costs that also depend on the state of the high level
program at the beginning of the block. The idea we will now try to pursue is
to exploit dependent costs to capture cost models that are sensitive to
the internal states of the microprocessor. Operand sensitivity, however, is
a major issue in the presence of dependent labels: to lift a data sensitive cost
model from the object code to the source language, we need a function that
maps high level states to the operands of the instructions to be executed,
and we need these functions to be simple enough to allow reasoning over them.
Complex optimizations performed by the compiler, however, make the mappings
extremely cumbersome and history dependent. Moreover, keeping
track of status transformations during compilation would be a significant
departure from classical compilation models which we are not willing to
undertake. By assuming operand insensitivity, we get rid of part
of the problem: we only need to make our costs dependent on the internal
state of the microprocessor. The latter, however, is not at all visible
in the high level code. Our next step is to make it visible.
In general, the value of the internal state at a certain point in the program
history is affected by all the preceding history. For instance, the
pipeline stages either hold operands of instructions in execution or
bubbles\footnote{A bubble is formed in the pipeline stage $n$ when an
instruction is stuck in the pipeline stage $n-1$, waiting for some data
which is not available yet.}. The execution history contains data states
that in turn contain the object code data which we do not know how to
relate simply to the source code data. We therefore introduce a new
classification.
A \emph{view} over internal states is a pair $(\mathcal{V},|.|)$ where
$\mathcal{V}$ is a non empty set and $|.|:\Delta \to \mathcal{V}$ is
a forgetful function over internal states.
The operand insensitive cost function $K$ is \emph{dependent on the view
$\mathcal{V}$} if there exists a $K': \mathcal{I} \times \mathcal{V} \to \mathbb{N}$
such that $K(I,\delta) = K'(I,|\delta|)$. In this case, with an abuse of
terminology, we will identify $K$ with $K'$.
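A hypothetical concrete instance of a view may help fix intuitions; every detail below (the cache encoding, the address-set view, the hit/miss costs) is invented for illustration. The point is only that the cost factors through the forgetful function: $K(I,\delta) = K'(I,|\delta|)$.

```python
# The internal state delta is a data cache mapping addresses to values.
# The view |.| forgets the cached values and keeps only the set of
# cached addresses, which is all the cost needs to decide hits and misses.

def view(delta):
    """|.| : Delta -> V, a forgetful function over internal states."""
    return frozenset(delta)

def K(I, delta):
    """Operand insensitive cost; here I is an (opcode, address) pair."""
    opcode, addr = I
    return (1 if addr in delta else 10) if opcode == "load" else 1

def K_prime(I, v):
    """K' over views, chosen so that K(I, delta) == K_prime(I, view(delta))."""
    opcode, addr = I
    return (1 if addr in v else 10) if opcode == "load" else 1
```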
Among the possible views, the ones that we will easily be able to work
with in the labelling approach are the \emph{execution path dependent views}.
A view $(\mathcal{V},|.|)$ is execution path dependent when there exists
a transition function $\hookrightarrow : PC \times \mathcal{V} \to \mathcal{V}$
such that for all $(\sigma,\delta)$ and $pc$ such that $pc$ is the program
counter of $\sigma$, $(\sigma,\delta) \Longrightarrow \delta'$ iff
$(pc,|\delta|) \hookrightarrow |\delta'|$.
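The defining feature of such views is that iterating the transition $\hookrightarrow$ along an execution path predicts every view without ever inspecting the data state. The toy parameters below (a 4-line direct-mapped instruction cache as the view, with fetching $pc$ filling line $pc \bmod 4$) are our own assumptions.

```python
# Sketch of an execution path dependent view: the view evolves as a
# function of the program counter alone.

def hook(pc, v):
    """The transition over views: fetching pc fills cache line pc % 4."""
    v = dict(v)
    v[pc % 4] = pc
    return v

def views_along(path):
    """The successive views along an execution path, from the empty cache."""
    v, out = {}, []
    for pc in path:
        v = hook(pc, v)
        out.append(dict(v))
    return out
```

On the path $0, 1, 2, 0, 4$ the last step evicts address $0$ from line $0$ in favour of address $4$, so the final view records the conflict miss that a cost function dependent on this view would charge for.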
As a final definition, we say that a cost function $K$ is
\emph{data independent} if it is dependent on a view that is execution path
dependent. In the next paragraph we will show how we can extend the
labelling approach to deal with data independent cost models.