Index: /Deliverables/D6.3/pipelines.tex
===================================================================
--- /Deliverables/D6.3/pipelines.tex (revision 3207)
+++ /Deliverables/D6.3/pipelines.tex (revision 3208)
@@ -48,5 +48,5 @@
also depends on the internal state of stateful hardware components that are
designed to optimize the average case. Pipelines, caches, branch predictors are
-tipical examples of components that have no impact on the functional behavior
+typical examples of components that have no impact on the functional behavior
of the code, but that greatly augment the code performance.
@@ -68,5 +68,5 @@
and in traditional WCET. Only the latter can deal with modern hardware.
Section~\ref{static2} contains the new material. We revise our static analysis
-and the labelling approach in general to accomodate modern hardware.
+and the labelling approach in general to accommodate modern hardware.
Temporary conclusions are found in Section~\ref{conclusions}.
@@ -77,5 +77,5 @@
\begin{itemize}
\item There is a lot of recent and renewed activity in the formal method
- community to accomodate resource analysis using techniques derived from
+ community to accommodate resource analysis using techniques derived from
functional analysis (type systems, logics, abstract interpretation,
amortized analysis applied to data structures, etc.)
@@ -140,5 +140,5 @@
functional and non functional invariants. This is what we attempted with
the CerCo Cost Annotating Frama-C Plug-In. The Frama-C architecture allows
-several plug-ins to perform all kind of static analisys on the source code,
+several plug-ins to perform all kinds of static analysis on the source code,
reusing results from other plug-ins and augmenting the source code with their
results. The techniques are absolutely not limited to linear algebra and
@@ -280,5 +280,5 @@
by taking $K'(I,\delta) = \max\{K(I,\gamma,\delta)~|~\gamma \in \Gamma\}$.
The question when one performs these approximation is how severe the
-approsimation is. A measure is given by the \emph{jitter}, which is defined
+approximation is. A measure is given by the \emph{jitter}, which is defined
as the difference between the best and worst cases. In our case, the jitter
of the approximation $K'$ would be $\max\{K(I,\gamma,\delta)~|~\gamma \in \Gamma\} - \min\{K(I,\gamma,\delta)~|~\gamma \in \Gamma\}$. According to experts
@@ -305,5 +305,5 @@
and we need these functions to be simple enough to allow reasoning over them.
Complex optimizations performed by the compiler, however, make the mappings
-extremely cumbersomes and history dependent. Moreover, keeping
+extremely cumbersome and history dependent. Moreover, keeping
track of status transformations during compilation would be a significant
departure from classical compilation models which we are not willing to
@@ -316,5 +316,5 @@
history is affected by all the preceding history. For instance, the
pipeline stages either hold operands of instructions in execution or bubbles
-\footnote{A bubble is formed in the pipeline stage $n$ when an instruction is stucked in the pipeline stage $n-1$, waiting for some data which is not available yet.}. The execution history contains data states that in turn contain the
+\footnote{A bubble is formed in pipeline stage $n$ when an instruction is stuck in pipeline stage $n-1$, waiting for data that is not yet available.}. The execution history contains data states that in turn contain the
object code data which we do not know how to relate simply to the source code
data. We therefore introduce a new classification.
@@ -351,5 +351,5 @@
be executed.
-The reference to execution historys in the names is due to the following
+The reference to execution histories in the names is due to the following
fact: every execution history dependent transition function $\hookrightarrow$
can be lifted to the type $EP \times \mathcal{V} \to \mathcal{V}$ by folding
@@ -406,9 +406,9 @@
$\hookrightarrow : PC^n \times \{0,1\}^n \to \{0,1\}^n$.
-While the model is a bit simplicistic, it can nevertheless be used to describe
+While the model is a bit simplistic, it can nevertheless be used to describe
existing pipelines. It is also simple to be convinced that the same model
also captures static branch prediction: speculative execution of conditional
jumps is performed by always taking the same branch which does not depend on
-the execution history. In order to take in account jumpt predictions based on
+the execution history. In order to take into account jump predictions based on
the execution history, we just need to incorporate in the status and the view
statistical informations on the last executions of the branch.
@@ -457,5 +457,5 @@
pipeline stages.
-The first imporant theorem in the labelling approach is the correctness of the
+The first important theorem in the labelling approach is the correctness of the
static analysis: if the (dependent) cost associated to a label $L$ is $k$,
then executing a program from the beginning of the basic block to the end
@@ -537,5 +537,5 @@
$(L_1,L_2,v) \hookrightarrow v'$ unless $L_1$ is \texttt{NULL}.
In that case \texttt{(\_\_next(NULL,}$L$\texttt{)} = $v_0$ where
- $v_0 = |\delta_0|$ and $\delta_0$ is the initial value of the internal stat
+ $v_0 = |\delta_0|$ and $\delta_0$ is the initial value of the internal state
at the beginning of program execution.
@@ -548,5 +548,5 @@
\paragraph{An example of instrumentation in presence of a pipeline}
-In Figure~\ref{factprogi} we show how the instrumentationof a program
+In Figure~\ref{factprogi} we show how the instrumentation of a program
that computes the factorial of 10 would look like in presence of a pipeline.
The instrumentation has been manually produced. The \texttt{\_\_next}
@@ -667,5 +667,5 @@
A more refined possibility consists in approximating the output only on
those labels whose jitter is small or for those that mark basic blocks
- that are executed only a small number of times. By simplyfing the
+ that are executed only a small number of times. By simplifying the
\texttt{\_\_next} function accordingly, it is possible to considerably
reduce the search space for automated provers.
@@ -675,5 +675,5 @@
approximations. For example, if it is known that a loop can be executed at
most 10 times, computing the cost of 10 iterations yields a
- better bound than multiplying by 10 the worst case of a single interation.
+ better bound than multiplying by 10 the worst case of a single iteration.
We clearly can do the same on the source level.
@@ -710,5 +710,5 @@
execution timing is \emph{execution path dependent} (our terminology).
-We have already seen that we are able to accomodate in the labelling approach
+We have already seen that we are able to accommodate in the labelling approach
cost functions that are dependent on views that are execution path dependent.
Before fully embracing the PROARTIS vision,
@@ -731,5 +731,5 @@
negligible probability. Therefore, it becomes no longer interesting to estimate
the actual worst case bound. What becomes interesting is to plot the probability
-that the execution time will exceed a certain treshold. For all practical
+that the execution time will exceed a certain threshold. For all practical
purposes, a program that misses its deadline with a negligible probability
(e.g. $10^-9$ per hour of operation) will be perfectly acceptable when deployed
@@ -744,5 +744,5 @@
based on measurements and it is justified by the following assumption: if
the probabilities associated to every hardware operation are all independent
-and identically distributed, then measuring the fime spent on full runs of
+and identically distributed, then measuring the time spent on full runs of
sub-systems components yields a probabilistic estimate that remains valid
when the sub-system is plugged in a larger one. Moreover, the probabilistic
@@ -782,5 +782,5 @@
that of natural numbers to any group. Moreover, by imposing labels after
every function call, commutativity can be dropped and the approach works on
-every monoid (usually called \emph{cost monoids} in the litterature).
+every monoid (usually called \emph{cost monoids} in the literature).
Because random variables and convolutions form a monoid, we immediately have
that the labelling approach extends itself to SPTA. The instrumented code
@@ -835,5 +835,5 @@
$2^n$: a view can usually just be represented by a word. This is not the
case for the views on caches, which are in principle very large. Therefore,
-the dependent labelling approach for data indepedent cost functions that we
+the dependent labelling approach for data independent cost functions that we
have presented here could still be unpractical for caches. If that turns out
to be the case, a possible strategy is the use of abstract interpretations
@@ -858,5 +858,5 @@
We achieve this in CerCo by designing a certified Cost Annotating Compiler that
keeps tracks of transformations of basic blocks, in order to create a
-correspondence (not necessaritly bijection) between the basic blocks of the
+correspondence (not necessarily a bijection) between the basic blocks of the
source and target language. We then prove that the sequence of basic blocks
that are met in the source and target executions is correlated. Then,