# Changeset 1768

Timestamp: Feb 27, 2012, 2:46:08 PM
File: 1 edited
\begin{displaymath}
\mathtt{BEMem} = \mathtt{Mem}~\mathtt{BEValue}
\end{displaymath}
\begin{displaymath}
\mathtt{load}\ s\ a\ M = \mathtt{Some}\ v \rightarrow \forall i \leq s.\ \mathtt{load}\ s\ (a + i)\ \sigma(M) = \mathtt{Some}\ v_i
\end{displaymath}
That is, if we successfully read a value of size $s$ from the front-end memory at address $a$, then we should be able to read all of its chunks from the back-end memory at the appropriate addresses (from address $a$ up to and including address $a + i$, for every $i \leq s$).

Next, we must show that \texttt{store} properly commutes with the $\sigma$-map between memory spaces:
\begin{displaymath}
\sigma(\mathtt{store}\ a\ v\ M) = \mathtt{store}\ \sigma(a)\ \sigma(v)\ \sigma(M)
\end{displaymath}
That is, if we store a value \texttt{v} in the front-end memory \texttt{M} at address \texttt{a} and transform the resulting memory with $\sigma$, then this is equivalent to storing the transformed value $\mathtt{\sigma(v)}$ at address $\mathtt{\sigma(a)}$ into the back-end memory $\mathtt{\sigma(M)}$.

Finally, the commutation properties between \texttt{load} and \texttt{store} are weakened in the $\sigma$-image of the memory. Writing \texttt{load}$^*$ for the multiple consecutive iterations of \texttt{load} used to fetch all chunks of a value, we must prove that, when $a \neq a'$:
\begin{displaymath}
\mathtt{load}^*\ \sigma(s)\ \sigma(a)\ (\mathtt{store}\ \sigma(a')\ \sigma(v)\ \sigma(M)) = \mathtt{load}^*\ \sigma(s)\ \sigma(a)\ \sigma(M)
\end{displaymath}
That is, suppose we store a transformed value $\mathtt{\sigma(v)}$ into a back-end memory $\mathtt{\sigma(M)}$ at address $\mathtt{\sigma(a')}$, using \texttt{store}, and then load a value of size $\sigma(s)$ from the address $\sigma(a)$. Even though $a$ and $a'$ are distinct by hypothesis, there is a priori no guarantee that the consecutive bytes for the value stored at $\sigma(a)$ are disjoint from those for the value stored at $\sigma(a')$. The fact that this holds is a non-trivial property of $\sigma$ that must be proved.

RTLabs states come in three flavours: \texttt{State}, \texttt{Call} and \texttt{Return}.

As a result, we have two stack pointers in our state: \texttt{ISP}, which points to the real, hardware stack, and \texttt{SP}, which is the stack pointer of the emulated stack in memory. The emulated stack is used for pushing and popping stack frames when calling or returning from function calls; however, this is done using the hardware stack, indexed by \texttt{ISP}, as an intermediary.
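The chunked memory model above can be illustrated with a small executable sketch. This is not the Matita development: the names (`sigma_mem`, `load_star`), the little-endian chunk layout, and the representation of memories as dictionaries are assumptions made purely for illustration. A front-end value of size $s$ stored at address $a$ is flattened by $\sigma$ into byte chunks occupying back-end addresses $a$ through $a + s$, so that every chunk is individually loadable and \texttt{load}$^*$ can reassemble the value.

```python
# Toy model of the front-end/back-end memory correspondence (illustrative
# only; all names are hypothetical, not taken from the CerCo sources).

def sigma_mem(fe_mem):
    """Map a front-end memory {addr: (size, value)} to a back-end memory
    {addr: chunk}: a value of size s occupies addresses a .. a+s, one
    byte chunk per address (little-endian layout assumed)."""
    be_mem = {}
    for a, (s, v) in fe_mem.items():
        for i in range(s + 1):
            be_mem[a + i] = (v >> (8 * i)) & 0xFF  # i-th byte chunk of v
    return be_mem

def load_fe(s, a, m):
    """Front-end load: succeeds iff a value of size s is stored at a."""
    if a in m and m[a][0] == s:
        return m[a][1]
    return None

def load_be(a, m):
    """Back-end load of a single byte chunk (None on failure)."""
    return m.get(a)

def load_star(s, a, m):
    """load*: fetch all s+1 chunks of a size-s value and reassemble it."""
    chunks = [load_be(a + i, m) for i in range(s + 1)]
    if any(c is None for c in chunks):
        return None
    return sum(c << (8 * i) for i, c in enumerate(chunks))

fe = {0x10: (3, 0xDEADBEEF)}      # one size-3 value at address 0x10
be = sigma_mem(fe)

# Front-end load succeeds, so every chunk load_be(a + i) succeeds too,
# and load* recovers the original value.
assert load_fe(3, 0x10, fe) == 0xDEADBEEF
assert all(load_be(0x10 + i, be) is not None for i in range(4))
assert load_star(3, 0x10, be) == 0xDEADBEEF
```

In this toy model the property stated above holds by construction; in the formal development it is a proof obligation on $\sigma$, as is the disjointness of chunk ranges for values stored at distinct addresses.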
Instructions like \texttt{LCALL} and \texttt{ACALL} are hardwired by the processor's design to push the return address on to the hardware stack. Therefore, after a call has been made, and before a call returns, the compiler emits code to move the return address back and forth between the two stacks. Parameters, return values and local variables are only present in the external stack. As a result, for most of the execution of the processor, the hardware stack is empty, or contains a single item ready to be moved into external memory.

Once more, we require a relation $\sigma$ between RTLabs states and RTL states. Because $\sigma$ is one-to-many and, morally, a multi-function, in what follows we use functional notation for $\sigma$, writing $\star$ in the output of $\sigma$ to mean that any value is accepted:
\begin{displaymath}
\mathtt{State} \stackrel{\sigma}{\longrightarrow} \mathtt{State}
\end{displaymath}
For \texttt{State} we perform a further case analysis of the top stack frame, which decomposes into a tuple holding the current program counter value, the values of the registers and the current stack pointer:
\begin{displaymath}
\sigma(\mathtt{State} (\mathtt{Frame}^* \times \mathtt{\langle PC, REGS, SP \rangle})) \longrightarrow ((\sigma(\mathtt{Frame}^*), \sigma(\mathtt{PC}), \sigma(\mathtt{SP}), \star, \star, \sigma(\mathtt{REGS})), \sigma(\mathtt{Mem}))
\end{displaymath}
Translation then proceeds by translating the remaining stack frames, as well as the contents of the top stack frame. Any value for the internal stack pointer and the carry bit is admitted.

Translating \texttt{Call} and \texttt{Return} states is more involved, as a commutation between a single step of execution and the translation process must hold. Here, \emph{return one step} and \emph{call one step} refer to a pair of commuting diagrams relating the one-step execution of a call or return state and the translation of both. We provide the one-step commuting diagrams in Figure~\ref{fig.commuting.diagrams}. The fact that one execution step in the source language is not performed in the target language is not problematic for the preservation of divergence, because it is easy to show that every step from a \texttt{Call} or \texttt{Return} state is always preceded or followed by one step that is always simulated.
\begin{figure}
\caption{The one-step commuting diagrams for \texttt{Call} and \texttt{Return} states.}
\label{fig.commuting.diagrams}
\end{figure}
We now outline how states are executed. The forward simulation proof for all steps that do not involve function calls is trivial, because $\sigma$ alters neither the memory nor the pseudo-registers. We outline here the invocation of functions and the return from a function:
\begin{displaymath}
\begin{array}{rcl}
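The use of $\star$ to mean "any value is accepted" can be made concrete with a small sketch. Again, this is purely illustrative and not the CerCo development: the names (`sigma_state`, `related`, `STAR`) are hypothetical, and the per-component translations $\sigma(\mathtt{PC})$, $\sigma(\mathtt{REGS})$, etc.\ are elided (modelled as the identity). The point is that $\sigma$ is checked as a \emph{relation}: a concrete RTL state is in the $\sigma$-image of an RTLabs state iff it agrees on every non-$\star$ component, while the internal stack pointer and carry bit are unconstrained.

```python
# Toy model of the sigma relation on states (illustrative only; all
# names are hypothetical, not taken from the CerCo sources).

STAR = object()  # the wildcard: any value is accepted in this position

def sigma_state(frames, mem):
    """Translate an RTLabs State, i.e. a non-empty list of frames each
    holding <PC, REGS, SP>, plus a memory.  The internal stack pointer
    and the carry bit positions are left unconstrained (STAR).  The
    per-component translations are elided (identity) in this sketch."""
    top, rest = frames[0], frames[1:]
    pc, regs, sp = top
    return (tuple(rest), pc, sp, STAR, STAR, regs), mem

def related(spec, concrete):
    """A concrete RTL state is in the sigma-image iff it agrees with
    the translated state on every non-STAR component."""
    (s_rest, s_pc, s_sp, _, _, s_regs), s_mem = spec
    (c_rest, c_pc, c_sp, _isp, _carry, c_regs), c_mem = concrete
    return (s_rest, s_pc, s_sp, s_regs, s_mem) == \
           (c_rest, c_pc, c_sp, c_regs, c_mem)

frames = [(100, ('r0', 5), 0x80)]        # one frame: <PC, REGS, SP>
spec = sigma_state(frames, mem={})

# Any internal stack pointer (0xAB) and carry bit (1) are admitted...
assert related(spec, (((), 100, 0x80, 0xAB, 1, ('r0', 5)), {}))
# ...but a mismatch on a constrained component (here the PC) is not.
assert not related(spec, (((), 101, 0x80, 0xAB, 1, ('r0', 5)), {}))
```

Checking membership in the relation, rather than equality with a single translated state, is exactly what makes the one-to-many $\sigma$ usable in the simulation diagrams.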