Changeset 1768
 Timestamp:
 Feb 27, 2012, 2:46:08 PM
 File (1 edited):

Deliverables/D1.2/CompilerProofOutline/outline.tex
\begin{displaymath}
\mathtt{BEMem} = \mathtt{Mem}~\mathtt{BEValue}
\end{displaymath}

[…]

\begin{displaymath}
\mathtt{load}\ s\ a\ M = \mathtt{Some}\ v \rightarrow \forall i \leq s.\ \mathtt{load}\ s\ (a + i)\ \sigma(M) = \mathtt{Some}\ v_i
\end{displaymath}
That is, if we successfully read a value of size $s$ from frontend memory at address $a$, then we should be able to read each of its chunks from backend memory at the appropriate address (from address $a$ up to and including address $a + i$, for every $i \leq s$).

Next, we must show that \texttt{store} properly commutes with the $\sigma$-map between memory spaces:

[…]

That is, if we store a value \texttt{v} in the frontend memory \texttt{M} at address \texttt{a} and transform the resulting memory with $\sigma$, then this is equivalent to storing the transformed value $\mathtt{\sigma(v)}$ at address $\mathtt{\sigma(a)}$ into the backend memory $\mathtt{\sigma(M)}$.
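The chunk-wise load property can be pictured with a toy model. The following Python sketch is an illustration only, not the Matita development: the dictionary representation of memories and the little-endian chunk order are our assumptions.

```python
# Toy model: a frontend memory maps an address to a (size, value) pair,
# while sigma lays out the value's size+1 chunks at consecutive backend
# addresses a, a+1, ..., a+size (little-endian order is an assumption).

def sigma(front_mem):
    """Map a frontend memory {addr: (size, value)} to a backend byte memory."""
    back = {}
    for a, (size, v) in front_mem.items():
        for i in range(size + 1):              # chunks at a .. a+size
            back[a + i] = (v >> (8 * i)) & 0xFF
    return back

def load_front(front_mem, s, a):
    """Frontend load s a M: return the value, or None on failure."""
    entry = front_mem.get(a)
    if entry is None or entry[0] != s:
        return None
    return entry[1]

def load_back(back_mem, a):
    """A single-chunk backend load."""
    return back_mem.get(a)

# A two-chunk value stored at address 16 (size index 1 -> chunks at 16, 17).
M = {16: (1, 0x1234)}
assert load_front(M, 1, 16) is not None
# The property: every chunk load at a + i, for i <= s, succeeds in sigma(M).
assert all(load_back(sigma(M), 16 + i) is not None for i in range(2))
```

In this toy setting the property holds by construction of `sigma`; in the real proof it is an obligation about the actual memory layout.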
Finally, the commutation properties between \texttt{load} and \texttt{store} are weakened in the $\sigma$-image of the memory.
Writing \texttt{load}$^*$ for the multiple consecutive iterations of \texttt{load} used to fetch all chunks of a value, we must prove that, when $a \neq a'$:
\begin{displaymath}
\texttt{load}^*\ \sigma(a)\ (\mathtt{store}\ \sigma(a')\ \sigma(v)\ \sigma(M)) = \texttt{load}^*\ \sigma(a)\ \sigma(M)
\end{displaymath}
That is, suppose we store a transformed value $\mathtt{\sigma(v)}$ into a backend memory $\mathtt{\sigma(M)}$ at address $\mathtt{\sigma(a')}$, using \texttt{store}, and then load from address $\sigma(a)$.
Even though $a$ and $a'$ are distinct by hypothesis, there is a priori no guarantee that the consecutive bytes of the value stored at $\sigma(a)$ are disjoint from those of the value stored at $\sigma(a')$.
That this holds is a non-trivial property of $\sigma$ that must be proved.
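A toy model shows why $a \neq a'$ alone is not enough. In this hypothetical Python sketch (not the CerCo code; the byte-map representation is an assumption), \texttt{load}$^*$ commutes with \texttt{store} exactly when the two chunk footprints are disjoint:

```python
# Backend memory as a map from address to byte chunk.

def store_back(back_mem, a, chunks):
    """Store consecutive chunks starting at address a; returns a new memory."""
    m = dict(back_mem)
    for i, c in enumerate(chunks):
        m[a + i] = c
    return m

def load_star(back_mem, a, n):
    """load*: fetch the n chunks of a value from consecutive addresses."""
    chunks = [back_mem.get(a + i) for i in range(n)]
    return None if None in chunks else chunks

back = {0: 0xAA, 1: 0xBB, 2: 0xCC, 3: 0xDD}

# Disjoint footprints: a store at 2..3 leaves a load* over 0..1 unchanged.
assert load_star(store_back(back, 2, [0x11, 0x22]), 0, 2) == load_star(back, 0, 2)

# Distinct start addresses are NOT enough: footprints 0..1 and 1..2 overlap,
# so the equation fails. That sigma keeps footprints disjoint must be proved.
assert load_star(store_back(back, 1, [0x11, 0x22]), 0, 2) != load_star(back, 0, 2)
```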
RTLabs states come in three flavours:

[…]

As a result, we have two stack pointers in our state: \texttt{ISP}, which points to the real, hardware stack, and \texttt{SP}, which is the stack pointer of the emulated stack in memory.
The emulated stack is used for pushing and popping stack frames when calling or returning from function calls; however, this is done using the hardware stack, indexed by \texttt{ISP}, as an intermediary.
Instructions like \texttt{LCALL} and \texttt{ACALL} are hardwired by the processor's design to push the return address onto the hardware stack.
Therefore, after a call has been made, and before a call returns, the compiler emits code to move the return address back and forth between the two stacks.
Parameters, return values and local variables are only present in the external stack.
As a result, for most of the execution of the processor, the hardware stack is empty, or contains a single item ready to be moved into external memory.

Once more, we require a relation $\sigma$ between RTLabs states and RTL states.
Because $\sigma$ is one-to-many and, morally, a multifunction, in what follows we use functional notation for $\sigma$, writing $\star$ in the output of $\sigma$ to mean that any value is accepted.
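The two-stack discipline can be sketched as a toy simulation. In the following Python model the names `lcall`, `prologue`, `epilogue` and `ret` are ours, chosen for illustration, and do not correspond to identifiers in the compiler:

```python
# Toy model of the two-stack discipline: LCALL pushes the return address on
# the hardware stack; compiler-emitted code moves it to the emulated stack,
# so that the hardware stack is empty for most of the function body.

hw_stack, ext_stack = [], []   # indexed by ISP and SP respectively

def lcall(ret_addr):
    hw_stack.append(ret_addr)          # hardwired push by the processor

def prologue():
    ext_stack.append(hw_stack.pop())   # emitted code: move to external stack

def epilogue():
    hw_stack.append(ext_stack.pop())   # emitted code: move back before RET

def ret():
    return hw_stack.pop()              # hardwired pop by RET

lcall(0x0042)
prologue()
assert hw_stack == []                  # hardware stack empty during the body
epilogue()
assert ret() == 0x0042                 # control returns to the caller
```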
\begin{displaymath}
\mathtt{State} \stackrel{\sigma}{\longrightarrow} \mathtt{State}
\end{displaymath}

[…]

For \texttt{State} we perform a further case analysis of the top stack frame, which decomposes into a tuple holding the current program counter value, the current stack pointer and the values of the registers:
\begin{displaymath}
\sigma(\mathtt{State} (\mathtt{Frame}^* \times \mathtt{\langle PC, REGS, SP \rangle})) \longrightarrow ((\sigma(\mathtt{Frame}^*), \sigma(\mathtt{PC}), \sigma(\mathtt{SP}), \star, \star, \sigma(\mathtt{REGS})), \sigma(\mathtt{Mem}))
\end{displaymath}
Translation then proceeds by translating the remaining stack frames, as well as the contents of the top stack frame.
Any value for the internal stack pointer and the carry bit is admitted.

Translating \texttt{Call} and \texttt{Return} states is more involved, as a commutation between a single step of execution and the translation process must hold:

[…]

Here \emph{return one step} and \emph{call one step} refer to a pair of commuting diagrams relating the one-step execution of a call and return state and the translation of both.
We provide the one-step commuting diagrams in Figure~\ref{fig.commuting.diagrams}.
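One way to picture the $\star$ notation is as a wildcard in a pattern match. In this illustrative Python sketch, the tuple encoding of states and the identity stubs for $\sigma$ on components are assumptions made only for the example:

```python
STAR = object()   # the paper's "star": any value of this component is accepted

def translate_state(state):
    """sigma on State, following the displayed equation: translate each
    component and leave the internal SP and carry bit unconstrained.
    Component-wise sigma is stubbed as the identity (an assumption)."""
    frames, pc, regs, sp, mem = state
    return (frames, pc, sp, STAR, STAR, regs), mem

def matches(spec, actual):
    """An RTL state matches the sigma-image if it agrees with it on every
    component except those marked STAR."""
    return all(s is STAR or s == a for s, a in zip(spec, actual))

spec, _ = translate_state((["main"], 12, {"r1": 7}, 0x80, {}))
# Two RTL states that differ only in the ISP and carry bit both match.
assert matches(spec, (["main"], 12, 0x80, 0x30, 0, {"r1": 7}))
assert matches(spec, (["main"], 12, 0x80, 0x7F, 1, {"r1": 7}))
```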
The fact that one execution step in the source language is not performed in the target language is not problematic for the preservation of divergence, because it is easy to show that every step from a \texttt{Call} or \texttt{Return} state is always preceded or followed by one step that is always simulated.

\begin{figure}

[…]

\end{figure}

The forward simulation proof for all steps that do not involve function calls is trivial, because $\sigma$ does not alter the memory or the pseudo-registers.
We outline here the invocation of functions and the return from a function.

\begin{displaymath}
\begin{array}{rcl}