In the previous sections we have discussed the branch displacement optimisation
problem, presented an optimised solution, and discussed the proof of
termination and correctness for this algorithm, as formalised in Matita.
The algorithm we have presented is fast and correct, but not optimal; a truly
optimal solution would require techniques such as constraint solvers. While
outside the scope of the present research, it would be interesting to see
whether enough heuristics could be found to make such a solution practical to
implement in an existing compiler; this would be especially useful for embedded
systems, where it is important to have as small a solution as possible.
In itself the algorithm is already useful, as it results in a smaller solution
than the simple `every branch instruction is long' policy used until now---and
with only 64 Kb of memory, every byte counts. It also results in a smaller
solution than the greatest fixed point algorithm that {\tt gcc} uses, and it
does so without sacrificing speed or correctness.
The certification of an assembler that relies on the branch displacement
algorithm described in this paper was presented in~\cite{lastyear}.
The assembler computes the $\sigma$ map as described in this paper and
then works in two passes. In the first pass it builds a map
from instruction labels to addresses in the assembly code. In the
second pass it iterates over the code, translating every pseudo jump
at address $src$ to a label $l$ associated with the assembly instruction at
address $dst$ into a jump of the size dictated by $(\sigma\ src)$, targeting
$(\sigma\ dst)$. In the case of conditional jumps, the translated jump may be
implemented by a series of instructions.
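The two-pass scheme just described can be sketched as follows. This is a minimal, hypothetical illustration: the instruction representation, the policy interface and all names are assumptions for the sake of the example, not the verified Matita code.

```python
def assemble(pseudo_program, sigma):
    """pseudo_program: list of (label_or_None, instruction) pairs,
    indexed by pseudo address; sigma maps pseudo addresses to real
    addresses, as computed by the branch displacement policy."""
    # Pass 1: map each instruction label to its pseudo address.
    labels = {}
    for ppc, (label, _instr) in enumerate(pseudo_program):
        if label is not None:
            labels[label] = ppc
    # Pass 2: translate each pseudo jump at address src targeting a
    # label l (at pseudo address dst) into a jump whose size is the
    # space the policy allotted at src, aimed at sigma(dst).
    out = []
    for src, (_label, instr) in enumerate(pseudo_program):
        op, *args = instr
        if op == "JMP":
            dst = labels[args[0]]
            size = sigma(src + 1) - sigma(src)  # bytes allotted by the policy
            out.append(("JMP", sigma(dst), size))
        else:
            out.append(instr)
    return out
```

Note that the size of each translated jump is recovered from $\sigma$ itself, as the distance between consecutive real addresses; this is why the correctness of the whole pass hinges on the policy specification.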
The proof of correctness abstracts over the algorithm used and relies only on
{\tt sigma\_policy\_specification} (page~\ref{sigmapolspec}). It is a variation
of a standard 1-to-many forward simulation proof~\cite{Leroy2009}. The
relation $R$ between states simply maps every code address $ppc$ stored in
registers or memory to $(\sigma\ ppc)$. To identify the code addresses,
an additional data structure is always kept together with the source
state and is updated by the semantics. The semantics is preserved
only for those programs whose source code operations
$(f\ ppc_1\ \ldots\ ppc_n)$ applied to code addresses $ppc_1 \ldots ppc_n$ are
such that $(f\ (\sigma\ ppc_1)\ \ldots\ (\sigma\ ppc_n)) = (f\ ppc_1\ \ldots\ ppc_n)$. For
example, an injective $\sigma$ preserves a binary equality test $f$ for code
addresses, but not pointer subtraction.
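The preservation condition can be made concrete with a small, hypothetical example (the particular $\sigma$ is invented for illustration): an injective $\sigma$ preserves an equality test on code addresses, because distinct addresses stay distinct, but it does not preserve pointer subtraction, because it stretches the gaps between addresses.

```python
# An injective map from pseudo addresses to real addresses, as a
# branch displacement policy would produce (values are illustrative).
sigma = {0: 0, 1: 3, 2: 5}

eq = lambda a, b: a == b    # binary equality test on code addresses
sub = lambda a, b: a - b    # pointer subtraction

addrs = [0, 1, 2]
# Equality is preserved: f(sigma a, sigma b) = f(a, b) for all pairs.
assert all(eq(sigma[a], sigma[b]) == eq(a, b)
           for a in addrs for b in addrs)
# Subtraction is not: the distance between addresses 1 and 2 grows
# from 1 to 2 under sigma.
assert sub(sigma[2], sigma[1]) != sub(2, 1)
```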
The main lemma (fetching simulation), which relies on\\
{\tt sigma\_policy\_specification} and is established by structural induction
over the source code, says that fetching an assembly instruction at
position $ppc$ is equal to fetching the translation of the instruction at
position $(\sigma\ ppc)$, and that the new incremented program counter is at
the beginning of the next instruction (compactness). The only exception is
when the fetched instruction is placed at the end of code memory and is
followed only by dead code. Execution simulation is trivial because of the
restriction to programs that are well behaved w.r.t.\ $\sigma$. The condition
$\sigma\ 0 = 0$ is necessary because the hardware model prescribes that the
first instruction to be executed will be at address 0. For the details,
we refer to~\cite{lastyear}.
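The content of the fetching-simulation lemma can be illustrated with a small model, entirely invented for this sketch: code is a list of sized instructions, $\sigma$ is the cumulative-size map, and the two asserted properties are agreement of fetching through $\sigma$ and compactness of the layout.

```python
# Illustrative model (not the Matita development) of fetching simulation.
# Each instruction is a (size_in_bytes, opcode) pair.
code = [(2, "MOV"), (3, "LJMP"), (1, "NOP")]

def sigma(ppc):
    """Real address of instruction ppc: sum of the sizes of all
    preceding instructions. Note sigma(0) = 0, as the hardware
    model requires."""
    return sum(size for size, _ in code[:ppc])

# Flat target memory, with each opcode placed at its real address.
memory = {sigma(ppc): op for ppc, (_size, op) in enumerate(code)}

for ppc, (size, op) in enumerate(code):
    # Fetching at ppc in the source matches fetching at sigma(ppc)
    # in the target...
    assert memory[sigma(ppc)] == op
    # ...and the incremented program counter sigma(ppc) + size is
    # the beginning of the next instruction (compactness).
    assert sigma(ppc) + size == sigma(ppc + 1)
```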
Instead of verifying the algorithm directly, another solution to the problem
would be to run an optimisation algorithm and then verify the safety of the
result using a verified validator. Such a validator would be easier to verify
than the algorithm itself, but it would still leave the question of
termination open. In the case of a least fixed point algorithm, at least, the
solution is not necessarily safe until the algorithm terminates, so this
effort would still have to be expended.
\subsection{Related work}
As far as we are aware, this is the first formal discussion of the branch
displacement optimisation algorithm.
The CompCert project is another verified compiler project.
Its backend~\cite{Leroy2009} generates assembly code for (amongst others)
subsets of the PowerPC and x86 (32-bit) architectures. At the assembly code
stage, there is no distinction between span-dependent jump instructions, so a
branch displacement optimisation algorithm is not needed.
An offshoot of the CompCert project is the CompCertTSO project, which adds
thread concurrency and synchronisation to the CompCert compiler~\cite{TSO2011}.
This compiler also generates assembly code and therefore does not include a
branch displacement algorithm.
Finally, there is also the Piton stack~\cite{Moore1996}, which includes not
only the formal verification of a compiler, but also of the machine
architecture targeted by that compiler, a microprocessor called the FM9001.
However, this architecture does not have different
jump sizes (branching is simulated by assigning values to the program counter),
so the branch displacement problem is irrelevant.
\subsection{Formal development}
All Matita files related to this development can be found on the CerCo
website, \url{}. The specific part that contains the
branch displacement algorithm is in the {\tt ASM} subdirectory, in the files
{\tt}, {\tt} and {\tt}.