# Changeset 1373

Timestamp:
Oct 14, 2011, 3:51:09 PM
Message:

changes to file based on claudio's suggestions

Location:
Deliverables/D4.2-4.3/reports
Files:
2 edited

\label{subsect.brief.overview.backend.compilation.chain} The Matita compiler's backend consists of five distinct intermediate languages: RTL, RTLntl, ERTL, LTL and LIN. A sixth language, RTLabs, serves as the entry point of the backend and the exit point of the frontend. RTL, RTLntl, ERTL and LTL are `graph based' languages, whereas LIN is a linearised language, the final language before translation to assembly. We now briefly discuss the properties of the intermediate languages and the various transformations that take place during the translation process:

\paragraph{RTLabs ((Abstract) Register Transfer Language)} As mentioned, this is the final language of the compiler's frontend and the entry point for the backend. This language uses pseudoregisters, not hardware registers.\footnote{There is an unbounded number of pseudoregisters. Pseudoregisters are converted to hardware registers or stack positions during register allocation.} Functions still use stack frames, where arguments are passed on the stack and results are stored in addresses. During the pass to RTL these are eliminated, and instruction selection is carried out.

\paragraph{RTL (Register Transfer Language)} This language uses pseudoregisters, not hardware registers. Tailcall elimination is carried out during the translation from RTL to RTLntl.
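The chain of passes described above can be pictured as a composition of translation functions, one per intermediate language. The following is a minimal OCaml sketch of that shape, with each intermediate representation reduced to a placeholder type; all names here are our own illustration, not identifiers from the CerCo sources.

```ocaml
(* Illustrative sketch only: each intermediate representation is a
   distinct type, so a pass cannot be accidentally skipped or reordered
   without a type error.  The payloads are stand-ins for the real ASTs. *)
type rtlabs = RTLabs of string
type rtl    = RTL    of string
type rtlntl = RTLntl of string
type ertl   = ERTL   of string
type ltl    = LTL    of string
type lin    = LIN    of string

let instruction_selection (RTLabs s) = RTL s     (* RTLabs -> RTL    *)
let eliminate_tailcalls   (RTL s)    = RTLntl s  (* RTL    -> RTLntl *)
let to_ertl               (RTLntl s) = ERTL s    (* RTLntl -> ERTL   *)
let allocate_registers    (ERTL s)   = LTL s     (* ERTL   -> LTL    *)
let linearise             (LTL s)    = LIN s     (* LTL    -> LIN    *)

(* The backend is the composition of the five passes. *)
let backend p =
  p |> instruction_selection |> eliminate_tailcalls
    |> to_ertl |> allocate_registers |> linearise
```

Because every pass has a distinct input and output type, the pipeline's order is checked statically, which is the same discipline the Matita development enforces with separate syntaxes per language.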
\paragraph{RTLntl (Register Transfer Language --- No Tailcalls)} This language is a pseudoregister, graph based language in which all tailcalls have been eliminated. RTLntl is not present in the O'Caml compiler.

\paragraph{ERTL (Extended Register Transfer Language)} The ERTL to LTL pass performs the following transformations: liveness analysis, register colouring and register/stack slot allocation.

\paragraph{LTL (Linearisable Transfer Language)} Another graph based language, but one that uses hardware registers instead of pseudoregisters. Tunnelling (branch compression) should be implemented here.

The O'Caml compiler is written in the following manner. Each intermediate language has its own dedicated syntax, notion of internal function, and so on. Here, we make a distinction between `internal functions'---functions that are explicitly written by the programmer---and `external functions', which belong to an external library and require explicit linking. Internal functions are represented as a record, consisting of a sequential structure, of some description, of statements, entry and exit points to this structure, and other book keeping devices. Translations between intermediate languages map syntaxes to syntaxes, and internal function representations to internal function representations, explicitly. This is a perfectly valid way to write a compiler, where everything is made explicit, but writing a \emph{verified} compiler poses new challenges. In particular, we must look ahead to see how our choice of encodings will affect the size and complexity of the forthcoming proofs of correctness.
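To make the liveness analysis performed in the ERTL to LTL pass concrete, here is a minimal OCaml sketch of the standard backward dataflow computation, restricted to straight-line code for brevity. The `instr` record and its `def`/`use` fields are our own simplification, not the CerCo representation.

```ocaml
(* Minimal sketch of liveness analysis over straight-line code.
   Registers are strings; each instruction carries the registers it
   defines and uses.  For straight-line code the live-out set of an
   instruction is the live-in set of its successor, so one backward
   fold suffices; on a general control flow graph this becomes an
   iteration to a fixed point. *)
module S = Set.Make (String)

type instr = { def : string list; use : string list }

(* live_in(i) = use(i) ∪ (live_out(i) \ def(i)) *)
let liveness (prog : instr list) : S.t list =
  List.fold_right
    (fun i live_after ->
       let out = match live_after with [] -> S.empty | s :: _ -> s in
       let live_in =
         S.union (S.of_list i.use) (S.diff out (S.of_list i.def))
       in
       live_in :: live_after)
    prog []
```

For example, in the program `a := ...; b := a; use a, b`, the register `a` is live before the second instruction, and nothing is live before the first. The live sets computed here are exactly what register colouring consumes: two pseudoregisters interfere when one is defined while the other is live.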
We now discuss some abstractions, introduced in the Matita code, which we hope will make our proofs shorter, amongst other benefits.

\paragraph{Shared code, reduced proofs} Many features of individual backend intermediate languages are shared with other intermediate languages. For instance, RTLabs, RTL, ERTL and LTL are all graph based languages, where functions are represented as a graph of statements that form their bodies. Functions for adding statements to a graph, searching the graph, and so on, are remarkably similar across all languages, but are duplicated in the O'Caml code. As a result, we chose to abstract the representation of internal functions for the RTL, ERTL, LTL and LIN intermediate languages into a `joint' representation. This representation is parameterised by a record that dictates the layout of the function body for each intermediate language. For instance, in RTL the layout is graph like, whereas in LIN the layout is a linearised list of statements. Further, a generalised way of accessing the successor of the statement currently under consideration is needed, and so forth. Our joint internal function record looks like so:
\begin{lstlisting}
record joint_internal_function (globals: list ident) (p: params globals) : Type[0] ≝
{
  ...
  joint_if_params : paramsT p;
  joint_if_locals : localsT p;
  ...
  joint_if_code   : codeT … p;
  ...
}.
\end{lstlisting}
In particular, everything that can vary between differing intermediate languages has been parameterised. Here, we see that the number of parameters, the listing of local variables, and the internal code representation have all been parameterised. Other particulars are also parameterised, though omitted here. Hopefully this abstraction process will reduce the number of proofs that need to be written dealing with internal functions. We need only prove once that fetching a statement's successor is `correct', and we inherit this property for free for every intermediate language.
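The role of the \texttt{params} record can be approximated in plain OCaml with a functor: what Matita expresses as a dependent record of types, a functor expresses as a module parameter. The sketch below is our own illustration under that analogy; the module and field names mimic, but are not taken from, the Matita development.

```ocaml
(* Sketch of the `joint' internal-function abstraction.  PARAMS plays
   the role of the Matita params record: it fixes the register type and
   the code layout for one intermediate language. *)
module type PARAMS = sig
  type reg                                   (* pseudo or hardware registers *)
  type code                                  (* graph of statements, or a list *)
  val successors : code -> int -> int list   (* generic successor access *)
end

module JointFunction (P : PARAMS) = struct
  type t = {
    joint_if_params : P.reg list;   (* formal parameters *)
    joint_if_locals : P.reg list;   (* local variables *)
    joint_if_code   : P.code;       (* body, layout chosen by P *)
  }
  (* Written once, inherited by every instantiation. *)
  let successors f l = P.successors f.joint_if_code l
end

(* Instantiation for an RTL-like language: unbounded pseudoregisters and
   a graph body keyed by integer labels. *)
module RtlParams = struct
  type reg = int
  type code = (int * int list) list          (* label -> successor labels *)
  let successors g l = try List.assoc l g with Not_found -> []
end

module RtlFunction = JointFunction (RtlParams)
```

As in the Matita code, the generic `successors` is defined once and reused by every instantiation, which is precisely how one proof about successor fetching can serve all the backend languages.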
\paragraph{Changes between languages made explicit} Most instructions are shared between the backend intermediate languages, and are captured in a single `joint' instruction syntax:
\begin{lstlisting}
inductive joint_instruction (p: params__) (globals: list ident): Type[0] ≝
  | COMMENT: String → joint_instruction p globals
  | COST_LABEL: costlabel → joint_instruction p globals
  ...
  | INT: generic_reg p → Byte → joint_instruction p globals
  ...
  | OP1: Op1 → acc_a_reg p → acc_a_reg p → joint_instruction p globals
  ...
  | extension: extend_statements p → joint_instruction p globals.
\end{lstlisting}
However, these instructions expect different register types (either a pseudoregister or a hardware register) as arguments. We must therefore parameterise the joint syntax with a record of parameters that will be specialised to each intermediate language. In the type above, this parameterisation is realised with the \texttt{params\_\_} record. As a result of this parameterisation, we have also added a degree of `type safety' to the intermediate languages' syntaxes. In particular, we note that the \texttt{OP1} constructor expects quite a specific type, in that the two register arguments must both be the accumulator A. Contrast this with the \texttt{INT} constructor, which expects a \texttt{generic\_reg}, corresponding to an `arbitrary' register type. Further, we note that some intermediate languages have language specific instructions (i.e. the instructions that change between languages), which fall inside the \texttt{extension} constructor.
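The `type safety' gained from distinguishing \texttt{acc\_a\_reg} from \texttt{generic\_reg} can be imitated in OCaml with a GADT index on the register type. This is a hypothetical rendering of the idea, not the CerCo encoding; all names are ours.

```ocaml
(* Phantom indices: the accumulator A versus an arbitrary register. *)
type acc_a
type generic

(* A register value indexed by its kind.  'A' is the only inhabitant of
   [acc_a reg], so an accumulator-only argument position admits exactly
   one register, mirroring the acc_a_reg/generic_reg distinction. *)
type _ reg =
  | A   : acc_a reg
  | Gen : int -> generic reg

type instruction =
  | COMMENT : string -> instruction
  | INT     : generic reg * int -> instruction             (* any register *)
  | OP1     : string * acc_a reg * acc_a reg -> instruction (* A only *)

let well_typed = [ INT (Gen 3, 255); OP1 ("cpl", A, A) ]
(* OP1 ("cpl", Gen 3, A) is rejected by the type checker: Gen 3 has type
   generic reg, but OP1 demands acc_a reg. *)
```

The payoff is the same as in the Matita syntax: an ill-registered instruction is unrepresentable, so no proof obligation about register kinds ever arises for such terms.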
\paragraph{Dependency on instruction selection} We note that the backend languages are all essentially `post instruction selection languages'. The `joint' syntax makes this especially clear. For instance, in the definition:
\begin{lstlisting}
inductive joint_instruction (p: params__) (globals: list ident): Type[0] ≝
  ...
  | INT: generic_reg p → Byte → joint_instruction p globals
  | MOVE: pair_reg p → joint_instruction p globals
  ...
  | PUSH: acc_a_reg p → joint_instruction p globals
  ...
  | extension: extend_statements p → joint_instruction p globals.
\end{lstlisting}
The capitalised constructors---\texttt{INT}, \texttt{MOVE}, and so on---are all machine specific instructions. Retargeting the compiler to another microprocessor would entail replacing these constructors with constructors corresponding to the instructions of the new target. We feel that this makes it much more explicit which instructions are target dependent and which are not (i.e. those language specific instructions that fall inside the \texttt{extension} constructor).

\paragraph{Independent development and testing} We have essentially modularised the intermediate languages in the compiler backend. As with any form of modularisation, we reap benefits in the ability to test and develop each intermediate language independently.

\paragraph{Future reuse for other compiler projects} Another advantage of our modularisation scheme is the ability to quickly use and reuse intermediate languages for other compiler projects. For instance, in creating a cost-preserving compiler for a functional language, we may choose to target the RTL language directly. Naturally, the register requirements for a functional language may differ from those of an imperative language, a reconfiguration which our parameterisation makes easy.

\paragraph{Easy addition of new compiler passes} Under our modularisation and abstraction scheme, new compiler passes can easily be injected into the backend. We have a concrete example of this in the RTLntl language, an intermediate language that was not present in the original O'Caml code. To specify a new intermediate language we must simply specify, through the use of the statement extension mechanism, what differs in the new intermediate language from the `joint' language, and configure a new notion of internal function record, by specialising parameters, to the new language.
As generic code for the `joint' language exists, for example to add statements to control flow graphs, this code can be reused for the new intermediate language.

\paragraph{Possible language commutations} The backend translation passes of the CerCo compiler differ quite a bit from those of the CompCert compiler.

\label{subsect.use.of.dependent.types}
We use dependent types in the backend for three reasons. We see three potential ways in which a compiler can fail to compile a program:
\begin{enumerate}
\item The program is malformed, and there is no hope of making sense of the program.
\item A heuristic or algorithm in the compiler is implemented incorrectly, in which case an otherwise correct source program fails to be compiled to correct assembly code.
\item An invariant in the compiler is invalidated.
\end{enumerate}
About the first source of failure we can do nothing. The latter two sources of failure should be interpreted as compiler bugs and, as part of a verified compiler project, we would like to rule out all such bugs. In CerCo, we aim to use dependent types to help us enforce invariants and prove our heuristics and algorithms correct. First, we encode informal invariants, or uses of \texttt{assert false} in the O'Caml code, with dependent types, converting partial functions into total functions. In particular, the compiler does not support the floating point datatype, nor accompanying functions over that datatype. At the moment, frontend languages within the compiler possess constructors corresponding to floating point code. These are removed during instruction selection (in the RTLabs to RTL transformation) using a daemon.\footnote{A Girardism.
An axiom of type \texttt{False}, from which we can prove anything.} However, at some point we would like the frontend of the compiler to recognise programs that use floating point code and reject them as invalid.

\item \textbf{Functions and operations on datatypes that are implemented in the C runtime.} The compiler emits only a subset of the instructions available in the MCS-51's instruction set architecture. In particular, integer modulus at the C source level is transformed into a call to a runtime function implementing the modulus operation during the translation to C-light. However, the datatypes corresponding to valid C operations over integers and floats still mention integer modulus in the backend, and the translation of these operations is discharged using a daemon. We have plans to dispense with this `precooking' process at the C-light level and move the translation of these operations into the RTLabs to RTL pass, where they can be translated properly.

\item \textbf{Axiomatised components that will be implemented using external oracles.} These components are axiomatised, along with the properties that they need to satisfy in order for the rest of the compilation chain to be correct. The axiomatised components are found in the ERTL to LTL pass. It should be noted that these components fall into the following pattern: whilst their implementation is complex, and their proof of correctness difficult, we are able to quickly and easily verify that any answer they provide is correct. As a result, we do not see this axiomatisation process as being too onerous.

\item \textbf{A few non-computational proof obligations.} A few difficult-to-close, but non-computational (i.e. they do not prevent us from executing the compiler inside Matita), proof obligations have been closed using daemons in the backend. These proof obligations originate with our use of dependent types for expressing invariants in the compiler.
However, it should be mentioned here that many open proof obligations are simply impossible to close until we start to obtain stronger invariants from the proof of correctness for the compiler proper. In particular, in the RTLabs to RTL pass, several proof obligations relating to lists of registers stored in a `local environment' appear to fall into this pattern.

\item \textbf{Branch compression (tunnelling).} This was a feature of the O'Caml compiler. It is not yet implemented in the Matita compiler. This feature is only an optimisation, and will not affect the correctness of the compiler.

\item \textbf{`Real' tailcalls.} For the time being, tailcalls in the backend are translated to `vanilla' function calls during the ERTL to LTL pass.

%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
% SECTION.                                                                    %
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
\section{Associated changes to O'Caml compiler}
\label{sect.associated.changes.to.ocaml.compiler}

At the moment, no changes we have made in the Matita backend have made their way back into the O'Caml compiler. We do not see the heavy process of modularisation and abstraction making its way back into the O'Caml codebase, as this would be a significant rewrite of the backend code. However, several bugfixes, and the identification of `hidden invariants' in the O'Caml code, will be incorporated back into the prototype.

%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
% SECTION.                                                                    %
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
\section{Future work}
\label{sect.future.work}

\item We plan to close all existing proof obligations that are currently closed using daemons, arising from our use of dependent types in the backend. This should be routine.
However, many may not be closable until we have completed Deliverable D4.4, the certification of the whole compiler, as we may not have strong enough invariants at the present time.

\item We plan to port the O'Caml compiler's implementation of tailcalls when this is completed, and eventually port the branch compression code currently in the O'Caml compiler to the Matita implementation.

\item We plan to validate the backend translations, removing any obvious bugs, by executing the translation inside Matita on small C programs. This is not critical, as the certification process will find all bugs anyway.
\end{itemize}

\label{subsect.listing.files}
Translation specific files (files relating to language semantics have been omitted).
Syntax:
\begin{center}
\begin{tabular*}{\textwidth}{p{3.5cm}p{5.5cm}p{3.5cm}p{1cm}}
Title & Description & O'Caml & Ratio \\
\hline
\texttt{RTLabs/syntax.ma} & The syntax of RTLabs & \texttt{RTLabs/RTLabs.mli} & 0.65 \\
\texttt{joint/Joint.ma} & Joint syntax for backend languages & N/A & N/A \\
\texttt{RTL/RTL.ma} & The syntax of RTL & \texttt{RTL/RTL.mli} & 0.41 \\
\texttt{ERTL/ERTL.ma} & The syntax of ERTL & \texttt{ERTL/ERTL.mli} & 0.13 \\
\texttt{LTL/LTL.ma} & The syntax of LTL & \texttt{LTL/LTL.mli} & 0.13 \\
\texttt{LIN/LIN.ma} & The syntax of LIN & \texttt{LIN/LIN.mli} & 0.36
\end{tabular*}
\end{center}
Here, the O'Caml column denotes the O'Caml source file in the prototype compiler's implementation that corresponds to the Matita script in question. The ratios are the line counts of the Matita files divided by the line counts of the corresponding O'Caml files, both computed with \texttt{wc -l}, a standard Unix tool.

\noindent Translations and utilities:
\begin{center}
\begin{tabular*}{\textwidth}{p{4.5cm}p{4.5cm}p{4.5cm}p{1cm}}
Title & Description & O'Caml & Ratio \\
\hline
\texttt{RTLabs/RTLabsToRTL.ma} & The translation from RTLabs to RTL & \texttt{RTLabs/RTLabsToRTL.ml} & 1.61 \\
\texttt{joint/TranslateUtils.ma} & Generic translation utilities & N/A & N/A \\
\texttt{RTL/RTLToERTL.ma} & The translation from RTL to ERTL & \texttt{RTL/RTLToERTL.ml} & 0.88 \\
\texttt{RTL/RTLtailcall.ma} & Elimination of tailcalls & \texttt{RTL/RTLtailcall.ml} & 2.08 \\
\texttt{ERTL/ERTLToLTL.ma} & The translation from ERTL to LTL & \texttt{ERTL/ERTLToRTL.ml} & 3.46 \\
\texttt{ERTL/Interference.ma} & Axiomatised graph colouring component & \texttt{common/interference.ml} & 0.03\footnote{The majority of this file is axiomatised.} \\
\texttt{ERTL/liveness.ma} & Liveness analysis & \texttt{ERTL/liveness.ml} & 0.92 \\
\texttt{LTL/LTLToLIN.ma} & The translation from LTL to LIN & \texttt{LTL/LTLToLIN.ml} & 0.75 \\
\texttt{LIN/LINToASM.ma} & The translation from LIN to assembly
language & \texttt{LIN/LINToASM.ml} & 2.45
\end{tabular*}
\end{center}
Given that Matita code is much more verbose than O'Caml code, with explicit typing and inline proofs, we have achieved respectable line count ratios in the translation.

%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
\label{subsect.listing.important.functions.and.axioms}
We list some important functions and axioms in the backend compilation:

\paragraph{From RTL/RTLabsToRTL.ma}