Changeset 3656


Ignore:
Timestamp:
Mar 16, 2017, 12:17:13 PM (6 weeks ago)
Author:
mulligan
Message:

cannibalising bits of project report for compiler proof section

Location:
Papers/jar-cerco-2017
Files:
2 edited

  • Papers/jar-cerco-2017/cerco.tex

    r3645 r3656  
    6363
    6464\smartqed
     65
     66\lstset{extendedchars=false}
     67\lstset{inputencoding=utf8x}
     68\DeclareUnicodeCharacter{8797}{:=}
     69\DeclareUnicodeCharacter{10746}{++}
     70\DeclareUnicodeCharacter{9001}{\ensuremath{\langle}}
     71\DeclareUnicodeCharacter{9002}{\ensuremath{\rangle}}
    6572
    6673\title{CerCo: Certified Complexity\thanks{The project CerCo acknowledges the
  • Papers/jar-cerco-2017/proof.tex

    r3615 r3656  
    88\section{Compiler proof}
    99\label{sect.compiler.proof}
     10
     11\subsection{A brief overview of the backend compilation chain}
     12\label{subsect.brief.overview.backend.compilation.chain}
     13
     14The Matita compiler's backend consists of five distinct intermediate languages: RTL, RTLntl, ERTL, LTL and LIN.
     15A sixth language, RTLabs, serves as the entry point of the backend and the exit point of the frontend.
     16RTL, RTLntl, ERTL and LTL are `control flow graph based' languages, whereas LIN is a linearised language, the final language before translation to assembly.
     17
      18We now briefly discuss the properties of the intermediate languages and the various transformations that take place during the translation process:
     19
     20\paragraph{RTLabs ((Abstract) Register Transfer Language)}
     21As mentioned, this is the final language of the compiler's frontend and the entry point for the backend.
     22This language uses pseudoregisters, not hardware registers.\footnote{There are an unbounded number of pseudoregisters.  Pseudoregisters are converted to hardware registers or stack positions during register allocation.}
      23Functions still use stack frames, where arguments are passed on the stack and results are stored at addresses.
      24Instruction selection is carried out during the translation to RTL.
     25
     26\paragraph{RTL (Register Transfer Language)}
     27This language uses pseudoregisters, not hardware registers.
     28Tailcall elimination is carried out during the translation from RTL to RTLntl.
     29
     30\paragraph{RTLntl (Register Transfer Language --- No Tailcalls)}
      31This is a pseudoregister, graph-based language in which all tailcalls have been eliminated.
      32RTLntl is not present in the O'Caml compiler; there, the RTL language is reused for this purpose.
     33
     34\paragraph{ERTL (Explicit Register Transfer Language)}
     35This is a language very similar to RTLntl.
      36However, the calling convention is made explicit, in that functions no longer receive and return inputs and outputs via a high-level mechanism, but rather use stack slots or hardware registers.
     37The ERTL to LTL pass performs the following transformations: liveness analysis, register colouring and register/stack slot allocation.
     38
     39\paragraph{LTL (Linearisable Transfer Language)}
      40Another graph-based language, but one that uses hardware registers instead of pseudoregisters.
     41Tunnelling (branch compression) should be implemented here.
     42
     43\paragraph{LIN (Linearised)}
     44This is a linearised form of the LTL language; function graphs have been linearised into lists of statements.
     45All registers have been translated into hardware registers or stack addresses.
     46This is the final stage of compilation before translating directly into assembly language.
     47
     48%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     49% SECTION.                                                                    %
     50%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     51\section{The backend intermediate languages in Matita}
     52\label{sect.backend.intermediate.languages.matita}
     53
     54We now discuss the encoding of the compiler backend languages in the Calculus of Constructions proper.
     55We pay particular heed to changes that we made from the O'Caml prototype.
     56In particular, many aspects of the backend languages have been unified into a single `joint' language.
     57We have also made heavy use of dependent types to reduce `spurious partiality' and to encode invariants.
     58
     59%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     60% SECTION.                                                                    %
     61%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     62\subsection{Abstracting related languages}
     63\label{subsect.abstracting.related.languages}
     64
     65The O'Caml compiler is written in the following manner.
     66Each intermediate language has its own dedicated syntax, notions of internal function, and so on.
      67Here, we make a distinction between `internal functions'---functions explicitly written by the programmer---and `external functions', which belong to an external library and require explicit linking.
     68In particular, IO can be seen as a special case of the `external function' mechanism.
      69Internal functions are represented as a record consisting of a sequential structure of statements, entry and exit points to this structure, and other bookkeeping devices.
     70This sequential structure can either be a control flow graph or a linearised list of statements, depending on the language.
      71Translations between intermediate languages explicitly map syntaxes to syntaxes, and internal function representations to internal function representations.
     72
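To make this representation concrete, the following is a minimal OCaml sketch of the kind of per-language internal function record just described; the type and field names are invented for illustration and do not come from the prototype.
\begin{lstlisting}
(* A hypothetical RTL-style internal function: a control flow graph of
   statements together with entry and exit labels and some bookkeeping. *)
type label = string

module LabelMap = Map.Make (String)

type rtl_statement =
  | St_skip of label                     (* fall through to a successor *)
  | St_int of int * int * label          (* load a constant, then go on *)
  | St_return                            (* leave the function          *)

type rtl_internal_function = {
  f_params : int list;                   (* pseudoregisters for arguments *)
  f_locals : int list;                   (* local pseudoregisters         *)
  f_graph  : rtl_statement LabelMap.t;   (* the function body, as a graph *)
  f_entry  : label;
  f_exit   : label;
}
\end{lstlisting}
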
     73This is a perfectly valid way to write a compiler, where everything is made explicit, but writing a \emph{verified} compiler poses new challenges.
     74In particular, we must look ahead to see how our choice of encodings will affect the size and complexity of the forthcoming proofs of correctness.
     75We now discuss some abstractions, introduced in the Matita code, which we hope will make our proofs shorter, amongst other benefits.
     76
     77\paragraph{Changes between languages made explicit}
     78Due to the bureaucracy inherent in explicating each intermediate language's syntax in the O'Caml compiler, it can often be hard to see exactly what changes between each successive intermediate language.
     79By abstracting the syntax of the RTL, ERTL, LTL and LIN intermediate languages, we make these changes much clearer.
     80
     81Our abstraction takes the following form:
     82\begin{lstlisting}
     83inductive joint_instruction (p: params__) (globals: list ident): Type[0] :=
     84  | COMMENT: String $\rightarrow$ joint_instruction p globals
     85  ...
     86  | INT: generic_reg p $\rightarrow$ Byte $\rightarrow$ joint_instruction p globals
     87  ...
      88  | OP1: Op1 $\rightarrow$ acc_a_reg p $\rightarrow$ acc_a_reg p $\rightarrow$ joint_instruction p globals
     92  ...
     93  | extension: extend_statements p $\rightarrow$ joint_instruction p globals.
     94\end{lstlisting}
     95We first note that for the majority of intermediate languages, many instructions are shared.
     96However, these instructions expect different register types (either a pseudoregister or a hardware register) as arguments.
     97We must therefore parameterise the joint syntax with a record of parameters that will be specialised to each intermediate language.
     98In the type above, this parameterisation is realised with the \texttt{params\_\_} record.
     99As a result of this parameterisation, we have also added a degree of `type safety' to the intermediate languages' syntaxes.
     100In particular, we note that the \texttt{OP1} constructor expects quite a specific type, in that the two register arguments must both be what passes for the accumulator A in that language.
     101In some languages, for example LIN, this is the hardware accumulator, whilst in others this is any pseudoregister.
     102Contrast this with the \texttt{INT} constructor, which expects a \texttt{generic\_reg}, corresponding to an `arbitrary' register type.
     103
     104Further, we note that some intermediate languages have language specific instructions (i.e. the instructions that change between languages).
     105We therefore add a new constructor to the syntax, \texttt{extension}, which expects a value of type \texttt{extend\_statements p}.
     106As \texttt{p} varies between intermediate languages, we can provide language specific extensions to the syntax of the joint language.
     107For example, ERTL's extended syntax consists of the following extra statements:
     108\begin{lstlisting}
     109inductive ertl_statement_extension: Type[0] :=
     110  | ertl_st_ext_new_frame: ertl_statement_extension
     111  | ertl_st_ext_del_frame: ertl_statement_extension
     112  | ertl_st_ext_frame_size: register $\rightarrow$ ertl_statement_extension.
     113\end{lstlisting}
     114These are further packaged into an ERTL specific instance of \texttt{params\_\_} as follows:
     115\begin{lstlisting}
     116definition ertl_params__: params__ :=
     117  mk_params__ register register ... ertl_statement_extension.
     118\end{lstlisting}
     119
     120\paragraph{Shared code, reduced proofs}
     121Many features of individual backend intermediate languages are shared with other intermediate languages.
     122For instance, RTLabs, RTL, ERTL and LTL are all graph based languages, where functions are represented as a control flow graph of statements that form their bodies.
     123Functions for adding statements to a graph, searching the graph, and so on, are remarkably similar across all languages, but are duplicated in the O'Caml code.
     124
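As an illustration of the kind of duplication involved, here is a hedged OCaml sketch of the graph-manipulating helpers that each graph-based language re-implements; the generic versions below are parametric in the type of statements, and the names are invented for exposition.
\begin{lstlisting}
(* Generic helpers over a control flow graph keyed by labels:
   adding a statement under a label and looking one up again. *)
module LabelMap = Map.Make (String)

type 'stmt graph = 'stmt LabelMap.t

let add_statement (lbl : string) (stmt : 'stmt) (g : 'stmt graph) : 'stmt graph =
  LabelMap.add lbl stmt g

let lookup_statement (g : 'stmt graph) (lbl : string) : 'stmt option =
  LabelMap.find_opt lbl g
\end{lstlisting}
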
     125As a result, we chose to abstract the representation of internal functions for the RTL, ERTL, LTL and LIN intermediate languages into a `joint' representation.
     126This representation is parameterised by a record that dictates the layout of the function body for each intermediate language.
     127For instance, in RTL, the layout is graph like, whereas in LIN, the layout is a linearised list of statements.
      128Further, a generalised way of accessing the successor of the statement currently under consideration is needed, and so forth.
     129
     130Our joint internal function record looks like so:
     131\begin{lstlisting}
     132record joint_internal_function (globals: list ident) (p:params globals) : Type[0] :=
     133{
     134  ...
     135  joint_if_params   : paramsT p;
     136  joint_if_locals   : localsT p;
     137  ...
     138  joint_if_code     : codeT ... p;
     139  ...
     140}.
     141\end{lstlisting}
     142In particular, everything that can vary between differing intermediate languages has been parameterised.
      143Here, we see that the location from which to fetch parameters, the listing of local variables, and the internal code representation have all been parameterised.
     144Other particulars are also parameterised, though here omitted.
     145
      146We hope that this abstraction process will reduce the number of proofs that need to be written, as internal functions of the different languages are now uniformly characterised by parameters.
     147
     148\paragraph{Dependency on instruction selection}
     149We note that the backend languages are all essentially `post instruction selection languages'.
     150The `joint' syntax makes this especially clear.
     151For instance, in the definition:
     152\begin{lstlisting}
     153inductive joint_instruction (p:params__) (globals: list ident): Type[0] :=
     154  ...
     155  | INT: generic_reg p $\rightarrow$ Byte $\rightarrow$ joint_instruction p globals
     156  | MOVE: pair_reg p $\rightarrow$ joint_instruction p globals
     157  ...
     158  | PUSH: acc_a_reg p $\rightarrow$ joint_instruction p globals
     159  ...
     160  | extension: extend_statements p $\rightarrow$ joint_instruction p globals.
     161\end{lstlisting}
     162The capitalised constructors---\texttt{INT}, \texttt{MOVE}, and so on---are all machine specific instructions.
     163Retargetting the compiler to another microprocessor, improving instruction selection, or simply enlarging the subset of the machine language that the compiler can use, would entail replacing these constructors with constructors that correspond to the instructions of the new target.
      164We feel that this makes it much more explicit which instructions are target dependent and which are not (i.e. those language-specific instructions that fall inside the \texttt{extension} constructor).
     165In the long term, we would really like to try to directly embed the target language in the syntax, in order to reuse the target language's semantics.
     166
     167\paragraph{Independent development and testing}
     168We have essentially modularised the intermediate languages in the compiler backend.
      169As with any form of modularisation, we reap the benefits of being able to test and develop each intermediate language independently, and of fixing bugs just once.
     170
     171\paragraph{Future reuse for other compiler projects}
     172Another advantage of our modularisation scheme is the ability to quickly use and reuse intermediate languages for other compiler projects.
     173For instance, in creating a cost-preserving compiler for a functional language, we may choose to target a linearised version of RTL directly.
     174Adding such an intermediate language would involve the addition of just a few lines of code.
     175
     176\paragraph{Easy addition of new compiler passes}
     177Under our modularisation and abstraction scheme, new compiler passes can easily be injected into the backend.
     178We have a concrete example of this in the RTLntl language, an intermediate language that was not present in the original O'Caml code.
     179To specify a new intermediate language we must simply specify, through the use of the statement extension mechanism, what differs in the new intermediate language from the `joint' language, and configure a new notion of internal function record, by specialising parameters, to the new language.
     180As generic code for the `joint' language exists, for example to add statements to control flow graphs, this code can be reused for the new intermediate language.
     181
     182\paragraph{Possible commutations of translation passes}
      183The backend translation passes of the CerCo compiler differ quite a bit from those of the CompCert compiler.
      184In the CompCert compiler, linearisation occurs much earlier in the compilation chain, and passes such as register colouring and allocation are carried out on a linearised form of the program.
     185Contrast this with our own approach, where the code is represented as a graph for much longer.
     186Similarly, in CompCert the calling conventions are enforced after register allocation, whereas we do register allocation before enforcing the calling convention.
     187
     188However, by abstracting the representation of intermediate functions, we are now much more free to reorder translation passes as we see fit.
     189The linearisation process, for instance, now no longer cares about the specific representation of code in the source and target languages.
     190It just relies on a common interface.
     191We are therefore, in theory, free to pick where we wish to linearise our representation.
     192This adds an unusual flexibility into the compilation process, and allows us to freely experiment with different orderings of translation passes.
     193
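As a rough OCaml sketch of what `relying on a common interface' might look like, consider a linearisation pass written only against a record of operations over the code representation; the interface and names below are hypothetical, not those of the CerCo code.
\begin{lstlisting}
(* A common interface over code representations: the pass only needs an
   entry point, a way to fetch a statement, and its successors. *)
type ('code, 'stmt, 'label) code_ops = {
  entry      : 'code -> 'label;
  fetch      : 'code -> 'label -> 'stmt option;
  successors : 'stmt -> 'label list;
}

(* Skeleton of a linearisation pass.  A real pass would also insert
   explicit gotos where fall-through is broken; that is omitted here. *)
let linearise (ops : ('code, 'stmt, 'label) code_ops) (code : 'code) : 'stmt list =
  let visited = Hashtbl.create 17 in
  let rec visit lbl acc =
    if Hashtbl.mem visited lbl then acc
    else begin
      Hashtbl.add visited lbl ();
      match ops.fetch code lbl with
      | None -> acc
      | Some stmt ->
          List.fold_left (fun acc l -> visit l acc) (stmt :: acc) (ops.successors stmt)
    end
  in
  List.rev (visit (ops.entry code) [])
\end{lstlisting}
Because \texttt{linearise} never mentions graphs or lists directly, it can in principle be scheduled at any point in the chain where the interface can be instantiated.
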
     194%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     195% SECTION.                                                                    %
     196%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     197\subsection{Use of dependent types}
     198\label{subsect.use.of.dependent.types}
     199
     200We see several potential ways in which a compiler can fail to compile a program:
     201\begin{enumerate}
     202\item
     203The program is malformed, and there is no hope of making sense of the program.
     204\item
     205The compiler is buggy, or an invariant in the compiler is invalidated.
     206\item
     207An incomplete heuristic in the compiler fails.
     208\item
     209The compiled code exhausts some bounded resource, for instance the processor's code memory.
     210\end{enumerate}
     211Standard compilers can fail for all the above reasons.
     212Certified compilers are only required to rule out the second class of failures, but they can still fail for all the remaining reasons.
      213In particular, a compiler that systematically refuses to compile any well-formed program is a sound compiler.
      214In contrast, we would like our certified compiler to fail only in the fourth case.
     215We plan to achieve this with the following strategy.
     216
     217First, the compiler is abstracted over all incomplete heuristics, seen as total functions.
     218To obtain executable code, the compiler is eventually composed with implementations of the abstracted strategies, with the composition taking care of any potential failure of the heuristics in finding a solution.
     219
     220Second, we reject all malformed programs using dependent types: only well-formed programs should typecheck and the compiler can be applied only to well-typed programs.
     221
     222Finally, exhaustion of bounded resources can be checked only at the very end of compilation.
     223Therefore, all intermediate compilation steps are now total functions that cannot diverge, nor fail: these properties are directly guaranteed by the type system of Matita.
     224
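As a rough illustration of the first step, the following OCaml sketch shows a register allocation pass abstracted over an incomplete colouring heuristic and composed with a checker; the names and the trivial fallback are hypothetical, chosen only to illustrate the composition.
\begin{lstlisting}
(* The heuristic is an arbitrary total function; a checker validates its
   answer, and the composition falls back to a trivial but correct
   allocation (spill everything) when the answer is unusable. *)
type location = Register of int | StackSlot of int
type colouring = (string * location) list

let spill_everything (pseudos : string list) : colouring =
  List.mapi (fun i r -> (r, StackSlot i)) pseudos

(* Checking an answer is far easier than producing a good one; a real
   checker would also inspect the interference graph. *)
let is_valid_colouring (pseudos : string list) (c : colouring) : bool =
  List.for_all (fun r -> List.mem_assoc r c) pseudos

let allocate ~(heuristic : string list -> colouring) (pseudos : string list) : colouring =
  let candidate = heuristic pseudos in
  if is_valid_colouring pseudos candidate then candidate
  else spill_everything pseudos
\end{lstlisting}
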
     225Presently, the plan is not yet fulfilled.
     226However, we are improving the implemented code according to our plan.
      227We are doing this by progressively strengthening the code through the use of dependent types.
     228We detail the different ways in which dependent types have been used so far.
     229
     230First, we encode informal invariants, or uses of \texttt{assert false} in the O'Caml code, with dependent types, converting partial functions into total functions.
     231There are numerous examples of this throughout the backend.
     232For example, in the \texttt{RTLabs} to \texttt{RTL} transformation pass, many functions only `make sense' when lists of registers passed to them as arguments conform to some specific length.
     233For instance, the \texttt{translate\_negint} function, which translates a negative integer constant:
     234\begin{lstlisting}
     235definition translate_negint :=
     236  $\lambda$globals: list ident.
     237  $\lambda$destrs: list register.
     238  $\lambda$srcrs: list register.
     239  $\lambda$start_lbl: label.
     240  $\lambda$dest_lbl: label.
     241  $\lambda$def: rtl_internal_function globals.
     242  $\lambda$prf: |destrs| = |srcrs|. (* assert here *)
     243    ...
     244\end{lstlisting}
     245The last argument to the function, \texttt{prf}, is a proof that the lengths of the lists of source and destination registers are the same.
     246This was an assertion in the O'Caml code.
     247
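For comparison, the following is a minimal OCaml sketch of how the corresponding check appears in the prototype style, as a runtime assertion rather than a static proof obligation; the body is elided and the code is illustrative rather than the actual prototype source.
\begin{lstlisting}
(* The invariant on register list lengths is a runtime assertion here:
   violating it aborts compilation instead of being ruled out by types. *)
let translate_negint _globals destrs srcrs _start_lbl _dest_lbl def =
  assert (List.length destrs = List.length srcrs);
  (* ... emit the RTL code performing the negation ... *)
  def
\end{lstlisting}
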
     248Secondly, we make use of dependent types to make the Matita code correct by construction, and eventually the proofs of correctness for the compiler easier to write.
     249For instance, many intermediate languages in the backend of the compiler, from RTLabs to LTL, are graph based languages.
     250Here, function definitions consist of a graph (i.e. a map from labels to statements) and a pair of labels denoting the entry and exit points of this graph.
     251Practically, we would always like to ensure that the entry and exit labels are present in the statement graph.
     252We ensure that this is so with a dependent sum type in the \texttt{joint\_internal\_function} record, which all graph based languages specialise to obtain their own internal function representation:
     253\begin{lstlisting}
     254record joint_internal_function (globals: list ident) (p: params globals): Type[0] :=
     255{
     256  ...
     257  joint_if_code     : codeT $\ldots$ p;
     258  joint_if_entry    : $\Sigma$l: label. lookup $\ldots$ joint_if_code l $\neq$ None $\ldots$;
     259  ...
     260}.
     261\end{lstlisting}
     262Here, \texttt{codeT} is a parameterised type representing the `structure' of the function's body (a graph in graph based languages, and a list in the linearised LIN language).
     263Specifically, the \texttt{joint\_if\_entry} is a dependent pair consisting of a label and a proof that the label in question is a vertex in the function's graph. A similar device exists for the exit label.
     264
     265We make use of dependent types also for another reason: experimentation.
     266Namely, CompCert makes little use of dependent types to encode invariants.
      267In contrast, we wish to make as much use of dependent types as possible, both to experiment with different ways of encoding compilers in a proof assistant, and as a way of `stress testing' Matita's support for dependent types.
     268
     269Moreover, at the moment we make practically no use of inductive predicates to specify compiler invariants and to describe the semantics of intermediate languages.
     270On the contrary, all predicates are computable functions.
      271Therefore, the proof style that we adopt will necessarily be significantly different from, say, that of CompCert.
      272At the moment, in Matita, `Russell'-style reasoning (in the sense of~\cite{sozeau:subset:2006}) seems to be the best solution for working with computable functions.
     273This style is heavily based on the idea that all computable functions should be specified using dependent types to describe their pre- and post-conditions.
     274As a consequence, it is natural to add dependent types almost everywhere in the Matita compiler's codebase.
     275
     276%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     277% SECTION.                                                                    %
     278%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     279\subsection{What we do not implement}
     280\label{subsect.what.we.do.not.implement}
     281
     282There are several classes of functionality that we have chosen not to implement in the backend languages:
     283\begin{itemize}
     284\item
     285\textbf{Datatypes and functions over these datatypes that are not supported by the compiler.}
     286In particular, the compiler does not support the floating point datatype, nor accompanying functions over that datatype.
     287At the moment, frontend languages within the compiler possess constructors corresponding to floating point code.
     288These are removed during instruction selection (in the RTLabs to RTL transformation) using a daemon.\footnote{A Girardism.  An axiom of type \texttt{False}, from which we can prove anything.}
     289However, at some point, we would like the front end of the compiler to recognise programs that use floating point code and reject them as being invalid.
     290\item
     291\textbf{Axiomatised components that will be implemented using external oracles.}
      292Several large, complex pieces of compiler infrastructure, most noticeably register colouring and fixed point calculation during liveness analysis, have been axiomatised.
     293This was already agreed upon before the start of the project, and is clearly marked in the project proposal, following comments by those involved with the CompCert project about the difficulty in formalising register colouring in that project.
     294Instead, these components are axiomatised, along with the properties that they need to satisfy in order for the rest of the compilation chain to be correct.
     295These axiomatised components are found in the ERTL to LTL pass.
     296
     297It should be noted that these axiomatised components fall into the following pattern: whilst their implementation is complex, and their proof of correctness is difficult, we are able to quickly and easily verify that any answer that they provide is correct. Therefore, we plan to provide implementations in OCaml only,
     298and to provide certified verifiers in Matita.
     299At the moment, the implementation of the certified verifiers is left as future work.
     300\item
     301\textbf{A few non-computational proof obligations.}
      302A few difficult-to-close but non-computational proof obligations (i.e. ones that do not prevent us from executing the compiler inside Matita) have been closed using daemons in the backend.
     303These proof obligations originate with our use of dependent types for expressing invariants in the compiler.
     304However, here, it should be mentioned that many open proof obligations are simply impossible to close until we start to obtain stronger invariants from the proof of correctness for the compiler proper.
     305In particular, in the RTLabs to RTL pass, several proof obligations relating to lists of registers stored in a `local environment' appear to fall into this pattern.
     306\item
     307\textbf{Branch compression (tunnelling).}
     308This was a feature of the O'Caml compiler.
      309It is not currently implemented in the Matita compiler.
     310This feature is only an optimisation, and will not affect the correctness of the compiler.
     311\item
      312\textbf{`Real' tailcalls.}
     313For the time being, tailcalls in the backend are translated to `vanilla' function calls during the ERTL to LTL pass.
      314This follows the O'Caml compiler, which did not implement tailcalls and performed this simplification step instead.
     315`Real' tailcalls are being implemented in the O'Caml compiler, and when this implementation is complete, we aim to port this code to the Matita compiler.
     316\end{itemize}
     317
     318%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     319% SECTION.                                                                    %
     320%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     321\section{Associated changes to O'Caml compiler}
     322\label{sect.associated.changes.to.ocaml.compiler}
     323
      324At the moment, only bugfixes made in the Matita backend, and none of the architectural changes, have found their way back into the O'Caml compiler.
     325We do not see the heavy process of modularisation and abstraction as making its way back into the O'Caml codebase, as this is a significant rewrite of the backend code that is supposed to yield the same code after instantiating parameters, anyway.
     326
     327%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     328% SECTION.                                                                    %
     329%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     330\section{Future work}
     331\label{sect.future.work}
     332
     333As mentioned in Section~\ref{subsect.what.we.do.not.implement}, there are several unimplemented features in the compiler, and several aspects of the Matita code that can be improved in order to make currently partial functions total.
     334We summarise this future work here:
     335\begin{itemize}
     336\item
     337We plan to make use of dependent types to identify `floating point' free programs and make all functions total over such programs.
     338This will remove a swathe of uses of daemons.
     339This should be routine.
     340\item
     341We plan to move expansion of integer modulus, and other related functions, into the instruction selection (RTLabs to RTL) phase.
     342This will also help to remove a swathe of uses of daemons, as well as potentially introduce new opportunities for optimisations that we currently miss in expanding these instructions at the C-light level.
     343\item
     344We plan to close all existing proof obligations that are closed using daemons, arising from our use of dependent types in the backend.
     345However, many may not be closable until we have completed Deliverable D4.4, the certification of the whole compiler, as we may not have invariants strong enough at the present time.
     346\item
     347We plan to port the O'Caml compiler's implementation of tailcalls when this is completed, and eventually port the branch compression code currently in the O'Caml compiler to the Matita implementation.
     348This should not cause any major problems.
     349\item
     350We plan to validate the backend translations, removing any obvious bugs, by executing the translation inside Matita on small C programs.
     351This is not critical, as the certification process will find all bugs anyway.
     352\item We plan to provide certified validators for all results provided by
      353external oracles written in OCaml. At the moment, we have axiomatised oracles
     354for computing least fixpoints during liveness analysis, for colouring
     355registers and for branch displacement in the assembler code.
     356\end{itemize}
     357
     358\section{The back-end intermediate languages' semantics in Matita}
     359\label{sect.backend.intermediate.languages.semantics.matita}
     360
     361%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     362% SECTION.                                                                    %
     363%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     364\subsection{Abstracting related languages}
     365\label{subsect.abstracting.related.languages}
     366
     367As mentioned in the report for Deliverable D4.2, a systematic process of abstraction, over the OCaml code, has taken place in the Matita encoding.
     368In particular, we have merged many of the syntaxes of the intermediate languages (i.e. RTL, ERTL, LTL and LIN) into a single `joint' syntax, which is parameterised by various types.
     369Equivalent intermediate languages to those present in the OCaml code can be recovered by specialising this joint structure.
     370
     371As mentioned in the report for Deliverable D4.2, there are a number of advantages that this process of abstraction brings, from code reuse to allowing us to get a clearer view of the intermediate languages and their structure.
      372However, the semantics of the intermediate languages allow us to concretely demonstrate this improvement in clarity, by noting that the semantics of the LTL and LIN languages are identical.
     373In particular, the semantics of both LTL and LIN are implemented in exactly the same way.
     374The only difference between the two languages is how the next instruction to be interpreted is fetched.
      375In LTL, this involves a lookup in a graph, whereas in LIN, it involves fetching from a list of instructions.
     376
     377As a result, we see that the semantics of LIN and LTL are both instances of a single, more general language that is parametric in how the next instruction is fetched.
      378Furthermore, any prospective proof that the semantics of LTL and LIN are identical is now almost trivial, saving a good deal of work in Deliverable D4.4.
     379
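A hedged OCaml sketch of this parametricity (with invented names) is given below: the shared interpreter is written against a fetch function, and the graph-based and list-based instantiations differ only in how that function is supplied.
\begin{lstlisting}
(* The LTL and LIN semantics differ only in how the next statement is
   fetched; everything else can be shared. *)
module LabelMap = Map.Make (String)

(* Graph-based fetch (LTL-like): program points are labels. *)
let fetch_from_graph (g : 'stmt LabelMap.t) (pc : string) : 'stmt option =
  LabelMap.find_opt pc g

(* List-based fetch (LIN-like): program points are list positions. *)
let fetch_from_list (code : 'stmt list) (pc : int) : 'stmt option =
  List.nth_opt code pc

(* A single, shared step function, parametric in the fetch operation and
   in the type of program points. *)
let step (fetch : 'pc -> 'stmt option) (execute : 'stmt -> 'state -> 'state)
    (pc : 'pc) (st : 'state) : 'state option =
  match fetch pc with
  | None -> None
  | Some stmt -> Some (execute stmt st)
\end{lstlisting}
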
     380%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     381% SECTION.                                                                    %
     382%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     383\subsection{Type parameters, and their purpose}
     384\label{subsect.type.parameters.their.purpose}
     385
     386We mentioned in the Deliverable D4.2 report that all joint languages are parameterised by a number of types, which are later specialised to each distinct intermediate language.
      387As this parameterisation process is also dependent on design decisions in the language semantics, we have so far held off summarising the role of each parameter.
     388
     389We begin the abstraction process with the \texttt{params\_\_} record.
     390This holds the types of the representations of the different register varieties in the intermediate languages:
     391\begin{lstlisting}
     392record params__: Type[1] :=
     393{
     394  acc_a_reg: Type[0];
     395  acc_b_reg: Type[0];
     396  dpl_reg: Type[0];
     397  dph_reg: Type[0];
     398  pair_reg: Type[0];
     399  generic_reg: Type[0];
     400  call_args: Type[0];
     401  call_dest: Type[0];
     402  extend_statements: Type[0]
     403}.
     404\end{lstlisting}
     405We summarise what these types mean, and how they are used in both the semantics and the translation process:
     406\begin{center}
     407\begin{tabular*}{\textwidth}{p{4cm}p{11cm}}
     408Type & Explanation \\
     409\hline
     410\texttt{acc\_a\_reg} & The type of the accumulator A register.  In some languages this is implemented as the hardware accumulator, whereas in others this is a pseudoregister.\\
      411\texttt{acc\_b\_reg} & Similar to the accumulator A field, but for the processor's auxiliary accumulator, B. \\
      412\texttt{dpl\_reg} & The type of the representation of the register holding the low eight bits of the MCS-51's single 16-bit register, DPL.  Can be either a pseudoregister or the hardware DPL register. \\
      413\texttt{dph\_reg} & Similar to the DPL register, but for the high eight bits of the 16-bit register. \\
     414\texttt{pair\_reg} & Various different `move' instructions have been merged into a single move instruction in the joint language.  A value can either be moved to or from the accumulator in some languages, or moved to and from an arbitrary pseudoregister in others.  This type encodes how we should move data around the registers and accumulators. \\
     415\texttt{generic\_reg} & The representation of generic registers (i.e. those that are not devoted to a specific task). \\
     416\texttt{call\_args} & The actual arguments passed to a function.  For some languages this is simply the number of arguments passed to the function. \\
     417\texttt{call\_dest} & The destination of the function call. \\
     418\texttt{extend\_statements} & Instructions that are specific to a particular intermediate language, and which cannot be abstracted into the joint language.
     419\end{tabular*}
     420\end{center}
     421
     422As mentioned in the report for Deliverable D4.2, the record \texttt{params\_\_} is enough to be able to specify the instructions of the joint languages:
     423\begin{lstlisting}
     424inductive joint_instruction (p: params__) (globals: list ident): Type[0] :=
     425  | COMMENT: String $\rightarrow$ joint_instruction p globals
     426  | COST_LABEL: costlabel $\rightarrow$ joint_instruction p globals
     427  ...
     428  | OP1: Op1 $\rightarrow$ acc_a_reg p $\rightarrow$ acc_a_reg p $\rightarrow$ joint_instruction p globals
     429  | COND: acc_a_reg p $\rightarrow$ label $\rightarrow$ joint_instruction p globals
     430  ...
     431\end{lstlisting}
     432Here, we see that the instruction \texttt{OP1} (a unary operation on the accumulator A) can be given quite a specific type, through the use of the \texttt{params\_\_} data structure.
     433
      434Joint statements can be split into two subclasses: those that simply pass the flow of control on to their successor statement, and those that jump to a potentially remote location in the program.
      435Naturally, as some intermediate languages are graph based, and others linearised, the act of passing control on to the `successor' instruction can either be the act of following a graph edge in a control flow graph, or that of incrementing an index into a list.
     436We make a distinction between instructions that pass control onto their immediate successors, and those that jump elsewhere in the program, through the use of \texttt{succ}, denoting the immediate successor of the current instruction, in the \texttt{params\_} record described below.
     437\begin{lstlisting}
     438record params_: Type[1] :=
     439{
     440  pars__ :> params__;
     441  succ: Type[0]
     442}.
     443\end{lstlisting}
     444The type \texttt{succ} corresponds to labels, in the case of control flow graph based languages, or is instantiated to the unit type for the linearised language, LIN.
      445Using \texttt{params\_} we can define statements of the joint language:
     446\begin{lstlisting}
     447inductive joint_statement (p:params_) (globals: list ident): Type[0] :=
     448  | sequential: joint_instruction p globals $\rightarrow$ succ p $\rightarrow$ joint_statement p globals
     449  | GOTO: label $\rightarrow$ joint_statement p globals
     450  | RETURN: joint_statement p globals.
     451\end{lstlisting}
     452Note that in the joint language, instructions are `linear', in that they have an immediate successor.
      453Statements, on the other hand, consist of either a linear instruction, or a \texttt{GOTO} or \texttt{RETURN} statement, both of which can jump to an arbitrary place in the program. The conditional jump instruction \texttt{COND} is `linear', since it
     454has an immediate successor, but it also takes an arbitrary location (a label)
     455to jump to.
     456
      457For the semantics, we need further parameterised types.
     458In particular, we parameterise the result and parameter type of an internal function call in \texttt{params0}:
     459\begin{lstlisting}
     460record params0: Type[1] :=
     461{
     462  pars__' :> params__;
     463  resultT: Type[0];
     464  paramsT: Type[0]
     465}.
     466\end{lstlisting}
      467Here, \texttt{paramsT} and \texttt{resultT} are typically the types of the (pseudo)registers that store the parameters and result of a function.
     468
     469We further extend \texttt{params0} with a type for local variables in internal function calls:
     470\begin{lstlisting}
     471record params1 : Type[1] :=
     472{
     473  pars0 :> params0;
     474  localsT: Type[0]
     475}.
     476\end{lstlisting}
     477Again, we expand our parameters with types corresponding to the code representation (either a control flow graph or a list of statements).
     478Further, we hypothesise a generic method for looking up the next instruction in the graph, called \texttt{lookup}.
     479Note that \texttt{lookup} may fail, and returns an \texttt{option} type:
     480\begin{lstlisting}
     481record params (globals: list ident): Type[1] :=
     482{
     483  succ_ : Type[0];
     484  pars1 :> params1;
     485  codeT : Type[0];
     486  lookup: codeT $\rightarrow$ label $\rightarrow$ option (joint_statement (mk_params_ pars1 succ_) globals)
     487}.
     488\end{lstlisting}
     489We now have what we need to define internal functions for the joint language.
     490The first two `universe' fields are only used in the compilation process, for generating fresh names, and do not affect the semantics.
     491The rest of the fields affect both compilation and semantics.
     492In particular, we have a description of the result, parameters and the local variables of a function.
      493Note also that we have lifted the hypothesised \texttt{lookup} function from \texttt{params} into a dependent sigma type, which combines a label (the entry or exit point of the control flow graph or list) with a proof that the label is in the graph structure:
     494\begin{lstlisting}
     495record joint_internal_function (globals: list ident) (p:params globals) : Type[0] :=
     496{
     497  joint_if_luniverse: universe LabelTag;
     498  joint_if_runiverse: universe RegisterTag;
     499  joint_if_result   : resultT p;
     500  joint_if_params   : paramsT p;
     501  joint_if_locals   : localsT p;
     502  joint_if_stacksize: nat;
     503  joint_if_code     : codeT ... p;
     504  joint_if_entry    : $\Sigma$l: label. lookup ... joint_if_code l $\neq$ None ?;
     505  joint_if_exit     : $\Sigma$l: label. lookup ... joint_if_code l $\neq$ None ?
     506}.
     507\end{lstlisting}
     508Naturally, a question arises as to why we have chosen to split up the parameterisation into so many intermediate records, each slightly extending earlier ones.
      509The reason is that some intermediate languages share a host of parameters, and only differ in some others.
     510For instance, in instantiating the ERTL language, certain parameters are shared with RTL, whilst others are ERTL specific:
     511\begin{lstlisting}
     512...
     513definition ertl_params__: params__ :=
     514 mk_params__ register register register register (move_registers $\times$ move_registers)
     515  register nat unit ertl_statement_extension.
     516...
     517definition ertl_params1: params1 := rtl_ertl_params1 ertl_params0.
     518definition ertl_params: ∀globals. params globals := rtl_ertl_params ertl_params0.
     519...
     520definition ertl_statement := joint_statement ertl_params_.
     521
     522definition ertl_internal_function :=
     523  $\lambda$globals.joint_internal_function ... (ertl_params globals).
     524\end{lstlisting}
     525Here, \texttt{rtl\_ertl\_params1} are the common parameters of the ERTL and RTL languages:
     526\begin{lstlisting}
     527definition rtl_ertl_params1 := $\lambda$pars0. mk_params1 pars0 (list register).
     528\end{lstlisting}
     529
     530The record \texttt{more\_sem\_params} bundles together functions that store and retrieve values in various forms of register:
     531\begin{lstlisting}
     532record more_sem_params (p:params_): Type[1] :=
     533{
     534  framesT: Type[0];
     535  empty_framesT: framesT;
     536
     537  regsT: Type[0];
     538  empty_regsT: regsT;
     539
     540  call_args_for_main: call_args p;
     541  call_dest_for_main: call_dest p;
     542
     543  greg_store_: generic_reg p $\rightarrow$ beval $\rightarrow$ regsT $\rightarrow$ res regsT;
     544  greg_retrieve_: regsT $\rightarrow$ generic_reg p $\rightarrow$ res beval;
     545  acca_store_: acc_a_reg p $\rightarrow$ beval $\rightarrow$ regsT $\rightarrow$ res regsT;
     546  acca_retrieve_: regsT $\rightarrow$ acc_a_reg p $\rightarrow$ res beval;
     547  ...
     548  dpl_store_: dpl_reg p $\rightarrow$ beval $\rightarrow$ regsT $\rightarrow$ res regsT;
     549  dpl_retrieve_: regsT $\rightarrow$ dpl_reg p $\rightarrow$ res beval;
     550  ...
     551  pair_reg_move_: regsT $\rightarrow$ pair_reg p $\rightarrow$ res regsT;
     552}.
     553\end{lstlisting}
     554Here, the fields \texttt{empty\_framesT}, \texttt{empty\_regsT}, \texttt{call\_args\_for\_main} and \texttt{call\_dest\_for\_main} are used for state initialisation.
     555
     556The fields \texttt{greg\_store\_} and \texttt{greg\_retrieve\_} store and retrieve values from a generic register, respectively.
      557Similarly, \texttt{pair\_reg\_move\_} implements the generic move instruction of the joint language.
     558Here \texttt{framesT} is the type of stack frames, with \texttt{empty\_framesT} an empty stack frame.
     559
     560The two hypothesised values \texttt{call\_args\_for\_main} and \texttt{call\_dest\_for\_main} deal with problems with the \texttt{main} function of the program, and how it is handled.
     561In particular, we need to know when the \texttt{main} function has finished executing.
     562But this is complicated, in C, by the fact that the \texttt{main} function is explicitly allowed to be recursive (disallowed in C++).
     563Therefore, to understand whether the exiting \texttt{main} function is really exiting, or just recursively calling itself, we need to remember the address to which \texttt{main} will return control once the initial call to \texttt{main} has finished executing.
     564This is done with \texttt{call\_dest\_for\_main}, whereas \texttt{call\_args\_for\_main} holds the \texttt{main} function's arguments.
     565
     566We extend \texttt{more\_sem\_params} with yet more parameters via \texttt{more\_sem\_params2}:
     567\begin{lstlisting}
     568record more_sem_params1 (globals: list ident) (p: params globals) : Type[1] :=
     569{
     570  more_sparams1 :> more_sem_params p;
     571
     572  succ_pc: succ p $\rightarrow$ address $\rightarrow$ res address;
     573  pointer_of_label: genv ... p $\rightarrow$ pointer $\rightarrow$
     574    label $\rightarrow$ res ($\Sigma$p:pointer. ptype p = Code);
     575  ...
     576  fetch_statement:
     577    genv ... p $\rightarrow$ state (mk_sem_params ... more_sparams1) $\rightarrow$
     578    res (joint_statement (mk_sem_params ... more_sparams1) globals);
     579  ...
     580  save_frame:
     581    address $\rightarrow$ nat $\rightarrow$ paramsT ... p $\rightarrow$ call_args p $\rightarrow$ call_dest p $\rightarrow$
     582    state (mk_sem_params ... more_sparams1) $\rightarrow$
     583    res (state (mk_sem_params ... more_sparams1));
     584  pop_frame:
     585    genv globals p $\rightarrow$ state (mk_sem_params ... more_sparams1) $\rightarrow$
     586    res ((state (mk_sem_params ... more_sparams1)));
     587  ...
     588  set_result:
     589    list val $\rightarrow$ state (mk_sem_params ... more_sparams1) $\rightarrow$
     590    res (state (mk_sem_params ... more_sparams1));
     591  exec_extended:
     592    genv globals p $\rightarrow$ extend_statements (mk_sem_params ... more_sparams1) $\rightarrow$
     593    succ p $\rightarrow$ state (mk_sem_params ... more_sparams1) $\rightarrow$
     594    IO io_out io_in (trace $\times$ (state (mk_sem_params ... more_sparams1)))
     595 }.
     596\end{lstlisting}
     597The field \texttt{succ\_pc} takes an address, and a `successor' label, and returns the address of the instruction immediately succeeding the one at hand.
     598
     599Here, \texttt{fetch\_statement} fetches the next statement to be executed.
     600The fields \texttt{save\_frame} and \texttt{pop\_frame} manipulate stack frames.
     601In particular, \texttt{save\_frame} creates a new stack frame on the top of the stack, saving the destination and parameters of a function, and returning an updated state.
     602The field \texttt{pop\_frame} destructively pops a stack frame from the stack, returning an updated state.
     603Further, \texttt{set\_result} saves the result of the function computation, and \texttt{exec\_extended} is a function that executes the extended statements, peculiar to each individual intermediate language.
     604
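To give a flavour of what the frame-manipulating fields do, here is a minimal OCaml sketch of saving and popping a frame on a list-of-frames representation; the types are hypothetical and much simpler than the actual \texttt{save\_frame} and \texttt{pop\_frame} signatures above.
\begin{lstlisting}
(* A hypothetical frame: the call's return destination and its parameters. *)
type frame = { return_dest : int; saved_params : int list }

type state = {
  frames : frame list;
  pc     : int;
  (* ... registers, memory, ... *)
}

(* save_frame pushes a frame recording where the call must return to and
   which parameters it was given. *)
let save_frame (return_dest : int) (saved_params : int list) (st : state) : state =
  { st with frames = { return_dest; saved_params } :: st.frames }

(* pop_frame removes the topmost frame, failing on an empty frame stack. *)
let pop_frame (st : state) : (frame * state, string) result =
  match st.frames with
  | [] -> Error "pop_frame: empty frame stack"
  | f :: rest -> Ok (f, { st with frames = rest })
\end{lstlisting}
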
     605We bundle \texttt{params} and \texttt{sem\_params} together into a single record.
     606This will be used in the function \texttt{eval\_statement} which executes a single statement of the joint language:
     607\begin{lstlisting}
     608record sem_params2 (globals: list ident): Type[1] :=
     609{
     610  p2 :> params globals;
     611  more_sparams2 :> more_sem_params2 globals p2
     612}.
     613\end{lstlisting}
     614\noindent
     615The \texttt{state} record holds the current state of the interpreter:
     616\begin{lstlisting}
     617record state (p: sem_params): Type[0] :=
     618{
     619  st_frms: framesT ? p;
     620  pc: address;
     621  sp: pointer;
     622  isp: pointer;
     623  carry: beval;
     624  regs: regsT ? p;
     625  m: bemem
     626}.
     627\end{lstlisting}
      628Here \texttt{st\_frms} represents the stack frames, \texttt{pc} the program counter, \texttt{sp} the stack pointer, \texttt{isp} the internal stack pointer, \texttt{carry} the carry flag, \texttt{regs} the registers (hardware and pseudoregisters) and \texttt{m} external RAM.
     629Note that we have two stack pointers, as we have two stacks: the physical stack of the MCS-51 microprocessor, and an emulated stack in external RAM.
      630The MCS-51's own stack is minuscule; therefore it is usual to emulate a much larger, more useful stack in external RAM.
     631We require two stack pointers as the MCS-51's \texttt{PUSH} and \texttt{POP} instructions manipulate the physical stack, and not the emulated one.
     632
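A small OCaml sketch (illustrative only) of why two pointers are needed: the hardware \texttt{PUSH} and \texttt{POP} move \texttt{isp} within the small internal RAM, whereas spills to the emulated stack move \texttt{sp} within external RAM.
\begin{lstlisting}
(* Two stacks, hence two stack pointers.  On the MCS-51 the hardware PUSH
   increments SP and writes into internal RAM; the growth direction of the
   emulated stack is chosen arbitrarily here. *)
type state = {
  isp : int;               (* hardware stack pointer, internal RAM *)
  sp  : int;               (* emulated stack pointer, external RAM *)
  internal_ram : Bytes.t;
  external_ram : Bytes.t;
}

let push_internal (st : state) (v : char) : state =
  let isp = st.isp + 1 in
  Bytes.set st.internal_ram isp v;
  { st with isp }

let push_external (st : state) (v : char) : state =
  let sp = st.sp - 1 in
  Bytes.set st.external_ram sp v;
  { st with sp }
\end{lstlisting}
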
     633We use the function \texttt{eval\_statement} to evaluate a single joint statement:
     634\begin{lstlisting}
     635definition eval_statement:
     636  $\forall$globals: list ident.$\forall$p:sem_params2 globals.
     637    genv globals p $\rightarrow$ state p $\rightarrow$ IO io_out io_in (trace $\times$ (state p)) :=
     638...
     639\end{lstlisting}
     640We examine the type of this function.
     641Note that it returns a monadic action, \texttt{IO}, denoting that it may have an IO \emph{side effect}, where the program reads or writes to some external device or memory address.
     642Monads and their use are further discussed in Subsection~\ref{subsect.use.of.monads}.
     643Further, the function returns a new state, updated by the single step of execution of the program.
     644Finally, a \emph{trace} is also returned, which records externally observable `events', such as the calling of external functions and the emission of cost labels.
     645
     646%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     647% SECTION.                                                                    %
     648%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
     649\subsection{Use of monads}
     650\label{subsect.use.of.monads}
     651
      652Monads are a categorical notion that has recently gained considerable traction in functional programming circles.
     653In particular, it was noted by Moggi that monads could be used to sequence \emph{effectful} computations in a pure manner.
      654Here, `effectful computations' cover a lot of ground, from writing to files, to generating fresh names, to updating an ambient notion of state.
     655
     656A monad can be characterised by the following:
     657\begin{itemize}
     658\item
     659A data type, $M$.
     660For instance, the \texttt{option} type in OCaml or Matita.
     661\item
     662A way to `inject' or `lift' pure values into this data type (usually called \texttt{return}).
     663We call this function \texttt{return} and say that it must have type $\alpha \rightarrow M \alpha$, where $M$ is the name of the monad.
     664In our example, the `lifting' function for the \texttt{option} monad can be implemented as:
     665\begin{lstlisting}
     666let return x = Some x
     667\end{lstlisting}
     668\item
     669A way to `sequence' monadic functions together, to form another monadic function, usually called \texttt{bind}.
     670Bind has type $M \alpha \rightarrow (\alpha \rightarrow M \beta) \rightarrow M \beta$.
     671We can see that bind `unpacks' a monadic value, applies a function after unpacking, and `repacks' the new value in the monad.
     672In our example, the sequencing function for the \texttt{option} monad can be implemented as:
     673\begin{lstlisting}
     674let bind o f =
     675  match o with
      676  | None -> None
      677  | Some s -> f s
     678\end{lstlisting}
     679\item
     680A series of algebraic laws that relate \texttt{return} and \texttt{bind}, ensuring that the sequencing operation `does the right thing' by retaining the order of effects.
      681These \emph{monad laws}, stated below, should also be useful in reasoning about monadic computations in the proof of correctness of the compiler.
     682\end{itemize}
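For reference, the laws alluded to in the last item are the standard ones, written here in the notation introduced above:
\[
\begin{array}{rcl@{\qquad}l}
\mathtt{bind}\ (\mathtt{return}\ a)\ f & = & f\ a & \mbox{(left identity)}\\
\mathtt{bind}\ m\ \mathtt{return} & = & m & \mbox{(right identity)}\\
\mathtt{bind}\ (\mathtt{bind}\ m\ f)\ g & = & \mathtt{bind}\ m\ (\lambda x.\ \mathtt{bind}\ (f\ x)\ g) & \mbox{(associativity)}
\end{array}
\]
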
     683In the semantics of both front and back-end intermediate languages, we make use of monads.
     684This monadic infrastructure is shared between the front-end and back-end languages.
     685
     686In particular, an `IO' monad, signalling the emission of a cost label, or the calling of an external function, is heavily used in the semantics of the intermediate languages.
     687Here, the monad's sequencing operation ensures that cost label emissions and function calls are maintained in the correct order.
     688We have already seen how the \texttt{eval\_statement} function of the joint language is monadic, with type:
     689\begin{lstlisting}
     690definition eval_statement:
     691  $\forall$globals: list ident.$\forall$p:sem_params2 globals.
     692    genv globals p $\rightarrow$ state p $\rightarrow$ IO io_out io_in (trace $\times$ (state p)) :=
     693...
     694\end{lstlisting}
     695If we examine the body of \texttt{eval\_statement}, we may also see how the monad sequences effects.
     696For instance, in the case for the \texttt{LOAD} statement, we have the following:
     697\begin{lstlisting}
     698definition eval_statement:
     699  $\forall$globals: list ident. $\forall$p:sem_params2 globals.
     700    genv globals p $\rightarrow$ state p $\rightarrow$ IO io_out io_in (trace $\times$ (state p)) :=
     701  $\lambda$globals, p, ge, st.
     702  ...
     703  match s with
     704  | LOAD dst addrl addrh ⇒
     705    ! vaddrh $\leftarrow$ dph_retrieve ... st addrh;
     706    ! vaddrl $\leftarrow$ dpl_retrieve ... st addrl;
     707    ! vaddr $\leftarrow$ pointer_of_address $\langle$vaddrl,vaddrh$\rangle$;
     708    ! v $\leftarrow$ opt_to_res ... (msg FailedLoad) (beloadv (m ... st) vaddr);
     709    ! st $\leftarrow$ acca_store p ... dst v st;
     710    ! st $\leftarrow$ next ... l st ;
     711      ret ? $\langle$E0, st$\rangle$
     712\end{lstlisting}
     713Here, we employ a certain degree of syntactic sugaring.
     714The syntax
     715\begin{lstlisting}
     716  ...
     717! vaddrh $\leftarrow$ dph_retrieve ... st addrh;
     718! vaddrl $\leftarrow$ dpl_retrieve ... st addrl;
     719  ...
     720\end{lstlisting}
     721is sugaring for the \texttt{IO} monad's binding operation.
     722We can expand this sugaring to the following much more verbose code:
     723\begin{lstlisting}
     724  ...
     725  bind (dph_retrieve ... st addrh) ($\lambda$vaddrh. bind (dpl_retrieve ... st addrl)
     726    ($\lambda$vaddrl. ...))
     727\end{lstlisting}
      728Note also that the function \texttt{ret} implements the `lifting', or return, function of the \texttt{IO} monad.
     729
     730We believe the sugaring for the monadic bind operation makes the program much more readable, and therefore easier to reason about.
     731In particular, note that the functions \texttt{dph\_retrieve}, \texttt{pointer\_of\_address}, \texttt{acca\_store} and \texttt{next} are all monadic.
     732
     733Note, however, that inside this monadic code, there is also another monad hiding.
     734The \texttt{res} monad signals failure, along with an error message.
     735The monad's sequencing operation ensures the order of error messages does not get rearranged.
     736The function \texttt{opt\_to\_res} lifts an option type into this monad, with an error message to be used in case of failure.
     737The \texttt{res} monad is then coerced into the \texttt{IO} monad, ensuring the whole code snippet typechecks.
     738
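A hedged OCaml sketch of this failure monad and of an \texttt{opt\_to\_res}-style lifting is given below; the definitions are illustrative and are not the Matita ones.
\begin{lstlisting}
(* A failure monad carrying an error message, and the lifting of an
   option into it. *)
type 'a res = OK of 'a | Error of string

let ret (x : 'a) : 'a res = OK x

let bind (m : 'a res) (f : 'a -> 'b res) : 'b res =
  match m with
  | OK x -> f x
  | Error msg -> Error msg      (* the first error message is kept *)

(* opt_to_res supplies the message to be used when the option is None. *)
let opt_to_res (msg : string) (o : 'a option) : 'a res =
  match o with
  | Some x -> OK x
  | None -> Error msg
\end{lstlisting}
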
     739\subsection{Memory models}
     740\label{subsect.memory.models}
     741
     742Currently, the semantics of the front and back-end intermediate languages are built around two distinct memory models.
     743The front-end languages reuse the CompCert 1.6 memory model, whereas the back-end languages employ a version tailored to their needs.
     744This split between the memory models reflects the fact that the front-end and back-end languages have different requirements from their memory models.
     745
     746In particular, the CompCert 1.6 memory model places quite heavy restrictions on where in memory one can read from.
     747To read a value in this memory model, you must supply an address, complete with the size of `chunk' to read following that address.
     748The read is only successful if you attempt to read at a genuine `value boundary', and read the appropriate amount of memory for that value.
     749As a result, with that memory model you are unable to read the third byte of a 32-bit integer value directly from memory, for instance.
     750This has some consequences for the compiler, namely an inability to write a \texttt{memcpy} routine.
     751
     752However, the CerCo memory model operates differently, as we need to move data `piecemeal' between stacks in the back-end of the compiler.
     753As a result, the back-end memory model allows one to read data at any memory location, not just on value boundaries.
      754This has the advantage that we can successfully give a semantics to a \texttt{memcpy} routine in the back-end of the CerCo compiler (remembering that \texttt{memcpy} is nothing more than `read a byte, copy a byte' repeated in a loop), an advantage over CompCert.  However, the front-end of CerCo cannot, because its memory model and values are similar to those of CompCert 1.6.
     755
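To illustrate the point about byte-level access, here is a minimal OCaml sketch (with a hypothetical memory representation) of the `read a byte, copy a byte' loop that the back-end memory model makes expressible:
\begin{lstlisting}
(* A byte-addressable memory: any offset may be read or written, with no
   notion of value boundaries. *)
type memory = Bytes.t

let load_byte (m : memory) (addr : int) : char = Bytes.get m addr
let store_byte (m : memory) (addr : int) (v : char) : unit = Bytes.set m addr v

(* memcpy is nothing more than a loop of byte-granularity loads and stores. *)
let memcpy (m : memory) ~(dst : int) ~(src : int) ~(len : int) : unit =
  for i = 0 to len - 1 do
    store_byte m (dst + i) (load_byte m (src + i))
  done
\end{lstlisting}
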
     756More recent versions of CompCert's memory model have evolved in a similar direction, with a byte-by-byte representation of memory blocks.  However, there remains an important difference in the handling of pointer values in the rest of the formalisation.  In particular, in CompCert 1.10 only complete pointer values can be loaded in all of the languages in the compiler, whereas in CerCo we need to represent individual bytes of a pointer in the back-end to support our 8-bit target architecture.
     757
     758Right now, the two memory models are interfaced during the translation from RTLabs to RTL.
     759It is an open question whether we will unify the two memory models, using only the back-end, bespoke memory model throughout the compiler, as the CompCert memory model seems to work fine for the front-end, where such byte-by-byte copying is not needed.
     760However, should we decide to port the front-end to the new memory model, it has been written in such an abstract way that doing so would be relatively straightforward.
     761
     762