source: Papers/cpp-asm-2012/reviews.txt @ 2376

Last change on this file since 2376 was 2337, checked in by mulligan, 7 years ago

Removed the referee's comments that have already been addressed from the reviews.txt file.

----------------------- REVIEW 1 ---------------------
1) The main weakness of the paper is the detailed presentation of lemma statements.
I believe it would be better to focus on the high-level insights, from which competent readers could then reconstruct the lemma statements and proofs.  Following this advice would open up substantial space, which I believe could profitably be used to combine this paper with the simultaneous CPP submission by the same group.
It would be nice to have a single conference-length paper that explains all the key insights.

2) One contribution of this paper is the novel way of reasoning about assembly programs that may do address arithmetic.
From the abstract, it is not clear that the current development does not support such safe arithmetic; rather, the approach is only conjectured to extend easily to that kind of code.
I think this is a reasonable position, but the speculative nature of the motivation ought to be made clearer earlier.

Page 4, "preinstruction" definition: why does "ADD" get two arguments, instead of merging the two "fixed-length vectors"?  Or perhaps I misunderstand what this last term means.

----------------------- REVIEW 2 ---------------------
1) The program committee noted that the research described here was split between two papers, "On the correctness of an optimising assembler for the MCS-51 microcontroller" and "On the correctness of a branch displacement algorithm".  After some discussion, we decided to accept only one of these two papers.  To strengthen the one accepted paper, we recommend (but do not require) that the key results from both submissions be presented in the one accepted paper.  This would, as the other reviews point out, lead to "a higher concentration of interesting results" with more "focus on the high-level insights".

2) The current implementation appears to be pretty restrictive in what operations are allowed on symbols.  As far as I could see, you can push and pop them from the stack (particularly in call/return instructions), but that's about it.  It appears that you can't compare symbols for equality (e.g. to test two function pointers for equality in C), for example.  In fact, I'm not sure you can even move function pointers around between variables (at least, I couldn't find the place that would update the map during moves).  Do you allow jump tables of code addresses to implement C switch statements?  These would be good examples to motivate the definitions in section 3.5, which is currently pretty hard to understand.  (If you want a more challenging example, you might consider garbage collection tables that map code addresses to GC data; some collectors do binary search on code addresses to find entries in the tables.)

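To make the jump-table question above concrete, here is a minimal sketch (the names, addresses, and sizes are illustrative, not taken from the paper) of why code addresses stored in data must be tracked symbolically: a C switch compiled to a jump table stores one code address per case, and every entry changes whenever jump expansion moves the handlers.

```python
# Hypothetical sketch: a jump table holds symbolic labels; the assembler
# can resolve them to concrete addresses only once all sizes are fixed.

# case index -> symbolic label of the handler
jump_table = ["case_0", "case_1", "case_2"]

# Label addresses before and after the assembler widens a short jump
# (2 bytes) into a long jump (3 bytes) somewhere before the handlers.
labels_before = {"case_0": 0x0100, "case_1": 0x0110, "case_2": 0x0120}
labels_after  = {"case_0": 0x0101, "case_1": 0x0111, "case_2": 0x0121}

def resolve(table, labels):
    """Replace each symbolic entry by the concrete address of its label."""
    return [labels[name] for name in table]

# The same symbolic table denotes different concrete bytes under the two
# layouts, so a correctness statement must relate them via the address map.
assert resolve(jump_table, labels_before) != resolve(jump_table, labels_after)
```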
3) The definition of assembly-level state (with symbolic addresses or a map that locates the symbolic addresses) and machine-level state (bytes, with no symbols) should be central to the paper.  Unfortunately, these concepts are buried in section 3.5's explanations of internal_pseudo_address_map, low_internal_ram_of_pseudo_low_internal_ram, status_of_pseudo_status, etc., which are too dense, appear too late in the paper, and rely on knowledge of the MCS-51 that the paper only hints at (e.g. low and high internal memory).  Furthermore, the definitions of Status and PseudoStatus never appear at all.  I'd suggest starting with the definitions of registers, memory, internal_pseudo_address_map, and Status/PseudoStatus, and then working through the details of the assembler later in the paper.  This would ensure that the main theorem is understandable.  Compared to the main theorem, the other lemmas are less critical to the paper.  Section 3.4, in particular, could be greatly shortened; much of it is devoted to showing how machine instructions and bytes are concatenated together, which I found less interesting and more tedious than the paper's other sections.

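A hedged sketch of the distinction this point asks to see up front.  The names echo the paper's Status/PseudoStatus and status_of_pseudo_status, but the fields and the byte-level details (e.g. 16-bit addresses split across two RAM cells, as tracked by internal_pseudo_address_map) are simplified away; this is an illustration, not the paper's definition.

```python
from dataclasses import dataclass
from typing import Dict, Union

@dataclass
class Status:
    """Machine-level state: the pc and RAM hold only concrete bytes."""
    pc: int
    low_internal_ram: Dict[int, int]              # address -> byte

@dataclass
class PseudoStatus:
    """Assembly-level state: a RAM cell may instead hold a symbolic label."""
    ppc: int                                      # pseudo program counter
    low_internal_ram: Dict[int, Union[int, str]]  # address -> byte or label

def status_of_pseudo_status(ps, sigma, labels):
    """Erase symbols: resolve each label to its concrete address, and map
    the pseudo program counter through sigma."""
    ram = {a: (labels[v] if isinstance(v, str) else v)
           for a, v in ps.low_internal_ram.items()}
    return Status(pc=sigma[ps.ppc], low_internal_ram=ram)
```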
4) How do you model I/O operations?
5) How do you plan to run the assembler?  Can you extract an executable assembler from the Matita implementation?
6) Right now, the proofs and specifications are mixed together in the Matita files.  Is there a way to isolate and quantify the definitions that you trust for correctness (the "trusted computing base")?

7) "We first attempted to synthesise[sic] a solution bottom up...."  This paragraph is vague.  Either omit the paragraph, or be more specific about the "solutions" under consideration.

8) Section 3.1, definition of execute_1: say what "cm" stands for.  (In fact, throughout the paper, try to say what the variables are as they are introduced.)

9) Section 3.4: "is obtained by composition of expand_pseudo_instruction and assembly_1_pseudoinstruction":  "composition" usually means the output of one function is the input to the other function.

10) Section 3.4, lemma fetch_assembly_pseudo: the variable "instructions" is not in scope (did you mean "instrs"?).

11) Section 3.4: "Here, l is the number of machine code instructions".  "pc_plus_len" is computed by adding pc to l, so wouldn't "number of bytes" make more sense?

----------------------- REVIEW 3 ---------------------

The term "total correctness" used in Sec. 3.4 and 3.5 is confusing,
since it has already been used in Hoare logic with a standard meaning.


----------------------- REVIEW 4 ---------------------
  (i)   The presentation is needlessly complicated and difficult to
        understand; while individual sentences are well-written, they
        sometimes lack precision,

  (ii)  The most delicate and interesting part of the problem seems to be
        the translation of jump offsets, but the algorithm for this has been
        submitted as a companion paper; it may have been better to submit a
        single, self-contained paper with a higher concentration of
        interesting results.

  (iii) The notion of `well behaved' assembly programs is either not
        described clearly enough, or not adequately developed. It seems to
        rely on `run-time' checks for correct address manipulation. In
        fact, these checks seem to be made at `emulation-time' (since the
        generated code is not changed to implement the checks), and it is
        not clear whether (or how) this covers all possible program
        behaviours.

1. The title talks about an _optimising_ assembler, but the only
   optimisation seems to be in calculating branch displacements and this
   question is only partially resolved in the paper. Can assemblers be
   expected to perform any other optimisations?

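For context on why this one optimisation is nonetheless delicate, here is a toy version of the standard branch-displacement iteration (the instruction sizes and range bound are illustrative, not the MCS-51's): every jump starts short, any jump whose displacement overflows is widened, and widening can push other targets out of range, so the process repeats until it stabilises.

```python
SHORT_RANGE = 127  # max displacement a short jump can encode (illustrative)

def fix_displacements(jumps, n_instrs):
    """jumps: {instr_index: target_index}.
    Returns {instr_index: 'short' | 'long'}.

    Toy sizes: non-jump instructions are 1 byte, short jumps 2, long jumps 3.
    Widening is monotone (short -> long only), so the loop terminates."""
    size = {i: (2 if i in jumps else 1) for i in range(n_instrs)}
    enc = {i: "short" for i in jumps}
    changed = True
    while changed:
        changed = False
        # recompute each instruction's address under the current sizes
        addr, a = {}, 0
        for i in range(n_instrs):
            addr[i] = a
            a += size[i]
        for i, t in jumps.items():
            disp = addr[t] - (addr[i] + size[i])  # relative to next instruction
            if enc[i] == "short" and abs(disp) > SHORT_RANGE:
                enc[i], size[i] = "long", 3       # may invalidate other jumps
                changed = True
    return enc
```

In this toy, a jump over ~200 bytes of code is forced long while a nearby target stays short; choosing the encodings optimally is the harder question treated in the companion submission.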
2. The abstract talks about "the efficient expansion of
   pseudoinstructions---particularly jumps---is complex", but given that
   this is the sole optimisation, it would seem more accurate to say
   "...---namely jumps". Furthermore, the word "complex" is perhaps
   overreaching.

3. Some parts of the text lack precision. Namely:

   a. p.2: "what one hand giveth, the other taketh away"---it seems rather
      to be a question of a straight-out engineering tradeoff in processor
      choice. Furthermore, the MCS51 is memory constrained by the fact that
      it is an 8-bit microcontroller; but there would seem to be sufficient
      memory for the tasks it is expected to perform. This does not mean
      that the calculation of jump instructions is not a worthy problem,
      but I question whether it is as critical as stated.

   b. p.2: The last paragraph states that the proof is effected in "a
      rather profound way", and that the proof of correctness may descend
      into a "diabolical quagmire". In my opinion, these literary
      flourishes waste space that could instead be used for more accurate
      descriptions.

   e. p.6: "The function assembly_1_pseudoinstruction ... is essentially
      the composition of the two passes": is it or isn't it?

   f. p.6: "... markedly different ways": well, all of the instructions jump
      to one place or another, so they're not that different!

   h. p.14: "this policy is the only possible policy _in theory_ that can
      preserve the semantics..."; what does this mean exactly? It would seem
      that the approach being presented is a counterexample to this
      statement.

4. The question of the MCS51's precise and predictable timing model is quite
   interesting, but not discussed thoroughly in the paper. The second page
   talks about "timing characteristics" and I/O but it is not completely
   clear what is meant. Choosing one branching strategy over another will
   obviously change the cycle-level timing of a program. The paper does not
   discuss this in detail, nor whether it is important or not. Apart from
   that, how else could the timing be changed by choices in the assembler?

   The paper talks about "costing" in the encompassing CerCo project, but
   few precise details are given. In particular, whenever "cost" is used in
   the paper, does it just refer to the number of cycles required to execute
   a sequence of instructions? If so, why use the more abstract term?

5. The paper does not mention the issue of placing assembler subroutines in
   memory. Is this not an issue? Could placement choices affect code size
   (from jump instructions) significantly?

6. p.4: It seems to me that the paper would have been much easier to
   understand had it included more "sign posting". For instance, the start of
   section 3 might state that there will be two models: one where jump
   destinations are specified using relative offsets, and another where they
   are given as labels. And, in fact, that the whole point is to translate
   accurately from one to the other. The paper might then summarise
   reference [4], which is very nicely written, to motivate the issue of
   calculating branch displacements.

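The suggested signposting could be illustrated by a miniature of the two models and the translation between them (the syntax below is invented for illustration, not the paper's): jump destinations given as labels on one side, relative offsets on the other.

```python
# Toy pseudo-assembly: labels name destinations; "JMP" takes a label.
pseudo = [("MOV", "A", 1), ("JMP", "loop"), ("loop:",),
          ("INC", "A"), ("JMP", "loop")]

def assemble(prog, jmp_size=2, other_size=1):
    """Translate label-based jumps into offsets relative to the following
    instruction (toy fixed sizes, so no fixpoint computation is needed)."""
    # pass 1: assign a concrete address to every instruction and label
    addr, labels = 0, {}
    for ins in prog:
        if ins[0].endswith(":"):
            labels[ins[0][:-1]] = addr
        else:
            addr += jmp_size if ins[0] == "JMP" else other_size
    # pass 2: replace each label by a displacement from the next instruction
    addr, out = 0, []
    for ins in prog:
        if ins[0].endswith(":"):
            continue                       # labels produce no code
        addr += jmp_size if ins[0] == "JMP" else other_size
        out.append(("JMP", labels[ins[1]] - addr) if ins[0] == "JMP" else ins)
    return out
```

The whole point, as the review puts it, is that this translation be accurate; once jumps can have several sizes, pass 1 and pass 2 become mutually dependent.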
   It might also state that the two other issues are the correctness of
   assembly encoding/fetching, and a notion of "well behaved" programs that
   modify addresses in "reasonable" ways.

7. p.5: It would be nice to see the type of the "preamble"; is it just a
   DATA section?

8. The paper does not discuss the complete tool chain. Does the verified
   development parse MCS51 assembler and spit out a binary image?

9. p.5: Why is "Word" and not "Identifier" or "nat" used to index
   pseudoinstructions? Or maybe even a type synonym so there is no risk of
   confusion with the type used to index instructions.

   For instance, it would be nice if the statement "The function sigma maps
   pseudo program counters to program counters" were better reflected in the
   type of sigma.

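The suggestion might look like this in a language with cheap type distinctions (a sketch; the types and the construction of sigma are illustrative, not the paper's Matita definitions):

```python
from typing import Dict, NewType

PseudoPC = NewType("PseudoPC", int)  # index of a pseudoinstruction
PC = NewType("PC", int)              # byte address in the assembled code

def make_sigma(sizes: Dict[PseudoPC, int]) -> Dict[PseudoPC, PC]:
    """Build sigma from the byte size of each expanded pseudoinstruction.
    The signature now literally says: sigma maps pseudo program counters
    to program counters."""
    sigma: Dict[PseudoPC, PC] = {}
    addr = 0
    for ppc in sorted(sizes):
        sigma[PseudoPC(ppc)] = PC(addr)
        addr += sizes[ppc]
    return sigma
```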
10. p.6: I did not really understand what the paragraph "In contrast to other
    approaches, we do not perform any kind of symbolic execution..." was
    trying to say.

11. pp.7--9: this text talks about "program counters", when I think it
    means "addresses" (i.e., values that the program counter may take).

12. It is confusing that the variable called "policy" is only one part of
    the concept "policy".

13. p.8: I didn't understand why it is sometimes necessary to force the
    expansion of jumps to the largest size possible.

14. p.11: states that "we must detect (at run time) programs that manipulate
    addresses in well behaved ways". But what does this mean exactly? Given
    that the output program is not augmented with checks, it seems that the
    authors mean "at emulation time". But how can this be exhaustive (for
    all input values and for non-terminating executions)? Are there some
    kind of necessary conditions for having a well-behaved program? Must
    users provide a proof of well-behavedness for each program before they
    can be sure that its assembly is correct? Isn't this rather a typing
    issue? Is there any relation with the work of Greg Morrisett et al. on
    Typed Assembly Language?

15. The paper states that the work may be usable in the context of verified
    operating system kernels. This is a point worth discussing, but the
    text could be more precise. The seL4 kernel requires memory protection
    but the MCS51 has no MMU. Furthermore, the verification was specific to
    the ARM assembly language, where it would seem that the issue of branch
    displacement is less significant.

    Similarly, the paper discusses compilers like CompCert and the issue of
    well-formedness. Again, this is a worthy point, but the discussion seems
    to neglect the possibility that well-formedness may be guaranteed by the
    conversion from the high-level language, i.e. that all C programs may
    compile into "well behaved" assembly programs. Is this true?

16. The related work section touches on "timing properties of an assembly
    program", but this issue is not treated in depth by the paper, and, in
    fact, these sentences seem out of place.

18. Research Questions

   a. Did you consider "running" your model in parallel with a real or
      emulated MCS-51 to validate its correctness?
      (As in Peter Sewell et al.'s work on TCP/IP and weak memory
       models, where they compare their models against real
       implementations.)

   b. Rather than verifying a branch displacement algorithm, did you
      consider showing refinement between the input (assembler with labels) and
      the output (modified assembler with offsets)? That way, the algorithm
      could use whatever strategy it liked and the results would still be
      verified. From my understanding, this is one of the approaches taken
      in Xavier Leroy et al.'s CompCert project.

x. Minor typos / issues

   c. p.4: It would be helpful if the abbreviated definitions of
       preinstruction, instruction, and pseudoinstruction included more
       elements in common with each other and with the other examples
       (notably DEC).