.. highlight:: none


.. index::
   pair: tests; design

.. _design-tests:


Tests
=====

.. mps:prefix:: design.mps.tests


Introduction
------------

:mps:tag:`intro` This document contains a guide to the Memory Pool System
tests.

:mps:tag:`readership` This document is intended for any MPS developer.


Running tests
-------------

:mps:tag:`run` Run these commands::

    cd code
    make -f <makefile> VARIETY=<variety> <target>  # Unix
    nmake /f <makefile> VARIETY=<variety> <target> # Windows

where ``<makefile>`` is the appropriate makefile for the platform (see
`manual/build.txt`_), ``<variety>`` is the variety (see
design.mps.config.var.codes_) and ``<target>`` is the collection of tests
(see :mps:ref:`.target` below). For example::

    make -f lii6ll VARIETY=cool testrun

If ``<variety>`` is omitted, tests are run in both the cool and hot
varieties.

.. _design.mps.config.var.codes: config.html#design.mps.config.var.codes
.. _manual/build.txt: https://www.ravenbrook.com/project/mps/master/manual/build.txt


Test targets
------------

:mps:tag:`target` The makefiles provide the following targets for common
sets of tests:

:mps:tag:`target.testall` The ``testall`` target runs all test cases (even
if known to fail).

:mps:tag:`target.testrun` The ``testrun`` target runs the "smoke tests",
a subset of quick checks that the MPS is working. These run quickly
enough for it to be practical to run them every time the MPS is built.

:mps:tag:`target.testci` The ``testci`` target runs the continuous
integration tests, the subset of tests that are expected to pass in
full-featured build configurations.

:mps:tag:`target.testansi` The ``testansi`` target runs the subset of the
tests that are expected to pass in the generic ("ANSI") build
configuration (see design.mps.config.opt.ansi_).

:mps:tag:`target.testpollnone` The ``testpollnone`` target runs the subset
of the tests that are expected to pass in the generic ("ANSI") build
configuration (see design.mps.config.opt.ansi_) with the option
:c:macro:`CONFIG_POLL_NONE` (see design.mps.config.opt.poll_).

.. _design.mps.config.opt.ansi: config.html#design.mps.config.opt.ansi
.. _design.mps.config.opt.poll: config.html#design.mps.config.opt.poll

:mps:tag:`target.testratio` The ``testratio`` target compares the
performance of the hot and rash varieties. See :mps:ref:`.ratio`.

:mps:tag:`target.testscheme` The ``testscheme`` target builds the example
Scheme interpreter (example/scheme) and runs its test suite.

:mps:tag:`target.testmmqa` The ``testmmqa`` target runs the tests in the
MMQA test suite. See :mps:ref:`.mmqa`.


Test features
-------------

:mps:tag:`randomize` Each time a test case is run, it randomly chooses some
of its parameters (for example, the sizes of objects, or how many
links to create in a graph of references). This allows a fast test
to cover many cases over time.
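
For example, a test might pick its parameters like this. This is an
illustrative sketch using the C standard library's ``rand()``; the
real tests draw their pseudo-random numbers from testlib, and the
names here are invented::

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
      /* In a real test the seed comes from environmental data; see
         .randomize.seed below. */
      unsigned seed = 12345;
      srand(seed);

      /* Randomly chosen test parameters. */
      size_t objectSize = 16 + (size_t)(rand() % 1024);
      int linkCount = rand() % 100;

      printf("seed=%u size=%zu links=%d\n", seed, objectSize, linkCount);
      return 0;
    }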

:mps:tag:`randomize.seed` The random numbers are chosen pseudo-randomly
based on a seed initialized from environmental data (the time and the
processor cycle count). The seed is reported at test startup, for
example::

    code$ xci6ll/cool/apss
    xci6ll/cool/apss: randomize(): choosing initial state (v3): 2116709187.
    ...
    xci6ll/cool/apss: Conclusion: Failed to find any defects.

Here, the number 2116709187 is the random seed.

:mps:tag:`randomize.specific-seed` Each test can be run with a specified seed
by passing the seed on the command line, for example::

    code$ xci6ll/cool/apss 2116709187
    xci6ll/cool/apss: randomize(): resetting initial state (v3) to: 2116709187.
    ...
    xci6ll/cool/apss: Conclusion: Failed to find any defects.

:mps:tag:`randomize.repeatable` This ensures that the single-threaded tests
are repeatable. (Multi-threaded tests are not repeatable even if the
same seed is used; see job003719_.)

.. _job003719: https://www.ravenbrook.com/project/mps/issue/job003719/
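
The repeatability property can be modelled in a few lines of C. This
uses a toy linear congruential generator standing in for the test
library's pseudo-random source (it is not the MPS's actual generator):
two generators started from the same seed produce identical sequences,
so a single-threaded test replays identically::

    #include <assert.h>
    #include <stdio.h>

    /* Toy linear congruential generator (illustrative only). */
    static unsigned long lcg(unsigned long *state)
    {
      *state = *state * 1103515245UL + 12345UL;
      return *state;
    }

    int main(void)
    {
      unsigned long a = 2116709187UL, b = 2116709187UL; /* same seed */
      int i;
      for (i = 0; i < 1000; ++i)
        assert(lcg(&a) == lcg(&b)); /* identical sequences */
      printf("same seed, same sequence\n");
      return 0;
    }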


Test list
---------

See `manual/code-index`_ for the full list of automated test cases.

.. _manual/code-index: https://www.ravenbrook.com/project/mps/master/manual/html/code-index.html

:mps:tag:`test.finalcv` Registers objects for finalization, makes them
unreachable, deregisters them, etc. Churns to provoke minor (nursery)
collection.

:mps:tag:`test.finaltest` Creates a large binary tree, and registers every
node. Drops the top reference, requests collection, and counts the
finalization messages.

:mps:tag:`test.zcoll` Collection scheduling, and collection feedback.

:mps:tag:`test.zmess` Message lifecycle and finalization messages.


Test database
-------------

:mps:tag:`db` The automated tests are described in the test database
(tool/testcases.txt).

:mps:tag:`db.format` This is a self-documenting plain-text database which
gives for each test case its name and an optional set of features. For
example the feature ``=P`` means that the test case requires polling
to succeed, and therefore is expected to fail in build configurations
without polling (see design.mps.config.opt.poll_).
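
To make this concrete, a hypothetical excerpt might look like the
following (the test name ``newtest`` and the exact layout are invented
for illustration; see tool/testcases.txt itself for the authoritative,
self-documenting format)::

    apss
    finalcv
    newtest =P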

:mps:tag:`db.format.simple` The format must be very simple because the test
runner on Windows is written as a batch file (.bat), in order to avoid
depending on any tools that did not come as standard with Windows XP,
and batch files are inflexible. (But note that we no longer support
Windows XP, so it would now be possible to rewrite the test runner in
PowerShell if we thought that made sense.)

:mps:tag:`db.testrun` The test runner (tool/testrun.sh on Unix or
tool/testrun.bat on Windows) parses the test database to work out
which tests to run according to the target. For example the
``testpollnone`` target must skip all test cases with the ``P``
feature.
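
The selection logic amounts to a simple filter. A sketch in C
(assuming, purely for illustration, one test name per line followed by
optional ``=X`` feature flags; the real runner is a shell or batch
script parsing tool/testcases.txt)::

    #include <stdio.h>
    #include <string.h>

    /* Return nonzero if the database line carries feature `flag'
       (e.g. 'P'), so that a target like testpollnone should skip it. */
    static int has_feature(const char *line, char flag)
    {
      const char *eq = strchr(line, '=');
      return eq != NULL && strchr(eq, flag) != NULL;
    }

    int main(void)
    {
      const char *db[] = { "apss", "newtest =P", "zmess" };
      size_t i;
      for (i = 0; i < sizeof db / sizeof db[0]; ++i)
        if (!has_feature(db[i], 'P'))
          printf("run %s\n", db[i]);   /* prints: run apss, run zmess */
      return 0;
    }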


Test runner
-----------

:mps:tag:`runner.req.automated` The test runner must execute without user
interaction, so that it can be used for continuous integration.

:mps:tag:`runner.req.output.pass` Test cases are expected to pass nearly all the
time, and in these cases we almost never want to see the output, so
the test runner must suppress the output for passing tests.

:mps:tag:`runner.req.output.fail` However, if a test case fails then the
test runner must preserve the output from the failing test, including
the random seed (see :mps:ref:`.randomize.seed`), so that this can be analyzed
and the test repeated. Moreover, it must print the output from the
failing test, so that if the test is being run on a `continuous
integration`_ system (see :mps:ref:`.ci`), then the output of the failing
tests is included in the failure report. (See job003489_.)

.. _job003489: https://www.ravenbrook.com/project/mps/issue/job003489/


Performance test
----------------

:mps:tag:`ratio` The ``testratio`` target checks that the hot variety
is not too much slower than the rash variety. A failure of this test
usually indicates that there are assertions on the critical path
using :c:macro:`AVER` instead of :c:macro:`AVER_CRITICAL` (and so on).
This works by running gcbench for the AMC pool class and djbench for
the MVFF pool class, in the hot variety and the rash variety,
computing the ratio of CPU time taken in the two varieties, and
testing that this falls under an acceptable limit.
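
The comparison itself is simple arithmetic, as this sketch shows (the
times and the limit here are invented for illustration; the makefile
defines the actual acceptable limit)::

    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
      double hot_cpu_time = 1.18;  /* seconds, as from /usr/bin/time */
      double rash_cpu_time = 1.02;
      double limit = 1.5;          /* illustrative acceptable limit */

      double ratio = hot_cpu_time / rash_cpu_time;
      printf("hot/rash CPU time ratio: %.2f\n", ratio);

      /* If this fails, suspect AVER where AVER_CRITICAL was needed. */
      assert(ratio < limit);
      return 0;
    }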

:mps:tag:`ratio.cpu-time` Note that we use the CPU time (reported by
``/usr/bin/time``) and not the elapsed time (as reported by the
benchmark) because we want to be able to run this test on continuous
integration machines that might be heavily loaded.

:mps:tag:`ratio.platform` This target is currently supported only on Unix
platforms using GNU Makefiles.


Adding a new test
-----------------

To add a new test to the MPS, carry out the following steps. (The
procedure uses the name "newtest" throughout but you should of
course replace this with the name of your test case.)

:mps:tag:`new.source` Create a C source file in the code directory,
typically named "newtest.c". In addition to the usual copyright
boilerplate, it should contain a call to :c:func:`testlib_init()` (this
ensures reproducibility of pseudo-random numbers), and a :c:func:`printf()`
reporting the absence of defects (this output is recognized by the
test runner)::

    #include <stdio.h>
    #include "testlib.h"

    int main(int argc, char *argv[])
    {
      testlib_init(argc, argv);
      /* test happens here */
      printf("%s: Conclusion: Failed to find any defects.\n", argv[0]);
      return 0;
    }

:mps:tag:`new.unix` If the test case builds on the Unix platforms (FreeBSD,
Linux and macOS), edit code/comm.gmk adding the test case to the
:c:macro:`TEST_TARGETS` macro, and adding a rule describing how to build it,
typically::

    $(PFM)/$(VARIETY)/newtest: $(PFM)/$(VARIETY)/newtest.o \
            $(TESTLIBOBJ) $(PFM)/$(VARIETY)/mps.a

:mps:tag:`new.windows` If the test case builds on Windows, edit
code/commpre.nmk adding the test case to the :c:macro:`TEST_TARGETS` macro,
and edit code/commpost.nmk adding a rule describing how to build it,
typically::

    $(PFM)\$(VARIETY)\newtest.exe: $(PFM)\$(VARIETY)\newtest.obj \
            $(PFM)\$(VARIETY)\mps.lib $(FMTTESTOBJ) $(TESTLIBOBJ)

:mps:tag:`new.macos` If the test case builds on macOS, open
code/mps.xcodeproj/project.pbxproj for edit and open this project in
Xcode. If the project navigator is not visible at the left, select
View → Navigators → Show Project Navigator (⌘1). Right click on the
Tests folder and choose Add Files to "mps"…. Select code/newtest.c
and then click Add. Move the new file into alphabetical order in the
Tests folder. Click on "mps" at the top of the project navigator to
reveal the targets. Select a test target that is similar to the one
you have just created. Right click on that target and select Duplicate
(⌘D). Select the new target and change its name to "newtest". Select
the "Build Phases" tab and check that "Dependencies" contains the mps
library, and that "Compile Sources" contains newtest.c and
testlib.c. Close the project.

:mps:tag:`new.database` Edit tool/testcases.txt and add the new test case to
the database. Use the appropriate flags to indicate the properties of
the test case. These flags are used by the test runner to select the
appropriate sets of test cases. For example tests marked ``=P`` are
expected to fail in build configurations without polling (see
design.mps.config.opt.poll_).

:mps:tag:`new.manual` Edit manual/source/code-index.rst and add the new test
case to the "Automated test cases" section.


Continuous integration
----------------------

[This section might need to become a document in its own right.  CI
has grown in importance and complexity.  RB 2023-01-15]

:mps:tag:`ci` Ravenbrook uses both `GitHub CI`_ and `Travis CI`_ for
continuous integration of the MPS via GitHub.

.. _Travis CI: https://travis-ci.com/

.. _GitHub CI: https://docs.github.com/en/actions/automating-builds-and-tests/about-continuous-integration

[This section needs: definition of CI goals and requirements, what we
need CI to do and why, how the testci target meets those
requirements.  'taint really a design without this.  Mention how CI
supports the pull request merge procedure (except that exists on a
separate branch at the moment).  RB 2023-01-15]

[Need to discuss compilers and toolchains.  RB 2023-01-15]

:mps:tag:`ci.run.posix` On Posix systems where we have autoconf, the CI
services run commands equivalent to::

  ./configure
  make install
  make test

which exercises the testci target, as defined by `Makefile.in
<../Makefile.in>`_ in the root of the MPS tree.

:mps:tag:`ci.run.windows` On Windows the CI services run commands that do at
least::

  nmake /f w3i6mv.nmk all testci

as defined by the :mps:ref:`.ci.github.config`.

:mps:tag:`ci.run.other.targets` On some platforms we arrange to run the testansi,
testpollnone, testratio, and testscheme targets.  [Need to explain
why, where, etc.  RB 2023-01-15]

:mps:tag:`ci.run.other.checks` We could also run various non-build checks
using CI to check:

- document formatting
- shell script syntax

[On the branch where this is being written, these do not yet exist.  They are the
subject of `GitHub pull request #112
<https://github.com/Ravenbrook/mps/pull/112>`_ of
branch/2023-01-13/rst-check.  When merged, they can be linked.  RB
2023-01-15]

:mps:tag:`ci.when` CI is triggered on the `mps GitHub repo`_ by:

- commits (pushes)
- new pull requests
- manually, using tools (see :mps:ref:`.ci.tools`)

.. _mps GitHub repo: https://github.com/ravenbrook/mps

:mps:tag:`ci.results` CI results are visible via the GitHub web interface:

- in pull requests, under "Checks",

- on the `branches page <https://github.com/Ravenbrook/mps/branches>`_
  as green ticks or red crosses that link to details.

as well as in logs specific to the type of CI.

:mps:tag:`ci.results.travis` Results from Travis CI can be found at the
`Travis CI build history for the MPS GitHub repo
<https://app.travis-ci.com/github/Ravenbrook/mps/builds>`_.

:mps:tag:`ci.results.github` Results from GitHub CI can be found at `build
and test actions on the Actions tab at the Ravenbrook GitHub repo
<https://github.com/Ravenbrook/mps/actions/workflows/build-and-test.yml>`_.

:mps:tag:`ci.github` [Insert overview of GitHub CI here.  RB 2023-01-15]

:mps:tag:`ci.github.platforms` GitHub provides runners_ for Linux, Windows,
and macOS, but only on x86_64.  See :mps:ref:`.ci.travis.platforms` for ARM64
and FreeBSD.

.. _runners: https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources

:mps:tag:`ci.github.config` GitHub CI is configured using the
`build-and-test.yml <../.github/workflows/build-and-test.yml>`_ file
in the .github/workflows directory of the MPS tree.

:mps:tag:`ci.travis` [Insert overview of Travis CI here.  RB 2023-01-15]

:mps:tag:`ci.travis.platforms` Where possible, we use `GitHub CI`_ for
platforms, because `Travis CI is slow and expensive`_.  However
`GitHub CI`_ does not provide ARM64 or FreeBSD, so we use `Travis CI`_
for those.

.. _Travis CI is slow and expensive: https://github.com/Ravenbrook/mps/issues/109

:mps:tag:`ci.travis.config` Travis is configured using the `.travis.yml
<../.travis.yml>`_ file at top level of the MPS tree.

:mps:tag:`ci.tools` The MPS tree contains some simple tools for managing CI
without the need to install whole packages such as the GitHub CLI or
Travis CI's Ruby gem.

:mps:tag:`ci.tools.kick` `tool/github-ci-kick <../tool/github-ci-kick>`_ and
`tool/travis-ci-kick <../tool/travis-ci-kick>`_ both trigger CI builds
without the need to push a change or make a pull request in the `mps
GitHub repo`_.  In particular, they are useful for applying CI to work
that was pushed while CI was disabled, for whatever reason.


MMQA tests
----------

:mps:tag:`mmqa` The Memory Management Quality Assurance test suite is
another suite of test cases.

:mps:tag:`mmqa.why` The existence of two test suites originates in the
departmental structure at Harlequin Ltd where the MPS was originally
developed. Tests written by members of the Memory Management Group
went into the code directory along with the MPS itself, while tests
written by members of the Quality Assurance Group went into the test
directory. (Conway's Law states that "organizations which design
systems … are constrained to produce designs which are copies of the
communication structures of these organizations" [Conway_1968]_.)

:mps:tag:`mmqa.run` See test/README for how to run the MMQA tests.


Other tests
-----------

:mps:tag:`coverage` The program tool/testcoverage compiles the MPS with
coverage enabled, runs the smoke tests (:mps:ref:`.target.testrun`) and
outputs a coverage report.

:mps:tag:`opendylan` The program tool/testopendylan pulls Open Dylan from
GitHub and builds it against the MPS.


References
----------

.. [Conway_1968]
   "How do Committees Invent?";
   Melvin E. Conway; *Datamation* 14:5, pp. 28–31; April 1968;
   <http://www.melconway.com/Home/Committees_Paper.html>