.. highlight:: none


.. index::
   pair: locus manager; design

.. _design-locus:


Locus manager
=============

.. mps:prefix:: design.mps.locus


Introduction
------------

:mps:tag:`intro` The locus manager coordinates placement among the pools,
taking the burden of having to be clever about tract/group placement
away from them, while preserving trace differentiability and
contiguity where appropriate.

:mps:tag:`source` `mail.gavinm.1998-02-05.17-52`_,
`mail.ptw.1998-02-05.19-53`_, `mail.pekka.1998-02-09.13-58`_, and
`mail.gavinm.1998-02-09.14-05`_.

.. _mail.gavinm.1998-02-05.17-52: https://info.ravenbrook.com/project/mps/mail/1998/02/05/17-52/0.txt
.. _mail.ptw.1998-02-05.19-53:  https://info.ravenbrook.com/project/mps/mail/1998/02/05/19-53/0.txt
.. _mail.pekka.1998-02-09.13-58: https://info.ravenbrook.com/project/mps/mail/1998/02/09/13-58/0.txt
.. _mail.gavinm.1998-02-09.14-05: https://info.ravenbrook.com/project/mps/mail/1998/02/09/14-05/0.txt

:mps:tag:`readership` Any MPS developer.


Overview
--------

The MPS manages three main resources:

#. storage;
#. address space;
#. time.

The locus manager manages address space at the arena level.

.. note::

    Tucker was right: see `mail.ptw.1998-11-02.14-25`_. Richard
    Kistruck, 2007-04-24.

    .. _mail.ptw.1998-11-02.14-25: https://info.ravenbrook.com/project/mps/mail/1998/11/02/14-25/0.txt

When a pool wants some address space, it expresses some preferences to
the locus manager. The locus manager and the arena (working together)
try to honour these preferences, and decide what address space the
pool gets.

Preferences are expressed by the :c:type:`LocusPref` argument to
:c:func:`SegAlloc()`. Note that, when they call :c:func:`SegAlloc()`, pools are
asking for address space and writeable storage simultaneously, in a
single call. There is currently no way for pools to reserve address
space without requesting storage.
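
For illustration, here is a sketch of how a pool might fill in and
express such a preference. It assumes the declarations of
:c:func:`LocusPrefInit()` and :c:func:`LocusPrefExpress()` in
``locus.h`` and of :c:func:`SegAlloc()` in ``seg.h``; the particular
preferences chosen are arbitrary.

.. code:: c

    /* Sketch: allocate a segment for a cohort, preferring a given */
    /* zone set and the high end of address space. */
    static Res cohortSegAlloc(Seg *segReturn, SegClass klass,
                              ZoneSet zones, Size size, Pool pool)
    {
      LocusPrefStruct pref;

      LocusPrefInit(&pref);
      LocusPrefExpress(&pref, LocusPrefZONESET, &zones);
      LocusPrefExpress(&pref, LocusPrefHIGH, NULL);
      /* argsNone is the empty keyword argument list. */
      return SegAlloc(segReturn, klass, &pref, size, pool, argsNone);
    }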


Why is it important to manage address space?
............................................

#. Trace differentiability

   Reference tracing systems (i.e. automatic pools) use carefully
   chosen addresses to categorise objects into clumps, and to
   summarise and cheaply find references between clumps.

   Different clumps will become worth collecting at different times
   (the classic example, of course, is generations in a generational
   collector). For these partial collections to be efficient, it must
   be cheap to keep these clumps differentiable, cheap to condemn
   (Whiten) a particular clump, and cheap to find a good conservative
   approximation to all inward references to a clump (both initially
   to construct the Grey set, and to make scanning the Grey set
   efficient).

   This is what the MPS zone mechanism is all about.

   The locus manager manages the mapping from clumps to zones.

   To specify a clump, pools can pass ``LocusPrefZONESET`` and a set
   of zones to :c:func:`LocusPrefExpress()`.

#. Prevent address space fragmentation (within the arena)

   Address space is not infinite.

   In some use cases, the MPS is required to remain efficient when
   using very nearly all available address space and storage. For
   example, with the client arena class, the only address space
   available is that of the storage supplied by the client.

   Even with the VM arena class, typical storage sizes (as of 2007)
   can make 32-bit address space constrained: the client may need
   several gigabytes, which leaves little spare address space.

   Address space fragmentation causes outright failure when there is
   no way to allocate a big block of address space. The big block may
   be requested via the MPS (by the client), or by something else in
   the same process, such as a third-party graphics or image library.

   Address space fragmentation incurs costs when:

   - desired large-block requests (such as for buffering) are denied,
     causing them to be re-requested as a smaller block, or as several
     smaller blocks;

   - the operating system (perhaps) pays extra to maintain a
     fragmented mapping.

#. Prevent storage fragmentation (within tracts and segments)

   Storage is not infinite: it is allocated in multiples of a
   fixed-size tract. Small lonely objects, each retaining a whole
   tract, cause storage fragmentation.

   Non-moving pools manage this fragmentation with placement
   strategies that use:

   - co-located death (in space and time);
   - segment merging and splitting.

   These pool-level strategies always care about contiguity of object
   storage. They also often care about the *ordering* of addresses,
   because pool code uses an address-ordered search when choosing
   where to place a new object. For these two reasons, the address
   chosen (by the locus manager and arena) for new tracts is
   important.

   Certain specialised pools, and/or some client programs that use
   them, have carefully tuned segment sizes, positioning, and search
   order. Be careful: seemingly inconsequential changes can
   catastrophically break this tuning.

   Pools can specify a preference for the high or low end of address
   space, which implies a search order. Pools could also specify
   clumping, using ``LocusPrefZONESET``.


Discovering the layout
......................

The locus manager is not given advance notice of how much address
space will be required with what preferences. Instead, the locus
manager starts with an empty layout, and adapts it as more requests
come in over time. It is attempting to discover a suitable layout by
successive refinement. This is ambitious.


Definitions
-----------

:mps:tag:`note.cohort` We use the word "cohort" in its usual sense here, but
we're particularly interested in cohorts that have properties relevant
to tract placement. It is such cohorts that the pools will try to
organize using the services of the locus manager. Typical properties
would be trace differentiability or (en masse) death-time
predictability. Typical cohorts would be instances of a
non-generational pool, or generations of a collection strategy.

:mps:tag:`def.trace.differentiability` Objects (and hence tracts) that are
collected may or may not have "trace differentiability" from each
other, depending on their placement in the different zones. Objects
(or pointers to them) can also have trace differentiability (or not)
from non-pointers in ambiguous references; in practice, we are worried
about low integers, which may appear to be in zone 0 or zone -1.

Requirements
------------

:mps:tag:`req.cohort` Tract allocations must specify the cohort they
allocate in. Cohorts of this kind will be called loci, and they will
have such attributes as are implied by the other requirements.
Critical.

:mps:tag:`req.counter.objects` As a counter-requirement, pools are expected
to manage objects. Objects the size of a tract allocation request
(segment-sized) are exceptional. Critical.
:mps:tag:`req.counter.objects.just` This means the locus manager is not
meant to solve the problems of allocating large objects, and it isn't
required to know what goes on in pools.

:mps:tag:`req.contiguity` Must support a high level of contiguity within
cohorts when requested. This means minimizing the number of times a
cohort is made aware of discontiguity. Essential (as we've effectively
renegotiated this in SW, down to a vague hope that certain critical
cohorts are not too badly fragmented). :mps:tag:`req.contiguity.just` TSBA.

:mps:tag:`req.contiguity.specific` It should be possible to request another
allocation next to a specific tract on either side (or an extension in
that direction, as the case may be). Such a request can fail, if
there's no space there. Nice. It would also be nice to have one for
"next to the largest free block".

:mps:tag:`req.differentiable` Must support the trace differentiability of
segments that may be condemned separately. Due to the limited number
of zones, it must be possible to place several cohorts into the same
zone. Essential.

:mps:tag:`req.differentiable.integer` It must be possible to place
collectable allocations so that they are trace-differentiable from
small integers. Essential.

:mps:tag:`req.disjoint` Must support the disjointness of pages that have
different VM properties (such as mutable/immutable,
read-only/read-write, and different lifetimes). Optional.

.. note::

    I expect the implementation will simply work at page or larger
    granularity, so the problem will not arise, but Tucker insisted on
    stating this as a requirement. Pekka P. Pirinen, 1998-10-28.

:mps:tag:`req.low-memory` The architecture of the locus manager must not
prevent the design of efficient applications that often use all
available memory. Critical. :mps:tag:`req.low-memory.expl` This basically
says it must be designed to perform well in low-memory conditions, but
that there can be configurations where it doesn't do as well, as long
as this is documented for the application programmer. Note that it
doesn't say all applications are efficient, only that if you manage to
design an otherwise efficient application, the locus manager will not
sink it.

:mps:tag:`req.address` Must conserve address space in VM arenas to a
reasonable extent. Critical.

:mps:tag:`req.inter-pool` Must support the association of sets of tracts in
different pools into one cohort. Nice.

:mps:tag:`req.ep-style` Must support the existing EP-style of allocation
whereby allocation is from one end of address space either upwards or
downwards (or a close approximation thereto with the same behavior).
:mps:tag:`req.ep-style.just` We cannot risk disrupting a policy with
well-known properties when this technology is introduced.

:mps:tag:`req.attributes` There should be a way to inform the locus manager
about various attributes of cohorts that might be useful for
placement: deathtime, expected total size, and so on. Optional. It's a
given that the cohorts must then have these attributes, within the
limits set in the contract of the appropriate interface.
:mps:tag:`req.attributes.action` The locus manager should use the attributes
to guide its placement decisions. Nice.

:mps:tag:`req.blacklisting` There should be a way of maintaining at least
one blacklist of pages (or some other small unit) that cannot or
should not be allocated to collectable pools. Optional.

.. note::

    How to do blacklist breaking for ambiguous refs?

:mps:tag:`req.hysteresis` There should be a way to indicate which cohorts
fluctuate in size and by how much, to guide the arena hysteresis to
hold on to suitable pages. Optional.


Analysis
--------

:mps:tag:`analysis.sw` Almost any placement policy would be an improvement on
the current SW one.

:mps:tag:`analysis.cause-and-effect` The locus manager doesn't usually need to
know *why* things need to be differentiable, disjoint, contiguous, and
so on. Abstracting the reason away from the interface makes it more
generic, more likely to have serendipitous new uses. Attributes
described by a quantity (deathtime, size, etc.) are an exception to
this, because we can't devise a common measure.

:mps:tag:`analysis.stable` The strategy must be stable: it must avoid repeated
recomputation, especially the kind that switches between alternatives
with a short period (repeated "bites" out of the same region, or
flip-flopping between two regions).

:mps:tag:`analysis.fragmentation` There's some call to avoid fragmentation in
cohorts that don't need strict contiguity, but this is not a separate
requirement, since fragmentation is a global condition, and can only
be ameliorated if there's a global strategy that clumps allocations
together.

:mps:tag:`analysis.deathtime` Cohorts with good death-time clumping of their
objects could use some locality of tract allocation, because it
increases the chances of creating large holes in the address space
(for other allocation to use). OTOH, many cohorts will not do multiple
frees in short succession, or at least cannot reasonably be predicted
to do so. This locality is not contiguity, nor is it low
fragmentation; it is simply the requirement to place new tracts next
to the tract where the last object was allocated in the cohort. Note
that the placement of objects is under the control of the pool, and
the locus manager will not know it; therefore this requirement should
be pursued by requesting allocation next to a particular tract (which
we already have a requirement for).

:mps:tag:`analysis.asymmetrical` The strategy has to be asymmetrical with
respect to cohorts growing and shrinking. The reason for this
asymmetry is that it can choose where to grow, but it cannot choose
where to shrink (except in a small way, by growing with good
locality).


Interface
---------

:mps:tag:`interface.locus` A cohort will typically reside on multiple tracts
(and the pools will avoid putting objects of other cohorts on them),
so there should be an interface to describe the properties of the
cohort, and associate each allocation request with the cohort. We
shall call such an object, created to represent a cohort, a locus (pl.
loci).

:mps:tag:`interface.locus.pool` Loci will usually be created by the pools
that use them. Some of the locus attributes will be inherited from
client-specified pool attributes [this means there will be additional
pool attributes].

:mps:tag:`interface.detail` This section describes the interface in
overview; for details, see the implementation section and the code, or
the user documentation.


Loci
....

.. c:function:: Res LocusCreate(Locus *locusReturn, LocusAttrs attrs, ZoneGroup zg, LocusAllocDesc adesc)

:mps:tag:`function.create` A function to create a locus: ``adesc`` contains
the information about the allocation sequences in the locus, ``zg`` is
used for zone differentiability, and ``attrs`` encodes the following:

- :mps:tag:`locus.contiguity` A locus can be contiguous. This means
  performing as required by :mps:ref:`.req.contiguity`; non-contiguous
  allocations can be freely placed anywhere (but efficiency dictates
  that similar allocations are placed close together and apart from
  others).

- :mps:tag:`locus.blacklist` Allocations in the locus will avoid blacklisted
  pages (for collectable segments).

- :mps:tag:`locus.zero` Allocations in the locus are zero-filled.

.. note::

    Other attributes will be added, I'm sure.

:mps:tag:`interface.zone-group` The locus can be made a member of a zone
group. Passing ``ZoneGroupNONE`` means it's not a member of any group
(allocations will be placed without regard to zone, except to keep
them out of stripes likely to be needed for some group).

.. note::

    I propose no mechanism for managing zone groups at this time,
    since it's only used internally for one purpose. Pekka P. Pirinen,
    2000-01-17.

:mps:tag:`interface.size` An allocation descriptor (``LocusAllocDesc``)
contains various descriptions of how the locus will develop over time
(inconsistent specifications are forbidden, of course):

- :mps:tag:`interface.size.typical-alloc` Size of a typical allocation in
  this locus, in bytes. This will mainly affect the grouping of
  non-contiguous loci.

- :mps:tag:`interface.size.large-alloc` Typical large allocation that the
  manager should try to allow for (this allows some relief from
  :mps:ref:`.req.counter.objects`), in bytes. This will mainly affect the size
  of gaps that will be allotted adjoining this locus.

- :mps:tag:`interface.size.direction` Direction of growth: up/down/none.
  Only useful if the locus is contiguous.

- :mps:tag:`interface.size.lifetime` Some measure of the lifetime of tracts
  (not objects) in the cohort.

  .. note::

      Don't know the details yet, probably only useful for placing
      similar cohorts next to each other, so the details don't
      actually matter. Pekka P. Pirinen, 2000-01-17.

- :mps:tag:`interface.size.deathtime` Some measure of the deathtime of
  tracts (not objects) in the cohort.

  .. note::

      Ditto. Pekka P. Pirinen, 2000-01-17.

:mps:tag:`function.init` :c:func:`LocusInit()` is like :c:func:`LocusCreate()`, but
without the allocation. This is the usual interface, since most loci
are embedded in a pool or some other structure.

:mps:tag:`function.alloc` :c:func:`ArenaAlloc()` is to take a locus argument.
:c:func:`ArenaAllocHere()` is like it, but additionally takes a tract
and a specification to place the new allocation immediately above or
below that tract; if that is not possible, it returns ``ResFAIL``
(this will make it useful for reallocation functionality).

.. c:function:: void ArenaSetTotalLoci(Arena arena, Size nLoci, Size nZoneGroups)

:mps:tag:`function.set-total` A function to tell the arena the expected
number of (non-miscible client) loci, and of zone groups.
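
To make the proposed interface concrete, here is a hypothetical
fragment of pool code creating a contiguous locus. Only the function
signatures above are from the proposal; the attribute constant and the
descriptor fields are invented for illustration.

.. code:: c

    /* Hypothetical: create a contiguous, upward-growing locus. */
    /* LocusAttrCONTIGUOUS and the adesc fields are illustrative. */
    LocusAllocDescStruct adesc;
    Locus locus;
    Res res;

    adesc.typicalSize = 4096;       /* .interface.size.typical-alloc */
    adesc.largeSize = 256 * 1024;   /* .interface.size.large-alloc */
    adesc.direction = LocusGrowUP;  /* .interface.size.direction */

    res = LocusCreate(&locus, LocusAttrCONTIGUOUS, ZoneGroupNONE, &adesc);
    if (res != ResOK)
      return res;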


Peaks
.....

.. c:function:: mps_res_t mps_peak_create(mps_peak_t*, mps_arena_t)

:mps:tag:`function.peak.create` A function to create a peak. A newly-created
peak is open, and will not be used to guide the strategy of the locus
manager.

.. c:function:: mps_res_t mps_peak_describe_pool(mps_peak_t, mps_pool_t, mps_size_desc_t)

:mps:tag:`function.peak.add` A function to add a description of the state of
one pool into the peak. Calling this function again for the same peak
and pool instance will replace the earlier description.

:mps:tag:`function.peak.add.size` The size descriptor contains a total size
in bytes or percent of arena size.

.. note::

    Is this right? Pekka P. Pirinen, 2000-01-17.

:mps:tag:`function.peak.add.remove` Specifying a :c:macro:`NULL` size will remove
the pool from the peak. The client is not allowed to destroy a pool
that is mentioned in any peak; it must first be removed from the peak,
or the peak must be destroyed. This is to ensure that the client
adjusts the peaks in a manner that makes sense to the application; the
locus manager can't know how to do that.

.. c:function:: mps_res_t mps_peak_close(mps_peak_t)

:mps:tag:`function.peak.close` A function to indicate that all the
significant pools have been added to the peak, and it can now be used
to guide the locus manager. For any pool not described in the peak,
the locus manager will take its current size at any given moment as
the best prediction of its size at the peak.

:mps:tag:`function.peak.close.after` It is legal to add more descriptions to
the peak after closing, but this will reopen the peak, and it will
have to be closed again before the locus manager will use it. The
locus manager uses the previous closed state of the peak while this is
going on.

.. c:function:: void mps_peak_destroy(mps_peak_t)

:mps:tag:`function.peak.destroy` A function to destroy a peak.
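
A sketch of the intended client protocol follows. The construction of
size descriptors is not specified in this document, so
``make_size_desc()`` is a placeholder, and the pool names are
illustrative.

.. code:: c

    /* Sketch of the peak protocol: create, describe, close. */
    static mps_res_t describe_peak(mps_arena_t arena,
                                   mps_pool_t object_pool,
                                   mps_pool_t string_pool,
                                   mps_peak_t *peak_o)
    {
      mps_peak_t peak;
      mps_res_t res;

      res = mps_peak_create(&peak, arena);
      if (res != MPS_RES_OK)
        return res;

      /* Describe each significant pool's expected state at the peak; */
      /* pools not described are predicted from their current size. */
      res = mps_peak_describe_pool(peak, object_pool,
                                   make_size_desc(10 * 1024 * 1024));
      if (res == MPS_RES_OK)
        res = mps_peak_describe_pool(peak, string_pool,
                                     make_size_desc(2 * 1024 * 1024));

      /* Closing lets the locus manager use the peak for placement. */
      if (res == MPS_RES_OK)
        res = mps_peak_close(peak);

      if (res != MPS_RES_OK) {
        mps_peak_destroy(peak);
        return res;
      }
      *peak_o = peak;
      return MPS_RES_OK;
    }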

:mps:tag:`interface.ep-style` This satisfies :mps:ref:`.req.ep-style` by allowing SW
to specify zero size for most pools (which will cause them to be
placed next to other loci with the same growth direction).

.. note::

    Not sure this is good enough, but we'll try it first. Pekka P.
    Pirinen, 2000-01-17.


Architecture
------------

Data objects
............

:mps:tag:`arch.locus` To represent the cohorts, we have locus objects.
Usually a locus is embedded in a pool instance, but generations are
separate loci.

:mps:tag:`arch.locus.attr` The attributes of a locus include contiguity,
blacklist, zone group, and current region. @@@@

:mps:tag:`arch.locus.attr.exceptional` The client can define a typical large
allocation for the locus. Requests substantially larger than that are
deemed exceptional.

:mps:tag:`arch.zone-group` To satisfy :mps:ref:`.req.differentiable`, we offer
zone groups. Each locus can be a member of a zone group, and the locus
manager will attempt to place allocations in this locus in different
zones from all the other zone groups. A zone group is represented as
@@@@.

:mps:tag:`arch.page-table` A page table is maintained by the arena, as
usual, to track the association between tracts, pools, and segments,
and the mapping status for VM arenas.

:mps:tag:`arch.region` All of the address space is divided into disjoint
regions, represented by region objects. These objects store their
current limits, and high and low watermarks of currently allocated
tracts (we hope there's usually a gap of empty space between regions).
The limits are actually quite porous and flexible.
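
As a sketch (this document does not fix the fields), a region object
might look like the following; ``LocusSet`` and the field names are
illustrative:

.. code:: c

    /* Sketch of a region object (see .arch.region and */
    /* .arch.region.assoc).  All names are illustrative. */
    typedef struct RegionStruct {
      Addr base, limit;      /* current (porous, flexible) limits */
      Addr lowWM, highWM;    /* watermarks of allocated tracts */
      Bool contiguous;       /* dedicated to one contiguous locus? */
      LocusSet loci;         /* loci with tracts in this region */
      struct RegionStruct *prev, *next;  /* address-ordered neighbors */
    } RegionStruct, *Region;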

:mps:tag:`arch.region.assoc` Each region is associated with one contiguous
locus, or with any number of non-contiguous loci (or none). We call the
first kind of region "contiguous". :mps:tag:`arch.locus.assoc` Each locus
remembers all regions where it currently has tracts, excepting the
badly-placed allocations (see below). It is not our intention that any
locus would have very many, or that loci that share regions would have
any reason to stop doing so.

:mps:tag:`arch.region.more` Various quantities used by the placement
computation are also stored in the regions and the loci. Regions are
created (and destroyed) by the placement recomputation. A zoned region
is located in a stripe, but it can extend into neighboring stripes if
an exceptionally large tract allocation is requested (to allow for
large objects).

:mps:tag:`arch.chunk` Arenas may allocate more address space in additional
chunks, which may be disjoint from the existing chunks. Inter-chunk
space will be represented by dummy regions. There are also sentinel
regions at both ends of the address space. See
design.mps.arena.chunk_.

.. _design.mps.arena.chunk: arena.html#design.mps.arena.chunk


Overview of strategy
....................

:mps:tag:`arch.strategy.delay` The general strategy is to delay placement
decisions until they have to be made, but no later.

:mps:tag:`arch.strategy.delay.until` Hence, the locus manager only makes
placement decisions when an allocation is requested (frees and other
operations might set a flag to cause the next allocation to redecide).
This also allows the client to change the peak and pool configuration
in complicated ways without causing a lot of recomputation, by doing
all the changes without allocating in the middle (unless the control
pool needs more space because of the changes).

:mps:tag:`arch.strategy.normal` While we want the placement to be
sophisticated, we do not believe it is worth the effort to consider
all the data at each allocation. Hence, allocations are usually just
placed in one of the regions used previously (see :mps:ref:`.arch.alloc`)
without reconsidering the issues.

:mps:tag:`arch.strategy.normal.limit` However, the manager sets
precautionary limits on the regions to ensure that the placement
decisions are revisited when an irrevocable placement is about to be
made.

:mps:tag:`arch.strategy.create` The manager doesn't create new regions until
they are needed for allocation (but it might compute where they could
be placed to accommodate a peak).


Allocation
..........

:mps:tag:`arch.alloc` Normally, each allocation to a locus is placed in its
current region. New regions are only sought when necessary to fulfill
an allocation request or when there is reason to think the situation
has changed significantly (see :mps:ref:`.arch.significant`).

:mps:tag:`arch.alloc.same` An allocation is first attempted next to the
previous allocation in the same locus, respecting growth direction. If
that is not possible, a good place in the current region is sought.
:mps:tag:`arch.alloc.same.hole` At the moment, for finding a good place
within a region, we just use the current algorithm, limited to the
region. In future, the placement within regions will be more clever.

:mps:tag:`arch.alloc.extend` If there's no adequate hole in the current
region and the request is not exceptional, the neighboring regions are
examined to see if the region could be extended at one border. (This
will basically only be done if the neighbor has shrunk since the last
placement recomputation, because the limit was set on sophisticated
criteria, and should not be changed without justification.)
:mps:tag:`arch.alloc.extend.here` When an allocation is requested next to a
specific tract (:c:func:`ArenaAllocHere()`), we try to extend a little
harder (at least for ``change_size``, perhaps not for locality).

:mps:tag:`arch.alloc.other` If no way can be found to allocate in the
current region, other regions used for this locus are considered in
the same way, to see if space can be found there. [Or probably look at
other regions before trying to extend anything?]

:mps:tag:`arch.alloc.recompute` When no region of this locus has enough
space for the request, or when otherwise required, region placement is
recomputed to find a new region for the request (which might be the
same region, after extension).

:mps:tag:`arch.alloc.current` The region where the allocation was placed
then becomes the current region for this locus, except when the
request was exceptional, or when the region chosen was "bad" (see
@@@@).
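
The decision sequence above can be summarised in a sketch. All of the
helper functions here are illustrative, standing for the steps
described in :mps:ref:`.arch.alloc.same` to :mps:ref:`.arch.alloc.recompute`:

.. code:: c

    /* Sketch of the allocation decision sequence for a locus. */
    static Res locusAlloc(Addr *baseReturn, Locus locus, Size size)
    {
      Region region = locus->currentRegion;

      /* .arch.alloc.same: try next to the previous allocation, then */
      /* anywhere suitable in the current region. */
      if (RegionAllocNextTo(baseReturn, region, locus->lastAlloc,
                            size) == ResOK
          || RegionAllocWithin(baseReturn, region, size) == ResOK)
        return ResOK;

      /* .arch.alloc.extend: extend the current region at a border */
      /* where a neighbor has shrunk since the last recomputation. */
      if (!LocusRequestExceptional(locus, size)
          && RegionExtend(region, size) == ResOK)
        return RegionAllocWithin(baseReturn, region, size);

      /* .arch.alloc.other: consider the locus's other regions. */
      if (LocusAllocOtherRegions(baseReturn, locus, size) == ResOK)
        return ResOK;

      /* .arch.alloc.recompute: recompute region placement to find a */
      /* new region (possibly the same one, after extension). */
      return LocusRecomputeAndAlloc(baseReturn, locus, size);
    }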

:mps:tag:`arch.significant` Significant changes to the parameters affecting
placement are deemed to have happened at certain client calls and when
the total allocation has changed substantially since the last
recomputation. Such conditions set a flag that causes the next
allocation to recompute even if its current region is not full
(possibly second-guess the decision to recompute after some
investigation of the current state?).


Deallocation
............

:mps:tag:`arch.free` Deallocation simply updates the counters in the region
and the locus. For some loci, it will make the region of the
deallocation the current region. :mps:tag:`arch.free.remove` If a region
becomes entirely empty, it is deleted (and the neighbors' limits might
be adjusted).

.. note::

    This is quite tricky to get right.


Region placement recomputation
..............................

:mps:tag:`arch.gap` When doing placement computations, we view the arena as
a sequence of alternating region cores and gaps (which can be small,
even zero-sized). Initially, we'll take the core of a region to be the
area between the high and low watermark, but in the future we might be
more flexible about that.

.. note::

    Edge determination is actually a worthwhile direction to explore.

:mps:tag:`arch.reach` The gap between two cores could potentially end up
being allocated to either neighboring region (if both grow in that
direction), to just one, or to neither. The set of states that the
region assignment could reach by assigning the gaps to their neighbors
is called the reach of the current configuration.

:mps:tag:`arch.placement.object` The object of the recomputation is to find
a configuration of regions that is not too far from the current
configuration and that keeps all the peaks inside its reach; if that
is not possible, keep the nearest ones in the reach and then minimize
the total distance from the rest.
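
Schematically (this document does not commit to a particular distance
measure :math:`d`), the recomputation seeks

.. math::

    \min_{C} \; d(C, C_{\text{now}})
    \quad \text{subject to} \quad
    P_i \in \mathrm{reach}(C) \text{ for every peak } P_i,

falling back, when no such configuration exists, to keeping the
nearest peaks within the reach and minimizing the total distance to
the rest.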

:mps:tag:`arch.placement.hypothetical` The configurations that are
considered will include hypothetical placements for new regions for
loci that cannot fit in their existing regions at the peak. This is
necessary to avoid choosing a bad alternative.

:mps:tag:`arch.placement.interesting` The computation will only consider new
regions of loci that are deemed interesting, that is, far from their
peak state. This will reduce the computational burden and avoid
jittering near a peak.

.. note::

    Details missing.


Implementation
--------------

[missing]


Notes
-----

:mps:tag:`idea.change` Even after the first segment, be prepared to change
your mind if, by the second segment, a lot of new loci have been
created.

:mps:tag:`distance` If the current state is far from a peak, there's time to
reassign regions and for free space to appear (in fact, under the
steady arena assumption, enough free space *will* appear).

:mps:tag:`clear-pool` Need to have a function to deallocate all objects in a
pool, so that :c:func:`PoolDestroy()` won't have to be used for that
purpose.