3. Error handling¶
Operations in the Memory Pool System that might fail return a result code of type mps_res_t. Success is always indicated by the result code MPS_RES_OK, which is defined to be zero. Other result codes indicate failure, and are non-zero. The MPS never uses a “special value” of some other type to indicate failure (such as returning NULL for a pointer result, or −1 for a size result).
Note
The MPS does not throw or catch exceptions. (This is necessary for the MPS to be portable to systems that have only a freestanding implementation of the C language.)
The modular nature of the MPS means that it is not usually possible for a function description to list the possible error codes that it might return. A function in the public interface typically calls methods of an arena class and one or more pool classes, any of which might fail. The MPS is extensible with new arena and pool classes, which might fail in new and interesting ways, so the only future-proof behaviour is for a client program to assume that any MPS function that returns a result code can return any result code.
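For example, a client program can treat any result code other than MPS_RES_OK as a failure, handling the few codes it can recover from and reporting the rest. A minimal sketch, in which make_object() is a hypothetical wrapper around the program’s own allocation code (not part of the MPS interface):

#include <stdio.h>
#include <stdlib.h>
#include "mps.h"

/* Hypothetical wrapper around the program's allocation code; not part
 * of the MPS interface. */
extern mps_res_t make_object(mps_addr_t *obj_o, size_t size);

static void allocate_or_die(mps_addr_t *obj_o, size_t size)
{
    mps_res_t res = make_object(obj_o, size);
    if (res != MPS_RES_OK) {
        switch (res) {
        case MPS_RES_MEMORY:
        case MPS_RES_COMMIT_LIMIT:
            fprintf(stderr, "out of memory\n");
            break;
        default:
            /* Any MPS function that returns a result code may return
             * any result code, so there must be a catch-all case. */
            fprintf(stderr, "allocation failed: result code %d\n", res);
            break;
        }
        abort();
    }
}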
-
mps_res_t
¶ The type of result codes. It is a transparent alias for int, provided for convenience and clarity.
A result code indicates the success or failure of an operation, along with the reason for failure. As with error numbers in Unix, the meaning of a result code depends on the call that returned it. Refer to the documentation of the function for the exact meaning of each result code.
The result codes are:
MPS_RES_OK: operation succeeded.
MPS_RES_FAIL: operation failed.
MPS_RES_IO: an input/output error occurred.
MPS_RES_LIMIT: an internal limitation was exceeded.
MPS_RES_MEMORY: needed memory could not be obtained.
MPS_RES_RESOURCE: a needed resource could not be obtained.
MPS_RES_UNIMPL: operation is not implemented.
MPS_RES_COMMIT_LIMIT: the arena’s commit limit would be exceeded.
MPS_RES_PARAM: an invalid parameter was passed.
3.1. Result codes¶
-
MPS_RES_COMMIT_LIMIT
¶ A result code indicating that an operation could not be completed as requested without exceeding the commit limit.
You need to deallocate something or allow the garbage collector to reclaim something to make more space, or increase the commit limit by calling mps_arena_commit_limit_set().
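One possible response (a sketch, not the only strategy) is to raise the commit limit and retry the allocation once. Here try_alloc() is a hypothetical wrapper around the program’s allocation code, and the corresponding getter mps_arena_commit_limit() is assumed for reading the current limit.

#include <stddef.h>
#include "mps.h"

/* Hypothetical wrapper around the program's allocation code; not part
 * of the MPS interface. */
extern mps_res_t try_alloc(mps_addr_t *obj_o, size_t size);

/* Sketch: if the commit limit would be exceeded, raise it by `extra`
 * bytes and retry the allocation once. */
static mps_res_t alloc_raising_commit_limit(mps_addr_t *obj_o, size_t size,
                                            mps_arena_t arena, size_t extra)
{
    mps_res_t res = try_alloc(obj_o, size);
    if (res == MPS_RES_COMMIT_LIMIT) {
        size_t limit = mps_arena_commit_limit(arena);
        res = mps_arena_commit_limit_set(arena, limit + extra);
        if (res == MPS_RES_OK)
            res = try_alloc(obj_o, size);
    }
    return res;
}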
-
MPS_RES_FAIL
¶ A result code indicating that something went wrong that does not fall under the description of any other result code.
-
MPS_RES_IO
¶ A result code indicating that an input/output error occurred in the telemetry system.
-
MPS_RES_LIMIT
¶ A result code indicating that an operation could not be completed as requested because of an internal limitation of the MPS.
-
MPS_RES_MEMORY
¶ A result code indicating that an operation could not be completed because there wasn’t enough memory available.
You need to deallocate something or allow the garbage collector to reclaim something to free enough memory, or extend the arena (if you’re using an arena for which that does not happen automatically).
Note
Failing to acquire enough memory because the commit limit would have been exceeded is indicated by returning MPS_RES_COMMIT_LIMIT, not MPS_RES_MEMORY.
Running out of address space (as might happen in virtual memory systems) is indicated by returning MPS_RES_RESOURCE, not MPS_RES_MEMORY.
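A sketch of one possible recovery: force a full collection to reclaim dead objects, resume normal operation, and retry once. As before, try_alloc() is a hypothetical wrapper around the program’s allocation code.

#include <stddef.h>
#include "mps.h"

/* Hypothetical wrapper around the program's allocation code; not part
 * of the MPS interface. */
extern mps_res_t try_alloc(mps_addr_t *obj_o, size_t size);

/* Sketch: on MPS_RES_MEMORY, force a full collection and retry once. */
static mps_res_t alloc_collecting_on_failure(mps_addr_t *obj_o, size_t size,
                                             mps_arena_t arena)
{
    mps_res_t res = try_alloc(obj_o, size);
    if (res == MPS_RES_MEMORY) {
        res = mps_arena_collect(arena);  /* leaves the arena parked */
        if (res != MPS_RES_OK)
            return res;
        mps_arena_release(arena);        /* resume incremental collection */
        res = try_alloc(obj_o, size);
    }
    return res;
}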
-
MPS_RES_OK
¶ A result code indicating that an operation succeeded.
If a function takes an out parameter or an in/out parameter, this parameter will only be updated if MPS_RES_OK is returned. If any other result code is returned, the parameter will be left untouched by the function.
MPS_RES_OK is zero.
-
MPS_RES_PARAM
¶ A result code indicating that an operation could not be completed as requested because an invalid parameter was passed to the operation.
-
MPS_RES_RESOURCE
¶ A result code indicating that an operation could not be completed as requested because the MPS could not obtain a needed resource. It can be returned when the MPS runs out of address space. If this happens, you need to reclaim memory within your process (as for the result code MPS_RES_MEMORY).
Two special cases have their own result codes: when the MPS runs out of committed memory, it returns MPS_RES_MEMORY, and when it cannot proceed without exceeding the commit limit, it returns MPS_RES_COMMIT_LIMIT.
-
MPS_RES_UNIMPL
¶ A result code indicating that an operation, or some vital part of it, is not implemented.
This might be returned by functions that are no longer supported, or by operations that are included for future expansion, but not yet supported.
3.2. Assertions¶
Bugs in the client program may violate the invariants that the MPS relies on. Most functions in the MPS (in most varieties; see below) assert the correctness of their data structures, so these bugs will often be discovered by an assertion failure in the MPS. The section Common assertions and their causes below lists commonly encountered assertions and explains the kinds of client program bugs that can provoke these assertions.
It is very rare for an assertion to indicate a bug in the MPS rather than the client program, but it is not unknown, so if you have made every effort to track down the cause (see Debugging with the Memory Pool System) without luck, get in touch.
3.2.1. Assertion handling¶
When the MPS detects an assertion failure, it calls the plinth function mps_lib_assert_fail(). Unless you have replaced the plinth, this behaves as follows:
In the cool variety, print the assertion message to standard error and terminate the program by calling abort().
In the hot and rash varieties, print the assertion message to standard error and do not terminate the program.
You can change this behaviour by providing your own plinth, or using mps_lib_assert_fail_install().
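For example, a custom handler can be installed with mps_lib_assert_fail_install(). A sketch, assuming the plinth declarations are available via mpslib.h and that the handler receives the assertion message as a string (check your MPS headers for the exact prototype):

#include <stdio.h>
#include "mps.h"
#include "mpslib.h"   /* plinth interface; assumed to declare mps_lib_assert_fail_install() */

/* Custom handler: log the assertion message but do not terminate.
 * (Assumed handler type: a function taking the assertion message as a
 * string; check mpslib.h for the exact prototype.) */
static void my_assert_handler(const char *message)
{
    fprintf(stderr, "MPS assertion failed: %s\n", message);
}

static void install_assertion_handler(void)
{
    /* The previously installed handler is returned; keep it if you
     * want to fall back to the default behaviour later. */
    mps_lib_assert_fail_t previous =
        mps_lib_assert_fail_install(my_assert_handler);
    (void)previous;
}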
In many applications, users don’t want their program terminated when the MPS detects an error, no matter how severe. A lot of MPS assertions indicate that the program is going to crash very soon, but there still may be a chance for a user to get some useful results or save their work. This is why the default assertion handler only terminates in the cool variety.
3.2.2. Common assertions and their causes¶
This section lists some commonly encountered assertions and suggests likely causes. If you encounter an assertion not listed here (or an assertion that is listed here but for which you discovered a different cause), please let us know so that we can improve this documentation.
arg.c: MPS_KEY_...
A required keyword argument was omitted from a call to mps_ap_create_k(), mps_arena_create_k(), mps_fmt_create_k(), or mps_pool_create_k().
buffer.c: BufferIsReady(buffer)
The client program called mps_reserve() twice on the same allocation point without calling mps_commit(). See Allocation point protocol.
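For reference, the correct shape of the protocol pairs each mps_reserve() with a single mps_commit() on the same allocation point, retrying if the commit fails. A sketch, with the object initialization left schematic:

#include <stddef.h>
#include "mps.h"

/* Sketch of the reserve/commit protocol on allocation point `ap`.
 * `size` must already be aligned as the pool's object format requires. */
static mps_res_t allocate_object(mps_addr_t *obj_o, mps_ap_t ap, size_t size)
{
    mps_addr_t p;
    do {
        mps_res_t res = mps_reserve(&p, ap, size);
        if (res != MPS_RES_OK)
            return res;
        /* Initialize the new object at p here, before committing.
         * (Schematic: real code would set up the object's fields.) */
    } while (!mps_commit(ap, p, size));
    *obj_o = p;
    return MPS_RES_OK;
}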
dbgpool.c: fencepost check on free
The client program wrote to a location after the end, or before the beginning of an allocated block. See Debugging pools.
dbgpool.c: free space corrupted on release
The client program used an object after it was reclaimed. See Debugging pools.
format.c: SigCheck Format: format
The client program called mps_pool_create_k() for a pool class like AMC (Automatic Mostly-Copying) that requires an object format, but passed something other than an mps_fmt_t for this argument.
format.c: format->poolCount == 0
The client program called mps_fmt_destroy() on a format that was still being used by a pool. It is necessary to call mps_pool_destroy() first.
global.c: RingIsSingle(&arena->chainRing)
The client program called mps_arena_destroy() without destroying all the generation chains belonging to the arena. It is necessary to call mps_chain_destroy() first.
global.c: RingIsSingle(&arena->formatRing)
The client program called mps_arena_destroy() without destroying all the object formats belonging to the arena. It is necessary to call mps_fmt_destroy() first.
global.c: RingIsSingle(&arenaGlobals->rootRing)
The client program called mps_arena_destroy() without destroying all the roots belonging to the arena. It is necessary to call mps_root_destroy() first.
global.c: RingIsSingle(&arena->threadRing)
The client program called mps_arena_destroy() without deregistering all the threads belonging to the arena. It is necessary to call mps_thread_dereg() first.
global.c: RingLength(&arenaGlobals->poolRing) == 4
The client program called mps_arena_destroy() without destroying all the pools belonging to the arena. It is necessary to call mps_pool_destroy() first.
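The assertions above all arise from tearing structures down in the wrong order. A sketch of a tidy shutdown, assuming one of each structure was created (the parameter names are illustrative); in general, destroy structures in the reverse of the order in which they were created:

#include "mps.h"

/* Sketch of a tidy MPS shutdown, assuming one allocation point, pool,
 * chain, format, root and registered thread (all names illustrative). */
static void shut_down_mps(mps_arena_t arena, mps_ap_t ap, mps_pool_t pool,
                          mps_chain_t chain, mps_fmt_t fmt,
                          mps_root_t root, mps_thr_t thread)
{
    mps_arena_park(arena);      /* ensure no collection is in progress */
    mps_ap_destroy(ap);         /* allocation points before their pool */
    mps_pool_destroy(pool);     /* pools before their format and chain */
    mps_chain_destroy(chain);
    mps_fmt_destroy(fmt);
    mps_root_destroy(root);
    mps_thread_dereg(thread);
    mps_arena_destroy(arena);   /* the arena goes last */
}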
global.c: PoolHasAttr(pool, AttrGC)
The client program called mps_finalize() on a reference that does not belong to an automatically managed pool.
lockix.c: res == 0
lockw3.c: lock->claims == 0
The client program has made a re-entrant call into the MPS. Look at the backtrace to see what it was. Common culprits are signal handlers, assertion handlers, format methods, and stepper functions.
locus.c: chain->activeTraces == TraceSetEMPTY
The client program called mps_chain_destroy(), but there was a garbage collection in progress on that chain. Park the arena before destroying the chain, by calling mps_arena_park().
mpsi.c: SizeIsAligned(size, BufferPool(buf)->alignment)
The client program reserved a block by calling mps_reserve() but neglected to round the size up to the alignment required by the pool’s object format.
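A common fix is to round the size up to the alignment before reserving. A sketch, assuming the alignment is a power of two (as object format alignments are); ALIGNMENT in the usage comment stands for the alignment given to the object format and is illustrative.

#include <stddef.h>

/* Round `size` up to a multiple of `alignment`, which is assumed to be
 * a power of two. */
static size_t align_up(size_t size, size_t alignment)
{
    return (size + alignment - 1) & ~(alignment - 1);
}

/* Usage (schematic):
 *     size_t aligned = align_up(sizeof(struct my_object), ALIGNMENT);
 *     res = mps_reserve(&p, ap, aligned);
 */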
poolams.c: AMS_ALLOCED(seg, i)
The client program tried to fix a reference to a block in an AMS (Automatic Mark and Sweep) pool that died. This may mean that there was a previous collection in which a reference that should have kept the block alive failed to be scanned. Perhaps a formatted object was updated in some way that has a race condition?
poolsnc.c: foundSeg
The client program passed an incorrect frame argument to mps_ap_frame_pop(). This argument must be the result from a previous call to mps_ap_frame_push() on the same allocation point.
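A sketch of the intended pairing, assuming an allocation point ap in an SNC pool:

#include "mps.h"

/* Sketch of matched allocation frame operations: the frame passed to
 * mps_ap_frame_pop() must come from a previous mps_ap_frame_push() on
 * the same allocation point. */
static mps_res_t with_frame(mps_ap_t ap)
{
    mps_frame_t frame;
    mps_res_t res = mps_ap_frame_push(&frame, ap);
    if (res != MPS_RES_OK)
        return res;
    /* ... allocate temporary objects on ap ... */
    return mps_ap_frame_pop(ap, frame);  /* pops back to the matching push */
}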
seg.c: gcseg->buffer == NULL
The client program destroyed a pool without first destroying all the allocation points created on that pool. The allocation points must be destroyed first.
trace.c: ss->rank < RankEXACT
The client program destroyed a pool containing objects registered for finalization, and then continued to run the garbage collector. See Cautions under Finalization, which says, “You must destroy these pools by following the ‘safe tear-down’ procedure described under mps_pool_destroy().”
trace.c: RefSetSub(ScanStateUnfixedSummary(ss), SegSummary(seg))
The client program’s scan method failed to update a reference to an object that moved. See Scanning protocol, which says, “If MPS_FIX2() returns MPS_RES_OK, it may have updated the reference. Make sure that the updated reference is stored back to the region being scanned.”
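A sketch of a scan method that does store the reference back; the obj_t layout and obj_size() helper are hypothetical stand-ins for the client program’s own object format.

#include <stddef.h>
#include "mps.h"

/* Hypothetical object layout and size function, for illustration only. */
typedef struct obj_s { struct obj_s *ref; size_t size; } *obj_t;
static size_t obj_size(obj_t obj) { return obj->size; }

/* Sketch of a scan method that stores the fixed reference back into
 * the object, as the scanning protocol requires. */
static mps_res_t obj_scan(mps_ss_t ss, mps_addr_t base, mps_addr_t limit)
{
    MPS_SCAN_BEGIN(ss) {
        while (base < limit) {
            obj_t obj = base;
            mps_addr_t ref = obj->ref;
            if (MPS_FIX1(ss, ref)) {
                mps_res_t res = MPS_FIX2(ss, &ref);
                if (res != MPS_RES_OK)
                    return res;
                obj->ref = ref;   /* store back the possibly-updated reference */
            }
            base = (char *)base + obj_size(obj);
        }
    } MPS_SCAN_END(ss);
    return MPS_RES_OK;
}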
3.3. Varieties¶
The MPS has three varieties, which have different levels of internal checking and telemetry. The variety can be selected at compile time, by defining one of the following preprocessor constants; a sketch showing how appears at the end of this section. If none is specified, then CONFIG_VAR_HOT is the default.
-
CONFIG_VAR_COOL
¶ The cool variety is intended for development and testing.
All functions check the consistency of their data structures and may assert, including functions on the critical path. Furthermore, in the default ANSI Library, the default assertion handler will terminate the program. See mps_lib_assert_fail_install().
All events are sent to the telemetry stream, including events on the critical path.
-
CONFIG_VAR_HOT
¶ The hot variety is intended for production and deployment.
Some functions check the consistency of their data structures and may assert, namely those not on the critical path. However, in the default ANSI Library, the default assertion handler will not terminate the program. See mps_lib_assert_fail_install().
Some events are sent to the telemetry stream, namely those not on the critical path.
-
CONFIG_VAR_RASH
¶ The rash variety is intended for mature integrations, or for developers who like living dangerously.
No functions check the consistency of their data structures and consequently there are no assertions.
No events are sent to the telemetry stream.
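As an illustration of selecting a variety (a sketch, assuming the MPS is built by compiling the single file mps.c into the program, one common build approach): define the constant before the MPS sources are compiled, or equivalently pass it on the compiler command line.

/* Select the cool variety for a development build.  The constant must
 * be defined before the MPS sources are compiled; passing
 * -DCONFIG_VAR_COOL to the compiler when building mps.c has the same
 * effect. */
#define CONFIG_VAR_COOL
#include "mps.c"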