Memprof-limits — Memory limits, allocation limits, and thread cancellation for OCaml

The entry point of this package is the module Memprof_limits.

This package lets you interrupt tasks in a thread-safe and resource-safe way, when a resource limit is reached or a cancellation token is set. A task is an isolated piece of computation running within a thread.

Global memory limits interrupt a task when the major heap exceeds a certain size. Allocation limits interrupt a task when a certain number of words have been allocated, which is a measure of the quantity of work done (e.g. a portable and reliable alternative to elapsed time). Token limits interrupt an allocating task when an arbitrary token is set, letting you make your own notion of interruption compatible with the resource-safety mechanisms of this library.

The implementation uses OCaml's Memprof statistical profiler engine with a low sampling rate that does not affect performance.

Global memory limits

Global memory limits let you bound the memory consumption of specific tasks in your program, in terms of memory used by the whole program (total major heap size). If several concurrent tasks run under a global memory limit, the task most likely to be interrupted first is the one that allocates the most frequently. After the memory limit has been exceeded, the probability that a task is still not interrupted after it allocates a further 8.8 MiB is less than 10⁻⁵⁰.

Once the limit has been exceeded, Memprof-limits keeps interrupting all tasks running under a global memory limit, until either the program shuts down gracefully or an attempt to free up space with Gc.compact () succeeds. The latter is useful when the allocated memory does not escape the task, so that terminating the task is likely to free up space, for instance when data is acquired and validated by a task before being published to the rest of the program.

Example

A worker task allocates 300M words, all simultaneously live, causing a peak memory consumption of 2.3 GiB on 64-bit (without counting the GC overhead). The monitor interrupts the task when the major heap reaches 1 GiB (including the GC overhead).

(* worker *)
let f () =
  let rec alloc n x =
    if n = 0 then x else alloc (n-1) (()::x)
  in
  (* allocate 2.3 GiB *)
  alloc 100_000_000 []

(* monitor *)
let g () =
  match Memprof_limits.limit_global_memory f with
  | Ok x -> print_endline "success"
  | Error _ -> print_endline "out of memory" ; Gc.compact ()

(* main *)
let () =
  Memprof_limits.start_memprof_limits () ;
  (* 1 GiB memory limit *)
  Memprof_limits.set_global_memory_limit (1024 * 1024) ;
  g ()

Allocation limits

Allocation limits let you bound the execution of specific tasks in your program, measured in number of allocations (thousands of words, or kw). Allocation limits count allocations but not deallocations. They are therefore a measure of the work done, which can be more suitable (consistent, portable) than wall-clock or CPU time.

Guarantees on the number of allocations that can be performed without being interrupted, for a given limit and with good probability (e.g. a chance of spurious interruption below 10⁻⁵⁰), are given by a statistical analysis.

Graph entitled 'Allocation limit for a target safe
allocation'—if the graph is missing it is because odoc does not
support packaging images yet. If you are online please visit
https://guillaume.munch.name/software/ocaml/memprof-limits/index.html,
otherwise please compile the documentation using `make doc`.

Example

A worker task allocates 300M words in total, only 3 kw of which are live simultaneously. This comes close to the allocation limit of 330M words (330,000 kw) set by the monitor. The probability that the worker is interrupted is nevertheless less than 10⁻⁵⁰, and the computation completes successfully, in about a second on a contemporary laptop computer.

(* worker *)
let f () =
  let rec alloc n x =
    if n = 0 then x else alloc (n-1) (()::x)
  in
  (* allocate 300'000 kw *)
  for i = 0 to 100_000 do ignore (alloc 1_000 []) done

(* monitor *)
let g () =
  match Memprof_limits.limit_allocations ~limit:330_000L f with
  | Ok ((), n) -> Printf.printf "success (est. alloc. %#Lu kw)\n" n
  | Error _ -> print_endline "out of fuel"

(* main *)
let () =
  Memprof_limits.start_memprof_limits () ;
  g ()

Discussion: precise vs. statistical limits

The two examples above feature precise allocation counts whereas the number of allocations is estimated statistically by Memprof-limits. One can wonder whether the statistical character changes the applicability.

For bounding resource consumption, the statistical nature is irrelevant: one should take a very large safety margin anyway for the allocation limit, greater than what is required for a good accuracy according to the statistical analysis.

In the context of timing out proof tactics, one should be wary of landing in the zone of uncertainty, where the success of a computation is due to chance. Allocation limits return the work estimate in case of success, which can be used to determine whether it is too close to the limit to be reliable. (For instance, depending on the use case, the limit could be raised in order to make the result reliable.) In practice, the uncertainty zone is fairly thin for large allocation limits, and so should not be encountered very often.
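
For instance, a monitor can compare the returned estimate with the limit and treat a success that comes too close to it as unreliable. The following is a minimal sketch: the task f, the limit value and the 90% threshold are illustrative choices, not prescribed by the library.

(* Hedged sketch: flag successes whose estimated allocation count comes
   within 90% of the limit; the threshold is an illustrative choice. *)
let limit = 330_000L

let run_with_margin f =
  match Memprof_limits.limit_allocations ~limit f with
  | Error _ -> print_endline "out of fuel"
  | Ok (_, n) ->
    if Int64.compare n (Int64.div (Int64.mul limit 9L) 10L) > 0
    then print_endline "success, but too close to the limit to be reliable"
    else Printf.printf "success (est. alloc. %#Lu kw)\n" n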

In some sense, this zone of uncertainty is already there with precise limits: indeed, allocation patterns can change through compiler optimisation settings and compiler version. Therefore, counting allocations more precisely would not entirely eliminate this issue.

Token limits

Token limits let you interrupt tasks arbitrarily by setting a Memprof_limits.Token. A token is a flag that can be set atomically, and never unset. Note that the interruption is reliable for CPU-bound, allocating tasks, but can be less reliable for IO-bound tasks. (Refer to the statistical analysis for the probability of occurrence of a Memprof-limits callback in relation to the allocation rate.)

Example

Five worker threads race to push their next number in an increasing sequence. When the user presses Ctrl+C, all worker threads stop and the winner is the one that reached the highest number. (Warning: unbounded memory consumption if not interrupted.)

let l = Atomic.make []

(* worker *)
let f i =
  let rec push j =
    let oldl = Atomic.get l in
    let newl = (i,j) :: oldl in
    if not (Atomic.compare_and_set l oldl newl)
    then push j else push (j + 1)
  in
  push 0

(* monitor *)
let h () =
  let token = Memprof_limits.Token.create () in
  let handler _ = Memprof_limits.Token.set token in
  Sys.(set_signal sigint (Signal_handle handler)) ;
  let g i =
    Memprof_limits.limit_with_token ~token (fun () -> f i)
  in
  let threads = List.init 5 (fun i -> Thread.create g i) in
  List.iter Thread.join threads ;
  print_endline "All threads stopped." ;
  let max a b = if snd b >= snd a then b else a in
  let (i,j) = List.fold_left max (-1, -1) (Atomic.get l) in
  Printf.printf "max = %d, winning thread = %d.\n" j i

let () =
  Memprof_limits.start_memprof_limits () ;
  h ()

Use case

Token limits can be used to implement thread cancellation (interrupting one or several tasks from an arbitrary thread).
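
For instance, a cancellable background task can be represented as a token paired with a thread, with cancellation performed by setting the token from any other thread. The following is a minimal sketch; the names handle, spawn, cancel and join are illustrative and not part of the library.

(* Hedged sketch of a cancellable task handle; these names are
   illustrative, not part of Memprof-limits. *)
type handle = { token : Memprof_limits.Token.t ; thread : Thread.t }

let spawn f =
  let token = Memprof_limits.Token.create () in
  let thread =
    Thread.create (fun () ->
      ignore (Memprof_limits.limit_with_token ~token f)) ()
  in
  { token ; thread }

let cancel handle = Memprof_limits.Token.set handle.token

let join handle = Thread.join handle.thread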

Token limits also make it possible to interrupt tasks when a signal arrives, by setting a token inside the signal handler. Raising an exception directly from the signal handler is possible, but setting a token can be preferable: Memprof-limits makes sure that the interrupt does not arrive at an unexpected moment, for instance during resource acquisition or destruction (using the interrupt-safety features described next), or after the task has finished.

In addition, when using Memprof-limits, one should avoid raising exceptions from signal handlers altogether (including Stdlib.Sys.Break raised on Ctrl+C by the standard library's Stdlib.Sys.catch_break) and from other asynchronous callbacks (i.e. all asynchronous exceptions not managed by Memprof-limits). Indeed, unexpected exceptions can damage the internal state of Memprof-limits. Use token limits instead, e.g. to reimplement Ctrl+C differently.

Caveat: Note that on Unix, when an OCaml signal handler is set and a signal later arrives during a system call, the system call either restarts (and thus is not interruptible by Memprof-limits), or it raises the exception Unix.Unix_error(EINTR, _, _) (which must therefore be handled). Which system calls in the standard library are non-interruptible and which ones raise Unix_error is currently undocumented.
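
A hedged sketch of one way a task can cope with this: retry the call when it fails with EINTR, but stop retrying once Memprof-limits reports that the current task has been interrupted (assuming Memprof_limits.is_interrupted, used in the Safe_mutex example below, reports exactly that). The names retry_eintr and read_some are illustrative.

(* Hedged sketch: retry a system call on EINTR, giving up once the task
   has been interrupted; the exception used to give up is an arbitrary
   illustrative choice. *)
let rec retry_eintr f =
  match f () with
  | result -> result
  | exception Unix.Unix_error (Unix.EINTR, _, _) ->
    if Memprof_limits.is_interrupted () then raise Exit
    else retry_eintr f

let read_some fd buf =
  retry_eintr (fun () -> Unix.read fd buf 0 (Bytes.length buf))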

Interrupt- and resource-safety

Careless raising of asynchronous exceptions can leave the world in an invalid state. Memprof-limits is provided with resource-management features to help maintain state invariants and with guidance for programming tasks with interrupt-safety in mind (safely recovering from interrupts).

The main strategy for achieving interrupt-safety is isolation: the world state must only be accessed through interrupt-safe abstractions. Memprof-limits lets you build such abstractions with the combinator Memprof_limits.Masking.with_resource, for instance by modifying the external state in transactional style. See the documentation of the module Memprof_limits.Masking, which explains the interrupt-safety discipline in more detail.

Opening the module Memprof_limits.Resource_bind defines a binder let& which provides a RAII-style notation for initialising a resource that is cleaned up at the end of its scope.
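
The following is a minimal sketch of the notation for a file handle, assuming with_resource is applied as in the Safe_mutex example below (resource seed as positional argument, ~acquire and ~release supplied, and ~scope filled in by let&); the file name is illustrative.

(* Hedged sketch: an input channel managed with with_resource and let&. *)
open Memprof_limits.Resource_bind

let with_file path =
  Memprof_limits.Masking.with_resource path
    ~acquire:open_in ~release:close_in

let first_line () =
  let& ic = with_file "data.txt" in
  input_line ic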

Example

The following module implements an interrupt-safe mutex: a mutex whose acquisition fails if the state it protects might be corrupted due to an interrupt arriving while the state was being modified. (This is inspired by the unwind-safe mutex from the Rust standard library.)

module type Safe_mutex = sig
  type t
  val create : unit -> t
  val with_safe_lock : t -> scope:(unit -> 'a) -> 'a
end

module Safe_mutex : Safe_mutex = struct
  open Memprof_limits.Resource_bind

  type t = { mutex : Mutex.t ; mutable valid : bool }

  let create () = { mutex = Mutex.create () ; valid = true }

  let with_lock mut =
    Memprof_limits.Masking.with_resource mut
      ~acquire:(fun mut -> Mutex.lock mut ; mut) ~release:Mutex.unlock

  let with_guard m =
    Memprof_limits.Masking.with_resource m
      ~acquire:(fun m -> if not m.valid then raise Exit ; m)
      ~release:(fun m -> if Memprof_limits.is_interrupted () then m.valid <- false)

  let with_safe_lock m ~scope =
    let& _ = with_lock m.mutex in
    let& _ = with_guard m in
    scope ()
end

Using this interrupt-safe mutex, the following concurrent implementation of some Myth of Sisyphus, in which Sisyphus keeps pushing elements onto a stack until memory is exhausted and he has to start again, is guaranteed to never deadlock, fail prematurely, or let an invalid state be accessed.

open Memprof_limits.Resource_bind

let rec sisyphus stack mutex () =
  begin
    let& _ = Safe_mutex.with_safe_lock mutex in
    let n = Stack.length stack in
    Stack.push n stack
  end ;
  sisyphus stack mutex ()

let thread (stack, mutex) =
  match Memprof_limits.limit_global_memory (sisyphus stack mutex) with
  | Ok x -> assert false
  | Error _ -> ()
  | exception Exit -> ()

let rec loop () =
  let threads =
    let stack = Stack.create () in
    let mutex = Safe_mutex.create () in
    List.init 2 (fun _ -> Thread.create thread (stack, mutex))
  in
  List.iter Thread.join threads ;
  Gc.compact () ;  (* Release memory, reset the global memory limit *)
  print_endline "start again" ;
  loop ()

let () =
  Memprof_limits.start_memprof_limits () ;
  Memprof_limits.set_global_memory_limit (50 * 1024) ; (* 50 MiB *)
  loop ()

Profiling

Memprof-limits interrupts tasks by raising exceptions from Stdlib.Gc.Memprof callbacks. To profile an application with Memprof while it already uses Memprof through Memprof-limits, use the compatible reimplementation of Memprof provided by Memprof_limits.Memprof.
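
The following is a hedged sketch, assuming Memprof_limits.Memprof mirrors the Stdlib.Gc.Memprof interface (start, stop, null_tracker and the tracker record), which is what "compatible reimplementation" suggests; the sampling rate and the counting tracker are illustrative.

(* Hedged sketch: count sampled allocation events while the program also
   runs under Memprof-limits; assumes the interface of Stdlib.Gc.Memprof. *)
let samples = ref 0

let tracker =
  let open Memprof_limits.Memprof in
  { null_tracker with
    alloc_minor = (fun _ -> incr samples ; None) ;
    alloc_major = (fun _ -> incr samples ; None) }

let profile f =
  Memprof_limits.Memprof.start ~sampling_rate:1e-4 tracker ;
  Fun.protect ~finally:Memprof_limits.Memprof.stop f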

Limitations

Asynchronous exceptions: The internal state of Memprof-limits is protected against exceptions arising from Memprof callbacks, which includes the exception Memprof-limits itself raises. However, due to limited OCaml support, race conditions involving asynchronous exceptions of other origins (e.g. Stdlib.Sys.Break from Stdlib.Sys.catch_break) can leave Memprof-limits in an invalid state and should be considered fatal. The features described in the section Interrupt- and resource-safety have the same limitation with respect to asynchronous exceptions not managed by Memprof-limits.

Lwt/Async compatibility: To my knowledge, Lwt/Async promises do not support interruption by asynchronous exceptions. Using Memprof-limits with Lwt/Async can corrupt the internal state of the latter. However, Memprof-limits can be used inside “detached threads”, which are suitable for CPU-intensive computations (see Lwt_preemptive, Mwt).
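
For instance, a CPU-intensive computation could run under an allocation limit inside a detached thread; the following is a minimal sketch, where heavy_computation and the limit value are illustrative.

(* Hedged sketch: an allocation-limited computation inside an Lwt
   "detached thread"; heavy_computation and the limit are illustrative. *)
let heavy_computation () =
  List.init 1_000_000 (fun i -> i * i) |> List.length

let limited () =
  Lwt_preemptive.detach
    (fun () ->
       Memprof_limits.limit_allocations ~limit:100_000L heavy_computation)
    ()

let () =
  Memprof_limits.start_memprof_limits () ;
  match Lwt_main.run (limited ()) with
  | Ok (n, _) -> Printf.printf "done (%d)\n" n
  | Error _ -> print_endline "out of fuel"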
