Counting immutable beans: reference counting optimized for purely functional programming

@inproceedings{Ullrich2019CountingIB,
  title={Counting immutable beans: reference counting optimized for purely functional programming},
  author={Sebastian Ullrich and Leonardo Mendonça de Moura},
  booktitle={Proceedings of the 31st Symposium on Implementation and Application of Functional Languages},
  year={2019}
}
  • Sebastian Ullrich, Leonardo Mendonça de Moura
  • Published 1 August 2019
  • Computer Science
  • Proceedings of the 31st Symposium on Implementation and Application of Functional Languages
Most functional languages rely on some kind of garbage collection for automatic memory management. They usually eschew reference counting in favor of a tracing garbage collector, which has less bookkeeping overhead at runtime. On the other hand, having an exact reference count of each value can enable optimizations such as destructive updates. We explore these optimization opportunities in the context of an eager, purely functional programming language. We propose a new mechanism for… 
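
The optimization mentioned at the end of the abstract is easiest to see in code. The following is a minimal sketch in C, with illustrative names (Cell, cons, map_inc) that are not taken from the paper: a list map that updates a cell in place when its exact reference count shows the reference is unique, and copies the cell otherwise.

  #include <stdlib.h>

  /* A heap cell carrying an exact (non-deferred) reference count. */
  typedef struct Cell {
      int          rc;
      int          head;
      struct Cell *tail;   /* NULL terminates the list */
  } Cell;

  static Cell *cons(int head, Cell *tail) {
      Cell *c = malloc(sizeof *c);
      c->rc = 1; c->head = head; c->tail = tail;
      return c;
  }

  /* "map (+1)" over a list; consumes the caller's reference to xs. */
  static Cell *map_inc(Cell *xs) {
      if (xs == NULL) return NULL;
      if (xs->rc == 1) {
          /* Unique reference: update destructively, allocating nothing. */
          xs->head += 1;
          xs->tail  = map_inc(xs->tail);        /* xs's tail reference is consumed */
          return xs;
      }
      /* Shared: leave xs intact for its other owners and build a copy. */
      xs->rc -= 1;                              /* give up our reference to xs */
      if (xs->tail != NULL) xs->tail->rc += 1;  /* hand the call its own tail reference */
      return cons(xs->head + 1, map_inc(xs->tail));
  }

When every count along the spine is 1 the traversal allocates nothing; a tracing collector keeps no per-object count and so has nothing to consult at this point.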

Citations

Perceus: garbage free reference counting with reuse

TLDR
This work introduces Perceus, an algorithm for precise reference counting with reuse and specialization, gives a novel formalization of reference counting in a linear resource calculus, and proves that Perceus is sound and garbage free.
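
The phrase "reuse and specialization" can be pictured with another hedged C sketch; the names are illustrative and this is not Perceus itself. A generic drop sits next to a specialized variant that, on a unique reference, hands the dead cell directly to the next constructor instead of returning it to the allocator.

  #include <stdlib.h>

  typedef struct Pair {
      int          rc;
      struct Pair *fst, *snd;
  } Pair;

  /* Generic drop: decrement, and free recursively once the count hits 0. */
  static void drop(Pair *o) {
      if (o == NULL) return;
      if (--o->rc == 0) {
          drop(o->fst);
          drop(o->snd);
          free(o);
      }
  }

  /* Specialized drop for a Pair that a following allocation wants to reuse:
   * if the reference is unique, drop only the fields and return the cell
   * itself as a "reuse token"; if shared, decrement and allocate afresh. */
  static Pair *drop_reuse_pair(Pair *o) {
      if (o->rc == 1) {
          drop(o->fst);
          drop(o->snd);
          return o;                  /* caller constructs the new pair in place */
      }
      o->rc -= 1;                    /* other owners keep the fields alive */
      return malloc(sizeof(Pair));
  }

A compiler that inserts precise dup/drop instructions can emit the specialized form whenever the dropped value and the next allocation have the same shape, which is what lets functional updates run in place.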

Reference counting with frame limited reuse

TLDR
This work presents a novel drop-guided reuse algorithm that is simpler and more robust than previous approaches, and generalizes the linear resource calculus to precisely characterize garbage-free and frame-limited evaluations.

Implementation Strategies for Mutable Value Semantics

TLDR
Swift, a programming language based on that discipline, is studied through the lens of a core language that strips some of Swift’s features to focus on the semantics of its value types, thereby enabling numerous off-the-shelf compiler optimizations.

Best-Effort Lazy Evaluation for Python Software Built on APIs

TLDR
Cunctator is presented, a framework that extends the lazy evaluation offered by such APIs to more of the Python language, allowing intermediate values from DSLs like NumPy or Pandas to flow back to the host Python code without triggering evaluation, which exposes more opportunities for optimization and allows larger computation graphs to be built.

Mimalloc: Free List Sharding in Action

TLDR
It is shown that mimalloc has superior performance to modern commercial memory allocators, including tcmalloc and jemalloc, with speed improvements of 7% and 14%, respectively, on Redis, and consistently outperforms them over a wide range of sequential and concurrent benchmarks.
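
The "free list sharding" of the title is a data-structure idea that a short sketch can convey. The code below is an illustration of the concept only, not mimalloc's actual structures: each small page keeps its own free list, so a size class is served page by page and consecutive allocations land near each other.

  #include <stddef.h>

  typedef struct Block { struct Block *next; } Block;

  typedef struct Page {
      Block       *free;       /* this page's own (sharded) free list */
      struct Page *next;       /* other pages of the same size class  */
  } Page;

  typedef struct Heap {
      Page *pages[8];          /* one page list per small size class */
  } Heap;

  /* Pop from the first page with a free block; a real allocator would
   * refill with a fresh page instead of giving up. */
  static void *heap_alloc(Heap *h, size_t size_class) {
      for (Page *p = h->pages[size_class]; p != NULL; p = p->next) {
          if (p->free != NULL) {
              Block *b = p->free;
              p->free  = b->next;
              return b;
          }
      }
      return NULL;             /* sketch: no page-refill path shown */
  }

  /* Freed blocks go back onto their owning page's list, keeping each shard
   * short and local. */
  static void page_free(Page *p, void *block) {
      Block *b = block;
      b->next  = p->free;
      p->free  = b;
  }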

Sealing pointer-based optimizations behind pure functions

TLDR
This work shows how to use dependent types to seal the necessary pointer-address manipulations behind pure functional interfaces while requiring only a negligible amount of additional trust.

Effect handlers, evidently

TLDR
It is argued that one can still express all important effects while improving reasoning about effect handlers, and full soundness and coherence of the translation into plain lambda calculus are proved.

Certifying derivation of state machines from coroutines

TLDR
This work presents a compiler-based technique allowing the best of both worlds: protocols are coded in a natural high-level form, using freer monads to represent nested coroutines, and then compiled automatically to lower-level code with explicit state.

The Lean 4 Theorem Prover and Programming Language

TLDR
Lean 4 is a reimplementation of the Lean interactive theorem prover (ITP) in Lean itself and contains many new features, addressing significant performance problems reported by the growing user base.

The lean mathematical library

This paper describes mathlib, a community-driven effort to build a unified library of mathematics formalized in the Lean proof assistant. Among proof assistant libraries, it is distinguished by its…

References

Showing 1-10 of 32 references

Dynamic atomicity: optimizing swift memory management

Swift is a modern multi-paradigm programming language with an extensive developer community and open source ecosystem. Swift 3's memory management strategy is based on Automatic Reference Counting…

Biased reference counting: minimizing atomic operations in garbage collection

TLDR
A novel algorithm called Biased Reference Counting (BRC) is proposed, which significantly improves the performance of non-deferred RC, and is implemented in the Swift programming language runtime and evaluated with client and server programs.
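
The mechanism named in this TLDR splits the count in two. Below is a hedged sketch using pthreads, with illustrative names, that omits BRC's merge and queueing protocol and is not the Swift runtime's implementation: the owning thread adjusts a plain counter, while every other thread uses an atomic one.

  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdlib.h>

  typedef struct {
      pthread_t   owner;    /* thread this header is biased toward      */
      long        biased;   /* owner-only count: ordinary load/store    */
      atomic_long shared;   /* other threads: atomic read-modify-write  */
  } BrcHeader;

  static void brc_retain(BrcHeader *h) {
      if (pthread_equal(pthread_self(), h->owner))
          h->biased += 1;                      /* fast path, no atomics */
      else
          atomic_fetch_add(&h->shared, 1);     /* slow path */
  }

  static void brc_release(BrcHeader *h) {
      if (pthread_equal(pthread_self(), h->owner)) {
          h->biased -= 1;
          if (h->biased == 0 && atomic_load(&h->shared) == 0)
              free(h);                         /* both counts observed at zero */
      } else if (atomic_fetch_sub(&h->shared, 1) == 1) {
          /* Real BRC queues the object back to its owner thread here so the
           * two counts can be merged safely; the sketch stops at this point. */
      }
  }

The common case of objects touched by a single thread then avoids atomic read-modify-write instructions entirely, which is where the speedups over plain atomic counting come from.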

Deriving Residual Reference Count Garbage Collectors

TLDR
This work presents a strategy to derive an efficient reference count garbage collector for any applicative program by modifying it only at the source-code level, and to introduce run-time detected selective update on recursive data structures.

Code Generation Using a Formal Model of Reference Counting

TLDR
The main motivation for the model of reference counting is to soundly translate programs from a high-level functional language into efficient code with a compact footprint in a small subset of a low-level imperative language like C.

The space cost of lazy reference counting

TLDR
If each reference count operation is constrained to take constant time, then the overall space requirements can be increased by a factor of Ω(R) in the worst case, where R is the ratio between the sizes of the largest and smallest allocated objects.

Combining region inference and garbage collection

TLDR
Measurements show that for a variety of benchmark programs, code generated by the compiler is as efficient, with respect to both execution time and memory usage, as programs compiled with Standard ML of New Jersey, another state-of-the-art Standard ML compiler.

Whole-program compilation in MLton

TLDR
This talk will describe MLton's approach to whole-program compilation, covering the optimizations and the intermediate languages, as well as some of the engineering challenges that were overcome to make it feasible to use MLton on programs with over one hundred thousand lines.

The rust language

TLDR
Rust's static type system is safe and expressive and provides strong guarantees about isolation, concurrency, and memory safety, and Rust's type system and runtime guarantee the absence of data races, buffer overflows, stack overflows and accesses to uninitialized or deallocated memory.

Minimizing reference count updating with deferred and anchored pointers for functional data structures

Reference counting can be an attractive form of dynamic storage management. It recovers storage promptly and (with a garbage stack instead of a free list) it can be made "real-time"—i.e., all…

Shifting garbage collection overhead to compile time

This paper discusses techniques which enable automatic storage reclamation overhead to be partially shifted to compile time. The paper assumes a transaction-oriented collection scheme, as proposed by…