SpiderMonkey Garbage Collector

Original link: https://firefox-source-docs.mozilla.org/js/gc.html

## SpiderMonkey Garbage Collector Summary

The SpiderMonkey garbage collector (GC) manages memory for JavaScript data, aiming for efficient allocation and deallocation. It is a **hybrid tracing garbage collector** with several key features:

* **Precise:** It tracks allocation layout and stack roots exactly, avoiding unnecessary retention of garbage. This is enforced through C++ wrapper classes and a static analysis.
* **Incremental:** To minimize pause times, collection is broken into small slices, although some ‘stop the world’ atomic operations remain.
* **Generational:** Using a ‘nursery’ (young generation) and a ‘tenured heap’ (old generation), it prioritizes collecting short-lived objects.
* **Concurrent and parallel:** Limited concurrency is used for finalization and memory management, and work within GC slices runs in parallel.
* **Compacting:** It remedies external fragmentation by rearranging memory, though compaction happens infrequently because it is not incremental.
* **Partitioned heap:** ‘Zones’ (independent heaps) support incremental collection and efficient memory management.

Together, these features provide a robust, high-performance memory management system for JavaScript execution in SpiderMonkey.

This Hacker News discussion centers on the SpiderMonkey garbage collector (used in Firefox) and how it compares with other garbage collection systems, particularly Go's. Users note that SpiderMonkey's feature set resembles V8's (Chrome's JavaScript engine), reflecting the browsers' competition on performance. Go's garbage collector is incremental, concurrent, and parallel, but *not* generational or compacting. A key difference is that Go relies less on pointers and favors stack allocation, which means fewer heap inspections during garbage collection, simplifying the process.

The conversation goes on to point out that stack allocation is not unique to Go; it also exists in languages such as C#, Lisp, and Swift. It also highlights that modern Java implementations use a similar optimization ("escape analysis"). Finally, one commenter cautions against assuming Go's approach is superior, since it may well evolve under the influence of recent advances in .NET and Java.

Original Text

The SpiderMonkey garbage collector is responsible for allocating memory representing JavaScript data structures and deallocating them when they are no longer in use. It aims to collect as much data as possible in as little time as possible. As well as JavaScript data it is also used to allocate some internal SpiderMonkey data structures.

The garbage collector is a hybrid tracing collector, and has the following features:

Precise collection

The GC is ‘precise’ in that it knows the layout of allocations (which is used to determine reachable children) and also the location of all stack roots. This means it does not need to resort to conservative techniques that may cause garbage to be retained unnecessarily.

Knowledge of the stack is achieved with C++ wrapper classes that must be used for stack roots and handles (pointers) to them. This is enforced by the SpiderMonkey API (which operates in terms of these types) and checked by a static analysis that reports places where unrooted GC pointers can be present when a GC could occur.

For details of stack rooting, see: https://github.com/mozilla-spidermonkey/spidermonkey-embedding-examples/blob/esr78/docs/GC%20Rooting%20Guide.md

We also have a static analysis for detecting errors in rooting. It can be run locally or in CI.
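
As a minimal sketch of what this looks like in embedding code, the example below uses the real `JS::Rooted`/`JS::Handle` wrappers described in the rooting guide linked above; the surrounding function is hypothetical, and exact header locations vary between SpiderMonkey versions:

```cpp
// Minimal sketch of SpiderMonkey's C++ rooting wrappers; the function
// itself is hypothetical, but JS::Rooted/JS::Handle are the real API.
#include "jsapi.h"
#include "js/RootingAPI.h"

bool SetExampleProperty(JSContext* cx) {
  // Rooted<T> registers this stack slot with the context, so the precise
  // GC can find (and, for moving collections, update) the pointer.
  JS::Rooted<JSObject*> obj(cx, JS_NewPlainObject(cx));
  if (!obj) {
    return false;
  }
  // APIs take Handle<T> (an indirect pointer to a Rooted<T>) rather than
  // bare GC pointers; the static analysis flags unrooted uses.
  JS::RootedValue val(cx, JS::Int32Value(42));
  return JS_SetProperty(cx, obj, "answer", val);
}
```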

Incremental collection

‘Stop the world’ collectors run a whole collection in one go, which can result in unacceptable pauses for users. An incremental collector breaks its execution into a number of small slices, reducing user impact.

As far as possible the SpiderMonkey collector runs incrementally. Not all parts of a collection can be performed incrementally however as there are some operations that need to complete atomically with respect to the rest of the program.

Currently, most of the collection is performed incrementally. Root marking, compacting, and an initial part of sweeping are not.
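
To illustrate the general shape of incremental marking (a simplified sketch, not SpiderMonkey's actual implementation), a slice traces grey objects until a time budget expires and then yields back to the running program:

```cpp
// Illustrative sketch of an incremental mark slice: tracing proceeds
// until a per-slice time budget runs out, bounding the pause.
#include <chrono>
#include <deque>
#include <vector>

struct Cell {
  bool marked = false;
  std::vector<Cell*> children;
};

std::deque<Cell*> markStack;  // grey objects: marked, children not yet traced

// Returns true when marking has finished; false if the budget ran out
// and another slice is needed.
bool MarkSlice(std::chrono::milliseconds budget) {
  auto deadline = std::chrono::steady_clock::now() + budget;
  while (!markStack.empty()) {
    Cell* cell = markStack.front();
    markStack.pop_front();
    for (Cell* child : cell->children) {
      if (!child->marked) {
        child->marked = true;
        markStack.push_back(child);  // newly grey
      }
    }
    if (std::chrono::steady_clock::now() >= deadline) {
      return false;  // yield here; the program resumes between slices
    }
  }
  return true;  // marking complete
}
```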

Generational collection

Most real world allocations either die very quickly or live for a long time. This suggests an approach to collection where allocations are moved between ‘generations’ (separate heaps) depending on how long they have survived. Generations containing young allocations are fast to collect and can be collected more frequently; older generations are collected less often.

The SpiderMonkey collector implements a single young generation (the nursery) and a single old generation (the tenured heap). Collecting the nursery is known as a minor GC as opposed to a major GC that collects the whole heap (including the nursery).
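
Much of the benefit comes from how cheap nursery allocation can be. The sketch below (illustrative only, with an arbitrary size; not SpiderMonkey's implementation) shows the bump-pointer scheme a nursery typically uses:

```cpp
// Illustrative bump-pointer nursery: allocation is a pointer increment,
// and a minor GC evacuates survivors to the tenured heap.
#include <cstddef>
#include <cstdint>

class Nursery {
  static constexpr size_t kSize = 1 << 20;  // 1 MiB, arbitrary for the sketch
  uint8_t buffer_[kSize];
  size_t used_ = 0;

 public:
  // Real allocators also handle cell alignment; omitted here for brevity.
  void* Allocate(size_t bytes) {
    if (used_ + bytes > kSize) {
      return nullptr;  // nursery full: trigger a minor GC, then retry
    }
    void* cell = buffer_ + used_;
    used_ += bytes;
    return cell;
  }

  // After a minor GC has evacuated survivors to the tenured heap, the
  // whole nursery is reused by resetting the bump pointer.
  void Reset() { used_ = 0; }
};
```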

Concurrent collection

Many systems have more than one CPU and therefore can benefit from offloading GC work to another core. In GC terms ‘concurrent’ usually refers to GC work happening while the main program continues to run.

The SpiderMonkey collector currently only uses concurrency in limited phases.

This includes most finalization work (there are some restrictions as not all finalization code can tolerate this) and some other aspects such as allocating and decommitting blocks of memory.
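
As a rough sketch of the pattern (illustrative only; SpiderMonkey's background finalization uses its own helper-thread infrastructure), dead cells whose finalizers can tolerate it are handed off to another thread while the program continues:

```cpp
// Illustrative sketch of off-thread finalization, not SpiderMonkey code.
#include <thread>
#include <utility>
#include <vector>

struct Cell {
  // Stand-in finalizer, e.g. releasing memory or external resources.
  void Finalize() { /* release resources owned by this cell */ }
};

// Hand a batch of dead cells to a helper thread; the main program keeps
// running. Only finalizers that cannot observe or mutate state the main
// thread uses are safe to run this way (hence the restrictions above).
void FinalizeInBackground(std::vector<Cell*> dead) {
  std::thread([dead = std::move(dead)]() {
    for (Cell* cell : dead) {
      cell->Finalize();
    }
  }).detach();
}
```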

Performing marking work concurrently is currently being investigated.

Parallel collection

In GC terms ‘parallel’ usually means work performed in parallel while the collector is running, as opposed to the main program itself. The SpiderMonkey collector performs work within GC slices in parallel wherever possible.
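
The distinction from concurrency is that parallel work still happens inside the pause, spread across threads. A simplified sketch (not SpiderMonkey's task system) of splitting a slice's work across workers and joining before the slice ends:

```cpp
// Illustrative sketch: work within one GC slice is split across threads
// and joined before the slice ends, so parallelism stays inside the pause.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

void ProcessRange(std::vector<int>& work, size_t begin, size_t end) {
  for (size_t i = begin; i < end; ++i) {
    work[i] *= 2;  // stand-in for per-cell GC work
  }
}

void ParallelSlice(std::vector<int>& work, size_t numThreads) {
  std::vector<std::thread> workers;
  size_t chunk = (work.size() + numThreads - 1) / numThreads;
  for (size_t t = 0; t < numThreads; ++t) {
    size_t begin = t * chunk;
    size_t end = std::min(begin + chunk, work.size());
    if (begin >= end) break;
    workers.emplace_back(ProcessRange, std::ref(work), begin, end);
  }
  for (auto& w : workers) {
    w.join();  // the slice ends only when all workers finish
  }
}
```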

Compacting collection

The collector allocates data with the same type and size in ‘arenas’ (often known as slabs). After many allocations have died this can leave many arenas containing free space (external fragmentation). Compacting remedies this by moving allocations between arenas to free up as much memory as possible.

Compacting involves tracing the entire heap to update pointers to moved data. It is not incremental, so it happens only rarely or in response to memory pressure notifications.
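
As a simplified illustration (not SpiderMonkey's implementation), compaction over same-size cells can be pictured as copying live cells into free slots elsewhere while recording forwarding addresses for the later pointer-update trace:

```cpp
// Illustrative sketch of compacting same-size cells between two arenas:
// live cells move into free slots, and a forwarding table records each
// cell's new address so a whole-heap trace can update pointers to it.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Cell {
  bool live = false;
  uint64_t payload = 0;
};

using ForwardingMap = std::unordered_map<const Cell*, Cell*>;

// Copies live cells from `from` into free slots of `to`, emptying `from`.
ForwardingMap Compact(std::vector<Cell>& from, std::vector<Cell>& to) {
  ForwardingMap forwarding;
  size_t dst = 0;
  for (Cell& cell : from) {
    if (!cell.live) continue;                        // already free
    while (dst < to.size() && to[dst].live) ++dst;   // next free slot
    if (dst == to.size()) break;                     // destination full
    to[dst] = cell;                                  // move the allocation
    forwarding[&cell] = &to[dst];                    // old -> new address
    cell.live = false;                               // source slot now free
  }
  return forwarding;
}
```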

Partitioned heap

The collector has the concept of ‘zones’: separate heaps that can be collected independently. Objects in different zones can refer to each other, however.

Zones are also used to help incrementalize parts of the collection. For example, compacting is not fully incremental but can be performed one zone at a time.
