Since object shapes store the capacity of an object, we no longer
need the numiv field on RObjects. This gives us one extra slot which
we can use to give embedded objects one more instance variable (for a
total of 3 ivs). This commit removes the concept of numiv from RObject.
This commit adds a `capacity` field to shapes, and adds shape
transitions whenever an object's capacity changes. Objects allocated
out of a bigger size pool will also transition from the root shape to
the shape with the correct capacity for their size pool at allocation
time.
This commit will allow us to remove numiv from objects completely, and
also means we can guarantee that if two objects share a shape, their
IVs are in the same positions (an embedded and an extended object cannot
share shapes). This will enable us to implement ivar sets in YJIT using
object shapes.
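As a rough illustration of the transition rules described above, here is a
toy Ruby model (not the C implementation; the doubling rule and IDs are
illustrative):
```ruby
# Toy model of a shape tree with capacity transitions (illustrative only).
Shape = Struct.new(:id, :capacity, :ivs, :edges)

ROOT = Shape.new(0, 3, [], {}) # embedded objects start with capacity 3
NEXT_ID = Enumerator.new { |y| id = 0; loop { y << (id += 1) } }

def transition(shape, iv)
  shape.edges[iv] ||= begin
    capacity = shape.capacity
    # If the new IV would exceed the capacity, grow it, analogous to the
    # capacity transitions this commit adds to the shape tree.
    capacity *= 2 while shape.ivs.size + 1 > capacity
    Shape.new(NEXT_ID.next, capacity, shape.ivs + [iv], {})
  end
end

shape = ROOT
%i[@a @b @c @d].each { |iv| shape = transition(shape, iv) }
p [shape.id, shape.capacity, shape.ivs.size] # => [4, 6, 4]
```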
Co-Authored-By: Aaron Patterson <tenderlove@ruby-lang.org>
The fiber machine stack is placed outside of the C stack allocated by
wasm-ld, so the highest stack address recorded by
`rb_wasm_record_stack_base` is invalid when running on a non-main fiber.
Therefore, we should scan `stack_{start,end}`, which always point to a
valid stack range in any context.
We were previously incrementing the max_iv_count on a class during GC
freeing. By the time we free an object, though, we're not guaranteed that
its class is still valid. Instead, we can do this during marking, when
we're guaranteed that the object still knows its class.
* Avoid RCLASS_IV_TBL in marshal.c
* Avoid RCLASS_IV_TBL for class names
* Avoid RCLASS_IV_TBL for autoload
* Avoid RCLASS_IV_TBL for class variables
* Avoid copying RCLASS_IV_TBL onto ICLASSes
* Use object shapes for Class and Module IVs
`iv_count` is a misleading name because the new shape doesn't decrement
this value when IVs are unset. `next_iv_count` is an accurate and more
descriptive name.
Before object shapes, we were using class serial to invalidate
inline caches. Now that we use shape_id for inline cache keys,
the class serial is unnecessary.
Co-Authored-By: Aaron Patterson <tenderlove@ruby-lang.org>
Shapes give us an almost exact count of instance variables on an
object. Since we know the number of instance variables that have been
set, we will never access slots that haven't been initialized with an
IV.
Shapes provide us with an (almost) exact count of instance variables.
We only need to check for Qundef when an IV has been "undefined".
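For example, `remove_instance_variable` is the kind of operation that
leaves an "undefined" IV slot behind:
```ruby
obj = Object.new
obj.instance_variable_set(:@a, 1)
obj.instance_variable_set(:@b, 2)
obj.remove_instance_variable(:@a)       # leaves an "undefined" (Qundef) slot
p obj.instance_variables                # => [:@b]
p obj.instance_variable_defined?(:@a)   # => false
```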
Prefer to use ROBJECT_IV_COUNT when iterating IVs
GCC 12 introduced a new warning flag, `-Wuse-after-free`. However, it
has a false positive at `realloc` when optimization is disabled, since
the memory requested for reallocation is guaranteed not to be touched.
It is very unclear why wrapping the call in a statement expression (a
GCC extension) suppresses the false warning, but it does, so we use
that as the workaround.
Object Shapes is used for accessing instance variables and representing the
"frozenness" of objects. Object instances have a "shape" and the shape
represents some attributes of the object (currently which instance variables are
set and the "frozenness"). Shapes form a tree data structure, and when a new
instance variable is set on an object, that object "transitions" to a new shape
in the shape tree. Each shape has an ID that is used for caching. The shape
structure is independent of class, so objects of different types can have the
same shape.
For example:
```ruby
class Foo
  def initialize
    # Starts with shape id 0
    @a = 1 # transitions to shape id 1
    @b = 1 # transitions to shape id 2
  end
end

class Bar
  def initialize
    # Starts with shape id 0
    @a = 1 # transitions to shape id 1
    @b = 1 # transitions to shape id 2
  end
end

foo = Foo.new # `foo` has shape id 2
bar = Bar.new # `bar` has shape id 2
```
Both `foo` and `bar` instances have the same shape because they both set
instance variables of the same name in the same order.
This technique can help to improve inline cache hits as well as generate more
efficient machine code in JIT compilers.
This commit also adds some methods for debugging shapes on objects. See
`RubyVM::Shape` for more details.
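For example, in a build with shape debugging enabled, something like the
following can confirm the sharing (`RubyVM::Shape.of` and `#id` are
assumed here from the debugging API mentioned above):
```ruby
# Requires a build that exposes the debug-only RubyVM::Shape API.
foo_shape = RubyVM::Shape.of(Foo.new)
bar_shape = RubyVM::Shape.of(Bar.new)
p foo_shape.id == bar_shape.id # => true, per the example above
```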
For more context on Object Shapes, see [Feature: #18776]
Co-Authored-By: Aaron Patterson <tenderlove@ruby-lang.org>
Co-Authored-By: Eileen M. Uchitelle <eileencodes@gmail.com>
Co-Authored-By: John Hawthorn <john@hawthorn.email>
Poisoned memory regions cannot be accessed outside gc.c without first
being unpoisoned. Specifically, the debug gem is terminated by
AddressSanitizer:
```
SUMMARY: AddressSanitizer: use-after-poison iseq_collector.c:39 in iseq_i
```
rb_ary_tmp_new suggests that the array is temporary in some way, but
that's not true; it just creates an array that's hidden and not on the
transient heap. This commit renames it to rb_ary_hidden_new.
Before this commit, if we didn't have enough slots after sweeping but
had pages on the tomb heap, the GC would frequently allocate and
deallocate pages. This is because, after sweeping, it would set
allocatable pages (since there were not enough slots) but free the
pages on the tomb heap.
This commit reuses pages on the tomb heap when there aren't enough
slots after sweeping.
Prior to this commit it was possible to call `ObjectSpace._id2ref` with
an offset static symbol object_id and get back a new, incorrectly tagged
symbol:
```
> sensible_sym = ObjectSpace._id2ref(:a.object_id)
=> :a
> nonsense_sym = ObjectSpace._id2ref(:a.object_id + 40)
=> :a
> sensible_sym == nonsense_sym
=> false
```
`nonsense_sym` ends up tagged with `RUBY_ID_INSTANCE` instead of
`RB_ID_LOCAL`. That means we can do silly things like:
```
> foo = Object.new
> foo.instance_variable_set(:a, 123)
(irb):2:in `instance_variable_set': `a' is not allowed as an instance variable name (NameError)
> foo.instance_variable_set(ObjectSpace._id2ref(:a.object_id + 40), 123)
=> 123
> foo.instance_variables
=> [:a]
```
This was happening because `get_id_entry` ignores the tag bits when
looking up the symbol. So `rb_id2str(symid)` would return a value and
then we'd continue on with the nonsense `symid`.
This commit prevents the situation by checking that the `symid` actually
matches what we get back from `get_id_entry`. Now we get a `RangeError`
for the nonsense id:
```
> ObjectSpace._id2ref(:a.object_id)
=> :a
> ObjectSpace._id2ref(:a.object_id + 40)
(irb):1:in `_id2ref': 0x000000000013f408 is not symbol id value (RangeError)
```
Co-authored-by: John Hawthorn <jhawthorn@github.com>
In wmap_live_p, if is_pointer_to_heap returns false, then the page is
either in the tomb or has already been freed, so the object is dead. In
this case, wmap_live_p should return false.
This commit implements Objects on Variable Width Allocation. This allows
Objects with more ivars to be embedded (i.e. their contents directly
follow the object header), which improves performance through better
cache locality.
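One way to observe embedding from Ruby is an `objspace` dump (a sketch;
the exact fields emitted vary by version and build):
```ruby
require "objspace"

class Point
  def initialize(x, y, z)
    @x, @y, @z = x, y, z
  end
end

# On a VWA build, an object with several ivars can still be embedded;
# the dump output (fields vary by version/build) can reveal whether the
# ivars live directly after the object header or in an external buffer.
puts ObjectSpace.dump(Point.new(1, 2, 3))
```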
This commit enables Arrays to move between size pools during compaction.
This can occur if the array is mutated such that it would fit in a
different size pool when embedded.
The move is carried out in two stages:
1. The RVALUE is moved to a destination heap during the object movement
phase of compaction.
2. The array data is re-embedded and the original buffer freed, if
required. This happens during the update references step.
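A sketch of a scenario that exercises this path (assuming a VWA build
with compaction support; the `expand_heap:` option comes from a later
commit below):
```ruby
ary = Array.new(100) { |i| i } # large enough to need a heap-allocated buffer
ary.slice!(2..)                # now small enough to embed
# Stage 1 moves the RVALUE during compaction; stage 2 re-embeds the
# contents while references are updated.
GC.verify_compaction_references(expand_heap: true, toward: :empty)
p ary # => [0, 1]
```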
In order to reliably test compaction, we need to be able to move objects
between size pools. For this to happen, there must be pages in a size
pool into which we can allocate.
The existing implementation of `double_heap` only doubled the existing
number of pages in the heap, so if a size pool had a low number of pages
(or 0), it was not guaranteed that enough space would be created to move
objects into that size pool.
This commit deprecates the `double_heap` option and replaces it with
`expand_heap`.
`expand_heap` will expand each heap by enough pages to hold whichever
is larger: the number of slots defined by `GC_HEAP_INIT_SLOTS`, or
`heap->total_pages`.
If both `double_heap` and `expand_heap` are present, a deprecation
warning will be shown for `double_heap` and the `expand_heap` behaviour
will take precedence.
Given that this is an API intended for debugging and testing GC
compaction, I'm not concerned about the extra memory usage or time taken
to create the pages. However, for completeness:
Running the following `test.rb` and using `time` on my MacBook Pro shows
the following memory usage and time impact:
pp "RSS (kb): #{`ps -o rss #{Process.pid}`.lines.last.to_i}"
GC.verify_compaction_references(double_heap: true, toward: :empty)
pp "RSS (kb): #{`ps -o rss #{Process.pid}`.lines.last.to_i}"
```
❯ time make run
./miniruby -I./lib -I. -I.ext/common -r./arm64-darwin21-fake ./test.rb
"RSS (kb): 24000"
<internal:gc>:251: warning: double_heap is deprecated and will be removed
"RSS (kb): 25232"
________________________________________________________
Executed in  124.37 millis    fish           external
   usr time   82.22 millis    0.09 millis   82.12 millis
   sys time   28.76 millis    2.61 millis   26.15 millis

❯ time make run
./miniruby -I./lib -I. -I.ext/common -r./arm64-darwin21-fake ./test.rb
"RSS (kb): 24000"
"RSS (kb): 49040"
________________________________________________________
Executed in  150.13 millis    fish           external
   usr time  103.32 millis    0.10 millis  103.22 millis
   sys time   35.73 millis    2.59 millis   33.14 millis
```
If the page_body is a null pointer, then read_barrier_handler will
crash with an unrelated message. This commit improves the error message.
Before:
test.rb:1: [BUG] Couldn't unprotect page 0x0000000000000000, errno: Cannot allocate memory
After:
test.rb:1: [BUG] read_barrier_handler: segmentation fault at 0x14
The GC compaction mechanism implements a kind of read barrier by marking
some (OS) pages as unreadable, and installing a SIGBUS/SIGSEGV handler
to detect when they're accessed and invalidate an attempt to move the
object.
Unfortunately, when a debugger is attached to the Ruby interpreter on
macOS, the debugger will trap the EXC_BAD_ACCESS Mach exception before
the runtime can transform it into a SIGBUS signal and dispatch it.
Thus, execution gets stuck; any attempt to continue from the debugger
re-executes the line that caused the exception and no forward progress
can be made.
This makes it impossible to debug either the Ruby interpreter or a C
extension whilst compaction is in use.
To fix this, we disable the EXC_BAD_ACCESS handler when installing the
SIGBUS/SIGSEGV handlers, and re-enable it once compaction is done.
The debugger will still trap on the attempt to read the bad page, but it
will be trapping the SIGBUS signal rather than the EXC_BAD_ACCESS Mach
exception. It's possible to continue from this in the debugger, which
invokes the signal handler and allows forward progress to be made.
Commit 0c36ba5319 changed GC compaction
methods to not be implemented when not supported. However, that commit
only performs compile-time checks (which currently only check for WASM),
while there are additional compaction support checks at run time.
This commit changes it so that GC compaction methods aren't defined
at run time if the platform does not support GC compaction.
[Bug #18829]
Only growth heaps are allowed to start major GCs. Before this patch,
growth heaps were defined as size pools that freed more slots than they
had empty slots (i.e. there were more dead objects than empty space).
But if a size pool is relatively stable, tightly packed with mostly old
objects, and has allocatable pages, it would be incorrectly classified
as a growth heap and trigger major GC. And since it's stable, it would
never use any of the allocatable pages and would forever be classified
as a growth heap, causing major GC thrashing. This commit changes the
definition of a growth heap to also require that the size pool have no
allocatable pages.
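In rough pseudocode, the classification changes like this (illustrative
names, not the C identifiers):
```ruby
# Old definition: freed more slots than it had empty slots.
def growth_heap_old?(freed_slots, empty_slots)
  freed_slots > empty_slots
end

# New definition: additionally require no allocatable pages, so a stable,
# tightly packed pool with spare allocatable pages can no longer trigger
# (and then thrash) major GC.
def growth_heap_new?(freed_slots, empty_slots, allocatable_pages)
  freed_slots > empty_slots && allocatable_pages.zero?
end
```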
Having a while loop over `heap_prepare` makes the GC logic difficult to
understand (it is difficult to understand when and why `heap_prepare`
yields a free page). It is also a source of bugs, and can cause an
infinite loop if `heap_prepare` never yields a free page.
Fixes [Bug #18779]
Define the following methods as `rb_f_notimplement` on unsupported
platforms:
- GC.compact
- GC.auto_compact
- GC.auto_compact=
- GC.latest_compact_info
- GC.verify_compaction_references
This change allows users to call `GC.respond_to?(:compact)` to
properly test for compaction support. Previously, it was necessary to
invoke `GC.compact` or `GC.verify_compaction_references` and check if
those methods raised `NotImplementedError` to determine if compaction
was supported.
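Callers can now feature-test compaction support directly:
```ruby
# After this change, a simple respond_to? check suffices:
GC.compact if GC.respond_to?(:compact)

# Previously the only option was to rescue the failure:
begin
  GC.compact
rescue NotImplementedError
  # platform without compaction support
end
```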
This follows the precedent set by other platform-specific methods, for
example in `process.c`, where methods such as `Process.fork`,
`Process.setpgid`, and `Process.getpriority` are handled the same way.
These methods are removed from gc.rb and added to gc.c:
- GC.compact
- GC.auto_compact
- GC.auto_compact=
- GC.latest_compact_info
- GC.verify_compaction_references
This is a prefactor to allow setting these methods to
`rb_f_notimplement` in a followup commit.
Some size pools may not have any pages/slots, so total_slots is 0, which
causes a divide-by-zero in the calculation. This commit adds a special
case: when total_slots is 0, we return the number of pages required for
heap_init_slots.
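Schematically (a toy sketch with illustrative names and values, not the
C code):
```ruby
HEAP_INIT_SLOTS = 10_000 # illustrative stand-in for heap_init_slots
SLOTS_PER_PAGE = 409     # illustrative

def pages_to_grow(total_slots, free_slots)
  if total_slots.zero?
    # Special case added by this commit: the ratio below would otherwise
    # divide by zero, so request enough pages for the init slots instead.
    return HEAP_INIT_SLOTS.fdiv(SLOTS_PER_PAGE).ceil
  end
  free_ratio = free_slots * 100 / total_slots # safe: total_slots > 0
  free_ratio < 20 ? 1 : 0 # stand-in for the real growth heuristic
end

p pages_to_grow(0, 0)      # => 25 (pages for HEAP_INIT_SLOTS)
p pages_to_grow(1000, 100) # => 1
```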
If the size pool has no (or few) pages/slots, then min_free_slots will
be a very small number (or even 0) and the heap won't be eligible to
grow, causing GC thrashing or infinite loops.
Size pools with no pages won't be swept, so gc_sweep_finish_size_pool
will never be called on them; but gc_sweep_finish_size_pool must be
called in order to grow the size pool.
Depending on alignment, the last bitmap plane may not be used. It will
then appear as if all of the objects on that plane are unmarked, which
will cause a buffer overrun when we try to free them. This commit
changes the loop to calculate the number of planes actually used
(bitmap_plane_count).
Since 4d8f76286b, we need to dereference
the includer field on iclasses, so we need to mark it to make sure
it's alive.
Sometimes during compaction we crash because the field is dangling,
though I have a hard time constructing such a situation. See
http://ci.rvm.jp/results/trunk@ruby-iga/3947725
We didn't update the includer field during compaction so it could become
a dangling pointer after compaction. It's only recently that we started
to dereference the field, and we were only comparing the pointer before
then, so the omission only recently started to cause crashes.
By instrumenting object.c:833 with `rp(includer);`, you can see the
includer field become `T_NONE` with the following script:
```ruby
mod = Module.new do
  protected def foo = 1
end

klass = Class.new do
  include Module.new

  def run
    foo
  end
end

klass.include(mod)

GC.verify_compaction_references(double_heap: true, toward: :empty)

klass.new.run
```
I found a crash in a private application that this patch fixes, but
wasn't able to develop a small reproducer. Hence the above demo that
requires instrumentation.
During VM startup, rb_objspace_alloc sets malloc_limit
(objspace->malloc_params.limit) before ruby_gc_set_params is called, thus
nullifying the effect of RUBY_GC_MALLOC_LIMIT before the initial GC run.
The call sequence is as follows:
```
main.c::main()
  ruby_init
    ruby_setup
      Init_BareVM
        rb_objspace_alloc   // malloc_limit = gc_params.malloc_limit_min;
  ruby_options
    ruby_process_options
      process_options
        ruby_gc_set_params  // RUBY_GC_MALLOC_LIMIT => gc_params.malloc_limit_min
```
By also setting malloc_limit in ruby_gc_set_params, RUBY_GC_MALLOC_LIMIT
affects the process sooner.
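The effect is observable via `GC.stat` (a sketch; `malloc_limit.rb` is a
hypothetical filename):
```ruby
# Run as: RUBY_GC_MALLOC_LIMIT=33554432 ruby malloc_limit.rb
# With this change, the limit reflects the environment variable before
# the initial GC run rather than only after a later recalculation.
p GC.stat(:malloc_increase_bytes_limit) # => 33554432
```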
[ruby-core:107170]
Commit dde164e968 decoupled incremental
marking from page sizes. This commit changes Ruby heap page sizes to
64KiB. Doing so will have several benefits:
1. We can use compaction on systems with 64KiB system page sizes (e.g.
PowerPC).
2. Larger page sizes will allow Variable Width Allocation to increase
slot sizes and embed larger objects.
3. Since commit 002fa28599, macOS has 64
KiB pages. Making page sizes 64 KiB will bring these systems to
parity.
I have attached some benchmark results below.
Discourse:
On Discourse, we saw much better p99 performance (e.g. for "categories"
it went from 214ms on master to 134ms on branch; for "home" it went
from 265ms to 251ms). We don’t see much change in p60, p75, and p90
performance. We also see a slight (1.04x) decrease in memory usage.
Branch RSS: 354.9MB
Master RSS: 368.2MB
railsbench:
On railsbench, we don’t see a big change in RPS or p99
performance. We also don’t see a big difference in memory usage.
Branch RPS: 826.27
Master RPS: 824.85
Branch p99: 1.67
Master p99: 1.72
Branch RSS: 88.72MB
Master RSS: 88.48MB
liquid:
We don’t see a significant change in liquid performance.
Branch parse & render: 28.653 i/s
Master parse & render: 28.563 i/s
Currently, rb_aligned_malloc uses mmap if Ruby heap pages can be
allocated through mmap (i.e. when the system page size <= the Ruby heap
page size). If the Ruby heap page size is increased to 64KiB, then mmap
will be used on systems with 64KiB system pages. However, the transient
heap also uses rb_aligned_malloc and requires 32KiB alignment. This
would break with the current implementation, since it would request
sizes from mmap that are not a multiple of the system page size.
This commit adds heap_page_body_allocate which will use mmap when
possible and changes rb_aligned_malloc to not use mmap (and only
use posix_memalign).
This commit changes the way compaction moves objects and sweeps pages in
order to better facilitate object movement between size pools.
Previously, we would first move the scan cursor until we found an empty
slot, and then decrement the compact cursor until we found something
to move into that slot. We would sweep the page that contained the scan
cursor before trying to fill it.
In the new algorithm, we first move the compact cursor down until we
find an object to move. We then take a free page from the desired
destination heap (always the same heap, in this current iteration of
the code).
If there is no free page, we sweep the page at the sweeping_page cursor,
add it to the free pages, advance the cursor to the next page, and try
again.
We sweep one page from each size pool in this way, and repeat that
process until all the size pools are compacted (all the cursors have
met). Then we update references and sweep the rest of the heap.
Currently, the number of incremental marking steps is calculated based
on the number of pooled pages available. This means that if we make Ruby
heap pages larger, it would run fewer incremental marking steps (which
would mean each incremental marking step takes longer).
This commit changes incremental marking to run a step after every
INCREMENTAL_MARK_STEP_ALLOCATIONS allocations. This means that the
behaviour of incremental marking remains the same regardless of the
Ruby heap page size.
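Schematically, the new trigger works like this (a toy model; the real
counter lives inside the GC and the constant's value here is
illustrative):
```ruby
INCREMENTAL_MARK_STEP_ALLOCATIONS = 500 # illustrative value
counter = 0
mark_step = -> { } # stand-in for one incremental marking step

allocate = lambda do
  counter += 1
  if counter >= INCREMENTAL_MARK_STEP_ALLOCATIONS
    counter = 0
    mark_step.call # step frequency no longer depends on page size
  end
end

10_000.times { allocate.call }
```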
I've benchmarked against the Discourse benchmarks and did not see a
significant change in response times beyond the margin of error. This is
expected, as the new incremental marking algorithm behaves very
similarly to the previous one.