On 32-bit systems, the shape cache size is 1048576 (the value of
REDBLACK_CACHE_SIZE), but a 16-bit unsigned integer can only hold values
up to 65535. This means that `redblack_id_t` can overflow and lead to a
corrupted red-black tree.
The following script crashes on 32-bit systems:
```ruby
o = Object.new
1_000_000.times do |i|
  o.instance_variable_set(:"@i#{i}", i)
end
```
That function is a bit too low level to be called from multiple
places. It's always used in tandem with `rb_shape_set_too_complex`,
and both have to know how the object is laid out in order to update
`iv_ptr`.
So instead we can provide two higher-level functions:
- `rb_obj_copy_ivs_to_hash_table` to prepare an `st_table` from an
  arbitrary object.
- `rb_obj_convert_to_too_complex` to assign the new `st_table`
  to the old object and safely free the old `iv_ptr`.
Unfortunately the two can't be combined into one, because `rb_obj_copy_ivar`
needs `rb_obj_copy_ivs_to_hash_table` to copy from one object
to another.
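As a rough sketch of how the two pieces compose (the prototypes shown here
are assumptions inferred from the description above, not copied from the
source):

```c
#include "ruby/ruby.h"
#include "ruby/st.h"

/* Assumed prototypes, inferred from the description above. */
st_table *rb_obj_copy_ivs_to_hash_table(VALUE obj);
void rb_obj_convert_to_too_complex(VALUE obj, st_table *table);

static void
make_too_complex(VALUE obj)
{
    /* 1. Snapshot every instance variable of `obj` into an st_table. */
    st_table *table = rb_obj_copy_ivs_to_hash_table(obj);

    /* 2. Attach the table to `obj` and safely free its old iv_ptr buffer. */
    rb_obj_convert_to_too_complex(obj, table);
}
```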
Right now, callers of `rb_shape_get_next` need to
first check whether there is capacity left, and if not, call
`rb_shape_transition_shape_capa` before they can call `rb_shape_get_next`.
And after each of these calls they need to check whether they got a
TOO_COMPLEX shape back.
All this logic is duplicated in the interpreter, YJIT and RJIT.
Instead we can have `rb_shape_get_next` do the capacity transition
when needed. The caller can compare the old and new shapes' capacities
to know whether resizing is needed, and it only has to check for
TOO_COMPLEX once.
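Roughly, a caller would then look like the sketch below (field and helper
names reflect my reading of the shape API and should be treated as
illustrative, not as the exact interpreter/JIT code):

```c
#include "shape.h"   /* internal CRuby header */

static void
transition_for_new_ivar(VALUE obj, ID id)
{
    rb_shape_t *current_shape = rb_shape_get_shape(obj);
    rb_shape_t *next_shape = rb_shape_get_next(current_shape, obj, id);

    if (rb_shape_id(next_shape) == OBJ_TOO_COMPLEX_SHAPE_ID) {
        /* Single TOO_COMPLEX check: evacuate ivars into a hash (elided). */
        return;
    }

    if (next_shape->capacity != current_shape->capacity) {
        /* The transition grew the ivar buffer capacity: resize it. */
        rb_ensure_iv_list_size(obj, current_shape->capacity, next_shape->capacity);
    }
    rb_shape_set_shape(obj, next_shape);
    /* ... store the new ivar's value at its index (elided) ... */
}
```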
When an inline cache misses, it is very likely that the stale shape_id
and the current instance shape_id have a close common ancestor.
For example, if the instance is sometimes frozen and sometimes not,
one of the two shapes will be the direct parent of the other.
Another pattern that commonly causes IC misses is "memoization";
in that case the object will have a "base common shape" and then
a number of close descendants.
In addition, when we find a common ancestor, we store it in the
inline cache instead of the current shape. This helps prevent the
cache from flip-flopping, ensures the next lookup will be marginally
faster, and more generally avoids writing to memory too often.
However, now that shapes have an ancestors index, we only check
a few ancestors before falling back to using the index.
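The bounded search can be pictured like this (a simplified sketch; the exact
relations checked, root handling, and the helper names are assumptions):

```c
#include "shape.h"   /* internal CRuby header */

/* Simplified: check only very close relations before giving up.
 * Frozen/non-ivar edges and root shapes are not handled here. */
static rb_shape_t *
close_common_ancestor(rb_shape_t *stale, rb_shape_t *current)
{
    rb_shape_t *stale_parent = rb_shape_get_parent(stale);
    rb_shape_t *current_parent = rb_shape_get_parent(current);

    if (stale_parent == current)        return current;       /* current is the direct parent */
    if (current_parent == stale)        return stale;         /* stale is the direct parent */
    if (stale_parent == current_parent) return stale_parent;  /* siblings under one parent */

    return NULL;  /* give up and fall back to the ancestors index */
}
```

On a hit, it is the returned ancestor (not the current shape) that gets
written back into the inline cache.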
So overall this change speeds up what is assumed to be the more common
case, but makes what is assumed to be the less common case a bit slower.
```
compare-ruby: ruby 3.3.0dev (2023-10-26T05:30:17Z master 701ca070b4) [arm64-darwin22]
built-ruby: ruby 3.3.0dev (2023-10-26T09:25:09Z shapes_double_sear.. a723a85235) [arm64-darwin22]
warming up......
| |compare-ruby|built-ruby|
|:------------------------------------|-----------:|---------:|
|vm_ivar_stable_shape | 11.672M| 11.679M|
| | -| 1.00x|
|vm_ivar_memoize_unstable_shape | 7.551M| 10.506M|
| | -| 1.39x|
|vm_ivar_memoize_unstable_shape_miss | 11.591M| 11.624M|
| | -| 1.00x|
|vm_ivar_unstable_undef | 9.037M| 7.981M|
| | 1.13x| -|
|vm_ivar_divergent_shape | 8.034M| 6.657M|
| | 1.21x| -|
|vm_ivar_divergent_shape_imbalanced | 10.471M| 9.231M|
| | 1.13x| -|
```
Co-Authored-By: John Hawthorn <john@hawthorn.email>
`remove_shape_recursive` wasn't considering that if we run out of
shapes, it might have to transition to SHAPE_TOO_COMPLEX.
When this happens, we now return with an error and the caller
initiates the evacuation.
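Sketched caller-side handling (the NULL-on-failure convention and the helper
names here are assumptions about the code's shape, not the exact CRuby
functions):

```c
#include "shape.h"   /* internal CRuby header */

static void
remove_ivar_transition(VALUE obj, rb_shape_t *shape, ID id)
{
    rb_shape_t *new_shape = remove_shape_recursive(shape, id);

    if (new_shape == NULL) {
        /* We ran out of shapes: the caller initiates the evacuation. */
        evacuate_ivars_to_hash(obj);   /* hypothetical helper name */
    }
    else {
        rb_shape_set_shape(obj, new_shape);
    }
}
```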
There is no longer a limit on the number of IVs you can store.
SHAPE_MAX_NUM_IVS was used to work around the IV10K problem (the
well-known problem where setting 10k instance variables in a row would be
too slow). The red-black tree works well at any shape depth, even depths
greater than 80, and solves the IV10K problem.
This is an experimental commit that uses a functional red-black tree to
create an index of the ancestor shapes. It uses an Okasaki-style
functional red-black tree:
https://www.cs.tufts.edu/comp/150FP/archive/chris-okasaki/redblack99.pdf
This tree is advantageous because:
* It offers O(log n) insertions and O(log n) lookups.
* It shares memory with previous "versions" of the tree.
When we insert a node in the tree, only the parts of the tree that need
to be rebalanced are newly allocated. Parts of the tree that don't need
to be rebalanced are not reallocated, so "new trees" are able to share
memory with old trees. This is in contrast to a sorted set, where we
would have to duplicate the set and re-sort it on each
insertion.
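To illustrate the sharing property, here is a minimal persistent
(path-copying) binary-search-tree insert. Okasaki's rebalancing step and the
actual shape node layout are omitted, so treat it purely as a sketch of the
memory-sharing idea:

```c
#include <stdlib.h>

/* Illustrative persistent BST node, not CRuby's red-black shape node. */
typedef struct node {
    const struct node *left;
    const struct node *right;
    int key;
} node_t;

/* Insert without mutating `tree`: only nodes on the search path are copied;
 * every untouched subtree is shared between the old and the new tree. */
static const node_t *
persistent_insert(const node_t *tree, int key)
{
    node_t *copy = malloc(sizeof(node_t));   /* allocation failure unhandled */

    if (tree == NULL) {
        copy->key = key;
        copy->left = copy->right = NULL;
        return copy;
    }

    *copy = *tree;
    if (key < tree->key) {
        copy->left = persistent_insert(tree->left, key);
        /* copy->right still points into the old tree: shared memory. */
    }
    else if (key > tree->key) {
        copy->right = persistent_insert(tree->right, key);
        /* copy->left still points into the old tree: shared memory. */
    }
    return copy;  /* the previous version of the tree remains valid */
}
```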
I've added a new stat to RubyVM.stat so we can understand how the
red-black tree grows.
Given `SHAPE_MAX_NUM_IVS 80`, we transition to TOO_COMPLEX
well before we could overflow an 8-bit counter.
This reduces the size of `rb_shape_t` from 32B to 24B.
If we decide to raise `SHAPE_MAX_NUM_IVS`, we can always widen
that type again.
On platforms where `shape_id_t` is 16 bits wide, 0x80000 is out of range
for the type.
```
../src/shape.c: In function ‘shape_alloc’:
../src/shape.c:129:18: warning: comparison is always false due to limited range of data type [-Wtype-limits]
129 | if (shape_id == MAX_SHAPE_ID) {
| ^~
```
We can only allocate enough shapes to fit in the shape buffer.
MAX_SHAPE_ID was based on the theoretical maximum number of shapes we
could have, not on the amount of memory we can actually consume. This
commit changes the MAX_SHAPE_ID to be based on the amount of memory
we're allowed to consume.
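In macro form the idea is roughly the following (the buffer-size constant
name is an assumption, used only to illustrate the change):

```c
/* Before: the ceiling came from how many ids shape_id_t could theoretically
 * encode. After: it comes from how many shapes the preallocated shape buffer
 * can actually hold. SHAPE_BUFFER_SIZE is an assumed name for that size. */
#define MAX_SHAPE_ID (SHAPE_BUFFER_SIZE - 1)
```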
Co-Authored-By: Jemma Issroff <jemmaissroff@gmail.com>
st tables will maintain insertion order, so we can marshal dump / load
objects with instance variables in the same order they were set on that
particular instance.
[ruby-core:112926] [Bug #19535]
Co-Authored-By: Jemma Issroff <jemmaissroff@gmail.com>
Create SHAPE_MAX_NUM_IVS (currently 50) and limit all shapes of
T_OBJECTs to that number of IVs. When a T_OBJECT's shape has more than
50 IVs, fall back to the obj_too_complex shape, which uses hash lookup
for IVs.
Note that a previous version of this commit,
78fcc9847a, was reverted in
88f2b94065 because it did not account for
non-T_OBJECTs.
Create SHAPE_MAX_NUM_IVS (currently 50) and limit all shapes to that
number of IVs. When a shape has more than 50 IVs, fall back to the
obj_too_complex shape, which uses hash lookup for IVs.
SIZE_POOL_COUNT is a GC macro, so it belongs in gc.h, not shape.h.
SIZE_POOL_COUNT doesn't depend on shape.h, so we can have shape.h depend
on gc.h.
Co-Authored-By: Matt Valentine-House <matt@eightbitraptor.com>
When moving objects between size pools we have to assign a new shape.
This used to happen while updating references: we tried to create a new
shape tree that mirrored the existing tree, but based on the root shape of
the new size pool.
This could allocate if the new tree didn't already exist, potentially
triggering a GC, during GC.
This commit changes object movement to look for a pre-existing mirrored
tree, and if that tree does not exist, we don't move
the object to the new pool.
This allows us to remove the shape allocation from update references.
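Conceptually, the movement path becomes something like this sketch (names
here describe intent and are illustrative rather than the exact CRuby
functions):

```c
#include <stdbool.h>
#include "shape.h"   /* internal CRuby header */

/* Decide at movement time; allocate nothing, since we are inside GC. */
static bool
try_move_to_size_pool(VALUE obj, rb_shape_t *dest_pool_root_shape)
{
    rb_shape_t *orig_shape = rb_shape_get_shape(obj);

    /* Walk the destination pool's tree looking for a pre-existing mirror of
     * orig_shape; this lookup returns NULL rather than allocating nodes. */
    rb_shape_t *mirrored = lookup_mirrored_shape(dest_pool_root_shape, orig_shape);

    if (mirrored == NULL) {
        return false;   /* no pre-built tree: leave the object where it is */
    }

    rb_shape_set_shape(obj, mirrored);
    return true;        /* safe to move: no shape allocation happened */
}
```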
Co-Authored-By: Peter Zhu <peter@peterzhu.ca>
When an object becomes "too complex" (in other words it has too many
variations in the shape tree), we transition it to use a "too complex"
shape and use a hash for storing instance variables.
Without this patch, there were rare cases where shape tree growth could
"explode" and cause performance degradation on what would otherwise have
been cached fast paths.
This patch puts a limit on shape tree growth and gracefully degrades in
the rare case where there could be factorial growth in the shape tree.
For example:
```ruby
class NG; end
HUGE_NUMBER.times do
  NG.new.instance_variable_set(:"@unique_ivar_#{_1}", 1)
end
```
We consider objects to be "too complex" when the object's class has more
than SHAPE_MAX_VARIATIONS (currently 8) leaf nodes in the shape tree and
the object introduces a new variation (a new leaf node) associated with
that class.
For example, new variations on instances of the following class would be
considered "too complex" because those instances create more than 8
leaves in the shape tree:
```ruby
class Foo; end
9.times { Foo.new.instance_variable_set(:"@uniq_#{_1}", 1) }
```
However, the following class is *not* too complex because it only has
one leaf in the shape tree:
```ruby
class Foo
  def initialize
    @a = @b = @c = @d = @e = @f = @g = @h = @i = nil
  end
end

9.times { Foo.new }
```
This case is rare, so we don't expect this change to impact performance
of most applications, but it needs to be handled.
Co-Authored-By: Aaron Patterson <tenderlove@ruby-lang.org>
I see several arguments for doing so.
First, shapes use a non-trivial amount of memory, so for various memory
profiling/mapping tools it is relevant to have visibility into the space
occupied by shapes.
Then, some pathological code can create a ton of shapes, so it is
valuable to have a way to observe shapes without having
to compile Ruby with `SHAPE_DEBUG=1`.
Additionally, it's likely much faster to dump them this way than
to use `RubyVM::Shape`.
There are however a few open questions:
- Shapes can't respect the `since:` argument. Not sure what to do when
it is provided. Would probably make sense to not dump them.
- Maybe it would make more sense to have a separate `ObjectSpace.dump_shapes`?
- Maybe instead `dump_all` should take a `shapes: false` argument?
Additionally, `ObjectSpace.dump_shapes` is added for the use case of
debugging the evolution of the shape tree.
Cases like this:
```ruby
obj = Object.new
loop do
  obj.instance_variable_set(:@foo, 1)
  obj.remove_instance_variable(:@foo)
end
```
can cause us to use many more shapes than we want (and even run out).
This commit changes the code such that when an instance variable is
removed, we'll walk up the shape tree, find the shape, then rebuild any
child nodes that happened to be below the "targeted for removal" IV.
This also requires moving any instance variables so that indexes derived
from the shape tree will work correctly.
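The rebuild can be pictured with this simplified recursion (ivar transitions
only; frozen edges, the out-of-shapes case, and error handling are elided,
and the helper names should be read as assumptions):

```c
#include "shape.h"   /* internal CRuby header */

/* Simplified sketch: splice out the transition that introduced the removed
 * ivar, then replay the transitions that sat above it on the branch. */
static rb_shape_t *
remove_ivar_shape(rb_shape_t *shape, ID removed_id)
{
    if (shape->parent_id == INVALID_SHAPE_ID) {
        return shape;  /* reached the root without finding the ivar */
    }

    if (shape->edge_name == removed_id) {
        /* This transition introduced the removed ivar: drop it. */
        return rb_shape_get_parent(shape);
    }

    /* Remove further up the branch, then re-add this shape's own ivar
     * transition on top of the rebuilt parent. */
    rb_shape_t *new_parent = remove_ivar_shape(rb_shape_get_parent(shape), removed_id);
    return rb_shape_get_next_iv_shape(new_parent, shape->edge_name);
}
```

Once the branch is rebuilt, the object's ivar buffer is compacted to match
the indexes implied by the new branch, which is the "moving any instance
variables" part above.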
Co-Authored-By: Jemma Issroff <jemmaissroff@gmail.com>
Co-authored-by: John Hawthorn <jhawthorn@github.com>
We would like to differentiate types of objects via their shape. This
commit adds a special T_OBJECT shape when we allocate an instance of
T_OBJECT. This allows us to avoid testing whether an object is a
T_OBJECT or not; we can just check the shape.