Objects with the same shape must always have the same "embeddedness"
(either embedded or heap allocated) because YJIT assumes so. However,
it's possible for some objects of a given shape to be embedded while
others are heap allocated, because remove_instance_variable does not
re-embed heap-allocated objects.
This commit changes remove_instance_variable to re-embed Object
instance variables when the object becomes small enough.
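As a rough illustration of the situation (a minimal sketch, not the test
from this commit; the embedded capacity threshold depends on the build):
```
o = Object.new
# Add enough ivars that the object may no longer fit in its embedded slot.
10.times { |i| o.instance_variable_set(:"@a#{i}", i) }

# Removing ivars can bring the object back under the embedded capacity.
# With this change, remove_instance_variable re-embeds it, so objects
# sharing the resulting shape all have the same "embeddedness" again.
o.remove_instance_variable(:@a9)
o.remove_instance_variable(:@a8)
```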
It's too difficult for me to keep track of the fact that y is the new
node, x is the new left node, z is the new right node, a is the new
left-left node, b is the new left-right node, c is the new right-left
node, and d is the new right-right node. This commit refactors the
variable names to be more descriptive.
Many tests start by exhausting all shapes, which is a slow process.
By exposing a method to directly move the bump allocator forward,
we cut the test runtime in half.
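For reference, exhausting shapes the slow way looks roughly like this
(the same pattern as the reproduction script below); the exposed test
helper replaces the loop with a single call that moves the bump
allocator forward:
```
# Slow: burn through shapes one instance_variable_set at a time until
# almost none are left.
a = Object.new
i = 0
while RubyVM::Shape.shapes_available > 2
  a.instance_variable_set(:"@i#{i}", 1)
  i += 1
end
```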
Before:
```
Finished tests in 1.544756s
```
After:
```
Finished tests in 0.759733s,
```
Reproduction script:
```
o = Object.new
10.times { |i| o.instance_variable_set(:"@a#{i}", i) }
i = 0
a = Object.new
while RubyVM::Shape.shapes_available > 2
  a.instance_variable_set(:"@i#{i}", 1)
  i += 1
end
o.remove_instance_variable(:@a0)
puts o.instance_variable_get(:@a1)
```
Before this patch, it would incorrectly output `2` and now it correctly
outputs `1`.
That function is a bit too low level to be called from multiple
places. It's always used in tandem with `rb_shape_set_too_complex`
and both have to know how the object is laid out to update the
`iv_ptr`.
So instead we can provide two higher level functions:
- `rb_obj_copy_ivs_to_hash_table` to prepare an `st_table` from an
  arbitrary object.
- `rb_obj_convert_to_too_complex` to assign the new `st_table`
to the old object, and safely free the old `iv_ptr`.
Unfortunately both can't be combined into one, because `rb_obj_copy_ivar`
needs `rb_obj_copy_ivs_to_hash_table` to copy ivars from one object
to another.
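As a Ruby-level analogy only (the real function is C and fills an
`st_table` directly), `rb_obj_copy_ivs_to_hash_table` conceptually does
the equivalent of:
```
# Ruby-level analogy, not the C implementation: snapshot an object's
# instance variables into a hash table keyed by ivar name.
def ivs_to_hash(obj)
  obj.instance_variables.to_h { |name| [name, obj.instance_variable_get(name)] }
end

o = Object.new
o.instance_variable_set(:@a, 1)
o.instance_variable_set(:@b, 2)
ivs_to_hash(o) # => {:@a=>1, :@b=>2}
```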
Right now callers of `rb_shape_get_next` need to
first check if there is capacity left, and if not, call
`rb_shape_transition_shape_capa` before they can call `rb_shape_get_next`.
And for each of these calls they need to check whether they got
TOO_COMPLEX back.
All this logic is duplicated in the interpreter, YJIT and RJIT.
Instead we can have `rb_shape_get_next` do the capacity transition
when needed. The caller can compare the old and new shapes' capacities
to know if resizing is needed. It also only has to check for TOO_COMPLEX
once.
When an inline cache misses, it is very likely that the stale shape_id
and the current instance shape_id have a close common ancestor.
For example, if the instance is sometimes frozen and sometimes not,
one of the two shapes will be the direct parent of the other.
Another pattern that commonly causes IC misses is "memoization":
in such cases the object will have a "base common shape" and then
a number of close descendants.
In addition, when we find a common ancestor, we store it in the
inline cache instead of the current shape. This helps prevent the
cache from flip-flopping, ensuring the next lookup will be marginally
faster, and more generally avoids writing to memory too much.
However, now that shapes have an ancestors index, we only check
for a few ancestors before falling back to use the index.
So overall this change speeds up what is assumed to be the more common
case, but makes what is assumed to be the less common case a bit slower.
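The ancestor search itself is cheap; here is a minimal sketch in plain
Ruby (not the C implementation) of checking only a few parents of each
shape before giving up:
```
# Look for a common ancestor within a small, fixed number of parent
# hops before falling back to the slower path (ancestors index / full
# lookup).
Shape = Struct.new(:id, :parent)

def close_common_ancestor(a, b, max_hops = 2)
  seen = []
  node = a
  (max_hops + 1).times do
    break if node.nil?
    seen << node
    node = node.parent
  end
  node = b
  (max_hops + 1).times do
    break if node.nil?
    return node if seen.include?(node)
    node = node.parent
  end
  nil # no close ancestor found: fall back
end

root   = Shape.new(0, nil)
base   = Shape.new(1, root)   # e.g. the memoized "base common shape"
frozen = Shape.new(2, base)   # e.g. the frozen variant of the same object
close_common_ancestor(frozen, base) # => base, which is what gets cached
```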
```
compare-ruby: ruby 3.3.0dev (2023-10-26T05:30:17Z master 701ca070b4) [arm64-darwin22]
built-ruby: ruby 3.3.0dev (2023-10-26T09:25:09Z shapes_double_sear.. a723a85235) [arm64-darwin22]
warming up......
| |compare-ruby|built-ruby|
|:------------------------------------|-----------:|---------:|
|vm_ivar_stable_shape | 11.672M| 11.679M|
| | -| 1.00x|
|vm_ivar_memoize_unstable_shape | 7.551M| 10.506M|
| | -| 1.39x|
|vm_ivar_memoize_unstable_shape_miss | 11.591M| 11.624M|
| | -| 1.00x|
|vm_ivar_unstable_undef | 9.037M| 7.981M|
| | 1.13x| -|
|vm_ivar_divergent_shape | 8.034M| 6.657M|
| | 1.21x| -|
|vm_ivar_divergent_shape_imbalanced | 10.471M| 9.231M|
| | 1.13x| -|
```
Co-Authored-By: John Hawthorn <john@hawthorn.email>
`remove_shape_recursive` wasn't considering that if we run out of
shapes, it might have to transition to SHAPE_TOO_COMPLEX.
When this happens, we now return with an error and the caller
initiates the evacuation.
On 32-bit systems, we must store the shape ID in the gen_ivtbl to not
lose the shape. If we directly store the ST table into the generic
ivar table, then we lose the shape. This makes it impossible to
determine the shape of the object and whether it is too complex or not.
Since the check for MAX_SHAPE_ID was done before even checking
whether the transition we're looking for exists, as soon as the
max shape is reached, get_next_shape_internal would always return
`TOO_COMPLEX` regardless of whether the transition we're looking
for already exists or not.
In addition to entirely de-optimizing all newly created objects, it
also made an assertion fail in `vm_setivar`:
```
vm_setivar:rb_shape_get_next_iv_shape(rb_shape_get_shape_by_id(source_shape_id), id) == dest_shape
```
When running tests in debug mode, we have tests that try to exhaust the
space used for shapes and the redblack cache. However, this can cause
out-of-memory issues on some machines, so this commit decreases the
cache sizes when RUBY_DEBUG is enabled.
There is no longer a limit on the number of IVs you can store.
SHAPE_MAX_NUM_IVS was used to work around the IV10K problem (the well
known problem where setting 10k instance variables in a row would be too
slow). The redblack tree works well at any shape depth, even depths
greater than 80, and solves the IV10K problem.
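For context, the IV10K pattern is simply a long run of ivar writes on a
single object, which produces a very deep shape chain:
```
# The IV10K pattern: 10k instance_variable_set calls on one object.
o = Object.new
10_000.times { |i| o.instance_variable_set(:"@iv#{i}", i) }
```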
This is an experimental commit that uses a functional red-black tree to
create an index of the ancestor shapes. It uses an Okasaki-style
functional red-black tree:
https://www.cs.tufts.edu/comp/150FP/archive/chris-okasaki/redblack99.pdf
This tree is advantageous because:
* It offers O(log n) insertions and O(log n) lookups.
* It shares memory with previous "versions" of the tree.
When we insert a node in the tree, only the parts of the tree that need
to be rebalanced are newly allocated. Parts of the tree that don't need
to be rebalanced are not reallocated, so "new trees" are able to share
memory with old trees. This is in contrast to a sorted set where we
would have to duplicate the set, and also re-sort the set on each
insertion.
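To make the structural sharing concrete, here is a minimal Okasaki-style
persistent red-black tree sketched in plain Ruby (not the C code from
this commit); `insert` returns a new root and reuses every subtree it
did not have to rebalance:
```
Node = Struct.new(:color, :left, :key, :right) # color is :red or :black

def balance(color, left, key, right)
  # All four red-red violation cases rebuild into the same shape: a red
  # parent with two black children, reusing the four grandchild subtrees.
  if color == :black
    if left&.color == :red && left.left&.color == :red
      return Node.new(:red,
                      Node.new(:black, left.left.left, left.left.key, left.left.right),
                      left.key,
                      Node.new(:black, left.right, key, right))
    elsif left&.color == :red && left.right&.color == :red
      return Node.new(:red,
                      Node.new(:black, left.left, left.key, left.right.left),
                      left.right.key,
                      Node.new(:black, left.right.right, key, right))
    elsif right&.color == :red && right.left&.color == :red
      return Node.new(:red,
                      Node.new(:black, left, key, right.left.left),
                      right.left.key,
                      Node.new(:black, right.left.right, right.key, right.right))
    elsif right&.color == :red && right.right&.color == :red
      return Node.new(:red,
                      Node.new(:black, left, key, right.left),
                      right.key,
                      Node.new(:black, right.right.left, right.right.key, right.right.right))
    end
  end
  Node.new(color, left, key, right)
end

def insert(tree, key)
  ins = lambda do |node|
    return Node.new(:red, nil, key, nil) if node.nil?
    if key < node.key
      balance(node.color, ins.call(node.left), node.key, node.right)
    elsif key > node.key
      balance(node.color, node.left, node.key, ins.call(node.right))
    else
      node # key already present: the whole subtree is shared unchanged
    end
  end
  root = ins.call(tree)
  Node.new(:black, root.left, root.key, root.right) # the root is always black
end

t1 = [5, 3, 8, 1].reduce(nil) { |t, k| insert(t, k) }
t2 = insert(t1, 9)
# t2 reuses t1's untouched left subtree instead of copying the whole tree.
t1.left.equal?(t2.left) # => true
```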
I've added a new stat to RubyVM.stat so we can understand how the
red-black tree grows.
[Feature #19538]
Since that category is not enabled by default, making it a
verbose warning is redundant. Enabling performance warnings should
work with the default verbosity level.
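The category can still be opted into explicitly when needed (assuming
the standard `Warning` category API):
```
Warning[:performance] = true # opt in; the category stays off by default
```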