Previously, TestStack#test_machine_stack_size failed pretty consistently
on ARM64 macOS, with Rust code and part of the interpreter used for
per-instruction fallback (rb_vm_invokeblock() and friends) touching the
stack guard page and crashing with SEGV. I've also seen the same test
fail on x64 Linux, though with a different symptom.
`IO#reopen` is very special in that it is able to change the class and
singleton class of IO instances. In its presence, it is not correct to
assume that IO instances have a stable class/singleton class and guard
by comparing identity.
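A minimal sketch of the kind of class change meant here (File::NULL is
used purely for illustration):

```ruby
# Reopening a plain IO onto a File can swap the receiver's class out from
# under any code that cached it.
io = IO.new(IO.sysopen(File::NULL))   # starts life as a plain IO
io.class                              # => IO
io.reopen(File.open(File::NULL))
io.class                              # now File, per the class change described above
```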
Remove !USE_RVARGC code
[Feature #19579]
The Variable Width Allocation feature was turned on by default in Ruby
3.2. Since then, we haven't received bug reports or backports to the
non-Variable Width Allocation code paths, so we assume that nobody is
using it. We also don't plan on maintaining the non-Variable Width
Allocation code, so we are going to remove it.
This works much like the existing `defined` implementation,
but calls out to rb_ivar_defined instead of the more general
rb_vm_defined.
Another difference from the existing `defined` implementation is
that this new instruction has to take the same operands as
the CRuby `defined_ivar` instruction.
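A small snippet (illustrative only) that exercises the path this
instruction covers:

```ruby
# Disassembling `defined?(@ivar)` should show the dedicated instruction
# described above rather than the generic `defined` instruction.
iseq = RubyVM::InstructionSequence.compile("defined?(@foo)")
puts iseq.disasm
```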
If you have a method that takes rest arguments and a splat call that
happens to line up perfectly with that rest, you can just dupe the
array rather than move anything around. We still have to dupe, because
people could have a custom to_a method or something like that, which
means it is hard to guarantee we have exclusive access to that array.
Example:
```ruby
def foo(a, b, *rest)
end
foo(1, 2, *[3, 4])
```
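And a sketch of why the dup still matters; the `Args` class below is
hypothetical:

```ruby
# The splatted object can supply its own array via #to_a, so we can't
# assume exclusive ownership of the result without duping it.
class Args
  def to_a
    @shared ||= [3, 4]   # the same array object is handed out on every call
  end
end

def foo(a, b, *rest)
  rest
end

foo(1, 2, *Args.new)     # rest must be a copy, not the shared array itself
```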
Given that singleton classes don't have an allocator,
we can re-use these bytes to store the attached object
in `rb_classext_struct` without making it larger.
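A small illustration of why the allocator slot is free to repurpose (the
exact error message may differ between versions):

```ruby
# Singleton classes can never be instantiated, so their allocator goes
# unused, while each one always has exactly one attached object.
obj = Object.new
sclass = obj.singleton_class

begin
  sclass.new
rescue TypeError => e
  puts e.message   # something like "can't create instance of singleton class"
end
```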
YJIT: Implement splat for cfuncs. Split exit cases
This also implements a new check for a ruby2_keywords hash as the last
argument of a splat. This does mean that we generate more code, but in
actual benchmarks where we gained speed from this (binarytrees) I
don't see any significant slow down. I did have to struggle here with
the register allocator to find code that didn't allocate too many
registers. It's a bit hard when everything is implicit. But I think I
got the copying down to the minimum our current allocation strategy
allows.
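A couple of illustrative call shapes this covers (with `Kernel#p`
standing in for any cfunc):

```ruby
# A plain splat call into a C function.
args = [1, 2, 3]
p(*args)

# A ruby2_keywords method forwarding its splat: the last element of `args`
# may be a flagged hash, which is the case the new check has to detect.
class Forwarder
  ruby2_keywords def fwd(*args)
    p(*args)
  end
end

Forwarder.new.fwd(1, kw: 2)
```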
* YJIT: Make iseq_get_location consistent with iseq.c
* YJIT: Call it "YJIT entry point"
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
When an object becomes "too complex" (in other words it has too many
variations in the shape tree), we transition it to use a "too complex"
shape and use a hash for storing instance variables.
Without this patch, there were rare cases where shape tree growth could
"explode" and cause performance degradation on what would otherwise have
been cached fast paths.
This patch puts a limit on shape tree growth, and gracefully degrades in
the rare case where there could be a factorial growth in the shape tree.
For example:
```ruby
class NG; end
HUGE_NUMBER.times do
  NG.new.instance_variable_set(:"@unique_ivar_#{_1}", 1)
end
```
We consider objects to be "too complex" when the object's class has more
than SHAPE_MAX_VARIATIONS (currently 8) leaf nodes in the shape tree and
the object introduces a new variation (a new leaf node) associated with
that class.
For example, new variations on instances of the following class would be
considered "too complex" because those instances create more than 8
leaves in the shape tree:
```ruby
class Foo; end
9.times { Foo.new.instance_variable_set(:"@uniq_#{_1}", 1) }
```
However, the following class is *not* too complex because it only has
one leaf in the shape tree:
```ruby
class Foo
  def initialize
    @a = @b = @c = @d = @e = @f = @g = @h = @i = nil
  end
end
9.times { Foo.new }
```
This case is rare, so we don't expect this change to impact performance
of most applications, but it needs to be handled.
Co-Authored-By: Aaron Patterson <tenderlove@ruby-lang.org>
The new version has an option to merge everything into a big
`extern "C"` block and it's nicer.
More importantly, this upgrade fixes an issue where Ubuntu with Clang 12
and macOS with Clang 14 produced a one-line diff for `rb_shape_t`. It was
slightly annoying because we develop on macOS locally.
* YJIT: Make case-when optimization respect === redefinition
Even when a fixnum key is in the dispatch hash, if any of the cases has
had its basic === operation redefined, we need to fall back to checking
each case like the interpreter does. Semantically we're always checking
each case by calling === in order; it's just that this is not observable
when the basic operations are intact.
When all the keys are fixnums, though, we can do the optimization we're
doing right now. Check for this condition.
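A sketch of the kind of redefinition that has to be respected
(illustrative only):

```ruby
# Once Integer#=== is redefined, the `case` below must call it for each
# `when` in order, so the fixnum dispatch-hash fast path can't be taken blindly.
class Integer
  def ===(other)
    puts "=== called with #{other.inspect}"
    self == other
  end
end

case 1
when 0 then :zero
when 1 then :one
end
```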
* Update yjit/src/cruby_bindings.inc.rs
Co-authored-by: Takashi Kokubun <takashikkbn@gmail.com>
* YJIT: Reorder branches for Fixnum opt_case_dispatch
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
* YJIT: Don't support too large values
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
This dispatches to a C function for doing the dynamic lookup. I experimented with chaining on the proc, but wasn't able to detect which call sites would be monomorphic vs polymorphic. There is definitely room for optimization here, but it does reduce exits.
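For context, a sketch (illustrative) of why one call site can be hard to
specialize on a single proc:

```ruby
# The same `call` site may see many different Proc objects at runtime,
# which is what made guarding ("chaining") on one specific proc hard to
# make pay off.
add = proc { |a, b| a + b }
mul = proc { |a, b| a * b }

[add, mul].each do |op|
  op.call(1, 2)   # one call site, multiple procs -> polymorphic
end
```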