In cfd7729ce7 we started using inline
caches for refinements. However, we weren't clearing inline caches when
a method was defined on a reopened refinement module.
Fixes [Bug #20246]
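A minimal sketch of the scenario, assuming the usual refinement setup
(the class, module, and method names below are illustrative, not taken
from the original report):
```ruby
class C
  def foo = :original
end

module R
  refine C do
    # foo is not refined yet
  end
end

using R

def call_foo(obj) = obj.foo # a single call site with its own inline cache

obj = C.new
call_foo(obj) # => :original, populates the inline cache

# Reopen the refinement module and define the method. Clearing the inline
# cache here is what lets the cached call site pick up the new definition.
module R
  refine C do
    def foo = :refined
  end
end

call_foo(obj) # expected :refined; with a stale cache it kept returning :original
```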
This frees FL_USER0 on both T_MODULE and T_CLASS.
Note: prior to this, FL_SINGLETON was never set on T_MODULE,
so checking for `FL_SINGLETON` without first checking that the
object's built-in type was `T_CLASS` was valid. That's no longer the case.
Rather than exposing that an imemo has a flag and four fields, this
changes the implementation to only expose one field (the klass) and
fill the rest with 0. Each imemo type will have to fill in the remaining
values itself.
Previously, every call to vm_ci_new (when the CI was not packable) would
result in a different callinfo being returned. This meant that every
kwarg callsite had its own CI.
When calling, different CIs result in different CCs. These CIs and CCs
both end up persisted on the T_CLASS inside cc_tbl. So in an eval loop
this resulted in a memory leak of both types of object. This also likely
resulted in extra memory used, and extra time searching, in non-eval
cases.
For simplicity in this commit I always allocate a CI object inside
rb_vm_ci_lookup, but ideally we would lazily allocate it only when
needed. I hope to do that as a follow-up in the future.
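A hedged sketch of the leak pattern described above (the method name and
iteration count are illustrative): each `eval` compiles a new kwarg call
site, and before this change every such call site got its own CI and CC,
both retained in the receiver class's cc_tbl.
```ruby
def kw_method(kw:) = kw

100_000.times do
  # Each eval used to add one more CI/CC pair to Object's cc_tbl.
  eval("kw_method(kw: 1)")
end
```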
Previously, we didn't invalidate the method entry wrapped by
VM_METHOD_TYPE_REFINED method entries which could cause calls to
land in the wrong method like it did in the included test.
Do the invalidation, and adjust rb_method_entry_clone() to accommodate
this new invalidation vector.
Fix: cfd7729ce7
See-also: e201b81f79
Found through GC.stress + GC.auto_compact crashes in GH-8932.
Previously, the compaction run within `rb_method_entry_alloc()` could
move the `def->body.iseq.cref` and `iseqptr` set up before the call and
leave the `def` pointing to moved addresses. Nothing was marking `def`
during that GC run.
Low probability reproducer:
```ruby
GC.stress = true
GC.auto_compact = true
arr = []
alloc = 1000.times.map { [] }
alloc = nil
a = arr.first
GC.start
```
Fix memory leak in vm_method
This introduces a unified reference_count to clarify who is referencing a method.
It also allows us to treat the refinement method as the def owner, since it counts itself as a reference.
Co-authored-by: Peter Zhu <peter@peterzhu.ca>
[Bug #19894]
When a copy of a complemented method entry is created, there are two
issues:
1. IMEMO_FL_USER3 is not copied, so the complemented status is not
copied over.
2. In rb_method_entry_clone we increment both alias_count and
complemented_count. However, when we free the method entry in
rb_method_definition_release, we only decrement one of the two
counters, resulting in the rb_method_definition_t being leaked.
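A hedged sketch of code that exercises this path (illustrative only, not
the original test): `Kernel#puts` is defined in a module, so looking it
up through an object yields a complemented method entry, and
`Method#clone` goes through rb_method_entry_clone.
```ruby
obj = Object.new

100_000.times do
  # Clones a Method wrapping a complemented method entry; before the fix
  # the underlying rb_method_definition_t leaked on each iteration.
  obj.method(:puts).clone
end
```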
Co-authored-by: Adam Hess <adamhess1991@gmail.com>
Since Ruby 3.0, refined method invocations have been slow because
resolved methods are not stored in the inline cache, due to a
conservative strategy. However, `using` clears all caches,
so it seems safe to cache resolved method entries.
This patch caches resolved method entries in the inline cache
and clears all inline method caches when `using` is called.
Fixes [Bug #18572]
```ruby
# without refinements
class C
  def foo = :C
end

N = 1_000_000
obj = C.new

require 'benchmark'
Benchmark.bm{|x|
  x.report{N.times{
    obj.foo; obj.foo; obj.foo; obj.foo; obj.foo;
    obj.foo; obj.foo; obj.foo; obj.foo; obj.foo;
    obj.foo; obj.foo; obj.foo; obj.foo; obj.foo;
    obj.foo; obj.foo; obj.foo; obj.foo; obj.foo;
  }}
}

__END__
               user     system      total        real
master     0.362859   0.002544   0.365403 (  0.365424)
modified   0.357251   0.000000   0.357251 (  0.357258)
```
```ruby
# with refinement but without using
class C
  def foo = :C
end

module R
  refine C do
    def foo = :R
  end
end

N = 1_000_000
obj = C.new

require 'benchmark'
Benchmark.bm{|x|
  x.report{N.times{
    obj.foo; obj.foo; obj.foo; obj.foo; obj.foo;
    obj.foo; obj.foo; obj.foo; obj.foo; obj.foo;
    obj.foo; obj.foo; obj.foo; obj.foo; obj.foo;
    obj.foo; obj.foo; obj.foo; obj.foo; obj.foo;
  }}
}

__END__
               user     system      total        real
master     0.957182   0.000000   0.957182 (  0.957212)
modified   0.359228   0.000000   0.359228 (  0.359238)
```
```ruby
# with using
class C
  def foo = :C
end

module R
  refine C do
    def foo = :R
  end
end

N = 1_000_000
using R
obj = C.new

require 'benchmark'
Benchmark.bm{|x|
  x.report{N.times{
    obj.foo; obj.foo; obj.foo; obj.foo; obj.foo;
    obj.foo; obj.foo; obj.foo; obj.foo; obj.foo;
    obj.foo; obj.foo; obj.foo; obj.foo; obj.foo;
    obj.foo; obj.foo; obj.foo; obj.foo; obj.foo;
  }}
}
```
`struct rb_calling_info::cd` is introduced and `rb_calling_info::ci`
is replaced with it to manipulate the inline cache of the iseq during
the method invocation process, so that `ci` can be accessed with
`calling->cd->ci`. It adds one indirection, but this can be justified
by the following points:
1) `vm_search_method_fastpath()` doesn't need `ci`, and neither does
`vm_call_iseq_setup_normal()`. This means that reducing `cd->ci`
accesses in `vm_sendish()` can make it faster.
2) Most method types need to access `ci` only once in theory,
so one additional indirection doesn't matter.
Given that singleton classes don't have an allocator,
we can re-use these bytes to store the attached object
in `rb_classext_struct` without making it larger.
Right now the attached object is stored as an instance variable
and all the call sites that either get or set it have to know how it's
stored.
It's preferable to hide this implementation detail behind accessors
so that it is easier to change how it's stored.
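For context, a sketch of the user-visible counterpart of the attached
object (`Class#attached_object`, available on singleton classes in
recent Rubies); the accessors mentioned above are its internal C-level
equivalents:
```ruby
obj = Object.new
obj.singleton_class.attached_object.equal?(obj) # => true
Object.singleton_class.attached_object          # => Object
String.attached_object                          # raises TypeError (not a singleton class)
```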
Previously, the following crashes with
`CFLAGS=-DRGENGC_CHECK_MODE=2 -DRUBY_DEBUG=1 -fno-inline`:
$ ./miniruby -e 'GC.stress = true; Marshal.dump({})'
It crashes with a write barrier (WB) miss assertion on an edge from the
`Hash` class object to a newly allocated negative method entry.
This is due to usages of vm_ccs_create() running the WB too early,
before the method entry is inserted into the cc table, so before the
reference edge is established. The insertion can trigger GC and promote
the class object, so running the WB after the insertion is necessary.
Move the insertion into vm_ccs_create() and run the WB after the
insertion.
Discovered on CI:
http://ci.rvm.jp/results/trunk-asserts@ruby-sp2-docker/4391770
I noticed this while running test_yjit with --mjit-call-threshold=1,
which redefines `Integer#<`. When Ruby is monkey-patched like that,
MJIT itself could be broken.
Similarly, Ruby scripts could break MJIT in many different ways. I
prepared the same set of hooks as YJIT so that we can override them
and disable MJIT at those moments. Every constant under RubyVM::MJIT
is private, though, so this is unsupported behavior.
Previously, the frozen check happened on `RCLASS_ORIGIN(self)`, which
can return an iclass. The frozen check is supposed to apply to objects
that users can call methods on, while iclasses are hidden from users.
Other mutation methods like Module#{define_method,alias_method,public}
don't do this. Check frozen status on the module itself.
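A hedged illustration of the difference (the mutation method used here
is only an example; see the bug reports for the original reproducers):
with a prepended module, `RCLASS_ORIGIN(self)` is an iclass rather than
the module or class itself, so a frozen check performed on the origin
can miss the user-visible frozen state.
```ruby
class C
  prepend Module.new # gives C an origin iclass
end
C.freeze

# Defining methods on a frozen class is expected to raise FrozenError;
# checking frozen state on the origin iclass could miss C's frozen flag.
C.class_eval { attr_accessor :x }
```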
Fixes [Bug #19164] and [Bug #19166].
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
Other functions are already type-punned elsewhere. rb_f_notimplement is
the only exceptional function that appears literally. We have to take
care of it by hand.
df317151a5 removed the code to free
rb_hook_list_t, so repeated targeting of the same bmethod started
to leak the hook list. You can observe how the maximum memory use
scales with input size in the following script with `/usr/bin/time -v`.
```ruby
o = Object.new
o.define_singleton_method(:foo) {}
trace = TracePoint.new(:return) {}
bmethod = o.method(:foo)
ARGV.first.to_i.times { trace.enable(target:bmethod){} }
4.times {GC.start}
```
After this change the maximum doesn't grow as quickly.
To plug the leak, check whether the hook list is already allocated
when enabling the targeting TracePoint for the bmethod. This fix
also allows multiple TracePoints to target the same bmethod, similar
to other valid TracePoint targets.
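For example, a hedged sketch of two TracePoints targeting the same
bmethod (now a valid pattern, like other TracePoint targets):
```ruby
o = Object.new
o.define_singleton_method(:foo) {}
bmethod = o.method(:foo)

t1 = TracePoint.new(:return) {}
t2 = TracePoint.new(:return) {}

t1.enable(target: bmethod) do
  t2.enable(target: bmethod) do
    o.foo
  end
end
```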
Finally, free the rb_hook_list_t struct when freeing the method
definition it lives on. Freeing in the GC is a good way to avoid
lifetime problems similar to the one fixed in df31715.
[Bug #18031]
In December 2021, we opened an [issue] to solicit feedback regarding the
porting of the YJIT codebase from C99 to Rust. There were some
reservations, but this project was given the go-ahead by Ruby core
developers and Matz. Since then, we have successfully completed the port
of YJIT to Rust.
The new Rust version of YJIT has reached parity with the C version, in
that it passes all the CRuby tests, is able to run all of the YJIT
benchmarks, and performs similarly to the C version (because it works
the same way and largely generates the same machine code). We've even
incorporated some design improvements, such as a more fine-grained
constant invalidation mechanism which we expect will make a big
difference in Ruby on Rails applications.
Because we want to be careful, YJIT is guarded behind a configure
option:
```shell
./configure --enable-yjit # Build YJIT in release mode
./configure --enable-yjit=dev # Build YJIT in dev/debug mode
```
By default, YJIT does not get compiled and cargo/rustc is not required.
If YJIT is built in dev mode, then `cargo` is used to fetch development
dependencies, but when building in release, `cargo` is not required,
only `rustc`. At the moment YJIT requires Rust 1.60.0 or newer.
The YJIT command-line options remain mostly unchanged, and more details
about the build process are documented in `doc/yjit/yjit.md`.
The CI tests have been updated and do not take any more resources than
before.
The development history of the Rust port is available at the following
commit for interested parties:
1fd9573d8b
Our hope is that Rust YJIT will be compiled and included as a part of
system packages and compiled binaries of the Ruby 3.2 release. We do not
anticipate any major problems as Rust is well supported on every
platform which YJIT supports, but to make sure that this process works
smoothly, we would like to reach out to those who take care of building
systems packages before the 3.2 release is shipped and resolve any
issues that may come up.
[issue]: https://bugs.ruby-lang.org/issues/18481
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
Co-authored-by: Noah Gibbs <the.codefolio.guy@gmail.com>
Co-authored-by: Kevin Newton <kddnewton@gmail.com>
Before the new constant cache behavior, caches were invalidated by a
single global variable. You could inspect the value of this variable
with RubyVM.stat(:global_constant_state). This was mostly useful to
verify the behavior of the VM or to test constant loading like in Rails.
With the new constant cache behavior, we introduced
RubyVM.stat(:constant_cache) which returned a hash with symbol keys and
integer values that represented the number of live constant caches
associated with the given symbol. Additionally, we removed the old
RubyVM.stat(:global_constant_state).
This was proven to be not very useful, as it doesn't help you diagnose
constant loading issues. So, instead, we added the global constant state
back into the RubyVM.stat output. However, that number can be misleading:
now when you invalidate something like `Foo::Bar::Baz`, you're actually
invalidating 3 different lists of inline caches.
This commit attempts to get the best of both worlds. We remove
RubyVM.stat(:global_constant_state) like we did originally, as it
doesn't have the same semantic meaning and it could be confusing going
forward. Instead we add RubyVM.stat(:constant_cache_invalidations) and
RubyVM.stat(:constant_cache_misses). These two metrics should provide
enough information to diagnose any constant loading issues, as well as
provide a replacement for the old global constant state.
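A hedged usage example of the new keys (the numbers shown are
illustrative and will vary by process):
```ruby
p RubyVM.stat(:constant_cache_invalidations) # e.g. 2
p RubyVM.stat(:constant_cache_misses)        # e.g. 14

# Both keys also appear in the full stats hash:
RubyVM.stat.slice(:constant_cache_invalidations, :constant_cache_misses)
```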
This was removed as part of [Feature #18589], but some applications were relying on this behavior. So we are bringing it back for better backward compatibility going forward.
This commit reintroduces finer-grained constant cache invalidation.
After 8008fb7 got merged, it was causing issues on token-threaded
builds (such as on Windows).
The issue was that when you're iterating through instruction sequences
and using the translator functions to get back the instruction structs,
you're either using `rb_vm_insn_null_translator` or
`rb_vm_insn_addr2insn2`, depending on whether it's a direct-threading build.
`rb_vm_insn_addr2insn2` does some normalization to always return to
you the non-trace version of whatever instruction you're looking at.
`rb_vm_insn_null_translator` does not do that normalization.
This means that when you're looping through the instructions, an opcode
comparison can change depending on the type of threading that you're
using. This can be very confusing. So, this
commit creates a new translator function
`rb_vm_insn_normalizing_translator` to always return the non-trace
version so that opcode comparisons don't have to worry about different
configurations.
[Feature #18589]
This reverts commits for [Feature #18589]:
* 8008fb7352
"Update formatting per feedback"
* 8f6eaca2e1
"Delete ID from constant cache table if it becomes empty on ISEQ free"
* 629908586b
"Finer-grained inline constant cache invalidation"
MSWin builds on AppVeyor have been crashing since the merge.
Current behavior - caches depend on a global counter. All constant mutations cause caches to be invalidated.
```ruby
class A
  B = 1
end

def foo
  A::B # inline cache depends on global counter
end

foo # populate inline cache
foo # hit inline cache

C = 1 # global counter increments, all caches are invalidated

foo # misses inline cache due to `C = 1`
```
Proposed behavior - caches depend on name components. Only constant mutations with corresponding names will invalidate the cache.
```ruby
class A
  B = 1
end

def foo
  A::B # inline cache depends on constants named "A" and "B"
end

foo # populate inline cache
foo # hit inline cache

C = 1 # caches that depend on the name "C" are invalidated

foo # hits inline cache because the IC only depends on "A" and "B"
```
Examples of breaking the new cache:
```ruby
module C
  # Breaks `foo` cache because "A" constant is set and the cache in foo depends
  # on "A" and "B"
  class A; end
end

B = 1
```
We expect the new cache scheme to be invalidated less often because names aren't frequently reused. With the cache being invalidated less, we can rely on its stability more to keep our constant references fast and reduce the need to throw away generated code in YJIT.
Use ISEQ_BODY macro to get the rb_iseq_constant_body of the ISeq. Using
this macro will make it easier for us to change the allocation strategy
of rb_iseq_constant_body when using Variable Width Allocation.