```
vm_insnhelper.c:2430:49: error: ISO C prohibits argument conversion to union type [-Wpedantic]
2430 | if (!vm_method_cfunc_is(cd_owner, cd, recv, rb_obj_equal)) {
| ^~~~~~~~~~~~
vm_insnhelper.c:2448:42: error: ISO C prohibits argument conversion to union type [-Wpedantic]
2448 | if (cc && check_cfunc(vm_cc_cme(cc), rb_obj_equal)) {
| ^~~~~~~~~~~~
```
and so on.
If a Hash which is empty or only using literals is frozen, we detect
this as a peephole optimization and change the instructions to be
`opt_hash_freeze`.
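As an illustrative check (assuming the peephole applies to a
literal-only frozen Hash as described), the new instruction shows up
in the disassembly:
```ruby
# With this optimization the listing should contain `opt_hash_freeze`
# instead of building the Hash and then sending :freeze.
puts RubyVM::InstructionSequence.compile("{a: 1, b: 2}.freeze").disasm
```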
[Feature #20684]
Co-authored-by: Jean Boussier <byroot@ruby-lang.org>
If an Array which is empty or only using literals is frozen, we detect
this as a peephole optimization and change the instructions to be
`opt_ary_freeze`.
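Similarly, an illustrative check for the Array case:
```ruby
# With this optimization the listing should contain `opt_ary_freeze`.
puts RubyVM::InstructionSequence.compile("[1, 2, 3].freeze").disasm
```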
[Feature #20684]
Co-authored-by: Jean Boussier <byroot@ruby-lang.org>
Previously, proc calls such as:
```ruby
proc{|| }.(**empty_hash)
proc{|b: 1| }.(**r2k_array_with_empty_hash)
```
both allocated hashes unnecessarily, due to two separate code paths.
The first call goes through CALLER_SETUP_ARG/vm_caller_setup_keyword_hash,
and is simple to fix by not duping an empty keyword hash that will be
dropped.
The second case is more involved, in setup_parameters_complex, but is
fixed the exact same way as when the ruby2_keywords hash is not empty,
by flattening the rest array to the VM stack, ignoring the last
element (the empty keyword splat). Add a flatten_rest_array static
function to handle this case.
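As a rough sketch (not part of the patch) of how the saved allocation
can be observed, counting with `GC.stat(:total_allocated_objects)`:
```ruby
empty_hash = {}
pr = proc { || }

before = GC.stat(:total_allocated_objects)
pr.(**empty_hash)
after = GC.stat(:total_allocated_objects)

# The empty keyword splat should no longer dup a Hash, so this
# difference should now be 0 (previously 1).
p after - before
```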
Update test_allocation.rb to automatically convert the method call
allocation tests to proc allocation tests, at least for the calls
that can be converted. With the code changes, all proc call
allocation tests pass, showing that proc calls and method calls
now allocate the same number of objects.
I've audited the allocation tests, and I believe that all of the low
hanging fruit has been collected. All remaining allocations are
either caller side:
* Positional splat + post argument
* Multiple positional splats
* Literal keywords + keyword splat
* Multiple keyword splats
Or callee side:
* Positional splat parameter
* Keyword splat parameter
* Keyword to positional argument conversion for methods that don't accept keywords
* ruby2_keywords method called with keywords
Reapplies abc04e898b, which was reverted at
d56470a27c, with the addition of a bug fix and
test.
Fixes [Bug #20679]
Previously, proc calls such as:
```ruby
proc{|| }.(**empty_hash)
proc{|b: 1| }.(**r2k_array_with_empty_hash)
```
both allocated hashes unnecessarily, due to two separate code paths.
The first call goes through CALLER_SETUP_ARG/vm_caller_setup_keyword_hash,
and is simple to fix by not duping an empty keyword hash that will be
dropped.
The second case is more involved, in setup_parameters_complex, but is
fixed the exact same way as when the ruby2_keywords hash is not empty,
by flattening the rest array to the VM stack, ignoring the last
element (the empty keyword splat). Add a flatten_rest_array static
function to handle this case.
Update test_allocation.rb to automatically convert the method call
allocation tests to proc allocation tests, at least for the calls
that can be converted. With the code changes, all proc call
allocation tests pass, showing that proc calls and method calls
now allocate the same number of objects.
I've audited the allocation tests, and I believe that all of the low
hanging fruit has been collected. All remaining allocations are
either caller side:
* Positional splat + post argument
* Multiple positional splats
* Literal keywords + keyword splat
* Multiple keyword splats
Or callee side:
* Positional splat parameter
* Keyword splat parameter
* Keyword to positional argument conversion for methods that don't accept keywords
* ruby2_keywords method called with keywords
In some cases it is difficult to know whether the called method uses a
block or not when it is invoked via `send` in a general framework. So
this patch stops showing the unused block warning on `send`.
example with test/unit:
```ruby
require 'test/unit'

class T < Test::Unit::TestCase
  def setup
  end

  def test_foo = nil
end
```
=> /home/ko1/ruby/install/master/lib/ruby/gems/3.4.0+0/gems/test-unit-3.6.2/lib/test/unit/fixture.rb:284: warning: the block passed to 'priority_setup' defined at /home/ko1/ruby/install/master/lib/ruby/gems/3.4.0+0/gems/test-unit-3.6.2/lib/test/unit/priority.rb:183 may be ignored
because test/unit can call any setup method (`priority_setup` in this case) with a block.
Maybe we can show the warning again once we provide a way to recognize
whether the called method uses a block or not.
Setting up the fake array is a bit more expensive than would be
expected because `rb_ary_freeze` does a lot of checks and looks up
a shape transition.
If we assume fake arrays will always be frozen, we can precompute
the flags state and just assign it.
Unlike with older revisions earlier in the year, GCC 11 isn't inlining the call
to vm_push_frame() inside invoke_iseq_block_from_c() anymore. We do
want it to be inlined since rb_yield() speed is fairly important.
Logs from -fopt-info-optimized-inline reveal that GCC was blowing its
code size budget inlining invoke_block_from_c_bh() into its various
callers, leaving suboptimal code for its body.
Take away some uses of the `inline` keyword and merge a common tail
call to vm_exec() for overall better code.
This tweak gives about an 18% speedup on a micro benchmark and 1% on the
chunky-png benchmark from yjit-bench. I tested on a Skylake server.
```
$ cat c-to-ruby-call.yml
benchmark:
- 0.upto(10_000_000) {}
$ benchmark-driver --chruby '+patch;master' c-to-ruby-call.yml
Warming up --------------------------------------
0.upto(10_000_000) {} 2.299 i/s - 3.000 times in 1.304689s (434.90ms/i)
Calculating -------------------------------------
+patch master
0.upto(10_000_000) {} 2.299 1.943 i/s - 6.000 times in 2.609393s 3.088353s
Comparison:
0.upto(10_000_000) {}
+patch: 2.3 i/s
master: 1.9 i/s - 1.18x slower
$ ruby run_benchmarks.rb --chruby 'master;+patch' chunky-png
<snip>
---------- ----------- ---------- ----------- ---------- -------------- -------------
bench master (ms) stddev (%) +patch (ms) stddev (%) +patch 1st itr master/+patch
chunky-png 1156.1 0.1 1142.2 0.2 1.01 1.01
---------- ----------- ---------- ----------- ---------- -------------- -------------
```
`defined?(@ivar)` on the non main Ractor has two issues:
1. raising an exception
```ruby
class C
  @iv1 = []
  def self.defined_iv1 = defined?(@iv1)
end

Ractor.new{
  p C.defined_iv1
  #=> can not get unshareable values from instance variables of classes/modules from non-main Ractors (Ractor::IsolationError)
}.take
```
-> Do not raise an exception but return `"instance-variable"` because
it is defined.
2. returning `"instance-variable"` even if it is not defined.
```ruby
class C
  # @iv2 is not defined
  def self.defined_iv2 = defined?(@iv2)
end

Ractor.new{
  p C.defined_iv2 #=> "instance-variable"
}.take
```
-> Return `nil` instead.
This commit splits gc.c into two files:
- gc.c now only contains code not specific to Ruby GC. This includes
code to mark objects (which the GC implementation may choose not to
use) and wrappers for internal APIs that the implementation may need
to use (e.g. locking the VM).
- gc_impl.c now contains the implementation of Ruby's GC. This includes
marking, sweeping, compaction, and statistics. Most importantly,
gc_impl.c only uses public APIs in Ruby and a limited set of functions
exposed in gc.c. This allows us to build gc_impl.c independently of
Ruby and plug Ruby's GC into itself.
**What does this PR do?**
This PR tweaks the `vm_push_frame` function to add an explicit compiler
fence (`atomic_signal_fence`) to ensure profilers that use signals
to interrupt applications (stackprof, vernier, pf2, Datadog profiler)
can safely sample from the signal handler.
**Motivation:**
The `vm_push_frame` function was specifically tweaked in
https://github.com/ruby/ruby/pull/3296 to initialize a frame
before updating the `cfp` pointer.
But since there's nothing stopping the compiler from reordering
the initialization of a frame (`*cfp =`) with the update of the cfp
pointer (`ec->cfp = cfp`) we've been hesitant to rely on this on
the Datadog profiler.
In practice, after some experimentation + talking to folks, this
reordering does not seem to happen.
But since modern compilers have a way for us to exactly tell them
not to do the reordering (`atomic_signal_fence`), this seems even
better.
I've actually extracted `vm_push_frame` into the "Compiler Explorer"
website, which you can use to see the assembly output of this function
across many compilers and architectures: https://godbolt.org/z/3oxd1446K
On that link you can observe two things across many compilers:
1. The compilers are not reordering the writes
2. The barrier does not change the generated assembly output
(== has no cost in practice)
**Additional Notes:**
The checks added in `configure.ac` define two new macros:
* `HAVE_STDATOMIC_H`
* `HAVE_DECL_ATOMIC_SIGNAL_FENCE`
Since Ruby generates an arch-specific `config.h` header with
these macros upon installation, this can be used by profilers
and other libraries to test if Ruby was compiled with the fence enabled.
**How to test the change?**
As I mentioned above, you can check https://godbolt.org/z/3oxd1446K
to confirm the compiled output of `vm_push_frame` does not change
in most compilers (at least all that I've checked on that site).
When we forward calls to C functions, if the callsite is a forwarding
site it might not always be a splat, so we can't use the fast path.
Fixes:
[ruby-core:118418]
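A hypothetical illustration of the shape of call involved (the names
are made up; the point is just a `...` forwarding site whose callee is
a C function):
```ruby
# `puts` is implemented in C; the arguments at this forwarding callsite
# may not be a splat, so the splat fast path cannot be used.
def fwd(...) = puts(...)

fwd("a", "b")
```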
The combination of the (optimized) `send` method or `method_missing`
with forwarding send (`...`) needs to respect the given
`rb_forwarding_call_data`. Otherwise it causes critical errors
such as SEGV.
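A minimal sketch of the affected pattern (method names are illustrative):
```ruby
def target(a, b) = a + b

# Optimized `send` combined with forwarding send (`...`):
def fwd(...) = send(:target, ...)

p fwd(1, 2) # => 3; without the fix this combination could crash (SEGV)
```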
This commit adds `sendforward` and `invokesuperforward` instructions
for forwarding parameters to calls.
Co-authored-by: Matt Valentine-House <matt@eightbitraptor.com>
This patch optimizes forwarding callers and callees. It only optimizes methods that only take `...` as their parameter, and then pass `...` to other calls.
Calls it optimizes look like this:
```ruby
def bar(a) = a
def foo(...) = bar(...) # optimized
foo(123)
```
```ruby
def bar(a) = a
def foo(...) = bar(1, 2, ...) # optimized
foo(123)
```
```ruby
def bar(*a) = a
def foo(...)
list = [1, 2]
bar(*list, ...) # optimized
end
foo(123)
```
All variants of the above but using `super` are also optimized, including a bare super like this:
```ruby
def foo(...)
super
end
```
This patch eliminates intermediate allocations made when calling methods that accept `...`.
We can observe allocation elimination like this:
```ruby
def m
x = GC.stat(:total_allocated_objects)
yield
GC.stat(:total_allocated_objects) - x
end
def bar(a) = a
def foo(...) = bar(...)
def test
m { foo(123) }
end
test
p test # allocates 1 object on master, but 0 objects with this patch
```
```ruby
def bar(a, b:) = a + b
def foo(...) = bar(...)
def test
m { foo(1, b: 2) }
end
test
p test # allocates 2 objects on master, but 0 objects with this patch
```
How does it work?
-----------------
This patch works by using a dynamic stack size when passing forwarded parameters to callees.
The caller's info object (known as the "CI") contains the stack size of the
parameters, so we pass the CI object itself as a parameter to the callee.
When forwarding parameters, the forwarding ISeq uses the caller's CI to determine how much stack to copy, then copies the caller's stack before calling the callee.
The CI at the forwarded call site is adjusted using information from the caller's CI.
I think this description is kind of confusing, so let's walk through an example with code.
```ruby
def delegatee(a, b) = a + b
def delegator(...)
delegatee(...) # CI2 (FORWARDING)
end
def caller
delegator(1, 2) # CI1 (argc: 2)
end
```
Before we call the delegator method, the stack looks like this:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
4| # |
5| delegatee(...) # CI2 (FORWARDING) |
6| end |
7| |
8| def caller |
-> 9| delegator(1, 2) # CI1 (argc: 2) |
10| end |
```
The ISeq for `delegator` is tagged as "forwardable", so when `caller` calls in
to `delegator`, it writes `CI1` on to the stack as a local variable for the
`delegator` method. The `delegator` method has a special local called `...`
that holds the caller's CI object.
Here is the ISeq disasm for `delegator`:
```
== disasm: #<ISeq:delegator@-e:1 (1,0)-(1,39)>
local table (size: 1, argc: 0 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])
[ 1] "..."@0
0000 putself ( 1)[LiCa]
0001 getlocal_WC_0 "..."@0
0003 send <calldata!mid:delegatee, argc:0, FCALL|FORWARDING>, nil
0006 leave [Re]
```
The local called `...` will contain the caller's CI: CI1.
Here is the stack when we enter `delegator`:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
-> 4| # | CI1 (argc: 2)
5| delegatee(...) # CI2 (FORWARDING) | cref_or_me
6| end | specval
7| | type
8| def caller |
9| delegator(1, 2) # CI1 (argc: 2) |
10| end |
```
The CI at `delegatee` on line 5 is tagged as "FORWARDING", so it knows to
memcopy the caller's stack before calling `delegatee`. In this case, it will
memcopy self, 1, and 2 to the stack before calling `delegatee`. It knows how much
memory to copy from the caller because `CI1` contains stack size information
(argc: 2).
Before executing the `send` instruction, we push `...` on the stack. The
`send` instruction pops `...`, and because it is tagged with `FORWARDING`, it
knows to memcopy (using the information in the CI it just popped):
```
== disasm: #<ISeq:delegator@-e:1 (1,0)-(1,39)>
local table (size: 1, argc: 0 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])
[ 1] "..."@0
0000 putself ( 1)[LiCa]
0001 getlocal_WC_0 "..."@0
0003 send <calldata!mid:delegatee, argc:0, FCALL|FORWARDING>, nil
0006 leave [Re]
```
Instruction 0001 puts the caller's CI on the stack. `send` is tagged with
FORWARDING, so it reads the CI and _copies_ the caller's stack to this stack:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
4| # | CI1 (argc: 2)
-> 5| delegatee(...) # CI2 (FORWARDING) | cref_or_me
6| end | specval
7| | type
8| def caller | self
9| delegator(1, 2) # CI1 (argc: 2) | 1
10| end | 2
```
The "FORWARDING" call site combines information from CI1 with CI2 in order
to support passing other values in addition to the `...` value, as well as
perfectly forward splat args, kwargs, etc.
Since we're able to copy the stack from `caller` in to `delegator`'s stack, we
can avoid allocating objects.
I want to do this to eliminate object allocations for delegate methods.
My long-term goal is to implement `Class#new` in Ruby, and it uses `...`.
I was able to implement `Class#new` in Ruby
[here](https://github.com/ruby/ruby/pull/9289).
If we adopt the technique in this patch, then we can optimize allocating
objects that take keyword parameters for `initialize`.
For example, this code will allocate 2 objects: one for `SomeObject`, and one
for the kwargs:
```ruby
SomeObject.new(foo: 1)
```
If we combine this technique, plus implement `Class#new` in Ruby, then we can
reduce allocations for this common operation.
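As a hedged sketch (not the implementation from the linked PR), a
Ruby-level `Class#new` built on `...` forwarding might look roughly like:
```ruby
class Class
  def new(...)
    obj = allocate
    # Forward positionals, keywords, and block without materializing
    # an intermediate Array or Hash for them.
    obj.__send__(:initialize, ...)
    obj
  end
end
```
With the forwarding optimization in this patch, the `...` here would not
need to allocate for the keyword arguments passed to `initialize`.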
Co-Authored-By: John Hawthorn <john@hawthorn.email>
Co-Authored-By: Alan Wu <XrXr@users.noreply.github.com>
[Feature #20205]
Now that chilled strings no longer appear as frozen, there is no
need to offer an API to check for chilled strings.
We however need to change `rb_check_frozen_internal` to no
longer be a macro, as it needs to check for chilled strings.
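For context, a small illustration of the chilled-string behavior this
refers to (in a file without a `# frozen_string_literal:` magic comment;
the exact warning text may differ):
```ruby
str = "chilled literal"
p str.frozen? # => false -- chilled strings no longer appear frozen
str << "!"    # mutating may emit a deprecation warning instead of raising
```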
`RUBY_TRY_UNUSED_BLOCK_WARNING_STRICT=1 ruby ...` will enable
a strict check for the unused block warning.
This option is only a trial to compare the results, so the
environment variable name has not been chosen carefully.
It should be removed before the Ruby 3.4.0 release.
When we're searching for CCs, compare the argc and flags of the CI rather
than comparing pointers. This means we don't need to store a reference
to the CI, and it also naturally "de-duplicates" CC objects.
We can observe the effect with the following code:
```ruby
require "objspace"
hash = {}
p ObjectSpace.memsize_of(Hash)
eval ("a".."zzz").map { |key|
"hash.merge(:#{key} => 1)"
}.join("; ")
p ObjectSpace.memsize_of(Hash)
```
On master:
```
$ ruby -v test.rb
ruby 3.4.0dev (2024-04-15T16:21:41Z master d019b3baec) [arm64-darwin23]
test.rb:3: warning: assigned but unused variable - hash
3424
527736
```
On this branch:
```
$ make runruby
compiling vm.c
linking miniruby
builtin_binary.inc updated
compiling builtin.c
linking static-library libruby.3.4-static.a
ln -sf ../../rbconfig.rb .ext/arm64-darwin23/rbconfig.rb
linking ruby
ld: warning: ignoring duplicate libraries: '-ldl', '-lobjc', '-lpthread'
RUBY_ON_BUG='gdb -x ./.gdbinit -p' ./miniruby -I./lib -I. -I.ext/common ./tool/runruby.rb --extout=.ext -- --disable-gems ./test.rb
2240
2368
```
Co-authored-by: John Hawthorn <jhawthorn@github.com>
If a method `foo` uses a block, another (unrelated) method `foo`
can also receive a block. So try to relax the unused block warning
condition.
```ruby
class C0
  def f = yield
end

class C1 < C0
  def f = nil
end

[C0, C1].each{|c| c.new.f{}} # do not warn, because C0#f uses a block
```
With verbose mode (-w), the interpreter shows a warning if
a block is passed to a method which does not use the given block.
Warning on:
* the invoked method is written in C
* the invoked method is not `initialize`
* not invoked with `super`
* the first time on the call-site with the invoked method
(`obj.foo{}` will be warned about once if `foo` is the same method)
[Feature #15554]
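For example (a minimal illustration; the file name and line number in
the warning are whatever the script happens to be):
```ruby
# run with: ruby -w example.rb
def no_block_use = nil

no_block_use { 1 }
# => warning: the block passed to 'no_block_use' defined at example.rb:2 may be ignored
```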
`Primitive.attr! :use_block` is introduced to declare that primitive
functions (written in C) will use passed block.
For minitest, the tests need some tweaks, so use
ea9caafc07
for `test-bundled-gems`.
This frees FL_USER0 on both T_MODULE and T_CLASS.
Note: prior to this, FL_SINGLETON was never set on T_MODULE,
so checking for `FL_SINGLETON` without first checking that
`FL_TYPE` was `T_CLASS` was valid. That's no longer the case.
* YJIT: Lazily push a frame for specialized C funcs
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
* Fix a comment on pc_to_cfunc
* Rename rb_yjit_check_pc to rb_yjit_lazy_push_frame
* Rename it to jit_prepare_lazy_frame_call
* Fix a typo
* Optimize String#getbyte as well
* Optimize String#byteslice as well
---------
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Rather than exposing that an imemo has a flag and four fields, this
changes the implementation to only expose one field (the klass) and
fills the rest with 0. Each type will have to fill in the values itself.