rb_ary_tmp_new suggests that the array is temporary in some way, but
that's not true; it just creates an array that's hidden and not on the
transient heap. This commit renames it to rb_ary_hidden_new.
The RARRAY_LITERAL_FLAG was added in commit
5871ecf956 to improve CoW performance for
array literals by not keeping track of reference counts.
This commit reverts that change in favor of an alternate implementation
that is more generic and applies to all frozen arrays. Since frozen
arrays cannot be modified, we don't need to set the
RARRAY_SHARED_ROOT_FLAG and we don't need to do reference counting.
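As a rough illustration of why no reference counting is needed (a
minimal sketch; whether `dup` shares the buffer is an internal detail
that varies by Ruby version and array size):
```ruby
FROZEN = [1, 2, 3].freeze

copy = FROZEN.dup # may share FROZEN's buffer internally (copy-on-write)
copy << 4         # a write forces a real copy; the frozen array never changes

p FROZEN # => [1, 2, 3]
p copy   # => [1, 2, 3, 4]
```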
Arrays created as literals during iseq compilation don't need a
reference count since they can never be modified. The previous
implementation would mutate the hidden array's reference count,
causing copy-on-write invalidation.
This commit adds a RARRAY_LITERAL_FLAG for arrays created through
rb_ary_literal_new. Arrays created with this flag do not store a
reference count and are assumed to have an infinite number of references.
Co-authored-by: Jean Boussier <jean.boussier@gmail.com>
In that commit, I fixed a wrong conditional expression that was always
true. However, that seems to have caused a regression. [Bug #18906]
This change removes the condition to make the code always enabled.
It had been enabled until that commit, albeit unintentionally, and even
when enabled it consumes only a tiny bit of memory, so I believe it is
harmless. [Bug #18906]
We need to dump relative offsets for inline storage entries so that
loading iseqs as an array works as well. This commit also has some
minor refactoring to make computing relative ISE information easier.
This should fix the iseq dump / load as array tests we're seeing fail in
CI.
Co-Authored-By: John Hawthorn <john@hawthorn.email>
ISeqs loaded from binary were breaking because the storage partition
calculation had bugs in it. Specifically, it couldn't take into account
the case where inline storage was overallocated (for example, when we
allocate inline storage for an instruction but peephole optimization
eliminates that instruction).
`RUBY_ISEQ_DUMP_DEBUG=to_binary make test-all` would break, and this
patch fixes it.
This commit adds a bitfield to the iseq body that stores offsets inside
the iseq buffer that contain values we need to mark. We can use this
bitfield to mark objects instead of disassembling the instructions.
This commit also groups inline storage entries and adds a counter for
each entry. This allows us to iterate over and mark each entry without
disassembling instructions.
Since we have a bitfield and grouped inline caches, we can mark all
VALUE objects associated with instructions without actually
disassembling the instructions at mark time.
[Feature #18875] [ruby-core:109042]
... because insns_info_index cannot be zero here. It also adds an
invariant check for that.
This change will prevent the following warning of GCC 12.1
http://rubyci.s3.amazonaws.com/arch/ruby-master/log/20220613T000004Z.log.html.gz
```
compile.c:2230:39: warning: array subscript 2147483647 is outside array bounds of ‘struct iseq_insn_info_entry[2147483647]’ [-Warray-bounds]
2230 | insns_info[insns_info_index-1].line_no != adjust->line_no) {
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~
```
Use ISEQ_BODY macro to get the rb_iseq_constant_body of the ISeq. Using
this macro will make it easier for us to change the allocation strategy
of rb_iseq_constant_body when using Variable Width Allocation.
Previously, the right hand side was always evaluated before the
left hand side for constant assignments. For the following:
```ruby
lhs::C = rhs
```
rhs was evaluated before lhs, which is inconsistent with attribute
assignment (lhs.m = rhs), and apparently also does not conform to
JIS 3017:2013 11.4.2.2.3.
Fix this by changing evaluation order. Previously, the above
compiled to:
```
0000 putself ( 1)[Li]
0001 opt_send_without_block <calldata!mid:rhs, argc:0, FCALL|VCALL|ARGS_SIMPLE>
0003 dup
0004 putself
0005 opt_send_without_block <calldata!mid:lhs, argc:0, FCALL|VCALL|ARGS_SIMPLE>
0007 setconstant :C
0009 leave
```
After this change:
```
0000 putself ( 1)[Li]
0001 opt_send_without_block <calldata!mid:lhs, argc:0, FCALL|VCALL|ARGS_SIMPLE>
0003 putself
0004 opt_send_without_block <calldata!mid:rhs, argc:0, FCALL|VCALL|ARGS_SIMPLE>
0006 swap
0007 topn 1
0009 swap
0010 setconstant :C
0012 leave
```
Note that if expr is not a module/class, then a TypeError is not
raised until after the evaluation of rhs. This is because that
error is raised by setconstant. If we wanted to raise TypeError
before evaluation of rhs, we would have to add a VM instruction
for calling vm_check_if_namespace.
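A minimal script (names are illustrative) to observe the new order from
Ruby code:
```ruby
def lhs
  puts "lhs"
  Object
end

def rhs
  puts "rhs"
  42
end

# After this change, prints "lhs" then "rhs";
# previously it printed "rhs" then "lhs".
lhs::C = rhs
```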
Changing assignment order for single assignments caused problems
in the multiple assignment code, revealing that the issue also
affected multiple assignment. Fix the multiple assignment code
so left-to-right evaluation also works for constant assignments.
Do some refactoring of the multiple assignment code to reduce
duplication after adding support for constants. Rename struct
masgn_attrasgn to masgn_lhs_node, since it now handles both
constants and attributes. Add add_masgn_lhs_node static function
for adding data for lhs attribute and constant setting.
Fixes [Bug #15928]
This `NODE` type was used in the pre-YARV implementation to improve
the performance of assignment to a dynamic local variable defined at
the innermost scope. It no longer has any actual difference from
`NODE_DASGN`, except for the node dump.
* Use duparray when possible for argspush
ARGSPUSH is the node we see with a single value pushed to the end of a
splatted array. ARGSCAT is similar, but is used when multiple values are
being concatenated to the list.
Previously, only ARGSCAT had an optimization where, when all the values
were static, it would use duparray instead of newarray to create the
intermediate array.
This commit adds similar behaviour for ARGSPUSH, using duparray instead
of putobject/newarray.
* Replace duparray with putobject before concatarray
When performing duparray/concatarray, we know the intermediate array
created by duparray is never used anywhere else, so we can push the
original array with putobject and use it as a temporary object.
This avoids an extra array allocation for NODE_ARGSPUSH (ex. [*foo, 1])
and NODE_ARGSCAT (ex. [*foo, 1, 2]).
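One way to observe the emitted instructions (exact output varies by
Ruby version):
```ruby
# With this change, the static tail of the array is materialized via
# putobject/duparray rather than newarray before the concatenation.
puts RubyVM::InstructionSequence.compile("foo = [1]; [*foo, 2]").disasm
```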
Dumped iseq binaries cannot contain unnamed symbols/IDs, so ID 0 is
stored instead. As `struct rb_id_table` disallows ID 0, and also for
the sake of distinction, re-assign a new temporary ID based on the local
variable table index when loading from the binary, just as the parser
does.
The implementation of local variable tables was represented as `ID*`,
but it was very hacky: the first element is not an ID but the size of
the table, and the last element is (sometimes) a link to the next local
table, but only when the id tables form a linked list.
This change converts the hacky implementation to a normal struct.
This provides a significant speedup for symbols, true, false,
nil, 0-9, and classes/modules, and a small speedup in most other cases.
Speedups (using included benchmarks):
:symbol :: 60%
0-9 :: 50%
Class/Module :: 50%
nil/true/false :: 20%
integer :: 10%
[] :: 10%
"" :: 3%
One reason this approach is faster is it reduces the number of
VM instructions for each interpolated value.
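For instance, interpolations like the one below hit the new fast path;
the bytecode can be inspected with `RubyVM::InstructionSequence` (a
sketch; exact disassembly varies by version):
```ruby
id = :user_42
puts "cache:#{id}" # Symbols are on the fast path (~60% above)

# On Rubies with this change, the objtostring insn appears here:
puts RubyVM::InstructionSequence.compile('"cache:#{:sym}"').disasm
```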
Initial idea, approach, and benchmarks from Eric Wong. I applied
the same approach against the master branch, updating it to handle
the significant internal changes since this was first proposed 4
years ago (such as CALL_INFO/CALL_CACHE -> CALL_DATA). I also
expanded it to optimize true/false/nil/0-9/class/module, and added
handling of missing methods, refined methods, and RUBY_DEBUG.
This renames the tostring insn to anytostring, and adds an
objtostring insn that implements the optimization. This requires
making a few functions non-static, and adding some non-static
functions.
This disables 4 YJIT tests. Those tests should be reenabled after
YJIT optimizes the new objtostring insn.
Implements [Feature #13715]
Co-authored-by: Eric Wong <e@80x24.org>
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
Co-authored-by: Yusuke Endoh <mame@ruby-lang.org>
Co-authored-by: Koichi Sasada <ko1@atdot.net>
Compared with C methods, built-in methods written in Ruby are
slower when only mandatory parameters are given, because they need to
check the arguments and fill in default values for optional and keyword
parameters (C methods can check the number of parameters with `argc`,
so there is no overhead). Passing only mandatory arguments is common
(optional arguments are the exception in many cases), so it is important
to provide a fast path for such common cases.
`Primitive.mandatory_only?` is a special builtin function used in an
`if` expression like this:
```ruby
def self.at(time, subsec = false, unit = :microsecond, in: nil)
  if Primitive.mandatory_only?
    Primitive.time_s_at1(time)
  else
    Primitive.time_s_at(time, subsec, unit, Primitive.arg!(:in))
  end
end
```
and it makes two ISeqs:
```ruby
# (1)
def self.at(time, subsec = false, unit = :microsecond, in: nil)
  Primitive.time_s_at(time, subsec, unit, Primitive.arg!(:in))
end

# (2)
def self.at(time)
  Primitive.time_s_at1(time)
end
```
and (2) is pointed to by (1). Note that `Primitive.mandatory_only?`
should be used only in the condition of an `if` statement, and the
`if` statement should be the entire method body (you cannot put any
expression before or after the `if` statement).
A method entry with `mandatory_only?` (`Time.at` in the above case)
is marked as `iseq_overload`. When the method is dispatched with only
mandatory arguments (`Time.at(0)` for example), another method entry
with ISeq (2) is created as a mandatory-only method entry, and it is
cached in the inline method cache.
The idea is similar to the one discussed in
https://bugs.ruby-lang.org/issues/16254, but it only checks for
mandatory parameters or more, because in many cases only mandatory
parameters are given. If we find other cases (optional or keyword
parameters used frequently in a way that hurts performance), we can
extend the feature.
`RubyVM.keep_script_lines` enables keeping script lines
for each ISeq and AST. This feature is for debugger/REPL
support.
```ruby
RubyVM.keep_script_lines = true
eval("def foo = nil\ndef bar = nil")
pp RubyVM::InstructionSequence.of(method(:foo)).script_lines
```
Since opt_getinlinecache and opt_setinlinecache point to the same cache
struct, there is no need to track the index of the get instruction and
then store it on the cache struct later when processing the set
instruction. Setting it when processing the get instruction works just
as well.
This change reduces our diff.
Make sure `opt_getinlinecache` is in a block all on its own, and
invalidate it from the interpreter when `opt_setinlinecache` runs.
It will recompile with a filled cache the second time around.
This lets YJIT run well even when the IC for a constant is cold.
Insert generated addresses into st_table for mapping native code
addresses back to info about VM instructions. Export `encoded_insn_data`
to do this. Also some style fixes.
This commit dumps the outer variables table when dumping an iseq to
binary. This fixes a case where Ractors aren't able to tell which outer
variables belong to a lambda after the lambda is loaded via ISeq.load_from_binary.
[Bug #18232] [ruby-core:105504]
This updates the trace instructions to directly dispatch to
opt_send_without_block. So this should cause no slowdown in
non-trace mode.
To enable the tracing of the optimized methods, RUBY_EVENT_C_CALL
and RUBY_EVENT_C_RETURN are added as events to the specialized
instructions.
Fixes [Bug #14870]
Co-authored-by: Takashi Kokubun <takashikkbn@gmail.com>
```ruby
[0] => [0, *, a]
#=> [0] length mismatch (given 1, expected 2+) (NoMatchingPatternError)
```
Ignore test failures of typeprof caused by this change for now.
On -DUSE_EMBED_CI=0, there are more GC allocations and the old code
didn't keep old_operands[0] reachable while allocating. On a Debian
based system, I get a crash requiring erb under GC stress mode. On
macOS, tool/transcode-tblgen.rb runs incorrectly if I put GC.stress=true
as the first line.
Pin matching for local variables and constants is already supported,
and it is fairly simple to add support for instance, class, and global
variables as well.
Note that pin matching for method calls is still not supported
without wrapping them in parentheses (pin expressions). I think that's
for the best, as method calls are far more complex (arguments/blocks).
Implements [Feature #17724]
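A quick illustration of the newly supported pins (requires a Ruby with
this feature):
```ruby
@expected = 42
$threshold = 100

case [42, 100]
in [^@expected, ^$threshold]
  puts "instance and global variable pins matched"
end
```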
Since b2fc592c30 nothing was holding a reference to the dup'd CDHASH
during IBF loading. If a GC happened to run during IBF load then the
copied hash wouldn't have anything to keep it alive. We don't really
want to keep the originally loaded CDHASH hash, so this patch just
overwrites the original hash with the copied / modified hash.
[Bug #17984] [ruby-core:104259]
If the type is ADJUST we don't want to treat it like an INSN so we have
to check the type before reading from `insn_info.events`.
[Bug #18001] [ruby-core:104371]
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
Redo of 34a2acdac788602c14bf05fb616215187badd504 and
931138b00696419945dc03e10f033b1f53cd50f3 which were reverted.
GitHub PR #4340.
This change implements a cache for class variables. Previously there was
no cache for cvars. Cvar access is slow because it needs to travel all the
way up the ancestor tree before returning the cvar value. The deeper the
ancestor tree, the slower cvar access will be.
The benefits of the cache are more visible with a higher number of
included modules due to the way Ruby looks up class variables. The
benchmark here includes 26 modules and shows that, with the cache, this
branch is 6.5x faster when accessing class variables.
```
compare-ruby: ruby 3.1.0dev (2021-03-15T06:22:34Z master 9e5105c) [x86_64-darwin19]
built-ruby: ruby 3.1.0dev (2021-03-15T12:12:44Z add-cache-for-clas.. c6be009) [x86_64-darwin19]
| |compare-ruby|built-ruby|
|:--------|-----------:|---------:|
|vm_cvar | 5.681M| 36.980M|
| | -| 6.51x|
```
Benchmark.ips calling `ActiveRecord::Base.logger` from within a Rails
application. ActiveRecord::Base.logger has 71 ancestors. The more
ancestors a tree has, the clearer the speed increase. That is, if Base
had only one ancestor, we'd see no improvement. This benchmark is run on
a vanilla Rails application.
Benchmark code:
```ruby
require "benchmark/ips"
require_relative "config/environment"
Benchmark.ips do |x|
x.report "logger" do
ActiveRecord::Base.logger
end
end
```
Ruby 3.0 master / Rails 6.1:
```
Warming up --------------------------------------
logger 155.251k i/100ms
Calculating -------------------------------------
```
Ruby 3.0 with cvar cache / Rails 6.1:
```
Warming up --------------------------------------
logger 1.546M i/100ms
Calculating -------------------------------------
logger 14.857M (± 4.8%) i/s - 74.198M in 5.006202s
```
Lastly, we ran a benchmark to demonstrate the difference between master
and our cache when the number of modules increases. This benchmark
measures 1 ancestor, 30 ancestors, and 100 ancestors.
Ruby 3.0 master:
```
Warming up --------------------------------------
1 module 1.231M i/100ms
30 modules 432.020k i/100ms
100 modules 145.399k i/100ms
Calculating -------------------------------------
1 module 12.210M (± 2.1%) i/s - 61.553M in 5.043400s
30 modules 4.354M (± 2.7%) i/s - 22.033M in 5.063839s
100 modules 1.434M (± 2.9%) i/s - 7.270M in 5.072531s
Comparison:
1 module: 12209958.3 i/s
30 modules: 4354217.8 i/s - 2.80x (± 0.00) slower
100 modules: 1434447.3 i/s - 8.51x (± 0.00) slower
```
Ruby 3.0 with cvar cache:
```
Warming up --------------------------------------
1 module 1.641M i/100ms
30 modules 1.655M i/100ms
100 modules 1.620M i/100ms
Calculating -------------------------------------
1 module 16.279M (± 3.8%) i/s - 82.038M in 5.046923s
30 modules 15.891M (± 3.9%) i/s - 79.459M in 5.007958s
100 modules 16.087M (± 3.6%) i/s - 81.005M in 5.041931s
Comparison:
1 module: 16279458.0 i/s
100 modules: 16087484.6 i/s - same-ish: difference falls within error
30 modules: 15891406.2 i/s - same-ish: difference falls within error
```
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
RubyVM::AST.of(Thread::Backtrace::Location) returns a node that
corresponds to the location. Typically, the node is a method call, but
not always.
This change also includes iseq dump/load support of node_ids for each
instruction.
This is done by merging `rb_ast_body_t#line_count` and `#script_lines`.
Fortunately, `line_count == RARRAY_LEN(script_lines)` was always
satisfied. When script_lines is saved, it holds an array of lines;
when not saved, it holds a Fixnum that represents the old line_count.
This option makes the parser keep the original source as an array of
the original code lines. This feature exploits the mechanism of
`SCRIPT_LINES__` but records only the specified code that is passed to
RubyVM::AST.of or .parse, instead of recording all parsed program texts.
The checkmatch instruction with VM_CHECKMATCH_TYPE_CASE calls
=== without a call cache. Emit a send instruction to make the call
instead. It includes a call cache.
The call cache improves throughput of using when statements to check the
class of a given object. This is useful for say, JSON serialization.
Use of a regular send instead of checkmatch also avoids taking the VM
lock every time, which is good for multi-ractor workloads.
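The kind of dispatch this speeds up, sketched below: each `when` runs
`Module#===` on the candidate object, now through a send with an inline
call cache:
```ruby
def tag(obj)
  case obj
  when Integer then :int # now compiled as a cached === send
  when String  then :str
  else :other
  end
end

p tag(1)   # => :int
p tag("x") # => :str
```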
```
Calculating -------------------------------------
master post
vm_case_classes 11.013M 16.172M i/s - 6.000M times in 0.544795s 0.371009s
vm_case_lit 2.296 2.263 i/s - 1.000 times in 0.435606s 0.441826s
vm_case 74.098M 64.338M i/s - 6.000M times in 0.080974s 0.093257s
Comparison:
vm_case_classes
post: 16172114.4 i/s
master: 11013316.9 i/s - 1.47x slower
vm_case_lit
master: 2.3 i/s
post: 2.3 i/s - 1.01x slower
vm_case
master: 74097858.6 i/s
post: 64338333.9 i/s - 1.15x slower
```
The vm_case benchmark is a bit slower post patch, possibly due to the
larger instruction sequence. The benchmark dispatches using
opt_case_dispatch, so it was not running checkmatch and does not make
the === call post patch.
It looks for "checkmatch", when it could be applied to anything that has
"newrange".
Making the optimization target more ranges might only be fair play when
all ranges are frozen. So I'm putting a reference to the ticket that
froze all ranges.
[Feature #15504]
Before this change, CDHASH operands were built as plain hashes when
loaded from binary. Without setting up the hash with the correct
st_table type, the hash can sometimes be an ar_table. When the hash is
an ar_table, lookups can call the `eql?` method on keys of the hash,
which makes the `opt_case_dispatch` instruction not "leaf" as it
implicitly declares.
The following script trips the stack canary for checking the leaf
attribute for `opt_case_dispatch` on VM_CHECK_MODE > 0 (enabled by
default with RUBY_DEBUG).
```ruby
rb_vm_iseq = RubyVM::InstructionSequence

iseq = rb_vm_iseq.compile(<<-EOF)
  case Class.new(String).new("foo")
  when "foo"
    42
  end
EOF

puts rb_vm_iseq.load_from_binary(iseq.to_binary).eval
```
This commit changes the binary loading logic to build the CDHASH with the
right st_table type. The dumping logic and the dump format stay the
same.
609de71f04 fixes the issue by using the
`throw` insn if `ensure` is used. However, that patch introduced an
additional `throw` even when it is not needed. This patch solves
that issue.
This issue was pointed out by @mame.
Rational literals are integers suffixed with `r`. They tend to
be part of more complex expressions like `123/456r`, but in theory
they can stand alone. When such "bare" rational literals are passed to
a case-when branch, we have to take care of them. Fixes [Bug #17854]
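A minimal example of the case this fixes:
```ruby
# A "bare" rational literal used directly in a when branch.
def kind(x)
  case x
  when 123r then :rational_literal
  else :other
  end
end

p kind(123r) # => :rational_literal
```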
Create the class variable inline cache entry on write instead of on
read. Once it's in the inline cache we never have to make one again. We
want to eventually put the value into the cache, and the best
opportunity to do that is when the value is written.
... then, new_insn_core extracts nd_line(node).
Also, if the macro "EXPERIMENTAL_ISEQ_NODE_ID" is defined, this changeset
keeps nd_node_id(node) for each instruction. This is intended for
TypeProf to identify which AST::Node corresponds to each instruction.
This patch was originally authored by @yui-knk for showing which column a
NoMethodError occurred at.
https://github.com/ruby/ruby/compare/master...yui-knk:feature/node_id
Co-Authored-By: Yuichiro Kaneko <yui-knk@ruby-lang.org>
add_ensure_iseq() adds the ensure block to the end of
jumps such as next/redo/return. However, if a rescue
clause is in the body, that rescue catches exceptions
raised in the ensure clause:
```ruby
iter do
  next
rescue
  R
ensure
  raise
end
```
In this case, R should not be executed, but it was executed without this
patch.
Fixes [Bug #13930]
Fixes [Bug #16618]
Some of the tests were written by @jeremyevans: https://github.com/ruby/ruby/pull/4291
In regular assignment, Ruby evaluates the left hand side before
the right hand side. For example:
```ruby
foo[0] = bar
```
Calls `foo`, then `bar`, then `[]=` on the result of `foo`.
Previously, multiple assignment didn't work this way. If you did:
```ruby
abc.def, foo[0] = bar, baz
```
Ruby would previously call `bar`, then `baz`, then `abc`, then
`def=` on the result of `abc`, then `foo`, then `[]=` on the
result of `foo`.
This change makes multiple assignment similar to single assignment,
changing the evaluation order of the above multiple assignment code
to calling `abc`, then `foo`, then `bar`, then `baz`, then `def=` on
the result of `abc`, then `[]=` on the result of `foo`.
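A small script to observe the fixed order (illustrative names; `qux=`
stands in for the `def=` setter from the example above):
```ruby
$order = []

def abc
  $order << :abc
  @abc ||= Object.new.tap do |o|
    o.define_singleton_method(:qux=) { |_v| $order << :qux= }
  end
end

def foo
  $order << :foo
  @foo ||= []
end

def bar; $order << :bar; 1; end
def baz; $order << :baz; 2; end

abc.qux, foo[0] = bar, baz
p $order # => [:abc, :foo, :bar, :baz, :qux=] with this change
```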
Implementing this is challenging with the stack-based virtual machine.
We need to keep track of all of the left hand side attribute setter
receivers and setter arguments, and then keep track of the stack level
while handling the assignment processing, so we can issue the
appropriate topn instructions to get the receiver. Here's an example
of how the multiple assignment is executed, showing the stack and
instructions:
```
self # putself
abc # send
abc, self # putself
abc, foo # send
abc, foo, 0 # putobject 0
abc, foo, 0, [bar, baz] # evaluate RHS
abc, foo, 0, [bar, baz], baz, bar # expandarray
abc, foo, 0, [bar, baz], baz, bar, abc # topn 5
abc, foo, 0, [bar, baz], baz, abc, bar # swap
abc, foo, 0, [bar, baz], baz, def= # send
abc, foo, 0, [bar, baz], baz # pop
abc, foo, 0, [bar, baz], baz, foo # topn 3
abc, foo, 0, [bar, baz], baz, foo, 0 # topn 3
abc, foo, 0, [bar, baz], baz, foo, 0, baz # topn 2
abc, foo, 0, [bar, baz], baz, []= # send
abc, foo, 0, [bar, baz], baz # pop
abc, foo, 0, [bar, baz] # pop
[bar, baz], foo, 0, [bar, baz] # setn 3
[bar, baz], foo, 0 # pop
[bar, baz], foo # pop
[bar, baz] # pop
```
As multiple assignment must deal with splats, post args, and any level
of nesting, it gets quite a bit more complex than this in non-trivial
cases. To handle this, struct masgn_state is added to keep
track of the overall state of the mass assignment, which stores a linked
list of struct masgn_attrasgn, one for each assigned attribute.
This adds a new optimization that replaces a topn 1/pop instruction
combination with a single swap instruction for multiple assignment
to non-aref attributes.
This new approach isn't compatible with one of the optimizations
previously used: in the case where the multiple assignment return value
was not needed, there was no lhs splat, and one of the left hand sides
used an attribute setter. This removes that optimization. Removing
the optimization allowed for removing the POP_ELEMENT and adjust_stack
functions.
This adds a benchmark to measure how much slower multiple
assignment is with the correct evaluation order.
This benchmark shows:
* 4-9% decrease for attribute sets
* 14-23% decrease for array member sets
* Basically same speed for local variable sets
Importantly, it shows no significant difference between the popped
(where return value of the multiple assignment is not needed) and
!popped (where return value of the multiple assignment is needed)
cases for attribute and array member sets. This indicates the
previous optimization, which was dropped in the evaluation
order fix and only affected the popped case, is not important to
performance.
Fixes [Bug #4443]
Previously, defined? could result in many more method calls than
the code it was checking. `defined? a.b.c.d.e.f` generated 15 calls,
with `a` called 5 times, `b` called 4 times, etc. This was due to
the fact that defined? works in a recursive manner, but it previously
did not cache results. So for `defined? a.b.c.d.e.f`, the logic was
similar to
```ruby
return nil unless defined? a
return nil unless defined? a.b
return nil unless defined? a.b.c
return nil unless defined? a.b.c.d
return nil unless defined? a.b.c.d.e
return nil unless defined? a.b.c.d.e.f
"method"
```
With this change, the logic is similar to the following, without
the creation of a local variable:
```ruby
return nil unless defined? a
_ = a
return nil unless defined? _.b
_ = _.b
return nil unless defined? _.c
_ = _.c
return nil unless defined? _.d
_ = _.d
return nil unless defined? _.e
_ = _.e
return nil unless defined? _.f
"method"
```
In addition to eliminating redundant method calls for defined
statements, this greatly simplifies the instruction sequences by
eliminating duplication. Previously:
```
0000 putnil ( 1)[Li]
0001 putself
0002 defined func, :a, false
0006 branchunless 73
0008 putself
0009 opt_send_without_block <calldata!mid:a, argc:0, FCALL|VCALL|ARGS_SIMPLE>
0011 defined method, :b, false
0015 branchunless 73
0017 putself
0018 opt_send_without_block <calldata!mid:a, argc:0, FCALL|VCALL|ARGS_SIMPLE>
0020 opt_send_without_block <calldata!mid:b, argc:0, ARGS_SIMPLE>
0022 defined method, :c, false
0026 branchunless 73
0028 putself
0029 opt_send_without_block <calldata!mid:a, argc:0, FCALL|VCALL|ARGS_SIMPLE>
0031 opt_send_without_block <calldata!mid:b, argc:0, ARGS_SIMPLE>
0033 opt_send_without_block <calldata!mid:c, argc:0, ARGS_SIMPLE>
0035 defined method, :d, false
0039 branchunless 73
0041 putself
0042 opt_send_without_block <calldata!mid:a, argc:0, FCALL|VCALL|ARGS_SIMPLE>
0044 opt_send_without_block <calldata!mid:b, argc:0, ARGS_SIMPLE>
0046 opt_send_without_block <calldata!mid:c, argc:0, ARGS_SIMPLE>
0048 opt_send_without_block <calldata!mid:d, argc:0, ARGS_SIMPLE>
0050 defined method, :e, false
0054 branchunless 73
0056 putself
0057 opt_send_without_block <calldata!mid:a, argc:0, FCALL|VCALL|ARGS_SIMPLE>
0059 opt_send_without_block <calldata!mid:b, argc:0, ARGS_SIMPLE>
0061 opt_send_without_block <calldata!mid:c, argc:0, ARGS_SIMPLE>
0063 opt_send_without_block <calldata!mid:d, argc:0, ARGS_SIMPLE>
0065 opt_send_without_block <calldata!mid:e, argc:0, ARGS_SIMPLE>
0067 defined method, :f, true
0071 swap
0072 pop
0073 leave
```
After change:
```
0000 putnil ( 1)[Li]
0001 putself
0002 dup
0003 defined func, :a, false
0007 branchunless 52
0009 opt_send_without_block <calldata!mid:a, argc:0, FCALL|VCALL|ARGS_SIMPLE>
0011 dup
0012 defined method, :b, false
0016 branchunless 52
0018 opt_send_without_block <calldata!mid:b, argc:0, ARGS_SIMPLE>
0020 dup
0021 defined method, :c, false
0025 branchunless 52
0027 opt_send_without_block <calldata!mid:c, argc:0, ARGS_SIMPLE>
0029 dup
0030 defined method, :d, false
0034 branchunless 52
0036 opt_send_without_block <calldata!mid:d, argc:0, ARGS_SIMPLE>
0038 dup
0039 defined method, :e, false
0043 branchunless 52
0045 opt_send_without_block <calldata!mid:e, argc:0, ARGS_SIMPLE>
0047 defined method, :f, true
0051 swap
0052 pop
0053 leave
```
This fixes issues where, for pathologically small examples, Ruby would
generate huge instruction sequences.
Unfortunately, implementing this support is kind of a hack. This adds another
parameter to compile_call for whether we should assume the receiver is already
present on the stack, and has defined? set that parameter for the specific
case where it is compiling a method call where the receiver is also a method
call.
defined_expr0 also takes an additional parameter for whether it should leave
the results of the method call on the stack. If that argument is true, in
the case where the method isn't defined, we jump to the pop before the leave,
so the extra result is not left on the stack. This requires space for an
additional label, so lfinish now needs to be able to hold 3 labels.
Fixes [Bug #17649]
Fixes [Bug #13708]
Peephole optimization doesn't play well with the find pattern, at
least. The only case where a pattern match could have
unreachable patterns is when we have a lasgn/dasgn node, which
shouldn't happen in real life.
Fixes https://bugs.ruby-lang.org/issues/17534
Callinfo was being written in to an array and the GC would not see the
reference on the stack. `new_insn_send` creates a new callinfo object,
then it calls `new_insn_core`. `new_insn_core` allocates a new INSN
linked list item, which can end up calling `xmalloc` which will trigger
a GC:
70cd351c7c/compile.c (L968-L969)
Since the callinfo object isn't on the stack, the GC won't see it, and
it can get collected. This patch just refactors `new_insn_send` to keep
the object on the stack.
Co-authored-by: John Hawthorn <john@hawthorn.email>
We don't need nop padding when the catch tables are only for break /
next / redo, so let's avoid them. This eliminates nop padding in
many lambdas.
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
The constant cache `IC` is accessed in a non-atomic manner and has
thread-safety issues, so Ruby 3.0 disables the use of the const cache on
non-main Ractors.
This patch enables it by introducing `imemo_constcache` and allocating
one on every re-fill of the const cache, like `imemo_callcache`.
[Bug #17510]
Now `IC` only has one entry, `IC::entry`, and it points to an
`iseq_inline_constant_cache_entry` managed as a T_IMEMO object.
`IC` is now an atomic data structure, so `rb_mjit_before_vm_ic_update()`
and `rb_mjit_after_vm_ic_update()` are no longer needed.
Previously we would add code to check if an ivar was defined when using
`@foo ||= 123`, which was slower than `@foo || (@foo = 123)` when `@foo`
was already defined.
Recently 01b7d5acc7 made it so that
accessing an undefined variable no longer generates a warning, making
the defined check unnecessary and both statements exactly equal.
This commit avoids emitting the defined instruction when compiling
NODE_OP_ASGN_OR with a NODE_IVAR.
Before:
```
$ ruby --dump=insn -e '@foo ||= 123'
== disasm: #<ISeq:<main>@-e:1 (1,0)-(1,12)> (catch: FALSE)
0000 putnil ( 1)[Li]
0001 defined instance-variable, :@foo, false
0005 branchunless 14
0007 getinstancevariable :@foo, <is:0>
0010 dup
0011 branchif 20
0013 pop
0014 putobject 123
0016 dup
0017 setinstancevariable :@foo, <is:0>
0020 leave
```
After:
```
$ ./ruby --dump=insn -e '@foo ||= 123'
== disasm: #<ISeq:<main>@-e:1 (1,0)-(1,12)> (catch: FALSE)
0000 getinstancevariable :@foo, <is:0> ( 1)[Li]
0003 dup
0004 branchif 13
0006 pop
0007 putobject 123
0009 dup
0010 setinstancevariable :@foo, <is:0>
0013 leave
```
This seems to be about 50% faster in this benchmark:
```ruby
require "benchmark/ips"

class Foo
  def initialize
    @foo = nil
  end

  def test1
    @foo ||= 123
  end

  def test2
    @foo || (@foo = 123)
  end
end

FOO = Foo.new

Benchmark.ips do |x|
  x.report("test1", "FOO.test1")
  x.report("test2", "FOO.test2")
end
```
Before:
```
$ ruby benchmark_ivar.rb
Warming up --------------------------------------
test1 1.957M i/100ms
test2 3.125M i/100ms
Calculating -------------------------------------
test1 20.030M (± 1.7%) i/s - 101.780M in 5.083040s
test2 31.227M (± 4.5%) i/s - 156.262M in 5.015936s
```
After:
```
$ ./ruby benchmark_ivar.rb
Warming up --------------------------------------
test1 3.205M i/100ms
test2 3.197M i/100ms
Calculating -------------------------------------
test1 32.066M (± 1.1%) i/s - 163.440M in 5.097581s
test2 31.438M (± 4.9%) i/s - 159.860M in 5.098961s
```
* `GC.auto_compact=` and `GC.auto_compact` can be used to control when
compaction runs. Setting `auto_compact=` to true will cause
compaction to occur during major collections. At the moment,
compaction adds significant overhead to major collections, so please
test first!
[Feature #17176]
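A minimal usage sketch:
```ruby
GC.auto_compact = true # major collections will now also compact
GC.start               # force a major GC, which compacts the heap
p GC.auto_compact      # => true
```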
iv_index_tbl manages instance variable indexes (ID -> index).
This data structure should be synchronized with other Ractors,
so this introduces some VM locks.
This patch also introduces an atomic ivar cache used by the
set/getinlinecache instructions. To make ivar cache (IVC) updates
possible, we changed the iv_index_tbl data structure to manage
(ID -> entry), where an entry holds a serial and an index. The IVC
points to this entry so that cache updates become atomic.
The write barrier wasn't being called for this object, so add the
missing WB. Automatic compaction moved the reference because it didn't
know about the relationship (that's how I found the missing WB).
This commit introduces the Ractor mechanism to run Ruby programs in
parallel. See doc/ractor.md for more details about Ractor.
See ticket [Feature #17100] for the implementation details
and discussions.
[Feature #17100]
This commit does not complete the implementation. You can find
many bugs when using Ractor. Also, the specification will be changed,
so this feature is experimental. You will see a warning when
you create the first Ractor with `Ractor.new`.
I hope this feature can help protect programmers from thread-safety issues.
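A minimal usage sketch (Ruby 3.0+; the first `Ractor.new` prints the
experimental warning):
```ruby
# Arguments are passed to the block; the result is retrieved with #take.
r = Ractor.new(10) { |n| n * 2 }
p r.take # => 20
```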
This reverts commit 3a4be429b5.
To fix following warning:
```
compiling ../compile.c
../compile.c:6336:20: warning: variable 'line' is uninitialized when used here [-Wuninitialized]
ADD_INSN(head, line, putnil); /* allocate stack for cached #deconstruct value */
^~~~
../compile.c:220:57: note: expanded from macro 'ADD_INSN'
ADD_ELEM((seq), (LINK_ELEMENT *) new_insn_body(iseq, (line), BIN(insn), 0))
^~~~
../compile.c:6327:13: note: initialize the variable 'line' to silence this warning
int line;
^
= 0
1 warning generated.
```
Use ID instead of GENTRY for gvars.
Global variables are compiled into GENTRY (a pointer to struct
rb_global_entry). This patch replaces GENTRY with ID and
makes the code simpler.
We need to look the GENTRY up from the ID every time (st_lookup), so
additional overhead is introduced.
However, the performance of accessing global variables is not
important nowadays, and this simplicity helps Ractor development.
Moved this hack mark to an argument to `compile_hash`.
> Bad Hack: temporarily mark hash node with flag so
> compile_hash can compile call differently.
Formerly, branch coverage measurement counters were generated for each
compilation traverse of the AST. However, an ensure clause node is
traversed twice: once for the normal-exit case (the resulting bytecode is
embedded in its outer scope), and once for the exceptional case (the
resulting bytecode is used in the catch table). Two branch coverage
counters were generated for the two cases, but that is not desired.
This changeset revamps the internal representation of branch coverage
measurement. Branch coverage counters are generated only at the first
visit of a branch node. Visiting the same node reuses the
already-generated counter, so double counting is avoided.
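A way to observe branch counters for code inside an ensure clause, using
the stock coverage library (a sketch; the exact result keys vary by
version):
```ruby
require "coverage"

File.write("target.rb", <<~RUBY)
  def f(flag)
    yield
  ensure
    if flag # a branch inside an ensure clause
      :a
    else
      :b
    end
  end
  f(true) { }
RUBY

Coverage.start(branches: true)
load "./target.rb"
# With this change, the if/else inside ensure gets one counter pair,
# not one per compilation traverse.
pp Coverage.result.select { |path, _| path.end_with?("target.rb") }
```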
This makes:
```ruby
args = [1, 2, -> {}]; foo(*args, &args.pop)
```
call `foo` with 1, 2, and the lambda, in addition to passing the
lambda as a block. This is different from the previous behavior,
which passed the lambda as a block but not as a regular argument,
which goes against the expected left-to-right evaluation order.
This is how Ruby already compiled arguments if using leading
arguments, trailing arguments, or keywords in the same call.
This works by disabling the optimization that skipped duplicating
the array during the splat (splatarray instruction argument
switches from false to true). In the above example, the splat
call duplicates the array. I've tested, and cases where a
local variable or symbol is used do not duplicate the array,
so I don't expect this to decrease the performance of most Ruby
programs. However, programs such as:
```ruby
foo(*args, &bar)
```
could see a decrease in performance, if `bar` is a method call
and not a local variable.
This is not a perfect solution, there are ways to get around
this:
```ruby
args = Struct.new(:a).new([:x, :y])
def args.to_a; a; end
def args.to_proc; a.pop; ->{}; end
foo(*args, &args)
# calls foo with 1 argument (:x)
# not 2 arguments (:x and :y)
```
A perfect solution would require completely disabling the
optimization.
Fixes [Bug #16504]
Fixes [Bug #16500]
These crashes are due to alignment issues, casting ADJUST to INSN
and then accessing after the end of the ADJUST. These patches
come from Stefan Sperling <stsp@apache.org>, who reported the
issue.
The GC will not disassemble incomplete instruction sequences. So it is
important that when instructions are being assembled, any objects the
instructions point at should not be moved.
This patch implements a fixed width array that pins its references.
When the instructions are done being assembled, the pinning array goes
away and the objects inside the iseqs are allowed to move.