This caused an issue when `defined?` appeared in an `if` condition: if a
compile-time-known logical short-circuit happened before the `defined?`, its
instructions were compiled but never appended to the instruction sequence.
The catch table entry that `defined?` compilation produces was still added to
the iseq even though the instructions weren't there. This caused faulty
exception handling in the method.
The solution is to not add the catch table entry for `defined?` after a
compile-time-known logical short-circuit.
This shouldn't affect much code; it only matters for cases like the
following, which can occur during debugging:
```ruby
if false && defined?(Some::CONSTANT)
  "more code..."
end
```
Fixes [Bug #20501]
`add_adjust_info` will increment the `insns_info_index`, so we need to set
the `node_id` to -1 to prevent a "Conditional jump or move depends on
uninitialised value" error in Valgrind.
Previously, this would delete the key from `h` before keyword
splatting `h`. This goes against how Ruby handles `f(*a, &a.pop)`
and similar expressions.
Fix this by having the compiler check whether the block pass
expression is safe. If it is not safe, then dup the keyword
splatted hash before evaluating the block pass expression.
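A minimal sketch of the observable difference (using the method name `f`
from the example below; the hash contents are hypothetical):
```ruby
def f(**kw, &blk) = kw

h = { key: proc { } }
# `**h` is splatted before `&h.delete(:key)` is evaluated. Previously the
# delete ran first, so f received {}; with the fix the keyword splatted
# hash is duped first, so f receives { key: #<Proc> }.
p f(**h, &h.delete(:key))
```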
For the expression `h=nil; f(**h, &h.delete(:key))`:
VM instructions before:
```
0000 putnil ( 1)[Li]
0001 setlocal_WC_0 h@0
0003 putself
0004 getlocal_WC_0 h@0
0006 getlocal_WC_0 h@0
0008 putobject :key
0010 opt_send_without_block <calldata!mid:delete, argc:1, ARGS_SIMPLE>
0012 splatkw
0013 send <calldata!mid:f, argc:1, ARGS_BLOCKARG|FCALL|KW_SPLAT>, nil
0016 leave
```
VM instructions after:
```
0000 putnil ( 1)[Li]
0001 setlocal_WC_0 h@0
0003 putself
0004 putspecialobject 1
0006 newhash 0
0008 getlocal_WC_0 h@0
0010 opt_send_without_block <calldata!mid:core#hash_merge_kwd, argc:2, ARGS_SIMPLE>
0012 getlocal_WC_0 h@0
0014 putobject :key
0016 opt_send_without_block <calldata!mid:delete, argc:1, ARGS_SIMPLE>
0018 send <calldata!mid:f, argc:1, ARGS_BLOCKARG|FCALL|KW_SPLAT|KW_SPLAT_MUT>, nil
0021 leave
```
This is the same as 07d3bf4832, except that
it removes unnecessary hash allocations when using the prism compiler.
Fixes [Bug #20640]
If a Hash that is empty or contains only literals is frozen, we detect
this in the peephole optimizer and change the instructions to
`opt_hash_freeze`.
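For illustration, a hedged sketch of the kind of code this covers (constant
names are hypothetical):
```ruby
EMPTY  = {}.freeze                          # empty hash literal
CONFIG = { timeout: 5, retries: 3 }.freeze  # literal-only hash
# Both literal-plus-`.freeze` sequences are rewritten by the peephole
# optimizer into a single opt_hash_freeze instruction.
```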
[Feature #20684]
Co-authored-by: Jean Boussier <byroot@ruby-lang.org>
If an Array that is empty or contains only literals is frozen, we detect
this in the peephole optimizer and change the instructions to
`opt_ary_freeze`.
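Likewise, a hedged sketch of code this covers (constant names are
hypothetical):
```ruby
NONE  = [].freeze           # empty array literal
WORDS = %w[foo bar].freeze  # literal-only array
# Rewritten by the peephole optimizer into a single opt_ary_freeze.
```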
[Feature #20684]
Co-authored-by: Jean Boussier <byroot@ruby-lang.org>
The `f(arg, *arg, **arg, **arg)` case was previously not optimized.
The optimizer didn't optimize this case because of the multiple
keyword splats, and the compiler didn't optimize it because the
`f(*arg, **arg, **arg)` optimization added in
0ee3960685 didn't apply.
I found it difficult to apply this optimization without changing
the `setup_args_core` API, since by the time you get to the ARGSCAT
case, you don't know whether you were called recursively or directly,
so I'm not sure if it was possible to know at that point whether the
array allocation could be avoided.
This changes the dup_rest argument in `setup_args_core` from an int
to a pointer to int, which allows us to track whether we have allocated
a caller-side array for multiple splats or splat+post across
recursive calls. The pointed-to value (*dup_rest) determines the
`splatarray` argument: if *dup_rest is 1, use `splatarray true`
(caller-side array allocation) and set *dup_rest back to 0, ensuring
only a single `splatarray true` per method call.
Before calling `setup_args_core`, check whether the array allocation
can be avoided safely using `splatarray false`. Optimizable cases are:
```
// f(*arg)
SPLAT
// f(1, *arg)
ARGSCAT
LIST
// f(*arg, **arg)
ARGSPUSH
SPLAT
HASH nd_brace=0
// f(1, *arg, **arg)
ARGSPUSH
ARGSCAT
LIST
HASH nd_brace=0
```
If so, dup_rest is set to 0 instead of 1 to avoid the allocation.
After calling `setup_args_core`, check the flag. If the flag
includes `VM_CALL_ARGS_SPLAT` and the pointed-to value has changed,
indicating `splatarray true` was used, then also set
`VM_CALL_ARGS_SPLAT_MUT` in the flag.
My initial attempt at this broke the `f(*ary, &ary.pop)` test,
because we were not duplicating the ary in the splat even though
it was modified later (evaluation order issue). The initial attempt
would also break `f(*ary, **ary.pop)` or `f(*ary, kw: ary.pop)` cases
for the same reason. I added test cases for those evaluation
order issues.
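A sketch of the evaluation-order hazard those tests cover (method names
hypothetical):
```ruby
def f(*a, &blk) = a

ary = [1, 2, proc { }]
# The splat must be evaluated (and copied, via `splatarray true`) before
# the block pass expression runs; otherwise `ary.pop` would mutate the
# shared array and f would see [1, 2] instead of [1, 2, #<Proc>].
p f(*ary, &ary.pop)
```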
Add a static function, setup_args_dup_rest_p, that checks whether a given
node is safe. Call it on the block pass node to determine whether
the block pass node is safe, and also on each of the hash
key/value nodes to check that they are safe. If any are not safe,
then set dup_rest = 1 so that `splatarray true` will be used to
avoid the evaluation order issue.
This new approach has the effect of optimizing most cases of
literal keywords after positional splats. Previously, only
static keyword hashes after positional splats avoided array
allocation for the splat. Now, most dynamic keyword hashes
after positional splats also avoid array allocation.
Add allocation tests for dynamic keyword hashes after
positional splats.
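A hedged sketch of such an allocation test (names and counting approach
hypothetical):
```ruby
def m(*a, **kw); end

ary = [1, 2]
key = :x
m(*ary, key => 1)  # warm up caches
before = GC.stat(:total_allocated_objects)
m(*ary, key => 1)  # dynamic keyword hash after a positional splat
p GC.stat(:total_allocated_objects) - before
# With this change the splat uses `splatarray false`, so the delta should
# be one array allocation smaller than before the patch.
```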
setup_args_dup_rest_p is currently fairly conservative. It
could definitely be expanded to handle additional node types
to reduce allocations in additional cases.
For calls such as:
```ruby
m(*ary, a: 2, **h)
m(*ary, **h, **h, **h)
```
Where m does not take a positional argument splat, there was previously
an array allocation (splatarray true) to dup ary, even though it was not
necessary to do so. This is because the elimination of the array allocation
(splatarray false) was performed in the optimizer, and the optimizer didn't
handle this case, because the instructions for the keywords can be of
arbitrary length.
Move part of the optimization from the optimizer to the compiler,
detecting parse trees of the form:
```
ARGS_PUSH:
  head: SPLAT
  tail: HASH (without brace)
```
and using `splatarray false` instead of `splatarray true` for them.
Unfortunately, moving part of the optimization to the compiler broke
the hash allocation elimination optimization for calls of the
form:
```ruby
m(*ary, a: 2)
```
That's because the compiler had already set splatarray false,
and the optimizer code was looking for splatarray true.
Split the array allocation elimination and hash allocation
elimination in the optimizer so that the hash allocation
elimination will still apply if the compiler performs the
splatarray false optimization.
While working on a separate issue we found that in some cases
`ary_heap_realloc` was being called on frozen arrays. To fix this, this
change does the following:
1) Updates `rb_ary_freeze` to assert the type is an array, return if
already frozen, and shrink the capacity if it is not embedded, shared
or a shared root.
2) Replaces `rb_obj_freeze` with `rb_ary_freeze` when the object is
always an array.
3) In `ary_heap_realloc`, ensure the new capa is set with
`ARY_SET_CAPA`. Previously the changed capacity was never recorded.
4) Adds an assertion to `ary_heap_realloc` that the array is not frozen.
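A hedged way to observe the capacity shrink of item 1 from Ruby (memsize
values and the array growth factor are implementation details):
```ruby
require "objspace"

a = []
520.times { |i| a << i }        # capacity grows geometrically past 520 slots
before = ObjectSpace.memsize_of(a)
a.freeze                        # rb_ary_freeze shrinks capacity to the size
after = ObjectSpace.memsize_of(a)
p before: before, after: after  # expect after <= before
```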
Some of this work was originally done in
https://github.com/ruby/ruby/pull/2640, referencing this issue:
https://bugs.ruby-lang.org/issues/16291. There didn't appear to be any
objections to that PR; it appears to have simply lost traction.
The original PR changed arrays and strings at the same time;
this PR only does arrays. It was also old enough that rather than revive
that branch I've made a new one. I added Lourens as co-author in addition
to Aaron, who helped me with this patch.
The original PR made this change for performance reasons, and while
that's still true for this PR, the goal of this PR is to avoid
calling `ary_heap_realloc` on frozen arrays. The capacity should be
shrunk _before_ the array is frozen, not after.
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
Co-Authored-By: methodmissing <lourens@methodmissing.com>
* Set VM_CALL_KWARG flag first and reuse it to avoid checking kw_arg twice
* Fix comment for VM_CALL_ARGS_SIMPLE
* Make VM_CALL_ARGS_SIMPLE set-site match its comment
This commit adds `sendforward` and `invokesuperforward` for forwarding
parameters to calls
Co-authored-by: Matt Valentine-House <matt@eightbitraptor.com>
This patch optimizes forwarding callers and callees. It only optimizes methods that take `...` as their sole parameter, and then pass `...` to other calls.
Calls it optimizes look like this:
```ruby
def bar(a) = a
def foo(...) = bar(...) # optimized
foo(123)
```
```ruby
def bar(a) = a
def foo(...) = bar(1, 2, ...) # optimized
foo(123)
```
```ruby
def bar(*a) = a
def foo(...)
list = [1, 2]
bar(*list, ...) # optimized
end
foo(123)
```
All variants of the above but using `super` are also optimized, including a bare super like this:
```ruby
def foo(...)
super
end
```
This patch eliminates intermediate allocations made when calling methods that accept `...`.
We can observe allocation elimination like this:
```ruby
def m
x = GC.stat(:total_allocated_objects)
yield
GC.stat(:total_allocated_objects) - x
end
def bar(a) = a
def foo(...) = bar(...)
def test
m { foo(123) }
end
test
p test # allocates 1 object on master, but 0 objects with this patch
```
```ruby
def bar(a, b:) = a + b
def foo(...) = bar(...)
def test
m { foo(1, b: 2) }
end
test
p test # allocates 2 objects on master, but 0 objects with this patch
```
How does it work?
-----------------
This patch works by using a dynamic stack size when passing forwarded parameters to callees.
The caller's info object (known as the "CI") contains the stack size of the
parameters, so we pass the CI object itself as a parameter to the callee.
When forwarding parameters, the forwarding ISeq uses the caller's CI to determine how much stack to copy, then copies the caller's stack before calling the callee.
The CI at the forwarded call site is adjusted using information from the caller's CI.
I think this description is kind of confusing, so let's walk through an example with code.
```ruby
def delegatee(a, b) = a + b
def delegator(...)
delegatee(...) # CI2 (FORWARDING)
end
def caller
delegator(1, 2) # CI1 (argc: 2)
end
```
Before we call the delegator method, the stack looks like this:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
4| # |
5| delegatee(...) # CI2 (FORWARDING) |
6| end |
7| |
8| def caller |
-> 9| delegator(1, 2) # CI1 (argc: 2) |
10| end |
```
The ISeq for `delegator` is tagged as "forwardable", so when `caller` calls in
to `delegator`, it writes `CI1` on to the stack as a local variable for the
`delegator` method. The `delegator` method has a special local called `...`
that holds the caller's CI object.
Here is the ISeq disasm for `delegator`:
```
== disasm: #<ISeq:delegator@-e:1 (1,0)-(1,39)>
local table (size: 1, argc: 0 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])
[ 1] "..."@0
0000 putself ( 1)[LiCa]
0001 getlocal_WC_0 "..."@0
0003 send <calldata!mid:delegatee, argc:0, FCALL|FORWARDING>, nil
0006 leave [Re]
```
The local called `...` will contain the caller's CI: CI1.
Here is the stack when we enter `delegator`:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
-> 4| # | CI1 (argc: 2)
5| delegatee(...) # CI2 (FORWARDING) | cref_or_me
6| end | specval
7| | type
8| def caller |
9| delegator(1, 2) # CI1 (argc: 2) |
10| end |
```
The CI at `delegatee` on line 5 is tagged as "FORWARDING", so it knows to
memcopy the caller's stack before calling `delegatee`. In this case, it will
memcopy self, 1, and 2 to the stack before calling `delegatee`. It knows how much
memory to copy from the caller because `CI1` contains stack size information
(argc: 2).
Before executing the `send` instruction, we push `...` on the stack. The
`send` instruction pops `...`, and because it is tagged with `FORWARDING`, it
knows to memcopy (using the information in the CI it just popped):
```
== disasm: #<ISeq:delegator@-e:1 (1,0)-(1,39)>
local table (size: 1, argc: 0 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])
[ 1] "..."@0
0000 putself ( 1)[LiCa]
0001 getlocal_WC_0 "..."@0
0003 send <calldata!mid:delegatee, argc:0, FCALL|FORWARDING>, nil
0006 leave [Re]
```
Instruction 0001 puts the caller's CI on the stack. `send` is tagged with
FORWARDING, so it reads the CI and _copies_ the caller's stack to this stack:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
4| # | CI1 (argc: 2)
-> 5| delegatee(...) # CI2 (FORWARDING) | cref_or_me
6| end | specval
7| | type
8| def caller | self
9| delegator(1, 2) # CI1 (argc: 2) | 1
10| end | 2
```
The "FORWARDING" call site combines information from CI1 with CI2 in order
to support passing other values in addition to the `...` value, as well as
to perfectly forward splat args, kwargs, etc.
Since we're able to copy the stack from `caller` into `delegator`'s stack, we
can avoid allocating objects.
I want to do this to eliminate object allocations for delegating methods.
My long-term goal is to implement `Class#new` in Ruby, and it uses `...`.
I was able to implement `Class#new` in Ruby
[here](https://github.com/ruby/ruby/pull/9289).
If we adopt the technique in this patch, then we can optimize allocation of
objects whose `initialize` takes keyword parameters.
For example, this code will allocate 2 objects: one for `SomeObject`, and one
for the kwargs:
```ruby
SomeObject.new(foo: 1)
```
If we combine this technique, plus implement `Class#new` in Ruby, then we can
reduce allocations for this common operation.
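A rough sketch of what a Ruby-level `Class#new` built on `...` could look
like (not the actual code from the linked PR):
```ruby
class Class
  def new(...)
    obj = allocate
    # `...` forwarding is optimized by this patch, so no intermediate
    # array/hash is allocated for the forwarded arguments
    obj.send(:initialize, ...)
    obj
  end
end
```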
Co-Authored-By: John Hawthorn <john@hawthorn.email>
Co-Authored-By: Alan Wu <XrXr@users.noreply.github.com>
Ref: https://github.com/ruby/ruby/pull/10872
These should be the last internal uses of the old `Data` API
inside Ruby itself. Some uses remain in a couple of default gems.
Previously, ensure ISEQs took their first line number from the
line number coming from the AST. However, if this comes from
an empty `begin`..`end` inside a method, it can point all the
way back to the method declaration. Instead, this commit changes
it to be the first line number of the ensure block itself.
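A hedged illustration (line references are to this snippet):
```ruby
def m         # line 1: the ensure ISEQ previously reported this line,
  begin       # line 2: because the empty begin..end carried the method's
  ensure      # line 3: line number; now first_lineno comes from the
    :cleanup  # line 4: ensure block itself
  end
end
```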
The first_lineno field is only accessible through manual ISEQ
compilation or through tracepoint. Either way, this will be more
accurate for targeting going forward.
With embedded strings we often have some space left in the slot, which
we can use to store the string's hash code.
It's probably only worth it for string literals, as they are the ones
most likely to be used as hash keys.
We chose to store the hash code right after the string terminator so as
to make it easy and fast to compute, and to not require one more union
in RString.
```
compare-ruby: ruby 3.4.0dev (2024-04-22T06:32:21Z main f77618c1fa) [arm64-darwin23]
built-ruby: ruby 3.4.0dev (2024-04-22T10:13:03Z interned-string-ha.. 8a1a32331b) [arm64-darwin23]
last_commit=Precompute embedded string literals hash code
| |compare-ruby|built-ruby|
|:-----------|-----------:|---------:|
|symbol | 39.275M| 39.753M|
| | -| 1.01x|
|dyn_symbol | 37.348M| 37.704M|
| | -| 1.01x|
|small_lit | 29.514M| 33.948M|
| | -| 1.15x|
|frozen_lit | 27.180M| 33.056M|
| | -| 1.22x|
|iseq_lit | 27.391M| 32.242M|
| | -| 1.18x|
```
Co-Authored-By: Étienne Barrié <etienne.barrie@gmail.com>
This patch adds an `int line_count` field to the `rb_ast_body_t` structure.
With this, we no longer store the line count by casting `script_lines` to a Fixnum.
## Background
Ref https://github.com/ruby/ruby/pull/10618
In the PR above, we have decoupled IMEMO from `rb_ast_t`.
This means we could lift the five-word restriction of the structure
that forced us to unionize `rb_ast_t *` and `FIXNUM` in one field.
## Relating refactor
- Remove the second parameter of `rb_ruby_ast_new()` function
## Attention
I will remove the code that assigns -1 to line_count in `rb_binding_add_dynavars()`
of vm.c, because I don't think it is necessary.
But I will make another PR for this so that we can atomically revert it
in case I was wrong (see the comment on the code).
This patch removes the `VALUE flags` member from the `rb_ast_t` structure making `rb_ast_t` no longer an IMEMO object.
## Background
We are trying to make the Ruby parser generated from parse.y a universal parser that can be used by other implementations such as mruby.
To achieve this, it is necessary to exclude VALUE and IMEMO from parse.y, AST, and NODE.
## Summary (file by file)
- `rubyparser.h`
- Remove the `VALUE flags` member from `rb_ast_t`
- `ruby_parser.c` and `internal/ruby_parser.h`
- Use a `TypedData_Make_Struct` VALUE which wraps `rb_ast_t` in `ast_alloc()` so that GC can manage it
- You can retrieve `rb_ast_t` from the VALUE by `rb_ruby_ast_data_get()`
- Change the return type of `rb_parser_compile_XXXX()` functions from `rb_ast_t *` to `VALUE`
- `rb_ruby_ast_new()`, which internally calls `ast_alloc()`, is used to create the VALUE vast outside ruby_parser.c
- `iseq.c` and `vm_core.h`
- Amend the first parameter of `rb_iseq_new_XXXX()` functions from `rb_ast_body_t *` to `VALUE`
- This keeps the VALUE of AST on the machine stack to prevent being removed by GC
- `ast.c`
- Almost all of the changes replace `rb_ast_t *ast` with `VALUE vast` (sorry for the big diff)
- Fix `node_memsize()`
- Now it includes `rb_ast_local_table_link`, `tokens` and `script_lines`
- `compile.c`, `load.c`, `node.c`, `parse.y`, `proc.c`, `ruby.c`, `template/prelude.c.tmpl`, `vm.c` and `vm_eval.c`
- Follow-up due to the above changes
- `imemo.{c|h}`
- If an object with `imemo_ast` appears, consider it a bug
Co-authored-by: Nobuyoshi Nakada <nobu@ruby-lang.org>
In cases where `RubyVM::InstructionSequence.load_from_binary()` is
passed a param other than a String, we attempt to call the
`RSTRING_LENINT` macro on it, which can cause a segfault.
For example:
```
var_0 = 0
RubyVM::InstructionSequence.load_from_binary(var_0)
```
This commit adds a type check to raise unless we are provided
a String.
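With the check in place, the repro above raises instead of crashing
(presumably a TypeError, assuming a standard type check):
```ruby
RubyVM::InstructionSequence.load_from_binary(0)
# before: possible segfault
# after:  raises, e.g. wrong argument type Integer (expected String)
```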
`RUBY_TRY_UNUSED_BLOCK_WARNING_STRICT=1 ruby ...` will enable
strict check for unused block warning.
This option is only a trial to compare the results, so the
environment variable name has not been carefully considered.
It should be removed before the Ruby 3.4.0 release.
If a method `foo` uses a block, another (unrelated) method `foo`
can also receive a block. So try to relax the unused block warning
condition:
```ruby
class C0
def f = yield
end
class C1 < C0
def f = nil
end
[C0.new, C1.new].each { |obj| obj.f { :block } } # do not warn
```
Previously it would bypass the `FL_ABLE` check, but
since shapes introduction, it started having a different
behavior than `OBJ_FREEZE`, as it would onyl set the `FL_FREEZE`
flag, but not update the shape.
I have no indication of this causing a bug yet, but it seems
like a trap waiting to happen.
This patch is part of universal parser work.
## Summary
- Decouple VALUE from members below:
- `(struct parser_params *)->debug_lines`
- `(rb_ast_t *)->body.script_lines`
- Instead, they are now `rb_parser_ary_t *`
- They can also be a `(VALUE)FIXNUM` as before to hold line count
- `ISEQ_BODY(iseq)->variable.script_lines` remains VALUE
- In order to do this,
- Add `VALUE script_lines` param to `rb_iseq_new_with_opt()`
- Introduce `rb_parser_build_script_lines_from()` to convert `rb_parser_ary_t *` into `VALUE`
## Other details
- Extend `rb_parser_ary_t *`. It previously could only store `rb_parser_ast_token *`; now it can store script_lines, too
- Change tactics of building the top-level `SCRIPT_LINES__` in `yycompile0()`
- Before: While parsing, each line of the script is added to `SCRIPT_LINES__[path]`
- After: After `yyparse(p)`, `SCRIPT_LINES__[path]` will be built from `p->debug_lines`
- Remove the second parameter of `rb_parser_set_script_lines()` to make it simple
- Introduce `script_lines_free()` to be called from `rb_ast_free()` because the GC no longer takes care of the script_lines
- Introduce `rb_parser_string_deep_copy()` in parse.y to maintain script_lines when `rb_ruby_parser_free()` called
- With regard to this, please see *Future tasks* below
## Future tasks
- Decouple IMEMO from `rb_ast_t *`
- This lifts the five-word restriction on Ruby objects,
- So we will be able to move the ownership of the `lex.string_buffer` from parser to AST
- Then we remove `rb_parser_string_deep_copy()` to make the whole thing simple
`super(){}`, `super{}` and `super(&b)` don't use the block given to the
calling method, so emit the unused block warning when a block is passed
to a method that only uses these `super` expressions.
e.g. for `def f = super{B1}`, warn on `f{B2}`, because `B2` is never used.
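A hedged sketch of the warned case:
```ruby
class C0
  def f = yield
end

class C1 < C0
  def f = super { :B1 }  # always passes its own literal block to super
end

C1.new.f { :B2 }  # warns: the B2 block is never used
```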
`super()` (not zsuper) passes the given block along, and
it can be used:
```ruby
class C0
def foo; yield; end
end
class C1 < C0
def foo; super(); end
end
C1.new.foo{p :block} #=> :block
```