Commit graph

1434 commits

Author SHA1 Message Date
Jeremy Evans 0f90a24a81 Introduce Allocationless Anonymous Splat Forwarding
Ruby makes it easy to delegate all arguments from one method to another:

```ruby
def f(*args, **kw)
  g(*args, **kw)
end
```

Unfortunately, this indirection decreases performance.  One reason it
decreases performance is that this allocates an array and a hash per
call to `f`, even if `args` and `kw` are not modified.

Due to Ruby's ability to modify almost anything at runtime, it's
difficult to avoid the array allocation in the general case. For
example, it's not safe to avoid the allocation in a case like this:

```ruby
def f(*args, **kw)
  foo(bar)
  g(*args, **kw)
end
```

This is because `foo` may be `eval` and `bar` may be a string
referencing `args` or `kw`.

To fix this correctly, you need to perform something similar to escape
analysis on the variables.  However, there is a case where you can
avoid the allocation without doing escape analysis, and that is when
the splat variables are anonymous:

```ruby
def f(*, **)
  g(*, **)
end
```

When splat variables are anonymous, it is not possible to reference
them directly; they can only be used as splats in calls to other
methods.  Since that is the case, if `f` is called with a regular
splat and a keyword splat, it can pass the arguments directly to
`g` without copying them, avoiding allocation.  For example:

```ruby
def g(a, b:)
  a + b
end

def f(*, **)
  g(*, **)
end

a = [1]
kw = {b: 2}

f(*a, **kw)
```

I call this technique: Allocationless Anonymous Splat Forwarding.

This is implemented using a couple of additional iseq param flags,
anon_rest and anon_kwrest.  If anon_rest is set, an array splat is
passed when calling the method, and the array splat can be used
without modification, then `setup_parameters_complex` does not
duplicate it.  Similarly, if anon_kwrest is set and a keyword splat
is passed when calling the method, `setup_parameters_complex` does
not duplicate it.
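
A rough way to observe the saving (a sketch; allocation counts are approximate and depend on the build):

```ruby
def g(a, b:) = a + b

def fwd_named(*args, **kw)
  g(*args, **kw)   # allocates a fresh Array and Hash on every call
end

def fwd_anon(*, **)
  g(*, **)         # can forward the caller's splats without copying
end

a = [1]
kw = {b: 2}

base = GC.stat(:total_allocated_objects)
100_000.times { fwd_named(*a, **kw) }
puts GC.stat(:total_allocated_objects) - base   # roughly two extra objects per call

base = GC.stat(:total_allocated_objects)
100_000.times { fwd_anon(*a, **kw) }
puts GC.stat(:total_allocated_objects) - base   # markedly lower with this change
```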
2024-01-24 18:25:55 -08:00
Jeremy Evans b8516d6d01 Add pushtoarray VM instruction
This instruction is similar to concattoarray, but it takes the
number of arguments to push to the array, removes that number
of arguments from the stack, and adds them to the array now at
the top of the stack.

This allows `f(*a, 1)` to allocate only a single array on the
caller side (which can be reused on the callee side in the case of
`def f(*a)`). Prior to this commit, `f(*a, 1)` would generate
3 arrays:

* a dupped by splatarray true
* 1 wrapped in array by newarray
* a dupped again by concatarray

Instructions Before for `a = []; f(*a, 1)`:

```
0000 newarray                               0                         (   1)[Li]
0002 setlocal_WC_0                          a@0
0004 putself
0005 getlocal_WC_0                          a@0
0007 splatarray                             true
0009 putobject_INT2FIX_1_
0010 newarray                               1
0012 concatarray
0013 opt_send_without_block                 <calldata!mid:f, argc:1, ARGS_SPLAT|FCALL>
0015 leave
```

Instructions After for `a = []; f(*a, 1)`:

```
0000 newarray                               0                         (   1)[Li]
0002 setlocal_WC_0                          a@0
0004 putself
0005 getlocal_WC_0                          a@0
0007 splatarray                             true
0009 putobject_INT2FIX_1_
0010 pushtoarray                            1
0012 opt_send_without_block                 <calldata!mid:f, argc:1, ARGS_SPLAT|ARGS_SPLAT_MUT|FCALL>
0014 leave
```

With these changes, method calls to Ruby methods should
implicitly allocate at most one array.
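
For reference, a listing like the above can be reproduced with `RubyVM::InstructionSequence` (a sketch; the exact output differs across versions and compile options):

```ruby
# On builds with this commit, the listing shows pushtoarray where older
# builds emitted a newarray/concatarray pair.
puts RubyVM::InstructionSequence.compile("a = []; f(*a, 1)").disasm
```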

Ignore typeprof bundled gem failure due to unrecognized instruction.
2024-01-24 18:25:55 -08:00
Jeremy Evans 6e06d0d180 Add concattoarray VM instruction
This instruction is similar to concatarray, but assumes the first
object is already an array, and appends to it directly.  This is
different from concatarray, which creates a new array instead
of appending to an existing array.

Additionally, for both concatarray and concattoarray, if the second
argument cannot be converted to an array, it is simply pushed onto
the array instead of being wrapped in a new array and concatenated.
This saves an array allocation in that case.
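
To illustrate the non-array case (a sketch): `3` has no usable `to_a`, so it is pushed as a single element rather than being wrapped first.

```ruby
def f(*a) = a

# 3 cannot be converted to an array, so it is pushed directly onto the
# result instead of first being wrapped in a one-element array.
f(*[1, 2], *[1, 2], *3)  # => [1, 2, 1, 2, 3]
```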

This allows `f(*a, *a, *1)` to allocate only a single array on the
caller side (which can be reused on the callee side in the case of
`def f(*a)`). Prior to this commit, `f(*a, *a, *1)` would generate
4 arrays:

* a dupped by splatarray true
* a dupped again by first concatarray
* 1 wrapped in array by third splatarray
* result of [*a, *a] dupped by second concatarray

Instructions Before for `a = []; f(*a, *a, *1)`:

```
0000 newarray                               0                         (   1)[Li]
0002 setlocal_WC_0                          a@0
0004 putself
0005 getlocal_WC_0                          a@0
0007 splatarray                             true
0009 getlocal_WC_0                          a@0
0011 splatarray                             false
0013 concatarray
0014 putobject_INT2FIX_1_
0015 splatarray                             false
0017 concatarray
0018 opt_send_without_block                 <calldata!mid:g, argc:1, ARGS_SPLAT|ARGS_SPLAT_MUT|FCALL>
0020 leave
```

Instructions After for `a = []; f(*a, *a, *1)`:

```
0000 newarray                               0                         (   1)[Li]
0002 setlocal_WC_0                          a@0
0004 putself
0005 getlocal_WC_0                          a@0
0007 splatarray                             true
0009 getlocal_WC_0                          a@0
0011 concattoarray
0012 putobject_INT2FIX_1_
0013 concattoarray
0014 opt_send_without_block                 <calldata!mid:f, argc:1, ARGS_SPLAT|ARGS_SPLAT_MUT|FCALL>
0016 leave
```
2024-01-24 18:25:55 -08:00
Jeremy Evans 22e488464a Add VM_CALL_ARGS_SPLAT_MUT callinfo flag
This flag is set when the caller has already created a new array to
handle a splat, such as for `f(*a, b)` and `f(*a, *b)`.  Previously,
if `f` was defined as `def f(*a)`, these calls would create an extra
array on the callee side, instead of using the new array created
by the caller.

This modifies `setup_args_core` to set the flag whenever it would add
a `splatarray true` instruction.  However, when `splatarray true` is
changed to `splatarray false` in the peephole optimizer, to avoid
unnecessary allocations on the caller side, the flag must be removed.
Add `optimize_args_splat_no_copy` and have the peephole optimizer call
that.  This significantly simplifies the related peephole optimizer
code.

On the callee side, in `setup_parameters_complex`, set
`args->rest_dupped` to true if the flag is set.
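
A sketch of the safety condition this relies on, at the Ruby level:

```ruby
def f(*rest)
  rest << :extra   # the callee may treat its rest array as its own
  rest
end

b = [1, 2]
p f(*b, 3)   # => [1, 2, 3, :extra]; the caller built a fresh array for
             # [*b, 3], so with the flag set f can adopt it without copying
p b          # => [1, 2]; the caller's array is unaffected either way
```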

This takes a similar approach for optimizing regular splats to the
one previously used for keyword splats in
d2c41b1bff (via VM_CALL_KW_SPLAT_MUT).
2024-01-24 18:25:55 -08:00
Takashi Kokubun 27c1dd8634
YJIT: Allow inlining ISEQ calls with a block (#9622)
* YJIT: Allow inlining ISEQ calls with a block

* Leave a TODO comment about u16 inline_block
2024-01-23 19:36:23 +00:00
Nobuyoshi Nakada 127b19ab56 Use line numbers as builtin-index
The order of iseq may differ from the order of tokens; typically,
`while`/`until` conditions are put after the body.

These orders can be matched by using line numbers as builtin indexes,
but this introduces the restriction that multiple `cexpr!` and
`cstmt!` calls cannot appear on the same line.

Another possible idea is to use `RubyVM::AbstractSyntaxTree` and
`node_id` instead of ripper, which would require making BASERUBY 3.1
or later.
2024-01-22 19:39:34 +09:00
Aaron Patterson 200d3cc14d add assert on SP 2024-01-19 09:35:36 -08:00
Takashi Kokubun 8642a573e6 Rename BUILTIN_ATTR_SINGLE_NOARG_INLINE
to BUILTIN_ATTR_SINGLE_NOARG_LEAF

The attribute was created when the other attribute was called BUILTIN_ATTR_INLINE.
Now that the original attribute has been renamed to BUILTIN_ATTR_LEAF,
keeping "_INLINE" in this name is only confusing.
2024-01-16 17:31:27 -08:00
Takashi Kokubun e37a37e696 Drop obsoleted BUILTIN_ATTR_NO_GC attribute
The thing that used this in the past was very buggy, and we've never
revisited it.  Let's remove it until we need it again.
2024-01-16 17:27:53 -08:00
Jeremy Evans 5c823aa686
Support keyword splatting nil
nil is treated similarly to the empty hash in this case, passing
no keywords and not calling any conversion methods.
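
A minimal sketch of the behavior:

```ruby
def f(**kw) = kw

f(**nil)   # => {}, same as f(**{}); nil is not sent #to_hash
```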

Fixes [Bug #20064]

Co-authored-by: Nobuyoshi Nakada <nobu@ruby-lang.org>
2024-01-14 11:41:02 -08:00
Jeremy Evans ef75125271 Make defined? for op asgn expressions to constants use "assignment"
Previously, it used "expression", as that was the default.  However,
op asgn expressions to constants use NODE_OP_CDECL, so recognize
that node type as assignment.
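
For example (a sketch at the top level, where constant op asgn is allowed):

```ruby
defined?(C += 1)    # => "assignment" (previously "expression")
defined?(C ||= 1)   # => "assignment"
```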

Fixes [Bug #20111]
2024-01-10 16:02:38 -08:00
S-H-GAMELINKS a1949df547 Remove unnecessary semicolon and add break 2024-01-10 12:58:19 +09:00
yui-knk db476cc71c Introduce NODE_SYM to manage symbol literal
`:sym` was managed by `NODE_LIT` with a `Symbol` object.
This commit introduces `NODE_SYM` so that

1. Symbol literals are detectable from the AST node
2. The dependency on Ruby objects is reduced
2024-01-09 16:07:19 +09:00
yui-knk 41e2d180a3 Do not convert NODE_STR to NODE_LIT when the string is a hash key
parse.y converted NODE_STR to NODE_LIT when the string was a hash key, as in:

```
h1 = {"str1" => 1}
m1("str2" => 2)
m2({"str3" => 3})
```

This commit stops the conversion.
`static_literal_node_p` needs to know whether the node is a hash key
to perform the optimization.
2024-01-08 18:48:24 +09:00
yui-knk 7ffff3e043 Change numeric node value functions argument to `NODE *`
Change the argument to align with other node value functions
like `rb_node_line_lineno_val`.
2024-01-08 14:02:48 +09:00
Nobuyoshi Nakada c30b8ae947
Adjust styles and indents [ci skip] 2024-01-08 00:50:41 +09:00
yui-knk 83c98ead4e Do not remove hash duplicated keys in parse.y
When hash keys are duplicated, e.g. `h = {k: 1, l: 2, k: 3}`,
the parser changed the node structure for correct compilation.
This generated a tricky AST. This commit removes the AST manipulation
from the parser to keep the AST structure simple.
2024-01-07 16:18:16 +09:00
S-H-GAMELINKS 1b8d01136c Introduce Numeric Nodes 2024-01-07 09:24:34 +09:00
yui-knk 7a050638b1 Introduce NODE_FILE
`__FILE__` was managed by `NODE_STR` with a `String` object.
This commit introduces `NODE_FILE` and `struct rb_parser_string` so that

1. `__FILE__` is detectable from the AST node
2. The dependency on Ruby objects is reduced
2024-01-02 14:19:42 +09:00
yui-knk 1ade170a6c Introduce NODE_LINE
`__LINE__` was managed by `NODE_LIT` with an `Integer` object.
This commit introduces `NODE_LINE` so that

1. `__LINE__` is detectable from the AST node
2. The dependency on Ruby objects is reduced
2023-12-29 18:32:27 +09:00
yui-knk 87e8e961b7 Check node type before cast 2023-12-28 18:47:05 +09:00
Nobuyoshi Nakada bc002971b6
[Bug #20094] Distinguish `begin` and parentheses 2023-12-27 17:50:15 +09:00
HParker 55326a915f Introduce --parser runtime flag
Introduce a runtime flag for specifying the parser:

```
ruby --parser=prism
```

It also updates the version description:

```
$ ruby --parser=prism --version
ruby 3.3.0dev (2023-12-08T04:47:14Z add-parser-runtime.. 0616384c9f) +PRISM [x86_64-darwin23]
```

[Bug #20044]
2023-12-15 13:42:19 -05:00
Jeremy Evans a18819e65f Fix op asgn method calls passing mutable keyword splats
When passing the keyword splat to [], it cannot be mutable, because
mutating the keyword splat inside [] would result in changes to the
keyword splat passed to []=.
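
A sketch of the aliasing hazard this prevents (assumes a hypothetical `[]`/`[]=` pair where `[]=` receives the assigned value as its positional argument):

```ruby
class C
  def [](**kw)
    kw[:seen] = true  # if kw were the same mutable hash later passed to
    0                 # []=, this mutation would leak into that call
  end

  def []=(value, **kw)
    p kw              # must still be {a: 1}, not {a: 1, seen: true}
  end
end

h = {a: 1}
C.new[**h] += 1       # calls [] and then []= with the same keyword splat
```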
2023-12-14 08:13:43 -08:00
Jeremy Evans 2f1d6da8c4
Fix op asgn calls with keywords
Examples of such calls:

```ruby
obj[kw: 1] += foo
obj[**kw] &&= bar
```

Before this patch, literal keywords would segfault in the compiler,
and keyword splat usage would result in TypeError.

This handles all cases I can think of:

* literal keywords
* keyword splats
* combined with positional arguments
* combined with regular splats
* both with and without blocks
* both popped and non-popped cases

This also makes sure that to_hash is only called once on the keyword
splat argument, instead of twice, and makes sure it is called before
calling to_proc on a passed block.

Fixes [Bug #20051]

Co-authored-by: Nobuyoshi Nakada <nobu@ruby-lang.org>
2023-12-12 07:34:53 -08:00
Jeremy Evans f64357540e Ensure super(**kw, &block) calls kw.to_hash before block.to_proc
Similar to the previous commit, but this handles the super case with
explicit arguments.
2023-12-09 13:15:47 -08:00
Jeremy Evans a950f23078 Ensure f(**kw, &block) calls kw.to_hash before block.to_proc
Previously, block.to_proc was called first, by vm_caller_setup_arg_block.
kw.to_hash was called later inside CALLER_SETUP_ARG or setup_parameters_complex.

This adds a splatkw instruction that is inserted before sends with
ARGS_BLOCKARG and KW_SPLAT and without KW_SPLAT_MUT. This is not needed in the
KW_SPLAT_MUT case, because then you know the value is a hash, and you don't
need to call to_hash on it.

The splatkw instruction checks whether the second-to-top stack entry
is a hash, and if not, replaces it with the value of calling to_hash
on it (using rb_to_hash_type).  As it is always emitted before a send
with ARGS_BLOCKARG and KW_SPLAT, second to top is the keyword splat,
and top is the passed block.
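
The observable effect, sketched with conversion tracing:

```ruby
kw = Object.new
def kw.to_hash
  puts "to_hash"
  {a: 1}
end

blk = Object.new
def blk.to_proc
  puts "to_proc"
  proc { }
end

def f(a: nil, &b) = [a, b]

f(**kw, &blk)   # with splatkw: prints "to_hash" then "to_proc";
                # previously to_proc was called first
```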
2023-12-09 13:15:47 -08:00
Jeremy Evans 7738b31d39 Eliminate array allocation for f(1, *a, &arg), f(*a, **kw, &arg), and f(*a, kw: 1, &arg)
These are similar to the f(1, *a, &lvar), f(*a, **kw, &lvar) and
f(*a, kw: 1, &lvar) optimizations, but they use the getblockparamproxy
instruction instead of getlocal.

This also fixes the else style to be more similar to the surrounding
code.
2023-12-07 11:27:55 -08:00
Jeremy Evans aa808204bb Eliminate array allocation for f(*a, kw: 1, &lvar) and f(*a, kw: 1, &@iv)
Similar to the previous commit, but this handles the block pass
case.
2023-12-07 11:27:55 -08:00
Jeremy Evans 5e33f2b0e4 Eliminate array allocation for f(*a, kw: 1)
In cases where the compiler can detect the hash is static, it would
use duphash for the hash part. As the hash is static, there is no need
to allocate an array.
2023-12-07 11:27:55 -08:00
Jeremy Evans 95615872e3 Eliminate array allocation for f(*a, **lvar, &lvar) and f(*a, **@iv, &@iv)
The compiler already eliminates the array allocation for
f(*a, &lvar) and f(*a, &@iv).  If that is safe, then eliminating
it for f(*a, **lvar) and f(*a, **@iv) as the last commit did is
as safe, and eliminating it for f(*a, **lvar, &lvar) and
f(*a, **@iv, &@iv) is also as safe.
2023-12-07 11:27:55 -08:00
Jeremy Evans 1731dd05a7 Eliminate array allocation for f(*a, **lvar) and f(*a, **@iv)
The compiler already eliminates the array allocation for
f(*a, &lvar) and f(*a, &@iv), and eliminating the array allocation
for keyword splat is as safe as eliminating it for block passes.
2023-12-07 11:27:55 -08:00
Jeremy Evans c70c1d2a95 Eliminate array allocation for f(1, *a, &lvar) and f(1, *a, &@iv)
Due to how the compiler works, f(*a, &lvar) and f(*a, &@iv)
do not allocate an array, but f(1, *a, &lvar) and f(1, *a, &@iv)
do.  It's probably possible to fix this in the compiler, but it
seems easiest to fix in the peephole optimizer.

Eliminating this array allocation is as safe as the current
elimination of the array allocation for f(*a, &lvar) and
f(*a, &@iv).
2023-12-07 11:27:55 -08:00
Jeremy Evans 40a2afd08f Eliminate array allocation for f(1, *a)
Due to how the compiler works, while f(*a) does not allocate an
array, f(1, *a) does.  This is possible to fix in the compiler, but
the change is much more complex.  This attempts to fix the issue
in a simpler way using the peephole optimizer.

Eliminating this array allocation is safe, since just as in the
f(*a) case, nothing else on the caller side can modify the array.
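
A rough way to check the effect (a sketch; counts vary by version and are only indicative):

```ruby
def f(x, y, z) = nil

a = [2, 3]

base = GC.stat(:total_allocated_objects)
100_000.times { f(1, *a) }
# On builds with this optimization the per-call delta drops, because the
# caller no longer allocates an array to combine 1 with the splat.
puts GC.stat(:total_allocated_objects) - base
```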
2023-12-07 11:27:55 -08:00
Peter Zhu d1691617d6 Pin instruction storage
The operands in each instruction need to be pinned because, if
auto-compaction runs in iseq_set_sequence, the objects could exist on
the generated_iseq buffer, which is not reference-updated; that can
lead to T_MOVED (and subsequently T_NONE) objects on the iseq.
2023-12-02 09:06:03 -05:00
Nobuyoshi Nakada a607d62d8c [Bug #20033] Dynamic regexp should not assign captures 2023-12-02 03:57:41 +09:00
Peter Zhu 6ebcf25de2 GC guard catch_table_ary in iseq_set_exception_table
The function iseq_set_exception_table allocates memory, which can cause
a GC compaction to run.  Since catch_table_ary is not on the stack, it
can be moved, which would make tptr incorrect.
2023-11-29 12:30:12 -05:00
Nobuyoshi Nakada 7fe7b7bc5a
Fix portability of bignum in ISeq Binary Format
- Unless `sizeof(BDIGIT) == 4` (i.e., an 8-byte integer type is not
  available), the size to be loaded was wrong.
- Since `BDIGIT`s are dumped as raw binary, the loaded byte order was
  inverted unless little-endian.
2023-11-26 11:42:02 +09:00
Jean Boussier 0aafd040c3 Embed ibf_dump objects 2023-11-21 17:41:27 +01:00
Jean Boussier b4f551686b Get rid of useless dsize functions
If we always return 0, we might as well not define
the function at all.
2023-11-21 15:15:03 +01:00
Jean Boussier 05028f4d55 compile.c: make pinned_list embeddable
This saves some malloc churn for small pin lists.
2023-11-20 17:27:32 +01:00
Nobuyoshi Nakada 2a442121d1
Stabilize outer variable list
Sort outer variables by names to make dumped binary data stable.
2023-11-11 16:58:14 +09:00
Nobuyoshi Nakada e2ef85b109 Finer-granularity IBF dependency
It depends only on the `VALUE` definition.  Check for endianness and word
size instead of the platform name.
2023-11-09 16:01:01 +09:00
Nobuyoshi Nakada 61bb5c0572 Use `uint32_t` instead of `unsigned int` for the exact size 2023-11-09 16:01:01 +09:00
Matt Valentine-House 8ef7f27321 [PRISM] CompileEnsureNode 2023-11-07 14:03:57 +00:00
Jemma Issroff f6ba87ca88 [PRISM] Implement compilation for MultiWriteNodes, fix MultiTargetNodes
Compilation now works for MultiWriteNodes and MultiTargetNodes, with
nesting on MultiWrites. See the tests added in this commit for example
behavior.
2023-11-06 10:39:07 -03:00
Matt Valentine-House 9249b8622b Move constant indexing into rb_translate_prism 2023-10-30 19:44:42 +00:00
Matt Valentine-House 5dfba84cff [Prism] Compile ForNode
Fixes ruby/prism#1648
2023-10-30 19:44:42 +00:00
Nobuyoshi Nakada 13c9cbe09e
Embed `rb_args_info` in `rb_node_args_t` 2023-10-30 00:19:43 +09:00
Jemma Issroff b5f6e2a7c4 [PRISM] ScopeNode doesn't need void * anymore 2023-10-25 18:18:35 -03:00