Examples of such calls:
```ruby
obj[kw: 1] += foo
obj[**kw] &&= bar
```
Before this patch, literal keywords would segfault in the compiler,
and keyword splat usage would result in a TypeError.
This handles all cases I can think of:
* literal keywords
* keyword splats
* combined with positional arguments
* combined with regular splats
* both with and without blocks
* both popped and non-popped cases
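As a combined sketch (assuming obj, a, kw, and bar are defined), one
call shape mixing several of these cases:
```ruby
obj[1, *a, **kw] += bar
```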
This also makes sure that to_hash is only called once on the keyword
splat argument, instead of twice, and makes sure it is called before
to_proc is called on a passed block.
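A minimal sketch of that single-call behavior (the receiver and method
definitions here are illustrative, not from the patch):
```ruby
kw = Object.new
def kw.to_hash
  puts "to_hash called"
  { k: 1 }
end

obj = Object.new
def obj.[](*, **)   # reader accepts anything
  0
end
def obj.[]=(*, **)  # writer accepts anything
end

obj[**kw] += 1  # prints "to_hash called" once, not twice
```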
Fixes [Bug #20051]
Co-authored-by: Nobuyoshi Nakada <nobu@ruby-lang.org>
Previously, block.to_proc was called first, by vm_caller_setup_arg_block.
kw.to_hash was called later inside CALLER_SETUP_ARG or setup_parameters_complex.
This adds a splatkw instruction that is inserted before sends with
ARGS_BLOCKARG and KW_SPLAT and without KW_SPLAT_MUT. This is not needed in the
KW_SPLAT_MUT case, because then you know the value is a hash, and you don't
need to call to_hash on it.
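A sketch of the two cases (assuming f, kw, and blk are defined):
```ruby
f(**kw, &blk)          # KW_SPLAT without KW_SPLAT_MUT: splatkw is emitted
f(**kw, mut: 1, &blk)  # merged into a fresh hash: KW_SPLAT_MUT, no splatkw
```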
The splatkw instruction checks whether the second-to-top stack entry is
a hash, and if not, replaces it with the result of calling to_hash on it
(using rb_to_hash_type). As splatkw is always emitted before a send with
ARGS_BLOCKARG and KW_SPLAT, the second-to-top entry is the keyword splat
and the top is the passed block.
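One way to observe the instruction (a hedged check; exact listings vary
by version, and f, a, kw, and blk here resolve to method calls, which is
enough for compile-only inspection):
```ruby
puts RubyVM::InstructionSequence.compile("f(*a, **kw, &blk)").disasm
# the listing should contain a splatkw just before the send
```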
These are similar to the f(1, *a, &lvar), f(*a, **kw, &lvar) and
f(*a, kw: 1, &lvar) optimizations, but they use the getblockparamproxy
instruction instead of getlocal.
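A sketch of the shape these cases cover: the block pass is the method's
own block parameter, so the compiler emits getblockparamproxy rather
than getlocal (f is assumed to be defined elsewhere):
```ruby
def wrap(*a, **kw, &blk)
  f(1, *a, &blk)
  f(*a, **kw, &blk)
  f(*a, kw: 1, &blk)
end
```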
This also fixes the else style to be more similar to the surrounding
code.
In cases where the compiler can detect the hash is static, it would
use duphash for the hash part. As the hash is static, there is no need
to allocate an array.
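For instance (a hedged check; output varies by version):
```ruby
puts RubyVM::InstructionSequence.compile("f(*a, kw: 1)").disasm
# the static {kw: 1} part should appear as a duphash operand,
# with no array allocated for it at runtime
```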
The compiler already eliminates the array allocation for
f(*a, &lvar) and f(*a, &@iv). If that is safe, then eliminating
it for f(*a, **lvar) and f(*a, **@iv) as the last commit did is
as safe, and eliminating it for f(*a, **lvar, &lvar) and
f(*a, **@iv, &@iv) is also as safe.
The compiler already eliminates the array allocation for
f(*a, &lvar) and f(*a, &@iv), and eliminating the array allocation
for keyword splat is as safe as eliminating it for block passes.
Due to how the compiler works, while f(*a, &lvar) and f(*a, &@iv)
do not allocate an array, f(1, *a, &lvar) and f(1, *a, &@iv)
do. It's probably possible to fix this in the compiler, but it
seems easiest to fix this in the peephole optimizer.
Eliminating this array allocation is as safe as the current
elimination of the array allocation for f(*a, &lvar) and
f(*a, &@iv).
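A hedged way to compare the two shapes (f is an undefined call, which
is fine for compile-only inspection; listings vary by version):
```ruby
["lvar = proc {}; a = []; f(*a, &lvar)",
 "lvar = proc {}; a = []; f(1, *a, &lvar)"].each do |code|
  puts RubyVM::InstructionSequence.compile(code).disasm
end
# after this change, both listings should pass the splat through
# without concatenating it into a newly allocated array
```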
Due to how the compiler works, while f(*a) does not allocate an
array, f(1, *a) does. This is possible to fix in the compiler, but
the change is much more complex. This attempts to fix the issue
in a simpler way using the peephole optimizer.
Eliminating this array allocation is safe, since just as in the
f(*a) case, nothing else on the caller side can modify the array.
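A minimal sketch for checking this from Ruby (the allocated helper is
illustrative, not part of the patch; counts include callee-side
allocations, so compare the two shapes rather than expecting zero):
```ruby
def f(*args) = args.length
a = [2, 3]

def allocated
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
end

f(1, *a) # warm up both call shapes
p allocated { f(*a) }
p allocated { f(1, *a) }  # should match f(*a) after this change
```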
The operands in each instruction need to be pinned because, if
auto-compaction runs in iseq_set_sequence, the objects could exist on
the generated_iseq buffer, which is not reference-updated, and that can
lead to T_MOVED (and subsequently T_NONE) objects on the iseq.
The function iseq_set_exception_table allocates memory which can cause
a GC compaction to run. Since catch_table_ary is not on the stack, it
can be moved, which would make tptr incorrect.
- Unless `sizeof(BDIGIT) == 4` (i.e., an 8-byte integer type is not
available), the size to be loaded was wrong.
- Since `BDIGIT`s are dumped as raw binary, the loaded byte order was
inverted unless little-endian.
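A hedged round-trip check exercising the multi-BDIGIT path:
```ruby
n = 2**100  # large enough to span several BDIGITs
raise "Bignum marshal broken" unless Marshal.load(Marshal.dump(n)) == n
```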
ARGSCAT has been used for nd_args to hold the index and rvalue,
because there was a limitation on the number of members for Node.
We can easily change the structure of nodes now, so let's expand it.
We changed ScopeNodes to point to their parent (previous) ScopeNodes.
Accordingly, we can remove pm_compile_context_t, and store all
necessary context in ScopeNodes, allowing us to access locals from
outer scopes.
It's an estimator for application size and could be used as a
compilation heuristic later.
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Co-authored-by: Takashi Kokubun <takashikkbn@gmail.com>
Tracks other callinfo objects that reference the same kwargs and frees
the kwargs when all references are cleared.
[Bug #19906]
Co-authored-by: Peter Zhu <peter@peterzhu.ca>
All kinds of AST nodes use the same struct RNode, which has the u1,
u2, and u3 union members for holding different kinds of data.
This has two problems.
1. Low flexibility of data structure
Some nodes, for example NODE_TRUE, don't use u1, u2, or u3. On the
other hand, NODE_OP_ASGN2 needs more than three union members. However,
because all nodes share the same structure definition, three union
members must be allocated even for NODE_TRUE, and NODE_OP_ASGN2 has to
be split into a separate node.
This change removes the restriction and makes it possible to define a
different data structure for each node type.
2. No compile-time check for union member access
With a union, it's the developer's responsibility to access the correct
member for each node type.
This change clarifies which node has which types of fields and enables
compile-time checks.
This commit also changes node_buffer_elem_struct buf management to
handle different-sized data with alignment.