Pin matching for local variables and constants is already supported,
and it is fairly simple to extend it to instance, class, and global
variables. Note that pin matching for method calls is still not supported
without wrapping them in parentheses (pin expressions). I think that's
for the best, as method calls are far more complex than variable
references (they can take arguments and blocks).
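A rough illustration of the pinned-variable matching this enables (the
variable names are made up; this requires a Ruby with this change):
```ruby
@expected = 42

case 42
in ^@expected then "matched pinned instance variable"
end
# => "matched pinned instance variable"

# Method calls still need a pin expression, i.e. parentheses:
case 42
in ^(Integer.sqrt(1764)) then "matched pinned expression"
end
```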
Implements [Feature #17724]
In regular assignment, Ruby evaluates the left hand side before
the right hand side. For example:
```ruby
foo[0] = bar
```
Calls `foo`, then `bar`, then `[]=` on the result of `foo`.
Previously, multiple assignment didn't work this way. If you did:
```ruby
abc.def, foo[0] = bar, baz
```
Ruby would previously call `bar`, then `baz`, then `abc`, then
`def=` on the result of `abc`, then `foo`, then `[]=` on the
result of `foo`.
This change makes multiple assignment similar to single assignment,
changing the evaluation order of the above multiple assignment code
to calling `abc`, then `foo`, then `bar`, then `baz`, then `def=` on
the result of `abc`, then `[]=` on the result of `foo`.
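A small script that makes the order observable (a sketch; the setter is
named `val=` rather than `def=` just to keep it plain Ruby, and `$order`
exists only for logging):
```ruby
$order = []

class Target
  def val=(v)
    $order << "val="
  end

  def []=(i, v)
    $order << "[]="
  end
end

def abc; $order << "abc"; Target.new; end
def foo; $order << "foo"; Target.new; end
def bar; $order << "bar"; 1; end
def baz; $order << "baz"; 2; end

abc.val, foo[0] = bar, baz
p $order
# previously:       ["bar", "baz", "abc", "val=", "foo", "[]="]
# with this change: ["abc", "foo", "bar", "baz", "val=", "[]="]
```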
Implementing this is challenging with the stack-based virtual machine.
We need to keep track of all of the left hand side attribute setter
receivers and setter arguments, and then keep track of the stack level
while handling the assignment processing, so we can issue the
appropriate `topn` instructions to fetch each receiver. Here's an example
of how the multiple assignment is executed, showing the stack and the
instruction executed at each step (`def=` and `[]=` on the stack denote
the return values of those sends):
```
self                                      # putself
abc                                       # send
abc, self                                 # putself
abc, foo                                  # send
abc, foo, 0                               # putobject 0
abc, foo, 0, [bar, baz]                   # evaluate RHS
abc, foo, 0, [bar, baz], baz, bar         # expandarray
abc, foo, 0, [bar, baz], baz, bar, abc    # topn 5
abc, foo, 0, [bar, baz], baz, abc, bar    # swap
abc, foo, 0, [bar, baz], baz, def=        # send
abc, foo, 0, [bar, baz], baz              # pop
abc, foo, 0, [bar, baz], baz, foo         # topn 3
abc, foo, 0, [bar, baz], baz, foo, 0      # topn 3
abc, foo, 0, [bar, baz], baz, foo, 0, baz # topn 2
abc, foo, 0, [bar, baz], baz, []=         # send
abc, foo, 0, [bar, baz], baz              # pop
abc, foo, 0, [bar, baz]                   # pop
[bar, baz], foo, 0, [bar, baz]            # setn 3
[bar, baz], foo, 0                        # pop
[bar, baz], foo                           # pop
[bar, baz]                                # pop
```
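To inspect the instruction sequence your own Ruby emits for a multiple
assignment (the output differs between versions and is more verbose than
the simplified listing above; `a`, `c`, `d`, and `e` are placeholder
method calls):
```ruby
puts RubyVM::InstructionSequence.compile("a.b, c[0] = d, e").disasm
```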
As multiple assignment must deal with splats, post args, and any level
of nesting, it gets quite a bit more complex than this in non-trivial
cases. To handle this, `struct masgn_state` is added to keep track of
the overall state of the multiple assignment; it stores a linked list of
`struct masgn_attrasgn`, one entry for each assigned attribute.
This adds a new optimization that replaces a `topn 1`/`pop` instruction
combination with a single `swap` instruction for multiple assignment
to non-aref attributes.
This new approach isn't compatible with one of the optimizations
previously used in the case where the multiple assignment return value
was not needed, there was no LHS splat, and one of the left hand side
targets used an attribute setter, so this removes that optimization.
Removing the optimization allowed for removing the POP_ELEMENT and
adjust_stack functions.
This adds a benchmark to measure how much slower multiple
assignment is with the correct evaluation order.
This benchmark shows:
* 4-9% decrease for attribute sets
* 14-23% decrease for array member sets
* Basically same speed for local variable sets
Importantly, it shows no significant difference between the popped case
(where the return value of the multiple assignment is not needed) and
the !popped case (where the return value is needed) for attribute and
array member sets. This indicates that the previous optimization, which
was dropped in the evaluation order fix and only affected the popped
case, was not important to performance.
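The benchmark itself isn't reproduced here; a rough sketch of the kind
of comparison involved, using the stdlib Benchmark module and made-up
setup, looks like this:
```ruby
require "benchmark"

class Attrs
  attr_accessor :a, :b
end

obj = Attrs.new
ary = [nil, nil]
N = 1_000_000

Benchmark.bm(12) do |x|
  x.report("attr masgn") { N.times { obj.a, obj.b = 1, 2 } }
  x.report("aref masgn") { N.times { ary[0], ary[1] = 1, 2 } }
  x.report("lvar masgn") { N.times { a, b = 1, 2 } }
end
```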
Fixes [Bug #4443]
* Warn Struct#initialize with only keyword args
A part of [Feature #16806]
* Do not warn if `keyword_init: false` is explicitly specified
  (see the sketch after this list)
* Add a NEWS entry
* s/in/from/
* Make sure all fields are initialized
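A sketch of the resulting behavior (the struct names are made up and the
warning text is paraphrased):
```ruby
Point = Struct.new(:x, :y)
pt = Point.new(x: 1, y: 2)
# warns, roughly: "Passing only keyword arguments to Struct#initialize
# will behave differently from Ruby 3.2..."
pt.x  # => {:x=>1, :y=>2}  (the keywords are still one positional Hash here)

Legacy = Struct.new(:x, :y, keyword_init: false)
Legacy.new(x: 1, y: 2)  # no warning: keyword_init: false was given explicitly
```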
Previously, if a class included a module and then prepended the
same module, the prepend had no effect. This changes the behavior
so that the prepend has an effect unless the module has already been
prepended to the receiver.
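A sketch of the new behavior (the module and class names are made up):
```ruby
module M
  def greet = "from M"
end

class C
  include M
  prepend M   # previously a silent no-op; now the prepend takes effect
end

C.ancestors
# => roughly [M, C, M, Object, Kernel, BasicObject]
```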
While here, rename the `origin_seen` variable in `include_modules_at`,
since it is misleading. The variable tracks whether `c` has been seen,
not whether the origin of `klass` has been.
Fixes [Bug #17423]
This rename was done because the name "MJIT" is an internal code name,
because it's inconsistent with --jit even though the two are related,
and to discourage future JIT-implementation-specific (e.g. MJIT-specific)
APIs.
[Feature #17490]
* Instance variables
* Merge ivar guards on JIT a69dd699eee4f7eee009
* Prefer RB_OBJ_FROZEN_RAW 5611066e03
* Skip checking ROBJECT_EMBED 81a8d1cf09
* Method inlining
* Mark some Integer methods as inline 0703e01471
* Allow inlining Integer#-@ and #~ dbb4f19969
* Inline builtin struct aref 167d139487
* Make Kernel#then, #yield_self, #frozen? builtin 24fa37d87a
* (For future) Rewrite Kernel#tap with Ruby f3a0d7a203
* Other optimizations
* Inline constant references 53babf35ef
* Lazily move PC with RUBY_VM_CHECK_INTS 5d74894f2b
* Cache access to reg_cfp->self on JIT d409837729
* JIT compaction
* Shrink the blocking region for compile_compact_jit_code ed8e552d4d
* Stop leaving .c files for JIT compaction in /tmp fa1250a506
* GC of JIT-ed code
* Run unload_units in the JIT worker thread 16dab6b692
* Avoid unloading units which have enough total_calls d80226e7bd
* Throttle unload_units 122cd35939
* Throttle JIT compaction 096f54428d
* Compilation speed
* Eliminate IVC sync between JIT and Ruby threads 0960f56a1d
* Lazily move units from active_units to stale_units 5d8f227d0e
Please see 200c5f4075 for other improvements in Jan ~ Jun.