YJIT didn't guard for a ruby2_keywords hash when splat calls land in
methods with a rest parameter, producing incorrect results.
The compile-time checks didn't correspond to any actual effects of
ruby2_keywords, so they were masking this bug while YJIT needlessly
refused to compile some code. About 16% of fallback reasons in
`lobsters` were due to the ISeq check.
We already handle the tagging part with
exit_if_supplying_kw_and_has_no_kw() and should now have a dynamic guard
for all splat cases.
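A minimal sketch of the call shape involved (hypothetical method names, not
the reproduction from the ticket): a ruby2_keywords-flagged hash travels
through a splat into a method that only takes a rest parameter, so it must
stay positional.
```
def take_rest(*rest)
  rest # no keyword parameters, so a flagged hash must remain positional
end

ruby2_keywords def forward(*args)
  take_rest(*args) # splat call; args may end in a ruby2_keywords-flagged hash
end

p forward(1, k: 2) # => [1, {:k=>2}]
```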
Note for backporting: You also need 7f51959ff1.
[Bug #20195]
```
warning: unused import: `condition::Condition`
--> src/asm/arm64/arg/mod.rs:13:9
|
13 | pub use condition::Condition;
| ^^^^^^^^^^^^^^^^^^^^
|
= note: `#[warn(unused_imports)]` on by default
warning: unused import: `rb_yjit_fix_mul_fix as rb_fix_mul_fix`
--> src/cruby.rs:188:9
|
188 | pub use rb_yjit_fix_mul_fix as rb_fix_mul_fix;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
warning: unused import: `rb_insn_len as raw_insn_len`
--> src/cruby.rs:142:9
|
142 | pub use rb_insn_len as raw_insn_len;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: `#[warn(unused_imports)]` on by default
```
Make the asm module public so the compiler stops warning about unused public items in there.
* Unify length field for embedded and heap strings
The length field has the same type and position in RString for both
embedded and heap-allocated strings, so we can unify it.
* Remove RSTRING_EMBED_LEN
yjit-trace-exits appends a synthetic sample for the instruction being
exited, but we didn't increment the size of the stack. Correcting this
count lets us successfully generate a flamegraph from the exits.
I also replaced the line number for instructions with 0, as I don't
think the previous value had meaning.
Co-authored-by: Adam Hess <HParker@github.com>
* YJIT: Add --yjit-pause and RubyVM::YJIT.resume
This allows booting YJIT in a suspended state. We chose to add a new
command line option as opposed to simply allowing YJIT.resume to work
without any command line option because it allows for combining with
YJIT tuning command line options. It also simplifies implementation.
Paired with Kokubun and Maxime.
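A sketch of the intended workflow (the tuning flag below is just an
illustration of combining options):
```
# Boot with YJIT loaded but not compiling anything yet:
#
#   ruby --yjit-pause --yjit-exec-mem-size=64 app.rb
#
# Later, e.g. after the framework finishes booting:
RubyVM::YJIT.resume # compilation begins from this point on
```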
* Update yjit.rb
Co-authored-by: Takashi Kokubun <takashikkbn@gmail.com>
---------
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
Co-authored-by: Takashi Kokubun <takashikkbn@gmail.com>
If the previous instruction is not a leaf instruction, then the PC was
incremented before the instruction ran (meaning the currently
executing instruction is actually the previous instruction), so we
should not increment the PC; otherwise we will calculate the source
line for the next instruction.
This bug can be reproduced in the following script:
```
require "objspace"

ObjectSpace.trace_object_allocations_start

a =





  1.0 / 0.0

p [ObjectSpace.allocation_sourceline(a), ObjectSpace.allocation_sourcefile(a)]
```
Which outputs: [4, "test.rb"]
This is incorrect because the object was allocated on line 10 and not
line 4. The behaviour is correct when we use a leaf instruction: for
example, if we replace `1.0 / 0.0` with `"hello"`, the output is
[10, "test.rb"].
[Bug #19456]
This dispatches to a C function for the dynamic lookup. I experimented with chaining on the proc but wasn't able to detect which call sites would be monomorphic vs. polymorphic. There is definitely room for optimization here, but it does reduce exits.
This allows x86_64-based YJIT to run on Docker Desktop on an Apple silicon
(arm64) Mac because it avoids a subtle behavior difference in the `mprotect`
system call between the Linux kernel and the `qemu-x86_64` user-space emulator.
* Implement optimize send in yjit
This successfully makes all our benchmarks exit way less for optimize send reasons.
It makes some benchmarks faster, but not by as much as I'd like. I think this implementation
works, but there are definitely more optimal arrangements. For example, what if we compiled
send to a jump table? That seems like perhaps the most optimal approach, but it's not obvious
(to me) how to implement it given our current setup.
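For reference, a tiny sketch (hypothetical names) of the call shape being optimized:
```
class Calc
  def double(x)
    x * 2
  end
end

c = Calc.new
p c.send(:double, 21) # dynamic dispatch through send, no longer a side exit
# => 42
```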
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
* Attempt at fixing the issues raised by @XrXr
* fix allowlist
* returns 0 instead of nil when not found
* remove comment about encoding exception
* Fix up c changes
* Update assert
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
* get rid of unneeded code and fix the flags
* Apply suggestions from code review
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
* rename and fix typo
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
* YJIT: fix a parameter name
* YJIT: add support for calling bmethods
This commit adds support for the VM_METHOD_TYPE_BMETHOD method type in
YJIT. You can get these types of methods from facilities like
Kernel#define_singleton_method and Module#define_method.
Even though the bodies of these methods are blocks, the parameter setup
for them is exactly the same as VM_METHOD_TYPE_ISEQ, so we can reuse
the same logic in gen_send_iseq(). You can see this from how
vm_call_bmethod() eventually calls setup_parameters_complex() with
arg_setup_method.
Bmethods do need their frame environment to be set up differently. We
handle this by allowing callers of gen_send_iseq() to control the iseq,
the frame flag, and the prev_ep. The `prev_ep` goes into the same
location as the block handler would in an iseq method frame.
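For instance, both facilities named above produce bmethods; a quick sketch:
```
class Greeter
  # Module#define_method turns the block into a bmethod
  define_method(:greet) { |name| "hello, #{name}" }
end

obj = Object.new
# Kernel#define_singleton_method does the same for a singleton method
obj.define_singleton_method(:answer) { 42 }

p Greeter.new.greet("yjit") # => "hello, yjit"
p obj.answer                # => 42
```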
Co-authored-by: John Hawthorn <john@hawthorn.email>
* Initial support for VM_CALL_ARGS_SPLAT
This implements support for calls with splat (*) for some methods. It
made very little difference for most benchmarks, but a large difference
for binarytrees. Looking at side exits, many benchmarks now don't exit
for splat, but exit for some other reason. Binarytrees, however, had a
number of calls that used splat args that are now much faster. In my
non-scientific benchmarking this made splat args performance on par
with not using splat args at all.
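A minimal sketch of the call shape in question (illustrative names, not
taken from the benchmarks):
```
def add(a, b, c)
  a + b + c
end

args = [1, 2, 3]
p add(*args) # VM_CALL_ARGS_SPLAT: previously a side exit, now compiled
# => 6
```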
* Fix wording and whitespace
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
* Get rid of side_effect reassignment
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
Previously we cleared the cache for all the code in the system when we
flipped memory protection, which was prohibitively expensive since the
operation is not constant time. Instead, only clear the cache for the
memory region of newly written code when we write out new code.
This brings the runtime for the 30k_if_else test down to about 6 seconds
from the previous 45 seconds on my laptop.