Tested on production workloads at Shopify for > 1 year and proven
to be quite stable. Enabling YJIT at run-time is still guarded
behind the --yjit command-line option for now.
* Enable --yjit-stats for release builds
In order for people in the real world to report information about how their applications run with YJIT, we want to expose stats without requiring a rebuild of Ruby. We can do this with no overhead, with the exception of the ratio-in-yjit statistic, since it relies on the interpreter also counting instructions.
This change exposes those stats, while not showing the ratio in yjit if we are not in a stats build.
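As a rough sketch (assuming a YJIT-enabled build; not the exact patch), the counters can be read at run time through `RubyVM::YJIT.runtime_stats`, and the ratio only appears when the interpreter is counting instructions:
```ruby
# Sketch: reading YJIT counters at run time. ratio_in_yjit relies on the
# interpreter also counting instructions, so it is only present in a
# stats build.
stats = RubyVM::YJIT.runtime_stats
if stats && stats.key?(:ratio_in_yjit)
  puts "ratio_in_yjit: #{stats[:ratio_in_yjit]}"
else
  puts "ratio_in_yjit unavailable (not a stats build)"
end
```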
* Update yjit.rb
Co-authored-by: Takashi Kokubun <takashikkbn@gmail.com>
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
While I was working on my RubyConf talk for tracing yjit exit locations
I realized that there were exits from the dump code included in the
stats data. For example I saw 224 interp leave exits for a simple script
that should have had 1 or 2. I realized that the dump code needs to be
called _after_ the stats are generated, otherwise the dump code will be
counted in the stats exits.
I've added a `_dump_locations` call to the `at_exit` block used for stats
generation to ensure that it runs last. I've updated the documentation
to add a note that if you call `dump_exit_locations` directly, your
stats will include the dump code exits as well.
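Roughly, the ordering in the stats `at_exit` hook now looks like this (a simplified sketch, not the literal yjit.rb code):
```ruby
# Simplified sketch of the at_exit ordering: report the stats first, then
# dump the exit locations last so the dump code's own exits are not
# counted in the reported stats.
at_exit do
  _print_stats      # --yjit-stats output
  _dump_locations   # --yjit-trace-exits dump runs after the stats are reported
end
```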
In a small script the overhead of this feature isn't really noticeable, but
on Rails it's very noticeable how slow it can be. This PR aims to
speed up two parts of the functionality.
1) The Rust exit recording code
Instead of adding all samples as we see them to the yjit_raw_samples and
yjit_line_samples, we can increment the counter on the ones we've seen
before. This will be faster on traces where we are hitting the same
stack often. In a crude measurement of booting just the active record
base test (`test/cases/base_test.rb`) we found that this improved the
speed by 1 second.
This also results in a smaller marshal dump file which sped up the test
boot time by 4 seconds with trace exits on.
2) The Ruby parsing code
Previously we were allocating new arrays using `shift` and
`each_with_index`. This change avoids allocating new arrays by walking
the samples with an index instead (see the sketch below). This change
saves us the most time, gaining 11 seconds.
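The pattern looks roughly like this (an illustrative sketch only; the actual record layout of the samples is an assumption here):
```ruby
# Illustrative sketch, not the actual exit_locations parsing code.
# Walk the flat samples array with an index instead of repeatedly calling
# shift/each_with_index, avoiding intermediate array allocations.
# Assumes, purely for illustration, that each record is a length-prefixed
# run of frame ids followed by a repetition count.
def each_sample(raw_samples)
  i = 0
  while i < raw_samples.length
    stack_size = raw_samples[i]
    frames = raw_samples[i + 1, stack_size] # read in place, no shift
    count  = raw_samples[i + 1 + stack_size]
    yield frames, count
    i += stack_size + 2                     # jump to the next record
  end
end
```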
Before this change the test boot time was 62 seconds; after, it was 47
seconds. This is still too long, but it's a step closer to faster
functionality. Next we're going to tackle allowing you to collect trace
exits for a specific instruction. There is also some potential slowness
in the GC code that I'd like to take a second look at.
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
When running with `--yjit-stats` turned on, yjit can inform the user
what the most common exits are. While this is useful information it
doesn't tell you the source location of the code that exited or what the
code that exited looks like. This change intends to fix that.
To use the feature, run yjit with the `--yjit-trace-exits` option,
which will record the backtrace for every exit that occurs. This functionality
requires the stats feature to be turned on. Passing `--yjit-trace-exits`
will automatically set the `--yjit-stats` option.
Users must call `RubyVM::YJIT.dump_exit_locations(filename)` which will
Marshal dump the contents of `RubyVM::YJIT.exit_locations` into a file
based on the passed filename.
*Example usage:*
Given the following script, we write to a file called
`concat_array.dump` the results of `RubyVM::YJIT.exit_locations`.
```ruby
def concat_array
  ["t", "r", *x = "u", "e"].join
end

1000.times do
  concat_array
end

RubyVM::YJIT.dump_exit_locations("concat_array.dump")
```
When we run the file with this branch and the appropriate flags, the
stack traces will be recorded. Note that Stackprof needs to be installed,
or you need to point to the library directly.
```
./ruby --yjit --yjit-call-threshold=1 --yjit-trace-exits -I/Users/eileencodes/open_source/stackprof/lib test.rb
```
We can then read the dump file with Stackprof:
```
./ruby -I/Users/eileencodes/open_source/stackprof/lib/ /Users/eileencodes/open_source/stackprof/bin/stackprof --text concat_array.dump
```
Results will look similar to the following:
```
==================================
Mode: ()
Samples: 1817 (0.00% miss rate)
GC: 0 (0.00%)
==================================
TOTAL (pct) SAMPLES (pct) FRAME
1001 (55.1%) 1001 (55.1%) concatarray
335 (18.4%) 335 (18.4%) invokeblock
178 (9.8%) 178 (9.8%) send
140 (7.7%) 140 (7.7%) opt_getinlinecache
...etc...
```
Simply inspecting the `concatarray` method will give `SOURCE
UNAVAILABLE` because the source is `insns.def`.
```
./ruby -I/Users/eileencodes/open_source/stackprof/lib/ /Users/eileencodes/open_source/stackprof/bin/stackprof --text concat_array.dump --method concatarray
```
Result:
```
concatarray (nonexistent.def:1)
samples: 1001 self (55.1%) / 1001 total (55.1%)
callers:
1000 ( 99.9%) Object#concat_array
1 ( 0.1%) Gem.suffixes
callees (0 total):
code:
SOURCE UNAVAILABLE
```
However, if we inspect the caller, `Object#concat_array`, we can see the
exact source of the `concatarray` exit.
```
./ruby -I/Users/eileencodes/open_source/stackprof/lib/ /Users/eileencodes/open_source/stackprof/bin/stackprof --text concat_array.dump --method Object#concat_array
```
```
Object#concat_array (/Users/eileencodes/open_source/rust_ruby/test.rb:1)
samples: 0 self (0.0%) / 1000 total (55.0%)
callers:
1000 ( 100.0%) block in <main>
callees (1000 total):
1000 ( 100.0%) concatarray
code:
| 1 | def concat_array
1000 (55.0%) | 2 | ["t", "r", *x = "u", "e"].join
| 3 | end
```
The `--walk` option is recommended for this feature as it makes it
easier to traverse the tree of exits.
*Goals of this feature:*
This feature is meant to give more information when working on YJIT.
The idea is that if we know what code is exiting we can decide what
areas to prioritize when fixing exits. In some cases this means
prioritizing avoiding certain exits in yjit. In more complex cases it
might mean changing the Ruby code to be more performant when run with
yjit. Ultimately the more information we have about what code is exiting
AND why, the better we can make yjit.
*Known limitations:*
* Due to tracing exits, running this on large codebases like Rails
can be quite slow.
* On complex methods it can still be difficult to pinpoint the exact cause of
an exit.
* Stackprof is a requirement to view the backtrace information from
the dump file.
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
In December 2021, we opened an [issue] to solicit feedback regarding the
porting of the YJIT codebase from C99 to Rust. There were some
reservations, but this project was given the go ahead by Ruby core
developers and Matz. Since then, we have successfully completed the port
of YJIT to Rust.
The new Rust version of YJIT has reached parity with the C version, in
that it passes all the CRuby tests, is able to run all of the YJIT
benchmarks, and performs similarly to the C version (because it works
the same way and largely generates the same machine code). We've even
incorporated some design improvements, such as a more fine-grained
constant invalidation mechanism which we expect will make a big
difference in Ruby on Rails applications.
Because we want to be careful, YJIT is guarded behind a configure
option:
```shell
./configure --enable-yjit # Build YJIT in release mode
./configure --enable-yjit=dev # Build YJIT in dev/debug mode
```
By default, YJIT does not get compiled and cargo/rustc is not required.
If YJIT is built in dev mode, then `cargo` is used to fetch development
dependencies, but when building in release, `cargo` is not required,
only `rustc`. At the moment YJIT requires Rust 1.60.0 or newer.
The YJIT command-line options remain mostly unchanged, and more details
about the build process are documented in `doc/yjit/yjit.md`.
The CI tests have been updated and do not take any more resources than
before.
The development history of the Rust port is available at the following
commit for interested parties:
1fd9573d8b
Our hope is that Rust YJIT will be compiled and included as a part of
system packages and compiled binaries of the Ruby 3.2 release. We do not
anticipate any major problems as Rust is well supported on every
platform which YJIT supports, but to make sure that this process works
smoothly, we would like to reach out to those who take care of building
systems packages before the 3.2 release is shipped and resolve any
issues that may come up.
[issue]: https://bugs.ruby-lang.org/issues/18481
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
Co-authored-by: Noah Gibbs <the.codefolio.guy@gmail.com>
Co-authored-by: Kevin Newton <kddnewton@gmail.com>
Add an empty line before the module doc string so RDoc can find it.
While we are at it, edit for clarity. The file should already be
using frozen string literals since c10d5085a2.
[ci skip]
Previously, YJIT crashed with rb_bug() when asked to compile new methods
while out of executable memory.
To handle this situation gracefully, this change keeps track of all the
blocks compiled during each invocation in case YJIT runs out of memory in
the middle of a compilation sequence. The list is used to free all blocks in
case compilation fails.
yjit_gen_block() is renamed to gen_single_block() to make it distinct from
gen_block_version(). The call to limit_block_version() and the block_t
allocation are moved into the function to help tidy the error checking in
the outer loop.
limit_block_version() now returns by value. I feel that an out parameter
with conditional mutation is unnecessarily hard to read in code that
does not need to chase the last drop of performance. There is a good
chance that the optimizer is able to output identical code anyway.
This commit adds an entry_exit field to block_t for use in
invalidate_block_version(). By patching the start of the block while
invalidating it, invalidate_block_version() can function correctly
while there is no executable memory left for new branch stubs.
This change additionally fixes correctness for situations where we
cannot patch incoming jumps to the invalidated block. In situations
such as Shopify/yjit#226, the address to the start of the block
is saved and used later, possibly after the block is invalidated.
The assume_* family of functions now generates block->entry_exit before
remembering blocks for invalidation.
RubyVM::YJIT.simulate_oom! is introduced for testing out of memory
conditions. The test for it is disabled for now because OOM triggers
other failure conditions not addressed by this commit.
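A minimal sketch of exercising it (illustrative only; run with `--yjit --yjit-call-threshold=1`):
```ruby
# Illustrative only: simulate_oom! exhausts the executable memory pool,
# so subsequent compilation requests must be handled gracefully instead
# of crashing with rb_bug().
RubyVM::YJIT.simulate_oom!

def foo
  1 + 1
end

1_000.times { foo } # compilation is attempted, but no code memory is left
```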
Fixes Shopify/yjit#226
This change fixes `-v --yjit-stats`. Previously in this situation,
YJIT._print_stats wasn't defined as yjit.rb is not evaluated when there
is only "-v" and no Ruby code to run.
In an effort to minimize build issues on non-x64 platforms, we can
decide at build time to not build the bulk of YJIT. This should fix
obscure build errors like this one on riscv64:
yjit_asm.c:137:(.text+0x3fa): relocation truncated to fit: R_RISCV_PCREL_HI20 against `alloc_exec_mem'
We also don't need to build YJIT on `--disable-jit-support` builds.
One wrinkle to this is that the YJIT Ruby module will not be defined
when YJIT is stripped from the build. I think that's a fair change as
it's only meant to be used for YJIT development.
Before this change, when we encounter a constant cache that is specific
to a lexical scope, we unconditionally exit. This change falls back to
the interpreter's cache in this situation.
This should help constant expressions in `class << self`, which is popular
at Shopify due to the style guide.
For simplicity, this change relies on the cache being warm while
compiling to detect the need to check the lexical scope.
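For example, code along these lines (an illustration, not taken from the patch) previously forced an unconditional exit at the constant reference:
```ruby
# Illustration only: a constant referenced from inside `class << self`
# uses a cache tied to the lexical scope, which previously made YJIT
# exit to the interpreter unconditionally at this site.
class Config
  DEFAULT_TIMEOUT = 5

  class << self
    def timeout
      DEFAULT_TIMEOUT # lexical-scope-specific constant cache
    end
  end
end

Config.timeout
```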
We weren't counting completing an entire method in YJIT as an exit, so the
avg_len_in_yjit for
`./miniruby --yjit-call-threshold=1 --yjit-stats -e'def foo; end; foo'`
was infinite.
For use cases where you want to collect the metrics
for a specific piece of code (typically a web request)
you can have the stats turned off by default and then
turn them on at runtime before executing the code you care
about.
This adds a method to blocks to get outgoing ids, then uses the outgoing
ids to generate a graphviz graph. Two methods were added to the Block
object. One method returns an id for the block, which is just the
address of the underlying block. The other method returns a list of
outgoing block ids. We can use Block#id in conjunction with
Block#outgoing_ids to construct a graph of blocks, as sketched below.
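A rough sketch of how the graph could be emitted (only `Block#id` and `Block#outgoing_ids` come from this change; how the block collection is obtained is an assumption here):
```ruby
# Sketch only: turn a collection of YJIT Block objects into Graphviz DOT.
# Block#id is the address of the underlying block; Block#outgoing_ids
# lists the ids of its successors.
def blocks_to_dot(blocks)
  dot = +"digraph yjit_blocks {\n"
  blocks.each do |block|
    dot << "  b#{block.id} [label=\"0x#{block.id.to_s(16)}\"];\n"
    block.outgoing_ids.each do |succ_id|
      dot << "  b#{block.id} -> b#{succ_id};\n"
    end
  end
  dot << "}\n"
end
```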
* Implement send with blocks
Not that much extra work compared to `opt_send_without_block`.
Moved the stack overflow check because it could've exited after changes
are made to the cfp.
* rename oswb counters
* Might as well implement sending block to cfuncs
* Disable sending blocks to cfuncs for now
* Reconstruct interpreter sp before calling into cfuncs
In case the callee cfunc calls a method or delegates to a block.
This also has the side benefit of letting call sites that sometimes are
iseq calls and sometimes cfunc calls share the same successor.
* only sync with interpreter sp when passing a block
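For reference, the kind of call site this PR now compiles looks like this (illustrative Ruby, not taken from the patch):
```ruby
# Illustrative only: a send that passes a block to an iseq-defined
# method, which YJIT can now compile instead of side-exiting.
def each_twice
  yield 1
  yield 2
end

each_twice { |x| x * 2 }
```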
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
Co-authored-by: Aaron Patterson <aaron.patterson@shopify.com>
Introduce a new macro `ADD_COMMENT(cb, comment)` that records a comment
for the current write position in the code block.
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
Co-authored-by: Aaron Patterson <aaron.patterson@shopify.com>
Lazily compile out a chain of checks for different known classes and
whether `self` embeds its ivars or not.
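An illustrative example (not from the patch) of a site that benefits, where the same instance-variable read sees receivers of more than one class:
```ruby
# Illustration only: one @name read, multiple receiver classes. YJIT
# lazily appends a class check (and an embedded-vs-extended ivar check)
# to the chain for each receiver shape it encounters at this site.
module Named
  def name
    @name
  end
end

class User
  include Named
  def initialize
    @name = "user"
  end
end

class Admin
  include Named
  def initialize
    @name = "admin"
  end
end

[User.new, Admin.new].each { |obj| obj.name }
```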
* Remove trailing whitespaces
* Get proper address in Capstone disassembly
* Lowercase address in Capstone disassembly
Capstone uses lowercase for jump targets in generated listings. Let's
match it.
* Use the same successor in getivar guard chains
Cuts down on duplication
* Address reviews
* Fix copypasta error
* Add a comment