* Iterator
* Use the new iterator for the X86 backend split
* Use iterator for reg alloc, remove forward pass
* Fix up iterator usage on AArch64
* Update yjit/src/backend/ir.rs
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
* Various PR feedback for iterators for IR
* Use a local mutable reference for a64_split
* Move tests from ir.rs to tests.rs in backend
* Fix x86 shift instructions live range calculation
* Iterator
* Use the new iterator for the X86 backend split
* Fix up x86 iterator usage
* Fix ARM iterator usage
* Remove unintentionally duplicated tests
* Port gen_send_iseq to the new backend IR
* Replace occurrences of 8 by SIZEOF_VALUE
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
* Update flags for data processing on ARM
* Update yjit/src/backend/arm64/mod.rs
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
Previously, we patched in an x64 JMP even on A64, which resulted in
invalid machine code. Use the new assembler to generate a jump instead.
Add an assert to make sure patches don't step on each other since it's
less clear cut on A64, where the size of the jump varies depending on
its placement relative to the target.
Fixes a lot of tests that use `set_trace_func` in `test_insns.rb`.
PR: https://github.com/Shopify/ruby/pull/379
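A hedged sketch of the kind of assert described above (the names are illustrative, not the actual YJIT code): because a patched A64 jump can occupy a variable number of instructions, each patch is checked against the start of the next one.
```rust
// Illustrative only: `emit_jump` stands in for the new assembler call that
// writes the jump and reports how many bytes it used.
fn patch_in_jump(patch_start: usize, next_patch_start: usize, target: usize) {
    let bytes_written = emit_jump(patch_start, target);
    // On A64 the jump size depends on the distance to the target, so make
    // sure this patch stopped before the next patch's first byte.
    assert!(patch_start + bytes_written <= next_patch_start);
}

fn emit_jump(_from: usize, _to: usize) -> usize {
    4 // stub: a single B instruction when the target is in range
}
```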
* Left and right shift for IR
* Update yjit/src/backend/x86_64/mod.rs
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
* Port opt_eq and opt_neq to the new backend
* Just use into() outside
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
* Use C_RET_OPND to share the register
* Revert "Use C_RET_OPND to share the register"
This reverts commit 99381765d0008ff0f03ea97c6c8db608a2298e2b.
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
* Port setivar to the new backend IR
* Add a few more setivar test cases
* Prefer const_ptr
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
* Fix asm.load(VALUE)
- `<VALUE as impl Into<Opnd>>` didn't track that the value is a VALUE
- `Iterator::map` doesn't evaluate the closure you give it until you
consume the iterator (e.g. with `collect`). Use a for loop instead so
we put the GC offsets into the compiled block properly (see the sketch
below).
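A minimal, self-contained illustration of the laziness pitfall (the names are illustrative, not the actual YJIT code):
```rust
// BUG vs. fix: `map` is lazy, so the closure never runs if its result is
// dropped without being consumed. A for loop runs its body eagerly.
fn main() {
    let insn_gc_offsets = [8usize, 24, 40];
    let mut recorded: Vec<usize> = Vec::new();

    // Does nothing: the returned `Map` adapter is dropped unconsumed.
    insn_gc_offsets.iter().map(|&ofs| recorded.push(ofs));

    // Actually records the offsets into the compiled block's bookkeeping.
    for &ofs in &insn_gc_offsets {
        recorded.push(ofs);
    }
    assert_eq!(recorded, vec![8, 24, 40]);
}
```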
* x64: Mov(mem, VALUE) should load the value first
Tripped in codegen for putobject now that we are actually feeding
`Opnd::Value` into the backend.
* x64 split: Canonicalize VALUE loads
* Update yjit/src/backend/x86_64/mod.rs
* Port gen_send_cfunc to the new backend
* Remove an obsoleted test
* Add more cfunc tests
* Use csel_e instead and more into()
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
* Add a missing lea for build_kwargs
* Split cfunc test cases
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
forward_pass adjusts the indexes of our opnds to reflect the new
instructions as they are generated in the forward pass. However, we were
using the old live_ranges array, for which the new indexes are
incorrect.
This previously caused us to generate IR containing unnecessary trivial
load instructions (e.g. `mov rax, rax`) because it was looking at the
wrong lifespans. Presumably this could also cause bugs, because the
lifespan of the incorrectly considered operand index could be too short.
We've added an assert which would have failed on the previous trivial
case (but not necessarily all cases).
Co-authored-by: Matthew Draper <matthew@trebex.net>
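A hedged sketch of the indexing bug described above (the names are illustrative, not the real backend types). A lowering pass may emit several new instructions for one old one, so an index into an array computed for the old instruction stream must be remapped before use.
```rust
struct ForwardPass {
    // old_index_of[new_idx] = index this instruction had before the pass
    old_index_of: Vec<usize>,
}

impl ForwardPass {
    /// Last use point of the value produced by the instruction now at
    /// `new_idx`, looked up in live ranges computed for the *old* stream.
    fn live_range_end(&self, old_live_ranges: &[usize], new_idx: usize) -> usize {
        // Indexing old_live_ranges with new_idx directly is the bug
        // described above: it reads the lifespan of an unrelated insn.
        old_live_ranges[self.old_index_of[new_idx]]
    }
}
```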
* Convert getinstancevariable to new backend IR
* Support mem-based mem
* Use more into()
* Add tests for getivar
* Just load obj_opnd to a register
* Apply another into()
* Flip the nil-out condition
* Fix duplicated counts of side_exit
* Move allocation into Assembler::pos_marker
We wanted to do this to begin with but didn't because we were confused
about the lifetime parameter. It's actually talking about the lifetime
of the references that the closure captures. Since all of our usages
capture no references (they use `move`), it's fine to put a `+ 'static`
here.
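A simplified sketch of the signature in question (the real callback takes a code pointer; `usize` stands in here). `+ 'static` only constrains what the closure borrows; a `move` closure capturing owned data satisfies it.
```rust
type PosMarkerFn = Box<dyn Fn(usize) + 'static>;

struct Assembler {
    pos_markers: Vec<PosMarkerFn>,
}

impl Assembler {
    // Boxing happens here, once, instead of at every call site.
    fn pos_marker(&mut self, marker_fn: impl Fn(usize) + 'static) {
        self.pos_markers.push(Box::new(marker_fn));
    }
}

fn main() {
    let mut asm = Assembler { pos_markers: Vec::new() };
    let label_idx = 3usize; // owned data: `move` makes the closure 'static
    asm.pos_marker(move |pos| println!("label {label_idx} is at {pos}"));
}
```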
* Use optional token syntax for calling convention macro
* Explicitly request C ABI on ARM
It looks like the Rust calling convention for functions is the same as
the C ABI for now and it's unlikely to change, but it's easy for us to
be explicit here. I also tried saying `extern "aapcs"` but that
unfortunately doesn't work.
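A sketch of the pattern (the function body is illustrative only): explicitly request the C ABI for functions that generated machine code will call.
```rust
extern "C" fn incr_counter(counter: &mut u64) {
    *counter += 1;
}

fn main() {
    // Taking the address as a C-ABI function pointer is what the backend
    // ultimately embeds in a ccall.
    let fptr: extern "C" fn(&mut u64) = incr_counter;
    let mut c = 0u64;
    fptr(&mut c);
    assert_eq!(c, 1);
    // Note: in Rust, `extern "aapcs"` refers to the 32-bit ARM convention
    // and isn't accepted here on AArch64; "C" already picks the platform's
    // standard convention.
}
```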
* A64: Fix off by one in offset encoding for BL
It's relative to the address of the instruction not the end of it.
* A64: Fix off by one when encoding B
It's relative to the start of the instruction not the end.
* A64: Add some tests for boundary offsets
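The fix in miniature: the imm26 field of A64 B/BL holds the distance from the address of the branch instruction itself, in 4-byte units. Measuring from the end of the instruction is the off-by-one described above. (The helper name below is illustrative.)
```rust
fn branch_offset(src_insn_addr: i64, dst_addr: i64) -> i32 {
    let byte_offset = dst_addr - src_insn_addr; // not src_insn_addr + 4
    assert_eq!(byte_offset % 4, 0);
    (byte_offset / 4) as i32
}

fn main() {
    assert_eq!(branch_offset(0x1000, 0x1000), 0); // branch to itself
    assert_eq!(branch_offset(0x1000, 0x1004), 1); // next instruction
    assert_eq!(branch_offset(0x1000, 0x0ffc), -1); // previous instruction
}
```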
It allows for reserving a specific register and prevents the register
allocator from clobbering it. Without this
`./miniruby --yjit-stats --yjit-call-threshold=1 -e0` was crashing because
the counter incrementing code was clobbering RAX incorrectly.
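A hypothetical sketch of the idea (the method names are illustrative, not the actual YJIT API): a register removed from the allocatable pool can hold something like a stats counter without being clobbered.
```rust
struct RegisterPool {
    free_regs: Vec<u8>, // register numbers available to the allocator
}

impl RegisterPool {
    // Reserve `reg`: the allocator can no longer hand it out.
    fn take_reg(&mut self, reg: u8) {
        self.free_regs.retain(|&r| r != reg);
    }

    fn alloc_reg(&mut self) -> Option<u8> {
        self.free_regs.pop()
    }
}

fn main() {
    let mut pool = RegisterPool { free_regs: vec![0, 1, 2] }; // e.g. RAX = 0
    pool.take_reg(0); // keep RAX for counter increments
    assert!(pool.alloc_reg() != Some(0));
}
```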
* Better splitting for Op::Add, Op::Sub, and Op::Cmp
* Split stores if the displacement is too large
* Use a shifted immediate argument
* Split all places where shifted immediates are used
* Add more tests to the cirrus workflow
* Refactor defer_compilation to use PosMarker
* Port gen_direct_jump() to use PosMarker
* Port gen_branch, branchunless
* Port over gen_jump()
* Port over branchif and branchnil
* Fix use of record_boundary_patch_point in jump_to_next_insn
* gen_dupn was allocating and throwing away an Assembler object instead of using the one passed in
* Uncomment remaining tests in codegen.rs, which seem to work now
* Implement PosMarker instruction
* Implement PosMarker in the arm backend
* Make bindgen run only for clang image
* Fix if-else in cirrus CI file
* Add missing semicolon
* Try removing trailing semicolon
* Try to fix shell/YAML syntax
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
* Move to/from SP on AArch64
* Consolidate loads and stores
* Implement LDR post-index and LDR pre-index for AArch64
* Implement STR post-index and STR pre-index for AArch64
* Module entrypoints for LDR pre/post -index and STR pre/post -index
* Use STR (pre-index) and LDR (post-index) to implement push/pop
* Go back to using MOV for to/from SP
* ADR and ADRP for AArch64
* Implement Op::Jbe on X86
* Lera instruction
* Op::BakeString
* LeaPC -> LeaLabel
* Port print_str to the new backend
* Port print_value to the new backend
* Port print_ptr to the new backend
* Write null-terminators in Op::BakeString
* Fix up rebase issues on print-str port
* Add back in panic for X86 backend for unsupported instructions being lowered
* Fix target architecture
Previously we were using a `Box<dyn FnOnce>` to support patching the
code when jumping to labels. We needed to do this because some of the
closures that were being used to patch needed to capture local variables
(on both X86 and ARM it was the type of condition for the conditional
jumps).
To get around that, we can instead use const generics, since the
condition codes are always known at compile time. This means the
closures go from polymorphic to monomorphic, so they can be represented
as plain `fn` function pointers instead of a `Box<dyn FnOnce>`. This
simplifies the storage of the `LabelRef` structs and should hopefully be
a better default going forward.
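A simplified sketch of the refactoring described above (the names are illustrative). The condition code becomes a const generic parameter, so each instantiation is a distinct monomorphic function that coerces to a plain `fn` pointer and can be stored without boxing.
```rust
const CONDITION_EQ: u8 = 0b0000;
const CONDITION_NE: u8 = 0b0001;

fn emit_conditional_jump<const CONDITION: u8>(offset: i32) {
    // Generate a conditional branch using CONDITION, known at compile time.
    println!("b.cond({:#x}, offset {})", CONDITION, offset);
}

struct LabelRef {
    // A plain function pointer: no allocation, no dynamic dispatch.
    patch: fn(i32),
}

fn main() {
    let refs = [
        LabelRef { patch: emit_conditional_jump::<CONDITION_EQ> },
        LabelRef { patch: emit_conditional_jump::<CONDITION_NE> },
    ];
    for r in &refs {
        (r.patch)(16);
    }
}
```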
* More Arm64 lowering/backend work
* We now have encoding support for the LDR instruction for loading a PC-relative memory location
* You can now call add/adds/sub/subs with signed immediates, which switches appropriately based on sign
* We can now load immediates into registers appropriately, attempting to keep the minimal number of instructions (see the sketch after this list):
* If it fits into 16 bits, we use just a single movz.
* Else if it can be encoded as a bitmask immediate, we use a single mov.
* Otherwise we use a movz, a movk, and then optionally another one or two movks.
* Fixed a bunch of code to do with the Op::Load opcode.
* We now handle GC-offsets properly for Op::Load by skipping around them with a jump instruction. (This will be made better by constant pools in the future.)
* Op::Lea is doing what it's supposed to do now.
* Fixed a bug in the backend tests to do with not using the result of an Op::Add.
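A hedged sketch of the load-immediate strategy described above (the helper names are illustrative; the bitmask check is stubbed out, as the real encoding test is involved):
```rust
fn emit_load_immediate(value: u64) {
    if value < (1 << 16) {
        // Fits in 16 bits: one MOVZ.
        println!("movz x0, #{value}");
    } else if is_bitmask_immediate(value) {
        // Encodable as a bitmask immediate: one MOV (ORR with xzr).
        println!("mov x0, #{value:#x}");
    } else {
        // General case: MOVZ for the low 16 bits, then one MOVK per
        // remaining non-zero 16-bit chunk.
        println!("movz x0, #{:#x}", value & 0xffff);
        for shift in [16u32, 32, 48] {
            let chunk = (value >> shift) & 0xffff;
            if chunk != 0 {
                println!("movk x0, #{chunk:#x}, lsl #{shift}");
            }
        }
    }
}

// Stub: whether the value is a rotated pattern of contiguous set bits.
fn is_bitmask_immediate(_value: u64) -> bool {
    false
}

fn main() {
    emit_load_immediate(42);          // one movz
    emit_load_immediate(0x1234_5678); // movz + movk
}
```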
* Fix the remaining tests for Arm64
* Move split loads logic into each backend
* Get initial wiring up
* Split IncrCounter instruction
* Breakpoints in Arm64
* Support for ORR
* MOV instruction encodings
* Implement JmpOpnd and CRet
* Add ORN
* Add MVN
* PUSH, POP, CCALL for Arm64
* Some formatting and implement Op::Not for Arm64
* Consistent constants when working with the Arm64 SP
* Allow OR-ing values into the memory buffer
* Test lowering Arm64 ADD
* Emit unconditional jumps consistently in Arm64
* Begin emitting conditional jumps for A64
* Back out some labelref changes
* Remove label API that no longer exists
* Use a trait for the label encoders
* Encode nop
* Add in nops so jumps are the same width no matter what on Arm64
* Op::Jbe for CodePtr
* Pass src_addr and dst_addr instead of calculated offset to label refs
* Even more jump work for Arm64
* Fix up jumps to use consistent assertions
* Handle splitting Add, Sub, and Not insns for Arm64
* More Arm64 splits and various fixes
* PR feedback for Arm64 support
* Split up jumps and conditional jump logic
* Remove x86-64 dependency from codegen.rs
* Port over putnil and putobject
* Port over gen_leave()
* Complete port of gen_leave()
* Fix bug in x86 instruction splitting
* LDUR
* Fix up immediate masking
* Consume operands directly
* Consistency and cleanup
* More consistency and entrypoints
* Cleaner syntax for masks
* Cleaner shifting for encodings
* Initial setup for aarch64
* ADDS and SUBS
* ADD and SUB for immediates
* Revert moved code
* Documentation
* Rename Arm64* to A64*
* Comments on shift types
* Share sig_imm_size and unsig_imm_size
* Split instructions if necessary
* Add a reusable transform_insns function
* Split out comments and labels from transform_insns
* Refactor alloc_regs to use transform_insns
* YJIT: Add known_* helpers for Type
This adds a few helpers to Type which all return Options representing
what is known, from a Ruby perspective, about the type.
This includes:
* known_class_of: If known, the class represented by this type
* known_value_type: If known, the T_ value type
* known_exact_value: If known, the exact VALUE represented by this type
(currently this is only available for true/false/nil)
* known_truthy: If known, whether or not this value evaluates as true
(not false or nil)
The goal of this is to abstract away from the codegen, wherever
possible, the specifics of the mappings between types. For example,
when Type::CString was previously introduced as a more specific version
of Type::TString, uses of Type::TString in codegen needed to be updated
to check either case. Now, by using known_value_type, at least in theory
we can introduce new types with minimal (if any) codegen changes.
I think Rust's Option type allows us to represent this uncertainty
fairly well and should help avoid mistakes, and the matching code using
it turned out pretty clean.
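A minimal sketch of the shape of these helpers (simplified; the real methods live on YJIT's Type and return Ruby-specific types, with strings standing in for the T_ constants here). `None` means "not known", forcing callers to handle uncertainty explicitly.
```rust
#[derive(Clone, Copy)]
enum Type {
    Nil,
    True,
    False,
    TString,
    CString, // a T_STRING known to be exactly the String class
    Unknown,
}

impl Type {
    fn known_value_type(self) -> Option<&'static str> {
        match self {
            Type::Nil => Some("T_NIL"),
            Type::True => Some("T_TRUE"),
            Type::False => Some("T_FALSE"),
            // Both string types map to the same T_ value type, so codegen
            // asking this question doesn't need to know about CString.
            Type::TString | Type::CString => Some("T_STRING"),
            Type::Unknown => None,
        }
    }

    fn known_truthy(self) -> Option<bool> {
        match self {
            Type::Nil | Type::False => Some(false),
            Type::True | Type::TString | Type::CString => Some(true),
            Type::Unknown => None,
        }
    }
}

fn main() {
    // Codegen can branch on what's known and fall back when it isn't.
    assert_eq!(Type::CString.known_value_type(), Some("T_STRING"));
    assert_eq!(Type::Unknown.known_truthy(), None);
}
```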
* YJIT: Use known_value_type for checktype
* YJIT: Use known_value_type for T_STRING check
* YJIT: Use known_class_of in guard_known_klass
* YJIT: Use known truthyness in jit_rb_obj_not
* YJIT: Rename known_class_of => known_class
Teach getblockparamproxy to handle the no-block case without exiting
Co-authored-by: John Hawthorn <john@hawthorn.email>
Write barriers may be required when VM_ENV_FLAG_WB_REQUIRED is set;
however, write barriers only affect heap objects being written. If we
know an immediate value is being written, we can skip this check.
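A hedged sketch of the fast-path test (the flag value below is a stand-in; CRuby's real check for immediates is the SPECIAL_CONST_P predicate on the written VALUE). Immediates are not heap objects, so writing one never needs a barrier.
```rust
const VM_ENV_FLAG_WB_REQUIRED: u64 = 0x008; // illustrative value

fn needs_write_barrier(env_flags: u64, written_value_is_immediate: bool) -> bool {
    (env_flags & VM_ENV_FLAG_WB_REQUIRED) != 0 && !written_value_is_immediate
}

fn main() {
    // Writing `nil` (an immediate) skips the barrier even when flagged.
    assert!(!needs_write_barrier(VM_ENV_FLAG_WB_REQUIRED, true));
    assert!(needs_write_barrier(VM_ENV_FLAG_WB_REQUIRED, false));
}
```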
This commit implements Objects on Variable Width Allocation. This allows
Objects with more ivars to be embedded (i.e. contents directly follow the
object header) which improves performance through better cache locality.
In a small script the slowness of this feature isn't really noticeable,
but on Rails it's very noticeable how slow it can be. This PR aims to
speed up two parts of the functionality:
1) The Rust exit recording code
Instead of adding all samples to the yjit_raw_samples and
yjit_line_samples as we see them, we can increment a counter on the ones
we've seen before (see the sketch below). This will be faster on traces
where we are hitting the same stack often. In a crude measurement of
booting just the Active Record base test (`test/cases/base_test.rb`), we
found that this improved the speed by 1 second.
This also results in a smaller marshal dump file which sped up the test
boot time by 4 seconds with trace exits on.
2) The Ruby parsing code
Previously we were allocating new arrays using `shift` and
`each_with_index`. This change avoids allocating new arrays by using an
index. This change saves us the most time, gaining 11 seconds.
Before this change the test boot time was 62 seconds; after, it is 47
seconds. This is still too long, but it's a step closer to faster
functionality. Next we're going to tackle allowing you to collect trace
exits for a specific instruction. There is also some potential slowness
in the GC code that I'd like to take a second look at.
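A hedged sketch of the change to part 1 (a HashMap stands in for the real yjit_raw_samples bookkeeping): identical stacks bump a counter instead of appending the whole backtrace again.
```rust
use std::collections::HashMap;

fn record_exit_stack(counts: &mut HashMap<Vec<u64>, u64>, stack: Vec<u64>) {
    *counts.entry(stack).or_insert(0) += 1;
}

fn main() {
    let mut counts = HashMap::new();
    let stack = vec![0xdead, 0xbeef]; // frame addresses, schematically
    record_exit_stack(&mut counts, stack.clone());
    record_exit_stack(&mut counts, stack.clone());
    assert_eq!(counts[&stack], 2); // one entry, not two copies
}
```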
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
The new check-yjit-bindgen-unused make target fails if there are any
unused rust-bindgen "allow" entries. For that target we turn on Rust
warnings (there are a lot of them) and grep for the ones that correspond
to unused allow entries.
I've added check-yjit-bindgen-unused as a dependency of
check-yjit-bindings, so unused allow entries will now fail CI.
This change also removes our single unused allow entry (VM_CALL.*) which
was known to be bad.
This commit makes YJIT allocate memory for generated code gradually, as
needed. Previously, YJIT allocated all the memory it needed on boot in
one go, leading to a higher than necessary resident set size (RSS) and
time spent on boot initializing the memory with a large memset().
Users should no longer need to search for a magic number to pass to
`--yjit-exec-mem-size` since physical memory consumption should now more
accurately reflect the requirement of the workload.
YJIT now reserves a range of addresses on boot. This region starts out
with no access permissions at all, so buggy attempts to jump into the
region crash just like before this change. To get this hardening at a
finer granularity than the page size, we fill each page with trapping
instructions when we first allocate physical memory for the page.
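A hedged sketch of the scheme, using the libc crate directly (the real implementation is the new VirtualMemory module described below): reserve the whole range with PROT_NONE, then commit pages one at a time, filling each with trapping bytes before making it executable.
```rust
unsafe fn reserve(len: usize) -> *mut u8 {
    libc::mmap(
        std::ptr::null_mut(),
        len,
        libc::PROT_NONE, // stray jumps into uncommitted space still crash
        libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
        -1,
        0,
    ) as *mut u8
}

unsafe fn commit_page(page: *mut u8, page_size: usize) {
    libc::mprotect(page.cast(), page_size, libc::PROT_READ | libc::PROT_WRITE);
    // 0xCC is x64's int3; on A64 one would write BRK instructions instead.
    std::ptr::write_bytes(page, 0xCC, page_size);
    libc::mprotect(page.cast(), page_size, libc::PROT_READ | libc::PROT_EXEC);
}
```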
Most of the time applications don't need 256 MiB of executable code, so
allocating on-demand ends up doing less total work than before. Case in
point, a simple `ruby --yjit-call-threshold=1 -eitself` takes about
half as long after this change. In terms of memory consumption, here is
a table to give a rough summary of the impact:
| Peak RSS in MiB | -eitself example | railsbench once |
| :-------------: | ---------------: | --------------: |
| before | 265 | 377 |
| after | 11 | 143 |
| no YJIT | 10 | 101 |
A new module is introduced to handle allocation bookkeeping.
`CodePtr` is moved into the module since it has a close relationship
with the new `VirtualMemory` struct. This new interface has a slightly
smaller surface than before in that marking a region as writable is no
longer a public operation.
When running with `--yjit-stats` turned on, yjit can inform the user
what the most common exits are. While this is useful information, it
doesn't tell you the source location of the code that exited or what
that code looks like. This change intends to fix that.
To use the feature, run yjit with the `--yjit-trace-exits` option,
which will record the backtrace for every exit that occurs. This
functionality requires the stats feature to be turned on, so passing
`--yjit-trace-exits` will automatically set the `--yjit-stats` option.
Users must call `RubyVM::YJIT.dump_exit_locations(filename)` which will
Marshal dump the contents of `RubyVM::YJIT.exit_locations` into a file
based on the passed filename.
*Example usage:*
Given the following script, we write to a file called
`concat_array.dump` the results of `RubyVM::YJIT.exit_locations`.
```ruby
def concat_array
["t", "r", *x = "u", "e"].join
end
1000.times do
concat_array
end
RubyVM::YJIT.dump_exit_locations("concat_array.dump")
```
When we run the file with this branch and the appropriate flags, the
stacktrace will be recorded. Note that Stackprof needs to be installed,
or you need to point to the library directly.
```
./ruby --yjit --yjit-call-threshold=1 --yjit-trace-exits -I/Users/eileencodes/open_source/stackprof/lib test.rb
```
We can then read the dump file with Stackprof:
```
./ruby -I/Users/eileencodes/open_source/stackprof/lib/ /Users/eileencodes/open_source/stackprof/bin/stackprof --text concat_array.dump
```
Results will look similar to the following:
```
==================================
Mode: ()
Samples: 1817 (0.00% miss rate)
GC: 0 (0.00%)
==================================
TOTAL (pct) SAMPLES (pct) FRAME
1001 (55.1%) 1001 (55.1%) concatarray
335 (18.4%) 335 (18.4%) invokeblock
178 (9.8%) 178 (9.8%) send
140 (7.7%) 140 (7.7%) opt_getinlinecache
...etc...
```
Simply inspecting the `concatarray` method will give `SOURCE
UNAVAILABLE` because the source is insns.def.
```
./ruby -I/Users/eileencodes/open_source/stackprof/lib/ /Users/eileencodes/open_source/stackprof/bin/stackprof --text concat_array.dump --method concatarray
```
Result:
```
concatarray (nonexistent.def:1)
samples: 1001 self (55.1%) / 1001 total (55.1%)
callers:
1000 ( 99.9%) Object#concat_array
1 ( 0.1%) Gem.suffixes
callees (0 total):
code:
SOURCE UNAVAILABLE
```
However, if we instead inspect the caller, `Object#concat_array`, we
can see the exact source of the `concatarray` exit.
```
./ruby -I/Users/eileencodes/open_source/stackprof/lib/ /Users/eileencodes/open_source/stackprof/bin/stackprof --text concat_array.dump --method Object#concat_array
```
```
Object#concat_array (/Users/eileencodes/open_source/rust_ruby/test.rb:1)
samples: 0 self (0.0%) / 1000 total (55.0%)
callers:
1000 ( 100.0%) block in <main>
callees (1000 total):
1000 ( 100.0%) concatarray
code:
| 1 | def concat_array
1000 (55.0%) | 2 | ["t", "r", *x = "u", "e"].join
| 3 | end
```
The `--walk` option is recommended for this feature, as it makes it
easier to traverse the tree of exits.
*Goals of this feature:*
This feature is meant to give more information when working on YJIT.
The idea is that if we know what code is exiting we can decide what
areas to prioritize when fixing exits. In some cases this means
prioritizing avoiding certain exits in yjit. In more complex cases it
might mean changing the Ruby code to be more performant when run with
yjit. Ultimately the more information we have about what code is exiting
AND why, the better we can make yjit.
*Known limitations:*
* Due to tracing exits, running this on large codebases like Rails
can be quite slow.
* On complex methods it can still be difficult to pinpoint the exact cause of
an exit.
* Stackprof is a requirement to view the backtrace information from
the dump file.
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>