Previously, PosMarker callbacks ran even when the assembler failed to
assemble its contents due to insufficient space. This was problematic
because when Assembler::compile() failed, the callbacks were given
positions that have no valid code, contrary to general expectation.
For example, we use a PosMarker callback to record VM instruction
boundaries and patch in jumps to exits in case the guest program starts
tracing. Previously, however, we could record a location near the end of
the code block where there is no space to patch in jumps. I suspect
this is the cause of the recent occurrences of rare random failures on
GitHub Actions with the invariants.rs:529 "can rewrite existing code"
message. `--yjit-perf` also uses PosMarker and had a similar issue.
Buffer the list of callbacks to fire, and only fire them once all the
code in the assembler is written out successfully. It's more intuitive
this way.
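A minimal sketch of the buffering idea, with hypothetical Rust types
standing in for the actual assembler internals:

    #[derive(Clone, Copy)]
    struct CodePtr(u32);

    // Position-marker callbacks are recorded while emitting code, but
    // only fired after the assembler's contents are written out.
    struct Assembler {
        pos_marker_callbacks: Vec<(u32, Box<dyn Fn(CodePtr)>)>,
    }

    impl Assembler {
        fn new() -> Self {
            Assembler { pos_marker_callbacks: Vec::new() }
        }

        fn pos_marker(&mut self, offset: u32, callback: impl Fn(CodePtr) + 'static) {
            let boxed: Box<dyn Fn(CodePtr)> = Box::new(callback);
            self.pos_marker_callbacks.push((offset, boxed));
        }

        // Returns None when the code block ran out of space. Callbacks
        // only fire on the success path, so they never see positions
        // that contain no valid code.
        fn compile(self, out_of_space: bool) -> Option<()> {
            if out_of_space {
                return None;
            }
            for (offset, callback) in self.pos_marker_callbacks {
                callback(CodePtr(offset));
            }
            Some(())
        }
    }

    fn main() {
        let mut asm = Assembler::new();
        asm.pos_marker(0, |pos| println!("insn boundary at offset {}", pos.0));
        assert!(asm.compile(false).is_some());
    }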
Right now callers of `rb_shape_get_next` need to
first check whether there is capacity left and, if not, call
`rb_shape_transition_shape_capa` before they can call `rb_shape_get_next`.
On each of these calls they also need to check whether they got a
TOO_COMPLEX back.
All this logic is duplicated in the interpreter, YJIT and RJIT.
Instead we can have `rb_shape_get_next` do the capacity transition
when needed. The caller can compare the old and new shapes' capacities
to know whether resizing is needed, and it only has to check for
TOO_COMPLEX once.
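A minimal sketch of the new caller pattern, using hypothetical Rust
stand-ins for the C shape API (the types and helpers below are made up
for illustration):

    #[derive(Clone, Copy, PartialEq, Eq)]
    struct Shape {
        iv_count: u32,
        capacity: u32,
    }

    const TOO_COMPLEX: Shape = Shape { iv_count: u32::MAX, capacity: 0 };

    // Stand-in for rb_shape_get_next: the capacity transition now
    // happens in here when the current shape is full.
    fn shape_get_next(current: Shape) -> Shape {
        const MAX_IVS: u32 = 80;
        if current.iv_count >= MAX_IVS {
            return TOO_COMPLEX;
        }
        let capacity = if current.iv_count == current.capacity {
            current.capacity * 2
        } else {
            current.capacity
        };
        Shape { iv_count: current.iv_count + 1, capacity }
    }

    // The caller checks for TOO_COMPLEX once and compares the old and
    // new capacities to decide whether the ivar buffer must grow.
    fn add_ivar(current: Shape) -> Option<Shape> {
        let next = shape_get_next(current);
        if next == TOO_COMPLEX {
            return None;
        }
        if next.capacity != current.capacity {
            // resize the object's ivar storage to next.capacity here
        }
        Some(next)
    }

    fn main() {
        let full = Shape { iv_count: 1, capacity: 1 };
        let next = add_ivar(full).unwrap();
        assert_eq!(next.capacity, 2);
    }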
We still need to do `jit.record_boundary_patch_point = false`
when gen_outlined_exit() returns `None` and we return with `?`.
Previously, we tripped the assert at codegen.rs:1042.
Found with `--yjit-exec-mem-size=3` on the lobsters benchmark.
Co-authored-by: Takashi Kokubun <takashikkbn@gmail.com>
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
We've long had a size restriction on the code memory region such that a
u32 could refer to everything. This commit capitalizes on that
restriction by shrinking `CodePtr` from 8 bytes to 4.
To derive a full raw pointer from a `CodePtr`, one needs a base pointer.
Both `CodeBlock` and `VirtualMemory` can be used for this purpose. The
base pointer is readily available everywhere, except for in the case of
the `jit_return` "branch". Generalize lea_label() to lea_jump_target()
in the IR to delay deriving the `jit_return` address until `compile()`,
when the base pointer is available.
On railsbench, this yields roughly a 1% reduction to `yjit_alloc_size`
(58,397,765 to 57,742,248).
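A minimal sketch of the offset-based representation, with hypothetical
names standing in for the actual YJIT definitions:

    // A 4-byte offset into the code region instead of an 8-byte raw pointer.
    #[derive(Clone, Copy)]
    struct CodePtr(u32);

    impl CodePtr {
        // Deriving a full raw pointer requires a base address, which the
        // code block / virtual memory region provides at compile() time.
        fn raw_ptr(self, base: *const u8) -> *const u8 {
            base.wrapping_add(self.0 as usize)
        }
    }

    fn main() {
        assert_eq!(std::mem::size_of::<CodePtr>(), 4);

        let region = [0u8; 64];
        let base = region.as_ptr();
        let ptr = CodePtr(16);
        assert_eq!(ptr.raw_ptr(base), base.wrapping_add(16));
    }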
If the VM ran out of shapes, `rb_shape_transition_shape_capa` might
return `OBJ_TOO_COMPLEX_SHAPE`.
Co-authored-by: Jean Boussier <byroot@ruby-lang.org>
* YJIT: implement two-step call threshold
Automatically switch the call threshold to a larger value for
larger, production-sized apps, while still allowing smaller apps
and command-line programs to start with a lower threshold (see the
sketch after this message).
* Update yjit/src/options.rs
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
* Make the new variables constants
* Check that a custom call threshold was not specified
---------
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
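A minimal sketch of the two-step selection; the constant names, values,
and the ISEQ-count estimate below are hypothetical, not the ones in the
actual change:

    // Hypothetical thresholds and cutoff for illustration only.
    const SMALL_APP_CALL_THRESHOLD: u64 = 30;
    const LARGE_APP_CALL_THRESHOLD: u64 = 120;
    const LARGE_APP_ISEQ_COUNT: u64 = 20_000;

    struct Options {
        // None unless an explicit call threshold was specified.
        custom_call_threshold: Option<u64>,
    }

    fn call_threshold(opts: &Options, live_iseq_count: u64) -> u64 {
        // A custom threshold always wins.
        if let Some(threshold) = opts.custom_call_threshold {
            return threshold;
        }
        // Otherwise, larger, production-sized apps wait longer before
        // compiling, while small apps and command-line programs start
        // with the lower threshold.
        if live_iseq_count >= LARGE_APP_ISEQ_COUNT {
            LARGE_APP_CALL_THRESHOLD
        } else {
            SMALL_APP_CALL_THRESHOLD
        }
    }

    fn main() {
        let opts = Options { custom_call_threshold: None };
        assert_eq!(call_threshold(&opts, 100), SMALL_APP_CALL_THRESHOLD);
        assert_eq!(call_threshold(&opts, 50_000), LARGE_APP_CALL_THRESHOLD);
    }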
So that we get a reminder to check CodeBlock::has_dropped_bytes().
Internally, asm.compile() already checks it, and this patch just
propagates it out to the caller with a `#[must_use]`.
Code GC logic moved out one level in entry_stub_hit(), so the body
can freely use `?`
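A minimal sketch of the `#[must_use]` pattern with hypothetical types;
the actual signatures differ:

    struct CodeBlock {
        dropped_bytes: bool,
    }

    impl CodeBlock {
        fn has_dropped_bytes(&self) -> bool {
            self.dropped_bytes
        }
    }

    struct Assembler;

    impl Assembler {
        // Ignoring the return value now triggers an unused_must_use
        // warning, reminding callers to handle the out-of-memory case.
        #[must_use]
        fn compile(self, cb: &mut CodeBlock) -> Option<u32> {
            if cb.has_dropped_bytes() {
                None // ran out of space while writing code
            } else {
                Some(0) // start offset of the generated code
            }
        }
    }

    fn main() {
        let mut cb = CodeBlock { dropped_bytes: false };
        match Assembler.compile(&mut cb) {
            Some(_start) => println!("compiled"),
            None => println!("out of memory"),
        }
    }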
It's an estimator for application size and could be used as a
compilation heuristic later.
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Co-authored-by: Takashi Kokubun <takashikkbn@gmail.com>
Previously, the version-controlled `cruby_bindings.inc.rs` file
contained the build-time artifact `id.h`, which nobu mentioned hinders
the goal of having fewer magic numbers in the repository.
Look up the IDs YJIT needs on boot. It costs cycles, but it's fine since
YJIT only uses a handful of IDs at the moment. No perceptible
degradation to boot time found in my testing.
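A minimal sketch of the boot-time lookup idea; intern() and the field
names here are hypothetical stand-ins, not the actual bindings:

    type ID = usize;

    // Hypothetical stand-in for interning a name; the real lookup goes
    // through the CRuby C API during YJIT initialization.
    fn intern(name: &str) -> ID {
        name.bytes().map(usize::from).sum()
    }

    // IDs resolved once at boot instead of being baked into the
    // repository as build-time constants.
    struct BootIds {
        to_ary: ID,
        respond_to_missing: ID,
    }

    impl BootIds {
        fn init() -> Self {
            BootIds {
                to_ary: intern("to_ary"),
                respond_to_missing: intern("respond_to_missing?"),
            }
        }
    }

    fn main() {
        let ids = BootIds::init();
        assert_eq!(ids.to_ary, intern("to_ary"));
        assert_ne!(ids.respond_to_missing, ids.to_ary);
    }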
Previously, for block argument callsites with certain combinations of
argument count and callee local variable count, YJIT ended up
writing over arguments that were supposed to be collected unmodified
into a rest parameter array.
Detect when clobbering would happen and avoid it. Also, place the block
handler after the stack overflow check, since it writes to new stack
space.
Reported-by: Takashi Kokubun <takashikkbn@gmail.com>
* Port call threshold logic from Rust to C for performance
* Prefix global/field names with yjit_
* Fix linker error
* Fix preprocessor condition for rb_yjit_threshold_hit
* Fix third linker issue
* Exclude yjit_calls_at_interv from RJIT bindgen
---------
Co-authored-by: Takashi Kokubun <takashikkbn@gmail.com>
Given `SHAPE_MAX_NUM_IVS 80`, we transition to TOO_COMPLEX
way before we could overflow an 8-bit counter.
This reduces the size of `rb_shape_t` from 32B to 24B.
If we decide to raise `SHAPE_MAX_NUM_IVS` we can always increase
that type again.
Previously, at the end of `leave` we did
`*caller_cfp->sp = return_value`, like the interpreter.
With future changes that leave the SP field uninitialized for C frames,
this will become problematic. For cases like returning from
`rb_funcall()`, the return value was written above the stack and
never read anyway (callers use the copy in the return register).
Leave the return value in a register at the end of `leave` and have the
code at `cfp->jit_return` decide what to do with it. This avoids the
unnecessary memory write mentioned above. For JIT-to-JIT returns, it goes
through `asm.stack_push()` and benefits from register allocation for
stack temporaries.
Mostly flat on benchmarks, with maybe some marginal speed improvements.
Co-authored-by: Takashi Kokubun <takashikkbn@gmail.com>