Use a String buffer instead of FILE*.
Using C.fprintf is slower than String manipulation in memory. I'm going
to change the way MJIT writes files, and this is a prerequisite for it.
We frequently make branches that have only one target, but we used to
always allocate space for two branch targets. This patch moves all the
information a branch target holds into a struct and refers to it
through an Option<Box<BranchTarget>>, so that when the second branch
target is not present it takes only 8 bytes.
Retained heap size on railsbench went from 16.17 MiB to 14.57 MiB, a
reduction of about 10% (a ratio of about 1.11).
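As a rough sketch of why the optional second target costs only a
pointer (illustrative field names, not YJIT's actual types): Box is
non-null, so Rust's niche optimization makes Option<Box<BranchTarget>>
exactly pointer-sized.
```
struct BranchTarget {
    address: usize, // code address of the target
    // ... other per-target data lives here now ...
}

struct Branch {
    // 8 bytes per slot on 64-bit, present or not.
    targets: [Option<Box<BranchTarget>>; 2],
}

fn main() {
    use std::mem::size_of;
    // Box is non-null, so None is represented by the null pointer:
    assert_eq!(size_of::<Option<Box<BranchTarget>>>(), size_of::<usize>());
    assert_eq!(size_of::<Branch>(), 2 * size_of::<usize>());
    let one_target = Branch {
        targets: [Some(Box::new(BranchTarget { address: 0x1000 })), None],
    };
    assert!(one_target.targets[1].is_none()); // absent target: just a null pointer
}
```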
YJIT: x86_64: Fix cmp with number where sign bit is set
Before this commit, we were unconditionally treating unsigned ints as
signed ints when counting the number of bits required for representing
the immediate in machine code. When the size of the immediate matches
the size of the other operand, no sign extension happens, so this was
incorrect. `asm.cmp(opnd64, 0x8000_0000)` panicked even though it's
encodable as `CMP r/m32, imm32`. Large shape ids were impacted by this
issue.
Co-Authored-By: Aaron Patterson <tenderlove@ruby-lang.org>
Co-Authored-By: Alan Wu <alanwu@ruby-lang.org>
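A sketch of the sizing rule the fix implements (illustrative helpers,
not YJIT's actual emitter code):
```
// When the immediate is as wide as the destination operand, the CPU
// performs no sign extension, so an unsigned value with the sign bit
// set (like 0x8000_0000 for a 32-bit operand) is still encodable.
fn imm32_ok_for_32bit_dst(imm: u64) -> bool {
    // CMP r/m32, imm32 uses the immediate as-is: anything up to u32::MAX fits.
    imm <= u32::MAX as u64
}

fn imm32_ok_for_64bit_dst(imm: u64) -> bool {
    // CMP r/m64, imm32 sign-extends, so the value must survive an i32 round trip.
    imm as i64 == (imm as i64 as i32) as i64
}

fn main() {
    assert!(imm32_ok_for_32bit_dst(0x8000_0000));  // the case that used to panic
    assert!(!imm32_ok_for_64bit_dst(0x8000_0000)); // sign extension would flip it
}
```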
YJIT: Skip padding jumps to side exits
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
@casperisfine reported a bug in this gist: https://gist.github.com/casperisfine/d59e297fba38eb3905a3d7152b9e9350
After investigating, I found it was caused by a combination of send and a C func that we have overridden in the JIT. For send calls, we need to do some stack manipulation before making the call. Because of the way exits work, we need to do that stack manipulation at the last possible moment. In this case, we weren't doing that stack manipulation at all. Unfortunately, with how the code is structured, there isn't a great place to do that stack manipulation for our overridden C funcs.
Each overridden C func can return a boolean stating that it shouldn't be used. We would need to do the stack manipulation after all of those checks are done. We could pass a lambda, or separate the logic for "can I run this override" from "now generate the code for it". Since we are coming up on a release, I went with the path of least resistance and simply decided not to use these overrides when we are in a send call.
We should definitely revisit this in the future.
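A hedged sketch of the workaround, with made-up names standing in for
YJIT's actual lookup:
```
type CodegenFn = fn();

fn registered_override(_method_id: usize) -> Option<CodegenFn> {
    None // placeholder for the table of specialized C-func codegen
}

fn lookup_cfunc_codegen(method_id: usize, is_send_call: bool) -> Option<CodegenFn> {
    if is_send_call {
        // The overrides assume the stack is already in the right shape,
        // which isn't true yet for send; fall back to the generic call
        // path rather than running an override against a wrong stack.
        return None;
    }
    registered_override(method_id)
}

fn main() {
    assert!(lookup_cfunc_codegen(42, true).is_none()); // send: never use overrides
}
```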
* YJIT: Always encode Opnd::Value in 64 bits on x86_64 for GC offsets
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
* Introduce heap_object_p
* Leave original mov intact
* Remove unneeded branches
* Add a test for movabs
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
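A sketch of the encoding rule, with illustrative names; the key point
is that a GC-visible immediate must keep the full movabs form so the
collector can patch the address in place:
```
// A shortened encoding would have no room for a new pointer written
// after the GC moves the referenced object.
fn needs_movabs(imm: u64, is_gc_offset: bool) -> bool {
    is_gc_offset || imm > u32::MAX as u64
}

fn main() {
    assert!(needs_movabs(0x10, true));           // GC reference: always 64-bit
    assert!(!needs_movabs(0x10, false));         // small constant: compact mov is fine
    assert!(needs_movabs(0x1_0000_0000, false)); // doesn't fit in 32 bits anyway
}
```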
When RUBY_DEBUG is enabled, shape ids are 16 bits. I would like to do
16-bit comparisons, so I sometimes need to load halfwords. This commit
adds LDURH for that.
https://developer.arm.com/documentation/ddi0596/2021-12/Base-Instructions/LDURH--Load-Register-Halfword--unscaled--?lang=en
I verified the bytes using clang:
```
$ cat asmthing.s
.global _start
.align 2
_start:
ldurh w10, [x1]
ldurh w10, [x1, #123]
$ as asmthing.s -o asmthing.o && objdump --disassemble asmthing.o
asmthing.o: file format mach-o arm64
Disassembly of section __TEXT,__text:
0000000000000000 <ltmp0>:
0: 2a 00 40 78 ldurh w10, [x1]
4: 2a b0 47 78 ldurh w10, [x1, #123]
```
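For reference, a small encoder sketch derived from the manual page
linked above; it reproduces the two words in the objdump output:
```
// LDURH <Wt>, [<Xn|SP>{, #<simm>}]
// Encoding per the ARM ARM: 0111 1000 010 imm9 00 Rn Rt
fn ldurh(rt: u32, rn: u32, simm9: i32) -> u32 {
    assert!(rt < 32 && rn < 32);
    assert!((-256..256).contains(&simm9)); // unscaled 9-bit signed offset
    0x7840_0000 | (((simm9 as u32) & 0x1ff) << 12) | (rn << 5) | rt
}

fn main() {
    assert_eq!(ldurh(10, 1, 0), 0x7840_002a);   // bytes 2a 00 40 78
    assert_eq!(ldurh(10, 1, 123), 0x7847_b02a); // bytes 2a b0 47 78
}
```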
Previously, there was no instruction boundary patch point after the
call to a non-leaf C function we generate for
OPTIMIZED_METHOD_TYPE_CALL. This meant that if code GC was triggered
while inside the C function, we would keep running invalidated code
when we returned from the C function. This had the effect of running
stale branch stubs, jumping to bad code, etc.
Use jit_prepare_routine_call() to make sure we exit from the invalidated
region as soon as possible after the C call in case of invalidation.
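A simplified sketch of the pattern with dummy types (YJIT's real
signatures differ):
```
struct Jit;
struct Assembler;

impl Assembler {
    fn ccall(&mut self, _fun: *const u8) {
        // emit the call to the non-leaf C function
    }
}

fn jit_prepare_routine_call(_jit: &mut Jit, _asm: &mut Assembler) {
    // Spill PC/SP and record an instruction boundary patch point, so
    // that if code GC invalidates this region while we're inside the
    // callee, execution leaves the stale code right after the call.
}

fn gen_optimized_call(jit: &mut Jit, asm: &mut Assembler, cfun: *const u8) {
    jit_prepare_routine_call(jit, asm);
    asm.ccall(cfun); // invalidation can now take effect on return
}

fn main() {
    gen_optimized_call(&mut Jit, &mut Assembler, std::ptr::null());
}
```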
* Enable --yjit-stats for release builds
In order for people in the real world to report information about how their applications run with YJIT, we want to expose stats without requiring Ruby to be rebuilt. We can do this without overhead, with the exception of the ratio-in-YJIT stat, since that one relies on the interpreter also counting instructions.
This change exposes those stats, while not showing the ratio in YJIT if we are not in a stats build.
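A hedged sketch of the gating idea, with illustrative names:
```
// Most counters cost nothing to collect, but the ratio-in-YJIT figure
// needs the interpreter to count instructions too, so it is only
// reported in stats builds.
fn exit_stats(stats_build: bool, counters: &[(&str, u64)]) -> Vec<(String, u64)> {
    let mut out = Vec::new();
    for (name, value) in counters {
        // skip the interpreter-dependent stat outside of stats builds
        if !stats_build && *name == "ratio_in_yjit" {
            continue;
        }
        out.push((name.to_string(), *value));
    }
    out
}

fn main() {
    let counters = [("binding_allocations", 3_u64), ("ratio_in_yjit", 92)];
    assert_eq!(exit_stats(false, &counters).len(), 1); // release build hides the ratio
    assert_eq!(exit_stats(true, &counters).len(), 2);  // stats build shows everything
}
```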
* Update yjit.rb
Co-authored-by: Takashi Kokubun <takashikkbn@gmail.com>
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
Since object shapes store the capacity of an object, we no longer
need the numiv field on RObjects. This gives us one extra slot which
we can use to give embedded objects one more instance variable (for a
total of 3 ivs). This commit removes the concept of numiv from RObject.
This commit adds a `capacity` field to shapes, and adds shape
transitions whenever an object's capacity changes. Objects which are
allocated out of a bigger size pool will also make a transition from the
root shape to the shape with the correct capacity for their size pool
when they are allocated.
This commit will allow us to remove numiv from objects completely, and
will also mean we can guarantee that if two objects share shapes, their
IVs are in the same positions (an embedded and extended object cannot
share shapes). This will enable us to implement ivar sets in YJIT using
object shapes.
Co-Authored-By: Aaron Patterson <tenderlove@ruby-lang.org>
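A hypothetical sketch of the idea (not CRuby's actual shape structs):
capacity lives on the shape and growing storage is itself a transition,
so shape equality implies identical ivar layout:
```
struct Shape {
    parent: Option<usize>, // shape we transitioned from
    capacity: usize,       // iv slots allocated for the object
    next_iv_index: usize,  // slot the next ivar set will use
}

fn transition_capacity(shapes: &mut Vec<Shape>, from: usize, new_capacity: usize) -> usize {
    let child = Shape {
        parent: Some(from),
        capacity: new_capacity,
        next_iv_index: shapes[from].next_iv_index,
    };
    shapes.push(child);
    shapes.len() - 1 // id of the new shape
}

fn main() {
    // Root shape for a size pool with room for 3 embedded ivs.
    let mut shapes = vec![Shape { parent: None, capacity: 3, next_iv_index: 0 }];
    let extended = transition_capacity(&mut shapes, 0, 6); // object outgrew its slots
    assert_eq!(shapes[extended].capacity, 6);
}
```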
We switch to a new page when we detect dropped_bytes flipping from false
to true. Previously, when we patched code for invalidation during code
GC, we started with the flag already set to true, so we failed to apply
patches that straddle pages. We would write out jumps halfway and then
stop, which left the code corrupted.
Reset the flag before patching so we patch across pages properly.
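A hedged sketch of the fix with a dummy CodeBlock (YJIT's real type
differs):
```
struct CodeBlock {
    dropped_bytes: bool,
}

impl CodeBlock {
    fn patch_invalidated_code(&mut self) {
        // Code GC may have left this true; if it stays true, the
        // false-to-true flip that triggers a page switch never fires,
        // and a patch that straddles a page boundary is only
        // half-written, corrupting the code.
        self.dropped_bytes = false;
        // ... write the invalidation jumps here ...
    }
}

fn main() {
    let mut cb = CodeBlock { dropped_bytes: true };
    cb.patch_invalidated_code();
    assert!(!cb.dropped_bytes);
}
```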
This dispatches to a C func for doing the dynamic lookup. I experimented with a chain on the proc but wasn't able to detect which call sites would be monomorphic vs. polymorphic. There is definitely room for optimization here, but it does reduce exits.
Co-Authored-By: Alan Wu <alansi.xingwu@shopify.com>
Co-Authored-By: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
While experimenting I found that it's easy to change Context and forget
to also change the copying operation in limit_block_versions(). Add an
assert to make sure we substitute a compatible generic context when
limiting the number of versions.
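A hedged sketch of the assertion's intent, with made-up, simplified
types; here diff() returns usize::MAX for "incompatible":
```
#[derive(PartialEq)]
struct Context {
    stack_size: u8,
    // ... more fields; forgetting one in the generic copy is the bug ...
}

impl Context {
    fn diff(&self, dst: &Context) -> usize {
        if self == dst { 0 } else { usize::MAX }
    }
}

fn limit_block_versions(ctx: &Context) -> Context {
    let generic_ctx = Context { stack_size: ctx.stack_size };
    // The generic context must stay usable anywhere the specific one
    // was, so substituting it must always be possible.
    debug_assert_ne!(
        ctx.diff(&generic_ctx),
        usize::MAX,
        "substituting a generic context should always be possible"
    );
    generic_ctx
}

fn main() {
    let ctx = Context { stack_size: 2 };
    assert_eq!(limit_block_versions(&ctx).stack_size, 2);
}
```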
This code used to roll its own heap object check before we made a better
version in guard_known_class(). The improved version uses one fewer
comparison, so let's use that.
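A hedged sketch of the one-comparison trick, assuming Ruby 3.x VALUE
encoding (Qfalse == 0x00, Qnil == 0x08, immediates tagged in the low
three bits, heap pointers aligned and greater than Qnil):
```
const RUBY_IMMEDIATE_MASK: u64 = 0x07;
const QNIL: u64 = 0x08;

fn heap_object_p(value: u64) -> bool {
    // After the immediate mask test, one unsigned compare against Qnil
    // rejects both Qfalse (0x00) and Qnil (0x08) at once.
    (value & RUBY_IMMEDIATE_MASK == 0) && value > QNIL
}

fn main() {
    assert!(!heap_object_p(0x00));  // Qfalse
    assert!(!heap_object_p(0x08));  // Qnil
    assert!(!heap_object_p(0x05));  // a Fixnum (low bit set)
    assert!(heap_object_p(0x7f80)); // a plausible heap pointer
}
```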
`iv_count` is a misleading name because when IVs are unset, the new
shape doesn't decrement this value. `next_iv_count` is an accurate and
more descriptive name.
Previously, we found the current page by rounding the current pointer
down to the closest multiple of the page size. This is incorrect because
pages are relative to the start of the address range we reserve. For
example, if the starting address is 12 KiB modulo the 16 KiB page size,
then once we have more than 4 KiB of code, calculating from the raw
address would incorrectly give us page 1 when we're actually still on
page 0.
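A sketch of the corrected calculation:
```
// Derive the page index from the offset into the reserved region rather
// than from the raw pointer, since the region's start is not
// necessarily aligned to the code page size.
fn page_index(region_start: usize, current_ptr: usize, page_size: usize) -> usize {
    (current_ptr - region_start) / page_size
}

fn main() {
    let start = 12 * 1024;      // region starts 12 KiB into a 16 KiB page
    let ptr = start + 5 * 1024; // 5 KiB of code written so far
    assert_eq!(page_index(start, ptr, 16 * 1024), 0); // correct: still page 0
    assert_eq!(ptr / (16 * 1024), 1); // rounding the raw address: wrong page
}
```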
Previously, I could reproduce crashes with:
make btest RUN_OPTS=--yjit-code-page-size=32
on ARM64 macOS, where the system page size is 16 KiB.