YJIT: Use raw pointers and shared references over `Rc<RefCell<_>>`

`Rc` and `RefCell` both incur runtime space costs.
In addition, `RefCell` has given us some headaches with the
non-obvious borrow panics it likes to throw out. The latest
one started with 7fd53eeb46
and is yet to be resolved.

Since we already rely on the GC to properly reclaim memory for `Block`
and `Branch`, we might as well stop paying the overhead of `Rc` and
`RefCell`. The `RefCell` panics go away with this change, too.

On 25 iterations of `railsbench` with a stats build I got
`yjit_alloc_size: 8,386,129 => 7,348,637`, with the new memory size 87.6%
of the status quo. This makes the metadata and machine code size roughly
line up one-to-one.

The general idea here is to use `&` shared references together with
[interior mutability][1] via `Cell`, which takes no extra space. The
`noalias` requirement that `&mut` imposes is way too hard to
meet and verify. Imagine replacing places where we would've gotten
`BorrowError` from `RefCell` with Rust/LLVM miscompiling us due to aliasing
violations. With shared references, we don't have to think about subtle
cases like the GC _sometimes_ calling the mark callback while codegen
has an aliasing reference in a stack frame below. We mostly only need to
worry about liveness, with which the GC already helps.
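
To make the pattern concrete, here is a minimal sketch (a hypothetical `Node`
type, not one of the actual YJIT structs) of mutating a field through a plain
`&` reference with `Cell`; unlike `RefCell`, there is no runtime borrow flag
to trip over:

```rust
use std::cell::Cell;

/// Hypothetical stand-in for a fully constructed Block/Branch field that
/// still changes after compilation (e.g. a patched jump target or counter).
struct Node {
    // Cell<u64> is the same size as a bare u64, mutable through &Node,
    // and can never panic on access.
    hits: Cell<u64>,
}

fn bump(node: &Node) {
    // Cell::get/set copy the value; no &mut and no runtime borrow check.
    node.hits.set(node.hits.get() + 1);
}

fn main() {
    let node = Node { hits: Cell::new(0) };
    // Aliasing shared references are fine; only liveness matters.
    let (a, b) = (&node, &node);
    bump(a);
    bump(b);
    assert_eq!(node.hits.get(), 2);
}
```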

There is now a clean split between blocks and branches that are not yet
fully constructed and ones that are "in-service", so to speak. Working
with `PendingBranch` and `JITState` doesn't really involve `unsafe` code.
This change allows `Branch` and `Block` to drop many of their optional
fields, since those fields were only optional during compilation. Fields that
change post-compilation are wrapped in `Cell` to facilitate mutation
through shared references.
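
Roughly, the split looks like the following sketch (hypothetical pending and
finished types standing in for `PendingBranch`/`Branch`; the real structs
carry much more state):

```rust
use std::cell::Cell;

// While compiling: plain owned fields, no `unsafe`; things are optional
// only because their final values aren't known yet.
struct PendingThing {
    target: Option<usize>,
}

// Once fully constructed: nothing optional. The few fields that still
// change post-compilation are wrapped in Cell so they can be updated
// through shared references.
struct FinishedThing {
    target: Cell<usize>,
}

impl PendingThing {
    fn into_finished(self) -> FinishedThing {
        FinishedThing {
            target: Cell::new(self.target.expect("filled in during codegen")),
        }
    }
}

fn main() {
    let pending = PendingThing { target: Some(0x1000) };
    let finished = pending.into_finished();
    // Later, e.g. when a branch is patched, mutate through a shared reference.
    let shared: &FinishedThing = &finished;
    shared.target.set(0x2000);
    assert_eq!(finished.target.get(), 0x2000);
}
```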

I do some `unsafe` dances here. I've included just a couple tests to run
with Miri (`cargo +nightly miri test miri`). We can add more Miri tests
if desired.
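
For flavor, a Miri-targeted test might look something like the sketch below
(a made-up `Block` and test body, not one of the tests added here); the
`miri` substring in the test name is what the filter in the command above
matches on:

```rust
use std::ptr::NonNull;

struct Block {
    start_addr: usize,
}

#[test]
fn miri_block_round_trip() {
    // Allocate a Block and keep only a raw, non-null pointer to it,
    // mirroring how block lifetimes are managed outside Rust ownership.
    let blockref: NonNull<Block> =
        NonNull::new(Box::into_raw(Box::new(Block { start_addr: 0xdead_beef }))).unwrap();

    // SAFETY: the allocation above stays live until the Box is rebuilt below.
    let block: &Block = unsafe { blockref.as_ref() };
    assert_eq!(block.start_addr, 0xdead_beef);

    // SAFETY: the pointer came from Box::into_raw and is freed exactly once.
    unsafe { drop(Box::from_raw(blockref.as_ptr())) };
}
```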

[1]: https://doc.rust-lang.org/std/cell/struct.UnsafeCell.html
Alan Wu 2023-03-16 15:39:27 -04:00, committed by Takashi Kokubun
Parent c62cf60d18
Commit 10e4fa3a0f
5 changed files with 968 additions and 640 deletions


@ -12,6 +12,7 @@ use crate::utils::*;
use CodegenStatus::*;
use YARVOpnd::*;
use std::cell::Cell;
use std::cmp;
use std::collections::HashMap;
use std::ffi::CStr;
@ -38,63 +39,89 @@ type InsnGenFn = fn(
ocb: &mut OutlinedCb,
) -> CodegenStatus;
/// Code generation state
/// This struct only lives while code is being generated
/// Ephemeral code generation state.
/// Represents a [core::Block] while we build it.
pub struct JITState {
// Block version being compiled
block: BlockRef,
// Instruction sequence this is associated with
/// Instruction sequence for the compiling block
iseq: IseqPtr,
// Index of the current instruction being compiled
insn_idx: u16,
/// The iseq index of the first instruction in the block
starting_insn_idx: IseqIdx,
// Opcode for the instruction being compiled
/// The [Context] entering into the first instruction of the block
starting_ctx: Context,
/// The placement for the machine code of the [Block]
output_ptr: CodePtr,
/// Index of the current instruction being compiled
insn_idx: IseqIdx,
/// Opcode for the instruction being compiled
opcode: usize,
// PC of the instruction being compiled
/// PC of the instruction being compiled
pc: *mut VALUE,
// Side exit to the instruction being compiled. See :side-exit:.
/// Side exit to the instruction being compiled. See :side-exit:.
side_exit_for_pc: Option<CodePtr>,
// Execution context when compilation started
// This allows us to peek at run-time values
/// Execution context when compilation started
/// This allows us to peek at run-time values
ec: EcPtr,
// Whether we need to record the code address at
// the end of this bytecode instruction for global invalidation
record_boundary_patch_point: bool,
/// The outgoing branches the block will have
pub pending_outgoing: Vec<PendingBranchRef>,
// The block's outgoing branches
outgoing: Vec<BranchRef>,
// --- Fields for block invalidation and invariants tracking below:
// Public mostly so into_block defined in the sibling module core
// can partially move out of Self.
// The block's CME dependencies
cme_dependencies: Vec<CmePtr>,
/// Whether we need to record the code address at
/// the end of this bytecode instruction for global invalidation
pub record_boundary_patch_point: bool,
/// Code for immediately exiting upon entry to the block.
/// Required for invalidation.
pub block_entry_exit: Option<CodePtr>,
/// A list of callable method entries that must be valid for the block to be valid.
pub method_lookup_assumptions: Vec<CmePtr>,
/// A list of basic operators that must not be redefined for the block to be valid.
pub bop_assumptions: Vec<(RedefinitionFlag, ruby_basic_operators)>,
/// A list of constant expression path segments that must have
/// not been written to for the block to be valid.
pub stable_constant_names_assumption: Option<*const ID>,
/// When true, the block is valid only when there is a total of one ractor running
pub block_assumes_single_ractor: bool,
}
impl JITState {
pub fn new(blockref: &BlockRef, ec: EcPtr) -> Self {
pub fn new(blockid: BlockId, starting_ctx: Context, output_ptr: CodePtr, ec: EcPtr) -> Self {
JITState {
block: blockref.clone(),
iseq: ptr::null(), // TODO: initialize this from the blockid
iseq: blockid.iseq,
starting_insn_idx: blockid.idx,
starting_ctx,
output_ptr,
insn_idx: 0,
opcode: 0,
pc: ptr::null_mut::<VALUE>(),
side_exit_for_pc: None,
pending_outgoing: vec![],
ec,
record_boundary_patch_point: false,
outgoing: Vec::new(),
cme_dependencies: Vec::new(),
block_entry_exit: None,
method_lookup_assumptions: vec![],
bop_assumptions: vec![],
stable_constant_names_assumption: None,
block_assumes_single_ractor: false,
}
}
pub fn get_block(&self) -> BlockRef {
self.block.clone()
}
pub fn get_insn_idx(&self) -> u16 {
pub fn get_insn_idx(&self) -> IseqIdx {
self.insn_idx
}
@ -110,6 +137,18 @@ impl JITState {
self.pc
}
pub fn get_starting_insn_idx(&self) -> IseqIdx {
self.starting_insn_idx
}
pub fn get_block_entry_exit(&self) -> Option<CodePtr> {
self.block_entry_exit
}
pub fn get_starting_ctx(&self) -> Context {
self.starting_ctx.clone()
}
pub fn get_arg(&self, arg_idx: isize) -> VALUE {
// insn_len requires non-test config
#[cfg(not(test))]
@ -175,18 +214,22 @@ impl JITState {
}
}
pub fn assume_method_lookup_stable(&mut self, ocb: &mut OutlinedCb, cme: CmePtr) {
jit_ensure_block_entry_exit(self, ocb);
self.method_lookup_assumptions.push(cme);
}
fn get_cfp(&self) -> *mut rb_control_frame_struct {
unsafe { get_ec_cfp(self.ec) }
}
// Push an outgoing branch ref
pub fn push_outgoing(&mut self, branch: BranchRef) {
self.outgoing.push(branch);
pub fn assume_stable_constant_names(&mut self, ocb: &mut OutlinedCb, id: *const ID) {
jit_ensure_block_entry_exit(self, ocb);
self.stable_constant_names_assumption = Some(id);
}
// Push a CME dependency
pub fn push_cme_dependency(&mut self, cme: CmePtr) {
self.cme_dependencies.push(cme);
pub fn queue_outgoing_branch(&mut self, branch: PendingBranchRef) {
self.pending_outgoing.push(branch)
}
}
@ -498,22 +541,20 @@ fn get_side_exit(jit: &mut JITState, ocb: &mut OutlinedCb, ctx: &Context) -> Tar
// Ensure that there is an exit for the start of the block being compiled.
// Block invalidation uses this exit.
pub fn jit_ensure_block_entry_exit(jit: &mut JITState, ocb: &mut OutlinedCb) {
let blockref = jit.block.clone();
let mut block = blockref.borrow_mut();
let block_ctx = block.get_ctx();
let blockid = block.get_blockid();
if block.entry_exit.is_some() {
if jit.block_entry_exit.is_some() {
return;
}
let block_starting_context = &jit.get_starting_ctx();
// If we're compiling the first instruction in the block.
if jit.insn_idx == blockid.idx {
if jit.insn_idx == jit.starting_insn_idx {
// Generate the exit with the cache in jitstate.
block.entry_exit = Some(get_side_exit(jit, ocb, &block_ctx).unwrap_code_ptr());
let entry_exit = get_side_exit(jit, ocb, block_starting_context).unwrap_code_ptr();
jit.block_entry_exit = Some(entry_exit);
} else {
let block_entry_pc = unsafe { rb_iseq_pc_at_idx(blockid.iseq, blockid.idx.into()) };
block.entry_exit = Some(gen_outlined_exit(block_entry_pc, &block_ctx, ocb));
let block_entry_pc = unsafe { rb_iseq_pc_at_idx(jit.iseq, jit.starting_insn_idx.into()) };
jit.block_entry_exit = Some(gen_outlined_exit(block_entry_pc, block_starting_context, ocb));
}
}
@ -725,18 +766,18 @@ pub fn gen_single_block(
verify_blockid(blockid);
assert!(!(blockid.idx == 0 && ctx.get_stack_size() > 0));
// Save machine code placement of the block. `cb` might page switch when we
// generate code in `ocb`.
let block_start_addr = cb.get_write_ptr();
// Instruction sequence to compile
let iseq = blockid.iseq;
let iseq_size = unsafe { get_iseq_encoded_size(iseq) };
let iseq_size: u16 = iseq_size.try_into().unwrap();
let mut insn_idx: u16 = blockid.idx;
let starting_insn_idx = insn_idx;
// Allocate the new block
let blockref = Block::new(blockid, &ctx, cb.get_write_ptr());
// Initialize a JIT state object
let mut jit = JITState::new(&blockref, ec);
let mut jit = JITState::new(blockid, ctx.clone(), cb.get_write_ptr(), ec);
jit.iseq = blockid.iseq;
// Create a backend assembler instance
@ -762,7 +803,7 @@ pub fn gen_single_block(
// We need opt_getconstant_path to be in a block all on its own. Cut the block short
// if we run into it. This is necessary because we want to invalidate based on the
// instruction's index.
if opcode == YARVINSN_opt_getconstant_path.as_usize() && insn_idx > starting_insn_idx {
if opcode == YARVINSN_opt_getconstant_path.as_usize() && insn_idx > jit.starting_insn_idx {
jump_to_next_insn(&mut jit, &ctx, &mut asm, ocb);
break;
}
@ -814,17 +855,15 @@ pub fn gen_single_block(
println!("can't compile {}", insn_name(opcode));
}
let mut block = jit.block.borrow_mut();
// TODO: if the codegen function makes changes to ctx and then return YJIT_CANT_COMPILE,
// the exit this generates would be wrong. We could save a copy of the entry context
// and assert that ctx is the same here.
gen_exit(jit.pc, &ctx, &mut asm);
// If this is the first instruction in the block, then we can use
// the exit for block->entry_exit.
if insn_idx == block.get_blockid().idx {
block.entry_exit = Some(block.get_start_addr());
// If this is the first instruction in the block, then
// the entry address is the address for block_entry_exit
if insn_idx == jit.starting_insn_idx {
jit.block_entry_exit = Some(jit.output_ptr);
}
break;
@ -842,45 +881,28 @@ pub fn gen_single_block(
break;
}
}
// Finish filling out the block
{
let mut block = jit.block.borrow_mut();
if block.entry_exit.is_some() {
asm.pad_inval_patch();
}
// Compile code into the code block
let gc_offsets = asm.compile(cb);
// Add the GC offsets to the block
block.set_gc_obj_offsets(gc_offsets);
// Set CME dependencies to the block
block.set_cme_dependencies(jit.cme_dependencies);
// Set outgoing branches to the block
block.set_outgoing(jit.outgoing);
// Mark the end position of the block
block.set_end_addr(cb.get_write_ptr());
// Store the index of the last instruction in the block
block.set_end_idx(insn_idx);
}
let end_insn_idx = insn_idx;
// We currently can't handle cases where the request is for a block that
// doesn't go to the next instruction.
// doesn't go to the next instruction in the same iseq.
assert!(!jit.record_boundary_patch_point);
// Pad the block if it has the potential to be invalidated
if jit.block_entry_exit.is_some() {
asm.pad_inval_patch();
}
// Compile code into the code block
let gc_offsets = asm.compile(cb);
let end_addr = cb.get_write_ptr();
// If code for the block doesn't fit, fail
if cb.has_dropped_bytes() || ocb.unwrap().has_dropped_bytes() {
free_block(&blockref);
return Err(());
}
// Block compiled successfully
Ok(blockref)
Ok(jit.into_block(end_insn_idx, block_start_addr, end_addr, gc_offsets))
}
fn gen_nop(
@ -3595,7 +3617,7 @@ fn gen_branchif(
ctx,
Some(next_block),
Some(ctx),
BranchGenFn::BranchIf(BranchShape::Default),
BranchGenFn::BranchIf(Cell::new(BranchShape::Default)),
);
}
@ -3652,7 +3674,7 @@ fn gen_branchunless(
ctx,
Some(next_block),
Some(ctx),
BranchGenFn::BranchUnless(BranchShape::Default),
BranchGenFn::BranchUnless(Cell::new(BranchShape::Default)),
);
}
@ -3706,7 +3728,7 @@ fn gen_branchnil(
ctx,
Some(next_block),
Some(ctx),
BranchGenFn::BranchNil(BranchShape::Default),
BranchGenFn::BranchNil(Cell::new(BranchShape::Default)),
);
}
@ -4533,7 +4555,7 @@ fn jit_obj_respond_to(
// Invalidate this block if method lookup changes for the method being queried. This works
// both for the case where a method does or does not exist, as for the latter we asked for a
// "negative CME" earlier.
assume_method_lookup_stable(jit, ocb, target_cme);
jit.assume_method_lookup_stable(ocb, target_cme);
// Generate a side exit
let side_exit = get_side_exit(jit, ocb, ctx);
@ -6308,7 +6330,7 @@ fn gen_send_general(
// Register block for invalidation
//assert!(cme->called_id == mid);
assume_method_lookup_stable(jit, ocb, cme);
jit.assume_method_lookup_stable(ocb, cme);
// To handle the aliased method case (VM_METHOD_TYPE_ALIAS)
loop {
@ -6478,7 +6500,7 @@ fn gen_send_general(
flags |= VM_CALL_FCALL | VM_CALL_OPT_SEND;
assume_method_lookup_stable(jit, ocb, cme);
jit.assume_method_lookup_stable(ocb, cme);
let (known_class, type_mismatch_exit) = {
if compile_time_name.string_p() {
@ -6973,8 +6995,8 @@ fn gen_invokesuper(
// We need to assume that both our current method entry and the super
// method entry we invoke remain stable
assume_method_lookup_stable(jit, ocb, me);
assume_method_lookup_stable(jit, ocb, cme);
jit.assume_method_lookup_stable(ocb, me);
jit.assume_method_lookup_stable(ocb, cme);
// Method calls may corrupt types
ctx.clear_local_types();
@ -7428,14 +7450,13 @@ fn gen_opt_getconstant_path(
asm.store(stack_top, ic_entry_val);
} else {
// Optimize for single ractor mode.
// FIXME: This leaks when st_insert raises NoMemoryError
if !assume_single_ractor_mode(jit, ocb) {
return CantCompile;
}
// Invalidate output code on any constant writes associated with
// constants referenced within the current block.
assume_stable_constant_names(jit, ocb, idlist);
jit.assume_stable_constant_names(ocb, idlist);
jit_putobject(jit, ctx, asm, unsafe { (*ice).value });
}
@ -8112,15 +8133,15 @@ mod tests {
use super::*;
fn setup_codegen() -> (JITState, Context, Assembler, CodeBlock, OutlinedCb) {
let blockid = BlockId {
iseq: ptr::null(),
idx: 0,
};
let cb = CodeBlock::new_dummy(256 * 1024);
let block = Block::new(blockid, &Context::default(), cb.get_write_ptr());
return (
JITState::new(&block, ptr::null()), // No execution context in tests. No peeking!
JITState::new(
BlockId { iseq: std::ptr::null(), idx: 0 },
Context::default(),
cb.get_write_ptr(),
ptr::null(), // No execution context in tests. No peeking!
),
Context::default(),
Assembler::new(),
cb,

Diff for one file not shown because of its large size.


@ -37,18 +37,23 @@ pub extern "C" fn rb_yjit_disasm_iseq(_ec: EcPtr, _ruby_self: VALUE, iseqw: VALU
// This will truncate disassembly of methods with 10k+ bytecodes.
// That's a good thing - this prints to console.
let out_string = disasm_iseq_insn_range(iseq, 0, 9999);
let out_string = with_vm_lock(src_loc!(), || disasm_iseq_insn_range(iseq, 0, 9999));
return rust_str_to_ruby(&out_string);
}
}
/// Only call while holding the VM lock.
#[cfg(feature = "disasm")]
pub fn disasm_iseq_insn_range(iseq: IseqPtr, start_idx: u16, end_idx: u16) -> String {
let mut out = String::from("");
// Get a list of block versions generated for this iseq
let mut block_list = get_or_create_iseq_block_list(iseq);
let block_list = get_or_create_iseq_block_list(iseq);
let mut block_list: Vec<&Block> = block_list.into_iter().map(|blockref| {
// SAFETY: We have the VM lock here and all the blocks on iseqs are valid.
unsafe { blockref.as_ref() }
}).collect();
// Get a list of codeblocks relevant to this iseq
let global_cb = crate::codegen::CodegenGlobals::get_inline_cb();
@ -58,8 +63,8 @@ pub fn disasm_iseq_insn_range(iseq: IseqPtr, start_idx: u16, end_idx: u16) -> St
use std::cmp::Ordering;
// Get the start addresses for each block
let addr_a = a.borrow().get_start_addr().raw_ptr();
let addr_b = b.borrow().get_start_addr().raw_ptr();
let addr_a = a.get_start_addr().raw_ptr();
let addr_b = b.get_start_addr().raw_ptr();
if addr_a < addr_b {
Ordering::Less
@ -73,20 +78,19 @@ pub fn disasm_iseq_insn_range(iseq: IseqPtr, start_idx: u16, end_idx: u16) -> St
// Compute total code size in bytes for all blocks in the function
let mut total_code_size = 0;
for blockref in &block_list {
total_code_size += blockref.borrow().code_size();
total_code_size += blockref.code_size();
}
writeln!(out, "NUM BLOCK VERSIONS: {}", block_list.len()).unwrap();
writeln!(out, "TOTAL INLINE CODE SIZE: {} bytes", total_code_size).unwrap();
// For each block, sorted by increasing start address
for block_idx in 0..block_list.len() {
let block = block_list[block_idx].borrow();
for (block_idx, block) in block_list.iter().enumerate() {
let blockid = block.get_blockid();
if blockid.idx >= start_idx && blockid.idx < end_idx {
let end_idx = block.get_end_idx();
let start_addr = block.get_start_addr();
let end_addr = block.get_end_addr().unwrap();
let end_addr = block.get_end_addr();
let code_size = block.code_size();
// Write some info about the current block
@ -110,7 +114,7 @@ pub fn disasm_iseq_insn_range(iseq: IseqPtr, start_idx: u16, end_idx: u16) -> St
// If this is not the last block
if block_idx < block_list.len() - 1 {
// Compute the size of the gap between this block and the next
let next_block = block_list[block_idx + 1].borrow();
let next_block = block_list[block_idx + 1];
let next_start_addr = next_block.get_start_addr();
let gap_size = next_start_addr.into_usize() - end_addr.into_usize();
@ -318,7 +322,9 @@ fn insns_compiled(iseq: IseqPtr) -> Vec<(String, u16)> {
// For each block associated with this iseq
for blockref in &block_list {
let block = blockref.borrow();
// SAFETY: Called as part of a Ruby method, which ensures the graph is
// well connected for the given iseq.
let block = unsafe { blockref.as_ref() };
let start_idx = block.get_blockid().idx;
let end_idx = block.get_end_idx();
assert!(u32::from(end_idx) <= unsafe { get_iseq_encoded_size(iseq) });


@ -16,8 +16,8 @@ use std::mem;
// Invariants to track:
// assume_bop_not_redefined(jit, INTEGER_REDEFINED_OP_FLAG, BOP_PLUS)
// assume_method_lookup_stable(comptime_recv_klass, cme, jit);
// assume_single_ractor_mode(jit)
// assume_stable_global_constant_state(jit);
// assume_single_ractor_mode()
// track_stable_constant_names_assumption()
/// Used to track all of the various block references that contain assumptions
/// about the state of the virtual machine.
@ -78,9 +78,9 @@ impl Invariants {
}
}
/// A public function that can be called from within the code generation
/// functions to ensure that the block being generated is invalidated when the
/// basic operator is redefined.
/// Mark the pending block as assuming that certain basic operators (e.g. Integer#==)
/// have not been redefined.
#[must_use]
pub fn assume_bop_not_redefined(
jit: &mut JITState,
ocb: &mut OutlinedCb,
@ -89,18 +89,7 @@ pub fn assume_bop_not_redefined(
) -> bool {
if unsafe { BASIC_OP_UNREDEFINED_P(bop, klass) } {
jit_ensure_block_entry_exit(jit, ocb);
let invariants = Invariants::get_instance();
invariants
.basic_operator_blocks
.entry((klass, bop))
.or_default()
.insert(jit.get_block());
invariants
.block_basic_operators
.entry(jit.get_block())
.or_default()
.insert((klass, bop));
jit.bop_assumptions.push((klass, bop));
return true;
} else {
@ -108,28 +97,33 @@ pub fn assume_bop_not_redefined(
}
}
// Remember that a block assumes that
// `rb_callable_method_entry(receiver_klass, cme->called_id) == cme` and that
// `cme` is valid.
// When either of these assumptions becomes invalid, rb_yjit_method_lookup_change() or
// rb_yjit_cme_invalidate() invalidates the block.
//
// @raise NoMemoryError
pub fn assume_method_lookup_stable(
jit: &mut JITState,
ocb: &mut OutlinedCb,
/// Track that a block is only valid when a certain basic operator has not been redefined
/// since the block's inception.
pub fn track_bop_assumption(uninit_block: BlockRef, bop: (RedefinitionFlag, ruby_basic_operators)) {
let invariants = Invariants::get_instance();
invariants
.basic_operator_blocks
.entry(bop)
.or_default()
.insert(uninit_block);
invariants
.block_basic_operators
.entry(uninit_block)
.or_default()
.insert(bop);
}
/// Track that a block will assume that `cme` is valid (false == METHOD_ENTRY_INVALIDATED(cme)).
/// [rb_yjit_cme_invalidate] invalidates the block when `cme` is invalidated.
pub fn track_method_lookup_stability_assumption(
uninit_block: BlockRef,
callee_cme: *const rb_callable_method_entry_t,
) {
jit_ensure_block_entry_exit(jit, ocb);
let block = jit.get_block();
jit.push_cme_dependency(callee_cme);
Invariants::get_instance()
.cme_validity
.entry(callee_cme)
.or_default()
.insert(block);
.insert(uninit_block);
}
// Checks rb_method_basic_definition_p and registers the current block for invalidation if method
@ -141,10 +135,10 @@ pub fn assume_method_basic_definition(
ocb: &mut OutlinedCb,
klass: VALUE,
mid: ID
) -> bool {
) -> bool {
if unsafe { rb_method_basic_definition_p(klass, mid) } != 0 {
let cme = unsafe { rb_callable_method_entry(klass, mid) };
assume_method_lookup_stable(jit, ocb, cme);
jit.assume_method_lookup_stable(ocb, cme);
true
} else {
false
@ -158,22 +152,24 @@ pub fn assume_single_ractor_mode(jit: &mut JITState, ocb: &mut OutlinedCb) -> bo
false
} else {
jit_ensure_block_entry_exit(jit, ocb);
Invariants::get_instance()
.single_ractor
.insert(jit.get_block());
jit.block_assumes_single_ractor = true;
true
}
}
/// Walk through the ISEQ to go from the current opt_getinlinecache to the
/// subsequent opt_setinlinecache and find all of the name components that are
/// associated with this constant (which correspond to the getconstant
/// arguments).
pub fn assume_stable_constant_names(jit: &mut JITState, ocb: &mut OutlinedCb, idlist: *const ID) {
/// Tracks that a block is assuming that the name component of a constant
/// has not changed since the last call to this function.
/// Track that the block will assume single ractor mode.
pub fn track_single_ractor_assumption(uninit_block: BlockRef) {
Invariants::get_instance()
.single_ractor
.insert(uninit_block);
}
/// Track that a block will assume that the name components of a constant path expression
/// have not changed since the block's full initialization.
pub fn track_stable_constant_names_assumption(uninit_block: BlockRef, idlist: *const ID) {
fn assume_stable_constant_name(
jit: &mut JITState,
uninit_block: BlockRef,
id: ID,
) {
if id == idNULL as u64 {
@ -186,10 +182,10 @@ pub fn assume_stable_constant_names(jit: &mut JITState, ocb: &mut OutlinedCb, id
.constant_state_blocks
.entry(id)
.or_default()
.insert(jit.get_block());
.insert(uninit_block);
invariants
.block_constant_states
.entry(jit.get_block())
.entry(uninit_block)
.or_default()
.insert(id);
}
@ -198,12 +194,9 @@ pub fn assume_stable_constant_names(jit: &mut JITState, ocb: &mut OutlinedCb, id
for i in 0.. {
match unsafe { *idlist.offset(i) } {
0 => break, // End of NULL terminated list
id => assume_stable_constant_name(jit, id),
id => assume_stable_constant_name(uninit_block, id),
}
}
jit_ensure_block_entry_exit(jit, ocb);
}
/// Called when a basic operator is redefined. Note that all the blocks assuming
@ -344,19 +337,22 @@ pub extern "C" fn rb_yjit_root_mark() {
/// Remove all invariant assumptions made by the block by removing the block
/// as a key in all of the relevant tables.
pub fn block_assumptions_free(blockref: &BlockRef) {
/// For safety, the block has to be initialized and the vm lock must be held.
/// However, outgoing/incoming references to the block do _not_ need to be valid.
pub fn block_assumptions_free(blockref: BlockRef) {
let invariants = Invariants::get_instance();
{
let block = blockref.borrow();
// SAFETY: caller ensures that this reference is valid
let block = unsafe { blockref.as_ref() };
// For each method lookup dependency
for dep in block.iter_cme_deps() {
// Remove tracking for cme validity
if let Some(blockset) = invariants.cme_validity.get_mut(dep) {
blockset.remove(blockref);
if let Some(blockset) = invariants.cme_validity.get_mut(&dep) {
blockset.remove(&blockref);
if blockset.is_empty() {
invariants.cme_validity.remove(dep);
invariants.cme_validity.remove(&dep);
}
}
}
@ -506,17 +502,18 @@ pub extern "C" fn rb_yjit_tracing_invalidate_all() {
if on_stack_iseqs.contains(&iseq) {
// This ISEQ is running, so we can't free blocks immediately
for block in blocks {
delayed_deallocation(&block);
delayed_deallocation(block);
}
payload.dead_blocks.shrink_to_fit();
} else {
// Safe to free dead blocks since the ISEQ isn't running
// Since we're freeing _all_ blocks, we don't need to keep the graph well formed
for block in blocks {
free_block(&block);
unsafe { free_block(block, false) };
}
mem::take(&mut payload.dead_blocks)
.iter()
.for_each(free_block);
.into_iter()
.for_each(|block| unsafe { free_block(block, false) });
}
}


@ -534,11 +534,11 @@ fn get_live_context_count() -> usize {
for_each_iseq_payload(|iseq_payload| {
for blocks in iseq_payload.version_map.iter() {
for block in blocks.iter() {
count += block.borrow().get_ctx_count();
count += unsafe { block.as_ref() }.get_ctx_count();
}
}
for block in iseq_payload.dead_blocks.iter() {
count += block.borrow().get_ctx_count();
count += unsafe { block.as_ref() }.get_ctx_count();
}
});
count