* Mutate in place for register allocation

Currently we allocate a new instruction every time we do
register allocation: we split the instruction into its component
parts, map the operands and the output, and then push all of those
parts onto the new assembler.

Since we don't need the old instruction, we can mutate the existing
one in place. While it's not that big of a win in and of itself, it
matches much more closely what we're going to have to do when we
switch the instruction from being a struct to being an enum: it's
much easier for an instruction that knows its own shape to modify
itself than it is to push a new instruction that very closely
matches it.

* Mutate in place for arm64 split

When we're splitting instructions for the arm64 backend, we map any
operand of a given instruction that is an Opnd::Value. We can do
this in place on the existing operands instead of allocating a new
vector each time. This also enables us to pattern match against the
entire instruction instead of just the opcode, which is much closer
to matching against an enum.
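A minimal sketch of that in-place mapping, again with simplified stand-in types (the real backend only turns special constants into immediates and otherwise loads heap values into registers; here every value becomes an immediate unless the instruction is a load):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Op { Load, Add }

#[derive(Clone, Copy, Debug, PartialEq)]
enum Opnd { Value(u64), UImm(u64), Reg(u8) }

struct Insn { op: Op, opnds: Vec<Opnd> }

// Rewrite Opnd::Value operands in place instead of collecting a new Vec.
// Borrowing `insn.opnds` mutably while reading `insn.op` is fine because
// the borrow checker tracks the two fields separately.
fn map_value_opnds(insn: &mut Insn) {
    for opnd in &mut insn.opnds {
        if let Opnd::Value(raw) = *opnd {
            if insn.op != Op::Load {
                *opnd = Opnd::UImm(raw);
            }
        }
    }
}

fn main() {
    let mut insn = Insn { op: Op::Add, opnds: vec![Opnd::Value(7), Opnd::Reg(0)] };
    map_value_opnds(&mut insn);
    assert_eq!(insn.opnds, vec![Opnd::UImm(7), Opnd::Reg(0)]);
}
```

Because the operands are updated through `&mut` rather than moved out of the instruction, the instruction stays whole and can later be matched as a unit.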

* Match against entire instruction in arm64_emit

Instead of matching against the opcode and then accessing all of
the various fields on the instruction when emitting machine code
for arm64, we should instead match against the entire instruction.
This makes it much closer to what's going to happen when we switch
it over to being an enum.
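A hedged sketch of the two styles, with toy types standing in for the real ones — the struct pattern binds the fields it needs directly, exactly the shape an enum match will take later:

```rust
enum Op { Add, Ret }

struct Insn { op: Op, opnds: Vec<u64> }

// Before: match the opcode, then reach back into the struct's fields.
fn emit_by_opcode(insn: &Insn) -> u64 {
    match insn.op {
        Op::Add => insn.opnds[0] + insn.opnds[1],
        Op::Ret => insn.opnds[0],
    }
}

// After: destructure the whole instruction in one pattern. The `..`
// rest pattern ignores the fields a given arm doesn't use.
fn emit_by_struct_pattern(insn: &Insn) -> u64 {
    match insn {
        Insn { op: Op::Add, opnds, .. } => opnds[0] + opnds[1],
        Insn { op: Op::Ret, opnds, .. } => opnds[0],
    }
}

fn main() {
    let insn = Insn { op: Op::Add, opnds: vec![2, 3] };
    assert_eq!(emit_by_opcode(&insn), emit_by_struct_pattern(&insn));
    assert_eq!(emit_by_struct_pattern(&insn), 5);
}
```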

* Match against entire instruction in x86_64 backend

When we're splitting or emitting code for x86_64, we should match
against the entire instruction instead of matching against just the
opcode. This gets us closer to matching against an enum instead of
a struct.

* Reuse instructions for arm64_split

When we're splitting, the default behavior was previously to split
up the instruction into its component parts and then reassemble
them into a new instruction. Instead, we can reuse the existing
instruction.
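The difference can be sketched like this (simplified, hypothetical signatures — the real `push_insn_parts` also takes the target, text, and position-marker fields):

```rust
#[derive(Debug, PartialEq)]
struct Insn { op: u8, opnds: Vec<u64> }

struct Assembler { insns: Vec<Insn> }

impl Assembler {
    // Old default arm: tear the instruction apart and rebuild it,
    // allocating a fresh operand Vec along the way.
    fn push_insn_parts(&mut self, op: u8, opnds: Vec<u64>) {
        self.insns.push(Insn { op, opnds });
    }

    // New default arm: hand the existing instruction over untouched.
    fn push_insn(&mut self, insn: Insn) {
        self.insns.push(insn);
    }
}

fn main() {
    let mut asm = Assembler { insns: Vec::new() };
    // Reuse path: no destructuring, no reallocation.
    asm.push_insn(Insn { op: 1, opnds: vec![2, 3] });
    // Rebuild path, kept for arms that must construct new operands.
    asm.push_insn_parts(4, vec![5]);
    assert_eq!(asm.insns.len(), 2);
}
```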
This commit is contained in:
Kevin Newton 2022-08-17 16:08:41 -04:00 committed by Takashi Kokubun
Parent c70d1471c1
Commit b00606eb64
3 changed files with 367 additions and 340 deletions

View file

@@ -186,29 +186,27 @@ impl Assembler
let asm = &mut asm_local;
let mut iterator = self.into_draining_iter();
while let Some((index, insn)) = iterator.next_mapped() {
while let Some((index, mut insn)) = iterator.next_mapped() {
// Here we're going to map the operands of the instruction to load
// any Opnd::Value operands into registers if they are heap objects
// such that only the Op::Load instruction needs to handle that
// case. If the values aren't heap objects then we'll treat them as
// if they were just unsigned integers.
let opnds: Vec<Opnd> = insn.opnds.into_iter().map(|opnd| {
for opnd in &mut insn.opnds {
match opnd {
Opnd::Value(value) => {
if value.special_const_p() {
Opnd::UImm(value.as_u64())
} else if insn.op == Op::Load {
opnd
} else {
asm.load(opnd)
*opnd = Opnd::UImm(value.as_u64());
} else if insn.op != Op::Load {
*opnd = asm.load(*opnd);
}
},
_ => opnd
}
}).collect();
_ => {}
};
}
match insn.op {
Op::Add => {
match insn {
Insn { op: Op::Add, opnds, .. } => {
match (opnds[0], opnds[1]) {
(Opnd::Reg(_) | Opnd::InsnOut { .. }, Opnd::Reg(_) | Opnd::InsnOut { .. }) => {
asm.add(opnds[0], opnds[1]);
@@ -225,24 +223,24 @@ impl Assembler
}
}
},
Op::And | Op::Or | Op::Xor => {
Insn { op: Op::And | Op::Or | Op::Xor, opnds, target, text, pos_marker, .. } => {
match (opnds[0], opnds[1]) {
(Opnd::Reg(_), Opnd::Reg(_)) => {
asm.push_insn_parts(insn.op, vec![opnds[0], opnds[1]], insn.target, insn.text, insn.pos_marker);
asm.push_insn_parts(insn.op, vec![opnds[0], opnds[1]], target, text, pos_marker);
},
(reg_opnd @ Opnd::Reg(_), other_opnd) |
(other_opnd, reg_opnd @ Opnd::Reg(_)) => {
let opnd1 = split_bitmask_immediate(asm, other_opnd);
asm.push_insn_parts(insn.op, vec![reg_opnd, opnd1], insn.target, insn.text, insn.pos_marker);
asm.push_insn_parts(insn.op, vec![reg_opnd, opnd1], target, text, pos_marker);
},
_ => {
let opnd0 = split_load_operand(asm, opnds[0]);
let opnd1 = split_bitmask_immediate(asm, opnds[1]);
asm.push_insn_parts(insn.op, vec![opnd0, opnd1], insn.target, insn.text, insn.pos_marker);
asm.push_insn_parts(insn.op, vec![opnd0, opnd1], target, text, pos_marker);
}
}
},
Op::CCall => {
Insn { op: Op::CCall, opnds, target, .. } => {
assert!(opnds.len() <= C_ARG_OPNDS.len());
// For each of the operands we're going to first load them
@@ -257,9 +255,9 @@ impl Assembler
// Now we push the CCall without any arguments so that it
// just performs the call.
asm.ccall(insn.target.unwrap().unwrap_fun_ptr(), vec![]);
asm.ccall(target.unwrap().unwrap_fun_ptr(), vec![]);
},
Op::Cmp => {
Insn { op: Op::Cmp, opnds, .. } => {
let opnd0 = match opnds[0] {
Opnd::Reg(_) | Opnd::InsnOut { .. } => opnds[0],
_ => split_load_operand(asm, opnds[0])
@@ -268,15 +266,14 @@ impl Assembler
let opnd1 = split_shifted_immediate(asm, opnds[1]);
asm.cmp(opnd0, opnd1);
},
Op::CRet => {
Insn { op: Op::CRet, opnds, .. } => {
if opnds[0] != Opnd::Reg(C_RET_REG) {
let value = split_load_operand(asm, opnds[0]);
asm.mov(C_RET_OPND, value);
}
asm.cret(C_RET_OPND);
},
Op::CSelZ | Op::CSelNZ | Op::CSelE | Op::CSelNE |
Op::CSelL | Op::CSelLE | Op::CSelG | Op::CSelGE => {
Insn { op: Op::CSelZ | Op::CSelNZ | Op::CSelE | Op::CSelNE | Op::CSelL | Op::CSelLE | Op::CSelG | Op::CSelGE, opnds, target, text, pos_marker, .. } => {
let new_opnds = opnds.into_iter().map(|opnd| {
match opnd {
Opnd::Reg(_) | Opnd::InsnOut { .. } => opnd,
@@ -284,9 +281,9 @@ impl Assembler
}
}).collect();
asm.push_insn_parts(insn.op, new_opnds, insn.target, insn.text, insn.pos_marker);
asm.push_insn_parts(insn.op, new_opnds, target, text, pos_marker);
},
Op::IncrCounter => {
Insn { op: Op::IncrCounter, opnds, .. } => {
// We'll use LDADD later which only works with registers
// ... Load pointer into register
let counter_addr = split_lea_operand(asm, opnds[0]);
@@ -299,7 +296,7 @@ impl Assembler
asm.incr_counter(counter_addr, addend);
},
Op::JmpOpnd => {
Insn { op: Op::JmpOpnd, opnds, .. } => {
if let Opnd::Mem(_) = opnds[0] {
let opnd0 = split_load_operand(asm, opnds[0]);
asm.jmp_opnd(opnd0);
@@ -307,10 +304,10 @@ impl Assembler
asm.jmp_opnd(opnds[0]);
}
},
Op::Load => {
Insn { op: Op::Load, opnds, .. } => {
split_load_operand(asm, opnds[0]);
},
Op::LoadSExt => {
Insn { op: Op::LoadSExt, opnds, .. } => {
match opnds[0] {
// We only want to sign extend if the operand is a
// register, instruction output, or memory address that
@@ -326,7 +323,7 @@ impl Assembler
}
};
},
Op::Mov => {
Insn { op: Op::Mov, opnds, .. } => {
let value = match (opnds[0], opnds[1]) {
// If the first operand is a memory operand, we're going
// to transform this into a store instruction, so we'll
@@ -353,7 +350,7 @@ impl Assembler
_ => unreachable!()
};
},
Op::Not => {
Insn { op: Op::Not, opnds, .. } => {
// The value that is being negated must be in a register, so
// if we get anything else we need to load it first.
let opnd0 = match opnds[0] {
@@ -363,7 +360,7 @@ impl Assembler
asm.not(opnd0);
},
Op::Store => {
Insn { op: Op::Store, opnds, .. } => {
// The displacement for the STUR instruction can't be more
// than 9 bits long. If it's longer, we need to load the
// memory address into a register first.
@@ -378,7 +375,7 @@ impl Assembler
asm.store(opnd0, opnd1);
},
Op::Sub => {
Insn { op: Op::Sub, opnds, .. } => {
let opnd0 = match opnds[0] {
Opnd::Reg(_) | Opnd::InsnOut { .. } => opnds[0],
_ => split_load_operand(asm, opnds[0])
@@ -387,7 +384,7 @@ impl Assembler
let opnd1 = split_shifted_immediate(asm, opnds[1]);
asm.sub(opnd0, opnd1);
},
Op::Test => {
Insn { op: Op::Test, opnds, .. } => {
// The value being tested must be in a register, so if it's
// not already one we'll load it first.
let opnd0 = match opnds[0] {
@@ -403,7 +400,10 @@ impl Assembler
asm.test(opnd0, opnd1);
},
_ => {
asm.push_insn_parts(insn.op, opnds, insn.target, insn.text, insn.pos_marker);
if insn.out.is_some() {
insn.out = asm.next_opnd_out(&insn.opnds);
}
asm.push_insn(insn);
}
};
@@ -569,23 +569,23 @@ impl Assembler
// For each instruction
for insn in &self.insns {
match insn.op {
Op::Comment => {
match insn {
Insn { op: Op::Comment, text, .. } => {
if cfg!(feature = "asm_comments") {
cb.add_comment(&insn.text.as_ref().unwrap());
cb.add_comment(text.as_ref().unwrap());
}
},
Op::Label => {
cb.write_label(insn.target.unwrap().unwrap_label_idx());
Insn { op: Op::Label, target, .. } => {
cb.write_label(target.unwrap().unwrap_label_idx());
},
// Report back the current position in the generated code
Op::PosMarker => {
Insn { op: Op::PosMarker, pos_marker, .. } => {
let pos = cb.get_write_ptr();
let pos_marker_fn = insn.pos_marker.as_ref().unwrap();
let pos_marker_fn = pos_marker.as_ref().unwrap();
pos_marker_fn(pos);
}
Op::BakeString => {
let str = insn.text.as_ref().unwrap();
Insn { op: Op::BakeString, text, .. } => {
let str = text.as_ref().unwrap();
for byte in str.as_bytes() {
cb.write_byte(*byte);
}
@@ -600,65 +600,65 @@ impl Assembler
cb.write_byte(0);
}
},
Op::Add => {
adds(cb, insn.out.into(), insn.opnds[0].into(), insn.opnds[1].into());
Insn { op: Op::Add, opnds, out, .. } => {
adds(cb, (*out).into(), opnds[0].into(), opnds[1].into());
},
Op::FrameSetup => {
Insn { op: Op::FrameSetup, .. } => {
stp_pre(cb, X29, X30, A64Opnd::new_mem(128, C_SP_REG, -16));
// X29 (frame_pointer) = SP
mov(cb, X29, C_SP_REG);
},
Op::FrameTeardown => {
Insn { op: Op::FrameTeardown, .. } => {
// SP = X29 (frame pointer)
mov(cb, C_SP_REG, X29);
ldp_post(cb, X29, X30, A64Opnd::new_mem(128, C_SP_REG, 16));
},
Op::Sub => {
subs(cb, insn.out.into(), insn.opnds[0].into(), insn.opnds[1].into());
Insn { op: Op::Sub, opnds, out, .. } => {
subs(cb, (*out).into(), opnds[0].into(), opnds[1].into());
},
Op::And => {
and(cb, insn.out.into(), insn.opnds[0].into(), insn.opnds[1].into());
Insn { op: Op::And, opnds, out, .. } => {
and(cb, (*out).into(), opnds[0].into(), opnds[1].into());
},
Op::Or => {
orr(cb, insn.out.into(), insn.opnds[0].into(), insn.opnds[1].into());
Insn { op: Op::Or, opnds, out, .. } => {
orr(cb, (*out).into(), opnds[0].into(), opnds[1].into());
},
Op::Xor => {
eor(cb, insn.out.into(), insn.opnds[0].into(), insn.opnds[1].into());
Insn { op: Op::Xor, opnds, out, .. } => {
eor(cb, (*out).into(), opnds[0].into(), opnds[1].into());
},
Op::Not => {
mvn(cb, insn.out.into(), insn.opnds[0].into());
Insn { op: Op::Not, opnds, out, .. } => {
mvn(cb, (*out).into(), opnds[0].into());
},
Op::RShift => {
asr(cb, insn.out.into(), insn.opnds[0].into(), insn.opnds[1].into());
Insn { op: Op::RShift, opnds, out, .. } => {
asr(cb, (*out).into(), opnds[0].into(), opnds[1].into());
},
Op::URShift => {
lsr(cb, insn.out.into(), insn.opnds[0].into(), insn.opnds[1].into());
Insn { op: Op::URShift, opnds, out, .. } => {
lsr(cb, (*out).into(), opnds[0].into(), opnds[1].into());
},
Op::LShift => {
lsl(cb, insn.out.into(), insn.opnds[0].into(), insn.opnds[1].into());
Insn { op: Op::LShift, opnds, out, .. } => {
lsl(cb, (*out).into(), opnds[0].into(), opnds[1].into());
},
Op::Store => {
Insn { op: Op::Store, opnds, .. } => {
// This order may be surprising but it is correct. The way
// the Arm64 assembler works, the register that is going to
// be stored is first and the address is second. However in
// our IR we have the address first and the register second.
stur(cb, insn.opnds[1].into(), insn.opnds[0].into());
stur(cb, opnds[1].into(), opnds[0].into());
},
Op::Load => {
match insn.opnds[0] {
Insn { op: Op::Load, opnds, out, .. } => {
match opnds[0] {
Opnd::Reg(_) | Opnd::InsnOut { .. } => {
mov(cb, insn.out.into(), insn.opnds[0].into());
mov(cb, (*out).into(), opnds[0].into());
},
Opnd::UImm(uimm) => {
emit_load_value(cb, insn.out.into(), uimm);
emit_load_value(cb, (*out).into(), uimm);
},
Opnd::Imm(imm) => {
emit_load_value(cb, insn.out.into(), imm as u64);
emit_load_value(cb, (*out).into(), imm as u64);
},
Opnd::Mem(_) => {
ldur(cb, insn.out.into(), insn.opnds[0].into());
ldur(cb, (*out).into(), opnds[0].into());
},
Opnd::Value(value) => {
// We don't need to check if it's a special const
@@ -670,7 +670,7 @@ impl Assembler
// references to GC'd Value operands. If the value
// being loaded is a heap object, we'll report that
// back out to the gc_offsets list.
ldr_literal(cb, insn.out.into(), 2);
ldr_literal(cb, (*out).into(), 2);
b(cb, A64Opnd::new_imm(1 + (SIZEOF_VALUE as i64) / 4));
cb.write_bytes(&value.as_u64().to_le_bytes());
@@ -682,29 +682,29 @@ impl Assembler
}
};
},
Op::LoadSExt => {
match insn.opnds[0] {
Insn { op: Op::LoadSExt, opnds, out, .. } => {
match opnds[0] {
Opnd::Reg(Reg { num_bits: 32, .. }) |
Opnd::InsnOut { num_bits: 32, .. } => {
sxtw(cb, insn.out.into(), insn.opnds[0].into());
sxtw(cb, (*out).into(), opnds[0].into());
},
Opnd::Mem(Mem { num_bits: 32, .. }) => {
ldursw(cb, insn.out.into(), insn.opnds[0].into());
ldursw(cb, (*out).into(), opnds[0].into());
},
_ => unreachable!()
};
},
Op::Mov => {
mov(cb, insn.opnds[0].into(), insn.opnds[1].into());
Insn { op: Op::Mov, opnds, .. } => {
mov(cb, opnds[0].into(), opnds[1].into());
},
Op::Lea => {
let opnd: A64Opnd = insn.opnds[0].into();
Insn { op: Op::Lea, opnds, out, .. } => {
let opnd: A64Opnd = opnds[0].into();
match opnd {
A64Opnd::Mem(mem) => {
add(
cb,
insn.out.into(),
(*out).into(),
A64Opnd::Reg(A64Reg { reg_no: mem.base_reg_no, num_bits: 64 }),
A64Opnd::new_imm(mem.disp.into())
);
@@ -714,25 +714,25 @@ impl Assembler
}
};
},
Op::LeaLabel => {
let label_idx = insn.target.unwrap().unwrap_label_idx();
Insn { op: Op::LeaLabel, out, target, .. } => {
let label_idx = target.unwrap().unwrap_label_idx();
cb.label_ref(label_idx, 4, |cb, end_addr, dst_addr| {
adr(cb, Self::SCRATCH0, A64Opnd::new_imm(dst_addr - (end_addr - 4)));
});
mov(cb, insn.out.into(), Self::SCRATCH0);
mov(cb, (*out).into(), Self::SCRATCH0);
},
Op::CPush => {
emit_push(cb, insn.opnds[0].into());
Insn { op: Op::CPush, opnds, .. } => {
emit_push(cb, opnds[0].into());
},
Op::CPop => {
emit_pop(cb, insn.out.into());
Insn { op: Op::CPop, out, .. } => {
emit_pop(cb, (*out).into());
},
Op::CPopInto => {
emit_pop(cb, insn.opnds[0].into());
Insn { op: Op::CPopInto, opnds, .. } => {
emit_pop(cb, opnds[0].into());
},
Op::CPushAll => {
Insn { op: Op::CPushAll, .. } => {
let regs = Assembler::get_caller_save_regs();
for reg in regs {
@@ -743,7 +743,7 @@ impl Assembler
mrs(cb, Self::SCRATCH0, SystemRegister::NZCV);
emit_push(cb, Self::SCRATCH0);
},
Op::CPopAll => {
Insn { op: Op::CPopAll, .. } => {
let regs = Assembler::get_caller_save_regs();
// Pop the state/flags register
@@ -754,10 +754,10 @@ impl Assembler
emit_pop(cb, A64Opnd::Reg(reg));
}
},
Op::CCall => {
Insn { op: Op::CCall, target, .. } => {
// The offset to the call target in bytes
let src_addr = cb.get_write_ptr().into_i64();
let dst_addr = insn.target.unwrap().unwrap_fun_ptr() as i64;
let dst_addr = target.unwrap().unwrap_fun_ptr() as i64;
let offset = dst_addr - src_addr;
// The offset in instruction count for BL's immediate
let offset = offset / 4;
@@ -771,20 +771,20 @@ impl Assembler
blr(cb, Self::SCRATCH0);
}
},
Op::CRet => {
Insn { op: Op::CRet, .. } => {
ret(cb, A64Opnd::None);
},
Op::Cmp => {
cmp(cb, insn.opnds[0].into(), insn.opnds[1].into());
Insn { op: Op::Cmp, opnds, .. } => {
cmp(cb, opnds[0].into(), opnds[1].into());
},
Op::Test => {
tst(cb, insn.opnds[0].into(), insn.opnds[1].into());
Insn { op: Op::Test, opnds, .. } => {
tst(cb, opnds[0].into(), opnds[1].into());
},
Op::JmpOpnd => {
br(cb, insn.opnds[0].into());
Insn { op: Op::JmpOpnd, opnds, .. } => {
br(cb, opnds[0].into());
},
Op::Jmp => {
match insn.target.unwrap() {
Insn { op: Op::Jmp, target, .. } => {
match target.unwrap() {
Target::CodePtr(dst_ptr) => {
let src_addr = cb.get_write_ptr().into_i64();
let dst_addr = dst_ptr.into_i64();
@@ -820,52 +820,52 @@ impl Assembler
_ => unreachable!()
};
},
Op::Je => {
emit_conditional_jump::<{Condition::EQ}>(cb, insn.target.unwrap());
Insn { op: Op::Je, target, .. } => {
emit_conditional_jump::<{Condition::EQ}>(cb, target.unwrap());
},
Op::Jne => {
emit_conditional_jump::<{Condition::NE}>(cb, insn.target.unwrap());
Insn { op: Op::Jne, target, .. } => {
emit_conditional_jump::<{Condition::NE}>(cb, target.unwrap());
},
Op::Jl => {
emit_conditional_jump::<{Condition::LT}>(cb, insn.target.unwrap());
Insn { op: Op::Jl, target, .. } => {
emit_conditional_jump::<{Condition::LT}>(cb, target.unwrap());
},
Op::Jbe => {
emit_conditional_jump::<{Condition::LS}>(cb, insn.target.unwrap());
Insn { op: Op::Jbe, target, .. } => {
emit_conditional_jump::<{Condition::LS}>(cb, target.unwrap());
},
Op::Jz => {
emit_conditional_jump::<{Condition::EQ}>(cb, insn.target.unwrap());
Insn { op: Op::Jz, target, .. } => {
emit_conditional_jump::<{Condition::EQ}>(cb, target.unwrap());
},
Op::Jnz => {
emit_conditional_jump::<{Condition::NE}>(cb, insn.target.unwrap());
Insn { op: Op::Jnz, target, .. } => {
emit_conditional_jump::<{Condition::NE}>(cb, target.unwrap());
},
Op::Jo => {
emit_conditional_jump::<{Condition::VS}>(cb, insn.target.unwrap());
Insn { op: Op::Jo, target, .. } => {
emit_conditional_jump::<{Condition::VS}>(cb, target.unwrap());
},
Op::IncrCounter => {
ldaddal(cb, insn.opnds[1].into(), insn.opnds[1].into(), insn.opnds[0].into());
Insn { op: Op::IncrCounter, opnds, .. } => {
ldaddal(cb, opnds[1].into(), opnds[1].into(), opnds[0].into());
},
Op::Breakpoint => {
Insn { op: Op::Breakpoint, .. } => {
brk(cb, A64Opnd::None);
},
Op::CSelZ | Op::CSelE => {
csel(cb, insn.out.into(), insn.opnds[0].into(), insn.opnds[1].into(), Condition::EQ);
Insn { op: Op::CSelZ | Op::CSelE, opnds, out, .. } => {
csel(cb, (*out).into(), opnds[0].into(), opnds[1].into(), Condition::EQ);
},
Op::CSelNZ | Op::CSelNE => {
csel(cb, insn.out.into(), insn.opnds[0].into(), insn.opnds[1].into(), Condition::NE);
Insn { op: Op::CSelNZ | Op::CSelNE, opnds, out, .. } => {
csel(cb, (*out).into(), opnds[0].into(), opnds[1].into(), Condition::NE);
},
Op::CSelL => {
csel(cb, insn.out.into(), insn.opnds[0].into(), insn.opnds[1].into(), Condition::LT);
Insn { op: Op::CSelL, opnds, out, .. } => {
csel(cb, (*out).into(), opnds[0].into(), opnds[1].into(), Condition::LT);
},
Op::CSelLE => {
csel(cb, insn.out.into(), insn.opnds[0].into(), insn.opnds[1].into(), Condition::LE);
Insn { op: Op::CSelLE, opnds, out, .. } => {
csel(cb, (*out).into(), opnds[0].into(), opnds[1].into(), Condition::LE);
},
Op::CSelG => {
csel(cb, insn.out.into(), insn.opnds[0].into(), insn.opnds[1].into(), Condition::GT);
Insn { op: Op::CSelG, opnds, out, .. } => {
csel(cb, (*out).into(), opnds[0].into(), opnds[1].into(), Condition::GT);
},
Op::CSelGE => {
csel(cb, insn.out.into(), insn.opnds[0].into(), insn.opnds[1].into(), Condition::GE);
Insn { op: Op::CSelGE, opnds, out, .. } => {
csel(cb, (*out).into(), opnds[0].into(), opnds[1].into(), Condition::GE);
}
Op::LiveReg => (), // just a reg alloc signal, no code
Insn { op: Op::LiveReg, .. } => (), // just a reg alloc signal, no code
};
}

View file

@@ -286,16 +286,21 @@ impl Opnd
}
}
/// Get the size in bits for register/memory operands
pub fn rm_num_bits(&self) -> u8 {
/// Get the size in bits for this operand if there is one.
fn num_bits(&self) -> Option<u8> {
match *self {
Opnd::Reg(reg) => reg.num_bits,
Opnd::Mem(mem) => mem.num_bits,
Opnd::InsnOut{ num_bits, .. } => num_bits,
_ => unreachable!()
Opnd::Reg(Reg { num_bits, .. }) => Some(num_bits),
Opnd::Mem(Mem { num_bits, .. }) => Some(num_bits),
Opnd::InsnOut { num_bits, .. } => Some(num_bits),
_ => None
}
}
/// Get the size in bits for register/memory operands.
pub fn rm_num_bits(&self) -> u8 {
self.num_bits().unwrap()
}
/// Maps the indices from a previous list of instructions to a new list of
/// instructions.
pub fn map_index(self, indices: &Vec<usize>) -> Opnd {
@@ -309,6 +314,27 @@ impl Opnd
_ => self
}
}
/// Determine the size in bits of the slice of the given operands. If any of
/// them are different sizes this will panic.
fn match_num_bits(opnds: &[Opnd]) -> u8 {
let mut value: Option<u8> = None;
for opnd in opnds {
if let Some(num_bits) = opnd.num_bits() {
match value {
None => {
value = Some(num_bits);
},
Some(value) => {
assert_eq!(value, num_bits, "operands of incompatible sizes");
}
};
}
}
value.unwrap_or(64)
}
}
impl From<usize> for Opnd {
@@ -470,30 +496,10 @@ impl Assembler
/// given slice of operands. The operands are given to determine the number
/// of bits necessary for the output operand. They should all be the same
/// size.
fn next_opnd_out(&self, opnds: &[Opnd]) -> Opnd {
let mut out_num_bits: Option<u8> = None;
for opnd in opnds {
match opnd {
Opnd::InsnOut { num_bits, .. } |
Opnd::Mem(Mem { num_bits, .. }) |
Opnd::Reg(Reg { num_bits, .. }) => {
match out_num_bits {
None => {
out_num_bits = Some(*num_bits);
},
Some(out_num_bits) => {
assert_eq!(out_num_bits, *num_bits, "operands of incompatible sizes");
}
};
}
_ => {}
}
}
pub(super) fn next_opnd_out(&self, opnds: &[Opnd]) -> Opnd {
Opnd::InsnOut {
idx: self.insns.len(),
num_bits: out_num_bits.unwrap_or(64)
num_bits: Opnd::match_num_bits(opnds)
}
}
@@ -619,14 +625,14 @@ impl Assembler
let mut asm = Assembler::new_with_label_names(take(&mut self.label_names));
let mut iterator = self.into_draining_iter();
while let Some((index, insn)) = iterator.next_unmapped() {
while let Some((index, mut insn)) = iterator.next_unmapped() {
// Check if this is the last instruction that uses an operand that
// spans more than one instruction. In that case, return the
// allocated register to the pool.
for opnd in &insn.opnds {
match opnd {
Opnd::InsnOut{idx, .. } |
Opnd::Mem( Mem { base: MemBase::InsnOut(idx), .. }) => {
Opnd::InsnOut{ idx, .. } |
Opnd::Mem(Mem { base: MemBase::InsnOut(idx), .. }) => {
// Since we have an InsnOut, we know it spans more that one
// instruction.
let start_index = *idx;
@@ -643,7 +649,6 @@ impl Assembler
}
}
}
_ => {}
}
}
@@ -655,12 +660,23 @@ impl Assembler
// If this instruction is used by another instruction,
// we need to allocate a register to it
let mut out_reg = Opnd::None;
if live_ranges[index] != index {
// If we get to this point where the end of the live range is
// not equal to the index of the instruction, then it must be
// true that we set an output operand for this instruction. If
// it's not true, something has gone wrong.
assert!(
!matches!(insn.out, Opnd::None),
"Instruction output reused but no output operand set"
);
// This is going to be the output operand that we will set on
// the instruction.
let mut out_reg: Option<Reg> = None;
// C return values need to be mapped to the C return register
if insn.op == Op::CCall {
out_reg = Opnd::Reg(take_reg(&mut pool, &regs, &C_RET_REG))
out_reg = Some(take_reg(&mut pool, &regs, &C_RET_REG));
}
// If this instruction's first operand maps to a register and
@@ -672,50 +688,44 @@ impl Assembler
if let Opnd::InsnOut{idx, ..} = insn.opnds[0] {
if live_ranges[idx] == index {
if let Opnd::Reg(reg) = asm.insns[idx].out {
out_reg = Opnd::Reg(take_reg(&mut pool, &regs, &reg))
out_reg = Some(take_reg(&mut pool, &regs, &reg));
}
}
}
}
// Allocate a new register for this instruction
if out_reg == Opnd::None {
// Allocate a new register for this instruction if one is not
// already allocated.
if out_reg.is_none() {
out_reg = if insn.op == Op::LiveReg {
// Allocate a specific register
let reg = insn.opnds[0].unwrap_reg();
Opnd::Reg(take_reg(&mut pool, &regs, &reg))
Some(take_reg(&mut pool, &regs, &reg))
} else {
Opnd::Reg(alloc_reg(&mut pool, &regs))
}
Some(alloc_reg(&mut pool, &regs))
};
}
// Set the output operand on the instruction
let out_num_bits = Opnd::match_num_bits(&insn.opnds);
insn.out = Opnd::Reg(out_reg.unwrap().sub_reg(out_num_bits));
}
// Replace InsnOut operands by their corresponding register
let reg_opnds: Vec<Opnd> = insn.opnds.into_iter().map(|opnd|
match opnd {
Opnd::InsnOut{idx, ..} => asm.insns[idx].out,
for opnd in &mut insn.opnds {
match *opnd {
Opnd::InsnOut { idx, .. } => {
*opnd = asm.insns[idx].out;
},
Opnd::Mem(Mem { base: MemBase::InsnOut(idx), disp, num_bits }) => {
let out_reg = asm.insns[idx].out.unwrap_reg();
Opnd::Mem(Mem {
base: MemBase::Reg(out_reg.reg_no),
disp,
num_bits
})
let base = MemBase::Reg(asm.insns[idx].out.unwrap_reg().reg_no);
*opnd = Opnd::Mem(Mem { base, disp, num_bits });
}
_ => opnd,
_ => {},
}
).collect();
asm.push_insn_parts(insn.op, reg_opnds, insn.target, insn.text, insn.pos_marker);
// Set the output register for this instruction
let num_insns = asm.insns.len();
let mut new_insn = &mut asm.insns[num_insns - 1];
if let Opnd::Reg(reg) = out_reg {
let num_out_bits = new_insn.out.rm_num_bits();
out_reg = Opnd::Reg(reg.sub_reg(num_out_bits))
}
new_insn.out = out_reg;
asm.push_insn(insn);
}
assert_eq!(pool, 0, "Expected all registers to be returned to the pool");

View file

@@ -122,7 +122,7 @@ impl Assembler
// - Most instructions can't be encoded with 64-bit immediates.
// - We look for Op::Load specifically when emitting to keep GC'ed
// VALUEs alive. This is a sort of canonicalization.
let opnds: Vec<Opnd> = insn.opnds.iter().map(|opnd| {
let mapped_opnds: Vec<Opnd> = insn.opnds.iter().map(|opnd| {
if insn.op == Op::Load {
iterator.map_opnd(*opnd)
} else if let Opnd::Value(value) = opnd {
@@ -138,129 +138,128 @@ impl Assembler
}
}).collect();
match insn.op {
Op::Add | Op::Sub | Op::And | Op::Cmp | Op::Or | Op::Test | Op::Xor => {
let (opnd0, opnd1) = match (insn.opnds[0], insn.opnds[1]) {
match insn {
Insn { op: Op::Add | Op::Sub | Op::And | Op::Cmp | Op::Or | Op::Test | Op::Xor, opnds, target, text, pos_marker, .. } => {
let (opnd0, opnd1) = match (opnds[0], opnds[1]) {
(Opnd::Mem(_), Opnd::Mem(_)) => {
(asm.load(opnds[0]), asm.load(opnds[1]))
(asm.load(mapped_opnds[0]), asm.load(mapped_opnds[1]))
},
(Opnd::Mem(_), Opnd::UImm(value)) => {
// 32-bit values will be sign-extended
if imm_num_bits(value as i64) > 32 {
(asm.load(opnds[0]), asm.load(opnds[1]))
(asm.load(mapped_opnds[0]), asm.load(mapped_opnds[1]))
} else {
(asm.load(opnds[0]), opnds[1])
(asm.load(mapped_opnds[0]), mapped_opnds[1])
}
},
(Opnd::Mem(_), Opnd::Imm(value)) => {
if imm_num_bits(value) > 32 {
(asm.load(opnds[0]), asm.load(opnds[1]))
(asm.load(mapped_opnds[0]), asm.load(mapped_opnds[1]))
} else {
(asm.load(opnds[0]), opnds[1])
(asm.load(mapped_opnds[0]), mapped_opnds[1])
}
},
// Instruction output whose live range spans beyond this instruction
(Opnd::InsnOut { idx, .. }, _) => {
if live_ranges[idx] > index {
(asm.load(opnds[0]), opnds[1])
(asm.load(mapped_opnds[0]), mapped_opnds[1])
} else {
(opnds[0], opnds[1])
(mapped_opnds[0], mapped_opnds[1])
}
},
// We have to load memory operands to avoid corrupting them
(Opnd::Mem(_) | Opnd::Reg(_), _) => {
(asm.load(opnds[0]), opnds[1])
(asm.load(mapped_opnds[0]), mapped_opnds[1])
},
_ => (opnds[0], opnds[1])
_ => (mapped_opnds[0], mapped_opnds[1])
};
asm.push_insn_parts(insn.op, vec![opnd0, opnd1], insn.target, insn.text, insn.pos_marker);
asm.push_insn_parts(insn.op, vec![opnd0, opnd1], target, text, pos_marker);
},
// These instructions modify their input operand in-place, so we
// may need to load the input value to preserve it
Op::LShift | Op::RShift | Op::URShift => {
let (opnd0, opnd1) = match (insn.opnds[0], insn.opnds[1]) {
Insn { op: Op::LShift | Op::RShift | Op::URShift, opnds, target, text, pos_marker, .. } => {
let (opnd0, opnd1) = match (opnds[0], opnds[1]) {
// Instruction output whose live range spans beyond this instruction
(Opnd::InsnOut { idx, .. }, _) => {
if live_ranges[idx] > index {
(asm.load(opnds[0]), opnds[1])
(asm.load(mapped_opnds[0]), mapped_opnds[1])
} else {
(opnds[0], opnds[1])
(mapped_opnds[0], mapped_opnds[1])
}
},
// We have to load memory operands to avoid corrupting them
(Opnd::Mem(_) | Opnd::Reg(_), _) => {
(asm.load(opnds[0]), opnds[1])
(asm.load(mapped_opnds[0]), mapped_opnds[1])
},
_ => (opnds[0], opnds[1])
_ => (mapped_opnds[0], mapped_opnds[1])
};
asm.push_insn_parts(insn.op, vec![opnd0, opnd1], insn.target, insn.text, insn.pos_marker);
asm.push_insn_parts(insn.op, vec![opnd0, opnd1], target, text, pos_marker);
},
Op::CSelZ | Op::CSelNZ | Op::CSelE | Op::CSelNE |
Op::CSelL | Op::CSelLE | Op::CSelG | Op::CSelGE => {
let new_opnds = opnds.into_iter().map(|opnd| {
Insn { op: Op::CSelZ | Op::CSelNZ | Op::CSelE | Op::CSelNE | Op::CSelL | Op::CSelLE | Op::CSelG | Op::CSelGE, target, text, pos_marker, .. } => {
let new_opnds = mapped_opnds.into_iter().map(|opnd| {
match opnd {
Opnd::Reg(_) | Opnd::InsnOut { .. } => opnd,
_ => asm.load(opnd)
}
}).collect();
asm.push_insn_parts(insn.op, new_opnds, insn.target, insn.text, insn.pos_marker);
asm.push_insn_parts(insn.op, new_opnds, target, text, pos_marker);
},
Op::Mov => {
match (opnds[0], opnds[1]) {
Insn { op: Op::Mov, .. } => {
match (mapped_opnds[0], mapped_opnds[1]) {
(Opnd::Mem(_), Opnd::Mem(_)) => {
// We load opnd1 because for mov, opnd0 is the output
let opnd1 = asm.load(opnds[1]);
asm.mov(opnds[0], opnd1);
let opnd1 = asm.load(mapped_opnds[1]);
asm.mov(mapped_opnds[0], opnd1);
},
(Opnd::Mem(_), Opnd::UImm(value)) => {
// 32-bit values will be sign-extended
if imm_num_bits(value as i64) > 32 {
let opnd1 = asm.load(opnds[1]);
asm.mov(opnds[0], opnd1);
let opnd1 = asm.load(mapped_opnds[1]);
asm.mov(mapped_opnds[0], opnd1);
} else {
asm.mov(opnds[0], opnds[1]);
asm.mov(mapped_opnds[0], mapped_opnds[1]);
}
},
(Opnd::Mem(_), Opnd::Imm(value)) => {
if imm_num_bits(value) > 32 {
let opnd1 = asm.load(opnds[1]);
asm.mov(opnds[0], opnd1);
let opnd1 = asm.load(mapped_opnds[1]);
asm.mov(mapped_opnds[0], opnd1);
} else {
asm.mov(opnds[0], opnds[1]);
asm.mov(mapped_opnds[0], mapped_opnds[1]);
}
},
_ => {
asm.mov(opnds[0], opnds[1]);
asm.mov(mapped_opnds[0], mapped_opnds[1]);
}
}
},
Op::Not => {
let opnd0 = match insn.opnds[0] {
Insn { op: Op::Not, opnds, .. } => {
let opnd0 = match opnds[0] {
// If we have an instruction output whose live range
// spans beyond this instruction, we have to load it.
Opnd::InsnOut { idx, .. } => {
if live_ranges[idx] > index {
asm.load(opnds[0])
asm.load(mapped_opnds[0])
} else {
opnds[0]
mapped_opnds[0]
}
},
// We have to load memory and register operands to avoid
// corrupting them.
Opnd::Mem(_) | Opnd::Reg(_) => {
asm.load(opnds[0])
asm.load(mapped_opnds[0])
},
// Otherwise we can just reuse the existing operand.
_ => opnds[0]
_ => mapped_opnds[0]
};
asm.not(opnd0);
},
_ => {
asm.push_insn_parts(insn.op, opnds, insn.target, insn.text, insn.pos_marker);
asm.push_insn_parts(insn.op, mapped_opnds, insn.target, insn.text, insn.pos_marker);
}
};
@@ -280,27 +279,27 @@ impl Assembler
// For each instruction
for insn in &self.insns {
match insn.op {
Op::Comment => {
match insn {
Insn { op: Op::Comment, text, .. } => {
if cfg!(feature = "asm_comments") {
cb.add_comment(&insn.text.as_ref().unwrap());
cb.add_comment(text.as_ref().unwrap());
}
},
// Write the label at the current position
Op::Label => {
cb.write_label(insn.target.unwrap().unwrap_label_idx());
Insn { op: Op::Label, target, .. } => {
cb.write_label(target.unwrap().unwrap_label_idx());
},
// Report back the current position in the generated code
Op::PosMarker => {
Insn { op: Op::PosMarker, pos_marker, .. } => {
let pos = cb.get_write_ptr();
let pos_marker_fn = insn.pos_marker.as_ref().unwrap();
let pos_marker_fn = pos_marker.as_ref().unwrap();
pos_marker_fn(pos);
}
},
Op::BakeString => {
for byte in insn.text.as_ref().unwrap().as_bytes() {
Insn { op: Op::BakeString, text, .. } => {
for byte in text.as_ref().unwrap().as_bytes() {
cb.write_byte(*byte);
}
@@ -309,53 +308,55 @@ impl Assembler
cb.write_byte(0);
},
Op::Add => {
add(cb, insn.opnds[0].into(), insn.opnds[1].into())
Insn { op: Op::Add, opnds, .. } => {
add(cb, opnds[0].into(), opnds[1].into())
},
Op::FrameSetup => {},
Op::FrameTeardown => {},
Insn { op: Op::FrameSetup, .. } => {},
Insn { op: Op::FrameTeardown, .. } => {},
Op::Sub => {
sub(cb, insn.opnds[0].into(), insn.opnds[1].into())
Insn { op: Op::Sub, opnds, .. } => {
sub(cb, opnds[0].into(), opnds[1].into())
},
Op::And => {
and(cb, insn.opnds[0].into(), insn.opnds[1].into())
Insn { op: Op::And, opnds, .. } => {
and(cb, opnds[0].into(), opnds[1].into())
},
Insn { op: Op::Or, opnds, .. } => {
or(cb, opnds[0].into(), opnds[1].into());
},
Insn { op: Op::Xor, opnds, .. } => {
xor(cb, opnds[0].into(), opnds[1].into());
},
Insn { op: Op::Not, opnds, .. } => {
not(cb, opnds[0].into());
},
Insn { op: Op::LShift, opnds, .. } => {
shl(cb, opnds[0].into(), opnds[1].into())
},
Insn { op: Op::RShift, opnds, .. } => {
sar(cb, opnds[0].into(), opnds[1].into())
},
Insn { op: Op::URShift, opnds, .. } => {
shr(cb, opnds[0].into(), opnds[1].into())
},
Insn { op: Op::Store, opnds, .. } => {
mov(cb, opnds[0].into(), opnds[1].into());
},
// This assumes only load instructions can contain references to GC'd Value operands
Insn { op: Op::Load, opnds, out, .. } => {
mov(cb, (*out).into(), opnds[0].into());
// If the value being loaded is a heap object
if let Opnd::Value(val) = opnds[0] {
if !val.special_const_p() {
// The pointer immediate is encoded as the last part of the mov written out
let ptr_offset: u32 = (cb.get_write_pos() as u32) - (SIZEOF_VALUE as u32);
@@ -364,35 +365,45 @@ impl Assembler
}
},
Insn { op: Op::LoadSExt, opnds, out, .. } => {
movsx(cb, (*out).into(), opnds[0].into());
},
Insn { op: Op::Mov, opnds, .. } => {
mov(cb, opnds[0].into(), opnds[1].into());
},
// Load effective address
Insn { op: Op::Lea, opnds, out, .. } => {
lea(cb, (*out).into(), opnds[0].into());
},
// Load relative address
Insn { op: Op::LeaLabel, out, target, .. } => {
let label_idx = target.unwrap().unwrap_label_idx();
cb.label_ref(label_idx, 7, |cb, src_addr, dst_addr| {
let disp = dst_addr - src_addr;
lea(cb, Self::SCRATCH0, mem_opnd(8, RIP, disp.try_into().unwrap()));
});
mov(cb, (*out).into(), Self::SCRATCH0);
},
// Push and pop to/from the C stack
Insn { op: Op::CPush, opnds, .. } => {
push(cb, opnds[0].into());
},
Insn { op: Op::CPop, out, .. } => {
pop(cb, (*out).into());
},
Insn { op: Op::CPopInto, opnds, .. } => {
pop(cb, opnds[0].into());
},
// Push and pop to the C stack all caller-save registers and the
// flags
Insn { op: Op::CPushAll, .. } => {
let regs = Assembler::get_caller_save_regs();
for reg in regs {
@@ -400,7 +411,7 @@ impl Assembler
}
pushfq(cb);
},
Insn { op: Op::CPopAll, .. } => {
let regs = Assembler::get_caller_save_regs();
popfq(cb);
@@ -410,95 +421,101 @@ impl Assembler
},
// C function call
Insn { op: Op::CCall, opnds, target, .. } => {
// Temporary
assert!(opnds.len() <= _C_ARG_OPNDS.len());
// For each operand
for (idx, opnd) in opnds.iter().enumerate() {
mov(cb, X86Opnd::Reg(_C_ARG_OPNDS[idx].unwrap_reg()), opnds[idx].into());
}
let ptr = target.unwrap().unwrap_fun_ptr();
call_ptr(cb, RAX, ptr);
},
Insn { op: Op::CRet, opnds, .. } => {
// TODO: bias allocation towards return register
if opnds[0] != Opnd::Reg(C_RET_REG) {
mov(cb, RAX, opnds[0].into());
}
ret(cb);
},
// Compare
Insn { op: Op::Cmp, opnds, .. } => {
cmp(cb, opnds[0].into(), opnds[1].into());
}
// Test and set flags
Insn { op: Op::Test, opnds, .. } => {
test(cb, opnds[0].into(), opnds[1].into());
}
Insn { op: Op::JmpOpnd, opnds, .. } => {
jmp_rm(cb, opnds[0].into());
}
// Conditional jump to a label
Insn { op: Op::Jmp, target, .. } => {
match target.unwrap() {
Target::CodePtr(code_ptr) => jmp_ptr(cb, code_ptr),
Target::Label(label_idx) => jmp_label(cb, label_idx),
_ => unreachable!()
}
}
Insn { op: Op::Je, target, .. } => {
match target.unwrap() {
Target::CodePtr(code_ptr) => je_ptr(cb, code_ptr),
Target::Label(label_idx) => je_label(cb, label_idx),
_ => unreachable!()
}
}
Insn { op: Op::Jne, target, .. } => {
match target.unwrap() {
Target::CodePtr(code_ptr) => jne_ptr(cb, code_ptr),
Target::Label(label_idx) => jne_label(cb, label_idx),
_ => unreachable!()
}
}
Insn { op: Op::Jl, target, .. } => {
match target.unwrap() {
Target::CodePtr(code_ptr) => jl_ptr(cb, code_ptr),
Target::Label(label_idx) => jl_label(cb, label_idx),
_ => unreachable!()
}
},
Insn { op: Op::Jbe, target, .. } => {
match target.unwrap() {
Target::CodePtr(code_ptr) => jbe_ptr(cb, code_ptr),
Target::Label(label_idx) => jbe_label(cb, label_idx),
_ => unreachable!()
}
},
Insn { op: Op::Jz, target, .. } => {
match target.unwrap() {
Target::CodePtr(code_ptr) => jz_ptr(cb, code_ptr),
Target::Label(label_idx) => jz_label(cb, label_idx),
_ => unreachable!()
}
}
Insn { op: Op::Jnz, target, .. } => {
match target.unwrap() {
Target::CodePtr(code_ptr) => jnz_ptr(cb, code_ptr),
Target::Label(label_idx) => jnz_label(cb, label_idx),
_ => unreachable!()
}
}
Insn { op: Op::Jo, target, .. } => {
match target.unwrap() {
Target::CodePtr(code_ptr) => jo_ptr(cb, code_ptr),
Target::Label(label_idx) => jo_label(cb, label_idx),
_ => unreachable!()
@@ -506,48 +523,48 @@ impl Assembler
}
// Atomically increment a counter at a given memory location
Insn { op: Op::IncrCounter, opnds, .. } => {
assert!(matches!(opnds[0], Opnd::Mem(_)));
assert!(matches!(opnds[1], Opnd::UImm(_) | Opnd::Imm(_)));
write_lock_prefix(cb);
add(cb, opnds[0].into(), opnds[1].into());
},
Insn { op: Op::Breakpoint, .. } => int3(cb),
Insn { op: Op::CSelZ, opnds, out, .. } => {
mov(cb, (*out).into(), opnds[0].into());
cmovnz(cb, (*out).into(), opnds[1].into());
},
Insn { op: Op::CSelNZ, opnds, out, .. } => {
mov(cb, (*out).into(), opnds[0].into());
cmovz(cb, (*out).into(), opnds[1].into());
},
Insn { op: Op::CSelE, opnds, out, .. } => {
mov(cb, (*out).into(), opnds[0].into());
cmovne(cb, (*out).into(), opnds[1].into());
},
Insn { op: Op::CSelNE, opnds, out, .. } => {
mov(cb, (*out).into(), opnds[0].into());
cmove(cb, (*out).into(), opnds[1].into());
},
Insn { op: Op::CSelL, opnds, out, .. } => {
mov(cb, (*out).into(), opnds[0].into());
cmovge(cb, (*out).into(), opnds[1].into());
},
Insn { op: Op::CSelLE, opnds, out, .. } => {
mov(cb, (*out).into(), opnds[0].into());
cmovg(cb, (*out).into(), opnds[1].into());
},
Insn { op: Op::CSelG, opnds, out, .. } => {
mov(cb, (*out).into(), opnds[0].into());
cmovle(cb, (*out).into(), opnds[1].into());
},
Insn { op: Op::CSelGE, opnds, out, .. } => {
mov(cb, (*out).into(), opnds[0].into());
cmovl(cb, (*out).into(), opnds[1].into());
}
Insn { op: Op::LiveReg, .. } => (), // just a reg alloc signal, no code
// We want to keep the panic here because some instructions that
// we feed to the backend could get lowered into other