/*
 * Local APIC virtualization
 *
 * Copyright (C) 2006 Qumranet, Inc.
 * Copyright (C) 2007 Novell
 * Copyright (C) 2007 Intel
 * Copyright 2009 Red Hat, Inc. and/or its affiliates.
 *
 * Authors:
 *   Dor Laor <dor.laor@qumranet.com>
 *   Gregory Haskins <ghaskins@novell.com>
 *   Yaozu (Eddie) Dong <eddie.dong@intel.com>
 *
 * Based on Xen 3.1 code, Copyright (c) 2004, Intel Corporation.
 *
 * This work is licensed under the terms of the GNU GPL, version 2.  See
 * the COPYING file in the top-level directory.
 */

#include <linux/kvm_host.h>
#include <linux/kvm.h>
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/smp.h>
#include <linux/hrtimer.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/math64.h>
#include <linux/slab.h>
#include <asm/processor.h>
#include <asm/msr.h>
#include <asm/page.h>
#include <asm/current.h>
#include <asm/apicdef.h>
#include <asm/delay.h>
#include <linux/atomic.h>
#include <linux/jump_label.h>
#include "kvm_cache_regs.h"
#include "irq.h"
#include "trace.h"
#include "x86.h"
#include "cpuid.h"

/* 64-bit modulo: 32-bit hosts have no native u64 '%', so open-code it */
#ifndef CONFIG_X86_64
#define mod_64(x, y) ((x) - (y) * div64_u64(x, y))
#else
#define mod_64(x, y) ((x) % (y))
#endif

#define PRId64 "d"
#define PRIx64 "llx"
#define PRIu64 "u"
#define PRIo64 "o"

#define APIC_BUS_CYCLE_NS 1

/* #define apic_debug(fmt,arg...) printk(KERN_WARNING fmt,##arg) */
#define apic_debug(fmt, arg...)

#define APIC_LVT_NUM			6
/* 14 is the version for Xeon and Pentium 8.4.8 */
#define APIC_VERSION			(0x14UL | ((APIC_LVT_NUM - 1) << 16))
#define LAPIC_MMIO_LENGTH		(1 << 12)
/* The following defines are not in apicdef.h */
#define APIC_SHORT_MASK			0xc0000
#define APIC_DEST_NOSHORT		0x0
#define APIC_DEST_MASK			0x800
#define MAX_APIC_VECTOR			256
#define APIC_VECTORS_PER_REG		32

#define APIC_BROADCAST			0xFF
#define X2APIC_BROADCAST		0xFFFFFFFFul

/*
 * The APIC vector bitmaps (IRR, ISR, TMR) hold 256 bits as eight 32-bit
 * registers spaced 16 bytes apart: VEC_POS is the bit position within a
 * register, REG_POS the byte offset of the register holding the vector.
 */
#define VEC_POS(v) ((v) & (32 - 1))
#define REG_POS(v) (((v) >> 5) << 4)

static inline void apic_set_reg(struct kvm_lapic *apic, int reg_off, u32 val)
{
	*((u32 *) (apic->regs + reg_off)) = val;
}

static inline int apic_test_vector(int vec, void *bitmap)
{
	return test_bit(VEC_POS(vec), (bitmap) + REG_POS(vec));
}

bool kvm_apic_pending_eoi(struct kvm_vcpu *vcpu, int vector)
{
	struct kvm_lapic *apic = vcpu->arch.apic;

	return apic_test_vector(vector, apic->regs + APIC_ISR) ||
		apic_test_vector(vector, apic->regs + APIC_IRR);
}

static inline void apic_set_vector(int vec, void *bitmap)
{
	set_bit(VEC_POS(vec), (bitmap) + REG_POS(vec));
}

static inline void apic_clear_vector(int vec, void *bitmap)
{
	clear_bit(VEC_POS(vec), (bitmap) + REG_POS(vec));
}

static inline int __apic_test_and_set_vector(int vec, void *bitmap)
{
	return __test_and_set_bit(VEC_POS(vec), (bitmap) + REG_POS(vec));
}

static inline int __apic_test_and_clear_vector(int vec, void *bitmap)
{
	return __test_and_clear_bit(VEC_POS(vec), (bitmap) + REG_POS(vec));
}

struct static_key_deferred apic_hw_disabled __read_mostly;
struct static_key_deferred apic_sw_disabled __read_mostly;

static inline int apic_enabled(struct kvm_lapic *apic)
{
	return kvm_apic_sw_enabled(apic) && kvm_apic_hw_enabled(apic);
}

#define LVT_MASK	\
	(APIC_LVT_MASKED | APIC_SEND_PENDING | APIC_VECTOR_MASK)

#define LINT_MASK	\
	(LVT_MASK | APIC_MODE_MASK | APIC_INPUT_POLARITY | \
	 APIC_LVT_REMOTE_IRR | APIC_LVT_LEVEL_TRIGGER)

static inline int kvm_apic_id(struct kvm_lapic *apic)
{
	return (kvm_apic_get_reg(apic, APIC_ID) >> 24) & 0xff;
}

static void recalculate_apic_map(struct kvm *kvm)
{
	struct kvm_apic_map *new, *old = NULL;
	struct kvm_vcpu *vcpu;
	int i;

	new = kzalloc(sizeof(struct kvm_apic_map), GFP_KERNEL);

	mutex_lock(&kvm->arch.apic_map_lock);

	if (!new)
		goto out;

	new->ldr_bits = 8;
	/* flat mode is default */
	new->cid_shift = 8;
	new->cid_mask = 0;
	new->lid_mask = 0xff;
	new->broadcast = APIC_BROADCAST;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		struct kvm_lapic *apic = vcpu->arch.apic;

		if (!kvm_apic_present(vcpu))
			continue;

		if (apic_x2apic_mode(apic)) {
			new->ldr_bits = 32;
			new->cid_shift = 16;
			new->cid_mask = new->lid_mask = 0xffff;
			new->broadcast = X2APIC_BROADCAST;
		} else if (kvm_apic_get_reg(apic, APIC_LDR)) {
			if (kvm_apic_get_reg(apic, APIC_DFR) ==
					APIC_DFR_CLUSTER) {
				new->cid_shift = 4;
				new->cid_mask = 0xf;
				new->lid_mask = 0xf;
			} else {
				new->cid_shift = 8;
				new->cid_mask = 0;
				new->lid_mask = 0xff;
			}
		}

		/*
		 * All APICs have to be configured in the same mode by an OS.
		 * We take advantage of this while building the logical id
		 * lookup table.  After reset, APICs are in software disabled
		 * mode, so if we find an apic with a different setting we
		 * assume this is the mode the OS wants all apics to be in;
		 * build the lookup table accordingly.
		 */
		if (kvm_apic_sw_enabled(apic))
			break;
	}

	kvm_for_each_vcpu(i, vcpu, kvm) {
		struct kvm_lapic *apic = vcpu->arch.apic;
		u16 cid, lid;
		u32 ldr, aid;

		if (!kvm_apic_present(vcpu))
			continue;

		aid = kvm_apic_id(apic);
		ldr = kvm_apic_get_reg(apic, APIC_LDR);
		cid = apic_cluster_id(new, ldr);
		lid = apic_logical_id(new, ldr);

		if (aid < ARRAY_SIZE(new->phys_map))
			new->phys_map[aid] = apic;
		if (lid && cid < ARRAY_SIZE(new->logical_map))
			new->logical_map[cid][ffs(lid) - 1] = apic;
	}
out:
	old = rcu_dereference_protected(kvm->arch.apic_map,
			lockdep_is_held(&kvm->arch.apic_map_lock));
	rcu_assign_pointer(kvm->arch.apic_map, new);
	mutex_unlock(&kvm->arch.apic_map_lock);

	if (old)
		kfree_rcu(old, rcu);

	kvm_vcpu_request_scan_ioapic(kvm);
}

static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val)
{
	bool enabled = val & APIC_SPIV_APIC_ENABLED;

	apic_set_reg(apic, APIC_SPIV, val);

	if (enabled != apic->sw_enabled) {
		apic->sw_enabled = enabled;
		if (enabled) {
			static_key_slow_dec_deferred(&apic_sw_disabled);
			recalculate_apic_map(apic->vcpu->kvm);
		} else
			static_key_slow_inc(&apic_sw_disabled.key);
	}
}

static inline void kvm_apic_set_id(struct kvm_lapic *apic, u8 id)
{
	apic_set_reg(apic, APIC_ID, id << 24);
	recalculate_apic_map(apic->vcpu->kvm);
}

static inline void kvm_apic_set_ldr(struct kvm_lapic *apic, u32 id)
{
	apic_set_reg(apic, APIC_LDR, id);
	recalculate_apic_map(apic->vcpu->kvm);
}

static inline int apic_lvt_enabled(struct kvm_lapic *apic, int lvt_type)
{
	return !(kvm_apic_get_reg(apic, lvt_type) & APIC_LVT_MASKED);
}

static inline int apic_lvt_vector(struct kvm_lapic *apic, int lvt_type)
{
	return kvm_apic_get_reg(apic, lvt_type) & APIC_VECTOR_MASK;
}

static inline int apic_lvtt_oneshot(struct kvm_lapic *apic)
{
	return apic->lapic_timer.timer_mode == APIC_LVT_TIMER_ONESHOT;
}

static inline int apic_lvtt_period(struct kvm_lapic *apic)
{
	return apic->lapic_timer.timer_mode == APIC_LVT_TIMER_PERIODIC;
}

static inline int apic_lvtt_tscdeadline(struct kvm_lapic *apic)
{
	return apic->lapic_timer.timer_mode == APIC_LVT_TIMER_TSCDEADLINE;
}

static inline int apic_lvt_nmi_mode(u32 lvt_val)
{
	return (lvt_val & (APIC_MODE_MASK | APIC_LVT_MASKED)) == APIC_DM_NMI;
}

void kvm_apic_set_version(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic = vcpu->arch.apic;
	struct kvm_cpuid_entry2 *feat;
	u32 v = APIC_VERSION;

	if (!kvm_vcpu_has_lapic(vcpu))
		return;

	feat = kvm_find_cpuid_entry(apic->vcpu, 0x1, 0);
	if (feat && (feat->ecx & (1 << (X86_FEATURE_X2APIC & 31))))
		v |= APIC_LVR_DIRECTED_EOI;
	apic_set_reg(apic, APIC_LVR, v);
}

static const unsigned int apic_lvt_mask[APIC_LVT_NUM] = {
	LVT_MASK,		/* part LVTT mask, timer mode mask added at runtime */
	LVT_MASK | APIC_MODE_MASK,	/* LVTTHMR */
	LVT_MASK | APIC_MODE_MASK,	/* LVTPC */
	LINT_MASK, LINT_MASK,	/* LVT0-1 */
	LVT_MASK		/* LVTERR */
};

static int find_highest_vector(void *bitmap)
{
	int vec;
	u32 *reg;

	for (vec = MAX_APIC_VECTOR - APIC_VECTORS_PER_REG;
	     vec >= 0; vec -= APIC_VECTORS_PER_REG) {
		reg = bitmap + REG_POS(vec);
		if (*reg)
			return fls(*reg) - 1 + vec;
	}

	return -1;
}

static u8 count_vectors(void *bitmap)
{
	int vec;
	u32 *reg;
	u8 count = 0;

	for (vec = 0; vec < MAX_APIC_VECTOR; vec += APIC_VECTORS_PER_REG) {
		reg = bitmap + REG_POS(vec);
		count += hweight32(*reg);
	}

	return count;
}

void __kvm_apic_update_irr(u32 *pir, void *regs)
{
	u32 i, pir_val;

	for (i = 0; i <= 7; i++) {
		pir_val = xchg(&pir[i], 0);
		if (pir_val)
			*((u32 *)(regs + APIC_IRR + i * 0x10)) |= pir_val;
	}
}
EXPORT_SYMBOL_GPL(__kvm_apic_update_irr);

void kvm_apic_update_irr(struct kvm_vcpu *vcpu, u32 *pir)
{
	struct kvm_lapic *apic = vcpu->arch.apic;

	__kvm_apic_update_irr(pir, apic->regs);
}
EXPORT_SYMBOL_GPL(kvm_apic_update_irr);

static inline void apic_set_irr(int vec, struct kvm_lapic *apic)
{
	apic_set_vector(vec, apic->regs + APIC_IRR);
	/*
	 * irr_pending must be true if any interrupt is pending; set it after
	 * APIC_IRR to avoid race with apic_clear_irr
	 */
	apic->irr_pending = true;
}

static inline int apic_search_irr(struct kvm_lapic *apic)
{
	return find_highest_vector(apic->regs + APIC_IRR);
}

static inline int apic_find_highest_irr(struct kvm_lapic *apic)
{
	int result;

	/*
	 * Note that irr_pending is just a hint. It will be always
	 * true with virtual interrupt delivery enabled.
	 */
	if (!apic->irr_pending)
		return -1;

	kvm_x86_ops->sync_pir_to_irr(apic->vcpu);
	result = apic_search_irr(apic);
	ASSERT(result == -1 || result >= 16);

	return result;
}

static inline void apic_clear_irr(int vec, struct kvm_lapic *apic)
{
	struct kvm_vcpu *vcpu;

	vcpu = apic->vcpu;

	if (unlikely(kvm_apic_vid_enabled(vcpu->kvm))) {
		/* try to update RVI */
		apic_clear_vector(vec, apic->regs + APIC_IRR);
		kvm_make_request(KVM_REQ_EVENT, vcpu);
	} else {
		apic->irr_pending = false;
		apic_clear_vector(vec, apic->regs + APIC_IRR);
		if (apic_search_irr(apic) != -1)
			apic->irr_pending = true;
[<ffffffffa051efa0>] vcpu_enter_guest+0x319/0x704 [kvm]
To fix this, we cannot rely on the processor's virtual interrupt delivery,
because "acknowledge interrupt on exit" must only update the virtual
ISR/PPR/IRR registers (and SVI, which is just a cache of the virtual ISR)
but it should not deliver the interrupt through the IDT. Thus, KVM has
to deliver the interrupt "by hand", similar to the treatment of EOI in
commit fc57ac2c9ca8 (KVM: lapic: sync highest ISR to hardware apic on
EOI, 2014-05-14).
The patch modifies kvm_cpu_get_interrupt to always acknowledge an
interrupt; there are only two callers, and the other is not affected
because it is never reached with kvm_apic_vid_enabled() == true. Then it
modifies apic_set_isr and apic_clear_irr to update SVI and RVI in addition
to the registers.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Tested-by: Liu, RongrongX <rongrongx.liu@intel.com>
Tested-by: Felipe Reyes <freyes@suse.com>
Fixes: 77b0f5d67ff2781f36831cba79674c3e97bd7acf
Cc: stable@vger.kernel.org
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-05 08:42:24 +04:00
|
|
|
}
|
2009-06-11 12:06:51 +04:00
|
|
|
}
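The ordering the race fix above restores can be modeled in a few lines of plain C. This is a simplified sketch, not the kernel code: a single 32-bit word stands in for the 256-bit IRR bitmap, and the `model_*` names are illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified model of the lapic IRR plus the irr_pending hint.
 * Illustrative only; the kernel uses a 256-bit bitmap with atomic
 * bit operations. */
struct model_apic {
        uint32_t irr;
        bool irr_pending;
};

/* Set the vector first, then the hint: a reader that observes
 * irr_pending == true is guaranteed to find the bit. */
static void model_set_irr(struct model_apic *apic, int vec)
{
        apic->irr |= 1u << vec;
        apic->irr_pending = true;
}

/* Clear the hint first, clear the bit, then re-check the register:
 * if any other vector is still set, the hint is restored. */
static void model_clear_irr(struct model_apic *apic, int vec)
{
        apic->irr_pending = false;
        apic->irr &= ~(1u << vec);
        if (apic->irr != 0)
                apic->irr_pending = true;
}
```

With two vectors set, clearing one leaves `irr_pending` true; clearing the last one leaves it false, so the hint never claims an empty IRR is pending.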
static inline void apic_set_isr(int vec, struct kvm_lapic *apic)
{
        struct kvm_vcpu *vcpu;

        if (__apic_test_and_set_vector(vec, apic->regs + APIC_ISR))
                return;

        vcpu = apic->vcpu;

        /*
         * With APIC virtualization enabled, all caching is disabled
         * because the processor can modify ISR under the hood.  Instead
         * just set SVI.
         */
        if (unlikely(kvm_x86_ops->hwapic_isr_update))
                kvm_x86_ops->hwapic_isr_update(vcpu->kvm, vec);
        else {
                ++apic->isr_count;
                BUG_ON(apic->isr_count > MAX_APIC_VECTOR);
                /*
                 * ISR (in service register) bit is set when injecting an
                 * interrupt.  The highest vector is injected.  Thus the
                 * latest bit set matches the highest bit in ISR.
                 */
                apic->highest_isr_cache = vec;
        }
}
static inline int apic_find_highest_isr(struct kvm_lapic *apic)
{
        int result;

        /*
         * Note that isr_count is always 1, and highest_isr_cache
         * is always -1, with APIC virtualization enabled.
         */
        if (!apic->isr_count)
                return -1;
        if (likely(apic->highest_isr_cache != -1))
                return apic->highest_isr_cache;

        result = find_highest_vector(apic->regs + APIC_ISR);
        ASSERT(result == -1 || result >= 16);

        return result;
}
static inline void apic_clear_isr(int vec, struct kvm_lapic *apic)
{
        struct kvm_vcpu *vcpu;

        if (!__apic_test_and_clear_vector(vec, apic->regs + APIC_ISR))
                return;

        vcpu = apic->vcpu;

        /*
         * We do get here for APIC virtualization enabled if the guest
         * uses the Hyper-V APIC enlightenment.  In this case we may need
         * to trigger a new interrupt delivery by writing the SVI field;
         * on the other hand isr_count and highest_isr_cache are unused
         * and must be left alone.
         */
        if (unlikely(kvm_x86_ops->hwapic_isr_update))
                kvm_x86_ops->hwapic_isr_update(vcpu->kvm,
                                               apic_find_highest_isr(apic));
        else {
                --apic->isr_count;
                BUG_ON(apic->isr_count < 0);
                apic->highest_isr_cache = -1;
        }
}
int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu)
{
        int highest_irr;

        /* This may race with setting of irr in __apic_accept_irq() and
         * value returned may be wrong, but kvm_vcpu_kick() in __apic_accept_irq
         * will cause vmexit immediately and the value will be recalculated
         * on the next vmentry.
         */
        if (!kvm_vcpu_has_lapic(vcpu))
                return 0;
        highest_irr = apic_find_highest_irr(vcpu->arch.apic);

        return highest_irr;
}
static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
                             int vector, int level, int trig_mode,
                             unsigned long *dest_map);

int kvm_apic_set_irq(struct kvm_vcpu *vcpu, struct kvm_lapic_irq *irq,
                     unsigned long *dest_map)
{
        struct kvm_lapic *apic = vcpu->arch.apic;

        return __apic_accept_irq(apic, irq->delivery_mode, irq->vector,
                        irq->level, irq->trig_mode, dest_map);
}
static int pv_eoi_put_user(struct kvm_vcpu *vcpu, u8 val)
{
        return kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.pv_eoi.data, &val,
                                      sizeof(val));
}

static int pv_eoi_get_user(struct kvm_vcpu *vcpu, u8 *val)
{
        return kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.pv_eoi.data, val,
                                     sizeof(*val));
}

static inline bool pv_eoi_enabled(struct kvm_vcpu *vcpu)
{
        return vcpu->arch.pv_eoi.msr_val & KVM_MSR_ENABLED;
}

static bool pv_eoi_get_pending(struct kvm_vcpu *vcpu)
{
        u8 val;
        if (pv_eoi_get_user(vcpu, &val) < 0)
                apic_debug("Can't read EOI MSR value: 0x%llx\n",
                           (unsigned long long)vcpu->arch.pv_eoi.msr_val);
        return val & 0x1;
}

static void pv_eoi_set_pending(struct kvm_vcpu *vcpu)
{
        if (pv_eoi_put_user(vcpu, KVM_PV_EOI_ENABLED) < 0) {
                apic_debug("Can't set EOI MSR value: 0x%llx\n",
                           (unsigned long long)vcpu->arch.pv_eoi.msr_val);
                return;
        }
        __set_bit(KVM_APIC_PV_EOI_PENDING, &vcpu->arch.apic_attention);
}

static void pv_eoi_clr_pending(struct kvm_vcpu *vcpu)
{
        if (pv_eoi_put_user(vcpu, KVM_PV_EOI_DISABLED) < 0) {
                apic_debug("Can't clear EOI MSR value: 0x%llx\n",
                           (unsigned long long)vcpu->arch.pv_eoi.msr_val);
                return;
        }
        __clear_bit(KVM_APIC_PV_EOI_PENDING, &vcpu->arch.apic_attention);
}
void kvm_apic_update_tmr(struct kvm_vcpu *vcpu, u32 *tmr)
{
        struct kvm_lapic *apic = vcpu->arch.apic;
        int i;

        for (i = 0; i < 8; i++)
                apic_set_reg(apic, APIC_TMR + 0x10 * i, tmr[i]);
}
static void apic_update_ppr(struct kvm_lapic *apic)
{
        u32 tpr, isrv, ppr, old_ppr;
        int isr;

        old_ppr = kvm_apic_get_reg(apic, APIC_PROCPRI);
        tpr = kvm_apic_get_reg(apic, APIC_TASKPRI);
        isr = apic_find_highest_isr(apic);
        isrv = (isr != -1) ? isr : 0;

        if ((tpr & 0xf0) >= (isrv & 0xf0))
                ppr = tpr & 0xff;
        else
                ppr = isrv & 0xf0;

        apic_debug("vlapic %p, ppr 0x%x, isr 0x%x, isrv 0x%x",
                   apic, ppr, isr, isrv);

        if (old_ppr != ppr) {
                apic_set_reg(apic, APIC_PROCPRI, ppr);
                if (ppr < old_ppr)
                        kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
        }
}

static void apic_set_tpr(struct kvm_lapic *apic, u32 tpr)
{
        apic_set_reg(apic, APIC_TASKPRI, tpr);
        apic_update_ppr(apic);
}
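The priority arithmetic in apic_update_ppr is compact enough to check standalone. A minimal sketch, assuming the `compute_ppr` helper name (it is not a kernel function): PPR takes the full TPR when the task-priority class (high nibble) is at least the class of the highest in-service vector, otherwise the ISR class with the sub-class zeroed.

```c
#include <assert.h>
#include <stdint.h>

/* PPR computation as in apic_update_ppr: isrv is the highest
 * in-service vector, or 0 when ISR is empty. */
static uint32_t compute_ppr(uint32_t tpr, uint32_t isrv)
{
        if ((tpr & 0xf0) >= (isrv & 0xf0))
                return tpr & 0xff;
        else
                return isrv & 0xf0;
}
```

Note that in the second branch the low nibble is dropped: an in-service vector only raises the processor priority to the top of its 16-vector class.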
static bool kvm_apic_broadcast(struct kvm_lapic *apic, u32 dest)
{
        return dest == (apic_x2apic_mode(apic) ?
                        X2APIC_BROADCAST : APIC_BROADCAST);
}

static bool kvm_apic_match_physical_addr(struct kvm_lapic *apic, u32 dest)
{
        return kvm_apic_id(apic) == dest || kvm_apic_broadcast(apic, dest);
}

static bool kvm_apic_match_logical_addr(struct kvm_lapic *apic, u32 mda)
{
        u32 logical_id;

        if (kvm_apic_broadcast(apic, mda))
                return true;

        logical_id = kvm_apic_get_reg(apic, APIC_LDR);

        if (apic_x2apic_mode(apic))
                return ((logical_id >> 16) == (mda >> 16))
                       && (logical_id & mda & 0xffff) != 0;

        logical_id = GET_APIC_LOGICAL_ID(logical_id);

        switch (kvm_apic_get_reg(apic, APIC_DFR)) {
        case APIC_DFR_FLAT:
                return (logical_id & mda) != 0;
        case APIC_DFR_CLUSTER:
                return ((logical_id >> 4) == (mda >> 4))
                       && (logical_id & mda & 0xf) != 0;
        default:
                apic_debug("Bad DFR vcpu %d: %08x\n",
                           apic->vcpu->vcpu_id, kvm_apic_get_reg(apic, APIC_DFR));
                return false;
        }
}
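The two xAPIC logical-destination modes above are easy to exercise in isolation. A minimal sketch with hypothetical helper names, operating on the 8-bit LDR/MDA values that GET_APIC_LOGICAL_ID extracts:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Flat mode: the MDA is a plain 8-bit mask; any bit overlap with the
 * APIC's logical ID is a match. */
static bool match_logical_flat(uint8_t ldr, uint8_t mda)
{
        return (ldr & mda) != 0;
}

/* Cluster mode: high nibble selects the cluster and must be equal;
 * low nibble is a mask within the cluster and must overlap. */
static bool match_logical_cluster(uint8_t ldr, uint8_t mda)
{
        return ((ldr >> 4) == (mda >> 4)) && (ldr & mda & 0xf) != 0;
}
```

Flat mode addresses up to 8 CPUs with one bit each; cluster mode trades that for 15 clusters of 4.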
bool kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
                         int short_hand, unsigned int dest, int dest_mode)
{
        struct kvm_lapic *target = vcpu->arch.apic;

        apic_debug("target %p, source %p, dest 0x%x, "
                   "dest_mode 0x%x, short_hand 0x%x\n",
                   target, source, dest, dest_mode, short_hand);

        ASSERT(target);
        switch (short_hand) {
        case APIC_DEST_NOSHORT:
                if (dest_mode == APIC_DEST_PHYSICAL)
                        return kvm_apic_match_physical_addr(target, dest);
                else
                        return kvm_apic_match_logical_addr(target, dest);
        case APIC_DEST_SELF:
                return target == source;
        case APIC_DEST_ALLINC:
                return true;
        case APIC_DEST_ALLBUT:
                return target != source;
        default:
                apic_debug("kvm: apic: Bad dest shorthand value %x\n",
                           short_hand);
                return false;
        }
}
bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src,
                struct kvm_lapic_irq *irq, int *r, unsigned long *dest_map)
{
        struct kvm_apic_map *map;
        unsigned long bitmap = 1;
        struct kvm_lapic **dst;
        int i;
        bool ret = false;

        *r = -1;

        if (irq->shorthand == APIC_DEST_SELF) {
                *r = kvm_apic_set_irq(src->vcpu, irq, dest_map);
                return true;
        }

        if (irq->shorthand)
                return false;

        rcu_read_lock();
        map = rcu_dereference(kvm->arch.apic_map);

        if (!map)
                goto out;

        if (irq->dest_id == map->broadcast)
                goto out;

        ret = true;

        if (irq->dest_mode == APIC_DEST_PHYSICAL) {
                if (irq->dest_id >= ARRAY_SIZE(map->phys_map))
                        goto out;

                dst = &map->phys_map[irq->dest_id];
        } else {
                u32 mda = irq->dest_id << (32 - map->ldr_bits);
                u16 cid = apic_cluster_id(map, mda);

                if (cid >= ARRAY_SIZE(map->logical_map))
                        goto out;

                dst = map->logical_map[cid];

                bitmap = apic_logical_id(map, mda);

                if (irq->delivery_mode == APIC_DM_LOWEST) {
                        int l = -1;
                        for_each_set_bit(i, &bitmap, 16) {
                                if (!dst[i])
                                        continue;
                                if (l < 0)
                                        l = i;
                                else if (kvm_apic_compare_prio(dst[i]->vcpu, dst[l]->vcpu) < 0)
                                        l = i;
                        }

                        bitmap = (l >= 0) ? 1 << l : 0;
                }
        }

        for_each_set_bit(i, &bitmap, 16) {
                if (!dst[i])
                        continue;
                if (*r < 0)
                        *r = 0;
                *r += kvm_apic_set_irq(dst[i]->vcpu, irq, dest_map);
        }
out:
        rcu_read_unlock();
        return ret;
}
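The lowest-priority arbitration loop above can be sketched on its own. This is a simplified model, not the kernel function: `prio[]` stands in for each candidate vCPU's `apic_arb_prio`, and the signed difference mirrors kvm_apic_compare_prio.

```c
#include <assert.h>

/* Scan a 16-bit candidate bitmap and keep the candidate whose
 * arbitration priority compares lowest; ties keep the earlier
 * candidate, as in the kernel loop.  Returns -1 if no candidate. */
static int pick_lowest_prio(unsigned int bitmap, const int *prio)
{
        int i, l = -1;

        for (i = 0; i < 16; i++) {
                if (!(bitmap & (1u << i)))
                        continue;
                if (l < 0 || prio[i] - prio[l] < 0)
                        l = i;
        }
        return l;
}
```

After the scan, the caller collapses the bitmap to the single winner (`bitmap = 1 << l`), so the final delivery loop injects into exactly one vCPU.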
/*
 * Add a pending IRQ into lapic.
 * Return 1 if successfully added and 0 if discarded.
 */
static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
                             int vector, int level, int trig_mode,
                             unsigned long *dest_map)
{
        int result = 0;
        struct kvm_vcpu *vcpu = apic->vcpu;

        trace_kvm_apic_accept_irq(vcpu->vcpu_id, delivery_mode,
                                  trig_mode, vector);
        switch (delivery_mode) {
        case APIC_DM_LOWEST:
                vcpu->arch.apic_arb_prio++;
        case APIC_DM_FIXED:
                /* FIXME add logic for vcpu on reset */
                if (unlikely(!apic_enabled(apic)))
                        break;

                result = 1;

                if (dest_map)
                        __set_bit(vcpu->vcpu_id, dest_map);

                if (kvm_x86_ops->deliver_posted_interrupt)
                        kvm_x86_ops->deliver_posted_interrupt(vcpu, vector);
                else {
                        apic_set_irr(vector, apic);

                        kvm_make_request(KVM_REQ_EVENT, vcpu);
                        kvm_vcpu_kick(vcpu);
                }
                break;

        case APIC_DM_REMRD:
                result = 1;
                vcpu->arch.pv.pv_unhalted = 1;
                kvm_make_request(KVM_REQ_EVENT, vcpu);
                kvm_vcpu_kick(vcpu);
                break;

        case APIC_DM_SMI:
                apic_debug("Ignoring guest SMI\n");
                break;

        case APIC_DM_NMI:
                result = 1;
                kvm_inject_nmi(vcpu);
                kvm_vcpu_kick(vcpu);
                break;

        case APIC_DM_INIT:
                if (!trig_mode || level) {
                        result = 1;
                        /* assumes that there are only KVM_APIC_INIT/SIPI */
                        apic->pending_events = (1UL << KVM_APIC_INIT);
                        /* make sure pending_events is visible before sending
                         * the request */
                        smp_wmb();
                        kvm_make_request(KVM_REQ_EVENT, vcpu);
                        kvm_vcpu_kick(vcpu);
                } else {
                        apic_debug("Ignoring de-assert INIT to vcpu %d\n",
                                   vcpu->vcpu_id);
                }
                break;

        case APIC_DM_STARTUP:
                apic_debug("SIPI to vcpu %d vector 0x%02x\n",
                           vcpu->vcpu_id, vector);
                result = 1;
                apic->sipi_vector = vector;
                /* make sure sipi_vector is visible for the receiver */
                smp_wmb();
                set_bit(KVM_APIC_SIPI, &apic->pending_events);
                kvm_make_request(KVM_REQ_EVENT, vcpu);
                kvm_vcpu_kick(vcpu);
                break;

        case APIC_DM_EXTINT:
                /*
                 * Should only be called by kvm_apic_local_deliver() with LVT0,
                 * before NMI watchdog was enabled. Already handled by
                 * kvm_apic_accept_pic_intr().
                 */
                break;

        default:
                printk(KERN_ERR "TODO: unsupported delivery mode %x\n",
                       delivery_mode);
                break;
        }
        return result;
}
int kvm_apic_compare_prio(struct kvm_vcpu *vcpu1, struct kvm_vcpu *vcpu2)
{
        return vcpu1->arch.apic_arb_prio - vcpu2->arch.apic_arb_prio;
}
static void kvm_ioapic_send_eoi(struct kvm_lapic *apic, int vector)
{
        if (!(kvm_apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI) &&
            kvm_ioapic_handles_vector(apic->vcpu->kvm, vector)) {
                int trigger_mode;
                if (apic_test_vector(vector, apic->regs + APIC_TMR))
                        trigger_mode = IOAPIC_LEVEL_TRIG;
                else
                        trigger_mode = IOAPIC_EDGE_TRIG;
                kvm_ioapic_update_eoi(apic->vcpu, vector, trigger_mode);
        }
}

static int apic_set_eoi(struct kvm_lapic *apic)
{
        int vector = apic_find_highest_isr(apic);

        trace_kvm_eoi(apic, vector);

        /*
         * Not every EOI write has a corresponding ISR bit; one example
         * is when the kernel checks the timer in setup_IO_APIC.
         */
        if (vector == -1)
                return vector;

        apic_clear_isr(vector, apic);
        apic_update_ppr(apic);

        kvm_ioapic_send_eoi(apic, vector);
        kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
        return vector;
}

/*
 * this interface assumes a trap-like exit, which has already finished
 * desired side effect including vISR and vPPR update.
 */
void kvm_apic_set_eoi_accelerated(struct kvm_vcpu *vcpu, int vector)
{
        struct kvm_lapic *apic = vcpu->arch.apic;

        trace_kvm_eoi(apic, vector);

        kvm_ioapic_send_eoi(apic, vector);
        kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
}
EXPORT_SYMBOL_GPL(kvm_apic_set_eoi_accelerated);
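The core of apic_set_eoi is "retire the highest in-service vector". A minimal model of that step (a 32-bit word standing in for the 256-bit ISR; the `model_eoi` name is illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Find the highest set bit in a model ISR, clear it, and return the
 * completed vector, or -1 when the ISR is empty (the "EOI without
 * ISR" case the comment above mentions). */
static int model_eoi(uint32_t *isr)
{
        int vec;

        if (*isr == 0)
                return -1;
        for (vec = 31; !(*isr & (1u << vec)); vec--)
                ;
        *isr &= ~(1u << vec);
        return vec;
}
```

Repeated EOIs therefore unwind nested interrupts from the highest priority down, which is why apic_update_ppr must run after each one.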
static void apic_send_ipi(struct kvm_lapic *apic)
{
        u32 icr_low = kvm_apic_get_reg(apic, APIC_ICR);
        u32 icr_high = kvm_apic_get_reg(apic, APIC_ICR2);
        struct kvm_lapic_irq irq;

        irq.vector = icr_low & APIC_VECTOR_MASK;
        irq.delivery_mode = icr_low & APIC_MODE_MASK;
        irq.dest_mode = icr_low & APIC_DEST_MASK;
        irq.level = icr_low & APIC_INT_ASSERT;
        irq.trig_mode = icr_low & APIC_INT_LEVELTRIG;
        irq.shorthand = icr_low & APIC_SHORT_MASK;
        if (apic_x2apic_mode(apic))
                irq.dest_id = icr_high;
        else
                irq.dest_id = GET_APIC_DEST_FIELD(icr_high);

        trace_kvm_apic_ipi(icr_low, irq.dest_id);

        apic_debug("icr_high 0x%x, icr_low 0x%x, "
                   "short_hand 0x%x, dest 0x%x, trig_mode 0x%x, level 0x%x, "
                   "dest_mode 0x%x, delivery_mode 0x%x, vector 0x%x\n",
                   icr_high, icr_low, irq.shorthand, irq.dest_id,
                   irq.trig_mode, irq.level, irq.dest_mode, irq.delivery_mode,
                   irq.vector);

        kvm_irq_delivery_to_apic(apic->vcpu->kvm, apic, &irq, NULL);
}
static u32 apic_get_tmcct(struct kvm_lapic *apic)
{
        ktime_t remaining;
        s64 ns;
        u32 tmcct;

        ASSERT(apic != NULL);

        /* if initial count is 0, current count should also be 0 */
        if (kvm_apic_get_reg(apic, APIC_TMICT) == 0 ||
                apic->lapic_timer.period == 0)
                return 0;

        remaining = hrtimer_get_remaining(&apic->lapic_timer.timer);
        if (ktime_to_ns(remaining) < 0)
                remaining = ktime_set(0, 0);

        ns = mod_64(ktime_to_ns(remaining), apic->lapic_timer.period);
        tmcct = div64_u64(ns,
                         (APIC_BUS_CYCLE_NS * apic->divide_count));

        return tmcct;
}
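The remaining-time to current-count conversion above can be checked in isolation. A simplified sketch with hypothetical names: `MODEL_BUS_CYCLE_NS` plays the role of APIC_BUS_CYCLE_NS (which the kernel defines as 1 ns), and plain `%` and `/` stand in for mod_64/div64_u64.

```c
#include <assert.h>
#include <stdint.h>

#define MODEL_BUS_CYCLE_NS 1    /* stand-in for APIC_BUS_CYCLE_NS */

/* Fold the remaining time into one timer period (periodic mode can
 * report more than a period when the hrtimer has not yet been
 * re-armed), clamp negatives to zero, then convert ns to bus-cycle
 * ticks scaled by the divide configuration. */
static uint32_t model_tmcct(int64_t remaining_ns, int64_t period_ns,
                            uint32_t divide_count)
{
        int64_t ns;

        if (period_ns == 0)
                return 0;
        if (remaining_ns < 0)
                remaining_ns = 0;
        ns = remaining_ns % period_ns;
        return (uint32_t)(ns / (MODEL_BUS_CYCLE_NS * divide_count));
}
```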
static void __report_tpr_access(struct kvm_lapic *apic, bool write)
{
        struct kvm_vcpu *vcpu = apic->vcpu;
        struct kvm_run *run = vcpu->run;

        kvm_make_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu);
        run->tpr_access.rip = kvm_rip_read(vcpu);
        run->tpr_access.is_write = write;
}

static inline void report_tpr_access(struct kvm_lapic *apic, bool write)
{
        if (apic->vcpu->arch.tpr_access_reporting)
                __report_tpr_access(apic, write);
}

static u32 __apic_read(struct kvm_lapic *apic, unsigned int offset)
{
	u32 val = 0;

	if (offset >= LAPIC_MMIO_LENGTH)
		return 0;

	switch (offset) {
	case APIC_ID:
		if (apic_x2apic_mode(apic))
			val = kvm_apic_id(apic);
		else
			val = kvm_apic_id(apic) << 24;
		break;
	case APIC_ARBPRI:
		apic_debug("Access APIC ARBPRI register which is for P6\n");
		break;

	case APIC_TMCCT:	/* Timer CCR */
		if (apic_lvtt_tscdeadline(apic))
			return 0;

		val = apic_get_tmcct(apic);
		break;
	case APIC_PROCPRI:
		apic_update_ppr(apic);
		val = kvm_apic_get_reg(apic, offset);
		break;
	case APIC_TASKPRI:
		report_tpr_access(apic, false);
		/* fall thru */
	default:
		val = kvm_apic_get_reg(apic, offset);
		break;
	}

	return val;
}

static inline struct kvm_lapic *to_lapic(struct kvm_io_device *dev)
{
	return container_of(dev, struct kvm_lapic, dev);
}

static int apic_reg_read(struct kvm_lapic *apic, u32 offset, int len,
		void *data)
{
	unsigned char alignment = offset & 0xf;
	u32 result;
	/* this bitmask has a bit cleared for each reserved register */
	static const u64 rmask = 0x43ff01ffffffe70cULL;

	if ((alignment + len) > 4) {
		apic_debug("KVM_APIC_READ: alignment error %x %d\n",
			   offset, len);
		return 1;
	}

	if (offset > 0x3f0 || !(rmask & (1ULL << (offset >> 4)))) {
		apic_debug("KVM_APIC_READ: read reserved register %x\n",
			   offset);
		return 1;
	}

	result = __apic_read(apic, offset & ~0xf);

	trace_kvm_apic_read(offset, result);

	switch (len) {
	case 1:
	case 2:
	case 4:
		memcpy(data, (char *)&result + alignment, len);
		break;
	default:
		printk(KERN_ERR "Local APIC read with len = %x, "
		       "should be 1,2, or 4 instead\n", len);
		break;
	}
	return 0;
}

static int apic_mmio_in_range(struct kvm_lapic *apic, gpa_t addr)
{
	return kvm_apic_hw_enabled(apic) &&
	    addr >= apic->base_address &&
	    addr < apic->base_address + LAPIC_MMIO_LENGTH;
}

static int apic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
			   gpa_t address, int len, void *data)
{
	struct kvm_lapic *apic = to_lapic(this);
	u32 offset = address - apic->base_address;

	if (!apic_mmio_in_range(apic, address))
		return -EOPNOTSUPP;

	apic_reg_read(apic, offset, len, data);

	return 0;
}

static void update_divide_count(struct kvm_lapic *apic)
{
	u32 tmp1, tmp2, tdcr;

	tdcr = kvm_apic_get_reg(apic, APIC_TDCR);
	tmp1 = tdcr & 0xf;
	tmp2 = ((tmp1 & 0x3) | ((tmp1 & 0x8) >> 1)) + 1;
	apic->divide_count = 0x1 << (tmp2 & 0x7);

	apic_debug("timer divide count is 0x%x\n",
		   apic->divide_count);
}

static void apic_timer_expired(struct kvm_lapic *apic)
{
	struct kvm_vcpu *vcpu = apic->vcpu;
	wait_queue_head_t *q = &vcpu->wq;
	struct kvm_timer *ktimer = &apic->lapic_timer;

	if (atomic_read(&apic->lapic_timer.pending))
		return;

	atomic_inc(&apic->lapic_timer.pending);
	kvm_set_pending_timer(vcpu);

	if (waitqueue_active(q))
		wake_up_interruptible(q);

	if (apic_lvtt_tscdeadline(apic))
		ktimer->expired_tscdeadline = ktimer->tscdeadline;
}

/*
 * On APICv, this test will cause a busy wait
 * during a higher-priority task.
 */

static bool lapic_timer_int_injected(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic = vcpu->arch.apic;
	u32 reg = kvm_apic_get_reg(apic, APIC_LVTT);

	if (kvm_apic_hw_enabled(apic)) {
		int vec = reg & APIC_VECTOR_MASK;
		void *bitmap = apic->regs + APIC_ISR;

		if (kvm_x86_ops->deliver_posted_interrupt)
			bitmap = apic->regs + APIC_IRR;

		if (apic_test_vector(vec, bitmap))
			return true;
	}
	return false;
}

void wait_lapic_expire(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic = vcpu->arch.apic;
	u64 guest_tsc, tsc_deadline;

	if (!kvm_vcpu_has_lapic(vcpu))
		return;

	if (apic->lapic_timer.expired_tscdeadline == 0)
		return;

	if (!lapic_timer_int_injected(vcpu))
		return;

	tsc_deadline = apic->lapic_timer.expired_tscdeadline;
	apic->lapic_timer.expired_tscdeadline = 0;
	guest_tsc = kvm_x86_ops->read_l1_tsc(vcpu, native_read_tsc());
	trace_kvm_wait_lapic_expire(vcpu->vcpu_id, guest_tsc - tsc_deadline);

	/* __delay is delay_tsc whenever the hardware has TSC, thus always. */
	if (guest_tsc < tsc_deadline)
		__delay(tsc_deadline - guest_tsc);
}

static void start_apic_timer(struct kvm_lapic *apic)
{
	ktime_t now;

	atomic_set(&apic->lapic_timer.pending, 0);

	if (apic_lvtt_period(apic) || apic_lvtt_oneshot(apic)) {
		/* lapic timer in oneshot or periodic mode */
		now = apic->lapic_timer.timer.base->get_time();
		apic->lapic_timer.period = (u64)kvm_apic_get_reg(apic, APIC_TMICT)
			    * APIC_BUS_CYCLE_NS * apic->divide_count;

		if (!apic->lapic_timer.period)
			return;
		/*
		 * Do not allow the guest to program periodic timers with small
		 * interval, since the hrtimers are not throttled by the host
		 * scheduler.
		 */
		if (apic_lvtt_period(apic)) {
			s64 min_period = min_timer_period_us * 1000LL;

			if (apic->lapic_timer.period < min_period) {
				pr_info_ratelimited(
				    "kvm: vcpu %i: requested %lld ns "
				    "lapic timer period limited to %lld ns\n",
				    apic->vcpu->vcpu_id,
				    apic->lapic_timer.period, min_period);
				apic->lapic_timer.period = min_period;
			}
		}

		hrtimer_start(&apic->lapic_timer.timer,
			      ktime_add_ns(now, apic->lapic_timer.period),
			      HRTIMER_MODE_ABS);

		apic_debug("%s: bus cycle is %" PRId64 "ns, now 0x%016"
			   PRIx64 ", "
			   "timer initial count 0x%x, period %lldns, "
			   "expire @ 0x%016" PRIx64 ".\n", __func__,
			   APIC_BUS_CYCLE_NS, ktime_to_ns(now),
			   kvm_apic_get_reg(apic, APIC_TMICT),
			   apic->lapic_timer.period,
			   ktime_to_ns(ktime_add_ns(now,
					apic->lapic_timer.period)));
	} else if (apic_lvtt_tscdeadline(apic)) {
		/* lapic timer in tsc deadline mode */
		u64 guest_tsc, tscdeadline = apic->lapic_timer.tscdeadline;
		u64 ns = 0;
		ktime_t expire;
		struct kvm_vcpu *vcpu = apic->vcpu;
		unsigned long this_tsc_khz = vcpu->arch.virtual_tsc_khz;
		unsigned long flags;

		if (unlikely(!tscdeadline || !this_tsc_khz))
			return;

		local_irq_save(flags);

		now = apic->lapic_timer.timer.base->get_time();
		guest_tsc = kvm_x86_ops->read_l1_tsc(vcpu, native_read_tsc());
		if (likely(tscdeadline > guest_tsc)) {
			ns = (tscdeadline - guest_tsc) * 1000000ULL;
			do_div(ns, this_tsc_khz);
			expire = ktime_add_ns(now, ns);
			expire = ktime_sub_ns(expire, lapic_timer_advance_ns);
			hrtimer_start(&apic->lapic_timer.timer,
				      expire, HRTIMER_MODE_ABS);
		} else
			apic_timer_expired(apic);

		local_irq_restore(flags);
	}
}

static void apic_manage_nmi_watchdog(struct kvm_lapic *apic, u32 lvt0_val)
{
	int nmi_wd_enabled = apic_lvt_nmi_mode(kvm_apic_get_reg(apic, APIC_LVT0));

	if (apic_lvt_nmi_mode(lvt0_val)) {
		if (!nmi_wd_enabled) {
			apic_debug("Receive NMI setting on APIC_LVT0 "
				   "for cpu %d\n", apic->vcpu->vcpu_id);
			apic->vcpu->kvm->arch.vapics_in_nmi_mode++;
		}
	} else if (nmi_wd_enabled)
		apic->vcpu->kvm->arch.vapics_in_nmi_mode--;
}

static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
{
	int ret = 0;

	trace_kvm_apic_write(reg, val);

	switch (reg) {
	case APIC_ID:		/* Local APIC ID */
		if (!apic_x2apic_mode(apic))
			kvm_apic_set_id(apic, val >> 24);
		else
			ret = 1;
		break;

	case APIC_TASKPRI:
		report_tpr_access(apic, true);
		apic_set_tpr(apic, val & 0xff);
		break;

	case APIC_EOI:
		apic_set_eoi(apic);
		break;

	case APIC_LDR:
		if (!apic_x2apic_mode(apic))
			kvm_apic_set_ldr(apic, val & APIC_LDR_MASK);
		else
			ret = 1;
		break;

	case APIC_DFR:
		if (!apic_x2apic_mode(apic)) {
			apic_set_reg(apic, APIC_DFR, val | 0x0FFFFFFF);
			recalculate_apic_map(apic->vcpu->kvm);
		} else
			ret = 1;
		break;

	case APIC_SPIV: {
		u32 mask = 0x3ff;
		if (kvm_apic_get_reg(apic, APIC_LVR) & APIC_LVR_DIRECTED_EOI)
			mask |= APIC_SPIV_DIRECTED_EOI;
		apic_set_spiv(apic, val & mask);
		if (!(val & APIC_SPIV_APIC_ENABLED)) {
			int i;
			u32 lvt_val;

			for (i = 0; i < APIC_LVT_NUM; i++) {
				lvt_val = kvm_apic_get_reg(apic,
						       APIC_LVTT + 0x10 * i);
				apic_set_reg(apic, APIC_LVTT + 0x10 * i,
					     lvt_val | APIC_LVT_MASKED);
			}
			atomic_set(&apic->lapic_timer.pending, 0);

		}
		break;
	}
	case APIC_ICR:
		/* No delay here, so we always clear the pending bit */
		apic_set_reg(apic, APIC_ICR, val & ~(1 << 12));
		apic_send_ipi(apic);
		break;

	case APIC_ICR2:
		if (!apic_x2apic_mode(apic))
			val &= 0xff000000;
		apic_set_reg(apic, APIC_ICR2, val);
		break;

	case APIC_LVT0:
		apic_manage_nmi_watchdog(apic, val);
	case APIC_LVTTHMR:
	case APIC_LVTPC:
	case APIC_LVT1:
	case APIC_LVTERR:
		/* TODO: Check vector */
		if (!kvm_apic_sw_enabled(apic))
			val |= APIC_LVT_MASKED;

		val &= apic_lvt_mask[(reg - APIC_LVTT) >> 4];
		apic_set_reg(apic, reg, val);

		break;

	case APIC_LVTT: {
		u32 timer_mode = val & apic->lapic_timer.timer_mode_mask;

		if (apic->lapic_timer.timer_mode != timer_mode) {
			apic->lapic_timer.timer_mode = timer_mode;
			hrtimer_cancel(&apic->lapic_timer.timer);
		}

		if (!kvm_apic_sw_enabled(apic))
			val |= APIC_LVT_MASKED;
		val &= (apic_lvt_mask[0] | apic->lapic_timer.timer_mode_mask);
		apic_set_reg(apic, APIC_LVTT, val);
		break;
	}

	case APIC_TMICT:
		if (apic_lvtt_tscdeadline(apic))
			break;

		hrtimer_cancel(&apic->lapic_timer.timer);
		apic_set_reg(apic, APIC_TMICT, val);
		start_apic_timer(apic);
		break;

	case APIC_TDCR:
		if (val & 4)
			apic_debug("KVM_WRITE:TDCR %x\n", val);
		apic_set_reg(apic, APIC_TDCR, val);
		update_divide_count(apic);
		break;

	case APIC_ESR:
		if (apic_x2apic_mode(apic) && val != 0) {
			apic_debug("KVM_WRITE:ESR not zero %x\n", val);
			ret = 1;
		}
		break;

	case APIC_SELF_IPI:
		if (apic_x2apic_mode(apic)) {
			apic_reg_write(apic, APIC_ICR, 0x40000 | (val & 0xff));
		} else
			ret = 1;
		break;
	default:
		ret = 1;
		break;
	}
	if (ret)
		apic_debug("Local APIC Write to read-only register %x\n", reg);
	return ret;
}

static int apic_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
			    gpa_t address, int len, const void *data)
{
	struct kvm_lapic *apic = to_lapic(this);
	unsigned int offset = address - apic->base_address;
	u32 val;

	if (!apic_mmio_in_range(apic, address))
		return -EOPNOTSUPP;

	/*
	 * APIC register must be aligned on 128-bits boundary.
	 * 32/64/128 bits registers must be accessed thru 32 bits.
	 * Refer SDM 8.4.1
	 */
	if (len != 4 || (offset & 0xf)) {
		/* Don't shout loud, $infamous_os would cause only noise. */
		apic_debug("apic write: bad size=%d %lx\n", len, (long)address);
		return 0;
	}

	val = *(u32*)data;

	/* too common printing */
	if (offset != APIC_EOI)
		apic_debug("%s: offset 0x%x with length 0x%x, and value is "
			   "0x%x\n", __func__, offset, len, val);

	apic_reg_write(apic, offset & 0xff0, val);

	return 0;
}

void kvm_lapic_set_eoi(struct kvm_vcpu *vcpu)
{
	if (kvm_vcpu_has_lapic(vcpu))
		apic_reg_write(vcpu->arch.apic, APIC_EOI, 0);
}
EXPORT_SYMBOL_GPL(kvm_lapic_set_eoi);

/* emulate APIC access in a trap manner */
void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
{
	u32 val = 0;

	/* hw has done the conditional check and inst decode */
	offset &= 0xff0;

	apic_reg_read(vcpu->arch.apic, offset, 4, &val);

	/* TODO: optimize to just emulate side effect w/o one more write */
	apic_reg_write(vcpu->arch.apic, offset, val);
}
EXPORT_SYMBOL_GPL(kvm_apic_write_nodecode);

void kvm_free_lapic(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic = vcpu->arch.apic;

	if (!vcpu->arch.apic)
		return;

	hrtimer_cancel(&apic->lapic_timer.timer);

	if (!(vcpu->arch.apic_base & MSR_IA32_APICBASE_ENABLE))
		static_key_slow_dec_deferred(&apic_hw_disabled);

	if (!apic->sw_enabled)
		static_key_slow_dec_deferred(&apic_sw_disabled);

	if (apic->regs)
		free_page((unsigned long)apic->regs);

	kfree(apic);
}

/*
 *----------------------------------------------------------------------
 * LAPIC interface
 *----------------------------------------------------------------------
 */

u64 kvm_get_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic = vcpu->arch.apic;

	if (!kvm_vcpu_has_lapic(vcpu) || apic_lvtt_oneshot(apic) ||
			apic_lvtt_period(apic))
		return 0;

	return apic->lapic_timer.tscdeadline;
}

void kvm_set_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu, u64 data)
{
	struct kvm_lapic *apic = vcpu->arch.apic;

	if (!kvm_vcpu_has_lapic(vcpu) || apic_lvtt_oneshot(apic) ||
			apic_lvtt_period(apic))
		return;

	hrtimer_cancel(&apic->lapic_timer.timer);
	apic->lapic_timer.tscdeadline = data;
	start_apic_timer(apic);
}

void kvm_lapic_set_tpr(struct kvm_vcpu *vcpu, unsigned long cr8)
{
	struct kvm_lapic *apic = vcpu->arch.apic;

	if (!kvm_vcpu_has_lapic(vcpu))
		return;

	apic_set_tpr(apic, ((cr8 & 0x0f) << 4)
		     | (kvm_apic_get_reg(apic, APIC_TASKPRI) & 4));
}

u64 kvm_lapic_get_cr8(struct kvm_vcpu *vcpu)
{
	u64 tpr;

	if (!kvm_vcpu_has_lapic(vcpu))
		return 0;

	tpr = (u64) kvm_apic_get_reg(vcpu->arch.apic, APIC_TASKPRI);

	return (tpr & 0xf0) >> 4;
}

void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value)
{
	u64 old_value = vcpu->arch.apic_base;
	struct kvm_lapic *apic = vcpu->arch.apic;

	if (!apic) {
		value |= MSR_IA32_APICBASE_BSP;
		vcpu->arch.apic_base = value;
		return;
	}

	if (!kvm_vcpu_is_bsp(apic->vcpu))
		value &= ~MSR_IA32_APICBASE_BSP;
	vcpu->arch.apic_base = value;

	/* update jump label if enable bit changes */
	if ((old_value ^ value) & MSR_IA32_APICBASE_ENABLE) {
		if (value & MSR_IA32_APICBASE_ENABLE)
			static_key_slow_dec_deferred(&apic_hw_disabled);
		else
			static_key_slow_inc(&apic_hw_disabled.key);
		recalculate_apic_map(vcpu->kvm);
	}

	if ((old_value ^ value) & X2APIC_ENABLE) {
		if (value & X2APIC_ENABLE) {
			u32 id = kvm_apic_id(apic);
			u32 ldr = ((id >> 4) << 16) | (1 << (id & 0xf));
			kvm_apic_set_ldr(apic, ldr);
			kvm_x86_ops->set_virtual_x2apic_mode(vcpu, true);
		} else
			kvm_x86_ops->set_virtual_x2apic_mode(vcpu, false);
	}

	apic->base_address = apic->vcpu->arch.apic_base &
			     MSR_IA32_APICBASE_BASE;

	if ((value & MSR_IA32_APICBASE_ENABLE) &&
	     apic->base_address != APIC_DEFAULT_PHYS_BASE)
		pr_warn_once("APIC base relocation is unsupported by KVM");

	/* with FSB delivery interrupt, we can restart APIC functionality */
	apic_debug("apic base msr is 0x%016" PRIx64 ", and base address is "
		   "0x%lx.\n", apic->vcpu->arch.apic_base, apic->base_address);

}

void kvm_lapic_reset(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic;
	int i;

	apic_debug("%s\n", __func__);

	ASSERT(vcpu);
	apic = vcpu->arch.apic;
	ASSERT(apic != NULL);

	/* Stop the timer in case it's a reset to an active apic */
	hrtimer_cancel(&apic->lapic_timer.timer);

	kvm_apic_set_id(apic, vcpu->vcpu_id);
	kvm_apic_set_version(apic->vcpu);

	for (i = 0; i < APIC_LVT_NUM; i++)
		apic_set_reg(apic, APIC_LVTT + 0x10 * i, APIC_LVT_MASKED);
	apic->lapic_timer.timer_mode = 0;
	apic_set_reg(apic, APIC_LVT0,
		     SET_APIC_DELIVERY_MODE(0, APIC_MODE_EXTINT));

	apic_set_reg(apic, APIC_DFR, 0xffffffffU);
	apic_set_spiv(apic, 0xff);
	apic_set_reg(apic, APIC_TASKPRI, 0);
	kvm_apic_set_ldr(apic, 0);
	apic_set_reg(apic, APIC_ESR, 0);
	apic_set_reg(apic, APIC_ICR, 0);
	apic_set_reg(apic, APIC_ICR2, 0);
	apic_set_reg(apic, APIC_TDCR, 0);
	apic_set_reg(apic, APIC_TMICT, 0);
	for (i = 0; i < 8; i++) {
		apic_set_reg(apic, APIC_IRR + 0x10 * i, 0);
		apic_set_reg(apic, APIC_ISR + 0x10 * i, 0);
		apic_set_reg(apic, APIC_TMR + 0x10 * i, 0);
	}
	apic->irr_pending = kvm_apic_vid_enabled(vcpu->kvm);
	apic->isr_count = kvm_x86_ops->hwapic_isr_update ? 1 : 0;
	apic->highest_isr_cache = -1;
	update_divide_count(apic);
	atomic_set(&apic->lapic_timer.pending, 0);
	if (kvm_vcpu_is_bsp(vcpu))
		kvm_lapic_set_base(vcpu,
				vcpu->arch.apic_base | MSR_IA32_APICBASE_BSP);
	vcpu->arch.pv_eoi.msr_val = 0;
	apic_update_ppr(apic);

	vcpu->arch.apic_arb_prio = 0;
	vcpu->arch.apic_attention = 0;

	apic_debug("%s: vcpu=%p, id=%d, base_msr="
		   "0x%016" PRIx64 ", base_address=0x%0lx.\n", __func__,
		   vcpu, kvm_apic_id(apic),
		   vcpu->arch.apic_base, apic->base_address);
}

/*
 *----------------------------------------------------------------------
 * timer interface
 *----------------------------------------------------------------------
 */

static bool lapic_is_periodic(struct kvm_lapic *apic)
{
	return apic_lvtt_period(apic);
}

int apic_has_pending_timer(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic = vcpu->arch.apic;

	if (kvm_vcpu_has_lapic(vcpu) && apic_enabled(apic) &&
			apic_lvt_enabled(apic, APIC_LVTT))
		return atomic_read(&apic->lapic_timer.pending);

	return 0;
}

int kvm_apic_local_deliver(struct kvm_lapic *apic, int lvt_type)
{
	u32 reg = kvm_apic_get_reg(apic, lvt_type);
	int vector, mode, trig_mode;

	if (kvm_apic_hw_enabled(apic) && !(reg & APIC_LVT_MASKED)) {
		vector = reg & APIC_VECTOR_MASK;
		mode = reg & APIC_MODE_MASK;
		trig_mode = reg & APIC_LVT_LEVEL_TRIGGER;
		return __apic_accept_irq(apic, mode, vector, 1, trig_mode,
					NULL);
	}
	return 0;
}

void kvm_apic_nmi_wd_deliver(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic = vcpu->arch.apic;

	if (apic)
		kvm_apic_local_deliver(apic, APIC_LVT0);
}

static const struct kvm_io_device_ops apic_mmio_ops = {
	.read     = apic_mmio_read,
	.write    = apic_mmio_write,
};

static enum hrtimer_restart apic_timer_fn(struct hrtimer *data)
{
	struct kvm_timer *ktimer = container_of(data, struct kvm_timer, timer);
	struct kvm_lapic *apic = container_of(ktimer, struct kvm_lapic, lapic_timer);

	apic_timer_expired(apic);

	if (lapic_is_periodic(apic)) {
		hrtimer_add_expires_ns(&ktimer->timer, ktimer->period);
		return HRTIMER_RESTART;
	} else
		return HRTIMER_NORESTART;
}

int kvm_create_lapic(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic;

	ASSERT(vcpu != NULL);
	apic_debug("apic_init %d\n", vcpu->vcpu_id);

	apic = kzalloc(sizeof(*apic), GFP_KERNEL);
	if (!apic)
		goto nomem;

	vcpu->arch.apic = apic;

	apic->regs = (void *)get_zeroed_page(GFP_KERNEL);
	if (!apic->regs) {
		printk(KERN_ERR "malloc apic regs error for vcpu %x\n",
		       vcpu->vcpu_id);
		goto nomem_free_apic;
	}
	apic->vcpu = vcpu;

	hrtimer_init(&apic->lapic_timer.timer, CLOCK_MONOTONIC,
		     HRTIMER_MODE_ABS);
	apic->lapic_timer.timer.function = apic_timer_fn;

	/*
	 * APIC is created enabled. This will prevent kvm_lapic_set_base from
	 * thinking that APIC state has changed.
	 */
	vcpu->arch.apic_base = MSR_IA32_APICBASE_ENABLE;
	kvm_lapic_set_base(vcpu,
			APIC_DEFAULT_PHYS_BASE | MSR_IA32_APICBASE_ENABLE);

	static_key_slow_inc(&apic_sw_disabled.key); /* sw disabled at reset */
	kvm_lapic_reset(vcpu);
	kvm_iodevice_init(&apic->dev, &apic_mmio_ops);

	return 0;
nomem_free_apic:
	kfree(apic);
nomem:
	return -ENOMEM;
}
|
|
|
|
|
|
|
|
int kvm_apic_has_interrupt(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic = vcpu->arch.apic;
	int highest_irr;

	if (!kvm_vcpu_has_lapic(vcpu) || !apic_enabled(apic))
		return -1;

	apic_update_ppr(apic);
	highest_irr = apic_find_highest_irr(apic);
	if ((highest_irr == -1) ||
	    ((highest_irr & 0xF0) <= kvm_apic_get_reg(apic, APIC_PROCPRI)))
		return -1;
	return highest_irr;
}

int kvm_apic_accept_pic_intr(struct kvm_vcpu *vcpu)
{
	u32 lvt0 = kvm_apic_get_reg(vcpu->arch.apic, APIC_LVT0);
	int r = 0;

	if (!kvm_apic_hw_enabled(vcpu->arch.apic))
		r = 1;
	if ((lvt0 & APIC_LVT_MASKED) == 0 &&
	    GET_APIC_DELIVERY_MODE(lvt0) == APIC_MODE_EXTINT)
		r = 1;
	return r;
}

void kvm_inject_apic_timer_irqs(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic = vcpu->arch.apic;

	if (!kvm_vcpu_has_lapic(vcpu))
		return;

	if (atomic_read(&apic->lapic_timer.pending) > 0) {
		kvm_apic_local_deliver(apic, APIC_LVTT);
		if (apic_lvtt_tscdeadline(apic))
			apic->lapic_timer.tscdeadline = 0;
		atomic_set(&apic->lapic_timer.pending, 0);
	}
}

int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu)
{
	int vector = kvm_apic_has_interrupt(vcpu);
	struct kvm_lapic *apic = vcpu->arch.apic;

	if (vector == -1)
		return -1;

	/*
	 * We get here even with APIC virtualization enabled, if doing
	 * nested virtualization and L1 runs with the "acknowledge interrupt
	 * on exit" mode.  Then we cannot inject the interrupt via RVI,
	 * because the process would deliver it through the IDT.
	 */

	apic_set_isr(vector, apic);
	apic_update_ppr(apic);
	apic_clear_irr(vector, apic);
	return vector;
}

void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu,
		struct kvm_lapic_state *s)
{
	struct kvm_lapic *apic = vcpu->arch.apic;

	kvm_lapic_set_base(vcpu, vcpu->arch.apic_base);
	/* set SPIV separately to get count of SW disabled APICs right */
	apic_set_spiv(apic, *((u32 *)(s->regs + APIC_SPIV)));
	memcpy(vcpu->arch.apic->regs, s->regs, sizeof *s);
	/* call kvm_apic_set_id() to put apic into apic_map */
	kvm_apic_set_id(apic, kvm_apic_id(apic));
	kvm_apic_set_version(vcpu);

	apic_update_ppr(apic);
	hrtimer_cancel(&apic->lapic_timer.timer);
	update_divide_count(apic);
	start_apic_timer(apic);
	apic->irr_pending = true;
	apic->isr_count = kvm_x86_ops->hwapic_isr_update ?
				1 : count_vectors(apic->regs + APIC_ISR);
	apic->highest_isr_cache = -1;
	if (kvm_x86_ops->hwapic_irr_update)
		kvm_x86_ops->hwapic_irr_update(vcpu,
				apic_find_highest_irr(apic));
	if (unlikely(kvm_x86_ops->hwapic_isr_update))
		kvm_x86_ops->hwapic_isr_update(vcpu->kvm,
				apic_find_highest_isr(apic));
	kvm_make_request(KVM_REQ_EVENT, vcpu);
	kvm_rtc_eoi_tracking_restore_one(vcpu);
}

void __kvm_migrate_apic_timer(struct kvm_vcpu *vcpu)
{
	struct hrtimer *timer;

	if (!kvm_vcpu_has_lapic(vcpu))
		return;

	timer = &vcpu->arch.apic->lapic_timer.timer;
	if (hrtimer_cancel(timer))
		hrtimer_start_expires(timer, HRTIMER_MODE_ABS);
}

/*
 * apic_sync_pv_eoi_from_guest - called on vmexit or cancel interrupt
 *
 * Detect whether guest triggered PV EOI since the
 * last entry. If yes, set EOI on guest's behalf.
 * Clear PV EOI in guest memory in any case.
 */
static void apic_sync_pv_eoi_from_guest(struct kvm_vcpu *vcpu,
					struct kvm_lapic *apic)
{
	bool pending;
	int vector;
	/*
	 * PV EOI state is derived from KVM_APIC_PV_EOI_PENDING in host
	 * and KVM_PV_EOI_ENABLED in guest memory as follows:
	 *
	 * KVM_APIC_PV_EOI_PENDING is unset:
	 *	-> host disabled PV EOI.
	 * KVM_APIC_PV_EOI_PENDING is set, KVM_PV_EOI_ENABLED is set:
	 *	-> host enabled PV EOI, guest did not execute EOI yet.
	 * KVM_APIC_PV_EOI_PENDING is set, KVM_PV_EOI_ENABLED is unset:
	 *	-> host enabled PV EOI, guest executed EOI.
	 */
	BUG_ON(!pv_eoi_enabled(vcpu));
	pending = pv_eoi_get_pending(vcpu);
	/*
	 * Clear pending bit in any case: it will be set again on vmentry.
	 * While this might not be ideal from performance point of view,
	 * this makes sure pv eoi is only enabled when we know it's safe.
	 */
	pv_eoi_clr_pending(vcpu);
	if (pending)
		return;
	vector = apic_set_eoi(apic);
	trace_kvm_pv_eoi(apic, vector);
}

void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu)
{
	u32 data;

	if (test_bit(KVM_APIC_PV_EOI_PENDING, &vcpu->arch.apic_attention))
		apic_sync_pv_eoi_from_guest(vcpu, vcpu->arch.apic);

	if (!test_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention))
		return;

	kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.apic->vapic_cache, &data,
				sizeof(u32));

	apic_set_tpr(vcpu->arch.apic, data & 0xff);
}

/*
 * apic_sync_pv_eoi_to_guest - called before vmentry
 *
 * Detect whether it's safe to enable PV EOI and
 * if yes do so.
 */
static void apic_sync_pv_eoi_to_guest(struct kvm_vcpu *vcpu,
					struct kvm_lapic *apic)
{
	if (!pv_eoi_enabled(vcpu) ||
	    /* IRR set or many bits in ISR: could be nested. */
	    apic->irr_pending ||
	    /* Cache not set: could be safe but we don't bother. */
	    apic->highest_isr_cache == -1 ||
	    /* Need EOI to update ioapic. */
	    kvm_ioapic_handles_vector(vcpu->kvm, apic->highest_isr_cache)) {
		/*
		 * PV EOI was disabled by apic_sync_pv_eoi_from_guest
		 * so we need not do anything here.
		 */
		return;
	}

	pv_eoi_set_pending(apic->vcpu);
}

void kvm_lapic_sync_to_vapic(struct kvm_vcpu *vcpu)
{
	u32 data, tpr;
	int max_irr, max_isr;
	struct kvm_lapic *apic = vcpu->arch.apic;

	apic_sync_pv_eoi_to_guest(vcpu, apic);

	if (!test_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention))
		return;

	tpr = kvm_apic_get_reg(apic, APIC_TASKPRI) & 0xff;
	max_irr = apic_find_highest_irr(apic);
	if (max_irr < 0)
		max_irr = 0;
	max_isr = apic_find_highest_isr(apic);
	if (max_isr < 0)
		max_isr = 0;
	data = (tpr & 0xff) | ((max_isr & 0xf0) << 8) | (max_irr << 24);

	kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.apic->vapic_cache, &data,
				sizeof(u32));
}

int kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr)
{
	if (vapic_addr) {
		if (kvm_gfn_to_hva_cache_init(vcpu->kvm,
					&vcpu->arch.apic->vapic_cache,
					vapic_addr, sizeof(u32)))
			return -EINVAL;
		__set_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention);
	} else {
		__clear_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention);
	}

	vcpu->arch.apic->vapic_addr = vapic_addr;
	return 0;
}

int kvm_x2apic_msr_write(struct kvm_vcpu *vcpu, u32 msr, u64 data)
{
	struct kvm_lapic *apic = vcpu->arch.apic;
	u32 reg = (msr - APIC_BASE_MSR) << 4;

	if (!irqchip_in_kernel(vcpu->kvm) || !apic_x2apic_mode(apic))
		return 1;

	if (reg == APIC_ICR2)
		return 1;

	/* if this is ICR write vector before command */
	if (reg == APIC_ICR)
		apic_reg_write(apic, APIC_ICR2, (u32)(data >> 32));
	return apic_reg_write(apic, reg, (u32)data);
}

int kvm_x2apic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
{
	struct kvm_lapic *apic = vcpu->arch.apic;
	u32 reg = (msr - APIC_BASE_MSR) << 4, low, high = 0;

	if (!irqchip_in_kernel(vcpu->kvm) || !apic_x2apic_mode(apic))
		return 1;

	if (reg == APIC_DFR || reg == APIC_ICR2) {
		apic_debug("KVM_APIC_READ: read x2apic reserved register %x\n",
			   reg);
		return 1;
	}

	if (apic_reg_read(apic, reg, 4, &low))
		return 1;
	if (reg == APIC_ICR)
		apic_reg_read(apic, APIC_ICR2, 4, &high);

	*data = (((u64)high) << 32) | low;

	return 0;
}

int kvm_hv_vapic_msr_write(struct kvm_vcpu *vcpu, u32 reg, u64 data)
{
	struct kvm_lapic *apic = vcpu->arch.apic;

	if (!kvm_vcpu_has_lapic(vcpu))
		return 1;

	/* if this is ICR write vector before command */
	if (reg == APIC_ICR)
		apic_reg_write(apic, APIC_ICR2, (u32)(data >> 32));
	return apic_reg_write(apic, reg, (u32)data);
}

int kvm_hv_vapic_msr_read(struct kvm_vcpu *vcpu, u32 reg, u64 *data)
{
	struct kvm_lapic *apic = vcpu->arch.apic;
	u32 low, high = 0;

	if (!kvm_vcpu_has_lapic(vcpu))
		return 1;

	if (apic_reg_read(apic, reg, 4, &low))
		return 1;
	if (reg == APIC_ICR)
		apic_reg_read(apic, APIC_ICR2, 4, &high);

	*data = (((u64)high) << 32) | low;

	return 0;
}

int kvm_lapic_enable_pv_eoi(struct kvm_vcpu *vcpu, u64 data)
{
	u64 addr = data & ~KVM_MSR_ENABLED;
	if (!IS_ALIGNED(addr, 4))
		return 1;

	vcpu->arch.pv_eoi.msr_val = data;
	if (!pv_eoi_enabled(vcpu))
		return 0;
	return kvm_gfn_to_hva_cache_init(vcpu->kvm, &vcpu->arch.pv_eoi.data,
					 addr, sizeof(u8));
}

void kvm_apic_accept_events(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic = vcpu->arch.apic;
	u8 sipi_vector;
	unsigned long pe;

	if (!kvm_vcpu_has_lapic(vcpu) || !apic->pending_events)
		return;

	pe = xchg(&apic->pending_events, 0);

	if (test_bit(KVM_APIC_INIT, &pe)) {
		kvm_lapic_reset(vcpu);
		kvm_vcpu_reset(vcpu);
		if (kvm_vcpu_is_bsp(apic->vcpu))
			vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
		else
			vcpu->arch.mp_state = KVM_MP_STATE_INIT_RECEIVED;
	}
	if (test_bit(KVM_APIC_SIPI, &pe) &&
	    vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED) {
		/* evaluate pending_events before reading the vector */
		smp_rmb();
		sipi_vector = apic->sipi_vector;
		apic_debug("vcpu %d received sipi with vector # %x\n",
			   vcpu->vcpu_id, sipi_vector);
		kvm_vcpu_deliver_sipi_vector(vcpu, sipi_vector);
		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
	}
}

void kvm_lapic_init(void)
{
	/* do not patch jump label more than once per second */
	jump_label_rate_limit(&apic_hw_disabled, HZ);
	jump_label_rate_limit(&apic_sw_disabled, HZ);
}