WSL2-Linux-Kernel/virt/kvm
Michal Luczaj afb2acb2e3 KVM: Fix vcpu_array[0] races
In kvm_vm_ioctl_create_vcpu(), add the vcpu to vcpu_array only once it is
safe to access it via kvm_get_vcpu() and kvm_for_each_vcpu(), i.e. once no
failure path can require removing and destroying the vcpu. This ordering
is important because the vcpu_array accessors may end up referencing the
vcpu at vcpu_array[0] even before online_vcpus is set to 1.

When online_vcpus=0, any call to kvm_get_vcpu() goes through
array_index_nospec() and ends with an attempt to xa_load(vcpu_array, 0):

	int num_vcpus = atomic_read(&kvm->online_vcpus);
	/* num_vcpus == 0: array_index_nospec() clamps i to 0... */
	i = array_index_nospec(i, num_vcpus);
	/* ...so this still looks up index 0 of the XArray. */
	return xa_load(&kvm->vcpu_array, i);
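Conceptually, array_index_nospec() forces any index that is not below
size, including every index when size is 0, down to 0. A simplified model
(the helper name here is made up for illustration; the real macro uses a
branchless mask so that it also hardens against speculation):

	static inline unsigned long index_nospec_model(unsigned long index,
						       unsigned long size)
	{
		/* Out-of-bounds indexes, and all indexes when size == 0,
		 * collapse to 0 instead of being rejected. */
		return index < size ? index : 0;
	}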

Similarly, when online_vcpus=0, kvm_for_each_vcpu() does not iterate over
an "empty" range: the end bound (online_vcpus - 1) underflows, so it
actually iterates over [0, ULONG_MAX]:

	xa_for_each_range(&kvm->vcpu_array, idx, vcpup, 0, \
			  (atomic_read(&kvm->online_vcpus) - 1))
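The underflow is plain integer conversion (an illustrative fragment, not
kernel code):

	int online = atomic_read(&kvm->online_vcpus);	/* 0 here */
	/* xa_for_each_range() takes the last index as unsigned long,
	 * so the int value -1 converts to ULONG_MAX: */
	unsigned long last = online - 1;		/* == ULONG_MAX */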

In both cases, this online_vcpus=0 edge case, even though it leads to
unnecessary calls into the XArray API, should not be an issue by itself:
requesting unpopulated indexes/ranges is handled gracefully by xa_load()
and xa_for_each_range().
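In other words, looking up an index that was never stored simply yields
NULL (illustrative fragment):

	struct kvm_vcpu *vcpu;

	/* Nothing was ever stored at index 0, so xa_load() returns
	 * NULL and the caller just sees "no vCPU here". */
	vcpu = xa_load(&kvm->vcpu_array, 0);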

However, this means that once the first vCPU is created and inserted into
vcpu_array, *and* before online_vcpus is incremented, code calling
kvm_get_vcpu()/kvm_for_each_vcpu() already has access to that first vCPU.

This would not pose a problem if a vcpu, once stored in vcpu_array, were
guaranteed to remain there, but that's not the case:
kvm_vm_ioctl_create_vcpu() first inserts into vcpu_array, then requests a
file descriptor. If create_vcpu_fd() fails, the newly inserted vcpu is
removed from vcpu_array, then destroyed:

	vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus);
	/* From here on, kvm_get_vcpu(kvm, 0) can find this vcpu... */
	r = xa_insert(&kvm->vcpu_array, vcpu->vcpu_idx, vcpu, GFP_KERNEL_ACCOUNT);
	kvm_get_kvm(kvm);
	r = create_vcpu_fd(vcpu);
	if (r < 0) {
		/* ...yet here it is yanked back out and destroyed. */
		xa_erase(&kvm->vcpu_array, vcpu->vcpu_idx);
		kvm_put_kvm_no_destroy(kvm);
		goto unlock_vcpu_destroy;
	}
	atomic_inc(&kvm->online_vcpus);
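An illustrative interleaving (timing chosen for clarity; the reader on
the right can be any kvm_get_vcpu()/kvm_for_each_vcpu() caller):

	KVM_CREATE_VCPU ioctl			concurrent reader
	---------------------			-----------------
	xa_insert(&kvm->vcpu_array, 0, ...)
						vcpu = xa_load(&kvm->vcpu_array, 0)
	create_vcpu_fd() fails
	xa_erase(&kvm->vcpu_array, 0)
	vcpu destroyed and freed
	(unlock_vcpu_destroy path)
						/* vcpu now points at freed memory */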

This results in a possible race where a reference to a vcpu is acquired
(via kvm_get_vcpu() or kvm_for_each_vcpu()) moments before said vcpu is
erased and destroyed, leaving the holder with a dangling pointer.
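The race goes away once the vcpu is published only after the last
possible failure path. A simplified sketch of that ordering using the
XArray reserve/store API (error paths trimmed, not the verbatim diff):

	vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus);
	/* Reserve the slot so the later publish cannot fail... */
	r = xa_reserve(&kvm->vcpu_array, vcpu->vcpu_idx, GFP_KERNEL_ACCOUNT);
	if (r)
		goto unlock_vcpu_destroy;

	kvm_get_kvm(kvm);
	r = create_vcpu_fd(vcpu);
	if (r < 0) {
		kvm_put_kvm_no_destroy(kvm);
		/* ...and release it while the vcpu is still invisible. */
		xa_release(&kvm->vcpu_array, vcpu->vcpu_idx);
		goto unlock_vcpu_destroy;
	}

	/* No failure path past this point: publish the vcpu. */
	xa_store(&kvm->vcpu_array, vcpu->vcpu_idx, vcpu, 0);
	atomic_inc(&kvm->online_vcpus);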

Signed-off-by: Michal Luczaj <mhal@rbox.co>
Message-Id: <20230510140410.1093987-2-mhal@rbox.co>
Cc: stable@vger.kernel.org
Fixes: c5b0775491 ("KVM: Convert the kvm->vcpus array to a xarray", 2021-12-08)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-19 13:56:26 -04:00
File              Latest commit                                                           Date
Kconfig           KVM: Opt out of generic hardware enabling on s390 and PPC              2022-12-29 15:48:37 -05:00
Makefile.kvm
async_pf.c
async_pf.h
binary_stats.c
coalesced_mmio.c  KVM: Destroy target device if coalesced MMIO unregistration fails      2023-02-01 11:25:05 -08:00
coalesced_mmio.h
dirty_ring.c      KVM: Support dirty ring in conjunction with bitmap                      2022-11-10 13:11:58 +00:00
eventfd.c         KVM: x86/ioapic: Resample the pending state of an IRQ when unmasking   2023-03-27 10:13:28 -04:00
irqchip.c         KVM: replace direct irq.h inclusion                                     2022-11-09 12:31:37 -05:00
kvm_main.c        KVM: Fix vcpu_array[0] races                                            2023-05-19 13:56:26 -04:00
kvm_mm.h          kvm: Remove the unused macro KVM_MMU_READ_{,UN}LOCK()                   2022-12-27 06:00:51 -05:00
pfncache.c        KVM: Skip unnecessary "unmap" if gpc is already valid during refresh   2022-11-30 19:25:24 +00:00
vfio.c            kvm/vfio: Fix potential deadlock on vfio group_lock                     2023-01-20 08:50:05 -07:00
vfio.h