VFIO update for v5.15-rc1
Merge tag 'vfio-v5.15-rc1' of git://github.com/awilliam/linux-vfio

Pull VFIO updates from Alex Williamson:

 - Fix dma-valid return WAITED implementation (Anthony Yznaga)

 - SPDX license cleanups (Cai Huoqing)

 - Split vfio-pci-core from vfio-pci and enhance PCI driver matching to
   support future vendor provided vfio-pci variants (Yishai Hadas, Max
   Gurtovoy, Jason Gunthorpe)

 - Replace duplicated reflck with core support for managing first open,
   last close, and device sets (Jason Gunthorpe, Max Gurtovoy, Yishai
   Hadas)

 - Fix non-modular mdev support and don't nag about request callback
   support (Christoph Hellwig)

 - Add semaphore to protect instruction intercept handler and replace
   open-coded locks in vfio-ap driver (Tony Krowiak)

 - Convert vfio-ap to vfio_register_group_dev() API (Jason Gunthorpe)

* tag 'vfio-v5.15-rc1' of git://github.com/awilliam/linux-vfio: (37 commits)
  vfio/pci: Introduce vfio_pci_core.ko
  vfio: Use kconfig if XX/endif blocks instead of repeating 'depends on'
  vfio: Use select for eventfd
  PCI / VFIO: Add 'override_only' support for VFIO PCI sub system
  PCI: Add 'override_only' field to struct pci_device_id
  vfio/pci: Move module parameters to vfio_pci.c
  vfio/pci: Move igd initialization to vfio_pci.c
  vfio/pci: Split the pci_driver code out of vfio_pci_core.c
  vfio/pci: Include vfio header in vfio_pci_core.h
  vfio/pci: Rename ops functions to fit core namings
  vfio/pci: Rename vfio_pci_device to vfio_pci_core_device
  vfio/pci: Rename vfio_pci_private.h to vfio_pci_core.h
  vfio/pci: Rename vfio_pci.c to vfio_pci_core.c
  vfio/ap_ops: Convert to use vfio_register_group_dev()
  s390/vfio-ap: replace open coded locks for VFIO_GROUP_NOTIFY_SET_KVM notification
  s390/vfio-ap: r/w lock for PQAP interception handler function pointer
  vfio/type1: Fix vfio_find_dma_valid return
  vfio-pci/zdev: Remove repeated verbose license text
  vfio: platform: reset: Convert to SPDX identifier
  vfio: Remove struct vfio_device_ops open/release
  ...
Commit 89b6b8cd92
@@ -103,6 +103,7 @@ need pass only as many optional fields as necessary:
 - subvendor and subdevice fields default to PCI_ANY_ID (FFFFFFFF)
 - class and classmask fields default to 0
 - driver_data defaults to 0UL.
+- override_only field defaults to 0.
 
 Note that driver_data must match the value used by any of the pci_device_id
 entries defined in the driver. This makes the driver_data field mandatory
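For illustration only, a minimal sketch of a match entry that relies on the defaults listed above; the vendor/device numbers (0x1af4/0x1000) and the table name are placeholders, not something defined by this series:

        /*
         * Only vendor and device are spelled out; subvendor and subdevice fall
         * back to PCI_ANY_ID, class/classmask to 0, driver_data to 0UL and
         * override_only to 0, exactly as described above.
         */
        static const struct pci_device_id example_ids[] = {
                { PCI_DEVICE(0x1af4, 0x1000) },
                { 0, }
        };
        MODULE_DEVICE_TABLE(pci, example_ids);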
@@ -255,11 +255,13 @@ vfio_unregister_group_dev() respectively::
         void vfio_init_group_dev(struct vfio_device *device,
                                 struct device *dev,
                                 const struct vfio_device_ops *ops);
+        void vfio_uninit_group_dev(struct vfio_device *device);
         int vfio_register_group_dev(struct vfio_device *device);
         void vfio_unregister_group_dev(struct vfio_device *device);
 
 The driver should embed the vfio_device in its own structure and call
-vfio_init_group_dev() to pre-configure it before going to registration.
+vfio_init_group_dev() to pre-configure it before going to registration
+and call vfio_uninit_group_dev() after completing the un-registration.
 vfio_register_group_dev() indicates to the core to begin tracking the
 iommu_group of the specified dev and register the dev as owned by a VFIO bus
 driver. Once vfio_register_group_dev() returns it is possible for userspace to
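A minimal sketch of the registration flow documented above, assuming hypothetical names (my_device, my_ops, my_probe, my_remove) that are illustrative rather than part of the API:

        #include <linux/slab.h>
        #include <linux/vfio.h>

        struct my_device {
                struct vfio_device vdev;  /* embedded, as described above */
                /* driver-private state ... */
        };

        static const struct vfio_device_ops my_ops = {
                /* .name, .open_device, .close_device, .read, .write, .ioctl ... */
        };

        static int my_probe(struct device *dev)
        {
                struct my_device *mydev;
                int ret;

                mydev = kzalloc(sizeof(*mydev), GFP_KERNEL);
                if (!mydev)
                        return -ENOMEM;

                /* pre-configure the embedded vfio_device before registration */
                vfio_init_group_dev(&mydev->vdev, dev, &my_ops);

                ret = vfio_register_group_dev(&mydev->vdev);
                if (ret) {
                        vfio_uninit_group_dev(&mydev->vdev);
                        kfree(mydev);
                        return ret;
                }
                dev_set_drvdata(dev, mydev);
                return 0;
        }

        static void my_remove(struct device *dev)
        {
                struct my_device *mydev = dev_get_drvdata(dev);

                vfio_unregister_group_dev(&mydev->vdev);
                vfio_uninit_group_dev(&mydev->vdev);
                kfree(mydev);
        }

The vfio-ap probe/remove conversion later in this diff follows the same pattern.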
@@ -19607,6 +19607,7 @@ T:	git git://github.com/awilliam/linux-vfio.git
 F:	Documentation/driver-api/vfio.rst
 F:	drivers/vfio/
 F:	include/linux/vfio.h
+F:	include/linux/vfio_pci_core.h
 F:	include/uapi/linux/vfio.h
 
 VFIO FSL-MC DRIVER
@@ -798,14 +798,12 @@ struct kvm_s390_cpu_model {
         unsigned short ibc;
 };
 
-struct kvm_s390_module_hook {
-        int (*hook)(struct kvm_vcpu *vcpu);
-        struct module *owner;
-};
+typedef int (*crypto_hook)(struct kvm_vcpu *vcpu);
 
 struct kvm_s390_crypto {
         struct kvm_s390_crypto_cb *crycb;
-        struct kvm_s390_module_hook *pqap_hook;
+        struct rw_semaphore pqap_hook_rwsem;
+        crypto_hook *pqap_hook;
         __u32 crycbd;
         __u8 aes_kw;
         __u8 dea_kw;
@@ -2559,12 +2559,26 @@ static void kvm_s390_set_crycb_format(struct kvm *kvm)
                 kvm->arch.crypto.crycbd |= CRYCB_FORMAT1;
 }
 
+/*
+ * kvm_arch_crypto_set_masks
+ *
+ * @kvm: pointer to the target guest's KVM struct containing the crypto masks
+ *       to be set.
+ * @apm: the mask identifying the accessible AP adapters
+ * @aqm: the mask identifying the accessible AP domains
+ * @adm: the mask identifying the accessible AP control domains
+ *
+ * Set the masks that identify the adapters, domains and control domains to
+ * which the KVM guest is granted access.
+ *
+ * Note: The kvm->lock mutex must be locked by the caller before invoking this
+ *       function.
+ */
 void kvm_arch_crypto_set_masks(struct kvm *kvm, unsigned long *apm,
                                unsigned long *aqm, unsigned long *adm)
 {
         struct kvm_s390_crypto_cb *crycb = kvm->arch.crypto.crycb;
 
-        mutex_lock(&kvm->lock);
         kvm_s390_vcpu_block_all(kvm);
 
         switch (kvm->arch.crypto.crycbd & CRYCB_FORMAT_MASK) {
@@ -2595,13 +2609,23 @@ void kvm_arch_crypto_set_masks(struct kvm *kvm, unsigned long *apm,
         /* recreate the shadow crycb for each vcpu */
         kvm_s390_sync_request_broadcast(kvm, KVM_REQ_VSIE_RESTART);
         kvm_s390_vcpu_unblock_all(kvm);
-        mutex_unlock(&kvm->lock);
 }
 EXPORT_SYMBOL_GPL(kvm_arch_crypto_set_masks);
 
+/*
+ * kvm_arch_crypto_clear_masks
+ *
+ * @kvm: pointer to the target guest's KVM struct containing the crypto masks
+ *       to be cleared.
+ *
+ * Clear the masks that identify the adapters, domains and control domains to
+ * which the KVM guest is granted access.
+ *
+ * Note: The kvm->lock mutex must be locked by the caller before invoking this
+ *       function.
+ */
 void kvm_arch_crypto_clear_masks(struct kvm *kvm)
 {
-        mutex_lock(&kvm->lock);
         kvm_s390_vcpu_block_all(kvm);
 
         memset(&kvm->arch.crypto.crycb->apcb0, 0,
@@ -2613,7 +2637,6 @@ void kvm_arch_crypto_clear_masks(struct kvm *kvm)
         /* recreate the shadow crycb for each vcpu */
         kvm_s390_sync_request_broadcast(kvm, KVM_REQ_VSIE_RESTART);
         kvm_s390_vcpu_unblock_all(kvm);
-        mutex_unlock(&kvm->lock);
 }
 EXPORT_SYMBOL_GPL(kvm_arch_crypto_clear_masks);
 
@@ -2630,6 +2653,7 @@ static void kvm_s390_crypto_init(struct kvm *kvm)
 {
         kvm->arch.crypto.crycb = &kvm->arch.sie_page2->crycb;
         kvm_s390_set_crycb_format(kvm);
+        init_rwsem(&kvm->arch.crypto.pqap_hook_rwsem);
 
         if (!test_kvm_facility(kvm, 76))
                 return;
@@ -610,6 +610,7 @@ static int handle_io_inst(struct kvm_vcpu *vcpu)
 static int handle_pqap(struct kvm_vcpu *vcpu)
 {
         struct ap_queue_status status = {};
+        crypto_hook pqap_hook;
         unsigned long reg0;
         int ret;
         uint8_t fc;
@@ -654,18 +655,20 @@ static int handle_pqap(struct kvm_vcpu *vcpu)
                 return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
 
         /*
-         * Verify that the hook callback is registered, lock the owner
-         * and call the hook.
+         * If the hook callback is registered, there will be a pointer to the
+         * hook function pointer in the kvm_s390_crypto structure. Lock the
+         * owner, retrieve the hook function pointer and call the hook.
          */
+        down_read(&vcpu->kvm->arch.crypto.pqap_hook_rwsem);
         if (vcpu->kvm->arch.crypto.pqap_hook) {
-                if (!try_module_get(vcpu->kvm->arch.crypto.pqap_hook->owner))
-                        return -EOPNOTSUPP;
-                ret = vcpu->kvm->arch.crypto.pqap_hook->hook(vcpu);
-                module_put(vcpu->kvm->arch.crypto.pqap_hook->owner);
+                pqap_hook = *vcpu->kvm->arch.crypto.pqap_hook;
+                ret = pqap_hook(vcpu);
                 if (!ret && vcpu->run->s.regs.gprs[1] & 0x00ff0000)
                         kvm_s390_set_psw_cc(vcpu, 3);
+                up_read(&vcpu->kvm->arch.crypto.pqap_hook_rwsem);
                 return ret;
         }
+        up_read(&vcpu->kvm->arch.crypto.pqap_hook_rwsem);
         /*
          * A vfio_driver must register a hook.
          * No hook means no driver to enable the SIE CRYCB and no queues.
@@ -885,7 +885,7 @@ static int intel_vgpu_group_notifier(struct notifier_block *nb,
         return NOTIFY_OK;
 }
 
-static int intel_vgpu_open(struct mdev_device *mdev)
+static int intel_vgpu_open_device(struct mdev_device *mdev)
 {
         struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
         struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
@@ -1004,7 +1004,7 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
         vgpu->handle = 0;
 }
 
-static void intel_vgpu_release(struct mdev_device *mdev)
+static void intel_vgpu_close_device(struct mdev_device *mdev)
 {
         struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
 
@@ -1753,8 +1753,8 @@ static struct mdev_parent_ops intel_vgpu_ops = {
         .create         = intel_vgpu_create,
         .remove         = intel_vgpu_remove,
 
-        .open           = intel_vgpu_open,
-        .release        = intel_vgpu_release,
+        .open_device    = intel_vgpu_open_device,
+        .close_device   = intel_vgpu_close_device,
 
         .read           = intel_vgpu_read,
         .write          = intel_vgpu_write,
@@ -136,7 +136,7 @@ static const struct pci_device_id *pci_match_device(struct pci_driver *drv,
                                                     struct pci_dev *dev)
 {
         struct pci_dynid *dynid;
-        const struct pci_device_id *found_id = NULL;
+        const struct pci_device_id *found_id = NULL, *ids;
 
         /* When driver_override is set, only bind to the matching driver */
         if (dev->driver_override && strcmp(dev->driver_override, drv->name))
@@ -152,14 +152,28 @@ static const struct pci_device_id *pci_match_device(struct pci_driver *drv,
         }
         spin_unlock(&drv->dynids.lock);
 
-        if (!found_id)
-                found_id = pci_match_id(drv->id_table, dev);
+        if (found_id)
+                return found_id;
 
+        for (ids = drv->id_table; (found_id = pci_match_id(ids, dev));
+             ids = found_id + 1) {
+                /*
+                 * The match table is split based on driver_override.
+                 * In case override_only was set, enforce driver_override
+                 * matching.
+                 */
+                if (found_id->override_only) {
+                        if (dev->driver_override)
+                                return found_id;
+                } else {
+                        return found_id;
+                }
+        }
+
         /* driver_override will always match, send a dummy id */
-        if (!found_id && dev->driver_override)
-                found_id = &pci_device_id_any;
-
-        return found_id;
+        if (dev->driver_override)
+                return &pci_device_id_any;
+        return NULL;
 }
 
 /**
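As a sketch of what the split match table enables (the device IDs, the table name and the exact flag spelling are assumptions based on the 'override_only' commits in this pull, not something shown in this hunk): a vfio-pci variant driver can publish entries that only match once the administrator has explicitly requested this driver.

        /* Hypothetical variant-driver ID table; 0x1234/0x5678 are placeholders. */
        static const struct pci_device_id my_variant_ids[] = {
                { PCI_DEVICE(0x1234, 0x5678),
                  .override_only = PCI_ID_F_VFIO_DRIVER_OVERRIDE },
                {}
        };

With override_only set, the loop above skips such an entry during normal probing and only reports the match when userspace has written this driver's name into the device's driver_override sysfs attribute.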
@@ -159,7 +159,7 @@ static int vfio_ccw_mdev_remove(struct mdev_device *mdev)
         return 0;
 }
 
-static int vfio_ccw_mdev_open(struct mdev_device *mdev)
+static int vfio_ccw_mdev_open_device(struct mdev_device *mdev)
 {
         struct vfio_ccw_private *private =
                 dev_get_drvdata(mdev_parent_dev(mdev));
@@ -194,7 +194,7 @@ out_unregister:
         return ret;
 }
 
-static void vfio_ccw_mdev_release(struct mdev_device *mdev)
+static void vfio_ccw_mdev_close_device(struct mdev_device *mdev)
 {
         struct vfio_ccw_private *private =
                 dev_get_drvdata(mdev_parent_dev(mdev));
@@ -638,8 +638,8 @@ static const struct mdev_parent_ops vfio_ccw_mdev_ops = {
         .supported_type_groups  = mdev_type_groups,
         .create                 = vfio_ccw_mdev_create,
         .remove                 = vfio_ccw_mdev_remove,
-        .open                   = vfio_ccw_mdev_open,
-        .release                = vfio_ccw_mdev_release,
+        .open_device            = vfio_ccw_mdev_open_device,
+        .close_device           = vfio_ccw_mdev_close_device,
         .read                   = vfio_ccw_mdev_read,
         .write                  = vfio_ccw_mdev_write,
         .ioctl                  = vfio_ccw_mdev_ioctl,
@@ -24,8 +24,9 @@
 #define VFIO_AP_MDEV_TYPE_HWVIRT "passthrough"
 #define VFIO_AP_MDEV_NAME_HWVIRT "VFIO AP Passthrough Device"
 
-static int vfio_ap_mdev_reset_queues(struct mdev_device *mdev);
+static int vfio_ap_mdev_reset_queues(struct ap_matrix_mdev *matrix_mdev);
 static struct vfio_ap_queue *vfio_ap_find_queue(int apqn);
+static const struct vfio_device_ops vfio_ap_matrix_dev_ops;
 
 static int match_apqn(struct device *dev, const void *data)
 {
@@ -295,15 +296,6 @@ static int handle_pqap(struct kvm_vcpu *vcpu)
         matrix_mdev = container_of(vcpu->kvm->arch.crypto.pqap_hook,
                                    struct ap_matrix_mdev, pqap_hook);
 
-        /*
-         * If the KVM pointer is in the process of being set, wait until the
-         * process has completed.
-         */
-        wait_event_cmd(matrix_mdev->wait_for_kvm,
-                       !matrix_mdev->kvm_busy,
-                       mutex_unlock(&matrix_dev->lock),
-                       mutex_lock(&matrix_dev->lock));
-
         /* If the there is no guest using the mdev, there is nothing to do */
         if (!matrix_mdev->kvm)
                 goto out_unlock;
@@ -336,45 +328,57 @@ static void vfio_ap_matrix_init(struct ap_config_info *info,
         matrix->adm_max = info->apxa ? info->Nd : 15;
 }
 
-static int vfio_ap_mdev_create(struct mdev_device *mdev)
+static int vfio_ap_mdev_probe(struct mdev_device *mdev)
 {
         struct ap_matrix_mdev *matrix_mdev;
+        int ret;
 
         if ((atomic_dec_if_positive(&matrix_dev->available_instances) < 0))
                 return -EPERM;
 
         matrix_mdev = kzalloc(sizeof(*matrix_mdev), GFP_KERNEL);
         if (!matrix_mdev) {
-                atomic_inc(&matrix_dev->available_instances);
-                return -ENOMEM;
+                ret = -ENOMEM;
+                goto err_dec_available;
         }
+        vfio_init_group_dev(&matrix_mdev->vdev, &mdev->dev,
+                            &vfio_ap_matrix_dev_ops);
 
         matrix_mdev->mdev = mdev;
         vfio_ap_matrix_init(&matrix_dev->info, &matrix_mdev->matrix);
-        init_waitqueue_head(&matrix_mdev->wait_for_kvm);
-        mdev_set_drvdata(mdev, matrix_mdev);
-        matrix_mdev->pqap_hook.hook = handle_pqap;
-        matrix_mdev->pqap_hook.owner = THIS_MODULE;
+        matrix_mdev->pqap_hook = handle_pqap;
         mutex_lock(&matrix_dev->lock);
         list_add(&matrix_mdev->node, &matrix_dev->mdev_list);
         mutex_unlock(&matrix_dev->lock);
 
+        ret = vfio_register_group_dev(&matrix_mdev->vdev);
+        if (ret)
+                goto err_list;
+        dev_set_drvdata(&mdev->dev, matrix_mdev);
         return 0;
 
+err_list:
+        mutex_lock(&matrix_dev->lock);
+        list_del(&matrix_mdev->node);
+        mutex_unlock(&matrix_dev->lock);
+        kfree(matrix_mdev);
+err_dec_available:
+        atomic_inc(&matrix_dev->available_instances);
+        return ret;
 }
 
-static int vfio_ap_mdev_remove(struct mdev_device *mdev)
+static void vfio_ap_mdev_remove(struct mdev_device *mdev)
 {
-        struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
+        struct ap_matrix_mdev *matrix_mdev = dev_get_drvdata(&mdev->dev);
+
+        vfio_unregister_group_dev(&matrix_mdev->vdev);
 
         mutex_lock(&matrix_dev->lock);
-        vfio_ap_mdev_reset_queues(mdev);
+        vfio_ap_mdev_reset_queues(matrix_mdev);
         list_del(&matrix_mdev->node);
         kfree(matrix_mdev);
-        mdev_set_drvdata(mdev, NULL);
         atomic_inc(&matrix_dev->available_instances);
         mutex_unlock(&matrix_dev->lock);
-
-        return 0;
 }
 
 static ssize_t name_show(struct mdev_type *mtype,
@@ -614,16 +618,12 @@ static ssize_t assign_adapter_store(struct device *dev,
 {
         int ret;
         unsigned long apid;
-        struct mdev_device *mdev = mdev_from_dev(dev);
-        struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
+        struct ap_matrix_mdev *matrix_mdev = dev_get_drvdata(dev);
 
         mutex_lock(&matrix_dev->lock);
 
-        /*
-         * If the KVM pointer is in flux or the guest is running, disallow
-         * un-assignment of adapter
-         */
-        if (matrix_mdev->kvm_busy || matrix_mdev->kvm) {
+        /* If the KVM guest is running, disallow assignment of adapter */
+        if (matrix_mdev->kvm) {
                 ret = -EBUSY;
                 goto done;
         }
@@ -685,16 +685,12 @@ static ssize_t unassign_adapter_store(struct device *dev,
 {
         int ret;
         unsigned long apid;
-        struct mdev_device *mdev = mdev_from_dev(dev);
-        struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
+        struct ap_matrix_mdev *matrix_mdev = dev_get_drvdata(dev);
 
         mutex_lock(&matrix_dev->lock);
 
-        /*
-         * If the KVM pointer is in flux or the guest is running, disallow
-         * un-assignment of adapter
-         */
-        if (matrix_mdev->kvm_busy || matrix_mdev->kvm) {
+        /* If the KVM guest is running, disallow unassignment of adapter */
+        if (matrix_mdev->kvm) {
                 ret = -EBUSY;
                 goto done;
         }
@@ -773,17 +769,13 @@ static ssize_t assign_domain_store(struct device *dev,
 {
         int ret;
         unsigned long apqi;
-        struct mdev_device *mdev = mdev_from_dev(dev);
-        struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
+        struct ap_matrix_mdev *matrix_mdev = dev_get_drvdata(dev);
         unsigned long max_apqi = matrix_mdev->matrix.aqm_max;
 
         mutex_lock(&matrix_dev->lock);
 
-        /*
-         * If the KVM pointer is in flux or the guest is running, disallow
-         * assignment of domain
-         */
-        if (matrix_mdev->kvm_busy || matrix_mdev->kvm) {
+        /* If the KVM guest is running, disallow assignment of domain */
+        if (matrix_mdev->kvm) {
                 ret = -EBUSY;
                 goto done;
         }
@@ -840,16 +832,12 @@ static ssize_t unassign_domain_store(struct device *dev,
 {
         int ret;
         unsigned long apqi;
-        struct mdev_device *mdev = mdev_from_dev(dev);
-        struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
+        struct ap_matrix_mdev *matrix_mdev = dev_get_drvdata(dev);
 
         mutex_lock(&matrix_dev->lock);
 
-        /*
-         * If the KVM pointer is in flux or the guest is running, disallow
-         * un-assignment of domain
-         */
-        if (matrix_mdev->kvm_busy || matrix_mdev->kvm) {
+        /* If the KVM guest is running, disallow unassignment of domain */
+        if (matrix_mdev->kvm) {
                 ret = -EBUSY;
                 goto done;
         }
|
@ -893,16 +881,12 @@ static ssize_t assign_control_domain_store(struct device *dev,
|
||||||
{
|
{
|
||||||
int ret;
|
int ret;
|
||||||
unsigned long id;
|
unsigned long id;
|
||||||
struct mdev_device *mdev = mdev_from_dev(dev);
|
struct ap_matrix_mdev *matrix_mdev = dev_get_drvdata(dev);
|
||||||
struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
|
|
||||||
|
|
||||||
mutex_lock(&matrix_dev->lock);
|
mutex_lock(&matrix_dev->lock);
|
||||||
|
|
||||||
/*
|
/* If the KVM guest is running, disallow assignment of control domain */
|
||||||
* If the KVM pointer is in flux or the guest is running, disallow
|
if (matrix_mdev->kvm) {
|
||||||
* assignment of control domain.
|
|
||||||
*/
|
|
||||||
if (matrix_mdev->kvm_busy || matrix_mdev->kvm) {
|
|
||||||
ret = -EBUSY;
|
ret = -EBUSY;
|
||||||
goto done;
|
goto done;
|
||||||
}
|
}
|
||||||
|
@@ -949,17 +933,13 @@ static ssize_t unassign_control_domain_store(struct device *dev,
 {
         int ret;
         unsigned long domid;
-        struct mdev_device *mdev = mdev_from_dev(dev);
-        struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
+        struct ap_matrix_mdev *matrix_mdev = dev_get_drvdata(dev);
         unsigned long max_domid = matrix_mdev->matrix.adm_max;
 
         mutex_lock(&matrix_dev->lock);
 
-        /*
-         * If the KVM pointer is in flux or the guest is running, disallow
-         * un-assignment of control domain.
-         */
-        if (matrix_mdev->kvm_busy || matrix_mdev->kvm) {
+        /* If a KVM guest is running, disallow unassignment of control domain */
+        if (matrix_mdev->kvm) {
                 ret = -EBUSY;
                 goto done;
         }
@@ -988,8 +968,7 @@ static ssize_t control_domains_show(struct device *dev,
         int nchars = 0;
         int n;
         char *bufpos = buf;
-        struct mdev_device *mdev = mdev_from_dev(dev);
-        struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
+        struct ap_matrix_mdev *matrix_mdev = dev_get_drvdata(dev);
         unsigned long max_domid = matrix_mdev->matrix.adm_max;
 
         mutex_lock(&matrix_dev->lock);
@@ -1007,8 +986,7 @@ static DEVICE_ATTR_RO(control_domains);
 static ssize_t matrix_show(struct device *dev, struct device_attribute *attr,
                            char *buf)
 {
-        struct mdev_device *mdev = mdev_from_dev(dev);
-        struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
+        struct ap_matrix_mdev *matrix_mdev = dev_get_drvdata(dev);
         char *bufpos = buf;
         unsigned long apid;
         unsigned long apqi;
@@ -1098,23 +1076,30 @@ static int vfio_ap_mdev_set_kvm(struct ap_matrix_mdev *matrix_mdev,
         struct ap_matrix_mdev *m;
 
         if (kvm->arch.crypto.crycbd) {
+                down_write(&kvm->arch.crypto.pqap_hook_rwsem);
+                kvm->arch.crypto.pqap_hook = &matrix_mdev->pqap_hook;
+                up_write(&kvm->arch.crypto.pqap_hook_rwsem);
+
+                mutex_lock(&kvm->lock);
+                mutex_lock(&matrix_dev->lock);
+
                 list_for_each_entry(m, &matrix_dev->mdev_list, node) {
-                        if (m != matrix_mdev && m->kvm == kvm)
+                        if (m != matrix_mdev && m->kvm == kvm) {
+                                mutex_unlock(&kvm->lock);
+                                mutex_unlock(&matrix_dev->lock);
                                 return -EPERM;
+                        }
                 }
 
                 kvm_get_kvm(kvm);
-                matrix_mdev->kvm_busy = true;
-                mutex_unlock(&matrix_dev->lock);
+                matrix_mdev->kvm = kvm;
                 kvm_arch_crypto_set_masks(kvm,
                                           matrix_mdev->matrix.apm,
                                           matrix_mdev->matrix.aqm,
                                           matrix_mdev->matrix.adm);
-                mutex_lock(&matrix_dev->lock);
-                kvm->arch.crypto.pqap_hook = &matrix_mdev->pqap_hook;
-                matrix_mdev->kvm = kvm;
-                matrix_mdev->kvm_busy = false;
-                wake_up_all(&matrix_mdev->wait_for_kvm);
+
+                mutex_unlock(&kvm->lock);
+                mutex_unlock(&matrix_dev->lock);
         }
 
         return 0;
@@ -1163,28 +1148,24 @@ static int vfio_ap_mdev_iommu_notifier(struct notifier_block *nb,
  * certain circumstances, will result in a circular lock dependency if this is
  * done under the @matrix_mdev->lock.
  */
-static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
+static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev,
+                                   struct kvm *kvm)
 {
-        /*
-         * If the KVM pointer is in the process of being set, wait until the
-         * process has completed.
-         */
-        wait_event_cmd(matrix_mdev->wait_for_kvm,
-                       !matrix_mdev->kvm_busy,
-                       mutex_unlock(&matrix_dev->lock),
-                       mutex_lock(&matrix_dev->lock));
-
-        if (matrix_mdev->kvm) {
-                matrix_mdev->kvm_busy = true;
-                mutex_unlock(&matrix_dev->lock);
-                kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
+        if (kvm && kvm->arch.crypto.crycbd) {
+                down_write(&kvm->arch.crypto.pqap_hook_rwsem);
+                kvm->arch.crypto.pqap_hook = NULL;
+                up_write(&kvm->arch.crypto.pqap_hook_rwsem);
+
+                mutex_lock(&kvm->lock);
                 mutex_lock(&matrix_dev->lock);
-                vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
-                matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
-                kvm_put_kvm(matrix_mdev->kvm);
+
+                kvm_arch_crypto_clear_masks(kvm);
+                vfio_ap_mdev_reset_queues(matrix_mdev);
+                kvm_put_kvm(kvm);
                 matrix_mdev->kvm = NULL;
-                matrix_mdev->kvm_busy = false;
-                wake_up_all(&matrix_mdev->wait_for_kvm);
+
+                mutex_unlock(&kvm->lock);
+                mutex_unlock(&matrix_dev->lock);
         }
 }
 
@@ -1197,16 +1178,13 @@ static int vfio_ap_mdev_group_notifier(struct notifier_block *nb,
         if (action != VFIO_GROUP_NOTIFY_SET_KVM)
                 return NOTIFY_OK;
 
-        mutex_lock(&matrix_dev->lock);
         matrix_mdev = container_of(nb, struct ap_matrix_mdev, group_notifier);
 
         if (!data)
-                vfio_ap_mdev_unset_kvm(matrix_mdev);
+                vfio_ap_mdev_unset_kvm(matrix_mdev, matrix_mdev->kvm);
         else if (vfio_ap_mdev_set_kvm(matrix_mdev, data))
                 notify_rc = NOTIFY_DONE;
 
-        mutex_unlock(&matrix_dev->lock);
-
         return notify_rc;
 }
 
@@ -1276,13 +1254,12 @@ free_resources:
         return ret;
 }
 
-static int vfio_ap_mdev_reset_queues(struct mdev_device *mdev)
+static int vfio_ap_mdev_reset_queues(struct ap_matrix_mdev *matrix_mdev)
 {
         int ret;
         int rc = 0;
         unsigned long apid, apqi;
         struct vfio_ap_queue *q;
-        struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
 
         for_each_set_bit_inv(apid, matrix_mdev->matrix.apm,
                              matrix_mdev->matrix.apm_max + 1) {
|
||||||
return rc;
|
return rc;
|
||||||
}
|
}
|
||||||
|
|
||||||
static int vfio_ap_mdev_open(struct mdev_device *mdev)
|
static int vfio_ap_mdev_open_device(struct vfio_device *vdev)
|
||||||
{
|
{
|
||||||
struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
|
struct ap_matrix_mdev *matrix_mdev =
|
||||||
|
container_of(vdev, struct ap_matrix_mdev, vdev);
|
||||||
unsigned long events;
|
unsigned long events;
|
||||||
int ret;
|
int ret;
|
||||||
|
|
||||||
|
|
||||||
if (!try_module_get(THIS_MODULE))
|
|
||||||
return -ENODEV;
|
|
||||||
|
|
||||||
matrix_mdev->group_notifier.notifier_call = vfio_ap_mdev_group_notifier;
|
matrix_mdev->group_notifier.notifier_call = vfio_ap_mdev_group_notifier;
|
||||||
events = VFIO_GROUP_NOTIFY_SET_KVM;
|
events = VFIO_GROUP_NOTIFY_SET_KVM;
|
||||||
|
|
||||||
ret = vfio_register_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
|
ret = vfio_register_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
|
||||||
&events, &matrix_mdev->group_notifier);
|
&events, &matrix_mdev->group_notifier);
|
||||||
if (ret) {
|
if (ret)
|
||||||
module_put(THIS_MODULE);
|
|
||||||
return ret;
|
return ret;
|
||||||
}
|
|
||||||
|
|
||||||
matrix_mdev->iommu_notifier.notifier_call = vfio_ap_mdev_iommu_notifier;
|
matrix_mdev->iommu_notifier.notifier_call = vfio_ap_mdev_iommu_notifier;
|
||||||
events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
|
events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
|
||||||
ret = vfio_register_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
|
ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
|
||||||
&events, &matrix_mdev->iommu_notifier);
|
&events, &matrix_mdev->iommu_notifier);
|
||||||
if (!ret)
|
if (ret)
|
||||||
return ret;
|
goto out_unregister_group;
|
||||||
|
return 0;
|
||||||
|
|
||||||
vfio_unregister_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
|
out_unregister_group:
|
||||||
|
vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
|
||||||
&matrix_mdev->group_notifier);
|
&matrix_mdev->group_notifier);
|
||||||
module_put(THIS_MODULE);
|
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
|
|
||||||
static void vfio_ap_mdev_release(struct mdev_device *mdev)
|
static void vfio_ap_mdev_close_device(struct vfio_device *vdev)
|
||||||
{
|
{
|
||||||
struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
|
struct ap_matrix_mdev *matrix_mdev =
|
||||||
|
container_of(vdev, struct ap_matrix_mdev, vdev);
|
||||||
|
|
||||||
mutex_lock(&matrix_dev->lock);
|
vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
|
||||||
vfio_ap_mdev_unset_kvm(matrix_mdev);
|
|
||||||
mutex_unlock(&matrix_dev->lock);
|
|
||||||
|
|
||||||
vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
|
|
||||||
&matrix_mdev->iommu_notifier);
|
&matrix_mdev->iommu_notifier);
|
||||||
vfio_unregister_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
|
vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
|
||||||
&matrix_mdev->group_notifier);
|
&matrix_mdev->group_notifier);
|
||||||
module_put(THIS_MODULE);
|
vfio_ap_mdev_unset_kvm(matrix_mdev, matrix_mdev->kvm);
|
||||||
}
|
}
|
||||||
|
|
||||||
static int vfio_ap_mdev_get_device_info(unsigned long arg)
|
static int vfio_ap_mdev_get_device_info(unsigned long arg)
|
||||||
|
@@ -1371,11 +1341,12 @@ static int vfio_ap_mdev_get_device_info(unsigned long arg)
         return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
 }
 
-static ssize_t vfio_ap_mdev_ioctl(struct mdev_device *mdev,
+static ssize_t vfio_ap_mdev_ioctl(struct vfio_device *vdev,
                                     unsigned int cmd, unsigned long arg)
 {
+        struct ap_matrix_mdev *matrix_mdev =
+                container_of(vdev, struct ap_matrix_mdev, vdev);
         int ret;
-        struct ap_matrix_mdev *matrix_mdev;
 
         mutex_lock(&matrix_dev->lock);
         switch (cmd) {
@@ -1383,22 +1354,7 @@ static ssize_t vfio_ap_mdev_ioctl(struct mdev_device *mdev,
                 ret = vfio_ap_mdev_get_device_info(arg);
                 break;
         case VFIO_DEVICE_RESET:
-                matrix_mdev = mdev_get_drvdata(mdev);
-                if (WARN(!matrix_mdev, "Driver data missing from mdev!!")) {
-                        ret = -EINVAL;
-                        break;
-                }
-
-                /*
-                 * If the KVM pointer is in the process of being set, wait until
-                 * the process has completed.
-                 */
-                wait_event_cmd(matrix_mdev->wait_for_kvm,
-                               !matrix_mdev->kvm_busy,
-                               mutex_unlock(&matrix_dev->lock),
-                               mutex_lock(&matrix_dev->lock));
-
-                ret = vfio_ap_mdev_reset_queues(mdev);
+                ret = vfio_ap_mdev_reset_queues(matrix_mdev);
                 break;
         default:
                 ret = -EOPNOTSUPP;
@@ -1409,25 +1365,51 @@ static ssize_t vfio_ap_mdev_ioctl(struct mdev_device *mdev,
         return ret;
 }
 
+static const struct vfio_device_ops vfio_ap_matrix_dev_ops = {
+        .open_device = vfio_ap_mdev_open_device,
+        .close_device = vfio_ap_mdev_close_device,
+        .ioctl = vfio_ap_mdev_ioctl,
+};
+
+static struct mdev_driver vfio_ap_matrix_driver = {
+        .driver = {
+                .name = "vfio_ap_mdev",
+                .owner = THIS_MODULE,
+                .mod_name = KBUILD_MODNAME,
+                .dev_groups = vfio_ap_mdev_attr_groups,
+        },
+        .probe = vfio_ap_mdev_probe,
+        .remove = vfio_ap_mdev_remove,
+};
+
 static const struct mdev_parent_ops vfio_ap_matrix_ops = {
         .owner                  = THIS_MODULE,
+        .device_driver          = &vfio_ap_matrix_driver,
         .supported_type_groups  = vfio_ap_mdev_type_groups,
-        .mdev_attr_groups       = vfio_ap_mdev_attr_groups,
-        .create                 = vfio_ap_mdev_create,
-        .remove                 = vfio_ap_mdev_remove,
-        .open                   = vfio_ap_mdev_open,
-        .release                = vfio_ap_mdev_release,
-        .ioctl                  = vfio_ap_mdev_ioctl,
 };
 
 int vfio_ap_mdev_register(void)
 {
+        int ret;
+
         atomic_set(&matrix_dev->available_instances, MAX_ZDEV_ENTRIES_EXT);
 
-        return mdev_register_device(&matrix_dev->device, &vfio_ap_matrix_ops);
+        ret = mdev_register_driver(&vfio_ap_matrix_driver);
+        if (ret)
+                return ret;
+
+        ret = mdev_register_device(&matrix_dev->device, &vfio_ap_matrix_ops);
+        if (ret)
+                goto err_driver;
+        return 0;
+
+err_driver:
+        mdev_unregister_driver(&vfio_ap_matrix_driver);
+        return ret;
 }
 
 void vfio_ap_mdev_unregister(void)
 {
         mdev_unregister_device(&matrix_dev->device);
+        mdev_unregister_driver(&vfio_ap_matrix_driver);
 }
@@ -18,6 +18,7 @@
 #include <linux/delay.h>
 #include <linux/mutex.h>
 #include <linux/kvm_host.h>
+#include <linux/vfio.h>
 
 #include "ap_bus.h"
 
|
||||||
* @kvm: the struct holding guest's state
|
* @kvm: the struct holding guest's state
|
||||||
*/
|
*/
|
||||||
struct ap_matrix_mdev {
|
struct ap_matrix_mdev {
|
||||||
|
struct vfio_device vdev;
|
||||||
struct list_head node;
|
struct list_head node;
|
||||||
struct ap_matrix matrix;
|
struct ap_matrix matrix;
|
||||||
struct notifier_block group_notifier;
|
struct notifier_block group_notifier;
|
||||||
struct notifier_block iommu_notifier;
|
struct notifier_block iommu_notifier;
|
||||||
bool kvm_busy;
|
|
||||||
wait_queue_head_t wait_for_kvm;
|
|
||||||
struct kvm *kvm;
|
struct kvm *kvm;
|
||||||
struct kvm_s390_module_hook pqap_hook;
|
crypto_hook pqap_hook;
|
||||||
struct mdev_device *mdev;
|
struct mdev_device *mdev;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
|
|
@@ -1,24 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0-only
-config VFIO_IOMMU_TYPE1
-	tristate
-	depends on VFIO
-	default n
-
-config VFIO_IOMMU_SPAPR_TCE
-	tristate
-	depends on VFIO && SPAPR_TCE_IOMMU
-	default VFIO
-
-config VFIO_SPAPR_EEH
-	tristate
-	depends on EEH && VFIO_IOMMU_SPAPR_TCE
-	default VFIO
-
-config VFIO_VIRQFD
-	tristate
-	depends on VFIO && EVENTFD
-	default n
-
 menuconfig VFIO
 	tristate "VFIO Non-Privileged userspace driver framework"
 	select IOMMU_API
@@ -29,9 +9,28 @@ menuconfig VFIO
 
 	  If you don't know what to do here, say N.
 
-menuconfig VFIO_NOIOMMU
+if VFIO
+config VFIO_IOMMU_TYPE1
+	tristate
+	default n
+
+config VFIO_IOMMU_SPAPR_TCE
+	tristate
+	depends on SPAPR_TCE_IOMMU
+	default VFIO
+
+config VFIO_SPAPR_EEH
+	tristate
+	depends on EEH && VFIO_IOMMU_SPAPR_TCE
+	default VFIO
+
+config VFIO_VIRQFD
+	tristate
+	select EVENTFD
+	default n
+
+config VFIO_NOIOMMU
 	bool "VFIO No-IOMMU support"
-	depends on VFIO
 	help
 	  VFIO is built on the ability to isolate devices using the IOMMU.
 	  Only with an IOMMU can userspace access to DMA capable devices be
|
@ -48,4 +47,6 @@ source "drivers/vfio/pci/Kconfig"
|
||||||
source "drivers/vfio/platform/Kconfig"
|
source "drivers/vfio/platform/Kconfig"
|
||||||
source "drivers/vfio/mdev/Kconfig"
|
source "drivers/vfio/mdev/Kconfig"
|
||||||
source "drivers/vfio/fsl-mc/Kconfig"
|
source "drivers/vfio/fsl-mc/Kconfig"
|
||||||
|
endif
|
||||||
|
|
||||||
source "virt/lib/Kconfig"
|
source "virt/lib/Kconfig"
|
||||||
|
|
|
@@ -1,6 +1,7 @@
 config VFIO_FSL_MC
 	tristate "VFIO support for QorIQ DPAA2 fsl-mc bus devices"
-	depends on VFIO && FSL_MC_BUS && EVENTFD
+	depends on FSL_MC_BUS
+	select EVENTFD
 	help
 	  Driver to enable support for the VFIO QorIQ DPAA2 fsl-mc
 	  (Management Complex) devices. This is required to passthrough
@@ -19,81 +19,10 @@
 
 static struct fsl_mc_driver vfio_fsl_mc_driver;
 
-static DEFINE_MUTEX(reflck_lock);
-
-static void vfio_fsl_mc_reflck_get(struct vfio_fsl_mc_reflck *reflck)
-{
-        kref_get(&reflck->kref);
-}
-
-static void vfio_fsl_mc_reflck_release(struct kref *kref)
-{
-        struct vfio_fsl_mc_reflck *reflck = container_of(kref,
-                                                         struct vfio_fsl_mc_reflck,
-                                                         kref);
-
-        mutex_destroy(&reflck->lock);
-        kfree(reflck);
-        mutex_unlock(&reflck_lock);
-}
-
-static void vfio_fsl_mc_reflck_put(struct vfio_fsl_mc_reflck *reflck)
-{
-        kref_put_mutex(&reflck->kref, vfio_fsl_mc_reflck_release, &reflck_lock);
-}
-
-static struct vfio_fsl_mc_reflck *vfio_fsl_mc_reflck_alloc(void)
-{
-        struct vfio_fsl_mc_reflck *reflck;
-
-        reflck = kzalloc(sizeof(*reflck), GFP_KERNEL);
-        if (!reflck)
-                return ERR_PTR(-ENOMEM);
-
-        kref_init(&reflck->kref);
-        mutex_init(&reflck->lock);
-
-        return reflck;
-}
-
-static int vfio_fsl_mc_reflck_attach(struct vfio_fsl_mc_device *vdev)
-{
-        int ret = 0;
-
-        mutex_lock(&reflck_lock);
-        if (is_fsl_mc_bus_dprc(vdev->mc_dev)) {
-                vdev->reflck = vfio_fsl_mc_reflck_alloc();
-                ret = PTR_ERR_OR_ZERO(vdev->reflck);
-        } else {
-                struct device *mc_cont_dev = vdev->mc_dev->dev.parent;
-                struct vfio_device *device;
-                struct vfio_fsl_mc_device *cont_vdev;
-
-                device = vfio_device_get_from_dev(mc_cont_dev);
-                if (!device) {
-                        ret = -ENODEV;
-                        goto unlock;
-                }
-
-                cont_vdev =
-                        container_of(device, struct vfio_fsl_mc_device, vdev);
-                if (!cont_vdev || !cont_vdev->reflck) {
-                        vfio_device_put(device);
-                        ret = -ENODEV;
-                        goto unlock;
-                }
-                vfio_fsl_mc_reflck_get(cont_vdev->reflck);
-                vdev->reflck = cont_vdev->reflck;
-                vfio_device_put(device);
-        }
-
-unlock:
-        mutex_unlock(&reflck_lock);
-        return ret;
-}
-
-static int vfio_fsl_mc_regions_init(struct vfio_fsl_mc_device *vdev)
+static int vfio_fsl_mc_open_device(struct vfio_device *core_vdev)
 {
+        struct vfio_fsl_mc_device *vdev =
+                container_of(core_vdev, struct vfio_fsl_mc_device, vdev);
         struct fsl_mc_device *mc_dev = vdev->mc_dev;
         int count = mc_dev->obj_desc.region_count;
         int i;
@@ -136,58 +65,30 @@ static void vfio_fsl_mc_regions_cleanup(struct vfio_fsl_mc_device *vdev)
         kfree(vdev->regions);
 }
 
-static int vfio_fsl_mc_open(struct vfio_device *core_vdev)
-{
-        struct vfio_fsl_mc_device *vdev =
-                container_of(core_vdev, struct vfio_fsl_mc_device, vdev);
-        int ret = 0;
-
-        mutex_lock(&vdev->reflck->lock);
-        if (!vdev->refcnt) {
-                ret = vfio_fsl_mc_regions_init(vdev);
-                if (ret)
-                        goto out;
-        }
-        vdev->refcnt++;
-out:
-        mutex_unlock(&vdev->reflck->lock);
-
-        return ret;
-}
-
-static void vfio_fsl_mc_release(struct vfio_device *core_vdev)
+static void vfio_fsl_mc_close_device(struct vfio_device *core_vdev)
 {
         struct vfio_fsl_mc_device *vdev =
                 container_of(core_vdev, struct vfio_fsl_mc_device, vdev);
+        struct fsl_mc_device *mc_dev = vdev->mc_dev;
+        struct device *cont_dev = fsl_mc_cont_dev(&mc_dev->dev);
+        struct fsl_mc_device *mc_cont = to_fsl_mc_device(cont_dev);
         int ret;
 
-        mutex_lock(&vdev->reflck->lock);
+        vfio_fsl_mc_regions_cleanup(vdev);
 
-        if (!(--vdev->refcnt)) {
-                struct fsl_mc_device *mc_dev = vdev->mc_dev;
-                struct device *cont_dev = fsl_mc_cont_dev(&mc_dev->dev);
-                struct fsl_mc_device *mc_cont = to_fsl_mc_device(cont_dev);
+        /* reset the device before cleaning up the interrupts */
+        ret = dprc_reset_container(mc_cont->mc_io, 0, mc_cont->mc_handle,
+                                   mc_cont->obj_desc.id,
+                                   DPRC_RESET_OPTION_NON_RECURSIVE);
 
-                vfio_fsl_mc_regions_cleanup(vdev);
+        if (WARN_ON(ret))
+                dev_warn(&mc_cont->dev,
+                         "VFIO_FLS_MC: reset device has failed (%d)\n", ret);
 
-                /* reset the device before cleaning up the interrupts */
-                ret = dprc_reset_container(mc_cont->mc_io, 0,
-                                           mc_cont->mc_handle,
-                                           mc_cont->obj_desc.id,
-                                           DPRC_RESET_OPTION_NON_RECURSIVE);
-
-                if (ret) {
-                        dev_warn(&mc_cont->dev, "VFIO_FLS_MC: reset device has failed (%d)\n",
-                                 ret);
-                        WARN_ON(1);
-                }
-
-                vfio_fsl_mc_irqs_cleanup(vdev);
-
-                fsl_mc_cleanup_irq_pool(mc_cont);
-        }
-
-        mutex_unlock(&vdev->reflck->lock);
+        vfio_fsl_mc_irqs_cleanup(vdev);
+
+        fsl_mc_cleanup_irq_pool(mc_cont);
 }
 
 static long vfio_fsl_mc_ioctl(struct vfio_device *core_vdev,
@@ -504,8 +405,8 @@ static int vfio_fsl_mc_mmap(struct vfio_device *core_vdev,
 
 static const struct vfio_device_ops vfio_fsl_mc_ops = {
         .name           = "vfio-fsl-mc",
-        .open           = vfio_fsl_mc_open,
-        .release        = vfio_fsl_mc_release,
+        .open_device    = vfio_fsl_mc_open_device,
+        .close_device   = vfio_fsl_mc_close_device,
         .ioctl          = vfio_fsl_mc_ioctl,
         .read           = vfio_fsl_mc_read,
         .write          = vfio_fsl_mc_write,
@@ -625,13 +526,16 @@ static int vfio_fsl_mc_probe(struct fsl_mc_device *mc_dev)
         vdev->mc_dev = mc_dev;
         mutex_init(&vdev->igate);
 
-        ret = vfio_fsl_mc_reflck_attach(vdev);
+        if (is_fsl_mc_bus_dprc(mc_dev))
+                ret = vfio_assign_device_set(&vdev->vdev, &mc_dev->dev);
+        else
+                ret = vfio_assign_device_set(&vdev->vdev, mc_dev->dev.parent);
         if (ret)
-                goto out_kfree;
+                goto out_uninit;
 
         ret = vfio_fsl_mc_init_device(vdev);
         if (ret)
-                goto out_reflck;
+                goto out_uninit;
 
         ret = vfio_register_group_dev(&vdev->vdev);
         if (ret) {
@@ -639,12 +543,6 @@ static int vfio_fsl_mc_probe(struct fsl_mc_device *mc_dev)
                 goto out_device;
         }
 
-        /*
-         * This triggers recursion into vfio_fsl_mc_probe() on another device
-         * and the vfio_fsl_mc_reflck_attach() must succeed, which relies on the
-         * vfio_add_group_dev() above. It has no impact on this vdev, so it is
-         * safe to be after the vfio device is made live.
-         */
         ret = vfio_fsl_mc_scan_container(mc_dev);
         if (ret)
                 goto out_group_dev;
@@ -655,9 +553,8 @@ out_group_dev:
         vfio_unregister_group_dev(&vdev->vdev);
 out_device:
         vfio_fsl_uninit_device(vdev);
-out_reflck:
-        vfio_fsl_mc_reflck_put(vdev->reflck);
-out_kfree:
+out_uninit:
+        vfio_uninit_group_dev(&vdev->vdev);
         kfree(vdev);
 out_group_put:
         vfio_iommu_group_put(group, dev);
@@ -674,8 +571,8 @@ static int vfio_fsl_mc_remove(struct fsl_mc_device *mc_dev)
 
         dprc_remove_devices(mc_dev, NULL, 0);
         vfio_fsl_uninit_device(vdev);
-        vfio_fsl_mc_reflck_put(vdev->reflck);
 
+        vfio_uninit_group_dev(&vdev->vdev);
         kfree(vdev);
         vfio_iommu_group_put(mc_dev->dev.iommu_group, dev);
 
@@ -120,7 +120,7 @@ static int vfio_fsl_mc_set_irq_trigger(struct vfio_fsl_mc_device *vdev,
         if (start != 0 || count != 1)
                 return -EINVAL;
 
-        mutex_lock(&vdev->reflck->lock);
+        mutex_lock(&vdev->vdev.dev_set->lock);
         ret = fsl_mc_populate_irq_pool(mc_cont,
                         FSL_MC_IRQ_POOL_MAX_TOTAL_IRQS);
         if (ret)
@@ -129,7 +129,7 @@ static int vfio_fsl_mc_set_irq_trigger(struct vfio_fsl_mc_device *vdev,
         ret = vfio_fsl_mc_irqs_allocate(vdev);
         if (ret)
                 goto unlock;
-        mutex_unlock(&vdev->reflck->lock);
+        mutex_unlock(&vdev->vdev.dev_set->lock);
 
         if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
                 s32 fd = *(s32 *)data;
@@ -154,7 +154,7 @@ static int vfio_fsl_mc_set_irq_trigger(struct vfio_fsl_mc_device *vdev,
         return 0;
 
 unlock:
-        mutex_unlock(&vdev->reflck->lock);
+        mutex_unlock(&vdev->vdev.dev_set->lock);
         return ret;
 
 }
@@ -22,11 +22,6 @@ struct vfio_fsl_mc_irq {
         char *name;
 };
 
-struct vfio_fsl_mc_reflck {
-        struct kref kref;
-        struct mutex lock;
-};
-
 struct vfio_fsl_mc_region {
         u32 flags;
         u32 type;
@ -39,9 +34,7 @@ struct vfio_fsl_mc_device {
|
||||||
struct vfio_device vdev;
|
struct vfio_device vdev;
|
||||||
struct fsl_mc_device *mc_dev;
|
struct fsl_mc_device *mc_dev;
|
||||||
struct notifier_block nb;
|
struct notifier_block nb;
|
||||||
int refcnt;
|
|
||||||
struct vfio_fsl_mc_region *regions;
|
struct vfio_fsl_mc_region *regions;
|
||||||
struct vfio_fsl_mc_reflck *reflck;
|
|
||||||
struct mutex igate;
|
struct mutex igate;
|
||||||
struct vfio_fsl_mc_irq *mc_irqs;
|
struct vfio_fsl_mc_irq *mc_irqs;
|
||||||
};
|
};
|
||||||
|
|
|
@@ -2,7 +2,6 @@
 
 config VFIO_MDEV
 	tristate "Mediated device driver framework"
-	depends on VFIO
 	default n
 	help
 	  Provides a framework to virtualize devices.
 
@@ -138,10 +138,6 @@ int mdev_register_device(struct device *dev, const struct mdev_parent_ops *ops)
 	if (!dev)
 		return -EINVAL;
 
-	/* Not mandatory, but its absence could be a problem */
-	if (!ops->request)
-		dev_info(dev, "Driver cannot be asked to release device\n");
-
 	mutex_lock(&parent_list_lock);
 
 	/* Check for duplicate */
@@ -398,7 +394,7 @@ static void __exit mdev_exit(void)
 	mdev_bus_unregister();
 }
 
-module_init(mdev_init)
+subsys_initcall(mdev_init)
 module_exit(mdev_exit)
 
 MODULE_VERSION(DRIVER_VERSION);
 
@@ -17,24 +17,24 @@
 
 #include "mdev_private.h"
 
-static int vfio_mdev_open(struct vfio_device *core_vdev)
+static int vfio_mdev_open_device(struct vfio_device *core_vdev)
 {
 	struct mdev_device *mdev = to_mdev_device(core_vdev->dev);
 	struct mdev_parent *parent = mdev->type->parent;
 
-	if (unlikely(!parent->ops->open))
-		return -EINVAL;
+	if (unlikely(!parent->ops->open_device))
+		return 0;
 
-	return parent->ops->open(mdev);
+	return parent->ops->open_device(mdev);
 }
 
-static void vfio_mdev_release(struct vfio_device *core_vdev)
+static void vfio_mdev_close_device(struct vfio_device *core_vdev)
 {
 	struct mdev_device *mdev = to_mdev_device(core_vdev->dev);
 	struct mdev_parent *parent = mdev->type->parent;
 
-	if (likely(parent->ops->release))
-		parent->ops->release(mdev);
+	if (likely(parent->ops->close_device))
+		parent->ops->close_device(mdev);
 }
 
 static long vfio_mdev_unlocked_ioctl(struct vfio_device *core_vdev,
@@ -44,7 +44,7 @@ static long vfio_mdev_unlocked_ioctl(struct vfio_device *core_vdev,
 	struct mdev_parent *parent = mdev->type->parent;
 
 	if (unlikely(!parent->ops->ioctl))
-		return -EINVAL;
+		return 0;
 
 	return parent->ops->ioctl(mdev, cmd, arg);
 }
@@ -100,8 +100,8 @@ static void vfio_mdev_request(struct vfio_device *core_vdev, unsigned int count)
 
 static const struct vfio_device_ops vfio_mdev_dev_ops = {
 	.name		= "vfio-mdev",
-	.open		= vfio_mdev_open,
-	.release	= vfio_mdev_release,
+	.open_device	= vfio_mdev_open_device,
+	.close_device	= vfio_mdev_close_device,
 	.ioctl		= vfio_mdev_unlocked_ioctl,
 	.read		= vfio_mdev_read,
 	.write		= vfio_mdev_write,
@@ -120,12 +120,16 @@ static int vfio_mdev_probe(struct mdev_device *mdev)
 
 	vfio_init_group_dev(vdev, &mdev->dev, &vfio_mdev_dev_ops);
 	ret = vfio_register_group_dev(vdev);
-	if (ret) {
-		kfree(vdev);
-		return ret;
-	}
+	if (ret)
+		goto out_uninit;
+
 	dev_set_drvdata(&mdev->dev, vdev);
 	return 0;
 
+out_uninit:
+	vfio_uninit_group_dev(vdev);
+	kfree(vdev);
+	return ret;
 }
 
 static void vfio_mdev_remove(struct mdev_device *mdev)
@@ -133,6 +137,7 @@ static void vfio_mdev_remove(struct mdev_device *mdev)
 	struct vfio_device *vdev = dev_get_drvdata(&mdev->dev);
 
 	vfio_unregister_group_dev(vdev);
+	vfio_uninit_group_dev(vdev);
 	kfree(vdev);
 }
 
@@ -1,19 +1,29 @@
 # SPDX-License-Identifier: GPL-2.0-only
-config VFIO_PCI
-	tristate "VFIO support for PCI devices"
-	depends on VFIO && PCI && EVENTFD
-	depends on MMU
+if PCI && MMU
+config VFIO_PCI_CORE
+	tristate
 	select VFIO_VIRQFD
 	select IRQ_BYPASS_MANAGER
 
+config VFIO_PCI_MMAP
+	def_bool y if !S390
+
+config VFIO_PCI_INTX
+	def_bool y if !S390
+
+config VFIO_PCI
+	tristate "Generic VFIO support for any PCI device"
+	select VFIO_PCI_CORE
 	help
-	  Support for the PCI VFIO bus driver. This is required to make
-	  use of PCI drivers using the VFIO framework.
+	  Support for the generic PCI VFIO bus driver which can connect any
+	  PCI device to the VFIO framework.
 
 	  If you don't know what to do here, say N.
 
+if VFIO_PCI
 config VFIO_PCI_VGA
-	bool "VFIO PCI support for VGA devices"
-	depends on VFIO_PCI && X86 && VGA_ARB
+	bool "Generic VFIO PCI support for VGA devices"
+	depends on X86 && VGA_ARB
 	help
 	  Support for VGA extension to VFIO PCI. This exposes an additional
 	  region on VGA devices for accessing legacy VGA addresses used by
@@ -21,17 +31,9 @@ config VFIO_PCI_VGA
 
 	  If you don't know what to do here, say N.
 
-config VFIO_PCI_MMAP
-	depends on VFIO_PCI
-	def_bool y if !S390
-
-config VFIO_PCI_INTX
-	depends on VFIO_PCI
-	def_bool y if !S390
-
 config VFIO_PCI_IGD
-	bool "VFIO PCI extensions for Intel graphics (GVT-d)"
-	depends on VFIO_PCI && X86
+	bool "Generic VFIO PCI extensions for Intel graphics (GVT-d)"
+	depends on X86
 	default y
 	help
 	  Support for Intel IGD specific extensions to enable direct
@@ -40,3 +42,5 @@ config VFIO_PCI_IGD
 	  and LPC bridge config space.
 
 	  To enable Intel IGD assignment through vfio-pci, say Y.
+endif
+endif
 
@@ -1,7 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0-only
 
-vfio-pci-y := vfio_pci.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
-vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
-vfio-pci-$(CONFIG_S390) += vfio_pci_zdev.o
+vfio-pci-core-y := vfio_pci_core.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
+vfio-pci-core-$(CONFIG_S390) += vfio_pci_zdev.o
+obj-$(CONFIG_VFIO_PCI_CORE) += vfio-pci-core.o
 
+vfio-pci-y := vfio_pci.o
+vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
 obj-$(CONFIG_VFIO_PCI) += vfio-pci.o
 
(diff for this file not shown because of its size)

@@ -26,7 +26,7 @@
 #include <linux/vfio.h>
 #include <linux/slab.h>
 
-#include "vfio_pci_private.h"
+#include <linux/vfio_pci_core.h>
 
 /* Fake capability ID for standard config space */
 #define PCI_CAP_ID_BASIC	0
@@ -108,9 +108,9 @@ static const u16 pci_ext_cap_length[PCI_EXT_CAP_ID_MAX + 1] = {
 struct perm_bits {
 	u8	*virt;		/* read/write virtual data, not hw */
 	u8	*write;		/* writeable bits */
-	int	(*readfn)(struct vfio_pci_device *vdev, int pos, int count,
+	int	(*readfn)(struct vfio_pci_core_device *vdev, int pos, int count,
 			  struct perm_bits *perm, int offset, __le32 *val);
-	int	(*writefn)(struct vfio_pci_device *vdev, int pos, int count,
+	int	(*writefn)(struct vfio_pci_core_device *vdev, int pos, int count,
 			   struct perm_bits *perm, int offset, __le32 val);
 };
 
@@ -171,7 +171,7 @@ static int vfio_user_config_write(struct pci_dev *pdev, int offset,
 	return ret;
 }
 
-static int vfio_default_config_read(struct vfio_pci_device *vdev, int pos,
+static int vfio_default_config_read(struct vfio_pci_core_device *vdev, int pos,
 				    int count, struct perm_bits *perm,
 				    int offset, __le32 *val)
 {
@@ -197,7 +197,7 @@ static int vfio_default_config_read(struct vfio_pci_device *vdev, int pos,
 	return count;
 }
 
-static int vfio_default_config_write(struct vfio_pci_device *vdev, int pos,
+static int vfio_default_config_write(struct vfio_pci_core_device *vdev, int pos,
 				     int count, struct perm_bits *perm,
 				     int offset, __le32 val)
 {
@@ -244,7 +244,7 @@ static int vfio_default_config_write(struct vfio_pci_device *vdev, int pos,
 }
 
 /* Allow direct read from hardware, except for capability next pointer */
-static int vfio_direct_config_read(struct vfio_pci_device *vdev, int pos,
+static int vfio_direct_config_read(struct vfio_pci_core_device *vdev, int pos,
 				   int count, struct perm_bits *perm,
 				   int offset, __le32 *val)
 {
@@ -269,7 +269,7 @@ static int vfio_direct_config_read(struct vfio_pci_device *vdev, int pos,
 }
 
 /* Raw access skips any kind of virtualization */
-static int vfio_raw_config_write(struct vfio_pci_device *vdev, int pos,
+static int vfio_raw_config_write(struct vfio_pci_core_device *vdev, int pos,
 				 int count, struct perm_bits *perm,
 				 int offset, __le32 val)
 {
@@ -282,7 +282,7 @@ static int vfio_raw_config_write(struct vfio_pci_device *vdev, int pos,
 	return count;
 }
 
-static int vfio_raw_config_read(struct vfio_pci_device *vdev, int pos,
+static int vfio_raw_config_read(struct vfio_pci_core_device *vdev, int pos,
 				int count, struct perm_bits *perm,
 				int offset, __le32 *val)
 {
@@ -296,7 +296,7 @@ static int vfio_raw_config_read(struct vfio_pci_device *vdev, int pos,
 }
 
 /* Virt access uses only virtualization */
-static int vfio_virt_config_write(struct vfio_pci_device *vdev, int pos,
+static int vfio_virt_config_write(struct vfio_pci_core_device *vdev, int pos,
 				  int count, struct perm_bits *perm,
 				  int offset, __le32 val)
 {
@@ -304,7 +304,7 @@ static int vfio_virt_config_write(struct vfio_pci_device *vdev, int pos,
 	return count;
 }
 
-static int vfio_virt_config_read(struct vfio_pci_device *vdev, int pos,
+static int vfio_virt_config_read(struct vfio_pci_core_device *vdev, int pos,
 				 int count, struct perm_bits *perm,
 				 int offset, __le32 *val)
 {
@@ -396,7 +396,7 @@ static inline void p_setd(struct perm_bits *p, int off, u32 virt, u32 write)
 }
 
 /* Caller should hold memory_lock semaphore */
-bool __vfio_pci_memory_enabled(struct vfio_pci_device *vdev)
+bool __vfio_pci_memory_enabled(struct vfio_pci_core_device *vdev)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	u16 cmd = le16_to_cpu(*(__le16 *)&vdev->vconfig[PCI_COMMAND]);
@@ -413,7 +413,7 @@ bool __vfio_pci_memory_enabled(struct vfio_pci_device *vdev)
  * Restore the *real* BARs after we detect a FLR or backdoor reset.
  * (backdoor = some device specific technique that we didn't catch)
  */
-static void vfio_bar_restore(struct vfio_pci_device *vdev)
+static void vfio_bar_restore(struct vfio_pci_core_device *vdev)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	u32 *rbar = vdev->rbar;
@@ -460,7 +460,7 @@ static __le32 vfio_generate_bar_flags(struct pci_dev *pdev, int bar)
  * Pretend we're hardware and tweak the values of the *virtual* PCI BARs
  * to reflect the hardware capabilities. This implements BAR sizing.
  */
-static void vfio_bar_fixup(struct vfio_pci_device *vdev)
+static void vfio_bar_fixup(struct vfio_pci_core_device *vdev)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	int i;
@@ -514,7 +514,7 @@ static void vfio_bar_fixup(struct vfio_pci_device *vdev)
 	vdev->bardirty = false;
 }
 
-static int vfio_basic_config_read(struct vfio_pci_device *vdev, int pos,
+static int vfio_basic_config_read(struct vfio_pci_core_device *vdev, int pos,
 				  int count, struct perm_bits *perm,
 				  int offset, __le32 *val)
 {
@@ -536,7 +536,7 @@ static int vfio_basic_config_read(struct vfio_pci_device *vdev, int pos,
 }
 
 /* Test whether BARs match the value we think they should contain */
-static bool vfio_need_bar_restore(struct vfio_pci_device *vdev)
+static bool vfio_need_bar_restore(struct vfio_pci_core_device *vdev)
 {
 	int i = 0, pos = PCI_BASE_ADDRESS_0, ret;
 	u32 bar;
@@ -552,7 +552,7 @@ static bool vfio_need_bar_restore(struct vfio_pci_device *vdev)
 	return false;
 }
 
-static int vfio_basic_config_write(struct vfio_pci_device *vdev, int pos,
+static int vfio_basic_config_write(struct vfio_pci_core_device *vdev, int pos,
 				   int count, struct perm_bits *perm,
 				   int offset, __le32 val)
 {
@@ -692,7 +692,7 @@ static int __init init_pci_cap_basic_perm(struct perm_bits *perm)
 	return 0;
 }
 
-static int vfio_pm_config_write(struct vfio_pci_device *vdev, int pos,
+static int vfio_pm_config_write(struct vfio_pci_core_device *vdev, int pos,
 				int count, struct perm_bits *perm,
 				int offset, __le32 val)
 {
@@ -747,7 +747,7 @@ static int __init init_pci_cap_pm_perm(struct perm_bits *perm)
 	return 0;
 }
 
-static int vfio_vpd_config_write(struct vfio_pci_device *vdev, int pos,
+static int vfio_vpd_config_write(struct vfio_pci_core_device *vdev, int pos,
 				 int count, struct perm_bits *perm,
 				 int offset, __le32 val)
 {
@@ -829,7 +829,7 @@ static int __init init_pci_cap_pcix_perm(struct perm_bits *perm)
 	return 0;
 }
 
-static int vfio_exp_config_write(struct vfio_pci_device *vdev, int pos,
+static int vfio_exp_config_write(struct vfio_pci_core_device *vdev, int pos,
 				 int count, struct perm_bits *perm,
 				 int offset, __le32 val)
 {
@@ -913,7 +913,7 @@ static int __init init_pci_cap_exp_perm(struct perm_bits *perm)
 	return 0;
 }
 
-static int vfio_af_config_write(struct vfio_pci_device *vdev, int pos,
+static int vfio_af_config_write(struct vfio_pci_core_device *vdev, int pos,
 				int count, struct perm_bits *perm,
 				int offset, __le32 val)
 {
@@ -1072,7 +1072,7 @@ int __init vfio_pci_init_perm_bits(void)
 	return ret;
 }
 
-static int vfio_find_cap_start(struct vfio_pci_device *vdev, int pos)
+static int vfio_find_cap_start(struct vfio_pci_core_device *vdev, int pos)
 {
 	u8 cap;
 	int base = (pos >= PCI_CFG_SPACE_SIZE) ? PCI_CFG_SPACE_SIZE :
@@ -1089,7 +1089,7 @@ static int vfio_find_cap_start(struct vfio_pci_device *vdev, int pos)
 	return pos;
 }
 
-static int vfio_msi_config_read(struct vfio_pci_device *vdev, int pos,
+static int vfio_msi_config_read(struct vfio_pci_core_device *vdev, int pos,
 				int count, struct perm_bits *perm,
 				int offset, __le32 *val)
 {
@@ -1109,7 +1109,7 @@ static int vfio_msi_config_read(struct vfio_pci_device *vdev, int pos,
 	return vfio_default_config_read(vdev, pos, count, perm, offset, val);
 }
 
-static int vfio_msi_config_write(struct vfio_pci_device *vdev, int pos,
+static int vfio_msi_config_write(struct vfio_pci_core_device *vdev, int pos,
 				 int count, struct perm_bits *perm,
 				 int offset, __le32 val)
 {
@@ -1189,7 +1189,7 @@ static int init_pci_cap_msi_perm(struct perm_bits *perm, int len, u16 flags)
 }
 
 /* Determine MSI CAP field length; initialize msi_perms on 1st call per vdev */
-static int vfio_msi_cap_len(struct vfio_pci_device *vdev, u8 pos)
+static int vfio_msi_cap_len(struct vfio_pci_core_device *vdev, u8 pos)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	int len, ret;
@@ -1222,7 +1222,7 @@ static int vfio_msi_cap_len(struct vfio_pci_device *vdev, u8 pos)
 }
 
 /* Determine extended capability length for VC (2 & 9) and MFVC */
-static int vfio_vc_cap_len(struct vfio_pci_device *vdev, u16 pos)
+static int vfio_vc_cap_len(struct vfio_pci_core_device *vdev, u16 pos)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	u32 tmp;
@@ -1263,7 +1263,7 @@ static int vfio_vc_cap_len(struct vfio_pci_device *vdev, u16 pos)
 	return len;
 }
 
-static int vfio_cap_len(struct vfio_pci_device *vdev, u8 cap, u8 pos)
+static int vfio_cap_len(struct vfio_pci_core_device *vdev, u8 cap, u8 pos)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	u32 dword;
@@ -1338,7 +1338,7 @@ static int vfio_cap_len(struct vfio_pci_device *vdev, u8 cap, u8 pos)
 	return 0;
 }
 
-static int vfio_ext_cap_len(struct vfio_pci_device *vdev, u16 ecap, u16 epos)
+static int vfio_ext_cap_len(struct vfio_pci_core_device *vdev, u16 ecap, u16 epos)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	u8 byte;
@@ -1412,7 +1412,7 @@ static int vfio_ext_cap_len(struct vfio_pci_device *vdev, u16 ecap, u16 epos)
 	return 0;
 }
 
-static int vfio_fill_vconfig_bytes(struct vfio_pci_device *vdev,
+static int vfio_fill_vconfig_bytes(struct vfio_pci_core_device *vdev,
 				   int offset, int size)
 {
 	struct pci_dev *pdev = vdev->pdev;
@@ -1459,7 +1459,7 @@ static int vfio_fill_vconfig_bytes(struct vfio_pci_device *vdev,
 	return ret;
 }
 
-static int vfio_cap_init(struct vfio_pci_device *vdev)
+static int vfio_cap_init(struct vfio_pci_core_device *vdev)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	u8 *map = vdev->pci_config_map;
@@ -1549,7 +1549,7 @@ static int vfio_cap_init(struct vfio_pci_device *vdev)
 	return 0;
 }
 
-static int vfio_ecap_init(struct vfio_pci_device *vdev)
+static int vfio_ecap_init(struct vfio_pci_core_device *vdev)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	u8 *map = vdev->pci_config_map;
@@ -1669,7 +1669,7 @@ static const struct pci_device_id known_bogus_vf_intx_pin[] = {
  * for each area requiring emulated bits, but the array of pointers
 * would be comparable in size (at least for standard config space).
 */
-int vfio_config_init(struct vfio_pci_device *vdev)
+int vfio_config_init(struct vfio_pci_core_device *vdev)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	u8 *map, *vconfig;
@@ -1773,7 +1773,7 @@ out:
 	return pcibios_err_to_errno(ret);
 }
 
-void vfio_config_free(struct vfio_pci_device *vdev)
+void vfio_config_free(struct vfio_pci_core_device *vdev)
 {
 	kfree(vdev->vconfig);
 	vdev->vconfig = NULL;
@@ -1790,7 +1790,7 @@ void vfio_config_free(struct vfio_pci_device *vdev)
 * Find the remaining number of bytes in a dword that match the given
 * position. Stop at either the end of the capability or the dword boundary.
 */
-static size_t vfio_pci_cap_remaining_dword(struct vfio_pci_device *vdev,
+static size_t vfio_pci_cap_remaining_dword(struct vfio_pci_core_device *vdev,
 					   loff_t pos)
 {
 	u8 cap = vdev->pci_config_map[pos];
@@ -1802,7 +1802,7 @@ static size_t vfio_pci_cap_remaining_dword(struct vfio_pci_device *vdev,
 	return i;
 }
 
-static ssize_t vfio_config_do_rw(struct vfio_pci_device *vdev, char __user *buf,
+static ssize_t vfio_config_do_rw(struct vfio_pci_core_device *vdev, char __user *buf,
 				 size_t count, loff_t *ppos, bool iswrite)
 {
 	struct pci_dev *pdev = vdev->pdev;
@@ -1885,7 +1885,7 @@ static ssize_t vfio_config_do_rw(struct vfio_pci_device *vdev, char __user *buf,
 	return ret;
 }
 
-ssize_t vfio_pci_config_rw(struct vfio_pci_device *vdev, char __user *buf,
+ssize_t vfio_pci_config_rw(struct vfio_pci_core_device *vdev, char __user *buf,
 			   size_t count, loff_t *ppos, bool iswrite)
 {
 	size_t done = 0;
 
(diff for this file not shown because of its size)

@@ -15,7 +15,7 @@
 #include <linux/uaccess.h>
 #include <linux/vfio.h>
 
-#include "vfio_pci_private.h"
+#include <linux/vfio_pci_core.h>
 
 #define OPREGION_SIGNATURE	"IntelGraphicsMem"
 #define OPREGION_SIZE		(8 * 1024)
@@ -25,8 +25,9 @@
 #define OPREGION_RVDS		0x3c2
 #define OPREGION_VERSION	0x16
 
-static size_t vfio_pci_igd_rw(struct vfio_pci_device *vdev, char __user *buf,
-			      size_t count, loff_t *ppos, bool iswrite)
+static ssize_t vfio_pci_igd_rw(struct vfio_pci_core_device *vdev,
+			       char __user *buf, size_t count, loff_t *ppos,
+			       bool iswrite)
 {
 	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
 	void *base = vdev->region[i].data;
@@ -45,7 +46,7 @@ static size_t vfio_pci_igd_rw(struct vfio_pci_device *vdev, char __user *buf,
 	return count;
 }
 
-static void vfio_pci_igd_release(struct vfio_pci_device *vdev,
+static void vfio_pci_igd_release(struct vfio_pci_core_device *vdev,
 				 struct vfio_pci_region *region)
 {
 	memunmap(region->data);
@@ -56,7 +57,7 @@ static const struct vfio_pci_regops vfio_pci_igd_regops = {
 	.release	= vfio_pci_igd_release,
 };
 
-static int vfio_pci_igd_opregion_init(struct vfio_pci_device *vdev)
+static int vfio_pci_igd_opregion_init(struct vfio_pci_core_device *vdev)
 {
 	__le32 *dwordp = (__le32 *)(vdev->vconfig + OPREGION_PCI_ADDR);
 	u32 addr, size;
@@ -160,9 +161,9 @@ static int vfio_pci_igd_opregion_init(struct vfio_pci_device *vdev)
 	return ret;
 }
 
-static size_t vfio_pci_igd_cfg_rw(struct vfio_pci_device *vdev,
+static ssize_t vfio_pci_igd_cfg_rw(struct vfio_pci_core_device *vdev,
 				  char __user *buf, size_t count, loff_t *ppos,
 				  bool iswrite)
 {
 	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
 	struct pci_dev *pdev = vdev->region[i].data;
@@ -253,7 +254,7 @@ static size_t vfio_pci_igd_cfg_rw(struct vfio_pci_device *vdev,
 	return count;
 }
 
-static void vfio_pci_igd_cfg_release(struct vfio_pci_device *vdev,
+static void vfio_pci_igd_cfg_release(struct vfio_pci_core_device *vdev,
 				     struct vfio_pci_region *region)
 {
 	struct pci_dev *pdev = region->data;
@@ -266,7 +267,7 @@ static const struct vfio_pci_regops vfio_pci_igd_cfg_regops = {
 	.release	= vfio_pci_igd_cfg_release,
 };
 
-static int vfio_pci_igd_cfg_init(struct vfio_pci_device *vdev)
+static int vfio_pci_igd_cfg_init(struct vfio_pci_core_device *vdev)
 {
 	struct pci_dev *host_bridge, *lpc_bridge;
 	int ret;
@@ -314,7 +315,7 @@ static int vfio_pci_igd_cfg_init(struct vfio_pci_device *vdev)
 	return 0;
 }
 
-int vfio_pci_igd_init(struct vfio_pci_device *vdev)
+int vfio_pci_igd_init(struct vfio_pci_core_device *vdev)
 {
 	int ret;
 
@@ -20,20 +20,20 @@
 #include <linux/wait.h>
 #include <linux/slab.h>
 
-#include "vfio_pci_private.h"
+#include <linux/vfio_pci_core.h>
 
 /*
  * INTx
  */
 static void vfio_send_intx_eventfd(void *opaque, void *unused)
 {
-	struct vfio_pci_device *vdev = opaque;
+	struct vfio_pci_core_device *vdev = opaque;
 
 	if (likely(is_intx(vdev) && !vdev->virq_disabled))
 		eventfd_signal(vdev->ctx[0].trigger, 1);
 }
 
-void vfio_pci_intx_mask(struct vfio_pci_device *vdev)
+void vfio_pci_intx_mask(struct vfio_pci_core_device *vdev)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	unsigned long flags;
@@ -73,7 +73,7 @@ void vfio_pci_intx_mask(struct vfio_pci_device *vdev)
 */
 static int vfio_pci_intx_unmask_handler(void *opaque, void *unused)
 {
-	struct vfio_pci_device *vdev = opaque;
+	struct vfio_pci_core_device *vdev = opaque;
 	struct pci_dev *pdev = vdev->pdev;
 	unsigned long flags;
 	int ret = 0;
@@ -107,7 +107,7 @@ static int vfio_pci_intx_unmask_handler(void *opaque, void *unused)
 	return ret;
 }
 
-void vfio_pci_intx_unmask(struct vfio_pci_device *vdev)
+void vfio_pci_intx_unmask(struct vfio_pci_core_device *vdev)
 {
 	if (vfio_pci_intx_unmask_handler(vdev, NULL) > 0)
 		vfio_send_intx_eventfd(vdev, NULL);
@@ -115,7 +115,7 @@ void vfio_pci_intx_unmask(struct vfio_pci_device *vdev)
 
 static irqreturn_t vfio_intx_handler(int irq, void *dev_id)
 {
-	struct vfio_pci_device *vdev = dev_id;
+	struct vfio_pci_core_device *vdev = dev_id;
 	unsigned long flags;
 	int ret = IRQ_NONE;
 
@@ -139,7 +139,7 @@ static irqreturn_t vfio_intx_handler(int irq, void *dev_id)
 	return ret;
 }
 
-static int vfio_intx_enable(struct vfio_pci_device *vdev)
+static int vfio_intx_enable(struct vfio_pci_core_device *vdev)
 {
 	if (!is_irq_none(vdev))
 		return -EINVAL;
@@ -168,7 +168,7 @@ static int vfio_intx_enable(struct vfio_pci_device *vdev)
 	return 0;
 }
 
-static int vfio_intx_set_signal(struct vfio_pci_device *vdev, int fd)
+static int vfio_intx_set_signal(struct vfio_pci_core_device *vdev, int fd)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	unsigned long irqflags = IRQF_SHARED;
@@ -223,7 +223,7 @@ static int vfio_intx_set_signal(struct vfio_pci_device *vdev, int fd)
 	return 0;
 }
 
-static void vfio_intx_disable(struct vfio_pci_device *vdev)
+static void vfio_intx_disable(struct vfio_pci_core_device *vdev)
 {
 	vfio_virqfd_disable(&vdev->ctx[0].unmask);
 	vfio_virqfd_disable(&vdev->ctx[0].mask);
@@ -244,7 +244,7 @@ static irqreturn_t vfio_msihandler(int irq, void *arg)
 	return IRQ_HANDLED;
 }
 
-static int vfio_msi_enable(struct vfio_pci_device *vdev, int nvec, bool msix)
+static int vfio_msi_enable(struct vfio_pci_core_device *vdev, int nvec, bool msix)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	unsigned int flag = msix ? PCI_IRQ_MSIX : PCI_IRQ_MSI;
@@ -285,7 +285,7 @@ static int vfio_msi_enable(struct vfio_pci_device *vdev, int nvec, bool msix)
 	return 0;
 }
 
-static int vfio_msi_set_vector_signal(struct vfio_pci_device *vdev,
+static int vfio_msi_set_vector_signal(struct vfio_pci_core_device *vdev,
 				      int vector, int fd, bool msix)
 {
 	struct pci_dev *pdev = vdev->pdev;
@@ -364,7 +364,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_device *vdev,
 	return 0;
 }
 
-static int vfio_msi_set_block(struct vfio_pci_device *vdev, unsigned start,
+static int vfio_msi_set_block(struct vfio_pci_core_device *vdev, unsigned start,
 			      unsigned count, int32_t *fds, bool msix)
 {
 	int i, j, ret = 0;
@@ -385,7 +385,7 @@ static int vfio_msi_set_block(struct vfio_pci_device *vdev, unsigned start,
 	return ret;
 }
 
-static void vfio_msi_disable(struct vfio_pci_device *vdev, bool msix)
+static void vfio_msi_disable(struct vfio_pci_core_device *vdev, bool msix)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	int i;
@@ -417,7 +417,7 @@ static void vfio_msi_disable(struct vfio_pci_device *vdev, bool msix)
 /*
  * IOCTL support
  */
-static int vfio_pci_set_intx_unmask(struct vfio_pci_device *vdev,
+static int vfio_pci_set_intx_unmask(struct vfio_pci_core_device *vdev,
 				    unsigned index, unsigned start,
 				    unsigned count, uint32_t flags, void *data)
 {
@@ -444,7 +444,7 @@ static int vfio_pci_set_intx_unmask(struct vfio_pci_device *vdev,
 	return 0;
 }
 
-static int vfio_pci_set_intx_mask(struct vfio_pci_device *vdev,
+static int vfio_pci_set_intx_mask(struct vfio_pci_core_device *vdev,
 				  unsigned index, unsigned start,
 				  unsigned count, uint32_t flags, void *data)
 {
@@ -464,7 +464,7 @@ static int vfio_pci_set_intx_mask(struct vfio_pci_device *vdev,
 	return 0;
 }
 
-static int vfio_pci_set_intx_trigger(struct vfio_pci_device *vdev,
+static int vfio_pci_set_intx_trigger(struct vfio_pci_core_device *vdev,
 				     unsigned index, unsigned start,
 				     unsigned count, uint32_t flags, void *data)
 {
@@ -507,7 +507,7 @@ static int vfio_pci_set_intx_trigger(struct vfio_pci_device *vdev,
 	return 0;
 }
 
-static int vfio_pci_set_msi_trigger(struct vfio_pci_device *vdev,
+static int vfio_pci_set_msi_trigger(struct vfio_pci_core_device *vdev,
 				    unsigned index, unsigned start,
 				    unsigned count, uint32_t flags, void *data)
 {
@@ -613,7 +613,7 @@ static int vfio_pci_set_ctx_trigger_single(struct eventfd_ctx **ctx,
 	return -EINVAL;
 }
 
-static int vfio_pci_set_err_trigger(struct vfio_pci_device *vdev,
+static int vfio_pci_set_err_trigger(struct vfio_pci_core_device *vdev,
 				    unsigned index, unsigned start,
 				    unsigned count, uint32_t flags, void *data)
 {
@@ -624,7 +624,7 @@ static int vfio_pci_set_err_trigger(struct vfio_pci_device *vdev,
 				       count, flags, data);
 }
 
-static int vfio_pci_set_req_trigger(struct vfio_pci_device *vdev,
+static int vfio_pci_set_req_trigger(struct vfio_pci_core_device *vdev,
 				    unsigned index, unsigned start,
 				    unsigned count, uint32_t flags, void *data)
 {
@@ -635,11 +635,11 @@ static int vfio_pci_set_req_trigger(struct vfio_pci_device *vdev,
 				       count, flags, data);
 }
 
-int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev, uint32_t flags,
+int vfio_pci_set_irqs_ioctl(struct vfio_pci_core_device *vdev, uint32_t flags,
 			    unsigned index, unsigned start, unsigned count,
 			    void *data)
 {
-	int (*func)(struct vfio_pci_device *vdev, unsigned index,
+	int (*func)(struct vfio_pci_core_device *vdev, unsigned index,
 		    unsigned start, unsigned count, uint32_t flags,
 		    void *data) = NULL;
 
@@ -17,7 +17,7 @@
 #include <linux/vfio.h>
 #include <linux/vgaarb.h>
 
-#include "vfio_pci_private.h"
+#include <linux/vfio_pci_core.h>
 
 #ifdef __LITTLE_ENDIAN
 #define vfio_ioread64	ioread64
@@ -38,7 +38,7 @@
 #define vfio_iowrite8	iowrite8
 
 #define VFIO_IOWRITE(size) \
-static int vfio_pci_iowrite##size(struct vfio_pci_device *vdev,	\
+static int vfio_pci_iowrite##size(struct vfio_pci_core_device *vdev,	\
 			bool test_mem, u##size val, void __iomem *io)	\
 {									\
 	if (test_mem) {							\
@@ -65,7 +65,7 @@ VFIO_IOWRITE(64)
 #endif
 
 #define VFIO_IOREAD(size) \
-static int vfio_pci_ioread##size(struct vfio_pci_device *vdev,		\
+static int vfio_pci_ioread##size(struct vfio_pci_core_device *vdev,	\
 			bool test_mem, u##size *val, void __iomem *io)	\
 {									\
 	if (test_mem) {							\
@@ -94,7 +94,7 @@ VFIO_IOREAD(32)
 * reads with -1. This is intended for handling MSI-X vector tables and
 * leftover space for ROM BARs.
 */
-static ssize_t do_io_rw(struct vfio_pci_device *vdev, bool test_mem,
+static ssize_t do_io_rw(struct vfio_pci_core_device *vdev, bool test_mem,
 			void __iomem *io, char __user *buf,
 			loff_t off, size_t count, size_t x_start,
 			size_t x_end, bool iswrite)
@@ -200,7 +200,7 @@ static ssize_t do_io_rw(struct vfio_pci_device *vdev, bool test_mem,
 	return done;
 }
 
-static int vfio_pci_setup_barmap(struct vfio_pci_device *vdev, int bar)
+static int vfio_pci_setup_barmap(struct vfio_pci_core_device *vdev, int bar)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	int ret;
@@ -224,7 +224,7 @@ static int vfio_pci_setup_barmap(struct vfio_pci_device *vdev, int bar)
 	return 0;
 }
 
-ssize_t vfio_pci_bar_rw(struct vfio_pci_device *vdev, char __user *buf,
+ssize_t vfio_pci_bar_rw(struct vfio_pci_core_device *vdev, char __user *buf,
 			size_t count, loff_t *ppos, bool iswrite)
 {
 	struct pci_dev *pdev = vdev->pdev;
@@ -288,7 +288,7 @@ out:
 	return done;
 }
 
-ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
+ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev, char __user *buf,
 			size_t count, loff_t *ppos, bool iswrite)
 {
 	int ret;
@@ -384,7 +384,7 @@ static void vfio_pci_ioeventfd_do_write(struct vfio_pci_ioeventfd *ioeventfd,
 static int vfio_pci_ioeventfd_handler(void *opaque, void *unused)
 {
 	struct vfio_pci_ioeventfd *ioeventfd = opaque;
-	struct vfio_pci_device *vdev = ioeventfd->vdev;
+	struct vfio_pci_core_device *vdev = ioeventfd->vdev;
 
 	if (ioeventfd->test_mem) {
 		if (!down_read_trylock(&vdev->memory_lock))
@@ -410,7 +410,7 @@ static void vfio_pci_ioeventfd_thread(void *opaque, void *unused)
 	vfio_pci_ioeventfd_do_write(ioeventfd, ioeventfd->test_mem);
 }
 
-long vfio_pci_ioeventfd(struct vfio_pci_device *vdev, loff_t offset,
+long vfio_pci_ioeventfd(struct vfio_pci_core_device *vdev, loff_t offset,
 			uint64_t data, int count, int fd)
 {
 	struct pci_dev *pdev = vdev->pdev;
 
@@ -1,15 +1,10 @@
-// SPDX-License-Identifier: GPL-2.0+
+// SPDX-License-Identifier: GPL-2.0-only
 /*
  * VFIO ZPCI devices support
  *
  * Copyright (C) IBM Corp. 2020. All rights reserved.
  *	Author(s): Pierre Morel <pmorel@linux.ibm.com>
  *	           Matthew Rosato <mjrosato@linux.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
 */
 #include <linux/io.h>
 #include <linux/pci.h>
@@ -19,7 +14,7 @@
 #include <asm/pci_clp.h>
 #include <asm/pci_io.h>
 
-#include "vfio_pci_private.h"
+#include <linux/vfio_pci_core.h>
 
 /*
  * Add the Base PCI Function information to the device info region.
@@ -114,7 +109,7 @@ static int zpci_pfip_cap(struct zpci_dev *zdev, struct vfio_info_cap *caps)
 /*
  * Add all supported capabilities to the VFIO_DEVICE_GET_INFO capability chain.
  */
-int vfio_pci_info_zdev_add_caps(struct vfio_pci_device *vdev,
+int vfio_pci_info_zdev_add_caps(struct vfio_pci_core_device *vdev,
 				struct vfio_info_cap *caps)
 {
 	struct zpci_dev *zdev = to_zpci(vdev->pdev);
 
|
||||||
# SPDX-License-Identifier: GPL-2.0-only
|
# SPDX-License-Identifier: GPL-2.0-only
|
||||||
config VFIO_PLATFORM
|
config VFIO_PLATFORM
|
||||||
tristate "VFIO support for platform devices"
|
tristate "VFIO support for platform devices"
|
||||||
depends on VFIO && EVENTFD && (ARM || ARM64 || COMPILE_TEST)
|
depends on ARM || ARM64 || COMPILE_TEST
|
||||||
select VFIO_VIRQFD
|
select VFIO_VIRQFD
|
||||||
help
|
help
|
||||||
Support for platform devices with VFIO. This is required to make
|
Support for platform devices with VFIO. This is required to make
|
||||||
|
@ -10,9 +10,10 @@ config VFIO_PLATFORM
|
||||||
|
|
||||||
If you don't know what to do here, say N.
|
If you don't know what to do here, say N.
|
||||||
|
|
||||||
|
if VFIO_PLATFORM
|
||||||
config VFIO_AMBA
|
config VFIO_AMBA
|
||||||
tristate "VFIO support for AMBA devices"
|
tristate "VFIO support for AMBA devices"
|
||||||
depends on VFIO_PLATFORM && (ARM_AMBA || COMPILE_TEST)
|
depends on ARM_AMBA || COMPILE_TEST
|
||||||
help
|
help
|
||||||
Support for ARM AMBA devices with VFIO. This is required to make
|
Support for ARM AMBA devices with VFIO. This is required to make
|
||||||
use of ARM AMBA devices present on the system using the VFIO
|
use of ARM AMBA devices present on the system using the VFIO
|
||||||
|
@ -21,3 +22,4 @@ config VFIO_AMBA
|
||||||
If you don't know what to do here, say N.
|
If you don't know what to do here, say N.
|
||||||
|
|
||||||
source "drivers/vfio/platform/reset/Kconfig"
|
source "drivers/vfio/platform/reset/Kconfig"
|
||||||
|
endif
|
||||||
|
|
|
@@ -1,7 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0-only
 config VFIO_PLATFORM_CALXEDAXGMAC_RESET
 	tristate "VFIO support for calxeda xgmac reset"
-	depends on VFIO_PLATFORM
 	help
 	  Enables the VFIO platform driver to handle reset for Calxeda xgmac
 
@@ -9,7 +8,6 @@ config VFIO_PLATFORM_CALXEDAXGMAC_RESET
 
 config VFIO_PLATFORM_AMDXGBE_RESET
 	tristate "VFIO support for AMD XGBE reset"
-	depends on VFIO_PLATFORM
 	help
 	  Enables the VFIO platform driver to handle reset for AMD XGBE
 
@@ -17,7 +15,7 @@ config VFIO_PLATFORM_AMDXGBE_RESET
 
 config VFIO_PLATFORM_BCMFLEXRM_RESET
 	tristate "VFIO support for Broadcom FlexRM reset"
-	depends on VFIO_PLATFORM && (ARCH_BCM_IPROC || COMPILE_TEST)
+	depends on ARCH_BCM_IPROC || COMPILE_TEST
 	default ARCH_BCM_IPROC
 	help
 	  Enables the VFIO platform driver to handle reset for Broadcom FlexRM
 
@@ -1,14 +1,6 @@
+// SPDX-License-Identifier: GPL-2.0
 /*
  * Copyright (C) 2017 Broadcom
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation version 2.
- *
- * This program is distributed "as is" WITHOUT ANY WARRANTY of any
- * kind, whether express or implied; without even the implied warranty
- * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
 */
 
 /*
 
@ -218,65 +218,52 @@ static int vfio_platform_call_reset(struct vfio_platform_device *vdev,
|
||||||
return -EINVAL;
|
return -EINVAL;
|
||||||
}
|
}
|
||||||
|
|
||||||
static void vfio_platform_release(struct vfio_device *core_vdev)
|
static void vfio_platform_close_device(struct vfio_device *core_vdev)
|
||||||
{
|
|
||||||
struct vfio_platform_device *vdev =
|
|
||||||
container_of(core_vdev, struct vfio_platform_device, vdev);
|
|
||||||
|
|
||||||
mutex_lock(&driver_lock);
|
|
||||||
|
|
||||||
if (!(--vdev->refcnt)) {
|
|
||||||
const char *extra_dbg = NULL;
|
|
||||||
int ret;
|
|
||||||
|
|
||||||
ret = vfio_platform_call_reset(vdev, &extra_dbg);
|
|
||||||
if (ret && vdev->reset_required) {
|
|
||||||
dev_warn(vdev->device, "reset driver is required and reset call failed in release (%d) %s\n",
|
|
||||||
ret, extra_dbg ? extra_dbg : "");
|
|
||||||
WARN_ON(1);
|
|
||||||
}
|
|
||||||
pm_runtime_put(vdev->device);
|
|
||||||
vfio_platform_regions_cleanup(vdev);
|
|
||||||
vfio_platform_irq_cleanup(vdev);
|
|
||||||
}
|
|
||||||
|
|
||||||
mutex_unlock(&driver_lock);
|
|
||||||
}
|
|
||||||
|
|
||||||
static int vfio_platform_open(struct vfio_device *core_vdev)
|
|
||||||
{
|
{
|
||||||
struct vfio_platform_device *vdev =
|
struct vfio_platform_device *vdev =
|
||||||
container_of(core_vdev, struct vfio_platform_device, vdev);
|
container_of(core_vdev, struct vfio_platform_device, vdev);
|
||||||
|
const char *extra_dbg = NULL;
|
||||||
int ret;
|
int ret;
|
-	mutex_lock(&driver_lock);
-	if (!vdev->refcnt) {
-		const char *extra_dbg = NULL;
-		ret = vfio_platform_regions_init(vdev);
-		if (ret)
-			goto err_reg;
-
-		ret = vfio_platform_irq_init(vdev);
-		if (ret)
-			goto err_irq;
-
-		ret = pm_runtime_get_sync(vdev->device);
-		if (ret < 0)
-			goto err_rst;
-
-		ret = vfio_platform_call_reset(vdev, &extra_dbg);
-		if (ret && vdev->reset_required) {
-			dev_warn(vdev->device, "reset driver is required and reset call failed in open (%d) %s\n",
-				 ret, extra_dbg ? extra_dbg : "");
-			goto err_rst;
-		}
+	ret = vfio_platform_call_reset(vdev, &extra_dbg);
+	if (WARN_ON(ret && vdev->reset_required)) {
+		dev_warn(
+			vdev->device,
+			"reset driver is required and reset call failed in release (%d) %s\n",
+			ret, extra_dbg ? extra_dbg : "");
 	}
+	pm_runtime_put(vdev->device);
+	vfio_platform_regions_cleanup(vdev);
+	vfio_platform_irq_cleanup(vdev);
+}
 
-	vdev->refcnt++;
+static int vfio_platform_open_device(struct vfio_device *core_vdev)
+{
+	struct vfio_platform_device *vdev =
+		container_of(core_vdev, struct vfio_platform_device, vdev);
+	const char *extra_dbg = NULL;
+	int ret;
 
-	mutex_unlock(&driver_lock);
+	ret = vfio_platform_regions_init(vdev);
+	if (ret)
+		return ret;
+
+	ret = vfio_platform_irq_init(vdev);
+	if (ret)
+		goto err_irq;
+
+	ret = pm_runtime_get_sync(vdev->device);
+	if (ret < 0)
+		goto err_rst;
+
+	ret = vfio_platform_call_reset(vdev, &extra_dbg);
+	if (ret && vdev->reset_required) {
+		dev_warn(
+			vdev->device,
+			"reset driver is required and reset call failed in open (%d) %s\n",
+			ret, extra_dbg ? extra_dbg : "");
+		goto err_rst;
+	}
 	return 0;
 
 err_rst:
@@ -284,8 +271,6 @@ err_rst:
 	vfio_platform_irq_cleanup(vdev);
 err_irq:
 	vfio_platform_regions_cleanup(vdev);
-err_reg:
-	mutex_unlock(&driver_lock);
 	return ret;
 }

@@ -616,8 +601,8 @@ static int vfio_platform_mmap(struct vfio_device *core_vdev, struct vm_area_stru
 
 static const struct vfio_device_ops vfio_platform_ops = {
 	.name = "vfio-platform",
-	.open = vfio_platform_open,
-	.release = vfio_platform_release,
+	.open_device = vfio_platform_open_device,
+	.close_device = vfio_platform_close_device,
 	.ioctl = vfio_platform_ioctl,
 	.read = vfio_platform_read,
 	.write = vfio_platform_write,

@@ -667,7 +652,7 @@ int vfio_platform_probe_common(struct vfio_platform_device *vdev,
 	ret = vfio_platform_of_probe(vdev, dev);
 
 	if (ret)
-		return ret;
+		goto out_uninit;
 
 	vdev->device = dev;
 
@@ -675,7 +660,7 @@ int vfio_platform_probe_common(struct vfio_platform_device *vdev,
 	if (ret && vdev->reset_required) {
 		dev_err(dev, "No reset function found for device %s\n",
 			vdev->name);
-		return ret;
+		goto out_uninit;
 	}
 
 	group = vfio_iommu_group_get(dev);

@@ -698,6 +683,8 @@ put_iommu:
 	vfio_iommu_group_put(group, dev);
 put_reset:
 	vfio_platform_put_reset(vdev);
+out_uninit:
+	vfio_uninit_group_dev(&vdev->vdev);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(vfio_platform_probe_common);

@@ -708,6 +695,7 @@ void vfio_platform_remove_common(struct vfio_platform_device *vdev)
 
 	pm_runtime_disable(vdev->device);
 	vfio_platform_put_reset(vdev);
+	vfio_uninit_group_dev(&vdev->vdev);
 	vfio_iommu_group_put(vdev->vdev.dev->iommu_group, vdev->vdev.dev);
 }
 EXPORT_SYMBOL_GPL(vfio_platform_remove_common);

@@ -48,7 +48,6 @@ struct vfio_platform_device {
 	u32 num_regions;
 	struct vfio_platform_irq *irqs;
 	u32 num_irqs;
-	int refcnt;
 	struct mutex igate;
 	const char *compat;
 	const char *acpihid;

@@ -96,6 +96,79 @@ module_param_named(enable_unsafe_noiommu_mode,
 MODULE_PARM_DESC(enable_unsafe_noiommu_mode, "Enable UNSAFE, no-IOMMU mode. This mode provides no device isolation, no DMA translation, no host kernel protection, cannot be used for device assignment to virtual machines, requires RAWIO permissions, and will taint the kernel. If you do not know what this is for, step away. (default: false)");
 #endif
 
+static DEFINE_XARRAY(vfio_device_set_xa);
+
+int vfio_assign_device_set(struct vfio_device *device, void *set_id)
+{
+	unsigned long idx = (unsigned long)set_id;
+	struct vfio_device_set *new_dev_set;
+	struct vfio_device_set *dev_set;
+
+	if (WARN_ON(!set_id))
+		return -EINVAL;
+
+	/*
+	 * Atomically acquire a singleton object in the xarray for this set_id
+	 */
+	xa_lock(&vfio_device_set_xa);
+	dev_set = xa_load(&vfio_device_set_xa, idx);
+	if (dev_set)
+		goto found_get_ref;
+	xa_unlock(&vfio_device_set_xa);
+
+	new_dev_set = kzalloc(sizeof(*new_dev_set), GFP_KERNEL);
+	if (!new_dev_set)
+		return -ENOMEM;
+	mutex_init(&new_dev_set->lock);
+	INIT_LIST_HEAD(&new_dev_set->device_list);
+	new_dev_set->set_id = set_id;
+
+	xa_lock(&vfio_device_set_xa);
+	dev_set = __xa_cmpxchg(&vfio_device_set_xa, idx, NULL, new_dev_set,
+			       GFP_KERNEL);
+	if (!dev_set) {
+		dev_set = new_dev_set;
+		goto found_get_ref;
+	}
+
+	kfree(new_dev_set);
+	if (xa_is_err(dev_set)) {
+		xa_unlock(&vfio_device_set_xa);
+		return xa_err(dev_set);
+	}
+
+found_get_ref:
+	dev_set->device_count++;
+	xa_unlock(&vfio_device_set_xa);
+	mutex_lock(&dev_set->lock);
+	device->dev_set = dev_set;
+	list_add_tail(&device->dev_set_list, &dev_set->device_list);
+	mutex_unlock(&dev_set->lock);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(vfio_assign_device_set);
+
+static void vfio_release_device_set(struct vfio_device *device)
+{
+	struct vfio_device_set *dev_set = device->dev_set;
+
+	if (!dev_set)
+		return;
+
+	mutex_lock(&dev_set->lock);
+	list_del(&device->dev_set_list);
+	mutex_unlock(&dev_set->lock);
+
+	xa_lock(&vfio_device_set_xa);
+	if (!--dev_set->device_count) {
+		__xa_erase(&vfio_device_set_xa,
+			   (unsigned long)dev_set->set_id);
+		mutex_destroy(&dev_set->lock);
+		kfree(dev_set);
+	}
+	xa_unlock(&vfio_device_set_xa);
+}
+
 /*
  * vfio_iommu_group_{get,put} are only intended for VFIO bus driver probe
  * and remove functions, any use cases other than acquiring the first

@@ -749,12 +822,25 @@ void vfio_init_group_dev(struct vfio_device *device, struct device *dev,
 }
 EXPORT_SYMBOL_GPL(vfio_init_group_dev);
 
+void vfio_uninit_group_dev(struct vfio_device *device)
+{
+	vfio_release_device_set(device);
+}
+EXPORT_SYMBOL_GPL(vfio_uninit_group_dev);
+
 int vfio_register_group_dev(struct vfio_device *device)
 {
 	struct vfio_device *existing_device;
 	struct iommu_group *iommu_group;
 	struct vfio_group *group;
 
+	/*
+	 * If the driver doesn't specify a set then the device is added to a
+	 * singleton set just for itself.
+	 */
+	if (!device->dev_set)
+		vfio_assign_device_set(device, device);
+
 	iommu_group = iommu_group_get(device->dev);
 	if (!iommu_group)
 		return -EINVAL;

@@ -1356,7 +1442,8 @@ static int vfio_group_get_device_fd(struct vfio_group *group, char *buf)
 {
 	struct vfio_device *device;
 	struct file *filep;
-	int ret;
+	int fdno;
+	int ret = 0;
 
 	if (0 == atomic_read(&group->container_users) ||
 	    !group->container->iommu_driver || !vfio_group_viable(group))

@@ -1370,38 +1457,32 @@ static int vfio_group_get_device_fd(struct vfio_group *group, char *buf)
 		return PTR_ERR(device);
 
 	if (!try_module_get(device->dev->driver->owner)) {
-		vfio_device_put(device);
-		return -ENODEV;
+		ret = -ENODEV;
+		goto err_device_put;
 	}
 
-	ret = device->ops->open(device);
-	if (ret) {
-		module_put(device->dev->driver->owner);
-		vfio_device_put(device);
-		return ret;
+	mutex_lock(&device->dev_set->lock);
+	device->open_count++;
+	if (device->open_count == 1 && device->ops->open_device) {
+		ret = device->ops->open_device(device);
+		if (ret)
+			goto err_undo_count;
 	}
+	mutex_unlock(&device->dev_set->lock);
 
 	/*
 	 * We can't use anon_inode_getfd() because we need to modify
 	 * the f_mode flags directly to allow more than just ioctls
 	 */
-	ret = get_unused_fd_flags(O_CLOEXEC);
-	if (ret < 0) {
-		device->ops->release(device);
-		module_put(device->dev->driver->owner);
-		vfio_device_put(device);
-		return ret;
-	}
+	fdno = ret = get_unused_fd_flags(O_CLOEXEC);
+	if (ret < 0)
+		goto err_close_device;
 
 	filep = anon_inode_getfile("[vfio-device]", &vfio_device_fops,
 				   device, O_RDWR);
 	if (IS_ERR(filep)) {
-		put_unused_fd(ret);
 		ret = PTR_ERR(filep);
-		device->ops->release(device);
-		module_put(device->dev->driver->owner);
-		vfio_device_put(device);
-		return ret;
+		goto err_fd;
 	}
 
 	/*

@@ -1413,12 +1494,25 @@ static int vfio_group_get_device_fd(struct vfio_group *group, char *buf)
 
 	atomic_inc(&group->container_users);
 
-	fd_install(ret, filep);
+	fd_install(fdno, filep);
 
 	if (group->noiommu)
 		dev_warn(device->dev, "vfio-noiommu device opened by user "
 			 "(%s:%d)\n", current->comm, task_pid_nr(current));
+	return fdno;
+
+err_fd:
+	put_unused_fd(fdno);
+err_close_device:
+	mutex_lock(&device->dev_set->lock);
+	if (device->open_count == 1 && device->ops->close_device)
+		device->ops->close_device(device);
+err_undo_count:
+	device->open_count--;
+	mutex_unlock(&device->dev_set->lock);
+	module_put(device->dev->driver->owner);
+err_device_put:
+	vfio_device_put(device);
 	return ret;
 }
 
@@ -1556,7 +1650,10 @@ static int vfio_device_fops_release(struct inode *inode, struct file *filep)
 {
 	struct vfio_device *device = filep->private_data;
 
-	device->ops->release(device);
+	mutex_lock(&device->dev_set->lock);
+	if (!--device->open_count && device->ops->close_device)
+		device->ops->close_device(device);
+	mutex_unlock(&device->dev_set->lock);
 
 	module_put(device->dev->driver->owner);
 
@@ -2359,6 +2456,7 @@ static void __exit vfio_cleanup(void)
 	class_destroy(vfio.class);
 	vfio.class = NULL;
 	misc_deregister(&vfio_dev);
+	xa_destroy(&vfio_device_set_xa);
 }
 
 module_init(vfio_init);

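A minimal usage sketch of the device-set API added above (not part of the patch; all my_-prefixed names are hypothetical): every device registered with the same set_id pointer lands in one vfio_device_set, so the core holds one lock around open_device()/close_device() for the whole set.

struct my_vfio_dev {
	struct vfio_device vdev;
};

static const struct vfio_device_ops my_vfio_ops;	/* hypothetical ops table */

static int my_vfio_probe(struct device *dev, void *shared_hw)
{
	struct my_vfio_dev *mydev;
	int ret;

	mydev = kzalloc(sizeof(*mydev), GFP_KERNEL);
	if (!mydev)
		return -ENOMEM;

	vfio_init_group_dev(&mydev->vdev, dev, &my_vfio_ops);

	/* Any stable pointer shared by related devices works as set_id. */
	ret = vfio_assign_device_set(&mydev->vdev, shared_hw);
	if (ret)
		goto err_uninit;

	ret = vfio_register_group_dev(&mydev->vdev);
	if (ret)
		goto err_uninit;

	dev_set_drvdata(dev, mydev);
	return 0;

err_uninit:
	vfio_uninit_group_dev(&mydev->vdev);
	kfree(mydev);
	return ret;
}
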
@@ -612,17 +612,17 @@ static int vfio_wait(struct vfio_iommu *iommu)
 static int vfio_find_dma_valid(struct vfio_iommu *iommu, dma_addr_t start,
 			       size_t size, struct vfio_dma **dma_p)
 {
-	int ret;
+	int ret = 0;
 
 	do {
 		*dma_p = vfio_find_dma(iommu, start, size);
 		if (!*dma_p)
-			ret = -EINVAL;
+			return -EINVAL;
 		else if (!(*dma_p)->vaddr_invalid)
-			ret = 0;
+			return ret;
 		else
 			ret = vfio_wait(iommu);
-	} while (ret > 0);
+	} while (ret == WAITED);
 
 	return ret;
 }

@@ -72,11 +72,6 @@ struct device *mtype_get_parent_dev(struct mdev_type *mtype);
  *		@mdev: mdev_device device structure which is being
  *		destroyed
  *		Returns integer: success (0) or error (< 0)
- * @open:	Open mediated device.
- *		@mdev: mediated device.
- *		Returns integer: success (0) or error (< 0)
- * @release:	release mediated device
- *		@mdev: mediated device.
  * @read:	Read emulation callback
  *		@mdev: mediated device structure
  *		@buf: read buffer

@@ -111,8 +106,8 @@ struct mdev_parent_ops {
 
 	int     (*create)(struct mdev_device *mdev);
 	int     (*remove)(struct mdev_device *mdev);
-	int     (*open)(struct mdev_device *mdev);
-	void    (*release)(struct mdev_device *mdev);
+	int     (*open_device)(struct mdev_device *mdev);
+	void    (*close_device)(struct mdev_device *mdev);
 	ssize_t (*read)(struct mdev_device *mdev, char __user *buf,
 			size_t count, loff_t *ppos);
 	ssize_t (*write)(struct mdev_device *mdev, const char __user *buf,

@@ -16,6 +16,10 @@ typedef unsigned long kernel_ulong_t;
 
 #define PCI_ANY_ID (~0)
 
+enum {
+	PCI_ID_F_VFIO_DRIVER_OVERRIDE = 1,
+};
+
 /**
  * struct pci_device_id - PCI device ID structure
  * @vendor: Vendor ID to match (or PCI_ANY_ID)

@@ -34,12 +38,14 @@ typedef unsigned long kernel_ulong_t;
  *	Best practice is to use driver_data as an index
  *	into a static list of equivalent device types,
  *	instead of using it as a pointer.
+ * @override_only: Match only when dev->driver_override is this driver.
  */
 struct pci_device_id {
 	__u32 vendor, device;		/* Vendor and device ID or PCI_ANY_ID*/
 	__u32 subvendor, subdevice;	/* Subsystem ID's or PCI_ANY_ID */
 	__u32 class, class_mask;	/* (class,subclass,prog-if) triplet */
 	kernel_ulong_t driver_data;	/* Data private to the driver */
+	__u32 override_only;
 };

@@ -901,6 +901,35 @@ struct pci_driver {
 	.vendor = (vend), .device = (dev), \
 	.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID
 
+/**
+ * PCI_DEVICE_DRIVER_OVERRIDE - macro used to describe a PCI device with
+ *                              override_only flags.
+ * @vend: the 16 bit PCI Vendor ID
+ * @dev: the 16 bit PCI Device ID
+ * @driver_override: the 32 bit PCI Device override_only
+ *
+ * This macro is used to create a struct pci_device_id that matches only a
+ * driver_override device. The subvendor and subdevice fields will be set to
+ * PCI_ANY_ID.
+ */
+#define PCI_DEVICE_DRIVER_OVERRIDE(vend, dev, driver_override) \
+	.vendor = (vend), .device = (dev), .subvendor = PCI_ANY_ID, \
+	.subdevice = PCI_ANY_ID, .override_only = (driver_override)
+
+/**
+ * PCI_DRIVER_OVERRIDE_DEVICE_VFIO - macro used to describe a VFIO
+ *                                   "driver_override" PCI device.
+ * @vend: the 16 bit PCI Vendor ID
+ * @dev: the 16 bit PCI Device ID
+ *
+ * This macro is used to create a struct pci_device_id that matches a
+ * specific device. The subvendor and subdevice fields will be set to
+ * PCI_ANY_ID and the driver_override will be set to
+ * PCI_ID_F_VFIO_DRIVER_OVERRIDE.
+ */
+#define PCI_DRIVER_OVERRIDE_DEVICE_VFIO(vend, dev) \
+	PCI_DEVICE_DRIVER_OVERRIDE(vend, dev, PCI_ID_F_VFIO_DRIVER_OVERRIDE)
+
 /**
  * PCI_DEVICE_SUB - macro used to describe a specific PCI device with subsystem
  * @vend: the 16 bit PCI Vendor ID

|
||||||
#include <linux/poll.h>
|
#include <linux/poll.h>
|
||||||
#include <uapi/linux/vfio.h>
|
#include <uapi/linux/vfio.h>
|
||||||
|
|
||||||
|
/*
|
||||||
|
* VFIO devices can be placed in a set, this allows all devices to share this
|
||||||
|
* structure and the VFIO core will provide a lock that is held around
|
||||||
|
* open_device()/close_device() for all devices in the set.
|
||||||
|
*/
|
||||||
|
struct vfio_device_set {
|
||||||
|
void *set_id;
|
||||||
|
struct mutex lock;
|
||||||
|
struct list_head device_list;
|
||||||
|
unsigned int device_count;
|
||||||
|
};
|
||||||
|
|
||||||
struct vfio_device {
|
struct vfio_device {
|
||||||
struct device *dev;
|
struct device *dev;
|
||||||
const struct vfio_device_ops *ops;
|
const struct vfio_device_ops *ops;
|
||||||
struct vfio_group *group;
|
struct vfio_group *group;
|
||||||
|
struct vfio_device_set *dev_set;
|
||||||
|
struct list_head dev_set_list;
|
||||||
|
|
||||||
/* Members below here are private, not for driver use */
|
/* Members below here are private, not for driver use */
|
||||||
refcount_t refcount;
|
refcount_t refcount;
|
||||||
|
unsigned int open_count;
|
||||||
struct completion comp;
|
struct completion comp;
|
||||||
struct list_head group_next;
|
struct list_head group_next;
|
||||||
};
|
};
|
||||||
|
@ -29,8 +44,8 @@ struct vfio_device {
|
||||||
/**
|
/**
|
||||||
* struct vfio_device_ops - VFIO bus driver device callbacks
|
* struct vfio_device_ops - VFIO bus driver device callbacks
|
||||||
*
|
*
|
||||||
* @open: Called when userspace creates new file descriptor for device
|
* @open_device: Called when the first file descriptor is opened for this device
|
||||||
* @release: Called when userspace releases file descriptor for device
|
* @close_device: Opposite of open_device
|
||||||
* @read: Perform read(2) on device file descriptor
|
* @read: Perform read(2) on device file descriptor
|
||||||
* @write: Perform write(2) on device file descriptor
|
* @write: Perform write(2) on device file descriptor
|
||||||
* @ioctl: Perform ioctl(2) on device file descriptor, supporting VFIO_DEVICE_*
|
* @ioctl: Perform ioctl(2) on device file descriptor, supporting VFIO_DEVICE_*
|
||||||
|
@ -43,8 +58,8 @@ struct vfio_device {
|
||||||
*/
|
*/
|
||||||
struct vfio_device_ops {
|
struct vfio_device_ops {
|
||||||
char *name;
|
char *name;
|
||||||
int (*open)(struct vfio_device *vdev);
|
int (*open_device)(struct vfio_device *vdev);
|
||||||
void (*release)(struct vfio_device *vdev);
|
void (*close_device)(struct vfio_device *vdev);
|
||||||
ssize_t (*read)(struct vfio_device *vdev, char __user *buf,
|
ssize_t (*read)(struct vfio_device *vdev, char __user *buf,
|
||||||
size_t count, loff_t *ppos);
|
size_t count, loff_t *ppos);
|
||||||
ssize_t (*write)(struct vfio_device *vdev, const char __user *buf,
|
ssize_t (*write)(struct vfio_device *vdev, const char __user *buf,
|
||||||
|
@ -61,11 +76,14 @@ extern void vfio_iommu_group_put(struct iommu_group *group, struct device *dev);
|
||||||
|
|
||||||
void vfio_init_group_dev(struct vfio_device *device, struct device *dev,
|
void vfio_init_group_dev(struct vfio_device *device, struct device *dev,
|
||||||
const struct vfio_device_ops *ops);
|
const struct vfio_device_ops *ops);
|
||||||
|
void vfio_uninit_group_dev(struct vfio_device *device);
|
||||||
int vfio_register_group_dev(struct vfio_device *device);
|
int vfio_register_group_dev(struct vfio_device *device);
|
||||||
void vfio_unregister_group_dev(struct vfio_device *device);
|
void vfio_unregister_group_dev(struct vfio_device *device);
|
||||||
extern struct vfio_device *vfio_device_get_from_dev(struct device *dev);
|
extern struct vfio_device *vfio_device_get_from_dev(struct device *dev);
|
||||||
extern void vfio_device_put(struct vfio_device *device);
|
extern void vfio_device_put(struct vfio_device *device);
|
||||||
|
|
||||||
|
int vfio_assign_device_set(struct vfio_device *device, void *set_id);
|
||||||
|
|
||||||
/* events for the backend driver notify callback */
|
/* events for the backend driver notify callback */
|
||||||
enum vfio_iommu_notify_type {
|
enum vfio_iommu_notify_type {
|
||||||
VFIO_IOMMU_CONTAINER_CLOSE = 0,
|
VFIO_IOMMU_CONTAINER_CLOSE = 0,
|
||||||
|
|
|
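A sketch of the reworked callbacks from a driver's point of view (not part of the patch; the foo_* names are hypothetical): open_device() now runs only when the first file descriptor for the device is opened and close_device() when the last one goes away, both under dev_set->lock, so drivers no longer keep their own refcount.

struct foo_hw;					/* hypothetical hardware handle */
int foo_hw_enable(struct foo_hw *hw);		/* hypothetical helpers */
void foo_hw_disable(struct foo_hw *hw);

struct foo_dev {
	struct vfio_device vdev;
	struct foo_hw *hw;
};

static int foo_open_device(struct vfio_device *vdev)
{
	struct foo_dev *foo = container_of(vdev, struct foo_dev, vdev);

	/* First open of this device: bring the hardware up once. */
	return foo_hw_enable(foo->hw);
}

static void foo_close_device(struct vfio_device *vdev)
{
	struct foo_dev *foo = container_of(vdev, struct foo_dev, vdev);

	/* Last reference dropped: undo whatever open_device() did. */
	foo_hw_disable(foo->hw);
}

static const struct vfio_device_ops foo_ops = {
	.name		= "vfio-foo",
	.open_device	= foo_open_device,
	.close_device	= foo_close_device,
};
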
@@ -10,13 +10,14 @@
 
 #include <linux/mutex.h>
 #include <linux/pci.h>
+#include <linux/vfio.h>
 #include <linux/irqbypass.h>
 #include <linux/types.h>
 #include <linux/uuid.h>
 #include <linux/notifier.h>
 
-#ifndef VFIO_PCI_PRIVATE_H
-#define VFIO_PCI_PRIVATE_H
+#ifndef VFIO_PCI_CORE_H
+#define VFIO_PCI_CORE_H
 
 #define VFIO_PCI_OFFSET_SHIFT   40
 
@@ -33,7 +34,7 @@
 
 struct vfio_pci_ioeventfd {
 	struct list_head	next;
-	struct vfio_pci_device	*vdev;
+	struct vfio_pci_core_device	*vdev;
 	struct virqfd		*virqfd;
 	void __iomem		*addr;
 	uint64_t		data;

@@ -52,18 +53,18 @@ struct vfio_pci_irq_ctx {
 	struct irq_bypass_producer	producer;
 };
 
-struct vfio_pci_device;
+struct vfio_pci_core_device;
 struct vfio_pci_region;
 
 struct vfio_pci_regops {
-	size_t	(*rw)(struct vfio_pci_device *vdev, char __user *buf,
+	ssize_t	(*rw)(struct vfio_pci_core_device *vdev, char __user *buf,
 		      size_t count, loff_t *ppos, bool iswrite);
-	void	(*release)(struct vfio_pci_device *vdev,
+	void	(*release)(struct vfio_pci_core_device *vdev,
 			   struct vfio_pci_region *region);
-	int	(*mmap)(struct vfio_pci_device *vdev,
+	int	(*mmap)(struct vfio_pci_core_device *vdev,
 			struct vfio_pci_region *region,
 			struct vm_area_struct *vma);
-	int	(*add_capability)(struct vfio_pci_device *vdev,
+	int	(*add_capability)(struct vfio_pci_core_device *vdev,
 				  struct vfio_pci_region *region,
 				  struct vfio_info_cap *caps);
 };

@@ -83,11 +84,6 @@ struct vfio_pci_dummy_resource {
 	struct list_head	res_next;
 };
 
-struct vfio_pci_reflck {
-	struct kref		kref;
-	struct mutex		lock;
-};
-
 struct vfio_pci_vf_token {
 	struct mutex		lock;
 	uuid_t			uuid;

@@ -99,7 +95,7 @@ struct vfio_pci_mmap_vma {
 	struct list_head	vma_next;
 };
 
-struct vfio_pci_device {
+struct vfio_pci_core_device {
 	struct vfio_device	vdev;
 	struct pci_dev		*pdev;
 	void __iomem		*barmap[PCI_STD_NUM_BARS];

@@ -130,8 +126,6 @@ struct vfio_pci_device {
 	bool			needs_pm_restore;
 	struct pci_saved_state	*pci_saved_state;
 	struct pci_saved_state	*pm_save;
-	struct vfio_pci_reflck	*reflck;
-	int			refcnt;
 	int			ioeventfds_nr;
 	struct eventfd_ctx	*err_trigger;
 	struct eventfd_ctx	*req_trigger;

@@ -151,65 +145,95 @@ struct vfio_pci_device {
 #define is_irq_none(vdev) (!(is_intx(vdev) || is_msi(vdev) || is_msix(vdev)))
 #define irq_is(vdev, type) (vdev->irq_type == type)
 
-extern void vfio_pci_intx_mask(struct vfio_pci_device *vdev);
-extern void vfio_pci_intx_unmask(struct vfio_pci_device *vdev);
+extern void vfio_pci_intx_mask(struct vfio_pci_core_device *vdev);
+extern void vfio_pci_intx_unmask(struct vfio_pci_core_device *vdev);
 
-extern int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev,
+extern int vfio_pci_set_irqs_ioctl(struct vfio_pci_core_device *vdev,
 				   uint32_t flags, unsigned index,
 				   unsigned start, unsigned count, void *data);
 
-extern ssize_t vfio_pci_config_rw(struct vfio_pci_device *vdev,
+extern ssize_t vfio_pci_config_rw(struct vfio_pci_core_device *vdev,
 				  char __user *buf, size_t count,
 				  loff_t *ppos, bool iswrite);
 
-extern ssize_t vfio_pci_bar_rw(struct vfio_pci_device *vdev, char __user *buf,
+extern ssize_t vfio_pci_bar_rw(struct vfio_pci_core_device *vdev, char __user *buf,
 			       size_t count, loff_t *ppos, bool iswrite);
 
-extern ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
+extern ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev, char __user *buf,
 			       size_t count, loff_t *ppos, bool iswrite);
 
-extern long vfio_pci_ioeventfd(struct vfio_pci_device *vdev, loff_t offset,
+extern long vfio_pci_ioeventfd(struct vfio_pci_core_device *vdev, loff_t offset,
 			       uint64_t data, int count, int fd);
 
 extern int vfio_pci_init_perm_bits(void);
 extern void vfio_pci_uninit_perm_bits(void);
 
-extern int vfio_config_init(struct vfio_pci_device *vdev);
-extern void vfio_config_free(struct vfio_pci_device *vdev);
+extern int vfio_config_init(struct vfio_pci_core_device *vdev);
+extern void vfio_config_free(struct vfio_pci_core_device *vdev);
 
-extern int vfio_pci_register_dev_region(struct vfio_pci_device *vdev,
+extern int vfio_pci_register_dev_region(struct vfio_pci_core_device *vdev,
 					unsigned int type, unsigned int subtype,
 					const struct vfio_pci_regops *ops,
 					size_t size, u32 flags, void *data);
 
-extern int vfio_pci_set_power_state(struct vfio_pci_device *vdev,
+extern int vfio_pci_set_power_state(struct vfio_pci_core_device *vdev,
 				    pci_power_t state);
 
-extern bool __vfio_pci_memory_enabled(struct vfio_pci_device *vdev);
-extern void vfio_pci_zap_and_down_write_memory_lock(struct vfio_pci_device
+extern bool __vfio_pci_memory_enabled(struct vfio_pci_core_device *vdev);
+extern void vfio_pci_zap_and_down_write_memory_lock(struct vfio_pci_core_device
 						    *vdev);
-extern u16 vfio_pci_memory_lock_and_enable(struct vfio_pci_device *vdev);
-extern void vfio_pci_memory_unlock_and_restore(struct vfio_pci_device *vdev,
+extern u16 vfio_pci_memory_lock_and_enable(struct vfio_pci_core_device *vdev);
+extern void vfio_pci_memory_unlock_and_restore(struct vfio_pci_core_device *vdev,
 					       u16 cmd);
 
 #ifdef CONFIG_VFIO_PCI_IGD
-extern int vfio_pci_igd_init(struct vfio_pci_device *vdev);
+extern int vfio_pci_igd_init(struct vfio_pci_core_device *vdev);
 #else
-static inline int vfio_pci_igd_init(struct vfio_pci_device *vdev)
+static inline int vfio_pci_igd_init(struct vfio_pci_core_device *vdev)
 {
 	return -ENODEV;
 }
 #endif
 
 #ifdef CONFIG_S390
-extern int vfio_pci_info_zdev_add_caps(struct vfio_pci_device *vdev,
+extern int vfio_pci_info_zdev_add_caps(struct vfio_pci_core_device *vdev,
 				       struct vfio_info_cap *caps);
 #else
-static inline int vfio_pci_info_zdev_add_caps(struct vfio_pci_device *vdev,
+static inline int vfio_pci_info_zdev_add_caps(struct vfio_pci_core_device *vdev,
 					      struct vfio_info_cap *caps)
 {
 	return -ENODEV;
 }
 #endif
 
-#endif /* VFIO_PCI_PRIVATE_H */
+/* Will be exported for vfio pci drivers usage */
+void vfio_pci_core_set_params(bool nointxmask, bool is_disable_vga,
+			      bool is_disable_idle_d3);
+void vfio_pci_core_close_device(struct vfio_device *core_vdev);
+void vfio_pci_core_init_device(struct vfio_pci_core_device *vdev,
+			       struct pci_dev *pdev,
+			       const struct vfio_device_ops *vfio_pci_ops);
+int vfio_pci_core_register_device(struct vfio_pci_core_device *vdev);
+void vfio_pci_core_uninit_device(struct vfio_pci_core_device *vdev);
+void vfio_pci_core_unregister_device(struct vfio_pci_core_device *vdev);
+int vfio_pci_core_sriov_configure(struct pci_dev *pdev, int nr_virtfn);
+extern const struct pci_error_handlers vfio_pci_core_err_handlers;
+long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
+			 unsigned long arg);
+ssize_t vfio_pci_core_read(struct vfio_device *core_vdev, char __user *buf,
			   size_t count, loff_t *ppos);
+ssize_t vfio_pci_core_write(struct vfio_device *core_vdev, const char __user *buf,
			    size_t count, loff_t *ppos);
+int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma);
+void vfio_pci_core_request(struct vfio_device *core_vdev, unsigned int count);
+int vfio_pci_core_match(struct vfio_device *core_vdev, char *buf);
+int vfio_pci_core_enable(struct vfio_pci_core_device *vdev);
+void vfio_pci_core_disable(struct vfio_pci_core_device *vdev);
+void vfio_pci_core_finish_enable(struct vfio_pci_core_device *vdev);
+
+static inline bool vfio_pci_is_vga(struct pci_dev *pdev)
+{
+	return (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA;
+}
+
+#endif /* VFIO_PCI_CORE_H */

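Sketch of how a future vendor variant driver would sit on top of the exported vfio_pci_core_* entry points above (not part of the patch; my_* names are hypothetical and error handling is trimmed):

static int my_open_device(struct vfio_device *core_vdev)
{
	struct vfio_pci_core_device *vdev =
		container_of(core_vdev, struct vfio_pci_core_device, vdev);
	int ret;

	ret = vfio_pci_core_enable(vdev);
	if (ret)
		return ret;
	/* device-specific setup would go here */
	vfio_pci_core_finish_enable(vdev);
	return 0;
}

static const struct vfio_device_ops my_vfio_pci_ops = {
	.name		= "my-vfio-pci",
	.open_device	= my_open_device,
	.close_device	= vfio_pci_core_close_device,
	.ioctl		= vfio_pci_core_ioctl,
	.read		= vfio_pci_core_read,
	.write		= vfio_pci_core_write,
	.mmap		= vfio_pci_core_mmap,
	.request	= vfio_pci_core_request,
	.match		= vfio_pci_core_match,
};

static int my_vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct vfio_pci_core_device *vdev;
	int ret;

	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
	if (!vdev)
		return -ENOMEM;

	vfio_pci_core_init_device(vdev, pdev, &my_vfio_pci_ops);

	ret = vfio_pci_core_register_device(vdev);
	if (ret) {
		vfio_pci_core_uninit_device(vdev);
		kfree(vdev);
		return ret;
	}
	dev_set_drvdata(&pdev->dev, vdev);
	return 0;
}
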
@@ -129,7 +129,7 @@ static dev_t mbochs_devt;
 static struct class *mbochs_class;
 static struct cdev mbochs_cdev;
 static struct device mbochs_dev;
-static int mbochs_used_mbytes;
+static atomic_t mbochs_avail_mbytes;
 static const struct vfio_device_ops mbochs_dev_ops;
 
 struct vfio_region_info_ext {

@@ -507,18 +507,22 @@ static int mbochs_reset(struct mdev_state *mdev_state)
 
 static int mbochs_probe(struct mdev_device *mdev)
 {
+	int avail_mbytes = atomic_read(&mbochs_avail_mbytes);
 	const struct mbochs_type *type =
 		&mbochs_types[mdev_get_type_group_id(mdev)];
 	struct device *dev = mdev_dev(mdev);
 	struct mdev_state *mdev_state;
 	int ret = -ENOMEM;
 
-	if (type->mbytes + mbochs_used_mbytes > max_mbytes)
-		return -ENOMEM;
+	do {
+		if (avail_mbytes < type->mbytes)
+			return -ENOSPC;
+	} while (!atomic_try_cmpxchg(&mbochs_avail_mbytes, &avail_mbytes,
+				     avail_mbytes - type->mbytes));
 
 	mdev_state = kzalloc(sizeof(struct mdev_state), GFP_KERNEL);
 	if (mdev_state == NULL)
-		return -ENOMEM;
+		goto err_avail;
 	vfio_init_group_dev(&mdev_state->vdev, &mdev->dev, &mbochs_dev_ops);
 
 	mdev_state->vconfig = kzalloc(MBOCHS_CONFIG_SPACE_SIZE, GFP_KERNEL);

@@ -549,17 +553,18 @@ static int mbochs_probe(struct mdev_device *mdev)
 	mbochs_create_config_space(mdev_state);
 	mbochs_reset(mdev_state);
 
-	mbochs_used_mbytes += type->mbytes;
-
 	ret = vfio_register_group_dev(&mdev_state->vdev);
 	if (ret)
 		goto err_mem;
 	dev_set_drvdata(&mdev->dev, mdev_state);
 	return 0;
 
 err_mem:
+	vfio_uninit_group_dev(&mdev_state->vdev);
+	kfree(mdev_state->pages);
 	kfree(mdev_state->vconfig);
 	kfree(mdev_state);
+err_avail:
+	atomic_add(type->mbytes, &mbochs_avail_mbytes);
 	return ret;
 }
 
@@ -567,8 +572,9 @@ static void mbochs_remove(struct mdev_device *mdev)
 {
 	struct mdev_state *mdev_state = dev_get_drvdata(&mdev->dev);
 
-	mbochs_used_mbytes -= mdev_state->type->mbytes;
 	vfio_unregister_group_dev(&mdev_state->vdev);
+	vfio_uninit_group_dev(&mdev_state->vdev);
+	atomic_add(mdev_state->type->mbytes, &mbochs_avail_mbytes);
 	kfree(mdev_state->pages);
 	kfree(mdev_state->vconfig);
 	kfree(mdev_state);

@@ -1272,15 +1278,7 @@ static long mbochs_ioctl(struct vfio_device *vdev, unsigned int cmd,
 	return -ENOTTY;
 }
 
-static int mbochs_open(struct vfio_device *vdev)
-{
-	if (!try_module_get(THIS_MODULE))
-		return -ENODEV;
-
-	return 0;
-}
-
-static void mbochs_close(struct vfio_device *vdev)
+static void mbochs_close_device(struct vfio_device *vdev)
 {
 	struct mdev_state *mdev_state =
 		container_of(vdev, struct mdev_state, vdev);

@@ -1300,7 +1298,6 @@ static void mbochs_close(struct vfio_device *vdev)
 	mbochs_put_pages(mdev_state);
 
 	mutex_unlock(&mdev_state->ops_lock);
-	module_put(THIS_MODULE);
 }
 
 static ssize_t

@@ -1355,7 +1352,7 @@ static ssize_t available_instances_show(struct mdev_type *mtype,
 {
 	const struct mbochs_type *type =
 		&mbochs_types[mtype_get_type_group_id(mtype)];
-	int count = (max_mbytes - mbochs_used_mbytes) / type->mbytes;
+	int count = atomic_read(&mbochs_avail_mbytes) / type->mbytes;
 
 	return sprintf(buf, "%d\n", count);
 }

@@ -1399,8 +1396,7 @@ static struct attribute_group *mdev_type_groups[] = {
 };
 
 static const struct vfio_device_ops mbochs_dev_ops = {
-	.open = mbochs_open,
-	.release = mbochs_close,
+	.close_device = mbochs_close_device,
 	.read = mbochs_read,
 	.write = mbochs_write,
 	.ioctl = mbochs_ioctl,

@@ -1437,6 +1433,8 @@ static int __init mbochs_dev_init(void)
 {
 	int ret = 0;
 
+	atomic_set(&mbochs_avail_mbytes, max_mbytes);
+
 	ret = alloc_chrdev_region(&mbochs_devt, 0, MINORMASK + 1, MBOCHS_NAME);
 	if (ret < 0) {
 		pr_err("Error: failed to register mbochs_dev, err: %d\n", ret);

@@ -235,17 +235,16 @@ static int mdpy_probe(struct mdev_device *mdev)
 
 	mdev_state->vconfig = kzalloc(MDPY_CONFIG_SPACE_SIZE, GFP_KERNEL);
 	if (mdev_state->vconfig == NULL) {
-		kfree(mdev_state);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_state;
 	}
 
 	fbsize = roundup_pow_of_two(type->width * type->height * type->bytepp);
 
 	mdev_state->memblk = vmalloc_user(fbsize);
 	if (!mdev_state->memblk) {
-		kfree(mdev_state->vconfig);
-		kfree(mdev_state);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_vconfig;
 	}
 	dev_info(dev, "%s: %s (%dx%d)\n", __func__, type->name, type->width,
 		 type->height);

@@ -260,13 +259,18 @@ static int mdpy_probe(struct mdev_device *mdev)
 	mdpy_count++;
 
 	ret = vfio_register_group_dev(&mdev_state->vdev);
-	if (ret) {
-		kfree(mdev_state->vconfig);
-		kfree(mdev_state);
-		return ret;
-	}
+	if (ret)
+		goto err_mem;
 	dev_set_drvdata(&mdev->dev, mdev_state);
 	return 0;
+
+err_mem:
+	vfree(mdev_state->memblk);
+err_vconfig:
+	kfree(mdev_state->vconfig);
+err_state:
+	vfio_uninit_group_dev(&mdev_state->vdev);
+	kfree(mdev_state);
+	return ret;
 }
 
 static void mdpy_remove(struct mdev_device *mdev)

@@ -278,6 +282,7 @@ static void mdpy_remove(struct mdev_device *mdev)
 	vfio_unregister_group_dev(&mdev_state->vdev);
 	vfree(mdev_state->memblk);
 	kfree(mdev_state->vconfig);
+	vfio_uninit_group_dev(&mdev_state->vdev);
 	kfree(mdev_state);
 
 	mdpy_count--;

@@ -609,19 +614,6 @@ static long mdpy_ioctl(struct vfio_device *vdev, unsigned int cmd,
 	return -ENOTTY;
 }
 
-static int mdpy_open(struct vfio_device *vdev)
-{
-	if (!try_module_get(THIS_MODULE))
-		return -ENODEV;
-
-	return 0;
-}
-
-static void mdpy_close(struct vfio_device *vdev)
-{
-	module_put(THIS_MODULE);
-}
-
 static ssize_t
 resolution_show(struct device *dev, struct device_attribute *attr,
 		char *buf)

@@ -716,8 +708,6 @@ static struct attribute_group *mdev_type_groups[] = {
 };
 
 static const struct vfio_device_ops mdpy_dev_ops = {
-	.open = mdpy_open,
-	.release = mdpy_close,
 	.read = mdpy_read,
 	.write = mdpy_write,
 	.ioctl = mdpy_ioctl,

@@ -718,8 +718,8 @@ static int mtty_probe(struct mdev_device *mdev)
 
 	mdev_state = kzalloc(sizeof(struct mdev_state), GFP_KERNEL);
 	if (mdev_state == NULL) {
-		atomic_add(nr_ports, &mdev_avail_ports);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_nr_ports;
 	}
 
 	vfio_init_group_dev(&mdev_state->vdev, &mdev->dev, &mtty_dev_ops);

@@ -732,9 +732,8 @@ static int mtty_probe(struct mdev_device *mdev)
 	mdev_state->vconfig = kzalloc(MTTY_CONFIG_SPACE_SIZE, GFP_KERNEL);
 
 	if (mdev_state->vconfig == NULL) {
-		kfree(mdev_state);
-		atomic_add(nr_ports, &mdev_avail_ports);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_state;
 	}
 
 	mutex_init(&mdev_state->ops_lock);

@@ -743,14 +742,19 @@ static int mtty_probe(struct mdev_device *mdev)
 	mtty_create_config_space(mdev_state);
 
 	ret = vfio_register_group_dev(&mdev_state->vdev);
-	if (ret) {
-		kfree(mdev_state);
-		atomic_add(nr_ports, &mdev_avail_ports);
-		return ret;
-	}
+	if (ret)
+		goto err_vconfig;
 
 	dev_set_drvdata(&mdev->dev, mdev_state);
 	return 0;
 
+err_vconfig:
+	kfree(mdev_state->vconfig);
+err_state:
+	vfio_uninit_group_dev(&mdev_state->vdev);
+	kfree(mdev_state);
+err_nr_ports:
+	atomic_add(nr_ports, &mdev_avail_ports);
+	return ret;
 }
 
 static void mtty_remove(struct mdev_device *mdev)

@@ -761,6 +765,7 @@ static void mtty_remove(struct mdev_device *mdev)
 	vfio_unregister_group_dev(&mdev_state->vdev);
 
 	kfree(mdev_state->vconfig);
+	vfio_uninit_group_dev(&mdev_state->vdev);
 	kfree(mdev_state);
 	atomic_add(nr_ports, &mdev_avail_ports);
 }

@@ -1202,17 +1207,6 @@ static long mtty_ioctl(struct vfio_device *vdev, unsigned int cmd,
 	return -ENOTTY;
 }
 
-static int mtty_open(struct vfio_device *vdev)
-{
-	pr_info("%s\n", __func__);
-	return 0;
-}
-
-static void mtty_close(struct vfio_device *mdev)
-{
-	pr_info("%s\n", __func__);
-}
-
 static ssize_t
 sample_mtty_dev_show(struct device *dev, struct device_attribute *attr,
 		     char *buf)

@@ -1320,8 +1314,6 @@ static struct attribute_group *mdev_type_groups[] = {
 
 static const struct vfio_device_ops mtty_dev_ops = {
 	.name = "vfio-mtty",
-	.open = mtty_open,
-	.release = mtty_close,
 	.read = mtty_read,
 	.write = mtty_write,
 	.ioctl = mtty_ioctl,

@@ -42,6 +42,7 @@ int main(void)
 	DEVID_FIELD(pci_device_id, subdevice);
 	DEVID_FIELD(pci_device_id, class);
 	DEVID_FIELD(pci_device_id, class_mask);
+	DEVID_FIELD(pci_device_id, override_only);
 
 	DEVID(ccw_device_id);
 	DEVID_FIELD(ccw_device_id, match_flags);

@@ -426,7 +426,7 @@ static int do_ieee1394_entry(const char *filename,
 	return 1;
 }
 
-/* Looks like: pci:vNdNsvNsdNbcNscNiN. */
+/* Looks like: pci:vNdNsvNsdNbcNscNiN or <prefix>_pci:vNdNsvNsdNbcNscNiN. */
 static int do_pci_entry(const char *filename,
 			void *symval, char *alias)
 {

@@ -440,8 +440,21 @@ static int do_pci_entry(const char *filename,
 	DEF_FIELD(symval, pci_device_id, subdevice);
 	DEF_FIELD(symval, pci_device_id, class);
 	DEF_FIELD(symval, pci_device_id, class_mask);
+	DEF_FIELD(symval, pci_device_id, override_only);
+
+	switch (override_only) {
+	case 0:
+		strcpy(alias, "pci:");
+		break;
+	case PCI_ID_F_VFIO_DRIVER_OVERRIDE:
+		strcpy(alias, "vfio_pci:");
+		break;
+	default:
+		warn("Unknown PCI driver_override alias %08X\n",
+		     override_only);
+		return 0;
+	}
 
-	strcpy(alias, "pci:");
 	ADD(alias, "v", vendor != PCI_ANY_ID, vendor);
 	ADD(alias, "d", device != PCI_ANY_ID, device);
 	ADD(alias, "sv", subvendor != PCI_ANY_ID, subvendor);
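One practical effect of the alias change above: for an ID entry with override_only set, file2alias now emits a "vfio_pci:"-prefixed modalias instead of the usual "pci:" one, so such entries no longer match the regular PCI uevent alias and the variant module is not autoloaded for every matching device; binding is expected to go through driver_override, as the PCI_DEVICE_DRIVER_OVERRIDE kdoc describes.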