When memory is migrated to the GPU, it is likely to be accessed by GPU
code soon afterwards. Instead of waiting for a GPU fault, map the
migrated memory into the GPU page tables with the same access permissions
as the source CPU page table entries. This preserves copy-on-write
semantics.
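As an illustration, a minimal sketch of how such a mapping entry can be derived from the migrate_vma source pfn (simplified from the nouveau migration path; the NVIF_VMM_PFNMAP_V0_* bits are nouveau's pfnmap flags from nvif/if000c.h and MIGRATE_PFN_WRITE comes from <linux/migrate.h>):

  #include <linux/migrate.h>
  #include <linux/mm.h>
  #include <nvif/if000c.h>

  /*
   * Sketch: build the GPU pfnmap entry for a page that migrate_vma just
   * moved to VRAM, copying the writability of the source CPU PTE.  A
   * read-only (shared, copy-on-write) CPU mapping stays read-only on the
   * GPU too, so a later write still takes the normal COW fault.
   */
  static u64 sketch_vram_pfn(unsigned long src_pfn, unsigned long paddr)
  {
          u64 pfn = NVIF_VMM_PFNMAP_V0_V | NVIF_VMM_PFNMAP_V0_VRAM |
                    ((paddr >> PAGE_SHIFT) << NVIF_VMM_PFNMAP_V0_ADDR_SHIFT);

          if (src_pfn & MIGRATE_PFN_WRITE) /* source CPU PTE was writable */
                  pfn |= NVIF_VMM_PFNMAP_V0_W;
          return pfn;
  }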
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Presumably the intent here was that hmm_range_fault() could put the data
into some HW-specific format and thus avoid some work. However, nothing
actually does that, and it isn't clear how anything actually could do that
as hmm_range_fault() provides CPU addresses which must be DMA mapped.
Perhaps there is some special HW that does not need DMA mapping, but we
don't have any examples of this, and the theoretical performance win of
avoiding an extra scan over the pfns array doesn't seem worth the
complexity. Plus pfns needs to be scanned anyhow to sort out any
DEVICE_PRIVATE pages.
This version replaces the uint64_t with an unsigned long containing a pfn
and fixed flags. On input, the flags are filled with the HMM_PFN_REQ_*
values; on successful output, they are filled with HMM_PFN_* values
describing the state of the pages.
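A hedged sketch of the resulting calling convention (driver locking and the retry loop around mmu_interval_read_retry()/-EBUSY are omitted for brevity):

  #include <linux/hmm.h>
  #include <linux/mm.h>
  #include <linux/mmu_notifier.h>

  static int sketch_fault_one_page(struct mmu_interval_notifier *notifier,
                                   struct mm_struct *mm, unsigned long addr)
  {
          unsigned long hmm_pfns[1];
          struct hmm_range range = {
                  .notifier      = notifier,
                  .start         = addr,
                  .end           = addr + PAGE_SIZE,
                  .hmm_pfns      = hmm_pfns,
                  /* input: ask for the page to be faulted in writable */
                  .default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
          };
          int ret;

          range.notifier_seq = mmu_interval_read_begin(notifier);
          mmap_read_lock(mm);
          ret = hmm_range_fault(&range); /* 0 on success, -EBUSY: retry */
          mmap_read_unlock(mm);
          if (ret)
                  return ret;

          /* output: each entry is a pfn plus HMM_PFN_* state bits */
          if (!(hmm_pfns[0] & HMM_PFN_VALID))
                  return -EFAULT;
          pr_debug("page %p writable=%d\n",
                   hmm_pfn_to_page(hmm_pfns[0]),
                   !!(hmm_pfns[0] & HMM_PFN_WRITE));
          return 0;
  }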
amdgpu is simple to convert; it doesn't use snapshot and doesn't use
per-page flags.
nouveau uses only 16 hmm_pte entries at most (i.e., it fits in a few cache
lines), and it sweeps over its pfns array a couple of times anyhow (a
sketch of such a sweep follows the call chain below). It also has a nasty
call chain before it reaches the dma map and hardware, suggesting
performance isn't important:
nouveau_svm_fault():
  args.i.m.method = NVIF_VMM_V0_PFNMAP
  nouveau_range_fault()
  nvif_object_ioctl()
    client->driver->ioctl()
      struct nvif_driver nvif_driver_nvkm:
        .ioctl = nvkm_client_ioctl
      nvkm_ioctl()
        nvkm_ioctl_path()
          nvkm_ioctl_v0[type].func(..)
          nvkm_ioctl_mthd()
            nvkm_object_mthd()
              struct nvkm_object_func nvkm_uvmm:
                .mthd = nvkm_uvmm_mthd
              nvkm_uvmm_mthd()
                nvkm_uvmm_mthd_pfnmap()
                  nvkm_vmm_pfn_map()
                    nvkm_vmm_ptes_get_map()
                      func == gp100_vmm_pgt_pfn
                      struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
                        .pfn = gp100_vmm_pgt_pfn
                      nvkm_vmm_iter()
                        REF_PTES == func == gp100_vmm_pgt_pfn()
                          dma_map_page()
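The dma_map_page() at the bottom of that chain is reached from a per-page loop anyway, which is also where the DEVICE_PRIVATE sorting mentioned above has to happen. A purely illustrative sketch (not the nouveau code; device_private_to_addr() is a hypothetical stand-in for the driver's own lookup of a device-private page's local address):

  #include <linux/dma-mapping.h>
  #include <linux/hmm.h>
  #include <linux/memremap.h>

  dma_addr_t device_private_to_addr(struct page *page); /* hypothetical */

  static void sketch_sweep_pfns(struct device *dev, struct hmm_range *range,
                                unsigned long npages, dma_addr_t *addrs)
  {
          unsigned long i;

          for (i = 0; i < npages; i++) {
                  struct page *page = hmm_pfn_to_page(range->hmm_pfns[i]);

                  if (!page)
                          continue; /* hole, or fault was not requested */
                  if (is_device_private_page(page))
                          /* already in device memory, no DMA mapping needed */
                          addrs[i] = device_private_to_addr(page);
                  else
                          addrs[i] = dma_map_page(dev, page, 0, PAGE_SIZE,
                                                  DMA_BIDIRECTIONAL);
          }
  }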
Link: https://lore.kernel.org/r/5-v2-b4e84f444c7d+24f57-hmm_no_flags_jgg@mellanox.com
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
nouveau_dmem_migrate_vma and nouveau_dmem_convert_pfn are only called
when CONFIG_DRM_NOUVEAU_SVM is enabled, so there is no need to provide
!CONFIG_DRM_NOUVEAU_SVM stubs for them.
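For reference, a rough sketch of the header shape this leaves behind (the parameter lists are abbreviated for illustration and may not match nouveau_dmem.h exactly):

  #if IS_ENABLED(CONFIG_DRM_NOUVEAU_SVM)
  int nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
                               struct vm_area_struct *vma,
                               unsigned long start, unsigned long end);
  void nouveau_dmem_convert_pfn(struct nouveau_drm *drm,
                                struct hmm_range *range);
  #endif
  /*
   * No #else stubs: the only callers are in nouveau_svm.c, which is itself
   * built only when CONFIG_DRM_NOUVEAU_SVM=y, so a non-SVM build never
   * references these symbols.
   */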
Link: https://lore.kernel.org/r/20190814075928.23766-6-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Device memory can be used in SVM, in which case we do not have any of
the existing buffer objects. This commit adds infrastructure to allow
use of device memory without a nouveau_bo. Again, this is a temporary
solution until a rework of GPU memory management.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>