License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied to
a file was done in a spreadsheet of side-by-side results from the output of
two independent scanners (ScanCode & Windriver) producing SPDX tag:value
files, created by Philippe Ombredanne. Philippe prepared the base worksheet
and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it and was one
of the */uapi/* ones, it was annotated with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
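For reference, the emitted tags look like the lines below. This is a minimal illustration only; the exact comment-style rules are the ones later codified in Documentation/process/license-rules.rst:

/* First line of a .c source file: */
// SPDX-License-Identifier: GPL-2.0

/* First line of a .h header file: */
/* SPDX-License-Identifier: GPL-2.0 */

/* First line of a uapi header with no other license information: */
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */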
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_MEMREMAP_H_
#define _LINUX_MEMREMAP_H_
#include <linux/mmzone.h>
#include <linux/range.h>
#include <linux/ioport.h>
#include <linux/percpu-refcount.h>

struct resource;
struct device;

/**
 * struct vmem_altmap - pre-allocated storage for vmemmap_populate
 * @base_pfn: base of the entire dev_pagemap mapping
 * @reserve: pages mapped, but reserved for driver use (relative to @base)
 * @free: free pages set aside in the mapping for memmap storage
 * @align: pages reserved to meet allocation alignments
 * @alloc: track pages consumed, private to vmemmap_populate()
 */
struct vmem_altmap {
mm,memory_hotplug: allocate memmap from the added memory range
Physical memory hotadd has to allocate a memmap (struct page array) for
the newly added memory section. Currently, alloc_pages_node() is used
for those allocations.
This has some disadvantages:
a) an existing memory is consumed for that purpose
(eg: ~2MB per 128MB memory section on x86_64)
This can even lead to extreme cases where system goes OOM because
the physically hotplugged memory depletes the available memory before
it is onlined.
b) if the whole node is movable then we have off-node struct pages
which has performance drawbacks.
c) It might be that there are no PMD_ALIGNED chunks, so the memmap array gets
populated with base pages.
This can be improved when CONFIG_SPARSEMEM_VMEMMAP is enabled.
Vmemmap page tables can map arbitrary memory. That means that we can
reserve a part of the physically hotadded memory to back vmemmap page
tables. This implementation uses the beginning of the hotplugged memory
for that purpose.
There are some non-obvious things to consider, though.
Vmemmap pages are allocated/freed during the memory hotplug events
(add_memory_resource(), try_remove_memory()) when the memory is
added/removed. This means that the reserved physical range is not
online although it is used. The most obvious side effect is that
pfn_to_online_page() returns NULL for those pfns. The current design
expects that this should be OK, as the hotplugged memory is considered
garbage until it is onlined. For example, hibernation wouldn't save the
content of those vmemmaps into the image, so it wouldn't be restored on
resume, but this should be OK as there is no real content to recover anyway
while the metadata is reachable from other data structures (e.g. vmemmap
page tables).
The reserved space is therefore (de)initialized during the {on,off}line
events (mhp_{de}init_memmap_on_memory). That is done by extracting page
allocator independent initialization from the regular onlining path.
The primary reason to handle the reserved space outside of
{on,off}line_pages is to make each initialization specific to the
purpose rather than special case them in a single function.
As per above, the functions that are introduced are:
- mhp_init_memmap_on_memory:
Initializes vmemmap pages by calling move_pfn_range_to_zone(), calls
kasan_add_zero_shadow(), and onlines as many sections as vmemmap pages
fully span.
- mhp_deinit_memmap_on_memory:
Offlines as many sections as vmemmap pages fully span, removes the
range from the zone by remove_pfn_range_from_zone(), and calls
kasan_remove_zero_shadow() for the range.
The new function memory_block_online() calls mhp_init_memmap_on_memory()
before doing the actual online_pages(). Should online_pages() fail, we
clean up by calling mhp_deinit_memmap_on_memory(). Adjusting
present_pages is done at the end, once we know that online_pages()
succeeded.
On offline, memory_block_offline() needs to unaccount vmemmap pages from
present_pages before calling offline_pages(). This is necessary because
offline_pages() tears down some structures based on whether the
node or the zone becomes empty. If offline_pages() fails, we account back
vmemmap pages. If it succeeds, we call mhp_deinit_memmap_on_memory().
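The ordering above can be summarized in a short sketch. This is illustrative only: the signatures are simplified, and the real memory_block_online()/memory_block_offline() in drivers/base/memory.c carry additional bookkeeping not shown here.

/* Illustrative sketch only; signatures simplified. */
static int memory_block_online_sketch(unsigned long start_pfn,
				      unsigned long nr_pages,
				      unsigned long nr_vmemmap_pages)
{
	int ret;

	/* 1) Initialize the vmemmap pages sitting at the start of the range. */
	if (nr_vmemmap_pages) {
		ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages);
		if (ret)
			return ret;
	}

	/* 2) Online the rest of the range. */
	ret = online_pages(start_pfn + nr_vmemmap_pages,
			   nr_pages - nr_vmemmap_pages);
	if (ret) {
		/* 3) Tear the vmemmap pages back down if onlining failed. */
		if (nr_vmemmap_pages)
			mhp_deinit_memmap_on_memory(start_pfn, nr_vmemmap_pages);
		return ret;
	}

	/* 4) Only now account the vmemmap pages in present_pages. */
	return 0;
}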
Hot-remove:
We need to be careful when removing memory, as adding and
removing memory needs to be done with the same granularity.
To check that this assumption is not violated, we check the
memory range we want to remove and if a) any memory block has
vmemmap pages and b) the range spans more than a single memory
block, we scream out loud and refuse to proceed.
If all is good and the range was using memmap on memory (aka vmemmap pages),
we construct an altmap structure so free_hugepage_table does the right
thing and calls vmem_altmap_free instead of free_pagetable.
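On the add side, the reserved head of the range is described with the vmem_altmap structure from this header. A minimal sketch follows; the example_* name is hypothetical, and the surrounding add_memory_resource() plumbing and exact accounting are omitted:

/* Illustrative sketch: carve the memmap out of the hot-added range itself. */
static void example_setup_memmap_on_memory(u64 start, u64 size,
					   struct mhp_params *params,
					   struct vmem_altmap *mhp_altmap)
{
	mhp_altmap->base_pfn = PHYS_PFN(start);	/* memmap lives at the start of the range */
	mhp_altmap->free = PHYS_PFN(size);	/* pages available to back the memmap */
	params->altmap = mhp_altmap;		/* consumed by the vmemmap_populate() path */
}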
Link: https://lkml.kernel.org/r/20210421102701.25051-5-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
	unsigned long base_pfn;
	const unsigned long end_pfn;
	const unsigned long reserve;
	unsigned long free;
	unsigned long align;
	unsigned long alloc;
};

/*
 * Specialize ZONE_DEVICE memory into multiple types, each with a different
 * usage.
 *
 * MEMORY_DEVICE_PRIVATE:
 * Device memory that is not directly addressable by the CPU: CPU can neither
 * read nor write private memory. In this case, we do still have struct pages
 * backing the device memory. Doing so simplifies the implementation, but it is
 * important to remember that there are certain points at which the struct page
 * must be treated as an opaque object, rather than a "normal" struct page.
 *
 * A more complete discussion of unaddressable memory may be found in
 * include/linux/hmm.h and Documentation/mm/hmm.rst.
 *
 * MEMORY_DEVICE_COHERENT:
 * Device memory that is cache coherent from device and CPU point of view. This
 * is used on platforms that have an advanced system bus (like CAPI or CXL). A
 * driver can hotplug the device memory using ZONE_DEVICE and with that memory
 * type. Any page of a process can be migrated to such memory. However, no one
 * should be allowed to pin such memory so that it can always be evicted.
 *
 * MEMORY_DEVICE_FS_DAX:
 * Host memory that has similar access semantics to System RAM, i.e. DMA
 * coherent and supports page pinning. In support of coordinating page
 * pinning vs other operations, MEMORY_DEVICE_FS_DAX arranges for a
 * wakeup event whenever a page is unpinned and becomes idle. This
 * wakeup is used to coordinate physical address space management (ex:
 * fs truncate/hole punch) vs pinned pages (ex: device dma).
PCI/P2PDMA: Support peer-to-peer memory
Some PCI devices may have memory mapped in a BAR space that's intended for
use in peer-to-peer transactions. To enable such transactions the memory
must be registered with ZONE_DEVICE pages so it can be used by DMA
interfaces in existing drivers.
Add an interface for other subsystems to find and allocate chunks of P2P
memory as necessary to facilitate transfers between two PCI peers:
struct pci_dev *pci_p2pmem_find[_many]();
int pci_p2pdma_distance[_many]();
void *pci_alloc_p2pmem();
The new interface requires a driver to collect a list of client devices
involved in the transaction and then call pci_p2pmem_find() to obtain any
suitable P2P memory. Alternatively, if the caller knows a device which
provides P2P memory, they can use pci_p2pdma_distance() to determine if it
is usable. With a suitable p2pmem device, memory can then be allocated
with pci_alloc_p2pmem() for use in DMA transactions.
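A rough usage sketch of that flow (illustrative only; the example_* name is hypothetical, error handling is minimal, and the full prototypes live in include/linux/pci-p2pdma.h):

/* Find a p2pmem provider usable by @client and allocate a buffer from it. */
static void *example_alloc_p2p_buffer(struct device *client, size_t len,
				      struct pci_dev **provider)
{
	struct pci_dev *p2p_dev;
	void *buf;

	/* Only providers behind the same PCI bridge as the client are returned. */
	p2p_dev = pci_p2pmem_find(client);
	if (!p2p_dev)
		return NULL;

	/* Carve @len bytes out of the provider's BAR-backed ZONE_DEVICE memory. */
	buf = pci_alloc_p2pmem(p2p_dev, len);
	if (!buf) {
		pci_dev_put(p2p_dev);
		return NULL;
	}

	*provider = p2p_dev;
	return buf;	/* later: pci_free_p2pmem(*provider, buf, len) */
}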
Depending on hardware, using peer-to-peer memory may reduce the bandwidth
of the transfer but can significantly reduce pressure on system memory.
This may be desirable in many cases: for example a system could be designed
with a small CPU connected to a PCIe switch by a small number of lanes
which would maximize the number of lanes available to connect to NVMe
devices.
The code is designed to only utilize the p2pmem device if all the devices
involved in a transfer are behind the same PCI bridge. This is because we
have no way of knowing whether peer-to-peer routing between PCIe Root Ports
is supported (PCIe r4.0, sec 1.3.1). Additionally, the benefits of P2P
transfers that go through the RC are limited to only reducing DRAM usage
and, in some cases, coding convenience. The PCI-SIG may be exploring
adding a new capability bit to advertise whether this is possible for
future hardware.
This commit includes significant rework and feedback from Christoph
Hellwig.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
[bhelgaas: fold in fix from Keith Busch <keith.busch@intel.com>:
https://lore.kernel.org/linux-pci/20181012155920.15418-1-keith.busch@intel.com,
to address comment from Dan Carpenter <dan.carpenter@oracle.com>, fold in
https://lore.kernel.org/linux-pci/20181017160510.17926-1-logang@deltatee.com]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
 *
 * MEMORY_DEVICE_GENERIC:
 * Host memory that has similar access semantics to System RAM, i.e. DMA
 * coherent and supports page pinning. This is for example used by DAX devices
 * that expose memory using a character device.
 *
 * MEMORY_DEVICE_PCI_P2PDMA:
 * Device memory residing in a PCI BAR intended for use with Peer-to-Peer
 * transactions.
 */
enum memory_type {
	/* 0 is reserved to catch uninitialized type fields */
	MEMORY_DEVICE_PRIVATE = 1,
	MEMORY_DEVICE_COHERENT,
	MEMORY_DEVICE_FS_DAX,
	MEMORY_DEVICE_GENERIC,
	MEMORY_DEVICE_PCI_P2PDMA,
};

struct dev_pagemap_ops {
	/*
	 * Called once the page refcount reaches 0. The reference count will be
	 * reset to one by the core code after the method is called to prepare
	 * for handing out the page again.
	 */
	void (*page_free)(struct page *page);

	/*
	 * Used for private (un-addressable) device memory only. Must migrate
	 * the page back to a CPU accessible page.
	 */
	vm_fault_t (*migrate_to_ram)(struct vm_fault *vmf);

	/*
	 * Handle a memory failure that happens on a range of pfns. Notify the
	 * processes who are using these pfns, and try to recover the data on
	 * them if necessary. The mf_flags is finally passed to the recovery
	 * function through the whole notify routine.
	 *
	 * When this is not implemented, or it returns -EOPNOTSUPP, the caller
	 * will fall back to a common handler called mf_generic_kill_procs().
	 */
	int (*memory_failure)(struct dev_pagemap *pgmap, unsigned long pfn,
			      unsigned long nr_pages, int mf_flags);
};
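As a usage sketch (not part of the header): a driver exposing device-private memory would supply an ops table like the one below. The my_* names are hypothetical and the bodies are stubs; a real migrate_to_ram() would migrate the faulting page back to system RAM, typically via the migrate_vma_* helpers.

static void my_dev_page_free(struct page *page)
{
	/* Return the backing device block to the driver's allocator. */
}

static vm_fault_t my_dev_migrate_to_ram(struct vm_fault *vmf)
{
	/* Copy the device-private page back to a CPU-accessible page. */
	return 0;
}

static const struct dev_pagemap_ops my_dev_pagemap_ops = {
	.page_free	= my_dev_page_free,
	.migrate_to_ram	= my_dev_migrate_to_ram,
};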

#define PGMAP_ALTMAP_VALID	(1 << 0)

/**
 * struct dev_pagemap - metadata for ZONE_DEVICE mappings
 * @altmap: pre-allocated/reserved memory for vmemmap allocations
 * @ref: reference count that pins the devm_memremap_pages() mapping
 * @done: completion for @ref
 * @type: memory type: see MEMORY_* in memory_hotplug.h
 * @flags: PGMAP_* flags to specify detailed behavior
 * @vmemmap_shift: structural definition of how the vmemmap page metadata
 *	is populated, specifically the metadata page order.
 *	A zero value (default) uses base pages as the vmemmap metadata
 *	representation. A bigger value will set up compound struct pages
 *	of the requested order value.
 * @ops: method table
 * @owner: an opaque pointer identifying the entity that manages this
 *	instance. Used by various helpers to make sure that no
 *	foreign ZONE_DEVICE memory is accessed.
 * @nr_range: number of ranges to be mapped
 * @range: range to be mapped when nr_range == 1
 * @ranges: array of ranges to be mapped when nr_range > 1
 */
struct dev_pagemap {
	struct vmem_altmap altmap;
	struct percpu_ref ref;
	struct completion done;
	enum memory_type type;
	unsigned int flags;
	unsigned long vmemmap_shift;
	const struct dev_pagemap_ops *ops;
	void *owner;
	int nr_range;
	union {
		struct range range;
		struct range ranges[0];
	};
};
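Continuing the sketch: a driver fills in a dev_pagemap describing its physical range and registers it with devm_memremap_pages(), declared later in this header under CONFIG_ZONE_DEVICE. The example_* name is hypothetical, the ops table is the one sketched after struct dev_pagemap_ops above, and error handling is minimal.

static int example_register_private_memory(struct device *dev,
					   struct resource *res,
					   struct dev_pagemap *pgmap)
{
	void *addr;

	pgmap->type = MEMORY_DEVICE_PRIVATE;
	pgmap->range.start = res->start;
	pgmap->range.end = res->end;
	pgmap->nr_range = 1;
	pgmap->ops = &my_dev_pagemap_ops;	/* ops table from the sketch above */
	pgmap->owner = dev;			/* opaque cookie identifying this driver */

	/* Creates struct pages for the range; returns the mapping or an ERR_PTR. */
	addr = devm_memremap_pages(dev, pgmap);
	return IS_ERR(addr) ? PTR_ERR(addr) : 0;
}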

static inline struct vmem_altmap *pgmap_altmap(struct dev_pagemap *pgmap)
{
	if (pgmap->flags & PGMAP_ALTMAP_VALID)
		return &pgmap->altmap;
	return NULL;
}

static inline unsigned long pgmap_vmemmap_nr(struct dev_pagemap *pgmap)
{
	return 1 << pgmap->vmemmap_shift;
}

static inline bool is_device_private_page(const struct page *page)
{
	return IS_ENABLED(CONFIG_DEVICE_PRIVATE) &&
		is_zone_device_page(page) &&
		page->pgmap->type == MEMORY_DEVICE_PRIVATE;
}

static inline bool folio_is_device_private(const struct folio *folio)
{
	return is_device_private_page(&folio->page);
}

static inline bool is_pci_p2pdma_page(const struct page *page)
{
	return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
		is_zone_device_page(page) &&
		page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
}

static inline bool is_device_coherent_page(const struct page *page)
{
	return is_zone_device_page(page) &&
		page->pgmap->type == MEMORY_DEVICE_COHERENT;
}

static inline bool folio_is_device_coherent(const struct folio *folio)
{
	return is_device_coherent_page(&folio->page);
}
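These helpers are how generic mm code distinguishes the ZONE_DEVICE flavours described above. A small, hypothetical illustration of the policy spelled out in the comments (the real checks live in mm/gup.c and the migration paths):

static inline bool example_can_longterm_pin(const struct page *page)
{
	/*
	 * Device-private pages are not CPU addressable, and device-coherent
	 * pages must stay evictable, so neither should be pinned long term.
	 */
	return !is_device_private_page(page) && !is_device_coherent_page(page);
}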

#ifdef CONFIG_ZONE_DEVICE
void *memremap_pages(struct dev_pagemap *pgmap, int nid);
void memunmap_pages(struct dev_pagemap *pgmap);
void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap);
struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
		struct dev_pagemap *pgmap);
bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);

unsigned long vmem_altmap_offset(struct vmem_altmap *altmap);
void vmem_altmap_free(struct vmem_altmap *altmap, unsigned long nr_pfns);
unsigned long memremap_compat_align(void);
#else
static inline void *devm_memremap_pages(struct device *dev,
		struct dev_pagemap *pgmap)
{
	/*
	 * Fail attempts to call devm_memremap_pages() without
	 * ZONE_DEVICE support enabled, this requires callers to fall
	 * back to plain devm_memremap() based on config
	 */
	WARN_ON_ONCE(1);
	return ERR_PTR(-ENXIO);
}

static inline void devm_memunmap_pages(struct device *dev,
		struct dev_pagemap *pgmap)
{
}

static inline struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
		struct dev_pagemap *pgmap)
{
	return NULL;
}

static inline bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn)
{
	return false;
}

static inline unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
{
	return 0;
}

static inline void vmem_altmap_free(struct vmem_altmap *altmap,
		unsigned long nr_pfns)
{
}

/* when memremap_pages() is disabled all archs can remap a single page */
static inline unsigned long memremap_compat_align(void)
{
	return PAGE_SIZE;
}
#endif /* CONFIG_ZONE_DEVICE */

static inline void put_dev_pagemap(struct dev_pagemap *pgmap)
{
	if (pgmap)
		percpu_ref_put(&pgmap->ref);
}

#endif /* _LINUX_MEMREMAP_H_ */