Merge branch 'akpm' (patches from Andrew)

Merge more updates from Andrew Morton:
 "More mm/ work, plenty more to come

  Subsystems affected by this patch series: slub, memcg, gup, kasan,
  pagealloc, hugetlb, vmscan, tools, mempolicy, memblock, hugetlbfs,
  thp, mmap, kconfig"

* akpm: (131 commits)
  arm64: mm: use ARCH_HAS_DEBUG_WX instead of arch defined
  x86: mm: use ARCH_HAS_DEBUG_WX instead of arch defined
  riscv: support DEBUG_WX
  mm: add DEBUG_WX support
  drivers/base/memory.c: cache memory blocks in xarray to accelerate lookup
  mm/thp: rename pmd_mknotpresent() as pmd_mkinvalid()
  powerpc/mm: drop platform defined pmd_mknotpresent()
  mm: thp: don't need to drain lru cache when splitting and mlocking THP
  hugetlbfs: get unmapped area below TASK_UNMAPPED_BASE for hugetlbfs
  sparc32: register memory occupied by kernel as memblock.memory
  include/linux/memblock.h: fix minor typo and unclear comment
  mm, mempolicy: fix up gup usage in lookup_node
  tools/vm/page_owner_sort.c: filter out unneeded line
  mm: swap: memcg: fix memcg stats for huge pages
  mm: swap: fix vmstats for huge pages
  mm: vmscan: limit the range of LRU type balancing
  mm: vmscan: reclaim writepage is IO cost
  mm: vmscan: determine anon/file pressure balance at the reclaim root
  mm: balance LRU lists based on relative thrashing
  mm: only count actual rotations as LRU reclaim cost
  ...
Linus Torvalds 2020-06-03 20:24:15 -07:00
Parents: c444eb564f 09587a09ad
Commit: ee01c4d72a
147 changed files with 3456 additions and 2683 deletions


@ -199,11 +199,11 @@ An RSS page is unaccounted when it's fully unmapped. A PageCache page is
unaccounted when it's removed from radix-tree. Even if RSS pages are fully
unmapped (by kswapd), they may exist as SwapCache in the system until they
are really freed. Such SwapCaches are also accounted.
A swapped-in page is not accounted until it's mapped.
A swapped-in page is accounted after adding into swapcache.
Note: The kernel does swapin-readahead and reads multiple swaps at once.
This means swapped-in pages may contain pages for other tasks than a task
causing page fault. So, we avoid accounting at swap-in I/O.
Since page's memcg recorded into swap whatever memsw enabled, the page will
be accounted after swapin.
At page migration, accounting information is kept.
@ -222,18 +222,13 @@ the cgroup that brought it in -- this will happen on memory pressure).
But see section 8.2: when moving a task to another cgroup, its pages may
be recharged to the new cgroup, if move_charge_at_immigrate has been chosen.
Exception: If CONFIG_MEMCG_SWAP is not used.
When you do swapoff and make swapped-out pages of shmem(tmpfs) to
be backed into memory in force, charges for pages are accounted against the
caller of swapoff rather than the users of shmem.
2.4 Swap Extension (CONFIG_MEMCG_SWAP)
2.4 Swap Extension
--------------------------------------
Swap Extension allows you to record charge for swap. A swapped-in page is
charged back to original page allocator if possible.
Swap usage is always recorded for each of cgroup. Swap Extension allows you to
read and limit it.
When swap is accounted, following files are added.
When CONFIG_SWAP is enabled, following files are added.
- memory.memsw.usage_in_bytes.
- memory.memsw.limit_in_bytes.


@ -834,12 +834,15 @@
See also Documentation/networking/decnet.rst.
default_hugepagesz=
[same as hugepagesz=] The size of the default
HugeTLB page size. This is the size represented by
the legacy /proc/ hugepages APIs, used for SHM, and
default size when mounting hugetlbfs filesystems.
Defaults to the default architecture's huge page size
if not specified.
[HW] The size of the default HugeTLB page. This is
the size represented by the legacy /proc/ hugepages
APIs. In addition, this is the default hugetlb size
used for shmget(), mmap() and mounting hugetlbfs
filesystems. If not specified, defaults to the
architecture's default huge page size. Huge page
sizes are architecture dependent. See also
Documentation/admin-guide/mm/hugetlbpage.rst.
Format: size[KMG]
deferred_probe_timeout=
[KNL] Debugging option to set a timeout in seconds for
@ -1484,13 +1487,24 @@
hugepages using the cma allocator. If enabled, the
boot-time allocation of gigantic hugepages is skipped.
hugepages= [HW,X86-32,IA-64] HugeTLB pages to allocate at boot.
hugepagesz= [HW,IA-64,PPC,X86-64] The size of the HugeTLB pages.
On x86-64 and powerpc, this option can be specified
multiple times interleaved with hugepages= to reserve
huge pages of different sizes. Valid pages sizes on
x86-64 are 2M (when the CPU supports "pse") and 1G
(when the CPU supports the "pdpe1gb" cpuinfo flag).
hugepages= [HW] Number of HugeTLB pages to allocate at boot.
If this follows hugepagesz (below), it specifies
the number of pages of hugepagesz to be allocated.
If this is the first HugeTLB parameter on the command
line, it specifies the number of pages to allocate for
the default huge page size. See also
Documentation/admin-guide/mm/hugetlbpage.rst.
Format: <integer>
hugepagesz=
[HW] The size of the HugeTLB pages. This is used in
conjunction with hugepages (above) to allocate huge
pages of a specific size at boot. The pair
hugepagesz=X hugepages=Y can be specified once for
each supported huge page size. Huge page sizes are
architecture dependent. See also
Documentation/admin-guide/mm/hugetlbpage.rst.
Format: size[KMG]
hung_task_panic=
[KNL] Should the hung task detector generate panics.


@ -100,6 +100,41 @@ with a huge page size selection parameter "hugepagesz=<size>". <size> must
be specified in bytes with optional scale suffix [kKmMgG]. The default huge
page size may be selected with the "default_hugepagesz=<size>" boot parameter.
Hugetlb boot command line parameter semantics
hugepagesz - Specify a huge page size. Used in conjunction with hugepages
parameter to preallocate a number of huge pages of the specified
size. Hence, hugepagesz and hugepages are typically specified in
pairs such as:
hugepagesz=2M hugepages=512
hugepagesz can only be specified once on the command line for a
specific huge page size. Valid huge page sizes are architecture
dependent.
hugepages - Specify the number of huge pages to preallocate. This typically
follows a valid hugepagesz or default_hugepagesz parameter. However,
if hugepages is the first or only hugetlb command line parameter it
implicitly specifies the number of huge pages of default size to
allocate. If the number of huge pages of default size is implicitly
specified, it can not be overwritten by a hugepagesz,hugepages
parameter pair for the default size.
For example, on an architecture with 2M default huge page size:
hugepages=256 hugepagesz=2M hugepages=512
will result in 256 2M huge pages being allocated and a warning message
indicating that the hugepages=512 parameter is ignored. If a hugepages
parameter is preceded by an invalid hugepagesz parameter, it will
be ignored.
default_hugepagesz - Specify the default huge page size. This parameter can
only be specified once on the command line. default_hugepagesz can
optionally be followed by the hugepages parameter to preallocate a
specific number of huge pages of default size. The number of default
sized huge pages to preallocate can also be implicitly specified as
mentioned in the hugepages section above. Therefore, on an
architecture with 2M default huge page size:
hugepages=256
default_hugepagesz=2M hugepages=256
hugepages=256 default_hugepagesz=2M
will all result in 256 2M huge pages being allocated. Valid default
huge page size is architecture dependent.
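
Inside the kernel, whether a given hugepagesz value is accepted is decided by the
per-architecture ``arch_hugetlb_valid_size()`` hook introduced by this series. A
minimal sketch of such a hook, assuming a hypothetical 64-bit architecture with
PMD- and PUD-sized huge pages (modeled on the riscv conversion further down in
this diff)::

  #include <linux/hugetlb.h>

  /* Accept only the sizes this (hypothetical) architecture can map. */
  bool __init arch_hugetlb_valid_size(unsigned long size)
  {
          if (size == PMD_SIZE)
                  return true;
          if (IS_ENABLED(CONFIG_64BIT) && size == PUD_SIZE)
                  return true;
          return false;
  }
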
When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
indicates the current number of pre-allocated huge pages of the default size.
Thus, one can use the following command to dynamically allocate/deallocate


@ -220,6 +220,13 @@ memory. A lower value can prevent THPs from being
collapsed, resulting fewer pages being collapsed into
THPs, and lower memory access performance.
``max_ptes_shared`` specifies how many pages can be shared across multiple
processes. Exceeding the number would block the collapse::
/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_shared
A higher value may increase memory footprint for some workloads.
Boot parameter
==============


@ -831,14 +831,27 @@ tooling to work, you can do::
swappiness
==========
This control is used to define how aggressive the kernel will swap
memory pages. Higher values will increase aggressiveness, lower values
decrease the amount of swap. A value of 0 instructs the kernel not to
initiate swap until the amount of free and file-backed pages is less
than the high water mark in a zone.
This control is used to define the rough relative IO cost of swapping
and filesystem paging, as a value between 0 and 200. At 100, the VM
assumes equal IO cost and will thus apply memory pressure to the page
cache and swap-backed pages equally; lower values signify more
expensive swap IO, higher values indicates cheaper.
Keep in mind that filesystem IO patterns under memory pressure tend to
be more efficient than swap's random IO. An optimal value will require
experimentation and will also be workload-dependent.
The default value is 60.
For in-memory swap, like zram or zswap, as well as hybrid setups that
have swap on faster devices than the filesystem, values beyond 100 can
be considered. For example, if the random IO against the swap device
is on average 2x faster than IO from the filesystem, swappiness should
be 133 (x + 2x = 200, 2x = 133.33).
At 0, the kernel will not initiate swap until the amount of free and
file-backed pages is less than the high watermark in a zone.
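
As a worked generalization of the example above (an illustration, assuming swap
IO is taken to be :math:`k` times cheaper than filesystem IO), the two cost
weights split 200 as

.. math::

   \text{swappiness} = \frac{200k}{k+1}, \qquad
   k = 2 \;\Rightarrow\; \text{swappiness} = \tfrac{400}{3} \approx 133.
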
unprivileged_userfaultfd
========================


@ -4,23 +4,26 @@
The padata parallel execution mechanism
=======================================
:Date: December 2019
:Date: May 2020
Padata is a mechanism by which the kernel can farm jobs out to be done in
parallel on multiple CPUs while retaining their ordering. It was developed for
use with the IPsec code, which needs to be able to perform encryption and
decryption on large numbers of packets without reordering those packets. The
crypto developers made a point of writing padata in a sufficiently general
fashion that it could be put to other uses as well.
parallel on multiple CPUs while optionally retaining their ordering.
Usage
=====
It was originally developed for IPsec, which needs to perform encryption and
decryption on large numbers of packets without reordering those packets. This
is currently the sole consumer of padata's serialized job support.
Padata also supports multithreaded jobs, splitting up the job evenly while load
balancing and coordinating between threads.
Running Serialized Jobs
=======================
Initializing
------------
The first step in using padata is to set up a padata_instance structure for
overall control of how jobs are to be run::
The first step in using padata to run serialized jobs is to set up a
padata_instance structure for overall control of how jobs are to be run::
#include <linux/padata.h>
@ -162,6 +165,24 @@ functions that correspond to the allocation in reverse::
It is the user's responsibility to ensure all outstanding jobs are complete
before any of the above are called.
Running Multithreaded Jobs
==========================
A multithreaded job has a main thread and zero or more helper threads, with the
main thread participating in the job and then waiting until all helpers have
finished. padata splits the job into units called chunks, where a chunk is a
piece of the job that one thread completes in one call to the thread function.
A user has to do three things to run a multithreaded job. First, describe the
job by defining a padata_mt_job structure, which is explained in the Interface
section. This includes a pointer to the thread function, which padata will
call each time it assigns a job chunk to a thread. Then, define the thread
function, which accepts three arguments, ``start``, ``end``, and ``arg``, where
the first two delimit the range that the thread operates on and the last is a
pointer to the job's shared state, if any. Prepare the shared state, which is
typically allocated on the main thread's stack. Last, call
padata_do_multithreaded(), which will return once the job is finished.
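
For illustration, a minimal multithreaded padata caller might look like the
sketch below (names other than ``struct padata_mt_job`` and
``padata_do_multithreaded()`` are hypothetical; the layout is modeled on the
deferred struct-page initialization caller added elsewhere in this series)::

  #include <linux/padata.h>

  struct my_state {                       /* hypothetical shared job state */
          int *results;
  };

  /* Hypothetical chunk worker: handles items in [start, end). */
  static void my_thread_fn(unsigned long start, unsigned long end, void *arg)
  {
          struct my_state *state = arg;
          unsigned long i;

          for (i = start; i < end; i++)
                  state->results[i] = compute_item(i);    /* assumed helper */
  }

  static void __init run_my_job(struct my_state *state, unsigned long nr_items)
  {
          struct padata_mt_job job = {
                  .thread_fn   = my_thread_fn,
                  .fn_arg      = state,
                  .start       = 0,
                  .size        = nr_items,
                  .align       = 1,       /* no chunk-alignment requirement */
                  .min_chunk   = 1024,    /* too small a job gets no helpers */
                  .max_threads = 4,
          };

          /* Returns only after the main thread and all helpers have finished. */
          padata_do_multithreaded(&job);
  }
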
Interface
=========


@ -1,34 +0,0 @@
#
# Feature name: numa-memblock
# Kconfig: HAVE_MEMBLOCK_NODE_MAP
# description: arch supports NUMA aware memblocks
#
-----------------------
| arch |status|
-----------------------
| alpha: | TODO |
| arc: | .. |
| arm: | .. |
| arm64: | ok |
| c6x: | .. |
| csky: | .. |
| h8300: | .. |
| hexagon: | .. |
| ia64: | ok |
| m68k: | .. |
| microblaze: | ok |
| mips: | ok |
| nds32: | TODO |
| nios2: | .. |
| openrisc: | .. |
| parisc: | .. |
| powerpc: | ok |
| riscv: | ok |
| s390: | ok |
| sh: | ok |
| sparc: | ok |
| um: | .. |
| unicore32: | .. |
| x86: | ok |
| xtensa: | .. |
-----------------------


@ -46,11 +46,10 @@ maps the entire physical memory. For most architectures, the holes
have entries in the `mem_map` array. The `struct page` objects
corresponding to the holes are never fully initialized.
To allocate the `mem_map` array, architecture specific setup code
should call :c:func:`free_area_init_node` function or its convenience
wrapper :c:func:`free_area_init`. Yet, the mappings array is not
usable until the call to :c:func:`memblock_free_all` that hands all
the memory to the page allocator.
To allocate the `mem_map` array, architecture specific setup code should
call :c:func:`free_area_init` function. Yet, the mappings array is not
usable until the call to :c:func:`memblock_free_all` that hands all the
memory to the page allocator.
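
For a flat-memory machine that populates only ``ZONE_NORMAL``, such setup code
can be as small as the sketch below (an illustration following the
``max_zone_pfn[]`` / ``free_area_init()`` pattern that the architecture
conversions later in this series adopt, not any particular architecture's code)::

  #include <linux/memblock.h>
  #include <linux/mm.h>

  void __init paging_init(void)
  {
          unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };

          /* Highest PFN of each populated zone; zones left at 0 are empty. */
          max_zone_pfn[ZONE_NORMAL] = max_low_pfn;

          /* Derives node/zone spans from memblock and sets up mem_map. */
          free_area_init(max_zone_pfn);
  }
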
If an architecture enables `CONFIG_ARCH_HAS_HOLES_MEMORYMODEL` option,
it may free parts of the `mem_map` array that do not cover the


@ -83,8 +83,7 @@ Usage
4) Analyze information from page owner::
cat /sys/kernel/debug/page_owner > page_owner_full.txt
grep -v ^PFN page_owner_full.txt > page_owner.txt
./page_owner_sort page_owner.txt sorted_page_owner.txt
./page_owner_sort page_owner_full.txt sorted_page_owner.txt
See the result about who allocated each page
in the ``sorted_page_owner.txt``.


@ -243,21 +243,17 @@ callback_init(void * kernel_end)
*/
void __init paging_init(void)
{
unsigned long zones_size[MAX_NR_ZONES] = {0, };
unsigned long dma_pfn, high_pfn;
unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, };
unsigned long dma_pfn;
dma_pfn = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
high_pfn = max_pfn = max_low_pfn;
max_pfn = max_low_pfn;
if (dma_pfn >= high_pfn)
zones_size[ZONE_DMA] = high_pfn;
else {
zones_size[ZONE_DMA] = dma_pfn;
zones_size[ZONE_NORMAL] = high_pfn - dma_pfn;
}
max_zone_pfn[ZONE_DMA] = dma_pfn;
max_zone_pfn[ZONE_NORMAL] = max_pfn;
/* Initialize mem_map[]. */
free_area_init(zones_size);
free_area_init(max_zone_pfn);
/* Initialize the kernel's ZERO_PGE. */
memset((void *)ZERO_PGE, 0, PAGE_SIZE);


@ -144,8 +144,8 @@ setup_memory_node(int nid, void *kernel_end)
if (!nid && (node_max_pfn < end_kernel_pfn || node_min_pfn > start_kernel_pfn))
panic("kernel loaded out of ram");
memblock_add(PFN_PHYS(node_min_pfn),
(node_max_pfn - node_min_pfn) << PAGE_SHIFT);
memblock_add_node(PFN_PHYS(node_min_pfn),
(node_max_pfn - node_min_pfn) << PAGE_SHIFT, nid);
/* Zone start phys-addr must be 2^(MAX_ORDER-1) aligned.
Note that we round this down, not up - node memory
@ -202,8 +202,7 @@ setup_memory(void *kernel_end)
void __init paging_init(void)
{
unsigned int nid;
unsigned long zones_size[MAX_NR_ZONES] = {0, };
unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, };
unsigned long dma_local_pfn;
/*
@ -215,19 +214,10 @@ void __init paging_init(void)
*/
dma_local_pfn = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
for_each_online_node(nid) {
unsigned long start_pfn = NODE_DATA(nid)->node_start_pfn;
unsigned long end_pfn = start_pfn + NODE_DATA(nid)->node_present_pages;
max_zone_pfn[ZONE_DMA] = dma_local_pfn;
max_zone_pfn[ZONE_NORMAL] = max_pfn;
if (dma_local_pfn >= end_pfn - start_pfn)
zones_size[ZONE_DMA] = end_pfn - start_pfn;
else {
zones_size[ZONE_DMA] = dma_local_pfn;
zones_size[ZONE_NORMAL] = (end_pfn - start_pfn) - dma_local_pfn;
}
node_set_state(nid, N_NORMAL_MEMORY);
free_area_init_node(nid, zones_size, start_pfn, NULL);
}
free_area_init(max_zone_pfn);
/* Initialize the kernel's ZERO_PGE. */
memset((void *)ZERO_PGE, 0, PAGE_SIZE);


@ -26,7 +26,7 @@ static inline pmd_t pte_pmd(pte_t pte)
#define pmd_mkold(pmd) pte_pmd(pte_mkold(pmd_pte(pmd)))
#define pmd_mkyoung(pmd) pte_pmd(pte_mkyoung(pmd_pte(pmd)))
#define pmd_mkhuge(pmd) pte_pmd(pte_mkhuge(pmd_pte(pmd)))
#define pmd_mknotpresent(pmd) pte_pmd(pte_mknotpresent(pmd_pte(pmd)))
#define pmd_mkinvalid(pmd) pte_pmd(pte_mknotpresent(pmd_pte(pmd)))
#define pmd_mkclean(pmd) pte_pmd(pte_mkclean(pmd_pte(pmd)))
#define pmd_write(pmd) pte_write(pmd_pte(pmd))


@ -63,11 +63,13 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
low_mem_sz = size;
in_use = 1;
memblock_add_node(base, size, 0);
} else {
#ifdef CONFIG_HIGHMEM
high_mem_start = base;
high_mem_sz = size;
in_use = 1;
memblock_add_node(base, size, 1);
#endif
}
@ -75,6 +77,11 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
base, TO_MB(size), !in_use ? "Not used":"");
}
bool arch_has_descending_max_zone_pfns(void)
{
return !IS_ENABLED(CONFIG_ARC_HAS_PAE40);
}
/*
* First memory setup routine called from setup_arch()
* 1. setup swapper's mm @init_mm
@ -83,8 +90,7 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
*/
void __init setup_arch_memory(void)
{
unsigned long zones_size[MAX_NR_ZONES];
unsigned long zones_holes[MAX_NR_ZONES];
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
init_mm.start_code = (unsigned long)_text;
init_mm.end_code = (unsigned long)_etext;
@ -115,7 +121,6 @@ void __init setup_arch_memory(void)
* the crash
*/
memblock_add_node(low_mem_start, low_mem_sz, 0);
memblock_reserve(CONFIG_LINUX_LINK_BASE,
__pa(_end) - CONFIG_LINUX_LINK_BASE);
@ -133,22 +138,7 @@ void __init setup_arch_memory(void)
memblock_dump_all();
/*----------------- node/zones setup --------------------------*/
memset(zones_size, 0, sizeof(zones_size));
memset(zones_holes, 0, sizeof(zones_holes));
zones_size[ZONE_NORMAL] = max_low_pfn - min_low_pfn;
zones_holes[ZONE_NORMAL] = 0;
/*
* We can't use the helper free_area_init(zones[]) because it uses
* PAGE_OFFSET to compute the @min_low_pfn which would be wrong
* when our kernel doesn't start at PAGE_OFFSET, i.e.
* PAGE_OFFSET != CONFIG_LINUX_RAM_BASE
*/
free_area_init_node(0, /* node-id */
zones_size, /* num pages per zone */
min_low_pfn, /* first pfn of node */
zones_holes); /* holes */
max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
#ifdef CONFIG_HIGHMEM
/*
@ -168,20 +158,13 @@ void __init setup_arch_memory(void)
min_high_pfn = PFN_DOWN(high_mem_start);
max_high_pfn = PFN_DOWN(high_mem_start + high_mem_sz);
zones_size[ZONE_NORMAL] = 0;
zones_holes[ZONE_NORMAL] = 0;
zones_size[ZONE_HIGHMEM] = max_high_pfn - min_high_pfn;
zones_holes[ZONE_HIGHMEM] = 0;
free_area_init_node(1, /* node-id */
zones_size, /* num pages per zone */
min_high_pfn, /* first pfn of node */
zones_holes); /* holes */
max_zone_pfn[ZONE_HIGHMEM] = max_high_pfn;
high_memory = (void *)(min_high_pfn << PAGE_SHIFT);
kmap_init();
#endif
free_area_init(max_zone_pfn);
}
/*


@ -14,15 +14,10 @@
#include <asm/hugetlb-3level.h>
#include <asm-generic/hugetlb.h>
static inline int is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr, unsigned long len)
{
return 0;
}
static inline void arch_clear_hugepage_flags(struct page *page)
{
clear_bit(PG_dcache_clean, &page->flags);
}
#define arch_clear_hugepage_flags arch_clear_hugepage_flags
#endif /* _ASM_ARM_HUGETLB_H */


@ -221,7 +221,7 @@ PMD_BIT_FUNC(mkyoung, |= PMD_SECT_AF);
#define pmdp_establish generic_pmdp_establish
/* represent a notpresent pmd by faulting entry, this is used by pmdp_invalidate */
static inline pmd_t pmd_mknotpresent(pmd_t pmd)
static inline pmd_t pmd_mkinvalid(pmd_t pmd)
{
return __pmd(pmd_val(pmd) & ~L_PMD_SECT_VALID);
}


@ -92,18 +92,6 @@ EXPORT_SYMBOL(arm_dma_zone_size);
*/
phys_addr_t arm_dma_limit;
unsigned long arm_dma_pfn_limit;
static void __init arm_adjust_dma_zone(unsigned long *size, unsigned long *hole,
unsigned long dma_size)
{
if (size[0] <= dma_size)
return;
size[ZONE_NORMAL] = size[0] - dma_size;
size[ZONE_DMA] = dma_size;
hole[ZONE_NORMAL] = hole[0];
hole[ZONE_DMA] = 0;
}
#endif
void __init setup_dma_zone(const struct machine_desc *mdesc)
@ -121,56 +109,16 @@ void __init setup_dma_zone(const struct machine_desc *mdesc)
static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
unsigned long max_high)
{
unsigned long zone_size[MAX_NR_ZONES], zhole_size[MAX_NR_ZONES];
struct memblock_region *reg;
/*
* initialise the zones.
*/
memset(zone_size, 0, sizeof(zone_size));
/*
* The memory size has already been determined. If we need
* to do anything fancy with the allocation of this memory
* to the zones, now is the time to do it.
*/
zone_size[0] = max_low - min;
#ifdef CONFIG_HIGHMEM
zone_size[ZONE_HIGHMEM] = max_high - max_low;
#endif
/*
* Calculate the size of the holes.
* holes = node_size - sum(bank_sizes)
*/
memcpy(zhole_size, zone_size, sizeof(zhole_size));
for_each_memblock(memory, reg) {
unsigned long start = memblock_region_memory_base_pfn(reg);
unsigned long end = memblock_region_memory_end_pfn(reg);
if (start < max_low) {
unsigned long low_end = min(end, max_low);
zhole_size[0] -= low_end - start;
}
#ifdef CONFIG_HIGHMEM
if (end > max_low) {
unsigned long high_start = max(start, max_low);
zhole_size[ZONE_HIGHMEM] -= end - high_start;
}
#endif
}
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
#ifdef CONFIG_ZONE_DMA
/*
* Adjust the sizes according to any special requirements for
* this machine type.
*/
if (arm_dma_zone_size)
arm_adjust_dma_zone(zone_size, zhole_size,
arm_dma_zone_size >> PAGE_SHIFT);
max_zone_pfn[ZONE_DMA] = min(arm_dma_pfn_limit, max_low);
#endif
free_area_init_node(0, zone_size, min, zhole_size);
max_zone_pfn[ZONE_NORMAL] = max_low;
#ifdef CONFIG_HIGHMEM
max_zone_pfn[ZONE_HIGHMEM] = max_high;
#endif
free_area_init(max_zone_pfn);
}
#ifdef CONFIG_HAVE_ARCH_PFN_VALID
@ -306,7 +254,7 @@ void __init bootmem_init(void)
sparse_init();
/*
* Now free the memory - free_area_init_node needs
* Now free the memory - free_area_init needs
* the sparse mem_map arrays initialized by sparse_init()
* for memmap_init_zone(), otherwise all PFNs are invalid.
*/


@ -9,6 +9,7 @@ config ARM64
select ACPI_MCFG if (ACPI && PCI)
select ACPI_SPCR_TABLE if ACPI
select ACPI_PPTT if ACPI
select ARCH_HAS_DEBUG_WX
select ARCH_BINFMT_ELF_STATE
select ARCH_HAS_DEBUG_VIRTUAL
select ARCH_HAS_DEVMEM_IS_ALLOWED
@ -162,7 +163,6 @@ config ARM64
select HAVE_GCC_PLUGINS
select HAVE_HW_BREAKPOINT if PERF_EVENTS
select HAVE_IRQ_TIME_ACCOUNTING
select HAVE_MEMBLOCK_NODE_MAP if NUMA
select HAVE_NMI
select HAVE_PATA_PLATFORM
select HAVE_PERF_EVENTS


@ -23,35 +23,6 @@ config ARM64_RANDOMIZE_TEXT_OFFSET
of TEXT_OFFSET and platforms must not require a specific
value.
config DEBUG_WX
bool "Warn on W+X mappings at boot"
select PTDUMP_CORE
---help---
Generate a warning if any W+X mappings are found at boot.
This is useful for discovering cases where the kernel is leaving
W+X mappings after applying NX, as such mappings are a security risk.
This check also includes UXN, which should be set on all kernel
mappings.
Look for a message in dmesg output like this:
arm64/mm: Checked W+X mappings: passed, no W+X pages found.
or like this, if the check failed:
arm64/mm: Checked W+X mappings: FAILED, <N> W+X pages found.
Note that even if the check fails, your kernel is possibly
still fine, as W+X mappings are not a security hole in
themselves, what they do is that they make the exploitation
of other unfixed kernel bugs easier.
There is no runtime or memory usage effect of this option
once the kernel has booted up - it's a one time check.
If in doubt, say "Y".
config DEBUG_EFI
depends on EFI && DEBUG_INFO
bool "UEFI debugging"


@ -17,22 +17,11 @@
extern bool arch_hugetlb_migration_supported(struct hstate *h);
#endif
#define __HAVE_ARCH_HUGE_PTEP_GET
static inline pte_t huge_ptep_get(pte_t *ptep)
{
return READ_ONCE(*ptep);
}
static inline int is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr, unsigned long len)
{
return 0;
}
static inline void arch_clear_hugepage_flags(struct page *page)
{
clear_bit(PG_dcache_clean, &page->flags);
}
#define arch_clear_hugepage_flags arch_clear_hugepage_flags
extern pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
struct page *page, int writable);


@ -366,7 +366,7 @@ static inline int pmd_protnone(pmd_t pmd)
#define pmd_mkclean(pmd) pte_pmd(pte_mkclean(pmd_pte(pmd)))
#define pmd_mkdirty(pmd) pte_pmd(pte_mkdirty(pmd_pte(pmd)))
#define pmd_mkyoung(pmd) pte_pmd(pte_mkyoung(pmd_pte(pmd)))
#define pmd_mknotpresent(pmd) (__pmd(pmd_val(pmd) & ~PMD_SECT_VALID))
#define pmd_mkinvalid(pmd) (__pmd(pmd_val(pmd) & ~PMD_SECT_VALID))
#define pmd_thp_or_huge(pmd) (pmd_huge(pmd) || pmd_trans_huge(pmd))


@ -443,44 +443,30 @@ void huge_ptep_clear_flush(struct vm_area_struct *vma,
clear_flush(vma->vm_mm, addr, ptep, pgsize, ncontig);
}
static void __init add_huge_page_size(unsigned long size)
{
if (size_to_hstate(size))
return;
hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT);
}
static int __init hugetlbpage_init(void)
{
#ifdef CONFIG_ARM64_4K_PAGES
add_huge_page_size(PUD_SIZE);
hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
#endif
add_huge_page_size(CONT_PMD_SIZE);
add_huge_page_size(PMD_SIZE);
add_huge_page_size(CONT_PTE_SIZE);
hugetlb_add_hstate((CONT_PMD_SHIFT + PMD_SHIFT) - PAGE_SHIFT);
hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
hugetlb_add_hstate((CONT_PTE_SHIFT + PAGE_SHIFT) - PAGE_SHIFT);
return 0;
}
arch_initcall(hugetlbpage_init);
static __init int setup_hugepagesz(char *opt)
bool __init arch_hugetlb_valid_size(unsigned long size)
{
unsigned long ps = memparse(opt, &opt);
switch (ps) {
switch (size) {
#ifdef CONFIG_ARM64_4K_PAGES
case PUD_SIZE:
#endif
case CONT_PMD_SIZE:
case PMD_SIZE:
case CONT_PTE_SIZE:
add_huge_page_size(ps);
return 1;
return true;
}
hugetlb_bad_size();
pr_err("hugepagesz: Unsupported page size %lu K\n", ps >> 10);
return 0;
return false;
}
__setup("hugepagesz=", setup_hugepagesz);


@ -192,8 +192,6 @@ static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
return min(offset + (1ULL << zone_bits), memblock_end_of_DRAM());
}
#ifdef CONFIG_NUMA
static void __init zone_sizes_init(unsigned long min, unsigned long max)
{
unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
@ -206,61 +204,9 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
#endif
max_zone_pfns[ZONE_NORMAL] = max;
free_area_init_nodes(max_zone_pfns);
free_area_init(max_zone_pfns);
}
#else
static void __init zone_sizes_init(unsigned long min, unsigned long max)
{
struct memblock_region *reg;
unsigned long zone_size[MAX_NR_ZONES], zhole_size[MAX_NR_ZONES];
unsigned long __maybe_unused max_dma, max_dma32;
memset(zone_size, 0, sizeof(zone_size));
max_dma = max_dma32 = min;
#ifdef CONFIG_ZONE_DMA
max_dma = max_dma32 = PFN_DOWN(arm64_dma_phys_limit);
zone_size[ZONE_DMA] = max_dma - min;
#endif
#ifdef CONFIG_ZONE_DMA32
max_dma32 = PFN_DOWN(arm64_dma32_phys_limit);
zone_size[ZONE_DMA32] = max_dma32 - max_dma;
#endif
zone_size[ZONE_NORMAL] = max - max_dma32;
memcpy(zhole_size, zone_size, sizeof(zhole_size));
for_each_memblock(memory, reg) {
unsigned long start = memblock_region_memory_base_pfn(reg);
unsigned long end = memblock_region_memory_end_pfn(reg);
#ifdef CONFIG_ZONE_DMA
if (start >= min && start < max_dma) {
unsigned long dma_end = min(end, max_dma);
zhole_size[ZONE_DMA] -= dma_end - start;
start = dma_end;
}
#endif
#ifdef CONFIG_ZONE_DMA32
if (start >= max_dma && start < max_dma32) {
unsigned long dma32_end = min(end, max_dma32);
zhole_size[ZONE_DMA32] -= dma32_end - start;
start = dma32_end;
}
#endif
if (start >= max_dma32 && start < max) {
unsigned long normal_end = min(end, max);
zhole_size[ZONE_NORMAL] -= normal_end - start;
}
}
free_area_init_node(0, zone_size, min, zhole_size);
}
#endif /* CONFIG_NUMA */
int pfn_valid(unsigned long pfn)
{
phys_addr_t addr = pfn << PAGE_SHIFT;


@ -350,13 +350,16 @@ static int __init numa_register_nodes(void)
struct memblock_region *mblk;
/* Check that valid nid is set to memblks */
for_each_memblock(memory, mblk)
if (mblk->nid == NUMA_NO_NODE || mblk->nid >= MAX_NUMNODES) {
for_each_memblock(memory, mblk) {
int mblk_nid = memblock_get_region_node(mblk);
if (mblk_nid == NUMA_NO_NODE || mblk_nid >= MAX_NUMNODES) {
pr_warn("Warning: invalid memblk node %d [mem %#010Lx-%#010Lx]\n",
mblk->nid, mblk->base,
mblk_nid, mblk->base,
mblk->base + mblk->size - 1);
return -EINVAL;
}
}
/* Finally register nodes. */
for_each_node_mask(nid, numa_nodes_parsed) {


@ -33,7 +33,7 @@ EXPORT_SYMBOL(empty_zero_page);
void __init paging_init(void)
{
struct pglist_data *pgdat = NODE_DATA(0);
unsigned long zones_size[MAX_NR_ZONES] = {0, };
unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, };
empty_zero_page = (unsigned long) memblock_alloc(PAGE_SIZE,
PAGE_SIZE);
@ -49,11 +49,9 @@ void __init paging_init(void)
/*
* Define zones
*/
zones_size[ZONE_NORMAL] = (memory_end - PAGE_OFFSET) >> PAGE_SHIFT;
pgdat->node_zones[ZONE_NORMAL].zone_start_pfn =
__pa(PAGE_OFFSET) >> PAGE_SHIFT;
max_zone_pfn[ZONE_NORMAL] = memory_end >> PAGE_SHIFT;
free_area_init(zones_size);
free_area_init(max_zone_pfn);
}
void __init mem_init(void)


@ -26,7 +26,9 @@ struct screen_info screen_info = {
static void __init csky_memblock_init(void)
{
unsigned long zone_size[MAX_NR_ZONES];
unsigned long lowmem_size = PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET);
unsigned long sseg_size = PFN_DOWN(SSEG_SIZE - PHYS_OFFSET_OFFSET);
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
signed long size;
memblock_reserve(__pa(_stext), _end - _stext);
@ -36,28 +38,22 @@ static void __init csky_memblock_init(void)
memblock_dump_all();
memset(zone_size, 0, sizeof(zone_size));
min_low_pfn = PFN_UP(memblock_start_of_DRAM());
max_low_pfn = max_pfn = PFN_DOWN(memblock_end_of_DRAM());
size = max_pfn - min_low_pfn;
if (size <= PFN_DOWN(SSEG_SIZE - PHYS_OFFSET_OFFSET))
zone_size[ZONE_NORMAL] = size;
else if (size < PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET)) {
zone_size[ZONE_NORMAL] =
PFN_DOWN(SSEG_SIZE - PHYS_OFFSET_OFFSET);
max_low_pfn = min_low_pfn + zone_size[ZONE_NORMAL];
} else {
zone_size[ZONE_NORMAL] =
PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET);
max_low_pfn = min_low_pfn + zone_size[ZONE_NORMAL];
if (size >= lowmem_size) {
max_low_pfn = min_low_pfn + lowmem_size;
write_mmu_msa1(read_mmu_msa0() + SSEG_SIZE);
} else if (size > sseg_size) {
max_low_pfn = min_low_pfn + sseg_size;
}
max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
#ifdef CONFIG_HIGHMEM
zone_size[ZONE_HIGHMEM] = max_pfn - max_low_pfn;
max_zone_pfn[ZONE_HIGHMEM] = max_pfn;
highstart_pfn = max_low_pfn;
highend_pfn = max_pfn;
@ -66,7 +62,7 @@ static void __init csky_memblock_init(void)
dma_contiguous_reserve(0);
free_area_init_node(0, zone_size, min_low_pfn, NULL);
free_area_init(max_zone_pfn);
}
void __init setup_arch(char **cmdline_p)


@ -83,10 +83,10 @@ void __init paging_init(void)
start_mem, end_mem);
{
unsigned long zones_size[MAX_NR_ZONES] = {0, };
unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, };
zones_size[ZONE_NORMAL] = (end_mem - PAGE_OFFSET) >> PAGE_SHIFT;
free_area_init(zones_size);
max_zone_pfn[ZONE_NORMAL] = end_mem >> PAGE_SHIFT;
free_area_init(max_zone_pfn);
}
}


@ -91,7 +91,7 @@ void sync_icache_dcache(pte_t pte)
*/
void __init paging_init(void)
{
unsigned long zones_sizes[MAX_NR_ZONES] = {0, };
unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, };
/*
* This is not particularly well documented anywhere, but
@ -101,9 +101,9 @@ void __init paging_init(void)
* adjust accordingly.
*/
zones_sizes[ZONE_NORMAL] = max_low_pfn;
max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
free_area_init(zones_sizes); /* sets up the zonelists and mem_map */
free_area_init(max_zone_pfn); /* sets up the zonelists and mem_map */
/*
* Start of high memory area. Will probably need something more


@ -31,7 +31,6 @@ config IA64
select HAVE_FUNCTION_TRACER
select TTY
select HAVE_ARCH_TRACEHOOK
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_VIRT_CPU_ACCOUNTING
select DMA_NONCOHERENT_MMAP
select ARCH_HAS_SYNC_DMA_FOR_CPU


@ -20,6 +20,7 @@ static inline int is_hugepage_only_range(struct mm_struct *mm,
return (REGION_NUMBER(addr) == RGN_HPAGE ||
REGION_NUMBER((addr)+(len)-1) == RGN_HPAGE);
}
#define is_hugepage_only_range is_hugepage_only_range
#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
@ -27,10 +28,6 @@ static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
{
}
static inline void arch_clear_hugepage_flags(struct page *page)
{
}
#include <asm-generic/hugetlb.h>
#endif /* _ASM_IA64_HUGETLB_H */


@ -210,6 +210,6 @@ paging_init (void)
printk("Virtual mem_map starts at 0x%p\n", mem_map);
}
#endif /* !CONFIG_VIRTUAL_MEM_MAP */
free_area_init_nodes(max_zone_pfns);
free_area_init(max_zone_pfns);
zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
}


@ -627,7 +627,7 @@ void __init paging_init(void)
max_zone_pfns[ZONE_DMA32] = max_dma;
#endif
max_zone_pfns[ZONE_NORMAL] = max_pfn;
free_area_init_nodes(max_zone_pfns);
free_area_init(max_zone_pfns);
zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
}


@ -84,7 +84,7 @@ void __init paging_init(void)
* page_alloc get different views of the world.
*/
unsigned long end_mem = memory_end & PAGE_MASK;
unsigned long zones_size[MAX_NR_ZONES] = { 0, };
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
high_memory = (void *) end_mem;
@ -98,8 +98,8 @@ void __init paging_init(void)
*/
set_fs (USER_DS);
zones_size[ZONE_DMA] = (end_mem - PAGE_OFFSET) >> PAGE_SHIFT;
free_area_init(zones_size);
max_zone_pfn[ZONE_DMA] = end_mem >> PAGE_SHIFT;
free_area_init(max_zone_pfn);
}
#endif /* CONFIG_MMU */


@ -39,7 +39,7 @@ void __init paging_init(void)
pte_t *pg_table;
unsigned long address, size;
unsigned long next_pgtable, bootmem_end;
unsigned long zones_size[MAX_NR_ZONES];
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
enum zone_type zone;
int i;
@ -80,11 +80,8 @@ void __init paging_init(void)
}
current->mm = NULL;
for (zone = 0; zone < MAX_NR_ZONES; zone++)
zones_size[zone] = 0x0;
zones_size[ZONE_DMA] = num_pages;
free_area_init(zones_size);
max_zone_pfn[ZONE_DMA] = PFN_DOWN(_ramend);
free_area_init(max_zone_pfn);
}
int cf_tlb_miss(struct pt_regs *regs, int write, int dtlb, int extension_word)


@ -365,7 +365,7 @@ static void __init map_node(int node)
*/
void __init paging_init(void)
{
unsigned long zones_size[MAX_NR_ZONES] = { 0, };
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
unsigned long min_addr, max_addr;
unsigned long addr;
int i;
@ -386,7 +386,7 @@ void __init paging_init(void)
min_addr = m68k_memory[0].addr;
max_addr = min_addr + m68k_memory[0].size;
memblock_add(m68k_memory[0].addr, m68k_memory[0].size);
memblock_add_node(m68k_memory[0].addr, m68k_memory[0].size, 0);
for (i = 1; i < m68k_num_memory;) {
if (m68k_memory[i].addr < min_addr) {
printk("Ignoring memory chunk at 0x%lx:0x%lx before the first chunk\n",
@ -397,7 +397,7 @@ void __init paging_init(void)
(m68k_num_memory - i) * sizeof(struct m68k_mem_info));
continue;
}
memblock_add(m68k_memory[i].addr, m68k_memory[i].size);
memblock_add_node(m68k_memory[i].addr, m68k_memory[i].size, i);
addr = m68k_memory[i].addr + m68k_memory[i].size;
if (addr > max_addr)
max_addr = addr;
@ -448,11 +448,10 @@ void __init paging_init(void)
#ifdef DEBUG
printk ("before free_area_init\n");
#endif
for (i = 0; i < m68k_num_memory; i++) {
zones_size[ZONE_DMA] = m68k_memory[i].size >> PAGE_SHIFT;
free_area_init_node(i, zones_size,
m68k_memory[i].addr >> PAGE_SHIFT, NULL);
for (i = 0; i < m68k_num_memory; i++)
if (node_present_pages(i))
node_set_state(i, N_NORMAL_MEMORY);
}
max_zone_pfn[ZONE_DMA] = memblock_end_of_DRAM();
free_area_init(max_zone_pfn);
}


@ -42,7 +42,7 @@ void __init paging_init(void)
unsigned long address;
unsigned long next_pgtable;
unsigned long bootmem_end;
unsigned long zones_size[MAX_NR_ZONES] = { 0, };
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
unsigned long size;
empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
@ -89,14 +89,10 @@ void __init paging_init(void)
current->mm = NULL;
/* memory sizing is a hack stolen from motorola.c.. hope it works for us */
zones_size[ZONE_DMA] = ((unsigned long)high_memory - PAGE_OFFSET) >> PAGE_SHIFT;
max_zone_pfn[ZONE_DMA] = ((unsigned long)high_memory) >> PAGE_SHIFT;
/* I really wish I knew why the following change made things better... -- Sam */
/* free_area_init(zones_size); */
free_area_init_node(0, zones_size,
(__pa(PAGE_OFFSET) >> PAGE_SHIFT) + 1, NULL);
free_area_init(max_zone_pfn);
}


@ -32,7 +32,6 @@ config MICROBLAZE
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_OPROFILE
select HAVE_PCI
select IRQ_DOMAIN


@ -112,7 +112,7 @@ static void __init paging_init(void)
#endif
/* We don't have holes in memory map */
free_area_init_nodes(zones_size);
free_area_init(zones_size);
}
void __init setup_memory(void)


@ -72,7 +72,6 @@ config MIPS
select HAVE_KPROBES
select HAVE_KRETPROBES
select HAVE_LD_DEAD_CODE_DATA_ELIMINATION
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI
select HAVE_OPROFILE


@ -11,13 +11,6 @@
#include <asm/page.h>
static inline int is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr,
unsigned long len)
{
return 0;
}
#define __HAVE_ARCH_PREPARE_HUGEPAGE_RANGE
static inline int prepare_hugepage_range(struct file *file,
unsigned long addr,
@ -82,10 +75,6 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
return changed;
}
static inline void arch_clear_hugepage_flags(struct page *page)
{
}
#include <asm-generic/hugetlb.h>
#endif /* __ASM_HUGETLB_H */


@ -705,7 +705,7 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
return pmd;
}
static inline pmd_t pmd_mknotpresent(pmd_t pmd)
static inline pmd_t pmd_mkinvalid(pmd_t pmd)
{
pmd_val(pmd) &= ~(_PAGE_PRESENT | _PAGE_VALID | _PAGE_DIRTY);


@ -247,7 +247,7 @@ void __init paging_init(void)
zones_size[ZONE_DMA32] = MAX_DMA32_PFN;
#endif
zones_size[ZONE_NORMAL] = max_low_pfn;
free_area_init_nodes(zones_size);
free_area_init(zones_size);
}
void __init mem_init(void)


@ -424,7 +424,7 @@ void __init paging_init(void)
}
#endif
free_area_init_nodes(max_zone_pfns);
free_area_init(max_zone_pfns);
}
#ifdef CONFIG_64BIT


@ -419,7 +419,7 @@ void __init paging_init(void)
pagetable_init();
zones_size[ZONE_NORMAL] = max_low_pfn;
free_area_init_nodes(zones_size);
free_area_init(zones_size);
}
void __init mem_init(void)


@ -31,16 +31,13 @@ EXPORT_SYMBOL(empty_zero_page);
static void __init zone_sizes_init(void)
{
unsigned long zones_size[MAX_NR_ZONES];
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
/* Clear the zone sizes */
memset(zones_size, 0, sizeof(zones_size));
zones_size[ZONE_NORMAL] = max_low_pfn;
max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
#ifdef CONFIG_HIGHMEM
zones_size[ZONE_HIGHMEM] = max_pfn;
max_zone_pfn[ZONE_HIGHMEM] = max_pfn;
#endif
free_area_init(zones_size);
free_area_init(max_zone_pfn);
}


@ -46,17 +46,15 @@ pgd_t *pgd_current;
*/
void __init paging_init(void)
{
unsigned long zones_size[MAX_NR_ZONES];
memset(zones_size, 0, sizeof(zones_size));
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
pagetable_init();
pgd_current = swapper_pg_dir;
zones_size[ZONE_NORMAL] = max_mapnr;
max_zone_pfn[ZONE_NORMAL] = max_mapnr;
/* pass the memory from the bootmem allocator to the main allocator */
free_area_init(zones_size);
free_area_init(max_zone_pfn);
flush_dcache_range((unsigned long)empty_zero_page,
(unsigned long)empty_zero_page + PAGE_SIZE);


@ -45,17 +45,14 @@ DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
static void __init zone_sizes_init(void)
{
unsigned long zones_size[MAX_NR_ZONES];
/* Clear the zone sizes */
memset(zones_size, 0, sizeof(zones_size));
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
/*
* We use only ZONE_NORMAL
*/
zones_size[ZONE_NORMAL] = max_low_pfn;
max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
free_area_init(zones_size);
free_area_init(max_zone_pfn);
}
extern const char _s_kernel_ro[], _e_kernel_ro[];


@ -12,12 +12,6 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep);
static inline int is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr,
unsigned long len) {
return 0;
}
/*
* If the arch doesn't supply something else, assume that hugepage
* size aligned regions are ok without further preparation.
@ -48,10 +42,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep,
pte_t pte, int dirty);
static inline void arch_clear_hugepage_flags(struct page *page)
{
}
#include <asm-generic/hugetlb.h>
#endif /* _ASM_PARISC64_HUGETLB_H */


@ -675,27 +675,11 @@ static void __init gateway_init(void)
static void __init parisc_bootmem_free(void)
{
unsigned long zones_size[MAX_NR_ZONES] = { 0, };
unsigned long holes_size[MAX_NR_ZONES] = { 0, };
unsigned long mem_start_pfn = ~0UL, mem_end_pfn = 0, mem_size_pfn = 0;
int i;
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
for (i = 0; i < npmem_ranges; i++) {
unsigned long start = pmem_ranges[i].start_pfn;
unsigned long size = pmem_ranges[i].pages;
unsigned long end = start + size;
max_zone_pfn[0] = memblock_end_of_DRAM();
if (mem_start_pfn > start)
mem_start_pfn = start;
if (mem_end_pfn < end)
mem_end_pfn = end;
mem_size_pfn += size;
}
zones_size[0] = mem_end_pfn - mem_start_pfn;
holes_size[0] = zones_size[0] - mem_size_pfn;
free_area_init_node(0, zones_size, mem_start_pfn, holes_size);
free_area_init(max_zone_pfn);
}
void __init paging_init(void)


@ -211,7 +211,6 @@ config PPC
select HAVE_KRETPROBES
select HAVE_LD_DEAD_CODE_DATA_ELIMINATION
select HAVE_LIVEPATCH if HAVE_DYNAMIC_FTRACE_WITH_REGS
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI if PERF_EVENTS || (PPC64 && PPC_BOOK3S)
select HAVE_HARDLOCKUP_DETECTOR_ARCH if (PPC64 && PPC_BOOK3S)
@ -687,15 +686,6 @@ config ARCH_MEMORY_PROBE
def_bool y
depends on MEMORY_HOTPLUG
# Some NUMA nodes have memory ranges that span
# other nodes. Even though a pfn is valid and
# between a node's start and end pfns, it may not
# reside on that node. See memmap_init_zone()
# for details.
config NODES_SPAN_OTHER_NODES
def_bool y
depends on NEED_MULTIPLE_NODES
config STDBINUTILS
bool "Using standard binutils settings"
depends on 44x


@ -1168,10 +1168,6 @@ static inline int pmd_large(pmd_t pmd)
return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
}
static inline pmd_t pmd_mknotpresent(pmd_t pmd)
{
return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT);
}
/*
* For radix we should always find H_PAGE_HASHPTE zero. Hence
* the below will work for radix too


@ -30,6 +30,7 @@ static inline int is_hugepage_only_range(struct mm_struct *mm,
return slice_is_hugepage_only_range(mm, addr, len);
return 0;
}
#define is_hugepage_only_range is_hugepage_only_range
#define __HAVE_ARCH_HUGETLB_FREE_PGD_RANGE
void hugetlb_free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
@ -60,10 +61,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep,
pte_t pte, int dirty);
static inline void arch_clear_hugepage_flags(struct page *page)
{
}
#include <asm-generic/hugetlb.h>
#else /* ! CONFIG_HUGETLB_PAGE */


@ -558,7 +558,7 @@ unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
return vma_kernel_pagesize(vma);
}
static int __init add_huge_page_size(unsigned long long size)
bool __init arch_hugetlb_valid_size(unsigned long size)
{
int shift = __ffs(size);
int mmu_psize;
@ -566,38 +566,28 @@ static int __init add_huge_page_size(unsigned long long size)
/* Check that it is a page size supported by the hardware and
* that it fits within pagetable and slice limits. */
if (size <= PAGE_SIZE || !is_power_of_2(size))
return -EINVAL;
return false;
mmu_psize = check_and_get_huge_psize(shift);
if (mmu_psize < 0)
return -EINVAL;
return false;
BUG_ON(mmu_psize_defs[mmu_psize].shift != shift);
/* Return if huge page size has already been setup */
if (size_to_hstate(size))
return 0;
return true;
}
static int __init add_huge_page_size(unsigned long long size)
{
int shift = __ffs(size);
if (!arch_hugetlb_valid_size((unsigned long)size))
return -EINVAL;
hugetlb_add_hstate(shift - PAGE_SHIFT);
return 0;
}
static int __init hugepage_setup_sz(char *str)
{
unsigned long long size;
size = memparse(str, &str);
if (add_huge_page_size(size) != 0) {
hugetlb_bad_size();
pr_err("Invalid huge page size specified(%llu)\n", size);
}
return 1;
}
__setup("hugepagesz=", hugepage_setup_sz);
static int __init hugetlbpage_init(void)
{
bool configured = false;


@ -271,7 +271,7 @@ void __init paging_init(void)
max_zone_pfns[ZONE_HIGHMEM] = max_pfn;
#endif
free_area_init_nodes(max_zone_pfns);
free_area_init(max_zone_pfns);
mark_nonram_nosave();
}


@ -16,6 +16,7 @@ config RISCV
select OF_EARLY_FLATTREE
select OF_IRQ
select ARCH_HAS_BINFMT_FLAT
select ARCH_HAS_DEBUG_WX
select ARCH_WANT_FRAME_POINTERS
select CLONE_BACKWARDS
select COMMON_CLK
@ -32,7 +33,6 @@ config RISCV
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ASM_MODVERSIONS
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_DMA_CONTIGUOUS if MMU
select HAVE_FUTEX_CMPXCHG if FUTEX
select HAVE_PERF_EVENTS


@ -5,14 +5,4 @@
#include <asm-generic/hugetlb.h>
#include <asm/page.h>
static inline int is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr,
unsigned long len) {
return 0;
}
static inline void arch_clear_hugepage_flags(struct page *page)
{
}
#endif /* _ASM_RISCV_HUGETLB_H */


@ -8,4 +8,15 @@
void ptdump_check_wx(void);
#ifdef CONFIG_DEBUG_WX
static inline void debug_checkwx(void)
{
ptdump_check_wx();
}
#else
static inline void debug_checkwx(void)
{
}
#endif
#endif /* _ASM_RISCV_PTDUMP_H */


@ -12,29 +12,21 @@ int pmd_huge(pmd_t pmd)
return pmd_leaf(pmd);
}
static __init int setup_hugepagesz(char *opt)
bool __init arch_hugetlb_valid_size(unsigned long size)
{
unsigned long ps = memparse(opt, &opt);
if (ps == HPAGE_SIZE) {
hugetlb_add_hstate(HPAGE_SHIFT - PAGE_SHIFT);
} else if (IS_ENABLED(CONFIG_64BIT) && ps == PUD_SIZE) {
hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
} else {
hugetlb_bad_size();
pr_err("hugepagesz: Unsupported page size %lu M\n", ps >> 20);
return 0;
}
return 1;
if (size == HPAGE_SIZE)
return true;
else if (IS_ENABLED(CONFIG_64BIT) && size == PUD_SIZE)
return true;
else
return false;
}
__setup("hugepagesz=", setup_hugepagesz);
#ifdef CONFIG_CONTIG_ALLOC
static __init int gigantic_pages_init(void)
{
/* With CONTIG_ALLOC, we can allocate gigantic pages at runtime */
if (IS_ENABLED(CONFIG_64BIT) && !size_to_hstate(1UL << PUD_SHIFT))
if (IS_ENABLED(CONFIG_64BIT))
hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
return 0;
}


@ -19,6 +19,7 @@
#include <asm/sections.h>
#include <asm/pgtable.h>
#include <asm/io.h>
#include <asm/ptdump.h>
#include "../kernel/head.h"
@ -39,7 +40,7 @@ static void __init zone_sizes_init(void)
#endif
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
free_area_init_nodes(max_zone_pfns);
free_area_init(max_zone_pfns);
}
static void setup_zero_page(void)
@ -514,6 +515,8 @@ void mark_rodata_ro(void)
set_memory_ro(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
set_memory_nx(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
set_memory_nx(data_start, (max_low - data_start) >> PAGE_SHIFT);
debug_checkwx();
}
#endif


@ -162,7 +162,6 @@ config S390
select HAVE_LIVEPATCH
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_MEMBLOCK_PHYS_MAP
select MMU_GATHER_NO_GATHER
select HAVE_MOD_ARCH_SPECIFIC


@ -21,13 +21,6 @@ pte_t huge_ptep_get(pte_t *ptep);
pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep);
static inline bool is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr,
unsigned long len)
{
return false;
}
/*
* If the arch doesn't supply something else, assume that hugepage
* size aligned regions are ok without further preparation.
@ -46,6 +39,7 @@ static inline void arch_clear_hugepage_flags(struct page *page)
{
clear_bit(PG_arch_1, &page->flags);
}
#define arch_clear_hugepage_flags arch_clear_hugepage_flags
static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, unsigned long sz)


@ -254,25 +254,15 @@ follow_huge_pud(struct mm_struct *mm, unsigned long address,
return pud_page(*pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
}
static __init int setup_hugepagesz(char *opt)
bool __init arch_hugetlb_valid_size(unsigned long size)
{
unsigned long size;
char *string = opt;
size = memparse(opt, &opt);
if (MACHINE_HAS_EDAT1 && size == PMD_SIZE) {
hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
} else if (MACHINE_HAS_EDAT2 && size == PUD_SIZE) {
hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
} else {
hugetlb_bad_size();
pr_err("hugepagesz= specifies an unsupported page size %s\n",
string);
return 0;
}
return 1;
if (MACHINE_HAS_EDAT1 && size == PMD_SIZE)
return true;
else if (MACHINE_HAS_EDAT2 && size == PUD_SIZE)
return true;
else
return false;
}
__setup("hugepagesz=", setup_hugepagesz);
static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,
unsigned long addr, unsigned long len,


@ -122,7 +122,7 @@ void __init paging_init(void)
memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
max_zone_pfns[ZONE_DMA] = PFN_DOWN(MAX_DMA_ADDRESS);
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
free_area_init_nodes(max_zone_pfns);
free_area_init(max_zone_pfns);
}
void mark_rodata_ro(void)


@ -9,7 +9,6 @@ config SUPERH
select CLKDEV_LOOKUP
select DMA_DECLARE_COHERENT
select HAVE_IDE if HAS_IOPORT_MAP
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_OPROFILE
select HAVE_ARCH_TRACEHOOK
select HAVE_PERF_EVENTS


@ -5,12 +5,6 @@
#include <asm/cacheflush.h>
#include <asm/page.h>
static inline int is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr,
unsigned long len) {
return 0;
}
/*
* If the arch doesn't supply something else, assume that hugepage
* size aligned regions are ok without further preparation.
@ -36,6 +30,7 @@ static inline void arch_clear_hugepage_flags(struct page *page)
{
clear_bit(PG_dcache_clean, &page->flags);
}
#define arch_clear_hugepage_flags arch_clear_hugepage_flags
#include <asm-generic/hugetlb.h>


@ -334,7 +334,7 @@ void __init paging_init(void)
memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
free_area_init_nodes(max_zone_pfns);
free_area_init(max_zone_pfns);
}
unsigned int mem_init_done = 0;


@ -65,7 +65,6 @@ config SPARC64
select HAVE_KRETPROBES
select HAVE_KPROBES
select MMU_GATHER_RCU_TABLE_FREE if SMP
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_ARCH_TRANSPARENT_HUGEPAGE
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD
@ -287,15 +286,6 @@ config NODES_SHIFT
Specify the maximum number of NUMA Nodes available on the target
system. Increases memory reserved to accommodate various tables.
# Some NUMA nodes have memory ranges that span
# other nodes. Even though a pfn is valid and
# between a node's start and end pfns, it may not
# reside on that node. See memmap_init_zone()
# for details.
config NODES_SPAN_OTHER_NODES
def_bool y
depends on NEED_MULTIPLE_NODES
config ARCH_SPARSEMEM_ENABLE
def_bool y if SPARC64
select SPARSEMEM_VMEMMAP_ENABLE


@ -20,12 +20,6 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep);
static inline int is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr,
unsigned long len) {
return 0;
}
#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep)
@ -53,10 +47,6 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
return changed;
}
static inline void arch_clear_hugepage_flags(struct page *page)
{
}
#define __HAVE_ARCH_HUGETLB_FREE_PGD_RANGE
void hugetlb_free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
unsigned long end, unsigned long floor,


@ -193,6 +193,7 @@ unsigned long __init bootmem_init(unsigned long *pages_avail)
/* Reserve the kernel text/data/bss. */
size = (start_pfn << PAGE_SHIFT) - phys_base;
memblock_reserve(phys_base, size);
memblock_add(phys_base, size);
size = memblock_phys_mem_size() - memblock_reserved_size();
*pages_avail = (size >> PAGE_SHIFT) - high_pages;


@ -325,23 +325,12 @@ static void __update_mmu_tsb_insert(struct mm_struct *mm, unsigned long tsb_inde
}
#ifdef CONFIG_HUGETLB_PAGE
static void __init add_huge_page_size(unsigned long size)
{
unsigned int order;
if (size_to_hstate(size))
return;
order = ilog2(size) - PAGE_SHIFT;
hugetlb_add_hstate(order);
}
static int __init hugetlbpage_init(void)
{
add_huge_page_size(1UL << HPAGE_64K_SHIFT);
add_huge_page_size(1UL << HPAGE_SHIFT);
add_huge_page_size(1UL << HPAGE_256MB_SHIFT);
add_huge_page_size(1UL << HPAGE_2GB_SHIFT);
hugetlb_add_hstate(HPAGE_64K_SHIFT - PAGE_SHIFT);
hugetlb_add_hstate(HPAGE_SHIFT - PAGE_SHIFT);
hugetlb_add_hstate(HPAGE_256MB_SHIFT - PAGE_SHIFT);
hugetlb_add_hstate(HPAGE_2GB_SHIFT - PAGE_SHIFT);
return 0;
}
@ -360,16 +349,11 @@ static void __init pud_huge_patch(void)
__asm__ __volatile__("flush %0" : : "r" (addr));
}
static int __init setup_hugepagesz(char *string)
bool __init arch_hugetlb_valid_size(unsigned long size)
{
unsigned long long hugepage_size;
unsigned int hugepage_shift;
unsigned int hugepage_shift = ilog2(size);
unsigned short hv_pgsz_idx;
unsigned int hv_pgsz_mask;
int rc = 0;
hugepage_size = memparse(string, &string);
hugepage_shift = ilog2(hugepage_size);
switch (hugepage_shift) {
case HPAGE_16GB_SHIFT:
@ -397,20 +381,11 @@ static int __init setup_hugepagesz(char *string)
hv_pgsz_mask = 0;
}
if ((hv_pgsz_mask & cpu_pgsz_mask) == 0U) {
hugetlb_bad_size();
pr_err("hugepagesz=%llu not supported by MMU.\n",
hugepage_size);
goto out;
}
if ((hv_pgsz_mask & cpu_pgsz_mask) == 0U)
return false;
add_huge_page_size(hugepage_size);
rc = 1;
out:
return rc;
return true;
}
__setup("hugepagesz=", setup_hugepagesz);
#endif /* CONFIG_HUGETLB_PAGE */
void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
@ -2488,7 +2463,7 @@ void __init paging_init(void)
max_zone_pfns[ZONE_NORMAL] = end_pfn;
free_area_init_nodes(max_zone_pfns);
free_area_init(max_zone_pfns);
}
printk("Booting Linux...\n");


@ -1008,24 +1008,13 @@ void __init srmmu_paging_init(void)
kmap_init();
{
unsigned long zones_size[MAX_NR_ZONES];
unsigned long zholes_size[MAX_NR_ZONES];
unsigned long npages;
int znum;
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
for (znum = 0; znum < MAX_NR_ZONES; znum++)
zones_size[znum] = zholes_size[znum] = 0;
max_zone_pfn[ZONE_DMA] = max_low_pfn;
max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
max_zone_pfn[ZONE_HIGHMEM] = highend_pfn;
npages = max_low_pfn - pfn_base;
zones_size[ZONE_DMA] = npages;
zholes_size[ZONE_DMA] = npages - pages_avail;
npages = highend_pfn - max_low_pfn;
zones_size[ZONE_HIGHMEM] = npages;
zholes_size[ZONE_HIGHMEM] = npages - calc_highpages();
free_area_init_node(0, zones_size, pfn_base, zholes_size);
free_area_init(max_zone_pfn);
}
}


@ -158,8 +158,8 @@ static void __init fixaddr_user_init( void)
void __init paging_init(void)
{
unsigned long zones_size[MAX_NR_ZONES], vaddr;
int i;
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
unsigned long vaddr;
empty_zero_page = (unsigned long *) memblock_alloc_low(PAGE_SIZE,
PAGE_SIZE);
@ -167,12 +167,8 @@ void __init paging_init(void)
panic("%s: Failed to allocate %lu bytes align=%lx\n",
__func__, PAGE_SIZE, PAGE_SIZE);
for (i = 0; i < ARRAY_SIZE(zones_size); i++)
zones_size[i] = 0;
zones_size[ZONE_NORMAL] = (end_iomem >> PAGE_SHIFT) -
(uml_physmem >> PAGE_SHIFT);
free_area_init(zones_size);
max_zone_pfn[ZONE_NORMAL] = end_iomem >> PAGE_SHIFT;
free_area_init(max_zone_pfn);
/*
* Fixed mappings, only the page table structure has to be

View file

@ -60,7 +60,7 @@
#ifndef __ASSEMBLY__
#ifndef arch_adjust_zones
#define arch_adjust_zones(size, holes) do { } while (0)
#define arch_adjust_zones(max_zone_pfn) do { } while (0)
#endif
/*

View file

@ -25,10 +25,10 @@
#if !defined(__ASSEMBLY__) && defined(CONFIG_PCI)
void puv3_pci_adjust_zones(unsigned long *size, unsigned long *holes);
void puv3_pci_adjust_zones(unsigned long *max_zone_pfn);
#define arch_adjust_zones(size, holes) \
puv3_pci_adjust_zones(size, holes)
#define arch_adjust_zones(max_zone_pfn) \
puv3_pci_adjust_zones(max_zone_pfn)
#endif

View file

@ -133,21 +133,11 @@ static int pci_puv3_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
* This is really ugly and we need a better way of specifying
* DMA-capable regions of memory.
*/
void __init puv3_pci_adjust_zones(unsigned long *zone_size,
unsigned long *zhole_size)
void __init puv3_pci_adjust_zones(unsigned long max_zone_pfn)
{
unsigned int sz = SZ_128M >> PAGE_SHIFT;
/*
* Only adjust if > 128M on current system
*/
if (zone_size[0] <= sz)
return;
zone_size[1] = zone_size[0] - sz;
zone_size[0] = sz;
zhole_size[1] = zhole_size[0];
zhole_size[0] = 0;
max_zone_pfn[ZONE_DMA] = sz;
}
/*

View file

@ -61,46 +61,21 @@ static void __init find_limits(unsigned long *min, unsigned long *max_low,
}
}
static void __init uc32_bootmem_free(unsigned long min, unsigned long max_low,
unsigned long max_high)
static void __init uc32_bootmem_free(unsigned long max_low)
{
unsigned long zone_size[MAX_NR_ZONES], zhole_size[MAX_NR_ZONES];
struct memblock_region *reg;
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
/*
* initialise the zones.
*/
memset(zone_size, 0, sizeof(zone_size));
/*
* The memory size has already been determined. If we need
* to do anything fancy with the allocation of this memory
* to the zones, now is the time to do it.
*/
zone_size[0] = max_low - min;
/*
* Calculate the size of the holes.
* holes = node_size - sum(bank_sizes)
*/
memcpy(zhole_size, zone_size, sizeof(zhole_size));
for_each_memblock(memory, reg) {
unsigned long start = memblock_region_memory_base_pfn(reg);
unsigned long end = memblock_region_memory_end_pfn(reg);
if (start < max_low) {
unsigned long low_end = min(end, max_low);
zhole_size[0] -= low_end - start;
}
}
max_zone_pfn[ZONE_DMA] = max_low;
max_zone_pfn[ZONE_NORMAL] = max_low;
/*
* Adjust the sizes according to any special requirements for
* this machine type.
* This might lower ZONE_DMA limit.
*/
arch_adjust_zones(zone_size, zhole_size);
arch_adjust_zones(max_zone_pfn);
free_area_init_node(0, zone_size, min, zhole_size);
free_area_init(max_zone_pfn);
}
int pfn_valid(unsigned long pfn)
@ -176,11 +151,11 @@ void __init bootmem_init(void)
sparse_init();
/*
* Now free the memory - free_area_init_node needs
* Now free the memory - free_area_init needs
* the sparse mem_map arrays initialized by sparse_init()
* for memmap_init_zone(), otherwise all PFNs are invalid.
*/
uc32_bootmem_free(min, max_low, max_high);
uc32_bootmem_free(max_low);
high_memory = __va((max_low << PAGE_SHIFT) - 1) + 1;

View file

@ -82,6 +82,7 @@ config X86
select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_UBSAN_SANITIZE_ALL
select ARCH_HAS_DEBUG_WX
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
select ARCH_MIGHT_HAVE_PC_PARPORT
@ -193,7 +194,6 @@ config X86
select HAVE_KRETPROBES
select HAVE_KVM
select HAVE_LIVEPATCH if X86_64
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_MIXED_BREAKPOINTS_REGS
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_MOVE_PMD
@ -1585,15 +1585,6 @@ config X86_64_ACPI_NUMA
---help---
Enable ACPI SRAT based node topology detection.
# Some NUMA nodes have memory ranges that span
# other nodes. Even though a pfn is valid and
# between a node's start and end pfns, it may not
# reside on that node. See memmap_init_zone()
# for details.
config NODES_SPAN_OTHER_NODES
def_bool y
depends on X86_64_ACPI_NUMA
config NUMA_EMU
bool "NUMA emulation"
depends on NUMA

View file

@ -72,33 +72,6 @@ config EFI_PGT_DUMP
issues with the mapping of the EFI runtime regions into that
table.
config DEBUG_WX
bool "Warn on W+X mappings at boot"
select PTDUMP_CORE
---help---
Generate a warning if any W+X mappings are found at boot.
This is useful for discovering cases where the kernel is leaving
W+X mappings after applying NX, as such mappings are a security risk.
Look for a message in dmesg output like this:
x86/mm: Checked W+X mappings: passed, no W+X pages found.
or like this, if the check failed:
x86/mm: Checked W+X mappings: FAILED, <N> W+X pages found.
Note that even if the check fails, your kernel is possibly
still fine, as W+X mappings are not a security hole in
themselves, what they do is that they make the exploitation
of other unfixed kernel bugs easier.
There is no runtime or memory usage effect of this option
once the kernel has booted up - it's a one time check.
If in doubt, say "Y".
config DEBUG_TLBFLUSH
bool "Set upper limit of TLB entries to flush one-by-one"
depends on DEBUG_KERNEL

View file

@ -7,14 +7,4 @@
#define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
static inline int is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr,
unsigned long len) {
return 0;
}
static inline void arch_clear_hugepage_flags(struct page *page)
{
}
#endif /* _ASM_X86_HUGETLB_H */

View file

@ -624,7 +624,7 @@ static inline pud_t pfn_pud(unsigned long page_nr, pgprot_t pgprot)
return __pud(pfn | check_pgprot(pgprot));
}
static inline pmd_t pmd_mknotpresent(pmd_t pmd)
static inline pmd_t pmd_mkinvalid(pmd_t pmd)
{
return pfn_pmd(pmd_pfn(pmd),
__pgprot(pmd_flags(pmd) & ~(_PAGE_PRESENT|_PAGE_PROTNONE)));

View file

@ -181,28 +181,21 @@ get_unmapped_area:
#endif /* CONFIG_HUGETLB_PAGE */
#ifdef CONFIG_X86_64
static __init int setup_hugepagesz(char *opt)
bool __init arch_hugetlb_valid_size(unsigned long size)
{
unsigned long ps = memparse(opt, &opt);
if (ps == PMD_SIZE) {
hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
} else if (ps == PUD_SIZE && boot_cpu_has(X86_FEATURE_GBPAGES)) {
hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
} else {
hugetlb_bad_size();
printk(KERN_ERR "hugepagesz: Unsupported page size %lu M\n",
ps >> 20);
return 0;
}
return 1;
if (size == PMD_SIZE)
return true;
else if (size == PUD_SIZE && boot_cpu_has(X86_FEATURE_GBPAGES))
return true;
else
return false;
}
__setup("hugepagesz=", setup_hugepagesz);
#ifdef CONFIG_CONTIG_ALLOC
static __init int gigantic_pages_init(void)
{
/* With compaction or CMA we can allocate gigantic pages at runtime */
if (boot_cpu_has(X86_FEATURE_GBPAGES) && !size_to_hstate(1UL << PUD_SHIFT))
if (boot_cpu_has(X86_FEATURE_GBPAGES))
hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
return 0;
}
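
As on sparc above, the per-arch __setup("hugepagesz=") handler is gone: generic hugetlb code now parses the option and only asks the architecture whether a size is valid via arch_hugetlb_valid_size(). A minimal sketch of that hook for a hypothetical architecture that supports only PMD-sized huge pages (the restriction is illustrative):

#include <linux/hugetlb.h>

/* generic code parses "hugepagesz=" and registers the hstate itself */
bool __init arch_hugetlb_valid_size(unsigned long size)
{
	/* illustrative policy: only PMD-sized huge pages are supported */
	return size == PMD_SIZE;
}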

View file

@ -947,7 +947,7 @@ void __init zone_sizes_init(void)
max_zone_pfns[ZONE_HIGHMEM] = max_pfn;
#endif
free_area_init_nodes(max_zone_pfns);
free_area_init(max_zone_pfns);
}
__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate) = {

View file

@ -1265,6 +1265,18 @@ void __init mem_init(void)
mem_init_print_info(NULL);
}
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
int __init deferred_page_init_max_threads(const struct cpumask *node_cpumask)
{
/*
* More CPUs always led to greater speedups on tested systems, up to
* all the nodes' CPUs. Use all since the system is otherwise idle
* now.
*/
return max_t(int, cpumask_weight(node_cpumask), 1);
}
#endif
int kernel_set_to_readonly;
void mark_rodata_ro(void)

View file

@ -130,7 +130,7 @@ static void clear_pmd_presence(pmd_t *pmd, bool clear, pmdval_t *old)
pmdval_t v = pmd_val(*pmd);
if (clear) {
*old = v;
new_pmd = pmd_mknotpresent(*pmd);
new_pmd = pmd_mkinvalid(*pmd);
} else {
/* Presume this has been called with clear==true previously */
new_pmd = __pmd(*old);

View file

@ -517,8 +517,10 @@ static void __init numa_clear_kernel_node_hotplug(void)
* reserve specific pages for Sandy Bridge graphics. ]
*/
for_each_memblock(reserved, mb_region) {
if (mb_region->nid != MAX_NUMNODES)
node_set(mb_region->nid, reserved_nodemask);
int nid = memblock_get_region_node(mb_region);
if (nid != MAX_NUMNODES)
node_set(nid, reserved_nodemask);
}
/*
@ -735,12 +737,9 @@ void __init x86_numa_init(void)
static void __init init_memory_less_node(int nid)
{
unsigned long zones_size[MAX_NR_ZONES] = {0};
unsigned long zholes_size[MAX_NR_ZONES] = {0};
/* Allocate and initialize node data. Memory-less node is now online.*/
alloc_node_data(nid);
free_area_init_node(nid, zones_size, 0, zholes_size);
free_area_init_memoryless_node(nid);
/*
* All zonelists will be built later in start_kernel() after per cpu

View file

@ -70,13 +70,13 @@ void __init bootmem_init(void)
void __init zones_init(void)
{
/* All pages are DMA-able, so we put them all in the DMA zone. */
unsigned long zones_size[MAX_NR_ZONES] = {
[ZONE_NORMAL] = max_low_pfn - ARCH_PFN_OFFSET,
unsigned long max_zone_pfn[MAX_NR_ZONES] = {
[ZONE_NORMAL] = max_low_pfn,
#ifdef CONFIG_HIGHMEM
[ZONE_HIGHMEM] = max_pfn - max_low_pfn,
[ZONE_HIGHMEM] = max_pfn,
#endif
};
free_area_init_node(0, zones_size, ARCH_PFN_OFFSET, NULL);
free_area_init(max_zone_pfn);
}
#ifdef CONFIG_HIGHMEM

View file

@ -21,6 +21,7 @@
#include <linux/mm.h>
#include <linux/stat.h>
#include <linux/slab.h>
#include <linux/xarray.h>
#include <linux/atomic.h>
#include <linux/uaccess.h>
@ -74,6 +75,13 @@ static struct bus_type memory_subsys = {
.offline = memory_subsys_offline,
};
/*
* Memory blocks are cached in a local radix tree to avoid
* a costly linear search for the corresponding device on
* the subsystem bus.
*/
static DEFINE_XARRAY(memory_blocks);
static BLOCKING_NOTIFIER_HEAD(memory_chain);
int register_memory_notifier(struct notifier_block *nb)
@ -489,22 +497,23 @@ int __weak arch_get_memory_phys_device(unsigned long start_pfn)
return 0;
}
/* A reference for the returned memory block device is acquired. */
/*
* A reference for the returned memory block device is acquired.
*
* Called under device_hotplug_lock.
*/
static struct memory_block *find_memory_block_by_id(unsigned long block_id)
{
struct device *dev;
struct memory_block *mem;
dev = subsys_find_device_by_id(&memory_subsys, block_id, NULL);
return dev ? to_memory_block(dev) : NULL;
mem = xa_load(&memory_blocks, block_id);
if (mem)
get_device(&mem->dev);
return mem;
}
/*
* For now, we have a linear search to go find the appropriate
* memory_block corresponding to a particular phys_index. If
* this gets to be a real problem, we can always use a radix
* tree or something here.
*
* This could be made generic for all device subsystems.
* Called under device_hotplug_lock.
*/
struct memory_block *find_memory_block(struct mem_section *section)
{
@ -548,9 +557,16 @@ int register_memory(struct memory_block *memory)
memory->dev.offline = memory->state == MEM_OFFLINE;
ret = device_register(&memory->dev);
if (ret)
if (ret) {
put_device(&memory->dev);
return ret;
}
ret = xa_err(xa_store(&memory_blocks, memory->dev.id, memory,
GFP_KERNEL));
if (ret) {
put_device(&memory->dev);
device_unregister(&memory->dev);
}
return ret;
}
@ -604,6 +620,8 @@ static void unregister_memory(struct memory_block *memory)
if (WARN_ON_ONCE(memory->dev.bus != &memory_subsys))
return;
WARN_ON(xa_erase(&memory_blocks, memory->dev.id) == NULL);
/* drop the ref. we got via find_memory_block() */
put_device(&memory->dev);
device_unregister(&memory->dev);
@ -750,6 +768,8 @@ void __init memory_dev_init(void)
*
* In case func() returns an error, walking is aborted and the error is
* returned.
*
* Called under device_hotplug_lock.
*/
int walk_memory_blocks(unsigned long start, unsigned long size,
void *arg, walk_memory_blocks_func_t func)
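
The lookup above replaces a subsys_find_device_by_id() walk with an xarray keyed by block id. A minimal sketch of the same cache pattern in isolation, with hypothetical names (struct foo and the helpers are not from this diff): store on registration, xa_load() on lookup, xa_erase() on removal.

#include <linux/kernel.h>
#include <linux/xarray.h>

struct foo {
	unsigned long id;
};

static DEFINE_XARRAY(foo_cache);

static int foo_register(struct foo *f)
{
	/* xa_store() returns an xa_err()-encoded pointer on failure */
	return xa_err(xa_store(&foo_cache, f->id, f, GFP_KERNEL));
}

static struct foo *foo_lookup(unsigned long id)
{
	return xa_load(&foo_cache, id);		/* NULL if not cached */
}

static void foo_unregister(struct foo *f)
{
	WARN_ON(xa_erase(&foo_cache, f->id) == NULL);
}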

View file

@ -471,7 +471,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
down_read(&mm->mmap_sem);
locked = 1;
}
ret = get_user_pages_remote
ret = pin_user_pages_remote
(work->task, mm,
obj->userptr.ptr + pinned * PAGE_SIZE,
npages - pinned,
@ -507,7 +507,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
}
mutex_unlock(&obj->mm.lock);
release_pages(pvec, pinned);
unpin_user_pages(pvec, pinned);
kvfree(pvec);
i915_gem_object_put(obj);
@ -564,6 +564,7 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
struct sg_table *pages;
bool active;
int pinned;
unsigned int gup_flags = 0;
/* If userspace should engineer that these pages are replaced in
* the vma between us binding this page into the GTT and completion
@ -606,11 +607,14 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
*
* We may or may not care.
*/
if (pvec) /* defer to worker if malloc fails */
pinned = __get_user_pages_fast(obj->userptr.ptr,
num_pages,
!i915_gem_object_is_readonly(obj),
pvec);
if (pvec) {
/* defer to worker if malloc fails */
if (!i915_gem_object_is_readonly(obj))
gup_flags |= FOLL_WRITE;
pinned = pin_user_pages_fast_only(obj->userptr.ptr,
num_pages, gup_flags,
pvec);
}
}
active = false;
@ -628,7 +632,7 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
__i915_gem_userptr_set_active(obj, true);
if (IS_ERR(pages))
release_pages(pvec, pinned);
unpin_user_pages(pvec, pinned);
kvfree(pvec);
return PTR_ERR_OR_ZERO(pages);
@ -683,7 +687,7 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
}
mark_page_accessed(page);
put_page(page);
unpin_user_page(page);
}
obj->mm.dirty = false;
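
The conversion above moves the userptr path from get_user_pages/release_pages to the FOLL_PIN API, so every page obtained via pin_user_pages*() must be released with unpin_user_page(s)(). A rough sketch of that pairing, assuming a caller that only tries the lockless fast path (names and parameters are illustrative):

#include <linux/mm.h>

/* pin "nr" user pages at "uaddr" via the fast path only; no mmap_sem fallback */
static int example_pin_buffer(unsigned long uaddr, int nr, bool writable,
			      struct page **pages)
{
	unsigned int gup_flags = writable ? FOLL_WRITE : 0;
	int pinned;

	pinned = pin_user_pages_fast_only(uaddr, nr, gup_flags, pages);
	if (pinned != nr) {
		/* partial pins are released with the unpin API, not put_page() */
		if (pinned > 0)
			unpin_user_pages(pages, pinned);
		return -EFAULT;
	}
	return 0;
}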

View file

@ -4162,7 +4162,7 @@ cifs_readv_complete(struct work_struct *work)
for (i = 0; i < rdata->nr_pages; i++) {
struct page *page = rdata->pages[i];
lru_cache_add_file(page);
lru_cache_add(page);
if (rdata->result == 0 ||
(rdata->result == -EAGAIN && got_bytes)) {
@ -4232,7 +4232,7 @@ readpages_fill_pages(struct TCP_Server_Info *server,
* fill them until the writes are flushed.
*/
zero_user(page, 0, PAGE_SIZE);
lru_cache_add_file(page);
lru_cache_add(page);
flush_dcache_page(page);
SetPageUptodate(page);
unlock_page(page);
@ -4242,7 +4242,7 @@ readpages_fill_pages(struct TCP_Server_Info *server,
continue;
} else {
/* no need to hold page hostage */
lru_cache_add_file(page);
lru_cache_add(page);
unlock_page(page);
put_page(page);
rdata->pages[i] = NULL;
@ -4437,7 +4437,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
/* best to give up if we're out of mem */
list_for_each_entry_safe(page, tpage, &tmplist, lru) {
list_del(&page->lru);
lru_cache_add_file(page);
lru_cache_add(page);
unlock_page(page);
put_page(page);
}
@ -4475,7 +4475,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
add_credits_and_wake_if(server, &rdata->credits, 0);
for (i = 0; i < rdata->nr_pages; i++) {
page = rdata->pages[i];
lru_cache_add_file(page);
lru_cache_add(page);
unlock_page(page);
put_page(page);
}

View file

@ -840,7 +840,7 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
get_page(newpage);
if (!(buf->flags & PIPE_BUF_FLAG_LRU))
lru_cache_add_file(newpage);
lru_cache_add(newpage);
err = 0;
spin_lock(&cs->req->waitq.lock);

View file

@ -38,6 +38,7 @@
#include <linux/uio.h>
#include <linux/uaccess.h>
#include <linux/sched/mm.h>
static const struct super_operations hugetlbfs_ops;
static const struct address_space_operations hugetlbfs_aops;
@ -190,6 +191,54 @@ out:
*/
#ifndef HAVE_ARCH_HUGETLB_UNMAPPED_AREA
static unsigned long
hugetlb_get_unmapped_area_bottomup(struct file *file, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags)
{
struct hstate *h = hstate_file(file);
struct vm_unmapped_area_info info;
info.flags = 0;
info.length = len;
info.low_limit = current->mm->mmap_base;
info.high_limit = TASK_SIZE;
info.align_mask = PAGE_MASK & ~huge_page_mask(h);
info.align_offset = 0;
return vm_unmapped_area(&info);
}
static unsigned long
hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags)
{
struct hstate *h = hstate_file(file);
struct vm_unmapped_area_info info;
info.flags = VM_UNMAPPED_AREA_TOPDOWN;
info.length = len;
info.low_limit = max(PAGE_SIZE, mmap_min_addr);
info.high_limit = current->mm->mmap_base;
info.align_mask = PAGE_MASK & ~huge_page_mask(h);
info.align_offset = 0;
addr = vm_unmapped_area(&info);
/*
* A failed mmap() very likely causes application failure,
* so fall back to the bottom-up function here. This scenario
* can happen with large stack limits and large mmap()
* allocations.
*/
if (unlikely(offset_in_page(addr))) {
VM_BUG_ON(addr != -ENOMEM);
info.flags = 0;
info.low_limit = current->mm->mmap_base;
info.high_limit = TASK_SIZE;
addr = vm_unmapped_area(&info);
}
return addr;
}
static unsigned long
hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags)
@ -197,7 +246,6 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
struct mm_struct *mm = current->mm;
struct vm_area_struct *vma;
struct hstate *h = hstate_file(file);
struct vm_unmapped_area_info info;
if (len & ~huge_page_mask(h))
return -EINVAL;
@ -218,13 +266,16 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
return addr;
}
info.flags = 0;
info.length = len;
info.low_limit = TASK_UNMAPPED_BASE;
info.high_limit = TASK_SIZE;
info.align_mask = PAGE_MASK & ~huge_page_mask(h);
info.align_offset = 0;
return vm_unmapped_area(&info);
/*
* Use mm->get_unmapped_area value as a hint to use topdown routine.
* If architectures have special needs, they should define their own
* version of hugetlb_get_unmapped_area.
*/
if (mm->get_unmapped_area == arch_get_unmapped_area_topdown)
return hugetlb_get_unmapped_area_topdown(file, addr, len,
pgoff, flags);
return hugetlb_get_unmapped_area_bottomup(file, addr, len,
pgoff, flags);
}
#endif

View file

@ -122,7 +122,7 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
#ifndef __HAVE_ARCH_HUGE_PTEP_GET
static inline pte_t huge_ptep_get(pte_t *ptep)
{
return *ptep;
return READ_ONCE(*ptep);
}
#endif

View file

@ -97,7 +97,7 @@ extern enum compact_result try_to_compact_pages(gfp_t gfp_mask,
struct page **page);
extern void reset_isolation_suitable(pg_data_t *pgdat);
extern enum compact_result compaction_suitable(struct zone *zone, int order,
unsigned int alloc_flags, int classzone_idx);
unsigned int alloc_flags, int highest_zoneidx);
extern void defer_compaction(struct zone *zone, int order);
extern bool compaction_deferred(struct zone *zone, int order);
@ -182,7 +182,7 @@ bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
extern int kcompactd_run(int nid);
extern void kcompactd_stop(int nid);
extern void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx);
extern void wakeup_kcompactd(pg_data_t *pgdat, int order, int highest_zoneidx);
#else
static inline void reset_isolation_suitable(pg_data_t *pgdat)
@ -190,7 +190,7 @@ static inline void reset_isolation_suitable(pg_data_t *pgdat)
}
static inline enum compact_result compaction_suitable(struct zone *zone, int order,
int alloc_flags, int classzone_idx)
int alloc_flags, int highest_zoneidx)
{
return COMPACT_SKIPPED;
}
@ -232,7 +232,8 @@ static inline void kcompactd_stop(int nid)
{
}
static inline void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx)
static inline void wakeup_kcompactd(pg_data_t *pgdat,
int order, int highest_zoneidx)
{
}

View file

@ -110,6 +110,11 @@ struct vm_area_struct;
* the caller guarantees the allocation will allow more memory to be freed
* very shortly e.g. process exiting or swapping. Users either should
* be the MM or co-ordinating closely with the VM (e.g. swap over NFS).
* Users of this flag have to be extremely careful to not deplete the reserve
* completely and implement a throttling mechanism which controls the
* consumption of the reserve based on the amount of freed memory.
* Usage of a pre-allocated pool (e.g. mempool) should be always considered
* before using this flag.
*
* %__GFP_NOMEMALLOC is used to explicitly forbid access to emergency reserves.
* This takes precedence over the %__GFP_MEMALLOC flag if both are set.
@ -307,7 +312,7 @@ struct vm_area_struct;
#define GFP_MOVABLE_MASK (__GFP_RECLAIMABLE|__GFP_MOVABLE)
#define GFP_MOVABLE_SHIFT 3
static inline int gfpflags_to_migratetype(const gfp_t gfp_flags)
static inline int gfp_migratetype(const gfp_t gfp_flags)
{
VM_WARN_ON((gfp_flags & GFP_MOVABLE_MASK) == GFP_MOVABLE_MASK);
BUILD_BUG_ON((1UL << GFP_MOVABLE_SHIFT) != ___GFP_MOVABLE);
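
The expanded __GFP_MEMALLOC comment above points callers at pre-allocated pools before they reach for the emergency reserves. A minimal mempool sketch of that alternative (struct io_req, the pool size and the GFP flags are illustrative, not part of this diff):

#include <linux/mempool.h>
#include <linux/slab.h>

struct io_req {
	void *data;
};

static struct kmem_cache *io_req_cache;
static mempool_t *io_req_pool;

static int __init io_req_pool_init(void)
{
	io_req_cache = KMEM_CACHE(io_req, 0);
	if (!io_req_cache)
		return -ENOMEM;

	/* keep 16 objects in reserve so forward progress never needs __GFP_MEMALLOC */
	io_req_pool = mempool_create_slab_pool(16, io_req_cache);
	if (!io_req_pool) {
		kmem_cache_destroy(io_req_cache);
		return -ENOMEM;
	}
	return 0;
}

static struct io_req *io_req_alloc(void)
{
	/* falls back to the reserved elements under memory pressure */
	return mempool_alloc(io_req_pool, GFP_NOIO);
}

static void io_req_free(struct io_req *req)
{
	mempool_free(req, io_req_pool);
}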

View file

@ -518,8 +518,8 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
int __init __alloc_bootmem_huge_page(struct hstate *h);
int __init alloc_bootmem_huge_page(struct hstate *h);
void __init hugetlb_bad_size(void);
void __init hugetlb_add_hstate(unsigned order);
bool __init arch_hugetlb_valid_size(unsigned long size);
struct hstate *size_to_hstate(unsigned long size);
#ifndef HUGE_MAX_HSTATE
@ -590,6 +590,20 @@ static inline unsigned int blocks_per_huge_page(struct hstate *h)
#include <asm/hugetlb.h>
#ifndef is_hugepage_only_range
static inline int is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr, unsigned long len)
{
return 0;
}
#define is_hugepage_only_range is_hugepage_only_range
#endif
#ifndef arch_clear_hugepage_flags
static inline void arch_clear_hugepage_flags(struct page *page) { }
#define arch_clear_hugepage_flags arch_clear_hugepage_flags
#endif
#ifndef arch_make_huge_pte
static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
struct page *page, int writable)

View file

@ -41,7 +41,7 @@ enum memblock_flags {
/**
* struct memblock_region - represents a memory region
* @base: physical address of the region
* @base: base address of the region
* @size: size of the region
* @flags: memory region attributes
* @nid: NUMA node id
@ -50,7 +50,7 @@ struct memblock_region {
phys_addr_t base;
phys_addr_t size;
enum memblock_flags flags;
#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
#ifdef CONFIG_NEED_MULTIPLE_NODES
int nid;
#endif
};
@ -75,7 +75,7 @@ struct memblock_type {
* struct memblock - memblock allocator metadata
* @bottom_up: is bottom up direction?
* @current_limit: physical address of the current allocation limit
* @memory: usabe memory regions
* @memory: usable memory regions
* @reserved: reserved memory regions
* @physmem: all physical memory
*/
@ -215,7 +215,6 @@ static inline bool memblock_is_nomap(struct memblock_region *m)
return m->flags & MEMBLOCK_NOMAP;
}
#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
unsigned long *end_pfn);
void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
@ -234,7 +233,6 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
#define for_each_mem_pfn_range(i, nid, p_start, p_end, p_nid) \
for (i = -1, __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid); \
i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))
#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
@ -275,6 +273,9 @@ void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
#define for_each_free_mem_pfn_range_in_zone_from(i, zone, p_start, p_end) \
for (; i != U64_MAX; \
__next_mem_pfn_range_in_zone(&i, zone, p_start, p_end))
int __init deferred_page_init_max_threads(const struct cpumask *node_cpumask);
#endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
/**
@ -310,10 +311,10 @@ void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
for_each_mem_range_rev(i, &memblock.memory, &memblock.reserved, \
nid, flags, p_start, p_end, p_nid)
#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
int memblock_set_node(phys_addr_t base, phys_addr_t size,
struct memblock_type *type, int nid);
#ifdef CONFIG_NEED_MULTIPLE_NODES
static inline void memblock_set_region_node(struct memblock_region *r, int nid)
{
r->nid = nid;
@ -332,7 +333,7 @@ static inline int memblock_get_region_node(const struct memblock_region *r)
{
return 0;
}
#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
#endif /* CONFIG_NEED_MULTIPLE_NODES */
/* Flags for memblock allocation APIs */
#define MEMBLOCK_ALLOC_ANYWHERE (~(phys_addr_t)0)
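
With the CONFIG_HAVE_MEMBLOCK_NODE_MAP guard removed, for_each_mem_pfn_range() is available unconditionally. An illustrative walk (the helper name is hypothetical) that sums the pages memblock has registered for one node:

#include <linux/memblock.h>

static unsigned long __init example_node_pages(int nid)
{
	unsigned long start_pfn, end_pfn, pages = 0;
	int i;

	/* iterate memory ranges of "nid"; the node-id out-parameter is unused */
	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL)
		pages += end_pfn - start_pfn;

	return pages;
}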

View file

@ -29,10 +29,7 @@ struct kmem_cache;
/* Cgroup-specific page state, on top of universal node page state */
enum memcg_stat_item {
MEMCG_CACHE = NR_VM_NODE_STAT_ITEMS,
MEMCG_RSS,
MEMCG_RSS_HUGE,
MEMCG_SWAP,
MEMCG_SWAP = NR_VM_NODE_STAT_ITEMS,
MEMCG_SOCK,
/* XXX: why are these zone and not node counters? */
MEMCG_KERNEL_STACK_KB,
@ -358,16 +355,8 @@ static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg,
enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
struct mem_cgroup *memcg);
int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask, struct mem_cgroup **memcgp,
bool compound);
int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask, struct mem_cgroup **memcgp,
bool compound);
void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
bool lrucare, bool compound);
void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg,
bool compound);
int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
void mem_cgroup_uncharge(struct page *page);
void mem_cgroup_uncharge_list(struct list_head *page_list);
@ -568,7 +557,7 @@ struct mem_cgroup *mem_cgroup_get_oom_group(struct task_struct *victim,
void mem_cgroup_print_oom_group(struct mem_cgroup *memcg);
#ifdef CONFIG_MEMCG_SWAP
extern int do_swap_account;
extern bool cgroup_memory_noswap;
#endif
struct mem_cgroup *lock_page_memcg(struct page *page);
@ -708,16 +697,17 @@ static inline void mod_lruvec_state(struct lruvec *lruvec,
static inline void __mod_lruvec_page_state(struct page *page,
enum node_stat_item idx, int val)
{
struct page *head = compound_head(page); /* rmap on tail pages */
pg_data_t *pgdat = page_pgdat(page);
struct lruvec *lruvec;
/* Untracked pages have no memcg, no lruvec. Update only the node */
if (!page->mem_cgroup) {
if (!head->mem_cgroup) {
__mod_node_page_state(pgdat, idx, val);
return;
}
lruvec = mem_cgroup_lruvec(page->mem_cgroup, pgdat);
lruvec = mem_cgroup_lruvec(head->mem_cgroup, pgdat);
__mod_lruvec_state(lruvec, idx, val);
}
@ -847,37 +837,12 @@ static inline enum mem_cgroup_protection mem_cgroup_protected(
return MEMCG_PROT_NONE;
}
static inline int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask,
struct mem_cgroup **memcgp,
bool compound)
static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask)
{
*memcgp = NULL;
return 0;
}
static inline int mem_cgroup_try_charge_delay(struct page *page,
struct mm_struct *mm,
gfp_t gfp_mask,
struct mem_cgroup **memcgp,
bool compound)
{
*memcgp = NULL;
return 0;
}
static inline void mem_cgroup_commit_charge(struct page *page,
struct mem_cgroup *memcg,
bool lrucare, bool compound)
{
}
static inline void mem_cgroup_cancel_charge(struct page *page,
struct mem_cgroup *memcg,
bool compound)
{
}
static inline void mem_cgroup_uncharge(struct page *page)
{
}
@ -1277,6 +1242,19 @@ static inline void dec_lruvec_page_state(struct page *page,
mod_lruvec_page_state(page, idx, -1);
}
static inline struct lruvec *parent_lruvec(struct lruvec *lruvec)
{
struct mem_cgroup *memcg;
memcg = lruvec_memcg(lruvec);
if (!memcg)
return NULL;
memcg = parent_mem_cgroup(memcg);
if (!memcg)
return NULL;
return mem_cgroup_lruvec(memcg, lruvec_pgdat(lruvec));
}
#ifdef CONFIG_CGROUP_WRITEBACK
struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb);
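
The try_charge/commit_charge/cancel_charge trio is collapsed into a single mem_cgroup_charge() call. A rough sketch of how a caller charges a newly allocated page under the new API (the surrounding insertion logic is elided and the function name is illustrative):

#include <linux/memcontrol.h>
#include <linux/mm.h>

static int example_add_page(struct page *page, struct mm_struct *mm, gfp_t gfp)
{
	int err;

	/* one call charges the page to mm's memcg; no separate commit/cancel */
	err = mem_cgroup_charge(page, mm, gfp);
	if (err)
		return err;

	/* ... map the page or insert it into the page cache ... */
	return 0;
}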

View file

@ -501,7 +501,6 @@ struct vm_fault {
pte_t orig_pte; /* Value of PTE at the time of fault */
struct page *cow_page; /* Page handler may use for COW fault */
struct mem_cgroup *memcg; /* Cgroup cow_page belongs to */
struct page *page; /* ->fault handlers should return a
* page here, unless VM_FAULT_NOPAGE
* is set (which is also implied by
@ -867,7 +866,7 @@ enum compound_dtor_id {
#endif
NR_COMPOUND_DTORS,
};
extern compound_page_dtor * const compound_page_dtors[];
extern compound_page_dtor * const compound_page_dtors[NR_COMPOUND_DTORS];
static inline void set_compound_page_dtor(struct page *page,
enum compound_dtor_id compound_dtor)
@ -876,10 +875,10 @@ static inline void set_compound_page_dtor(struct page *page,
page[1].compound_dtor = compound_dtor;
}
static inline compound_page_dtor *get_compound_page_dtor(struct page *page)
static inline void destroy_compound_page(struct page *page)
{
VM_BUG_ON_PAGE(page[1].compound_dtor >= NR_COMPOUND_DTORS, page);
return compound_page_dtors[page[1].compound_dtor];
compound_page_dtors[page[1].compound_dtor](page);
}
static inline unsigned int compound_order(struct page *page)
@ -946,8 +945,7 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
return pte;
}
vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
struct page *page);
vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct page *page);
vm_fault_t finish_fault(struct vm_fault *vmf);
vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
#endif
@ -1827,6 +1825,8 @@ extern int mprotect_fixup(struct vm_area_struct *vma,
*/
int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
struct page **pages);
int pin_user_pages_fast_only(unsigned long start, int nr_pages,
unsigned int gup_flags, struct page **pages);
/*
* per-process(per-mm_struct) statistics.
*/
@ -2327,9 +2327,7 @@ static inline spinlock_t *pud_lock(struct mm_struct *mm, pud_t *pud)
}
extern void __init pagecache_init(void);
extern void free_area_init(unsigned long * zones_size);
extern void __init free_area_init_node(int nid, unsigned long * zones_size,
unsigned long zone_start_pfn, unsigned long *zholes_size);
extern void __init free_area_init_memoryless_node(int nid);
extern void free_initmem(void);
/*
@ -2399,34 +2397,26 @@ static inline unsigned long get_num_physpages(void)
return phys_pages;
}
#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
/*
* With CONFIG_HAVE_MEMBLOCK_NODE_MAP set, an architecture may initialise its
* zones, allocate the backing mem_map and account for memory holes in a more
* architecture independent manner. This is a substitute for creating the
* zone_sizes[] and zholes_size[] arrays and passing them to
* free_area_init_node()
* Using memblock node mappings, an architecture may initialise its
* zones, allocate the backing mem_map and account for memory holes in an
* architecture independent manner.
*
* An architecture is expected to register range of page frames backed by
* physical memory with memblock_add[_node]() before calling
* free_area_init_nodes() passing in the PFN each zone ends at. At a basic
* free_area_init() passing in the PFN each zone ends at. At a basic
* usage, an architecture is expected to do something like
*
* unsigned long max_zone_pfns[MAX_NR_ZONES] = {max_dma, max_normal_pfn,
* max_highmem_pfn};
* for_each_valid_physical_page_range()
* memblock_add_node(base, size, nid)
* free_area_init_nodes(max_zone_pfns);
* free_area_init(max_zone_pfns);
*
* free_bootmem_with_active_regions() calls free_bootmem_node() for each
* registered physical page range. Similarly
* sparse_memory_present_with_active_regions() calls memory_present() for
* each range when SPARSEMEM is enabled.
*
* See mm/page_alloc.c for more information on each function exposed by
* CONFIG_HAVE_MEMBLOCK_NODE_MAP.
*/
extern void free_area_init_nodes(unsigned long *max_zone_pfn);
void free_area_init(unsigned long *max_zone_pfn);
unsigned long node_map_pfn_alignment(void);
unsigned long __absent_pages_in_range(int nid, unsigned long start_pfn,
unsigned long end_pfn);
@ -2435,16 +2425,10 @@ extern unsigned long absent_pages_in_range(unsigned long start_pfn,
extern void get_pfn_range_for_nid(unsigned int nid,
unsigned long *start_pfn, unsigned long *end_pfn);
extern unsigned long find_min_pfn_with_active_regions(void);
extern void free_bootmem_with_active_regions(int nid,
unsigned long max_low_pfn);
extern void sparse_memory_present_with_active_regions(int nid);
#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
#if !defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP) && \
!defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID)
static inline int __early_pfn_to_nid(unsigned long pfn,
struct mminit_pfnnid_cache *state)
#ifndef CONFIG_NEED_MULTIPLE_NODES
static inline int early_pfn_to_nid(unsigned long pfn)
{
return 0;
}
@ -2480,6 +2464,7 @@ extern void setup_per_cpu_pageset(void);
extern int min_free_kbytes;
extern int watermark_boost_factor;
extern int watermark_scale_factor;
extern bool arch_has_descending_max_zone_pfns(void);
/* nommu.c */
extern atomic_long_t mmap_pages_allocated;
@ -2816,6 +2801,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
#define FOLL_LONGTERM 0x10000 /* mapping lifetime is indefinite: see below */
#define FOLL_SPLIT_PMD 0x20000 /* split huge pmd before returning */
#define FOLL_PIN 0x40000 /* pages must be released via unpin_user_page */
#define FOLL_FAST_ONLY 0x80000 /* gup_fast: prevent fall-back to slow gup */
/*
* FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each

View file

@ -242,19 +242,6 @@ static inline bool is_active_lru(enum lru_list lru)
return (lru == LRU_ACTIVE_ANON || lru == LRU_ACTIVE_FILE);
}
struct zone_reclaim_stat {
/*
* The pageout code in vmscan.c keeps track of how many of the
* mem/swap backed and file backed pages are referenced.
* The higher the rotated/scanned ratio, the more valuable
* that cache is.
*
* The anon LRU stats live in [0], file LRU stats in [1]
*/
unsigned long recent_rotated[2];
unsigned long recent_scanned[2];
};
enum lruvec_flags {
LRUVEC_CONGESTED, /* lruvec has many dirty pages
* backed by a congested BDI
@ -263,7 +250,13 @@ enum lruvec_flags {
struct lruvec {
struct list_head lists[NR_LRU_LISTS];
struct zone_reclaim_stat reclaim_stat;
/*
* These track the cost of reclaiming one LRU - file or anon -
* over the other. As the observed cost of reclaiming one LRU
* increases, the reclaim scan balance tips toward the other.
*/
unsigned long anon_cost;
unsigned long file_cost;
/* Evictions & activations on the inactive file list */
atomic_long_t inactive_age;
/* Refaults at the time of last reclaim cycle */
@ -680,6 +673,8 @@ typedef struct pglist_data {
/*
* Must be held any time you expect node_start_pfn,
* node_present_pages, node_spanned_pages or nr_zones to stay constant.
* Also synchronizes pgdat->first_deferred_pfn during deferred page
* init.
*
* pgdat_resize_lock() and pgdat_resize_unlock() are provided to
* manipulate node_size_lock without checking for CONFIG_MEMORY_HOTPLUG
@ -699,13 +694,13 @@ typedef struct pglist_data {
struct task_struct *kswapd; /* Protected by
mem_hotplug_begin/end() */
int kswapd_order;
enum zone_type kswapd_classzone_idx;
enum zone_type kswapd_highest_zoneidx;
int kswapd_failures; /* Number of 'reclaimed == 0' runs */
#ifdef CONFIG_COMPACTION
int kcompactd_max_order;
enum zone_type kcompactd_classzone_idx;
enum zone_type kcompactd_highest_zoneidx;
wait_queue_head_t kcompactd_wait;
struct task_struct *kcompactd;
#endif
@ -783,15 +778,15 @@ static inline bool pgdat_is_empty(pg_data_t *pgdat)
void build_all_zonelists(pg_data_t *pgdat);
void wakeup_kswapd(struct zone *zone, gfp_t gfp_mask, int order,
enum zone_type classzone_idx);
enum zone_type highest_zoneidx);
bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
int classzone_idx, unsigned int alloc_flags,
int highest_zoneidx, unsigned int alloc_flags,
long free_pages);
bool zone_watermark_ok(struct zone *z, unsigned int order,
unsigned long mark, int classzone_idx,
unsigned long mark, int highest_zoneidx,
unsigned int alloc_flags);
bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
unsigned long mark, int classzone_idx);
unsigned long mark, int highest_zoneidx);
enum memmap_context {
MEMMAP_EARLY,
MEMMAP_HOTPLUG,
@ -876,7 +871,7 @@ extern int movable_zone;
#ifdef CONFIG_HIGHMEM
static inline int zone_movable_is_highmem(void)
{
#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
#ifdef CONFIG_NEED_MULTIPLE_NODES
return movable_zone == ZONE_HIGHMEM;
#else
return (ZONE_MOVABLE - 1) == ZONE_HIGHMEM;
@ -1079,15 +1074,6 @@ static inline struct zoneref *first_zones_zonelist(struct zonelist *zonelist,
#include <asm/sparsemem.h>
#endif
#if !defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) && \
!defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP)
static inline unsigned long early_pfn_to_nid(unsigned long pfn)
{
BUILD_BUG_ON(IS_ENABLED(CONFIG_NUMA));
return 0;
}
#endif
#ifdef CONFIG_FLATMEM
#define pfn_to_nid(pfn) (0)
#endif

View file

@ -4,6 +4,9 @@
*
* Copyright (C) 2008, 2009 secunet Security Networks AG
* Copyright (C) 2008, 2009 Steffen Klassert <steffen.klassert@secunet.com>
*
* Copyright (c) 2020 Oracle and/or its affiliates.
* Author: Daniel Jordan <daniel.m.jordan@oracle.com>
*/
#ifndef PADATA_H
@ -24,7 +27,6 @@
* @list: List entry, to attach to the padata lists.
* @pd: Pointer to the internal control structure.
* @cb_cpu: Callback cpu for serializatioon.
* @cpu: Cpu for parallelization.
* @seq_nr: Sequence number of the parallelized data object.
* @info: Used to pass information from the parallel to the serial function.
* @parallel: Parallel execution function.
@ -34,7 +36,6 @@ struct padata_priv {
struct list_head list;
struct parallel_data *pd;
int cb_cpu;
int cpu;
unsigned int seq_nr;
int info;
void (*parallel)(struct padata_priv *padata);
@ -68,15 +69,11 @@ struct padata_serial_queue {
/**
* struct padata_parallel_queue - The percpu padata parallel queue
*
* @parallel: List to wait for parallelization.
* @reorder: List to wait for reordering after parallel processing.
* @work: work struct for parallelization.
* @num_obj: Number of objects that are processed by this cpu.
*/
struct padata_parallel_queue {
struct padata_list parallel;
struct padata_list reorder;
struct work_struct work;
atomic_t num_obj;
};
@ -111,7 +108,7 @@ struct parallel_data {
struct padata_parallel_queue __percpu *pqueue;
struct padata_serial_queue __percpu *squeue;
atomic_t refcnt;
atomic_t seq_nr;
unsigned int seq_nr;
unsigned int processed;
int cpu;
struct padata_cpumask cpumask;
@ -136,6 +133,31 @@ struct padata_shell {
struct list_head list;
};
/**
* struct padata_mt_job - represents one multithreaded job
*
* @thread_fn: Called for each chunk of work that a padata thread does.
* @fn_arg: The thread function argument.
* @start: The start of the job (units are job-specific).
* @size: size of this node's work (units are job-specific).
* @align: Ranges passed to the thread function fall on this boundary, with the
* possible exceptions of the beginning and end of the job.
* @min_chunk: The minimum chunk size in job-specific units. This allows
* the client to communicate the minimum amount of work that's
* appropriate for one worker thread to do at once.
* @max_threads: Max threads to use for the job, actual number may be less
* depending on task size and minimum chunk size.
*/
struct padata_mt_job {
void (*thread_fn)(unsigned long start, unsigned long end, void *arg);
void *fn_arg;
unsigned long start;
unsigned long size;
unsigned long align;
unsigned long min_chunk;
int max_threads;
};
/**
* struct padata_instance - The overall control structure.
*
@ -166,6 +188,12 @@ struct padata_instance {
#define PADATA_INVALID 4
};
#ifdef CONFIG_PADATA
extern void __init padata_init(void);
#else
static inline void __init padata_init(void) {}
#endif
extern struct padata_instance *padata_alloc_possible(const char *name);
extern void padata_free(struct padata_instance *pinst);
extern struct padata_shell *padata_alloc_shell(struct padata_instance *pinst);
@ -173,6 +201,7 @@ extern void padata_free_shell(struct padata_shell *ps);
extern int padata_do_parallel(struct padata_shell *ps,
struct padata_priv *padata, int *cb_cpu);
extern void padata_do_serial(struct padata_priv *padata);
extern void __init padata_do_multithreaded(struct padata_mt_job *job);
extern int padata_set_cpumask(struct padata_instance *pinst, int cpumask_type,
cpumask_var_t cpumask);
extern int padata_start(struct padata_instance *pinst);
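
padata_do_multithreaded() and struct padata_mt_job are the new pieces here (used later in this series for deferred struct page initialization). A minimal sketch of a caller, assuming an init-time job over a numeric range (the thread function, chunk sizes and thread count are illustrative); note the interface is declared __init-only at this point:

#include <linux/padata.h>

static void example_thread_fn(unsigned long start, unsigned long end, void *arg)
{
	/* process items in [start, end); must be safe to run on several CPUs */
}

static void __init example_run(unsigned long first, unsigned long nr)
{
	struct padata_mt_job job = {
		.thread_fn   = example_thread_fn,
		.fn_arg      = NULL,
		.start       = first,
		.size        = nr,
		.align       = 1,
		.min_chunk   = 1024,
		.max_threads = 4,
	};

	padata_do_multithreaded(&job);
}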

View file

@ -272,6 +272,31 @@ void __read_overflow3(void) __compiletime_error("detected read beyond size of ob
void __write_overflow(void) __compiletime_error("detected write beyond size of object passed as 1st parameter");
#if !defined(__NO_FORTIFY) && defined(__OPTIMIZE__) && defined(CONFIG_FORTIFY_SOURCE)
#ifdef CONFIG_KASAN
extern void *__underlying_memchr(const void *p, int c, __kernel_size_t size) __RENAME(memchr);
extern int __underlying_memcmp(const void *p, const void *q, __kernel_size_t size) __RENAME(memcmp);
extern void *__underlying_memcpy(void *p, const void *q, __kernel_size_t size) __RENAME(memcpy);
extern void *__underlying_memmove(void *p, const void *q, __kernel_size_t size) __RENAME(memmove);
extern void *__underlying_memset(void *p, int c, __kernel_size_t size) __RENAME(memset);
extern char *__underlying_strcat(char *p, const char *q) __RENAME(strcat);
extern char *__underlying_strcpy(char *p, const char *q) __RENAME(strcpy);
extern __kernel_size_t __underlying_strlen(const char *p) __RENAME(strlen);
extern char *__underlying_strncat(char *p, const char *q, __kernel_size_t count) __RENAME(strncat);
extern char *__underlying_strncpy(char *p, const char *q, __kernel_size_t size) __RENAME(strncpy);
#else
#define __underlying_memchr __builtin_memchr
#define __underlying_memcmp __builtin_memcmp
#define __underlying_memcpy __builtin_memcpy
#define __underlying_memmove __builtin_memmove
#define __underlying_memset __builtin_memset
#define __underlying_strcat __builtin_strcat
#define __underlying_strcpy __builtin_strcpy
#define __underlying_strlen __builtin_strlen
#define __underlying_strncat __builtin_strncat
#define __underlying_strncpy __builtin_strncpy
#endif
__FORTIFY_INLINE char *strncpy(char *p, const char *q, __kernel_size_t size)
{
size_t p_size = __builtin_object_size(p, 0);
@ -279,14 +304,14 @@ __FORTIFY_INLINE char *strncpy(char *p, const char *q, __kernel_size_t size)
__write_overflow();
if (p_size < size)
fortify_panic(__func__);
return __builtin_strncpy(p, q, size);
return __underlying_strncpy(p, q, size);
}
__FORTIFY_INLINE char *strcat(char *p, const char *q)
{
size_t p_size = __builtin_object_size(p, 0);
if (p_size == (size_t)-1)
return __builtin_strcat(p, q);
return __underlying_strcat(p, q);
if (strlcat(p, q, p_size) >= p_size)
fortify_panic(__func__);
return p;
@ -300,7 +325,7 @@ __FORTIFY_INLINE __kernel_size_t strlen(const char *p)
/* Work around gcc excess stack consumption issue */
if (p_size == (size_t)-1 ||
(__builtin_constant_p(p[p_size - 1]) && p[p_size - 1] == '\0'))
return __builtin_strlen(p);
return __underlying_strlen(p);
ret = strnlen(p, p_size);
if (p_size <= ret)
fortify_panic(__func__);
@ -333,7 +358,7 @@ __FORTIFY_INLINE size_t strlcpy(char *p, const char *q, size_t size)
__write_overflow();
if (len >= p_size)
fortify_panic(__func__);
__builtin_memcpy(p, q, len);
__underlying_memcpy(p, q, len);
p[len] = '\0';
}
return ret;
@ -346,12 +371,12 @@ __FORTIFY_INLINE char *strncat(char *p, const char *q, __kernel_size_t count)
size_t p_size = __builtin_object_size(p, 0);
size_t q_size = __builtin_object_size(q, 0);
if (p_size == (size_t)-1 && q_size == (size_t)-1)
return __builtin_strncat(p, q, count);
return __underlying_strncat(p, q, count);
p_len = strlen(p);
copy_len = strnlen(q, count);
if (p_size < p_len + copy_len + 1)
fortify_panic(__func__);
__builtin_memcpy(p + p_len, q, copy_len);
__underlying_memcpy(p + p_len, q, copy_len);
p[p_len + copy_len] = '\0';
return p;
}
@ -363,7 +388,7 @@ __FORTIFY_INLINE void *memset(void *p, int c, __kernel_size_t size)
__write_overflow();
if (p_size < size)
fortify_panic(__func__);
return __builtin_memset(p, c, size);
return __underlying_memset(p, c, size);
}
__FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
@ -378,7 +403,7 @@ __FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
}
if (p_size < size || q_size < size)
fortify_panic(__func__);
return __builtin_memcpy(p, q, size);
return __underlying_memcpy(p, q, size);
}
__FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size)
@ -393,7 +418,7 @@ __FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size)
}
if (p_size < size || q_size < size)
fortify_panic(__func__);
return __builtin_memmove(p, q, size);
return __underlying_memmove(p, q, size);
}
extern void *__real_memscan(void *, int, __kernel_size_t) __RENAME(memscan);
@ -419,7 +444,7 @@ __FORTIFY_INLINE int memcmp(const void *p, const void *q, __kernel_size_t size)
}
if (p_size < size || q_size < size)
fortify_panic(__func__);
return __builtin_memcmp(p, q, size);
return __underlying_memcmp(p, q, size);
}
__FORTIFY_INLINE void *memchr(const void *p, int c, __kernel_size_t size)
@ -429,7 +454,7 @@ __FORTIFY_INLINE void *memchr(const void *p, int c, __kernel_size_t size)
__read_overflow();
if (p_size < size)
fortify_panic(__func__);
return __builtin_memchr(p, c, size);
return __underlying_memchr(p, c, size);
}
void *__real_memchr_inv(const void *s, int c, size_t n) __RENAME(memchr_inv);
@ -460,11 +485,22 @@ __FORTIFY_INLINE char *strcpy(char *p, const char *q)
size_t p_size = __builtin_object_size(p, 0);
size_t q_size = __builtin_object_size(q, 0);
if (p_size == (size_t)-1 && q_size == (size_t)-1)
return __builtin_strcpy(p, q);
return __underlying_strcpy(p, q);
memcpy(p, q, strlen(q) + 1);
return p;
}
/* Don't use these outside the FORITFY_SOURCE implementation */
#undef __underlying_memchr
#undef __underlying_memcmp
#undef __underlying_memcpy
#undef __underlying_memmove
#undef __underlying_memset
#undef __underlying_strcat
#undef __underlying_strcpy
#undef __underlying_strlen
#undef __underlying_strncat
#undef __underlying_strncpy
#endif
/**

Some files were not shown because too many files were changed in this diff.