Merge branch 'akpm' (patches from Andrew)
Merge second patch-bomb from Andrew Morton:

 - a couple of hotfixes
 - the rest of MM
 - a new timer slack control in procfs
 - a couple of procfs fixes
 - a few misc things
 - some printk tweaks
 - lib/ updates, notably to radix-tree.
 - add my and Nick Piggin's old userspace radix-tree test harness to
   tools/testing/radix-tree/.  Matthew said it was a godsend during the
   radix-tree work he did.
 - a few code-size improvements, switching to __always_inline where gcc
   screwed up.
 - partially implement character sets in sscanf

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (118 commits)
  sscanf: implement basic character sets
  lib/bug.c: use common WARN helper
  param: convert some "on"/"off" users to strtobool
  lib: add "on"/"off" support to kstrtobool
  lib: update single-char callers of strtobool()
  lib: move strtobool() to kstrtobool()
  include/linux/unaligned: force inlining of byteswap operations
  include/uapi/linux/byteorder, swab: force inlining of some byteswap operations
  include/asm-generic/atomic-long.h: force inlining of some atomic_long operations
  usb: common: convert to use match_string() helper
  ide: hpt366: convert to use match_string() helper
  ata: hpt366: convert to use match_string() helper
  power: ab8500: convert to use match_string() helper
  power: charger_manager: convert to use match_string() helper
  drm/edid: convert to use match_string() helper
  pinctrl: convert to use match_string() helper
  device property: convert to use match_string() helper
  lib/string: introduce match_string() helper
  radix-tree tests: add test for radix_tree_iter_next
  radix-tree tests: add regression3 test
  ...
Commit 814a2bf957
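Many of the patches in this series are "convert to use match_string() helper" conversions. As a rough userspace sketch of that pattern, assuming only the helper's documented semantics (the real implementation lives in lib/string.c, and the array contents below are made up for illustration, loosely modelled on the usb speed-name conversion):

/*
 * Stand-in for the kernel's match_string(): return the index of the
 * matching entry, or a negative value if none matched.  Passing
 * n == (size_t)-1 means "iterate until a NULL entry".
 */
#include <stdio.h>
#include <string.h>

static int match_string(const char *const *array, size_t n, const char *string)
{
    size_t i;

    for (i = 0; i < n; i++) {
        if (!array[i])
            break;              /* NULL-terminated array */
        if (!strcmp(array[i], string))
            return (int)i;
    }
    return -1;                  /* the kernel helper returns -EINVAL here */
}

int main(void)
{
    /* Illustrative table, not the real speed_names[] from the patch. */
    static const char *const speed_names[] = { "low", "full", "high", "super", NULL };

    /*
     * Old open-coded form in the converted drivers:
     *     for (i = 0; i < ARRAY_SIZE(speed_names); i++)
     *             if (!strcmp(maximum_speed, speed_names[i]))
     *                     return i;
     * New form: one call, index or error in a single return value.
     */
    int idx = match_string(speed_names, (size_t)-1, "high");

    printf("matched index: %d\n", idx);     /* prints 2 */
    return 0;
}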
@@ -8,7 +8,7 @@ Original copyright statements from cpusets.txt:
Portions Copyright (C) 2004 BULL SA.
Portions Copyright (c) 2004-2006 Silicon Graphics, Inc.
Modified by Paul Jackson <pj@sgi.com>
Modified by Christoph Lameter <clameter@sgi.com>
Modified by Christoph Lameter <cl@linux.com>

CONTENTS:
=========
@@ -6,7 +6,7 @@ Written by Simon.Derr@bull.net

Portions Copyright (c) 2004-2006 Silicon Graphics, Inc.
Modified by Paul Jackson <pj@sgi.com>
Modified by Christoph Lameter <clameter@sgi.com>
Modified by Christoph Lameter <cl@linux.com>
Modified by Paul Menage <menage@google.com>
Modified by Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
@@ -843,6 +843,15 @@ PAGE_SIZE multiple when read back.
Amount of memory used to cache filesystem data,
including tmpfs and shared memory.

kernel_stack

Amount of memory allocated to kernel stacks.

slab

Amount of memory used for storing in-kernel data
structures.

sock

Amount of memory used in network transmission buffers

@@ -871,6 +880,16 @@ PAGE_SIZE multiple when read back.
on the internal memory management lists used by the
page reclaim algorithm

slab_reclaimable

Part of "slab" that might be reclaimed, such as
dentries and inodes.

slab_unreclaimable

Part of "slab" that cannot be reclaimed on memory
pressure.

pgfault

Total number of page faults incurred

@@ -1368,6 +1387,12 @@ system than killing the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail. memory.max on the other hand will first set the
limit to prevent new charges, and then reclaim and OOM kill until the
new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.
@@ -43,6 +43,7 @@ Table of Contents
3.7 /proc/<pid>/task/<tid>/children - Information about task children
3.8 /proc/<pid>/fdinfo/<fd> - Information about opened file
3.9 /proc/<pid>/map_files - Information about memory mapped files
3.10 /proc/<pid>/timerslack_ns - Task timerslack value

4 Configuring procfs
4.1 Mount options
@@ -1862,6 +1863,23 @@ time one can open(2) mappings from the listings of two processes and
comparing their inode numbers to figure out which anonymous memory areas
are actually shared.

3.10 /proc/<pid>/timerslack_ns - Task timerslack value
---------------------------------------------------------
This file provides the value of the task's timer slack, in nanoseconds.
This value specifies an amount of time that normal timers may be deferred
in order to coalesce timers and avoid unnecessary wakeups.

This allows a task's interactivity vs. power consumption trade-off to be
adjusted.

Writing 0 to the file will set the task's timer slack back to the default
value.

Valid values range from 0 to ULLONG_MAX.

An application setting the value must have PTRACE_MODE_ATTACH_FSCREDS level
permissions on the specified task in order to change its timerslack_ns value.
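A minimal userspace sketch of driving this interface, assuming only the behaviour described above (plain decimal nanoseconds, 0 restores the default); error handling is kept to the bare minimum and the helper name is made up for the example:

#include <stdio.h>
#include <unistd.h>

/* Write a slack value (in ns) into /proc/<pid>/timerslack_ns. */
static int set_timer_slack(pid_t pid, unsigned long long slack_ns)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path), "/proc/%d/timerslack_ns", (int)pid);
    f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%llu\n", slack_ns);     /* 0 means "back to the default" */
    return fclose(f);
}

int main(void)
{
    /* Relax our own timer slack to 1 ms to favour power over latency. */
    if (set_timer_slack(getpid(), 1000000ULL))
        perror("timerslack_ns");
    return 0;
}

Writing to another task's file requires the PTRACE_MODE_ATTACH_FSCREDS permission mentioned above; writing to your own is normally allowed.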

------------------------------------------------------------------------------
Configuring procfs
------------------------------------------------------------------------------
@@ -803,6 +803,24 @@ performance impact. Reclaim code needs to take various locks to find freeable
directory and inode objects. With vfs_cache_pressure=1000, it will look for
ten times more freeable objects than there are.

=============================================================

watermark_scale_factor:

This factor controls the aggressiveness of kswapd. It defines the
amount of memory left in a node/system before kswapd is woken up and
how much memory needs to be free before kswapd goes back to sleep.

The unit is in fractions of 10,000. The default value of 10 means the
distances between watermarks are 0.1% of the available memory in the
node/system. The maximum value is 1000, or 10% of memory.

A high rate of threads entering direct reclaim (allocstall) or kswapd
going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate
that the number of free pages kswapd maintains for latency reasons is
too small for the allocation bursts occurring in the system. This knob
can then be used to tune kswapd aggressiveness accordingly.
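As a back-of-the-envelope illustration of the "fractions of 10,000" unit on a 16 GiB node (the real calculation in mm/page_alloc.c also honours min_free_kbytes; this sketch only shows the scale of the knob, and the node size is an assumption for the example):

#include <stdio.h>

int main(void)
{
    unsigned long long node_bytes = 16ULL << 30;    /* assumed 16 GiB node */
    unsigned int factors[] = { 10, 100, 1000 };     /* default, 1%, maximum */

    for (int i = 0; i < 3; i++) {
        /* distance between watermarks ~= memory * factor / 10000 */
        unsigned long long gap = node_bytes * factors[i] / 10000;
        printf("watermark_scale_factor=%u -> ~%llu MiB between watermarks\n",
               factors[i], gap >> 20);
    }
    return 0;
}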
==============================================================

zone_reclaim_mode:
@@ -113,9 +113,26 @@ guaranteed, but it may be more likely in case the allocation is for a
MADV_HUGEPAGE region.

echo always >/sys/kernel/mm/transparent_hugepage/defrag
echo defer >/sys/kernel/mm/transparent_hugepage/defrag
echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo never >/sys/kernel/mm/transparent_hugepage/defrag

"always" means that an application requesting THP will stall on allocation
failure and directly reclaim pages and compact memory in an effort to
allocate a THP immediately. This may be desirable for virtual machines
that benefit heavily from THP use and are willing to delay the VM start
to utilise them.

"defer" means that an application will wake kswapd in the background
to reclaim pages and wake kcompactd to compact memory so that THP is
available in the near future. It's the responsibility of khugepaged
to then install the THP pages later.

"madvise" will enter direct reclaim like "always" but only for regions
that have used madvise(MADV_HUGEPAGE). This is the default behaviour.

"never" should be self-explanatory.

By default the kernel tries to use the huge zero page on read page faults.
It's possible to disable the huge zero page by writing 0, or enable it
back by writing 1:
@@ -229,6 +246,11 @@ thp_split_page is incremented every time a huge page is split into base
thp_split_page_failed is incremented if the kernel fails to split a huge
page. This can happen if the page was pinned by somebody.

thp_deferred_split_page is incremented when a huge page is put onto the split
queue. This happens when a huge page is partially unmapped and
splitting it would free up some memory. Pages on the split queue are
going to be split under memory pressure.

thp_split_pmd is incremented every time a PMD is split into a table of PTEs.
This can happen, for instance, when an application calls mprotect() or
munmap() on part of a huge page. It doesn't split the huge page, only
@@ -8498,7 +8498,7 @@ F: include/crypto/pcrypt.h

PER-CPU MEMORY ALLOCATOR
M: Tejun Heo <tj@kernel.org>
M: Christoph Lameter <cl@linux-foundation.org>
M: Christoph Lameter <cl@linux.com>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu.git
S: Maintained
F: include/linux/percpu*.h

@@ -11296,7 +11296,6 @@ F: include/linux/cdrom.h
F: include/uapi/linux/cdrom.h

UNISYS S-PAR DRIVERS
M: Benjamin Romer <benjamin.romer@unisys.com>
M: David Kershner <david.kershner@unisys.com>
L: sparmaintainer@unisys.com (Unisys internal)
S: Supported
@ -30,19 +30,16 @@ static inline pmd_t pte_pmd(pte_t pte)
|
|||
#define pmd_mkyoung(pmd) pte_pmd(pte_mkyoung(pmd_pte(pmd)))
|
||||
#define pmd_mkhuge(pmd) pte_pmd(pte_mkhuge(pmd_pte(pmd)))
|
||||
#define pmd_mknotpresent(pmd) pte_pmd(pte_mknotpresent(pmd_pte(pmd)))
|
||||
#define pmd_mksplitting(pmd) pte_pmd(pte_mkspecial(pmd_pte(pmd)))
|
||||
#define pmd_mkclean(pmd) pte_pmd(pte_mkclean(pmd_pte(pmd)))
|
||||
|
||||
#define pmd_write(pmd) pte_write(pmd_pte(pmd))
|
||||
#define pmd_young(pmd) pte_young(pmd_pte(pmd))
|
||||
#define pmd_pfn(pmd) pte_pfn(pmd_pte(pmd))
|
||||
#define pmd_dirty(pmd) pte_dirty(pmd_pte(pmd))
|
||||
#define pmd_special(pmd) pte_special(pmd_pte(pmd))
|
||||
|
||||
#define mk_pmd(page, prot) pte_pmd(mk_pte(page, prot))
|
||||
|
||||
#define pmd_trans_huge(pmd) (pmd_val(pmd) & _PAGE_HW_SZ)
|
||||
#define pmd_trans_splitting(pmd) (pmd_trans_huge(pmd) && pmd_special(pmd))
|
||||
|
||||
#define pfn_pmd(pfn, prot) (__pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot)))
|
||||
|
||||
|
|
|
@ -346,7 +346,7 @@ retry:
|
|||
up_read(&mm->mmap_sem);
|
||||
|
||||
/*
|
||||
* Handle the "normal" case first - VM_FAULT_MAJOR / VM_FAULT_MINOR
|
||||
* Handle the "normal" case first - VM_FAULT_MAJOR
|
||||
*/
|
||||
if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP | VM_FAULT_BADACCESS))))
|
||||
return 0;
|
||||
|
|
|
@ -732,7 +732,7 @@ static void *__init late_alloc(unsigned long sz)
|
|||
return ptr;
|
||||
}
|
||||
|
||||
static pte_t * __init pte_alloc(pmd_t *pmd, unsigned long addr,
|
||||
static pte_t * __init arm_pte_alloc(pmd_t *pmd, unsigned long addr,
|
||||
unsigned long prot,
|
||||
void *(*alloc)(unsigned long sz))
|
||||
{
|
||||
|
@ -747,7 +747,7 @@ static pte_t * __init pte_alloc(pmd_t *pmd, unsigned long addr,
|
|||
static pte_t * __init early_pte_alloc(pmd_t *pmd, unsigned long addr,
|
||||
unsigned long prot)
|
||||
{
|
||||
return pte_alloc(pmd, addr, prot, early_alloc);
|
||||
return arm_pte_alloc(pmd, addr, prot, early_alloc);
|
||||
}
|
||||
|
||||
static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr,
|
||||
|
@ -756,7 +756,7 @@ static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr,
|
|||
void *(*alloc)(unsigned long sz),
|
||||
bool ng)
|
||||
{
|
||||
pte_t *pte = pte_alloc(pmd, addr, type->prot_l1, alloc);
|
||||
pte_t *pte = arm_pte_alloc(pmd, addr, type->prot_l1, alloc);
|
||||
do {
|
||||
set_pte_ext(pte, pfn_pte(pfn, __pgprot(type->prot_pte)),
|
||||
ng ? PTE_EXT_NG : 0);
|
||||
|
|
|
@ -80,7 +80,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
|
|||
if (!new_pmd)
|
||||
goto no_pmd;
|
||||
|
||||
new_pte = pte_alloc_map(mm, NULL, new_pmd, 0);
|
||||
new_pte = pte_alloc_map(mm, new_pmd, 0);
|
||||
if (!new_pte)
|
||||
goto no_pte;
|
||||
|
||||
|
|
|
@ -304,7 +304,7 @@ retry:
|
|||
up_read(&mm->mmap_sem);
|
||||
|
||||
/*
|
||||
* Handle the "normal" case first - VM_FAULT_MAJOR / VM_FAULT_MINOR
|
||||
* Handle the "normal" case first - VM_FAULT_MAJOR
|
||||
*/
|
||||
if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP |
|
||||
VM_FAULT_BADACCESS))))
|
||||
|
|
|
@ -124,7 +124,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
|
|||
* will be no pte_unmap() to correspond with this
|
||||
* pte_alloc_map().
|
||||
*/
|
||||
pte = pte_alloc_map(mm, NULL, pmd, addr);
|
||||
pte = pte_alloc_map(mm, pmd, addr);
|
||||
} else if (sz == PMD_SIZE) {
|
||||
if (IS_ENABLED(CONFIG_ARCH_WANT_HUGE_PMD_SHARE) &&
|
||||
pud_none(*pud))
|
||||
|
|
|
@ -36,6 +36,7 @@ config GENERIC_HWEIGHT
|
|||
|
||||
config GENERIC_BUG
|
||||
def_bool y
|
||||
depends on BUG
|
||||
|
||||
config C6X_BIG_KERNEL
|
||||
bool "Build a big kernel"
|
||||
|
|
|
@ -433,6 +433,7 @@ static inline void __iomem * ioremap_cache (unsigned long phys_addr, unsigned lo
|
|||
return ioremap(phys_addr, size);
|
||||
}
|
||||
#define ioremap_cache ioremap_cache
|
||||
#define ioremap_uc ioremap_nocache
|
||||
|
||||
|
||||
/*
|
||||
|
|
|
@ -3,7 +3,7 @@
|
|||
*
|
||||
* Copyright (C) 2003 Ken Chen <kenneth.w.chen@intel.com>
|
||||
* Copyright (C) 2003 Asit Mallick <asit.k.mallick@intel.com>
|
||||
* Copyright (C) 2005 Christoph Lameter <clameter@sgi.com>
|
||||
* Copyright (C) 2005 Christoph Lameter <cl@linux.com>
|
||||
*
|
||||
* Based on asm-i386/rwsem.h and other architecture implementation.
|
||||
*
|
||||
|
|
|
@ -38,7 +38,7 @@ huge_pte_alloc(struct mm_struct *mm, unsigned long addr, unsigned long sz)
|
|||
if (pud) {
|
||||
pmd = pmd_alloc(mm, pud, taddr);
|
||||
if (pmd)
|
||||
pte = pte_alloc_map(mm, NULL, pmd, taddr);
|
||||
pte = pte_alloc_map(mm, pmd, taddr);
|
||||
}
|
||||
return pte;
|
||||
}
|
||||
|
|
|
@ -67,7 +67,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
|
|||
pgd = pgd_offset(mm, addr);
|
||||
pud = pud_offset(pgd, addr);
|
||||
pmd = pmd_offset(pud, addr);
|
||||
pte = pte_alloc_map(mm, NULL, pmd, addr);
|
||||
pte = pte_alloc_map(mm, pmd, addr);
|
||||
pgd->pgd &= ~_PAGE_SZ_MASK;
|
||||
pgd->pgd |= _PAGE_SZHUGE;
|
||||
|
||||
|
|
|
@ -64,7 +64,7 @@ static inline void get_head_page_multiple(struct page *page, int nr)
|
|||
{
|
||||
VM_BUG_ON(page != compound_head(page));
|
||||
VM_BUG_ON(page_count(page) == 0);
|
||||
atomic_add(nr, &page->_count);
|
||||
page_ref_add(page, nr);
|
||||
SetPageReferenced(page);
|
||||
}
|
||||
|
||||
|
|
|
@ -53,6 +53,7 @@ config GENERIC_HWEIGHT
|
|||
|
||||
config GENERIC_BUG
|
||||
def_bool y
|
||||
depends on BUG
|
||||
|
||||
config QUICKLIST
|
||||
def_bool y
|
||||
|
|
|
@ -9,6 +9,7 @@
|
|||
* 2 of the Licence, or (at your option) any later version.
|
||||
*/
|
||||
#include <asm/fpu.h>
|
||||
#include <asm/elf.h>
|
||||
|
||||
/*
|
||||
* handle an FPU operational exception
|
||||
|
|
|
@ -63,7 +63,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
|
|||
if (pud) {
|
||||
pmd = pmd_alloc(mm, pud, addr);
|
||||
if (pmd)
|
||||
pte = pte_alloc_map(mm, NULL, pmd, addr);
|
||||
pte = pte_alloc_map(mm, pmd, addr);
|
||||
}
|
||||
return pte;
|
||||
}
|
||||
|
|
|
@ -158,6 +158,7 @@ config PPC
|
|||
select ARCH_HAS_DEVMEM_IS_ALLOWED
|
||||
select HAVE_ARCH_SECCOMP_FILTER
|
||||
select ARCH_HAS_UBSAN_SANITIZE_ALL
|
||||
select ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT
|
||||
|
||||
config GENERIC_CSUM
|
||||
def_bool CPU_LITTLE_ENDIAN
|
||||
|
|
|
@ -49,7 +49,7 @@ static unsigned int rtas_error_log_buffer_max;
|
|||
static unsigned int event_scan;
|
||||
static unsigned int rtas_event_scan_rate;
|
||||
|
||||
static int full_rtas_msgs = 0;
|
||||
static bool full_rtas_msgs;
|
||||
|
||||
/* Stop logging to nvram after first fatal error */
|
||||
static int logging_enabled; /* Until we initialize everything,
|
||||
|
@ -592,11 +592,6 @@ __setup("surveillance=", surveillance_setup);
|
|||
|
||||
static int __init rtasmsgs_setup(char *str)
|
||||
{
|
||||
if (strcmp(str, "on") == 0)
|
||||
full_rtas_msgs = 1;
|
||||
else if (strcmp(str, "off") == 0)
|
||||
full_rtas_msgs = 0;
|
||||
|
||||
return 1;
|
||||
return (kstrtobool(str, &full_rtas_msgs) == 0);
|
||||
}
|
||||
__setup("rtasmsgs=", rtasmsgs_setup);
|
||||
|
|
|
@ -203,9 +203,8 @@ static int __kprobes __die(const char *str, struct pt_regs *regs, long err)
|
|||
#ifdef CONFIG_SMP
|
||||
printk("SMP NR_CPUS=%d ", NR_CPUS);
|
||||
#endif
|
||||
#ifdef CONFIG_DEBUG_PAGEALLOC
|
||||
printk("DEBUG_PAGEALLOC ");
|
||||
#endif
|
||||
if (debug_pagealloc_enabled())
|
||||
printk("DEBUG_PAGEALLOC ");
|
||||
#ifdef CONFIG_NUMA
|
||||
printk("NUMA ");
|
||||
#endif
|
||||
|
|
|
@ -255,8 +255,10 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long vend,
|
|||
|
||||
if (ret < 0)
|
||||
break;
|
||||
|
||||
#ifdef CONFIG_DEBUG_PAGEALLOC
|
||||
if ((paddr >> PAGE_SHIFT) < linear_map_hash_count)
|
||||
if (debug_pagealloc_enabled() &&
|
||||
(paddr >> PAGE_SHIFT) < linear_map_hash_count)
|
||||
linear_map_hash_slots[paddr >> PAGE_SHIFT] = ret | 0x80;
|
||||
#endif /* CONFIG_DEBUG_PAGEALLOC */
|
||||
}
|
||||
|
@ -512,17 +514,17 @@ static void __init htab_init_page_sizes(void)
|
|||
if (mmu_has_feature(MMU_FTR_16M_PAGE))
|
||||
memcpy(mmu_psize_defs, mmu_psize_defaults_gp,
|
||||
sizeof(mmu_psize_defaults_gp));
|
||||
found:
|
||||
#ifndef CONFIG_DEBUG_PAGEALLOC
|
||||
/*
|
||||
* Pick a size for the linear mapping. Currently, we only support
|
||||
* 16M, 1M and 4K which is the default
|
||||
*/
|
||||
if (mmu_psize_defs[MMU_PAGE_16M].shift)
|
||||
mmu_linear_psize = MMU_PAGE_16M;
|
||||
else if (mmu_psize_defs[MMU_PAGE_1M].shift)
|
||||
mmu_linear_psize = MMU_PAGE_1M;
|
||||
#endif /* CONFIG_DEBUG_PAGEALLOC */
|
||||
found:
|
||||
if (!debug_pagealloc_enabled()) {
|
||||
/*
|
||||
* Pick a size for the linear mapping. Currently, we only
|
||||
* support 16M, 1M and 4K which is the default
|
||||
*/
|
||||
if (mmu_psize_defs[MMU_PAGE_16M].shift)
|
||||
mmu_linear_psize = MMU_PAGE_16M;
|
||||
else if (mmu_psize_defs[MMU_PAGE_1M].shift)
|
||||
mmu_linear_psize = MMU_PAGE_1M;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PPC_64K_PAGES
|
||||
/*
|
||||
|
@ -721,10 +723,12 @@ static void __init htab_initialize(void)
|
|||
prot = pgprot_val(PAGE_KERNEL);
|
||||
|
||||
#ifdef CONFIG_DEBUG_PAGEALLOC
|
||||
linear_map_hash_count = memblock_end_of_DRAM() >> PAGE_SHIFT;
|
||||
linear_map_hash_slots = __va(memblock_alloc_base(linear_map_hash_count,
|
||||
1, ppc64_rma_size));
|
||||
memset(linear_map_hash_slots, 0, linear_map_hash_count);
|
||||
if (debug_pagealloc_enabled()) {
|
||||
linear_map_hash_count = memblock_end_of_DRAM() >> PAGE_SHIFT;
|
||||
linear_map_hash_slots = __va(memblock_alloc_base(
|
||||
linear_map_hash_count, 1, ppc64_rma_size));
|
||||
memset(linear_map_hash_slots, 0, linear_map_hash_count);
|
||||
}
|
||||
#endif /* CONFIG_DEBUG_PAGEALLOC */
|
||||
|
||||
/* On U3 based machines, we need to reserve the DART area and
|
||||
|
|
|
@ -112,10 +112,10 @@ void __init MMU_setup(void)
|
|||
if (strstr(boot_command_line, "noltlbs")) {
|
||||
__map_without_ltlbs = 1;
|
||||
}
|
||||
#ifdef CONFIG_DEBUG_PAGEALLOC
|
||||
__map_without_bats = 1;
|
||||
__map_without_ltlbs = 1;
|
||||
#endif
|
||||
if (debug_pagealloc_enabled()) {
|
||||
__map_without_bats = 1;
|
||||
__map_without_ltlbs = 1;
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
|
|
|
@ -118,8 +118,7 @@ static void destroy_pagetable_page(struct mm_struct *mm)
|
|||
/* drop all the pending references */
|
||||
count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT;
|
||||
/* We allow PTE_FRAG_NR fragments from a PTE page */
|
||||
count = atomic_sub_return(PTE_FRAG_NR - count, &page->_count);
|
||||
if (!count) {
|
||||
if (page_ref_sub_and_test(page, PTE_FRAG_NR - count)) {
|
||||
pgtable_page_dtor(page);
|
||||
free_hot_cold_page(page, 0);
|
||||
}
|
||||
|
|
|
@ -403,7 +403,7 @@ static pte_t *__alloc_for_cache(struct mm_struct *mm, int kernel)
|
|||
* count.
|
||||
*/
|
||||
if (likely(!mm->context.pte_frag)) {
|
||||
atomic_set(&page->_count, PTE_FRAG_NR);
|
||||
set_page_count(page, PTE_FRAG_NR);
|
||||
mm->context.pte_frag = ret + PTE_FRAG_SIZE;
|
||||
}
|
||||
spin_unlock(&mm->page_table_lock);
|
||||
|
|
|
@ -188,7 +188,7 @@ static struct fsl_diu_shared_fb __attribute__ ((__aligned__(8))) diu_shared_fb;
|
|||
static inline void mpc512x_free_bootmem(struct page *page)
|
||||
{
|
||||
BUG_ON(PageTail(page));
|
||||
BUG_ON(atomic_read(&page->_count) > 1);
|
||||
BUG_ON(page_ref_count(page) > 1);
|
||||
free_reserved_page(page);
|
||||
}
|
||||
|
||||
|
|
|
@ -47,20 +47,14 @@ static DEFINE_PER_CPU(enum cpu_state_vals, current_state) = CPU_STATE_OFFLINE;
|
|||
|
||||
static enum cpu_state_vals default_offline_state = CPU_STATE_OFFLINE;
|
||||
|
||||
static int cede_offline_enabled __read_mostly = 1;
|
||||
static bool cede_offline_enabled __read_mostly = true;
|
||||
|
||||
/*
|
||||
* Enable/disable cede_offline when available.
|
||||
*/
|
||||
static int __init setup_cede_offline(char *str)
|
||||
{
|
||||
if (!strcmp(str, "off"))
|
||||
cede_offline_enabled = 0;
|
||||
else if (!strcmp(str, "on"))
|
||||
cede_offline_enabled = 1;
|
||||
else
|
||||
return 0;
|
||||
return 1;
|
||||
return (kstrtobool(str, &cede_offline_enabled) == 0);
|
||||
}
|
||||
|
||||
__setup("cede_offline=", setup_cede_offline);
|
||||
|
|
|
@ -1432,7 +1432,7 @@ device_initcall(etr_init_sysfs);
|
|||
/*
|
||||
* Server Time Protocol (STP) code.
|
||||
*/
|
||||
static int stp_online;
|
||||
static bool stp_online;
|
||||
static struct stp_sstpi stp_info;
|
||||
static void *stp_page;
|
||||
|
||||
|
@ -1443,11 +1443,7 @@ static struct timer_list stp_timer;
|
|||
|
||||
static int __init early_parse_stp(char *p)
|
||||
{
|
||||
if (strncmp(p, "off", 3) == 0)
|
||||
stp_online = 0;
|
||||
else if (strncmp(p, "on", 2) == 0)
|
||||
stp_online = 1;
|
||||
return 0;
|
||||
return kstrtobool(p, &stp_online);
|
||||
}
|
||||
early_param("stp", early_parse_stp);
|
||||
|
||||
|
|
|
@ -37,7 +37,7 @@ static void set_topology_timer(void);
|
|||
static void topology_work_fn(struct work_struct *work);
|
||||
static struct sysinfo_15_1_x *tl_info;
|
||||
|
||||
static int topology_enabled = 1;
|
||||
static bool topology_enabled = true;
|
||||
static DECLARE_WORK(topology_work, topology_work_fn);
|
||||
|
||||
/*
|
||||
|
@ -444,10 +444,7 @@ static const struct cpumask *cpu_book_mask(int cpu)
|
|||
|
||||
static int __init early_parse_topology(char *p)
|
||||
{
|
||||
if (strncmp(p, "off", 3))
|
||||
return 0;
|
||||
topology_enabled = 0;
|
||||
return 0;
|
||||
return kstrtobool(p, &topology_enabled);
|
||||
}
|
||||
early_param("topology", early_parse_topology);
|
||||
|
||||
|
|
|
@ -35,7 +35,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
|
|||
if (pud) {
|
||||
pmd = pmd_alloc(mm, pud, addr);
|
||||
if (pmd)
|
||||
pte = pte_alloc_map(mm, NULL, pmd, addr);
|
||||
pte = pte_alloc_map(mm, pmd, addr);
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -146,7 +146,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
|
|||
if (pud) {
|
||||
pmd = pmd_alloc(mm, pud, addr);
|
||||
if (pmd)
|
||||
pte = pte_alloc_map(mm, NULL, pmd, addr);
|
||||
pte = pte_alloc_map(mm, pmd, addr);
|
||||
}
|
||||
return pte;
|
||||
}
|
||||
|
|
|
@ -77,7 +77,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
|
|||
else {
|
||||
if (sz != PAGE_SIZE << huge_shift[HUGE_SHIFT_PAGE])
|
||||
panic("Unexpected page size %#lx\n", sz);
|
||||
return pte_alloc_map(mm, NULL, pmd, addr);
|
||||
return pte_alloc_map(mm, pmd, addr);
|
||||
}
|
||||
}
|
||||
#else
|
||||
|
|
|
@ -896,17 +896,15 @@ void __init pgtable_cache_init(void)
|
|||
panic("pgtable_cache_init(): Cannot create pgd cache");
|
||||
}
|
||||
|
||||
#ifdef CONFIG_DEBUG_PAGEALLOC
|
||||
static long __write_once initfree;
|
||||
#else
|
||||
static long __write_once initfree = 1;
|
||||
#endif
|
||||
static bool __write_once set_initfree_done;
|
||||
|
||||
/* Select whether to free (1) or mark unusable (0) the __init pages. */
|
||||
static int __init set_initfree(char *str)
|
||||
{
|
||||
long val;
|
||||
if (kstrtol(str, 0, &val) == 0) {
|
||||
set_initfree_done = true;
|
||||
initfree = val;
|
||||
pr_info("initfree: %s free init pages\n",
|
||||
initfree ? "will" : "won't");
|
||||
|
@ -919,6 +917,11 @@ static void free_init_pages(char *what, unsigned long begin, unsigned long end)
|
|||
{
|
||||
unsigned long addr = (unsigned long) begin;
|
||||
|
||||
/* Prefer user request first */
|
||||
if (!set_initfree_done) {
|
||||
if (debug_pagealloc_enabled())
|
||||
initfree = 0;
|
||||
}
|
||||
if (kdata_huge && !initfree) {
|
||||
pr_warn("Warning: ignoring initfree=0: incompatible with kdata=huge\n");
|
||||
initfree = 1;
|
||||
|
|
|
@ -31,7 +31,7 @@ static int init_stub_pte(struct mm_struct *mm, unsigned long proc,
|
|||
if (!pmd)
|
||||
goto out_pmd;
|
||||
|
||||
pte = pte_alloc_map(mm, NULL, pmd, proc);
|
||||
pte = pte_alloc_map(mm, pmd, proc);
|
||||
if (!pte)
|
||||
goto out_pte;
|
||||
|
||||
|
|
|
@ -276,7 +276,7 @@ retry:
|
|||
up_read(&mm->mmap_sem);
|
||||
|
||||
/*
|
||||
* Handle the "normal" case first - VM_FAULT_MAJOR / VM_FAULT_MINOR
|
||||
* Handle the "normal" case first - VM_FAULT_MAJOR
|
||||
*/
|
||||
if (likely(!(fault &
|
||||
(VM_FAULT_ERROR | VM_FAULT_BADMAP | VM_FAULT_BADACCESS))))
|
||||
|
|
|
@ -54,7 +54,7 @@ pgd_t *get_pgd_slow(struct mm_struct *mm)
|
|||
if (!new_pmd)
|
||||
goto no_pmd;
|
||||
|
||||
new_pte = pte_alloc_map(mm, NULL, new_pmd, 0);
|
||||
new_pte = pte_alloc_map(mm, new_pmd, 0);
|
||||
if (!new_pte)
|
||||
goto no_pte;
|
||||
|
||||
|
|
|
@ -227,19 +227,11 @@ static u32 __init search_agp_bridge(u32 *order, int *valid_agp)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int gart_fix_e820 __initdata = 1;
|
||||
static bool gart_fix_e820 __initdata = true;
|
||||
|
||||
static int __init parse_gart_mem(char *p)
|
||||
{
|
||||
if (!p)
|
||||
return -EINVAL;
|
||||
|
||||
if (!strncmp(p, "off", 3))
|
||||
gart_fix_e820 = 0;
|
||||
else if (!strncmp(p, "on", 2))
|
||||
gart_fix_e820 = 1;
|
||||
|
||||
return 0;
|
||||
return kstrtobool(p, &gart_fix_e820);
|
||||
}
|
||||
early_param("gart_fix_e820", parse_gart_mem);
|
||||
|
||||
|
|
|
@ -135,7 +135,7 @@ static int map_tboot_page(unsigned long vaddr, unsigned long pfn,
|
|||
pmd = pmd_alloc(&tboot_mm, pud, vaddr);
|
||||
if (!pmd)
|
||||
return -1;
|
||||
pte = pte_alloc_map(&tboot_mm, NULL, pmd, vaddr);
|
||||
pte = pte_alloc_map(&tboot_mm, pmd, vaddr);
|
||||
if (!pte)
|
||||
return -1;
|
||||
set_pte_at(&tboot_mm, vaddr, pte, pfn_pte(pfn, prot));
|
||||
|
|
|
@ -131,7 +131,7 @@ static inline void get_head_page_multiple(struct page *page, int nr)
|
|||
{
|
||||
VM_BUG_ON_PAGE(page != compound_head(page), page);
|
||||
VM_BUG_ON_PAGE(page_count(page) == 0, page);
|
||||
atomic_add(nr, &page->_count);
|
||||
page_ref_add(page, nr);
|
||||
SetPageReferenced(page);
|
||||
}
|
||||
|
||||
|
|
|
@ -146,7 +146,7 @@ good_area:
|
|||
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
|
||||
if (flags & VM_FAULT_MAJOR)
|
||||
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
|
||||
else if (flags & VM_FAULT_MINOR)
|
||||
else
|
||||
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
|
||||
|
||||
return;
|
||||
|
|
|
@ -444,7 +444,7 @@ static int __init async_pq_init(void)
|
|||
|
||||
static void __exit async_pq_exit(void)
|
||||
{
|
||||
put_page(pq_scribble_page);
|
||||
__free_page(pq_scribble_page);
|
||||
}
|
||||
|
||||
module_init(async_pq_init);
|
||||
|
|
|
@ -176,17 +176,14 @@ static int hpt_dma_blacklisted(const struct ata_device *dev, char *modestr,
|
|||
const char * const list[])
|
||||
{
|
||||
unsigned char model_num[ATA_ID_PROD_LEN + 1];
|
||||
int i = 0;
|
||||
int i;
|
||||
|
||||
ata_id_c_string(dev->id, model_num, ATA_ID_PROD, sizeof(model_num));
|
||||
|
||||
while (list[i] != NULL) {
|
||||
if (!strcmp(list[i], model_num)) {
|
||||
pr_warn("%s is not supported for %s\n",
|
||||
modestr, list[i]);
|
||||
return 1;
|
||||
}
|
||||
i++;
|
||||
i = match_string(list, -1, model_num);
|
||||
if (i >= 0) {
|
||||
pr_warn("%s is not supported for %s\n", modestr, list[i]);
|
||||
return 1;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -651,7 +651,7 @@ int fwnode_property_match_string(struct fwnode_handle *fwnode,
|
|||
const char *propname, const char *string)
|
||||
{
|
||||
const char **values;
|
||||
int nval, ret, i;
|
||||
int nval, ret;
|
||||
|
||||
nval = fwnode_property_read_string_array(fwnode, propname, NULL, 0);
|
||||
if (nval < 0)
|
||||
|
@ -668,13 +668,9 @@ int fwnode_property_match_string(struct fwnode_handle *fwnode,
|
|||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
ret = -ENODATA;
|
||||
for (i = 0; i < nval; i++) {
|
||||
if (!strcmp(values[i], string)) {
|
||||
ret = i;
|
||||
break;
|
||||
}
|
||||
}
|
||||
ret = match_string(values, nval, string);
|
||||
if (ret < 0)
|
||||
ret = -ENODATA;
|
||||
out:
|
||||
kfree(values);
|
||||
return ret;
|
||||
|
|
|
@ -875,7 +875,7 @@ bio_pageinc(struct bio *bio)
|
|||
* compound pages is no longer allowed by the kernel.
|
||||
*/
|
||||
page = compound_head(bv.bv_page);
|
||||
atomic_inc(&page->_count);
|
||||
page_ref_inc(page);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -888,7 +888,7 @@ bio_pagedec(struct bio *bio)
|
|||
|
||||
bio_for_each_segment(bv, bio, iter) {
|
||||
page = compound_head(bv.bv_page);
|
||||
atomic_dec(&page->_count);
|
||||
page_ref_dec(page);
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -94,15 +94,14 @@ static int nvram_find_and_copy(void __iomem *iobase, u32 lim)
|
|||
|
||||
found:
|
||||
__ioread32_copy(nvram_buf, header, sizeof(*header) / 4);
|
||||
header = (struct nvram_header *)nvram_buf;
|
||||
nvram_len = header->len;
|
||||
nvram_len = ((struct nvram_header *)(nvram_buf))->len;
|
||||
if (nvram_len > size) {
|
||||
pr_err("The nvram size according to the header seems to be bigger than the partition on flash\n");
|
||||
nvram_len = size;
|
||||
}
|
||||
if (nvram_len >= NVRAM_SPACE) {
|
||||
pr_err("nvram on flash (%i bytes) is bigger than the reserved space in memory, will just copy the first %i bytes\n",
|
||||
header->len, NVRAM_SPACE - 1);
|
||||
nvram_len, NVRAM_SPACE - 1);
|
||||
nvram_len = NVRAM_SPACE - 1;
|
||||
}
|
||||
/* proceed reading data after header */
|
||||
|
|
|
@ -170,16 +170,11 @@ static void *edid_load(struct drm_connector *connector, const char *name,
|
|||
int i, valid_extensions = 0;
|
||||
bool print_bad_edid = !connector->bad_edid_counter || (drm_debug & DRM_UT_KMS);
|
||||
|
||||
builtin = 0;
|
||||
for (i = 0; i < GENERIC_EDIDS; i++) {
|
||||
if (strcmp(name, generic_edid_name[i]) == 0) {
|
||||
fwdata = generic_edid[i];
|
||||
fwsize = sizeof(generic_edid[i]);
|
||||
builtin = 1;
|
||||
break;
|
||||
}
|
||||
}
|
||||
if (!builtin) {
|
||||
builtin = match_string(generic_edid_name, GENERIC_EDIDS, name);
|
||||
if (builtin >= 0) {
|
||||
fwdata = generic_edid[builtin];
|
||||
fwsize = sizeof(generic_edid[builtin]);
|
||||
} else {
|
||||
struct platform_device *pdev;
|
||||
int err;
|
||||
|
||||
|
@ -252,7 +247,7 @@ static void *edid_load(struct drm_connector *connector, const char *name,
|
|||
}
|
||||
|
||||
DRM_INFO("Got %s EDID base block and %d extension%s from "
|
||||
"\"%s\" for connector \"%s\"\n", builtin ? "built-in" :
|
||||
"\"%s\" for connector \"%s\"\n", (builtin >= 0) ? "built-in" :
|
||||
"external", valid_extensions, valid_extensions == 1 ? "" : "s",
|
||||
name, connector_name);
|
||||
|
||||
|
|
|
@ -531,14 +531,9 @@ static const struct hpt_info hpt371n = {
|
|||
.timings = &hpt37x_timings
|
||||
};
|
||||
|
||||
static int check_in_drive_list(ide_drive_t *drive, const char **list)
|
||||
static bool check_in_drive_list(ide_drive_t *drive, const char **list)
|
||||
{
|
||||
char *m = (char *)&drive->id[ATA_ID_PROD];
|
||||
|
||||
while (*list)
|
||||
if (!strcmp(*list++, m))
|
||||
return 1;
|
||||
return 0;
|
||||
return match_string(list, -1, (char *)&drive->id[ATA_ID_PROD]) >= 0;
|
||||
}
|
||||
|
||||
static struct hpt_info *hpt3xx_get_info(struct device *dev)
|
||||
|
|
|
@ -2944,7 +2944,7 @@ static bool gfar_add_rx_frag(struct gfar_rx_buff *rxb, u32 lstatus,
|
|||
/* change offset to the other half */
|
||||
rxb->page_offset ^= GFAR_RXB_TRUESIZE;
|
||||
|
||||
atomic_inc(&page->_count);
|
||||
page_ref_inc(page);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
|
|
@ -243,7 +243,7 @@ static bool fm10k_can_reuse_rx_page(struct fm10k_rx_buffer *rx_buffer,
|
|||
/* Even if we own the page, we are not allowed to use atomic_set()
|
||||
* This would break get_page_unless_zero() users.
|
||||
*/
|
||||
atomic_inc(&page->_count);
|
||||
page_ref_inc(page);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
|
|
@ -6630,7 +6630,7 @@ static bool igb_can_reuse_rx_page(struct igb_rx_buffer *rx_buffer,
|
|||
/* Even if we own the page, we are not allowed to use atomic_set()
|
||||
* This would break get_page_unless_zero() users.
|
||||
*/
|
||||
atomic_inc(&page->_count);
|
||||
page_ref_inc(page);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
|
|
@ -1942,7 +1942,7 @@ static bool ixgbe_add_rx_frag(struct ixgbe_ring *rx_ring,
|
|||
/* Even if we own the page, we are not allowed to use atomic_set()
|
||||
* This would break get_page_unless_zero() users.
|
||||
*/
|
||||
atomic_inc(&page->_count);
|
||||
page_ref_inc(page);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
|
|
@ -837,7 +837,7 @@ add_tail_frag:
|
|||
/* Even if we own the page, we are not allowed to use atomic_set()
|
||||
* This would break get_page_unless_zero() users.
|
||||
*/
|
||||
atomic_inc(&page->_count);
|
||||
page_ref_inc(page);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
|
|
@ -82,8 +82,7 @@ static int mlx4_alloc_pages(struct mlx4_en_priv *priv,
|
|||
/* Not doing get_page() for each frag is a big win
|
||||
* on asymetric workloads. Note we can not use atomic_set().
|
||||
*/
|
||||
atomic_add(page_alloc->page_size / frag_info->frag_stride - 1,
|
||||
&page->_count);
|
||||
page_ref_add(page, page_alloc->page_size / frag_info->frag_stride - 1);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -127,7 +126,7 @@ out:
|
|||
dma_unmap_page(priv->ddev, page_alloc[i].dma,
|
||||
page_alloc[i].page_size, PCI_DMA_FROMDEVICE);
|
||||
page = page_alloc[i].page;
|
||||
atomic_set(&page->_count, 1);
|
||||
set_page_count(page, 1);
|
||||
put_page(page);
|
||||
}
|
||||
}
|
||||
|
@ -165,7 +164,7 @@ static int mlx4_en_init_allocator(struct mlx4_en_priv *priv,
|
|||
|
||||
en_dbg(DRV, priv, " frag %d allocator: - size:%d frags:%d\n",
|
||||
i, ring->page_alloc[i].page_size,
|
||||
atomic_read(&ring->page_alloc[i].page->_count));
|
||||
page_ref_count(ring->page_alloc[i].page));
|
||||
}
|
||||
return 0;
|
||||
|
||||
|
@ -177,7 +176,7 @@ out:
|
|||
dma_unmap_page(priv->ddev, page_alloc->dma,
|
||||
page_alloc->page_size, PCI_DMA_FROMDEVICE);
|
||||
page = page_alloc->page;
|
||||
atomic_set(&page->_count, 1);
|
||||
set_page_count(page, 1);
|
||||
put_page(page);
|
||||
page_alloc->page = NULL;
|
||||
}
|
||||
|
|
|
@ -3341,7 +3341,7 @@ static int niu_rbr_add_page(struct niu *np, struct rx_ring_info *rp,
|
|||
|
||||
niu_hash_page(rp, page, addr);
|
||||
if (rp->rbr_blocks_per_page > 1)
|
||||
atomic_add(rp->rbr_blocks_per_page - 1, &page->_count);
|
||||
page_ref_add(page, rp->rbr_blocks_per_page - 1);
|
||||
|
||||
for (i = 0; i < rp->rbr_blocks_per_page; i++) {
|
||||
__le32 *rbr = &rp->rbr[start_index + i];
|
||||
|
|
|
@ -880,14 +880,12 @@ mwifiex_reset_write(struct file *file,
|
|||
{
|
||||
struct mwifiex_private *priv = file->private_data;
|
||||
struct mwifiex_adapter *adapter = priv->adapter;
|
||||
char cmd;
|
||||
bool result;
|
||||
int rc;
|
||||
|
||||
if (copy_from_user(&cmd, ubuf, sizeof(cmd)))
|
||||
return -EFAULT;
|
||||
|
||||
if (strtobool(&cmd, &result))
|
||||
return -EINVAL;
|
||||
rc = kstrtobool_from_user(ubuf, count, &result);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
if (!result)
|
||||
return -EINVAL;
|
||||
|
|
|
@ -334,7 +334,6 @@ int pinmux_map_to_setting(struct pinctrl_map const *map,
|
|||
unsigned num_groups;
|
||||
int ret;
|
||||
const char *group;
|
||||
int i;
|
||||
|
||||
if (!pmxops) {
|
||||
dev_err(pctldev->dev, "does not support mux function\n");
|
||||
|
@ -363,19 +362,13 @@ int pinmux_map_to_setting(struct pinctrl_map const *map,
|
|||
return -EINVAL;
|
||||
}
|
||||
if (map->data.mux.group) {
|
||||
bool found = false;
|
||||
group = map->data.mux.group;
|
||||
for (i = 0; i < num_groups; i++) {
|
||||
if (!strcmp(group, groups[i])) {
|
||||
found = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
if (!found) {
|
||||
ret = match_string(groups, num_groups, group);
|
||||
if (ret < 0) {
|
||||
dev_err(pctldev->dev,
|
||||
"invalid group \"%s\" for function \"%s\"\n",
|
||||
group, map->data.mux.function);
|
||||
return -EINVAL;
|
||||
return ret;
|
||||
}
|
||||
} else {
|
||||
group = groups[0];
|
||||
|
|
|
@ -906,26 +906,21 @@ static int ab8500_btemp_get_property(struct power_supply *psy,
|
|||
static int ab8500_btemp_get_ext_psy_data(struct device *dev, void *data)
|
||||
{
|
||||
struct power_supply *psy;
|
||||
struct power_supply *ext;
|
||||
struct power_supply *ext = dev_get_drvdata(dev);
|
||||
const char **supplicants = (const char **)ext->supplied_to;
|
||||
struct ab8500_btemp *di;
|
||||
union power_supply_propval ret;
|
||||
int i, j;
|
||||
bool psy_found = false;
|
||||
int j;
|
||||
|
||||
psy = (struct power_supply *)data;
|
||||
ext = dev_get_drvdata(dev);
|
||||
di = power_supply_get_drvdata(psy);
|
||||
|
||||
/*
|
||||
* For all psy where the name of your driver
|
||||
* appears in any supplied_to
|
||||
*/
|
||||
for (i = 0; i < ext->num_supplicants; i++) {
|
||||
if (!strcmp(ext->supplied_to[i], psy->desc->name))
|
||||
psy_found = true;
|
||||
}
|
||||
|
||||
if (!psy_found)
|
||||
j = match_string(supplicants, ext->num_supplicants, psy->desc->name);
|
||||
if (j < 0)
|
||||
return 0;
|
||||
|
||||
/* Go through all properties for the psy */
|
||||
|
|
|
@ -1929,11 +1929,11 @@ static int ab8540_charger_usb_pre_chg_enable(struct ux500_charger *charger,
|
|||
static int ab8500_charger_get_ext_psy_data(struct device *dev, void *data)
|
||||
{
|
||||
struct power_supply *psy;
|
||||
struct power_supply *ext;
|
||||
struct power_supply *ext = dev_get_drvdata(dev);
|
||||
const char **supplicants = (const char **)ext->supplied_to;
|
||||
struct ab8500_charger *di;
|
||||
union power_supply_propval ret;
|
||||
int i, j;
|
||||
bool psy_found = false;
|
||||
int j;
|
||||
struct ux500_charger *usb_chg;
|
||||
|
||||
usb_chg = (struct ux500_charger *)data;
|
||||
|
@ -1941,15 +1941,9 @@ static int ab8500_charger_get_ext_psy_data(struct device *dev, void *data)
|
|||
|
||||
di = to_ab8500_charger_usb_device_info(usb_chg);
|
||||
|
||||
ext = dev_get_drvdata(dev);
|
||||
|
||||
/* For all psy where the driver name appears in any supplied_to */
|
||||
for (i = 0; i < ext->num_supplicants; i++) {
|
||||
if (!strcmp(ext->supplied_to[i], psy->desc->name))
|
||||
psy_found = true;
|
||||
}
|
||||
|
||||
if (!psy_found)
|
||||
j = match_string(supplicants, ext->num_supplicants, psy->desc->name);
|
||||
if (j < 0)
|
||||
return 0;
|
||||
|
||||
/* Go through all properties for the psy */
|
||||
|
|
|
@ -2168,26 +2168,21 @@ static int ab8500_fg_get_property(struct power_supply *psy,
|
|||
static int ab8500_fg_get_ext_psy_data(struct device *dev, void *data)
|
||||
{
|
||||
struct power_supply *psy;
|
||||
struct power_supply *ext;
|
||||
struct power_supply *ext = dev_get_drvdata(dev);
|
||||
const char **supplicants = (const char **)ext->supplied_to;
|
||||
struct ab8500_fg *di;
|
||||
union power_supply_propval ret;
|
||||
int i, j;
|
||||
bool psy_found = false;
|
||||
int j;
|
||||
|
||||
psy = (struct power_supply *)data;
|
||||
ext = dev_get_drvdata(dev);
|
||||
di = power_supply_get_drvdata(psy);
|
||||
|
||||
/*
|
||||
* For all psy where the name of your driver
|
||||
* appears in any supplied_to
|
||||
*/
|
||||
for (i = 0; i < ext->num_supplicants; i++) {
|
||||
if (!strcmp(ext->supplied_to[i], psy->desc->name))
|
||||
psy_found = true;
|
||||
}
|
||||
|
||||
if (!psy_found)
|
||||
j = match_string(supplicants, ext->num_supplicants, psy->desc->name);
|
||||
if (j < 0)
|
||||
return 0;
|
||||
|
||||
/* Go through all properties for the psy */
|
||||
|
|
|
@ -975,22 +975,18 @@ static void handle_maxim_chg_curr(struct abx500_chargalg *di)
|
|||
static int abx500_chargalg_get_ext_psy_data(struct device *dev, void *data)
|
||||
{
|
||||
struct power_supply *psy;
|
||||
struct power_supply *ext;
|
||||
struct power_supply *ext = dev_get_drvdata(dev);
|
||||
const char **supplicants = (const char **)ext->supplied_to;
|
||||
struct abx500_chargalg *di;
|
||||
union power_supply_propval ret;
|
||||
int i, j;
|
||||
bool psy_found = false;
|
||||
int j;
|
||||
bool capacity_updated = false;
|
||||
|
||||
psy = (struct power_supply *)data;
|
||||
ext = dev_get_drvdata(dev);
|
||||
di = power_supply_get_drvdata(psy);
|
||||
/* For all psy where the driver name appears in any supplied_to */
|
||||
for (i = 0; i < ext->num_supplicants; i++) {
|
||||
if (!strcmp(ext->supplied_to[i], psy->desc->name))
|
||||
psy_found = true;
|
||||
}
|
||||
if (!psy_found)
|
||||
j = match_string(supplicants, ext->num_supplicants, psy->desc->name);
|
||||
if (j < 0)
|
||||
return 0;
|
||||
|
||||
/*
|
||||
|
|
|
@ -2019,27 +2019,6 @@ static void __exit charger_manager_cleanup(void)
|
|||
}
|
||||
module_exit(charger_manager_cleanup);
|
||||
|
||||
/**
|
||||
* find_power_supply - find the associated power_supply of charger
|
||||
* @cm: the Charger Manager representing the battery
|
||||
* @psy: pointer to instance of charger's power_supply
|
||||
*/
|
||||
static bool find_power_supply(struct charger_manager *cm,
|
||||
struct power_supply *psy)
|
||||
{
|
||||
int i;
|
||||
bool found = false;
|
||||
|
||||
for (i = 0; cm->desc->psy_charger_stat[i]; i++) {
|
||||
if (!strcmp(psy->desc->name, cm->desc->psy_charger_stat[i])) {
|
||||
found = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
return found;
|
||||
}
|
||||
|
||||
/**
|
||||
* cm_notify_event - charger driver notify Charger Manager of charger event
|
||||
* @psy: pointer to instance of charger's power_supply
|
||||
|
@ -2057,9 +2036,11 @@ void cm_notify_event(struct power_supply *psy, enum cm_event_types type,
|
|||
|
||||
mutex_lock(&cm_list_mtx);
|
||||
list_for_each_entry(cm, &cm_list, entry) {
|
||||
found_power_supply = find_power_supply(cm, psy);
|
||||
if (found_power_supply)
|
||||
if (match_string(cm->desc->psy_charger_stat, -1,
|
||||
psy->desc->name) >= 0) {
|
||||
found_power_supply = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
mutex_unlock(&cm_list_mtx);
|
||||
|
||||
|
|
|
@ -65,18 +65,15 @@ EXPORT_SYMBOL_GPL(usb_speed_string);
|
|||
enum usb_device_speed usb_get_maximum_speed(struct device *dev)
|
||||
{
|
||||
const char *maximum_speed;
|
||||
int err;
|
||||
int i;
|
||||
int ret;
|
||||
|
||||
err = device_property_read_string(dev, "maximum-speed", &maximum_speed);
|
||||
if (err < 0)
|
||||
ret = device_property_read_string(dev, "maximum-speed", &maximum_speed);
|
||||
if (ret < 0)
|
||||
return USB_SPEED_UNKNOWN;
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(speed_names); i++)
|
||||
if (strcmp(maximum_speed, speed_names[i]) == 0)
|
||||
return i;
|
||||
ret = match_string(speed_names, ARRAY_SIZE(speed_names), maximum_speed);
|
||||
|
||||
return USB_SPEED_UNKNOWN;
|
||||
return (ret < 0) ? USB_SPEED_UNKNOWN : ret;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(usb_get_maximum_speed);
|
||||
|
||||
|
@ -110,13 +107,10 @@ static const char *const usb_dr_modes[] = {
|
|||
|
||||
static enum usb_dr_mode usb_get_dr_mode_from_string(const char *str)
|
||||
{
|
||||
int i;
|
||||
int ret;
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(usb_dr_modes); i++)
|
||||
if (!strcmp(usb_dr_modes[i], str))
|
||||
return i;
|
||||
|
||||
return USB_DR_MODE_UNKNOWN;
|
||||
ret = match_string(usb_dr_modes, ARRAY_SIZE(usb_dr_modes), str);
|
||||
return (ret < 0) ? USB_DR_MODE_UNKNOWN : ret;
|
||||
}
|
||||
|
||||
enum usb_dr_mode usb_get_dr_mode(struct device *dev)
|
||||
|
|
|
@ -30,6 +30,7 @@
|
|||
#include <linux/balloon_compaction.h>
|
||||
#include <linux/oom.h>
|
||||
#include <linux/wait.h>
|
||||
#include <linux/mm.h>
|
||||
|
||||
/*
|
||||
* Balloon device works in 4K page units. So each page is pointed to by
|
||||
|
@ -229,10 +230,13 @@ static void update_balloon_stats(struct virtio_balloon *vb)
|
|||
unsigned long events[NR_VM_EVENT_ITEMS];
|
||||
struct sysinfo i;
|
||||
int idx = 0;
|
||||
long available;
|
||||
|
||||
all_vm_events(events);
|
||||
si_meminfo(&i);
|
||||
|
||||
available = si_mem_available();
|
||||
|
||||
update_stat(vb, idx++, VIRTIO_BALLOON_S_SWAP_IN,
|
||||
pages_to_bytes(events[PSWPIN]));
|
||||
update_stat(vb, idx++, VIRTIO_BALLOON_S_SWAP_OUT,
|
||||
|
@ -243,6 +247,8 @@ static void update_balloon_stats(struct virtio_balloon *vb)
|
|||
pages_to_bytes(i.freeram));
|
||||
update_stat(vb, idx++, VIRTIO_BALLOON_S_MEMTOT,
|
||||
pages_to_bytes(i.totalram));
|
||||
update_stat(vb, idx++, VIRTIO_BALLOON_S_AVAIL,
|
||||
pages_to_bytes(available));
|
||||
}
|
||||
|
||||
/*
|
||||
|
|
|
@ -137,7 +137,6 @@ static void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info)
|
|||
void **slot;
|
||||
|
||||
spin_lock(&fs_info->buffer_lock);
|
||||
restart:
|
||||
radix_tree_for_each_slot(slot, &fs_info->buffer_radix, &iter, 0) {
|
||||
struct extent_buffer *eb;
|
||||
|
||||
|
@ -147,7 +146,7 @@ restart:
|
|||
/* Shouldn't happen but that kind of thinking creates CVE's */
|
||||
if (radix_tree_exception(eb)) {
|
||||
if (radix_tree_deref_retry(eb))
|
||||
goto restart;
|
||||
slot = radix_tree_iter_retry(&iter);
|
||||
continue;
|
||||
}
|
||||
spin_unlock(&fs_info->buffer_lock);
|
||||
|
|
|
@ -255,7 +255,6 @@ static const struct file_operations cifs_debug_data_proc_fops = {
|
|||
static ssize_t cifs_stats_proc_write(struct file *file,
|
||||
const char __user *buffer, size_t count, loff_t *ppos)
|
||||
{
|
||||
char c;
|
||||
bool bv;
|
||||
int rc;
|
||||
struct list_head *tmp1, *tmp2, *tmp3;
|
||||
|
@ -263,11 +262,8 @@ static ssize_t cifs_stats_proc_write(struct file *file,
|
|||
struct cifs_ses *ses;
|
||||
struct cifs_tcon *tcon;
|
||||
|
||||
rc = get_user(c, buffer);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
if (strtobool(&c, &bv) == 0) {
|
||||
rc = kstrtobool_from_user(buffer, count, &bv);
|
||||
if (rc == 0) {
|
||||
#ifdef CONFIG_CIFS_STATS2
|
||||
atomic_set(&totBufAllocCount, 0);
|
||||
atomic_set(&totSmBufAllocCount, 0);
|
||||
|
@ -290,6 +286,8 @@ static ssize_t cifs_stats_proc_write(struct file *file,
|
|||
}
|
||||
}
|
||||
spin_unlock(&cifs_tcp_ses_lock);
|
||||
} else {
|
||||
return rc;
|
||||
}
|
||||
|
||||
return count;
|
||||
|
@ -433,17 +431,17 @@ static int cifsFYI_proc_open(struct inode *inode, struct file *file)
|
|||
static ssize_t cifsFYI_proc_write(struct file *file, const char __user *buffer,
|
||||
size_t count, loff_t *ppos)
|
||||
{
|
||||
char c;
|
||||
char c[2] = { '\0' };
|
||||
bool bv;
|
||||
int rc;
|
||||
|
||||
rc = get_user(c, buffer);
|
||||
rc = get_user(c[0], buffer);
|
||||
if (rc)
|
||||
return rc;
|
||||
if (strtobool(&c, &bv) == 0)
|
||||
if (strtobool(c, &bv) == 0)
|
||||
cifsFYI = bv;
|
||||
else if ((c > '1') && (c <= '9'))
|
||||
cifsFYI = (int) (c - '0'); /* see cifs_debug.h for meanings */
|
||||
else if ((c[0] > '1') && (c[0] <= '9'))
|
||||
cifsFYI = (int) (c[0] - '0'); /* see cifs_debug.h for meanings */
|
||||
|
||||
return count;
|
||||
}
|
||||
|
@ -471,20 +469,12 @@ static int cifs_linux_ext_proc_open(struct inode *inode, struct file *file)
|
|||
static ssize_t cifs_linux_ext_proc_write(struct file *file,
|
||||
const char __user *buffer, size_t count, loff_t *ppos)
|
||||
{
|
||||
char c;
|
||||
bool bv;
|
||||
int rc;
|
||||
|
||||
rc = get_user(c, buffer);
|
||||
rc = kstrtobool_from_user(buffer, count, &linuxExtEnabled);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
rc = strtobool(&c, &bv);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
linuxExtEnabled = bv;
|
||||
|
||||
return count;
|
||||
}
|
||||
|
||||
|
@ -511,20 +501,12 @@ static int cifs_lookup_cache_proc_open(struct inode *inode, struct file *file)
|
|||
static ssize_t cifs_lookup_cache_proc_write(struct file *file,
|
||||
const char __user *buffer, size_t count, loff_t *ppos)
|
||||
{
|
||||
char c;
|
||||
bool bv;
|
||||
int rc;
|
||||
|
||||
rc = get_user(c, buffer);
|
||||
rc = kstrtobool_from_user(buffer, count, &lookupCacheEnabled);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
rc = strtobool(&c, &bv);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
lookupCacheEnabled = bv;
|
||||
|
||||
return count;
|
||||
}
|
||||
|
||||
|
@ -551,20 +533,12 @@ static int traceSMB_proc_open(struct inode *inode, struct file *file)
|
|||
static ssize_t traceSMB_proc_write(struct file *file, const char __user *buffer,
|
||||
size_t count, loff_t *ppos)
|
||||
{
|
||||
char c;
|
||||
bool bv;
|
||||
int rc;
|
||||
|
||||
rc = get_user(c, buffer);
|
||||
rc = kstrtobool_from_user(buffer, count, &traceSMB);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
rc = strtobool(&c, &bv);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
traceSMB = bv;
|
||||
|
||||
return count;
|
||||
}
|
||||
|
||||
|
@ -622,7 +596,6 @@ static ssize_t cifs_security_flags_proc_write(struct file *file,
|
|||
int rc;
|
||||
unsigned int flags;
|
||||
char flags_string[12];
|
||||
char c;
|
||||
bool bv;
|
||||
|
||||
if ((count < 1) || (count > 11))
|
||||
|
@ -635,11 +608,10 @@ static ssize_t cifs_security_flags_proc_write(struct file *file,
|
|||
|
||||
if (count < 3) {
|
||||
/* single char or single char followed by null */
|
||||
c = flags_string[0];
|
||||
if (strtobool(&c, &bv) == 0) {
|
||||
if (strtobool(flags_string, &bv) == 0) {
|
||||
global_secflags = bv ? CIFSSEC_MAX : CIFSSEC_DEF;
|
||||
return count;
|
||||
} else if (!isdigit(c)) {
|
||||
} else if (!isdigit(flags_string[0])) {
|
||||
cifs_dbg(VFS, "Invalid SecurityFlags: %s\n",
|
||||
flags_string);
|
||||
return -EINVAL;
|
||||
|
|
|
@ -25,7 +25,7 @@
|
|||
void cifs_dump_mem(char *label, void *data, int length);
|
||||
void cifs_dump_detail(void *);
|
||||
void cifs_dump_mids(struct TCP_Server_Info *);
|
||||
extern int traceSMB; /* flag which enables the function below */
|
||||
extern bool traceSMB; /* flag which enables the function below */
|
||||
void dump_smb(void *, int);
|
||||
#define CIFS_INFO 0x01
|
||||
#define CIFS_RC 0x02
|
||||
|
|
|
@ -54,10 +54,10 @@
|
|||
#endif
|
||||
|
||||
int cifsFYI = 0;
|
||||
int traceSMB = 0;
|
||||
bool traceSMB;
|
||||
bool enable_oplocks = true;
|
||||
unsigned int linuxExtEnabled = 1;
|
||||
unsigned int lookupCacheEnabled = 1;
|
||||
bool linuxExtEnabled = true;
|
||||
bool lookupCacheEnabled = true;
|
||||
unsigned int global_secflags = CIFSSEC_DEF;
|
||||
/* unsigned int ntlmv2_support = 0; */
|
||||
unsigned int sign_CIFS_PDUs = 1;
|
||||
|
|
|
@ -1596,11 +1596,11 @@ GLOBAL_EXTERN atomic_t midCount;
|
|||
|
||||
/* Misc globals */
|
||||
GLOBAL_EXTERN bool enable_oplocks; /* enable or disable oplocks */
|
||||
GLOBAL_EXTERN unsigned int lookupCacheEnabled;
|
||||
GLOBAL_EXTERN bool lookupCacheEnabled;
|
||||
GLOBAL_EXTERN unsigned int global_secflags; /* if on, session setup sent
|
||||
with more secure ntlmssp2 challenge/resp */
|
||||
GLOBAL_EXTERN unsigned int sign_CIFS_PDUs; /* enable smb packet signing */
|
||||
GLOBAL_EXTERN unsigned int linuxExtEnabled;/*enable Linux/Unix CIFS extensions*/
|
||||
GLOBAL_EXTERN bool linuxExtEnabled;/*enable Linux/Unix CIFS extensions*/
|
||||
GLOBAL_EXTERN unsigned int CIFSMaxBufSize; /* max size not including hdr */
|
||||
GLOBAL_EXTERN unsigned int cifs_min_rcv; /* min size of big ntwrk buf pool */
|
||||
GLOBAL_EXTERN unsigned int cifs_min_small; /* min size of small buf pool */
|
||||
|
|
|
@ -1616,7 +1616,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
|
|||
{
|
||||
int res = 0, eavail, timed_out = 0;
|
||||
unsigned long flags;
|
||||
long slack = 0;
|
||||
u64 slack = 0;
|
||||
wait_queue_t wait;
|
||||
ktime_t expires, *to = NULL;
|
||||
|
||||
|
|
|
@ -180,7 +180,7 @@ void nilfs_page_bug(struct page *page)
|
|||
|
||||
printk(KERN_CRIT "NILFS_PAGE_BUG(%p): cnt=%d index#=%llu flags=0x%lx "
|
||||
"mapping=%p ino=%lu\n",
|
||||
page, atomic_read(&page->_count),
|
||||
page, page_ref_count(page),
|
||||
(unsigned long long)page->index, page->flags, m, ino);
|
||||
|
||||
if (page_has_buffers(page)) {
|
||||
|
|
|
@ -434,7 +434,7 @@ static int proc_pid_wchan(struct seq_file *m, struct pid_namespace *ns,
|
|||
&& !lookup_symbol_name(wchan, symname))
|
||||
seq_printf(m, "%s", symname);
|
||||
else
|
||||
seq_putc(m, '0');
|
||||
seq_puts(m, "0\n");
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -2158,6 +2158,7 @@ static const struct file_operations proc_map_files_operations = {
|
|||
.llseek = default_llseek,
|
||||
};
|
||||
|
||||
#ifdef CONFIG_CHECKPOINT_RESTORE
|
||||
struct timers_private {
|
||||
struct pid *pid;
|
||||
struct task_struct *task;
|
||||
|
@ -2256,6 +2257,73 @@ static const struct file_operations proc_timers_operations = {
|
|||
.llseek = seq_lseek,
|
||||
.release = seq_release_private,
|
||||
};
|
||||
#endif
|
||||
|
||||
static ssize_t timerslack_ns_write(struct file *file, const char __user *buf,
|
||||
size_t count, loff_t *offset)
|
||||
{
|
||||
struct inode *inode = file_inode(file);
|
||||
struct task_struct *p;
|
||||
u64 slack_ns;
|
||||
int err;
|
||||
|
||||
err = kstrtoull_from_user(buf, count, 10, &slack_ns);
|
||||
if (err < 0)
|
||||
return err;
|
||||
|
||||
p = get_proc_task(inode);
|
||||
if (!p)
|
||||
return -ESRCH;
|
||||
|
||||
if (ptrace_may_access(p, PTRACE_MODE_ATTACH_FSCREDS)) {
|
||||
task_lock(p);
|
||||
if (slack_ns == 0)
|
||||
p->timer_slack_ns = p->default_timer_slack_ns;
|
||||
else
|
||||
p->timer_slack_ns = slack_ns;
|
||||
task_unlock(p);
|
||||
} else
|
||||
count = -EPERM;
|
||||
|
||||
put_task_struct(p);
|
||||
|
||||
return count;
|
||||
}
|
||||
|
||||
static int timerslack_ns_show(struct seq_file *m, void *v)
|
||||
{
|
||||
struct inode *inode = m->private;
|
||||
struct task_struct *p;
|
||||
int err = 0;
|
||||
|
||||
p = get_proc_task(inode);
|
||||
if (!p)
|
||||
return -ESRCH;
|
||||
|
||||
if (ptrace_may_access(p, PTRACE_MODE_ATTACH_FSCREDS)) {
|
||||
task_lock(p);
|
||||
seq_printf(m, "%llu\n", p->timer_slack_ns);
|
||||
task_unlock(p);
|
||||
} else
|
||||
err = -EPERM;
|
||||
|
||||
put_task_struct(p);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static int timerslack_ns_open(struct inode *inode, struct file *filp)
|
||||
{
|
||||
return single_open(filp, timerslack_ns_show, inode);
|
||||
}
|
||||
|
||||
static const struct file_operations proc_pid_set_timerslack_ns_operations = {
|
||||
.open = timerslack_ns_open,
|
||||
.read = seq_read,
|
||||
.write = timerslack_ns_write,
|
||||
.llseek = seq_lseek,
|
||||
.release = single_release,
|
||||
};
|
||||
|
||||
static int proc_pident_instantiate(struct inode *dir,
|
||||
struct dentry *dentry, struct task_struct *task, const void *ptr)
|
||||
|
@ -2831,6 +2899,7 @@ static const struct pid_entry tgid_base_stuff[] = {
|
|||
#ifdef CONFIG_CHECKPOINT_RESTORE
|
||||
REG("timers", S_IRUGO, proc_timers_operations),
|
||||
#endif
|
||||
REG("timerslack_ns", S_IRUGO|S_IWUGO, proc_pid_set_timerslack_ns_operations),
|
||||
};
|
||||
|
||||
static int proc_tgid_base_readdir(struct file *file, struct dir_context *ctx)
|
||||
|
|
|
@@ -29,10 +29,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
        unsigned long committed;
        long cached;
        long available;
-       unsigned long pagecache;
-       unsigned long wmark_low = 0;
        unsigned long pages[NR_LRU_LISTS];
-       struct zone *zone;
        int lru;

        /*
@@ -51,33 +48,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
        for (lru = LRU_BASE; lru < NR_LRU_LISTS; lru++)
                pages[lru] = global_page_state(NR_LRU_BASE + lru);

-       for_each_zone(zone)
-               wmark_low += zone->watermark[WMARK_LOW];
-
-       /*
-        * Estimate the amount of memory available for userspace allocations,
-        * without causing swapping.
-        */
-       available = i.freeram - totalreserve_pages;
-
-       /*
-        * Not all the page cache can be freed, otherwise the system will
-        * start swapping. Assume at least half of the page cache, or the
-        * low watermark worth of cache, needs to stay.
-        */
-       pagecache = pages[LRU_ACTIVE_FILE] + pages[LRU_INACTIVE_FILE];
-       pagecache -= min(pagecache / 2, wmark_low);
-       available += pagecache;
-
-       /*
-        * Part of the reclaimable slab consists of items that are in use,
-        * and cannot be freed. Cap this estimate at the low watermark.
-        */
-       available += global_page_state(NR_SLAB_RECLAIMABLE) -
-                    min(global_page_state(NR_SLAB_RECLAIMABLE) / 2, wmark_low);
-
-       if (available < 0)
-               available = 0;
+       available = si_mem_available();

        /*
         * Tagged format, for easy grepping and expansion.

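The removed block is the open-coded MemAvailable estimate, which now lives behind si_mem_available() so other callers can reuse it. A standalone model of the arithmetic the old code performed (the page counts below are invented purely for illustration):

    /* Model of the MemAvailable estimate that moved into si_mem_available(). */
    #include <stdio.h>

    static long min_l(long a, long b) { return a < b ? a : b; }

    static long mem_available(long freeram, long totalreserve, long active_file,
                              long inactive_file, long slab_reclaimable,
                              long wmark_low)
    {
            long available = freeram - totalreserve;
            long pagecache = active_file + inactive_file;

            /* At least half the page cache, or the low watermark, must stay. */
            pagecache -= min_l(pagecache / 2, wmark_low);
            available += pagecache;

            /* Reclaimable slab gets the same 50% / low-watermark cap. */
            available += slab_reclaimable - min_l(slab_reclaimable / 2, wmark_low);

            return available < 0 ? 0 : available;
    }

    int main(void)
    {
            /* Invented counts, in pages, just to exercise the formula. */
            printf("%ld\n", mem_available(100000, 20000, 50000, 30000, 10000, 5000));
            return 0;
    }
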
@@ -103,9 +103,9 @@ u64 stable_page_flags(struct page *page)
         * pseudo flags for the well known (anonymous) memory mapped pages
         *
         * Note that page->_mapcount is overloaded in SLOB/SLUB/SLQB, so the
-        * simple test in page_mapcount() is not enough.
+        * simple test in page_mapped() is not enough.
         */
-       if (!PageSlab(page) && page_mapcount(page))
+       if (!PageSlab(page) && page_mapped(page))
                u |= 1 << KPF_MMAP;
        if (PageAnon(page))
                u |= 1 << KPF_ANON;
@@ -148,6 +148,8 @@ u64 stable_page_flags(struct page *page)
         */
        if (PageBuddy(page))
                u |= 1 << KPF_BUDDY;
+       else if (page_count(page) == 0 && is_free_buddy_page(page))
+               u |= 1 << KPF_BUDDY;

        if (PageBalloon(page))
                u |= 1 << KPF_BALLOON;
@@ -158,6 +160,8 @@ u64 stable_page_flags(struct page *page)
        u |= kpf_copy_bit(k, KPF_LOCKED, PG_locked);

        u |= kpf_copy_bit(k, KPF_SLAB, PG_slab);
+       if (PageTail(page) && PageSlab(compound_head(page)))
+               u |= 1 << KPF_SLAB;

        u |= kpf_copy_bit(k, KPF_ERROR, PG_error);
        u |= kpf_copy_bit(k, KPF_DIRTY, PG_dirty);

@@ -231,7 +231,9 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,

        list_for_each_entry(m, &vmcore_list, list) {
                if (*fpos < m->offset + m->size) {
-                       tsz = min_t(size_t, m->offset + m->size - *fpos, buflen);
+                       tsz = (size_t)min_t(unsigned long long,
+                                           m->offset + m->size - *fpos,
+                                           buflen);
                        start = m->paddr + *fpos - m->offset;
                        tmp = read_from_oldmem(buffer, tsz, &start, userbuf);
                        if (tmp < 0)
@@ -461,7 +463,8 @@ static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
                if (start < m->offset + m->size) {
                        u64 paddr = 0;

-                       tsz = min_t(size_t, m->offset + m->size - start, size);
+                       tsz = (size_t)min_t(unsigned long long,
+                                           m->offset + m->size - start, size);
                        paddr = m->paddr + start - m->offset;
                        if (vmcore_remap_oldmem_pfn(vma, vma->vm_start + len,
                                                    paddr >> PAGE_SHIFT, tsz,

@@ -70,9 +70,9 @@ static long __estimate_accuracy(struct timespec *tv)
        return slack;
}

-long select_estimate_accuracy(struct timespec *tv)
+u64 select_estimate_accuracy(struct timespec *tv)
{
-       unsigned long ret;
+       u64 ret;
        struct timespec now;

        /*
@@ -402,7 +402,7 @@ int do_select(int n, fd_set_bits *fds, struct timespec *end_time)
        struct poll_wqueues table;
        poll_table *wait;
        int retval, i, timed_out = 0;
-       unsigned long slack = 0;
+       u64 slack = 0;
        unsigned int busy_flag = net_busy_loop_on() ? POLL_BUSY_LOOP : 0;
        unsigned long busy_end = 0;

@@ -784,7 +784,7 @@ static int do_poll(struct poll_list *list, struct poll_wqueues *wait,
        poll_table* pt = &wait->pt;
        ktime_t expire, *to = NULL;
        int timed_out = 0, count = 0;
-       unsigned long slack = 0;
+       u64 slack = 0;
        unsigned int busy_flag = net_busy_loop_on() ? POLL_BUSY_LOOP : 0;
        unsigned long busy_end = 0;

@@ -98,14 +98,14 @@ ATOMIC_LONG_ADD_SUB_OP(sub, _release)
#define atomic_long_xchg(v, new) \
        (ATOMIC_LONG_PFX(_xchg)((ATOMIC_LONG_PFX(_t) *)(v), (new)))

-static inline void atomic_long_inc(atomic_long_t *l)
+static __always_inline void atomic_long_inc(atomic_long_t *l)
{
        ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;

        ATOMIC_LONG_PFX(_inc)(v);
}

-static inline void atomic_long_dec(atomic_long_t *l)
+static __always_inline void atomic_long_dec(atomic_long_t *l)
{
        ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;

@@ -113,7 +113,7 @@ static inline void atomic_long_dec(atomic_long_t *l)
}

#define ATOMIC_LONG_OP(op) \
-static inline void \
+static __always_inline void \
atomic_long_##op(long i, atomic_long_t *l) \
{ \
        ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \

@@ -81,6 +81,12 @@ extern void warn_slowpath_null(const char *file, const int line);
        do { printk(arg); __WARN_TAINT(taint); } while (0)
#endif

+/* used internally by panic.c */
+struct warn_args;
+
+void __warn(const char *file, int line, void *caller, unsigned taint,
+           struct pt_regs *regs, struct warn_args *args);
+
#ifndef WARN_ON
#define WARN_ON(condition) ({ \
        int __ret_warn_on = !!(condition); \
@@ -110,9 +116,10 @@ extern void warn_slowpath_null(const char *file, const int line);
        static bool __section(.data.unlikely) __warned; \
        int __ret_warn_once = !!(condition); \
        \
-       if (unlikely(__ret_warn_once)) \
-               if (WARN_ON(!__warned)) \
-                       __warned = true; \
+       if (unlikely(__ret_warn_once && !__warned)) { \
+               __warned = true; \
+               WARN_ON(1); \
+       } \
        unlikely(__ret_warn_once); \
})

@@ -120,9 +127,10 @@ extern void warn_slowpath_null(const char *file, const int line);
        static bool __section(.data.unlikely) __warned; \
        int __ret_warn_once = !!(condition); \
        \
-       if (unlikely(__ret_warn_once)) \
-               if (WARN(!__warned, format)) \
-                       __warned = true; \
+       if (unlikely(__ret_warn_once && !__warned)) { \
+               __warned = true; \
+               WARN(1, format); \
+       } \
        unlikely(__ret_warn_once); \
})

@@ -130,9 +138,10 @@ extern void warn_slowpath_null(const char *file, const int line);
        static bool __section(.data.unlikely) __warned; \
        int __ret_warn_once = !!(condition); \
        \
-       if (unlikely(__ret_warn_once)) \
-               if (WARN_TAINT(!__warned, taint, format)) \
-                       __warned = true; \
+       if (unlikely(__ret_warn_once && !__warned)) { \
+               __warned = true; \
+               WARN_TAINT(1, taint, format); \
+       } \
        unlikely(__ret_warn_once); \
})

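The *_ONCE() rework above sets __warned before the warning fires rather than afterwards, so a warning triggered recursively from the printk path cannot re-enter itself. A userspace model of the once-only ordering (fprintf stands in for the real WARN machinery; it uses the same GNU statement-expression form as the kernel macro):

    /* Model of the reworked WARN_ON_ONCE() ordering; not the kernel macro. */
    #include <stdbool.h>
    #include <stdio.h>

    #define MY_WARN_ON_ONCE(cond) ({                                \
            static bool __warned;                                   \
            bool __ret = (cond);                                    \
            if (__ret && !__warned) {                               \
                    __warned = true;   /* mark first, then warn */  \
                    fprintf(stderr, "warning at %s:%d\n",           \
                            __FILE__, __LINE__);                    \
            }                                                       \
            __ret;                                                  \
    })

    int main(void)
    {
            for (int i = 0; i < 3; i++)
                    MY_WARN_ON_ONCE(1);     /* the message prints only once */
            return 0;
    }
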
@@ -783,6 +783,23 @@ static inline int pmd_clear_huge(pmd_t *pmd)
}
#endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */

+#ifndef __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+/*
+ * ARCHes with special requirements for evicting THP backing TLB entries can
+ * implement this. Otherwise also, it can help optimize normal TLB flush in
+ * THP regime. stock flush_tlb_range() typically has optimization to nuke the
+ * entire TLB TLB if flush span is greater than a threshold, which will
+ * likely be true for a single huge page. Thus a single thp flush will
+ * invalidate the entire TLB which is not desitable.
+ * e.g. see arch/arc: flush_pmd_tlb_range
+ */
+#define flush_pmd_tlb_range(vma, addr, end) flush_tlb_range(vma, addr, end)
+#else
+#define flush_pmd_tlb_range(vma, addr, end) BUILD_BUG()
+#endif
+#endif
+
#endif /* !__ASSEMBLY__ */

#ifndef io_remap_pfn_range

@@ -82,15 +82,15 @@ struct buffer_head {
 * and buffer_foo() functions.
 */
#define BUFFER_FNS(bit, name) \
-static inline void set_buffer_##name(struct buffer_head *bh) \
+static __always_inline void set_buffer_##name(struct buffer_head *bh) \
{ \
        set_bit(BH_##bit, &(bh)->b_state); \
} \
-static inline void clear_buffer_##name(struct buffer_head *bh) \
+static __always_inline void clear_buffer_##name(struct buffer_head *bh) \
{ \
        clear_bit(BH_##bit, &(bh)->b_state); \
} \
-static inline int buffer_##name(const struct buffer_head *bh) \
+static __always_inline int buffer_##name(const struct buffer_head *bh) \
{ \
        return test_bit(BH_##bit, &(bh)->b_state); \
}
@@ -99,11 +99,11 @@ static inline int buffer_##name(const struct buffer_head *bh) \
 * test_set_buffer_foo() and test_clear_buffer_foo()
 */
#define TAS_BUFFER_FNS(bit, name) \
-static inline int test_set_buffer_##name(struct buffer_head *bh) \
+static __always_inline int test_set_buffer_##name(struct buffer_head *bh) \
{ \
        return test_and_set_bit(BH_##bit, &(bh)->b_state); \
} \
-static inline int test_clear_buffer_##name(struct buffer_head *bh) \
+static __always_inline int test_clear_buffer_##name(struct buffer_head *bh) \
{ \
        return test_and_clear_bit(BH_##bit, &(bh)->b_state); \
} \

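BUFFER_FNS() stamps out one set_/clear_/test accessor trio per buffer state bit; forcing them __always_inline keeps gcc from emitting out-of-line copies of these tiny helpers. A compilable userspace model of the pattern (plain bit arithmetic stands in for the atomic set_bit()/test_bit() the kernel uses):

    /* Userspace model of the BUFFER_FNS() accessor-generating macro. */
    #include <stdio.h>

    enum { BH_Uptodate = 0 };

    struct buffer_head { unsigned long b_state; };

    #define BUFFER_FNS(bit, name)                                   \
    static inline __attribute__((always_inline))                    \
    void set_buffer_##name(struct buffer_head *bh)                  \
    { bh->b_state |= 1UL << BH_##bit; }                             \
    static inline __attribute__((always_inline))                    \
    int buffer_##name(const struct buffer_head *bh)                 \
    { return !!(bh->b_state & (1UL << BH_##bit)); }

    BUFFER_FNS(Uptodate, uptodate)

    int main(void)
    {
            struct buffer_head bh = { 0 };

            set_buffer_uptodate(&bh);
            printf("%d\n", buffer_uptodate(&bh));   /* prints 1 */
            return 0;
    }
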
@@ -52,6 +52,10 @@ extern void compaction_defer_reset(struct zone *zone, int order,
                bool alloc_success);
extern bool compaction_restarting(struct zone *zone, int order);

+extern int kcompactd_run(int nid);
+extern void kcompactd_stop(int nid);
+extern void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx);
+
#else
static inline unsigned long try_to_compact_pages(gfp_t gfp_mask,
                        unsigned int order, int alloc_flags,
@@ -84,6 +88,18 @@ static inline bool compaction_deferred(struct zone *zone, int order)
        return true;
}

+static inline int kcompactd_run(int nid)
+{
+       return 0;
+}
+static inline void kcompactd_stop(int nid)
+{
+}
+
+static inline void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx)
+{
+}
+
#endif /* CONFIG_COMPACTION */

#if defined(CONFIG_COMPACTION) && defined(CONFIG_SYSFS) && defined(CONFIG_NUMA)

@@ -231,7 +231,7 @@ static inline long freezable_schedule_timeout_killable_unsafe(long timeout)
 * call this with locks held.
 */
static inline int freezable_schedule_hrtimeout_range(ktime_t *expires,
-               unsigned long delta, const enum hrtimer_mode mode)
+               u64 delta, const enum hrtimer_mode mode)
{
        int __retval;
        freezer_do_not_count();

@@ -105,8 +105,6 @@ struct vm_area_struct;
 *
 * __GFP_NOMEMALLOC is used to explicitly forbid access to emergency reserves.
 * This takes precedence over the __GFP_MEMALLOC flag if both are set.
- *
- * __GFP_NOACCOUNT ignores the accounting for kmemcg limit enforcement.
 */
#define __GFP_ATOMIC ((__force gfp_t)___GFP_ATOMIC)
#define __GFP_HIGH ((__force gfp_t)___GFP_HIGH)
@@ -259,7 +257,7 @@ struct vm_area_struct;
#define GFP_HIGHUSER_MOVABLE (GFP_HIGHUSER | __GFP_MOVABLE)
#define GFP_TRANSHUGE ((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \
                        __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN) & \
-                       ~__GFP_KSWAPD_RECLAIM)
+                       ~__GFP_RECLAIM)

/* Convert GFP flags to their corresponding migrate type */
#define GFP_MOVABLE_MASK (__GFP_RECLAIMABLE|__GFP_MOVABLE)
@@ -333,22 +331,29 @@ static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
 * 0xe => BAD (MOVABLE+DMA32+HIGHMEM)
 * 0xf => BAD (MOVABLE+DMA32+HIGHMEM+DMA)
 *
- * ZONES_SHIFT must be <= 2 on 32 bit platforms.
+ * GFP_ZONES_SHIFT must be <= 2 on 32 bit platforms.
 */

-#if 16 * ZONES_SHIFT > BITS_PER_LONG
-#error ZONES_SHIFT too large to create GFP_ZONE_TABLE integer
+#if defined(CONFIG_ZONE_DEVICE) && (MAX_NR_ZONES-1) <= 4
+/* ZONE_DEVICE is not a valid GFP zone specifier */
+#define GFP_ZONES_SHIFT 2
+#else
+#define GFP_ZONES_SHIFT ZONES_SHIFT
+#endif
+
+#if 16 * GFP_ZONES_SHIFT > BITS_PER_LONG
+#error GFP_ZONES_SHIFT too large to create GFP_ZONE_TABLE integer
#endif

#define GFP_ZONE_TABLE ( \
-       (ZONE_NORMAL << 0 * ZONES_SHIFT) \
-       | (OPT_ZONE_DMA << ___GFP_DMA * ZONES_SHIFT) \
-       | (OPT_ZONE_HIGHMEM << ___GFP_HIGHMEM * ZONES_SHIFT) \
-       | (OPT_ZONE_DMA32 << ___GFP_DMA32 * ZONES_SHIFT) \
-       | (ZONE_NORMAL << ___GFP_MOVABLE * ZONES_SHIFT) \
-       | (OPT_ZONE_DMA << (___GFP_MOVABLE | ___GFP_DMA) * ZONES_SHIFT) \
-       | (ZONE_MOVABLE << (___GFP_MOVABLE | ___GFP_HIGHMEM) * ZONES_SHIFT) \
-       | (OPT_ZONE_DMA32 << (___GFP_MOVABLE | ___GFP_DMA32) * ZONES_SHIFT) \
+       (ZONE_NORMAL << 0 * GFP_ZONES_SHIFT) \
+       | (OPT_ZONE_DMA << ___GFP_DMA * GFP_ZONES_SHIFT) \
+       | (OPT_ZONE_HIGHMEM << ___GFP_HIGHMEM * GFP_ZONES_SHIFT) \
+       | (OPT_ZONE_DMA32 << ___GFP_DMA32 * GFP_ZONES_SHIFT) \
+       | (ZONE_NORMAL << ___GFP_MOVABLE * GFP_ZONES_SHIFT) \
+       | (OPT_ZONE_DMA << (___GFP_MOVABLE | ___GFP_DMA) * GFP_ZONES_SHIFT) \
+       | (ZONE_MOVABLE << (___GFP_MOVABLE | ___GFP_HIGHMEM) * GFP_ZONES_SHIFT)\
+       | (OPT_ZONE_DMA32 << (___GFP_MOVABLE | ___GFP_DMA32) * GFP_ZONES_SHIFT)\
)

/*
@@ -373,8 +378,8 @@ static inline enum zone_type gfp_zone(gfp_t flags)
        enum zone_type z;
        int bit = (__force int) (flags & GFP_ZONEMASK);

-       z = (GFP_ZONE_TABLE >> (bit * ZONES_SHIFT)) &
-                                        ((1 << ZONES_SHIFT) - 1);
+       z = (GFP_ZONE_TABLE >> (bit * GFP_ZONES_SHIFT)) &
+                                        ((1 << GFP_ZONES_SHIFT) - 1);
        VM_BUG_ON((GFP_ZONE_BAD >> bit) & 1);
        return z;
}

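GFP_ZONE_TABLE packs one zone number into GFP_ZONES_SHIFT bits per combination of the zone modifier flags, and gfp_zone() recovers it with a shift and a mask; the new GFP_ZONES_SHIFT just keeps ZONE_DEVICE from widening the per-entry field. A standalone model of the packing arithmetic (zone values and flag bits are simplified stand-ins, not the real GFP constants):

    /* Model of the GFP_ZONE_TABLE shift-and-mask lookup. */
    #include <stdio.h>

    #define MODEL_SHIFT   2          /* bits per entry, like GFP_ZONES_SHIFT */
    #define ZONE_NORMAL   0
    #define ZONE_HIGHMEM  1
    #define ZONE_MOVABLE  2

    /* Simplified stand-ins for ___GFP_HIGHMEM and ___GFP_MOVABLE. */
    #define FLAG_HIGHMEM  0x2
    #define FLAG_MOVABLE  0x8

    static const unsigned long model_table =
            ((unsigned long)ZONE_NORMAL  << (0x0 * MODEL_SHIFT)) |
            ((unsigned long)ZONE_HIGHMEM << (FLAG_HIGHMEM * MODEL_SHIFT)) |
            ((unsigned long)ZONE_NORMAL  << (FLAG_MOVABLE * MODEL_SHIFT)) |
            ((unsigned long)ZONE_MOVABLE << ((FLAG_MOVABLE | FLAG_HIGHMEM) * MODEL_SHIFT));

    static int model_gfp_zone(int bits)
    {
            return (model_table >> (bits * MODEL_SHIFT)) & ((1 << MODEL_SHIFT) - 1);
    }

    int main(void)
    {
            printf("%d\n", model_gfp_zone(FLAG_HIGHMEM));                /* 1: highmem */
            printf("%d\n", model_gfp_zone(FLAG_MOVABLE | FLAG_HIGHMEM)); /* 2: movable */
            return 0;
    }
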
@@ -220,7 +220,7 @@ static inline void hrtimer_set_expires_range(struct hrtimer *timer, ktime_t time
        timer->node.expires = ktime_add_safe(time, delta);
}

-static inline void hrtimer_set_expires_range_ns(struct hrtimer *timer, ktime_t time, unsigned long delta)
+static inline void hrtimer_set_expires_range_ns(struct hrtimer *timer, ktime_t time, u64 delta)
{
        timer->_softexpires = time;
        timer->node.expires = ktime_add_safe(time, ns_to_ktime(delta));
@@ -378,7 +378,7 @@ static inline void destroy_hrtimer_on_stack(struct hrtimer *timer) { }

/* Basic timer operations: */
extern void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
-                       unsigned long range_ns, const enum hrtimer_mode mode);
+                       u64 range_ns, const enum hrtimer_mode mode);

/**
 * hrtimer_start - (re)start an hrtimer on the current CPU
@@ -399,7 +399,7 @@ extern int hrtimer_try_to_cancel(struct hrtimer *timer);
static inline void hrtimer_start_expires(struct hrtimer *timer,
                                         enum hrtimer_mode mode)
{
-       unsigned long delta;
+       u64 delta;
        ktime_t soft, hard;
        soft = hrtimer_get_softexpires(timer);
        hard = hrtimer_get_expires(timer);
@@ -477,10 +477,12 @@ extern long hrtimer_nanosleep_restart(struct restart_block *restart_block);
extern void hrtimer_init_sleeper(struct hrtimer_sleeper *sl,
                                 struct task_struct *tsk);

-extern int schedule_hrtimeout_range(ktime_t *expires, unsigned long delta,
+extern int schedule_hrtimeout_range(ktime_t *expires, u64 delta,
                                    const enum hrtimer_mode mode);
extern int schedule_hrtimeout_range_clock(ktime_t *expires,
-               unsigned long delta, const enum hrtimer_mode mode, int clock);
+                                         u64 delta,
+                                         const enum hrtimer_mode mode,
+                                         int clock);
extern int schedule_hrtimeout(ktime_t *expires, const enum hrtimer_mode mode);

/* Soft interrupt function to run the hrtimer queues: */

@@ -41,7 +41,8 @@ int vmf_insert_pfn_pmd(struct vm_area_struct *, unsigned long addr, pmd_t *,
enum transparent_hugepage_flag {
        TRANSPARENT_HUGEPAGE_FLAG,
        TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
-       TRANSPARENT_HUGEPAGE_DEFRAG_FLAG,
+       TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG,
+       TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG,
        TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG,
        TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG,
        TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG,
@@ -71,12 +72,6 @@ extern bool is_vma_temporary_stack(struct vm_area_struct *vma);
                         ((__vma)->vm_flags & VM_HUGEPAGE))) && \
         !((__vma)->vm_flags & VM_NOHUGEPAGE) && \
         !is_vma_temporary_stack(__vma))
-#define transparent_hugepage_defrag(__vma) \
-       ((transparent_hugepage_flags & \
-         (1<<TRANSPARENT_HUGEPAGE_DEFRAG_FLAG)) || \
-        (transparent_hugepage_flags & \
-         (1<<TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG) && \
-         (__vma)->vm_flags & VM_HUGEPAGE))
#define transparent_hugepage_use_zero_page() \
        (transparent_hugepage_flags & \
         (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
@@ -101,16 +96,21 @@ static inline int split_huge_page(struct page *page)
void deferred_split_huge_page(struct page *page);

void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
-               unsigned long address);
+               unsigned long address, bool freeze);

#define split_huge_pmd(__vma, __pmd, __address) \
        do { \
                pmd_t *____pmd = (__pmd); \
                if (pmd_trans_huge(*____pmd) \
                                        || pmd_devmap(*____pmd)) \
-                       __split_huge_pmd(__vma, __pmd, __address); \
+                       __split_huge_pmd(__vma, __pmd, __address, \
+                                               false); \
        } while (0)

+void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
+               bool freeze, struct page *page);
+
#if HPAGE_PMD_ORDER >= MAX_ORDER
#error "hugepages can't be allocated by the buddy allocator"
#endif
@@ -178,6 +178,10 @@ static inline int split_huge_page(struct page *page)
static inline void deferred_split_huge_page(struct page *page) {}
#define split_huge_pmd(__vma, __pmd, __address) \
        do { } while (0)
+
+static inline void split_huge_pmd_address(struct vm_area_struct *vma,
+               unsigned long address, bool freeze, struct page *page) {}
+
static inline int hugepage_madvise(struct vm_area_struct *vma,
                                   unsigned long *vm_flags, int advice)
{

@@ -357,6 +357,7 @@ int __must_check kstrtou16(const char *s, unsigned int base, u16 *res);
int __must_check kstrtos16(const char *s, unsigned int base, s16 *res);
int __must_check kstrtou8(const char *s, unsigned int base, u8 *res);
int __must_check kstrtos8(const char *s, unsigned int base, s8 *res);
+int __must_check kstrtobool(const char *s, bool *res);

int __must_check kstrtoull_from_user(const char __user *s, size_t count, unsigned int base, unsigned long long *res);
int __must_check kstrtoll_from_user(const char __user *s, size_t count, unsigned int base, long long *res);
@@ -368,6 +369,7 @@ int __must_check kstrtou16_from_user(const char __user *s, size_t count, unsigne
int __must_check kstrtos16_from_user(const char __user *s, size_t count, unsigned int base, s16 *res);
int __must_check kstrtou8_from_user(const char __user *s, size_t count, unsigned int base, u8 *res);
int __must_check kstrtos8_from_user(const char __user *s, size_t count, unsigned int base, s8 *res);
+int __must_check kstrtobool_from_user(const char __user *s, size_t count, bool *res);

static inline int __must_check kstrtou64_from_user(const char __user *s, size_t count, unsigned int base, u64 *res)
{

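kstrtobool(), which this series promotes from lib/string.c's strtobool(), accepts "1"/"0", "y"/"n" and, with the companion patch, "on"/"off"; only the leading characters are examined, and kstrtobool_from_user() covers sysfs/procfs writes. A userspace model of the accepted spellings (-22 mirrors -EINVAL; this is a sketch of the semantics, not the kernel source):

    /* Userspace model of kstrtobool() parsing. */
    #include <stdbool.h>
    #include <stdio.h>

    static int model_kstrtobool(const char *s, bool *res)
    {
            if (!s)
                    return -22;
            switch (s[0]) {
            case 'y': case 'Y': case '1':
                    *res = true;
                    return 0;
            case 'n': case 'N': case '0':
                    *res = false;
                    return 0;
            case 'o': case 'O':                     /* "on" / "off" */
                    switch (s[1]) {
                    case 'n': case 'N':
                            *res = true;
                            return 0;
                    case 'f': case 'F':
                            *res = false;
                            return 0;
                    }
            }
            return -22;
    }

    int main(void)
    {
            bool v;

            if (model_kstrtobool("on", &v) == 0)
                    printf("on  -> %d\n", v);       /* 1 */
            if (model_kstrtobool("off", &v) == 0)
                    printf("off -> %d\n", v);       /* 0 */
            return 0;
    }
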
@@ -48,7 +48,7 @@ static inline void INIT_HLIST_BL_NODE(struct hlist_bl_node *h)

#define hlist_bl_entry(ptr, type, member) container_of(ptr,type,member)

-static inline int hlist_bl_unhashed(const struct hlist_bl_node *h)
+static inline bool hlist_bl_unhashed(const struct hlist_bl_node *h)
{
        return !h->pprev;
}
@@ -68,7 +68,7 @@ static inline void hlist_bl_set_first(struct hlist_bl_head *h,
        h->first = (struct hlist_bl_node *)((unsigned long)n | LIST_BL_LOCKMASK);
}

-static inline int hlist_bl_empty(const struct hlist_bl_head *h)
+static inline bool hlist_bl_empty(const struct hlist_bl_head *h)
{
        return !((unsigned long)READ_ONCE(h->first) & ~LIST_BL_LOCKMASK);
}

@@ -52,7 +52,10 @@ enum mem_cgroup_stat_index {
        MEM_CGROUP_STAT_SWAP, /* # of pages, swapped out */
        MEM_CGROUP_STAT_NSTATS,
        /* default hierarchy stats */
-       MEMCG_SOCK = MEM_CGROUP_STAT_NSTATS,
+       MEMCG_KERNEL_STACK = MEM_CGROUP_STAT_NSTATS,
+       MEMCG_SLAB_RECLAIMABLE,
+       MEMCG_SLAB_UNRECLAIMABLE,
+       MEMCG_SOCK,
        MEMCG_NR_STAT,
};

@@ -400,6 +403,9 @@ int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
                int nr_pages);

+unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
+                                          int nid, unsigned int lru_mask);
+
static inline
unsigned long mem_cgroup_get_lru_size(struct lruvec *lruvec, enum lru_list lru)
{
@@ -658,6 +664,13 @@ mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
{
}

+static inline unsigned long
+mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
+                            int nid, unsigned int lru_mask)
+{
+       return 0;
+}
+
static inline void
mem_cgroup_print_oom_info(struct mem_cgroup *memcg, struct task_struct *p)
{
@@ -792,11 +805,6 @@ static inline bool memcg_kmem_enabled(void)
        return static_branch_unlikely(&memcg_kmem_enabled_key);
}

-static inline bool memcg_kmem_online(struct mem_cgroup *memcg)
-{
-       return memcg->kmem_state == KMEM_ONLINE;
-}
-
/*
 * In general, we'll do everything in our power to not incur in any overhead
 * for non-memcg users for the kmem functions. Not even a function call, if we
@@ -883,6 +891,20 @@ static __always_inline void memcg_kmem_put_cache(struct kmem_cache *cachep)
        if (memcg_kmem_enabled())
                __memcg_kmem_put_cache(cachep);
}
+
+/**
+ * memcg_kmem_update_page_stat - update kmem page state statistics
+ * @page: the page
+ * @idx: page state item to account
+ * @val: number of pages (positive or negative)
+ */
+static inline void memcg_kmem_update_page_stat(struct page *page,
+                               enum mem_cgroup_stat_index idx, int val)
+{
+       if (memcg_kmem_enabled() && page->mem_cgroup)
+               this_cpu_add(page->mem_cgroup->stat->count[idx], val);
+}
+
#else
#define for_each_memcg_cache_index(_idx) \
        for (; NULL; )
@@ -892,11 +914,6 @@ static inline bool memcg_kmem_enabled(void)
        return false;
}

-static inline bool memcg_kmem_online(struct mem_cgroup *memcg)
-{
-       return false;
-}
-
static inline int memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
{
        return 0;
@@ -928,6 +945,11 @@ memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp)
static inline void memcg_kmem_put_cache(struct kmem_cache *cachep)
{
}
+
+static inline void memcg_kmem_update_page_stat(struct page *page,
+                               enum mem_cgroup_stat_index idx, int val)
+{
+}
#endif /* CONFIG_MEMCG && !CONFIG_SLOB */

#endif /* _LINUX_MEMCONTROL_H */

@@ -22,6 +22,7 @@
#include <linux/resource.h>
#include <linux/page_ext.h>
#include <linux/err.h>
+#include <linux/page_ref.h>

struct mempolicy;
struct anon_vma;
@@ -82,6 +83,27 @@ extern int mmap_rnd_compat_bits __read_mostly;
#define mm_forbids_zeropage(X) (0)
#endif

+/*
+ * Default maximum number of active map areas, this limits the number of vmas
+ * per mm struct. Users can overwrite this number by sysctl but there is a
+ * problem.
+ *
+ * When a program's coredump is generated as ELF format, a section is created
+ * per a vma. In ELF, the number of sections is represented in unsigned short.
+ * This means the number of sections should be smaller than 65535 at coredump.
+ * Because the kernel adds some informative sections to a image of program at
+ * generating coredump, we need some margin. The number of extra sections is
+ * 1-3 now and depends on arch. We use "5" as safe margin, here.
+ *
+ * ELF extended numbering allows more than 65535 sections, so 16-bit bound is
+ * not a hard limit any more. Although some userspace tools can be surprised by
+ * that.
+ */
+#define MAPCOUNT_ELF_CORE_MARGIN (5)
+#define DEFAULT_MAX_MAP_COUNT (USHRT_MAX - MAPCOUNT_ELF_CORE_MARGIN)
+
+extern int sysctl_max_map_count;
+
extern unsigned long sysctl_user_reserve_kbytes;
extern unsigned long sysctl_admin_reserve_kbytes;

@@ -122,6 +144,7 @@ extern unsigned int kobjsize(const void *objp);

/*
 * vm_flags in vm_area_struct, see mm_types.h.
+ * When changing, update also include/trace/events/mmflags.h
 */
#define VM_NONE 0x00000000

@@ -364,8 +387,8 @@ static inline int pmd_devmap(pmd_t pmd)
 */
static inline int put_page_testzero(struct page *page)
{
-       VM_BUG_ON_PAGE(atomic_read(&page->_count) == 0, page);
-       return atomic_dec_and_test(&page->_count);
+       VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
+       return page_ref_dec_and_test(page);
}

/*
@@ -376,7 +399,7 @@ static inline int put_page_testzero(struct page *page)
 */
static inline int get_page_unless_zero(struct page *page)
{
-       return atomic_inc_not_zero(&page->_count);
+       return page_ref_add_unless(page, 1, 0);
}

extern int page_is_ram(unsigned long pfn);
@@ -464,11 +487,6 @@ static inline int total_mapcount(struct page *page)
}
#endif

-static inline int page_count(struct page *page)
-{
-       return atomic_read(&compound_head(page)->_count);
-}
-
static inline struct page *virt_to_head_page(const void *x)
{
        struct page *page = virt_to_page(x);
@@ -476,15 +494,6 @@ static inline struct page *virt_to_head_page(const void *x)
        return compound_head(page);
}

-/*
- * Setup the page count before being freed into the page allocator for
- * the first time (boot or memory hotplug)
- */
-static inline void init_page_count(struct page *page)
-{
-       atomic_set(&page->_count, 1);
-}
-
void __put_page(struct page *page);

void put_pages_list(struct list_head *pages);
@@ -694,8 +703,8 @@ static inline void get_page(struct page *page)
         * Getting a normal page or the head of a compound page
         * requires to already have an elevated page->_count.
         */
-       VM_BUG_ON_PAGE(atomic_read(&page->_count) <= 0, page);
-       atomic_inc(&page->_count);
+       VM_BUG_ON_PAGE(page_ref_count(page) <= 0, page);
+       page_ref_inc(page);

        if (unlikely(is_zone_device_page(page)))
                get_zone_device_page(page);
@@ -1043,8 +1052,6 @@ static inline void clear_page_pfmemalloc(struct page *page)
 * just gets major/minor fault counters bumped up.
 */

-#define VM_FAULT_MINOR 0 /* For backwards compat. Remove me quickly. */
-
#define VM_FAULT_OOM 0x0001
#define VM_FAULT_SIGBUS 0x0002
#define VM_FAULT_MAJOR 0x0004
@@ -1523,8 +1530,7 @@ static inline void mm_dec_nr_pmds(struct mm_struct *mm)
}
#endif

-int __pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
-               pmd_t *pmd, unsigned long address);
+int __pte_alloc(struct mm_struct *mm, pmd_t *pmd, unsigned long address);
int __pte_alloc_kernel(pmd_t *pmd, unsigned long address);

/*
@@ -1650,15 +1656,15 @@ static inline void pgtable_page_dtor(struct page *page)
        pte_unmap(pte); \
} while (0)

-#define pte_alloc_map(mm, vma, pmd, address) \
-       ((unlikely(pmd_none(*(pmd))) && __pte_alloc(mm, vma, \
-                                                       pmd, address))? \
-        NULL: pte_offset_map(pmd, address))
+#define pte_alloc(mm, pmd, address) \
+       (unlikely(pmd_none(*(pmd))) && __pte_alloc(mm, pmd, address))
+
+#define pte_alloc_map(mm, pmd, address) \
+       (pte_alloc(mm, pmd, address) ? NULL : pte_offset_map(pmd, address))

#define pte_alloc_map_lock(mm, pmd, address, ptlp) \
-       ((unlikely(pmd_none(*(pmd))) && __pte_alloc(mm, NULL, \
-                                                       pmd, address))? \
-        NULL: pte_offset_map_lock(mm, pmd, address, ptlp))
+       (pte_alloc(mm, pmd, address) ? \
+                NULL : pte_offset_map_lock(mm, pmd, address, ptlp))

#define pte_alloc_kernel(pmd, address) \
        ((unlikely(pmd_none(*(pmd))) && __pte_alloc_kernel(pmd, address))? \
@@ -1853,6 +1859,7 @@ extern int __meminit init_per_zone_wmark_min(void);
extern void mem_init(void);
extern void __init mmap_init(void);
extern void show_mem(unsigned int flags);
+extern long si_mem_available(void);
extern void si_meminfo(struct sysinfo * val);
extern void si_meminfo_node(struct sysinfo *val, int nid);

@@ -1867,6 +1874,7 @@ extern void zone_pcp_reset(struct zone *zone);

/* page_alloc.c */
extern int min_free_kbytes;
+extern int watermark_scale_factor;

/* nommu.c */
extern atomic_long_t mmap_pages_allocated;

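The pte_alloc_map()/pte_alloc_map_lock() rework above factors the "allocate a page-table page only if the pmd is still empty, then map it" conditional into a common pte_alloc() building block. A userspace model of that short-circuit shape (the pmd/table types here are invented for illustration):

    /* Model of the pte_alloc() short-circuit: allocate only when still empty. */
    #include <stdio.h>
    #include <stdlib.h>

    struct pmd { void *table; };

    /* Returns nonzero on failure, like __pte_alloc(). */
    static int table_alloc(struct pmd *pmd)
    {
            pmd->table = calloc(512, sizeof(void *));
            return pmd->table == NULL;
    }

    #define pte_alloc(pmd)      ((pmd)->table == NULL && table_alloc(pmd))
    #define pte_alloc_map(pmd)  (pte_alloc(pmd) ? NULL : (pmd)->table)

    int main(void)
    {
            struct pmd pmd = { NULL };

            printf("first  map: %p\n", pte_alloc_map(&pmd));  /* allocates */
            printf("second map: %p\n", pte_alloc_map(&pmd));  /* reuses    */
            free(pmd.table);
            return 0;
    }
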
@@ -668,6 +668,12 @@ typedef struct pglist_data {
                                             mem_hotplug_begin/end() */
        int kswapd_max_order;
        enum zone_type classzone_idx;
+#ifdef CONFIG_COMPACTION
+       int kcompactd_max_order;
+       enum zone_type kcompactd_classzone_idx;
+       wait_queue_head_t kcompactd_wait;
+       struct task_struct *kcompactd;
+#endif
#ifdef CONFIG_NUMA_BALANCING
        /* Lock serializing the migrate rate limiting window */
        spinlock_t numabalancing_migrate_lock;
@@ -835,6 +841,8 @@ static inline int is_highmem(struct zone *zone)
struct ctl_table;
int min_free_kbytes_sysctl_handler(struct ctl_table *, int,
                                        void __user *, size_t *, loff_t *);
+int watermark_scale_factor_sysctl_handler(struct ctl_table *, int,
+                                       void __user *, size_t *, loff_t *);
extern int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES-1];
int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *, int,
                                        void __user *, size_t *, loff_t *);

@@ -17,6 +17,8 @@
#define ZONES_SHIFT 1
#elif MAX_NR_ZONES <= 4
#define ZONES_SHIFT 2
+#elif MAX_NR_ZONES <= 8
+#define ZONES_SHIFT 3
#else
#error ZONES_SHIFT -- too many zones configured adjust calculation
#endif

@@ -144,12 +144,12 @@ static inline struct page *compound_head(struct page *page)
        return page;
}

-static inline int PageTail(struct page *page)
+static __always_inline int PageTail(struct page *page)
{
        return READ_ONCE(page->compound_head) & 1;
}

-static inline int PageCompound(struct page *page)
+static __always_inline int PageCompound(struct page *page)
{
        return test_bit(PG_head, &page->flags) || PageTail(page);
}
@@ -184,31 +184,31 @@ static inline int PageCompound(struct page *page)
 * Macros to create function definitions for page flags
 */
#define TESTPAGEFLAG(uname, lname, policy) \
-static inline int Page##uname(struct page *page) \
+static __always_inline int Page##uname(struct page *page) \
        { return test_bit(PG_##lname, &policy(page, 0)->flags); }

#define SETPAGEFLAG(uname, lname, policy) \
-static inline void SetPage##uname(struct page *page) \
+static __always_inline void SetPage##uname(struct page *page) \
        { set_bit(PG_##lname, &policy(page, 1)->flags); }

#define CLEARPAGEFLAG(uname, lname, policy) \
-static inline void ClearPage##uname(struct page *page) \
+static __always_inline void ClearPage##uname(struct page *page) \
        { clear_bit(PG_##lname, &policy(page, 1)->flags); }

#define __SETPAGEFLAG(uname, lname, policy) \
-static inline void __SetPage##uname(struct page *page) \
+static __always_inline void __SetPage##uname(struct page *page) \
        { __set_bit(PG_##lname, &policy(page, 1)->flags); }

#define __CLEARPAGEFLAG(uname, lname, policy) \
-static inline void __ClearPage##uname(struct page *page) \
+static __always_inline void __ClearPage##uname(struct page *page) \
        { __clear_bit(PG_##lname, &policy(page, 1)->flags); }

#define TESTSETFLAG(uname, lname, policy) \
-static inline int TestSetPage##uname(struct page *page) \
+static __always_inline int TestSetPage##uname(struct page *page) \
        { return test_and_set_bit(PG_##lname, &policy(page, 1)->flags); }

#define TESTCLEARFLAG(uname, lname, policy) \
-static inline int TestClearPage##uname(struct page *page) \
+static __always_inline int TestClearPage##uname(struct page *page) \
        { return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags); }

#define PAGEFLAG(uname, lname, policy) \
@@ -371,7 +371,7 @@ PAGEFLAG(Idle, idle, PF_ANY)
#define PAGE_MAPPING_KSM 2
#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_KSM)

-static inline int PageAnon(struct page *page)
+static __always_inline int PageAnon(struct page *page)
{
        page = compound_head(page);
        return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
@@ -384,7 +384,7 @@ static inline int PageAnon(struct page *page)
 * is found in VM_MERGEABLE vmas. It's a PageAnon page, pointing not to any
 * anon_vma, but to that page's node of the stable tree.
 */
-static inline int PageKsm(struct page *page)
+static __always_inline int PageKsm(struct page *page)
{
        page = compound_head(page);
        return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
@@ -415,14 +415,14 @@ static inline int PageUptodate(struct page *page)
        return ret;
}

-static inline void __SetPageUptodate(struct page *page)
+static __always_inline void __SetPageUptodate(struct page *page)
{
        VM_BUG_ON_PAGE(PageTail(page), page);
        smp_wmb();
        __set_bit(PG_uptodate, &page->flags);
}

-static inline void SetPageUptodate(struct page *page)
+static __always_inline void SetPageUptodate(struct page *page)
{
        VM_BUG_ON_PAGE(PageTail(page), page);
        /*
@@ -456,12 +456,12 @@ static inline void set_page_writeback_keepwrite(struct page *page)

__PAGEFLAG(Head, head, PF_ANY) CLEARPAGEFLAG(Head, head, PF_ANY)

-static inline void set_compound_head(struct page *page, struct page *head)
+static __always_inline void set_compound_head(struct page *page, struct page *head)
{
        WRITE_ONCE(page->compound_head, (unsigned long)head + 1);
}

-static inline void clear_compound_head(struct page *page)
+static __always_inline void clear_compound_head(struct page *page)
{
        WRITE_ONCE(page->compound_head, 0);
}
@@ -593,6 +593,8 @@ static inline void __ClearPageBuddy(struct page *page)
        atomic_set(&page->_mapcount, -1);
}

+extern bool is_free_buddy_page(struct page *page);
+
#define PAGE_BALLOON_MAPCOUNT_VALUE (-256)

static inline int PageBalloon(struct page *page)

@@ -0,0 +1,173 @@
+#ifndef _LINUX_PAGE_REF_H
+#define _LINUX_PAGE_REF_H
+
+#include <linux/atomic.h>
+#include <linux/mm_types.h>
+#include <linux/page-flags.h>
+#include <linux/tracepoint-defs.h>
+
+extern struct tracepoint __tracepoint_page_ref_set;
+extern struct tracepoint __tracepoint_page_ref_mod;
+extern struct tracepoint __tracepoint_page_ref_mod_and_test;
+extern struct tracepoint __tracepoint_page_ref_mod_and_return;
+extern struct tracepoint __tracepoint_page_ref_mod_unless;
+extern struct tracepoint __tracepoint_page_ref_freeze;
+extern struct tracepoint __tracepoint_page_ref_unfreeze;
+
+#ifdef CONFIG_DEBUG_PAGE_REF
+
+/*
+ * Ideally we would want to use the trace_<tracepoint>_enabled() helper
+ * functions. But due to include header file issues, that is not
+ * feasible. Instead we have to open code the static key functions.
+ *
+ * See trace_##name##_enabled(void) in include/linux/tracepoint.h
+ */
+#define page_ref_tracepoint_active(t) static_key_false(&(t).key)
+
+extern void __page_ref_set(struct page *page, int v);
+extern void __page_ref_mod(struct page *page, int v);
+extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
+extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
+extern void __page_ref_mod_unless(struct page *page, int v, int u);
+extern void __page_ref_freeze(struct page *page, int v, int ret);
+extern void __page_ref_unfreeze(struct page *page, int v);
+
+#else
+
+#define page_ref_tracepoint_active(t) false
+
+static inline void __page_ref_set(struct page *page, int v)
+{
+}
+static inline void __page_ref_mod(struct page *page, int v)
+{
+}
+static inline void __page_ref_mod_and_test(struct page *page, int v, int ret)
+{
+}
+static inline void __page_ref_mod_and_return(struct page *page, int v, int ret)
+{
+}
+static inline void __page_ref_mod_unless(struct page *page, int v, int u)
+{
+}
+static inline void __page_ref_freeze(struct page *page, int v, int ret)
+{
+}
+static inline void __page_ref_unfreeze(struct page *page, int v)
+{
+}
+
+#endif
+
+static inline int page_ref_count(struct page *page)
+{
+       return atomic_read(&page->_count);
+}
+
+static inline int page_count(struct page *page)
+{
+       return atomic_read(&compound_head(page)->_count);
+}
+
+static inline void set_page_count(struct page *page, int v)
+{
+       atomic_set(&page->_count, v);
+       if (page_ref_tracepoint_active(__tracepoint_page_ref_set))
+               __page_ref_set(page, v);
+}
+
+/*
+ * Setup the page count before being freed into the page allocator for
+ * the first time (boot or memory hotplug)
+ */
+static inline void init_page_count(struct page *page)
+{
+       set_page_count(page, 1);
+}
+
+static inline void page_ref_add(struct page *page, int nr)
+{
+       atomic_add(nr, &page->_count);
+       if (page_ref_tracepoint_active(__tracepoint_page_ref_mod))
+               __page_ref_mod(page, nr);
+}
+
+static inline void page_ref_sub(struct page *page, int nr)
+{
+       atomic_sub(nr, &page->_count);
+       if (page_ref_tracepoint_active(__tracepoint_page_ref_mod))
+               __page_ref_mod(page, -nr);
+}
+
+static inline void page_ref_inc(struct page *page)
+{
+       atomic_inc(&page->_count);
+       if (page_ref_tracepoint_active(__tracepoint_page_ref_mod))
+               __page_ref_mod(page, 1);
+}
+
+static inline void page_ref_dec(struct page *page)
+{
+       atomic_dec(&page->_count);
+       if (page_ref_tracepoint_active(__tracepoint_page_ref_mod))
+               __page_ref_mod(page, -1);
+}
+
+static inline int page_ref_sub_and_test(struct page *page, int nr)
+{
+       int ret = atomic_sub_and_test(nr, &page->_count);
+
+       if (page_ref_tracepoint_active(__tracepoint_page_ref_mod_and_test))
+               __page_ref_mod_and_test(page, -nr, ret);
+       return ret;
+}
+
+static inline int page_ref_dec_and_test(struct page *page)
+{
+       int ret = atomic_dec_and_test(&page->_count);
+
+       if (page_ref_tracepoint_active(__tracepoint_page_ref_mod_and_test))
+               __page_ref_mod_and_test(page, -1, ret);
+       return ret;
+}
+
+static inline int page_ref_dec_return(struct page *page)
+{
+       int ret = atomic_dec_return(&page->_count);
+
+       if (page_ref_tracepoint_active(__tracepoint_page_ref_mod_and_return))
+               __page_ref_mod_and_return(page, -1, ret);
+       return ret;
+}
+
+static inline int page_ref_add_unless(struct page *page, int nr, int u)
+{
+       int ret = atomic_add_unless(&page->_count, nr, u);
+
+       if (page_ref_tracepoint_active(__tracepoint_page_ref_mod_unless))
+               __page_ref_mod_unless(page, nr, ret);
+       return ret;
+}
+
+static inline int page_ref_freeze(struct page *page, int count)
+{
+       int ret = likely(atomic_cmpxchg(&page->_count, count, 0) == count);
+
+       if (page_ref_tracepoint_active(__tracepoint_page_ref_freeze))
+               __page_ref_freeze(page, count, ret);
+       return ret;
+}
+
+static inline void page_ref_unfreeze(struct page *page, int count)
+{
+       VM_BUG_ON_PAGE(page_count(page) != 0, page);
+       VM_BUG_ON(count == 0);
+
+       atomic_set(&page->_count, count);
+       if (page_ref_tracepoint_active(__tracepoint_page_ref_unfreeze))
+               __page_ref_unfreeze(page, count);
+}
+
+#endif

|
|||
* SMP requires.
|
||||
*/
|
||||
VM_BUG_ON_PAGE(page_count(page) == 0, page);
|
||||
atomic_inc(&page->_count);
|
||||
page_ref_inc(page);
|
||||
|
||||
#else
|
||||
if (unlikely(!get_page_unless_zero(page))) {
|
||||
|
@ -194,10 +194,10 @@ static inline int page_cache_add_speculative(struct page *page, int count)
|
|||
VM_BUG_ON(!in_atomic());
|
||||
# endif
|
||||
VM_BUG_ON_PAGE(page_count(page) == 0, page);
|
||||
atomic_add(count, &page->_count);
|
||||
page_ref_add(page, count);
|
||||
|
||||
#else
|
||||
if (unlikely(!atomic_add_unless(&page->_count, count, 0)))
|
||||
if (unlikely(!page_ref_add_unless(page, count, 0)))
|
||||
return 0;
|
||||
#endif
|
||||
VM_BUG_ON_PAGE(PageCompound(page) && page != compound_head(page), page);
|
||||
|
@ -205,19 +205,6 @@ static inline int page_cache_add_speculative(struct page *page, int count)
|
|||
return 1;
|
||||
}
|
||||
|
||||
static inline int page_freeze_refs(struct page *page, int count)
|
||||
{
|
||||
return likely(atomic_cmpxchg(&page->_count, count, 0) == count);
|
||||
}
|
||||
|
||||
static inline void page_unfreeze_refs(struct page *page, int count)
|
||||
{
|
||||
VM_BUG_ON_PAGE(page_count(page) != 0, page);
|
||||
VM_BUG_ON(count == 0);
|
||||
|
||||
atomic_set(&page->_count, count);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_NUMA
|
||||
extern struct page *__page_cache_alloc(gfp_t gfp);
|
||||
#else
|
||||
|
|
|
@ -96,7 +96,7 @@ extern void poll_initwait(struct poll_wqueues *pwq);
|
|||
extern void poll_freewait(struct poll_wqueues *pwq);
|
||||
extern int poll_schedule_timeout(struct poll_wqueues *pwq, int state,
|
||||
ktime_t *expires, unsigned long slack);
|
||||
extern long select_estimate_accuracy(struct timespec *tv);
|
||||
extern u64 select_estimate_accuracy(struct timespec *tv);
|
||||
|
||||
|
||||
static inline int poll_schedule(struct poll_wqueues *pwq, int state)
|
||||
|
|
|
@ -5,7 +5,7 @@
|
|||
* as needed after allocation when they are freed. Per cpu lists of pages
|
||||
* are kept that only contain node local pages.
|
||||
*
|
||||
* (C) 2007, SGI. Christoph Lameter <clameter@sgi.com>
|
||||
* (C) 2007, SGI. Christoph Lameter <cl@linux.com>
|
||||
*/
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/gfp.h>
|
||||
|
|
|
@ -21,6 +21,7 @@
|
|||
#ifndef _LINUX_RADIX_TREE_H
|
||||
#define _LINUX_RADIX_TREE_H
|
||||
|
||||
#include <linux/bitops.h>
|
||||
#include <linux/preempt.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/bug.h>
|
||||
|
@ -270,8 +271,15 @@ static inline void radix_tree_replace_slot(void **pslot, void *item)
|
|||
}
|
||||
|
||||
int __radix_tree_create(struct radix_tree_root *root, unsigned long index,
|
||||
struct radix_tree_node **nodep, void ***slotp);
|
||||
int radix_tree_insert(struct radix_tree_root *, unsigned long, void *);
|
||||
unsigned order, struct radix_tree_node **nodep,
|
||||
void ***slotp);
|
||||
int __radix_tree_insert(struct radix_tree_root *, unsigned long index,
|
||||
unsigned order, void *);
|
||||
static inline int radix_tree_insert(struct radix_tree_root *root,
|
||||
unsigned long index, void *entry)
|
||||
{
|
||||
return __radix_tree_insert(root, index, 0, entry);
|
||||
}
|
||||
void *__radix_tree_lookup(struct radix_tree_root *root, unsigned long index,
|
||||
struct radix_tree_node **nodep, void ***slotp);
|
||||
void *radix_tree_lookup(struct radix_tree_root *, unsigned long);
|
||||
|
@ -394,6 +402,22 @@ void **radix_tree_iter_retry(struct radix_tree_iter *iter)
|
|||
return NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
* radix_tree_iter_next - resume iterating when the chunk may be invalid
|
||||
* @iter: iterator state
|
||||
*
|
||||
* If the iterator needs to release then reacquire a lock, the chunk may
|
||||
* have been invalidated by an insertion or deletion. Call this function
|
||||
* to continue the iteration from the next index.
|
||||
*/
|
||||
static inline __must_check
|
||||
void **radix_tree_iter_next(struct radix_tree_iter *iter)
|
||||
{
|
||||
iter->next_index = iter->index + 1;
|
||||
iter->tags = 0;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
* radix_tree_chunk_size - get current chunk size
|
||||
*
|
||||
|
|
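radix_tree_iter_next() lets a caller that must drop the lock mid-walk discard the now-stale lookup chunk and resume from the next index. A schematic kernel-style fragment of the intended pattern (the tree, lock and process_entry() helper are assumptions for illustration, not code from this merge):

    /* Sketch: walk a radix tree, periodically dropping the lock and resuming. */
    #include <linux/radix-tree.h>
    #include <linux/sched.h>
    #include <linux/spinlock.h>

    static void process_entry(void *entry)      /* placeholder work */
    {
            (void)entry;
    }

    static void walk_all(struct radix_tree_root *tree, spinlock_t *lock)
    {
            struct radix_tree_iter iter;
            void **slot;

            spin_lock(lock);
            radix_tree_for_each_slot(slot, tree, &iter, 0) {
                    process_entry(radix_tree_deref_slot(slot));

                    if (need_resched()) {
                            spin_unlock(lock);
                            cond_resched();
                            spin_lock(lock);
                            /* chunk may be stale: restart from index + 1 */
                            slot = radix_tree_iter_next(&iter);
                    }
            }
            spin_unlock(lock);
    }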